Some 500,000 Americans use American Sign Language (ASL). For deaf users of ASL, communication with those not fluent in the language can be a challenge. Though most deaf-born Americans learn spoken language through speech therapy, some still have a hard time making themselves understood verbally, or simply find sign language a more fluid form of expression.
Now, researchers at Texas A&M University have developed a wearable device that “translates” sign language into English by sensing the user’s movements. The device, still in the prototype stage, can recognize some 40 ASL signs with 96 percent accuracy.
The system uses two sensors. The first is a motion sensor containing an accelerometer and a gyroscope, which measures the speed and angle of the user’s hand and arm. From where the hands and arms are, the system can begin to guess which word is being signed. The second is an electromyographic (EMG) sensor, which measures the electrical activity of muscles and can tell exactly which parts of the hand and fingers are moving, a distinction that turns out to be critical.
“If you look at American Sign Language vocabulary, there are cases where the hand itself is moving and then you have very fine-grained movement of the fingers,” says Roozbeh Jafari, the engineer who led the project. “If you want to detect those, you’re not going to be able to use just the motion of the hand.”
For example, the word “please” in ASL involves making circles over your chest with your open hand. The word “sorry” uses an identical movement, but the hand is closed in a fist.
The sensors are worn on the user’s right wrist and send their data via Bluetooth to a laptop, which runs an algorithm that interprets the sign and displays the English word on screen.
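The pipeline described above, where motion features from the accelerometer and gyroscope are combined with muscle-activation features from the EMG sensor to pick a word, can be sketched roughly as follows. All sensor values, sign templates, and feature names here are invented for illustration; the device’s actual algorithm has not been published.

```python
# Hypothetical sketch of the recognition pipeline: motion features plus
# EMG features feed a classifier that outputs an English word.
# The feature vectors and templates below are made up for illustration.

import math

# Each "template" pairs a sign with a reference feature vector:
# (mean hand speed, mean wrist angle, EMG activation of the finger flexors).
SIGN_TEMPLATES = {
    "please": (0.4, 15.0, 0.2),   # open hand circling the chest: low finger EMG
    "sorry":  (0.4, 15.0, 0.8),   # same motion, closed fist: high finger EMG
    "water":  (0.9, 60.0, 0.5),
}

def classify(features):
    """Return the sign whose template is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(SIGN_TEMPLATES, key=lambda s: dist(features, SIGN_TEMPLATES[s]))

# The motion channels alone cannot separate "please" from "sorry" (the hand
# motion is identical); the EMG channel breaks the tie, as the article notes.
print(classify((0.4, 14.0, 0.15)))  # open hand   -> please
print(classify((0.4, 14.0, 0.75)))  # closed fist -> sorry
```

A real system would classify streams of sensor samples rather than single hand-tuned feature vectors, but the role of the EMG channel in disambiguating signs with identical hand motion is the same.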
Eventually, Jafari hopes to eliminate the need for the laptop by incorporating a small computer directly into the device. That computer could then send the English words to another person’s phone, so they could read what their conversation partner is saying.
The translator will also have to master far more than 40 signs to become a viable product. While ASL has hundreds of signs, the initial 40 were chosen because they are common and useful, Jafari says; a person can use them to form simple, crucial sentences like “I need water.” The device will also need to get faster: right now, users must pause between words to give it time to translate.
“In reality people don’t talk like that,” Jafari says.
But this more advanced system, he says, is a ways down the line, and will involve more work and funding.
Previous ASL translation technologies have used cameras to read the gestures. But these don’t work well in low lighting conditions, Jafari says. Plus, many people don’t like the idea of having a camera watching them all the time.
A number of other technologies have been created in recent years to help deaf and hearing people communicate with each other. An app called Transcense, released last year, translates speech from multiple people into written words, and presents them on the screen in color-coded bubbles. It’s meant to help deaf people at meetings or in social situations, where multiple speakers can be a challenge for even the most proficient lip-readers. Another company, MotionSavvy, has been working on a tablet that interprets ASL motions and reads the words out loud in English. In China, researchers have used Microsoft’s Kinect motion-sensing equipment to translate Chinese Sign Language into spoken and written words.
As someone who has long worked with wearable technologies, such as watches that monitor heart rhythm, Jafari understands the importance of comfort and aesthetics. If a device is uncomfortable and obtrusive, people won’t wear it. The current prototype of the sign language translation device looks like a medical implement, with electrodes and straps and wires. Jafari would like the final version to be small and attractive.
“I want to make the entire system fit in a watch,” he says.