Sign Language Translating Devices Are Cool. But Are They Useful?

Michigan State University researchers are developing a small motion-capture tool that translates ASL into English

DeepASL's camera (Michigan State University)

Over the past several decades, researchers have regularly developed devices meant to translate American Sign Language (ASL) into English, in hopes of easing communication between people who are deaf or hard of hearing and the hearing world. Many of these technologies rely on bulky, awkward gloves to capture the motion of signing.

Now, a group of researchers at Michigan State University (MSU) has developed a glove-less device the size of a tube of Chapstick they hope will improve ASL-English translation.

The technology, called DeepASL, uses a motion-capture camera to record hand movements, then feeds the data through a deep learning algorithm that matches them to ASL signs. Unlike many previous devices, DeepASL can translate whole sentences rather than single words, and it doesn’t require users to pause between signs.
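
To make that pipeline concrete, here is a minimal sketch (in Python with PyTorch, not MSU's actual code) of how sentence-level sign recognition can work: per-frame hand-motion features feed a bidirectional recurrent network, and a connectionist temporal classification (CTC) loss aligns the frames to a sequence of signs, which is what removes the need for pauses between signs. Every name, dimension and vocabulary size below is a hypothetical stand-in.

    # Illustrative sketch only, not DeepASL's implementation.
    # Assumes per-frame hand-skeleton features from a motion-capture
    # sensor; all sizes below are hypothetical.
    import torch
    import torch.nn as nn

    NUM_SIGNS = 100   # hypothetical sign vocabulary; index 0 is the CTC "blank"
    FEATURE_DIM = 42  # hypothetical per-frame feature size (e.g., joint coordinates)

    class SignSentenceModel(nn.Module):
        def __init__(self):
            super().__init__()
            # A bidirectional LSTM reads the motion stream forward and backward.
            self.rnn = nn.LSTM(FEATURE_DIM, 128, num_layers=2,
                               bidirectional=True, batch_first=True)
            self.classifier = nn.Linear(2 * 128, NUM_SIGNS)

        def forward(self, frames):          # frames: (batch, time, FEATURE_DIM)
            hidden, _ = self.rnn(frames)
            return self.classifier(hidden)  # per-frame scores over the vocabulary

    model = SignSentenceModel()
    ctc = nn.CTCLoss(blank=0)  # CTC aligns frames to signs; no pauses required

    frames = torch.randn(4, 200, FEATURE_DIM)      # 4 dummy clips, 200 frames each
    log_probs = model(frames).log_softmax(2).permute(1, 0, 2)  # CTC wants (T, N, C)
    targets = torch.randint(1, NUM_SIGNS, (4, 5))  # 5 signs per dummy sentence
    loss = ctc(log_probs, targets,
               torch.full((4,), 200, dtype=torch.long),  # frames per clip
               torch.full((4,), 5, dtype=torch.long))    # signs per sentence
    loss.backward()  # training would repeat this over real labeled recordings

At inference time, a greedy or beam-search decoder collapses the per-frame outputs into a sign sequence; capturing the facial expressions discussed later in this piece would require input features beyond hand motion.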

“This is a truly non-intrusive technology,” says Mi Zhang, a professor of electrical and computer engineering who led the research.

Zhang and his team hope DeepASL can help people who are deaf and hard of hearing by serving as a real-time translator. It could be especially useful in emergency situations, Zhang says, when waiting for a translator could cost precious minutes. The device, which could be integrated with a phone, tablet or computer, can also help teach ASL, Zhang says. Since more than 90 percent of deaf children are born to parents who are hearing, there is a large community of adults who need to learn ASL quickly. DeepASL could serve as a digital tutor, giving feedback on whether learners are signing correctly.

Zhang has applied for a patent and hopes to have a device on the market within a year. Because it’s based on affordable technology—the Leap Motion motion capture system retails for $78—it could be more widely accessible than previous efforts.

Researchers Biyi Fang and Mi Zhang demonstrate DeepASL. (Michigan State University)

But Christian Vogler, a professor of communication studies at Gallaudet University, a university for people who are deaf or hard of hearing, is skeptical of devices designed to translate ASL, and his skepticism is shared by many in the Deaf community.

Devices generally do not truly ‘translate’ ASL, Vogler says; they merely recognize individual hand signs and turn each one into an English word. That means key grammatical information is lost: whether a phrase is a question, a negation, a relative clause and so forth. And while DeepASL translates full sentences, some features of ASL grammar go beyond hand signs: facial expressions often act as modifiers, raised eyebrows can turn a phrase into a question, and body positioning can indicate when the ASL user is quoting someone else.

So far, “none of the systems have been even remotely useful to people who sign,” Vogler says, adding that researchers often seem to have “very little contact with the [Deaf and hard of hearing] community and very little idea of their real needs.”

Zhang's team did not test the device on people who are deaf or hard of hearing, but on students in a sign language translation program. Zhang emphasizes that DeepASL is designed to enable only basic communication at this point, and that it is just a starting point. He says his team hopes to extend DeepASL's capabilities in the future to capture facial expressions as well.

"That will be the next significant milestone for us to reach," he says.

Vogler says it’s a positive that the MSU technology uses deep learning methods, which have had success with spoken language. But, despite not requiring a glove, the device likely has the same pitfalls as any previous system, since it doesn’t capture face and body movements.

Vogler thinks researchers should move away from the idea that sign language recognition devices can really meet in-person communication needs.

“We have many options for facilitating in-person communication, and until we have something that actually respects the linguistic properties of signed languages and the actual communication behaviors of signers, these efforts will go nowhere near supplanting or replacing them,” he says. “Instead, people need to work with actual community members, and with people who understand the complexities of signed languages.”

Vogler says it would be useful for sign language recognition technology like MSU’s to work with voice interfaces like Alexa. The growth of these interfaces is an accessibility challenge for people who are deaf and hard of hearing, he says, much as the internet—a largely visual medium—has presented a major challenge for people who are blind over the years.

“We presently do not have an effective and efficient way to interact with these voice interfaces if we are unable to, or do not want to, use our voice,” he says. “Sign language recognition is a perfect match for this situation, and one that actually could end up being useful and getting used.”
