From the UA News Center | Smart devices activated with voice commands are a growing segment of consumer technology, but they can be challenging to access for those who use American Sign Language as their primary language.
Saying “Ok, Google,” or “Alexa” to start verbally interacting with virtual assistants that work with Google or Amazon systems, for instance, may not be user friendly for those in deaf and hard of hearing communities.
Researchers across disciplines at The University of Alabama are developing an innovative way for deaf and hard-of-hearing communities to better interact with devices or smart environments. The patent-pending technology leverages radio frequency sensing to enable a human-computer interface built from the start to recognize the language of the deaf community — sign language.
The UA team developed radio detection and ranging, or radar, and artificial intelligence algorithms for a miniature RF sensor as a promising way to advance technologies built for sign language. Initial work demonstrated the concept, and the next phase is partnering with the deaf community to make the technology more robust and user driven.
“Deaf and hard-of-hearing communities are marginalized for their minority language status,” said Dr. Darrin Griffin, who studies interpersonal communication at UA. “They are generally not brought to the table for the access and inclusion they need. We are taking an approach to involve the deaf community when building these interfaces.”
Dr. Sevgi Zubeyde Gurbuz, who researches the design of next-generation AI-enhanced radars, leads the work to develop sign language recognition using radar and machine learning. The radar transceivers used are low-cost, low-power, small sensors designed for biomedical and automotive radar applications, which have a lower output power than cell phones.
“Our focus is not really on translation of sign language, the traditional focus of American Sign Language research,” she said. “Instead, we are focused on how we can pave the way for more technology sensitive to ASL, and thus, more accessible to the deaf community.”
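The article does not detail the team's algorithms, but the general idea of recognizing motion from radar returns with machine learning can be sketched in miniature. The snippet below is a hypothetical illustration, not the UA system: it summarizes a one-dimensional radar return as an averaged short-time spectrum (a crude stand-in for a micro-Doppler signature) and labels new signatures with a nearest-centroid classifier. The two "signs" are simulated as chirps with different Doppler slopes; real data would come from the RF sensor.

```python
import numpy as np

def spectrogram_features(signal, win=64, hop=32):
    """Summarize a 1-D radar return as the mean magnitude spectrum
    of short-time FFT windows (a crude micro-Doppler signature)."""
    frames = [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
    mags = [np.abs(np.fft.rfft(f * np.hanning(win))) for f in frames]
    return np.mean(mags, axis=0)

class NearestCentroid:
    """Minimal classifier: label a new signature by the closest class mean."""
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {c: np.mean([x for x, l in zip(X, y) if l == c], axis=0)
                           for c in self.labels_}
        return self

    def predict(self, x):
        return min(self.labels_, key=lambda c: np.linalg.norm(x - self.centroids_[c]))

# Hypothetical training data: two "signs" simulated as chirps whose
# frequency rises at different rates (different Doppler slopes).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512)

def make_sign(slope):
    return np.sin(2 * np.pi * (20 + slope * t) * t) + 0.1 * rng.standard_normal(len(t))

X = [spectrogram_features(make_sign(s)) for s in [5, 5, 5, 40, 40, 40]]
y = ["sign_a"] * 3 + ["sign_b"] * 3
clf = NearestCentroid().fit(X, y)
print(clf.predict(spectrogram_features(make_sign(40))))  # prints "sign_b"
```

In practice the team's sensors produce far richer time-frequency data, and modern classifiers (e.g., neural networks) replace the toy centroid model, but the pipeline shape is the same: sense, extract a signature, classify.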
With a proof of concept in hand, UA researchers are partnering with the Alabama Institute for Deaf and Blind, a world-class education, rehabilitation and service program serving individuals of all ages who are deaf, blind, deaf-blind and multi-disabled.
“We want to integrate the deaf community at all levels of this project,” Gurbuz said.
Beyond more data, the computer needs higher-quality data to get better at sign recognition. Gurbuz, who is learning ASL, could train the computer, but her signing is not the same as that of people fluent in the language. Plus, as with spoken language, signers add small nuances that become dialects, which are difficult for a few people to recreate in a lab setting for the computer.
Also, a smart device or computer assistant needs trigger words to activate, and a technology designed for the deaf community would be no different. Students at AIDB can help the researchers develop appropriate trigger signs to activate devices.
The UA team is developing RF-sensor-based interactive devices that AIDB students can use not only to provide data, but also to learn more about computer interfaces and programming through an interactive, educational activity built for use on the AIDB STEM Bus.
“We are not trying to develop technology for them,” Griffin said. “We want to work together with the deaf community to develop technology they want to use.”
This work is supported by a grant from the National Science Foundation.
Griffin is an associate professor of communication studies. Gurbuz is an assistant professor of electrical and computer engineering. Other team members include Dr. Chris Crawford, assistant professor of computer science; Dr. Evie Malaia, associate professor of communicative disorders; and Dr. Ali Cafer Gurbuz, assistant professor of electrical and computer engineering at Mississippi State University.