Reading the signs: Swanson students create ASL-to-text translator
November 20, 2019
When Christopher Pasquinelli, a fifth-year computer engineering major, spent a summer interning at PNC, he worked alongside students who were either deaf or hard of hearing. He started thinking about the challenges faced by his fellow interns and how common they must be for people who are deaf across America.
“The problem was they needed an interpreter with them constantly,” Pasquinelli said. “I thought ‘there has to be an easier way for that.’”
Now, Pasquinelli and Haihui Zhu, a third-year computer engineering major, are prototyping an American Sign Language-to-text translator, after an early version won third place in the regional finals of the InnovateFPGA 2019 Global Design Contest — a worldwide artificial intelligence competition open to anyone 13 or older. Another student, Roman Hamilton, was on the competition team but is no longer involved with the project.
The competition required teams to use field-programmable gate arrays — circuits that can be reprogrammed to behave like different circuits. From there, Pasquinelli, Zhu and Hamilton used machine learning — a branch of AI in which a computer is trained to complete certain tasks — to execute the ASL translation. Basically, their translator takes a video of a word being signed and translates it using an algorithm that keeps learning from new examples.
Though they are still determining the exact function the program will serve, Pasquinelli and Zhu see it being marketed toward employers so that deaf or hard-of-hearing workers can communicate with their hearing coworkers. They also see the translator as an important resource for first responders in emergencies involving people who communicate through sign language.
According to Zhu, the most important part of the engineering process is “defining the right question.” Samuel Dickerson, an assistant professor in the department of electrical and computer engineering, helped them decide the specific function of their program.
“I helped them, primarily, on some of the technical sides,” Dickerson said. “One of the things I was able to help them to do … was to limit the scope of what they wanted to do.”
According to Pasquinelli, the project started with a very broad idea: easing communication between hearing and deaf people. The team then narrowed it to a specific task, translating ASL to text, that a computer could handle with a classification algorithm. In other words, the computer was trained to translate using pattern recognition.
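For readers curious about what classification by pattern recognition looks like, here is a minimal sketch in Python. The article does not name the team's model or tools, so the scikit-learn classifier, the feature size and the synthetic training data below are illustrative assumptions, not the team's actual design:

```python
# Minimal sketch of classification by pattern recognition -- NOT the
# team's actual model. All data below is synthetic and assumed.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical training set: each row is a feature vector extracted
# from a signing video; each label is the word that was signed.
features = rng.normal(size=(200, 46))  # e.g., 23 (x, y) key points, flattened
labels = rng.choice(["hello", "thanks", "yes", "no"], size=200)

model = KNeighborsClassifier(n_neighbors=5)
model.fit(features, labels)

# A new, unseen sign is "translated" by classifying its feature vector.
new_sign = rng.normal(size=(1, 46))
print(model.predict(new_sign))
```

Trained this way, the program does not follow hand-written translation rules; it matches a new sign against patterns it has already seen, which is the pattern recognition Pasquinelli describes.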
Pasquinelli and Zhu created a three-stage system to translate signs into words. When shown a video of a person signing, the program begins by locating the hand. A single image can contain hundreds of thousands of pixels, which, according to Zhu, is too much data for the computer to sort through quickly enough to interpret the sign. Instead, the computer finds the hand and pinpoints key structural locations on it.
“There are 23 key points of one single hand, so the second step is to build a system that can recognize the location of the 23 points,” Zhu said. “Now the data is quite low dimensional.”
With the reduced data size, the computer can combine the location of the 23 points to understand the position of the hand — that is, the computer can “see” the shape being signed. From there, the computer converts the sign into a letter or word, Zhu said.
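A quick back-of-the-envelope calculation shows why this reduction matters. The frame resolution below is an assumption chosen for illustration; the article says only that a single image can have hundreds of thousands of pixels:

```python
# Rough comparison of raw pixels vs. 23 hand key points per frame.
# The 640x480 resolution is an assumed example, not a project detail.
pixel_values = 640 * 480 * 3             # one RGB frame: 921,600 numbers
keypoint_values = 23 * 2                 # 23 (x, y) key points: 46 numbers
print(pixel_values // keypoint_values)   # roughly 20,000x less data per frame
```

Shrinking each frame to 46 numbers is what lets the classifier work quickly enough to be useful.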
Some signs, however, involve movement. To perceive the movement, the computer finds a “center” of the hand by averaging the location of the 23 key points.
“Our system will estimate the center of the hand and use that velocity as the information of the motion,” Zhu said.
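The team's code is not published, but the averaging step Zhu describes can be sketched in a few lines of Python with NumPy; the key-point data and the 30 frames-per-second rate here are assumptions for illustration:

```python
import numpy as np

def hand_center(keypoints):
    """Estimate the hand's center as the mean of its 23 (x, y) key points."""
    return keypoints.mean(axis=0)

def center_velocity(prev_keypoints, next_keypoints, dt):
    """Velocity of the hand center between two frames taken dt seconds apart."""
    return (hand_center(next_keypoints) - hand_center(prev_keypoints)) / dt

# Hypothetical key points from two consecutive frames of a 30 fps video.
rng = np.random.default_rng(1)
frame_a = rng.uniform(0.0, 1.0, size=(23, 2))
frame_b = frame_a + np.array([0.02, -0.01])  # hand drifting right and up
print(center_velocity(frame_a, frame_b, dt=1 / 30))  # [0.6, -0.3]
```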
By combining movement and hand shape, the computer can recognize many different signs in American Sign Language, Zhu said. For example, the sign for “hello” consists of a straight hand moving away from the head, like a salute. The AI connects all aspects of the sign to output the word “hello.”
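One simple way to combine the two kinds of information, again as an assumed sketch rather than the team's actual design, is to concatenate the shape and motion numbers into a single feature vector for the classifier:

```python
import numpy as np

def sign_features(keypoints, velocity):
    """Join hand-shape and hand-motion numbers into one vector (assumed design)."""
    return np.concatenate([keypoints.reshape(-1), velocity])

rng = np.random.default_rng(2)
keypoints = rng.uniform(0.0, 1.0, size=(23, 2))  # hypothetical hand shape
velocity = np.array([0.6, -0.3])                 # e.g., moving away from the head
print(sign_features(keypoints, velocity).shape)  # (48,): 46 shape + 2 motion values
```

A vector like this could then be fed to a classifier like the one sketched earlier, so that “hello” is distinguished both by its straight-hand shape and by its outward movement.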
But there is more to sign language than hand motions. For example, tone is conveyed through facial expressions and eyebrow movements. Eventually, the program will have to take these into consideration to understand the meaning of a signed word or phrase, Zhu said.
Another future project will be combining single signs into phrases. Like English, American Sign Language has its own grammar rules, which makes a direct word-for-word translation difficult to understand.
“We are going to look at the natural language processes,” Zhu said. “That is one of the next steps.”
Furthermore, according to Pasquinelli, normal signing speed is too fast for the computer.
“When people are using this, right now, they have to sign a lot slower,” Pasquinelli said.
According to Zhu and Pasquinelli, one of the most important parts of the project is making sure it will actually be useful to the ASL community. Pasquinelli said he plans to reach out to the ASL community soon.
“There are some researchers working on translation … but it does not solve [the ASL community’s] need,” Zhu said. “The next step is to reach out to the ASL community to work hand in hand, together, to understand what we really need to provide.”
In the future, Pasquinelli and Zhu hope to build a smartphone application, making the product as portable as possible. For now, there is no set timeline for the project, according to Pasquinelli.
“It was a fun experience,” Pasquinelli said. “If there are any engineers out there that are interested in learning artificial intelligence and machine learning, look into [the competition].”
For Pasquinelli and Zhu, this is all about helping people, Pasquinelli said.
“How I look at engineering, what I want to do is to help people and be able to build stuff that eases disabilities and anything in general,” Pasquinelli said.