Reading signs: New method improves AI translation of sign language
Additional data can help differentiate subtle gestures, hand positions, facial expressions
- Date: January 15, 2025
- Source: Osaka Metropolitan University
- Summary: A research team improved the accuracy of AI-based word-level sign language recognition by adding data such as the signer's hand and facial expressions, as well as skeletal information on the position of the hands relative to the body.
Sign languages have developed around the world to fit local communication styles, and each language consists of thousands of signs, making sign languages difficult to learn and understand. Using artificial intelligence to automatically translate signs into words, known as word-level sign language recognition, has now gained a boost in accuracy through the work of an Osaka Metropolitan University-led research group.
Previous research methods focused on capturing information about the signer's general movements. Accuracy problems stemmed from signs whose meanings differ only in subtle details of hand shape and in the position of the hands relative to the body.
Graduate School of Informatics Associate Professor Katsufumi Inoue and Associate Professor Masakazu Iwamura worked with colleagues, including researchers at the Indian Institute of Technology Roorkee, to improve AI recognition accuracy. They added data such as hand and facial expressions, as well as skeletal information on the position of the hands relative to the body, to the information on the general movements of the signer's upper body.
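The published model is a multi-stream neural network. As a rough illustration of the idea only, and not the authors' actual architecture, the PyTorch sketch below fuses three hypothetical feature streams: upper-body motion, cropped hand/face regions, and skeletal keypoints. All dimensions, layer choices, and the late-fusion strategy are illustrative assumptions.

```python
# Minimal sketch of a multi-stream classifier for word-level sign language
# recognition. Each stream gets its own encoder; the embeddings are
# concatenated (late fusion) before the final word classifier.
# NOTE: dimensions and encoders are placeholder assumptions, not the
# architecture from the IEEE Access paper.
import torch
import torch.nn as nn

class MultiStreamSignClassifier(nn.Module):
    def __init__(self, body_dim=512, local_dim=512, skel_dim=128, num_words=2000):
        super().__init__()
        # In practice each encoder would be a CNN or temporal model over video.
        self.body_enc = nn.Sequential(nn.Linear(body_dim, 256), nn.ReLU())   # upper-body motion
        self.local_enc = nn.Sequential(nn.Linear(local_dim, 256), nn.ReLU())  # hand + face crops
        self.skel_enc = nn.Sequential(nn.Linear(skel_dim, 256), nn.ReLU())    # skeletal keypoints
        # Late fusion: concatenate the three stream embeddings, then classify.
        self.classifier = nn.Linear(256 * 3, num_words)

    def forward(self, body_feat, local_feat, skel_feat):
        fused = torch.cat([
            self.body_enc(body_feat),
            self.local_enc(local_feat),
            self.skel_enc(skel_feat),
        ], dim=-1)
        return self.classifier(fused)

# Example: one video clip summarized as per-stream feature vectors.
model = MultiStreamSignClassifier()
logits = model(torch.randn(1, 512), torch.randn(1, 512), torch.randn(1, 128))
print(logits.shape)  # torch.Size([1, 2000]) -- one score per candidate word
```

The intuition matches the group's reported gain: dedicated streams for the hands, face, and skeleton let the network attend to the subtle local cues that a single whole-body stream tends to blur together.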
"We were able to improve the accuracy of word-level sign language recognition by 10-15% compared to conventional methods," Professor Inoue declared. "In addition, we expect that the method we have proposed can be applied to any sign language, hopefully leading to improved communication with speaking- and hearing-impaired people in various countries."
Story Source:
Materials provided by Osaka Metropolitan University. Note: Content may be edited for style and length.
Journal Reference:
- Mizuki Maruyama, Shrey Singh, Katsufumi Inoue, Partha Pratim Roy, Masakazu Iwamura, Michifumi Yoshioka. Word-Level Sign Language Recognition With Multi-Stream Neural Networks Focusing on Local Regions and Skeletal Information. IEEE Access, 2024; 12: 167333 DOI: 10.1109/ACCESS.2024.3494878