Visual system interprets sign languages
- Date: June 2, 2010
- Source: Universitat Autònoma de Barcelona
- Summary: Spanish sign language is used by over 100,000 people with hearing impairments and is made up of hundreds of signs. Researchers selected over 20 of these signs to develop a new visual interpretation system which allows deaf people to carry out consultations in the language they commonly use.
Spanish sign language is used by over 100,000 people with hearing impairments and is made up of hundreds of signs. CVC-UAB researchers Sergio Escalera, Petia Radeva and Jordi Vitrià selected over twenty of these signs to develop a new visual interpretation system which allows deaf people to carry out consultations in the language they commonly use.
Signs can vary slightly from one user to another. Project researchers took this into account during trials carried out with different people, helping the system "become familiarised" with this variability. The signs recognised by the system were chosen to allow deaf people to maintain a basic conversation, including asking for help or directions. "For them it is a non-artificial way of communicating, and at the same time they can engage with people who do not speak sign language, since the system translates the signs into words in real time," Sergio Escalera said.
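The article does not publish the training procedure, but the idea of absorbing per-user variability by pooling recordings from several signers can be sketched in a few lines. The feature extraction, the three-word vocabulary, and the synthetic placeholder descriptors below are all assumptions made only to keep the sketch runnable; they are not the CVC-UAB data or method.

```python
# Minimal sketch: pool labelled recordings from several signers so a
# classifier sees each sign's natural variability. Descriptors are random
# placeholders standing in for real per-clip feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

SIGNS = ["help", "where", "thanks"]              # hypothetical 3-sign subset

def load_descriptors(signer_id):
    """Placeholder for a loader that returns one feature vector per clip."""
    rng = np.random.default_rng(signer_id)
    X = rng.normal(size=(60, 32))                # 60 clips, 32-dim descriptors
    y = rng.integers(0, len(SIGNS), size=60)     # sign label for each clip
    return X, y

# Pool data from five signers so the model learns across users, not one user.
Xs, ys = zip(*(load_descriptors(s) for s in range(5)))
X, y = np.vstack(Xs), np.concatenate(ys)

clf = SVC(kernel="rbf")                          # any standard classifier works here
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```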
The hardware includes a video camera which records image sequences when it detects the presence of a user wanting to make a consultation. A computer vision and machine learning system detects face, hand and arm movements, as well as their displacement across the image, and feeds these into a classification system which matches each movement to the word associated with the sign.
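A hedged sketch of that pipeline is shown below: frames are captured with OpenCV, recording is gated on detecting a face (standing in for "presence of a user"), and a buffered clip is reduced to a motion descriptor that a previously trained classifier maps to a word. The stock Haar face detector, the toy descriptor, and the classifier interface are assumptions for illustration, not the published CVC-UAB method.

```python
# Illustrative capture-and-classify loop, under the assumptions stated above.
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def user_present(frame):
    """Record only while someone is standing in front of the camera."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(face_detector.detectMultiScale(gray, 1.3, 5)) > 0

def motion_descriptor(frames, length=32):
    """Toy stand-in for hand/arm trajectory features: mean absolute
    frame-to-frame difference, resampled to a fixed-length vector."""
    diffs = [cv2.absdiff(a, b).mean() for a, b in zip(frames, frames[1:])]
    return np.resize(np.asarray(diffs, dtype=np.float32), length)

def recognise(classifier, vocabulary, clip_len=30, camera_index=0):
    """Buffer a short clip while a user is present, then map it to a word."""
    cap = cv2.VideoCapture(camera_index)
    clip = []
    while len(clip) < clip_len:
        ok, frame = cap.read()
        if not ok:
            break
        if user_present(frame):
            clip.append(frame)
    cap.release()
    if len(clip) < 2:
        return None
    label = classifier.predict([motion_descriptor(clip)])[0]
    return vocabulary[label]          # the word associated with the recognised sign
```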
One of the aspects worth highlighting is that the system can be adapted to any other sign language, since the methodology used is general: it would only need to be reprogrammed with the signs of that specific language. The number of signs the system can recognise is also scalable, although the researchers admit that adding signs makes them harder to differentiate.
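In practice, retargeting of this kind usually means swapping the vocabulary and its labelled recordings while leaving the recognition code unchanged. The sketch below illustrates that idea under stated assumptions; the class, directory names and loader are hypothetical placeholders, not part of the CVC-UAB system.

```python
# Hedged sketch: the pipeline stays the same, only the sign vocabulary
# and its labelled training clips change per language.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class SignVocabulary:
    language: str
    words: list[str]        # word emitted for each sign class
    clips_dir: Path         # labelled training clips recorded for that language

def retarget(classifier, vocab, load_descriptors):
    """Refit an existing classifier for a different sign language.
    `load_descriptors` turns the labelled clips in `vocab.clips_dir` into
    (features, labels); growing `vocab.words` also grows the label set,
    at the cost of harder class separation."""
    X, y = load_descriptors(vocab.clips_dir)
    classifier.fit(X, y)
    return classifier

lse = SignVocabulary("Spanish Sign Language (LSE)",
                     ["help", "where", "thanks"], Path("data/lse"))
```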
Applications such as the one developed by CVC-UAB researchers require extreme precision in the identification phase and are very difficult to configure, since the settings in which they are used involve changes in light and shadow, different physiognomies and different speeds at which the signs are formed.
Other similar projects have been developed in the past. However, most of them failed or were not reliable enough because of the high variability found in uncontrolled surroundings. For this project to succeed, it was necessary to establish a fixed point at which individuals form the signs and to avoid recording from different focal points.
The system was recently presented as a prototype in the final phase of a European project, and the researchers are already working on new project phases, such as using two cameras with the aim of recognising even more complex signs and complementing the information with facial characteristics. To carry this out, the researchers worked in close collaboration with several members of the Catalan Federation of Deaf People, FECOSA.
Story Source:
Materials provided by Universitat Autònoma de Barcelona. Note: Content may be edited for style and length.