
Human cues used to improve computer user-friendliness

Date:
March 6, 2011
Source:
Binghamton University
Summary:
Researchers want computers to understand inputs from humans that go beyond the traditional keyboard and mouse. They have now developed ways to provide information to a computer based on where a user is looking as well as through gestures or speech.

Lijun Yin wants computers to understand inputs from humans that go beyond the traditional keyboard and mouse.

"Our research in computer graphics and computer vision tries to make using computers easier," says the Binghamton University computer scientist. "Can we find a more comfortable, intuitive and intelligent way to use the computer? It should feel like you're talking to a friend. This could also help disabled people use computers the way everyone else does."

Yin's team has developed ways to provide information to the computer based on where a user is looking, as well as through gestures or speech. One of the basic challenges in this area is "computer vision": How can a simple webcam work more like the human eye? Can the data a camera captures be used to recognize a real-world object, to "see" the user, and to "understand" what the user wants to do?

To some extent, that's already possible. Witness one of Yin's graduate students giving a PowerPoint presentation and using only his eyes to highlight content on various slides. When Yin demonstrated this technology for Air Force experts last year, the only hardware he brought was a webcam attached to a laptop computer.
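The article does not describe how the gaze-driven demo was built. As a rough, illustrative sketch only -- not Yin's system -- a webcam feed can be scanned for a face and eyes using off-the-shelf tools such as OpenCV's Haar cascades (Python with the opencv-python package assumed); a gaze estimator would then map the detected eye regions to a position on the slide:

# Illustrative sketch, not Yin's system. Assumes the opencv-python package
# and its bundled Haar cascade files are installed.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # the webcam attached to the laptop
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            # Eye positions like these are the raw input a gaze estimator
            # would map to a location on the slide.
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("gaze input", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()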

Yin says the next step would be enabling the computer to recognize a user's emotional state. He works with a well-established set of six basic emotions -- anger, disgust, fear, joy, sadness, and surprise -- and is experimenting with different ways to allow the computer to distinguish among them. Is there enough data in the way the lines around the eyes change? Could focusing on the user's mouth provide sufficient clues? What happens if the user's face is only partially visible, perhaps turned to one side?

"Computers only understand zeroes and ones," Yin says. "Everything is about patterns. We want to find out how to recognize each emotion using only the most important features."

He's partnering with Binghamton University psychologist Peter Gerhardstein to explore ways this work could benefit children with autism. Many people with autism have difficulty interpreting others' emotions; therapists sometimes use photographs of people to teach children to recognize when someone is happy, sad, and so forth. Yin could produce not just photographs but three-dimensional avatars able to display a range of emotions. Given the right pictures, Yin could even produce avatars of people from a child's own family for use in this type of therapy.

Yin and Gerhardstein's previous collaboration led to the creation of a 3D facial expression database, which includes 2,500 facial expression models from 100 subjects. The database is available at no cost to the nonprofit research community and has become a worldwide test bed for related projects in fields such as biomedicine, law enforcement and computer science.

Once Yin became interested in human-computer interaction, he naturally grew more excited about the possibilities for artificial intelligence.

"We want not only to create a virtual-person model, we want to understand a real person's emotions and feelings," Yin says. "We want the computer to be able to understand how you feel, too. That's hard, even harder than my other work."

Imagine if a computer could tell when people are in pain. Adults can describe their pain to a doctor, but others -- young children, for instance -- may be unable to express themselves or to speak at all. Yin wants to develop an algorithm that would enable a computer to determine, from nothing more than a photograph, when someone is in pain.

Yin describes that health-care application and, almost in the next breath, points out that the same system that could identify pain might also be used to figure out when someone is lying. Perhaps a computer could offer insights like the ones provided by Tim Roth's character, Dr. Cal Lightman, on the television show Lie to Me. The fictional character is a psychologist with an expertise in tracking deception who often partners with law-enforcement agencies.

"This technology," Yin says, "could help us to train the computer to do facial-recognition analysis in place of experts."


Story Source:

Materials provided by Binghamton University. Note: Content may be edited for style and length.


