Predictive Powers: A Robot That Reads Your Intention?
- Date: June 10, 2009
- Source: ICT Results
European researchers in robotics, psychology and cognitive sciences have developed a robot that can predict the intentions of its human partner. This ability to anticipate (or question) actions could make human-robot interactions more natural.
The walking, talking, thinking robots of science fiction are far removed from the automated machines of today. Even today's most intelligent robots are little more than slaves – programmed to do our bidding.
Many research groups are trying to build robots that could be less like workers and more like companions. But to play this role, they must be able to interact with people in natural ways and play a proactive part in joint tasks and decision-making. We need robots that can ask questions, discuss and explore possibilities, assess their companions' ideas and anticipate what their partners might do next.
The EU-funded JAST project (http://www.euprojects-jast.net/) brings together a multidisciplinary team to do just this. The project explores ways for a robot to anticipate and predict the actions and intentions of a human partner as the two work together on a task.
Who knows best?
You cannot make human-robot interaction more natural unless you understand what 'natural' actually means. But few studies have investigated the cognitive mechanisms that are the basis of joint activity (i.e. where two people are working together to achieve a common goal).
A major element of the JAST project, therefore, was to conduct studies of human-human collaboration. These experiments and observations could feed into the development of more natural robotic behaviour.
The researchers participating in JAST are at the forefront of their discipline and have made some significant discoveries about the cognitive processes involved in joint action and decision-making. Most notably, they examined the role that observation plays in joint activity.
Scientists have already shown that a set of 'mirror neurons' is activated when people observe an activity. These neurons resonate as if they were mimicking the activity; the brain learns about an activity by effectively copying what is going on. In the JAST project, a similar resonance was discovered during joint tasks: people observe their partners, and the brain copies their actions to try to make sense of them.
In other words, the brain processes the observed actions (and, it turns out, the observed errors) as if it were performing them itself. The brain mirrors what the other person is doing, either to simulate the movement or to select the most appropriate complementary action.
Resonant robotics
The JAST robotics partners have built a system that incorporates this capacity for observation and mirroring (resonance).
“In our experiments the robot is not observing to learn a task,” explains Wolfram Erlhagen of the University of Minho, one of the project consortium's research partners. “The JAST robots already know the task, but they observe behaviour, map it against the task, and quickly learn to anticipate [partner actions] or spot errors when the partner does not follow the correct or expected procedure.”
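To make that idea concrete, here is a minimal, purely illustrative sketch (not code from the JAST project) of how observed actions might be checked against a known task plan so the robot can anticipate the next step or flag a deviation. The plan, the action names and the matching rule are all assumptions invented for this example.

    # Illustrative sketch only (not JAST code): match observed partner actions
    # against a known assembly plan to anticipate the next step or flag an error.
    ASSEMBLY_PLAN = ["attach_wheel", "insert_axle", "mount_body", "fix_roof"]

    def next_expected_step(observed_actions, plan=ASSEMBLY_PLAN):
        """Return the step the partner is expected to perform next,
        or report a deviation if an observed action is out of sequence."""
        for i, action in enumerate(observed_actions):
            if i >= len(plan) or action != plan[i]:
                return f"unexpected action '{action}' - ask the partner to clarify"
        if len(observed_actions) < len(plan):
            return plan[len(observed_actions)]  # anticipate and prepare this part
        return "task complete"

    print(next_expected_step(["attach_wheel"]))                # -> insert_axle
    print(next_expected_step(["attach_wheel", "mount_body"]))  # -> unexpected action ...

Because the plan is known in advance, the matching here is a simple sequence comparison; the point is only that anticipation and error-spotting fall out of the same check.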
The robot was tested in a variety of settings. In one scenario, the robot was the 'teacher' – guiding and collaborating with human partners to build a complicated model toy. In another test, the robot and the human were on equal terms. “Our tests were to see whether the human and robot could coordinate their work,” Erlhagen continues. “Would the robot know what to do next without being told?”
By observing how its human partner grasped a tool or model part, for example, the robot was able to predict how its partner intended to use it. Clues like these helped the robot to anticipate what its partner might need next. “Anticipation permits fluid interaction,” says Erlhagen. “The robot does not have to see the outcome of the action before it is able to select the next item.”
The robots were also programmed to deal with suspected errors and to seek clarification when their partners' intentions were ambiguous. For example, if one piece could be used to build three different structures, the robot had to ask which structure its partner had in mind.
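A similarly hedged sketch of that clarification step: when a grasped piece is compatible with more than one target structure, the robot asks rather than guesses. The piece-to-structure mapping and the wording are invented for illustration and are not taken from the project.

    # Illustrative sketch only (not JAST code): ask a clarification question
    # whenever a grasped piece is consistent with several target structures.
    PIECE_TO_STRUCTURES = {
        "short_bolt": ["windmill"],
        "long_slat": ["windmill", "bridge", "tower"],
    }

    def respond_to_grasp(piece):
        """Anticipate when the intention is unambiguous; ask when it is not."""
        candidates = PIECE_TO_STRUCTURES.get(piece, [])
        if len(candidates) > 1:
            return f"Which structure are you building: {', '.join(candidates)}?"
        if candidates:
            return f"Preparing the next part for the {candidates[0]}."
        return "I don't recognise that piece."

    print(respond_to_grasp("long_slat"))   # ambiguous -> ask a question
    print(respond_to_grasp("short_bolt"))  # unambiguous -> anticipate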
From JAST to Jeeves
But how is the JAST system different to other experimental robots?
“Our robot has a neural architecture that mimics the resonance processing that our human studies showed takes place during joint actions,” says Erlhagen. “The link between the human psychology experiments and the robotics is very close. Joint action has not been addressed by other robotics projects, which may have developed ways to predict motor movements, but not decisions or intentions. JAST deals with prediction at a much higher cognitive level.”
Before robots like this one can be let loose around humans, however, they will have to learn some manners. Humans know how to behave according to the context they are in. This is subtle and would be difficult for a robot to understand.
Nevertheless, by refining this ability to anticipate, it should be possible to produce robots that are proactive in what they do.
Perhaps one day, rather than waiting to be asked, a robot using the JAST approach will take the initiative and ask: “Would you care for a cup of tea?”
The JAST project received funding from the ICT strand of the EU’s Sixth Framework Programme for research.
Story Source:
Materials provided by ICT Results. Note: Content may be edited for style and length.