Teach your robot well
- Date:
- March 8, 2012
- Source:
- Georgia Institute of Technology
- Summary:
- A new study identifies the types of questions a robot can ask during a learning interaction that are most likely to characterize a smooth and productive human-robot relationship.
Within a decade, personal robots could become as common in U.S. homes as any other major appliance, and many, if not most, of these machines will be able to perform tasks never explicitly imagined by their manufacturers. This opens up a wider world of personal robotics, in which machines do whatever their owners can teach them to do, without those owners having to be programmers.
Laying some helpful groundwork for this world is a new study by researchers in Georgia Tech's Center for Robotics & Intelligent Machines (RIM), who have identified the types of questions a robot can ask during a learning interaction that are most likely to make the human-robot relationship smooth and productive. These questions are about features of the task, rather than labels of task components or real-time demonstrations of the task itself, and the researchers identified them not by studying robots, but by studying the everyday (read: non-programmer) people who will one day be their masters.
The findings were detailed in the paper, "Designing Robot Learners that Ask Good Questions," presented this week in Boston at the 7th ACM/IEEE Conference on Human-Robot Interaction (HRI).
"People are not so good at teaching robots because they don't understand the robots' learning mechanism," said lead author Maya Cakmak, Ph.D. student in the School of Interactive Computing. "It's like when you try to train a dog, and it's difficult because dogs do not learn like humans do. We wanted to find out the best kinds of questions a robot could ask to make the human-robot relationship as 'human' as it can be."
Cakmak's study attempted to discover the role "active learning" concepts play in human-robot interaction. In a nutshell, active learning refers to giving machine learners more control over the information they receive. Simon, a humanoid robot created in the lab of Andrea Thomaz (assistant professor in Georgia Tech's School of Interactive Computing and co-author), is well acquainted with active learning; Thomaz and Cakmak are programming him to learn new tasks by asking questions.
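To make the idea concrete, here is a minimal sketch of the active-learning principle described above: the learner, not the teacher, decides what to ask next, typically by asking about whatever it is most uncertain of. The class name, the uncertainty scores, and the salt-pouring questions are illustrative assumptions, not the actual system running on Simon.

```python
# A toy active learner: it tracks how uncertain it is about each candidate
# question and always asks the one whose answer would teach it the most.
class ActiveLearner:
    def __init__(self, candidate_questions):
        # Map each candidate question to the learner's current uncertainty
        # about its answer (0.0 = already known, 1.0 = completely unsure).
        self.candidates = dict(candidate_questions)

    def next_question(self):
        # Ask about the thing the learner is most uncertain of.
        return max(self.candidates, key=self.candidates.get)

    def record_answer(self, question, answer):
        # Once the teacher answers, that uncertainty is resolved.
        self.candidates[question] = 0.0


learner = ActiveLearner({
    "Can I pour salt from any height?": 0.9,         # feature query
    "Can you show me how to pour from here?": 0.6,   # demonstration query
    "Can I pour salt like this?": 0.4,               # label query
})
print(learner.next_question())  # asks the highest-uncertainty question first
```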
Cakmak designed two separate experiments: first, she asked human volunteers to assume the role of an inquisitive robot attempting to learn a simple task by asking questions of a human instructor. Having identified the three main question types (feature, label and demonstration), Cakmak tagged each of the participants' questions as one of the three. The overwhelming majority (about 82 percent) of questions were feature queries, showing a clear cognitive preference in human learning for this query type.
Examples of the three query types:
- Label query: "Can I pour salt like this?"
- Demonstration query: "Can you show me how to pour salt from here?"
- Feature query: "Can I pour salt from any height?"
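The sketch below shows one way a robot learner might represent the three query types listed above and phrase them to a teacher. The templates and the salt-pouring task, feature, and state are assumptions drawn from the paper's example questions, not Cakmak's implementation.

```python
from enum import Enum


class QueryType(Enum):
    LABEL = "label"                  # "Is this particular execution correct?"
    DEMONSTRATION = "demonstration"  # "Show me the task from this state."
    FEATURE = "feature"              # "Can this feature of the task vary?"


def phrase_query(qtype, task="pour salt", feature="height", state="here"):
    # Turn an abstract query type into a natural-language question.
    if qtype is QueryType.LABEL:
        return f"Can I {task} like this?"
    if qtype is QueryType.DEMONSTRATION:
        return f"Can you show me how to {task} from {state}?"
    return f"Can I {task} from any {feature}?"


for qtype in QueryType:
    print(qtype.value, "->", phrase_query(qtype))
```

One plausible reading of the study's result is that a feature query resolves an entire dimension of the task at once (any height, or only some), whereas a label or demonstration query only settles a single execution.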
Next, Cakmak recruited humans to teach Simon new tasks by answering the robot's questions and then rating those questions on how "smart" they seemed. Feature queries were once again preferred, with 72 percent of participants rating them the smartest questions.
"These findings are important because they help give us the ability to teach robots the kinds of questions that humans would ask," Cakmak said. "This in turn will help manufacturers produce the kinds of robots that are most likely to integrate quickly into a household or other environment and better serve the needs we'll have for them."
Story Source:
Materials provided by Georgia Institute of Technology. Note: Content may be edited for style and length.