
Wag The Robot? Robot Responds To Human Gestures

Date:
March 12, 2009
Source:
Brown University
Summary:
Researchers have demonstrated how a robot can follow human gestures in a variety of environments -- indoors and outside -- without adjusting for lighting. The achievement is an important step forward in the quest to build fully autonomous robots as partners for human endeavors.

Imagine a day when you turn to your own personal robot, give it a task and then sit down and relax, confident that your robot is doing exactly what you wanted it to do.

So far, that autonomous, do-it-all robot is the stuff of science fiction or cartoons like "The Jetsons." But a Brown University-led robotics team has made an important advance: The group has demonstrated how a robot can follow nonverbal commands from a person in a variety of environments — indoors as well as outside — all without adjusting for lighting.

"We have created a novel system where the robot will follow you at a precise distance, where you don't need to wear special clothing, you don't need to be in a special environment, and you don't need to look backward to track it," said Chad Jenkins, assistant professor of computer science at Brown University and the team's leader.

Jenkins will present the achievement at the Human-Robot Interaction conference March 11-13, 2009, in San Diego. A paper accompanying the video will also be presented at the conference. Matthew Loper, a Brown graduate student, is the lead author of the paper. Contributors include former Brown graduate student Nathan Koenig, now at the University of Southern California; former Brown graduate student Sonia Chernova; and Chris Jones, a researcher with the Massachusetts-based robotics maker iRobot Corp.

A video that shows the robot following gestures and verbal commands can be found in the Brown University release.

In the video, Brown graduate students use a variety of hand-arm signals to instruct the robot, including "follow," "halt," "wait" and "door breach." For much of the time, a student walks with his or her back to the robot, turning corners in narrow hallways and walking briskly in an outdoor parking lot. Throughout, the robot dutifully follows, maintaining an approximate three-foot distance, even backing up when a student turns around and approaches it.

In one sequence, Chernova, now studying at Carnegie Mellon University, instructs the robot with a series of gestures and verbal commands to move through an open doorway, stop, turn around and then cross the threshold again to return where it had started. Chernova then commands the robot to follow her through the hallway.

The team also successfully instructed the robot to turn around (a 180-degree pivot) and to freeze when the student disappeared from view — essentially idling until the instructor reappeared and gave a nonverbal or verbal command.

The Brown team started with a PackBot, a mechanized platform developed by iRobot that has been widely used by the U.S. military for bomb disposal, among other tasks. The researchers outfitted their robot with a commercial depth-imaging camera (picture the head of the robot in the film WALL-E). They also equipped the robot with a laptop running novel computer programs that enabled the machine to recognize human gestures, decipher them and respond to them.

The researchers made two key advances with their robot. The first involved what scientists call visual recognition. Applied to robots, it means helping them to orient themselves with respect to the objects in a room. "Robots can see things," Jenkins explained, "but recognition remains a challenge."

The team overcame this obstacle by creating a computer program that recognizes a human by extracting a silhouette, as if the person were a virtual cutout. This allowed the robot to home in on the human and receive commands without being distracted by other objects in the space.
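The release does not spell out the segmentation details, but the general idea, picking out a person-shaped region of nearby pixels in a depth image, can be sketched in a few lines of Python. Everything here (the NumPy/SciPy routine, the range cutoff, the largest-blob rule) is an illustrative assumption, not the Brown team's actual code:

    import numpy as np
    from scipy import ndimage

    def extract_silhouette(depth_m, max_range_m=3.0):
        """Return a boolean mask of the largest close-range blob in a depth image.

        depth_m is a 2-D array of per-pixel distances in meters, as a
        time-of-flight camera would report. Hypothetical sketch only.
        """
        # Keep only pixels close enough to plausibly be the commander.
        foreground = (depth_m > 0) & (depth_m < max_range_m)

        # Label connected regions and keep the largest one as the silhouette.
        labels, n = ndimage.label(foreground)
        if n == 0:
            return np.zeros_like(foreground)
        sizes = ndimage.sum(foreground, labels, index=range(1, n + 1))
        largest = int(np.argmax(sizes)) + 1
        return labels == largest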

"It's really being able to say, 'That's a person I'm looking at, I'm going to follow that person,'" Jenkins said.

The second advance involved the depth-imaging camera. The team used a CSEM SwissRanger, which uses infrared light to detect objects and to measure the distance from the camera to the target person as well as to any other objects in the area. The distinction is key, Jenkins explained, because it enabled the Brown robot to stay locked onto the human commander, which was essential to maintaining a set distance while following the person.
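The release likewise does not describe the control law that holds the three-foot (roughly 0.9-meter) gap, but a following behavior of that kind can be sketched as a simple proportional controller on the measured range to the tracked silhouette. The function name, gain and speed limit below are made-up illustrations, not the PackBot's actual interface:

    def follow_step(person_range_m, target_m=0.9, gain=0.8, max_speed=0.5):
        """Compute a forward velocity command (m/s) from the measured range.

        Drives forward when the person is farther away than target_m and
        backs up when they step closer, matching the behavior described
        above. All constants are hypothetical.
        """
        error = person_range_m - target_m      # positive: person too far ahead
        speed = gain * error                   # proportional control
        return max(-max_speed, min(max_speed, speed))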

The result is a robot that doesn't require remote control or constant vigilance, Jenkins said, which is a key step toward developing autonomous devices. The team hopes to add more nonverbal and verbal commands and to extend the robot's three-foot working distance from its commander.

"What you really want is a robot that can act like a partner," Jenkins added. "You don't want to puppeteer the robot. You just want to supervise it, where you say, 'Here's your job. Now, go do it.'"

"Advances in enabling intuitive human-robot interaction, such as through speech or gestures, go a long way into making the robot more of a valuable sidekick and less of a machine you have to constantly command," added Chris Jones, research program manager at iRobot.

The research was funded by the U.S. Defense Advanced Research Projects Agency Information Processing Techniques Office (DARPA IPTO) and by the U.S. Office of Naval Research.


Story Source:

Materials provided by Brown University. Note: Content may be edited for style and length.


