
Robot doesn't have to behave and look like a human

Date:
September 22, 2016
Source:
University of Twente
Summary:
The R2-D2 robot from Star Wars doesn’t communicate in human language but is, nevertheless, capable of showing its intentions. For human-robot interaction, the robot does not have to be a true ‘humanoid,’ provided that its signals are designed in the right way, say researchers.

The R2-D2 robot from Star Wars doesn't communicate in human language but is, nevertheless, capable of showing its intentions. For human-robot interaction, the robot does not have to be a true 'humanoid,' provided that its signals are designed in the right way, says researcher Daphne Karreman (University of Twente, The Netherlands).

The common assumption is that humans can only communicate with a robot if it has many human characteristics. But mimicking natural movements and expressions is complicated, and some of our nonverbal communication, such as wide arm gestures, is not really suitable for robots. Humans turn out to be capable of responding in a social way even to machines that look like machines: we have a natural tendency to translate machine movements and signals into the human world. Two simple lenses on a machine can be enough to make people wave at it.

Beyond R2-D2

Knowing that, designing intuitive signals is still challenging. In her research, Daphne Karreman focused on a robot serving as a guide in a museum or zoo. If the robot doesn't have arms, can it still point at something the visitors should look at? With speech, written language, a screen, images projected on a wall and specific movements, the robot has quite a number of 'modalities' that humans don't have. Add playing with light and colour, and even a 'low-anthropomorphic' robot can be equipped with strong communication skills. That goes well beyond R2-D2, which communicates with beeps that first have to be translated. Karreman's PhD thesis is therefore entitled 'Beyond R2-D2'.
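
None of these combinations are spelled out in the article, but purely as an illustration of the idea, a designer might map an armless guide robot's intentions onto robot-specific signal combinations along the following lines. This Python sketch is hypothetical: the intention names and signal choices are invented, not taken from the FROG platform.

# Hypothetical mapping of a guide robot's intentions to robot-specific signals.
# The intention names and signal combinations are invented for illustration.
SIGNALS = {
    "invite_to_tour": ["spoken greeting", "welcome text on screen", "green light pulse"],
    "point_at_exhibit": ["turn small head pointer", "project spotlight on wall"],
    "ask_to_follow": ["spoken prompt", "drive forward slowly", "blink rear lights"],
}

def signals_for(intention):
    # Fall back to plain screen text when no dedicated signal is defined.
    return SIGNALS.get(intention, ["text on screen"])

print(signals_for("point_at_exhibit"))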

In the wild

Karreman analysed a huge amount of video data to see how humans respond to a robot. Until now, this type of research was mainly done in controlled lab settings, without other people present or after the test subjects had been told what was going to happen. In this case, the robot was introduced 'in the wild' and in an unstructured way: people could simply come across it in the Real Alcázar palace in Seville, for example, and decide for themselves whether they wanted to be guided by it. What makes them keep their distance? Do people recognize what the robot is capable of?

Video tool

To analyse these video data, Karreman developed a tool called the Data Reduction Event Analysis Method (DREAM). The robot, called the Fun Robotic Outdoor Guide (FROG), has a screen, communicates using spoken language and light signals, and carries a small pointer on its 'head'. All by itself, FROG recognizes whether people are interested in interaction and guidance. Thanks to the DREAM tool, it is possible for the first time to analyse and classify human-robot interaction in a fast and reliable way. Unlike other methods, DREAM does not interpret all signals immediately; instead, it compares the annotations of several 'coders' to arrive at a reliable and reproducible result.
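
The article does not describe DREAM's internals, but the principle of comparing several coders is commonly captured with a chance-corrected agreement measure such as Cohen's kappa. The short Python sketch below is a hypothetical illustration of that principle, not Karreman's tool; the event labels are invented.

from collections import Counter

def cohens_kappa(coder_a, coder_b):
    # Chance-corrected agreement between two coders' labels for the same events.
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Two hypothetical coders labelling the same five video segments
coder_1 = ["approach", "ignore", "follow", "approach", "ignore"]
coder_2 = ["approach", "ignore", "follow", "ignore", "ignore"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.69: substantial agreement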

How many people show interest? Do they stay with the robot for the entire tour, and do they respond as expected? This could be evaluated with questionnaires, but that would give the robot a special status: people primarily come to visit the expo or zoo, not to meet a robot. With the DREAM tool, spontaneous interaction becomes more visible, and robot behaviour can therefore be optimized.

Daphne Karreman did her PhD work in UT's Human Media Interaction group of Prof Vanessa Evers. Her research was part of the European FP7 program FROG (www.frogrobot.eu). Karreman's PhD thesis is entitled 'Beyond R2-D2. The Design of nonverbal interaction behavior optimized for robot-specific morphologies.'

Story Source:

Materials provided by University of Twente. Note: Content may be edited for style and length.


Cite This Page:

University of Twente. "Robot doesn't have to behave and look like a human." ScienceDaily. ScienceDaily, 22 September 2016. <www.sciencedaily.com/releases/2016/09/160922085352.htm>.
University of Twente. (2016, September 22). Robot doesn't have to behave and look like a human. ScienceDaily. Retrieved November 15, 2024 from www.sciencedaily.com/releases/2016/09/160922085352.htm
University of Twente. "Robot doesn't have to behave and look like a human." ScienceDaily. www.sciencedaily.com/releases/2016/09/160922085352.htm (accessed November 15, 2024).
