
Robot adapts voice and gestures just to get attention from you

A robot that dynamically adapts its utterances and body language based on subtle human cues

Date:
October 22, 2015
Source:
Toyohashi University of Technology
Summary:
Researchers have developed Talking-Ally, a novel robot that dynamically generates appropriate utterances and gestures based on a person's attention as indicated by his or her actions. Experiments show that this new communication approach significantly enhances the user's engagement during interaction.

Human dialogue builds on one another's words and body language. We can sense when the other person is distracted, and we change the course of our conversation and our actions to regain their attention.

Most existing robots, however, still use monologue mechanisms, even when engaging in dialogue with a person. For example, they continue speaking in the same way, even if the person is not paying attention.

Researchers at the Interactions and Communication Design (ICD) Lab at Toyohashi University of Technology have devised a novel robotic communication approach that takes the listener's attention into account. The robot follows a person's gaze and determines whether that person is distracted by, for instance, a sports broadcast in the background or something else in their surroundings. The robot bends forward and nods if the person is watching television, or it turns its head and looks around if the person is looking elsewhere. These behaviors are accompanied by appropriate utterances intended to regain the person's attention. Experiments have confirmed that these adaptive interactions considerably increase the other party's attention toward the robot compared with gestures and speech generated without considering the person's gaze.
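As a rough sketch, the attention-dependent behavior selection described above might look like the following. The state names, gesture labels, and helper function are illustrative assumptions for this article, not the authors' actual implementation, which is not published here.

    import random

    # Coarse attention states inferred from the person's gaze (illustrative names;
    # the actual Talking-Ally categories are not given in the article).
    ATTENTIVE = "attentive"        # looking at the robot
    WATCHING_TV = "watching_tv"    # distracted by the background sports broadcast
    LOOKING_AWAY = "looking_away"  # attention is elsewhere in the room

    # Gestures roughly matching the behaviors described in the article.
    GESTURES = {
        ATTENTIVE: ["keep_posture"],
        WATCHING_TV: ["bend_forward", "nod"],
        LOOKING_AWAY: ["turn_head", "look_around"],
    }

    def classify_attention(gaze_target: str) -> str:
        """Map an estimated gaze target to a coarse attention state."""
        if gaze_target == "robot":
            return ATTENTIVE
        if gaze_target == "television":
            return WATCHING_TV
        return LOOKING_AWAY

    def respond(gaze_target: str) -> tuple[str, str]:
        """Choose a gesture and an utterance fragment matching the person's attention."""
        state = classify_attention(gaze_target)
        # The article notes that gestures are currently drawn at random from the
        # set appropriate to the attention level.
        gesture = random.choice(GESTURES[state])
        utterance = ("By the way, listen to this part..." if state != ATTENTIVE
                     else "As I was saying...")
        return gesture, utterance

    # Example: the person glances at the TV, so the robot leans in and tries to re-engage.
    print(respond("television"))

In this sketch only the coarse gaze target drives the choice; the real system also times its utterances and turn-taking, which is not modeled here.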

"We have set up an environment to manipulate the person's attention with an engaging sports program broadcast simultaneously with the human-robot interaction. This allowed us to validate a suite of conversation situations and utterance-generation patterns," said Hitomi Matsushita, first author of the conference paper on the robot.

"Talking-Ally dynamically determines and synchronizes its body language, turn initials, and entrust behaviors within the speech, according to the person's attention coordinates," Professor Michio Okada, head of the ICD Lab, explained. "Our analysis shows that this is significantly more persuasive than generating these behaviors randomly."

The experimental results contribute to the human-robot interaction (HRI) community by confirming that adaptive communication is essential to acquiring and maintaining attention during conversation. Moreover, Talking-Ally demonstrates a specific communication protocol shown to successfully re-engage a distracted person. This is instrumental in achieving persuasive communication and convincing interaction with the robot. Such a platform can ultimately be tailored for use with any HRI application.

Talking-Ally currently chooses its responsive gestures at random from a set that corresponds to the person's level of attention. Future work on the project will include further tailoring the robot's interaction to individuals by choosing specific body language for each situation based on subtle cues from the other party.
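One illustrative way to realize that per-person customization, offered here purely as an assumption rather than the authors' plan, would be to keep a simple per-person tally of which gestures have successfully re-engaged that person and to prefer those over time.

    from collections import defaultdict
    import random

    # Per-person tally of how often each gesture regained attention.
    # This is an illustrative stand-in for the individual adaptation described
    # as future work; the actual mechanism is not specified in the article.
    success_counts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))

    def choose_gesture(person_id: str, candidates: list[str]) -> str:
        """Prefer the gesture that has worked best for this person; break ties randomly."""
        counts = success_counts[person_id]
        best = max(counts.get(g, 0) for g in candidates)
        top = [g for g in candidates if counts.get(g, 0) == best]
        return random.choice(top)

    def record_outcome(person_id: str, gesture: str, regained_attention: bool) -> None:
        """Update the tally after observing whether the person re-engaged."""
        if regained_attention:
            success_counts[person_id][gesture] += 1

    # Example: after a few observations, "bend_forward" becomes the preferred
    # gesture for this particular person when they are watching TV.
    record_outcome("alice", "bend_forward", True)
    record_outcome("alice", "nod", False)
    print(choose_gesture("alice", ["bend_forward", "nod"]))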

Funding:

This research was supported by Grants-in-Aid for Scientific Research KIBAN-B (26280102) and HOUGA (24650053) from the Japan Society for the Promotion of Science (JSPS).


Story Source:

Materials provided by Toyohashi University of Technology. Note: Content may be edited for style and length.


Journal References:

  1. Naoki Ohshima, Yusuke Ohyama, Yuki Odahara, P. Ravindra S. De Silva, Michio Okada. Talking-Ally: The Influence of Robot Utterance Generation Mechanism on Hearer Behaviors. International Journal of Social Robotics, 2014; 7 (1): 51. DOI: 10.1007/s12369-014-0273-8
  2. Hitomi Matsushita, Yohei Kurata, P. Ravindra S. De Silva, Michio Okada. Talking-Ally: What is the Future of Robot's Utterance Generation? Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication, 2015.

Cite This Page:

Toyohashi University of Technology. "Robot adapts voice and gestures just to get attention from you." ScienceDaily. ScienceDaily, 22 October 2015. <www.sciencedaily.com/releases/2015/10/151022103537.htm>.
