
'Will I look dumb?' Human-like virtual assistants can deter help-seeking

Date:
January 4, 2018
Source:
Association for Psychological Science
Summary:
Virtual assistants have become increasingly sophisticated -- and more human-like -- since the days when Clippy asked if you needed help with your document. These assistants are intended to make programs and apps easier to use, but research suggests that human-like virtual assistants may actually deter some people from seeking help on tasks that are supposed to measure achievement. 

Virtual assistants have become increasingly sophisticated -- and more humanlike -- since the days when Clippy asked if you needed help with your document. These assistants are intended to make programs and apps easier to use, but research published in Psychological Science suggests that humanlike virtual assistants may actually deter some people from seeking help on tasks that are supposed to measure achievement. Psychological Science is a journal of the Association for Psychological Science.

"We demonstrate that anthropomorphic features may not prove beneficial in online learning settings, especially among individuals who believe their abilities are fixed and who thus worry about presenting themselves as incompetent to others," says psychological scientist and study author Daeun Park of Chungbuk National University. "Our results reveal that participants who saw intelligence as fixed were less likely to seek help, even at the cost of lower performance."

Previous research has shown that people are inclined to treat computerized systems as social beings when given just a couple of social cues. This social dynamic can make the systems seem less intimidating and more user-friendly, but Park and coauthors Sara Kim and Ke Zhang wondered whether it would remain beneficial in contexts where performance matters, such as online learning platforms.

"Online learning is an increasingly popular tool across most levels of education and most computer-based learning environments offer various forms of help, such as a tutoring system that provides context-specific help," says Park. "Often, these help systems adopt humanlike features; however, the effects of these kinds of help systems have never been tested."

In one online study, the researchers had 187 participants complete a task that supposedly measured intelligence. In the task, participants saw a group of three words (e.g., room, blood, salts) and were supposed to come up with a fourth word that related to all three (e.g., bath). On the more difficult problems, they automatically received a hint from an onscreen computer icon -- some participants saw a computer "helper" with humanlike features including a face and speech bubble, whereas others saw a helper that looked like a regular computer.

Participants reported greater embarrassment and concerns about self-image when seeking help from the anthropomorphized computer versus the regular computer, but only if they believed that intelligence is a fixed rather than a malleable trait.

The findings indicated that a couple of anthropomorphic cues are sufficient to elicit concerns about seeking help, at least for some individuals. Park and colleagues then tested whether those concerns translate into actual behavior in a second experiment with 171 university students.

In the experiment, the researchers manipulated how the participants thought about intelligence by having them read made-up science articles that highlighted either the stability or the malleability of intelligence. The participants completed the same kind of word problems as in the first study -- this time, they freely chose whether to receive a hint from the computer "helper."

The results showed that students who were led to think about intelligence as fixed were less likely to use the hints when the helper had humanlike features than when it didn't. More importantly, they also answered more questions incorrectly. Those who were led to think about intelligence as a malleable trait showed no differences.

These findings could have implications for how people perform when using online learning platforms, the researchers conclude:

"Educators and program designers should pay special attention to unintended meanings that arise from humanlike features embedded online learning features," says Park. "Furthermore, when purchasing educational software, we recommend parents review not only the contents but also the way the content is delivered."


Story Source:

Materials provided by Association for Psychological Science. Note: Content may be edited for style and length.


Journal Reference:

  1. Sara Kim, Ke Zhang, Daeun Park. Don’t Want to Look Dumb? The Role of Theories of Intelligence and Humanlike Features in Online Help Seeking. Psychological Science, 2017. DOI: 10.1177/0956797617730595

