
Humans less likely to return to an automated advisor once given bad advice

Participants gave a second chance to human advisors who offered bad advice

Date:
May 25, 2016
Source:
International Communication Association
Summary:
The chat bot popping up on websites to ask if you need help has become ubiquitous. We dismiss it, we engage with it, but do we trust the algorithm aiding our experience, giving us answers and advice? A recent study found that participants were less likely to return to an automated advisor than to a human advisor after receiving bad advice under the same circumstances.
FULL STORY

The chat bot popping up on websites to ask if you need help has become ubiquitous. We dismiss it, we engage with it, but do we trust the algorithm aiding our experience, giving us answers and advice? A recent study by researchers at the University of Wisconsin found that participants were less likely to return to an automated advisor than to a human advisor after receiving bad advice under the same circumstances.

Andrew Prahl and Lyn M. Van Swol (University of Wisconsin) will present their findings in June at the 66th Annual Conference of the International Communication Association in Fukuoka, Japan. Their experiment asked participants to forecast scheduling for hospital operating rooms, a task unfamiliar to them. To complete the task, participants received help from either an "advanced computer system" or "a person experienced in operating room management." On the 7th of the 14 scheduling trials, participants received bad advice from their assigned advisor, whether computer or human.
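
The design is easy to picture in code. The following is a minimal sketch in Python; the constants, condition names, and logging structure are hypothetical illustrations, not the study's actual materials.

N_TRIALS = 14          # scheduling-forecast trials per participant (per the article)
BAD_ADVICE_TRIAL = 7   # the single trial on which the advisor errs
CONDITIONS = ("computer", "human")  # advisor framing, varied between groups

def run_trials(advisor: str) -> list[dict]:
    """Simulate one participant's trial sequence, logging advice quality."""
    return [
        {
            "trial": t,
            "advisor": advisor,
            "advice_quality": "bad" if t == BAD_ADVICE_TRIAL else "good",
        }
        for t in range(1, N_TRIALS + 1)
    ]

for condition in CONDITIONS:
    log = run_trials(condition)
    print(condition, [entry["advice_quality"] for entry in log])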

The researchers found that after participants received bad advice, they rapidly abandoned the computer advisor and did not use its advice on subsequent trials. This "punishment" for giving bad advice was not nearly as strong for the human advisor. It is almost as if people "forgave" the human advisor for making a mistake but did not extend the same forgiveness to the computer.
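
One way to picture that asymmetry is a toy advice-weight model. Everything numerical below is an assumption chosen to mimic the reported pattern; the weight-of-advice measure, penalty sizes, and recovery rate are illustrative, not the study's data.

# Weight of advice: 0 = ignore the advisor, 1 = adopt the advice fully.
# All values below are hypothetical, chosen only to mirror the described result.
BAD_ADVICE_TRIAL = 7
BASELINE = 0.6  # assumed pre-error reliance, identical for both advisors

def advice_weight(trial: int, advisor: str) -> float:
    if trial <= BAD_ADVICE_TRIAL:
        return BASELINE  # no condition differences observed before the error
    if advisor == "computer":
        return 0.1  # sharp, lasting drop: the advisor is effectively abandoned
    # Human advisor: smaller drop that partially recovers ("forgiveness").
    return min(BASELINE, 0.45 + 0.02 * (trial - BAD_ADVICE_TRIAL))

for t in range(1, 15):
    print(f"trial {t:2d}  computer={advice_weight(t, 'computer'):.2f}"
          f"  human={advice_weight(t, 'human'):.2f}")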

Past research has examined trust in automation through simple tasks like warning systems. Prahl and Van Swol wanted to study more sophisticated automation that actually makes predictions about future outcomes and to compare it with human advice. Unlike previous research, they did not find that either the human or the algorithmic advisor was generally trusted more; the only significant differences arose after the advisor provided bad advice.

"This has very important implications because time and time again we are seeing humans being replaced by computers in the workplace," said Prahl. "This research suggests that any potential efficiency gains by moving towards automation might be offset because all the automation has to do is err once, and people will rapidly lose trust and stop using it -- this is one of the few studies out there that really show the potential downsides of automation in the workplace."


Story Source:

Materials provided by International Communication Association. Note: Content may be edited for style and length.

