
Did my computer say it best?

Research finds trust in algorithmic advice can blind us to mistakes

Date:
September 20, 2022
Source:
University of Georgia
Summary:
With autocorrect and auto-generated email responses, algorithms offer plenty of assistance to help people express themselves. But new research shows that people who relied on algorithms for assistance with language-related, creative tasks didn't improve their performance and were more likely to trust low-quality advice.

With autocorrect and auto-generated email responses, algorithms offer plenty of assistance to help people express themselves.

But new research from the University of Georgia shows that people who relied on algorithms for assistance with language-related, creative tasks didn't improve their performance and were more likely to trust low-quality advice.

Aaron Schecter, an assistant professor in management information systems at the Terry College of Business, had his study "Human preferences toward algorithmic advice in a word association task" published this month in Scientific Reports. His co-authors are Nina Lauharatanahirun, a biobehavioral health assistant professor at Pennsylvania State University, and recent Terry College Ph.D. graduate and current Northeastern University assistant professor Eric Bogert.

The paper is the second in the team's investigation into individual trust in advice generated by algorithms. In an April 2021 paper, the team found people were more reliant on algorithmic advice in counting tasks than on advice purportedly given by other participants.

This study aimed to test whether people deferred to a computer's advice when tackling more creative, language-dependent tasks. The team found participants were 92.3% more likely to use advice attributed to an algorithm than advice attributed to other people.

"This task did not require the same type of thinking (as the counting task in the prior study) but in fact we saw the same biases," Schecter said. "They were still going to use the algorithm's answer and feel good about it, even though it's not helping them do any better."

Using an algorithm during word association

To see if people would rely more on computer-generated advice for language-related tasks, Schecter and his co-authors gave 154 online participants portions of the Remote Associates Test, a word association test used for six decades to rate a participant's creativity.

"It's not pure creativity, but word association is a fundamentally different kind of task than making a stock projection or counting objects in a photo because it involves linguistics and the ability to associate different ideas," he said. "We think of this as more subjective, even though there is a right answer to the questions."

During the test, participants were asked to come up with a word tying three sample words together. If, for example, the words were base, room and bowling, the answer would be ball.

Participants chose a word to answer each question, were then offered a hint attributed either to an algorithm or to a person, and were allowed to change their answers. The preference for algorithm-derived advice was strong regardless of the question's difficulty, the way the advice was worded, or the advice's quality.

Participants who took the algorithm's advice were also twice as confident in their answers as those who took the person's advice. Yet despite that confidence, they were 13% less likely than those who used the human advice to choose correct answers.

"I'm not going say the advice was making people worse, but the fact they didn't do any better yet still felt better about their answers illustrates the problem," he said. "Their confidence went up, so they're likely to use algorithmic advice and feel good about it, but they won't necessarily be right.

Should you accept autocorrect when writing an email?

"If I have an autocomplete or autocorrect function on my email that I believe in, I might not be thinking about whether it's making me better. I'm just going to use it because I feel confident about doing it."

Schecter and his colleagues refer to this tendency to accept computer-generated advice without an eye to its quality as automation bias. Understanding how and why human decision-makers defer to machine learning software to solve problems is an important part of understanding what could go wrong in modern workplaces and how to remedy it.

"Often when we're talking about whether we can allow algorithms to make decisions, having a person in the loop is given as the solution to preventing mistakes or bad outcomes," Schecter said. "But that can't be the solution if people are more likely than not to defer to what the algorithm advises."


Story Source:

Materials provided by University of Georgia. Original written by J. Merritt Melancon. Note: Content may be edited for style and length.


Journal Reference:

  1. Eric Bogert, Nina Lauharatanahirun, Aaron Schecter. Human preferences toward algorithmic advice in a word association task. Scientific Reports, 2022; 12(1). DOI: 10.1038/s41598-022-18638-2

