
When bots do the negotiating, humans more likely to engage in deceptive techniques

Date:
September 23, 2020
Source:
University of Southern California
Summary:
Researchers found that whether humans would embrace a range of deceptive and sneaky negotiating techniques depended both on their prior negotiating experience and on whether virtual agents were employed to negotiate on their behalf. The findings stand in contrast to prior studies and show that when humans use intermediaries in the form of virtual agents, they feel more comfortable employing more deceptive techniques than they would normally use when negotiating for themselves.

Recently, computer scientists at the USC Institute for Creative Technologies (ICT) set out to assess the conditions under which humans employ deceptive negotiating tactics. Through a series of studies, they found that whether humans would embrace a range of deceptive and sneaky techniques depended both on their prior negotiating experience and on whether virtual agents were employed to negotiate on their behalf. The findings stand in contrast to prior studies and show that when humans use intermediaries in the form of virtual agents, they feel more comfortable employing more deceptive techniques than they would normally use when negotiating for themselves.

Lead author of the paper on these studies, Johnathan Mell, says, "We want to understand the conditions under which people act deceptively, in some cases purely by giving them an artificial intelligence agent that can do their dirty work for them."

Nowadays, virtual agents are employed nearly everywhere, from automated bidders on sites like eBay to virtual assistants on smart phones. One day, these agents could work on our behalf to negotiate the sale of a car, argue for a raise, or even resolve a legal dispute.

Mell, who conducted the research during his doctoral studies in computer science at USC, says, "Knowing how to design experiences and artificial agents which can act like some of the most devious among us is useful in learning how to combat those techniques in real life."

The researchers are eager to understand how these virtual agents, or bots, might do our bidding, and how humans behave when deploying such agents on their behalf.

Gale Lucas, a research assistant professor in the Department of Computer Science at the USC Viterbi School of Engineering and at USC ICT, as well as the corresponding author on the study published in the Journal of Artificial Intelligence Research, says, "We wanted to predict how people are going to respond differently as this technology becomes available and gets to us more widely."

The research team, consisting of Mell, Sharon Mozgai, Jonathan Gratch and Lucas, conducted three separate experiments focusing on the conditions under which humans would opt for a range of ethically dubious behaviors. These behaviors included tough bargaining (aggressive pressuring), overt lies, information withholding, and manipulative use of negative emotions (feigning anger), as well as rapport building and appeals to sympathy. In parts of these experiments, participants negotiated directly with non-human, virtual agents; in others, they programmed virtual agents to negotiate as their proxies.

The researchers found that people were willing to engage in deceptive techniques under the following conditions:

  • If they had more prior experience in negotiation
  • If they had a negative experience in negotiation (as little as 10 minutes of a negative experience could affect their intention to use more deceptive practices in future negotiations)
  • If they had less prior experience in negotiation, but were employing a virtual agent to negotiate for them

The authors say, "How humans say they will make decisions and how they actually make decisions are rarely aligned." When people programmed virtual agents to make decisions for them, they acted much as if they had engaged a lawyer as a representative, and through this virtual representative they were more willing to resort to deceptive tactics.

"People with less experience may not be confident that they can use the techniques or feel uncomfortable, but they have no problem programming an agent to do that," says Lucas.

Other outcomes: when humans interacted with a virtual agent that was fair, they were fairer themselves, but whether the virtual agent was nice or nasty in its emotional displays did not change participants' willingness to engage in deceptive practices.

The researchers also gleaned some insights about human behavior in general.

Compared to their willingness to endorse the more deceptive techniques of overt lies, information withholding, and manipulative use of negative emotions, "people really don't have any problem with being nice to get what they want or being tough to get what they want," says Lucas, which suggests that participants consider these apparently less deceptive techniques more morally acceptable.

The work has implications for the ethics of technology use and for future designers. The researchers say, "If humans, as they get more experience, become more deceptive, designers of bots could account for this."

Lucas notes, "As people get to use the agents to do their bidding, we might see that their bidding might get a little less ethical."

Mell adds, "While we certainly don't want people to be less ethical, we do want to understand how people really do act, which is why experiments like these are so important to creating real, human-like artificial agents."


Story Source:

Materials provided by University of Southern California. Original written by Amy Blumenthal. Note: Content may be edited for style and length.


Journal Reference:

  1. Johnathan Mell, Gale Lucas, Sharon Mozgai, Jonathan Gratch. The Effects of Experience on Deception in Human-Agent Negotiation. Journal of Artificial Intelligence Research, August 2, 2020.

