
How AIs ask for personal information is important for gaining user trust

Date:
May 12, 2021
Source:
Penn State
Summary:
Researchers report that users responded differently when an AI either offered to help them or asked them for help, and that this response influenced whether users trusted the AI with their personal information. They add that these introductions could be designed both to increase users' trust and to raise their awareness of the importance of personal information.

People may be reluctant to give their personal information to artificial intelligence (AI) systems, even though those systems need it to provide more accurate and personalized services. A new study reveals, however, that the way a system asks users for information can make a difference.

In a study, Penn State researchers report that users responded differently when an AI either offered to help them or asked them for help, and that this response influenced whether users trusted the AI with their personal information. They added that these introductions could be designed both to increase users' trust and to raise their awareness of the importance of personal information.

The researchers, who presented their findings today at the virtual 2021 ACM CHI Conference on Human Factors in Computing Systems, the premier international conference for human-computer interaction research, found that people who are familiar with technology -- power users -- preferred AIs that present themselves as needing help, or help-seeking, while non-expert users were more likely to prefer AIs that introduce themselves as both help-seekers and help-providers.

As AIs become increasingly ubiquitous, developers need to create systems that can better relate to humans, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.

"There's a need for us to re-think how AI systems talk to human users," said Sundar. "This has come to the surface because there are rising concerns about how AI systems are starting to take over our lives and know more about us than we realize. So, given these concerns, it may be better if we start to switch from the traditional dialogue scripts into a more collaborative, cooperative communication that acknowledges the agency of the user."

Here to help?

The researchers said that traditional AI dialogues usually open with introductions that frame the system's role as that of a helper.

In fact, power users may be put off by the way AIs typically communicate with users, which can come across as patronizing, said Sundar, who is also an affiliate of Penn State's Institute for Computational and Data Sciences (ICDS). For example, the researchers cite Facebook's request for birthday information so its AI can provide an age-appropriate experience to its users.

"AIs seem to have a paternalistic attitude in the way they talk to the user -- they seem to tell users they are here to help you and you need to give them your information to get the benefits," said Sundar.

On the other hand, when an AI system asks users for help, it is seen as social. According to Mengqi Liao, a doctoral student in mass communication and lead author of the paper, power users perceived the help-seeking AI as having social intelligence, which the researchers refer to as social presence.

"It makes sense that if someone is seeking help, that's an inherently social behavior and very interpersonal in nature," said Liao. "This social presence, in turn, leads to more trust and increases the intention to provide the AI with more personal information. Power users also gave help-seeking AIs higher evaluations on their performance even though they deliver the same outcome as other AIs."

Liao added that when the system both seeks help and tells users that it can help them in the future, non-power users have fewer privacy concerns.

Ethical AI

The researchers said that the study offers designers an ethical way to increase trust in machines without trying to trick people into providing their personal information.

"We think that these findings can also be insightful to designers who want to build in tactics to combat AI systems that prey on users," said Liao. "For example, we found that the presence of both the help-seeking and help-providing cues can actually raise privacy concerns among power users. Therefore, simply implementing both the two cues in the explanations to powers users can make them become more vigilant about their personal information."

For non-power users, designers could add help-seeking cues that reduce social presence and encourage these users to be more vigilant about their personal information, added Liao.

The researchers recruited 330 participants from an online microtask research platform. The participants were randomly assigned to a mock website representing one of four experimental conditions: a help-seeking condition, a help-providing condition, a condition combining both cues, and a control condition. In the help-seeking condition, the AI offered a written explanation that it needed the user's help and personal information in order to improve and grow. In the help-providing condition, it explained that it could use the submitted personal information to help the user find news articles. The third condition contained both the help-seeking and help-providing cues. The control condition offered no explanation.
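To make the design concrete, the sketch below shows how such an assignment procedure might be scripted in Python. It is a minimal illustration only: the condition names follow the article, but the message wording and the assign_condition helper are hypothetical, since the study's actual scripts are not reproduced here.

import random

# Hypothetical introduction scripts for the four conditions described above.
# The study's exact wording is not given in this article, so these strings
# are illustrative only.
CONDITIONS = {
    "help_seeking": ("This system needs your help and your personal "
                     "information in order to improve and grow."),
    "help_providing": ("This system can use your personal information "
                       "to help you find news articles."),
    "both": ("This system needs your help to improve and grow, and it can "
             "use your personal information to help you find news articles."),
    "control": None,  # no explanation shown
}

def assign_condition(participant_id: int) -> tuple[str, str | None]:
    """Randomly assign one participant to one of the four conditions."""
    rng = random.Random(participant_id)  # seeded so assignment is reproducible
    name = rng.choice(sorted(CONDITIONS))
    return name, CONDITIONS[name]

Seeding the generator with the participant ID keeps each assignment reproducible across sessions, a common choice in online experiments.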

The researchers also studied whether the way an AI refers to itself makes a difference, and found that people trust it less when it refers to itself as "I" than when it calls itself "this system."
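For illustration, that manipulation amounts to swapping the self-reference term in an otherwise identical request; the wording below is hypothetical, not the study's actual script.

# Hypothetical illustration of the self-reference manipulation: the same
# request phrased with "I" versus "this system".
SELF_REFERENCE_VARIANTS = {
    "first_person": "I need your personal information to improve.",
    "system": "This system needs your personal information to improve.",
}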

Future research may include investigating the use of AIs in contexts that are more sensitive in nature, such as gathering financial or medical information, to determine how these introductions could affect users' vigilance in guarding that information.


Story Source:

Materials provided by Penn State. Original written by Matt Swayne. Note: Content may be edited for style and length.


