Showing AI users diversity in training data boosts perceived fairness and trust
- Date: October 22, 2024
- Source: Penn State
While artificial intelligence (AI) systems, such as home assistants, search engines or large language models like ChatGPT, may seem nearly omniscient, their outputs are only as good as the data on which they are trained. However, ease of use often leads users to adopt AI systems without understanding what training data was used or who prepared the data, including potential biases in the data or held by trainers. A new study by Penn State researchers suggests that making this information available could shape appropriate expectations of AI systems and further help users make more informed decisions about whether and how to use these systems.
The work investigated whether displaying racial diversity cues -- the visual signals on AI interfaces that communicate the racial composition of the training data and the backgrounds of the typically crowd-sourced workers who labeled it -- can enhance users' expectations of algorithmic fairness and trust. The findings were recently published in the journal Human-Computer Interaction.
AI training data is often systematically biased in terms of race, gender and other characteristics, according to S. Shyam Sundar, Evan Pugh University Professor and director of the Center for Socially Responsible Artificial Intelligence at Penn State.
"Users may not realize that they could be perpetuating biased human decision-making by using certain AI systems," he said.
Lead author Cheng "Chris" Chen, assistant professor of communication design at Elon University, who earned her doctorate in mass communications from Penn State, explained that users are often unable to evaluate biases embedded in the AI systems because they don't have information about the training data or the trainers.
"This bias presents itself after the user has completed their task, meaning the harm has already been inflicted, so users don't have enough information to decide if they trust the AI before they use it," Chen said
Sundar said that one solution would be to communicate the nature of the training data, especially its racial composition.
"This is what we did in this experimental study, with the goal of finding out if it would make any difference to their perceptions of the system," Sundar said.
To understand how diversity cues can affect trust in AI systems, the researchers created two experimental conditions, one diverse and one non-diverse. In the diverse condition, participants viewed a short description of the machine learning model and data labeling practice, along with a bar chart showing an equal distribution of facial images in the training data across three racial groups -- white, Black and Asian -- each making up about one-third of the dataset. Labelers' backgrounds were similarly balanced, with roughly one-third each of white, Black and Asian labelers. In the non-diverse condition, the bar charts showed that 92% of the images, and likewise 92% of the labelers, belonged to a single dominant racial group.
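The study presented these compositions as static bar charts on a data card. As a rough illustration of how such a card's numbers could be derived from a labeled dataset, here is a minimal Python sketch; the record fields, group shares and dataset are invented for the example and are not the study's actual materials:

```python
from collections import Counter

def composition(records, key):
    """Percentage share of each group under `key`, as shown on a data card."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {group: round(100 * n / total, 1) for group, n in counts.items()}

# Hypothetical training records: each facial image carries the subject's
# race and the race of the crowd worker who labeled it.
training_data = (
    [{"subject_race": "white", "labeler_race": "white"}] * 31
    + [{"subject_race": "Black", "labeler_race": "Black"}] * 35
    + [{"subject_race": "Asian", "labeler_race": "Asian"}] * 34
)

# Diverse condition: roughly one-third per group. A non-diverse card
# would instead show ~92% of images and labelers from a single group.
print("Training images:", composition(training_data, "subject_race"))
print("Labelers:       ", composition(training_data, "labeler_race"))
```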
Participants first reviewed data cards describing the training data characteristics of an AI-powered facial expression classification tool called HireMe. They then watched automated interviews of three equally qualified male candidates of different races. The candidates' neutral facial expressions and tone were analyzed in real time by the AI system and presented to participants, highlighting each candidate's most prominent expression and employability.
Half the participants were exposed to racially biased performance by the system: the experimenters manipulated it to favor the white candidate, rating his neutral expression as joyful and suitable for the job while interpreting the Black and Asian candidates' expressions as anger and fear, respectively. In the unbiased condition, the AI identified joy as each candidate's most prominent expression and rated all three as equally good fits for the position. Participants were then asked to provide feedback on the AI's analysis, rating their agreement on a five-point scale and selecting the most appropriate emotion if they disagreed.
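The paper does not publish its interface code, but the feedback step can be pictured as a simple record pairing a five-point agreement rating with an optional corrected label. The following Python sketch is one hedged way to structure it; all names and the validation rule are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

EMOTIONS = ("joy", "anger", "fear", "sadness", "surprise", "neutral")

@dataclass
class Feedback:
    """One participant's rating of a single AI emotion judgment."""
    candidate_id: str
    ai_emotion: str                          # expression the system reported
    agreement: int                           # 1 (strongly disagree) to 5 (strongly agree)
    corrected_emotion: Optional[str] = None  # chosen only when the user disagrees

    def __post_init__(self):
        if not 1 <= self.agreement <= 5:
            raise ValueError("agreement must be on a five-point scale")
        if self.agreement <= 2 and self.corrected_emotion not in EMOTIONS:
            raise ValueError("a disagreeing rating needs a corrected emotion")

# A participant rejects a biased reading and relabels the expression.
feedback = Feedback(candidate_id="candidate_2", ai_emotion="anger",
                    agreement=1, corrected_emotion="neutral")
print(feedback)
```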
"We found that showing racial diversity in training data and labelers' backgrounds increased users' trust in the AI," Chen said. "The opportunity to provide feedback also helped participants develop a higher sense of agency and increased their potential to use the AI system in the future."
However, the researchers noted that providing feedback about an unbiased system reduced usability for white participants. Because they perceived the system as already functioning correctly and fairly, they saw little need to provide feedback and viewed it as an unnecessary burden.
The researchers found that, when multiple racial diversity cues were present, they worked independently: both data diversity and labeler diversity cues were effective on their own in shaping users' perception of the system's fairness. The researchers attributed this to the representativeness heuristic, meaning users tended to believe that an AI model's training was racially inclusive if its racial composition matched their understanding of diversity.
"If AI is just learning expressions labeled mostly by people of one race, the system may misrepresent emotions of other races," said Sundar, who is also the James P. Jimirro Professor of Media Effects at the Penn State Bellisario College of Communications and co-director of the Media Effects Research Laboratory. "The system needs to take race into account when deciding if a face is cheerful or angry, for example, and that comes in the form of greater racial diversity of both images and labelers in the training process."
According to the researchers, for an AI system to be credible, the origin of its training data must be made available, so users can review and scrutinize it to determine their level of trust.
"Making this information accessible promotes transparency and accountability of AI systems," Sundar said. "Even if users don't access this information, its availability signals ethical practice, and fosters fairness and trust in these systems."
Story Source:
Materials provided by Penn State. Original written by Jordan Ford. Note: Content may be edited for style and length.
Journal Reference:
- Cheng Chen, S. Shyam Sundar. Communicating and combating algorithmic bias: effects of data diversity, labeler diversity, performance bias, and user feedback on AI trust. Human–Computer Interaction, 2024. DOI: 10.1080/07370024.2024.2392494