
Real men don't say 'cute'

Psychologists tap big data and Twitter to analyze the accuracy of stereotypes

Date:
November 15, 2016
Source:
Society for Personality and Social Psychology
Summary:
What's in a tweet? From gender to education, the words used on social media carry impressions to others. Using publicly available tweets, social psychologists and computer scientists are helping us to parse out the stereotypes formed by word choices on the social media channel Twitter. Utilizing natural language processing (NLP), a form of artificial intelligence, the researchers show where stereotyping goes from "plausible" to wrong.

What's in a tweet? From gender to education, the words used on social media carry impressions to others. Using publicly available tweets, social psychologists and computer scientists from the University of Pennsylvania Positive Psychology Center, Germany, and Australia are helping us to parse out the stereotypes formed by word choices on the social media channel Twitter. Utilizing natural language processing (NLP), a form of artificial intelligence, the researchers show where stereotyping goes from "plausible" to wrong.

The research appears in Social Psychological and Personality Science.

In a series of studies, participants were asked to categorize the authors of tweets based solely on the content of their posts, judging each writer's gender, age, education level, or political orientation from the words alone.

The researchers used NLP techniques to analyze and isolate the stereotypes people relied on to categorize others by gender, age, education level, and political orientation. While the stereotypes and people's assumptions were often correct, there were many instances where people got things wrong.

"These inaccurate stereotypes tended to be exaggerated rather than backwards," says lead author Jordan Carpenter (now at Duke University). "For instance, people had a decent idea that people who didn't go to college are more likely to swear than people with PhDs, but they thought PhDs never swear, which is untrue."

Stereotypes in Social Media

By focusing on stereotype inaccuracies, their research reveals how multiple stereotypes can affect each other.

"One of our most interesting findings is the fact that, when people had a hard time determining someone's political orientation, they seemed to revert (unhelpfully) to gender stereotypes, assuming feminine-sounding people were liberal and masculine-sounding people were conservative," states Carpenter.

The data also showed that people assumed technology-related language was the sign of a male writer. In this study, "it's true: men DO post about technology more than women," says Carpenter. "However, this stereotype strongly led to false conclusions: almost every woman who posted about technology was inaccurately believed to be a man."

In the above example, the stereotype is exaggerated and "overly salient in people's judgments about men and women," write the authors. "People on both sides of the 'appropriate because accurate' debate should agree that this stereotype, along with the others we highlight, are inappropriate and should be intervened against."

Artificial Intelligence and Stereotype Research

"One important aspect of this research is that it reverses the way a lot of stereotype research has been done in the past," says Daniel Preotiuc-Pietro, a coauthor and computer scientist at the Positive Psychology Center.

Instead of starting with various groups and asking people what behaviors they associate with them, the researchers started with a set of behaviors and asked people to state the group identity of the person who did them. They also "considered stereotypes as a lexical 'web': the words we associate with a group are themselves our stereotype of that group," writes Preotiuc-Pietro.

This arrangement allowed the team to use NLP methods to illuminate people's stereotypes without ever directly asking anyone to explicitly endorse them.
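The lexical-web idea described above can be illustrated with a short sketch. The code below is not the authors' pipeline; it is a simplified, hypothetical example of one common NLP approach to the same question: scoring how strongly each word is associated with one group of writers versus another using a smoothed log-odds ratio of word frequencies. The example tweets and group labels are invented for illustration.

```python
from collections import Counter
import math

def word_group_associations(tweets_a, tweets_b, smoothing=1.0):
    """Score each word's association with group A vs. group B.

    Returns a dict mapping word -> smoothed log-odds score:
    positive scores lean toward group A, negative toward group B.
    This treats a stereotype as a lexical 'web' of group-linked words.
    """
    counts_a = Counter(w for t in tweets_a for w in t.lower().split())
    counts_b = Counter(w for t in tweets_b for w in t.lower().split())
    total_a = sum(counts_a.values())
    total_b = sum(counts_b.values())
    vocab = set(counts_a) | set(counts_b)

    scores = {}
    for w in vocab:
        # Laplace smoothing keeps unseen words from producing log(0)
        p_a = (counts_a[w] + smoothing) / (total_a + smoothing * len(vocab))
        p_b = (counts_b[w] + smoothing) / (total_b + smoothing * len(vocab))
        scores[w] = math.log(p_a / p_b)
    return scores

# Toy corpora labeled by perceived group (invented examples)
group_a = ["new gpu benchmark posted", "kernel update broke my build"]
group_b = ["this puppy is so cute", "such a cute little cafe"]

scores = word_group_associations(group_a, group_b)
print(scores["cute"] < 0)   # "cute" leans toward group B
print(scores["gpu"] > 0)    # "gpu" leans toward group A
```

In this framing, the top-scoring words for each group *are* the lexical stereotype of that group, which is what lets researchers compare those word lists against how readers actually judged the tweet authors.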

"This is a novel way around the problem that people often resist openly stating their stereotypes, either because they want to present themselves as unbiased or because they're not consciously aware of all the stereotypes they use," says Carpenter.

The field of NLP is a branch of artificial intelligence that deals with the automatic understanding of written language. NLP has produced many familiar applications used on a daily basis, including spell checking, predictive text, virtual assistants like Siri, and related-story suggestions, to name a few examples.

"As researchers across fields work together more and more frequently, it's exciting to be able to use both computer science and psychology methods in ways that contribute to both fields," summarizes Preotiuc-Pietro.


Story Source:

Materials provided by Society for Personality and Social Psychology. Note: Content may be edited for style and length.


Journal Reference:

  1. Jordan Carpenter et al. Real Men Don't Say "Cute": Using Automatic Language Analysis to Isolate Inaccurate Aspects of Stereotypes. Social Psychological and Personality Science, November 2016. DOI: 10.1177/1948550616671998

