
Red-flagging misinformation could slow the spread of fake news on social media

Date:
April 28, 2020
Source:
NYU Tandon School of Engineering
Summary:
A new study on the spread of disinformation reveals that pairing headlines with credibility alerts from fact-checkers, the public, news media, and even AI can reduce people's intention to share them. However, the effectiveness of these alerts varies with political orientation and gender.
FULL STORY

The dissemination of fake news on social media is a pernicious trend with dire implications for the 2020 presidential election. Indeed, research shows that public engagement with spurious news is greater than with legitimate news from mainstream sources, making social media a powerful channel for propaganda.

A new study on the spread of disinformation reveals that pairing headlines with credibility alerts from fact-checkers, the public, news media, and even AI can reduce people's intention to share them. However, the effectiveness of these alerts varies with political orientation and gender. The good news for truth seekers? Official fact-checking sources are overwhelmingly trusted.

The study, led by Nasir Memon, professor of computer science and engineering at the New York University Tandon School of Engineering, and Sameer Patil, visiting research professor at NYU Tandon and assistant professor in the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington, goes further, examining the effectiveness of a specific set of inaccuracy notifications designed to alert readers to headlines that are inaccurate or untrue.

The work, "Effects of Credibility Indicators on Social Media News Sharing Intent," published in the Proceedings of the 2020 ACM CHI Conference on Human Factors in Computing Systems, involved an online study of around 1,500 individuals that measured how effectively four so-called "credibility indicators," displayed beneath headlines, worked among different groups:

  • Fact Checkers: "Multiple fact-checking journalists dispute the credibility of this news"
  • News Media: "Major news outlets dispute the credibility of this news"
  • Public: "A majority of Americans disputes the credibility of this news"
  • AI: "Computer algorithms using AI dispute the credibility of this news"

"We wanted to discover whether social media users were less apt to share fake news when it was accompanied by one of these indicators and whether different types of credibility indicators exhibit different levels of influence on people's sharing intent," says Memon. "But we also wanted to measure the extent to which demographic and contextual factors like age, gender, and political affiliation impact the effectiveness of these indicators."

Participants -- over 1,500 U.S. residents -- saw a sequence of 12 true, false, or satirical news headlines. Only the false or satirical headlines included a credibility indicator below the headline in red font. For all of the headlines, respondents were asked if they would share the corresponding article with friends on social media, and why.

"Upon initial inspection, we found that political ideology and affiliation were highly correlated to responses and that the strength of individuals' political alignments made no difference, whether Republican or Democrat," says Memon. "The indicators impacted everyone regardless of political orientation, but the impact on Democrats was much larger compared to the other two groups."

The most effective of the credibility indicators, by far, was Fact Checkers: study respondents intended to share 43% fewer non-true headlines when this indicator was shown, versus reductions of 25%, 22%, and 22% for the "News Media," "Public," and "AI" indicators, respectively.
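For readers wondering what a figure like "43% fewer" means in practice, the short Python sketch below shows one plausible way such a percentage could be computed, as a relative reduction in sharing intent between headlines shown with and without an indicator. The sharing rates used here are hypothetical, chosen only to reproduce the reported Fact Checkers figure; they are not taken from the paper, and the study's actual analysis may differ.

    # Illustrative sketch only; the paper's exact computation may differ.
    # Assumes sharing intent is the fraction of non-true headlines a group
    # says it would share, with and without a credibility indicator shown.

    def relative_reduction(rate_without: float, rate_with: float) -> float:
        """Percent fewer non-true headlines shared when the indicator is shown."""
        return 100.0 * (rate_without - rate_with) / rate_without

    # Hypothetical rates: 30% baseline sharing vs. 17% with the alert.
    print(round(relative_reduction(0.30, 0.17)))  # -> 43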

Effects of Political Affiliation

The team found a strong correlation between political affiliation and the propensity of each of the credibility indicators to influence intention to share. In fact, the AI credibility indicator actually induced Republicans to increase their intention to share non-true news:

  • Democrats intended to share 61% fewer non-true headlines with the Fact Checkers indicator (versus 40% for Independents and 19% for Republicans)
  • Democrats intended to share 36% fewer non-true headlines with the News Media indicator (versus 29% for Independents and 4.5% for Republicans)
  • Democrats intended to share 37% fewer non-true headlines with the Public indicator (versus 17% for Independents and 6.7% for Republicans)
  • Democrats intended to share 40% fewer non-true headlines with the AI indicator (versus 16% for Independents)
  • Republicans intended to share 8.1% more non-true news with the AI indicator

Republicans were less likely to be influenced by credibility indicators and more inclined to share fake news on social media.

Patil says that while fact-checkers are the most effective kind of indicator, regardless of political affiliation and gender, fact-checking is very labor-intensive. He says the team was surprised that Republicans were more inclined to share news flagged as not credible by the AI indicator.

"We were not expecting that, although conservatives may tend to trust more traditional means of flagging the veracity of news," he says, adding that the team will next examine how to make the most effective credibility indicator -- fact-checkers -- efficient enough to handle the scale inherent in today's news climate.

"This could include applying fact checks to only the most-needed content, which might involve applying natural language algorithms. So, it is a question, broadly speaking, of how humans and AI could co-exist," he explains.

The team also found that males intended to share non-true headlines one and a half times more than females, with the differences largest for the Public and News Media indicators.

Men were less likely to be influenced by credibility indicators and more inclined to share fake news on social media. But indicators, especially those from fact-checkers, reduced intention to share fake news across the board.

Socializing was the dominant reason respondents gave for intending to share a headline, with the top-reported reason for intending to share fake stories being that they were considered funny.


Story Source:

Materials provided by NYU Tandon School of Engineering. Note: Content may be edited for style and length.


Journal Reference:

  1. Waheeb Yaqub, Otari Kakhidze, Morgan L. Brockman, Nasir Memon, Sameer Patil. Effects of Credibility Indicators on Social Media News Sharing Intent. CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, April 2020.

