
What readers think about computer-generated texts

Date:
May 3, 2016
Source:
Ludwig-Maximilians-Universitaet Muenchen (LMU)
Summary:
An experimental study has found that readers rate texts generated by algorithms as more credible than texts written by real journalists.

An experimental study carried out by media researchers at Ludwig-Maximilians-Universitaet (LMU) in Munich has found that readers rate texts generated by algorithms as more credible than texts written by real journalists.

Readers like to read texts generated by computers, especially when they are unaware that what they are reading was assembled on the basis of an algorithm. This, at any rate, is the conclusion suggested by the results of an experiment recently conducted by LMU media researchers. In the study, 986 subjects were asked to read and evaluate online news stories. Articles which the participants believed to have been written by journalists were consistently given higher marks for readability, credibility and journalistic expertise than those that were flagged as computer-generated -- even in cases where the real "author" was in fact a computer.

Several media outlets already regularly publish texts put together by computer programs. Perhaps the best known of those that have adopted the practice -- sometimes dubbed 'robot journalism' -- is the news agency Associated Press. German publishers have also begun to use algorithms to compile texts. At the moment, these are most likely to turn up on the sports pages and in the financial section, as news reports in these fields tend to be based on source data that are already structured in predictable ways.

Dr. Andreas Graefe and Professor Hans-Bernd Brosius at LMU's Department for Communication Studies and Media Research (IfKW) have now investigated how readers perceive and respond to news stories generated by computers. The results of their study appear in the latest issue of Journalism. Graefe and colleagues chose two texts from the online editions of popular German news outlets: one was a report on a soccer match, the other dealt with the market performance of shares issued by an automotive supplier. In addition, they used an algorithm developed at the Fraunhofer Institute for Communication, Information Processing and Ergonomics to generate texts on the same subjects.

Each participant in the study was then given a sports text and a business text to read, together with a note stating whether it had been written by a journalist or a computer program. What the experimental subjects did not know was that, in some cases, the source given in these notes was deliberately false.
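The setup can be pictured as a two-by-two design that crosses the actual source of each article with the source declared to the reader. The Python sketch below is purely illustrative and not taken from the study's materials; the condition names, the even random assignment, and the per-article allocation are assumptions.

    import random
    from itertools import product

    # Illustrative 2x2 design: actual source x declared source.
    # Condition names and even random assignment are assumptions,
    # not the authors' published protocol.
    ACTUAL_SOURCES = ["journalist", "computer"]
    DECLARED_SOURCES = ["journalist", "computer"]

    # All four conditions; in two of them the declared source is false.
    CONDITIONS = list(product(ACTUAL_SOURCES, DECLARED_SOURCES))

    def assign_condition(rng: random.Random) -> dict:
        """Pick an actual-source/declared-source pairing for one article."""
        actual, declared = rng.choice(CONDITIONS)
        return {
            "actual_source": actual,
            "declared_source": declared,
            "label_is_true": actual == declared,
        }

    if __name__ == "__main__":
        rng = random.Random(42)
        # Simulate assignments for 986 readers, one article each.
        sample = [assign_condition(rng) for _ in range(986)]
        misled = sum(not p["label_is_true"] for p in sample)
        print(f"{misled} of {len(sample)} readers saw a false source label")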

When they analyzed the results of the experiment, the LMU researchers discovered that their study population found articles actually or putatively written by humans to be more readable than computer-generated texts. In spite of this preference, however, the computer-generated texts were judged to be more credible than the stories actually written by journalists. This second finding surprised even the designers of the experiment. "The automatically generated texts are full of facts and figures -- and the figures are listed to two decimal places. We believe that this impression of precision strongly contributes to the perception that they are more trustworthy," says Mario Haim of the IfKW, one of the authors of the paper.

However, with respect to readability, readers always rated articles attributed to real journalists more favorably -- even when the attribution was false. "To explain this finding, we assume that readers' expectations differ depending on whether they believe the text to have been written by a person or a machine, and that this preconception influences their perception of the text concerned," says Haim. A more critical approach to computer-based texts might also result from the fact that readers have little experience with such reports. Overall, however, the differences in assessment of the two types of text were relatively small. "We would argue that this suggests that brief, computer-generated texts dealing with sporting events or business and finance are already very appealing to readers," Haim concludes.


Story Source:

Materials provided by Ludwig-Maximilians-Universitaet Muenchen (LMU). Note: Content may be edited for style and length.


Journal Reference:

  1. A. Graefe, M. Haim, B. Haarmann, H.-B. Brosius. Readers' perception of computer-generated news: Credibility, expertise, and readability. Journalism, 2016; DOI: 10.1177/1464884916641269

