Think AI "knows" what it’s doing? Scientists say think again
- Date: April 19, 2026
- Source: Iowa State University
- Summary: Calling AI things like “smart” or saying it “knows” something might sound harmless, but it can quietly mislead people about what AI actually does. A new study shows that news writers are more careful than expected, rarely using strongly human-like language. When they do, the wording often falls on a spectrum: sometimes describing simple requirements, other times hinting at human traits.
Think, know, understand, remember.
These are everyday words people use to describe what goes on in the human mind. But when those same terms are applied to artificial intelligence, they can unintentionally make machines seem more human than they really are.
"We use mental verbs all the time in our daily lives, so it makes sense that we might also use them when we talk about machines -- it helps us relate to them," said Jo Mackiewicz, professor of English at Iowa State. "But at the same time, when we apply mental verbs to machines, there's also a risk of blurring the line between what humans and AI can do."
Mackiewicz and Jeanine Aune, a teaching professor of English and director of the advanced communication program at Iowa State, are part of a research team that studied how writers describe AI using human-like language. This type of wording, known as anthropomorphism, assigns human traits to non-human systems. Their study, "Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT," was published in Technical Communication Quarterly.
The research team also included Matthew J. Baker, associate professor of linguistics at Brigham Young University, and Jordan Smith, assistant professor of English at the University of Northern Colorado. Both previously studied at Iowa State University.
Why Human-Like Language About AI Can Be Misleading
According to the researchers, using mental verbs to describe AI can create a false impression. Words such as "think," "know," "understand," and "want" suggest that a system has thoughts, intentions, or awareness. In reality, AI does not possess beliefs or feelings. It produces responses by analyzing patterns in data, not by forming ideas or making conscious decisions.
Mackiewicz and Aune also pointed out that this kind of language can overstate what AI is capable of. Phrases like "AI decided" or "ChatGPT knows" can make systems seem more independent or intelligent than they actually are. This can lead to unrealistic expectations about how reliable or capable AI is.
There is also a broader concern. When AI is described as if it has intentions, attention shifts away from the humans behind it: the developers, engineers, and organizations responsible for how these systems are built and used.
"Certain anthropomorphic phrases may even stick in readers' minds and can potentially shape public perception of AI in unhelpful ways," Aune said.
How News Writers Actually Use AI Language
To better understand how often this kind of language appears, the researchers analyzed the News on the Web (NOW) corpus. This massive dataset contains more than 20 billion words from English-language news articles published in 20 countries.
They focused on how frequently mental verbs such as "learns," "means," and "knows" were used alongside terms like AI and ChatGPT.
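The release does not reproduce the study's actual query procedure, but as a rough illustration of what counting such verb collocations involves, here is a minimal Python sketch. The toy sample text, the abbreviated verb list, and the adjacency-only matching are all assumptions made for illustration, not the researchers' method:

```python
import re
from collections import Counter

# Toy stand-in for NOW corpus text (assumption; the real corpus has 20+ billion words).
corpus = (
    "AI needs large amounts of data. ChatGPT knows the answer, some say. "
    "AI needs to be trained before deployment. The model learns patterns. "
    "AI needs some human assistance, and ChatGPT understands context poorly."
)

AI_TERMS = r"(?:AI|ChatGPT)"
# Abbreviated sample of mental verbs; the study's full list is longer (assumption).
MENTAL_VERBS = r"(?:needs|knows|understands|learns|means|thinks|wants|remembers)"

# Count each (term, verb) pairing where the verb immediately follows the AI term.
pattern = re.compile(rf"\b({AI_TERMS})\s+({MENTAL_VERBS})\b")
counts = Counter(match.groups() for match in pattern.finditer(corpus))

for (term, verb), n in counts.most_common():
    print(f"{term} {verb}: {n}")
# -> AI needs: 3, ChatGPT knows: 1, ChatGPT understands: 1
# "learns" is not counted: it follows "The model", not an AI term.
```

A real analysis would also need lemmatization, part-of-speech filtering, and, as the study emphasizes, manual reading of each hit in context, since raw counts alone cannot distinguish "AI needs data" from "AI needs to understand the real world."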
The findings were unexpected.
Mental Verbs Are Less Common Than Expected
The study found that news writers do not frequently pair AI-related terms with mental verbs.
"Anthropomorphism has been shown to be common in everyday speech, but we found there's far less usage in news writing," Mackiewicz said.
Among the examples identified, the word "needs" appeared most often with AI, showing up 661 times. For ChatGPT, "knows" was the most frequent pairing, but it appeared only 32 times.
The researchers noted that editorial standards may play a role. Associated Press guidelines, which discourage attributing human emotions or traits to AI, could be influencing how journalists write about these technologies.
Context Matters More Than the Words Themselves
Even when mental verbs were used, they were not always anthropomorphic.
For instance, the word "needs" often described basic requirements rather than human-like qualities. Phrases such as "AI needs large amounts of data" or "AI needs some human assistance" are similar to how people describe non-human systems like cars or recipes. In these cases, the language does not imply that AI has thoughts or desires.
In other cases, "needs" was used to express what should be done, such as "AI needs to be trained" or "AI needs to be implemented." Aune explained that these examples were often written in passive voice, which shifts responsibility back to human actors rather than the technology itself.
Anthropomorphism Exists on a Spectrum
The study also showed that not all uses of mental verbs are equal. Some phrases move closer to suggesting human-like qualities.
For example, statements like "AI needs to understand the real world" can imply expectations tied to human reasoning, ethics, or awareness. These uses go beyond simple descriptions and begin to suggest deeper capabilities.
"These instances showed that anthropomorphizing isn't all-or-nothing and instead exists on a spectrum," Aune said
Why Language Choices About AI Matter
The researchers found that anthropomorphism in news coverage is both less frequent and more nuanced than many might assume.
"Overall, our analysis shows that anthropomorphization of AI in news writing is far less common -- and far more nuanced -- than we might think," Mackiewicz said. "Even the instances that did anthropomorphize AI varied widely in strength."
The findings highlight the importance of context. Simply counting words is not enough to understand how language shapes meaning.
"For writers, this nuance matters: the language we choose shapes how readers understand AI systems, their capabilities and the humans responsible for them," Mackiewicz said.
The research team also emphasized that these insights can help professionals think more carefully about how they describe AI in their work.
"Our findings can help technical and professional communication practitioners reflect on how they think about AI technologies as tools in their writing process and how they write about AI," the research team wrote in the published study.
As AI continues to develop, the way people talk about it will remain important. Mackiewicz and Aune said writers will need to stay mindful of how word choices influence perception.
Looking ahead, the team suggested that future studies could explore how different words shape understanding and whether even rare uses of anthropomorphic language have a strong impact on how people view AI.
Story Source:
Materials provided by Iowa State University. Note: Content may be edited for style and length.
Journal Reference:
- Jeanine Elise Aune, Matthew J. Baker, Jo Mackiewicz, Jordan Smith. Anthropomorphizing Artificial Intelligence: A Corpus Study of Mental Verbs Used with AI and ChatGPT. Technical Communication Quarterly, 2025. DOI: 10.1080/10572252.2025.2593840