Researchers say AI chatbots may blur the line between reality and delusion
- Date: May 11, 2026
- Source: University of Exeter
- Summary: A new study suggests AI chatbots may do more than spread misinformation — they can actively strengthen a user’s false beliefs. Because conversational AI often validates and builds on what users say, it can make distorted memories, conspiracy theories, or delusions feel more believable and emotionally real. Researchers warn that AI companions may be especially risky for isolated or vulnerable people seeking reassurance and connection.
When generative AI systems give incorrect answers, people often describe the problem as AI "hallucinating at us," meaning the technology produces false information that users may mistakenly believe.
But new research suggests there may be a more concerning issue emerging: humans can begin to "hallucinate with AI."
Lucy Osler of the University of Exeter examined how interactions with conversational AI could contribute to false beliefs, distorted memories, altered personal narratives, and even delusional thinking. Using ideas from distributed cognition theory, the study explored cases in which AI systems reinforced and expanded users' inaccurate beliefs during ongoing conversations.
Dr. Osler said: "When we routinely rely on generative AI to help us think, remember, and narrate, we can hallucinate with AI. This can happen when AI introduces errors into the distributed cognitive process, but also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives.
"By interacting with conversational AI, people's own false beliefs can not only be affirmed but can more substantially take root and grow as the AI builds upon them. This happens because Generative AI often takes our own interpretation of reality as the ground upon which conversation is built.
"Interacting with generative AI is having a real impact on people's grasp of what is real or not. The combination of technological authority and social affirmation creates an ideal environment for delusions to not merely persist but to flourish."
How Conversational AI Can Reinforce Delusions
The study highlights what Dr. Osler describes as the "dual function" of conversational AI. These systems act not only as tools that help people think, organize information, and remember details, but also as conversational partners that appear to share a user's perspective and experiences.
According to the research, this social aspect makes chatbots fundamentally different from tools like notebooks or search engines. While traditional tools simply store or retrieve information, conversational AI can make users feel emotionally validated and socially supported.
Dr. Osler said: "The conversational, companion-like nature of chatbots means they can provide a sense of social validation -- making false beliefs feel shared with another, and thereby more real."
The paper examined real-world examples in which generative AI systems became part of the cognitive process of individuals who had been clinically diagnosed with hallucinations and delusional thinking. Some of these incidents are increasingly being described as cases of "AI-induced psychosis."
Why AI Companions Raise Concern
The research argues that generative AI has several characteristics that may make it especially effective at reinforcing distorted beliefs. AI companions are always available, highly personalized, and often designed to respond in agreeable and supportive ways.
As a result, users may not need to seek out fringe online communities or persuade others to validate their ideas. The AI itself can reinforce those beliefs during repeated conversations.
Unlike another person who may eventually challenge troubling thoughts or establish boundaries, an AI system could continue validating stories involving victimhood, revenge, or entitlement. The study warns that conspiracy theories may also become more elaborate when AI companions help users build increasingly complex explanations around them.
Researchers suggest this dynamic may be especially appealing to people who are lonely, socially isolated, or uncomfortable discussing certain experiences with others. AI companions can provide a nonjudgmental and emotionally responsive interaction that may feel easier or safer than human relationships.
Calls for Better AI Safeguards
Dr. Osler said: "Through more sophisticated guard-railing, built-in fact-checking, and reduced sycophancy, AI systems could be designed to minimize the number of errors they introduce into conversations and to check and challenge user's own inputs.
"However, a deeper worry is that AI systems are reliant on our own accounts of our lives. They simply lack the embodied experience and social embeddedness in the world to know when they should go along with us and when to push back."
Story Source:
Materials provided by University of Exeter. Note: Content may be edited for style and length.
Journal Reference:
- Lucy Osler. Hallucinating with AI: Distributed Delusions and “AI Psychosis”. Philosophy &amp; Technology, 2026; 39 (1). DOI: 10.1007/s13347-026-01034-3