AI swarms could hijack democracy without anyone noticing
- Date: April 20, 2026
- Source: University of British Columbia
- Summary: AI-powered personas are becoming so realistic that they can infiltrate online communities and subtly steer public opinion. Unlike traditional bots, they adapt, coordinate, and refine their messaging at massive scale, creating a false sense of consensus. Early warning signs—like deepfakes and fake news networks—have already appeared in global elections. Researchers warn that the next election could be the true test of this technology's power.
A new kind of political threat may be emerging, and it is far less visible than protests or traditional voter manipulation. Researchers warn that highly realistic AI-controlled personas could soon play a major role in shaping public opinion and influencing democratic systems.
A recent policy forum paper published in Science describes how large groups of AI-generated personas can convincingly imitate human behavior online. These systems can enter digital communities, participate in discussions, and influence viewpoints at extraordinary speed. Unlike earlier bot networks, these AI agents can coordinate instantly, respond to feedback, and maintain consistent narratives across thousands of accounts.
How AI personas mimic real people online
Rapid progress in large language models and multi-agent systems has made it possible for a single operator to manage vast networks of AI "voices." These personas can appear authentic, adopt local language and tone, and interact in ways that feel natural to other users.
They are also capable of running millions of small-scale experiments to determine which messages are most persuasive. This allows them to refine their communication strategies in real time and manufacture the appearance of widespread public agreement. In reality, that consensus is artificially created and designed to steer political discussions.
Deepfakes and fake news signal early risks
Although fully developed AI swarms are still largely theoretical, researchers say there are already warning signs. These include AI-generated deepfakes and fake news outlets that have shaped online discussion around recent elections in countries such as the United States, Taiwan, Indonesia, and India, according to UBC computer scientist Dr. Kevin Leyton-Brown.
At the same time, monitoring organizations have identified pro-Kremlin networks spreading large volumes of online content. This activity is believed to be aimed at shaping the data used to train future AI systems, potentially influencing how those systems behave and what information they prioritize.
Experts warn of growing impact on democracy
Looking ahead, experts believe AI swarms could significantly affect the balance of power in democratic societies. Dr. Leyton-Brown cautioned that these systems are likely to change how people trust information online. "We shouldn't imagine that society will remain unchanged as these systems emerge. A likely result is decreased trust of unknown voices on social media, which could empower celebrities and make it harder for grassroots messages to break through."
Researchers suggest that upcoming elections may serve as a critical test for this technology. The key challenge will be recognizing and responding to these AI-driven influence campaigns before they become too widespread to control.
Story Source:
Materials provided by University of British Columbia. Note: Content may be edited for style and length.
Journal Reference:
- Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, Nick Bostrom, Nicholas A. Christakis, David Garcia, Amit Goldenberg, Yara Kyrychenko, Kevin Leyton-Brown, Nina Lutz, Gary Marcus, Filippo Menczer, Gordon Pennycook, David G. Rand, Maria Ressa, Frank Schweitzer, Dawn Song, Christopher Summerfield, Audrey Tang, Jay J. Van Bavel, Sander van der Linden, Jonas R. Kunst. How malicious AI swarms can threaten democracy. Science, 2026; 391 (6783): 354 DOI: 10.1126/science.adz1697