In a stark warning published today in Science, a prominent group of global experts has raised the alarm about a new threat to democracy: swarms of AI agents capable of mimicking human behavior to manipulate public opinion on an unprecedented scale.
The coalition, which includes Nobel Peace Prize winner Maria Ressa and leading researchers from prestigious institutions like Berkeley, Harvard, Oxford, Cambridge, and Yale, describes these “AI swarms” as a disruptive force that could soon infest social media and messaging platforms worldwide.
According to the experts, would-be autocrats could deploy these technologies to persuade populations to accept canceled elections or overturn legitimate results. The group predicts that such technology could be deployed at scale by the 2028 U.S. presidential election, though early versions have already been observed in 2024 elections in Taiwan, India, and Indonesia.
“These systems are capable of coordinating autonomously, infiltrating communities and fabricating consensus efficiently. By adaptively mimicking human social dynamics, they threaten democracy,” the authors warned in their joint statement.
The threat is particularly concerning because these AI swarms can infiltrate specific online communities, learn their characteristics over time, and deliver increasingly convincing, carefully tailored falsehoods designed to shift public opinion at scale.
Daniel Thilo Schroeder, a research scientist at Norway’s SINTEF research institute and one of the paper’s authors, emphasized how easily these systems could be deployed. “It’s just frightening how easy these things are to vibe code and just have small bot armies that can actually navigate online social media platforms and email and use these tools,” said Schroeder, who has been simulating such swarms in laboratory conditions.
The sophistication of these AI systems is rapidly advancing. They can increasingly adapt to human communication patterns, using appropriate slang and posting irregularly to avoid detection. Additionally, progress in “agentic” AI development means these systems can now autonomously plan and coordinate actions across multiple platforms.
Jonas Kunst, professor of communication at the BI Norwegian Business School and another author of the warning, explained the multiplier effect of these coordinated systems: “If these bots start to evolve into a collective and exchange information to solve a problem – in this case a malicious goal, namely analyzing a community and finding a weak spot – then coordination will increase their accuracy and efficiency. That is a really serious threat that we predict is going to materialize.”
Real-world examples are already emerging. In Taiwan, a frequent target of Chinese propaganda, AI bots have been increasingly engaging with citizens on platforms like Threads and Facebook over the past few months, according to Puma Shen, a Taiwanese Democratic Progressive Party MP who campaigns against Chinese disinformation.
These bots typically flood discussions with unverifiable information, creating what Shen describes as “information overload.” Some AI agents have been spreading narratives suggesting America will abandon Taiwan or encouraging younger Taiwanese to remain neutral in the China-Taiwan dispute by emphasizing its complexity.
“It’s not telling you that China’s great, but it’s [encouraging them] to be neutral,” Shen told the Guardian. “This is very dangerous, because then you think people like me are radical.”
The experts behind the warning include several prominent voices in technology and misinformation research, such as NYU’s Gary Marcus, a self-described “generative AI realist,” and Audrey Tang, Taiwan’s first digital minister, who has warned about authoritarian forces undermining electoral processes using AI.
However, not all experts are convinced that deployment will be as rapid or widespread as the warning suggests. Inga Trauthig, an adviser to the International Panel on the Information Environment, noted that politicians’ reluctance to cede campaign control to AI systems might slow adoption, adding that “most political propagandists I interview are still using older technologies and are not at this cutting edge.”
Nevertheless, Michael Wooldridge, professor of AI foundations at Oxford University, validated the overall concern: “I think it is entirely plausible that bad actors will try to mobilize virtual armies of LLM-powered agents to disrupt elections and manipulate public opinion… It’s technologically perfectly feasible.”
The authors are calling for coordinated global action to counter these risks, including the development of “swarm scanners” and watermarked content to help identify and combat AI-run misinformation campaigns before they can fundamentally undermine democratic processes.
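The paper does not specify how a "swarm scanner" would work, but one crude, illustrative signal of coordination is accounts posting on near-identical schedules. The sketch below (names and thresholds are hypothetical, not from the paper) compares hourly posting-time histograms between accounts and flags pairs whose rhythms are suspiciously similar:

```python
# Illustrative "swarm scanner" heuristic (hypothetical, not from the Science paper):
# flag pairs of accounts whose hourly posting histograms are near-identical,
# a crude signal of coordinated, possibly automated, behaviour.
from collections import Counter
from itertools import combinations
import math

def hourly_histogram(post_hours, bins=24):
    """Normalised 24-bin histogram of the hours at which an account posts."""
    counts = Counter(h % bins for h in post_hours)
    total = sum(counts.values()) or 1
    return [counts.get(h, 0) / total for h in range(bins)]

def cosine(a, b):
    """Cosine similarity between two histograms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def flag_coordinated(accounts, threshold=0.95):
    """Return pairs of account IDs whose posting rhythms exceed the threshold."""
    hists = {aid: hourly_histogram(hours) for aid, hours in accounts.items()}
    return [(a, b) for a, b in combinations(hists, 2)
            if cosine(hists[a], hists[b]) >= threshold]

# Two bots posting on the same fixed schedule get flagged; a human with a
# scattered posting pattern does not.
accounts = {
    "bot1": [1, 2, 3] * 10,
    "bot2": [1, 2, 3] * 10,
    "human": [5, 14, 22, 9, 17],
}
print(flag_coordinated(accounts))  # [('bot1', 'bot2')]
```

Real detection systems would combine many such signals (content similarity, account age, network structure); posting rhythm alone is easy for the adaptive bots the authors describe to evade, which is precisely why they call for dedicated tooling.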