
AI Swarms Pose New Threat to Information Integrity on Social Media

“We are moving into a new phase of informational warfare on social media platforms where technological advancements have made the classic bot approach outdated,” warns Jonas Kunst, professor of communication at BI Norwegian Business School and co-author of a new research report examining emerging digital threats.

The report outlines how artificial intelligence systems could work together in “swarms” to create convincing campaigns of misinformation that would be nearly impossible to detect with current monitoring methods.

For disinformation experts, the findings paint a disturbing picture of future manipulation campaigns. Nina Jankowicz, former Biden administration disinformation czar and current CEO of the American Sunlight Project, describes the scenario in stark terms: “What if AI wasn’t just hallucinating information, but thousands of AI chatbots were working together to give the guise of grassroots support where there was none? That’s the future this paper imagines—Russian troll farms on steroids.”

Unlike conventional bot networks that can often be identified through repetitive patterns and behaviors, these AI swarms would generate content indistinguishable from authentic human communication. The sophistication of these systems raises concerns about whether they might already be operational.

“Because of their elusive features to mimic humans, it’s very hard to actually detect them and to assess to what extent they are present,” Kunst explains. “We lack access to most social media platforms because platforms have become increasingly restrictive, so it’s difficult to get insight there. Technically, it’s definitely possible. We are pretty sure that it’s being tested.”

Kunst believes any such systems currently in operation are likely still running under human oversight while in development. While he doesn’t expect significant deployment during the 2026 U.S. midterm elections, he warns they could become a major disruptive force by the 2028 presidential race.

The threat extends beyond simply creating convincing fake accounts. The researchers highlight how advanced AI systems could map social networks at unprecedented scale, allowing operators to precisely target specific communities for maximum impact.

“Equipped with such capabilities, swarms can position for maximum impact and tailor messages to the beliefs and cultural cues of each community, enabling more precise targeting than that with previous botnets,” the report states.

Perhaps most concerning is the potential for these systems to continuously improve through real-time feedback. “With sufficient signals, they may run millions of micro A/B tests, propagate the winning variants at machine speed, and iterate far faster than humans,” the researchers note. This self-improving capability would make such campaigns increasingly effective over time.
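The “millions of micro A/B tests” the researchers describe amount to a multi-armed bandit loop: post competing message variants, observe engagement, and shift traffic toward whatever resonates. A minimal epsilon-greedy sketch illustrates the dynamic; the variant names, engagement rates, and parameters here are hypothetical stand-ins, not figures from the report:

```python
import random

def ab_test_loop(variant_rates, rounds=10_000, epsilon=0.1, seed=42):
    """Epsilon-greedy bandit over message variants.

    variant_rates: hidden per-variant engagement probabilities,
    standing in for real audience feedback signals.
    Returns how many posts each variant received."""
    rng = random.Random(seed)
    n = len(variant_rates)
    wins = [0] * n    # observed engagements per variant
    trials = [0] * n  # posts per variant
    for _ in range(rounds):
        if rng.random() < epsilon:
            # Explore: try a random variant occasionally.
            arm = rng.randrange(n)
        else:
            # Exploit: repeat the best-performing variant so far.
            arm = max(range(n),
                      key=lambda i: wins[i] / trials[i] if trials[i] else 0.0)
        trials[arm] += 1
        if rng.random() < variant_rates[arm]:
            wins[arm] += 1
    return trials

# Three hypothetical variants; the second resonates most with the audience.
allocation = ab_test_loop([0.02, 0.08, 0.04])
```

After enough rounds, nearly all posts flow to the highest-engagement variant, with no human ever inspecting the results; scaling the same loop across thousands of accounts and communities is the self-improvement the report warns about.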

To counter this emerging threat, the report proposes establishing an “AI Influence Observatory” comprising academic groups and non-governmental organizations. This coalition would work to “standardize evidence, improve situational awareness, and enable faster collective response rather than impose top-down reputational penalties.”

Notably absent from this proposed observatory are social media platform representatives. The researchers believe these companies have little motivation to identify and address AI swarms because their business models prioritize engagement metrics above all else.

“Let’s say AI swarms become so frequent that you can’t trust anybody and people leave the platform,” Kunst explains. “Of course, then it threatens the model. If they just increase engagement, for a platform it’s better to not reveal this, because it seems like there’s more engagement, more ads being seen, that would be positive for the valuation of a certain company.”

Government intervention also appears unlikely, according to experts quoted in the report. “The current geopolitical landscape might not be friendly for ‘Observatories’ essentially monitoring online discussions,” notes one researcher, while Jankowicz adds: “What’s scariest about this future is that there’s very little political will to address the harms AI creates, meaning [AI swarms] may soon be reality.”

The report underscores a growing gap between rapidly advancing AI capabilities and the mechanisms needed to ensure these technologies aren’t weaponized for mass manipulation. As social media continues to shape public discourse and political processes worldwide, the emergence of AI swarms threatens to further erode trust in online information ecosystems.



© 2026 Disinformation Commission LLC. All rights reserved.