AI-Generated Bots Outpace Human Traffic, Pose Risk to Democratic Discourse

A groundbreaking social media simulation has revealed how easily AI-powered bots can influence election outcomes, raising serious concerns about the integrity of online information in democratic societies.

The “Capture the Narrative” project, described as the world’s first social media “wargame,” demonstrated how small teams using consumer-grade generative AI tools could build bot networks capable of flooding platforms with content and swaying voter behavior.

Researchers from the University of New South Wales invited 108 teams from 18 Australian universities to develop AI-powered bots aimed at influencing a fictional presidential election on a simulated social media platform. Teams backed either “Victor,” a left-leaning candidate, or “Marina,” a right-leaning alternative.

Over a four-week period, the AI bots generated more than 60 percent of the platform’s content—over seven million posts—with both sides using highly engaging and sometimes false narratives to influence simulated voters programmed to behave like real people.

The results were striking: Victor narrowly won the election. When researchers reran the simulation without bot interference, Marina won with a swing of 1.78 percent, demonstrating that the bot campaign had materially changed the election outcome.

“It’s scarily easy to create misinformation, easier than truth. It’s really difficult to distinguish between genuine and manufactured posts,” one student participant remarked after the contest. Another admitted, “We needed to get a bit more toxic to get engagement.”

The simulation comes amid growing real-world concerns about AI-generated misinformation. Following a recent attack at Bondi Beach that killed 15 people, AI-generated deepfakes circulated portraying human rights lawyer Arsen Ostrovsky—a genuine survivor of the attack who appeared in media interviews covered in blood—as a “crisis actor” made up by makeup artists to look injured. AFP’s fact-checking unit confirmed the images were AI-generated fakes distributed with false narratives.

The issue extends beyond social media manipulation. According to the 2025 ‘Bad Bot Report’ by cybersecurity firm Imperva, automated traffic has surpassed human online visits for the first time in a decade, accounting for approximately 51 percent of all web activity in 2024. Malicious automated programs—“bad bots”—made up about 37 percent of internet traffic, a significant increase from previous years.

These sophisticated bots aren’t limited to generating text or social media posts. They target application programming interfaces (APIs), exploit gaps in business logic, and facilitate fraud across financial services, telecommunications, and other industries.

The surge in bot activity presents both technical and social challenges. As automation increases across the internet, distinguishing human-generated content from machine-generated posts becomes increasingly difficult, eroding trust in online information and complicating verification efforts.

Fact-checking organizations have documented numerous instances of AI-generated misinformation during global events, from elections to major crises, highlighting the difficulty of tracing and debunking false content once it spreads.

Experts warn that the proliferation of AI bots and automated content poses significant challenges for democratic discourse. Generative AI can rapidly produce realistic text, images, and videos that blur the line between truth and fiction, while bot networks can amplify this content to simulate consensus or polarize discussion.

Studies have found that even when users recognize information as false, exposure can still influence their perceptions and beliefs, eroding confidence in legitimate sources. Researchers describe a “liar’s dividend,” where the mere possibility of fabricated content leads users to dismiss authentic posts as fakes.

The findings from the “Capture the Narrative” simulation have prompted calls for both greater AI regulation and improved digital literacy so citizens can better recognize and critically evaluate AI-generated misinformation. Understanding how bots operate and how content can be manipulated is increasingly essential to maintaining informed public debate.

Critics note that even sophisticated fact-checking and verification tools struggle to keep pace with the volume and evolution of AI-generated content, highlighting the importance of public awareness and systemic responses to mitigate misinformation risks in the digital age.

As AI capabilities continue to expand, so does the potential for misuse in cyberattacks, misinformation campaigns, and automated manipulation of public opinion. Experts emphasize that targeted education in AI technology, policy, and civic engagement will be necessary to address these complex challenges in the years ahead.



© 2026 Disinformation Commission LLC. All rights reserved.