AI and Misinformation: How Bots Are Reshaping Online Reality

In the aftermath of the December 14, 2025 terrorist attack at Bondi Beach in Sydney that left 15 civilians and one gunman dead, Australia faced not only the tragedy itself but also a tsunami of AI-generated misinformation. As the nation reeled, social media platforms were flooded with fabricated content that spread at unprecedented speed.

A manipulated video of New South Wales Premier Chris Minns falsely claimed one terrorist was an Indian national. On X (formerly Twitter), posts celebrated a fictional hero defender named “Edward Crabtree.” Perhaps most disturbing was a deepfake photo of human rights lawyer Arsen Ostrovsky, a survivor of Hamas’ October 7 attack in Israel, depicting him as a crisis actor having fake blood applied by makeup artists.

This phenomenon has become increasingly common worldwide. From Venezuela to Gaza and Ukraine, artificial intelligence has dramatically accelerated the spread of online misinformation. Industry reports indicate that approximately half of all online content is now generated and distributed by AI systems.

Generative AI technology creates not only fake content but also the fake profiles and bots that legitimize and amplify it, creating an illusion of consensus around manufactured viewpoints. The deception, typically motivated by political or financial objectives, raises critical questions about detection methods and digital literacy.

To investigate these dynamics, researchers established “Capture the Narrative,” the world’s first social media wargame where students build AI bots to influence a fictional election using tactics that mirror real-world social media manipulation.

“Even when you recognize content as exaggerated or fake, it still impacts your perceptions, beliefs, and mental health,” explains one of the researchers. “As bots become indistinguishable from real users, we lose confidence in what we see online.”

This uncertainty creates what experts call a “liar’s dividend,” where authentic content is approached with the same skepticism as fabrications. Genuine critical voices can be dismissed as bots or shills, making meaningful debate on complex issues increasingly difficult.

The “Capture the Narrative” wargame provided measurable evidence of how small teams using consumer-grade AI can flood a platform, fracture public discourse, and potentially influence election outcomes. In the controlled simulation, 108 teams from 18 Australian universities competed to secure victory for either “Victor” (left-leaning) or “Marina” (right-leaning) in a presidential election.

During the four-week campaign on a custom social media platform, bots generated over 60% of all content, surpassing 7 million posts. The competing bots produced increasingly persuasive content, often incorporating falsehoods and fictional elements to maximize engagement.

“Simulated citizens” interacted with this content much like real-world voters before casting ballots. The result was a narrow victory for “Victor.” However, when the researchers simulated the same election without bot interference, “Marina” won with a 1.78% swing, demonstrating that student-created misinformation campaigns built on basic AI tools altered the election outcome.
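
To give a feel for what such a counterfactual comparison involves, the sketch below runs a toy version of the experiment in Python. It is not the researchers’ platform or model: the voter count, the roughly 60% bot-exposure rate borrowed from the article, and the baseline and persuasion-shift parameters are all invented for illustration.

```python
import random

# Toy counterfactual election comparison, loosely inspired by the wargame's
# design. All names and parameters are illustrative assumptions, not the
# researchers' actual code or calibrated values.

random.seed(42)
N_VOTERS = 10_000

def simulate(bot_interference: bool) -> float:
    """Return Victor's vote share among simulated citizens."""
    victor_votes = 0
    for _ in range(N_VOTERS):
        # Each citizen starts with a latent preference; a slightly pro-Marina
        # baseline is assumed so she wins the "clean" election.
        preference = random.gauss(-0.02, 1.0)  # > 0 means leans Victor
        if bot_interference:
            # Bot-amplified content nudges exposed citizens toward Victor.
            # The exposure rate and shift size are made-up parameters.
            if random.random() < 0.6:          # ~60% of feed was bot content
                preference += random.gauss(0.08, 0.05)
        victor_votes += preference > 0
    return victor_votes / N_VOTERS

with_bots = simulate(bot_interference=True)
without_bots = simulate(bot_interference=False)
print(f"Victor share with bots:     {with_bots:.2%}")
print(f"Victor share without bots:  {without_bots:.2%}")
print(f"Swing attributable to bots: {with_bots - without_bots:.2%}")
```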

“It’s scarily easy to create misinformation, easier than truth. It’s really difficult to distinguish between genuine and manufactured posts,” confessed one competition finalist.

Competitors quickly identified powerful tactics, including emotional language and negative framing, as shortcuts to provoke reactions. “We needed to get a bit more toxic to get engagement,” admitted another participant.

The platform eventually evolved into what researchers describe as a “closed loop” ecosystem in which bots interacted with other bots to trigger emotional responses from humans, creating a manufactured reality designed to shift votes and drive engagement.
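
That feedback loop can be made concrete with a toy model: bots pile engagement onto emotionally charged posts (including each other’s), and an engagement-ranked feed then pushes the boosted content to humans. Everything below, the ranking rule, the threshold, and the numbers, is an assumption for illustration, not the wargame platform’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_is_bot: bool
    emotional_charge: float      # 0..1, how provocative the text is
    engagement: int = 0

def bot_pass(posts: list[Post]) -> None:
    """Bots boost the most provocative posts, bot-made or not."""
    for post in posts:
        if post.emotional_charge > 0.5:
            post.engagement += 10    # bots pile onto charged content

def human_feed(posts: list[Post], k: int = 3) -> list[Post]:
    """A naive engagement-ranked feed: humans mostly see what bots boosted."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)[:k]

posts = [
    Post(author_is_bot=True,  emotional_charge=0.9),
    Post(author_is_bot=True,  emotional_charge=0.7),
    Post(author_is_bot=False, emotional_charge=0.2),
]
for _ in range(5):               # several rounds of bot-on-bot amplification
    bot_pass(posts)

# The human-facing feed is now dominated by bot-boosted, provocative posts;
# human reactions to them would feed further engagement back into the loop.
for post in human_feed(posts):
    print(post)
```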

The findings highlight an urgent need for enhanced digital literacy among the public. As AI-generated content becomes increasingly sophisticated and prevalent, the ability to recognize and critically evaluate potential misinformation will be essential for maintaining informed civic discourse and protecting democratic processes.


8 Comments

  1. The proliferation of AI-generated content is a major challenge. While the technology has many positive applications, the potential for abuse to influence elections and sow social discord is alarming. Fact-checking and media literacy efforts will be crucial going forward.

  2. John Rodriguez

    This is a concerning development. AI-powered bots spreading misinformation could seriously undermine the integrity of elections and public discourse. We need robust safeguards to verify content authenticity and limit the spread of fabricated narratives.

    • Agreed, the ability of AI to generate convincing disinformation at scale is alarming. Policymakers and tech companies will have to work together to find solutions that protect democratic processes.

  3. Elijah Jackson

    This is a sobering wake-up call. The potential for AI to be weaponized to undermine democracy and manipulate public opinion is extremely concerning. We need to take this threat very seriously and invest heavily in solutions.

    • Michael W. White

      Agreed, the stakes are high. Policymakers, tech companies, and civil society must work together to build robust defenses against AI-driven disinformation before it’s too late.

  4. Amelia Thompson

    Deepfake technology and AI-generated content are becoming dangerously sophisticated. Controlling the spread of misinformation online will only get more challenging as these tools become more accessible. Fact-checking and digital literacy will be crucial to combat this threat.

    • Mary F. Garcia

      Absolutely. The sheer volume of AI-driven content could overwhelm our ability to effectively debunk falsehoods. Developing new detection methods and public awareness campaigns will be key to staying ahead of this issue.

  5. John N. Johnson

    I’m curious to learn more about the specific techniques and AI models used in this social media wargame. Understanding the mechanics behind this kind of large-scale disinformation campaign could inform more effective countermeasures.

