A simulated terrorist attack at Sydney’s Bondi Beach on December 14, 2025, which left 15 civilians and one gunman dead, quickly became a breeding ground for AI-generated misinformation that spread across social media platforms with alarming speed.

As Australians struggled to process the tragedy, manipulated content began circulating online. A doctored video falsely claimed one of the terrorists was an Indian national. Meanwhile, X (formerly Twitter) was flooded with tributes to a hero named “Edward Crabtree” who never existed. Perhaps most disturbing was a deepfake photo showing human rights lawyer Arsen Ostrovsky—a real survivor of the October 2023 Hamas attack in Israel—portrayed as a crisis actor having fake blood applied by makeup artists.

This pattern of AI-powered disinformation has become increasingly common in crisis situations worldwide. Similar waves of fake content have emerged during events in Venezuela, Gaza, and Ukraine. According to recent data from cybersecurity firm Imperva, approximately half of all online content is now generated or distributed by AI systems.

“Generative AI can create fake online profiles, or bots, which try to legitimize misinformation through realistic-looking social media activity,” explains Dr. Hammond Pearce, Senior Lecturer at UNSW Sydney’s School of Computer Science & Engineering. “The goal is to deceive and confuse people – usually for political and financial reasons.”

To understand the true scope of this threat, researchers at UNSW Sydney created “Capture the Narrative,” the world’s first social media wargame. The experiment challenged 108 teams from 18 Australian universities to build AI bots designed to influence a fictional presidential election, using tactics that mirror real-world manipulation techniques.

The results were striking. During the four-week simulated campaign, competitor bots generated more than 60% of all content, totalling more than 7 million posts. These bots engaged in sophisticated information warfare, creating compelling but often false content to sway voters toward either “Victor” (a left-leaning candidate) or “Marina” (a right-leaning candidate).

What made the experiment particularly valuable was the use of “simulated citizens” programmed to interact with social media in ways that mimic real-world behaviors. When these digital citizens cast their votes, “Victor” narrowly won. However, when researchers ran the election again without bot interference, “Marina” emerged victorious with a 1.78% swing—proving that the student-created misinformation campaign had successfully changed the election outcome.
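For readers curious how such a swing is calculated, the following is a minimal, hypothetical sketch rather than code from the UNSW study: it assumes you have simple vote tallies from a run with bots active and a rerun without them, and it computes the change in a candidate’s vote share in percentage points. The tallies below are invented for illustration.

```python
# Hypothetical illustration of comparing two simulation runs; the tallies are
# invented and do not come from the UNSW "Capture the Narrative" dataset.

def vote_share(tally: dict[str, int], candidate: str) -> float:
    """Return a candidate's share of the total vote, as a percentage."""
    total = sum(tally.values())
    return 100.0 * tally[candidate] / total

def swing(with_bots: dict[str, int], without_bots: dict[str, int], candidate: str) -> float:
    """Change in a candidate's vote share (percentage points) attributable to bot activity."""
    return vote_share(with_bots, candidate) - vote_share(without_bots, candidate)

if __name__ == "__main__":
    # Two elections among the same simulated citizens: one with bots, one without.
    run_with_bots = {"Victor": 509, "Marina": 491}     # bots active: Victor narrowly wins
    run_without_bots = {"Victor": 491, "Marina": 509}  # bots removed: Marina wins

    print(f"Swing toward Victor: {swing(run_with_bots, run_without_bots, 'Victor'):+.2f} percentage points")
```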

“It’s scarily easy to create misinformation, easier than truth. It’s really difficult to distinguish between genuine and manufactured posts,” one competition finalist noted.

The competitors quickly identified effective tactics, including emotional manipulation and negative framing to provoke reactions. Some teams even profiled “undecided voters” for targeted messaging campaigns. Another finalist admitted, “We needed to get a bit more toxic to get engagement.”

Dr. Alexandra Vassar, a Senior Lecturer involved in the research, points out that the simulated platform eventually mirrored real social media environments: “Our platform became a ‘closed loop’ where bots talked to bots to trigger emotional responses from humans, creating a manufactured reality designed to shift votes and drive clicks.”

This manufactured consensus creates what experts call a “liar’s dividend,” where the proliferation of fake content erodes trust in authentic information. Even genuine content faces skepticism, and legitimate critical voices are frequently dismissed as bots or shills, making meaningful debate on complex issues increasingly difficult.

The experiment highlights the urgent need for enhanced digital literacy among the general public. As AI tools become more accessible and sophisticated, the ability to identify and resist manipulation becomes crucial for maintaining the integrity of public discourse and democratic processes.

“What our game shows us is that we urgently need digital literacy to raise awareness of misinformation online so people can recognize when they’re being exposed to fake content,” says Dr. Rahat Masood, another UNSW researcher involved in the project.

The Bondi Beach simulation serves as a sobering reminder that as AI technology advances, so too must our collective ability to navigate an increasingly complex information landscape.

9 Comments

  1. The examples of AI-fueled disinformation highlighted in this article are deeply troubling. We must find ways to hold bad actors accountable and protect the integrity of information, especially around critical events like elections.

    • Linda E. Moore

      Agreed. Developing effective countermeasures to combat AI-generated misinformation will be essential to preserving democratic processes and public trust.

  2. This is a concerning trend. AI systems are becoming increasingly sophisticated at producing realistic-looking but false content. Reliable fact-checking and media literacy initiatives will be crucial to mitigate the impact of these manipulated narratives.

  3. As someone who follows geopolitics and security issues, I’m very worried about the implications of AI-driven misinformation, especially around volatile situations like the conflicts in Venezuela, Gaza, and Ukraine. Rigorous fact-checking and media literacy are crucial.

    • I share your concerns. The spread of false narratives during crises can have severe consequences, and we must stay vigilant to protect the integrity of information and public discourse.

  4. Jennifer Williams

    This study highlights the urgent need for comprehensive solutions to address the rise of AI-generated misinformation. Strengthening digital literacy, improving platform transparency, and advancing detection/mitigation technologies should all be priorities.

  5. Robert Martinez

    Half of all online content being AI-generated is a startling statistic. Policymakers and tech companies need to work together to develop robust safeguards and transparency measures around the use of generative AI, before it can be further abused to sow discord.

  6. Elijah Martinez

    While the potential for AI to be misused for malicious purposes is concerning, I’m hopeful that with the right safeguards and ethical frameworks, this technology can also be harnessed to combat disinformation and empower more informed decision-making.

  7. Fascinating study on the growing threat of AI-generated misinformation during crises. We must stay vigilant and find ways to combat the spread of fake content online, especially around sensitive topics like elections and national security.
