In a disturbing demonstration of how artificial intelligence can manipulate public discourse, researchers have revealed that AI-generated misinformation swung a simulated presidential election by nearly two percentage points, enough to change the winner.
The findings come from “Capture the Narrative,” an unprecedented social media wargame conducted by researchers at UNSW Sydney, where student teams used AI tools to influence a fictional election. The experiment provides concrete evidence of how easily digital misinformation campaigns can be deployed—and their potential real-world impact.
“It’s scarily easy to create misinformation, easier than truth,” said one competition finalist. “It’s really difficult to distinguish between genuine and manufactured posts.”
The simulation’s results mirror growing concerns about AI’s role in real-world events. Following the April 2024 stabbing attack at Westfield Bondi Junction in Sydney, which left six people dead along with the attacker, social media platforms were flooded with AI-generated falsehoods. These included manipulated videos of political figures, fabricated “hero” narratives, and deepfake images of real people portrayed as crisis actors.
In the UNSW experiment, 108 teams from 18 Australian universities competed to secure victory for either “Victor,” a left-leaning candidate, or “Marina,” a right-leaning candidate. The competition ran on a custom social media platform populated with sophisticated “simulated citizens” programmed to behave like real voters.
The results were striking. Over a four-week period, competitor-built bots generated more than seven million posts, accounting for over 60 percent of all platform content. These bots engaged in various manipulation tactics, including spreading falsehoods, creating fictional narratives, and using emotional language to trigger reactions.
When the simulated election concluded, “Victor” narrowly won. However, when researchers reran the election without the influence campaigns, “Marina” emerged victorious by a margin of 1.78 percentage points, demonstrating the campaigns’ decisive impact on the outcome.
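For readers who want to see the arithmetic, the following is a minimal, purely illustrative Python sketch of how a small average persuasion effect can flip a close two-candidate race. It is not the researchers’ simulation: the voter count, the baseline probability (chosen only so the campaign-free margin lands near the reported 1.78 points), and the two-point persuasion shift are all assumptions made for illustration.

```python
import random

random.seed(42)

# Toy illustration only: a simplified agent-based rerun of a two-candidate
# election, loosely inspired by the UNSW setup described above. All numbers
# and mechanics here are illustrative assumptions, not the researchers'
# actual simulation.

N_VOTERS = 100_000

def run_election(influence_shift: float) -> float:
    """Return Marina's margin in percentage points (negative means Victor wins).

    Each simulated citizen leans toward Marina with a baseline probability;
    `influence_shift` models the average persuasion effect of the bot
    campaigns on that probability.
    """
    marina_votes = 0
    for _ in range(N_VOTERS):
        # Baseline lean: Marina narrowly ahead, chosen to mirror the
        # 1.78-point margin reported for the campaign-free rerun.
        p_marina = 0.5089 - influence_shift
        if random.random() < p_marina:
            marina_votes += 1
    victor_votes = N_VOTERS - marina_votes
    return 100 * (marina_votes - victor_votes) / N_VOTERS

print(f"Without influence campaigns: Marina by {run_election(0.0):+.2f} pts")
print(f"With influence campaigns:    Marina by {run_election(0.02):+.2f} pts")
```

The point of the sketch is simply that when the underlying margin is under two points, a persuasion effect of the same order is enough to flip the result, which is why small, targeted shifts among undecided voters can be decisive.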
What makes the findings particularly concerning is how accessible the tools have become. Students with minimal training, using only consumer-grade AI technology, were able to build sophisticated influence campaigns capable of swaying election results.
The competition revealed several effective tactics. Teams identified undecided voters for micro-targeting and discovered that negative framing and emotional language generated more engagement. “We needed to get a bit more toxic to get engagement,” admitted one finalist.
The platform eventually evolved into what researchers described as a “closed loop” where bots interacted with other bots to provoke emotional responses from humans—creating a manufactured reality designed to shift votes and drive clicks.
This scenario mirrors what experts are increasingly observing in real digital environments. Recent estimates suggest around half of online content is now created or propagated by AI systems. These tools can generate convincing fake content and create realistic-looking social media profiles that lend credibility to misinformation.
The proliferation of such content creates what researchers call a “liar’s dividend”: once anything might be fake, even authentic information is met with skepticism, and bad actors can dismiss genuine evidence as fabricated. This erodes public confidence in online discourse and makes legitimate debate increasingly difficult, as authentic but critical voices risk being dismissed as bots or fakes.
The research underscores the urgent need for improved digital literacy. As AI tools become more sophisticated and accessible, the ability to recognize and filter misinformation becomes increasingly crucial for maintaining informed democratic societies.
From political elections to international conflicts in places like Gaza and Ukraine, AI-powered misinformation campaigns continue to shape public perception, often in service of political or financial interests. The UNSW experiment provides valuable insights into how these campaigns operate and the relative ease with which they can be deployed.
As one participant noted, the experience revealed how the online ecosystem increasingly favors emotional manipulation over factual information—a reality that poses significant challenges for media consumers, technology platforms, and democratic institutions alike.