AI-Powered Bots Successfully Manipulate Simulated Election in Groundbreaking Social Media Experiment
A pioneering social media wargame has demonstrated the alarming ease with which artificial intelligence can flood digital platforms with misinformation and potentially sway election outcomes, raising urgent questions about the security of democratic processes worldwide.
The experiment, called “Capture the Narrative,” invited over 100 teams from 18 Australian universities to build AI-driven bots aimed at influencing a fictional presidential election within a controlled digital environment. Armed with only basic tutorials and consumer-grade AI tools, participants managed to generate millions of posts during a four-week campaign. By the experiment’s conclusion, bots had produced more than 60% of all content on the platform.
The impact proved decisive: simulated voters who consumed this content shifted their electoral choices significantly. In a telling demonstration of AI’s influence, researchers ran the same election twice – once with the misinformation present and once without – and a different candidate won each time.
“What we found particularly concerning was how easily emotional manipulation could be scaled using these technologies,” said one of the research leads, who requested anonymity due to the sensitive nature of the findings. “Teams quickly discovered that negative, emotionally charged messaging generated the most engagement and could dominate feeds within hours.”
This experiment comes at a crucial time. When a deadly terrorist attack struck Bondi Beach on December 14, 2025, leaving fifteen civilians and the gunman dead, Australia witnessed firsthand how quickly AI-generated misinformation can spread during a crisis. Within hours of the tragedy, fabricated content was circulating widely across social media platforms.
A doctored video falsely showed New South Wales Premier Chris Minns claiming one of the attackers was an Indian national – a fabrication seemingly designed to inflame tensions between communities. Meanwhile, users celebrated a fictional “hero defender” named Edward Crabtree, while a sophisticated deepfake depicted human rights lawyer Arsen Ostrovsky as a crisis actor, complete with staged blood and makeup.
Digital security experts warn that these incidents represent just a glimpse of what is possible with today’s AI capabilities. Tools like ChatGPT can produce convincingly realistic text and images within seconds, and related generative systems now do the same for video. When deployed alongside automated social media accounts, false narratives can be amplified at an unprecedented scale, creating an illusion of widespread consensus where none exists.
“The most troubling aspect is the economics,” explains Dr. Samantha Chen, a digital ethics researcher at the Australian National University who was not involved in the experiment. “Creating misinformation has become incredibly cheap and fast, often outperforming factual content in both reach and engagement. Traditional fact-checking simply cannot keep pace.”
Participants in the wargame reported spending less than $500 to create bot networks capable of reaching millions of simulated users. Several teams noted they could produce and distribute dozens of fabricated news stories, manipulated images, and even basic deepfake videos with just a few hours of work.
The long-term societal implications appear particularly worrisome. As AI-generated content becomes increasingly difficult to distinguish from reality, public trust in authentic information continues to erode. Legitimate voices risk being dismissed as fake, while actual misinformation thrives in an atmosphere of perpetual doubt.
“We’re entering an era where technological safeguards alone won’t be sufficient,” says media literacy advocate Michael Thornton. “Without widespread digital literacy and critical thinking skills across the population, societies remain vulnerable to sophisticated manipulation campaigns.”
The researchers behind “Capture the Narrative” have submitted their findings to government agencies and social media companies, recommending urgent development of detection tools and stronger platform policies. However, they emphasize that technological solutions must be complemented by educational initiatives that teach citizens how to recognize and resist information manipulation.
As global election cycles approach, the wargame’s lessons offer a stark warning: without comprehensive preparation, democracies face unprecedented challenges in an information landscape where falsehoods can be manufactured and distributed faster than the truth.