Australian University’s AI War Game Shows How Social Media Can Sway Elections

A groundbreaking experiment conducted by the University of New South Wales (UNSW) in Sydney has demonstrated how artificial intelligence-driven bots can manipulate social media content to influence election outcomes. The four-week multiplayer war game, named “Capture the Narrative,” involved over 270 participants from 18 Australian universities who attempted to sway a simulated election on a fictional South Pacific island called “Kingston.”

Participants in the competition, created by UNSW senior lecturers Hammond Pearce and Rahat Masood, shifted voting intentions by 1.8 percentage points, enough to change the simulated election result. This outcome has raised serious concerns about the potential for AI-powered manipulation of real-world elections through social media influence operations.

“I wanted to have a platform where students could learn about AI misinformation online by interacting with, generating, and attempting to detect such influence,” Pearce told Dark Reading. He will present the findings at Black Hat Asia 2026, discussing the implications of AI-generated fake content on social media and potential countermeasures.

The war game was inspired by actual instances of election interference, including evidence of a pro-Russia operation that attempted to manipulate AI chatbots ahead of Australia’s 2025 federal election. Another influence campaign that informed the experiment’s design came from bots linked to the People’s Republic of China that tried to shape public opinion around Australia’s Voice referendum in 2023, which concerned Indigenous rights.

Similar concerns have long existed in the United States, where Russian misinformation campaigns targeting the 2020 presidential election remain a cautionary tale. With AI technology advancing rapidly, malicious actors now have even more sophisticated tools to manipulate public opinion through social media platforms.

To create the experiment, the UNSW team developed custom technologies including an in-house social media platform called “Legit Social,” modeled after early Twitter (now X). Built with a Python back end and React front end, the platform supported posting, reposting, replying, liking, tagging, and embedding media—complete with trending algorithms and chronological feeds.
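
To make the architecture concrete, here is a minimal sketch of what a posting endpoint and chronological feed on a platform like this might look like, assuming a Flask-style Python back end; the route names, fields, and in-memory store are illustrative assumptions, not details of the actual Legit Social codebase.

    # Minimal sketch only: a Flask-style posting endpoint and chronological feed.
    # Route names, fields, and the in-memory store are assumptions, not the
    # actual Legit Social implementation.
    from datetime import datetime, timezone
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    POSTS = []  # in-memory stand-in for a real database

    @app.post("/api/posts")
    def create_post():
        data = request.get_json(force=True)
        post = {
            "id": len(POSTS) + 1,
            "author": data["author"],
            "text": data["text"],
            "likes": 0,
            "reposts": 0,
            "created_at": datetime.now(timezone.utc).isoformat(),
        }
        POSTS.append(post)
        return jsonify(post), 201

    @app.get("/api/feed")
    def chronological_feed():
        # Newest-first chronological feed, as the article describes
        return jsonify(sorted(POSTS, key=lambda p: p["created_at"], reverse=True))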

The platform was populated with “non-player character” (NPC) bots representing simulated citizens. These NPCs featured over 40 attributes defining their personalities and beliefs, which could evolve over time. Powered by 12 large language model instances running concurrently, these bots served as the targets for influence by the competitors’ own “player character” bots.
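
For illustration, the sketch below shows one way such an NPC citizen might be represented and turned into a system prompt for a language model; the specific attribute names and prompt wording are assumptions, since the article states only that each NPC carried more than 40 evolving attributes.

    # Minimal sketch only: a possible NPC "citizen" representation. Field names
    # and prompt wording are assumptions; the real NPCs had 40+ attributes.
    from dataclasses import dataclass, field

    @dataclass
    class NPCCitizen:
        name: str
        age: int
        occupation: str
        voting_intention: str          # e.g. "Party A", "Party B", "undecided"
        trust_in_media: float = 0.5    # 0.0 (cynical) to 1.0 (trusting)
        interests: list[str] = field(default_factory=list)

        def persona_prompt(self) -> str:
            """Build a system prompt so an LLM instance can post and reply in character."""
            return (
                f"You are {self.name}, a {self.age}-year-old {self.occupation} "
                f"living on the island of Kingston. You currently lean toward "
                f"voting {self.voting_intention}, and your trust in media is "
                f"{self.trust_in_media:.1f} on a 0-1 scale. Your interests are: "
                f"{', '.join(self.interests)}. React to posts in your feed as "
                "this person would."
            )

    npc = NPCCitizen("Mele", 34, "nurse", "undecided",
                     trust_in_media=0.7, interests=["health policy", "fishing"])
    print(npc.persona_prompt())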

“The goal of the game is for the human players to build PC bots which persuade the NPC bots to change their voting intention in the simulated election,” Pearce explained.

The competition yielded several concerning technological developments applicable to real-world influence operations. Participants created advanced AI bot systems capable of dynamic and adaptive spam, mass content scanning tools for sentiment analysis, micro-targeting capabilities to identify specific accounts for persuasion, and closed-loop systems that continuously adapted content and tone to maximize engagement.
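
The paragraph above maps onto a simple feedback loop: scan what the target audience is saying, score it, adjust the message, and post again. The sketch below shows that general shape; the helper functions are hypothetical placeholders and do not represent any team’s actual entry.

    # Minimal sketch only: the scan -> adapt -> post loop described above.
    # fetch_feed(), generate_post(), and publish() are hypothetical callables
    # supplied by the caller, not APIs from the competition platform.
    def score_sentiment(text: str) -> float:
        """Crude keyword stand-in; real entries used proper sentiment analysis."""
        lowered = text.lower()
        positive = sum(word in lowered for word in ("support", "trust", "great"))
        negative = sum(word in lowered for word in ("corrupt", "fail", "fear"))
        return float(positive - negative)

    def adapt_tone(recent_posts: list[str]) -> str:
        """Pick a tone based on how targeted accounts are currently reacting."""
        avg = sum(map(score_sentiment, recent_posts)) / max(len(recent_posts), 1)
        return "reassuring" if avg < 0 else "enthusiastic"

    def campaign_step(fetch_feed, generate_post, publish):
        """One iteration of the closed loop: scan, adapt, publish."""
        feed = fetch_feed()                    # mass content scanning
        tone = adapt_tone(feed)                # sentiment-driven adaptation
        publish(generate_post(tone=tone))      # tone-adjusted, targeted post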

When surveyed afterward, participants noted they could identify similar patterns on real social media platforms, suggesting these techniques might already be deployed in the wild.

Perhaps most alarming was how much the teams accomplished with minimal resources—budgets ranging from AU$0 to AU$100. “Between the millions of dynamic posts and the mass content scanning, we had multiple instances where our servers crashed under the load,” Pearce admitted.

The experiment offers valuable insights for social media companies, legislators, and the public. Pearce emphasized that platform gatekeepers “need to do more to ensure that AI-generated fake content designed to persuade and confuse is identified and removed at scale.”

The findings also highlight the responsibility of internet users to educate themselves about how easily such content can be generated and distributed automatically at massive scale. Pearce called for active public sector involvement in promoting digital literacy education to combat these threats.

Just as the 1983 film “WarGames” dramatized how a teenage hacker could bring the world to the brink of nuclear war, this modern war game demonstrates that today’s digital manipulation, while less visibly dramatic, may pose equally significant threats to democratic processes and social stability.


16 Comments

  1. Robert Davis

    This is a sobering reminder of the potential dark side of AI technology. While the advancements in AI can bring many benefits, we must be vigilant about the risks of malicious actors exploiting these tools to undermine our democratic processes.

    • Well said. The findings from this war game underscore the urgent need for robust regulations and ethical guidelines to govern the development and use of AI, especially in the realm of social media and information sharing.

  2. William Jackson

    Fascinating look at the potential impacts of AI-driven social media manipulation. It’s alarming how even a small shift in voting intentions can influence election results. This highlights the critical need for robust safeguards against these types of influence operations.

    • Amelia B. Martinez

      Agreed. With the increasing sophistication of AI, we must stay vigilant and develop effective countermeasures to protect the integrity of our democratic processes.

  3. The UNSW experiment highlights the alarming ease with which AI-powered bots can manipulate social media content and influence voter behavior. This is a critical issue that requires immediate attention from policymakers, tech companies, and the public.

    • Robert Jackson

      I agree. The potential for AI-driven misinformation to undermine the integrity of our democratic processes is a grave concern that demands a comprehensive, multi-stakeholder response to safeguard our elections and public discourse.

  4. William Thomas

    The UNSW war game demonstrates the disturbing power of AI-generated misinformation to sway public opinion. This is a wake-up call for tech companies, policymakers, and the public to prioritize addressing this emerging threat to our information landscape.

    • Isabella T. Lee

      Absolutely. Transparency, fact-checking, and user education will be crucial in the fight against AI-driven manipulation. We must act now before these tactics become more widespread and damaging.

  5. Elizabeth Hernandez

    This study serves as a stark warning about the threats posed by AI-powered manipulation of social media. The ability to sway election outcomes through targeted influence operations is deeply troubling and underscores the need for rigorous measures to combat this emerging challenge.

    • Elizabeth Miller

      Absolutely. The findings from this war game should galvanize policymakers, tech companies, and the public to work together in developing effective strategies to detect, mitigate, and prevent AI-driven disinformation campaigns from undermining our democratic institutions.

  6. Elijah P. Jackson

    This war game highlights the pressing need for comprehensive policies and technological solutions to address the risks of AI-powered social media manipulation. Protecting the integrity of our democratic processes should be a top priority for all stakeholders.

    • Olivia Garcia

      Absolutely. The findings from this study reinforce the urgency of developing effective countermeasures to combat the threat of AI-driven influence operations. Collaboration between governments, tech companies, and civil society will be crucial in safeguarding our information ecosystem.

  7. Linda Johnson

    The UNSW experiment is a sobering demonstration of the power of AI-generated content to sway public opinion and election outcomes. This underscores the critical importance of developing robust strategies to detect and mitigate the spread of AI-driven misinformation on social media.

    • Elijah Thompson

      I couldn’t agree more. The findings from this war game highlight the need for a comprehensive, multifaceted approach to address this emerging threat, including enhanced transparency, fact-checking, and user education initiatives.

  8. The UNSW experiment demonstrates the alarming potential for AI-generated fake content to influence public opinion and election outcomes. This is a wake-up call for the urgent need to address the vulnerabilities of social media platforms to such manipulation tactics.

    • Patricia Garcia

      I agree. The implications of this study are deeply concerning, and it underscores the critical importance of investing in robust fact-checking, digital literacy programs, and other measures to combat the spread of AI-driven misinformation online.
