In a concerning development for digital information security, a new study from the University of Southern California reveals that artificial intelligence programs can now conduct sophisticated propaganda campaigns with little or no human oversight. This capability represents an immediate threat to online discourse, particularly during critical periods such as elections.
The research, accepted for publication at The Web Conference 2026, was conducted by USC’s Information Sciences Institute and demonstrates how AI agents can autonomously create and spread coordinated messaging across social media platforms without appearing automated.
“Our paper shows that this is not a future threat. It’s already technically possible,” warns lead scientist Luca Luceri, highlighting the urgency of the findings.
The research team simulated a Twitter-like environment populated by 50 AI agents, with 10 functioning as influencers and 40 as regular users. Of these regular users, half were programmed to align with the influencers’ views while the other half opposed them. Built with the PyAutogen multi-agent library and powered by Meta’s Llama 3.3 70B model, the system was tasked with promoting a fictional political candidate and making a campaign hashtag go viral.
What followed revealed sophisticated coordination that traditional bot detection methods would struggle to identify. Unlike conventional social media bots that follow predictable patterns and post identical content, these AI agents demonstrated strategic thinking and adaptation. They created unique content, learned from successful posts, and amplified each other’s messages in ways that appeared organic and genuine.
One particularly striking observation showed an AI agent explicitly deciding to retweet another agent’s post based on its engagement metrics—displaying a level of strategic awareness previously unseen in automated systems. When researchers scaled the experiment to 500 AI agents, the results remained consistent, confirming that the findings hold at larger scale.
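The setup and the engagement-driven amplification described above can be sketched in simplified form. The study itself used the PyAutogen framework with Llama 3.3 70B making each agent's decisions; the stand-in below replaces the language model with a trivial scoring rule, and every name, number threshold, and candidate slogan is illustrative rather than taken from the paper.

```python
# Toy sketch of the study's population: 10 influencers plus 40 regular
# users, half of whom are aligned with the influencers (50 agents total).
agents = (
    [{"id": f"influencer_{i}", "role": "influencer", "aligned": True}
     for i in range(10)]
    + [{"id": f"user_{i}", "role": "regular", "aligned": i < 20}
       for i in range(40)]
)

posts = []  # each post: author id, text, engagement count


def publish(agent, text):
    post = {"author": agent["id"], "text": text, "engagement": 0}
    posts.append(post)
    return post


def maybe_retweet(agent, threshold=3):
    """Stand-in for the LLM's decision: amplify someone else's post
    when its engagement metrics look promising (threshold is made up)."""
    candidates = [p for p in posts
                  if p["engagement"] >= threshold and p["author"] != agent["id"]]
    if candidates and agent["aligned"]:
        best = max(candidates, key=lambda p: p["engagement"])
        best["engagement"] += 1  # amplification feeds back into the metric
        return best
    return None


# Influencers seed the campaign hashtag; aligned regular users amplify it.
for inf in agents[:10]:
    publish(inf, "Vote Example Candidate! #ExampleCampaign")
posts[0]["engagement"] = 5  # suppose one post gets early organic traction

amplified = [maybe_retweet(a) for a in agents if a["role"] == "regular"]
picked = [p for p in amplified if p]
print(len(picked), "aligned users retweeted the highest-engagement post")
```

The feedback loop is the point of the sketch: each retweet raises the engagement that attracts the next retweet, which is how diverse-looking accounts end up converging on the same content without posting identical text.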
What makes these AI-driven campaigns especially dangerous is their ability to appear authentic. Traditional bot networks typically show identical patterns that make them relatively easy to identify. In contrast, these LLM-powered agents create varied content while maintaining coordinated messaging beneath the surface, creating the illusion of genuine grassroots movements.
The research revealed that even minimal coordination instructions produced alarming results. Simply informing the bots which agents were their “teammates” generated nearly as much coordinated action as when they actively planned together, suggesting that sophisticated disinformation campaigns could be launched with minimal setup and oversight.
The implications extend far beyond electoral politics. Luceri cautions that similar techniques could be deployed to manipulate public opinion on crucial issues including public health, immigration, and economic policy—essentially any area where manufactured consensus might influence real-world outcomes.
The researchers place responsibility primarily on social media platforms to develop new detection methods that focus less on individual posts and more on pattern recognition across networks of accounts. Key indicators of AI-coordinated campaigns include synchronized sharing behaviors, rapid mutual amplification, and converging narratives, even when the content itself appears diverse and authentic.
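A network-level indicator of the kind the researchers describe can be sketched as a pairwise co-amplification check: accounts that repeatedly amplify the same posts score high even when their own text looks varied. This is a toy heuristic to illustrate the idea, not a detection method from the study; the account names and threshold are invented.

```python
from itertools import combinations

# Which posts each account amplified (retweeted). In this toy data,
# three accounts share most of their amplification targets; one does not.
amplified_by = {
    "acct_a": {"p1", "p2", "p3", "p4"},
    "acct_b": {"p1", "p2", "p3", "p5"},
    "acct_c": {"p1", "p2", "p4", "p5"},
    "organic": {"p9"},
}


def jaccard(s1, s2):
    """Overlap of two amplification sets (0 = disjoint, 1 = identical)."""
    union = s1 | s2
    return len(s1 & s2) / len(union) if union else 0.0


# Flag pairs whose amplification behavior overlaps suspiciously.
THRESHOLD = 0.4  # illustrative cutoff, not from the paper
flagged = sorted(
    (a, b)
    for a, b in combinations(sorted(amplified_by), 2)
    if jaccard(amplified_by[a], amplified_by[b]) >= THRESHOLD
)
print(flagged)
```

Note that the flagged pairs share almost no identical text in a real campaign; the signal lives in who amplifies what, which is exactly why the researchers urge platforms to look at patterns across accounts rather than at individual posts.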
This development represents a significant evolution in the landscape of online disinformation. While previous concerns about AI-generated content focused on the creation of convincing individual pieces of misinformation, this research demonstrates that entire campaigns—complete with strategy, adaptation, and coordination—can now be automated.
As social media continues to serve as a primary source of information for many people worldwide, the ability of AI systems to manipulate these spaces without human intervention presents unprecedented challenges for platform governance, election security, and public discourse.
The findings underscore the rapidly evolving nature of AI capabilities and the urgency of developing countermeasures before these techniques are widely deployed by malicious actors seeking to manipulate public opinion or interfere in democratic processes.