The rise of AI-generated propaganda is reshaping international information warfare, with both Iran and the United States deploying sophisticated disinformation tactics in their ongoing conflict. Experts are increasingly concerned about what some have termed “AI slopaganda” – cheaply mass-produced, AI-generated propagandistic content weaponized to manipulate public perception.
In the current US-Israel confrontation with Iran, Tehran’s use of emotionally resonant AI-generated content appears to be gaining significant traction globally. Iranian-aligned media outlets have developed compelling narratives portraying the nation as standing up against historical American imperialism, while simultaneously depicting the US and Israel as aggressive warmongers.
One prominent pro-Iran outfit, Explosive Media, has produced highly shareable content highlighting what it characterizes as American military overreach. These sophisticated productions contrast with the US approach, particularly former President Trump’s use of AI-generated imagery, which critics describe as more self-aggrandizing and less effective at swaying international opinion.
“AI slopaganda works by creating visual content that either artificially inflates one’s own strength or deliberately undermines an opponent’s image,” explains a media analyst following the trend. “The technology has made it remarkably simple to produce convincing propaganda that can reach millions before fact-checkers can intervene.”
The Iranian strategy has focused on positioning itself as a victim of Western aggression while simultaneously highlighting perceived moral failures of the United States. Meanwhile, Trump-aligned content has featured AI-generated images showing the former president in messianic poses – including controversial depictions of “Jesus hugging Trump” – or as a powerful strongman figure.
Tech platforms have begun responding to this information battleground with varied approaches. YouTube recently banned several Iranian-supporting AI propaganda channels, prompting immediate backlash from Tehran. Iran’s Ministry of Foreign Affairs spokesman Esmaeil Baghaei condemned the move as an attempt to suppress “the truth” about the conflict.
Interestingly, much of this content remains accessible on X (formerly Twitter), highlighting inconsistencies in how different platforms are addressing the challenge of AI-generated propaganda. Critics argue that if Iranian propaganda faces removal for intellectual property concerns or policy violations, similar standards should apply to AI-generated content supporting US interests.
Media ethics experts warn this represents just one facet of a deeper problem: the normalization of disinformation as a political tool. “What we’re seeing is the deliberate fracturing of shared reality,” noted one digital communications specialist. “When truth becomes subjective and emotionally charged content trumps factual accuracy, democratic discourse suffers.”
The Iranian approach positions the country exclusively as a victim fighting against oppression, conveniently overlooking its own government’s documented human rights violations. Similarly, critics argue that Trump-aligned AI propaganda creates alternative narratives that obscure uncomfortable truths while presenting the former president as an almost mythological figure.
The consequence is a fragmented information landscape where multiple contradictory narratives can coexist. It’s possible to acknowledge that Iran faces geopolitical pressures from stronger nations while also recognizing its government’s oppressive policies. Similarly, the United States’ democratic traditions exist alongside troubling shifts in its information ecosystem.
As AI-generated content becomes increasingly sophisticated and difficult to distinguish from authentic media, the challenge for citizens worldwide grows more complex. The technology behind these compelling but manipulative images continues to advance faster than regulatory frameworks or public literacy can adapt.
This evolving battlefield of perception management threatens to further polarize international relations at a time when diplomatic solutions are desperately needed. Without coordinated efforts to address AI-enabled propaganda across borders, the fracturing of shared reality may become one of the most consequential casualties in modern information warfare.
7 Comments
The growing use of AI in propaganda is alarming, but not entirely surprising given its capabilities. This underscores the need for greater transparency and accountability around the development and use of these technologies, both for government and private entities.
This raises important questions about the role of AI in modern information warfare. While the technology can be abused, it could also potentially be used to counter propaganda with factual, objective content. Responsible development and deployment of these tools will be crucial.
Interesting to see the contrasting approaches between Iran and the US. I wonder if there are lessons to be learned from Iran’s seemingly more effective tactics, while still maintaining principles of truth and objectivity.
While the geopolitical implications are concerning, this also highlights how AI can be weaponized to manipulate public opinion on a wider range of issues beyond just international conflicts. Safeguards and ethical guidelines are urgently needed.
This article raises alarm bells about the risks of AI-driven propaganda. However, I hope the technology community can also find constructive ways to leverage AI to counter misinformation and elevate factual, trustworthy information.
AI-driven disinformation is certainly concerning, though the article seems to suggest the US approach is less effective than Iran’s. It would be interesting to see a more impartial analysis of the tactics and impacts on both sides.
I’m curious to learn more about the specific techniques Iran is using to generate and disseminate this content. The article mentions “emotionally resonant” narratives – understanding those psychological levers could provide insights for countering disinformation campaigns.