The protests sweeping across Iran have captured global attention, revealing deep economic desperation and widespread discontent with the country’s ruling regime. Thousands of demonstrators have reportedly lost their lives in violent crackdowns, as security forces attempt to quell the growing unrest that began on December 28, 2025.
Yet amid this genuine crisis, a troubling dynamic has emerged. The digital evidence documenting these protests—photos, videos, and audio recordings—has become entangled in a web of manipulation, casting doubt on even authentic materials. This phenomenon, known as “the liar’s dividend,” benefits those who seek to undermine truth by creating an environment where nothing can be trusted.
Iran’s protests have become perhaps the most contentious battleground in the global struggle against AI-manipulated content. Multiple factions—the Iranian regime itself, opposition groups, and foreign governments—are all competing to control the narrative, using increasingly sophisticated AI tools that are more accessible than ever before.
Social media platforms have been flooded with footage showing diverse motivations among protesters. Some simply oppose the regime, while others chant for Reza Pahlavi, crown prince of Iran’s deposed monarchy. Many videos document economic grievances and anger toward Supreme Leader Ayatollah Ali Khamenei and the Islamic Republic.
The complexity of this information landscape creates fertile ground for manipulation. Within hours of the protests erupting, regime-aligned accounts began dismissing authentic imagery as AI-generated fakes.
A particularly notable case involved what became known as Iran’s “tank man” photo. The image, captured on December 29, showed a protester confronting security forces in Tehran. Although the event was verified from multiple angles, someone enhanced the initially blurry image using AI tools to make it more shareable. Regime supporters immediately seized on the visible artifacts from this enhancement to discredit not just this image but all protest documentation.
This pattern has repeated with audio manipulations as well. In one instance, an account claiming affiliation with the Mujahedin-e Khalq (MEK), a controversial opposition group, posted footage where pro-Pahlavi chants were clearly dubbed over protest videos. The same account then “exposed” this manipulation, claiming monarchists were fabricating support. Pro-regime accounts quickly amplified this narrative.
The strategy appears deliberate. Iranian researchers have documented the regime’s practice of creating fake opposition personas on social media to sow confusion and discredit authentic documentation. Leaked materials from within the regime have confirmed these tactics.
Foreign actors have further complicated the situation. Documented Israeli influence operations have used AI-generated content to push anti-regime narratives. In January, the Israeli Foreign Ministry posted an AI-altered image on its Persian-language social media showing Iranian police blasting protesters with a water hose—a modification of an authentic BBC Persian photograph.
Such interventions may be intended to support opposition to the Islamic Republic but ultimately play into the regime’s hands by giving credence to claims that all protest documentation is foreign deception.
Since January 15, the Iranian regime has enforced a national internet shutdown, severely limiting communication with the outside world. The blackout has deepened the crisis, reducing the flow of verified footage to a trickle, transmitted primarily through Starlink terminals. Death toll estimates vary dramatically, from 2,435 deaths confirmed by human rights organizations to claims of 12,000 or higher.
Human rights groups and journalists are fighting an uphill battle to validate legitimate documentation that is sometimes falsely labeled as AI-generated. Meanwhile, the regime disseminates footage of counterprotests supporting the Islamic Republic, which opposition accounts question as potentially AI-generated—claims that state media vehemently denies.
This environment of distrust ensures that nothing—neither the scale of the protests, the severity of the crackdown, nor the stability of the regime—can be reliably witnessed in real time.
After four decades of undermining dissent, the Iranian regime now wields AI not just as a tool for creating fakes but as a rhetorical weapon. Every digital artifact becomes potential evidence that nothing can be trusted, creating an epistemic fog that benefits those in power.
The voices of ordinary Iranians, fighting for their future, deserve to be heard clearly—not manipulated by foreign interests, dismissed by regime propagandists, or lost in a digital haze that others have helped create.
10 Comments
This is a concerning report on the weaponization of disinformation during the protests in Iran. The use of AI-manipulated content to sow confusion and undermine truth is a serious challenge. It’s crucial that we find ways to combat this kind of strategic manipulation of information.
I’m troubled by the report’s description of how multiple factions are competing to control the narrative in Iran using AI tools. This raises serious concerns about the integrity of information surrounding the protests. We need better safeguards against the malicious use of these technologies.
The protests in Iran reflect deep economic and political issues that the regime appears unwilling to address. The proliferation of AI-manipulated media makes it difficult to discern truth from fiction, which can further inflame tensions and inhibit meaningful change.
You’re right, the regime’s crackdown on protesters and the spread of manipulated content is exacerbating an already volatile situation. Restoring trust in information sources will be key to resolving this crisis.
The weaponization of doubt through AI-manipulated content is a troubling trend that extends beyond just the Iran protests. We’re seeing this tactic used in many geopolitical conflicts to sow confusion and undermine truth. Addressing this challenge should be a priority for policymakers and tech companies.
I agree, this is a global problem that requires a coordinated response. Developing robust frameworks to detect and counter AI-generated disinformation is crucial to preserving the integrity of information and democratic discourse.
This report highlights the critical importance of maintaining the integrity of information, especially in times of social and political upheaval. The use of AI to manipulate content and undermine truth is a worrying trend that must be addressed through robust regulatory frameworks and technological solutions.
This report highlights the evolving and complex nature of information warfare. The ability of regimes, opposition groups, and foreign actors to leverage AI tools to manipulate content is deeply concerning. We must find ways to build resilience and trust in our information ecosystems.
The protests in Iran are a stark reminder of the power and danger of disinformation. The regime’s attempts to control the narrative through AI-manipulated content are a troubling development that threatens to further undermine the legitimacy of the protests and the broader struggle for change.
You’re right, the regime’s efforts to sow doubt and confusion through these tactics are a serious obstacle to the protesters’ demands for accountability and reform. Countering this will require innovative approaches to media literacy and fact-checking.