Pro-Iran Disinformation Campaign Reaches 145 Million Views, Cyabra Analysis Reveals
Sophisticated pro-Iran disinformation networks have flooded social media platforms with manipulated content, reaching an estimated 145 million views and generating over nine million engagements, according to a detailed analysis published by Cyabra Strategy Ltd.
The report, also covered by The New York Times, reveals how Iranian actors deployed tens of thousands of fake accounts to disseminate AI-generated videos across multiple platforms, significantly influencing public perception of the ongoing conflict.
“Our analysis demonstrates the extraordinary reach misinformation and bot amplification can have on online platforms,” said Dan Brahmy, Cyabra Founder and Chief Executive Officer. “With clear coordination patterns and synchronized tactics, these findings highlight that these tactics are a recurring information warfare strategy.”
The investigation identified three central narratives pushed by the campaign, all designed to portray Iran as the dominant and victorious power in the conflict while depicting its adversaries as weak and defeated. The operation employed sophisticated AI-generated content, deepfakes, and synthetic media, coordinated across multiple platforms to maximize impact.
Researchers found clear evidence of centralized control behind the campaign. The fake profiles exhibited identical keywords and hashtags, synchronized posting patterns, and repeated distribution of the same content. Perhaps most concerning, the behavioral patterns mirror those observed in previous Iranian disinformation operations identified by Cyabra.
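The report does not publish its detection methodology, but the coordination signals it describes — identical hashtag sets posted by different accounts within moments of each other — lend themselves to a simple illustration. The sketch below is a hypothetical, heavily simplified version of that idea; all names, thresholds, and data are invented for the example.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical post records: (account, unix_timestamp, frozenset of hashtags).
# This is an illustrative toy, not Cyabra's actual algorithm.
def find_coordinated_pairs(posts, window_seconds=60):
    """Flag account pairs that post an identical hashtag set within a short window."""
    by_hashtags = defaultdict(list)
    for account, ts, tags in posts:
        by_hashtags[tags].append((ts, account))

    flagged = set()
    for tags, events in by_hashtags.items():
        events.sort()
        for (t1, a1), (t2, a2) in combinations(events, 2):
            if a1 != a2 and abs(t1 - t2) <= window_seconds:
                flagged.add(tuple(sorted((a1, a2))))
    return flagged

posts = [
    ("bot_a", 1000, frozenset({"#victory", "#iran"})),
    ("bot_b", 1030, frozenset({"#victory", "#iran"})),   # 30s later, same tags
    ("user_c", 9000, frozenset({"#news"})),              # unrelated post
]
print(find_coordinated_pairs(posts))  # {('bot_a', 'bot_b')}
```

Real systems would combine many such signals rather than rely on any single one, since legitimate users occasionally post identical trending hashtags close together.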
“While we uncover countless influence operation campaigns daily for our clients, our core mission to restore digital trust drives us to publicly expose threats to global and national security,” Brahmy added. “We consider this far more than just our business, it is an enormous responsibility to restore trust to the online landscape.”
The analysis highlights the growing sophistication of state-sponsored influence operations. Cyabra’s technology evaluates hundreds of behavioral and contextual signals, including posting frequency, language patterns, network behavior, and content originality, to identify coordinated inauthentic behavior.
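To make the idea of signal-based scoring concrete, here is a minimal sketch combining just two of the signals the article names: posting frequency and content originality. Cyabra's real model weighs hundreds of proprietary signals; the function, weights, and thresholds below are invented for illustration only.

```python
# Illustrative toy score, not Cyabra's methodology. A profile earns points
# for abnormally high posting frequency and for heavy reuse of identical text.
def inauthenticity_score(posts_per_day, texts):
    # Content originality: fraction of posts that are unique strings.
    unique_ratio = len(set(texts)) / len(texts) if texts else 1.0
    score = 0.0
    if posts_per_day > 50:                # invented frequency threshold
        score += 0.5
    score += 0.5 * (1.0 - unique_ratio)   # penalty for duplicated content
    return round(score, 2)

# A spammy profile: 120 posts/day, 9 of 10 posts are copy-pasted.
print(inauthenticity_score(120, ["Iran wins"] * 9 + ["ok"]))  # 0.9
```

In practice such scores would be aggregated across a network of accounts, since coordinated campaigns reveal themselves through correlated behavior rather than any single profile's statistics.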
This level of digital manipulation presents significant challenges for governments and private organizations attempting to combat misinformation. As AI-generated content becomes increasingly difficult to distinguish from authentic media, the potential for widespread manipulation of public opinion grows.
The Iranian campaign represents part of a broader trend of information warfare that has intensified in recent years. Social media platforms have struggled to keep pace with these evolving tactics, which exploit algorithmic distribution systems to maximize reach and impact.
Market analysts note that the digital disinformation industry has grown substantially, with both state and non-state actors investing heavily in influence operations. This development has spurred demand for advanced detection technologies like those offered by Cyabra.
The full report, available on Cyabra’s website, provides detailed technical analysis of the campaign’s methods and reach. It serves as a warning about the scale and sophistication of modern information warfare tactics.
Cyabra, which specializes in detecting digital manipulation, has recently entered into a business combination agreement with Trailblazer Merger Corporation I (NASDAQ: TBMC), a special-purpose acquisition company. This move potentially positions the company to expand its capabilities in the growing digital trust and verification market.
As disinformation campaigns continue to evolve in sophistication and scale, the need for advanced detection and mitigation strategies has become increasingly urgent for both public and private sector organizations seeking to protect information integrity.