Disinformation War Intensifies as US-Iran Conflict Unfolds

As military action between the United States and Iran escalates, a parallel information battle is raging across social media platforms. Within hours of initial strikes, social feeds were flooded with fabricated videos, mislabeled images, and AI-edited clips, creating a digital environment where truth becomes nearly impossible to discern.

The economics driving this phenomenon are straightforward: attention translates directly into money and influence. Both are flowing abundantly to accounts that prioritize being first over being accurate, with fact-checking becoming an afterthought in the race for engagement.

What once might have remained localized rumor now scales globally within minutes. Information warfare experts tracking this phenomenon report that engagement-driven posts are reaching view counts in the hundreds of millions within days, effectively transforming the traditional “fog of war” into a profitable business model fueled by bot networks, algorithmic amplification, and increasingly sophisticated generative AI.

In the immediate aftermath of reported military strikes, including a devastating blast at Iran’s Shajareh Tayyebeh school that local sources claimed killed 168 people, social media platforms were inundated with misleading footage. A Wired investigation documented hundreds of deceptive posts on X (formerly Twitter), many appearing within minutes of confirmed explosions.

One particularly viral clip purporting to show missile launches “over Dubai” garnered more than 4 million views despite actually showing footage from a previous attack on Tel Aviv. Similarly, a before-and-after image allegedly depicting damage to Ali Hosseini Khamenei’s compound was entirely fabricated, yet still accumulated hundreds of thousands of impressions.

Perhaps most concerning is Wired’s finding that the majority of these misleading posts originated from verified “blue-check” accounts—including state-funded Iranian media outlets—lending misinformation an air of legitimacy while ensuring broader algorithmic distribution.

Familiar disinformation tactics have evolved significantly since previous conflicts. While repurposing video game footage as combat clips remains common—such as flight simulator scenes falsely presented as downed F-35 fighters—AI editing capabilities now make these fabrications increasingly difficult to identify. The BBC has tracked such content proliferating across TikTok, including assets linked to known Russian influence operations, with nearly 100 million collective views across just a handful of AI-generated war videos.

The motivation isn’t exclusively geopolitical. Content creators are actively monetizing crisis-related virality. This trend became significant enough that X announced suspensions from its Creator Revenue Sharing program for users posting unlabeled AI war content. Platform moderation efforts remain reactive, struggling to keep pace with both technological capabilities and the financial incentives that prioritize speed over accuracy.

NewsGuard analysts have documented numerous instances of posts exaggerating Iran’s military capabilities and fabricating battlefield victories. One notable example featured an image allegedly showing the USS Abraham Lincoln sinking in the Arabian Sea. While U.S. Central Command quickly refuted the claim, and investigators traced the photo to the controlled sinking of the USS Oriskany nearly twenty years ago, the false narrative had already reached millions after being shared by prominent accounts, including a Kenyan government official.

Another widely circulated video claimed to show a strike on Israel’s Dimona nuclear facility, but community fact-checking later identified the footage as originating from a 2017 attack in Balaklia, Ukraine. NewsGuard estimates such miscaptioned content has already garnered at least 21.9 million views on X alone, demonstrating how efficiently recycled visuals can be weaponized in new contexts.

The problem is further compounded by AI assistants and search algorithms. Investigations revealed X’s Grok chatbot confidently validating fabricated images of Iranian military movements. Separately, NewsGuard reported that Google’s AI-powered Search Summaries echoed misleading claims when users attempted to verify images—including describing a 2015 residential fire in Sharjah as evidence of a recent “CIA outpost” attack.

Journalism experts caution that these tools, optimized for probabilistic answers rather than verified reporting, perform particularly poorly during breaking news events. The result is a dangerous feedback loop: viral misinformation prompts hurried verification attempts, AI systems fill information gaps with authoritative-sounding but unverified content, and falsehoods gain additional credibility with each cycle.

Researchers note that the window between events occurring and authentic visuals becoming available continues to narrow, testing public patience and amplifying confirmation bias. Sofia Rubinson from NewsGuard’s Reality Check team observes that anonymous “conflict” accounts exploit this information vacuum by posting sensationalistic but dubious content that larger influencers then amplify to mainstream audiences.

The implications extend far beyond social media debates. A report from the UK Centre for Emerging Technology and Security warns that AI-driven information threats can undermine effective crisis response, heighten public fear, and distort democratic decision-making processes. With intermittent internet access in Iran and neighboring regions, on-the-ground verification becomes increasingly difficult, creating fertile conditions for both propagandists and engagement-driven content farms.

As this digital battlefield continues to evolve alongside the physical conflict, experts advise heightened skepticism toward battlefield “exclusives,” careful verification through multiple independent sources, and particular wariness toward accounts that consistently monetize crisis content without correcting demonstrated errors.
