The Iran conflict has become a digital battleground where AI-generated videos distort reality, raising concerns about information integrity during times of crisis.

Artificial intelligence tools have supercharged the spread of misinformation surrounding the Iran conflict, with deepfake videos and manipulated imagery flooding social media platforms and shaping public perception in unprecedented ways.

The growing accessibility and falling cost of AI video tools have enabled both state actors and individuals to create convincing fabrications of combat footage, missile strikes, and political statements. These deceptive materials circulate rapidly online, particularly during periods of heightened news consumption.

“Dramatic images and videos claiming to show real-time battle scenes and missile strikes are flooding social media feeds, spreading rapidly and misleading millions,” explains Marc Owen Jones, associate professor of media analytics at Northwestern University in Qatar. Jones, who specializes in digital disinformation, notes that “social media has become a battlefield for competing narratives” as various sides attempt to win “hearts and minds” through increasingly sophisticated digital tactics.

The United States and Iran have both leveraged social media in distinct ways. American-aligned content often incorporates Hollywood footage in a “memeification of communication” that appeals to certain right-wing audiences. Meanwhile, Iran has developed its own approach, frequently mocking the United States while using AI-generated imagery that appears to exaggerate Iranian military successes, possibly to pressure Gulf states toward de-escalation.

Recent technological advances have made AI-generated content more convincing and accessible than ever. One notable example involved videos purportedly showing the USS Abraham Lincoln aircraft carrier burning at sea, footage so realistic that President Donald Trump said he called his generals to verify whether the incident had actually occurred. Trump later clarified on his Truth Social platform: “Not only was it not burning, it was not even shot at, Iran knows better than to do that!”

Other fabricated content included videos of U.S. troops crying and Gulf city buildings being destroyed—all later debunked as AI-generated fakes. “The use of AI is legion and is increasingly hard to detect,” Jones warns.

The verification challenge is compounded by the lightning speed at which content spreads. “In a fast-moving conflict, verified information is often delayed, which creates a vacuum that misinformation fills immediately,” Jones explains. “When people are worried, they crave information, but that information is often false.” Unverified content can reach millions within minutes, leaving ordinary users struggling to distinguish truth from fiction.

The misinformation ecosystem extends beyond battle footage. Last week, rumors circulated widely that Israeli Prime Minister Benjamin Netanyahu had died, with some users pointing to supposed visual anomalies in official videos as evidence of AI manipulation. Netanyahu subsequently released several “proof-of-life” videos to counter these claims, though speculation persists in certain online circles.

Coordinated disinformation campaigns further complicate the information landscape. “There are sketchy, anonymous accounts, with histories of multiple name changes, and no discernible identity sharing fake news and AI videos,” Jones notes. These accounts often appear legitimate but may be linked to state-backed operations or individuals seeking profit from sensationalized content. Automated bot networks amplify certain narratives by mass-sharing posts, creating an illusion of widespread popularity.

Not all AI-generated content is malicious. Some videos are created as deliberate parody or satire, mocking world leaders like Trump and Netanyahu. Examples include clips portraying Trump as Iran’s new supreme leader and Netanyahu as a malfunctioning robot. However, even content intended as humor can be misinterpreted as authentic when separated from its original context.

The proliferation of AI-generated misinformation is steadily eroding public trust in information systems. “False information can spread up to ten times faster than accurate reporting on social media, and corrections are rarely as widely seen or believed as the original false claim,” Jones observes. “Outrage drives sharing before fact-checking can occur, which is exactly what bad actors exploit.”

As the conflict continues, Jones advises treating dramatic footage with the same skepticism as unverified claims. “The fact that it looks real is no longer sufficient evidence that it is,” he cautions. This leaves ordinary people navigating an increasingly complex media environment where the line between reality and fabrication continues to blur.


