In a striking illustration of modern warfare’s digital front, the Iran conflict has exposed how artificial intelligence-generated videos can significantly shape public perception during high-profile international crises.

The rapid proliferation of AI-fabricated content has created a parallel information battleground where competing narratives vie for dominance, according to experts monitoring the conflict’s digital dimension.

“Dramatic images and videos claiming to show real-time battle scenes and missile strikes are flooding social media feeds, spreading rapidly and misleading millions,” said Marc Owen Jones, associate professor of media analytics at Northwestern University in Qatar.

Jones, who specializes in analyzing social media’s influence on public opinion, notes that all sides in the conflict are leveraging social media platforms to sway public sentiment. American-aligned content often features “videos intercut with Hollywood clips, a sort of memeification of communication designed to appeal to a far-right aesthetic,” while Iran has responded with its own digital strategy.

“Iran has risen to the game, often mocking the United States with their memes, but a lot of AI-generated images appear to be exaggerating Iran’s military successes, arguably to add pressure on Gulf states to push for de-escalation,” Jones explained.

Advances in AI have democratized the creation of convincing deepfakes, allowing virtually anyone to generate high-quality misleading content within seconds. One notable example involved videos purportedly showing the USS Abraham Lincoln, a U.S. aircraft carrier, burning at sea – content so realistic that even former President Donald Trump reportedly contacted military officials to verify its authenticity.

Trump later acknowledged on his Truth Social platform that the footage was fabricated, stating: “Not only was it not burning, it was not even shot at, Iran knows better than to do that!”

Other widespread fabrications include videos depicting U.S. troops crying and purported destruction in Gulf cities. The sheer volume and convincing nature of these AI-generated videos have made verification increasingly challenging for the public.

“The use of AI is legion and is increasingly hard to detect,” Jones emphasized. “In a fast-moving conflict, verified information is often delayed, which creates a vacuum that misinformation fills immediately. When people are worried, they crave information, but that information is often false.”

This information vacuum becomes particularly problematic as unverified content can reach millions within minutes, far outpacing the efforts of fact-checkers and verification teams. The recent rumors regarding Israeli Prime Minister Benjamin Netanyahu exemplify this phenomenon, as speculation about his supposed death spread widely after users pointed to alleged visual glitches in an official video.

Some claimed Netanyahu appeared to have six fingers in the video – a supposed telltale sign of AI manipulation. Despite Netanyahu releasing several subsequent “proof-of-life” videos, the online rumors persisted, demonstrating how digital skepticism can become self-reinforcing.

The coordinated nature of some misinformation campaigns adds another layer of complexity. “There are sketchy, anonymous accounts, with histories of multiple name changes, and no discernible identity sharing fake news and AI videos,” Jones observed. These accounts may be linked to state-backed actors or opportunists seeking to profit from sensationalized content.

Not all manipulated content is designed with malicious intent. Some videos deliberately created as parody or satire – such as those depicting Trump as Iran’s new supreme leader or Netanyahu as a malfunctioning robot – can still mislead viewers when stripped of their original context.

Other fabricated scenarios include NATO members refusing to help unblock the Strait of Hormuz and Ukrainian President Volodymyr Zelenskyy supposedly arriving in the Gulf region with anti-drone technology.

The cumulative effect of this digital misinformation ecosystem is a profound erosion of public trust in visual evidence. “False information can spread up to ten times faster than accurate reporting on social media, and corrections are rarely as widely seen or believed as the original false claim,” Jones noted.

“Outrage drives sharing before fact-checking can occur, which is exactly what bad actors exploit,” he added, urging the public to approach dramatic footage with inherent skepticism. “The fact that it looks real is no longer sufficient evidence that it is.”

As the Iran conflict continues evolving on both physical and digital fronts, ordinary citizens face the increasingly complex challenge of distinguishing authentic information from sophisticated manipulation – a situation that threatens to undermine informed public discourse about critical global events.



© 2026 Disinformation Commission LLC. All rights reserved.