AI-Generated Satellite Images Spread Disinformation in Middle East Conflict

A satellite image purportedly showing a devastated U.S. base in Qatar recently spread across social media platforms, garnering millions of views. The image, posted by Tehran Times, an Iranian state-aligned English-language newspaper, displayed what it claimed was "completely destroyed" U.S. radar equipment following recent hostilities.

However, researchers quickly identified the image as an AI-manipulated fake, doctored from an authentic Google Earth image of a U.S. base in Bahrain taken last year. The manipulated photo contained telltale signs of forgery, including a row of cars parked in identical positions in both the original and altered versions.
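Identical regions repeated pixel-for-pixel between an "original" and an "altered" image are one of the simplest signals investigators look for. As a toy illustration only (not the researchers' actual tooling), a perceptual difference hash (dHash) fingerprints a region so that a verbatim copy hashes identically while genuinely repainted content diverges; the grids below are hypothetical stand-ins for cropped image patches:

```python
# Toy difference-hash (dHash) sketch: a perceptual fingerprint that is
# identical for a pixel-for-pixel copied region and differs for content
# that was actually repainted. Illustrative only; real verification uses
# full-resolution satellite imagery, not tiny synthetic grids.

def dhash(pixels):
    """Row-wise difference hash of a 2-D grid of grayscale values:
    emit 1 when a pixel is brighter than its right neighbor, else 0."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return int("".join(map(str, bits)), 2)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical "parking lot" patch copied verbatim into the forgery:
original_lot = [[10, 40, 35, 90, 20],
                [15, 70, 65, 30, 25],
                [80, 20, 45, 60, 10],
                [30, 55, 50, 85, 40]]
copied_lot = [row[:] for row in original_lot]

# A patch the forger actually repainted:
altered = [[200, 10, 90, 5, 120],
           [7, 180, 30, 160, 2],
           [90, 5, 200, 15, 140],
           [1, 130, 20, 110, 60]]

# A copied region is a giveaway: its hash matches the source exactly.
assert hamming(dhash(original_lot), dhash(copied_lot)) == 0
# Genuinely changed content produces a nonzero bit distance.
assert hamming(dhash(original_lot), dhash(altered)) > 0
```

In practice, analysts run comparisons like this over overlapping tiles of both images, flagging tiles whose fingerprints match suspiciously exactly, such as the row of identically parked cars in this case.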

This incident highlights a growing trend of state actors and propagandists using generative AI to fabricate convincing satellite imagery during major conflicts, a development that security experts warn carries significant real-world implications.

“We’ve seen an increase in manipulated satellite imagery appearing on social media in the wake of major events including the Middle East war,” said Brady Africk, an open-source intelligence researcher. “Many of these manipulated images have the hallmarks of imperfect AI-generation: odd angles, blurred details, and hallucinated features that don’t align with reality.”

The problem extends beyond simple image manipulation. Information warfare analyst Tal Hagin identified another AI-generated satellite image claiming to show that Israeli-U.S. jets had targeted a painted silhouette of an aircraft on the ground in Iran, while suggesting Tehran had moved real planes elsewhere. This image contained gibberish coordinates and was watermarked with SynthID, an invisible marker designed to identify content created using Google AI tools.

These fabricated satellite images coincide with the emergence of imposter OSINT (open-source intelligence) accounts on social media platforms that appear designed to undermine the credibility of legitimate digital investigators.

“Due to the fog of war, it can be very difficult to determine the success of an adversary’s strikes. OSINT came as a solution, using public satellite imagery to circumvent censorship inside countries like Iran,” Hagin explained. “But it’s now being preyed upon by disinformation agents.”

This phenomenon isn’t limited to the current Middle East conflict. Similar reports of fake satellite imagery created or manipulated using AI emerged during the Russia-Ukraine conflict and the four-day war between India and Pakistan last year, indicating a troubling pattern of AI deployment in information warfare.

The consequences of such deception can be far-reaching. “Manipulated satellite imagery, like other forms of misinformation, can have real-world impacts when people act on the information they come across without verifying its authenticity,” Africk noted. “This can have effects that range from influencing public opinion on a major issue, like whether or not a country should engage in conflict, to impacting financial markets.”

In this environment, authentic high-resolution satellite imagery collected in real time has become increasingly valuable. Satellite intelligence companies like Vantor play a crucial role in verifying or debunking circulating imagery. During a recent militant attack on Niamey airport in Niger, Vantor detected fake photos purportedly showing the main civilian terminal on fire. The company’s own satellite imagery helped confirm these photos were AI-generated forgeries.

“When a satellite image is presented as visual evidence in the context of war, it can easily influence how people interpret events,” said Bo Zhao from the University of Washington. As AI-generated imagery becomes increasingly sophisticated and convincing, Zhao emphasized that it’s “important for the public to approach such visual content with caution and critical awareness.”

The growing prevalence of AI-manipulated satellite imagery underscores the evolving challenge facing both media consumers and security professionals: distinguishing fact from fiction in an ever more complex information landscape. With AI tools becoming more accessible and their outputs more convincing, the ability to verify imagery through multiple sources and technical analysis has never been more crucial.
