AI-Generated Satellite Images Fuel Misinformation in Middle East Conflict
A satellite image claiming to show a devastated U.S. military base in Qatar recently spread across social media, garnering millions of views. The image, posted by Iranian state-aligned newspaper Tehran Times on X, purported to display “completely destroyed” U.S. radar equipment following military action.
There was just one problem: the image was fake, an AI-manipulated version of a Google Earth photo actually showing a U.S. base in Bahrain from 2025.
This incident highlights an alarming trend in modern warfare—the accelerated use of artificially generated or manipulated satellite imagery to spread misinformation during active conflicts, particularly in the ongoing tensions between the U.S., Israel, and Iran.
“Many of these manipulated images have the hallmarks of imperfect AI-generation: odd angles, blurred details, and hallucinated features that don’t align with reality,” explained Brady Africk, an open-source intelligence researcher, who noted a significant “increase in manipulated satellite imagery” following major events in the Middle East conflict.
One subtle tell in the fake Qatar base image was a row of cars parked in identical positions in both the authentic satellite photo and the manipulated version — a detail that went unnoticed by many as the image circulated widely across multiple languages and platforms.
In another example, information warfare analyst Tal Hagin identified an AI-generated satellite image falsely claiming to show that Israeli-U.S. jets had targeted painted aircraft silhouettes on the ground in Iran, while Tehran had supposedly moved real planes elsewhere. This fake included gibberish coordinates and triggered SynthID detection—an invisible watermark used to identify images created with Google AI.
These fabricated satellite images coincide with the emergence of imposter OSINT (open-source intelligence) accounts on social media that mimic credible digital investigators, further muddying the information landscape.
“Due to the fog of war, it can be very difficult to determine the success of an adversary’s strikes. OSINT came as a solution, using public satellite imagery to circumvent censorship inside countries like Iran,” Hagin explained. “But it’s now being preyed upon by disinformation agents.”
This trend extends beyond the current Middle East conflict. Similar reports of AI-manipulated satellite imagery surfaced during the Russia-Ukraine conflict and the four-day war between India and Pakistan in 2025.
The implications of such misinformation extend far beyond social media engagement. “Manipulated satellite imagery, like other forms of misinformation, can have real-world impacts when people act on the information they come across without verifying its authenticity,” Africk warned. “This can have effects that range from influencing public opinion on a major issue, like whether or not a country should engage in conflict, to impacting financial markets.”
In an increasingly complex information landscape, authentic high-resolution satellite imagery collected in real-time has become critical for decision-makers to assess security threats and debunk falsehoods. Commercial satellite intelligence companies are now playing a crucial role in this verification process.
During a recent militant attack on Niamey airport in Niger, satellite intelligence company Vantor detected AI-generated images circulating online that falsely showed the main civilian terminal on fire. The company’s authentic satellite imagery helped confirm these photos were fabricated.
“When a satellite image is presented as visual evidence in the context of war, it can easily influence how people interpret events,” noted Professor Bo Zhao from the University of Washington, emphasizing that as AI-generated imagery grows increasingly convincing, “it is important for the public to approach such visual content with caution and critical awareness.”
As generative AI technology continues to advance, the challenge of distinguishing reality from fiction in conflict zones will likely intensify, requiring greater vigilance from both media consumers and security professionals alike.