In a concerning development for information security, state actors and propagandists are increasingly exploiting generative AI to create deceptive satellite imagery during armed conflicts, security experts warn.

A recent example emerged when Iranian state-aligned newspaper Tehran Times posted what it claimed was a comparative satellite image showing “completely destroyed” U.S. radar equipment at a military base in Qatar. The widely shared image reached millions of viewers across social media platforms.

However, researchers quickly determined the image was actually an AI-manipulated version of a Google Earth photo from last year depicting a U.S. base in Bahrain. Telltale signs of manipulation included identical car placements in both the original and doctored images.
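Reused source imagery like this can be flagged automatically. A common open-source technique, not described in the article itself, is perceptual hashing: two images that share most of their structure (down to identical car placements) produce nearly identical hashes even after localized edits. The following is a minimal, self-contained sketch of a difference hash ("dHash") over plain grayscale pixel grids; the pixel values and the edit are illustrative, not real imagery.

```python
# Hypothetical sketch of difference hashing ("dHash") for spotting a lightly
# edited copy of an archived satellite image. All data here is illustrative.

def dhash(pixels):
    """Hash a grayscale grid: one bit per horizontal neighbour comparison."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two hashes of equal length."""
    return sum(x != y for x, y in zip(a, b))

# A tiny "archived" image and a copy with one edited region.
original = [
    [10, 40, 90, 20],
    [15, 45, 85, 25],
    [12, 42, 88, 22],
]
doctored = [row[:] for row in original]
doctored[1][2] = 5  # simulate a pasted-in dark "damage" patch

h1, h2 = dhash(original), dhash(doctored)
print(hamming(h1, h2), "of", len(h1), "bits differ")  # → 2 of 9 bits differ
```

A small Hamming distance relative to the hash length is a strong hint that a "new" image is a retouched copy of a known archived one. Production tools (e.g. the `imagehash` library) apply the same idea to downscaled real imagery.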

“We’re seeing an increase in manipulated satellite imagery appearing on social media in the wake of major events including the Middle East war,” said Brady Africk, an open-source intelligence researcher. “Many of these manipulated images have the hallmarks of imperfect AI-generation: odd angles, blurred details, and hallucinated features that don’t align with reality.”

In another instance, information warfare analyst Tal Hagin identified an AI-generated satellite image falsely depicting Israeli-U.S. jets targeting a painted aircraft silhouette in Iran. The fabrication included nonsensical geographic coordinates and carried a SynthID watermark indicating it was created using Google AI tools.
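Coordinate checks of the kind Hagin applied can also be scripted. The sketch below is a hypothetical sanity check, not his method: it rejects latitude/longitude values that are physically impossible or that fall outside a rough bounding box for the claimed region. The box for Iran is approximate and for illustration only.

```python
# Hypothetical sketch: sanity-check coordinates stamped on a claimed satellite
# image. Fabricated images sometimes carry lat/lon values that are out of range
# or nowhere near the location they supposedly show.

VALID_LAT = (-90.0, 90.0)
VALID_LON = (-180.0, 180.0)
IRAN_BBOX = {"lat": (25.0, 40.0), "lon": (44.0, 63.5)}  # rough extent of Iran

def in_range(value, bounds):
    lo, hi = bounds
    return lo <= value <= hi

def check_claimed_location(lat, lon, bbox):
    """Return a list of red flags for a coordinate pair on a claimed image."""
    flags = []
    if not (in_range(lat, VALID_LAT) and in_range(lon, VALID_LON)):
        flags.append("coordinates are not valid lat/lon values")
    elif not (in_range(lat, bbox["lat"]) and in_range(lon, bbox["lon"])):
        flags.append("coordinates fall outside the claimed region")
    return flags

print(check_claimed_location(35.7, 51.4, IRAN_BBOX))   # Tehran area: no flags
print(check_claimed_location(135.0, 51.4, IRAN_BBOX))  # impossible latitude
```

An empty flag list only means the coordinates are plausible; confirming them requires cross-referencing genuine satellite archives.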

These examples represent a growing trend of AI-generated disinformation that threatens to undermine legitimate open-source intelligence (OSINT) work. The emergence of imposter OSINT accounts on social media platforms further complicates this landscape.

“Due to the fog of war, it can be very difficult to determine the success of an adversary’s strikes. OSINT came as a solution, using public satellite imagery to circumvent censorship,” explained Hagin. “But it’s now being preyed upon by disinformation agents.”

The problem extends beyond the current Middle East conflict. Similar reports of fake satellite imagery created or edited using AI surfaced during the Russia-Ukraine conflict and the brief India-Pakistan hostilities last year.

The security implications of such deception are significant. “Manipulated satellite imagery, like other forms of misinformation, can have real-world impacts when people act on the information they come across without verifying its authenticity,” Africk noted. “This can have effects that range from influencing public opinion on a major issue, like whether or not a country should engage in conflict, to impacting financial markets.”

Legitimate satellite intelligence companies are working to counteract this trend. During a recent militant attack on Niamey airport in Niger, satellite intelligence firm Vantor used its resources to debunk fake AI-generated images allegedly showing the main civilian terminal on fire.

“When a satellite image is presented as visual evidence in the context of war, it can easily influence how people interpret events,” said Bo Zhao from the University of Washington.

The rapid advancement of AI technology poses escalating challenges for information verification. As generative AI tools become more sophisticated, the visual cues that once helped identify manipulated images are becoming less obvious. This evolution demands heightened scrutiny from both intelligence professionals and the public.

Experts emphasize that authentic, high-resolution satellite imagery collected in real time remains vital for security assessment and misinformation debunking. However, the increasing quality of AI-generated fakes requires more sophisticated detection methods and greater public awareness.

As AI-generated imagery grows increasingly convincing, Zhao stresses that it is “important for the public to approach such visual content with caution and critical awareness.”

The trend highlights a broader challenge in the digital information ecosystem, where the line between reality and fiction continues to blur, requiring new approaches to media literacy and information verification in conflict zones.


11 Comments

  1. Elijah Smith

    I’m curious to learn more about the specific techniques used to identify the AI-generated elements in these doctored satellite images. Understanding the telltale signs could help improve our ability to spot manipulated content.

  2. Amelia Martinez

    This highlights the need for advanced tools and techniques to detect AI-generated fakes. As the technology advances, so must our ability to discern truth from fiction, especially in sensitive geopolitical contexts.

  3. The increasing use of manipulated satellite imagery to fuel tensions is a troubling trend. We must remain cautious and critical when evaluating visual content, especially during times of conflict.

    • Agreed. Leveraging AI to create deceptive imagery is a serious threat to information security and could have grave consequences. Strengthening verification methods is crucial to maintain trust in visual data.

  4. This is a concerning development. As AI capabilities advance, the potential for malicious actors to create deceptive imagery and fuel tensions is alarming. We need robust verification methods to distinguish real from manipulated satellite imagery.

    • Jennifer Martin

      Absolutely. Credible, fact-based information is critical, especially during conflicts. AI-generated fakes can have severe consequences if not properly identified and debunked.

  5. William Rodriguez

    The use of AI-generated satellite imagery to inflame tensions is a troubling development. We must redouble our efforts to combat the spread of manipulated visual content and ensure that decision-makers have access to reliable, fact-based information.

  6. Elijah Jackson

    This is a sobering example of how rapidly advancing AI technology can be misused for malicious purposes. It’s a stark reminder of the importance of critical thinking and fact-checking, especially when it comes to sensitive information.

    • Michael Johnson

      Precisely. As AI becomes more sophisticated, the potential for abuse and the spread of disinformation grows. Maintaining vigilance and strengthening verification methods will be essential in the years ahead.

  7. Amelia Miller

    It’s worrying to see state actors exploiting generative AI for propaganda purposes. We must be vigilant in scrutinizing visual content and relying on trusted, verified sources of information.

    • Noah S. Hernandez

      You raise a good point. The blurred details and hallucinated features in these AI-manipulated images are telltale signs that should raise red flags. Fact-checking is crucial to combat the spread of disinformation.


© 2026 Disinformation Commission LLC. All rights reserved.