
The escalating conflict in the Middle East has given rise to a new form of disinformation: the subtle manipulation of authentic images using artificial intelligence technology. While completely fabricated visuals have drawn significant attention, experts now warn about a more insidious trend of “AI-enhanced” images that distort public perception of actual events.

In early March, a widely circulated photograph showed a kneeling U.S. pilot confronted by a Kuwaiti local after parachuting from his jet. The high-resolution image appeared convincing enough that several media outlets published it. However, closer inspection revealed the pilot had only four fingers on each hand – a telltale sign of AI manipulation.

AFP fact-checkers discovered the image contained a SynthID watermark, an invisible marker embedded in images created with Google AI tools. Despite this manipulation, the underlying event appears genuine. A video showing the same confrontation had been circulating on social media since March 2, and satellite imagery confirmed the location. The incident aligned with reports that Kuwait had mistakenly shot down three U.S. warplanes that day.

Further investigation by AFP located an earlier, blurry version of the same photograph on Telegram. This original image, which lacked the facial detail seen in the enhanced version, passed AI verification tools as authentic – suggesting the blurry original served as the foundation for the manipulated high-resolution version.

“AI-enhancement may subtly alter textures, faces, lighting, or background details, creating an image that looks more ‘real’ than the original,” explained Evangelos Kanoulas, a professor of artificial intelligence at the University of Amsterdam. Such enhancements can “strengthen a particular narrative about an event—for example, making a protest appear more violent, making a crowd appear larger, making facial expressions more intense.”

Another example emerged following Iranian strikes near Erbil airport in Iraq on March 1. Social media users shared a dramatic image showing an enormous blaze at the site. While Google’s SynthID detection identified AI manipulation, the image wasn’t entirely fabricated. The original photograph showed the same scene but with a significantly smaller fire and smoke column, and less vivid coloration.

The boundary between enhancement and content generation is dangerously thin, experts warn. “Even little changes can end up telling a very different story,” said James O’Brien, a professor of computer science at the University of California, Berkeley. These subtle alterations “could change the perception of events” in significant ways.

Generative AI systems remain prone to error and may “hallucinate” elements not present in original images, according to Kanoulas. This phenomenon became evident following the January shooting of Alex Pretti by federal immigration agents in Minneapolis. An AI-enhanced image from a genuine video of the incident went viral, showing Pretti falling to his knees with officers beside him, one holding a gun to his head.

In the original grainy video frame, Pretti held a phone. However, in the AI-treated version, some social media users mistakenly identified the object as a weapon, significantly altering the narrative around the incident.

As tensions between the U.S., Israel, and Iran continue to escalate, the proliferation of manipulated imagery poses serious concerns for public information. Without proper labeling and disclosure of AI enhancement, these images further erode public trust in visual media.

“This kind of content is already having a huge impact on people and their ability to trust the truth,” O’Brien noted. Kanoulas agreed, adding that the trend has led people to “start doubting authentic images as well” – a troubling development as the world struggles to make sense of complex geopolitical events through increasingly questionable visual evidence.





© 2026 Disinformation Commission LLC. All rights reserved.