The rise of AI-generated conflict imagery has created a new front in modern warfare, with false videos and images from the Middle East conflict spreading rapidly across social media platforms. This growing phenomenon presents a significant challenge for viewers trying to distinguish authentic war footage from sophisticated fakes.

Experts warn that the proliferation of AI-generated content about ongoing conflicts represents a deliberate strategy by various state actors to shape public perception and influence military outcomes.

Professor Peter Lee of the University of Portsmouth told BFBS Forces News that propaganda and misinformation serve critical tactical purposes in modern warfare.

“There’s a couple of really important reasons for being better at propaganda and misinformation than your opponent,” Prof Lee explained. “For example, if you want to confuse your enemy into thinking that they’re not doing as well as they are doing, you want to be pumping out information that says you’re not hitting as many things as you like.”

The professor highlighted how Iran has been actively engaged in this practice during the ongoing conflict, noting, “That’s what Iran is saying. The United States is publishing more missile and bomb strikes than I have ever seen.”

One striking example involves an image circulated by an account called IranMilitaryIR_ showing the USS Abraham Lincoln aircraft carrier engulfed in flames. The image appears realistic and was shared by a verified social media account, making it particularly effective at spreading misinformation among viewers who lack the tools to verify its authenticity.

According to Prof Lee, the most sophisticated disinformation materials are likely created by government-backed entities, with both the United States and Iran serving as primary producers in the context of their ongoing conflict. He emphasized that Washington holds a significant advantage in this arena due to its connections with major technology companies like Meta, Apple, Google, and X (formerly Twitter).

“There will be large production of news stories through social media, and the US Department of Defense is blending original news footage with older footage and some that has been AI-generated as a matter of policy,” Prof Lee said. He noted that while the US approach is relatively transparent, other global powers operate with less visibility.

“Then there’ll be China and Russia, who have an interest in this war and who will want to disadvantage the United States,” he added. “Russia is famous for its bot farms, so it will literally farm them out to other countries, meaning they will not be directly traceable back to Russia.”

The technology required to create convincing fake imagery has become increasingly accessible. Examples include an AI-generated image showing Dubai’s Burj Khalifa in flames, which demonstrates how easily such content can be produced. In some cases, footage from video games like War Thunder has been manipulated and presented as actual combat footage, garnering millions of likes on platforms like Instagram.

The implications of this trend extend beyond mere confusion on social media. Prof Lee warned about the potential impact on democratic processes and public opinion.

“AI-generated posts and information could be utilized to persuade people to support an unpopular action taken by a government or to suggest that the state is doing better in the conflict than it is in reality,” he cautioned.

The ethical dimensions of government-sponsored misinformation campaigns present troubling questions about the line between strategic communication and outright deception. “I think ethically it’s a grey area because it is state sanctioned dishonesty,” Prof Lee noted. “On one hand, people don’t expect politicians to be completely honest, but we don’t expect politicians to blatantly lie.”

As AI tools become more sophisticated and accessible, the challenge of identifying authentic conflict footage will likely intensify, creating a new dimension of warfare that targets public perception rather than physical infrastructure. The ability to critically evaluate digital media has never been more essential for civilians trying to understand global conflicts.



© 2026 Disinformation Commission LLC. All rights reserved.