Fact-checking organization exposes surge in AI-generated fake images of US troops
Amid heightened tensions in the Middle East, a concerning trend of artificial intelligence-generated images depicting captured American soldiers has emerged on social media platforms, according to a recent investigation by Full Fact, the UK’s largest fact-checking organization.
The group has identified multiple fabricated images circulating widely online, designed to create the false impression that US military personnel have been captured by Iranian forces. These sophisticated fakes highlight the growing challenge of distinguishing authentic wartime imagery from AI-generated content.
In one prominent example, three images purportedly showing captured Delta Force operators in Iranian custody have spread across Facebook and X (formerly Twitter). The photos depict men in combat uniforms being escorted by masked soldiers and kneeling near portraits of Iranian Supreme Leader Ayatollah Ali Khamenei.
Full Fact’s analysis revealed telltale signs of AI generation, including a distinctive diamond watermark from Google’s Gemini AI chatbot visible in uncropped versions. Further examination using Google’s SynthID detector confirmed all three images contained invisible digital watermarks indicating creation through Google’s AI tools.
The fabricated images also contained obvious inconsistencies, including one photo with a date stamp of “2026/04/18” – a date that hasn’t yet occurred.
In a separate incident, social media users shared an image allegedly showing three American servicemen being led away from a downed B-2 stealth bomber by Iranian soldiers. While US B-2 bombers have indeed been deployed for strikes against Iran, there have been no credible reports of any such aircraft being shot down.
This image contained multiple red flags, including the depiction of three crew members, even though the B-2 has a standard crew of two according to US Air Force specifications. Higher-quality versions of the image revealed additional anomalies, including an Iranian soldier seemingly depicted with three hands and an improbably large Iranian flag in the background.
The organization confirmed that this image also contained a SynthID watermark and appears to have originally been shared on X by an account describing it as “parody” content.
The proliferation of these convincing but false images comes at a particularly sensitive time as regional tensions escalate following exchanges of fire between Israel, Iran and various proxy forces. Misinformation has the potential to inflame tensions further or create false impressions about the scope of the conflict.
In the same report, Full Fact also corrected recent statements by UK Foreign Secretary David Lammy regarding Cyprus’s membership in NATO. During a television interview, Lammy incorrectly claimed that Cyprus is “part of NATO” and “a NATO country.”
While Cyprus is an EU member and maintains close ties with many NATO countries, it is not a NATO member. This distinction is important given recent drone attacks targeting RAF Akrotiri, a British military base on the island. The base sits within Akrotiri and Dhekelia, a British Overseas Territory, and therefore falls under the protection of the UK, a founding NATO member.
Full Fact has published a comprehensive toolkit to help the public identify misleading information, including specific guides on recognizing AI-generated content, fact-checking questionable videos, and assessing images circulating online during times of conflict.
The organization continues to monitor and debunk false claims related to the Middle East conflict as part of its mission to counter the harmful effects of misinformation in public discourse.