
President Trump Admits Being Fooled by AI Video in Iran Conflict

President Donald Trump acknowledged last week that he was deceived by an artificial intelligence-generated video purporting to show a U.S. aircraft carrier engulfed in flames, highlighting growing concerns about AI-fueled misinformation during the month-old war with Iran.

The incident prompted Trump to call a military general in alarm, only to be reassured that the USS Abraham Lincoln was unharmed. “No, it’s not burning down. Not a bullet was ever fired at it, sir,” Trump quoted the general as saying.

“This is my first glimpse of AI and what they’ve done with it,” Trump said at the Trump-Kennedy Centre. “They showed buildings in Tel Aviv burning to the ground, high rises burning. They showed buildings in Qatar and Saudi Arabia burning, and they weren’t burning. They weren’t hit. It was all AI, AI-based. Terrible.”

The Iranian conflict has sparked what the New York Times described as a “cascade of AI fakes” at an unprecedented scale, with most AI-generated content promoting pro-Iranian narratives. Propaganda videos and memes are being widely distributed online by the Iranian regime and its supporters, with state media broadcasting what experts say is often fabricated footage claiming to show successful strikes against enemy targets.

Professor Stephan Lewandowsky, chair in cognitive psychology at the University of Bristol, warned about the seriousness of this development. “When a head of state briefly bases military anxiety on a deepfake, we have a problem,” he said. “This is not merely a story about Trump’s credulity; it is a story about how vivid, emotionally compelling AI-generated content bypasses critical reasoning universally.”

The social media platform X announced earlier this month it would temporarily suspend monetization for creators who post unlabeled AI-generated videos of armed conflict. However, misleading content continues to proliferate across X, Instagram, TikTok, and Facebook.

This is not the first time Trump has fallen for or shared misleading content. During his 2024 presidential campaign, he shared AI-generated images showing pop star Taylor Swift supporting his candidacy with the caption “I accept!” Swift later publicly endorsed his opponent, Kamala Harris.

Trump has also amplified other debunked claims, including conspiracy theories that Haitian immigrants in Springfield, Ohio, were eating pets, which city officials dismissed as baseless. He shared a viral fake video claiming U.S. government agencies paid Hollywood celebrities including Angelina Jolie and Ben Stiller millions of dollars to visit Ukraine, a fabrication traced to a pro-Russian propaganda group.

Last year, Trump provoked controversy by confronting South African President Cyril Ramaphosa in the Oval Office with falsified social media content claiming a “white genocide” was occurring in South Africa. He pointed to images of white crosses that he claimed were graves of murdered white farmers, but fact-checkers quickly established these were actually symbolic crosses placed along a road during a 2020 protest.

Andrew Chadwick, a professor in the department of communication and media at Loughborough University, identified three factors that make individuals like Trump susceptible to misinformation: confirmation bias, where people tend to believe information that aligns with existing beliefs; “identity-based cognition,” which prioritizes tribal loyalty over factual accuracy; and media diet, with studies showing people who consume low-quality online content are more vulnerable to false information.

“Trump very much lives in that chaotic and hoax-laden world,” Chadwick told The i Paper. “He certainly uses social media to try to set the mainstream news agenda. And that alone carries political risks, as we have seen with this latest case.”

The rise of AI-generated disinformation poses significant challenges for international relations and conflict management. While social media platforms have implemented some safeguards, Professor Lewandowsky notes these measures are “poorly equipped to address the deluge of AI manipulation,” suggesting that “structural platform accountability and building psychological resilience from the ground up are the best path forward.”

As AI tools become more accessible and sophisticated, the barrier between authentic and fabricated content continues to erode, creating an environment where even world leaders can struggle to distinguish fact from fiction.


10 Comments

  1. Noah B. Jones

    This is a sobering reminder of how AI can be exploited to spread misinformation, even at the highest levels of government. Developing better detection methods and public awareness around synthetic media will be critical to mitigate these emerging threats.

    • Patricia Smith

      Absolutely. As AI continues to advance, the potential for malicious actors to leverage it for disinformation campaigns will only grow. Vigilance and cross-sector collaboration will be key to staying ahead of this challenge.

  2. Amelia Garcia

    Interesting to hear Trump acknowledge being fooled by AI-generated videos in the Iran conflict. It’s a growing concern as the technology becomes more advanced. I hope the military and intelligence agencies stay vigilant against these kinds of AI-based deception campaigns.

    • Amelia White

      Yes, the scale and sophistication of AI-fueled misinformation is really concerning. Glad Trump recognized the issue, but it’s crucial that leaders and the public remain skeptical of online content, especially during times of heightened geopolitical tensions.

  3. Liam Martinez

    This is a stark reminder of how AI can be weaponized for propaganda and manipulation, even by world leaders. It’s crucial that we develop robust methods to detect and counter AI-generated misinformation before it spreads further.

    • Elijah Miller

      I agree, this highlights the urgent need for better AI literacy and fact-checking tools to combat the rising tide of synthetic media. The Iran conflict shows how high the stakes can be when AI is used for deception.

  4. Trump’s admission that he was fooled by AI video in the Iran conflict is concerning, but not surprising given the rapid advancements in this technology. The challenge of detecting and verifying online content will only grow more difficult.

    • Elijah Martin

      Absolutely. As AI-generated content becomes more convincing, the potential for it to be weaponized for propaganda and disinformation campaigns is alarming. Robust safeguards and public education will be critical to address this emerging threat.

  5. Unsurprising that even the US President can be duped by AI-fueled misinformation. This underscores the urgent need for better detection and verification tools to combat synthetic media, especially in sensitive geopolitical contexts like the Iran conflict.

    • Patricia Thompson

      Agreed. The scale and sophistication of these AI-based hoaxes is worrying. Governments, tech companies, and the public all have a role to play in developing robust solutions to identify and limit the spread of manipulated content online.


A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.


© 2026 Disinformation Commission LLC. All rights reserved.