Trump Accuses Iran of Using AI as “Disinformation Weapon” Amid Middle East Tensions

US President Donald Trump has accused Iran of weaponizing artificial intelligence to spread misinformation about its military capabilities and battlefield successes, highlighting growing concerns about AI-generated content in wartime.

“AI can be very dangerous, we have to be very careful with it,” Trump told reporters aboard Air Force One on Sunday. His comments followed a post on his Truth Social platform where he claimed, without evidence, that Western media outlets were in “close coordination” with Iran to spread AI-generated “fake news.”

Trump cited specific instances in which he believes Iran deployed AI deceptively. On Truth Social, he alleged that Iran showed “kamikaze boats” that don’t exist and used AI to falsely depict a successful attack on the aircraft carrier USS Abraham Lincoln. He suggested that publications spreading such news should be charged with treason.

Reuters has verified images from Iraq’s port of Basra showing explosive-laden Iranian boats attacking two fuel tankers in an assault that killed at least one crew member. While Iranian state media did claim its military struck the USS Abraham Lincoln, Western outlets largely did not amplify the assertion.

The accusations come amid heightened tensions between the Federal Communications Commission and broadcasters. FCC Chairman Brendan Carr on Saturday threatened to pull licenses of broadcasters who did not “correct course” on their coverage of the conflict involving the US, Israel, and Iran. Trump has a history of calling for revoking licenses of broadcast outlets he perceives as unfair to him.

The Middle East conflict has triggered an unprecedented wave of AI-generated visual content, creating a landscape where social media users struggle to distinguish between authentic and fabricated imagery. On Elon Musk’s X platform (formerly Twitter), AI-created videos depicting American soldiers captured by Iran, Israeli cities in ruins, and US embassies ablaze have proliferated despite policy efforts to curb wartime misinformation.

In response to the growing crisis, X announced last week that it would suspend creators from its revenue-sharing program for 90 days if they posted AI-generated war videos without proper disclosure. Repeat offenders face permanent suspension, according to X’s head of product Nikita Bier.

The policy represents a significant shift for a platform widely criticized for becoming a hub of misinformation since Musk’s $44 billion acquisition in October 2022. State Department official Sarah Rogers praised the move as a “great complement” to X’s Community Notes fact-checking system.

However, disinformation researchers remain skeptical about the effectiveness of these measures. “The feeds I monitor are still flooded with AI-generated content about the war,” said Joe Bodnar of the Institute for Strategic Dialogue. He pointed to a monetized “blue check” account that shared an AI clip depicting an Iranian “nuclear-capable” strike on Israel; the clip garnered more views than the platform’s announcement of the crackdown.

A global network of fact-checkers has identified numerous AI fakes about the conflict, many from premium accounts with purchased verification badges. These include fabricated videos of crying American soldiers in bombed embassies, captured US troops, and destroyed naval fleets.

Researchers warn that X’s engagement-based payment model for premium accounts has created financial incentives to spread sensational or false content. Even X’s own AI chatbot Grok has reportedly misidentified AI-generated war imagery as authentic when users sought fact-checks.

Last month, the Tech Transparency Project reported that X appeared to be profiting from premium accounts belonging to Iranian government officials and state-controlled news outlets, potentially violating US sanctions. X subsequently removed verification badges from some of these accounts.

Experts note that while X’s demonetization policy addresses part of the problem, many users spreading AI disinformation aren’t part of the revenue-sharing program. Additionally, the platform’s Community Notes system has been criticized as ineffective, with studies showing that more than 90 percent of proposed notes are never published.

“The devil will be in the implementing detail,” said Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech. “Metadata on AI content can be removed and Community Notes are relatively rare. It is unlikely that X will be able to guarantee both high precision and high recall for this policy.”

10 Comments

  1. William White

    This highlights the need for greater regulation and transparency around the development and deployment of AI systems, especially in sensitive domains like national security. Responsible AI governance is crucial to mitigate these emerging risks.

  2. The proliferation of AI-generated disinformation is a concerning development that demands urgent attention. Policymakers, tech companies, and the public must collaborate to develop effective strategies to combat the spread of manipulative content and protect the information ecosystem.

  3. Linda Williams

    The use of AI to generate false narratives and misleading content is a major challenge that requires a coordinated global response. Strengthening media literacy, fact-checking, and robust content moderation will be key to protecting the public from these threats.

  4. Amelia Taylor

    This is a sobering reminder of the potential dark side of AI technology. As the capabilities of these systems continue to advance, we must work to ensure they are not exploited for malicious purposes that undermine the integrity of information and democratic discourse.

  5. Patricia Jones

    The use of AI as a ‘disinformation weapon’ is a worrying trend that undermines public trust and informed decision-making. Fact-checking and media literacy efforts will be crucial to combating the spread of AI-generated misinformation.

    • Patricia B. Hernandez

      Absolutely. We must invest in building resilience against manipulative AI-driven propaganda, which poses a serious threat to democracy and global stability.

  6. Noah Martinez

    This is a concerning development, as AI-generated disinformation can be incredibly difficult to detect and combat. We must remain vigilant and rely on reputable, fact-based journalism to counter the spread of misinformation during conflicts.

  7. Linda Thompson

    While AI can certainly be a powerful tool, its misuse to spread propaganda is deeply troubling. It’s crucial that we develop robust safeguards and regulations to prevent the malicious use of this technology, especially in sensitive geopolitical contexts.

    • Ava Martinez

      I agree. Strict oversight and accountability measures are needed to ensure AI is not exploited for nefarious purposes by bad actors.

  8. Weaponizing AI to spread disinformation is a worrying escalation of information warfare. We must be vigilant in identifying and debunking such malicious content, while also strengthening our defenses against these sophisticated manipulation tactics.
