
AI-Generated War Content Floods X Despite Platform’s Crackdown

AI-created videos depicting American soldiers captured by Iran, Israeli cities in ruins, and US embassies ablaze are circulating widely on Elon Musk’s X platform, despite recent policy measures aimed at curbing wartime disinformation.

The ongoing Middle East conflict has triggered an unprecedented surge in artificial intelligence-generated visuals that vastly exceeds anything observed in previous conflicts, according to researchers. Many social media users now struggle to differentiate between fabricated content and authentic footage as sophisticated deepfakes proliferate.

In response to mounting criticism, X announced last week it would suspend creators from its revenue sharing program for 90 days if they post AI-generated war videos without proper disclosure. Subsequent violations will trigger permanent suspension from the monetization program, according to X’s head of product Nikita Bier.

“This represents a significant shift for a platform that has faced widespread criticism for becoming a breeding ground for disinformation since Musk’s $44 billion acquisition in October 2022,” said Joe Bodnar of the Institute for Strategic Dialogue.

The policy change received praise from Sarah Rogers, a senior State Department official, who called it a “great complement” to X’s Community Notes system, which uses crowd-sourced verification to reduce the reach and potential monetization of inaccurate content.

However, disinformation experts remain unconvinced of the policy’s effectiveness. “The feeds I monitor are still flooded with AI-generated content about the war,” Bodnar told AFP. “It doesn’t seem like creators have been dissuaded from pushing misleading AI-generated images and videos about the conflict.”

As evidence, he pointed to a recent post from a verified “blue check” account eligible for monetization that shared an AI clip depicting an Iranian “nuclear-capable” strike on Israel. This post garnered more views than Bier’s announcement about the platform’s crackdown on deceptive content.

AFP’s global fact-checking network has identified numerous AI-generated fakes related to the Middle East conflict, many originating from X’s premium accounts with purchased blue verification badges. These include fabricated videos showing tearful American soldiers inside bombed embassies, captured US troops kneeling beside Iranian flags, and destroyed American naval fleets.

The volume of AI-fabricated content, interspersed with authentic imagery from the conflict zone, continues to outpace professional fact-checkers’ ability to debunk it. Complicating matters further, X’s own AI chatbot, Grok, has reportedly provided incorrect verification, telling users that numerous AI visuals from the war were authentic when they were not.

Industry analysts note that X’s monetization model, which rewards premium accounts with payouts based on user engagement, has amplified financial incentives to spread sensationalist or false content. In one instance, a premium account that posted an AI video showing Dubai’s Burj Khalifa skyscraper engulfed in flames ignored Bier’s direct request to label the content as AI-generated. The post remained online and accumulated over two million views.

The platform’s challenges extend beyond individual content creators. Last month, a report from the Tech Transparency Project revealed that X appeared to be profiting from more than two dozen premium accounts belonging to Iranian government officials and state-controlled news outlets pushing propaganda, potentially violating US sanctions. X subsequently removed verification badges from some of these accounts.

Experts point out that even with stricter enforcement of demonetization policies, a substantial number of users spreading AI content aren’t part of X’s revenue sharing program, leaving them subject only to Community Notes fact-checking.

“X’s policy is a reasonable countermeasure to viral disinformation about the war. In principle, this policy reduces the incentive structure for those spreading disinformation,” said Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech. “The devil will be in the implementing detail: Metadata on AI content can be removed and Community Notes are relatively rare. It is unlikely that X will be able to guarantee both high precision and high recall for this policy.”

As the conflict continues, the flood of AI-generated content presents a growing challenge not just for platforms like X, but for users worldwide attempting to understand complex geopolitical events through an increasingly distorted digital lens.


11 Comments

  1. Patricia White

    The surge in AI-created war content on X is extremely concerning. As these sophisticated deepfakes become more prevalent, the potential for real-world harm increases exponentially. Robust platform policies and enforcement are critical to mitigating this threat to public discourse and security.

  2. Robert Johnson

    While X’s new policy changes are a step in the right direction, the sheer volume of AI-created war content flooding the platform remains a major concern. Ongoing monitoring, content moderation, and transparency around these issues will be crucial going forward.

  3. Amelia Thomas

    This highlights the growing challenge of verifying the authenticity of online content, especially in the context of fast-moving global events. Rigorous fact-checking and disclosure requirements are essential to combat the spread of AI-generated falsehoods on platforms like X.

  4. Patricia G. Martinez

    This is a sobering reminder of the power of AI to generate highly convincing yet completely fabricated content, especially around sensitive geopolitical issues. X’s efforts to address this are welcome, but the challenge of combating disinformation at scale remains daunting.

  5. Linda Hernandez

    The proliferation of AI-generated falsehoods about the Iran-US conflict on X is deeply troubling. This highlights the urgent need for platforms to implement effective measures to identify and limit the spread of this kind of dangerous disinformation.

  6. James Hernandez

    This highlights the growing challenge of distinguishing authentic footage from AI-fabricated visuals, especially in the context of complex, fast-moving world events. Rigorous fact-checking and disclosure requirements are essential to combat the spread of these AI-generated falsehoods.

  7. Noah K. Taylor

    The scale of AI-generated falsehoods about the Iran-US conflict on X is alarming. Platforms must take aggressive action to identify and remove this kind of manipulative content, while also educating users on identifying credible information sources.

  8. The proliferation of AI-created war videos on X is deeply concerning. As these sophisticated deepfakes become more prevalent, the potential for real-world harm increases exponentially. Robust platform policies and enforcement are critical to mitigating this threat.

  9. Jennifer D. Smith

    The proliferation of AI-generated falsehoods about major geopolitical conflicts is extremely troubling. Platforms like X need robust policies and enforcement to limit the spread of this kind of dangerous misinformation.

    • Michael Jackson

      I agree. Disinformation, especially around sensitive global issues, can have grave implications. X’s latest measures are a step in the right direction, but ongoing vigilance and new approaches will be crucial.

  10. Robert Martinez

    This is deeply concerning. AI-generated war content could have devastating real-world consequences if not properly regulated. I’m glad to see X taking steps to combat this, but more must be done to ensure the integrity of information online.
