Deepfake War Videos Flood X Despite New Content Policies

AI-generated videos depicting American soldiers captured by Iran, Israeli cities reduced to rubble, and burning U.S. embassies continue to circulate widely on Elon Musk’s X platform, despite recent policy changes aimed at curbing wartime misinformation.

The ongoing Middle East conflict has triggered an unprecedented wave of artificial intelligence-created visuals that far exceeds what researchers have observed in previous wars. The sophisticated nature of these deepfakes has made it increasingly difficult for social media users to distinguish between fabricated content and authentic footage.

In response to the growing concern, X announced last week that creators participating in its revenue sharing program would face 90-day suspensions if they posted AI-generated war videos without proper disclosure. The platform’s head of product, Nikita Bier, warned that repeated violations would result in permanent removal from the monetization program.

This policy shift represents a notable change for the platform, which has faced substantial criticism for becoming what many describe as a breeding ground for misinformation since Musk completed his $44 billion acquisition in October 2022. The move earned praise from State Department official Sarah Rogers, who called it a “great complement” to X’s Community Notes system, potentially reducing both reach and monetization for inaccurate content.

However, disinformation researchers remain unconvinced about the effectiveness of these measures. “The feeds I monitor are still flooded with AI-generated content about the war,” said Joe Bodnar of the Institute for Strategic Dialogue in a statement to AFP. “It doesn’t seem like creators have been dissuaded from pushing misleading AI-generated images and videos about the conflict.”

Bodnar highlighted a particularly concerning example: a post from a verified “blue check” account eligible for monetization that shared an AI-generated video depicting an Iranian “nuclear-capable” strike on Israel. This post garnered more views than Bier’s announcement about the policy crackdown itself.

When AFP inquired about the number of accounts demonetized since the policy announcement, X did not provide a response. Meanwhile, fact-checkers across the globe continue to identify a steady stream of AI fakes related to the Middle East conflict, many originating from premium accounts with purchased verification badges.

These fabrications include emotionally manipulative content such as AI videos showing tearful American soldiers inside bombed embassies, U.S. troops kneeling beside Iranian flags, and destroyed American naval fleets. The sheer volume of these AI-created visuals, interspersed with legitimate footage from the conflict zone, has overwhelmed the capacity of professional fact-checkers to debunk them effectively.

Compounding the problem, X’s own AI chatbot, Grok, has reportedly provided incorrect information to users seeking verification of these images, wrongly confirming numerous AI-generated visuals as authentic.

Researchers have also pointed to X’s monetization model as part of the problem. The platform allows premium accounts to earn revenue based on engagement metrics, creating a powerful financial incentive for users to share sensational or misleading content that drives clicks and views.

In one notable incident, a premium account that posted an AI video showing Dubai’s Burj Khalifa engulfed in flames ignored Bier’s direct request to label the content as AI-generated. The post remained visible and accumulated more than two million views.

The issue extends beyond individual content creators. Last month, a report from the Tech Transparency Project revealed that X appeared to be profiting from more than two dozen premium accounts belonging to Iranian government officials and state-controlled media outlets. These accounts were allegedly pushing propaganda in potential violation of U.S. sanctions, though X subsequently removed verification badges from some of them following the report.

Experts note that even if X’s demonetization policy were strictly enforced, it would fail to address a fundamental problem: many users spreading AI-generated content aren’t part of the revenue sharing program at all. These accounts can still be fact-checked through Community Notes, but the effectiveness of this system has been questioned by researchers. A 2023 study by the Digital Democracy Institute of the Americas found that over 90 percent of Community Notes on X never get published.

“X’s policy is a reasonable countermeasure to viral disinformation about the war. In principle, this policy reduces the incentive structure for those spreading disinformation,” said Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech. “The devil will be in the implementing detail: Metadata on AI content can be removed and Community Notes are relatively rare. It is unlikely that X will be able to guarantee both high precision and high recall for this policy.”


8 Comments

  1. James Martinez

    This is a concerning trend – the spread of AI-generated disinformation on social media platforms. It’s critical that platforms enforce robust policies to address this problem and help users distinguish fact from fiction.

    • Michael Miller

      Agreed. The sophistication of these deepfakes makes it increasingly challenging for the average user to spot fabricated content. Platforms must invest heavily in detection and mitigation capabilities.

  2. Jennifer W. Smith

    The Middle East conflict has long been a hotbed for misinformation, and it’s worrying to see AI-generated content exacerbating the problem. Strict enforcement of disclosure policies is a necessary step, but more needs to be done.

    • Amelia Thomas

      I share your concern. Platforms should also prioritize educating users on how to identify AI-generated content and equipping them with critical thinking skills to navigate online information.

  3. Amelia Brown

    It’s disturbing to see how these AI-generated war videos can spread so quickly and mislead people. I hope the platform’s new policy changes will help curb the problem, but more comprehensive solutions may be needed.

  4. Jennifer Miller

    The proliferation of AI-generated disinformation is a serious challenge that platforms must address. While content policies are a step in the right direction, I’m curious to know what other strategies they’re exploring to combat this issue.

    • Noah T. White

      That’s a good point. Platforms should also consider investing in early warning systems and collaboration with fact-checking organizations to stay ahead of emerging threats.

  5. William Martinez

    This is a concerning development that highlights the need for increased digital literacy and critical thinking skills among social media users. Platforms must take a multi-pronged approach to address the spread of AI-generated misinformation.

