In a swift move to combat potential misinformation surrounding the escalating Middle East conflict, social media platform X has temporarily suspended monetization for undisclosed AI-generated content depicting war scenes. The decision comes as tensions rise following Iran’s missile strike against Israel, with platform officials concerned about the spread of false information at a critical geopolitical moment.
The company announced Tuesday it would halt creator payouts for AI-generated war content that lacks proper disclosure labels. This measure aims to prevent artificial intelligence from being weaponized to spread misinformation about the ongoing conflict between Iran and Israel, which has intensified in recent days.
“During this sensitive time, we’re taking additional steps to ensure content on our platform maintains integrity,” an X spokesperson said. “Properly labeled AI content helps users understand what they’re viewing, which is particularly crucial during fast-developing international situations.”
The policy specifically targets creators who fail to identify computer-generated imagery as artificial. Under X’s existing guidelines, all AI-generated content must be clearly labeled to distinguish it from authentic footage. The temporary suspension of payments serves as both an enforcement mechanism and a deterrent against those who might attempt to profit from misleading content during a crisis.
Media experts note this is part of a broader challenge facing social platforms during international conflicts. Dr. Ellen Richards, a digital ethics researcher at Columbia University, explained: “We’re seeing an unprecedented convergence of sophisticated AI tools and geopolitical tensions. Platforms are struggling to balance free expression with preventing harmful misinformation that could inflame already volatile situations.”
Since Iran launched approximately 180 missiles at Israel on Tuesday, social media has been flooded with both authentic and fabricated imagery. Several viral posts containing AI-generated footage of explosions in Tel Aviv and other Israeli cities have already been identified and removed from various platforms.
This isn’t the first time X has faced criticism over content moderation during international crises. Following its acquisition by Elon Musk in 2022, the company underwent significant staff reductions in its trust and safety teams, raising concerns about its capacity to effectively monitor misleading content during critical events.
Industry analysts point out that X’s approach differs from other major platforms. Meta, which owns Facebook and Instagram, employs a combination of human moderators and AI detection tools to flag potentially misleading content during major global events. YouTube has enhanced its detection algorithms specifically to identify AI-generated war footage.
“The challenge is particularly acute for platforms operating with reduced moderation resources,” said tech policy analyst James Keller. “While automated systems can detect some forms of AI content, the most sophisticated deepfakes require human review, and the volume of content during breaking news events is overwhelming.”
Financial implications for content creators could be significant, as X’s creator program has become an important revenue stream for many users. The company’s ad revenue-sharing model allows creators to earn from engagement with their posts, potentially creating financial incentives to produce high-engagement content about trending topics like the Iran-Israel conflict.
Israeli and Iranian officials have both expressed concern about the proliferation of false information on social media platforms, with diplomatic representatives urging tech companies to take stronger measures against manipulated content that might exacerbate tensions.
The temporary payment suspension reflects the growing recognition among tech companies that content moderation policies need special consideration during international conflicts. As AI tools become increasingly sophisticated and accessible, platforms face mounting pressure to develop more effective methods for distinguishing authentic from synthetic media in real-time crisis situations.
X has not specified how long the payment suspension will remain in effect, stating only that the measure will continue “until the situation stabilizes.”
5 Comments
This seems like a reasonable policy by X. Unchecked spread of AI-generated war videos could seriously inflame tensions and spread disinformation. Curious how they’ll enforce disclosure requirements for creators.
Valid question. Enforcing AI disclosure will likely be challenging, but necessary to uphold platform integrity. It’ll be interesting to see X’s approach and how effective it proves.
Prudent move by X to curb potential misinformation around the Iran-Israel conflict. AI-generated war footage without proper disclosure could easily mislead people during such a sensitive geopolitical situation.
Agreed. Transparency around AI content is critical, especially for issues with real-world consequences. Good to see X taking proactive steps to maintain platform integrity.
Kudos to X for recognizing the risks of AI war content during this volatile situation. Hopefully their move to suspend payouts will curb the spread of misleading visuals and help keep the public informed.