In a significant policy shift aimed at combating misinformation, Elon Musk’s social media platform X (formerly Twitter) has implemented new regulations targeting creators who share artificial intelligence-generated videos of armed conflicts without proper disclosure.
The updated policy, which takes immediate effect, comes amid growing concerns about the spread of synthetic media that could potentially mislead users about real-world conflicts. Under these new guidelines, content creators risk losing their monetization privileges if they fail to clearly label AI-generated war footage.
According to the announcement made by Nikita Bier, X’s head of product, first-time violators will face a 90-day suspension from the platform’s Creator Revenue Sharing programme. Repeat offenders could be permanently barred from monetization opportunities on the platform, signalling a strict approach to enforcement.
“This change is designed to protect information integrity, especially during wartime scenarios when misinformation can spread rapidly and have serious consequences,” Bier explained in his announcement post on X.
The decision reflects broader industry concerns about the democratization of AI technology, which has made creating realistic synthetic media increasingly accessible to everyday users. As conflicts in Ukraine, Gaza, and other regions continue to generate global attention, the potential for manipulated content to influence public perception has become a pressing issue for social media platforms.
X’s approach to identifying undisclosed AI content will rely on multiple detection methods. The platform plans to leverage its Community Notes feature—a crowdsourced fact-checking system implemented after Musk’s acquisition of the company—to help flag synthetic or misleading content. Technical measures will include metadata analysis of media files and monitoring for common technical indicators associated with AI-generated videos.
Media experts have pointed to the challenge of balancing creator freedoms with the need for transparency. Dr. Claire Wardle, a misinformation researcher at Brown University who is not affiliated with X, noted in a recent academic paper that “the line between creative expression and deception becomes particularly problematic in contexts of war and conflict, where accurate information can be a matter of life and death.”
The policy update comes during a period of transformation for X under Musk’s leadership. Since acquiring the platform in 2022 for approximately $44 billion, the billionaire entrepreneur has implemented numerous changes, including rebranding from Twitter to X, modifying verification systems, and introducing the Creator Revenue Sharing programme itself.
For content creators who have built businesses around X’s monetization options, the new policy adds another compliance requirement to navigate. The platform has not specified exactly how creators should label AI-generated content, though industry standards typically involve clear captions, watermarks, or verbal disclosures within videos.
Social media competitors like Meta’s platforms have implemented similar policies regarding AI-generated content, though X’s specific focus on war footage represents a targeted approach to a particularly sensitive category of misinformation.
The financial stakes for creators are significant. While X does not publicly disclose specific revenue figures for its Creator programme, some high-profile users have reported earning substantial income through the platform’s ad revenue sharing model introduced last year.
As AI-generated content becomes increasingly sophisticated and difficult to distinguish from authentic footage, X’s policy update highlights the evolving challenges faced by social media companies in maintaining information integrity while supporting creator economies.
For users of the platform, the change signals an important reminder to approach conflict-related content with heightened scrutiny, as the line between authentic documentation and synthetic creation continues to blur in the digital landscape.
9 Comments
Interesting move by X to combat AI-generated misinformation on the Iran conflict. Proper disclosure of synthetic media is crucial to maintain trust and information integrity, especially during sensitive geopolitical events.
I agree, this policy seems like a reasonable approach to address the risks of misleading AI content. Strict enforcement is necessary to discourage malicious actors from exploiting these emerging technologies.
I wonder how effective this policy will be in practice. Identifying and labeling AI-generated content can be challenging, especially as the technology continues to advance. Ongoing monitoring and adjustment of the guidelines may be needed.
That’s a fair concern. Enforcing this policy will require significant resources and technological capabilities. It will be interesting to see how X balances content moderation with user experience and creator incentives.
As a mining and commodities investor, I’m curious to see how this new policy might impact the discussion and spread of information related to the energy and resource sectors. Careful curation of content will be important.
That’s a good point. The energy and mining industries could be vulnerable to AI-generated misinformation, given their strategic importance. Proactive measures like this policy are necessary to protect the integrity of these discussions.
I’m curious to see how this policy will be implemented and enforced. While the intent is good, the practical challenges of identifying and labeling AI-generated content could be significant. Ongoing monitoring and refinement of the guidelines will be key.
As someone with a background in mining and energy, I appreciate X’s efforts to combat misinformation. Accurate and reliable information is crucial for making informed decisions in these industries. This policy is a step in the right direction.
I agree. Maintaining information integrity is especially important for sectors like mining and energy, where data and analysis can significantly impact investment decisions and market dynamics. Proactive measures like this are necessary.