Elon Musk’s X Platform Uncovers Pakistan-Based Network Spreading Fake War Videos

Social media platform X, owned by Elon Musk, has discovered and shut down a sophisticated Pakistan-based operation that used dozens of accounts to spread AI-generated war videos. The platform identified 31 accounts that had been hacked and repurposed to disseminate AI-created content depicting fictional conflict scenarios.

Nikita Bier, X’s head of product, revealed the details of the operation in a series of posts on the platform. According to Bier, the accounts were hacked and had their usernames changed on February 27 to variations of “Iran War Monitor” to lend credibility to the false content.

“Last night, we found a guy in Pakistan that was managing 31 accounts posting AI war videos,” Bier wrote. “We are getting much faster at detecting this—and also eliminating the incentive to do this.”

The investigation revealed that financial gain, rather than political motivation, appeared to be the primary driver behind the operation. The accounts were seemingly attempting to capitalize on X’s creator revenue sharing program, which pays users based on engagement with their content.

“In 99% of cases, it’s just people looking to game monetization,” Bier explained. “The only thing they care about is what gets impressions—not the political leaning.”

This incident highlights a growing concern in the social media landscape: the intersection of artificial intelligence, misinformation, and profit motives. As AI tools become more accessible and sophisticated, creating convincing but entirely fictional video content has become easier than ever, particularly for content related to high-tension global situations.

In response to the discovery, X announced significant policy changes to its Creator Revenue Sharing program. Users caught posting AI-generated war content without proper disclosure will now face a 90-day suspension from the monetization program. Repeat offenders risk permanent exclusion from revenue opportunities on the platform.

The company plans to identify violations through Community Notes—X’s crowdsourced fact-checking system—and by analyzing metadata signals from generative AI tools that may indicate artificially created content.
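The article does not describe X's detection pipeline in detail, but one commonly discussed signal of the kind mentioned above is embedded provenance metadata: several generative-AI tools label their output with markers such as the IPTC DigitalSourceType value `trainedAlgorithmicMedia` or a C2PA content-credentials manifest. The sketch below is purely illustrative of that general idea, under the assumption that such markers survive in the file's raw bytes; it is not X's actual method, and real-world detection is far harder because uploaders routinely strip metadata.

```python
# Illustrative sketch only: scan a media file's raw bytes for provenance
# markers that some generative-AI tools embed. The marker list and the
# byte-scan approach are assumptions for demonstration, not X's pipeline.

AI_PROVENANCE_MARKERS = [
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value for generative AI
    b"c2pa",                     # label associated with C2PA content credentials
]

def has_ai_provenance_marker(data: bytes) -> bool:
    """Return True if any known AI-provenance marker appears in the raw bytes."""
    return any(marker in data for marker in AI_PROVENANCE_MARKERS)
```

A signal like this can only confirm AI origin when metadata is present and honest; its absence proves nothing, which is presumably why X pairs metadata analysis with crowdsourced review through Community Notes.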

“During times of war, it is critical that people have access to authentic information on the ground,” X stated in its announcement. “With today’s AI technologies, it is trivial to create content that can mislead people.”

This crackdown comes amid growing global concern about misinformation related to ongoing conflicts, particularly in the Middle East. Social media platforms have increasingly become battlegrounds for information warfare, with state and non-state actors attempting to shape narratives around geopolitical events.

Digital rights experts have long warned about the potential for AI-generated content to exacerbate tensions during conflicts. The ability to quickly create and distribute convincing fake videos of missile strikes, explosions, or military movements can cause real-world panic and potentially influence policy decisions if taken at face value.

The discovery also raises questions about account security on major platforms. The Pakistan-based operation’s ability to hack and repurpose dozens of accounts suggests vulnerabilities that could potentially be exploited by more sophisticated actors with political or ideological motivations.

X has pledged to continue refining its policies and detection methods to ensure users can trust the platform during critical moments and global events. The company faces the challenging balancing act of encouraging legitimate content creation while preventing manipulation of its monetization systems.

This incident represents just one example of a broader trend in social media manipulation that combines technological sophistication with financial incentives, creating new challenges for platforms attempting to maintain information integrity in an era of increasingly convincing artificial intelligence tools.


