In the two months since OpenAI released its Sora video generation tool, a surge of deceptive AI-generated videos has flooded major social media platforms, creating a new frontier in digital misinformation that’s proving difficult to contain.
One prominent example involved a fabricated interview showing a woman discussing food stamps. Though entirely AI-generated, hundreds of viewers reacted with vitriol toward the supposed subject—many with racist comments—while others used the fake content to criticize government assistance programs. The timing coincided with national debate over potential cuts to SNAP benefits during a government shutdown that left real recipients struggling.
Fox News republished content from a similar fake video as genuine evidence of public outrage over food stamp abuse, removing the article from its website only after being questioned about its authenticity.
“The barrier to use deepfakes as part of disinformation has collapsed, and once disinformation is spread, it’s hard to correct the record,” said Darjan Vujica, a former official at a now-disbanded State Department office that fought foreign influence operations.
The problem extends beyond domestic political discourse. Russian disinformation campaigns have already weaponized Sora videos on TikTok and X, creating fabricated content showing Ukrainian soldiers weeping or exploiting corruption scandals to undermine support for Ukraine. In India, AI videos have been used to inflame religious tensions by depicting Muslims in demeaning situations.
While social media platforms have policies requiring disclosure of AI-generated content and prohibiting deceptive material, enforcement has been inconsistent and often ineffective. The Times found dozens of examples of Sora videos appearing on YouTube without proper AI labeling. Techniques to remove identifying watermarks have proliferated, with several companies explicitly offering such services.
“Could they do better in content moderation for mis- and disinformation? Yes, they’re clearly not doing that,” said Sam Gregory, executive director of Witness, a human rights organization focused on technology threats. “Could they do better in proactively looking for AI-generated information and labeling it themselves? The answer is yes, as well.”
Both Sora and Google’s competing tool Veo embed visible watermarks and invisible metadata to help identify AI content. TikTok recently announced tighter rules around AI disclosure, while YouTube attempts to append labels indicating “altered or synthetic” content using the embedded watermarks.
However, these safeguards frequently fail. Labels often appear only after videos have been viewed thousands or millions of times, if they appear at all. A Times analysis of comments on the food stamp video found that nearly two-thirds of 3,000 users responded as if the content were genuine.
“There’s kind of this individual vigilance model,” Gregory noted. “That doesn’t work if your whole timeline is stuff that you have to apply closer vigilance to. It bears no resemblance to how we interact with our things.”
Critics suggest social media companies lack financial motivation to restrict AI content that generates engagement. “In the long term, once 90% of the traffic for the content in your platform becomes AI, it begs some questions about the quality of the platform and the content,” said Alon Yamin, CEO of Copyleaks, an AI content detection company. “So maybe longer term, there might be more financial incentives to actually moderate AI content. But in the short term, it’s not a major priority.”
OpenAI defended its practices in a statement, saying it prohibits deceptive uses of Sora and takes action against policy violations. The company emphasized that addressing AI-generated deception requires “an ecosystem-wide effort” since many similar tools exist without comparable safeguards.
Meta acknowledged the difficulties in labeling all AI-generated content, particularly as the technology rapidly evolves, while stating it’s working to improve detection systems. X and TikTok did not respond to requests for comment about the proliferation of AI fakes on their platforms.
For users scrolling quickly through social media feeds, even clearly marked AI content can be mistaken for authentic material, highlighting the inadequacy of current solutions against this growing challenge to information integrity in the digital age.