The rapid emergence of OpenAI’s Sora and similar AI video generation tools has triggered a flood of deceptive content across social media platforms, raising serious concerns about the spread of misinformation in an increasingly digitized media landscape.
A particularly troubling example emerged on TikTok last October, where an AI-generated video portrayed a fictitious interview between a reporter and a woman discussing food stamps. Though entirely fabricated, the video provoked hundreds of vitriolic comments, with many viewers assuming the conversation was authentic despite subtle visual anomalies. Some responses included racist attacks against the woman depicted, while others used the fake interview to criticize government assistance programs, coinciding with national debates over President Trump’s proposed cuts to such initiatives.
This incident exemplifies a growing problem that digital media experts have been monitoring with increasing alarm. In the mere two months since Sora’s release, researchers have documented a surge in deceptive AI videos across major platforms including TikTok, X (formerly Twitter), YouTube, Facebook, and Instagram.
“The technology has created an environment where public perceptions can be manipulated through a simple series of prompts,” said Sam Gregory, executive director of Witness, a human rights organization focusing on technology threats. “Could they do better in content moderation for mis- and disinformation? Yes, they’re clearly not doing that. Could they do better in proactively looking for AI-generated information and labeling it themselves? The answer is yes, as well.”
While many AI-generated videos are harmless—featuring fictional scenarios, cute animals, or humorous memes—others deliberately stoke division and amplify existing political tensions. These videos have already been incorporated into foreign influence operations, including Russia’s ongoing campaign to undermine support for Ukraine.
The companies behind these powerful AI tools maintain they are implementing safeguards. Both Sora and Google’s rival tool Veo stamp visible watermarks on the videos they produce: Sora adds a “Sora” label to each clip, and both companies also embed invisible provenance metadata that other systems can scan to identify a video’s AI origins.
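To make the idea of machine-readable provenance concrete, here is a minimal sketch of how a downstream system might look for provenance hints in a video file’s metadata. It assumes ffprobe (part of FFmpeg) is installed and that a generator wrote provenance information as ordinary container tags; the keyword list and the file name are hypothetical, and a real Content Credentials (C2PA) manifest would require a dedicated verifier rather than this kind of simple tag scan.

```python
import json
import subprocess

# Provenance-hint keywords to look for in metadata tag names. These are
# illustrative assumptions, not an official list used by Sora or Veo.
PROVENANCE_HINTS = ("c2pa", "contentcredentials", "ai_generated", "provenance")


def probe_metadata(path: str) -> dict:
    """Return the container and stream metadata ffprobe reports for a video file."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)


def provenance_tags(path: str) -> dict:
    """Collect any metadata tags whose names look provenance-related."""
    info = probe_metadata(path)
    sections = [info.get("format", {})] + info.get("streams", [])
    found = {}
    for section in sections:
        for key, value in section.get("tags", {}).items():
            if any(hint in key.lower() for hint in PROVENANCE_HINTS):
                found[key] = value
    return found


if __name__ == "__main__":
    # "clip.mp4" is a placeholder file name.
    tags = provenance_tags("clip.mp4")
    if tags:
        print("Provenance-related tags found:", tags)
    else:
        print("No provenance tags visible; absence is not proof of authenticity.")
```

The last line of the sketch reflects the key limitation: re-encoding, cropping, or screen-recording a clip typically strips metadata of this kind, so the absence of provenance tags says nothing about whether a video is authentic.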
Social media platforms have responded with varying levels of urgency. TikTok recently announced plans to tighten its rules on AI disclosure and promised new tools allowing users to control how much synthetic content appears in their feeds. YouTube has begun detecting OpenAI’s invisible watermarks and appending small labels indicating when content is “altered or synthetic.”
OpenAI defended its practices in a statement, saying it “prohibits deceptive or misleading uses of Sora and takes action against violators of its policies.” The company also noted that Sora is just one of dozens of similar tools, many of which employ no safeguards whatsoever.
“AI-generated videos are created and shared across many different tools, so addressing deceptive content requires an ecosystem-wide effort,” the company stated.
A spokesperson for Meta acknowledged the challenge, noting it isn’t always technically possible to label every AI-generated video, particularly as the technology rapidly evolves.
Industry observers point to misaligned incentives as part of the problem. Alon Yamin, CEO of Copyleaks, a company specializing in AI content detection, argued that social media platforms lack financial motivation to restrict these engaging but potentially misleading videos.
“In the long term, once 90% of the traffic for the content in your platform becomes AI, it begs some questions about the quality of the platform and the content,” Yamin explained. “So maybe longer term, there might be more financial incentives to actually moderate AI content. But in the short term, it’s not a major priority.”
As AI video technology continues to improve in quality and accessibility, the challenge of distinguishing authentic content from sophisticated fabrications will likely intensify, testing the effectiveness of current platform policies and detection systems, and potentially reshaping how society consumes and trusts digital media.
9 Comments
This incident highlights the urgent need for better regulation and accountability around AI-generated content. The public deserves accurate, trustworthy information, not fabricated narratives.
This incident underscores the need for greater public awareness and critical thinking when it comes to online content. Promoting digital literacy should be a top priority for educators and tech companies alike.
I’m curious to see how digital media experts propose addressing this issue. Enhancing media literacy and improving AI detection models seem like important steps, but a multifaceted approach may be required.
The proliferation of deceptive AI videos is deeply troubling. Social media platforms must invest heavily in proactive moderation and transparent fact-checking to maintain the credibility of their services.
Deceptive AI videos are a worrying trend that undermines trust in digital media. Platforms must strengthen their detection and moderation capabilities to stay ahead of these challenges.
Agreed. Responsible development and deployment of these technologies is crucial to preserve the integrity of online information.
While the technology behind Sora and similar tools is impressive, the potential for misuse is alarming. I hope researchers and policymakers can find effective ways to mitigate the risks of AI-generated misinformation.
The rise of AI-generated content is concerning, as it can be used to spread misinformation and sway public opinion. We need robust verification systems and media literacy efforts to combat this threat to truthful discourse.
Deceptive AI videos are a serious threat to the integrity of online discourse. I’m curious to learn more about the specific steps being taken by platforms and regulators to address this challenge.