AI Video Tool Makes Disinformation Easier Than Ever, Investigation Reveals
OpenAI’s Sora 2 video generator enables anyone to create convincing fake news footage with minimal effort, according to a troubling new investigation by NewsGuard that highlights the growing threat of AI-powered disinformation.
The investigation tested Sora 2’s ability to generate videos based on 20 false claims that circulated online between late September and early October. The results were alarming: the AI produced convincing, news-style videos for 16 of the 20 false narratives (an 80 percent success rate), 11 of them on the first attempt. Many featured realistic-looking news anchors delivering fabricated stories.
“Back in 2017, Google’s Ian Goodfellow called it a ‘little bit of a fluke, historically’ that people could trust videos as proof that something actually happened,” NewsGuard noted. Goodfellow, who invented the generative adversarial networks (GANs) behind the first deepfakes, had predicted this moment, saying “AI is closing some of the doors that our generation has been used to having open.”
Today’s reality has outstripped even those early warnings. Where early deepfakes demanded technical expertise and hours of work, Sora 2 can turn a single text prompt into a realistic 25-second video, complete with audio, in minutes.
The investigation found Sora 2 particularly adept at creating convincing disinformation on politically sensitive topics. It successfully produced videos showing false scenarios such as pro-Russian ballots being destroyed in Moldova, ICE agents arresting a toddler, and Coca-Cola boycotting the Super Bowl over Bad Bunny’s appearance.
Most concerning of all, the tool readily generated videos advancing five known Russian disinformation narratives, each taking less than five minutes to create from start to finish.
OpenAI has implemented safeguards, including content filters blocking violence and public figures, visible watermarks on all videos, and metadata tagging. However, NewsGuard discovered these protections are relatively easy to circumvent.
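The “metadata tagging” reportedly takes the form of C2PA content credentials embedded in the video file. As a minimal sketch of how a reader might check a downloaded clip for such tags, assuming exiftool is installed and that the metadata survives whatever re-encoding a sharing platform applies (neither is guaranteed):

```python
import json
import subprocess
import sys

def inspect_metadata(path: str) -> None:
    """Dump a video's metadata with exiftool and flag provenance-related tags.

    Assumes exiftool is on the PATH. Whether a Sora clip's C2PA manifest
    survives a platform re-encode, and whether exiftool surfaces it, varies.
    """
    out = subprocess.run(
        ["exiftool", "-j", path],  # -j: JSON output, one object per file
        capture_output=True, text=True, check=True,
    ).stdout
    tags = json.loads(out)[0]

    # Look for tag names commonly associated with C2PA content credentials.
    hits = {k: v for k, v in tags.items()
            if any(s in k.lower() for s in ("c2pa", "jumbf"))}
    if hits:
        print("Provenance-related tags found:")
        for key, value in hits.items():
            print(f"  {key}: {str(value)[:80]}")
    else:
        print("No provenance tags visible. Absence is NOT proof of authenticity.")

if __name__ == "__main__":
    inspect_metadata(sys.argv[1])
```

As the script’s last line notes, missing credentials cut both ways: a stripped manifest makes a fake look merely unverified, not fake.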
The watermark, meant to identify AI-generated content, can be removed in under four minutes using free online tools, and the resulting videos show only minor quality degradation, nothing that would alert most viewers to their synthetic origin. OpenAI did not respond when NewsGuard asked about the watermark vulnerability.
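“Minor quality degradation” is a qualitative judgment; one way to quantify it would be a frame-by-frame structural-similarity (SSIM) comparison between the original clip and the watermark-stripped copy. The sketch below is hypothetical, assuming OpenCV and scikit-image are installed, that the two files are frame-aligned, and made-up filenames:

```python
import cv2                   # pip install opencv-python
import numpy as np
from skimage.metrics import structural_similarity as ssim  # pip install scikit-image

def mean_frame_ssim(path_a: str, path_b: str, max_frames: int = 100) -> float:
    """Average grayscale SSIM over paired frames of two videos.

    Assumes the videos are frame-aligned (same length, no dropped frames);
    real re-encodes often need temporal alignment first.
    """
    cap_a, cap_b = cv2.VideoCapture(path_a), cv2.VideoCapture(path_b)
    scores = []
    while len(scores) < max_frames:
        ok_a, frame_a = cap_a.read()
        ok_b, frame_b = cap_b.read()
        if not (ok_a and ok_b):
            break
        gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
        if gray_a.shape != gray_b.shape:
            gray_b = cv2.resize(gray_b, (gray_a.shape[1], gray_a.shape[0]))
        scores.append(ssim(gray_a, gray_b))
    cap_a.release()
    cap_b.release()
    return float(np.mean(scores)) if scores else float("nan")

# Hypothetical filenames; a score near 1.0 would support "minor degradation".
print(mean_frame_ssim("sora_original.mp4", "sora_stripped.mp4"))
```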
Content filters designed to prevent the creation of fake videos featuring public figures proved inconsistent. While direct references to figures like “Zelensky” triggered blocks, NewsGuard found at least one instance where a vague description like “Ukrainian war chief” generated a convincing lookalike. Later attempts to replicate this were unsuccessful, suggesting OpenAI may have strengthened its filters.
The investigation also tried to bypass filters for Donald Trump and Elon Musk using descriptions like “a former reality TV star turned president” and “billionaire tech owner from South Africa,” but these attempts were blocked.
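That uneven pattern, where literal names are blocked but at least one descriptive paraphrase slipped through, is the classic failure mode of string-level blocklists. The toy sketch below illustrates the mechanism only; OpenAI has not disclosed its actual moderation pipeline, which the blocked Trump and Musk paraphrases suggest goes well beyond literal matching:

```python
# Toy illustration of why literal-name blocklists miss paraphrases.
# This is NOT OpenAI's moderation pipeline, which is undisclosed and
# evidently more sophisticated (it caught the Trump and Musk paraphrases).
BLOCKED_NAMES = {"zelensky", "trump", "musk"}  # hypothetical entries

def naive_name_filter(prompt: str) -> bool:
    """Return True if the prompt mentions a blocked name literally."""
    lowered = prompt.lower()
    return any(name in lowered for name in BLOCKED_NAMES)

print(naive_name_filter("Zelensky announces surrender"))             # True: blocked
print(naive_name_filter("Ukrainian war chief announces surrender"))  # False: slips through
```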
The real-world impact of these AI videos is already evident. In early October, Sora-generated clips depicting fictional confrontations between antifa protesters and police went viral across social media platforms and were shared by millions as authentic footage, despite their visible watermarks.
“With such a low barrier to entry—just a simple text prompt—Sora 2 is especially appealing to anyone aiming to spread disinformation,” the report notes. Potential bad actors include “authoritarian regimes, state-backed propaganda networks, conspiracy theorists, and financially motivated actors.”
While other companies, such as Google with its Veo 3 model and various Chinese developers, are building similar technology, OpenAI’s massive reach amplifies the concern: the Sora 2 app was downloaded over one million times in its first five days, and the tool is free to use.
OpenAI’s content moderation policies have already faced scrutiny. The company recently blocked some Sora-generated videos targeting historical figures with racist content, but simultaneously defended other controversial historical depictions by citing “strong free speech interests.”
As video generation technology advances and becomes more accessible, the line between authentic and synthetic media continues to blur, creating unprecedented challenges for information integrity in the digital age.