
OpenAI’s Sora App Raises New Concerns About AI-Generated Video Deception

In just three days following its release, OpenAI’s new video generation app Sora has demonstrated alarming capabilities, with users creating hyper-realistic videos depicting ballot fraud, immigration arrests, protests, and street violence—none of which actually occurred.

The text-to-video app requires only a simple written prompt to generate convincingly realistic footage of virtually any scenario. Users can upload images of themselves to incorporate their likeness and voice into fabricated scenes. The technology can also integrate fictional characters, company logos, and even representations of deceased celebrities into its generated content.

Experts warn that Sora—alongside similar tools like Google’s Veo 3—represents a significant advancement in the ease of creating deceptive content that appears authentic. While concerns about AI-enabled misinformation have grown steadily in recent years, Sora’s capabilities mark a troubling leap forward in both accessibility and believability.

“It’s worrisome for consumers who every day are being exposed to God knows how many of these pieces of content,” said Hany Farid, professor of computer science at the University of California, Berkeley, and co-founder of GetReal Security. “I worry about it for our democracy. I worry for our economy. I worry about it for our institutions.”

The potential real-world consequences are substantial. Increasingly realistic fabricated videos could exacerbate conflicts, enable consumer fraud, influence elections, or even implicate innocent people in crimes they didn’t commit.

OpenAI has defended its approach, stating it released the app after extensive safety testing and implemented various guardrails. “Our usage policies prohibit misleading others through impersonation, scams or fraud, and we take action when we detect misuse,” the company said in a statement.

Testing conducted by The New York Times revealed that some protective measures are in place. The app refused to generate imagery of famous people without their permission and declined prompts requesting graphic violence or certain political content.

However, these safeguards have proven incomplete. Sora, currently available only through invitation from existing users, doesn’t require account verification—potentially allowing users to sign up with false identities. The app readily generates content involving children and historical figures like Martin Luther King Jr. and Michael Jackson.

In one revealing test, the app refused to produce videos of President Trump or other current world leaders directly, but when asked to create a political rally with attendees “wearing blue and holding signs about rights and freedoms,” it generated a video featuring the unmistakable voice of former President Barack Obama.

Until recently, video has remained a relatively trustworthy medium compared to easily manipulated photos and text. Sora’s high-quality output threatens to undermine this last bastion of digital evidence.

“It was somewhat hard to fake, and now that final bastion is dying,” explained Lucas Hansen, founder of CivAI, a nonprofit researching AI capabilities and risks. “There is almost no digital content that can be used to prove that anything in particular happened.”

This phenomenon, known as the “liar’s dividend,” means increasingly sophisticated AI videos will allow people to dismiss authentic content as fake, further eroding public trust in media.

The app’s quick-scrolling interface encourages users to form immediate impressions without critical examination. Experts fear Sora could generate convincing propaganda, fabricate evidence supporting conspiracy theories, implicate innocent people in crimes, or inflame volatile situations.

Though the app blocked direct requests for violent content, it readily created videos of convenience store robberies and home invasions captured on doorbell cameras. In one instance, a Sora developer posted a video showing OpenAI CEO Sam Altman shoplifting from Target—demonstrating how easily the technology can create defamatory content.

Perhaps most concerning, Sora successfully generated videos of bombs exploding on city streets and other fabricated war imagery—highly sensitive content with the potential to mislead the public about global conflicts. While fake and outdated footage has circulated during recent conflicts, AI tools like Sora enable tailored deceptive content delivered algorithmically to receptive audiences.

“Now I’m getting really, really great videos that reinforce my beliefs, even though they’re false, but you’re never going to see them because they were never delivered to you,” noted Kristian J. Hammond, professor at Northwestern University’s Center for Advancing Safety of Machine Intelligence. “The whole notion of separated, balkanized realities, we already have, but this just amplifies it.”

Even Professor Farid, whose work focuses on detecting fabricated images, now struggles to distinguish real from fake content at first glance. “A year ago, more or less, when I would look at it, I would know, and then I would run my analysis to confirm my visual analysis,” he said. “I can’t do that anymore.”



© 2026 Disinformation Commission LLC. All rights reserved.