Public Citizen Warns of Serious Risks from OpenAI’s Sora 2 Video Generator

Consumer advocacy group Public Citizen issued a stark warning Wednesday about OpenAI’s artificial intelligence video creation tool Sora 2, accusing the company of releasing the technology without adequate safeguards against potential misuse.

In a strongly worded letter to OpenAI CEO Sam Altman, the organization called for a pause in the deployment of Sora 2, citing concerns about its potential to become “a scalable, frictionless tool for creating and disseminating deepfake propaganda” that could influence election outcomes.

“OpenAI must commit to a measured, ethical, and transparent pre-deployment process that provides guarantees against the profound social risks before any public release,” Public Citizen stated in the letter. The group urged OpenAI to work collaboratively with legal experts, civil rights organizations, and democracy advocates to establish robust ethical guidelines before proceeding.

Of particular concern is the technology’s capacity to create unauthorized deepfakes and revenge pornography featuring both public and private individuals without consent. While OpenAI has claimed to have implemented protective measures, Public Citizen contends these safeguards have proven ineffective.

“The safeguards that the model claims [to include] have not been effective,” the advocacy group noted. “For example, researchers bypassed the anti-impersonation safeguards within 24 hours of launch, and the ‘mandatory’ safety watermarks can be removed in under four minutes with free online tools.”

JB Branch, a Big Tech accountability advocate at Public Citizen, characterized the rapid release of Sora 2 as emblematic of OpenAI’s pattern of prioritizing product launches over ethical considerations. “The hasty release of Sora 2 demonstrates a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm,” Branch said.

These concerns have been echoed by technology journalists. In a recent PCMag review, Ruben Circelli warned that Sora 2 would “inevitably be weaponized” due to its ability to generate convincingly realistic videos in minutes.

“A world where you can create lifelike videos, with audio, of anything in just a minute or two for free is a world where seeing is not believing,” Circelli cautioned. “So, I suggest never taking any video clips you see online too seriously, unless they come from a source you can absolutely trust.”

Circelli also questioned the broader utility of such technology, suggesting that the environmental and social costs of developing AI infrastructure might not justify the benefits of generating “AI meme videos.”

The controversy surrounding Sora 2 extends beyond American borders. A coalition of prominent Japanese entertainment companies—including Studio Ghibli, Bandai Namco, and Square Enix—has accused OpenAI of copyright infringement, alleging that the company used their protected works without permission to train Sora 2’s animation capabilities.

These allegations have prompted action from the Japanese government, which has formally requested that OpenAI cease any activities that could constitute copyright violations. This comes after users discovered the tool could generate videos resembling popular anime characters and content from well-known Japanese media franchises.

The mounting criticism highlights the increasingly complex ethical landscape surrounding generative AI technologies. As these tools become more sophisticated and accessible, concerns about misinformation, consent, copyright, and democratic integrity continue to grow.

For OpenAI, which has positioned itself as a leader in responsible AI development, these challenges represent a significant test of the company’s commitment to balancing innovation with ethical considerations and public safety. How the company responds to these concerns could set important precedents for the broader AI industry as it navigates similar tensions between technological advancement and societal protection.

8 Comments

  1. Patricia Taylor

    This is a complex issue with significant societal implications. I appreciate Public Citizen’s call for a pause and collaboration with experts. The potential for abuse of Sora 2 technology is deeply concerning and must be thoroughly addressed.

  2. James C. Thomas

    I’m glad to see a consumer advocacy group like Public Citizen taking a strong stance on this issue. The risks of Sora 2 misuse, from election interference to nonconsensual deepfakes, cannot be taken lightly. Responsible development of AI is crucial.

    • Patricia Taylor

      Absolutely. Proactive measures to mitigate these risks are essential. I hope OpenAI demonstrates a commitment to ethical AI practices and works closely with stakeholders to ensure Sora 2 is deployed safely, if at all.

  3. While AI advancements can be beneficial, the risks of misuse highlighted here are very serious. I hope OpenAI takes this warning seriously and prioritizes safeguards and transparency over rushing this technology to market.

  4. The concerns raised about Sora 2 are well-founded. I hope OpenAI heeds this warning and works closely with relevant experts to establish robust guardrails against potential misuse. Ethical AI development should be the top priority.

  5. William Martinez

    Deepfake propaganda and nonconsensual content creation are major threats to individual privacy and democratic integrity. Public Citizen is right to demand a cautious, collaborative approach from OpenAI before Sora 2 is released.

    • I agree. OpenAI must demonstrate a clear commitment to ethics and accountability before deploying technology with such profound societal implications. Responsible development of AI is crucial.

  6. This is certainly concerning. The potential for AI-generated deepfakes and propaganda to undermine democracy and personal privacy is alarming. I hope OpenAI takes these warnings seriously and collaborates with experts to establish robust safeguards before deployment.

