Consumer Watchdog Demands OpenAI Withdraw Sora 2 Over Misinformation Concerns
Public Citizen, a prominent non-profit consumer advocacy group, has called for OpenAI to withdraw its video-generation software Sora 2, citing serious concerns about misinformation and privacy violations. In a letter addressed to the company and CEO Sam Altman on Tuesday, the organization accused OpenAI of rushing the application to market ahead of competitors without adequate safeguards.
The watchdog group characterized the launch as part of a “consistent and dangerous pattern” of OpenAI prioritizing market position over product safety. According to Public Citizen, Sora 2 demonstrates a “reckless disregard” for both product safety and individuals’ right to control their own likeness.
“Our biggest concern is the potential threat to democracy,” said J.B. Branch, Public Citizen’s tech policy advocate, who authored the letter. “I think we’re entering a world in which people can’t really trust what they see. And we’re starting to see strategies in politics where the first image, the first video that gets released, is what people remember.”
OpenAI did not immediately respond to requests for comment on the advocacy group’s demands. The letter was also sent to members of the U.S. Congress, highlighting the potential regulatory implications.
Sora 2, which was released on iPhones more than a month ago and expanded to Android devices in the U.S., Canada, Japan, South Korea, and several other Asian countries last week, enables users to generate realistic videos based on text prompts. These videos, designed to be shareable on platforms like TikTok, Instagram, and Facebook, range from the absurd—such as the late Queen Elizabeth II rapping—to the deceptively realistic.
The technology has already sparked controversy beyond Public Citizen’s concerns. Last week, news outlet 404 Media reported on a disturbing trend of Sora-generated videos depicting women being strangled, highlighting how the app’s restrictions fail to prevent all forms of harmful content.
Branch noted that while OpenAI blocks nudity, “women are seeing themselves being harassed online” in other ways, pointing to a broader issue of how AI-generated content can be weaponized against vulnerable groups.
OpenAI has shown responsiveness to high-profile complaints, particularly from entertainment industry interests. The company announced agreements with Martin Luther King Jr.’s family on October 16 to prevent “disrespectful depictions” of the civil rights leader. Similarly, on October 20, OpenAI reached an understanding with Breaking Bad actor Bryan Cranston, the SAG-AFTRA union, and talent agencies regarding the unauthorized use of actors’ likenesses.
“That’s all well and good if you’re famous,” Branch observed. “It’s sort of just a pattern that OpenAI has where they’re willing to respond to the outrage of a very small population. They’re willing to release something and apologize afterwards. But a lot of these issues are design choices that they can make before releasing.”
The controversy surrounding Sora 2 echoes similar issues with OpenAI’s flagship product, ChatGPT. Seven lawsuits filed last week in California allege that the chatbot drove people to suicide and harmful delusions. The lawsuits claim OpenAI knowingly released GPT-4o prematurely despite internal warnings about its potentially manipulative nature.
Japanese content creators have also voiced opposition to Sora 2. A trade association representing renowned animation studios, including Hayao Miyazaki’s Studio Ghibli, and game developers such as Bandai Namco and Square Enix, raised concerns about the app’s ability to generate unauthorized content based on copyrighted characters.
While OpenAI defended the app’s capabilities, stating many anime fans want to interact with their favorite characters, the company acknowledged the need for guardrails to protect intellectual property rights.
“We’re engaging directly with studios and rights holders, listening to feedback and learning from how people are using Sora 2, including in Japan, where cultural and creative industries are deeply valued,” OpenAI stated in response to the trade group’s letter last week.
As AI-generated content becomes increasingly sophisticated and accessible, the debate around appropriate safeguards, consent, and the potential for misuse continues to intensify, placing companies like OpenAI at the center of a growing ethical and regulatory storm.