As artificial intelligence video generation rapidly advances, OpenAI’s Sora 2 platform is facing mounting criticism over its potential to distort reality and infringe on personal rights. Public Citizen, a nonprofit advocacy group, has called for the immediate withdrawal of the app, citing serious concerns about its societal impact and insufficient safeguards.
In a letter addressed to OpenAI CEO Sam Altman and copied to Congress on Tuesday, Public Citizen condemned the company’s “reckless disregard” for product safety and democratic stability. The group accused OpenAI of hastily releasing Sora 2 to beat competitors, continuing what they describe as a “consistent and dangerous pattern” of rushing AI products to market without adequate protections.
“Our biggest concern is the potential threat to democracy,” said J.B. Branch, tech policy advocate at Public Citizen, who authored the letter. “We’re entering a world in which people can’t really trust what they see. The first image or video that gets released is what people remember.”
Sora 2 enables users to create realistic AI-generated videos from text prompts. While many videos are made for entertainment—such as Queen Elizabeth II rapping or fake doorbell camera footage of unusual animal encounters—critics worry about more harmful applications. The technology has already fueled an influx of non-consensual imagery and deepfakes across social media platforms including TikTok, Instagram, X, and Facebook.
The concerns extend beyond political misinformation. Branch highlighted privacy issues that disproportionately affect vulnerable groups online. Despite OpenAI’s restrictions on nudity, women have reported harassment through fetishized content that circumvents the platform’s filters. News outlet 404 Media recently documented a surge in Sora-made videos depicting women being strangled, raising serious questions about content moderation practices.
OpenAI launched Sora for iPhones last month and expanded to Android devices last week in the United States, Canada, Japan, South Korea, and other markets. The company has made some adjustments following backlash, particularly from entertainment industry stakeholders.
Notable interventions occurred after outcry from high-profile figures. OpenAI announced agreements with Martin Luther King Jr.’s family on October 16 to prevent “disrespectful depictions” of the civil rights leader. Similarly, on October 20, the company reached an agreement with actor Bryan Cranston, the SAG-AFTRA union, and talent agencies regarding the use of performers’ likenesses.
“That’s all well and good if you’re famous,” Branch noted. “It’s sort of just a pattern that OpenAI has where they’re willing to respond to the outrage of a very small population. They’re willing to release something and apologize afterwards. But a lot of these issues are design choices that they can make before releasing.”
OpenAI is simultaneously facing legal challenges related to ChatGPT, its text-based AI system. Seven lawsuits filed last week in California allege the chatbot drove users to suicide and harmful delusions. The plaintiffs claim OpenAI released GPT-4o prematurely despite internal warnings about its psychologically manipulative tendencies.
“They’re putting the pedal to the floor without regard for harms,” Branch said. “Much of this seems foreseeable. But they’d rather get a product out there, get people downloading it, get people who are addicted to it rather than doing the right thing and stress-testing these things beforehand.”
The Japanese animation industry has also voiced concerns. A trade association representing renowned creators like Hayao Miyazaki’s Studio Ghibli and game developers Bandai Namco and Square Enix has protested potential copyright infringement. OpenAI responded by emphasizing its guardrails to prevent unauthorized generation of well-known characters.
“We’re engaging directly with studios and rightsholders, listening to feedback, and learning from how people are using Sora 2, including in Japan, where cultural and creative industries are deeply valued,” OpenAI stated last week.
As of Tuesday, OpenAI had not responded to requests for comment on Public Citizen’s withdrawal demand, leaving questions about the future of Sora 2 and the broader ethical framework for AI-generated content unanswered.