In a move that underscores growing concerns about artificial intelligence technology, consumer advocacy group Public Citizen has issued a stark warning about OpenAI’s latest video generation tool, Sora 2. In a letter addressed directly to OpenAI CEO Sam Altman, the organization criticized the company for rushing the product to market without implementing adequate safeguards.
The letter, sent on November 12, called on OpenAI to “commit to a measured, ethical, and transparent pre-deployment process” before any public release, emphasizing the need to provide “guarantees against the profound social risks” posed by the technology. Public Citizen urged the AI company to pause Sora 2’s deployment and engage with legal experts, civil rights organizations, and democracy advocates to establish robust technological and ethical boundaries.
Chief among Public Citizen’s concerns is Sora 2’s potential to become “a scalable, frictionless tool for creating and disseminating deepfake propaganda” that could influence election outcomes. The organization also highlighted the technology’s capability to generate unauthorized deepfakes and revenge pornography involving both public figures and private individuals without their consent.
While OpenAI claims to have built protections into the system, Public Citizen cited research suggesting these safeguards are easily circumvented. “The safeguards that the model claims [to have] have not been effective,” the group noted. “For example, researchers bypassed the anti-impersonation safeguards within 24 hours of launch, and the ‘mandatory’ safety watermarks can be removed in under four minutes with free online tools.”
JB Branch, Big Tech accountability advocate at Public Citizen, characterized the rushed release as demonstrating “a reckless disregard for product safety, name/image/likeness rights, the stability of our democracy, and fundamental consumer protection against harm.”
The controversy surrounding Sora 2 extends beyond Public Citizen’s criticism. In a recent PCMag review, journalist Ruben Circelli warned that the tool would “inevitably be weaponized” due to its ability to create convincingly lifelike videos. “A world where you can create lifelike videos, with audio, of anything in just a minute or two for free is a world where seeing is not believing,” Circelli cautioned, advising readers to be skeptical of online video content unless it comes from trustworthy sources.
Circelli also questioned OpenAI’s data protection practices and the overall utility of such video generation platforms, asking whether “the ability to generate AI meme videos” justifies building “60 football fields’ worth of AI infrastructure every week or uprooting rural families.”
The controversy has international dimensions as well. A coalition of prominent Japanese entertainment companies, including Studio Ghibli, Bandai Namco, and Square Enix, has accused OpenAI of copyright infringement, claiming the company used their copyrighted works to train Sora 2’s animation capabilities without permission.
These allegations have prompted action from the Japanese government, which has formally requested that OpenAI refrain from actions that could constitute copyright violations, particularly after the tool produced videos resembling popular anime characters and video game intellectual property.
The Sora 2 controversy highlights the growing tension between rapid technological advancement and responsible innovation in the AI sector. As generative AI tools become increasingly sophisticated and accessible, questions about their ethical implementation, potential for misuse, and appropriate regulatory frameworks continue to mount.
For OpenAI, which has positioned itself as a leader in responsible AI development, the criticism represents a significant challenge to its public image and stated commitment to safety. How the company responds to these concerns could shape not only the future of Sora 2 but also broader industry standards for AI deployment and safeguards.
17 Comments
I hope OpenAI takes Public Citizen’s warnings seriously. The risks of Sora 2 being misused for propaganda and abuse are simply too high to ignore.
Deepfakes and AI-driven propaganda are real threats that can’t be ignored. Public Citizen is right to demand a responsible, transparent process from OpenAI before unleashing this technology on the world.
This is a crucial moment for OpenAI. They must demonstrate a genuine commitment to responsible AI development or risk undermining public trust in the technology.
The potential for AI-powered deepfakes and harassment is truly concerning. We need rigorous testing, clear guidelines, and meaningful accountability measures to prevent these tools from causing real harm.
This is certainly a concerning development. The potential for AI-generated deepfakes and propaganda to undermine democratic processes is alarming. OpenAI needs to tread very carefully here and prioritize robust safeguards and ethical oversight.
AI-generated deepfakes and harassment could have devastating societal impacts. OpenAI needs to step up and take a leadership role in establishing ethical guidelines for this technology.
I’m glad to see a watchdog group like Public Citizen stepping up to hold OpenAI accountable. Responsible development of AI is crucial, and they’re right to demand a measured, ethical approach before any public release of Sora 2.
As an AI enthusiast, I’m concerned about the potential misuse of these powerful technologies. OpenAI must take a cautious, collaborative approach to ensure Sora 2 isn’t exploited for malicious ends.
The concerns raised by Public Citizen about Sora 2 are well-founded. OpenAI needs to demonstrate real leadership and a commitment to ethical AI, not just chase the next big thing.
This is a complex issue with major implications for democracy and individual privacy. OpenAI must engage deeply with diverse stakeholders to get the balance right between innovation and safeguards.
Absolutely. The risks of getting this wrong are far too high. OpenAI needs to listen closely to civil rights groups and democracy advocates to ensure Sora 2 doesn’t become a weapon against the public good.
As an AI researcher, I’m deeply concerned about the potential for Sora 2 to be weaponized. OpenAI must work closely with experts to ensure proper safeguards are in place.
Agreed. Responsible AI development requires ongoing collaboration with diverse stakeholders, not just rushing to market. OpenAI has an obligation to get this right.
Rushing to market with powerful AI tools like Sora 2 without proper oversight is reckless. Public Citizen is right to call for a pause and a thorough, transparent review process.
The ability to generate realistic deepfakes at scale is a double-edged sword. OpenAI has an ethical duty to implement robust safeguards and partner with civil society to mitigate the risks.
I appreciate Public Citizen’s proactive stance in raising these critical issues. AI companies cannot afford to rush new technologies to market without fully addressing the societal risks. Transparency and collaboration with civil society groups are essential.
Agreed. Rushing ahead without adequate precautions could lead to disastrous consequences for public trust and the integrity of our information ecosystem.