Japanese authorities and global regulators are increasingly scrutinizing OpenAI’s latest video generation tool, Sora 2, amid mounting concerns over copyright infringement, misinformation risks, and privacy violations.

A coalition of prominent Japanese entertainment companies, including animation powerhouse Studio Ghibli and gaming giants Bandai Namco and Square Enix, has formally accused OpenAI of using copyrighted animation styles and protected content without permission to train its AI model. The Content Overseas Distribution Association (CODA), which represents these companies, has criticized OpenAI’s “opt-out” approach, arguing that it inappropriately shifts the burden of consent onto rights holders.

“This reversal of normal consent procedures fundamentally violates Japan’s copyright principles,” a CODA representative stated. The organization has called for OpenAI to cease using Japanese creative works entirely until proper legal frameworks are established.

The Japanese government has already taken official action, formally requesting that OpenAI refrain from activities that “could constitute copyright infringement” after users reported the tool generating videos resembling popular anime characters and distinctive animation styles. Japan’s recently enacted AI Promotion Act gives authorities expanded powers to investigate AI systems suspected of infringing on citizens’ rights or harming creative industries.

Beyond copyright concerns, Public Citizen, a U.S. consumer advocacy group, has urged OpenAI to suspend Sora 2 operations entirely, warning that its highly realistic video capabilities could be weaponized to create convincing political deepfakes during an election year. The organization highlighted the inadequacy of current safeguards to prevent impersonation or manipulation.

In response to growing criticism, OpenAI has implemented some immediate measures, including pausing the generation of certain public figures like Martin Luther King Jr. after family representatives objected to AI-generated likenesses circulating online. The company has promised to develop more “granular control” options for rights holders and expand filtering tools to block unauthorized representations.

“Sora 2 was trained on licensed and publicly available data,” an OpenAI spokesperson maintained, adding that the company’s goal is to “democratize creative video production.” However, critics argue these assurances fail to address fundamental questions about consent, compensation, and control in AI-generated media.

The regulatory landscape for AI video generation is rapidly evolving across major markets. In Europe, the newly enacted EU Artificial Intelligence Act will soon impose stricter transparency and liability requirements. Under these rules, AI video systems must clearly label synthetic content and maintain comprehensive documentation about training data and content moderation practices.

Japan’s regulatory approach combines the new AI Promotion Act with existing intellectual property and cyber laws, giving authorities broad investigative powers over suspected violations. Meanwhile, the United States operates under a patchwork of state and federal deepfake regulations, including the TAKE IT DOWN Act, enacted in 2025, which criminalizes the publication of non-consensual intimate AI-generated images.

The fragmented regulatory environment creates a complex compliance challenge for AI video developers. While specifics vary by jurisdiction, regulators across major markets are converging on key principles: transparency in AI-generated content, proper consent mechanisms for training data, and accountability for potential misuse.

Industry observers note that these developments signal a significant shift in how generative AI tools will be regulated globally. “We’re seeing a rapid evolution from a largely unregulated space to one with meaningful guardrails,” explained one digital rights expert. “Companies developing these powerful tools can no longer operate on a ‘move fast and figure out the consequences later’ basis.”

For creative industries, particularly animation and gaming companies with distinctive visual styles, the Sora 2 controversy highlights unresolved questions about how AI systems should compensate original creators whose work informs their outputs. As one Japanese animator noted, “When an AI perfectly mimics a style that took decades to develop, what happens to the ecosystem that supported that creativity?”




© 2026 Disinformation Commission LLC. All rights reserved.