
Japan’s CODA to OpenAI: Stop Training Sora 2 on Our IP

by ytools


CODA vs. OpenAI Sora 2: Why Japan’s Rights Holders Are Drawing a Legal Line

On October 28, Japan’s Content Overseas Distribution Association (CODA) delivered a written request urging OpenAI to stop training its video generator, Sora 2, on the copyrighted works of CODA’s members without prior permission. The move follows a month of heated debate after Sora 2’s October 1 launch, which ignited social feeds with slick AI-made clips that seemed to echo – sometimes eerily – the look and characters of iconic Japanese franchises.

OpenAI’s CEO Sam Altman celebrated the model’s immediate popularity, noting the strong bond between users and Japanese content. That observation crystallized a dilemma: if a model produces scenes that closely resemble beloved IP – from monster-catching adventures to spirit-filled epics – did it learn those styles from protected materials, and on what legal basis? CODA says the answer, under Japanese law, is straightforward: training or output that leans on copyrighted works requires prior permission. An opt-out mechanism after the fact, the group argues, does not cure the underlying infringement.

Who CODA Represents – and Why It Matters

Founded in 2002 to combat piracy and promote legal distribution abroad, CODA represents heavyweight publishers and studios across anime, manga, and games, including names recognizable to global fans. That membership breadth is crucial. When a rights group that speaks for creators linked to everything from fantasy role-playing sagas to revered animation houses takes a public position, it signals that enforcement might not be a one-off letter – it could become an industry standard.

CODA’s Core Claims

CODA says it has observed substantial Sora 2 output that “closely resembles Japanese content or images,” implying that member works were used as machine-learning data without consent. The group warns that replicating specific copyrighted content through a learned model may constitute infringement. Its letter makes two direct asks: first, stop using member works for training without prior authorization; second, respond sincerely to member claims about infringing outputs, including inquiries that may arrive as the tech spreads.

Opt-Out vs. Permission: A Clash of Systems

Reports surfaced that, shortly before launch, OpenAI contacted certain studios and talent agencies with an option to opt out. It remains unclear, however, exactly who received those notices and whether key Japanese rights holders were included in that compressed window. On launch day, observers noticed that some American IPs appeared more difficult to generate, hinting at filtering. Altman later wrote that the company would “let rightsholders decide how to proceed,” conceding that some depictions might still slip through “edge cases.”

For CODA, that framing misses the point. In Japan, the default expectation is permission first, not opt-out later. The organization stresses that there is “no system” that lets a user avoid liability via objections after the use. Practically, that means if a training set included copyrighted manga panels, game art, or film stills without a license, an opt-out registry does not retroactively make it lawful.

Japan’s Policy Backdrop

Earlier in October, Japan’s Cabinet Office also requested that OpenAI refrain from infringing on domestic IP. Politicians have publicly debated possible recourse under the AI Promotion Act if voluntary measures fail. That posture signals something larger: a potential test case where a nation with strong creator protections insists that generative AI conforms to domestic copyright norms – not the other way around.

The Public Mood: Lawsuits, “Genie Out of the Bottle,” and a Divided Creator Community

The reaction online has been polarized. A loud contingent urges rights holders to sue aggressively, arguing that meaningful deterrence in tech often arrives only when courts begin imposing real costs. Others warn that government interests in headline AI companies could make enforcement selective, especially where investment and national competitiveness loom. A third camp shrugs that Pandora’s box has opened: the tech cannot be uninvented, so the realistic path is to regulate its use, price licenses, and punish the most flagrant misuse while accepting some level of leakage.

There’s nuance too. Some creators are open to opt-in licensing – “use my catalog if you pay me” – seeing distribution gains or new audiences as a fair trade. Others insist that Sora 2 functions as a plagiarism engine when unlicensed works guide its outputs. And many users point to downstream behavior: even if models adopt stricter guardrails, influencer-economy “slop” channels can launder infringing clips at scale, making platform enforcement as critical as training compliance.

What Compliance Might Look Like

If OpenAI aims to meet Japan’s expectations, several measures are on the table:

  • Licensing at the source: Secure explicit, auditable permissions from rightsholders before training, with region-aware terms and revocation pathways.
  • Dataset transparency and audit trails: Maintain verifiable logs of training materials and provenance to sort licensed, public-domain, and forbidden assets.
  • Stronger filters and output checks: Expand blocklists beyond a few marquee Western IPs to include Japanese franchises, art styles tied to unique characters, and recognizable scene grammar.
  • Watermarking and tracing: Embed robust signals to track dissemination and help platforms rapidly remove infringing clips.
  • Creator control panels: Offer granular participation – opt-in by title, franchise, or timeframe – and transparent revenue-sharing when models benefit from licensed catalogs.
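To make the “licensing at the source” and “dataset transparency” items concrete, the permission-first rule CODA describes can be sketched as a dataset gate that admits an asset into training only with affirmative, region-aware clearance. This is a hypothetical illustration only: the names here (`AssetRecord`, `LicenseStatus`, the sample catalog) are invented for the sketch and do not describe OpenAI’s actual pipeline.

```python
from dataclasses import dataclass
from enum import Enum

class LicenseStatus(Enum):
    LICENSED = "licensed"          # explicit, auditable permission on file
    PUBLIC_DOMAIN = "public_domain"
    UNKNOWN = "unknown"            # no provenance record

@dataclass(frozen=True)
class AssetRecord:
    asset_id: str
    source: str                    # provenance: where the asset came from
    status: LicenseStatus
    regions_cleared: frozenset     # markets the license covers, e.g. {"JP"}

def admissible(record: AssetRecord, region: str) -> bool:
    """Permission-first gate: absence of clearance is treated as a denial,
    the opposite of an opt-out model, where absence of objection is consent."""
    if record.status is LicenseStatus.PUBLIC_DOMAIN:
        return True
    return (record.status is LicenseStatus.LICENSED
            and region in record.regions_cleared)

# Hypothetical catalog: one cleared asset, one unvetted web-crawl asset.
catalog = [
    AssetRecord("img-001", "licensed-partner-feed",
                LicenseStatus.LICENSED, frozenset({"JP"})),
    AssetRecord("img-002", "web-crawl",
                LicenseStatus.UNKNOWN, frozenset()),
]
training_set = [r for r in catalog if admissible(r, region="JP")]
```

Under this rule the unvetted web-crawl asset never enters the training set, which is exactly the inversion CODA is asking for: the burden of proof sits with the trainer, not the rights holder.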

Risks on Both Sides

For OpenAI and peers, non-compliance risks litigation, regional restrictions, or forced concessions later. For rights holders, maximalist enforcement could slow access to new discovery channels or fragment standards across markets. The middle path – clear licensing + firm filters – demands operational discipline and legal clarity that the industry has yet to standardize.

Near-Term Outlook

CODA’s letter doesn’t end the debate; it formalizes a red line. Expect sharper demands for dataset documentation, faster takedowns for infringing outputs, and negotiations that resemble music-streaming’s evolution from messy beginnings to licensed infrastructure. Whether through contracts or courts, Japan’s stance suggests that generative video will be judged not just by what it can do, but by whether it can prove it learned the right way.

One thing is clear: this isn’t only about a few famous characters. It’s about who sets the rules of cultural memory in the machine age – and whether creators, not just the models, are compensated and respected as those rules are written.


2 comments

Fanat1k December 21, 2025 - 4:04 am

Sue city. If they used the art to train without asking, bill them for every frame. Simple as

David January 18, 2026 - 3:50 pm

Boring? Nah, this is the copyright boss fight. Someone’s getting nerfed and it won’t be the lawyers 😂

