UPDATED 19:43 EST / JUNE 24 2025


Judge sides with Anthropic in landmark AI copyright case, but orders it to go on trial over piracy claims

Anthropic PBC scored a major victory for itself and the broader artificial intelligence industry today when a federal judge ruled that it hasn’t broken the law by training its chatbot Claude on hundreds of legally purchased books that were later digitized without the authors’ permission.

However, the company is still on the hook for millions of pirated copies of books that it downloaded from the internet and used to train its models.

U.S. District Judge William Alsup of the Northern District of California said in a ruling today that the way Anthropic’s models distill information from thousands of written works and produce their own unique text meets the definition of “fair use” under U.S. copyright law, reasoning that the models’ outputs are essentially new works.

“Like any reader aspiring to be a writer, Anthropic’s models trained upon works not to race ahead and replicate or supplant them – but to turn a hard corner and create something different,” Alsup wrote in his judgment.

But although the judge dismissed one of the claims made in a class action lawsuit by a trio of authors last year, he ordered that Anthropic must stand trial in December for allegedly stealing thousands of copyrighted works. “Anthropic had no entitlement to use pirated copies for its central library,” Alsup said.

The lawsuit, filed last summer by authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, alleges that the company’s AI model training practices amount to “large-scale theft” of thousands of copyrighted books. It also alleged that the company sought to profit by “strip-mining the human expression and ingenuity behind each of those works.”

During the case, it was revealed in documents disclosed by Anthropic that a number of its researchers raised concerns about the legality of using online libraries of pirated books. That prompted the company to change its approach and purchase copies of hundreds of digitized works.

But the judge said that although the company later purchased many copies of books legally, that doesn’t absolve it of liability for any earlier thefts. However, he added, it “may affect the extent of statutory damages.”

Analyst Holger Mueller of Constellation Research Inc. said today’s ruling is a landmark judgment and a big victory for U.S. AI companies, as it essentially means the judge has decided to treat AI models as if they were humans.

“The judge is saying that AI models are no different to someone that reads a lawfully-acquired book and learns from it,” Mueller explained. “In other words, AI accumulates knowledge in the same way people do, then goes about creating original works of its own. We have legal precedents justifying this, and so it’s good news for AI vendors, so long as they’re prepared to go out and buy lots of books.”

That said, Mueller believes the judge’s decision to force Anthropic to undergo a trial for its alleged piracy bodes less well for the AI industry, which may well be forced to cough up substantial amounts of cash in compensation. “There are likely going to be some salacious settlements in future,” the analyst said.

Today’s ruling could set a precedent for dozens of similar lawsuits that have been filed against Anthropic’s competitors in the AI industry, including the ChatGPT creator OpenAI, as well as Meta Platforms Inc. and the AI search engine Perplexity AI Inc. Claims of copyright infringement have been piling up against AI companies, with dozens of cases filed by authors, media companies and music labels since 2023, when generative AI burst into the public consciousness. Creators have also signed multiple open letters calling on governments to rein in AI developers and prevent them from using copyrighted works for training their models.

The furor has had a limited impact, with some AI companies responding by signing legal agreements with publishers that allow them to access their copyrighted materials.

Anthropic, which was founded in 2021 by a number of ex-OpenAI employees, has positioned itself as being more responsible and safety-focused, but the lawsuit filed last year charges that its actions “made a mockery of its lofty goals” due to its practice of training its models on pirated works.

In response to today’s ruling, Anthropic did not address the piracy claims, but said it was pleased that the judge had recognized AI training is “transformative and consistent with copyright’s purpose in enabling creativity and fostering scientific progress.”
