AI industry horrified to face largest copyright class action ever certified

https://arstechnica.com/tech-policy/2025/08/ai-industry-horrified-to-face-largest-copyright-class-action-ever-certified/

Ashley Belanger · Aug 08, 2025

AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They've warned that a single lawsuit raised by three authors over Anthropic's AI training now threatens to "financially ruin" the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.

Last week, Anthropic petitioned to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a "rigorous analysis" of the potential class and instead based his judgment on his "50 years" of experience, Anthropic said.

If the appeals court denies the petition, Anthropic argued, the emerging company may be doomed. As Anthropic argued, it now "faces hundreds of billions of dollars in potential damages liability at trial in four months" based on a class certification rushed at "warp speed" that involves "up to seven million potential claimants, whose works span a century of publishing history," each possibly triggering a $150,000 fine.

Confronted with such extreme potential damages, Anthropic may decide it is more prudent to settle than to risk trial, effectively giving up the chance to raise valid defenses of its AI training, the company argued. And that could set an alarming precedent, considering all the other lawsuits generative AI (GenAI) companies face over training on copyrighted materials, Anthropic argued.

"One district court's errors should not be allowed to decide the fate of a transformational GenAI company like Anthropic or so heavily influence the future of the GenAI industry generally," Anthropic wrote. "This Court can and should intervene now."

In a court filing Thursday, the Consumer Technology Association and the Computer and Communications Industry Association backed Anthropic, warning the appeals court that "the district court’s erroneous class certification" would threaten "immense harm not only to a single AI company, but to the entire fledgling AI industry and to America’s global technological competitiveness."

According to the groups, allowing copyright class actions in AI training cases will result in a future where copyright questions remain unresolved and the risk of "emboldened" claimants forcing enormous settlements will chill investments in AI.

"Such potential liability in this case exerts incredibly coercive settlement pressure for Anthropic," industry groups argued, concluding that "as generative AI begins to shape the trajectory of the global economy, the technology industry cannot withstand such devastating litigation. The United States currently may be the global leader in AI development, but that could change if litigation stymies investment by imposing excessive damages on AI companies."

Some authors won’t benefit from class actions

Industry groups joined Anthropic in arguing that, generally, copyright suits are considered a bad fit for class actions because each individual author must prove ownership of their works. And the groups weren't alone.

Also backing Anthropic's appeal, advocates representing authors—including Authors Alliance, the Electronic Frontier Foundation, American Library Association, Association of Research Libraries, and Public Knowledge—pointed out that the Google Books case showed that proving ownership is anything but straightforward.

In the Anthropic case, advocates for authors criticized Alsup for basically judging all 7 million books in the lawsuit by their covers. The judge allegedly made "almost no meaningful inquiry into who the actual members are likely to be," as well as "no analysis of what types of books are included in the class, who authored them, what kinds of licenses are likely to apply to those works, what the rightsholders’ interests might be, or whether they are likely to support the class representatives’ positions."

Ignoring "decades of research, multiple bills in Congress, and numerous studies from the US Copyright Office attempting to address the challenges of determining rights across a vast number of books," the district court seemed to expect that authors and publishers would easily be able to "work out the best way to recover" damages.

But it's never easy, groups said. Consider, for example, how now-defunct publishers could complicate ownership questions for some books involved in the litigation, or how rightsholders who own only a portion of a work, like a chapter or inserts in academic texts, might be affected. The district court apparently didn't even consider "what will be done with authors who are dead and whose literary estates hold rights split across multiple parties." There are also many so-called "orphan works," where "identifying rightsholders to address ownership questions will be impossible." If the class action moves forward, groups warned that the court may have to review "hundreds of mini-trials to sort out these issues."

Further, some authors may never even find out the lawsuit is happening. The court's suggested notification scheme "would require class claimants to themselves notify other potential rightsholders," groups said, overlooking the fact that it cost Google $34.5 million "to set up a 'Books Rights Registry' to identify owners for payouts under the proposed settlement" in one of the largest cases involving book authors prior to the AI avalanche of lawsuits.

Also concerning, the court suggested it was acceptable to certify the massive class because any authors who did not want to join could opt out. But groups warned that this lackadaisical approach put authors who may never hear about the lawsuit—and who perhaps would have litigated their claims differently—in a difficult position, making opt-out "an inadequate answer to a fundamental fairness problem in the formulation of the class and the due process concerns of absent class members."

Some authors and publishers are "already at odds over AI," which may further complicate these cases if legal owners (usually publishers) want to join but beneficial owners (usually authors) don't.

Simply put, "there is no realistic pathway to resolving these issues in a common way," advocates said, despite the district court seeing a common question in Anthropic downloading all their books. And authors ultimately risk prolonging the cloud of uncertainty over AI training on copyrighted materials by pursuing a path likely to force settlements rather than rulings on the merits.

"This case is of exceptional importance, addressing the legality of using copyrighted works" for generative AI, "a transformative technology used by hundreds of millions of researchers, authors, and others," groups argued. "The district court’s rushed decision to certify the class represents a 'death knell' scenario that will mean important issues affecting the rights of millions of authors with respect to AI will never be adequately resolved."