How to Train Your AI (Put Another Way)
7/22/25
Large Language Models (LLMs); Artificial Intelligence; Fair Use
Here’s What Happened:
Another court has weighed in on whether copyrighted works can be used to train AI. In our last post, we discussed a ruling that Anthropic PBC's use of works it had purchased to train AI was fair use, but that its use of pirated works was not.
Like Anthropic, Meta Platforms, Inc. used copyrighted works to train its large language model (LLM), called Llama. But Meta did not purchase any of the works. Instead, Meta copied a wide range of works without permission and used them to train Llama.
Richard Kadrey and 12 other fiction authors sued Meta for copyright infringement. After discovery, both sides brought motions for summary judgment.
The court granted Meta's motion for summary judgment. Applying the four-factor fair use test, the court determined that Meta's use of the copyrighted works, even without permission, was fair use:
- Purpose and Character of the Use. The use of the books to train Llama was transformative because Llama learned language patterns from them and did not reproduce them. Because the use was highly transformative, the fact that Meta copied the works without permission did not change the conclusion that this factor weighed in favor of fair use.
- The Nature of the Original Work. The works were creative works of fiction, so this factor weighed against fair use.
- The Amount of the Original Work Used. Although Meta copied the works in their entirety to train Llama, the model could not be made to output more than 50 words of any of the works at a time. This factor weighed in favor of fair use.
- The Effect of the Use on the Market for the Original. The plaintiffs could not produce evidence of actual market harm or licensing disruption. This factor weighed in favor of fair use.
Because the factors, weighed together, favored fair use, Meta was entitled to summary judgment.
WHY YOU SHOULD KNOW THIS: As more courts grapple with the use of copyrighted works to train LLMs, inconsistent rulings, like the differing treatment of pirated works here and in the Anthropic case, are likely to create chaos rather than clarity.
Cited Authority: Kadrey v. Meta Platforms, Inc., No. 23-CV-03417-VC, 2025 WL 1752484, at *1 (N.D. Cal. June 25, 2025)