Judge May Decide Generative AI's "Fair Use" Defense Pre-Trial
Generative AI Companies Depend on "Fair Use" to Defend Against Claims of Infringement. But Will the Courts Let Them? And Should They (or Juries) Decide the Issue?
Wake up! It means it’s time for your Monday morning weekly brAIn dump! This one’s an abbreviated one, because I’ve crossed “the pond” into the UK and Europe for several days to see how generative AI is impacting media and entertainment over there. So first, it's straight to “the mAIn event” (this week’s focus is a major “fair use” development in one of the key generative AI copyright infringement cases). Next, it’s the “AI legal case tracker” (updates on other key AI infringement cases). So … LET’S DO THIS!!!
(FYI: 2/3 of voters in last week’s poll thought that Apple’s iPhone 16 announcement would fall flat and disappoint; and, in my view, it did).
I. The mAIn event - Federal Judge May Decide GenAI’s “Fair Use” Defense Issue Pre-Trial
Generative AI companies face increasing copyright infringement litigation for using copyrighted content to train their AI models — all without consent or compensation (I've written about this several times, of course). And, predictably, their defense is always "fair use."
Well now, in a major development in a high-profile generative AI "training" case against Nvidia, federal judge Jon S. Tigar (U.S. District Court, Northern District of California) ruled that he will allow the parties to file summary judgment motions on the fundamental issue of fair use. Summary judgment is a pre-trial motion in which the Court applies the relevant law to the facts presented to it and then decides whether to make a final determination on the issue itself, or instead leave it for the jury to decide at trial. That means Judge Tigar has given himself the opportunity to decide whether Big Tech's universal chorus of "fair use" does — or does not — protect them in this generative AI training context. And if the Court ultimately rejects Nvidia's defense, that would be significant new precedent that other courts may follow.
The relevant case is a class action lawsuit filed by three authors — Abdi Nazemian, Brian Keene, and Stewart O'Nan — against Nvidia in March 2024. The authors accuse Nvidia of using their copyrighted works without permission to train its NeMo Megatron-GPT AI model. Nvidia's training data set reportedly included nearly 200,000 books, all of which Nvidia later removed after infringement concerns were raised. The plaintiffs point to Nvidia's "cleansing" actions as an acknowledgment of wrongdoing.
In a separate litigation — Thomson Reuters v. ROSS Intelligence — federal judge Stephanos Bibas (U.S. District Court for the District of Delaware) just reversed course and is now also allowing motions for summary judgment on this fair use issue.
So the Courts appear to be increasingly willing to at least consider deciding that dispositive “fair use” defense themselves. And as I’ve written and discussed many times before, I see real risk here for generative AI companies — and can absolutely see Courts rejecting fair use (you can listen to my recent analysis here from my podcast episode).
If Nvidia, ROSS Intelligence or other defendants lose their fair use defense on summary judgment, that would be a big blow that could impact the entire field of AI training licensing and data transfer (especially since early cases and rulings have out-sized precedential impact). If, on the other hand, the courts decline to rule in the plaintiffs' favor, those fair use issues simply go to trial instead (where, in most cases, juries will decide them).
In other words, a loss by plaintiffs on the fair use issue at summary judgment would be far less damaging to them than it would be for defendants.
Stay tuned. I’ll continue to report on this as it all plays out.
Until then, it’s time for generative AI companies and media companies to either move forward on litigation — or instead enter into AI training agreements that are fair for all involved. Let’s be clear. This generative AI train won’t stop. It’s just getting started.
(Thanks to Professor Edward Lee of Santa Clara University School of Law for bringing this case to my attention. His LinkedIn feed is worth following.)
II. The AI Legal Case Tracker - Updates on Key AI Litigation
I lay out the facts - and latest critical developments - via this link to the "AI case tracker". You'll get everything you need (including my detailed analysis) for each of the cases listed below. Lots of recent legal AI-ctivity (including a recent ruling by a federal judge in California against Stability AI and others, about which I wrote, that helped the cause of copyright owners). So much you need to know. So little time. I do the work so you don't have to!
Here’s the bottom line: generative AI companies should get access to the content they need (it’s all just “data” to them) only when they seek consent from — and pay compensation to — the relevant content/copyright owners whose content they scrape. And when they don’t, they will increasingly find themselves in court.
(1) The New York Times v. Microsoft & OpenAI
(2) UMG Recordings v. Suno
(3) UMG Recordings v. Uncharted Labs (d/b/a Udio)
(4) Universal Music Group, et al. v. Anthropic
(5) Sarah Silverman, et al. v. Meta (class action)
(6) Sarah Silverman v. OpenAI (class action)
(7) Getty Images v. Stability AI and Midjourney