Court Rejects AI's "Fair Use": Now What?
How Media/Tech Dynamics Shift, Damages Assessed, The Supreme Court (& The Trump Wild Card)
Welcome to this week’s “brAIn” dump. Wow, what a week! The Thomson Reuters court’s no-holds-barred rebuke of “fair use” as a defense to unlicensed AI scraping was the shot heard around the AI world. So, my “mAIn event” asks the next obvious questions: (1) How should AI developers act? (2) How will courts even begin to assess damages? (3) What would the Supreme Court do on appeal? And (4) What will Congress do (especially now with Trump-backed AI Accelerationists firmly in command)? It’s an important analysis of where all of this leads. Next, the “mosAIc” — a collage of key AI stories and podcasts. Then, Partner Avery Williams of leading rights-holder trial firm McKool Smith gives his detailed legal analysis of the Thomson Reuters case. Finally, it’s the “AI Litigation Tracker” — updates on key generative AI/media cases by McKool Smith (you can also access the full “Tracker” here via this link).
I. The mAIn Event - “Fair Use” Fail: So, Now What?
Just a few days ago, the first federal court to rule on the issue of “fair use” in the context of AI training on unlicensed works flatly rejected it as a defense, as a matter of law. In my last newsletter, I laid out why Judge Stephanos Bibas did it (you can refer back to that post rather than my repeating it all here).
To be clear, Judge Bibas pointed out that Thomson Reuters does not involve generative AI — i.e., the “output/display” side of the infringement equation — and some have grabbed onto that thread to try to limit the court’s punchline. But that genAI v. non-genAI distinction on the issue of “fair use” for unlicensed content scraping/harvesting for training purposes — i.e., the input side of the equation — is, in the words of many Supreme Court Justices over time, “a distinction without a difference.” Bibas rejects “fair use” for unlicensed AI scraping, period. Full stop. Unlicensed scraping is unlicensed scraping by any other name. Attempts to say otherwise are simply a Star Wars-inspired Silicon Valley “genAI mind-trick.” (Note: as I’ve written previously, three different contexts for copyright infringement exist in AI cases — input, output, and RAG.)
So, Judge Bibas’s rejection of “fair use” is now precedent for all AI training cases (genAI or not), and other courts are sure to follow. Not all of them, mind you. But I’m confident most will (and I correctly predicted that courts would reject “fair use”). The Thomson Reuters case is far from over. It likely will be appealed. But it’s now out there for all to see. Which raises the question: what happens now?
So What Do Generative AI Developers Do Now?
Judge Bibas’s no-holds-barred rejection of “fair use” likely will have — and should have — immediate, significant ramifications. Now the AI industry is on formal notice that, at a minimum, it is reasonable to assume that “scraping” without consent and compensation is infringement. And that means that the power dynamics between AI developers and media/entertainment companies (and the entire creative community) have shifted too — which likely will lead to a flurry of new AI content licensing activity at higher pricing (to achieve, in the minds of IP rights-holders, a more fair allocation of value to the content ("data”) that gives AI its wings).
At a minimum, blanket statements of “fair use” certainty no longer hold water. As The Avett Brothers would sing, now there’s “A Headful of Doubt” (by the way, check out that song - it just streamed from my playlist as I wrote that sentence; coincidence? I think not). Tech companies now face a new day with very real legal risk — at very significant potential monetary scale — breathing down their necks. With each new day comes more infringing conduct (at least in the mind of Judge Bibas). And continuing such behavior — in the face of notice for it to stop — can lead to significantly enhanced statutory penalties — liability multipliers meant to punish bad conduct.
So, what’s an AI developer to do in the face of that spicy meatball? Stop the AI presses? No one really knows. But one thing is certain. AI training on unlicensed copyrighted works likely isn’t stopping any time soon. Which leads to the next logical question …
But How To Measure Damages?
Assume for a moment that Thomson Reuters’ groundbreaking “fair use” win ends the case and other courts follow suit (that’s not where we are yet, but just assume for purposes of this analysis). What are the remedies for winning rightsholders? How would courts even begin to assess monetary damages when the harm caused by an entire internet’s worth of unlicensed scraping has already been done (and can’t be undone unless existing LLMs are scrapped and AI training starts anew with licensed content only)? It’s almost impossible to fathom. Theoretical damage awards under copyright and related laws are downright astronomical!
Courts have the power, under copyright law, to award “actual damages and any additional profits of the infringer” or, instead, so-called “statutory damages.” Those run from $750 up to $30,000 per infringed work — or up to $150,000 per work if “the infringement was committed willfully.” That’s per work! How many works are infringed when AI developers scrape the entire internet? And is it now a “willful” act to scrape without consent in the face of Judge Bibas’s flat-out rejection of “fair use”? Certainly Meta’s behavior, even before his decision, has been called into question in the Kadrey case (you can read about it in the AI Litigation Tracker). I’ve asked several experts how courts would even begin to assess damages in these AI cases, and no one really knows. And then let’s not forget the real-world implications of it all.
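To get a feel for just how astronomical those ceilings get, here’s a back-of-the-envelope sketch of the statutory maximums under 17 U.S.C. § 504(c). The work counts below are purely hypothetical illustrations — no court has found any of these numbers:

```python
# Back-of-the-envelope statutory damages ceilings under 17 U.S.C. § 504(c).
# The work counts used below are purely hypothetical, for illustration only.

STATUTORY_MAX = 30_000   # per infringed work (non-willful ceiling)
WILLFUL_MAX = 150_000    # per infringed work if willfulness is proven

def max_exposure(works_infringed: int, willful: bool = False) -> int:
    """Upper-bound statutory damages for a given number of infringed works."""
    per_work = WILLFUL_MAX if willful else STATUTORY_MAX
    return works_infringed * per_work

# Even a (hypothetical) corpus of 1 million registered works yields staggering ceilings:
print(f"${max_exposure(1_000_000):,}")                # $30,000,000,000
print(f"${max_exposure(1_000_000, willful=True):,}")  # $150,000,000,000
```

One million works is a tiny fraction of an internet-scale training corpus, which is why the “how would courts even assess this?” question has no good answer yet.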
Is the right answer to stop genAI development dead in its tracks? That’s never gonna happen anyway (nor could or should it). Once new technology has left the barn, that horse ain’t comin’ back. We’ve learned that time and time again over the past several decades with the internet, streaming and social media.
So good luck with all that, courts! We’ll be checking your math once you get to that point. (Here’s a fun “insider” fact on that note: just-graduated law school “kids” are frequently the ones who advise federal district court judges — like Bibas — about how to consider these things. I know first-hand, because my first gig was as a law clerk to Chief Judge Harold Fong of the federal district court in Hawaii. Yes, I know it sounds glamorous. But it really wasn’t (although Fong was a great guy)).
What Would The Supreme Court Do?
Moving past the nearly impossible question of allocating damages when infringement is found, it’s a virtual certainty that the rejection of “fair use” will be appealed by the defendants in Thomson Reuters or in cases that follow. Given the massive stakes at hand, it’s also reasonable to assume that at least one such case will wind its way up to the hallowed halls of the Supreme Court.
So how would the nine Supremes decide the issue?
Thankfully, I’ve written extensively about this possibility (probability?) already, so there’s no need to lay it out again here (you can read my full analysis of the Supreme Court’s most recent major copyright “fair use” rejection in the Andy Warhol/Prince case via this link). But here’s my punchline. I’m confident that even this wild and crazy Supreme Court would reject “fair use” and uphold Judge Bibas-like decisions on the basis that (i) unlicensed scraping eradicates lucrative AI “licensing market” opportunities for rights-holders, and (ii) genAI applications frequently directly compete with rights-holders for commercial “market substitution” purposes. Justice Sonia Sotomayor makes both points, using that quoted language, in her 7-2 majority opinion.
On the first point, a lucrative AI content licensing market already exists for media companies that is directly adversely impacted by attempts to end-run it. News Corp’s $250+ million licensing deal with OpenAI for text content only (no video included) is just one such deal. And on the second point, most general purpose AI applications like Perplexity and ChatGPT are developed to be the single source for all things, including news, for example. That takes away commercial opportunities for The New York Times and other news sources. And that’s “market substitution” (and one reason The Times is litigating against OpenAI).
Sotomayor’s rationale guided Judge Bibas in Thomson Reuters too. He applied her “market substitution” test as binding precedent. It is now the law of the land.
Or is it? Read on if you dare …
The Trump Card: Would Congress Take The Bait?
Under normal circumstances, yes, Andy Warhol/Prince would be the law of the land. But these ain’t no normal circumstances. We know this.
Only one thing can trump Supreme Court decisions. Congress. (Well, actually two things, because the Court can later reverse itself, but you know what I mean). Congress can stare down Supreme Court decisions it doesn’t like and blow them out of the water by enacting overruling legislation. Remember, it was Congress that codified the four-factor “fair use” analysis used by the courts in the first place (Section 107 of the Copyright Act). So yes, it is possible that Congress becomes “motivated” to pass new legislation to expand “fair use” to include training AI on unlicensed copyrighted works. (That thought alone sends a chill down my spine.)
But is that a real possibility? You be the judge.
We’re in uncharted waters. After all, Trump’s new “VP” Elon isn’t exactly one to hold back when it comes to unbridled AI dreams. Neither is much of Silicon Valley — a land filled with AI “Accelerationists.” Mix that predilection for a full-steam-ahead AI approach with the new “DeepSeek AI China threat” and you’ve got yourself a potent AI elixir. “National security” über alles, right? Guardrails be damned! That could mean copyright too.
But would Congress really heed that clarion Accelerationist call? Again, you be the judge. Congress isn’t exactly checking and balancing these days. That’s not a political statement, by the way - I think we can all agree on that. At a minimum, I highly doubt that The West Wing cheers Bibas’s “fair use” rejection. And Mr. Trump always has a phone by his side.
What’s Congress’s phone number again?
Listen to my latest “the brAIn” podcast, which is a smart, insightful, and entertaining discussion of my article that I generated using Google NotebookLM. I approve its content, with the exception of the “synthetic content” discussion near the end.
(For those of you interested in learning more or exploring AI licensing opportunities, reach out to me at peter@creativemedia.biz).
II. The mosAIc — My “Must Read,” “Must Listen” Playlist
(1) The Copyright Office Is Beginning To Show Its AI Cards
Last week, the Copyright Office permitted copyright registration for at least two AI-generated visual works. First, a visual collage consisting of AI-generated elements — on the basis of sufficient human effort via “collage, selection and arrangement” (read about this first one here). Second, an AI-generated video with AI-generated music, based on the editing of the AI-generated elements (read about this second one here). Does this give clarity on the critical issue of how much human effort is sufficient to obtain copyright protection? Methinks not. But maybe I’m missing something? Read more about it here.
(2) New Research Says AI Dependence May Erode Critical Thinking
Well, duh? Microsoft and Carnegie Mellon University did the research to reach that conclusion. Read about the study here. And here’s a related story by expert Shelly Palmer titled, “Will GPT-5 Make Us Think Alike?”
(3) JD Vance’s AI “Accelerationism” Shocks Much Of The World
The third-in-command (after Musk) addressed the global “AI Action Summit” last week in Paris, and left many in the audience speechless with his no-holds-barred approach. Read more about it here.
(4) Hard Fork Is A “Must Listen” AI-Focused Podcast
It’s not only about AI, but usually is these days. And that’s a good thing. It comes from The New York Times and is a critical resource. The latest episode (link here) focuses on the Paris “AI Action Summit” referred to above. It’s fascinating. Sobering. A “must.”
III. Thomson Reuters v. Ross Intelligence: The “Fair Use” Shot Heard Around The AI World
Legal Analysis by Avery Williams, Partner at McKool Smith
On February 11th, in a case that comes tantalizingly close to deciding the issue of “fair use” in generative AI model training (with many taking the position that the issue is now firmly decided, as laid out below), Third Circuit Judge Stephanos Bibas, sitting by designation in the District of Delaware, ruled that the “fair use” doctrine does not protect the use of West Headnotes in determining what to display in response to a user query. Thomson Reuters v. Ross Intelligence involves an AI search tool made by the now-defunct Ross Intelligence (“Ross”). Ross’s tool accepted user queries on legal questions and responded with relevant case law. To determine what cases to provide in response to user queries, Ross compared the user queries to “Bulk Memos” from LegalEase, which were written using West Headnotes. Boiling it down, when a user’s query contained language similar to a West Headnote, Ross’s tool would respond by providing the cases that the West Headnote related to.
While Ross’s tool was not a modern generative AI model (it didn’t use a transformer model or perform next-token prediction to generate unique output for queries), an important similarity exists between Ross’s use of West Headnotes and the way generative AI models train on other copyrighted materials. Ross’s tool did not actually reproduce the West Headnotes in response to a user’s query. Ross used the Headnotes just for “training” — that is, to determine what to produce in response to a user’s query. It is easy to draw an analogy between Ross’s use of West Headnotes to determine what cases are responsive to a user’s query and OpenAI’s use of The New York Times articles to determine how to respond to a question about politics (see the separate summary of The New York Times case against OpenAI below). The technology is different, but the themes are similar.
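To make that distinction concrete, here is a deliberately oversimplified sketch of the lexical-matching idea the court describes: query text is compared against headnote text, but only the underlying case is ever returned to the user. Ross’s actual system was far more sophisticated, and the headnotes, case names, and similarity measure below are purely illustrative assumptions, not anything from the record:

```python
# Simplified illustration of lexical query-to-headnote matching.
# The headnote text guides retrieval but is never shown to the user --
# the crux of the "used only for training/matching" argument.

def tokenize(text: str) -> set[str]:
    """Lowercase bag-of-words tokenization."""
    return set(text.lower().split())

def jaccard(a: set[str], b: set[str]) -> float:
    """Word-overlap similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical (headnote, case) pairs -- illustrative only.
INDEX = [
    ("a contract requires offer acceptance and consideration", "Smith v. Jones"),
    ("negligence requires duty breach causation and damages", "Doe v. Acme Corp."),
]

def answer(query: str) -> str:
    """Return the case tied to the most similar headnote (never the headnote itself)."""
    q = tokenize(query)
    best = max(INDEX, key=lambda pair: jaccard(q, tokenize(pair[0])))
    return best[1]

print(answer("what are the elements of negligence causation and damages"))
# -> Doe v. Acme Corp.
```

Note the parallel: the copyrighted text (the headnote) never appears in the output, yet it did all the work of deciding what the output would be — which is exactly why the court treated the “input side” use as infringing despite the non-infringing display.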
In that context, the Court’s grant of summary judgment against Ross’s fair-use defense — as a matter of law — provides insight into how another court might rule in a generative AI training case. “Fair use” is based on four factors: (1) the purpose and character of the use, (2) the nature of the copyrighted work, (3) the amount of the work used, and (4) the potential impact on the market. The Thomson Reuters Court found that factors two and three favored Ross because of the low degree of creativity involved in carving out headnotes from cases, as well as the fact that Ross did not output the headnotes themselves but rather judicial opinions. However, factor one favored Thomson Reuters because of the commercial nature of Ross’s product and the fact that it was not transformative. The Court noted that Ross’s product was not generative AI, suggesting that a generative AI product could be more transformative than the simpler lexical searching tool that Ross made. Finally, the fourth factor — “undoubtedly the single most important element of fair use” — favored Thomson Reuters because of the potential impact on Thomson Reuters’ ability to sell its own data for use in training AI if Ross’s use were permissible. On balance, the Court flatly rejected Ross’s “fair use” defense as a matter of law. That question will not go to a jury.
AI developers will undoubtedly focus on the issue of transformative use in the generative AI fair-use battles to come, but the “commercial use” and “market impact” factors will continue to favor content owners over generative AI companies. We have already seen several massive licensing deals in which companies like Reuters and Reddit profit from the sale of their own data. If courts continue to weight the “market impact” factor as heavily as the Thomson Reuters Court did, then OpenAI, Suno, and the like will have an uphill battle proving their “fair use” defenses.
Follow Peter Csathy on BlueSky via this link.
You can also continue to follow my longer daily posts on LinkedIn via this link.
IV. AI Litigation Tracker: Updates on Key Generative AI/Media Cases (by McKool Smith)
Partner Avery Williams and the team at McKool Smith (named “Plaintiff IP Firm of the Year” by The National Law Journal) lay out the facts of — and latest critical developments in — the key generative AI/media litigation cases listed below. All those detailed updates can be accessed via this link to the “AI Litigation Tracker”.
The Featured Updates:
The Thomson Reuters “Fair Use” Earthquake
(1) The New York Times v. Microsoft & OpenAI
(2) In re OpenAI Litigation (class action)
(3) Dow Jones, et al. v. Perplexity AI
(4) UMG Recordings v. Suno
(5) UMG Recordings v. Uncharted Labs (d/b/a Udio)
(6) Getty Images v. Stability AI and Midjourney
(7) Universal Music Group, et al. v. Anthropic
(8) Sarah Anderson v. Stability AI
(9) Raw Story Media v. OpenAI
(10) The Center for Investigative Reporting v. OpenAI
(11) Authors Guild et al. v. OpenAI
NOTE: Go to the “AI Litigation Tracker” tab at the top of “the brAIn” website for the full discussions and analyses of these and other key generative AI/media litigations. And reach out to me, Peter Csathy (peter@creativemedia.biz), if you would like to be connected to McKool Smith to discuss these and other legal and litigation issues. I’ll make the introduction.
About My Firm Creative Media
My firm and I specialize in market-defining strategy and content licensing for generative AI, breakthrough business development and M&A, and cost-effective legal services in the worlds of media, entertainment, AI and tech. We develop game-changing strategic opportunities and leverage our uniquely deep relationships to reach key decision-makers and influencers in record time to execute. Not just talk.
Among other things, we represent media companies for generative AI content licensing, with deep relationships and market insights and intelligence second to none. Reach out to Peter Csathy at peter@creativemedia.biz if you’d like to explore working with us.
Send your feedback to me and my newsletter via peter@creativemedia.biz.