How Courts Will Assess Damages In GenAI Copyright Cases
Theoretical Damages Under Copyright Law Are Astronomical, So What's Realistic & "Fair"?
Good morning dear readers, it’s time for your Monday morning “brAIn” dump! This week’s “mAIn event” takes a hard look at how courts even begin to assess damages in generative AI copyright cases (hint: the theoretical numbers may numb your mind). Next, the “mosAIc” — a collage of AI stories and podcasts I curate for you. Finally, the “AI Litigation Tracker” — updates on key GenAI/media cases by Partner Avery Williams of McKool Smith (you can also access the full “Tracker” here via this link).
I. The mAIn Event - How Will Courts Assess Damages In GenAI Copyright Infringement Cases?
As I wrote a couple of weeks back, the first federal judge to decide the issue of “fair use” in generative AI “training” cases — in the now-notorious Thomson Reuters v. Ross Intelligence case — ruled that the scraping of copyrighted works without consent and compensation is infringement (not a defensible “fair use”) as a matter of law. That decision doesn’t end the case, however.
But let’s assume for a moment that it does (to be clear, it certainly does on the specific issue of “fair use”). What are the remedies for winning rights-holders in these copyright infringement cases — and, relatedly, how will courts assess monetary damages in a world where the relevant “harm” caused by an entire internet’s worth of unlicensed scraping has already been done (and can’t be undone unless existing LLMs are scrapped and AI training starts anew with licensed content only)?
It’s almost impossible to fathom. Theoretical damage awards under copyright and related laws are downright astronomical. That’s why The New York Times, in perhaps the most-watched AI infringement case of them all (versus OpenAI and Microsoft), seeks “billions of dollars” in damages.
So Let’s Do The Math
Courts have the power, under copyright law, to award either (1) “actual damages and any additional profits of the infringer” (basically, a “reasonable” royalty) or, instead, (2) so-called “statutory damages.” Statutory damages can be up to $30,000 per infringement — or up to five times that, $150,000 per infringement, if “the infringement was committed willfully.” That’s per infringement! And how many infringements occur when AI developers scrape the entire internet? When courts ultimately consider the higher $150,000-per-infringement penalty, it’s certainly reasonable to believe that plaintiffs like The New York Times will argue that scraping without consent, in the face of the Thomson Reuters court’s flat-out rejection of “fair use,” is a “willful” act.
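To make the per-infringement arithmetic concrete, here is a minimal sketch. The per-work dollar figures are the statutory ranges under 17 U.S.C. § 504(c) ($750 to $30,000 per work ordinarily, up to $150,000 if willful, as low as $200 if innocent); the work count used in the example is a purely hypothetical placeholder, not a figure from any actual case.

```python
# Illustrative statutory-damages arithmetic under 17 U.S.C. § 504(c).
# The registered-work count below is a hypothetical placeholder, NOT a
# figure from any actual complaint.

STANDARD_RANGE = (750, 30_000)   # per work, ordinary infringement
WILLFUL_MAX = 150_000            # per work, willful infringement
INNOCENT_MIN = 200               # per work, innocent infringement

def statutory_damages_range(registered_works: int, willful: bool = False) -> tuple:
    """Return the (minimum, maximum) statutory award for a given number of
    registered works; statutory damages are assessed per work infringed."""
    low = STANDARD_RANGE[0] * registered_works
    high = (WILLFUL_MAX if willful else STANDARD_RANGE[1]) * registered_works
    return low, high

# Hypothetical: a publisher with 100,000 registered works alleging willfulness.
low, high = statutory_damages_range(100_000, willful=True)
print(f"${low:,} to ${high:,}")  # $75,000,000 to $15,000,000,000
```

Even at a modest (hypothetical) 100,000 registered works, a willfulness finding puts the theoretical ceiling at $15 billion — which is why the per-infringement multiplier, not the per-work rate, drives the “astronomical” numbers.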
Experts Weigh In
I’ve asked several experts about how judges will even begin to assess damages in these AI cases, and no one really knows. But this is what Partners Chad Hummel and Avery Williams of leading rights-holder firm McKool Smith tell me:
“Statutory damages can vary widely from $200 to $150,000 per work depending on whether the infringement was “willful” or “innocent.” But the registration requirement may be a barrier for some copyright holders, particularly high-volume publishers who do not register each work with the copyright office as a matter of course. As an alternative, a court could impose a reasonable royalty, which is common in intellectual property cases where other remedies, like disgorgement or injunctive relief, are not applicable.
And there are indications in the generative AI training market that a reasonable royalty approach would be appropriate. The recent grant of summary judgment against the “fair use” defense in the Thomson Reuters v. Ross Intelligence case turned, in part, on the argument that Thomson Reuters had the right to license its data for AI training and, accordingly, Ross’s taking of that data and IP without compensation was not a “fair use.” The existence of licensing agreements between major rights-holders would support a reasonable royalty approach, and help set the amount of that royalty. So while the generative training market is in its infancy, there seems to be an emerging path for courts to consider if and when they have to assess damages.”
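The reasonable-royalty approach the McKool Smith partners describe is, at bottom, simple multiplication: works used times a comparable per-work license rate. The sketch below illustrates the mechanics only — every number in it is a hypothetical assumption, since real royalty rates would be derived from actual comparable licensing deals in the still-emerging AI-training market.

```python
# Illustrative reasonable-royalty arithmetic. All figures here are
# hypothetical assumptions for demonstration; a court would anchor the
# per-work rate to comparable real-world AI-training licensing deals.

def reasonable_royalty(works_used: int, per_work_rate: float,
                       enhancement: float = 1.0) -> float:
    """Royalty = works used in training x comparable per-work license rate,
    optionally scaled if a court enhances the award."""
    return works_used * per_work_rate * enhancement

# Hypothetical: 2,000,000 scraped articles at a $50-per-work comparable rate.
print(f"${reasonable_royalty(2_000_000, 50.0):,.0f}")  # $100,000,000
```

Note how the two approaches diverge: a royalty scales with a market-derived rate (here, nine figures), while statutory damages scale with a per-work penalty that can run orders of magnitude higher.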
Real World Implications
Whichever approach courts use, let’s not forget the real world implications of it all. The “right” answer certainly cannot be to try to stop all generative AI development dead in its tracks. That’s impossible anyway. Once new technology is unleashed, that genie is out of the proverbial bottle. We’ve learned that time and time again over the past several decades with the internet, streaming and social media. So permanent injunctions as remedies are certainly off the table.
But potentially massive damages are not. “Massive” is a relative term. Courts would look both to the current AI licensing market and to the sheer billions of dollars being pumped into generative AI tech (data centers, etc.). Given the essential role content plays in AI training — and the size of the major GenAI companies and their investments in AI infrastructure — I believe courts would consider awards in the billions of dollars.
The Solution
Of course, generative AI companies and their potential media partners can continue to litigate (and the number of such litigations continues to rise). Or, instead, they could invest those significant legal fees and overall resource costs — and some portion of the massive potential legal exposure — in reaching “fair” content partnership agreements (essentially, content licenses).
That’s certainly the right answer. It eradicates waste. And it also reduces the “friction” for adoption of generative AI technology by consumers and businesses alike. Until the content licensing issues and risks are significantly mitigated, GenAI tech will not reach Big Tech’s hoped-for scale.
Listen to my latest “the brAIn” podcast, a smart, insightful, and entertaining discussion of my article that I generated using Google NotebookLM. I approve its content, with the exception of the “synthetic content” discussion near the end.
(For those of you interested in learning more or exploring AI licensing opportunities, reach out to me at peter@creativemedia.biz).
II. The mosAIc — My “Must Read,” “Must Listen” Playlist
(1) My Conversation With Eric Shamlin, Creator of That Now-Famous AI-Generated Holiday Coke Commercial
A couple of weeks ago, I was invited to speak at a major marketing and advertising executive-focused AI event hosted by ThinkLA in LA. I invited Eric Shamlin, CEO of Secret Level, to join me on the dais. Shamlin and his team were the ones who created the AI-generated Coke commercial that was THE marketing story during the holidays. I asked Eric how he and his team conceptualized and executed it all — and how consumers and the creative community reacted to the marketing campaign. Suffice it to say that consumer reaction was overwhelmingly positive — but the reaction from many in the creative community was not. It was a fascinating and extremely well-received conversation — one that deserves to be fully watched and/or heard. That’s why I’ll feature it next week as the newsletter’s “mAIn event”!
(2) Can Content Licensing Opportunities for AI Training Last?
It’s a critical question, because there is no certainty. I’m on the front lines of it all — representing multiple media companies in licensing deals — and my advice is to move forward now, given the uncertainty ahead. Variety just wrote an analysis of it all, which is worth reading via this link.
(3) Amazon Just Launched Alexa+. Now You Can Ask Amazon To Create A Song “On the Fly”
This is Amazon’s new super-charged agentic AI system which, according to Amazon, integrates with the generative AI (and legally troubled) music service Suno to “turn simple, creative requests into complete songs, including vocals, lyrics, and instrumentation.” Yeah, but is Suno legal? Or is it a massive copyright violator, as charged by all the major record labels? Read more about it all from Amazon itself via this link.
Follow Peter Csathy on BlueSky via this link.
You can also continue to follow my longer daily posts on LinkedIn via this link.
III. AI Litigation Tracker: Updates on Key Generative AI/Media Cases (by McKool Smith)
Partner Avery Williams and the team at McKool Smith (named “Plaintiff IP Firm of the Year” by The National Law Journal) lay out the facts of — and latest critical developments in — the key generative AI/media litigation cases listed below. All those detailed updates can be accessed via this link to the “AI Litigation Tracker”.
(1) The New York Times v. Microsoft & OpenAI
(2) In re OpenAI Litigation (class action)
(3) Dow Jones, et al. v. Perplexity AI
(4) UMG Recordings v. Suno
(5) UMG Recordings v. Uncharted Labs (d/b/a Udio)
(6) Getty Images v. Stability AI and Midjourney
(7) Universal Music Group, et al. v. Anthropic
(8) Sarah Anderson v. Stability AI
(9) Raw Story Media v. OpenAI
(10) The Center for Investigative Reporting v. OpenAI
(11) Authors Guild et al. v. OpenAI
NOTE: Go to the “AI Litigation Tracker” tab at the top of “the brAIn” website for the full discussions and analyses of these and other key generative AI/media litigations. And reach out to me, Peter Csathy (peter@creativemedia.biz), if you would like to be connected to McKool Smith to discuss these and other legal and litigation issues. I’ll make the introduction.
About My Firm Creative Media
My firm and I specialize in market-defining strategy and content licensing for generative AI, breakthrough business development and M&A, and cost-effective legal services in the worlds of media, entertainment, AI and tech. We develop game-changing strategic opportunities and leverage our uniquely deep relationships to reach key decision-makers and influencers in record time to execute. Not just talk.
Among other things, we represent media companies for generative AI content licensing, with deep relationships and market insights and intelligence second to none. Reach out to Peter Csathy at peter@creativemedia.biz if you’d like to explore working with us.
Send your feedback to me and my newsletter via peter@creativemedia.biz.