No, AI Copying Is Not the Same as Human "Copying" (& Substantial Similarity Isn’t Needed)
It's true that artists have always built on the creative building blocks of others. But that's a false equivalence. (I do the copyright legal analysis so that you don't have to!)
Ready for your Monday morning brAIn dump? Here it is, fuller than usual! First, this week’s “mAIn event” (my feature headline story). Then, “the AI:10” - 10 key media-related AI headlines from last week. Next, your “mind-blowing AI video” of the week. After that, Creative Media’s entirely re-imagined new website (which finally, and more concisely, lays out what my firm and I do). Then, “the cocktAIl” - my special AI event mixology. Finally, “the AI legal case tracker” - updates on the key generative AI copyright infringement cases.
I. the mAIn event - Yes, AI copying is different than human inspiration. And no, substantial similarity isn’t needed.
[IMPORTANT NOTE: I’m an IP/entertainment lawyer by trade: started my career in major firms & served as General Counsel in a multi-billion dollar operating division of Universal Studios. I read every single word of every opinion I discuss in this analysis. It’s an important piece, and you don’t need to be a lawyer to read it.]
Setting the stage
Right now, media and entertainment industry-defining AI copyright litigation is winding its way through the courts. In all these infringement cases, essentially no one disputes that AI copies entire libraries of copyrighted works word-for-word (or image-by-image, note-for-note) when it trains on them without consent. Rather, those who try to defend it simply point out that the end result of AI’s heist frequently shows no direct relation to the original, and its copying is fundamentally no different than what human artists have done since the beginning of time – build their creative works on top of the building blocks of others before them. They dismiss arguments to the contrary as a form of blind sentimentality that slows down society’s quest for continuous progress.
But humans generally don’t copy the creative works of others in their entirety, because when they do, it’s usually called infringement. And artists, even when they do copy, aren’t creating entirely new systems designed to enable billions of users to generate endless works that flood the marketplace, competing directly with creators and squeezing them out. Big Tech AI training, on the other hand, does precisely that. Yes, AI’s outputs may show little resemblance to any individual creative work swallowed up by generative AI’s insatiable black box, due to the sheer numbers involved. But AI’s wholesale copying – at such grand scale and for such broad purposes – actually makes it all the more diabolical.
The Supreme Court’s relevant copyright rationale
No-holds-barred generative AI absolutists – a movement known as “Effective Accelerationism” – want us to forget that the question of copyright infringement, together with related claims like misappropriation, has two parts: the input and output sides of the equation. On the input side, wholesale copying combined with market substitution is enough to reject fair use and find infringement – the separate question of infringement on the output side never even needs to come up. Don’t take it from me. Take it directly from the Supreme Court.
In its recent Andy Warhol Foundation v. Goldsmith case (commonly referred to as “Warhol-Prince,” since a photograph of musician Prince was central to it), the Supreme Court didn’t even reach the separate issue of infringement on the output side, which had already been conceded. Instead, the Court focused only on the use and alleged infringement of the photograph itself. In rejecting fair use as a defense, the 7-2 majority led by Justice Sonia Sotomayor wrote that market substitution is “copyright’s bête noire.” In her words, “The use of an original work to achieve a purpose that is the same as, or highly similar to, that of the original work is more likely to substitute for, or supplant, the work,” which, in turn, “undermines the goal of copyright.”
The Court also pushed back on the notion that the mere fact that an alleged infringer’s outputs are “transformative” – i.e., that they “have a different character” from the original – is enough to establish fair use. That’s what Big Tech wants us to believe. But the Court rejected that argument, noting that copyright’s protection is even stronger “where the copyrighted material serves an artistic rather than utilitarian function.” In its words, “To hold otherwise would potentially authorize a range of commercial copying of photographs, to be used for purposes that are substantially the same as those of the originals.”
Take that, Stability AI – and your fair use defense in Getty Images’ lawsuit!
“Google Books” is entirely different
But what about the famous Authors Guild v. Google case (commonly referred to as “Google Books”), the case Big Tech always trots out to argue that no consent or compensation is owed to the creators whose works its AI trains on? The argument goes: the court found fair use there, so the same rationale applies to generative AI.
Wrong.
First, “Google Books” hailed from the Second Circuit Court of Appeals – not the U.S. Supreme Court – so it isn’t the law of the land. But even if it were, the court in Google Books gave Google its stamp of approval for fundamentally different reasons, none of which implicated market substitution. Yes, there was wholesale copying there too – Google copied entire libraries of books without consent. But Google did so to make them searchable, and it displayed only snippets of the copied books in its search results. That, in turn, drove more discovery, sales and consumption of those books – not less – which of course meant more dollars for the authors themselves. Google, in other words, promoted authors. It didn’t seek to replace them.
It's the exact opposite when AI relentlessly scrapes creative works in their entirety. Big Tech developed generative AI systems precisely to create commercial substitutes for wholesale sectors of the media and entertainment industry. Global news analysis and features? Who needs The New York Times when you have OpenAI (litigation ongoing). Stock photos? Who needs Getty Images when you have Stability AI (litigation ongoing). Those companies invested massively to create their works. Generative AI companies, however, believe they can simply take them – no payment needed.
Big Tech (or at least several of its players) is on a quest to identify valuable creative content, vacuum it up to satisfy AI’s insatiable appetite, and then spit out its own artificially generated products to compete directly against copyright owners. Several generative AI companies aim to be the one-stop shop for all kinds of creative works. Only in a small number of cases does Big Tech seek to negotiate with creators and media companies (mere content repositories in its view), as it did just last week with the Financial Times.
The creative community isn’t trying to slow down tech. It just expects to be paid
Silicon Valley predictably warns of dire consequences for any kind of roadblocks to generative AI’s unbridled arms race in the name of progress. The Supreme Court dealt with similar doom-and-gloom pronouncements in Warhol-Prince – and flatly rejected them. Justice Sotomayor openly mocked claims that the Court’s decision would “snuff out the light of Western civilization, returning us to the Dark Ages ….” In her words, “It will not impoverish our world to require [the infringer] to pay [the creator] a fraction of the proceeds from its reuse of [the] copyrighted work.”
Exactly! That’s all we’re talking about here. The creative community is not trying to stop Big Tech’s development of generative AI. To the contrary, it expressly acknowledges AI’s power and potential. The Human Artistry Campaign, a coalition of major media and entertainment organizations, sets forth seven “core principles for artificial intelligence applications” in its mission, and principle number one states, “Technology has long empowered human expression, and AI will be no different.” That’s principle number one!
The creative community just expects Big Tech to pay for the foundational ingredients it needs for its AI tech to have its value. In Justice Sotomayor’s words, new expression “does not in itself dispense with the need for licensing.”
So, don’t justify Big Tech’s relentless quest for its next trillion-dollar valuations on claims that it’s no different than what artists have done since the beginning of time.
It isn’t.
What do you think? Send me your feedback and thoughts at peter@creativemedia.biz.
II. the AI:10 - 10 “quick hit” AI headlines from last week
(1) Universal Music Group winds the clock back up on TikTok! Yes, the long national nightmare for teens (and many more parents than we all think) is over. AI was a major issue, and apparently all is now good in music land. Read more here.
(2) Meanwhile Warner Music Group calls for “No Fakes.” WMG’s CEO Robert Kyncl, together with artist FKA twigs, testified before a key Senate subcommittee in support of federal legislation to mute fake soundalikes and make likeness abuse a thing of the past. Read more here.
(3) Apple’s “Sophie’s Choice” (or is it more like The Clash?): should it stay with Google or go to OpenAI for the iPhone generative AI features it needs? Talks with OpenAI have accelerated. Read more here.
(4) TikTok parent ByteDance is reportedly searching for alternatives too. Tech’s persona non grata is exploring scenarios for selling TikTok “light” - the app without its “special sauce” algorithms. Chew on that! Read more here.
(5) From the “no, duh!” department: should Big Tech decide whether its generative AI is “safe”? Hmm. I’ll give you an answer, and it doesn’t start with the letter “Y”! Read more here. Especially relevant when you read this story about the U.S. military slowing down its use of genAI for war games, because it too easily goes “nuclear” (literally) rather than exploring less, shall we say, draconian options. Read more here.
(6) Here’s your answer - and this week’s PSA! Everything you need to know about AI detectors for ChatGPT but were afraid to ask! This list of articles from WIRED will help you understand the AI text detection tools on the market right now. Read it here.
(7) AI’s future’s so bright, it’s gotta wear shades! Yes, iconic and immensely cool sunglass brand Ray-Ban, together with extremely uncool Mark Zuckerberg’s Meta, is introducing new smart glasses with multimodal AI. Read more here.
(8) If it bleeds, it leads … but now artificially! Channel 1’s news anchors are now 100% AI generated. As if we didn’t already have enough fake news! Read (and watch) more here.
(9) This one’s from the “feeling the heat” department. See, creative community, pressure works! Big Tech is stepping up its efforts to actually license content and pay creators, instead of simply doing what it’s been doing to build its next trillions — take. Read more here about OpenAI’s deal with the Financial Times. Of course, financial terms weren’t disclosed. BUT, and this is important, the news industry is divided on whether to strike deals with Big Tech right now - with some, like the Financial Times, saying “yes,” and others, like The New York Times, saying “we’ll see you in court.” Read more here.
(10) But if you are doing a deal now, whatever it is they’re paying you isn’t enough! That’s my own personal cautionary “take” on Big Tech’s efforts to woo IP owners with dollars that may sound “big” today but really won’t be in the long run. Play the long game, content creators! My money is on The New York Times ultimately winning in court. Which serves as a great coda to this week’s “mAIn event” feature analysis above.
III. Mind-blowing AI video of the week: “The next 40 years of TED Talks”
Filmmaker Paul Trillo — one of the leading artists experimenting with generative AI — just created this video using ONLY “Sora” (OpenAI’s new text-to-video genAI model) to showcase how he envisions TED Talks changing over the next 40 years. Fascinating! Watch it. Marvel at it. And again, remember: we’re only 18 months out from the public launch of ChatGPT.
IV. Check out Creative Media’s new re-imagined website
My hope is that it more concisely lays out what my firm and I do — how it ties all the pieces together — why it’s so personal to me — and how my team and I can help you. Speaking of team, we now have a New York City office led by Natalie Lee, a young superstar lawyer from a major law firm with an excellent business and creative mind.
V. the cocktAIl - your mix of “must attend” AI events
After all, it’s always happy hour somewhere!
(1) CogX’s “Festival Los Angeles AI & Transformational Tech” 1-day conference takes place TOMORROW, May 7th at the Fairmont Century Plaza hotel. I’ll be there. REGISTER VIA THIS LINK FOR 50% OFF. And let me know if you want to meet. Reach out to me at peter@creativemedia.biz.
(2) AI LA’s big “A.I. on the Lot” event is NEXT WEEK on May 16th! It’s a “must attend” event (I’ll be there too). Get 20% off now when you register using promo code “PETER” as you check out. REGISTER VIA THIS LINK. Again, reach out to me at peter@creativemedia.biz if you want to meet. The event is a veritable “who’s who” in AI, media and entertainment.
(3) New York City’s big Streaming Media event is coming up May 20th-22nd at the InterContinental Barclay in the heart of the City (learn more via this link). I’ll be there too, participating in an AI debate with 3 other leading voices in the world of media/tech (ours is the second AI/media debate at 2:15 pm Eastern on Monday, May 20th). It’s bound to get spicy! I can get you a major discount here too. Just use the code GoUpstream! when checking out. Again, let me know if you want to connect.
(4) Check out Digital Hollywood’s first generative AI-focused virtual summit, “The Digital Hollywood AI Summer Summit,” coming soon on July 22nd - 25th. The sessions are comprehensive and outstanding. I’ll be moderating two of them. Learn more here via this link.
VI. the AI legal case tracker - updates on key AI litigation
Rather than laying out the facts of each case - and the latest developments - in every newsletter, I point you to the “AI case tracker” tab on “the brAIn” website. There you’ll get all the up-to-date information you need (including my detailed analysis of each case). These are the cases I track (several saw important developments this past week).
(1) The New York Times v. Microsoft & OpenAI
(2) Sarah Silverman, et al. v. Meta
(3) Sarah Silverman v. OpenAI
(4) Universal Music Group, et al. v. Anthropic
(5) Getty Images v. Stability AI