Recent Court Action Does Not Bode Well For Artist Claims Of AI Copyright Infringement
Federal Judge Skeptical That Creators Can Claim Infringement When AI “Trains” On Copyrighted Images Across The Internet
First, the good news for human creators everywhere. Just this past Friday, a federal judge upheld the U.S. Copyright Office’s continuing policy that AI-only generated works are not eligible for copyright protection. In the words of district court Judge Beryl Howell in Washington, D.C., human creativity is “at the core of copyrightability, even as that human creativity is channeled through new tools or into new media.” That puts to rest, at least for now, fears in Hollywood that AI can completely displace writers to create commercially protected scripts. Why would studios and streamers go the AI-only route if they can’t protect their “work product”? So that’s the latest on the “output” side of the AI equation.
Now the more sobering news on the “input” side. Back in January, only two months after OpenAI unleashed the shock and awe of ChatGPT on an unsuspecting world, I noted the initial round of inevitable litigation by copyright holders across the creative spectrum who claimed mass infringement by generative AI’s “training” on copyrighted works across the Internet, including their own. In one of the most closely watched early cases – Andersen v. Stability AI – a group of visual artists sued AI image generator Stability AI and two other companies of that ilk. I posed the question, “does this kind of generative AI infringe our copyrights, our exclusive ability to commercialize our works, on a massive scale?” This was all new then, of course, but I predicted that answers to that fundamental question would be coming soon to a courtroom near you – “this year, in fact,” I wrote at the time.
Well, now one important early return is in, and it isn’t looking good for those visual artists in the Stability AI case – which, of course, means it doesn’t bode well for the creative community in general. Specifically, a few weeks ago, federal judge William Orrick – who sits in that hotbed of creativity, California – was demonstrably skeptical of the artists’ infringement claims, signaling in court that he was inclined to ultimately dismiss them. Orrick pointed to what he deemed the small number of each individual artist’s creative contributions within the AI’s overall training set of 5 billion images, a figure drawn from the artists’ own pleadings. His implicit message: yes, there may be some technical infringement here, but its impact is de minimis.
The judge’s comments to the litigants suggest that he is likely to find that the AI’s “scraping” of copyrighted works, and its resulting output of imagery, constitutes a transformative fair use. Orrick has yet to publish his final decision on the matter – and he is likely to give the artists a chance to bolster their case – but his words leave little doubt as to where he will land. No amended pleadings will change his fundamental math. And if Orrick ultimately rules this way, then it’s a near certainty that he will also dismiss the artists’ separate state right-of-publicity claims. How, the reasoning goes, could commercial opportunities for their artwork be adversely impacted by generative AI models that spit out novel images based on training on billions of works?
But what if that number were 5 million instead? Or 500,000, 5,000, 500, or 50? Would that smaller pool of AI training tip the infringement from de minimis to something more like “de maximis”? And what about the U.S. Supreme Court’s recent bombshell decision in the Andy Warhol case? There, the 7-2 majority shocked many by looking past the historically infringement-cleansing doctrine of transformative fair use to focus instead on whether Warhol’s relevant artwork competed directly for commercial opportunities with the unlicensed photograph on which it was based. The Justices ruled in the photographer’s favor based on the perceived adverse impact to her livelihood. Should their reasoning shape Judge Orrick’s analysis in the Stability AI case?
And should the type of creative medium matter when mass AI training on copyrighted works is alleged? Arguably, AI’s novel visual art outputs compete less directly with the unique styles of the artists who are scraped than is the case when AI is used to generate novel photographs based on the libraries of Getty Images and others. Getty Images, in fact, is now litigating its own closely watched infringement case against Stability AI. The entire analysis changes, of course, if an AI generates art “in the style of” a particular artist. That smacks of direct competition à la Warhol.
That’s the thing about the law that is discussed too little. While the legal system strives to portray itself as a beacon of clarity and certainty when confronted by novel issues – making them appear black and white – the reality is much different. The law is mostly “grey.” I am an intellectual property lawyer by trade and also clerked for the chief federal judge in Hawaii (yes, Hawaii), so I’ve seen this first-hand. Court rulings are a series of subjective judgment calls by very non-AI human beings who frequently aren’t particularly well-versed in the specific issues at hand, especially when confronted with entirely new transformational technologies like AI. Judges are generally generalists, and no two judges will rule in precisely the same way – or even analyze these tough issues in precisely the same way.
But here’s the thing. Early cases in any new technology-laced litigation affecting the creative community always have outsized impact. They become precedents – guidance for other courts to follow – especially when they come from federal courts, whose rulings generally carry more weight than those of state courts. And when claims of AI infringement of creative works are at stake, those rulings become even more potent when they issue from federal courts sitting in California, the center of gravity of the entertainment world.
When he issues his final ruling, which will come soon, Judge Orrick will be among the very first to lay down the law on the fundamental question of what constitutes AI infringement in the realm of the creative arts. Assuming he rules as anticipated, it will then be up to other courts faced with similar issues to at least strongly consider his conclusions, if not be fully bound by them. Those courts will further flesh out the parameters he lays out. And ultimately, these thorny copyright issues will likely find their way to the U.S. Supreme Court, just as they did in the landmark “Betamax” case of 1984, when a 5-4 majority of the Justices ruled in favor of Sony and found that home recording of copyrighted broadcasts – which the Court framed as “time shifting” in an attempt to make the decision easier to swallow – did not constitute infringement.
Future courts, just like Judge Orrick, will certainly search for some semblance of unattainable certainty as they make their own AI rulings in the months and years ahead. But in those courtrooms, only one thing will be absolutely certain. In a world of fast-proliferating AI-driven copyright infringement litigation, lawyers will be fully employed, happily arguing each side – at least until AI infringes on their own turf and comes after their work.