Welcome to Issue #1! AI "deepfakes": a threat to artists (& all of us), so what’s the solution?
Taylor Swift, George Carlin & President Biden show us it’s time to take name, image, likeness & voice seriously
Welcome to “the brAIn”! Here’s how we do this.
It’s the inaugural issue - your first official Monday morning “brAIn” dump! You may want to keep this one. Who knows, it may be a collector’s item some day.
My mission is to give it to you straight (but also entertaining!) about the promise and perils of generative AI in the context of media and entertainment. I’ve spent equal parts of my career working with artists in the creative world and leading pioneering startups that developed transformational technology. I believe human creativity and content are paramount, but that tech can be the great enabler and conduit for impact. I’ll ultimately side with what is human and frequently call out Big Tech. But I don’t fear technology. My fundamental philosophy about it all is stoicism. We can only control how we respond to the accelerating AI forces around us. But we also have much more control over those forces than we may think … so long as we take action! My goal is for this newsletter - and “digesting” both its content and spirit - to be a part of that action.
This is how I’ll organize my weekly “brAIn” to make it useful and easy to follow.
First, I’ll identify key AI-focused headlines from the past week and preview the coming week. I’ll call this “the trAIler” (you know, like movie trailers show you coming attractions? Hey, I want to entertain too!).
Second, I’ll feature my latest deep dive analysis to give you insights you won’t find anywhere else. Nothing superficial here. I spend lots of time writing these to give you fresh perspectives and valuable “news you can use.” I’ll call this “the mAIn event” - and this week’s focuses on Taylor Swift and those recent disturbing deepfakes - what they mean not only for her, but for all of us. NOTE: you can listen to the article (see below)!
Third, I’ll identify and update the most important media and entertainment-focused generative AI legal cases. I’ll call this “the litigAtIon tracker.” I know, I know. You think you don’t need to follow the law. But trust me, no matter what role you play, you do. What happens in the courts will impact everything - your art, your livelihood, your life. I promise my updates will be non-legalese and meta (which is appropriate, since Meta is a defendant in one key case (shocking, I know!)).
Finally, each week I’ll close with curated stories from great minds, identify innovative companies and entrepreneurs to watch, flag noteworthy events and meet-ups, recommend additional resources for you, and generate some surprises too. Consider me your AI mixologist pouring your favorite AI drinks at the end of a busy day. That’s why I call it “the cocktAIl”! Reach out to me at peter@creativemedia.biz if you’d like me to add your recipe (contributing article, event or news story) to the mix.
So with that introduction, let’s do this!
I. the trAIler - AI “quick hit” headlines and previews
(1) Universal Music Group artists no longer sing on TikTok! UMG pulled its music from the social media juggernaut’s videos, citing, among other things, the flood of AI-generated music overtaking the platform. In a scathing open letter, UMG called TikTok’s approach “nothing short of sponsoring artist replacement by AI.” Read my deep dive analysis here, where I applaud the music giant for its bold move, despite all the pain it causes (not to mention teen angst dialed up to 11!). My prediction? The two battling behemoths will reach a new agreement by the end of this month.
(2) The FCC outlaws AI-generated deepfake robocalls in the wake of the recent fake President Joe Biden calls that urged New Hampshire voters not to vote. Yes we can! And we must (outlaw them). Just a precursor of things to come this election year.
(3) Google’s ChatGPT rival formerly known as “Bard” is now re-christened “Gemini” - and it generates photos and images too! All much to the chagrin of photo bank Getty Images, not to mention deepfake-targeted celebrities like Taylor Swift (more in “the mAIn event” feature story below). Google assures us that its new AI image generator was “designed with responsibility in mind” - and that it will bake a watermark into the pixels of AI-generated images to identify them as such. And if Big Tech tells us not to worry, we should believe them, right? (Even though the name Gemini virtually assures that it will have its good days and bad days.)
(4) Big Tech’s generative AI investment scorecard. Microsoft leads the way, investing more than $10 billion in ChatGPT maker OpenAI. Meanwhile, Amazon has committed $4 billion to rival Anthropic. And sly little Google is of two minds, betting big on its own Gemini while also putting $2 billion into Anthropic.
(5) Apple promises a major new generative AI product “later this year.” But is Apple too late to the generative AI party? Read this from TheWrap, which discusses that point. Microsoft’s first-mover AI push with OpenAI recently leapfrogged the formerly sleepy Redmond giant over Tim Cook’s Apple to become the most valuable company in the world, at a market cap of $3.13 trillion (about $200 billion more than Apple’s).
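A quick technical aside on the pixel-level watermarking mentioned in item (3): Google hasn’t published how its scheme actually works, so the snippet below is only a toy illustration of the general concept - hiding a signature in the least significant bits of pixel values, invisible to the eye but machine-detectable. The signature and function names here are entirely hypothetical, not Google’s method.

```python
# Toy least-significant-bit (LSB) watermark - illustrative only,
# NOT Google's actual (undisclosed) watermarking scheme.

WATERMARK = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical 8-bit signature

def embed(pixels):
    """Overwrite the LSB of the first len(WATERMARK) pixel values."""
    out = list(pixels)
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to our bit
    return out

def detect(pixels):
    """Return True if the signature bits are present in the LSBs."""
    return [p & 1 for p in pixels[:len(WATERMARK)]] == WATERMARK

original = [200, 201, 202, 203, 204, 205, 206, 207, 90, 91]
marked = embed(original)
print(detect(marked))    # True - each pixel changed by at most 1, so the
print(detect(original))  # image looks identical; False for the unmarked copy
```

Real systems embed the mark redundantly and robustly so it survives cropping and compression - far harder than this sketch, which a single re-encode would destroy.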
II. the mAIn event - Taylor S v. deepfakes (feature story)
Several days before the Super Bowl, disturbing Taylor Swift deepfakes flooded the social media feeds of her fans. “X” marked the spot for ground zero. Shocking, I know, because Elon Musk cares so much about “safety,” even as apologist-in-chief Linda Yaccarino tries to woo advertisers back. A few days earlier, before the New Hampshire primary, fake President Joe Biden robocalls flooded the phones of voters, urging them not to vote. And even more recently, a deepfake George Carlin “performed” a new comedy special on YouTube, though no one from his estate gave any consent.
The transformational power, promise – and yes, also peril – of generative AI is now sinking into a broad swath of the entertainment community. These examples are just the canary in the coal mine for the threats and harm (commercial and reputational) we will see in the months and years ahead.
So what can and should we all do about it now?
First, it’s critical to simply be all over it, monitoring these developments so that we understand them and take action. No matter what role we play in the creative economy, we should also experiment with generative AI to understand what it can do. To be clear, generative AI certainly will enable us to do great things, several examples of which I have laid out many times previously.
One notable area of opportunity is licensed AI dubbing of film and television, which enables widespread localized international distribution, no matter the original language. Los Angeles-based Flawless AI is a leader here. Its tech seamlessly enables actors to speak in multiple languages while their mouths perfectly match their multilingual words. That means no subtitles are needed, which, in turn, maximizes distribution, audience receptivity and monetization.
But on the flip side, we also need to take action to mitigate the peril of generative AI gone wild. The examples above reflect outright theft and abuse of the names, images, likenesses and voices of famous individuals that result in both significant commercial and reputational harm. In the case of President Biden, the harm goes even further – it challenges democracy itself.
WIRED reported that the voice hacker behind those Biden fakes likely used AI tools developed by Silicon Valley-based “startup” ElevenLabs, which just recently scored $80 million in new financing at a $1.1 billion valuation in order to add Hollywood AI dubbing to its priority list. After that report was made public, the company suspended the relevant user’s account. It also says all the right things about curbing such abuse in its FAQs and terms and conditions - that’s where it notifies users that consent is required for “voice cloning” and indicates it will take down content that crosses that line once it is notified.
That all sounds great, of course. But this entire episode seems to follow the typical Silicon Valley playbook. Blue-chip VC backed tech startups create new enabling tech that requires compelling content from their users to become valuable. Then, their lawyers draft the right sounding policies that instruct those users to follow basic rules of copyright and privacy. But those policies are frequently buried in boilerplate and used to provide defense when users break them, which they know is happening on their platforms. Wink, wink. Nudge, nudge.
And why not? It’s in the interests of these tech companies not to add friction to the growth of their user base. That makes them more valuable. “It’s better to ask for forgiveness than permission,” right? In ElevenLabs’ case, I found plenty of discussion of the ease and quality of its “voice cloning” tech on the company’s homepage, but essentially none about required consent or condemnation of abuse. I reached out to the company on two separate occasions to correct any misunderstandings I may have had and to give it a chance to offer its own perspective, but received no response either time.
We saw an analogous episode nearly two decades ago when YouTube first launched and users happily uploaded copyrighted videos by the millions. SNL’s “Lazy Sunday” rap spoof video became the poster child for this new kind of abuse, and Viacom (now Paramount) sued. Google ultimately swooped in to save the day for YouTube, buying the company and settling the litigation. Now Google’s valuation sits at $1.75 trillion, while SNL owner Comcast NBCUniversal’s sits at roughly one-tenth of that - $187 billion - and that number includes Comcast’s lucrative broadband business.
Perhaps some of ElevenLabs’ $80 million fresh bounty should be invested in so-called “trust and safety” initiatives to minimize the risk of user abuse, don’t you think? It’s all a question of will, of course, and perhaps the company has it (I can’t say for sure either way, because it didn’t respond). I’m confident that brilliant tech engineers can find ways to prevent abuse if resources are allocated to that goal. As my favorite media-tech pundit Scott Galloway would say, “it’s not about the realm of what is possible. It’s about the realm of what is profitable.”
Money talks and money reflects priorities. Growth for growth’s sake may be great for venture capitalists, but it certainly isn’t always great for the artists and creators on whose backs multi-trillion dollar Big Tech companies are significantly made.
AI “forensic” technology already exists to combat AI deepfake abuse. A recent tantalizing example is Nightshade, which reportedly uses “poison pill” AI to combat deepfake AI. Another company focused on combating deepfake audio abuse is Wolfsbane AI. Meanwhile, on a recent podcast, leading venture capitalist Chris Dixon of Andreessen Horowitz noted that blockchain tech is capable of “creating an immutable audit chain to see if consent was given.” So solutions are out there. Interestingly, Andreessen Horowitz just happens to be a major investor in ElevenLabs.
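To make Dixon’s “immutable audit chain” idea concrete, here is a minimal, purely hypothetical sketch: each consent record embeds the hash of the previous record, so any after-the-fact tampering with history breaks verification of every later entry. A real system would distribute the ledger across many parties; all names and fields below are illustrative.

```python
# Hypothetical hash-chained consent ledger - a sketch of the "immutable
# audit chain" concept, not any company's actual product.
import hashlib
import json

def record_consent(chain, artist, use):
    """Append a consent record that commits to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    entry = {"artist": artist, "use": use, "prev": prev_hash}
    # Hash the record's contents (deterministically serialized) and store it.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

def verify(chain):
    """Re-hash every record; any edit to past history breaks the chain."""
    prev = "genesis"
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
record_consent(chain, "Artist A", "voice clone for dubbing, Film X")
record_consent(chain, "Artist A", "likeness in trailer, Film X")
print(verify(chain))  # True
chain[0]["use"] = "unlimited use"  # quietly rewrite history...
print(verify(chain))  # False - the tampering is detectable
```

The design choice worth noting: because each record commits to its predecessor’s hash, a would-be abuser can’t silently expand an old consent grant without invalidating everything recorded after it.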
Given these latest – but certainly not isolated – disturbing fakes, does anyone really think we have sufficient guardrails to protect individuals and their livelihoods, including artists in the creative community?
What we need here, in addition to self-restraint on the part of AI developers and AI tech used to fight AI abuse, is direct dialog with Hollywood and the creative community. Only then will mutually beneficial rules of the game - and economics - be established. Flawless understands this precisely because its top execs come from Hollywood and the creative community themselves. Silicon Valley entrepreneurs and VCs, take note.
We also need stiffer criminal penalties and greater visibility and advocacy on the issue of non-consensual name, image, likeness and voice abuse. While California has statutes on the books to address those NIL issues, most states don’t. That’s why Congress is now seriously considering national legislation via the so-called No AI FRAUD Act, while SAG-AFTRA and the Human Artistry Campaign shine their spotlights on the issue. Meanwhile, as mentioned above, just last week the FCC smartly banned robocalls that feature AI-generated voices - a direct reaction to the Biden robo-fakes. Bravo!
All of us – the creative and tech communities together – must work to address these issues now before generative AI generates rewards only for itself - and at the expense, literally, of the creative community.
What do you think? Send me your feedback and reach out to me at peter@creativemedia.biz and check out my firm Creative Media.
III. the litigAtIon tracker - key AI cases to follow (& updates)
(1) The New York Times v. Microsoft & OpenAI
Background: The Times filed its complaint in the U.S. District Court for the Southern District of New York on December 27, 2023 for mass copyright infringement, together with other related claims, claiming that OpenAI scraped “millions” of its copyrighted articles to train its LLM without consent and payment of any kind (and Microsoft essentially enabled the infringement). In its pleading tour de force (one that serves as a master class for all future plaintiffs to follow in these types of AI infringement cases), The Times further contends that Microsoft and OpenAI are essentially trying to build a “market substitute” for its news and, notably, that defendants’ AI generates “hallucinations” based on The Times’ articles and, therefore, substantially damages its reputation and brand. The Times seeks “billions of dollars of statutory and actual damages.” Microsoft’s and OpenAI’s primary defense will undoubtedly be “fair use” - i.e., no license, payment or consent is needed.
Current Status: Nothing substantive yet; still in preliminary hearings about attorney appearances, etc. Read my deep dive analysis of - and prediction for - the case and its ultimate resolution here.
(2) Universal Music Group, et al. v. Anthropic
Background: UMG, Concord Music and several other major music publishers sued Amazon-backed OpenAI competitor Anthropic on October 18, 2023 in the U.S. District Court for the Middle District of Tennessee. Plaintiffs contend that Anthropic is infringing their collective music lyric copyrights on a massive scale by scraping the entire web to train its AI, essentially sucking up their copyrighted lyrics into its vortex – all without any kind of licensing, consent or payment.
Current Status: Plaintiff music companies have filed preliminary injunction motions, and Anthropic has filed motions to dismiss, but no hearings are set yet on those motions. The court set the trial date for November 18, 2025 (no, that’s not a typo; 2025 it is). Read my deep dive analysis of - and prediction for - the case and its ultimate resolution here.
(3) Sarah Silverman, et al. v. Meta
Background: In this class action lawsuit filed in the U.S. District Court for the Northern District of California, comedian Sarah Silverman and several others sued Mark Zuckerberg’s Meta on July 7, 2023, for copyright infringement on essentially the same rationale as UMG’s case against Anthropic - i.e., mass unauthorized “training” of Meta’s LLM on millions of copyrighted written works, including their own. Meta’s fundamental defense is “fair use,” and this case is much further along than the others noted above. In November 2023, the court largely sided with Meta and dismissed the bulk of plaintiffs’ claims, essentially concluding that Meta’s AI output is not substantially similar (or even remotely similar) to the plaintiffs’ individual copyrighted works and thus could cause no meaningful harm.
Current Status: The court did, however, give Silverman a chance to amend the original complaint to add a more direct link to actual harm (it was filed in December), and that re-hearing is pending. But my prediction is that the court won’t change its mind and the case will be largely dismissed pre-trial. Read my deep dive analysis of the case, the court’s initial rulings, and what they portend here.
(4) Getty Images v. Stability AI
Background: Getty Images sued leading generative image AI company Stability AI on February 3, 2023 in the U.S. District Court for the District of Delaware for mass infringement of its copyrighted photographic library. Getty’s claims are similar to those pleaded by The New York Times in its case against Microsoft & OpenAI, but in the context of visual images rather than written articles - i.e., unauthorized scraping by AI with an intent to compete directly with Getty Images (i.e., market substitution).
Current Status: The case was transferred to a new judge on January 8, 2024, who denied the defendants’ motion to dismiss the case on January 26, 2024. But the judge gave defendants a chance to re-file those motions after jurisdictional discovery is completed. Read more about the case and its background and overall context here.
IV. the cocktAIl - your closing mix of other key AI stories, updates, events and additional resources
After all, it’s always happy hour somewhere!
(1) Great AI resources for you
Podcasts: (i) “The Prof G” podcast (Scott Galloway) - one of my favorite podcasts from one of the smartest and most entertaining voices, who gives an ongoing healthy dose of AI; and (ii) “Hard Fork” - another good ongoing AI and overall media-tech source, from The New York Times. Great listening as you walk your dogs!
Publications: (i) skim the leading media and entertainment trades every day - TheWrap and Variety have great media-tech coverage (including AI) (I write a weekly media/entertainment/tech column in TheWrap that you should check out too); and (ii) The New York Times is always a good source for media-tech news, including AI.
(2) Attend AI industry events
For those of you in LA, check out AI LA and its ongoing events. They also hold virtual programming available to all.
Save the date - I strongly recommend you join me at AI LA’s upcoming “AI on the Lot” event on May 16th in LA. Details are forthcoming, but you can join the list to be the first to know. Here’s the link. Lots of movers and shakers joined last year’s.
(3) Influential organizations to follow
Check out the Human Artistry Campaign - which serves as a collective “voice” for over 150 media companies, organizations and societies.
(4) Use the tools and experiment with them
Most importantly, learn how to use the tools! Experiment with ChatGPT-4 and Midjourney to stay current and fully understand how generative AI “works.”
Reach out to me at peter@creativemedia.biz with your feedback & submissions.