The generative AI "watch list": 10 tech companies for media & entertainment to closely track
This inaugural list shines a light on the need for "ethically sourced" AI (and these 10 are a good place to start - for all the wrong reasons)
Welcome to this week’s brAIn dump! First, “the AI:10” returns - 10 key media-related AI headlines from last week. Next, this week’s “mAIn event” (the headline story - my inaugural genAI “watch list”). Then, watch/listen to “the great genAI debate!” I face off with expert Robert Tercek in a no-holds-barred debate about the ethics and legality of Big Tech’s unlicensed use of copyrighted works. We all then deserve “the cocktAIl” - my special AI event mixology. Finally, the “AI legal case tracker” - updates on the key genAI copyright infringement cases.
I. the AI:10 - the 10 key AI headlines from last week
(1) OpenAI CEO Sam Altman can run, but can’t hide. He’s on a rehabilitation tour after his Scarlett Johansson debacle, sheepishly defending himself by saying “it’s not her voice.” But Sam, that fact is irrelevant under the law (as I pointed out in my feature last week). If it sounds like a Johansson, that can be enough. Ask artists Bette Midler and Tom Waits, who faced similar issues before — and won. Here’s a related “take” from The Atlantic. And yet another from Axios that argues Altman has jumped the shark, adopting Meta CEO Mark Zuckerberg’s “move fast and break things” playbook.
(2) Meanwhile, the fun never stops at OpenAI. It’s training its next AI model, GPT-5, to get closer to its ultimate goal of achieving artificial general intelligence (AGI), but promises to be a better social steward this time — forming a new “Safety and Security Committee.” Yup, Altman is on the Committee. So, we’re all good, right? (gulp). Read more about it here.
(3) Apple “crushed” it (the souls of the creative community) last week in its tone-deaf iPad video, but still moves full speed ahead with OpenAI. Bloomberg reported that Tim Cook signed a deal with Altman to bring chatbot functionality to iOS 18. Look for that at next week’s WWDC. Bank of America’s analysts predict that Apple’s new AI-powered “IntelliPhones” will dominate the market.
(4) OpenAI certainly is busy! Feeling the heat from media and entertainment (finally!), the genAI leader is on a mission to close as many licensing deals as possible to legally use copyrighted works for training. Most recently, The Atlantic and Vox Media inked deals, but terms were undisclosed. OpenAI’s belated efforts should be applauded.
(5) Caution, media companies - don’t fall into the same trap! Writer Jessica Lessin expands on my cautionary note with her own “must read” historical perspective in The Atlantic. “News organizations rushing to absolve AI companies of theft are acting against their own interests,” she writes. I largely agree with her for the reasons above.
(6) Is Amazon-backed Anthropic really taking the higher ground? It certainly sees an opening to seize the “white knight” narrative amidst all the OpenAI chaos, as Bloomberg reports. But it too trains on copyrighted works without licensing them. So while building a “safer” AI mousetrap should be applauded, let’s keep the heat on here too.
(7) What, we worry? Nah, says Netflix’s co-CEO Ted Sarandos, “AI is not going to take your job.” But, he also told The New York Times, “The person who uses AI well might take your job.” Reassured yet? Time for all of us to take action to best position ourselves by learning to use generative AI tools (and following developments closely). That’s what my newsletter is here for!
(8) Sony Pictures’ Chairman minces no words. Tony Vinciquerra openly proclaimed that his studio will look to “produce both films for theaters and television in a more efficient way, using AI primarily.” To be clear, so long as it’s done “right” (i.e., ethically and legally), then this is the natural order of things with any new tech. We can’t ignore the human ramifications, but must accept this new reality and best position ourselves for it.
(9) The entertainment industry scrambles inside Washington, D.C. to have a seat at the table. The industry correctly wants to have an active hand in any new AI legislation and regulation, so that Big Tech lobbyists don’t craft it all. Read more here in TheWrap.
(10) And then there’s this — your kids can now have their very own M3GAN! Introducing “Moxie,” your son or daughter’s new best synthetic friend. Only $800 (not sure if batteries are included!). Read more here. Spoiler alert: M3GAN didn’t end well!
[NOTE TO READERS: My goal is not to stoke fear about AI. Rather, it’s to be stoic about this new technology that will transform media and entertainment, point out the need to do it both legally and ethically, and then encourage everyone to take action - including advocacy.]
II. Media & entertainment’s generative AI “watch list”: 10 tech companies to closely track (and keep honest)
May was not a good month for generative AI companies in the eyes of the creative community. OpenAI mimicked Scarlett Johansson’s voice to push CEO Sam Altman’s agenda. Google search introduced new “AI Overviews” intended to give consumers the information they need without linking them to the publishers who create it. And Apple crushed all of creativity down to a pulp. Literally.
All of this gives rise to my first generative AI “watch list” – a list of 10 companies that the creative community should track closely and keep honest. Consider this my first generative “hall of shAIme” — a call for “ethically sourced” AI.
1. OpenAI
Altman’s ScarJo debacle was just his latest, and the departure of the company’s “ethical AI” leadership team certainly didn’t help. OpenAI trains its generative AI – including Sora, its Hollywood- and big brand-focused video generator – on reams of copyrighted works. Even Google (hardly an AI innocent) voiced its displeasure, pointing out that Sora trained on over one million YouTube videos. Remember, Altman launched OpenAI as a non-profit with a mission to develop AI that “benefits all of humanity.” My, how things have changed. Its non-profit “do good” mentality seems like a distant memory. At this point, it’s tough to trust Altman, which maybe shouldn’t be too surprising since he co-founded OpenAI with Elon Musk.
2. Microsoft
Microsoft is essentially guilty by association, since it is by far OpenAI’s biggest investor, and OpenAI is dependent on its cloud infrastructure. That’s why Microsoft finds itself embroiled in massive copyright infringement litigation, together with OpenAI.
3. Google
Google finds itself next on this list amongst Big Tech giants because it is hell-bent on overwhelming us with generative AI across all of its services, whether they are ready for prime time or not. Its new search “AI Overviews” feature — which, in one now notorious example, was found to recommend eating rocks to get the nutrients you need — likely will lead to significantly less traffic to the publisher content that enables it. It also may open up Google to endless product liability if users do, in fact, ingest rocks! A good case can be made that Section 230 doesn’t absolve Google of liability for its AI Overview results, since it now directly curates them. CEO Sundar Pichai also recently joined OpenAI CEO Altman’s AI video generator party with Google’s own video generator, “Veo” (funny, since Veoh was the name of YouTube’s biggest rival back in the day; remember, there are no coincidences!). Just like Microsoft, Google trains its AI on copyrighted works without consent or compensation. That makes it deserving of inclusion here.
4. Meta
It should come as no surprise that Meta CEO Mark Zuckerberg finds himself on this inaugural list, since respect for the creative community has never been one of his hallmarks. Meta doesn’t disappoint in its casual indifference to creatives in this AI context. When the edict was given to move fast to catch up to OpenAI with its own generative AI development, internal stakeholders cautioned that consent would be needed to scrape copyrighted works. But all of that was cast aside by business exigencies. Meta knew what it was doing but went forward anyway. Move fast first. Ask for forgiveness later.
5. Suno & Udio
Twin generative AI music companies, from different VC mothers, make the list at #5. First, it’s Suno — a much heralded, venture-backed company that just closed a fresh round of $125 million at a $500 million valuation. Suno certainly is impressive, enabling anyone to create professional-sounding music tracks in seconds simply by text prompting. The problem is that, just like the Big Tech giants it emulates, it’s almost certain that its AI trains on endless streams of copyrighted music. I’ve asked several times and received no confirmation either way – which typically means that the silent party doesn’t want the real answer to be known. But one of the company’s early investors, Antonio Rodriguez of Matrix Partners, isn’t so coy. In his words, “Honestly if we had deals with labels when this company got started, I probably wouldn’t have invested in it. I think that they needed to make this product without the constraints.”
That pretty much sums it up for most generative AI companies and their investors – who find dealing with artists and licensing to be needless, pesky “constraints.” Musicians should be aware of this pervasive attitude, not to mention the coming flood of generative AI music. They can’t stop it. But they should be compensated when their music is used to train it.
Andreessen Horowitz-backed Udio is Suno’s relative twin — it’s another leading AI music generator that makes the list for similar reasons.
6. ElevenLabs
ElevenLabs is a high-profile, venture capital-backed generative voice AI unicorn, and yet another one that builds much of its value on the backs of the creative community. It too reportedly scrapes copyrighted works to fuel its genAI dreams. Again, I repeatedly asked, and heard nothing. To add insult to injury, even high-profile artists count themselves among its investors. Musician will.i.am is closely aligned with the company, recently telling a rapt crowd at an industry event that the music industry “is technology.” Wrong. Tech most certainly expands and empowers much of it. But artists and their songs are at the center of the music universe. At least they should be.
7. Fable Studio
This Bay Area startup (yes, we know where this Silicon Valley story is going) just announced its new “Showrunner” platform to enable anyone to easily write, voice and animate shows using generative AI. The only problem is the same old, same old – Fable Studio’s AI is trained on “publicly available data.” Yes, those same three words again, and we all know what that likely means – copyrighted works. I reached out to CEO Edward Saatchi for comment and received no response. That’s why his company makes the list. Saatchi says he wants to be “the Netflix of AI.” But I just watched the new HBO documentary “MoviePass, Movie Crash” and was reminded that its CEO wanted to be “the Netflix of movie theaters.” And we know how that went.
8. Adobe
Adobe, which prides itself on widely serving the creative community, launched its Firefly generative AI product with so much “ethically sourced” AI promise, assuring its users that all of its training data had been licensed. The only problem is that it wasn’t. It was later revealed that Adobe trained Firefly, at least in part, on images created with the image generator Midjourney, which itself is embroiled in major copyright infringement litigation. Oops.
9. Andreessen Horowitz
If there’s one venture capital firm to closely watch in the world of generative AI – actively promoting its overall “damn the torpedoes” mentality – it’s this one. It counts OpenAI, ElevenLabs, Udio and several other genAI leaders on its roster, at least several of which can be said to be flouting copyright laws. But let’s give an honorable VC mention to Antonio Rodriguez of Matrix Partners for his shameless disregard for creative IP with respect to Suno (which I reference above). I doubt he’d be so cavalier if competitors stole Suno’s code and genAI “special sauce.” He’d certainly seek “constraints” there.
10. Apple
Et tu, Apple? Its tone-deaf “Crush” iPad video ad underscored Big Tech’s indifference to the creative community’s justifiable concerns about generative AI. If Apple – the most artist-friendly of all Big Tech companies – let this ad fly, what does it reveal about how all of Big Tech and Silicon Valley really feel about the value of content and media and entertainment IP?
We know the answer. That’s why these ten are on my inaugural “watch list.” And that’s why the creative community should closely follow the continuing conduct of these companies and ask tough questions all along the way.
To be sure, generative AI is here to stay and will transform the media and entertainment industry. And for the copyright owners, it’s not about stopping generative AI. It’s just about consent and fair compensation for the content that enables it – in other words, “ethically sourced” AI.
And yes, some generative AI companies actually believe in it. Those that do – like Promethean AI, a tool for creatives to manage all of their own IP assets – should be elevated and celebrated because of it. I’ll highlight more of those “ethically sourced” AI companies in upcoming columns.
(Feel free to send me your ideas of generative AI companies that are doing it “right” so that I can feature them - and any other feedback - to peter@creativemedia.biz).
III. the great genAI debate: “big tech” v. “big media” — is AI training on unlicensed copyrighted works ethically and legally defensible? Expert Robert Tercek and I battle it out at Streaming Media NYC (watch it here via this link)
IV. the cocktAIl - your AI mix of “must attend” AI events
After all, it’s always happy hour somewhere!
(1) UPCOMING IN JULY - Digital Hollywood’s first generative AI-focused virtual summit, “The Digital Hollywood AI Summer Summit,” (July 22nd - 25th). I’ll be moderating two great sessions. Learn more here via this link. It’s all entirely free!
Check out Creative Media and our business and legal services (and how we can help you and your company)
V. the AI legal case tracker - updates on key AI litigation
I lay out the facts - and the latest developments - via this link to the “AI case tracker” tab on “the brAIn” website. You’ll get everything you need (including my detailed analysis of each case). These are the cases I track:
(1) The New York Times v. Microsoft & OpenAI
(2) Sarah Silverman, et al. v. Meta
(3) Sarah Silverman v. OpenAI
(4) Universal Music Group, et al. v. Anthropic
(5) Getty Images v. Stability AI and Midjourney