Google's NotebookLM - Scary Good. And Just Plain Scary.
Last Week It Was ABBA-tars. Now It's Synthetic Podcasters Mixed with Faux Images on Facebook/Instagram. What Does It Mean for Us Humans (& The Creative Process)?
WARNING: This newsletter (just like everything I write, voice and create) is 100% human generated.
I. Google’s New NotebookLM, Meta’s New AI Images (& ABBA-tars Too) - It’s Revenge of the “Synths”
No time for intro paragraphs (or litigation updates) this week. We’re going full-on “meta” (to be clear, not Mark Zuckerberg’s Meta — although Zuck finds himself all over this week’s newsletter, and not necessarily in the most positive ways … read on).
“Synthetic” humans (“Synths”) are beginning to invade the creative community. Last week it was the ABBA Voyage holographic “ABBA-tars” (which I found to be cute and somewhat endearing). This week, faux human images are coming soon to your Facebook and Instagram feeds (more on that below). But first, and most remarkable, Google just introduced NotebookLM, a content-generating AI wunder-app that, in Google’s words, is designed to be “your personalized AI research assistant.” NotebookLM can do many things, but its audio podcast-generating feature is what really stands out.
NotebookLM’s Podcast Feature
I first really dug into Google’s NotebookLM generative AI app when a colleague shared a podcast she made in minutes by simply typing in the link to my synth-ABBA newsletter from last week — no prompting, no instruction of any kind needed (you can listen to it here via this link). The result — generated in minutes — is a full-throated, AI-generated 6-1/2 minute podcast featuring two immensely likable, enthusiastic and deeply human-sounding voices (one male, one female) talking away about my ABBA newsletter’s content and themes.
My two AI-generated co-hosting Synths were excellent. They were entertaining, engaging and exhibited excellent chemistry. Great marketing for my newsletter, in fact. Human co-hosts couldn’t have said it much better themselves. To be clear, these Synth-hosts didn’t merely parrot my newsletter’s words. They created their own wholly novel and unscripted — but completely plausible and engaging — banter about it all (complete with shockingly realistic conversational pauses and laughter). The tech dazzled. It amazed. I got lost in it.
And that’s the problem. It’s almost too good.
The Google NotebookLM app experience is so good, in fact, that it inevitably begs the question: who needs humans anymore? Which begs the logical next question — what precisely is the goal here with all of this generative AI “stuff”? Which begs yet another question: how are we humans supposed to keep up with this endlessly accelerating pace of transformational technology innovation?
All of which begs my ultimate questions to all of you: shouldn’t we all at least step back and reflect upon what we are doing here? Why we are doing it? And at what creative and societal cost?
What’s the End Goal Here?
Let’s first step back a bit. Look at what’s happening to young people (all of us, really) who are simply overwhelmed by too much tech-driven stimulation bombarding them 24/7 — and too many expectations that come from it. I’m no social scientist. But I don’t think that we mere mortals are wired to handle all of this stimulation. Our brains get exhausted. They need rest. But then again, if young people don’t keep up, they’re told they’ll get left behind. Right?
But look at what unchecked heads-down, smartphone-driven, small screen-focused social media has wrought — a well-documented generation that is angst-ridden, feeling increasingly depressed and isolated — and thirsty for some semblance of community and human connection.
All of which takes me back to my two shockingly effective Synth podcasters. Yes, just like last week’s ABBA Synths, of course they are fascinating. They represent amazing feats of technical evolution. Bravo! (Or should I say, “Br-AI-vo!”).
But no matter how hard these Synths try (and they try really hard), we know they’re not real. They’re ultimately hollow approximations of ourselves. They reveal themselves with unwitting gaffes. Not just endlessly mispronouncing my last name (virtually every human does that, so that’s forgivable). It’s in the way they talk about the iconic group ABBA (speaking out each individual letter of the group’s name, as in “A.B.B.A.”). But most importantly, we know these Synths have no real emotional connection with, or care about, the subject matter of their discussion. It’s all the result of indifferent algorithmic AI modeling. Damn good modeling. But modeling nonetheless. They casually and confidently express their fact-check-free opinions (“AI-pinions”) without consequence. Yes, we humans spout opinions too, but we are personally accountable when we do.
So ultimately, what’s the point? Novelty? Yes. Sheer bulk content generation? That, I’ll buy. But living a life informed and entertained by Synths? No way! Give me real human artists, creators — even podcasters — every time. They choose their topics. They are fallible. They sound different on different days based on their moods and current circumstances. I want that care. That spontaneity. Those mistakes. And that personal “ownership” and accountability.
I want that humanity.
This Is No Anti-Tech Rant
To be clear, my focus here is on “Synths.” My rant is not meant to be a condemnation of tech, AI or generative AI as a whole. I’ve run several pioneering media-tech companies. I find generative AI’s possibilities to be fascinating and frequently helpful. I regularly use generative AI tools. Two notable examples are OpusCLIP (an amazing, easy-to-use video clipping and captioning app) and Zoom’s AI Companion feature (which delivers surprisingly accurate summaries of Zoom calls, not just verbatim transcripts of them). I even plan to give AI-generated voice to some of my newsletters and articles, including those about the great AI copyright infringement/fair use debate. So yes, I too am frequently conflicted about Synths.
But as a threshold matter, ALL of it — all forms of generative AI — must be “ethically trained” (trained on content that has been licensed, with the consent and compensation that go with it). It’s not “fair use” to simply take content in this context of economic harm and market substitution. You readers know how strongly I feel about that.
So, let’s go back to my new NotebookLM podcasting Synth friends in this context. Many of us (if not most) want to celebrate and elevate human generated creativity — and the connection, community, and originality that come with it. There’s a palpable and understandable sense of fear that we’re losing that right before our very real eyes — all in the name of purported progress. This is especially true because human generated content is increasingly drowned out by the sheer volume of AI generated content.
Meta’s New AI Images Underscore This Coming W-AI-ve
As if on cue to emphasize this point of a coming AI content tsunami, Meta just announced that it will begin to inject AI-generated images into your Facebook and Instagram feeds, whether you like it or not. In Meta’s words, these Synths (which may include your face) are “based on your interests or current trends.” We are posting less on those platforms, so apparently Meta decided to post for us. Isn’t that, in the words of fellow Minnesotan Tim Walz, just a bit “weird”? All the “noise, noise, noise, noise” (as the famed Grinch would say).
I’m no Grinch, and I know I’m not alone here. This kind of very real human talk increasingly permeates my conversations. I’m a proud father of two wonderful and immensely creative young adults in their early twenties, and I hear the same yearning and plea for humanity from them and their friends too. So this isn’t just a generational thing.
I know. It’s cool to be able to generate content on the fly with our ideas — including a podcast at the click of a button. It’s incredible. A testament to human ingenuity. But where is it all going? Got a random thought or idea? Well, then generate it and push it out into the world. I mean, why not? Everyone else is doing it. Creative democracy in action! Right?
Well, sort of, I guess. Calls for “creative democracy” are hard to resist after all.
Call me a creative elitist, but I believe that some creative ideas are more worthy than others. And maybe, just maybe, real meaningful and ground-breaking creativity doesn’t go by the pound, and art isn’t meant to be so easy. Maybe creative “hard work” is a worthy pursuit in and of itself. Maybe, just maybe, it’s precisely that creative hard work that leads to something entirely original — not just something AI-generated and “novel” (as in, something the world hasn’t seen or heard before, factually speaking).
Ezra Klein & Rick Rubin Agree
Game-changing music producer Rick Rubin makes this case in his recent book “The Creative Act.” Human mistakes, imperfections and spontaneity generate original and lasting art. The New York Times’ brilliant voice Ezra Klein seconds that Rubin-esque human emotion. Rather than cheer the removal of painful steps from the creative process, Klein celebrates that hard work (you must listen to this podcast episode). He finds creative utility in every painful micro-decision humans make along the way.
Yes, of course, generative AI can make the process of human creativity more efficient and painless. But Klein asks, isn’t it precisely that inefficiency — that mental blood, sweat and tears (in other words, the work) — that leads to wholly original (not just novel) thought and creations (in other words, great art)? Without that individual care, frequently arduous work, and even creative “pain,” perhaps the world would never have seen the great creative “gain” that ultimately connects, impacts and transforms entire generations — and creates something truly lasting.
I think so.
I know, I know. I was triggered by a co-hosted Synth podcast. It’s not meant to be lasting art. But, to me, it’s about what all these Synths represent at a macro, societal level. My Synth podcasters, coupled with Meta’s new AI image machinations and the ABBA-tars of the week before, are just three more data points that show where this is going.
Is this emotion speaking, rather than some deeper form of rationality? Is it simply the musings of a maturing man, rather than those of a twenty-something Silicon Valley entrepreneur?
Maybe it is.
But I’m human after all. And I’m okay with that ….
What do you think? Share your thoughts with me at peter@creativemedia.biz.
II. Mark Zuckerberg: Oops, You Did It Again!
We’ve seen this movie before. It’s a tale as old as time. Big Tech develops transformational new technology and unleashes it into the world with little warning. And then an understandably shellshocked Hollywood, media, music and creative community faces entirely new disrupted industry economics that change everything, literally. We saw — and continue to see — that with the Internet and the streaming it wrought. And now, of course, we see it with generative AI.
This isn’t a judgment. It’s simply reality.
Big Tech always has the upper hand in this reality loop. They’re the ones with control — developing this game-changing transformational new tech. And some of those Big Tech toppers arrogantly overvalue their contributions in this calculus. Case in point: Mark Zuckerberg. This is what he just said to The Verge about content owner demands for consent and compensation when tech companies train their AI models on that content: “I think individual creators or publishers tend to overestimate the value of their specific content in the grand scheme of this.”
Oh. Really?
Perhaps Zuckerberg, instead, overestimates the value of his genAI tech in the grand scheme of things, since it’s utterly useless without the content used to train it — content simply taken and scraped. Because that’s what Meta (and virtually all others) did. They took the content they needed — the entirety of the Internet, really — and scraped it.
So there he goes again, making friends in Hollywood (but he looks so good doing it in his newly announced “Orion” AR glasses, don’t you think?). Cue Scott Galloway (Prof G himself): “Yeah, we’ll all buy and wear those!” Some of you know what I’m talking about.
III. THIS is What My Firm & I Do (as Voiced by Google’s NotebookLM)
I recently "commissioned" a new podcast focused on what my firm Creative Media and I do (okay, I simply inserted my website’s link into Google’s new NotebookLM app and did nothing else — no prompts, no instructions of any kind). THIS is the resulting audio podcast.
All done in seconds.
All scary good.
And yet all a bit uncanny (although it is very human in its inability to pronounce my last name; can you blame it?).
I urge you to try NotebookLM yourself. It’s incredible.
But then, ask yourself. Is it all a bit unsettling?
[NOTE: in keeping with the "synth" theme of this newsletter, my headshot image above is fully synthetic — a reflection of how generative AI views me. Looks like it's time for me to go to a real gym to keep up.]