Apple's AI "Intelligence": safe, secure & "ethically sourced" ... or is it?
Its "next big things" asks us to consider what these marketing labels really mean
Clear your head! It’s time to recover from your weekend hangover for your Monday morning brAIn dump! First, “the AI:10” - 10 key AI headlines from last week. Next, this week’s “mAIn event” (of course I had to write about last week’s big “Apple Intelligence” announcement! Wouldn’t want to disappoint you!). Then, the “AI legal case tracker” - updates on key genAI copyright infringement cases. Finally, let’s rAIse a glass and toast each other with “the cocktAIl” - my special AI mixology.
I. the AI:10 - the 10 key AI headlines from last week
(1) Apple re-imagines what “AI” means with “Apple Intelligence.” But does it really, when it depends upon OpenAI for much of its AI superpowers? Read my “mAIn event” feature story below. But until then, check out some of the effusive press from the Apple fan boys and girls about Tim Cook’s big announcement. Here’s Axios’ take (“AI may have just had its iPhone moment”). Meanwhile, The Information gives Apple props for giving a “master class in how to explain complex technology to ordinary people.” And Wired lays out why it believes “AI is a feature, not a product” for Apple — and why that is the right strategy.
(2) But Elon is having none of it! Of course he’s not. Apple CEO Tim Cook stole the spotlight for the week. So what does Musk do? He threatens to ban iPhones at his companies. You know, what any highly secure and emotionally intelligent CEO would do. Read more here via Axios.
(3) Meanwhile, OpenAI CEO Sam Altman’s cash machAIne keeps flowing. The company’s annualized revenues just doubled to $3.4 billion since late 2023. Couldn’t happen to a nicer guy (gulp). Read more here via The Information.
(4) Bravo ITN, bravo! The UK-based broadcast news production company just signed a deal to protect its video archive from Big Tech genAI training. The deal with OpenOrigins validates and secures more than one million video clips using the blockchain. Fascinating - and a sign of things to come. Read more here via the Press Gazette.
(5) You say “tomato” and I say “tomAIto.” Screen Daily asks the $64,000 question — which, adjusted for inflation, probably exceeds $1 million, but I digress — is AI for TV drama a “useful tool” or “phenomenally destructive”? I add my own “two cents” in its piece.
(6) Luma AI lights up the AInternet. Posts about the power of this company’s “AI Dream Machine” proliferated during last week’s Apple-dominated news cycle. Here’s just one example, which includes a video demo worth watching.
(7) K-pop clones AI bop. South Korean entertainment giant HYBE, the home of superstars BTS, just acquired voice cloning company Supertone for $32 million. Read more here via Music Business Worldwide.
(8) AI voice generator wants us all to play Hooky. This new music platform just launched a subscription service that features voice models of artists like British R&B singer Jay Sean and electronic dance duo Bonnie x Clyde. Is it “ethically sourced”? You be the judge! Read more here via Digital Music News.
(9) Meanwhile, local news sites are publishing AI-written articles under fake bylines. Doubt that that’s “fair and balanced” ... although some at that media company may disagree. Read more here via that company’s nemesis CNN Business.
(10) But no need to worry, generative AI is no scarier than nail guns or microscopes! So says former U.S. Patent and Trademark Office Director David Kappos, who cautions all of us to “keep ourselves grounded” about AI. It’s nothing to “panic” about, he says, according to Law360.
[NOTE TO READERS: My goal is not to stoke fear about AI. Rather, it’s to be stoic about this new technology that will transform media and entertainment, point out the need to do it both legally and ethically, and then encourage everyone to take action - including advocacy.]
II. “the mAIn event” — Apple’s AI “Intelligence”: safe, secure & “ethically sourced” — or is it?
Its "next big things" asks us to consider what these marketing labels really mean.
Apple’s redefinition of “AI” to mean “Apple Intelligence” was all the rage last week. That’s kind of funny, since a big piece of Apple’s announced AI launch strategy is OpenAI and ChatGPT dependent — which essentially means Microsoft, OpenAI’s biggest investor. But casting that aside for the moment, Apple CEO Tim Cook, as expected, firmly placed privacy and security at the center of his pitch. That’s a fascinating and extremely difficult needle to thread, since privacy, security, and respect for intellectual property all go hand in hand with the data AI uses to “do its thing.”
Apple’s two-part AI strategy
Apple’s AI strategy comes in two parts. First, Apple – using its own homegrown AI tech – will enable users to do myriad tasks more productively and efficiently directly on their iPhones, iPads and Macs. None of those tasks – like prioritizing messages and notifications – requires any outside assistance from OpenAI or any other Big Tech generative AI. Apple Intelligence will be opt-in, which means that users must affirmatively agree before their data is made available to Apple’s AI, either directly on device or via Apple’s own private cloud for more complex tasks. Apple assures its faithful that it will never ever share their personal data. If all of that is true, so far, so good. No privacy or copyright harm, no infringing foul.
But Apple may be doing at least some of the same things for which OpenAI and other Big Tech AI have been rightfully criticized. The company’s Machine Learning Research site states that its foundational AI model trains on both licensed data and “publicly available data collected by its web-crawler, AppleBot.” There are those three words again – “publicly available data.” Typically, that’s code for unlicensed copyrighted works -- not to mention personal data -- being included in the training data set, which calls into question whether Apple Intelligence is fully “safe” and “ethically sourced.” That more troubling interpretation is bolstered by the fact that Apple says that web publishers “have the option to opt out of the use of their web content for Apple Intelligence training with a data usage control.”
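For readers who want to see what that opt-out actually looks like under the hood: Apple’s crawler documentation describes an AI-training-specific user agent (it calls it “Applebot-Extended”) that publishers can block in their robots.txt file. Below is a minimal, unofficial sketch of how you might check whether a given site has opted out. The site URL is hypothetical, and the assumption that Apple’s “data usage control” boils down to a robots.txt rule for that user agent is mine, not Apple’s.

```python
# A quick, unofficial sketch: checking whether a publisher's robots.txt
# opts out of AI-training crawls. The site URL below is hypothetical, and
# the user-agent string assumes Apple's documented "Applebot-Extended"
# token is what the "data usage control" amounts to in practice.
from urllib import robotparser

SITE = "https://example.com"        # hypothetical publisher site
AI_CRAWLER = "Applebot-Extended"    # assumed AI-training user agent

rp = robotparser.RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()  # fetch and parse the site's live robots.txt

if rp.can_fetch(AI_CRAWLER, f"{SITE}/"):
    print("No opt-out found: this content may be fair game for training crawls.")
else:
    print("Opt-out found: robots.txt disallows the AI-training crawler.")
```

On the publisher side, the opt-out itself is just a couple of lines in robots.txt (a User-agent line naming the crawler, plus "Disallow: /"), which is part of why critics frame this as an opt-out regime rather than an opt-in one: silence effectively equals consent.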
The notion of “ethically sourced” AI also goes beyond privacy and copyright legalities. It raises larger considerations of respect for individuals, their creative works, and their right to commercialize them. That’s particularly pointed for Apple, which – notwithstanding its recent “Crush” video brain freeze, when it (literally) pummeled the creative works of humanity down into an iPad – prides itself on safety and on being Big Tech’s home for the creative community.
The second part of Apple’s strategy is also problematic from this “ethically sourced” perspective. This is when a user asks for generative AI results that Apple’s own AI can’t handle, and Apple – with the user’s permission – hands off the relevant prompt to OpenAI and ChatGPT to do the work. Remember, ChatGPT scoops up “publicly available data,” which, again, means that third party personal data and unlicensed copyrighted works are likely included to some extent.
[An Apple spokesperson declined to comment, but the company says in its press materials that it takes steps to filter personally identifiable data from publicly available information on the web. Apple also states that it does not use its users’ private personal data or user interactions when training the models built into Apple Intelligence.]
In any event, all of this properly calls into question Apple’s “white knight” positioning. Let’s take the legal piece first. If Apple’s use of “publicly available data” means what I think it means, then Apple faces the same potentially significant legal liability that OpenAI and other Big Tech players face. It also may be legally liable when it hands off its generative AI work to OpenAI’s ChatGPT even with user consent. Merely because CEO Sam Altman and his Wild West gAIng at OpenAI do the work does not necessarily excuse Apple from legal liability.
Companies can be secondarily liable for copyright-infringing behavior if they are aware of those transgressions but actively enable and encourage them anyway. That’s at least arguably the case with Apple, which is well aware that OpenAI stands accused of copyright infringement on a grand scale for training its AI models on unlicensed copyrighted works. That’s what The New York Times case, and many others like it, are all about.
“Ethically sourced” is a nuanced concept
To be clear, the concept of “ethically sourced” AI is nuanced beyond the strictly legal part of the equation. Creator-friendly Adobe found this out the hard way. It launched its standalone Firefly generative AI application last year with great artist-first fanfare, trumpeting the fact that its AI trained only on licensed stock works already in the Adobe family. It was later reported, however, that that wasn’t exactly true. Firefly apparently had, in fact, also trained – at least in some part – on images from visual AI generator Midjourney, a company that now also finds itself embroiled in significant copyright litigation. And with that inconvenient truth, Adobe’s purity was called into question, which is fair when a company makes purity a headline feature.
But Adobe’s transgressions appear to be of a completely different order of magnitude than OpenAI’s wholesale guardrail-less taking, and its ethical intentions seem to be generally honorable. Given the great steps it takes at least on the privacy side of the equation, Apple too seems to land closer to Adobe than to OpenAI and other Big Tech generative AI services.
That doesn’t make Apple completely innocent though, especially when being “ethically sourced” is front and center in its pitch. The company developed its two-part strategy to serve its installed base of more than 2.2 billion active devices, keep users firmly in its walled garden, and catch up in the expected multi-trillion dollar AI race. And it built its next big thAIng knowing that its “Apple Intelligence” solution likely includes at least some third party personal data and unlicensed copyrighted works.
No foundational AI model can be entirely “pure” unless it trains only on licensed data. And Apple Intelligence ain’t that.
[NOTE: It is possible for generative AI to be fully pure by training only on 100% licensed data. I’m aware of several companies developing those solutions right now.]
[What do you think? Send me your feedback to peter@creativemedia.biz]
check out Creative Media and our business & legal services (and how we can help you and your company)
III. the AI legal case tracker - updates on key AI litigation
I lay out the facts - and the latest developments - via this link to the “AI case tracker” tab on “the brAIn” website. You’ll get everything you need (including my detailed analysis of each case). These are the cases I track:
(1) The New York Times v. Microsoft & OpenAI
(2) Sarah Silverman, et al. v. Meta
(3) Sarah Silverman v. OpenAI
(4) Universal Music Group, et al. v. Anthropic
(5) Getty Images v. Stability AI and Midjourney
NOTE: Check out the “AI case tracker” tab at the top of the page at “the brAIn” website.
IV. the cocktAIl — your AI mix of “must attend” AI events
After all, it’s always happy hour somewhere!
(1) UPCOMING IN JULY - Digital Hollywood’s first generative AI-focused virtual summit, “The Digital Hollywood AI Summer Summit” (July 22nd-25th). I’ll be moderating two great sessions. Learn more here via this link. It’s all entirely free!
(2) Check out my Digital Hollywood AI roundtable sessions to prep for the upcoming AI Summit. You can watch them here via this link, where I interview leading artists – like Stewart Copeland of the legendary band The Police – about how artists feel about generative AI, as well as leading innovators in the world of media, entertainment and generative AI.