Google and YouTube are trying to have it both ways with AI and copyright


Google has made clear it is going to use the open web to inform and create anything it wants, and nothing can get in its way. Except maybe Frank Sinatra.


Frank Sinatra. Photo by Silver Screen Collection / Getty Images

There’s only one name that springs to mind when you think of the cutting edge in copyright law online: Frank Sinatra. 

There’s nothing more important than making sure his estate — and his label, Universal Music Group — gets paid when people do AI versions of Ol’ Blue Eyes singing “Get Low” on YouTube, right? Even if that means creating an entirely new class of extralegal contractual royalties for big music labels just to protect the online dominance of your video platform while simultaneously insisting that training AI search results on books and news websites without paying anyone is permissible fair use? Right? Right?

This, broadly, is the position that Google is taking after announcing a deal with Universal Music Group yesterday “to develop an AI framework to help us work toward our common goals.” Google is signaling that it will pay off the music industry with special deals that create brand-new — and potentially devastating! — private intellectual property rights, while basically telling the rest of the web that the price of being indexed in Search is complete capitulation to allowing Google to scrape data for AI training.

Let’s walk through it.

The quick background here is that, in April, a track called “Heart on My Sleeve” from an artist called Ghostwriter977, featuring the AI-generated voices of Drake and the Weeknd, went viral. Drake and the Weeknd are Universal Music Group artists, and UMG was not happy about it, issuing statements saying music platforms needed to do the right thing and take the track down.

Streaming services like Apple Music and Spotify, which control their entire catalogs, quickly complied. The problem then (and now) was open platforms like YouTube, which generally don’t take user content down without a policy violation — most often, copyright infringement. And here, there wasn’t a clear policy violation: legally, voices are not copyrightable (although the individual songs used to train their AI doppelgangers are), and there is no federal law protecting likenesses — it’s all a mishmash of state laws. So UMG fell back on something simple: the track contained a sample of the Metro Boomin producer tag, which is copyrighted, allowing UMG to issue takedown requests to YouTube.

This all created a gigantic policy dilemma for Google, which, like every other AI company, is busily scraping the entire web to train its AI systems. None of these companies are paying anyone for making copies of all that data, and as various copyright lawsuits proliferate, they have mostly fallen back on the idea that these copies are permissible fair use under Section 107 of the Copyright Act. 

The thing is that “fair use” is 1) an affirmative defense to copyright infringement, which means you have to admit you made the copy in the first place, and 2) evaluated on a messy case-by-case basis in the courts, a slow and totally inconsistent process that often leads to really bad outcomes that screw up entire creative fields for decades.

But Google has to keep the music industry in particular happy because YouTube basically cannot operate without blanket licenses from the labels — no one wants to go back to the labels suing individual parents because their kids were dancing to Prince in a video. And YouTube Shorts has no way to compete with TikTok without expansive music rights; taking those rights off the table by landing in court with the labels is a bad idea.

So YouTube appears to have caved.

In a blog post announcing a deal with UMG to work on AI… stuff, YouTube boss Neal Mohan makes vague promises about expanding Content ID, the often-controversial YouTube system that generally makes sure copyright holders get paid for their work, to cover “generated content.”

Mohan sandwiched that announcement in between saying there will be a new “YouTube Music AI Incubator” that convenes a bunch of UMG artists and producers (including the estate of Frank Sinatra, of course) and saying that YouTube will be expanding its content moderation policies to cover “the challenges of AI,” without actually saying that AI deepfakes are a huge problem that’s going to get worse. Instead, we get told that the solution to a technology problem is… more technology!

“AI can also be used to identify this sort of content, and we’ll continue to invest in the AI-powered technology that helps us protect our community of viewers, creators, artists and songwriters – from Content ID, to policies and detection and enforcement systems that keep our platform safe behind the scenes,” says Neal. Sure.

First, lumping “copyright and trademark abuse” in with the “and more” of malicious deepfakes and AI-accelerated technical manipulation is actually pretty gross. One of those things, at worst, costs someone some revenue; the others have the potential to ruin lives and destabilize democracies.

Second, and more importantly, there’s really only one solution the music industry, and especially UMG, is going to accept here, and it’s not toothless AI councils. It’s a new royalty system for using artists’ voices, a right that does not exist in current copyright law. If you make a video with an AI voice that sounds like Drake, UMG wants to get paid.

We know this because, in April, when AI Drake was blowing up on YouTube and UMG was issuing takedowns for the song based on the Metro Boomin sample in the track, UMG’s EVP of digital strategy, Michael Nash, explicitly said so during the company’s quarterly earnings call.

“Generative AI that’s enabled by large language models, which trains on our intellectual property, violates copyright law in several ways,” he said. “Companies have to obtain permission and execute a license to use copyrighted content for AI training *or other purposes*, and we’re committed to maintaining these legal principles.” (Emphasis mine.)

What’s going to happen next is all very obvious: YouTube will attempt to expand Content ID to flag content with voices that sound like UMG artists, and UMG will be able to take those videos down or collect royalties for those songs and videos. Along the way, we will be treated to glossy videos of a UMG artist like Ryan Tedder asking Google Bard to make a sad beat for a rainy day or whatever while saying that AI is amazing.

To be clear, this is a fine solution for YouTube, which has a lot of money and cannot accept the existential risk of losing its music licenses during a decade-long legal fight over fair use and AI. But it is a pretty shitty solution for the rest of us, who do not have the bargaining power of huge music labels to create bespoke platform-specific AI royalty schemes and who will probably get caught up in Content ID’s well-known false-positive error rates without any legal recourse at all.

And the problems here aren’t hard to predict: right now, Content ID generally operates within the framework of intellectual property law. If you make something (a piece of music criticism, say) that gets flagged by Content ID as infringing a copyright and you dispute the claim, YouTube never steps in to resolve it; instead, it imposes some tedious back-and-forth and then, if that doesn’t work out, politely suggests you head to the courts and deal with it legally. (YouTubers generally do not do this, instead coming up with an ever-escalating series of workarounds to defeat overzealous Content ID flags, but that’s the idea.)

But all of that falls apart when YouTube invents a custom right to artists’ voices just for big record labels. Short of some not-yet-implemented solution like watermarking all AI content, there is no AI system on earth that can reliably distinguish between an AI Drake and a kid just trying to rap like Drake. What happens when Content ID flags the kid and UMG issues a takedown notice? There is no legal system for YouTube to fall back on; there’s just a kid, Drake, and a huge company with enormous leverage over YouTube. Seems pretty clear who will lose!

Let’s say YouTube extends this new extralegal private right to likenesses and voices to everyone. What happens to Donald Trump impersonators in an election year? What about Joe Biden impressions? Where will YouTube draw the line between AI Drake and AI Ron DeSantis? Regular ol’ DeSantis has never met a speech regulation he didn’t like; once YouTube opens the door by removing AI Frank Sinatra, how will it withstand the pressure when DeSantis demands takedowns of every impression of him? Is YouTube ready for that, or is it just worried about losing its music rights?

If the answers are in this blog post, I sure don’t see them. But I do see a happy Universal Music Group.

Google’s Search Generative Experience answering a search for how to chop potatoes into fries. Image: Google / David Pierce

While YouTube is busy making nice with UMG, Google proper is ruthlessly wielding its massive leverage over the web to extract as much data as it can to train its AI models for free.

At this moment in web history, Google is the last remaining source of traffic at scale on the web, which is why so many websites are turning into AI-written SEO honeypots. The situation is bad and getting worse.

This means Google has absolutely tremendous leverage over publishers of websites, who are still mostly paying human beings to make content in the hopes that Google ranks their pages highly and sends them traffic, all while Google itself is training its AI models on that expensive content.

In the meantime, Google is also rolling out the Search Generative Experience (SGE) so that it might answer search queries directly using AI — particularly lucrative queries about buying things. In fact, almost every SGE demo Google has ever given has ended in a transaction of some kind.

This is a great deal for Google but a horrible deal for publishers, who are staring down the barrel of ever-diminishing Google referrals and decreasing affiliate revenue but lack any ability to say no to search traffic. And “Google zero” is coming: on Google’s last earnings call, Sundar Pichai bluntly said of SGE, “Over time, this will just be how search works.” 

There is fundamentally no difference between training an AI to sing like Frank Sinatra by feeding it Sinatra songs and training SGE to answer questions about what bikes to buy by training it on articles about bikes. But yet! There is no AI Music Incubator for the web and no set of friendly blog posts about working together with web publishers. Google’s position when it comes to the web is explicit: if its search crawlers can see content on the open web, it can use that content to train AI. The company’s privacy policy was just updated to say it may “use publicly available information to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.” 

A website could block Google’s crawlers in its robots.txt file — OpenAI, fresh from scraping every website in the world to build ChatGPT, just allowed its crawler to be blocked in this way — but blocking Google’s crawlers means deindexing your site from search, which is, bluntly, suicidal. 
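
For reference, here is a minimal sketch of what that blocking looks like in a robots.txt file. The GPTBot user-agent token is the one OpenAI documents; the catch is that Googlebot is the same crawler that indexes a site for Search, and as of this writing Google offers no separate token just for AI training:

```
# Disallow OpenAI's crawler site-wide (user-agent token documented by OpenAI)
User-agent: GPTBot
Disallow: /

# Blocking Google means blocking Googlebot, the crawler that also indexes
# the site for Search; there is no separate AI-training-only token
User-agent: Googlebot
Disallow: /
```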

This is playing out right now with The New York Times, whose robots.txt file blocks OpenAI’s GPTBot but not Google. The Times also just updated its terms of use to prohibit the use of its content to train AI. Given the opportunity to block Google and OpenAI at the technical level, the Times instead chose what amounts to a legal approach — and indeed, the company signed a commercial agreement with Google and is reportedly considering suing OpenAI. Meanwhile, OpenAI has signed its own deal with The Associated Press, setting up a situation where AI companies peel big players out of coalitions that might otherwise exert collective bargaining power over the platforms. (Disclosure: Vox Media, The Verge’s parent company, supports a bill called the JCPA that might further enhance this bargaining power, which comes with its own set of complications.)
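
Anyone can check that split for themselves. Here is a minimal sketch using Python’s standard-library robots.txt parser; the user-agent tokens are the published ones for OpenAI and Google Search, and the script simply reads the live file (assuming it is reachable, and that the rules haven’t changed since this was written):

```python
import urllib.robotparser

# Fetch and parse the Times' live robots.txt, then ask whether each
# crawler is allowed to fetch the homepage. "GPTBot" and "Googlebot"
# are the published user-agent tokens for OpenAI and Google Search.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.nytimes.com/robots.txt")
rp.read()

for agent in ("GPTBot", "Googlebot"):
    verdict = "allowed" if rp.can_fetch(agent, "https://www.nytimes.com/") else "blocked"
    print(f"{agent}: {verdict}")
```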

It is really not clear whether scraping data to train AI models is fair use, and anyone confidently predicting how the upcoming set of lawsuits from a cast of characters that includes Sarah Silverman and Getty Images will go is definitely working an angle. (A reminder that human beings are not computers: yes, you can “train” your brain to write like some author by reading all their work, but you haven’t made any copies, which is the entire foundation of copyright law. Stop it.)

The only thing that is clear about these looming AI copyright cases is that they have the potential to upend the internet as we know it and copyright law itself, and to force a drastic rethinking of what people can and cannot do with the art they encounter in their lives. The social internet came up in the age of Everything is a Remix; the next decade’s tagline sounds a lot like “Fuck You, Pay Me.”

This will all take a lot of time! And it behooves Google to slow-roll it all while it can. For example, the company is thinking about creating a replacement for robots.txt that allows for more granular content controls, but… you know, Google also promised in January 2020 to remove third-party cookies from Chrome and has since pushed that deadline back yet again, to 2024. A lumbering web standards process taking place in the background of an apocalyptic AI fair use legal battle is just fine if no one can turn off your crawler in the meantime!

At the end of all this, there’s more than a real chance that AI chokes out the web, both by flooding user-generated platforms with garbage and by polluting Google’s own search results so badly that Google has no choice but to sign a handful of lucrative content deals that allow its AI to be trained on real content instead of an endless flood of noise.

And you know what? That future version of Google looks an awful lot like the present version of YouTube: a new kind of cable network where a flood of user content sits next to an array of lucrative licensing deals with TV networks, music labels, and sports leagues. If you squint, it is the exact kind of walled garden upstarts like Google once set out to disrupt.

Anyway, here’s an AI clone of UMG artist Taylor Swift singing “My Way.”
