Google wants to use music to train AI models, but record labels should push back
Musicians shouldn’t help train AI models that might one day replace them
What you need to know
- YouTube confirmed that it is in talks with record labels about paying for access to music for “other experiments” using AI.
- Google debuted a few applications for AI in music production at Google I/O in May 2024, including at a keynote preshow.
- Labels and artists may entertain offers for permissive AI training due to the complexities of compensation in the streaming age, when artists earn less than a cent per stream.
AI is swiftly entering all creative industries, as generative AI is capable of creating text, photos, videos, and even music. Google, a heavy participant in the AI race, is looking for ways to train its models to perform various tasks, including music generation. For this to work, Google needs source material to train AI models with, and YouTube wants to partner with record labels to get it.
YouTube already has a Dream Track feature that is based on the work of nine artists. However, the Financial Times reports that YouTube is in talks with record labels Universal Music Group (UMG), Sony Music Entertainment, and Warner Records. A potential deal would involve YouTube paying “lump sums of cash” for the right to use the labels’ songs for AI music training. In a statement, YouTube told the Financial Times that it was “in conversations with labels about other experiments” but that it was not planning to expand Dream Track specifically.
It’s unclear whether copyrighted material can legally be used to train AI models. The courts may settle that question eventually: the New York Times sued OpenAI last year, alleging the company infringed its copyright by using the Times’ articles to train AI models.
For now, companies training AI models face a high-risk, high-reward scenario. Using copyrighted material to train AI models could drastically accelerate their growth, but it could also open companies up to liability if the courts rule in favor of copyright holders like the New York Times.
The safe bet is to try to strike deals directly with copyright holders and compensate them for AI model training based on their content. In many cases, that’s exactly what Google is doing. By striking a deal with the record labels themselves, YouTube would be able to use select tracks, albums, or catalogs—whatever is specified in the agreement—to train AI models and provide certain features without legal ramifications.
That’s significant because the legal ramifications could be massive. In one lawsuit, the Recording Industry Association of America (RIAA) sued Suno and Udio—two AI music generation tools—for copyright infringement and asked for $150,000 in damages per violation, as reported by Rolling Stone. The RIAA is a force in the music industry, representing the biggest record labels, including the ones YouTube wants to strike a deal with, like UMG.
The specifics of which YouTube features would require AI music training are unclear; however, generative AI music creation makes sense for the platform. YouTube has stringent copyright restrictions that limit what kind of music can be used in videos. Videos that use copyrighted material without authorization can be pulled entirely, lose monetization, or earn a copyright strike against the channel that posted them.
There are certainly ways to use music in YouTube videos. The easiest is to find royalty-free music that is free to use in other works, usually on the condition that credit is given to the original creator. Still, it’s quite difficult to safely use music you didn’t create yourself on YouTube.
AI music generation features could fix that, offering a way to create original background music for YouTube videos without any copyright risk. As such, it’s easy to see why YouTube would want to work with the major record labels.
It’s the record labels that should back out
It’s perfectly clear why YouTube wants to pay record labels to license their content for AI training purposes. However, the record labels should refuse the offer. We don’t know how much YouTube is offering, but it really doesn’t matter. YouTube could give an artist or label a blank check, and they should still leave it on the table.
Right now, it’s impossible to put a price tag on creativity. Music generation tools can perform a few tricks, and they’re a great way for people who aren’t musically inclined to dabble in song creation. However, the current crop of tools has nothing on professional musicians. Most of those available today either rely on the likeness of popular singers or create something that pales in comparison to what a human could make.
It’s possible—maybe even plausible—that generative AI can become so great at music generation that it replaces artists. I’m doubtful, but I’ll never bet against technology. Regardless of whether generative AI gets better at creating music, record labels and artists certainly shouldn’t help it.
A quick influx of cash might help in the short run, especially as the music industry struggles to adjust to the economics of streaming. However, it won’t pay off in the long run. Once the door to AI music training is opened, it can never be closed. It might be tough to earn less than a penny per stream today, but it will be even harder to watch AI models trained on human tracks churn out music without any royalties flowing back to the artists.
There is precedent for record labels working with Google to ship generative AI features—after all, that’s how Dream Track came about in the first place. But if I were running a record label, I’d follow in the RIAA’s footsteps without thinking twice. Record labels shouldn’t hand over the music in their portfolios for AI training without putting up a massive fight first.
Brady is a tech journalist for Android Central, with a focus on news, phones, tablets, audio, wearables, and software. He has spent the last three years reporting and commenting on all things related to consumer technology for various publications. Brady graduated from St. John's University with a bachelor's degree in journalism. His work has been published in XDA, Android Police, Tech Advisor, iMore, Screen Rant, and Android Headlines. When he isn't experimenting with the latest tech, you can find Brady running or watching Big East basketball.