Tech giant Google and record label Universal Music Group are taking an interesting stance in the AI-generated music debate. Rather than lobby the legal system for sanctions to set an anti-AI precedent, the pair are in talks to allow it – with a new way to profit from it. If they can strike a deal on deepfake songs, the AI music industry is set to take off.
What is AI in music?
Artificial intelligence, in the context of music, is software that can intelligently create music: that is, music that sounds good. These days, AI is so advanced that it can compose melodies, create chords, write lyrics, design sounds, clone an existing music artist's voice, and put it all together. The result is a completely AI-generated song, one that the artist you (think you) hear has no idea exists. A song that required no studio, no microphone, and no musicians.
This slightly dystopian concept is today's reality, and thanks to the free and open internet, it's unstoppable. Any individual can run an AI model on their own computer to generate these songs and share them online.
What do musicians think of AI music?
Music artists are not happy about AI music. Understandably so. Their labels, however, are taking a different stance.
The initial response of the copyright owners – record labels or, in some fortunate cases, the music artists themselves – was to fight back against unlicensed music-sharing, which UMG has called "both a breach of our agreements and a violation of copyright law."
The "big three" music labels in the Western world – UMG (Universal Music Group), WMG (Warner Music Group), and Sony Music – are of course concerned about royalties. Unlicensed music doesn't make them any money. This gives them two options: either file DMCA takedown notices across video and streaming platforms ad infinitum, or devise a way to profit from the new technology.
As a result, Google, the world's largest search engine operator and parent company of AI music battleground YouTube, is discussing the latter with UMG, the world's largest music label and copyright holder.
These discussions, while early stage, include an approval process for AI-generated music. According to a report from the Financial Times, this means a "licensing agreement for songs produced with generative AI, including performances by deepfake versions of" artists' voices. The ultimate goal is to ensure legality and that money changes hands. These negotiations are predicted to lead to a tool, with Google as UMG's technology partner, which may relate to Google's text-to-music AI model, MusicLM. A product launch is not 'close' but is likely in the pipeline.
Which music artists have deepfake songs?
Regardless of compensation, music artists stand to have their agency and creative control stripped in a world where anyone can use their voice.
Artists including Ariana Grande, Frank Sinatra, Rihanna, Kanye West, Eminem, Johnny Cash, and, most publicly, Drake have all been victims of the initial wave. Essentially, any artist whose voice a ghostwriter believes will attract the most attention is seen as fair game. Using vocals without permission has never been permissible under US law in the case of 'samples', but their indirect use as training data within an AI model (which then generates new material from them) is a new and unprecedented legal controversy.
Can AI-generated music be copyrighted?
On August 8th, Warner Music chief executive Robert Kyncl said during an investor call that AI voice-cloning tools "enable fans to pay their heroes the ultimate compliment through a new level of user-driven content… including new cover versions and mash-ups".
Meanwhile, UMG's executive vice president and chief digital officer, Michael Nash, has "expressed concerns about the impact of AI on the industry, warning that it could lead to the dilution of the market and violate artists' rights to compensation for their work". Unlicensed music-sharing has been an ongoing issue since the dawn of the internet. According to Nash, the rise of AI and the threat it poses to the music industry are not unlike the threat posed by now-defunct music-sharing platform Napster at the turn of the century.
Likewise, “An artist’s voice is the most valuable part of their livelihood, and to steal it – no matter the means – is wrong,” affirms Universal Music general counsel Jeffrey Harleston.
Will AI replace human music?
Drake is the most publicized example of this trend. In April, "Heart on My Sleeve", a song featuring deepfake vocals of the Toronto rapper and The Weeknd, went viral, garnering tens of millions of plays across multiple platforms.
Whether AI will replace human music depends on its adoption by labels, artists, and consumers – we all have a part to play in the future of our own art and culture. David Guetta, for example, appears to have embraced the technology with the Eminem deepfake song "Emin-AI-em", while Timbaland has used voice cloning of The Notorious B.I.G. for a sample. Streaming service Deezer may be the only major platform still systematically removing these spoofs.
Streaming services such as Apple Music, Spotify, and Deezer also share an ethical responsibility for how their platforms are used. But sandwiched between the demands of labels and listeners, they have little room to put their foot down.
Do you support AI music in concept? Did you stream any of the deepfake songs? Well, therein lies our answer.