The music business is pushing back against AI. Universal Music Group, home to superstars like Taylor Swift, Nicki Minaj, and Bob Dylan, has urged Spotify and Apple to block AI tools from scraping lyrics and melodies from its artists’ copyrighted songs, the Financial Times reported last week. UMG executive vice president Michael Nash wrote in a recent op-ed that AI music is “diluting the market, making original creations harder to find, and violating artists’ legal rights to compensation from their work.”
Neither Apple nor Spotify returned requests for comment about how many AI-generated songs are on their platforms or whether AI has created more copyright infringement issues.
The news came on the heels of a request from UMG that a rap about cats in the style of Eminem be removed from YouTube for violating copyright. But the music industry is worried about more than AI copycatting a vocal performance; it’s also fretting about machines learning from its artists’ songs. Last year, the Recording Industry Association of America submitted a list of AI scrapers to the US government, claiming that their “use is unauthorized and infringes our members’ rights” when they use copyrighted work to train models.
This argument is similar to the one artists made in a lawsuit brought against AI image generators earlier this year. As with that case, many questions about the legality of AI-generated art remain unanswered, but Erin Jacobson, a music attorney in Los Angeles, notes that people who upload AI-made material that clearly violates copyright could be held liable. Whether the streaming platforms themselves share that liability is a murkier question.
The new generative tech shows a tendency toward mimicry. Earlier this year, Google announced it had created an AI tool called MusicLM that can generate music from text. Enter a prompt asking for a “fusion of reggaeton and electronic dance music, with a spacey, otherworldly sound,” and the generator delivers a clip. But Google did not release the tool widely, noting in its paper that about 1 percent of the music generated matched existing recordings.
Much of this AI-generated music could take over mood-based genres, like ambient piano or lo-fi beats. And it may be cheaper for streamers to fill playlists with AI-generated tracks than to pay out even paltry royalties. Clancy says the problem isn’t that AI is moving too quickly but that people may be adapting too slowly, which could leave human artists without the equity they deserve in the industry. Changing that means drawing clear distinctions between AI- and human-made music. “I don’t think it’s fair to say ‘AI music is bad’ or ‘human music is good,’” Clancy says. “But one thing I think we can all agree on is, we like to know what we’re listening to.”
But there are many examples of artists working with AI rather than competing against it. Musician Holly Herndon used AI to create a clone of her voice, called Holly+, that can sing in languages and styles she cannot. Herndon built it to keep sovereignty over her own voice, but as she told WIRED late last year, she also hoped other artists would follow her lead. BandLab’s SongStarter feature, meanwhile, lets users work with AI to create royalty-free beats, removing some of the barriers to songwriting.
AI might become a perfect imitator, but it may not, on its own, create music that resonates with listeners. Our favorite songs capture heartbreak, speak to and shape the culture of the moment, and break new ground in times of political upheaval. AI will have a role in writing, recording, and performing songs. But if people open their streaming apps and find them crowded with AI-made tracks, that connection may be harder to come by.