YouTube is rolling out an experimental new tool that lets users create short, original songs using artificial intelligence-generated voice clones of popular musicians.
Called “Dream Track,” the A.I.-powered technology is initially available to a small group of creators, YouTube announced in a blog post last week. So far, nine artists have volunteered their voices for the project: Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Papoose, Sia, T-Pain and Troye Sivan.
Dream Track will generate unique songs up to 30 seconds long for YouTube Shorts, which are brief, vertical videos similar to those on TikTok and Instagram Reels. Users will type a short description of the sound they’re going for—like “a ballad about how opposites attract, upbeat acoustic,” per YouTube’s example—and select one of the nine participating artists. Then, Dream Track will produce a snippet of a new song that aligns with those parameters, sung in the A.I.-generated voice of the chosen artist.
“At this initial phase, the experiment is designed to help explore how the technology could be used to create deeper connections between artists and creators, and ultimately, their fans,” YouTube wrote in the blog post.
The tool uses Lyria, Google DeepMind’s A.I. music generation model. Both YouTube and Google DeepMind are subsidiaries of the same parent company, Alphabet. The songs will have an embedded watermark that identifies them as A.I.-generated, though it’s undetectable to the human ear, according to a blog post from Google DeepMind.
In statements released by YouTube, the artists expressed support for the experiment—though some, like Charli XCX, noted that they remain cautious about A.I.’s potential role in the music industry.
“A.I. is going to transform the world and the music industry in ways we do not yet fully understand,” she says in a statement. “This experiment will offer a small insight into the creative opportunities that could be possible, and I’m interested to see what comes out of it.”
A.I.-generated music has drummed up some controversy in recent months. Earlier this year, a creator known as Ghostwriter released a song called “Heart on My Sleeve” that featured A.I.-generated vocals imitating artists Drake and the Weeknd. After the song went viral, the artists’ record label, Universal Music Group, successfully lobbied YouTube and other music streaming sites to take down the song, citing copyright claims.
Earlier this month, YouTube also unveiled new guidelines for A.I.-generated content on the platform. The rules will require creators to label their realistic A.I.-generated videos as such; YouTube will also create a process for people to request that deepfake videos be removed.
In the murky, still-developing world of A.I.-generated music, artists, lawyers and music platforms are still trying to untangle the various ethical and legal implications of the technology. Where do copyright laws come into play, if at all? Who gets paid for A.I.-generated music? And what happens if A.I. generates controversial lyrics that damage the real artist’s reputation?
“Where we’re talking about the creation of vocals, it could be used to say something that is polar opposite to that person’s belief system,” BT, an American musician and DJ, told NPR’s Chloe Veltman in April.