I started working on an AI music project a few weeks ago, and I have been building a tech stack and workflow to take published songs and super-automate the process of arranging and producing them in well-known genres. It seems I’ve started this project at the right time, because every morning I wake up to some awesome new app that will help me get this done in awesome new ways.
This morning was no exception. YouTube unveiled “Dream Track,” an AI music experiment in collaboration with Google DeepMind. The experiment allows a select group of creators to generate AI-driven songs using the voices of nine artists, including Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Papoose, Sia, T-Pain, and Troye Sivan. These songs, up to 30 seconds long, can be used in YouTube Shorts.
The project is part of YouTube’s broader exploration into AI’s role in music, emphasizing partnership and responsibility. From what I can tell, it aligns with YouTube’s recent policy changes to control AI-generated content that mimics artists’ voices, allowing for the removal of such content at the artists’ request.
I’m sure you’re aware that there are big feelings on both sides of the debate over using copyrighted material for AI training. Artists involved in Dream Track have released statements with mixed reactions. Charli XCX voiced caution about AI’s impact on music, while John Legend and Charlie Puth expressed optimism about AI enhancing creativity.
If you’ve got ideas about best practices for automated music (AI or otherwise), I’m all ears.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.