Music creator ProducerAI partners with Google Labs

On Tuesday, Google revealed that the generative AI music platform ProducerAI will join Google Labs.

Backed by The Chainsmokers, ProducerAI lets users generate tracks from natural-language prompts, such as “create a lofi beat.” It is powered by Google DeepMind’s Lyria 3 music-generation model, which can turn text and even images into audio.

Last week, Google announced plans to integrate Lyria 3 features into the main Gemini app, but ProducerAI allows users to interact with the AI model more as if it were a “collaboration partner,” according to Elias Roman, Senior Director of Product Management at Google Labs.

“ProducerAI has opened up new ways for me to create,” Roman wrote in a blog post. “I’ve played with genre fusions, crafted personal birthday songs for friends and family, and produced custom workout playlists for myself and others.”

Google also disclosed that three-time Grammy winner Wyclef Jean utilized the Lyria 3 model and Google’s Music AI Sandbox for his latest track “Back From Abu Dhabi.”

“This isn’t merely a machine where you push a button a hundred times and you’re finished. It involves a thoughtful curation process where you sift through options and say, ‘I believe that’s something we can work with,’” stated Jeff Chang, Director of Product Management at Google DeepMind, in a video released by the company.

Jean recalled wondering how a flute might sound in a track he had already recorded, and described how quickly he was able to add one using Google’s tools.

“What I want everyone to grasp […] is that we now live in an era where human creativity must prevail,” Jean mentioned in the video. “One advantage you have over AI: a soul. One advantage AI has over you: infinite information.”

AI in the music industry

Some musicians have vigorously opposed the integration of AI tools in music creation, as it’s almost certain that generative AI tools were trained on copyrighted materials from artists without permission. Numerous musicians, including big names like Billie Eilish, Katy Perry, and Jon Bon Jovi, endorsed an open letter in 2024 urging tech companies not to compromise human creativity through AI music generation tools.

A group of music publishers has also recently filed a $3 billion lawsuit against the AI firm Anthropic, alleging that the company illegally downloaded more than 20,000 copyrighted songs, including sheet music, lyrics, and compositions. (Anthropic previously agreed to a $1.5 billion settlement with authors whose books were used without authorization for AI training.)

Other artists, meanwhile, have embraced the technology to improve audio quality rather than to seek creative input.

Paul McCartney utilized AI-based noise reduction technology — similar to what Zoom or FaceTime use to eliminate unwanted background sounds during video calls — to restore an old, low-quality demo by John Lennon. The resultant “new” Beatles song, “Now and Then,” garnered a Grammy in 2025.

Meanwhile, AI music generators like Suno have produced synthetic tracks realistic enough to top charts on Spotify and Billboard. Telisha Jones, a 31-year-old from Mississippi, turned her (allegedly human-written) poetry into the viral R&B track “How Was I Supposed To Know” using Suno, landing a record deal with Hallwood Media reportedly worth $3 million.

The legality of using copyrighted works as AI training data remains unsettled: one federal judge, William Alsup, ruled last year that training on copyrighted data can qualify as fair use, while pirating that data cannot.