Google’s new music tool, Lyria 3, is here
Google’s announcement that its Gemini app now writes music for you isn’t just another of those “mind-blowing” product updates. It feels like a symbolic surrender to a long-standing refrain from Big Tech: creative work is now just another checkbox for a machine.
If you don’t know what I am talking about: yesterday Google launched a new feature in the Gemini app, Lyria 3, that lets us cook up 30-second tracks complete with lyrics and cover art (the art generated, of course, by Nano Banana) from a text prompt or a photo; basically, no instruments, no experience, no pesky tactile skill required.
It’s essentially a LEGO set for “songs” that last about as long as a TikTok loop. They say it’s designed for YouTube creators, and I tend to agree with them, because you can’t do too much with 30 seconds.
Still, the underlying problem is a different one: I keep seeing projects and songs made with AI, including fully AI “artists”, and that is what I want to highlight in this piece.
“Behind every beautiful thing, there’s some kind of pain,” said Bob Dylan, and I could not agree more.
If we look at history (art, music, literature, poetry, and so on), the main fuel for creation has indeed been pain.
Now, how should I put this? Probably the only pain that Lyria can feel is more like a faint server-overload alert than heartbreak.
Real songwriters know that soul isn’t born in a 30-second prompt; it’s extracted through years of mistakes, late nights, losses, and tiny revelations.
Call it a toy if you like. Google will, too.
They even watermark the outputs with a SynthID tag so the 30-second ditties are officially AI-generated, not “inspired.” That’s a nod to copyright concerns, but it also reads like an admission: these aren’t really art, they’re by-products of pattern statistics.
What’s striking isn’t the novelty. Much of this has been possible in labs and APIs for years, and creators have been experimenting with generative music tools as collaborators.
What Lyria 3 does, and what makes this moment worth watching, is normalising the idea that anyone can “write” a song with a chatbot and a mood descriptor. That’s not empowerment; it’s a devaluation of craft.
Just because you pay for a subscription to Suno (another, more complex AI music generator), that doesn’t make you an artist or a singer. Just because you learn how to write a prompt that gets an LLM to generate pages for you, that doesn’t make you a writer.
Imagine a world where every blog runs on AI-generated copy (not that we aren’t nearly there already) and every company can churn out half-baked music for its ads or social posts.
In that economy, a professional songwriter’s unique skill becomes as optional as knowing how to use a metronome.
You could ask Gemini for an “emotional indie ballad about a lost sock,” and voilà, you have something. Whether it has actual coherence or soul is left to the listener to decide. It’s fun to use with your friends, for shorts, or to impress a date.
Video: Gemini Lyria Music Generation Feature – Socks, uploaded by Google on YouTube
Still, Lyria 3’s music is capped at 30 seconds, and that’s no accident. It sidesteps deeper legal and ethical quarrels about training data and mimicry of existing works by keeping outputs short and legally fuzzy. That’s a thumbs up from me.
But even within that limit, it’s now possible for someone with no craft or cultural context to generate riffs, lyrics, and chord progressions that sound, to the casual ear, adequately musical. In an attention economy obsessed with shareability, “adequate” quickly becomes plenty.
This matters because real songs, the ones that endure, that carry human experience, aren’t just collections of musical atoms. They’re shaped by story, risk, cultural memory, and sometimes contradiction.
One of my favorite artists, Tom Waits, said, “I don’t have a formal background. I learned from listening to records, from talking to people, from hanging around record stores, and hanging around musicians and saying, ‘Hey, how did you do that? Do that again. Let me see how you did that.’”
That was the research and the prompting of its time, and the point isn’t only about reducing time, getting things done faster, or “having more time for yourself.”
It’s about the entire process, the contact with other artists, humans, and IDEAS.
Those are qualities machines can mimic but not originate. When the machines own the first pass at creation, and the commercial ecosystem embraces that output because it is cheap and fast, the incentives shift. Not gradually. Suddenly.
The record industry is already grappling with AI. Streaming services, publishers, and even labels have begun experimenting with algorithmic playlists and automated composition.
What Gemini’s Lyria 3 does is extend that experiment to public perception. A whole generation may come to think that “making music” means typing a description and choosing a style. Songwriting becomes a UX problem, not a craft one.
That raises a serious question: in a world where AI can conjure up a half-decent hook on demand, what will distinguish professional artists?
If the answer is only brand story or marketing muscle, we aren’t celebrating creativity; we are monetising it out of existence.
Tech companies like Google will frame this as liberation. And in a literal sense, anyone who’s ever wanted to hear a short tune about a sock’s existential crisis now can. But liberation without value for the creator is just consumerism by another name.
Lyria 3 might be good for GIF soundtracks, social clips, and viral TikTok reels, but it doesn’t make professional musicians obsolete; it makes their work less necessary to the platforms that reward hyper-consumable content.
That’s a different threat from outright replacement: it’s obsolescence by trivialisation.
If AI is going to be part of musical creation, then let it be as an assistant to the composer, a tool that improves ideas rather than replaces them. What we’re seeing with Gemini is not collaboration but outsourcing.
And the lesson for artists isn’t to fear the algorithm. It’s to insist on clarity about where AI replaces labour and where it augments human sensibility.
Because once the marketplace equates the two, the humans who do the work will be the ones left asking for royalties in a language no one else wants to speak.
And, as a personal (not sponsored) recommendation: streaming platforms like Deezer have built AI-detection tools that flag and label AI-generated tracks and exclude them from recommendations and royalties, so that human songwriters aren’t buried under synthetic spam and listeners can tell the difference between AI and human.
If you care about preserving real artistry in a world of text-to-tune generative models, start paying attention to how platforms handle AI-tagging and choose services that give you transparency about what you’re actually listening to.
To be clear, I’m not here to throw shade at Lyria 3; if anything, the idea of letting people turn a photo or a mood into a short track sounds like fun for casual use and creative experimentation, which is exactly what Google says it’s meant for.
Yet the reality is that as these models proliferate, we risk confusing novelty with art. And here, the big tech companies are not the ones to blame, but us.