The $4 Billion Threat: AI Music vs Human Artists
How AI-generated tracks are slicing into royalties and threatening human creators
- by Nicholas Gunn

We’re living in a time when seeing is no longer believing. Images can be generated, voices can be cloned, and reality itself is increasingly difficult to verify. For many, music has remained one of the last places where authenticity still holds — a direct line to human emotion, creativity, and lived experience.
Now, even that is no longer guaranteed.
AI-generated tracks are flooding streaming platforms at an unprecedented rate, threatening not only the financial stability of musicians but the very soul of the art form. The technology is remarkable — but the stakes for the people behind the music could not be higher.
The Rise of AI Music
Opportunity and Consequences
Streaming services now see over 60,000 new AI-generated tracks uploaded every single day — a dramatic leap from the 10,000 uploaded per day just a year ago. Analysts predict that by 2028, AI music could siphon upwards of $4.2 billion annually away from human creators.
Let’s be honest: the music AI can generate is already incredible — and it’s improving fast. In the next few years, these systems will be able to mimic nearly any performer, blend genres seamlessly, and produce tracks at a level that rivals professional production.
Used as a tool, AI has enormous potential. As an assistive collaborator, it can unlock new creative workflows and expand what’s possible for human artists.
The problem begins when AI-generated tracks enter the same marketplaces as human-made music without clear labelling — and even more troubling, when artists claim copyright and credits for AI-created parts, collecting royalties they did not earn.
The flood comes from two groups: non-music professionals, suddenly able to “make music” with no training or understanding of the craft, and established musicians and labels who are not transparent about their AI use and, critically, are claiming ownership of AI-generated parts when registering tracks. This is fraud, plain and simple, exploiting gaps in copyright law to trigger royalties meant for human-made work.
At this point, it’s no longer just innovation — it’s competition inside a system that was never designed for it, and it’s being exploited to the detriment of real human creators.
The Mechanics of Royalties
How AI Dilutes the Royalty Pool
These consequences show up where it matters most — in the money human artists depend on to survive. Streaming platforms operate under a pro-rata royalty system, meaning all subscriber revenue is pooled and divided across every track in the platform’s library. Every new AI track claiming a share reduces the portion available to human artists.
Here’s how it works:
- Master royalties come from the actual sound recordings — the performances you hear. These go to whoever owns the master, whether that’s the artist or the label. The payout per stream is small, usually between $0.003 and $0.004 on Spotify, fluctuating based on listener location, habits, and engagement.
- Mechanical or IP royalties come from the composition itself — the melody, lyrics, and arrangement. Each stream triggers a payout to registered writers and publishers, typically around 15.3% of store revenue.
Now, add more than 60,000 AI-generated tracks into the system every single day. The audience isn’t growing at that pace — but the competition for revenue is. Every new track reduces what human creators take home.
It’s the equivalent of working a full shift in a busy restaurant, earning every tip, and then splitting it evenly with people who didn’t even show up.
That’s the reality for musicians in today’s streaming economy — and at this scale, it’s not a marginal issue, it’s a structural one.
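The dilution effect is easy to see with a little arithmetic. The sketch below models a pro-rata pool with entirely hypothetical figures (the pool size, stream counts, and function name are illustrative assumptions, not real platform data): when new tracks add streams but the subscriber revenue pool stays flat, each existing artist's payout shrinks.

```python
def pro_rata_payout(pool_revenue: float, my_streams: int, total_streams: int) -> float:
    """Pro-rata model: an artist's share = (their streams / all streams) * the revenue pool."""
    return pool_revenue * (my_streams / total_streams)

# Hypothetical platform: a $100M monthly royalty pool and 50 billion total streams.
pool = 100_000_000
my_streams = 1_000_000  # one artist's monthly streams

before = pro_rata_payout(pool, my_streams, 50_000_000_000)

# Now AI-generated tracks add 5 billion streams. The audience (and thus the pool)
# doesn't grow, but the denominator does -- so every human artist's slice shrinks.
after = pro_rata_payout(pool, my_streams, 50_000_000_000 + 5_000_000_000)

print(f"before AI flood: ${before:,.2f}")
print(f"after AI flood:  ${after:,.2f}")
```

With these illustrative numbers, the artist's payout drops from $2,000.00 to roughly $1,818.18 — about a 9% cut — even though their own streams never changed.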
Copyright and Registration
The Honesty Loophole
Legally, only human-created contributions can be copyrighted and registered. That means if you add a melody, lyric, or vocal performance to a track that also includes AI-generated elements, only your human contribution is eligible for royalties. Fully AI-generated tracks, by contrast, fall into the public domain, cannot be copyrighted, and do not generate royalties.
Registration isn’t optional — it officially establishes ownership and allows creators to be compensated.
Yet the system relies entirely on honesty. Most AI music platforms allow users to commercialize their outputs while disclaiming liability for the content produced. Creators uploading AI-generated tracks assume full responsibility — including any claims of authorship. This creates a loophole for misrepresentation: tracks can be partially or fully AI-generated yet submitted as human-made to collect royalties.
In the electronic music space, we’re already seeing well-known artists and labels blurring the lines and claiming AI-generated parts as their own. Some use ChatGPT to write lyrics and then feed them into engines like Suno, which create the melody, top line, instrumentation, and even computer-generated vocals. These AI elements are combined with human contributions, uploaded to streaming platforms, and treated as fully human-made tracks — triggering payouts from the same pooled royalties that real electronic artists rely on.
Knowingly claiming royalties from intellectual property you didn’t create is fraud — and it’s happening every day because current systems allow artists to exploit loopholes, undermining both fairness and trust in the music ecosystem.
The Impact on Human Artists
The Dual Threat to Art and Audience
AI music in the marketplace is a direct threat to the livelihoods of human creators and their connection with audiences — and the consequences are profound. Each new AI-generated track added to streaming platforms slices into the same royalty pool, reducing the share for artists who have spent years honing their craft. Visibility, income, and opportunities all take a hit — and the harder musicians work, the more they are penalized by this digital flood.
There’s also a cultural cost. Music isn’t just sound; it’s emotion, story, and shared experience. AI music risks diluting that human connection — something listeners feel, even if they can’t put it into words.
Surveys show that 83% of subscribers want real music, not AI-generated tracks. Young listeners, especially digital natives, crave authenticity: a “return to realness” that embraces imperfection, vulnerability, and human labor. AI can generate technical perfection, but it cannot replicate the subtle creative struggles, spontaneous decisions, or emotional resonance of a human performance.
As AI music grows, human artists face a dual threat: defending both their livelihoods and the very soul of the art form. Without action, AI risks quietly eroding not just earnings, but the essential connection between creators and audiences that makes music uniquely human.
Solutions and the Way Forward
Protecting Human Creativity in the Age of AI
There is hope.
The long-term answer is already emerging: transparency, clear labelling, and separate financial structures for AI music. Leaders in the industry, like Lucian Grainge at Universal Music Group, are defining AI’s role, establishing dedicated royalty pools, and partnering with AI companies to give stand-alone AI creators a space to innovate — without taking from the earnings of real, human artists. The goal is simple: embrace AI as a tool, not a replacement, while preserving fair recognition and compensation for human creators.
Human artists can decide how to participate — as collaborators with AI, innovators pushing boundaries, or by focusing solely on human-made music — knowing their work will be properly credited and valued.
But that future isn’t here yet. So, right now, the responsibility falls on all of us — artists, labels, platforms, and listeners — to protect human creativity. And the actions we take today must be immediate and intentional.
- Artists and labels, we have a responsibility to step back, make better decisions, fully disclose the use of AI, and consider the impact those choices have on the broader music community. This isn’t just about individual success; it’s about protecting the ecosystem we all depend on.
- Platforms and listeners, you also play a critical role by demanding transparency, rejecting misattributed AI tracks, and actively supporting real human creators.
- AI music creators, you are welcome to innovate and express your creativity — but it must be done in your own lane, not by drawing from the royalty pool meant for human artists.
Knowledge is power, and right now, transparency is survival. What we choose to do next will define whether music remains a human art form or becomes something else entirely.
History Repeats
Lessons from the Past, Action for the Future
After 33 years in the industry as an artist, producer, performer, and label owner, I’ve seen firsthand how quickly innovation can tip the scales against human creators. Every leap in music technology has brought both promise and peril: from analog to digital, CDs to streaming, and now computer-assisted production to full-blown AI.
The pattern is always the same: gaps in the system get exploited and artists pay the price. When Spotify launched in 2008, outdated copyright law left artists exposed. It took lawsuits, legislation, and new royalty structures to restore fairness. AI music is no different. The technology is powerful and exciting — but without transparency, labelling, and separate financial systems, human artists risk being left behind.
The difference now is speed: AI moves fast, and immediate action is critical. By demanding honesty, protecting human creativity, and building dedicated structures for AI-generated content, the industry can embrace innovation without sacrificing the livelihoods of real musicians or the authenticity that listeners crave.
History has taught us this: when the industry acts decisively, fairness can be restored. The time to act is now — if we want to keep music unmistakably human.