Imbr
July 30, 2025

AI Isn't the Apocalypse. It's Just the Music Industry Doing What It Always Does

Everyone’s got an opinion on AI right now. Panic. Outrage. Wild optimism. Deep fear. And understandably so: the pace of development is staggering, and the implications for creativity, authorship, and income are massive.

But the more I watch the conversation unfold, the more it feels like déjà vu. Because this isn’t new. Not really.

We’ve been here before.

Every few years, a new platform or technology emerges that reshapes how music is made, distributed, or monetised. And almost every time, the music industry’s first instinct is to delay, deny, or demand things it doesn’t have the leverage to enforce - only to come around once the damage is done.

AI might feel unprecedented. But the behaviour around it is textbook.

We’ve Seen This Movie Before

When YouTube launched, it didn’t ask for licenses. It asked for scale. And it got it. By the time rights holders pushed back on deal structures, the platform already had a billion users and the leverage that came with it.

TikTok, via its predecessor Musical.ly, took a similar approach. Build first. Worry about rights later. SoundCloud did it too. Facebook. Twitter. It’s a Silicon Valley model: break the rules, then clean it up later - or buy enough goodwill that no one cares.

And when the music industry did finally engage? The early deals were lump sums. Quick wins. Guaranteed money. Simple optics. At the time, that felt like a win - cash up front while everyone figured out how to value the economics of scale. But as those platforms exploded, the leverage shifted entirely to the tech companies. And creators were left locked out of future growth.

We can’t repeat that mistake with AI.

This time, we need revenue shares from day one. Not just a one-off cheque to help major players hit bonus targets or pad quarterly earnings. We have to do the hard work now - the modelling, the forecasting, the policy design - so that the economics of this new space reflect creator value, not just platform scale.

Those lump-sum deals may look good on paper in year one, but they become the bane of creators’ lives in year five, when everyone else is profiting off what they helped build… and they’re left with a fixed fee and no upside.
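
To see why, here’s a minimal sketch - with entirely invented numbers, not figures from any real deal - of how a one-time lump sum stacks up against a modest revenue share once a platform’s revenue starts compounding:

```python
# Illustrative only: compares a hypothetical one-time lump sum against a
# small revenue share as a platform's annual revenue grows. Every figure
# here is made up for the sake of the comparison.

LUMP_SUM = 50_000_000   # one-off payment, received in year one
REV_SHARE = 0.05        # 5% of platform revenue, paid every year
revenue = 200_000_000   # platform revenue in year one
GROWTH = 0.40           # 40% year-on-year growth

lump_total, share_total = LUMP_SUM, 0.0
for year in range(1, 6):
    share_total += revenue * REV_SHARE
    print(f"Year {year}: lump-sum total ${lump_total:,.0f} vs "
          f"revenue-share total ${share_total:,.0f}")
    revenue *= 1 + GROWTH
```

Under these assumptions, the lump sum wins for the first three years and then never again - exactly the window in which those early platform deals were signed and celebrated.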

Ironically, the platforms that actually did get their licenses in order before launching - Spotify and Apple - are the ones that get a lot of the heat. Spotify in particular. Apple still enjoys goodwill in most industry circles, despite the fact that they’re the ones who unbundled the album - the most valuable format the music business has ever had.

Spotify didn’t do that. But they’re easier to blame because of the perception that their payouts, on a per-stream basis, are low.

I think it’s important to be honest about how we got here. The current state of the music industry isn’t the fault of one company or one decision. This is the result of a thousand small choices, compromises, and missteps over time. But blaming a single “bad guy” is a much easier story to sell. It simplifies the narrative. It gives people someone to point at. And some corners of the industry have leaned hard into that, because it’s a more convenient marketing tactic than owning up to the broader dysfunction.

AI Is Not the Enemy

So let’s be clear: AI is not your enemy. A bad deal is.

What’s coming next - in rights scraping, in synthetic voices, in beat generation and lyric mimicry - is real. It will change the creative process. It will impact income. But that doesn’t make it a threat. It makes it another wave.

And if the industry meets this wave the same way it met the last five? We’ll lose again. Slowly. Loudly. Profoundly.

The bigger threat isn’t AI replacing songwriters. It’s AI platforms building billion-dollar businesses with no formal deal structure in place, and creators being shut out of the value they helped build.

If that sounds familiar, it should.

The Risk of Moral Grandstanding

Right now, there’s a lot of noise around refusing to license to ‘bad actor’ AI models - the ones trained on infringing data. “We won’t engage with companies that used our data without permission,” and so on. And I understand the anger. But morally satisfying stances don’t pay royalties.

Sitting out of the conversation doesn’t stop AI from growing. It just guarantees you won’t be part of the revenue share when it does.

And look - I get the frustration with the “ask forgiveness, not permission” model. I really do. But pretending we can rewind the clock or stop progress cold is naïve at best. The better path is to get to the table early. Define the terms. Protect the work.

Because once these models reach mass adoption, the leverage flips. And we’ll be right back where we always end up: playing catch-up with tech giants who no longer need us.

What a Smarter Deal Could Look Like

This isn’t about letting AI companies off the hook. It’s about making sure creators are actually part of the future economy and not just shouting at it from the sidelines.

We already know how to build licensing frameworks that work in complex, non-transparent environments. We’ve done it before.

Take private copying levies. In markets across Europe, Canada, and elsewhere, a small fee - often 2–3% of the device’s price or a flat fee per unit - is collected from manufacturers of recordable media. In Canada, about $0.29 per blank CD is paid into a fund, with 58% going to authors and publishers, 24% to performers, and 18% to labels. Globally, this approach has generated more than €1 billion in revenue in a single year.

It’s not perfect. But it works. And I’m not saying the economics or the splits from private copying levies are the right blueprint here - but they prove a point: we can build systems that monetise usage even when individual transactions can’t be tracked. AI outputs could be approached in much the same way, with levies or blanket payments based on usage volume, tiered access, or output scale.
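
To show how mechanical that kind of distribution can be, here’s a toy sketch applying the Canadian splits cited above. The unit count is a placeholder, and an AI levy would presumably key off output volume or usage rather than blank media - but the arithmetic is the same:

```python
# A minimal sketch of a levy-style distribution, using the Canadian
# private copying splits cited above (58% authors/publishers, 24%
# performers, 18% labels). The unit count is hypothetical.

LEVY_PER_UNIT = 0.29   # e.g. ~$0.29 per blank CD in Canada
SPLITS = {"authors/publishers": 0.58, "performers": 0.24, "labels": 0.18}

units = 10_000_000     # hypothetical units sold (or AI outputs generated)
pool = units * LEVY_PER_UNIT

for group, share in SPLITS.items():
    print(f"{group}: ${pool * share:,.2f}")
```

No per-use tracking, no per-transaction negotiation - just a measurable base, a rate, and an agreed split.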

And as uncomfortable as it might be for parts of the industry to hear, this kind of model means accepting that you can’t also gouge services for huge upfront advances just to manipulate the split value later. If we want the upside of a revenue share - with a fair advance to protect against platform risk, and a real commitment to building this ecosystem together - then that’s the trade-off. But it’s a trade-off where everyone actually wins.

But there’s another layer to consider: the difference between pre-existing AI platforms and new entrants.

Pre-existing services - the ones that already scraped and trained on copyrighted works without consent - need to settle up. That means a one-time payment (or structured settlement) to cover past training data usage. It’s retroactive. It’s large. It’s clean-up. It’s the cost of doing business backwards.

But going forward, the deal has to change. Consent has to matter.

On a go-forward basis, platforms should only be able to train on rights-holder-approved works. Much like how digital services today have to deal with restricted writer lists - a complex, often messy system, but a functional one. It proves the industry knows how to manage opt-ins, exclusions, and tiered rights access when it needs to.
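
As a toy illustration - every field name here is hypothetical, not drawn from any real rights database - the gating logic for an opt-in training catalogue is simple; the hard part is maintaining the rights data behind it:

```python
# A toy sketch of opt-in filtering for AI training, loosely analogous to
# restricted writer lists. The Work fields and the approval flag are
# hypothetical; real rights data is far messier than this.

from dataclasses import dataclass

@dataclass
class Work:
    title: str
    rights_holder: str
    approved_for_training: bool  # explicit opt-in; absent means excluded

catalog = [
    Work("Song A", "Publisher X", approved_for_training=True),
    Work("Song B", "Publisher Y", approved_for_training=False),
    Work("Song C", "Publisher X", approved_for_training=True),
]

# Only explicitly approved works make it into the training set.
trainable = [w for w in catalog if w.approved_for_training]
print([w.title for w in trainable])  # ['Song A', 'Song C']
```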

The analogy to restricted lists isn’t perfect, but it’s workable. And workability - not purity - is what we need.

The Industry Has a Choice

The real question isn’t whether AI is good or bad for music. It’s whether the industry is finally ready to deal with change in a way that protects creators - not just after the fact, but from the start.

Are we ready to stop pretending disruption is temporary? Ready to build smart deals instead of clinging to moral panic? Ready to stop assigning blame based on emotion rather than facts?

AI isn’t the apocalypse.

But it might just be another test.

And the question is whether we’re going to fail it the same way we failed the last one - by refusing to engage until it’s too late.