Spotify Moves to Stop AI Music Impersonation Crisis from Spreading

Spotify is testing a new feature aimed at tackling one of the fastest-growing problems in the music industry: AI-generated tracks being falsely attributed to real artists.

The tool, currently in beta, allows musicians to review and approve any new releases linked to their profiles before they go live. The goal is simple but significant: give artists direct control over what appears under their name and prevent impersonation, spam, or low-quality “AI slop” from slipping through.

The feature, referred to as “Artist Profile Protection,” is particularly targeted at creators who have experienced repeated misattribution, including those with common names or those already affected by fraudulent uploads. When activated, artists receive alerts whenever a track is submitted under their identity and can choose to approve or reject it.

This move comes as streaming platforms grapple with a surge in AI-generated content. Advances in generative tools have made it easier than ever to produce music that mimics real voices and styles, often blurring the line between authentic artistry and synthetic imitation. In some cases, fake songs have been uploaded under the names of well-known musicians, generating streams and revenue without their consent.

The scale of the problem is growing. Industry reports suggest that AI-generated tracks are flooding platforms, sometimes overwhelming moderation systems and making it harder for genuine artists to be discovered. The term "AI slop" describes this kind of content: easy to produce, lacking originality or authenticity, yet still competing for attention and monetisation.

Spotify’s response reflects a broader industry shift from passive moderation to proactive control. Rather than relying solely on automated detection or takedown systems, the company is placing responsibility directly in the hands of artists. This approach could prove more effective, especially in edge cases where algorithms struggle to distinguish between legitimate collaborations and impersonations.

However, the move also highlights deeper structural challenges. Streaming platforms were designed for scale and openness, allowing anyone to upload music through distributors. That same openness now makes them vulnerable to abuse. Introducing approval layers could slow down releases and add friction to the system—potentially affecting independent artists who rely on speed and accessibility.

There is also the question of scope. While the tool addresses attribution, it does not fully solve the broader issue of AI-generated music itself. Questions around labelling, royalties, and intellectual property remain unresolved, and different platforms are experimenting with varying solutions.

For Spotify, the stakes are both reputational and economic. If artists lose trust in how their identities and work are managed, the platform risks alienating the very creators it depends on. At the same time, failing to act could allow bad actors to exploit the system at scale.
