SoundCloud Revises Terms to Protect Artists from AI Training Without Consent

SoundCloud has clarified its terms of service following backlash over AI training concerns. The platform has committed not to use artists' uploads to train generative AI models that replicate or synthesize their content without explicit opt-in consent. The change emphasizes transparency, artist control, and consent, addressing confusion caused by broad language in the previous terms. While some critics want stronger protections, SoundCloud pledges to involve artists directly if AI tools are introduced in the future.

Published May 14, 2025 at 08:08 PM EDT in Artificial Intelligence (AI)

SoundCloud recently revised its Terms of Service (TOS) in response to widespread concern from artists about the use of their uploaded music for training artificial intelligence (AI) models. The platform clarified that it has never used artist content to train AI models and is now making a formal commitment to ensure any future AI use is grounded in consent, transparency, and artist control.

The controversy began after SoundCloud updated its terms in February 2024 with language that was perceived as too broad, suggesting that user content could be used to train AI technologies without explicit consent. The specific passage stated that, unless otherwise agreed, user content may be used to inform or train AI as part of providing SoundCloud’s services.

In response to the backlash, SoundCloud CEO Eliah Seton announced that the clause will be replaced with a clearer statement: the company will not use artists’ content to train generative AI models that replicate or synthesize their voice, music, or likeness without explicit opt-in consent from the artist. The change is expected to take effect in the coming weeks.

Seton emphasized that SoundCloud has never used member content to train AI models, including large language models, for music creation or to mimic or replace members’ work. Furthermore, if generative AI tools are introduced, they will be made available only through an opt-in mechanism, ensuring artists retain control over their content.

Despite these assurances, some critics, including tech ethicist Ed Newton-Rex, argue that the revised language still leaves room for AI models trained on artists’ work that might not directly replicate their style but could compete with them commercially. Newton-Rex suggests a stronger clause that would prohibit any AI training on user content without explicit consent, regardless of the model’s replication intent.

This development highlights the broader challenges tech platforms face in balancing AI innovation with creator rights and ethical considerations. SoundCloud’s updated stance reflects a growing industry trend toward transparency and consent in AI content usage, setting a precedent for other platforms managing user-generated content.

For artists and tech leaders, this case underscores the importance of clear communication and policy design around AI training data. It also signals opportunities for platforms to innovate responsibly by integrating AI tools that empower creators rather than exploit their work without permission.

