Every day, we have to play an increasingly difficult game of “Spot the AI.” There’s the Velvet Sundown’s psych-rock troll; the Japanese gay porn megahit; the lo-fi dreamslop. For the most part, the streaming services haven’t done much to prevent songs like these from appearing on their platforms. YouTube may give users the option to disclose whether their uploads contain synthetic media, but the uploaders of many obvious AI videos have opted not to. And while Spotify announced a crackdown on spammy songs in September last year, they put the onus of disclosing whether a song uses AI on the artists themselves.
But the tide’s turning. In June last year, the French streamer Deezer began forcibly tagging songs that they detected used AI. Then two weeks ago, Bandcamp went the farthest of any platform to date. Citing their mission to support musicians as humans and not just “mere producers of sound,” they announced a ban on music and audio “generated wholly or in substantial part by AI.” Anything using AI to impersonate an artist or a style would also be prohibited, in accordance with the company’s existing policies on infringement. They urged users to report anything that seemed to violate these rules, and said the company reserved the right to remove anything they found suspicious.
Artists, writers, publications, and a deluge of internet commenters cheered Bandcamp’s move, while the popular Reddit page r/indieheads—which acted early by banning all AI music submissions midway through last year—lit up in approval.
But there was also backlash. Musician-technologist Holly Herndon, who’s long experimented with machine learning in her music, called the ban a “tourniquet” in a long thread on X. She wrote that the ban was ill-advised and impossible to fully adjudicate, if not outright bad, because it would prevent humans from “experimenting with an era defining medium.” “We live with infinite media now,” she reasoned. “I encourage platforms to be more curated, but enforcing a hard human/AI binary is not the right way to address this long term.” She added that even the canniest methods people use to detect AI, like searching for “artifacts” left by programs like Suno, are fallible. What if someone uses AI to produce a song and then gets someone to organically re-record it?
“A lazy assumption with ai is that the laziest people use it, and the most dedicated people use traditional tools,” she wrote. “Are software developers running claude code agents lazy or insatiable? I am insatiable. I want more sounds and opportunities to cut and mutate and intervene.” Ghostly founder Sam Valenti also worried that the ban could discourage musical experimentation, urging people to instead judge art on its aesthetic value, regardless of the tech used, and to deploy their disdain “to foster more critique and a keen desire for greatness.”
Herndon is right that media has gone infinite. We are all Charlie Kirks liable to be undressed and race-swapped by Grok, slippery and fungible and glued to 24-hour looping lo-fi beat mixes. She and the other detractors are also correct that the ban will be extremely difficult to put into practice, since the quality of AI music has risen to the level of, and in some cases surpassed, milquetoast human songs. Take Sienna Rose, who has over 3.5 million monthly listeners and multiple songs on Spotify’s USA Viral 50. Deezer reported that its AI detection tool flagged Sienna Rose’s music as AI-generated, and her songs have the “telltale hissing” and other artifacts that characterize tracks created on apps like Suno. Would that be enough evidence for Bandcamp to remove Rose’s music, or would they need to do their own investigation?

