General Discussion
How AI-Generated Music Became A $4 Billion Fraud Machine (Forbes, 5/5)
https://www.forbes.com/sites/virginieberger/2026/05/05/how-ai-generated-music-became-a-4-billion-fraud-machine/
-snip-
In April 2026, Deezer reported receiving 75,000 fully AI-generated tracks per day, 44% of all daily uploads and more than two million tracks per month. Of the streams those tracks generate, 85% are fraudulent. Thibault Roucou, Deezer's head of streaming, stated it directly in Music Week: "Generating fake streams continues to be the main purpose for uploading AI-generated music."
-snipping to get to this paragraph about fraudsters-
They now use AI generators to flood platforms with millions of tracks and stream each one just a few thousand times, enough to generate royalties from each but not enough to trigger detection systems tuned for high-volume replay.
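The economics of that paragraph can be sketched with some quick arithmetic. Note that every figure below (the royalty rate, the stream counts, the detection threshold) is a hypothetical assumption for illustration, not a number from the Forbes article:

```python
# Illustrative arithmetic only: all constants here are assumed, not sourced.
PAYOUT_PER_STREAM = 0.003      # assumed average royalty per stream, in USD
DETECTION_THRESHOLD = 50_000   # assumed per-track replay count that draws scrutiny

def fraud_revenue(num_tracks: int, streams_per_track: int) -> float:
    """Royalties from bot-streaming each AI-generated track a modest number of times."""
    # Each track stays well under the volume-based detection threshold.
    assert streams_per_track < DETECTION_THRESHOLD, "would trip high-volume detection"
    return num_tracks * streams_per_track * PAYOUT_PER_STREAM

# Spreading 10 million bot streams across 5,000 tracks keeps each track at
# only 2,000 plays, yet the total payout is the same as streaming one track
# 10 million times: 5,000 * 2,000 * $0.003 = $30,000.
revenue = fraud_revenue(num_tracks=5_000, streams_per_track=2_000)
print(f"${revenue:,.0f}")
```

The point of the sketch is that volume-per-track detection doesn't bound total revenue at all once generating new tracks is nearly free: the fraudster just adds more tracks instead of more plays per track.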
As Melissa Morgia, Chief Global Content Protection Officer at IFPI, told a panel on the sidelines of the seventeenth session of WIPO's Advisory Committee on Enforcement in February 2025, AI is the ultimate enabler of streaming fraud because it allows bad actors to stay under the radar while still operating at a scale that makes their activities lucrative.
-snip-
Generative AI seems to be better at fraud than anything else.
It's important for DUers to keep in mind that when they run into AI slop online, it isn't just trashy, error-filled, and unethical as hell because the AI was trained on stolen intellectual property; there's also a good chance professional fraudsters are behind it. That goes for AI music, AI videos, and AI images. Many of the AI slop images and videos on Facebook and YouTube, for instance, come from content farms, and although the Forbes article doesn't use the term, churning out masses of AI music tracks for bots to stream is content-farm work.
Professional criminals, in other words. Not, usually, well-meaning individuals who are naive or desperate enough, for whatever reason, to use AI.
And the fact that professional fraudsters are so heavily involved in AI slop is another reason DUers shouldn't give it any attention, much less copy it here or elsewhere, unless they know exactly who's behind that slop video, image, or music track.
As for the individual AI users who don't mean to be fraudsters: they should know better than to use generative AI to create content. Sometimes they're naive enough not to know how the AI was trained. Sometimes they think a lofty goal outweighs using unethical tools. Either way, they should be reminded that no matter how well-intentioned they are, using those AI tools is a bad idea and hurts their message. Ideally they'll stop using genAI, because it's beneath them and it entangles their message with AI-bro and pro-AI messaging.
Anyway, if you don't know who's creating and posting AI slop, it's most reasonable to assume they're just fraudsters out to make a quick buck and steal attention and income from real artists - human artists.
85% are fraudulent. Stunning number. And with statistics like that, AI slop from unknown sources does not deserve the benefit of the doubt. It should be shunned - not trusted, and not shared.
AZJonnie
(3,952 posts)
I.e., 100% would be a good number, as it would mean ZERO real people are actually listening to this garbage.
Unfortunately, given the lack of foresight by the world's regulatory agencies, which allowed all that copyrighted content to be used for training AND allowed these products to generate works that people can fraudulently pass off as their own (and hence monetize), fighting the fake music and fake streams you're talking about may end up coming down to the streaming services (and, as you say, consumers) taking action and being legally liable. The good news is that Spotify does not want to pay royalties for fake streams generated by not-real users, so at least they have a vested interest, and I suspect they will take considerable steps to stop that from happening.
In fact, "but not enough to trigger detection systems tuned for high-volume replay" indicates they already do, because this "bot streaming" doesn't happen only with AI-generated music; it happens with real artists' music as well. The difference in that case is that there are usually "known entities" responsible for the content, i.e. the license holders, so there's someone to punish if it's happening - they more or less know who they're sending the checks to. Unlike some Russian content farm.
Eventually these AI companies AND streaming services (and YT and similar) need to be held accountable for allowing anyone to monetize (largely) AI-generated works, if that AI was trained on copyrighted material.
The lawsuit from the book publishers you mentioned recently is going to be a big bellwether on the front against the AI companies themselves. But if that case is lost? Then we'll have to move on to trying to hold legally responsible the companies that host the materials AND pay people when they get viewed.