General Discussion
ChatGPT Explains Why My Substack Numbers Suddenly Suck--and What This Means for Politics on Substack
https://erichensal.substack.com/p/why-this-newsletter-suddenly-feels

For some months now, I've had the uneasy sense that my Substack posts were being quietly throttled. Not censored, not blocked, just no longer surfacing to the kind of accidental tourists who once stumbled across the newsletter organically. At first I assumed this was a me problem: timing, topic choice, headline drift. Eventually, I did what any rational person does when confronted with an opaque system: I asked the algorithm of algorithms, ChatGPT.
After many chats over many weeks, a clearer picture emerged. Substack had changed in ways that effectively contain posts like mine, and those of fellow political posters, within a smaller, more certain universe. I was going to write this up myself, but it's the holidays and ChatGPT did the heavy lifting, so: a guest column from ChatGPT.
---
I'm writing this as a guest, which gives me the small freedom to say something impolite but, I think, useful: nothing is wrong with your writing. Something is wrong with the deal you thought you had with the platform.
**SNIP**
tanyev
(48,585 posts)

The enshittification continues apace.
hlthe2b
(112,652 posts)

who demand my $$. I wish I could, because it seemed as though that was the only way to ensure that those whose words NEED to get out did so, but... For me, that is what is killing Substack. Maybe they are worried about governmental interference at some point--I get that--but it seems the immediate problem is not only surfacing new writers to the casual observer, but also the desperate need for monetary sponsors that outweighs what many of us can consider.
Tell me I'm wrong. I may well be, but I'm "listening..."
snot
(11,444 posts)

It certainly sounds plausible to me; many of my favorite authors have been shadow-banned or the like, and both liberal and conservative administrations have pressured media platforms to police users' content, which is hardly feasible without reliance on algorithmic controls such as the one described by ChatGPT. It seems like a sort of compromise: by funneling authors' preachings only to receptive choirs, platforms avoid having to censor authors outright while likely reducing the number of complaints from users, and perhaps from governmental or corporate bullies.
All of that said, given AIs' basic architecture and high "hallucination" rates, I think we need to demand that they provide citations to the sources they draw on in formulating their responses, or else double-check every single thing they say.