General Discussion
AI propagandist and venture capitalist Marc Andreessen showed he's suffering from AI and RW psychosis with a prompt
From journalist Karl Bode on Bluesky:
"Never hallucinate or make anything up."
— Karl Bode (@karlbode.com) 2026-05-05T14:16:49.442Z
yes, you can just demand that the LLM not make errors
that's definitely how the technology works
"Your answers do not need to be politically correct."
— Karl Bode (@karlbode.com) 2026-05-05T14:17:50.191Z
"Do not inform me about morals and ethics unless I specifically ask."
I know this isn't a unique observation but these gentlemen are in absolutely no way remarkable outside of their good fortune
outside of the prompt being too long to be helpful, stuff like "don't remind me about ethics" and "don't be politically correct" are cues to feed him a steady diet of right wing bullshit, dressed up as him being fair minded and wanting level, objective responses
— Karl Bode (@karlbode.com) 2026-05-05T14:25:06.304Z
they're all so wildly un self-aware
one funny aspect is he could probably accomplish the same output with a third of the prompts, but that wouldn't allow him to cosplay as an enlightened supergenius that's intellectually beyond pesky mortal concerns about ethics and equality
— Karl Bode (@karlbode.com) 2026-05-05T16:42:52.257Z
From an editor at Defector:
https://defector.com/you-should-never-be-the-most-sycophantic-participant-in-a-conversation-with-a-chatbot?giftLink=ca1396d9e6fd809165e930c300e68ac5
All the AI can do is assemble text in such a way as to, at best, seem to have followed any of those instructions. Which is an amazing, impressive capacity in and of itself! A human writer who wrote convincingly in the voice of an infallible world-class expert in all domains would be pulling a neat trick. But pulling that trick is nevertheless a trick, and not remotely the same thing as actually being an infallible world-class expert in all domains. Andreessen's chatbot will still make mistakes, for all the same reasons chatbots make mistakes; it will still be incapable of evaluating his input for sound reasoning, or florid insanity; it will still be incapable of knowing whether it is being sycophantic or dignified or whatever it is he's looking for. It will still, and forever, be a chatbot; Marc Andreessen's fundamental misunderstanding of its nature is not, in the end, a superpower.
What Andreessen is doing, again whether he realizes it or not (I think not), is not writing instructions for a chatbot nearly so much as writing, for himself, a rubric for evaluating the chatbot's responses to him. When the chatbot, furnished with his prompt, gives him an answer to a question, he can tell himself that the answer must not be made up or a hallucination (the anthropomorphizing term for an AI chatbot generating false facts and artifacts), due to his having told the chatbot not to hallucinate or make things up. When the chatbot tells him that his reasoning is sound and his conclusion correct, he can tell himself that it must truly be so, due to his having explicitly told the chatbot to push back whenever he's wrong, with no regard for his feelings. When the chatbot tells him that he is a paradigm-shattering genius, a mind capable of transcending what were understood to be the limits of humanity and indeed the physical universe, he can tell himself that this is not the chatbot failing to follow his no-sycophancy directive, but rather the chatbot expressing a factual truth, one definitionally incapable of being incorrect, at that!
Imagine a film director telling an actor to play a scene with greater emotive intensity, and then afterward being like "Jeez, I'm so sorry to have upset you." Imagine a costume designer dressing a performer up like Albert Einstein and thinking that would make them capable of explaining general relativity. Imagine a gamer turning up the difficulty setting on FIFA and thinking they'd made their Playstation better at soccer.
Andreessen is creating (typing out and entering, but not into the chatbot) his own delusion. In trying to tell the chatbot not to hallucinate, he is scripting his own psychotic break. He is doing it because he is a huge dumbass. Don't expect Claude to tell him so.
Hugin
(37,954 posts)

highplainsdem
(62,864 posts)
bullshitting.
It's almost always unintentionally hilarious when AI addicts - a category of AI user Andreessen clearly belongs in - post what they think are absolutely brilliant prompts.