
highplainsdem

(61,539 posts)
Sat Mar 7, 2026, 05:56 PM

You don't have to use AI (Becca Rothfeld, staffer at the New Yorker, former nonfiction book critic at WaPo)

https://afeteworsethandeath.substack.com/p/you-dont-have-to-use-ai

-snip-

For the past few years, I’ve tried not to think about AI too much, and shockingly, I’ve succeeded. It’s been more than half a decade since I’ve taught at a university (although I hope to do so again someday), so I haven’t had to scheme to keep my students from outsourcing the exquisite work of thinking to a glorified search engine—and AI is worse than useless for a critic, so I haven’t been the least bit tempted to use it in my line of work. I’m not really one for optimizing away the various inconveniences in my daily life, preferring to luxuriate in inefficiencies, so I haven’t turned to it for scheduling or the like.

Of course, it’s long seemed obvious to me (and, I presume, basically everyone I care about or respect) that the use of AI in most humanistic endeavors is beyond the pale. Even if it were good at writing prose or doing philosophy—and thus far it isn’t—to use AI to write or philosophize would be to render those activities futile. In some of the sciences, some of the time, the point is the outcome—the vaccine, the medicine, the finding, the technology. In the humanities, the point is the process. The point of writing is to make something beautiful or interesting; the point of reading a book or a philosophy paper is, at least in large part, to make contact with another human mind that has strained to make something beautiful or interesting, whether or not the human mind has succeeded or failed. There would be no reason to read a novel that an AI had written, even if AI got to a point where it could write “well,” as perhaps it will if it keeps cannibalizing the greatest fruits of human endeavor with impunity. Writing a book with AI would be like driving a car to a marathon finish line, then claiming the title. Even considering using it to write evinces a complete misunderstanding of the enterprise. I can’t stress it enough: if these claims are not obvious to you, you are not “in my world and not of my flesh,” to paraphrase a remark of Stanley Cavell’s in “Aesthetic Problems of Modern Philosophy.” I hope you are ashamed of yourself and your dwindling humanity, and I think it should be legally required for people to say when/if they’ve used AI in the process of writing something so that I can avoid it at all costs.

-snipping to get to her comments on what was said by an Anthropic executive being interviewed by Ezra Klein-

At some point in the interview, Klein asks him about the pitfalls of engaging with relentlessly affirmative and ingratiating chatbots—Klein, by the way, does a very good job of holding Clark to account throughout—and Clark replies that AI needn’t be used for self-affirmation. In fact, he explains, he uses it to help him occupy other people’s perspectives: “I’ve used these A.I. systems to basically say: Hey, I’m in conflict with someone at Anthropic. I’m really annoyed. Could you ask me some questions about that person and how they’re feeling to try to help me better think about the world from their perspective?”

He didn’t even pause before confessing that he’d used AI in this way and didn’t appear to think it was a cause for shame or contrition. aaaaggghhhHHHHHHHHHH!!!!!!!!!!!!!!?? I don’t know what’s worse, using AI in this fashion, or thinking that using AI in this fashion is such a normal, acceptable, and forgivable thing to do that you should admit to having done so on a podcast in a national newspaper. Imagine having a fight with a colleague, friend, lover, etc., and learning that the colleague, friend, or lover with whom you have a human relationship, whom you trust to approach you in human terms, is outsourcing the ethical labor of relating to you charitably to a machine???? I mean, I would simply leave my husband on the spot if he ever did this. No questions, no divorce filings. I would open the door, grab my dog, put on my running shoes, and sprint as far as possible in the other direction. What a despicable violation of the basic ethical contract linking one person to another!

-snip-


Found this after a friend on another platform, an expert on IP theft and copyright infringement, recommended it.

Wikipedia article on Rothfeld: https://en.wikipedia.org/wiki/Becca_Rothfeld

Her essay in the New Yorker on the closing of the Washington Post's books section:
https://www.newyorker.com/books/page-turner/the-death-of-book-world
Reply #1 (SheltieLover, Saturday): Excellent article!