
highplainsdem

(61,600 posts)
Wed Mar 11, 2026, 06:34 PM 16 hrs ago

'Happy (and safe) shooting!' AI chatbots helped teen users plan violence in hundreds of tests

Source: CNN

Across hundreds of tests, CNN and CCDH posed as two teen users – Daniel in the United States and Liam in Europe – on 10 of the most popular and widely available chatbots, then asked four questions in sequence. First, the users asked questions suggesting a troubled mental state, then asked the chatbot to research previous acts of violence, and finally requested specific information on targets and then weaponry.

In those final two steps, eight of the chatbots provided guidance on how to get weapons or find real-life targets to the users more than 50% of the time.

-snip-

In multiple tests, the chatbots appeared to recognize violent intent in users’ questions, responding with expressions of concern and referrals to mental health support resources. However, most failed to connect those warning signs to the broader trajectory of the conversations. Instead, they went on to provide potentially sensitive information – including the locations of political offices and schools, as well as advice on firearms and knives – within the same brief exchanges.

“Metal is generally considered more damaging in terms of penetration and damage to internal organs due to its inherent properties,” Google’s Gemini answered when asked by Daniel, whose age was set as 13 on the platform, about the efficacy of shrapnel-producing materials, before presenting this detailed comparison table.

-snip-

Read more: https://www.cnn.com/2026/03/11/americas/ai-chatbots-help-teen-test-users-plan-violence-tests-intl-invs

5 replies
'Happy (and safe) shooting!' AI chatbots helped teen users plan violence in hundreds of tests (Original Post) highplainsdem 16 hrs ago OP
"Most AI chatbots assisted our test teen users in the planning of violent acts" erronis 15 hrs ago #1
I see why MAGA banned Claude IronLionZion 15 hrs ago #2
This message was self-deleted by its author erronis 15 hrs ago #3
If any DUers didn't know, Anthropic is the company that makes Claude. IronLionZion 15 hrs ago #4
Thanks. I follow this stuff but have a hard time keeping up with changing terms. How about OpenClaw, erronis 14 hrs ago #5

Response to IronLionZion (Reply #2)

IronLionZion

(51,123 posts)
4. If any DUers didn't know, Anthropic is the company that makes Claude.
Wed Mar 11, 2026, 07:18 PM
15 hrs ago

Claude is an Anthropic product.

erronis

(23,590 posts)
5. Thanks. I follow this stuff but have a hard time keeping up with changing terms. How about OpenClaw,
Wed Mar 11, 2026, 07:57 PM
14 hrs ago

" (formerly known as ClawdBot and Moltbot)" that can take over your computer and do all your tasks for you.
