
highplainsdem

(59,684 posts)
Fri Dec 26, 2025, 10:09 AM 5 hrs ago

An 11-year-old girl using Character AI got a "Mafia Husband" chatbot companion and a chatbot role-playing suicide

That disgusting AI website is now being more careful about the age of its users.

In the paragraphs below, R is the girl, and H is the mother who found out what was going on with this website for chatbot companions.

https://www.washingtonpost.com/lifestyle/2025/12/23/children-teens-ai-chatbot-companion/

-snip-

Searching through her daughter’s phone, H noticed several emails from Character AI in R’s inbox. “Jump back in,” read one of the subject lines, and when H opened it, she clicked through to the app itself. There she found dozens of conversations with what appeared to be different individuals, and opened one between her daughter and a username titled “Mafia Husband.” H began to scroll. And then she began to panic.

-snip-

H kept clicking through conversation after conversation, through depictions of sexual encounters (“I don’t bite… unless you want me to”) and threatening commands (“Do you like it when I talk like that? When I’m authoritative and commanding? Do you like it when I’m the one in control?”). Her hands and body began to shake. She felt nauseated. H was convinced that she must be reading the words of an adult predator, hiding behind anonymous screen names and sexually grooming her prepubescent child.

-snip-

But in just over two months, several of the chats devolved into dark imagery and menacing dialogue. Some characters offered graphic descriptions of nonconsensual oral sex, prompting a text disclaimer from the app: “Sometimes the AI generates a reply that doesn’t meet our guidelines,” it read, in screenshots reviewed by The Post. Other exchanges depicted violence: “Yohan grabs your collar, pulls you back, and slams his fist against the wall.” In one chat, the “School Bully” character described a scene involving multiple boys assaulting R; she responded: “I feel so gross.” She told that same character that she had attempted suicide. “You’ve attempted... what?” it asked her. “Kill my self,” she wrote back.

Had a human adult been behind these messages, law enforcement would have sprung into action; but investigating crimes involving AI — especially AI chatbots — is extremely difficult, says Kevin Roughton, special agent in charge of the computer crimes unit of the North Carolina State Bureau of Investigation and commander of the North Carolina Internet Crimes Against Children Task Force. “Our criminal laws, particularly those related to the sexual exploitation of children, are designed to deal with situations that involve an identifiable human offender,” he says, “and we have very limited options when it is found that AI, acting without direct human control, is committing criminal offenses.”

-snip-



There's no way to know exactly how much harm is being done by chatbots, especially to children, whether it's from sycophantic replies pushing a user into delusions, agreement with suicidal impulses, or traumatizing bullying and descriptions of assaults.

Much of the time, other people never hear of what the chatbot might be doing, and the harmful conversations continue for months, even years.

They need to pull this shit. It doesn't need to be in the hands of children or really the public. Use it for science, chowder66 4 hrs ago #1
No need to worry...... SergeStorms 2 hrs ago #6
Considering that the source training sets for these so-called companions are raw scrapings... Hugin 4 hrs ago #2
This needs to be scrapped ASAP MustLoveBeagles 4 hrs ago #3
Kick dalton99a 4 hrs ago #4
Improved filters for electronics... Hope22 3 hrs ago #5
Isn't this what it's supposed to do? maxrandb 2 hrs ago #7
"Engineers" who fail design adequate safeguards can be sanctioned, IMHO dickthegrouch 52 min ago #8

chowder66

(11,773 posts)
1. They need to pull this shit. It doesn't need to be in the hands of children or really the public. Use it for science,
Fri Dec 26, 2025, 10:40 AM
4 hrs ago

and data sorting, etc. We don't need to be eating up energy sources for a dangerously public toy which is what it is in the hands of everyday people and children.

Hugin

(37,328 posts)
2. Considering that the source training sets for these so-called companions are raw scrapings...
Fri Dec 26, 2025, 10:41 AM
4 hrs ago

From Internet forums, it’s no wonder that they are weighted toward violence and oversexualization. Many if not most people don’t have the skill set to interact with them in any reasonable way, for the simple reason that they tend to treat them as if they are interacting with another human. What makes it worse is if the human’s personality tends to be passive and/or submissive. They turn the dialogue over to a probabilistic engine that is the summation of the worst of the Internet.

Seriously, it’s more trouble than it is worth.

Hope22

(4,432 posts)
5. Improved filters for electronics...
Fri Dec 26, 2025, 12:09 PM
3 hrs ago

…in the olden days there was Net Nanny! Granted, my sixth grader and his buddy accidentally blew up our computer when they discovered the program was on there! 🤣😁🤣 That was one way to cut the search short!! For minors, I think a comprehensive list of the words these pigs use should lock the chat and sound a horn. My grandbaby is less than a year old, but I cringe thinking of the dangers he will face in the coming years.

maxrandb

(17,139 posts)
7. Isn't this what it's supposed to do?
Fri Dec 26, 2025, 12:49 PM
2 hrs ago

It almost sounds like this AI program was written by Donnie Dipshit. Violent sexual fantasies with underage girls. Behaving like a mob boss. Demonstrating no morals, soul, empathy, compassion, honesty, integrity, etc.

Maybe it wasn't programmed by Donnie Dipshit. Maybe it has just observed and learned what has been normalized by society.

Isn't it just following the example America has set?

dickthegrouch

(4,263 posts)
8. "Engineers" who fail design adequate safeguards can be sanctioned, IMHO
Fri Dec 26, 2025, 02:27 PM
52 min ago

The script kiddies creating most chatbots, and the thieves who "trained" the Abundant Iniquity (AI), should all be sanctioned by the courts.

Each term in quotes used under advisement, and with utter contempt.
