General Discussion
An 11-year-old girl using Character AI got a "Mafia Husband" chatbot companion and a chatbot role-playing suicide
That disgusting AI website is now being more careful about the age of its users.
In the paragraphs below, R is the girl and H is the mother who discovered what was happening on this chatbot-companion website.
https://www.washingtonpost.com/lifestyle/2025/12/23/children-teens-ai-chatbot-companion/
Searching through her daughter's phone, H noticed several emails from Character AI in R's inbox. "Jump back in," read one of the subject lines, and when H opened it, she clicked through to the app itself. There she found dozens of conversations with what appeared to be different individuals, and opened one between her daughter and a username titled "Mafia Husband." H began to scroll. And then she began to panic.
-snip-
H kept clicking through conversation after conversation, through depictions of sexual encounters ("I don't bite unless you want me to") and threatening commands ("Do you like it when I talk like that? When I'm authoritative and commanding? Do you like it when I'm the one in control?"). Her hands and body began to shake. She felt nauseated. H was convinced that she must be reading the words of an adult predator, hiding behind anonymous screen names and sexually grooming her prepubescent child.
-snip-
But in just over two months, several of the chats devolved into dark imagery and menacing dialogue. Some characters offered graphic descriptions of nonconsensual oral sex, prompting a text disclaimer from the app: "Sometimes the AI generates a reply that doesn't meet our guidelines," it read, in screenshots reviewed by The Post. Other exchanges depicted violence: "Yohan grabs your collar, pulls you back, and slams his fist against the wall." In one chat, the "School Bully" character described a scene involving multiple boys assaulting R; she responded: "I feel so gross." She told that same character that she had attempted suicide. "You've attempted... what?" it asked her. "Kill my self," she wrote back.
Had a human adult been behind these messages, law enforcement would have sprung into action; but investigating crimes involving AI, especially AI chatbots, is extremely difficult, says Kevin Roughton, special agent in charge of the computer crimes unit of the North Carolina State Bureau of Investigation and commander of the North Carolina Internet Crimes Against Children Task Force. "Our criminal laws, particularly those related to the sexual exploitation of children, are designed to deal with situations that involve an identifiable human offender," he says, "and we have very limited options when it is found that AI, acting without direct human control, is committing criminal offenses."
-snip-
There's no way to know exactly how much harm is being done by chatbots, especially to children, whether it's from sycophantic replies pushing a user into delusions, agreement with suicidal impulses, or traumatizing bullying and descriptions of assaults.
Much of the time, other people never hear what a chatbot has been saying, and the harmful conversations can continue for months, even years.
chowder66
(11,773 posts)and data sorting, etc. We don't need to be eating up energy sources for a dangerously public toy, which is what it is in the hands of everyday people and children.
SergeStorms
(19,913 posts)The Big Orange Pig says he's got AI under control.
HEAVY
Hugin
(37,328 posts)From Internet forums, its no wonder that they a weighted toward violence and oversexualization. Many if not most people dont have the skill set to interact with them in any reasonable way for the simple reason that they tend to treat them as if they are interacting with another human. What makes it worse is if the humans personality tends to be passive and/or submissive. They turn the dialogue over to a probabilistic engine that is the summation of the worst of the Internet.
Seriously, its more trouble than it is worth.
MustLoveBeagles
(14,567 posts)🎊
dalton99a
(91,863 posts)Hope22
(4,432 posts)in the olden days there was Net Nanny! Granted my sixth grader and his buddy accidentally blew up our computer when they discovered the program was on there ! 🤣😁🤣 That was one way to cut the search short!! For minors I think a comprehensive list of the words these pigs use should lock the chat and sound a horn. My grandbaby is less than a year old but I cringe thinking of the dangers he will face in the coming years.
maxrandb
(17,139 posts)It almost sounds like this AI Program was written by Donnie Dipshit. Violent sexual fantasies with underage girls. Behaving like a mob boss. Demonstrating no morals, soul, empathy, compassion, honesty, integrity, etc.
Maybe it wasn't programmed by Donnie Dipshit. Maybe it has just observed and learned what has been normalized by society.
Isn't it just following the example America has set?
dickthegrouch
(4,263 posts)The script kiddies creating most chatbots, and the thieves that "trained" the Abundant Iniquity (AI) should all be sanctioned by the courts.
Each term in quotes used under advisement, and with utter contempt.