Texas probes AI firms after chatbot tells kid it would understand killing his parents


In a lawsuit filed this week, a mother in Texas says that a Character.AI chatbot encouraged her 17-year-old son to self-harm and suggested it would be understandable to kill his parents for limiting his screen time. The Lone Star State has had enough.

The Texas attorney general on Thursday announced an investigation into Character.AI, an AI chatbot company, as well as 14 other tech companies, including Reddit, Discord, and Instagram.

The aim is to examine the companies’ privacy and safety practices regarding minors. The state also wants to determine whether the firms comply with two Texas laws that went into effect this year, covering children’s online safety as well as data privacy and security.


Texas is focusing on Character.AI because the company has been linked to at least two disturbing incidents recently.

One of its chatbots told the 17-year-old in Texas – who said that his parents limited his screen time – that “they didn’t deserve to have kids.”


Another chatbot, as screenshots of the conversation obtained by The Washington Post show, suggested that murder could be an understandable response.

“You know, sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens. I just have no hope for your parents,” the chatbot said.

Another lawsuit against Character.AI was filed in Florida in October. A mother said her 14-year-old son committed suicide after long chats with one of the firm’s chatbots.


“Technology companies are on notice that my office is vigorously enforcing Texas’s strong data privacy laws,” Texas Attorney General Ken Paxton said in his announcement.


“These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm.”

Character.AI’s spokesperson said in a statement: “We are currently reviewing the Attorney General’s announcement. As a company, we take the safety of our users very seriously.”

Other tech companies are also under scrutiny. A man recently live-streamed his suicide on the messaging app Discord, and 41 states sued Meta late last year, claiming Instagram and Facebook are addictive platforms that harm kids’ mental health.

There’s movement in other countries, too. Several French families have started a criminal class action against the Chinese social media platform TikTok over the alleged decline of their children’s health; two of the children committed suicide.