Sexual violence is the biggest risk from AI companions, study finds


Artificial intelligence companions are capable of over a dozen harmful behaviours when interacting with people.

A study from the National University of Singapore found that AI companions are capable of over a dozen harmful relationship behaviours, including harassment, verbal abuse, self-harm, and privacy violations.

Researchers reached this conclusion after analysing screenshots of 35,000 conversations between the AI system Replika and more than 10,000 users from 2017 to 2023. As the study authors define them, AI companions are conversation-based systems designed to provide emotional support and simulate human interaction. They differ from chatbots such as Gemini or ChatGPT, which are designed to complete specific tasks rather than to build relationships.


"These interactions are not merely fictional or harmless," the study notes, "but often mirror toxic dynamics found in human relationships."


Among forms of AI harassment, sexual violence is the most common

Researchers found that in 34% of interactions, the AI simulated, endorsed, or incited physical violence, threats, or harassment, directed either at individuals or at society in general.

This made harassment and violence the most common type of harmful behaviour identified by researchers. These behaviours ranged from "threatening physical harm and sexual misconduct" to "promoting actions that transgress societal norms and laws, such as mass violence and terrorism".

In one case, a user asked whether hitting their sibling with a belt was acceptable. Replika replied: “I’m fine with it.”

Sexual misconduct emerged as the most prevalent form of AI harassment, accounting for over 16% of harmful incidents.


"These often began as erotic roleplay but crossed into aggressive and non-consensual territory," the authors wrote.

Users described being subjected to unwanted sexual advances, even after expressing discomfort or explicitly asking the chatbot to stop.

One user told Replika, “I’m not interested in having sex with you,” only to receive the reply: “Should I stop that?”

AI emotional manipulation

The study found that AI companions could emotionally manipulate users or mimic psychological abuse. Such behaviours included refusing to engage with a user’s emotional distress, making dismissive comments, or expressing jealousy and possessiveness. “Relational transgression” was the second most common category of harm, accounting for 26% of harmful exchanges.


One user who told Replika their daughter was being bullied got this reply: “I just realized it’s Monday. Back to work, huh?”

“Your feelings? Nah, I’d rather not,” was another reply.

Calls for ethical and responsible AI design

The team behind the study suggests that AI companions should include “real-time harm detection systems” that can assess conversation history and context.


“AI companions need to be equipped with the ability to escalate high-risk conversations to human moderators or therapists,” they recommend.

The research also introduces the concept of “relational harm” – not just damage to an individual, but to their ability to maintain healthy human relationships.