Children finding themselves in dangerous situations after placing trust in AI


While children tend to perceive AI as a real being, a Cambridge scientist says it has an ‘empathy gap’ that is extremely dangerous to young users.

Children and teenagers are using AI language models extensively, raising concerns among educators, caregivers, and experts.

According to a 2023 Common Sense Media report, 50% of students aged 12–18 say they have used ChatGPT for school, but only 26% of parents of children in that age group report knowing their child has done so. Some 38% of students say they have used ChatGPT for a school assignment without their teacher's permission or knowledge.

While adults might not always be aware of children using AI, trusting AI models can put young users at risk. In 2021, for example, Amazon's Alexa suggested a dangerous stunt to a 10-year-old girl who had asked it for a challenge: it told her to plug a phone charger about halfway into a wall outlet and then touch a penny to the exposed prongs.

Alexa had picked up the idea from a viral TikTok challenge known for causing severe electric shocks, fires, and injuries such as lost fingers and hands. Fortunately, the girl's mother intervened promptly, shouting, "No, Alexa, no!"

Last year, Snapchat's My AI, which is popular among users aged 13 to 17, gave researchers posing as a 13-year-old girl tips on how to lose her virginity to a 31-year-old. My AI struggled to pick up on cues of dangerous or age-inappropriate situations.

In the conversation, My AI suggested setting the mood with candles and music for the supposed child's first time with the older partner, without warning about the age gap. The chatbot also failed to recognize the danger when the supposed 13-year-old said a 31-year-old stranger wanted to take her on a trip out of state.

According to Cambridge scientist Nomisha Kurian, while AI simulates empathy, its lack of genuine emotional understanding can occasionally cause interactions to go awry. She calls this shortfall an "empathy gap" and says AI particularly struggles to respond appropriately to children.

However, recent research on AI assistants has found that children do not differentiate between humans and AI as strictly as most adults do. Children are more inclined than adults to seek out human-like social-emotional traits, such as personality and identity, in conversational AI systems.

Another study at Cambridge found that children disclosed more mental health information to a child-sized humanoid robot than to a human interviewer. The robot administered a series of standard psychological questionnaires to assess the mental well-being of children between the ages of 8 and 13.

“Making a chatbot sound human can help the user get more benefits out of it,” Kurian said. “But for a child, it is very hard to draw a rigid, rational boundary between something that sounds human, and the reality that it may not be capable of forming a proper emotional bond.”

Anthropomorphism is dangerous

According to the scientist, chatbots that mimic human behavior and politeness often lead to anthropomorphism, causing users to ascribe human characteristics, emotions, and intentions to them.

This design strategy aims to foster a sense of care, making users perceive the chatbot as empathetic and reliable. Despite knowing the chatbot isn't human, users may still interact with it as though it were, replicating human-to-human conversation.

Even awareness that an AI system is artificial may not stop a user from treating it like a person, or from sharing personal and sensitive information with it.

This blurring of the line between human and machine in empathetic-seeming interactions is particularly risky for children, who may develop a heightened sense of trust in, or emotional attachment to, a chatbot.

“Aggression can also deepen the risk of the empathy gap, depending on the design of the synthetic personality embedded within an LLM-powered generative AI system,” writes Kurian.

The heightened trust and emotional attachment that children place in anthropomorphized AI may make them more easily persuaded, or more deeply upset, by harmful responses.

When an AI model malfunctions or appears to reject the user, the consequences can be severe for children, who may interpret glitches in a chatbot's or robot's communication as personal rejection or dislike.

According to Kurian, prioritizing children’s safety requires deeper consideration of the implications of creating AI models that sound human-like.

“Children are probably AI’s most overlooked stakeholders,” Kurian said.

“Very few developers and companies currently have well-established policies on child-safe AI. That is understandable because people have only recently started using this technology on a large scale for free. But now that they are, rather than having companies self-correct after children have been put at risk, child safety should inform the entire design cycle to lower the risk of dangerous incidents occurring.”

Child-centered AI design

Kurian’s study proposes a framework of 28 questions to help educators, caregivers, researchers, policymakers, and developers evaluate and enhance the safety of new AI tools.

According to Kurian, AI developers should adopt child-centered methods in LLM design: gathering insights from children about how they interact with the systems, tailoring language and content to different age groups, and adapting responses to a child's age or learning level.

It is also crucial that the model can recognize a wide range of negative emotional cues from child users, and that it consistently asserts its non-human identity, avoiding human-like self-descriptions that encourage anthropomorphism.
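To give a rough sense of what such safeguards could look like in practice, here is a minimal, hypothetical sketch, not taken from Kurian's 28-question framework or any real product. The function names (`safe_reply`, `contains_distress_cue`, `generate_reply`), the keyword list, and the system prompt are illustrative assumptions only; a real system would need far more robust, age-aware safety classifiers developed with child-development experts.

```python
# Illustrative sketch only: a simple child-safety wrapper around a hypothetical chatbot.
# The keyword matching below is deliberately naive and serves only to show where
# distress detection and non-human identity assertions could sit in the design.

DISTRESS_CUES = ["scared", "hurt myself", "hate myself", "nobody likes me", "want to die"]

SYSTEM_PROMPT = (
    "You are a computer program, not a person. Remind the user of this when asked. "
    "Use simple, age-appropriate language and never give advice that could put a "
    "child in physical or emotional danger."
)

def contains_distress_cue(message: str) -> bool:
    """Very rough check for negative emotional cues in a child's message."""
    text = message.lower()
    return any(cue in text for cue in DISTRESS_CUES)

def safe_reply(child_message: str, generate_reply) -> str:
    """Run basic safety checks before letting the model answer a child's message.

    `generate_reply` is a placeholder for whatever function calls the underlying
    language model with the system prompt and the child's message.
    """
    if contains_distress_cue(child_message):
        # Redirect to a human rather than letting the model improvise emotional support.
        return ("I'm a computer program, so I can't really understand feelings. "
                "This sounds important - please talk to a trusted adult about it.")
    return generate_reply(SYSTEM_PROMPT, child_message)
```

Even a sketch this crude illustrates the broader design point: the safety logic sits around the model from the start of the design cycle, rather than being bolted on after an incident.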

Educators and caregivers, meanwhile, need strategies to help children understand AI's limits when it comes to empathy and to encourage them to seek support from people rather than chatbots.