
While popular culture has long been awash with human-robot relationships, the question of whether we should be friends with a chatbot is still up for debate.
We’ve been fascinated with technological companions over the years, from R2-D2 and C-3PO in Star Wars to Samantha, the AI-powered operating system in Her. While robotic companions have largely failed to take off, the kind of companionship between humans and AI depicted in Her is increasingly possible.
Indeed, in 2023, Snapchat released ‘My AI’ to provide virtual companionship, with the AI capable of learning our preferences from the conversations we have with it. This has coincided with a huge spike in searches for things like "AI girlfriend," with many also turning to chatbots for things like therapy and other forms of advice and support.
A worthwhile ally
Research from Wharton explores whether chatbots make effective companions. After all, with society in the midst of a so-called loneliness epidemic, surely any companionship should be welcome?
This was the broad finding of the research, which shows that when chatbots are designed to respond with empathy, people report fewer feelings of loneliness after interacting with them, although this boost to well-being was only short-term.
Researchers conducted several experiments to understand how interactions with chatbots affected us. They modified the design of the chatbot across the various experiments to not only be more or less useful but also more or less empathetic.
The results show that when the chatbot was useful but not empathetic, there was no real impact on the reported loneliness levels of participants. This wasn't the case when the chatbot was designed to be empathetic. Indeed, a simple 15-minute conversation with an empathetic chatbot produced the same kind of boost as a conversation of similar length with another person.
These findings were further reinforced when participants were asked to have daily conversations with the empathetic chatbot for a week before then reporting back on their mental health and loneliness levels. Participants revealed a high level of enjoyment from their interactions, due to the responsiveness and availability of the chatbot and also the quality of responses they received.
Participants also talked about the value they gained from being able to talk to the chatbot on a regular, consistent basis. This isn't always possible with human companions, who are likely to have busy schedules and priorities of their own, and so may not be as accessible as we might like.
A prescription for loneliness?
The results were sufficiently strong to prompt the researchers to advocate prescribing chatbots as a reliable tool for combating loneliness.
“People do experience loneliness to an extent that was much larger than in previous times, where people lived in smaller, interconnected communities. Social media interactions are not quite the same as having a cup of coffee with a neighbor,” the researchers explain. “Perhaps these tools can be a useful element of an answer to that question.”
There are grounds to suggest they may have a point. For instance, previous research from Duke found that robots were fairly effective in combating loneliness for many of the reasons found by the Wharton researchers, not least in terms of the ready availability of such technology.
“Right now, all the evidence points to having a real friend as the best solution,” the researchers explain. “But until society prioritizes social connectedness and eldercare, robots are a solution for the millions of isolated people who have no other solutions.”
Long-term impact
While the results are promising, it's evident that we need more research to understand the long-term implications of AI companionship. For instance, some are concerned that chatbots can be a bit too supportive, and examples have emerged of bots supporting people with various ideas that were ultimately harmful, such as the man who was encouraged to kill himself after talking to a chatbot.
Chatbots may also be little more than automated fortune tellers. Research in Nature found that when we think AI cares for us, we adapt our language such that it's likely to elicit a caring response in return, creating a false feedback loop that can nonetheless be extremely addictive.
The benefits of AI companionship should be weighed against the largely unregulated sector in which chatbots operate. Despite many providing quasi-therapeutic services, they're not regulated like, say, a healthcare device. What's more, the business model of these companies often revolves around getting people hooked and then selling either subscriptions or advertising (or both).
This creates the risk that people's supply of support could be cut off or manipulated for commercial ends, or that their privacy could be compromised to sell advertising. While AI companions can provide comfort, they also raise concerns about manipulation, privacy, and addiction. As such, it's perhaps the case that we "can" be friends with a chatbot, but whether we "should" be remains to be determined.