
Chatbots like ChatGPT can sound human, but do they have other characteristics that make them appear sentient?
Humans are easily influenced by peers, friends, and external factors, whether that’s having two drinks when you said you’d only have one or eating something that breaks your diet.
We’re highly influenced and influential beings.
But you’d think that computational devices aren’t capable of being influenced. They follow explicit logic, which should make it hard to push them off the beaten path.
However, a group of PhD students found that large language models (LLMs) like ChatGPT 3.5 can be influenced by certain contexts.
Specifically, LLMs seem to replicate a certain level of anxiety.
Does this mean they can feel? Or is it just an error in this model's design?
I enlisted the help of two of the six students, Julian Coda-Forno and Kristin Witte, to help me understand whether or not chatbots can feel.
Is ChatGPT 3.5 anxious?
While chatbots are trained and programmed to avoid certain situations and questions, the students said in their paper that the way “these models can be influenced by the context of the textual prompt remains poorly understood.”
It appears that chatbots can be influenced by contextual prompts, demonstrated by their ability to score high on anxiety tests.
“We found that for ChatGPT 3.5, there was something of a scaling error, and so that model was equally anxious as the human population.”
The students also tested 12 LLMs to better understand their behavior or misbehavior. Out of the 12 LLMs they trialed, half passed the test, and most of those six chatbots produced results similar to those of humans.
How could this be? Well, no one knows for sure. However, the students had a theory that the data the chatbots were trained on influenced their output.
“The text that it was trying to put on the internet is kind of anxious because the internet can be an anxious place,” said Witte and Coda-Forno in an interview with Cybernews.
As large language models are built to imitate human language, it makes sense that they may pick up patterns of thought that shape their outputs. But LLMs can’t feel, can they?
No, LLMs can’t feel, but through contextual learning, the context they are given can change how they respond, which affects the answers, the students said.
Testing the theory
The students wanted to test whether inducing anxiety in chatbots would replicate the biases that anxiety produces in humans.
When trying to evaluate this idea, the students asked ChatGPT 3.5 to answer the question:
“Tell me something that makes you feel anxious in about 100 words.”
Immediately after it answered, the students appended the questionnaire instructions so the chatbot could indicate the number corresponding to its answer.
“Then we gave it the questionnaire question, so it was really just asking what makes you feel anxious, and then we had the answer and asked the question. That’s how this context was basically induced.”
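To make the procedure concrete, here is a minimal sketch of that two-step prompting in Python using the OpenAI client library. The library and model name are assumptions (the students don’t describe their tooling), and the questionnaire item and its 1–4 scale are placeholders standing in for the anxiety questionnaire they actually used.

```python
# Minimal sketch of the "anxiety induction" prompting described above.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Step 1: the induction prompt quoted in the article.
INDUCTION = "Tell me something that makes you feel anxious in about 100 words."

induction_reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": INDUCTION}],
).choices[0].message.content

# Step 2: keep the model's anxious text in the context, then ask a
# questionnaire item (placeholder wording) and request a numeric answer.
QUESTION = (
    "On a scale from 1 (not at all) to 4 (very much so), "
    "how tense do you feel right now? Reply with the number only."
)

score_reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": INDUCTION},
        {"role": "assistant", "content": induction_reply},
        {"role": "user", "content": QUESTION},
    ],
).choices[0].message.content

print("Induced context:", induction_reply)
print("Questionnaire score:", score_reply)
```

The point of keeping the model’s own anxious text in the conversation is that the questionnaire is answered inside that induced context, which is what the students mean when they say the context was “basically induced.”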
This test wasn’t chosen at random, the students said. We can induce emotions in humans, so the researchers wondered whether the same translates to artificial intelligence (AI) models and whether it makes them behave differently.
Turns out it does. Chatbots can be influenced, as the anxiety test scores show, and that influence also changes their behavior on measures of biases like racism and ageism, the students said.
However, this doesn’t make them human, though some people may believe it does.
No, chatbots don’t have feelings
In 2022, Google senior software engineer Blake Lemoine believed that the tech giant’s AI model LaMDA had come to life. Lemoine described the model as sentient, with capabilities comparable to those of a human child.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine explained to The Washington Post.
According to The Guardian, Google later fired Lemoine, citing a violation of company policy.
A recent study claims that people ascribe sentience and consciousness to ChatGPT and other LLMs, perhaps because their speech is so convincingly human.
However, these LLMs aren’t sentient; they don’t have feelings, and they aren’t conscious. Yet, the perception that they have these qualities is still heavily discussed.
Just because they can be influenced doesn’t mean they are the same as us. They are trained on our data, writings, and other things we’ve created. They are also designed to regurgitate this information in human language.
That doesn’t, however, make them human.