Chatbots are increasingly common in customer service environments. Indeed, some estimates suggest that they will provide 95% of online customer service by 2025.
However, research from the Queensland University of Technology suggests that such a strategy is not without risks.
The researchers found that while chatbots can be an effective medium for customer service, they can also infuriate customers, generating a significant degree of anger and making them less likely to complete their purchases.
Research from Columbia University found that part of the problem is that AI-based chatbots have an unhelpful tendency to talk gibberish. The study found that chatbots often rated sentences as meaningful and helpful when human users considered them complete nonsense.
“That some of the large language models perform as well as they do suggests that they capture something important that the simpler models are missing,” the researchers explain.
“Even the best models we studied still can be fooled by nonsense sentences, which shows that their computations are missing something about the way humans process language.”
This muddled logic was further underlined by a recent paper from Cornell's SC Johnson College of Business, which explored how humans and chatbots make decisions. The findings don't suggest we can automatically rely on chatbots to make sound decisions.
Irrational decisions
“Surprisingly, our study revealed that AI chatbots, despite their computational prowess, exhibit decision-making patterns that are neither purely human nor entirely rational,” the researchers explain.
“They possess what we term as an ‘inside view’ akin to humans, characterized by falling prey to cognitive biases such as the conjunction fallacy, overconfidence, and confirmation biases.”
The conjunction fallacy is a common reasoning error in which we judge a specific combination of conditions to be more probable than a single, more general condition, even though a conjunction of events can never be more likely than either event on its own. In the classic example, people rate "Linda is a bank teller and is active in the feminist movement" as more probable than "Linda is a bank teller." Confirmation bias occurs when we favor information that supports our existing view over information that contradicts it.
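To make the first of these, the conjunction fallacy, concrete, here is a minimal Python sketch; the scenario and probabilities are illustrative assumptions, not figures from any of the studies. It simply checks that, in any population, the people matching both conditions are a subset of those matching one condition alone:

```python
import random

random.seed(0)

# Toy population: each person is (is_teller, is_feminist).
# The probabilities are made up purely for illustration.
population = [
    (random.random() < 0.05, random.random() < 0.30)
    for _ in range(100_000)
]

p_teller = sum(t for t, f in population) / len(population)
p_both = sum(t and f for t, f in population) / len(population)

# The conjunction can never be more probable than either conjunct.
print(f"P(teller)              = {p_teller:.4f}")
print(f"P(teller and feminist) = {p_both:.4f}")
assert p_both <= p_teller
```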
AI chatbots, by contrast, can provide an "outside view," which can enhance human decision-making by offering a fresh perspective. They are good at using base rates and are less prone to biases rooted in limited memory, such as the availability bias, in which we overestimate the likelihood of events based on recent experiences. Unlike humans, who often overvalue things they own (a bias known as the endowment effect), AI chatbots don't show this tendency.
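To illustrate what "using base rates" buys you, here is a short Python sketch of Bayes' rule; the 1% base rate and test accuracies are assumptions chosen for the example, not numbers from the research:

```python
# Illustrative numbers only: a test that is 99% sensitive and 95% specific
# for a condition with a 1% base rate.
base_rate = 0.01        # P(condition)
sensitivity = 0.99      # P(positive | condition)
false_positive = 0.05   # P(positive | no condition)

p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)
p_condition_given_positive = sensitivity * base_rate / p_positive

# Ignoring the base rate suggests ~99%; accounting for it gives ~17%.
print(f"P(condition | positive) = {p_condition_given_positive:.2%}")
```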
The researchers examined various AI platforms in the study, including ChatGPT, Google Bard, Bing Chat AI, ChatGLM Pro, and Ernie Bot, evaluating each against 17 principles from behavioral economics to shed light on how humans and AI interact in decision-making processes.
Inexact mirroring
The study found that chatbots don't mirror human decision-making all that closely, and certainly not as closely as the researchers expected.
Indeed, despite being trained on huge datasets that reflect human decision-making, the chatbots made choices that were neither typically human nor strictly rational. For instance, the study found that whereas humans tend to take a gamble when facing a loss, chatbots would often do the opposite and look for a more certain outcome. In other words, they don't tend to display the loss aversion humans do.
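To see what such a gamble looks like, here is a minimal Python sketch with illustrative stakes (the amounts are assumptions, not from the paper). Both options in each frame have the same expected value, so any systematic preference reflects a bias rather than the arithmetic:

```python
# Illustrative stakes only. In each frame, the sure option and the gamble
# have identical expected values, so a consistent preference is a bias.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

gain_sure = expected_value([(1.0, 50)])               # receive $50 for sure
gain_gamble = expected_value([(0.5, 100), (0.5, 0)])  # 50% chance of $100

loss_sure = expected_value([(1.0, -50)])                # lose $50 for sure
loss_gamble = expected_value([(0.5, -100), (0.5, 0)])   # 50% chance of -$100

print(gain_sure, gain_gamble)  # 50.0 50.0
print(loss_sure, loss_gamble)  # -50.0 -50.0
# Humans tend to take the sure $50 but gamble on the loss; the study found
# chatbots often preferred the certain outcome in the loss frame instead.
```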
If we're to use chatbots appropriately in our professional lives, we must understand how they work and how they differ from humans in making decisions.
While AI can be a useful tool, it's important to approach it with a healthy dose of skepticism. Knowing when AI offers an "inside view" can help reduce the risks of overconfidence and confirmation biases. On the other hand, using the "outside view" that AI provides can improve decision-making by focusing on base rates and avoiding biases that humans often fall prey to.
As AI becomes more integrated into different areas of life, understanding how it makes decisions is increasingly important. This research highlights AI's strengths and weaknesses, as well as its potential to enhance human decision-making.
“Exploring the unknown territory of AI decision-making has brought together diverse perspectives, paving the way for a deeper understanding of this rapidly evolving technology and its implications for society,” the authors conclude.
“As we continue on this journey, we aim to foster responsible and informed usage of AI, ensuring that it serves as a tool for progress and empowerment in the hands of decision-makers.”