ChatGPT’s answers could be nothing but a hallucination


OpenAI’s ChatGPT, Google’s Bard, or any other artificial intelligence-based service can inadvertently fool users with digital hallucinations.

OpenAI’s release of its AI-based chatbot ChatGPT last November gripped millions of people worldwide. The bot’s ability to provide articulate answers to complex questions forced many to ponder AI’s ability to mimic people.

ChatGPT’s stellar success quickly landed it a job at Microsoft’s search engine Bing. OpenAI’s chatbot handles search queries directly, allowing users to pose complete, complex questions and rely on the AI’s neural network instead of a conventional search engine algorithm.


The move prompted tech behemoths such as Google’s owner, Alphabet, to hastily come up with alternatives. In mid-February, Google revealed Bard, the company’s answer to ChatGPT. However, the company’s presentation of the AI bombed, wiping billions of dollars off Alphabet’s market value.

Later the same week, Google’s senior vice president Prabhakar Raghavan said that the company is not rushing to publicly release Bard, since developers are responsible for safeguarding against a phenomenon known as AI hallucinations.


Trippy intelligence

While hallucinating AI sounds like something straight out of a novel by Philip K. Dick, the concept is real, and many who played with AI-based chatbots have encountered the flaw themselves.

According to Greg Kostello, CTO and co-founder of the AI-based healthcare company Huma.AI, AI hallucinations manifest when AI systems create something that looks very convincing but has no basis in the real world.

“It can manifest as a picture of a cat with multiple heads, code that doesn’t work, or a document with made-up references,” Kostello told Cybernews.

AI developers borrowed the word ‘hallucinations’ from human psychology, and not by accident. Kostello notes that human hallucinations are perceptions of something not actually present in the environment.


“Similarly, a hallucination occurs in AI when the AI model generates output that deviates from what would be considered normal or expected based on the training data it has seen,” Kostello said.

While AI hallucination can result in some entertaining visual output, chatbots hallucinating convincing fakes can lead to anything from misunderstandings to misinformation. AI hallucinating medical solutions could lead to even less desirable outcomes.

Image: ChatGPT’s answer about the Vilnius TV tower. In reality, France had nothing to do with its construction; the tower was completed in 1980. Image by Cybernews.

AI is not rational

Artificial intelligence hallucinating concepts into existence reveals some of its fundamental characteristics, believes David Shrier, Professor of Practice in AI & Innovation at Imperial College Business School.

Over the decades, AI’s depiction in pop culture, such as HAL 9000 in Stanley Kubrick’s 2001: A Space Odyssey or Skynet in James Cameron’s Terminator, has ingrained in people’s minds the notion that AI’s mistakes are rational. The story often follows a similar pattern: humans program the AI to do good but mess it up with conflicting instructions.

“But artificial intelligence is actually a lot more complicated and less predictable than how it appears in the media. Some of the most powerful approaches to artificial intelligence deliberately and directly design systems based on the architecture of the human brain,” Shrier told Cybernews.

Large language models built with deep learning systems consist of layers of interconnected networks, allowing the AI to provide coherent answers to complicated questions. The very same process leaves room for mistakes.

“Because ChatGPT and similar large language systems build sentences based on relationships of prior words, the longer the piece you ask them to write, the greater the chance you spiral off into some really odd directions,” Shrier explained.
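To make that dynamic concrete, here is a deliberately tiny, self-contained Python sketch. It is not ChatGPT’s architecture: real systems condition on all of the preceding text with vastly richer models, while this toy picks each word only from the one before it, using a made-up mini-corpus. Even so, it illustrates the point Shrier makes: each word is chosen from its relationship to prior words, so the longer the generated text, the more room there is to drift into fluent-sounding nonsense.

import random

# Hypothetical mini-corpus, invented purely for illustration.
corpus = (
    "the tower in vilnius was completed in 1980 . "
    "the tower in paris was completed in 1889 . "
    "the tower in paris is tall ."
).split()

# Record which words follow each word in the corpus.
followers = {}
for prev, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(prev, []).append(nxt)

def generate(start, length, seed=None):
    # Sample a sequence one word at a time, conditioning only on the previous word.
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))  # each step depends only on prior text
    return " ".join(words)

# Short outputs tend to stay plausible; longer ones can blend the two towers
# into a fluent but false claim, the textual equivalent of a hallucination.
print(generate("the", 6, seed=1))
print(generate("the", 25, seed=7))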


Curable ailment

The good news is that hallucination-inducing ailments in AI’s reasoning are not a dead end. According to Kostello, AI researchers combine multiple approaches to mitigate possible output errors.

The solution heavily depends on the specific AI model. However, the tactics researchers use often include focusing the AI on validated data, thus ensuring the quality of the training data.

AI scientists fine-tune artificial intelligence using validated data, train the AI to be more robust against unrealistic inputs, and create a feedback loop by having human evaluators review the outputs generated by the AI system, as sketched below.
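The following Python sketch is purely illustrative: the class, function, and field names are assumptions made for this article, not any vendor’s actual tooling. It shows one minimal way such a human-in-the-loop setup could be wired together, with reviewers labeling generated answers and only approved ones flowing back into a fine-tuning dataset.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReviewedSample:
    prompt: str
    answer: str
    is_grounded: bool   # human verdict: supported by validated sources or not
    note: str = ""      # optional reviewer comment, e.g. the correct fact

@dataclass
class FeedbackLoop:
    generate: Callable[[str], str]            # the model under evaluation
    reviewed: List[ReviewedSample] = field(default_factory=list)

    def collect(self, prompt: str, human_review: Callable[[str, str], tuple]):
        # Generate an answer, ask a human to label it, and store the result.
        answer = self.generate(prompt)
        is_grounded, note = human_review(prompt, answer)
        self.reviewed.append(ReviewedSample(prompt, answer, is_grounded, note))

    def fine_tune_set(self):
        # Only human-approved answers go back into the training data.
        return [(s.prompt, s.answer) for s in self.reviewed if s.is_grounded]

# Example usage with a stand-in "model" and a human reviewer function:
# loop = FeedbackLoop(generate=lambda p: "The Vilnius TV tower was built by France.")
# loop.collect("Who built the Vilnius TV tower?",
#              lambda p, a: (False, "Completed in 1980; France was not involved."))
# loop.fine_tune_set()  # -> [] because the hallucinated answer was rejected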

For example, the use of human evaluation is one reason for ChatGPT’s quality. Last year, OpenAI published a blog post discussing various methods of improving the GPT-3 language model and found that human evaluation helped reduce the number of AI hallucination instances.

With AI assisting everyone from researchers developing new drugs to users searching for accurate information, finding ways to reduce AI’s hallucination rate will have to become a cornerstone of quality assurance for AI-based services.

“The AI world’s hallucinations may be a small blip in the grand scheme of things, but it’s a reminder of the importance of staying vigilant and ensuring our technology is serving us, not leading us astray,” Kostello said.


