OpenAI’s ChatGPT, Google’s Bard, or any other artificial intelligence-based service can inadvertently fool users with digital hallucinations.
OpenAI’s release of its AI-based chatbot ChatGPT last November captivated millions of people worldwide. The bot’s ability to provide articulate answers to complex questions led many to ponder AI’s ability to mimic people.
ChatGPT’s stellar success quickly landed it a job at Microsoft’s search engine Bing. There, OpenAI’s chatbot handles queries directly, allowing users to ask complex questions in full sentences and rely on the AI’s neural network instead of a search engine algorithm.
The move prompted tech behemoths such as Google’s owner, Alphabet, to hastily come up with alternatives. In mid-February, Google revealed Bard, the company’s answer to ChatGPT. However, the company’s presentation of the AI bombed, wiping billions of dollars off Alphabet’s market value.
Later the same week, Google’s senior vice president Prabhakar Raghavan said that the company was in no rush to release Bard publicly, since developers bear the responsibility of safeguarding against a phenomenon known as AI hallucinations.
Trippy intelligence
While hallucinating AI sounds like something straight out of a novel by Philip K. Dick, the concept is real, and many who played with AI-based chatbots have encountered the flaw themselves.
According to Greg Kostello, CTO and co-founder of the AI-based healthcare company Huma.AI, AI hallucinations manifest when AI systems create something that looks very convincing but has no basis in the real world.
“It can manifest as a picture of a cat with multiple heads, code that doesn’t work, or a document with made-up references,” Kostello told Cybernews.
AI developers borrowed the term ‘hallucination’ from human psychology, and that was no accident. Kostello notes that human hallucinations are perceptions of something not actually present in the environment.
“Similarly, a hallucination occurs in AI when the AI model generates output that deviates from what would be considered normal or expected based on the training data it has seen,” Kostello said.
While AI hallucination can result in some entertaining visual output, chatbots hallucinating convincing fakes can lead to anything from misunderstandings to misinformation. AI hallucinating medical solutions could lead to even less desirable outcomes.
AI is not rational
Artificial intelligence hallucinating concepts into existence reveals some fundamental characteristics of the technology, believes David Shrier, Professor of Practice in AI & Innovation at Imperial College Business School.
Over the decades, AI’s depiction in pop culture, such as HAL 9000 in Stanley Kubrick’s 2001: A Space Odyssey or Skynet in James Cameron’s Terminator, has ingrained in the public mind the notion that AI’s mistakes are rational. The story often follows a similar pattern: humans program the AI to do good but mess it up with conflicting instructions.
“But artificial intelligence is actually a lot more complicated and less predictable than how it appears in the media. Some of the most powerful approaches to artificial intelligence deliberately and directly design systems based on the architecture of the human brain,” Shrier told Cybernews.
Large language models built with deep learning systems have layers of linked networks, allowing the AI to provide coherent answers to complicated questions. The same process, however, also creates space for mistakes.
“Because ChatGPT and similar large language systems build sentences based on relationships of prior words, the longer the piece you ask them to write, the greater the chance you spiral off into some really odd directions,” Shrier explained.
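To illustrate the mechanism Shrier describes, here is a minimal sketch of that word-by-word (strictly, token-by-token) generation process. It uses the openly available GPT-2 model through the Hugging Face transformers library purely as a stand-in for ChatGPT, whose weights are not public; the point is that each token is sampled based only on the tokens that precede it, with no built-in step that checks the text against reality.

```python
# A minimal sketch of autoregressive text generation, assuming the
# open-source Hugging Face `transformers` library and the GPT-2 model
# (a stand-in for ChatGPT, whose weights are not public).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The researchers discovered that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Each new token is sampled from a distribution conditioned only on the
# tokens that came before it. There is no fact-checking step, so a single
# unlikely choice can steer the rest of the text in an odd direction,
# and the longer the text, the more room there is to drift.
with torch.no_grad():
    for _ in range(30):
        logits = model(input_ids).logits[:, -1, :]
        probs = torch.softmax(logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)
        input_ids = torch.cat([input_ids, next_token], dim=-1)

print(tokenizer.decode(input_ids[0]))
```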
Curable ailment
The good news is that the ailments behind AI hallucinations are not a dead end. According to Kostello, AI researchers combine multiple approaches to mitigate possible output errors.
The exact solution depends heavily on the specific AI model. Common tactics, however, include ensuring the quality of the training data, fine-tuning the model on validated data, training it to be more robust against unrealistic inputs, and creating a feedback loop in which human evaluators review the outputs the AI system generates.
For example, the use of human evaluation is one reason for ChatGPT’s quality. Last year, OpenAI published a blog post discussing various methods of improving the GPT-3 language model and found that human evaluation helped reduce the number of AI hallucination instances.
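What such a human feedback loop can look like in practice is sketched below in simplified, purely illustrative Python; the class and field names are hypothetical and do not correspond to OpenAI’s actual tooling. The idea is that outputs rejected by human evaluators, paired with validated corrections, become training examples for the next round of fine-tuning.

```python
# A simplified, hypothetical sketch of a human-in-the-loop feedback cycle.
# All names here are illustrative, not any vendor's actual API.
from dataclasses import dataclass, field


@dataclass
class ReviewedSample:
    prompt: str
    model_output: str
    approved: bool          # verdict from a human evaluator
    correction: str = ""    # validated answer supplied when rejected


@dataclass
class FeedbackLoop:
    fine_tune_set: list = field(default_factory=list)

    def record(self, sample: ReviewedSample) -> None:
        # Rejected outputs with corrections become validated training
        # examples; approved ones reinforce behavior that is already sound.
        self.fine_tune_set.append(sample)

    def export_training_pairs(self) -> list:
        # Pairs of (prompt, trusted answer) for the next fine-tuning round.
        return [
            (s.prompt, s.model_output if s.approved else s.correction)
            for s in self.fine_tune_set
        ]


loop = FeedbackLoop()
loop.record(ReviewedSample(
    prompt="List three references on this topic.",
    model_output="A convincing but fabricated citation ...",
    approved=False,
    correction="A citation verified by the human reviewer ...",
))
print(loop.export_training_pairs())
```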
With AI assisting everyone from researchers developing new drugs to users searching for correct information, finding ways to reduce AI’s hallucination rate will have to become a cornerstone of quality assurance for AI-based services.
“The AI world’s hallucinations may be a small blip in the grand scheme of things, but it’s a reminder of the importance of staying vigilant and ensuring our technology is serving us, not leading us astray,” Kostello said.