AI hallucinations “direct threat” to science – researchers


An Oxford University study warns that artificial intelligence can hallucinate content into existence, threatening to contaminate science with biased and false information.

In a recently published study, a group of Oxford University researchers called for restrictions on the use of large language models (LLMs) in research to limit the impact of AI hallucinations on scientific results.

While hallucinating AI sounds like something straight out of a novel by Philip K. Dick, the concept is real. It manifests when AI systems create something that looks very convincing but has no basis in the real world.


Moreover, the researchers argue that LLMs such as ChatGPT or Bard are trained on online sources that don’t always contain factually correct information, which can lead the models to respond with false statements, opinions, or fiction.

According to Professor Brent Mittelstadt, people humanize LLM-generated answers because the technology is built to converse and sound human. However, human-sounding agents can still provide completely false information.

“The result of this is that users can easily be convinced that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth,” Professor Mittelstadt said.

The authors of the study argue that LLMs ought to be used only as “zero-shot translators.” In other words, instead of relying on LLMs as a source of factual information, scientists would supply the relevant data themselves and use the tool to organize or systemize it.
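As a rough illustration of that pattern, the sketch below keeps the trusted content in the researcher’s hands and only asks the model to restructure it. The `ask_llm` helper is an assumption standing in for whatever model interface a lab actually uses (an API client, a local model, and so on), not a real library call.

```python
# Minimal sketch of the "zero-shot translator" idea described above:
# the scientist supplies the trusted content, and the model is asked only
# to reorganize it, never to contribute facts of its own.

from typing import Callable


def restructure_notes(ask_llm: Callable[[str], str], raw_notes: str) -> str:
    """Ask the model to reformat text the researcher already trusts."""
    prompt = (
        "Reformat the following observations as a CSV table with the columns "
        "sample_id, measurement, unit. Use only the information given below; "
        "do not add, infer, or correct any values.\n\n"
        f"{raw_notes}"
    )
    return ask_llm(prompt)


if __name__ == "__main__":
    def fake_llm(prompt: str) -> str:
        # Placeholder response so the sketch runs without a real model behind it.
        return "sample_id,measurement,unit\nA1,3.2,mg"

    notes = "Sample A1 weighed 3.2 mg."
    print(restructure_notes(fake_llm, notes))
```

The design point is that the prompt is purely a formatting instruction over data the researcher already verified, so a hallucinated “fact” has nowhere to enter the output except as a formatting error that is easy to spot.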

“It’s important to take a step back from the opportunities LLMs offer and consider whether we want to give those opportunities to a technology, just because we can,” Professor Chris Russell, another of the paper’s authors, said.

Employing LLMs in science has ruffled some feathers in the community, as the technology presents as many opportunities as it does dangers. Cybernews has previously discussed how researchers use AI to perform incredible feats, such as discovering exoplanets.

However, scientists may remain wary of unresolved issues such as the black box problem, where there is no clear answer as to why an AI model produces the results it does. For example, a machine learning model might tell scientists that the data indicates the presence of a galaxy, yet be unable to explain how it reached that conclusion.
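To make the black box point concrete, here is a toy sketch on entirely made-up data (the “galaxy” framing is only an analogy, not any real astronomy pipeline): the model returns an answer and a confidence score, but no human-readable chain of reasoning.

```python
# Toy illustration of the black box problem: a model that predicts
# confidently but offers no explanation for its decision.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "sky survey" features: 200 observations, 5 measurements each.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # 1 = "galaxy", 0 = "no galaxy"

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

new_observation = rng.normal(size=(1, 5))
prediction = model.predict(new_observation)[0]
confidence = model.predict_proba(new_observation)[0, prediction]

# The output is a label and a probability; the hundreds of decision trees
# behind it provide no human-readable justification for the verdict.
print(f"prediction: {'galaxy' if prediction else 'no galaxy'} "
      f"(confidence {confidence:.2f})")
```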
