AI can’t think independently and poses no existential threat, new study says


And breathe. That is, if you believe the results of a brand-new study which says large language models (LLMs) like ChatGPT cannot learn or acquire new skills independently.

According to the authors of the new research, the models need explicit instructions and are predictable and controllable. In essence, all the hype about robots taking over is probably just that: hype. AI, they argue, effectively poses no existential threat to humanity.

Crucially, researchers from the University of Bath in the United Kingdom and the Technical University of Darmstadt in Germany say the LLMs have no potential to master new skills without explicit instruction.


Yes, the models can follow instructions and excel at language proficiency, but their abilities are nowhere near the level of what could be called artificial general intelligence (AGI) – a concept pushed by influential tech visionaries like Elon Musk or Ilya Sutskever, OpenAI’s former chief scientist.

As Cybernews explains, an AGI would be capable of the same level of learning and understanding as a human being, and of carrying out the same level of intellectual tasks – while having instant access to a far greater range of data.

The current set of LLMs is “inherently controllable, predictable, and safe,” says the study. Its authors are also confident that the models – even if they’re trained on ever larger datasets – can continue to be deployed without safety concerns.

“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” said Dr Harish Tayyar Madabushi, computer scientist at the University of Bath and co-author of the new study.

The researchers ran experiments to test the ability of LLMs to complete tasks the models had never come across before – the so-called emergent abilities.

Sure, the LLMs can answer questions about social situations without ever having been explicitly trained or programmed to do so, and previous research suggested this was a product of the models “knowing” about social situations.

However, the researchers showed that this was, in fact, the result of the models’ well-known ability to complete tasks based on a few examples presented to them, known as ‘in-context learning’ (ICL).
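To give a sense of what in-context learning looks like in practice, here is a minimal sketch: it simply builds a few-shot prompt for a made-up sentiment-labelling task. The task, the example reviews, and the idea of sending the resulting string to whichever LLM API you use are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of in-context learning (ICL): the task is defined entirely
# by a handful of worked examples placed in the prompt, with no fine-tuning.
# The sentiment-labelling task and the examples below are illustrative
# assumptions, not drawn from the study itself.

FEW_SHOT_EXAMPLES = [
    ("The service was quick and the staff were lovely.", "positive"),
    ("I waited an hour and the food arrived cold.", "negative"),
    ("It was fine, nothing special either way.", "neutral"),
]

def build_icl_prompt(new_review: str) -> str:
    """Assemble a few-shot prompt: worked examples first, then the unsolved case."""
    lines = ["Label the sentiment of each review as positive, negative, or neutral.", ""]
    for review, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {new_review}")
    lines.append("Sentiment:")  # the model is expected to continue from here
    return "\n".join(lines)

if __name__ == "__main__":
    prompt = build_icl_prompt("The plot dragged, but the acting was superb.")
    print(prompt)
    # In practice this string would be sent to an LLM API of your choice;
    # the model infers the task purely from the examples in the prompt.
```

The point of the sketch is that nothing new is learned: the model is only completing a pattern laid out in the prompt, which is the behaviour the researchers say explains the apparently “emergent” abilities.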

“The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities, including reasoning and planning,” said Dr Tayyar Madabushi.


“But our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.”

Nevertheless, the potential misuse of AI, such as generating fake news, still requires attention – deepfakes and AI bots can disrupt election campaigns and empower crooks to steal money from people.

Still, “while it’s important to address the existing potential for the misuse of AI, it would be premature to enact regulations based on perceived existential threats,” said Dr Tayyar Madabushi.