AI safety research “seriously lacking,” scientists warn


A group of Nobel laureates, Turing Award winners, and other leading AI experts has issued a grim warning urging world leaders “to wake up” to the risks posed by the technology.

In the warning, released ahead of the second AI Safety Summit, which opened in Seoul, South Korea, on May 21st, twenty-five of the world’s leading scientists said that not enough is being done to protect humanity from the threats posed by AI.

An expert paper published in the journal Science called on world leaders to take urgent action to tackle the risks, including establishing oversight institutions and legally binding regulations.

The experts said governments should set up a trigger system that automatically enforces strict requirements if AI advances too rapidly and relaxes them once progress slows.

They also said that current research into AI safety is “seriously lacking,” with only an estimated 1–3% of AI publications concerning safety. According to the paper they signed, the impact of AI could be catastrophic if proper safeguards are not put in place.

“To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence decision-makers. To avoid human intervention, they might copy their algorithms across global server networks,” the paper read.

Large-scale cybercrime, social manipulation, and other harms could escalate rapidly, while an open conflict could see AI systems autonomously deploying various weapons, including biological ones.

“AI systems having access to such technology would merely continue existing trends to automate military activity,” it said, adding that control could also be freely handed over to AI systems in a broader push for “efficiency” by companies, governments, and militaries.

“There is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or extinction of humanity,” the experts cautioned.

The paper was co-authored by Geoffrey Hinton, Andrew Yao, Dawn Song, and the late Daniel Kahneman, among others, and was signed by scientists from the US, China, the EU, the UK, and other leading AI powers.

The gathering in Seoul will continue the work of the first AI Safety Summit, held at Bletchley Park in the UK last year. It will aim to forge new regulatory agreements, but the experts said progress is not being made fast enough.

“Technologies like spaceflight, nuclear weapons, and the internet moved from science fiction to reality in a matter of years. AI is no different,” said Dr Jan Brauner of the Department of Computer Science at the University of Oxford, one of the paper’s authors.

“We have to prepare now for risks that may seem like science fiction – like AI systems hacking into essential networks and infrastructure, AI political manipulation at scale, AI robot soldiers and fully autonomous killer drones, and even AIs attempting to outsmart us and evade our efforts to turn them off,” Brauner said.