A coalition of artificial intelligence experts recently released a brief statement warning of the risk of extinction from AI, and one of the signatories, Professor Dan Hendrycks, has employed a Darwinian argument in his new paper.
Hendrycks, the executive director of the Center for AI Safety, states that if AI agents have intelligence that exceeds that of humans, “this could lead to humanity losing control of its future.”
In other words, just as humans became the most successful species on Earth thanks to their high intelligence, AI agents might surpass them if their systems keep evolving and, for instance, start pursuing their own interests with little regard for humans.
This is a sort of futuristic Darwinian logic, Hendrycks admits – today’s viral chatbots largely reproduce patterns based on training data they have been fed. They certainly do not think for themselves.
But Hendrycks adds that the mere possibility could pose catastrophic risks. That’s why interventions aimed at counteracting AI-related dangers need to be considered, and rather urgently.
Doomsplaining: is it justified?
Hendrycks, now doing the rounds in the media, is concerned that AI could pose a greater threat to humanity than something like a pandemic. What if, for example, an AI realizes that humans can deactivate it and decides to prevent that by harming us preemptively instead?
“If current trends continue, we should expect AI agents to become just as capable as humans at a growing range of economically relevant tasks. This change could have huge upsides – AI could help solve many of the problems humanity faces,” Hendrycks says.
“But, as AIs become increasingly capable of operating without direct human oversight, they could one day be pulling high-level strategic levers. If this happens, the direction of our future will be highly dependent on the nature of these AI agents.”
On the one hand, we can hope for benevolent AI agents that avoid harming humans and apply their skills to the benefit of society. But such an outcome is not guaranteed because AIs will almost certainly become more autonomous.
The scientist considers it likely that the most influential AI agents of the future will be selfish: “Firstly, natural selection may be a dominant force in AI development. Secondly, evolution by natural selection tends to give rise to selfish behavior.”
So, even if some developers build altruistic AIs, others will build less altruistic ones, and the latter will outcompete the former, according to these Darwinian forces.
Doomsplaining? Maybe. But Hendrycks writes that preparing for disastrous worst-case scenarios, up until now only depicted in sci-fi novels or TV shows, “is not overly pessimistic; rather it is prudent.”
Calls for regulation
The professor is one of dozens of AI experts and industry leaders who have called for reducing “the risk of global annihilation” due to AI.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement published by the Center for AI Safety.
The statement was signed by leading industry officials, including OpenAI chief executive Sam Altman, the so-called “godfather” of AI Geoffrey Hinton, and Kevin Scott, Microsoft’s chief technology officer, among others.
AI-based tools like OpenAI’s ChatGPT have been booming in recent months – this specific chatbot is used by hundreds of millions of people each day. In response, lawmakers and activists across the globe have been busy calling for regulation of the industry before any major mishap occurs.
For instance, the European Parliament is currently drafting its first set of rules to govern the technology, aiming to tame the rapid growth of AI in line with Europe's General Data Protection Regulation (GDPR).