It’s been over a year since Henry Kissinger, the master of Realpolitik, died. And yet he’s still offering policy advice from beyond the grave. In his last book, he explores the future of humanity and its entanglement with AI.
Throughout his long life, Kissinger mainly dealt with matters of high diplomacy. Even now, Richard Nixon’s Secretary of State remains the face of the pragmatic school of geopolitics known as Realpolitik.
Up until his death at the age of 100, Kissinger’s advice was sought by American presidents of both major political parties. He may have facilitated war crimes, but his policies were deemed effective and aligned with US foreign policy objectives.
He even once endorsed Jared Kushner, the sneaky son-in-law of Donald Trump, saying he was destined for great things while “flying close to the sun.”
Kushner is now best known for making billions in shady dealings with Saudi Arabia – not much more. So let’s just say this was further proof that Kissinger could be wrong.
Anyway, when you’re retired – even if you’re Kissinger, the Nobel Peace Prize laureate – you might be expected to slow down and play golf or something.
But Kissinger surprised everyone in 2018, already at the ripe age of 95, by writing an essay on AI. Next, he co-wrote “The Age of AI” in 2021, and now, there’s even a posthumous book titled “Genesis: Artificial Intelligence, Hope, and the Human Spirit.”
Why the sudden rush to talk about AI? Well, Kissinger has always been in the middle of the hype – whatever the hype concerned. The Vietnam War? Kissinger’s project. Rapprochement with China? Why not? The Russian invasion of Ukraine? Henry’s gonna help out, too.
The brief “In Memoriam” section at the beginning of the book describes Kissinger quite brilliantly, calling him a “student of the nineteenth century, master of the twentieth, and oracle of the twenty-first.”
Will we even understand what’s happening?
“Genesis,” a book Kissinger co-authored with Google’s former CEO Eric Schmidt and Craig Mundie, a former chief research officer at Microsoft, essentially explores a King Midas type of dilemma.
How does humanity go about wielding a power it cannot possibly understand, Kissinger asks – won’t we destroy ourselves? He goes further and says that the worst thing we could do as a civilization is to “declare too early, or too completely, that we understand” AI.
AI is now, of course, present everywhere. Students use it to cheat on college essays, medical professionals employ it to detect and treat cancer, and engineers turn to it to fight climate change or design spacecraft.
But even that’s nothing, the authors of the book say. According to them, the “evolution of Homo technicus – a human species that may, in this new age, live in symbiosis with machine technology” – has already begun.
Large language models are becoming more powerful each week: they absorb data and are now able to reason, and we may soon welcome artificial general intelligence (AGI), which could revolutionize human life on a par with fire or electricity.
In the book, one scenario depicts AI solving everything, Sam Altman style – the climate crisis, income inequality, death, you name it. In many aspects, the authors seem cautiously optimistic about AI’s capabilities and future use cases.
In fact, Kissinger, Schmidt, and Mundie even sound resigned to the primacy of AI in human life in the near future. That’s because AI’s superiority in computational and problem-solving power is pretty much irrefutable.
The average AI supercomputer already processes information some 120 million times faster than the very creative but fragile human brain.
Indeed, the age of AI, Kissinger says, could “catalyze a return to a premodern acceptance of unexplained authority” because our uniquely human grasp of reality will be challenged. AI will allow us “to know new things [...] but not to understand how the discoveries were made.”
We’re probably quite close to reaching AGI. OpenAI, the flagbearer of the industry, is reportedly planning to achieve (or at least to declare achievement of) it in 2025.
The company says there are five stages on the way to AGI, and we’re currently at stage two, where AI can reason through a problem before responding. The next stage is AI agents that can also plan and perform actions independently, and after that – bliss.
How does one define humanity?
The bigger challenge, of course, will be ensuring that these new AGI systems are aligned with human values and can’t “go rogue,” performing actions that aren’t beneficial to humanity. Here, Kissinger and his co-authors wholeheartedly agree.
Kissinger, whose family fled Nazi Germany in 1938, has always been (in)famous for his constant warnings that humanity must avoid plunging into another conflict on the scale of the Second World War.
Here, his hand is felt heavily in the form of stark admonitions: “The advent of artificial intelligence is a question of human survival. An improperly controlled AI could accumulate knowledge destructively.”
If you unleash AI into the wild, the book states, “machines may contend that the truest method of classification is to group today’s humans together with other animals, since both are carbon systems emergent of evolution and different from silicon systems emergent of engineering.”
Yes, we’d become AI’s rivals, competing for domination. Machines wouldn’t reflect humanity – they would replace us. That’s why the authors devote a lot of space to urging the machines’ creators and, for now, supervisors to cooperate in instilling the core values of human “dignity” into them.
AI systems must be “compelled to build from observation a native understanding of what humans do and don’t do.” They must learn how to be human from the examples that humans set.
Alas, that’s where another problem lies. Conceptions of human morality differ across the world. Are American, or Western, moral values the same as Chinese, Russian, or Iranian ones? What evokes mercy in, say, Denmark could inspire rage in Saudi Arabia.
Kissinger still has hope. In 2023, he even traveled to China and met its leader, Xi Jinping, warning him about the catastrophic risks of AI. Likewise, the book is meant to be read by policy makers first and foremost – they are the ones who will need to make crucial decisions when the time comes.
So far, though, things are looking rather grim on the government side. In California, where the AI boom is loudest, Governor Gavin Newsom caved under pressure from tech companies and venture capitalists when he vetoed an important AI safety bill in September.
Indeed, the very fact that the development of ever more sophisticated forms of AI is “a project led almost exclusively by private corporations and entrepreneurs” is fraught with risk.
Could corporations form alliances to compound their already immense clout, even gaining military and political power in the process? What impact would that have on diplomacy and global stability?