AI singularity: waking nightmare, fool’s dream, or an answer to prayers?


The singularity, or more specifically the technological singularity, is often thought of as a point in future history when artificial intelligence overtakes that of its human creators, essentially rendering Homo sapiens obsolete and therefore ending our dominance of the planet.

More specifically and less apocalyptically, it can be defined as the time when growth in machine intelligence attains such a rate that it becomes unstoppable and irreversible, leading to a radical transformation of human life.


Many people I’ve spoken to dread the concept of singularity, while some are more optimistic. Others take a neutral stance, suggesting that AI will use its newfound acumen to build spacecraft and boldly head off into space, leaving humanity much where it was before. One character I met a few years ago even posited it as the Second Coming of Christ.

Whatever your take, it’s hard to deny that the S-word is laden with future possibilities. But rather than speculating endlessly about something that might never happen, it’s worth taking a look at the history of the term, and at who was responsible for inserting it into the lexicon.

An inevitable “intelligence explosion”?

Hungarian-American scientist and mathematical genius John von Neumann is credited by his colleague, Polish-American mathematician Stanislaw Ulam, as the first person to use the term in a technological context.

A year after von Neumann’s death in 1957, Ulam recalled him saying something along the lines of: “The ever accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.”

This pretty much sets the tone, and von Neumann, who was reportedly quite comfortable conversing in Ancient Greek and dividing eight-digit numbers in his head by the age of six, knew a thing or two about, well, lots of things.

Whether von Neumann actually said it the way Ulam posthumously quoted him or not, Irving John Good was the next to pick up the thread. In 1965, this British mathematician and former colleague of wartime cryptographer Alan Turing threw his hat into the AI ring.

Stamp featuring John von Neumann, who reportedly coined the term “singularity” before his death in 1957. (Shutterstock)

In his landmark essay, Speculations Concerning The First Ultraintelligent Machine, he wrote: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines.”

The upshot of this, Good continued, would inevitably be an “intelligence explosion” that would, essentially, leave humanity choking in the dust. This led him to conclude that “the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Of course, therein lies the rub: what if it isn’t?

Leaving that aside, and depending on how you view his next statement, Good was arguably wrong about one thing: summing up his contention, he added that it was “more probable than not that, within the twentieth century, an ultraintelligent machine will be built.”

His odds may have been proven wrong — “probable” turned out to be “never happened” — but Good was smart enough to allow for a margin of error in his thinking. In fact, a precise prediction of the singularity’s advent is not something that leading scientists have been too keen to put their names to.

Lack of consensus among professionals

Futurist and robotics expert Hans Moravec, author of the 1998 book Robot: Mere Machine To Transcendent Mind, has predicted that by 2040 machines will be able to perform any job that humans can, far surpassing us in intellect a decade after that. But who’s to say he’ll be right, either?

Sci-fi author and professor Vernor Vinge, who popularized the singularity as a concept after he included it in his 1986 novel Marooned In Realtime, has stated that he believes it will become a reality sometime between 2005 and 2030.

Then there’s computer scientist Ray Kurzweil, who has written a series of nonfiction books discussing the coming of AI and its ramifications. In 1990, he published The Age of Intelligent Machines, which anticipated the emergence of computers with human-level intelligence.

He followed this up in 1999 with The Age of Spiritual Machines, before releasing The Singularity Is Near in 2005, predicting said phenomenon’s arrival forty years hence.


That aligns closely with Moravec's prediction, but taken as a whole these forecasts still don't add up to a solid consensus among professionals, nor do they suggest much individual confidence in a precise date for the singularity's arrival. And if so many highly intelligent experts in the field cannot agree among themselves on when such an event will actually happen, it raises the question: will it ever?

And yet, many experts in the field seem increasingly convinced that it will. One survey of AI researchers conducted by Vincent Müller and Nick Bostrom in 2012-2013 found a median estimate of a 50/50 chance of human-level machine intelligence arriving between 2040 and 2050, with the probability rising to nine in ten by 2075.

Others, however, are singularity skeptics: in 2011, Microsoft co-founder Paul Allen and computer scientist Mark Greaves published an article in the MIT Technology Review disputing claims of an imminent singularity made by Kurzweil and Vinge.

Kurzweil’s assertion rests on a concept he outlined in 2001 called the Law of Accelerating Returns. Broadly speaking, this posits that technological and computing progress has accelerated exponentially and will continue to do so. Not only that, but the exponential rate of increase will itself increase, the upshot being AI ultraintelligence by the middle of the century.
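To make the shape of that claim a little more concrete, here is a minimal illustrative sketch in Python. It is not Kurzweil’s own model, and the growth constants are arbitrary assumptions; it simply contrasts growth at a fixed exponential rate with growth whose rate itself keeps increasing.

```python
import math

# Illustrative sketch only: the constants below are arbitrary assumptions,
# not Kurzweil's actual formulation of the Law of Accelerating Returns.

def fixed_rate_growth(t, rate=0.35):
    """Plain exponential growth: capability ~ e^(rate * t)."""
    return math.exp(rate * t)

def accelerating_growth(t, base_rate=0.35, acceleration=0.05):
    """Growth whose exponential rate itself increases over time.

    If the instantaneous growth rate is k(t) = base_rate * e^(acceleration * t),
    integrating dC/dt = k(t) * C gives the double-exponential curve
    C(t) = exp((base_rate / acceleration) * (e^(acceleration * t) - 1)).
    """
    return math.exp((base_rate / acceleration) * (math.exp(acceleration * t) - 1))

for year in (10, 20, 30, 40):
    print(f"year {year:2d}: fixed rate {fixed_rate_growth(year):10.3e}, "
          f"accelerating {accelerating_growth(year):10.3e}")
```

On the fixed curve the time between successive doublings stays constant; on the accelerating curve it keeps shrinking, which is the intuition behind Kurzweil’s forecast of ultraintelligence by mid-century.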

Allen and Greaves sought to refute that claim by arguing that Kurzweil’s hypothesis skips over the need to first understand how the human brain produces cognition, and that progress in complex disciplines like neuroscience has historically been haphazard and is therefore unlikely to follow the ever-accelerating curve he envisions.

“The complexity of the brain is simply awesome,” they wrote. “Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU [central processing unit] with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors. The closer we look at the brain, the greater the degree of neural variation we find. Understanding the neural structure of the human brain is getting harder as we learn more.”

It’s not exactly tea-time reading, and once again you’re left with the uncomfortable feeling that if the sharpest scientific minds cannot agree on when, or even whether, the singularity will take place, it might well be a futile exercise for the rest of us mere mortals to contemplate the question.

A gradual process

Perhaps it would be fitting to give the last word on the subject to Sam Altman, co-founder of OpenAI, the company behind ChatGPT, probably the most popular AI program on the planet at the time of writing.

In 2017, he noted in his blog that use of the term “singularity” was in decline, opining that the phenomenon now “feels uncomfortable and real enough that many seem to avoid naming it at all.”


And that, of course, was six years ago. At that time, Altman added: “Perhaps another reason people stopped using the word ‘singularity’ is that it implies a single moment in time, and it now looks like the merge [between human and machine] is going to be a gradual process. And gradual processes are hard to notice.”

Given how far-reaching the consequences of superintelligent machines will likely be, perhaps it should come as no surprise that a term first used to sketch them as a hypothetical concept in the middle of the last century is becoming obsolete.

The technological phenomenon it has been used to describe, however, will do just the opposite.