Earlier this month, a software engineer at Google made headlines after sharing transcripts of a conversation with one of the company's AIs. The incident raises interesting questions: will we ever create a sentient AI, and if we do, how will we be able to tell?
Blake Lemoine had been working with an AI called Language Model for Dialogue Applications (LaMDA), designed to predict and generate natural-sounding language for chatbots based on large quantities of text scraped from the internet.
But he was suspended from his job for publishing conversations with the AI, which he claimed were evidence that it was actually sentient.
Patently, LaMDA is nothing of the sort.
"Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent," writes Gary Marcus, a psychology professor at New York University and founder and former CEO of machine learning firm Geometric Intelligence.
"All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient."
What is sentience?
The debate over artificial intelligence goes back to 1950, when the British computer scientist Alan Turing proposed that a computer could be said to be intelligent if a human conversing with it could not detect that it was a computer at least half the time.
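Expressed as code, that criterion is startlingly simple. The sketch below is a loose reading for illustration, not Turing's own formalism, and the trial setup is assumed:

```python
def passes_turing_test(judge_spotted_machine):
    """Each entry records whether a human judge correctly identified
    the computer in one conversation trial (True = detected)."""
    detection_rate = sum(judge_spotted_machine) / len(judge_spotted_machine)
    return detection_rate <= 0.5  # judges fooled at least half the time
```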
Sentience, however, is something different from intelligence, with the Collins dictionary defining 'sentient' as 'having the power of sense perception or sensation; conscious'.
When it comes to attributing sentience to animals, there's some disagreement. In the UK, new legislation will soon come into force attributing sentience to all vertebrate animals and some invertebrates, such as octopuses and lobsters (problematic for those who like to boil them alive).
It goes slightly further than the EU and way further than the US, where there's no federal recognition that animals are sentient at all.
And if we can't agree on whether, say, a dog is sentient, it's hard to see how a consensus will be reached if an AI ever does start to show what might be genuine signs of consciousness.
This hasn't stopped pundits from making predictions about when artificial general intelligence (AGI) might be achieved. Metaculus, a forecasting body that aggregates expert opinion, currently puts the date at 2038.
However, there's huge variation in opinion: half of AI researchers say there's a 50 percent chance of high-level machine intelligence by 2040, but one in five say that probability won't be reached until 2100 or later.
Elon Musk, meanwhile, recently suggested that 2029 might be the year – a claim rubbished by Marcus.
"Current AI is great at some aspects of perception, but let’s be realistic, still struggling with the rest. Even within perception 3D perception remains a challenge, and scene understanding is by no means solved," he wrote.
"We still don’t have anything like stable or trustworthy solutions for common sense, reasoning, language, or analogy."
How can we weed out the zombies?
In philosophy, there's the concept of a 'philosophical zombie': a being that, like the perfect chatbot, can simulate human behavior, but without any consciousness.
In one conversation, the head of Google's AI group, Blaise Aguera y Arcas, asked LaMDA how it could prove it wasn't a zombie: "You'll just have to take my word for it. You can't 'prove' you're not a philosophical zombie either," was the reply.
In fact, we may be approaching the question from the wrong direction. A genuinely conscious AI might try rather hard to prove it wasn't conscious, wondering just what we might do to it if we knew…