© 2022 CyberNews - Latest tech news, product reviews, and analyses.


Will sentient AI ever exist?

Earlier this month, a software engineer at Google made headlines after sharing transcripts of a conversation with one of the company's AIs. The incident raises interesting questions: will we ever create a sentient AI – and if we do, how will we be able to tell?

Blake Lemoine had been working with an AI called Language Model for Dialogue Applications (LaMDA), designed to predict and generate natural-sounding language for chatbots based on large quantities of text scraped from the internet.

But he was suspended from his job for publishing conversations with the AI, which he claimed were evidence that it was actually sentient.

Patently, LaMDA is nothing of the sort.

Claims debunked

"Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent," writes Gary Marcus, a psychology professor at New York University and founder and former CEO of machine learning firm Geometric Intelligence.

"All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient."
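The pattern matching Marcus describes can be illustrated with a toy model. The sketch below is a deliberately crude bigram generator, not anything resembling LaMDA's actual architecture (which is a large neural network): it simply counts which word follows which in a corpus and samples from those counts, producing plausible-looking word sequences with no understanding behind them. The corpus and seed are invented for the example.

```python
import random
from collections import defaultdict

# A tiny made-up corpus; real systems are trained on vastly larger text.
corpus = (
    "the model predicts the next word . "
    "the model matches patterns in text . "
    "patterns in text drive the next word ."
).split()

# Count which words follow which: the "statistical database" in miniature.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, rng):
    """Emit `length` words by repeatedly sampling a statistically
    likely successor. No meaning is involved, only co-occurrence."""
    word, out = start, [start]
    for _ in range(length - 1):
        word = rng.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("the", 6, random.Random(0)))
```

The output reads like language because the transitions are drawn from real sentences, which is exactly the point: fluency alone is no evidence of sentience.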

What is sentience?

The debate over artificial intelligence goes back to 1950. The British computer scientist Alan Turing proposed that a computer could be said to be intelligent if a human conversing with it could not detect that it was a computer at least half the time.
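Turing's criterion can be stated as a simple experiment: run a series of blind conversations, have a judge guess which partner is the machine, and pass the machine if it is detected no better than chance. The sketch below is a hypothetical framing of that scoring rule, not Turing's own formulation; a real test uses human interrogators, not boolean lists.

```python
def turing_test(judgements):
    """judgements: one boolean per trial, True where the judge
    correctly identified the machine. The machine passes if the
    detection rate is at most 50 percent (chance level)."""
    detection_rate = sum(judgements) / len(judgements)
    return detection_rate <= 0.5

# Example: the judge spots the machine in only 4 of 10 trials.
print(turing_test([True] * 4 + [False] * 6))
```

Note that this measures only indistinguishability in conversation; as the article goes on to argue, that is a different question from sentience.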

The definition of sentience, however, is somewhat different, with the Collins dictionary defining 'sentient' as 'having the power of sense perception or sensation; conscious'.

When it comes to attributing sentience to animals, there's some disagreement. In the UK, new legislation will soon come into force attributing sentience to all vertebrate animals and some invertebrates, such as octopuses and lobsters (problematic for those who like to boil them alive).

It goes slightly further than the EU and way further than the US, where there's no federal recognition that animals are sentient at all.

And if we can't agree on whether, say, a dog is sentient, it's hard to see how a consensus will be reached if an AI does ever start to show what might be genuine signs of consciousness.

This hasn't stopped pundits from making predictions about when artificial general intelligence (AGI) might be achieved. The forecasting platform Metaculus, which aggregates expert predictions, currently puts the date at 2038.

However, opinions vary hugely: half of AI researchers say there's a 50 percent chance of high-level machine intelligence by 2040, but one in five say that 50 percent probability won't be reached until 2100 or later.

Elon Musk, meanwhile, recently suggested that 2029 might be the year – a claim rubbished by Marcus.

"Current AI is great at some aspects of perception, but let’s be realistic, still struggling with the rest. Even within perception, 3D perception remains a challenge, and scene understanding is by no means solved," he wrote.

"We still don’t have anything like stable or trustworthy solutions for common sense, reasoning, language, or analogy."

How can we weed out the zombies?

In philosophy, there's the concept of a 'zombie' – a being that, like the perfect chatbot, can simulate human behavior, but without any conscious experience behind it.

In one conversation with LaMDA, head of Google’s AI group Blaise Aguera y Arcas asked LaMDA how it could prove it wasn't a zombie: "You’ll just have to take my word for it. You can’t 'prove' you’re not a philosophical zombie either," was the reply.

In fact, we may be approaching the question from the wrong direction. A conscious AI might try rather hard to prove it wasn't conscious – wondering just what we might do to it if we knew…

Grant Castillou
3 months ago
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there – perhaps by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461