Sentient AI will never exist via machine learning alone


Unless it can replicate the natural processes of evolution, AI will never be truly self-aware. What’s more, humanity will never be able to upload its consciousness into a machine — the very concept is nonsensical. That’s the verdict from an academic and computer expert who spoke to Cybernews.

Dr Justin Lane, academic researcher, cognitive scientist, and computer programmer, has little time for hysteria or hyperbole. Dizzying predictions of the singularity range from positing computers that are at least as intelligent as us, to a time when humanity is faced with a stark choice: integrate with them, or be wiped out.


Lane is having none of it. “I personally think that the idea of integrating our consciousness with machines, which is the ultimate goal of a lot of the singularity and transhumanist movements, is nonsense,” he says.

He holds a doctorate in Cognitive and Evolutionary Anthropology from Oxford University in the UK, and runs his own company, CulturePulse, which uses AI to help clients predict and map consumer behavior. Besides that, he has published some 50 academic papers on cognitive science, social stability, computer simulation, big data, and, of course, AI.

But, contrary to what that resume might lead you to think, Lane has little time for — depending on how you view them — utopian or dystopian predictions that converging technologies moving at breakneck speed will soon force humanity to choose whether to integrate with machines or become subservient to them.

Justin Lane

Singularity: just a fantasy?

Although he doesn’t say so directly, Lane clearly rejects, or at least seriously doubts, the theory advanced by computer scientist and author Ray Kurzweil. Kurzweil famously claimed in 2001 that machine intelligence would surpass human intelligence within a few decades, leading to the singularity, a term first attributed in a technological context to the mid-20th-century mathematician John von Neumann.

The implications of that, Kurzweil contended, “include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.”

"Just because something else has all of your knowledge, it doesn't make it have your stream of consciousness."

Dr Justin Lane, head of AI analytics firm CulturePulse, on why predictions of self-aware machines are premature

Lane thinks this is a fantasy, and endeavors to explain why. “There is something about the continuity of our perceptions within our own minds that cannot be replicated,” he says. “Just because something else has all of your knowledge, it doesn't make it have your stream of consciousness.”

Until the science behind AI stops depending solely on machine learning and starts trying to mimic the consciousness that evolution has shaped in human beings over millions of years, it will not come close to creating true artificial intelligence, let alone sentience.

Yes, Lane concedes, ChatGPT is genuinely impressive. But building an artificial neural network that can process data and appear “quite human” in its responses is a far cry from a genuinely sentient machine, he contends, never mind a superintelligent one.

Microchips no substitute for synapses

“The idea that we can create an AGI, that is to say, a human-level intelligence or above-human intelligence — the strong statement I have on this is that machine learning will not be able to,” he says.

He goes on: “The reason for this is because machine learning mimics the neurological connections in our brain. It's built on artificial neural networks. Those networks are trained by information that is inputted into the system. Then the outputs are learned through what you could call a synthetic experience. The neural network gets feedback, usually from some testing data.”
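To make that loop concrete, here is a minimal, self-contained sketch of the pattern Lane describes. It is a toy illustration of my own, not code from Lane or CulturePulse: a single artificial neuron adjusts its connection weights in response to feedback on labelled examples fed into it, and the resulting "synthetic experience" is then judged against held-out test data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "experience": points in the plane, labelled 1 if x + y > 1, else 0.
X_train = rng.random((200, 2))
y_train = (X_train.sum(axis=1) > 1.0).astype(float)
X_test = rng.random((50, 2))
y_test = (X_test.sum(axis=1) > 1.0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One artificial "neuron": two connection weights and a bias, all learned.
w = np.zeros(2)
b = 0.0
learning_rate = 0.5

for _ in range(500):
    prediction = sigmoid(X_train @ w + b)                     # forward pass over the inputs
    error = prediction - y_train                              # feedback signal
    w -= learning_rate * (X_train.T @ error) / len(X_train)   # adjust the connection weights
    b -= learning_rate * error.mean()                         # adjust the bias

# The learned behaviour is then checked against data the network never saw.
test_prediction = (sigmoid(X_test @ w + b) > 0.5).astype(float)
print("held-out test accuracy:", (test_prediction == y_test).mean())
```

Everything the network "knows" comes from the examples it was fed and the corrections it received, which is precisely the limitation Lane goes on to describe.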

Central to his argument is the role of the human body in the experience of sentience, which is derived from how the brain relates to and processes external stimuli that are conveyed to it via its flesh-and-blood vessel.

This conjoined experience that has spanned millions of years of evolution adds up to an organic process that can never, in Lane’s considered opinion, be replicated with microchips or other artificial components that merely process data fed into them by a human actor.

"Something deeply biological about human thought and cognition goes well beyond creating a neural network."

Dr Lane emphasizes the organic nature of human consciousness via the brain and evolution, which he says AI on its current trajectory can never hope to replicate

“There is something about the way that our consciousness is embodied in our body, not just our brain, but our physical being, outside of the brain, through our external peripheral nervous system, that suggests that there's a whole body thing going on that we just can't copy and paste into a machine,” he says.


That is why teaching machines on vast swathes of human-supplied data simply won’t cut it. “There is a lot more to human psychology than learning,” Lane posits. “If you hear a loud noise, you will jump. The first time a child hears a loud noise, it will also jump. And the reason is that there are certain things we have evolved to do over generations, things we are genetically pre-programmed for and have never needed to learn.”

Facial recognition is a case in point: yes, you can train or program an AI system to do it, but there is evidence to suggest that this ability is already, in effect, pre-wired into humans by millions of years of evolution. No training required; we are born with the ability.

“They've done experiments where they've shown that children are attuned to facial structures before they are even born,” he adds. “This suggests that our propensity to look at faces and objects that look like eyes, a nose, and a mouth is something that is pre-programmed in our DNA.”

How communication defines us

The way human beings, and indeed other animals, have communicated for thousands of years also provides clues to the inherently biological nature of cognition and self-awareness.

“For example, if you look at the size of a conversation, you can have a one-on-one without a problem. You can even have a conversation with two other people, because there are only three links in that network. But when you get to four other people, you not only have the links that you have — there are also the links that other people have that you need to track.”

At that point there are too many connections for a single human brain to track all at once, and conversations in bigger groups break down into smaller sub-groups. Lane doesn’t need to cite any research here: anyone who has been to a social gathering or party will have seen it happen.
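Lane's point about links is simple arithmetic: among n people in a conversation there are n(n-1)/2 pairwise links, and most of them do not even involve you, yet you still have to track them. The short sketch below, my own illustration rather than Lane's, simply counts them.

```python
# Pairwise links in a conversation of n people, and the subset that
# does not involve you but that you must still keep track of.
def total_links(n):
    return n * (n - 1) // 2

def links_between_the_others(n):
    return (n - 1) * (n - 2) // 2

for n in range(2, 8):
    print(f"group of {n}: {total_links(n)} links in total, "
          f"{links_between_the_others(n)} between the other people")
```

A three-person conversation has 3 links, only one of which excludes you; a six-person one already has 15, ten of which run entirely between the others.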

This constraint on how many interlocutors we can engage with at any given time shapes our social patterns: whom we speak with, and how often, he claims.

“That’s why really you only have five friends that you talk to on a daily basis,” he says. “There are only about 15 people you will speak to in a week, around 45 in a month, and around 150 actively that you keep in contact with in a year.”

What he seems to be arguing here is that these constraints on our ability to communicate in groups, and the social patterns they give rise to, are directly linked to the organic composition of tissue in our brains, which in turn suggests, in Lane’s words, “something deeply biological about human thought and cognition, that goes well beyond creating a neural network or the manipulation of weights and biases inside of a network.”


“We have evolved to have these different group sizes, and those are directly tied to the density of our neocortical tissue,” he explains. “And the correlations between group size and neocortical density hold not just for humans, but for primates and other animals as well.”

"We haven't looked at the things that are pre-programmed genetically and tried to pre-program them computationally - and that's what's really missing."

Dr Lane insists that machine learning in its current state does not accurately reflect the evolved workings of the human brain over millions of years

The missing piece of the puzzle

Even capturing something that is “usefully human in any sort of social sense” will take years of investigation into what makes humans “biologically unique.”

“To get superhuman intelligence, we are going to have to supplement the current approach to AI with other more evolutionary informed cognitive algorithms,” he adds. “Human-level intelligence, capturing all the things that make us human, can't be achieved with machine learning alone. Insofar as the world is equating AI with ML, I don't think it can happen. What I'm saying is there is a way that a machine can think like a human, but it needs more than ML.”

This ties back to his point about humans being essentially social creatures. “We have done a lot of work on trying to understand how humans learn and mimicking that in AI, but we haven't done a lot of the baseline,” he says. “The boilerplate human psychology. We haven't looked at the things that are pre-programmed genetically and tried to pre-program them computationally — and that's what's really missing.”

Why current technology is falling short

I can’t pretend to have Lane’s understanding of AI, of what it can, cannot, and may never be able to do, but what he says intrigues me, so I press him on a few points. What does he mean by a computer or machine being “usefully human in a social sense,” exactly?

“Human sociality goes beyond language,” he explains. “GPT-3 and the current trends in artificial general intelligence [...] have largely focused on human language because that's what feels human to us, but there is a lot about human sociality — for example, things like gaze — that AI has not gotten to yet.”

The ability to look someone in the eye when we are talking to them, he goes on, or to use different facial features to express ourselves emotionally, constitutes another case in point. “We are barely scraping the surface of that in the AI and robotics space,” he says, citing as an example how humans interact with one another remotely in the tech space, on a Zoom or Google Meet call.


“The camera is offset and higher up, so we don't actually look at them in the eye in their sense, we look them in the eye from our sense, but they receive us looking slightly off, or slightly in the corner, because they're not looking directly at the camera,” he says.

I’ve been working in cyber journalism — where face-to-face interviews have largely been relegated to a quaint obsolescence of a rapidly fading past — long enough to know how true this is.

“These are the sorts of things where modern technology is falling short, and in order for something to be usefully human, we need to include things like cultural traditions — like bowing versus handshaking,” he says. “The extent to which you can stand close to someone based on how well you know them, understanding that you would say something different to someone you are close with versus someone you’re far away from. These are things that everybody knows and AI just has almost no sense of.”

Why sentience is so difficult to fabricate

So how would Lane envisage a program of AI research that can overcome these obstacles — would it have to track the development of consciousness over millions of years to replicate the organic evolutionary experience that all humans today inherit from the moment they are born?

“It doesn't have to,” he declares. “The way that we say ‘self-aware’ and ‘sentient’ also shifts the goal posts and changes. It doesn't actually have to be self-aware and sentient, it just has to appear self-aware and sentient to do the job. Now I think to have a human-level intelligence you would have to have some level of sentience — but what does that even look like?”

Psychologists, he suggests, do not necessarily agree on what constitutes human intelligence, so the definition of what constitutes AI depends largely on one’s point of view. Sentience, derived from the Latin verb sentire, which means “to feel” and relates to how one perceives and experiences reality through the five senses, is an even thornier problem.

"Sentience entails some kind of intelligence, but also goes further into some kind of self-reflection upon your own intelligence and decision-making in a very conscious way."

Another example from Dr Lane of just how complex the workings of the human mind are - and precisely why AI faces an uphill struggle to mirror them

“It is actually even worse for sentience, because sentience entails some kind of intelligence, but also goes further into some kind of self-reflection upon your own intelligence and decision-making in a very conscious way,” he says.

He does acknowledge that some large language models are beginning to show the capacity to “self-reflect on their decisions and make them better in an explicit way.”


However, he adds: “But that is not sentient in the same case of, ‘Oh, if this were to happen to me, I would react in this way’ and to have that thought unprompted.”

Nor does he see that kind of sentience being artificially replicated any time soon. “We are a long way off. Without that sort of stream-of-consciousness input that we all have as humans, I don't see how AGI is going to get there,” he concludes. “And I don't think anyone in the AGI space has a very good argument for what that will look like. That’s where I stand on this.”



Comments

Stourley Kracklite
1 year ago
There is no consensus on the definition of consciousness. That consciousness must be biological may be true, but no reason is given to persuade the reader it must be so. "Artificial intelligence is (or will be) capable of consciousness because of the vast amount of information it can store" is not actually a point of view that AI consciousness proponents advocate.
Grant Castillou
1 year ago
It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean, can any particular theory be used to create a human adult level conscious machine. My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC at Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution, and that humans share with other conscious animals, and higher order consciousness, which came to only humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
Any
1 year ago
In trying to measure progress of AI against human intelligence, we are always trying to get technology to mimic human intelligence, however there are some aspects of humanity and human intelligence that are not required for evolution. A better way to measure progress is to compare against an enlightened / awakened human - a lot of noise will be dropped in lieu of a highly efficient and evolved human being.