
Tech pioneers are rushing to develop artificial general intelligence (AGI). But should they? There’s hope for progress ahead, but the worry is that we could lose control of it all.
AGI will bring about an epochal change unmatched by any modern invention, including the internet. Almost all tech leaders agree on that. What’s less clear is whether we, the humans, will like it.
Indeed, AGI will most likely be able to reason, learn, and innovate across any task. It will not only match but outperform humans in cognitive capabilities – and the milestone might even be reached this year, as the eternal AI optimist, OpenAI CEO Sam Altman, argues.
To be fair, the timeline is vague – who knows when we might reach AGI? But that’s why discussing the promise and peril of AGI right now is crucial. We must be prepared to handle the adoption of this technology as smoothly as possible.
At the World Economic Forum’s annual meeting in Davos, a panel discussion was arranged to raise pertinent questions about AGI. Will it be a force for progress or a threat to the very fabric of humanity? Most experts involved are optimists, it seems – but not all.
Hopeful but wary
“I hope we will reach AGI someday – maybe within our lifetimes, within the next few decades, or maybe in hundreds of years. But even AI has to obey the laws of physics, so there will be limitations. Still, the ceiling of how intelligent the systems can get will be extremely high,” said Andrew Ng, an entrepreneur and investor.
He’s an AI enthusiast, so his words aren’t surprising. Ng fully believes in AI's potential and has been a driving force in the push to get consumers and businesses interested in it.
But Yoshua Bengio, a professor at the University of Montreal, thinks we'll reach AGI much sooner – even now, some machines already outperform humans at certain tasks.
“We don't know what the ceiling is. But the machines are digital and can learn from a lot more data than humans. That is why there's potential that we don't have,” said Bengio.
Of course, with machine learning there's no agency – the models are simply fed data from the web. That's why it's different: good at one set of things, yet capable of making silly mistakes, thinks Yejin Choi, a professor and senior fellow at Stanford University.
"Right now, science doesn’t know how to control machines that are just as smart as us. We’ll figure it out? But do we know what happens if we don’t figure it out?"
Joshua Bengio.
Choi calls the current method of building intelligence in machines “brute and inefficient” but adds that we can still go far: “I just don’t know whether we can go beyond the best human intelligence.”
Jonathan Ross, founder and CEO of Groq, an American AI company, is also wary: “There are actually some pretty hard steps left [until we reach AGI]. We keep having to move the goalposts.” In Ross’s view, AI still needs clearly defined tasks to perform well.
Do we really know how to control AI?
Concerns about possible job losses, surveillance, and deepfakes are very real. Meta CEO Mark Zuckerberg recently said he’s planning to begin automating coding jobs with AI in 2025 – a move that could cost hundreds, if not thousands, of people their jobs.
Again, this wouldn’t be AGI replacing human coders. Zuckerberg was probably talking about virtual engineers – AI agents autonomous enough to perform complex tasks without human intervention.
AGI, however, will be even smarter and won’t need human supervision at all. What will we do then?
According to Ng, the threats of AGI are overly dramatized. To him, AI is a tool that can empower humans by giving them “all these AI agents working for them.”
“Intelligence is expensive. But with AI, it can become cheap and be given to everyone. When people talk about AI being dangerous, it sounds like saying your laptop can be dangerous. Absolutely – someone can use your laptop to do awful things, and someone can use AI to do the same,” said Ng.
“The other view is that AI is this sentient alien being with its own desires that can go rogue. But actually, every year our ability to control AI is improving. The safest way to make sure AI doesn’t do bad things is to fix it.”
Professor Bengio disagrees. We would like AI to be just a tool, he says, but we’ve been making a mistake in using human intelligence as the model for building artificial intelligence.
“Actually, what we really want from machines is not a new species, not a peer smarter than us. What we want is help in solving our problems. We have agency, we have our own goals,” said Bengio, citing new research showing emergent reasoning in the latest AI systems.
“Right now, science doesn’t know how to control machines that are just as smart as us. We’ll figure it out? But do we know what happens if we don’t figure it out? <...> A superhuman machine is not a laptop.”