For Sam Altman, AGI is not enough: is “superintelligence” the next big thing?


Artificial general intelligence (AGI) is not here yet, but OpenAI boss Sam Altman is already dreaming about “superintelligence,” because he’s here for the “glorious future.”

For the uninitiated, AGI refers to AI with cognitive capabilities comparable to those of humans across a wide range of fields. “Artificial superintelligence,” meanwhile, refers to AI that has evolved even further than AGI, surpassing human intelligence by a significant margin.

Altman has now claimed that OpenAI already knows how to build AGI. And since that’s essentially a done deal (it’s not), the company is now turning its aim to “superintelligence.”


“Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity,” Altman wrote on his personal blog.

Previously, Altman said that “superintelligence” could be “a few thousand days” away, whereas AGI, defined by OpenAI as “highly autonomous systems that outperform humans at most economically valuable work,” is allegedly closer.

According to the OpenAI CEO, we may see the first AI agents “join the workforce” as soon as this year. In other words, the company might be about to take, by its own definition, the third step toward AGI.

As reported by Bloomberg back in July, OpenAI has come up with a set of five levels to track its progress toward building a supercapable AI.

Level one is conversational AI, which is already here. Level two, reasoning AI, is apparently coming in the near future, and level three will be autonomous AI – systems known as agents operating on a user’s behalf.

OpenAI hopes to create “innovator” AI systems next, meaning that they would be able to develop innovations independently. The fifth and final stage would involve AI capable of performing the work of an entire organization without a human in sight.

We’ll see, of course. Altman is very optimistic, though. In the blog post, he wrote: “This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright – we’ve been there before and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see.”


What’s worrying is that OpenAI previously – and not that long ago – said that a successful transition to a world with “superintelligence” was “far from guaranteed” and openly admitted: “We don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

What’s even more concerning is that despite concluding that “humans won’t be able to reliably supervise AI systems much smarter than us,” OpenAI then disbanded teams focused on AI safety.