Tech leaders on AGI: when will it change the world?


From less than a year away to more than a decade, leading figures in AI offer differing timelines for when artificial general intelligence (AGI) will emerge – and differing views on what exactly it means.

There’s one thing that thought leaders at the forefront of this technology agree on: AGI – or artificial superintelligence, as it is increasingly called – will bring about an epochal change unmatched by any modern invention, including the internet.

What they disagree on is what it will take to achieve AGI – a still hypothetical form of machine intelligence that matches or outperforms humans in its cognitive capabilities – and when it will actually happen.

Optimists, like OpenAI’s Sam Altman, argue it could arrive as early as this year, although in his case there is an important semantic wrinkle to consider: declaring AGI achieved could get his firm out of a complicated partnership agreement with Microsoft.

Others, including Google DeepMind’s Demis Hassabis, say that several major breakthroughs are still needed to achieve AGI and this could take at least a decade. Running out of data to train AI models is one obstacle that could delay the process.

There is also disagreement about what the technology means, with as many interpretations of AGI as there are companies working on it – and those definitions largely reflect their own goals.

We have collected some of the most recent statements from the leaders of major AI companies, reflecting their differing timelines, definitions, and expectations for AGI.

Sam Altman

Sam Altman. Image by Eugene Gologursky/Getty Images

OpenAI CEO Sam Altman teased the arrival of AGI as soon as 2025 in an interview with venture capitalist Garry Tan, offering one of the most optimistic timelines while also repeatedly downplaying what it would actually mean.

In an interview with Andrew Ross Sorkin at The New York Times DealBook Summit in December, Altman said that “we will hit AGI sooner than most people in the world think and it will matter much less.”

Altman said, “AGI can get built, the world mostly goes on in mostly the same way, things grow faster, but then there is a long continuation from what we call AGI to what we call superintelligence.”

Just two years ago, OpenAI said AGI “could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.”

It appears the company still believes that but has replaced AGI with the phrase “superintelligence,” which Altman recently said might be achieved “in a few thousand days.” That could mean anything from five to eight years, more in line with what others in the industry are saying.

Under the terms of its partnership with Microsoft, OpenAI may get out of the profit-sharing agreement once AGI is achieved, according to The Information. The two companies reportedly defined AGI as a system that can generate $100 billion in profits – and it will be up to OpenAI’s board to decide whether AGI is achieved.

Elon Musk

Elon Musk. Image by Anna Moneymaker/Getty Images

Tesla and X chief Elon Musk, who co-founded OpenAI, said in an interview with Norwegian hedge fund manager Nicolai Tangen last year that AGI “smarter than the smartest human” will be available in 2025 or by 2026.

Musk, who has a history of setting overly optimistic timelines for his projects, said that “AI is the fastest advancing technology I’ve seen of any kind,” a view shared by many of his peers and other experts.

“AI hardware and computers that are coming online dedicated to AI are increasing by a factor of 10 every year, if not every 6-9 months. Many, many software breakthroughs are demonstrated on the curve,” Musk said.

One limiting factor, the billionaire said, was the availability of electricity, while a shortage of advanced chips was also hampering the training of more advanced large language models.

In 2023, Musk founded his own AI startup, xAI, with the goal of “understanding the true nature of the universe.” However, he also warned about the possibility of a “Terminator future” of superintelligent machines capable of evading human control, which also hints at the distinction between AGI and superintelligence.

Musk is also involved in a legal battle with OpenAI over the company’s decision to go for-profit. Emails released as part of the lawsuit revealed Musk was concerned that Demis Hassabis, the head of Google’s AI division DeepMind, was going to create an “AGI dictatorship.”

This was likely one of the reasons OpenAI was founded, and the sentiment was shared by Altman, who wrote in 2015 that “it would be good for someone other than Google to do it first.”

However, OpenAI’s other co-founders, Greg Brockman and Ilya Sutskever, had serious concerns about both men, emails revealed, according to Transformer. They wrote to Altman that they “haven't been able to fully trust [his] judgments throughout this process,” and told Musk that “you’ve shown to us that absolute control is extremely important to you.”

Demis Hassabis

Demis Hassabis. Image by Dan Kitwood/Getty Images

Demis Hassabis, chief executive of Google DeepMind and a newly minted Nobel laureate in Chemistry, offers a more cautious perspective, arguing that AGI is still at least a decade away.

In an October interview with The Times, Hassabis said that “there are still two or three big innovations needed from here until we get to AGI” before what he described as an “epoch-defining” moment.

One of these big breakthroughs will be the emergence of agent-based AI systems able to complete the tasks and goals expected of a “useful” digital assistant, such as planning a holiday or booking event tickets.

This will require an AI that is capable of acting and reasoning, as well as having better memory and greater personalization. Hassabis reiterated this timeline at an AI science forum organized by Google DeepMind and the Royal Society in London later that year.

According to Hassabis, AGI will be “unbelievably impactful” and “incredibly positive for the world.” However, he also notes that “a lot of sort of crazy hype” surrounds both those who express alarm over the technology and those who downplay its potential effects.

Unlike OpenAI’s Altman, Hassabis has kept a broadly consistent AGI timeline, though his latest comments suggest he is leaning toward the later end of it. In 2023, he told The Wall Street Journal that AGI “could be just a few years, maybe within a decade away.”

Yann LeCun

yann_lecun_0113
Yann LeCun. Image by Benjamin Girette/Bloomberg/Getty Images

Yann LeCun, Meta’s chief AI scientist and a winner of the prestigious Turing Award, also leans toward the conservative side regarding the emergence of AGI, saying that human-level artificial intelligence is “quite possible within a decade.”

In a post on X almost a year ago, LeCun argued that “there is no question” AI will reach and surpass human intelligence in all domains, but he reiterated on several occasions over the following months that it won’t happen in a year or two.

Echoing Google DeepMind’s Hassabis, LeCun said that AGI would be achieved when machines “understand the world” and “have intuition, have common sense,” allowing them to reason and plan at the same level as humans.

“Despite what you might have heard from some of the most enthusiastic people, current AI systems are not capable of any of this,” he said during a talk at the Hudson Forum in October.

According to LeCun, creating universal digital assistants is a global effort, and they cannot be produced “by a company on the West Coast or the East Coast of the US.”

He also said that open-source AI is “not just a good idea; it’s necessary for cultural diversity and perhaps even for the preservation of democracy.” Meta, along with Google and Musk’s xAI, is among the companies offering some of their AI technologies as open source – a list that notably does not include OpenAI.

Dario Amodei

Dario Amodei. Image by Chesnot/Getty Images

Anthropic CEO Dario Amodei is firmly in the optimists’ camp in terms of how soon AGI is possible – even though, in an October blog post, he said he dislikes the term and prefers to call it “powerful AI.”

In a recent podcast with Lex Fridman, the head of the company behind Claude predicted that “we’ll get there by 2026 or 2027,” all the while acknowledging that he is “not at all confident” about this timeline.

He said that “lots of things could derail it,” including the risk of running out of data or “maybe Taiwan gets blown up or something.”

Nonetheless, Amodei said that any delay is likely to be mild and that “we are rapidly running out of truly convincing blockers, truly compelling reasons why this will not happen in the next few years.”

“It could be such a beautiful future if we could just make it happen, if we could get the landmines out of the way,” he said.

Expanding on the topic in his blog post, Amodei said that a powerful AI is a system that is smarter than a Nobel Prize winner across most fields, including biology, programming, math, engineering, and writing.

He envisions such a system as being able to prove unsolved mathematical theorems, write extremely good novels, and write difficult codebases from scratch.

For all the good that AGI could bring to humanity, Amodei shares some of his peers’ concerns about its risks, arguing that “most people are underestimating” how bad it could be. He also calls out the “grandiosity” in the public narrative surrounding the technology.

“I think it’s dangerous to view companies as unilaterally shaping the world and dangerous to view practical technological goals in essentially religious terms,” he said.