“Do not get distracted” says OpenAI’s Sam Altman, downplaying the risks of superintelligence


Sam Altman says that superintelligence is almost here and capable of producing “prosperity for all,” even as he sells us a vague technological future.

“In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents,” OpenAI CEO Sam Altman wrote in a recent post on his personal blog.

“It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.”


Altman paints the tech future in bright colors: a personal AI team of experts working together for each of us, AI tutors for children, and AI-boosted healthcare. With these possibilities, he projects “shared prosperity” for all.

“There are a lot of details we still have to figure out, but it’s a mistake to get distracted by any particular challenge. Deep learning works, and we will solve the remaining problems,” he assures.

What are the distractions?

However, as Altman admits in the post, the reality is far from perfect. He highlights the need to start working on “maximizing AI’s benefits while minimizing its harms,” such as AI’s effects on labor markets.

OpenAI’s current objective is to develop AGI (artificial general intelligence), a theoretical technology capable of matching or exceeding human intelligence across a wide range of tasks, an ambition that brings many ethical and security risks to the table.

In May, Altman testified before the Senate in the first of a series of hearings on artificial intelligence, during which he largely agreed with senators on the need to regulate increasingly powerful AI technology.

Critics of OpenAI argue that the discussion around regulating superintelligence is a rhetorical tactic. They believe Altman is using it to divert attention from the immediate issues caused by AI systems and to keep lawmakers and the public preoccupied with science fiction scenarios.

In June, five US senators expressed concerns regarding the safety of the company’s latest artificial intelligence (AI) model.


“OpenAI is now partnering with the US government and national security and defense agencies to develop cybersecurity tools to protect our nation’s critical infrastructure,” the senators wrote.

Earlier in the same month, anonymous whistleblowers from OpenAI filed a complaint with the Securities and Exchange Commission (SEC), asking the agency to investigate whether the company illegally restricted workers from communicating with regulators.

In May, the company received backlash for its restrictive offboarding policy, which forbids ex-employees from criticizing OpenAI; even acknowledging that such an NDA exists is a violation of the agreement.

Ilya Sutskever, OpenAI co-founder and former chief scientist, left the company in June and in September founded Safe Superintelligence, his own AI company.

Give us more energy, but where to get it from?

In his post, Altman prophesied astounding technological triumphs, such as “fixing the climate,” “establishing a space colony,” and the “discovery of all of physics.”

“If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant which requires lots of energy and chips. If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.”

In his rhetoric, he promises that nearly limitless intelligence and abundant energy will empower all people to generate great ideas and make them happen. In reality, the energy and ecological costs of AI are mounting, and a solution is still not in sight.

The AI boom and its demand for computational power are creating ecological problems, too. According to the Guardian, from 2020 to 2022, the real emissions from the company-owned data centers of Google, Microsoft, Meta, and Apple were likely about 662% (7.62 times) higher than officially reported.

In January 2024, at a Bloomberg event during the World Economic Forum in Davos, Altman said that the future of AI depends on a shift toward more climate-friendly energy sources, especially nuclear fusion and affordable solar power and storage.


"There's no way to get there without a breakthrough," he said. "It motivates us to go invest more in fusion."

While scientists have been working on fusion generators for the last three-quarters of a century and have made some important breakthroughs, the bottom line is simple: we are not there yet.