No product, no revenue – but a $30B valuation for Sutskever’s AI startup


OpenAI co-founder Ilya Sutskever is far less outspoken than his former colleague Sam Altman. Nevertheless, Sutskever is now also making waves in the AI world – even with no product on offer.

Altman once called Sutskever “one of the greatest minds of this generation,” and indeed, they were great friends for years, co-founding OpenAI, the AI startup behind ChatGPT.

Sutskever worked as OpenAI’s chief scientist and co-chaired the company’s “superalignment” team, which was focused on ensuring AI stayed aligned with human values.


However, it all changed in late 2023 when Altman, OpenAI’s CEO, was briefly ousted by the company board led by Sutskever, who said at the time that firing Altman was “the board doing its duty.”

The move against Altman obviously didn’t work. He was reinstated days later while the board was reorganized, and Sutskever – who spent nearly a decade working at OpenAI – officially left the high-flying firm in May 2024.

It’s not that he’s been knocked out of the great AI race, though. Sure, Sutskever is naturally less of a public figure than Altman and doesn’t give a lot of interviews (one exception is his insightful contribution to the BBC’s Storyville documentary iHuman), but he’s been very busy lately.

In June, he co-founded a new startup, Safe Superintelligence (SSI), dedicated to developing AI that surpasses human intelligence without endangering it. Now, Bloomberg reports that Sutskever is raising more than $1 billion for the startup, which is valued at over $30 billion.

The new valuation would mean a significant increase from SSI’s previous funding round in September, when it was valued at $5 billion. It would also make SSI one of the most valuable private AI companies in the world.

However, it’s quite extraordinary that SSI is still very much a mystery. Unlike, say, Anthropic (valued at around $60 billion) or Elon Musk’s xAI (last valued at $51 billion), Sutskever’s startup doesn’t have any product ready for market whatsoever. There’s also no revenue, and the company has no logo.

Not much is known about the company in general, actually. On its website, a statement – which is the only piece of content there – reads: “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”

It might be that investors simply trust Sutskever, one of the earliest believers in neural networks, to build AI models that are powerful and safety-oriented at the same time. He was, after all, the scientific brain behind OpenAI’s commercial success.


However, Sutskever is not one of those doomsayers who consider AI dangerous to humanity on principle. In fact, he’s a big fan of artificial general intelligence (AGI).

In that BBC documentary iHuman, Sutskever declared that AGI models will “solve all the problems that we have today” before warning that they will also present “the potential to create infinitely stable dictatorships.”

In other words, he seems to be striving for a carefully balanced approach, and investors must have believed in the idea behind SSI – that we shouldn’t be afraid of what machines might do to humans.

On the contrary, we should be wary of what humans might do with machines to other humans.

That’s probably why Sutskever is in no rush. SSI doesn’t even intend to sell AI products in the near future, and the scientist told Bloomberg in June that the company’s first product will actually be safe superintelligence. Nothing will be done before reaching that particular goal.

“It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race,” said Sutskever.