OpenAI forced on the defensive over safety and radical nondisclosure policy


OpenAI has just released its newest, more human-like ChatGPT update, but everyone is more interested in the company’s extremely restrictive offboarding policy after former employees broke their silence.

When Ilya Sutskever announced his resignation from OpenAI, the firm’s former chief scientist chose a traditional message – he essentially wished the company well and said he was confident that Sam Altman, the startup’s CEO, would succeed in building safe artificial general intelligence (AGI).

It’s a bit too nice for a guy who was allegedly deeply involved in last year’s drama, when Altman was temporarily fired, and who has been mostly absent from the company ever since.


Jan Leike, Sutskever’s colleague in OpenAI’s superalignment team, a group of employees focused on researching future risks of rogue AI, was much blunter. Leike simply posted: “I resigned.”

What’s going on? Clearly, both Sutskever and Leike – the latter was quite enthusiastic about his team’s progress just a few weeks ago – are unhappy and worried but feel they cannot say more.

To be fair, Leike followed up last Friday, saying: “We urgently need to figure out how to steer and control AI systems much smarter than us.” He explained that he had been disagreeing with OpenAI’s leadership about the company’s core priorities “for quite some time.”

“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products,” said Leike.

As if confirming that, after two key departures, the superalignment team was now dead in the water, OpenAI disbanded it that same Friday. The team had only been announced last year.

And that would probably have been it, folks. OpenAI doesn’t usually comment on events at the company that aren't related to its shiny new products, such as the new GPT-4o, and, as we’ve seen, former employees are mostly keeping quiet. Leike is a rare exception.

However, there’s a reason for such terseness, it turns out. Kelsey Piper, a journalist at Vox, said she had seen “the extremely restrictive offboarding agreement” that former OpenAI employees are subject to.

Sam Altman. Image by Shutterstock.

In short, the provisions forbid them – forever – from criticizing OpenAI, and even acknowledging that such an NDA exists is a violation of the agreement.

Probably most importantly, if an employee violates the agreement or refuses to sign it, they can lose all the vested equity they earned during their time at the firm – potentially millions of dollars.

One former OpenAI employee, Daniel Kokotajlo, confirmed this, saying he had to surrender a huge sum of money in order to quit without signing the document.

Altman confirmed in a tweet on Saturday evening that such a provision did exist but said: “We have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement).”

Further scrambling to reassure the world, OpenAI’s president Greg Brockman also posted on X to say that the firm has “helped pioneer the science of assessing AI systems for catastrophic risks.”

But more bad, or at least reputation-damaging, news followed when Sonia Joseph, a machine learning researcher, claimed to know of “consensual non-consent” sex parties that she said took place within the AGI enthusiast community in Silicon Valley – a community OpenAI is undoubtedly part of.

“I have seen some troubling things around social circles of early OpenAI employees, their friends, and adjacent entrepreneurs, which I have not previously spoken about publicly,” said Joseph.