A boardroom drama has, over five intense days, transfixed Silicon Valley and exposed the power struggles at OpenAI. Here’s our attempt to figure out what exactly happened and why.
How did it all begin?
First, Sam Altman, the CEO of OpenAI, was unexpectedly fired by the company’s board last Friday. Details are scarce, but the board’s statement said Altman was sacked over a failure to be “candid in his communications.”
The precise reason for his ousting is still not clear. Reuters reported on Wednesday that, ahead of Altman’s four days in exile, several staff researchers had sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity.
Microsoft, OpenAI’s biggest investor with a reported 49% stake in its for-profit arm, was said to be furious and pushed over the weekend for the decision to be reversed.
The OpenAI board resisted, so on Monday Microsoft said it had hired Altman and OpenAI’s president, Greg Brockman, to lead a new AI research lab within the corporation. Analysts interpreted this as a shrewd move, one giving Altman a de facto leadership role over his former employer.
Around the same time on Monday, nearly all of OpenAI’s 750-strong workforce threatened to walk out, most likely to follow Altman to Microsoft, unless the board reinstated Altman and then resigned immediately.
Another letter is also circulating, allegedly written by former OpenAI employees. It criticizes Altman and details what it calls a disturbing pattern of “deceit and manipulation.” The letter was sent to Elon Musk, who shared it on X.
For a day, nothing happened, at least in public. But then OpenAI announced that Altman was to return as the CEO of the company.
The interim board is already in place: it’s led by Bret Taylor, the former co-CEO of the software firm Salesforce, joined by Larry Summers, the former US Treasury Secretary, and Adam D’Angelo, the chief executive of Quora.
Interestingly, Altman is not on the overhauled board, and the board has agreed to an independent investigation that will examine all aspects of recent events, including Altman’s role.
What could have happened if Altman hadn’t been reinstated?
Well, just a few days ago, Microsoft appeared to have gained substantially in talent and potential for innovation. The real victory would have depended on how effectively it integrated the influx of OpenAI expertise and navigated the associated risks, but people in the industry were already calling it a masterful coup by Microsoft CEO Satya Nadella.
Struck Capital, a venture capital firm, said: “Microsoft is in the driver’s seat because they’ve essentially acquired all of OpenAI’s value for essentially zero… Now, they’ve got Sam, and now they’re not beholden to a non-profit status.”
Alex Papadopoulos Korfiatis, senior machine learning research scientist at Genie AI, also told Cybernews that Microsoft getting the OpenAI team, including Altman and Brockman, would’ve been a big win.
“It was not only about getting top talent but also saving money on hiring costs and avoiding legal issues such as antitrust which might come with acquiring OpenAI outright – which has been suggested,” said Korfiatis.
OpenAI, on the other hand, would’ve faced a period of uncertainty and transition. The company's direction and stability in the wake of these changes would’ve been unclear.
After all, OpenAI has received billions of dollars in investment from Microsoft, including access to its vast computing resources. The astronomical cost of the computing power that cutting-edge AI systems require has been a major barrier for start-ups trying to compete with Big Tech.
The instability would also have eroded investor confidence, as well as OpenAI’s attractiveness as a collaborative partner.
Why was the decision reversed?
Pressure from investors, from Microsoft, from Altman himself, and, of course, from the rebelling OpenAI employees just might have worked.
Or maybe everyone involved realized that Altman’s jump to Microsoft could have effectively resulted in the six-month development pause that some AI leaders sought this spring.
“Whatever happens at Microsoft, it will take at least six months for onboarding and ramp-up – and on the OpenAI side, it will take at least that long or more to rehire and recover,” Deb Raji, an AI researcher and fellow at Mozilla, wrote. That was, of course, before the news of Altman’s return to OpenAI broke.
Surely, there would have been bumps ahead. OpenAI is working on GPT-5, and Microsoft simply couldn’t have used that knowledge to build its own version, because OpenAI still controls the weights and the actual code behind ChatGPT. In other words, Microsoft and its new AI research lab would have needed to start from scratch.
“These models are big and require months of training time, even for a company with immense hardware resources like Microsoft, but they also require massive amounts of training data. Gathering and cleaning up these datasets, especially the Reinforcement learning from human feedback (RLHF) data, takes a lot of time and effort,” Korfiatis told Cybernews.
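To make that data-wrangling point a little more concrete, here is a minimal, purely illustrative Python sketch of the kind of preference records RLHF relies on and the basic cleaning they need. The record format, field names, and filter rules are our own assumptions for illustration, not a description of OpenAI’s actual pipeline.

```python
# Illustrative sketch only: a toy RLHF preference record and a basic cleaning pass.
# The field names and filtering rules are assumptions, not any real pipeline.

from dataclasses import dataclass


@dataclass
class PreferencePair:
    prompt: str    # the prompt shown to human labelers
    chosen: str    # the response the labeler preferred
    rejected: str  # the response the labeler ranked lower


def clean(pairs: list[PreferencePair]) -> list[PreferencePair]:
    """Drop obviously unusable records: empty fields, identical answers,
    and exact duplicates - the sort of filtering any preference dataset needs."""
    seen = set()
    kept = []
    for p in pairs:
        if not p.prompt.strip() or not p.chosen.strip() or not p.rejected.strip():
            continue  # missing text
        if p.chosen.strip() == p.rejected.strip():
            continue  # no real preference signal
        key = (p.prompt.strip(), p.chosen.strip(), p.rejected.strip())
        if key in seen:
            continue  # exact duplicate
        seen.add(key)
        kept.append(p)
    return kept


if __name__ == "__main__":
    raw = [
        PreferencePair("What is RLHF?", "A fine-tuning method guided by human feedback.", "idk"),
        PreferencePair("What is RLHF?", "A fine-tuning method guided by human feedback.", "idk"),
        PreferencePair("", "orphaned answer", "another answer"),
    ]
    print(f"{len(raw)} raw records -> {len(clean(raw))} usable records")
```

Even this toy filter hints at the effort involved: real pipelines must also deal with labeler disagreement, near-duplicates, and unsafe content across millions of records.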
A reputational risk was also lurking. Microsoft, having invested billions into OpenAI, has still tried to keep the technology at arm’s length and insulate itself from embarrassing AI mistakes such as hallucinations.
After Meta, Facebook’s parent company, released Galactica, its science-focused AI model, the tool soon started to fabricate research. The public response was fierce enough for Meta to take it down.
Why did the drama need to happen at all?
We also need to talk about the manner in which these events unfolded. It sounds like Altman’s firing came as a surprise, but that should never be the case, because a board should be actively managing its CEO.
That means setting clear expectations and giving actionable feedback on whether the CEO is meeting them. If all of that is being done, there is no reason for a surprise firing.
Now that Altman is back and the board has been reshuffled, one could, of course, claim that it’s business as usual. We doubt it.
Some background is needed here: let’s recall that OpenAI started out in 2015 as a nonprofit with the goal of making sure that AI doesn’t wipe out humans.
The co-founders said they would not be driven by commercial incentives and would operate more like a think tank or a research facility. OpenAI’s charter still states that its “primary fiduciary duty is to humanity,” not investors.
Altman was presumably content with this at first, but in 2019 he transformed the AI lab into a for-profit company controlled by the nonprofit and its board. In essence, the board sits atop the nonprofit, which holds a controlling stake in OpenAI’s for-profit side and has final say over its activities, investments, and overall direction.
The two tribes managed to coexist for years, but the release of ChatGPT changed everything, and the pressure to commercialize clashed with the firm’s stated mission.
What’s next?
Now, it seems the commercialization camp has won this battle – OpenAI will undoubtedly keep releasing new products and growing in value, with its valuation expected to exceed $80 billion after a planned sale of employee shares.
Still, much is yet to be found out. It remains to be seen whether the company, originally meant to provide a more transparent alternative to Big Tech, will abandon its mission and start behaving like a Big Tech company itself.
There are now ideas floating around that the two sides of OpenAI should be governed by two separate boards, for example.
Altman will probably continue to wear his Oppenheimer mask, but the mission to guard humanity from the dangers of AI, and especially AGI (artificial general intelligence), is taking a back seat, we imagine.
Plus, it’s still unclear what Altman did to deserve such a public ousting. He might actually be a deeply polarizing figure: before OpenAI, his mentor asked him to leave the start-up incubator Y Combinator after allegedly concluding that Altman had put his own interests ahead of the organization’s, the Washington Post reported this week.
Besides, although Altman’s Friday removal has been mostly attributed to an ideological battle between safety concerns and commercial interests, a person familiar with the board’s proceedings told The New York Times that the group’s vote was rooted in worries Altman was trying to avoid any checks on his power at the company.
According to this source, that’s a trait evidenced by Altman’s unwillingness to entertain any board makeup that wasn’t heavily skewed in his favor.
The New York Times also reported this week that the previous OpenAI board had concerns about some of Altman’s recent efforts to raise funds for personal ventures, such as a drug development start-up, at the same time as he was raising money for OpenAI.
Another message now being shared online seems worrying as well: an anonymous OpenAI worker allegedly told a journalist they felt pressured by “early employees” to sign a letter demanding the old board’s resignation.
Either way, a lot remains unresolved. OpenAI’s unconventional corporate structure, which resulted in the company’s board having no investor representation, hasn’t changed, for instance – so it might be a bit premature to claim that “AI belongs to capitalists now.”