Ilya Sutskever, OpenAI’s chief scientist and one of the board members who voted to remove Sam Altman as the company’s CEO last year, has announced his departure from the influential firm.
The move against Altman didn’t stick: he was reinstated days later, while Sutskever and the other board members who had voted to oust him left the board.
Officially, Sutskever remained at the company for the past few months, but the San Francisco tech crowd had long known that the scientist, who helped OpenAI build the AI chatbot ChatGPT, was no longer part of the decision-making process. Now, it’s over for good.
“After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial,” wrote Sutskever on X.
He also expressed confidence in the leadership of Altman, fellow co-founder Greg Brockman, chief technology officer Mira Murati, and Jakub Pachocki, who is taking over the chief scientist role.
AGI, or artificial general intelligence, is a term describing AI systems that are smarter than humans. Sutskever has consistently praised the potential of AI models, but he has also reportedly worried that their development might create a rogue superintelligence.
For instance, Sutskever appeared in the documentary iHuman and declared that AGI models will “solve all the problems that we have today,” before warning that they will also present “the potential to create infinitely stable dictatorships.”
In the same film, Sutskever said that when machines finally become smarter than us, they will not want to kill humans; they will simply prioritize their own survival. It’s much like the way most of humanity treats animals: we’re fond of them, but we don’t ask their permission for anything.
And right before the release of ChatGPT in 2022, Sutskever, who now says he is excited about his next “personally meaningful” project, made waves by tweeting that “it may be that today’s large neural networks are slightly conscious.”
Altman thanked Sutskever for his work at OpenAI on X, calling him “easily one of the greatest minds of our generation” and “a dear friend.”
However, OpenAI’s reshuffled leadership has in fact been moving against researchers close to Sutskever. A month ago, the company fired two key researchers working on AI safety for allegedly leaking information.