OpenAI's team dedicated to addressing the existential risks of AI has effectively dissolved: its members have either resigned or reportedly been absorbed into other research groups within the company.
Shortly after Ilya Sutskever, OpenAI's chief scientist and co-founder, announced his departure, Jan Leike, the former DeepMind researcher who co-led OpenAI's superalignment team, revealed on X (formerly Twitter) that he too had resigned.
Leike cited concerns about the company's priorities as the reason for his departure, arguing that OpenAI is more focused on product development than on AI safety.
In a series of posts on X, Leike criticized OpenAI’s leadership for their choice of core priorities and urged them to prioritize safety and preparedness as they continue to develop artificial general intelligence (AGI). AGI refers to a hypothetical form of artificial intelligence that can perform tasks as well as or better than humans.
Leike, who spent three years at OpenAI, faulted the company for prioritizing flashy product development over fostering a strong AI safety culture and processes. He emphasized the urgent need for resources, particularly computing power, to support his team's safety research, which he believed was being neglected.
In July 2023, OpenAI established a new research team to prepare for the emergence of advanced AI that could surpass its creators in intelligence. Sutskever was appointed as the co-lead of this team, which was allocated 20% of OpenAI’s computational resources.
In response to the recent resignations, OpenAI has decided to dissolve the “Superalignment” team and integrate its functions into other research projects within the organization. This decision is said to be a result of ongoing internal restructuring, which was initiated following the governance crisis in November 2023.
Sutskever was involved in an effort that temporarily removed Sam Altman as CEO of OpenAI in November 2023, before Altman was later reinstated to the role due to employee backlash.
According to The Information, Sutskever informed employees that the board’s decision to remove Altman was a necessary step to ensure that OpenAI develops AGI for the benefit of all of humanity. As one of the six board members, Sutskever emphasized the board’s commitment to aligning OpenAI’s goals with the greater good.