Ilya Sutskever, co-founder and former chief scientist of OpenAI, and former OpenAI engineer Daniel Levy have teamed up with Daniel Gross, an investor and former partner at startup accelerator Y Combinator, to establish Safe Superintelligence, Inc. (SSI). The company's name plainly states both its main objective and its sole intended product.
SSI, a United States-based company with offices in Palo Alto and Tel Aviv, aims to advance artificial intelligence (AI) by pursuing safety and capabilities in tandem, as the three founders stated in an online announcement on June 19. Sutskever and Gross had both publicly voiced concerns about AI safety before joining forces to create SSI.
Sutskever departed OpenAI on May 14, months after playing a role in the board's brief ouster of CEO Sam Altman, and Levy left shortly after. Both were part of a wave of researchers who exited OpenAI around the same time. Sutskever, together with Jan Leike, had led OpenAI's Superalignment team, established in July 2023 to tackle the challenge of controlling AI systems smarter than humans, often referred to as artificial general intelligence (AGI).
After the departure of these key researchers, OpenAI disbanded the Superalignment team, prompting renewed debate over AI safety measures within the tech community. Notable figures such as Ethereum co-founder Vitalik Buterin, Tesla CEO Elon Musk, and Apple co-founder Steve Wozniak have voiced concerns about the risks of AGI and urged caution in the development of advanced AI systems.
SSI is currently hiring engineers and researchers to pursue its mission. The company's dual focus on AI safety and capability reflects growing concern within the tech industry about the future implications of artificial intelligence.