Ilya Sutskever, co-founder and former chief scientist of OpenAI, has announced his new artificial intelligence company, Safe Superintelligence. The announcement was made on X (formerly Twitter), where Sutskever outlined the company's mission to build safe AI at a time when major tech companies are vying for dominance in the generative AI sector.
Safe Superintelligence has been positioned as an American firm with offices in Palo Alto and Tel Aviv, according to Sutskever.
In his announcement, Sutskever emphasized the company's focus on safety and security in AI development, stressing that Safe Superintelligence's singular focus allows it to avoid distraction by management overhead or product cycles. He further noted that the business model is designed to insulate safety, security, and progress from short-term commercial pressures, ensuring a steadfast dedication to the company's goals.
Joining Sutskever in this venture are Daniel Levy, a former researcher at OpenAI, and Daniel Gross, co-founder of Cue and a former AI lead at Apple. This team of co-founders brings together extensive experience and expertise in the AI field, aiming to drive forward their vision of a safe AI future.
Sutskever's departure from OpenAI in May followed his significant involvement in the dramatic events surrounding the firing and rehiring of CEO Sam Altman the previous November. After Altman's return, Sutskever was removed from the company's board, paving the way for his new endeavor with Safe Superintelligence.