OpenAI's Superalignment team, tasked with developing methods to control superintelligent AI systems, has reportedly been starved of resources, leading to several resignations, including that of co-lead Jan Leike. The team was promised 20% of the company's compute, but requests for even a fraction of that were often denied. Leike, a former DeepMind researcher, said he had disagreed with OpenAI's leadership over the company's core priorities, arguing for greater focus on preparing for next-generation models, security, and safety.
The Superalignment team was formed in July last year with the ambitious goal of solving the core technical challenges of controlling superintelligent AI within four years. Despite publishing safety research and allocating millions of dollars in grants to outside researchers, the team struggled to secure upfront investment as product launches consumed more of OpenAI leadership's attention.
Internal conflicts, including a dispute between Ilya Sutskever, co-founder of OpenAI and co-lead of the Superalignment team, and OpenAI CEO Sam Altman, further complicated the team's situation. Following the departures of Leike and Sutskever, another OpenAI co-founder, John Schulman, has taken over the Superalignment team's work. However, there will no longer be a dedicated team; instead, the effort will be carried on by a loosely associated group of researchers spread across the company. This restructuring has raised concerns that OpenAI's AI development may be less safety-focused than it could be.