OpenAI has effectively disbanded a team focused on ensuring the safety of possible future ultra-capable artificial intelligence systems following the departure of two of the group’s leaders, including OpenAI co-founder and chief scientist Ilya Sutskever.
Rather than maintaining the so-called superalignment team as a stand-alone entity, OpenAI is now integrating the group more deeply into its broader research efforts to help the company meet its safety goals, Bloomberg News reported. The team was formed less than a year ago under the leadership of Sutskever and Jan Leike, another OpenAI veteran.
The decision to rethink the team comes after a string of recent departures from OpenAI that have revived questions about the company’s approach to balancing speed and safety in developing its AI products. Sutskever, a widely respected researcher, announced Tuesday that he was leaving OpenAI after previously clashing with CEO Sam Altman over how quickly to develop artificial intelligence.
Shortly thereafter, Leike announced his own departure in a terse social media post. “I resigned,” he wrote. For Leike, Sutskever’s exit was the last straw after disagreements with the company, according to a person familiar with the situation who asked not to be named in order to discuss private conversations.
In a statement Friday, Leike said the superalignment team had been struggling for resources. “Over the past few months my team has been sailing against the wind,” Leike wrote on X. “Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.” A few hours later, Altman responded to Leike’s posts. “He’s right, we have a lot more to do,” Altman wrote on X. “We are committed to doing it.”
Other members of the superalignment team have also left the company in recent months. Leopold Aschenbrenner and Pavel Izmailov were let go by OpenAI; their departures were previously reported. According to a person familiar with the matter, Izmailov had been moved off the team before his departure. Aschenbrenner and Izmailov did not respond to requests for comment.
John Schulman, an OpenAI co-founder whose research focuses on large language models, will lead OpenAI’s alignment work going forward, according to the company. Separately, OpenAI said in a blog post that it had named research director Jakub Pachocki to take over Sutskever’s role as chief scientist.
“I am confident he will lead us to make rapid and safe progress on our mission to ensure the benefits of AGI for all,” Altman said in a statement Tuesday about Pachocki’s appointment. AGI, or artificial general intelligence, refers to AI that can perform as well as or better than humans at most tasks. AGI doesn’t yet exist, but creating it is part of the company’s mission.
OpenAI also has employees dedicated to AI safety embedded in teams across the company, as well as standalone teams focused on safety. One is a preparedness team, launched last October, whose task is to analyze and try to head off potential “catastrophic risks” posed by AI systems.
The superalignment team was tasked with heading off the longest-term threats. OpenAI announced the creation of the superalignment team last July, saying it would focus on controlling and ensuring the safety of future artificial intelligence software that is smarter than humans – something the company has long identified as a technological goal.
In the announcement, OpenAI stated that it would devote 20% of its computing power at the time to the team’s work.
In November, Sutskever was one of several OpenAI board members who moved to fire Altman, a decision that sparked a five-day uproar at the company.
OpenAI President Greg Brockman quit in protest, investors revolted, and within days nearly all of the startup’s roughly 770 employees signed a letter threatening to quit unless Altman was brought back. In a remarkable reversal, Sutskever also signed the letter and said he regretted his participation in Altman’s ouster.
Altman was reinstated shortly thereafter.
In the months following Altman’s departure and return, Sutskever largely disappeared from public view, prompting speculation about his continued role at the company. Sutskever also stopped working at OpenAI’s San Francisco office, according to a person familiar with the matter.
In his statement, Leike said his departure followed a series of disagreements with OpenAI over the company’s “core priorities,” which he believed did not place enough emphasis on safety measures related to creating AI that may be more capable than humans.
In a post announcing his departure this week, Sutskever said he was “confident” that OpenAI would develop an AGI “that is both safe and beneficial” under its current leadership, including Altman.