OpenAI prioritising 'shiny products' over safety, says departing researcher

Jan Leike urges OpenAI to become a safety-first AGI company


Jan Leike, a leading researcher who resigned from OpenAI last week, has expressed deep concerns about the company's priorities, stating that OpenAI's "safety culture and processes have taken a backseat to shiny products."

Before quitting, Leike led OpenAI's "Superalignment" team, which focused on ensuring that advanced artificial general intelligence (AGI) remains safe and beneficial.

On Friday, Leike took to social media platform X to express his concerns about the company's direction.

He criticised the alleged neglect of "safety culture and processes" in favour of developing "shiny products."

"OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products," he said.

"We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all of humanity."

Leike acknowledged long-running disagreements with OpenAI's leadership, which he said had "finally reached a breaking point."

"I joined because I thought OpenAI would be the best place in the world to do this research. However, I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

Leike said OpenAI should focus more on critical issues such as safety, social impact, confidentiality and security for its next-generation models.

"I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there."

He also said his team had struggled to secure the computing resources needed for its research.

"Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done."

Leike's resignation coincided with the departure of Ilya Sutskever, OpenAI's chief scientist and co-leader of the Superalignment team.

Sutskever, who co-founded OpenAI and played a key role in projects such as ChatGPT, left the company last week.

Last year, Sutskever was involved in a dramatic boardroom revolt that temporarily ousted CEO Sam Altman, only for Altman to be swiftly reinstated.

Despite Sutskever's public statements of regret and support for Altman's return, he had been largely absent from OpenAI's activities since then, as other members of the policy, alignment and safety teams also departed.

OpenAI disbands Superalignment team

Following the departures of the two key figures, OpenAI last week disbanded its Superalignment team, which had been dedicated to addressing the long-term risks associated with AGI.

The company said the decision was part of an ongoing internal restructuring. Despite the team's disbandment, OpenAI said research on long-term AI risks would continue under John Schulman.

Schulman also leads a team dedicated to fine-tuning AI models after training.

In a post on X, Altman acknowledged Leike's contributions and underscored the company's commitment to AI safety. He pledged further efforts toward that goal and promised a fuller explanation in the coming days.

"I'm very grateful to @janleike for his great contributions to OpenAI's alignment research and safety culture, and I am really sad that he is leaving. He's right we have a lot more work to do; we are determined to do it. I will post my longer version in the next couple of days," Altman wrote.