
OpenAI has lost nearly half of its AGI safety team, former researcher says

OpenAI has lost nearly half of the company’s team working on AI safety, according to one of its former governance researchers, Daniel Kokotajlo.

“It wasn’t like a coordinated thing. I think people quit individually,” Kokotajlo told Fortune in a report published Tuesday.

Kokotajlo, who left OpenAI in April 2024, said the maker of ChatGPT initially had about 30 people working on safety issues related to artificial general intelligence.

But a string of departures this year has whittled the safety team down to about 16 members, according to Kokotajlo.

“The people who are primarily focused on thinking about safety and preparing for AGI are increasingly marginalized,” Kokotajlo told Fortune.

Business Insider could not independently confirm Kokotajlo’s claims about OpenAI’s headcount. When reached for comment, an OpenAI spokesperson told Fortune that the company is “proud of our track record of delivering the most capable and safest AI systems” and believes in its “scientific approach to addressing risk.”

OpenAI, the spokesperson added, will “continue to engage with governments, civil society and other communities around the world” on issues related to AI risks and safety.

Earlier this month, the company’s co-founder and head of its alignment science efforts, John Schulman, said he was leaving OpenAI to join rival Anthropic.

Schulman said in an X post on August 5 that his decision was “a personal one” and was not “due to a lack of support for alignment research at OpenAI.”

Schulman’s departure comes just months after another co-founder, chief scientist Ilya Sutskever, announced his resignation from OpenAI in May. Sutskever launched his own artificial intelligence company, Safe Superintelligence Inc., in June.

Jan Leike, who co-led OpenAI’s superalignment team with Sutskever, left the company in May. Like Schulman, he now works at Anthropic.

Leike and Sutskever’s team was tasked with ensuring that future superintelligent AI systems would remain aligned with humanity’s interests.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote in an X post in May.

“But in recent years, safety culture and processes have taken a backseat to shiny products,” he added.

OpenAI did not immediately respond to a request for comment from Business Insider sent outside regular business hours.
