
The OpenAI team tasked with protecting humanity is no more


In the summer of 2023, OpenAI created a “Superalignment” team whose goal was to steer and control future AI systems that could be so powerful they could lead to human extinction. Less than a year later, that team is dead.

OpenAI told Bloomberg that the company was “integrating the group more deeply across its research efforts to help the company achieve its safety goals.” But a series of tweets from Jan Leike, one of the team’s leaders who recently quit, revealed internal tensions between the safety team and the larger company.

In a statement on Friday, Leike said that the Superalignment team had been fighting for resources to get research done. “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” OpenAI did not immediately respond to a request for comment from Engadget.


Leike’s departure earlier this week came hours after OpenAI chief scientist Ilya Sutskever announced that he was leaving the company. Sutskever was not only one of the leads on the Superalignment team, but also helped co-found the company. Sutskever’s move came six months after he was involved in the decision to fire CEO Sam Altman over concerns that Altman hadn’t been “consistently candid” with the board. Altman’s all-too-brief ouster sparked an internal revolt within the company, with nearly 800 employees signing a letter in which they threatened to quit if Altman wasn’t reinstated. Five days later, Altman returned as OpenAI’s CEO after Sutskever signed a letter stating that he regretted his actions.

When it announced the creation of the Superalignment team, OpenAI said that it would dedicate 20 percent of its computing power over the next four years to solving the problem of controlling powerful AI systems of the future. “[Getting] this right is critical to achieve our mission,” the company wrote at the time. On X, Leike wrote that the Superalignment team was “struggling for compute and it was getting harder and harder” to get crucial research around AI safety done. “Over the past few months my team has been sailing against the wind,” he wrote, adding that he had reached “a breaking point” with OpenAI’s leadership over disagreements about the company’s core priorities.

Over the last few months, there have been more departures from the Superalignment team. In April, OpenAI fired two researchers, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information.

OpenAI told Bloomberg that its future safety efforts will be led by John Schulman, another co-founder, whose research focuses on large language models. Jakub Pachocki, a director who led the development of GPT-4 — one of OpenAI’s flagship large language models — would replace Sutskever as chief scientist.

Superalignment wasn’t the only team at OpenAI focused on AI safety. In October, the company announced a new “preparedness” team to stem potential “catastrophic risks” from AI systems, including cybersecurity issues and chemical, nuclear and biological threats.





