OpenAI disbands another team focused on advanced AGI safety readiness
OpenAI has shut down its AGI Readiness Team, a group responsible for developing safeguards around advanced artificial intelligence systems.
The team focused on the safety of artificial general intelligence (AGI), which OpenAI defines in economic terms as AI systems capable of operating autonomously and automating a wide range of human tasks. The team members will be reassigned to other departments within the company.
Must be an almost impossible task to build up rulesets around AI so it won't be used for "evil".
It probably needs to be conscious for that...
There are guardrails going into place.
A big piece of the equation, in my view, is open source. That way it is in the open and everyone can see what is being done.
The danger is having to trust the likes of Sam Altman, who is seeking regulatory capture so he is guaranteed to succeed.
Agree with you there
Miles Brundage, OpenAI's outgoing Senior Advisor for AGI Readiness, expressed serious concerns about this development as he announced his departure from the company. "In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready," Brundage states in a detailed public statement.
Former internal AGI readiness advisor warns of lack of regulation
Brundage points to significant gaps in AI oversight, noting that tech companies have strong financial motivations to resist effective regulation. He emphasizes that developing safe AI systems requires deliberate action from governments, companies, and civil society rather than occurring automatically.
Following his departure, Brundage plans to either establish or join a non-profit organization, saying he can have more impact working outside the industry. "I think AI is unlikely to be as safe and beneficial as possible without a concerted effort to make it so." His team also developed OpenAI's five-stage framework for tracking AI progress.
This latest shutdown follows OpenAI's decision in May to disband its Superalignment team, which studied long-term AI safety risks. At that time, team leader Jan Leike publicly criticized the company, stating that "safety culture and processes have taken a back seat to shiny products."