OpenAI and Anthropic agree to let U.S. AI Safety Institute test and evaluate new models
The U.S. AI Safety Institute on Thursday announced it had reached a testing and evaluation agreement with OpenAI and Anthropic.
OpenAI and Anthropic, two of the most richly valued artificial intelligence startups, have agreed to let the U.S. AI Safety Institute test their new models before releasing them to the public, following increased concerns in the industry about safety and ethics in AI.
The institute, housed within the Department of Commerce at the National Institute of Standards and Technology (NIST), said in a press release that it will get "access to major new models from each company prior to and following their public release."
The group was established after the Biden-Harris administration issued the U.S. government's first-ever executive order on artificial intelligence in October 2023, requiring new safety assessments, equity and civil rights guidance, and research on AI's impact on the labor market.
"We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models," OpenAI CEO Sam Altman wrote in a post on X. OpenAI also confirmed to CNBC on Thursday that, in the past year, the company has doubled its number of weekly active users from late last year to 200 million. Axios was first to report on the number.
The news comes a day after reports surfaced that OpenAI is in talks to raise a funding round valuing the company at more than $100 billion. Thrive Capital is leading the round and will invest $1 billion, according to a source with knowledge of the matter who asked not to be named because the details are confidential.
Anthropic, founded by ex-OpenAI research executives and employees, was most recently valued at $18.4 billion. Anthropic counts Amazon as a leading investor, while OpenAI is heavily backed by Microsoft.