U.S. DEPARTMENT OF COMMERCE PARTNERS WITH OPENAI AND ANTHROPIC FOR AI SAFETY
NIST's New Agreement with OpenAI and Anthropic Aims to Enhance AI Safety Through Early Model Access and Evaluation
The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced on Thursday that leading AI developers OpenAI and Anthropic have signed formal agreements with its U.S. AI Safety Institute (AISI), a collaboration aimed at bolstering AI safety research, testing, and evaluation.
Under the new arrangement, the AISI will gain early access to major new AI models from OpenAI and Anthropic, the makers of ChatGPT and Claude respectively, both prior to and following their public release. This access will enable the institute to thoroughly evaluate the capabilities and potential safety risks of these advanced AI systems.
“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” said Elizabeth Kelly, Director of the AISI. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”
The deal reflects a growing focus on AI safety, a concern that has sparked significant debate within the industry. Concerns about AI practices have led to notable departures from companies like OpenAI, with some former executives launching new ventures dedicated to cautious development.
Reactions from both companies have been positive. Anthropic co-founder Jack Clark expressed enthusiasm about the partnership on Twitter, emphasizing the importance of third-party testing in the AI ecosystem. “Looking forward to doing a pre-deployment test on our next model with the U.S. AISI,” Clark wrote. “It’s been amazing to see governments stand up safety institutes to facilitate this.”
OpenAI co-founder and CEO Sam Altman also voiced support for the collaboration, highlighting its significance at the national level. “We are happy to have reached an agreement with the U.S. AI Safety Institute for pre-release testing of our future models,” Altman tweeted. “For many reasons, we think it's important that this happens at the national level. [The] U.S. needs to continue to lead!”
The initiative comes in the wake of the Biden Administration’s October 2023 Executive Order, which established the AISI to address the rapid development of artificial intelligence. In February, the AISI Consortium (AISIC) was formed, including major AI firms such as OpenAI, Anthropic, Google, Apple, NVIDIA, Microsoft, and Amazon.
The U.S. AISI plans to share its findings with its counterpart, the U.K. AI Safety Institute, marking a significant step in global efforts to ensure AI technologies are developed and deployed safely and responsibly.