U.S. AI Safety Institute Partners with Anthropic and OpenAI for AI Safety Research

The future of AI regulation?
Created on August 29 | Last edited on August 29
The U.S. Artificial Intelligence Safety Institute, part of the National Institute of Standards and Technology (NIST), has signed first-of-their-kind agreements with AI leaders Anthropic and OpenAI to advance AI safety research. The agreements give the U.S. AI Safety Institute access to major new models from both companies before and after their public release. The purpose of these partnerships is to collaborate on research into evaluating the capabilities and safety risks of AI models and to develop methods for mitigating those risks. Elizabeth Kelly, director of the U.S. AI Safety Institute, emphasized that these agreements represent a crucial step in promoting safe AI innovation.

Collaboration on AI Safety Evaluations

Under the agreements, the U.S. AI Safety Institute will work closely with Anthropic and OpenAI, as well as with international partners such as the U.K. AI Safety Institute, and will provide both companies with feedback on potential safety improvements to their models. The goal is to build on existing efforts to advance safe, secure, and trustworthy AI, in line with the Biden-Harris administration's Executive Order on AI and the voluntary commitments made by leading AI developers.

Role of the U.S. AI Safety Institute

Established following the Biden-Harris administration’s 2023 Executive Order on AI, the U.S. AI Safety Institute aims to advance the science of AI safety and address the risks associated with advanced AI systems. The Institute focuses on developing testing, evaluation, and guidelines to accelerate safe AI innovation both in the U.S. and globally. By conducting evaluations under the new agreements with Anthropic and OpenAI, the Institute will contribute to safer AI development and help ensure that AI technologies are used responsibly.

Significance for Future AI Development

These agreements mark a significant milestone in the ongoing effort to create a safe AI ecosystem. The ability to collaborate directly with leading AI companies will allow the U.S. AI Safety Institute to stay at the forefront of AI safety research, supporting the development of safer and more trustworthy AI systems. This collaboration also reflects a growing recognition of the importance of regulatory and research partnerships between the government and private sector to manage the risks posed by rapidly advancing AI technologies.
Tags: ML News