The National Institute of Standards and Technology ("NIST") has released its AI Risk Management Framework ("AI RMF"), a resource that NIST says will help individuals, organizations, and society identify and manage risks associated with artificial intelligence ("AI").
On January 26, 2023, NIST released the first version of the AI RMF, along with the NIST AI RMF Playbook, the AI RMF Explainer Video, the AI RMF Roadmap, the AI RMF Crosswalk, and various Perspectives.
The AI RMF seeks to promote trustworthy and responsible AI by assisting organizations in the design, development, deployment, and use of AI systems, taking into account both AI's potential to drive positive scientific advances and the associated risks that could negatively impact individuals, groups, society, and the planet.
The AI RMF is organized into sections that offer practical guidance and highlight considerations companies may overlook when developing AI. For example, the "Framing Risk" section discusses risk prioritization, explaining how unrealistic expectations can lead an organization to allocate resources inefficiently when it does not know how to assess and properly prioritize risks. Another example is the list of characteristics of a trustworthy AI system: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair. NIST takes the position that detailing these characteristics gives companies a basis for developing or enhancing AI trustworthiness and thereby reducing risk.
The AI RMF then provides detailed descriptions of its four functions: Govern, Map, Measure, and Manage, which are intended to help companies address AI risks in practice. "Govern" refers to the structures, systems, processes, and teams that help organizations develop a purpose-driven culture focused on risk understanding and management. "Map" is intended to enhance a company's ability to identify risks and their broader contributing factors. "Measure" involves analyzing, assessing, benchmarking, and monitoring risks and related impacts. "Manage" refers to the process of regularly allocating risk-management resources to mapped and measured risks.
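For organizations looking to put these four functions into practice, one simple starting point is to track each AI system's risk-management activities against the functions in a structured checklist. The sketch below is purely illustrative and is not part of the NIST framework; the class names, fields, and example entries are hypothetical assumptions.

```python
# Illustrative only: a hypothetical checklist that tracks an AI system's
# risk-management activities against the AI RMF's four functions
# (Govern, Map, Measure, Manage). Names and entries are assumptions,
# not part of the NIST framework.
from dataclasses import dataclass, field
from enum import Enum


class Function(Enum):
    GOVERN = "Govern"    # structures, processes, and teams for a risk-aware culture
    MAP = "Map"          # identifying risks and their contributing factors
    MEASURE = "Measure"  # analyzing, benchmarking, and monitoring risks
    MANAGE = "Manage"    # allocating resources to mapped and measured risks


@dataclass
class RiskActivity:
    function: Function
    description: str
    completed: bool = False


@dataclass
class AISystemRiskProfile:
    system_name: str
    activities: list[RiskActivity] = field(default_factory=list)

    def outstanding(self) -> list[RiskActivity]:
        """Return activities that have not yet been completed."""
        return [a for a in self.activities if not a.completed]


if __name__ == "__main__":
    profile = AISystemRiskProfile(
        system_name="Example chatbot",  # hypothetical system
        activities=[
            RiskActivity(Function.GOVERN, "Assign accountability for AI risk decisions"),
            RiskActivity(Function.MAP, "Document intended use and affected groups"),
            RiskActivity(Function.MEASURE, "Benchmark model performance and bias metrics"),
            RiskActivity(Function.MANAGE, "Prioritize and allocate resources to top risks"),
        ],
    )
    for activity in profile.outstanding():
        print(f"[{activity.function.value}] {activity.description}")
```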
In all, the AI RMF provides AI users with a voluntary methodology for evaluating the adoption of AI systems and managing those already deployed.