The National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, published in January 2023, was designed to equip organizations with an approach that increases the trustworthiness of AI systems and fosters their responsible design, development, deployment, and use.1
The NIST AI Risk Management Framework will serve as a foundational component when developing an AI regulatory compliance program that meets the requirements of emerging AI laws. We have previously written about the EU AI Act in an article titled “An Introduction to the EU AI Act,” which focused on the Act’s applicability, timing, and penalties. We also wrote about the requirements of Chapter 2 of the EU AI Act, titled “Requirements for High-Risk AI Systems.”
Given the complexity of the NIST AI Risk Management Framework, we are publishing a series of articles focused on implementing it. This article covers the first of the four core functions, Govern. NIST defines the Govern function as a cross-cutting function that is infused throughout AI risk management and enables the other functions of the process.2
The Govern function includes six categories and 19 subcategory controls as listed in Table 1 below.
How can organizations use the NIST AI Risk Management Framework controls to assess activities that involve AI systems under the Govern function?
Along with the NIST AI Risk Management Framework, NIST provides an AI Risk Management Framework Playbook (AI RMF Playbook), which contains suggested actions and considerations for each subcategory control, including what should be assessed and documented relative to the Govern function.
Examples include the following (a sketch of how responses might be recorded appears after the list):
- Has the organization defined and documented the AI regulatory environment, including minimum requirements in laws and regulations, and has the in-scope AI system been reviewed for compliance with that regulatory environment?3
- What policies has the organization developed to ensure that the use of its AI systems is consistent with their intended use and with organizational values and principles?4
- What policies has the organization developed to ensure that the use of the AI system is consistent with organizational risk tolerances, and how do risk assessments inform risk tolerance decisions?5
- What are the roles and responsibilities of personnel involved in the design, development, deployment, assessment, and monitoring of the AI system?6
- What processes exist for data generation, acquisition/collection, ingestion, staging/storage, transformations, security, maintenance, and dissemination?7
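To make these assessments repeatable and auditable, some organizations capture each answer as a structured record tied to its Govern subcategory. The following is a minimal sketch in Python, assuming a simple internal tracking convention: the `ControlAssessment` fields, `Status` values, and the example response are illustrative assumptions rather than NIST terminology, although the GOVERN 1.1 identifier does follow the framework’s own numbering.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Status(Enum):
    """Illustrative assessment outcomes; not NIST-defined terms."""
    SATISFIED = "satisfied"
    GAP = "gap"
    NOT_APPLICABLE = "not applicable"


@dataclass
class ControlAssessment:
    """One documented answer to a Govern subcategory assessment question."""
    control_id: str       # e.g., "GOVERN 1.1" per the AI RMF numbering
    question: str         # the assessment question being answered
    response: str         # the organization's documented answer
    status: Status
    evidence: list[str] = field(default_factory=list)  # links to policies, reviews
    assessed_on: date = field(default_factory=date.today)


# Example: recording the regulatory-environment question from the list above.
assessment = ControlAssessment(
    control_id="GOVERN 1.1",
    question=(
        "Has the AI regulatory environment been defined and documented, and has "
        "the in-scope AI system been reviewed for compliance with it?"
    ),
    response=(
        "EU AI Act applicability analysis complete; review against Chapter 2 "
        "requirements for high-risk systems is still in progress."
    ),
    status=Status.GAP,
    evidence=["policies/ai-regulatory-register.pdf"],  # hypothetical artifact
)
print(assessment.control_id, assessment.status.value)
```

Keeping an evidence link on each record makes it easier to demonstrate compliance readiness when the assessment is later reviewed or audited.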
The Govern function and the specific actions from the AI RMF Playbook relate directly to the requirements for high-risk AI systems in Chapter 2 of the EU AI Act and therefore illustrate the value of using the NIST framework to assess your AI systems and processes.
What should organizations consider implementing to support alignment with the NIST AI Risk Management Framework Govern function?
After assessing and documenting activities that involve AI systems against the Govern function, organizations should review and identify the appropriate AI compliance management activities to remediate gaps and demonstrate AI compliance readiness and maturity. The AI RMF Playbook provides suggested actions for this purpose. Examples include the following (a sketch of an AI system inventory record follows the list):
- Develop and maintain policies for training (and re-training) organizational staff about necessary legal or regulatory considerations that may impact AI-related design, development, and deployment activities.8
- Update existing data governance and data privacy policies and practices, particularly those governing the use of sensitive or otherwise risky data, within the AI governance framework.9
- Establish policies to define mechanisms for measuring or understanding an AI system’s potential impacts, e.g., via regular impact assessments at key stages in the AI lifecycle.10
- Establish policies for AI system incident response or confirm that existing incident response policies apply to AI systems.11
- Establish policies that define the creation and maintenance of AI system inventories.12
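The last suggestion, AI system inventories, lends itself to a structured record. Below is a minimal sketch assuming an organization-defined schema; every field name and the example entry are hypothetical, since neither the AI RMF nor the Playbook prescribes an inventory format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class AISystemRecord:
    """One entry in an organizational AI system inventory (illustrative schema)."""
    system_name: str
    business_owner: str                    # accountable function, not an individual
    intended_use: str
    lifecycle_stage: str                   # e.g., "design", "deployed", "retired"
    risk_tier: str                         # organization-defined, e.g., "high"
    regulatory_scope: list[str] = field(default_factory=list)
    last_impact_assessment: Optional[date] = None
    incident_response_plan: Optional[str] = None  # link to the applicable plan


inventory = [
    AISystemRecord(
        system_name="resume-screening-model",   # hypothetical system
        business_owner="HR Technology",
        intended_use="Rank inbound applications for recruiter review",
        lifecycle_stage="deployed",
        risk_tier="high",
        regulatory_scope=["EU AI Act"],
        incident_response_plan="plans/ai-incident-response.md",
    )
]

# Surface entries with no impact assessment on file, per the policy suggestion above.
for record in inventory:
    if record.last_impact_assessment is None:
        print(f"{record.system_name}: impact assessment missing")
```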
The AI compliance risk profile is different for every organization and requires expertise both in conducting privacy risk assessments and in addressing the unique challenges that AI systems present. It is important to evaluate AI compliance gaps relative to an accepted risk management framework such as the NIST AI Risk Management Framework, and then to prioritize which compliance activities should be implemented to comply with relevant regulations such as the EU AI Act.
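As a hedged illustration of that prioritization step, the sketch below sorts identified gaps by an organization-defined severity score and any known regulatory deadline. The `Gap` fields, the scoring scale, and the example dates are assumptions for illustration only; the GOVERN identifiers follow the framework’s numbering.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class Gap:
    """An identified compliance gap against a Govern subcategory (illustrative)."""
    control_id: str
    description: str
    severity: int                          # organization-defined, 1 (low) to 5 (high)
    regulatory_deadline: Optional[date] = None


def priority_key(gap: Gap) -> tuple:
    """Deadline-bearing gaps first, earliest deadline first, then by severity."""
    return (
        gap.regulatory_deadline is None,   # False sorts before True
        gap.regulatory_deadline or date.max,
        -gap.severity,
    )


gaps = [
    Gap("GOVERN 1.6", "No AI system inventory is maintained", 4),
    Gap("GOVERN 1.1", "AI regulatory register not documented", 5,
        regulatory_deadline=date(2026, 8, 2)),  # illustrative deadline
]
for gap in sorted(gaps, key=priority_key):
    print(gap.control_id, gap.description)
```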
In our next article, we will focus on implementing the Map function of the NIST AI Risk Management Framework.
Notes
1. NIST AI 100-1. Artificial Intelligence Risk Management Framework (AI RMF 1.0). January 2023. Page 2.
2. NIST AI 100-1. Artificial Intelligence Risk Management Framework (AI RMF 1.0). January 2023. Page 21.
3. NIST AI RMF Playbook. January 2023. Page 3.
4. NIST AI RMF Playbook. January 2023. Page 5.
5. NIST AI RMF Playbook. January 2023. Page 8.
6. NIST AI RMF Playbook. January 2023. Page 10.
7. NIST AI RMF Playbook. January 2023. Page 15.
8. NIST AI RMF Playbook. January 2023. Page 3.
9. NIST AI RMF Playbook. January 2023. Page 4.
10. NIST AI RMF Playbook. January 2023. Page 7.
11. NIST AI RMF Playbook. January 2023. Page 12.
12. NIST AI RMF Playbook. January 2023. Page 14.