NIST Releases AI Risk Management Framework, Expected to Be a Critical Tool for Trustworthy AI Deployment

Wiley Rein LLP

On January 26, the National Institute of Standards and Technology (NIST) published its much anticipated AI Risk Management Framework 1.0 (AI RMF or Version 1.0), a risk-management resource for organizations designing, developing, deploying, or using AI systems. The AI RMF aims to provide voluntary guidance and risk management practices, similar to NIST’s Cybersecurity Framework (CSF). As companies look to deploy AI responsibly while mitigating risks – and as they increasingly face regulatory pressure at both the federal and state level to do so – the AI RMF will be a critical tool for assessing and managing risks.

Below, we provide brief background about the AI RMF, as well as a high-level summary of this new risk management tool.

AI RMF Background and Development Process

NIST was tasked with developing the AI RMF by the National Artificial Intelligence Initiative Act of 2020, enacted as part of the National Defense Authorization Act for Fiscal Year 2021 (P.L. 116-283). NIST officially launched the AI RMF development process in July 2021, and that process has encouraged and reflected industry and other stakeholder feedback, with NIST releasing multiple rounds of preliminary drafts and hosting various workshops.

At today’s launch event, NIST’s collaborative process was a focal point for NIST and stakeholders alike. For example, NIST touted that “[the AI RMF] has been developed through a consensus-driven, open, transparent, and collaborative process.”

AI RMF Summary

The AI RMF is divided into two parts. Part 1 explains how organizations can frame AI-related risks, and it outlines seven “trustworthy AI characteristics,” which are:

  • Valid and reliable,
  • Safe,
  • Secure and resilient,
  • Accountable and transparent,
  • Explainable and interpretable,
  • Privacy enhanced, and
  • Fair – with harmful bias managed.

Part 2 includes the AI RMF’s “Core,” which describes four “Functions,” along with “Categories” and “Subcategories” – similar to the CSF – that help organizations address AI system risks as a practical matter.

1. AI RMF Part 1:

Risk. The first part of the AI RMF explains that “risk” means “the composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of the corresponding event.” Version 1.0, like the preliminary drafts, explains that, while risk management generally addresses negative impacts, the AI RMF provides approaches both to minimize negative impacts and to identify opportunities to maximize positive ones. Importantly, the AI RMF notes that risk tolerances (an organization’s “readiness to bear the risk in order to achieve its objectives”) will depend on an organization’s context and that attempting to eliminate risk entirely can be “counterproductive.”
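The AI RMF does not prescribe a quantitative method, but to make the definition concrete, below is a minimal Python sketch of one common way to operationalize a “composite measure” of probability and magnitude: a likelihood-times-impact score checked against an organization’s risk tolerance. The scales and threshold here are illustrative assumptions, not part of the framework.

```python
# Illustrative sketch only: the AI RMF does not prescribe a formula.
# One common way to operationalize a "composite measure" of an event's
# probability and the magnitude of its consequences is a simple
# likelihood-times-impact score compared against a risk tolerance.

def risk_score(probability: float, impact: float) -> float:
    """Composite risk: probability of the event (0-1) times the
    magnitude of its consequences (here, an assumed 1-5 scale)."""
    return probability * impact

def within_tolerance(score: float, tolerance: float) -> bool:
    """Risk tolerance is an organization's "readiness to bear the risk
    in order to achieve its objectives"; modeled here as a threshold."""
    return score <= tolerance

# Example: a 30% chance of a severity-4 event, tolerance set at 1.5.
score = risk_score(probability=0.3, impact=4.0)
print(score, within_tolerance(score, tolerance=1.5))  # 1.2 True
```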

Intended Users of the AI RMF. Part 1 also describes the AI RMF’s intended audience of “AI actors,” a term adopted from the Organisation for Economic Co-operation and Development (OECD). NIST’s intended “primary audience” is “AI actors . . . who perform or manage the design, development, deployment, evaluation, and acquisition of AI systems and drive AI risk management efforts.”

Trustworthy AI Characteristics. NIST explains that these characteristics are criteria used to evaluate an AI system’s trustworthiness. The AI RMF describes each characteristic and provides guidance for addressing them, explaining that doing so will involve organizational trade-offs and judgment calls based on the context of the particular AI system (a minimal tracking sketch follows the list below). The characteristics are as follows:

  • Valid and Reliable:
    • Validation is the “confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.”
    • Reliability is the “ability of an item to perform as required, without failure, for a given time interval, under given conditions.”
  • Safe:
    • AI systems should “not under defined conditions, lead to a state in which human life, health, property, or the environment is endangered.”
  • Secure and Resilient:
    • Systems are resilient if they can “withstand unexpected adverse events or unexpected changes in their environment or use – or if they can maintain their functions and structure in the face of internal and external change and degrade safely and gracefully when this is necessary.”
    • “While resilience is the ability to return to normal function after an unexpected adverse event, security includes resilience but also encompasses protocols to avoid, protect against, respond to, or recover from attacks.”
  • Accountable and Transparent:
    • “Accountability presupposes transparency. Transparency reflects the extent to which information about an AI system and its outputs is available to individuals interacting with such a system – regardless of whether they are even aware that they are doing so.”
  • Explainable and Interpretable:
    • “Explainability refers to a representation of the mechanisms underlying AI systems’ operation, whereas interpretability refers to the meaning of AI systems’ output in the context of their designed functional purposes.”
  • Privacy-Enhanced:
    • “Privacy refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. These norms and practices typically address freedom from intrusion, limiting observation, or individuals’ agency to consent to disclosure or control of facets of their identities (e.g., body, data, reputation).”
  • Fair – with Harmful Bias Managed:
    • “Fairness in AI includes concerns for equality and equity by addressing issues such as harmful bias and discrimination.”
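To make that evaluation concrete, below is a minimal Python sketch of a checklist pairing each of the seven characteristics with an assessment status and a place to record context-specific trade-offs. The dataclass structure and field names are our own illustrative shorthand, not anything the AI RMF prescribes.

```python
# Illustrative sketch only: a simple checklist for tracking whether each
# trustworthy AI characteristic has been assessed for a given system.
# The structure and field names are assumptions of this sketch.

from dataclasses import dataclass, field
from typing import List

TRUSTWORTHY_CHARACTERISTICS = (
    "Valid and Reliable",
    "Safe",
    "Secure and Resilient",
    "Accountable and Transparent",
    "Explainable and Interpretable",
    "Privacy-Enhanced",
    "Fair - with Harmful Bias Managed",
)

@dataclass
class Assessment:
    characteristic: str
    addressed: bool = False
    notes: str = ""  # record trade-offs and judgment calls for this context

@dataclass
class TrustworthinessChecklist:
    system_name: str
    assessments: List[Assessment] = field(
        default_factory=lambda: [Assessment(c) for c in TRUSTWORTHY_CHARACTERISTICS]
    )

    def open_items(self) -> List[str]:
        return [a.characteristic for a in self.assessments if not a.addressed]

checklist = TrustworthinessChecklist("resume-screening model")
print(checklist.open_items())  # all seven characteristics remain open
```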

Effectiveness of the AI RMF. The AI RMF also contains a section on the “Effectiveness of the AI RMF,” which explains that evaluations of the framework’s effectiveness “will be part of future NIST activities, in conjunction with the AI community,” and outlines NIST’s plans to develop metrics and other methods for those evaluations.

2. AI RMF Part 2:

AI RMF Core. The second part of the AI RMF contains the Core, which has four Functions: Govern, Map, Measure, and Manage. Each Function has Categories and Subcategories, which are, in turn, subdivided into specific actions and outcomes. This part of the AI RMF provides more of a roadmap for how companies and organizations might manage AI risks in practice. The AI RMF notes that some organizations may choose to apply only a subset of Categories when engaging in AI risk management. NIST also explains that risk management should be performed continuously throughout the AI system lifecycle and that the Functions should be carried out by diverse and multidisciplinary teams.

The Functions are organized as follows (a minimal modeling sketch follows the list):

  • Govern: This is a “cross-cutting” Function that provides recommendations concerning high-level processes and organizational schemes for fostering a culture of risk management throughout an organization;
  • Map: This Function provides recommended methods for contextualizing and identifying AI system risks;
  • Measure: This Function provides recommendations for assessing, analyzing, and tracking identified AI risks; and
  • Manage: This Function provides recommendations for allocating resources and prioritizing AI system risks.
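
Below is a minimal Python sketch of how an organization might model the Core’s Function → Category → Subcategory hierarchy for internal tracking. The identifiers mimic NIST’s numbering style (e.g., “GOVERN 1.1”), but the outcome text is paraphrased and the status field is our own assumption; consult the AI RMF itself for the authoritative content.

```python
# Illustrative sketch only: modeling the Core hierarchy (Functions contain
# Categories, which contain Subcategories with specific outcomes) for
# internal tracking. IDs mimic NIST's numbering style; the outcome text is
# paraphrased and the "status" field is an assumption of this sketch.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Subcategory:
    id: str                      # e.g., "GOVERN 1.1"
    outcome: str                 # the specific action or outcome sought
    status: str = "not started"  # e.g., "not started" / "in progress" / "done"

@dataclass
class Category:
    id: str
    subcategories: List[Subcategory] = field(default_factory=list)

@dataclass
class Function:
    name: str                    # Govern, Map, Measure, or Manage
    categories: List[Category] = field(default_factory=list)

govern = Function("Govern", [
    Category("GOVERN 1", [
        Subcategory("GOVERN 1.1",
                    "Legal and regulatory requirements involving AI are "
                    "understood, managed, and documented"),
    ]),
])

# Risk management is continuous across the AI lifecycle, so status per
# Subcategory would be revisited over time; organizations may also apply
# only the subset of Categories relevant to them.
for category in govern.categories:
    for sub in category.subcategories:
        print(sub.id, "-", sub.status)
```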

AI RMF Profiles. NIST also includes a discussion of AI RMF Profiles that will build on the document’s guidance. Profiles are intended to be created at the organization or segment/sector level, and NIST does not prescribe profile templates. NIST discusses several different types of profiles that organizations can utilize in implementing the AI RMF:

  • Use-case profiles: Implementations of the AI RMF Functions, Categories, and Subcategories for a specific setting or application, such as a hiring profile or a fair housing profile.
  • Temporal profiles: Include Current Profiles, which describe the current state of AI risk management in a sector or application context, and Target Profiles, which indicate the outcomes needed to achieve desired AI risk management goals (see the gap-analysis sketch after this list).
  • Cross-sectoral profiles: Cover risks of models usable across sectors and use cases.
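
Temporal Profiles in particular lend themselves to a simple gap analysis: compare the Subcategory outcomes realized today (Current Profile) against those needed for the desired state (Target Profile). The Python sketch below illustrates the idea; the specific IDs are placeholders in NIST’s numbering style, not recommendations.

```python
# Illustrative sketch only: a gap analysis between a Current Profile and a
# Target Profile, each represented as the set of Core Subcategory outcomes
# an organization has (or wants to have) in place. IDs are placeholders.

current_profile = {"GOVERN 1.1", "MAP 1.1"}
target_profile = {"GOVERN 1.1", "MAP 1.1", "MEASURE 2.1", "MANAGE 1.1"}

# The gap is everything required by the target state but not yet realized.
gaps = sorted(target_profile - current_profile)
print("Outcomes still needed:", gaps)
# Outcomes still needed: ['MANAGE 1.1', 'MEASURE 2.1']
```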

AI RMF Draft Playbook

In addition to the AI RMF, NIST had previously released a draft AI RMF Playbook, which includes more specific recommendations for implementing the AI RMF’s guidance. Version 1.0’s publication was accompanied by a further update to the draft Playbook, which now provides draft guidance for implementing all four Functions of the AI RMF.

NIST explains that this revised draft Playbook will be updated in the Spring of 2023, and it encourages stakeholders to submit feedback by February 27, 2023.

Conclusion

As AI continues to evolve and its uses expand, the AI RMF will be a critical tool to help companies identify, assess, and manage AI’s associated risks and benefits. Companies can use the framework as a key component of their risk management practices, and because the AI RMF is designed to be voluntary and flexible, and each company’s uses of AI are distinct, it will be implemented in different ways. Companies using AI should note continued interest from policymakers and regulators at the federal and state levels in AI and automated decision making, and they should consider this and other tools to address AI risks as regulatory expectations ramp up.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Wiley Rein LLP
