New NIST AI framework offers guidance on risk management and governance for trustworthy AI systems

Eversheds Sutherland (US) LLP

On January 26, 2023, the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework (AI RMF or Framework). The AI RMF is a resource for organizations designing, developing, deploying, or using artificial intelligence (AI) systems to help manage the risks of AI and promote trustworthy and responsible development and use of AI systems. The Framework is voluntary and flexible – meaning organizations of all sizes and across all sectors can adapt it to their specific use cases.

NIST plans to update and improve the AI RMF regularly as technology and associated standards evolve and as the AI community provides feedback from implementing the Framework’s recommendations. In addition to the AI RMF, NIST released a companion AI RMF Playbook, an AI RMF Explainer Video, an AI RMF Roadmap, an AI RMF Crosswalk, and various Perspectives.

What’s in the NIST AI RMF?

The NIST AI RMF is organized into two parts.

Part 1 provides “foundational information” that organizations should understand as they work to increase the trustworthiness of their AI systems. Part 1 discusses how to frame risk, including the challenges of managing AI risk; the intended audience of the AI RMF; the characteristics of a trustworthy AI system; and how NIST plans to assess the effectiveness of the AI RMF.

Part 2 explains the Core of the Framework – the more specific recommendations organizations should implement to help address AI risks. The Core is organized into four main functions – Govern, Map, Measure, and Manage. These four functions are further divided into categories and subcategories. The AI RMF concludes with a series of appendices focused on: AI actor tasks; differences between AI risks and traditional software risks; AI risk management and human-AI interaction; and attributes of the AI RMF that guided its development.

Below we summarize the characteristics of a trustworthy AI system and the core functions of the AI RMF – the main sections of Parts 1 and 2, respectively.

What are the characteristics of a trustworthy AI system?

The Framework sets out the following seven characteristics:

  • Valid & Reliable

AI systems should be valid, reliable, accurate, and robust. A valid AI system is one that has been proven, through objective evidence, to work as it was designed to work. A reliable AI system performs as required without failing for a specific period of time under a specific set of conditions. An accurate AI system produces results that closely match the true values it is intended to estimate or predict. Finally, a robust AI system performs consistently under varied circumstances.

  • Safe

AI systems should not cause harm to humans, property, or the environment. Ensuring safety should be top-of-mind during the design of any AI system. Rigorous testing, monitoring, and the ability to quickly shut down any AI system deemed unsafe are all practical approaches to maintaining safe systems.

  • Secure & Resilient

A resilient AI system returns to normal functioning after an unexpected adverse event, while a secure AI system is resilient plus equipped with protocols to avoid, protect against, respond to, and recover from attacks.

  • Accountable & Transparent

A transparent AI system makes information about the system and its outputs available to the individuals who interact with it. Transparency increases confidence in an AI system because it allows for a higher level of understanding of the system. Maintaining the training data and other information that contribute to an AI system’s decisions can help make the system accountable.

  • Explainable & Interpretable

An AI system is explainable if one is able to explain how the system functions or operates, i.e., the mechanisms underlying the system’s operations. An AI system is interpretable if one is able to explain its outputs in relation to the purpose for which it was created, i.e., why the system made a particular prediction or recommendation.

  • Privacy-Enhanced

A privacy-enhanced AI system seeks to honor central privacy values such as anonymity, confidentiality, and user control. Privacy-enhancing technologies and data-minimizing methods – such as de-identification and aggregation of particular outputs – can contribute to the development of a privacy-enhanced AI system.

  • Fair – with Harmful Bias Managed

A fair AI system addresses issues such as harmful bias and discrimination. NIST identifies three major categories of AI bias that should be managed – systemic, computational and statistical, and human-cognitive.

Systemic bias can be present in AI datasets, the organizational norms, practices, and processes across the AI lifecycle, and the broader society that uses AI systems. Computational and statistical biases can be present in AI datasets and algorithmic processes, and often stem from systematic errors due to non-representative samples. Human-cognitive biases relate to how an individual or group perceives AI system information to make a decision or fill in missing information, or how humans think about the purposes and functions of an AI system.
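By way of illustration only – this is not part of the Framework itself – the short Python sketch below shows one simple way an organization might surface the kind of computational and statistical bias that stems from non-representative samples: comparing the makeup of a training dataset against a reference (for example, census) distribution. The attribute, records, reference proportions, and review threshold are all hypothetical.

```python
# Illustrative sketch only: a simple representativeness check for a training
# dataset, one practical way to surface the computational/statistical bias that
# the AI RMF attributes to non-representative samples. The attribute name,
# records, and reference proportions below are hypothetical.
from collections import Counter

def representativeness_gaps(records, attribute, reference_proportions):
    """Compare each group's share of the dataset against a reference
    (e.g., census) distribution and return the gap per group."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_proportions.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = observed - expected
    return gaps

# Hypothetical example: training records skewed toward one region.
training_records = [{"region": "urban"}] * 80 + [{"region": "rural"}] * 20
reference = {"urban": 0.55, "rural": 0.45}  # assumed population shares

for group, gap in representativeness_gaps(training_records, "region", reference).items():
    flag = "REVIEW" if abs(gap) > 0.10 else "ok"  # assumed 10-point tolerance
    print(f"{group}: observed-vs-reference gap {gap:+.2f} [{flag}]")
```

A check of this kind addresses only one narrow source of bias; systemic and human-cognitive biases require organizational and process controls rather than code.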

What are the Core functions of the AI RMF? (i.e., what should my organization do to better manage AI risks?)

The AI RMF Core provides outcomes and actions designed to help organizations manage AI risks and develop trustworthy AI systems. As noted above, the Core is organized into four main functions and these functions are further divided into categories and subcategories. Below we provide a summary of each function and list its categories.

1. Govern

Govern is a cross-cutting function essential to AI risk management that enables the other functions. Strong governance can foster an organizational culture of risk management. Senior leadership should set the tone for risk management within the organization and ensure it is incorporated into the organization’s policies and operations. Ideally, all such policies and operations will serve to enhance transparency, improve human review processes, and bolster accountability in AI system teams.

Categories of Govern Function

Govern 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.

Govern 2: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.

Govern 3: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.

Govern 4: Organizational teams are committed to a culture that considers and communicates AI risk.

Govern 5: Processes are in place for robust engagement with relevant AI actors.

Govern 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.

2. Map

The Map function establishes the context to frame risks related to an AI system. Performing the Map function will provide all of an AI system’s relevant actors with visibility into other aspects of the AI system for which they may not be directly responsible. Providing this type of visibility will help prevent decisions made during one part of the AI lifecycle from undermining decisions made during another part of that lifecycle and/or by a different AI actor.

After completing the Map function, Framework users should have sufficient contextual knowledge about an AI system’s impacts to inform an initial go/no-go decision about whether to design, develop, or deploy an AI system. If the decision to proceed is made, organizations should use the Measure and Manage functions along with policies and procedures put into place in the Govern function to assist in AI risk management efforts.
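As a purely illustrative sketch – the Framework does not prescribe any particular tooling – the following Python snippet shows one way an organization might record the context gathered during the Map function and gate the initial go/no-go decision on it. The fields, system name, mapped risks, and severity tolerance are hypothetical.

```python
# Illustrative sketch only: recording mapped context for an AI system and
# gating an initial go/no-go decision on it. Fields and thresholds are
# hypothetical, not prescribed by the AI RMF.
from dataclasses import dataclass, field

@dataclass
class MappedContext:
    system_name: str
    intended_use: str
    impacted_groups: list
    third_party_components: list
    mapped_risks: dict = field(default_factory=dict)  # risk -> severity (1-5)

def initial_go_no_go(ctx: MappedContext, severity_tolerance: int = 4) -> str:
    """Recommend escalation if the context is incomplete or any mapped risk
    meets or exceeds the organization's assumed severity tolerance."""
    if not ctx.intended_use or not ctx.impacted_groups:
        return "no-go (context incomplete)"
    if any(sev >= severity_tolerance for sev in ctx.mapped_risks.values()):
        return "no-go (escalate to risk owners)"
    return "go (proceed to Measure and Manage)"

ctx = MappedContext(
    system_name="resume-screening-model",          # hypothetical system
    intended_use="rank applicants for interview",
    impacted_groups=["job applicants", "recruiters"],
    third_party_components=["vendor embedding API"],
    mapped_risks={"disparate impact": 4, "data drift": 2},
)
print(initial_go_no_go(ctx))  # -> "no-go (escalate to risk owners)"
```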

Categories of Map Function

Map 1: Context is established and understood.

Map 2: Categorization of the AI system is performed.

Map 3: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood.

Map 4: Risks and benefits are mapped for all components of the AI system including third-party software and data.

Map 5: Impacts to individuals, groups, communities, organizations, and society are characterized.

3. Measure

The Measure function employs tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts. AI systems should be tested before their deployment and regularly while in operation. AI risk measurements include documenting aspects of systems’ functionality and trustworthiness.

The Measure function should include rigorous software testing and performance assessment methodologies with associated measures of uncertainty, comparisons to performance benchmarks, and formalized reporting and documentation of results.

After completing the Measure function, objective, repeatable, or scalable test, evaluation, verification, and validation (TEVV) processes including metrics, methods, and methodologies will be in place, followed, and documented.
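For illustration only, the sketch below measures a single trustworthiness metric (accuracy) against an assumed performance benchmark, attaches a rough bootstrap estimate of uncertainty, and documents the result – a minimal example in the spirit of the Measure function and its TEVV processes. The evaluation data and benchmark value are hypothetical.

```python
# Illustrative sketch only: one metric measured with an uncertainty estimate
# and a documented result record. Evaluation data and benchmark are hypothetical.
import json
import random

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def bootstrap_interval(preds, labels, n_resamples=1000, seed=0):
    """Rough 95% bootstrap confidence interval for accuracy."""
    rng = random.Random(seed)
    n = len(labels)
    scores = []
    for _ in range(n_resamples):
        sample = [rng.randrange(n) for _ in range(n)]
        scores.append(accuracy([preds[i] for i in sample],
                               [labels[i] for i in sample]))
    scores.sort()
    return scores[int(0.025 * n_resamples)], scores[int(0.975 * n_resamples)]

# Hypothetical evaluation labels and model predictions.
labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
preds  = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
point = accuracy(preds, labels)
low, high = bootstrap_interval(preds, labels)

record = {
    "metric": "accuracy",
    "value": round(point, 3),
    "ci95": [round(low, 3), round(high, 3)],
    "benchmark": 0.75,                     # assumed performance benchmark
    "meets_benchmark": point >= 0.75,
}
print(json.dumps(record, indent=2))        # formalized documentation of the result
```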

Categories of Measure Function

Measure 1: Appropriate methods and metrics are identified and applied.

Measure 2: AI systems are evaluated for trustworthy characteristics.

Measure 3: Mechanisms for tracking identified AI risks over time are in place.

Measure 4: Feedback about efficacy of measurement is gathered and assessed.

4. Manage

The Manage function involves devoting resources to manage the risks that have been mapped and measured during the previous functions. Risk treatment includes plans to respond to, recover from, and communicate about incidents or events.

After completing the Manage function, plans for prioritizing risk and regular monitoring and improvement will be in place. Framework users will be better able to manage risks of deployed AI systems and allocate risk management resources based on assessed and prioritized risks.
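As a final illustration (again, not prescribed by the Framework), the snippet below prioritizes mapped and measured risks using a simple likelihood-times-impact score and pairs each with a treatment plan, consistent with the spirit of the Manage function. The risks, scores, and treatments shown are hypothetical.

```python
# Illustrative sketch only: prioritizing risks by likelihood x impact and
# attaching a treatment plan. All entries below are hypothetical.
risks = [
    {"risk": "harmful bias in outputs",  "likelihood": 3, "impact": 5,
     "treatment": "retrain on rebalanced data; add human review"},
    {"risk": "model drift after deploy", "likelihood": 4, "impact": 3,
     "treatment": "weekly monitoring; scheduled re-evaluation"},
    {"risk": "third-party data outage",  "likelihood": 2, "impact": 4,
     "treatment": "fallback data source; incident communication plan"},
]

for r in risks:
    r["priority"] = r["likelihood"] * r["impact"]   # simple 1-25 scale

for r in sorted(risks, key=lambda r: r["priority"], reverse=True):
    print(f'{r["priority"]:>2}  {r["risk"]:<28} -> {r["treatment"]}')
```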

Categories of Manage Function

Manage 1: AI risks based on assessments and other analytical output from the Map and Measure functions are prioritized, responded to, and managed.

Manage 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors.

Manage 3: AI risks and benefits from third-party entities are managed.

Manage 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly.

Conclusion

AI systems present unique risks. These systems are typically trained on historical data that can change rapidly over time and may be biased, affecting the system’s reliability and trustworthiness in ways that can be hard to understand. AI systems are also highly scalable and generally operate in complex contexts, which can make it difficult to detect and respond to failures effectively when they occur. While the risks of AI systems have been recognized in the AI principles articulated by international bodies such as the OECD and by many government and private sector actors (see, e.g., the White House Blueprint for an AI Bill of Rights and our legal alert discussing the same), pragmatic expert advice on how to implement trustworthy AI systems has been scarce. The NIST AI RMF is one expert tool that AI actors can use to put the core principles of trustworthy AI into action.

__________


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Eversheds Sutherland (US) LLP | Attorney Advertising
