NIST Releases New Draft of Artificial Intelligence Risk Management Framework for Comment


The National Institute of Standards and Technology (NIST) has released the second draft of its Artificial Intelligence (AI) Risk Management Framework (RMF) for comment. Comments are due by September 29, 2022.

NIST, part of the U.S. Department of Commerce, helps individuals and businesses of all sizes better understand, manage and reduce their respective “risk footprint.” Although the NIST AI RMF is a voluntary framework, it has the potential to influence legislation: NIST frameworks have previously served as the basis for state and federal regulations, such as the 2017 New York State Department of Financial Services Cybersecurity Regulation (23 NYCRR 500).

The AI RMF was designed and is intended for voluntary use to address potential risks in “the design, development, use and evaluation of AI products, services and systems.” NIST envisions the AI RMF as a “living document” that will be updated regularly as technology and approaches to AI reliability evolve and change over time.

According to the proposed AI RMF, the framework focuses specifically on AI systems: engineered or machine-based systems that can, “for a given set of human-defined objectives, generate outputs such as predictions, recommendations or decisions influencing real or virtual environments.”

Amid the growth of artificial intelligence, the AI RMF provides guidance on using AI in a respectful and responsible manner. Cybersecurity frameworks are designed to secure and protect data, and the draft AI RMF appears to complement that goal.

One of the many objectives of the AI RMF is to better clarify and structure NIST’s “AI Lifecycle,” which currently focuses on overall risk management issues. The framework, as drafted, is aimed primarily at those responsible for commissioning or funding an AI system, as well as those within the “enterprise management structure” who govern the AI Lifecycle.

For example, as part of the proposed AI RMF, NIST has defined “stages” for the new AI Lifecycle model. These stages include:

  1. Plan & Design
  2. Collect & Process Data
  3. Build & Use Model
  4. Verify & Validate
  5. Deploy
  6. Operate & Monitor
  7. Use or Impacted By

AI will impact many critical aspects of society over the next few years, including the way we live and work. According to the World Economic Forum, up to 97 million new AI-related jobs could be created by the end of 2025. As AI continues to grow, it is critical to have a viable risk management framework in place.

A companion NIST AI RMF Playbook (Playbook) was published in conjunction with the second draft of the AI RMF. The Playbook is an online resource that “includes suggested actions, references, and documentation guidance for stakeholders” to implement the recommendations in the AI RMF.

NIST will hold a third and final virtual workshop on October 18-19, 2022, with leading AI experts and interested parties, and expects the final AI RMF and Playbook to be published in January 2023.

We will continue to follow these developments and advise of updates as relevant.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Faegre Drinker Biddle & Reath LLP

Written by:

Faegre Drinker Biddle & Reath LLP
