NIST Releases Series of AI Guidelines & Software in Ongoing Response to AI Executive Order

King & Spalding

The U.S. Department of Commerce’s National Institute of Standards and Technology (“NIST”) recently announced the publication of three AI guidelines as well as its release of a software package aimed at helping organizations measure the impact of adversarial attacks on AI system performance. These actions are all in response to President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, published on October 30, 2023.

NIST & President Biden’s AI Executive Order

President Biden’s AI Executive Order was published with an accompanying Fact Sheet, which included action items for the various departments and agencies falling under the executive branch. The Fact Sheet spotlighted, among other things, the need to create new standards for AI safety and security, and it did so specifically in relation to NIST:

  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy;
  • Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.

Since then, NIST has announced its ongoing efforts to work with private and public stakeholders to fulfill these obligations. As stated by Under Secretary of Commerce for Standards and Technology and NIST Director Laurie Locascio: “We are committed to developing meaningful evaluation guidelines, testing environments, and information resources to help organizations develop, deploy, and use AI technologies that are safe and secure, and that enhance AI trustworthiness.”

For example, NIST kicked off these efforts by hosting a workshop in November 2023 to facilitate collaboration, inviting private and public stakeholders to begin identifying working groups for the various deliverables required under the AI Executive Order. These meetings laid the groundwork for the development of NIST’s recently published guidelines.

NIST’s New AI Guidelines

Building on the AI Risk Management Framework (“AI RMF”) published by NIST in January 2023, NIST collaborated with private and public stakeholders, including through an open call for comments, to publish final versions of the following AI-related guidelines:

  • NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile;
  • NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models; and
  • NIST AI 100-5, A Plan for Global Engagement on AI Standards.

In addition to the above final publications, NIST released an initial public draft of Managing Misuse Risk for Dual-Use Foundation Models, which identifies best practices for developers of foundation models to manage the risk that their models may be deliberately misused to cause harm. NIST has also issued a call for public comments on this draft through September 9, 2024, which will be used to help inform the final version of the document.

NIST’s AI Software Release

In January 2024, NIST published details about a type of cyberattack unique to AI systems: adversarial machine learning. Threat actors can “corrupt” or “poison” data that might be used by AI systems for training, thereby causing those AI systems to malfunction.

NIST aims to assist organizations through the release of its own open-source software tool, Dioptra, which tests the effects of adversarial attacks on AI systems. Users can select the adversarial tactics a threat actor might employ to make a model perform less effectively, then track the resulting performance reduction to learn how often and under what circumstances the AI system would fail.
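The kind of experiment Dioptra automates can be illustrated with a minimal, self-contained sketch; everything below (the toy classifier, the numbers, the attack) is illustrative and is not Dioptra’s actual API. A simple classifier is trained on clean data, the training set is then poisoned with deliberately mislabeled outliers, and the drop in test accuracy is measured:

```python
import random

def train_threshold(points, labels):
    """Fit a toy 1-D classifier: a decision threshold at the midpoint
    of the two class means."""
    mean0 = sum(p for p, l in zip(points, labels) if l == 0) / labels.count(0)
    mean1 = sum(p for p, l in zip(points, labels) if l == 1) / labels.count(1)
    return (mean0 + mean1) / 2

def accuracy(threshold, points, labels):
    preds = [1 if p > threshold else 0 for p in points]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

random.seed(0)
# Two well-separated classes: class 0 centered at 0.0, class 1 at 5.0.
train_x = [random.gauss(0, 1) for _ in range(100)] + [random.gauss(5, 1) for _ in range(100)]
train_y = [0] * 100 + [1] * 100
test_x = [random.gauss(0, 1) for _ in range(50)] + [random.gauss(5, 1) for _ in range(50)]
test_y = [0] * 50 + [1] * 50

clean_acc = accuracy(train_threshold(train_x, train_y), test_x, test_y)

# Poisoning attack: inject outliers mislabeled as class 1 into the training
# set, dragging the learned threshold far from the true decision boundary.
poison_x = train_x + [-20.0] * 30
poison_y = train_y + [1] * 30
poisoned_acc = accuracy(train_threshold(poison_x, poison_y), test_x, test_y)

print(f"accuracy with clean training data:    {clean_acc:.2f}")
print(f"accuracy with poisoned training data: {poisoned_acc:.2f}")
```

The same model, evaluated on the same untouched test set, performs markedly worse once the training data is corrupted, which is the effect a tool like Dioptra lets organizations quantify systematically across many attack configurations.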

“For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software. These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation.”

-- Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and NIST Director

Upcoming NIST AI Deliverables

The diagram below illustrates NIST’s effort to build out guidelines and standards for the safe, secure, and trustworthy development and use of AI in the coming months, with additional key benchmarks set through January 2025.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© King & Spalding

Written by:

King & Spalding