NIST Makes Good on Biden’s Executive Order on AI, Delivering Algorithm-Testing Software and Multiple AI-Related Guidance Documents to the Public

Faegre Drinker Biddle & Reath LLP

At a Glance

  • NIST announced that it had developed and made publicly available open-source software, nicknamed “Dioptra,” which allows users to test how their AI models and systems respond to adversarial attacks.
  • The U.S. AI Safety Institute published its initial public draft of guidance to address one key area of concern in the AI industry: misuse of AI systems and dual-use foundation models for nefarious and harmful purposes. NIST is accepting public comments on this draft guidance until September 9, 2024.
  • NIST also issued final versions of three previously published guidance documents:
    • “AI Risk Management Framework Generative AI Profile”
    • “Secure Software Development Practices for Generative AI and Dual-Use Foundation Models”
    • “A Plan for Global Engagement on AI Standards”

It’s been a busy summer for the National Institute of Standards and Technology (NIST) and the U.S. AI Safety Institute (itself housed within NIST). July 26, 2024, marked the 270th day since President Biden issued his executive order on AI; and on the same day, NIST and the AI Safety Institute announced not one, not two, not three, but five AI-related deliverables for AI enthusiasts and users to chew on. 

Chief among those five developments was NIST’s announcement that it had developed and made publicly available open-source software, nicknamed “Dioptra,” which allows users to test how their AI models and systems respond to adversarial attacks. The software, which is free to download, responds to the Executive Order’s directive that NIST assist with model testing so that users can learn how often, and under what circumstances, their AI systems and models might fail. The software supports NIST’s existing AI Risk Management Framework by providing a functional option for assessing, analyzing and tracking AI risks, and it allows for model testing and red teaming throughout the development lifecycle, during acquisition of AI models, and during auditing or compliance activities.

In another first, the U.S. AI Safety Institute published its initial public draft of guidance addressing a key area of concern in the AI industry: misuse of AI systems and dual-use foundation models for nefarious and harmful purposes. The draft guidance, which includes seven key objectives for mitigating misuse, provides best practices for foundation model developers to protect their AI systems from being misused in ways that might harm individuals or society more broadly. NIST is accepting public comments on this draft guidance until September 9, 2024.

In addition to these two new deliverables, NIST also issued final versions of three previously published guidance documents. The first, “AI Risk Management Framework Generative AI Profile,” helps organizations identify risks specific to generative AI and offers practical options for managing those risks. The second, “Secure Software Development Practices for Generative AI and Dual-Use Foundation Models,” is designed as a companion to NIST’s Secure Software Development Framework and addresses the risk of AI systems being compromised by malicious training data, or data that could poison, bias or tamper with a model’s training set. Finally, NIST published its final guidance for “A Plan for Global Engagement on AI Standards,” which recommends that the development of AI-related standards involve a broad range of multidisciplinary stakeholders from many countries in order to achieve consensus on standards and enable information sharing.

For More Information

Companies utilizing AI, whether internally, externally or in coordination with their vendors, will want to continue to watch this space. While the five deliverables noted above offer plenty to digest for now, we’re likely to see much more as we near the one-year anniversary of President Biden’s Executive Order on AI (and the associated deadlines imposed by that Executive Order).

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Faegre Drinker Biddle & Reath LLP
