On April 29, the National Institute of Standards and Technology (NIST) at the U.S. Department of Commerce released several announcements regarding progress on President Biden's Executive Order on AI (covered by InfoBytes here). NIST released four draft publications aimed at enhancing the safety, security, and trustworthiness of AI systems.
The four draft publications include: (i) NIST AI 600-1, which offers a Generative AI Profile to help organizations identify and manage risks associated with generative AI; (ii) NIST SP 800-218A, which expands on the Secure Software Development Framework (SSDF) to address concerns about malicious training data affecting AI systems and outlines potential risks and strategies for handling training data, including recommendations for analyzing data for signs of poisoning, bias, homogeneity, and tampering; (iii) NIST AI 100-4, which proposes technical methods to improve the transparency of AI-created or “synthetic” content; and (iv) NIST AI 100-5, which will outline a plan to encourage the global development of AI-related technical standards and seek feedback on areas for AI standardization, including methods for tracking the origin of digital content and shared practices for AI system testing and evaluation. Additionally, NIST is launching challenges to create methods for distinguishing between human- and AI-generated content. Public comments on these initial drafts are due by June 2.