White House Issues Guidance on Use and Procurement of Artificial Intelligence Technology

Ropes & Gray LLP
On April 3, 2025, the White House Office of Management and Budget (“OMB”) issued two memoranda, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (M-25-21)[1] and Driving Efficient Acquisition of Artificial Intelligence in Government (M-25-22)[2] (collectively, the “New AI Guidelines”), providing guidelines and requirements for the procurement and use of artificial intelligence by U.S. federal agencies. The New AI Guidelines replace the Biden Administration AI directives issued on March 28, 2024, and September 24, 2024.[3] Among other things, the New AI Guidelines require federal agencies to develop minimum risk management practices for what they describe as “high-impact AI,” and provide operative guidance for agencies to reduce vendor lock-in, improve transparency, and protect intellectual property and public data. According to the White House, the New AI Guidelines are intended to shift U.S. AI policy toward a “forward-leaning, pro-innovation, and pro-competition mindset.” While the New AI Guidelines do not directly impose restrictions on private industry, in many cases their requirements are still likely to have a significant impact on private industry through their incorporation into federal contracts. We discuss below key aspects of the New AI Guidelines, as well as key differences between them and the prior approach under the Biden AI executive orders.

  • Scope and Applicability. Like the Biden AI directives, the New AI Guidelines apply to “AI systems or services that are acquired by or on behalf of covered agencies,” excluding elements of the Intelligence Community, where AI is the primary purpose or primary functionality. Covered systems include data systems, software, applications, tools, or utilities “established primarily for the purpose of researching, developing, or implementing artificial intelligence technology,” as well as data systems, software, applications, tools, or utilities where an AI capability “is integrated into another system or agency business process, operational activity, or technology system.” Excluded are widely available commercial products or services that incorporate AI where the use of AI is incidental, and not the primary purpose or functionality. For example, word processing software used primarily for its AI functionality would be covered, whereas common commercial word processing software with substantial non-AI purposes, but with embedded AI for text suggestions or spelling/grammar corrections, would not. The procurement guidance (under M-25-22) applies to contracts awarded or renewed beginning 180 days after issuance of the memorandum (i.e., on or after October 1, 2025), and compliance time frames for the AI use guidelines (under M-25-21) generally range from 90 days to one year from the date of issuance of the memorandum.
  • Possible Departure from NIST AI Standards. The New AI Guidelines, like the Biden-era directives, emphasize transparency, interagency collaboration, and fostering innovation through competition in the AI marketplace. However, they diverge from specific Biden-era requirements in M-24-10 and M-24-18 regarding the use of the National Institute of Standards and Technology’s (NIST) standards and risk management framework for monitoring AI systems used by governmental agencies. Under the Biden administration, agencies were encouraged to utilize the NIST AI Risk Management Framework (RMF) to manage AI risks effectively through a uniform framework for ensuring safe, secure, and trustworthy AI systems. For generative AI applications, agencies were encouraged to refer to NIST’s Generative AI Profile to identify and mitigate specific risks. AI-based biometric systems were required to be submitted for evaluation by NIST to ensure accuracy and compliance with legal standards. Additionally, agencies included contractual requirements for vendors to provide detailed documentation on testing, evaluations, and red-teaming results, including those conducted by NIST or third parties, to ensure AI systems were robust and secure. Lastly, agencies considered the environmental impact of AI systems and referred to NIST standards for improving efficiency and sustainability. By contrast, the New AI Guidelines encourage agencies to develop their own minimum risk management practices for “high-impact AI use” (discussed below), without specific requirements for generative AI or biometric AI systems. Consequently, under the New AI Guidelines, contractors may need to implement agency-specific risk management requirements, potentially without the harmonizing effects of NIST standards.
  • Maximizing the Use of American-Made AI. Consistent with the “America First” posture of the Trump Administration, the New AI Guidelines require federal agencies to prioritize and maximize the use of AI products and services developed and produced in the United States.
  • Minimum Risk Management Practices for “High-Impact AI.” The New AI Guidelines introduce a general category of “high-impact AI use,” which is defined as “AI with an output that serves as a principal basis for decisions or actions with legal, material, binding, or significant effect on: (1) an individual or entity’s civil rights, civil liberties, or privacy; or (2) an individual or entity’s access to education, housing, insurance, credit, employment, and other programs; (3) an individual or entity’s access to critical government resources or services; (4) human health and safety; (5) critical infrastructure or public safety; or (6) strategic assets or resources, including high-value property and information marked as sensitive or classified by the Federal Government.”[4] This concept combines aspects of the Biden-era “rights-impacting AI” (i.e., AI systems whose outputs serve as a principal basis for decisions or actions that significantly affect civil rights, civil liberties, privacy, equal opportunities, or access to critical government resources or services) and “safety-impacting AI” (i.e., AI systems that control or significantly influence outcomes related to human life or well-being, such as those that could result in loss of life, serious injury, or mental health impacts), but leaves out AI application and procurement considerations that affect the climate or environment. Certain AI use cases are automatically considered high-impact AI use, for example, safety-critical functions of critical infrastructure, activities involving hazardous chemicals or biological agents, and medically relevant functions of medical devices. Agencies must comply with minimum risk management practices for these high-impact AI systems, which are subject to routine inspection by OMB. If an AI use is found non-compliant, the agency is required to safely discontinue it.
Agencies are encouraged to develop their own practices consistent with the New AI Guidelines and Executive Orders 13960[5] and 14179[6]. As noted above, this is a departure from the general requirements to follow NIST standards and the NIST RMF under the Biden administration. However, some principles of the NIST RMF form part of the required baseline minimum risk management practices for high-impact AI use cases under the New AI Guidelines. These include conducting pre-deployment testing and preparing risk mitigation plans, even if the agency lacks access to the AI system’s underlying source code or data. Agencies must complete and periodically update AI impact assessments, addressing the AI’s purpose, data quality, potential impacts on privacy and civil rights, reassessment schedules, cost analysis, independent review results, and risk acceptance. Ongoing monitoring for performance and adverse impacts, adequate human training and assessment, and ensuring human oversight, intervention, and accountability are also required. Additionally, agencies must provide remedies or appeals for individuals affected by AI decisions and incorporate feedback from end users and the public to inform AI use.
  • Intellectual Property (IP) Rights, Use of Government Data, and Vendor Lock-In Protections. Similar to the Biden-era AI directives, the New AI Guidelines require federal agencies to retain sufficient rights to government data and any improvements made to such data. Agencies are encouraged to establish transparency and standardized contractual terms for data ownership and IP rights in AI procurements, as well as methods for tracking AI performance and effectiveness. Contracts with AI vendors and service providers should address: (a) use restrictions on government data; (b) privacy compliance, especially for personally identifiable information (PII); (c) vendor lock-in protections; (d) compliance with minimum risk management practices for high-impact AI; (e) clear delineation of data and IP ownership between the government and contractors; (f) ongoing testing and monitoring; (g) vendor performance requirements; and (h) notifications to agency stakeholders before integrating new AI enhancements. Contracts should also prohibit using nonpublic agency data to train publicly or commercially available AI algorithms without explicit agency consent. These guidelines under M-25-22 will apply to contracts awarded pursuant to a solicitation issued on or after the date that is 180 days after the memorandum’s publication (i.e., on or after October 1, 2025) and to existing AI contracts renewed or extended after that date.[7]
  • Promoting Interoperability and Transparency in AI Development and Procurement. Agencies are encouraged to proactively share their custom-developed code, including models and model weights, across the federal government. Where practicable and consistent with the OPEN Government Data Act,[8] agencies should release and maintain AI code as open-source software in a public repository, unless restricted by law, regulation, or contractual obligation, or if doing so would pose a risk to national security, confidentiality, or agency operations. To encourage interoperability, cost-effectiveness, and competition, agencies can leverage several vendor practices as evaluation criteria. Additionally, agencies are required to seek open-source licenses for vendor products, including AI models, systems, services, and datasets, as well as transparent and non-discriminatory pricing practices. For contractors providing such AI systems and services, it may be important to understand the parameters around protecting company trade secrets and IP from public disclosure, including as open-source software, while still meeting the government’s requirements for openness and interoperability.
  • Continuous Performance Evaluation and Compliance for AI Systems. Contractors are encouraged to continually track and evaluate the performance of their AI systems. This includes documenting the AI’s capabilities and limitations, tracking the provenance of the data used, conducting ongoing testing and validation, assessing for overfitting, and ensuring continuous improvement and performance monitoring to adhere to the latest rules and regulations.

The New AI Guidelines emphasize the importance of responsible AI development, procurement, and governance to ensure that AI systems used by federal agencies are effective, secure, and aligned with public trust and legal requirements. Although the New AI Guidelines do not directly impose restrictions on private industry, they may impact service providers and vendors through their incorporation into federal contracts.

  1. M-25-21: Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (April 3, 2025).
  2. M-25-22: Driving Efficient Acquisition of Artificial Intelligence in Government (April 3, 2025).
  3. M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (March 28, 2024), https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf; M-24-18: Advancing the Responsible Acquisition of Artificial Intelligence in Government (September 24, 2024), https://www.whitehouse.gov/wp-content/uploads/2024/10/M-24-18-AI-Acquisition-Memorandum.pdf.
  4. M-25-21, p. 19.
  5. Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 3, 2020).
  6. Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence (January 23, 2025).
  7. M-25-22, p. 4.
  8. Open, Public, Electronic and Necessary (OPEN) Government Data Act, https://www.congress.gov/bill/115th-congress/house-bill/1770.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Ropes & Gray LLP
