[co-author: Matthew Tikhonovsky]
- On September 24, 2024, the Office of Management and Budget (OMB) released a memorandum on Advancing the Responsible Acquisition of Artificial Intelligence in Government.
- The new memorandum builds on OMB’s March 2024 memorandum on AI, which established requirements and guidance for agencies that utilize AI. The March 2024 memorandum specifically outlined “minimum risk management practices for use of AI that impact the rights and safety of the public.”
- The new memorandum focuses on federal agencies’ acquisition of AI. It puts forth requirements and guidance to foster cross-agency collaboration throughout the AI acquisition lifecycle, mitigate AI risks during the procurement process, and promote a competitive AI ecosystem with innovative acquisition.
- Although Congress is considering legislation to codify similar requirements for AI use and procurement by federal agencies, such legislation is unlikely to pass in the five weeks left in this Congress.
On September 24, 2024, the Office of Management and Budget (OMB) released a memorandum on Advancing the Responsible Acquisition of Artificial Intelligence in Government. Building on OMB’s March 2024 memorandum on federal agencies’ use of AI, the new memorandum focuses on these agencies’ acquisition of AI. It establishes requirements and guidance to encourage collaboration across agencies throughout the AI acquisition lifecycle, reduce AI risks during procurement, and support a competitive AI ecosystem through innovative acquisition practices.
The requirements and guidance apply to “all agencies defined in 44 U.S.C. § 3502(1)” that acquire AI systems and services from vendors and other third parties. Some requirements apply only to Chief Financial Officer Act agencies, while none of the requirements “apply to elements of the Intelligence Community.”
March 2024 OMB Memo on AI
In March 2024, as we covered, OMB issued memorandum M-24-10 on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. The memorandum outlined requirements for federal agencies that use AI and directed them to designate a Chief AI Officer, carry out proper due diligence when procuring AI tools, and establish a publicly accessible AI use case inventory, among other directives. Most importantly, the memorandum required that agencies identify AI use cases that are safety- and/or rights-impacting and stipulated that they must adhere to specific minimum practices for these uses.
New OMB Memo on AI
Building on OMB’s previous AI memorandum, the new memorandum’s requirements fall into three broad categories:
1. Ensuring Collaboration Across the Federal Government on AI
2. Managing AI Risks and Performance
3. Promoting a Competitive AI Market with Innovative Acquisition
1. Ensuring Collaboration Across the Federal Government on AI
- Establish Cross-Functional Collaboration to Oversee AI Performance and Risks: Each agency must establish or update policies for agency collaboration “to ensure that acquisition of an AI system or service will have the appropriate controls in place to comply with the requirements of this memorandum.” Within 180 days, agency Chief AI Officers (CAIOs) must notify OMB of their progress in formalizing cross-functional collaboration to manage AI performance and risks. These policies should enable effective cross-functional collaboration for timely acquisition and proactive risk management. Agencies must specify how they will initially review AI acquisitions for necessary management practices, involve relevant experts in decision-making, and, when necessary, escalate reviews and decisions related to AI performance and risk management.
- Ensure Agency-Wide Strategic Planning and Resources: Within 180 days of the memorandum’s release, each agency CAIO must submit a plan “for ensuring that the CAIO coordinates on AI acquisition” with relevant agency officials, including the agency’s Chief Information Officer (CIO) and Chief Information Security Officer.
- Centralize Interagency Information and Knowledge Sharing: The CAIO Council, along with OMB and other relevant government agencies, should gather and share information on AI acquisition for all executive branch agencies. This information sharing should include lessons learned from past AI acquisitions, templates for innovative practices, mechanisms for monitoring and managing risks, and best practices and methodologies for responsible AI acquisition.
- Cross-Council Working Group: The federal CIO and the Administrator for Federal Procurement Policy will “establish a Federal cross-council working group” to examine “cross-functional issues that constantly arise in the procurement of AI.”
2. Managing AI Risks and Performance
- Identify if AI is Included in an Acquisition: OMB’s March 2024 Memorandum on AI requires that agencies maintain and annually update an AI use case inventory. The new memorandum includes best practices for meeting this requirement, including by communicating to the vendor the use cases of the AI system or service, requiring vendors to report any proposed use of AI, and asking vendors if AI is a primary feature in their system or service.
- Mitigate Privacy Risks Throughout the Acquisition Lifecycle: Senior Agency Officials for Privacy (SAOPs) and other agency officials are required to “have early and ongoing involvement in AI acquisition processes so that they are able to identify and manage privacy risks that may arise throughout the acquisition lifecycle of AI systems.”
- Manage Risks Related to AI-Based Biometrics: Building on the March 2024 memorandum’s requirements, agencies must ensure that contracts with vendors for AI systems and services address risks associated with AI systems that use biometric identifiers, including risks related to unlawfully collected biometric data or systems that lack sufficient accuracy for reliable identification. To mitigate these risks, the memorandum directs that “agencies avoid biometric systems that rely on unreliable or unlawfully collected information.”
- Use Innovative Outcomes-Based Acquisition Techniques: When acquiring AI, agencies should use performance-based approaches and best practices to enhance risk management and planning. “Performance-based requirements allow agencies to understand and evaluate vendor claims about their proposed use of AI systems or services prior to contract award, acquire AI capabilities that address their needs, and perform post-award monitoring.”
- Include Transparency Requirements in Contracts to Obtain Necessary Information to Evaluate Risks: Agencies are directed to establish requirements in contracts to ensure that vendors supply adequate information for agencies to assess their AI use claims, manage risks, and conduct impact assessments throughout the AI acquisition lifecycle.
- Additional Safeguards for Generative AI: Agencies that utilize generative AI systems or services “must implement additional practices” to mitigate any risks. Contracts with vendors for generative AI systems or services must include requirements that ensure that outputs of AI systems carry markers identifying them as AI-generated, document how such systems or services are trained and evaluated, and prevent such systems or services from generating illegal or violent content. Agencies should also create empirical standards for evaluating different generative AI systems and services and for selecting those that provide the most value to the agency.
3. Promoting a Competitive AI Market with Innovative Acquisition
- Reduce Vendor Lock-in and Potential Switching Costs by Incorporating Specific Language into Contractual Requirements: Agency contractual requirements should include language to eliminate the potential for vendor lock-in by requiring vendors to commit to share and transfer knowledge with agency staff, provide agencies with “appropriate rights to code, data, and models,” and adhere to practices to promote model and data portability and pricing transparency.
- Focus on Interoperability, Data Portability, and Transparency: Agencies should consider and emphasize interoperability, data portability, and transparency when conducting market research, solicitation, and evaluation of vendors.
- Leverage Innovative Acquisition Practices: Agencies should rely on “innovative business practices and technologies to secure better contract outcomes.”
Legislation to Codify Requirements for AI Acquisition and Use
Congress is currently considering bipartisan legislation that would codify requirements, including those contained in OMB’s March memorandum on AI, for AI acquisition and use by federal agencies. As we covered, the PREPARED for AI Act, introduced in June 2024 by Senators Gary Peters (D-MI) and Thom Tillis (R-NC), would direct all federal agencies to appoint a CAIO, establish an AI risk classification system, and ban the use of AI by federal agencies to assign emotions, evaluate trustworthiness, or infer race. More generally, the bill would create a risk-mitigating framework for AI procurement and use to position federal agencies to safely and effectively adopt AI.
However, with only five weeks left in this Congress, lawmakers have very little time to pass the PREPARED for AI Act, which has not seen activity since July. As we’ve noted, although Senator Schumer (D-NY) has suggested that he intends to include AI bills in must-pass, end-of-the-year legislation, his primary focus has been on attaching AI election deepfake regulations to such legislation. This makes it even more unlikely that Congress will pass legislation addressing AI acquisition and use this term.
We will continue to monitor, analyze, and issue reports on these developments.