On April 3, 2025, the Office of Management and Budget (OMB) issued two memoranda implementing President Trump’s Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence. Memorandum M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (OMB Memo M-25-21), describes the federal government’s policy on promoting the responsible deployment of artificial intelligence (AI) to drive innovation, economic growth, and national security. It rescinds and replaces OMB Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. Memorandum M-25-22, Driving Efficient Acquisition of Artificial Intelligence in Government (OMB Memo M-25-22), provides guidance to executive agencies regarding AI procurement. It rescinds and replaces OMB Memorandum M-24-18, Advancing the Responsible Acquisition of Artificial Intelligence in Government. Both memoranda generally apply to executive agencies, with exemptions for national security systems. Taken together, the OMB memos pave the way for a significant expansion in the use and procurement of AI technologies in the federal ecosystem. We discuss the key aspects of the memos below.
- Data Rights: OMB Memo M-25-21 contemplates that agencies, to the extent permitted by law, will retain the ability to reuse and share AI code and models, including by releasing and maintaining AI code as open-source software in a public repository. OMB Memo M-25-22, in turn, directs agencies to have “appropriate processes for addressing use of government data and include appropriate contractual terms that clearly delineate the respective ownership and IP rights of the government and the contractor.” In particular, it directs agencies to (1) scope licensing and other IP rights based on the intended use of AI to avoid vendor lock-in; (2) ensure components necessary to operate and monitor an AI system or service remain available for as long as necessary; (3) provide guidance on handling, accessing, and using agency data or information to ensure that such information is collected and retained by a vendor only when necessary to perform the contract; (4) prohibit the use of non-public inputted agency data and outputted results to train publicly or commercially available AI algorithms without express agency consent; and (5) prioritize obtaining documentation from contractors that facilitates transparency and ensures the ability to track the performance and effectiveness of procured AI. Contractors providing AI services to the government therefore should anticipate data rights negotiations (at least where the AI is not a commercially available solution governed by FAR 52.212-4) and strict parameters around the contractor’s ability to use government data for non-contractual purposes (restrictions that will likely be similar to those applicable to controlled unclassified information).
- Data Privacy: OMB Memo M-25-22 directs agencies to establish “policies and procedures, including contractual terms and conditions, that ensure compliance with privacy requirements” whenever agencies acquire an AI system or service that will create, collect, use, process, store, maintain, disseminate, disclose, or dispose of federal information containing personally identifiable information.
- Promoting Competition and Avoiding Vendor Lock-In: Both OMB memos identify the need to encourage competition among AI vendors to avoid vendor lock-in. With respect to building a robust AI marketplace, OMB Memo M-25-22 directs agencies to conduct broad market research, including seeking out novel AI capabilities from new market entrants and providing opportunities for product demonstrations. An initial gating item to market entry, at least for AI vendors operating in the cloud, will be obtaining Federal Risk and Authorization Management Program (FedRAMP) authorization for their AI offerings. Regarding vendor lock-in, OMB Memo M-25-22 recommends that agencies include solicitation provisions addressing knowledge transfer, data and model portability, licensing (see Data Rights section above), and pricing transparency. It also provides that, “[a]s soon as a decision is made not to extend a contract for an AI system or service,” agencies should work with the vendor to ensure knowledge and data transfer.
- Domestic Preference: Both OMB memos contemplate establishing a domestic preference for AI that is developed and produced in the United States. Exactly how this preference would apply, however, is an open question. Existing domestic preference statutes apply to products rather than services. Although U.S. Customs and Border Protection has treated software as a product when analyzing country of origin under the Trade Agreements Act, AI operators typically view AI as a service (i.e., SaaS) rather than software. That makes existing domestic preference statutes ill-suited to serve as a mechanism for implementing a domestic preference for AI. Giving preference to American AI could also complicate existing trade agreements that provide designated countries with access to the U.S. market, unless such agreements are renegotiated or an agency can identify a valid exception.
- AI Use in Government Contract Performance: OMB Memo M-25-22 acknowledges that contractors “will likely increasingly utilize AI as part of contract performance in situations where the government may not anticipate the use of that AI.” It therefore directs agencies to consider including, when appropriate, a solicitation provision requiring disclosure of AI use as part of contract performance. Contractors thus should closely examine their solicitations and contracts to determine whether there are any restrictions on the use of AI in performance. Even where no restriction exists, contractors will ultimately be responsible for whatever AI outputs they rely on for performance.
- Emphasis on Performance-Based Contracting: OMB Memo M-25-22 “strongly encourage[s]” agencies to leverage performance-based requirements when issuing solicitations for AI services. In particular, the memo recommends issuing Statements of Objectives and Performance Work Statements, which eschew “overly limiting requirements” in favor of “outcome-based needs”; Quality Assurance Surveillance Plans, which can help agencies monitor performance effectiveness; and contract incentives, including those tied to achieving Quality Assurance Surveillance Plan metrics. The emphasis on performance-based contracting could promote greater competition by eliminating restrictive requirements that might otherwise preclude AI vendors from entering the federal market and allowing AI contractors to focus on the outcomes that their services can achieve.
- Contract Performance Assessments: OMB Memo M-25-22 directs agencies to include contract terms allowing agencies to regularly monitor and evaluate the performance, risks, and effectiveness of AI systems or services. In addition, the memo encourages agencies to require contractors to regularly self-monitor their AI systems and services and to remediate any unacceptable activity. The memo explains that these monitoring activities should inform agency decision-making as to whether a particular AI system or service is effective and valuable, and it advises that agencies should be prepared to sunset AI systems or services that have outlived their usefulness. These monitoring activities could also prompt investigations or enforcement actions under the False Claims Act.
- Minimum Needs v. Best Practices: OMB Memo M-25-21 describes an approach to risk management that focuses on establishing “the minimum number of requirements necessary to enable the trustworthy and responsible use of AI.” Although consistent with the Trump administration’s focus on deregulation and reducing bureaucracy, this focus on minimum requirements appears to reflect a departure from the “best practices” approach generally seen in the regulation of technology, such as in the cybersecurity space, and articulated in existing National Institute of Standards and Technology guidance, such as the AI Risk Management Framework. Contractors may ultimately have to make discretionary determinations as to what measures must be deployed to adequately mitigate risk for their particular AI use cases.