The White House is gathering industry feedback on AI governance, giving stakeholders an opportunity to shape future policy.
On January 20, 2025, President Trump revoked Executive Order (EO) 14110, entitled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI),” which former President Biden issued to establish a regulatory framework for AI oversight. On January 23, 2025, President Trump issued EO 14179, entitled “Removing Barriers to American Leadership in Artificial Intelligence,” which directed the development of an AI Action Plan.
Based on the revocation of EO 14110 and the text of EO 14179, the administration likely intends to return to the AI regulatory principles of President Trump’s first term, which favored minimal oversight and industry self-governance where possible. The move eliminates the safeguards and regulatory frameworks the previous administration established to promote responsible AI development, signaling a shift away from federal oversight in favor of industry-led innovation and voluntary compliance.
Key Details of the Announcement
As part of EO 14179, the Office of Science and Technology Policy (OSTP) is seeking public input on the development of a national AI Action Plan, marking a step toward shaping future AI policy and regulation. Input may address topics such as model development, cybersecurity, data privacy, regulation, national security, innovation, and international collaboration. The Request for Information (RFI) was published in the Federal Register on February 6, 2025, and invites stakeholders, such as businesses, researchers and industry groups, to provide public comment on all aspects of AI policy.
The government’s request reflects the Trump administration’s policy framework prioritizing AI leadership and reducing regulatory barriers to development. According to OSTP, the RFI aims to inform the creation of an AI Action Plan that encourages innovation while addressing risks related to security, accountability and ethical AI deployment.
The RFI focuses on:
- Ensuring U.S. competitiveness in AI,
- Limiting unnecessary regulatory burdens, and
- Developing safeguards that support responsible AI advancement.
Stakeholders can submit comments by March 15, 2025, after which the feedback will be used to inform future regulatory proposals.
Key Considerations for Stakeholders
As the federal government shifts its approach to AI governance, stakeholders must carefully assess how these changes will impact compliance, risk management and strategic planning. While the administration’s emphasis is on reducing regulatory barriers, organizations operating in AI-driven industries should remain vigilant about emerging oversight mechanisms, state-level regulations, and evolving best practices.
- Navigating a Fragmented Regulatory Landscape
Although the federal government is prioritizing AI leadership and minimizing regulatory burdens, states like Colorado, California and New York are advancing their own AI regulations, particularly around transparency, fairness and consumer protection. This creates a complex environment where organizations operating across multiple jurisdictions may face inconsistent compliance requirements. Stakeholders must be prepared to navigate potential conflicts between federal policy and state-level AI governance frameworks.
- Emphasis on Industry Self-Regulation
The administration will likely maintain its deregulatory approach to AI, favoring industry self-regulation over expanded federal oversight. Stakeholders may need to proactively implement accountability measures, transparency policies and risk management frameworks to demonstrate responsible AI use. While self-regulation offers flexibility, industries that fail to adopt safeguards risk future regulatory intervention if policymakers determine industry efforts are inadequate.
- Opportunities for Public-Private Collaboration
The administration’s AI policy shift may open doors for government-backed research initiatives, funding opportunities and regulatory incentives. Historically, agencies like DARPA and the National Science Foundation have played key roles in shaping technological advancements through grants and public-private partnerships. Although President Trump’s focus is on reducing government spending, his previous administration provided over $1 billion in funding to establish artificial intelligence and quantum information science research institutes. EO 14179 cites this investment to demonstrate the President’s commitment to AI research, which may indicate continued plans to provide funding. Organizations engaged in AI research and development should monitor how the AI Action Plan might provide new opportunities for industry collaboration, federal procurement and streamlined approval processes for AI deployment.
- A Crucial Moment for Industry Input
The RFI process provides a platform for public input on AI regulation before policies take shape, and those who participate can help steer the conversation. Engaging now allows stakeholders to share insights, highlight potential challenges and contribute to the development of practical guidelines. For those looking to have a seat at the table, collaborating with regulators, industry groups and legal advisors can be a valuable step in navigating the evolving AI landscape.
Recommended Actions
Given the potential impact of AI policy developments, stakeholders should take proactive steps to engage in the RFI process before the March 15, 2025, deadline:
- Assess Potential Regulatory Impact: Review how proposed AI policies could affect your organization’s operations, compliance obligations, and innovation strategy.
- Submit Public Comments: Stakeholders have a rare opportunity to shape AI governance by providing input on regulatory challenges, industry-specific concerns, and recommendations for balanced oversight.
- Work with Legal and Policy Experts: Engaging legal counsel can help stakeholders craft well-supported comments that effectively advocate for policies aligned with their interests.