The House version of the budget reconciliation package (H.R.1) includes a 10-year moratorium on state and local enforcement of “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems.” An exception exists for “generally applicable law[s],” but such laws must be “imposed in the same manner on models and systems, other than [AI].”
While the moratorium is framed as a means of ensuring innovation and competitiveness by preventing a patchwork of regulation, the terms “artificial intelligence models,” “artificial intelligence systems” and “automated decision systems” are broadly defined, raising questions about which state and local AI laws would be subject to the moratorium.
By some estimates, at least 550 AI-related bills have been under consideration in state legislatures across the country this year. California lawmakers alone introduced 39 AI bills this year, on top of close to 60 last year. Eye-popping as those numbers are, state and local governments have already enacted hundreds of AI-related laws; states passed 113 in 2024 alone. The most criticized state and local laws have been those seeking to regulate algorithmic discrimination and automated decision making in areas like insurance, health care and hiring. Those laws aside, state and local governments have also passed AI laws prohibiting deepfakes and digital impersonation, requiring disclosure to consumers when they are interacting with AI, and expressly authorizing the use of AI systems by government.
It remains to be seen whether the provision will survive a challenge under the Byrd Rule, which bars extraneous provisions from inclusion in budget reconciliation bills. Its inclusion nonetheless reflects a broader theme playing out in U.S. policy: the shifting balance of regulatory authority between Washington, D.C., and the states, especially in fast-moving and complex policy arenas like artificial intelligence. In the absence of congressional action, states have increasingly taken the lead on AI oversight, just as they have in areas like data privacy, climate policy and consumer protection. That dynamic has empowered states to tailor approaches to local needs and values but has also resulted in a patchwork of legal obligations for companies operating across jurisdictions. The current federal proposal, by contrast, would reassert federal primacy not through action but through inaction: blocking states while Congress continues to deliberate.
At its core, the debate highlights competing visions for governance in the AI era: whether states should be empowered to act in the absence of federal consensus or whether a uniform national policy should be imposed, even in the form of a regulatory freeze. Consumer advocates warn that a decade-long ban would leave the public vulnerable to emerging harms without any meaningful oversight. Industry voices, meanwhile, point to the growing complexity and fragmentation of state-level rules. For their part, a bipartisan group of 40 state attorneys general has also weighed in, arguing that state AI laws are neither abstract nor premature and calling for a cooperative federal-state approach to AI governance.
Even if the federal proposal is ultimately removed from the budget reconciliation package, the concept seems unlikely to disappear from the Republican congressional agenda. Several federal lawmakers have indicated a continued interest in advancing a moratorium, including through standalone AI policy legislation. Either way, the question now is not only what rules should govern AI, but also who has the authority to write them.
For companies developing or deploying AI, federal efforts to streamline the regulatory landscape may reduce fragmentation but could also introduce new layers of uncertainty—particularly as definitions evolve and the interplay between state, federal, and international frameworks continues to develop.