Veto of Virginia AI Bill Raises Questions About the Future of State-Level Regulation

Skadden, Arps, Slate, Meagher & Flom LLP

On March 24, 2025, Virginia Gov. Glenn Youngkin vetoed the High-Risk Artificial Intelligence Developer and Deployer Act (House Bill 2094). The bill, which had passed through the Virginia Legislature in February 2025, would have required companies that both create and deploy so-called “high-risk” AI systems used to make consequential decisions in areas such as employment, lending, health care, housing and insurance to implement safeguards against algorithmic discrimination.

“Algorithmic discrimination” was defined in the bill as using an AI system that unlawfully differentiates or disfavors individuals or groups on the basis of a variety of factors including age, color, disability, national origin, race, religion and sexual orientation.

The Virginia bill generally mirrored a Colorado law enacted in 2024. It remains to be seen whether the Democratic-controlled Virginia General Assembly will seek to override Gov. Youngkin’s veto, which would require a two-thirds vote in both houses.

Takeaways

When the Virginia bill passed through the state Legislature, there was considerable commentary as to whether this marked the beginning of a steady flow of state AI regulation, filling the void created by a lack of corresponding federal regulation.

The veto by Gov. Youngkin, a Republican, may signal that AI regulation is becoming an increasingly partisan issue, with state Republicans adopting the Trump administration’s stance against regulating emerging technologies, especially where the regulation is focused on preventing discrimination or bias.

Partisan issues aside, the reaction to the Virginia bill highlights some of the inherent difficulties in regulating AI. While many interest groups lauded the bill for its goal of regulating AI so it could not be used to harm individuals or groups, others argued that the manner in which the bill sought to achieve these goals — such as through requiring developers to document an AI system’s intended use, risks and performance, and requiring deployers to create risk management policies and conduct detailed impact assessments — would have generated considerable documentation but with little practical impact.

As the area of AI regulation continues to evolve, companies should closely monitor developments at the state and federal levels as well as regulatory activity outside the U.S.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Skadden, Arps, Slate, Meagher & Flom LLP

