Comprehensive AI Bill Poised for Governor’s Signature in Virginia

WilmerHale

On February 20, the Virginia legislature passed the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094), a bill that aims to prevent algorithmic discrimination by imposing requirements on businesses that are intended to mitigate the harms these artificial intelligence (AI) models may present. The bill is now with Governor Glenn Youngkin for signature. If he signs the bill into law, Virginia will become the second state, joining Colorado, to pass a comprehensive law addressing the discriminatory effects of high-risk AI systems.

While the Colorado law and the Virginia bill contain many similarities, a few key differences in language make the Virginia bill narrower in scope and more business-friendly than its Mountain West predecessor. Both the Virginia bill and the Colorado law take a risk-based approach to regulation and have similar definitions for “high-risk systems” and “consequential decisions.” Virginia’s bill excludes all the same categories from the definition of a “high-risk AI system” as Colorado’s AI law, including anti-malware technology and firewalls, and adds “autonomous vehicle technology” to the list of exceptions. Like Colorado’s law, Virginia’s bill grants exclusive enforcement authority to the state attorney general (AG).

Notably, however, Virginia’s bill contains language that requires a high-risk AI system to be the “principal basis” of the potentially discriminatory decision. Virginia’s bill defines a high-risk AI system as one that is “specifically intended to autonomously” make or be a “substantial factor” in making a consequential decision. However, unlike Colorado’s law, which defines a “substantial factor” as “assisting” in making the consequential decision, Virginia’s bill defines a “substantial factor” as being the “principal basis” of the consequential decision. Thus, Virginia’s bill sets a potentially higher bar for applicability than Colorado’s law.

In addition, Virginia’s bill exempts a person acting in a commercial or employment context from the definition of “consumer,” which creates a carve-out for employers using AI.

It is unclear whether Governor Youngkin will sign this bill. If enacted, the provisions will become effective on July 1, 2026.

Below, we have provided a summary of the bill’s history and requirements. For more updates and analyses of current developments in AI, data privacy and cybersecurity, please subscribe to the WilmerHale Privacy and Cybersecurity Law blog.

Background

Delegate Michelle Lopes Maldonado’s HB 2094 was prefiled on January 7, 2025. The bill passed in the House on February 4 and in the Senate on February 19, passing narrowly in both chambers.

Who and What Does the Bill Apply to?

The bill applies to “developers” and “deployers” of “high-risk AI systems.”

A “developer” is any person doing business in Virginia that develops or intentionally and substantially modifies a high-risk AI system that is offered, sold, leased, given or otherwise made available to deployers or consumers in Virginia. The term “deployer” is defined to include any person doing business in Virginia that deploys or uses a high-risk AI system to make a “consequential decision” in the commonwealth. Consequential decisions are defined as those that have “material, legal, or similarly significant effects on the provision or denial” of certain essential resources to Virginia residents, including “parole, probation, a pardon, or any other release from incarceration or court supervision”; education and enrollment opportunities; access to employment and healthcare services; financial or lending services; housing; insurance; marital status; or a legal service.

What Does the Bill Require?

The bill imposes a duty on developers and deployers to avoid algorithmic discrimination. The bill also places substantive requirements on developers and deployers to design and implement risk management policies and programs.

Developers

Developers are required to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from how the developer intends to use the high-risk AI system. The bill imposes certain documentation requirements on developers offering, selling, leasing, giving or otherwise providing a high-risk AI system, including:

  • a statement disclosing the intended uses of the high-risk AI system;
  • documentation disclosing the limitations and purpose of the system, along with a summary describing how the system was evaluated for performance, the measures taken to mitigate foreseeable risks of algorithmic discrimination, and how an individual can use and monitor the performance of the system;
  • documentation describing the system’s intended outputs and how the system should and should not be used; and
  • any additional documentation reasonably necessary to assist the deployer or another developer in understanding the outputs and monitoring the performance of the high-risk AI system.

Information and documentation may be provided through artifacts such as system cards, pre-deployment impact assessments, risk management policies or any relevant completed impact assessment.

Deployers

Deployers are also required to use reasonable care to protect consumers from the risks of algorithmic discrimination. Under the bill, deployers are prohibited from using high-risk AI systems to make consequential decisions without a risk management policy and program for the system. As models for creating reasonable risk management programs, the bill specifically references the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the International Organization for Standardization’s ISO/IEC 42001 standard, and any framework the state AG designates as equivalent to, or more stringent than, the Act’s requirements.

Deployers are required to complete an impact assessment for a high-risk AI system before using the system to make a consequential decision. The assessment should include:

  • a statement by the deployer disclosing the high-risk AI system’s purpose, intended use cases and benefits (the statement should also evaluate the deployment context of the high-risk AI system and whether the deployment or use poses any known or reasonably foreseeable risk of algorithmic discrimination; if there are such risks, the deployer should disclose (i) the nature of the algorithmic discrimination and (ii) any mitigation steps taken to address such risks); and
  • a description of the (i) categories of data the high-risk AI system processes as inputs and produces as outputs, (ii) transparency measures deployers have taken to inform consumers in real time that a high-risk AI system is in use, and (iii) post-deployment analytics, including information about the monitoring performed, user safeguards provided, and how the intended use cases of the high-risk AI system as updated compare to the developer’s intended use cases.

If an adverse decision is made based on information that was not obtained directly from the consumer, deployers are required to provide consumers with (i) a statement that discloses the reason(s) for the consequential decision and (ii) an opportunity to correct inaccuracies, in accordance with the Virginia Consumer Data Protection Act (§ 59.1-575 et seq.), and appeal the decision.

In enforcement actions, the bill allows for a rebuttable presumption that developers and deployers used reasonable care if they complied with the bill’s requirements. 

Exemptions

Under this bill, the following entities are considered to be in compliance with the bill’s obligations: (i) financial institutions subject to state laws or federal regulations that govern the use of high-risk AI systems, (ii) insurance companies regulated by the State Corporation Commission, (iii) healthcare covered entities and telehealth service providers, and (iv) federal government contractors (with limitations).

Enforcement

The Virginia AG has exclusive enforcement authority and may recover civil penalties of up to $1,000 per violation or $10,000 for each willful violation.

In the case of an enforcement action, the bill creates an affirmative defense for businesses that (i) discover a violation of the Act through “red-teaming” or another method; (ii) cure the violation and provide notice to the Virginia AG regarding the cure and mitigation of associated harms within 45 days; and (iii) otherwise comply with the Act’s requirements.

Looking Ahead

Governor Youngkin has until March 24 to sign or veto the bill or return it with amendments.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© WilmerHale
