US privacy lawyers have long used the “patchwork” metaphor to describe the US privacy legal landscape. Early signs suggest that metaphor may also soon apply to US AI regulation: Colorado adopted An Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (“CO Act”) last year and the Virginia legislature passed the High-Risk Artificial Intelligence Developer and Deployer Act (“VA Bill”) last month, which is awaiting the governor’s signature.
This post provides an overview of these initial efforts to regulate AI in the US and what businesses should be doing to prepare.
Will these laws apply to my business?
If your business develops any AI systems, you will likely be subject to some requirements of the CO Act. That same activity would only subject you to the VA Bill if the developed systems are “high risk.”
Businesses that use AI systems would only be subject to the CO Act or VA Bill if the system qualifies as “high risk.”
Both laws create different sets of requirements for “developers” and “deployers” (which effectively means users) of certain AI systems. The CO Act applies to persons doing business in Colorado who (i) develop artificial intelligence systems generally or (ii) deploy certain “high-risk artificial intelligence systems.”
The VA Bill, by contrast, would apply to any person doing business in Virginia that develops or deploys “a high-risk artificial intelligence system.”
What are AI systems and high-risk AI systems?
The CO Act defines an AI system as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
The VA Bill’s definition is similar, but expressly excludes models used for development, prototyping, and research before the model is made available to deployers or consumers.
The CO Act and VA Bill generally align in providing that a high-risk AI system is one that makes, or is a substantial factor in making, a “consequential decision.” There are exceptions for systems that engage in specific activities, such as “perform[ing] a narrow procedural task,” detecting decision-making patterns without the intent to replace or influence a previously completed human assessment, and performing certain security and information technology functions.
What are the “consequential decisions” that could move an AI system into the high-risk category?
A “consequential decision,” under both the CO Act and VA Bill, is one that has a material legal or similarly significant effect on the provision or denial to any consumer of (a) education enrollment or an education opportunity, (b) employment or an employment opportunity, (c) a financial or lending service, (d) an essential government service, (e) health-care services, (f) housing, (g) insurance, or (h) a legal service.
Under the VA Bill, consequential decisions would also include decisions about the provision or denial of parole, probation, a pardon, or any other release from incarceration or court supervision, as well as decisions about marital status. The practical impact of including those criteria is not immediately obvious to us, though, because government entities—which are more likely than private entities to make decisions involving those criteria—would be excluded from the VA Bill’s scope.
If I’m subject to these laws, what would I have to do?
The requirements vary based on your business’s role as a developer, deployer, or both. Key requirements under both the CO Act and VA Bill for developers include:
- Documentation Requirements. A developer is generally required to develop and make available documentation on various topics to deployers and other developers. Example topics include the data used to develop the system and measures to mitigate effects of algorithmic discrimination.
- Public Disclosures for High-risk AI Systems Under the CO Act. A developer is also required to make available on its website “or in a public use case inventory” a clear statement summarizing the high-risk AI systems it has developed and how the developer manages known or reasonably foreseeable risks of algorithmic discrimination.
- Mandatory Disclosures to the Colorado Attorney General and Deployers Under the CO Act. If a developer discovers or learns from a credible source that its high-risk AI system has caused or is reasonably likely to cause algorithmic discrimination, the developer is required to notify the Colorado attorney general and all known deployers of the relevant system within 90 days. The VA Bill does not have an equivalent requirement.
Key requirements for deployers under both the CO Act and VA Bill include:
- Consumer Notices. Both the CO Act and VA Bill provide for various disclosures to consumers, including:
- Disclosing that the consumer is interacting with an AI system under the CO Act and disclosing that the consumer is interacting with a high-risk AI system under the VA Bill.
- Disclosures regarding the risk of algorithmic bias, the nature and purpose of high-risk AI systems generally under the CO Act, and the nature and purpose of high-risk AI systems that interact with a consumer under the VA Bill.
Notably, the CO Act defines consumer as a Colorado resident generally, while the VA Bill would define consumer as a Virginia resident excluding individuals acting in a commercial or employment context.
- Adverse Decision Notice. If a high-risk AI system makes a decision adverse to a consumer, the deployer must provide the consumer with a statement identifying the reason for the consequential decision, the degree to which the high-risk AI system contributed to the decision, and the type of data that was processed by the system and sources of that data. The consumer must also have the right to appeal and request human review of the decision and correct any inaccurate personal data used to make the decision.
- Risk Management Program and Impact Assessments. Both the CO Act and VA Bill contain provisions regarding risk management programs and impact assessments that require deployers to:
- Maintain a commercially reasonable risk management program and policies that document principles, processes, and personnel the deployer uses to identify, document, and mitigate risks of algorithmic discrimination; and
- Perform an impact assessment that includes several mandatory components, including the metrics used to evaluate the system’s performance and its known limitations.
- Mandatory Disclosures of Algorithmic Discrimination to the Colorado Attorney General. A deployer is required to notify the Colorado attorney general of any algorithmic discrimination within 90 days of discovering that discrimination.
How are these laws enforced?
There is no private right of action under either law. Enforcement authority lies with the attorney general of each state. A violation of the CO Act constitutes an unfair and deceptive trade practice under Colorado law, which is subject to civil penalties of up to $20,000 per violation. The VA Bill provides for civil penalties of $1,000 per violation generally and of $10,000 per willful violation.
When do these laws take effect?
The CO Act will take effect on February 1, 2026. The VA Bill, if signed by the state’s governor, would take effect July 1, 2026.
What should I do to prepare?
Businesses operating in Colorado or Virginia (assuming the governor signs the VA Bill) should take the following steps to prepare:
- Inventory current and contemplated AI use cases. Understanding AI system development and deployment will be a baseline requirement for any further legal analysis.
- For each AI use case, assess whether the business is a developer, deployer, or both. Because obligations vary depending on the business’s relationship to an AI system, determining the role the business plays for each relevant AI use case will be key to appropriately scope compliance obligations.
- Determine which AI use cases involve a high-risk AI system. This is another key scoping inquiry because the CO Act and VA Bill impose more onerous obligations for high-risk AI systems. The best starting point for this inquiry is to evaluate whether any AI systems play any role in consequential decisions. If a consequential decision is implicated, you can then assess the extent of the AI system’s role and whether it qualifies as high risk.
- Prepare required disclosures. Review the applicable disclosure requirements and prepare content on those topics, including a notice to consumers that AI is present in a system where required.
- Conduct and document impact assessments. Businesses may be able to leverage processes used for data protection assessments under state privacy laws. The requirements are not identical, but the privacy law requirements have enough commonality to be useful in the AI context.
- Draft and implement AI risk management program and policies. You can use a framework like the NIST AI RMF or ISO/IEC 42001 as a starting point as both the CO Act and VA Bill treat those frameworks favorably.