Colorado Enacts Groundbreaking Artificial Intelligence Act

Troutman Pepper

[co-author: Stephanie Kozol]*

On May 17, 2024, Colorado Governor Jared Polis signed into law Senate Bill 24-205, the Colorado Artificial Intelligence (AI) Act, making Colorado the first U.S. state to enact comprehensive legislation regulating the use and development of AI systems. The act is designed to regulate the private-sector use of AI systems, particularly addressing the risk of algorithmic discrimination arising from the use of so-called “high-risk AI systems.” The law will take effect on February 1, 2026, and the Colorado attorney general (AG) has exclusive enforcement authority.

Overview

The Colorado AI Act regulates "developers" (i.e., entities or individuals who create or substantially modify AI systems) and "deployers" (i.e., entities or individuals who use AI systems to make decisions or assist in decision-making) who develop or deploy "high-risk" AI systems. An AI system is considered "high-risk" if it "makes, or is a substantial factor in making, a consequential decision." In turn, a "consequential decision" is any decision that can significantly impact an individual's legal or economic interests, such as decisions related to employment, housing, credit, and insurance.

The legislative impetus for the act is the concern that consequential decisions, when influenced or driven by AI systems, can potentially lead to “algorithmic discrimination.” The act defines algorithmic discrimination as a “condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals” on the basis of protected classifications. Accordingly, the act imposes various documentation, disclosure, and compliance obligations on developers and deployers that are intended to identify and prevent such discrimination.

Developer Obligations

Under the act, developers of high-risk AI systems are required to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. In connection with this obligation, developers are also required to provide specific documentation to deployers or other developers of high-risk AI systems, including a general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the system, and detailed information about the system’s training data, limitations, purpose, intended benefits, and uses. Developers must also provide additional documentation necessary to assist in understanding the outputs of the AI system and how to monitor algorithmic decisions for bias.

Deployer Obligations

Deployers are also subject to a duty of reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. They are required to implement a risk management policy and program that is reasonable in light of applicable government standards, the size and complexity of the deployer, the nature and scope of the system, and the sensitivity and volume of the data processed.

Deployers must conduct impact assessments of the AI system annually and after any intentional and substantial modification. These assessments must include a statement of the system's purpose and intended use cases, an analysis of the risks of algorithmic discrimination, a description of the data types used as inputs and outputs, the metrics used to evaluate the system, the transparency measures taken, and a description of post-deployment monitoring and user safeguards.

In addition, deployers are required to inform consumers that the deployer has deployed a high-risk AI system to make decisions; provide a statement of the purpose of the system and the nature of the decisions it makes; and provide information regarding the consumer's right to opt out of the processing of personal data concerning the consumer for purposes of profiling.

If a decision is adverse to the consumer, the deployer must provide the consumer with a statement disclosing the reasons for the decision and the data used to make the decision, an opportunity to correct any incorrect data, and an opportunity to appeal the decision. Importantly, the notices must be provided directly to the consumer, in plain language, in a manner accessible to disabled individuals.

The AG’s Role

Both developers and deployers are required to disclose to the AG any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of a high-risk AI system. This disclosure is mandatory and must be made within 90 days after a developer or deployer: (1) discovers that the deployed system has caused or is reasonably likely to have caused algorithmic discrimination; or (2) receives a credible report indicating such an occurrence.

The AG may require developers and deployers to provide a general statement describing the reasonably foreseeable and potentially harmful uses of the high-risk AI system. While making these disclosures, developers and deployers can designate the information as proprietary or a trade secret. Importantly, any information subject to attorney-client privilege or work-product protection is not considered waived upon disclosure.

Finally, the act grants the AG exclusive enforcement authority. A violation of the act constitutes an unfair trade practice under the Colorado Consumer Protection Act. The AG may seek injunctive relief, an assurance of discontinuance, damages, and civil penalties of up to $20,000 per violation, as well as any other relief necessary to ensure compliance with the act.

Why It Matters

The Colorado AI Act is a pioneering piece of legislation. As the first comprehensive state law governing the use and development of AI systems, it sets a precedent for other states, and potentially for federal legislation, that will shape the future of AI regulation.

With the law set to take effect on February 1, 2026, developers and deployers of AI systems have less than two years to ensure compliance with its requirements. Given the technical complexity of how AI models function, compliance may be challenging. Moreover, the process of auditing AI systems for bias can be resource-intensive. As such, companies that develop or deploy high-risk AI systems should take a compliance-by-design approach when building AI models.

Troutman Pepper will continue to monitor developments and will provide updates as additional information becomes available.

*Senior Government Relations Manager

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Troutman Pepper
