Colorado Artificial Intelligence Act: 5 Things You Should Know

Orrick, Herrington & Sutcliffe LLP

Colorado has enacted a first-of-its-kind Artificial Intelligence Act governing the development and use of artificial intelligence. 

Here are five things you should know about the Colorado AI Act in its current form—and how it may change before it takes effect. 

1. The Act’s framework will evolve before implementation in 2026.

While the AI Act will not go into effect until February 2026 at the earliest, Colorado already faces mounting pressure to change the law due to concerns about unintended impacts on consumers and businesses. 

Colorado Gov. Jared Polis said in a letter that legislators plan to revise the law “to ensure the final regulatory framework will protect consumers and support Colorado’s leadership in the AI sector.”  

2. The Act applies primarily to high-risk AI systems.

The Act applies only to “high-risk artificial intelligence systems,” defined as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.” 

  • Artificial Intelligence System: “[A]ny machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs . . . that can influence physical or virtual environments.” 
  • Consequential Decision: “A decision that has a material legal or similarly significant effect on the provision or denial to any [Colorado resident] of, or the cost or terms of: 
    • Education enrollment or an education opportunity.
    • Employment or an employment opportunity.
    • A financial or lending service.
    • An essential government service, health-care services, housing, insurance or a legal service.”

Despite several exceptions for systems that perform narrow procedural tasks or augment decision-making, these definitions can be interpreted broadly to apply to a wide range of technologies. 

The governor’s letter makes clear that revisions to the Act will refine the definitions to ensure the Act governs only the most high-risk systems. 

As a result, the Act in its final form is likely to apply only to AI systems that meaningfully influence decisions with a material legal or similarly significant effect on designated high-importance services. 

3. Developers have a duty to avoid algorithmic discrimination.

The Act applies to anyone who does business in Colorado and develops or intentionally and substantially modifies a high-risk artificial intelligence system. It requires them to use reasonable care to protect consumers from algorithmic discrimination. 

Developers must make documentation available to deployers or other developers of the system. The documentation must disclose, among other things: 

  • The purpose, intended benefits, and reasonably foreseeable uses of the system.
  • The type of data used to train the system and the governance measures implemented in the training process. 
  • The limitations of the system. 
  • The evaluation performed on the system to address algorithmic discrimination. 
  • The measures taken to mitigate risks of algorithmic discrimination. 
  • How the system should be used, not used, and monitored. 
  • Any other information reasonably necessary to help deployers address their obligations under the law.

In its current form, the Act requires developers to proactively inform the Colorado Attorney General and known deployers/developers of any algorithmic discrimination issues. The governor’s letter, however, indicates an intent to shift to a more traditional enforcement framework without mandatory proactive disclosures. 

4. Deployers also have a duty to avoid algorithmic discrimination.

The Act also requires anyone who does business in Colorado and uses a high-risk artificial intelligence system to use reasonable care to protect consumers from algorithmic discrimination relating to such systems. Deployers must:

  • Implement a risk management policy and program to govern the deployment of the high-risk artificial intelligence system.
  • Complete impact assessments for the high-risk artificial intelligence system. 

As passed, the Act would require deployers to proactively inform the Colorado Attorney General of any algorithmic discrimination. The governor’s letter, though, indicates that Colorado intends to shift to a more traditional enforcement framework without mandatory proactive disclosures. 

In addition, the letter says legislators plan to amend the Act to focus regulation on the developers of high-risk artificial intelligence systems rather than smaller companies that deploy them. As a result, we may see scaled-back deployer obligations or broader deployer exemptions in the final implemented regulatory framework. 

5. The law gives consumers rights relating to artificial intelligence systems.

Developers and deployers must provide a public statement to consumers summarizing the types of high-risk artificial intelligence systems they develop or use, and how they mitigate algorithmic discrimination risks. 

Deployers also must notify consumers, before a consequential decision is made, when they use a high-risk artificial intelligence system to make that decision or when such a system is a substantial factor in making it. Deployers must also provide the consumer with information about the decision and, where available, the right to opt out. 

If a high-risk artificial intelligence system results in an adverse decision for a consumer, the deployer must:

  • Disclose to the consumer: 
    • The principal reason or reasons for the decision. 
    • The degree to which the system contributed to the decision.
    • The type of data processed by the system in making the decision and their sources.
  • Provide an opportunity to correct data processed by the system to make the decision.
  • Offer an opportunity to appeal the decision and seek human review. 

Lastly, the Act requires that any artificial intelligence system (whether high-risk or not) intended to interact with consumers be accompanied by a disclosure that the consumer is interacting with an artificial intelligence system.    

What does this mean for your business? 

While the final form of the Colorado Artificial Intelligence Act may deviate from the version passed by the state legislature, businesses should start preparing for material AI regulation by: 

  • Developing an organizational framework for evaluating and managing AI-related risks. 
  • Preparing records and documentation for AI the business develops, outlining how the systems were developed, how they should be used, and any measures taken to mitigate risks relating to their use. 
  • Establishing a process for assessing risks and potential impacts posed by the deployment of third-party AI. 
  • Expanding organizational procedures, including third-party contracting and management procedures, to take into consideration unique AI risks. 

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Orrick, Herrington & Sutcliffe LLP | Attorney Advertising
