Colorado Enacts AI Consumer Protection Legislation

Jones Day

On May 17, 2024, Colorado enacted S.B. 24-205 (the "Act"), which imposes a duty of reasonable care on developers and deployers of high-risk artificial intelligence ("AI") systems to protect consumers from risks of algorithmic discrimination.

Colorado is the first U.S. state to enact comprehensive legislation regulating high-risk AI systems. Effective February 1, 2026, the Act imposes sweeping compliance requirements on developers and deployers of high-risk AI systems. The Act will be enforced by the Colorado Attorney General ("AG"), and violations will constitute unfair and deceptive trade practices under the Colorado Consumer Protection Act.

A high-risk AI system is any AI system that makes a "consequential decision": a decision with a material legal or similarly significant effect on the provision or denial to a Colorado consumer of, or on the cost or terms of, education, employment, financial services, essential government services, health care, housing, insurance, or legal services. AI systems intended to perform narrow procedural tasks or to detect decision-making patterns or deviations from prior patterns are not considered high risk, nor are certain enumerated technologies, such as antivirus software and firewalls.

Developers and deployers must use reasonable care to protect consumers from any known or foreseeable risks of algorithmic discrimination arising from intended and contracted uses of high-risk AI systems. Compliance with the Act creates a rebuttable presumption that the developer or deployer used reasonable care. 

The Act imposes different requirements on developers and deployers. A developer must comply with notice and documentation requirements, including making available to deployers the documentation they need to complete impact assessments.

Deployers must implement a risk management policy and program to identify and mitigate known or reasonably foreseeable risks of algorithmic discrimination; in assessing the reasonableness of that program, they may consult nationally or internationally recognized guidance, such as the AI Risk Management Framework from the National Institute of Standards and Technology. Deployers also must conduct an impact assessment, review each deployed high-risk AI system annually, and provide consumers with certain notices and rights. Finally, except where "obvious," the Act requires deployers of AI systems (not just high-risk AI systems) intended to interact with consumers to disclose to each such consumer that they are interacting with an AI system.

Developers and deployers have separate reporting obligations. A developer must report to the Colorado AG within 90 days after learning that a deployed high-risk AI system poses known or reasonably foreseeable risks of algorithmic discrimination. A deployer must report to the Colorado AG within 90 days of discovering that a high-risk AI system has caused algorithmic discrimination.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Jones Day | Attorney Advertising
