Colorado Passes Law Requiring Governance Measures for High-Risk AI

King & Spalding

Colorado became the first state to comprehensively regulate artificial intelligence (“AI”), passing Senate Bill 24-205, the Colorado Artificial Intelligence Act, on May 17, 2024 (the “Act”). The Act establishes the nation’s first comprehensive consumer protection legislation governing interactions with high-risk artificial intelligence systems and requires developers and deployers of such systems to adopt specific governance measures.

The Act takes effect on February 1, 2026, with rulemaking from the Attorney General expected before then. The Attorney General has exclusive authority to enforce violations of the Act under the state’s unfair trade practices statute, with penalties of up to $20,000 per violation.

Key Takeaways

  • The Act primarily applies to “high-risk artificial intelligence” systems (“High-Risk AI”) that make “consequential decisions” about consumers. Businesses should consider assessing their current AI systems to evaluate whether they are in scope or meet the Act’s enumerated exceptions.
  • The Act requires all AI systems—regardless of associated risk—to transparently disclose the use of AI to consumers. Businesses should ensure that all of their AI systems include appropriate disclosures.
  • The Act applies to both “developers” and “deployers” doing business in Colorado. Businesses must be prepared to evaluate their own in-house development of AI as well as the use of third-party AI and corresponding oversight measures.
  • The Act protects “consumers,” defined as individuals who are state residents. Businesses should prepare to evaluate their use of AI as it relates to traditional consumers as well as employees and other workforce members.

High-Risk AI

The Act primarily regulates High-Risk AI, meaning systems that make, or are a substantial factor in making, “consequential decisions” about consumers. A “consequential decision” is “a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of” opportunities and services related to education, employment, finance, healthcare, housing, insurance, legal services, or essential government services.

In addition to a list of enumerated technologies, the Act specifically excludes certain AI from the high-risk designation, such as systems intended to “perform a narrow procedural task” or review “previously completed human assessment.” The Act also excludes certain generative AI, provided that the generative AI is “subject to an acceptable use policy that prohibits generating content that is discriminatory or harmful.”

Addressing Algorithmic Discrimination

The Act requires developers and deployers of High-Risk AI to protect consumers against algorithmic discrimination. The Act creates a rebuttable presumption that deployers and developers have satisfied this duty if they comply with the safeguards specified in the Act. Moreover, it is an affirmative defense against enforcement if an entity: (i) cures the alleged violation as a result of external feedback, internal reviews, or red team testing of the High-Risk AI; and (ii) is otherwise in compliance with its risk management framework.

Safeguards

Deployers and developers of High-Risk AI are required to implement several safeguards based on their respective designations. These include:

  • Complying with recognized risk management frameworks, such as the Artificial Intelligence Risk Management Framework promulgated by NIST;
  • Publicly disclosing how risks of algorithmic discrimination are managed and providing notices to consumers upon adverse decisions;
  • Conducting eight-part impact assessments annually and upon substantial modification; and
  • Affirmatively reporting instances of algorithmic discrimination to the Attorney General.

While the AI legal landscape is still developing, themes around governance and disclosure are quickly coalescing. In addition to the Act, businesses using or developing AI should assess potential legal obligations under state laws already in effect. These include Utah’s Artificial Intelligence Policy Act, which requires covered businesses to clearly and conspicuously disclose the use of generative AI to consumers, as well as consumer data privacy laws such as the Colorado Privacy Act, which require businesses to conduct impact assessments and offer consumers opt-out rights relating to uses of AI in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© King & Spalding | Attorney Advertising

Written by:

King & Spalding
