Friday, May 17, 2024: Colorado Enacted Nation’s First Ever (Nightmare) Law Addressing “Algorithmic Discrimination” in “High-Risk” AI Systems
Law Goes Beyond Current State & Federal Civil Rights Laws
Governor Called for Flawed Law’s Amendment Prior to Effective Date
With reservations, Colorado Governor Jared Polis signed into law S.B. 24-205, a measure intended to avoid “algorithmic discrimination” in “high-risk” artificial intelligence (“AI”) systems.
Many New Ambiguous Technical Legal Definitions
The new law creates a new legal vocabulary and supplies these important definitions:
“‘Algorithmic discrimination’ means any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law[.]” (emphases added)
Note: The law seeks to incorporate all federal laws into this state law’s prohibitions.
A “high-risk” AI system is “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making a consequential decision.” (emphases added)
Note 1: There is also a lengthy set of exclusions in the bill from the definition of a “high-risk” AI system.
Note 2: Take note of this emerging phrase “high-risk AI system.” It is becoming a new internationally used legal “term of art” (meaning it has a special legally defined meaning). You will see this phraseology in other statutes trying to limit the development and implementation of AI software tools.
“‘Consequential decision’ means a decision that has a material legal or similarly significant effect (emphases added: who knows what that phrase means?) on the provision or denial to any consumer of, or the cost or terms of:
- educational enrollment or an education opportunity;
- employment or an employment opportunity;
- a financial or lending service;
- an essential government service;
- health-care services;
- housing;
- insurance; or
- a legal service.”
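The statute's list of domains, together with the "substantial factor" test in the "high-risk" definition, can be read as a two-part screen. The sketch below is purely this author's illustration of that reading; the domain labels, function name, and structure are hypothetical and appear nowhere in the statute:

```python
# Hypothetical sketch of the statutory two-part screen: an AI system is
# "high-risk" if it makes, or is a substantial factor in making, a
# "consequential decision" in one of the listed domains. The short
# domain labels below paraphrase the bill's list; they are illustrative.

CONSEQUENTIAL_DOMAINS = {
    "education",    # educational enrollment or an education opportunity
    "employment",   # employment or an employment opportunity
    "financial",    # a financial or lending service
    "government",   # an essential government service
    "health-care",  # health-care services
    "housing",
    "insurance",
    "legal",        # a legal service
}

def is_high_risk(decision_domain: str, substantial_factor: bool) -> bool:
    """True when the system is a substantial factor in a consequential
    decision within one of the statute's enumerated domains."""
    return substantial_factor and decision_domain in CONSEQUENTIAL_DOMAINS

print(is_high_risk("housing", substantial_factor=True))    # True
print(is_high_risk("marketing", substantial_factor=True))  # False
```

Note that the sketch ignores the statute's lengthy exclusions from the "high-risk" definition, discussed below, which any real compliance analysis would have to apply.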
Multiple published reports noted that Colorado’s law is the first comprehensive legal framework in the United States addressing AI. The Colorado legislature published a summary of the measure here.
Proactive Legal Quality Control Checks
The law will take effect February 1, 2026. The Legislature allowed a long runway before the effective date so that AI developers can exercise “reasonable care” to avoid unlawful discrimination when deploying high-risk AI systems (whether in employment, housing, financial services, or elsewhere). The law thus draws on a notion taken from “affirmative action” law: it requires developers to ensure, in advance of an AI system’s implementation, that no unlawful discrimination will occur through its use.
Accordingly, much as federal Executive Order 11246 requires Affirmative Action Plan “evaluations,” AI developers must develop analogous “risk management strategies” to help protect in advance against algorithmic discrimination. As a result, AI software developers are now going to need to retain the services of employment defense litigators and statisticians experienced in Title VII and Executive Order 11246 “adverse impact” analyses to quality-control test every AI tool before its deployment for commercial use.
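To make concrete what such an “adverse impact” quality-control test might look like, the sketch below applies the four-fifths (80%) rule, the screening heuristic long used in Title VII selection-rate analyses. This is a minimal illustration only: the Colorado statute does not prescribe this (or any) particular statistical test, and the function names and threshold are this author's assumptions:

```python
# Minimal sketch of a four-fifths (80%) rule screen on an AI tool's
# selection rates. Illustrative only; the statute does not mandate
# this specific test, and a real analysis would add significance testing.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the AI tool selected."""
    if applicants == 0:
        raise ValueError("no applicants in group")
    return selected / applicants

def flags_adverse_impact(protected_rate: float,
                         comparison_rate: float,
                         threshold: float = 0.8) -> bool:
    """True if the protected group's selection rate is less than 80%
    of the most-favored group's rate, warranting further review."""
    return (protected_rate / comparison_rate) < threshold

# Example: the tool selects 30 of 100 applicants in one group and
# 60 of 100 in another; the ratio 0.30 / 0.60 = 0.5 falls below 0.8.
protected = selection_rate(30, 100)
comparison = selection_rate(60, 100)
print(flags_adverse_impact(protected, comparison))  # True
```

In practice, statisticians pair this screen with tests of statistical significance before drawing any conclusion, which is precisely why the law may push developers toward the Title VII-experienced experts described above.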
Note: Neither Title VII nor Executive Order 11246 nor their implementing Rules require covered employers/federal contractors to undertake “adverse impact” analyses as the Colorado law clearly does. The new Colorado law also provides a rebuttable presumption that a developer or deployer used “reasonable care” if it complied with the law’s requirements and any additional requirements that the state’s Attorney General (“AG”) may set forth.
What? More “law” yet to come…from an AG now ostensibly imbued with legislative-like powers to define new substantive legal requirements the Colorado Legislature did not envision and install in the bill (i.e., “…and anything else the Attorney General can think of to add that we have not thought of”?). That is a legal first. It also more likely than not violates Colorado’s separation of powers doctrine, which separates the roles and powers of the state’s Legislative and Executive Branches, quite apart from being too vague and ambiguous to be enforceable.
Statutorily Required Confessions of Guilt
Uniquely, however, AI software developers must also “turn themselves in” by self-reporting to the state’s Attorney General any known discovery of unlawful algorithmic discrimination. One unintended consequence of this new Colorado law is that this “self-arrest” component will also raise significant “whistleblower” issues as software developers and lawyers internally debate whether a company’s newly developed AI software tool makes, or is a substantial factor in making, a “consequential decision.”
Drawing on notification requirements developed in consumer protection statutes, the new law will also require deployers to adequately notify “consumers” when a high-risk AI system makes, or is a substantial factor in making, a “consequential decision” about a consumer. (The measure defines “consumer” as “an individual who is a Colorado resident.”)
Governor Polis Concerned Measure, Without Refinement, May Hamper AI Development
Sounding an extraordinarily queasy lack of confidence in the new measure on his desk for signature, though schizophrenically not enough to veto the bill (I’m against it, but I am really for it!), Colorado Governor Polis invited the Colorado legislature to rewrite the law before it becomes legally effective in two years. Here is what Governor Polis wrote in his signing statement:
“Stakeholders, including industry leaders, must take the intervening two years before this measure takes effect to fine tune the provisions and ensure that the final product does not hamper development and expansion of new technologies in Colorado that can improve the lives of individuals across our state. It is critical that such discussions among stakeholders be based on a robust understanding of how the AI industry is developing, the impact of creating a separate anti-discrimination framework for AI systems only, and what our country is doing as a whole to adapt to this change in our society.”
Justification for the Measure
In a press release earlier this month, Colorado Senate Democrats asserted that “[a]lgorithmic discrimination has been shown to make biased determinations in cases involving hiring practices, housing applications, financial services, and health care coverage.” However, neither the press release nor any hearings during the Colorado Senate’s deliberations on the bill identified any hiring practice, housing application, financial service, or health care coverage denied due to unlawful algorithmic software tools. Rather, the new Colorado law appears to be another regulatory knee-jerk reaction rooted only in a fear of the unknown.
Senate Majority Leader Robert Rodriguez, one of the measure’s sponsors, said:
“AI systems are evolving faster than we can write and pass policy on them – which is why we need to act now. Many system’s algorithms have biases baked in and can easily result in discriminatory outcomes when it comes to housing applications, hiring practices, and more. This important bill will establish foundational guardrails for developers utilizing high risk AI systems with a goal of reducing algorithmic discrimination and creating a safer user experience for consumers. However, this is just a first step, and as technology continues to evolve, our work in this space must evolve alongside it.”
Forthcoming European Union AI Law
An article published on the Tech Policy Press website notes that the Colorado measure “will also be the first comprehensive AI law to come into effect globally, ahead of even the European Union AI Act.” While the European Union (“EU”) passed its AI Act on March 13, 2024, there are still several steps to go before it will take legal effect. The European Parliament website explains that following a couple of additional steps:
“[The EU AI Act] will enter into force twenty days after its publication in the official Journal, and be fully applicable 24 months after its entry into force, except for: bans on prohibited practices, which will apply six months after the entry into force date; codes of practise [U.K. spelling of the verb] (nine months after entry into force); general-purpose AI rules including governance (12 months after entry into force); and obligations for high-risk systems (36 months).”