What Employers Need to Know About Colorado’s New AI Law

Husch Blackwell LLP

Colorado recently became the first state to regulate high-risk artificial intelligence (AI) systems, requiring developers and deployers of those systems to protect against algorithmic discrimination. The Colorado AI Act is broad in scope and will apply to businesses using AI for certain employment purposes, imposing numerous compliance obligations and potential liability for algorithmic discrimination.

When does the law go into effect?

Not for quite some time. The law is scheduled to take effect on February 1, 2026, and will undergo rulemaking by the Colorado Attorney General’s Office over the next year and a half. In addition, a legislative task force established through a separate bill will review the law for potential amendments next legislative session.

What does the law regulate?

The law regulates the development and deployment of “high-risk” AI systems generally, but this article focuses on the employment-related aspects of the legislation. For context, the Act includes the following definitions:

  • An “artificial intelligence system” is defined as any machine-based system that “infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
  • A “high-risk artificial intelligence system” is defined as any AI system that makes or is a “substantial factor” in making a “consequential decision” when used.
  • A “consequential decision” is defined as any decision that has a “material legal or similarly significant effect” in areas such as employment, education, healthcare, housing, insurance, and financial services.

Therefore, any AI tool that is a “substantial factor” in employment decisions involving hiring, retention, promotion, and other areas that significantly affect employees will likely be classified as a “high-risk” AI system subject to the Act. However, an AI system is not “high-risk” if it performs narrow procedural tasks or detects decision-making patterns or deviations from prior patterns, and is not intended to replace or influence a previously completed human assessment without sufficient human review. High-risk AI systems also do not include certain technologies, such as database, anti-virus, and cybersecurity software, unless they make or are a substantial factor in making consequential decisions.

What is “algorithmic discrimination”?

The Act seeks to prevent “algorithmic discrimination” in the consequential decisions mentioned above, which occurs when the use of an AI system results in differential treatment or impact that disfavors an individual or group based on classifications protected under state or federal law (such as age, race, or gender). For example, if an algorithm used to screen candidates is trained on data that mostly included successful candidates from one gender or race, it might unfairly favor similar candidates in the future, disadvantaging other qualified individuals. In short, it is when AI technology ends up perpetuating or amplifying existing biases and inequalities, which the Act is designed to prevent.

Importantly for employers, the Act excludes from the definition of algorithmic discrimination the use of high-risk AI systems for the “sole purpose” of expanding an applicant pool to increase diversity or redress historical discrimination.

What does the Act mean by “developers” and “deployers”?

The Act bifurcates compliance obligations between “developers” and “deployers” of AI systems. A “developer” is a person or entity doing business in Colorado that “develops or intentionally and substantially modifies an artificial intelligence system.” A “deployer” includes any individual or entity doing business in Colorado that uses a high-risk AI system.

Given these definitions, most Colorado employers (including out-of-state companies with Colorado employees) using AI tools will fall under the definition of “deployer” and be subject to the requirements discussed below. But it is important to note that the definition of “deployer” only applies to “high-risk” AI systems, while the definition of “developer” applies more broadly to an AI system whether or not it is considered “high-risk.”

In some instances, an employer may also be classified as a developer if they design their own AI systems or “intentionally and substantially modify” an AI system provided by a third party. The Act defines an “intentional and substantial modification” as a “deliberate change” to an AI system that “results in any new reasonably foreseeable risk of algorithmic discrimination.” It is unclear at this time how this will apply (or not apply) in specific real-world situations, such as when an employer modifies a third party’s applicant-screening AI tool to incorporate new training data or to adjust how the tool weighs certain qualifications over others. It is likely, however, that the Attorney General’s Office will provide some clarity on this issue during the rulemaking process.

What requirements apply to employers that are deployers?

Employers classified as deployers under the Act must exercise reasonable care to protect against known or foreseeable risks of algorithmic discrimination. The Act provides a rebuttable presumption that an employer used reasonable care if they comply with the deployer-specific requirements of the Act. Those requirements include:

  • Implementing a reasonable risk management policy and program to govern the employer’s use of the high-risk AI system. Among other requirements, the policy and program must specify the principles, processes, and personnel that the employer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination.
  • Completing an annual impact assessment for the AI system or contracting with a third-party to complete the assessment. Such assessments must include, among other information, a statement disclosing the purpose, intended use cases, deployment context, and benefits of the high-risk AI system. As part of this assessment requirement, employers must also review the deployment of each high-risk AI system annually to ensure that the system is not causing algorithmic discrimination.
  • Providing notice to employees that a high-risk AI system is being used to make consequential decisions concerning their employment and disclosing the purpose of the AI system and nature of the consequential decisions.
  • Providing an explanation to any employee who was subject to an adverse consequential decision made by the AI system that discloses the principal reasons for the decision, the degree to which the AI system contributed to the decision, and the types and sources of data processed by the AI system.
  • Providing employees subject to an adverse consequential decision an opportunity to appeal the decision for human review if technically feasible.
  • Publicly disclosing on the employer’s website a summary of the types of high-risk AI systems being used, how known or reasonably foreseeable risks of algorithmic discrimination are being managed, and the information collected and used by the employer.
  • Providing notice to the Attorney General within 90 days if the employer discovers that its high-risk AI system has caused algorithmic discrimination. The Act also allows the Attorney General to require that employers (and other deployers) disclose their risk management policy, impact assessments, and other records to ensure compliance.

Note that companies with fewer than 50 employees are exempt from the requirements for risk management policies and programs, impact assessments, and website disclosures. However, additional conditions must be satisfied for those exemptions to apply.

What requirements apply to employers that are developers?

As stated above, employers are less likely to be classified as developers of AI systems unless they design their own AI system or “intentionally and substantially modify” a system developed by a third party. However, to the extent that an employer does meet the definition of a developer, they must similarly exercise reasonable care to protect against known or foreseeable risks of algorithmic discrimination and are afforded a rebuttable presumption of using reasonable care if they comply with developer-specific requirements.

Generally, the Act requires developers to provide to deployers and other developers of their high-risk AI system:

  • A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system;
  • Documentation that discloses:
      ◦ summaries of the types of data used to train the high-risk AI system, known or reasonably foreseeable limitations of the system, the purposes of the system, the system’s intended benefits and uses, and other information necessary to allow deployers to comply with the requirements outlined above;
      ◦ how the high-risk AI system was evaluated for performance and to mitigate algorithmic discrimination, and the intended outputs of the system, among other information;
      ◦ information necessary to assist the deployer in understanding the outputs and monitoring the performance of the system for risks of algorithmic discrimination; and
      ◦ information necessary for a deployer to complete an impact assessment (unless the developer is also the deployer of the high-risk AI system).

Additionally, developers must disclose on their website or in a public use case inventory a statement summarizing: (1) the types of currently available high-risk AI systems that the developer has developed or intentionally and substantially modified; and (2) how the developer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise.

Finally, if a developer discovers that its high-risk AI system has been deployed and has caused or is reasonably likely to have caused algorithmic discrimination, it must disclose such information to the Attorney General and all known deployers or other developers within 90 days of the discovery.

Do any obligations apply to employers who are not developers or deployers of high-risk AI systems?

In certain circumstances, yes. The Act also requires that consumers be told when they are interacting with an AI system, such as an AI chatbot, unless that fact would be obvious to a reasonable person. A “consumer” is broadly defined by the Act as any Colorado resident, which includes employees. This disclosure obligation applies to deployers and developers of any AI system (not just “high-risk” systems) that is intended to interact with consumers.

Therefore, regardless of whether an AI system is classified as “high-risk,” employers using AI systems to interact with their employees, such as chatbots that answer HR-related questions, must disclose to those employees that they are interacting with AI.

How is the Act enforced?

A deployer’s or developer’s violation of the Act constitutes an unfair or deceptive trade practice under pre-existing Colorado law. Importantly, the Act does not provide a private right of action and is only enforceable by the Attorney General’s office. This means that individual employees cannot directly sue employers for violations of the law.

In an enforcement action, deployers and developers have an affirmative defense if they discover and cure the violation and are otherwise in compliance with the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework, another nationally or internationally recognized risk management framework, or a framework designated by the Attorney General.

What should employers do now?

Although the Act does not go into effect until February 1, 2026, employers should begin assessing whether current or planned implementations of AI will be subject to the Act, and familiarize themselves with the Act’s requirements. As part of this process, employers should closely scrutinize any contracts or terms of service entered into with their AI developers, which may play a large role in shifting liability if a developer’s AI system ever results in algorithmic discrimination. Employers should also monitor the Attorney General’s rulemaking process to better prepare compliance measures and provide comments if appropriate.

Employers should also be mindful of other U.S. laws and guidance already in effect regulating the use of AI in employment, such as (i) the Equal Employment Opportunity Commission’s technical assistance documents; (ii) the Department of Labor’s principles regarding AI; (iii) the Department of Labor’s recent Field Assistance Bulletin on AI’s application to the FLSA, FMLA, and other federal labor standards; (iv) New York City’s Local Law 144 regulating automated employment decision tools; and (v) laws in Illinois and Maryland targeting the use of AI and facial recognition software in job interviews.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Husch Blackwell LLP | Attorney Advertising
