Utah, Colorado Pave Way for AI-Specific State Laws – Is Your Company Ready for the Impending Regulation Wave?

Cooley LLP

The regulation of artificial intelligence (AI) has drawn significant interest from US policymakers, particularly at the state level. Recent months have seen a slew of legislative activity on comprehensive AI bills across various states, and we expect this wave of state-level AI regulation to continue building in the coming months. Early state AI laws have the potential to exert an outsized influence on the trajectory of AI regulation in the US.

The push for comprehensive AI laws stems from the need to ensure consumer protection in the use of AI – including transparent and nondiscriminatory use of AI and appropriate protections around the collection and processing of personal data in connection with AI. To this point, regulators appear to be focused on how companies use consumer personal data to train and develop their AI and machine-learning models and algorithms. As we discussed in our January 2024 cyber/data/privacy insights blog post on the Federal Trade Commission’s Rite Aid enforcement action, the use of personal data to train AI and machine-learning algorithms can significantly impact consumers, including by resulting in algorithmic discrimination.

The laws recently enacted in Utah and Colorado focus on broader consumer protection objectives in the use of AI. This blog post summarizes those laws, covers AI bills that have gained significant traction in Connecticut and California, and suggests what companies should be doing now to prepare.

Utah’s Artificial Intelligence Policy Act and Colorado’s SB 205

On March 13, 2024, Utah became the first US state to enact a broad consumer protection statute specifically governing AI with the passage of the Utah Artificial Intelligence Policy Act (AIPA), which has a particular focus on ensuring transparent use of AI. Effective as of May 1, 2024, the AIPA imposes disclosure obligations on covered entities related to their use of generative AI (gen AI) technologies and provides for liability for violations, including civil penalties calculated on a per-violation basis.

To encourage innovation, the AIPA also creates a new AI regulatory body, the Office of Artificial Intelligence Policy, tasked with establishing an AI “Learning Laboratory Program” aimed at, among other things, analyzing risks and benefits related to the development and use of AI technologies (and their related policy implications) to inform the state’s broader approach to regulating AI. The AIPA will establish procedures for inviting entities to participate in – and receiving requests to participate in – the learning laboratory. In line with this industry engagement, the AIPA also introduces the opportunity for participants to enter into regulatory mitigation agreements, which provide participants with the option to mitigate certain regulatory consequences in exchange for the participant agreeing to implement certain safeguards and limit the scope of use of their technology.    

Following on Utah’s heels, Colorado has enacted its own comprehensive AI regulation with SB 205, known as the Colorado AI Act, which was signed into law by Gov. Jared Polis on May 17, 2024. The law will go into effect on February 1, 2026. Compared to Utah’s AIPA, which focuses primarily on transparency through disclosure requirements for the deployment of AI in consumer interactions, Colorado’s SB 205 imposes a wider range of obligations on developers and deployers of certain AI systems, focused primarily on algorithmic discrimination. It also adopts a risk-based approach to AI regulation, similar to the European Union AI Act – the world’s first major law to regulate AI, passed by the European Parliament in March 2024.

The following comparison covers some of the core aspects of the two acts, issue by issue:

Which companies are affected?

Colorado’s SB 205:

Developers – SB 205 applies to any person doing business in Colorado that develops, or intentionally and substantially modifies, an AI system, including a general-purpose or high-risk AI system.

Deployers – SB 205 also applies to any person doing business in Colorado that deploys a high-risk AI system. Note, “deploy” is defined broadly to include any use of a high-risk AI system.

Utah’s AIPA:

Nonregulated occupations – The AIPA applies to any person or entity that “uses, prompts, or otherwise causes” gen AI to interact with a person in connection with any activity subject to Utah’s consumer protection laws,1 including the Utah Consumer Privacy Act (UCPA).

Regulated occupations – The AIPA also applies to any person or entity providing services in “regulated occupations” – i.e., those requiring a license or other state certification to practice the occupation, such as medicine or accounting – that uses gen AI for consumer interactions in the provision of the regulated service. Regulated occupations have additional disclosure obligations, which are described below.

What types of AI are covered?

Colorado’s SB 205:

The majority of SB 205’s requirements apply only to high-risk AI systems, whether generative or nongenerative. The statute defines a high-risk AI system as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision” about a consumer. A “consequential decision” is one that has a significant effect on the provision or denial, or cost or terms, of education enrollment or opportunity, employment, financial or lending services, essential government services, healthcare services, housing, insurance, or legal services.

SB 205 expressly excludes AI systems where the AI system is intended to perform a narrow procedural task or detect decision-making patterns or deviations from prior decision-making patterns, and is not intended to replace or influence a completed human assessment without sufficient human review. 

SB 205 also expressly excludes a broad range of specific technologies, such as anti-malware, cybersecurity, AI-enabled video games and anti-fraud technology that does not use facial recognition technology, provided that, when deployed, they do not make, and are not a substantial factor in making, a consequential decision.

Utah’s AIPA:

The AIPA governs the use of gen AI, which the statute defines as “an artificial system that (i) is trained on data; (ii) interacts with a person using text, audio, or visual communication; and (iii) generates non-scripted outputs similar to outputs created by a human, with limited or no human oversight.”

Nongenerative AI – that is, AI that analyzes existing data to make predictions and identify patterns but does not generate new content – is not within scope of the law.  

What are the core requirements?

Colorado’s SB 205:

Developers and deployers must:

• Use reasonable care to avoid algorithmic discrimination arising from the intended and contracted uses of a high-risk AI system.

• Inform the Colorado attorney general within 90 days of discovery of algorithmic discrimination involving a high-risk AI system.

• If using an AI system to interact with consumers, ensure that the consumer is made aware that they are interacting with an AI system. This obligation applies even if the AI system is not high-risk.

Additional core obligations for developers include:

Disclosure obligations – Developers of high-risk AI systems must meet various disclosure obligations. For example, they must provide extensive disclosures to deployers or other developers of high-risk AI systems, including (among other disclosures) a description of reasonably foreseeable uses, and known harmful or inappropriate uses, of high-risk AI systems, as well as documentation detailing the types of data used to train high-risk AI systems, reasonably foreseeable limitations of such high-risk AI systems, the purpose of such systems, and how any such system was evaluated for algorithmic discrimination. Developers also must make disclosures related to the types of high-risk AI systems they have developed and how they manage known or reasonably foreseeable risks of algorithmic discrimination available on their websites or in a public use-case inventory.

Additional core obligations for deployers include:

Risk management policy and program – Deployers of high-risk AI systems must implement a risk management policy and program specifying the processes deployed to identify and mitigate known or reasonably foreseeable risks of algorithmic discrimination.

Impact assessments – At least annually, deployers of high-risk AI systems must complete an impact assessment for any high-risk AI system that includes, among other things, an analysis as to the known or reasonably foreseeable risks of algorithmic discrimination, an overview of the categories of data the deployers used to customize the high-risk AI system, and a description of any transparency measures and measures taken with respect to post-deployment monitoring and user safeguards.

Disclosure obligations – Deployers of high-risk AI systems that are used to make, or are a substantial factor in making, a consequential decision have certain disclosure obligations to consumers, including with respect to the purpose of the system, and must provide consumers with the right to opt out of the processing of personal data concerning the consumer to the extent required by and in accordance with the Colorado Privacy Act. Deployers also have special disclosure obligations, and must provide consumers with certain rights, in connection with the use of any high-risk AI system to make decisions that are adverse to the consumer. Additionally, deployers must make available on their websites disclosures related to the types of high-risk AI systems they have deployed, how they manage known or reasonably foreseeable risks of algorithmic discrimination, and details concerning the nature, source and extent of the information collected and used by the deployer.

Utah’s AIPA:

The AIPA imposes disclosure obligations on covered entities; however, these obligations differ between entities operating in regulated versus nonregulated occupations.

Regulated occupations – A person or entity operating in a regulated occupation must “prominently disclose” when a consumer is interacting with gen AI in connection with the provision of regulated services. This disclosure must be provided “verbally at the start of an oral exchange or conversation and through electronic messaging before a written exchange.” It is worth noting that this obligation is consistent with other state requirements, such as the Bolstering Online Transparency Act in California.

Nonregulated occupations – A more lenient standard applies to a person or entity operating in a nonregulated occupation who uses or otherwise causes gen AI to interact with a consumer. The person or entity must “clearly and conspicuously disclose” that the consumer is interacting with a gen AI system (rather than a human), if asked or prompted by the consumer.

Who’s liable and for what?

Colorado’s SB 205:

The Colorado attorney general has exclusive authority to enforce SB 205. No private rights of action are expressly created under SB 205. Violations of SB 205 constitute unfair trade practices under Colorado consumer protection law, and the attorney general can seek remedies, including civil penalties of up to $20,000 per violation. The law also grants the attorney general rule-making authority. SB 205 provides an affirmative defense for developers and deployers who demonstrate that they have attempted to correct any discovered violations and are in compliance with certain nationally or internationally recognized risk management frameworks for AI systems.

Utah’s AIPA:

The Utah Division of Consumer Protection (DCP) can impose, among other penalties, an administrative fine of up to $2,500 for each violation of the AIPA, and in a court action brought by the DCP, the court can, among other remedies, issue an injunction or order disgorgement of any money received in violation of the AIPA. There is no express private right of action under the AIPA.

The AIPA specifies that alleged violators of Utah consumer protection laws cannot escape liability by claiming that the gen AI technology undertook or caused the violation (e.g., made the violative statement).

Other states race to enact robust AI regulation 

AI-specific legislation has been proposed in a number of other states, and we expect that more states will soon follow. For example, AI-specific proposals are currently gaining significant traction in the following states:

Connecticut

Similar to Colorado’s SB 205, Connecticut’s SB 2 would regulate developers and deployers of high-risk AI systems and establish requirements related to risk management, impact assessments and consumer rights. The Connecticut bill also defines and lays out requirements for what it calls “general-purpose AI models,” defined as models that display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream applications or systems. Under SB 2, developers of general-purpose AI models that generate or manipulate synthetic digital content also would be required, among other things, to mark their outputs as synthetic in a way that is detectable by consumers. If enacted, SB 2’s regulations would take effect in a phased approach, with some provisions taking effect as early as July 1, 2025.

California

California’s AB 2930 has many similarities to Colorado’s SB 205 (and Connecticut’s SB 2). AB 2930 would, among other things, require developers and deployers of an “automated decision tool” – defined as a system/service that uses AI and has been specifically developed or modified to make (or to be a substantial factor in making) consequential decisions – to perform impact assessments prior to using such a tool and annually thereafter. Like Colorado’s SB 205 and Connecticut’s SB 2, AB 2930 would provide certain rights to consumers, including a qualified right not to be subject to such an automated decision tool, if technically feasible. AB 2930 also would prohibit use of an automated decision tool that results in algorithmic discrimination. If enacted, AB 2930 would take effect on January 1, 2026.

While the proposals in Connecticut and California appear more closely aligned with Colorado’s SB 205 than Utah’s AIPA, time will tell which approach other states adopt, as well as whether any other trends emerge among states proposing new laws to regulate AI. 

What should companies be doing now?

1. Inventory existing AI tools. 

Review and inventory your company’s AI tools. This includes AI tools you have developed, along with those from vendors, and both stand-alone AI tools and AI features within larger products. For each AI tool, consider how the tool is used – with a particular focus on whether consumers can interact with the tool and whether the tool is used to make decisions about consumers.

Ensure that your company assesses whether existing AI tools, or those in development, will use any personal data to train the AI and machine-learning algorithms.
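For teams that want to operationalize this step, the sketch below shows one way an inventory entry might be structured so that the key questions above (consumer interaction, consequential decisions, training on personal data) become explicit fields. This is a minimal illustrative sketch in Python; the schema, field names, example vendor and review heuristic are our own assumptions, not categories defined by any of the statutes discussed in this post.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One inventory entry per AI tool or AI feature (hypothetical schema)."""
    name: str
    vendor: str | None                 # None if developed in-house
    consumer_facing: bool              # can consumers interact with the tool?
    consequential_decisions: bool      # used to make (or help make) decisions about consumers?
    generative: bool                   # gen AI (relevant to Utah's AIPA) vs. nongenerative
    trains_on_personal_data: bool      # is personal data used to train or fine-tune?
    states_in_use: list[str] = field(default_factory=list)

inventory = [
    AIToolRecord(
        name="support-chatbot",
        vendor="ExampleVendor Inc.",   # hypothetical vendor
        consumer_facing=True,
        consequential_decisions=False,
        generative=True,
        trains_on_personal_data=False,
        states_in_use=["UT", "CO"],
    ),
]

# Crude first pass: surface the entries most likely to carry obligations
# under laws like the AIPA and SB 205 for closer human (and legal) review.
needs_review = [
    t for t in inventory
    if t.consumer_facing or t.consequential_decisions or t.trains_on_personal_data
]
```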

2. Determine applicability of state laws.

In the short term, consider whether you are a “developer” or “deployer” of a high-risk AI system in Colorado, or a regulated or nonregulated entity that deploys gen AI in Utah, in order to determine your level of compliance obligations under the latest regulations.

Unfortunately, the rapidly evolving and patchwork state-level regulatory framework means this kind of assessment must be done on an ongoing basis. Even in states like Colorado that have newly passed laws, implementing regulations are likely to be forthcoming. Consider scheduling periodic reviews to monitor future developments and ensure compliance in the states where your company conducts business.

3. Monitor regulator engagement and enforcement activities.

With the establishment of bodies like Utah’s Office of Artificial Intelligence Policy, regulators are actively analyzing the risks and benefits of developing and using AI technologies, along with their policy implications. Regulatory and enforcement strategy is therefore likely to evolve over time based on regulators’ learning activities and engagement with industry. Staying close to these developments will help companies anticipate potential areas of regulatory enforcement.

4. Provide clear and conspicuous notice.

Utah’s AIPA and Colorado’s SB 205 both include requirements around transparency in connection with the use of AI systems. Companies should consider when and how they will provide required notices to consumers. Note that in many cases, providing required consumer disclosures in a website terms of use or privacy policy may not be sufficient.
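As a concrete (and heavily simplified) illustration of the two Utah disclosure postures described above – proactive disclosure for regulated occupations, disclosure “if asked” for nonregulated ones – a chat integration might wire the notice in as follows. This is a sketch under our own assumptions: the detection regex is a crude stand-in, and whether any given notice is “prominent” or “clear and conspicuous” is ultimately a legal judgment, not something code can guarantee.

```python
import re

# Hypothetical, deliberately crude detector for a consumer asking whether
# they are talking to a bot; production systems would need something richer.
ASKED_IF_AI = re.compile(
    r"\b(are you (a |an )?(bot|ai|human)|is this a bot)\b", re.IGNORECASE
)

AI_NOTICE = "Please note: you are interacting with generative AI, not a human."

def session_opening(regulated_occupation: bool) -> str | None:
    # Regulated occupations: prominently disclose at the start of the exchange.
    return AI_NOTICE if regulated_occupation else None

def reply(user_message: str, model_reply: str) -> str:
    # Nonregulated occupations: disclose clearly and conspicuously if asked.
    if ASKED_IF_AI.search(user_message):
        return f"{AI_NOTICE}\n{model_reply}"
    return model_reply
```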

5. Regularly monitor AI outputs. 

Companies may be held responsible for AI outputs. For example, the AIPA specifies that companies cannot skirt liability by disclaiming responsibility for the content that their gen AI tools produce. It is therefore essential to evaluate AI tools – during procurement, during development and on an ongoing basis after deployment – including for outputs that are false, misleading, discriminatory, or otherwise violative of applicable laws. Under laws like the AIPA, a broad disclaimer regarding the accuracy and quality of AI outputs will not be sufficient.
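One lightweight way to make post-deployment monitoring routine is to write every gen AI output to an append-only audit log with a first-pass review flag, so humans can sample and escalate. A minimal sketch, assuming a JSONL log file and an entirely hypothetical phrase blocklist; real monitoring would combine richer automated checks with human review.

```python
import json
import time

# Hypothetical phrases that warrant escalation in this example domain.
ESCALATION_PHRASES = ("guaranteed returns", "risk-free", "cannot lose")

def log_ai_output(tool: str, prompt: str, output: str,
                  path: str = "ai_output_audit.jsonl") -> bool:
    """Append one audit record per output; return True if flagged for review."""
    flagged = any(p in output.lower() for p in ESCALATION_PHRASES)
    record = {
        "timestamp": time.time(),
        "tool": tool,
        "prompt": prompt,
        "output": output,
        "needs_human_review": flagged,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return flagged
```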

6. Conduct employee training on proper use of AI. 

Employee training can raise awareness about the legal requirements around the use of AI. Internal acceptable-use policies for AI can provide a clear compliance roadmap of acceptable and unacceptable uses. Additionally, appropriate internal escalation processes can help mitigate the risk of liability when, for example, a consumer complains about or challenges an AI output.

7. Establish robust policies, procedures and testing for AI. 

Prepare internal policies (such as data retention policies) and guardrails for the use of AI tools (such as how to prevent algorithmic discrimination or bias), or consider implementing a third-party framework, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework. However, companies should not assume that even comprehensive policies and procedures will be wholly effective at meeting legal requirements. Regular testing and auditing for bias, discrimination and other problematic outputs/outcomes is essential.
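For the testing piece, one widely used screening heuristic is the “four-fifths rule” from US employment-selection analysis: compare each group’s rate of favorable outcomes to the highest group’s rate, and treat ratios below 0.8 as a red flag warranting deeper investigation. Neither the AIPA nor SB 205 mandates this particular test; the sketch below simply shows how such a check can be automated, using hypothetical numbers.

```python
def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (favorable_decisions, total_decisions).

    Returns each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 are a conventional red flag (the "four-fifths rule").
    """
    rates = {group: fav / total for group, (fav, total) in outcomes.items()}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Example with hypothetical loan-approval counts by group.
ratios = adverse_impact_ratios({"group_a": (80, 100), "group_b": (55, 100)})
flagged = {g: round(r, 2) for g, r in ratios.items() if r < 0.8}
print(flagged)  # {'group_b': 0.69} -> candidate for deeper bias review
```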

8. Conduct independent third-party assessments. 

Consider engaging a third-party independent expert to review and assess current AI tools and systems. To the extent feasible, ensure that your company implements recommendations from such auditors.

9. Monitor your vendors’ AI tools and policies. 

Ensure that your company conducts diligence when onboarding and using vendors that provide AI tools, including with respect to vendors’ training data, cybersecurity and measures taken to prevent biased and discriminatory outputs. Companies may be held liable for AI outputs resulting from a vendor’s tool, so it is crucial to periodically assess your vendors’ tools and practices.


1. See SB 149, Section 13-2-12(3).


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Cooley LLP | Attorney Advertising
