Navigating the Latest State AI Laws: What Your Business Needs to Know

Hinshaw & Culbertson - Privacy, Cyber & AI Decoded

Hinshaw's Privacy, Security, & Artificial Intelligence (AI) practice group continuously monitors changes to state AI regulation. Below, we outline recent developments your company should be aware of, and comply with, if you operate in Utah, Massachusetts, Connecticut, or California.

Utah's Artificial Intelligence Policy Act

Utah's Artificial Intelligence Policy Act takes effect on May 1, 2024. It addresses generative AI systems and requires companies that are licensed or regulated in Utah to disclose when they use generative AI or use it to create content.

The Act requires two forms of disclosure when businesses make generative AI applications available to consumers:

  1. Required Disclosure: Certain entities licensed to operate in Utah, such as healthcare professionals, must make consumers aware that an application is powered by AI. This disclosure must occur at the start of the interaction.
  2. Prompted Disclosure: Non-licensed entities whose activities fall broadly under the jurisdiction of the Utah Division of Consumer Protection must provide an AI disclosure if a user asks for it.

There is no private right of action, but the Utah Division of Consumer Protection can levy fines of $2,500 per violation and can bring enforcement actions in court.

Companies subject to the Act should implement these disclosures in their consumer-facing AI tools, such as chatbots and automated text messaging.
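For illustration, the following minimal Python sketch shows one way a consumer-facing chatbot might surface both disclosure types described above: an up-front statement when the operator is a regulated licensee, and an on-demand statement whenever a user asks whether they are talking to AI. The wording, function names, and simple keyword check are assumptions made for the example, not language or a mechanism prescribed by the Act.

    # Hypothetical sketch only. The disclosure wording, function names, and the
    # keyword check below are assumptions for illustration, not language drawn
    # from the Utah Artificial Intelligence Policy Act.

    AI_DISCLOSURE = "You are interacting with generative artificial intelligence, not a human."

    # Naive phrases suggesting the user is asking whether they are talking to AI.
    _DISCLOSURE_TRIGGERS = ("are you an ai", "are you a bot", "are you a robot", "am i talking to a human")


    def start_session(is_regulated_licensee: bool) -> list[str]:
        """Open a chat session; regulated licensees disclose AI use up front."""
        messages = []
        if is_regulated_licensee:
            messages.append(AI_DISCLOSURE)  # "required disclosure" at the start of the interaction
        messages.append("Hi! How can I help you today?")
        return messages


    def respond(user_message: str, generate_reply) -> str:
        """Answer the user; if they ask whether this is AI, disclose instead of deflecting."""
        if any(trigger in user_message.lower() for trigger in _DISCLOSURE_TRIGGERS):
            return AI_DISCLOSURE  # "prompted disclosure" whenever the user asks
        return generate_reply(user_message)


    if __name__ == "__main__":
        print("\n".join(start_session(is_regulated_licensee=True)))
        print(respond("Are you an AI?", generate_reply=lambda message: "(model reply)"))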

Massachusetts Attorney General Provides Artificial Intelligence Guidance on State Consumer Protection Laws

Attorney General Andrea Joy Campbell issued advisory guidance on April 16, 2024, outlining how Massachusetts' consumer protection laws apply to developers, suppliers, and users of AI and AI-powered decision-making systems.

Applying Massachusetts' existing consumer protection laws and other laws to AI, Attorney General Campbell emphasized that:

  • Organizations cannot falsely advertise the quality, usability, and value of an AI system, meaning you cannot state that an AI system has functionality that it does not have.
  • Organizations cannot misrepresent the reliability, safety, or performance of an AI system, such as whether it is free from bias or not susceptible to cyberattacks.
  • AI systems must comply with Massachusetts' personal information laws and anti-discrimination laws.
  • The advisory also reminded creditors that the Attorney General has enforcement power under the Equal Credit Opportunity Act (ECOA) as it relates to AI. This means creditors must tell consumers why their credit application was denied, including when AI was used in the decision.

We expect other state Attorneys General to apply similar principles to their states' consumer protection laws. Companies should note that regulators already believe they have the authority to enforce consumer protection laws with respect to AI.

Connecticut Proposed Private Sector Artificial Intelligence Bill

Everyone is closely watching Connecticut's "An Act Concerning Artificial Intelligence" (Senate Bill 2), sponsored by Senator Maroney, which would regulate the private sector's use of AI. The Connecticut Senate advanced the bill on April 24, 2024.

Notable provisions of the bill are as follows:  

  • The bill applies to both developers and deployers.
  • The bill requires developers and deployers to protect consumers and data subjects against algorithmic discrimination.
  • It prohibits the use of deep fake technology to impersonate others.
  • From a compliance standpoint, the bill would require impact assessments, transparency disclosures, disclosures regarding training data, and an AI risk mitigation policy.
  • Compliance with the National Institute of Standards and Technology (NIST) AI Risk Management Framework or other internationally recognized frameworks is considered an affirmative defense.
  • Certain sections of the bill have varying effective dates starting on July 1, 2025.
  • The Attorney General has exclusive enforcement authority under the bill. There is currently a 60-day right to cure, which sunsets on February 1, 2027; after that, a right to cure is available at the Attorney General's discretion.

Companies should take note that many of these provisions have already been suggested as AI best practices under federal AI guidance and existing federal laws. However, if enacted, the bill would add another regulator with authority to take action against violators.

California's Draft Automated Decision-making Technology Regulations

Lastly, we are watching the evolution of the California Privacy Protection Agency's Draft Automated Decision-making Technology Regulations that are currently being discussed at three public meetings in May 2024. We will provide an update once more details emerge.

What Should Organizations Do Now?

Our takeaway for companies deploying and using AI is that state regulators and legislatures will continue to regulate AI through a varying patchwork of state laws, much as we have seen with comprehensive state privacy laws. Regulators will also keep enforcing existing consumer protection laws against AI-related discrimination and privacy violations.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Hinshaw & Culbertson - Privacy, Cyber & AI Decoded
