DSIR Deeper Dive: Artificial Intelligence - Banner Years (and Counting) for AI Guidance and Regulation

The period of 2024-2025 was significant for AI law, regulation, and guidance, beginning in early February 2024, and the news told the tale:

Europeans First ‘Out of the Gate’

The European Union Artificial Intelligence Act (EU AI Act) was approved by EU member countries in February 2024; it was then approved by the European Parliament on March 13, 2024, and was formally adopted on May 21, 2024, when the Council of the EU announced its final approval. The EU AI Act was introduced as the central pillar of a package of policy measures aimed at supporting the development of trustworthy AI, a package that also included the AI Innovation Package and the Coordinated Plan on AI. For Europeans and participants in the EU market, the EU AI Act focused on several core elements that became themes in the U.S. as well; specifically:

  • Scope and enforcement of AI violations
  • A risk-based approach to AI
  • Prohibited and high-risk AI systems
  • General-purpose AI models and generative AI

US Federal Guidance

Across the Atlantic, also in February 2024, the then-deputy U.S. attorney general provided guidance on how the Department of Justice (DOJ) planned to govern – and punish – uses of AI, including how the DOJ considered self-policing government use of AI and what that might mean for DOJ practices generally. In December 2024, bookending the year, the U.S. Commodity Futures Trading Commission (CFTC) issued nonbinding AI compliance guidance in an advisory on the use of artificial intelligence by CFTC-regulated entities, providing a non-exhaustive list of existing statutory and regulatory requirements that might be implicated by those entities’ use of AI.

After a change in administration, on April 3, the White House’s Office of Management and Budget issued two memoranda, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust (M-25-21) and Driving Efficient Acquisition of Artificial Intelligence in Government (M-25-22), to address the government’s use of AI and to replace the prior administration’s March 28, 2024 and September 24, 2024 AI directives.

US State Law and Regulation

California

At the U.S. state level, in March 2024, the California Privacy Protection Agency released “Risk Assessments and Automated Decisionmaking Technology (ADMT): Overview of Proposed Revisions,” in which its revisions to prior proposed rules attempted (among other things) to clarify what an ADMT is (technology that “executes a decision, replaces human decisionmaking, or substantially facilitates human decisionmaking”), what an ADMT is not, and what risk assessments will look like. Those proposed rules have since been updated several times; the most recent revision (as of this post) was presented on May 1, with public comments due June 2.

In the meantime, on March 21, the California Civil Rights Council, which handles regulations implementing California’s civil rights laws, adopted final regulations for automated decision-making systems, which in the employment context include tools used to increase efficiency (assisting in hiring, firing, promotions, cost-cutting and more). Commentators indicate that once the final regulations, currently under review, are approved by the California Office of Administrative Law and published by the secretary of state, they will likely become effective on July 1. Specific to recordkeeping obligations, California employers must keep AI records related to the regulations for four years. These include applications, personnel files and data from automated decision systems.

Colorado

In May 2024, Colorado enacted SB 24-205 (Concerning Consumer Protections in Interactions with Artificial Intelligence Systems, or the Colorado AI Act), which governs AI broadly but will not take effect until February 1, 2026. The Colorado AI Act’s requirements will mostly apply to high-risk uses of AI affecting Colorado “consumers,” defined as all Colorado residents – meaning, unlike with most privacy laws, there are no employment or business-to-business exceptions (but there are limited carve-outs for existing law and some industries).

As reported by news outlets, the Colorado Senate had introduced SB25-318 in a last-minute effort to undermine the Colorado AI Act and change its effective date from Feb. 1, 2026, to sometime in 2027, just as the Colorado House was undertaking similarly oriented measures that ended in a filibuster.

  • As Colorado’s 2025 legislative session ended on May 5, the Colorado Legislature was left without sufficient “time for another measure to be drafted, introduced and debated this year.”
  • Further, the failure of SB25-318 apparently “leaves lawmakers little time next year to try to tweak the AI law before it takes effect,” as the 2026 legislative session does not begin until Jan. 14, 2026, only weeks before the Feb. 1, 2026 effective date. This led one reporter to consider that “it’s possible, if not likely, that the law will take effect as-is.”

Illinois

In August 2024, Illinois HB3773 (the Limit Predictive Analytics Use Legislation) was signed into law. Effective Jan. 1, 2026, the Illinois law amends the existing Illinois Human Rights Act and specifies employment-related requirements regarding notice and use of AI.

While the law does not provide a specific enforcement scheme, it adds certain employment-related uses of AI to the list of actions under the Illinois Human Rights Act that may constitute civil rights violations. Under existing law, individuals may pursue alleged violations by filing a charge with the Human Rights Commission or through a private right of action.

New York

On May 9, New York Gov. Kathy Hochul signed the 2025-2026 New York State Budget, which includes “companion chatbot” requirements drawn from two prior bills, S5668 and S934, first introduced in the New York State Legislature by Sen. Kristen Gonzalez (D-59) in the 2024 legislative session. Under this new Article 47 (AI Companion Models), § 1700 et seq. of New York’s General Business Law, chatbot operators will be required to:

  • Include a protocol that detects users’ suicidal ideation or expressions of self-harm, directing such users to crisis services.
  • Display an affirmative notification reminding users that they are not interacting with a human, at least once daily at the beginning of the interaction, and again for ongoing chats lasting more than three hours.

Failure to adhere to these requirements will result in enforcement action by the attorney general’s office, with any fines and penalties collected for violations directed to a suicide prevention fund within the Office of Mental Health. The law goes into effect 180 days after Hochul’s signature – Nov. 5.

Utah

In May 2024, the Utah Artificial Intelligence Policy Act (UAIPA) took effect, mandating certain disclosures of generative AI (Gen AI) use for customer communications and creating Utah’s state AI office. The UAIPA imposes active disclosure requirements on parties regulated by Utah’s Commerce Department, which must “prominently disclose” to customers (before interactions) that Gen AI is used in communications, and passive disclosure requirements on a number of other companies for their Gen AI use. Utah followed the UAIPA with additional chatbot requirements through the AI-related initiative H.B. 452, which took effect May 7 and created a new code section (Chapter 72a) titled “Artificial Intelligence Applications Relating to Mental Health” addressing mental health chatbots. H.B. 452 provides that:

  • Mental health chatbot suppliers cannot generally sell or share individually identifiable health information or user input.
  • A mental health chatbot must clearly and conspicuously disclose that it is AI technology before users access features, after seven days elapse, and anytime a user asks whether AI is being used.

US State Regulator Guidance

State regulators and industry bodies took note of both activity and delays at the U.S. federal and state levels and took matters into their own proverbial hands.

A March 2024 bulletin posted by the Illinois Department of Insurance reflected on how the National Association of Insurance Commissioners had promulgated its 2023 Model Bulletin on AI, which was expected to be incorporated at the state level.

In July 2024, the New York Department of Financial Services (NYDFS) released Circular Letter No. 7 on the use of AI systems and external consumer data and information sources in insurance underwriting and pricing, which covered (among others) all insurers authorized to write insurance in New York state.

The NYDFS also published an Industry Letter in October 2024 in response to “inquiries about how AI is changing cyber risk and how Covered Entities can mitigate risks associated with AI.” That letter outlined four specific risks associated with AI: AI-Enabled Social Engineering; AI-Enhanced Cybersecurity Attacks; Exposure or Theft of Vast Amounts of Nonpublic Information; and Increased Vulnerabilities Due to Third-Party, Vendor, and Other Supply Chain Dependencies.

It Takes a Village to Make a Patchwork

Taking everything proposed and passed in 2024-2025 (and counting) into account presents less of a picture and more of a patchwork in which some companies have struggled to find certainty. The organization that fits neatly into any one framework is rare indeed; this multiplicity of approaches and staggered guidance and implementation dates has made it challenging for multinational organizations to stay compliant or even abreast of current and near- (and far-) future requirements. This is especially difficult when, for example, Utah’s UAIPA went into effect almost immediately, while the Colorado AI Act, though slated for 2026 implementation, remains potentially subject to further change. Perhaps that is the most solid takeaway from 2024-2025 AI learnings: The one constant in the near future will be change.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© BakerHostetler

Written by:

BakerHostetler