Raft of California AI Legislation Adds to Growing Patchwork of US Regulation

White & Case LLP
Once again, California is flexing its market power by taking bold legislative action to regulate the use and deployment of artificial intelligence ("AI") systems. In 2024, the California ("CA") Legislature was particularly active, developing and passing dozens of AI-related bills that impose wide-ranging obligations spanning safety, consumer transparency, reporting requirements, clarification of privacy safeguards, protections for performers and deceased celebrities, and election integrity. The CA AI bills add to the growing patchwork of AI regulation in the United States, notably including President Biden's Executive Order imposing safety and security standards for AI and the Colorado AI Act barring "algorithmic discrimination" against consumers based on protected characteristics in enumerated fields. Other states are almost certain to follow suit by replicating and building upon the CA AI bills. Absent comprehensive, preemptive US federal regulation, developers and deployers of AI systems will operate in a growing minefield of regulatory risk, underscoring the challenge of ensuring compliance.

Overview of Select CA AI Bills

Transparency 

Senate Bill 942: California AI Transparency Act

The CA AI Transparency Act (the "Act")1 mandates that "Covered Providers" (providers of generative AI systems that are publicly accessible within California and have more than one million monthly visitors or users) implement comprehensive measures to disclose when content has been generated or modified by AI. The Act outlines requirements for AI detection tools and content disclosures, and establishes licensing practices to ensure that only compliant AI systems are permitted for public use.

Key Obligations:

  • AI Detection Tool: Providers must offer a free, publicly accessible tool that allows users to verify whether content (including text, images, video and audio) was generated or modified by AI, and that provides system provenance data. While the detection tool must be publicly accessible, providers may impose reasonable limitations to prevent or respond to demonstrable risks to the security or integrity of their generative AI systems. Further, providers must collect user feedback related to the tool's efficacy and incorporate relevant feedback into improvements.
  • Manifest and Latent Disclosures: Providers must disclose AI-generated content clearly, conspicuously and appropriately based on the medium of communication, and in such a way that a reasonable person would understand that the content is AI-generated. Providers must also embed a latent disclosure that includes the provider's name, the generative AI system's name and version number, the creation or alteration date, and a unique identifier. To the extent technically feasible, the latent disclosure must be detectable by the provider's AI detection tool, consistent with industry standards, and permanent or extraordinarily difficult to remove (a minimal sketch of both disclosure mechanisms follows this list).
  • License Revocation: Providers of generative AI systems must contractually require licensees to maintain the system's capability to include the mandated disclosures. If a provider discovers that a licensee has modified the system to remove required disclosures, the license must be revoked within 96 hours and the licensee must cease using the system immediately.
  • Enforcement and Penalties: Covered Providers that violate the Act are liable for a penalty of $5,000 per violation per day, enforceable through civil action by the CA Attorney General, city attorneys or county counsel.
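
To make these obligations concrete, below is a minimal Python sketch of a latent disclosure and a toy detection check. The Act does not prescribe a data format, so the JSON encoding, field names and function names here are illustrative assumptions, not statutory terms.

import json
import uuid
from datetime import date

def make_latent_disclosure(provider: str, system: str, version: str) -> str:
    # Embed the elements the Act requires in a latent disclosure: the
    # provider's name, the system's name and version number, the creation
    # or alteration date, and a unique identifier. JSON is an assumed,
    # illustrative encoding.
    return json.dumps({
        "provider": provider,
        "system": system,
        "version": version,
        "created": date.today().isoformat(),
        "id": str(uuid.uuid4()),
    })

def detect_ai_content(blob: str) -> dict | None:
    # Toy stand-in for the free public AI detection tool: return the
    # provenance data if a disclosure is present, otherwise None.
    try:
        data = json.loads(blob)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict):
        return None
    required = {"provider", "system", "version", "created", "id"}
    return data if required <= data.keys() else None

disclosure = make_latent_disclosure("ExampleAI Inc.", "example-gen", "2.1")
print(detect_ai_content(disclosure))                     # provenance recovered
print(detect_ai_content("ordinary human-written text"))  # None

In practice, a latent disclosure would be embedded in the content's metadata or a watermark rather than carried as a separate string; this sketch only illustrates the required data elements and the detect-and-report loop.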

Status: Governor Newsom signed the Act into law on September 19, 2024. The Act will enter into force on January 1, 2026. 

Assembly Bill 2013: Generative AI: Training Data Transparency Act

In an effort to enhance transparency for consumers, the Generative AI: Training Data Transparency Act (the "Act") requires developers of generative AI ("GenAI") systems to publish a "high-level summary" of the datasets used to develop and train those systems by January 1, 2026.

Key Obligations:

  • Transparency for Datasets: Developers of GenAI systems must publish a summary of the following information, which the Act emphasizes is non-exhaustive (for illustration, a structured sketch follows this list):
    • Sources and owners of the datasets
    • Description of how the datasets further the intended purpose of the GenAI system
    • Number of data points included in the datasets, which may be described in "general ranges"
    • Description of the types of data points within the datasets
    • Whether the datasets include any information protected by IP law (i.e., copyright, trademark or patent)
    • Whether the datasets were purchased or licensed by the developer
    • Whether the datasets include personal information as defined in the CCPA
    • Whether the datasets include aggregate consumer information as defined in the CCPA
    • Whether there was any "cleaning, processing, or other modification" to the datasets
    • The time period during which the data comprising the datasets was collected 
    • The date(s) on which the datasets were first used during the development of the GenAI system 
    • Whether the GenAI system used or uses "synthetic data generation" in its development
  • Exceptions: Developers of GenAI systems do not need to publish the above transparency information when the GenAI system is only used to:
    • Ensure security and integrity
    • Operate an aircraft in US airspace
    • Further national security, military or defense purposes and is made available only to a US federal entity
  • No Enforcement or Penalties: The Act does not specify any fines or penalties for non-compliance.
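
As a rough illustration of what the Act's enumerated items might look like as a structured record, here is a hedged Python sketch. The field names are our own shorthand for the statutory items, and the sample values are entirely hypothetical; the Act itself requires only a published "high-level summary."

from dataclasses import dataclass, asdict

@dataclass
class DatasetSummary:
    # One field per enumerated item in the Act's (non-exhaustive) list.
    sources_and_owners: list[str]
    purpose_description: str
    datapoint_count_range: str        # "general ranges" are permitted
    datapoint_types: str
    includes_ip_protected_data: bool  # copyright, trademark or patent
    purchased_or_licensed: bool
    includes_ccpa_personal_info: bool
    includes_ccpa_aggregate_info: bool
    cleaning_or_processing: str
    collection_period: str
    first_used: str
    uses_synthetic_data: bool

summary = DatasetSummary(
    sources_and_owners=["Hypothetical web-crawl corpus"],
    purpose_description="General-purpose text generation",
    datapoint_count_range="1-10 billion documents",
    datapoint_types="Web text, code, reference material",
    includes_ip_protected_data=True,
    purchased_or_licensed=False,
    includes_ccpa_personal_info=True,
    includes_ccpa_aggregate_info=False,
    cleaning_or_processing="Deduplication and quality filtering",
    collection_period="2019-2023",
    first_used="2024-01",
    uses_synthetic_data=False,
)
print(asdict(summary))  # suitable for publication as part of the summary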

Status: Governor Newsom signed the Act into law on September 28, 2024. The Act will enter into force on January 1, 2026.

Assembly Bill 3030: Health Care Services: Artificial Intelligence Act

The Health Care Services: Artificial Intelligence Act (the "Act") requires a health facility, clinic, physician's office or office of a group practice that uses GenAI to generate patient communications pertaining to patient clinical information to ensure that those communications include both (i) a disclaimer indicating to the patient that the communication was generated by a GenAI system, and (ii) clear instructions describing how a patient may contact a human health care provider, employee or other appropriate person. The Act creates an exception to this transparency obligation where the communication has been read and reviewed by a licensed or certified human health care provider.

Key Obligations: The Act's requirement to provide a conspicuous disclaimer applies to the following types of communications (a brief sketch of the placement rules follows the list):

  • Written communications involving physical and digital media (e.g., letters, emails) must include a prominent disclaimer at the beginning of each communication.
  • Written communications involving continuous online interactions (e.g., chatbot) must include a prominent disclaimer at the beginning of each communication. 
  • Audio communications must include a verbal disclaimer at the start and end of the interaction. 
  • Video communications must include a prominent disclaimer that is displayed throughout the interaction. 
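
A minimal sketch of these placement rules in Python follows. The Act specifies where the disclaimer must appear by medium but not its exact wording, so the disclaimer text and the medium labels below are illustrative assumptions.

# Illustrative disclaimer wording; the Act does not prescribe exact text.
DISCLAIMER = ("This communication was generated by artificial intelligence. "
              "To reach a human health care provider, contact our office.")

def disclaimer_placements(medium: str) -> list[str]:
    # Where the AB 3030 disclaimer must appear, keyed by medium.
    rules = {
        "written": ["beginning of each communication"],           # letters, emails
        "chat": ["beginning of each communication"],              # continuous online interactions
        "audio": ["start of interaction", "end of interaction"],  # verbal disclaimer
        "video": ["displayed throughout the interaction"],
    }
    return rules[medium]

for medium in ("written", "chat", "audio", "video"):
    print(medium, "->", disclaimer_placements(medium))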

Enforcement and Penalties: The Act is enforceable by the Medical Board of California and the Osteopathic Medical Board of California. Non-compliance is punishable by, inter alia, civil penalties, suspension or revocation of a medical license, and administrative fines as set out in the CA Health and Safety Code.

Status: Governor Newsom signed the Act into law on September 28, 2024. The Act will enter into force on January 1, 2025.

Privacy 

Assembly Bill 1008 and Senate Bill 1223: Amendments to California Consumer Privacy Act 

The California Legislature passed two bills, AB 1008 and SB 1223, that clarify the scope of the California Consumer Privacy Act (the "CCPA") and the California Privacy Rights Act (the "CPRA"). The bills function as a "package deal": each could enter into force only if the other did.

AB 1008 clarifies that the CCPA applies to consumers' "personal information" regardless of its format. Specifically, AB 1008 clarifies that the CCPA encompasses "personal information" contained in "abstract digital formats" (including generative AI systems that are capable of outputting consumers' personal information). While AB 1008 does not practically expand developer and deployer obligations under the CCPA, it updates the CCPA to address emerging AI technologies.

SB 1223 clarifies that "sensitive personal information" under the CPRA encompasses consumers' neural data. As with AB 1008, SB 1223 aims to keep pace with emerging technology (in this case, neurotechnology) in an effort to protect information about consumers' brain and nervous system functions. While SB 1223 does not articulate a specific nexus to AI systems, it constrains developers and deployers from using neural data under the CPRA.

Status: Governor Newsom signed AB 1008 and SB 1223 into law on September 28, 2024. Both bills will enter into force on January 1, 2025.

Entertainment

Assembly Bill 2602: Contracts against Public Policy: Personal or Professional Services: Digital Replicas Act

The Contracts against Public Policy: Personal or Professional Services: Digital Replicas Act (the "Act") creates new protections against misappropriation of actors' and performers' names, images and likenesses by AI through restrictions on contract terms. 

Key Obligations: For a new performance on or after January 1, 2025, the Act makes unenforceable contract terms for the performance of personal or professional services via a digital replica2 of the individual if the contract term meets all of the following conditions:

  • The term allows for the creation and use of a digital replica of the individual's voice or likeness in place of work the individual would otherwise have performed in person.
  • The term does not include a reasonably specific description of the intended uses of the digital replica.
  • The individual was not represented by legal counsel or a labor union in negotiating the term.

Importantly, even where AI-specific terms meet all of the above conditions, the Act does not render the entire contract unenforceable; only the specific offending terms are unenforceable.

Status: Governor Newsom signed the Act into law on September 17, 2024. The Act will enter into force on January 1, 2025. 

Assembly Bill 1836: Use of Likeness: Digital Replica Act

The Use of Likeness: Digital Replica Act (the "Act") establishes a cause of action for beneficiaries of deceased celebrities to recover damages for the unauthorized use of an AI-created digital replica of the celebrity in audiovisual works or sound recordings. The Act requires deployers of AI systems to obtain the consent of a deceased personality's estate before producing, distributing or making available the digital replica of a deceased personality's voice or likeness in an expressive audiovisual work or sound recording. In sum, the Act clarifies the scope of a deceased celebrity's postmortem right of publicity and closes the door on prior exceptions for expressive works (e.g., movies, television shows, songs) that were permissible for entertainment purposes.

Status: Governor Newsom signed the Act into law on September 17, 2024. The Act will enter into force on January 1, 2025.

Election Integrity

Assembly Bill 2655: Defending Democracy from Deepfake Deception Act

The Defending Democracy from Deepfake Deception Act (the "Act") requires large online platforms3 to identify and block the publication of materially deceptive content4 related to elections in California during specified time periods before and after an election. Additionally, the Act requires large online platforms to label in-scope content as inauthentic, fake or false during specified time periods before and after an election in California. 

Key Obligations: Large online platforms must develop and implement procedures to identify materially deceptive content and label such content within 72 hours of a report if all of the following conditions are met (a simplified sketch follows the list):

  • The content is reported (e.g., via online tools). 
  • The materially deceptive content is any of the following:
    • A candidate for elective office is portrayed as doing or saying something that the candidate did not do or say and that is reasonably likely to harm the reputation or electoral prospects of a candidate. 
    • An elections official is portrayed as doing or saying something in connection with the performance of their elections-related duties that the elections official did not do or say and that is reasonably likely to falsely undermine confidence in the outcome of one or more election contests.
    • An elected official is portrayed as doing or saying something that influences an election in California that the elected official did not do or say and that is reasonably likely to falsely undermine confidence in the outcome of one or more election contests.
  • The content is posted during the applicable time period before or after an election. 
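
The following hedged Python sketch shows how a platform might gate its 72-hour labeling duty on the Act's three conditions. The category names and the way the election window is determined are assumptions for illustration; the statute defines the actual criteria and time periods.

from datetime import datetime, timedelta

LABEL_DEADLINE = timedelta(hours=72)  # label within 72 hours of a report

def must_label(reported: bool, deceptive_category: str | None,
               in_election_window: bool) -> bool:
    # All three statutory conditions must hold before the labeling duty attaches.
    covered = {"candidate", "elections_official", "elected_official"}
    return reported and deceptive_category in covered and in_election_window

report_time = datetime(2024, 10, 20, 9, 0)
if must_label(reported=True, deceptive_category="candidate",
              in_election_window=True):
    print("Label by:", report_time + LABEL_DEADLINE)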

Enforcement and Penalties: The CA Attorney General, any district attorney or any city attorney may seek injunctive relief to compel the removal of materially deceptive content.

Status: Governor Newsom signed the Act into law on September 17, 2024. The Act will enter into force on January 1, 2025.

Government Accountability

Senate Bill 896: Generative Artificial Intelligence Accountability Act

The Generative Artificial Intelligence Accountability Act (the "Act") establishes oversight and accountability measures for the use of GenAI within California's state agencies and departments. The Act mandates updates to existing reports, risk analyses, transparency in AI-generated communications, and proactive measures to ensure the ethical and equitable use of GenAI technologies in government operations.

Status: Governor Newsom signed the Act into law on September 29, 2024. The Act will enter into force on January 1, 2025.

Unified Definition of Artificial Intelligence

Assembly Bill 2885: Artificial Intelligence

California Assembly Bill 2885 ("AB 2885") aims to unify the definition of "Artificial Intelligence" across various California laws. This standardization is crucial, as varying definitions can lead to inconsistent regulation and oversight in the rapidly evolving field of AI. Specifically, AB 2885 defines Artificial Intelligence as "an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments."

Status: AB 2885 was passed by the Legislature and chaptered by the Secretary of State on September 28, 2024.

Safety 

Vetoed: Senate Bill 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act

One of the most high-profile (and onerous) of the CA AI bills is the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (the "Act"). The Act would mandate that developers of "Covered Models"5 implement safety and security measures, and would establish an oversight committee, incident reporting protocols and a consortium dedicated to safe and ethical AI research.

Key Obligations:

Developers of Covered Models must implement comprehensive cybersecurity protections, ensure a full shutdown capability, create a detailed safety and security protocol, and retain a copy of that protocol for the duration of the model's availability plus five years. These protocols must include protections and procedures to prevent the model from posing unreasonable risks of "Critical Harm" (severe damage caused by an AI model leading to mass casualties6 or $500 million in damages, as applicable, including the creation or use of weapons, cyberattacks on critical infrastructure, and AI actions with limited human oversight resulting in crimes or other comparably severe threats to public safety and security, as further defined in the Act7). The protocols and procedures must mitigate not only the risk of cybersecurity breaches but also internal risks, such as access and use by unauthorized internal personnel.

Developers must also determine whether Covered Models can inadvertently cause or materially enable Critical Harm or have other negative consequences, and must retain records of such assessments for the duration of the model's availability plus five years. Once a developer identifies a Critical Harm incident, it must report the incident within 72 hours and produce detailed incident management protocols, including conducting internal audits after each incident and submitting reports to both the oversight board and the CA Attorney General. Incident reporting can also be done anonymously: the Act provides robust whistleblower protections, ensuring that employees can report non-compliance or risks directly to the Board of Frontier Models without fear of retaliation. Developers must further retain third-party auditors to conduct annual independent audits of their compliance with the Act.

Penalties for non-compliance would include fines of up to $5,000 per violation, with more severe penalties, such as license revocation, for continuous breaches by third parties. The Attorney General, city attorneys or county counsel may also initiate civil actions to enforce compliance and seek injunctive relief for particularly extreme or continuous violations.
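
As a back-of-the-envelope illustration of the Act's pre-2027 "Covered Model" thresholds (see footnote 5), the Python sketch below classifies a hypothetical model. Both the compute prong and the cost prong must be met for each pathway; all inputs are hypothetical.

def is_covered_model(train_flops: float, train_cost_usd: float,
                     finetune_flops: float = 0.0,
                     finetune_cost_usd: float = 0.0) -> bool:
    # Pathway 1: trained using more than 10^26 operations at a cost
    # exceeding $100 million.
    trained_covered = train_flops > 1e26 and train_cost_usd > 100_000_000
    # Pathway 2: fine-tuned from a Covered Model using at least
    # 3 x 10^25 operations at a cost exceeding $10 million.
    finetuned_covered = (finetune_flops >= 3e25
                         and finetune_cost_usd > 10_000_000)
    return trained_covered or finetuned_covered

# A 2 x 10^26-FLOP training run priced at $150M would have been covered.
print(is_covered_model(train_flops=2e26, train_cost_usd=150e6))  # True
print(is_covered_model(train_flops=5e25, train_cost_usd=40e6))   # False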

Status: Governor Newsom vetoed the Act on September 29, 2024. The Act returns to the CA Legislature, which could override Governor Newsom's veto with a two-thirds vote of both houses.

Key Takeaways

  • Proactive Compliance: Given the minefield of risk, AI developers and deployers should aim to comply proactively with the CA AI bills to mitigate financial exposure, regulatory scrutiny and negative media and public relations.
  • Embrace Transparency: AI developers and deployers should embrace transparency with respect to the data sets used to train AI systems and inform consumers when content or communications are generated by AI. 
  • Innovate Enforcement Mechanisms: Large online platforms will need to strengthen enforcement mechanisms to detect and resolve deceptive deepfakes and other AI-generated content, particularly with respect to election content. This may require developing new detection technologies and enforcement processes.
  • Mitigation by Design: Developers and deployers should design and train AI systems on datasets that avoid high-risk inputs, such as content protected by copyright, trademark or patent.

By staying informed and adapting to these new regulations, developers and deployers of AI can not only ensure compliance but also leverage the shifting legal terrain to their strategic advantage (e.g., competing on robust privacy protections and transparency). 

1 The word "Act" with respect to each section refers to the specific act being discussed in that section.

2 The Act defines "digital replica" as a computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of the individual embodied in a sound recording, image, audiovisual work or transmission in which the actual individual either did not actually perform or appear, or the actual individual did perform or appear, but the fundamental character of the performance or appearance has been materially altered, except as prescribed.
3 The Act defines "large online platform" as a public-facing Internet website, web application or digital application, including a social media platform as defined in Section 22675 of the Business and Professions Code, video sharing platform, advertising network or search engine that had at least 1,000,000 California users during the preceding 12 months.
4 The Act defines "materially deceptive content" as audio or visual media that is digitally created or modified, and that includes, but is not limited to, deepfakes and the output of chatbots, such that it would falsely appear to a reasonable person to be an authentic record of the content depicted in the media.
5 The Act defines "covered model" as (A) Before January 1, 2027, either of the following: (i) An artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market prices of cloud compute at the start of training as reasonably assessed by the developer; or (ii) An artificial intelligence model created by fine-tuning a Covered Model using a quantity of computing power equal to or greater than three times 10^25 integer or floating-point operations, the cost of which, as reasonably assessed by the developer, exceeds ten million dollars ($10,000,000) if calculated using the average market price of cloud compute at the start of fine-tuning. (B) (i) Except as provided in clause (ii), on and after January 1, 2027, any of the following: (I) An artificial intelligence model trained using a quantity of computing power determined by the Government Operations Agency pursuant to Section 11547.6 of the Government Code, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market price of cloud compute at the start of training as reasonably assessed by the developer; or (II) An artificial intelligence model created by fine-tuning a Covered Model using a quantity of computing power that exceeds a threshold determined by the Government Operations Agency, the cost of which, as reasonably assessed by the developer, exceeds ten million dollars ($10,000,000) if calculated using the average market price of cloud compute at the start of fine-tuning. (ii) If the Government Operations Agency does not adopt a regulation governing subclauses (I) and (II) of clause (i) before January 1, 2027, the definition of "Covered Model" in subparagraph (A) shall be operative until the regulation is adopted.
6 A "mass casualty event" is not defined in SB 1047.
7 The Act defines "critical harm" as any of the following harms caused or materially enabled by a Covered Model or Covered Model derivative: (A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties; (B) Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from cyberattacks on critical infrastructure by a model conducting, or providing precise instructions for conducting, a cyberattack or series of cyberattacks on critical infrastructure; (C) Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from an artificial intelligence model engaging in conduct that does both of the following: (i) Acts with limited human oversight, intervention, or supervision; and (ii) Results in death, great bodily injury, property damage, or property loss, and would, if committed by a human, constitute a crime specified in the Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime; or (D) Other grave harms to public safety and security that are of comparable severity to the harms described in subparagraphs (A) to (C), inclusive.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© White & Case LLP
