Zooming in on AI – #3: California SB 1047 – The potential new frontier of more stringent AI regulation?

Helen Christakos and Sonya Aggarwal of our U.S. privacy and data security practice and Eva Wang of our technology transactions practice look at California’s new AI bill, which aims to balance AI development with public safety, security, and accessibility and is awaiting Governor Newsom’s signature.

The "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (the “Act”), which was passed by the California legislation on August 29, 2024, and is awaiting Governor Newsom’s signature, is a proposed bill aimed at regulating the development and deployment of advanced artificial intelligence (“AI”) models. The Act aims to balance the promotion of AI development with ensuring public safety, security, and accessibility. It acknowledges the potential benefits of AI in fields like medicine, climate science, and creativity, while also recognizing risks such as the potential for misuse in creating weapons of mass destruction or cyber threats.

If enacted, the requirements of the Act would come into effect in stages:

  • On or before January 1, 2026, the Government Operations Agency (described in more detail below) must submit a report from its consortium to the California Legislature setting out the CalCompute framework (a framework, to be developed within the Government Operations Agency, for the creation of a public cloud computing cluster to advance the development and deployment of AI that is safe, ethical, and sustainable).
  • Beginning January 1, 2026, developers of covered AI models would be required to:
    • Annually retain a third-party auditor to perform an independent audit of their safety and security protocols and to produce an audit report.
    • Retain an unredacted copy of the audit report for as long as the covered model is available for commercial, public, or foreseeable public use plus five years.
    • Submit to the Attorney General a statement of compliance with these provisions and report AI safety incidents to the Attorney General.
  • On or before January 1, 2027, and annually thereafter, the Government Operations Agency would be required to issue regulations to, among other things, update the definition of “covered model”; those regulations would require approval by the Board of Frontier Models (described in more detail below) before taking effect.

Key provisions of the Act

  1. Definitions and scope: The Act introduces key definitions for understanding its scope:
    • “Covered model”: Refers to any AI model (and derivatives thereof) that meets certain criteria based on computing power, cost (over $100 million), and the extent of training. (The Act provides further detailed definitions for derivatives of such models and the conditions under which they are deemed to be covered models.)
    • “Critical harm”: Refers to severe damage that an AI model could cause, such as mass casualties from cyberattacks or grave public safety threats.
    • “Advanced persistent threats”: Describes sophisticated adversaries capable of using multiple attack vectors (including, but not limited to, cyber, physical, and deception vectors) to compromise AI models.
    • “Board of Frontier Models”: A nine-member board within the Government Operations Agency that operates independently of the Department of Technology. The California Governor may appoint an executive officer of the board, subject to Senate confirmation.
    • “Government Operations Agency”: The California state agency within which the Board of Frontier Models is established.
  2. Safety and security protocol requirements: Developers of covered AI models must implement comprehensive written safety and security protocols to manage risks throughout the model’s lifecycle. Among other things, these protocols must:
    • Describe in detail the developer’s protections and procedures to prevent the model from posing unreasonable risks of causing or enabling critical harm;
    • State the developer’s compliance requirements in an objective manner and with sufficient detail and specificity to allow the developer or a third party to readily ascertain whether the requirements of the safety and security protocol have been followed;
    • Include testing procedures to assess the risks associated with modifications to the model after its initial training;
    • Be retained in unredacted form for as long as the covered model is made available for commercial, public, or foreseeable public use plus five years; and
    • Be reviewed and updated annually to reflect changes in the model’s capabilities and industry best practices.
  3. Cybersecurity protections: Before training any covered AI model, developers are required to implement administrative, technical, and physical cybersecurity measures to prevent unauthorized access, misuse, or modifications. This includes developing the capacity for a full shutdown of the model if necessary, and ensuring safeguards against advanced persistent threats or other malicious actors.
  4. Full shutdown procedures: Developers must establish and document the conditions under which a “full shutdown” of the model or its derivatives would be enacted to prevent potential harm. This includes considering the impact of a shutdown on critical infrastructure.
  5. Compliance and third-party auditing requirements: Beginning January 1, 2026, developers of covered AI models must conduct annual third-party audits of their safety and security protocols. Developers are also required to publish redacted versions of their safety and security protocols and the results of their audits, and to submit full versions of their audits to the California Attorney General upon request. Additionally, developers must submit annual compliance statements, signed by a senior corporate officer, detailing any risks and the measures taken to prevent critical harm.
  6. Incident reporting: Any AI safety incidents involving covered models must be reported to the California Attorney General within 72 hours of the developer becoming aware of the incident. The report should detail the nature of the incident and the steps taken to address the associated risks.
  7. Co-existence with federal contracts and pre-emption: The Act does not apply to products or services to the extent that its requirements would strictly conflict with federal government entity contracts. The Act’s provisions do not supersede existing federal laws and may be adjusted or supplemented based on federal regulations or evolving technological standards. If any part of the Act is held invalid, the remaining provisions remain enforceable.
  8. Guidance and best practices: Developers are encouraged to follow industry best practices and consider guidance from organizations such as the U.S. Artificial Intelligence Safety Institute and the National Institute of Standards and Technology.
  9. Civil penalties and enforcement actions: The Act grants the Attorney General authority to initiate civil actions for violations, including:
    • Penalties for violations: Fines are imposed based on the severity of the violation:
      1. For violations causing death, bodily harm, property damage, theft, or imminent public safety threats, fines are set at a maximum of 10% of the cost of the computing power used to train the AI model (calculated using average market prices at the time of training) for the first offense, increasing to 30% for subsequent violations; and
      2. Additional penalties are prescribed for violations related to labor laws, safety protocols, and other specific sections of the Act.
    • Injunctive relief and monetary damages: Courts may issue injunctions, award compensatory and punitive damages, and grant attorney fees and costs to enforce the Act’s provisions.
    • Contractual limitations on liability: Any contract or agreement that attempts to waive, limit, or shift liability for violations is deemed void. Courts are empowered to impose joint and several liability on affiliated entities if they attempt to limit or avoid liability through corporate structuring.
    • Assessment of developer conduct: In determining whether a developer exercised reasonable care, regulators may consider the quality and implementation of the developer’s safety and security protocols, the thoroughness of its risk management practices, and comparisons to industry standards.
  10. Whistleblower protections: The Act protects employees of AI developers and their contractors/subcontractors who disclose information to the Attorney General or Labor Commissioner regarding non-compliance with safety standards or risks of critical harm. The Act prohibits retaliation against whistleblowers and mandates clear communication of employee rights. Additionally, developers must establish an internal process for employees to report violations anonymously.
  11. Public disclosure and transparency: The Attorney General and Labor Commissioner may release complaints or summaries thereof if doing so serves the public interest, with sensitive information redacted to protect public safety and privacy.
  12. Creation of the Board of Frontier Models: The Act establishes the Board of Frontier Models within the Government Operations Agency, which will regulate AI models posing significant public safety risks:
    • The Board consists of nine members, including experts in AI safety, cybersecurity, and other fields. Members are appointed by the Governor, the Senate, and the Assembly.
    • The Board will oversee the establishment of thresholds defining which AI models are subject to regulation, auditing requirements, and guidance for preventing critical harms.
  13. Establishment of CalCompute: The Act proposes the creation of CalCompute, a public cloud computing cluster designed to foster safe, ethical, and equitable AI development. CalCompute aims to:
    • Support research and innovation in AI and expand access to computational resources; and
    • Be established within the University of California system, if feasible, with funding options including private donations.
  The Act also outlines a framework for the creation and operation of CalCompute, including its governance structure, funding, and parameters for equitable access.
  14. Public access and confidentiality: While the Act imposes some limitations on public access to safety protocols and auditors' reports to protect proprietary information and public safety, it is designed to balance transparency with the need for confidentiality.

This detailed regulatory framework, if enacted, is intended to ensure that AI technologies developed and deployed in California adhere to high standards of safety, accountability, and ethical practice, while also promoting innovation and equitable access to technological resources.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© A&O Shearman
