California Governor Vetoes AI Safety Bill SB 1047, Signs AB 2013 Requiring Generative AI Transparency

Morgan Lewis

On September 29, 2024, California Governor Gavin Newsom vetoed a bill that would have imposed new AI safety regulations, while approving a law mandating transparency in generative AI. This update explores the implications for developers and the future of AI regulation in California.

The California State Senate received contrasting decisions from the Governor’s Office this past weekend on two bills aimed at regulating California AI developers. Governor Gavin Newsom vetoed SB 1047, which would have imposed new safety requirements on developers of large-scale AI models. However, he signed AB 2013, a law requiring certain public disclosures by generative AI developers to enhance transparency around AI data practices.

AI SAFETY BILL SB 1047 VETOED BY CALIFORNIA GOVERNOR

After the California legislature passed SB 1047, a bill that would have imposed new safety regulations on large-scale AI models, in late August, Governor Gavin Newsom returned the bill without a signature.

SB 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would have introduced new safety requirements for the development of large-scale AI models meeting certain compute and cost-to-design thresholds. Notably, it would have required large AI model developers, as defined by the bill, to implement full shutdown capabilities, adopt safety and security protocols to avoid “critical harms” to the state’s infrastructure and public, and comply with certain audit requirements. The bill would also have authorized the Attorney General of California to bring civil suits for violations.

In his message vetoing the bill, Governor Newsom outlined his reasoning for declining to sign. He noted at the outset that California is home to many of the world’s leading AI companies and that, given its leadership in this area, the state takes the responsibility to regulate the industry seriously. While agreeing that safety protocols and “proactive guardrails” must be adopted and implemented, the governor cautioned that SB 1047 focused only on “the most expensive and large-scale models,” which could give the public a false sense of security about controlling AI even as “smaller and more specialized models” emerge that may be equally or more dangerous.

Newsom also stated that SB 1047 did not take into account whether an AI system “is deployed in high-risk environments, involves critical decision-making, or the use of sensitive data.” Further, the governor disagreed with applying “stringent standards” to even “the most basic functions” deployed by large AI systems.

Governor Newsom went on to describe some of the ongoing “evidence-based” efforts to regulate AI, including the US AI Safety Institute’s development of guidance on national security risks and his own September 2023 executive order directing agencies under his administration to perform risk analyses of the threats and vulnerabilities AI poses to California’s critical infrastructure. He acknowledged that a “California-only approach” may be warranted, particularly absent federal action by the US Congress, but that it must be based on empirical evidence and science.

For more information on SB 1047, read our August 29, 2024 LawFlash.

WHERE CALIFORNIA AI REGULATION MAY GO FROM HERE

The governor’s messaging seems to indicate a “wait-and-see” approach to the implementation of more heavy-handed AI safety regulations while federal and EU regulatory efforts take shape. As a result, California is unlikely to introduce additional legislation with SB 1047’s level of oversight on AI safety in the near term.

Governor Newsom’s rejection of an approach that scrutinizes AI based on size or compute power alone could signal an interest in regulation that is broader in potential reach but more targeted in application. Such regulation could filter not on processing power or size, but rather on how and where AI systems are deployed and the sensitivity of the data they use. Thus, developers of AI models of any size, including smaller-scale models, that operate in “high-risk environments” or involve “critical decision-making” may be subject to state AI regulation down the road.

AI TRANSPARENCY BILL AB 2013 SIGNED INTO LAW

Separately, the governor signed into law AB 2013, a bill relating to AI transparency. As discussed in our September 16, 2024 LawFlash, AB 2013 requires any “developer” that makes generative “artificial intelligence” technology available to California residents to publicly disclose on its website documentation that provides, among other requirements, a summary of the datasets used in the development and training of the AI technology or service.

AB 2013 applies both to original developers of a generative AI technology or service and to any person or entity meeting the definition of “developer” that makes a “substantial modification” to a generative AI technology or service released on or after January 1, 2022. “Substantially modify,” as defined by the bill, includes new versions, new releases, retraining, and fine-tuning that materially changes functionality or performance. Such broad reach could have implications for certain service providers or collaborators that engage in material retraining or fine-tuning of an existing generative AI model within the scope of their license.

Notably, AB 2013 does not apply to generative AI technology that is (1) solely designed to ensure security and integrity, (2) solely intended for the operation of aircraft within the national airspace, or (3) developed for national security, military, or defense purposes, and is made available only to a federal entity.

AB 2013 is part of a sweeping package of AI-related bills recently signed into law in California, including bills requiring state agencies to include disclaimers when using generative AI to communicate with the public (SB 896), banning the use of “deepfakes” in election communications close to a general election day (AB 2839), and combating online disinformation by requiring large platforms to block or label deceptive election-related content during specified periods before and after an election (AB 2655).

For a comprehensive list of recently passed California AI bills, read Governor Newsom’s September 29, 2024 press release.

ANTICIPATED EFFECTS OF AB 2013

AB 2013’s new disclosure requirements, though modest compared with SB 1047’s attempted safety regime, may be burdensome for providers of generative AI. Some generative AI developers may be grandfathered in under the bill’s text: as stated above, only generative AI services, or substantial modifications to generative AI services, released on or after January 1, 2022 are subject to the disclosure requirements. However, given the rapid pace of AI development and the frequent release of new or updated models, it is unclear how much impact this date cutoff will have.

The transparency requirements for generative AI imposed by AB 2013 bear some similarity to those of the EU AI Act. In particular, under the EU law, providers of generative AI will have to comply with transparency requirements and EU copyright law by (1) disclosing that content was generated by AI; (2) designing the model to prevent it from generating illegal content; and (3) publishing summaries of copyrighted data used for training.

Further, content that is either generated or modified with the help of AI, such as images, audio, or video files (e.g., deepfakes), must be clearly labeled as AI-generated so that users are aware when they come across such content. Generative AI developers operating in both jurisdictions may be able to leverage compliance with transparency requirements in the European Union to meet the new requirements in California, although they should consult counsel to confirm.

KEY TAKEAWAYS

California’s recent legislative activity shows that one of the world’s most important technology centers is paying close attention to both the use and potential misuse of AI. The state is willing to require public disclosures regarding the use of generative AI and to regulate specific, targeted risks, such as election misinformation and deepfakes. However, whether more stringent AI safety regulation will be imposed at the development stage may depend on federal action from Congress.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Morgan Lewis

