The NTIA Report and California AI Bill: A New Era for Open Model Governance

Fenwick & West LLP

What You Need To Know

  • Open models are garnering regulatory attention because of their potential for third-party misuse.
  • The National Telecommunications and Information Administration (NTIA) recommends the federal government collect further evidence before regulating any open models but has not ruled out restricting access to certain models. The California legislature recently passed SB 1047, which awaits the governor’s signature or veto and could also have a significant impact on open models.
  • Regulators are particularly interested in models trained using exceptional computing power. The EU has focused on models trained with 10²⁵ floating-point operations, a threshold that covers a few current frontier models, while California and the U.S. federal government have keyed in on future models trained using 10²⁶ operations.
  • Companies should pay attention to best practices outlined by the National Institute of Standards and Technology (NIST), as the California legislation incorporates NIST’s guidance by reference.

Open models play a crucial role in fostering a diverse and innovative AI ecosystem. But their irrevocable accessibility also presents challenges for preventing downstream misuse, and that has caught the government’s attention. President Biden’s 2023 Executive Order on AI (Biden EO) defines powerful open models with tens of billions of parameters as “dual-use foundation models with widely available weights.” To evaluate the risks of those open models, the Biden EO directed the Department of Commerce to solicit public comments and submit a report with regulatory recommendations.

The prospect of restrictions on open models triggered significant reactions from the AI community, resulting in over 330 public comments submitted to the NTIA. Many major AI leaders, such as OpenAI, Google, Microsoft, Meta, Anthropic, IBM, Cohere, Stability AI, and EleutherAI, submitted extensive opinions on open models’ risks, benefits, and regulation. On July 30, 2024, the NTIA released a report recommending further evidence collection before implementing any regulations on open models.

Meanwhile, on August 29, 2024, the California legislature separately passed an AI safety bill, SB 1047 (CA AI Bill), which currently awaits Gov. Gavin Newsom’s signature or veto. The legislation is not specifically directed at open models, but because it focuses on model developers’ liability rather than on downstream applications, it could significantly impact the public dissemination of model weights in an open ecosystem.

The Risk and Benefit Considerations

Because developers retain little control over downstream use, some consider models with widely available weights to present greater safety risks than even more powerful closed foundation models. Once released, an open model and its weights are nearly impossible to retract. Built-in guardrails can be circumvented or even completely removed after release. These exploits potentially enable harmful uses by malicious parties outside the original model developers’ control.

Notwithstanding such concerns, the public comments to the NTIA overwhelmingly support an open model ecosystem, arguing that the evidence and data do not substantiate those concerns. The NTIA report likewise recognizes the significant benefits of open models, including fostering innovation, democratizing access, and enabling broad participation from various stakeholders. In its report, the NTIA weighed the marginal risk added by open models’ accessibility and ease of distribution against the open model ecosystem’s significant benefits.

The NTIA’s Balanced Approach: Evidence Before Action

Weighing the risks and benefits of open models, the NTIA has recommended a monitoring approach that imposes no immediate regulation but preserves the possibility of restricting access to certain powerful open models in the future.

The report outlines a three-step process: Collect evidence, evaluate it, and act on findings. This involves encouraging standards, conducting audits, supporting safety research, and developing AI safety benchmarks. If necessary, regulatory actions, including restrictions on access, may follow based on the evidence. The NTIA report stresses the need for flexibility as AI technology evolves, aiming to balance fostering innovation and safeguarding against the marginal risks that open models pose.

The NTIA’s recommended approach of collecting and evaluating evidence resonates with a reporting requirement set forth in the Biden EO, which mandates that companies training models using more than 10²⁶ floating-point operations (FLOPs) of computing power provide reports to the federal government.

California’s Sweeping Approach

Unlike the federal reporting requirements, the CA AI Bill seeks to impose additional regulations on models that meet a regulatory threshold. The CA AI Bill defines a “covered model” as an AI model trained using more than 10²⁶ FLOPs. While this threshold aligns with the Biden EO’s, the CA AI Bill extends its reach by imposing regulations and potential liabilities on those “covered models.” The regulations include various pre-training obligations, such as implementing cybersecurity protections to prevent “unsafe post-training modification,” maintaining the capability to enact a “full shutdown,” and adopting written safety and security protocols. See § 22603(a).

The CA AI Bill’s legislative approach appears to contrast with the federal government’s approach, which recognizes the benefits of open models and prefers collecting evidence before taking regulatory actions.

Understanding the FLOPs Regulatory Threshold

The highly technical figure of 10²⁶ FLOPs is best understood by comparing the U.S. regulatory threshold to its European Union counterpart. In the U.S., both the Biden EO and the CA AI Bill use a threshold of 10²⁶ FLOPs. In comparison, the EU AI Act classifies models trained with 10²⁵ FLOPs as posing systemic risk. While 10²⁵ and 10²⁶ appear to be similar numbers, the single order of magnitude between them reflects a significant divergence in regulatory philosophy. The frontier models current when the EU AI Act took effect in August 2024 were trained on the order of 10²⁵ FLOPs; in fact, no publicly known AI model had exceeded 5×10²⁵ FLOPs as of April 2024.

[Chart data as of April 5, 2024. Source: Epoch AI]

Putting these observations into regulatory perspective: the EU AI Act focuses on immediately regulating current and future frontier models because current frontier models have already reached the EU’s 10²⁵ FLOPs threshold. In contrast, no publicly known model has yet reached the U.S. regulatory threshold. As articulated in the NTIA report, the U.S. federal government focuses on collecting evidence about future model risks if companies begin to train models several times larger than today’s frontier models. California instead intends to begin regulating when companies train that next generation of models.
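
To make the order-of-magnitude gap concrete, the short Python sketch below estimates a model’s training compute and compares it to the two thresholds. It uses the commonly cited approximation that training compute ≈ 6 × parameters × training tokens; that heuristic and the example model sizes are illustrative assumptions, not figures drawn from the NTIA report, the Biden EO, or the CA AI Bill.

    # Rough sketch: estimate training compute with the common 6*N*D heuristic
    # and compare it against the EU and U.S. regulatory thresholds.
    # The heuristic and the example training runs below are illustrative assumptions.

    EU_AI_ACT_THRESHOLD = 1e25   # FLOPs: systemic-risk presumption under the EU AI Act
    US_THRESHOLD = 1e26          # FLOPs: Biden EO reporting / CA AI Bill "covered model"

    def estimated_training_flops(parameters: float, training_tokens: float) -> float:
        """Approximate training compute as 6 * parameters * training tokens."""
        return 6 * parameters * training_tokens

    def classify(flops: float) -> str:
        """Map an estimated compute figure onto the two regulatory thresholds."""
        if flops >= US_THRESHOLD:
            return "exceeds the U.S. 10^26 threshold"
        if flops >= EU_AI_ACT_THRESHOLD:
            return "exceeds the EU 10^25 threshold"
        return "below both thresholds"

    # Hypothetical training runs: (parameter count, training tokens).
    for params, tokens in [(70e9, 15e12), (200e9, 15e12), (1e12, 30e12)]:
        flops = estimated_training_flops(params, tokens)
        print(f"{params:.0e} params, {tokens:.0e} tokens -> ~{flops:.1e} FLOPs: {classify(flops)}")

Under these assumptions, only a training run roughly an order of magnitude larger than today’s frontier runs would cross the U.S. threshold, which is the gap the table below summarizes.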

Regulation          | Threshold   | Approach
EU AI Act           | 10²⁵ FLOPs  | Regulating current and future frontier models
NTIA and Biden EO   | 10²⁶ FLOPs  | Collect evidence related to future frontier models
CA AI Bill          | 10²⁶ FLOPs  | Regulating future frontier models

The CA AI Bill’s Impact on the Open Model Ecosystem

The CA AI Bill requires developers to implement “cybersecurity protections to prevent … unsafe post-training modifications of” covered models and the capability to promptly enact a “full shutdown.” See §§ 22603(a)(1) and (a)(2). These obligations could significantly affect the future of the open model ecosystem, in which a model’s weights are widely available to the public.

Before discussing how those requirements may affect different types of models to varying degrees, it is worth noting that the openness of AI models exists along a spectrum, as shown in the diagram below.

[Diagram: the spectrum of model openness. Source: Comment to the NTIA from Connected Health Initiative]

Future frontier models “with widely available weights” appear most likely to be impacted by the CA AI Bill. For closed models hosted on a model developer’s servers, preventing “unsafe post-training modifications” and implementing a “full shutdown” seem more manageable. Open models with widely available weights, however, can be copied, disseminated, hosted locally by third parties, and freely adjusted. While § 22602(k) of the CA AI Bill imposes the “full shutdown” requirement only on models and derivatives that are “controlled” by a developer, the boundary of “control” may remain unclear for models released to the public. Likewise, what constitutes reasonable and sufficient protection against “unsafe post-training modifications” is difficult to define and implement for an open model, which is intended to be further modified by the public. This uncertainty in bringing open models into compliance with the CA AI Bill may hinder the future release of powerful open models.

Towards Standardized Risk Measurement: The Role of NIST

The CA AI Bill highlights the uncertainty and difficulty of measuring model risks and defining safeguards, particularly for open models. These issues, both technical and legal in nature, will likely push the AI industry toward standardized risk measurement. The NTIA report and many public comments to the NTIA, for example, consistently call for a better science of AI risks. To this end, NIST will likely play a central role, especially through its newly formed division, the U.S. Artificial Intelligence Safety Institute (AISI).

Adhering to the guidance and standards from NIST and AISI will help AI companies mitigate risks associated with developing or deploying AI. Notably, NIST recently released the Artificial Intelligence Risk Management Framework and the Secure Software Development Practices for Generative AI. These frameworks, while not legally binding regulations, may become de facto requirements for AI developers. For example, the CA AI Bill mandates that model developers and computing cluster operators “shall consider industry best practices and applicable guidance from” the AISI and NIST. See §§ 22603(i) and 22604(b). The bill also requires California’s Government Operations Agency to issue regulations establishing AI model auditing requirements that “shall, at a minimum, be consistent with guidance issued by” the AISI and NIST. See the proposed change to CA Government Code § 11547.6(e).

The Debate Between Horizontal and Vertical Approaches: Regulating Models or Applications

The NTIA report and the CA AI Bill highlight contrasting approaches to AI regulation. In considering whether the focus should be placed on “interventions in downstream pathways through which those risks materialized rather than in the availability of model weights,” the NTIA report hints that the federal government may favor a vertical, sector-specific regulatory approach focusing on downstream applications. For example, the NTIA report contains detailed analyses of high-risk areas, including chemical, biological, radiological, and nuclear (CBRN) threats, cybersecurity, and misinformation/disinformation.

Discussions about open model safety have focused on model developers’ pre-release obligations, such as thorough and scientific red-teaming efforts to stress test the model’s safety. If the scope of regulations is shifted downstream, federal regulators may start to pay more attention to deployers that incorporate AI tools into the end-user products in certain sectors.

In contrast, the CA AI Bill adopts a horizontal approach that focuses on regulating the models themselves instead of downstream applications, emphasizing model developers’ obligations. In fact, the bill prohibits any agreement that shifts liability from the developer to another party in exchange for the right to use the developer’s AI products. See § 22606(c)(1). This approach of regulating models, which can be regarded as a general-purpose technology, has drawn criticism from leaders in the AI industry. It remains to be seen whether federal regulation will at some point expressly preempt divergent approaches such as the CA AI Bill’s.

Conclusion

The evolving landscape of AI regulation, as reflected in the NTIA’s cautious approach and California’s more aggressive stance, underscores the ongoing debate over how best to balance innovation with safety. As these discussions continue, the path forward will likely shape the future of open models and their role in the AI ecosystem.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Fenwick & West LLP
