Administration Directs Federal Agencies to Promulgate New Rules Governing the “Safe, Secure, and Trustworthy” Development & Deployment of AI Systems in the Public and Private Sectors
Key Takeaways
- The AI executive order moves the U.S. closer to a broader, unified approach to federal AI regulation, expanding on the AI Bill of Rights and the NIST AI Risk Management Framework and focusing on the responsible development and deployment of AI systems to minimize discrimination, preserve privacy, and prevent the use of AI systems for "malicious cyber-enabled activities."
- Reflecting a shift beyond voluntary industry governance, the AI EO places the U.S. in a stricter regulatory mode, directing federal agencies to issue rules, guidelines, and standards within the next year and to engage in ongoing oversight of public and private AI development and deployment.
- New AI rules will mandate reporting and disclosure requirements for certain private sector actors that create and deploy AI technology, including those that develop, train, and use "dual-use foundation models," operate "large-scale computing clusters," and provide infrastructure as a service.
Overview
On October 30, 2023, the White House issued a wide-ranging 63-page Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the "AI EO" or "Order"), directing multiple federal agencies to fulfill extensive action items within the next year in pursuit of the overarching goal of encouraging the development, deployment, and use of safe and trustworthy AI systems. Implementation of many of the Order's policy directives will have an impact across the private sector, given the wide range of policy areas the AI EO covers, from cybersecurity, privacy, consumer and worker protections, and labor and employment to national security, intellectual property, technology innovation, and competition.
Significantly, the Order adopts new reporting, disclosure, and operational requirements for companies developing or training so-called "dual-use foundation models," as well as for large providers of the cloud services and infrastructure critical to training such models and facilitating their operation. Covered developers must report on the training, development, and use of dual-use foundation models, as well as on the results of "AI red-teaming"[1] testing of such models to find flaws and vulnerabilities. Cloud and infrastructure providers will be required to disclose when "foreign persons" or resellers use their cloud infrastructure for AI model training or development and to require that such foreign entities agree to make their own disclosures to the federal government. As legal authority for these requirements, the Administration relies on the Defense Production Act, which grants the President authority to ensure the timely availability of essential materials, technologies, and services from the U.S. industrial base to promote the national defense and to respond to emergencies (such as the COVID-19 pandemic).

Although the AI EO directives are significant, they are not the first U.S. measures to address the responsible use and development of AI. Building on the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the AI Bill of Rights, the AI EO reflects a broader federal strategy that extends existing U.S. policy efforts, including the voluntary commitments the White House secured earlier this year from fifteen major technology companies that agreed to mitigate risks and increase the transparency of their AI systems. It is also worth noting that the AI EO was announced just days before the international AI safety summit organized by the U.K., and less than five months after the EU expanded the scope of its AI Act to cover foundation models and additional high-risk AI systems and to prohibit, among other uses, biometric categorization of individuals and exploitation of the vulnerabilities of an individual or specific group.
Significant Aspects of the AI Executive Order
Near-Term Impact: New Regulations Affecting Certain Private Sector AI Providers
First, special attention is devoted to developers of "dual-use foundation models," defined as sophisticated, self-supervised models containing tens of billions of parameters that either exhibit, or could be easily modified to exhibit, high levels of performance at tasks posing a serious risk to economic and national security or public health and safety. Developers of these systems will be required to disclose to the Department of Commerce certain information regarding: (1) ongoing or planned activities related to training, developing, or producing dual-use foundation models; (2) ownership and possession of the model weights of any dual-use foundation models; and (3) the results of red-team testing, based on guidelines developed by NIST, together with the safety measures the company has taken in response to strengthen overall model security.
Second, companies that acquire, develop, or operate "large-scale computing clusters" will be required to report the existence and location of such clusters and the total computing power of each. Interim technical conditions for reporting, based on levels of computing power, capacity, and networking speed associated with capabilities for malicious cyber-enabled activity, are set forth in the AI EO itself and will apply until Commerce develops a longer-term set of technical conditions.
Third, companies offering infrastructure as a service (IaaS) must submit reports to the Department of Commerce disclosing when foreign persons use or resell their IaaS products. This requirement addresses national security concerns about foreign persons transacting with a U.S. IaaS provider to train large AI models with potential capabilities for use in malicious cyber-enabled activities. Here again, Commerce will develop standards and requirements to facilitate these reports.
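For a sense of scale, the Order's interim reporting thresholds are quantitative: roughly 10^26 total integer or floating-point operations used in training a model (10^23 if the model is trained primarily on biological sequence data), and, for clusters, datacenter networking over 100 Gbit/s combined with a theoretical maximum capacity of 10^20 operations per second. The sketch below is an illustrative back-of-envelope check only, not anything the Order prescribes; the 6 × parameters × tokens estimate of training compute is a common rule of thumb for transformer models, not part of the Order, and all function names here are hypothetical.

```python
# Illustrative back-of-envelope check against the AI EO's interim
# reporting thresholds: >1e26 total training operations for a model
# (>1e23 if trained primarily on biological sequence data), and a
# cluster with >100 Gbit/s datacenter networking and ~1e20 FLOP/s
# theoretical peak capacity (treated here as a >= comparison).

MODEL_OP_THRESHOLD = 1e26       # total training operations
BIO_OP_THRESHOLD = 1e23         # if trained primarily on biological sequence data
CLUSTER_FLOPS_THRESHOLD = 1e20  # theoretical peak operations per second
CLUSTER_NETWORK_GBITS = 100     # datacenter networking, Gbit/s

def training_ops_estimate(parameters: float, tokens: float) -> float:
    """Rough transformer training cost: ~6 FLOPs per parameter per token
    (a common rule of thumb, not language from the Order)."""
    return 6 * parameters * tokens

def model_reporting_likely(parameters: float, tokens: float, bio_data: bool = False) -> bool:
    """Compare estimated training compute against the interim model threshold."""
    threshold = BIO_OP_THRESHOLD if bio_data else MODEL_OP_THRESHOLD
    return training_ops_estimate(parameters, tokens) > threshold

def cluster_reporting_likely(peak_flops_per_sec: float, network_gbits: float) -> bool:
    """Compare a cluster's peak capacity and networking against the interim conditions."""
    return (peak_flops_per_sec >= CLUSTER_FLOPS_THRESHOLD
            and network_gbits > CLUSTER_NETWORK_GBITS)

# Example: a hypothetical 70-billion-parameter model trained on
# 2 trillion tokens lands around 8.4e23 operations.
print(training_ops_estimate(70e9, 2e12))   # ~8.4e+23
print(model_reporting_likely(70e9, 2e12))  # False
```

On these rough numbers, even a 70-billion-parameter model trained on 2 trillion tokens falls well below the interim model-reporting threshold, which is aimed at frontier-scale training runs rather than today's typical commercial models.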
Longer-Term Impact: Mandates for Agencies to Begin Developing Standards, Guidelines, and Best Practices That May Lead to Additional Private Sector Regulation in Months Ahead
Beyond the rules for certain private sector actors, the AI EO directs numerous federal agencies to engage in a range of new activities, from data collection to standards development, within prescribed deadlines ranging from 30 to 365 days from the date of the AI EO. Notably, mandates to develop new rules and standards governing safety and national security face the earliest deadlines.
The Department of Commerce and NIST will take a leading role in establishing new guidelines, standards, and guidance on developing safe and secure AI. For example, NIST is tasked with creating new standards for the red-team testing that will be required of dual-use foundation model developers.[2] The agency will also produce a report to the President on the risks and benefits of dual-use foundation models, after soliciting input from industry and interested stakeholders.
NIST will also develop a companion resource to the AI Risk Management Framework focused on generative AI; a secure software development framework for generative AI and for dual-use foundation models; and guidance for evaluating and auditing AI capabilities.
The U.S. Department of Commerce will be responsible for issuing guidance on many of the key safety and security requirements, including identifying and detecting synthetic content, labeling it (e.g., through watermarking), and auditing such labels, to help prevent generative AI models from producing harmful content and deepfakes and to allow people to differentiate between "original" content and content produced by generative AI tools. The emphasis on government safeguards and risk assessments in addressing safety and security is likely to have a significant influence on how the private sector approaches the same issues.
Separately, the Departments of Energy and Homeland Security will apply the new NIST standards governing critical infrastructure safety, including standards addressing chemical, biological, radiological, nuclear, and cybersecurity risks. Homeland Security will receive assessments of potential AI-related risks to critical infrastructure and use that information to apply the NIST standards to AI deployed in those sectors. It will also establish an AI Safety and Security Board to advise the Administration on recommendations for improving the security and resilience of critical infrastructure.
Other Key AI EO Directives
- Protecting Privacy
The AI EO emphasizes prioritizing federal support to accelerate privacy-preserving techniques and enforcing existing consumer protection laws and principles to protect against fraud, unintended bias, and other potential AI harms. For example, the AI EO highlights the importance of funding a Research Coordination Network (RCN) to advance rapid breakthroughs in privacy-preserving research and technologies, "such as cryptographic tools that preserve individuals' privacy." The National Science Foundation is expected to work with the RCN to promote the adoption of leading-edge privacy-preserving technologies by federal agencies. While the AI EO places a clear emphasis on privacy protections, it provides little detail on the specific measures the federal government may undertake or the extent to which those measures will affect private industry. Currently, thirteen states have comprehensive data privacy laws, and some states also have laws requiring disclosure of the use of AI-powered "bots" to communicate or interact online with individuals regarding the purchase of goods or services or voting in elections.[3]
Given the absence of federal privacy legislation,[4] the AI EO goes on to direct federal agencies to: (i) develop guidelines for evaluating the effectiveness of "privacy-preserving techniques," including those used in AI systems; and (ii) conduct an evaluation of how federal agencies collect and use commercially available information.
- Guardrails to Prevent AI Discrimination
Numerous federal agencies are tasked with developing additional guidance and best practices to help mitigate the risk of discrimination when AI is used to assist in decisions about housing, employment, and criminal justice matters. For example, the AI EO directs the Department of Housing and Urban Development to issue guidance on using fair-lending and fair-housing laws to prevent AI-driven discrimination in digital ads for credit and housing, and directs the Justice Department to develop best practices to address algorithmic discrimination and "ensure fairness" when AI technology is used for sentencing, parole, and surveillance.
The Federal Housing Finance Agency and the Director of the Consumer Financial Protection Bureau (CFPB) are "encouraged" to require their regulated entities to use appropriate methodologies, including AI tools, to identify potential areas of bias and discrimination and solutions to minimize them. The CFPB and the Secretary of Housing and Urban Development are also "encouraged" to issue guidance addressing possible violations of the Fair Credit Reporting Act, the Fair Housing Act, the Consumer Financial Protection Act, and the Equal Credit Opportunity Act as they relate to consumer credit, housing, and real estate transactions.
The AI EO acknowledges that AI can enhance law enforcement efficiency and accuracy, consistent with protections for privacy, civil rights, and civil liberties, but it directs the U.S. Attorney General to submit a report recommending best practices, including safeguards and appropriate limits on the use of AI. In addition, agencies are directed to implement measures to prevent and address unlawful discrimination and other harms resulting from uses of AI in federal government programs and benefits administration.
- Promoting Innovation and Competition
Importantly, the U.S. Patent and Trademark Office is tasked with issuing guidance on the intersection of AI with copyright and inventorship, in addition to developing a program to mitigate AI-related intellectual property risks. Reflecting the unsettled state of copyright law and ongoing litigation over whether works created by AI may be copyrighted, the AI EO directs the Copyright Office to recommend potential executive actions addressing the scope of protection for works produced using AI and the treatment of copyrighted works used in AI training.
The AI EO takes a multi-pronged approach to promoting innovation and competition in the U.S. labor market and federal procurement, including "accelerat[ing] the rapid hiring of AI professionals" as part of a government-wide effort, led by the U.S. Office of Personnel Management, U.S. Digital Service, U.S. Digital Corps, and the Presidential Innovation Fellowship, to increase AI talent and hire AI and machine learning engineers in federal agencies. The Order also seeks to strengthen U.S. leadership abroad by encouraging allies to advance responsible global technical standards for AI development and to develop common regulatory and accountability principles, including for managing AI risks.
Looking Ahead
Many unknowns remain about how the legal and regulatory landscape will evolve and how significantly the AI EO will impact the private sector. But one thing is clear: an increased focus on guardrails, regulations, and standards governing the development and deployment of AI systems is inevitable. Two days after the AI EO was released, the Office of Management and Budget (OMB) announced a draft policy, "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence," intended to establish AI governance structures in federal agencies, advance responsible AI innovation, increase transparency, protect federal workers, and manage risks from government uses of AI.
The AI EO, along with existing measures and increased state efforts to pass piecemeal AI regulations, signals that private companies—particularly those developing and implementing AI systems, as well as the technology sector at large—should plan and prepare for an increase in legal and regulatory oversight.
*Edlira Kuka, a member of the communications group at DWT, is currently pending bar admission and licensure, after passing the July 2023 District of Columbia Bar Exam.
[1] The AI EO defines "AI red-teaming" as a structured testing effort "using adversarial methods to identify flaws and vulnerabilities in an AI system, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the [AI] system."
[2] NIST's definition of "red team" is "a group of people authorized and organized to emulate a potential adversary's attack or exploitation capabilities against an enterprise's security posture … to improve enterprise cybersecurity by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders (i.e., the Blue Team) in an operational environment." https://csrc.nist.gov/glossary/term/red_team (Last visited Nov. 5, 2023.)
[3] On the election impact of AI, the Federal Election Commission (FEC) is considering rules to clarify that existing federal election law and FEC regulations prohibit any deliberately deceptive use of AI technology and "deepfakes" in campaign advertisements, unless such use is clearly satire or parody "where the purpose and effect is not to deceive voters."
[4] In the "Fact Sheet" issued ahead of the AI EO, the President "call[ed] on Congress to pass bipartisan data privacy legislation to protect all Americans, especially kids."