Artificial Intelligence - Congress, Federal Agencies, and the White House Solicit Information and Take Action

Congress, numerous federal agencies, and the White House are all working to address various aspects of artificial intelligence (AI) development, regulation, and use.

Congress –

On September 8, 2023, Sen. Richard Blumenthal (D-CT) and Sen. Josh Hawley (R-MO) published a Bipartisan Framework for U.S. AI. Earlier in the year, on June 21, 2023, Senate Majority Leader Chuck Schumer (D-NY) published a separate framework for AI titled the SAFE Innovation Framework.

Additionally, the following Congressional Committee hearings were held in recent weeks:

  • September 20, 2023, US Senate Committee on Banking, Housing, and Urban Affairs
  • September 19, 2023, US Senate Select Committee on Intelligence
    • Open hearing - Advancing Intelligence in the Era of Artificial Intelligence: Addressing the National Security Implications of AI
  • September 14, 2023, US House Committee on Oversight and Accountability, Subcommittee on Cybersecurity, Information Technology, and Government Innovation
    • Hearing - How are Federal Agencies Harnessing Artificial Intelligence?
  • September 12, 2023, US Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law
    • Hearing - Oversight of AI: Legislating on Artificial Intelligence

Legislative developments and additional committee hearings on AI will continue over the coming weeks and months. For example, Sen. John Thune (R-SD) and Sen. Amy Klobuchar (D-MN) are expected to publish new AI legislation in the coming days.

Federal Agencies –

Like Congress, federal agencies are seeking information and input as they develop AI regulations and rulemakings. These efforts focus on both the direct and indirect impacts of AI. Recently solicited comments include the following:

  • On August 30, 2023, the United States Copyright Office published a Notice of Inquiry and Request for Comments titled - Artificial Intelligence and Copyright. The Office is seeking input to help “[…] assess whether legislative or regulatory steps in this area are warranted,” and soliciting comments on relevant issues “including those involved in the use of copyrighted works to train AI models, the appropriate levels of transparency and disclosure with respect to the use of copyrighted works, and the legal status of AI-generated outputs.”
    • Written comments are due by October 18, 2023. Written reply comments are due by November 15, 2023.
  • On September 7, 2023, the National Institute of Standards and Technology (NIST) published a Request for Information on Implementation of the United States Government National Standards Strategy for Critical and Emerging Technology (USG NSSCET). The Request for Information seeks input on various Critical and Emerging Technologies (CETs), including: Artificial Intelligence and Machine Learning; Communication and Networking Technologies; Semiconductors and Microelectronics, including Computing, Memory, and Storage Technologies; Biotechnologies; Positioning, Navigation, and Timing Services; Digital Identity Infrastructure and Distributed Ledger Technologies; Clean Energy Generation and Storage; and Quantum Information Technologies.
    • Comments are due by November 6, 2023.
  • On July 26, 2023, the Securities and Exchange Commission (SEC) “proposed new rules that would require broker-dealers and investment advisers to take certain steps to address conflicts of interest associated with their use of predictive data analytics and similar technologies to interact with investors to prevent firms from placing their interests ahead of investors’ interests.” The Proposed Rule, titled Conflicts of Interest Associated With the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, notes that,

    In recent years, we have observed a rapid expansion in firms’ reliance on technology and technology-based products and services. The use of technology is now central to how firms provide their products and services to investors. Some firms and investors in financial markets now use new technologies such as AI, machine learning, NLP, and chatbot technologies to make investment decisions and communicate between firms and investors. In addition, existing technologies for data-analytics and data collection continue to improve and find new applications.

    The Proposed Rule solicits, among other matters, comments concerning the ways AI may affect investment systems, and states, in relevant part,

    Any risks of conflicts of interest associated with AI use will expand as firms’ use of AI grows. These risks will have broad consequences if AI makes decisions that favor the firms’ interests and then rapidly deploys that information to investors, potentially on a large scale. Firms’ nascent use of AI may already be exposing investors to these types of risks as well as others. We are concerned that firms will intentionally or unintentionally take their own interest into account in the data or software underlying the applicable AI, as well as the applicable [predictive data analytics (“PDA”)]-like technologies, resulting in investor harm. Among other things, a firm may use these technologies to optimize for the firm’s revenue or to generate behavioral prompts or social engineering to change investor behavior in a manner that benefits the firm but is to the detriment of the investor.

    • Comments are due by October 10, 2023.

In addition to the above, the United States Patent and Trademark Office (USPTO) will be holding a Partnership Meeting on Artificial Intelligence (AI) and Emerging Technologies (ET) on September 27, 2023. Further, the Federal Trade Commission (FTC) is hosting a public Roundtable on AI and Content Creation on October 4, 2023. Of note, it was reported on July 13, 2023, that the FTC had opened a civil investigation concerning a leading generative AI company.

The White House –

Earlier this year, on July 21, 2023, the White House announced that it had “Secured Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI.” Specifically, seven of the “leading AI companies” agreed to “help move toward safe, secure, and transparent development of AI technology.”

Broadly, the companies agreed to the following eight voluntary commitments:

  1. Commit to internal and external red-teaming of models or systems in areas including misuse, societal risks, and national security concerns, such as bio, cyber, and other safety areas
  2. Work toward information sharing among companies and governments regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards
  3. Invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights
  4. Incent third-party discovery and reporting of issues and vulnerabilities
  5. Develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated, including robust provenance, watermarking, or both, for AI-generated audio or visual content
  6. Publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use, including discussion of societal risks, such as effects on fairness and bias
  7. Prioritize research on societal risks posed by AI systems, including on avoiding harmful bias and discrimination, and protecting privacy
  8. Develop and deploy frontier AI systems to help address society’s greatest challenges

The White House notes that the “[…] voluntary commitments are consistent with existing laws and regulations, and designed to advance a generative AI legal and policy regime.” Further, where the commitments mention particular models, “they apply only to generative models that are overall more powerful than the current industry frontier (e.g. models that are overall more powerful than any currently released models, including GPT-4, Claude 2, PaLM 2, Titan and, in the case of image generation, DALL-E 2).”

Some predict that the White House will release a comprehensive Executive Order concerning AI regulation and development in the near term. Additionally, the Office of Management and Budget is expected to soon release “draft policy guidance for federal agencies to ensure the development, procurement, and use of AI systems is centered around safeguarding the American people’s rights and safety.”

Next Steps –

As relevant AI legislation and regulation continue to develop and shift, those using, or affected by, AI should remain cognizant of necessary compliance frameworks, applicable existing regulations and the creation of new AI-specific rules, pending and proposed legislation, solicited federal comments and rulemakings, enforcement activity, and expanding use cases.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Kilpatrick | Attorney Advertising
