The U.S. is Scrambling to Regulate AI Without a Comprehensive Plan

While the European Union sprints ahead with draft legislation to closely regulate artificial intelligence (AI), the United States (U.S.) lags behind its western counterpart, leaving little regulatory guidance for U.S. AI companies. But early bills, statements, reports, and plans offer a glimpse of what AI regulation in the U.S. may eventually look like.

On Capitol Hill, Senate Majority Leader Chuck Schumer is making a high-profile push for broad federal AI intervention. In June 2023, he unveiled a framework to regulate AI that begins with forums at which federal lawmakers learn about the technology’s complexities, followed by the drafting of bills. While critics warn that such forums may delay the legislative process, Schumer’s framework is expected to bring new momentum to an issue that needs swift, focused attention.

Although no sweeping policy exists yet, U.S. senators introduced a variety of piecemeal bipartisan AI bills in 2023. The Global Technology Leadership Act focuses on the country’s competitiveness in AI relative to its rivals. The Transparent Automated Governance Act requires federal agencies to be transparent about their use of AI. And in an effort to protect consumers from harmful content produced by AI, two senators introduced the No Section 230 Immunity for AI Act, which, if passed, would give Americans harmed by generative AI the power to sue AI companies in state or federal court.

Meanwhile, the White House has introduced a set of regulatory guidelines called the “Blueprint for an AI Bill of Rights,” updated its roadmap for federal investment in AI research and development, and signed an executive order tackling bias and algorithmic discrimination in AI. The Blueprint, developed by the White House Office of Science and Technology Policy, identifies five voluntary principles to guide the design and use of AI.

In late July 2023, President Biden announced that his administration had secured voluntary commitments from seven American companies, including Google, Amazon, Microsoft, and Meta, to ensure their AI products are safe before they are released to the public. While the voluntary commitments are a start, intended as an immediate way to address these risks ahead of formal regulation, some critics say much more must be done to hold companies accountable for their products.

Federal committees and agencies are also launching their own AI initiatives in 2023. The National Artificial Intelligence Advisory Committee (NAIAC), which advises the President and the National AI Initiative Office, released its first report to the President in May. The report details how the U.S. government can harness the benefits of AI while mitigating its dangers, outlining four major themes with more than a dozen objectives and two dozen recommended actions. It also states that the committee will concentrate its efforts on generative AI over the next two years. Similarly, the National Institute of Standards and Technology (NIST), a U.S. Department of Commerce agency, released an AI Risk Management Framework that recommends ways to manage AI-related risks to individuals, organizations, and society. The Framework is expected to be taken seriously by the federal government, if not adopted as an industry standard.

The Federal Trade Commission (FTC) is taking a tougher stance on biometric technology, which increasingly relies on AI. Biometric technology, which authenticates individuals based on their physical characteristics, was previously regulated by state or local laws, or not at all; the FTC now states that certain uses of it may violate the FTC Act. As a result, the FTC’s new stance may spur a jump in biometric privacy litigation as plaintiffs bring claims under their states’ unfair and deceptive trade practices acts.

At the state level, six states (California, Colorado, Connecticut, Illinois, Maryland, and Virginia) have enacted, or will have enacted by the end of 2023, laws making it illegal for businesses to use AI to discriminate against or deceive consumers or job applicants. Nine states (California, Colorado, Connecticut, Indiana, Iowa, Montana, Tennessee, Utah, and Virginia) have comprehensive data privacy laws in place. At the local level, New York City now has an ordinance that regulates how AI is used in the hiring process. To learn more about these new data security and privacy laws, visit Data Meets World, where Benesch discusses all things data privacy and data security.

While U.S. AI policies may be lacking, transatlantic AI partnerships are beginning to form. In June 2023, President Joe Biden and U.K. Prime Minister Rishi Sunak agreed to work together to address the dangers of AI and leverage the technology’s opportunities. Learn more about how the United Kingdom plans to govern AI here.
