2023: A Generative AI Odyssey

Artificial intelligence (AI) has long existed in the public consciousness through science fiction, doomsday scenarios, and fears of Ray Kurzweil’s singularity, but it now appears to be an accessible reality. 2023 has begun with a sharp increase in the number of AI tools in the marketplace, such as AI-based bots that understand natural language and generate human-like responses in text-based conversations. These bots are examples of “generative AI”: algorithms that take inputs and produce “new” content, such as text, imagery, and audio.

Many businesses already use AI tools, whether to make employment decisions or to automate simple tasks. Regulation of these tools has already begun at the state and local levels, through measures such as the various state consumer privacy laws that regulate automated decision-making and NYC’s Automated Employment Decision Tools Law (NYC AEDT Law), originally set to take effect on January 1, 2023 (currently delayed to April 15, 2023). Now, with the advent of readily accessible tools like these AI-based text bots, which can seemingly create work product from a one-sentence prompt, the Biden Administration and federal agencies such as the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) are also weighing in on the discussion.

In October 2022, the White House Office of Science and Technology Policy (OSTP) issued its Blueprint for an AI Bill of Rights. This document set out basic principles and guidance for the ethical use of AI: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, including the ability to opt out of automated systems.

While the OSTP continues to explore how to operationalize this guidance, NIST has provided its own guidance through the AI Risk Management Framework (AI RMF 1.0), to which OSTP provided “extensive input and insight.” The AI RMF 1.0 is built around four “core functions”: governing AI risk through organizational policies and culture (as we’ve discussed before), mapping AI risks in context, measuring those risks through multifactorial assessment methods and independent experts, and managing mapped and measured risks through triage. NIST supplements the framework with the companion AI RMF Playbook, Roadmap, Crosswalks, and Perspectives. Comments on AI RMF 1.0 have closed, and as of this blog’s publication, an updated version is expected in spring 2023.

In the near term, organizations have begun to address issues ranging from the popularized, general use of AI-based text bots (accessible to anyone with an internet connection) to purpose-driven applicant tracking system (ATS) initiatives that comb and sift resumes “automagically” before relinquishing decision-making to employers. Approaches, at least initially, seem to cluster at two ends of a spectrum:

First, many organizations are adopting or updating internal policies governing the development, deployment, and ongoing monitoring of such automated processing tools (as we’ve also discussed before regarding frameworks posed by the Federal Trade Commission, among others) and adopting checklists and measurement tools in service of responsible development. Some of these efforts build on existing platforms for cataloging and mapping internal data stores and related organizational activities, many of which began in response to growing organizational interest in data security and data privacy.

Second, organizations are beginning to confront regulatory notice requirements. Beyond the NYC AEDT Law’s notice requirements, several state privacy laws address the topic. In particular, the California Privacy Rights Act (CPRA) gives the California Privacy Protection Agency (CPPA) a mandate to promulgate regulations governing access and opt-out rights regarding covered entities’ uses of automated decision-making technology. Similarly, the Virginia Consumer Data Protection Act (VCDPA), the Colorado Privacy Act (CPA), and the Connecticut Data Privacy Act (CTDPA) all grant rights to opt out of personal information processing for purposes of profiling, and they impose additional requirements on the use of automated decision-making technology.

While regulations will be promulgated and frameworks will be developed, there are signs that, at least in the U.S., the government may be moving toward common ground. OSTP and NIST, for example, have continued to communicate with each other so that their guidance remains “complementary,” as Alondra Nelson stated in her former capacity as OSTP’s chief. Common themes, such as consumer protection and transparency, do indeed emerge in both frameworks. Businesses should therefore keep both in mind as they develop, or continue to develop, their AI toolbox and should leverage that pre-work when regulatory disclosures and practices finally “go live,” likely in a very different form than what we’ve seen so far.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© BakerHostetler
