Artificial Intelligence Systems, Profiling, and the New U.S. State Privacy Laws

Cozen O'Connor
The rapid spread of Artificial Intelligence (AI) systems in recent years has overlapped with the enactment of comprehensive privacy laws by multiple U.S. states.  These laws apply to AI systems in the same way they apply to any other online service, but several of them also contain provisions that specifically address certain uses of AI systems, in particular their use in profiling.  This article surveys those provisions and assumes the reader is already familiar with basic concepts in the comprehensive privacy laws, such as controllership and applicability thresholds.

The new comprehensive privacy laws of Iowa and Utah contain no provisions specific to the use of AI, for profiling or otherwise.  The laws of Connecticut, Delaware, Indiana, Montana, Tennessee, Texas, and Virginia are largely the same as one another.  California, Colorado, Florida, and Oregon depart from that majority approach in significant ways.

Connecticut, Delaware, Indiana, Montana, Tennessee, Texas, and Virginia

The comprehensive privacy laws of Connecticut, Delaware, Indiana, Montana, Tennessee, Texas, and Virginia (the “Majority Profiling Model”) mostly share a common definition of profiling.  They define profiling as the “automated processing” of personal data “to evaluate, analyze or predict” characteristics of a person’s “economic situation, health, personal preferences, interests, reliability, behavior, location or movements.” Delaware adds a person’s “demographic characteristics,” and Indiana includes characteristics of health records. Indiana, Tennessee, and Texas limit the definition to “solely automated processing.”

Under the Majority Profiling Model, residents of the applicable state have an opt-out right in connection with profiling used in furtherance of a decision that produces a legal or other similarly significant effect.

In Connecticut and Delaware, decisions that produce a legal or similarly significant effect are decisions that result in the provision or denial of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, health care services, or access to essential goods or services.  Indiana, Montana, Tennessee, Texas, and Virginia replace “essential goods or services” with “basic necessities, such as food and water.”  Additionally, Texas and Virginia include “education enrollment,” but not “education opportunity.”

Connecticut, Delaware, Indiana, Montana, Tennessee, and Texas refer to “solely” automated processing either in their definitions of profiling or in their statements of the opt-out right; only Virginia does not.  As a result, the opt-out rights in Connecticut, Delaware, Indiana, Montana, Tennessee, and Texas appear to apply only where there is no human involvement in the decision (equivalent to “Solely Automated Processing” under the Colorado regulations, discussed below), while Virginia’s opt-out right may reach any use of AI for profiling, even where humans are involved (equivalent to “Human Involved Automated Processing” and “Human Reviewed Automated Processing” under the Colorado regulations).

Under all states following the Majority Profiling Model, a controller must perform a data protection assessment if profiling presents a reasonably foreseeable risk of (i) unfair or deceptive treatment of, or unlawful disparate impact on, consumers, (ii) financial, physical or reputational injury to consumers, (iii) a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where such intrusion would be offensive to a reasonable person, or (iv) other substantial injury to consumers.
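The four risk categories operate disjunctively: a reasonably foreseeable risk in any one of them is enough to trigger the assessment obligation.  The following minimal Python sketch illustrates that structure (the function and parameter names are illustrative assumptions, not statutory terms):

    def assessment_required(
        unfair_or_deceptive_treatment: bool,        # incl. unlawful disparate impact
        financial_physical_or_reputational_injury: bool,
        offensive_intrusion_on_private_affairs: bool,
        other_substantial_injury: bool,
    ) -> bool:
        """Whether profiling triggers a data protection assessment under the
        Majority Profiling Model: any one foreseeable risk category suffices."""
        return any([
            unfair_or_deceptive_treatment,
            financial_physical_or_reputational_injury,
            offensive_intrusion_on_private_affairs,
            other_substantial_injury,
        ])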

Colorado

The statutory language of the Colorado Privacy Act (CPA) follows the Majority Profiling Model for the definition of profiling and the opt-out right.  

Under the CPA, a controller must perform a data protection assessment if the profiling presents a reasonably foreseeable risk of: (i) unfair or deceptive treatment of, or unlawful disparate impact on, Colorado consumers, (ii) financial or physical injury to Colorado consumers, (iii) a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of Colorado consumers if the intrusion would be offensive to a reasonable person, or (iv) other substantial injury to Colorado consumers (collectively, “Heightened Risks”).  The only change from the Majority Profiling Model is that the CPA omits the risk of reputational injury as a trigger for a data protection assessment.

Unlike the other states, Colorado has adopted regulations that provide significant additional detail.

The regulations specify that when profiling is covered by a data protection assessment, the assessment must address the following:

  1. The specific types of personal data that were or will be used in the profiling or decision-making process;
  2. The decision to be made using profiling;
  3. The benefits of automated processing over manual processing for the stated purpose;
  4. A plain language explanation of why the profiling directly and reasonably relates to the controller’s goods and services;
  5. An explanation of the training data and logic used to create the profiling system, including any statistics used in the analysis, either created by the controller or provided by a third party which created the applicable profiling system or software;
  6. If the profiling is conducted by third party software purchased by the controller, the name of the software and copies of any internal or external evaluations sufficient to show the accuracy and reliability of the software where relevant to the Heightened Risks;
  7. A plain language description of the outputs secured from the profiling process;
  8. A plain language description of how the outputs from the profiling process are or will be used, including whether and how they are used to make a decision to provide or deny or substantially contribute to the provision or denial of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, health-care services, or access to essential goods or services;
  9. If there is human involvement in the profiling process, the degree and details of any human involvement;
  10. How the profiling system is evaluated for fairness and disparate impact, and the results of any such evaluation;
  11. Safeguards used to reduce the risk of harms identified; and
  12. Safeguards for any data sets produced by or derived from the profiling.

Controllers have an affirmative obligation to provide clear, understandable, and transparent information to Colorado consumers about how their personal data is used, including for profiling.  Privacy policies must disclose that personal data will be used for profiling in furtherance of decisions that produce legal or similarly significant effects and include the following information (collectively, the “Profiling Disclosures”):

  1. The decision subject to the profiling;
  2. The categories of personal data used as part of the profiling used in furtherance of decisions that produce legal or other similarly significant effects;
  3. A non-technical, plain language explanation of the logic used in the profiling process;
  4. A non-technical, plain language explanation of the role of meaningful human involvement in profiling and the decision-making process;
  5. How profiling is used in the decision-making process;
  6. The benefits and potential consequences of the decision based on the profiling; and
  7. An explanation of how Colorado consumers can correct or delete the personal data used in the profiling used in the decision-making process.

The regulations define three levels of automated processing by AI systems:

  1. “Human Involved Automated Processing” means the automated processing of personal data where a human (1) engages in a meaningful consideration of available data used in the Processing or any output of the Processing, and (2) has the authority to change or influence the outcome of the Processing.
  2. “Human Reviewed Automated Processing” means the automated processing of personal data where a human reviews the automated processing, but the level of human engagement does not rise to the level required for Human Involved Automated Processing.  Reviewing the output of the automated processing with no meaningful consideration does not rise to the level of Human Involved Automated Processing.
  3. “Solely Automated Processing” means the automated processing of personal data with no human review, oversight, involvement, or intervention.

Opt-out requests for Solely Automated Processing and Human Reviewed Automated Processing in connection with decisions that produce legal or similarly significant effects must be honored.  However, the controller may choose to deny opt-out requests for Human Involved Automated Processing in connection with decisions that produce legal or similarly significant effects.
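Schematically, the Colorado rule turns entirely on which of the three processing levels applies.  The following Python sketch uses hypothetical names to capture the distinction (an illustration of the rule described above, not regulatory text):

    from enum import Enum, auto

    class ProcessingLevel(Enum):
        SOLELY_AUTOMATED = auto()  # no human review, oversight, involvement, or intervention
        HUMAN_REVIEWED = auto()    # human review without meaningful consideration
        HUMAN_INVOLVED = auto()    # meaningful consideration plus authority over the outcome

    def must_honor_opt_out(level: ProcessingLevel) -> bool:
        # Opt-out requests must be honored for Solely Automated and Human
        # Reviewed Automated Processing; for Human Involved Automated
        # Processing, the controller may choose to deny the request (subject
        # to the notice requirement described below).
        return level in (ProcessingLevel.SOLELY_AUTOMATED,
                         ProcessingLevel.HUMAN_REVIEWED)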

If the opt-out request is denied, the controller must notify the Colorado consumer and must either include the Profiling Disclosures or provide the consumer with a link to the Profiling Disclosures in its privacy policy.  

Controllers must clearly and conspicuously provide a method to exercise the right to opt-out at or before the time the profiling occurs.  

A Colorado consumer’s right of access to personal data includes final profiling decisions, inferences, derivative data, marketing profiles, and other data linked or reasonably linkable to an identified or identifiable individual.

California

The California Privacy Rights Act (CPRA) defines profiling as any “automated processing of personal information” to evaluate characteristics of a person, in particular “to analyze or predict” a person’s “performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.”  While the CPRA calls out the evaluation of certain specific characteristics as profiling, the list is non-exhaustive and the evaluation of other personal characteristics may also qualify as profiling. 

The CPRA directs the adoption of regulations governing access and opt-out rights with respect to the use of automated decision-making technology, including profiling.  Additionally, the regulations are to address responses to access requests, including providing meaningful information about the logic involved in the decision-making process, as well as a description of the likely outcome of the process with respect to the California resident.  These required regulations were not included in the initially adopted CPRA regulations.

Florida

The basic requirements in Florida are similar to the Majority Profiling Model. Florida defines profiling as “solely automated processing” of personal data “to evaluate, analyze or predict” characteristics of a person’s “economic situation, health, personal preferences, interests, reliability, behavior, location or movements.”

Residents of Florida have an opt-out right in connection with profiling that produces a legal or other similarly significant effect. Decisions that produce a legal or similarly significant effect are decisions that result in the provision or denial of financial or lending services, housing, insurance, health care services, education enrollment, employment opportunities, criminal justice, or access to basic necessities, such as food and water. 

A controller must perform a data protection assessment if profiling presents a reasonably foreseeable risk of (i) unfair or deceptive treatment of, or unlawful disparate impact on, consumers, (ii) financial, physical or reputational injury to consumers, (iii) a physical or other intrusion on the solitude or seclusion, or the private affairs or concerns, of consumers, if the intrusion would be offensive to a reasonable person, or (iv) other substantial injury to consumers.

However, Florida adds specific requirements for certain categories of online platforms in connection with anyone under the age of 18.  “Online platform” specifically means social media companies and online games.  These requirements are in addition to the requirements above, which apply to both children and adults.

Online platforms may not profile a child unless both of the following criteria are met: (1) the online platform can demonstrate it has appropriate safeguards in place to protect children; AND (2) either (a) the profiling is necessary to provide the online service, product, or feature requested with which the child is actively and knowingly engaged; OR (b) the online platform can demonstrate a compelling reason that profiling does not pose a substantial harm or privacy risk to children.
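The test is conjunctive, with the second prong satisfiable in either of two ways.  A minimal Python sketch of that structure (parameter names are illustrative assumptions, not statutory language):

    def may_profile_child(
        has_appropriate_safeguards: bool,                 # prong (1), always required
        profiling_necessary_for_requested_service: bool,  # prong (2)(a)
        compelling_reason_no_substantial_harm: bool,      # prong (2)(b)
    ) -> bool:
        # Prong (1) must be met, together with either (2)(a) or (2)(b).
        return has_appropriate_safeguards and (
            profiling_necessary_for_requested_service
            or compelling_reason_no_substantial_harm
        )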

This section has a separate definition of profiling that is largely identical to the main definition but applies only to children.  “Profiling” for purposes of this section means any form of automated processing performed on personal information to evaluate, analyze, or predict personal aspects relating to the economic situation, health, personal preferences, interests, reliability, behavior, location, or movements of a child.  This definition does not require “solely” automated processing and, like Virginia’s, may include any use of AI for profiling, even where humans are involved (equivalent to “Human Involved Automated Processing” and “Human Reviewed Automated Processing” under the Colorado regulations).

Oregon

Oregon’s definition of profiling, opt-out requirement, and data protection assessment requirement are similar to those in the Majority Profiling Model.  However, Oregon has a few additional requirements.

First, the controller must include in its privacy policy a clear and conspicuous description of profiling in furtherance of decisions that produce legal effects or effects of similar significance.

Furthermore, controllers are prohibited from using personal data for profiling in furtherance of decisions that produce legal effects or effects of similar significance without an individual’s consent if the controller has actual knowledge that, or willfully disregards whether, the individual is 13 to 15 years old.

There is a growing demand for additional, more specific AI regulations.  But it is worth remembering that most of the state comprehensive privacy laws already contain some AI requirements.
