Artificial Intelligence Demands Attention to Privacy Compliance

Esquire Deposition Solutions, LLC
Long maligned as innovation-shy, litigation practices are deploying emerging artificial intelligence technologies at a rapid rate. Technology’s ability to instantly synthesize and draw useful conclusions from large amounts of data is a law practice power tool too good to leave on the shelf.

According to the 2024 Legal Technology Survey Report, released March 25 by the American Bar Association, 13% of lawyers surveyed believe artificial intelligence is already mainstream in the legal profession. Forty-five percent believe AI will be mainstream within three years. Thirty percent of lawyers are already using some form of artificial intelligence in their practices, the survey indicated.

Last year’s Thomson Reuters Institute Future of Professionals Report identified three leading areas where lawyers believe artificial intelligence will make the most significant impact on their practices: handling large volumes of data more effectively, reducing human errors, and providing advanced analytics for better decision-making. Some lawyers also believe that artificial intelligence will have a significant role in law practice marketing and identifying fruitful areas of litigation. These areas particularly impact lawyers with litigation practices.

There is, unfortunately, a drawback to some of AI’s most exciting legal applications: they’re built on massive amounts of data – some of it public, some scraped from the Internet, and some collected under legally suspect circumstances – that may contain privacy landmines for the unwary. Artificial intelligence technologies thrive on large amounts of data, and some of them are not particularly choosy where they get it.

Clearview Settles Suit Over Web-Scraped Images

Consider the case of Clearview AI, which recently settled class action litigation in a multidistrict case arising from the company’s collection, storage, and use of biometric face images scraped from the Internet. Clearview AI claims to have assembled a database of over 50 billion facial images – roughly six images for every person on earth – all from publicly available sources. The database is marketed to law enforcement agencies and businesses as a way to quickly identify criminal suspects.

According to the complaint, Clearview AI did not seek permission from depicted individuals before it collected their images. The linchpin for the settlement was Illinois’ landmark Biometric Information Privacy Act (“BIPA”), which prohibits the collection or capture of an individual’s “biometric identifier or biometric information” without prior, informed consent. Clearview AI’s data collection activities were arguably unlawful in Illinois – but arguably not in other states.

As the trial court noted while approving the settlement agreement, plaintiff subclasses from California, Virginia, and New York presented relatively weak “square-peg-round-hole” claims because their states lacked explicit biometric information privacy laws.

Elsewhere in the United States, only Colorado, Texas, and Washington have biometric privacy protection laws. Illinois is alone in providing a private right of action.

Even Weak Privacy Laws Merit Attention

The uncertain state of individual privacy rights in the United States casts a shadow over artificial intelligence tools that draw upon public records, social media posts and profiles, workplace communications, and information collected during healthcare transactions, banking activities, and consumer interactions with businesses. According to Bloomberg Law’s 2025 State of Practice survey, privacy and data security concerns were the leading focus of law firms with AI practice groups.

Due to the lack of a comprehensive federal privacy law, individual privacy rights in the United States are supplied by a patchwork of state laws, most of which operate through a familiar “notice and consent” mechanism. Entities that collect personal information are obliged to give “notice” to individuals of their data collection and use practices prior to data collection. So informed, the individual then decides whether or not to “consent” to the disclosed data collection and use practices.

Few laws restrict what can be done with personal data once it has been lawfully collected. This state of affairs encourages innovation, but it also places enormous responsibility on individuals, who must make hasty decisions after reading lengthy privacy policies (assuming they read them at all), at a time when their primary focus is purchasing a product or service, or registering to vote.

In the age of artificial intelligence, not even the most discerning individuals can anticipate the consequences of sharing personal data with a company or government entity. In 2012, it was reported that the retailer Target was able to correctly infer – using a computer algorithm – that one of its customers was pregnant, based on public records and individually harmless signals such as purchases of unscented lotions, mineral supplements, and cotton balls. According to news stories at the time, Target knew the teenage customer was pregnant before her parents did.

The Target story illustrates how difficult it can be for individuals to intelligently exercise “notice and consent” privacy rights when they are unable to appreciate technology’s ability to draw highly revealing conclusions by synthesizing seemingly benign data points.

In the area of litigation, new AI-powered tools have the ability to summarize testimony, create reports, and scour the Internet for information that might provide an edge in jury selection, case strategy, or predicting the way a particular judge or jury will decide a case. One such product, Reveal by Vigilent Inc., claims the ability to create reports “highlighting potential hidden biases and risks” by analyzing a juror’s public records and social media footprints. It seems safe to say that few potential jurors anticipate that their personal information – given to the government as the price of registering to vote, or given to a business in exchange for a free online service – could be used to exclude them from jury service.

AI Dangers Coming Into Focus

Perhaps in recognition of the weak privacy foundation on which artificial intelligence technologies rest, policymakers are turning their attention to regulating AI’s adverse effects rather than its underlying data sources. Colorado’s first-in-the-nation AI legislation creates a regulatory regime that seeks to protect Colorado residents from “algorithmic discrimination” by “high-risk artificial intelligence systems.” The Colorado law creates compliance responsibilities for both developers and deployers of AI technologies.

Importantly, the Colorado law has implications for law firms. Because artificial intelligence technology used in the “provision or denial of … legal services” is enumerated as a high-risk AI system, law firms that use such tools will be “deployers” of high-risk AI systems.

Finally, both within and outside of Colorado, it’s important to note that the legal profession has its own “privacy” law – the ethical obligation not to reveal client confidential information or any information relating to the representation of a client. For this reason, it is critical that law firms deploying AI technologies thoroughly investigate the data sources that AI tools use to do their work. Bar association leaders have uniformly recognized the ethical danger of compromising client confidentiality when using artificial intelligence technologies for client matters. The New York State Bar Association’s recent AI ethical guidance is a good example of this work.
