AI Trends for 2025 - Dirty Data Damaging Deals – Data Issues in AI M&A

MoFo Tech

2024 saw strong interest in M&A involving companies that use or develop artificial intelligence (“AI”) offerings. The rise of AI has brought new issues for companies and dealmakers.[1] In particular, 2024 saw regulators focusing further on the collection and use of data in AI products, applying existing rules and developing new approaches.

For example, in October, the Federal Trade Commission (“FTC”) announced actions against five companies for allegedly deceptive or unfair practices enabled by AI.[2] This followed the FTC’s complaint in January alleging that Rite Aid Corporation used facial recognition technology “to identify patrons that it had previously deemed likely to engage in shoplifting or other criminal behavior” without appropriate safeguards, including sufficient bias testing. The FTC ordered Rite Aid to, among other things, delete or destroy all photos and videos of consumers collected by the system as well as any data, models, or algorithms derived in whole or in part from them (so-called “algorithmic disgorgement”).[3]

Given the regulatory focus, buyers have increased their scrutiny of data used to train and develop AI products, including the potential for claims relating to:

  • Breach of Contract
    • If customer data was used to train the AI model, did the customer expressly consent to such use (e.g., in the end-user license agreement)?
    • If the data was obtained from a third party, does the license or data aggregation agreement permit the data to be used to train AI models?
    • If the AI offering is dependent on the use of a large language model (“LLM”), is the use of the offering permitted by the contract with the LLM provider?
  • IP Infringement
    • Is the AI model trained on, and does it regurgitate, unlicensed third-party copyrighted works?
    • If the data was acquired from a third party, does the licensor have the right to make the data available for the applicable use, and what warranties and indemnification has the licensor provided with respect to such data?
  • Privacy and Data Protection
    • Does the data include personal data and, if so, were the data subjects provided with any required notice and was any required consent obtained from them?
    • What privacy laws apply to the data (e.g., the EU’s General Data Protection Regulation (“GDPR”) or the California Consumer Privacy Act), and is that something that can be ascertained with confidence (which may not be possible if the data was scraped from public sources)?[4]
    • Is the AI model developed in a way that enables the handling of individuals’ rights requests? For example, can the AI model process correction or deletion requests if it outputs incorrect information?
  • Other Regulations
    • Is the target compliant with applicable AI regulations, such as the EU’s AI Act,[5] and regulations specific to financial, health, and other sensitive information?
    • Is the target compliant with applicable cross-border data transfer regulations, such as the GDPR, especially where data was scraped in one jurisdiction for processing in another?
    • Is the target at risk for claims of allegedly deceptive or unfair use or offering of AI (including with respect to any advertising for the offering), such as by the FTC for deceptive or unfair acts in violation of the FTC Act?

Data-related risks can lead to:

  • Delays in dealmaking to conduct “deep dive” diligence to identify data from problematic sources (“dirty data”) and to ascertain the ability to remove or segregate that data.
  • A closing condition that dirty data or impacted algorithms be replaced.
  • Indemnities and special escrows for the identified risks, including the estimated cost of retraining a model based on clean data if required by a third-party claim or regulatory enforcement.
  • An upfront purchase price adjustment.
  • Acquiring a target primarily for its AI systems and then being unable to use those systems due to noncompliance with AI, IP, or privacy laws.

Companies that use or develop AI offerings should ensure good data hygiene to minimize these risks, especially if they are considering a potential exit transaction. This blog post is an excerpt from a broader article reviewing the M&A markets in 2024 and the key legal and regulatory issues and trends that will affect deals in 2025.


[1] Visit MoFo’s Artificial Intelligence Resource Center for updates and insights on AI regulations and issues, including links to laws, regulations, and regulators by jurisdiction.

[2] See MoFo’s client alert, “FTC Rolls Out Targeted AI Enforcement,” Oct. 8, 2024.

[3] See MoFo’s client alert, “The FTC Brings Algorithmic Bias into Sharp Focus,” Jan. 8, 2024.

[4] See “Using special categories of data for training LLMs: never allowed?” by Lokke Moerel and Marijn Storm, Morrison Foerster, Aug. 28, 2024.

[5] See MoFo’s client alert, “EU AI Act – Landmark Law on Artificial Intelligence Approved by the European Parliament,” Mar. 14, 2024.

Written by:

MoFo Tech