Addressing Generative AI’s Privacy Challenge

Goodwin
Use existing approaches to get started, but prepare for coming regulations and complexities in areas such as data on minors, biometrics, and facial recognition

Generative AI models access vast pools of information from a wide variety of sources, including information users add in prompts. This can include private or sensitive information, which may be used without consent and may be incorporated in a tool’s outputs.

As a result, companies that develop generative AI technologies, deploy them in products and services, or use generative AI tools face a number of challenges.

Companies may have to conduct data impact assessments to identify privacy risks and develop ways to mitigate them. Because data is often collected from around the world, they may have to consider laws from multiple jurisdictions. To ensure transparency, a key principle in many jurisdictions, companies should provide detailed information about their use of generative AI, and they may need to establish comprehensive privacy policies to do this effectively.

Companies subject to the GDPR should carefully consider the appropriate legal basis for processing data used by generative AI tools. To rely on legitimate interest as that basis, they will have to balance the interests of the data controller against the rights and freedoms of data subjects. Certain types of data, such as special category data, may require extra safeguards. The GDPR stipulates that users must be able to understand the logic of an AI tool’s decisions and have the right to challenge them — and data subjects must be able to access, correct, or delete their data from a tool’s training set or algorithm.

Businesses should also prepare for new laws that will affect generative AI. The EU is close to passing its AI Act, which will apply to both EU and non-EU companies. Although the Act provides a two-year grace period before its requirements take effect, companies should begin preparing well before that period ends.

To get started, privacy professionals and lawyers should use existing tools in their privacy toolbox. Ask the following questions about the information used by a generative AI tool:

  • Can it be sorted? Does the model use personal data for training? If so, can the data be identified? Can users opt out of having their data used for training?
  • Is it accurate? Is it possible to verify that data inputs and outputs are accurate? Can inaccuracies be corrected?
  • Is it biased or discriminatory? Are there biases in the data used to train the tool? Do outputs discriminate against certain groups?

New approaches may be required to address more complex issues, such as how generative AI tools use data about minors and whether tools collect and use biometric or facial recognition data. These kinds of cases are often subject to stricter regulations.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Goodwin | Attorney Advertising
