Dutch Data Protection Authority Warns that Using AI Chatbots Can Lead to Personal Data Breaches

Alston & Bird

On August 6th, the Dutch Data Protection Authority (DPA) issued guidance cautioning companies about the potential data protection risks associated with the use of Artificial Intelligence (AI)-powered chatbots.

In its guidance, the DPA reports that it has recently received several notifications of personal data breaches caused by employees sharing personal data with AI chatbots. In one of the breaches notified to the DPA, an employee of a medical practice had entered patient medical data into an AI chatbot, contrary to the employer's instructions. The DPA also received a notification from a telecom company whose employee had entered a data file containing customer addresses into an AI chatbot. The DPA's guidance recognizes that many people in the workplace use digital assistants, such as ChatGPT and Copilot, for purposes such as answering customer questions or summarizing large files. That can help employees save time and improve the quality and productivity of their work, but it also creates significant risks from a GDPR perspective.

When employees enter personal data into AI chatbots, the companies providing those chatbots can gain unauthorized access to that personal data, which may constitute a personal data breach under the EU General Data Protection Regulation (GDPR).[1] The GDPR defines a personal data breach as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to, personal data transmitted, stored or otherwise processed”. The risk is even greater in the context of medical data, which is considered sensitive data and is therefore subject to heightened legal protection. The DPA considers that sharing sensitive data with a tech company without proper protection is a major violation of the affected individuals’ privacy.

The DPA’s concerns relate mostly to the fact that the companies behind chatbots often store all data entered. As a result, personal data may end up on the servers of technology companies, often without the person who entered the data being aware of it or knowing exactly what the company will do with it. In many cases, the individuals to whom the personal data relates will not be aware either.

The DPA’s guidance stresses the importance of clear-cut agreements between employers and employees about the use of AI chatbots in the workplace. There should be no ambiguity as to whether employees are allowed to use AI chatbots for job-related purposes. If an employee uses a chatbot to process personal data in violation of company policies, that alone could be sufficient to constitute a personal data breach. And if employers do allow such use, it should be clear to employees what data they can and cannot enter. In their contractual arrangements with AI chatbot providers, companies should also include provisions restricting the provider from storing and using the entered data.

[1] The GDPR requires controllers to notify a personal data breach to the relevant EU Member State supervisory authority (or authorities) and, in certain cases, to communicate the breach to the individuals whose personal data has been affected.
