The privacy advocacy group noyb, founded by activist Max Schrems, has raised concerns over how ChatGPT handles questions about real individuals. The group alleges that ChatGPT generates inaccurate personal data, such as incorrect birth dates, which it claims violates the European Union’s General Data Protection Regulation (GDPR). The GDPR requires that personal data be processed lawfully, fairly, and transparently, and that it be accurate and, where necessary, kept up to date. The complaint argues that ChatGPT’s outputs, which noyb refers to as “hallucinations,” fail to meet these standards because the model repeatedly supplied incorrect birth dates for Schrems himself. The group has called for a thorough investigation into OpenAI, the organization behind ChatGPT, to scrutinize the accuracy of the personal data processed by its AI models.
This is not the first time ChatGPT has faced regulatory scrutiny under the GDPR. In 2023, Italy’s data protection authority, the Garante, temporarily banned the service over concerns about its handling of personal data, particularly the protection of users’ privacy in interactions with an AI system that processes vast amounts of information.
The situation underscores the need for ongoing dialogue between technology developers, privacy advocates, and regulators to establish clear guidelines for AI systems. Crucially, how should an AI model handle the personal data embedded in it? Is fabricating an answer any better than disclosing real information? One thing is clear: AI developers must get a solid handle on the outputs their models produce.