In December 2023, European Union (EU) lawmakers reached an agreement on the EU AI Act. In our first article, An Introduction to the EU AI Act, we focused on the Act's applicability, thresholds, timing, and penalties. In our second article, we focused on the responsibilities of providers of high-risk AI systems. In this article, we turn to the responsibilities the EU AI Act places on users of AI systems.
Article 29 of the EU AI Act is titled “Obligations of the users of high-risk AI systems” and contains the following three requirements:
- User Oversight – Article 29 requires that users monitor the operation of the high-risk AI system. If they have reason to believe the system presents a risk to the health, safety, or fundamental rights of an individual, they must inform the provider or distributor and suspend use of the system. Users must also inform the provider or distributor of any serious incident or malfunctioning of the system. Per Article 3, a user, provider, and distributor are defined as follows:
- User – any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
- Provider – any natural or legal person, public authority, agency, or other body that develops an AI system or that has an AI system developed to place it on the market or put it into service under its own name or trademark, whether for payment or free of charge; and
- Distributor – any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market without affecting its properties.
- Maintaining Logs – Article 29 requires that users of high-risk AI systems keep the logs automatically generated by the high-risk AI system, to the extent those logs are under their control. As described in our prior article, such logs must contain (a) the start and end date and time of each use, (b) the reference database against which the input data has been checked by the system, (c) the input data for which the search has led to a match, and (d) the identification of the individual(s) involved in verifying the results (see the illustrative sketch following this list).
- Data Protection Impact Assessments (DPIAs) – Article 29 requires that users of high-risk AI systems conduct DPIAs under Article 35 of the General Data Protection Regulation (GDPR).
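For teams translating the logging obligation into practice, the following is a minimal sketch of what a user-controlled audit log record might capture, mirroring the four content elements listed above. The `HighRiskAILogRecord` structure, its field names, and the example values are illustrative assumptions, not language from the Act itself.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class HighRiskAILogRecord:
    """Illustrative audit-log record covering the minimum logging content for
    high-risk AI systems: start/end of each use, the reference database checked,
    the input data that produced a match, and the people who verified the results.
    Field names are assumptions for this sketch."""
    use_started_at: datetime                 # start date and time of the use
    use_ended_at: datetime                   # end date and time of the use
    reference_database: str                  # database the input data was checked against
    matched_input_data: list[str]            # input data for which the search led to a match
    verified_by: list[str] = field(default_factory=list)  # individual(s) who verified the results

    def to_json(self) -> str:
        # Serialize with ISO-8601 timestamps so records can be retained and exported.
        return json.dumps(asdict(self), default=lambda v: v.isoformat(), indent=2)

# Example: record a single use of a (hypothetical) biometric matching system.
record = HighRiskAILogRecord(
    use_started_at=datetime(2024, 3, 1, 9, 15, tzinfo=timezone.utc),
    use_ended_at=datetime(2024, 3, 1, 9, 17, tzinfo=timezone.utc),
    reference_database="employee-badge-photos-v2",
    matched_input_data=["badge-scan-7781"],
    verified_by=["security.officer@example.com"],
)
print(record.to_json())
```

Keeping records in a structured, exportable form like this makes it easier to demonstrate retention of the logs that are under the user's control and to share them with the provider or distributor when an incident must be reported.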
It is worth highlighting that the scope of AI systems and activities covered by the EU AI Act is narrower than that of the GDPR. In our first article in this series, we noted that the EU AI Act is largely focused on requirements for providers of high-risk AI systems that involve:
- Biometric identification and categorization of natural persons;
- Management and operation of critical infrastructure;
- Education and vocational training;
- Employment, workers management, and access to self-employment;
- AI systems intended to be used by public authorities to evaluate the eligibility of natural persons for public assistance;
- Law enforcement;
- Border control management; or
- Administration of justice and democratic processes.
On the other hand, GDPR Article 35 requires organizations to conduct DPIAs on high-risk processing activities. The European Data Protection Board (EDPB) describes these as activities that involve profiling, automated decision-making, processing data on a large scale, matching or combining datasets, innovative use or application of technological solutions, or processing that prevents data subjects from exercising a right. This set of criteria for when to conduct a DPIA implicates almost all uses of AI systems.
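For privacy teams operationalizing that screening step, below is a minimal sketch of a DPIA triage check built on the EDPB criteria listed above. The `ProcessingActivity` structure and the two-criteria threshold are simplifying assumptions for illustration only; whether a DPIA is actually required remains a case-by-case legal assessment.

```python
from dataclasses import dataclass

# EDPB high-risk criteria referenced in the text above.
EDPB_CRITERIA = (
    "profiling",
    "automated_decision_making",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "innovative_technology_use",
    "prevents_exercise_of_rights",
)

@dataclass
class ProcessingActivity:
    """Illustrative description of a business process that uses an AI system."""
    name: str
    criteria_met: set[str]  # subset of EDPB_CRITERIA that applies to the activity

def dpia_likely_required(activity: ProcessingActivity, threshold: int = 2) -> bool:
    """Flag an activity for a DPIA when it meets the assumed threshold of criteria.

    The two-criteria threshold is a triage assumption, not legal advice.
    """
    matched = activity.criteria_met & set(EDPB_CRITERIA)
    return len(matched) >= threshold

# Example: an AI-assisted CV screening tool typically profiles candidates and
# supports (semi-)automated decisions, so it is flagged for a DPIA.
screening_tool = ProcessingActivity(
    name="AI-assisted CV screening",
    criteria_met={"profiling", "automated_decision_making"},
)
print(dpia_likely_required(screening_tool))  # True
```

Because most AI-driven business processes meet at least one or two of these criteria, a triage check of this kind will flag the large majority of them for a DPIA, which is precisely the point made above.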
In summary, AI and data privacy experts should keep in mind that a high-risk AI system as defined by the EU AI Act has a much higher threshold and a narrower definition than a high-risk processing activity as defined by the GDPR. Even if an organization is not a provider of AI systems, it will still need to conduct DPIAs on most, if not all, business processes that utilize AI.