EU Commission Publishes Guidelines on the Prohibited AI Practices under the AI Act

Orrick, Herrington & Sutcliffe LLP
While authorities’ recent publications on AI have tended to focus on data protection (such as those of the EDPB, which we covered here and here), the European Commission has now published its first set of [draft] guidelines on the practical application of the AI Act (Regulation (EU) 2024/1689).

Background

On 2 February 2025, the general provisions (Articles 1 to 4 of the AI Act) and the rules on prohibited artificial intelligence (“AI”) practices (Article 5 of the AI Act) became applicable.

  • Article 5 of the AI Act prohibits practices such as manipulative or deceptive AI techniques, untargeted facial data scraping, exploitative systems targeting vulnerable groups, and certain forms of biometric categorization and emotion recognition in sensitive contexts.
  • Among the general provisions, the AI literacy requirement (Article 4 of the AI Act) also became applicable. Article 4 requires companies to ensure a sufficient level of AI literacy among their workforce.

Please find further guidance on the AI literacy requirement, along with five tips for companies to consider, here.

Pursuant to Article 96 of the AI Act, the European Commission is required to issue guidelines on the Act’s practical implementation. The Commission began fulfilling this obligation by publishing its first two guidelines:

  • Guidelines on prohibited AI practices (4 February 2025);
  • Guidelines on the definition of an AI system under the AI Act (6 February 2025).

Although they are non-binding, these guidelines provide legal explanations and practical examples to help stakeholders understand and comply with the AI Act. They help foster a consistent, effective and uniform application of the AI Act across the EU.

The Commission has approved these guidelines but has not yet formally adopted them.

This article addresses the guidelines on prohibited AI practices. Please find further guidance on the other guidelines addressing the definition of AI systems here.

Guidelines on Prohibited AI Practices

The European Commission has published a comprehensive 135-page document outlining AI practices that are considered unacceptable due to their risks to fundamental rights and values. Its goal is to provide companies with insights into how the Commission interprets and defines prohibited AI practices. It focuses on the following areas:

Which AI practices are prohibited?

Simply put, only the practices explicitly listed in Article 5(1) of the AI Act are prohibited. The Commission methodically examines each prohibition specified in the AI Act, clarifying its scope and application and considering the rationale behind each prohibition. The prohibited practices include:

1. Harmful manipulation and deception, Article 5(1)(a) of the AI Act

AI systems that use subliminal, manipulative or deceptive techniques with the objective or effect of distorting behavior, causing or likely to cause significant harm.

2. Harmful exploitation of vulnerabilities, Article 5(1)(b) of the AI Act

AI systems that take advantage of vulnerabilities related to age, disability or specific social or economic situations, with the objective or effect of distorting behavior, causing or likely to cause significant harm.

3. Social scoring, Article 5(1)(c) of the AI Act

AI systems that evaluate or classify individuals or groups based on social behavior or personal traits, with the resulting social score leading to detrimental or unfavorable treatment when the data comes from unrelated social contexts or when such treatment is unjustified or disproportionate.

4. Individual criminal offence risk assessment and prediction, Article 5(1)(d) of the AI Act

AI systems that predict or assess the risk of individuals committing a criminal offense based solely on profiling or personality traits, unless used to support a human assessment based on objective and verifiable facts directly related to criminal activity.

5. Untargeted scraping to develop facial recognition databases, Article 5(1)(e) of the AI Act

AI systems that build or enlarge facial recognition databases by using untargeted scraping of facial images from the internet or CCTV footage.

6. Emotion recognition, Article 5(1)(f) of the AI Act

AI systems that infer emotions at the workplace or in educational institutions, except for medical or safety reasons.

7. Biometric categorization, Article 5(1)(g) of the AI Act

AI systems that categorize individuals based on biometric data to deduce or infer attributes like race, political opinions or sexual orientation, except when used for labeling or filtering lawfully acquired biometric datasets, such as in law enforcement.

8. Remote biometric identification, Article 5(1)(h) of the AI Act

AI systems for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, except when necessary for targeted searches of specific victims, preventing specific threats such as terrorist attacks, or searching for suspects of specific offenses, subject to the further procedural requirements outlined in Article 5(2) to (7) of the AI Act.

Exclusions from the Scope of the AI Act

Article 2 of the AI Act provides for a number of general exclusions from its scope. For example, according to Article 2(12) of the AI Act, the Act generally does not apply to AI systems released under free and open-source licenses. However, this exemption does not apply if such AI systems are placed on the market or put into service as high-risk AI systems, fall under Article 5 of the AI Act, or fall under Article 50 of the AI Act (transparency obligations).

Therefore, providers cannot rely on this exclusion if their AI system constitutes a prohibited practice under Article 5 of the AI Act.

Material scope: practices involving the "placing on the market," "putting into service," or "use" of an AI system

1. “Placing on the market” (Article 3(9) of the AI Act) refers to the first making available of an AI system on the EU market. “Making available” means supplying the system for distribution or use in the EU in the course of a commercial activity, whether in return for payment or free of charge. This applies regardless of the means of supply, including access via an API, cloud services, direct download, physical copies or embedding in physical products.

Example: A real-time remote biometric identification (“RBI”) system developed outside the EU by a third-country provider is placed on the EU market when it is offered, for payment or free of charge, in one or more Member States. This can occur by providing online access to the system via an API or other user interfaces.

2. “Putting into service” (Article 3(11) of the AI Act) means supplying an AI system for first use, either directly to the deployer or for in-house use within the EU, for its intended purpose. This covers both external deployment and internal development followed by in-house deployment. The intended purpose is the use specified by the provider, including the context and conditions of use outlined in the instructions for use, promotional materials and technical documentation.

Example: A provider develops an RBI system outside the EU and supplies it to a law enforcement authority or private company in a Member State for the first time, thus putting it into service.

3. The AI Act does not explicitly define "use" of an AI system, but it should be broadly interpreted to include any deployment or integration of the system at any point in its lifecycle after being placed on the market or put into service. This includes its incorporation into services, processes or more complex systems. Providers must anticipate both the intended use and reasonably foreseeable misuse before marketing their AI systems. However, deployers are responsible for ensuring lawful use. Under Article 5 of the AI Act, "use" also encompasses any misuse, whether foreseeable or not, that could constitute a prohibited practice.

Example: An AI system used by an employer to infer emotions in the workplace is prohibited except for medical or safety purposes. This prohibition applies to deployers regardless of whether the provider has excluded such use in their contractual terms with the employer.

Personal scope: responsible actors

The AI Act identifies various categories of operators related to AI systems, including providers, deployers, importers, distributors and product manufacturers. However, the guidelines concentrate on providers and deployers, as these are the actors primarily addressed by the prohibitions outlined in Article 5 of the AI Act.

Providers (Article 3(3) of the AI Act) are entities, such as individuals, companies, public authorities or agencies (“entities”), that develop AI systems or have them developed and then place them on the Union market or put them into service in the EU under their own name or trademark. Providers outside the EU are also subject to the AI Act if they place those systems on the market or put them into service in the EU, or if the AI system’s output is used within the EU.

Example: A provider of an RBI system could be a manufacturer that markets the system in the EU under its trademark.

Providers must ensure their AI systems comply with all relevant requirements before placing them on the market or putting them into service.

  • Deployers (Article 3(4) of the AI Act) are entities that use AI systems under their authority, excluding personal, non-professional activities. “Authority” implies responsibility for how the system is deployed and used. Deployers are subject to the AI Act if they are established in the EU, or if they are located outside the EU and the AI system’s output is used within the EU. When a legal entity, such as a law enforcement agency or a private security firm, uses an AI system, individual employees following its procedures are not considered deployers. The legal entity remains the deployer even if third parties, such as contractors, operate the system on its behalf and under its responsibility and control.
  • Operators can hold multiple roles in relation to an AI system. For instance, if an operator develops and uses its own AI system, it is both the provider and deployer, even if other deployers use the system.

What's next?

Companies should thoroughly assess, on a case-by-case basis, whether a specific AI application falls under one of the prohibitions in Article 5 of the AI Act. Providers and deployers have different responsibilities based on their roles and their control over the system’s design, development and use. These responsibilities should be interpreted proportionately for each of the prohibitions, considering who in the value chain is best positioned to implement preventive and mitigating measures to ensure that the AI system’s development and use are compliant.

The Commission's interpretative aids, examples and interpretations assist evaluation. However, as previously mentioned, the guidelines are non-binding. Since authoritative interpretations of the AI Act's provisions are reserved for the Court of Justice of the European Union (CJEU), all attention is eagerly focused on Luxembourg.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Orrick, Herrington & Sutcliffe LLP
