Zooming in on AI – #4: What is the interplay between “Deployers” and “Providers” in the EU AI Act?

A&O Shearman

ON THIS PAGE

  • Some background
  • Different qualifications bringing different obligations
  • Deployers’ obligations for High-Risk AI Systems
  • Providers’ obligations for High-Risk AI Systems
  • The thin barrier between deployers and providers
  • Importance of defining objectives and understanding obligations
  • Conclusion

 

One of the key aspects of the EU AI Act (“AI Act”)[1] is the qualification of providers and deployers and the nuances that distinguish these two categories of stakeholders. What does this mean in practice for companies currently coming to grips with the new requirements?

Some background

As previously introduced in the first and second posts of our “Zooming in on AI” series, the AI Act is the result of years of negotiation within the EU, reflecting the growing recognition of AI’s potential associated risks. It aims to promote the uptake of human-centric and trustworthy AI while simultaneously protecting health, safety, and fundamental rights.[2]

As part of this fourth publication in the series, we will be taking a look at the two new categories of players within the AI ecosystem and the key elements that set them apart:

  • Deployers, who operate an AI system under their own authority in a professional capacity; and
  • Providers, who develop AI systems with a view to placing them on the market or putting them into service under their own name or trademark.

Although each role comes with its own set of specific obligations, in practice a deployer may easily risk being requalified as a provider in certain scenarios, which will then trigger additional obligations.

It is thus crucial to establish the purpose and objectives of any AI-related project from the outset, so as to clearly define the roles of the stakeholders upstream and accurately identify the obligations that will likely apply throughout the AI value chain.

Different qualifications bringing different obligations

Each actor becomes subject to a series of specific obligations, particularly when dealing with high-risk AI systems. On this point, it is important to note the implementation timeline for this particular classification of AI systems, as detailed in our first article (Zooming in on AI: When will the AI Act apply?). The provisions relating to the obligations of high-risk AI system providers will apply from 2 August 2026 (except for high-risk AI systems placed on the market or put into service before 2 August 2026).

Within the meaning of the AI Act, high-risk AI systems are those that pose a significant risk to health, safety, or fundamental rights. These would typically include AI systems used for: (a) credit scoring; (b) the selection, monitoring, and evaluation of employees; or (c) the profiling of individuals.

Deployers’ obligations for High-Risk AI Systems

The AI Act defines a ‘deployer’ as follows:

A ‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.[3]

For instance, a company using a third-party AI system for customer service, document generation and management, fraud prevention, or employee monitoring would generally be regarded as a deployer.

Deployers may become subject to the AI Act if they are established or located in the EU or, to the extent they are established or located in a third country, if the output of the AI system is used in the EU.

When dealing with high-risk AI systems, deployers face a variety of obligations including, amongst others, the implementation of specific governance, monitoring, transparency, and impact assessment requirements.

Zooming in on two important obligations for deployers of high-risk AI systems:

Operational Obligations: the deployer must: (a) implement appropriate technical and organisational measures to ensure the high-risk AI system is used in accordance with its instructions for use; (b) ensure that input data is relevant and sufficiently representative in view of the intended purpose of the system; and (c) monitor its operation so that it can inform the provider and, where relevant, the competent authorities if it identifies any risks or serious incidents.

Control and Risk-Management Obligations: the deployer must: (a) conduct a fundamental rights impact assessment (FRIA) before deploying the system; (b) assign human oversight to individuals with the necessary competence and training; (c) regularly monitor the AI system for risks; and (d) keep the logs automatically generated by the AI system in a documented manner.

Providers’ obligations for High-Risk AI Systems

The AI Act defines ‘providers’ as follows:

A ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.[4]

Providers play a pivotal role within the AI ecosystem and become subject to the AI Act to the extent they either place an AI system on the market or put an AI system into service within the EU, regardless of where the provider is established or located.[5] The AI Act appears to expand its reach with the aim of preventing certain companies from circumventing the AI Act by moving their AI operations abroad. For example, a US company providing an AI system in the EU (e.g. in France) would have to meet the requirements of the AI Act even if its system is made available through a local distributor, notwithstanding the distributor’s own obligations, which include, amongst others, verifying the provider’s documentation and instructions for use for the AI system.

When dealing with high-risk AI systems, providers face a variety of obligations, which are generally more stringent than those faced by deployers. Among these, providers are required to implement quality management systems, provide the required documentation, draw up an EU declaration of conformity, and affix the CE marking to indicate that the system conforms with the requirements of the AI Act.

Zooming in on two important obligations for providers of high-risk AI systems:

Registration Obligations: providers must register themselves and their system in the EU database before placing the high-risk AI system on the market or putting it into service.

Serious Incident Reporting Obligations: providers must report any serious incident to the market surveillance authorities of the Member State where the incident occurred, immediately after having established a causal link between the high-risk AI system and the incident (or the reasonable likelihood of such a link). Following the incident, the provider must perform a risk assessment and adopt corrective measures.

The thin barrier between deployers and providers

The AI Act places the primary responsibility on the initial provider. However, the distinction between providers and deployers is far from clear-cut: the AI Act expressly contemplates scenarios in which a deployer becomes a provider where certain conditions are met.

At a high level, these conditions are met when a deployer[6]:

  • Puts its name or trademark on the high-risk AI system.

The first situation in which a deployer may become a provider is where the deployer places its own brand on a high-risk AI system that has already been placed on the market. For example, where a high-risk AI system has been developed by a company as a white-label AI solution, one may wonder whether the original creator of the system would qualify as a provider under the AI Act definition at all, since it has not put the system on the market under its own name. The customer, however, would qualify as a provider once it markets the system under its own brand.

This is of particular relevance to deployers, as rebranding a system may trigger a much wider compliance burden extending to the whole spectrum of obligations applicable to providers. Deployers should therefore carefully consider the consequences, including the corresponding regulatory burden, of using a particular high-risk AI system under their own brand (as opposed to using such a system under the brand of a third party).

  • Makes a substantial modification to a high-risk AI system.

The second situation in which a deployer may become a provider is where the deployer substantially modifies a high-risk AI system. The main scenario in which an AI system is “substantially modified” is where the intended purpose of the system is changed. However, it must be underlined that the AI Act’s definition of “substantial modification” remains unclear as to the point at which a modification becomes “substantial.”

  • Modifies the intended purpose of an AI system.

The third situation in which a deployer may end up being qualified as a provider is where the deployer repurposes a general-purpose AI (GPAI) system, which can be used for various applications, into a system specifically used for high-risk purposes, such as educational or HR use.

Based on the above, where a deployer becomes a provider, all relevant obligations under the AI Act in relation to the rebranded or modified AI system shift to the deployer as the newly minted provider.

Importance of defining objectives and understanding obligations

Given the possibility for actors to shift from one qualification to another, it is crucial to clearly define objectives at the outset of an AI project. Understanding who will act as a provider or a deployer is essential to ensuring compliance with the relevant obligations and to managing the associated risks.

Before engaging in the development, deployment, or use of an AI system, it is important to define at least the following:

  • Purpose and scope of the AI system: determine what the AI system is intended to do, who will use it, and under what conditions.
  • Market intentions: determine whether the system will be commercialised or marketed under a specific brand.
  • Modifications and customisations: determine whether the system will require significant modifications and, if so, who will be responsible for those changes.

By clearly defining these objectives, each party’s role, and its corresponding obligations as either a deployer or a provider, can be better understood.

Conclusion

The distinction between deployers and providers in a highly complex and constantly evolving AI ecosystem will need to be assessed at a level of detail that goes beyond simply labelling the roles as such. The fine line between the two roles will likely carry significant implications for the assignment of specific responsibilities and obligations throughout the AI value chain.

Whether acting as a deployer or a provider, the key to achieving compliance with the applicable requirements and obligations under the AI Act lies in proactive data governance, coupled with rigorous documentation and a thorough understanding of the legal landscape in which each company operates.

Footnotes

[1] Regulation (EU) 2024/1689. This regulation entered into force on 1 August 2024.

[2] See Article 1(1), AI Act.

[3] See Article 3(4), AI Act.

[4] See Article 3(3), AI Act.

[5] See Article 2(1)(a), AI Act.

[6] See Article 25(1), AI Act.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© A&O Shearman
