Zooming in on AI – #2: AI systems vs AI models

A&O Shearman


The Artificial Intelligence Act (AI Act) entered into force on 1 August 2024 and is the world's first comprehensive legal framework for AI regulation. As companies start incorporating AI tools into their business, products and services, it is critical for them to understand and identify which categories their AI tool(s) fall into in order to ensure compliance with the AI Act.

Distinct legal requirements will apply to each category of AI system depending on the level of risk involved with the AI tool. Moreover, the timeline for implementation of these obligations varies per AI category, as detailed in our first article Zooming in on AI: When will the AI Act apply?

In this second publication of our “Zooming in on AI” series, we define and explain the differences between four types of AI distinguished under the AI Act: (1) an AI system; (2) a General-Purpose AI (GPAI) system; (3) a GPAI model; and (4) an AI model. In essence, the AI Act regulates AI systems and GPAI models. The AI Act neither generally defines nor specifically regulates AI models other than GPAI models, which means that many AI models (such as machine learning or NLP models) are not specifically regulated by the AI Act.

For companies, it is key to understand which legal obligations and requirements under the AI Act apply to which AI systems (which depends on the risk involved with their use), what rules apply to (GP)AI models, which AI systems and models are not governed by the AI Act, and which AI use cases are prohibited by the AI Act.

Definitions: AI systems versus (GPAI) models

AI system

The AI Act defines an AI system as: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

An AI system must meet all of the following criteria:

  1. It must be a machine-based system;
  2. It must be designed to operate with varying levels of autonomy;
  3. It may exhibit adaptiveness after deployment (adaptiveness is possible but not required);
  4. It must, for explicit or implicit objectives, infer from the input it receives how to generate outputs such as predictions, content, recommendations or decisions; and
  5. Its outputs must be able to influence physical or virtual environments.

GPAI system

An AI system designed to perform confined and specific (narrow) tasks is commonly labelled artificial narrow intelligence. Over time, AI systems have evolved and can now handle many different tasks. Such systems are commonly labelled general artificial intelligence. The terms artificial narrow intelligence and general artificial intelligence are not used in the AI Act; what is commonly called general artificial intelligence is referred to in the AI Act as a GPAI system.

The AI Act defines a GPAI system as an “AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.” An example of a GPAI system is ChatGPT.

GPAI model

The AI Act defines a GPAI model as an “AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.”

A GPAI model must meet all the following criteria:

  1. it must be an AI model, including where it is trained with a large amount of data using self-supervision at scale;
  2. it must display significant generality;
  3. it must be capable of competently performing a wide range of distinct tasks;
  4. it must be able to be integrated into a variety of downstream systems or applications; and
  5. it must not be an AI model that is used for research, development or prototyping activities before it is placed on the market.

The two most important characteristics of a GPAI model are its generality and its ability to competently perform a wide range of different tasks. According to Recital 98 of the AI Act, a model with at least a billion parameters (parameters being the variables that the model ‘learns’ during training) and trained with a large amount of data using self-supervision at scale (a machine learning process in which the model generates its own training signals from unlabelled data) should be considered to display significant generality and to competently perform a wide range of distinctive tasks. While these crucial criteria lack further clarification, Recital 99 of the AI Act notes that large generative AI models are a typical example of GPAI models due to their ability to flexibly generate content in the form of text, audio, images or video, which can readily accommodate a wide range of distinctive tasks.

Typical examples of general-purpose AI models include: (i) image generators such as DALL-E and Stable Diffusion; (ii) large language models such as GPT-4, Claude, Google’s Bard and LLaMA; and (iii) voice and audio generators such as ElevenLabs and OpenAI’s Jukebox.

AI model

Interestingly, the AI Act does not provide a generic definition of an AI model. This may be surprising, as the more specialized variant, the GPAI model, does have its own definition.

However, a rough definition of an AI model can be derived from the distinction between GPAI systems and GPAI models as set out in Recital 97 of the AI Act. An AI model is an essential part of an AI system but does not constitute an AI system itself. As an example: large language models are (GP)AI models that, combined with a user interface, form the basis of some of the most popular chatbots. The chatbot is then the AI system and may, depending on the use case, fall under the AI Act as a high-risk AI system and/or as a limited-risk AI system subject to transparency requirements.

The fact that the AI Act neither generally defines nor specifically regulates AI models means that many AI models in use (such as machine learning or NLP models) are not regulated by the AI Act. Only the AI system that encompasses or is ‘built’ upon an AI model is regulated, not the AI model itself. However, as mentioned, this is different for GPAI models (such as large language models), which are specifically defined and regulated separately in the AI Act.

The consequence of the distinction between an AI system and model

The AI Act takes a risk-based approach, and different requirements apply to different categories of AI tools. In essence, the AI Act:

  1. sets out which specific rules apply to an AI system depending on the risk category into which the AI system falls (prohibited AI practices, high-risk AI systems, and limited-risk AI systems subject to transparency requirements);
  2. sets out which specific rules apply to a GPAI model, with additional rules applying when the GPAI model used poses a “systemic risk”;
  3. does not include specific rules applicable to GPAI systems; however, GPAI systems used as AI systems by themselves or as a component of another AI system are subject to the specific rules mentioned under point (1); and
  4. does not set out specific rules for AI models (such as machine learning or NLP models) other than GPAI models.

The requirements applicable to the various AI categories mentioned under points (1) and (2) above will apply as of different dates. Prohibited AI practices (set out in Article 5 of the AI Act), i.e. uses of AI that are considered to pose unacceptable risks to health and safety or fundamental rights, will be prohibited from 2 February 2025. The rules concerning GPAI models will begin to apply as of 2 August 2025. The rules relating to high-risk AI systems and limited-risk AI systems (subject to transparency requirements) will apply from 2 August 2026, which is also the general date from which the AI Act will apply.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© A&O Shearman
