The European Commission has published guidelines clarifying the definition of AI systems under the AI Act. The guidelines analyze each component of the definition, provide examples and specify which systems should fall outside its scope. Although non-binding, this guidance is a useful tool for companies to assess whether they fall within the scope of the AI Act.
The Commission has approved these guidelines but has not yet formally adopted them.
How does the AI Act define AI systems?
Article 3(1) of the AI Act defines AI systems as follows: “‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” Recital 12 of the AI Act provides non-binding interpretative guidance regarding the definition of an AI system, which the EU Commission guidelines further support.
The EU Commission identifies and clarifies the following seven components of the definition of an AI system under the AI Act.
1. AI systems are machine-based systems.
The guidelines clarify that the term “machine-based” in the context of AI systems refers to the integration of both hardware and software components that enable these systems to function. They underline that such systems must be computationally driven and based on machine operations.
2. AI systems must have some degree of autonomy.
Recital 12 of the AI Act specifies that AI systems should have “some degree of independence of actions from human involvement and of capabilities to operate without human intervention.” The EU Commission elaborates that the notion of an AI system, therefore, excludes systems designed to operate solely with full manual human involvement and intervention. Such human involvement and intervention can be direct, e.g., through manual controls, or indirect, e.g., through automated systems-based controls. This part of the definition is closely linked to the inference capability (see below), as the ability to infer outputs is crucial for achieving the system’s autonomy.
3. AI systems may adapt after deployment.
The definition of an AI system under the AI Act indicates that a system may exhibit adaptiveness after deployment. Recital 12 of the AI Act clarifies that “adaptiveness” refers to self-learning capabilities, allowing the system’s behavior to change while in use. The guidelines make clear that the ability to adapt is not a mandatory requirement for a system to constitute an AI system regulated by the AI Act.
4. AI systems are designed to operate according to objectives.
The guidelines specify that the objectives of the system may be explicit (i.e., clearly stated goals that the developer directly encodes into the system) or implicit (i.e., goals that are not explicitly stated but may be deduced from the behavior or underlying assumptions of the system). According to the guidelines, the objectives are internal to the system and should be distinguished from the intended purpose of the system, which is externally oriented and includes the context in which the system is designed to be deployed and how it must be operated.
5. AI systems must be capable of inferring outputs.
This is a key element of the definition of “AI systems” and an important part of the guidelines. Recital 12 of the AI Act states that the definition of AI systems aims, in particular, to distinguish AI systems from simpler traditional software systems or programming approaches and that it should not cover systems that are based on the rules defined solely by natural persons to execute operations automatically. The European Commission’s guidelines clarify what constitutes “simpler traditional software,” following a public consultation launched in November 2024 that included this question.
The EU Commission notably clarifies how to assess the capacity to infer outputs and what should be considered an AI system capable of inferring outputs. It also provides examples of systems that are not capable of inference. In general, inference relates to an AI system’s ability to create (infer) output from input received without being bound solely by human-defined rules.
The EU Commission first clarifies that, although the wording of the definition in Article 3(1) AI Act (“infers, from the input it receives, how to generate outputs”) may suggest that inference should be analyzed as part of the deployment phase of the AI system, it should be understood as primarily referring to the build phase, in which the system derives outputs through AI techniques (for instance, logic- and knowledge-based approaches) that enable inference. It also specifies that machine learning approaches (such as supervised learning, unsupervised learning, self-supervised learning and reinforcement learning) are typically techniques that enable a system to infer how to generate outputs during the build phase.
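To make the build-phase point more concrete, the following minimal sketch (purely illustrative and not drawn from the guidelines; the loan-approval scenario, the toy data and the use of scikit-learn are assumptions made for this example) contrasts a system whose behavior is fixed entirely by human-defined rules with a supervised learning model that infers its decision rule from labeled examples during training:

```python
# Minimal illustrative sketch (assumptions: the loan-approval scenario, the toy
# data and the use of scikit-learn are examples chosen here, not taken from the
# guidelines). It contrasts behavior fixed by human-defined rules with a
# decision rule inferred from labeled examples during the build (training) phase.
from sklearn.tree import DecisionTreeClassifier

# Rule defined solely by a natural person: every threshold is hard-coded.
def rule_based_approval(income: float, debt: float) -> bool:
    return income > 50_000 and debt < 10_000

# Machine learning: the decision rule is inferred from labeled examples
# during training, i.e., in the build phase of the system.
X = [[60_000, 5_000], [20_000, 15_000], [80_000, 2_000], [30_000, 12_000]]
y = [1, 0, 1, 0]  # 1 = approved, 0 = rejected (toy labels)
model = DecisionTreeClassifier(random_state=0).fit(X, y)

print(rule_based_approval(55_000, 4_000))   # follows the hand-coded rule
print(model.predict([[55_000, 4_000]])[0])  # follows the inferred rule
```

On the Commission’s reading, it is this learned rule, produced in the build phase, that distinguishes the second approach from software executing rules defined solely by natural persons.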
According to the guidelines, systems that do not infer outputs, or systems with limited capacity to analyze patterns and adjust autonomously, are not AI systems under the AI Act. The guidelines describe several system types as non-AI systems for the purposes of the AI Act:
- Systems that improve mathematical optimization or that accelerate and approximate well-established optimization methods, such as linear or logistic regression methods (for instance, a telecommunication program that optimizes bandwidth allocation and resource management);
- Basic data processing (for instance, a database management program that allows the sorting or filtering of data based on specific criteria);
- Classical heuristics (for instance, a chess program that can assess board positions without learning from experience); and
- Simple prediction systems (for instance, a program that estimates future stock prices by using an estimator with the “mean” strategy to establish a baseline prediction).
Although some of these systems may be able to infer, they have limited capacity to analyze patterns and autonomously adjust their outputs.
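As a purely illustrative sketch of the last bullet (the stock-price figures are hypothetical and the use of scikit-learn’s DummyRegressor is an assumption made for this example), a “mean strategy” estimator ignores any patterns in its input and always returns the average of past observations:

```python
# Minimal illustrative sketch (toy figures; DummyRegressor chosen here for
# illustration, not prescribed by the guidelines). A "mean strategy" estimator
# ignores patterns in its input and always predicts the average of past values.
from sklearn.dummy import DummyRegressor

past_features = [[1], [2], [3], [4]]      # placeholder inputs (ignored by the model)
past_prices = [101.0, 99.0, 102.0, 98.0]  # observed prices (toy values)

baseline = DummyRegressor(strategy="mean").fit(past_features, past_prices)
print(baseline.predict([[5]]))  # always the historical mean: [100.]
```

Such a baseline can technically produce a prediction, but, as the guidelines note for this category of systems, its capacity to analyze patterns and adjust its outputs autonomously is limited.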
6. AI systems’ outputs must be capable of influencing physical or virtual environments.
The EU Commission describes the four types of outputs listed in the definition of an AI system (predictions, content, recommendations and decisions). It specifies that AI systems can generate more nuanced outputs than other systems, for example, by leveraging patterns learned during training or using expert-defined rules to make decisions, offering more sophisticated reasoning in structured environments.
7. AI systems must be able to interact with the environment.
The EU Commission specifies that this element of the definition means that AI systems are active, not passive. The mention of “physical or virtual environments” signifies that an AI system can influence tangible objects, like a robotic arm, and virtual settings, like digital spaces, data flows and software ecosystems.
What’s next?
As part of their assessment of whether and how the AI Act may apply to their products and operations, organizations should evaluate the AI systems that they develop and/or use in accordance with the definition of AI system in the Act and the guidelines.
This assessment, particularly with respect to the inference component of the definition, should be carried out by both legal and technical teams.