The EU Artificial Intelligence (AI) Act is set to become the world’s first comprehensive regulation of AI. After extensive negotiations, the European Parliament, the Council, and the European Commission reached a political agreement on the AI Act in December 2023, and the act was published in the Official Journal of the European Union on July 12, 2024.
From a compliance perspective, it is crucial for companies, especially those operating in the European Union or serving EU citizens, to understand the key implementation timelines of the AI Act.
Timeline
Entry Into Force: August 1, 2024
The AI Act will enter into force.
Six Months After Entry Into Force: February 2, 2025
Prohibitions on unacceptable-risk AI systems (e.g., manipulative techniques, social scoring, and certain biometric categorization practices), along with the act’s general provisions, will become applicable.
Nine Months After Entry Into Force: May 2, 2025
Codes of practice for general-purpose AI models (including models placed on the market before August 2, 2025) must be ready.
12 Months After Entry Into Force: August 2, 2025
Obligations for providers of general-purpose AI models will go into effect.
Member states must appoint competent authorities to oversee the AI Act.
The European Commission will conduct an annual review of the list of prohibited AI practices.
18 Months After Entry Into Force: February 2, 2026
The European Commission will issue an implementing act on post-market monitoring requirements.
24 Months After Entry Into Force: August 2, 2026
Obligations on high-risk AI systems listed in Annex III of the AI Act will become applicable. These include AI systems used in biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, migration and border control, and the administration of justice.
Member states must have implemented rules on penalties, including administrative fines.
Member state authorities must have established at least one operational AI regulatory sandbox.
The European Commission will review, and potentially amend, the list of high-risk AI systems.
36 Months After Entry Into Force: August 2, 2027
Obligations will become applicable for high-risk AI systems that are not listed in Annex III but are intended to be used as a safety component of a product (or that are themselves products) and that are required to undergo a third-party conformity assessment under existing sector-specific EU laws (e.g., those covering toys, radio equipment, in vitro diagnostic medical devices, civil aviation security, and agricultural vehicles).
By the End of 2030
Obligations will go into effect for certain AI systems that are components of the large-scale IT systems established by EU law in the areas of freedom, security and justice, such as the Schengen Information System.
In addition to the above timeline, the European Commission will issue further rulemaking and guidance on various aspects of the AI Act, such as the definition of AI systems, criteria for high-risk AI, conformity assessments, and technical documentation requirements. Companies should closely monitor these developments to ensure ongoing compliance.
To prepare for the implementation of the AI Act, we recommend that our clients take the following steps:
- Classify their AI systems according to the risk levels defined in the AI Act (prohibited, high risk, limited risk, and low or minimal risk).
- Conduct a gap analysis to identify areas where their current AI practices may not align with the AI Act’s requirements.
- Develop and implement a compliance plan to address any gaps and ensure readiness for the different implementation stages of the AI Act.
- Stay informed about the European Commission’s further rulemaking and guidance to adapt their compliance efforts accordingly.
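For teams inventorying their systems against the stages above, the classification-and-deadline logic can be sketched as a small helper. This is an illustrative aid only, not legal advice: the system name, tier assignments, and the date mapping below are simplifying assumptions (the high-risk date shown is the Annex III one; Annex I product-embedded systems have until August 2, 2027), and tier assignments must come from legal review rather than from code.

```python
from dataclasses import dataclass
from enum import Enum

# The AI Act's four risk tiers, as described in the recommendations above.
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "low or minimal risk"

# Simplified applicability dates drawn from the timeline above
# (Annex III high-risk date only; Annex I systems run to 2027-08-02).
APPLICABILITY = {
    RiskTier.PROHIBITED: "2025-02-02",
    RiskTier.HIGH: "2026-08-02",
    RiskTier.LIMITED: "2026-08-02",
    RiskTier.MINIMAL: "2026-08-02",
}

@dataclass
class AISystem:
    name: str
    tier: RiskTier  # assigned after legal review, not automatically

def compliance_deadline(system: AISystem) -> str:
    """Return the date by which this system's obligations become applicable."""
    return APPLICABILITY[system.tier]

# Hypothetical example: a recruitment-screening tool, high risk under Annex III.
tool = AISystem("cv-screening", RiskTier.HIGH)
print(compliance_deadline(tool))  # 2026-08-02
```

A real inventory would of course track far more (intended purpose, legal basis, DPIA status), but even a minimal table like this makes the gap analysis in the steps above concrete.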
View a graphic of the timeline here.
Compatibility with the General Data Protection Regulation
The implementation of the AI Act must consider its compatibility with the General Data Protection Regulation (GDPR) to ensure that AI systems are developed and deployed in a manner that respects privacy and ethical standards.
Key considerations include:
- Purpose and Legitimacy
AI systems must be developed with a clearly defined purpose that is legitimate and known to the data subjects. This purpose must be determined at the design stage and be compatible with the organization’s missions. AI systems should process only data that is adequate, relevant, and limited to what is necessary for the intended purpose. GDPR restricts decisions based solely on automated processing that significantly affect individuals, unless certain conditions are met. The AI Act should ensure these protections are upheld.
- Data Protection Impact Assessments (DPIAs)
Conducting DPIAs is essential to identifying and mitigating potential risks associated with AI systems. These assessments should be carried out throughout the life cycle of an AI system, from development to deployment. Where required, organizations must also appoint data protection officers to oversee data protection strategies and ensure compliance with the GDPR and the AI Act.
- Legal Basis
The legal basis for processing personal data must be chosen before implementing the AI system. The GDPR sets out six legal grounds for processing personal data, and the chosen basis determines the obligations of the organization and the rights of the individuals. Data should be collected for specified, explicit, and legitimate purposes and not further processed in a manner incompatible with those purposes.
- Privacy by Design
AI systems should be designed with privacy in mind. This includes ensuring that data is collected and used only for specified purposes and that data minimization and retention limits are observed. Clear information must be provided to data subjects about the use of AI systems and their rights under the GDPR.
- Transparency and Accountability
Transparency is crucial. AI systems must process personal data in a way that is lawful and transparent to data subjects. Organizations must provide clear and concise information about how they process personal data. Accountability is also essential because organizations must be able to demonstrate compliance with GDPR principles. Furthermore, AI systems must be designed to provide explanations for their decisions and operations, enhancing transparency.
- Risk-Based Classification
The AI Act classifies AI systems based on their risk level. High-risk AI systems must undergo a conformity assessment to ensure compliance with various requirements, including data governance, risk management, and cybersecurity.
- Supervision and Enforcement
The AI Act designates national supervisory authorities to enforce obligations on providers and users of AI systems. Data protection authorities, such as France’s National Commission on Informatics and Liberty (CNIL), play a crucial role in ensuring compliance with GDPR principles and developing appropriate responses to AI-related challenges.
- Ethical Considerations
The AI Act aims to establish an ethical framework to ensure that AI systems do not harm individuals or society. Ethical considerations are integral to the development and deployment of AI systems, particularly in terms of fairness and bias mitigation.
Luxembourg
The CNPD, Luxembourg’s data protection authority, has already launched a regulatory AI sandbox, the Sandkëscht program, to help ensure that AI technologies are developed in line with the GDPR and that the personal data involved is used responsibly.
France
The CNIL plans to rely on its existing requirements and recommendations, developed with the GDPR in mind, to guide and support stakeholders in complying with the AI Act. This guidance helps developers and suppliers of AI technologies ensure compatibility with the GDPR. The main steps the CNIL recommends are as follows:
- Define the purpose of the AI system.
- Create a clear outline of the operational use of the AI system.
- Understand general-purpose AI systems.
- Define AI systems for scientific research.
- Determine your responsibility based on your role as developer/supplier of the AI system and whether you act as a controller or a processor.
- Define the legal basis for personal data processing under the GDPR:
  - Processing based on consent must not disadvantage those who refuse to share their personal data.
  - Where processing rests on compliance with a legal obligation, that obligation and how it is met should be clearly stated.
  - Performance of a contract should be relied on only exceptionally.
  - Performance of a task carried out in the public interest is mostly applicable to public actors.
  - Processing justified by vital interests must safeguard the vital interests of the data subject(s).
  - A legitimate interest should be clearly and precisely defined, as should the exact data collected and its purpose.
- Consider whether any data collected can be reused, which will depend on the following:
  - If an organization collects the data itself, it might need to carry out a compatibility test or request additional consent.
  - Publicly available open-source data should be confirmed not to contain any unlawful or sensitive data.
  - Data purchased from third parties requires deeper consideration.
- Ensure no more data is used than is necessary.
- Set a retention period for the data collected and processed.
- Carry out a DPIA to assess and minimize data processing risks.