EU AI Act: European Parliament and Council Reach Agreement

Mayer Brown

On 9 December 2023, European Parliament negotiators and the Council presidency agreed on the final version of what is claimed to be the world's first comprehensive legal framework on Artificial Intelligence: the European Union Artificial Intelligence Act (the "EU AI Act").

The flagship legislation prohibits the deployment in the European Union of AI systems that pose an "unacceptable risk" and, in other cases, imposes different levels of obligations on AI systems categorised as "high risk" or "limited risk". Agreement has also been reached on regulating the deployment of foundation models, including measures to ensure compliance with European copyright law, requirements to publish detailed summaries of the content used to train these systems, and the preparation of technical documentation relating to the use of the models.

The EU AI Act was first proposed by the European Commission in April 2021. The European Parliament approved its version of the draft Act in June 2023, and this final agreement on the form the EU AI Act will take follows the conclusion of negotiations between the Parliament and the Council of the European Union, which represents the interests of the EU member states.

When will the Act come into effect?

The provisional agreement states that the Act should apply two years after entry into force, with some provisions coming into effect at a later date. Work still needs to take place to finalise the details of the new regulation, so it is likely that the Act will come into effect in 2026.

Who will the Act apply to?

The Act will apply to both providers and deployers of in-scope AI systems that are used in, or produce an effect in, the EU, irrespective of their place of establishment. This means that providers or deployers of AI systems in third countries, such as the United States, will have to comply with the EU AI Act if the output of the system is used in the EU.

Which AI systems will the Act cover?

The Act uses the definition of AI systems proposed by the OECD: "An AI system is a machine-based system that [...] infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can affect physical or virtual environments."

The Act will not apply to AI systems:

  • used exclusively for military or defence purposes;
  • used solely for the purpose of research and innovation; and
  • used by people for non-professional reasons.

Certain applications will be banned under the EU AI Act, including AI systems used for emotion recognition in the workplace and for untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Using AI systems to conduct remote biometric identification in public will be allowed only where strictly necessary for law enforcement purposes. Even then, safeguards will need to be put in place, such as limiting the use of these systems to searches for people suspected of the most serious crimes.

What are the requirements of the Act?

The requirements of the EU AI Act differ depending on the level of risk posed by the AI system. For example, AI systems presenting a limited risk would be subject to lighter-touch transparency obligations, such as informing users that the content they are engaging with is AI-generated.

High-risk AI systems would be authorised, but subject to tougher requirements and obligations, such as the need to carry out a mandatory fundamental rights impact assessment. Citizens will have a right to receive explanations about decisions based on the use of high-risk AI systems that affect their rights. At the other end of the scale, AI uses demonstrating unacceptable levels of risk would be prohibited.

Some examples include:

  • Limited risk: chatbots or deepfakes;
  • High risk: AI used in sensitive systems, such as welfare, employment, education, transport; and
  • Unacceptable risk: social scoring based on social behaviour or personal characteristics, emotion recognition in the workplace and biometric categorisation to infer sensitive data, such as sexual orientation.

What are the penalties for non-compliance?

Similar to the way fines are calculated under the EU General Data Protection Regulation, fines for violating the Act will be calculated as a percentage of the liable party's global annual turnover in the previous financial year, or a fixed sum, whichever is higher:

  • €35 million or 7% for violations which involve the use of banned AI applications;
  • €15 million or 3% for violations of the Act's obligations; and
  • €7.5 million or 1.5% for the supply of incorrect information.
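The "whichever is higher" rule above can be sketched as a simple calculation. This is an illustrative sketch only: the tier amounts come from the provisional agreement, but the function and tier names are our own, and the final text may define the calculation differently.

```python
# Illustrative sketch of the EU AI Act penalty tiers described above.
# Tier names and function structure are hypothetical, not from the Act.

def eu_ai_act_max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine for a violation tier: the greater of
    the fixed sum and the percentage of global annual turnover."""
    tiers = {
        "banned_ai":         (35_000_000, 0.07),   # use of banned AI applications
        "obligations":       (15_000_000, 0.03),   # violations of the Act's obligations
        "incorrect_info":    (7_500_000, 0.015),   # supply of incorrect information
    }
    fixed_sum, pct = tiers[tier]
    return max(fixed_sum, pct * global_annual_turnover_eur)

# A company with EUR 1 billion global annual turnover deploying a banned
# AI application: max(EUR 35m, 7% of EUR 1bn) = EUR 70m.
print(eu_ai_act_max_fine("banned_ai", 1_000_000_000))  # 70000000.0
```

For smaller firms the fixed sum dominates: at EUR 100 million turnover, a violation of the Act's obligations would be capped by the EUR 15 million fixed sum rather than the 3% figure (EUR 3 million).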

However, proportionate caps will be in place when issuing administrative fines against small and medium enterprises and start-ups. Citizens will be able to launch complaints about the use of AI systems that affect them.

Next steps

There will be technical refinements of the agreement conducted over the coming weeks before it is submitted to the representatives of the EU member states for approval. The final text of the EU AI Act will then be published.
