European Parliament’s Leading Committees Vote to Approve AI Act

Ogletree, Deakins, Nash, Smoak & Stewart, P.C.

[co-author: Ellie Burston]

The world’s first artificial intelligence (AI) regulatory framework is “a step closer” to becoming law, the European Parliament recently announced. On 11 May 2023, following the European Commission’s 2021 draft proposal, the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs and Committee on the Internal Market and Consumer Protection adopted a draft negotiating mandate by a large majority.

Quick Hits

  • The purpose of the AI Act is to manage AI safely by ensuring there is appropriate human oversight.
  • Members of the European Parliament have endorsed a four-tiered, risk-based categorisation of AI systems: unacceptable, high, low, and minimal risk.
  • An EU AI Office would be created to monitor progress of the AI Act, be a point of consultation, and produce guidance on compliance.

The fundamental purpose of the legislation—called the Artificial Intelligence Act, or AI Act—is to manage the use of AI safely, ensuring there is appropriate human oversight. The key principles set out in the legislation include ensuring that AI is developed in a way which helps and respects people, minimises harm, complies with privacy and data protection rules, is transparent, and promotes equality and democracy. These aims are woven throughout the new drafting, much like the fundamental principles of the General Data Protection Regulation (GDPR).

In the amended proposal, a “best efforts” obligation has been introduced in line with the general aims of the legislation. AI providers would be required to “[establish] a high-level framework that promotes a coherent human-centric European approach to ethical and trustworthy Artificial Intelligence” in accordance with EU values.

What Is AI?

An initial point of interest to onlookers was how legislators would define AI itself, as there is currently no universally recognised definition. The draft proposal included a “single future-proof definition of AI”: “software … [which] can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

Despite best intentions, the definition proved less future-proof than initially thought, as critics observed that it left room for interpretation and legal uncertainty. The amended definition of AI in the AI Act is as follows: “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”

EU Approach to Regulation

Members of the European Parliament (MEPs) have endorsed the Commission’s risk-based approach from its draft proposal, in which AI systems are divided into four risk categories: unacceptable, high, low, and minimal. Certain AI systems would be prohibited altogether, whereas other types of AI would be permitted but subject to several obligations regarding their development, placing on the market, and use.

Unacceptable Risk

Unacceptable-risk AI systems are those that manipulate human behaviour or use “social scoring” (assessing the trustworthiness of people based on their social behaviour). These would be forbidden under the current proposals. MEPs have substantially expanded the list of unacceptable uses to include a ban on using AI in an intrusive and discriminatory manner. The AI Act would now also prohibit biometric categorisation systems that use sensitive characteristics such as gender and race, as well as the use of AI in predictive policing systems. In addition, the amended act would disallow the use of emotion recognition systems in the workplace, education, law enforcement, and border management. Creating facial recognition databases by scraping biometric data from social media and CCTV would also be banned, as this amounts to a violation of an individual’s right to privacy.

High Risk

High-risk AI systems, such as those that make decisions about people in areas sensitive to fundamental rights, would have to meet strict requirements covering transparency, safety, and human oversight. MEPs expanded the high-risk classification to include harm to people’s health, safety, or the environment. This category now also covers AI systems used to influence voters in political campaigns and the recommendation systems used by social media platforms. In addition, a “fundamental rights impact assessment” would be required before a high-risk system is used for the first time.

Low and Minimal Risk

Low- and minimal-risk AI, such as chatbots or spam filters, would remain largely unregulated in order to maintain competitiveness in the EU.

Other Safeguards

An EU AI Office would be established to monitor progress of the AI Act, serve as a point of consultation, and produce guidance on compliance.

Additional transparency requirements have been introduced for “generative AI systems,” which can autonomously generate text, images, or audio. Such systems would be required to disclose that their content was artificially generated or manipulated. The Commission and the AI Office would consult and develop guidelines on how these transparency obligations would be implemented.

AI providers would be obliged to ensure that their staff, and others dealing with AI on their behalf, have a “sufficient level of AI literacy” through training, including knowledge of how AI functions, the benefits of the products, and the risks involved.

Next Steps

Following committee approval, the draft needs to be endorsed by the whole Parliament; a vote is anticipated during the 12-15 June 2023 session. Once approved, tripartite negotiations on the final form of the law between the Council of the EU, the European Parliament, and the European Commission can commence.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Ogletree, Deakins, Nash, Smoak & Stewart, P.C.
