EU AI Office Announces Organizational Structure & Hosts First Webinar

King & Spalding

Last month the European Commission (“EC”) announced organizational details regarding the recently formed EU AI Office. Initially launched in January 2024 through the EC’s AI innovation package, the EU AI Office (“AI Office”) was created to support the implementation of, and oversee compliance with, the EU AI Act (“AI Act”), including providing guidance on the general-purpose AI models and AI systems the AI Act regulates. The AI Act is set to enter into force in July 2024, beginning a two-year phased rollout of its obligations.

The EC’s May 29th announcement provides an overview of the planned organizational structure of the AI Office, which took effect on June 16, 2024. The AI Office followed this announcement by hosting its first webinar on May 30th, “Risk management logic of the AI Act and related standards,” which focused on high-risk AI systems and the AI Act’s anticipated implementation for those systems.

This client alert is the first in an ongoing series of client alerts focused on the EU AI Act.

EU AI Act

CLASSIFYING RISK WITH THE EU AI ACT

The AI Act establishes a risk classification pyramid for AI systems, with varying degrees of requirements depending on the level of risk (a simplified illustration follows the list below):

  • Unacceptable risk, which is prohibited by the AI Act barring enumerated exceptions. Examples include AI systems used for social scoring.
  • High risk, which is permitted by the AI Act but is subject to mandatory requirements and conformity assessments. Examples include AI systems used for recruiting or in medical devices.
  • Transparency risk, which is permitted by the AI Act subject to transparency obligations. Examples include chatbots and deepfakes.
  • Minimal or no risk, which is permitted by the AI Act with no additional restrictions or obligations.
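For readers who want a schematic view, the tiered logic above can be expressed in a few lines of code. The sketch below is purely illustrative: the enum names and example mappings are our own shorthand (the spam-filter entry, for instance, is a commonly cited example of minimal risk, not drawn from the Act’s text), and actual classification under the AI Act turns on detailed legal criteria rather than lookups.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative shorthand for the AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited, subject to enumerated exceptions"
    HIGH = "permitted with mandatory requirements and conformity assessment"
    TRANSPARENCY = "permitted with transparency obligations"
    MINIMAL = "permitted with no additional obligations"

# Hypothetical example mappings based on the examples above; real
# classification requires legal analysis of the system's intended use.
EXAMPLE_TIERS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "recruiting tool": RiskTier.HIGH,
    "medical device AI": RiskTier.HIGH,
    "chatbot": RiskTier.TRANSPARENCY,
    "deepfake generator": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

def describe_tier(use_case: str) -> str:
    """Return the illustrative tier and its consequence for a use case."""
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk ({tier.value})"

if __name__ == "__main__":
    for case in EXAMPLE_TIERS:
        print(describe_tier(case))
```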

AI Office Structure & Tasks

AI OFFICE PRIORITY TASKS

The AI Office will play a central role in the implementation and enforcement of the EU AI Act, working closely with EU Member States, international stakeholders, and groups from the public and private sectors. The EC has identified the following priority task categories for the AI Office in anticipation of the AI Act’s entry into force:

  • Supporting the implementation of the AI Act and enforcing the rules for general-purpose AI models.
  • Working to strengthen the development and use of trustworthy AI.
  • Fostering international cooperation in the regulation of AI systems.
  • Establishing cooperation with institutions, experts, and stakeholders.

The AI Office is currently preparing guidelines related to AI system definitions and AI system prohibitions, both of which are due six months after the AI Act enters into force. The AI Office will also begin drafting codes of practice for general-purpose AI models, which are due nine months after the AI Act’s entry into force.

AI OFFICE ORGANIZATIONAL STRUCTURE

To facilitate the goals of fostering a trustworthy and innovative AI ecosystem across the EU, the AI Office will consist of five separate units, identified as:

  • The Excellence in AI and Robotics unit, which will support and provide funds for research and development to promote an ecosystem of excellence in AI and robotics.
  • The Regulation and Compliance unit, which will provide regulatory coordination and guidance to facilitate the uniform application and enforcement of the AI Act across the EU, assist with investigations of possible infringements, and administer sanctions.
  • The AI Safety unit, which will focus on the identification of systemic risks of very capable general-purpose models, possible mitigation measures, and evaluation and testing approaches.
  • The AI Innovation and Policy Coordination unit, which will oversee the execution of the EU AI strategy and foster an innovative ecosystem by supporting regulatory sandboxes and real-world testing.
  • The AI for Societal Good unit, which will design and implement the international engagement of the AI Office regarding efforts towards AI for good (e.g., weather modeling, cancer diagnosis, and digital twins).

The AI Office will be led by the Head of the AI Office, Lucilla Sioli, who currently serves as Director for AI and Digital Industry within the EC. Additional leadership and guidance will be provided through the appointment of two advisors:

  • A Lead Scientific Advisor, to ensure scientific excellence in the evaluation of models and innovative approaches.
  • An Advisor for International Affairs, to follow up on the EC’s commitment to work closely with international partners on trustworthy AI.

AI Office Webinar Overview

As the AI Act moves from adoption to implementation, the first webinar hosted by the AI Office on May 30th, “Risk management logic of the AI Act and related standards,” acknowledged the challenging road ahead for the AI Office and various stakeholders in building out the necessary guidelines, standards, and frameworks to support this historic piece of legislation.

Given the need to prioritize these issues, the AI Office focused its first webinar on high-risk AI systems, first providing an overview of the AI Act’s risk management and quality management logic and then analyzing the current state of AI standardization in relation to the AI Act.

HIGH-RISK AI SYSTEMS

The AI Act is a comprehensive product regulation prioritizing safety and trustworthiness. AI systems will be assessed through a risk-based approach, considering the entire lifecycle of the AI system to ensure continuous trustworthiness. Dr. Tatjana Evas from the EC’s DG CNECT presented on the risk management system (“RMS”) and quality management system (“QMS”) logic developed for high-risk AI systems and the resulting compliance obligations.

Role of the EU AI Act

Article 1 of the AI Act defines the purpose behind this new legislation, which has been enacted to ensure a high level of protection of health, safety, and fundamental rights against the harmful effects of AI systems. As such, the AI Act is structured around five core principles:

  • Enhanced product regulation, where risks are assessed based on harm to health, safety, and fundamental rights.
  • A focus on the AI system and the risks it may generate, which frames the requirements identified for high-risk systems.
  • Risk-based and AI life cycle approach, where regulation is calibrated to the level of risk arising from an AI system across both pre- and post-market monitoring.
  • Establishment of trust across the entire value chain, with the development of rules for AI systems and general-purpose AI models.
  • The encouragement of responsible innovation, to develop trustworthy and human-centric AI systems.

Mandatory Requirements for High-Risk AI Systems

For high-risk AI systems, the AI Act mandates a robust RMS and QMS. These systems must cover the entire lifecycle of the AI system, from design to post-market monitoring.

The RMS process should involve a series of steps throughout the AI system’s lifecycle, including:

  • Identification and analysis of the known and foreseeable risks that the AI system may pose to health, safety, or fundamental rights;
  • Estimation and evaluation of those risks (including risks identified from data collected during post-market monitoring); and
  • Adoption of appropriate and targeted risk management measures.

Article 9 of the AI Act provides for a hierarchy of risk control actions that can be taken to implement appropriate risk management measures for high-risk AI systems (a simplified sketch follows the list):

  • Safety by design, which aims to eliminate or reduce identified and evaluated risks through adequate design and development of the AI system;
  • Protective measures, which implement adequate mitigation and control measures to address risks that cannot be eliminated; and
  • Information for safety, which requires the dissemination of the information required by Article 13.
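The sketch below models this iterative logic (identify, estimate and evaluate, adopt measures, applying Article 9’s hierarchy of controls) as a simple loop. It is a minimal illustration built on our own assumptions about data shapes and scoring; the AI Act prescribes no such formula or code, and a real RMS would be a documented organizational process, not a function.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """Illustrative record for a risk identified during the lifecycle."""
    description: str
    probability: float  # estimated likelihood of harm, 0.0-1.0 (our assumption)
    severity: float     # estimated severity of harm, 0.0-1.0 (our assumption)
    eliminated_by_design: bool = False
    mitigations: list[str] = field(default_factory=list)

# Article 9's hierarchy of risk control actions, in order of preference.
CONTROL_HIERARCHY = [
    "safety by design",
    "protective measures",
    "information for safety (Article 13)",
]

def run_rms_iteration(risks: list[Risk]) -> list[Risk]:
    """One pass of an illustrative risk management loop:
    identify/analyze -> estimate/evaluate -> adopt targeted measures."""
    residual = []
    for risk in risks:
        # Simplistic estimation step; the Act prescribes no numeric formula.
        score = risk.probability * risk.severity
        if risk.eliminated_by_design:
            continue  # first preference: eliminate the risk by design
        # Otherwise apply mitigation and user information, in hierarchy order.
        risk.mitigations.extend(CONTROL_HIERARCHY[1:])
        if score > 0:
            residual.append(risk)  # residual risks feed post-market monitoring
    return residual
```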

Testing is crucial for ensuring compliance with the AI Act: it must be thorough, continuous, and cover all lifecycle stages, and for high-risk AI systems it must be undertaken to identify the most appropriate and targeted risk management measures. Similarly, quality management covers the entire lifecycle of the AI system and must be thoroughly documented to ensure all compliance requirements are met.

AI STANDARDIZATION FOR THE AI ACT

As the AI Act moves further into its implementation phase, the importance of standardization becomes paramount. In the second half of the AI Office’s webinar, Josep Soler Garrido from the EC Joint Research Centre provided an overview of the current state of AI standardization and identified key areas requiring attention for effective compliance.

Existing Standards and Their Alignment with the AI Act

The development and adoption of standards are essential for the practical implementation of AI Act requirements. Many existing standards, such as those developed by ISO/IEC, offer valuable frameworks for AI systems. However, their alignment with the AI Act is not always perfect.

Key Areas of Misalignment

One notable example of misalignment is the definition of risk. ISO defines risk as the “effect of uncertainty on objectives,” whereas the AI Act has a broader definition encompassing the probability and severity of harm, including risks to fundamental rights. ISO standards often focus on organizational objectives, whereas the AI Act emphasizes broader regulatory and public objectives, aiming to mitigate risks to individuals.
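Schematically, the contrast can be put in pseudo-formula terms. The function below is our own illustrative reading of the AI Act’s probability-and-severity framing; the Act itself prescribes no numeric formula, and the multiplicative combination is an assumption for illustration only.

```python
def ai_act_risk(probability_of_harm: float, severity_of_harm: float) -> float:
    """Illustrative reading of the AI Act's framing of risk as the
    combination of the probability of an occurrence of harm and the
    severity of that harm. Multiplication is our own assumption; the
    Act does not prescribe a formula."""
    return probability_of_harm * severity_of_harm

# By contrast, ISO's "effect of uncertainty on objectives" is a
# qualitative, organization-centric notion with no single numeric analogue.
```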

To ensure comprehensive compliance, it is essential to identify gaps between existing standards and the AI Act and develop additional requirements where necessary. This involves a detailed analysis of existing standards and the specific needs of the AI Act.

Harmonizing AI Standards

The AI Office does not expect a single set of standards to address all high-risk AI systems covered by the AI Act. Instead, multiple standards will likely be needed. It is crucial to ensure that these standards collectively address all relevant requirements and are consistent with each other.

Rather than developing highly specific standards for each high-risk AI system, it is more practical to provide guidance on how to identify and apply the appropriate standards for different products. This approach allows for flexibility while ensuring compliance with the AI Act.

Conclusion

The path to compliance with the EU AI Act involves navigating a complex landscape of existing, new, and to-be-developed standards. Communications from the AI Office will be ongoing as it develops and publishes further guidelines and standards to support the full entry into force of the AI Act and the mandatory obligations it imposes on high-risk AI systems. Identifying gaps and aligning standards with the AI Act’s requirements is critical and will require collaboration with diverse stakeholders to meet the goals of harmonization central to the New Legislative Framework. King & Spalding will continue to vigilantly monitor these developments.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© King & Spalding | Attorney Advertising
