AI Watch: Global regulatory tracker - OECD

White & Case LLP

The OECD's AI recommendations encourage Member States to uphold principles of trustworthy AI.


Laws/Regulations directly regulating AI (the “AI Regulations”)

The OECD's Recommendation of the Council on Artificial Intelligence1 (the "Recommendation"), adopted by 46 governments2 as of July 2021 (the "Adherents"), contains:

  • The OECD's AI Principles (the "Principles"), which were the first intergovernmental standard on AI and formed the basis for the G20's AI Principles3
  • Five recommendations to be implemented in the Adherents' national policies and international cooperation for trustworthy AI (the "Five Recommendations")

Status of the AI Regulations

The Adherents have agreed to promote, implement, and adhere to the Recommendation. The Principles contribute to other AI initiatives, such as the G7's Hiroshima AI Process Comprehensive Policy Framework (including the International Guiding Principles on AI for Organizations Developing Advanced AI Systems and the International Code of Conduct for Organizations Developing Advanced AI Systems).

Other laws affecting AI

While certain OECD instruments can be legally binding on members, most are not. OECD recommendations are not legally binding; however, they represent a political commitment to the principles they contain and entail an expectation that Adherents will endeavor to implement them.5 A non-exhaustive list of OECD guidance that does not directly seek to regulate AI, but that may affect the development or use of AI, includes:

  • The Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data
  • The OECD Guidelines for Multinational Enterprises
  • The Recommendation of the Council on Consumer Protection in E-commerce

Definition of “AI”

The OECD's definition of "AI system" was revised on November 8, 2023, to ensure that it continues to accurately reflect technological developments, including with respect to generative AI.6 The Recommendation defines AI using the following terms:

  • "AI actors" means "those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI."
  • "AI knowledge" means "the skills and resources, such as data, code, algorithms, models, research, know-how, training programmes, governance, processes and best practices required to understand and participate in the AI system lifecycle."
  • "AI system" means "a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment."
  • "AI system lifecycle" involves the following phases: "i) 'design, data and models'; which is a context-dependent sequence encompassing planning and design, data collection and processing, as well as model building; ii) 'verification and validation'; iii) 'deployment'; and iv) 'operation and monitoring'. These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase."

Territorial scope

The Adherents (which are expected to promote and implement the Recommendation – see above) comprise 46 OECD members and non-members.7

Adherents implementing the Recommendation would place specific obligations on AI actors. However, the Recommendation does not define the term "AI actors" by reference to territory.

Sectoral scope

The Recommendation is not sector-specific. As discussed above, Adherents are expected to promote and implement the Recommendation and, in doing so, to place specific obligations on AI actors. However, the Recommendation does not define the term "AI actors" by reference to sector.

Compliance roles

Adherents are expected to comply with the Recommendation, although the Recommendation does not explicitly govern compliance or regulatory oversight. Certain Principles, namely those relating to human-centered values and fairness, transparency, and accountability, apply to AI actors. Whether and to what extent AI actors must comply with the Principles depends on the relevant Adherent's approach to implementation.

Core issues that the AI Regulations seek to address

The AI Regulations are intended to help shape a stable policy environment at the international level that promotes a human-centric approach to trustworthy AI, fosters research, and preserves economic incentives to innovate.8

Risk categorization

AI is not categorized according to risk in the Recommendation.

To promote a stable policy environment with regard to AI risk frameworks, the OECD has stated that it intends to analyze the criteria that should be included in a risk assessment and how best to aggregate such criteria, taking into account that different criteria may be interdependent.9

Key compliance requirements

The Adherents are expected to promote and implement the following Principles:10

  1. AI should pursue inclusive growth, sustainable development and well-being: This includes reducing economic, social, gender and other inequalities, and protecting natural environments.
  2. AI should incorporate human-centered values and fairness: AI actors should respect the rule of law, human rights and democratic values throughout the AI system lifecycle, and implement appropriate safeguards to that end.
  3. AI should be transparent and explainable: AI actors should provide information to foster a general understanding of AI systems, make stakeholders aware of their interactions with AI systems, and enable those affected by an AI system to understand and challenge the outcome.
  4. AI systems should be robust, secure, and safe so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they do not pose an unreasonable safety risk. To this end, AI actors should ensure traceability to enable analysis of the AI systems' output and apply a systematic risk management approach.
  5. Accountability: AI actors should be accountable for the proper functioning of AI systems and for the respect of the Principles.

The Adherents are also expected to promote and implement the Five Recommendations:11

  1. Investing in AI research and development. Governments should consider long-term public investment and encourage private investment in research, development, and open datasets that are representative and respect data privacy and data protection in order to spur innovation in trustworthy AI and support an environment for AI that is free of inappropriate bias.
  2. Fostering a digital ecosystem for AI. Governments should foster the development of, and access to, a digital ecosystem for trustworthy AI by promoting mechanisms, such as data trusts, to ensure the safe, fair, legal and ethical sharing of data.
  3. Shaping an enabling policy environment for AI. Governments should: (i) promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems; and (ii) review and adapt policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.
  4. Building human capacity and preparing for labor market transformation. Governments should: (i) collaborate with stakeholders to ensure people are prepared for AI-related changes in society and work by equipping them with necessary skills; (ii) take steps to ensure a fair transition for workers affected by AI, by offering training and support; and (iii) promote the responsible use of AI at work to enhance worker safety and the quality of jobs.
  5. International co-operation for trustworthy AI. Governments should: (i) actively co-operate to advance the Principles and progress the responsible stewardship of AI; (ii) work together in the OECD and other forums to foster the sharing of AI knowledge; (iii) promote the development of multi-stakeholder, consensus-driven global technical standards; and (iv) encourage the development, and their own use, of internationally comparable metrics to measure AI research, development, and deployment, using the evidence to assess progress in the implementation of the Principles.

Regulators

The OECD does not regulate the implementation of the Recommendation, although it does monitor and analyze information relating to AI initiatives through its AI Policy Observatory. The AI Policy Observatory includes a live database of AI strategies, policies, and initiatives that countries and other stakeholders can share and update, enabling interactive comparison of their key elements. It is continuously updated with AI metrics, measurements, policies, and good practices, which in turn inform updates to the practical guidance for implementing the Recommendation.12

The Recommendation does not stipulate how Adherents should regulate the implementation of the Principles in their own jurisdictions.

Enforcement powers and penalties

As the Recommendation is not legally binding, it does not confer enforcement powers or give rise to any penalties for non-compliance. The OECD relies on Adherents to implement the Recommendation and enforce the Principles in their own jurisdictions.

1 https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449.
2 OECD Members: Australia, Austria, Belgium, Canada, Chile, Colombia, Costa Rica, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Republic of Türkiye, United Kingdom, United States, and Non-Members: Argentina, Brazil, Egypt, Malta, Peru, Romania, Singapore, and Ukraine.
3 Background - OECD.AI
4 https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.
5 "Decisions are adopted by the Council and are legally binding on all Members except those which abstain [whereas] Recommendations are adopted by the Council and are not legally binding [but do] represent a political commitment to the principles they contain and entail an expectation that Adherents will do their best to implement them." (https://www.oecd.org/legal/legal-instruments.htm.)
6 https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449#backgroundInformation.
7 OECD Members: Australia, Austria, Belgium, Canada, Chile, Colombia, Costa Rica, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, Luxembourg, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Republic of Türkiye, United Kingdom, United States, and Non-Members: Argentina, Brazil, Egypt, Malta, Peru, Romania, Singapore, and Ukraine.
8 "RECOGNISING that given the rapid development and implementation of AI, there is a need for a stable policy environment that promotes a human-centric approach to trustworthy AI, that fosters research, preserves economic incentives to innovate, and that applies to all stakeholders according to their role and the context." (introduction to the Recommendation).
9 "The OECD Experts Working Group, with members from across sectors and professions, plans to conduct further analysis of the criteria to include in a risk assessment and how best to aggregate these criteria, taking into account that different criteria may be interdependent." (page 67 of the Framework for the Classification of AI Systems (here)).
10 See Section 1 (1.1 – 1.5) of the Recommendation.
11 See Section 2 (2.1 – 2.5) of the Recommendation.
12 The OECD's Policy Observatory is available here.

 

Daniel Mair (Trainee Solicitor, White & Case, Paris) contributed to this publication.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© White & Case LLP | Attorney Advertising
