The UK’s New AI Proposals

Faegre Drinker Biddle & Reath LLP

 

On 29 March 2023, the UK Government published its latest proposals on regulating Artificial Intelligence (“AI”). The White Paper follows on from an initial policy paper published in July 2022 (the “2022 Policy Paper”), which we discussed in detail in our previous blog post. The proposals set out in the White Paper have been informed by the feedback received as part of the UK Government’s consultation on the 2022 Policy Paper.

A central theme is that the regulatory framework in the UK must not stifle innovation, but rather harness AI’s ability to drive growth and prosperity, and increase public trust in its use and application.

Definition of AI

As in the 2022 Policy Paper, rather than setting out a detailed definition of AI, the White Paper identifies two core characteristics of AI systems to define the scope of regulation: ‘adaptivity’ and ‘autonomy’. The UK Government’s intention is to future-proof the regulatory framework against unanticipated new technologies that are autonomous and adaptive, on the basis that rigid definitions can quickly become outdated and restrictive as AI evolves. The introduction to the White Paper explains that these defining characteristics were widely supported in responses received during consultation on the 2022 Policy Paper.

Core Principles

The framework envisioned in the White Paper is underpinned by the five principles outlined below, which are intended to guide and inform the responsible development and use of AI across all sectors of the economy. Although largely similar to the six principles outlined in the 2022 Policy Paper, the five principles have had their definitions and rationales combined and refined. The introduction to the White Paper explains that the UK Government has reflected stakeholder feedback to the 2022 Policy Paper consultation by expanding on concepts such as ‘robustness’ and ‘governance’ and better reflecting the concepts of ‘accountability’ and ‘responsibility’.

  • Safety, security and robustness. AI systems should function in a robust, secure and safe way throughout the AI life cycle, and risks should be continually identified, assessed and managed. Regulators will need to consider issuing guidance and technical standards for implementing this principle.
  • Appropriate transparency and explainability. An appropriate level of transparency and explainability will mean that regulators have sufficient information about AI systems and their associated inputs and outputs to give meaningful effect to the other principles. Appropriate means proportionate to the risks presented by an AI system.
  • Fairness. AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes. The White Paper anticipates that regulators may need to publish descriptions and illustrations of fairness that apply to AI systems within their domain, and develop guidance that takes into account relevant technical standards.
  • Accountability and governance. Governance measures should be in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI life cycle.
  • Contestability and redress. Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates a material risk of harm. Regulators will be expected to clarify existing routes to contestability and redress.

In contrast with the 2022 Policy Paper, the White Paper merges the principle of safety with security and robustness, given the significant overlap between these concepts.

New Central Functions to Support the Framework

The White Paper proposes mechanisms to coordinate, monitor and adapt the regulatory framework. Although the 2022 Policy Paper proposed a small coordination layer within the regulatory architecture, feedback during the consultation process strongly favoured having a greater level of monitoring and central coordination to achieve coherence and improve clarity.

The proposed ‘central suite of functions’ includes:

  • a central monitoring and evaluation framework to assess cross-economy and sector-specific impacts of the new regime;
  • central regulatory guidance to support regulators’ coherent implementation of the principles;
  • a society-wide AI risk register to support regulators’ internal risk assessments;
  • support for AI innovators (including testbeds and sandboxes) to assist in navigating regulatory complexity;
  • education and awareness for consumers and guidance for businesses seeking to navigate the AI regulatory landscape;
  • horizon scanning for emerging trends and opportunities in AI development; and
  • ensuring interoperability with international regulatory frameworks.

What’s Next?

The UK Government is consulting on the overall approach set out in the White Paper, including any missed opportunities, flaws and gaps in the regulatory framework, until 21 June 2023.

In addition to allowing for responses to this consultation, the UK Government has staggered its next steps into three phases. Within six months of publication of the White Paper, it will publish its response to the consultation, issue the cross-sectoral principles to regulators, and design and publish an AI Regulation Roadmap. In the six to twelve months after publication, it will agree partnership arrangements with leading organisations to deliver the first central functions. In the longer term, it will deliver the first iteration of the central functions, encourage remaining regulators to publish guidance and publish a draft central, cross-economy risk register for consultation.

Overall, as the White Paper provides few concrete details, the true strictness of the UK Government’s approach to AI regulation will not become apparent until the first iteration of regulatory guidance emerges six to twelve months from now.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Faegre Drinker Biddle & Reath LLP | Attorney Advertising
