UK Regulators Publish Approaches to AI Regulation in Financial Services

Skadden, Arps, Slate, Meagher & Flom LLP

[co-author: William Adams]

On 22 April 2024, the Financial Conduct Authority (FCA), the Prudential Regulation Authority (PRA) and the Bank of England published their strategic approaches to regulating AI in response to the UK government’s March 2023 white paper on AI regulation (the white paper). In summary, the releases made clear that any relevant regulation will need to be both “pro-innovation” and “pro-safety”. Although it is unlikely that we will see prescriptive AI rules for the financial services sector anytime soon, the regulators acknowledged the need to keep pace with the rapid development and growing complexity of AI. Accordingly, we are likely to hear significantly more from UK regulators on AI in the coming months and years.

Background

The white paper, published on 29 March 2023, set out proposals for a cross-sectoral framework for AI regulation based on five key principles: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress. The white paper was open for consultation (the Consultation), and the UK government published its response on 6 February 2024 (the Consultation Response), asking regulators to publish an update outlining their strategic approaches to AI by 30 April 2024. The FCA, the PRA and the Bank of England released their responses to this request on 22 April 2024 (the Regulator Responses).

Prior to the Consultation, the PRA and FCA jointly published DP5/22, a discussion paper that asked respondents to consider whether the existing legal requirements and guidance were sufficient to address the risks of AI in the financial services sector. The PRA and FCA published a feedback statement summarising the responses to the discussion paper on 26 October 2023 (the PRA and FCA Proposals).

Below, we examine the key points from the PRA and FCA Proposals, the Consultation Response and the Regulator Responses.

PRA and FCA Proposals

In DP5/22, the PRA and FCA asked respondents to consider whether the existing legal requirements and guidance were sufficient to address the risks and harms associated with AI, and what changes would need to be made to support the safe and responsible adoption of AI in UK financial markets. The questions fell into three main categories:

  • The PRA and FCA’s objectives and remits.
  • The benefits and risks of AI.
  • Regulation.

The PRA and FCA’s Objectives and Remits

The PRA and FCA both currently take a “technology-neutral” approach to regulation, meaning their core principles and rules do not usually mandate or prohibit specific technologies. In areas where there are risks that may relate to the use of specific technologies, the PRA or FCA may issue guidance or use other policy tools to clarify how the existing rules and relevant regulatory expectations apply to those technologies.

AI is already being used by UK financial services firms and regulatory bodies for a wide range of purposes, including:

  • Anti-money laundering and compliance functions.
  • Transaction monitoring and market surveillance.
  • Cyber defence and financial crime and fraud detection.
  • Credit and regulatory capital modelling in the banking industry.
  • Claims management, product pricing and capital reserve modelling in the insurance industry.
  • Order routing, robo-advisory services1, execution and trading signals generation in the investment management industry.
  • Predictive analysis, larger dataset analysis and the study of non-linear interactions between variables by the Bank of England.
  • A cognitive search tool introduced by the PRA that helps supervisors gain more insights from firm management information.
  • Natural language-processing programmes for trading-bot software in the investment banking industry, which utilise unstructured data from forums and market research and summarise documents.2

Given the wide range of AI uses already present in UK financial markets, the PRA and FCA asked respondents to consider whether a technology-neutral approach would continue to be appropriate and whether a sectoral definition of AI for financial services should be introduced. UK financial market participants have used technologies such as trading algorithms and other models for a number of years, with their usage regulated under the MiFID II regime.3 These technologies may not be deemed to be AI, but the issues related to their use often overlap with those related to AI (e.g., the systems are often very complex and difficult to understand or explain). Regulators and authorities tend to distinguish between AI and non-AI technologies by:

  • Providing a precise definition of what AI is (e.g., the definition of an “artificial intelligence system” in the recent EU AI Act or the proposed Canadian Artificial Intelligence and Data Act).
  • Viewing AI as part of a wider spectrum of analytical techniques with a range of elements and characteristics (proposed in the white paper and by the German regulator BaFin4).

Respondents were generally in favour of the latter approach, emphasising that a sector-specific definition of AI could be either too broad or too narrow, could quickly become outdated given the pace of AI development, and could conflict with the FCA and PRA’s technology-neutral approach, which respondents described as an effective basis for the adoption of AI in financial services. Given the overlap with similar technologies, such as algorithmic trading, many respondents believed that the risks associated with AI could be mitigated within existing regulatory frameworks, noting that the focus should be on the outcomes affecting consumers and markets rather than on specific technologies.

Benefits and Risks of AI

The PRA and FCA discussed a number of potential benefits and risks of AI, grouped according to their regulatory objectives, and asked respondents which risks the regulators should prioritise, how the benefits and risks might evolve as AI technology progresses, what novel challenges AI poses, how groups with protected characteristics might be affected and which metrics are most relevant.

The majority of respondents cited consumer protection as an area for the PRA and FCA to prioritise, identifying bias, discrimination, lack of explainability and transparency, and the exploitation of vulnerable customers with protected characteristics as the most significant associated risks. Respondents argued that firms should focus on mitigating data bias by addressing data quality issues, documenting biases in data and capturing additional data that may highlight impacts on particular groups with shared characteristics. Respondents also noted that the increasing scale and complexity of AI models, which can result in a lack of explainability or interpretability (the “black box” problem), could place greater demands on governance, as firms may not have sufficient expertise and/or experience to support the level of oversight required to maintain effective control of a model and the associated risk management. Over half of respondents stated that the most important metrics would be those focused on consumer outcomes, particularly metrics designed to identify biased outcomes.

In order to tackle the risks associated with third-party providers — such as overreliance, which could cause a single point of failure during a cyberattack that impacts multiple firms and markets — respondents suggested that third parties be required to provide evidence supporting the reasonable development, independent validation and ongoing governance of their AI products so firms can make their own risk assessments.

Regulation

The PRA and FCA listed what they viewed as the most important parts of the current regulatory framework for the regulation of AI, with reference to some of their objectives. Respondents were invited to consider a number of questions, including their views on the most relevant aspects of the regulatory framework; any regulatory deficiencies, barriers or areas requiring clarification; and specific PRA and FCA proposals, such as whether to create a new prescribed responsibility for AI to be allocated to an FCA senior management function (SMF).

Most respondents identified UK data protection laws as among the most important aspects of the existing regulatory framework for AI, highlighting that the “right to erasure” under Article 17 of the UK General Data Protection Regulation (UK GDPR) extends to personal data used to train AI models. Respondents also noted that some areas of data regulation are not sufficient to identify, manage, monitor and control the risks associated with AI models, and that there would therefore be value in aligning UK GDPR definitions and taxonomies with the approaches of the UK financial regulators. In addition, respondents flagged the UK GDPR’s AI-related data protection and privacy rights as difficult to navigate (particularly in relation to automated decision-making) and sought further clarity on the topic.

Respondents also asked the PRA and FCA for more clarity on how bias and fairness are defined in the context of AI models, on implementing bias and fairness requirements, and on how firms should interpret the Equality Act 2010 and the FCA Consumer Duty in this context. Most respondents agreed that clarity should be achieved through additional guidance, but only if it is actionable and does not duplicate or create confusion with existing regulations or guidance. Respondents emphasised that cross-sectoral and cross-jurisdictional coordination on AI regulation, achieved by aligning key principles, metrics and interpretations of key concepts, would be particularly effective when coupled with a risk-based approach (such as the EU AI Act’s risk-based categorisation of AI use cases) and a principles-based approach to regulation.

Most respondents did not believe that creating a new prescribed responsibility for AI allocated to an SMF would be helpful, due to the many potential applications of AI within a firm and the fact that a number of the relevant responsibilities are already reflected in the “statement of responsibilities” for existing SMFs.

Consultation Response

Though other jurisdictions have moved ahead with specific AI regulations, such as the EU’s AI Act, the UK government confirmed that the UK will continue with its approach of sector-based regulation underpinned by the five key principles outlined in the white paper.

To help regulators develop the tools and expertise required to address AI, the Consultation Response announced £10 million in funding for UK regulators, although it is unclear at this stage how the funds will be divided. The UK government also will review regulators’ existing powers and remits to assess whether they are sufficient to regulate AI in their respective sectors, in addition to establishing a steering committee of government representatives and regulators to coordinate AI governance by spring 2024.

The Consultation Response highlighted a number of specific risks of AI, grouped into three categories: (i) societal harms, (ii) misuse risks and (iii) autonomy risks. Risks that are particularly relevant to financial services include:

  • The risk of bias and discrimination.
  • The complex nature of the current rules regarding automated decision-making within UK data protection laws. The government confirmed that the Data Protection and Digital Information Bill (DPIB), which aims to reform the UK’s data protection laws, will complement the planned regulatory approach to AI.
  • The potential misuse of highly capable generative AI systems.

Regulator Responses

The Regulator Responses to the feedback received in the PRA and FCA Proposals set out strategic approaches to AI, highlighting the importance of promoting the safe and responsible use of AI in UK financial markets. The Regulator Responses noted that a technology-neutral approach does not necessarily prevent the FCA, the PRA or the Bank of England from issuing guidance or using other policy tools to clarify existing rules and regulatory expectations with regard to specific technologies. This “outcomes-based” approach to regulation is seen as better able to keep pace with rapid technological change in AI and, therefore, as offering better protection for consumers.

The Regulator Responses noted that the five principles for AI regulation outlined in the white paper were key to their respective approaches. In particular, the FCA listed a number of its existing rules and guidance that it viewed as most critical in addressing these principles, including the Threshold Conditions, the Senior Management Arrangements, Systems and Controls sourcebook, the Consumer Duty, and the Senior Managers and Certification Regime. Regulatory cooperation also was highlighted as important, both within the UK and internationally. In particular, the regulators confirmed that they would continue their ongoing cooperation and work with the Digital Regulation Cooperation Forum on research to better understand the adoption of generative AI technology, including deepfakes and simulated content, during 2024 and 2025.

The PRA and the Bank of England confirmed that they will be running a third instalment of their “Machine learning in UK financial services” survey to keep up with any ongoing developments, and also will take a closer look at the financial stability implications of AI during the course of 2024 alongside the Financial Policy Committee. In addition, the regulators noted that certain areas in the regulatory framework needed further clarification, including (i) data management, (ii) model risk management, (iii) governance and (iv) operational resilience and third-party risks.

Looking Ahead

It is clear that the UK government will continue with its sector-based approach to the regulation of AI, which is consistent with how the UK has traditionally regulated new technologies, as seen recently with cryptoassets. The PRA and FCA Proposals clarified the approach in some areas, but the two regulators are likely to release more consultations, rules, guidance and policy statements over the coming months. The FCA acknowledged that its regulatory approach will need to adapt to the speed, scale and complexity of AI’s growth, while also noting the need for a greater focus on the validation and understanding of AI models, as well as strong accountability principles. Firms will need to be able to evidence compliance with these principles and to ensure that appropriate training and education on AI technology is taking place. Additionally, the interplay between the DPIB and any future financial services regulation of AI will be an important factor going forward, particularly with regard to automated decision-making.

_______________

1 A robo-advisor is an algorithmic program that provides automated financial planning and investment services based on data that the user provides about their financial situation.

2 The FCA is currently developing a natural language-processing programme to gain more insights from unstructured text documentation.

3 Markets in Financial Instruments Directive II (Directive 2014/65/EU).

4 Please see German financial services regulators’ consultation paper on machine learning in risk models.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Skadden, Arps, Slate, Meagher & Flom LLP
