A deep dive into the FCA’s approach to the regulation of AI within financial services

Last week the FCA issued three announcements concerning its approach to the digital future of financial services in the UK. Amongst these was the FCA’s AI Update, its response to the Government’s recent AI consultation outcome, which called on key regulators to set out their strategic positions on AI in their respective sectors by 30 April. The FCA’s response was published concurrently with the joint response letter from the Bank of England and the PRA.

The FCA’s approach is largely premised on regulating AI through the myriad existing legislation and regulation, subject to the findings of further work to be conducted over the next 12 months. In this article we interrogate how the FCA proposes that the five core cross-sector principles for “trustworthy AI” are addressed within its existing regulatory framework.

The response fails to materially advance our practical understanding of the FCA’s approach to the regulation of AI. To reflect the pace of change within AI and the scale of the task, we and our clients would welcome regular updates from the FCA over the course of the next 12 months, including clear guidance from the regulator about its practical application of the regulatory framework to AI-enabled technology and systems.

Backdrop to the FCA’s response

In March 2023, the Government published its White Paper on its approach to regulating AI. The Paper proposed a decentralised approach to the regulatory oversight of AI and, notably, deviated from the position taken by the EU to legislate by way of its Artificial Intelligence Act 2024. Fundamentally, the Government proposed a framework of five overarching, cross-sector principles for “trustworthy AI” (“the Principles”):

  1. Safety, security, and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

On 6 February 2024, the Government published its response to the consultation triggered by its White Paper, making clear that the response was primarily intended to provide guidance to individual regulators as they take forward their consideration of the impact and risks of AI in their respective sectors and determine their approach to governing AI. The clear direction from the Government was that it would not be pursuing legislative change at this stage, but would continue to pursue its principles-based approach. It requested that key regulators publish their individual strategic approaches to managing the risks presented by AI by 30 April 2024.

The FCA published its response – the FCA’s AI Update – one week ahead of schedule, on 22 April 2024. The response confirms that the FCA agrees with the Government’s approach, stating that “the FCA is a technology-agnostic, principles-based and outcomes-focused regulator”. The key takeaway from the FCA’s proposed approach is that it will use its existing framework to regulate AI. The response states that an “evidence-based view, one that balances both the benefits and risks of AI, will ensure a proportionate and effective approach to the use of AI in financial services”. We now examine what this might mean.

Principle 1: Safety, security, and robustness

Perceived risks

AI has the potential to increase the threat of bad actors spreading disinformation that is difficult to detect, and of harms including fraud and intimate image abuse.

Increased access to AI tools can also enable bad actors to commit fraud, launch sophisticated cyber-attacks, and launder money.

Government proposals

Regulators will need to provide guidance which takes account of good cybersecurity practices.

Regulators should consider privacy practices in their approaches to regulating AI, with access granted only to authorised users as a safeguard against bad actors.

FCA response

The FCA points to existing “high-level principles-based rules” that could be applied to the use of AI, including:

  • The Threshold Conditions, the minimum conditions to be satisfied by firms.
  • Principles for Businesses, which provide a general statement of the fundamental obligations of firms.

The FCA also points to its work on operational resilience, outsourcing and Critical Third Parties (“CTPs”), suggesting that providers of AI systems could meet the criteria for designation as CTPs, such that they would be subject to additional regulatory scrutiny and oversight. We note here that the regulators are currently assessing their approach to CTPs further to Consultation Paper 26/23 (see our Emerging Themes article).

Principle 2: Appropriate transparency and explainability

Perceived risks

The decision-making processes of AI systems may not be sufficiently explainable or transparent, and there is no clear regulatory consensus on how to address those challenges.

Without appropriate guidance and oversight, the potential for model drift and the lack of explainability give rise to prudential risks.

Government proposals

Regulators will need to set clear expectations for AI life-cycle actors.

The Government has encouraged regulators to consider the role of available technical standards in addressing explainability issues, including the need to evidence data and model performance metrics.

FCA response

The FCA’s response appears underdeveloped. The update acknowledges that the FCA’s regulatory framework does not specifically address the transparency or explainability of AI systems.

It points specifically to firms’ cross-cutting obligation under the Consumer Duty to act in good faith. Where that does not apply, it points to Principle 7, which requires firms to pay due regard to the information needs of their clients and to communicate information to them in a way that is clear, fair and not misleading.

The FCA also notes that, under UK data protection law, data controllers are required to provide data subjects with certain information about their processing activities.

Given that the response is less developed than anticipated, we expect (and hope) that further guidance will be issued in the coming months.

Principle 3: Fairness

Perceived risks

AI presents risks of bias, discrimination, and the financial exclusion of vulnerable populations.

Government proposals

Regulators will need to interpret and articulate what fairness requires in the corresponding context and clarify the specific instances where fairness is relevant.

Regulators will also need to design, implement, and enforce necessary governance requirements that consider the relevant technical standards.

FCA response

The FCA relies on its current regulatory approach to consumer protection, based on its:

  • Principles for Businesses, in particular Principle 8 on managing conflicts of interest and Principle 9 on the suitability of advice; and
  • The Consumer Duty, which addresses discrimination harms by requiring firms to take account of the different needs of customers, including needs arising from their individual characteristics.

The Guidance for firms on the fair treatment of vulnerable customers is technology-agnostic and applies to all firms subject to the Principles, including those using AI or data solutions within their services.

Furthermore, existing legislation will serve its purpose in regulating AI, with the Equality Act 2010, the UK General Data Protection Regulation and the Data Protection Act 2018 playing a key part in ensuring fairness in the application of AI systems.

Principle 4: Accountability and governance

Perceived risks

An absence of clearly defined roles and responsibilities for AI could result in insufficient skillsets and governance functions.

Government proposals

Regulators should be looking for ways to ensure that clear expectations for regulatory compliance and good practice are placed on actors in the AI supply chain.

Third-party providers of AI solutions are also encouraged to conduct their own risk assessments.

FCA response

As well as high-level rules and principles, including the Threshold Conditions and the Principles for Businesses (in particular, Principle 3 on management and control), the FCA points to:

  • The SYSC sourcebook, in particular SYSC 4.1.1R, which contains a range of specific provisions on systems and controls and on firms’ governance processes and accountability arrangements.
  • Existing Senior Manager roles under the Senior Managers and Certification Regime (SM&CR), in particular SMF24 (Chief Operations Function) and SMF4 (Chief Risk Function), which the FCA suggests are sufficient to cover the use of AI in relation to an activity, business area or management function. Whether a dedicated Senior Management Function is needed where AI is being used within a firm is a question the FCA will still need to grapple with.

Practically, given the inevitably wide reach of AI-enabled systems within regulated firms, mapping their governance onto the existing SM&CR framework will be hugely challenging.

Principle 5: Contestability and redress

Perceived risks

There is a risk of inconsistent enforcement arising out of inconsistent regulatory standards.

Government proposals

Impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates a material risk.

Regulators need to clarify the methods available to third parties to contest AI decisions and receive redress.

FCA response

Firms that use AI as part of their business operations remain responsible for ensuring compliance with the FCA’s rules, including the rules around consumer protection.

Should a breach occur, there is a range of existing mechanisms through which firms can be held accountable and consumers can obtain redress.

Chapter 1 of the ‘Dispute Resolution: Complaints’ Sourcebook (DISP) contains rules and guidance detailing how firms should deal with complaints.

Redress can also be sought through the Financial Services Compensation Scheme.

What is on the horizon?

In its response, the FCA states that it will prioritise developing its understanding of how AI is currently being deployed by firms, so that risks and corresponding mitigations can be appropriately identified. This reflects the FCA’s commitment to the Government’s pro-innovation approach to AI, for example by funding a pilot of the AI & Digital Hub delivered by DRCF member regulators. The FCA’s Regulatory Sandbox, alongside its other Innovation Services, will ensure that a diversity of perspectives and solutions are considered. As a result, data and digital will play a key role in the FCA’s strategy, with international cooperation being a crucial element in ensuring a safe, responsible and proportionate framework that fosters a pro-innovation culture going forward.

The FCA’s CEO Nikhil Rathi confirmed the approach in his speech on 22 April 2024, stating that the FCA’s recent Call for Input shows it needs to remain vigilant about data asymmetry, or risk putting off incumbents and innovators from retail financial services. The FCA is therefore examining the case for developing a commercially viable framework for data sharing in Open Banking and Finance.

Helpfully, the FCA has set out its plan for the next 12 months. Its first priority is to continue to build an in-depth understanding of how AI is being deployed across financial markets, enabling it to respond promptly from a supervisory perspective to emerging issues within firms. In light of this, jointly with the Bank of England, it will be running its machine learning survey into the state of AI in financial services for the third time.

Over the next 12 months, the FCA intends to build on the existing foundations of its regulatory framework, which it considers to be relatively fit for purpose and aligned with the Government’s Principles. However, the regulator has said it will continuously monitor the situation to ensure the regulatory environment reflects reality. Recent developments, such as the rise of Large Language Models, will have increasing relevance to firms’ safe and responsible use of AI.

Collaboration, including with international regulators, will also be a priority over the coming year, but it is important to the FCA that any such collaboration does not slow the adaptation of the UK regulatory framework, so that the regulator remains in control of the pace of development. This is a difficult balancing act for the FCA.

The 12-month plan highlights the importance of testing for beneficial AI, in particular through the Digital Sandbox, which allows technology to be tested against synthetic data, and the Regulatory Sandbox, of which the FCA was the global pioneer.

The FCA therefore appears to be taking a proactive approach to understanding emerging technologies and their potential impact, supported by the introduction of its Emerging Technology Research Hub. As part of the DRCF’s Horizon Scanning & Emerging Technologies work, the FCA also intends to conduct research on deepfakes and simulated content.

Conclusion

The FCA’s approach is largely premised on regulating AI within its existing guidelines, legislation and regulation. Whilst there is more progress to come over the next 12 months, we would welcome regular updates from the FCA, incorporating clear practical guidance on how its regulatory expectations are to be applied in practice to AI-enabled systems.


The authors would like to thank Molly Tinker for her contribution to this article.
