In our previous alert we mentioned a joint letter from the Prudential Regulation Authority (PRA) and Financial Conduct Authority (FCA) to the UK Government on their strategic approach to artificial intelligence (AI) and machine learning. The letter followed the UK Government’s publication of its pro-innovation strategy in February of this year.
The PRA, which is charged mainly with oversight of the stability of the banking system and of the financial position of banks and large investment banks in the UK, had welcomed the Government’s principles-based, sector-led approach to AI regulation. The Government’s approach will see the PRA, along with the FCA, which is responsible for policing the behaviour of all UK financial institutions and the integrity of financial markets in the UK, taking the lead on regulating the use of AI in the financial sector.
In a recent speech, Sarah Breeden, the Bank of England’s Deputy Governor for Financial Stability (the DG), addressed the use of AI:
- at a microprudential level, where the PRA seeks to ensure the safety and soundness of individual firms, cautioning central banks and financial regulators to continue to assure themselves that technology-agnostic regulatory frameworks are sufficient to mitigate the financial stability risks from AI, as models become ever more powerful and adoption increases; and
- at a macroprudential level, noting the possible need (a) for macroprudential interventions to support the stability of the financial system as a whole and (b) to keep the PRA and FCA regulatory perimeters under review, should the financial system become more dependent on shared AI technology and infrastructure systems.
The speech is useful in that it highlights specific AI issues which financial services firms and fintech providers should note when deploying or developing AI. It is also useful for those thinking about policy: it confirms much of what we have said in our previous alerts, while also showing that thinking on how government approaches the regulation of AI can, and likely will, evolve.
Restating the PRA’s General Approach but Sounding a Warning
The DG restated the PRA’s “tech-agnostic microprudential approach”, confirming the PRA’s role, like that of the FCA, as a regulator of financial services providers and financial markets, not as a regulator of technology. She articulated the benefits of this approach as follows:
A tech-agnostic approach future proofs regulatory frameworks, by focusing on what matters (the outcomes) and not requiring perfect foresight on the part of regulators for how technology will evolve to deliver them.
She noted, however, that the power and use of AI is growing fast, and the PRA must not be complacent: past experience with technological innovation in other sectors of the economy shows that it is hard to address risks retrospectively once usage reaches systemic scale.
Growing Use Cases: AI and Risk Mitigation
The DG cited surveys undertaken by the PRA and FCA, which show that firms’ early use cases for AI have been fairly low risk from a financial stability standpoint: 41% of respondents are using AI to optimise internal processes, while 26% are using it to enhance customer support, helping to improve efficiency and productivity.
She went on to note, however, that many firms are also using AI to mitigate the external risks they face from cyber-attack (37%), fraud (33%) and money laundering (20%). For example, payment systems have long used machine learning to block suspicious payments automatically – and one card scheme is upgrading its fraud detection system this year using a foundation model trained on a purported one trillion data points.
She observed that potentially more significant use cases from a financial stability perspective are emerging. 16% of respondents are using AI for credit risk assessment, and a further 19% are planning to do so over the next three years. Meanwhile, 11% are using it for algorithmic trading, with a further 9% planning to do so in the next three years. Some 4% of firms are already using AI for capital management, with a further 10% planning to do so in the next three years.
AI and Individual Firms: Microprudential Supervision
In light of the above, the DG asked: as AI models become ever more powerful and are adopted in an ever wider range of use cases, can the PRA continue to rely on existing regulatory frameworks in its microprudential supervision of individual firms, given that these were not built to contemplate autonomous, evolving models with potential decision-making capabilities? That, she said, is why the PRA is continuing its work on AI, and she noted three areas it is particularly keen to explore:
- Model risk management, and the risk that users within firms may not fully understand the third-party AI models they deploy. Limited explainability of AI models is a particular focus: what explainability controls firms require, and what that means for the PRA’s regulatory and supervisory frameworks.
- Standards for the data on which AI models are trained. In particular, there is a need for firms to:
- train AI models on high-quality, unbiased input data;
- trace, to a reasonable degree, how the model’s behaviour responds to particular aspects of that training data; and
- understand where the model is particularly dependent on certain segments of training data.
- Governance, with the latest survey showing that only a third of respondents describe themselves as having a complete understanding of the AI technologies they have implemented in their firms. The PRA will expect a stronger, more rigorous degree of oversight and challenge by firms’ management and boards.
AI and the Financial Sector: Macroprudential Policy
The DG noted that, even if the PRA can deal with risks at an individual firm, interconnectedness – where the actions of one firm can affect others – remains a concern. Firms can become critical nodes and be exposed to common weaknesses; AI could both increase interconnectedness and increase the probability that existing levels of interconnectedness threaten financial stability.
The DG noted, in particular:
- Cyber-attacks, with AI aiding the attackers – for example through deepfakes created by generative AI to increase the sophistication of phishing attacks.
- Increased market speed and volatility under stress, which AI potentially creates, especially where multiple market participants use the same AI models and rely on a small number of AI service providers for trading, which could result in increasingly correlated trading behaviour.
- System-wide conduct risk. The DG asked what the consequences would be if AI determined outcomes and made decisions and, after a few years, those outcomes and decisions were legally challenged, creating the need for mass redress.
The DG referred to the regime for critical third parties (CTPs), which we discuss further below, and to the use of stress tests to understand how AI models used for trading, whether by banks or non-banks, could interact with each other.
Continued Alignment With Emerging Trends?
In our previous alert, we shared our thoughts on the emerging trends for regulating both generative AI and AI more generally. The DG’s speech does not change these, although blind adherence to a principle of tech agnosticism is open to question as the characteristics of AI as a special technology become an increasing focus:
- No new risks? It is still not clear that AI necessarily creates material new risks in the context of financial services, although the rapid rate of technological change may create new risks; it remains too early to tell.
- Amplifying existing risk. Instead, AI may amplify and accelerate existing financial sector risks – i.e., those connected with financial stability, consumer protection, and market integrity – which the financial services and markets regime is designed to reduce. This is very much a focus of the DG’s speech.
- Accountability for regulators’ use of AI. AI will also have a role in firms’ control of financial sector risks and, indeed, in the FCA’s and PRA’s regulation of the sector (although questions may arise about the justification for AI-generated administrative decisions and their compliance with statutory and common law principles of good administration). The DG mentioned the PRA’s use of AI in the speech.
- Sectoral rather than general regulation. In keeping with the concerns about amplifying and accelerating existing risks, it is appropriate for the PRA and FCA, as current financial sector regulators, to be charged with regulating AI.
- Where possible, use of existing standards. The PRA’s and FCA’s role in regulating AI reinforces the need to use and develop existing financial sector regulatory frameworks, enhancing continuity and legal certainty and making proportionate regulation more likely (although not inevitable). The DG’s focus in the speech on how the PRA’s existing framework can respond to firms’ adoption of AI reflects this: AI may be new, but regulatory obligations governing its use are already in place for firms. That said, the speech places greater emphasis on the need for these standards to develop than the other UK regulatory pronouncements on which we have commented.
- Governance is key. Effective governance of AI is needed to ensure that the AI is properly understood, not only by the technology experts who design it but also by the firms that use it – a “know-your-tech” (KYT) duty – and that firms can respond effectively to avoid harm materialising from any amplified and accelerated risks. The Senior Managers and Certification Regime (SMCR), which is a highlight of the Update, should accommodate a KYT duty.
- Regulatory jurisdiction over unregulated providers of critical AI services seems inevitable. Staying with the theme of existing frameworks, the rising importance of technology providers and currently unregulated CTPs, noted above and specifically raised in the Update, has resulted in an extension of powers for the FCA and PRA under the recently enacted Financial Services and Markets Act 2023 (FSMA 2023), as noted in our recent alert and addressed on our dedicated microsite. Providers of AI models that are used by many financial institutions – or by a small number of large or important financial institutions – may become subject to the jurisdiction of the PRA or FCA under the new powers that FSMA 2023 introduces. In 2023, the PRA consulted on its requirements for critical third-party service providers, and the final rules are still awaited.