After a flurry of AI-related papers from the previous UK Conservative government and regulators in 2023 and the first half of 2024 (and the establishment of the UK’s AI Safety Institute and the AI Policy Directorate), progress seemed to stall as the new Labour government bedded in and set out its priorities. But in late 2024, we began to see signs that activity would resume in 2025 and, potentially, take a different direction from that indicated by the previous government. What does the launch of the AI Opportunities Action Plan on 13 January 2025 mean for the UK’s legislative agenda for AI? More generally, will the UK opt to keep pace with global AI regulation, or will it continue to pursue non-legislative interventions in the AI sphere as other jurisdictions like the EU, California, and Australia press on with AI-specific legislation?
What is the UK’s current position on AI regulation?
Almost a year ago, in February 2024, the previous Conservative government published its written response to the feedback it received as part of the consultation on its March 2023 White Paper on the approach to regulating AI. This response reiterated that the UK would not pursue AI-specific legislation in response to the explosion in AI innovation, preferring a principles-based approach built around an overarching framework of five cross-sectoral principles for “trustworthy AI”. Key regulators were asked to publish their individual strategic approaches to managing the risks presented by AI by 30 April 2024. We considered the financial regulators’ responses in our previous Emerging Themes article AI in Principle, but, in short, they concurred with the government’s approach, concluding that compliance with the AI principles could be achieved within the myriad of existing financial services legislation and regulation with only modest additional work.
The curve-ball of Lord Chris Holmes’ Private Member’s Artificial Intelligence (Regulation) Bill, introduced in December 2023, reached the House of Commons but fell by the wayside when parliament was dissolved last May and has not (yet) been resurrected. A new Private Member’s Bill to regulate the use of AI systems in “decision-making” processes in the public sector was, however, tabled in September 2024 and is now at committee stage.
So, the position remains that there are no overarching regulations governing AI in the UK. However, there are some AI-applicable provisions in existing legislation, including those in the Data Protection Act 2018 that currently restrict solely automated decision-making, although the Data (Use and Access) Bill introduced in October 2024 seeks to relax those rules, which would benefit AI systems.
Is that position set to change?
The Labour government used its first King’s Speech on 17 July 2024 to headline that it would "establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models” and, within the background briefing notes, “harness the power of artificial intelligence as we look to strengthen safety frameworks”. That promise was repeated in October 2024 by Technology Secretary Peter Kyle, who stated that the UK government would bring forward legislation to “safeguard against the risks of artificial intelligence” within the next year. Before the election, Kyle had indicated that Labour would introduce a statutory regime requiring AI companies to share test data, but that the requirements would be targeted at high-risk systems.
The scope of the government’s AI ambitions has now been laid out in its January 2025 AI Opportunities Action Plan. This sets out a vision for the UK to shape the AI revolution by investing in the foundations of AI (improved data infrastructure, investment in compute resources and AI talent, and the establishment of AI growth zones), driving adoption of AI in the public sector, and making the UK an attractive location for AI investment. The Plan continues to walk a careful tightrope: it supports drivers of AI innovation and investment in the UK, noting that “the UK’s current pro-innovation approach to regulation is a source of strength relative to other more regulated jurisdictions and we should be careful to preserve this”, while protecting UK citizens, specifically stating that safe development and adoption of AI should be achieved without “blocking the path towards AI’s transformative potential”. This suggests that we should not expect wholesale AI regulation to be introduced this year, but rather more targeted interventions to support text and data mining and to clarify how advanced frontier AI models will be regulated.
When it comes to financial services specifically, the financial regulators have not (publicly at least) changed their stance since April 2024. That is not to say, however, that the regulators have been idle in this important area. The new Action Plan proposes more funding for regulators to scale up their AI capabilities, and requires them to focus on enabling safe AI innovation and to publish annual reports indicating how they are enabling AI-driven innovation and growth in their sectors. The UK’s new strategy also envisages the appointment of an AI sector champion for the financial services sector, to be identified in summer 2025, who will develop AI adoption plans for the sector.
In October 2024, the FCA launched its new AI lab, which aims to help financial services firms navigate the challenges of AI and support them as they develop new AI models and solutions. As part of this, the regulator offers its AI Input Zone for stakeholders to have their say on the future of AI in UK financial services through an online feedback platform.
In November 2024, the Bank of England also announced its AI Consortium to provide a platform for “public-private engagement to gather input from stakeholders on the capabilities, development, deployment and use of artificial intelligence (AI) in UK financial services”. Although the Consortium will not have decision-making capacity or be obliged to act upon its discussions, it aims to (i) identify use-cases for AI in UK financial services; (ii) discuss the benefits, risks, and challenges of AI in relation to firms and the wider financial system; and (iii) inform the Bank’s ongoing approach to the safe adoption of AI. This valuable stakeholder engagement will inform the UK’s approach to AI regulation.
How does the UK’s position compare internationally?
It is safe to say that the position internationally remains disparate and fragmented.
What are the implications of this?
When it comes to translating its position on AI regulation into law, the UK has not taken the lead, relying so far instead on non-legislative interventions. The EU has clearly led the charge globally, and, to lesser degrees, others are following suit. But does this matter? Well, not necessarily.
The extra-territorial scope of the AIA means that any in-scope AI system deployed in the EU, or whose output affects individuals in the EU, is caught, regardless of where the developer or provider is located. Other AI acts introduced internationally bite similarly. Given the nature of AI-based technology, few material AI systems will operate exclusively within one jurisdiction. This means that AI legislation, wherever passed, effectively raises standards across jurisdictions. AI systems already being used and impacting individuals in the UK may therefore already be subject to legislative standards elsewhere. The commercial reality for financial services firms operating internationally will be a drift towards the highest standard of AI regulation, albeit not without significant compliance headaches when it comes to reconciling differing national approaches. Practically, we see the AIA serving as the global reference point.
When it comes to UK-specific regulation, the government acknowledges that the challenges posed by AI technologies will ultimately require legislative action. The AI Opportunities Action Plan spells out the UK’s ambitions on the world AI stage and heralds some legislative changes in 2025.
Will the lack of UK-specific AI legislation impact enforcement activity by the UK regulators where there is a breach of, for example, standards imposed by the AIA? The FCA will need to rely on its existing framework – a position it seems to be comfortable with according to its April 2024 response. However, we do foresee scenarios where this might not be sufficient. For example, if an AI-enabled system is deemed to be adversely impacting UK consumers, the UK enforcement regime may be found wanting where the EU and/or US offer consumers specific (and timely) protection.
Conclusion
With the EU and other countries currently taking a leading role globally on AI legislation, regulatory clarity in those jurisdictions could mean that they attract increased investment, if tech companies show willingness to meet the enhanced compliance burden. However, the global nature of both AI and financial services naturally encourages the development of an international approach to, and consensus on, AI regulation. The UK government, having now articulated its approach to AI regulation, may be trying to achieve the seemingly impossible balance between fostering innovation and providing regulatory clarity.
The authors would like to thank Senior Associate Siân Cowan for her input into this article.