[co-author: Oli Jones]
On 6 February 2024, the UK government published its response to the artificial intelligence (AI) white paper consultation, which explains its approach to the regulation of AI (including generative AI) in the UK.
The response confirms that the government's strategy is to promote innovation. Rather than implementing blanket legislation, the UK government will proceed with the non-statutory, contextual, cross-sectoral, principles-based approach detailed in the white paper. Although the government recognises that 'binding requirements' will at some point be needed to address potential AI-related harm, it has said that it will only legislate when "confident that it's the right thing to do".
The white paper proposals to empower existing regulators and create a central regulatory function to coordinate the strategy will also proceed. As part of this, the Department for Science, Innovation and Technology (the "DSIT") has published new guidance for regulators to support them when interpreting and applying the principles-based approach. Our alert summarises the government's response and sets out a number of recommendations for businesses active in this space.
A principles-based approach
The white paper was originally released in March 2023 (see our update here), followed by a 12-week public consultation period which closed in June 2023. After hearing from over 545 individuals and organisations, the government has published its response.
Rather than adopting blanket rules that apply to all AI technologies, the white paper proposed five broad cross-sectoral principles which set out the government's expectations for the responsible design, development and application of AI, being:
- Safety, security and robustness;
- Appropriate transparency and explainability;
- Fairness;
- Accountability and governance; and
- Contestability and redress.
The majority of respondents to the consultation agreed that these principles would cover the key risks posed by AI technologies, and the government's response confirms that these five principles remain unchanged.
The government believes that it is more effective to focus on how AI is used within a specific context than to regulate specific technologies. Existing regulators will apply the principles within their domains to progress safe, responsible AI innovation. The 'five principles' regime will be implemented on a non-statutory, voluntary basis, as the government believes that this offers critical adaptability; however, this will be kept under review.
The government has written to regulators asking them to publish an update which outlines their strategic approach to AI by 30 April 2024, although the response details that a number of regulators have already started work in line with the principles-based approach:
- The Competition and Markets Authority (CMA): published a review of foundation models to understand the opportunities and risks for competition and consumer protection;
- The Information Commissioner's Office (ICO): updated guidance on how data protection laws apply to AI systems to include fairness; and
- The Office of Gas and Electricity Markets (OFGEM) and Civil Aviation Authority (CAA): currently working on AI strategies to be published later this year.
Binding requirements
The response notes that the principles-based approach could potentially miss significant risks posed by advanced, highly capable general-purpose systems. Because such systems have such a wide range of potential uses, they do not clearly fit within the remit of any one regulator, potentially leaving risks unmitigated.
Despite acknowledging these risks, the government is not yet imposing binding requirements. It believes that introducing measures too soon, even if highly targeted, could fail to effectively address risks, quickly become outdated, stifle innovation and prevent people from benefiting from AI. Binding measures will only be implemented if it is determined that existing measures are no longer adequate and that risks could be mitigated in a targeted way.
Any future regulations would oblige developers to adhere to the five principles and be targeted at developers working on the most powerful general-purpose systems. The government proposes to achieve this by establishing dynamic thresholds which can quickly respond to AI development. These thresholds could be based on forecasts of capabilities using a combination of two factors:
- Compute: the amount of compute used to train the model; and
- Capability benchmarking: assessing capabilities in certain risk areas to identify where high capabilities could result in high risk.
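To make the two-factor proposal concrete, the combined test might be sketched as follows. This is purely illustrative: the government response does not specify any numeric thresholds, risk areas or scoring scales, so every figure and name below is a hypothetical assumption.

```python
# Illustrative sketch of a "dynamic threshold" test combining training
# compute with capability benchmarking. All thresholds, risk areas and
# scores are hypothetical assumptions, not values from the government
# response; "dynamic" here simply means the thresholds are parameters
# that can be revised as AI development moves on.

from dataclasses import dataclass, field


@dataclass
class ModelProfile:
    training_flops: float  # total compute used to train the model
    # benchmark score per assessed risk area, normalised to 0-1 (assumed scale)
    risk_benchmark_scores: dict[str, float] = field(default_factory=dict)


def exceeds_thresholds(model: ModelProfile,
                       compute_threshold: float = 1e26,
                       capability_threshold: float = 0.8) -> bool:
    """Flag a model if either factor crosses its (adjustable) threshold."""
    high_compute = model.training_flops >= compute_threshold
    high_capability = any(score >= capability_threshold
                          for score in model.risk_benchmark_scores.values())
    return high_compute or high_capability


# Example: a model below the hypothetical compute bar can still be caught
# by a high score in a single risk area.
model = ModelProfile(
    training_flops=5e25,
    risk_benchmark_scores={"cyber-offence": 0.9, "bio": 0.3},
)
print(exceeds_thresholds(model))  # True: the capability benchmark triggers
```

Using either factor alone would miss cases the other catches, which is presumably why the response proposes combining them; an efficiently trained model could be highly capable in a risk area despite modest compute, while compute serves as a forecastable proxy before capabilities are measurable.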
Empowering regulators and developing a central function
The white paper also proposed a central regulatory function, a proposal widely welcomed by stakeholders in the consultation. The central function will be established within government to monitor and assess risks across the economy and support coordination. It is hoped that this central, cross-sector function will prevent regulatory overlap, gaps and poor coordination amongst regulators.
The government also plans to boost the capabilities of regulators. The process of empowering regulators and establishing a central regulatory function has already commenced in a range of ways:
- Risk assessment: a new multidisciplinary team has been recruited to undertake cross-sectoral risk monitoring within the DSIT;
- Regulator capabilities: £10 million of funding has been announced for regulators to develop the capabilities and tools they need to adapt and respond to AI;
- Regulatory powers: the DSIT will work with government departments and regulators to analyse and review potential gaps in existing regulatory powers and remits;
- Coordination: Lead AI Ministers have been established across all government departments to coordinate action. Also, by spring 2024, the DSIT will establish a steering committee with government representatives and key regulators to support knowledge exchange and coordination on AI governance; and
- Ease of compliance: the DSIT is funding a pilot multi-agency advisory service to help innovators and businesses get new products to market safely and efficiently.
It would be prudent for businesses to start preparing for expanding regulatory activity and scrutiny regarding the use of AI in the UK. In the short term, businesses should expect an increasing volume of information gathering and guidance, for example, the Introduction to AI assurance published on 12 February 2024. In the long term, organisations should be mindful of the possibility of the regulatory framework being put onto a statutory footing, a decision which would likely lead to a rise in enforcement action taken by regulators in the UK.
Businesses that operate or seek to deploy AI technologies throughout Europe should also be aware that they could be subject to sweeping governance obligations imposed by the recently agreed EU AI Act (the "Act") (see our latest update here). The Act is broadly scoped and will apply to both providers and deployers of in-scope AI systems that are used in or produce an effect in the EU, irrespective of their place of establishment. While those obligations will not come into effect until two years after the final text of the law is published, most likely in 2026, organisations should familiarise themselves with the impending requirements in order to minimise risk.