It has been a busy summer for followers of the various European proposals to introduce a regulatory framework for the use of artificial intelligence. The EU is trying to resolve internal differences in its approach to regulation, while the proposals published by the UK overtly take a more light-touch, pro-innovation approach.
EU AI Act
In April 2021, the EU Commission published a proposal for an EU Artificial Intelligence Act in the form of an AI Regulation that would be immediately enforceable throughout the EU. The proposal sparked a lively discussion among EU Member States, stakeholders and political parties in the EU Parliament, generating several thousand amendment proposals. On the basis of a joint draft report issued in April 2022 by parliamentary committees examining the proposals, the EU Parliament is currently attempting to work out a compromise text. In addition, the EU Council has made separate new proposals to try to broker a compromise.
The proposed AI Regulation would apply to providers and users of AI systems regardless of their country of establishment as long as the AI system is available in the EU market or its output is used in the EU (see our detailed analysis of the proposal). The proposal includes:
- a ban on certain AI practices (such as specific social scoring and biometric identification uses) that pose a clear threat to the safety, livelihoods and rights of people; and
- regulations on other AI applications according to their classification by risk. Providers and users of non-high-risk AI systems would be free to choose whether to implement voluntary codes of conduct, but AI systems that interact with humans and/or are used to detect emotions or generate deep fakes must observe specific transparency obligations. High-risk AI systems (i.e., systems creating a high risk to the health, safety or fundamental rights of natural persons) are subject to an ex-ante (i.e., advance) conformity assessment. Providers of such high-risk AI systems are required to take extra steps – for example, to implement risk and quality management systems, to observe certain standards for training data, to document the system’s output, to provide transparent information to users and to register their pre-assessed AI system in the public EU database.
EU Member States (as well as political parties in the EU Parliament) have argued for diverging approaches to the regulation of AI – which are addressed in the two latest compromise proposals of the EU Council, issued in September 2022. Key points of those proposals include:
- the definition of “AI systems” – which some in the industry consider too broad and a threat to open-source and research applications, while others argue for deleting the list of AI techniques and approaches in Annex I to keep the AI Regulation open to future technical developments;
- the scope of the AI Regulation – which the rapporteur for the EU Parliament’s Committee on Civil Liberties, Justice and Home Affairs proposed to be broadened to include AI applications in the Metaverse, as well as in blockchain assets such as cryptocurrencies and NFTs;
- a ban on the use of facial biometrics in law enforcement – although some Member States want to exclude from the AI Regulation any use of AI applications for national security purposes (the proposals exclude AI systems developed or used “exclusively” for military purposes; however, the latest Council compromise text drops the exclusivity criterion). Germany has recently argued for ruling out remote real-time biometric identification in public spaces while allowing retrospective identification (e.g., during the evaluation of evidence), and has asked for an explicit ban on the use of AI systems to substitute for human judges, for risk assessments by law enforcement authorities and for systematic surveillance and monitoring of employee performance;
- the classification of high-risk systems – which the Council seeks to limit to systems with a major impact on decision-making, i.e., excluding systems that are merely accessory to a decision. The Council proposal also removed the lack of human review as a classification criterion for high-risk systems. Germany has proposed to amend the latest compromise by including “significant harmful impacts on the environment” as a criterion for high-risk AI systems. Further discussions relate to the pre-set list of high-risk uses in Annex III and whether AI providers should instead have to self-assess their systems’ risks. For general-purpose AI systems that may or may not be used for high-risk applications, the Council proposes that the Commission issue implementing acts on the basis of impact assessments and public consultations;
- the national-level governance and enforcement of the AI Regulation – with which some Member States fear they will not be able to comply sufficiently (the first pilot of a regulatory sandbox on AI was presented by Spain in June). Proposed solutions include a stronger supporting role for the European AI Board or testing facilities at the EU level; and
- the level of proposed fines (up to €10-30 million or 2-6% of total annual turnover) – which some argue should be lowered in general (or, at least, for SMEs), while others are in favor of even higher sanctions. The Council has proposed to lower the top end of possible fines for SMEs from 3% of annual turnover to 2%.
The EU Parliament is expected to vote on its compromise text in November 2022. Final coordination between the Parliament, the Council and the Commission could start in early 2023.
AI-related revision of the EU Product Liability Directive
In September 2022, the EU Commission also published its draft revision of the Product Liability Directive (PLD). The PLD imposes no-fault strict civil liability on manufacturers for damage arising from defective products. The revision was necessary to cover new categories of products emerging from digital technologies, such as AI. The PLD stipulates specific circumstances under which a product will be presumed “defective” for the purposes of a claim for damages, including a presumption of a causal link where the product is found defective and the damage is typically consistent with that defect.
With regard to AI systems, the revision of the PLD aims to clarify that:
- AI systems and AI-enabled goods are considered “products” and are thus covered by the PLD; and
- when AI systems are defective and cause damage to property, physical harm or data loss, the damaged party can seek no-fault compensation from the provider of the AI system or from a manufacturer integrating the system into another product.
Not specific to AI, the revised PLD further clarifies that:
- providers of software and digital services affecting the functionality of products can be held liable in the same way as hardware manufacturers;
- manufacturers can be held liable for subsequent changes made to products already placed on the market, e.g., by software updates or machine learning; and
- recoverability of damage to property under the PLD will be extended to lower-value items worth less than €500.
Proposal for an EU AI Liability Directive
Alongside the revised PLD, the EU Commission also published its draft of an AI Liability Directive that, in contrast to the AI Act, will have to be transposed into national law by the Member States within two years. The proposed AI Liability Directive is intended to facilitate the enforcement of civil law compensation claims for damage caused by AI systems. It complements the PLD – so, for example, while the PLD imposes strict liability for defective products regardless of any “fault” of the producer or manufacturer, the AI Liability Directive concerns cases in which damage is caused by wrongful behavior (e.g., breaches of privacy or safety, or discrimination resulting from AI applications).
The proposed Directive is deeply interwoven with the AI Regulation. For example:
- the definition of “AI system” aligns with the AI Regulation, including its notion of high-risk systems. Accordingly, potential adjustments in the AI Regulation will likely be mirrored in the proposed Directive during the legislative process;
- the proposed Directive stipulates the damaged party’s right to request information on the high-risk AI system from the provider – which the provider is required to document and store under the AI Regulation. A provider’s refusal to comply may be challenged before the courts and will be assessed under the principles of proportionality and protection of trade secrets; and
- the proposed Directive sets out a rebuttable presumption that non-compliance with the AI Regulation’s obligations caused the damage suffered by a claimant. This presumption of causality releases claimants from the burden of showing how the damaging output was caused by a specific act or omission of an AI provider or user.
Innovation before regulation: UK takes a fragmented approach
Meanwhile, outside the EU, the UK government has published an AI Regulation Policy Paper and AI Action Plan confirming that it intends to diverge from the EU’s regulatory regime. And, in June 2022, the UK made proposals on one key aspect of AI – the treatment of intellectual property rights. In both cases, the UK appears to be taking an approach that favours innovation over regulation.
Changes to UK IP law: text and data mining, computer-generated works’ copyright and patents
The UK plans to introduce a new copyright and database right exception that will permit text and data mining (TDM) for any purpose. IP rights-holders will not be able to opt out of the exception, but will still have safeguards to protect their content – primarily, a requirement that content subject to TDM must be lawfully accessed. Rights-holders will therefore be able to choose the platforms on which they make their works available, including charging for access. They will also be able to take appropriate steps to ensure the integrity and security of their systems.
It is intended that the exception will speed up the TDM process, which is often a precursor to the development of AI, and will help to make the UK a more competitive location for AI developers. Previously, the TDM exception applied only to non-commercial purposes.
On the other hand, the UK government axed other proposals. The UK has no plans to change the law regarding IP in computer-generated works. This means that works which do not have a human author will retain their copyright protection – a unique position in Europe. The government will keep the law under review and could amend, replace or remove protection in the future if the evidence supports it.
There will also be no change to UK patent law protection for AI-devised inventions. In response to the government’s consultation, most respondents agreed that changes to the law on inventorship should be harmonised internationally rather than implemented piecemeal. The counter-view is that the patentability rules ought to change to take account of the increasing contribution of AI to the R&D process and that, when AI technology reaches a stage where it can genuinely “invent”, any inventions devised by AI should be patentable. Although there will be no imminent policy change, the UK Supreme Court will consider a test case on the matter within the next two years (see our previous reporting on the multi-jurisdiction test case here and also in the United States).
Business knows best: consolidated principles to inform a multi-regulator, light-touch regime
Separately, the UK government’s AI Regulation Policy Paper and AI Action Plan confirm that the UK will aim to promote innovation first and foremost. Initially, the UK will not establish an AI-specific body or regulation, or even seek to define “AI”. Rather, this responsibility will be delegated to industry and to already established regulators (e.g., the Information Commissioner’s Office). This is designed to cater to the different challenges that different sectors face. However, a coherent approach will be reinforced through a set of cross-sectoral principles. As previously indicated in the UK’s AI strategy, the principles will be non-statutory in order to maintain flexibility.
The paper takes the position that responsibility for AI systems must always lie with an identifiable person. A light-touch approach – such as guidance and voluntary measures – will be encouraged. Prominent issues that are driving centralised AI regulation in the EU, such as safety, security, transparency and fairness, will instead be interpreted by individual regulators in the context of their own industries. The policy paper identifies that bodies and groups such as the Digital Regulation Cooperation Forum will have to play a key role in enabling a more coordinated regime. Further details will be announced in a forthcoming White Paper.
The UK’s Competition and Markets Authority (CMA) has sounded a more cautious note than the UK government itself. It observed that AI has the potential to create business opportunities and better, personalised services, but that AI can also allow the strongest market players to increase their market strength – so clear regulatory powers will be needed to prevent abuse.
Meanwhile, the Equality and Human Rights Commission has published guidance on the use of AI in public services, furthering its intention to make AI a key focus of its three-year strategy plan. Prompted by the risks of discrimination when AI is used for decision-making, the guidance contains advice on compliance with equality legislation (the Equality Act 2010) and a checklist for public bodies using AI.
Susan Bischoff, a research assistant in the Technology Transactions Group in our Berlin office, helped with the preparation of this article.