Many governments are grappling with the question of how to regulate artificial intelligence to ensure it is adopted safely and used responsibly without hampering innovation. Governments have generally indicated similar interests in mitigating AI's potential harmful effects, for example, ensuring that the use of AI by businesses is safe and transparent. However, their approaches to achieving those aims differ radically. The EU has the most comprehensive AI-specific legislation, the AI Act. The U.S. and U.K., on the other hand, have taken a different approach to the regulation and enforcement of AI, one based on the common law method of addressing risks as they are identified.
This note is the second in a three-part series on the regulation of artificial intelligence in the United States, the European Union and the United Kingdom. Our first note, available here, provided a summary comparative assessment of the approach that the three jurisdictions are taking to regulating the use of AI in the financial services sector, as well as a recommended Action Plan for firms to consider when implementing AI systems. This note examines:
- The scope of applicable laws and regulations;
- Extraterritorial application of AI laws and regulations;
- Data governance; and
- Third-party service provider regimes.
The final note of the series will assess the approach to enforcement, remedies and liability.
Scope
U.S.
There is currently no comprehensive AI-specific legislation at the federal level. The enacted state comprehensive privacy laws all have gating tests with two or three prongs that generally relate to a company's annual revenue, the number of data subjects the company collects data from, and whether (and how much) personal data the company sells. In most cases, if a company meets the gating test, it is subject to the law, irrespective of its state of incorporation or geographic location. Companies subject to certain federal privacy laws, including the Gramm-Leach-Bliley Act, may not be subject to the comprehensive state privacy laws due to federal pre-emption.
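To make the gating concept concrete, the sketch below shows how such a threshold test might be expressed in code. It is a minimal illustration only: the prong values, the `CompanyProfile` fields and the `meets_gating_test` function are hypothetical placeholders and do not reflect the thresholds of any particular state statute, which vary from state to state.

```python
# Illustrative sketch only: the prong values below are hypothetical placeholders,
# not the thresholds of any particular state's privacy law.
from dataclasses import dataclass

@dataclass
class CompanyProfile:
    annual_revenue_usd: float
    state_residents_processed: int   # data subjects whose data the company collects
    sells_personal_data: bool

def meets_gating_test(profile: CompanyProfile) -> bool:
    """Return True if the company trips any (hypothetical) prong and would
    therefore be subject to the state's comprehensive privacy law."""
    revenue_prong = profile.annual_revenue_usd > 25_000_000
    volume_prong = profile.state_residents_processed >= 100_000
    sale_prong = profile.sells_personal_data and profile.state_residents_processed >= 25_000
    return revenue_prong or volume_prong or sale_prong

# Example: a small out-of-state vendor that sells data on 30,000 residents
print(meets_gating_test(CompanyProfile(5_000_000, 30_000, True)))  # True
```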
State AI laws generally regulate companies' use of generative AI for automated decision-making (i.e., without significant human oversight, such as a designated internal reviewer) in critical or legal functions (such as access to healthcare/health services, educational opportunities, insurance, and loans and other financial services). Generally speaking, any company, irrespective of where it is incorporated or domiciled, that uses generative AI in this manner to make decisions about residents of a state with an enacted AI law will be subject to that law. These stand-alone laws generally do not have gating tests like the comprehensive privacy laws and are not likely to be pre-empted by federal privacy laws.
Certain state laws require deployers of high-risk systems to, among other things: implement a risk management policy; complete an impact assessment; notify consumers if the high-risk system makes a consequential decision concerning them; and post a publicly available statement summarizing the types of high-risk systems the deployer currently deploys, how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each of those systems, and the nature, source and extent of the information collected and used by the deployer.
EU
The EU AI Act is a stand-alone measure that will apply to various entities, depending on their role:
AI systems
The AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
The AI Act has four risk categories for AI systems. An AI system's risk category may change over time, depending on the intensity with which the system is used or on the number of end-users. The four tiers are summarised below, with a simplified illustrative sketch after the list.
- Minimal risk - AI systems are free from regulatory obligations under the AI Act.
- Limited risk - AI systems are essentially subject to transparency obligations, for example, disclosing that the content was AI-generated so users can make informed decisions on further use.
- High risk - AI systems are subject to more onerous obligations, such as:
- Risk management systems must be established, implemented, documented and maintained.
- Detailed technical documentation must be prepared, providing clear and comprehensive explanations of how the AI system complies with the AI Act.
- The AI system must be designed and developed so that its operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately, and it must be accompanied by instructions for use.
- Training, validation and testing data sets must be of high quality and subject to appropriate data governance and management practices.
- The AI system must be developed to an appropriate level of accuracy, robustness and cybersecurity, and must perform consistently in those respects throughout its lifecycle.
- The AI system must be capable of effective human oversight, commensurate with its risks, level of autonomy and context of use.
- A quality management system must be established. This will include, among other things: an accountability framework setting out the responsibilities of senior management and staff; a strategy for regulatory compliance; the handling of communications with regulatory authorities and others; record keeping; resource management; systems and procedures for data management; technical specifications and standards; examination, test and validation procedures; the techniques, procedures and systematic actions to be used for the design, development, quality control and quality assurance of the AI system; the risk management system; the post-market monitoring system; and procedures for serious incident reporting.
- Unacceptable risk - AI systems that are prohibited because of their harmful effects. These prohibited practices include, for example, cognitive behavioural manipulation, the untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and educational institutions, social scoring and biometric categorisation to infer sensitive data.
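The sketch below is a highly simplified illustration of the tiering described above. The use-case labels and the lookup logic are assumptions for illustration only; actual classification under the AI Act turns on the Act's detailed definitions and annexes and requires case-by-case legal analysis.

```python
# Illustrative sketch of the AI Act's four-tier structure described above.
# The use-case labels and mapping are simplified assumptions, not a substitute
# for a legal assessment under the Act.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

PROHIBITED_PRACTICES = {"social scoring", "workplace emotion recognition",
                        "untargeted facial image scraping"}
HIGH_RISK_USES = {"credit scoring", "insurance pricing", "recruitment screening"}
TRANSPARENCY_ONLY_USES = {"chatbot", "ai generated content"}

def classify(use_case: str) -> RiskTier:
    """Map an illustrative use-case label to one of the four tiers."""
    use_case = use_case.lower()
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_ONLY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("credit scoring"))  # RiskTier.HIGH
```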
The EU AI Act specifically exempts certain types of AI systems from its scope. For example, AI systems used for military, defence or national security purposes, or solely for scientific research and development, are exempt from the obligations in the AI Act. In addition, AI systems released under free and open-source licences are also exempt, unless they qualify as high-risk, prohibited or limited-risk AI systems.
GPAI models
The AI Act defines a GPAI model as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.
A separate approach is adopted for GPAI models, which will be subject to transparency obligations and requirements to protect copyright. GPAI models with high-impact capabilities (systemic risk) will also be subject to risk assessment and mitigation requirements. These systemic risks are different from those in the financial services sector and will usually pertain to the scale of the GPAI model, with regard to its computing power, its importance in the market (i.e., number of end-users) or the risks it poses for a given sector. Businesses developing or adapting these models will need to keep a record of how their systems are trained, including what type of data was used, whether any of that data was protected and what consents they had in place to use it. They will also be required to inform end users that they are interacting with an AI system rather than a human being.
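The sketch below illustrates the kind of training-data record keeping described above. The record layout, field names and example values are assumptions for illustration only; they are not a format prescribed by the AI Act.

```python
# Illustrative sketch of training-data record keeping for a GPAI model.
# Field names and layout are assumptions, not a format prescribed by the AI Act.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrainingDataRecord:
    source: str                        # e.g., licensed corpus, public web crawl
    data_type: str                     # e.g., text, images, tabular
    contains_protected_content: bool   # copyright-protected or personal data
    consent_or_licence: str            # legal basis / licence relied on
    collected_on: str                  # ISO date

@dataclass
class GPAIModelDossier:
    model_name: str
    training_records: List[TrainingDataRecord] = field(default_factory=list)

    def add_record(self, record: TrainingDataRecord) -> None:
        self.training_records.append(record)

dossier = GPAIModelDossier("example-gpai-v1")
dossier.add_record(TrainingDataRecord(
    source="licensed news archive", data_type="text",
    contains_protected_content=True,
    consent_or_licence="commercial licence, 2024", collected_on="2024-03-01"))
```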
One of the key issues for financial institutions will be to determine their status under the EU AI Act and the associated requirements each time they are using, deploying or distributing AI models or solutions. For providers and deployers of high-risk AI systems that are regulated financial institutions subject to internal governance, arrangements or processes requirements under the EU's financial services legislation, limited derogations apply, and they are able to integrate the record-keeping and log-keeping arrangements with those under EU financial services law. Which financial institutions will benefit from the derogations is not entirely clear, since the term "financial institution" is not defined in the AI Act. According to the Act's recitals, the term includes credit institutions regulated under the Capital Requirements Directive (CRD), insurance and re-insurance undertakings and insurance holding companies under Solvency II, insurance intermediaries under the Insurance Distribution Directive, and "other types of financial institutions subject to requirements regarding internal governance, arrangements or processes established pursuant to the relevant Union financial services law to ensure consistency and equal treatment in the financial sector."
At EU level, the European Supervisory Authorities have issued many statements and guidelines on the interaction between financial services legislative requirements and the use of AI by regulated firms. For example, the latest guidance from the European Securities and Markets Authority (ESMA) considers how regulated investment firms providing retail investment services can comply with their obligations under the Markets in Financial Instruments Regulation and Directive package (MiFID II), in particular the organisational requirements, the conduct of business requirements and the obligation to act in the best interest of the client.
U.K.
Given the sector-focused approach in the U.K., the scope of regulatory policies and statements on AI matches the regulatory perimeter of the sectoral regulators. The government has asked sectoral regulators to identify gaps in their regulatory powers and remits relevant to AI. There have been calls for financial services firms to have a named, FCA-registered senior manager who is directly responsible for a firm's use of AI systems. However, industry has argued that existing governance frameworks already cover firms' use of technology.
Regulators in different sectors have established specific forums, for example, the AI and Digital Regulations Service in the health sector and the Digital Regulation Cooperation Forum, which comprises the Competition and Markets Authority, the Information Commissioner's Office, the Office of Communications and the Financial Conduct Authority (FCA). The FCA, Bank of England and Prudential Regulation Authority (PRA) continue to collaborate on supervising regulated firms' use of AI, and the Bank of England has recently issued a call for applications to join its new AI Consortium on the development, deployment and use of AI in the financial services sector.
As mentioned in our first note in this series, there is the potential for foundation models (high-capacity GPAI) to become subject to legislative action. If taken forward, this would impact a small group of developers of the most powerful systems.
Data governance / processing
U.S.
In the U.S., federal agencies, state attorneys general and state regulatory bodies will enforce the laws noted throughout the notes in this series. Such protections are recognized as critical in financial services, as well as other areas, where consumer harm could be significant. Biden's Executive Order 14110 encourages regulatory agencies to use their authorities to protect consumer privacy and to consider introducing rules, or clarifications and guidance, as to how existing rules apply to AI systems.
Additionally, state privacy laws may also apply. For example, the California Consumer Privacy Act regulates the collection and use of personal information, and several efforts in California aim to amend the Act to specifically address the automated processing of such personal information by AI.
EU
The AI Act provides that EU laws on data protection and privacy apply to personal data processing using AI. The AI Act does not affect the rights and obligations contained in the General Data Protection Regulation (GDPR), including the GDPR principles of lawfulness, fairness and transparency; purpose limitation; data minimisation; accuracy; storage limitation; integrity and confidentiality; and accountability. It is therefore crucial for a company to determine the legal basis for its data processing and to document its approach (e.g., in its data protection impact assessment), including as regards staff training, transparency and data subject rights. If AI systems process personal data for automated decision-making, companies must comply with additional requirements under the GDPR and must often obtain consent.
There is a limited exception for the processing of special categories of personal data (e.g., health data) in connection with high-risk AI systems that involve the training of AI models with data. The exception, which is subject to several conditions being satisfied, is only available where processing is strictly necessary for bias detection and correction.
The specific rights of individuals established in the GDPR also apply. These include the right to be forgotten (erasure of data) and the right not to be subject to decisions based solely on automated processing, such as profiling, which produce legal effects concerning the individual or similarly significantly affect them.
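As a simplified illustration of how these automated decision-making provisions interact, the sketch below encodes the basic test: a decision based solely on automated processing that produces legal or similarly significant effects triggers additional safeguards and is permitted only on narrow grounds (such as explicit consent or contractual necessity). The function names and flags are assumptions for illustration and are not a substitute for legal analysis.

```python
# Illustrative sketch of the GDPR automated decision-making test described above.
# The flags and logic are simplified assumptions, not legal advice.

def requires_additional_safeguards(solely_automated: bool,
                                   legal_or_similar_effect: bool) -> bool:
    """Solely automated decisions with legal or similarly significant effects
    trigger the additional GDPR requirements discussed above."""
    return solely_automated and legal_or_similar_effect

def may_proceed(solely_automated: bool,
                legal_or_similar_effect: bool,
                explicit_consent: bool,
                necessary_for_contract: bool) -> bool:
    if not requires_additional_safeguards(solely_automated, legal_or_similar_effect):
        return True  # ordinary GDPR obligations still apply
    # Simplified: such decisions are permitted only on narrow grounds,
    # e.g., explicit consent or necessity for a contract (plus safeguards).
    return explicit_consent or necessary_for_contract

# An AI credit decision made with no human review and no consent:
print(may_proceed(True, True, False, False))  # False
```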
The European Data Protection Supervisor's June 2024 guidance on generative AI provides practical advice on processing personal data when using generative AI, to ensure compliance with the EU's data protection framework. The guidance notably covers the involvement of the Data Protection Officer at each step of the generative AI system's lifecycle, how to perform a data protection impact assessment and how to apply the data minimisation principle when training the model.
U.K.
The U.K. General Data Protection Regulation (U.K. GDPR) and the Data Protection Act 2018 will apply to any data control or processing in the context of an AI system. The U.K. GDPR and the EU GDPR remain essentially identical, so the discussion immediately above for the EU applies largely equally in the U.K.
Over the course of 2024, the Information Commissioner's Office (ICO) has engaged in a series of consultations on the application of U.K. data protection laws to generative AI, covering issues such as the lawful basis for training generative AI models and the expectations for complying with data subject rights. The ICO intends to use responses to the consultations to update its guidance on AI. The ICO previously updated its guidance on AI and Data Protection in 2023 to provide greater clarity on fairness requirements.
Extraterritoriality
U.S.
The U.S. has a long history of extraterritorial application of its laws and regulations, including those on corruption, economic sanctions, export controls, taxes, etc. The U.S. has already imposed restrictions on AI that will have an extraterritorial effect, such as limitations on the exports of emerging technologies like AI. See also the description above in “Scope.”
EU
The EU AI Act will apply to providers regardless of whether the provider is physically present or established within the EU or in a third country. Providers established outside of the EU must appoint an EU representative in writing. The EU representative must terminate its mandate if it "considers or has reason to consider" that the provider is acting contrary to its obligations under the AI Act. This requirement is similar to that in the EU GDPR, which requires third-country data controllers and processors within scope of the GDPR to appoint an EU representative.
The AI Act will also apply to providers and deployers of AI systems that are located or established outside of the EU where the output produced by the system is used in the EU. In this situation, the AI system itself is not placed on the market, put into service or used in the EU; only its output is used in the EU. For example, a U.S. financial institution uses an AI system to develop curated research on customer investment opportunities and sends the results to its German subsidiary, which provides that research to its clients.
The EU GDPR has an extraterritorial reach that could affect firms using or deploying AI systems. The GDPR applies if a company targets individuals in the EU by offering them products or services, or if a company monitors the behaviour of individuals where that behaviour occurs in the EU.
EU financial services legislation can be complex to navigate on a cross-border basis. Most investment business can be provided to EU customers only on a so-called 'reverse solicitation' basis (i.e., on a 'they called us' basis). This is discussed further by Shearman & Sterling, now A&O Shearman, in "On the existence of a pan-European reverse solicitation regime under MiFID II, and importance on a 'Hard' Brexit." Reverse solicitation will also apply to banking business generally from 11 January 2027. We discuss these new requirements in "New licensing requirements for cross-border lending into Europe."
U.K.
As for applicable sectoral regulation, U.K. financial services legislation provides that a firm may not conduct regulated activities in the U.K. unless it is licensed or otherwise exempt. One exemption, the overseas persons exclusion, allows third-country firms to provide services into the U.K. without obtaining a licence. This is a complex regulatory exclusion, which generally can apply to cross-border wholesale business into the U.K. from abroad. Financial promotions (communications of an invitation or inducement to engage in investment activity) are prohibited unless the person making the communication is licensed or the communication is approved by a licensed firm. However, there is an exemption for communications made to persons who receive them outside the U.K. or that are directed only at persons outside the U.K. In general, these provisions would apply in the same way to financial services firms using AI systems where the services are provided on a cross-border basis to U.K. customers.
The U.K. GDPR has a similar extraterritorial impact to that of the EU GDPR. In "Your 2023 Wrapped: UK AI and data protection edition," Allen & Overy, now A&O Shearman, discusses the impact of the 2023 Clearview decision for firms outside the U.K. whose activities involve monitoring the behaviour of U.K. persons.
Third-party providers
U.S.
Executive Order 14110 calls for implementing an AI risk management framework, including mapping the data supply chain to track data provenance, lineage, transformation, and integration. The Executive Order also suggests that financial institutions should expand their typical third-party due diligence and monitoring to account for AI-specific factors (such as data privacy, data retention policies, AI model validation, and AI model maintenance), and ask their vendors if they rely on other vendors for data or models and if so, how they manage and account for these factors.
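As a concrete illustration of the data supply chain mapping described above, the sketch below records provenance, lineage steps and downstream model use for a single data asset. The record layout, field names and vendor labels are assumptions for illustration, not a prescribed format.

```python
# Illustrative sketch of mapping a data supply chain: tracking provenance,
# lineage, transformation and integration for data feeding an AI model.
# Field names and example values are assumptions for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LineageStep:
    step: str          # e.g., "ingested", "cleaned", "joined with CRM data"
    performed_by: str  # internal team or third-party vendor
    timestamp: str     # ISO date

@dataclass
class DataAsset:
    name: str
    origin: str                                   # provenance: upstream vendor or system
    downstream_models: List[str] = field(default_factory=list)
    lineage: List[LineageStep] = field(default_factory=list)

asset = DataAsset(name="transaction_history", origin="core-banking-vendor")
asset.lineage.append(LineageStep("ingested", "data engineering", "2025-01-10"))
asset.lineage.append(LineageStep("anonymised and joined", "third-party ETL vendor", "2025-01-12"))
asset.downstream_models.append("credit-risk-model-v2")
print(asset)
```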
U.S. agencies have also issued guidance and proposed rules, which could clarify reliance on third-party providers that use AI.
- Securities and Exchange Commission (SEC): On 26 October 2022, the SEC proposed a new Rule 206(4)-11 and amendments to Rule 204-2, as well as amendments to Form ADV, regarding the use of third-party service providers by investment advisers required to be registered under the Investment Advisers Act of 1940. In an accompanying statement, SEC Chair Gary Gensler said the rule is “designed to ensure that advisers’ outsourcing is consistent with their obligations to clients.” The proposed rule is designed to prohibit advisers from outsourcing “covered functions” to service providers without meeting certain minimum requirements, including due diligence, monitoring, recordkeeping and reporting to the SEC. However, the rule has yet to be finalized.
- Broker-dealers that are Financial Industry Regulatory Authority (FINRA) members are also covered by FINRA rules, such as FINRA Rule 3110, which requires firms to adequately supervise the activities of their associated persons and vendors (including supervising activities related to AI applications), FINRA Rule 2210 governing communications, FINRA Rule 4510 regarding recordkeeping and FINRA Rule 2010 governing general conduct of business. FINRA has provided guidance to broker-dealers regarding their use of AI and the applicability of various rules.
- Banking Regulators: On 6 June 2023, the Federal Reserve, Federal Deposit Insurance Corporation and the Office of the Comptroller of the Currency released final Interagency Guidance on banking organizations' management of risks associated with third-party relationships which, while not specific to AI, is highly relevant. This Interagency Guidance replaces each agency's existing, separate third-party risk management guidance. The Interagency Guidance states that sound third-party risk management should take into account the level of risk, complexity, and size of the banking organization and the nature of the third-party relationship, and emphasizes that a banking organization is ultimately responsible for conducting its activities, including activities conducted through a third party, in a safe and sound manner. In particular, the Interagency Guidance notes that banking organizations should apply "more comprehensive and rigorous oversight and management of third-party relationships that support higher-risk activities, including critical activities." Such "critical activities" are activities that could cause a banking organization to face significant risk if the third party fails to meet expectations; have a significant impact on customers; or have a significant impact on the banking organization's financial condition or operations.
EU
The EU's financial services framework has recently been fortified by the Digital Operational Resilience Act (DORA), which applies from 17 January 2025 and governs the information and communication technology (ICT) security of financial services firms. DORA applies directly to various types of regulated EU financial entities and indirectly to any ICT third-party service provider that provides services to an in-scope EU regulated entity. DORA also introduces an oversight framework for critical ICT service providers to financial services firms. Entities designated as critical ICT service providers under the oversight framework will be subject to direct regulation by the most relevant European Supervisory Authority. DORA also introduces detailed requirements for the contractual arrangements between EU financial entities and their ICT third-party service providers. These include, for example, pre-contract obligations on EU financial entities relating to the risk assessment of potential ICT third-party service providers; the requirement that financial entities may only use the services of ICT third-party service providers that comply with appropriate information security standards; and detailed requirements for termination arrangements.
EU GDPR imposes obligations on both data controllers and data processors. Where a company (the controller) hires a third-party provider to undertake the controller’s data processing, both entities will be subject to GDPR. The same will apply to an AI company that processes data on behalf of a company to which it is providing its services or products.
The EU financial services framework sets out requirements for regulated financial services firms to manage the risks arising from outsourcing functions to third-party service providers. For example, there are requirements in the legislation applicable to credit institutions (the Capital Requirements Regulation (CRR) and CRD), to investment firms (MiFID II), and to payment institutions and electronic money institutions (the Payment Services Directive and the e-money Directive). In 2019, the European Banking Authority issued outsourcing guidelines that apply to banks and large investment firms. ESMA issued guidelines in 2021 on outsourcing to cloud service providers, which apply across its broad markets and securities remit.
U.K.
The Financial Services and Markets Act 2023 introduced a new regime for the regulation of service providers to regulated financial services institutions and financial market infrastructure. HM Treasury is empowered to designate service providers as critical third-party providers (CTPs) if their failure would pose a threat to financial stability or confidence in the U.K. financial system. A designated CTP will become subject to direct regulation by the Bank of England, the PRA and the FCA. No designations have been made yet, and the regulators have not yet published their final rules.
The CTP regime is similar to the EU's DORA as regards critical service providers, with similar conditions for designation and a clear stipulation that regulated entities retain responsibility for managing the risks arising from their use of third-party providers. However, the U.K. regime for critical service providers is broader in that it is not limited to ICT service providers. In contrast, DORA is broader in introducing detailed requirements for all ICT providers to regulated entities, whereas in the U.K. non-critical providers remain governed only by outsourcing rules. Both DORA and the U.K. CTP regime will apply to service providers regardless of where they are located. The U.K. will not generally require the establishment of a U.K. subsidiary. In contrast, DORA will require an ICT third-party service provider to establish an EU subsidiary if it provides services that affect the supply of financial services in the EU.
The U.K. FCA has confirmed that the adoption of AI by the financial services sector may result in third-party AI service providers becoming critical to the financial sector and that any systemic AI providers could fall into the new critical third-party regime. In "The UK's New Regime for Critical Third Party Supervision," Shearman & Sterling, now A&O Shearman, discusses the implications of the U.K.'s new CTP regime.
There are existing requirements for U.K. regulated financial services firms to address potential risks when a firm's critical or important functions are outsourced to third-party service providers. The outsourcing of such a function must meet certain conditions, such as not resulting in any delegation of senior management responsibility and not changing the relationship and obligations the firm has towards its clients. Under those requirements, even if a U.K. regulated firm outsourced a critical or important function to an AI company, the regulated firm would remain responsible for all of its regulatory obligations, including those relating to the AI services and products.
As with EU GDPR, the U.K. data protection framework will capture an AI company that processes data in its provision of services to a customer.