Artificial Intelligence has the potential to be the next transformational technology, and as adoption of AI-powered tools continues to increase, deal activity in the AI space will follow. Regulators and lawmakers are actively developing AI regulation, taking varying approaches, from the more permissive, sectoral approach emerging in the UK and US to the more restrictive stance developing in the EU. In this evolving legal landscape, sponsors should take a contextual and risk-based approach, focusing on how the target actually uses AI in its operations.
A contextual approach is vital
Among other concerns, uncontrolled use of AI may give rise to third-party IP infringement, breach of confidence, misuse of trade secrets, violations of data protection laws, and product liability and tortious liability claims, as well as an inability to validate proprietary code bases. Early on, deal teams should engage closely with a target's management and technical leads to ensure the organisation is alive to AI-specific risk in the context of IP. In conducting IP diligence on the AI technology itself, sponsors should consider three main elements of the AI lifecycle:
- Inputs: Sponsors should assess whether the target has all the necessary IP and contractual rights to use the data on which the relevant AI model was trained. Since the IP treatment of AI training may differ by jurisdiction, the country in which model training occurs will be a material consideration. Deal teams should also be aware that under the incoming EU AI Act (expected to enter into force in late 2023/early 2024), developers of generative AI systems may, depending on the final legislation, be required to publicly disclose their use of copyright-protected materials to train their AI, giving IP holders the ability to identify when their protected content has been mined without permission.
- AI models: Ownership and mode of protection are key concerns. Sponsors should also consider any in-licensed third-party IP used in the AI model and any contractual terms governing its use, including in relation to open-source software, which may impose downstream contractual obligations or restrictions. Notably, under the EU’s new, comprehensive AI Act, certain AI models deemed particularly harmful will be prohibited in the trading bloc.
- Outputs: Sponsors should seek to understand how the AI model was used to generate the outputs (i.e., was it used merely as a tool, or did it generate outputs with minimal human intervention?), whether IP subsists in those outputs in the relevant jurisdiction (e.g., in some countries IP cannot subsist without a human creator), and how any such IP (if any) is protected. Deal teams should be mindful that both AI model training and AI outputs may infringe third-party IP, an area of ongoing litigation.
Adapting for the future
Sponsors should adapt their diligence and risk evaluation strategies to reflect the specific IP risks raised by the target's AI use and strategy. Those able to flex their diligence and AI strategies will be better positioned to protect the value of core AI and data assets, defend business models reliant on AI, and mitigate the risk of potentially significant fines and reputational damage arising from emerging AI laws or third-party claims.