The European Union recently passed a sweeping law regulating corporations and business leaders with respect to artificial intelligence (AI). The first legislation of its kind, the EU Artificial Intelligence Act seeks to impose legal and ethical standards on companies that develop and use AI. While the Act does not impose legal obligations on organizations that operate exclusively in the United States, it will serve as a harbinger and potential model for AI restrictions that may one day be passed by the U.S. Congress and state legislatures.
A short summary of the Act’s key provisions follows.
Categorization of AI According to Risk Levels
The Act, which is intended to be a consumer protection law, imposes different levels of regulation based on the risk of the AI product at issue. These levels are:
- Unacceptable-risk AI systems, which engage in social scoring, infer users' emotions, deploy manipulative or deceptive techniques, assess the risk of an individual committing a crime, or collect users' real-time biometric data;
- High-risk AI systems, which profile an individual's work performance, economic situation, health, movements, or decision-making patterns without human review; and
- General-purpose AI systems, which are AI models trained on large amounts of data using self-supervision, such as ChatGPT.
The Act prohibits unacceptable-risk AI systems outright.
High-risk AI systems are permitted, but their providers must: (a) establish a risk management system that operates throughout the AI product’s lifecycle; (b) implement data governance to ensure that the training, validation, and testing datasets for the product are relevant and free of errors; (c) develop technical documentation for the product and provide it to the relevant authorities; (d) design the AI product to keep automatic records of events throughout its lifecycle; (e) provide instructions for use; (f) design the AI system to allow for human oversight; (g) ensure the system achieves appropriate levels of accuracy, robustness, and cybersecurity; and (h) establish a quality management system.
Finally, providers of general-purpose AI systems must: (a) draw up technical documentation, including the model’s training and testing processes and evaluation results; (b) draw up information and documentation to supply to downstream users; (c) publish a sufficiently detailed summary of the content used to train the model; and (d) establish a policy to comply with EU copyright law.
Effective Dates for the EU AI Act
The Act will take effect in stages: its prohibitions on unacceptable-risk AI systems apply six months after the Act enters into force, its general-purpose AI obligations apply after twelve months, and most of its remaining provisions apply after twenty-four months.
What Does This Mean for U.S. Organizations?
At least seven U.S. state legislatures are likely to pass legislation regulating AI usage this year, and President Joe Biden signed an Executive Order on the safe, secure, and trustworthy development of AI in October 2023. U.S. organizations therefore will likely face new AI compliance obligations in 2025, if not 2024. Federal agencies, such as the Equal Employment Opportunity Commission, almost certainly will issue further AI regulations as well.
Any U.S. organization deciding whether, and how, to incorporate AI products and practices into any aspect of its operations should first consult competent counsel about whether the product raises compliance obligations, how to meet those obligations, and how to anticipate future legislation.