On April 8, 2019, the European Commission's High-Level Expert Group on Artificial Intelligence released the final version of its Ethics Guidelines for Trustworthy AI (the "Guidelines").
This is the first significant guidance on Artificial Intelligence issued in Europe, and it follows an extensive public comment process. While the Guidelines are not binding law, their creation (including an AI assessment pilot) is a significant step toward potential direct regulation of the implementation of AI.
The Guidelines include:
- A framework for "Trustworthy AI," underpinned by the principles that such Artificial Intelligence be lawful, ethical, and robust;
- A discussion of a rights-based approach to AI;
- A review of the requirements for AI to qualify as "Trustworthy AI," namely (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) societal and environmental wellbeing, and (7) accountability; and
- An assessment list on Trustworthy AI, intended to assist AI users and creators in the creation and implementation of Trustworthy AI systems.
As Artificial Intelligence technologies continue to be created and implemented, and as the legal framework continues to evolve, new and unique issues will arise at the intersection of the two.
Because many of the current legal issues regarding AI on which we advise stem from privacy laws, we intend to write more about those issues on this blog, including further analysis of the Guidelines.
To review the Guidelines: EU AI Guidelines