[author: Doug Austin, Editor of eDiscovery Today]
The fifth and final installment of the virtual Legalweek(year) series concluded this week, after previous iterations in February (the traditional time of year for in-person Legalweek), March, April, and May. I attended a handful of sessions and had planned to give you a sampling of them, but the session I attended at the end of the conference on Wednesday was so good that I decided to cover it specifically.
The session was The Ethics of AI in The Legal Profession, conducted by Tess Blair of Morgan Lewis and Maura R. Grossman of the University of Waterloo and Maura Grossman Law. Grossman should be a familiar name to anyone who understands Technology Assisted Review (TAR): she and Gordon V. Cormack defined the term and published the groundbreaking study that demonstrated how TAR could be more efficient and effective for document review.
Blair and Grossman covered several aspects of the use of AI, a couple of which I will briefly recap here. They also provided some interesting graphics to illustrate concepts such as machine learning (including interspersed pictures of chihuahuas and blueberry muffins that look so similar it’s startling), natural language processing (NLP), and deep learning.
Advising Clients Developing or Using AI
Among the topics covered here was how crowdsourcing can introduce bias into AI algorithms; the example used was typing the phrase “lawyers are” into Google search, where the autocomplete suggestions included terms like “scum”, “liars”, “sharks”, “evil”, and “crooks”. Ouch!
There are three places where AI bias can come into play: 1) the data, as in the example of a facial recognition algorithm trained only on white faces to the point that it cannot handle black faces; 2) the algorithm, which can be tuned to weight things differently; and 3) humans, who may have algorithm aversion, automation bias (where the algorithm is assumed to be correct because it is not human), or confirmation bias (where they accept the results only if they confirm what they already believe). A simple sketch of the first source, skewed training data, appears below.
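To make the data point concrete, here is a minimal, hypothetical Python sketch (not from the session) of how a model trained on data dominated by one group can err far more often on another. All groups, numbers, and distributions are invented purely for illustration:

```python
# Hypothetical sketch: bias from skewed training data. We "train" a
# one-feature threshold classifier on data that is 95% Group A and
# 5% Group B, where Group B's feature values are shifted (think: a
# dataset or sensor calibrated on Group A). The single learned
# threshold then errs far more often on Group B.
import random
import statistics

random.seed(42)

def make_samples(n, neg_center, pos_center):
    """Generate (feature, label) pairs; label 1 is the positive class."""
    samples = [(random.gauss(neg_center, 1.0), 0) for _ in range(n // 2)]
    samples += [(random.gauss(pos_center, 1.0), 1) for _ in range(n // 2)]
    return samples

# Training mix is dominated by Group A; Group B is shifted by +2.
train = make_samples(1900, neg_center=0.0, pos_center=4.0)  # Group A
train += make_samples(100, neg_center=2.0, pos_center=6.0)  # Group B

# "Training" = placing the threshold midway between the class means,
# which is effectively tuned to the majority group.
neg_mean = statistics.mean(s for s, y in train if y == 0)
pos_mean = statistics.mean(s for s, y in train if y == 1)
threshold = (neg_mean + pos_mean) / 2

def error_rate(samples):
    """Fraction of samples the threshold classifier gets wrong."""
    wrong = sum(1 for s, y in samples if (s > threshold) != (y == 1))
    return wrong / len(samples)

test_a = make_samples(2000, neg_center=0.0, pos_center=4.0)
test_b = make_samples(2000, neg_center=2.0, pos_center=6.0)
print(f"learned threshold:   {threshold:.2f}")
print(f"error rate, Group A: {error_rate(test_a):.1%}")
print(f"error rate, Group B: {error_rate(test_b):.1%}")
```

Running it, the learned threshold sits where the majority group’s data puts it, so Group B’s error rate comes out several times higher, even though the model never “sees” group labels at all.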
Grossman also discussed the use of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, which tended to score black defendants as much more likely to reoffend than non-black defendants. One example was an 18-year-old black woman who was rated high risk (a score of 8) for future crime after she and a friend took a kid’s bike and scooter that were sitting outside, whereas a 41-year-old white man who had already been convicted of armed robbery and attempted armed robbery was rated low risk (according to this article by ProPublica).
Grossman pointed out that COMPAS experienced “function creep”: it was originally designed to provide insight into the types of treatment an offender might need (e.g., drug or mental health treatment), then expanded to decisions about conditions of release after arrest (e.g., release with no bail, bail, or detention without bail), before being expanded again to sentencing decisions.
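The disparity ProPublica described is easiest to see as an error-rate calculation. Here is a small, hypothetical Python sketch, using invented records rather than actual COMPAS data, showing how a risk tool can produce very different false positive rates for different groups:

```python
# Hypothetical illustration of the kind of disparity ProPublica
# reported: a risk tool can err in different directions for different
# groups. Each record is (risk_score, reoffended, group); scores of
# 5+ count as "high risk". The records are invented purely to
# demonstrate the calculation -- they are not COMPAS data.
RECORDS = [
    (8, False, "black"), (7, False, "black"), (9, True,  "black"),
    (6, False, "black"), (3, False, "black"), (8, True,  "black"),
    (3, False, "white"), (2, False, "white"), (7, True,  "white"),
    (4, True,  "white"), (3, True,  "white"), (2, False, "white"),
]

def false_positive_rate(group):
    """Share of a group's non-reoffenders who were labeled high risk."""
    non_reoffenders = [score for score, reoffended, g in RECORDS
                       if g == group and not reoffended]
    flagged = sum(1 for score in non_reoffenders if score >= 5)
    return flagged / len(non_reoffenders)

for group in ("black", "white"):
    print(f"{group}: false positive rate = {false_positive_rate(group):.0%}")
```

With these invented numbers, the tool flags 75% of black non-reoffenders as high risk versus none of the white non-reoffenders; ProPublica’s actual analysis found a similar, if less extreme, pattern.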
Blair added discussions regarding privacy and AI moral dilemmas, such as the case of an autonomous vehicle entering a tunnel with a child in the middle of the road and a decision to make: go straight ahead and kill the child, or veer into the wall and kill the passenger (yikes!). Those and other moral dilemmas can be found here at MIT’s Moral Machine site.
Resources for Practicing with AI
With regard to practicing law while using AI, the presenters discussed several sources of guidance on an attorney’s duty to understand AI, including:
- Rule 1.1: Duty of Competence (and the phrase in Comment 8 “a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology”)
- Rule 1.6: Duty of Confidentiality
- Rules 5.1 & 5.3: Supervision (of the Algorithm)
- Rule 5.5: Unauthorized Practice of Law, including Lola v. Skadden, Arps, Slate, Meagher & Flom LLP, where the Second Circuit found that tasks that could otherwise be performed entirely by a machine could not be said to fall under the practice of law. (Very few legal AI tasks currently fall into this category; even document review requires humans to train the AI algorithm.)
- ABA Resolution 112 issued August 2019, which states “the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.” The resolution also includes a 15-page report discussing the issues associated with AI.
Conclusion
Blair and Grossman concluded with a brief discussion on whether AI is going to take lawyer jobs (there may be fewer attorneys in the future, but they will be more focused on the types of tasks they were trained for in law school) and whether AI will ever become smarter than humans (Grossman stated that the technology “often still can’t do what a 3-year-old can do”).
So, “with great power comes great responsibility”.
But lawyers (or anybody using AI) need to do their part to understand the technology and the concepts (as well as the risks) to fully benefit from AI. When they do, they can accomplish amazing things!
Next year, Legalweek returns as an in-person event in New York City from January 31 to February 3! See you there!