Applying today’s legal ethics to today’s AI (part 1)

Generative AI is powerful—but lawyers must use it responsibly to reap its benefits

Today’s generative AI stands to have a significant impact across industries, including the legal profession, where it has the potential to improve the practice of law and the quality and accessibility of legal services for clients. But these gains can only be realized through ethical use of the right kind of AI—generative AI that’s tailored to the practice of law.

In part 1 of this series, we distinguish the different types of generative AI that have become available in the last year, including specific-use AI made for legal practitioners. Part 2 explores how existing professional responsibility rules—such as those governing duties of competence, diligence, communication, and candor, among others—apply to AI, and offers guidance on how to use generative AI in a way that meets these obligations. 

Understanding today’s generative AI

Generative AI is built on large language models (LLMs), a type of AI that can recognize and generate text. While LLMs have existed for decades, it’s only in the last five years that they’ve advanced by leaps and bounds, culminating in the latest generation of generative AI and the release of OpenAI’s GPT-4 in early 2023.

GPT technology—which stands for “Generative Pretrained Transformer”—is much more sophisticated than prior LLMs because it can “generate” unique, novel, and human-like content. It’s also pretrained on massive datasets, enabling it to handle language at a sophisticated level. The last part, “transformer,” refers to the neural network architecture, which learns faster and with less computation than earlier AI, allowing it to produce higher-quality output more quickly.

In the last year, these advancements in LLMs have been applied in several different ways, which can be categorized into three types of generative AI tools.

Generative AI type 1: General-use AI

The first category is general-use AI, such as OpenAI’s ChatGPT and Google’s Bard. General-use AI is helpful for subjective tasks where there isn’t a single “right” answer. An example would be using ChatGPT to write a letter with the appropriate tone. But to quote OpenAI CEO Sam Altman, it’d be a mistake to rely on general-use AI for “anything important.”

That’s because these tools may produce inaccurate information. ChatGPT and Bard even carry disclaimers stating that they can make mistakes and that users should check any output before using it.

Practicing law involves important tasks. Whether you’re advising a client on case strategy or researching a legal issue, the results must be accurate, which means relying on general-use AI in the practice of law isn’t a responsible use of the technology. 

Generative AI type 2: General-use AI with search

The second category consists of tools that combine an LLM with a source of information, giving the AI search capabilities. An example of general-use AI with search is Microsoft’s Bing Chat. This technology is valuable for initial searches and generating ideas, and is analogous to starting with Wikipedia, where you can quickly identify and follow sources on a particular topic. But just as you wouldn’t rely on Wikipedia for scholarly papers or professional publications, these applications aren’t advisable for situations where you need to show your work and cite to reliable, verified sources.

Generative AI type 3: Specific-use AI

The last category, specific-use AI, consists of an LLM and a reliable source of information paired with careful domain engineering. Specific-use refers to the use of AI in specific business applications and operations (e.g., the practice of law or medicine). Casetext’s CoCounsel is an example of specific-use AI—it’s built on an LLM (GPT-4), connects to a database of up-to-date, verified state and federal case law, statutes, and regulations, and is engineered to perform substantive legal tasks, such as preparing legal research memoranda, reviewing documents, and analyzing and redlining contracts.

Ethical, responsible application of generative AI in the legal profession is predicated on using specific-use AI tailored to the practice of law. Specific-use AI is generative AI that:

  1. Reads domain-specific content. In specific-use AI applications, the LLM accesses a reliable and current source of information, and is able to read and understand data at exceptional speed. Recall that general-use AI doesn’t access a current or verified data source. For example, ChatGPT has a knowledge cutoff and was trained on internet data, which is filled with unverified or even false information. CoCounsel, on the other hand, accesses a primary law database that is not only updated daily but filled with real (verified) law. As a result, CoCounsel can read primary law to complete legal research; read documents to answer specific questions; search databases (such as internal brief or contract banks and e-discovery) to find specific documents or information; and read contracts to assess compliance with policies.
  2. Provides unique, refinable, and verifiable responses. Specific-use AI does more than just search for an answer to a question—it takes into account the specific language used to ask the question, so your searches aren’t limited to exact terms (as with Boolean or keyword searches). CoCounsel understands your language the way a human would and generates unique responses. You can also iterate on and refine CoCounsel’s results when performing legal tasks, the same way you would with a human legal assistant. For example, you can ask CoCounsel follow-up questions on legal research, or it can suggest redlines you can immediately approve and incorporate. CoCounsel’s output is also verifiable: its responses link to the source text of real case law, statutes, and regulations, so you can verify its work. It even provides pincite links to the exact page of a relevant document when reviewing documents.
  3. Is private and secure. Specific-use AI like CoCounsel has a private, dedicated connection to LLMs, as opposed to a public API, and it never uses your data to train the underlying AI model. For example, a user’s interactions with CoCounsel remain private and are never used to train GPT-4. Additionally, the AI should have a zero-retention policy on data shared by users, meaning the data is processed only for the user’s purpose and is never stored or shared.

  4. Is developed responsibly. AI needs to be not only private and secure but also developed responsibly for ethical use. CoCounsel was developed by a team of engineers, machine learning specialists, and attorneys with several years of experience applying LLMs to the law. Additionally, our Trust Team of attorneys tested the AI for more than 4,000 hours to ensure the reliability of its output. The AI was then tested by hundreds of attorneys as part of a beta program that included more than 40 legal organizations.

When considering using today’s AI in legal practice, lawyers should choose responsibly developed, specific-use AI that can reliably perform legal tasks, cite accurate, up-to-date legal sources, and keep data safe and secure. By choosing the right AI, lawyers mitigate the risk of running afoul of their ethical duties.

Stay tuned for our next post in this series, where we delve into how lawyers can ethically use specific-use AI in their practice. 

Written by:

Casetext
