Taming the AI Beast: Judges Set Rules to Control Use of Generative AI in Their Courts

Mitchell, Williams, Selig, Gates & Woodyard, P.L.L.C.

The legal profession has increasingly witnessed the rise of artificial intelligence (AI) technologies, particularly generative AI, which has shown immense potential in various areas of legal practice. From legal research to drafting, generative AI is a promising tool. However, its use in litigation requires careful consideration and oversight. This post explains how judges have stepped in to put safeguards on the use of generative AI in their courtrooms, an area that continues to evolve and that litigators will need to follow closely.

But first, what is generative AI? Generative AI is a subcategory of deep learning that leverages machine learning algorithms to produce new and original content, such as text, images, videos, and audio. By analyzing vast amounts of data, generative AI models can generate highly realistic and relevant responses. This ability makes it an enticing tool for legal professionals.
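To make the concept concrete, here is a minimal sketch of prompting a generative AI model for a draft, written in Python and assuming the OpenAI client library; the model name and prompt are illustrative only, not a recommendation:

    # Minimal sketch: asking a generative AI model to produce a draft.
    # Assumes the OpenAI Python client; model and prompt are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a legal research assistant."},
            {"role": "user", "content": "Draft a short summary of the Rule 11 standard."},
        ],
    )

    print(response.choices[0].message.content)  # a draft only; a human must verify it

That last comment is the crux of what follows: the model's output reads as authoritative whether or not it is accurate.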

AI gone awry in litigation. One primary concern raised by legal experts is the potential for generative AI to produce inaccurate or even fabricated information, sometimes called "hallucinations." Mata v. Avianca serves as a cautionary tale: a brief submitted by Mata's lawyers was found to cite fictitious judicial decisions.[1] The case highlights the need to verify AI output against traditional legal databases, or to seek expert human review, before relying on it.

Courts set rules governing generative AI use in litigation. In the past two weeks, judges have begun stepping in to tame the AI beast, imposing first-of-their-kind restrictions on the use of generative AI in their courts. One federal judge in Texas recently implemented a new rule addressing the use of AI in legal briefs.[2] This directive, the "Mandatory Certification Regarding Generative Artificial Intelligence" rule, requires every attorney appearing before the court to file a certificate on the docket affirming either that no part of the filing was generated by AI tools such as ChatGPT, Harvey.AI, or Google Bard, or that any AI-generated content has been verified for accuracy by a human using traditional legal databases or print reporters. A federal judge in Illinois similarly entered a standing order requiring parties who use generative AI tools in document preparation to disclose that use in their filings.[3] The disclosure must identify the specific AI tool employed and how it was used. The judge further cautioned that reliance on an AI tool may not constitute a reasonable inquiry under Federal Rule of Civil Procedure 11.

Emerging trends in use of AI in litigation. The standing orders from individual judges in Texas and Illinois may not directly affect your practice if you do not appear before those judges, but don't disregard them too quickly. Other judges are likely to follow suit, some districts may amend their local rules to extend such requirements beyond individual judges, and changes to the rules of civil procedure to account for this explosive new technology are a real possibility. More broadly, by explicitly invoking Rule 11, these orders are a warning to all litigators: verify and cross-verify the output of generative AI. Cross-reference any AI-produced material against established legal databases to confirm its accuracy and reliability, and seek expert human review for further validation. Taking these steps improves the quality and credibility of AI-generated material in legal proceedings; a simple illustration of the verification step follows.
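As a purely illustrative sketch of that verification workflow, the snippet below flags any AI-generated citation that a human has not independently confirmed; all case names and the verified set are hypothetical, and no real legal-database API is implied:

    # Illustrative sketch: flag AI-generated citations not yet confirmed by a
    # human in a traditional legal database. All data here is hypothetical.

    ai_generated_citations = [
        "Real v. Case, 123 F.3d 456 (8th Cir. 1999)",
        "Fabricated v. Authority, 999 F.9d 999 (13th Cir. 2025)",  # no such reporter or circuit
    ]

    # Citations a human has independently pulled up and read (hypothetical set).
    human_verified = {"Real v. Case, 123 F.3d 456 (8th Cir. 1999)"}

    for citation in ai_generated_citations:
        if citation not in human_verified:
            print(f"NOT VERIFIED -- resolve before signing the filing: {citation}")

The design point is that verification lives outside the AI tool: the trusted set is built by a human from traditional sources, so a hallucinated citation can never vouch for itself.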

Conclusion. Generative AI has swept into the profession quickly, and we are only beginning to see controls put in place by judges, courts, and governing bodies. Learning from these recent safeguards in Texas and Illinois is prudent, as is staying alert to emerging orders, rules, and restrictions.


[1] See Mata v. Avianca, Inc., No. 22-cv-1461 (Doc. 31) (S.D.N.Y. May 4, 2023) (issuing rule to show cause where “[a] submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to nonexistent cases.”).

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Mitchell, Williams, Selig, Gates & Woodyard, P.L.L.C. | Attorney Advertising
