Opening the Black Box of Generative AI: Explainability in Bankruptcy Cases

US courts are issuing guidelines to ensure litigators disclose any use of generative AI in legal proceedings.

By now, most of us have heard a story about the misuse of generative AI in the practice of law: the attorney who cited a case that didn’t exist,[1] or the expert witness who relied on generative AI to create an expert report without independently verifying its contents.[2]

While these professionals’ willingness to be early adopters of new technology is admirable, both missed a critical step: supplementing artificial intelligence with human intelligence. Generative AI can significantly increase human output and efficiency, but it is critically important that appropriate guardrails ensure that the technology, when used, is used responsibly, and that the system generating the output is not an unknown or unknowable “black box” but is instead “explainable” and capable of forming the basis for admissible evidence in a legal proceeding.[3]

Evidence Requires Knowledge

Rule 702 of the Federal Rules of Evidence, made applicable to bankruptcy proceedings by Rule 9017 of the Federal Rules of Bankruptcy Procedure, provides that expert testimony may be given by a qualified witness if “(a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue; (b) the testimony is based on sufficient facts or data; (c) the testimony is the product of reliable principles and methods; and (d) the expert’s opinion reflects a reliable application of the principles and methods to the facts of the case.”[4]

An expert report, when properly submitted, reflects the expert’s knowledge, experience, expertise, and methods. It embodies their testimony and conveys technical and detailed explanations to the finder of fact. Under Rule 26(a)(2) of the Federal Rules of Civil Procedure, experts are required to disclose “the facts or data considered by the witness in forming” their conclusions in the report.[5] As the Supreme Court has explained, expert testimony admissible under Rule 702 “rests on a reliable foundation and is relevant to the task at hand.”[6] The testimony must be reliable, which requires the court to perform “a preliminary assessment of whether the reasoning or methodology underlying the testimony is scientifically valid and whether that reasoning or methodology properly can be applied to the facts in issue.”[7] If an expert’s process and procedures cannot be replicated because the expert’s data and methodologies are not reliable or tested, admissibility may be an issue. It follows that, if an expert relies on generative AI in the preparation of an expert report, there are significant questions as to whether that expert can withstand a Daubert challenge.[8]

A Daubert challenge to generative AI appeared in the Chapter 11 cases of In re Celsius Network LLC. In Celsius, there was competing testimony regarding the appropriate pricing of a crypto token called CEL. An expert, in support of one of the parties, submitted a 172-page expert report that was created not by the expert himself but instead by generative AI.

After finding that the report was not based on sufficient facts or data, that the expert did not review the underlying source material for any sources cited, and that “there were no standards controlling the operation of the artificial intelligence that generated the Report,” the bankruptcy court held that the report was “not the product of reliable or peer-reviewed principles and methods”[9] and refused to admit the report into evidence because it failed to meet the standards for admission under Rule 702. The decision in Celsius was an easy one because the expert admitted that he did not perform any independent review or verification of the AI-generated product and that he had no knowledge of the generative AI’s sources and uses. Also, the report itself was riddled with clear factual errors and redundancies.

Evolving Judicial Guardrails

Unsurprisingly, faced with issues such as those present in Celsius, federal and state courts around the country have started implementing rules and procedures to ensure that litigants disclose AI-generated content in their submissions to the court.[10] The rules and procedures vary and continue to evolve, but the overall focus has been on disclosing when generative AI was used to prepare a filing and certifying that the generated content was reviewed for accuracy.

Bankruptcy courts are no exception. In fact, just recently, in the Bankruptcy Court for the Northern District of Texas, Judge Stacey G. C. Jernigan issued a standing order requiring all parties, whether they use generative AI or not, to file a certification attesting either that no filings will be drafted with generative AI or that any content drafted with generative AI will be checked for accuracy. Similarly, in the Bankruptcy Court for the Western District of Oklahoma, Judges Sarah A. Hall and Janice D. Loyd issued a standing order requiring all parties to disclose the use of generative AI, identify the specific tool, identify the portions of the text drafted by generative AI, certify that the submission was checked for accuracy, and certify that the use of generative AI did not result in the disclosure of confidential information.

While the focus of these standing orders to date is on disclosure of use, we fully expect that courts will also enact rules regarding the admissibility of documents created with generative AI, aimed primarily at the explainability of the relevant outputs. For generative AI-created content to be used as “evidence” in a bankruptcy court proceeding, the expert submitting the content will need to be able to explain the system that was used in its creation; how and with what inputs that system was trained; whether the system is open or closed; how the system reached its particular decision, recommendation, or prediction; and why the training and the system are reliable and trustworthy. We expect that the emphasis on testimony regarding system explainability will spawn a cottage industry to support that need — at least until generative AI becomes so ingrained in our everyday proceedings that it is inherently reliable and trustworthy.

Unlocking the Box

How, one might ask, can we leverage the benefits of generative AI while avoiding the missteps in Celsius? The key lies in two things: explainability and human oversight. As generative AI becomes woven into the fabric of daily life, practitioners will have to work to ensure that the systems are explainable (i.e., closed systems with defined source training materials), that the use is responsible (human oversight is critical), that the information is tested and verified, and that the individual(s) presenting the outputs in legal proceedings can meet their personal knowledge burden. Similar to “know your customer” diligence, practitioners will need to “know their AI” — at least well enough to convince a court that it can be used reliably.


[1] Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 451 (S.D.N.Y. 2023).

[2] In re Celsius Network LLC, 655 B.R. 301 (Bankr. S.D.N.Y. 2023).

[3] Liz Grennan, Andreas Kremer, Alex Singla & Peter Zipparo, Why businesses need explainable AI—and how to deliver it, McKinsey & Co. (last visited Jul. 15, 2024) (“Explainability is the capacity to express why an AI system reached a particular decision, recommendation, or prediction.”).

[4] FED. R. EVID. 702 (emphasis added).

[5] FED. R. CIV. P. 26(a)(2)(B).

[6] Daubert v. Merrell Dow Pharm., Inc., 509 U.S. 579, 597 (1993).

[7] Id. at 592-93.

[8] A Daubert challenge is a motion raised by the opposing party before or during a trial to exclude or limit an expert’s testimony. During a Daubert challenge, the expert must prove that their methodology and reasoning are scientifically valid and applicable to the facts of the case. This process aims to ensure that only reliable expert testimony is admitted in court proceedings.

[9] Celsius, 655 B.R. at 308.

[10] For example, in the Central District of California, Judge Rozella A. Oliver’s standing order requires parties using generative AI to “generate any portion of a brief, pleading, or other filing” to submit a separate declaration disclosing the use of generative AI and certifying the review and accuracy of any AI-generated content. Judge Rolando Olvera of the Southern District of Texas recently amended his local rules to require all filers appearing before him to submit, together with their proposed scheduling order, a certificate attesting either that no portion of any filing will be drafted by generative AI or that any language drafted by generative AI will be checked for accuracy using traditional legal databases. Judge Arun Subramanian of the Southern District of New York updated his individual practice guidelines to require counsel to personally confirm the accuracy of any research conducted by generative AI tools while warning that counsel “bears responsibility for any filings made by the party that counsel represents.” Other judges around the country have issued similar standing orders or amended their individual practice rules to adopt guidelines for use of generative AI in their courtrooms.


