Addressing Challenges of Deepfakes & AI-Generated Evidence

Sherman & Howard L.L.C.

The Federal Rules of Evidence (“FRE”) currently provide a framework for authenticating evidence in court, but rapid advancements in artificial intelligence (AI) have raised concerns about whether these rules are sufficient to address emerging technologies like deepfakes and other AI-generated content. Two proposals, an amendment to FRE 901(b)(9) and a new Rule 901(c), seek to adapt the rules to the challenges posed by AI-generated evidence.

Current Rule & the Proposed Changes

Under FRE 901, evidence must be authenticated before it can be admitted, which typically involves showing that the evidence is what its proponent claims it to be. FRE 901(b)(9) specifically deals with “processes” or “systems” and requires evidence that the process or system produces an accurate result. However, with the rise of AI-generated evidence, particularly deepfakes, the traditional methods of authentication may fall short. The Merriam-Webster Dictionary defines a “deepfake” as “an image, or a video or audio recording, that has been edited using an algorithm to replace the person in the original with someone else (especially a public figure) in a way that makes it look authentic.” The ability of deepfakes to mimic real people and events so realistically has raised concerns that juries may struggle to distinguish between authentic and fabricated evidence.

At the April 2024 Advisory Committee on Evidence Rules meeting, a panel of experts discussed proposed amendments to address these concerns. One of the key proposals, led by former Judge Paul W. Grimm and Dr. Maura Grossman, recommended updating FRE 901(b)(9) to require not only accuracy but also “validity” and “reliability” when it comes to AI-generated evidence. The proposed amendment to Rule 901(b)(9) would read:

(9) Evidence about a Process or System. For an item generated by a process or system:

(A) evidence describing it and showing that it produces a valid and reliable result; and

(B) if the proponent concedes that the item was generated by artificial intelligence, additional evidence that:

(i) describes the software or program that was used; and

(ii) shows that it produced valid and reliable results in this instance.

This change would align the rule more closely with the standards for scientific evidence under Daubert v. Merrell Dow Pharmaceuticals, Inc., which emphasizes both validity (whether the tool, technique, or methodology measures what it claims to) and reliability (whether the tool, technique, or methodology consistently produces the same result).

Additionally, Judge Grimm and Dr. Grossman introduced a new Rule 901(c) to specifically address the challenges posed by deepfakes. This rule would impose a higher standard for admitting potentially fabricated electronic evidence. If the party challenging the authenticity of the evidence shows that it is more likely than not fabricated or altered, the proponent could secure admission only by demonstrating that the evidence’s probative value outweighs its prejudicial effect on the challenging party. The proposed Rule 901(c) would state:

901(c): Potentially Fabricated or Altered Electronic Evidence. If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated, or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.

This change would place a higher burden on those presenting such evidence, helping to safeguard against the risk of fabricated evidence misleading or unfairly influencing juries.

Why the Changes Were Proposed

The driving force behind these proposed changes is the concern that the existing rules may be inadequate for handling the unique issues posed by AI and machine learning. Unlike traditionally fabricated evidence, deepfakes are difficult to detect, making it easier to pass off manufactured content as real. Furthermore, the low threshold for authenticity under FRE 901(a), which requires only “evidence sufficient to support a finding” that the item is what the proponent claims, might allow deepfakes to be admitted without meaningful scrutiny.

The committee recognized that as AI technologies evolve, they will be used to create evidence for both legitimate and illegitimate purposes. Therefore, it is essential to update the rules to ensure that AI-generated evidence meets higher standards of reliability and authenticity before being presented in court.

What’s Next

The committee has not formally adopted these proposed changes, and discussions are ongoing about whether they should be applied only to AI-generated evidence or more broadly to other forms of digital content. The proposal to modify Rule 901(b)(9) and introduce 901(c) represents a proactive approach to addressing the potential misuse of AI and deepfakes in the courtroom. It will be interesting to see how the legal industry evolves to address the rapid advancements in AI, particularly as courts and practitioners adapt to new challenges in evidence authentication.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Sherman & Howard L.L.C.
