AI in eDiscovery: A Law Firm's Guide to Assessing Efficacy

Over the past two years, law firm attorneys have been bombarded with AI hype—especially in eDiscovery. As the legal industry gets pulled into the AI wave, marketing gets swept along with it, often promising unrealistic results or conflating human-driven automation with AI.

While it can be tempting to tune out all the AI noise and stick to your traditional eDiscovery workflows, attorneys have an ethical obligation to understand how modern advancements in AI can help their clients. And modern AI advancements like large language models (LLMs) genuinely can uplevel eDiscovery workflows.

In short, the right AI—when trained within the proper parameters and applied to the right use case—can help law firms drive more value for their corporate clients and gain an advantage over less tech-savvy competitors.

So how can you move beyond the AI marketing buzz and effectively evaluate AI in eDiscovery? It starts with a simple question: Can this AI technology actually enhance my legal practice?

This is the critical question any attorney should be asking when seeking out AI integration or adoption. AI is only as valuable as the benefits it provides to your firm and its clients. Therefore, it’s crucial to consider any potential AI solution’s efficacy—its ability to deliver the expected benefits—before integrating it into eDiscovery workflows.

Let’s explore what efficacy means in general, and then examine its implications for the two types of modern AI (i.e., AI built with LLMs) in the context of outside counsel working on eDiscovery matters.

AI effectiveness is situational

When looking to measure efficacy, it's important to remember that not all technology marketed as “AI” is created equal. You’ll need to understand the technology behind modern AI, and how well that technology can perform the specific legal task at hand.

To do so, it’s helpful to break the evaluation process down into metrics that are the most relevant to your law firm’s practice. For most outside counsel working on eDiscovery matters, key metrics to consider include quality, speed, and ROI—factors that directly impact both your firm's bottom line and client satisfaction.

The two types of AI that use LLMs in eDiscovery are predictive AI and generative AI (for a detailed breakdown, see our previous article on LLMs and the types of AI). Because they perform different functions, they impact quality, speed, and ROI in different ways for law firms.

How outside counsel can measure the efficacy of predictive AI

In an eDiscovery setting, predictive AI assesses the probability of a document falling into a specified classification (e.g., whether a document is responsive or privileged). It usually works like this:

  1. Your attorneys review and code a representative sample of case documents.
  2. This coded set is used to train the AI model, effectively instructing it on the specific criteria for privilege or responsiveness in your matter.
  3. Once trained, the AI classifier evaluates the remaining documents, assigning each a probability score. A higher score indicates a greater likelihood of the document being privileged or responsive.
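For readers who want to see the scoring step in concrete terms, the three-step workflow above can be sketched in a few lines of Python. This is an illustration only: the toy keyword scorer below stands in for a real LLM-backed classifier, and all names and terms are hypothetical.

```python
# Toy sketch of the predictive AI scoring step (illustration only).
# A real system would train an LLM-backed model on the attorney-coded
# sample; this trivial keyword scorer merely stands in for that model.

PRIV_TERMS = {"attorney", "counsel", "legal advice", "privileged"}

def score_privilege(text: str) -> float:
    """Return a rough 0-1 likelihood that a document is privileged."""
    text = text.lower()
    hits = sum(term in text for term in PRIV_TERMS)
    return min(hits / len(PRIV_TERMS), 1.0)

documents = [
    "Please see counsel's legal advice on the merger.",
    "Lunch order for Friday.",
]
scores = [score_privilege(doc) for doc in documents]
```

The key idea is the output shape, not the scorer itself: every remaining document gets a probability-style score that downstream review workflows can sort and filter on.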

The initial AI training phase is an essential component in the efficacy equation for law firms as it affects the accuracy of the outcomes. While it demands upfront attorney review time, the investment enables the AI to dramatically reduce the need for eyes-on review later.

The edge this gives outside counsel is clearest in large litigation with high data volumes (multidistrict litigation (MDLs), HSR Second Requests, etc.). Having attorneys review 4,000 documents during the training period is more than worth it when the AI then removes 500,000 to 1 million documents from privilege review, allowing you to allocate your firm's resources more efficiently.

With that in mind, here's how you could measure the efficacy of a predictive AI privilege classifier in your firm:

Quality: Does the predictive AI tool make accurate predictions?

LLM-backed AI privilege classifiers can be very effective at identifying privilege, including catching documents that other methods miss. In one real-life matter, a classifier found 1,600 privileged docs that weren't caught by search terms. Without the classifier, outside counsel would have faced painful inadvertent disclosures and clawbacks—potentially damaging both the case and the firm's reputation.

Speed: Will the predictive AI help your attorneys complete tasks faster?

LLM-backed AI can accelerate document review in multiple ways. Some outside counsel teams use the percentages assigned by their AI privilege classifier to prioritize review, starting with the most likely docs and working down in descending order. Others use the scores to cull the review population, reviewing only docs that meet a certain likelihood threshold.

One of the most effective approaches we’ve seen used by outside counsel is combining both methods. For first-level review, the law firm prioritizes docs that score in the middle. Docs with extremely high or low percentages are culled: The most likely docs go straight to second-level review, while the least likely docs go straight to production. By using this method during a high-stakes Second Request, the law firm was able to remove 200,000 documents from privilege review, significantly reducing the number of hours spent on simple, low-level review tasks, while maintaining quality.
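The combined approach described above reduces to simple threshold logic. As a minimal sketch (the cutoff values below are hypothetical placeholders; a case team would set and validate real thresholds for each matter):

```python
def triage(scored_docs, low=0.10, high=0.90):
    """Split (doc_id, score) pairs into three review buckets.

    Thresholds are illustrative, not recommendations: docs below `low`
    go straight to production, docs above `high` go straight to
    second-level review, and the middle band gets first-level review,
    highest-scoring docs first.
    """
    production = [d for d, s in scored_docs if s < low]
    second_level = [d for d, s in scored_docs if s > high]
    middle = sorted((p for p in scored_docs if low <= p[1] <= high),
                    key=lambda p: p[1], reverse=True)
    first_level = [d for d, _ in middle]
    return production, first_level, second_level

docs = [("doc1", 0.97), ("doc2", 0.55), ("doc3", 0.02), ("doc4", 0.70)]
production, first_level, second_level = triage(docs)
```

In this sketch, only the middle band consumes first-level reviewer hours, which is exactly how the Second Request team was able to carve 200,000 documents out of privilege review.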

ROI: Will the predictive AI increase the value-add of your firm?

Improving overall speed and quality can also improve the value your firm provides to its clients. During the Second Request mentioned above, outside counsel saved 8,000 hours of attorney time and more than $1M on privilege review. Freeing up those hours and budget enables outside counsel to focus on strategic work and enhances client relationships by demonstrating your firm’s commitment to efficiency and providing high-value legal work.

How outside counsel can measure the efficacy of generative AI

Generative AI (gen AI) can respond to questions or create condensed summaries of documents. Its applications for outside counsel—and its effectiveness—vary significantly across different use cases in eDiscovery.

For our first gen AI solution, the goal was to focus development on a use case where efficacy is straightforward: privilege logs. We chose privilege log generation because we wouldn’t have to give our gen AI open-ended questions or a sprawling canvas. We wanted to ask it to draft something very specific, for a specific legal purpose, that would be valuable to outside counsel and their corporate clients. That made the quality and value of its output easy to measure within a legal practice context.

Using gen AI to draft privilege log content is another case where AI's performance is tied to a training period, making its efficacy more significant in larger matters. After expert analysts train the AI on a few hundred privilege logs (the purpose of this training is to bake corporate and outside counsel perspective and feedback on privilege into the model), the model can generate tens of thousands of accurate privilege entries in a day.

Using our gen AI privilege log example, here's how you might measure efficacy for gen AI within your own practice:

Quality: Does the gen AI tool faithfully generate what you're asking it to?

The quality of gen AI outputs is not as straightforward to measure as that of predictive AI (as discussed in an earlier blog post about AI and accuracy in eDiscovery). Depending on the prompt or situation, gen AI can do what you ask it to without sticking to the facts—a potentially dangerous situation for your firm's reputation and your own legal practice.

For gen AI to deliver on quality and defensibility, you need a use case that affords:

  • Control—AI analytics experts should be deeply involved, writing prompts and setting boundaries for the AI-generated content to ensure it fits the legal problem you're solving. Control is critical to drive quality and maintain ethical standards.
  • Validation—Your firm’s attorneys should review and be able to easily edit all content generated by AI. Validation is critical to measure quality and ensure compliance with legal and ethical obligations.

Our gen AI privilege log solution meets these criteria. AI experts guide the AI as it generates content, and outside counsel attorneys approve or edit the log it generates, maintaining the necessary level of human oversight.

As a result, the solution reliably hits the efficacy mark for our law firm clients. In fact, in a head-to-head test, one outside counsel team rated our AI-generated log lines better than log lines by first-level contract attorneys, effectively improving the quality of work product that the law firm provided, while reducing risk for their client.

Speed: Will the gen AI tool help your attorneys complete tasks faster?

As most attorneys are aware, AI-generated content should be treated as a first draft and handled accordingly. Just as you wouldn't submit a draft brief written by a summer associate without reviewing and fact-checking it, you shouldn't submit gen AI content without review.

But AI generates content a lot faster than even the fastest attorney. And reviewing and editing a draft is a lot faster than writing from scratch (as long as that content is accurate—see our post on gen AI accuracy for a deeper dive into that topic).

This is how effective gen AI models, like our privilege log solution, can help outside counsel work smarter, enabling practice groups to reallocate attorney time to more complex legal tasks.

ROI: Will the gen AI tool increase the value-add of your firm?

Giving gen AI credit for driving direct value for a law firm can be hard with many use cases. If you use gen AI as a conversational search engine or case-strategy collaborator, how do you calculate its ROI in dollars and cents?

But with specific workflow tasks such as privilege logs, the financial ROI is easy to track: What does your firm spend on privilege logs with gen AI vs. without? Many firms have found that using our gen AI for the first draft is cheaper than using junior attorneys or contract reviewers, enabling the firms to provide more competitive client billing (driving more repeat business).
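That with-vs.-without comparison is just arithmetic once you plug in your own numbers. Every figure below is a made-up placeholder used purely to show the shape of the calculation, not a real rate, speed, or benchmark:

```python
def privlog_cost(entries, rate_per_hour, entries_per_hour):
    """Estimated drafting cost for a privilege log.

    All inputs are hypothetical: a firm would substitute its own
    billing rates and measured throughput for each workflow.
    """
    return entries / entries_per_hour * rate_per_hour

entries = 10_000  # hypothetical log size

# Fully manual drafting by contract reviewers (placeholder figures).
manual = privlog_cost(entries, rate_per_hour=200, entries_per_hour=20)

# Attorneys reviewing and editing AI-generated first drafts,
# assuming review is several times faster than drafting from scratch.
ai_assisted = privlog_cost(entries, rate_per_hour=200, entries_per_hour=80)

savings = manual - ai_assisted
```

The point is not the specific numbers but that a bounded workflow task like privilege logs yields a direct, auditable cost comparison in a way that open-ended gen AI uses do not.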

Where can AI be effective for your law firm?

This post started with one question—can this AI technology actually enhance my legal practice?—but you can't answer it without also asking "where."

Where would applying AI help your eDiscovery practice? Where would using AI be the most beneficial to your clients?

So much about efficacy depends on the use case. It determines which type of AI will deliver what your practice group needs. It dictates what to expect in terms of quality, speed, and ROI, including how easy it is to measure those benefits and whether you can expect much benefit at all.


Written by:

Lighthouse