How To Use AI And Keep Firm And Client Data Safe


You can get the benefits of AI without putting your information at risk if you know what to look for in a solution.

As covered in our second post in this series, attorneys should be extremely cautious when using ChatGPT or GPT-4 in practice for a number of reasons. Chief among them, these general-use AI tools “hallucinate,” making up plausible-sounding but false information in their responses.

With the right expertise, building a solution that uses the power of AI and is trustworthy enough for legal practice is possible, and such products are quickly becoming must-haves for attorneys. Equally important as an AI product’s reliability, though, is its ability to keep confidential firm and client information secure and private.

The need for stringent security continues to grow

Just how critical data privacy and security are to legal practice has been underscored by the spike in law firm data breaches over the last three years: since 2020, more than 750,000 Americans have had their personal information compromised as a result of cyberattacks on law firms. And while consumer-facing products powered by large language models (LLMs), such as ChatGPT, do offer some protection for users' data, that protection doesn't rise to the level of security and privacy required for high-stakes matters involving privileged and highly confidential information.

Creating AI solutions that meet the standards legal practitioners must adhere to requires comprehensive privacy and cybersecurity protocols, rigorous audits and testing, and deep domain expertise. Building a generative AI-powered solution that's both reliable and secure enough for legal professionals isn't easy, but it is possible; we've done it with CoCounsel.

Using generative AI on its own isn’t worth the risk

Because of its prevalence and powerful capabilities, many lawyers already incorporate AI such as ChatGPT into their practice, despite its known data security and privacy risks. In response to ChatGPT’s data leak and requests for measures to protect personal data, OpenAI has added a Personal Data Removal Request form allowing users to ask that their information be deleted.

But this protection is limited to users in certain jurisdictions, such as Japan and GDPR-protected Europe. And even if a removal request is approved, and OpenAI does not retain the information provided in ChatGPT conversations, it appears the data may still be used to train the model.

Given these risks, general-use LLMs like ChatGPT don’t fulfill the strict obligations attorneys have to protect privileged work product and confidential client information.

When considering integrating an AI solution into your practice, it’s crucial to choose a product specifically built for use by legal professionals. 

For instance, security and privacy should be integral to a product’s creation, not add-on features. As pointed out recently in Harvard Business Review, companies practicing top-notch cybersecurity are committed to “ensuring security is not an afterthought through processes such as DevSecOps, a method that integrates security throughout the development life cycle.” 

What to look for in a professional-grade legal AI solution

When considering using AI in their practice, attorneys should look for these four key indicators of high-level security measures:

1. Customer-first data storage policies. It’s critical you, as the customer, control how your data is used, accessed, and stored.

2. Stringent security controls. Those providing AI for professional use should employ a sophisticated, multifaceted security program that goes beyond securing just the AI platform and customer data. Look for both internal and external security resources. 

3. A long record of success. A more nebulous but still important factor is how long the developer has worked not just in legal tech generally, but in the complex business of building LLM-powered products for legal professionals. Examine the company's track record, particularly its security history and its AI expertise. Prior leaks or other security incidents are obvious red flags, and if a developer has only been working in AI for a year or two, there's little record to examine on either front.

4. Adoption among industry leaders. Who does the legal AI provider count among its clients? Adoption by peer firms is a strong signal that its security program is robust enough for law practice.

When integrating AI into practice, attorneys need to know they’re using a platform they can trust, meaning one that will ensure they meet their obligations to protect client data and privileged work product. Look for providers who adhere to industry-leading security frameworks and are committed to data privacy, as demonstrated by their company’s history, expertise, and clientele.

Written by:

Casetext