Should Attorney Conflict of Interest Rules Apply to Litigation Service Providers and LegalTech Companies, Too?

[author: Tom Hagy]

Introduction

This article addresses conflict-of-interest rules and recommendations that guide attorneys, and whether those principles should also apply to certain types of service providers, including legal software-as-a-service companies. In particular, I wanted to explore the new crop of AI-driven SaaS providers that are more embedded in case strategy than traditional service providers, such as legal research publishers and platforms, ever were. Unlike linear, sequential processes, new AI models exhibit dynamic and adaptive behaviors, capable of morphing and changing based on data inputs. This flexibility allows for sophisticated pattern recognition and decision-making, pushing the boundaries of AI's capabilities and making these tools more like members of a legal team and less like coffee machines. Read on for a brief overview of conflicts of interest in the context of the attorney-client relationship, followed by a look at how similar principles might apply to relationships with this new generation of service providers. Comments are encouraged. --Tom Hagy

Actual Conflicts of Interest

Attorneys know this well. They must avoid actual conflicts in the clients they represent and – usually, but not always – in the legal positions they take on behalf of clients from case to case. This is articulated in ABA Model Rule 1.7.

The reasons behind avoiding actual conflicts of interest – representing adverse parties – are well known to anyone practicing or studying law. Attorneys would have, and be able to use, confidential information, insights, secrets, or strategies learned while representing one client in a case against another. An argument that would be good for Client A may be detrimental to Client B. What path would an attorney take? Would they base their choice on the monetary value of a client? Or would they look to their personal, social, political, or moral beliefs?

It is not just the use of that information that is problematic. The mere appearance that the information is accessible and therefore can be used improperly is equally troublesome.

Positional Conflicts of Interest

In her 2006 paper – Legal Doubletalk and the Concern with Positional Conflicts: A “Foolish Consistency”? – Professor Helen A. Anderson insightfully explored whether lawyers should be able to take contradictory positions in different cases. Not to disappoint the attorney in you, “it depends” is where she landed, though with more eloquence.

Bar associations caution against positional conflicts. But, Professor Anderson wrote, the recommended analysis of these conflicts misses their real potential harm. “[I]t is precisely when a lawyer decides not to make a contradictory argument for one client in order not to offend or harm another client that an ethical problem is likely to be present. A positional conflict is therefore evidence that any pressure to modify arguments has been overcome.”

An outright bar of positional conflicts would give an attorney more reason to modify or sit on arguments for the client the attorney values less, e.g., for economic reasons. For that reason, Professor Anderson proposed, positional conflicts should not be barred as unethical.

But, she wrote, that doesn’t dispose of all the problems raised by positional conflicts, such as the damage to an attorney’s credibility.

The professor’s point is that some positional conflicts will harm a client, but barring conflicts across the board may favor financially more attractive clients and disadvantage clients with smaller bank balances.

The potential harms exist, albeit to different degrees, both when actual Rule 1.7 conflicts of interest are present and when some positional conflicts arise. Avoiding these harmful entanglements is foundational to the practice of law.

But it’s not all about you. Your duty to avoid conflicts as a lawyer extends to the people you contract and hire, too, such as members of your trial teams. When you hire an expert or a consultant, you must not (and would not) hire one who is also representing the other side on the same issue. This is an obvious disaster waiting to happen.

Further, this duty arguably extends to certain types of data analysis services that – like people on your teams – offer insights and recommendations based on the analysis of the data they are fed. The sources of that data are where some of the problems take root.

The Real Issue

Even if everyone – the attorneys, the clients, the courts, the bar associations, and the public – were fine with representing adverse parties in separate, unrelated matters or with an attorney talking out of both sides of their mouth, the real issue is access to information, insights, and tactical and strategic choices that drive a rival’s case – whether that rival is a direct or indirect opponent.

How are these positions crafted? What evidence supports them? What insights came from which people not directly involved in your case? What if insights from someone directly or tangentially on an opposing or divergent team made their way into the data? That is the real danger, and that is why companies that analyze data and offer direction for one side present an unacceptable risk if they work for the other.

Even if the humans were trustworthy, loyal, and vigilant, their systems would not be so constrained if given access to data and directed to analyze it. Further, regardless of whether data influenced or contributed by adversarial parties seeps into the yield of AI-crafted recommendations, the possibility and the appearance of this seepage make provider sharing among direct or indirect litigation rivals detrimental.

Not all providers are problematic, of course …

One category of service provider that does not pose a risk comprises companies that provide legal research services, computer networks, court reporting, and many others. Broadly speaking, and setting aside nefarious misconduct by a seemingly innocuous provider (even an unscrupulous caterer), these companies are not gathering, sharing, or otherwise using case- or client-specific insights in the service of other customers.

The brand-name legal research companies can serve all sides of matters because they are not using one law firm’s activity or queries to inform the results they provide to other, perhaps adverse, law firms. Their search libraries do not change based on customer inputs.

While these companies wisely and creatively use AI to enhance their services, they point these systems at massive, curated data collections, e.g., court records, public records, publications, etc. But they are not performing analytics on the confidential inputs of one customer to serve up insights or recommendations to other customers. These companies must and do take great care to protect what their customers do on their platforms.

Companies in this category can safely work for clients universally.

… but some providers can be seriously problematic.

The second category, however, is where trouble incubates. Companies in this group offer strategic or tactical support and advisory services to attorneys for specific clients in specific matters.

Consider this scenario. Service A virtually and sometimes literally sits at the table with attorneys to craft ways to win their case for Client A.

Service A assists with demand letters, complaints, pleadings, motions, depositions, jury instructions, appeals, and settlements – generally helping develop legal strategy based on attorney and client insights and other information. To accomplish this, Service A collects and analyzes sensitive attorney-client inputs that the attorneys are legally and ethically bound to protect in their representation of Client A.

The goal of Service A is to make a profit – an admirable ambition to which all companies (including my own) aspire. Besides providing innovative and valuable services, companies grow by expanding not only the number of clients they serve, but the types of clients they serve and the types of services they provide different clients. A company whose clients are defense attorneys and insurance companies would logically explore selling the same services to plaintiff attorneys or an adjacent or complementary service to an insurance company. A court reporting company might also offer translation services, for example, or seek contracts with firms or companies regardless of their positions in litigation.

What about serving defense firms and insurers while also serving the firms who sue them? This makes sense from a business perspective for Service A, at least in the short term, but puts their clients and their clients’ clients in serious jeopardy. Why? Because, as I said earlier, they can and will use the data and insights they gather while serving one client to serve another. They don’t do this for nefarious reasons (not necessarily, anyway); they do it because they know the more “good data” they have, the better their outputs will be.

This becomes more complicated when Service A’s value is the insight it can generate by using AI tools on data pools filled with data from several or all of its clients.

Insights that Service A delivers to Client A may draw on data from Client B, and the other way around. The more Clients A and B fill and update Service A’s database as they work with Service A, the more informed Service A’s insights become – because they are coming from both sides of an issue, perhaps from the specific adverse parties themselves.

When an AI-powered data analytics company does its job, it is like a water wheel that powers an insight mill, one that is fed by multiple streams of data. That is both the power and danger of the mill, because those who provide the data are engaged in legal battles.
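To make the concern concrete, here is a minimal, purely hypothetical sketch – the class names, clients, and documents are invented for illustration, not drawn from any real provider’s architecture. It contrasts a provider that pools every client’s inputs into one shared analytics store with one that keeps each client’s inputs segregated. In the pooled model, the “insights” generated for one client can silently draw on an adversary’s confidential submissions.

    # Hypothetical sketch only: names and data are invented for illustration.
    from collections import defaultdict

    class PooledInsightMill:
        """All clients feed one shared pool; insights for any client draw on everyone's data."""
        def __init__(self):
            self.pool = []  # every submission from every client lands in the same place

        def ingest(self, client, document):
            self.pool.append((client, document))

        def insights_for(self, client):
            # Analysis runs over the entire pool, including adversaries' submissions.
            return [doc for _, doc in self.pool]

    class SegregatedInsightMill:
        """Each client's inputs stay in a separate store; no cross-client analysis."""
        def __init__(self):
            self.stores = defaultdict(list)

        def ingest(self, client, document):
            self.stores[client].append(document)

        def insights_for(self, client):
            return list(self.stores[client])

    pooled = PooledInsightMill()
    pooled.ingest("Client A", "plaintiff damages theory and settlement floor")
    pooled.ingest("Client B", "defense valuation model and reserve strategy")
    print(pooled.insights_for("Client A"))   # includes Client B's confidential strategy

    walled = SegregatedInsightMill()
    walled.ingest("Client A", "plaintiff damages theory and settlement floor")
    walled.ingest("Client B", "defense valuation model and reserve strategy")
    print(walled.insights_for("Client A"))   # limited to Client A's own inputs

The point is architectural, not mathematical: if one store feeds all clients, the ethical wall exists only as a promise.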

This sounds like madness. Who would do this? That’s the question you should be asking when picking service providers.

Should conflict principles guide the work of those who support attorneys in litigating cases?

Yes. Yes, they should. Consider this scenario:

1. Service E prepares demand letter packages for plaintiff lawyers using artificial intelligence.

2. One of Service E’s clients is Attorney E, a plaintiff lawyer with a client injured in an auto accident.

3. Attorney E collects medical records and traffic accident reports.

4. Attorney E gives these materials to Service E, which prepares a demand letter for Attorney E.

5. Attorney E sends the demand letter to an insurance company, Insurer A.

This is a frequent and repetitive task. It’s not high level but does take knowledge, time, and attention. An AI-driven demand letter service is valuable to Attorney E and other personal injury attorneys who – with the time saved – can turn their attention to higher-level tasks. Other entrepreneurs and companies have recognized this, too. AI-powered demand letter preparation alone is a new cottage industry. Many new players are vying for market share and trying to increase revenues. As I said, one way to grow is a tried-and-true business method: take your product to adjacent or related markets. That is what Service E is doing. They are taking their solutions to another market where individual players have resources dedicated to performing routine and repetitive tasks that, if handled by a third party, can be reallocated or eliminated. That new market for Service E is the insurance industry, which ultimately pays to settle personal injury cases – companies like Insurer A.

Under the overarching principles that govern the legal industry, if Service E – which is providing demand letter services – were to provide demand letter response services to Insurer A, this would arguably create a conflict of interest, since plaintiffs and insurance companies are essentially adversaries in personal injury litigation.

Why would Insurer A contract Service E to respond to the very demand letters that Service E drafted for plaintiff counsel Attorney E? The people overseeing the creation of demand letters could sit in the cubicle next to the people overseeing the responses. They might even be the same people, using the same data sets to craft the best demands and the best responses to those demands.

The risk of actual and positional conflicts is high, and this is happening today. Even if specific conflicts can’t be proven, the mere appearance of such conflicts should make attorneys think twice about sharing such resources with adversaries. Put differently: all parties need to exercise extreme caution.

Some wise insurance carriers and defense firms have already reached this conclusion. Approached by a Service E-type demand letter provider – one that serves plaintiff firms and is trying to grow into adjacent markets – carriers are just saying no. They see the conflict inherent in such providers serving opposing parties, whether in concept or in fact.

These types of clients – insurance executives and defense attorneys – are generally risk-averse beings, and with good reason. They are aware that using a platform that is learning from and teaching opposing parties presents unreasonable risks, to the point of being irresponsible, perhaps unethical.

Understand what these services can and cannot do

No matter what anyone tells you, all of these AI models will integrate not only information but concepts like strategy, decisions, and tactics. These models do not run like typical computer software, most of which operates on a fixed set of instructions. State-of-the-art tools do not operate this way. They adapt to the inputs they receive and their analysis of that data.

Impenetrable boundaries – an ethical wall – must be maintained between these swirling oceans of data. A company whose services are integrated into and fed by its clients’ litigation decisions should not work on both sides of that wall.

In this industry, where AI tools are augmenting human thought, they must be treated as if they were humans – consultants or employees – who are gaining insights, just as if the service were employing teams of people. Clients would not want the person writing the demand letters sitting next to the person writing responses to those demand letters, or collaborating, or the same person writing both. Certainly efficient, but not ethical. The guardrails that apply to you must apply to AI tools.

Selecting and vetting your provider.

While the problem is complex, the fix is simple.

Ask your AI service providers questions similar to those you would ask in a routine conflict check.

  • Who are your other current clients?
  • Who are your former clients?
  • Who are your owners and investors?
  • Do your law firm clients represent these specific companies (list them) or companies in these industries (list them)?
  • What are your plans for clients in the future?
  • Do you have business relationships with parties that may be adverse to our clients?
  • What are your data security policies and protections?
  • How do you guard confidential information?

While not all companies will be forthcoming or permitted to share all of these details, it doesn’t hurt to ask, and asking clearly signals your concerns. If the answers you get reveal a conflict of interest or insufficient data control, this is a provider you will want to avoid.
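For those who want to operationalize the first few questions, here is a minimal, hypothetical sketch of the kind of overlap check a routine conflict screen performs – the provider disclosures and party names are invented. It simply compares what a provider discloses against the parties adverse to your clients and flags any matches for follow-up.

    # Hypothetical sketch only: provider disclosures and adverse-party names are invented.
    def screen_provider(disclosed_clients, adverse_parties):
        """Return the overlap between a provider's disclosed clients and your adverse parties."""
        normalize = lambda names: {name.strip().lower() for name in names}
        return normalize(disclosed_clients) & normalize(adverse_parties)

    provider_disclosure = ["Acme Plaintiff Firm LLP", "Insurer A", "Widget Defense Group"]
    our_adverse_parties = ["Insurer A", "Some Other Carrier"]

    conflicts = screen_provider(provider_disclosure, our_adverse_parties)
    if conflicts:
        print("Potential conflicts to investigate:", sorted(conflicts))
    else:
        print("No disclosed overlap; ask follow-up questions about data segregation.")

A real screen would also have to account for corporate affiliates, name variations, and industry-level adversity, which is why the questions above ask about industries, investors, and future plans, not just named clients.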

Conclusion

Contemporary AI models have transcended the limitations of traditional computing. Their ability to learn, adapt, and evolve based on input data positions them as powerful tools to support attorneys. As these models continue to advance, it is crucial to understand their underlying mechanisms and ethical implications – and their limitations.

© LegalMation
