Can AI Replace Human Mediators? Groundbreaking Study Reveals Surprising Results

by Ralph Losey

EDRM - Electronic Discovery Reference Model
Image by Ralph Losey using WordPress’s Stable Diffusion.

Artificial Intelligence is no longer just a tool for automating mundane tasks—it’s now stepping into arenas traditionally dominated by human judgment and empathy. One of the most intriguing applications of AI is in dispute resolution, where large language models like GPT-4 are being tested as mediators. Pre-trial settlements are critical to the continued functioning of our system of justice because an estimated 92 percent of civil cases are resolved out of court. With the rise of online dispute resolution, the potential for AI to resolve low-stakes disputes autonomously is appealing, especially as legal systems are increasingly overburdened with new case filings.

Image by Ralph Losey using WordPress’s Stable Diffusion.

But can AI truly manage the complexities of human conflict? What happens when a machine has to balance neutrality with empathy, or data analysis with human emotion? This article discusses a groundbreaking study, “Robots in the Middle: Evaluating Large Language Models in Dispute Resolution,” offering insights into how AI may augment—though not replace—human mediators. Let’s explore the future of AI in the courtroom and beyond.

Image by Ralph Losey using WordPress’s Stable Diffusion.

Introduction

The Robots in the Middle study provides an empirical evaluation of LLM AIs acting as mediators. Mediation is very important to our system of justice because mediation and other methods of voluntary settlement keep our court systems functioning. According to an article by Harvard Law professors David A. Hoffman and John H. Watson, Jr.: “… up to 92 percent of cases are resolved out of court, a figure that does not include the number of lawsuits that are never filed because the parties used other dispute resolution methods at the outset.” Resolving conflict outside the courtroom: Why mediation skills are increasingly valuable for lawyers, according to two Harvard Law experts (Harvard Law Today, 4/29/24). This is one reason why AI researchers in legal technology are so interested in the possible application of LLM AI to mediation.

The Robots in the Middle study was based on text-message mediation of fifty hypothetical disputes. The analysis and responses of humans with AI expertise and some limited legal experience, none of whom were professional mediators, were compared with the responses of ChatGPT-4o (omni). The AI prompts used in the experiment were based on the mediator’s guide of the Department of Justice of Canada. Dispute Resolution Reference Guide: Practice Module 2 (August 25, 2022). The humans and AI were asked to select from among thirteen mediation types set forth by the DOJ Canada, picking the one to three types they judged appropriate for each hypothetical. Then they were asked to prepare text messages to facilitate a settlement. The authors served as blind judges, evaluating the quality of the responses without knowing which were generated by humans and which by ChatGPT-4o.

Image by Ralph Losey using WordPress’s Stable Diffusion.

With the growing demand for Online Dispute Resolution (“ODR”) platforms (see e.g. ODR.com), the study examines whether LLMs like GPT-4 might be able to effectively intervene in disputes by selecting appropriate types of mediation interventions and drafting coherent, impartial intervention messages.

The premise is simple, yet potentially transformative: if AI can handle routine, low-stakes disputes efficiently, this would alleviate the burden on human mediators, allowing them to devote their time and expertise to more complex, emotionally charged cases. The research sought to answer three fundamental questions:

  1. How well can LLMs select intervention types in disputes?
  2. How do LLMs compare to human mediators in drafting intervention messages?
  3. Are AI-generated messages safe, free from hallucinations, and contextually appropriate?

Image by Ralph Losey using WordPress’s Stable Diffusion.

Who is Behind this Study?

The Robots in the Middle study has seven authors in the academic fields of law and technology: Jinzhe TAN, Hannes WESTERMANN, Nikhil Reddy POTTANIGARI, Jaromír ŠAVELKA, Sébastien MEEUS, Mia GODET and Karim BENYEKHLEF. They are from multiple universities: Cyberjustice Laboratory, University of Montreal, Canada; Maastricht Law and Tech Lab, Maastricht University, Netherlands; Mila – Quebec AI Institute, University of Montreal, Canada; School of Computer Science, Carnegie Mellon University, United States; and the Faculty of Law and Criminology, Université Libre de Bruxelles, Belgium.

These are all brilliant international scholars with great expertise in legal theory and AI technology, but none appear to have any actual experience as a mediator, or even experience serving as an attorney advocate in a mediation. Only one of the authors appears to have any experience with the U.S. legal system, Jaromír Šavelka, who is a research associate at Carnegie Mellon. Šavelka previously worked as a data scientist for Reed Smith (2017-2020). The lack of real legal experience in dispute resolution by the human subjects in the experiments is a weakness of this study.

Since my article is written primarily for U.S. attorneys and legal tech experts, I try to correct for this gap with input from a certified mediator, Lawrence Kolin, who has mediated thousands of cases of all types since 2001. He is also savvy in technology and AI. Moreover, I bring some specialized knowledge as an attorney who has represented parties in many mediations since mediation first became a thing in Florida in the late 1980s. I was also trained and certified by the Florida Supreme Court as a Mediator of Computer Disputes in 1989, but have never formally served as a mediator. That is why I sought the input and advice of a professional mediator, which is included later in this article.

Where’s the professional mediator?
Image by Ralph Losey using WordPress’s Stable Diffusion.

Background on the DOJ Canada Mediation Guide Used in the Experiment

I thought that the Mediation Guide prepared by the DOJ Canada was very good and a clever choice for the experimenters to use for guidance. I asked the latest version of Google’s Gemini generative AI, which seems to have improved significantly lately, to summarize the Mediation Guide. Dispute Resolution Reference Guide: Practice Module 2 (DOJ Canada, 8/25/22). I verified the accuracy and wording of the summary, which honestly was better than I could have done on this simple task, especially considering it took two seconds to prepare.

  • Mediation is a voluntary and non-coercive process where a neutral third party assists disputing parties in reaching a mutually acceptable settlement. The mediator does not have the authority to impose a decision, but instead facilitates communication and negotiation.
  • A successful mediation leads to a signed agreement or contract, often referred to as a memorandum of understanding, which outlines the parties’ future behavior and is legally binding.
  • The mediation process offers several advantages:
    • Preserves Relationships: Mediation helps maintain relationships, especially when parties need to continue interacting, by focusing on shared interests and avoiding the adversarial nature of litigation.
    • Flexibility and Creativity: The informality of mediation allows for customized processes and solutions that cater to the specific needs and interests of the parties involved, going beyond traditional legal remedies.
    • Confidentiality: Mediations are generally private, except when subject to laws like the Access to Information Act and Privacy Act, ensuring discretion and protecting sensitive information.
    • Time and Cost Efficiency: Reaching a mediated settlement is typically faster and more cost-effective than litigation, benefiting both parties financially.
    • Controlled Dialogue: The presence of a neutral mediator enables a structured conversation, particularly helpful when emotions run high or previous negotiations have failed.
    • Shared Ownership: As the parties share the costs of mediation, they feel equally invested in the outcome and more committed to the process.
Image by Ralph Losey using WordPress’s Stable Diffusion.
  • The mediation process also has some potential drawbacks:
    • Power Imbalances: Concerns exist, especially in harassment cases, that power imbalances between parties could compromise the fairness of the process. Strategies such as co-mediation with mediators of different genders and legal representation can be used to address this.
    • Lack of Precedent: Due to its private and non-adjudicative nature, mediation does not establish legal precedents, unlike court judgments.
    • Mediator Influence: A dominant mediator might exert excessive control, potentially influencing the final resolution and undermining party autonomy.
    • Delay Tactics: The absence of a binding third-party decision may lead a party to engage in mediation without genuine intent to cooperate, using it as a stalling tactic.
  • The mediation process involves several key steps, which can vary depending on the specifics of the dispute.
    • Agreeing to mediate.
    • Understanding the problem(s).
    • Generating options.
    • Reaching agreement.
    • Implementing the agreement.
  • A successful mediation requires:
    • Good Faith Participation: All parties must actively and honestly participate in the process.
    • Impartiality of the Mediator: The mediator must remain neutral and avoid favoring any party.
    • Confidentiality: All statements and disclosures made during the mediation are generally considered confidential, subject to legal exceptions.
  • The role of a mediator is to facilitate a productive and constructive dialogue between the parties, helping them identify their interests, explore options, and work towards a mutually acceptable agreement.
  • Legal counsel can play a significant role in mediation by advising their clients, ensuring their interests are protected, and facilitating effective communication.
  • The sources include a checklist and a sample mediation agreement that can be helpful resources for those considering or engaging in mediation.
Image by Ralph Losey using WordPress’s Stable Diffusion.

Research Design and Key Findings

The study used 50 hypothetical dispute scenarios, designed to cover various real-world cases, from consumer complaints to more complex business disputes. These scenarios included emotional conflicts, deadlocked negotiations, and evidential disagreements. As mentioned, the LLMs were evaluated against humans acting as mediators in two critical tasks: selecting appropriate intervention types and drafting effective text messages to encourage settlement. The one-hundred AI responses (fifty in each category) were also evaluated for hallucinations or harmful errors (none found).

Selecting Intervention Types. The participants, AI and human, were instructed to select one to three intervention types from the list of thirteen in the DOJ Canada Mediation Guide. Evaluators compared the intervention types chosen by humans and LLMs for each scenario, rating their preference on a 5-point Likert scale.
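The shape of that selection task is easy to picture in code. The sketch below is illustrative only: the intervention-type names are invented placeholders rather than the actual DOJ Canada list, and the prompt wording is my assumption, not the study’s published prompt.

```python
# Illustrative sketch only: the intervention-type names below are invented
# placeholders, not the actual DOJ Canada list, and the prompt wording is
# an assumption, not the study's published prompt.

INTERVENTION_TYPES = [
    "Reflecting emotions",
    "Reframing the issue",
    "Reality testing",
    # ...the real list from the DOJ Canada guide has thirteen entries
]

def build_selection_prompt(scenario: str, types: list[str]) -> str:
    """Assemble a prompt asking an LLM, acting as a neutral mediator,
    to pick one to three intervention types for a dispute scenario."""
    numbered = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(types))
    return (
        "You are acting as a neutral mediator in a text-based dispute.\n\n"
        f"Dispute scenario:\n{scenario}\n\n"
        "From the intervention types listed below, select the one to three "
        "types most appropriate for this scenario, by number:\n"
        f"{numbered}"
    )

prompt = build_selection_prompt(
    "A buyer and seller disagree over a refund for a damaged item.",
    INTERVENTION_TYPES,
)
print(prompt)
```

The same prompt skeleton works for both the human annotators and the LLM, which is what makes the blind side-by-side comparison possible.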

Evaluators preferred the LLM-chosen intervention types in 22 cases, were ambivalent in 9, and preferred the human choices in 19. The report concludes that this shows LLMs can comprehend dispute scenarios, select appropriate intervention types, and recommend suitable actions in a significant number of cases.

Drafting Intervention Messages. The participants were instructed to draft intervention messages of one to two sentences. To allow for comparison in all fifty hypotheticals, the LLM was always instructed to generate messages based on the intervention types selected by the human annotator. Evaluators blindly assessed their preference for the intervention messages written by humans and LLMs, using a 5-point Likert scale and providing comments. They then compared the messages on specific criteria: understanding and contextualization, neutrality and impartiality, empathy awareness, and resolution quality.

LLM-generated messages were rated higher than the human-written messages in 60% of the texts and equal to them in 24%, for a combined 84%. In other words, the human responses were judged better than the AI’s in only 16% of the texts.

Moreover, the evaluators often found them to be:

  • Clearer and smoother.
  • Less prone than the human annotators to misunderstanding the dispute or party intentions.
  • Less likely to propose overly specific solutions or assign fault.
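Those headline percentages are simple tallies over the blind Likert ratings. As a minimal sketch (the scores below are invented for illustration; only the tallying logic mirrors the method the study describes), the aggregation looks like this:

```python
# Illustrative sketch: tally blind preference ratings on a 5-point Likert
# scale, where scores above the midpoint favor the LLM message, scores below
# favor the human one, and the midpoint is a tie. The scores here are made
# up; only the tallying logic reflects the study's described method.
from collections import Counter

def tally_preferences(scores, midpoint=3):
    """Map each Likert score to 'llm', 'human', or 'tie' and count them."""
    def label(s):
        if s > midpoint:
            return "llm"
        if s < midpoint:
            return "human"
        return "tie"
    return Counter(label(s) for s in scores)

# Hypothetical ratings for ten scenarios.
scores = [5, 4, 3, 2, 5, 4, 1, 3, 4, 5]
counts = tally_preferences(scores)
total = len(scores)
print({k: f"{100 * v / total:.0f}%" for k, v in counts.items()})
```

With these made-up scores the tally comes out 6 for the LLM, 2 for the human, and 2 ties; the study’s real figures (60% LLM-preferred, 24% equal, 16% human-preferred) come from the same kind of count over its fifty scenarios.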
Image by Ralph Losey using WordPress’s Stable Diffusion.

In terms of drafting intervention messages, the LLMs performed remarkably well. The actual wording in the report is important here:

First, the evaluators often found the messages written by the LLM to be more smooth and clear than the human-written ones. The general tone used by LLMs, involving frequent messages such as “I completely understand” or “It seems like there are problems,” seems to work well in a mediation environment, and may have contributed to high scores.

Second, while LLMs are known to frequently “hallucinate” information [9, 8], in our case the humans more often misunderstood the dispute or were confused about the party intentions or factual occurrences. This could be due to factors such as fatigue, emotional bias, or a misunderstanding of the role of the mediator. In contrast, LLMs demonstrated consistent and coherent interventions across multiple cases, with fewer instances of judgment errors.

Third, we found that our human annotators would often propose very specific solutions or even indicate fault, which received a lower rating as it may not be appropriate for the role of the mediator.

Robots in the Middle: Evaluating Large Language Models in Dispute Resolution. (September 2024).

A major caveat is needed in these comparisons between AI and human mediators: the humans acting as mediators were not real mediators and lacked any training or experience as mediators. Still, it may be surprising to many that the humans “more often misunderstood the dispute or were confused about the party intentions or factual occurrences” than the AI. Based on my own experience with humans and AI, however, the finding was not that surprising. Moreover, it is consistent with GPT-4’s reported passage of the bar exam in the top 10% of test takers, all of whom were law school graduates.

Safety and Hallucination Checks. No instances of hallucinations or harmful content were found in the AI-generated messages. No mention is made of humans hallucinating either, just that they were more dazed and confused than the AIs. Still, it was a good idea for the scientists to check for this, but, as the study points out, larger-scale, real-world applications would still require careful monitoring to ensure that AI-generated outputs remain safe and reliable.

The researchers acknowledge limitations of the study, including:

  • The structured evaluation may not reflect real-world mediation processes.
  • The use of non-expert annotators and evaluators (which, as a lawyer with mediation experience, I consider a major limitation, discussed further below).
  • All of the interactions were in English, but none of the humans acting as mediators were native English speakers; English was a second language for all of them.
  • The subjective nature of assessing intervention effectiveness.
  • The limited scope of the experiment.

Future of AI and Mediation

The results of the “Robots in the Middle” study suggest that the analysis and language generation abilities of the ChatGPT omni version are already good enough for use in low-stakes online dispute resolution (ODR). The experiments with ChatGPT demonstrated its ability to quickly process information, select appropriate interventions, and draft neutral messages. This suggests that generative AI could assist human mediators in many routine mediation tasks in all types of cases.

Image by Ralph Losey using WordPress’s Stable Diffusion.

For example, in a consumer dispute over a faulty product or a service contract issue, AI could:

  • Quickly review contracts and correspondence, identifying areas of misunderstanding.
  • Generate neutral settlement options based on similar cases.
  • Provide a data-driven assessment of the likelihood of success if the case were to go to court, allowing parties to weigh settlement offers.

In these cases, AI’s ability to process large datasets rapidly and generate unbiased, neutral recommendations is a clear advantage. The efficiency AI brings to these routine cases makes faster resolutions possible, reducing the backlog in mediation and freeing up human mediators to handle more complex disputes.

I expect that, logistical problems aside, AIs will soon move from online mediation via text messages to audio and video settings. There will be some level of human participation at first. This will likely change over the next five to ten years to humans acting as supervisors in many types of cases. The AI systems will certainly improve and so will human acceptance. The video participation may eventually change to holographic or other virtual presence with humans. However, I suspect AI will never be able to go it alone on very complex or emotionally charged cases. AI will probably always need help from human mediators in complex interpersonal dynamics.

Image by Ralph Losey using WordPress’s Stable Diffusion.

Even in difficult or emotionally volatile cases, AI can still be a valuable member of a hybrid tag-team with human mediators in charge. Navigating the AI Frontier: Balancing Breakthroughs and Blind Spots (10/10/24) (hybrid methods discussed, including the Centaur and Cyborg approaches); Loneliness Pandemic: Can Empathic AI Friendship Chatbots Be the Cure? (10/17/24) (discusses recent studies showing the ability of generative AI to act empathetically and make a person feel heard). For instance, consider a family law dispute, such as a custody case. In this context:

  • AI can provide valuable legal research, summarizing relevant precedents and suggesting neutral custody schedules based on case law.
  • AI can draft initial settlement agreement language.
  • While AI can offer neutral starting points and suggestions, only a human mediator at this time can navigate the complex emotions and relationships that often drive these disputes, ensuring that the parents remain focused on the child’s best interests rather than their grievances against each other.

It is critical that any AI-driven mediation platforms include some type of continuous oversight and transparency. Human mediators must be trained to monitor AI outputs, ensuring that the recommendations are unbiased and contextually appropriate. Additionally, ethical guidelines must be established to govern how AI is used in mediation, addressing issues of accountability, privacy, and fairness. See e.g. The Future of AI Is Here—But Are You Ready? Learn the OECD’s Blueprint for Ethical AI (10/__/24).

As mentioned, while I have considerable experience as an attorney in mediation, I have no experience serving as a mediator, and neither do any of the scientists in this study. The authors admit that this is one of the limitations of the study and that future evaluations should include professional mediators:

While it seems like the ability of the LLM to select intervention types and write messages is favourable to that of average people, this paper cannot tell us about how trained mediators would approach these issues. Future work should focus on evaluating such tools in real-world contexts, and involve expert mediators, in order to achieve a higher “construct validity,” i.e., be more closely aligned with real-world outcomes.

Robots in the Middle: Evaluating Large Language Models in Dispute Resolution. (September 2024, Section 7, Limitations).

This is one reason I wanted to have the input of an expert Mediator on these issues.

Image by Ralph Losey using WordPress’s Stable Diffusion.

Input of a Professional U.S. Mediator

Lawrence Kolin is a very experienced, tech-savvy mediator who is a member of the UWWM Mediation Group and an Executive Council member of The Florida Bar ADR Section. I am fortunate to have him as a colleague and regular reader of my articles. I asked for his reaction to the study, Robots in the Middle, and to my initial opinions about the study and the future of AI and mediation. He concurred in my basic opinions and analysis. Speaking of the authors of the report, Lawrence said:

I concur with the authors in that there is indeed a place for AI in expanding access to justice and enhancing the process of resolving certain types of cases. I found it interesting that there were no perceived hallucinations and that humans in the study were more often confused about a party’s intentions or facts, which I likewise attribute to not using trained neutrals who better understand mediation.

Lawrence Kolin, Mediator and Arbitrator, UWWM Mediation Group.

As to the future of using AI in mediation, Mediator Kolin had this to say:

So my initial thought was unlike a pretrained transformer, I am part of a 3,000 year-old human tradition of making peace. When parties agree on algorithmic justice, are they giving up the nuance of emotional intelligence, ability to read the room and building of trust through rapport that human mediators can provide? In addition, we are flexible and can adapt as the process unfolds. We also have confidentiality, ethics and boundaries that may not be followed by AI that help protect self-determination of the outcome of a dispute and avoid coercion.

I agree that the small cases (as e-commerce has aptly demonstrated) can utilize this technology for efficiency and likely with success, but for a death, defamation, IP infringement or multiparty construction case it is less certain. It could assist in the generation of ideas for deal parameters or the breaking of an impasse. Gut calls on negotiation moves and creativity are, however, still very much the domain of humans.

Lawrence Kolin, Mediator and Arbitrator, UWWM Mediation Group.

For more on Mediator Lawrence Kolin’s thoughts on mediation, see his excellent blog Orlando Mediator. It is consistently ranked in the top five of Alternative Dispute Resolution blogs. Its current ranking is number four in the world! Lawrence’s short and snappy articles “cover a wide variety of topics–local, national, and international–and includes the latest on technology and Online Dispute Resolution affecting sophisticated lawyers and parties to lawsuits.”

AI “fake” of Losey and Kolin looking skeptical.
Image by Ralph Losey using WordPress’s Stable Diffusion.

Conclusion

The empirical findings of Robots in the Middle show that AI has a significant role in handling low-stakes, routine disputes. Its speed, neutrality, and efficiency can greatly improve existing Online Dispute Resolution (ODR) systems. I agree with the authors’ conclusion:

Our research contributes to the growing body of knowledge on AI applications in law and dispute resolution, highlighting the capabilities of LLMs in understanding complex human interactions and responding with empathy and neutrality. This advancement could significantly improve access to justice, particularly in cases where traditional mediation is inaccessible due to cost or availability constraints.

Robots in the Middle: Evaluating Large Language Models in Dispute Resolution. (September 2024).

However, for larger and more complex cases, or emotionally charged disputes of any size, much more than AI-generated text or other forms of AI involvement are needed to reach meaningful settlements. The human mediator’s emotional intelligence and adaptability—what Mediator Kolin calls the “ability to read the room”—remain critical.

AI, however, has the advantage of scale. Millions of otherwise unserved, often frustrated individuals seeking justice could benefit from AI-driven mediations. All they need is an internet connection and a willingness to try. These automated systems could be offered at a very low cost or even for free. Since the process is voluntary and no one is forced to settle, there is minimal risk in trying, and AI assistance is better than no help at all. Unresolved disputes can lead to violence and other negative consequences for both individuals and their communities. This is one reason why the use of AI as a mediation tool may grow exponentially in the coming years—there is no shortage of angry people seeking solutions to their grievances.

Image by Ralph Losey using WordPress’s Stable Diffusion.

Although not part of the report, in my experience, the AI we have today is already advanced enough to be useful in certain aspects of mediation. AI would not replace human mediators but instead enhance their abilities—a hybrid approach. This could allow human legal services to reach more people than ever before. AI can help mediators provide more effective and efficient services. Skilled mediators with some AI training can already use AI for tasks such as initial analysis of complex facts, preparation of summaries and timelines, legal research, position analysis, prediction of probable case outcomes, and drafting preliminary agreements.

[Image: a man in a suit and a humanoid robot sit across from each other at a conference table in a high-tech meeting room.]
Image by Ralph Losey using WordPress’s Stable Diffusion.

Even in difficult mediations, the creative brainstorming capabilities of generative AI can be invaluable. AI can generate new ideas in seconds, helping mediators overcome impasses. For example, Panel of AI Experts for Lawyers has shown how AI can aid in this capacity. Mediation is a far more creative process than most people realize, and brainstorming new approaches with other mediators is often impractical. The ability of AI to suggest possible solutions for mediators to consider is already impressive and will only improve in the coming years. I encourage mediators to experiment with AI on non-confidential matters to understand its potential. Once comfortable, they can apply it in real-world situations using full privacy settings and confidentiality protections.

There is no doubt that AI will become increasingly integrated into dispute resolution, including mediation. As this evolution unfolds, it is crucial to ensure continuous oversight, transparency, and accountability for AI systems. Ethical guidelines must be developed to address challenges like bias, fairness, and responsibility in AI-driven mediation. While AI offers exciting possibilities for enhancing access to justice, we must remain vigilant in ensuring that human judgment remains central, particularly in cases where lives, relationships, or livelihoods are at stake. Still, a super-smart AI whispering suggestions into the ear of a mediator—who can choose to ignore or act upon them—might just lead to more and better settlements.

[Image: a group meets around a conference table; the man at the head of the table wears an earpiece, highlighted by the text “Hearing Aid? Or AI Aid?”]
Mediator with a hearing aid, or what?
Image by Ralph Losey using WordPress’s Stable Diffusion.

Want to dive deeper into the themes of this article?
Listen to Ralph Losey’s new podcast, Echoes of AI: Episode 5 | Can AI Replace Human Mediators? Groundbreaking Study Reveals Surprising Results, for an engaging exploration of AI’s role in dispute resolution.
