Silicon Valley Arbitration & Mediation Center Issues AI Guidelines

Axinn, Veltrop & Harkrider LLP
On April 30, 2024, the Silicon Valley Arbitration & Mediation Center (SVAMC) published the first edition of its Guidelines on the Use of AI in Arbitration, which “shall apply when and to the extent that the parties have so agreed and/or following a decision by an arbitral tribunal or an arbitral institution to adopt these Guidelines.”

The Guidelines provide that all participants in an arbitration are responsible for familiarizing themselves with the AI tools they use and those tools' intended uses, and for making reasonable efforts to understand their limitations, biases, and risks, including ensuring that any use of AI tools is consistent with confidentiality obligations. Disclosure of the use of AI tools is not required as a general matter; decisions regarding disclosure are to be made on a case-by-case basis.

Parties and their representatives are directed to observe all applicable ethical rules and professional standards when using AI tools and to refrain from using AI in ways that would compromise the integrity of, or otherwise disrupt, the arbitration proceedings, including by falsifying or compromising the authenticity of evidence or misleading the arbitral tribunal or opposing parties.

For arbitrators, the Guidelines emphasize that no part of the decision-making process should be delegated to an AI tool, including the analysis of the facts, law, and evidence, and that arbitrators shall not rely on AI-generated information outside the record without making appropriate disclosures to the parties. In deciding how to address submissions containing AI-induced errors or inaccuracies, the tribunal may consider whether an error is genuinely inadvertent or inconsequential or whether it would compromise the integrity of the proceedings. Arbitrators have a duty to disclose any reliance on AI-generated outputs outside the record that influences their understanding of the case and to give the parties an opportunity to comment on any such outputs that are used, with the acknowledgment that disclosure requirements may vary depending on the specific AI application.

The Guidelines further note that there is no single definition of AI; the definition adopted is intended to be broad enough to encompass both existing and foreseeable future types of AI, while not sweeping in every type of computer-assisted automation tool.

Questions or suggestions regarding the Guidelines may be directed to AITaskForce@svamc.org.

“The publication of these general principles for the use of AI is a fitting tribute to SVAMC's tenth anniversary and its collective industriousness and dedication to promoting fairness, efficiency and transparency in arbitral proceedings.”

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Axinn, Veltrop & Harkrider LLP
