
Freshfields Risk & Compliance


AI and Arbitration: The Silicon Valley Arbitration and Mediation Center Guidelines on the Use of AI in Arbitration

Introduction

The Silicon Valley Arbitration and Mediation Center (SVAMC) recently published the first edition of its Guidelines on the Use of AI in Arbitration (the AI Guidelines).

The AI Guidelines are the culmination of a process that began in August 2023, when an SVAMC task force released a draft of the guidelines for a public consultation (the Draft) in which Freshfields participated. The AI Guidelines are the latest response by arbitration providers to the unprecedented pace of AI development over the past few years. When used responsibly, AI can help save time and costs in the conduct of an arbitral proceeding, but its use presents risks which the AI Guidelines seek to address.

Scope and structure of the AI Guidelines 

The AI Guidelines join the body of soft law (including the IBA Rules on the Taking of Evidence in International Arbitration, for example) that the arbitration community has developed to foster a uniform approach to issues that appear across jurisdictions. Like the other sources of soft law, the AI Guidelines are not binding and are meant to apply as “guiding principles” where the tribunal and/or the parties agree to their application. As we have already discussed in our post on the opportunities and risks that generative AI brings to arbitration, the adoption of uniform guidance often promotes the transparency and legitimacy of arbitration. 

The AI Guidelines define AI as “computer systems that perform tasks commonly associated with human cognition, such as understanding natural language, recognizing complex semantic patterns, and generating human-like outputs.” In contrast to the definitions recently adopted by the OECD and the EU, the AI Guidelines seem to treat the comparison with human cognition as the defining element of the relevant technology. This arguably renders the scope of the AI Guidelines rather broad, as it could encompass technologies that are already commonly used in arbitration proceedings (such as optical character recognition). It remains to be seen how this definition will be interpreted in practice.

The AI Guidelines are organized into three main parts based on the participants covered: Part 1 provides guidelines for all participants in an arbitration, Part 2 for parties and party representatives, and Part 3 for arbitrators. The AI Guidelines also include commentary on each Guideline, as well as a model clause for inclusion in Procedural Orders.

Guidelines for all participants

Under Part 1 of the AI Guidelines, Guidelines 1 and 2 contain a reminder that the parties, counsel and experts should familiarize themselves with AI technologies and use them responsibly and in a manner that safeguards the confidentiality of information. This is consistent with the ethical obligations that typically already apply to attorneys across the globe. However, as the use of AI presents novel risks, these Guidelines serve as a useful reminder. Courts and state bar associations have issued comparable guidance (see, for example, the UK, Australia, New Zealand, Canada, and several states in the US, including California and New Jersey).

One of the key provisions in this section is Guideline 3, concerning potential disclosure of the use of AI. The Draft had provided that in certain circumstances it would be appropriate to disclose the use of AI, namely where AI is used in the preparation of submissions or expert reports, or where it might have an impact on the proceedings or their outcome. This suggested disclosure obligation received a mixed reaction from the arbitration community. Some saw it as a basic procedural fairness protection; others believed that it would lead to costly and lengthy procedural battles and could be used as a dilatory tactic.

The final version of Guideline 3 seeks to strike a balance between these considerations. While it clarifies that there should be no general obligation to disclose, it also provides that disclosure may be appropriate on a case-by-case basis, taking into account due process and privilege protections. This gives tribunals significant discretion, which is probably appropriate, but it also makes disclosure obligations somewhat unpredictable. 

Guidelines for parties and party representatives

Part 2 of the AI Guidelines echoes the general considerations regarding the responsible use of AI. It contains two guidelines, both largely unchanged from their Draft versions.

Guideline 4 provides that parties and party representatives have a duty of competence and diligence when using AI. This means that parties should verify the accuracy of an AI tool’s output, and party representatives should observe “applicable ethical rules or professional standards of competent or diligent representation” when using AI tools. Importantly, parties and party representatives are “deemed responsible” for any inaccurate AI-generated information that is submitted in an arbitration.

Guideline 5 emphasizes that parties, party representatives and experts should not use AI in ways that would affect the integrity of the arbitration and the evidence, such as falsifying evidence or otherwise misleading the tribunal and opposing parties. With advances in generative AI and deepfakes, which can be virtually indistinguishable from the real thing and are difficult to uncover through forensic techniques, manipulation is a real risk that arbitrators and parties need to consider. Although Guideline 5 does not suggest any specific sanctions, its commentary notes that, in addition to any measures available under the applicable laws or arbitration rules (such as striking evidence from the record or deeming it inadmissible), arbitrators may also consider alternatives such as drawing adverse inferences and taking the violation of this Guideline into account when allocating the costs of the arbitration.

Guidelines for arbitrators

Part 3 of the AI Guidelines contains two guidelines targeting the use of AI by arbitrators. Guideline 6 states that arbitrators shall not delegate “any part of their personal mandate”—particularly the arbitrators’ “decision-making process”—to any AI tool. The final version of Guideline 6 has an additional phrase that was not in its Draft version: “[t]he use of AI tools by arbitrators shall not replace their independent analysis of the facts, the law, and the evidence.” Guideline 7 has not changed from its initial version and states that arbitrators “shall not rely on AI-generated information outside the record without making appropriate disclosures to the parties beforehand and, as far as practical, allowing the parties to comment on it.” 

Together, Guidelines 6 and 7 protect natural justice: Guideline 6 ensures that arbitrators carry out their core adjudicatory function without improper influence from external sources (such as AI). Guideline 7 cautions arbitrators against relying on material or sources generated by AI outside the record that the parties have not had a chance to address. 

Practical considerations 

Overall, the adoption of uniform principles on the use of AI in arbitral proceedings is a welcome development, as arbitration commonly involves parties and laws from different jurisdictions. The non-binding nature of the AI Guidelines gives arbitrators and parties the flexibility to adapt their use to the needs of each case. We expect the incorporation of the AI Guidelines into Procedural Orders to become common practice, allowing parties to continue benefitting from the main advantages of arbitration (namely speed, efficiency and flexibility) while mitigating the risk of due process violations.
