The year 2025 marked a watershed moment for artificial intelligence (AI), with the release of more powerful generative models and a rapid expansion of proprietary and specialised AI systems. This technological leap was mirrored by a dynamic shift in the legal and regulatory landscape, as both courts and arbitral institutions began to issue formal guidance on the use of AI in dispute resolution. Across these emerging frameworks, a consensus is forming around five core principles: (1) responsible use, (2) human oversight, (3) data protection, (4) transparency, and (5) promoting fairness in data-driven systems. While some arbitral bodies have yet to formulate specific AI rules, they broadly encourage leveraging technology to enhance efficiency, an objective already embedded in provisions like Article 14(6)(iii) of the LCIA Rules (2020) and Article 13(1) of the HKIAC Rules (2024).
This post, Part I of a two-part series, outlines the current state of play and the institutions shaping the use of AI in international arbitration. Part II will then explore how emerging principles interact with real-world arbitral practice.
Responsible Use and Accountability
A foundational principle across jurisdictions is that parties and their counsel remain accountable for how AI-assisted outputs are used and relied upon in proceedings. Courts generally permit the use of AI for drafting submissions, while emphasising that legal professionals retain responsibility for verifying accuracy and ensuring that AI-assisted outputs meet applicable professional standards. For instance, a Standing Order governing civil cases before one of the magistrate judges in the U.S. District Court for the Northern District of California requires lead counsel to personally verify the accuracy of AI-assisted content and maintain records of the prompts used. Similarly, Australia’s Supreme Court of Victoria, the UK’s Artificial Intelligence Guidance and the Singapore Courts’ Guide require practitioners and judicial officers to confirm the reliability of AI-assisted documents prior to submission.
Arbitral institutions have issued similar guidance. For example, Guideline 4 of the Silicon Valley Arbitration and Mediation Center (SVAMC) Guidelines requires parties to observe all applicable competence and diligence obligations while using AI and confirms that such obligations apply equally to the production of AI-assisted outputs. Guideline 5 explicitly prohibits the fabrication or misrepresentation of evidence by misusing AI tools. The Vienna International Arbitral Centre (VIAC) Note adopts a similar position, confirming that arbitrators and parties retain full responsibility for AI-assisted outputs.
The Mandate for Human Oversight
Closely linked to responsible use is the requirement for human oversight. Judicial authorities have consistently emphasised that essential legal and adjudicative functions remain grounded in human judgment, even when supported by advanced technological tools. The UK’s Artificial Intelligence Guidance, for example, cautions against reliance on AI for legal analysis, noting that current AI tools are not designed to replace judicial reasoning or evaluative judgment. The Canadian Federal Court’s “human in the loop” approach, articulated in the Interim Principles related to its own work, similarly requires verification of all AI-assisted outputs and confirms that final judgments and orders remain a judicial responsibility that must not be delegated to AI. Brazil’s Resolution No. 615 of 2025 adopts a comparable framework, requiring that AI systems allow for meaningful human review of their outputs.
In the arbitral context, the SCC Guide, SVAMC Guidelines, CIArb Guideline and VIAC Note converge on the same principle. Arbitrators may use AI as a support tool for processing information, but must retain personal responsibility for deciding on matters of fact, law, or evidence. The CIArb Guideline sets a high bar, emphasising that arbitrators should retain personal responsibility for tasks—including research or analysis—that could influence procedural or substantive decisions.
Transparency and Disclosure
Whether, and to what extent, the use of AI must be disclosed remains a point of divergence. Some view AI as just another research tool, while others argue its generative features require transparency.
U.S. courts exemplify this range of approaches. Another magistrate judge in the Northern District of California mandates full disclosure of any AI use in filings, whereas the Illinois Supreme Court requires none, treating AI like any other research method. Rule 10.430 of the California Rules of Court requires any court permitting generative AI use by court staff or judicial officers to adopt an official policy governing its use; that policy must mandate disclosure whenever the court releases content to the public that consists solely of AI-generated output. Standard 10.80 addresses individual judges, granting them discretion to determine on a case-by-case basis whether to disclose their use of AI in developing public-facing content within their adjudicative roles. Neither rule currently governs how attorneys use AI in legal filings, but pending legislation would fill that gap: the proposed California Senate Bill 574 would obligate counsel to assess whether disclosure is necessary when using generative AI for materials presented to the public. The bill also includes a specific provision for arbitrators, stipulating that they must not rely on information produced by generative AI outside the record without first disclosing it to all parties.
By comparison, neither the Supreme Court of Victoria in Australia nor the UK courts—which, as explained above, have guidelines on responsible use—explicitly address mandatory disclosure of AI use.
In contrast, a number of guidelines by arbitral institutions lean towards transparency. The SCC Guide links disclosure to the integrity of proceedings and urges tribunals to disclose any use of AI in their analysis. The CIArb Guideline suggests arbitrators consult with parties before imposing disclosure requirements and even recommends refraining from AI use if the parties disagree. The SVAMC Guidelines advise disclosing sufficient information to enable reproducibility, including the tool’s name, settings, and the prompts used. The VIAC Note adopts a more liberal stance, stating that it is at the arbitrators’ discretion to request the disclosure of evidence produced using AI or with its support.
Promoting Fairness in Data‑Driven Systems
There is increasing recognition that, like all data-driven analytical tools, AI systems reflect the characteristics of their training data and deployment context. The UK’s AI Guidance emphasises awareness and professional judgment, whereas Brazil's Resolution No. 615 of 2025 establishes an affirmative duty to identify, assess and address distortions that may arise from data or system design. The SVAMC Guidelines similarly encourage the selection of AI tools that incorporate mechanisms to identify and manage such issues, especially in contexts such as selecting arbitrators or experts.
Data Protection and Confidentiality
Confidentiality remains a cornerstone of arbitration, and the use of public AI tools requires careful alignment with information‑governance obligations. Judicial guidance in California and New Zealand cautions against entering confidential information into tools that are not appropriately secured.
Arbitral guidelines place particular emphasis on understanding how AI systems process, store and retain data. The SCC Guide urges all participants to understand how AI systems process and store data. The SVAMC Guidelines go further, stating that confidential information should be submitted to AI tools only when the tool has been appropriately vetted and authorised for that use, and they encourage the use of redaction or anonymisation. More robustly, the JAMS Rules provide for a specific AI Disputes Protective Order and allow arbitrators to appoint experts to review AI systems at the parties’ request.
Conclusion
The emerging guidance on AI reflects an effort to integrate advanced tools into established procedural and professional frameworks. For users of arbitration, the message is one of intentional adoption: understanding how AI tools are used, maintaining appropriate oversight, and engaging openly with tribunals to ensure shared expectations. These principles provide a stable foundation for leveraging AI to enhance efficiency and insight while preserving confidence in the arbitral process.