

CIArb Guideline on the Use of AI in Arbitration (2025), Explained

This insight is co-authored by Vishwam Jindal, Chief Executive Officer, WebNyay.




The Chartered Institute of Arbitrators (CIArb) guideline on the use of AI in arbitration, published in 2025, provides a detailed framework for integrating AI into arbitration proceedings. This analysis covers each part of the guideline, highlighting what it includes and identifying potential gaps. Below, we break down the key sections for clarity, followed by a detailed part-by-part review.


Part-by-Part Analysis


  • Part I: Benefits and Risks: Details AI's advantages (e.g., legal research, data analysis) and risks (e.g., confidentiality, bias), providing a broad overview.

  • Part II: General Recommendations: Advises on due diligence, risk-benefit analysis, legal compliance, and maintaining accountability for AI use.

  • Part III: Parties’ Use of AI: Covers arbitrators' powers to regulate AI, party autonomy in agreeing on its use, and disclosure requirements for transparency.

  • Part IV: Use of AI by Arbitrators: Allows discretionary AI use for efficiency, prohibits decision delegation, and emphasizes transparency through party consultation.

  • Appendices: Includes templates for AI use agreements and procedural orders, aiding practical implementation.

  • Definitions: Provides clear definitions for terms like AI, hallucination, and tribunal, based on industry standards.


On definitions, CIArb could have done better by adopting AI definitions from third-party technical bodies such as the IEEE, Creative Commons, or ISO, rather than relying on IBM's.


Part I: Benefits and Risks



Part I provides a balanced view of AI's potential benefits and risks in arbitration. The benefits section (1.1-1.10) highlights efficiency gains through legal research enhancement, data analysis capabilities, text generation assistance, evidence collection streamlining, and translation/transcription improvements. Notably, section 1.10 acknowledges AI's potential to remedy "inequality of arms" by providing affordable resources to under-resourced parties.


The risks section (2.1-2.9) addresses significant concerns including confidentiality breaches when using third-party AI tools, data integrity and cybersecurity vulnerabilities, impartiality issues arising from algorithmic bias, due process risks, the "black box" problem of AI opacity, enforceability risks for arbitral awards, and environmental impacts of energy-intensive AI systems.


Benefits


AI offers transformative potential in arbitration by enhancing efficiency and quality across various stages of the process:


  • Legal Research: AI-powered tools outperform traditional search engines with their adaptability and predictive capabilities, enabling faster and more precise research.

  • Data Analysis: AI tools can process large datasets to identify patterns, correlations, and inconsistencies, aiding in case preparation.

  • Text Generation: Tools can draft, summarize, and refine documents while ensuring grammatical accuracy and coherence.

  • Translation and Transcription: AI facilitates multilingual arbitration by translating documents and transcribing hearings at lower costs.

  • Case Analysis: Predictive analytics provide insights into case outcomes and procedural strategies.

  • Evidence Collection: AI streamlines evidence gathering and verification, including detecting deep fakes or fabricated evidence.


Risks


Despite its advantages, AI introduces several risks:


  • Confidentiality: Inputting sensitive data into third-party AI tools raises concerns about data security and misuse.

  • Bias: Algorithmic bias can compromise impartiality if datasets or algorithms are flawed.

  • Due Process: Over-reliance on AI tools may undermine parties' ability to present their cases fully.

  • "Black Box" Problem: The opaque nature of some AI algorithms can hinder transparency and accountability.

  • Enforceability: The use of banned or restricted AI tools in certain jurisdictions could jeopardise the validity of arbitral awards.


Limitations in Part I


Part I exhibits several significant limitations that undermine its comprehensiveness:


  • Incomplete treatment of risks: While identifying key risk categories, the guidelines lack depth in addressing bias detection and mitigation strategies, transparency mechanisms, and AI explainability challenges.

  • Gaps in benefits coverage: The incomplete presentation of sections 1.5-1.9 suggests missing analysis of potential benefits such as evidence gathering and authentication applications.

  • Absence of risk assessment framework: No structured methodology is provided for quantitatively evaluating the likelihood and severity of identified risks, leaving arbitrators without clear guidance on risk prioritisation (a minimal sketch of what such a methodology could look like follows this list).

  • Limited forward-looking analysis: The section focuses primarily on current AI capabilities without adequately addressing how rapidly evolving AI technologies might create new benefits or risks in the near future.
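
To make the critique concrete, below is a minimal sketch, in Python, of what a structured risk-scoring methodology could look like. The risk categories paraphrase Part I (2.1-2.9); the scales, scores, and priority thresholds are our own illustrative assumptions, not anything in the CIArb guideline.

    # Hypothetical sketch of a quantitative AI risk-scoring methodology for
    # arbitration. The risk categories mirror Part I (2.1-2.9); the scales,
    # scores, and thresholds are illustrative assumptions, not CIArb guidance.

    # Likelihood and severity are each scored 1 (low) to 5 (high).
    RISKS = {
        "confidentiality breach via third-party AI tool": (4, 5),
        "algorithmic bias affecting impartiality": (3, 5),
        "due process impairment": (2, 5),
        "'black box' opacity": (4, 3),
        "award enforceability challenge": (2, 5),
        "cybersecurity / data integrity failure": (3, 4),
    }

    def risk_score(likelihood: int, severity: int) -> int:
        """Classic likelihood x severity product, range 1-25."""
        return likelihood * severity

    def priority(score: int) -> str:
        """Map a raw score to a priority band (thresholds are assumptions)."""
        if score >= 15:
            return "mitigate before use"
        if score >= 8:
            return "use with safeguards and disclosure"
        return "monitor"

    for risk, (l, s) in sorted(RISKS.items(), key=lambda kv: -risk_score(*kv[1])):
        score = risk_score(l, s)
        print(f"{score:>2}  {priority(score):<35} {risk}")

Even a simple likelihood-by-severity product like this would give arbitrators a defensible starting point for the risk prioritisation the guideline currently leaves open.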



Part II: General Recommendations


The CIArb guidelines emphasise a cautious yet proactive approach to AI use:

  1. Due Diligence: Arbitrators and parties should thoroughly understand any AI tool's functionality, risks, and legal implications before using it.

  2. Balancing Benefits and Risks: Users must weigh efficiency gains against potential threats to due process, confidentiality, or fairness.

  3. Accountability: The use of AI should not diminish the responsibility or accountability of parties or arbitrators.


In summary, Part II establishes broad principles for AI adoption in arbitration. It encourages participants to conduct reasonable inquiries about AI tools' technology and function (3.1), weigh benefits against risks (3.2), investigate applicable AI regulations (3.3), and maintain responsibility despite AI use (3.4). The section addresses critical issues like AI "hallucinations" (factually incorrect outputs) and prohibits arbitrators from delegating decision-making responsibilities to AI systems.


Part II provides general advice on due diligence, risk assessment, legal compliance, and accountability for AI use. However, it has notable gaps:


  • Lack of Specific Implementation Guidance: The recommendations, such as conducting inquiries into AI tools (3.1), are broad and lack practical tools like checklists or frameworks. A step-by-step guide for evaluating AI tool security, or a risk-benefit analysis template, would help users apply them (a minimal sketch of such a checklist follows this list).

  • Insufficient technical implementation guidance: The recommendations remain abstract without providing specific technical protocols for different types of AI tools or use cases.

  • No Real or Hypothetical Case Studies: Without real-world scenarios, or even comparable hypothetical ones, such as how a party assessed an AI tool for confidentiality risks, practitioners may struggle to apply the recommendations. Hypothetical examples could bridge this gap and enhance understanding.

  • Absence of AI literacy standards: No baseline competency requirements are established for arbitration participants using AI tools, creating potential disparities in understanding and application.

  • Missing protocols for AI transparency: The guidelines don't specify concrete mechanisms to make AI processes comprehensible to all parties, particularly important given the "black box" problem acknowledged elsewhere.

  • No Mechanism for Periodic Review: Similar to Part I, there is no provision for regularly updating the recommendations, such as a biennial review process, which is critical given AI's rapid evolution, like the advent of generative AI models.

  • Lack of Input from Technology Experts: The guideline does not indicate consultation with AI specialists or technologists, such as input from organisations like the IEEE, which could ensure the recommendations reflect current industry practices and technological realities.
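
As an illustration of the missing implementation guidance, here is a minimal sketch, in Python, of a due-diligence checklist keyed to Part II. The questions paraphrase sections 3.1-3.4; the data structure and the pass/fail logic are our own assumptions, not steps CIArb prescribes.

    # Hypothetical due-diligence checklist for an AI tool under Part II.
    # Questions paraphrase 3.1-3.4; structure and verdict logic are assumed.

    from dataclasses import dataclass

    @dataclass
    class CheckItem:
        question: str
        satisfied: bool
        notes: str = ""

    def run_checklist(tool_name: str, items: list[CheckItem]) -> None:
        """Print a pass/fail report; any open item blocks adoption."""
        open_items = [i for i in items if not i.satisfied]
        print(f"Due-diligence report for: {tool_name}")
        for item in items:
            mark = "OK " if item.satisfied else "OPEN"
            suffix = f" - {item.notes}" if item.notes else ""
            print(f"  [{mark}] {item.question}{suffix}")
        if open_items:
            print(f"Verdict: {len(open_items)} open item(s); do not adopt yet")
        else:
            print("Verdict: cleared for use")

    checklist = [
        CheckItem("Is the tool's technology and function understood? (3.1)", True),
        CheckItem("Do the benefits outweigh the risks for this use case? (3.2)", True),
        CheckItem("Have applicable AI regulations been identified? (3.3)", False,
                  "cross-border applicability unresolved"),
        CheckItem("Does data handling preserve confidentiality?", False,
                  "vendor retains prompts for model training"),
        CheckItem("Is accountability retained despite AI use? (3.4)", True),
    ]
    run_checklist("Example legal-research assistant", checklist)

A template along these lines, annexed to the guideline, would turn the abstract duty of reasonable inquiry into a repeatable procedure.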



Part III: Parties’ Use of AI


Arbitrators’ Powers


Arbitrators have broad authority to regulate parties' use of AI:

  • They may issue procedural orders requiring disclosure of AI use if it impacts evidence or proceedings.

  • Arbitrators can appoint experts to assess specific AI tools or their implications for a case.


Party Autonomy


  • Parties retain significant autonomy to agree on the permissible scope of AI use in arbitration.

  • Arbitrators are encouraged to facilitate discussions about potential risks and benefits during case management conferences.


Disclosure Requirements


  • Parties may be required to disclose their use of AI tools to preserve procedural integrity.

  • Non-compliance with disclosure obligations could lead to adverse inferences or cost penalties.


In summary, Part III establishes a framework for regulating parties' AI use. Section 4 outlines arbitrators' powers to direct and regulate AI use, including appointing AI experts (4.2), preserving procedural integrity (4.3), requiring disclosure (4.4), and enforcing compliance (4.7). Section 5 respects party autonomy in AI decisions while encouraging proactive discussion of AI parameters. Sections 6 and 7 address rulings on AI admissibility and disclosure requirements respectively.


Part III contains several problematic gaps:


  • Ambiguity in Private vs. Procedural AI Use: Section 4.5 states arbitrators cannot regulate private use unless it interferes with proceedings, but the boundary is vague. Using AI for internal strategy, for example, could blur this line; clearer definitions are needed.

  • Inadequate dispute resolution mechanisms: Despite acknowledging potential disagreements over AI use, the guidelines lack specific procedures for efficiently resolving such disputes.

  • Disclosure framework tensions: The optional nature of disclosure creates uncertainty about when transparency should prevail over party discretion, potentially undermining procedural fairness.

  • Absence of cost allocation guidance: The guidelines don't address how costs related to AI tools or AI-related disputes should be allocated between parties.

  • Limited cross-border regulatory guidance: Insufficient attention is paid to navigating conflicts between different jurisdictions' AI regulations, a critical issue in international arbitration.

  • Potential Issues with Over-Reliance on Party Consent: The emphasis on party agreement (Section 5) might limit arbitrators’ ability to act decisively if parties disagree, especially if one party lacks technical expertise, potentially undermining procedural integrity.

  • Need for Detailed Criteria for Selecting AI Experts: While arbitrators can appoint AI experts, there are no specific criteria, such as qualifications in AI ethics or experience in arbitration, which could ensure expert suitability and consistency.



Part IV: Use of AI by Arbitrators


Discretionary Use


Arbitrators may leverage AI tools to enhance efficiency but must ensure:

  • Independent judgment is maintained.

  • Tasks such as legal analysis or decision-making are not delegated entirely to AI.


Transparency


Arbitrators are encouraged to consult parties before using any AI tool. If parties object, arbitrators should refrain from using that tool unless all concerns are addressed.


Responsibility


Regardless of AI involvement, arbitrators remain fully accountable for all decisions and awards issued.


In summary, Part IV addresses arbitrators' AI usage, establishing that arbitrators may employ AI to enhance efficiency (8.1) but must not relinquish decision-making authority (8.2), must verify AI outputs independently (8.3), and must assume full responsibility for awards regardless of AI assistance (8.4). Section 9 emphasises transparency through consultation with parties (9.1) and other tribunal members (9.2).


Part IV exhibits several notable limitations:

  • Inadequate technical implementation guidance: The section provides general principles without specific technical protocols for different AI applications in arbitrator decision-making.

  • Missing AI literacy standards for arbitrators: No baseline competency requirements are established to ensure arbitrators sufficiently understand the AI tools they employ.

  • Insufficient documentation requirements: The guidelines don't specify how arbitrators should document AI influence on their decision-making process in awards or orders (a minimal sketch of such a record follows this list).

  • Absence of practical examples: Without concrete illustrations of appropriate versus inappropriate AI use by arbitrators, the guidance remains abstract and difficult to apply.

  • Underdeveloped bias mitigation framework: While acknowledging potential confirmation bias, the guidelines lack specific strategies for detecting and counteracting such biases.
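
By way of illustration, here is a minimal sketch, in Python, of the kind of record an arbitrator could keep to document AI involvement in an order or award. The fields and their wording are our own assumptions; sections 8.1-8.4 and 9.1-9.2 prescribe no such format.

    # Hypothetical record of an arbitrator's AI use for a given order or
    # award. Fields map loosely to 8.3 (verification), 9.1 (consultation),
    # and 8.4 (responsibility); the format itself is an assumption.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIUseRecord:
        tool: str                   # name of the AI tool used
        task: str                   # what the tool was asked to do
        output_verified: bool       # 8.3: output independently verified
        parties_consulted: bool     # 9.1: parties consulted beforehand
        influence_on_decision: str  # how, if at all, the output was relied on
        used_on: date = field(default_factory=date.today)

    record = AIUseRecord(
        tool="generic summarisation assistant",
        task="summarise 400 pages of exhibits for first reading",
        output_verified=True,
        parties_consulted=True,
        influence_on_decision="none on the merits; used only to organise review",
    )
    print(record)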


Appendix A: Agreement on the Use of AI in Arbitration


Appendix A provides a template agreement for parties to formalise AI use parameters, including sections on permitted AI tools, authorised uses, disclosure obligations, confidentiality preservation, and tribunal AI use.


Critical Deficiencies


Appendix A falls short in several areas:

  • Excessive generality: The template may be too generic for complex or specialised AI applications, potentially failing to address nuanced requirements of different arbitration contexts.

  • Limited customisation guidance: No framework is provided for adapting the template to different types of arbitration or technological capabilities of the parties.

  • Poor institutional integration: The template doesn't adequately address how it interfaces with various institutional arbitration rules that may have their own technology provisions.

  • Static nature: No provisions exist for updating the agreement as AI capabilities evolve during potentially lengthy proceedings.

  • Insufficient technical validation mechanisms: The template lacks provisions for verifying technical compliance with agreed AI parameters.


Appendix B: Procedural Order on the Use of AI in Arbitration


Appendix B provides both short-form and long-form templates for arbitrators to issue procedural orders on AI use, introducing the concept of "High Risk AI Use" requiring mandatory disclosure, establishing procedural steps for transparency, and enabling parties to comment on proposed AI applications.
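
To illustrate how a "High Risk AI Use" trigger might operate in practice, here is a minimal sketch, in Python, of a disclosure decision rule. The example use categories and the escalation step are our own assumptions; Appendix B leaves these determinations to the tribunal and the parties.

    # Hypothetical sketch of a "High Risk AI Use" disclosure trigger in an
    # Appendix B-style procedural order. The categories and the decision
    # rule are illustrative assumptions, not the CIArb template's text.

    # Uses a tribunal might plausibly designate as high risk (assumed examples).
    HIGH_RISK_USES = {
        "generating or altering evidence",
        "drafting substantive portions of submissions",
        "assessing the opposing party's conduct or credibility",
    }

    # Uses a tribunal might plausibly treat as low risk (assumed examples).
    LOW_RISK_USES = {
        "spell-checking and formatting",
        "transcription of hearings",
        "translation for internal review",
    }

    def disclosure_required(use: str) -> bool:
        """Mandatory disclosure for high-risk uses; unknown uses are
        escalated to the tribunal, not silently treated as low risk."""
        if use in HIGH_RISK_USES:
            return True
        if use in LOW_RISK_USES:
            return False
        raise ValueError(f"Unclassified AI use {use!r}: seek a tribunal ruling")

    print(disclosure_required("generating or altering evidence"))  # True
    print(disclosure_required("transcription of hearings"))        # False

Treating an unclassified use as a question for the tribunal, rather than silently assuming it is low risk, matches the transparency-first posture of the long-form template.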


Critical Deficiencies


Appendix B contains several notable gaps:

  • Technology adaptation limitations: The templates lack mechanisms for addressing emerging AI technologies that may develop during proceedings.

  • Enforcement uncertainty: Limited guidance is provided on monitoring and enforcing compliance with AI-related orders.

  • Insufficient technical validation: The templates don't establish concrete mechanisms for verifying adherence to AI usage restrictions.

  • Absence of update protocols: No provisions exist for modifying orders as AI capabilities evolve during proceedings.

  • Limited remedial options: Beyond adverse inferences and costs, few specific remedies are provided for addressing non-compliance.


Conclusion: Actionable Recommendations for Enhancement


The CIArb AI Guideline represents a significant first step toward establishing a framework for AI integration in arbitration, demonstrating awareness of both benefits and risks while respecting party autonomy. However, to transform this preliminary framework into a robust and practical tool, several enhancements are necessary:


  1. Technical Implementation Framework: Develop supplementary technical guidelines with specific protocols for AI verification, validation, and explainability across different arbitration contexts and AI applications.

  2. AI Literacy Standards: Establish minimum competency requirements and educational resources for arbitrators and practitioners to ensure informed decision-making about AI tools.

  3. Adaptability Mechanisms: Implement a formal revision process with specific timelines for guideline updates to address rapidly evolving AI capabilities.

  4. Transparency Protocols: Create more detailed transparency requirements with clearer thresholds for mandatory disclosure to balance flexibility with procedural fairness.

  5. Risk Assessment Methodology: Develop a quantitative framework for systematically evaluating AI risks in different arbitration contexts.

  6. Practical Examples Library: Supplement each section with concrete case studies illustrating appropriate and inappropriate AI applications in arbitration.

  7. Institutional Integration Guidance: Provide specific recommendations for aligning these guidelines with existing institutional arbitration rules.

