Clifford Chance

International Arbitration Insights

CIETAC Issues APAC's First Guidelines on Use of AI in Arbitration

CIETAC's AI Guidelines represent the first set of principles issued in APAC on how parties and arbitral tribunals should adopt AI in arbitral proceedings. They lay down high-level considerations and risk-mitigation measures to ensure that AI does not substitute for human decision-making or diminish parties' responsibility to produce accurate submissions.

Background

The China International Economic and Trade Arbitration Commission (CIETAC) has published new Provisional Guidelines on the Use of Artificial Intelligence Technology in Arbitration, which came into effect on 18 July 2025. The CIETAC AI Guidelines respond to the increasing integration of AI into legal practice and are addressed to parties, tribunals, and other arbitration participants. They set out high-level guidance aimed at balancing greater efficiency against the new risks arising from the use of AI. Key areas of AI use include:

  • Proofreading and translation of arbitration documents, and transcription of hearing records
  • Assisting in the drafting of minutes, correspondence, and arbitral documents
  • Assisting in the collection, review, and analysis of evidentiary materials
  • Assisting in the recommendation of arbitrators
  • Assisting with legal research

The CIETAC AI Guidelines are not part of the CIETAC Arbitration Rules. Instead, they serve as a set of best-practice recommendations, to be interpreted and updated by CIETAC as technology and the legal landscape evolve.

Benefits and Risks: A Delicate Balance

AI is not explicitly defined in the CIETAC AI Guidelines, but existing applications of AI tools in arbitral proceedings focus on the use of generative AI. The Guidelines acknowledge the efficiency and quality gains that AI can bring to text generation, data analysis, and legal research, which can accelerate proceedings and reduce costs.

The Guidelines also highlight well-known risks arising from the improper use of AI, which may lead to challenges to the tribunal's impartiality and may jeopardise enforcement of the arbitral award. These include:

  • Confidentiality and data security: Inputting confidential case information into third-party AI tools may result in data breaches, unauthorised disclosure, or unlawful cross-border data transfers. This is particularly important for sensitive data stored in Mainland China.
  • Accuracy and bias: AI-generated outputs may be inaccurate, incomplete, or biased, potentially undermining the fairness of proceedings.
  • Lack of transparency: Decision-making mechanisms of AI tools are often opaque, making it difficult for parties or tribunals to understand, supervise, or challenge their outputs.
  • Regulatory uncertainty: The legal and regulatory framework for AI is evolving rapidly, and changes may affect the enforceability of awards where AI has played a significant role.

Core Principles

The CIETAC AI Guidelines are structured around three core principles:

  1. Party autonomy: Parties are free to agree whether, and how, AI will be used in their arbitration. This includes the ability to permit, restrict, or prohibit specific AI tools, and to establish protocols for disclosure. Unless otherwise provided by law or determined by the tribunal, parties’ agreement on AI use prevails.
  2. Auxiliary role of AI: AI is to be used as an aid, not a substitute, for human judgement. The tribunal remains responsible for the final award rendered.
  3. Good faith: The use of AI does not diminish parties’ responsibility for the accuracy and lawfulness of their submissions and documentary disclosure.

Guidance for Arbitral Tribunals

Before adopting AI technology, the tribunal should take note of the following:

  • Necessity and proportionality: Tribunals should assess whether the use of AI is necessary and proportionate, weighing the potential efficiency gains against the corresponding risks.
  • Data security and accuracy: The security and reliability of the chosen AI tool must be evaluated.
  • Legal and regulatory context: Tribunals must consider the law of the seat, procedural law, and the laws of parties’ jurisdictions regarding the use of AI.
  • Expert consultation: Where appropriate, tribunals may consult AI technology experts to understand the technical aspects and implications of using AI in the case, pursuant to the CIETAC Arbitration Rules.

Importantly, the CIETAC AI Guidelines reiterate that the tribunal’s duties of diligence, efficiency, independence, and impartiality are not changed by the use of AI. In particular:

  • The use of AI must not deprive or restrict any party’s right to present its case.
  • Tribunals must independently analyse the facts and law and must not delegate this responsibility or its decision-making to AI.
  • The tribunal must draft the reasoning and conclusions of the award itself and remains responsible for the content of the award.

Risk Mitigation: Recommended Measures

To help parties and tribunals manage the risks associated with AI, the CIETAC AI Guidelines include a checklist of recommended measures, including:

  • Contractual stipulations: Parties are encouraged to include provisions on the use of AI in their arbitration agreements, unless otherwise stipulated by the law.
  • Procedural dialogue: Tribunals should, where necessary, invite parties to express their views on the use of AI in procedural orders or at pre-hearing conferences, including on the types of AI tools to be used, restricted, or prohibited, and on disclosure protocols.
  • Use of CIETAC’s platform: Parties and tribunals are encouraged to use CIETAC’s own digital platform for services such as electronic case filing and AI-powered transcription.

Alignment with Global Trends

CIETAC’s initiative is part of a broader international trend in which arbitral institutions are issuing guidance on the use of AI in proceedings. Earlier this year, the Chartered Institute of Arbitrators (CIArb) issued the Guideline on the Use of AI in Arbitration. Similar guidance has also been issued by the Silicon Valley Arbitration and Mediation Centre and the SCC Arbitration Institute (for more details, please see our previous blog post). All of these guidelines are intended to provide a point of reference for arbitration participants and to set out principles for consideration as AI technology continues to develop. They are generally not to be construed as overriding domestic or other binding regulations on the use of AI.

The CIArb Guidelines and the SVAMC Guidelines are structured similarly: both outline principles on the use of AI addressed to all arbitration participants, then set out specific guidance for parties and arbitrators. In particular, both address when disclosure of the use of AI tools by parties and arbitrators is appropriate (CIArb Guidelines 7 and 9; SVAMC Guideline 3). The CIArb Guidelines also include guidance on how arbitrators may rule on the use of AI by parties (CIArb Guideline 6), and provide a sample agreement and procedural order (in short and long form) on the use of AI in arbitration, while the SVAMC Guidelines provide a model clause adopting those guidelines that can be incorporated into procedural orders.

By contrast, both the SCC Guide and CIETAC’s AI Guidelines are more high-level in nature. CIETAC’s AI Guidelines echo principles stated in other institutional guidelines on AI use; for now, it will largely fall to parties to discuss and define the parameters for AI use in their cases. Another key principle that is reiterated is that AI cannot substitute for human decision-making, and tribunals should be alive to the risk that their decisions will be scrutinised if AI has been improperly used in rendering the final award. CIETAC’s AI Guidelines provide a useful foundation for arbitration participants to consider how AI is integrated into their proceedings and what safeguards can be put in place to maintain fairness and trust in the arbitral process.

Links: CIETAC AI Guidelines; CIETAC Press Release (in Chinese only)
