
Clifford Chance


Briefings

Moving Forward on Explainable AI - New Guidance from the UK ICO and Turing Institute

11 June 2020

Artificial intelligence is being extensively used across sectors and around the world, but without comprehensive legal frameworks and appropriate governance and compliance programmes in place, regulators are starting to fill the gap.

One area of focus for regulators, and a persistent challenge for organisations, has been the need for greater transparency and "explainability" in artificial intelligence (AI) systems. As a result, a range of academic, industry and government initiatives have sought to give practical context to new legal requirements and help organisations counter accusations that AI systems are opaque or act as a "black box". The latest of these is guidance from Project ExplAIn, a collaboration between the UK Information Commissioner's Office (ICO) and the Alan Turing Institute, published in May 2020 following an industry consultation. The guidance aims to provide practical advice on explaining decisions made by AI systems in a manner that meets legal requirements as well as technical and governance best practice. Here we consider some of the existing legal requirements for explainability in the UK and explore the key takeaways from this guidance.
