What is the UK's new AI Rulebook?

Artificial Intelligence | 8 September 2022

AI regulation and oversight in the UK: what's next?

The Department for Digital, Culture, Media and Sport (DCMS) has published a policy paper outlining the UK government's emerging thinking on its approach to regulating artificial intelligence (AI) in the UK. The paper forms part of the UK's National AI Strategy, which has the stated aim of ensuring the UK "remains a global AI superpower", complements the wider UK Digital Strategy, and was published at the same time as the UK's Data Protection and Digital Information Bill.

This article considers the context in which the UK's proposed approach to AI regulation has been developed and some noteworthy areas where the proposed plans currently differ from the draft EU AI Act.

The UK AI landscape

AI is increasingly being used across a range of sectors in the UK, from drug discovery and medicine to reducing emissions and tackling climate change. While appropriate use of AI can deliver major benefits to the UK economy, its increasing use presents novel risks and raises challenging questions. For example, how can consumers be confident that they are buying evidenced and tested AI products? How can facial recognition be used in a way that protects individuals' rights to privacy? How can we ensure that algorithmic bias does not lead to discrimination and inequality? Against the uncertainty raised by these questions, regulating AI has become a priority for the UK government, regulators and industry actors.

Current AI Regulation in the UK

The UK government has cited the UK's reputation for the quality of its regulators and the rule of law as contributing factors to the high levels of private investment in AI in the UK. The policy paper goes on to contextualise plans to reform AI regulation as part of the UK government's strategy to position the UK as a leader in AI innovation by ensuring that the governance of AI's use and development keeps pace with rapidly advancing technology.

Whilst no UK laws have been written specifically to regulate AI, a patchwork of legal and regulatory requirements currently regulates the use of AI systems. For example, UK data protection law includes specific requirements on "automated decision-making". The Online Safety Bill also contains provisions specifically targeting the use of algorithms. The upcoming competition law regime for digital markets, as part of the proposed Digital Markets, Competition and Consumer Bill, will also likely apply to the use of AI.

Alongside this patchwork of legislation, UK regulators such as the Information Commissioner's Office (ICO), the Equality and Human Rights Commission and the Health and Safety Executive have taken action to support the responsible use of AI. The Competition and Markets Authority (CMA) has also consulted on how algorithms can reduce competition and harm consumers, and more recently published a discussion paper on how online choice architecture could harm consumers and competition. Efforts to develop AI standards have also been made by the DCMS, which announced the pilot of an AI Standards Hub in January 2022. The Hub will create practical tools and bring the UK's multi-stakeholder AI community together to ensure that global AI standards are shaped by a wide range of experts.

The UK government believes this combination of voluntary, regulatory and quasi-regulatory activity has resulted in a lack of clarity, along with overlaps, inconsistencies and gaps, in the UK's approach to AI regulation, putting business confidence, investment, consumer trust and, ultimately, the innovation and growth of AI at risk. Therefore, according to the UK government, the rules in the AI policy paper will cement the UK's position as an AI leader, improving clarity and coherence whilst establishing an internationally competitive, innovation-driven regulatory approach.

The UK AI Policy Paper proposals

The government's proposal is underpinned by the following six cross-sectoral principles that the relevant regulators and bodies must implement and apply:

  1. ensure that AI is used safely;
  2. ensure that AI is technically secure and functions as designed;
  3. make sure that AI is appropriately transparent and explainable;
  4. consider fairness;
  5. identify a legal person to be responsible for AI; and
  6. clarify routes to redress or contestability.

According to the UK government, these cross-sectoral principles are tailored to the specific characteristics of AI. The policy paper goes on to describe the UK government's overall approach as:

  • context-specific – acknowledging that AI is a dynamic and general purpose technology and that the particular risks that arise depend on the context of its application;
  • risk-based – asking regulators to focus on applications of AI that result in real, identifiable and unacceptable levels of risk, rather than imposing controls on uses of AI that pose low or hypothetical risk, so as to avoid stifling innovation;
  • coherent – ensuring a simple, clear, predictable and stable system; and
  • proportionate and adaptable – asking regulators to consider lighter touch options, such as guidance or voluntary measures, in the first instance.

Alongside these principles, regulators will also be encouraged to consider non-binding options, including guidance, voluntary measures and the creation of sandboxes – trial environments where businesses can test the safety and reliability of an AI solution before bringing it to market.

Ultimately, the principles will apply to any actor in the AI lifecycle whose activities create risks that the regulators believe should be managed through the context-based approach. How such an approach will operate in practice, where different regulators with different rule-making powers are tasked with its implementation within their regulatory remits, raises two particular concerns:

  • Interpretation of principles: There are likely to be differences in interpretation between regulators; for example, when they are tasked with deciding what "fairness" or "transparency" means for AI development or use in the context of their sector or domain. Whilst this approach may be "context-specific", it could be difficult to navigate, especially for businesses whose operations fall under the remit of more than one regulator.
  • Varying implementation powers: Similarly, given the differences in the powers regulators have to make rules, inconsistencies may arise in how these "sector- or domain-specific AI regulation measures" can be implemented. Whilst regulators such as the Financial Conduct Authority and the Prudential Regulation Authority have wide-ranging powers to make rules that are legally binding for those who are subject to them, others, such as the ICO and CMA, have more limited rule-making powers.

Although the UK government has proposed putting the principles on a non-statutory footing in order to remain agile in the face of rapid changes in AI, the DCMS has recognised that legislation may be necessary in the future. In particular, it recognises the need to review the roles, powers, remits and capabilities of regulators, with the potential for specific new powers to be created. Hopefully, these issues will be addressed in the forthcoming White Paper, which is due to be published later this year.

The UK approach vs. the EU's draft AI Act – what's new?

The UK's proposed plans diverge from the approach taken by the EU's draft AI Act in a number of ways. Key differences include:

Scope: definition of AI

A key distinction between the EU's and the UK's definitional approaches is the UK's proposal not to define AI in statute.

In attempting to harmonise rules across multiple countries, the EU grounded its approach to drafting the AI Act in a specific, central definition of AI, which would apply across all sectors and uses. That definition, as currently drafted, is very broad and may capture software that would not be seen as involving AI in any usual sense – for example, an Excel spreadsheet. The EU AI Act then relies heavily on the 'high risk' overlay to cut back its scope, but some of those elements are themselves cast in very broad terms. Whilst the definition is subject to revision and debate, it marks a starting point for policy discussion.

Conversely, the UK government believes that such an approach fails to capture the full application of AI and its regulatory implications. The proposed rules seek to strike a balance between granularity and flexibility by avoiding a 'central' definition and instead focusing on two criteria that would limit the scope of what is captured by AI regulation: adaptivity and autonomy. It is left to UK regulators and relevant bodies to develop more detailed definitions of AI in accordance with their specific domains and sectors. The UK's policy proposal suggests that this context-specific approach will allow the UK to provide greater certainty about the scope and nature of UK regulation of AI while remaining flexible – recognising that AI may take forms in the future that cannot easily be defined now.

It should be noted, however, that the EU Commission's proposed approach does also recognise that the central definition of AI may need to be updated with the evolving state of the art through the adoption of delegated acts (of course, the practical feasibility of changing the definition in an agile way remains to be seen).

Cross-sectoral principles – decentralisation

It seems that both the EU and the UK plan to rely on decentralised supervision and enforcement with some centralised oversight and coordination: in the EU, by the Commission (which retains its role as guardian of the Treaties) and the proposed AI Board; in the UK, by the UK government, although we expect the government's role to be more high-level than that of the Commission and the AI Board in the EU.

In the EU, market surveillance authorities will supervise compliance with the harmonised rules and standards set out in the regulation and its secondary legislation. The AI Board has no formal role in rule-making or standard-setting under the AI Act other than providing advice. The UK government is likewise proposing to decentralise rule-making and standard-setting.

Under this regime, responsibility for AI governance will be allocated to multiple regulators, including Ofcom, the Competition and Markets Authority, the ICO, the Financial Conduct Authority and the Medicines and Healthcare products Regulatory Agency, which means that different rules may apply to different sectors or types of use of AI in the UK.

For the UK's decentralised approach to succeed in facilitating the UK government's stated aims of driving innovation and ensuring organisations have appropriate guardrails to inform risk decisions, there will need to be collaboration between the different regulatory bodies and consistency of approach. This will be particularly important given that there is not yet global consensus on how to regulate AI.

Legal liability – identifying a responsible person

One of the UK's proposed cross-sectoral principles is to "identify a legal person to be responsible for AI", meaning that "accountability for the outcomes produced by AI and legal liability must always rest with an identified or identifiable legal person – whether corporate or natural". This could have far-reaching repercussions if made into law.

Issue 1: Liability for compliance with AI laws and regulations – fines and administrative penalties

The EU AI Act seeks to allocate responsibility for compliance with its rules by reference to the relevant actor's role (e.g. whether someone is the provider, importer or distributor). The UK's policy paper does not make clear how legal liability for AI governance will be allocated – it may be that UK regulators will be required to clarify who they consider responsible for ensuring compliance with their rules and who, therefore, may receive fines and other administrative penalties.

Issue 2: Damages Claims

There is a separate issue as to whether non-compliance with legal requirements relating to AI gives rise to claims for damages by third parties – such as an individual affected by a biased AI outcome or a business that has purchased a malfunctioning AI solution. This type of liability is not addressed in the EU AI Act, and it is unclear whether the UK's AI proposals indicate reforms that would affect the allocation of liability in this context. Ultimately, under both the EU and UK regimes, businesses will increasingly need to be able to trust the AI tools they use, creating both challenges and opportunities for software companies in legal tech and other sectors.

A work in progress – next steps

Whilst the policy paper has set out the UK government's overall approach to regulating AI, further refinement of the details and implementation process will follow. Specifically, the UK government has said it will consider:

  • Whether the proposed framework adequately addresses the UK's values and ambitions whilst also allowing effective co-ordination with other international approaches or whether it leaves gaps that require a more targeted solution.
  • How the approach can best be put into practice, including the roles, powers, remits and capabilities of regulators, the need for co-ordination, and how this will be delivered across the regulators involved.
  • How the framework will be monitored to ensure that it delivers the UK's proposed approach to AI regulation in a way that anticipates future developments, mitigates emerging risks and maximises the benefits of future opportunities.

For now, organisations operating within the UK should continue to take action to ensure that their use of AI aligns with existing legal and regulatory requirements and guidance, while considering the impact of the AI policy paper's proposals on their operations.

A call for stakeholder contributions

Interested stakeholders can submit their views and evidence on the UK government's proposed approach until 26 September 2022. The UK government proposes to set out its position on these topics through the forthcoming White Paper and public consultation, which it plans to publish in late 2022.