Clifford Chance

Talking Tech

CMA response to the UK's AI policy paper

Antitrust | Consumer | Artificial Intelligence | 1 November 2022

Background

The Department for Digital, Culture, Media and Sport (DCMS) recently published a policy paper outlining the UK government's emerging thinking on its approach to regulating artificial intelligence (AI) systems in the UK (see our article What is the UK's new AI Rulebook?). The policy paper recognises that appropriate use of AI could deliver significant opportunities for businesses and, in turn, benefits to the UK economy. However, it also cautions that the increasing use of AI brings novel risks. The Competition and Markets Authority (CMA) recently responded to the government's call for stakeholder contributions on the AI policy paper. In its response, the CMA highlighted that algorithmic systems, including AI systems, risk enhancing incumbent firms' ability to self-preference at the expense of new, innovative services, enabling discriminatory personalised pricing, and are typically insufficiently transparent to consumers.

This article provides a summary of the CMA's key messages.

Three key messages

The CMA emphasised three key messages in its response:

1. The need for a principles-led approach to regulating the use of AI;

2. The importance of cross-regulatory initiatives; and

3. The need for an international outlook on regulating AI.

Principles-led approach to regulating the use of AI

Context-specific and risk-based approach

The CMA indicated its support for the government's proposed cross-sectoral and principles-based approach to regulating AI. The CMA emphasised that a context-specific and risk-based approach will enable regulators to prioritise interventions in relation to the most harmful practices, whilst not stifling the opportunities AI presents for UK businesses and consumers.

Nevertheless, the CMA stressed that such a pro-innovation focus on high-risk concerns should not prevent regulation from addressing harms which may have a diffused impact across many people or play out over a long period of time. The CMA emphasised that whilst harms to competition may not have the immediacy of health and safety risks or harms to fundamental rights, inappropriate use of AI systems might: (i) allow incumbent firms to sustain higher prices and lower quality for consumers; (ii) affect the livelihoods of people who own, invest in, and work for competitors that have been unfairly harmed; and, ultimately, (iii) harm innovation, productivity, and economic growth. For example, algorithmic systems may allow dominant firms to engage in exclusionary practices, including self-preferencing and manipulating ranking algorithms to exclude competitors. [See CMA – Algorithms: How they can reduce competition and harm consumers for further discussion of potential harms]. It is therefore important that any risk-based principles also take into account structural long-term risks, such as the concentration of key inputs to AI supply chains, including chips, data and computational resources.

Transparency

The CMA has lent its support to the cross-sectoral principle of ensuring AI is appropriately transparent and explainable, and additionally highlighted a number of points for the government to consider:

  • First, the CMA has emphasised the importance, from a competition perspective, of enabling consumers to have choice and control. For example, consumers may want to understand their options to opt out of an algorithm that recommends products to them, in addition to knowing that it is doing so.
  • Second, the CMA has cautioned that the scope of any mandated disclosure will need to be considered carefully: greater transparency could soften competition between firms and increase the risk of collusion.
  • Third, AI transparency initiatives could meet resistance from firms reluctant to share proprietary information. This could be addressed by regulators auditing key AI systems in a limited, targeted and proportionate way where wider transparency poses concerns.

Approach to regulation

The CMA also emphasised the importance of equipping regulators with the appropriate powers to intervene, including updating and adapting existing powers to match the pace of innovation in AI. Here the CMA highlighted the opportunity presented by establishing the new Digital Markets Unit and implementing broader competition and consumer reforms: to ensure the benefits from AI are widely felt, whilst still being able to effectively investigate and remedy high-risk harms to consumers and competition arising from the misuse of AI systems.

While AI applications are still developing, the CMA agreed with the government that a more light-touch approach to the regulation of AI using existing regulatory remits may be beneficial. However, the CMA stressed that it will be essential to ensure there is a "backstop of robust and effective, but proportionate, enforcement of this principles-led approach." This could be achieved through risk-based criteria and thresholds at which additional requirements could apply.

The CMA has also indicated that it is considering issuing specific guidance for businesses' use of AI systems in high-risk sectors or where those systems fulfil important functions in the digital economy (e.g., search, aggregation, reviews, recommendations and comparison services). This guidance is likely to draw on its findings from previous studies, including its 2020 Online Platforms and Digital Advertising market study and the State of Competition Report of April 2022, in particular as regards the risks posed by firms with strategic market status (i.e., firms with substantial and entrenched market power in a digital market) and the presence of weak competition in the digital advertising market. The CMA is also likely to draw upon its recent studies on algorithms, in which it has already begun to provide some guidance to businesses on the steps they should be taking to improve transparency, set up ethical oversight mechanisms and improve their records of the behaviour and decisions of their algorithmic systems. [See CMA – Algorithms: How they can reduce competition and harm consumers, section 3]. This ties in with the work the CMA has undertaken, as a member of the Digital Regulation Cooperation Forum (DRCF), to improve regulators' understanding of how to reliably and consistently assess the impact of algorithmic systems, including through the use of auditing processes. [See: Auditing algorithms: the existing landscape, role of regulators and future outlook].

Approach to interactions between relevant stakeholders in the regulation of AI

The CMA welcomed the government's support for voluntary fora, such as the DRCF. Such fora serve as an opportunity to draw together different perspectives, encourage regulatory coherence and eliminate gaps and overlaps between regulatory remits. This is increasingly important in areas where policy objectives interact – for example, remedies addressing competition concerns may also need to factor in data protection considerations. The CMA has therefore encouraged regular engagement from government with relevant regulators to ensure that front line considerations are taken into account as the regulatory landscape evolves.

The CMA additionally emphasised the importance of engaging with a wide range of stakeholders in the development of the new regulatory framework, to avoid the risk that the developed principles and standards favour incumbents or larger online service providers.

Advocating domestic and international coherence in the regulation of AI

Finally, the CMA indicated its approval of the government's advocacy of strong regulatory cooperation across jurisdictions. International coherence in the regulation of AI should help reduce the burden on businesses, particularly in fast-moving and innovative areas such as digital markets.

Conclusion

The CMA's response largely demonstrates support for the government's proposals. Nevertheless, whilst endorsing the government's desire to formulate a risk-based, pro-innovation approach, the CMA stressed that this must not prevent regulators from addressing harms which, although they may not be immediate or visible, have the potential to cause long-term, structural effects resulting in widespread harm to consumers. In addition, throughout the response the CMA frequently draws attention to the risks posed by incumbents' use of AI, a concern it has highlighted in its recent market studies and investigations in the digital and other sectors.