Clifford Chance | Talking Tech
The EU AI Act: concerns and criticism

Artificial Intelligence | Europe | 6 April 2023

Council of the EU's general approach

In April 2021, the European Commission published its proposal for a European law on artificial intelligence (the "AI Act"), the first comprehensive regulatory scheme for AI. The AI Act is currently going through a detailed legislative process, during which it is likely to be amended, and it is unlikely to be enacted until late 2023 or during 2024. In December 2022, following public consultations and suggested changes by EU committees and institutions, the Council of the EU (the "Council") adopted its common position ('general approach') on the AI Act. The Council's aim is to ensure that AI systems placed on the EU market and used in the Union are safe and respect existing laws on fundamental rights and EU values, whilst also creating a climate of innovation to harness the benefits of AI and position the EU as a pioneer in the regulation and development of technology.

In recent months, the AI Act and the Council's approach have given rise to several responses and concerns from different industries. We discuss some of the key concerns in more detail in this article.

Concerns and criticism

AI definition

One of the fundamental concerns is the definition of AI proposed in the AI Act. Initially, AI was defined as "software that is developed with one or more of the techniques and approaches listed in [the AI Act] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with." This caused concern, as it did not provide sufficiently clear criteria for distinguishing AI from simpler software systems. The Council's general approach seeks to address this concern by suggesting a narrower definition: "[AI system] means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts."

However, despite these amendments, concerns remain that the definition is still too wide and continues to capture 'simpler' software systems. Critics argue that regulating such systems, which do not pose significant risk, could stifle innovation and create legal uncertainty about what should be considered AI.

Risk classification system

The AI Act classifies AI systems into one of four risk categories according to their intended purpose, with different rules applying to each category. This so-called 'pyramid of criticality' consists of the following risk levels (a simplified sketch in code follows the list):

  • Unacceptable risk AI systems, e.g., a system used by a public authority to assign a social score to an individual, leading to detrimental or unfavourable treatment – banned from entering the Union market
  • High risk AI systems, i.e., (a) AI systems that are intended to be used as a safety component of products that are subject to a third-party conformity assessment, or that are used in the management and operation of certain critical infrastructure, or (b) stand-alone AI systems that have fundamental rights implications and that are explicitly set out in the Act, e.g., systems used to determine an individual's access to an educational institution – must comply with the bulk of the obligations under the AI Act
  • Limited risk AI systems, e.g., an AI chatbot answering simple customer support questions on an e-commerce website – obligations limited to specific transparency requirements, i.e., it must be disclosed to users when they are interacting with AI or when they are subject to decision-making by an AI system
  • Minimal risk AI systems, e.g., email filters that scan and sort mailboxes for spam emails – no specific obligations under the AI Act
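
To make the tiered structure concrete, the classification logic can be pictured as a simple mapping from example use cases to risk levels and the consequences attached to each. The short Python sketch below is purely illustrative: the names RiskLevel and EXAMPLES are our own, and actual classification under the AI Act turns on a legal assessment of a system's intended purpose, not on a lookup table.

  from enum import Enum

  class RiskLevel(Enum):
      UNACCEPTABLE = "banned from entering the Union market"
      HIGH = "must comply with the bulk of the AI Act's obligations"
      LIMITED = "specific transparency obligations only"
      MINIMAL = "no specific obligations under the AI Act"

  # Illustrative examples drawn from the list above; under the AI Act,
  # classification depends on the system's intended purpose, not keywords.
  EXAMPLES = {
      "social scoring by a public authority": RiskLevel.UNACCEPTABLE,
      "determining access to an educational institution": RiskLevel.HIGH,
      "customer support chatbot on an e-commerce website": RiskLevel.LIMITED,
      "email spam filter": RiskLevel.MINIMAL,
  }

  for use_case, level in EXAMPLES.items():
      print(f"{use_case}: {level.name} risk – {level.value}")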

The high-risk category imposes a substantial number of additional requirements on AI systems with regard to safety, transparency and human oversight, both during development and after an AI system has been deployed. The concern raised by some is that AI systems that are unlikely to cause serious fundamental rights violations or other significant risks could be captured by this category, and that it imposes obligations that are not technically feasible and/or create an undue compliance burden for stakeholders.

The Council has sought to address these concerns in its general approach; however, concerns remain. We examine two notable ones below: (1) a broad range of AI systems used by the insurance sector has been categorised as high-risk; and (2) it is unclear how the AI Act will regulate general purpose AI systems such as ChatGPT.

The insurance sector

AI is widely used in the insurance sector. For example, conversational AI such as chatbots or voicebots can provide support to customers 24/7, and AI can be used for claims automation to settle claims faster, for fraud detection by analysing huge volumes of data from multiple sources, and to process and classify incoming unstructured correspondence and data received through mail, letters, sheets, etc.

On 23 February 2023, several European insurance organisations issued a joint statement of their concerns with the AI Act, particularly with the position taken in the Council's general approach under which all AI systems intended to be used for risk assessment and pricing in life and health insurance are categorised as 'high risk'. The Council's reasoning is that, if not duly designed, developed and used, such AI can lead to serious consequences for people's lives and health, including financial exclusion and discrimination. The joint statement views the Council's general approach as too far-reaching and suggests reverting to the high-risk classification used in the original proposal, under which each AI system used by the insurance industry would be classified according to its actual risk, rather than a broad range of insurance AI systems being automatically placed in the high-risk category.

This is part of a wider concern that the AI Act in its current form creates uncertainty, and a resulting risk of fines and penalties, which could further stifle innovation and the uptake of AI in Europe. The insurers, in their joint statement, therefore also called for: (a) legal certainty on the extraterritorial application of the AI Act's scope and its effect on the Union's international partners; (b) assurance that the Act will not overlap or compete with existing obligations; (c) a clear allocation of responsibilities along the value chain, including the contractual freedom of the various stakeholders in the value chain to allocate those responsibilities; and (d) the adoption of the narrower definition of an AI system that is expected to be agreed by the European Parliament.

ChatGPT and other general purpose AI

The recent developments around ChatGPT and similar AI systems ("general purpose AI") also pose new challenges for regulators, because such systems do not have one specific purpose. ChatGPT, for example, can be used for a broad range of tasks, from writing code to providing dinner recipes. General purpose AI is therefore not easily classified into one of the four risk categories. On the one hand, ChatGPT is merely an algorithmic text generator that may not seem to pose a significant risk. On the other hand, the system has no awareness of ethics or morals and can produce a vast array of output in response to prompts (which could relate to many different use cases) without human oversight.

The Council has suggested in its general approach that, where general purpose AI technology is subsequently integrated into another high-risk system, certain requirements for high-risk AI systems would also apply to such general purpose AI systems through a (yet to be defined) implementing act. However, there is no consensus on whether this would be too strict or whether it is a future-proof approach. AI campaign group ForHumanity has written an open letter urging OpenAI (the developer of ChatGPT) to join the regulatory sandbox to help test the limits of, and establish boundaries for, the AI Act. It will be interesting to see what approach the EU takes to general purpose AI systems in the final version of the Act.

Conclusion

The proposed AI Act has been met with criticism and questions from a number of quarters. The European Union aims to become a pioneer in regulating new AI technologies to protect people from potential risks to fundamental human rights, whilst fostering innovation within the Union. However, regulating AI is proving difficult: minor changes in the Act can have major impacts for certain industries, and developments such as ChatGPT raise additional questions. EU legislators will be aiming to future-proof the legislation as far as possible while also finding the right balance between taking precautions to regulate AI-related risks and leaving sufficient room for innovation and experimentation. With the European Parliament reportedly voting on the AI Act on 26 April, we may see the next evolution of this landmark legislation address some of these critical open issues.