
The Impact of the new EU AI Act on the Healthcare Sector

Part One: Scope and a Risk-Based Approach

Healthtech | Artificial Intelligence | Healthcare & Life Sciences | 13 March 2024

Almost three years after the publication of the EU Commission's proposal for a Regulation on Artificial Intelligence (AI Act), the EU institutions involved in the legislative process have finally reached a consensus: with its vote on 26 January 2024, the Permanent Representatives Committee presented a compromise text on the AI Act.

On 13 March 2024, the European Parliament adopted the AI Act, with final confirmatory votes by all institutions to be taken in April 2024 (before the European elections in June 2024) to pass what will be the world's first comprehensive regulation on AI. The AI Act will enter into force on the 20th day after its publication in the EU Official Journal and will generally apply 24 months later (i.e. by 2026), except for a few provisions that will become applicable six and 12 months, respectively, after the AI Act has entered into force.

To facilitate compliance with the new legal framework for AI in Europe, the Commission has recently launched an AI Pact, a voluntary initiative that calls on AI developers from Europe and beyond to acquaint themselves with, and comply with, the key obligations under the AI Act ahead of time. But what exactly are these obligations? We will focus on key aspects of the new AI Act and its impact on the healthcare sector in a series of blog posts over the coming weeks, starting with the scope of application and the risk-based approach of the new European law.

AI can and will be used across all industries, including the healthcare sector. The AI Act must therefore be applied in accordance with the industry-specific sectoral regulations that already exist in Europe. In the healthcare sector, AI will be used in medical devices in particular, and AI-based medical devices will be subject to and governed by both the EU Medical Device Regulation 2017/745/EU (MDR) and the AI Act at the same time. This will require coordinated interaction between the AI Act and the MDR, and thoughtful application by the competent authorities, in order to avoid overregulation and legal uncertainty in Europe's innovation-friendly healthtech sector.

AVOIDING OVERREGULATION AND LEGAL UNCERTAINTIES

A Clear Definition of AI

The proposed definition of AI systems in the AI Act was repeatedly modified during the legislative negotiations to take appropriate account of the versatility, opacity and complexity of AI. In line with the latest definition proposed by the OECD, an AI system within the meaning of the new AI Act is "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments". The AI Act further clarifies that it does not apply to simpler traditional software systems or programming approaches that are based on rules defined solely by natural persons to automatically execute operations. As technical progress will have an impact on the definition of AI, it remains to be seen whether and to what extent the proposed definition in the AI Act can keep pace with technical development, or whether (and how quickly) it will need to be revised and adjusted.

A Risk-Based Approach

The AI Act takes a risk-based approach, with AI systems categorized into four different risk classes. In the lowest risk category are AI systems with minimal risk, followed by low-risk, high-risk and prohibited AI systems.

  • Minimal and Low Risk: For minimal and low-risk AI systems, the AI Act lays down basic requirements regarding registration, information, transparency, labelling of AI content, model evaluations, risk assessment and management, reporting (in the event of serious incidents), cyber security and information on energy efficiency.
  • High Risk: For high-risk AI systems, the AI Act provides for stricter requirements, including in relation to risk and quality management systems, training for developers, validation and test data sets, data governance and management procedures, transparency, documentation and information obligations. Also, high-risk AI systems require a "human-machine interface" or other suitable device to ensure reliable supervision by natural persons for the duration of their use.
  • Prohibited: Prohibited AI systems (such as social scoring systems) may not be placed on the European market and will be banned six months after the AI Act enters into force.

TAKEAWAY

Against this background, all stakeholders in the healthcare industry should acquaint themselves with the new European legal framework for AI and its implications for their products and services as early as possible, to lay the foundations for successfully aligning their operational and strategic business with the new legal requirements. Please also read our upcoming blog posts for further actionable steps to ensure full compliance once the new provisions become applicable.