Clifford Chance

Talking Tech

The potential impact on the use of AI in healthcare: the EU White Paper on Artificial Intelligence

Healthcare to be classified as "high-risk" sector

Healthtech | Healthcare & Life Sciences | 30 April 2020

"AI is developing fast. It will change our lives by improving healthcare (e.g. making diagnosis more precise, enabling better prevention of diseases),[…] and in many other ways that we can only begin to imagine." This introduction of the official White Paper on Artificial Intelligence (AI) provides an insight into the potential of AI seen by the European Commission and especially highlights the relevance of the healthcare sector in this context

In February 2020, the European Commission published a White Paper setting out a consistent strategy on the use of AI. The White Paper emphasises the tremendous opportunities of AI while also focusing on the accompanying risks. The use of AI in healthcare is said to have almost limitless potential. Inherently, however, healthcare is one of the "high-risk" sectors and therefore requires special attention in the envisaged European legislation and the corresponding legislative process.

European regulatory framework on AI

According to the European Commission, the complex and non-transparent decision-making processes of AI require European legislation which – in some areas – has to go beyond the protection already provided by other EU legislation. By implementing a uniform strategy on the use of AI, the European Commission counters the looming risk of fragmentation of the member states' legislation. Moreover, a consistent European regulatory framework underlines the potential of AI and fosters its effective use in a collective European approach. Even though the know-how and expertise needed to use AI successfully is to a certain extent already present, large volumes of public and industrial data remain unexploited. Additionally, the European Commission finds that AI is under-used in particular by small and medium-sized companies [this is also reflected in a representative survey on behalf of Clifford Chance on "The Challenges of Digitalisation"]. The suggestions in the White Paper therefore also focus on the support of, and the respective benefits for, such small and medium-sized companies. In general, uniform European legislation can specifically target the upcoming risks related to AI and can take technological and commercial developments across the member states into account.

Furthermore, the European legislation is intended to ensure the ethical handling of AI technology based on collective European values. This includes respect for fundamental rights, human dignity, inclusion and non-discrimination. In addition, the protection of privacy as well as the protection of (personal) data shall be guaranteed.

Risk-based approach

To increase the practicability and the transparency of AI, the relevant sector-specific characteristics have to be reflected in the envisaged European regulatory framework. This may be achieved, on the one hand, by reflecting the unique characteristics of AI in the already existing European legislation and, on the other hand, by implementing a risk-based approach in the envisaged new AI-specific regulations. An important point to note is that the new regulatory framework on AI would in principle only apply to technologies identified as "high-risk" applications. Hence, the European Commission suggests classifying the risks originating from each individual AI technology. This classification has to distinguish between the potential risk originating from the sector in which the AI technology will be used and the risk arising from the specific AI technology and its intended use.

With respect to the sector in which the AI technology will be used, the European Commission suggests drawing up an exhaustive list of all "high-risk" areas. According to the White Paper, healthcare will be classified as such a "high-risk" sector, given that significant risks are deemed most likely to occur there. The individual use of AI technology in healthcare will also typically involve significant risks. However, a flaw in, for example, the appointment scheduling system of a hospital will normally not pose risks of such significance as to justify a "high-risk" classification and subsequent legislative intervention. This risk-based approach is particularly important to help ensure that potential regulatory interventions remain proportionate.

Types of requirements

The European Commission already points out key features which may be reflected in future requirements for the use of "high-risk" AI systems:

Training data

The data sets used should comply with certain prerequisites concerning data quality. For instance, it should be ensured that the data is sufficiently broad and covers all relevant scenarios needed to avoid dangerous situations, and that the use of sufficiently representative data prevents outcomes entailing prohibited discrimination. Moreover, personal data should be adequately protected during the use of AI.

Keeping of records and data

To allow potentially problematic actions or decisions by AI systems to be traced back and verified, records in relation to the programming of the algorithm and the data used to train AI systems should be kept for a reasonable period of time. This should in particular ensure effective enforcement of the relevant legislation.

Information provision

To ensure a certain level of transparency, adequate information should be provided in a proactive manner. This information should in particular cover the capabilities and limitations of the AI system and its intended purpose of use.

Robustness and accuracy

To minimise the risk of harm caused by the AI applications, future requirements should ensure that the systems have been developed in a responsible manner and are technically robust and accurate.

Human oversight

To achieve the objectives of trustworthy, ethical and human-centric AI, varying degrees of human oversight should be implemented. Human oversight could be involved at different stages of the AI system's processes, e.g. through human review before an output of the system takes effect, or through human monitoring of the AI system while in operation combined with the ability to intervene in real time.

Specific requirements for certain particular AI applications

In particular, the processing of biometric data, e.g. for facial recognition in public spaces, should be subject to particularly high standards and requirements.

Conformity assessment for "high risk" AI applications

To address the risks arising from "high-risk" AI technologies, the European Commission suggests the implementation of an objective, prior conformity assessment to verify and ensure that certain mandatory requirements, which will be set out in the respective European legislation, are complied with. In more detail, the conformity assessment could in particular include close scrutiny of the algorithms and of the data sets used in the development phase. In addition, it could stipulate procedures for testing, inspection and certification.

With respect to the use of AI technology in medical devices, the suggested conformity assessment for "high-risk" AI applications should be integrated into the conformity assessment mechanisms that already exist for medical devices. The European Commission further addresses the need to consider the ability of certain AI systems to evolve and learn from experience gained in the use of the technology. This may require repeated assessments over the lifetime of the AI systems. Besides this, defects and inconsistencies must be managed appropriately throughout the life cycle of the device.

The handling of low(er)-risk AI technologies is not specifically addressed by the White Paper. However, the European Commission discusses the establishment of a voluntary labelling scheme which would give interested companies the opportunity to signal that their AI technology is trustworthy and complies with certain European standards.

In general, all AI applications subject to legal requirements – not limited to "high-risk" AI technologies – should also be subject to a continuous market surveillance scheme. This applies especially if the applications may entail risks for fundamental rights.

Impact on companies interested in AI

The European Commission's White Paper on Artificial Intelligence is a meaningful preview of potential future European legislation on the handling and use of AI. It follows a consistent approach to ensure the ethical and trustworthy treatment of AI. By its nature as a White Paper, it only outlines the envisaged approach to the various issues arising in the implementation of a European regulatory framework on AI. Nevertheless, the paper illustrates the importance of AI in all sectors concerned.

Even though the White Paper does not provide details about any future European legislation, it provides valuable information about the handling of AI with regard to its use in healthcare. According to the approach of the White Paper, AI applications used in the healthcare sector will be classified as "high-risk" technologies and will most likely be subject to future European legislation. R&D departments of healthcare companies are therefore well advised to start preparing for future regulations now. The White Paper already focuses on various types of requirements. In line with these, healthcare companies should already comply with the requirement to keep accurate records in relation to the programming of the algorithm and the data used to train AI systems. In addition, records which provide information about the capabilities, limitations and responsible development of the AI systems should also be kept. This will significantly reduce the costs and effort of (re)producing such information at a later stage and will additionally make it possible to satisfy potential requests of competent authorities in due time. Furthermore, interested companies should already take such obligations into account in potential R&D agreements and joint ventures with regard to non-disclosure agreements or other confidentiality arrangements.

Next steps

The White Paper and the respective public consultation invite stakeholders with an interest in AI, as well as citizens, to provide feedback on the questions raised and the policy options proposed. The deadline to provide feedback has been extended to 14 June 2020 due to the COVID-19 (SARS-CoV-2) pandemic, and further extensions cannot be excluded at this point in time.

Upon expiry of the consultation deadline, the European Commission will assess the feedback received, which will have a decisive impact on the next steps taken by the European Commission. However, a specific roadmap providing information about the legislative process and its timing has not yet been published.

Key Take-Away Points
  • The European Commission suggests a classification of the risks of AI technologies to determine the applicability of the envisaged European regulatory framework.
  • Healthcare will be classified as a "high-risk" sector and will generally fall within the scope of the new AI-specific legislation.
  • Healthcare companies should prepare for the upcoming requirements now.