AI-Enabled Medical Devices: Preparing for a New Era of EU Litigation Risk
Stricter regulation, a claimant-friendly EU product liability regime and strengthened collective action tools are driving increased litigation risk for MedTech companies developing or deploying AI-enabled medical devices.
AI-enabled medical devices are medical devices ("MD") or in-vitro diagnostic medical devices ("IVD") that incorporate an artificial intelligence ("AI") component. Their number has increased steadily over recent years (more than a thousand had been authorised in the USA by the end of 2025, according to the FDA's statistics). Available data suggests that such devices are primarily used for diagnostic purposes (e.g., to assist clinicians in identifying disease-suspicious regions or assessing the likelihood of a given diagnosis).
In the European Union ("EU"), AI-enabled medical devices are currently subject to a dual-regulatory environment that is unprecedented in its scope and technical complexity:
• the EU Medical Devices Regulation ("MDR") and the In Vitro Diagnostic Medical Devices Regulation ("IVDR"), which establish the conformity-assessment framework applicable to devices placed on the EU market;
• the EU AI Act, which sets out specific obligations for providers and deployers of AI systems, including transparency and AI governance requirements.
Although these instruments impose extensive compliance obligations on manufacturers and other economic operators, they do not establish civil liability rules.
The European Commission decided in February 2025 (and officially confirmed its decision in October 2025) to withdraw the EU proposal for specific liability rules for AI, which was initially intended to complement the AI Act. In parallel, despite longstanding debates, no sector-specific liability regime has been introduced for health products.
As a result, liability for damage allegedly caused by AI-enabled medical devices continues to be governed primarily by the EU Product Liability Directive ("PLD"), which provides for strict (non-fault) liability for defective products.
An expanded product liability regime as of December 2026
Nearly forty years after the adoption in 1985 of the original PLD, a new PLD revising the EU product liability regime was adopted in 2024, with the stated objective of modernising a framework no longer suited to AI-driven and complex health products.
The revised PLD explicitly includes AI systems within its scope, whether they consist of standalone software or are embedded in, or interconnected with, another product. In practice, this means that both AI software as such and MDs or IVDs incorporating AI components fall within the scope of product liability rules.
The new PLD applies to products placed on the market or put into service from 9 December 2026 onwards. Discussions regarding national transpositions are currently ongoing, with industry stakeholders expressing concerns about divergent national approaches and the risk of over-transposition (e.g., in France and Germany).
Key takeaways for MedTech companies
More specifically, the revised PLD significantly expands the former liability framework (thus widening the scope for litigation) and introduces several new legal mechanisms that generally favour claimants:
1. A broader circle of potential defendants
Civil liability for defective products now extends to additional economic operators. Depending on the circumstances, most MedTech companies across the value chain may be exposed to product liability claims.
This expansion is likely to encourage "deep pocket" litigation strategies, with claimants targeting (i) all possible defendants or (ii) those perceived as best positioned to provide compensation or settle the case.
2. Regulatory breach as a direct liability trigger
The concept of "defect" has also been substantially broadened. A product is now considered defective if it does not provide either (i) the safety that a person is entitled to expect (as already provided under the 1985 PLD), or (ii) the safety required under EU or national law.
In practice, this means that non‑compliance with obligations arising under the MDR, IVDR, GDPR or the AI Act may qualify as a safety defect and give rise to strict civil liability. Regulatory compliance becomes directly intertwined with litigation risk, including in highly technical areas such as quality management, data governance and AI governance (for example, transparency, human oversight, logging and record‑keeping obligations).
3. New liability risks linked to AI's adaptive behaviour
Beyond regulatory compliance, the assessment of defectiveness must now take into account AI-specific characteristics, including the product's ability to continue learning or to acquire new features after being placed on the market.
Where a manufacturer designs an AI-enabled medical device capable of developing unforeseen or autonomous behaviour, it may remain liable for harm caused by that behaviour, even when it materialises only after the product has been placed on the market. This significantly raises the bar for product design, documentation and post-market monitoring.
To mitigate these risks, manufacturers will need to integrate technical and organisational safeguards at the design stage, including mechanisms to prevent unintended uses and control post‑deployment evolutions. This issue is particularly acute for digital and software‑driven medical devices.
4. A new evidentiary balance between MedTech defendants and claimants
The revised PLD also introduces a "disclosure of evidence" mechanism (clearly inspired by common-law discovery and disclosure systems). Where a claimant has presented sufficient facts to establish the "plausibility" of their claim for compensation, national courts may order the defendant to disclose "relevant evidence" under its control, subject to (i) necessity and proportionality principles and (ii) the protection of trade secrets. A failure to comply with a disclosure order may give rise to a presumption that the product is defective.
For operators of AI-enabled medical devices, this may have significant consequences. Internally, compliance with disclosure orders may entail substantial costs linked to the organisation and collection of the requested evidence.
This exposure is further exacerbated by the 2024 PLD's requirement that evidence be presented in an easily accessible and easily understandable manner, including documents that must be created for this purpose. Externally, there is an increased risk of disclosure of sensitive information, such as algorithms, training datasets, model architecture or vigilance data.
Although national courts retain the ability to limit access to disclosed evidence, the precise scope of these safeguards will depend heavily on national transposition measures, which are therefore closely scrutinised by the industry.
5. A shift in the burden of proof
Finally, the 2024 PLD introduces rebuttable presumptions of defectiveness and causation, shifting the burden of proof.
The main novelty is that a presumption of defectiveness, of a causal link between the defectiveness and the damage, or of both may apply where the claimant faces excessive difficulties in proving defectiveness or causation, provided it is shown that such a defect or causal link is possible.
The PLD expressly notes, in its recitals, that such difficulties may arise in the context of AI systems and innovative or complex medical devices. This targeted reference is particularly concerning for manufacturers of AI‑enabled medical devices.
Increasing exposure to collective and quasi-collective actions
In parallel with the revised product liability regime, developments in class actions and litigation strategies are reshaping the risk landscape for health‑tech manufacturers.
Since the 2010s, a new generation of lawyers and online platforms has been seeking to aggregate claims in the life-sciences sector. MedTech companies have already been exposed to such strategies in high‑profile cases.
With effect from June 2023, class actions (called "representative actions" in the EU) and third-party funding possibilities across the EU have been further strengthened by Directive (EU) 2020/1828 on representative actions.
Certain national transpositions of that Directive have also created additional liability risks. For instance, France, when transposing the Directive, added a new civil fine mechanism for lucrative torts (the fine may amount to up to five times the profit derived from the tort, or the maximum administrative or criminal fine incurred for the same conduct), with the collected fines earmarked to finance future class actions.
Against this background, the combined effect of an expanded liability regime and reinforced collective redress mechanisms is likely to significantly increase class‑action‑type litigation, particularly in highly regulated and technology‑driven sectors such as medical devices.
How Clifford Chance can help
Clifford Chance is uniquely positioned to assist MedTech companies navigating this evolving regulatory and litigation landscape. Our global reach, deep sector knowledge and experience in managing complex disclosure processes and class actions enable us to support clients at the intersection of regulatory compliance, risk mitigation and contentious strategy.