AI Meets Regulation: At the Intersection of the EU’s AI Act and Pharma Compliance Strategy
Could unchecked AI decide who lives and who doesn’t? The EU’s AI Act is rewriting the rules to make sure this doesn’t happen. We are helping clients in the healthcare and life sciences sector reshape their compliance strategies. Read our insights in the latest blog.
In 2025, the U.S. Food and Drug Administration (FDA) completed its first AI-assisted scientific review pilot, dramatically reducing review times for new therapies from days to minutes. This wasn’t just a technological breakthrough. It was also a regulatory wake-up call. AI had officially entered the pharmaceutical mainstream, not just as a tool for drug discovery, but as a core component of how medicines are evaluated, approved, and monitored.
Europe has moved in parallel. The EU’s AI Act was conceived as a horizontal, risk-based framework, with obligations for general-purpose AI models added during the negotiations and sector-specific applications captured through its high-risk categories. Pharmaceutical AI (especially tools used in diagnostics, patient monitoring, and clinical decision support) is typically deemed “high-risk”, whether as a safety component of a device regulated under the MDR or IVDR or under the use cases listed in Annex III. This classification isn’t arbitrary, but a reflection of the growing importance and utility of such technologies and the risks they present.
For legal teams in the healthcare and life sciences sector, this shift means one thing: AI is no longer a fringe innovation. It’s a regulated technology. And compliance strategies must evolve accordingly.
Why does this matter?
Under the AI Act’s risk-based framework, many AI tools used in healthcare fall into the “high-risk” category. Under Article 6, AI used as a safety component of a product regulated by the MDR or IVDR is high-risk, as are the stand-alone use cases listed in Annex III; general-purpose AI models and other categories carry separate duties. In practice, this captures diagnostic algorithms, patient monitoring systems, and clinical decision-support platforms, and the classification triggers strict obligations around transparency, oversight, and lifecycle management.

Application is phased. Prohibitions and AI literacy duties started on 2 February 2025. Governance rules and the obligations for general-purpose AI models apply from 2 August 2025. The Act is generally applicable from 2 August 2026, with an extended transition to 2 August 2027 for high-risk AI embedded in regulated products. The AI Act doesn’t replace existing legislation, but adds to the matrix of overlapping regulations.
The requirements blur the lines between legal and technical domains, encompassing:
- Algorithmic transparency
- Bias mitigation
- Human oversight
- Lifecycle risk management
This represents a shift in focus for legal teams, who must now understand how algorithms work, how data flows through systems, and how risks evolve over time. That demands new skill sets and deeper collaboration with data scientists and engineers.
These converging demands call for an integrated approach to compliance. Legal teams must harmonise obligations across frameworks such as the AI Act, the MDR and IVDR, and the GDPR, ensuring AI systems meet multiple standards without duplication or conflict.
An important shift is ensuring that compliance activity begins at the design phase. Legal teams must be involved in product architecture, data sourcing, and risk modelling.
Internally, decision-making authority and accountability for AI initiatives have often been unclear, complicating validation processes and risk management.
Companies in digital therapeutics, personalized medicine, and AI-driven clinical trials will feel this most acutely. Legal teams must assess whether existing governance models are fit for purpose, or whether upgrades are needed.
Our teams are monitoring developments for clients, particularly any changes to the current Annex III text and any indications of how regulators intend to direct their enforcement activities.
As the regulatory landscape evolves, there are some key practices that organisations in the sector can implement to support refreshed compliance activities:
- Implement a three-tier governance model. This should assign operational responsibility to an AI committee, strategic control to an executive committee, and ultimate oversight to the board
- Build privacy by design into every data flow, combining robust anonymisation techniques, dynamic consent management and transparent patient communication
- Safeguard data quality through documented provenance checks, bias testing and periodic re‑assessment of residual re‑identification risk
- Align intellectual property and data strategies by recording human inventorship, negotiating clear ownership of training data and model outputs, and updating contracts accordingly
- Detail liability allocation in internal policies and vendor agreements, supported by explainable AI methods that make decision logic auditable
- Invest in secure, high-performance infrastructure that supports compliant model training, validation and continuous monitoring across jurisdictions
- Maintain exhaustive documentation of data sources, processing steps and algorithm performance to satisfy both GDPR and AI Act transparency duties and to streamline merger or partnership due diligence. A minimal sketch of what such a record could look like follows this list.
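To make the provenance, bias-testing and documentation points above more concrete for mixed legal and technical teams, the sketch below shows one way a dataset provenance record and a simple bias check could be captured in code. It is a hypothetical illustration only, not a prescribed format: the names (DatasetRecord, demographic_parity_gap) and example values are ours, and real programmes would rely on validated tooling and far richer metadata.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class DatasetRecord:
    """Hypothetical provenance record for one training dataset."""
    name: str
    source: str                     # where the data came from
    collected_on: date              # acquisition date
    lawful_basis: str               # e.g. explicit consent
    anonymisation: str              # technique applied, if any
    bias_checks: dict = field(default_factory=dict)

def demographic_parity_gap(outcomes_by_group: dict) -> float:
    """Largest difference in positive-outcome rate between groups.
    0.0 means no measured disparity; a toy metric for illustration only."""
    rates = [sum(v) / len(v) for v in outcomes_by_group.values()]
    return round(max(rates) - min(rates), 3)

# Toy model outputs (1 = patient flagged for follow-up), split by recorded sex.
outcomes = {"female": [1, 0, 1, 1, 0, 1], "male": [1, 0, 0, 0, 0, 1]}

record = DatasetRecord(
    name="monitoring_cohort_v2",
    source="hospital partner export (pseudonymised)",
    collected_on=date(2024, 11, 1),
    lawful_basis="explicit consent",
    anonymisation="k-anonymity (k=10)",
    bias_checks={"demographic_parity_gap": demographic_parity_gap(outcomes)},
)

# A machine-readable record that can be produced for regulators or in diligence.
print(json.dumps(asdict(record), default=str, indent=2))
```

Even a lightweight, machine-readable record along these lines makes it easier to answer transparency requests and to evidence bias testing during due diligence.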
The AI Act marks a turning point in how technology and regulation interact. For many organisations, it’s a call to evolve.
While the AI Act raises the bar for compliance, it also presents an opportunity. By embracing its principles, pharma companies can strengthen public trust, improve patient outcomes and drive innovation in AI-powered healthcare solutions. From this perspective, the EU AI Act can be viewed as a catalyst for ethical and effective AI deployment in the sector.