The EU AI Act is set to shape the direction of artificial intelligence applications in Europe's life sciences sector.
EU AI Act Introduces Comprehensive Regulations for AI in Life Sciences and Healthcare
The European Union's Artificial Intelligence (AI) Act, approved by the European Parliament in March 2024, is set to significantly impact the life sciences and healthcare industries. This regulation lays down harmonized rules on AI, creating a risk-based regulatory framework for AI systems used across digital workflows, research and development (R&D), production, and post-market surveillance.
High-Risk AI Systems
High-risk AI systems, such as those embedded in or forming part of medical devices regulated under the Medical Device Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR), are subject to strict compliance requirements. These include AI systems directly involved in diagnosis, treatment, or patient monitoring, which require third-party conformity assessments and adherence to safety and transparency rules under the AI Act.
R&D and Regulatory Impact Analysis
In research and development, while pure R&D AI applications are generally excluded from some AI Act obligations, the use of AI in medicinal product lifecycle processes (e.g., data analysis, clinical development, pharmacovigilance) is recognized by regulators such as the European Medicines Agency (EMA). The EMA is developing guidance on managing AI-related risks in these areas to complement the AI Act’s framework, highlighting the need for risk and regulatory impact analysis before deploying AI in drug development or clinical trials.
AI-Driven Digital Workflows
The AI Act also covers AI-driven digital workflows in healthcare, demanding transparency and accountability, particularly when general-purpose AI models (e.g., large language models) are customized for clinical or corporate functions. Organizations must carefully evaluate whether such modifications qualify them as AI providers, which would trigger specific obligations such as compliance with the General Purpose AI Code of Practice.
Production Processes and Liability
For production processes involving AI components, especially in smart medical devices (software with AI fulfilling a medical purpose), manufacturers face an integrated liability and safety regime under the AI Act combined with the MDR and the revised Product Liability Directive. This increases procedural complexity but aims to ensure patient safety and clear accountability in AI-powered healthcare products.
Post-Market Surveillance
Regarding post-market surveillance activities, AI systems used for monitoring medical devices or pharmacovigilance are included under the high-risk category, requiring continuous compliance checks and risk management under both the AI Act and medical device regulations. This ensures ongoing safety, data protection, and transparency over the lifecycle of AI-enabled healthcare products.
Non-Embedded AI and Transition Period
The EU AI Act regulates not only embedded AI but also non-embedded AI, which serves the functionality of a product without being physically integrated into it. Following publication in the Official Journal of the European Union, the AI Act's provisions are expected to apply gradually over a 24-month transition period.
Alignment with Existing NLF Legislation
The EU AI Act's rules on high-risk AI systems align with the European Union's New Legislative Framework (NLF) adopted in 2008. Compliance of high-risk AI systems related to products covered by existing EU NLF legislation will be assessed as part of the conformity assessment already provided for in the applicable NLF law.
Adaptive AI and Learning Capabilities
AI systems may continue to adapt after deployment through self-learning capabilities. The EU AI Act accordingly recognizes the need for ongoing monitoring and updates to reflect such post-deployment changes.
In summary, the EU AI Act introduces comprehensive and stringent regulations on AI across life sciences and healthcare, mandating risk classification, transparency, compliance assessments, and liability measures, particularly for AI integrated with medical devices, clinical development, and post-market activities. At the same time, it aims to foster technological innovation through regulatory guidance and defined exemptions for pure research use.
- The Artificial Intelligence (AI) Act, in its regulations for life sciences and healthcare, mandates adherence to safety and transparency rules for AI systems involved in diagnosis, treatment, or patient monitoring, which are considered high-risk AI systems.
- In research and development (R&D), while pure R&D AI applications may be exempt from some AI Act obligations, using AI in medicinal product lifecycle processes, such as data analysis and clinical development, requires regulatory impact analysis to ensure compliance with the AI Act and related guidance, such as that developed by the European Medicines Agency (EMA).