Navigating Compliance Considerations Under the AI Act for AI-Driven Medical Devices


Introduction 

The integration of artificial intelligence (AI) and advanced technologies into medical devices is transforming healthcare by enabling unprecedented advancements in diagnostics, treatment planning, and patient management. From AI-powered imaging analysis tools to automated therapeutic recommendations, these innovations hold immense potential to improve patient outcomes and streamline clinical workflows. However, as the use of AI in medical devices continues to expand, so does the complexity of the regulatory environment that stakeholders must navigate. 

In Europe, the regulation of software with a medical purpose, including AI-driven solutions, is governed by the rigorous standards set forth in the Medical Devices Regulation (MDR)[i] and the In Vitro Diagnostic Medical Devices Regulation (IVDR).[ii] These regulations, fully applicable since May 26, 2021, and May 26, 2022, respectively, establish stringent requirements for the evaluation and marketing of such devices within the European Union (EU). 

With the recent adoption of the EU Artificial Intelligence Act (AIA),[iii] a comprehensive legislative framework that applies across all industry sectors, the regulatory landscape for AI has become significantly more complex. Under the AIA, AI systems integrated into medical devices regulated by the MDR and IVDR may be classified as high-risk, imposing additional compliance obligations under the AIA.[iv] This creates a dual regulatory landscape where stakeholders must navigate the AI-specific requirements alongside the established MDR/IVDR frameworks. 

Accurately determining whether your software qualifies as an AI system under the AIA is a critical step in ensuring compliance and identifying the appropriate regulatory pathways. Misclassifications can lead to missed compliance obligations or, conversely, unnecessary regulatory burdens. For a comprehensive overview of the AIA's foundational elements, including key definitions, scope, and risk classifications, we invite you to refer to our previous blog post on the AIA. This context lays the groundwork for understanding the specific compliance requirements discussed in this blog. 

This blog aims to offer actionable insights into preparing for the AIA’s compliance landscape when dealing with AI-driven medical devices. We will explore key considerations to help you effectively navigate the AIA requirements, ensuring that your AI systems meet compliance standards, are market-ready, and align with evolving regulations. It is important to note that this discussion is limited to AI systems that intersect with the MDR and IVDR regulations. We do not cover compliance requirements specific to MDR/IVDR or delve into the obligations for General Purpose AI (GPAI) or generative AI components that could be layered on top of existing AI systems. A detailed examination of these aspects would necessitate separate, focused analyses. 

High-Risk AI Systems and Medical Devices 

The AIA classifies AI systems into four risk-based categories: unacceptable risk, high-risk, transparency risk, and minimal risk. This tiered approach is designed to ensure that the regulatory burden is proportionate to the potential risks posed by the AI system. For a more detailed exploration of these classifications and their implications, we recommend reviewing our previous blog post. 

High-risk AI systems represent the most heavily regulated category under the AIA due to their significant potential impact on health, safety, and fundamental rights. Article 6 of the AIA outlines various criteria for determining high-risk status by referring to Annex I and Annex III. This discussion will specifically focus on Annex I, as it details Union harmonisation legislation directly relevant to the scope of this blog – particularly AI systems that may qualify as Medical Devices (MDs) or In Vitro Diagnostic Medical Devices (IVDs) under the MDR and IVDR. 

High-Risk AI System Criteria Linked to Annex I of the AIA 

According to Article 6(1) of the AIA, an AI system is classified as high-risk if both of the following cumulative conditions are met: 

(a) it is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex I; and 

(b) the product is required to undergo a third-party conformity assessment as per the Union harmonisation legislation listed in Annex I. 

The flowchart below provides a visual guide on how AI systems linked to the MDR/IVDR are assessed for high-risk classification under the AIA. This decision-making process illustrates the application of Article 6(1) criteria specifically for AI systems integrated into medical devices.

[Flowchart: assessment of AI systems under Article 6(1) AIA for MDR/IVDR-regulated devices]

Breaking Down the High-Risk Criteria for AI-Driven Medical Devices Under the AIA 

Let us now examine each criterion to understand how it applies to AI systems under the MDR and IVDR frameworks, clarifying when these systems are classified as high-risk under the AIA. 

(a) Product or Safety Component: 

▪ The Product Itself: The AI system can be the primary medical product, such as stand-alone AI software for diagnostics, therapeutic recommendations, or patient monitoring. Examples include AI-driven imaging analysis tools or software that provides clinical decision support. 

▪ Safety Component: Alternatively, the AI system can be a crucial part of a medical device, enhancing its safety and performance. For instance, an AI algorithm embedded in a heart monitoring device that triggers alerts during abnormal readings acts as a safety component because it directly affects patient safety by enabling timely interventions. 

(b) Third-Party Conformity Assessment Requirement:

In addition to being a product or safety component, the AI system must also be subject to a third-party conformity assessment by a notified body, as required under the MDR or IVDR, to be classified as high-risk.[v] Typically, medical devices classified as Class IIa, IIb, and III under the MDR, and Class B, C, and D under the IVDR, are particularly affected by this classification due to their inherent risks to patients. The higher the classification, the greater the potential impact on patient health, requiring more stringent evaluation and oversight. 

The AIA thus employs a targeted approach, classifying as high-risk only those AI systems that qualify as a product or safety component and are subject to third-party conformity assessment. This ensures that AI systems with significant impacts on safety and clinical outcomes are rigorously evaluated and held to the highest compliance standards under the AIA. 
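To illustrate the cumulative nature of these two conditions, the sketch below encodes Article 6(1) as a simple decision function. It is a minimal illustration only: the class, field names, and example values are our own assumptions, and it is no substitute for a case-by-case legal and regulatory assessment of a specific device.

```python
from dataclasses import dataclass


@dataclass
class MedicalAISystem:
    """Illustrative attributes of an AI system assessed under Article 6(1) AIA."""
    is_product_under_annex_i: bool           # the AI system is itself a product covered by the MDR/IVDR
    is_safety_component: bool                # or it is a safety component of such a product
    requires_notified_body_assessment: bool  # third-party conformity assessment under the MDR/IVDR


def is_high_risk(system: MedicalAISystem) -> bool:
    """Apply the cumulative conditions of Article 6(1) AIA.

    Both must hold: (a) the system is a product, or a safety component of a
    product, covered by Annex I legislation (here, MDR/IVDR); and (b) the
    product requires third-party conformity assessment by a notified body.
    """
    condition_a = system.is_product_under_annex_i or system.is_safety_component
    condition_b = system.requires_notified_body_assessment
    return condition_a and condition_b


# Example: stand-alone diagnostic AI software, Class IIa under the MDR,
# requiring notified body involvement -> high-risk under the AIA.
diagnostic_tool = MedicalAISystem(
    is_product_under_annex_i=True,
    is_safety_component=False,
    requires_notified_body_assessment=True,
)
print(is_high_risk(diagnostic_tool))  # True
```

Under this logic, a self-certified Class I device assessed without notified body involvement would fail condition (b) and fall outside the high-risk category, even though it is a medical device.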

Overview of Requirements for Providers of High-Risk AI Systems 

AI-driven medical devices classified as high-risk under the AIA must adhere to the requirements the AIA sets out for such systems. While many of these obligations align with existing MDR/IVDR standards—such as maintaining a quality management system, preparing detailed technical documentation, and providing clear instructions for use—the AIA introduces additional standards tailored to the complexities of AI technologies. Below is an overview of some of the key compliance requirements for providers of high-risk AI systems under the AIA:

▪ AI literacy (Art. 4): Providers must ensure their staff and operators have adequate AI literacy, considering their experience, training, and the context of the AI system’s use. 

▪ Risk management system (Art. 9): Providers must establish, document, and maintain a continuous risk management system throughout the system’s lifecycle. This includes identifying, analysing, and mitigating foreseeable risks to health, safety, or fundamental rights, performing consistent testing, and ensuring that residual risks are reduced to an acceptable level through design, mitigation measures, and adequate information and training for deployers. 

▪ Data and data governance (Art. 10): Providers must ensure that training, validation, and testing data sets meet stringent quality and governance standards, address potential biases, and adhere to strict safeguards, including data protection regulations, especially when processing special categories of personal data. 

▪ Technical documentation (Art. 11): Providers must prepare and maintain up-to-date technical documentation for high-risk AI systems before they are placed on the market, ensuring that it demonstrates compliance with the regulatory requirements and contains all essential information for assessment, as outlined in Annex IV of the AIA. 

▪ Record keeping (Art. 12): Providers must ensure high-risk AI systems have automated logging throughout their lifecycle, capturing events for system traceability, risk identification, and post-market monitoring. Essential logs include usage timestamps, input data checks, and personnel involved in result verification, where applicable (a minimal illustrative log record is sketched after this list). 

▪ Transparency and provision of information to deployers (Art. 13): Providers must ensure systems are designed for transparency, enabling deployers to understand and appropriately use the system’s output. Clear instructions for use must be provided, covering provider details, system capabilities, accuracy metrics, foreseeable risks, human oversight measures, performance specifications, maintenance guidelines, and, if relevant, mechanisms for log management and interpretation. 

▪ Human oversight (Art. 14): Providers must design systems that support effective human oversight to mitigate risks related to health, safety, or fundamental rights. Oversight measures should match the system's risk, autonomy, and context, enabling users to understand, monitor, interpret, and intervene as needed, including overriding or stopping the system. Systems should incorporate or facilitate oversight tools, ensuring deployers can verify outputs and prevent automation bias. 

▪ Accuracy, robustness and cybersecurity (Art. 15): Providers must ensure high-risk AI systems maintain accuracy, robustness, and cybersecurity throughout their lifecycle, including resilience to errors and unauthorized alterations. They must address vulnerabilities, such as data or model poisoning and adversarial attacks, and prevent biased feedback loops. Declared accuracy metrics should be included in user instructions, and technical solutions must align with risk levels and operational contexts. 

▪ Labelling and accessibility requirements (Art. 16): Providers must indicate their name, trade details, and contact address on the high-risk AI system, its packaging, or accompanying documentation and ensure compliance with accessibility requirements as per Directives (EU) 2016/2102 and (EU) 2019/882. 

▪ Quality management system (Art. 17): Providers must establish a documented quality management system that ensures regulatory compliance, covering areas such as design, development, data management, and risk assessment, among others. This system should also incorporate post-market monitoring, incident reporting, and clear communication protocols with relevant authorities and stakeholders. The scope and implementation of the quality management system should be proportional to the provider's organization size and may align with existing sectoral or financial governance laws where applicable. 

▪ Documentation keeping (Art. 18): Providers must retain technical documentation, quality management system records, and other relevant compliance documents for at least 10 years after the high-risk AI system is placed on the market. 

▪ Log keeping (Art. 19): Providers must retain logs automatically generated by their high-risk AI systems for at least 6 months or as specified by applicable Union or national laws. 

▪ Corrective action and duty of information (Art. 20): Providers must promptly take corrective actions if they believe a high-risk AI system they have placed on the market is non-compliant, including notifying relevant parties and, if necessary, withdrawing or recalling the system. If the system poses a risk, providers must investigate the cause, collaborate with deployers if applicable, and inform market surveillance authorities and, where relevant, the notified body about the non-compliance and corrective actions taken. 

▪ Cooperation with authorities (Art. 21): Providers must supply competent authorities with all necessary documentation and information to prove compliance of their high-risk AI systems when requested, in an easily understood EU official language as indicated by the relevant Member State. They must also provide access to logs if applicable and within their control, with confidentiality maintained as per Article 78. 

▪ Appointment of authorized representative (Art. 22): Providers outside the EU must appoint an authorized representative in the Union responsible for verifying compliance, maintaining documentation access for authorities, and assisting with regulatory actions. If non-compliance is found, the representative must end the mandate and notify relevant authorities. 

▪ Conformity assessment, EU declaration of conformity, CE marking and registration (Art. 43, 47, 48 and 49): Before placing an AI system on the EU market or putting it into service, a conformity assessment must be completed, followed by drawing up an EU declaration of conformity and affixing the CE marking. The system must also be registered in the EU database. The AIA permits a unified assessment covering both the AIA and MDR/IVDR if the notified body is qualified for both. Otherwise, separate assessments are needed, leading to potential duplicated audits and higher compliance costs. While predefined updates in the initial documentation do not require reassessment, significant changes that impact the system’s intended purpose or risk profile necessitate a new assessment. From August 2, 2026, even AI systems already on the market must comply with AIA requirements if they undergo substantial modifications. 

▪ Post market monitoring (Art. 72): Providers must set up and document a post-market monitoring system tailored to the AI system's nature and risk, ensuring continuous compliance by actively collecting and analysing relevant performance data throughout the system’s lifecycle. This system should be based on a post-market monitoring plan included in the technical documentation and may be integrated with existing monitoring systems under related EU legislation to avoid duplication and ensure consistency. 

▪ Reporting of serious incidents (Art. 73): Providers must promptly report serious incidents, conduct risk assessments, take corrective action, and cooperate with investigations without altering the AI system prior to informing authorities. Reports should follow set timeframes, with allowances for initial, incomplete reports. National bodies and the Commission must be notified, with compliance guidance expected by August 2025. 
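To make the record-keeping obligation under Art. 12 more tangible, the sketch below models a single automated log entry mirroring the elements the blog lists above: usage timestamps, input data checks, and the person verifying results. The structure and field names are our own illustrative assumptions, not a schema prescribed by the AIA.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class UsageLogRecord:
    """Illustrative automated log entry in the spirit of Article 12 AIA.

    Field names are our own assumptions, not a prescribed format.
    """
    session_start: datetime               # period of each use (start)
    session_end: datetime                 # period of each use (end)
    input_data_reference: str             # reference data against which input was checked
    input_data: str                       # input data for which the check was run
    verifying_person: str | None = None   # person verifying the results, where applicable


record = UsageLogRecord(
    session_start=datetime(2026, 9, 1, 9, 0, tzinfo=timezone.utc),
    session_end=datetime(2026, 9, 1, 9, 12, tzinfo=timezone.utc),
    input_data_reference="reference-dataset-v3",   # hypothetical identifiers
    input_data="ecg-capture-0421",
    verifying_person="dr.example",
)
```

Records of this kind would then need to be retained in line with the log-keeping obligation under Art. 19 above, i.e. for at least six months unless other Union or national law specifies otherwise.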

Standards for High-Risk AI Systems 

While the AIA outlines the requirements for high-risk AI systems, the practical implementation of these requirements relies heavily on the development of European harmonised standards. These standards will serve as a cornerstone for the AIA’s framework, translating its legal requirements into concrete technical specifications that can be applied in practice. They are being developed by the European Committee for Standardisation (‘CEN’) and the European Committee for Electrotechnical Standardisation (‘CENELEC’), following a formal standardisation request adopted by the European Commission in May 2023.[vi] The drafting of these standards will not only draw from international standards, such as those from ISO and IEC, but will also address unique aspects of the AIA. Unlike existing international efforts, the standards under the AIA will prioritise risks to health, safety, and fundamental rights, rather than organisational objectives alone.[vii] This includes specific provisions for data governance and quality measures tailored to mitigate these risks, as well as requirements for oversight mechanisms that deliver verifiable outcomes. These standards are expected to be finalised by the end of April 2025.[viii] Once published in the Official Journal of the European Union, compliance with these standards will provide a ‘presumption of conformity,’ meaning that AI systems adhering to them will be considered compliant with the AIA’s requirements.[ix] 

Supplier Compliance for Integration and Use in High-Risk AI Systems 

The AIA addresses the roles of parties involved in supplying AI systems, tools, services, components, or processes that are used in or integrated into high-risk AI systems (including AI systems that are classified as high-risk medical devices) and extends responsibilities to such parties.[x] These third parties play a critical role across the AI value chain, contributing to model training, retraining, testing, evaluation, integration, and other essential aspects of AI system development.[xi] 

Article 25(4) establishes that when such elements are supplied for integration into or use in high-risk AI systems, a written agreement must be in place between the supplier of the elements and the provider of the high-risk system.[xii] This agreement must specify the necessary information, capabilities, technical access, and other assistance, in accordance with the generally acknowledged state of the art, to enable the provider to fully comply with their obligations under the AIA.[xiii] Such an agreement ensures that the high-risk AI system can meet critical standards of reliability, robustness, accuracy, and transparency, as outlined in the AIA. To support the implementation of these requirements, the AI Office may develop and recommend voluntary model contractual terms which will also account for sector-specific or business-specific contractual requirements.[xiv]

For tools, services, components, or processes—excluding general-purpose AI models—made publicly available under a free and open-source license, compliance obligations under Article 25(4) do not apply.[xv] However, the AIA strongly encourages developers to adopt widely recognized documentation practices, such as model cards or data sheets.[xvi] These practices enhance transparency, foster collaboration across the AI value chain, and ensure responsible use or integration into high-risk AI systems. By providing clear and detailed information, even in open-source contexts, developers contribute significantly to building a trustworthy AI ecosystem and support the safe and effective deployment of their contributions in high-risk applications. 
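As an illustration of such documentation practices, the sketch below outlines a minimal model-card-style record an open-source developer might publish alongside a component. The fields reflect common model-card practice (intended use, training data, metrics, limitations); both the structure and the example values are our own assumptions, not a format required by the AIA.

```python
# A minimal, illustrative model-card structure for an open-source AI component.
# Field names follow common model-card practice; they are not a schema
# prescribed by the AIA, and all values below are hypothetical.
model_card = {
    "model_name": "example-segmentation-net",
    "intended_use": "Research-grade organ segmentation on CT scans",
    "training_data": "Publicly available, anonymised CT datasets",
    "evaluation_metrics": {"dice_score": 0.91},   # illustrative figure
    "known_limitations": [
        "Not validated for paediatric imaging",
        "Performance degrades on low-dose scans",
    ],
    "license": "Apache-2.0",
}
```

Even where Article 25(4) does not formally apply, publishing this kind of information makes the downstream integration of a component into a high-risk AI system considerably easier for the provider who bears the compliance obligations.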

Leveraging the Scientific Exception: Innovation with an Eye on Compliance 

The AIA provides specific exemptions for AI systems developed and used exclusively for scientific research and development,[xvii] enabling researchers to innovate without the immediate burden of full regulatory compliance. This exemption is intended to respect freedom of science and support innovation,[xviii] allowing researchers in academic settings, public research institutions, or non-commercial research projects to develop such AI systems with greater flexibility. By doing so, the AIA fosters an environment that encourages experimentation and the rapid advancement of novel AI applications. 

This exception is particularly significant for AI systems in scientific research phases of high-risk medical device development, whether in academic, EU-funded, or privately sponsored projects. However, the scientific exception has well-defined boundaries. Once an AI system transitions from scientific research to commercial or real-world deployment, it must comply with the full spectrum of requirements under the AIA. This includes scenarios where research outcomes are integrated into market-ready products, offered as services, or deployed in clinical workflows beyond controlled research settings. Even a partial shift toward commercialization—such as expanding access to the tool beyond research participants or incorporating it into practical applications—renders the scientific exception inapplicable, subjecting the system to the regulatory framework of the AIA. 

Hence, while the scientific exception offers initial regulatory relief, the ultimate goal of most research projects is to see the outcomes realized in practical, commercial, or clinical applications. By anticipating regulatory requirements early, even when operating under the shield of the scientific exception, researchers can gain significant advantages, including enhanced market readiness and the ability to avoid pitfalls that could hinder the success of their AI systems. 

To better prepare for future compliance, integrating key strategies during the research phase is essential. From the outset, developers should consider how their AI systems will meet the requirements under the AIA to minimize future modifications. Maintaining thorough technical documentation throughout the research phase will streamline the transition to commercial use by laying the groundwork for conformity assessments. Additionally, early engagement with legal experts can clarify compliance pathways, helping to avoid costly retrofitting, reduce delays, and expedite market entry. This proactive approach ensures a smoother path to commercialization and regulatory alignment. 

Conclusion 

Navigating the compliance framework for AI-driven medical devices under the AIA involves understanding its intricate requirements and planning strategically. From high-risk classification to meeting stringent compliance obligations, aligning with these requirements is key to success. Here are some key steps to guide your approach: 

▪ Identify and classify: Determine whether your system qualifies as an AI system under the AIA and its risk classification. 

▪ Understand the compliance requirements; design and document with compliance in mind: Identify the compliance requirements based on the classification of your AI system and integrate these considerations—such as data governance, transparency, human oversight, accuracy, and thorough documentation—into the design and development phases. 

▪ Engage early with legal experts: Consult with legal experts early to clarify compliance pathways and stay informed on evolving regulatory standards. This ensures your approach aligns with current and future regulatory frameworks effectively. 

▪ Plan for commercialisation: Whether your AI system is being developed in a research setting or under normal development circumstances, anticipate market entry needs and integrate compliance measures early to streamline transitions. This foresight minimizes the need for costly retrofitting, ensures market readiness, and enhances the system’s appeal to investors, regulators, and end-users, paving the way for successful commercialization. 

▪ Collaborate across the value chain: When supplying AI systems, tools, services, components, or processes that are used in or integrated into high-risk AI systems, evaluate the need for collaboration with system providers to ensure alignment with Article 25(4) compliance obligations. This includes preparing to share necessary technical information and documentation that supports seamless integration. 

By embedding compliance from the outset, stakeholders can unlock the full potential of AI-driven medical devices while adhering to safety, ethical, and legal standards. With a proactive and comprehensive strategy, organizations can navigate the AIA’s requirements confidently, bringing transformative innovations to market efficiently and responsibly. 

Nayana Murali, the author of this article, is participating in the EU-funded project AISym4Med. AISym4Med has received funding from the European Union under the Horizon Europe Framework Programme, Grant Agreement Nº 101095387. Views and opinions expressed are however those of the author only and do not reflect those of the European Union or the European Commission. Neither the European Union nor the European Health and Digital Executive Agency can be held responsible for them. 

References

[i] Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (MDR). (Access here: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32017R0745)

[ii] Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (IVDR). (Access here: https://eur-lex.europa.eu/eli/reg/2017/746/oj)

[iii] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (AIA). (Access here: https://eur-lex.europa.eu/eli/reg/2024/1689/oj)

[iv] Article 6(1) read with Annex I, AIA

[v] Article 6(1)(b), AIA

[vi] Commission Implementing Decision on a standardisation request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in support of Union policy on artificial intelligence. (Access here: https://ec.europa.eu/transparency/documents-register/detail?ref=C(2023)…)

[vii] Soler Garrido, J., Fano Yela, D., Panigutti, C., Junklewitz, H., Hamon, R., Evas, T., André, A. and Scalzo, S., Analysis of the preliminary AI standardisation work plan in support of the AI Act, EUR 31518 EN, Publications Office of the European Union, Luxembourg, 2023, ISBN 978-92-68-03924-3, doi:10.2760/5847, JRC132833. (Access here: https://publications.jrc.ec.europa.eu/repository/handle/JRC132833)

[viii] Commission Implementing Decision of 22 May 2023 on a Standardisation Request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in Support of Union Policy on Artificial Intelligence, Art. 1, C(2023) 3215. (Access here: https://ec.europa.eu/transparency/documents-register/detail?ref=C(2023)…)

[ix] Article 40, AIA

[x] Article 25(4), AIA

[xi] Recital 88, AIA

[xii] Article 25(4), AIA

[xiii] Article 25(4), AIA

[xiv] Article 25(4), AIA

[xv] Article 25(4), AIA

[xvi] Recital 89, AIA

[xvii] Article 2(6), AIA

[xviii] Recital 25, AIA