The EU AI Act: A New Landscape for Artificial Intelligence

Navigating the EU AI Act for Medical Devices

This document provides an overview of the EU Artificial Intelligence Act (AI Act) and its implications for AI-enabled medical devices. It highlights the key aspects of the AI Act that medical device manufacturers need to understand, particularly how it interacts with the existing EU Medical Device Regulation (MDR). The document emphasizes the risk-based approach of the AI Act, its classification of AI-enabled medical devices as high-risk, and the conformity assessment process.

The EU Artificial Intelligence Act (AI Act) represents a landmark regulatory effort by the European Union to govern the development, deployment, and use of Artificial Intelligence (AI) systems. Its primary objective is to ensure that AI systems operating within the EU market are safe, transparent, and respect fundamental rights. The AI Act adopts a risk-based approach, categorizing AI systems based on their potential to cause harm. This categorization determines the level of regulatory scrutiny and the obligations imposed on developers and deployers.

For a detailed proposal with a Statement of Work, please complete the Request for Quote (RFQ) form provided separately for MDR CE Marking and IVDR CE Marking for AI-enabled medical devices.

AI-Enabled Medical Devices: A High-Risk Category

Under the EU AI Act, AI systems that are also medical devices, or safety components of medical devices, as defined by the EU Medical Device Regulation (MDR), are classified as High-Risk AI systems. This classification stems from the potential for these devices to directly impact patient health and safety. As a result, AI-enabled medical devices are subject to the most stringent requirements outlined in the AI Act.

Harmonization with the EU MDR

It’s crucial to understand that the EU AI Act does not replace the EU MDR. Instead, it operates in parallel, adding AI-specific governance and lifecycle controls to the existing regulatory framework for medical devices. This means that manufacturers of AI-enabled medical devices must comply with both regulations.

Key Implications for Medical Device Manufacturers

Here’s a breakdown of the key implications for manufacturers of AI-enabled medical devices:

  • High-Risk Classification: The classification as High-Risk triggers a comprehensive set of obligations under the AI Act.

  • Conformity Assessment: Compliance with the EU AI Act is assessed as part of the EU MDR conformity assessment process. This means that Notified Bodies, which are already responsible for assessing medical device compliance, will also evaluate the AI-specific aspects of the device.

  • No Separate CE Marking: The AI Act does not require its own CE marking. Compliance is demonstrated through the existing CE marking process under the EU MDR, with the Notified Body assessing the AI-specific requirements as part of that process.

  • AI-Specific Requirements: In addition to the requirements of the EU MDR, manufacturers must demonstrate compliance with AI-specific requirements in the following areas:

  • Risk Management: Implementing a robust risk management system that addresses the specific risks associated with the AI system, including potential biases, inaccuracies, and vulnerabilities.

  • Data Governance: Establishing comprehensive data governance practices to ensure the quality, integrity, and security of the data used to train and operate the AI system, including data bias, data privacy, and data security.

  • Transparency: Providing clear and understandable information about the AI system's capabilities, limitations, and intended use, including how the system works, the data it uses, and the potential risks associated with its use.

  • Human Oversight: Implementing mechanisms for human oversight so that the AI system is used responsibly and ethically, enabling users to understand and control AI decisions and to intervene if the system makes an error.

  • Post-Market Monitoring: Establishing a robust post-market monitoring system to track the AI system's performance in real-world settings and identify potential issues or risks, covering accuracy, reliability, safety, and user feedback.
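To make the post-market monitoring duty concrete, the following is a purely illustrative sketch, not anything prescribed by the AI Act or MDR. The class name, field names, and the 5% tolerance threshold are all hypothetical assumptions chosen for the example; a real system would define its acceptance criteria in the risk management file and route escalations through the manufacturer's vigilance process.

```python
# Illustrative sketch only: a minimal post-market performance monitor.
# All names and thresholds here are hypothetical, not regulatory requirements.
from dataclasses import dataclass, field

@dataclass
class PerformanceMonitor:
    baseline_accuracy: float          # accuracy established during conformity assessment
    tolerance: float = 0.05           # allowed drop before escalation (assumed value)
    outcomes: list = field(default_factory=list)  # (prediction, ground_truth) pairs

    def record(self, prediction, ground_truth):
        """Log one real-world outcome for later review."""
        self.outcomes.append((prediction, ground_truth))

    def current_accuracy(self):
        """Accuracy over all recorded outcomes, or None if nothing recorded yet."""
        if not self.outcomes:
            return None
        correct = sum(1 for p, t in self.outcomes if p == t)
        return correct / len(self.outcomes)

    def needs_review(self):
        """True when observed accuracy falls below baseline minus tolerance."""
        acc = self.current_accuracy()
        return acc is not None and acc < self.baseline_accuracy - self.tolerance

monitor = PerformanceMonitor(baseline_accuracy=0.95)
for pred, truth in [(1, 1), (0, 1), (1, 1), (0, 0), (0, 1)]:
    monitor.record(pred, truth)

print(monitor.current_accuracy())  # 0.6
print(monitor.needs_review())      # True
```

The design point is simply that the baseline and tolerance are fixed up front, so "performance drift" becomes an objective trigger for corrective action rather than a judgment made after the fact.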

AI-Specific Requirements Explained

AI-Specific Risk Management: This goes beyond the general risk management required by the EU MDR and requires a detailed analysis of AI-related risks, including:
  • Bias: Identifying and mitigating potential biases in the data used to train the AI system.
  • Explainability: Ensuring the AI system’s decision-making process is understandable and transparent.
  • Robustness: Ensuring the AI system is resilient to errors, attacks, and unexpected inputs.
  • Security: Protecting the AI system from unauthorized access and cyber threats.
Data Governance: The AI Act places strong emphasis on data quality and integrity. Manufacturers must:
  • Ensure Data Quality: Implement processes to ensure data used for training and operation is accurate, complete, and relevant.
  • Address Data Bias: Actively identify and mitigate potential biases in the data.
  • Protect Data Privacy: Comply with GDPR and apply appropriate security measures.
  • Document Data Provenance: Maintain clear records of data sources and processing steps.
Transparency: Essential for building trust in AI systems. Manufacturers must:
  • Provide Clear Information: Clearly describe capabilities, limitations, and intended use.
  • Explain Decision-Making: Explain how decisions are made in an understandable manner.
  • Disclose Potential Risks: Communicate any known or foreseeable risks associated with use.
Post-Market Monitoring: Continuous monitoring is required throughout the AI system's lifecycle. Manufacturers must:
  • Collect Performance Data: Monitor real-world performance of the AI system.
  • Monitor User Feedback: Review complaints and feedback to detect issues.
  • Implement Corrective Actions: Address identified risks or performance issues.
  • Regularly Update the AI System: Improve performance and resolve vulnerabilities.
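One way to picture the bias-identification step under AI-specific risk management is a check for performance gaps across patient subgroups. The sketch below is illustrative only: the subgroup labels, record format, and the idea of using a maximum accuracy gap as a screening metric are assumptions for the example, not a method mandated by the AI Act.

```python
# Illustrative sketch only: screening for accuracy gaps across patient
# subgroups. Subgroup names and the gap metric are hypothetical.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, prediction, ground_truth) tuples.
    Returns per-subgroup accuracy."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, truth in records:
        totals[group] += 1
        hits[group] += int(pred == truth)
    return {g: hits[g] / totals[g] for g in totals}

def max_accuracy_gap(records):
    """Largest difference in accuracy between any two subgroups."""
    accs = subgroup_accuracy(records).values()
    return max(accs) - min(accs)

records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]
print(subgroup_accuracy(records))  # {'group_a': 0.75, 'group_b': 0.5}
print(max_accuracy_gap(records))   # 0.25
```

A large gap does not by itself prove bias, but it flags where the training data or model behavior warrants the deeper investigation and mitigation that the risk management file must document.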

Consultants' Role in CE Marking

The EU AI Act introduces a new layer of regulatory complexity for AI-enabled medical devices. Manufacturers must proactively address the AI-specific requirements outlined in the Act, in addition to complying with the existing EU MDR. By focusing on risk management, data governance, transparency, and post-market monitoring, manufacturers can ensure their AI-enabled medical devices are safe, effective, and compliant with the EU AI Act.

In this evolving regulatory landscape, I3CGLOBAL plays a critical role as a specialized regulatory and CE certification consulting partner. I3CGLOBAL supports manufacturers through end-to-end AI Act and MDR compliance by providing expert consultation on AI risk management frameworks, data governance strategies, transparency documentation, and post-market monitoring systems. Its team assists in aligning AI technical documentation with MDR requirements, preparing robust clinical and performance evidence, and coordinating effectively with Notified Bodies during CE certification.

This proactive and structured approach not only ensures regulatory compliance but also helps manufacturers reduce certification risks, accelerate time to market, and foster long-term trust in AI-powered healthcare solutions across the European Union.