CE Marking for High-Risk AI Systems in European Healthcare – Timing and Rationale Explained
The European Union's (EU) New Legislative Framework (NLF) for industrial products, which now covers Artificial Intelligence (AI) systems, includes the CE marking. This article explains the process and requirements for high-risk AI systems to obtain the CE marking under the EU AI Act.
The CE marking is a crucial step in demonstrating compliance with the EU AI Act. Providers of high-risk AI systems must complete a conformity assessment before affixing the mark. Depending on the type of system and its intended use, this assessment is carried out either internally by the provider or by a third-party notified body.
Where a notified body performs the assessment, it examines both the provider's Quality Management System (QMS) and the technical documentation, and it remains responsible for surveillance of the approved QMS. Under the internal control procedure, the provider itself verifies that its QMS complies with the EU AI Act's requirements, examines the high-risk AI system's technical documentation, and ensures that the design and development process and the post-market monitoring are consistent with that documentation.
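To make the choice between the two assessment routes concrete, the following Python sketch frames it as a simple decision rule. It is purely illustrative: the class, its fields, and the simplified condition (a notified body only where a biometric system is not covered by applied harmonised standards) are assumptions made for this example, not a restatement of the Act's precise criteria.

```python
from dataclasses import dataclass

# Hypothetical sketch: the class, its fields, and the simplified condition are
# illustrative assumptions, not the EU AI Act's actual criteria.

@dataclass
class HighRiskAISystem:
    name: str
    is_biometric: bool                   # e.g. remote biometric identification
    harmonised_standards_applied: bool   # relevant harmonised standards fully applied

def assessment_route(system: HighRiskAISystem) -> str:
    """Return the conformity assessment route a provider might follow (simplified)."""
    if system.is_biometric and not system.harmonised_standards_applied:
        # QMS and technical documentation assessed by a third-party notified body
        return "notified_body"
    # Provider self-verifies QMS, documentation, and design/post-market consistency
    return "internal_control"

print(assessment_route(HighRiskAISystem("triage-assistant", False, True)))  # internal_control
```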
High-risk AI systems used in specific areas, such as biometrics, require third-party assessments. In Germany, the notifying authority will oversee the conformity certification of high-risk AI systems used in life sciences and healthcare.
The CE marking must be affixed visibly, legibly, and indelibly. For high-risk AI systems provided digitally, a digital CE marking is used; for high-risk AI systems embedded in physical products, a physical CE marking is affixed. Where a notified body was involved in the conformity assessment, the CE marking is followed by that body's identification number.
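As a rough illustration of these affixing rules, here is a small Python sketch that checks a marking record against them. The field names and the validation messages are assumptions made for the example, not wording taken from the EU AI Act.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical sketch: field names and validation messages are assumptions made
# for this example, not wording from the EU AI Act.

@dataclass
class CEMarking:
    digital: bool                           # True if the mark is provided digitally
    notified_body_id: Optional[str] = None  # identification number, if a body was involved

def check_affixing(marking: CEMarking, provided_digitally: bool,
                   notified_body_involved: bool) -> List[str]:
    """Flag departures from the affixing rules described above (illustrative only)."""
    issues = []
    if provided_digitally and not marking.digital:
        issues.append("digitally provided system should carry a digital CE marking")
    if not provided_digitally and marking.digital:
        issues.append("embedded system should carry a physical CE marking")
    if notified_body_involved and not marking.notified_body_id:
        issues.append("mark must be followed by the notified body's identification number")
    return issues

print(check_affixing(CEMarking(digital=True), provided_digitally=True,
                     notified_body_involved=True))
# -> ["mark must be followed by the notified body's identification number"]
```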
Where an AI system does not meet the requirements concerning the data used to train it, the notified body may require the system to be retrained. The notified body may also carry out periodic audits and additional tests of the AI system.
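These possible follow-up steps can be summarised in a short, hypothetical helper; the function name and action strings below are illustrative assumptions, not terms used by the Act or by any notified body.

```python
from typing import List

# Hypothetical sketch: the action strings are illustrative assumptions, not terms
# prescribed by the EU AI Act or by any notified body.

def notified_body_follow_up(training_data_requirements_met: bool,
                            audit_due: bool) -> List[str]:
    """List illustrative follow-up actions a notified body might take."""
    actions = []
    if not training_data_requirements_met:
        actions.append("require retraining of the AI system")
    if audit_due:
        actions.append("conduct a periodic audit and additional tests")
    return actions

print(notified_body_follow_up(training_data_requirements_met=False, audit_due=True))
```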
Providers of high-risk AI systems are not always required to engage an external, third-party conformity assessment body for the procedure. Pharmaceutical or medical device businesses that merely use or deploy high-risk AI systems, without acting as providers, are not responsible for CE-marking those systems; instead, they must familiarise themselves with the instructions for use accompanying each high-risk AI system they deploy.
Examples of high-risk AI systems include remote biometric identification systems, AI systems intended for biometric categorisation, and AI systems intended for emotion recognition. High-risk AI systems that are also NLF industrial products, or that serve as safety components of such products, may likewise be assessed by a notified body for their conformity with the EU AI Act.
In summary, the CE marking is a vital part of ensuring that high-risk AI systems meet the EU's stringent safety and quality standards. Providers must follow the internal control procedure or undergo a notified body assessment, depending on the system's nature and use. The CE marking, accompanied by the notified body's identification number where applicable, serves as a visible sign of compliance with the EU AI Act, providing reassurance to users and regulators alike.