Debates in the legislative and policy-making environment on Artificial Intelligence (AI) have gradually extended to the medical device field. Where does the regulation of AI-based medical devices stand in the EU? This blog post gives a short and non-exhaustive overview of the legal and ethical aspects of AI-based medical devices that are being researched and discussed in the context of the CORE-MD project.
By Elisabetta Biasin – KU Leuven Centre for IT & IP Law (CiTiP)
AI in Healthcare
In healthcare, the use of both existing and new AI-based medical devices is increasing. AI-based technologies encompass a wide range of applications, for instance, software used in breast cancer screening, computer-aided detection systems, and software displaying MRI and other types of medical imaging. In cardiovascular medicine, recent AI applications aim to enhance medical decision-making (for example, through the prediction of cardiac risks or the detection and monitoring of cardiac events) with the support of data analysis.
The Medical Device Regulation
From a legal viewpoint, AI healthcare applications may fall under the scope of medical device legislation. Medical devices are regulated in the European Union through the Medical Device Regulation (MDR, Regulation 2017/745), along with the In Vitro Diagnostic Medical Device Regulation (IVDR, Regulation 2017/746). The MDR and IVDR are two pieces of vertical legislation: they set specific requirements applying only to the category of medical devices. The MDR includes requirements for medical device software, which have been further interpreted over time by MEDDEV guidance and, more recently, by the Medical Device Coordination Group (MDCG). International bodies such as the International Medical Device Regulators Forum (IMDRF) play an influential role in shaping the orientations of the MDCG.
The Evolving Landscape of Artificial Intelligence in the EU
The forthcoming legislation on Artificial Intelligence will also concern AI applications in healthcare, including medical device software. In recent years, EU policy-makers and legislators have launched several legislative and policy initiatives relating to AI. Following the AI Act proposal, which will apply horizontally to high-risk AI systems, a new AI liability legal framework is in the making. Besides the AI legal framework, other frameworks are becoming, and will continue to be, increasingly relevant, such as new data laws (see the Data Governance Act, the European Health Data Space and the Data Act proposals) or cybersecurity laws (the NIS Directive, the Cybersecurity Act, and the Cyber Resilience Act proposal). Beyond new secondary laws, it should be remembered that the existing EU legal framework is based on EU treaties recognizing patients' and individuals' fundamental rights – such as dignity, privacy and data protection, equality, and the right of access to health care.
AI Ethics and Medical Devices Regulation Through the Lenses of the Interdisciplinary Discussions within the CORE-MD project
The debate over the ethics of artificial intelligence and law has brought to the attention of the wider public the many issues that automation could imply for individuals, including in the healthcare sector. Within the CORE-MD project, many of these issues are debated from an interdisciplinary perspective.
This section selects three issues and associates them with the ethical requirements listed in the European Commission's High-Level Expert Group (HLEG) Guidelines on Trustworthy AI. The analysis considers four of the ethics requirements of the HLEG guidelines: human agency and oversight; diversity, non-discrimination and fairness; accuracy; and technical robustness and safety. Other requirements from the HLEG guidelines (accountability, transparency, environmental and societal well-being) are left out of the analysis for space reasons – although some of them might be implied in the issues discussed below.
1) Human Agency and Oversight: The Update Problem
The first issue linked with the human agency and oversight requirement concerns the so-called 'update problem'. The update problem asks whether the regulatory authorization of an AI-based medical device should cover only the version of the algorithm as submitted or, on the contrary, also its subsequent changes and adaptations. Human agency and oversight – which helps ensure that an AI system does not undermine human autonomy or threaten individuals' fundamental rights – is challenged in the AI-based medical devices field by the update problem. While in the US the FDA has issued preliminary guidance and scholars have already debated the issue, the EU still lacks specific regulatory guidance on the matter.
2) Diversity, Non-discrimination and Fairness: Data Diversity and Population Representativeness
As with most other AI applications, AI-based medical devices may raise ethical issues concerning diversity, non-discrimination and fairness. Experience has shown the shortcomings of certain AI applications with regard to different patient populations (for example, discriminatory decision-making software used by US hospitals). This issue also becomes relevant when it comes to population representativeness in trials of certain AI-based medical devices – especially when intertwined with other aspects, such as sex or gender.
3) Accuracy: Between The MDR and the AI Act proposal
The third issue concerns accuracy. Accuracy, as the HLEG puts it, pertains to an AI system's ability to make correct judgements. In medical devices, the term assumes further nuances, especially when it comes to AI-based devices with a diagnostic or measuring function. Furthermore, in the specific case of medical device software, the manufacturer should verify that it reliably, accurately and consistently meets its intended purpose in real-world usage. However, accuracy is also a requirement in the AI Act (see Article 15 of the AI Act proposal). In the future, the legal requirements of the MDR concerning accuracy should not be confused with the AI Act requirements on accuracy. Once the AI Act is approved, medical device manufacturers will need to treat them as distinct requirements and will likely have to comply with both.
4) Technical Robustness and Safety: Cybersecurity of Medical Devices
Less debated but nevertheless of core importance is cybersecurity, which may be associated with the HLEG's requirement of technical robustness and safety. Vulnerabilities of AI-based medical devices may entail dramatic consequences for healthcare services. For example, if a dataset used by a medical device's software is poisoned, results about a patient's state of health may be incorrect (Biasin, Kamenjasevic, Ludvigsen, forthcoming). Awareness of the importance of medical device cybersecurity is rising, unfortunately also due to the increase in cyberattacks during the COVID-19 pandemic. Recent regulatory updates have addressed the specific framework of medical device cybersecurity, which nevertheless has its shortcomings. As cybersecurity legislation matures, so should its integration with the existing cybersecurity requirements in the MDR. At the moment, however, the existing provisions on cybersecurity incident notification and serious incident reporting risk bringing regulatory uncertainty for medical device manufacturers and for healthcare providers using the devices.
This blog post summarized part of the relevant debates on the regulation of AI-based medical devices in the EU – which are under assessment within the CORE-MD project.
The analysis illustrated the primary legislation relevant to AI-based medical devices, focusing on the MDR/IVDR and the AI Act proposal. Regulatory issues were then considered in light of ethical aspects. The post selected some of the ethics requirements of the HLEG guidelines (human agency and oversight; diversity, non-discrimination and fairness; accuracy; technical robustness and safety). The regulatory issues identified related to the 'update problem', patient representativeness and diversity, and the coherence between the AI Act and MDR legal requirements on accuracy and on cybersecurity.
Last but not least, the blog post emphasized that considering regulatory issues and ethics does not mean that fundamental rights are left aside. Any process related to AI-based medical devices must follow the existing ethical and legal requirements, including full respect for individuals' fundamental rights.