Wien, 20.12.2023 – Text: Catarina Carrão
The EU AI Act
The Council presidency and the European Parliament have reached a provisional agreement on the proposal for harmonized rules on Artificial Intelligence (AI), known as the AI Act. The draft regulation aims to guarantee that AI systems in the European Union (EU) are safe and respect fundamental rights and values. As the first legislative initiative of its kind worldwide, it has the potential to foster the development and uptake of safe and trustworthy AI based on a "risk-based" approach: the higher the risk, the stricter the rules.
High impact foundational models
New requirements have been added to address situations where AI systems can be used for many different purposes (i.e., general purpose AI), and where such technologies can be subsequently integrated into an additional high-risk system. Furthermore, specific rules have also been agreed for foundation models – i.e., large systems capable of competently performing a wide range of distinctive tasks, such as computing or coding. Such systems need to comply with specific transparency obligations before they are placed on the EU market. Additionally, "high impact foundational models" – i.e., foundation models trained with large amounts of data and with advanced complexity, capabilities, and performance well above the average – are subject to additional stringent rules to prevent them from disseminating systemic risks along the value chain.
Generalist biomedical AI (GBMAI): a look into the future?
Drawing directly on foundation models from beyond the realm of medicine, the idea is that GBMAI models will be able to perform a wide range of tasks with minimal or no reliance on task-specific labelled data. Trained with self-supervision on large and diverse datasets, GBMAIs will be able to interpret various combinations of biomedical modalities, embracing data from imaging, electronic health records, laboratory results, genomics, graphs, or biomedical text. Such models will generate expressive outputs, such as clinical free-text explanations, spoken recommendations, or image annotations, showcasing advanced biomedical reasoning.
Or, not so fast, according to EMA….
Since these technologies use non-transparent model architectures (i.e., Machine Learning (ML) "black boxes"), new risks are introduced that need to be mitigated to ensure the safety of patients and the integrity of clinical study results. As such, when AI/ML is used in medical product development, the European Medicines Agency (EMA) will have an "Eye of Sauron" and will be involved in assessing whether such novel technologies can be used to generate adequate evidence to support an EU marketing authorization.
The specific case of clinical trials
If an AI/ML model is used in the context of a clinical trial, the full model architecture, logs from modeling, validation and testing, training data and description of data processing pipeline would be part of the clinical trial data or protocol dossier and would need to be included in the initial clinical trial application.
Moreover, if AI/ML systems are used for the clinical management of patients (i.e., as a medical device or in vitro diagnostic device), then additional requirements would be needed to qualify their use in the context of a clinical trial, to ensure the rights, safety, and wellbeing of subjects, as well as the integrity of the data and results of the clinical trial, including their generalizability.
Data analysis and inference
Where AI/ML models are used for data transformation or analysis within a clinical trial, these would be considered part of the statistical analysis and would need to follow applicable guidelines on statistical principles for clinical trials, including an analysis of the impact on downstream statistical inference. Specifically in late-stage clinical development, this would require a detailed description of a pre-specified data curation pipeline and a fully frozen set of models used for inference within the statistical analysis plan.
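What "fully frozen" means in practice can be sketched as follows. This is a hypothetical illustration, not an EMA-prescribed procedure: the model is represented here as a toy parameter dictionary, and the `freeze`/`verify` functions are invented for the example. The point is that the model specification is serialized and checksummed before unblinding, so that any later retraining or modification is detectable at analysis time.

```python
import hashlib
import json

def freeze(model_params: dict) -> str:
    """Serialize the model deterministically and return a checksum
    that could be recorded in the statistical analysis plan."""
    blob = json.dumps(model_params, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def verify(model_params: dict, recorded_digest: str) -> bool:
    """At analysis time, confirm the deployed model matches the frozen one."""
    return freeze(model_params) == recorded_digest

# Toy "model": in reality this would be the trained AI/ML model used
# for inference within the pre-specified analysis.
params = {"weights": [0.4, 1.2], "intercept": -0.3}
digest = freeze(params)

print(verify(params, digest))                                   # unchanged model passes
print(verify({"weights": [0.5, 1.2], "intercept": -0.3}, digest))  # any change is detected
```

In this sketch the checksum, once written into the analysis plan, plays the role of the "frozen" specification: the analysis can only proceed with a model that reproduces it bit-for-bit.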
In the case of pivotal clinical trials, all risks related to overfitting and data leakage would need to be carefully mitigated. As such, prior to model deployment, its performance should be tested with prospectively generated data acquired in the setting or population representative of the intended context of use.
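One common source of the data leakage mentioned above is fitting a preprocessing step on all available data, including the evaluation set. A minimal sketch of the correct discipline, with invented toy numbers standing in for development and prospectively acquired data:

```python
from statistics import mean, stdev

train = [4.0, 5.0, 6.0, 5.5, 4.5]   # development data
test = [7.0, 6.5]                   # prospective data from the intended-use setting

# Fit scaling parameters on the development data ONLY ...
mu, sigma = mean(train), stdev(train)

# ... and apply them unchanged to the prospective data. Recomputing
# mu/sigma on train + test would leak information about the evaluation
# set into the pipeline, inflating apparent performance.
scaled_test = [(x - mu) / sigma for x in test]
print(mu, sigma)
```

The same principle applies to every learned component of the pipeline, from imputation to the model itself: everything is fit on development data and then held fixed for the prospective evaluation.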
Conclusion
The rapidly developing field of AI/ML shows great promise for advancing clinical development. The responsible and ethical use of such novel technologies, under the EU AI Act and the oversight of EMA, will ensure transparency while prioritizing patient privacy and safety. Collaboration between scientists, healthcare professionals, and regulatory bodies will be crucial to navigate these complexities successfully.
Image: Stock photo