News • GMAI for cancer treatment
Generalist Medical AI: Experts propose legal guardrails
AI models are set to transform cancer care by providing personalized diagnosis and treatment options.
The emergence of Generalist Medical Artificial Intelligence (GMAI) models poses a significant challenge to current regulatory frameworks. In a commentary published in the journal Nature Reviews Cancer, Stephen Gilbert and Jakob N. Kather, both professors at the EKFZ for Digital Health at TU Dresden, discuss how the regulation of these models could be handled in the future. Policy-makers will have to decide whether to radically adapt current frameworks, block generalist approaches, or force them onto narrow tracks.
Current artificial intelligence (AI) models for cancer treatment are trained and approved only for specific intended purposes. GMAI models, in contrast, can handle a wide range of medical data, including different types of images and text. For a patient with colorectal cancer, for example, a single GMAI model could interpret endoscopy videos, pathology slides and electronic health record (EHR) data. Such multi-purpose, or generalist, models therefore represent a paradigm shift away from narrow AI models.
Regulatory bodies face a dilemma in adapting to these new models because current regulations are designed for applications with a defined and fixed purpose, a specific set of clinical indications and a defined target population. Adaptation or extension after approval is not possible without repeating quality management and regulatory administrative processes. GMAI models, with their adaptability and their ability to make predictions even without specific training examples – so-called zero-shot reasoning – therefore pose challenges for validation and reliability assessment. At present, they are excluded under all international regulatory frameworks.
The authors point out that existing regulatory frameworks are not well suited to handle GMAI models due to these characteristics. “If these regulations remain unchanged, a possible solution could be hybrid approaches. GMAIs could be approved as medical devices and then the range of allowed clinical prompts could be restricted,” says Prof. Stephen Gilbert, Professor of Medical Device Regulatory Science at TU Dresden. “But this approach would force models with the potential to intelligently address new questions and multimodal data onto narrow tracks, through rules written when these technologies were not anticipated. Specific decisions should be made on how to proceed with these technologies, rather than excluding their ability to address questions they were not specifically designed for. New technologies sometimes call for new regulatory paradigms,” says Prof. Gilbert.
The researchers argue that it will be impossible to prevent patients and medical experts from using generic models or unapproved medical decision support systems. It is therefore crucial to maintain the central role of physicians and to empower them as interpreters of this information.
In conclusion, the researchers propose a flexible regulatory approach that accommodates the unique characteristics of GMAI models while ensuring patient safety and supporting physician decision-making. They point out that a rigid regulatory framework could hinder progress in AI-driven healthcare, and call for a nuanced approach that balances innovation with patient welfare.
Source: TU Dresden
18.04.2024