News • Responsible deployment of medical AI

Strategies to prevent AI model data shifts in hospitals

A new study from York University found proactive, continual and transfer learning strategies for AI models to be key in mitigating data shifts and subsequent harms.

Portrait photo of Elham Dolatabadi (image source: York University)

The study, published in the journal JAMA Network Open, identified significant shifts between the data used to train clinical AI models and the data those models encounter once deployed, and showed that such shifts can be detected and mitigated before they cause harm.

To determine the effect of data shifts, the team built and evaluated an early warning system that predicts the risk of in-hospital mortality and supports patient triage at seven large hospitals in the Greater Toronto Area. Drawing on GEMINI, Canada’s largest hospital data-sharing network, the study assessed the impact of data shifts and biases across clinical diagnoses, demographics (including sex and age), hospital type, admission source (such as an acute care institution or nursing home) and time of admission. It covered 143,049 patient encounters, with features drawn from lab results, transfusions, imaging reports and administrative records.
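
The kind of shift monitoring described here can be illustrated with a toy sketch (not the study's actual pipeline): compare the distribution of a single input feature, such as a laboratory value, between the training cohort and a recent deployment window using a two-sample Kolmogorov–Smirnov test, and raise an alarm when they diverge. The feature, values and threshold below are invented for illustration.

```python
# Illustrative sketch only: flag a data shift in one input feature by
# comparing its training-time distribution with a recent deployment window
# using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

def drift_alarm(reference, current, alpha=0.01):
    """Return True when the two samples differ significantly (shift detected)."""
    _, p_value = ks_2samp(reference, current)
    return bool(p_value < alpha)

rng = np.random.default_rng(0)
# Simulated sodium lab values: training cohort vs. a drifted deployment cohort.
train_sodium = rng.normal(140.0, 3.0, size=5000)
deploy_sodium = rng.normal(143.0, 3.5, size=1000)

print(drift_alarm(train_sodium, deploy_sodium))   # drifted cohort -> True
print(drift_alarm(train_sodium, train_sodium))    # identical data -> False
```

A real monitoring pipeline would run such tests across many features and time windows, with corrections for multiple testing; the KS test here simply stands in for whatever drift metric a deployment actually uses.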

“As the use of AI in hospitals increases to predict anything from mortality and length of stay to sepsis and the occurrence of disease diagnoses, there is a greater need to ensure they work as predicted and don’t cause harm,” says senior author York University Assistant Professor Elham Dolatabadi of York’s School of Health Policy and Management, Faculty of Health, a member of Connected Minds and a faculty affiliate at the Vector Institute. “Building reliable and robust machine learning models, however, has proven difficult as data changes over time creating system unreliability.” 

The data used to train clinical AI models for hospitals and other health-care settings need to accurately reflect the variability of patients, diseases and medical practices, she adds. Without that, a model could produce irrelevant or harmful predictions, or even inaccurate diagnoses. Such data shifts can arise from differences in patient subpopulations, staffing and resources, from differing health-care practices between hospitals, and from unforeseen changes in policy or behaviour, such as an unexpected pandemic.

“We found significant shifts in data between model training and real-life applications, including changes in demographics, hospital types, admission sources, and critical laboratory assays,” says first author Vallijah Subasri, AI scientist at University Health Network. “We also found harmful data shifts when models trained on community hospital patient visits were transferred to academic hospitals, but not the reverse.”


To mitigate these potentially harmful data shifts, the researchers used transfer learning, which allows a model to store knowledge gained in one domain and apply it to a different but related domain, and continual learning, in which the model is updated sequentially on a continuing stream of data in response to drift-triggered alarms.
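
As a rough illustration of drift-triggered continual learning (a simplified sketch, not the study's implementation): the model is updated on a new batch of data only when a drift alarm fires, so it adapts sequentially instead of being retrained on every batch. The toy features, outcome and alarm threshold are invented for illustration.

```python
# Illustrative sketch: continual learning updates triggered by a drift alarm.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def make_batch(mean, n=500):
    """Toy patient batch: two features and a synthetic binary outcome."""
    X = rng.normal(mean, 1.0, size=(n, 2))
    y = (X.sum(axis=1) > 2 * mean).astype(int)
    return X, y

# Initial training on the pre-deployment cohort.
X0, y0 = make_batch(mean=0.0, n=2000)
model = SGDClassifier(random_state=0)
model.partial_fit(X0, y0, classes=np.array([0, 1]))
reference = X0[:, 0]          # monitored feature's reference distribution

alarms = []
for month, mean in enumerate([0.0, 0.0, 3.0]):   # third batch has drifted
    Xb, yb = make_batch(mean)
    _, p = ks_2samp(reference, Xb[:, 0])
    if p < 0.001:                    # drift alarm fires
        alarms.append(month)
        model.partial_fit(Xb, yb)    # sequential update on the new data
        reference = Xb[:, 0]         # reset the monitoring reference
```

Only the drifted batch triggers an update, so stable periods leave the deployed model untouched; a real clinical deployment would add validation and human review before any update goes live.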


Although machine learning models usually remain locked once approved for use, the researchers found that hospital-type-specific models leveraging transfer learning performed better than models trained on data from all available hospitals. Drift-triggered continual learning helped prevent harmful data shifts during the COVID-19 pandemic and improved model performance over time.
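
The hospital-type-specific transfer learning result can be sketched as a warm start (again an invented toy, not the GEMINI models): pretrain on a large source cohort, then continue fitting the same weights on a small cohort from a different hospital type, rather than training a single pooled model. Cohort sizes, features and the "hospital type" shift below are assumptions for illustration.

```python
# Illustrative sketch: transfer learning as a warm start across hospital types.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
w_true = np.array([1.0, -0.5, 0.25])

def cohort(shift, n):
    """Toy cohort whose feature distribution depends on the hospital type."""
    X = rng.normal(shift, 1.0, size=(n, 3))
    y = (X @ w_true > shift * w_true.sum()).astype(int)
    return X, y

# Large source cohort (e.g. community hospitals) and a small target cohort
# (e.g. academic hospitals) with a different case mix.
X_src, y_src = cohort(shift=0.0, n=5000)
X_tgt, y_tgt = cohort(shift=1.0, n=300)

# Pretrain on the source domain, then refine the same weights on the target:
model = SGDClassifier(random_state=0)
model.partial_fit(X_src, y_src, classes=np.array([0, 1]))
model.partial_fit(X_tgt, y_tgt)
```

The second `partial_fit` call keeps the source-domain weights and only refines them on the target data, which is the warm-start intuition behind transferring a community-hospital model to an academic hospital.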

Depending on the data it was trained on, the AI model could also have a propensity for certain biases leading to unfair or discriminatory outcomes for some patient groups. “We demonstrate how to detect these data shifts, assess whether they negatively impact AI model performance, and propose strategies to mitigate their effects. We show there is a practical pathway from promise to practice, bridging the gap between the potential of AI in health and the realities of deploying and sustaining it in real-world clinical environments,” says Dolatabadi. 

The study is a crucial step towards the deployment of clinical AI models as it provides strategies and workflows to ensure the safety and efficacy of these models in real-world settings. “These findings indicate that a proactive, label-agnostic monitoring pipeline incorporating transfer and continual learning can detect and mitigate harmful data shifts in Toronto’s general internal medicine population, ensuring robust and equitable clinical AI deployment,” says Subasri. 


Source: York University; by Sandra McLean 

10.06.2025
