pulse oximeter on hand of person with dark skin on wooden table

Image source: Pexels/Mufid Majnun

News • Review calls for immediate action

Medical devices: prone to unfair bias, study finds

A new report details the findings of the Independent Review of Equity in Medical Devices. It calls for concerted action on devices that are prone to unfair biases, including pulse oximeters and those enabled by Artificial Intelligence (AI), to prevent patient harm.

The full report can be accessed here

Set up in 2022 by the then Secretary of State for Health and Social Care, the review sought to establish the extent and impact of ethnic and other unfair biases in the performance of medical devices commonly used in the NHS. It was commissioned amid concerns that such biases may lead to suboptimal treatment for the affected groups in the population. The review focused on three types of medical device where evidence suggested that the potential for harm was substantial. These were optical devices such as pulse oximeters, AI-enabled devices and certain genomics applications, such as polygenic risk scores.

The advance of AI in medical devices could bring great benefits, but it could also bring harm through inherent bias against certain groups in the population, notably women, people from ethnic minorities and disadvantaged socio-economic groups

Dame Margaret Whitehead

The expert panel found evidence that pulse oximeters – widely used during the Covid-19 pandemic to monitor blood oxygen levels – can over-estimate the amount of oxygen in the blood of people with darker skin tones. This could lead to delays in treatment if dangerously low oxygen levels were missed. The review recommends mitigating actions in relation to pulse oximeters already in widespread use across the NHS. Further recommendations aim to prevent adverse impacts arising in new devices as they are developed and put into use.

On AI-enabled devices, the review found evidence of potential biases against women, ethnic minorities and disadvantaged socioeconomic groups in how clinical decision-making tools select higher-risk patients needing more intensive treatment. One example is the potential under-diagnosis of skin cancers in people with darker skin when using AI-enabled devices, a result of the machines being ‘trained’ predominantly on images of lighter skin tones. There is also a long-standing problem of under-diagnosis of cardiac conditions in women, which AI algorithms in medical devices could make worse.


The University of Liverpool’s Professor Dame Margaret Whitehead, Chair of the Review, said: “The advance of AI in medical devices could bring great benefits, but it could also bring harm through inherent bias against certain groups in the population, notably women, people from ethnic minorities and disadvantaged socio-economic groups. Our review reveals how existing biases and injustices in society can unwittingly be incorporated at every stage of the lifecycle of AI-enabled medical devices, and then magnified in algorithm development and machine learning. Our recommendations therefore call for system-wide action by many stakeholders and now need to be implemented as a matter of priority with full government support.”

The review also recommends that the Government should start preparing now for the disruption to healthcare from the next generation of AI-enabled machines if it is to minimise the risk of patient harm. Panel member Professor Chris Holmes warned that the Government needs to understand how the advent of large language and foundation models like ChatGPT will disrupt clinical and public health practices. “We are calling on the government to appoint an expert panel including clinical, technology and healthcare leaders, patient and public representatives and industry to assess the potential unintended consequences arising from the AI revolution in healthcare. Now is the time to seize the opportunity to incorporate action on equity in medical devices into the overarching global strategies on AI safety,” he said. 

The Expert Review Panel, led by Professor Dame Margaret Whitehead, included Professors Raghib Ali (Cambridge University), Enitan Carrol (The University of Liverpool and North West Clinical Research Network), Chris Holmes (The Alan Turing Institute and Oxford University) and Frank Kee (Queen's University Belfast).


Source: University of Liverpool

12.03.2024
