Are artificial minds always superior to humans? When it comes to AI-powered interpretation of medical images, experts see more hype than substance

Image source: Shutterstock/agsandrew

Experts express doubts

AI outperforming doctors: hype, exaggeration or fact?

Many studies claiming that artificial intelligence (AI) is as good as, or better than, human experts at interpreting medical images are of poor quality and arguably exaggerated, posing a risk to the safety of ‘millions of patients’, warn researchers in The BMJ.

Their findings raise concerns about the quality of evidence underpinning many of these studies, and highlight the need to improve their design and reporting standards.

AI is an innovative and fast-moving field with the potential to improve patient care and relieve overburdened health services. Deep learning, a branch of AI, has shown particular promise in medical imaging. The volume of published research on deep learning is growing, and media headlines claiming superior performance to doctors have fuelled hype for rapid implementation. But the methods and risk of bias of the studies behind these headlines have not been examined in detail.

To address this, a team of researchers reviewed studies published over the past 10 years that compared the performance of a deep learning algorithm in medical imaging with that of expert clinicians. They found just two eligible randomised clinical trials and 81 non-randomised studies. Of the non-randomised studies, only nine were prospective (tracking and collecting information about individuals over time) and just six were tested in a ‘real world’ clinical setting.

The average number of human experts in the comparator group was just four, while access to raw data and code (to allow independent scrutiny of results) was severely limited. More than two thirds of the studies (58 of 81) were judged to be at high risk of bias (problems in study design that can influence results), and adherence to recognised reporting standards was often poor. Three quarters (61 studies) stated that the performance of AI was at least comparable to, or better than, that of clinicians, yet only 31 (38%) stated that further prospective studies or trials were needed.
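As a quick sanity check, the proportions quoted above follow directly from the study counts, assuming the 81 non-randomised studies as the denominator:

```python
# Reproduce the proportions reported in the BMJ review
# (counts taken from the article: 81 non-randomised studies in total).
total = 81
counts = {
    "high risk of bias": 58,           # studies judged at high risk of bias
    "AI comparable or better": 61,     # studies claiming parity or superiority
    "further trials needed": 31,       # studies calling for prospective work
}

for label, n in counts.items():
    print(f"{label}: {n}/{total} = {n / total:.0%}")
```

Running this reproduces the figures in the text: 58/81 rounds to 72% (more than two thirds), 61/81 to 75% (three quarters), and 31/81 to 38%.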

Maximising patient safety will be best served by ensuring that we develop a high quality and transparently reported evidence base moving forward

Nagendran et al.

The researchers point to some limitations, such as the possibility of missed studies and the focus on deep learning medical imaging studies so results may not apply to other types of AI. Nevertheless, they say that at present, “many arguably exaggerated claims exist about equivalence with (or superiority over) clinicians, which presents a potential risk for patient safety and population health at the societal level.”

Overpromising language “leaves studies susceptible to being misinterpreted by the media and the public, and as a result the possible provision of inappropriate care that does not necessarily align with patients’ best interests,” they warn. “Maximising patient safety will be best served by ensuring that we develop a high quality and transparently reported evidence base moving forward,” they conclude.


Source: The BMJ

26.03.2020


