Mammogram images showing a real cancer-positive case (left), with cancerous tissue indicated by a white spot. A 'generative adversarial network' program removed the cancerous regions from the cancer-positive image, creating a fake negative image (right). The method can also work the other way around, artificially inserting cancerous regions into a cancer-negative image to create a fake positive.

Image source: Zhou et al., Nature Communications 2021 (CC BY 4.0)

News • Adversarial attacks

Fake images can fool cancer-spotting AI and human experts

Artificial intelligence (AI) models that evaluate medical images have the potential to speed up and improve the accuracy of cancer diagnoses, but a new study from the University of Pittsburgh shows they may also be vulnerable to cyberattacks.

The study, published in Nature Communications, brings attention to a potential safety issue for medical AI known as "adversarial attacks," which seek to alter images or other inputs to make models arrive at incorrect conclusions. "What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis—which is a big patient safety issue," said senior author Shandong Wu, Ph.D., associate professor of radiology, biomedical informatics and bioengineering at Pitt. "By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust." 

AI-based image recognition technology for cancer detection has advanced rapidly in recent years, and several breast cancer models have U.S. Food and Drug Administration (FDA) approval. According to Wu, these tools can rapidly screen mammogram images and identify those most likely to be cancerous, helping radiologists be more efficient and accurate. But such technologies are also at risk from cyberthreats, such as adversarial attacks. Potential motivations for such attacks include insurance fraud by health care providers looking to boost revenue, or companies trying to adjust clinical trial outcomes in their favor. Adversarial attacks on medical images range from tiny manipulations that change the AI's decision but are imperceptible to the human eye, to more sophisticated versions that target sensitive contents of the image, such as cancerous regions, making them more likely to fool a human as well.
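The "tiny, imperceptible manipulation" end of this spectrum is commonly illustrated by the fast gradient sign method (FGSM). This was not the technique used in the study (which relied on a GAN), but a minimal numpy sketch on a toy linear classifier shows the principle: each pixel is nudged by a small amount in whichever direction pushes the model toward the wrong answer.

```python
import numpy as np

# Toy stand-in for an image classifier: logistic regression on a
# flattened 100-pixel "image". Weights and input are random here.
rng = np.random.default_rng(0)
w = rng.normal(size=100)
x = rng.normal(size=100)

def score(v):
    """Probability of the 'positive' class under the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ v)))

# FGSM idea: perturb each pixel by a tiny eps in the direction that
# moves the decision. For a linear model the gradient of the logit
# with respect to x is just w, so sign(w) gives that direction.
eps = 0.05
x_adv = x - eps * np.sign(w)   # push the score toward "negative"

print(score(x), score(x_adv))  # adversarial score is lower
```

Even though each pixel changes by at most `eps`, the small shifts accumulate across the whole image and can flip the classifier's decision while remaining invisible to a human viewer.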

An AI-CAD model was first trained and then tested on adversarial images generated by the GAN model, which modified the diagnosis-sensitive contents of the images by inserting or removing cancerous tissue. The reader study examined human experts' ability to visually recognize the GAN-generated adversarial images.

Image source: Zhou et al., Nature Communications 2021 (CC BY 4.0)

To understand how AI would behave under this more complex type of adversarial attack, Wu and his team used mammogram images to develop a model for detecting breast cancer. First, the researchers trained a deep learning algorithm to distinguish cancerous and benign cases with more than 80% accuracy. Next, they developed a so-called "generative adversarial network" (GAN)—a computer program that generates false images by inserting or removing cancerous regions from negative or positive images, respectively, and then they tested how the model classified these adversarial images.

Of 44 positive images made to look negative by the GAN, 42 were classified as negative by the model, and of 319 negative images made to look positive, 209 were classified as positive. In all, the model was fooled by 69.1% of the fake images.
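The overall fooling rate follows directly from the reported counts; a quick check of the arithmetic:

```python
# Counts reported in the study: fake images the AI misclassified.
fooled_pos = 42            # of 44 positives made to look negative
fooled_neg = 209           # of 319 negatives made to look positive
total_fakes = 44 + 319

rate = (fooled_pos + fooled_neg) / total_fakes
print(f"{rate:.1%}")       # prints 69.1%
```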

In the second part of the experiment, the researchers asked five human radiologists to distinguish whether mammogram images were real or fake. The experts identified the images' authenticity with accuracies between 29% and 71%, depending on the individual. "Certain fake images that fool AI may be easily spotted by radiologists. However, many of the adversarial images in this study not only fooled the model, but they also fooled experienced human readers," said Wu, who is also the director of the Intelligent Computing for Clinical Imaging Lab and the Pittsburgh Center for AI Innovation in Medical Imaging. "Such attacks could potentially be very harmful to patients if they lead to an incorrect cancer diagnosis."

According to Wu, the next step is developing ways to make AI models more robust to adversarial attacks. "One direction that we are exploring is 'adversarial training' for the AI model," he explained. "This involves pre-generating adversarial images and teaching the model that these images are manipulated."
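The adversarial-training idea Wu describes amounts to a data-augmentation step: pre-generate manipulated images, label them as such, and train on the combined set. A minimal numpy sketch of that assembly step (the study used a GAN to produce the manipulated images; the perturbation function below is just a placeholder):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_adversarial(images, eps=0.05):
    # Placeholder for pre-generating manipulated images; the study
    # used a GAN that inserts or removes cancerous regions.
    return images + eps * np.sign(rng.normal(size=images.shape))

clean = rng.normal(size=(100, 64))   # 100 toy "images", 64 pixels each
adv = make_adversarial(clean)

# Adversarial training set: merge both groups, labelling the
# manipulated copies so the model learns to flag them.
X = np.concatenate([clean, adv])
y = np.concatenate([np.zeros(len(clean)), np.ones(len(adv))])  # 0 = real, 1 = manipulated
```

Training any classifier on `(X, y)` then teaches it to recognize the manipulation signature alongside its original diagnostic task.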

With the prospect of AI being introduced to medical infrastructure, Wu said that cybersecurity education is also important to ensure that hospital technology systems and personnel are aware of potential threats and have technical solutions to protect patient data and block malware. "We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks, ensuring AI systems function safely to improve patient care," he added.


Source: University of Pittsburgh

15.12.2021
