AI in radiology

Deep learning helps visualize X-ray data in 3D

A team of scientists at Argonne National Laboratory has leveraged artificial intelligence to train computers to keep up with the massive amounts of X-ray data taken at the Advanced Photon Source.

Human skeleton rendered in 3D (symbolic picture)

Image source: Raman Oza from Pixabay

Computers have been able to quickly process 2D images for some time. Your cell phone can snap digital photographs and manipulate them in a number of ways. Processing an image in three dimensions, however, is much more difficult, and doing it in a timely manner even more so: the mathematics are more complex, and crunching those numbers, even on a supercomputer, takes time.

That is the challenge a group of scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory is working to overcome. Artificial intelligence has emerged as a versatile solution to the issues posed by big-data processing. For scientists who use the Advanced Photon Source (APS), a DOE Office of Science User Facility at Argonne, to process 3D images, it may be the key to turning X-ray data into visible, understandable shapes at a much faster rate. A breakthrough in this area could have implications for astronomy, electron microscopy and other areas of science that depend on large amounts of 3D data.

The research team, which includes scientists from three Argonne divisions, has developed a new computational framework called 3D-CDI-NN, and has shown that it can create 3D visualizations from data collected at the APS hundreds of times faster than traditional methods can. The team’s research was published in Applied Physics Reviews. CDI stands for coherent diffraction imaging, an X-ray technique that involves bouncing ultra-bright X-ray beams off samples. The scattered X-rays are then collected by detectors as data, and it takes considerable computational effort to turn that data into images. Part of the challenge, explains Mathew Cherukara, leader of the Computational X-ray Science group in Argonne’s X-ray Science Division (XSD), is that the detectors capture only some of the information from the beams.
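The information the detectors miss is, in essence, the phase of the diffracted wave: a detector records only intensities. A minimal numpy sketch of this forward model (a toy cube standing in for a real crystal; this is an illustration, not the team’s actual code) shows what is measured and what is lost:

```python
import numpy as np

# Toy 3D "crystal": a small cube of unit density inside an empty volume.
obj = np.zeros((32, 32, 32))
obj[12:20, 12:20, 12:20] = 1.0

# In coherent diffraction imaging, the far-field diffraction pattern is
# modeled by a Fourier transform of the object's density.
field = np.fft.fftn(obj)

# The detector measures only the magnitude squared (the intensity);
# the complex phase of the diffracted field is discarded.
intensity = np.abs(field) ** 2

# The "missing information" is exactly this phase, which must be
# recovered computationally before a real-space image can be formed.
phase = np.angle(field)
```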

But there is important information contained in the missing data, and scientists rely on computers to fill it in. As Cherukara notes, while this takes some time to do in 2D, it takes even longer with 3D images. The solution, then, is to train an artificial intelligence to recognize objects, and the microscopic changes they undergo, directly from the raw data, without having to fill in the missing information. To do this, the team started with simulated X-ray data to train the neural network (the NN in the framework’s name), a series of algorithms that can teach a computer to predict outcomes based on the data it receives. Henry Chan, the lead author on the paper and a postdoctoral researcher in the Center for Nanoscale Materials (CNM), a DOE Office of Science User Facility at Argonne, led this part of the work. “We used computer simulations to create crystals of different shapes and sizes, and we converted them into images and diffraction patterns for the neural network to learn,” Chan said. “The ease of quickly generating many realistic crystals for training is the benefit of simulations.”
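Chan’s simulation-driven training strategy can be sketched roughly as follows. The box-shaped “crystals” and the pairing of diffraction intensities (network input) with ground-truth shapes (training target) are illustrative assumptions, not the paper’s actual pipeline:

```python
import numpy as np

def make_training_pair(rng, size=32):
    """Return (diffraction_intensity, crystal) for one random box-shaped crystal."""
    crystal = np.zeros((size, size, size))
    # Random edge lengths and position for the box.
    dims = rng.integers(4, size // 2, 3)
    corner = [rng.integers(0, size - d) for d in dims]
    region = tuple(slice(c, c + d) for c, d in zip(corner, dims))
    crystal[region] = 1.0
    # The network's input: the diffraction intensity a detector would record.
    intensity = np.abs(np.fft.fftn(crystal)) ** 2
    return intensity, crystal

# Generate a small batch of simulated training examples.
rng = np.random.default_rng(42)
inputs, targets = zip(*(make_training_pair(rng) for _ in range(8)))
```

Because every simulated crystal's shape is known exactly, each diffraction pattern comes with a perfect label, which is the key advantage of simulated training data that Chan describes.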

This work was done using the graphics processing unit resources at Argonne’s Joint Laboratory for System Evaluation, which deploys leading-edge testbeds to enable research on emerging high-performance computing platforms and capabilities. Once the network is trained, says Stephan Hruszkewycz, physicist and group leader with Argonne’s Materials Science Division, it can come pretty close to the right answer, pretty quickly. However, there is still room for refinement, so the 3D-CDI-NN framework includes a process to get the network the rest of the way there. Hruszkewycz, along with Northwestern University graduate student Saugat Kandel, worked on this aspect of the project, which reduces the need for time-consuming iterative steps. “The Materials Science Division cares about coherent diffraction because you can see materials at few-nanometer length scales — about 100,000 times smaller than the width of a human hair — with X-rays that penetrate into environments,” Hruszkewycz said. “This paper is a demonstration of these advanced methods, and it greatly facilitates the imaging process. We want to know what a material is, and how it changes over time, and this will help us make better pictures of it as we make measurements.”
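The refinement step Hruszkewycz and Kandel describe, taking the network’s quick first guess “the rest of the way there,” can be illustrated with the classic error-reduction loop from phase retrieval. This is an assumed stand-in for the framework’s actual refinement procedure: it alternately enforces the measured diffraction magnitudes and a real-space support constraint.

```python
import numpy as np

def refine(guess, measured_magnitude, support, n_iter=20):
    """Error-reduction refinement starting from a network-style first guess."""
    obj = guess.astype(complex)
    for _ in range(n_iter):
        field = np.fft.fftn(obj)
        # Keep the current phase estimate, enforce the measured magnitude.
        field = measured_magnitude * np.exp(1j * np.angle(field))
        obj = np.fft.ifftn(field)
        # Enforce the support: the object is zero outside its known extent.
        obj = np.where(support, obj.real, 0.0).astype(complex)
    return obj.real

# Usage on a toy problem: polish a noisy guess of a cube using its
# (noise-free) diffraction magnitudes and known support.
true = np.zeros((16, 16, 16))
true[4:10, 4:10, 4:10] = 1.0
magnitude = np.abs(np.fft.fftn(true))
support = true > 0
rng = np.random.default_rng(1)
noisy_guess = true + 0.3 * rng.standard_normal(true.shape) * support
refined = refine(noisy_guess, magnitude, support)
```

Because the network already lands near the right answer, only a few such iterations are needed, which is what makes the combined approach so much faster than starting iterative reconstruction from scratch.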

As a final step, 3D-CDI-NN’s ability to fill in missing information and come up with a 3D visualization was tested on real X-ray data of tiny particles of gold, collected at beamline 34-ID-C at the APS. The result is a computational method that is hundreds of times faster on simulated data, and nearly that fast on real APS data. The tests also showed that the network can reconstruct images with less data than is usually required to compensate for the information not captured by the detectors.

The next step for this research, according to Chan, is to integrate the network into the APS’s workflow, so that it learns from data as it is taken. If the network learns from data at the beamline, he said, it will continuously improve. For this team, there’s a time element to this research as well. As Cherukara points out, a massive upgrade of the APS is in the works, and the amount of data generated now will increase exponentially once the project is complete. The upgraded APS will generate X-ray beams that are up to 500 times brighter, and the coherence of the beam — the characteristic of light that allows it to diffract in a way that encodes more information about the sample — will be greatly increased. That means that while it takes two to three minutes now to gather coherent diffraction imaging data from a sample and get an image, the data collection part of that process will soon be up to 500 times faster. The process of converting that data to a usable image also needs to be hundreds of times faster than it is now to keep up. “In order to make full use of what the upgraded APS will be capable of, we have to reinvent data analytics,” Cherukara said. “Our current methods are not enough to keep up. Machine learning can make full use and go beyond what is currently possible.”
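The throughput arithmetic behind Cherukara’s concern is simple to make concrete, using the article’s own figures:

```python
# Back-of-the-envelope arithmetic from the article's figures.
current_collection_s = 2.5 * 60   # ~2-3 minutes per CDI dataset today
speedup = 500                     # upgraded APS: data collection up to 500x faster
future_collection_s = current_collection_s / speedup
print(f"~{future_collection_s:.1f} s per dataset after the upgrade")
```

At roughly a dataset every fraction of a second, reconstruction that takes minutes per dataset would fall hopelessly behind, hence the push to make the analysis hundreds of times faster as well.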


Source: Argonne National Laboratory

27.07.2021

