‘In the Automation in Medical Imaging (AMI) project, we will build the necessary tools and infrastructure to ease the development of a special kind of computer software,’ computer scientist Markus Harz explains.
‘This software is capable of learning, so that it can at some point understand images as well as a human. As the project name implies, we are not trying to teach the computer to understand any image, but medical images in particular. This requires infrastructure: loading clinical image data into the computer software development process is not straightforward, and making standard desktop computers learn is cumbersome. This infrastructure helps to develop the software efficiently.
‘AMI is an international research project jointly headed by Horst K Hahn and Markus Harz between Fraunhofer MEVIS in Bremen and the Diagnostic Image Analysis Group (DIAG) in Nijmegen, The Netherlands. The DIAG group has renowned expertise in state-of-the-art self-learning software for medical image analysis. Fraunhofer MEVIS has years of experience in industry-grade software development.’ The combination has already proved complementary, he points out, through shared learning and the fusion of their development systems.
Scientific, commercial, technical objectives
‘The greatest goal is to create software that solves real clinical problems. We are convinced that automation helps: automate the processes clinicians hate most, such as searching for the right images or comparing details from two examinations. If clinicians want the software, its marketability increases, and we will have reached both our clinical and commercial goals.
‘This software is similar to a person who learns. When a radiologist decides whether a suspicious area in a medical image is harmless or reason for concern, she will use all her experience and knowledge to compare form and structure, perhaps making judgments based on location and assessing other features. This is very similar to various approaches taken by machine learning. More traditional methods use criteria such as those used by radiologists. The magic lies in teaching the software to “see” with a radiologist’s eyes and judge by her criteria.’
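The ‘criteria such as those used by radiologists’ in traditional methods are hand-designed image features. As a purely illustrative sketch – hypothetical values, not the project’s code – here is one classic shape feature in Python: compactness (perimeter squared over area), which tends to be low for round lesions and high for irregular ones:

```python
def compactness(mask):
    """Perimeter squared over area of a binary lesion mask.

    Round shapes score low; elongated or irregular shapes score high -
    the kind of hand-crafted criterion traditional methods relied on.
    """
    rows, cols = len(mask), len(mask[0])
    area = sum(cell for row in mask for cell in row)
    perimeter = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    # Count every cell edge bordering background or the image edge.
                    if not (0 <= nr < rows and 0 <= nc < cols) or not mask[nr][nc]:
                        perimeter += 1
    return perimeter ** 2 / area

square_blob = [[1, 1],
               [1, 1]]          # compact: 8**2 / 4 = 16.0
thin_line = [[1, 1, 1, 1]]      # elongated: 10**2 / 4 = 25.0
```

A learned system replaces dozens of such manual definitions with features discovered from the data itself.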
Deep learning algorithms
‘Deep learning comprises a variety of neural network architectures’ – Markus Harz
‘Deep learning comprises a variety of neural network architectures. Neural networks emulate the brain’s neural connectivity with neurons and synapses. Deep-learning neural networks are special among these approaches: they contain many more neurons and synapses than previous neural networks. Perhaps most significantly for machine learning, deep neural networks can learn directly from data. Previously, experts had to translate between images and learning algorithms: they designed the image features for the computer to use. In a way, an expert taught the computer to see. Deep learning is different: give the computer huge amounts of data and hints about the meaning, and the deep learning algorithm can discover the most relevant image features itself – often much more useful than those constructed by a human.’
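To make the neurons-and-synapses picture concrete, here is a minimal NumPy sketch of a small network’s forward pass. The layer sizes are arbitrary, the weights random, and training by backpropagation is omitted – this is an illustration, not AMI’s software:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Simple activation: a neuron 'fires' only for positive input."""
    return np.maximum(x, 0.0)

# Each weight matrix plays the role of the synapses connecting one layer
# of neurons to the next; a *deep* network simply stacks many such layers.
layer_sizes = [64, 32, 16, 2]      # e.g. an 8x8 patch in, 2 class scores out
weights = [rng.normal(0.0, 0.1, (m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)            # hidden layers build up abstractions
    return x @ weights[-1]         # final layer: raw scores per class

patch = rng.normal(size=64)        # stand-in for a flattened image patch
scores = forward(patch)            # one score per class
```

During training, the weights would be adjusted so that the scores match the labels the doctors provided – that adjustment is where the features are ‘discovered’.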
Areas of use
‘We want to present these tiny changes to the oncologist even before he’s viewed a single image’ – Markus Harz
‘We have three clinical applications in mind, but the deep learning approach imposes a clear restriction: it requires large amounts of data, and this data has to be prepared. The project tackles this problem: we want to ease the development of deep-learning computer software by simplifying data preparation for doctors, so that large amounts of data can be collected quickly. Thereafter, the trained computer software helps to collect more data. A virtuous circle! We begin by focusing on three applications for which substantial data already exists – digital pathology, ophthalmology and oncology.’
‘The objectives vary for each of these fields. Oncologists monitor the health of cancer patients in screening or after treatment. They search for minute, suspicious changes, but the images are most often completely unremarkable. We want to present these tiny changes to the oncologist even before he’s viewed a single image. The software has to learn about the body and its organs, how they normally appear, and where they are. Finding and capturing the organs automatically greatly helps subsequent processing.
‘For digital pathology, the challenge is to manage extraordinarily large images. The search for tiny cancer cell clusters resembles seeking a needle in a haystack. This is exactly what a computer can do very well and very accurately.

‘For ophthalmology, we want to improve treatment decision-making, for example by teaching software to detect and measure fluid compartments behind the retina. The change in this value from exam to exam determines the treatment.’
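The needle-in-a-haystack search over a gigapixel pathology slide is typically made tractable by scoring the slide patch by patch. A minimal sketch of that tiling step (patch and stride sizes here are illustrative assumptions, not project parameters):

```python
def patch_grid(width, height, patch=256, stride=256):
    """Yield top-left corners of fixed-size patches covering a slide."""
    for y in range(0, height - patch + 1, stride):
        for x in range(0, width - patch + 1, stride):
            yield x, y

# Even a modest whole-slide image yields a huge number of patches to score:
n_patches = sum(1 for _ in patch_grid(100_000, 80_000))
print(n_patches)   # 121680 patches of 256x256 pixels
```

In practice the patches would be read lazily from the slide file, and only the few the network flags as suspicious would be passed on for closer inspection.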
Software diagnostic advantages/disadvantages
‘Computers can now match clinicians’ performance, at least on isolated tasks. The most prominent examples include tumour detection in breast and lung cancer screening and the diagnosis of skin cancer images. In a recent skin cancer study, computer software outperformed experienced doctors by such a wide margin that I wouldn’t have to think twice about which diagnosis to trust more. However, the algorithms can only perform well when provided with sufficient data and information. Doctors often have an information advantage: they may know the patient from a prior visit, have read a relevant journal article, or have discussed the topic with a colleague. Computers cannot easily access such implicit knowledge. Therefore, I think humans will certainly be part of the picture for quite some time.’
Endangering medical jobs
‘The help of ‘intelligent’ computer software in medicine is a means to extend and improve healthcare. I see radiologists struggle with the sheer amount of medical images they must review. How long can this continue? Employing more radiologists seems unfeasible, given already exploding costs in high-tech medicine. Simultaneously, public awareness of the benefit of modern, image-based diagnostics increases the demand to offer this to a broader public. This is possible, but hardly imaginable without computer assistance.’
A massive amount of data is needed for the algorithm to learn good image features, and deep learning algorithms must also be tuned. ‘Imagine recreating the visual system from eye to brain with only very small building blocks. How are these connected to each other? How many layers of abstraction do you need? You have to strike a balance between abstraction and differentiation. You don’t want to be too fine and only detect meaningless dots, lines and circles. You also don’t want to be too coarse and fail to distinguish, for example, between cancer types.’
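One concrete knob behind that balance is depth: every extra convolutional layer widens the region of the image a neuron can ‘see’, moving it from dots and lines towards whole structures. A back-of-envelope sketch, assuming a stack of stride-1 3×3 convolutions (a common but here purely illustrative choice):

```python
def receptive_field(num_layers, kernel=3):
    """Receptive field of `num_layers` stacked stride-1 convolutions.

    A single neuron starts by seeing 1 pixel; each layer adds
    (kernel - 1) pixels of surrounding context.
    """
    return 1 + num_layers * (kernel - 1)

for depth in (1, 5, 20):
    print(f"{depth:>2} layers -> sees {receptive_field(depth)} pixels across")
```

Too shallow, and each neuron can only describe local texture; deep enough, and it summarises a region large enough to tell, say, one cancer type from another.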
Shaping future diagnoses
Harz believes computer-prepared and computer-aided diagnoses are the future, though he expects another five to ten years before market approval. Indeed, in some scenarios, human-made decisions might no longer be accepted.
For computer scientist Markus Harz PhD, six months’ work in a US breast care centre has added much to his seven years in project management and ten years in medical image analysis. He has also contributed to several international research projects, both scientifically and through project management. His PhD proposed methods for computer assistance in complex image-based clinical tasks. Machine learning and computer vision remain important tools throughout his approach.