Article • Professor questions essentials of artificial intelligence safety

Facing facts: AI in clinical practice

A leading UK radiologist has warned that significant steps need to be taken before artificial intelligence (AI) can be used confidently in clinical practice. Examining the safety of AI integration into the clinical workflow at the British Institute of Radiology (BIR) annual congress in London this November, Professor Nicola Strickland focused on issues of data quantity and quality, and on the regulation, validation and testing of algorithms.

Report: Mark Nicholls

Professor Nicola Strickland is a consultant radiologist at Imperial College Healthcare NHS Trust and Immediate Past President of the Royal College of Radiologists

She also urged radiologists and computer scientists to work more closely together to develop safe, effective and relevant algorithms.

Data are the key element: the amount required, access to those data, and their anonymity, uniformity and quality. ‘The safety of an AI algorithm depends largely on the data used for its training, validation and testing,’ said Strickland, a consultant radiologist at Imperial College Healthcare NHS Trust and Immediate Past President of the Royal College of Radiologists. ‘These datasets must be very large, separate, non-overlapping cohorts, to train the AI algorithms properly; otherwise algorithms will perform poorly in real clinical settings, which will lead to lack of faith in AI. The most important thing for radiologists to understand is that the safety of any AI algorithm depends almost entirely on the data used to train, validate and test it. I cannot emphasise enough how important it is that such data are of good quality and properly curated.’
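In machine-learning terms, the ‘separate, non-overlapping cohorts’ she describes are usually enforced by splitting at patient level rather than image level, so that no patient contributes data to more than one set. The sketch below illustrates one way to do this; the index file, the ‘patient_id’ column and the split proportions are assumptions for illustration, not details from the talk.

```python
# Sketch: patient-level split into non-overlapping train / validation / test
# cohorts, so that no patient's images leak between sets.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.read_csv("imaging_index.csv")          # hypothetical index of images

# Carve off a held-out test cohort (20% of patients).
gss = GroupShuffleSplit(n_splits=1, test_size=0.20, random_state=0)
trainval_idx, test_idx = next(gss.split(df, groups=df["patient_id"]))
trainval, test = df.iloc[trainval_idx], df.iloc[test_idx]

# Split the remainder into training and validation cohorts.
gss = GroupShuffleSplit(n_splits=1, test_size=0.20, random_state=1)
train_idx, val_idx = next(gss.split(trainval, groups=trainval["patient_id"]))
train, val = trainval.iloc[train_idx], trainval.iloc[val_idx]

# The three patient sets must be pairwise disjoint.
assert set(train["patient_id"]).isdisjoint(val["patient_id"])
assert set(train["patient_id"]).isdisjoint(test["patient_id"])
assert set(val["patient_id"]).isdisjoint(test["patient_id"])
```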

Strickland noted that access to data is an issue: with no anonymised national database in the UK, and most data hidden behind hospital firewalls, research is impeded. ‘What we want is easy access for researchers to anonymised, or robustly pseudo-anonymised, data – which may be used without consent, although there is ethical unease about this in the UK. This is not necessarily the case in some countries such as India or China, where non-anonymised data are more widely available and may even be purchased. There are different ethical standards around the world.’
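For illustration, robust pseudonymisation of imaging data typically means replacing direct identifiers in the DICOM header with a keyed pseudonym before the data leave the hospital firewall. A minimal sketch using pydicom follows; the salt, the shortened tag list and the file names are assumptions, not a complete de-identification profile.

```python
# Sketch: pseudonymise a DICOM file by hashing the patient ID and blanking
# direct identifiers. Real projects should follow a full de-identification
# profile (e.g. DICOM PS3.15) rather than this shortened tag list.
import hashlib
import pydicom

SALT = b"project-specific-secret"          # assumption: stored separately from the data
IDENTIFYING_TAGS = ["PatientName", "PatientBirthDate", "PatientAddress",
                    "ReferringPhysicianName", "InstitutionName"]

def pseudonymise(path_in: str, path_out: str) -> None:
    ds = pydicom.dcmread(path_in)
    # Replace the patient ID with a salted hash so the same patient maps to
    # the same pseudonym without revealing the original identifier.
    ds.PatientID = hashlib.sha256(SALT + ds.PatientID.encode()).hexdigest()[:16]
    # Blank the remaining direct identifiers if present.
    for tag in IDENTIFYING_TAGS:
        if tag in ds:
            setattr(ds, tag, "")
    ds.save_as(path_out)

pseudonymise("chest_ct_0001.dcm", "chest_ct_0001_pseudo.dcm")
```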

Radiologists and other doctors need to be reassured that an algorithm has been properly tested before being introduced into clinical practice, but that is not the case at the moment

Nicola Strickland

While the accuracy of AI algorithms is paramount, and should be at least equal to that of radiologists, she pointed out that the accuracy of radiologists’ performance is very difficult to measure. In the published literature, radiological reporting discrepancy rates vary between 2% and 30% across studies, depending on selection bias, case mix, the imaging modality used, the criteria used to define a discrepancy, and inter- and intra-observer variation in scoring and assessing.
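Inter-observer variation of the kind she mentions is often quantified with an agreement statistic such as Cohen’s kappa before a radiologist benchmark is used to judge an algorithm. The sketch below uses invented ratings from two hypothetical readers purely to show the calculation; it is not data from the talk.

```python
# Sketch: Cohen's kappa as one way to quantify inter-observer variation.
# The ratings below are invented; 1 = abnormality reported, 0 = not reported.
from sklearn.metrics import cohen_kappa_score

reader_a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
reader_b = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]

kappa = cohen_kappa_score(reader_a, reader_b)
print(f"Cohen's kappa: {kappa:.2f}")   # 1.0 = perfect agreement, 0 = chance-level
```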

However, AI is already showing potential within the NHS for screening, with trials using AI as a third reader in breast screening mammograms and to recognise vertebral fractures on CT scans performed for other reasons. Yet the regulation and testing of AI algorithms remain an area of concern. ‘This is an area in which we are lagging behind,’ Strickland observed. ‘Radiologists and other doctors need to be reassured that an algorithm has been properly tested before being introduced into clinical practice, but that is not the case at the moment.’ She pointed to ‘attractive’ AI algorithms (e.g. for thrombotic stroke detection) coming onto the market with a CE mark and/or FDA approval, allowing their sale in Europe or the USA, yet with no peer-reviewed publications and no independent testing of their performance.

AI must be seamlessly integrated into the clinical workflow alongside RIS/PACS/EPR and radiotherapy planning systems, Strickland advised; otherwise it will not be used. She also stressed the importance of close collaboration between radiologists and computer scientists. ‘It’s important that we get involved with computer scientists developing these AI algorithms to understand the major concepts involved and to help us judge AI in clinical practice. However, as radiologists we are not engaging enough with them. This should be a two-way process. Equally, computer scientists must work with clinicians to understand what is clinically useful and safe and what is not.’
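In practice, ‘seamless integration’ usually means the algorithm receives studies from, and returns results to, the existing PACS over standard interfaces rather than through a separate viewer. Below is a minimal sketch using the dicomweb-client package and a DICOMweb STOW-RS endpoint; the server URL and the pre-built result file are placeholders, and real deployments would follow local PACS/RIS conventions.

```python
# Sketch: return an AI result to the PACS over DICOMweb (STOW-RS) so it appears
# alongside the original study in the normal reporting workflow.
import pydicom
from dicomweb_client.api import DICOMwebClient

# Placeholder endpoint; in practice this is the hospital's DICOMweb gateway.
client = DICOMwebClient(url="https://pacs.example.org/dicomweb")

# Placeholder result object: assumed to be a pre-built DICOM file (e.g. a
# Structured Report or Secondary Capture) encoding the algorithm's findings.
result = pydicom.dcmread("ai_result_sr.dcm")

# Store the result so it is filed with the patient's existing study.
client.store_instances(datasets=[result])
```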

Clinicians, she added, are understandably sceptical about ‘the black box phenomenon’ that arises in AI algorithms using radiomics: doctors have no way of visualising the nature of the computer-derived features and no easy way of checking the accuracy with which the algorithm makes a diagnosis, assesses prognosis, or predicts response to therapy.
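One partial response to that black-box concern, not specific to Strickland’s talk, is to expose which image regions most influenced a model’s output, for example with a gradient-based saliency map. The PyTorch sketch below uses a placeholder network and a random image purely to show the mechanics; saliency is an aid to scrutiny, not proof of sound reasoning.

```python
# Sketch: gradient-based saliency for an image classifier, highlighting the
# pixels that most influenced the predicted class.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)        # placeholder, untrained network
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # placeholder image batch

scores = model(image)
top_class = scores.argmax(dim=1).item()
# Backpropagate the top class score down to the input pixels.
scores[0, top_class].backward()

# Per-pixel saliency: maximum absolute gradient across the colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)   # torch.Size([224, 224])
```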


Profile:

Professor Nicola Strickland is Immediate Past President of the Royal College of Radiologists and a consultant radiologist at Imperial College Healthcare NHS Trust in London. Her specialist interests are chest and oncological body imaging, and imaging informatics. She has held a number of national and international leadership roles, including President of MIR (Management in Radiology) of the European Society of Radiology, President of the Radiological Section of the Royal Society of Medicine, Founder and Chairman of the UK Imaging Informatics Special Interest Group, and President of EuroPACS.

12.03.2020

