
News • Ten times faster than doctors

ChatGPT writes medical record notes – at record speed

The AI model ChatGPT can write administrative medical notes up to ten times faster than doctors without compromising quality.

This is according to a new study conducted by researchers at Uppsala University Hospital and Uppsala University, in collaboration with Danderyd Hospital and the University Hospital of Basel, Switzerland, and published in the journal Acta Orthopaedica. The researchers conducted a pilot study of just six virtual patient cases, which will now be followed up with an in-depth study of 1,000 authentic patient medical records.

Cyrus Brodén

Image source: Akademiska sjukhuset/Uppsala universitet; photo: private

“For years, the debate has centred on how to improve the efficiency of healthcare. Thanks to advances in generative AI and language modelling, there are now opportunities to reduce the administrative burden on healthcare professionals. This will allow doctors to spend more time with their patients,” explains Cyrus Brodén, an orthopaedic physician and researcher at Uppsala University Hospital and Uppsala University. 

Administrative tasks take up a large share of a doctor’s working hours, reducing the time available for patient contact and contributing to a stressful work situation. The new study shows that the AI model ChatGPT can write administrative medical notes up to ten times faster than doctors without compromising quality.

The aim of the study was to assess the quality and effectiveness of the ChatGPT tool when producing medical record notes. The researchers used six virtual patient cases that mimicked real cases in both structure and content. Discharge documents for each case were generated by orthopaedic physicians. ChatGPT-4 was then asked to generate the same notes. The quality assessment was carried out by an expert panel of 15 people who were unaware of the source of the documents. As a secondary metric, the time required to create the documents was compared.

“The results show that ChatGPT-4 and human-generated notes are comparable in quality overall, but ChatGPT-4 produced discharge documents ten times faster than the doctors,” notes Brodén. “Our interpretation is that advanced large language models like ChatGPT-4 have the potential to change the way we work with administrative tasks in healthcare. I believe that generative AI will have a major impact on healthcare and that this could be the beginning of a very exciting development,” he maintains.

The plan is to launch an in-depth study shortly, with researchers collecting 1,000 authentic patient medical records. Again, the aim is to use ChatGPT to produce similar administrative notes in the patient records. “This will be an interesting and resource-intensive project involving many partners. We are already working actively to fulfil all data management and confidentiality requirements to get the study under way,” concludes Brodén.


Source: Uppsala University

27.03.2024
