News • UK family doctor survey

AI in the GP office: study points out lack of clear work policies

A fifth of family doctors (GPs) seem to have readily incorporated AI into their clinical practice, despite a lack of any formal guidance or clear work policies on the use of these tools, suggest the findings of an online UK-wide snapshot survey.

[Image: a man typing on a laptop displaying the ChatGPT welcome screen — © Viralyft, unsplash.com]

The results were published in the open access journal BMJ Health & Care Informatics.

Doctors and medical trainees need to be fully informed about the pros and cons of AI, especially because of the inherent risks of inaccuracies (‘hallucinations’), algorithmic biases, and the potential to compromise patient privacy, conclude the researchers. 

Following the launch of ChatGPT at the end of 2022, interest in large language model-powered chatbots has soared, and attention has increasingly focused on the clinical potential of these tools, say the researchers. 

To gauge current use of chatbots to assist with any aspect of clinical practice in the UK, in February 2024 the researchers distributed an online survey to a randomly chosen sample of GPs registered with the clinician marketing service Doctors.net.uk. The survey had a predetermined sample size of 1000. The doctors were asked if they had ever used any of the following in any aspect of their clinical practice: ChatGPT, Bing AI, Google’s Bard, or ‘Other’. They were then asked what they used these tools for.

Some 1006 GPs completed the survey: just over half the responses came from men (531; 53%), and a similar proportion of respondents (544; 54%) were aged 46 or older.

One in five (205; 20%) respondents reported using generative AI tools in their clinical practice. Of these, more than 1 in 4 (29%; 47) reported using the tools to generate documentation after patient appointments, and a similar proportion (28%; 45) said they used them to suggest a differential diagnosis. One in four (25%; 40) said they used the tools to suggest treatment options.

[These tools] may also risk harm and undermine patient privacy since it is not clear how the internet companies behind generative AI use the information they gather

Charlotte R Blease, Cosima Locher, Jens Gaab, Maria Hägglund, Kenneth D Mandl

The researchers acknowledge that the survey respondents may not be representative of all UK GPs, and that those who responded may have been particularly interested in AI—for good or bad—potentially biasing the findings. Further research is needed to find out more about how doctors are using generative AI and how best to implement these tools safely and securely in clinical practice, they add.

“These findings signal that GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning. However, we caution that these tools have limitations since they can embed subtle errors and biases,” they say.

And they point out: “[These tools] may also risk harm and undermine patient privacy since it is not clear how the internet companies behind generative AI use the information they gather. While these chatbots are increasingly the target of regulatory efforts, it remains unclear how the legislation will intersect in a practical way with these tools in clinical practice.”

And they conclude: “The medical community will need to find ways to both educate physicians and trainees about the potential benefits of these tools in summarising information but also the risks in terms of hallucinations [plausible-sounding but false or fabricated output], algorithmic biases, and the potential to compromise patient privacy.”


Source: BMJ Group

18.09.2024
