
News • Physicians' dilemma

Who is to blame for medical AI errors?

Assistive artificial intelligence technologies hold significant promise for transforming health care by aiding physicians in diagnosing, managing, and treating patients.

However, the current trend of assistive AI implementation could actually worsen challenges related to error prevention and physician burnout, according to a new brief published in JAMA Health Forum.

The brief, written by researchers from the Johns Hopkins Carey Business School, Johns Hopkins Medicine, and The University of Texas at Austin McCombs School of Business, explains that physicians are increasingly expected to rely on AI to minimize medical errors. Yet despite the rapid adoption of these technologies across health care organizations, laws and regulations that would support physicians as they make AI-guided decisions are not yet in place.

Shefali Patil

Image source: University of Texas at Austin McCombs School of Business

The researchers predict that medical liability will depend on whom society considers at fault when the technology fails or makes a mistake, subjecting physicians to the unrealistic expectation of knowing when to trust AI and when to override it. The authors warn that this expectation could increase the risk of burnout, and even of errors, among physicians.

“AI was meant to ease the burden, but instead, it’s shifting liability onto physicians — forcing them to flawlessly interpret technology even its creators can’t fully explain,” said Shefali Patil, visiting associate professor at the Carey Business School and associate professor at the University of Texas McCombs School of Business. “This unrealistic expectation creates hesitation and poses a direct threat to patient care.” 

The brief suggests strategies for health care organizations to support physicians by shifting the focus from individual performance to organizational support and learning. This shift may alleviate pressure on physicians and foster a more collaborative approach to AI integration.

“Expecting physicians to perfectly understand and apply AI alone when making clinical decisions is like expecting pilots to also design their own aircraft — while they’re flying it,” said Christopher Myers, associate professor and faculty director of the Center for Innovative Leadership at the Carey Business School. “To ensure AI empowers rather than exhausts physicians, health care organizations must develop support systems that help physicians calibrate when and how to use AI so they don’t need to second-guess the tools they’re using to make key decisions.” 


Source: University of Texas at Austin

26.03.2025

Related articles


News • UK family doctor survey

AI in the GP office: study points out lack of clear work policies

ChatGPT has brought generative AI to the mainstream – and into many GP practices as well, a new study suggests. The work points out the risk of doctors using AI without clear guidance or policies.


Article • Philips at ECR 2025

Enhancing the “eye of medicine”

A greater emphasis on AI and sustainability, new approaches to mitigating staff shortage and more: At the European Congress of Radiology (ECR) 2025 in Vienna, Philips showcased its approaches to…


Article • From chatbot to medical assistant

Generative AI: prompt solutions for healthcare?

Anyone who has exchanged a few lines of dialogue with a large language model (LLM), will probably agree that generative AI is an impressive new breed of technology. LLMs show great potential in…
