
News • Algorithmic medical advice

Would you trust an AI doctor? Study reveals split in patients' attitudes

A University of Arizona Health Sciences-led study found that more than 50% of people don't fully trust AI-powered medical advice, but many put faith in AI if it is monitored and guided by a human touch.

Artificial intelligence-powered medical treatment options are on the rise and have the potential to improve diagnostic accuracy, but a new study led by University of Arizona Health Sciences researchers found that about 52% of participants would choose a human doctor rather than AI for diagnosis and treatment. The paper was published in the journal PLOS Digital Health.

Portrait of Dr. Marvin J. Slepian (image source: University of Arizona)

The research was led by Dr. Marvin J. Slepian, Regents Professor of Medicine at the UArizona College of Medicine – Tucson and member of the BIO5 Institute, and Christopher Robertson, professor of law and associate dean for strategic initiatives at the Boston University School of Law. The research team found that most patients aren't convinced that diagnoses provided by AI are as trustworthy as those delivered by human medical professionals. "While many patients appear resistant to the use of AI, accuracy of information, nudges and a listening patient experience may help increase acceptance," Slepian said of the study's other primary finding: that a human touch can help clinical practices use AI to their advantage and earn patients' trust. "To ensure that the benefits of AI are secured in clinical practice, future research on best methods of physician incorporation and patient decision making is required."

In the National Institutes of Health-funded study, participants were placed into scenarios as mock patients and asked whether they would prefer an AI system or a human doctor for diagnosis and treatment, and under what circumstances.

In the first phase, researchers conducted structured interviews with actual patients, testing their reactions to current and future AI technologies. In the second phase of the study, researchers polled 2,472 participants across diverse ethnic, racial and socioeconomic groups using a blinded, randomized survey that tested eight variables. 


Overall, participants were almost evenly split, with just over 52% preferring a human doctor versus approximately 47% choosing an AI diagnostic method. When study participants were prompted that their primary care physician considered AI superior and helpful as an adjunct to diagnosis, or were otherwise nudged to view AI favorably, their acceptance of AI increased on re-questioning. This signals the significance of the human physician in guiding a patient's decision.

Disease severity – leukemia versus sleep apnea – did not affect participants' trust in AI. Compared to white participants, Black participants selected AI less often and Native Americans selected it more often. Older participants were less likely to choose AI, as were those who self-identified as politically conservative or viewed religion as important. 

The racial, ethnic and social disparities identified suggest that different groups will warrant specific sensitivity and attention when being informed about the value and utility of AI in enhancing diagnoses. "I really feel this study has the import for national reach. It will guide many future studies and clinical translational decisions even now," said Slepian, who also holds a J.D. "The onus will be on physicians and others in health care to ensure that information that resides in AI systems is accurate, and to continue to maintain and enhance the accuracy of AI systems as they will play an increasing role in the future of health care."

Co-authors include Andrew Woods, Milton O. Riepe professor of law and co-director of the TechLaw program at the UArizona James E. Rogers College of Law; Kelly Bergstrand, associate professor of sociology and anthropology at the University of Texas at Arlington; Jess Findley, professor of practice and director of bar and academic success at the James E. Rogers College of Law; and Cayley Balser, postgraduate at Innovation for Justice, housed at both the James E. Rogers College of Law and the University of Utah David Eccles School of Business. 

This research was funded in part by the National Institutes of Health under award no. 3R25HL126140-05S1. 


Source: University of Arizona

23.05.2023

