News • Patients' perception of AI avatars

Medical advice from ChatGPT: Can a bot's appearance stir bias?

Chatbots are increasingly becoming a part of health care around the world, but do they encourage bias? That’s what University of Colorado School of Medicine researchers are asking as they dig into patients’ experiences with the artificial intelligence (AI) programs that simulate conversation.

"But what if it's a female AI?" - chatbot users, apparently

Image source: Adobe Stock/Digital Vision Lab (image generated with the use of AI)

“Sometimes overlooked is what a chatbot looks like – its avatar,” the researchers write in a new paper published in Annals of Internal Medicine. “Current chatbot avatars vary from faceless health system logos to cartoon characters or human-like caricatures. Chatbots could one day be digitized versions of a patient’s physician, with that physician’s likeness and voice. Far from an innocuous design decision, chatbot avatars raise novel ethical questions about nudging and bias.”

The paper challenges researchers and health care professionals to closely examine chatbots through a health equity lens and to investigate whether the technology truly improves patient outcomes.

In 2021, the Greenwall Foundation awarded funding to Matthew DeCamp, MD, PhD, Associate Professor in the CU Division of General Internal Medicine, and his team of researchers in the CU School of Medicine to investigate ethical questions surrounding chatbots. The research team also included internal medicine professor Annie Moore, MD, MBA, the Joyce and Dick Brown Endowed Professor in Compassion in the Patient Experience, incoming medical student Marlee Akerson, and UCHealth Experience and Innovation Manager Matt Andazola. “If chatbots are patients’ so-called ‘first touch’ with the health care system, we really need to understand how they experience them and what the effects could be on trust and compassion,” Moore says.

We can manipulate avatars to make the chatbot more effective, but should we? Does it cross a line around overly influencing a person's health decisions?

Matthew DeCamp

So far, the team has surveyed more than 300 people and interviewed 30 others about their interactions with health care-related chatbots. For Akerson, who led the survey efforts, it’s been her first experience with bioethics research. “I am thrilled that I had the chance to work at the Center for Bioethics and Humanities, and even more thrilled that I can continue this while a medical student here at CU,” she says. 

The researchers observed that chatbots became especially common during the Covid-19 pandemic. “Many health systems created chatbots as symptom-checkers,” DeCamp explains. “You could go online and type in symptoms such as cough and fever, and it would tell you what to do. As a result, we became interested in the ethics around the broader use of this technology.”

Oftentimes, DeCamp says, chatbot avatars are thought of as a marketing tool, but their appearance can have a much deeper meaning. “One of the things we noticed early on was this question of how people perceive the race or ethnicity of the chatbot and what effect that might have on their experience,” he says. “It could be that you share more with the chatbot if you perceive the chatbot to be the same race as you.” 

For DeCamp and the team of researchers, this prompted many ethical questions, such as how health care systems should design chatbots and whether a design decision could unintentionally manipulate patients. “There does seem to be evidence that people may share more information with chatbots than they do with humans, and that's where the ethics tension comes in: We can manipulate avatars to make the chatbot more effective, but should we? Does it cross a line around overly influencing a person's health decisions?” DeCamp says.

A chatbot’s avatar might also reinforce social stereotypes. Chatbots that exhibit feminine features, for example, may reinforce biases about women’s roles in health care. On the other hand, an avatar may also increase trust among some patient groups, especially those that have been historically underserved and underrepresented in health care, if those patients are able to choose the avatar they interact with. “That's more demonstrative of respect,” DeCamp explains. “And that's good because it creates more trust and more engagement. That person now feels like the health system cared more about them.”

If and when chatbots become a first touch for many patients’ health care, intentional design can promote greater trust in clinicians and health systems broadly

Akerson et al.

While there is currently little direct evidence, an emerging hypothesis holds that a chatbot’s perceived race or ethnicity can affect patient disclosure, experience, and willingness to follow health care recommendations. “This is not surprising,” the CU researchers write in the Annals paper. “Decades of research highlight how patient-physician concordance according to gender, race, or ethnicity in traditional, face-to-face care supports health care quality, patient trust, and satisfaction. Patient-chatbot concordance may be next.”

That’s enough reason to scrutinize the avatars as “nudges,” they say. Nudges are typically defined as low-cost changes in a design that influence behavior without limiting choice. Just as a cafeteria putting fruit near the entrance might “nudge” patrons to pick up a healthier option first, a chatbot could have a similar effect. “A patient’s choice can't actually be restricted,” DeCamp emphasizes. “And the information presented must be accurate. It wouldn't be a nudge if you presented misleading information.” In that way, the avatar can make a difference in the health care setting, even if the nudges aren’t harmful. 

DeCamp and his team urge the medical community to use chatbots to promote health equity and recognize the implications they may have so that the artificial intelligence tools can best serve patients. “Addressing biases in chatbots will do more than help their performance,” the researchers write. “If and when chatbots become a first touch for many patients’ health care, intentional design can promote greater trust in clinicians and health systems broadly.” 


Source: University of Colorado

08.06.2023
