News • LLMs confirm delusions

Mental illness and AI chatbots: a dangerous match

A new study from Aarhus University and Aarhus University Hospital suggests that the use of AI chatbots such as ChatGPT can have serious negative consequences for people with mental illness. The researchers are calling for increased awareness among healthcare professionals and for regulation.

People with mental illness who use AI chatbots risk experiencing a worsening of their condition. This is shown by a new study published in the international journal Acta Psychiatrica Scandinavica.

The researchers screened electronic health records from nearly 54,000 patients with mental illness and found several cases in which the use of AI chatbots appears to have had negative consequences – primarily in the form of worsened delusions, but also potential worsening of mania, suicidal ideation, and eating disorders. "It supports our hypothesis that the use of AI chatbots can have significant negative consequences for people with mental illness," says Professor Søren Dinesen Østergaard from Aarhus University and Aarhus University Hospital, who leads the research group behind the study.

AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one

Søren Dinesen Østergaard

In their study, the researchers found examples of delusions that were likely worsened due to patients' interactions with AI chatbots. According to Østergaard, there is a logical explanation for this. "AI chatbots have an inherent tendency to validate the user's beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one. Indeed, it appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia," he says.

According to Østergaard, the study should prompt increased awareness among healthcare professionals working with mental illness. He believes they should discuss AI chatbot use with their patients. "Despite our knowledge in this area still being limited, I would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness – such as schizophrenia or bipolar disorder. I would urge caution here," he says. 

The study shows a clear increase over time in the number of electronic health record entries mentioning AI chatbot use with potentially harmful consequences. Østergaard expects many more cases to be identified in the future. "Part of the increase we observe is probably due to greater awareness of the technology among the healthcare staff writing the clinical notes. This is good – because I fear the problem is more common than most people think. In our study, we are only seeing the tip of the iceberg, as we have only been able to identify cases that were described in the electronic health records. There are likely far more that have gone undetected," he says. 

The researchers emphasise, however, that the study does not document a direct causal relationship. "It is difficult to prove a causal link between AI chatbot use and negative psychological consequences. We need to examine this from many different angles, and I know there are many exciting international research projects underway. We are far from the only group taking this seriously," says Østergaard. 

The study also shows that some patients with mental illness use AI chatbots in ways that may be constructive – for example, to understand their symptoms or to combat loneliness. There is also ongoing research into whether AI chatbots can be used for talk therapy. 

Østergaard is nonetheless sceptical. "There may be potential in relation to psychoeducation and psychotherapy, but this must be investigated in controlled trials with the same rigour applied to other forms of treatment. I am not impressed by the trials conducted so far, and I am fundamentally sceptical about replacing a trained psychotherapist with an AI chatbot," he says. 

According to Østergaard, there is a significant lack of regulation of AI chatbot technology. "Currently, it is left to the companies themselves to decide whether their products are safe enough for users. I believe we now have sufficient evidence to conclude that this model is simply too risky. Regulation is needed at a central level," he points out, adding: "It has been 20 years since social media obtained global reach, and only within the last year have countries begun to regulate to counteract the negative consequences of this technology – especially on the mental health of children and young people. As I see it, this story is repeating itself with AI chatbots," he warns.


Source: Aarhus University 

25.02.2026

Related articles

News • “Limited and unpredictable” ability

AI and apps not enough to solve mental health crisis, experts warn

People facing a mental health crisis increasingly turn to AI chatbots and wellness apps for emotional support. However, these tools alone are not sufficient, according to a new health advisory.

News • Reducing clinician burden and burnout

Documentation takes the joy out of medicine – AI scribes could bring it back

AI-driven scribes that record patient visits and draft clinical notes for physician review may lead to significant reductions in physician burnout and improvements in well-being, a new study finds.

News • Relatable characters as communication gateway

How comic book heroes and villains could help cope with childhood trauma

The world of comic book heroes and villains is filled with trauma of all kinds. A new study explores how these tales of hardship could help improve treatments for mental health issues in children.