From chatbot to medical assistant
Generative AI: prompt solutions for healthcare

(Image: © elizaliv – stock.adobe.com)

Anyone who has exchanged a few lines of dialogue with a large language model (LLM) will probably agree that generative AI is an impressive new breed of technology. Beyond their use as chatbots, LLMs show great potential in addressing some of the most urgent challenges in healthcare. At the Medica trade fair in Düsseldorf, several expert sessions were dedicated to the topic of generative AI, its potential medical applications and current caveats.

The fact that almost everyone has tried their hand at interacting with a chatbot is testament to what some experts believe to be a key benefit of the technology: ‘LLMs are a game changer in that they make generative AI approachable for more people, not just data scientists,’ explained Dr Bertram Weiss during a discussion at the Medica Health IT Forum. ‘This is because you can talk to them using normal language instead of code,’ the Pharma Lead from Berlin-based AI developer Merantix Momentum added.

It is this high level of accessibility that could help hospitals offload a number of basic but time-consuming tasks, such as appointment scheduling or patient case documentation. For example, generative AI has shown promising results in transcribing doctors’ notes into structured formats, said panellist Dario Antweiler from the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS. Even though results still need to be checked by a human, ‘this takes away hours of daily work from medical professionals,’ the expert pointed out.

Everyone hallucinates sometimes

Pointing out the need for human validation of the AI-generated content brought about one of the technology’s major issues: results that look plausible at first glance but may turn out to be inaccurate or even completely made up by the algorithm – a phenomenon that has gained notoriety as AI ‘hallucinations’.

Since these fabricated findings cannot always reliably be excluded, safeguards should be established, suggested Antweiler. For example, different AI models could be run independently on the same task to reveal inconsistencies. Also, human operators should always be aware of the possibility of such errors. ‘However, we should not forget that humans also “hallucinate” – we make mistakes, and we tend to be less critical about them than we are about AI hallucinations,’ the expert cautioned. ‘Therefore, we should establish fair comparisons, to not vilify this powerful technology.’

Of further note were the IT requirements of generative AI, especially computing power. More general models are often considerably more demanding, barring their use for solutions that need to be locally based to ensure data security. In these cases, smaller, more specific AI models should be implemented, the experts agreed.

Baked-in bias

Prof Dr Reinhard Heckel pointed to insufficient data as a major cause of underperforming AI models. Rare conditions are by nature underrepresented in most datasets, and variations in image quality may increase existing misinterpretations, the expert from the Technical University of Munich’s School of Computation, Information and Technology said, adding that the inclusion of diverse data could mitigate some of these issues. However, some technologies have biases deeply embedded into their functionality, he explained. For example, some imaging modalities are optimized for lighter skin tones, which inherently leads to superior results in these patients. ‘This is a social issue,’ Antweiler said, pointing out the need to establish groups that should be especially protected from this kind of bias.

In medical settings, an AI may process highly sensitive data, which calls for a thorough analysis of benefits and potential risks, Weiss pointed out. ‘We should not blindly push for implementation when the side-effects are unclear,’ he concluded.

Need for ‘AI literacy’ among medical professionals

Despite the current shortcomings of generative AI models, many experts consider their clinical adoption to be almost inevitable. Heckel noted that US-based companies are especially adept at translating AI models into marketable products, while European developers tend to have more high-end models but are more cautious in their implementation.

The aforementioned US companies were of course also at the trade fair to present their generative AI solutions. Notably, Hadas Bitram, Partner General Manager, Health AI at Microsoft Health & Life Sciences, addressed the issue of responsible use of medical AI at the Medica Innovation Forum. She highlighted safeguards in the company’s own products against AI hallucinations and omissions. For example, the LLM provides references to the sources a given piece of information is based on, to enable validation and provide clinical context. ‘Still, there should always be an accountable medical professional in the loop,’ Bitram said.

The expert also highlighted ‘AI literacy’ as a crucial skill for future clinicians: ‘The work of medical professionals will become increasingly sophisticated,’ she predicted. Assessing and using the right AI tools, interpreting their results and making the right calls based on their input will therefore be essential. AI may not replace doctors, she concluded, but doctors who use AI will replace those who don’t. ■

Author: Wolfgang Behrends

AI keeps an eye on health

Using a pioneering artificial intelligence platform, Flinders University researchers have assessed whether a cardiac AI tool trialled in South Australian hospitals actually has the potential to assist doctors and nurses to rapidly diagnose heart issues in emergency departments.
“AI is becoming more common in healthcare, but it doesn’t always fit in smoothly with the vital work of our doctors and nurses,” says Flinders University’s Dr Maria Alejandra Pinero de Plaza. “We need to confirm these systems are trustworthy and work consistently for everyone, ensuring they are able to support medical teams rather than slowing them down.”

Developed by Dr Pinero de Plaza and her team, PROLIFERATE_AI is a human-centred evaluation tool that uses artificial intelligence alongside researcher analysis to assess how well AI tools work in hospitals. “In order to understand if the AI systems are viable, we look at how easy they are to use, how well doctors and nurses adopt them, and how they impact patient care,” says Dr Pinero de Plaza, a research fellow in Flinders’ Caring Futures Institute. “It’s not just about making AI accurate; it’s about making sure it’s easy to understand, adaptable, and genuinely helpful for doctors and patients when it matters most.”

Published in the International Journal of Medical Informatics, the study used PROLIFERATE_AI to assess the RAPIDx AI tool, designed to help emergency doctors quickly and accurately diagnose cardiac conditions by rapidly analysing clinical and biochemical data. With chest pain one of the most common reasons for ED visits, the South Australian health system has been part of an NHMRC-funded trial run across 12 hospitals in metropolitan and rural South Australia, which is currently analysing its 12-month patient outcomes. Before and during the trial, the PROLIFERATE researchers evaluated the tool, with medical and nursing staff at the participating hospitals being given the opportunity to share their insights on interacting with the RAPIDx AI tool.
The results showed that while experienced clinicians, such as ED consultants and registrars, demonstrated high comprehension of and engagement with the RAPIDx AI tool, less experienced users, including interns and residents, faced usability challenges. Registered nurses also reported strong emotional engagement with the tool, recognising its potential to enhance patient safety by reducing diagnostic uncertainty.

“What sets PROLIFERATE_AI apart is its ability to provide actionable insights,” says Dr Pinero de Plaza. “Rather than focusing solely on technical performance, we evaluate AI tools based on real-world usability and clinician trust, ensuring that these technologies are not just innovative but also practical and accessible.”

While the research demonstrated that consultants and registrars benefited most from using RAPIDx AI, the study also highlighted the need for targeted training and workflow-aligned interfaces to improve adoption rates among new users.

“Our goal is to create AI solutions that empower doctors and nurses, not replace them,” says Dr Pinero de Plaza. “Technology alone cannot solve the complexities of emergency care. We need AI systems that work seamlessly with clinicians, support decision-making under pressure, and integrate smoothly into existing workflows. We are committed to enhancing our AI tools to ensure they are instinctively user-friendly and to develop customised training programs that meet the diverse needs of healthcare professionals and others.” ■

Source: Flinders University

EUROPEAN HOSPITAL Vol 34 Issue 1/25