‘Virtual reality (VR) is often used for education and training, for example in anatomy teaching, where students can examine the virtual models,’ Egidijus Pelanis explained. ‘In medical school, students train on artificially created anatomical models. Unless the model is a cadaver or derived from a patient’s images, it is standardised. But that’s not how it is in reality, because patients have anatomical variations. In VR, you can create highly detailed and realistic patient-specific 3D models. Users can also be placed in life-like environments and simulations, and use controllers that provide haptic feedback during interactions, to train before the actual procedure.’
Augmented reality (AR)
‘AR comes closer to the actual surgery because virtual images can be overlaid on top of the real-world object. In the Intervention Centre at Oslo University Hospital, we have tested AR for laparoscopic liver surgery. Here, the user can place 3D models and see these virtual elements in the 3D camera view. However, there are still some limitations related to the complex workflow, visualisation and how these virtual elements should be presented on the laparoscopic camera view.’
Mixed reality (MR)
‘In MR, the user views digital images in the physical world and interacts with them. At our hospital, we’re working with Microsoft’s HoloLens 2, which allows us to place virtual objects into our environment. We create holograms of patients that clinicians can walk around, examining the anatomy and pathology individually or in groups. It enables us to visualise and explore different treatment strategies in several disciplines – liver surgery, congenital heart surgery, orthopaedic surgery, and so on.
‘One of the advantages of using MR is that the clinicians are more accessible and not bound by the physical location of the hardware. With a head-mounted mixed reality device, one can see a model with depth as if it were in real life and can point and interact using one’s hands. You can toggle the visibility of the virtual elements or peek inside to better understand the spatial distribution of the anatomy and pathology.’
Challenges to XR implementation in surgery
‘It’s crucial to remember that patients are in three dimensions, but medical professionals still focus on flat images for decision-making. For education and case reporting, documentation is commonly limited to text or annotations on medical images. This is constraining, and it takes a lot of imagination to understand where all these locations are inside the patient. By creating 3D volume renderings and holograms, you can point at and show virtual models, combining all these media and information for more detailed documentation.
‘However, one major challenge is the preparation for XR: the data that needs to be processed into 3D models and holograms. Hospitals are sitting on a lot of unprocessed data that they still use in the traditional way of examining. I believe that doctors, surgeons, and even patients should start asking for 3D representations of their anatomy and pathology.
‘Then again, to create 3D models of every patient, you need to have sufficient evidence that it will help, that you cannot make the best decision for the patient without it. Hospitals are sitting on this resource, but too few are processing it; too few are bold enough to invest without solid evidence that having that data processed will be important in the future. But the proof can only be established if someone spends enough time on this and publishes the research results.’
Different navigation and visualisation methods in XR
‘Medical images can be used, for example, to create virtual 3D models to overlay on the patient during surgery. Once patient-specific 3D models are created, the physical and virtual spaces can be combined with tracking technology. You can, for example, have an optical tracking system with cameras in the OR that detect spheres placed on the objects to be tracked. This enables real-time tracking of those objects and the ability to place virtual elements in the OR. These fused scenes and images can be presented either on screens, in augmented reality, or in mixed reality.
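The fusion step described above can be illustrated in a few lines. This is a minimal, hypothetical sketch (not the hospital’s actual navigation software; all names are illustrative) of how a rigid pose reported by an optical tracking system, expressed as a 4×4 transform, might be used to place a vertex of a patient-specific 3D model into tracker (OR) coordinates:

```python
import math

def rigid_transform(rotation_deg_z, translation):
    """Build a hypothetical 4x4 rigid transform: rotation about the
    z-axis plus a translation, as a tracking system might report."""
    c = math.cos(math.radians(rotation_deg_z))
    s = math.sin(math.radians(rotation_deg_z))
    tx, ty, tz = translation
    return [
        [c,  -s,  0.0, tx],
        [s,   c,  0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply_transform(T, point):
    """Map a model-space point into tracker (OR) space using
    homogeneous coordinates."""
    x, y, z = point
    p = [x, y, z, 1.0]
    return tuple(sum(T[r][c] * p[c] for c in range(4)) for r in range(3))

# Place a vertex of a patient-specific 3D model into the tracked OR scene:
# a 90-degree rotation about z, then a shift of (100, 50, 0) millimetres.
T = rigid_transform(90.0, (100.0, 50.0, 0.0))
vertex_in_or = apply_transform(T, (10.0, 0.0, 0.0))
```

In a real system the transform would come from marker-sphere detection and a prior registration step, and it would be applied to every vertex of the model each frame.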
‘This process may require setting up a navigation system and a certain amount of input from the users. HoloLens offers eye-gaze control, so some functions can be operated by looking at them. For example, when the surgeon’s hands are occupied during surgery, they can activate certain buttons simply by looking at them, reducing the need to physically press buttons.’
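Gaze-activated buttons are commonly implemented with a dwell timer: the button fires only after the user’s gaze has rested on it for a set time, so a passing glance does not trigger it. A minimal sketch of that logic (purely illustrative; this is not HoloLens API code):

```python
def gaze_dwell_activation(samples, target, dwell_s=1.0):
    """Return the timestamp at which `target` is activated by sustained
    gaze, or None if the dwell threshold is never reached.

    `samples` is a chronological list of (timestamp_seconds, gazed_element)
    pairs, as an eye tracker might stream them.
    """
    dwell_start = None
    for t, element in samples:
        if element == target:
            if dwell_start is None:
                dwell_start = t          # gaze landed on the button
            if t - dwell_start >= dwell_s:
                return t                 # dwell threshold reached: fire
        else:
            dwell_start = None           # gaze left the button: reset timer
    return None

# Gaze rests on the 'confirm' button from t=0.2s; by t=1.3s the 1-second
# dwell threshold is met and the button activates.
samples = [(0.0, 'background'), (0.2, 'confirm'),
           (0.5, 'confirm'), (1.3, 'confirm')]
activated_at = gaze_dwell_activation(samples, 'confirm')
```

The dwell threshold is the key design parameter: too short and the surgeon triggers buttons accidentally while scanning the scene; too long and hands-free interaction feels sluggish.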
A ‘perfect’ future OR
‘Today, we lack a complete solution for integration. Existing and new technologies in the OR are mostly independent; they have different systems and standards. I see a bright future where all systems talk to each other, where images and information flow synergistically between devices; I see AI-based decision-support mechanisms that provide an automatic patient- and user-specific workflow in which the next steps are predicted and shown.
‘Maybe the patient lies on a table and complete patient data is shown holographically in the surroundings. Then treatment is performed by a robot-assisted surgical system with current, accurate visual guidance that supports surgeons in their intraoperative decision-making and surgery. Whatever new gadgets or technologies are brought into the OR, I hope these will seamlessly integrate into the whole imaging, diagnostic and treatment workflow.’
Computer scientist Stefanie Speidel, who became Professor of Translational Surgical Oncology at the National Centre for Tumour Diseases in Dresden, Germany, this April, researches intelligent assistance systems for the operating theatre.
Could XR become the new standard for surgery?
‘First, we should remain focused on the patient by improving patient care from a value-based healthcare perspective and not get distracted by all the futuristic emerging technologies – which resonate with our sci-fi fantasies from childhood. Secondly, I think XR will stay. Various modalities will find their place in healthcare systems.
‘Surgeons, and clinicians in general, will use what they have available and what works best. We are always looking for solutions to solve problems, to treat the patient. XR is one potential candidate for solving particular communication, visualisation and navigation challenges.
‘At first, it might have been curiosity about the gadgets and cool holograms. However, research is starting to gather knowledge and examples of where XR could improve certain aspects of care. We still need more research, though, e.g. randomised controlled trials providing evidence that, when an intervention is performed in three dimensions on a patient, we should also view and plan it in 3D using XR devices. In addition, the definition of XR could evolve as new devices and gadgets are developed. There are also unexplored territories beyond wearable devices, such as projections; I imagine some form of screenless, glass-free 3D AR coming to the OR in the near future.’
Egidijus Pelanis MD is a PhD candidate at The Intervention Centre, Oslo University Hospital in Norway. He works with technologies in healthcare and is part of both the Section of Clinical Research, led by Professor Bjørn Edwin, and the Section of Medical Cybernetics and Image Processing, led by Professor Ole Jakob Elle. Egidijus is researching the use of various navigation and visualisation methods for minimally invasive surgery.