Close-up photo of a person's ear with a hearing aid

© Mark Paton – unsplash.com

News • Smart glasses to support the ears

AI-powered lip-reading to advance hearing aids

Heriot-Watt researchers are part of a UK-wide team developing 'hearing glasses' that could dramatically improve how people with hearing loss experience sound by combining lip-reading technology, artificial intelligence and the power of cloud computing.

The COG-MHEAR programme is funded by the Engineering and Physical Sciences Research Council (EPSRC) and led by Edinburgh Napier University. It aims to help those with hearing loss by creating a device that filters out background noise in real time, even in loud environments. 

The envisaged device may use a small camera built into glasses to track the speaker’s lip movements, while a smartphone app could use 5G to send both audio and visual data to a powerful cloud server. There, artificial intelligence could isolate the speaker’s voice from surrounding noise and send the cleaned-up sound back to the listener's hearing aid or headphones almost instantly. 

There’s a slight delay, since the sound travels to Sweden and back, but with 5G, it’s fast enough to feel instant

Mathini Sellathurai

Professor Mathini Sellathurai from Heriot-Watt is co-leading the wireless 5G cloud-based noise-cleaning strand of the work. She said: “We’re not trying to reinvent hearing aids. We’re trying to give them superpowers. You simply point the camera or look at the person you want to hear. Even if two people are talking at once, the AI uses visual cues to extract the voice of the person you’re looking at.”

This approach, known as audio-visual speech enhancement, takes advantage of the close link between lip movements and speech. While some noise-cancelling technologies already exist, they struggle with overlapping voices or complex background sounds—something this system aims to overcome. 

More than 1.2 million adults in the UK have hearing loss severe enough to make ordinary conversation difficult, according to the Royal National Institute for Deaf People. Hearing aids can help, but most are limited by size and processing power and often struggle in noisy places like cafés, stations or workplaces. One option is to shift the heavy processing work to cloud servers—some as far away as Stockholm—allowing the researchers to apply powerful deep-learning algorithms without overloading the small, wearable device. “There’s a slight delay, since the sound travels to Sweden and back,” said Professor Sellathurai, “but with 5G, it’s fast enough to feel instant.” The group is working on multiple fronts, from cloud AI to on-device (“edge”) AI, to balance performance with energy efficiency and sustainability.

“One of the most exciting parts is how general the technology could be,” said Sellathurai. “Yes, it’s aimed at supporting people who use hearing aids and who have severe visual impairments, but it could help anyone working in noisy places, from oil rigs to hospital wards.”

The researchers are working towards a functional version of the glasses. They're also speaking to hearing aid manufacturers about future partnerships and hoping to reduce costs to make the devices more widely available. “There are only a few big companies that make hearing aids, and they have limited support in noisy environments,” said Sellathurai. “We want to break that barrier and help more people, especially children and older adults, access affordable, AI-driven hearing support.” 

The COG-MHEAR team continues to host workshops for hearing aid users and collect noise samples, from washing machines to traffic, to improve the system’s training. Professor Sellathurai believes the cloud-based model could one day be made public, allowing anyone with a compatible device to connect and benefit.

The COG-MHEAR project is led by Professor Amir Hussain from Edinburgh Napier University. Professor Mathini Sellathurai is one of two Heriot-Watt co-investigators on the project. The COG-MHEAR team also includes co-investigators from the universities of Stirling, Glasgow, Edinburgh, Manchester and Nottingham. 


Source: Heriot-Watt University 

19.08.2025
