You’re in a noisy restaurant struggling to hear the people at your table. You reach into your pocket and put on your new AI-powered hearing aids. For a moment, everything becomes much louder. Then, word by word, head-turn by head-turn, your hearing aids use artificial intelligence (AI) processing and Deep Neural Network (DNN) models to identify the sound you’re most interested in. The background noise and chatter recede, and the conversation of your dinner companions moves to the forefront, their voices gaining even more clarity as you continue to engage with them. Soon you’re listening and talking effortlessly, even as the room grows noisier into the evening.

Although hearing aids have improved significantly when it comes to listening in noise, the above scenario is not yet a reality. But for an audio engineer and AI expert like Andreas Thelander Bertelsen, it is a goal within reach, and soon.

Audio engineer Andreas Thelander Bertelsen.

Bertelsen has worked at leading global hearing aid companies like ReSound and Oticon for the past decade, researching new ways to enhance speech intelligibility and helping develop successful products like Oticon More and Oticon Opn S. Now, with numerous patents under his belt, he leads audio technology development at Whisper.ai, a hearing aid manufacturer based in San Francisco with a unique approach to removing background noise, or “denoising,” that relies heavily on AI and DNNs.

HearingTracker thought it would be interesting to get his perspectives on how hearing healthcare is being transformed by the application of these new advanced technologies.

HearingTracker: Beyond all the marketing hype about AI, machine learning, and DNN, can you give us an explanation about how these are being applied in hearing aids?

Bertelsen: First, I think the application of AI in hearing aids is the most exciting thing happening in hearing healthcare right now. There is a good reason why so many companies have recently introduced AI-enabled products: AI and DNNs will transform how hearing aids process sound, and they could very well transform everything related to hearing healthcare.

The technology driving this, AI and more specifically DNNs, can learn complex relationships from examples. So instead of designing processing schemes based on a set of assumptions, AI is all about learning from data.

A DNN, as the “neural network” name suggests, is a structure inspired by the human brain that connects layers of simple units to represent learned patterns. The DNN can take new inputs, use its vast network of connections to match those inputs against the patterns it has learned, and produce outputs that would otherwise be impossible to discern. For example, this is how a neural network distinguishes human speech from a dog barking: it looks for patterns similar to what it has learned.
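
To make the idea of “learning patterns from examples” more concrete, here is a minimal sketch of a tiny DNN that labels a frame of audio features as speech or something else. It is purely illustrative: the PyTorch code, the layer sizes, and the use of log-mel features are assumptions, not a description of any manufacturer’s model.

```python
# Illustrative only: a tiny DNN that labels one frame of audio features as
# "speech" vs. "other" (e.g., a dog barking). Untrained, with assumed sizes.
import torch
import torch.nn as nn

class SoundClassifier(nn.Module):
    def __init__(self, n_features=40, n_classes=2):
        super().__init__()
        # Stacked ("deep") layers are what let the network represent learned patterns.
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = SoundClassifier()
log_mel_frame = torch.randn(1, 40)               # one frame of (assumed) log-mel features
probs = torch.softmax(model(log_mel_frame), dim=-1)
print({"speech": probs[0, 0].item(), "other": probs[0, 1].item()})
```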

Some DNNs live onboard the hearing aids themselves. Hearing aids built around specialized processors that run DNNs can now perform kinds of speech enhancement and noise removal that older hearing aids couldn’t. DNN-enabled hearing aids can also better adapt to different sound situations, ensuring that the listening experience matches the environment, for example, distinguishing between a quiet hike and a noisy restaurant.

Deep neural networks, or DNNs, attempt to mimic neurons (shown above) in the human brain through a combination of data inputs, weights, and bias. These elements work together to recognize, classify, and describe objects within the data.

AI also has exciting potential beyond the processing that happens within a hearing aid. The fitting and customization process is a great example. Everyone who reads HearingTracker probably knows how important a great fit and patient-centered care are for achieving better outcomes, and I’m sure AI will play a role in more personalized fittings in the future. AI will mean a hearing aid wearer (and their hearing care professional, if they are working with one) can get to a hyper-personalized fitting by using the previous experiences of millions of other hearing aid wearers worldwide—all in far less time than it would take today.

This technology is moving very fast, and there are lots of avenues yet to be explored for enhancing the patient experience. And, although machine-learning hearing aids have been around for over 15 years, we’re only now entering a brand-new frontier.

HearingTracker: The brass ring in hearing aid technology continues to be separating speech from noise or “denoising” an environment enough to keep users aware of their surroundings and other signals of interest while being able to home in on a conversation partner. How will AI allow hearing aid manufacturers to do this better?

Bertelsen: AI is a technology that gives engineers like me a new set of tools when developing solutions for hearing aids. AI technology is more accurate and powerful, which means we can tackle age-old problems—like focusing on the speech of a person in a noisy restaurant—in innovative new ways.

As an engineer, everything I did in the past had to be based on some fairly rudimentary assumptions. For example, we might successfully use these assumptions to remove background noise—as long as that background noise doesn’t change too much and matches our expectations. Or maybe we’d steer the directional microphones in a certain way—as long as the person speaking remains directly in front of the listener. These legacy tools have been pretty successful, but only to an extent.

That’s because, at their core, they all rely on simplistic assumptions about the structure of noise and speech sounds. In the real world, however, a huge diversity of voices and sound environments exists. And people move, and so do their conversations and other sounds of interest. So these traditional techniques can end up being too simple to handle the complexities of real life in real time.

Artificial intelligence, and DNNs specifically, gives us a new category of tools. These tools can accurately learn and decipher the complex structures within sound by analyzing extensive collections of sound environments. This depth of understanding is not only useful when a hearing aid is trying to reduce noise; it can also be used to steer directional microphones so they home in on where the speech is coming from, even if the user is moving around!
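
As a hedged illustration of one common DNN denoising recipe, mask-based spectral enhancement, here is a sketch in which an untrained network estimates a per-frequency gain for each time frame of a noisy signal. The sizes are assumptions, and this is not presented as any particular manufacturer’s method.

```python
# Illustrative mask-based noise reduction: a DNN estimates a per-frequency
# gain ("mask") for each time frame, and the mask is applied to the noisy
# spectrogram before resynthesis. Untrained network; all sizes are assumed.
import torch
import torch.nn as nn

n_fft, hop = 256, 128
window = torch.hann_window(n_fft)

mask_net = nn.Sequential(
    nn.Linear(n_fft // 2 + 1, 128), nn.ReLU(),
    nn.Linear(128, n_fft // 2 + 1), nn.Sigmoid(),   # gains between 0 and 1
)

noisy = torch.randn(16000)                          # 1 s of placeholder "noisy" audio at 16 kHz
spec = torch.stft(noisy, n_fft, hop, window=window, return_complex=True)   # (freq, time)
with torch.no_grad():
    mask = mask_net(spec.abs().T).T                 # per-bin gain, shape (freq, time)
enhanced = torch.istft(spec * mask, n_fft, hop, window=window, length=noisy.numel())
```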

Lots of research has already established that DNN-based approaches can outperform traditional methods because of their accuracy, especially when you’re in a complex sound environment with lots of competing speech and noise. Ultimately, this means hearing aid users will be able to exert less listening effort and understand speech in ever-more challenging places.

HearingTracker: What do companies mean when they say they have “onboard DNN” in a hearing aid? Is this just a buzzword or is it important?

Bertelsen: This is a great question. Because of the excitement surrounding their capabilities, AI and DNNs have quickly become buzzwords in hearing aids and many other high-tech products.

But not all DNN-enabled hearing aids are created equal. Their success depends largely on the kind of processor inside the hearing aid and on the quality of the DNN models running on that processor. So the first critical piece of the puzzle is having a specialized DNN processor in the hearing aid. In other words, the sound processing engine inside the hearing aid must be designed specifically to take full advantage of AI, because it is this processor that allows the hearing aid to run highly specialized AI software, called DNN models.

DNN models are the second critical piece for taking advantage of AI technology. Hearing aid engineers like me develop DNN models by “teaching” them with multiple years’ worth of audio. This gives the model lots of examples and experience to draw from, which is part of how we make sure these products have the best possible performance.
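
To illustrate what that “teaching” looks like in practice, here is a minimal, hypothetical training loop: the model sees noisy inputs paired with clean targets, and its weights are nudged to reduce the error. The data below is random placeholder tensors; real training draws on enormous curated audio collections.

```python
# Illustrative supervised training loop with placeholder data. In reality the
# "noisy" and "clean" frames would come from years' worth of recorded audio.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(129, 256), nn.ReLU(), nn.Linear(256, 129))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    noisy_frames = torch.randn(32, 129)    # batch of noisy spectral frames (placeholder)
    clean_frames = torch.randn(32, 129)    # corresponding clean targets (placeholder)
    loss = loss_fn(model(noisy_frames), clean_frames)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                       # nudge the weights toward lower error
```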

Beyond that, we must make sure this software can run quickly and efficiently on the device itself. After all, hearing aids have only a few milliseconds to process sound from the world before it comes out of the speaker, and we need to do this on tiny batteries all day!
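
As a rough back-of-the-envelope sketch of that real-time budget (the sample rate and block size below are assumptions, not figures from the interview):

```python
# Hypothetical numbers to show how tight the real-time budget is.
sample_rate = 24_000        # Hz (assumed)
block_ms = 2                # a few milliseconds of audio per processing block (assumed)
samples_per_block = sample_rate * block_ms // 1000

# The DNN must finish each block before the next one arrives, so every block
# leaves at most `block_ms` milliseconds of compute time, all day, on a tiny battery.
print(f"{samples_per_block} samples per block, {1000 // block_ms} blocks to process every second")
```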

HearingTracker: Good point! Hearing care professionals and hearing industry veterans are well aware of the trade-offs between battery life and hearing aid feature sets (e.g., audio streaming, processing algorithms, and sound quality). How big of a role does battery life play in the implementation of AI in hearing aids?

Bertelsen: Battery life is one of the most important considerations in the design of any hearing aid. With the current state of DNN processors and battery technology, the capabilities of onboard DNNs within a traditional hearing aid style are fairly limited due to size. As a result, different companies make different decisions about how they want to incorporate DNNs. Do they want to optimize for smaller, less capable devices, or aim to deliver more performance to users?

Every company will have its own strategies in this regard, depending on its objectives. At Whisper, we explicitly want to build products with as much AI capability as possible. That has meant needing to develop some unique technologies, like the Whisper Brain. The Whisper Brain is a small pocket accessory that gives the hearing aids additional processing capabilities, including being able to run DNNs that have 100 times more capacity than what could otherwise fit inside a more conventional hearing aid.

The Whisper AI hearing aid system comes with a small pocket accessory, the Whisper Brain, that gives the hearing aids added processing capabilities.

To make this available with no additional delay, we have developed new wireless connections that allow us to communicate almost instantaneously between the earpieces and the Brain. While having this extra accessory might not be for everyone, we think it’s a great way to bring tomorrow's hearing technology to people today.

Let me give you a concrete example of how this extra capacity is useful. In a traditional hearing aid, the left ear is generally processing sound separately from the right ear. But if you have a powerful device like the Whisper Brain, you can combine information from both left and right sides when processing the audio. The benefit of this is that you’ll hear a more natural and stable sound picture because the processor can integrate the information from both ears when making high-precision sound processing decisions using AI.
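
Here is a small, hypothetical sketch of the contrast Bertelsen describes, with shapes and layer sizes made up for illustration: one independent model per ear versus a single model that sees both ears at once and can exploit the differences between them.

```python
# Illustrative contrast between ear-by-ear and binaural DNN processing.
# All shapes and layer sizes are assumptions; this is not Whisper's architecture.
import torch
import torch.nn as nn

n_bins = 129
left = torch.randn(1, n_bins)    # one spectral frame from the left ear (placeholder)
right = torch.randn(1, n_bins)   # one spectral frame from the right ear (placeholder)

# Traditional style: each ear is processed independently.
per_ear_model = nn.Sequential(nn.Linear(n_bins, 64), nn.ReLU(), nn.Linear(64, n_bins))
left_out, right_out = per_ear_model(left), per_ear_model(right)

# Binaural style: one model receives both ears together, so interaural cues
# (level and timing differences between the ears) can inform both outputs.
binaural_model = nn.Sequential(nn.Linear(2 * n_bins, 128), nn.ReLU(), nn.Linear(128, 2 * n_bins))
both_out = binaural_model(torch.cat([left, right], dim=-1))
left_out_b, right_out_b = both_out.split(n_bins, dim=-1)
```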

HearingTracker: How is big data or cloud data being stored and used by AI?

Bertelsen: AI and DNNs are very powerful in part because they improve and adapt through a learning process. The learning process involves presenting new information, in the form of data, to the DNNs and having them adapt how they respond. At Whisper and several other established hearing aid manufacturers, the AI has access to multiple years of diverse audio information when it goes through this learning process.

When designing a DNN, you want to present it with as much unique information as possible, meaning you’ll want a wide array of different examples to use. This is what people mean when they say “big data.” DNNs that have been exposed to lots of data will be better at reacting to new situations and making decisions compared to those that have only been exposed to a limited set of experiences.

But there is a caveat: if you are presenting unrealistic data to your DNNs, then the result will not be very good. Put simply, DNNs are no better than the data they have access to. There is always a risk of a “garbage in, garbage out” situation if the training data isn’t chosen carefully and thoughtfully.

So, not only do we want a large volume of data, but we also want the right data. For example, when designing systems for speech enhancement, we know users aren’t always looking at the person they are speaking with. To solve this, a DNN designer needs to simulate motion—head movements or a person walking around while someone is talking to them—in the AI data to make sure their software is robust and can reflect real-world usage.
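
As a deliberately crude, hypothetical example of that kind of augmentation, the sketch below simulates a head turn by slowly varying the left/right level balance of a training clip. Real pipelines would use far more realistic spatial acoustics; the panning model here is an assumption for illustration only.

```python
# Illustrative augmentation: fake a listener turning their head by slowly
# drifting the left/right balance of a clip. Crude by design; real systems
# simulate motion with proper spatial acoustics.
import numpy as np

def simulate_head_turn(mono, sample_rate=16_000, turn_hz=0.25):
    """Return a stereo clip whose left/right balance drifts over time."""
    t = np.arange(len(mono)) / sample_rate
    angle = 0.5 * np.sin(2 * np.pi * turn_hz * t)     # slow oscillating "head angle"
    left_gain = np.clip(1.0 - angle, 0.0, 2.0)
    right_gain = np.clip(1.0 + angle, 0.0, 2.0)
    return np.stack([mono * left_gain, mono * right_gain], axis=0)

clip = np.random.randn(16_000 * 3).astype(np.float32)  # 3 s placeholder speech clip
stereo_augmented = simulate_head_turn(clip)             # shape (2, samples)
```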

HearingTracker: How might AI change the traditional hearing aid dispensing protocol?

Bertelsen: The new regulatory changes by the FDA introducing OTC hearing aids have been the subject of a lot of debate over the past few years. From an AI perspective, I’ll just say I’m really enthusiastic because AI will be able to improve hearing aid outcomes regardless of whether a person decides to work with a provider or not.

One general trend I see is that AI will enable us to build new kinds of interactive experiences that allow both providers and patients to reach a more personalized outcome in less time using the data collected. In the OTC space, I think the trend will be toward simpler yet more interactive software for consumers to do self-fittings.

The questionnaires and pure-tone tests that hearing care professionals rely on today will evolve into interactive games or augmented reality (AR) experiences that will be able to transport the users to a complex sound environment right in their homes. Instead of guessing how something might sound, you’ll be able to try it immediately by listening. These experiences and preferences, in turn, can then all be collected and analyzed by AI to provide much more precise personalization.

Bertelsen believes that AI will help bring hearing tests and aural rehabilitation into the realm of augmented reality and gamification.

Likewise, the tools of hearing care providers will evolve so they can work more efficiently with patients through all phases of the care process. For example, providers will be able to use AI to show patients how their individual fitting changes might be related to previous hearing aid wearers.

Consider this: millions of people use hearing aids every day, and most of those hearing aids were fit based on a formula called NAL-NL2. This prescription was developed in 2011, and its creators used around 240 audiograms to come up with the formula. We now have the opportunity to use millions of data points. That’s going to translate into critical and useful insights for someone getting fit with hearing aids for the first time. With AI, we can take in much more data than before and still make sense of it.

The result will be better, more individualized fittings and assessments. It’s not hard to imagine a future where more diagnostics about individual preferences will lead to tailor-made settings for sound processing across dozens of different listening environments.

HearingTracker: Increasingly, we’re seeing AI becoming very good at diagnosing medical conditions that doctors are less likely to catch on something like a CT scan or from a set of symptoms. Do you think AI hearing aids of the future will be able to identify and/or monitor medical problems in people and alert users and providers in ways we’re not capable of today?

Bertelsen: There are many possibilities here, although we have a lot more work to do on getting AI to help within our existing field of hearing loss and hearing assistance. It’s still difficult, if not impossible, to measure some facets of the auditory system on a practical level. For example, we’re only just now scratching the surface of understanding things like hidden hearing loss—a condition where you struggle to follow conversation in noisy environments but have normal results on standard hearing tests. My hope is that AI systems will be able to help us understand conditions like hidden hearing loss by drawing connections between data that we previously thought were unconnected.

Beyond that, the availability of miniaturized sensors mounted in devices could be transformational for general healthcare. We already know that the ear is a great place to monitor many key indicators of general health, so we can start to uncover the connections between hearing healthcare and other health indicators.

Since DNNs are so capable of ingesting and sorting through large amounts of data, the sky's the limit here! Integrating AI into hearing and general healthcare is a truly exciting prospect. It also poses all kinds of opportunities from an interdisciplinary professional perspective.

HearingTracker: What else might we see from AI in the next 5-10 years (with wild predictions encouraged)?

Bertelsen: With the speed of technological development, I’m confident hearing healthcare is reaching an inflection point not seen since the original introduction of the digital hearing aid more than 25 years ago.

I sense we’re on the cusp of putting a big dent in speech enhancement and noise management, one of the most significant needs in the history of hearing aids and one that continues today. Solving speech enhancement will require a lot of innovation beyond AI itself. It will require things like more power-efficient processors specifically optimized for AI algorithms. But you can already see the promise of what is to come.

Beyond that, AI will also enable better overall care for everyone, creating more engaging, personalized experiences that take hearing aid wearers and providers outside of the four walls of a traditional clinic or booth. This will help us learn about hearing factors beyond the audiogram, like reducing mental effort, and allow us to uncover the deeper relationship between hearing and the other human senses and functions.

We just mentioned hidden hearing loss, but an even more ambitious target is for AI to map out how hearing is connected to other brain activities like memory, cognitive processing, and other facets of neuroaudiology.

Regardless of where all this leads, AI is definitely the next frontier of technology for hearing. I’m really grateful to have the chance to apply this new technology to a field that can have so much positive impact on people in the world, and I hope others are as excited about it as I am!

For more information, please visit the Whisper AI website.