In today's episode of The HearingTracker Podcast, host Steve Taddei speaks with Rick Radia, the Product and Partnership Manager at AudioTelligence. Based in Cambridge (England), AudioTelligence aims to solve the "Cocktail Party Problem" using its aiso™ for Hearing, which combines Blind Source Separation (BSS) and low-latency noise suppression. Tune in to hear audio samples of aiso™ for Hearing, and to find out how the technology might help you hear better in noisy environments.

Podcast Transcript

Steve Taddei: Hey I am Dr. Steve Taddei and you are listening to the HearingTracker Podcast.

We recently passed the second anniversary of Covid. And for many of us, this has meant nearly two years of isolation, avoiding large social gatherings, and far too many Zoom meetings.

While we're not out of the woods yet, it's starting to feel safer to see family and friends. And hopefully, we can soon say goodbye to glitchy conference calls and hello to our normal lives.

But for many with hearing issues, this will present a familiar challenge: something called the "Cocktail Party Problem," and that's what this episode is all about.

Rick Radia: 81% of 40- to 64-year-olds struggle to hear in busy, noisy environments and ultimately struggle with the Cocktail Party Problem2.

Steve Taddei: That is Rick Radia and he is the Product and Partnership Manager of a company called AudioTelligence. And his company has been doing quite a bit of research into this problem and the people affected.

Rick Radia: So if we were to take age related hearing loss, it’s gradual and some of the first signs of it can be very subtle and hard to detect. And one of these first signs is the inability to follow a conversation in a noisy place. And this is what’s referred to as the Cocktail Party Problem. That means in the presence of background noise, people with hearing loss are unable to discern the speaker of interest from the mixture of sounds. And people who suffer from this might think that their hearing is okay, in fact they might even pass a normal hearing test, but when they’re in a crowded bar or restaurant they find themselves struggling to follow the conversation.

Steve Taddei: The Cocktail Party Problem, or Effect as it’s sometimes called, refers to our ability to dial in on an individual sound in the presence of many competing sources. Research in this area dates back to the 1950s and now belongs to a greater area of study known as auditory scene analysis1. For example, here’s some music [Music Plays] notice how you can choose to focus on the whole piece, a single instrument such as the guitar, or even my voice so you can understand what I’m saying. This applies to other environments too, such as groups where we can choose who we listen to and other voices then seem to blur into the background - It’s almost like magic.

This is all possible because our brains are amazing processors storing and correlating information about frequency, timing, direction, visual cues and much more. This is ultimately what allows you, in a vast sea of noise, to pick out your name from across the room.

Rick Radia: So actually the difficulty in following the conversations and understanding speech in environments gives rise to quite a number of issues. So, when speech intelligibility decreases you need more effort to understand the speech. And there’s also fatigue induced by the increased cognitive load in those situations. So as your hearing deteriorates, when you go into those situations it results in listener fatigue and exhaustion. So people can then sometimes avoid those situations and in the long term that leads to social isolation. And in some cases, severe cases, depression and even more recently there are links to dementia as well.

Steve Taddei: So what exactly does this mean? Well imagine, for example, you're out shopping [Emulated shopping sounds with normal hearing3]. But what happens in that same situation if you have hearing difficulty? [Emulated shopping sounds with mild hearing loss3]. That was an emulation of a mild hearing loss, where mainly the consonants that provide clarity are less audible. Now imagine the brain power, and focus, it takes with an even greater hearing loss [Emulated shopping sounds with moderate hearing loss3]. This is ultimately what turns the Cocktail Party Effect into the Cocktail Party Problem.

Ok so what can we do about this? We have hearing aids and cochlear implants, but while these devices are great, they still haven’t quite cracked the background noise problem.

Well Rick and his team at AudioTelligence have been working on another solution that can help. Let’s hear what he has to say.

Rick Radia: We're an audio startup based in Cambridge, UK. So over the pond from you. And we have strong ties to Cambridge University. We have around 30 people, many of whom are software developers and researchers. And that team includes the inventor of our original blind source separation algorithms. We formed the company to further develop our algorithms and our technology for social good and to help the consumer tech and hearing markets.

And all of our research has culminated in the development of our aiso™ for Hearing software solution. And this takes inspiration from the way our brains work by using these algorithms to separate different sources of sound as well as suppressing background noise and babble. And the technology at the heart of aiso™ for Hearing is blind source separation in combination with our low-latency noise suppression.

Steve Taddei: Here's a quick sneak peek at their aiso™ system. More on that and blind source separation coming up after the break.


Thank you for listening to the HearingTracker Podcast. Last month we held a giveaway for a pair of Minunedo Lossless Earplugs, and I'd like to congratulate Nathan Tepp for being our winner. I'd also like to thank everyone who entered; it was great to see people reach out. If you're still interested in Minunedo products, you can get 20% off online with the code STEVESB.

If you have general questions about protecting your hearing, and the many technologies out there, don’t hesitate to reach out to us at HearingTracker. You can even email directly at steve@hearingtracker.com.

Just before the break Rick discussed AudioTelligence’s aiso™ for Hearing technology. He also mentioned something called blind source separation.

Rick Radia: So blind source separation, or BSS, is a data-driven approach based on Bayesian statistics. BSS analyzes the raw signal data provided by the microphones in the mic array to locate the sound sources in a scene. It first separates the sound sources it finds into channels. Then we use different methodologies to select the channel that best corresponds to the source of interest, so that the interference from the other channels can be rejected and only the selected channel is heard. One of the methodologies we use for that, which we can implement on the hardware, is based on conversation dynamics and signal content, so the user can seamlessly follow the conversation.
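To make the idea concrete, here is a toy sketch of blind source separation. This is not AudioTelligence's proprietary algorithm; it is a generic, self-contained FastICA-style separation of two artificial "talkers" from two simulated "microphone" mixtures, with every signal and mixing value being an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 4000)

# Two hypothetical "talkers": a sine wave and a sawtooth wave.
S = np.c_[np.sin(2 * t), 2 * ((t % 1) - 0.5)]
S = (S - S.mean(0)) / S.std(0)

# Each "microphone" hears an unknown mixture of both talkers.
A = np.array([[1.0, 0.5], [0.4, 1.2]])  # unknown mixing matrix
X = S @ A.T

# Whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(0)
cov = X.T @ X / len(X)
vals, vecs = np.linalg.eigh(cov)
Z = X @ vecs @ np.diag(vals ** -0.5) @ vecs.T

# FastICA fixed-point iteration with a tanh nonlinearity:
# find directions whose projections are maximally non-Gaussian,
# using only the mixed data itself ("blind").
W = np.eye(2)
for _ in range(200):
    Y = Z @ W.T
    G, dG = np.tanh(Y), 1 - np.tanh(Y) ** 2
    W = (G.T @ Z) / len(Z) - np.diag(dG.mean(0)) @ W
    # Symmetric decorrelation keeps the unmixing rows orthonormal.
    U, _, Vt = np.linalg.svd(W)
    W = U @ Vt

S_hat = Z @ W.T  # estimated sources, one per channel

# ICA recovers sources only up to permutation and sign, so score by
# the best absolute correlation of each estimate with each original.
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
print(corr.max(axis=1))  # both entries should be close to 1.0
```

The channel-selection step Rick describes (picking the channel that follows the conversation) would then operate on `S_hat`; in this toy version one would simply pick a column by hand.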

Steve Taddei: Going from theory to application, they integrated their aiso™ system into a tabletop device.

Rick Radia: And we wanted to prove that this technology works in the real world. So we integrated this into a remote microphone prototype. The prototype itself has eight microphones and it can seamlessly follow the conversation. It actually has two modes. You simply place the device in the middle of the table, and it will start off in “automatic mode” where it will follow the conversation around the room using the algorithms that are embedded into the hardware.

You can also use a "focus mode". So if there are multiple conversations happening at the table at any one point, you can focus on any number of people around the table, anywhere from one to eight.

Steve Taddei: Today we were also joined by Dan, another member of their team, and he was able to provide a real-time demonstration of this technology. So if you have a good set of headphones, this is the time to put them on.

Rick Radia: Dan are you there?

Daniel Potter: Ah I am yes, I'm Dan. I'm one of the engineers on the Hearing Systems Team at AudioTelligence. Just to explain the demo, all of the audio you're hearing is coming from the device itself. And we've got three speakers set up around the table who are going to play out a conversation. And additionally there's a bunch of speakers set up around the room, mostly out of shot, that are generating ambient noise levels typical of a busy cafe or restaurant. There's around 70 dB of background noise in here.

At the moment, I can demonstrate that by turning off the processing. So this is with the processing turned off. So that's the ambient noise level, and you'll probably find it a bit more difficult to hear my voice at this point. Now the processing is back on.

Steve Taddei: Dan showed me their automatic mode, which can follow multiple voices.

Daniel Potter: So just to demonstrate that I’m now going to play a conversation between speakers 1, 2, and 3. [aiso™ for Hearing sound sample] Alright, and just to demonstrate the difference the device makes again I’m gonna turn off the processing and continue playing that conversation. [aiso™ for Hearing sound sample]

Steve Taddei: They also offer a focus mode, which allows you to select the direction, or sources, you’d like to hear. For example, Dan mentioned that this can be useful if multiple conversations are occurring and maybe you’d like to hear only one of them.

Rick went on discussing some of the other elements within their system.

Rick Radia: There's a limiter in place which notices certain bursts of sound and stops those bursts from coming through. If it were a consistent sound, that's probably when you would need to have "Focus Mode" selected. So if there was a consistent sound of music playing, you would have to select "Focus Mode" and select a particular direction, per se. Or a source of interest.

I think the other thing to mention with our sorta automatic source selection, is that we follow the conversation dynamic and not just the loudest source. So we look at the history of the loudness, the pattern of the loudness in order to have that automatic mode and follow the conversation.

Steve Taddei: After all this, I was curious about the differences between their aiso™ system and other technologies available. So I asked Rick. Here's what he had to say.

Rick Radia: With regards to our low-latency noise suppression, we're actually using the multi-channel audio outputs to further enhance the noise suppression. Standard noise suppression technology uses a single-channel approach, whereas AudioTelligence's solution uses multi-channel audio to further enhance the noise suppression. So we actually use all of the data from the microphones to deliver the output to the end user.

How we would potentially compare ourselves if we were to look at… the majority of assistive listening devices have beamforming. Beamforming, as we all probably know, is a spatial filter. It uses the physics of wave propagation to focus on a particular direction or sound source. This means that a beamformer can extract a signal from a specific direction and reduce interference from signals in other directions.

Beamformers have the advantage of being both mathematically simple and easy to implement in real systems. However, you need to know a lot of information about the acoustic scene, such as the target source or direction and the microphone geometry. And more sophisticated solutions may need calibrated microphones. One of the limitations of beamforming is that it cannot separate two sources located in the same direction. So, if there is noise behind the speaker, everything in that direction is amplified.
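The same-direction blind spot Rick describes can be sketched numerically with a toy delay-and-sum beamformer. The array geometry, frequency, and angles below are illustrative assumptions, not AudioTelligence's hardware or any product's actual specification.

```python
import numpy as np

c = 343.0                   # speed of sound (m/s)
f = 2000.0                  # narrowband test frequency (Hz)
mics = np.arange(8) * 0.04  # linear array: 8 mics, 4 cm apart

def steering(angle_deg):
    """Phase shifts the array sees for a far-field source at angle_deg."""
    tau = mics * np.sin(np.radians(angle_deg)) / c  # per-mic delays
    return np.exp(-2j * np.pi * f * tau)

def beam_gain(look_deg, source_deg):
    """Output amplitude for a unit source after steering to look_deg."""
    w = steering(look_deg) / len(mics)  # delay-and-sum weights
    return abs(np.vdot(w, steering(source_deg)))

# Steered to 0 degrees: an on-axis talker passes at full gain,
# while an interferer at 60 degrees is strongly attenuated.
print(beam_gain(0, 0))   # 1.0
print(beam_gain(0, 60))  # roughly 0.2
# But noise arriving from 0 degrees, behind the talker, gets the
# same full gain as the talker: same direction, no separation.
print(beam_gain(0, 0))
```

The spatial filter only discriminates by angle of arrival, which is exactly why a source-separation approach is needed when the interference shares a direction with the talker.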

Steve Taddei: And we can easily demonstrate this. Right now I'm using a highly directional microphone known as a shotgun mic. This loosely emulates the fixed directional microphone features in hearing aids. If the noise is behind you as the listener, while the microphone faces me, we get greater reduction of unwanted ambient noise. However, if the noise is behind me, we don't see the same benefits from directional microphones. Ultimately, this is one reason why hearing care providers counsel patients to orient their backs toward any noise.

Rick Radia: So with beamforming we're not separating the speech from the noise. That differs from the way our technology works, where we actually analyze the whole scene and separate the prominent sources into separate channels. That allows us a finer degree of differentiation between two people who are sitting next to each other, or between speech and the background noise.

Steve Taddei: After speaking with Rick and hearing their aiso™ for Hearing system, I can see, or better yet hear, many applications for this style of device. So what comes next?

Rick Radia: Well I think there's lots of opportunities and applications for the aiso™ for Hearing software. We're looking to integrate it into other hearing solutions. Potential applications include remote microphones similar to the proof-of-concept prototype you saw. Potentially integration into earbud charging cases, so it can be an accessory alongside the earbuds to further enhance their performance in complex environments. Potentially there's an option to integrate it into mobile accessories, or even a mobile app.

We have some user trials coming up, which I'm sure will provide feedback on how we can improve the audio performance in those real-world settings. But the next big step for us is to integrate this into the different form factors I previously mentioned. And then from there we have more proofs of concept to bring to our potential partners, so we can really show the demonstrable benefit of the technology to the people we want to work with.

Steve Taddei: An important takeaway from this and many conversations we have is that technology is advancing. There are many options such as hearing aids, earbuds, apps, and tabletop devices. You don't always have to choose one instead of another. Many times they can be used together in specific use cases to help with, let's say, the Cocktail Party Problem.

Steve Taddei: I'd like to thank Rick Radia and Daniel Potter for coming on the show and talking about the Cocktail Party Problem, AudioTelligence, and their aiso™ for Hearing technology. To learn more you can visit Audiotelligence.com. I'd also like to thank Soundbrenner and Minunedo for supporting the giveaway mentioned during the break.

If you liked the live sound demonstration in today's episode, more can be found in the original unedited conversation. Head to patreon.com/hearingtrackerpodcast to check it out. Hope you enjoyed today's episode, and thank you for listening.

References

  1. Cherry, E. C. (1953). Some experiments on the recognition of speech, with one and with two ears. The Journal of the Acoustical Society of America, 25(5), 975–979. https://doi.org/10.1121/1.1907229
  2. Harris Interactive & AudioTelligence. (2020). The Cocktail Party Problem: Prevalence and consumer behaviors. AudioTelligence. Retrieved April 2022, from www.audiotelligence.com
  3. Starkey Hearing Foundation. (n.d.). Hearing loss simulator - find out what hearing loss is like. Starkey. Retrieved April 2022, from www.starkey.com/hearing-loss-simulator

Music by Coma-Media, Penguinmusic, and Undruground from Pixabay