Speech in Noise: Some New Perspectives from Chatable
AI company Chatable have published a series of deep tech blogs direct from founder and CSO, Dr Andrew Simpson.

Dr Andy Simpson, Founder and Chief Scientist at ChatableApps, has released a series of blog articles linking the worlds of auditory neuroscience, artificial intelligence, and hearing technology. Introducing the concept of neuroscience-led AI, Chatable are building their artificial intelligence by working from the blueprint of the brain. “Neuroscience is the process of reverse engineering the brain,” Simpson says.
The on-device challenge
New and established hearing-industry players are looking to solve the longstanding speech-in-noise problem with innovative AI approaches. Despite the obvious potential of AI, real-time on-device AI-driven noise removal has seemed out of reach: for AI to process speech on a hearing aid, the software must be small enough to fit on a tiny hearing-aid microchip and fast enough to avoid unacceptable speech delay.
Speech delay, or latency, is a major problem when it comes to hearing aids. If you’re speaking with someone and their voice is out of sync with their lip movements, the result is confusion and poor speech recognition. To minimize this effect, sound should be delayed by no more than five or six milliseconds. With current technologies, it’s almost impossible for AI to process and deliver sound that quickly.
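To see why a five-to-six-millisecond budget is so punishing, it helps to count samples. The sketch below is purely illustrative (it is not Chatable’s implementation, and the 16 kHz sample rate is an assumption): a frame-based audio algorithm incurs at least one frame of delay, so its frames must fit well inside the budget.

```python
# Illustrative latency arithmetic (not Chatable's method).
# Assumption: 16 kHz sample rate, a common choice for speech processing.
SAMPLE_RATE_HZ = 16_000
LATENCY_BUDGET_MS = 6  # upper bound cited for hearing aids

# Total samples available inside the latency budget.
budget_samples = SAMPLE_RATE_HZ * LATENCY_BUDGET_MS // 1000
print(budget_samples)  # 96

# A frame-based system is delayed by at least one frame, before any
# compute time -- so a typical 20 ms (320-sample) speech-AI frame
# already blows the budget several times over.
frame_ms = 20  # common frame length in speech models (assumption)
frame_samples = SAMPLE_RATE_HZ * frame_ms // 1000
print(frame_samples)  # 320
```

At 16 kHz the whole budget is just 96 samples, which is why conventional frame-based neural networks, whose buffering alone can cost tens of milliseconds, struggle on this problem.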
There’s also the problem of shrinking the AI software to a scale that can run on hearing aids. “You can have it big and sounding good, or you can have it small and sounding bad, but you can’t have both,” Simpson asserts. But Chatable appear to be working on a solution to both problems.
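The size constraint is easy to quantify in rough terms. The figures below are assumptions for illustration only (the source does not state Chatable’s model size or hearing-aid memory specs): a model’s raw weight footprint is simply parameter count times bytes per weight, and even a modest network quickly exceeds the memory of a low-power DSP chip.

```python
# Rough, hypothetical model-sizing arithmetic (numbers are assumptions,
# not figures from Chatable or any hearing-aid vendor).
def model_bytes(n_params: int, bits_per_weight: int) -> int:
    """Raw storage needed for a network's weights."""
    return n_params * bits_per_weight // 8

# A 1M-parameter model in 32-bit floats needs ~4 MB of weight storage...
print(model_bytes(1_000_000, 32))  # 4000000 bytes

# ...while quantizing to 8-bit weights cuts that to ~1 MB, which may
# still exceed the on-chip memory of a tiny hearing-aid processor.
print(model_bytes(1_000_000, 8))   # 1000000 bytes
```

This is the crude version of Simpson’s trade-off: shrinking parameter counts or bit widths saves memory but typically degrades audio quality, which is why doing both at once is hard.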
“Neuroscience-Led AI”
Speech-in-noise has been a hot topic in auditory neuroscience for some time, as evidenced by the popularization of the ‘cocktail party effect.’ The cocktail party effect refers to the brain’s ability to tune out background noise when listening to a friend or colleague at a noisy party. Chatable’s thesis is to bring auditory neuroscience and artificial intelligence together for a new way to approach the speech-in-noise problem — by looking to the brain for the solution. The Chatable approach is “Neuroscience-Led AI”. Simpson believes that in the future neuroscience and AI will merge into a single discipline and that Chatable are ahead of the curve. “Reverse engineering attentional processing in Cortex is what we do”, Simpson says.
The Chatable team are focusing on how the brain processes speech in noise and, in particular, what neuroscientists call ‘attention’. “When a baby cries, if it’s your baby: that’s attention. If it’s somebody else’s baby: that’s distraction,” Simpson says, as he describes the ‘bottom-up’ process by which sounds are selectively processed in the brain. This mechanism, Simpson says, protects you from a “potentially paralysing tsunami of information”. In fact, with their brain-centered approach, Chatable are targeting not just hearing impairment, but autism and ADHD as well. Chatable’s CEO Giles Tongue quietly posted a video on LinkedIn purporting to show the father of an autistic child saying “it was almost like having a conversation with someone who didn’t have a form of autism”.
Reducing potential noise damage through AI
Dr Simpson also raises some interesting questions around hearing aids in the context of noise exposure. Citing World Health Organization recommendations on noise exposure, he suggests AI could make hearing devices safer by preventing the unnecessary amplification of background noise.
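The logic behind such recommendations can be sketched with the equal-energy rule used in WHO and NIOSH noise-exposure guidance: allowable daily exposure halves for every 3 dB above the 85 dB(A) reference level. The code below illustrates that rule only; it is not Chatable’s algorithm, and the function name is hypothetical.

```python
# Illustrative equal-energy exposure rule (3 dB exchange rate),
# as used in WHO/NIOSH noise-exposure guidance. Hypothetical helper,
# not Chatable's implementation.
def safe_hours(level_db: float,
               reference_db: float = 85.0,
               reference_hours: float = 8.0) -> float:
    """Allowable daily exposure at a given sound level in dB(A)."""
    return reference_hours / (2 ** ((level_db - reference_db) / 3.0))

print(safe_hours(85))   # 8.0 hours at the reference level
print(safe_hours(94))   # 1.0 hour at 94 dB(A)
print(safe_hours(100))  # 0.25 hours (15 minutes) at 100 dB(A)
```

Under this rule, a hearing aid that needlessly amplifies background noise by even a few decibels meaningfully shortens the wearer’s safe daily exposure, which is the risk Simpson’s suggestion addresses.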
A zero-latency future?
Taken as a whole, this feels like a fresh new take on the problem of speech in noise, and there is a hint that Chatable may be on the cusp of a major breakthrough: “we are developing the world’s first real-time zero-latency AI for speech in noise,” says Simpson. Chatable also appear well positioned: the team have recently taken on investment from some of the UK’s leading independent audiologists and are actively exploring ways to take the technology forward through partnerships. Watch this space.
- chatable
- speech understanding
- noise
Abram Bailey, AuD
Founder and President
Dr. Bailey is a leading expert on consumer technology in the audiology industry. He is a staunch advocate for patient-centered hearing care and audiological best practices, and welcomes any technological innovation that improves access to quality hearing outcomes. Dr. Bailey holds an Au.D. from Vanderbilt University Medical Center.