AI and Hearing Technology
Dr. Achin Bhowmik at Starkey

In Episode 1 of the Hearing Tracker Podcast, we had the pleasure of interviewing Dr. Achin Bhowmik, CTO and Executive Vice President of Engineering at Starkey Hearing Technologies. Prior to joining Starkey in 2017, Dr. Bhowmik was the VP and GM of Intel's Perceptual Computing Group, where he worked on machine learning (ML), artificial intelligence (AI), and advanced sensor initiatives. At Starkey, Dr. Bhowmik was involved in bringing groundbreaking AI and advanced-sensor technologies to the new Starkey Livio hearing aid, which was named one of Time Magazine's Best Inventions of 2019.
Episode transcript
Steve Taddei (Host): This is the Hearing Tracker Podcast from HearingTracker.com. Thank you everyone for tuning in. On this episode, we are joined by Dr. Achin Bhowmik. He is the Chief Technology Officer and Executive Vice President of Engineering at Starkey, and he is also an adjunct professor at Stanford University. So Dr. Achin, thank you so much for joining us. It's a pleasure to have you.
Dr. Achin Bhowmik: It is my pleasure, Steve, to talk with you and connect with your audience.
Host: Before we get started, do you mind giving us a little bit of background about yourself?
Bhowmik: Yes, so, you know, I'm a trained engineer. After receiving my doctoral degree, I joined Intel as a researcher, and I rose through the ranks at Intel working on advanced technologies in computing, computing devices, and processor technology. For the last 10 years or so there, I was the Vice President and General Manager of the Perceptual Computing Group, where I worked on machine learning and artificial intelligence, plus advanced sensors, to help machines understand and perceive the world around them.
So I like to say I switched from helping machines understand the world better to helping people understand people and the world better. I joined Starkey in the summer of 2017 to become the Chief Technology Officer, and I'm responsible for research, product development, and engineering at Starkey, where we get the opportunity to work on advanced technologies that help people understand and connect with other people.
Host: A big part of that, and what I wanted to ask you about today, is machine learning and artificial intelligence, which are relatively new to the world of hearing aids. You've been a key innovator in this area because of that background. So can you tell me a little bit about the AI initiatives you're working on at Starkey, and how you think those initiatives help consumers hear better?
Bhowmik: Absolutely. The way I look at AI, the best way to explain it is this: instead of pre-crafting algorithms, you let the machine learn from data and decide on its own the best course. So it's almost parallel to how we humans go about our business.
For example, our parents taught us to tell cats from dogs. The first time you saw a cat, Mom said, "Look, that's a cat." And what you did was teach yourself that pattern, so that the visual cortex in the back of your brain would match an image like that, captured through your eyes. From that point on, every time you see a cat, you recognize it's a cat. The same goes for every other kind of sensory processing in the human head. Take the touch of fire: the first time you put your hand in a fire, it caused pain, and you learned not to do that.
So for a long time, that was what separated humans from machines: humans learned from information and data, while machines, on the other hand, were stupid, and we had to tell them exactly what to do. The reason AI is creating so much excitement across the technology world over the last decade, and in the last few years particularly, is that we're now learning to teach machines to learn from data, and the breakthroughs are phenomenal.
You hear how autonomous cars are driving by themselves and learning to be even safer than human drivers, or how drones and robots are learning from information. For us, it was an opportunity to bring these advances in machine learning and artificial intelligence to hearing devices.
There were three untapped opportunities we saw for in-ear devices. Number one, I felt we could do a much better job of helping people hear better by tapping into machine learning and artificial intelligence. Of course hearing aids help people hear better, but not just by amplifying every sound. Machine learning algorithms make the device smart enough to understand which sound is which: what should be amplified and what should not. So that's the first one, machine learning and artificial intelligence helping people hear better.
But then there were other opportunities. We had the chance to convert the device from a single-function sound-amplification product to a multifunction health device. The device should be able to track my health, my physical activities, my exercise. In fact, if our older patients with hearing aids fall down, the artificial intelligence technology built into Livio Edge AI devices can automatically detect the fall and send an alert to their loved ones.
And last, but not least, we are connecting the patient to the world of information with our hearing aids. All you have to do is double tap on your ear and ask a question, "What's the weather outside?", and you get the answer spoken right in your ear. The device can even run a self-check. And guess what, it reminds you of your medication. You could say, "Remind me at 8:00 pm, I need to take such-and-such medicine," and it'll remind you at 8:00 pm.
And today we announced a new addition to that smart technology. If you lose your smartphone, and I lose mine all the time, lost in plain sight under my pillow or behind my desk, I can simply ask my hearing aids, "Where is my phone?" and they'll make the phone ring.
So what I say is, we're using artificial intelligence technology to make hearing aids cool, to give them functions that make people want a device like this, not just because they need to hear better. You do hear much better with artificial intelligence technology, but at the same time the device has so much more value in your daily life that you will want to have it. That removes the stigma from the device.
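To make the fall-detection feature mentioned above concrete, here is a minimal Python sketch of the classic textbook approach: watch the accelerometer for a free-fall dip in acceleration magnitude followed shortly by an impact spike. The thresholds, sample rate, and function names are illustrative assumptions; Starkey describes its detector as AI-based, so treat this only as a sketch of the general idea, not the product's algorithm.

```python
import math

# Illustrative thresholds for a naive accelerometer-based fall detector.
# A production detector (per the interview, a trained AI model) would be
# far more sophisticated than these hand-picked numbers.
FREE_FALL_G = 0.4   # magnitude well below 1 g suggests free fall
IMPACT_G = 2.5      # a sharp spike above this suggests an impact
WINDOW = 25         # samples to search after free fall (~0.5 s at 50 Hz)

def detect_fall(samples):
    """samples: list of (ax, ay, az) accelerometer readings in units of g."""
    mags = [math.sqrt(ax*ax + ay*ay + az*az) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL_G:                      # candidate free-fall onset
            if any(v > IMPACT_G for v in mags[i:i + WINDOW]):
                return True                      # free fall followed by impact
    return False
```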
Host: Mm hm. I wanted to draw a line between two things here, or just understand it better. People are already very familiar with hearing aids having automatic program switching, and I think the average person understands that a hearing aid, whether you're in background noise or listening to music, is analyzing the environment and then making programming changes to be as appropriate as possible. So what you're saying with artificial intelligence is that it's not as simple as: if there's background noise, go to a background-noise program. It's more that the device listens along and builds these soundscapes in its memory so it can make those adjustments more accurately?
Bhowmik: That is very nicely described. The way I would put it is this: in the old paradigm, you had the ability to pick from a small library of settings. So let's say I saved a restaurant mode in my memory one, because I found those were the settings and fittings that helped me best when I was in a restaurant last week. I programmed it, called it restaurant mode, and made it my memory one. That's how things used to work. Then I might have a home mode in memory two, because at home I played with my settings and decided that was the best fitting for my home. And then let's say I have an office mode, and guess what, a machine-noise mode, and a friend's-home mode. How many of those modes can you have?
Even after all that, you have only a small library available to you. And the next question is: is there really such a thing as a restaurant mode? Or such a thing as machine noise? What if the environment is a combination of various things? What you really want is for the device to automatically recognize the best settings of the various parameters for the particular environment you're in.
Which is where we introduced our Edge artificial intelligence technology. What the Edge AI engine does is analyze the acoustic environment every six milliseconds. If you choose to engage that mode, by simply tapping your device or with a button press, depending on how your hearing professional has configured Edge Mode activation, it will automatically adjust without you having to pick from a small library of settings.
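To picture the kind of classify-then-adjust loop being described, here is a minimal Python sketch. Only the six-millisecond analysis interval comes from the interview; the acoustic classes, parameter names, and the `model` object are hypothetical stand-ins, not Starkey's implementation.

```python
import numpy as np

FRAME_MS = 6            # per the interview: analyze the environment every 6 ms
SAMPLE_RATE = 16_000    # assumed sample rate -> 96 samples per frame
FRAME_LEN = SAMPLE_RATE * FRAME_MS // 1000

def classify_scene(frame, model):
    """Return class probabilities, e.g. {'speech': .8, 'noise': .15, 'music': .05}.
    `model` stands in for a small pretrained classifier; its API is assumed."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return model.predict(spectrum)

def edge_step(frame, model, params):
    """One classify-then-adjust step: blend parameter targets by class probability."""
    probs = classify_scene(frame, model)
    targets = {                      # illustrative per-class settings
        'speech': {'noise_reduction': 0.3, 'gain': 1.2},
        'noise':  {'noise_reduction': 0.9, 'gain': 0.8},
        'music':  {'noise_reduction': 0.1, 'gain': 1.0},
    }
    for key in params:
        params[key] = sum(p * targets[c][key] for c, p in probs.items())
    return params
```

Blending settings by class probability, rather than snapping to one stored program, is what lets such a loop handle environments that are a mix of speech, noise, and music.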
And today we demonstrated an enhancement to that which helps with masks. Let me explain that a bit, because it's not something we would have imagined to be a problem six months ago, right?
Today, when you go into an environment, you're interacting with people who are all wearing masks. So there are a couple of things going on. People with hearing impairment who used to rely somewhat on lip reading to understand speech now find the lips covered by masks. On top of that, masks attenuate speech signals, and they attenuate different frequencies by different amounts, so there's a dynamic component involved. To make it worse, no two masks are the same. Your fabric mask is different from your N95 mask, which is different from your plastic face shield, right? So we challenged ourselves to solve this problem with machine learning.
With the Edge Mode we introduced today, let's say you're surrounded by talkers wearing various masks. When you engage it, for that particular snapshot in time, for those few milliseconds, it's going to provide the appropriate levels of compensation for the masks around you, because it's taking a snapshot of the speech signals and recognizes them as speech.
It has a dynamic understanding of what a correct speech signal should be, because it has been trained with machine learning and AI, so it provides dynamic compensation rather than relying on a fixed mask mode that applies set amounts of gain per frequency. A fixed mode would only work for the particular mask it was optimized for, but I might encounter many different kinds of masks. That is a perfect example of where artificial intelligence and machine learning can provide the solution. Again, this was not a problem six months ago, but today it's a big problem.
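Here is one way to picture the difference he is drawing: a fixed "mask mode" bakes in a single gain curve, while a dynamic approach estimates, band by band, how far the incoming speech falls below what unmasked speech should look like, and boosts only that deficit. The function, band levels, and boost cap below are illustrative assumptions, not Starkey's algorithm.

```python
import numpy as np

def mask_compensation_gains(observed_db, reference_db, max_boost_db=10.0):
    """Per-band gains (dB) that restore masked speech toward a reference spectrum.

    observed_db:  measured speech levels per frequency band (dB)
    reference_db: expected unmasked speech levels per band (dB), e.g. learned
                  from clean-speech training data
    Boosts only the shortfall, capped to avoid over-amplification.
    """
    deficit = np.asarray(reference_db) - np.asarray(observed_db)
    return np.clip(deficit, 0.0, max_boost_db)

# Example: a mask attenuating mostly the higher bands (illustrative numbers).
observed  = [60, 58, 52, 46, 40]   # dB in five bands, low to high
reference = [60, 58, 56, 52, 48]
print(mask_compensation_gains(observed, reference))   # [0. 0. 4. 6. 8.]
```

Recomputing these gains every few milliseconds, rather than freezing them, is what would let a device cope when the talker or the mask changes.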
Host: That's great, and I honestly don't think there's a single current hearing aid user who hasn't run into issues wearing hearing aids along with a face mask. You know, Achin, I see people on a weekly basis who run into issues with, first, just retention, keeping the hearing aid on their ears while using face masks, but then also communication breakdowns, because as you mentioned, the lips are covered and we've changed the way sound is able to leave the mouth. So it's great to see a manufacturer take that so seriously and provide a possible solution this quickly. Have you come across any other techniques that have helped people communicate better with masks, for those who might not have this new Edge technology?
Bhowmik: Well, if you look at auditory communication, one part of the problem is the signal coming out, the speech I'm trying to understand. We don't control who's around us and who's talking, right? If I move from one environment to another and there are dozens of people around me wearing different kinds of masks, there's technology we already have in our portfolio. You might recall our table microphone, which we introduced in January, an amazing device with an array of microphones around its circumference that provides super directionality and anywhere from 7 to 10 dB of extra signal-to-noise ratio from the embedded microphones. You would get some benefit from that. Then, on top of that, on the receiver side, with the hearing aid itself, you could use Edge Mode to provide a dynamic understanding of, and dynamic adaptation to, the specific environment you're in right now. That environment might change again in a second or two, or in half an hour, and it will provide different adaptations each time. This is really the best solution in terms of compensating for the environment you're in at any given time.
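For the curious, the textbook way a ring of microphones buys extra signal-to-noise ratio is delay-and-sum beamforming: time-align every microphone toward the talker and average, so speech adds coherently while diffuse noise partially cancels. With M microphones and uncorrelated noise the ideal improvement is 10·log10(M) dB, roughly 9 dB for eight mics, in line with the 7-10 dB quoted. The sketch below assumes a far-field source and is illustrative, not the table microphone's actual design.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(signals, mic_xy, target_angle_rad, fs):
    """Steer a mic array toward a talker at target_angle_rad and average.

    signals: (num_mics, num_samples) synchronized recordings
    mic_xy:  (num_mics, 2) microphone positions in meters
    """
    direction = np.array([np.cos(target_angle_rad), np.sin(target_angle_rad)])
    # Arrival-time advance of each mic along the look direction.
    advance = mic_xy @ direction / SPEED_OF_SOUND
    delays = advance.max() - advance           # delay the early mics to align all
    shifts = np.round(delays * fs).astype(int)
    n = signals.shape[1] - shifts.max()
    aligned = np.stack([s[k:k + n] for s, k in zip(signals, shifts)])
    return aligned.mean(axis=0)                # speech adds up, noise averages out
```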
Host: I really like that concept of using remote microphones, because something we run into very often in clinic is the expectation that one hearing aid can be everything: amplification to overcome someone's hearing loss, clearer speech, appropriate sound quality for music, and a fit so natural that it's transparent or unnoticeable when you're wearing it. It's not always possible to find a one-size-fits-all solution, so it can be very difficult. I do think that going forward there will be different programs and different devices that all integrate together to provide the best possible outcomes for a patient. And I know this is something you have talked about before: hearing aids should be more than amplification. They should integrate into our daily lives and be transparent, in a sense.
Bhowmik: Yeah, totally. I call it perceptual computing, because all of these devices are computers, mini computers in your ear, and by integrating artificial intelligence technologies we bring them closer to how a healthy human's auditory cortex works. Think of a normal-hearing person, and I'm going to try to describe this from an engineer's perspective. We are a combination of sensors and processors. In this case, we're talking about the ears as the sound-transduction device: vibrations in the air come in and get converted into neural signals, and then some magic happens in our auditory cortex.

If you think about it, in a person with healthy hearing, the auditory cortex is not running the same sound-processing algorithms for all sounds. In a complex listening environment with many talkers, just close your eyes and reflect on how, without even knowing it, we engage the signal-to-noise-ratio-enhancement capabilities in our auditory cortex to suppress the sounds we don't want to hear and pay attention to the speech we do want to hear, to the point that you don't even notice someone's talking if you're focused on something else, right? And when you're listening to music, your auditory cortex automatically dials in and helps you appreciate the transitions that are inherent in music. So you switch from tuning in to understand speech, to appreciating music, to suppressing that irritating machine noise.

What we're doing with AI in hearing aids is bringing them closer to that level of a healthy human auditory cortex, plus sound enhancement. The device needs to know which sound is which and apply a particular kind of enhancement algorithm, one for speech, another for music. It's the same thing with the different kinds of masks, right? It needs to recognize: this sounds as if somebody is wearing this kind of mask, so I should compensate by this much. You really need an intelligent system that helps in different environments.

Today I also talked about a new feature we introduced called IntelliVoice. This is the industry's most advanced deep neural network technology, something that has never been part of the hearing aid ecosystem. It's been part of autonomous driving, autonomous robots, face recognition, and automatic speech recognition for personal-assistant technology. For the first time, we are using it to enhance speech in your hearing aid when it's paired with an iPhone.
Host: What exactly is that doing? Is it providing better signal-to-noise ratios, or basically just clearer speech for people?
Bhowmik: That's a good question. It's enhancing speech signals. If you look at what a speech signal is, compared to other signals, speech has a particular signature in terms of how frequent transitions happen in the higher frequencies. But the old paradigm was: if that's the case, apply speech enhancement, one of the handful of algorithms our industry has had for a long time. The reality is that no two people speak the same way. I have a unique accent, and you have a different accent. There's no way to know in advance how different people's speech will vary. So you need an automatic system that can understand the particular kind of speech it's being subjected to. We built a deep neural network, which we trained with all kinds of speech signals, so it's able to dynamically provide the signal-to-noise ratio that's relevant for understanding speech for that particular talker and that particular listener in that particular environment. It's a very different way of enhancing speech with a deep neural network than traditional speech-processing approaches.
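As a concrete picture of the general technique (IntelliVoice's actual architecture and training are not public), here is a minimal PyTorch sketch of mask-based neural speech enhancement: convert audio to a spectrogram, let a network predict a per-bin gain between 0 and 1, and resynthesize. The toy network below is untrained; in practice it would be trained on pairs of noisy and clean speech so the predicted mask keeps the talker and suppresses everything else.

```python
import torch
import torch.nn as nn

N_FFT, HOP = 512, 128
N_BINS = N_FFT // 2 + 1

# Toy mask-estimation network; a production model would be far larger
# and trained on large, diverse speech corpora.
mask_net = nn.Sequential(
    nn.Linear(N_BINS, 256), nn.ReLU(),
    nn.Linear(256, N_BINS), nn.Sigmoid(),    # per-bin gain in [0, 1]
)

def enhance(noisy_waveform):
    """Suppress noise by predicting a time-frequency mask over the STFT."""
    window = torch.hann_window(N_FFT)
    spec = torch.stft(noisy_waveform, N_FFT, HOP, window=window,
                      return_complex=True)    # (bins, frames)
    mag = spec.abs().T                        # (frames, bins)
    mask = mask_net(torch.log1p(mag))         # learned per-bin attenuation
    enhanced = spec * mask.T                  # scale magnitude, keep noisy phase
    return torch.istft(enhanced, N_FFT, HOP, window=window)
```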
Host: And I think one of the most beautiful things about that is that it all happens behind the scenes. Traditionally, patients and consumers are used to having that control, maybe switching between programs, but I don't think the average person wants to know about all the different types of modulation that happen within speech. The average person just wants to put their hearing aids on and forget they're wearing anything.
Bhowmik: That's exactly what it is. I think the Holy Grail for us technologists is to make the technology disappear. Technology should take the back seat, so you don't have to fidget with buttons and decide for the device what it should do; the device should become intelligent. That's the Holy Grail. It ranges from having cars drive themselves so I can get my work done, to a hearing aid deciding, with advanced technology, what algorithms to apply without me having to worry about it, because I just want to hear better. I just want to lead a better life. And again, if I'm older and prone to falling and hurting myself, I want the sense of safety that if I did fall, my loved ones would automatically be notified, because I have a smart device that detects it when I fall. I don't want to have to remember when to take my medicine; my hearing aid should be able to tell me. If I lose my smartphone, I should simply be able to ask my hearing aid, "Hey, where is my phone?" and it makes my phone ring. Or I want to know when my appointment with Steve is; I don't need to remember it, because it'll tell me, "Your meeting with Steve is at 2:00 pm." Technology should take a backseat and just make me more capable of accomplishing what I want to accomplish.
Host: Right, it should aid us as technology starts getting beyond what we've been able to process as humans. It really should be that stereotypical sci-fi "world of tomorrow," where it just allows us to live easier, freer lives.
Bhowmik: That's a great point, and I think there are a couple of things here. It's a combination of software and hardware that makes the magic of technology disappearing possible. I gave an example today with the rechargeable custom hearing aid. Up until our introduction of this device this year, I used to hear things like, "It's impossible," "It's difficult," "If you want a custom hearing aid, you won't get rechargeability." You want a rechargeable hearing aid? Go with a behind-the-ear or a big device. You want 2.4 gigahertz connectivity? Well, you can't have both rechargeability and 2.4 gigahertz connectivity in a custom form factor. I like to say: if it's difficult, we work on it right away; if it's impossible, it might take us a little longer. That's the hardware piece. We want to push the limits of what was considered impossible yesterday and make it possible today. And then we apply the magic and power of artificial intelligence. Suddenly your personalized in-ear device becomes the vehicle for technology disappearing and making you superhuman. It gives you the ability to understand sound you had never heard or thought was there, helps you connect and communicate with people better, and helps you be healthier by tracking your exercise like your Fitbit does, but with one less device you have to worry about charging; your hearing aid does it for you. It provides peace of mind to your caregivers: with your permission, they know how healthy you've been, whether you're talking and socially engaged with people or not, and if you fell, they would be notified. And then, of course, everything we talked about in connecting you to the world of information.
Host: And you know, it's interesting. There was some resistance to technology in the beginning with smartphones, and there were plenty of people I know who resisted and then got hooked. It becomes a part of our lives and integrates so well that when it's removed, we feel like we're not whole anymore.
Bhowmik: Yes, yes, and I should add that no matter what field you look at, the turning point comes only when you convert a device from something you need into something you want, and that's the challenge we have for our hearing aids. I believe all of these advanced technologies and multiple built-in functions make it so much more valuable for you, ranging from helping you hear better, to helping you live a better life, to all the smart features that help in your daily life. We are on a journey to make the hearing aid a device that you want, one that will no longer carry the stigma historically associated with it.
Host: Right, and that's exactly where I was leading. As the hearing aid becomes more a part of us, almost like a secretary we keep with us, and helps family members keep track of us as well, in a good way, I think you're absolutely right. It will reduce the stigma, because it will just become part of the modern human experience. It's making us hear better, and it works seamlessly with our cell phones, which have also grown to be a part of us.
Bhowmik: Yeah, and like you said, it is your very own assistant, your very own personal assistant. In fact, just last week we were humbled to receive an artificial intelligence breakthrough award, which had over 2,200 nominees. They picked us as the best personal assistant, because our patients always have these devices in their ears. It is becoming your ubiquitous personal assistant, helping you with your life. We're really proud of what we're doing with this device.
Host: Where do you see AI shaping the future of hearing aids over the next five to 10 years?
Bhowmik: That's a good question. I strongly believe we are only at the beginning of this journey. With everything we've done over the last two and a half years, from the introduction of Livio up to the technologies we introduced today, we're just at the beginning of what's possible. As you said, technology moves at lightning pace. What's considered possible today was a matter of dreams just years ago, right? And the pace keeps accelerating. We have lots of tricks in our bag that we're working on. I wish I could tell you everything, but then I'd alert my competitors as well.
Host: For sure.
Bhowmik: But here's what I'm going to tell you. Look at this vision of an in-ear device that gives you the superpower of hearing: hearing sounds you didn't think were there, helping you connect with people. Even for someone like me with no hearing loss, I hear better with the most modern Livio Edge AI hearing aids, because they give me that enhanced signal-to-noise ratio in a loud, noisy restaurant. I hear better and converse better, even with no hearing loss. Now think about the edge a deep neural network gives people with over 50 dB of hearing loss in understanding speech. Then there's health tracking: you can imagine all the physiological parameters we'll be able to sense and track in the near future, alerting you to problems before you know they're going to happen. The ear is the best place for physiological sensing, not your wrist. Today, when you go to your doctor, the first five minutes are spent getting your vitals and other information about your health. What if we knew that continuously? What if we could passively monitor for all kinds of problems, alert you before they happen, and send you the information you need to know? Combine that with a personal assistant that's improving at lightning pace, and you won't need anything else. You'd have the information right in your ear, available the moment you want it. You just ask; you get the answer. That's the possibility, and every one of us will want a device, a technology, like that. Technology will take a backseat, and we will become super human beings. That's the journey we are on, and you'll see us keep introducing technologies and breaking barriers at lightning pace. I'm truly excited about what's coming.
Host: I was curious: there are many hearing aid alternatives on the market, and a lot of companies are getting into that space, whether it's the Bose Hearphones or the Nuheara IQbuds² MAX. Many of these have apps that provide personal sound amplification, where consumers can test their own hearing remotely on their phone, without even needing to go to an audiologist or hearing care provider, and then have devices programmed with processing very similar to what hearing aids do. Do you think these devices improve accessibility, which is one of the most significant issues in this field?
Bhowmik: Yes, they do. In fact, I'm really happy that a lot of consumer electronics companies are bringing in accessibility features and paying more attention to accessibility. I am particularly proud of our work with Apple and Google in enabling audio streaming straight from smartphones to hearing aids, but I'm even more thankful for all the consumer earbuds. Just think about it: before Apple introduced its earbuds, having devices in your ear would have been unthinkable. Now, how often do you turn around on a plane and see that almost everybody has something dangling from their ear?
Host: For sure, yeah.
Bhowmik: It removes the stigma of having something in your ear. So all these collective efforts to make people aware of the technology are very helpful. But what those products are not is hearing aids. To give an example, the best earbuds on the market might give you three or four hours of battery life; our hearing aids provide 24 hours. And a hearing aid is designed to perfectly fit your ear, both mechanically, with custom devices, and acoustically, compensating for your particular hearing loss. So consumer earbuds are not hearing aids that will help people who are struggling to understand speech, but they do help bring accessibility awareness to the market. And on our side, technologies that a couple of years ago you might have thought were only for consumer electronics devices, we're now bringing into hearing aids. This combination of the two domains is really going to help a whole lot of people. Another point I should mention, and you know this very well, as should your audience: almost half a billion people have disabling hearing loss, according to the World Health Organization. There's such a large number of people we can collectively help, and we are barely helping them today. I'm really excited about bringing these technologies to a much larger number of people than we reach today.
Host: I completely agree with you. Those with hearing loss are such a large, underserved population, and this does improve how people view hearing loss, and devices for your ears in general, because more people are wearing them. It's hard to go to a store now and not see someone younger walking around with wireless earbuds in their ears. It absolutely does reduce the stigma, and it turns more focus toward the technology, which will hopefully further reduce costs as more people use it. And for those who don't yet have the means, or the interest, to pursue a professionally fit hearing aid, I really like the prospect of these other devices offering a little bit of fine-tuning, giving people more appropriate gain if they do have a hearing loss. There are plenty of people, and I see them on a daily basis, who for whatever reason, whether they're not ready to accept the difficulties they're having or they face financial constraints, now have alternatives on the market: maybe not as good as a professionally fit hearing aid, but at least an option.
Bhowmik: Sure, whichever way you can help people, certainly. One other way a hearing aid differentiates itself from these devices is latency. A hearing aid has all the processing built right in, with near-instantaneous delivery and compensation of the sound, whereas technologies that rely on apps will have something like a hundred milliseconds of delay. But that's the pace of progress in these technologies. To draw on the example you gave, that you see everybody with Bluetooth headsets these days: after we introduced the custom rechargeable hearing aid, we heard comments about how it looks and feels just like a consumer Bluetooth earbud, but with all the technology of a hearing aid and 24-hour battery life. We are now getting requests asking, can you make the device black?
Host: Okay.
Bhowmik: Right. In the past, the request used to be: can you make the color match my skin so nobody knows I'm wearing a device in my ear? Now we're seeing demand for making the device fancy, white or black, because it's custom, it's in your ear, it's rechargeable, and nobody knows whether it's your hearing aid or your music-listening device, right?
Host: Yeah.
Bhowmik: So that's an interesting benefit we see from all the consumer earbuds out there; they're helping the hearing aid industry as well.
Host: Yeah, and it's not something I would have necessarily expected. There's a popular comparison to glasses: some people actually wear eyeglasses just for the look of it, like an accessory. It sounds like the increase in people using ear-worn technology is crossing over, and hearing aids are doing the same, where it's not quite an accessory, but it's something that doesn't have to be hidden. If you choose a color, it's to make the device more noticeable, to make a statement.
Bhowmik: It's that old stigma going away, because now it's cool. Now it's doing many more things than just amplifying sound. It's not just fixing your disability; it's giving you abilities you want to have. It's helping you stay healthy. It's tracking; it's doing everything your Fitbit or Apple Watch does, and it's connecting you to the world of information. It's no longer a single-function hearing aid, so it's removing the stigma. People want to have it, and I'll add, they want to show it off, right? With colors that don't hide it.
Host: Yeah, there's a lot to look forward to in the coming years as technology keeps advancing with cell phones, wireless connectivity, and sound quality. There's a lot to keep our eyes and ears open for. Was there anything else you wanted to add or share with us at this time?
Bhowmik: Yes, one thing I don't think I got a chance to talk about today: we just introduced a rechargeable BTE device with 70 dB of gain. A small number of people need that much power, right? In the past, if you needed to fit a patient with 70 dB of gain, you didn't have the option of a rechargeable device that also connects over a 2.4 gigahertz radio and streams audio directly from iPhones and the latest Android phones. Today, we introduced such a product. It's the industry's smallest: 30% smaller than the nearest competing device, and in fact 22% smaller than our own zinc-air BTE device that's on the market and quite popular. It's a really small device, and we've been able to engineer it to provide the industry's highest levels of gain. Again, only a small number of people will benefit from it, and I'm really happy that we'll be able to help them with this device. That's the last thing I wanted to make sure I didn't miss.
Host: And you said it's rechargeable? What's the battery life on that?
Bhowmik: 24 hours!
Host: And it's 24 hours.
Bhowmik: Yes, no compromise at all.
Host: No, not at all. Well, thank you so much. It's been a true pleasure having you on the show and speaking with you and hearing about the new initiatives and advancements in technology with Starkey, and what you personally have brought to the field with machine learning and artificial intelligence. So thank you so much.
Bhowmik: And I look forward to talking with you again.
Host: Absolutely, likewise. Well, thank you everyone for tuning in. This is the Hearing Tracker Podcast.
Abram Bailey, AuD
Founder and President

Dr. Bailey is a leading expert on consumer technology in the audiology industry. He is a staunch advocate for patient-centered hearing care and audiological best practices, and welcomes any technological innovation that improves access to quality hearing outcomes. Dr. Bailey holds an Au.D. from Vanderbilt University Medical Center.