What Your Brain Can Do for You

by Donald Schum, PhD, Vice President of Audiology for Oticon


Hearing loss can be an extremely frustrating condition to face.  For those who develop hearing loss in adulthood, the frustration can be particularly pointed because something that used to be so natural and easy is now difficult.  Although people normally think of hearing impairment in terms of what cannot be heard, the more significant challenge for most individuals is hearing too much – the world can become an undifferentiated wall of sound in lively, interesting situations such as parties, restaurants and family gatherings.  The frustration increases even more when it is clear that individuals with normal hearing seem to be able to navigate those situations with ease.

What is going on?  Well, our cognitive system has evolved over many thousands of years to take in as much information as possible, allowing us to understand everything that is happening around us.  It integrates information embedded in the neural code from all of our senses, with vision and hearing obviously playing the biggest part.  The brain can automatically and effortlessly create a representation of the world around us, so that we can focus on what needs our attention at that moment and ignore the rest.

Recent work [1, 2] in the electrophysiology of hearing has provided a deeper look into how this happens at the cortical level in listeners with normal hearing.  When information about a complex sound scene first enters the brain, all of the sounds are represented.  Then, in a nearby area, the sounds that are of interest to the listener become more strongly encoded while the other sounds fade in strength.  Essentially, the brain takes in all of the input, and the listener then decides what to focus on and track over time.

The basic lesson from this and a long history of previous work is that the brain can handle a lot of stimulation.  And the cognitive system can outperform any computer we have built when it comes to handling a complex sound scene and finding individual voices and suppressing irrelevant information.  But the information that the brain receives from the auditory system must be clear and well-defined.

Hearing impairment complicates the situation.  The type of permanent hearing loss that most individuals face (sensorineural hearing loss) disrupts the quality of the information that is fed to the brain.  Not only are some sounds simply not heard and thus not sent to the brain, but many of the sounds that are audible to the person will be disrupted by the impaired ear.  The coded neural information that is sent up to the brain is incomplete and distorted.  Then the brain struggles to make sense of sound.  It cannot organize sound in a normally automatic and effortless manner.

Hearing aids try to address this situation in two overall ways.  Traditionally, they have processed sound so that only the most important parts are passed on to the listener.  Parts of the sound scene that are judged less important are removed.  Although these traditional approaches have definitely improved patient performance over the years, there are limits to how much benefit they can provide.

More recently, inspired by the ever-growing research on how the brain processes sound scenes, there has been a trend to move more of the responsibility back to the brain.  We know that it can do a great job of processing sound information, as long as it receives good, clear information.  Newer technologies attempt to provide a clear representation of the fuller sound scene, allowing the cognitive system to perform its normal organizational function.  These newer technologies attempt to clean up the sound scene without overly restricting what the patient is allowed to hear.  The goal is to create a more natural listening experience – not one where the amount of sound presented to the listener is restricted.

These two broad approaches are designed to work hand-in-hand.  Sometimes some sounds simply have to be restricted, in some situations, for some listeners.  The effects of hearing impairment vary widely from person to person.  But the overall goal is to process sound just enough to allow important voices to be understood while preserving a natural listening experience.


1: O’Sullivan, J., Herrero, J., Smith, E., Schevon, C., McKhann, G. M., Sheth, S. A., Mehta, A. D., & Mesgarani, N. (2019). Hierarchical Encoding of Attended Auditory Objects in Multi-talker Speech Perception. Neuron, 104(6), 1195-1209.
2: Puvvada, K. C., & Simon, J. Z. (2017). Cortical Representations of Speech in a Multitalker Auditory Scene. The Journal of Neuroscience, 37(38), 9189–9196.

The Hearing Industries Association is the trusted voice on hearing health care for product innovation, public policy, patient safety and education.