Historically, directional hearing aid microphone designs have presumed that speech always arrives from the front. However, it has been known for some time that this is not entirely correct (Walden et al., 2004). In that study, the data showed that out of 1,586 reported observations, 318 (20%) recorded speech as “not from front”. Furthermore, they reported that noise was present during 1,006 observations; for 239 (24%) of those, speech was “not from front”.
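The proportions quoted from Walden et al. follow directly from the reported counts; a minimal arithmetic check in Python (counts taken from the passage above):

```python
# Counts reported by Walden et al. (2004), as cited in the text.
total_observations = 1586
not_from_front = 318

noise_present = 1006
not_from_front_in_noise = 239

# Percentage of all observations where speech was "not from front".
pct_overall = 100 * not_from_front / total_observations

# Percentage of noise-present observations where speech was "not from front".
pct_in_noise = 100 * not_from_front_in_noise / noise_present

print(f"Speech not from front overall:  {pct_overall:.0f}%")   # ~20%
print(f"Speech not from front in noise: {pct_in_noise:.0f}%")  # ~24%
```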
Speech arriving from somewhere other than the front during 20% to 24% of listening time represents a substantial amount of time where traditional front-facing microphones are not optimal; they are perhaps even somewhat problematic (Wu et al., 2014). Wu et al. studied the effect of different microphone schemes on hearing-impaired listeners’ speech recognition and preference scores while listening in an automobile. The results showed a benefit and preference for rear-directional processing compared to omnidirectional processing. Furthermore, when compared to omnidirectional microphones, typical adaptive directional microphones were found to be detrimental to both speech understanding and preference.
One could argue that, as soon as speech is detected from any direction other than the front, the listener would simply turn their head to face the target speaker, but this is not always possible. For example, a car driver does not have the luxury of turning to face a rear-seated speaker. The recently updated Log It All algorithm can determine how much time a listener spends in situations where speech does not reach them from the front.
Log It All is a proprietary Unitron hearing instrument feature, originally designed to track the amount of time that listeners spend in seven different listening environments. Clinicians can use Log It All as a tool to determine whether the hearing instrument their client is wearing is appropriate for their individual needs. For example, people who spend more time in complex listening environments, such as conversations in large groups or conversations in noise, may derive more benefit from premium technology. Log It All precisely classifies each individual’s acoustic environment, providing real-world data to aid in troubleshooting problems and evaluating the efficacy of the fitting.
The capabilities of Log It All have now been extended to include the direction of target speech in complex listening environments: conversation in a large group, conversation in noise and noise only. These are the environments within which AutoFocus 360 can optimize listening to speech from the left, right, back or front. Knowing the percentage of time that speech is not from the front in these listening environments for that specific individual provides invaluable information to the hearing care professional about the listener’s acoustic world.
Log It All data was collected from 6,998 fittings/follow-ups that occurred between November 1, 2021 and February 14, 2022 in 37 countries. Additional details about the fittings can be found in Log It All and the direction of speech (Hayes, 2022).
The first interesting question to answer with 6,998 fittings’ worth of real-use data is simply, “Overall, in what acoustic environments do people spend their time?” Please see Figure 1 below.
Figure 1 shows that, on average, hearing instrument wearers tend to spend the largest percentage of their time in either quiet listening situations or small group situations. Another common feature of Log It All data drawn from large groups of wearers is the wide range of individual variation. Although the average Log It All data for almost any group of hearing aid wearers follows a consistent pattern, there are always huge variations among individuals. This is why Log It All is so important: when we compare an individual fitting to the average, they can be quite different, and looking at individual data can lead to a better prognosis for different technology levels.
The most complex listening environments are in the blue shaded area of Figure 1. The average time spent in speech in a large group and in speech in noise is roughly 10% each. Noise only is slightly lower, at around 6%. But the bottom ends of the error bars can be as low as 3% or 4% each, and the higher ends as much as 19% to 23% each. If we focus on the situations where people have the most difficulty, speech in noise and speech in a large group, that is a combined possible range from 7% to 42% of the time.
We asked the question, “What percentage of the time is speech from the front, side or back of the listeners in our sample?” See Figure 2.
When looking at the box and whisker plots, we are most interested in the median percentage of time during which speech is from each of the recorded directions. Across this sample, the median for speech from the front is 30% of the time. The combined medians for speech from the left and right sum to 21% of the time, and speech is from the back 4% of the time. The median recorded for no target was 33% of the time. In other words, the median time for speech from the front is 30%, whereas speech is from the sides or back 25% of the time.
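The combined figures above are simple sums of the reported medians; a quick sketch using the values from the text (note that medians of separate distributions need not sum to exactly 100%):

```python
# Reported median percentages of time by speech direction (values from the text).
medians = {"front": 30, "left_and_right": 21, "back": 4, "no_target": 33}

# Speech from any direction other than the front (sides plus back).
sides_or_back = medians["left_and_right"] + medians["back"]

print(f"Speech from the sides or back: {sides_or_back}%")            # 25%
print(f"Front minus sides/back: {medians['front'] - sides_or_back}%")  # 5%
```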
Please note, this is front, side or back given the relative position of the hearing instruments, not the listener. Therefore, if speech is from the side or the back and the listener turns their head to face the speaker, Log It All would detect that speech as being from the front. Thus, if we assume (as we have in the past) that listeners persistently face the source of the speech, there would be a strong speech-from-front bias built into these results. Yet there is only a 5% difference in the median percentage of time that speech was from the front as opposed to the other three directions combined.
We also need to ask, “Why is so much time spent in the no target condition?” The median time with no target was 33%. Given that one of the three listening environments where speech direction is recorded is noise only, it is not unexpected that no speech direction was often the condition recorded. By obtaining the correlation between time spent in noise only and the percentage of no target results, we can clarify how much of the no target time the listening environment explains.
Figure 3 shows the percentage of time for the no target condition from each fitting compared to the percentage of time in the noise only listening environment.
The picture in Figure 3 is clear: the time people spend in noise where there is no speech is well correlated with the percentage of time they are in the no target condition, and Pearson’s correlation coefficient (r = 0.676) bears that out. In other words, there are circumstances in which speech and noise coexist that Log It All classifies as no target, but Figure 3 suggests that the overwhelming majority of the time that no target is observed, it is because there truly is no speech target present: the hearing instrument is in a noise only listening environment.
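The Pearson correlation reported above can be computed from per-fitting percentages as follows. This is a generic sketch with synthetic, illustrative data only, not the actual study data:

```python
import random

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-fitting data: percent of time in the noise only environment
# vs. percent of time in the no target condition (illustrative values only).
random.seed(1)
noise_only = [random.uniform(0, 25) for _ in range(200)]
no_target = [0.9 * x + random.uniform(0, 15) for x in noise_only]

print(f"r = {pearson_r(noise_only, no_target):.3f}")
```

A strongly positive r, as in the white paper’s Figure 3, indicates that fittings with more time in noise only also log more time in the no target condition.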
In this white paper we have answered three different questions. How much time do listeners spend in different listening environments on average? Of the time where they are in complex listening environments and Log It All is recording the direction of speech, how often is speech not from the front? Finally, why is such a large percentage of time spent in the no target condition? Of the three questions, the most important result is implied in the second: do people always face the direction of speech? The answer is an emphatic no. Listeners are not facing the direction of speech almost as often (25% of the time) as they are (30% of the time). Consequently, it makes sense to offer those people hearing instruments which accommodate their non-frontal listening behavior.
Walden, B. E., Surr, R. K., Cord, M. T., & Dyrlund, O. (2004). Predicting hearing aid microphone preference in everyday listening. J Am Acad Audiol, 15(5), 365-396. https://doi.org/10.3766/jaaa.15.5.4
Wu, Y.-H., Aksan, N., Rizzo, M., Stangl, E., Zhang, X., & Bentler, R. (2014). Measuring listening effort: Driving simulator versus simple dual-task paradigm. Ear & Hearing, 35(6), 623-632. https://doi.org/10.1097/aud.0000000000000079