Consider this three-word statement: audibility precedes intelligibility. This premise is fundamental to appreciating the difference between hearing and comprehension. Audibility alone does not guarantee intelligibility, yet until sounds are heard, they are not usable. Our industry has successfully developed technologies that improve both audibility and intelligibility. For example, fitting formulae guide us to the gain appropriate for each audiogram (to deliver mostly normal loudness perception while avoiding discomfort), and a host of algorithms, from WDRC to frequency-lowering strategies, help us achieve high levels of audibility for most sounds. Beyond providing awareness of sounds, we have strategies to improve speech understanding even in very challenging environments, such as automatic adaptive directional microphone modes, binaural spatial features, and remote microphone options, to name just a few. These technologies continue to evolve and improve. But beyond making sounds audible and words understandable, are we doing enough to uncover the true meaning behind speech? Wouldn’t it be great if we could move beyond the words to identify not only what is said, but how it’s said?

Decades of research on emotion exist in fields like psychology and neuroscience. Historically, research on emotion as it relates to hearing focused primarily on investigating the psychological impact of hearing loss on individuals and their significant others. In recent years, new areas of analysis, such as listening effort and fatigue, user intention, and cognitive decline, have emerged alongside technological developments that offer potential benefits in amplification. The latest compelling area of emerging research in audiology is the impact of hearing loss on the recognition of emotion in spoken language, including the potential user benefits of hearing aid technologies.

In April 2017, a workshop brought together opinion leaders and researchers with expertise in emotional communication. The Hearing, Emotion, Amplification, Research and Training (HEART) workshop sought a consensus on knowledge about this topic, identification of gaps, and prioritization of future research efforts. The publication1 documenting this workshop is an exhaustive review, listing an astonishing 245 references. Only a handful of these are specific to the field of audiology.

Based on the limited research on vocal emotion cited in the HEART paper, the authors suggest:

  • On tests of emotion identification, people with hearing loss generally have more difficulty than listeners with normal hearing. 
  • There are limited, if any, positive effects of hearing aid use on emotion-recognition performance.

In addition to recommending future research directions, the workshop also considered intervention priorities. A hopeful note states: “Interventions that improve pitch perception and spectral resolution would be expected to improve interindividual emotion perception.”

A key challenge facing audiology in this exciting new area is the scarcity of methodologies to assess and quantify the experience of listening to signals that contain emotion information. Fortunately, tools and tests are now emerging to pursue research and clinical implications in this area; these include subjective self-rating questionnaires, as well as objective tools to quantify accuracy of emotion perception.

In 2018, Singh, Liskovoi, Launer, and Russo2 explored emotion perception using a new self-report questionnaire that assesses experiences of hearing and handicap for signals containing emotional information: the Emotional Communication in Hearing Questionnaire, or EMO-CHeQ. An initial crowdsourcing-based evaluation of the EMO-CHeQ reported results from 586 participants: 243 had normal hearing, 193 had hearing impairment, and 150 were hearing aid wearers. In addition to validating the usability of this new questionnaire, the results revealed compelling information about perception of emotion in speech.

Figure 1 shows the results for two groups, younger adults and older adults, each with three subgroups - normal hearing, hearing impaired (but unamplified) and hearing impaired with hearing aids. Key takeaways:

  • For both younger and older adults, those with hearing loss reported significantly more difficulty perceiving vocal emotion than those with normal hearing.
  • There was no significant difference between those with and without amplification.
  • The authors suggest that even those reporting high satisfaction with their hearing aids did not benefit from amplification for perception of vocal emotion.
     

Note: This study does not report the brands, styles, or technology levels of the hearing instruments worn by the hearing aid group, but given the large sample size, it is reasonable to assume that all major brands were represented, likely in proportion to their market share.

Figure 1. Mean results from 586 respondents to the online EMO-CHeQ, collapsed on age for groups with self-reported normal hearing, hearing loss (unaided) and hearing aids. Higher numbers represent more perceived handicap. Error bars represent standard deviations. Asterisks (*) indicate significant differences.

A second phase of this research,2 conducted at Ryerson University, evaluated the EMO-CHeQ with 32 participants whose hearing status was audiometrically verified, again across three groups: normal/near normal hearing, hearing impaired (unaided), and hearing impaired with hearing aids. The 10 people in the hearing aid group wore a variety of device styles and brands.

Figure 2 shows the EMO-CHeQ results, including total scores and scores for four sub-scales. The results are very similar to those of the Phase 1 crowdsourcing group: significantly worse ability to perceive vocal emotion for those with hearing loss, both with and without hearing aids, with some variability across sub-scales.

Figure 2. EMO-CHeQ results from 32 participants with verified hearing status, including groups with normal/near normal hearing, hearing loss (unaided) and hearing aids. Higher numbers represent more perceived handicap. Mean results and four sub-scale results are shown; asterisks (*) indicate significant differences.

Phase 2 also included an objective measure of emotion identification, using the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). Participants identified emotions in recorded stimuli both with and without visual cues. Figure 3 shows these results. With and without visual cues, those with hearing loss had more difficulty accurately identifying emotions. There was a statistically significant difference without visuals for the hearing-impaired and hearing aid groups relative to the group with normal hearing. And again, use of hearing aids did not make any significant difference in either scenario.

Figure 3. Mean performance of 32 participants on audio-only and audio-visual emotion identification tasks (RAVDESS stimuli) for groups with normal/near normal hearing, hearing loss (unaided) and hearing aids. Higher scores represent better performance on this test. Asterisks (*) indicate significant differences.

The two overall conclusions from the two study phases are that, for both self-reported and verified hearing aid users, amplification did not improve emotion-identification performance, and that the absence of visual cues exacerbated this deficit.

In another 2018 study,3 researchers used a variety of cognitive and satisfaction questionnaires (Montreal Cognitive Assessment, HHIA, APHAB), traditional word recognition tasks, and test materials from the Toronto Emotional Speech Set. Similar to the RAVDESS, this test presents sentences spoken with a variety of emotions, which participants attempt to identify. Results showed that amplification with hearing aids did improve word recognition scores across emotions. However, there was no significant impact of hearing aid usage on the accuracy of emotion identification. In other words, for speech spoken with emotion, hearing aids improved speech intelligibility but not perception of emotion. The authors suggest that current hearing aids may process acoustic speech and emotional cues similarly, regardless of the emotional content.

In this same study, young normal-hearing participants were also tested for accuracy of vocal emotion. Results were consistent with the long-held view that young listeners are better able to identify emotions than older listeners with hearing loss. Overall, this report suggests that there are changes to emotion identification in listeners with hearing loss that cannot be attributed only to normal aging and that hearing aids do not appear to compensate for these changes. Difficulties with identification of emotion may contribute to challenges in social functioning, in addition to the other communication difficulties resulting from hearing loss. Could this be part of the reason why people with hearing loss have misunderstandings in conversations – they are not just missing the words, but the emotions?

Another interesting study4 investigated responses to emotional speech using a skin conductance response (SCR) measure. Nespoli, Singh, and Russo tested normal hearing participants and those with hearing loss, with and without amplification. They found that the normal-hearing participants were faster and more accurate in identifying emotional speech. And use of hearing aids did not improve responses for those with hearing loss.

Finally, Picou5 reported that for adults with acquired sensorineural hearing loss (mild to moderately severe), deficits in perception of vocal emotion also affect the valence of the listener’s emotional response. In other words, they rate pleasant signals as less pleasant and unpleasant signals as less unpleasant compared to normal-hearing peers. TV viewing is affected as well, with disrupted emotion perception seen in responses to television media. This appears to be associated primarily with reduced intelligibility and reduced high-frequency audibility. The study asserted that compensating for audibility simply by increasing overall loudness can exacerbate, not ameliorate, emotion-perception deficits.

These consistent research results regarding perception of vocal emotion may seem discouraging, as they conclude that those with hearing loss (especially if they are older) have much greater difficulty accurately identifying emotions in speech and, in general, hearing aids do not seem to help.

Here’s the good news: in the face of these well-documented challenges, a study6 conducted by Hoerzentrum Oldenburg, in collaboration with Vitakustik in Germany, showed that Unitron technology can make a difference in this area. In this study, 88 new users and 70 experienced users of amplification completed the EMO-CHeQ questionnaire before and after being fitted with Unitron Moxi™ Fit Pro RIC instruments. Fittings were ‘real-world’: participants were recruited from actual clinics, and fittings were performed by clinicians in those clinics (not in a research facility) using their normal fitting procedures, including first fit and fine tuning as needed. For the experienced user group, initial ratings were based on experiences with their current hearing devices, again spanning a variety of brands, styles, and technology levels.

Figure 4. Mean EMO-CHeQ results from 88 new users (FTU) and 70 experienced hearing aid users (EXU). Higher numbers represent more perceived handicap. Pre results for new users are based on their experiences before trying hearing aids; pre results for experienced users are based on their experiences with their current hearing aids. Post results for both groups are based on their experiences after wearing Unitron devices for 2-3 weeks. Both groups performed significantly better with the Unitron hearing aids compared to the pre results, as the asterisks (*) indicate.

As shown in Figure 4, a significant benefit was observed for both first time users (FTU) and experienced users (EXU) after 2-3 weeks of use of the Unitron instruments.

Figure 5 shows these results relative to the average for people with normal hearing. Displayed this way, we can calculate the percentage improvement reported by the study participants after wearing the Unitron devices for 2-3 weeks: relative to the normal-hearing baseline, experienced users reported an average improvement of 61%, and first time users an improvement of 89%.
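For readers who want to reproduce this kind of calculation, here is a minimal sketch of how a percentage improvement relative to a normal-hearing baseline can be computed from EMO-CHeQ group means. All numbers in the example are hypothetical illustrations, not the study’s actual data.

```python
def improvement_vs_baseline(pre: float, post: float, normal: float) -> float:
    """Percentage of the pre-fitting gap to the normal-hearing mean that
    was closed after fitting (higher EMO-CHeQ scores = more handicap)."""
    gap_before = pre - normal   # handicap gap before fitting
    gap_after = post - normal   # handicap gap after fitting
    return (gap_before - gap_after) / gap_before * 100

# Hypothetical example: a group mean of 3.0 before fitting and 2.2 after,
# against a normal-hearing mean of 2.0, closes 80% of the gap.
print(round(improvement_vs_baseline(3.0, 2.2, 2.0)))  # → 80
```

The exact definition of “improvement relative to the baseline” used in the study is not spelled out in the text; the gap-closure formula above is one common way to express it.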

These results are especially noteworthy given that the previous research concluded no improvement in perception of vocal emotion as a result of amplification. In this Oldenburg study, the improvement was observed with Unitron hearing aids, not with the devices from other brands that participants had worn before. In fact, one of the researchers commented, “We were astonished to see comparably huge differences between results for the other hearing instruments and the new Unitron hearing instruments.”

Figure 5. Mean EMO-CHeQ results from 88 new users (FTU) and 70 experienced hearing aid users (EXU) showing self-rated ability to perceive vocal emotion, relative to the mean for normal hearing.

So how do we do it? Let’s return to the fundamental statement ‘audibility precedes intelligibility’. For a sound to be used, it must first be made audible. But as Goy et al. observed, current hearing aids may process acoustic speech and emotional cues similarly regardless of the vocal emotion. Picou reported that increasing only overall loudness to compensate for reduced audibility (even when fitting to standard targets) can actually disrupt, not enhance, emotion perception. Somehow, the cues needed for vocal-emotion identification are either not being made audible or are being compromised by the signal processing of most hearing aids.

All manufacturers, Unitron included, focus on improving audibility of the widest possible range of sounds and on improving signal-to-noise ratios (SNR) for better intelligibility of speech across a variety of acoustic environments. But Unitron takes a unique approach, integrating key adaptive features into an intelligent, synergistic system called SoundCore™. This approach does more than activate individual algorithms for comfort and SNR improvement: multiple components of SoundCore work together to improve sound awareness and speech understanding, and go further to provide the subtle nuances of speech often needed for deeper meaning.7

Despite the troubling conclusions from the research summarized here about the historical lack of hearing aid benefit for perception of vocal emotion, we’re hopeful. As shown by the Oldenburg study cited above, Unitron’s exclusive combination of synergistic elements comes together in our SoundCore signal processing system to help clients move beyond the words and get to the deeper meaning. And our work isn’t finished. Audibility, speech understanding (particularly in difficult listening situations), and realistic sound reproduction continue to be a main focus of innovation and algorithm evolution at Unitron, so that we can help clients get to the heart of conversations.

References
1Picou, E., Singh, G., Goy, H., Russo, F., Hickson, L., Oxenham, A., Buono, G., Ricketts, T., Launer, S., (2018). Hearing, emotion, amplification, research, and training workshop: Current understanding of hearing loss and emotion perception and priorities for future research. Trends in Hearing, 22: 1-24.

2Singh, G., Liskovoi, L., Launer, S., Russo, F., (2018). The Emotional Communication in Hearing Questionnaire (EMO-CHeQ): Development and evaluation. Ear & Hearing, 40:260-271.

3Goy, H., Pichora-Fuller, K., Singh, G., Russo, F., (2018). Hearing aids benefit recognition of words in emotional speech but not emotion identification. Trends in Hearing, 22: 1-16.

4Nespoli, G., Singh, G., Russo, F., (2018). Skin conductance responses to emotional speech in hearing-impaired and hearing-aided listeners. Proceedings of Acoustics Week in Canada, Canadian Acoustics, 44. Vancouver, BC.

5Picou, E., (2019). Can hearing aids change the way adults respond emotionally to sounds? American Academy of Audiology ARC 19 summary in Audiology Today, 31: 52. Submitted for further publication – pending.

6Singh, G., Krueger, M., Besser, J., Wietoska, L., Launer, S., Meis, M., (2018). A pre-post intervention study of hearing aid amplification: results of the Emotional Communication in Hearing Questionnaire (EMO-CHeQ). ICHON 2018 poster session.

7Cornelisse, L., (2017). A conceptual framework to align sound performance with the listener’s needs and preferences to achieve the highest level of satisfaction with amplification. Unitron white paper.