A Purdue University researcher is working on a new technique to diagnose hearing loss in a way that more accurately reflects real-world situations.
“The traditional way to assess speech understanding in people with hearing loss is to put them in a quiet room and ask them to repeat words produced by one person they can’t see,” said Karen Iler Kirk, a professor of speech, language and hearing sciences.
“The goal of our research is to develop new tests that reflect more natural listening situations with visual cues, different background noises, voice quality, dialects and speaking rates. This is a more accurate way to predict how people perceive speech in the real world and, therefore, can help us determine appropriate therapy and interventions, such as cochlear implants.
“The better the diagnostic tool we have to make such decisions, the better we can serve our patients.”
Kirk received a $2.8 million grant from the National Institute on Deafness and Other Communication Disorders for the five-year project to develop two new audiovisual and multi-talker sentence tests that expand upon the traditional spoken word recognition format that has been used since the 1950s. One test is for adults and the other for children. More than 1,000 people ages 4-65 will participate in the study.
“The traditional spoken word recognition format has been used to determine the need for some sensory aids, such as hearing aids, which are used to amplify sound,” Kirk said. “However, it is not the best method for assessing the benefits of other sensory aids, such as the more expensive cochlear implants.”
A cochlear implant is an electronic device that can provide a sense of sound to someone who is deaf or severely hard of hearing. The device, which is surgically implanted, picks up and processes sound that is converted into electric impulses that are sent to the auditory nerve. More than 100,000 people worldwide have received cochlear implants, and more health insurance companies are paying for the surgery and therapy, Kirk said.
The project is also expanding the word lists beyond the traditional monosyllabic words to a wider range of words selected by how often they are used and by lexical density – the number of words phonetically similar to the target. For example, the word “cat” has a number of lexical neighbors, such as “bat,” “cap,” “cut” and “scat.” A word like “banana” may be used frequently but has few words that sound similar.
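To make the “cat” example concrete, here is a minimal sketch in Python of how such a neighbor count can be computed. The one-edit rule, the letter-by-letter comparison and the toy word list are illustrative assumptions, not the study’s method; lexical-density measures of this kind are typically computed over phonemic transcriptions of large corpora rather than spellings.

    # Minimal sketch: count a word's "lexical neighbors" -- words one
    # substitution, insertion or deletion away from the target.
    # Letters stand in for phonemes here as a simplification; real
    # lexical-density measures use phonemic transcriptions.

    def is_neighbor(target: str, candidate: str) -> bool:
        """True if candidate differs from target by exactly one edit."""
        if target == candidate:
            return False
        if abs(len(target) - len(candidate)) > 1:
            return False
        if len(target) == len(candidate):
            # Same length: neighbor if exactly one letter differs.
            return sum(a != b for a, b in zip(target, candidate)) == 1
        # Lengths differ by one: neighbor if deleting one letter from
        # the longer word yields the shorter one.
        short, long_ = sorted((target, candidate), key=len)
        return any(long_[:i] + long_[i + 1:] == short
                   for i in range(len(long_)))

    # Toy lexicon for illustration only.
    lexicon = ["bat", "cap", "cut", "scat", "banana", "bandana", "dog"]

    for target in ("cat", "banana"):
        neighbors = [w for w in lexicon if is_neighbor(target, w)]
        print(f"{target}: {len(neighbors)} neighbors -> {neighbors}")
    # cat: 4 neighbors -> ['bat', 'cap', 'cut', 'scat']
    # banana: 1 neighbors -> ['bandana']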
The 10 diverse speakers, who are recording more than 6,000 sentences combined, will not be producing perfectly articulated speech.
“It’s important to use sentence materials that are produced by different speakers because in the real world, we do not listen to just one person,” Kirk said.
In addition to the auditory component, the materials will be presented visually, so listeners can see the speaker as well as hear the sentence.
“This is really important because hearing-impaired people often have great difficulty understanding speech if they are just listening. Seeing the face and following lip reading cues can help someone understand the intended message,” she said.
Participants will be tested in auditory-only, visual-only or combined auditory-visual modalities. At the end of the project, DVDs containing the tests, as well as instruction booklets, data-gathering forms and a manual for data interpretation, will be available to professionals.
Another benefit of the study will be the raw data it generates.
“Just collecting information from 1,000 individuals and measuring how well they perform on these tests gives us tremendous information that is not available elsewhere,” Kirk said.