Speech Audiometry

  • Author: Suzanne H Kimball, AuD, CCC-A/FAAA; Chief Editor: Arlen D Meyers, MD, MBA

Updated: Jul 30, 2015

Overview

Speech audiometry has become a fundamental tool in hearing-loss assessment. In conjunction with pure-tone audiometry, it can aid in determining the degree and type of hearing loss. Speech audiometry also provides information regarding discomfort or tolerance to speech stimuli and information on word recognition abilities.

In addition, information gained by speech audiometry can help determine proper gain and maximum output of hearing aids and other amplifying devices for patients with significant hearing losses and help assess how well they hear in noise. Speech audiometry also facilitates audiological rehabilitation management.

The Technique section of this article describes speech audiometry for adult patients. For pediatric patients, see the Pediatric Speech Materials section below.


Indications

Speech audiometry can be used for the following:

  • Assessment of degree and type of hearing loss
  • Examination of word recognition abilities
  • Examination of discomfort or tolerance to speech stimuli
  • Determination of proper gain and maximum output of amplifying devices

Contraindications

Speech audiometry should not be done if the patient is uncooperative.


Anesthesia

No anesthesia is required for speech audiometry.


Equipment

In most circumstances, speech audiometry is performed in a 2-room testing suite. Audiologists work from the audiometric equipment room, while patients undergo testing in the evaluation room. The audiometric equipment room contains the speech audiometer, which is usually part of a diagnostic audiometer. The speech-testing portion of the diagnostic audiometer usually consists of 2 channels that provide various inputs and outputs.

Speech audiometer input devices include microphones (for live voice testing), tape recorders, and CDs for recorded testing. Various output devices, including earphones, ear inserts, bone-conduction vibrators, and loudspeakers, are located in the testing suite.[1]

Tests using speech materials can be performed using earphones, with test material presented into 1 or both earphones. Testing can also be performed via a bone-conduction vibrator. In addition to these methods, speech material can be presented using loudspeakers in the sound-field environment.


Technique

Speech-awareness thresholds

Speech-awareness threshold (SAT) is also known as speech-detection threshold (SDT). The objective of this measurement is to obtain the lowest level at which speech can be detected at least half the time. This test does not have patients repeat words; it requires patients to merely indicate when speech stimuli are present.

Speech materials usually used to determine this measurement are spondees. Spondaic words are 2-syllable words spoken with equal emphasis on each syllable (eg, pancake, hardware, playground). Spondees are used because they are easily understandable and contain information within each syllable sufficient to allow reasonably accurate guessing.

The SAT is especially useful for patients too young to understand or repeat words. It may be the only behavioral measurement that can be made with this population. The SAT may also be used for patients who speak another language or who have impaired language function because of neurological insult.

For patients with normal hearing or somewhat flat hearing loss, this measure is usually 10-15 dB better than the speech-recognition threshold (SRT) that requires patients to repeat presented words. For patients with sloping hearing loss, this measurement can be misleading with regard to identifying the overall degree of loss.

If a patient has normal hearing at even a single low frequency, the SAT will closely approximate the threshold at that frequency and will not reflect greater loss at higher frequencies.

Speech-recognition threshold

The speech-recognition threshold (SRT) is sometimes referred to as the speech-reception threshold. The objective of this measure is to obtain the lowest level at which speech can be identified at least half the time.

Spondees are usually used for this measurement. Lists of spondaic words commonly used to obtain the SRT are contained within the Central Institute for the Deaf (CID) Auditory List W-1 and W-2.

In addition to determining softest levels at which patients can hear and repeat words, the SRT is also used to validate pure-tone thresholds because of high correlation between the SRT and the average of pure-tone thresholds at 500, 1000, and 2000 Hz.

In clinical practice, the SRT and 3-frequency average should be within 5-12 dB. This correlation holds true if hearing loss in the 3 measured frequencies is relatively similar. If 1 threshold within the 3 frequencies is significantly higher than the others, the SRT will usually be considerably better than the 3-frequency average. In this case, a 2-frequency average is likely to be calculated and assessed for agreement with the SRT.
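
As a worked illustration of this cross-check, the following minimal Python sketch compares a hypothetical SRT with the pure-tone average. The threshold values and the 20-dB spread used to trigger the fallback to a 2-frequency average are illustrative assumptions, not clinical rules.

    # Hypothetical thresholds (dB HL) for a sharply sloping loss; not patient data.
    thresholds = {500: 20, 1000: 25, 2000: 60}
    srt = 25                                   # measured SRT in dB HL (hypothetical)

    pta3 = sum(thresholds.values()) / 3
    if max(thresholds.values()) - min(thresholds.values()) > 20:   # assumed cutoff for "significantly higher"
        # One frequency is much poorer: average the two best thresholds instead.
        best_two = sorted(thresholds.values())[:2]
        pta, label = sum(best_two) / 2, "2-frequency PTA"
    else:
        pta, label = pta3, "3-frequency PTA"

    difference = abs(srt - pta)
    print(f"{label} = {pta:.1f} dB HL; SRT = {srt} dB HL; difference = {difference:.1f} dB")
    # Agreement within roughly 5-12 dB supports the validity of the pure-tone thresholds.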

Other clinical uses of the SRT include establishing the sound level to present suprathreshold measures and determining appropriate gain during hearing aid selection.

Suprathreshold word-recognition testing

The primary purpose of suprathreshold word-recognition testing is to estimate ability to understand and repeat single-syllable words presented at conversational or another suprathreshold level. This type of testing is also referred to as word-discrimination testing or speech-discrimination testing.

Initial word lists compiled for word-recognition testing were phonetically balanced (PB). This term indicated that phonetic composition of the lists was equivalent and representative of connected English discourse.

The original PB lists were created at the Harvard Psycho-Acoustic Laboratory and are referred to as the PB-50 lists; they comprise 20 lists of 50 single-syllable words each, for a total of 1000 different monosyllabic words. Several years later, the CID W-22 word lists were devised, primarily using words selected from the PB-50 lists. Another word list (devised from a grouping of 200 consonant-nucleus-consonant [CNC] words) is called the Northwestern University Test No. 6 (NU-6). Recorded tape and CD versions of all these word-recognition tests are commercially available.

The PB-50, CID W-22, and NU-6 word lists each contain 50 words that are presented at specified sensation levels. Words can be presented via tape, CD, or monitored live voice. Patients are asked to repeat words to the audiologist. Each word repeated correctly is valued at 2%, and scores are tallied as a percent-correct value.
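
The scoring arithmetic can be illustrated with a brief sketch (the response count is hypothetical):

    words_presented = 50        # a full PB-50, W-22, or NU-6 list
    words_correct = 44          # hypothetical tally of correctly repeated words
    score = words_correct * (100 / words_presented)   # each word is worth 2%
    print(f"Word-recognition score: {score:.0f}%")    # -> 88%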

Varying the presentation level of the monosyllabic words reveals a range of performance-intensity functions for these word lists. In general, presenting words at a sensation level of 25-40 dB (relative to the SRT) allows patients to achieve maximum scores; lowering the level results in lower scores. For individuals with hearing loss, words can be presented at a comfortable loudness level or at the highest reasonable level before discomfort occurs.

When words are presented at the highest reasonable level and the word-recognition score is 80% or better, testing can be discontinued. If the score is lower than 80%, further testing at lower presentation levels is recommended. If scores at lower levels are better than those obtained at higher presentation levels, rollover has occurred; this finding suggests a possible retrocochlear (or higher) site of lesion.
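
The decision rule above can be sketched as follows; the SRT, presentation levels, and scores are hypothetical, and the 80% cutoff is taken from the text.

    srt = 30                                     # dB HL (hypothetical)
    presentation_range = (srt + 25, srt + 40)    # 25-40 dB sensation level re: SRT -> 55-70 dB HL
    scores_by_level = {90: 64, 75: 80, 60: 72}   # % correct at each presentation level (dB HL)

    highest_level = max(scores_by_level)
    score_at_highest = scores_by_level[highest_level]

    if score_at_highest >= 80:
        print("Score at the highest reasonable level is 80% or better; testing can stop.")
    else:
        best_lower_score = max(score for level, score in scores_by_level.items() if level < highest_level)
        if best_lower_score > score_at_highest:
            print("Rollover: scores are better at lower levels; possible retrocochlear site of lesion.")
        else:
            print("No rollover observed at the levels tested.")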

Another use of suprathreshold word-recognition testing is to verify speech-recognition improvements achieved with hearing aids. Testing can be completed at conversational levels in the sound field without hearing aids and then repeated with the hearing aids fitted to the patient. The difference between the two scores serves as a pretest-posttest measure of aided benefit, expressed as a percent-improvement score.

Speech audiogram. Video courtesy of Benjamin Daniel Liess, MD.

Sentence testing

To evaluate ability to hear and understand everyday speech, various tests have been developed that use sentences as test items. Sentences can provide information regarding the time domain of everyday speech and can approximate contextual characteristics of conversational speech.

Everyday sentence test

This is the first sentence test developed at the CID in the 1950s.

Clinical use of this test is limited, because its reliability as a speech-recognition test for sentences remains undemonstrated.

Synthetic-sentence identification test

The synthetic-sentence identification (SSI) test was developed in the late 1960s. SSI involves a set of 10 synthetic sentences. Sentences used in this test were constructed so that each successive group of 3 words in a sentence is itself meaningful but the entire sentence is not.

Because the sentences are deemed insufficiently challenging in quiet environments, it has been recommended that they be administered in noise at a signal-to-noise (S/N) ratio of 0 dB, in which sentences and noise are presented at equal intensity.

Speech perception in noise test

The speech perception in noise (SPIN) test is another sentence-identification test. The SPIN test was originally developed in the late 1970s and was revised in the mid 1980s.

The revised SPIN test consists of 8 lists of 50 sentences; the last word of each sentence is the test item. Half of the sentences contain high-predictability test items, in which the word is highly predictable from the sentence context. The other half contain low-predictability test items, in which the word cannot be predicted from the context. The recorded sentences come with a speech babble-type noise that can be presented at various S/N ratios.

Speech in noise test

The speech in noise (SIN) test, developed in the late 1990s, contains 5 sentences with 5 key words per test condition. Two signal levels (70 and 40 dB) and 4 S/N ratios are used at each level. A 4-talker babble is used as noise. This recorded test can be given to patients with hearing aids in both the unaided and aided conditions.

Results are presented as performance-intensity functions in noise. A shorter version of the SIN, the QuickSIN, was developed in 2004. The QuickSIN has been shown to be effective, particularly when verifying open-fit behind-the-ear hearing aids.

Hearing in noise test

The hearing in noise test (HINT) is designed to measure speech-recognition thresholds in both quiet and noise. The test consists of 25 lists of 10 sentences and noise matched to the long-term average speech spectrum.

Using an adaptive procedure, a reception threshold for sentences is obtained while noise is presented at a constant level. Results can be compared with normative data to determine the patient's relative ability to hear in noise.

Words in noise test

The Words-in-Noise Test (WIN), developed in the early 2000s, provides an open set word-recognition task without linguistic context. The test is composed of monosyllabic words from the NU-6 word lists presented in multitalker babble. The purpose of the test is to determine the signal-to-babble (S/B) ratio in decibels for those with normal and impaired hearing. The WIN is similar to the QuickSIN in providing information about speech recognition performance.

The WIN is used to measure basic auditory function when working memory demands and linguistic context are reduced or eliminated. By using monosyllabic words in isolation, the measure evaluates the listener's ability to recognize speech from acoustic cues alone, eliminating the syntactic and semantic cues found in sentences. The WIN materials allow the same words, spoken by the same speaker, to be used for both speech-in-quiet and speech-in-noise data collection.

Bamford-Kowal-Bench speech in noise test

The Bamford-Kowal-Bench Speech-in-Noise Test (BKB-SIN) was developed by Etymotic Research in the early to mid 2000s. The primary populations for this test are children and candidates for or recipients of cochlear implants.

Like the HINT, the BKB-SIN uses Americanized BKB sentences.[2] The sentences use short words, are highly redundant, and contain semantic and syntactic contextual cues at a first-grade reading level. Whereas the HINT uses speech-spectrum noise, the BKB-SIN uses multitalker babble. Clinicians can expect better recognition performance on the BKB-SIN and HINT than on the QuickSIN and WIN because of the additional semantic context provided by the BKB sentences.

Selecting proper speech in noise testing

QuickSIN and WIN materials are best for discriminating listeners with hearing loss from those with normal hearing. The BKB-SIN and HINT materials are less able to identify those with hearing loss.[3] Therefore, the QuickSIN or WIN is indicated as the speech-in-noise task in the routine clinical protocol. The choice between them is largely a matter of clinician preference, although the clinician must also consider whether the patient can handle monosyllabic words (WIN) or needs some support from sentence context (QuickSIN).

The BKB-SIN and HINT materials are easier to recognize because of the semantic content, making them excellent tools for young children or individuals with substantial hearing loss, including cochlear implant candidates and new recipients.

Most comfortable loudness level and uncomfortable loudness level

Most comfortable loudness level

The test that determines the intensity level of speech that is most comfortably loud is called the most comfortable loudness level (MCL) test.

For most patients with normal hearing, speech is most comfortable at 40-50 dB above the SRT. This sensation level is reduced for many patients who have sensorineural hearing loss (SNHL). Because of this variation, MCL can be used to help determine hearing aid gain for patients who are candidates for amplification.

MCL measurement can be obtained using cold running (continuous) speech via recorded or monitored live-voice presentation. Patients are instructed to indicate when speech is perceived to be at the MCL. Speech is first presented slightly above the SRT and progressively increased until the MCL is reached. The level is then raised above this initial MCL and progressively reduced until a second MCL is obtained. This bracketing technique yields an average MCL.
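
A minimal sketch of this bracketing logic follows; the 5-dB step size, the starting levels, and the simulated listener response are illustrative assumptions rather than a prescribed protocol.

    def find_ascending_mcl(is_most_comfortable, start_level, step=5, ceiling=100):
        # Raise the level in fixed steps until the listener reports MCL.
        level = start_level
        while level <= ceiling and not is_most_comfortable(level):
            level += step
        return level

    def bracket_mcl(is_most_comfortable, srt, step=5):
        # Ascending run: start slightly above the SRT and increase until MCL is reported.
        ascending_mcl = find_ascending_mcl(is_most_comfortable, srt + 10, step)
        # Descending run: start above the first MCL and decrease until MCL is reported again.
        level = ascending_mcl + 20
        while level > srt and not is_most_comfortable(level):
            level -= step
        descending_mcl = level
        # The average of the two runs is taken as the MCL.
        return (ascending_mcl + descending_mcl) / 2

    # Hypothetical listener whose comfortable range is 60-70 dB HL, with an SRT of 25 dB HL.
    mcl = bracket_mcl(lambda level: 60 <= level <= 70, srt=25)
    print(f"Bracketed MCL: {mcl} dB HL")   # -> 65.0 dB HL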

Uncomfortable loudness level

One reason to establish uncomfortable loudness level (UCL) is to determine the upper hearing limit for speech. This level provides the maximum level at which word-recognition tests can be administered. UCL can also indicate maximum tolerable amplification.

Another reason to establish UCL is to determine the dynamic speech range. Dynamic range represents the limits of useful hearing in each ear and is computed by subtracting SRT from UCL. For many patients with SNHL, this range can be extremely limited because of recruitment or abnormal loudness perception.
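
The dynamic-range arithmetic amounts to a single subtraction, shown here with hypothetical values:

    srt, ucl = 45, 95            # dB HL; hypothetical values for an ear with SNHL
    dynamic_range = ucl - srt    # dynamic range for speech = UCL - SRT
    print(f"Dynamic range for speech: {dynamic_range} dB")   # -> 50 dB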

UCL speech materials can be the same as those used for the MCL. The normal ear should be able to accept hearing levels of 90-100 dB. Patients are instructed to indicate when the presented speech becomes uncomfortably loud. Instructions are critical, because patients must allow the speech to rise above the MCL before indicating discomfort.


Pearls

Although speech testing in general has not been shown to reliably predict hearing aid satisfaction,[4] measurement of loudness discomfort levels (UCLs) has been shown to contribute to successful hearing aid outcomes.[5]

The Acceptable Noise Level (ANL) test is a measure of the amount of background noise that a person is willing to tolerate.[6] In recent years it has gained interest among researchers and hearing-care professionals because of its ability to predict, with 85% accuracy, who will be successful with hearing aids.[7]


Pediatric Speech Materials

For very young children with limited expressive and receptive language skills, picture cards representing spondaic words can be used to establish the SRT. Before testing, the tester must ensure that the child understands what the card represents. Once the child has been taught to point to the correct picture card, 4-6 cards are chosen and presented to the child. Then, the softest level at which the child can select the correct card at least half the time is established.

For children with typical kindergarten or first-grade language skills, the Children's Spondee Word List can be used instead of adult word lists. The CID W-1 list is appropriate for use with older children.

Word-recognition testing for children can be classified as open-message response testing or closed-response testing. Closed-response testing uses the picture-pointing technique.

Word intelligibility by picture identification test

One of the more popular closed-response tests is the word intelligibility by picture identification (WIPI) test. The test consists of 25 pages, each containing 6 colored pictures, each picture representing an item named by a monosyllabic word. Four of the pictures on each page serve as test items, while the other 2 act as foils to decrease the probability of a correct guess.

WIPI was developed for use with children with hearing impairment and can be used for children aged 4 years and older.

Northwestern University children's perception of speech test

Another popular closed-response test is the Northwestern University children's perception of speech (NU-CHIPS) test. NU-CHIPS consists of 50 pages with 4 pictures per page.

This test was developed for use with children aged 3 years and older.

Pediatric speech intelligibility test

The pediatric speech intelligibility (PSI) test uses both monosyllabic words and sentence test items. The PSI test consists of 20 monosyllabic words and 10 sentences. Children point to the appropriate picture representing the word or sentence presented.

Test materials are applicable for children aged as young as 3 years.

Phonetically balanced kindergarten test

One of the more popular open-message response tests for children is the phonetically balanced kindergarten (PBK) test, which contains 50 monosyllabic words that the child repeats.

The PBK test is most appropriate for children aged 5-7 years.

Bamford-Kowal-Bench Speech-in-Noise Test

As mentioned above, the BKB-SIN materials are easier to recognize because of their semantic content, making the test an excellent tool for use with young children.

Contributor Information and Disclosures
Author

Suzanne H Kimball, AuD, CCC-A/FAAA Assistant Professor, University of Oklahoma Health Sciences Center

Suzanne H Kimball, AuD, CCC-A/FAAA is a member of the following medical societies: American Academy of Audiology, American Speech-Language-Hearing Association

Disclosure: Nothing to disclose.

Specialty Editor Board

Francisco Talavera, PharmD, PhD Adjunct Assistant Professor, University of Nebraska Medical Center College of Pharmacy; Editor-in-Chief, Medscape Drug Reference

Disclosure: Received salary from Medscape for employment.

Peter S Roland, MD Professor, Department of Neurological Surgery, Professor and Chairman, Department of Otolaryngology-Head and Neck Surgery, Director, Clinical Center for Auditory, Vestibular, and Facial Nerve Disorders, Chief of Pediatric Otology, University of Texas Southwestern Medical Center; Chief of Pediatric Otology, Children’s Medical Center of Dallas; President of Medical Staff, Parkland Memorial Hospital; Adjunct Professor of Communicative Disorders, School of Behavioral and Brain Sciences, Chief of Medical Service, Callier Center for Communicative Disorders, University of Texas School of Human Development

Peter S Roland, MD is a member of the following medical societies: Alpha Omega Alpha, American Auditory Society, The Triological Society, North American Skull Base Society, Society of University Otolaryngologists-Head and Neck Surgeons, American Neurotology Society, American Academy of Otolaryngic Allergy, American Academy of Otolaryngology-Head and Neck Surgery, American Otological Society

Disclosure: Received honoraria from Alcon Labs for consulting; Received honoraria from Advanced Bionics for board membership; Received honoraria from Cochlear Corp for board membership; Received travel grants from Med El Corp for consulting.

Chief Editor

Arlen D Meyers, MD, MBA Professor of Otolaryngology, Dentistry, and Engineering, University of Colorado School of Medicine

Arlen D Meyers, MD, MBA is a member of the following medical societies: American Academy of Facial Plastic and Reconstructive Surgery, American Academy of Otolaryngology-Head and Neck Surgery, American Head and Neck Society

Disclosure: Serve(d) as a director, officer, partner, employee, advisor, consultant or trustee for: Cerescan; RxRevu; SymbiaAllergySolutions. Received income in an amount equal to or greater than $250 from: Symbia. Received from Allergy Solutions, Inc for board membership; Received honoraria from RxRevu for chief medical editor; Received salary from Medvoy for founder and president; Received consulting fee from Corvectra for senior medical advisor; Received ownership interest from Cerescan for consulting; Received consulting fee from Essiahealth for advisor; Received consulting fee from Carespan for advisor; Received consulting fee from Covidien for consulting.

Additional Contributors

Cliff A Megerian, MD, FACS Medical Director of Adult and Pediatric Cochlear Implant Program, Director of Otology and Neurotology, University Hospitals of Cleveland; Chairman of Otolaryngology-Head and Neck Surgery, Professor of Otolaryngology-Head and Neck Surgery and Neurological Surgery, Case Western Reserve University School of Medicine

Cliff A Megerian, MD, FACS is a member of the following medical societies: American Otological Society, Association for Research in Otolaryngology, American Academy of Otolaryngology-Head and Neck Surgery, Society of University Otolaryngologists-Head and Neck Surgeons, Triological Society, American Neurotology Society, American College of Surgeons, Massachusetts Medical Society, Society for Neuroscience

Disclosure: Received consulting fee from Cochlear Americas for board membership; Received consulting fee from Grace Corporation for board membership.

Acknowledgements

Medscape Reference thanks Benjamin Daniel Liess, MD, Assistant Professor, Department of Otolaryngology, University of Missouri-Columbia School of Medicine, for the video contributions to this article.

References
  1. Lewis MS, Crandell CC, Valente M, Horn JE. Speech perception in noise: directional microphones versus frequency modulation (FM) systems. J Am Acad Audiol. 2004 Jun. 15(6):426-39. [Medline].

  2. Bench J, Kowal A, Bamford J. The BKB (Bamford-Kowal-Bench) sentence lists for partially-hearing children. Br J Audiol. 1979 Aug. 13(3):108-12. [Medline].

  3. Wilson RH, McArdle RA, Smith SL. An Evaluation of the BKB-SIN, HINT, QuickSIN, and WIN Materials on Listeners With Normal Hearing and Listeners With Hearing Loss. J Speech Lang Hear Res. 2007 Aug. 50(4):844-56. [Medline].

  4. Killion MC, Gudmundsen GI. Fitting hearing aids using clinical prefitting speech measures: an evidence-based review. J Am Acad Audiol. 2005 Jul-Aug. 16(7):439-47. [Medline].

  5. Mueller HG, Bentler RA. Fitting hearing aids using clinical measures of loudness discomfort levels: an evidence-based review of effectiveness. J Am Acad Audiol. 2005 Jul-Aug. 16(7):461-72. [Medline].

  6. Nabelek AK, Tucker FM, Letowski TR. Toleration of background noises: relationship with patterns of hearing aid use by elderly persons. J Speech Hear Res. 1991 Jun. 34 (3):679-85. [Medline].

  7. Nabelek AK, Freyaldenhoven MC, Tampas JW, Burchfiel SB, Muenchen RA. Acceptable noise level as a predictor of hearing aid use. J Am Acad Audiol. 2006 Oct. 17 (9):626-39. [Medline].

  8. Bentler RA, Palmer C, Dittberner AB. Hearing-in-Noise: comparison of listeners with normal and (aided) impaired hearing. J Am Acad Audiol. 2004 Mar. 15(3):216-25. [Medline].

  9. Bilger RC, Nuetzel JM, Rabinowitz WM, Rzeczkowski C. Standardization of a test of speech perception in noise. J Speech Hear Res. 1984 Mar. 27(1):32-48. [Medline].

  10. Carhart R, Porter LS. Audiometric configuration and prediction of threshold for spondees. J Speech Hear Res. 1971 Sep. 14(3):486-95. [Medline].

  11. Chaiklin JB, Ventry IM. Spondee Threshold measurement: A comparison of 2- and 5-DB methods. J Speech Hear Disord. 1964 Feb. 29:47-59. [Medline].

  12. Dhar S, Humes LE, Calandruccio L, Barlow NN, Hipskind N. Predictability of speech-in-noise performance from real ear measures of directional hearing AIDS. Ear Hear. 2004 Apr. 25(2):147-58. [Medline].

  13. Dirks DD, Kamm C. Psychometric functions for loudness discomfort and most comfortable loudness levels. J Speech Hear Res. 1976 Dec. 19(4):613-27. [Medline].

  14. Egan JP. Articulation testing methods. Laryngoscope. 1948. 58:955-91.

  15. Elliot L, Katz D. Development of a New Children's Test of Speech Discrimination. St. Louis, Mo: 1980.

  16. Haskins H. A phonetically balanced test of speech discrimination for children. Chicago, Ill: Northwestern University; 1949.

  17. Hirsh IJ, Davis H, Silverman SR, Reynolds EG, Eldert E, Benson RW. Development of materials for speech audiometry. J Speech Hear Disord. 1952 Sep. 17(3):321-37. [Medline].

  18. Jerger J, Hayes D. Diagnostic speech audiometry. Arch Otolaryngol. 1977 Apr. 103(4):216-22. [Medline].

  19. Jerger J, Speaks C, Trammell JL. A new approach to speech audiometry. J Speech Hear Disord. 1968 Nov. 33(4):318-28. [Medline].

  20. Jerger S, Lewis S, Hawkins J, Jerger J. Pediatric speech intelligibility test. I. Generation of test materials. Int J Pediatr Otorhinolaryngol. 1980 Sep. 2(3):217-30. [Medline].

  21. Killion M. Etymotic Research. SIN Test. A Speech in Noise Test. Elk Grove Village, Ill: 1997.

  22. Killion MC, Niquette PA, Gudmundsen GI, Revit LJ, Banerjee S. Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. J Acoust Soc Am. 2004 Oct. 116(4 Pt 1):2395-405. [Medline].

  23. Kopra LL, Blosser D. Effects of method of measurement on most comfortable loudness level for speech. J Speech Hear Res. 1968 Sep. 11(3):497-508. [Medline].

  24. Lezak RJ. Determination of an intensity level to obtain PB MAX. Laryngoscope. 1963 Mar. 73:267-74. [Medline].

  25. Martin FN, Armstrong TW, Champlin CA. A survey of audiological practices in the United States in 1992. Am J Audiol. 1994. 3:20-6.

  26. Martin FN, Dowdy LK. A modified spondee threshold procedure. J Aud Res. 1986 Apr. 26(2):115-9. [Medline].

  27. McArdle R, Wilson R. Speech perception in noise: The basics. Perspectives on Hearing and Hearing Disorders: Research and Diagnostics. 2009. 13:4-13.

  28. McArdle R, Wilson RH. Selecting Speech Tests to Measure Auditory Function. The ASHA Leader. September 2008. [Full Text].

  29. McArdle RA, Wilson RH. Homogeneity of the 18 QuickSIN lists. J Am Acad Audiol. 2006 Mar. 17(3):157-67. [Medline].

  30. Nilsson M, Soli SD, Sullivan JA. Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am. 1994 Feb. 95(2):1085-99. [Medline].

  31. Peterson GE, Lehiste I. Revised CNC lists for auditory tests. J Speech Hear Disord. 1962 Feb. 27:62-70. [Medline].

  32. Ross M, Lerman J. A picture identification test for hearing-impaired children. J Speech Hear Res. 1970 Mar. 13(1):44-53. [Medline].

  33. Sanderson-Leepa ME, Rintelmann WF. Articulation functions and test-retest performance of normal-hearing children on three speech discrimination tests: WIPI, PBK-50, and NU Auditory Test No. 6. J Speech Hear Disord. 1976 Nov. 41(4):503-19. [Medline].

  34. Silverman SR, Hirsh IJ. Problems relating to the use of speech in clinical audiometry. Ann Otol Rhinol Laryngol. 1955. 64:1234-44.

  35. Tillman T, Carhart R. An Expanded Test for Speech Discrimination Utilizing CNC Monosyllabic Words, Northwestern University Auditory Test No 6. Technical Report, SAM-TR-66-55. Brooks AFB, Tex: USAF School of Aerospace Medicine; 1966.

  36. Tillman TW, Olsen WD. Speech audiometry. Jerger J, ed. Modern Developments in Audiology. 2nd ed. 1973. 37-74.

  37. von Hapsburg D, Champlin CA, Shetty SR. Reception thresholds for sentences in bilingual (Spanish/English) and monolingual (English) listeners. J Am Acad Audiol. 2004 Jan. 15(1):88-98. [Medline].

  38. Wilson RH. Development of a speech-in-multitalker-babble paradigm to assess word-recognition performance. J Am Acad Audiol. 2003 Nov. 14(9):453-70. [Medline].

  39. Wilson RH, Burks CA. Use of 35 words for evaluation of hearing loss in signal-to-babble ratio: A clinic protocol. J Rehabil Res Dev. 2005 Nov-Dec. 42(6):839-52. [Medline].

 