Hearing loss is more prevalent than diabetes mellitus, myelomeningocele, all pediatric cancers, and numerous other medical conditions.  However, medical professionals learn little about hearing impairment, about how to advise parents of children who are deaf or hard of hearing, or about the special considerations needed in the care of children with hearing loss.
In the past decade, recommendations for universal neonatal hearing screening resulted in numerous articles regarding the tests, the efficacy of testing, the role of the audiologist in amplification, and the importance of early intervention programs. [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] The role of the primary care physician cannot be overemphasized. In many instances, the otolaryngologist develops a long-term relationship with patients and their families, caring for the patients through their spectrum of development, especially if the otolaryngologist is part of a cochlear implant program. In addition to the surgeon, most cochlear implant teams include audiologists, speech therapists, and, sometimes, social workers. These teams can be very helpful to deaf children, their families, and their primary care physicians.
Pediatricians play a crucial role in providing referrals to audiologists, otolaryngologists, and special programs. To do so, they must understand the nature of hearing loss and the equipment that can improve auditory reception, the linguistic and social development of children who have hearing impairment, and the educational and linguistic options available to children who are deaf or hard of hearing.
The goals must always be to integrate the child into the family and into society and to enable the growth and development of a healthy, confident child who is deaf or hard of hearing. To meet these goals, clinicians should use whichever communication strategy and equipment that is best suited for the individual child and his or her family.
Relevant anatomy and physiology
Sound waves arrive at the auricle and are channeled through the external auditory canal to the tympanic membrane. When they strike the tympanic membrane, the waves cause it to vibrate, setting off a chain of vibrations along the ossicles (malleus, incus, and stapes) to the membrane of the oval window at the entrance to the cochlea. This process amplifies environmental sound approximately 20-fold.
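As a rough illustration of what a 20-fold pressure amplification means on the logarithmic decibel scale, the following sketch converts an amplification factor into decibels (the function name is hypothetical and chosen for illustration):

```python
import math

def pressure_gain_db(amplification_factor: float) -> float:
    """Convert a sound-pressure amplification factor to a gain in decibels.

    Decibels are logarithmic for pressure: dB = 20 * log10(p_out / p_in).
    """
    return 20 * math.log10(amplification_factor)

# A roughly 20-fold middle-ear amplification corresponds to about 26 dB of gain.
print(round(pressure_gain_db(20), 1))  # → 26.0
```

This is why an apparently large mechanical amplification corresponds to a modest-sounding decibel figure: each 20 dB represents a 10-fold change in sound pressure.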
The cochlea is the end organ of hearing and is shaped like a snail shell with two and one half turns. Inside, two membranes longitudinally divide the cochlea into three sections: the scala tympani, the scala vestibuli, and the scala media. All three are filled with fluids of differing ion concentrations (similar to intracellular and extracellular constituents).
Along one of the membranes in the scala media, or cochlear duct, lie the internal and external hair cells. Movement of the stapes on the oval window creates a wave, or vibration, in the perilymph of the cochlea. This fluid movement displaces the hair cells, opening ion channels and triggering an action potential that causes the cochlear nerve to fire to the brain.
Thousands of nerves representing more than 20,000 frequencies are located along the length of the cochlea; these nerves account for the hearing range. The microscopic nerves culminate in the cochlear portion of the eighth cranial nerve. The location of the vibration in the cochlea correlates with the frequency of the original pitch. Low-frequency sounds are near the apex, and high-frequency sounds are near the base.
Types of hearing loss
Conductive hearing loss (CHL) results from anything that decreases the transmission of sound from the outside world to the cochlea. Causes include abnormal formation of the auricle or helix, impaction of cerumen in the ear canal, effusions in the middle ear, or dysfunction or fixation of the ossicular chain. Otosclerosis is one of the most common examples.
An important cause of CHL is a cholesteatoma, a locally destructive but benign growth. Other neoplasms can affect the middle ear as well. Examples include glomus tympanicum or glomus jugulare, schwannomas of the facial nerve, and hemangiomas. Dehiscence of the roof of the middle ear (tegmen mastoideum), such as is caused by an encephalocele, can result in CHL. In CHL, sounds perceived by the brain are diminished but are generally not distorted.
Sensorineural hearing loss (SNHL) results from disruptions in sound transduction at or beyond the cochlea, whether from destruction of hair cells within the cochlea or from damage to the eighth cranial nerve. Sounds perceived by the brain are both diminished and distorted. The degree of distortion is independent of the degree of hearing loss (eg, mild hearing loss with very poor speech discrimination is possible).
Auditory dyssynchrony should be considered in the setting of an absent auditory brainstem response (ABR) and absent middle-ear muscle reflexes with normal otoacoustic emissions or normal cochlear microphonics.
Mixed hearing loss has components of both CHL and SNHL.
Categories of hearing loss
Regardless of the type, the American National Standards Institute defines hearing loss in terms of decibels (dB) lost, as follows:
Slight hearing loss - 16-25 dB
Mild hearing loss - 26-40 dB
Moderate hearing loss - 41-55 dB
Moderately severe hearing loss - 56-70 dB
Severe hearing loss - 71-90 dB
Profound hearing loss - More than 90 dB
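The category boundaries above can be expressed as a simple threshold lookup. The sketch below assumes the ANSI ranges listed in the text, treats thresholds of 15 dB or less as normal, and includes the standard "moderately severe" band (56-70 dB) that sits between moderate and severe; the function name is hypothetical:

```python
def classify_hearing_loss(threshold_db: float) -> str:
    """Map a pure-tone threshold (dB HL) to an ANSI-style severity category.

    Boundaries follow the ranges listed in the text; 15 dB or less
    is treated as normal hearing.
    """
    if threshold_db <= 15:
        return "normal"
    if threshold_db <= 25:
        return "slight"
    if threshold_db <= 40:
        return "mild"
    if threshold_db <= 55:
        return "moderate"
    if threshold_db <= 70:
        return "moderately severe"  # the 56-70 dB band between moderate and severe
    if threshold_db <= 90:
        return "severe"
    return "profound"

print(classify_hearing_loss(45))  # → moderate
```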
In 2014, the American College of Medical Genetics and Genomics published a guideline that provided information about the frequency, causes, and presentations of hearing loss and suggested approaches to clinical evaluation aimed at identifying an etiologic diagnosis of hearing loss. 
Most sources cite genetic causes as accounting for at least 50% of hearing loss. [1, 15, 16, 17] These can be divided into syndromic and nonsyndromic types. As with all genetic syndromes, genetic causes of hearing loss may be autosomal dominant (AD), autosomal recessive (AR), X-linked, mitochondrial, or sporadic.
Nonsyndromic deafness accounts for slightly more than two thirds of all cases of genetic deafness. It probably accounts for most cases classified as unknown. Children with nonsyndromic deafness are deaf or hard of hearing; however, they have no other physical abnormalities, no particular risk to other organ systems, and no increased risk of mental deficiency. Some children have a history of deafness in a close or distant family member. Others have new mutations or an AR gene with no previously identified affected family member. The histories of subsequent siblings and progeny may help to distinguish a genetic cause from developmental arrest or a prenatal insult.
Exciting developments in genetic mapping have revealed dozens of loci at which abnormal genes lead to deafness. These loci are classified according to their mode of inheritance: autosomal dominant (DFNA1 through DFNA48), autosomal recessive (DFNB1 through DFNB67), X-linked recessive (DFN1 through DFN8), or mitochondrial (at least 5 loci identified, including 12S rRNA and tRNA-Ser[UCN]).
Mutations in these genes result in a functional or structural defect (eg, abnormal collagen in the basilar membrane or a structural defect in a membrane-gating protein, such as connexin 26). In some, one or more specific mutations in the DNA sequence have been identified. One example is DFNA44, caused by mutations in the gene CCDC50, which encodes Ymer, a cytoplasmic protein that inhibits the down-regulation of the epidermal growth factor receptor and affects the normal development of the structural organization of the pillar cells of the inner ear. [19, 20] Several of the genes exhibit variable penetrance, so the presence of the gene does not correlate directly with the degree of hearing loss.
Syndromic deafness accounts for the remaining cases of genetic deafness, with more than 300 syndromes described. Some syndromes have a particular inheritance pattern (eg, AD for Waardenburg syndrome and Gernet syndrome, AR for Jervell and Lange-Nielsen syndrome and Winter syndrome, X-linked for Alport syndrome and Rosenberg syndrome). Others are sporadic (eg, cat-eye syndrome, Turner syndrome, Klinefelter syndrome). Physical findings usually help to indicate the presence of a particular syndrome; however, children with some syndromes develop the associated physical findings late in childhood. Other children present early in life with either deafness or the sequelae of a biochemical or metabolic derangement.
As noted above, syndromes may affect any single organ or several organ systems. Just as the genetics of nonsyndromic deafness have advanced, more is known about the genetics of syndromes associated with deafness. For example, Waardenburg syndrome type IV is caused by an abnormality of endothelin 3 (EDN3), which results in abnormalities of ligand molecules affecting striatal intermediate, GI, and pigment cells. 
A small sample of syndromes associated with deafness is summarized in Table 1 below. A few may be familiar, although many are not; most are fairly uncommon. For many of these syndromes, good data about their actual prevalences are difficult to find. The first few syndromes listed for each organ or system are most commonly known. They may be most widely recognized because their associated findings or illnesses may result in high morbidity or mortality rates, because the physical stigmata are classic and therefore make the syndrome easily identifiable, or because they are overrepresented in test questions on pediatric examinations.
Table 1. Some Syndromes Associated with Deafness
| Organ or System | Syndrome | Inheritance Pattern | Hearing Loss | Obvious Physical Abnormalities |
| --- | --- | --- | --- | --- |
| External ear | DiGeorge sequelae | Sporadic | CHL | Yes |
| Cardiac | Coloboma, heart disease, atresia choanae, retarded growth, and ear anomalies (CHARGE) syndrome | AD, AR, X-linked, sporadic | SNHL, mixed | Yes |
| Cardiac | Jervell and Lange-Nielsen syndrome | AR | SNHL | No |
| Renal | Alport syndrome | AD, AR, X-linked | SNHL | Yes or no |
| Renal | Branchio-oto-renal syndrome | AD | SNHL, CHL | Yes |
| Mental retardation | Noonan syndrome | Sporadic | SNHL | Yes |
| Mental retardation | Cockayne syndrome, type I | AR | SNHL | Yes |
| Mental retardation | Gustavson syndrome | X-linked | SNHL | Yes |
| Mental retardation | Lentigines, ECG abnormalities, ocular hypertelorism, pulmonary stenosis, abnormal genitalia, retardation of growth, and deafness (LEOPARD) syndrome | AD | SNHL | Yes |
| Mental retardation | Black locks with albinism and deafness (BADS) syndrome | AR | SNHL | Yes |
| Endocrine and/or metabolic | Pendred syndrome | AR | SNHL | Yes or no |
| Endocrine and/or metabolic | Wolfram syndrome | AR | SNHL | Yes or no |
| Endocrine and/or metabolic | Kallmann syndrome | AD, AR, X-linked | SNHL, mixed | Yes or no |
| Facial | Goldenhar syndrome | AD, AR | CHL, SNHL | Yes |
| Facial | Frontometaphyseal dysplasia | X-linked | Mixed | Yes |
| Ophthalmologic | Usher syndrome | AR | SNHL | Yes or no |
| Ophthalmologic | Harboyan syndrome | AR | SNHL | Yes or no |
| Ophthalmologic | Jensen syndrome | X-linked | SNHL | No |
| Orthopedic | Klippel-Feil sequelae | Sporadic | CHL, SNHL | Yes |
| Orthopedic | Stickler syndrome | AD | CHL, SNHL, mixed | Yes |
| Orthopedic | Craniometaphyseal dysplasia | AD, AR | CHL | Yes |
| Orthopedic | Oto-spondylo-megaepiphyseal dysplasia (OSMED) syndrome | AR | SNHL | Yes |
Prenatal causes account for 5-10% of hearing losses. Congenital infections (eg, cytomegalovirus [CMV] infection, herpes, rubella, syphilis, toxoplasmosis, varicella) can result in SNHL. Fetal exposure to teratogens (eg, methyl mercury, retinoic acid, thalidomide, trimethadione) may also result in SNHL. Most of these prenatal insults result in physical abnormalities, which should prompt the clinician to recognize the diagnosis and perform a confirmatory evaluation.
However, the physical findings sometimes are subtle or are not apparent until the child is older (eg, Hutchinson teeth). In these cases, the identification of hearing loss, or even the later appearance of developmental delays, may prompt the evaluation that leads to the diagnosis. Even if children with a known prenatal exposure pass the neonatal screen, careful follow-up of their hearing is necessary.
Perinatal causes are responsible for 5-15% of hearing losses. A history of prematurity, low birth weight, anoxia and/or low Apgar scores (see the Apgar Score calculator), hyperbilirubinemia, or sepsis should prompt an evaluation of hearing because these conditions may also result in SNHL.
About 10-20% of hearing losses are due to postnatal causes. Childhood infections, such as meningitis or mumps, may result in SNHL. Treatment with ototoxic medications, such as aminoglycosides or furosemide, also can lead to SNHL. Otitis media or major head injury may cause SNHL or CHL.
In about 20-30% of deaf children, no certain etiology is identified. Their hearing losses likely result from maldevelopment of the ear or neurologic system, whether a developmental accident or the result of an undiagnosed infection or exposure to a teratogenic agent. However, many are likely due to an undiagnosed genetic defect representing a new mutation or a recessive trait.
United States and international statistics
Hearing loss occurs in approximately 5-10 per 1000 children in the United States. Roughly 1-3 in 1000 children are born with profound hearing loss, and 3-5 in 1000 are born with mild-to-moderate hearing loss that may affect language acquisition unless hearing, language, or both are aided.  The prevalence of hearing loss requiring intervention among graduates from neonatal intensive care units (NICUs) is 1-4%. Acquired hearing loss in children may add another 10-20% to these numbers. 
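The per-1000 rates above translate directly into expected case counts for a birth cohort. The sketch below performs that arithmetic for a hypothetical cohort of 100,000 newborns, using the figures from the text (the function name and cohort size are illustrative):

```python
def expected_cases(cohort_size: int, rate_low: float, rate_high: float) -> tuple:
    """Translate a prevalence range given per 1000 children into an
    expected range of case counts for a cohort of the given size."""
    return (cohort_size * rate_low / 1000, cohort_size * rate_high / 1000)

# For a birth cohort of 100,000, using the rates cited in the text:
print(expected_cases(100_000, 1, 3))  # profound loss → (100.0, 300.0)
print(expected_cases(100_000, 3, 5))  # mild-to-moderate loss → (300.0, 500.0)
```

Framed this way, even the lower bounds imply hundreds of affected children per large birth cohort, which is the rationale for universal rather than risk-based neonatal screening.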
The prevalence of hearing loss in adolescents aged 12-19 years has increased in the United States compared with the previous decade.  This increase was approximately one third greater from 2005-2006 than from 1988-1994. Interestingly, significant hearing loss (≥25 dB) was particularly increased, to the point where approximately 1 in 20 adolescents has this type of hearing loss. Noise-induced hearing loss contributes substantially to the increased incidence of hearing loss in adolescents.
Data from the United States Census show that almost 3% of the population in the workforce reports having some hearing loss, including CHL, SNHL, or mixed loss.
Worldwide, SNHL occurs in 9-27 per 1000 children.
Age- and sex-related demographics
Most hearing loss in children is congenital or acquired perinatally.  However, hearing loss may occur at any age. Approximately 10-20% of all cases of deafness are acquired postnatally, although some genetic causes of deafness result in hearing loss that begins during childhood or adolescence or is slowly progressive and therefore diagnosed in childhood or adolescence.
No sex predilection is known. Some hereditary causes of deafness or acquired deafness may occur more frequently in one sex than the other. However, the overall prevalence of deafness is equal in male and female individuals.
Assisting and educating parents
Remember, although most deaf children are otherwise healthy, a parent's response to having a child with hearing impairment is generally the same as that of a parent whose child has a physical disability or chronic medical condition. Some deaf children have both hearing loss and other disabilities or medical conditions. Help parents to identify available resources (such as those listed at the end of this section), or refer them to a social worker or counselor who can provide specialized assistance.
In some states, social workers assigned to families with deaf or hard-of-hearing children are required to have a background in disabilities or hearing loss. Many states assign the next social worker available, in which case the worker has no more ability to determine the best form of communication, the best school environment, or the best resources than the primary care provider.
Information available on the Internet may be ranked strictly by geographic location, by the number of hits that pushes a site to the top of the list, or alphabetically or by the manner in which the resource is structured. If possible, the patient's family should be directed toward people with expertise in the field and toward sources without obvious bias.
Parents must understand that deafness is not an all-or-nothing categorization and that hard-of-hearing is not defined simply by a hearing loss in the middle ranges (eg, 30-60 dB). Children with CHL are likely to develop good use of speech with appropriate amplification. Children with SNHL are more variable. In SNHL, the sound itself may be distorted, which means that amplification may not improve the child's language comprehension in any meaningful way. A helpful analogy is that of a radio signal that is mostly static; it remains unintelligible no matter how loud it is. Therefore, children with moderate SNHL (41-55 dB) may gain only minimal improvement in language comprehension from amplification, although they may benefit substantially from an improved ability to hear environmental cues.
Communication is the most important loss in deafness. Communication is necessary for socialization and integration into the family and into society. All interventions must have the goal of optimizing the child's ability to successfully communicate and interact socially.
Lip-reading (oralism) and sign language
Considerable debate continues regarding the choice of lip-reading (oralism) versus sign language. Lip-reading may be taught alone or supplemented with cued speech. In the United States, sign language can take the form of American Sign Language (ASL) or Signed English (SE), which is sometimes distinguished as Seeing Essential English (SEE-1) and Signing Exact English (SEE-2). Pidgin Signed English (PSE) is not used educationally; functionally, it combines ASL signs with a roughly English-based word order but, like any pidgin language, follows neither proper English grammar nor proper ASL grammar.
Parents must be able and willing to accept and then participate in the language that they chose for their child. They should be vigilant and remember that school failures and behavior problems are not inherent to deafness but frequently result from a frustrated child who has no language, no ability to communicate, and no ability to connect with others.
Advantages and disadvantages of lip-reading
The greatest advantage of lip-reading is that parents and society are required to make only minimal adjustments for the deaf individual. Instead, deaf children must learn to speak for themselves and to understand the speech of others.
The disadvantages are numerous. First, formal training can begin only at school age. Therefore, language acquisition is delayed beyond the optimal neurobiologic window for language acquisition, between approximately age 3-5 years (although some evidence indicates it may be as young as 9-18 mo).
Second, almost half of the consonants in English appear similar when spoken (eg, d-t, f-v, g-k, b-p-m). That is, they look identical on the lips but are distinguishable to hearing people when spoken. Other sounds appear the same, such as /ch/, /j/, and /sh/, making chew, Jew, and shoe indistinguishable from one another (this includes the "soft g" as in "George" and all variations of /sh/, as in fish, Charlotte, and nation). Many vowel sounds appear similar, especially compared with their written equivalents. For example, the sound /oo/ may be written as "to," "too," "threw," "through," "due," or "shoe."
Third, lip-reading is hard to master and tiring to perform. To appreciate the difficulty, imagine reading this page with no spaces between the words and with only periods as punctuation. Then, imagine the text passing in front of you as if on an electronic billboard, with every /p/ possibly a /b/ or /m/, every /d/ possibly a /t/, every /ch/ possibly a /j/ or /sh/, and so on.
Fourth, lighting, distance, speech impediments, accents, and foreign objects or motion can make lip-reading more difficult (eg, faces silhouetted by light, food, pencils, fingers, mustaches, turning the head).
As a result, typical lip-readers understand only one third of a one-to-one conversation. The best lip-readers understand about two thirds. In general, children with more hearing and better speech discrimination than others are most successful with lip-reading.
Finally, the lip-reader cannot localize a speaker. During a one-to-one conversation, there is only one set of lips to watch. Even a third person makes the conversation difficult: when the speaker's lips stop moving, it could be a pause, or the other person may have started speaking. The lip-reader must guess whether to continue looking at the speaker's lips, risking missing the opening words of the second party (those that best establish context, such as agreement or disagreement with the first speaker); looking at the other person's lips risks missing the continued comments of the first speaker.
In a group setting, the lip-reader may miss several entire comments by the time his or her eyes identify the speaker. Having more hearing is helpful in group situations. Although hearing aids help to localize sound, they also amplify each voice equally, which may decrease the ability to accurately use the sounds that are heard.
Deaf and hard-of-hearing people who use either lip-reading or cued speech must learn to speak for themselves. They have varying success, but many can make themselves understood in most situations. The challenge is simple. They must learn to create sounds they cannot hear. A simple analogy would be teaching a blind person to paint in watercolors. It would be possible for blind people to make the design by embossing the paper with an image they can feel, and then fill in the areas (like painting by numbers) with colors they cannot truly imagine. Deaf people use surrogates for sound to produce noises they cannot imagine. Traditional “sound surrogates” would be making a feather blow with plosive sounds, or feeling the throat vibrate to understand when the larynx is engaged.
More modern technologies include matching speech patterns on oscilloscopes or similar equipment, or “green light” when the deaf person’s spoken word matches the computer’s. Many hearing people cannot even imitate another regional or foreign accent, and few people speaking a new language ever speak without a residual accent from their native language, despite the fact that they can hear the new language and compare their voice with it. Just as some hearing people go to a specialist to learn to speak with an accent or to ablate their native accent, some deaf people who already have learned to speak will return to speech therapists every few years to maintain their speech quality.
Puberty and growth, changes in their vocal cords due to maturation and use (or misuse), and changes in their mouth and shape of their oropharynx occur throughout their lives, altering their ability to produce speech sounds. Without being able to hear themselves and self-correct, they seek professional help to do so.
Cued speech aids lip-reading because hand shapes are placed near the mouth. These shapes help in discriminating sounds that are difficult to distinguish by observing the lips alone.
Parents must learn how to cue. The technique is similar to shorthand in that sounds, not letters, are cued. For example, the /sh/ sound in fish, Charlotte, and nation is cued in the same way in each word, with an "L" or "gun" hand shape placed by the chin. In contrast, /p/, /b/, and /m/, which are identical on the lips, are cued by the chin with 1 finger, 4 fingers, or all 5 fingers held flat and close together, respectively.
Because cued speech is the language of neither the parents nor society, cued-speech interpreters may be required in situations such as interviews or public events. These interpreters are harder to locate than sign-language or oral interpreters.
Similar to instruction in lip-reading, instruction in cued speech cannot begin at an early age; therefore, language acquisition is delayed.
Systems of manual and visual sign language
In the United States, manual and visual signing systems include SEE and ASL.
SEE is visually encoded English, using or adapting the signs of ASL and imposing the exact order of the spoken word on the signs. SEE invents suffixes (dog vs dogs), conjugations (see vs sees, -ing, -ed), and signs (the) that are not necessary in the grammar of ASL. These make the signs and sign order identical to English. SEE is lengthy and tiring to produce. Because it takes so long, most people end up omitting or changing some of the signs, effectively using PSE. Therefore, children do not benefit from the English-grammar aspect of SEE as much as they might; they learn neither proper English grammar nor proper ASL grammar.
ASL has a unique grammar. It requires fewer signs than SEE does to complete most thoughts because it incorporates space and time into the motion of signs in a way that spoken language cannot. It is efficient and beautiful to watch. However, the child must grow up being bilingual. The grammar used for ASL must be translated into English in order to write.
An advantage of ASL or SEE is that instruction may begin immediately when hearing loss is diagnosed. In fact, children of parents who are deaf learn to sign as their first language, and they begin to sign babble as early as 6-9 months, the age at which hearing children normally begin to babble. In addition, signs are clearly visible at a distance, and signing is the preferred language of the Deaf community. Research suggests that a strong linguistic background is as important to reading and language development as the actual language itself. Therefore, learning ASL may aid the development of English language skills rather than confuse them. Remember, it was not too long ago that educators told immigrant parents not to confuse their children with their foreign languages and to focus on English. Now it is accepted that children can easily learn several languages simultaneously, and doing so may enhance their ability to acquire languages later in life.
The biggest disadvantage is that sign language is not the language of the hearing world; therefore, interpreters are necessary. It is usually not the language of the family, and many families are intimidated by having to learn a new language to communicate with their child. In fact, 20% of children who are deaf who sign have no family members who sign, and 40% have only 1 family member who signs. Parents must be reminded that as long as they are 1 sign ahead of their child in early childhood, they know all they need to maintain communication and linguistic development. With only a few signs, love and discipline can be clearly expressed.
Total communication with sign language and voice
Total communication allows children with residual hearing to benefit from supplemental auditory information. It may also help students in lip-reading because signs with meaning can be associated with movements of the mouth. (For example, mouth the words "I'm going to bed" to your spouse at bedtime; he or she probably understands because the phrase is short and the context is clear; if you said it at a baseball game, your spouse initially would assume he or she had misunderstood.)
The main disadvantage is that speaking English while signing ASL at the same time is almost impossible. As a result, neither of the grammars is effectively or consistently applied.
School placement and schools for children with hearing impairment
Educational placements largely depend on the choice of language.
Children who are learning to lip-read attend an oral school where lip-reading is taught. A portion of the day is spent in lip-reading instruction, and other subjects cannot truly be learned until adequate language is established. As children age, they can be placed in a regular hearing classroom ("mainstreamed") if their lip-reading and vocal skills are sufficient. Their experience there depends on the teacher's ability to accommodate the needs of the student (eg, by not spending much time facing the blackboard and away from the student). Oral interpreters may also be provided to the student in a mainstream classroom. Many children who are deaf and who have succeeded to this point do well in school with their peers.
The experience for students using Cued Speech is not dissimilar. Once their language skills are established, they may stay in a program that uses Cued Speech, or they may enter a mainstream classroom with a Cued Speech interpreter.
Many children who are entering a school where sign language is used have already learned some signs at home or in early intervention. The classroom may consist of deaf children of deaf adults whose first language is ASL and children with minimal sign language skills. Because sign language is visual, young children who are immersed in a signing setting rapidly acquire the signs for objects, people, and, ultimately, grammar. Shortly thereafter, formal educational programming can begin. The process is analogous to most kindergarten classes, which tend to be more social than academic, and language acquisition occurs in a similar fashion in many bilingual school programs for foreign languages. The use of ASL or SEE in schools is debated by educators, but it is of relatively little importance early in the child’s education, especially when the goal is to foster the development of language, communication, and social skills.
Another debate concerns the setting of deaf or signing programs: residential schools versus mainstream programs. Placing children who are deaf or hard of hearing with hearing children in the least restrictive environment (mainstreaming) may not be as successful for deaf children as it is for children with other disabilities. Many so-called mainstream classrooms are isolated from those for hearing children, and the children who are deaf are instead grouped with children who have learning disabilities or mental retardation. Even in an integrated classroom, instruction happens through the interpreter.
When children are engaging in nonacademic subjects or when no interpreter is present, communication between deaf students and their classmates is limited. Children who have acquired only minimal sign skills will not understand the interpreter, and they have minimal opportunity to practice their signing with their classmates who do not sign. Children who have more advanced sign language may do much better in a hearing classroom from the academic perspective, but their ability to socialize with other students or participate in extracurricular activities is limited.
In some cities, there are deaf programs at hearing schools. Some of these programs put deaf children of varying ages together with a signing teacher. The educational model is more like independent study, with each child working on his or her own assignments; however, the language of instruction is sign, and the children interact with signing classmates. Other programs are able to assemble sufficient numbers of deaf students to form entire classes of deaf students with a signing teacher on the campus of a hearing school. Depending on the program's resources, students usually are able to participate in extracurricular activities.
Residential deaf schools are on the decline because of the recent desire to keep deaf children at home with their families. Relatively few independent day schools for the deaf exist, no more than 1 or 2 in each state. At a deaf school, where everyone signs, students can participate in many extracurricular and academic activities, such as debate team, football, and cheerleading. Because most deaf children live in a hearing household in which they are linguistically isolated, a school environment that is completely linguistically accessible promotes self-esteem and social skills. The children are able to use sign language all day and all night. They engage and interact with peers and deaf children of different ages. They can develop leadership skills in a way that they may not be able to while living at home. Most deaf adults who attended such residential schools look back on that experience as being among the best times of their lives.
Choice of language and school placement
The debates rage on, and all parties can be vehement in their views. No one opinion is right, and little literature of adequate quality strongly supports any particular standpoint. Few well-performed studies exist, and the results of most studies cannot be generalized to the majority of children who are deaf and hard of hearing. Therefore, the best approach is to make decisions individually, meeting the needs of the particular child while taking into account the parents' beliefs and resources and the programs available to them. The goal of pediatricians should be continual monitoring of the child's progress; if the child is not succeeding in one environment or with one choice, suggest a trial of a different one.
Devices to aid children with hearing impairment
Young children need only hearing aids. As they grow, the family should be encouraged to obtain devices such as strobe lights connected to doorbells, timers, alarm clocks, and fire alarms. Telecommunication Devices for the Deaf (TDDs) and teletypewriters (TTYs) are machines that enable deaf people to use the phone. Computers with modems or video/webcam, cell phones with text messaging or instant messaging, and other hand-held devices all permit children to communicate using modern technologies.
These and other aids help children with hearing impairment to develop a sense of independence and accomplishment, just as hearing children do when they complete tasks such as waking up for school using their own alarm clock or baking a cake for the first time.
Schools should also use FM amplification systems to transmit the teacher's voice to a small headphone speaker the child wears just behind the hearing aid. This system amplifies the teacher's voice over extraneous noise.
All new televisions are equipped with closed captioning, which decodes the captioning of dialogue and action provided with most television shows, videotapes, and DVDs. This not only makes television accessible but also promotes reading skills in deaf and hearing children alike.
3417 Volta Place, NW
Washington, DC 20007
Voice: (202) 337-5220
TTY: (202) 337-5221
11730 Plaza America Drive, Suite 300
McLean, VA 22102
Voice: (800) AAA-2336, (703) 790-8466
Fax: (703) 790-8631
ADARA National Office
PO Box 480
Myersville, MD 21773
8 South Michigan Avenue, Suite 814
Chicago, IL 60603-4539
Voice: (312) 726-9670
Fax: (312) 726-9695
National Center for Law and the Deaf
800 Florida Avenue, NE
Washington, DC 20002
Hearing Loss Association of America (HLAA, formerly Self Help for Hard of Hearing People [SHHH])
7910 Woodmont Ave, Suite 1200
Bethesda, MD 20814
Phone: (301) 657-2248
141 Middle Neck Road
Sands Point, NY 11050
Voice, TTY: (516) 944-8900
House Ear Institute (HEI)
2100 West Third Street
Los Angeles, CA 90057
Voice: (213) 483-4431
TTD: (213) 483-2642
Fax: (213) 483-8789
8630 Fenton Street, Suite 820
Silver Spring, MD 20910-3819
Voice: (301) 587-1788
TTY: (301) 587-1789
Fax: (301) 587-1791
5619 McLean Drive
Bethesda, MD 20814-1021
Voice, TTY: (800) 459-3529, (301) 915-8009
National Fraternal Society of the Deaf
1188 South Sixth Street
Springfield, IL 62703
Voice: (217) 789-7429
TTY: (217) 789-7438
400 Maryland Avenue, SW
Washington, DC 20202-7100
Voice: (202) 245-7468
Parmly Hearing Institute
6525 North Sheridan Road
Chicago, IL 60626
Voice: (773) 508-2710
Fax: (773) 508-2719
E-mail: firstname.lastname@example.org (Richard R. Fay, director)
Steven Schumacher, RAD Secretary
9804 Walker House Road, Suite 4
Montgomery Village, MD 20886-0506
333 Commerce Street
Alexandria, VA 22314
Phone: (703) 838-0030
8630 Fenton Street, Suite 604
Silver Spring, MD 20910
Voice: (301) 589-3786
TTY: (301) 589-3006
Fax: (301) 589-3797
Triological Society (The American Laryngological, Rhinological, and Otological Society, Inc.)
555 North 30th Street
Omaha, NE 68131
Voice: (402) 346-5500
Fax: (402) 346-5300
USA Deaf Sports Federation (USADSF, formerly American Athletic Association of the Deaf [AAAD])
102 North Krohn Place
Sioux Falls, SD 57103-1800
Voice: (605) 367-5760
TTY: (605) 367-5761
PO Box 3211
Quartz Hill, CA 93586
Videophone: (661) 943-8879
Additional patient education resources