Auditory brainstem response

The auditory brainstem response (ABR) is an auditory evoked potential extracted from ongoing electrical activity in the brain and recorded via electrodes placed on the scalp. The resulting recording is a series of vertex-positive waves, of which waves I through V are evaluated. These waves, labeled with Roman numerals in the Jewett and Williston convention, occur in the first 10 milliseconds after the onset of an auditory stimulus. The ABR is considered an exogenous response because it is dependent upon external factors.[1][2][3]

The auditory structures that generate the auditory brainstem response are believed to be as follows:[2][4]

  • Wave I – generated by the peripheral portion of cranial nerve VIII
  • Wave II – generated by the central portion of cranial nerve VIII
  • Wave III – generated by the cochlear nucleus
  • Wave IV – generated by the superior olivary complex/lateral lemniscus
  • Wave V – generated by the lateral lemniscus/inferior colliculus


History of ABR

In 1967, Sohmer and Feinmesser were the first to publish ABRs recorded with surface electrodes in humans, showing that cochlear potentials could be obtained non-invasively. In 1971, Jewett and Williston gave a clear description of the human ABR and correctly interpreted the later waves as arising from the brainstem. In 1974, Hecox and Galambos showed that the ABR could be used for threshold estimation in adults and infants. In 1975, Starr and Achor were the first to report the effects of CNS pathology (pathology restricted to the brainstem) on the ABR. In 1977, Selters and Brackmann published landmark findings on prolonged inter-peak latencies in tumor cases (tumors greater than 1 cm).[2]

ABR techniques

Recording parameters

  • Electrode montage: most commonly a vertical montage (high forehead [active or positive], earlobes or mastoids [reference, right and left, or negative], low forehead [ground])
  • Impedance: 5 kΩ or less (and approximately equal across electrodes)
  • Filter settings: 100–3000 Hz bandwidth
  • Time window: 10 ms (minimum)
  • Sampling rate: usually fixed at 256 or 512 points
  • Intensity: usually starting at 70 dB nHL
  • Stimulus type: click (100 μs long) or toneburst
  • Transducer type: insert earphone, bone vibrator, sound field, or headphones
  • Stimulation or repetition rate: 21.1/s (for example)
  • Amplification: 100,000–150,000×
  • n (number of averages/sweeps): 1000 minimum (1500 recommended)
  • Polarity: rarefaction or alternating recommended
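The averaging requirement above (at least 1000 sweeps) follows from the fact that the ABR is tiny relative to the ongoing EEG: averaging N time-locked sweeps reduces uncorrelated noise by roughly √N. The sketch below illustrates this with a simulated response; the waveform shape, noise level, and amplitude figures are illustrative assumptions, not clinical values.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 25_600                      # sampling rate in Hz: 256 points across a 10 ms window
t = np.arange(256) / fs * 1e3    # time axis in milliseconds
# Toy "wave V": a 0.5 µV bump near 5.5 ms, buried in much larger EEG noise.
response = 0.5 * np.exp(-((t - 5.5) ** 2) / (2 * 0.25 ** 2))

def average_sweeps(n_sweeps, noise_uv=5.0):
    """Average n_sweeps noisy repetitions of the same time-locked response."""
    sweeps = response + rng.normal(0.0, noise_uv, size=(n_sweeps, t.size))
    return sweeps.mean(axis=0)

# Residual noise after averaging falls roughly as 1/sqrt(n_sweeps).
for n in (100, 1000):
    residual = average_sweeps(n) - response
    print(n, round(float(residual.std()), 3))
```

With 1000 sweeps the residual noise is roughly a third of what 100 sweeps leaves, which is why the minimum sweep counts above are on the order of a thousand.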

Interpretation of results

When interpreting the ABR, we look at amplitude (the number of neurons firing), latency (the speed of transmission), interpeak latency (the time between peaks), and interaural latency (the difference in wave V latency between ears). The ABR represents initiated activity beginning at the base of the cochlea and moving toward the apex over a period of about 4 ms. The peaks largely reflect activity from the most basal regions of the cochlea, because the disturbance reaches the basal end first and, by the time it reaches the apex, a great deal of phase cancellation has occurred.
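The latency measures just described are simple differences of peak times. A minimal sketch, using hypothetical peak labels and illustrative latency values (not normative data):

```python
def interpeak_latencies(peaks):
    """peaks: dict mapping wave label to absolute latency in ms.

    Returns the commonly reported interpeak intervals.
    """
    return {
        "I-III": peaks["III"] - peaks["I"],
        "III-V": peaks["V"] - peaks["III"],
        "I-V":   peaks["V"] - peaks["I"],
    }

def interaural_wave_v(right_v_ms, left_v_ms):
    """Interaural latency difference of wave V (absolute value, in ms)."""
    return abs(right_v_ms - left_v_ms)

# Illustrative values only -- not normative clinical data.
right_ear = {"I": 1.6, "III": 3.7, "V": 5.6}
print(interpeak_latencies(right_ear))
print(interaural_wave_v(5.6, 5.7))
```

A prolonged I–V interpeak latency or a large interaural wave V difference is what flags retrocochlear involvement in the traditional interpretation described above.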

Use of ABR

The ABR is used for newborn hearing screening, auditory threshold estimation, intraoperative monitoring, determining hearing loss type and degree, and auditory nerve and brainstem lesion detection.

Advanced ABR techniques

Stacked ABR


One use of the traditional ABR is site-of-lesion testing, and it has been shown to be sensitive to large acoustic tumors. However, it has poor sensitivity to tumors smaller than 1 centimeter in diameter. In the 1990s, several studies concluded that the use of ABRs to detect acoustic tumors should be abandoned. As a result, many practitioners now use only MRI for this purpose.[5]

The ABR's failure to identify small tumors can be explained by the fact that it relies on latency changes of peak V. Peak V is primarily influenced by high-frequency fibers, so tumors will be missed if those fibers are not affected. Although the click stimulates a wide frequency region of the cochlea, phase cancellation of the lower-frequency responses occurs as a result of time delays along the basilar membrane.[6] If a tumor is small, those fibers may not be sufficiently affected to be detected by the traditional ABR measure.

The primary reasons why it is not practical to simply send every patient for an MRI are the high cost of an MRI, its impact on patient comfort, and its limited availability in rural areas and developing countries. In 1997, Dr. Manuel Don and colleagues published on the Stacked ABR as a way to enhance the sensitivity of the ABR in detecting smaller tumors. Their hypothesis was that the stacked derived-band ABR amplitude could detect small acoustic tumors missed by standard ABR measures.[7] In 2005, he stated that it would be clinically valuable to have an ABR test available to screen for small tumors.[8] In a 2005 interview in Audiology Online, Dr. Don of the House Ear Institute defined the Stacked ABR as "an attempt to record the sum of the neural activity across the entire frequency region of the cochlea in response to a click stimuli."[4]

Stacked ABR defined

The stacked ABR is the sum of the synchronous neural activity generated from five frequency regions across the cochlea in response to click stimulation and high-pass pink noise masking.[8] The development of this technique was based on the 8th nerve compound action potential work done by Teas, Eldredge, and Davis in 1962.[9]


The stacked ABR is a composite of activity from ALL frequency regions of the cochlea – not just high frequency.[4]

  • Step 1: obtain click-evoked ABR responses with high-pass pink masking noise (ipsilateral masking)
  • Step 2: obtain derived-band ABRs (DBR)
  • Step 3: shift & align the wave V peaks of the DBR – thus, “stacking” the waveforms with wave V lined up
  • Step 4: add the waveforms together
  • Step 5: compare the amplitude of the Stacked ABR with the click-evoked ABR from the same ear

Because the derived waveforms represent activity from progressively more apical regions along the basilar membrane, their wave V latencies are prolonged because of the nature of the traveling wave. To compensate for these latency shifts, the wave V component of each derived waveform is aligned ("stacked"), the waveforms are added together, and the resulting amplitude is measured.[6]
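The five-step stacking procedure can be sketched schematically: shift each derived-band waveform so its wave V peak lines up with the others, then sum. The toy waveforms, delays, and the use of a simple circular shift are assumptions for illustration; this is not Don's clinical algorithm.

```python
import numpy as np

def stack_abr(derived_bands, wave_v_indices):
    """Align each derived-band waveform at its wave V peak, then sum.

    derived_bands: list of equal-length 1-D arrays, one per frequency band.
    wave_v_indices: sample index of the wave V peak in each band.
    Schematic sketch of the stacking idea only.
    """
    ref = wave_v_indices[0]
    stacked = np.zeros_like(derived_bands[0])
    for wave, idx in zip(derived_bands, wave_v_indices):
        stacked += np.roll(wave, ref - idx)   # shift so wave V peaks line up
    return stacked

# Five toy derived-band responses whose peaks are progressively delayed,
# as the text describes for more apical bands.
t = np.linspace(0, 10, 500)
bands = [np.exp(-((t - (5.0 + 0.6 * k)) ** 2) / 0.1) for k in range(5)]
peaks = [int(np.argmax(b)) for b in bands]

stacked = stack_abr(bands, peaks)
print(float(stacked.max()))
```

Once aligned, the stacked peak is about five times a single band's peak; a tumor that attenuates even one band's contribution reduces this stacked amplitude, which is the comparison the Stacked ABR makes against the click-evoked ABR.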

In 2005, Don explained that in a normal ear, the Stacked ABR has the same amplitude as the click-evoked ABR. However, the presence of even a small tumor reduces the amplitude of the Stacked ABR relative to the click-evoked ABR.

Application and effectiveness

With the intent of screening for and detecting small (less than or equal to 1 cm) acoustic tumors, the Stacked ABR achieves:[7]

  • 95% sensitivity
  • 83% specificity

(Note: 100% sensitivity was obtained at 50% specificity)

In a 2007 comparative study of ABR abnormalities in acoustic tumor patients, Montaguti and colleagues mention the promise of and great scientific interest in the Stacked ABR. The article suggests that the Stacked ABR could make it possible to identify small acoustic neuromas missed by traditional ABRs.[10]

The Stacked ABR is a valuable screening tool for the detection of small acoustic tumors because it is sensitive, specific, widely available, comfortable, and cost-effective.

Auditory steady-state response (ASSR)

ASSR defined

The auditory steady-state response (ASSR) is an auditory evoked potential, elicited with modulated tones, that can be used to predict hearing sensitivity in patients of all ages. It is an electrophysiologic response to rapid auditory stimuli that yields a statistically valid estimated audiogram (an evoked potential used to predict hearing thresholds for normal-hearing individuals and those with hearing loss). The ASSR uses statistical measures to determine whether a threshold is present and serves as a "cross-check" for verification purposes prior to arriving at a differential diagnosis.


In 1981, Galambos and colleagues reported on the "40 Hz auditory potential," a continuous 400 Hz tone sinusoidally amplitude-modulated at 40 Hz and presented at 70 dB SPL. This produced a very frequency-specific response, but the response was highly susceptible to state of arousal. In 1991, Cohen and colleagues found that by presenting at a stimulation rate higher than 40 Hz (>70 Hz), the response was smaller but less affected by sleep. In 1994, Rickards and colleagues showed that it was possible to obtain responses in newborns. In 1995, Lins and Picton found that simultaneous stimuli presented at rates in the 80 to 100 Hz range made it possible to obtain auditory thresholds.[1]


Recording montages for the ASSR are the same as, or similar to, those used for ABR recordings. Two active electrodes are placed at or near the vertex and at the ipsilateral earlobe/mastoid, with the ground at the low forehead. When collecting from both ears simultaneously, a two-channel preamplifier is used. When a single-channel recording system is used to detect activity from a binaural presentation, a common reference electrode may be located at the nape of the neck. Transducers can be insert earphones, headphones, a bone oscillator, or sound field, and it is preferable if the patient is asleep. Unlike ABR settings, the high-pass filter might be approximately 40 to 90 Hz and the low-pass filter between 320 and 720 Hz, with typical filter slopes of 6 dB per octave. Gain settings of 10,000 are common, artifact rejection is left on, and it is considered advantageous to have a manual override that allows the clinician to make decisions during the test and apply corrections as needed.[11]


Similarities between ABR and ASSR

  • Both record bioelectric activity from electrodes arranged in similar recording arrays.
  • Both are auditory evoked potentials.
  • Both use acoustic stimuli delivered through inserts (preferably).
  • Both can be used to estimate threshold for patients who cannot or will not participate in traditional behavioral measures.

Differences between ABR and ASSR

  • ASSR looks at amplitude and phases in the spectral (frequency) domain rather than at amplitude and latency.
  • ASSR depends on peak detection across a spectrum rather than across a time vs. amplitude waveform.
  • ASSR is evoked using repeated sound stimuli presented at a high rep rate rather than an abrupt sound at a relatively low rep rate.
  • ABR typically uses click or tone-burst stimuli presented to one ear at a time, whereas ASSR can be used binaurally while evaluating broad bands or four frequencies (500, 1000, 2000, and 4000 Hz) simultaneously.
  • ABR estimates thresholds essentially from 1000 to 4000 Hz in typical mild-to-severe hearing losses. ASSR can estimate thresholds in the same range, but offers more frequency-specific information more quickly and can estimate hearing in the severe-to-profound range.
  • ABR depends highly upon a subjective analysis of the amplitude/latency function. The ASSR uses a statistical analysis of the probability of a response (usually at a 95% confidence interval).
  • ABR is measured in microvolts (millionths of a volt) and the ASSR is measured in nanovolts (billionths of a volt).


Analysis, normative data, and general trends

Analysis is mathematically based and depends on the fact that related bioelectric events coincide with the stimulus repetition rate. The specific method of analysis depends on the manufacturer's statistical detection algorithm. Detection occurs in the spectral domain and is based on specific frequency components that are harmonics of the stimulus repetition rate. Early ASSR systems considered only the first harmonic, but newer systems also incorporate higher harmonics in their detection algorithms.[11] Most equipment provides correction tables for converting ASSR thresholds to estimated HL audiograms, which are generally found to be within 10 to 15 dB of audiometric thresholds, although there is variance across studies. Correction data depend on variables such as the equipment used, frequencies collected, collection time, age of the subject, sleep state of the subject, and stimulus parameters.[12]
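The spectral detection idea can be illustrated with a toy example: examine the magnitude spectrum of the recording at the stimulus modulation rate and compare it with neighboring "noise" bins. The signal level, noise level, and bin-comparison rule here are illustrative assumptions and do not reproduce any manufacturer's statistical algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

fs, dur, fm = 1000, 4.0, 80      # sample rate (Hz), duration (s), modulation rate (Hz)
t = np.arange(int(fs * dur)) / fs

# Toy recording: a tiny response phase-locked to the 80 Hz modulation rate, plus noise.
recording = 0.02 * np.sin(2 * np.pi * fm * t) + rng.normal(0.0, 0.2, t.size)

spectrum = np.abs(np.fft.rfft(recording))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

signal_bin = int(np.argmin(np.abs(freqs - fm)))
# Compare the bin at the modulation rate against nearby bins, loosely
# mimicking spectral-domain detection (not any vendor's algorithm).
noise_bins = np.r_[signal_bin - 10:signal_bin - 2, signal_bin + 3:signal_bin + 11]
snr = spectrum[signal_bin] / spectrum[noise_bins].mean()
print(round(float(snr), 1))
```

A response is "detected" when this ratio exceeds a statistical criterion; real systems formalize this (e.g., with an F-test at a 95% confidence level, as the comparison list above notes) rather than using a raw ratio.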

ABR and hearing aid fittings

In certain cases where behavioral thresholds cannot be obtained, ABR thresholds can be used for hearing aid fittings. Newer fitting formulas, such as DSL v5.0, allow the user to base hearing aid settings on ABR thresholds. Correction factors for converting ABR thresholds to behavioral thresholds exist, but vary greatly. For example, one set of correction factors involves lowering ABR thresholds from 1000–4000 Hz by 10 dB and lowering the ABR threshold at 500 Hz by 15 to 20 dB.[13] Previously, brainstem audiometry was used for hearing aid selection by using normal and pathological intensity-amplitude functions to determine appropriate amplification.[14] The principal idea behind this approach was that the amplitudes of the brainstem potentials are directly related to loudness perception; under this assumption, brainstem potentials evoked through the hearing devices should exhibit close-to-normal amplitudes. ABR thresholds do not necessarily improve in the aided condition.[15] The ABR can be an inaccurate indicator of hearing aid benefit because the hearing aid may not process the transient stimuli used to evoke a response with sufficient fidelity. Bone-conduction ABR thresholds can be used if other limitations are present, but these thresholds are not as accurate as ABR thresholds recorded through air conduction.[16]
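The correction-factor example above (lower ABR thresholds by 10 dB at 1000–4000 Hz and by 15–20 dB at 500 Hz) amounts to a simple per-frequency subtraction. A sketch, assuming 15 dB at 500 Hz and hypothetical measured values; actual corrections vary greatly across equipment, age, and protocol.

```python
# Illustrative correction factors from the text; 15 dB at 500 Hz is an
# assumption within the stated 15-20 dB range. Not a clinical table.
CORRECTIONS_DB = {500: 15, 1000: 10, 2000: 10, 4000: 10}

def estimate_behavioral_thresholds(abr_thresholds_db):
    """Apply per-frequency corrections to ABR thresholds (dB nHL)."""
    return {f: level - CORRECTIONS_DB[f] for f, level in abr_thresholds_db.items()}

# Hypothetical measured ABR thresholds.
abr = {500: 45, 1000: 40, 2000: 35, 4000: 40}
print(estimate_behavioral_thresholds(abr))
```

The corrected values are what a fitting formula such as DSL v5.0 would then take as its estimated behavioral audiogram.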

Advantages of hearing aid selection by brainstem audiometry include the following applications:

  • evaluation of loudness perception in the dynamic range of hearing (recruitment)
  • determination of basic hearing aid properties (gain, compression factor, compression onset level)
  • cases with middle ear impairment (contrary to acoustic reflex methods)
  • non-cooperative subjects even in sleep
  • sedation or anesthesia without influence of age and vigilance (contrary to cortical evoked responses).

Disadvantages of hearing aid selection by brainstem audiometry include the following limitations:

  • in cases of severe hearing impairment, little or no information about loudness perception can be obtained
  • no control of compression setting
  • no frequency-specific compensation of hearing impairment

Cochlear implantation and central auditory development

About 188,000 people around the world have received cochlear implants. In the United States alone, there are about 30,000 adult and over 30,000 child recipients.[17] This number continues to grow as cochlear implantation becomes more widely accepted. In 1961, Dr. William House began work on the predecessor of today's cochlear implant. William House is an otologist and the founder of the House Ear Institute in Los Angeles, California. This groundbreaking device, manufactured by the 3M company, was approved by the FDA in 1984.[18] Although it was a single-channel device, it paved the way for future multichannel cochlear implants. As of 2007, the three cochlear implant devices approved for use in the U.S. are manufactured by Cochlear, MED-EL, and Advanced Bionics. A cochlear implant works as follows: sound is received by the implant's microphone, which picks up input that must be processed to determine how the electrodes will receive the signal. This processing is done on the external component of the implant, the sound processor. The transmitting coil, also an external component, sends the information from the sound processor through the skin using frequency-modulated radio waves. Unlike in a hearing aid, the signal is never turned back into an acoustic stimulus. The information is then received by the implant's internal components: the receiver-stimulator delivers the appropriate amount of electrical stimulation to the appropriate electrodes on the array to represent the detected sound signal, and the electrode array stimulates the remaining auditory nerve fibers in the cochlea, which carry the signal on to the brain, where it is processed.

One way to measure the developmental status and limits of plasticity of the auditory cortical pathways is to study the latency of cortical auditory evoked potentials (CAEP). In particular, the latency of the first positive peak (P1) of the CAEP is of interest to researchers. P1 in children is considered a marker for maturation of the auditory cortical areas (Eggermont & Ponton, 2003; Sharma & Dorman, 2006; Sharma, Gilley, Dorman, & Baldwin, 2007).[19][20][21] The P1 is a robust positive wave occurring at around 100 to 300 ms in children. P1 latency represents the synaptic delays throughout the peripheral and central auditory pathways (Eggermont, Ponton, Don, Waring, & Kwong, 1997).[22]

P1 latency changes as a function of age and is considered an index of cortical auditory maturation (Ceponiene, Cheour, & Naatanen, 1998).[23] P1 latency and age have a strong negative correlation: P1 latency decreases with increasing age, most likely because synaptic transmission becomes more efficient over time. The P1 waveform also becomes broader with age. The P1 neural generators are thought to originate from the thalamo-cortical portion of the auditory cortex. Researchers believe that P1 may reflect the first recurrent activity in the auditory cortex (Kral & Eggermont, 2007).[24] The negative component following P1 is called N1. N1 is not consistently seen in children until 12 years of age.

In 2006, Sharma and Dorman measured the P1 response in deaf children who received cochlear implants at different ages to examine the limits of plasticity in the central auditory system.[20] Those who received cochlear implant stimulation in early childhood (younger than 3.5 years) had normal P1 latencies. Children who received cochlear implant stimulation late in childhood (older than seven years) had abnormal cortical response latencies, while children implanted between the ages of 3.5 and 7 years showed variable P1 latencies. Sharma also studied the waveform morphology of the P1 response in 2005 [25] and 2007.[21] She found that in early-implanted children the P1 waveform morphology was normal, whereas in late-implanted children the P1 waveforms were abnormal, with lower amplitudes than normal waveform morphology. In 2008, Gilley and colleagues used source reconstruction and dipole source analysis derived from high-density EEG recordings to estimate the generators of P1 in three groups of children: normal-hearing children, children receiving a cochlear implant before the age of four, and children receiving a cochlear implant after the age of seven. The waveform morphology of normal-hearing and early-implanted children was found to be very similar.[26] Late-implanted children had smaller amplitudes and poorer morphology, and only the late-implanted children showed a statistically significant latency difference.

The take-home message is that auditory evoked potentials are a valid clinical tool for assessing individuals before and after cochlear implantation: they can help determine the need for a cochlear implant and, by measuring the benefit to the user, assess whether the implant is working after implantation. Children implanted before age 3.5 years showed P1 latencies within normal developmental limits, whereas children implanted after the age of seven years almost always show evidence of abnormal central auditory maturation in the latency of the P1 response. The critical window closes around 3.5 years of age; implantation before this age improves the likelihood of success with the cochlear implant.

Sedation protocols

Common sedative used

To achieve the highest-quality recordings for any evoked potential, good patient relaxation is generally necessary; otherwise, recordings can be contaminated with myogenic and movement artifacts. Patient restlessness and movement contribute to threshold overestimation and inaccurate test results. In most cases, an adult is capable of providing a good extratympanic recording without sedation. In transtympanic recordings, a sedative can be used when time-consuming procedures need to take place, and most patients (especially infants) are given light anesthesia when tested transtympanically.

Chloral hydrate is a commonly prescribed sedative and the most common choice for inducing sleep in young children and infants for AEP recordings. It depresses the central nervous system, specifically the cerebral cortex. Side effects of chloral hydrate include vomiting, nausea, gastric irritation, delirium, disorientation, allergic reactions, and occasionally excitement (a high level of activity rather than becoming tired and falling asleep). Chloral hydrate is readily available in three forms: syrup, capsule, and suppository. The syrup is most successful for children 4 months and older; the proper dosage is poured into an oral syringe or cup, the syringe is used to squirt the syrup into the back of the mouth, and the child is then encouraged to swallow. To induce sleep, dosages range anywhere from 500 mg to 2 g; the recommended pediatric dose is 50 mg per kg of body weight. If the child does not fall asleep after the first dose, a second dose no greater than the first may be given, with the overall dose not exceeding 100 mg/kg of body weight. Sedation personnel should include a physician and a registered or practical nurse. Documentation and monitoring of physiologic parameters are required throughout the entire process, and sedatives should only be administered in the presence of those who are knowledgeable and skilled in airway management and cardiopulmonary resuscitation (CPR).
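The dosing arithmetic above (50 mg/kg initial dose, a second dose no larger than the first, and a 100 mg/kg overall ceiling) can be made explicit. This is a sketch of the arithmetic only, not dosing guidance.

```python
def chloral_hydrate_plan(weight_kg):
    """Arithmetic from the text: 50 mg/kg first dose, second dose no larger
    than the first, total capped at 100 mg/kg. Illustration only.
    """
    first_mg = 50 * weight_kg
    max_total_mg = 100 * weight_kg
    max_second_mg = min(first_mg, max_total_mg - first_mg)
    return first_mg, max_second_mg, max_total_mg

# Example: a 10 kg infant.
print(chloral_hydrate_plan(10))
```

For a 10 kg infant this gives a 500 mg first dose, at most 500 mg more, and a 1000 mg overall cap, consistent with the ranges stated above.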


A consent form indicating the conscious sedation and the procedure being performed must be signed by the patient or guardian and received. A documented pre-sedation medical evaluation, including a focused airway examination, must be performed either on the same day as the sedation or within recent days, and must include, but is not limited to:

  • Age and weight
  • A complete and thorough medical history including all current medications, drug allergies, relevant disease, adverse drug reactions (especially relevant if any previous reaction to sedatives) and all relevant family history
  • Verify any airway or respiratory problems
  • All medications taken (including dosage and history of specific drug use) on the day of the procedure
  • Food and fluid intake within the 8 hours prior to sedation – light breakfast or lunch 1–2 hours prior to testing reduces likelihood of gastric irritation (common with chloral hydrate).
  • All vital signs

All orders for conscious sedation must be written; prescriptions or orders received from areas outside the conscious sedation area are not acceptable. A single individual must be assigned to monitor the sedated patient's cardiorespiratory status before, during, and after sedation.

If the patient is deeply sedated, that individual's only job should be to verify and record vital signs at least every five minutes. All age- and size-appropriate equipment and medications used to sustain life should be verified before sedation and should be readily available at any time during and after sedation.

The medication should be administered by a physician or nurse and documented (dosage, name, time, etc.). Children should not receive the sedative without the supervision of skilled and knowledgeable medical personnel (for example, not at home or from a technician). Emergency equipment, including a crash cart, must be readily available, and respiration should be monitored visually or with a stethoscope. A family member needs to remain in the room with the patient, especially if the tester steps out; in that scenario, respiration can be monitored acoustically with a talk-back system microphone placed near the patient's head, and medical personnel should be notified of any slowing of respiration.

After the procedure is over, the patient must be continuously observed in an appropriately equipped and staffed facility, because patients are typically "floppy" and have poor motor control; they should not stand on their own for the first few hours. No other medications containing alcohol should be administered until the patient has returned to a normal state. Drinking fluids is encouraged to reduce stomach irritation. Each facility should create and use its own discharge criteria, and verbal and written instructions should be provided on limitations of activity and anticipated changes in behavior. All discharge criteria must be met and documented before the patient leaves the facility.

Some criteria prior to discharge should include:

  • Stable vital signs similar to those taken pre-procedure
  • Patient has returned to the pre-procedure level of consciousness
  • Patient has received post-procedure care instructions.[13]

References


  1. ^ a b Burkhard, R.F., Don, M., and Eggermont, J.J. (2007) Auditory Evoked Potentials, Basic Principles and Clinical Application. Baltimore: Lippincott & Wilkins.
  2. ^ a b c Hall, J.W. III (2007). The new handbook of auditory evoked responses. Boston: Allyn & Bacon.
  3. ^ Moore, E.J. (1983). Bases of Auditory Brain-Stem Evoked Responses. New York: Grune & Stratton, Inc.
  4. ^ a b c DeBonis, D.A., and Donohue, C.L. (2008). Survey of Audiology: Fundamentals for Audiologists and Health Professionals (2nd ed.). Boston: Allyn and Bacon.
  5. ^ Don, M., Kwong, B., Tanaka, C., Brackmann, D., Nelson, R. (2005) The stacked ABR: a sensitive and specific screening tool for detecting small acoustic tumors. Audiol Neurotol; 10:274-290.
  6. ^ a b Prout, T. (2007) Asymmetrical low frequency hearing loss and acoustic neuroma. Audiology Online.
  7. ^ a b Don, M., Masuda, A., Nelson, R.A., Brackmann, D.E. (1997) Successful detection of small acoustic tumors using the stacked derived band ABR method. Am J Otolaryngol; 18:608-621.
  8. ^ a b Don, M. (2005) The stacked ABR: an alternative screening tool for small acoustic tumors. Hearing Review; August.
  9. ^ Teas, D.C., Eldredge, D.H., Davis, H. (1962) Cochlear responses to acoustic transients. An interpretation of whole-nerve action potentials. J Acoust Soc Am; 34:1438-1489
  10. ^ Montaguti, C., Bergonzoni, M., Zanetti, A., Ceroni, R. (2007) Comparative evaluation of ABR abnormalities in patients with and without neurinoma of VIII cranial nerve. ACTA Otorhinolaryngologica Italica; 27:68-72.
  11. ^ a b c Beck, DL; Speidel, DP; and Petrak, M. (2007) Auditory Steady-State Response (ASSR): A Beginner’s Guide. The Hearing Review. 2007; 14(12):34-37.
  12. ^ Picton TW, Dimitrijevic A, Perez-Abalo M-C, van Roon P. (2005) Estimating audiometric thresholds using auditory steady-state responses. J Am Acad Audiol. 16:140-156.
  13. ^ a b Hall JW, Swanepoel DW (2010). Objective Assessment of Hearing. San Diego: Plural Publishing Inc.
  14. ^ Kiebling J (1982). "Hearing Aid Selection by Brainstem Audiometry". Scandinavian Audiology 11: 269–275. 
  15. ^ Billings CJ, Tremblay K, Souza PE, Binns MA. (2007). "Stimulus Intensity and Amplification Effects on Cortical Evoked Potentials". Audiol Neurotol 12: 234–246. 
  16. ^ Rahne T, Ehelebe T, Rasinski C, Gotze G. (2010). "Auditory Brainstem and Cortical Potentials Following Bone-Anchored Hearing Aid Stimulation". Journal of Neuroscience Methods 193: 300–306. 
  17. ^ Jennifer Davis (2009-10-29), Peoria Journal Star, "According to the U.S. Food and Drug Administration, about 188,000 people worldwide have received implants as of April 2009." 
  18. ^ W.F. House (2009), Annals of Otology, Rhinology and Laryngology, 85, pp. 1–93, "Cochlear implants" 
  19. ^ Eggermont, J. J.; Ponton, C. W. (2003), Acta Oto-Laryngologica, p. 123(2), "Auditory-evoked potential studies of cortical maturation in normal hearing and implanted children: Correlations with changes in structure and speech perception." 
  20. ^ a b Sharma, A.; Dorman, M. F. (2006), Advances in Oto-Laryngologica, "Central auditory development in children with cochlear implants: Clinical implications." 
  21. ^ a b Sharma, A.; Gilley, P. M.; Dorman, M. F.; Baldwin, R. (2007), International Journal of Audiology, p. 46(9), "Deprivation-induced cortical reorganization in children with cochlear implants." 
  22. ^ Eggermont, J. J.; Ponton, C. W.; Don, M.; Waring, M. D.; Kwong, B. (1997), Acta Oto-Laryngologica, p. 117(2), "Deprivation-induced cortical reorganization in children with cochlear implants." 
  23. ^ Ceponiene, R.; Cheour, M.; Naatanen, R. (1998), Electroencephalography and Clinical Neurophysiology, p. 108(4), "Interstimulus interval and auditory event-related potentials in children: Evidence for multiple generators." 
  24. ^ Kral, A.; Eggermont, J. J. (2007), Brain Res. Rev., p. 56, "What's to lose and what's to learn: development under auditory deprivation, cochlear implants and limits of cortical plasticity." 
  25. ^ Sharma, A. (2005), Audiol, p. 16, "P1 latency as a biomarker for central auditory development in children with hearing impairment J. Am. Acad." 
  26. ^ Gilley, P. M., Sharma, A., & Dorman, M. F. (2008). Cortical reorganization in children with cochlear implants. Brain Research .
