
ORIGINAL ARTICLE
Year : 2016  |  Volume : 3  |  Issue : 2  |  Page : 25-34

Templates for speech-evoked auditory brainstem response performance in cochlear implantees


1 Unit of Audiovestibular Medicine, Faculty of Medicine, Department of Otorhinolaryngology, Alexandria University, Alexandria, Egypt
2 Department of Diagnostic Imaging, Faculty of Medicine, Alexandria University, Alexandria, Egypt
3 Department of Computer and Systems Engineering, Faculty of Engineering, Alexandria University, Alexandria, Egypt

Date of Submission: 13-Sep-2016
Date of Acceptance: 30-Nov-2016
Date of Web Publication: 20-Mar-2017

Correspondence Address:
Mirhan K Eldeeb
Unit of Audiovestibular Medicine, Department of Otorhinolaryngology, Faculty of Medicine, Al Sultan Hussein Street, Al Khartoom Square, Al Azareeta, Alexandria, 21111
Egypt

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/2314-8667.202551

  Abstract 

Introduction
Speech-evoked auditory brainstem response (ABR) has been used to assess the fidelity of encoding of speech stimuli at the subcortical level in normal individuals in noise and in special populations such as learning-impaired children and musicians. The neural code generated by cochlear implants (CIs) in the auditory brainstem pathway, and its similarity to the stimulus, may account for the variable speech development seen in cochlear implantees.
Objective
The aim of this study was to describe speech ABR recorded in CI individuals and establish measurement parameters for the neural response and its reproducibility.
Participants and methods
Children between 5 and 10 years of age implanted in the right ear with fully inserted 12-electrode CIs were selected. All participants had normal morphology of the cochlea and auditory nerve on preoperative computed tomographic scan and MRI. A 40-ms speech syllable /da/ was used to elicit speech ABR. Response traces for intensity input–output functions were recorded. Grand averages were constructed for peak picking. Individual patient responses were analyzed for reproducibility, wave V latency, root mean square amplitude of the response, and correlation with the stimulus.
Results
Grand averages showed wave V, followed by the frequency following response. Wave V is a vertex-positive peak, equivalent to that elicited by a click, which reflects stimulation by the transient /d/. The mean latency of wave V was 2.59±0.7 ms at 70 dBHL. The frequency following response showed multiple sequenced troughs corresponding to the sustained vowel /a/. Individual responses collected with identical stimulus parameters showed high reproducibility, reaching 99.65% at 60 dBHL and 52.8% at 30 dBHL. Participants showed variable slopes of the wave V latency-intensity and root mean square amplitude-intensity input–output functions. The mean stimulus-to-response correlation was 18.1±3.1%.
Conclusion
Speech ABR in CI participants shows a morphology similar to that recorded in normal-hearing individuals. CIs thus transcribe the speech signal to the brainstem pathways with high fidelity.

Keywords: auditory brainstem response, cochlear implant, speech auditory brainstem response


How to cite this article:
Mourad MI, Eid M, Elmongui HG, Talaat MM, Eldeeb MK. Templates for speech-evoked auditory brainstem response performance in cochlear implantees. Adv Arab Acad Audio-Vestibul J 2016;3:25-34

How to cite this URL:
Mourad MI, Eid M, Elmongui HG, Talaat MM, Eldeeb MK. Templates for speech-evoked auditory brainstem response performance in cochlear implantees. Adv Arab Acad Audio-Vestibul J [serial online] 2016 [cited 2024 Mar 28];3:25-34. Available from: http://www.aaj.eg.net/text.asp?2016/3/2/25/202551

This study was presented as a poster, 'Transduction of a complex signal through the normal cochlea and through the cochlear implant' (abstract book page 22, poster number 53), at the 52nd Inner Ear Biology 2015 symposium and workshop, 12-15 September 2015, Rome, Italy, and as an oral presentation, 'Transduction of the speech syllable /da/ through cochlear implant', at the Cairo Cochlear International Congress, 5-7 February 2016, Cairo, Egypt.



Introduction


Sound transduction in the cochlea follows propagation of the mechanical traveling wave along the basilar membrane, stimulating the outer and inner hair cells, and evoking the eighth nerve action potential. In profound hearing loss, this function is substantially disturbed with subsequent failure to provoke an auditory nerve action potential.

A cochlear implant (CI) transduces acoustic signals into electrical signals, bypassing the damaged cochlea and provoking auditory nerve action potentials. The transduction process involves acoustic signal processing to extract prominent features of the target speech and feed them to the auditory nerve through electrical biphasic pulses tonotopically mapped in the cochlear scala using place-coding strategies. In addition, temporal coding, conveyed through pulse rate, enhances low-frequency perception [1].

Auditory brainstem response (ABR) is a series of potentials with robust timing and reproducibility for transient and sustained acoustic signals. It has been measured for speech stimuli [2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12],[13],[14],[15],[16],[17],[18],[19],[20],[21],[22]. Transient stimuli, whether a click [23] or a stop consonant [20], yield a series of peaks (waves I–V). Sustained stimuli such as phrases [24], monosyllabic words [25], and vowels [26],[27],[28] yield a series of potentials composing the frequency following response (FFR). The FFR vertex-negative peaks in response to the 40-ms speech syllable /da/ are named B, C, D, E, F, and O [20]. Transduction of acoustic speech into neural codes in the brainstem with a CI may therefore be studied by speech-evoked ABR.

The aim of the present study was to describe speech ABR morphology in CI individuals and establish parameters for neural response reproducibility.

In this investigation, it was hypothesized that the CI processor-electrode coupling transduces the speech syllable in a way that reflects its temporal and spectral components and, in this respect, mimics the speech ABR reported in normal individuals.


Participants and methods


Participants

Ten prelingually deafened children using CIs were selected. Participants' ages ranged from 5 to 10 years (four male and six female). The study was approved by the local ethics committee, and informed consent was obtained from each participant's parent before inclusion. Criteria for selection were as follows:

  1. Preoperative computed tomography (CT) and MRI of the petrous bone, indicating normal anatomy of the cochlea and eighth nerve.
  2. Postoperative CT of the petrous bone showing full insertion of the 12-electrode (Med-EL, Innsbruck, Austria) standard array.
  3. Implantation in the right ear.
  4. Participants with all electrodes enabled.
  5. All participants used the same coding strategy, fine structure 4 (FS4).


Each CI participant underwent the following evaluation protocol.

Behavioral assessment

All children were examined using warble tones in decibel hearing level (dBHL) at 250, 500, 1000, 2000, and 4000 Hz to obtain aided free-field thresholds using their final map adjustments.

Speech auditory brainstem response recording

Speech stimulus

A speech-like /da/ syllable of 40 ms duration, provided by the Kraus brainstem toolbox, was presented at a repetition rate of 2.1/s in alternating polarity. The stimulus is a five-formant synthetic speech syllable produced with a Klatt cascade/parallel formant synthesizer. A detailed description of the stimulus is provided by Banai et al. [29].

Stimulus calibration

The stimulus /da/ was calibrated for each participant at the level of the implanted ear. Stimulus intensity was measured in decibel sound pressure level (dBSPL) using a Radio Shack sound level meter and corrected to dBHL by subtracting 20 dB from the sound level meter reading.

Recording parameters

Responses were differentially recorded using a forehead-to-contralateral (left) mastoid electrode montage, with the chin as ground. This contralateral montage was chosen to minimize stimulus artifact by increasing the distance between the device, the speaker, and the reference electrode. Disposable electrodes with conductive paste were used. Electrode sites were cleaned with alcohol and rubbed with rough gauze to lower skin resistance. To ensure balanced inputs to the differential amplifier and optimize the signal-to-noise ratio, electrode impedance did not exceed 3000 Ω and differences between electrode pairs were kept below 2000 Ω.

Responses were averaged over 1000 stimuli. A 60-ms recording window (including a 10-ms prestimulus period) was used, and responses were filtered online through a 30–500 Hz band-pass filter. Two control traces were recorded: an averaged no-stimulus run with the processor turned on, and an averaged run with the processor turned off.

In a pilot study on CI participants, the parameters used for normal-hearing children in the literature (a stimulus rate of 4.1/s and a band-pass filter of 30–3000 Hz [8]) produced traces with poor definition of response peaks and troughs. Lowering the stimulus rate to 2.1/s and narrowing the band-pass filter to 30–500 Hz yielded clearer recordings with less noise contamination and better visually rated reproducibility. Wave V could be traced down to 30 dBHL in most cases.
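As an illustration of the filter setting described above, the sketch below applies an equivalent 30–500 Hz band-pass offline to an exported trace. It is only a minimal sketch: the filter type, order, zero-phase filtering, and function name are assumptions for illustration and do not reflect the Bio-logic system's online implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_30_500(trace, fs=4000.0, order=4):
    """Apply a 30-500 Hz band-pass filter to an exported trace.

    trace : 1-D array of response samples
    fs    : sampling rate in Hz (exported traces were sampled at 4 kHz)
    A zero-phase Butterworth filter is used here so that peak latencies
    are not shifted; this choice is illustrative, not the device setting.
    """
    nyq = fs / 2.0
    b, a = butter(order, [30.0 / nyq, 500.0 / nyq], btype="band")
    return filtfilt(b, a, np.asarray(trace, dtype=float))
```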

Test procedure

Responses were recorded using the Bio-logic Navigator Pro, version 7.0.0 (Natus Medical Incorporated, San Carlos, California, United States). Measures were obtained in a quiet room, and all participants were tested either in a comfortable state while watching a silent cartoon or while sleeping. The stimulus /da/ was delivered through a speaker located 30 cm from the participant's head at 90° azimuth. The response input–output intensity function was recorded starting at 70 dBHL and then at successively lower intensities, in 10-dB decrements, down to the level at which no visual response could be obtained. Two traces were recorded at each stimulus intensity to ensure that the response was repeatable; the two traces were used to assess waveform reproducibility and were then added to create an average.

Data analysis of the response

Traces were analyzed using the MATLAB Digital Signal Processing Toolbox and the Kraus brainstem toolbox in MATLAB (The MathWorks, Inc., Natick, Massachusetts, United States).

Traces were exported to ASCII format using the AEP2ASCII software provided with Bio-logic Navigator Pro version 7.0.0 (Natus). Digital signals extracted from the ASCII files had a sampling rate of 4 kHz. The stimulus /da/ was retrieved as a WAV file with a sampling rate of 48 kHz.

For the analysis, both the stimulus and the responses (traces) were converted into 8 kHz digital signals. The stimulus sampling rate was reduced using a sampling-rate compressor that implements the function xd[n] = x[6n], where xd[n] is the compressed stimulus and x[n] is the WAV-file stimulus. The responses, in turn, were upsampled by a factor of 2 and then low-pass filtered to compensate for the missing values. The following expander was used to implement this upsampling, where yi[n] is the upsampled response and y[n] is the extracted trace: yi[n] = y[n/2] for even n, and yi[n] = 0 for odd n.



All upsampled responses and compressed stimulus were converted to an AVG format that the Kraus brainstem toolbox uses.
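A minimal sketch of this resampling step is given below, assuming a 48 kHz stimulus and 4 kHz traces as stated above. The FIR low-pass design (tap count, zero-phase filtering) and the function names are assumptions of the sketch, not the filter the authors used.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def compress_stimulus(x, factor=6):
    """Sampling-rate compressor xd[n] = x[6n]: keep every 6th sample
    (48 kHz -> 8 kHz)."""
    return np.asarray(x, dtype=float)[::factor]

def upsample_response(y, factor=2, fs_out=8000.0, numtaps=65):
    """Upsample a 4 kHz trace to 8 kHz by zero insertion followed by
    low-pass filtering to fill in the inserted zeros."""
    y = np.asarray(y, dtype=float)
    yi = np.zeros(len(y) * factor)
    yi[::factor] = y                          # yi[n] = y[n/2] for even n, 0 for odd n
    cutoff = fs_out / (2.0 * factor)          # keep the original 0-2 kHz band
    h = firwin(numtaps, cutoff, fs=fs_out)    # FIR low-pass (illustrative design)
    return factor * filtfilt(h, [1.0], yi)    # gain of `factor` restores amplitude
```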

To analyze the responses and their correlation with the stimulus, normalized cross correlation was used; the maximum correlation was searched across different lag times. The FFR was correlated with the vowel part of the stimulus, which bracketed the temporal window of 11–40 ms, at 60 and 70 dBHL. The FFR was taken as the segment from 3 ms after the wave V trough to the end of the trace. Normalized cross correlation allows a clearer comparison even when the signals have diverse energy levels, because the normalized cross correlation of two signals is the cross correlation of the normalized signals. Let ŷ[n], ȳ, and σy denote the normalized signal, the average value, and the SD of y[n], respectively. Therefore, ŷ[n] = (y[n] − ȳ)/σy, and the normalized cross correlation of x[n] and y[n] at a given lag is the cross correlation of x̂[n] and ŷ[n] at that lag.
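A small sketch of this computation follows, assuming both signals have already been converted to the common 8 kHz rate; the helper name and the use of SciPy's correlate and correlation_lags functions are choices of the sketch, not of the original analysis.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def normalized_xcorr(x, y, fs=8000.0):
    """Normalized cross-correlation between two signals sampled at fs.

    Each signal is z-scored (mean removed, divided by its SD) before
    cross-correlating, so the result does not depend on absolute signal
    energy.  Returns the maximum correlation and the lag (in ms) at
    which it occurs.
    """
    xn = (np.asarray(x, dtype=float) - np.mean(x)) / np.std(x)
    yn = (np.asarray(y, dtype=float) - np.mean(y)) / np.std(y)
    r = correlate(xn, yn, mode="full") / min(len(xn), len(yn))
    lags = correlation_lags(len(xn), len(yn), mode="full")
    k = int(np.argmax(r))
    return r[k], 1000.0 * lags[k] / fs
```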



The root mean square (RMS) amplitude was obtained for the whole response waveform. RMS amplitude was also measured from the wave V peak to its following trough, and for the FFR segment from 3 ms after the wave V trough to the end of the trace. The RMS amplitude ratio of wave V to the FFR was calculated at all stimulus intensities.
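The sketch below illustrates these RMS measures. The segment boundaries follow the description above, while the function names and the assumption that the time axis is in ms with 0 at stimulus onset are illustrative.

```python
import numpy as np

def rms(segment):
    """Root mean square amplitude of a response segment."""
    seg = np.asarray(segment, dtype=float)
    return float(np.sqrt(np.mean(seg ** 2)))

def v_to_ffr_rms_ratio(trace, t, v_peak_ms, v_trough_ms):
    """RMS ratio of the wave V portion to the FFR portion of one trace.

    trace       : response samples
    t           : time axis in ms (0 = stimulus onset), same length as trace
    v_peak_ms   : marked latency of the wave V peak
    v_trough_ms : marked latency of the trough (wave A) following wave V
    The FFR segment runs from 3 ms after the wave V trough to the end
    of the trace, as described above.
    """
    trace = np.asarray(trace, dtype=float)
    t = np.asarray(t, dtype=float)
    v_segment = trace[(t >= v_peak_ms) & (t <= v_trough_ms)]
    ffr_segment = trace[t >= v_trough_ms + 3.0]
    return rms(v_segment) / rms(ffr_segment)
```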

For each patient, the intertrace normalized correlation was performed for the following:

  1. Speech ABR traces of the same intensity.
  2. The speech ABR trace at 60 dBHL and a control trace averaged with no stimulus presented but the processor turned on.
  3. The speech ABR trace at 60 dBHL and a control trace averaged with the processor turned off.


The prestimulus baseline RMS amplitude was subtracted from the response RMS amplitude to correct for noise picked up during recording.
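A one-function sketch of this baseline correction follows; it assumes the time axis is in ms with 0 at stimulus onset, so the 10-ms prestimulus period corresponds to t < 0 (an assumption of the sketch).

```python
import numpy as np

def baseline_corrected_rms(trace, t):
    """Response RMS minus prestimulus-baseline RMS.

    trace : response samples
    t     : time axis in ms, with 0 at stimulus onset (prestimulus: t < 0)
    """
    trace = np.asarray(trace, dtype=float)
    t = np.asarray(t, dtype=float)
    response_rms = np.sqrt(np.mean(trace[t >= 0.0] ** 2))
    baseline_rms = np.sqrt(np.mean(trace[t < 0.0] ** 2))
    return float(response_rms - baseline_rms)
```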

Labeling of wave V at threshold followed visual inspection of the peak across the input–output function. Wave V was marked as the first positive peak at 6 ms or earlier, followed by a sharp trough at high stimulus intensities. To estimate the wave V threshold, a 'down-ten/up-ten dBHL' bracketing procedure was applied; threshold corresponded to the level at which responses were obtained for two ascending runs. Wave V was identified as the positive peak near 6 ms immediately before the negative slope, and wave A was selected as the bottom of the downward slope following wave V [8],[29]. A reliable peak was judged as one with a peak-to-peak amplitude larger than the prestimulus baseline activity; ambiguous peaks were assessed visually by three raters. Wave V and the FFR were expected to occur earlier than in the speech ABR of normal individuals, despite possible maturational delays, because the CI bypasses the cochlear travel time, estimated at about 6 ms in normal individuals [30],[31]. A delay of 0.8 ms due to the speaker-to-ear distance was also expected and was corrected for after peak marking.
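The peak-picking rule described above can be sketched as follows. The 4-ms search window for wave A, the use of SciPy's find_peaks, and the function name are assumptions of the sketch; final acceptance of ambiguous peaks still relies on visual rating.

```python
import numpy as np
from scipy.signal import find_peaks

def mark_wave_v(trace, t, baseline_rms, speaker_delay_ms=0.8):
    """Candidate wave V: the first positive peak at or before 6 ms whose
    peak-to-trough amplitude exceeds the prestimulus baseline RMS.

    Wave A is taken as the deepest point within a short window after the
    peak (4 ms here, an illustrative choice).  The 0.8-ms speaker travel
    delay is removed after marking.  Returns (corrected latency in ms,
    peak-to-trough amplitude) or None if no candidate qualifies.
    """
    trace = np.asarray(trace, dtype=float)
    t = np.asarray(t, dtype=float)
    peaks, _ = find_peaks(trace)
    for p in peaks:
        if t[p] > 6.0:
            break
        window = (t > t[p]) & (t <= t[p] + 4.0)
        if not window.any():
            continue
        a_idx = int(np.argmin(np.where(window, trace, np.inf)))
        amplitude = trace[p] - trace[a_idx]
        if amplitude > baseline_rms:
            return t[p] - speaker_delay_ms, float(amplitude)
    return None
```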

Designation of the FFR threshold followed the normalized correlation procedure. Two parameters were measured: first, the percentage correlation between the two traces when wave V was absent, based on morphology and RMS similarity; and second, the lag time between the traces in ms. The FFR threshold was marked as the minimum intensity at which maximal correlation was obtained at approximately 0 ms lag time. The peak of maximum correlation had to be a single peak exceeding 50% for the trace to be judged as the FFR threshold.
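A sketch of this threshold search is given below, reusing the normalized cross-correlation computed earlier. The descent logic, the 1-ms lag tolerance, and the data structure are assumptions of the sketch; the requirement that the correlation peak be a single dominant peak still needs visual confirmation.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def ffr_threshold(traces_by_intensity, fs=8000.0, min_corr=0.50, max_lag_ms=1.0):
    """Lowest intensity (dBHL) at which the two repeated traces still
    correlate above `min_corr` at approximately zero lag.

    traces_by_intensity : dict mapping intensity in dBHL -> (trace1, trace2)
    """
    threshold = None
    for level in sorted(traces_by_intensity, reverse=True):      # 70 dBHL downward
        a, b = (np.asarray(x, dtype=float) for x in traces_by_intensity[level])
        an = (a - a.mean()) / a.std()
        bn = (b - b.mean()) / b.std()
        r = correlate(an, bn, mode="full") / min(len(an), len(bn))
        lags = correlation_lags(len(an), len(bn), mode="full")
        k = int(np.argmax(r))
        lag_ms = 1000.0 * lags[k] / fs
        if r[k] > min_corr and abs(lag_ms) <= max_lag_ms:
            threshold = level                                    # criterion still met
        else:
            break                                                # stop at first failure
    return threshold
```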

A grand average was constructed from traces of the same intensity across patients to create a template for peak marking. First, the average wave V latency of the individual traces at a given intensity was calculated to give the wave V latency of the grand average; the individual traces were then aligned at this average latency before averaging.
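A minimal sketch of this alignment-and-averaging step follows, assuming equal-length traces and a common sampling rate; padding the shifted-out samples with zeros is an assumption of the sketch.

```python
import numpy as np

def grand_average(traces, v_latencies_ms, fs=8000.0):
    """Align individual traces at the group-mean wave V latency for one
    intensity, then average them sample by sample.

    traces         : list of equal-length 1-D arrays (one per participant)
    v_latencies_ms : wave V latency marked in each individual trace
    Returns (grand-average trace, mean wave V latency in ms).
    """
    mean_latency = float(np.mean(v_latencies_ms))
    aligned = []
    for trace, latency in zip(traces, v_latencies_ms):
        shift = int(round((mean_latency - latency) * fs / 1000.0))
        shifted = np.roll(np.asarray(trace, dtype=float), shift)
        if shift > 0:
            shifted[:shift] = 0.0        # zero the wrapped-around samples
        elif shift < 0:
            shifted[shift:] = 0.0
        aligned.append(shifted)
    return np.mean(aligned, axis=0), mean_latency
```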


Results


Map levels and aided free-field thresholds

[Table 1] shows the mean and SDs of threshold (T) and most comfortable (C) electrical stimulation levels in charge units (qu) in final maps and aided free-field thresholds.
Table 1 The mean and SD of T and C electrical stimulation levels in qu for final maps and aided free-field thresholds



Speech auditory brainstem response morphology

Speech auditory brainstem response grand average

The response consisted of an early segment (wave V peak followed by wave A trough) and a later segment (series of vertex-negative peaks, which represent the FFR). The most prominent FFR troughs in the grand averages, particularly that at 70 dBHL, were waves C, D, and E. Waves B, F, and O of FFR in normal individuals were not detected. [Figure 1] shows labeled grand averages for speech ABR at 70, 50, and 30 dBHL. [Table 2] shows speech ABR grand average latencies and amplitudes of the response segments at 70 dBHL.
Figure 1 The grand average for speech ABR responses at 70 (a), 50 (b), and 30 (c) dBHL with the responses aligned at wave V. The grand averages had mean wave V latencies of 2.59±0.7, 3.2±1, and 4.3±0.7 ms, respectively. ABR, auditory brainstem response; RMS, root mean square.

Table 2 Speech auditory brainstem response grand average latencies and amplitudes of the response segments at 70 dBHL



Speech auditory brainstem response individual traces

Waveform reproducibility: The response morphology to the /da/ stimulus was maintained across the input–output intensity function (70–30 dBHL). Speech ABR trace reproducibility was maximal at high and moderate intensities (reaching 99.65% at 60 dBHL). [Table 3] shows the speech ABR mean trace reproducibility (%) and mean lag time (ms) at high, moderate, and low stimulus intensity levels. [Figure 2] shows the normalized correlation between two traces of the same intensity. [Figure 3] shows the normalized correlation between a true trace and an averaged raw waveform recorded with the processor turned off ([Figure 3]a) and with no stimulus input ([Figure 3]b). The latter two averaged traces served as controls (no stimulus fed to the brainstem pathway).
Table 3 Speech auditory brainstem response mean trace reproducibility (%) and mean lag time (ms) at high, moderate, and low stimulus intensity levels

Figure 2 Speech ABR trace reproducibility showing two traces recorded at 70 dBHL (a), at 30 dBHL (b) for the same patient, and their normalized correlation. Maximum correlation is indicated by the blue arrow. ABR, auditory brainstem response; RMS, root mean square.

Figure 3 (a) Normalized correlation between two traces recorded for the same patient, upper trace at 60 dBHL; implant turned on and lower trace; implant turned off. (b) Normalized correlation between two traces recorded in the same patient, upper at 60 dBHL and lower at 0 dBHL. RMS, root mean square.



Speech ABR thresholds ranged from 30 to 50 dBHL across participants. At threshold, the full morphology (wave V and FFR) of the speech ABR was detected in seven participants. In one participant, only wave V was present at threshold, whereas in two others only the FFR was present. [Figure 4] shows the respective trace morphologies. [Table 4] shows the speech ABR mean thresholds for wave V and the FFR in dBHL.
Figure 4 Speech ABR intensity I/O function for individual patients. (a) A case in which wave V and FFR thresholds were similar. (b) A case in which wave V threshold was better than FFR. (c) A case in which FFR threshold was better than wave V. FFR threshold is marked by an asterisk. ABR, auditory brainstem response; FFR, frequency following response; I/O, input–output.

Table 4 Speech auditory brainstem response mean thresholds for wave V and frequency following response (dBHL)



Latency of speech auditory brainstem response wave V: The mean latency of wave V was 2.59±0.7 ms at 70 dBHL, with a range of 1.81–4.82 ms. [Figure 5]a shows the speech ABR wave V latency-intensity function with the best linear fitted lines. The slope of the average best linear fitted line was 0.038.
Figure 5 (a) Speech ABR wave V latency-intensity function scatter diagram, (b) speech ABR response RMS amplitude-intensity function scatter diagram with best linear fitted lines (solid for individual cases and an average dashed bold line). ABR, auditory brainstem response; RMS, root mean square.



Root mean square amplitude of the speech auditory brainstem response: [Figure 5]b shows the speech ABR RMS amplitude-intensity function with best linear fitted lines. The slope of the average best linear fitted line was 0.091. [Figure 6] shows the mean RMS amplitude ratio of wave V to the FFR (V/FFR), in percentage, at different stimulus intensities. The ratio was greater than 1 at 40 and 50 dBHL.
Figure 6 The mean interamplitude (RMS) ratio of speech ABR wave V to FFR in percentage at different stimulus intensities. ABR, auditory brainstem response; FFR, frequency following response; RMS, root mean square.



Stimulus-to-response correlations: The vowel segment /a/ of the stimulus was correlated with the FFR segment of the response at 60 and 70 dBHL; these intensities were chosen because their grand averages displayed the best morphology. Maximum correlation was based on the best morphology and optimum RMS similarity searched across different lag times. The mean stimulus–response correlation was 18.1±3.1%, ranging from 12.66 to 25.88%. The mean lag time between stimulus and response was 6±7.6 ms, ranging from −3.625 to 24 ms. [Figure 7] shows the normalized correlation between the response (FFR) and the stimulus (/a/) at 70 dBHL; the maximum correlation was 25.88%, attained at an 11-ms lag time. [Figure 8] shows a bar chart of the stimulus–response correlation (%) at the two stimulus intensities of 60 and 70 dBHL.
Figure 7 Normalized correlation between the response (FFR) and the stimulus (/a/) at 70 dBHL. (a) FFR segment of speech ABR response at 70 dBHL for a case. (b) /a/ segment of the /da/ stimulus. (c) Normalized correlation between the FFR and the /a/ with maximum correlation of 25.88% attained at 11 ms lag time (arrow). ABR, auditory brainstem response; FFR, frequency following response; RMS, root mean square.

Figure 8 Bar chart of /a/ to FFR correlation in percentage at two stimulus intensities of 60 and 70 dBHL. FFR, frequency following response.



Radiological profiles

[Table 5] shows measurements of anatomical structures depicted in postoperative CT scan: cochlear length, internal auditory canal diameter, electrode array angular insertion, and distribution of electrodes along the basal, middle, and apical turns.
Table 5 Postoperative computed tomographic scan measurements of the cochlea, internal auditory canal, and electrode array




Discussion


In the present study, acoustic speech ABR was recorded in 10 CI children. Responses were evaluated for intertrace reproducibility, stimulus-to-response correlation, the latency-intensity function, and the RMS amplitude-intensity input–output function.

Speech auditory brainstem responses

The presence of highly reproducible transient and sustained neural responses to /da/ in CI individuals suggests that the neural codes provoked by the CI faithfully transcribe the speech signal. The response shows many fine details compared with speech ABR in normal cochleae [8],[31]. These details may represent the difference between auditory nerve firing provoked by electrical stimulation through the CI and that provoked by acoustic stimulation through the traveling wave. Electrical stimulation of the auditory nerve produces a deterministic firing pattern that is tightly phase locked to the stimulus; this phase-locked response follows the all-or-none rule of the nerve action potential [32],[33],[34]. In contrast, acoustic stimulation through cochlear transduction produces stochastic firing with unequal intervals between the peaks, owing to the probabilistic nature of the hair cell-neuron connection [35],[36],[37]. In addition, the deterministic nature of electrical stimulation and the tight phase locking explain the high amplitude of the waves ([Table 2]) compared with norms [29].

Variability in morphology, latency, and amplitude was noted among implanted patients. The grand average constructed at each intensity limited this variation among individuals. Peak and trough picking in the grand average showed wave V and the FFR waves C, D, and E described in norms. In literature norms, the FFR represents phase locking to the fundamental frequency of the stimulus: it occurs in response to the periodic information present in the vowel at the frequency of the sound source (i.e. the glottal pulse). Accordingly, the period between peaks D, E, and F of the FFR corresponds to the fundamental frequency of the stimulus. Wave C marks the transition from the consonant /d/ to the vowel /a/, whereas waves D, E, and F represent phase locking to the first formant [29]. The absence of wave F in CI individuals, although it is prominent in norms, may be attributed to the following:


  1. Limited coding of the higher frequencies of the first formant through the CI due to frequency-place shift of the apical electrodes. The latter shift is promoted by deeper electrode insertion and/or larger angles of insertion [38].
  2. The band-pass filter used in the present study to record the responses was narrower (30–500 Hz) than that used in literature norms (70–2000 Hz). The bandwidth selected in the study may hinder the recording of the first formant higher frequencies.


Speech ABR as reported in the literature has been limited to grand average morphologies at moderate intensity. In the present study, grand averages as well as individual responses are reported for the wave V and FFR latency-intensity functions. The variability, expressed in the SDs and wide range of wave V latency in CI individuals, may be an expression of variable neural survival [39],[40],[41],[42],[43] and/or differential electrical stimulation levels in the different cochlear turns [44].

The grand average latency of speech ABR wave V in normal individuals is reported to be 6.6 ms when recorded using insert phones in 8–12-year-old children [8],[29]. The electrical wave V latency recorded using biphasic pulses on single electrodes is 3–4 ms [45],[46],[47],[48]. In the present study, the grand average latency of wave V recorded in CI individuals using a speaker was 2.59 ms at 70 dBHL. This early onset is attributed to bypassing the cochlear traveling wave delay, which is about 6 ms [47]. The earlier acoustic speech ABR wave V latency in CIs compared with the electrical wave V latency is explained by summation and overlap of electrical fields caused by complex signals: the acoustic speech stimulus leads to simultaneous and overlapping stimulation of most electrodes, and this simultaneous stimulation of multiple electrodes produces synchronous neural firing and earlier latencies.

There was a clear growth of the RMS amplitude with the increase in acoustic signal intensity (refer to [Figure 5]b). The growth function of the biological neural responses may be an indication of the following:

  1. Appreciable neural density with subsequently increased voltage capacity.
  2. The decreased RMS amplitude at low intensities or for elevated thresholds indicates decreased neural firing in the former and less surviving neural population in the latter.


The correlation of the vowel /a/ with the FFR was generally similar to the norms reported in the literature. The range of FFR–vowel correlation in CI was 12.66–25.88% at 60 and 70 dBHL, which is close to literature norms (20–30%). However, the lag at which maximum correlation occurred spanned a wider range in the current study (−3.625 to 24 ms) than in literature norms (5.6–8.1 ms) [20].

Because the morphology of speech ABR response in CI individuals mimics the speech signal in its transient and sustained portions, brainstem responses to complex stimuli are viewed as biomarkers for encoding a speech syllable in the subcortical auditory system. Speech ABR in individuals with CI showed both rapid deflections (wave V) and some of the discrete peaks corresponding to the periodic peaks of the stimulus waveform in a robust manner.

Role of the fine structure 4 strategy, speech auditory brainstem response RMS amplitude, and lag time in view of the present research

The FS4 strategy was implemented in the current research. In this strategy, the input signal is band-pass filtered and fed into channels that stimulate the electrode array tonotopically placed in the cochlea. In the low-frequency channels, the fine structure is encoded by stimulating the first four apical electrodes at a rate equal to the instantaneous frequency of the signal, with the amplitude of the biphasic pulses equal to its instantaneous envelope. In this temporally weighted strategy, phase-locked stimulation is emphasized, simulating the normal cochlea and extending the low-frequency information conveyed to the apical portion of the cochlea up to 970 Hz [49]. This explains why the speech ABR FFR morphology and reproducibility in CI individuals approximate those of normal individuals. Muller et al. [50] reported that the FS4 strategy improves vowel identification and speech understanding in CI individuals owing to phase-locking mechanisms. The high-frequency channels in the basal electrodes process signals according to the continuous interleaved sampling principle, in which the envelope of the signal is amplitude modulated at a constant rate [51]. This principle is applied to the remaining eight electrodes and simulates the place theory of frequency coding in the normal cochlea.
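To make the principle concrete, the toy sketch below extracts an instantaneous envelope and zero-crossing-based pulse timing for one low-frequency channel. It is only a conceptual illustration of fine-structure coding as described above, under assumed band edges and filter choices; it is not MED-EL's FS4 implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def fine_structure_channel(signal, fs, band=(100.0, 350.0)):
    """Toy fine-structure coding for one apical channel: pulses are issued
    at the channel's rising zero crossings (so the pulse rate tracks the
    instantaneous frequency) with amplitudes set by the instantaneous
    envelope.  Band edges and filter order are illustrative assumptions.
    Returns (pulse times in seconds, pulse amplitudes).
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    channel = filtfilt(b, a, np.asarray(signal, dtype=float))
    envelope = np.abs(hilbert(channel))                     # instantaneous envelope
    rising = np.where(np.diff(np.sign(channel)) > 0)[0]     # rising zero crossings
    return rising / fs, envelope[rising]
```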

RMS similarity and lag time were used to evaluate trace reproducibility in addition to the stimulus–response correlation. Because these parameters reflected response consistency, they may provide a prognostic measure of neural speech encoding in CI. Assessment of intertrace correlation based on RMS and lag time gauges the strength of the phase-locking abilities of the auditory nerve and the brainstem. It may also provide useful information about the effectiveness of a particular coding strategy in transducing low-frequency signals into neural codes.

Electrode array insertion angle and depth

The standard electrode array used in our cases is 31 mm in length, which allows an insertion angle of approximately 720° [52]. A long electrode allows stimulation of more apical regions of the cochlea, with better coding of the low-frequency information in the vowel /a/. The FFR should therefore display most of the fundamental and formant frequencies contained in the stimulus.


Conclusion


  1. Brainstem auditory responses provoked by CI signal transduction faithfully transcribe the complex input signal.
  2. Response lag time and RMS are reasonable biomarkers for response consistency.
  3. User-friendly software programs for clinical implementation will provide a valuable tool to assess CI signal transduction.


Acknowledgements

The authors thank the Cochlear Implant Unit, Faculty of Medicine, Alexandria University, Egypt, for providing the participants of the study. They also thank the laboratory of Dr Nina Kraus for providing the brainstem toolbox.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

 
References

1. Clark G. The multi-channel cochlear implant and the relief of severe-to-profound deafness. Cochlear Implants Int 2012; 13:69–85.
2. Wible B, Nicol T, Kraus N. Abnormal neural encoding of repeated speech stimuli in noise in children with learning problems. Clin Neurophysiol 2002; 113:485–494.
3. Wible B, Nicol T, Kraus N. Atypical brainstem representation of onset and formant structure of speech sounds in children with language-based learning problems. Biol Psychol 2004; 67:299–317.
4. Johnson KL, Nicol TG, Zecker SG, Kraus N. Auditory brainstem correlates of perceptual timing deficits. J Cogn Neurosci 2007; 19:376–385.
5. Abrams DA, Nicol T, Zecker SG, Kraus N. Auditory brainstem timing predicts cerebral asymmetry for speech. J Neurosci 2006; 26:11131–11137.
6. Kraus N, McGee TJ, Carrell TD, Zecker SG, Nicol TG, Koch DB. Auditory neurophysiologic responses and discrimination deficits in children with learning problems. Science 1996; 273:971–973.
7. Russo NM, Nicol TG, Zecker SG, Hayes EA, Kraus N. Auditory training improves neural timing in the human brainstem. Behav Brain Res 2005; 156:95–103.
8. Russo N, Nicol T, Musacchia G, Kraus N. Brainstem responses to speech syllables. Clin Neurophysiol 2004; 115:2021–2030.
9. Kraus N, Banai K. Auditory-processing malleability: focus on language and music. Curr Dir Psychol Sci 2007; 16:105–110.
10. Song JH, Banai K, Kraus N. Brainstem timing deficits in children with learning impairment may result from corticofugal origins. Audiol Neurootol 2008; 13:335–344.
11. Banai K, Nicol T, Zecker SG, Kraus N. Brainstem timing: implications for cortical processing and literacy. J Neurosci 2005; 25:9850–9857.
12. Chandrasekaran B, Hornickel J, Skoe E, Nicol T, Kraus N. Context-dependent encoding in the human auditory brainstem relates to hearing speech in noise: implications for developmental dyslexia. Neuron 2009; 64:311–319.
13. Wible B, Nicol T, Kraus N. Correlation between brainstem and cortical auditory processes in normal and language-impaired children. Brain 2005; 128:417–423.
14. King C, Warrier CM, Hayes E, Kraus N. Deficits in auditory brainstem pathway encoding of speech sounds in children with learning problems. Neurosci Lett 2002; 319:111–115.
15. Hornickel J, Kraus N. Unstable representation of sound: a biological marker of dyslexia. J Neurosci 2013; 33:3500–3504.
16. Wong PC, Skoe E, Russo NM, Dees T, Kraus N. Musical experience shapes human brainstem encoding of linguistic pitch patterns. Nat Neurosci 2007; 10:420–422.
17. Hayes EA, Warrier CM, Nicol TG, Zecker SG, Kraus N. Neural plasticity following auditory training in children with learning problems. Clin Neurophysiol 2003; 114:673–684.
18. Anderson S, Skoe E, Chandrasekaran B, Kraus N. Neural timing is linked to speech perception in noise. J Neurosci 2010; 30:4922–4926.
19. Song JH, Banai K, Russo NM, Kraus N. On the relationship between speech- and nonspeech-evoked auditory brainstem responses. Audiol Neurootol 2006; 11:233–241.
20. Cunningham J, Nicol T, Zecker SG, Bradlow A, Kraus N. Neurobiologic responses to speech in noise in children with learning problems: deficits and strategies for improvement. Clin Neurophysiol 2001; 112:758–767.
21. Song JH, Nicol T, Kraus N. Test-retest reliability of the speech-evoked auditory brainstem response. Clin Neurophysiol 2011; 122:346–355.
22. Rocha-Muniz CN, Befi-Lopes DM, Schochat E. Sensitivity, specificity and efficiency of speech-evoked ABR. Hear Res 2014; 317:15–22.
23. Moller AR. Neural mechanisms of BAEP. Electroencephalogr Clin Neurophysiol Suppl 1999; 49:27–35.
24. Galbraith GC, Amaya EM, de Rivera JM, Donan NM, Duong MT, Hsu JN, et al. Brain stem evoked response to forward and reversed speech in humans. Neuroreport 2004; 15:2057–2060.
25. Krishnan A, Xu Y, Gandour JT, Cariani PA. Human frequency-following response: representation of pitch contours in Chinese tones. Hear Res 2004; 189:1–12.
26. Ananthakrishnan S, Krishnan A, Bartlett E. Human frequency following response: neural representation of envelope and temporal fine structure in listeners with normal hearing and sensorineural hearing loss. Ear Hear 2016; 37:e91–e103.
27. Krishnan A. Human frequency-following responses: representation of steady-state synthetic vowels. Hear Res 2002; 166:192–201.
28. Aiken SJ, Picton TW. Envelope and spectral frequency-following responses to vowel sounds. Hear Res 2008; 245:35–47.
29. Banai K, Abrams D, Kraus N. Sensory-based learning disability: insights from brainstem processing of speech sounds. Int J Audiol 2007; 46:524–532.
30. Anderson S, Kraus N. Sensory-cognitive interaction in the neural encoding of speech in noise: a review. J Am Acad Audiol 2010; 21:575–585.
31. Johnson KL, Nicol TG, Kraus N. Brain stem response to speech: a biological marker of auditory processing. Ear Hear 2005; 26:424–434.
32. Clark GM. Hearing due to electrical stimulation of the auditory system. Med J Aust 1969; 1:1346–1348.
33. Clark GM. Middle ear and neural mechanisms in hearing and the management of deafness [thesis]. Sydney, New South Wales: University of Sydney; 1970.
34. Clark GM. Responses of cells in the superior olivary complex of the cat to electrical stimulation of the auditory nerve. Exp Neurol 1969; 24:124–136.
35. Paolini A, Clark GM. The effect of pulsatile intracochlear electrical stimulation on intracellularly recorded cochlear nucleus neurons. In: Clark GM, editor. Cochlear implants: XVI World Congress of Otorhinolaryngology Head and Neck Surgery. Bologna, Italy: Monduzzi Editore; 1997. pp. 119–124.
36. Siebert WM. Frequency discrimination in the auditory system: place or periodicity mechanisms? Proc IEEE Inst Electr Electron Eng 1970; 58:723–730.
37. Burkitt AN, Clark GM. Synchronization of the neural response to noisy periodic synaptic input. Neural Comput 2001; 13:2639–2672.
38. Schatzer R, Vermeire K, Visser D, Krenmayr A, Kals M, Voormolen M, et al. Electric-acoustic pitch comparisons in single-sided-deaf cochlear implant users: frequency-place functions and rate pitch. Hear Res 2014; 309:26–35.
39. Kikkawa YS, Nakagawa T, Ying L, Tabata Y, Tsubouchi H, Ido A, et al. Growth factor-eluting cochlear implant electrode: impact on residual auditory function, insertional trauma, and fibrosis. J Transl Med 2014; 12:280.
40. Sly DJ, Hampson AJ, Minter RL, Heffer LF, Li J, Millard RE, et al. Brain-derived neurotrophic factor modulates auditory function in the hearing cochlea. J Assoc Res Otolaryngol 2013; 13:1–16.
41. Gillespie LN, Zanin MP, Shepherd RK. Cell-based neurotrophin treatment supports long-term auditory neuron survival in the deaf guinea pig. J Control Release 2015; 198:26–34.
42. Fransson A, Jarlebark LE, Ulfendahl M. In vivo infusion of UTP and uridine to the deafened guinea pig inner ear: effects on response thresholds and neural survival. J Neurosci Res 2009; 87:1712–1717.
43. Landry TG, Wise AK, Fallon JB, Shepherd RK. Spiral ganglion neuron survival and function in the deafened cochlea following chronic neurotrophic treatment. Hear Res 2011; 282:303–313.
44. Firszt JB, Chambers RD, Kraus N, Reeder RM. Neurophysiology of cochlear implant users I: effects of stimulus current level and electrode site on the electrical ABR, MLR, and N1-P2 response. Ear Hear 2002; 23:502–515.
45. Gyo K, Yanagihara N. Electrically and acoustically evoked brain stem responses in guinea pig. Acta Otolaryngol 1980; 90:25–31.
46. Starr A, Brackmann DE. Brain stem potentials evoked by electrical stimulation of the cochlea in human subjects. Ann Otol Rhinol Laryngol 1979; 88:550–556.
47. Guiraud J, Gallego S, Arnold L, Boyle P, Truy E, Collet L. Effects of auditory pathway anatomy and deafness characteristics? (1): on electrically evoked auditory brainstem responses. Hear Res 2007; 223:48–60.
48. Lundin K, Stillesjo F, Rask-Andersen H. Prognostic value of electrically evoked auditory brainstem responses in cochlear implantation. Cochlear Implants Int 2015; 16:254–261.
49. Riss D, Hamzavi JS, Blineder M, Honeder C, Ehrenreich I, Kaider A, et al. FS4, FS4-p, and FSP: a 4-month crossover study of 3 fine structure sound-coding strategies. Ear Hear 2014; 35:e272–e281.
50. Muller J, Brill S, Hagen R, Moeltner A, Brockmeier SJ, Stark T, et al. Clinical trial results with the MED-EL fine structure processing coding strategy in experienced cochlear implant users. J Otorhinolaryngol Relat Spec 2012; 74:185–198.
51. Wilson BS, Finley CC, Lawson DT, Wolford RD, Eddington DK, Rabinowitz WM. Better speech recognition with cochlear implants. Nature 1991; 352:236–238.
52. Brill S, Muller J, Hagen R, Moltner A, Brockmeier SJ, Stark T, et al. Site of cochlear stimulation and its effect on electrically evoked compound action potentials using the MED-EL standard electrode array. Biomed Eng Online 2009; 8:40.




 
