IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, VOL. 20, NO. 4, JULY 2012

Towards a Closed-Loop Cochlear Implant System: Application of Embedded Monitoring of Peripheral and Central Neural Activity

Myles Mc Laughlin, Thomas Lu, Andrew Dimitrijevic, and Fan-Gang Zeng, Fellow, IEEE

Abstract—Although the cochlear implant (CI) is widely considered the most successful neural prosthesis, it is essentially an open-loop system that requires extensive initial fitting and frequent tuning to maintain a high, but not necessarily optimal, level of performance. Two developments in neuroscience and neuroengineering now make it feasible to design a closed-loop CI. One development is the recording and interpretation of evoked potentials (EPs) from the peripheral to the central nervous system. The other is the embedded hardware and software of a modern CI that allows recording of EPs. We review EPs that are pertinent to behavioral functions from simple signal detection and loudness growth to speech discrimination and recognition. We also describe signal processing algorithms used for electric artifact reduction and cancellation, critical to the recording of electric EPs. We then present a conceptual design for a closed-loop CI that utilizes in an innovative way the embedded implant receiver and stimulators to record short latency compound action potentials (<1 ms), auditory brainstem responses (1-10 ms) and mid-to-late cortical potentials (20-300 ms). We compare EPs recorded using the CI to EPs obtained using standard scalp electrode recording techniques. Future applications and capabilities are discussed in terms of the development of a new generation of closed-loop CIs and other neural prostheses.

Index Terms—Auditory evoked potentials, closed-loop system, cochlear implants, electric stimulation, electrophysiology.

I. INTRODUCTION

A COCHLEAR IMPLANT (CI) partially restores hearing in deaf individuals by electrically stimulating the auditory nerve via an electrode array implanted in the cochlea. An external behind-the-ear (BTE) processor runs a speech processing strategy which controls this electrical stimulation. A radio-frequency (RF) link allows for two-way communication between the internal receiver and the external BTE processor. Accurate control of the electrical stimulation is critical in delivering acoustic information to the auditory nerve which can then be interpreted by the brain.

Manuscript received August 01, 2011; revised November 03, 2011; accepted January 14, 2012. Date of publication February 06, 2012; date of current version July 03, 2012.

M. Mc Laughlin is with the Department of Otolaryngology—Head and Neck Surgery, University of California, Irvine, CA 92697 USA and also with the Trinity Centre for Bioengineering, Trinity College, Dublin, Ireland. T. Lu is with the Department of Otolaryngology—Head and Neck Surgery, University of California, Irvine, CA 92697 USA. A. Dimitrijevic is with the Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Department of Otolaryngology, Head and Neck Surgery, University of Cincinnati, Cincinnati, OH 45229 USA. F.-G. Zeng is with the Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, and Otolaryngology—Head and Neck Surgery, University of California, Irvine, CA 92697 USA.

Digital Object Identifier 10.1109/TNSRE.2012.2186982

TABLE I: OUTLINE OF THE KEY STEPS IN THE FITTING PROCEDURE
In fact, the development of processing strategies which could effectively deliver speech information to the brain [1]–[3] was one of the key elements in the success story of modern CIs.

The process of "fitting" or "mapping" the CI (see Table I for a summary of the different steps) is carried out by an audiologist and involves carefully selecting the correct speech processing strategy and setting the electrical stimulation parameters for each individual user. Properly fitting the CI is essential if the recipient is to successfully understand speech. Currently, most of the fitting steps are done in an open-loop system: The audiologist stimulates a CI electrode, elicits a verbal response, and accordingly adjusts a setting on the BTE processor (see Fig. 1, primary open-loop route). There are a number of disadvantages associated with this open-loop method. First, it is time consuming for both the audiologist and the CI user. Fitting a CI can last anywhere from 10 min to a couple of hours, and as the optimal settings for each individual user can change during the first few months of use [4]–[6] the fitting process is often repeated. Second, in an open-loop system there is no effective way to determine each user's optimal settings for speech recognition. The stimulation current level that just elicits an auditory percept (threshold or T level) and that which is most comfortable (comfort or C level) are relatively easy to determine behaviorally. However, when postlingually deafened adults are first fitted with a CI they go through a relearning period [7]–[10] where the brain learns to interpret
Fig. 1. Potential loops in a closed-loop cochlear implant system. The BTE processor contains the settings and speech processing strategy used to control the electrical stimulation. Proper control of these settings is a key element in the CI user's ability to understand speech. These parameters are currently set by the audiologist in an open-loop system based on behavioral responses (primary open-loop route) and evoked potential measures (secondary open-loop route). Evoked potentials of the auditory brainstem or auditory cortex can be measured using scalp electrodes, or from the auditory nerve by using the stimulating electrodes of the CI as recording electrodes. Potential routes for a closed-loop CI system are suggested.

the spectrally impoverished electrical stimulation delivered by the speech processing strategy as meaningful auditory input. This relearning period can last anywhere from a few weeks to a year or more. Therefore, simply changing speech processing strategies and then testing the CI user's speech perception is neither an effective nor efficient way of choosing the correct stimulation parameters, as the brain needs time to adjust to the new input. The final disadvantage of an open-loop system is that it requires verbal feedback from the CI user. As CI technology advances, it is quickly becoming the standard treatment for children who are born with severe to profound hearing loss in the developed world [11]. A number of studies report and recommend implantation in very young children [12], [13] but obtaining meaningful verbal responses in these children is difficult and sometimes impossible.
A closed-loop CI, with access to neural responses at multiple levels along the auditory pathway, could in theory perform these tasks automatically and resolve many of these issues.

In fact, the development of the two-way communication between the CI and BTE processor means that the electrodes normally used to stimulate the cochlea can be used as recording electrodes to obtain electric compound action potentials (ECAPs) from the auditory nerve [14], the first stage in the auditory neural pathway. Scalp electrodes can be used to monitor neural activity further along the auditory pathway in the brainstem or auditory cortex [15]. Therefore, the audiologist does have access to a number of evoked potential (EP) measures of auditory neural activity and, particularly in pediatric populations, these measures are sometimes used to guide the fitting procedure [16] (see Fig. 1, secondary open-loop routes).

Here, we present a new method for recording longer latency EPs using only the CI. In addition to the intracochlear electrodes used for stimulation, all modern CIs have extra-cochlear electrodes which are used to return the current in monopolar stimulation mode. We demonstrate that these extra-cochlear electrodes can also be used to record neural activity at higher levels such as the brainstem and the auditory cortex. This new technology has the potential to streamline the fitting process by making objective measures of brainstem and cortex function easily available to audiologists. It is also an important step towards being able to design a closed-loop CI system. Conceptually, a closed-loop CI system, with an extended ability to monitor neural activity at multiple stages along the auditory pathway and dynamically adjust the electrical stimulation (see Fig.
1, potential closed-loop routes), could address many of the limitations of the current open-loop system.

In the next sections, we review the theoretical and practical methods of EP recording and discuss how they can be applied using our new extra-cochlear electrode recording (EER) technique. Section II reviews the standard EP methods used in CI subjects to assess neural activity at different levels in the auditory pathway. In Section III we discuss the important issue of artifact cancellation and reduction. In Section IV we give details of how our new EER technique is applied and how EER measurements compare with scalp recorded potentials. Finally, in Section V we discuss how the EER technique could be integrated into a closed-loop CI system, how EER technology could be used to monitor neural activity at multiple levels in the auditory pathway, and how it could be used to dynamically adjust the electrical stimulation.

II. EVOKED POTENTIALS IN THE ELECTRICALLY STIMULATED AUDITORY PATHWAY

EP techniques are currently used to measure neural activity at different stages in the electrically stimulated auditory pathway. We discuss the clinical applications along with the strengths and weaknesses of each technique, and also the differences in EPs between acoustic and electric stimulation wherever appropriate.

A. Auditory Nerve

In acoustic hearing the compound action potential (CAP) represents the sum of auditory nerve activity to an acoustic stimulus. A large portion of the auditory nerve is encased in the dense temporal bone, making it difficult to get good quality noninvasive CAP measurements. In animal studies CAP measurements are typically performed invasively by placing a ball electrode on the exposed auditory nerve trunk or in the cochlea. In people with a CI, intracochlear electrodes can be used to obtain high quality ECAP measurements which are simply not possible in non-implanted people.
The main difficulty in recording ECAPs is that the auditory nerve response occurs within 1 ms of the onset of electrical stimulation (earlier than for acoustic stimulation), meaning that it overlaps in time with the stimulus artifact. A number of techniques for separating the artifact from the ECAP response have been developed and are discussed in Section III. CI manufacturers have implemented and automated these methods [17], making the recording of ECAPs
in the clinic relatively straightforward with no additional conventional EP equipment required. The commercialization of these techniques means that ECAPs are the most widely used objective measure by audiologists working with CI patients. ECAP responses can be used to help guide the choice of comfort and threshold levels. The success of this approach has been limited by intra- and inter-subject variability [18], [19]. However, combining ECAP measurements with a limited amount of behavioral data can give a reasonable estimate of comfort and threshold levels across all electrodes [20]. ECAPs are also a useful research tool and can be used to assess the spread of electrical excitation within the cochlea [21], [22], an issue limiting the success of current CIs [23].

ECAPs from the auditory nerve represent the first encoding stage in the neural auditory pathway. They do not require the subject to be attentive and can be recorded when the listener is sleeping or sedated. As they are not affected by muscle activity, they can also be recorded when the subject is moving. The ECAP offers a direct assessment of frequency-specific information (cochlear place), which can be more difficult to assess with scalp recording techniques, and has a relatively large amplitude. While cortical and brainstem responses are known to differ greatly in both latency and morphology between adults and young children [24], [25], ECAPs are reported to be relatively similar between these populations [26], making them easier to interpret in young children or people with developmental disorders (both are patient groups which receive CIs). All these properties make ECAPs a good candidate neural response for estimating threshold and comfort loudness levels. However, the lack of influence of high-level neural processing on ECAPs means that they may not be the ideal candidate for predicting more complex outcomes like speech perception [27].
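The idea of combining ECAP measurements with limited behavioral data can be sketched as a simple regression: behavioral levels are measured on a few electrodes, a linear map from ECAP threshold to behavioral level is fitted, and the map fills in the remaining electrodes. The sketch below is illustrative only; the data, the 6-electrode array, and the degree-1 fit are hypothetical assumptions, not the specific procedure of [20].

```python
import numpy as np

def predict_levels(ecap_thresholds, measured_idx, measured_levels):
    """Fit a linear map from ECAP threshold to behavioral level on the
    electrodes where behavioral data exist, then predict all electrodes."""
    x = np.asarray(ecap_thresholds, dtype=float)
    slope, intercept = np.polyfit(x[measured_idx], measured_levels, 1)
    return slope * x + intercept

# Hypothetical ECAP thresholds (clinical current units) on a 6-electrode array
ecap = [185, 190, 188, 195, 200, 198]
# Behavioral T levels measured on electrodes 0 and 5 only
predicted_T = predict_levels(ecap, [0, 5], [160, 171])
```

With more behavioral electrodes the same `polyfit` call becomes a least-squares fit, which absorbs some of the intra-subject variability noted above.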
B. Auditory Brainstem

The next auditory encoding stages occur in the brainstem. The auditory brainstem response (ABR) represents activity from structures in the brainstem, including the auditory nerve, cochlear nucleus, superior olivary complex and inferior colliculus. The maximal latency of the components of the acoustic ABR is restricted to around 10 ms, but electric ABR (EABR) latencies occur a few milliseconds earlier because electric stimulation bypasses the acoustical and mechanical delays present in the middle and inner ear (e.g., the basilar membrane). Because the physiological latency of wave V in EABRs is longer than the physiological latency of the ECAP response, EABRs are technically easier to separate from the artifact. The EABR can be recorded by placing one scalp electrode on the vertex or forehead and one on the mastoid (although other configurations are possible) and amplifying the potential difference between these two electrodes. A stimulus is typically repeated 2000 to 4000 times, and the EABR is calculated by averaging the recorded epochs.

Each peak in the averaged ABR is typically labeled Wave I through V, with each wave representing activity at a different site in the brainstem. Wave V is the largest component of the EABR and again, because of the lack of acoustic delay, it occurs earlier in CI subjects than the 5.7 ms typical of normal hearing [28], [29]. It can be used to help predict comfort and threshold levels, and EABR thresholds have been shown to be closer to behavioral thresholds than ECAP thresholds (see Figs. 4 and 5 of [20]).

EABRs represent more central processing than ECAPs and as such can be used to study binaural integration. As more and more people receive bilateral CIs, the question of how to optimize bilateral electrical stimulation so that the brain can fully integrate the information from both ears becomes increasingly important.
Development of an objective measure of bilateral integration is increasingly important as many children are now receiving bilateral CIs. In normal hearing people the binaural interaction component (BIC) can be calculated by subtracting the sum of the left and right monaural ABRs from the binaural ABR [30]. It was shown in implanted cats that the amplitude of the BIC is largest when a bilateral pair of electrodes stimulates the same frequency region [31]. It was hoped that this technique could provide an objective measure of pitch matching and enhanced binaural integration. However, initial results have been disappointing [32], [33], possibly due to the small number of subjects in the study, differences in the psychophysical and electrophysiological stimuli used, or more central processing involved in the psychophysical pitch matching task. Another confounding factor may be the difficulty in calculating the BIC, which is a difference measure and thus highly susceptible to noise. The monaural wave V is a more stable measurement (both in amplitude and latency) and may therefore prove to be a more useful clinical metric. For example, the amplitudes of a monaural wave V, elicited separately from each ear but recorded using the same electrode montage, may be one way to objectively loudness balance bilateral CIs. Note that EABRs collected for a loudness balancing procedure should not be collected using alternating polarity stimuli (see Section III), as anodic and cathodic leading pulses with equal amplitudes will elicit different loudness percepts [34], [35]. Wave V latencies, recorded in the same way, could be used to objectively synchronize the timing of bilateral CIs.

C. Cortex

It is reasonable to infer that examining CI function at higher levels of auditory processing (i.e., cortex) may provide better relationships with speech perception compared to subcortical responses. However, much less work has been devoted to studying cortical potentials in CI subjects.
In general, early or "obligatory" cortical potentials (less than 200 ms) represent initial cortical processing and reflect stimulus attributes, while later latency potentials reflect different degrees of processing of the stimulus, such as discrimination.

The middle latency response (MLR) is a series of peaks of both positive and negative polarity occurring between 15 and 50 ms. The MLR likely represents the first cortical response that can be recorded using scalp electrodes. Although the MLR varies with stimulus intensity, its relationship with speech perception performance has been poor [36], [37]. In subjects with poor speech perception, larger degrees of EMLR variability have been found [38]. Firszt et al. [39] found a significant relationship with speech perception using normalized EMLRs as a function of threshold and dynamic range. Some disadvantages
of the MLR are that it is prone to muscle artifacts (reviewed in [40]) and to variability in children (reviewed in [41]).

In contrast to the MLR, the P100 is a prominent positive component peaking between 100 and 200 ms in children, before the N100 begins to develop near 10 years of age [42]. The P100 has been shown to be a useful tool in characterizing the developmental changes associated with CI use. For example, Ponton et al. [43] have shown that the latency of the response reflects the amount of time the child has used the CI. When accounting for this "time in sound," children with CIs followed a normal pattern of P100 maturity. However, late implanted children may not follow a normal developmental trajectory, which suggests there is a critical period for development [44].

The N100 is recorded as a negative deflection occurring close to 100 ms. It is most often elicited by the onset of a stimulus such as a tone or speech (for review see [45]). In adult CI users, when the N100 is present, it is often similar in morphology to that observed in normal hearing adults, although it may be reduced in amplitude [37], [46]. The N100 change response, elicited by a change within a stimulus that shares acoustic properties with speech, has been shown in a number of studies to relate to speech perception in CI subjects [47]–[49] and may provide an index which relates to speech discrimination ability. However, an abnormal "N100" or "deprivation negativity" has been reported only in subjects with especially poor speech perception scores [50], [51]. Gordon et al.
[50] found that children who had speech perception performance above 90% showed an age-appropriate positivity (P1 of the cortical evoked potential), whereas poor speech performance was associated with a negativity resembling an early N100.

The mismatch negativity (MMN) is seen when subtracting the evoked response to frequent stimuli (standards) from the response to infrequent stimuli (deviants). The derived negativity is thought to be related to the subject's ability to discriminate or to detect a deviant stimulus among standard stimuli [52]. In the context of CI users, reports have used speech contrasts [53]–[55], tones [53] and CI electrode pairs [56]–[58] as stimuli. The general finding is that although the MMN can be recorded in both adult and child CI users, the relationship between MMN and speech perception ability is varied. Kileny et al. [53] found that MMN to tonal contrasts was related to speech perception performance but MMN to speech contrasts was not. Singh et al. [55] found that good CI performers had a higher probability of having an MMN to speech stimuli than poor performers. No significant correlations of MMN amplitude or latency existed with speech perception; however, a significant relationship was observed with MMN duration. Wable et al. [58], using electrode pairs for standards and deviants, found no relationship between speech perception and MMN. In summary, the MMN likely has some predictive ability for speech perception, but the inherent variability of the response, even among normal hearing subjects [59], limits its application.

III. ARTIFACT REDUCTION AND CANCELLATION

Two distinct types of artifact exist in CI EP recordings. One results from the stimulation of the electrodes in the cochlea and is referred to here as the stimulation artifact.
It is visible as a bipolar spike in the EP recordings which typically lasts from tens to hundreds of microseconds, but can be as long as a few milliseconds in duration (exponential decay of the artifact and amplifier recovery from saturation may extend it). The shape of the stimulation artifact is determined by the shape of the stimulation pulse, and it reverses when stimulus polarity is reversed. The second artifact results from the RF communication link between the BTE processor and the CI and may be caused by capacitive coupling between the RF link and the recording leads or electrodes. We refer to this as the RF artifact, and it is often visible as an elevated pedestal or dc component in the EP [60], [61]. It does not reverse with stimulus polarity reversal. Artifact reduction techniques seek to minimize the size of the artifact before it is recorded, while cancellation techniques seek to remove the artifact after it has been recorded. Both techniques are often used in combination to record EPs in CI users.

A. Artifact Reduction Techniques

One of the simplest methods to reduce artifacts is to increase the spatial distance between the recording electrode and the CI, for example by placing the recording electrode on the mastoid contralateral to the CI. The RF artifact can also be reduced by spatially separating the recording leads from the CI to minimize any current which may be induced in the recording leads by the CI. The recording leads should also be kept close together so that any induced artifact will be the same in the reference and recording leads and thus be rejected by the differential amplifier (i.e., common mode rejection). Increasing the temporal separation between the stimulation pulse and the recording epoch is another straightforward and extremely effective method of reducing the stimulus artifact. This can be achieved by using low rate stimulation and recording in the gaps between the stimulus pulses.
However, this approach is not suitable for recording short-latency auditory nerve responses. It is also not useful for recording EPs on CIs running the clinically relevant pulse rates of 1000 Hz or higher. Finally, stimulus artifact reduction can be achieved by careful design of the stimulus pulse. Triphasic pulses can be employed, where the final phase of the pulse cancels or reduces the exponential decay of the artifact [62].

B. Artifact Cancellation Techniques

1) Artifact Template Subtraction: Artifact template subtraction is an effective signal processing technique that removes the stimulus artifact by subtracting a template containing artifact only from the contaminated signal, which contains both artifact and neural response. In CIs and in other neurotechnology applications such as deep brain stimulation [63], estimating the stimulus artifact is difficult. One method is to assume the stimulus artifact scales linearly with current level, record a stimulus artifact at a subthreshold current level, and then linearly scale it to the required level (note that this assumption relates directly to the current level, i.e., the output of the electrode, and not to the acoustic input to the implant, which may undergo compression and other nonlinearities in the speech processing algorithm). In practice the linearity assumption is confounded by nonlinearities in the tissue conductance and recording amplifier. If an artifact-only template is not available, then a model of the shape of the artifact can be assumed (e.g., exponential). By fitting the contaminated response with the assumed artifact function, a
template for subtraction can be obtained (see Fig. 3). The main limitation of this technique is that the assumed artifact function may not accurately describe the artifact. The forward masking paradigm described below is a more sophisticated method for estimating the artifact template that assumes neither template linearity nor a particular functional form.

2) Forward Masking: The basic principle of the forward masking technique [14], [64] is shown in Fig. 2 and works as follows: The response to a probe stimulus alone is recorded, which contains both the neural response and the stimulus artifact. Next, the response to a probe stimulus that was quickly preceded by a masker stimulus is recorded. This contains only the stimulus artifact and no neural response, as the forward masker still subjects the auditory nerve to its absolute refractory period. The artifact-only response is then subtracted from the neural response plus stimulus artifact, leaving only the neural response. The complex forward masking algorithm has been implemented and automated by the major CI manufacturers, making it relatively easy to apply in practice. However, it can still be a time consuming technique and needs to be carefully set up to cleanly remove the stimulus artifact.

3) Alternating Phase: Biphasic pulses (cathodic phase followed by anodic phase) are typically used in CI stimulation. The stimulus pulse can be reversed in polarity so that the anodic phase comes first. If we assume that the stimulus artifact simply reverses in polarity but the neural response does not, then summing the responses to two pulses of opposite phase should cancel the artifact and leave twice the neural response. This technique provides a large reduction in the stimulus artifact but will normally not give complete artifact cancellation ([61] and personal observations) due to asymmetries in the tissue conductance and the amplifier.
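The alternating-phase idea can be illustrated with synthetic data (not real recordings): an artifact modeled as an exponential decay that flips sign with stimulus polarity cancels in the sum of the two averaged responses, while a polarity-invariant neural response doubles. The waveform shapes, amplitudes, and noise level below are arbitrary assumptions chosen only to show the cancellation.

```python
import numpy as np

t = np.arange(0, 5e-3, 1e-5)  # 5 ms epoch sampled every 10 us

def record(polarity, rng):
    """Toy single-trial recording: a decaying stimulus artifact that
    reverses with polarity, plus a polarity-invariant neural response."""
    artifact = polarity * 50e-6 * np.exp(-t / 0.3e-3)           # flips sign
    neural = 2e-6 * np.exp(-((t - 1e-3) ** 2) / (2 * (0.2e-3) ** 2))
    return artifact + neural + rng.normal(0, 0.1e-6, t.size)

rng = np.random.default_rng(0)
cathodic_first = np.mean([record(+1, rng) for _ in range(500)], axis=0)
anodic_first   = np.mean([record(-1, rng) for _ in range(500)], axis=0)

# Summing opposite polarities cancels the artifact, leaving ~2x the
# neural response (peak near 1 ms); each average alone is artifact-dominated
summed = cathodic_first + anodic_first
```

In real recordings the two artifacts are not exact mirror images, which is why, as noted above, the residual artifact is reduced rather than fully cancelled.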
While not critical for artifact cancellation, it should be noted that the neural response to pulses of different polarity is not exactly the same and can result in different loudness percepts [35], [65].

4) Independent Component Analysis: Independent component analysis (ICA) is a blind source separation technique that uses higher order statistics to separate independent sources from signals containing linear mixtures of those sources, with the condition that there must be more observation points than sources [66], [67]. ICA has been used to separate the stimulus and RF artifacts from EP recordings made with multiple scalp electrodes [60], [68], [69]. A limitation of this technique is the need for multiple recording sites, and it therefore requires a subjective evaluation of the presumed artifact scalp topography. ICA is therefore not suitable for the current implementation of EP recordings using embedded cochlear implant hardware described below.

IV. EXTRA-COCHLEAR ELECTRODE RECORDINGS

In spite of much research and promising results regarding speech performance predictions, EP techniques measuring neural responses in the brainstem and auditory cortex are not widely used in the clinic. One reason for this lack of clinical usage is that they require the use of an additional, dedicated EP recording system which takes time to set up and is not usually available in the clinic. This problem, together with the potential applications for a closed-loop CI system, prompted us to develop a technique that uses the extra-cochlear electrodes as recording electrodes for longer latency neural responses, without the need for a separate EP recording system. In this section we describe the EER technique and compare EER responses with those obtained using scalp electrodes at different levels in the auditory pathway in three postlingually deaf CI subjects using the Nucleus 24 (Cochlear Corporation, Australia). Two subjects were female, aged 74 and 69 years, and one was male, aged 77.
All subjects were considered good users (e.g., could use a telephone).

A. Methods

1) Scalp Electrode Recordings and Stimuli: A custom built recording, artifact cancellation and analysis system was used to measure EABRs, EAMLRs, and CEPs. Two recording electrodes were placed on each mastoid, a reference electrode on the forehead and a ground on the nape of the neck. All recordings were amplified and digitized using the Medusa system from Tucker-Davis Technologies (Alachua, FL), consisting of a preamp and A/D converter (RA4PA, 48 dB gain, 25 kHz sampling frequency, 2.2 Hz–7.5 kHz 3 dB frequency response) connected via a fiber optic link to the base station (RA16BA). During a recording session, signals were visualized and averaged online and stored on disk for further offline analysis. All the analysis was performed in Matlab (Mathworks, Natick, MA), and all filters were Butterworth 2nd-order zero-phase filters. Time epochs and filter settings were: 0–10 ms bandpassed at 100–2000 Hz for EABRs, 0–50 ms bandpassed at 50–400 Hz for EAMLRs, and 0–500 ms bandpassed at 3–35 Hz for CEPs.

All stimuli were delivered using the Custom Sound EP software and Cochlear programming pod (Cochlear Corporation, Australia). This system sends a trigger pulse with each stimulus, which was used to trigger our EP recording system. All stimuli were biphasic pulses delivered in monopolar mode via an intra-cochlear electrode (16) and returned through MP1, the extra-cochlear electrode embedded in the temporalis muscle. For EABRs and EAMLRs a single pulse was repeated at a rate of 4.7 Hz, while for CEPs two stimulus settings were used: one pulse repeated at a rate of 1.1 Hz, or a 500 ms burst at 900 pulses/s repeated at a rate of 0.5 Hz. EABR and EAMLR responses represent the average of 4000 repetitions of pulses with alternating polarity (2000 per polarity), while CEP responses are the average of 300 pulses with the same polarity.
All scalp recorded potentials (EABRs, EAMLRs, and CEPs) were inverted before plotting, as is the convention for ABRs.

2) Intra-cochlear Electrode Recordings and Stimuli: To record ECAP responses from the auditory nerve, we used the standard Custom Sound EP implementation of the forward masking paradigm described in [14]. The active electrode for the masker and probe was intra-cochlear electrode 16 and the return electrode was MP1. The active recording electrode was 18 and the indifferent (or reference) electrode was MP2, the second extra-cochlear electrode, located on the implant RF receiver. For ECAPs the single probe pulse was repeated at a rate of 80 Hz.

3) Extra-cochlear Electrode Recordings and Stimuli: For the EERs of longer latency neural responses we adapted the Custom Sound EP implementation of the forward masking paradigm.
Fig. 2. Forward masking paradigm used to obtain electrically evoked compound action potential (ECAP) responses. In the forward masking paradigm, buffer A contains the response to the probe alone. This includes the neural response, stimulus artifact, and amplifier switch-on artifact. The masker puts the nerve in a refractory state, thus buffer B contains only artifact and no neural response. Buffer C contains artifact from the masker and, importantly for our application, any remaining neural response to the masker stimulus. Buffer D contains only amplifier switch-on artifact. The ECAP response is typically calculated as follows: A-(B-(C-D)). To obtain the extra-cochlear electrode recordings (EER) we set the probe pulse amplitude to zero and varied the masker-probe delay to the desired latency we wanted to sample. We then subtracted buffer D from buffer C to obtain the EER.

The active electrode for the masker and probe was still electrode 16 and the return electrode was MP1. However, the active recording electrode was now the extra-cochlear electrode MP2 and the indifferent (or reference) electrode was the intra-cochlear electrode 1. Restrictions in the software meant that we were limited to sampling a recording window of only 1.6 ms in duration. The software also gives control over two delays: the delay between the probe and recording window, and the delay between the masker and the probe (see Fig. 2). However, restrictions in the software meant that we could only adjust the delay between the masker and the probe (not the probe and the recording window) to latencies long enough to record EABRs and CEPs (see Fig. 2). Therefore, we set the probe level to 0 and the masker to the desired stimulation level.
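The buffer arithmetic of the forward masking paradigm, and the EER variant that keeps only C - D, can be sketched with synthetic buffers. Everything below is an illustrative stand-in for real telemetry data; the function names are ours.

```python
import numpy as np

def ecap_forward_masking(buf_a, buf_b, buf_c, buf_d):
    """Standard ECAP derivation: A - (B - (C - D)).

    A: probe alone    = neural response + probe artifact + switch-on artifact
    B: masker + probe = probe artifact + masker contributions (nerve refractory,
                        so no probe-evoked response)
    C: masker alone   = masker artifact + any response to the masker
    D: no stimulus    = amplifier switch-on artifact only
    """
    return buf_a - (buf_b - (buf_c - buf_d))

def eer_from_buffers(buf_c, buf_d):
    """EER variant used in the paper: probe amplitude set to zero, so the
    neural response to the masker survives in C; subtract only D."""
    return buf_c - buf_d

# Synthetic demonstration: build buffers from known ingredients and check
# that the subtraction recovers the probe-evoked neural response.
n = 40
neural = np.exp(-np.arange(n) / 5.0)      # toy probe-evoked response
masker_neural = 0.5 * neural              # toy masker-evoked response
probe_art = 3.0 * np.ones(n)              # toy probe artifact
masker_art = 2.0 * np.ones(n)             # toy masker artifact
switch_on = 0.5 * np.ones(n)              # amplifier switch-on artifact

A = neural + probe_art + switch_on
B = probe_art + masker_art + masker_neural + switch_on
C = masker_art + masker_neural + switch_on
D = switch_on

ecap = ecap_forward_masking(A, B, C, D)   # recovers `neural` exactly
```

Note how the EER variant deliberately keeps the masker-evoked term that the standard ECAP derivation cancels out.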
Then, keeping the delay between the probe and recording window fixed at , we adjusted the delay between the masker and probe to the desired latency (between 1 and 300 ms) minus this fixed delay. The recorded data could then be exported and analyzed in Matlab.

Fig. 3. Cortical extra-cochlear recording artifact cancellation. (a) The extra-cochlear recorded (EER) cortical potential (thin line) contained a large artifact which was well fit by an exponential decay function (thick line). (b) Subtracting the fitted exponential function revealed the neural response (thin line), which was further smoothed using a 3 point running average (thick line).

For the EER recordings we always used the data from buffer C in the forward masking paradigm and subtracted the amplifier switch-on artifact contained in buffer D (see Fig. 2). To capture the complete EABR we sampled a number of overlapping time windows between 1 and 8 ms by sequentially shifting the 1.6 ms window and then patching these responses together. Specifically, we set the delay to 1 ms and then collected N trials at that delay. We then set the delay to 2 ms, collected trials at that delay, and continued this procedure until we had sampled the entire 8 ms section. The EER was then calculated by taking the vector average of all these responses. To capture a CEP using the EER technique we sequentially moved the 1.6 ms sampling window in 10 ms steps from 10 to 300 ms, collecting all trials at one delay before moving to the next. The EER was then calculated as the average response within the entire 1.6 ms sample window at each delay, thus giving one point per delay. We tried to match the stimuli used to collect the EER responses as closely as possible to those used to collect the scalp electrode responses. The stimulating electrode, return electrode, stimulation level and pulse width used to collect the EER EPs were exactly the same as those used to collect the scalp-electrode EPs.
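The window-patching procedure above can be sketched as follows. This is a sketch under stated assumptions: the telemetry sampling rate, the helper names, and the toy waveform are ours; only the 1.6 ms window length and 1–8 ms delays come from the text.

```python
import numpy as np

FS = 20000.0                      # Hz, assumed telemetry sampling rate
WIN_MS = 1.6                      # fixed window length imposed by the software
WIN_N = int(WIN_MS * FS / 1000)   # samples per window

def stitch_windows(windows, delays_ms, total_ms, fs=FS):
    """Average overlapping recording windows onto a common time base."""
    n_total = int(total_ms * fs / 1000)
    acc = np.zeros(n_total)
    cnt = np.zeros(n_total)
    for w, d in zip(windows, delays_ms):
        i0 = int(d * fs / 1000)
        i1 = min(i0 + len(w), n_total)
        acc[i0:i1] += w[: i1 - i0]
        cnt[i0:i1] += 1
    cnt[cnt == 0] = 1             # leave uncovered samples at zero
    return acc / cnt

# Demonstration on a known waveform: sample it in shifted 1.6 ms windows
# and confirm the stitched trace matches the original where covered.
t = np.arange(int(9 * FS / 1000)) / FS * 1000.0   # 0-9 ms time axis
truth = np.sin(2 * np.pi * 0.5 * t)               # toy "EABR" waveform
delays = [1, 2, 3, 4, 5, 6, 7, 8]                 # ms, as in the text
wins = [truth[int(d * FS / 1000): int(d * FS / 1000) + WIN_N] for d in delays]
stitched = stitch_windows(wins, delays, total_ms=9)
```

Because consecutive windows overlap by 0.6 ms, averaging the overlap gives a mild consistency check between successive recordings at no extra cost.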
Since the Custom Sound EP software was not designed to record these types of responses, recording parameters had to be manually adjusted each time, meaning that data collection could take a long time. To reduce data collection time, for the EER EABR we used only 1000 repetitions and for the EER ECEP we used 50 repetitions. For the EER EABR we still used a repetition rate of 4.7 Hz, and for the 1 pulse EER ECEP we also used a repetition rate of 1.1 Hz. However, for pulse burst EER CEPs we used a slightly faster repetition rate and a shorter pulse burst: a 222 ms (instead of 500 ms) burst at 900 pulses/s repeated at a rate of 0.7 Hz (instead of 0.5 Hz). The EER EABRs were inverted before plotting to match the convention of the scalp recorded EABRs, but the EER CEPs were not inverted.
Fig. 4. Neural responses to closely matched stimuli at different levels along the auditory pathway. (a) ECAP responses for stimulation levels from 170 to 200 CU, in 5 CU steps. Dots indicate the minima (N1) and maxima (P1) used to calculate the amplitude growth functions in Fig. 5(a). (b) EABR responses to a similar stimulus at the same current levels. Dots indicate the wave V peak. The thin lines show scalp recorded potentials and the thick lines show the corresponding extra-cochlear electrode recorded potentials, only present on the 200, 185 and 175 CU responses. (c) EAMLR responses to a similar stimulus at the same current levels.

4) Artifact Cancellation and Reduction: To reduce the artifact in the scalp electrode recordings, we recorded from the mastoid contralateral to the CI and used stimuli with temporal gaps during the desired neural response period. For the EABR and EAMLR recordings we used pulses with alternating polarity. This reduced the artifact but did not completely remove it. The remaining artifact consisted of two parts: a spike artifact lasting from around 0–1.5 ms and a slower exponential decay artifact lasting from around 0–5 ms. We are not sure what caused the exponential decay artifact, but it may be caused by the filters in our acquisition system. We used a two-stage procedure to remove both artifacts. 1) We used linear interpolation over a 2 ms period surrounding each pulse to remove the stimulus spike artifact. 2) We fitted a subthreshold recording, which contained only artifact, with an exponential function to remove the exponential decay artifact. We then linearly scaled this exponential function to fit each recording and subtracted this scaled exponential from the recording to leave the neural response.

For the CEP recordings we did not use alternating polarity pulses, as subjects reported differences in loudness for pulse bursts with different polarity.
This effect would be predicted based on the results of previous studies investigating the effect of anodic versus cathodic first pulses [34], [35]. To remove the stimulus artifact from the baseline we simply linearly interpolated a 3 ms window around each stimulus artifact. A slowly decaying exponential artifact was also present in both the scalp electrode and EER CEPs [see Fig. 3(a)]. To obtain an artifact template we fitted the recordings with an exponential function and then subtracted this template to leave the neural response [see Fig. 3(b)]. The EER CEPs were then smoothed using a 3 point running average.

B. Results

1) Amplitude Growth Functions: We examined how the auditory system responded to electrical stimuli at different amplitudes. Fig. 4 traces the neural response in one CI subject to electrical stimulation on the same electrode, with closely matched stimuli, as a function of stimulation level (170 to 200 clinical units, CU) all the way through the auditory pathway, from the auditory nerve (ECAP) to the brainstem (EABR) and the cortex (EAMLR). In the Nucleus device, CU is related to current (I) by the equation I = 17.5 × 100^(CU/255) µA. Generally, and as expected, neural response amplitudes increased with stimulation level.

Fig. 4(a) shows the ECAP responses recorded from subject 1 using the standard forward masking paradigm implemented in the Custom Sound EP software. The N1s and P1s (as identified by Custom Sound) for each response are marked with dots. The amplitude of N1-P1 is clearly related to the stimulation level. The maximal N1-P1 response occurs at 200 CU, and the response has disappeared at 170 CU.

The scalp EABR responses [thin lines, Fig. 4(b)] using the same stimulating electrode in the same subject are also dependent on stimulation level. The amplitude of wave V (at ms) increases linearly with increasing stimulation level. EABR responses obtained using the EER technique [thick line, Fig. 4(b)]
for three different levels also change similarly with stimulation level. As the Custom Sound EP software was not optimized to record these types of responses, the EER EABR at 200 CU from 1 to 8 ms [longer thick line at 200 CU, Fig. 4(b)] took over 2 h to collect. Therefore, we restricted the time epoch, limited the number of repetitions to 1000 and only collected data at the wave V peak for the 185 and 170 CU stimulation levels [short thick lines at 185 and 170 CU, Fig. 4(b)]. The rationale here was that we already knew the general shape of the EER EABR at 200 CU and could now track the wave V peak. The EER EABR magnitude was greater than the scalp electrode EABR [see vertical scale, Fig. 4(b)], probably because of reduced impedance due to the extra-cochlear electrode's implanted location and its closer proximity to the neural generator. The section of the EER EABR waveform preceding wave V appears to be elevated when compared to the scalp recorded EABR. This is probably partly due to the stimulation and RF artifacts, which we have not attempted to remove from the EER EABR. Since the return electrode is very close to the auditory nerve, we may expect to see a larger wave II, which would contribute to the elevated levels preceding wave V.

Fig. 4(c) shows the scalp-recorded EAMLR recordings in the same subject with the same stimulation parameters as in Fig. 4(a). Here too, the EAMLR shows a strong dependence on stimulation level. Although we did not collect EER EAMLR responses in this subject due to time constraints, in theory it would be possible to capture these responses by moving the sample window to longer latencies. The dots on Fig. 4(c) mark the Na peak (negativity occurring around 20 ms) of the EAMLR.

The amplitudes of the ECAP, EABR, and EAMLR responses were quantified and plotted as a function of stimulation level [Fig. 5(a)–(c)].
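Computing an amplitude growth function of this kind can be sketched as follows. This is an illustrative sketch, not the Custom Sound analysis: the CU-to-microamp relation is the commonly cited Nucleus mapping, the function names are ours, and the waveforms below are synthetic.

```python
import numpy as np

def cu_to_microamps(cu):
    """Commonly cited Nucleus mapping from clinical units to current (uA)."""
    return 17.5 * 100.0 ** (np.asarray(cu, dtype=float) / 255.0)

def n1p1_amplitude(ecap):
    """N1-P1 amplitude: trough-to-peak of the ECAP waveform."""
    return float(np.max(ecap) - np.min(ecap))

def growth_function(levels_cu, ecaps):
    """Per-level N1-P1 amplitudes plus the slope/intercept of a linear
    fit of amplitude against stimulation level (in CU)."""
    amps = np.array([n1p1_amplitude(w) for w in ecaps])
    slope, intercept = np.polyfit(np.asarray(levels_cu, float), amps, 1)
    return amps, slope, intercept

# Synthetic demonstration: response amplitude grows linearly with level.
t = np.linspace(0, 1e-3, 50)
shape = np.sin(2 * np.pi * 1000 * t) * np.exp(-t / 4e-4)  # toy ECAP shape
levels = [170, 175, 180, 185, 190, 195, 200]
waves = [(lvl - 165) * 0.01 * shape for lvl in levels]
amps, slope, intercept = growth_function(levels, waves)
```

Fitting in CU rather than microamps matches how the growth functions in Fig. 5 are plotted; converting the axis with `cu_to_microamps` would make the fit against physical current instead.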
The plots demonstrate a nearly linear relationship between the amplitude of the measured neural responses and stimulation level (in current units). Fig. 5(b) clearly shows that the EABR also decreases linearly with decreasing stimulation level for both the scalp and EER methods. The agreement between the methods gives some reassurance that the short sample windows used at 185 and 170 CU [short thick lines, Fig. 4(b)] are a valid way to track EER EABR amplitude in a time-efficient manner, without having to sample the entire waveform.

To provide behavioral assessment for the stimulation levels used, the subject was asked to rate the loudness of the stimuli on a scale of 0–10 (0 = no sound, 1 = barely audible, 6 = most comfortable, and 10 = extremely loud). Fig. 5(d) shows that the perceived loudness increased with stimulation level. Note that the behavioral response was also linear, but the thresholds were lower than the ECAP threshold, as the subject could hear stimuli that produced no observable responses in the ECAP.

Fig. 5. Amplitude growth functions. (a), (b), and (c) show amplitude growth functions calculated from the corresponding waveforms shown in Fig. 4. Note the similar linear growth at the different levels along the auditory neural pathway and the fact that the extra-cochlear recorded amplitude growth function [thick line on (c)] matches that of the corresponding scalp recorded growth function (thin line). (d) Behavioral loudness growth function measured on the same electrode at two repetition rates, 80 Hz as used for the ECAP and 4.7 Hz as used for the EABR and EAMLR. There is very little difference between the loudness growth functions at these two rates. Loudness is rated on a scale of 0 to 10, with 0 being silent and 10 too loud.

Fig. 6 shows the EABR responses from subject 2 recorded using scalp electrodes (thin line) and the EER technique (thick line). For this subject there is also a reasonable match between the timing of wave V from the scalp electrode recording and that obtained using the EER technique.

Fig. 6. Comparison between scalp recorded and extra-cochlear recorded auditory brainstem responses. The EER response has been scaled to match the scalp amplitude. The EER amplitude is indicated by the thick scale bar; the y axis shows the scalp electrode amplitude. Note the similar timing of wave V.

2) Long Latency Cortical Evoked Potentials: Here, we show that it is also possible to record long latency cortical evoked potentials using just the CI, without any dedicated EP equipment. We collected CEP waveforms in three subjects using scalp electrodes (thin lines, Fig. 7). The waveforms were noisy, but a clear N100 component was visible in all four recordings [panels (a)–(d)]. We did not attempt to correct these recordings for eye movement artifact by placing an electrode near the eye, but simply asked the subjects to close their eyes during the recordings while remaining alert. This means that the recordings were likely contaminated with eye movement or muscle artifacts and increased alpha-wave activity due to eye closure. However, this also made the scalp-recorded CEPs more comparable to the EER CEP data, as it would be difficult to correct for eye movements using the EER technique. We collected EER CEP data using the same stimulus (thick lines on each panel): panel (a) uses 1 pulse repeated at 1.1 Hz; panels (b), (c), and (d) use a 500 ms, 450-pulse burst repeated every 0.5 Hz. The EER CEP waveforms are also noisy, but there is a clear N100 component which aligns reasonably in time with the scalp electrode N100. The amplitudes of the EER CEP waveforms are about an order
of magnitude greater than the scalp electrode CEPs (note the different scale bars). The polarity of the EER N100 is opposite to that of the scalp recorded N100 (in Fig. 7, for ease of comparison, the scalp recorded waveform has been inverted but the EER waveform has not).

Fig. 7. Comparison between scalp recorded and extra-cochlear recorded long latency cortical potentials. On all panels the thin line shows the ECEP recorded using scalp electrodes and the thick line shows that recorded using the extra-cochlear electrode recording (EER) technique. For ease of comparison the polarity of the scalp recorded potential has been inverted to match that of the EER technique. The EER responses have also been scaled to match the scalp amplitudes. The EER amplitudes are indicated by the thick scale bar on each panel, while the y-axis shows the scalp electrode amplitude. Note the similar timing of the large negativity around 100 ms. Panel (a) shows responses recorded to a 1 pulse stimulus, while (b), (c), and (d) show responses recorded to a pulse burst of 450 pulses at 900 pulses/s.

V. DISCUSSION

We have demonstrated that by using the extra-cochlear electrode in a CI as a recording electrode it is possible to record longer latency neural responses, and that the timings of these responses are in general agreement with auditory brainstem and auditory cortex responses recorded with scalp electrodes. However, a thorough understanding of the origin of the EER potentials is necessary, as discussed below. In the past, these higher level responses were only accessible using a dedicated EP system. Now, by using the EER technique, they can be measured using only the CI. This is the first report on using the extra-cochlear electrode in a commercially available CI to record both peripheral and central neural activity.
We believe that the EER technique represents a completely new application for cochlear implants and has enormous potential for improving implant fitting and performance.

A. Neural Generators

Although more work is required to verify the neural origins of the EER responses, the response latencies and waveform morphology suggest that an N100 and an EABR can be recorded using electrodes in the CI. The scalp recorded N100 response is made up of multiple generators, but two predominant waveforms are usually seen: first, a vertically orientated dipole arising from auditory cortex, seen as a negativity near 100 ms at the vertex and optimally recorded using vertically orientated electrodes; second, a radially orientated dipole observed over temporal recording sites and optimally recorded using laterally orientated electrodes [70]. The EER electrode "montage" is sensitive to both vertical and radial sources; however, given that the RF receiver (and thus the return electrode) in the CI is located above and slightly lateral to the CI intra-cochlear electrodes, the electrode configuration "axis" will be more sensitive to vertical dipoles. This may explain the difference in polarities of the N100s (in Fig. 7 the scalp recorded waveform has been inverted but the EER waveform has not). Therefore, the EER N100 (Figs. 3 and 7) likely represents a CI-recorded N100. The EER EABR showed large amplitude waves I and II compared to wave V, a pattern not typically seen with scalp recorded ABRs. Because the generators of waves I and II are likely the auditory nerve, the extreme closeness and reduced impedance of the intra-cochlear electrode likely contribute to their large amplitude.

B. Applications for the EER Technique

Advances in CI technology and performance have steadily relaxed the criteria for implantation over the years.
As a result, the number of people receiving CIs has been steadily increasing across both developed and developing countries, reaching 200 000, with about half of them being children. How to fit a CI efficiently and optimally has become an urgent and important need. In Section I we reviewed how the ECAP technique is regularly used by audiologists to determine threshold and comfort levels. One reason for the preference for ECAPs over other EPs is that ECAPs can be recorded using only the CI, without the need for a dedicated and often expensive EP measurement system. The other reason is that, even when such a system is available, it is a time consuming process for the audiologist due to subject preparation time and cooperation. The development of the EER technique means that this may no longer be an issue, although further work (see Section V-C) is still necessary to reduce the EER response collection time. With dedicated software, EABRs, EAMLRs, and CEPs could be made easily accessible to the audiologist using only the CI as a measurement device. The EER technique could be a vital step towards bringing metrics which access higher level neural responses into general clinical practice; it would not only save time for the audiologist but, more importantly, may improve CI fitting and performance.

One factor which may limit the use of CEPs in CI clinical practice is that different subject populations (e.g., prelingually deafened children versus postlingually deafened adults) can have markedly different CEP morphologies and latencies. For example, the N100 is not fully developed until around age 13 in normal hearing children. We will need to gain a fuller
understanding of the effects of age, developmental state, and electrical stimulation on CEPs before they can be used as a reliable clinical metric in a closed-loop CI system across the different subject populations. In Fig. 1, we proposed a number of theoretical routes along which a closed-loop CI could operate. Auditory nerve response metrics were already accessible via the CI. However, responses from the brainstem and cortex were not. Here, we have demonstrated that it is technically feasible to record responses from the brainstem and cortex with innovative use of an existing commercial CI, although some design changes could improve the applicability of the technique, as discussed below. To design a completely closed-loop CI system it would be necessary to record and analyze EP responses automatically. Recently, an important study [17] has shown that automated collection and analysis of ECAP responses is feasible, reliable and repeatable. We envisage a closed-loop CI which can automatically access and analyze neural responses from multiple levels in the auditory neural pathway. First, the ECAP and EABR responses [20], [64] could be used in a closed-loop CI to automatically set the comfort and threshold levels, eliminating a tedious job for the audiologist [17]. However, further work is still needed to find a completely automated method of estimating behavioral thresholds from ECAP and EABR responses. Second, longer latency responses can be used to measure suprathreshold discrimination and recognition tasks, which are not available or used in present clinical settings. For example, the MMN measure could be used to eliminate redundant electrodes that potentially decrease implant performance [71], [72]. The cortical responses could be used to dynamically adjust the speech processing strategy by tracking the resulting changes in responses.
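One step such an automated fitting loop might take can be sketched as follows: estimating an ECAP threshold by extrapolating a linear fit of the amplitude growth function down to zero amplitude. This extrapolation is a common convention in the ECAP literature, not a method prescribed in the text, and the data below are purely illustrative.

```python
import numpy as np

def ecap_threshold_cu(levels_cu, n1p1_uv):
    """Fit amplitude = slope * level + intercept and return the level at
    which the fitted line crosses zero amplitude (extrapolated threshold)."""
    slope, intercept = np.polyfit(np.asarray(levels_cu, float),
                                  np.asarray(n1p1_uv, float), 1)
    if slope <= 0:
        raise ValueError("growth function must increase with level")
    return -intercept / slope

# Illustrative growth function with a zero crossing at 168 CU.
levels = [175, 180, 185, 190, 195, 200]
amps = [2.0 * (lvl - 168) for lvl in levels]   # uV, linear in level
thr = ecap_threshold_cu(levels, amps)
```

As the text notes, such physiologic estimates do not equal behavioral thresholds (the subject in Fig. 5 heard stimuli below the ECAP threshold), so a closed-loop system would treat them as starting points to be refined.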
The ability to track these responses at regular intervals over a long period of time would be a particularly useful application for a closed-loop CI, given the long language learning (prelingually implanted) or relearning (postlingually implanted) period which most CI users go through. As a result, the EER technique can be used as a powerful built-in research tool to study brain maturity and plasticity.

C. Future Design Considerations

Although we have successfully used a standard modern CI to record these EPs, we propose a number of design modifications, both hardware and software related, for improving closed-loop CI performance. It currently takes a number of hours to collect EER EABR or EER ECEP responses. One of the reasons for the long collection time is that the current software and hardware were not designed to collect longer latency responses. Currently, we have to manually adjust the delay of the sampling window for each data point of the EER CEP response or each 1.6 ms segment of the EER EABR response. We then collect all N repetitions at that delay, then manually adjust the delay for the next data point and repeat the process. The data must then be manually exported and reconstructed in Matlab, yielding the final response. Specific changes to the software could easily automate all these steps and thus speed up the data collection process. Changes to the hardware could further reduce data collection time by 1) increasing the buffer size to allow longer time segments of data to be collected at one delay, and 2) using lower sampling rates so longer time segments of data could be collected in the same buffer size.
In general, better control over the amplifier, A/D converter and evoked potential recording protocols would increase the flexibility of the system and enhance our ability to sample evoked potentials from various stages of the nervous system.

Further design modifications to the CI could include a third extra-cochlear electrode, dedicated to monitoring neural responses, placed at some distance from the intra-cochlear electrodes and the other two extra-cochlear electrodes. An orthogonal placement of this third extra-cochlear electrode relative to the existing ones may facilitate the measurement of larger neural responses and smaller artifacts. Finally, software on the BTE processor could be used to track neural responses over time, and onboard wireless technology could send and log these data on a personal wireless device such as a smart phone, where they could be remotely accessible by the audiologist. These design changes and considerations can and should be implemented not only in next-generation cochlear implants but also in other neural prostheses and stimulators such as retinal implants and deep brain stimulation devices.

REFERENCES

[1] R. V. Shannon, F. G. Zeng, V. Kamath, J. Wygonski, and M. Ekelid, "Speech recognition with primarily temporal cues," Science, vol. 270, pp. 303–4, Oct. 1995.
[2] F.-G. Zeng, S. Rebscher, W. V. Harrison, X. Sun, and H. Feng, "Cochlear implants: System design, integration and evaluation," IEEE Rev. Biomed. Eng., vol. 1, pp. 115–142, Jan. 2008.
[3] B. S. Wilson, C. C. Finley, D. T. Lawson, R. D. Wolford, D. K. Eddington, and W. M. Rabinowitz, "Better speech recognition with cochlear implants," Nature, vol. 352, no. 6332, pp. 236–8, Jul. 1991.
[4] M. L. Hughes et al., "A longitudinal study of electrode impedance, the electrically evoked compound action potential, and behavioral measures in Nucleus 24 cochlear implant users," Ear Hear., vol. 22, no. 6, pp. 471–86, Dec. 2001.
[5] M. F. Dorman, L. M. Smith, K. Dankowski, G. McCandless, and J. L.
Parkin, "Long-term measures of electrode impedance and auditory thresholds for the Ineraid cochlear implant," J. Speech Hear. Res., vol. 35, no. 5, pp. 1126–30, Oct. 1992.
[6] K. A. Gordon, B. C. Papsin, and R. V. Harrison, "Toward a battery of behavioral and objective measures to achieve optimal cochlear implant stimulation levels in children," Ear Hear., vol. 25, no. 5, pp. 447–63, Oct. 2004.
[7] C. Pantev, A. Dinnesen, B. Ross, A. Wollbrink, and A. Knief, "Dynamics of auditory plasticity after cochlear implantation: A longitudinal study," Cerebral Cortex, vol. 16, pp. 31–6, Jan. 2006.
[8] R. S. Tyler, A. J. Parkinson, G. G. Woodworth, M. W. Lowder, and B. J. Gantz, "Performance over time of adult patients using the Ineraid or Nucleus cochlear implant," J. Acoust. Soc. Amer., vol. 102, no. 1, pp. 508–22, Jul. 1997.
[9] L. G. Spivak and S. B. Waltzman, "Performance of cochlear implant patients as a function of time," J. Speech Hear. Res., vol. 33, no. 3, pp. 511–9, Sep. 1990.
[10] G. E. Loeb and D. K. Kessler, "Speech recognition performance over time with the Clarion cochlear prosthesis," Ann. Otology, Rhinology Laryngology, Suppl., vol. 166, pp. 290–2, Sep. 1995.
[11] R. E. S. Lovett, P. T. Kitterick, C. E. Hewitt, and A. Q. Summerfield, "Bilateral or unilateral cochlear implantation for deaf children: An observational study," Arch. Disease Childhood, vol. 95, no. 2, pp. 107–12, Feb. 2010.
[12] A. Sharma, M. F. Dorman, and A. J. Spahr, "A sensitive period for the development of the central auditory system in children with cochlear implants: Implications for age of implantation," Ear Hear., vol. 23, no. 6, pp. 532–9, Dec. 2002.
[13] V. Colletti, M. Carner, V. Miorelli, M. Guida, L. Colletti, and F. G. Fiorino, "Cochlear implantation at under 12 months: Report on 10 patients," Laryngoscope, vol. 115, no. 3, pp. 445–9, Mar. 2005.
[14] C. J. Brown, P. J. Abbas, and B. Gantz, "Electrically evoked whole-nerve action potentials: Data from human cochlear implant users," J. Acoust. Soc. Amer., vol. 88, no. 3, pp. 1385–91, Sep. 1990.
[15] T. W. Picton, "Auditory evoked potentials," in Current Practice of Clinical Electroencephalography, D. D. Daly and T. A. Pedley, Eds., 2nd ed. New York: Raven, 1990, pp. 625–678.
[16] Y. S. Sininger, "Audiologic assessment in infants," Current Opin. Otolaryngol. Head Neck Surg., vol. 11, no. 5, pp. 378–82, Oct. 2003.
[17] L. Gärtner, T. Lenarz, G. Joseph, and A. Büchner, "Clinical use of a system for the automated recording and analysis of electrically evoked compound action potentials (ECAPs) in cochlear implant patients," Acta Oto-Laryngol., vol. 130, no. 6, pp. 724–32, Jul. 2010.
[18] C. J. Brown, M. L. Hughes, S. M. Lopez, and P. J. Abbas, "Relationship between EABR thresholds and levels used to program the CLARION speech processor," Ann. Otology, Rhinol. Laryngol., Suppl., vol. 177, pp. 50–7, Apr. 1999.
[19] M. L. Hughes, C. J. Brown, P. J. Abbas, A. A. Wolaver, and J. P. Gervais, "Comparison of EAP thresholds with MAP levels in the Nucleus 24 cochlear implant: Data from children," Ear Hear., vol. 21, no. 2, pp. 164–74, Apr. 2000.
[20] C. J. Brown, M. L. Hughes, B. Luk, P. J. Abbas, A. Wolaver, and J. Gervais, "The relationship between EAP and EABR thresholds and levels used to program the Nucleus 24 speech processor: Data from adults," Ear Hear., vol. 21, no. 2, pp. 151–63, Apr. 2000.
[21] L. T. Cohen, L. M. Richardson, E. Saunders, and R. S. C. Cowan, "Spatial spread of neural excitation in cochlear implant recipients: Comparison of improved ECAP method and psychophysical forward masking," Hear. Res., vol. 179, no. 1–2, pp. 72–87, May 2003.
[22] Q. Tang, R. Benítez, and F.-G. Zeng, "Spatial channel interactions in cochlear implants," J. Neural Eng., vol. 8, no. 4, p. 046029, Jul. 2011.
[23] L. M.
Friesen, R. V. Shannon, D. Baskent, and X. Wang, "Speech recognition in noise as a function of the number of spectral channels: Comparison of acoustic hearing and cochlear implants," J. Acoust. Soc. Amer., vol. 110, no. 2, pp. 1150–63, Aug. 2001.
[24] N. Choudhury and A. Benasich, "Maturation of auditory evoked potentials from 6 to 48 months: Prediction to 3 and 4 year language and cognitive abilities," Clin. Neurophysiol., vol. 122, no. 2, pp. 320–38, Feb. 2011.
[25] H. Thai-Van, S. Cozma, F. Boutitie, F. Disant, E. Truy, and L. Collet, "The pattern of auditory brainstem response wave V maturation in cochlear-implanted children," Clin. Neurophysiol., vol. 118, no. 3, pp. 676–89, Mar. 2007.
[26] C. J. Brown, P. J. Abbas, C. P. Etler, and J. J. Oleson, "Effects of long-term use of a cochlear implant on the electrically evoked compound action potential," J. Am. Acad. Audiol., vol. 21, no. 1, pp. 5–5, 2010.
[27] C. J. Brown, P. J. Abbas, M. Hughes, and R. Tyler, "Cross-electrode differences in EAP growth and recovery functions measured using the Nucleus NRT software: Correlation with speech performance," in Proc. Conf. Implantable Auditory Prostheses, 1999.
[28] P. J. Abbas and C. J. Brown, "Electrically evoked brainstem potentials in cochlear implant patients with multi-electrode stimulation," Hear. Res., vol. 36, no. 2–3, pp. 153–62, Nov. 1988.
[29] J. B. Firszt, R. D. Chambers, N. Kraus, and R. M. Reeder, "Neurophysiology of cochlear implant users I: Effects of stimulus current level and electrode site on the electrical ABR, MLR, and N1-P2 response," Ear Hear., vol. 23, no. 6, pp. 502–15, Dec. 2002.
[30] D. L. McPherson and A. Starr, "Auditory time-intensity cues in the binaural interaction component of the auditory evoked potentials," Hear. Res., vol. 89, no. 1–2, pp. 162–71, Sep. 1995.
[31] Z. M. Smith and B. Delgutte, "Using evoked potentials to match interaural electrode pairs with bilateral cochlear implants," J. Assoc. Res. Otolaryngol., vol. 8, no. 1, pp. 134–51, Mar.
2007.
[32] S. He, C. J. Brown, and P. J. Abbas, "Effects of stimulation level and electrode pairing on the binaural interaction component of the electrically evoked auditory brain stem response," Ear Hear., vol. 31, no. 4, pp. 457–70, Aug. 2010.
[33] S. He, C. J. Brown, and P. J. Abbas, "Preliminary results of the relationship between the binaural interaction component of the electrically evoked auditory brainstem response and interaural pitch comparisons in bilateral cochlear implant recipients," Ear Hear., Jul. 2011.
[34] J. A. Undurraga, A. van Wieringen, R. P. Carlyon, O. Macherey, and J. Wouters, "Polarity effects on neural responses of the electrically stimulated auditory nerve at different cochlear sites," Hear. Res., vol. 269, no. 1–2, pp. 146–161, 2010.
[35] O. Macherey, R. P. Carlyon, A. van Wieringen, J. M. Deeks, and J. Wouters, "Higher sensitivity of human auditory nerve fibers to positive electrical currents," J. Assoc. Res. Otolaryngol., vol. 9, no. 2, pp. 241–51, Jun. 2008.
[36] M. J. Makhdoum, P. Groenen, F. Snik, and P. van den Broek, "Intra- and interindividual correlations between auditory evoked potentials and speech perception in cochlear implant users," Scand. Audiol., vol. 27, no. 1, pp. 13–20, Jan. 1998.
[37] A. S. Kelly, S. C. Purdy, and P. R. Thorne, "Electrophysiological and speech perception measures of auditory processing in experienced adult cochlear implant users," Clin. Neurophysiol., vol. 116, no. 6, pp. 1235–46, Jun. 2005.
[38] P. Groenen, A. Snik, and P. van den Broek, "Electrically evoked auditory middle latency responses versus perception abilities in cochlear implant users," Audiol., vol. 36, no. 2, pp. 83–97.
[39] J. B. Firszt, R. D. Chambers, and N. Kraus, "Neurophysiology of cochlear implant users II: Comparison among speech perception, dynamic range, and physiological measures," Ear Hear., vol. 23, no. 6, pp. 516–31, Dec. 2002.
[40] T. Picton, "Middle latency responses: The brain and the brawn," in Human Auditory Evoked Potentials.
San Diego, CA: Plural, 2010.
[41] T. McGee and N. Kraus, "Auditory development reflected by middle latency response," Ear Hear., vol. 17, no. 5, pp. 419–29, Oct. 1996.
[42] C. W. Ponton, J. J. Eggermont, B. Kwong, and M. Don, "Maturation of human central auditory system activity: Evidence from multi-channel evoked potentials," Clin. Neurophysiol., vol. 111, no. 2, pp. 220–36, Feb. 2000.
[43] C. W. Ponton, M. Don, J. J. Eggermont, M. D. Waring, B. Kwong, and A. Masuda, "Auditory system plasticity in children after long periods of complete deafness," Neuroreport, vol. 8, no. 1, pp. 61–5, Dec. 1996.
[44] A. Sharma, M. F. Dorman, and A. Kral, "The influence of a sensitive period on central auditory development in children with unilateral and bilateral cochlear implants," Hear. Res., vol. 203, no. 1-2, pp. 134–43, May 2005.
[45] R. Näätänen and T. Picton, "The N1 wave of the human electric and magnetic response to sound: A review and an analysis of the component structure," Psychophysiol., vol. 24, no. 4, pp. 375–425, Jul. 1987.
[46] P. A. Groenen, M. Makhdoum, J. L. van den Brink, M. H. Stollman, A. F. Snik, and P. van den Broek, "The relation between electric auditory brain stem and cognitive responses and speech perception in cochlear implant users," Acta Oto-Laryngol., vol. 116, no. 6, pp. 785–90, Nov. 1996.
[47] L. M. Friesen and K. L. Tremblay, "Acoustic change complexes recorded in adult cochlear implant listeners," Ear Hear., vol. 27, no. 6, pp. 678–85, Dec. 2006.
[48] B. A. Martin, "Can the acoustic change complex be recorded in an individual with a cochlear implant? Separating neural responses from cochlear implant artifact," J. Am. Acad. Audiol., vol. 18, no. 2, pp. 126–40, Feb. 2007.
[49] J.-R. Kim, C. J. Brown, P. J. Abbas, C. P. Etler, and S. O'Brien, "The effect of changes in stimulus level on electrically evoked cortical auditory potentials," Ear Hear., vol. 30, no. 3, pp. 320–9, Jun. 2009.
[50] K. A. Gordon, S. Tanaka, and B. C. Papsin, "Atypical cortical responses underlie poor speech perception in children using cochlear implants," Neuroreport, vol. 16, no. 18, pp. 2041–5, Dec. 2005.
[51] A. Sharma, A. A. Nash, and M. Dorman, "Cortical development, plasticity and re-organization in children with cochlear implants," J. Commun. Disorders, vol. 42, no. 4, pp. 272–9.
[52] T. W. Picton, C. Alain, L. Otten, W. Ritter, and A. Achim, "Mismatch negativity: Different water in the same river," Audiol. Neuro-Otol., vol. 5, no. 3-4, pp. 111–39.
[53] P. R. Kileny, A. Boerst, and T. Zwolan, "Cognitive evoked potentials to speech and tonal stimuli in children with implants," Otolaryngol. Head Neck Surg., vol. 117, no. 3, pp. 161–9, Sep. 1997.
[54] N. Kraus et al., "The mismatch negativity cortical evoked potential elicited by speech in cochlear-implant users," Hear. Res., vol. 65, no. 1-2, pp. 118–24, Feb. 1993.
[55] S. Singh, A. Liasis, K. Rajput, A. Towell, and L. Luxon, "Event-related potentials in pediatric cochlear implant patients," Ear Hear., vol. 25, no. 6, pp. 598–610, Dec. 2004.
[56] C. W. Ponton and M. Don, "The mismatch negativity in cochlear implant users," Ear Hear., vol. 16, no. 1, pp. 131–46, Feb. 1995.
[57] C. W. Ponton et al., "Maturation of the mismatch negativity: Effects of profound deafness and cochlear implant use," Audiol. Neuro-Otol., vol. 5, no. 3-4, pp. 167–85.
[58] J. Wable, T. van den Abbeele, S. Gallégo, and B. Frachet, "Mismatch negativity: A tool for the assessment of stimuli discrimination in cochlear implant subjects," Clin. Neurophysiol., vol. 111, no. 4, pp. 743–51, Apr. 2000.
[59] J. L. Wunderlich and B. K. Cone-Wesson, "Effects of stimulus frequency and complexity on the mismatch negativity and other components of the cortical auditory-evoked potential," J. Acoust. Soc. Am., vol. 109, no. 4, pp. 1526–37, Apr. 2001.
[60] P. M. Gilley, A. Sharma, M. Dorman, C. C. Finley, A. S. Panch, and K. Martin, "Minimization of cochlear implant stimulus artifact in cortical auditory evoked potentials," Clin. Neurophysiol., vol. 117, no. 8, pp. 1772–82, Aug. 2006.
[61] M. Hofmann and J. Wouters, "Electrically evoked auditory steady state responses in cochlear implant users," J. Assoc. Res. Otolaryngol., vol. 11, no. 2, pp. 267–82, Jun. 2010.
[62] A. Bahmer, O. Peter, and U. Baumann, "Recording and analysis of electrically evoked compound action potentials (ECAPs) with MED-EL cochlear implants and different artifact reduction strategies in MATLAB," J. Neurosci. Meth., vol. 191, no. 1, pp. 66–74, Aug. 2010.
[63] T. Hashimoto, C. M. Elder, and J. L. Vitek, "A template subtraction method for stimulus artifact removal in high-frequency deep brain stimulation," J. Neurosci. Meth., vol. 113, no. 2, pp. 181–6, Jan. 2002.
[64] C. A. Miller, P. J. Abbas, and C. J. Brown, "An improved method of reducing stimulus artifact in the electrically evoked whole-nerve potential," Ear Hear., vol. 21, no. 4, pp. 280–90, Aug. 2000.
[65] O. Macherey, A. van Wieringen, R. P. Carlyon, J. M. Deeks, and J. Wouters, "Asymmetric pulses in cochlear implants: Effects of pulse shape, polarity, and rate," J. Assoc. Res. Otolaryngol., vol. 7, no. 3, pp. 253–66, Sep. 2006.
[66] P. Comon, "Independent component analysis, a new concept?," Signal Process., vol. 36, no. 3, pp.
287–314, Apr. 1994.
[67] A. Hyvärinen and E. Oja, "Independent component analysis: Algorithms and applications," Neural Netw., vol. 13, no. 4-5, pp. 411–30.
[68] F. C. Viola, J. D. Thorne, S. Bleeck, J. Eyles, and S. Debener, "Uncovering auditory evoked potentials from cochlear implant users with independent component analysis," Psychophysiol., pp. 1–11, Jun. 2011.
[69] S. Debener, J. Hine, S. Bleeck, and J. Eyles, "Source localization of auditory evoked potentials after cochlear implantation," Psychophysiol., vol. 45, no. 1, pp. 20–4, Jan. 2008.
[70] M. Scherg and D. V. Cramon, "Evoked dipole source potentials of the human auditory cortex," Electroenceph. Clin. Neurophysiol., vol. 65, no. 5, pp. 344–60, Sep. 1986.
[71] T. A. Zwolan, L. M. Collins, and G. H. Wakefield, "Electrode discrimination and speech recognition in postlingually deafened adult cochlear implant subjects," J. Acoust. Soc. Am., vol. 102, no. 6, pp. 3673–85, Dec. 1997.
[72] J. J. Remus, C. S. Throckmorton, and L. M. Collins, "Expediting the identification of impaired channels in cochlear implants via analysis of speech-based confusion matrices," IEEE Trans. Biomed. Eng., vol. 54, no. 12, pp. 2193–204, Dec. 2007.

Myles Mc Laughlin received the B.S. degree in physics from Queens University, Belfast, Ireland, in 2002, and the M.S. degree in medical imaging and the Ph.D. degree in auditory neurophysiology, both from K.U. Leuven, Belgium, in 2003 and 2009, respectively. He is currently a Postdoctoral Fellow with the University of California, Irvine. His research interests include auditory electrophysiology, auditory scene analysis, and cochlear implants.

Thomas Lu received the B.S. and Ph.D. degrees in biomedical engineering from the Johns Hopkins University, Baltimore, MD, in 1995 and 2002, respectively. He is a Project Scientist in the Hearing and Speech Lab, University of California, Irvine. His research interests include auditory neurophysiology and cochlear implants.

Andrew Dimitrijevic received the B.Sc.
(Hons), M.Sc., and Ph.D. degrees in physiology and neuroscience from the University of Toronto, Toronto, ON, Canada, in 1996, 1999, and 2003, respectively. He is an Assistant Professor with the Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, and the Department of Otolaryngology Head and Neck Surgery, University of Cincinnati. His research examines auditory electrophysiology in people with hearing loss, including auditory neuropathy and cochlear implants.

Fan-Gang Zeng (S'88–M'91–SM'07–F'11) received the B.S. degree from the University of Science and Technology of China, in 1982, the M.S. degree from Academia Sinica in 1985, and the Ph.D. degree from Syracuse University, Syracuse, NY, in 1990. He has been with the House Ear Institute (1990–1998), University of Southern California (1996–1998), and the University of Maryland, College Park (1998–2000). He is currently Director of the Center for Hearing Research, Research Director of Otolaryngology—Head and Neck Surgery, and Professor of Anatomy and Neurobiology, Biomedical Engineering, and Cognitive Sciences at the University of California, Irvine. He has published 100 peer-reviewed journal articles, 20 book chapters, and three books, including a volume on cochlear implants in the Springer Handbook of Auditory Research (New York: Springer-Verlag). He holds 12 patents and has given 150 invited presentations worldwide. He has consulted for NIH, NSF, DOD, the National Natural Science Foundation of China, the Natural Sciences and Engineering Research Council of Canada, and numerous other public and private agencies. He has been on the editorial boards of the Journal of the Association for Research in Otolaryngology and Hearing Research. He has served on the Advisory Board for Apherma Corporation, Sunnyvale, CA; ImThera Medical, San Diego, CA; Nurotron Biotech, Inc., Hangzhou, China; and SoundCure, Boston, MA. Dr. Zeng has been on the editorial board for the IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING. He is a Fellow of the Acoustical Society of America and the American Institute for Medical and Biological Engineering.