This document describes three methods for quantifying the signal-to-noise ratio (SNR) of multifocal visual evoked potential (mfVEP) recordings: the 2-run SNR (2rSNR), the individual noise window SNR (inSNR), and the mean noise window SNR (mnSNR). The 2rSNR compares responses between two recording runs, while the inSNR and mnSNR compare response amplitudes to amplitudes in a presumed noise-only time window. Higher SNR values indicate better recording quality. The document evaluates these SNR methods by relating their values to false positive rates, using recordings where some stimulus locations were occluded to contain only noise. The mnSNR had the lowest false positive rate and is recommended for determining recording quality.
[5,6]. In particular, they showed that 60 or more local VEP responses, called
the multifocal VEP (mfVEP), could be obtained over a wide retinal area if the
stimulus array was scaled to account for cortical magnification. The clinical
implications of this technique were largely ignored until Klistorner et al. [7]
demonstrated a qualitative agreement between visual field defects measured
with static perimetry and regions of diminished mfVEP responses in patients
with ganglion cell and/or optic nerve damage. Subsequent work showed that
it was possible to quantitatively compare local changes in the mfVEP to local
changes in static (Humphrey) visual fields [8, 9]. Regions of field loss seen
with traditional static visual fields can be detected in the mfVEP, especially if
the mfVEPs from both eyes of a subject are compared [8–11]. But problems
remain to be solved if the mfVEP is to be of clinical use.
Many of these problems revolve around the well-known inter-subject vari-
ability in the VEP and mfVEP responses. The major source of variability
among individuals is cortical anatomy. The position of the primary visual area
within cortical folds [12,13] and the relationship of external landmarks, such
as the inion, to underlying brain structures [9,14] differ widely among indi-
viduals. These differences can lead to mfVEP responses that vary markedly
in amplitude across subjects at a given field location and within subjects
across the visual field. Where appropriate, interocular comparison of mfVEP
responses can be employed to minimize variability due to cortical anatomy
[8-11, 15]. However, other problems exist because the responses measure
only a few hundred nano-volts in amplitude.
As the mfVEP responses can be very small, deciding what constitutes a
response can be a problem. Electrical noise from the environment and cor-
tical noise from, for example, alpha waves can vary from day to day and
even moment to moment. To improve an analysis of the mfVEP, records that
are too noisy and/or records with signals that are too small to be measured
reliably should be eliminated from the analysis. For example, records with
peak-to-trough (PtT) amplitudes less than a criterion value could be excluded
from further analysis [16]. However, there are problems with using the PtT
amplitude as a criterion for eliminating records when responses are contam-
inated by alpha waves or are unduly noisy due to higher frequency noise.
Consider the three records in the first row of Figure 1A (described further in
the next section). The first two records do not contain a signal but have about
the same PtT amplitude as the third record because alpha is present in the first
record, and high frequency noise is present in the second. The root-mean-square
(RMS) amplitude, defined below, has advantages over the PtT amplitude as
a criterion for eliminating records, but it suffers from the same problem. In
this paper, two signal-to-noise ratios are evaluated as criteria for eliminating
records of poor quality.
Figure 1. (A) The top row contains three waveforms with approximately the same peak-to-
trough and RMS amplitudes. The first record has an alpha contribution, the second has high
frequency noise, and the third has a discernible signal. See the text for details. The records
for the second run, Run B, are in the second row; the third and fourth rows are (Run A +
Run B)/2 and (Run A − Run B)/2. The 2rSNR values calculated with the formulas in panels (B)
and (C) appear at the bottom of the records. (B) The equation for calculating the RMS amplitude,
where Rt is the response amplitude at time t, µ is the average of the amplitudes from 45 to
150 ms, and N is the number of samples in the time period. (C) The equation for calculating
the 2rSNR, where RMS is defined as in panel (B). (D) The averaged mfVEP responses for the
sectors above and below the horizontal midline (see text). (E) The signal and noise windows used
in the calculation of the inSNR and mnSNR. (F) The general equation for calculating nwSNRs.
Introduction to the two-run signal-to-noise ratio (2rSNR)
The approach adopted here is described by Baseler et al. [4], who employed it
to assess reproducibility across two recording sessions. Here, two mfVEP
responses obtained for the same location in the same session are compared
(see also Ref. [17]). Figure 1A illustrates the logic behind the approach.
The upper row of mfVEP records in Figure 1A represents three hypothet-
ical records from a single run, Run A. These records are actually from the
occluded display experiment described below. The first two columns do not
contain a signal (i.e., there was no stimulus present) while the third contains a
clear signal (there was a stimulus present). The first record contains an alpha
component. The second was amplified to illustrate the appearance of a very
noisy record, as is occasionally obtained from some subjects.
These three examples were chosen to have nearly identical RMS amplitudes.
By RMS amplitude we mean the root-mean-square (RMS) calculated over
some time interval (see Figure 1B for equation). The RMS measure has the
advantage over the PtT measure in that it does not depend upon a particular
aspect of the response waveform but merely requires the specification of a
time interval. The interval of analysis employed here is 45–150 ms, shown as
the dashed lines in Figure 1A. The records in Run A of Figure 1A have nearly
identical RMS amplitudes; they also have similar peak-to-trough amplitudes.
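To make the measure concrete, the RMS amplitude of Figure 1B can be sketched in a few lines. The study's analyses were done in MATLAB; the following Python sketch is illustrative, and the function name and the default 0.83 ms sampling interval are our own assumptions.

```python
import numpy as np

def rms_amplitude(record, t_start=45.0, t_end=150.0, dt=0.83):
    """RMS amplitude over a time window, per the equation in Figure 1B.

    record : 1-D array of response amplitudes sampled every `dt` ms.
    The mean over the window (mu in Figure 1B) is subtracted before
    computing the root-mean-square of the N samples in the window.
    """
    i0, i1 = int(round(t_start / dt)), int(round(t_end / dt))
    window = np.asarray(record[i0:i1 + 1], dtype=float)
    mu = window.mean()
    return float(np.sqrt(np.mean((window - mu) ** 2)))
```

Because only a time interval must be specified, the same function serves for both the 45–150 ms signal window and the 325–430 ms noise window used below.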
The logic and advantage of the ‘two-run signal-to-noise ratio’, 2rSNR, is
shown in the rest of Figure 1A. First, a second set of responses is obtained,
shown as Run B in the second row of the figure. The two responses are added
and averaged to get the row labeled ‘SUM/2’, and they are subtracted and
averaged to get the row labeled ‘DIFF/2’. The SUM gives us an estimate of
the signal plus noise and the DIFF a measure of the noise. Notice that if the
signals were identical in both runs then the DIFF record would be only noise.
The 2rSNR is obtained as the ratio of the RMS values for the SUM and DIFF
responses (see formula in Figure 1C). This is not a true SNR in the sense that
the numerator, in addition to containing a signal term, contains a noise term as
well as a term representing an interaction between the signal and noise [18]
(see also Eq. (A5) in Ref. [4]). The −1 is included in the equation (Figure 1C)
so that the 2rSNR will, on average, equal 0 when no signal is present. In
Figure 1A, the two records without a signal have a 2rSNR close to 0 while
the records with the large signal have a 2rSNR of 4.6.
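The steps above can be sketched as follows (again Python for illustration; `rms` stands for any RMS-amplitude function over the analysis window, such as the one defined in Figure 1B):

```python
import numpy as np

def two_run_snr(run_a, run_b, rms):
    """2rSNR per Figure 1C: RMS of (A+B)/2 over RMS of (A-B)/2, minus 1.

    `rms` is an RMS-amplitude function over the analysis window
    (e.g. 45-150 ms). The -1 makes the expected value 0 when no
    signal is present.
    """
    run_a, run_b = np.asarray(run_a, float), np.asarray(run_b, float)
    summed = (run_a + run_b) / 2.0   # estimate of signal plus noise
    diffed = (run_a - run_b) / 2.0   # estimate of noise only
    return rms(summed) / rms(diffed) - 1.0
```

Note that if the signal were identical in the two runs, the difference record would contain only noise, which is the logic the sketch encodes.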
Introduction to the noise window signal-to-noise ratio (nwSNR)
The nwSNR is a more conventional measure. We asked whether there
was a part of the record that was far enough in time from the onset of the
pattern reversal so as to be free of the response but not so far in time that it
Figure 2. (A) The standard mfVEP display. (B) The occluded display with the peripheral 36
sectors covered.
is affected by so-called ‘kernel overlap’ [5]. For the 7-min m-sequence used
here, the epoch from 325 to 430 ms (the same duration as our analysis epoch
of 45–150 ms) appears to contain, to a first approximation, only noise. This
conclusion was based upon an analysis of the epoch’s frequency content and
was confirmed by averaging the mfVEP responses from the 14 control subjects
in Ref. [19]. The
mfVEPs in Figure 1D are the averages of the responses to the 30 sectors above
and below the midline (see Figure 3). As previously reported [4,7,8], these
mfVEPs are reversed in polarity. Notice that the ‘noise window’, defined as
the period from 325 to 430 ms, does not appear to have a response. To obtain
a nwSNR (Figure 1E), the RMS for the ‘signal window’, which will contain
both signal and noise, is divided by the RMS for the ‘noise window’, which
to a first approximation contains only noise. Two nwSNRs were obtained for
each record, one based upon the RMS of the noise window for that individual
record alone (inSNR) and the other based upon the mean of the RMS values
of all the noise windows for a particular subject (mnSNR).
Choosing a criterion value of SNR
An advantage of an SNR over an RMS or PtT criterion is that it can be defined
independently of the noise level. That is, the biologically and environmentally
produced noise in recordings can vary from day to day, individual to individual,
and laboratory to laboratory. Since noise appears in both the numerator and the
denominator, the SNR will have the same meaning for different days, individuals,
or laboratories while the RMS and PtT measures will not. But, what value
does one choose when rejecting ‘poor’ records based upon SNR values? The
larger the SNR, the more likely the records provide a good measure of the real
Figure 3. (A) A histogram of the 2rSNR values of the noise-only records from eight subjects
(288 pairs of noise-only records). (B) Histograms of the inSNR and mnSNR values for the
288 pairs of noise-only records. (C) The cumulative distribution for the histograms in panels
(A) and (B). It provides an estimate of the false positive rate as a function of a criterion value
of the SNR.
signal. At the same time, the larger the SNR criterion the fewer the responses
that can be included in our analysis. One way to express this trade-off is in
terms of false-positive rates. As the criterion value of the SNR is increased,
the percentage of records falsely identified as having a signal will decrease.
The main purpose here is to relate the values of the SNR to false-positive
rates. A preliminary report was presented at the ARVO 2001 meeting [22].
Methods
Stimuli
Figure 2A shows the ‘standard’ display employed in previous work and
Figure 2B the ‘occluded display’ employed here. These stimulus arrays were
produced with VERIS software (Dart Board 60 With Pattern) from EDI (Electro-
Diagnostic Imaging, San Mateo, CA). The array in Figure 2A consists of
60 sectors, each with 16 checks, eight white (200 cd/m²) and eight black
(< 1 cd/m²). The entire display has a diameter of 44.5°. For the ‘occluded
display’ in Figure 2B, the outer three rings (36 sectors in all) were covered
with white cardboard. The central 24 sectors remained to help the subject
maintain fixation and attention as in the standard display of Figure 2A.
Recording the mfVEP
The mfVEPs were recorded with gold cup electrodes placed 4 cm above
the inion (active), at the inion (reference), and on the forehead (ground). The
continuous record was amplified with the low and high frequency cutoffs set
at 3 and 100 Hz (1/2 amplitude; Grass preamplifier P511J, Quincy, MA),
and it was sampled at 1200 Hz (every 0.83 ms). The m-sequence had 2^15 − 1
elements and required about 7 min for a single run.
All mfVEPs were obtained with monocular stimulation. Within a session,
two runs were obtained with the occluded display (Figure 2B). The two runs
are needed to calculate the 2rSNR. The records from the two runs were av-
eraged before the nwSNRs were calculated. To improve the subject’s ability
to maintain fixation, the run was broken up into overlapping segments each
lasting about 27 s. Second-order local response components were extracted
using VERIS 4.1 software from EDI (San Mateo, CA). All other analyses
were done with programs written in MATLAB (Mathworks, MA).
Subjects
Eight subjects ranging in age from 20 to 58 years (mean 31 years) with no
known abnormalities of the visual system participated in the study. Proced-
ures followed the tenets of the Declaration of Helsinki and the protocol was
approved by the committee of the Institutional Board of Research Associates
of Columbia University.
Calculating the SNRs
As described above, the 2rSNR was calculated for each of the 36 pairs of
noise-only records from each subject using the equations in Figures 1B and C.
To obtain a
nwSNR for the same pairs of records, the responses from Runs A and B were
first averaged. Two nwSNRs were then defined. One, called the ‘individual
noise window SNR’ (inSNR), was defined for the ith sector of the jth subject
as
inSNR = [RMSij(45 to 150 ms)/RMSij(325 to 430 ms)] − 1, (1)
where RMS(t1 to t2) is the RMS defined by the equation in Figure 1B for
the interval from t = t1 to t = t2. The second nwSNR, called the ‘mean noise
window SNR’, was defined for the ith sector of the jth subject as
mnSNR = {RMSij(45 to 150 ms)/[Σi RMSij(325 to 430 ms)/n]} − 1, (2)
where the denominator is the average of the n individual noise-window RMS
values for the jth subject.
Notice that all three SNRs have the same numerator. The denominators are
also of the same form: each is based upon the RMS of the combined records
from Runs A and B. They differ in whether that RMS is based upon the signal
window of the difference record (2rSNR), the noise window of the individual
record (inSNR), or the mean of the RMS values of the noise windows (mnSNR).
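Equations (1) and (2) can be sketched as follows (Python for illustration; the function and argument names are our own, and the RMS functions for the two windows are passed in, e.g. as in Figure 1B):

```python
import numpy as np

def noise_window_snrs(records, rms_signal, rms_noise):
    """inSNR (Eq. 1) and mnSNR (Eq. 2) for one subject's records.

    records    : sequence of averaged (Run A + Run B)/2 records,
                 one per sector.
    rms_signal : RMS function over the 45-150 ms signal window.
    rms_noise  : RMS function over the 325-430 ms noise window.
    Returns (inSNR, mnSNR) arrays, one value per sector.
    """
    sig = np.array([rms_signal(r) for r in records])
    noi = np.array([rms_noise(r) for r in records])
    in_snr = sig / noi - 1.0           # Eq. (1): individual noise window
    mn_snr = sig / noi.mean() - 1.0    # Eq. (2): subject's mean noise RMS
    return in_snr, mn_snr
```

The only difference between the two measures is the single division by the subject's mean noise-window RMS rather than each record's own.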
Results
To obtain a distribution of the SNR values when no signal is present, the SNRs
were calculated for all 36 pairs of responses (two runs were obtained with the
36 occluded sectors) from all eight subjects. The distribution of these 288
SNR values is shown in Figure 3A,B for the three different types of SNRs.
The cumulative distribution of these histograms can be found in Figure 3C.
Since there can be no signal present in this experiment, the cumulative dis-
tribution provides an estimate of the ‘false positive rate’ as a function of the
value of a SNR criterion. The false positive rate is the percentage of time that
one would conclude there is a signal present when in fact there is no signal
present. For example, a false positive rate of 5% (dashed line in Figure 3C)
is associated here with SNR values of about 1.0, 0.8, and 0.5 for the 2rSNR,
inSNR and mnSNR, respectively. Notice that for any given SNR value, the
false-positive rate is highest for the 2rSNR and lowest for the mnSNR. Recall
that the numerator of all three SNRs is the same. To understand why we get
different false-positive values, we need to understand the implications of the
different denominators.
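The way the cumulative distribution of Figure 3C yields a false-positive rate can be sketched directly (Python, with illustrative names):

```python
import numpy as np

def false_positive_rate(noise_only_snrs, criterion):
    """Fraction of noise-only records whose SNR exceeds the criterion.

    With no signal present, every record whose SNR exceeds the
    criterion would be falsely labeled as containing a signal, so
    this empirical tail probability estimates the false-positive
    rate as a function of the criterion (Figure 3C).
    """
    snrs = np.asarray(noise_only_snrs, float)
    return float(np.mean(snrs > criterion))
```

Conversely, the criterion giving roughly a 5% rate is simply the 95th percentile of the noise-only SNR distribution, e.g. `np.percentile(snrs, 95)`.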
Consider the difference between the two measures of nwSNR. Each point
in the scatter plot in Figure 4A shows the inSNR (Eq. (1)) and the mnSNR
(Eq. (2)), for each of the 288 noise-only records from the occluded display
experiment. As expected from Figure 3C, the mnSNR, based upon the mean
for each subject, exhibits less variability than the inSNR, which is based upon
the individual records. Notice, for example, the number of points falling beyond
a value of 1.0 (solid lines) for each measure. The reason can be found
in Figure 5, where the RMSij(45 to 150 ms), the numerator of both
nwSNRs, is shown versus the RMSij(325 to 430 ms), the denominator of the
inSNR. For clarity only the data for four of the subjects are shown. These
subjects were chosen as they represent examples of subjects with the lowest
mean RMS value (triangles), the highest mean RMS value (pluses), the largest
range of RMS values (open circles), and the best correlation between the
RMS values for the two windows (filled circles). It is clear in Figure 5 that,
for any given subject, the noise in the two windows is poorly correlated. On
average, the correlation coefficient (r²) of the two RMS values was 0.07 with
a range from 0.00 to 0.29 (filled circles). Since the noise outside the signal
window is poorly correlated with the noise in the signal window, the mean
of the RMS from all the noise windows of a given subject supplies a better
estimate of the noise in an individual record.
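The per-subject correlation reported here is the squared Pearson correlation between a subject's signal-window and noise-window RMS values across sectors; an illustrative Python sketch:

```python
import numpy as np

def window_rms_r2(signal_rms, noise_rms):
    """Squared correlation (r^2) between a subject's signal-window and
    noise-window RMS values across sectors, as plotted in Figure 5."""
    r = np.corrcoef(signal_rms, noise_rms)[0, 1]
    return float(r ** 2)
```

A value near 0, as found here (mean r² of 0.07), is what justifies pooling the noise windows into one estimate per subject.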
Figure 4B shows a comparison of the mnSNR (Eq. (2)) and the 2rSNR
values for all 288 noise-only records. Although the 2rSNR and mnSNR are
correlated, the correlation is far from perfect. To better understand why these
measures differ, the individual records associated with 2rSNR values greater
than 1.0 (the symbols to the right of the vertical line in Figure 4B) were
examined. Fourteen of these 16 records came from three of the eight subjects.
In 14 of the 16, the records in the signal window of both Runs A and B
had a low frequency component that was approximately in phase in the two
runs and appeared to be attributable to alpha. Figure 6A provides an example.
Notice that by chance a slow component, presumably due to the presence of
alpha waves in the continuous VEP record, is in phase in the two runs. This
results in a 2rSNR of 1.9. The mnSNR (Eq. (2)) was 0.59. When the noise is
in phase, as in Figure 6A, the 2rSNR will be larger than the nwSNR because
they share the same numerator and the denominator of the 2rSNR will be
smaller. Recall that the denominator is the RMS of the difference between
Runs A and B and thus includes the difference of the two alpha components.
By a similar argument, the 2rSNR will be smaller than the mnSNR when the
Figure 4. (A) The inSNR is plotted against the mnSNR. Each point is for a single sector and
a single subject for the combined records of Runs A and B. There are 36 points (the occluded
sectors) for each subject. The different symbols denote different subjects. (B) Same as in (A),
but for mnSNR versus 2rSNR.
alpha components are out of phase. [This would be easier to see in Figure 4
had the −1 in the SNR equations been removed and a log scale used. The
equivalent points to those falling between 1 and 2 on these scales would fall
between −0.5 and −0.33.]
Figure 5. (A) The RMS amplitude for the epoch between 45 and 150 ms is plotted against the
RMS amplitude for the window from 325 to 430 ms. Each point is for a single sector and a
single subject for the combined records of Runs A and B. There are 36 points (the occluded
sectors) for each subject. The four different symbols denote different subjects.
Figure 6. (A) A set of noise-only records illustrating a case where the 2rSNR is large because
the noise in both Runs A and B, most likely alpha wave in origin, is in phase. (B) A set of
noise-only records illustrating a case where the 2rSNR measure will be superior to the nwSNR
measures.
Figure 7. The first 600 ms of the mfVEP records from five of the locations in the occluded
display experiment. Records from the same locations are shown for all eight subjects.
Discussion
Measuring the quality of the mfVEP records
The mfVEP responses are inherently small. In fact, for some subjects the
signal may be essentially zero in some locations. This is not necessarily
caused by abnormal vision but can be present in control subjects due to local
folding of the cortex. In particular, the activity of cells oriented parallel to the
recording electrodes will not be seen. Since the mfVEP signals can be small,
distinguishing them from noise becomes a problem. Some investigators have
employed a criterion amplitude (PtT or RMS) in the signal window to identify
poor records (e.g., Ref. [16]). As can be seen in Figure 5, this procedure is less
than optimal. Placing the criterion at 0.04 would accept all of one subject’s
noise-only signals (pluses) while rejecting all of another’s (triangles). A better
procedure involves estimating the noise level and calculating a SNR for each
record. Based upon the analysis in this paper, it is probably best to obtain
one estimate of noise level for the entire set of records. Using this average
value as an estimate of the RMS noise in the signal window, an SNR can be
calculated for each record, and records can be rejected if this value falls below
some criterion value.
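This rejection procedure can be sketched as follows (Python; the function name is our own, and the 0.5 default reflects the roughly 5% false-positive criterion for the mnSNR reported above):

```python
import numpy as np

def accept_records(signal_rms, noise_rms, criterion=0.5):
    """Flag records whose mnSNR meets the criterion.

    Uses one noise estimate (the mean noise-window RMS across all of
    a subject's records) for every record, then keeps those records
    whose SNR reaches the criterion. Returns a boolean array, one
    entry per record.
    """
    signal_rms = np.asarray(signal_rms, float)
    mean_noise = float(np.mean(noise_rms))
    mn_snr = signal_rms / mean_noise - 1.0
    return mn_snr >= criterion
```

Because a single noise estimate is shared by all of a subject's records, a record is never accepted merely because its own noise window happened to be quiet.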
A priori, one might expect that the 2rSNR is a better way to detect the
presence of a signal in noise since it uses an estimate of the noise in the
same epoch as the signal to be detected. Figure 6B, for example, shows a pair
of responses from the occluded display experiment. In this case, the 2rSNR
will provide the best estimate of the noise in the signal window. But, there is a
problem for the 2rSNR when there is a source of noise, alpha in this case, with
a reasonably high probability of occurrence and with a dominant frequency
that is low relative to the analysis window. Under these circumstances, this
component will appear in phase in the two runs on some occasions and out of
phase in others. In general, a slow component in phase will give a 2rSNR that
is greater than a nwSNR, and one out of phase a 2rSNR that is smaller than a
nwSNR. Thus, under these circumstances the 2rSNR is not a good metric for
rejecting noisy records. By a similar logic, however, for a fixed value of the
SNR, the nwSNR will lead to more or fewer false negatives (concluding that a
signal is not present when in fact it is) than the 2rSNR, depending upon whether
the alpha components are in or out of phase. In sum, the fact that a given
value of the 2rSNR has a higher false-positive rate than the same value of
the nwSNR does not tell the whole story. What we really need to know is
which measure produces ‘better records’ when the same percentage of the
records is removed. This is a more complicated issue to resolve, but unless
the contamination from alpha can be removed the mnSNR will provide a less
variable estimate of the noise.
The individual mfVEP records give the appearance of local periods of
alpha. Thus, we initially thought that the inSNR, based upon the same record,
might provide a better measure than the mnSNR. But, as we should have sus-
pected, the alpha contamination is largely global, not local in time. Figure 7
illustrates this point by showing the first 600 ms of records for five randomly
chosen locations for the eight subjects. The alpha contribution is particularly
obvious in three of these subjects. Notice that there is some fluctuation in the
noise with periods of prominent alpha seen within these records, but these
fluctuations do not have any consistent structural basis. The lack of correla-
tion between the signal and noise windows in Figure 5 reinforces this point.
Since this correlation is poor, the alpha contamination affects the inSNR in
a manner similar to its effects on the 2rSNR, resulting in a more variable
estimate of the noise in the signal window. For example, an examination of
the records for the extreme values of the inSNR in Figure 4A (points above
horizontal line) revealed that these records usually had an alpha component
out of phase in the noise window.
Improving the quality of the mfVEP records
Regardless of the signal-to-noise measure employed, rejecting records from
an analysis involves a trade-off between a loss of data on the one hand and a
gain in the quality of the data on the other. Our purpose here was to relate the
SNR criteria to false-positive rates so as to allow the experimenter to assess
this trade-off. The fact that we recommend the use of the mnSNR means that
the experimenter can obtain false positive rates from the actual recordings by
analyzing the records outside the signal window [19].
To obtain a false-positive rate of 5% or better, the mnSNR should be about
0.5 or greater. We will show in the following paper that, on
average, as many as 17% of the responses from normal controls may fail to
meet this criterion level. This is an unacceptably large number for many pur-
poses. There are two methods readily available for improving the SNR. One
involves summing responses from neighboring sectors and the other adding
additional electrodes (e.g., Refs. [8, 11, 16, 19]). The following paper examines
these methods in detail.
There are also precautions that the experimenter can take to improve the
SNR. First, it is important to keep the resistance of all electrodes as low,
and as similar, as possible. Second, attempts should be made to reduce the
intrusion of alpha waves. Figure 6A illustrates how alpha contamination can
be mistaken for a signal no matter what SNR measure is employed. Reducing
alpha or, in fact, any low frequency noise, will reduce the false positive rates
for any given SNR value.
There are two classes of techniques one might consider for reducing the
contamination of the records by alpha. First, the experimenter can continu-
ously monitor the VEP record and provide feedback to the subject. For ex-
ample, we have recently found that many ‘alpha-producers’ can suppress
alpha if they are told to pay close attention to the edges of the small checks in
the center of the display and if the experimenter provides feedback whenever
an alpha burst appears on the screen. The second technique involves software
rejection of alpha bursts. This is not available in the current versions of the
VERIS software, but it should be possible to implement.
More than a methodology for rejecting poor records
Beyond providing an objective method for rejecting poor records, there are
benefits to a measure that takes into consideration both signal and noise. A
signal-to-noise measure allows for a quantitative answer to questions such as:
‘Is condition A better than B?’, where A and B refer to different numbers of
recording channels (see the following paper [19]), records obtained from different
laboratories, different methods of analysis or different experimental condi-
tions. Whatever the question, ‘better’ usually implies a larger signal-to-noise
ratio, and thus an SNR should be employed.
Acknowledgements
Supported by grants from the National Eye Institute (R01-EY-02115 and
R01-EY-09076). The authors gratefully acknowledge the support and ad-
vice of Drs. Greenstein and Odel. We also thank Nazreen Karim and An-
nemarie Gallagher for their help in recording mfVEPs and Drs. Fortune and
Greenstein for helpful comments on earlier versions of this paper.
References
1. Abe H, Iwata K. Checkerboard pattern reversal VEP in the assessment of glaucomatous
field defects. Acta Soc Ophthalmol Jpn 1976; 80: 829–41.
2. Bobak P, Bodis-Wollner I, Harnois C, Maffei L, Mylin L, Podos S, Thornton J. Pattern
electroretinograms and visual-evoked potentials in glaucoma and multiple sclerosis. Am
J Ophthal 1983; 96: 72–83.
3. Regan D, Spekreijse H. Evoked potentials in vision research 1961–1986. Vis Res 1986;
26: 1461–80.
4. Baseler HA, Sutter EE, Klein SA, Carney T. The topography of visual evoked response
properties across the visual field. Electroencephalogr Clin Neurophysiol 1994; 90: 65–
81.
5. Sutter EE. The fast m-transform: a fast computation of cross-correlations with binary
m-sequences. Soc Ind Appl Math 1991; 20: 686–94.
6. Sutter EE, Tran D. The field topography of ERG components in man-I. The photopic
luminance response. Vis Res 1992; 32: 433–66.
7. Klistorner AI, Graham SL, Grigg JR, Billson FA. Multifocal topographic visual evoked
potential: Improving objective detection of local visual field defects. Invest Ophthalmol
Vis Sci 1998; 39: 937–50.
8. Hood DC, Zhang X, Greenstein VC, Kangovi S, Odel JG, et al. An interocular compar-
ison of the multifocal VEP: A possible technique for detecting local damage to the optic
nerve. Invest Ophthalmol Vis Sci 2000; 41: 1580–87.
9. Hood D C, Zhang X. Multifocal ERG and VEP responses and visual fields: comparing
disease-related changes. Doc Ophthal 2000; 100: 115–37.
10. Graham SL, Klistorner AI, Grigg JR, Billson FA. Objective VEP perimetry in glaucoma:
Asymmetry analysis to identify early deficits. J Glaucoma 2000; 9: 10–9.
11. Hood DC, Odel JG, Zhang X. Tracking the recovery of local optic nerve function after
optic neuritis: A multifocal VEP study. Invest Ophthalmol Vis Sci 2000; 41: 4032–38.
12. Brindley GS. The variability of the human striate cortex. Proc Physiol Soc 1972; 1–3P.
13. Stensaas SS, Eddington DK, Dobelle WH. The topography and variability of the primary
visual cortex in man. J Neurosurg 1974; 40: 747–55.
14. Steinmetz H, Gunter F, Bernd-Ulrich M. Craniocerebral topography within the interna-
tional 10-20 system. Electroencephalogr Clin Neurophysiol 1989; 72: 499–506.
15. Zhang X, Hood DC, Greenstein VC, Odel JG, Kangovi S, Liebmann JM. Detecting field
defects with multifocal VEPS: Two eyes are better than one. Invest Ophthalmol Vis Sci
(abstract) 1999; 40: S81.
16. Klistorner AI, Graham SL. Objective perimetry in glaucoma. Ophthalmology 2000; 107:
2283–99.
17. Schimmel H. The (±) reference: Accuracy of estimated mean components in average
response studies. Science 1967; 157: 92–4.
18. Meigen T, Bach M. On the statistical significance of electrophysiological steady-state
responses. Doc Ophthalmol 1999; 92: 207–32.
19. Hood DC, Zhang X, Hong JE, Chen CS. Quantifying the benefits of additional channels
of multifocal VEP recording. Doc Ophthalmol 2002; 104: 303–320.
20. Klistorner AI, Graham SL. Multifocal pattern VEP perimetry: analysis of sectoral
waveforms. Doc Ophthalmol 1999; 98: 183–96.
21. Zhang X, Hood DC. Quantitative methods for comparing changes in multifocal visual
evoked potentials to visual field defects. Invest Ophthalmol Vis Sci (abstract) 2000; 41:
S292.
22. Zhang X, Hong JE, Hood DC. Quantitative assessment of the quality of multifocal VEP
records: Bigger is not necessarily better. Invest Ophthalmol Vis Sci (abstract) 2001; 41: in
press.
Address for correspondence: D.C. Hood, Department of Psychology, Schermerhorn Hall,
Room 406, Columbia University, 1190 Amsterdam Ave., New York, NY 10027-7004, USA
Fax: +1-212-854-3609; E-mail: dch3@columbia.edu