Considerations and Algorithm Development for Scene-Based
Nonuniformity Correction (NUC)
Clay Stanek, Larry Ewing, Doug Moore
Mission Research Corporation, 735 State Street PO Drawer 719, Santa Barbara, CA 93102
Abstract
Pixel-to-pixel radiance nonuniformity is the prominent noise source from resistive arrays and must be compensated or otherwise
mitigated for high-fidelity testing of infrared imaging sensors. Many of the current advances in the capability of resistive array, IR
scene projection rest on improvements in nonuniformity correction (NUC) schemes. Early NUC schemes addressed the problem
of optical crosstalk, or spreading, and the types of algorithms available to mitigate its effect when individual-pixel
radiometry is performed. To date, however, relatively little work has been done on scene-based correction, where effects
such as power drops across the emitter array and thermal crosstalk are important to consider. This paper examines potential
problem areas in scene-based correction and discusses possible algorithms that could be used in a scene-based NUC approach.
Keywords: resistive emitter arrays, nonuniformity correction, scene-based algorithms, crosstalk
1 Introduction
Resistive array technology is finding increasing application in representing synthetic infrared targets and backgrounds. Pixel-
to-pixel radiance nonuniformity is the prominent noise source from resistive arrays and must be compensated or otherwise
mitigated for high fidelity testing of infrared imaging sensors. Any imaging method for measuring and correcting
nonuniformity noise is subject to theoretical performance limitations due to sensor measurement noise, geometrical resolution,
background offset, and optical resolution. This white paper will discuss approaches for improvement of projector
nonuniformity, enhancing the current nonuniformity correction procedures developed for DTRA under the NODDS (Nuclear
Optical Dynamic Display System) and the TDT (True Display Technology) programs done at Mission Research Corporation.
Resistive emitter arrays are characterized by fixed-pattern noise due to variations in the structural, circuit, emissive, and
reflective properties of the individual elements in the array. We refer to individual elements in the array as dixels, an
abbreviated contraction of display pixel. Dixel variations impose the ultimate limit on the ability of a staring infrared scene
projector to generate image detail, particularly in FLIRs and other scanning thermal imagers. To date the technology push has
been to increase the size and speed of IR scene projectors with less emphasis on nonuniformity correction (NUC), which is
general terminology for signal processing methods to reduce dixel fixed-pattern noise. Today the scene projection community
views NUC as a major area for optimization of infrared scene projectors in any true display simulation.
2 The NUC Problem and Current Approach
The NUC problem in general has two components: measurement of the required correction factor and application of the
measured correction factor to pre-computed or real-time generated imagery. In this paper we concern ourselves with the first
of the two problems, measurement of the emitter nonuniformity; all references to NUC in this paper mean the measurement
problem.
Similar to pixel pattern noise in focal plane arrays (FPA), resistive emitter arrays are characterized by a dixel pattern noise that
varies only slowly in time (on the order of days) if at all. Unlike semiconductor FPA, the amount of dixel pattern noise in a
resistive array depends on the emission level of the individual emitter. We have found it to be nearly impossible to correct
resistive-array pattern noise using only two parameters, e.g., offset and gain, as is commonly done in linearly responsive FPA.
It is difficult to model individual physical sources that combine to form pattern noise (electronic circuitry, spectral emissivity,
electrical resistance, thermal conductivity, and emissivity-area variations) let alone model the behavior of the entire chain. We
have had only limited success with physics-based models, which attempt to replace most of these variations with model
descriptions. Our effort has concentrated on measuring the emitter transfer function, which is the emitter in-band radiance
versus gate voltage, as shown in the Figure 1 example.
The NUC methods used to date, and those outlined in Section 3 of this paper, assume that each emitter in the array is
independent. Under this assumption NUC for a given emitter depends only on the level to which the emitter is being driven
and not on the levels of other emitters in the array. It has been pointed out in the literature that such an ideal is not always
satisfied and that the NUC for a given dixel should depend on the levels for all dixels in the array, not just the one being
NUC’ed. In this latter situation, the NUC for a given frame depends on the scene being displayed and the NUC method is
called scene-based correction. Scene-based and statistical NUC schemes are two clear directions for future work, which is
discussed in Section 6.
The point of confusion/contention is whether NUC should be applied to an individual physical element of the display, or
whether NUC should apply to the region of the image of the element on the FPA. To a large extent these differing points of
view beg the real problem of how to treat far-field diffraction effects. To a first approximation, the intensity of the far field
from a dixel image falls as r^-2, either when the image is in sharp focus or beyond the range of the blur distribution when the
image is defocused. In experimental modeling we have measured this falloff between r^-2 and r^-2.2; r^-2 is close enough for our
purposes here.
purposes here. The blur distribution is sometimes approximated by a Gaussian, but it is not Gaussian. It is instead a
convolution of the diffraction PSF of the optical train and the defocus distribution. When individual elements are measured
(this never truly happens, there are always other elements active), a few percent of their energy is lost in this far field
diffraction. Similarly the accumulated far field energy from the other active dixels enters the measurement of the intended
dixel.
When a block of elements is simultaneously active, their combined far field pattern raises the base for the dixels in the block
and for any other dixel that is to be activated. So depending on the point of view, NUC can be applied to individual display
elements, with proper accounting of the far field effects, or it may be applied to a scene. In either case the scene display
algorithm, the other half of the NUC problem, becomes more complex. Of course scene-based correction can account for other
correlated effects, such as cross-talk in the display or repeating minor defects in display manufacture, as well as the far field
diffraction.
In our work to date, each dixel in the array is considered an independent gray body that can be measured independently of any
other dixel in the array. To minimize data collection and processing time required to NUC a large (~ 1M dixel) array, an
infrared sensor (typically a staring FPA) is used to image so-called sparse arrays of dixels. Dixel spacing in the emitter array is
chosen to reduce overlap of the optical blur in the infrared image. Figure 2 shows an image captured from a 16x16-spaced sparse
Figure 1 Typical mean transfer function for 128² arrays (ISPS LWIR mean transfer curve, 8.24-8.74 µm, Sep-Oct 1998 data; mean radiance [e17 ph/sec-cm²-sr] and apparent temperature [K] versus gate voltage)
array of dixels (every sixteenth dixel by row and column is set to the same voltage) in a 128x128 resistive array. Clearly 256
images such as shown in Figure 2 are necessary to characterize the display, so that every element is measured. Some of the
smearing in Figure 2 is obviously generated in the FPA readout, but notice that the brightness outside the sparse array image is
below that seen between dixel images. Dixel images on the edges and at the corners of the sparse array image are in a different
environment, because of the lack of symmetry. This is an example of the far field problem mentioned above.
The essence of our current NUC procedure is to sum the energy in the blur
associated with a single dixel, after the background has been removed. Radiance
associated with this energy is estimated by comparing the summed signal to that
obtained when a known, calibrated, black body flood is imaged by the camera.
Although our experience has been with an FPA-based camera, there is no reason
that the technique cannot be used with a scanning infrared imager, with control of
pixel positions. This approach of projecting grids of emitters and performing
radiometric estimates to derive curves such as shown in Figure 1 is known as the
Sparse Array approach for NUC.
Limitations realized when the theory is put into practice include the following: unknown magnification of the imaging system,
dead pixels, rotation between the emitter array and the imager array (misalignment), and unwanted backgrounds.
2.1 Sparse Algorithm Assumptions
For a small radiance range (L1, L2), the incremental contribution to emitter spectral photon radiant intensity measured by a
detector of unit cell area Ad is computed as
∆I(λ) = [(V2 - Ve) L1 + (Ve - V1) L2] / (V2 - V1) · Ad   [p / s / sr / µm]   (1)
where (V1, V2) are the calibration voltages corresponding to known flood radiances L1 < L2, and Ve, with V1 < Ve < V2, is the
measured voltage due to the emitter. Error in this step depends on the actual linearity of the detector over the range (L1, L2).
For an arbitrary detector the error can be made small by judicious choice of pairs (L1, L2) in the piece-wise linear fit.
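As an illustration, the piecewise-linear estimate of Eq. (1) can be sketched as below. This is a minimal sketch, not the production code: the calibration table `cal` is a hypothetical stand-in for the measured flood pairs, and the detector unit-cell area factor Ad is omitted.

```python
def radiance_from_voltage(v_e, cal):
    """Piecewise-linear radiance estimate from a measured voltage v_e.

    cal: list of (voltage, flood radiance) pairs sorted by voltage, from
    blackbody flood calibrations. Within a bracketing pair, Eq. (1) gives
        L = ((V2 - v_e)*L1 + (v_e - V1)*L2) / (V2 - V1)
    (the detector unit-cell area factor Ad is omitted here).
    """
    for (v1, l1), (v2, l2) in zip(cal, cal[1:]):
        if v1 <= v_e <= v2:
            return ((v2 - v_e) * l1 + (v_e - v1) * l2) / (v2 - v1)
    raise ValueError("v_e outside calibrated range")

# Hypothetical flood calibration pairs: (voltage, radiance)
cal = [(0.5, 10.0), (1.0, 25.0), (2.0, 60.0)]
```

As the text notes, the error depends on detector linearity within each interval, so accuracy improves as the flood pairs are spaced more closely around the level being measured.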
This NUC will work quite well provided that the display-camera setup remains undisturbed. As long as the alignment,
magnification, and focus are not changed, repeatable, scaled images can be displayed and viewed.
If all of a dixel’s radiant energy were focused on Ad, then the characterization would be finished. It is the case, however, that
some of this energy is lost in the gaps between pixels. Dixel images that fall near or on a gap lose more of their energy in this
way than do those that are imaged near a pixel center. To mitigate this effect, we intentionally defocus the image somewhat,
spreading the dixel image until it is about twice a pixel size during NUC. Obtaining Ve now becomes a matter of combining
the effect from several neighboring pixels. In fact the fraction of a dixel’s energy that falls on a pixel is unknown, one minus
that fraction is from the display substrate, and Ve contains the far field contribution from the other elements. We can and have
constructed detailed theoretical models of this to study it, but there is no compelling reason to believe that they accurately
Figure 2 Example sparse array image
match the real world. An approach to empirically determine this fraction is given in Section 3. This is the heart of the NUC
problem, the confounding of unknowns with the dixel behavior we want to characterize. We need to deconstruct the
confounded factors that are from the display wafer, the optical train, and the particular test setup used to acquire data, to
recover something that is truly an element property. Our current methodology using sparse arrays does this, but it can be
improved.
A narrow-band procedure is the only way to obtain absolute radiometry without complete knowledge of the following: detector
transimpedance, integration time, photoconductive gain, spectral quantum efficiency, optics solid angle, optics spectral
transmission, and spectral bandwidth. Our current procedure renders all of these unknowns as unwanted nuisance parameters
that do not affect the accuracy of emitter radiometry. For wide-band measurements, or where the spectral quantum-efficiency
optical-transmission product varies across the band, the parameters are very important and will affect the variance of the
emitter radiometry. Furthermore, a radiometric NUC obtained using a wide spectral band NUC sensor will not, in general, be
radiometric when applied to a Unit Under Test (UUT) having a different spectral response than the NUC sensor.1
3 A 128x128 Projection System Calibration
When the sparse algorithm is put into practice, achieving acceptable results can be a frustrating experience. An excellent
example of the technique in practice was performed on the Infrared Scene Projection System (ISPS) for Komatsu LTD, in
Hiratsuka, Japan, this November.
To calibrate this 128x128 system, MRC used a semi-custom nonuniformity correction (NUC) sensor comprised of the SE-IR
CamIRa system, a Rockwell TCM2550 256x256 focal plane array (FPA), a custom LWIR lens at f/1.37, and a calibration
software suite developed for MRC by Saturn Systems of Duluth Minnesota. The calibration software provides a means of
displaying calibration scenes from the projector, acquiring them with the SE-IR camera system, and reducing the calibration
data into a response table for each emitter. The final table is fit on a pixel-by-pixel basis with a logarithmic function and this
function is used by the RTNUC subsystem to generate corrected imagery in real-time.
To further aid in the calibration of this projector, an EOI 4" blackbody simulator, a specialized background control and
suppression enclosure, and a custom temperature-monitoring station were also used.
This particular calibration had many challenges. Among the most demanding were the table-top nature of the set-up and the
mosaic approach needed to calibrate the entire array. Figure 3 shows how many portions of the array overlap and how in the
center portion it overlaps 4 ways. This can be a useful consistency check but also very frustrating if the absolute radiometry is
poor. It does bear out one of the golden rules of calibration: the NUC will only be as good as the ability to make repeatable
measurements.
This calibration required that data be collected in all of the four quadrants, at 11 gate voltages, with multiple FPA calibrations,
and 30 averaged frames for each resultant image to be reduced. The gate voltages used were 1.0, 1.5, 2.0, 2.3, 2.5, 2.7, 2.8,
2.9, 3.0, 3.1, and 3.2 V.
Figure 3 Mosaic calibration done in four quadrants (Q11, Q12, Q21, Q22)
1. C. Stanek, D. Moore, R. Driggers, "Analysis and Implications for Nonuniformity Correction (NUC) Between Sensors of Different Spectral Bands," Proc. SPIE, 1998.
Figure 4 highlights another, often overlooked aspect of NUC: the calibration of the NUC sensor is not trivial. In this example,
the camera was referenced to two blackbody references. In the left-most image, the reference temperatures were 10 degrees
apart; in the second image, 60 degrees apart. In both cases a 2-point calibration using these references was used to correct the
camera output when referencing a source temperature between the two references. What can be seen in the figure is that
residual nonuniformity exists in the corrected sensor output. These deviations can be attributed to nonlinearity in the FPA
response. As the assumption section states, the calibration becomes perfect in the limit of the reference temperatures
approaching the temperature to be estimated. When these reference temperatures are too scarce or too far apart, the calibration of
the sensor itself may be unsatisfactory. The acceptable level of residual nonuniformity in the NUC sensor is driven by
projector nonuniformity requirements and varies from system to system. In the figure, the nonuniformity is emphasized by the
choice of gray scaling. On the left, the nonuniformity is 0.25% and on the right, 0.63% (after dead pixel replacement is
performed).2
Figure 4 Blackbody images from camera with 2 pt calibration.
Figure 5 Alignment with calibration data reduction software
When the sparse array data is collected, the data must be reduced into the desired emitter radiance response at corresponding
emitter gate voltages. In Figure 5, the region-of-interest (ROI) registration is shown. This has historically been a labor-
intensive process that requires the user to enter the system magnification and horizontal and vertical pixel offsets so that the
first ROI corresponds to pixel (0,0).
2. J. Mooney, F. Shepherd, W. Ewing, J. Murguia, J. Silverman, "Responsivity nonuniformity limited performance of infrared staring cameras," Optical Engineering, Vol. 28, No. 11, p. 1153, November 1989, discusses other forms of residual nonuniformity.
Figure 6 Emitter Table for 3.0, 3.1, and 3.2V
The output of the calibration data reduction software is a table that provides the measured radiance for each emitter pixel at the
gate voltages used. Another key assumption of the sparse array technique is that the coalescence of the sparse information into
the emitter response table accurately reflects array behavior when real scenes are projected. This is one of the arguments
against the sparse NUC: the calibration scenes do not reflect ‘real’ scenes used in sensor testing. Furthermore, additional
concerns such as power dissipation in real scenes and associated substrate heating, resistive losses in the array (droop
phenomenon), and other types of cross talk are not adequately accounted for in the sparse procedure.
Figure 7 shows the results of using the emitter calibration table to project a ‘flat’ scene. This is a DC scene where a uniform
radiance response is desired from every pixel. On the left, the scene is projected without NUC, on the right, the RTNUC uses
the emitter table generated from the calibration to compensate for spatial noise.
Figure 7 Pre- and post-NUC flood imagery for 128² projector
There is obvious improvement; in this case a factor of 5 between the uncorrected and corrected scenes. However, the corrected
level is just a bit below 3%. The metric typically used is the 1 sigma deviation over the scene mean (both in radiance). To
achieve a superior level of calibration uniformity, additional measures must be taken. MRC is in the process of incorporating
many of these into our calibration procedure. The next section describes them.
4 Improvements to Sparse Array Procedure
4.1 Dixel Characterization
NUC of infrared displays has traditionally used sparse array images for calibration. There are several reasons for this; primary
among them are the regularity and precision achieved in spatial, voltage, and sampling statistics. That regularity implies
that the effects of that regular array of element images can be compensated and removed from the analysis, effectively
producing an isolated element image for nonuniformity correction.
4.1.1 Estimate and remove distant diffraction effects.
The diffraction pattern from the entire sparse array produces a few percent addition to the measurement taken at each bright
image. Accounting for this effect is a principal step in obtaining an isolated image.
4.1.2 Improved background estimate.
The integerized nature of output from analog-to-digital conversion leaves a unit step between possible background outputs.
Background subtraction introduces this jitter into the data before processing. A significant reduction in noise from this
background quantization can be realized by using the FPA calibration instead.
4.1.3 Determine offset and gain conversion to radiance.
Camera output is scaled to fall within an acceptable, apparently linear, range. Provided that several flood measurements are
taken at each setting, it is possible with good accuracy to discover the offset in ADU (analog to digital units) and the gain
conversion to radiance. Historically, a multi-point table is used for the sensor calibration table as well as the emitter table;
interpolation is used to estimate the measured radiance from nearest reference points. However, other models exist that fit with
high accuracy and require less storage and computational time.
4.1.4 Emitter Centroid Calculations
The current, most accurate, method of Region-of-Interest (ROI) location uses a centroid calculation of the element image. By
using the entire sparse array image and producing a least squares fit to the rows and columns of element image centroids, very
accurate estimates of the locations of image centers can be obtained.
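The two-step procedure (intensity-weighted centroids, then a least-squares fit along rows or columns of the sparse grid) might be sketched as follows; the grid origin, pitch, and noise level are illustrative assumptions, not measured values.

```python
import numpy as np

def centroid(roi):
    """Intensity-weighted centroid of a background-subtracted ROI."""
    ys, xs = np.indices(roi.shape)
    total = roi.sum()
    return (ys * roi).sum() / total, (xs * roi).sum() / total

def fit_grid_positions(centroids_1d):
    """Least-squares line through nominally evenly spaced centroids.

    centroids_1d: measured row (or column) coordinates of the element
    images, in grid order. Returns (origin, pitch) so the refined
    position of element k is origin + pitch * k.
    """
    k = np.arange(len(centroids_1d))
    pitch, origin = np.polyfit(k, centroids_1d, 1)
    return origin, pitch

# Hypothetical noisy centroid rows: true origin 4.0, true pitch 16.0
rows = 4.0 + 16.0 * np.arange(8) + np.random.default_rng(0).normal(0, 0.05, 8)
origin, pitch = fit_grid_positions(rows)
```

Fitting all centroids jointly averages down the per-image centroid noise, which is why the refined grid positions are more accurate than any single measured centroid.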
4.1.5 Flexible ROI sizes depending on image center placement.
When an image center is near the center of a pixel it is appropriate to use an ROI that has an odd edge length (e.g. 5x5).
However, when the image center is near the gap between pixels, the appropriate ROI should have at least one even side (e.g.,
5x4 ) depending on the geometrical relationship.
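A sketch of one possible selection rule, assuming a convention in which an integer centroid coordinate lies at a pixel center; the 0.25/0.75 fractional cut points are an assumed heuristic, not from the procedure above.

```python
def roi_shape(center, base=5):
    """Choose ROI edge lengths from the sub-pixel image-center position.

    Assumes a convention where an integer centroid coordinate lies at a
    pixel center: a fractional part near 0 keeps the odd base edge
    (symmetric about a pixel), while one near 0.5 (the gap between
    pixels) uses an even edge. The 0.25/0.75 cut points are an assumed
    heuristic.
    """
    def edge(frac):
        return base if frac < 0.25 or frac >= 0.75 else base - 1
    return edge(center[0] % 1.0), edge(center[1] % 1.0)
```

For example, an image centered on a pixel in the row direction but on a gap in the column direction would get a 5x4 ROI under these assumptions.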
4.1.6 Develop sub-pixel tables for dead pixel replacement.
When Steps 1 through 5 have been taken, so that accurate, effectively isolated, images are available, a new level of calibration
becomes possible. This new technique is the spatial analog of TDI, time delay integration. We will call it SOI, spatial offset
integration. Once the relation of an element image center to the illuminated pixel center is accurately known, regardless of any
defocus present, the entire set of sparse arrays (144 sparse arrays when a 12 element step is used) can be examined for nearly
identical pixel center to element image relations.
These geometric relations would then be grouped and serve as a means for dead and abnormal FPA pixel replacement. FPA
artifacts are a significant limitation to NUC, and the proper mitigation technique will give big dividends.
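The grouping step might be sketched as follows, assuming centroid coordinates in FPA pixel units and a hypothetical quantization of the fractional (sub-pixel) offsets.

```python
import math
from collections import defaultdict

def soi_bins(centers, nbins=4):
    """Group element-image centers by quantized sub-pixel offset ('SOI').

    centers: (row, col) image-centroid coordinates in FPA pixel units.
    Images whose centroids share the same fractional-offset bin are
    geometrically equivalent, so data from one can stand in for a dead
    or abnormal FPA pixel under another. nbins is an assumed granularity.
    """
    groups = defaultdict(list)
    for idx, (r, c) in enumerate(centers):
        fr, fc = r - math.floor(r), c - math.floor(c)
        key = (int(fr * nbins), int(fc * nbins))
        groups[key].append(idx)
    return dict(groups)

# Hypothetical centroids: the first two share a sub-pixel phase
centers = [(10.10, 20.12), (26.11, 36.13), (42.60, 52.62)]
groups = soi_bins(centers)
```

Grouping by sub-pixel phase is what makes element images from different sparse-array offsets mutually comparable.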
4.1.7 SOI tables used for image normalization.
The SOI tables would not only provide for dead pixel replacement, but also will allow a reliable estimate of the fraction of
radiant energy from an element image that falls on a pixel in the ROI. This would allow another source of estimation error to
be removed from the radiometric calculation.
4.2 Substrate Characterization
Substrate behavior has been one of those nuisance factors. Changing substrate temperature during sparse array data runs
caused us to reorder the voltage step and the sparse array offset step and add a null phase for substrate cooling. When
significant substrate areas are heated by having large numbers of local dixels at display voltage for significant periods, the
scene generation algorithm should account for this effect and reduce dixel voltages to match the requested radiance.
Using particular patterns to test substrate behavior allows gradients and other irregularities to be identified and compensated.
Variations in the substrate, if they occur, should not occur at the dixel spacing. Irregularities in substrate bonding would
probably be evident as spatial gradients in the cooling rate, for example. This effort is more experimental than those outlined
for improvement of the sparse array NUC.
4.3 Calibration Software Validation
A key element to calibration is defining the important parameters for a setup that affect radiometry. It is often assumed that the
NUC is a ‘software’ problem. Closer to the truth is that NUC is a complex radiometry problem and software is necessary to
reduce the voluminous amounts of data generated in the radiometry process.
In Figure 8, a series of images is shown that reveals possible defocus levels that might be used during a calibration. This is one
such parameter of importance: what should the typical blur diameter be to perform a repeatable NUC?
Figure 8 Examples of various defocus levels measured in DYNNICS
The list of important parameters is long when considering a NUC scheme and it is useful to have a tool that allows the effects
of these parameters to be studied without the burden of data collection. In a test setup, it is not always possible to isolate a
parameter in the collected data.
Figure 9 Improvements to sparse array procedure shown in 'cross' image
MRC has developed a program called SYN_CAM for the purpose of modeling projector displays and their corresponding
appearance and measurement by a sensor. The program has a detailed model of emitter geometry, optical effects including
distortion and defocusing, and their interplay in determining how the emitter radiance appears at the detector plane. A detailed
diffraction model (including a far-field model) is also included.
In Figure 9, our progress in the calibration software performance is demonstrated. The intent is to project a uniform ‘cross’
shape. The SYN_CAM program produces calibration-scene output that closely models how the scenes would appear from a
projector. The calibration software is used to reduce this data and construct an emitter table. This table is used to correct a
cross pattern, which is reprojected. The reprojection is then 'captured' by a perfect camera (100% fill factor, no responsivity variations
across detectors). It is clear from the sequence of images that the calibration software performance has dramatically improved.
Mostly, the improvements have been in the elimination of pattern noise introduced in the image registration and ROI
summations.
4.4 RTNUC Improvements
The typical emitter transfer curve has a strongly nonlinear, logarithmic characteristic when plotting radiance versus gate voltage.
Because of this, simple functions do not fit the transfer curve well. Using polynomials usually leads to oscillations over parts of
the operating range, which is usually unacceptable as it leads to non-monotonic approximations to the transfer curve. One
approach often used is a multi-point lookup table. Depending on the sampling of the transfer curve, this can lead to rather large
tables that, in real-time, are addressed, searched for the proper interval, and then interpolated to give the required radiance at
each pixel.
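Such a multi-point lookup can be sketched as follows; the table values are hypothetical, not measured emitter data.

```python
from bisect import bisect_right

def lut_voltage(r, rads, volts):
    """Multi-point lookup with interval search and linear interpolation.

    rads:  ascending radiance samples of an emitter's transfer curve.
    volts: matching gate voltages. Both tables are hypothetical.
    """
    i = bisect_right(rads, r)
    i = min(max(i, 1), len(rads) - 1)    # clamp to the table's intervals
    r0, r1 = rads[i - 1], rads[i]
    v0, v1 = volts[i - 1], volts[i]
    return v0 + (v1 - v0) * (r - r0) / (r1 - r0)

rads = [0.0, 0.1, 0.4, 1.0]     # radiance samples
volts = [1.0, 2.5, 2.9, 3.2]    # matching gate voltages
```

The binary search keeps the per-pixel cost at O(log n) even for finely sampled tables, which is what makes the table approach feasible in real time.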
One fit that proves particularly useful is the Horner-Karin fit. This is a fit in logarithmic space; the upper portion of the
transfer curve is logarithmically parabolic and the lower portion logarithmically linear. The break point is chosen
empirically after examining the typical transfer curves of the emitters for that array.
The fit is logarithmic in radiance and of the form:
Vgate = a0 + a1 log(R) + a2 (log(R))^2
The fit also performs best when applied above the threshold gate voltage, here about 2.5 V; the region below threshold can be
treated as linear:
Vgate = b0 + b1 R.
The quality of the fit and its low order suggest savings in coefficient storage space. This fit requires five coefficients: three
for the HK portion and two for the linear region. Figure 10 shows the upper part of the fit on typical emitter radiance data.
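On synthetic data generated from the model above, the two-piece fit can be sketched as follows; the coefficient values and the 2.5 V threshold are illustrative assumptions.

```python
import numpy as np

def fit_horner_karin(radiance, vgate, v_thresh=2.5):
    """Two-piece emitter transfer-curve fit described in the text.

    Above the threshold gate voltage, Vgate = a0 + a1*log(R) + a2*log(R)**2
    (the Horner-Karin portion); below it, Vgate = b0 + b1*R.
    Returns (a, b) = ((a0, a1, a2), (b0, b1)).
    """
    upper = vgate >= v_thresh
    x = np.log(radiance[upper])
    a = np.polyfit(x, vgate[upper], 2)[::-1]                 # (a0, a1, a2)
    b = np.polyfit(radiance[~upper], vgate[~upper], 1)[::-1] # (b0, b1)
    return a, b

def eval_hk(r, a, b, v_thresh=2.5):
    """Evaluate the fit: HK branch if it lands above threshold, else linear."""
    v = a[0] + a[1] * np.log(r) + a[2] * np.log(r) ** 2
    return v if v >= v_thresh else b[0] + b[1] * r

# Synthetic transfer curve consistent with the model (assumed coefficients)
r = np.linspace(0.05, 1.4, 40)
v = np.where(r > 0.2,
             3.0 + 0.35 * np.log(r) + 0.05 * np.log(r) ** 2,
             1.0 + 4.0 * r)
a, b = fit_horner_karin(r, v)
```

Five stored coefficients replace a multi-point table, and the quadratic-in-log form is, for this example, monotonic over the fitted range, avoiding the oscillation problem of high-order polynomials.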
Figure 10 Partial Horner-Karin fit to emitter radiance data (fit over 2.5 to 3.2 V; gate voltage versus radiance)
It is possible for this fit to be coupled with a scene-dependent calibration. In this case the sparse method provides the basis for
the initial HK fit and then additional correction terms are added based on other factors such as droop. RTNUC will not only
implement the best HK fit, but will have additional correction terms to achieve the required fidelity:
Vgate = Horner-Karin(R) + ε1 f1(R) + ε2 f2(R) + ....
Determining the f()s comes from feature extraction. The first-order correction will certainly be the lowest-order moment of the
scene, the DC value, or R_ave. The other f()s can be determined through empirical orthogonal function (EOF) analysis. This
is a standard technique for decomposing error covariance matrices into more revealing forms and is discussed further in Section
6.
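The EOF decomposition can be sketched with a singular value decomposition of a matrix of NUC residuals; the shapes and the toy rank-one residuals below are assumptions for illustration.

```python
import numpy as np

def eof_modes(residuals, n_modes=2):
    """EOF analysis of NUC residuals via singular value decomposition.

    residuals: (n_scenes, n_pixels) array, each row the radiance error of
    one displayed scene after the baseline fit. The leading right-singular
    vectors are the dominant spatial error patterns; their per-scene
    amplitudes suggest candidate correction terms f_i.
    """
    mean = residuals.mean(axis=0)
    u, s, vt = np.linalg.svd(residuals - mean, full_matrices=False)
    modes = vt[:n_modes]                         # dominant spatial patterns
    amplitudes = u[:, :n_modes] * s[:n_modes]    # per-scene weights
    return mean, modes, amplitudes

# Toy residuals: one fixed spatial pattern scaled differently per scene
rng = np.random.default_rng(1)
pattern = rng.normal(size=64)
weights = np.array([0.5, -1.0, 2.0, 0.1, -0.4, 1.3])
res = np.outer(weights, pattern)
mean, modes, amps = eof_modes(res, n_modes=1)
```

For rank-one toy residuals a single mode reconstructs the data exactly; on real data the number of retained modes would be chosen from the singular-value spectrum.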
5 Scene-Based Correction Algorithms
5.1 NUC Procedure Using Current Maps
Figure 11 Apparent temperature vs. current measurements on DIRSP arrays (Array 10A at 250 K substrate, 8.44-8.54 µm band, 23x21 ROI, data of 2/25/98; ROI mean [adu] versus Vgate, with apparent temperatures from 255 K at 0 µA to 427 K at 145 µA average current)
Figure 12 Current versus temperature shows a parabolic relationship (current [µA] versus apparent temperature [K], 8.24-8.74 µm band; polynomial fit with R = 0.99992)

Figure 13 Radiometric column-to-column variations can be compared against current draw in the same columns (Array 3-7C at 3.1 V; quadrants Q11, Q21, Q41; radiance 9.47-9.71 µm [e17 ph/sec-cm²-sr] versus emitter column)
One possible approach with promise is to virtually eliminate the radiometry from the NUC problem. This is possible by
looking for other fundamental array properties that can be measured and also correlate with nonuniformity.
An obvious fundamental property is current. Each pixel draws a given current (on the order of 100 µA) for a specific gate
voltage. Since the resistance of the emitter and the current supplied to it determine the power drawn, it is reasonable to
conclude that power and radiance must be closely related.
In Figure 11, this relationship is shown by plotting as a function of gate voltage the current drawn and apparent temperature
achieved for a small block of emitters on the array. In Figure 12, the relationship between current and this temperature is
plotted independent of gate voltage. For these narrowband measurements, theory shows that the temperature and current
drawn should be related parabolically and this is validated by the fit shown in Figure 12.
The technique would construct a transfer curve for each emitter by first constructing the current versus gate voltage
relationship for each emitter. This would generate a nonuniformity map in current:gate voltage space. The next step would be
to construct the mean transfer curve for the entire array in radiance:gate voltage space. Once this curve is known, gate voltage
can be used to express current as a function of temperature or radiance (as shown in Figure 12).
The technique eliminates many of the problems associated with current NUC schemes by reducing the radiometry to the
measurement of a single, gross transfer curve rather than the million or so needed for a display such as the DIRSP. Instead,
the individual transfer curves are derived from measurements made almost entirely with a multimeter, and a simple transfer
function is applied to those curves to map them into radiance:gate-voltage space.
The assumptions are obvious: all the nonuniformity is due to variations in current drawn by the individual emitters, and a
mean transfer curve can be used to map the current:gate voltage relationships to all the emitters individually.
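Under those two assumptions, the construction might be sketched as follows; the array sizes, currents, and radiance values are toy numbers.

```python
import numpy as np

def current_map_nuc(i_of_v, mean_r_of_v):
    """Sketch of the current-map NUC idea (shapes are assumptions).

    i_of_v:      (n_emitters, n_gate_voltages) current drawn by each
                 emitter at each gate voltage, measured electrically.
    mean_r_of_v: (n_gate_voltages,) array-mean radiance at the same gate
                 voltages, from a single radiometric measurement.
    Maps each emitter's current through the array-mean current->radiance
    relation to build a per-emitter radiance:gate-voltage curve.
    """
    mean_i = i_of_v.mean(axis=0)                    # array-mean current(V)
    return np.interp(i_of_v, mean_i, mean_r_of_v)   # per-emitter radiance

# Toy data: 3 emitters; one draws 10% more current, one 10% less
mean_i_true = np.array([50.0, 90.0, 140.0])         # µA at 3 gate voltages
i = np.vstack([mean_i_true, 1.1 * mean_i_true, 0.9 * mean_i_true])
mean_r = np.array([0.1, 0.4, 1.0])                  # mean radiance curve
r_per_emitter = current_map_nuc(i, mean_r)
```

An emitter drawing more current than the array mean is assigned proportionally higher radiance; note that np.interp clamps currents outside the measured mean range to the end values, so in practice the mean curve must span the emitters' currents.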
To validate this technique, MRC is currently generating nonuniformity maps in current space to correlate against
nonuniformity maps generated by radiometric techniques. If a strong correlation can be shown, this technique may replace the
cumbersome optical setups and enormous data reduction task used in present NUC systems. In Figure 13, column
nonuniformities are shown as measured on an array with an LWIR sensor. This data set will provide the basis for comparing
current nonuniformity along array columns. If the correlation exists, MRC will proceed to do more detailed comparisons for
portions of the array on an emitter by emitter basis.
5.2 Finite-Difference Schemes
Difference schemes all fundamentally rely on the ability of the NUC processing module to make an image comparison between
what image was intended and what was actually projected. These schemes can be iterative and can be implemented with anything
from simple scenes (such as the DC scene) to more realistic target/background imagery. A typical difference scheme is shown in Figure 14.
[Figure 14 (flowchart, reconstructed in text): (0) the CIG generates image Sm; (1a) the CIG inputs the desired scene
S_inp(i,j), transformed as T(S_inp(i,j)); (1b) the NUC sensor grabs frame S_k(i,j) through the DIRSP optics; (2, 3) the NUC
processor forms the difference C(i,j) and tests it against the tolerance e(i,j); (4, 5) it computes the gain β and updates
S_k+1(i,j) = S_k(i,j) + β·C(i,j); (6) the RTNUC in global calibration mode estimates voltages V_k+1(i,j) for the CES drive
electronics; the inner loop runs for k = 1 to NUM_ITERATIONS inside an outer loop for m = 1 to NUM_IMAGES; (7) converged
(R, V, R_ave) values are written to disk for each dixel; (8) the (R, V, R_ave) data are processed for operational RTNUC use.]
Figure 14 Finite difference scheme diagram
Step 0 shows that the CIG (Computer Image Generator) contains a series of images to be run through the global calibration
procedure. In the figure, the two larger boxes denote loops: the outer loop runs over the images for which calibration data are
gathered, while the inner loop runs over the iterations needed to achieve the desired scene uniformity and accuracy. The double
lines entering the loop at step 0 and leaving between steps 7 and 8 indicate the data structures being passed. Outside the loop,
the CIG holds an array of scenes (an array of pixel arrays), whereas inside the loop a single scene is processed. Likewise,
between steps 7 and 8 an array accumulates, because a collection of information is stored for each calibrated image.
In step 1a, the desired scene is generated by the CIG. If this is the first iteration (k=0), then this scene is fed to the RTNUC for
radiance lookup to voltage and then through the CES (Control Electronics Subsystem). Also, the CIG generated image is
passed to the NUC processor.
Step 1b refers to the acquisition of the projected image by the NUC sensor. This step involves several subprocesses including
the NUC sensor optics, detector response and readout, and conversion of the digital data into radiometric values. This
radiometric conversion involves referencing a known source and applying a correction (usually linear or multi-point) to the
digital data. For an FPA, the thermal reference source is usually an external NIST-traceable blackbody. Scanning sensors
typically have internal reference sources such as microblackbodies or TEC strips on each side of the scan.
Step 2 refers to transforming the desired input scene to a coordinate grid that is compatible with measurements obtained from
the NUC sensor. For instance, a 1024x1024 desired scene may be projected with 4:1 oversampling such that it is imaged by a
256x256 FPA. In the most general case, the IFOV and the IFOVC may differ, the image sizes may differ, and the images may
not be perfectly aligned in space (i.e., dixel (1,1) may not be centroided on sensor pixel (1,1)). These are all formidable issues
that the transformation module must address; in the most general sense, the role of step 2 is to ensure that subsequent
comparisons are valid.
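In the idealized case of perfect alignment and an integer oversampling ratio, the transformation of step 2 reduces to a block average onto the sensor grid. The following sketch assumes exactly that idealization; real systems must additionally handle rotation and sub-pixel registration, which this code does not attempt.

```python
import numpy as np

def to_sensor_grid(scene, oversample=4):
    """Block-average an oversampled projector scene onto the NUC sensor
    grid (e.g. a 1024x1024 desired scene imaged by a 256x256 FPA at 4:1).
    Assumes perfect alignment and an integer oversampling ratio."""
    n, m = scene.shape
    assert n % oversample == 0 and m % oversample == 0
    return scene.reshape(n // oversample, oversample,
                         m // oversample, oversample).mean(axis=(1, 3))

scene = np.arange(1024 * 1024, dtype=float).reshape(1024, 1024)
measured_grid = to_sensor_grid(scene)   # 256x256, comparable to the FPA image
print(measured_grid.shape)
```

Each output element is the mean of the corresponding 4x4 block of dixels, which is what makes the desired scene directly comparable with the sensor measurement.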
Step 3 summarizes the main role of the NUC processor during scene global calibration. The desired scene radiance and
measured scene radiance are compared by the NUC processor and a difference matrix is constructed, C, where Cij represents
the difference between desired and measured radiances at dixel location (i,j). At this point, the difference matrix (or some
appropriate scalar derived from it) is compared to a desired tolerance ε; for example, ε may be an upper bound on the infinity
norm allowed for the matrix C.
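Step 3 can be stated in a few lines. In this sketch the convergence criterion is interpreted as the maximum absolute entry of C (one common reading of the matrix infinity-norm bound); the 4x4 arrays are toy data.

```python
import numpy as np

def difference_matrix(desired, measured):
    """C(i,j): desired minus measured radiance at each dixel location."""
    return desired - measured

def within_tolerance(C, eps):
    """Convergence test of step 3: here the criterion is the maximum
    absolute entry of the difference matrix C."""
    return float(np.abs(C).max()) <= eps

desired = np.full((4, 4), 10.0)
measured = desired + np.array([[0.0] * 4] * 3 + [[0.0, 0.0, 0.0, 0.2]])
C = difference_matrix(desired, measured)
print(within_tolerance(C, eps=0.1))   # one dixel is 0.2 off, so not converged
```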
In step 4, the new, scene-desired radiance is calculated by computing the coefficient, β, and in step 5 multiplying by the
calculated scene radiance change needed, and adding it to the measured scene radiance from step 1b.
At first glance, it would appear that β = 1 is optimal. This is likely not the case, for two reasons. First, the voltage changes
estimated in the next step are not exact; an underestimate in the radiance correction (β < 1) acts in the next step like an
overestimate of the derivative dL/dV, yielding smaller, more conservative voltage steps that are unlikely to produce oscillating
iterations and are therefore desirable. The best choice of β is one that critically damps the convergence cycle. Second, the
measurement of Sk is not perfect but subject to noise; in this regard the optimal choice of β is also tied to the best estimate of
Sk, a problem closely resembling Kalman filtering.
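The damping argument can be made concrete with a scalar toy model (an illustration, not the DIRSP loop): if the true emitter response is L = g·V but the voltage update uses a fitted slope g_hat, each pass multiplies the radiance error by (1 - β·g/g_hat). The values of g and g_hat below are arbitrary.

```python
# Toy model: when the fitted slope g_hat underestimates the true slope g,
# beta = 1 overshoots and the error oscillates in sign; beta < 1 restores
# monotone decay. Critical damping occurs at beta = g_hat/g (error -> 0 in
# one step).
def errors(beta, g=1.0, g_hat=0.6, err0=1.0, iters=6):
    e, out = err0, []
    for _ in range(iters):
        e *= (1.0 - beta * g / g_hat)   # per-iteration error gain
        out.append(e)
    return out

overshoot = errors(beta=1.0)   # gain = 1 - 1/0.6 ~ -0.67: sign flips each pass
damped = errors(beta=0.5)      # gain = 1 - 0.5/0.6 ~ +0.17: monotone decay
print(overshoot[:3], damped[:3])
```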
In step 6, the sparse array table is used to estimate the change in voltage needed to accommodate the desired change in
radiance. The new voltages are estimated from the current sparse array dixel response curves by the equation:
Vk+1(i,j) = Vk(i,j) + ΔL(i,j) / (dL/dV)|Vk(i,j)
where ΔL(i,j) = β·C(i,j) is the radiance change from step 5 and the derivative is estimated from the Horner-Karin fit obtained
in the sparse array calibration.
After the next iterate of the voltages has been calculated, the CES is commanded to drive the arrays. The output is channeled
through the DIRSP optics train and captured by the NUC sensor as Sk+1.
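The step-6 update is Newton-like, and its behavior can be checked on a stand-in transfer curve. Both L(V) and its derivative below are hypothetical analytic functions; in practice the derivative would come from the Horner-Karin sparse-array fit, not a closed form.

```python
# Step 6 sketch: iterate the voltage toward a target radiance using the
# derivative of an assumed monotone transfer curve L(V).
def L(V):
    return V ** 3            # hypothetical stand-in for the radiance curve

def dLdV(V):
    return 3.0 * V ** 2      # hypothetical stand-in for the fitted derivative

def next_voltage(V_k, delta_L):
    """V_{k+1} = V_k + dL / (dL/dV) evaluated at V_k."""
    return V_k + delta_L / dLdV(V_k)

V, target = 2.0, 10.0
for _ in range(8):
    V = next_voltage(V, target - L(V))   # delta_L plays the role of beta*C
print(round(L(V), 6))                    # converges to the target radiance
```

Because the iteration re-measures the scene each pass, errors in the fitted derivative slow convergence but do not bias the final radiance.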
Step 7 shows that once the measured scene and desired scene are within some specified tolerance, the dixel radiance, voltages
required to achieve them, and any other parameters of interest (such as average scene radiance) for regression are stored.
In step 8, all the recorded values from all the scenes are sent to the post-processing stage. This stage is an optimization stage
for real-time operation, using the information gathered in steps 1 through 7.
6 Statistical Approach to NUC
The approach to NUC outlined so far has assumed that each emitter in the array is independent of all other emitters in the array.
It is not unreasonable to believe that if an emitter is not independent, then it is most strongly influenced by those emitters that
are nearest. If this were the case, then an image calibrated to be uniform using a calibration scheme which assumes
independent emitters would show areas larger than a dixel size with similar levels of emission. This correlation between
nearby dixels may depend on temperature, image nonuniformity, manufacturing inhomogeneities, and other unknown factors.
The simplest approach next to assuming emitter independence is to assume that there is some correlation in emission between
dixels that does not depend on temperature or image nonuniformity, but only on (perhaps vector) distance between dixels. This
assumption would first be tested using exploratory data analysis.
The purpose of exploratory data analysis is to examine the data in an initial, cursory manner to identify the spatial correlations
that may be present. This first step is to avoid performing lengthy, unnecessary calculations to determine model parameters
that may not apply. Graphical display of the data is essential here. Large scale trends in the data, perhaps due to misalignment
of the emitting and detecting arrays and other such factors, should be removed at this point. For treating data on a regular grid
whose random component comes from a continuous distribution, a useful technique is median polish. This attempts to
decompose the data into a sum, data = all + row + column + residuals, where the row and column effects are obtained by
iteratively sweeping out row and column medians. It is the residuals that should then be examined for correlation.
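A minimal median polish, sketched below on synthetic data (the grand "all" term is left folded into the row and column effects for brevity), shows how the structured trend is removed before the residuals are examined:

```python
import numpy as np

def median_polish(data, iters=10):
    """Tukey-style median polish: iteratively sweep row and column medians
    out of the table. Returns row effects, column effects, and residuals,
    with data == row[:, None] + col[None, :] + residuals (up to rounding)."""
    r = data.astype(float)
    row = np.zeros(r.shape[0])
    col = np.zeros(r.shape[1])
    for _ in range(iters):
        rm = np.median(r, axis=1)     # sweep out row medians
        row += rm
        r = r - rm[:, None]
        cm = np.median(r, axis=0)     # sweep out column medians
        col += cm
        r = r - cm[None, :]
    return row, col, r

# Synthetic grid: strong row/column structure plus small continuous noise.
rng = np.random.default_rng(1)
truth = rng.normal(0, 2, 16)[:, None] + rng.normal(0, 2, 16)[None, :]
data = truth + rng.normal(0, 0.1, (16, 16))
row, col, res = median_polish(data)
print(res.std() < data.std())   # large-scale trend removed from residuals
```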
Once it is clear that there may be correlation in the residuals, it is then appropriate to try to estimate the degree to which it is
present. An estimate of the correlation function (or variogram or covariogram) as a function of distance between dixels (lag)
can be obtained by averaging the squared differences of all residual pairs a fixed distance apart. It should be kept in mind that
the correlation may depend on direction; that is, it may be stronger in one direction than in the perpendicular direction, in
which case the correlation is a function of vector distance. Other, more resistant techniques are available to estimate this
function when outliers (bad data) may be a problem. A correlation distance can be estimated from this function; it
approximates how many nearby dixels influence a particular dixel.
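An empirical semivariogram along one axis illustrates the idea. The residual field below is synthetic (white noise smoothed over three dixels), so the true correlation distance is known; on real residuals the lag at which the curve flattens would be read off instead.

```python
import numpy as np

def variogram_1d(res, max_lag):
    """Empirical semivariogram along the row direction:
    gamma(h) = 0.5 * mean squared difference of residuals h dixels apart.
    Repeating along columns would reveal directional (anisotropic)
    correlation."""
    gam = []
    for h in range(1, max_lag + 1):
        d = res[:, h:] - res[:, :-h]
        gam.append(0.5 * np.mean(d ** 2))
    return np.array(gam)

# Synthetic residuals correlated over ~3 dixels: a 3-point moving average
# of white noise along each row.
rng = np.random.default_rng(2)
w = rng.normal(size=(64, 256))
kernel = np.ones(3) / 3.0
res = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, w)

g = variogram_1d(res, max_lag=6)
print(g)   # rises with lag, then flattens past the correlation distance
```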
The correlation function gives a clue to possible stochastic models that may describe the random behavior of the emitting array.
The simplest possible model with correlation for a regular grid would be one where a given dixel is correlated only with its
four nearest neighbors. (If a dixel is located at point (i,j), then the four nearest neighbors are then located at (i+1,j), (i-1,j),
(i,j+1), (i,j-1).) The correlation coefficient would in general be different in the x and y directions and can be estimated in a
number of ways. It may be necessary to use more complicated models, such as more general autoregressive models, which
require more parameters.
Once a model has been selected it must be tested to see if indeed calibration is improved. Improvement is generally obtained
when the variance of the residuals is reduced. As an example, for the nearest neighbor case, if z(i,j) is the radiance minus the
mean radiance measured at dixel (i,j), then the residual is defined as
r(i,j) = z(i,j) - a1[z(i+1,j) + z(i-1,j)] - a2[z(i,j+1) + z(i,j-1)] (2)
where a1 and a2 are the estimated model coefficients. This set of residuals should have smaller variance than the original set,
z(i,j). Now set the voltage at location (i,j) to that voltage which corresponds to radiance = mean - a1[z(i+1,j) + z(i-1,j)] -
a2[z(i,j+1) + z(i,j-1)]. This new set of voltages should give a more uniform radiance from the array near the (i,j) element.
This process may be iterated.
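The variance-reduction test for the nearest-neighbor model of Eq. (2) can be sketched directly. The field below is synthetic with genuine neighbor correlation built in, and the coefficients a1 and a2 are simply assumed rather than estimated, which the full procedure would of course do.

```python
import numpy as np

def nn_residuals(z, a1, a2):
    """Eq. (2): r(i,j) = z(i,j) - a1*[z(i+1,j)+z(i-1,j)]
                         - a2*[z(i,j+1)+z(i,j-1)],
    evaluated on the interior of the array (edges need separate handling)."""
    core = z[1:-1, 1:-1]
    return (core
            - a1 * (z[2:, 1:-1] + z[:-2, 1:-1])
            - a2 * (z[1:-1, 2:] + z[1:-1, :-2]))

# Synthetic mean-removed radiance field with nearest-neighbor correlation:
# white noise plus a fraction of its four neighbors.
rng = np.random.default_rng(3)
w = rng.normal(size=(66, 66))
z = w.copy()
z[1:-1, 1:-1] += 0.4 * (w[2:, 1:-1] + w[:-2, 1:-1] + w[1:-1, 2:] + w[1:-1, :-2])
z -= z.mean()

r = nn_residuals(z, a1=0.25, a2=0.25)
print(r.var() < z[1:-1, 1:-1].var())   # smaller variance: the model helps
```

If the residual variance were not reduced, the independence assumption would be retained and the extra model parameters discarded.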
7 Summary
The sparse array algorithm has some fundamental limitations that are difficult to overcome. Other areas of criticism include
the stark difference between calibration scenes and 'real' scenes, and the inability to compensate for scene-based effects such
as thermal, optical, and electrical crosstalk, as well as the 'droop' phenomenon.
The sparse array procedure can be dramatically improved and new levels of calibration attained by:
• Estimating and removing distant diffraction effects
• Improving the background estimate
• Determining the offset and gain conversion to radiance
• Least-squares fitting to the grid of image centers
• Using flexible ROI sizes to improve symmetry with respect to image centers
• Examining and correcting the least-squares grid-fit parameters
• Developing SOI subpixel resolution tables for dead pixel replacement
• Using SOI tables for radiometric normalization.
Scene-based algorithms discussed include current versus gate voltage maps, finite difference schemes, and difference schemes
that utilize advanced filters to reduce measurement error. Scene-based correction schemes that are more statistically model-
driven were also mentioned.
Given research costs, the current-mapping approach holds the most near-term promise, with scene-based correction schemes a
very real possibility in the near future.
8 Acknowledgments
This work was supported by the Defense Threat Reduction Agency (DTRA) under contract XXXXXXX and U.S. Army
Simulation Training and Instrumentation Command (STRICOM) under contract N61339-96-C-0074.
Author info: C.S. stanek@mrcsb.com, D.M. dmoore@mrcsb.com, L.E. ewing@mrcsb.com
  • 1. Considerations and Algorithm Development for Scene-Based Nonuniformity Correction (NUC) Clay Stanek, Larry Ewing, Doug Moore Mission Research Corporation, 735 State Street PO Drawer 719, Santa Barbara, CA 93102 Abstract Pixel-to-pixel radiance nonuniformity is the prominent noise source from resistive arrays and must be compensated or otherwise mitigated for high-fidelity testing of infrared imaging sensors. Many of the current advances in the capability of resistive array, IR scene projection rest in the improvement of nonuniformity correction (NUC) schemes. Early NUC schemes address the problem of optical crosstalk or spreading and the types of algorithms available that help to mitigate its effect when individual pixel radiometry is performed. However, there has been relatively little work done on scene-based correction to date where the effects such as power drops across the emitter array and thermal crosstalk are important to consider. This paper will examine potential problem areas in scene-based correction and discuss possible algorithms that could be used in a scene-based NUC approach. Keywords: resistive emitter arrays, nonuniformity correction, scene-based algorithms, crosstalk 1 Introduction Resistive array technology is finding increasing application in representing synthetic infrared targets and backgrounds. Pixel- to-pixel radiance nonuniformity is the prominent noise source from resistive arrays and must be compensated or otherwise mitigated for high fidelity testing of infrared imaging sensors. Any imaging method for measuring and correcting nonuniformity noise is subject to theoretical performance limitations due to sensor measurement noise, geometrical resolution, background offset, and optical resolution. 
This white paper will discuss approaches for improvement of projector nonuniformity, enhancing the current nonuniformity correction procedures developed for DTRA under the NODDS (Nuclear Optical Dynamic Display System) and the TDT (True Display Technology) programs done at Mission Research Corporation. Resistive emitter arrays are characterized by fixed-pattern noise due to variations in the structural, circuit, emissive, and reflective properties of the individual elements in the array. We refer to individual elements in the array as dixels, an abbreviated contraction of display pixel. Dixel variations impose the ultimate limit on the ability of a staring infrared scene projector to generate image detail, particularly in FLIRs and other scanning thermal imagers. To date the technology push has been to increase the size and speed of IR scene projectors with less emphasis on nonuniformity correction (NUC), which is general terminology for signal processing methods to reduce dixel fixed-pattern noise. Today the scene projection community views NUC as a major area for optimization of infrared scene projectors in any true display simulation. 2 The NUC Problem and Current Approach The NUC problem in general has two components: measurement of the required correction factor and application of the measured correction factor to pre-computed or real-time generated imagery. In this paper we concern ourselves with the first of the two problems, measurement of the emitter nonuniformity; all references to NUC in this paper mean the measurement problem. Similar to pixel pattern noise in focal plane arrays (FPA), resistive emitter arrays are characterized by a dixel pattern noise that varies only slowly in time (on the order of days) if at all. Unlike semiconductor FPA, the amount of dixel pattern noise in a resistive array depends on the emission level of the individual emitter. 
We have found it to be nearly impossible to correct resistive-array pattern noise using only two parameters, e.g., offset and gain, as is commonly done in linearly responsive FPA. It is difficult to model individual physical sources that combine to form pattern noise (electronic circuitry, spectral emissivity, electrical resistance, thermal conductivity, and emissivity-area variations) let alone model the behavior of the entire chain. We
  • 2. have had only limited success with physics-based models, which attempt to replace most of these variations with model descriptions. Our effort has concentrated on measuring the emitter transfer function, which is the emitter in-band radiance versus gate voltage, as shown in the Figure 1 example. The NUC methods used to date, and those outlined in Section 3 of this paper, assume that each emitter in the array is independent. Under this assumption NUC for a given emitter depends only on the level to which the emitter is being driven and not on the levels of other emitters in the array. It has been pointed out in the literature that such an ideal is not always satisfied and that the NUC for a given dixel should depend on the levels for all dixels in the array, not just the one being NUC’ed. In this latter situation, the NUC for a given frame depends on the scene being displayed and the NUC method is called scene-based correction. Scene-based and statistical NUC schemes are two clear directions for future work, which is discussed in Section 6. The point of confusion/contention is whether NUC should be applied to an individual physical element of the display, or whether NUC should apply to the region of the image of the element on the FPA. To a large extent these differing points of view beg the real problem of how to treat far field diffraction effects. To a first approximation the intensity of the far field from a dixel image falls as r-2 , either when the image is in sharp focus or beyond the range of a blur distribution when the image is defocused. In experimental modeling we have measured this fall off between r-2 and r-2.2 , r-2 is close enough for our purposes here. The blur distribution is sometimes approximated by a Gaussian, but it is not Gaussian. It is instead a convolution of the diffraction PSF of the optical train and the defocus distribution. 
When individual elements are measured (this never truly happens; there are always other elements active), a few percent of their energy is lost in this far-field diffraction. Similarly, the accumulated far-field energy from the other active dixels enters the measurement of the intended dixel. When a block of elements is simultaneously active, their combined far-field pattern raises the base level for the dixels in the block and for any other dixel that is to be activated. So, depending on the point of view, NUC can be applied to individual display elements, with proper accounting of the far-field effects, or it may be applied to a scene. In either case the scene display algorithm, the other half of the NUC problem, becomes more complex. Of course, scene-based correction can account for other correlated effects, such as crosstalk in the display or repeating minor defects in display manufacture, as well as the far-field diffraction.

In our work to date, each dixel in the array is considered an independent gray body that can be measured independently of any other dixel in the array. To minimize the data collection and processing time required to NUC a large (~1M dixel) array, an infrared sensor (typically a staring FPA) is used to image so-called sparse arrays of dixels. Dixel spacing in the emitter array is chosen to reduce overlap of the optical blur in the infrared image.

Figure 1 Typical Mean Transfer Function for 128² arrays (ISPS LWIR mean transfer curve, 8.24-8.74 µm: mean radiance vs. gate voltage, Sep-Oct 1998 data)

Figure 2 shows an image captured from a 16x16 spaced sparse
array of dixels (every sixteenth dixel by row and column is set to the same voltage) in a 128x128 resistive array. Clearly, 256 images such as shown in Figure 2 are necessary to characterize the display, so that every element is measured. Some of the smearing in Figure 2 is obviously generated in the FPA readout, but notice that the brightness outside the sparse array image is below that seen between dixel images. Dixel images on the edges and at the corners of the sparse array image are in a different environment because of the lack of symmetry. This is an example of the far-field problem mentioned above. The essence of our current NUC procedure is to sum the energy in the blur associated with a single dixel after the background has been removed. The radiance associated with this energy is estimated by comparing the summed signal to that obtained when a known, calibrated blackbody flood is imaged by the camera. Although our experience has been with an FPA-based camera, there is no reason that the technique cannot be used with a scanning infrared imager with control of pixel positions. This approach of projecting grids of emitters and performing radiometric estimates to derive curves such as shown in Figure 1 is known as the Sparse Array approach to NUC. Limitations realized when the theory is put into practice include the following: unknown magnification of the imaging system, dead pixels, rotation between the emitter array and the imager array (misalignment), and unwanted backgrounds.

2.1 Sparse Algorithm Assumptions

For a small radiance range (L1, L2), the incremental contribution to emitter spectral photon radiant intensity measured by a detector of unit cell area Ad is computed as

∆I(λ) = [(V2 − Ve)·L1 + (Ve − V1)·L2] / (V2 − V1) · Ad   [p/s/sr/µm]   (1)

where (V1, V2) are the calibration voltages corresponding to known flood radiances L1 < L2, and Ve, where V1 < Ve < V2, is the measured voltage due to the emitter.
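Equation (1) is a two-point linear interpolation between the flood calibration points. A minimal sketch follows; the function name is illustrative, and the detector-area factor Ad is omitted so the result is per unit cell area:

```python
def radiance_from_voltage(Ve, V1, V2, L1, L2):
    """Piecewise-linear radiance estimate of equation (1):
    interpolate between flood calibration points (V1, L1) and (V2, L2)
    for a measured emitter voltage V1 <= Ve <= V2."""
    return ((V2 - Ve) * L1 + (Ve - V1) * L2) / (V2 - V1)

# The estimate reproduces the flood radiances at the calibration
# endpoints and interpolates linearly between them:
print(radiance_from_voltage(1.0, 1.0, 2.0, 10.0, 20.0))  # 10.0
print(radiance_from_voltage(1.5, 1.0, 2.0, 10.0, 20.0))  # 15.0
```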
Error in this step depends on the actual linearity of the detector over the range (L1, L2). For an arbitrary detector, the error can be made small by judicious choice of the pairs (L1, L2) in the piecewise-linear fit. This NUC will work quite well provided that the display-camera setup remains undisturbed. As long as the alignment, magnification, and focus are not changed, repeatable, scaled images can be displayed and viewed. If all of a dixel's radiant energy were focused on Ad, then the characterization would be finished. It is the case, however, that some of this energy is lost in the gaps between pixels. Dixel images that fall near or on a gap lose more of their energy in this way than do those imaged near a pixel center. To mitigate this effect, we intentionally defocus the image somewhat, spreading the dixel image until it is about twice a pixel size during NUC. Obtaining Ve now becomes a matter of combining the effect from several neighboring pixels. In fact, the fraction of a dixel's energy that falls on a pixel is unknown, one minus that fraction comes from the display substrate, and Ve contains the far-field contribution from the other elements.

Figure 2 Example sparse array image

We can and have constructed detailed theoretical models of this to study it, but there is no compelling reason to believe that they accurately
match the real world. An approach to empirically determine this fraction is given in Section 3. This is the heart of the NUC problem: the confounding of unknowns with the dixel behavior we want to characterize. We need to separate the confounded factors that come from the display wafer, the optical train, and the particular test setup used to acquire data, to recover something that is truly an element property. Our current methodology using sparse arrays does this, but it can be improved. A narrow-band procedure is the only way to obtain absolute radiometry without complete knowledge of the following: detector transimpedance, integration time, photoconductive gain, spectral quantum efficiency, optics solid angle, optics spectral transmission, and spectral bandwidth. Our current procedure renders all of these unknowns as unwanted nuisance parameters that do not affect the accuracy of emitter radiometry. For wide-band measurements, or where the spectral quantum-efficiency optical-transmission product varies across the band, these parameters are very important and will affect the variance of the emitter radiometry. Furthermore, a radiometric NUC obtained using a wide spectral band NUC sensor will not, in general, be radiometric when applied to a Unit Under Test (UUT) having a different spectral response than the NUC sensor.¹

3 A 128x128 Projection System Calibration

When the sparse algorithm is put into practice, achieving acceptable results can be a frustrating experience. An excellent example of the technique in practice was performed on the Infrared Scene Projection System (ISPS) for Komatsu Ltd. in Hiratsuka, Japan, this November. To calibrate this 128x128 system, MRC used a semi-custom nonuniformity correction (NUC) sensor comprised of the SE-IR CamIRa system, a Rockwell TCM2550 256x256 focal plane array (FPA), a custom LWIR lens at f/1.37, and a calibration software suite developed for MRC by Saturn Systems of Duluth, Minnesota.
The calibration software provides a means of displaying calibration scenes from the projector, acquiring them with the SE-IR camera system, and reducing the calibration data into a response table for each emitter. The final table is fit on a pixel-by-pixel basis with a logarithmic function, and this function is used by the RTNUC subsystem to generate corrected imagery in real time. To further aid in the calibration of this projector, an EOI 4" blackbody simulator, a specialized background control and suppression enclosure, and a custom temperature monitoring station were also used. This particular calibration had many challenges. Among the most demanding were the table-top nature of the setup and the mosaic approach needed to calibrate the entire array. Figure 3 shows how the mosaic portions of the array overlap and how the center portion overlaps four ways. This can be a useful consistency check but also very frustrating if the absolute radiometry is poor. It bears out one of the golden rules of calibration: the NUC will only be as good as the ability to make repeatable measurements. This calibration required that data be collected in all four quadrants, at 11 gate voltages, with multiple FPA calibrations, and with 30 averaged frames for each resultant image to be reduced. The gate voltages used were 1.0, 1.5, 2.0, 2.3, 2.5, 2.7, 2.8, 2.9, 3.0, 3.1, and 3.2 V.

Figure 3 Mosaic Calibration done in 4 quadrants (Q11, Q12, Q21, Q22)

¹ C. Stanek, D. Moore, R. Driggers, "Analysis and Implications for Nonuniformity Correction (NUC) Between Sensors of Different Spectral Bands," SPIE, 1998.
Figure 4 highlights another, often overlooked aspect of NUC: the calibration of the NUC sensor itself is not trivial. In this example, the camera was referenced to two blackbody references. In the left-most image, the reference temperatures were 10 K apart; in the second image, 60 K apart. In both cases a 2 pt calibration using these references was used to correct the camera output when viewing a source temperature between the two references. What can be seen in the figure is that residual nonuniformity exists in the corrected sensor output. These deviations can be attributed to nonlinearity in the FPA response. As the assumptions section states, the calibration becomes perfect only in the limit of the reference temperatures approaching the temperature to be estimated. When these reference temperatures are too scarce or too far apart, the calibration of the sensor itself may be unsatisfactory. The acceptable level of residual nonuniformity in the NUC sensor is driven by projector nonuniformity requirements and varies from system to system. In the figure, the nonuniformity is emphasized by the choice of gray scaling. On the left, the nonuniformity is 0.25% and on the right, 0.63% (after dead pixel replacement is performed).²

Figure 4 Blackbody images from camera with 2 pt calibration.

Figure 5 Alignment with calibration data reduction software

When the sparse array data are collected, they must be reduced into the desired emitter radiance response at the corresponding emitter gate voltages. In Figure 5, the region of interest (ROI) registration is shown. This has historically been a labor-intensive process that requires the user to enter the system magnification and the horizontal and vertical pixel offsets so that the first ROI corresponds to pixel (0,0).

² J. Mooney, F. Shepherd, W. Ewing, J. Murguia, and J. Silverman, "Responsivity nonuniformity limited performance of infrared staring cameras," Optical Engineering, Vol. 28, No. 11, p. 1153, November 1989, discusses other forms of residual nonuniformity.
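The two-point sensor calibration and the 1-sigma-over-mean metric can be sketched as follows. This is a toy model assuming a perfectly linear FPA with simulated per-pixel gain and offset (all names and noise levels are illustrative); for a linear response the two-point correction removes the nonuniformity exactly, and it is FPA nonlinearity that leaves the residual seen in Figure 4:

```python
import numpy as np

def two_point_correct(raw, ref1, ref2, L1, L2):
    """Two-point (gain/offset) correction of raw camera output using
    frames ref1, ref2 of blackbody references at known radiances L1 < L2."""
    gain = (L2 - L1) / (ref2 - ref1)
    return L1 + gain * (raw - ref1)

def nonuniformity(img):
    """1-sigma spatial deviation over the scene mean."""
    return float(np.std(img) / np.mean(img))

# Simulated linear FPA with per-pixel gain and offset variations:
rng = np.random.default_rng(0)
g = 1.0 + 0.05 * rng.standard_normal((16, 16))
o = 5.0 * rng.standard_normal((16, 16))

def frame(L):
    """Raw output when viewing a uniform flood at radiance L."""
    return o + g * L

corrected = two_point_correct(frame(150.0), frame(100.0), frame(200.0), 100.0, 200.0)
print(nonuniformity(frame(150.0)) > nonuniformity(corrected))  # True
```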
Figure 6 Emitter Table for 3.0, 3.1, and 3.2 V

The output of the calibration data reduction software is a table that provides the measured radiance for each emitter pixel at the gate voltages used. Another key assumption of the sparse array technique is that the coalescence of the sparse information into the emitter response table accurately reflects array behavior when real scenes are projected. This is one of the arguments against the sparse NUC: the calibration scenes do not reflect the 'real' scenes used in sensor testing. Furthermore, additional concerns such as power dissipation in real scenes and the associated substrate heating, resistive losses in the array (the droop phenomenon), and other types of crosstalk are not adequately accounted for in the sparse procedure. Figure 7 shows the results of using the emitter calibration table to project a 'flat' scene. This is a DC scene where a uniform radiance response is desired from every pixel. On the left, the scene is projected without NUC; on the right, the RTNUC uses the emitter table generated from the calibration to compensate for spatial noise.

Figure 7 Pre and Post NUC Flood Imagery for 128² Projector

There is obvious improvement, in this case a factor of 5 between the uncorrected and corrected scenes. However, the corrected level is still just below 3%. The metric typically used is the 1-sigma deviation over the scene mean (both in radiance). To achieve a superior level of calibration uniformity, additional measures must be taken. MRC is in the process of incorporating many of these into our calibration procedure. The next section describes them.

4 Improvements to Sparse Array Procedure
4.1 Dixel Characterization

NUC of infrared displays has traditionally used sparse array images for calibration. There are several reasons for this; primary among them are the regularity and precision achieved in spatial, voltage, and sampling statistics. That regularity implies that the effects of the regular array of element images can be compensated and removed from the analysis, effectively producing an isolated element image for nonuniformity correction.

4.1.1 Estimate and remove distant diffraction effects. The diffraction pattern from the entire sparse array produces a few percent addition to the measurement taken at each bright image. Accounting for this effect is a principal step in obtaining an isolated image.

4.1.2 Improved background estimate. The integerized nature of output from analog-to-digital conversion leaves a unit step between possible background outputs. Background subtraction introduces this jitter into the data before processing. A significant reduction in noise from this background quantization can be realized by using the FPA calibration instead.

4.1.3 Determine offset and gain conversion to radiance. Camera output is scaled to fall within an acceptable, apparently linear, range. Provided that several flood measurements are taken at each setting, it is possible to discover with good accuracy the offset in ADU (analog-to-digital units) and the gain conversion to radiance. Historically, a multi-point table is used for the sensor calibration table as well as for the emitter table; interpolation is used to estimate the measured radiance from the nearest reference points. However, other models exist that fit with high accuracy and require less storage and computational time.

4.1.4 Emitter centroid calculations. The current, most accurate method of Region-of-Interest (ROI) location uses a centroid calculation of the element image.
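At its simplest, the centroid calculation is an intensity-weighted mean over a background-subtracted ROI. A sketch, with illustrative names and assuming numpy:

```python
import numpy as np

def roi_centroid(roi):
    """Intensity-weighted centroid of a background-subtracted ROI,
    returned as (row, col) in pixel coordinates."""
    roi = np.asarray(roi, dtype=float)
    rows, cols = np.indices(roi.shape)
    total = roi.sum()
    return (rows * roi).sum() / total, (cols * roi).sum() / total

# A symmetric blur centered on pixel (3, 3) centroids exactly there:
y, x = np.indices((7, 7))
blob = np.exp(-((x - 3.0) ** 2 + (y - 3.0) ** 2) / 2.0)
r, c = roi_centroid(blob)
print(round(r, 6), round(c, 6))  # 3.0 3.0
```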
By using the entire sparse array image and producing a least-squares fit to the rows and columns of element-image centroids, very accurate estimates of the locations of image centers can be obtained.

4.1.5 Flexible ROI sizes depending on image center placement. When an image center is near the center of a pixel, it is appropriate to use an ROI that has an odd edge length (e.g., 5x5). However, when the image center is near the gap between pixels, the appropriate ROI should have at least one even side (e.g., 5x4), depending on the geometrical relationship.

4.1.6 Develop sub-pixel tables for dead pixel replacement. When Steps 1 through 5 have been taken, so that accurate, effectively isolated images are available, a new level of calibration becomes possible. This new technique is the spatial analog of TDI, time delay integration. We will call it SOI, spatial offset integration. Once the relation of an element image center to the illuminated pixel center is accurately known, regardless of any defocus present, the entire set of sparse arrays (144 sparse arrays when a 12-element step is used) can be examined for nearly identical pixel-center to element-image relations. These geometric relations would then be grouped and serve as a means for dead and abnormal FPA pixel replacement. FPA artifacts are a significant limitation to NUC, and the proper mitigation technique will pay big dividends.

4.1.7 SOI tables used for image normalization. The SOI tables would not only provide for dead pixel replacement, but would also allow a reliable estimate of the fraction of radiant energy from an element image that falls on a pixel in the ROI. This would allow another source of estimation error to be removed from the radiometric calculation.

4.2 Substrate Characterization

Substrate behavior has been one of those nuisance factors.
Changing substrate temperature during sparse-array data runs caused us to reorder the voltage step and the sparse-array offset step, and to add a null phase for substrate cooling. When significant substrate areas are heated by having large numbers of local dixels at display voltage for significant periods, the scene generation algorithm should account for this effect and reduce dixel voltages to match the requested radiance.
Using particular patterns to test substrate behavior allows gradients and other irregularities to be identified and compensated. Variations in the substrate, if they occur, should not occur at dixel spacing. Irregularities in substrate bonding would probably be evident as spatial gradients in the cooling rate, for example. This effort is more experimental than those outlined for improvement of the sparse array NUC.

4.3 Calibration Software Validation

A key element of calibration is defining the important parameters of a setup that affect radiometry. It is often assumed that NUC is a 'software' problem. Closer to the truth is that NUC is a complex radiometry problem, and software is necessary to reduce the voluminous amounts of data generated in the radiometry process. In Figure 8, a series of images is shown that reveals possible defocus levels that might be used during a calibration. This is one such parameter of importance: what should the typical blur diameter be to perform a repeatable NUC?

Figure 8 Examples of various defocus levels measured in DYNNICS

The list of important parameters is long when considering a NUC scheme, and it is useful to have a tool that allows the effects of these parameters to be studied without the burden of data collection. In a test setup, it is not always possible to isolate a parameter in the collected data.
Figure 9 Improvements to sparse array procedure shown in 'cross' image

MRC has developed a program called SYN_CAM for the purpose of modeling projector displays and their corresponding appearance and measurement by a sensor. The program has a detailed model of emitter geometry, optical effects including distortion and defocusing, and their interplay in determining how the emitter radiance appears at the detector plane. A detailed diffraction model (including a far-field model) is also included. In Figure 9, our progress in calibration software performance is demonstrated. The intent is to project a uniform 'cross' shape. The SYN_CAM program produces calibration scenes as output that closely model how they would appear from a projector. The calibration software is used to reduce these data and construct an emitter table. This table is then used to correct the cross, which is reprojected. The reprojection is then 'captured' by a perfect camera (100% fill factor, no responsivity variations across detectors). It is clear from the sequence of images that the calibration software performance has dramatically improved. Mostly, the improvements have been in the elimination of pattern noise introduced in the image registration and ROI summations.

4.4 RTNUC Improvements

The typical emitter transfer curve has a strongly nonlinear, logarithmic characteristic when radiance is plotted versus gate voltage. Because of this, simple functions do not fit the transfer curve well. Using polynomials usually leads to oscillations over parts of the operating range, which is usually unacceptable because it produces non-monotonic approximations to the transfer curve. One approach often used is a multi-point lookup table. Depending on the sampling of the transfer curve, this can lead to rather large tables that, in real time, must be addressed, searched for the proper interval, and then interpolated to give the required radiance at each pixel. One fit that proves particularly useful is the Horner-Karin fit.
This is a fit in logarithmic space; the upper portion of the transfer curve is logarithmically parabolic and the lower portion logarithmically linear. The break point is chosen empirically after examining the typical transfer curves of the emitters for that array. The fit is logarithmic in radiance and of the form

Vgate = a0 + a1·log(R) + a2·(log(R))²

The fit also performs best when applied above the threshold gate voltage, here about 2.5 V; the region below the threshold can be treated as linear, or Vgate = b0 + b1·R. The quality of the fit and its low order suggest savings in coefficient storage space: this fit requires 5 coefficients, three for the HK region and two for the linear region. Figure 10 shows the upper part of the fit on typical emitter radiance data.
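Since the HK form is linear in its coefficients, it can be obtained by ordinary least squares in log-radiance space. A sketch of the above-threshold fit, with illustrative function names and assuming numpy:

```python
import numpy as np

def fit_horner_karin(R, Vgate):
    """Least-squares fit of Vgate = a0 + a1*log(R) + a2*log(R)**2 over
    the above-threshold portion of an emitter transfer curve."""
    x = np.log(R)
    a2, a1, a0 = np.polyfit(x, Vgate, 2)  # polyfit returns highest power first
    return a0, a1, a2

def horner_karin(R, a0, a1, a2):
    """Evaluate the fitted HK curve: gate voltage for a requested radiance."""
    x = np.log(R)
    return a0 + a1 * x + a2 * x * x

# Recover known coefficients from synthetic transfer-curve data:
R = np.linspace(0.2, 1.4, 60)
V = 2.9 + 0.30 * np.log(R) + 0.05 * np.log(R) ** 2
a0, a1, a2 = fit_horner_karin(R, V)
print(round(a0, 6), round(a1, 6), round(a2, 6))  # 2.9 0.3 0.05
```

Below the threshold, the two-coefficient linear branch Vgate = b0 + b1·R would be fit the same way with np.polyfit(R, Vgate, 1).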
Figure 10 Partial Horner-Karin fit to emitter radiance data (fit over 2.5 to 3.2 V, radiance vs. voltage)

It is possible for this fit to be coupled with a scene-dependent calibration. In this case the sparse method provides the basis for the initial HK fit, and additional correction terms are then added based on other factors such as droop. RTNUC will not only implement the best HK fit, but will have additional correction terms to achieve the required fidelity:

Vgate = Horner-Karin(R) + ε1·f1(R) + ε2·f2(R) + ...

Determining the f()s comes from feature extraction. The first-order correction will certainly be the lowest-order moment of the scene, the DC value, or Rave. The other f()s can be determined through empirical orthogonal function (EOF) analysis. This is a standard technique for decomposing error covariance matrices into more revealing forms and is discussed further in Section 6.

5 Scene-Based Correction Algorithms

5.1 NUC Procedure Using Current Maps

Figure 11 Apparent temperature vs. current measurements on DIRSP arrays (Array 10A, 250 K substrate, 8.44-8.54 µm spectral band, 23x21 ROI; apparent temperatures range from 255 K at 0 µA to 427 K at 145 µA average current)
Figure 12 Current versus temperature shows a parabolic relationship (apparent temperature vs. current, 8.24-8.74 µm band; polynomial fit with R = 0.99992)

Figure 13 Radiometric column-to-column variations can be compared against current draw in the same columns (Array 3-7C at 3.1 V, 9.47-9.71 µm; Q11 rows 12-156, Q21 rows 156-312, Q41 rows 368-511)

One possible approach with promise is to virtually eliminate the radiometry from the NUC problem. This is possible by looking for other fundamental array properties that can be measured and that also correlate with nonuniformity. An obvious fundamental property is current. Each pixel draws a given current (on the order of 100 µA) for a specific gate voltage. Since the resistance of the emitter and the current supplied to it determine the power drawn, it is reasonable to conclude that power and radiance must be closely related. In Figure 11, this relationship is shown by plotting, as a function of gate voltage, the current drawn and the apparent temperature achieved for a small block of emitters on the array. In Figure 12, the relationship between current and this temperature is plotted independent of gate voltage. For these narrowband measurements, theory shows that the temperature and current drawn should be related parabolically, and this is validated by the fit shown in Figure 12. The technique would construct a transfer curve for each emitter by first constructing the current versus gate voltage relationship for each emitter. This would generate a nonuniformity map in current:gate-voltage space.
The next step would be to construct the mean transfer curve for the entire array in radiance:gate-voltage space. Once this curve is known, the gate voltage can be eliminated to relate current to temperature or radiance (as shown in Figure 12). The technique eliminates many of the problems associated with current NUC schemes by reducing the radiometry to the measurement of a single, gross transfer curve compared to the million or so needed for a display such as the DIRSP. Instead, the per-emitter transfer curves are derived from measurements that are almost entirely made with a multimeter, with a simple transfer function applied to those curves to map them into radiance:gate-voltage space. The assumptions are obvious: all of the nonuniformity is due to variations in current drawn by the individual emitters, and a mean transfer curve can be used to map the current:gate-voltage relationships of all the emitters individually. To validate this technique, MRC is currently generating nonuniformity maps in current space to correlate against nonuniformity maps generated by radiometric techniques. If a strong correlation can be shown, this technique may replace the cumbersome optical setups and enormous data reduction task used in present NUC systems. In Figure 13, column nonuniformities are shown as measured on an array with an LWIR sensor. This data set will provide the basis for comparing current nonuniformity along array columns. If the correlation exists, MRC will proceed to do more detailed comparisons for portions of the array on an emitter-by-emitter basis.
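The proposed mapping can be sketched as follows: a per-emitter current:gate-voltage curve (a multimeter measurement) is mapped through the array-mean current and radiance curves, under the stated assumption that radiance is determined by the current drawn. Function and variable names here are illustrative:

```python
import numpy as np

def radiance_from_current(I_emitter, I_mean, L_mean):
    """Map one emitter's measured currents into radiance using the
    array-mean transfer curves, sampled on a common gate-voltage grid.

    I_emitter : this emitter's current at each gate voltage
    I_mean    : array-mean current at each gate voltage (increasing)
    L_mean    : array-mean radiance at each gate voltage

    Assumes radiance is a single-valued, increasing function of current.
    """
    return np.interp(I_emitter, I_mean, L_mean)

# With a parabolic mean curve (as in Figure 12), an emitter drawing 5%
# less current than the mean is assigned correspondingly lower radiance:
I_mean = np.linspace(0.0, 150.0, 301)   # µA
L_mean = (I_mean / 100.0) ** 2          # toy radiance units
L_emitter = radiance_from_current(0.95 * I_mean, I_mean, L_mean)
print(np.allclose(L_emitter, (0.95 * I_mean / 100.0) ** 2, atol=1e-4))  # True
```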
5.2 Finite-Difference Schemes

Difference schemes all fundamentally rely on the ability of the NUC processing module to compare the image that was intended with the image that was actually projected. These schemes can be iterative and can be implemented with anything from simple scenes (such as the DC scene) to more realistic target/background imagery. A typical difference scheme is shown in Figure 14; its core update is Sk+1(i,j) = Sk(i,j) + β·C(i,j), iterated until C(i,j) falls below a tolerance ε(i,j).

Figure 14 Finite difference scheme diagram

Step 0 shows that the CIG (Computer Image Generator) contains a series of images that we wish to run through the global calibration procedure. In the figure, the two larger boxes denote loops. The outer loop runs over the number of images for which calibration data are gathered; the inner loop runs over the number of iterations necessary to achieve the desired scene uniformity and accuracy. The double lines going to and from the loops in step 0 and from step 7 to 8 show information structure. In the outer loop, the CIG has an array of arrays of pixels (an array of scenes), whereas inside the loop it is a single scene. Likewise from step 7 to 8, an array forms because a collection of information is stored for each calibrated image. In step 1a, the desired scene is generated by the CIG. If this is the first iteration (k=0), then this scene is fed to the RTNUC for radiance-to-voltage lookup and then through the CES (Control Electronics Subsystem). The CIG-generated image is also passed to the NUC processor.
Step 1b refers to the acquisition of the projected image by the NUC sensor. This step involves several subprocesses including the NUC sensor optics, detector response and readout, and conversion of the digital data into radiometric values. This radiometric conversion involves referencing a known source and applying a correction (usually linear or multi-point) to the digital data. For an FPA, the thermal reference source is usually an external NIST-traceable blackbody. Scanning sensors typically have internal reference sources such as microblackbodies or TEC strips on each side of the scan.
Step 2 refers to transforming the desired input scene to a coordinate grid that is compatible with measurements obtained from the NUC sensor. For instance, a 1024x1024 desired scene may be projected with 4:1 oversampling such that it is imaged by a 256x256 FPA. In the most general case, the IFOV and the IFOVC may differ, the image sizes may differ, and the images may not be perfectly aligned in space (i.e., dixel (1,1) may not be centroided with sensor pixel (1,1)). These are all formidable issues that the transformation module must address; in the most general sense, the job of step 2 is to ensure that comparisons are valid. Step 3 summarizes the main role of the NUC processor during scene global calibration. The desired scene radiance and measured scene radiance are compared by the NUC processor, and a difference matrix C is constructed, where Cij represents the difference between desired and measured radiances at dixel location (i,j). At this point, the difference matrix (or some appropriate parameter of it) is compared to a desired tolerance ε. For example, ε may be the maximum allowed value of the infinity norm of the matrix C. In step 4, the coefficient β is computed, and in step 5 the new desired scene radiance is calculated by multiplying β by the calculated scene radiance change needed and adding the result to the measured scene radiance from step 1b. At first glance, it would appear that a choice of β = 1 would be optimal. This is likely not the case for several reasons. First, the required voltage changes estimated in the next step are not exact, and an underestimate in the radiance correction (β < 1) can be viewed in the next step as an underestimate in the derivative dL/dV, which is unlikely to produce oscillating iterations and is therefore desirable. The best choice of β will be the one that critically damps the convergence cycle. Second, the measurement of Sk is not perfect, but subject to noise.
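The damped loop of steps 3 through 5 can be sketched as a fixed-point iteration. This toy version (names are illustrative) replaces the projector-optics-sensor chain with a `project` function containing only per-pixel gain errors; with β < 1 the loop converges without oscillation:

```python
import numpy as np

def iterate_correction(S_desired, project, beta=0.7, tol=1e-3, max_iter=50):
    """Damped finite-difference loop: drive the displayed scene toward
    S_desired. `project` maps a commanded radiance map to the measured
    one; beta < 1 under-relaxes the correction (steps 4-5)."""
    S_cmd = S_desired.copy()
    S_meas = project(S_cmd)
    for _ in range(max_iter):
        C = S_desired - S_meas            # difference matrix (step 3)
        if np.max(np.abs(C)) < tol:       # infinity-norm tolerance test
            break
        S_cmd = S_cmd + beta * C          # damped update (steps 4-5)
        S_meas = project(S_cmd)
    return S_cmd, S_meas

# Toy projector whose only error is a per-pixel gain:
rng = np.random.default_rng(1)
gain = 1.0 + 0.1 * rng.standard_normal((8, 8))

cmd, meas = iterate_correction(np.full((8, 8), 100.0), lambda S: gain * S)
print(np.max(np.abs(meas - 100.0)) < 1e-3)  # True
```

For this linear toy model the per-pixel error shrinks by a factor |1 − β·gain| each pass, which is why an under-relaxed β avoids the oscillation discussed above.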
Therefore, the optimal choice of β in this regard is also related to the best estimate for Sk, a problem closely resembling one in Kalman filtering. In step 6, the sparse array table is used to estimate the change in voltage needed to accommodate the desired change in radiance. The new voltages are estimated from the current sparse-array dixel response curves by the equation:
Vk+1(i,j) = Vk(i,j) + [Sk+1(i,j) − Sk(i,j)] / (dL/dV)
where the derivative is estimated from the Horner-Karin fit obtained in the sparse array calibration. After the next iteration of voltages has been calculated, the CES is commanded to drive the arrays. This output is channeled through the DIRSP optics train and captured by the NUC sensor as Sk+1. Step 7 shows that once the measured scene and desired scene are within the specified tolerance, the dixel radiances, the voltages required to achieve them, and any other parameters of interest for regression (such as average scene radiance) are stored. In step 8, all the recorded values from all the scenes are sent to the post-processing stage. This stage is an optimization stage for real-time operation, using the information gathered in steps 1 through 7.

6 Statistical Approach to NUC

The approach to NUC outlined so far has assumed that each emitter in the array is independent of all other emitters in the array. It is not unreasonable to believe that if an emitter is not independent, then it is most strongly influenced by those emitters that are nearest. If this were the case, then an image calibrated to be uniform using a calibration scheme that assumes independent emitters would show areas larger than a dixel with similar levels of emission. This correlation between nearby dixels may depend on temperature, image nonuniformity, manufacturing inhomogeneities, and other unknown factors. The simplest approach, next to assuming emitter independence, is to assume that there is some correlation in emission between dixels that does not depend on temperature or image nonuniformity, but only on the (perhaps vector) distance between dixels. This assumption would first be tested using exploratory data analysis. The purpose of exploratory data analysis is to examine the data in an initial, cursory manner to identify the spatial correlations that may be present.
This first step avoids lengthy, unnecessary calculations of model parameters that may not apply. Graphical display of the data is essential here. Large-scale trends in the data, perhaps due to misalignment of the emitting and detecting arrays and similar factors, should be removed at this point. For data on a regular grid whose random component comes from a continuous distribution, a useful technique is median polish. This decomposes the data into a sum, data = all + row + column + residuals, where row and column are effects estimated over the rows and columns. It is the residuals that should then be examined for correlation. Once it is clear that the residuals may be correlated, it is appropriate to estimate the degree of correlation. An estimate of the correlation function (or variogram or covariogram) as a function of distance between dixels (lag) can be obtained by summing the squared differences of all residual pairs a fixed distance apart. The correlation may also depend on direction, that is, it may be stronger in one direction than in the perpendicular direction, in which case the correlation is a function of vector distance. Other, more resistant, techniques are available to estimate this function when outliers (bad data) are a problem. A correlation distance can be estimated from this function; it indicates approximately how many nearby dixels influence a particular dixel. The correlation function also suggests possible stochastic models for the random behavior of the emitting array. The simplest correlated model on a regular grid is one in which a given dixel is correlated only with its four nearest neighbors: if a dixel is located at (i,j), its four nearest neighbors are at (i+1,j), (i-1,j), (i,j+1), and (i,j-1).
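The median-polish decomposition and the lag-based variogram estimate described above can be sketched as follows. This is a minimal illustration on synthetic data; `median_polish` and `semivariogram` are hypothetical helper names, not part of any NUC software.

```python
import numpy as np

def median_polish(data, n_iter=10):
    """Tukey median polish: decompose data into row effects, column
    effects, and residuals (data = row + column + residuals; the overall
    effect is left folded into the row/column terms for brevity)."""
    resid = np.asarray(data, dtype=float).copy()
    row = np.zeros(resid.shape[0])
    col = np.zeros(resid.shape[1])
    for _ in range(n_iter):
        rmed = np.median(resid, axis=1)    # sweep out row medians
        row += rmed
        resid -= rmed[:, None]
        cmed = np.median(resid, axis=0)    # sweep out column medians
        col += cmed
        resid -= cmed[None, :]
    return row, col, resid

def semivariogram(resid, max_lag=5):
    """Empirical semivariogram along the row direction: half the mean
    squared difference of residuals a fixed lag apart."""
    return [0.5 * np.mean((resid[h:, :] - resid[:-h, :]) ** 2)
            for h in range(1, max_lag + 1)]

# Synthetic "array" with row/column trends plus a small random component
rng = np.random.default_rng(0)
data = (0.3 * np.arange(8)[:, None]
        + 0.1 * np.arange(10)[None, :]
        + rng.normal(0.0, 0.05, (8, 10)))
row, col, resid = median_polish(data)
gamma = semivariogram(resid, max_lag=3)
```

Removing the large-scale trends leaves residuals with much smaller variance; it is the shape of `gamma` versus lag (and its directional counterpart along the other axis) that would be examined for a correlation distance.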
The correlation coefficients would in general differ in the x and y directions and can be estimated in a number of ways. More complicated models, such as more general autoregressive models requiring more parameters, may prove necessary. Once a model has been selected, it must be tested to see whether calibration is indeed improved; improvement is generally obtained when the variance of the residuals is reduced. As an example, for the nearest-neighbor case, if z(i,j) is the radiance minus the mean radiance measured at dixel (i,j), then the residual is defined as

    r(i,j) = z(i,j) - a1[z(i+1,j) + z(i-1,j)] - a2[z(i,j+1) + z(i,j-1)]    (2)

where a1 and a2 are the estimated model coefficients. This set of residuals should have smaller variance than the original set z(i,j). Now set the voltage at location (i,j) to that voltage which corresponds to the radiance = mean - a1[z(i+1,j) + z(i-1,j)] -
a2[z(i,j+1) + z(i,j-1)]. This new set of voltages should give a more uniform radiance from the array near element (i,j). The process may be iterated.

7 Summary

The sparse array algorithm has some fundamental limitations that are difficult to overcome. Other criticisms include the stark difference between calibration scenes and 'real' scenes, and the inability to compensate for scene-based effects such as thermal, optical, and electrical crosstalk, as well as the 'droop' phenomenon. The sparse array procedure can be dramatically improved and new levels of calibration attained by:

• Estimating and removing distant diffraction effects
• Improving the background estimate
• Determining the offset and gain conversion to radiance
• Least-squares fitting to the grid of image centers
• Using flexible ROI sizes to improve symmetry with respect to image centers
• Examining and correcting the least-squares grid-fit parameters
• Developing SOI subpixel resolution tables for dead pixel replacement
• Using SOI tables for radiometric normalization

Scene-based algorithms discussed include current versus gate voltage maps, finite difference schemes, and difference schemes that use advanced filters to reduce measurement error. Scene-based correction schemes that are more statistically model-driven were also mentioned. Given research costs, the current mapping approach holds the most near-term promise, with scene-based correction schemes a very real possibility in the near future.

8 Acknowledgments

This work was supported by the Defense Threat Reduction Agency (DTRA) under contract XXXXXXX and the U.S. Army Simulation, Training and Instrumentation Command (STRICOM) under contract N61339-96-C-0074. Author info: C.S. stanek@mrcsb.com, D.M. dmoore@mrcsb.com, L.E. ewing@mrcsb.com