Signal Processing and Performance Analysis for Imaging Systems
For a listing of recent titles in the Artech House Optoelectronics Series,
turn to the back of this book.
Signal Processing and Performance Analysis for Imaging Systems
S. Susan Young
Ronald G. Driggers
Eddie L. Jacobs
artechhouse.com
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.
ISBN-13: 978-1-59693-287-6
© 2008 ARTECH HOUSE, INC., 685 Canton Street, Norwood, MA 02062
To our families
Contents
Preface xiii
PART I
Basic Principles of Imaging Systems and Performance 1
CHAPTER 1
Introduction 3
1.1 “Combined” Imaging System Performance 3
CHAPTER 3
Target Acquisition and Image Quality 41
3.1 Introduction 41
3.2 A Brief History of Target Acquisition Theory 41
...
CHAPTER 5
Image Resampling 107
5.1 Introduction 107
5.2 Image Display, Reconstruction, and Resampling 107
5.3 Sampling Theory and Sampling Artifacts 109
CHAPTER 7
Image Deblurring 179
7.1 Introduction 179
7.2 Regularization Methods 181
7.3 Wiener Filter 181
7.4 Van Cittert Filter 182
9.4 Imaging System Performance with Fixed-Pattern Noise 243
9.5 Summary 244
References 245
CHAPTER 10
Tone Scale 247
10.1 Introduction 247
Preface
In today’s consumer electronics market where a 5-megapixel camera is no longer considered state-of-the-art, signal and image processing algorithms are real-time and widely used.
While algorithm development has exploded in the past 5 to 10 years, the system performance aspects are relatively new and not quite fully understood. While the focus of this book is to help the scientist and engineer understand that these algorithms are really an imaging system component, and to help in the system performance prediction of imaging systems that include them, the performance material is new and will undergo dramatic improvements in the next 5 years.
S. Susan Young would like to thank Dr. Hsien-Che Lee for his guidance and help early in her career in signal and image processing. On a personal side, we authors are very thankful to our families for their support and understanding.
PART I
Basic Principles of Imaging Systems and Performance
CHAPTER 1
Introduction
1.1 “Combined” Imaging System Performance
The “combined” imaging system performance of both hardware (sensor) and software (signal processing) is extremely important. Imaging system hardware is designed primarily to form a high-quality image from source emissions under a large variety of environmental conditions, while signal processing is used to highlight or extract information from the images that the system generates.
1.3 Signal Processing: Basic Principles and Advanced Applications
The basic signal processing principles, including the Fourier transform, the wavelet transform, finite impulse response (FIR) filters, and Fourier-based filters, are discussed in Chapter 4.
1.4 Image Resampling
In signal processing, image resampling is also called image decimation or image interpolation, according to whether the goal is to reduce or enlarge the size (or resolution) of a captured image. Resampling provides image values that were not recorded by the imaging system but are calculated from neighboring pixels; it does not increase the inherent information content of the image, although a poor display reconstruction function can reduce overall imaging system performance. The resampling algorithms include spatial domain and Fourier-based windowing methods. These algorithms, examples, and image resampling performance measurements are discussed in Chapter 5.
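To make the interpolation case concrete, the sketch below enlarges a grayscale image with bilinear interpolation, one of the simplest spatial domain resampling filters; Chapter 5 treats the Fourier-based and antialiasing filters in detail. This is a minimal NumPy sketch under assumed conventions: the function name, the output-size arguments, and the use of bilinear weights are illustrative choices, not the book's implementation.

```python
import numpy as np

def resample_bilinear(img, new_h, new_w):
    """Resize a 2-D grayscale image with bilinear interpolation."""
    h, w = img.shape
    # Map each output pixel back onto the input grid.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = (ys - y0)[:, None]          # fractional row distances (column vector)
    wx = (xs - x0)[None, :]          # fractional column distances (row vector)
    # Blend the four nearest neighbors with the bilinear weights.
    top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
    bottom = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bottom

# Example: enlarge a small ramp image by a factor of 2 in each direction.
small = np.outer(np.arange(4.0), np.arange(4.0))
large = resample_bilinear(small, 8, 8)
print(large.shape)          # (8, 8)
```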
1.5 Super-Resolution Image Reconstruction
Because undersampled frames in a sequence have subpixel shifts between them, each frame carries different information about the same scene, and the frames can be combined into an alias-free (high-resolution) image. The first step in a super-resolution image reconstruction algorithm is to estimate the subpixel shifts of each frame with respect to a reference frame; the second step is to increase the effective spatial sampling by operating on the sequence of low-resolution, subpixel-shifted images. These algorithms, examples, and image performance are discussed in Chapter 6.
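One common way to carry out the shift estimation step is phase correlation in the Fourier domain, refined to subpixel precision with a parabolic fit around the correlation peak. The sketch below is a hedged illustration of that idea, not the specific algorithm of Chapter 6; the function names, the normalization constant, and the parabolic refinement are assumptions of this sketch.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the (row, col) translation of `frame` relative to `ref`
    by phase correlation with a parabolic subpixel refinement."""
    F_ref = np.fft.fft2(ref)
    F_frame = np.fft.fft2(frame)
    cross = F_frame * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12              # keep only the phase difference
    corr = np.real(np.fft.ifft2(cross))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape

    def parabolic(cm, c0, cp):
        # Vertex offset of the parabola through three neighboring samples.
        denom = cm - 2.0 * c0 + cp
        return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

    dy = py + parabolic(corr[(py - 1) % h, px], corr[py, px], corr[(py + 1) % h, px])
    dx = px + parabolic(corr[py, (px - 1) % w], corr[py, px], corr[py, (px + 1) % w])
    # Peaks past the half-size wrap around to negative shifts (circular FFT).
    if dy > h / 2:
        dy -= h
    if dx > w / 2:
        dx -= w
    return dy, dx

# Example: a frame rolled by (+2, +3) pixels is estimated at about (2.0, 3.0).
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
frame = np.roll(ref, (2, 3), axis=(0, 1))
print(estimate_shift(ref, frame))
```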
1.6 Image Restoration—Deblurring
An imaging system’s blurring function, also called the point spread function (PSF), is another common cause of lost high-frequency detail. Image restoration tries to invert this blurring degradation, but only within the bandlimit of the imager, and one of the most important considerations in designing a deblurring filter is noise control, since noise is amplified at high spatial frequencies. The designs of deblurring filters, the noise control mechanisms, examples, and image performance are discussed in Chapter 7.
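A compact frequency-domain example of such a filter is the Wiener filter, which reappears in Chapter 7. The sketch below assumes the blur PSF is known and uses a single scalar noise-to-signal ratio as the regularization term that keeps noise from being amplified where the transfer function is small; the helper names and that scalar simplification are assumptions of this sketch.

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=0.01):
    """Deblur an image with a Wiener filter.

    blurred : 2-D degraded image
    psf     : known blur kernel (smaller than the image)
    nsr     : scalar noise-to-signal power ratio used as regularization
    """
    # Zero-pad the PSF to the image size and center it on the origin so the
    # FFT-implied convolution lines up with the image grid.
    padded = np.zeros_like(blurred, dtype=float)
    kh, kw = psf.shape
    padded[:kh, :kw] = psf
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(padded)
    G = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + NSR).  The NSR term bounds the gain
    # at frequencies where |H| is small, which is where noise blows up.
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

# Example: blur a random test image with a 5x5 box PSF, then restore it.
rng = np.random.default_rng(1)
truth = rng.random((128, 128))
psf = np.ones((5, 5)) / 25.0
pad = np.zeros((128, 128))
pad[:5, :5] = psf
H = np.fft.fft2(np.roll(pad, (-2, -2), axis=(0, 1)))
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * H))
restored = wiener_deblur(blurred, psf, nsr=1e-3)
```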
1.7 Image Contrast Enhancement
Image details can also be enhanced by image contrast enhancement techniques in which certain image edges are emphasized as desired. Contrast enhancement methods divide into single-scale approaches, in which the image is processed in the original image domain (for example, with a simple look-up table), and multiscale approaches, in which the image is decomposed into multiple resolution scales and processed in the multiscale domain. Details of the algorithms, examples, and image performance are discussed in Chapter 8.
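A simple single-image illustration of edge emphasis is unsharp masking, which splits the image into a lowpass (base) layer and a detail layer and then amplifies the detail. The sketch below uses a separable box blur as the lowpass and a single gain; both choices, and the function names, are assumptions made for brevity rather than the methods of Chapter 8.

```python
import numpy as np

def box_blur(img, radius=2):
    """Separable box blur used as the lowpass (base) layer."""
    size = 2 * radius + 1
    kernel = np.ones(size) / size
    padded = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def unsharp_mask(img, radius=2, gain=1.5):
    """Emphasize edges: output = base + gain * (image - base)."""
    base = box_blur(img.astype(float), radius)
    detail = img - base
    return base + gain * detail

# Example: a weak step edge becomes a stronger one after enhancement.
img = np.full((64, 64), 100.0)
img[:, 32:] = 110.0
enhanced = unsharp_mask(img, radius=3, gain=2.0)
print(float(img.max() - img.min()), float(enhanced.max() - enhanced.min()))
```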
1.8 Nonuniformity Correction (NUC)
Each photodetector in a focal plane array (FPA) has a slightly different photoresponse because of detector-to-detector variability in the FPA fabrication process, and the resulting imagery suffers from fixed-pattern noise. There are two main categories of NUC algorithms: calibration-based and scene-adaptive. A conventional calibration-based NUC is the standard two-point calibration, which estimates the gain and offset parameters by exposing the FPA to two distinct and uniform irradiance levels. The scene-adaptive NUC uses the data acquired in the video sequence and a motion estimation algorithm to register each point in the scene across the image frames, so that compensation can be applied adaptively for individual detector responses and background changes. These algorithms, examples, and imaging system performance are discussed in Chapter 9.
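The two-point calibration can be written down directly: from the FPA response to two uniform irradiance levels, a per-pixel gain and offset are solved so that every detector maps those two levels to the same target values. The array names, target values, and the synthetic detector model in the example below are illustrative assumptions.

```python
import numpy as np

def two_point_nuc(frame_low, frame_high, target_low, target_high):
    """Per-pixel gain and offset from two uniform-irradiance calibration frames.

    Returns (gain, offset) such that corrected = gain * raw + offset maps the
    low and high calibration frames onto target_low and target_high.
    """
    span = (frame_high - frame_low).astype(float)
    span[span == 0] = 1e-12                       # guard against dead pixels
    gain = (target_high - target_low) / span
    offset = target_low - gain * frame_low
    return gain, offset

def apply_nuc(raw, gain, offset):
    return gain * raw + offset

# Example: a simulated 4x4 FPA with random per-detector gain and offset.
rng = np.random.default_rng(2)
true_gain = 1.0 + 0.1 * rng.standard_normal((4, 4))
true_offset = 5.0 * rng.standard_normal((4, 4))

def detector_response(level):                     # toy linear detector model
    return true_gain * level + true_offset

gain, offset = two_point_nuc(detector_response(20.0), detector_response(80.0), 20.0, 80.0)
corrected = apply_nuc(detector_response(50.0), gain, offset)
print(np.allclose(corrected, 50.0))               # True: a uniform scene is now flat
```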
1.9 Tone Scale
Tone scale is a mathematical mapping of the image pixel values from the sensor to a region of interest on an output display medium (softcopy display or hardcopy print). A tone scale transform improves only the presentation of the image, not the underlying image quality, but a proper tone scale allows the characteristic curve of the display system to match the sensitivity of the human eye. These techniques and a tone scale performance example are discussed in Chapter 10.
1.10 Image Fusion
Because different sensors provide different signature cues of the scene, image fusion has been receiving increasing attention in signal processing. Imaging sensor characteristics are determined by the wavebands to which they respond in the electromagnetic spectrum. For example, the infrared waveband is divided into near infrared (NIR), shortwave infrared (SWIR), midwave infrared (MWIR), longwave infrared (LWIR), and far infrared, and the sensor types are driven by the type of image information that can be exploited within these bands.
Many questions of image fusion remain unanswered and open to new research opportunities. Image fusion objectives, algorithms, quality metrics, and imaging system performance with image fusion are discussed in Chapter 11.
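As one concrete and deliberately simple fusion scheme, the sketch below combines two registered, same-size images, for example an MWIR and an LWIR frame, by averaging their lowpass content and keeping, pixel by pixel, the detail coefficient with the larger magnitude. This select-max detail rule is only an illustration in the spirit of the pyramid and wavelet methods of Chapter 11; the box lowpass, the function names, and the test images are assumptions of this sketch.

```python
import numpy as np

def lowpass(img, radius=2):
    """Crude separable box lowpass used to split base and detail layers."""
    size = 2 * radius + 1
    k = np.ones(size) / size
    p = np.pad(img, radius, mode="reflect")
    p = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, p)

def fuse_two_band(img_a, img_b, radius=2):
    """Fuse two registered images: average the base layers and keep the
    stronger detail coefficient at each pixel."""
    a = img_a.astype(float)
    b = img_b.astype(float)
    base_a, base_b = lowpass(a, radius), lowpass(b, radius)
    detail_a, detail_b = a - base_a, b - base_b
    detail = np.where(np.abs(detail_a) >= np.abs(detail_b), detail_a, detail_b)
    return 0.5 * (base_a + base_b) + detail

# Example: one band carries a vertical edge, the other a horizontal edge;
# the fused image keeps the detail of both.
band_a = np.zeros((32, 32))
band_a[:, 16:] = 1.0
band_b = np.zeros((32, 32))
band_b[16:, :] = 1.0
fused = fuse_two_band(band_a, band_b)
print(fused.shape)
```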
CHAPTER 2
Imaging Systems
In this chapter, basic imaging systems are introduced and the concepts of resolution and sensitivity are discussed.
the infrared portion of the electromagnetic spectrum into displayable images in the
visible band for human use.
The infrar...
The primary difference between a visible spectrum camera and an infrared
imager is the physical phenomenology of the radia...
The characteristics of the infrared radiation emitted by an object are described
by Planck’s blackbody law in terms of spe...
been adjusted. Dynamic range may be fully utilized in a visible sensor. For the case
of an infrared sensor, a portion of t...
or the sensor contrast threshold function (CTF) has become the primary performance metric for infrared systems. MRT and…
The superposition principle states that this sum of point source images would be
identical to the resultant image if both ...
where x1 ≤ xo ≤ x2 and y1 ≤ yo ≤ y2. The delta function, δ(x−xo, y−yo), is nonzero only
at xo, yo and has an area of unity...
filter that is convolved with the input scene to obtain an output image. The simplified LSI imaging system model is shown…
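In its simplest form, the LSI model is a single two-dimensional convolution of the input scene with the system point spread function, which can equivalently be carried out in the frequency domain through the transfer function. The sketch below applies a small symmetric PSF both ways and checks that the results agree away from the image borders; the PSF shape, the image size, and the zero-padding convention are arbitrary assumptions for illustration.

```python
import numpy as np

def convolve2d_direct(scene, psf):
    """Direct 2-D convolution (same-size output, zero-padded borders)."""
    k = psf[::-1, ::-1]                  # flip the kernel: correlation -> convolution
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(scene, ((ph, ph), (pw, pw)))
    H, W = scene.shape
    out = np.zeros((H, W))
    for i in range(kh):                  # accumulate shifted, weighted copies
        for j in range(kw):
            out += k[i, j] * padded[i:i + H, j:j + W]
    return out

def convolve2d_fft(scene, psf):
    """The same operation carried out through the transfer function FFT(psf)."""
    H, W = scene.shape
    kh, kw = psf.shape
    pad = np.zeros((H, W))
    pad[:kh, :kw] = psf
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))     # center on origin
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(pad)))

# A small symmetric, Gaussian-like PSF and a random test scene.
g = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
psf = np.outer(g, g)
psf /= psf.sum()
scene = np.random.default_rng(3).random((64, 64))
direct = convolve2d_direct(scene, psf)
spectral = convolve2d_fft(scene, psf)
# The two agree away from the borders, where zero vs. circular padding differ.
print(np.allclose(direct[4:-4, 4:-4], spectral[4:-4, 4:-4]))
```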
2.4 Imaging System Point Spread Function and Modulation Transfer Function
The system impulse response, or point spread function (PSF), is the image the system produces in response to a point source input; the modulation transfer function (MTF) is the magnitude of its Fourier transform.
beginning with the optical effects. Also, the transfer function of a system, as given in
(2.17), is frequently described w...
H_{\mathrm{diff}}(f_x, f_y) = \frac{2}{\pi}\left[\cos^{-1}\!\left(\frac{\rho\lambda}{D}\right) - \frac{\rho\lambda}{D}\sqrt{1-\left(\frac{\rho\lambda}{D}\right)^{2}}\right] \quad (2.23)

where \rho = \sqrt{f_x^2 + f_y^2} is the radial spatial frequency, \lambda is the wavelength, and D is the aperture diameter.
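Equation (2.23) is straightforward to evaluate numerically; the sketch below computes the diffraction MTF on an axis of angular spatial frequencies and forces it to zero beyond the cutoff at ρ = D/λ. The aperture, wavelength, and frequency values are arbitrary example numbers, not values taken from the text.

```python
import numpy as np

def diffraction_mtf(fx, fy, wavelength, aperture_d):
    """Diffraction-limited MTF of a circular aperture, as in (2.23).

    fx, fy      : angular spatial frequencies (cycles per radian)
    wavelength  : optical wavelength (same length units as aperture_d)
    aperture_d  : aperture diameter D; the cutoff frequency is D / wavelength
    """
    rho = np.hypot(fx, fy)
    x = np.clip(rho * wavelength / aperture_d, 0.0, 1.0)
    mtf = (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x ** 2))
    return np.where(rho * wavelength / aperture_d <= 1.0, mtf, 0.0)

# Example: a 10-cm aperture at 4 um (MWIR); cutoff is D/lambda = 25,000 cyc/rad.
wavelength = 4.0e-6                       # meters
D = 0.10                                  # meters
f = np.linspace(0.0, 30000.0, 7)          # cycles per radian
print(np.round(diffraction_mtf(f, 0.0, wavelength, D), 3))
```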
The other effects can be included, but they are usually considered negligible unless there is good reason to believe otherwise.
function in x where the size of the rectangle corresponds to the distance between
samples. In the spatial domain y directi...
systems with such functions as interpolation, boost, and edge enhancements. These are filters that are convolved with a digital image…
H_{\mathrm{disp}}(f_x, f_y) = \mathrm{Gaus}(\sigma_{\mathrm{disp\_angle}}\,\rho) \qquad \text{Gaussian display} \qquad (2.34)

or

H_{\mathrm{disp}}(f_x, f_y) = W_{\mathrm{disp\_angle\_h}}(f_x)\, H_{\mathrm{disp\_angle\_v}}(f_y) \qquad (2.35)
where \rho is the radial spatial frequency, \sqrt{f_x^2 + f_y^2}, in cycles per milliradian. M is the system magnification (angular…
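Because the component transfer functions in this section multiply in the spatial frequency domain, an overall system MTF can be formed as a simple product of terms such as the optics, detector, and display responses. The sketch below cascades a diffraction MTF, a detector footprint (sinc) MTF, and an assumed Gaussian display MTF on a one-dimensional axis in cycles per milliradian; the parameter values, the Gaussian normalization, and the choice of which terms to include are assumptions of this sketch.

```python
import numpy as np

def mtf_diffraction(f, wavelength, aperture_d):
    """Diffraction-limited circular-aperture MTF; f in cycles per milliradian."""
    x = np.clip(f * wavelength * 1.0e3 / aperture_d, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x ** 2))

def mtf_detector(f, das_mrad):
    """Detector footprint MTF: |sinc| of the detector angular subtense (mrad)."""
    return np.abs(np.sinc(f * das_mrad))          # np.sinc(x) = sin(pi x)/(pi x)

def mtf_display_gaussian(f, sigma_mrad):
    """Assumed Gaussian display-spot MTF in the spirit of (2.34); the exact
    normalization of the book's Gaus() function may differ."""
    return np.exp(-2.0 * (np.pi * sigma_mrad * f) ** 2)

# Example: 10-cm aperture at 4 um, 0.1-mrad detector subtense, 0.02-mrad spot.
f = np.linspace(0.0, 8.0, 9)                      # cycles per milliradian
system_mtf = (mtf_diffraction(f, 4.0e-6, 0.10)
              * mtf_detector(f, 0.1)
              * mtf_display_gaussian(f, 0.02))
print(np.round(system_mtf, 3))
```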
The MTF for a typical MWIR system is shown in Figure 2.14. The pre-MTF
shown is the rollup transfer function for the optic...
where o_1(x, y) is the presampled blur image, or the output of the presample blur process, and the convolution is denoted by the asterisk. In the spatial frequency domain,

O_1(f_x, f_y) = I(f_x, f_y)\, H_{\mathrm{pre}}(f_x, f_y) \qquad (2.50)

where f_x and f_y are the horizontal and vertical spatial frequencies.
has not been applied to the signal. The higher-order replications of the baseband are
real-frequency components. The curve...
signals would not be present. The higher-order replicated signals are combined at
each spatial frequency in terms of a vec...
\mathrm{SR}_{\text{out-of-band}} = \mathrm{SR} - \mathrm{SR}_{\text{in-band}} \qquad (2.57)
where fs is the sampling frequency.
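The spurious response ratios can be illustrated with a one-dimensional model: replicate the presample response at multiples of the sampling frequency, treat the replicas as spurious signal, and compare their integrated magnitude with that of the baseband, both in total and restricted to frequencies below fs/2. The Gaussian presample MTF, the number of replicas, and the omission of the display reconstruction filter are simplifying assumptions of this sketch; the definitions in the text govern the exact integrals.

```python
import numpy as np

def spurious_response(presample_mtf, f, fs, n_replicas=3):
    """Total and in-band spurious response ratios for a sampled imager (1-D sketch).

    presample_mtf : callable returning the presample MTF at frequency f
    f             : frequency axis, in the same units as fs, spanning several fs
    fs            : sampling frequency
    Returns (sr_total, sr_in_band); the out-of-band term is their difference, (2.57).
    """
    baseband = np.abs(presample_mtf(f))
    # Aliased replicas of the presample response centered at multiples of fs.
    spurious = np.zeros_like(f)
    for n in range(1, n_replicas + 1):
        spurious += np.abs(presample_mtf(f - n * fs)) + np.abs(presample_mtf(f + n * fs))
    sr_total = spurious.sum() / baseband.sum()
    in_band = np.abs(f) <= fs / 2.0
    sr_in_band = spurious[in_band].sum() / baseband.sum()
    return sr_total, sr_in_band

# Example: a Gaussian presample MTF sampled near its rolloff frequency.
def gaussian_mtf(f):
    return np.exp(-(f / 0.6) ** 2)

f = np.linspace(-3.0, 3.0, 2001)
sr, sr_in = spurious_response(gaussian_mtf, f, fs=1.0)
print(round(sr, 3), round(sr_in, 3), round(sr - sr_in, 3))
```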
Examples of total and in-band spurious resp...
The sampling artifacts associated with out-of-band spurious response can be
removed by the display or image reconstruction...
To calculate the intensity of the source associated with the footprint of the
detector (i.e., the only part of the source ...
Equation (2.63) can be rearranged for infrared systems so that, when the SNR is set to 1, it gives the blackbody temperature difference that produces a unity signal-to-noise ratio, that is, the noise equivalent temperature difference (NETD).
eight noise components. These components are described in Table 2.1. Note that the subscripts of the noise components indicate the dimensions, temporal (t), vertical (v), and horizontal (h), in which each component varies.
\Omega(f_x) = \sigma_{tvh}^{2}\, E_t\, E_v(f_x)\, E_h(f_x) + \sigma_{vh}^{2}\, E_v(f_x)\, E_h(f_x) \qquad (2.67)

where E_t, E_v(f_x), and E_h(f_x) are the temporal, vertical, and horizontal eye and brain filter terms.
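The two noise terms in (2.67) can be estimated from a recorded image cube. In the simplified sketch below, σ_vh (the fixed-pattern, or spatial, noise) is taken from the time-averaged frame and σ_tvh (the random spatio-temporal noise) from the residual after the per-pixel temporal means and per-frame means are removed; this is a reduced version of the full eight-component 3-D noise procedure, and the cube dimensions and noise levels in the example are arbitrary.

```python
import numpy as np

def three_d_noise(cube):
    """Simplified 3-D noise estimates from a (frames, rows, cols) image cube.

    Returns (sigma_tvh, sigma_vh):
      sigma_vh  - spatial (fixed-pattern) noise: std of the temporal-mean frame
      sigma_tvh - random spatio-temporal noise: std of the residual after the
                  per-pixel temporal mean and the per-frame mean are removed
    """
    cube = cube.astype(float)
    temporal_mean = cube.mean(axis=0)                    # per-pixel average frame
    sigma_vh = float(np.std(temporal_mean - temporal_mean.mean()))
    residual = cube - temporal_mean                      # remove the fixed pattern
    residual -= residual.mean(axis=(1, 2), keepdims=True)   # remove frame-to-frame drift
    sigma_tvh = float(np.std(residual))
    return sigma_tvh, sigma_vh

# Example: a synthetic cube with known fixed-pattern and temporal noise levels.
rng = np.random.default_rng(4)
fixed_pattern = 2.0 * rng.standard_normal((64, 64))          # sigma_vh about 2
temporal_noise = 1.0 * rng.standard_normal((100, 64, 64))    # sigma_tvh about 1
cube = 100.0 + fixed_pattern[None, :, :] + temporal_noise
print(three_d_noise(cube))                                   # roughly (1.0, 2.0)
```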
2.8 Summary
This chapter introduced the basic imaging system and its components. The concepts of resolution and sensitivity were also discussed.
CHAPTER 3
Target Acquisition and Image Quality
3.1 Introduction
In Chapter 2, we reviewed the basic principles of imaging systems…
  1. 1. Signal Processing and Performance Analysis for Imaging Systems
  2. 2. For a listing of recent titles in the Artech House Optoelectronics Series, turn to the back of this book.
  3. 3. Signal Processing and Performance Analysis for Imaging Systems S. Susan Young Ronald G. Driggers Eddie L. Jacobs artechhouse.com
  4. 4. Library of Congress Cataloging-in-Publication Data A catalog record for this book is available from the U.S. Library of Congress. British Library Cataloguing in Publication Data A catalogue record for this book is available from the British Library. ISBN-13: 978-1-59693-287-6 Cover design by Igor Valdman © 2008 ARTECH HOUSE, INC. 685 Canton Street Norwood, MA 02062 All rights reserved. Printed and bound in the United States of America. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, includ- ing photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher. All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Artech House cannot attest to the accuracy of this informa- tion. Use of a term in this book should not be regarded as affecting the validity of any trade- mark or service mark. 10 9 8 7 6 5 4 3 2 1
  5. 5. To our families
  6. 6. Contents Preface xiii PART I Basic Principles of Imaging Systems and Performance 1 CHAPTER 1 Introduction 3 1.1 “Combined” Imaging System Performance 3 1.2 Imaging Performance 3 1.3 Signal Processing: Basic Principles and Advanced Applications 4 1.4 Image Resampling 4 1.5 Super-Resolution Image Reconstruction 5 1.6 Image Restoration—Deblurring 6 1.7 Image Contrast Enhancement 7 1.8 Nonuniformity Correction (NUC) 7 1.9 Tone Scale 8 1.10 Image Fusion 8 References 10 CHAPTER 2 Imaging Systems 11 2.1 Basic Imaging Systems 11 2.2 Resolution and Sensitivity 15 2.3 Linear Shift-Invariant (LSI) Imaging Systems 16 2.4 Imaging System Point Spread Function and Modulation Transfer Function 20 2.4.1 Optical Filtering 21 2.4.2 Detector Spatial Filters 22 2.4.3 Electronics Filtering 24 2.4.4 Display Filtering 25 2.4.5 Human Eye 26 2.4.6 Overall Image Transfer 27 2.5 Sampled Imaging Systems 28 2.6 Signal-to-Noise Ratio 34 2.7 Electro-Optical and Infrared Imaging Systems 38 2.8 Summary 39 References 39 vii
  7. 7. CHAPTER 3 Target Acquisition and Image Quality 41 3.1 Introduction 41 3.2 A Brief History of Target Acquisition Theory 41 3.3 Threshold Vision 43 3.3.1 Threshold Vision of the Unaided Eye 43 3.3.2 Threshold Vision of the Aided Eye 47 3.4 Image Quality Metric 50 3.5 Example 53 3.6 Summary 61 References 61 PART II Basic Principles of Signal Processing 63 CHAPTER 4 Basic Principles of Signal and Image Processing 65 4.1 Introduction 65 4.2 The Fourier Transform 65 4.2.1 One-Dimensional Fourier Transform 65 4.2.2 Two-Dimensional Fourier Transform 78 4.3 Finite Impulse Response Filters 83 4.3.1 Definition of Nonrecursive and Recursive Filters 83 4.3.2 Implementation of FIR Filters 84 4.3.3 Shortcomings of FIR Filters 85 4.4 Fourier-Based Filters 86 4.4.1 Radially Symmetric Filter with a Gaussian Window 87 4.4.2 Radially Symmetric Filter with a Hamming Window at a Transition Point 87 4.4.3 Radially Symmetric Filter with a Butterworth Window at a Transition Point 88 4.4.4 Radially Symmetric Filter with a Power Window 89 4.4.5 Performance Comparison of Fourier-Based Filters 90 4.5 The Wavelet Transform 90 4.5.1 Time-Frequency Wavelet Analysis 91 4.5.2 Dyadic and Discrete Wavelet Transform 96 4.5.3 Condition of Constructing a Wavelet Transform 97 4.5.4 Forward and Inverse Wavelet Transform 97 4.5.5 Two-Dimensional Wavelet Transform 98 4.5.6 Multiscale Edge Detection 98 4.6 Summary 102 References 102 PART III Advanced Applications 105 viii Contents
  8. 8. CHAPTER 5 Image Resampling 107 5.1 Introduction 107 5.2 Image Display, Reconstruction, and Resampling 107 5.3 Sampling Theory and Sampling Artifacts 109 5.3.1 Sampling Theory 109 5.3.2 Sampling Artifacts 110 5.4 Image Resampling Using Spatial Domain Methods 111 5.4.1 Image Resampling Model 111 5.4.2 Image Rescale Implementation 112 5.4.3 Resampling Filters 112 5.5 Antialias Image Resampling Using Fourier-Based Methods 114 5.5.1 Image Resampling Model 114 5.5.2 Image Rescale Implementation 115 5.5.3 Resampling System Design 117 5.5.4 Resampling Filters 118 5.5.5 Resampling Filters Performance Analysis 119 5.6 Image Resampling Performance Measurements 125 5.7 Summary 127 References 127 CHAPTER 6 Super-Resolution 129 6.1 Introduction 129 6.1.1 The Meaning of Super-Resolution 129 6.1.2 Super-Resolution for Diffraction and Sampling 129 6.1.3 Proposed Nomenclature by IEEE 130 6.2 Super-Resolution Image Restoration 130 6.3 Super-Resolution Image Reconstruction 131 6.3.1 Background 131 6.3.2 Overview of the Super-Resolution Reconstruction Algorithm 132 6.3.3 Image Acquisition—Microdither Scanner Versus Natural Jitter 132 6.3.4 Subpixel Shift Estimation 133 6.3.5 Motion Estimation 135 6.3.6 High-Resolution Output Image Reconstruction 143 6.4 Super-Resolution Imager Performance Measurements 158 6.4.1 Background 158 6.4.2 Experimental Approach 159 6.4.3 Measurement Results 166 6.5 Sensors That Benefit from Super-Resolution Reconstruction 167 6.5.1 Example and Performance Estimates 168 6.6 Performance Modeling and Prediction of Super-Resolution Reconstruction 172 6.7 Summary 173 References 174 Contents ix
  9. 9. CHAPTER 7 Image Deblurring 179 7.1 Introduction 179 7.2 Regularization Methods 181 7.3 Wiener Filter 181 7.4 Van Cittert Filter 182 7.5 CLEAN Algorithm 183 7.6 P-Deblurring Filter 184 7.6.1 Definition of the P-Deblurring Filter 185 7.6.2 Properties of the P-Deblurring Filter 186 7.6.3 P-Deblurring Filter Design 188 7.7 Image Deblurring Performance Measurements 199 7.7.1 Experimental Approach 200 7.7.2 Perception Experiment Result Analysis 203 7.8 Summary 204 References 204 CHAPTER 8 Image Contrast Enhancement 207 8.1 Introduction 207 8.2 Single-Scale Process 208 8.2.1 Contrast Stretching 208 8.2.2 Histogram Modification 209 8.2.3 Region-Growing Method 209 8.3 Multiscale Process 209 8.3.1 Multiresolution Analysis 210 8.3.2 Contrast Enhancement Based on Unsharp Masking 210 8.3.3 Contrast Enhancement Based on Wavelet Edges 211 8.4 Contrast Enhancement Image Performance Measurements 217 8.4.1 Background 217 8.4.2 Time Limited Search Model 218 8.4.3 Experimental Approach 219 8.4.4 Results 222 8.4.5 Analysis 223 8.4.6 Discussion 226 8.5 Summary 227 References 228 CHAPTER 9 Nonuniformity Correction 231 9.1 Detector Nonuniformity 231 9.2 Linear Correction and the Effects of Nonlinearity 232 9.2.1 Linear Correction Model 233 9.2.2 Effects of Nonlinearity 233 9.3 Adaptive NUC 238 9.3.1 Temporal Processing 238 9.3.2 Spatio-Temporal Processing 240 x Contents
  10. 10. 9.4 Imaging System Performance with Fixed-Pattern Noise 243 9.5 Summary 244 References 245 CHAPTER 10 Tone Scale 247 10.1 Introduction 247 10.2 Piece-Wise Linear Tone Scale 248 10.3 Nonlinear Tone Scale 250 10.3.1 Gamma Correction 250 10.3.2 Look-Up Tables 252 10.4 Perceptual Linearization Tone Scale 252 10.5 Application of Tone Scale to Enhanced Visualization in Radiation Treatment 255 10.5.1 Portal Image in Radiation Treatment 255 10.5.2 Locating and Labeling the Radiation and Collimation Fields 257 10.5.3 Design of the Tone Scale Curves 257 10.5.4 Contrast Enhancement 262 10.5.5 Producing the Output Image 264 10.6 Tone Scale Performance Example 264 10.7 Summary 266 References 267 CHAPTER 11 Image Fusion 269 11.1 Introduction 269 11.2 Objectives for Image Fusion 270 11.3 Image Fusion Algorithms 271 11.3.1 Superposition 272 11.3.2 Laplacian Pyramid 272 11.3.3 Ratio of a Lowpass Pyramid 275 11.3.4 Perceptual-Based Multiscale Decomposition 276 11.3.5 Discrete Wavelet Transform 278 11.4 Benefits of Multiple Image Modes 280 11.5 Image Fusion Quality Metrics 281 11.5.1 Mean Squared Error 282 11.5.2 Peak Signal-to-Noise Ratio 283 11.5.3 Mutual Information 283 11.5.4 Image Quality Index by Wang and Bovik 283 11.5.5 Image Fusion Quality Index by Piella and Heijmans 284 11.5.6 Xydeas and Petrovic Metric 285 11.6 Imaging System Performance with Image Fusion 286 11.7 Summary 290 References 290 About the Authors 293 Index 295 Contents xi
  11. 11. Preface In today’s consumer electronics market where a 5-megapixel camera is no longer considered state-of-the-art, signal and image processing algorithms are real-time and widely used. They stabilize images, provide super-resolution, adjust for detec- tor nonuniformities, reduce noise and blur, and generally improve camera perfor- mance for those of us who are not professional photographers. Most of these signal and image processing techniques are company proprietary and the details of these techniques are never revealed to outside scientists and engineers. In addition, it is not necessary for the performance of these systems (including the algorithms) to be determined since the metric of success is whether the consumer likes the product and buys the device. In other imaging communities such as military imaging systems (which, at a minimum, include visible, image intensifiers, and infrared) and medical imaging devices, it is extremely important to determine the performance of the imaging sys- tem, including the signal and image processing techniques. In military imaging sys- tems that involve target acquisition and surveillance/reconnaissance, the performance of an imaging system determines how effective the warfighter can accomplish his or her mission. In medical systems, the imaging system performance determines how accurately a diagnosis can be provided. Signal and image process- ing plays a key role in the performance of these imaging systems and, in the past 5 to 10 years, has become a key contributor to increased imaging system performance. There is a great deal of government funding in signal and image processing for imaging system performance and the literature is full of university and government laboratory developed algorithms. There are still a great number of industry algo- rithms that, overall, are considered company proprietary. We focus on those in the literature and those algorithms that can be generalized in a nonproprietary manner. There are numerous books in the literature on signal and image processing tech- niques, algorithms, and methods. The majority of these books emphasize the math- ematics of image processing and how they are applied to image information. Very few of the books address the overall imaging system performance when signal and image processing is considered a component of the imaging system. Likewise, there are many books in the area of imaging system performance that consider the optics, the detector, and the displays in the system and how the system performance behaves with changes or modifications of these components. There is very little book content where signal and imager processing is included as a component of the overall imaging system performance. This is the gap that we have attempted to fill with this book. While algorithm development has exploded in the past 5 to 10 years, xiii
  12. 12. the system performance aspects are relatively new and not quite fully understood. While the focus of this book is to help the scientist and engineer begin to understand that these algorithms are really an imaging system component and help in the system performance prediction of imaging systems with these algorithms, the performance material is new and will undergo dramatic improvements in the next 5 years. We have chosen to address signal and image processing techniques that are not new, but the real time implementation in military and medical systems are relatively new and the performance predication of systems with these algorithms are definitely new. There are some algorithms that are not addressed such as electronic stabiliza- tion and turbulence correction. There are current programs in algorithm develop- ment that will provide great advances in algorithm performance in the next few years, so we decided not to spend time on these particular areas. It is worth mentioning that there is a community called “computational imag- ing” where, instead of using signal/image processing to improve the performance of an existing imaging system approach, signal processing is an inherent part of the electro-optical design process for image formation. The field includes unconven- tional imaging systems and unconventional processing, where the performance of the collective system design is beyond any conventional system approach. In many cases, the resulting image is not important. The goal of the field is to maximize sys- tem task performance for a given electro-optical application using nonconventional design rules (with signal processing and electro-optical components) through the exploitation of various degrees of freedom (space, time, spectrum, polarization, dynamic range, and so forth). Leaders in this field include Dennis Healey at DARPA, Ravi Athale at MITRE, Joe Mait at the Army Research Laboratory, Mark Mirotznick at Catholic University, and Dave Brady at Duke University. These researchers and others are forging a new path for the rest of us and have provided some very stimulating experiments and demonstrations in the past 2 or 3 years. We do not address computational imaging in this book, as the design and approach methods are still a matter of research and, as always, it will be some time before sys- tem performance is addressed in a quantitative manner. We would like to thank a number of people for their thoughtful assistance in this work. Dr. Patti Gillespie at the Army Research Laboratory provided inspiration and encouragement for the project. Rich Vollmerhausen has contributed more to mili- tary imaging system performance modeling over the past 10 years than any other researcher, and his help was critical to the success of the project. Keith Krapels and Jonathan Fanning both assisted with the super-resolution work. Khoa Dang, Mike Prarie, Richard Moore, Chris Howell, Stephen Burks, and Carl Halford contributed material for the fusion chapter. There are many others who worked signal process- ing issues and with whom we collaborated through research papers to include: Nicole Devitt, Tana Maurer, Richard Espinola, Patrick O’Shea, Brian Teaney, Louis Larsen, Jim Waterman, Leslie Smith, Jerry Holst, Gene Tener, Jennifer Parks, Dean Scribner, Jonathan Schuler, Penny Warren, Alan Silver, Jim Howe, Jim Hilger, and Phil Perconti. We are grateful for the contributions that all of these people have pro- vided over the years. We (S. 
Susan Young and Eddie Jacobs) would like to thank our coauthor, Dr. Ronald G. Driggers for his suggestion of writing this book and encouragement in this venture. Our understanding and appreciation of system performance signifi- cance started from collaborating with him. S. Susan Young would like to thank Dr. xiv Preface
  13. 13. Hsien-Che Lee for his guidance and help early in her career in signal and image pro- cessing. On a personal side, we authors are very thankful to our families for their support and understanding. xv
  14. 14. P A R T I Basic Principles of Imaging Systems and Performance
  15. 15. C H A P T E R 1 Introduction 1.1 “Combined” Imaging System Performance The “combined” imaging system performance of both hardware (sensor) and soft- ware (signal processing) is extremely important. Imaging system hardware is designed primarily to form a high-quality image from source emissions under a large variety of environmental conditions. Signal processing is used to help highlight or extract information from the images that are generated from an imaging system. This processing can be automated for decision-making purposes or it can be utilized to enhance the visual acuity of a human looking through the imaging system. Performance measures of an imaging system have been excellent methods for better design and understanding of the imaging system. However, the imaging per- formance of an imaging system with the aid of signal processing has not been widely considered in the light of improving image quality from imaging systems and signal processing algorithms. Imaging systems can generate images with low-contrast, high-noise, blurring, or corrupted/lost high-frequency details, among others. How does the image performance of a low-cost imaging system with the aid of signal pro- cessing compare with the one of an expensive imaging system? Is it worth investing in higher image quality by improving the imaging system hardware or by develop- ing the signal processing software? The topic of this book is to relate the ability of extracting information from an imaging system with the aid of signal processing to evaluate the overall performance of imaging systems. 1.2 Imaging Performance Understanding the image formation and recording process helps in understanding the factors that affect image performance and therefore helps the design of imaging systems and signal processing algorithms. The image formation process and the sources of image degradation, such as loss of useful high-frequency details, noise, or low-contrast target environment, are discussed in Chapter 2. Methods of determining image performance are important tools in determining the merits of imaging systems and signal processing algorithms. Image performance determination can be performed via subjective human perception studies or image performance modeling. Image performance prediction and the role of image perfor- mance modeling are also discussed in Chapter 3. 3
  16. 16. 1.3 Signal Processing: Basic Principles and Advanced Applications The basic signal processing principles, including Fourier transform, wavelet trans- form, finite impulse response (FIR) filters, and Fourier-based filters, are discussed in Chapter 4. In an image formation and recording process, many factors affect sensor perfor- mance and image quality, and these can result in loss of high-frequency information or low contrast in an image. Several common causes of low image quality are the following: • Many low-cost visible and thermal sensors spatially or electronically undersample an image. Undersampling results in aliased imagery in which subtle/detailed information (high-frequency components) is lost in these images. • An imaging system’s blurring function (sometimes called the point spread function, or PSF) is another common factor in the reduction of high-frequency components in the acquired imagery and results in blurred images. • Low-cost sensors and environmental factors, such as lighting sources or back- ground complexities, result in low-contrast images. • Focal plan array (FPA) sensors have detector-to-detector variability in the FPA fabrication process and cause the fixed-pattern noise in the acquired imagery. There are many signal processing applications for the enhancement of imaging system performance. Most of them attempt to enhance the image quality or remove the degradation phenomena. Specifically, these applications try to recover the useful high-frequency components that are lost or corrupted in the image and attempt to suppress the undesired high-frequency components, which are noises. In Chapters 5 to 11, the following classes of signal processing applications are considered: 1. Image resampling; 2. Super-resolution image reconstruction; 3. Image restoration—deblurring; 4. Image contrast enhancement; 5. Nonuniformity correction (NUC); 6. Tone scale; 7. Image fusion. 1.4 Image Resampling The concept of image resampling originates from the sampled imager. The discus- sion in this chapter relates image resampling with image display and reconstruction from sampled points of one single image. These topics provide the reader with a fun- damental understanding that the way an image is processed and displayed is just as important as the blur and sampling characteristics of the sensor. It also provides a background for undersampled imaging for discussion on super-resolution image reconstruction in the following chapter. In signal processing, image resampling is 4 Introduction
  17. 17. also called image decimation, or image interpolation, according to whether the goal is to reduce or enlarge the size (or resolution) of a captured image. It can provide the image values that are not recorded by the imaging system, but are calculated from the neighboring pixels. Image resampling does not increase the inherent informa- tion content in the image, but poor image display reconstruction function can reduce the overall imaging system performance. The image resampling algorithms include spatial and spatial-frequency domain, or Fourier-based windowing, methods. The important considerations in image resampling include the image resampling model, image rescale implementation, and resampling filters, especially the anti-aliasing image resampling filter. These algo- rithms, examples, and image resampling performance measurements are discussed in Chapter 5. 1.5 Super-Resolution Image Reconstruction The loss of high-frequency information in an image could be due to many factors. Many low-cost visible and thermal sensors spatially or electronically undersample an image. Undersampling results in aliased imagery in which the high-frequency components are folded into the low-frequency components in the image. Conse- quently, subtle/detailed information (high-frequency components) is lost in these images. Super-resolution image reconstruction can produce high-resolution images by using the existing low-cost imaging devices from a sequence, or a few snapshots, of low-resolution images. Since undersampled images have subpixel shifts between successive frames, they represent different information from the same scene. Therefore, the informa- tion that is contained in an undersampled image sequence can be combined to obtain an alias-free (high-resolution) image. Super-resolution image reconstruction from multiple snapshots provides far more detail information than any interpolated image from a single snapshot. Figure 1.1 shows an example of a high-resolution (alias-free) infrared image that is obtained from a sequence of low-resolution (aliased) input images having subpixel shifts among them. 1.5 Super-Resolution Image Reconstruction 5 (b) (a) Figure 1.1 Example of super-resolution image reconstruction: (a) input sequence of aliased infra- red images having subpixel shifts among them; and (b) output alias-free (high-resolution) image in which the details of tree branches are revealed.
  18. 18. The first step in a super-resolution image reconstruction algorithm is to estimate the supixel shifts of each frame with respect to a reference frame. The second step is to increase the effective spatial sampling by operating on a sequence of low-resolu- tion subpixel-shifted images. There are also spatial and spatial frequency domain methods for the subpixel shift estimation and the generation of the high-resolution output images. These algorithms, examples, and the image performance are discussed in Chapter 6. 1.6 Image Restoration—Deblurring An imaging system’s blurring function, also called the point spread function (PSF), is another common factor in the reduction of high-frequency components in the image. Image restoration tries to inverse this blurring degradation phenomenon, but within the bandlimit of the imager (i.e., it enhances the spatial frequencies within the imager band). This includes deblurring images that are degraded by the limitations of a sensor or environment. The estimate or knowledge of the blurring function is essential to the application of these algorithms. One of the most important consider- ations of designing a deblurring filter is to control noise, since the noise is likely amplified at high frequencies. The amplification of noise results in undesired arti- facts in the output image. Figure 1.2 shows examples of image deblurring. One input image [Figure 1.2(a)] contains the blur, while the deblurred version of it [Figure 1.2(b)] removes the most blur. Another input image [Figure 1.2(c)] contains the blur and noise; the noise effect illustrates on the deblurred version of it [Figure 1.2(d)]. Image restoration tries to recover the high-frequency information below the diffrac- tion limit while limiting the noise artifacts. The designs of deblurring filters, the 6 Introduction (a) (b) (c) (d) Figure 1.2 Examples of image deblurring: (a) blurred bar image; (b) deblurred version of (a); (c) blurred bar image with noise added; and (d) deblurred version of (c).
  19. 19. noise control mechanisms, examples, and image performance are discussed in Chapter 7. 1.7 Image Contrast Enhancement Image details can also be enhanced by image contrast enhancement techniques in which certain image edges are emphasized as desired. For an example of a medical application in diagnosing breast cancer from mammograms, radiologists follow the ductal networks to look for abnormalities. However, the number of ducts and the shape of ductal branches vary with individuals, which make the visual process of locating the ducts difficult. The image contrast enhancement provides the ability to enhance the appearance of the ductal elements relative to the fatty-tissue surround- ings, which helps radiologists to visualize abnormalities in mammograms. Image contrast enhancement methods can be divided into single-scale approach and multiscale approach. In the single-scale approach, the image is processed in the original image domain, such as a simple look-up table. In the multiscale approach, the image is decomposed into multiple resolution scales, and processing is per- formed in the multiscale domain. Because the information at each scale is adjusted before the image is reconstructed back to the original image intensity domain, the output image contains the desired detail information. The multiscale approach can also be coupled with the dynamic range reduction. Therefore, the detail information in different scales can be displayed in one output image. Localized contrast enhancement (LCE) is the process in which these techniques are applied on a local scale for the management of dynamic range in the image. For example, the sky-to-ground interface in infrared imaging can include a huge apparent tempera- ture difference that occupies most of the image dynamic range. Small targets with smaller signals can be lost, while LCE can reduce the large sky-to-ground interface signal and enhance small target signals (see Figure 8.10 later in this book). Details of the algorithms, examples, and image performance are discussed in Chapter 8. 1.8 Nonuniformity Correction (NUC) Focal plan array (FPA) sensors have been used in many commercial and military applications, including both visible and infrared imaging systems, since they have wide spectral responses, compact structures, and cost-effective production. How- ever, each individual photodetector in the FPA has a different photoresponse, due to detector-to-detector variability in the FPA fabrication process [1]. Images that are acquired by an FPA sensor suffer from a common problem known as fixed-pattern noise, or spatial nonuniformity. The technique to compensate for this distortion is called nonuniformity correction (NUC). Figure 1.3 shows an example of a nonuniformity corrected image from an original input image with the fixed-pattern noise. There are two main categories of NUC algorithms, namely, calibration-based and scene-adaptive algorithms. A conventional, calibration-based NUC is the stan- dard two-point calibration, which is also called linear NUC. This algorithm esti- 1.7 Image Contrast Enhancement 7
1.8 Nonuniformity Correction (NUC)

Focal plane array (FPA) sensors have been used in many commercial and military applications, including both visible and infrared imaging systems, since they have wide spectral responses, compact structures, and cost-effective production. However, each individual photodetector in the FPA has a different photoresponse, due to detector-to-detector variability in the FPA fabrication process [1]. Images that are acquired by an FPA sensor therefore suffer from a common problem known as fixed-pattern noise, or spatial nonuniformity. The technique to compensate for this distortion is called nonuniformity correction (NUC). Figure 1.3 shows an example of a nonuniformity-corrected image alongside the original input image with fixed-pattern noise.

There are two main categories of NUC algorithms, namely, calibration-based and scene-adaptive algorithms. A conventional, calibration-based NUC is the standard two-point calibration, which is also called linear NUC. This algorithm estimates the gain and offset parameters by exposing the FPA to two distinct and uniform irradiance levels. The scene-adaptive NUC uses the data acquired in the video sequence and a motion estimation algorithm to register each point in the scene across all of the image frames. In this way, continuous compensation can be applied adaptively for individual detector responses and background changes. These algorithms, examples, and imaging system performance are discussed in Chapter 9.

Figure 1.3 Example of nonuniformity correction: (a) input image with the fixed-pattern noise shown in the image; and (b) nonuniformity-corrected image in which the helicopter in the center is clearly illustrated.
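The standard two-point calibration described above reduces to a per-detector straight-line fit. The sketch below assumes stacks of raw frames collected at two uniform irradiance levels; the variable names and frame-stack layout are illustrative, not a prescribed interface.

```python
import numpy as np

def two_point_nuc(frames_cold, frames_hot, level_cold, level_hot):
    """Standard two-point (linear) nonuniformity correction.

    frames_cold, frames_hot : raw FPA frames of shape (num_frames, rows, cols),
        collected while staring at two distinct, uniform irradiance levels
    level_cold, level_hot   : the two known uniform irradiance (or blackbody
        radiance) levels, in whatever units the corrected output should have

    Returns per-detector gain and offset such that
        corrected = gain * raw + offset
    maps every detector onto the same linear response.  Assumes each detector
    responds differently at the two levels (nonzero denominator).
    """
    mean_cold = frames_cold.mean(axis=0)
    mean_hot = frames_hot.mean(axis=0)
    gain = (level_hot - level_cold) / (mean_hot - mean_cold)
    offset = level_cold - gain * mean_cold
    return gain, offset

def apply_nuc(raw_frame, gain, offset):
    """Apply the stored per-detector correction to a new raw frame."""
    return gain * raw_frame + offset
```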
1.9 Tone Scale

Tone scale is a technique that improves the image presentation on an output display medium (softcopy display or hardcopy print). Tone scale is a mathematical mapping of the image pixel values from the sensor to a region of interest on an output medium. Note that tone scale transforms improve only the appearance of the image, not the image quality itself; the gray value resolution is still the same. However, a proper tone scale allows the characteristic curve of a display system to match the sensitivity of the human eye, enhancing image interpretation task performance. There are various tone scale techniques, including piecewise linear tone scale, nonlinear tone scale, and perceptual linearization tone scale. These techniques and a tone scale performance example are discussed in Chapter 10.

1.10 Image Fusion

Because researchers realize that different sensors provide different signature cues of the scene, image fusion has been receiving increased attention in signal processing, and a number of applications have been shown to benefit from fusing the images of multiple sensors. Imaging sensor characteristics are determined by the wavebands to which they respond in the electromagnetic spectrum. Figure 1.4 is a diagram of the electromagnetic spectrum with wavelength indicated in metric length units [2]. The most familiar classifications of wavebands are the radiowave, microwave, infrared, visible, ultraviolet, X-ray, and gamma-ray wavebands. Figure 1.5 shows further subdivided wavebands for broadband sensors [3]. For example, the infrared waveband is divided into near infrared (NIR), shortwave infrared (SWIR), midwave infrared (MWIR), longwave infrared (LWIR), and far infrared.

Figure 1.4 Electromagnetic spectrum.
Figure 1.5 Subdivided infrared wavebands.

The sensor types are driven by the type of image information that can be exploited within these bands. X-ray sensors can view human bones for disease diagnosis. Microwave and radiowave sensors have good weather penetration in military applications. Infrared sensors detect both temperature and emissivity and are beneficial for night-vision applications. Different subwaveband sensors within the infrared can provide different information. For example, MWIR sensors respond better to hotter-than-terrestrial objects, while LWIR sensors have better response to overall terrestrial object temperatures, which are around 300 Kelvins (K). Solar clutter is high in the MWIR in the daytime and is negligible in the LWIR. Figure 1.6 shows an example of fusing MWIR and LWIR images. The road cracks are visible in the LWIR image, but not in the MWIR image. Similarly, the Sun glint is visible in the MWIR image, but not in the LWIR image. The fused image shows both the Sun glint and the road cracks.
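As a toy illustration of combining complementary band information, the sketch below selects, pixel by pixel, whichever of two registered band images carries more local detail energy. This is only one of many possible fusion rules and is not the method evaluated in Chapter 11; the window size is an arbitrary choice, and the inputs are assumed to be co-registered and normalized to a common gray scale.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_local_energy(img_a, img_b, window=9):
    """Pixel-wise fusion: keep, at each pixel, the band with more local detail energy.

    img_a, img_b are assumed to be co-registered and normalized to a common
    gray scale (for example, an MWIR and an LWIR image of the same scene).
    """
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)

    def detail_energy(img):
        local_mean = uniform_filter(img, size=window)
        return uniform_filter((img - local_mean) ** 2, size=window)

    return np.where(detail_energy(a) >= detail_energy(b), a, b)
```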
Many questions about image fusion remain unanswered and open to new research opportunities. Some of these questions involve how to select different sensors to provide better image information from the scene; whether different imaging information can be effectively combined to provide a better cue in the scene; and how best to combine the information. These issues, along with examples and imaging system performance, are presented in Chapter 11.

Figure 1.6 Example of fusing MWIR and LWIR images. The road cracks are visible in LWIR, but not in MWIR. Similarly, the Sun glint is visible in the MWIR image, but not in LWIR. The fused image shows both Sun glint and road cracks.

References

[1] Milton, A. F., F. B. Barone, and M. R. Kruer, "Influence of Nonuniformity on Infrared Focal Plane Array Performance," Optical Engineering, Vol. 24, No. 5, 1985, pp. 855–862.
[2] Richards, A., Alien Vision—Exploring the Electromagnetic Spectrum with Imaging Technology, Bellingham, WA: SPIE Press, 2001.
[3] Driggers, R. G., P. Cox, and T. Edwards, Introduction to Infrared and Electro-Optical Systems, Norwood, MA: Artech House, 1999.
  23. 23. C H A P T E R 2 Imaging Systems In this chapter, basic imaging systems are introduced and the concepts of resolution and sensitivity are explored. This introduction presents helpful background infor- mation that is necessary to understand imaging system performance, which is pre- sented in Chapter 3. It also provides a basis for later discussions on the implementation of advanced signal and image processing techniques. 2.1 Basic Imaging Systems A basic imaging system can be depicted as a cascaded system where the input signal is optical flux from a target and background and the output is an image presented for human consumption. A basic imaging system is shown in Figure 2.1. The system can begin with the flux leaving the target and the background. For electro-optical systems and more sophisticated treatments of infrared systems, the system can even begin with the illumination of the target with external sources. Regardless, the flux leaving the source traverses the atmosphere as shown. This path includes blur from turbulence and scattering and a reduction in the flux due to atmospheric extinction, such as scattering, and absorption, among others. The flux that makes it to the entrance of the optics is then blurred by optical diffraction and aberrations. The flux is also reduced by the optical transmission. The flux is imaged onto a detector array, either scanning or staring. Here, the flux is converted from photons to electrons. There is a quantum efficiency that reduces the signal, and the finite size of the detector imposes a blur on the image. The electronics further reduce, or in some cases enhance, the signal. The display also provides a signal reduction and a blur, due to the finite size of the display element. Finally, the eye consumes the image. The eye has its own inherent blur and noise, which are consid- ered in overall system performance. In some cases, the output of the electronics is processed by an automatic target recognizer (ATR), which is an automated process of detecting and recognizing targets. An even more common process is an aided tar- get recognizer (AiTR), which is more of a cueing process for a human to view the resultant cued image “chips” (a small area containing an object). All source and background objects above 0K emit electromagnetic radiation associated with the thermal activity on the surface of the object. For terrestrial tem- peratures (around 300K), objects emit a good portion of the electromagnetic flux in the infrared part of the electromagnetic spectrum. This emission of flux is some- times called blackbody thermal emission. The human eye views energy only in the visible portion of the electromagnetic spectrum, where the visible band spans wave- lengths from 0.4 to 0.7 micrometer (µm). Infrared imaging devices convert energy in 11
the infrared portion of the electromagnetic spectrum into displayable images in the visible band for human use.

The infrared spectrum begins at the red end of the visible spectrum, where the eye can no longer sense energy, and spans from 0.7 to 100 µm. The infrared spectrum is, by common convention, broken into five different bands (this may vary according to the application/community). The bands are typically defined in the following way: near infrared (NIR) from 0.7 to 1.0 µm, shortwave infrared (SWIR) from 1.0 to 3.0 µm, midwave infrared (MWIR) from 3.0 to 5.0 µm, longwave infrared (LWIR) from 8.0 to 14.0 µm, and far infrared (FIR) from 14.0 to 100 µm. These bands are depicted graphically in Figure 2.2, which shows the atmospheric transmission for a 1-kilometer horizontal ground path on a "standard" day in the United States. These types of transmission graphs can be tailored for any condition using sophisticated atmospheric models, such as MODTRAN (from http://www.ontar.com). Note that there are many atmospheric "windows," so that an imager designed with such a band selection can see through the atmosphere.

Figure 2.1 Basic imaging system.
Figure 2.2 Atmospheric transmission for a 1-kilometer path on a standard U.S. atmosphere day.
  25. 25. The primary difference between a visible spectrum camera and an infrared imager is the physical phenomenology of the radiation from the scene being imaged. The energy used by a visible camera is predominantly reflected solar or some other illuminating energy in the visible spectrum. The energy imaged by infrared imagers, commonly known as forward looking infrareds (FLIRs) in the MWIR and LWIR bands, is primarily self-emitted radiation. From Figure 2.2, the MWIR band has an atmospheric window in the 3- to 5-µm region, and the LWIR band has an atmo- spheric window in the 8- to 12-µm region. The atmosphere is opaque in the 5- to 8-µm region, so it would be pointless to construct a camera that responds to this waveband. Figure 2.3 provides images to show the difference in the source of the radiation sensed by the two types of cameras. The visible image on the left side is all light that was provided by the Sun, propagated through Earth’s atmosphere, reflected off the objects in the scene, traversed through a second atmospheric path to the sensor, and then imaged with a lens and a visible band detector. A key here is that the objects in the scene are represented by their reflectivity characteristics. The image characteris- tics can also change by any change in atmospheric path or source characteristic change. The atmospheric path characteristics from the sun to the objects change fre- quently because the Sun’s angle changes throughout the day, plus the weather and cloud conditions change. The visible imager characterization model is a multipath problem that is extremely difficult. The LWIR image given on the right side of Figure 2.3 is obtained primarily by the emission of radiation by objects in the scene. The amount of electromagnetic flux depends on the temperature and emissivity of the objects. A higher temperature and a higher emissivity correspond to a higher flux. The image shown is white hot—a whiter point in the image corresponds to a higher flux leaving the object. It is interesting to note that trees have a natural self-cooling process, since a high temper- ature can damage foliage. Objects that have absorbed a large amount of solar energy are hot and are emitting large amounts of infrared radiation. This is some- times called solar loading. 2.1 Basic Imaging Systems 13 Figure 2.3 Visible image on the left side: reflected flux; LWIR infrared image on the right side: emitted flux. (Images courtesy of NRL Optical Sciences Division.)
The characteristics of the infrared radiation emitted by an object are described by Planck's blackbody law in terms of spectral radiant emittance:

\[ M(\lambda) = \frac{c_1\,\varepsilon(\lambda)}{\lambda^5\left(e^{c_2/\lambda T} - 1\right)} \quad \mathrm{W/(cm^2\text{-}\mu m)} \qquad (2.1) \]

where c1 and c2 are constants of 3.7418 × 10^4 W-µm^4/cm^2 and 1.4388 × 10^4 µm-K. The wavelength, λ, is provided in micrometers and ε(λ) is the emissivity of the surface. A blackbody source is defined as an object with an emissivity of 1.0 and is considered a perfect emitter. Source emissions of blackbodies at typical terrestrial temperatures are shown in Figure 2.4. Often, in modeling and system performance assessment, the terrestrial background temperature is assumed to be 300K. The source emittance curves are shown for other temperatures for comparison: one curve corresponds to an object colder than the background, and two curves correspond to temperatures hotter than the background.

Figure 2.4 Planck's blackbody radiation curves for temperatures from 290K to 320K.

Planck's equation describes the spectral shape of the source as a function of wavelength. It is readily apparent that the peak shifts to the left (shorter wavelengths) as the body temperature increases. If the temperature of a blackbody were increased to that of the Sun (5,900K), the peak of the spectral shape would decrease to 0.55 µm, or green light (note that this is in the visible band). This peak wavelength is described by Wien's displacement law

\[ \lambda_{\max} = \frac{2{,}898}{T} \ \mu\mathrm{m} \qquad (2.2) \]

For a terrestrial temperature of 300K, the peak wavelength is around 10 µm. It is important to note that the difference between the blackbody curves is the "signal" in the infrared bands. For an infrared sensor, if the background is at 300K and the target is at 302K, the signal is the difference in flux between the blackbody curves. Signals in an infrared sensor are small and ride on very large amounts of background flux. In the visible band, this is not the case. For example, consider the case of a white target on a black background: the black background is generating no signal, while the white target is generating a maximum signal, given that the sensor gain has been adjusted.
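Equations (2.1) and (2.2) are easy to evaluate numerically. The sketch below computes the spectral radiant emittance, the Wien peak for a 300K background, and the in-band emittance difference between a 302K target and a 300K background over 8-12 µm, which is the "signal" referred to above. The wavelength grid and band edges are illustrative choices.

```python
import numpy as np

C1 = 3.7418e4    # W-um^4/cm^2
C2 = 1.4388e4    # um-K

def planck_emittance(wavelength_um, temp_k, emissivity=1.0):
    """Spectral radiant emittance M(lambda) in W/(cm^2-um), per (2.1)."""
    return emissivity * C1 / (wavelength_um**5 *
                              (np.exp(C2 / (wavelength_um * temp_k)) - 1.0))

def wien_peak_um(temp_k):
    """Peak wavelength in micrometers from Wien's displacement law, (2.2)."""
    return 2898.0 / temp_k

# Peak wavelength of a 300K terrestrial background (about 9.7 um)
print(wien_peak_um(300.0))

# "Signal" between a 302K target and a 300K background, integrated over 8-12 um
wl = np.linspace(8.0, 12.0, 401)
d_m = planck_emittance(wl, 302.0) - planck_emittance(wl, 300.0)
print(np.sum(d_m) * (wl[1] - wl[0]))    # in-band emittance difference, W/cm^2
```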
Dynamic range may be fully utilized in a visible sensor. For the case of an infrared sensor, a portion of the dynamic range is used by the large background flux radiated by everything in the scene. This flux is never a small value; hence, sensitivity and dynamic range requirements are much more difficult to satisfy in infrared sensors than in visible sensors.

2.2 Resolution and Sensitivity

There are three general categories of infrared sensor performance characterizations. The first is sensitivity and the second is resolution. When end-to-end, or human-in-the-loop (HITL), performance is required, the third type of performance characterization describes the visual acuity of an observer through a sensor, which will be discussed in Chapter 3. The former two are both related to the hardware and software that comprise the system, while the latter includes both the sensor and the observer.

The first type of measure, sensitivity, is determined through radiometric analysis of the scene/environment and the quantum electronic properties of the detectors. Resolution is determined by analysis of the physical optical properties, the detector array geometry, and other degrading components of the system in much the same manner as complex electronic circuit/signals analysis. Sensitivity describes how the sensor performs with respect to input signal level. It relates noise characteristics, responsivity of the detector, light gathering of the optics, and the dynamic range/quantization of the sensor. Radiometry describes how much light leaves the object and background and is collected by the detector. Optical design and detector characteristics are of considerable importance in sensor sensitivity analysis. In infrared systems, noise equivalent temperature difference (NETD) is often a first-order description of the system sensitivity. The three-dimensional (3-D) noise model [1] describes more detailed representations of sensitivity parameters. In visible systems, the noise equivalent irradiance (NEI) is a similar term that is used to determine the sensitivity of the system.

The second type of measure is resolution. Resolution is the ability of the sensor to image small targets and to resolve fine detail in large targets. Modulation transfer function (MTF) is the most widely used resolution descriptor in infrared systems. Alternatively, resolution may be specified by a number of descriptive metrics, such as the optical Rayleigh criterion or the instantaneous field-of-view (FOV) of the detector. While these metrics are component-level descriptions, the system MTF is an all-encompassing function that describes the system resolution.

Sensitivity and resolution can be competing system characteristics, and they are the most important issues in initial studies for a design. For example, given a fixed sensor aperture diameter, an increase in focal length can provide an increase in resolution, but it may decrease sensitivity [1]. Typically, visible band systems have plenty of sensitivity and are resolution-limited, while infrared imagers have been more sensitivity-limited. With staring infrared sensors, the sensitivity has seen significant improvements.

Quite often metrics such as NETD and MTF are considered to be separable. However, in an actual sensor, sensitivity and resolution performance are not independent. As a result, minimum resolvable temperature difference (MRT or MRTD) or the sensor contrast threshold function (CTF) has become the primary performance metric for infrared systems.
MRT and MRC (minimum resolvable contrast) are quantitative performance measures in terms of both sensitivity and resolution. A simple MRT curve is shown in Figure 2.5. The performance is bounded by the sensor's limits and the observer's limits. The temperature difference, or thermal contrast, required to image details in a scene increases as the detail size decreases. The inclusion of observer performance yields a single-sensor performance characterization. It describes sensitivity as a function of resolution and includes the human visual system.

Figure 2.5 Sensor resolution and sensitivity (minimum resolvable temperature versus spatial frequency, bounded by the system resolution limit and the visual sensitivity limit).

2.3 Linear Shift-Invariant (LSI) Imaging Systems

A linear imaging system requires two properties [1, 2]: superposition and scaling. Consider an input scene, i(x, y), and an output image, o(x, y). Given that a linear system is described by L{ }, then

\[ o(x, y) = L\{ i(x, y) \} \qquad (2.3) \]

The superposition and scaling properties are satisfied if

\[ L\{ a\,i_1(x, y) + b\,i_2(x, y) \} = a\,L\{ i_1(x, y) \} + b\,L\{ i_2(x, y) \} \qquad (2.4) \]

where i1(x, y) and i2(x, y) are input scenes and a and b are constants. Superposition, simply described, means that the image of two scenes, such as a target scene and a background scene, is the sum of the individual scenes imaged separately. The simplest example is that of a point source, as shown in Figure 2.6. The left side of the figure shows the case where a single point source is imaged, then a second point source is imaged, and the two results are summed to give an image of the two point sources.
The superposition principle states that this sum of point source images would be identical to the resultant image if both point sources were included in the input scene. The second property simply states that an increase in input scene brightness increases the image brightness; doubling a point source brightness would double the image brightness. The linear systems approach is extremely important for imaging systems, since any scene can be represented as a collection of weighted point sources. The output image is the collection of the imaging system responses to the point sources.

Figure 2.6 Superposition principle.

In continuous (nonsampled) imaging systems, another property is typically assumed: shift-invariance. Sometimes a shift-invariant system is called isoplanatic. Mathematically stated, the response of a shift-invariant system to a shifted input, such as a point source, is a shifted output; that is,

\[ o(x - x_o, y - y_o) = L\{ i(x - x_o, y - y_o) \} \qquad (2.5) \]

where xo and yo are the coordinates of the point source. It does not matter where the point source is located in the scene; the image of the point source will appear the same, only shifted in space. The image of the point source does not change with position. If this property is satisfied, the shifting property of the point source, or delta function, can be used,

\[ i(x_o, y_o) = \int_{y_1}^{y_2}\!\!\int_{x_1}^{x_2} i(x, y)\, \delta(x - x_o, y - y_o)\, dx\, dy \qquad (2.6) \]
where x1 ≤ xo ≤ x2 and y1 ≤ yo ≤ y2. The delta function, δ(x − xo, y − yo), is nonzero only at (xo, yo) and has an area of unity. The delta function is used frequently to describe infinitesimal sources of light. Equation (2.6) states that the value of the input scene at (xo, yo) can be written in terms of a weighted delta function. Applying (2.6) over the entire input scene gives

\[ i(x, y) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} i(\alpha, \beta)\, \delta(x - \alpha, y - \beta)\, d\alpha\, d\beta \qquad (2.7) \]

which states that the entire input scene can be represented as a collection of weighted point sources. The output of the linear system can then be written using (2.7) as the input, so that

\[ o(x, y) = L\left\{ \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} i(\alpha, \beta)\, \delta(x - \alpha, y - \beta)\, d\alpha\, d\beta \right\} \qquad (2.8) \]

Since the linear operator, L{ }, does not operate on α and β, (2.8) can be rewritten as

\[ o(x, y) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} i(\alpha, \beta)\, L\{ \delta(x - \alpha, y - \beta) \}\, d\alpha\, d\beta \qquad (2.9) \]

If we call the point source response of the system the impulse response, defined as

\[ h(x, y) = L\{ \delta(x, y) \} \qquad (2.10) \]

then the output of the system is the convolution of the input scene with the impulse response of the system; that is,

\[ o(x, y) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} i(\alpha, \beta)\, h(x - \alpha, y - \beta)\, d\alpha\, d\beta = i(x, y) ** h(x, y) \qquad (2.11) \]

where ** denotes the two-dimensional (2-D) convolution. The impulse response of the system, h(x, y), is commonly called the point spread function (PSF) of the imaging system. The significance of (2.11) is that the system impulse response is a spatial filter that is convolved with the input scene to obtain an output image. The simplified LSI imaging system model is shown in Figure 2.7.

Figure 2.7 Simplified LSI imaging system.
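The superposition and convolution relations (2.4) and (2.11) can be verified numerically. In the sketch below, two point sources are imaged separately and together through the same Gaussian PSF; the results agree, which is exactly the superposition property. The PSF width and the source positions are arbitrary choices for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

# Imaging as convolution with the PSF, (2.11), and the superposition
# property, (2.4): imaging two point sources together gives the same result
# as imaging them separately and summing.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2))
psf /= psf.sum()                                   # unit-area blur

scene1 = np.zeros((n, n)); scene1[20, 20] = 1.0    # point source P1
scene2 = np.zeros((n, n)); scene2[40, 44] = 2.0    # brighter point source P2

separate = fftconvolve(scene1, psf, mode='same') + fftconvolve(scene2, psf, mode='same')
together = fftconvolve(scene1 + scene2, psf, mode='same')
print(np.allclose(separate, together))             # True: superposition holds
```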
The system described here is valid for LSI systems only. This analysis technique is a reasonable description for continuous and well-sampled imaging systems. It is not a good description for an undersampled or a well-designed sampled imaging system. These sampled imaging systems do satisfy the requirements of a linear system, but they do not follow the shift-invariance property. The sampling nature of these systems is described later in this chapter, and the representation of sampled imaging systems is a modification to this approach.

For completeness, we take the spatial domain linear systems model and convert it to the spatial frequency domain. Spatial filtering can be accomplished in both domains. Given that x and y are spatial coordinates in units of milliradians, the spatial frequency domain has independent variables fx and fy, in cycles per milliradian. A spatial input or output function is related to its spectrum by the Fourier transform

\[ F(f_x, f_y) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} f(x, y)\, e^{-j 2\pi (f_x x + f_y y)}\, dx\, dy \qquad (2.12) \]

where the inverse Fourier transform converts an image spectrum to a spatial function

\[ f(x, y) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} F(f_x, f_y)\, e^{j 2\pi (f_x x + f_y y)}\, df_x\, df_y \qquad (2.13) \]

The properties and characteristics of the Fourier transform are provided in [2–4]. A function and its spectrum are collectively described as a Fourier transform pair. We will use the notation of the Fourier transform operator

\[ G(f_x, f_y) = \mathcal{F}[\, g(x, y)\, ] \quad \text{and} \quad g(x, y) = \mathcal{F}^{-1}[\, G(f_x, f_y)\, ] \qquad (2.14) \]

in order to simplify analysis descriptions. One of the very important properties of the Fourier transform is that the Fourier transform of a convolution results in a product. Therefore, the spatial convolution described in (2.11) results in a spectrum of

\[ O(f_x, f_y) = I(f_x, f_y)\, H(f_x, f_y) \qquad (2.15) \]

Here, the output spectrum is related to the input spectrum through multiplication by the Fourier transform of the system impulse response. Therefore, the Fourier transform of the system impulse response is called the transfer function of the system. Multiplication of the input scene spectrum by the transfer function of an imaging system provides the same filtering action as the convolution of the input scene with the imaging system PSF. In imaging systems, the magnitude of the Fourier transform of the system PSF is the modulation transfer function (MTF).
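The Fourier relations (2.12)-(2.15) can also be checked numerically. The sketch below builds a unit-area Gaussian PSF (the same Gaussian form used later for geometric blur), approximates its continuous Fourier transform with a discrete FFT, and compares the resulting MTF to the closed-form Gaussian spectrum. The grid size, sample spacing, and blur size are illustrative assumptions.

```python
import numpy as np

# The MTF is the magnitude of the Fourier transform of the PSF.  For a
# Gaussian PSF h(r) = (1/sigma^2) exp(-pi r^2 / sigma^2), the transform is
# exp(-pi sigma^2 rho^2), so the numerical MTF can be checked analytically.
n, d = 256, 0.02                 # samples and sample spacing in milliradians
sigma = 0.1                      # blur size in milliradians (illustrative)
y, x = (np.mgrid[0:n, 0:n] - n // 2) * d
h = (1.0 / sigma**2) * np.exp(-np.pi * (x**2 + y**2) / sigma**2)   # unit-area PSF

H = np.fft.fft2(np.fft.ifftshift(h)) * d**2       # continuous-FT approximation
mtf_numeric = np.abs(H)

fx = np.fft.fftfreq(n, d)                         # cycles per milliradian
fy = np.fft.fftfreq(n, d)
rho = np.sqrt(fx[None, :]**2 + fy[:, None]**2)
mtf_analytic = np.exp(-np.pi * (sigma * rho)**2)

print(np.allclose(mtf_numeric, mtf_analytic, atol=1e-6))   # True
```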
  32. 32. 2.4 Imaging System Point Spread Function and Modulation Transfer Function The system impulse response or point spread function (PSF) of an imaging system is comprised of component impulse responses as shown in Figure 2.8. Each of the com- ponents in the system contributes to the blurring of the scene. In fact, each of the components has an impulse response that can be applied in the same manner as the system impulse response. The blur attributed to a component may be comprised of a few different physical effects. For example, the optical blur is a combination of the diffraction and aberration effects of the optical system. The detector blur is a combi- nation of the detector shape and the finite time of detector integration as it traverses the scene. It can be shown that the PSF of the system is a combination of the individual impulse responses ( ) ( ) ( ) ( ) ( )h x y h x y h x y h x y h x ysystem atm optics elec, , ** , ** , ** , *det= ( )* ,h x ydisp (2.16) so that the total blur, or system PSF, is a combination of the component impulse responses. The Fourier transform of the system impulse response is called the transfer func- tion of the system. In fact, each of the component impulse responses given in (2.16) has a component transfer function that, when cascaded (multiplied), the resulting transfer function is the overall system transfer function; that is, ( ) ( ) ( ) ( ) ( ) O f f I f f H f f H f f H f f H f x y x y atm x y optics x y x y elec x , , , , ,det = ( ) ( ) ( ), , ,f H f f H f fy disp x y eye x y (2.17) Note that the system transfer function is the product of the component transfer functions. A large number of imaging spatial filters are accounted for in the design and/or analysis of imaging system performance. These filters include effects from optics, detectors, electronics, displays, and the human eye. We use (2.16) and (2.17) as our spatial filtering guidelines, where we know that the treatment can be applied in either the spatial or frequency domain. We present the most common of these filters 20 Imaging Systems Input scene Atmosphere Optics Detectors Electronics Display hatm hoptics( )x,y hdet helec hdisp i x,y( ) o x,y( ) Output scene ( )x,y ( )x,y ( )x,y( )x,y Figure 2.8 Imaging system components.
  33. 33. beginning with the optical effects. Also, the transfer function of a system, as given in (2.17), is frequently described without the eye transfer function. 2.4.1 Optical Filtering Two filters account for the optical effects in an imaging system: diffraction and aberrations. The diffraction filter accounts for the spreading of the light as it passes an obstruction or an aperture. The diffraction impulse response for an incoherent imaging system with a circular aperture of diameter D is ( )h x y D somb Dr diff , =             λ λ 2 2 (2.18) where λ is the average band wavelength and r x y= +2 2 . The somb (for som- brero) function is given by Gaskill [3] to be ( ) ( ) somb r J r r = 1 π π (2.19) where J1 is the first-order Bessel function of the first kind. The filtering associated with the optical aberrations is sometimes called the geometric blur. There are many ways to model this blur and there are numerous commercial programs for calculat- ing geometric blur at different locations on the image. However, a convenient method is to consider the geometric blur collectively as a Gaussian function ( )h x y Gaus r geom gb gb , =         1 2 σ σ (2.20) where σgb is an amplitude that best describes the blur associated with the aberrations. The Gaussian function, Gaus, is ( )Gaus r e r = −π 2 (2.21) Note that the scaling values in front of the somb and the Gaus functions are intended to provide a functional area (under the curve) of unity so that no gain is applied to the scene. Examples of the optical impulse responses are given in Figure 2.9 corresponding to a wavelength of 10 µm, an optical diameter of 10 cen- timeters, and a geometric blur of 0.1 milliradian. The overall impulse response of the optics is the combined blur of both the dif- fraction and aberration effects ( ) ( ) ( )h x y h x y h x yoptics diff geom, , ** ,= (2.22) The transfer functions corresponding to these impulse responses are obtained by taking the Fourier transform of the functions given in (2.18) and (2.20). The Fou- rier transform of the somb is given by Gaskill [3] so that the transfer function is 2.4 Imaging System Point Spread Function and Modulation Transfer Function 21
  34. 34. ( )H f f D D D diff x y, cos=       − −             −2 11 2 π ρλ ρλ ρλ  (2.23) where ρ = +f fx y 2 2 and is plotted in cycles per milliradian and D is the entrance aperture diameter. The Fourier transform of the Gaus function is simply the Gaus function [4], with care taken on the scaling property of the transform. The transfer function corresponding to the aberration effects is ( ) ( )H f f Gausgeom x y gb, = σ ρ (2.24) For the example described here, the transfer functions are shown in Figure 2.10. Note that the overall optical transfer function is the product of the two functions. 2.4.2 Detector Spatial Filters The detector spatial filter is also comprised of a number of different effects, includ- ing spatial integration, sample-and-hold, crosstalk, and responsivity, among others. The two most common effects are spatial integration and sample-and-hold; that is, ( ) ( ) ( )h x y h x y h x ysp shdet det_ det_, , ** ,= (2.25) 22 Imaging Systems −0.2 −0.1 0 0.1 0.2 −0.2 −0.1 0 0.1 0.2 0 0.2 0.4 0.6 0.8 1 −0.2 −0.1 0 0.1 0.2 −0.1 0 0.1 0.2 0 0.2 0.4 0.6 0.8 1 y milliradians hdiff ( )x,y hgeom ( )x,y mx illiradiansy milliradians mx illiradians Figure 2.9 Spatial representations of optical blur. 10 0.2 0.4 0.6 0.8 1 0 5 0 −5 0.2 0.4 0.6 0.8 1 0 cycles per milliradians H f f( , )geom −10 10 5 0 −5 −10 10 f 5 0 −5 −10 10 5 0 −5 −10 cycles per milliradians cycles per milliradians cycles per milliradians x fxfy fy x yH f fdiff x y( , ) Figure 2.10 Optical transfer functions of optical blur.
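Assuming (2.23) is the standard diffraction MTF of an incoherent system with a circular aperture and (2.24) is the Gaussian aberration MTF, the two can be evaluated and multiplied to form the overall optics transfer function. The sketch below reproduces the roughly 10-cycles-per-milliradian diffraction cutoff of the example (10-µm wavelength, 10-cm aperture, 0.1-mrad geometric blur); the frequency grid is an illustrative choice.

```python
import numpy as np

def mtf_diffraction(rho, wavelength_um, aperture_cm):
    """Circular-aperture diffraction MTF (the form of (2.23)); rho in cycles/mrad."""
    cutoff = (aperture_cm * 1e-2) / (wavelength_um * 1e-6) / 1000.0  # cycles/mrad
    u = np.clip(rho / cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(u) - u * np.sqrt(1.0 - u**2))

def mtf_geometric(rho, sigma_gb_mrad):
    """Gaussian aberration (geometric-blur) MTF, the form of (2.24)."""
    return np.exp(-np.pi * (sigma_gb_mrad * rho) ** 2)

rho = np.linspace(0.0, 12.0, 200)                        # cycles per milliradian
h_optics = mtf_diffraction(rho, 10.0, 10.0) * mtf_geometric(rho, 0.1)
```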
  35. 35. The other effects can be included, but they are usually considered negligible unless there is good reason to believe otherwise (i.e., the detector responsivity varies dramatically over the detector). The detector spatial impulse response is due to the spatial integration of the light over the detector. Since most detectors are rectangular in shape, the rectangle function is used as the spatial model of the detector ( )h x y DAS DAS rect x DAS y DAS DAS r sp x y x y x det_ , ,=         = 1 1 ect x DAS DAS rect y DASx y y               1 (2.26) where DASx and DASy are the horizontal and vertical detector angular subtenses in milliradians. The detector angular subtense is the detector width (or height) divided by the sensor focal length. The transfer function corresponding to the detector spa- tial integration is determined by taking the Fourier transform of (2.26) ( ) ( ) ( )H f f DAS f DAS f DAS f DAS fsp x y x x y y x x ydet_ , ,= =sinc sinc sinc( )y (2.27) where the sinc function is defined as [2] ( ) ( ) sinc x x x = sin π π (2.28) The impulse response and the transfer function for a detector with a 0.1 by 0.1 milliradian detector angular subtense is shown in Figure 2.11. The detector sample-and-hold function is an integration of the light as the detector scans across the image. This sample-and-hold function is not present in staring arrays, but it is present in most scanning systems where the output of the integrated signal is sampled. The sampling direction is assumed to be the horizontal, or x, direction. Usually, the distance, in milliradians, between samples is smaller than the detector angular subtense by a factor called samples per IFOV or samples per DAS, spdas. The sample-and-hold function can be considered a rectangular 2.4 Imaging System Point Spread Function and Modulation Transfer Function 23 40 0 0.5 1 h x y( , )det_sp −0.5 20 0 −20 0.2 0.4 0.6 0.8 1 0 y milliradians H f( , )fdet_sp −40 1.0 0 −0.1 1.0 0 −0.1 x milliradians cycles per milliradians cycles per milliradians fy fx 40 20 0 −20 −40 x y Figure 2.11 Detector spatial impulse response and transfer function.
  36. 36. function in x where the size of the rectangle corresponds to the distance between samples. In the spatial domain y direction, the function is an impulse function. Therefore, the impulse response of the sample-and-hold function is ( ) ( )h x y spdas DAS rect x spdas DAS ysh x x det_ , =      δ (2.29) The Fourier transform of the impulse response gives the transfer function of the sample- and-hold operation ( )H f f DAS f spdas sh x y x x det_ , =      sinc (2.30) Note that the Fourier transform of the impulse function in the y direction is 1. The impulse response and the transfer function for sample-and-hold associated with the detector given in Figure 2.11 with a two-sample-per-DAS sample-and-hold are shown in Figure 2.12. 2.4.3 Electronics Filtering The electronics filtering function is one of the more difficult to characterize and one of the more loosely applied functions. First, it involves the conversion of temporal frequencies to spatial frequencies. Usually, this involves some scan rate or readout rate. Second, most impulse response functions in space are even functions. With electronic filtering, the impulse function can be a one-sided function. Finally, most engineers apply a two-sided impulse response that violates the rules of causality. This gross approximation does not usually have a heavy impact on sensor perfor- mance estimates since the electronics are not typically the limiting component of the sensor. Holst [5] and Vollmerhausen and Driggers [6] provide electronic filter (digi- tal and analog) approximations that can be used in transfer function estimates. Digital filters also provide a spatial blur and a corresponding transfer function. Finite impulse response (FIR) filters are common in electro-optical and infrared sys- 24 Imaging Systems 40 0 0.5 1 h x y( , )det_sh −0.5 20 0 −20 0.2 0.4 0.6 0.8 1 0 y milliradians H f( , )fdet_sh −40 1.0 0 −0.1 1.0 0 −0.1 x milliradians cycles per milliradians cycles per milliradians fy fx 40 20 0 −20 −40 x y −0.05 0.05 −0.05 0.05 Figure 2.12 Detector sample-and-hold impulse response and transfer function.
  37. 37. tems with such functions as interpolation, boost, and edge enhancements. These are filters that are convolved with a digital image and so they have a discrete “kernel” that is used to process the spatial image. The transfer function associated with these FIR filters is a summation of sines and cosines, where the filter is not band-limited. The combination of these filters with a display reconstruction provides for an over- all output filter (and corresponding transfer function). Chapter 4 discusses finite impulse response filters and the transfer function associated with these filters. 2.4.4 Display Filtering The finite size and shape of the display spot also corresponds to a spatial filtering of the image. Usually, the spot, or element, of a display is either Gaussian in shape like a cathode ray tube (CRT), or it is rectangular in shape, like a flat-panel display. Light emitting diode (LED) displays are also rectangular in shape. The PSF of the display is simply the size and shape of the display spot. The only difference is that the finite size and shape of the display spot must be converted from a physical dimension to the sensor angular space. For the Gaussian spot, the spot size dimen- sion in centimeters must be converted to an equivalent angular space in the sensor’s field of view (FOV) σ σdisp angle disp cm v disp v FOV L _ _ _ = (2.31) where Ldisp_v is the length in centimeters of the display vertical dimension and FOVv is FOV of the sensor in milliradians. For the rectangular display element, the height and width of the display element must also be converted to the sensor’s angular space. The vertical dimension of the rectangular shape is obtained using (2.31) and the horizontal dimension is similar with the horizontal display length and sensor FOV. Once these angular dimensions are obtained, the PSF of the display spot is simply the size and shape of the display element ( )h x y Gaus r disp disp angle disp angle , _ _ =         1 2 σ σ for a Gaussian spot (2.32) or ( )h x y W H rect x W disp disp angle h disp angle v disp ang , _ _ _ _ _ = 1 le h disp angle v y H_ _ _ ,         for flat panel (2.33) where the angular display element shapes are given in milliradians. These spatial shapes are shown in Figures 2.9 and 2.11. The transfer functions associated with these display spots are determined by taking the Fourier transform of the earlier PSF equations; that is, 2.4 Imaging System Point Spread Function and Modulation Transfer Function 25
  38. 38. ( ) ( )H f f Gausdisp x y disp angle, _= σ ρ Gaussian display (2.34) or ( ) ( )H f f W f H fdisp x y disp angle h x disp angle v y, ,_ _ _ _= sinc Flat-panel display (2.35) Again, these transfer functions are shown in Figures 2.9 and 2.11. 2.4.5 Human Eye Note that the human eye is not part of the system performance MTF as shown in Figure 2.8, and the eye MTF should not be included in the PSF of the system. In Chapter 3, the eye CTF is used to include the eye sensitivity and resolution limita- tions in performance calculations. It is, however, useful to understand the PSF and MTF of the eye such that the eye blur can be compared to sensor blur. A system with much higher resolution than the eye is a waste of money, and a system with much lower resolution than the eye is a poorly performing system. The human eye certainly has a PSF that is a combination of three physical com- ponents: optics, retina, and tremor [7, 8]. In terms of these components, the PSF is ( ) ( ) ( ) ( )h x y h x y h x y h x yeye optics retina tremor, , ** , ** ,_= (2.36) Therefore, the transfer function of the eye is ( ) ( ) ( ) ( )H f f H f f H f f H f feye x y eye optics x y retina x y tremor x y, , , ,_= (2.37) The transfer function associated with the eye optics is a function of display light level. This is because the pupil diameter changes with light level. The number of foot-Lamberts (fL) at the eye from the display is Ld/0.929, where Ld is the display luminance in millilamberts. The pupil diameter is then ( ){ } [ ]D fLpupil = − + −9011 1323 2108210. . exp log . mm (2.38) This equation is valid, if one eye is used as in some targeting applications. If both eyes view the display, the pupil diameter is reduced by 0.5 millimeter. Two parame- ters, io and fo, are required for the eye optics transfer function. The first parameter is ( )io Dpupil= +0 7155 0277 2 . . (2.39) and the second is ( ){ }fo D Dpupil pupil= −exp . . * log3663 00216 2 (2.40) Now, the eye optics transfer function can be written as ( ) ( )[ ]{ }H M foeye optics io _ exp .ρ ρ= − 4369 (2.41) 26 Imaging Systems
  39. 39. where ρ is the radial spatial frequency, f fx y 2 2 + , in cycles per milliradian. M is the system magnification (angular subtense of the display to the eye divided by the sen- sor FOV). The retina transfer function is ( ) ( ){ }H Mretina ρ ρ= −exp . . 0375 1 21 (2.42) Finally, the transfer function of the eye due to tremor is ( ) ( ){ }H Mtremor ρ ρ= −exp .04441 2 (2.43) which completes the eye model. For an example, let the magnification of the system equal 1. With a pupil diame- ter of 3.6 mm corresponding to a display brightness of 10 fL at the eye (with one viewing eye), the combined MTF of the eye is shown in Figure 2.13. The io and fo parameters were 0.742 and 27.2, respectively. All of the PSFs and transfer functions given in this section are used in the model- ing of infrared and electro-optical imaging systems. We covered only the more com- mon system components. There may be many more that must be considered when they are part of an imaging system. 2.4.6 Overall Image Transfer To quantify the overall system resolution, all of the spatial blurs are convolved and all of the transfer functions are multiplied. The system PSF is the combination of all the blurs, and the system MTF is the product of all the transfer functions. In the roll-up, the eye is typically not included to describe the resolution of the system. Also, the system can be described as “limited” by some aspect of the system. For example, a diffraction-limited system is one in which the diffraction cutoff fre- quency is smaller than all of the other components in the system (and spatial blur is larger). A detector-limited system would be one in which the detector blur is larger and the detector transfer cutoff frequency is smaller than the other system components. 2.4 Imaging System Point Spread Function and Modulation Transfer Function 27 fx fy 0 0.2 0.4 0.6 0.8 1 eyeH 2 21 10 0−1 −1 −2 −2 cyc/mradcyc/mrad Figure 2.13 Eye transfer function.
  40. 40. The MTF for a typical MWIR system is shown in Figure 2.14. The pre-MTF shown is the rollup transfer function for the optics diffraction blur, aberrations, and the detector shape. The post-MTF is the rollup transfer for the electronics (many times negligible) and the display. The system MTF (system transfer) is the product of the pre- and post-MTFs as shown. In Figure 2.14, the horizontal MTF is shown. In most sensor performance mod- els, the horizontal and vertical blurs, and corresponding MTFs, are considered sepa- rable. That is, ( ) ( ) ( )h x y h x h y, = (2.44) and the corresponding Fourier transform is ( ) ( ) ( )H f f H f H fx y x y, = (2.45) This approach usually provides for small errors (a few percent) in performance calculations even when some of the components in the system are circularly symmetric. 2.5 Sampled Imaging Systems In the previous sections, we described the process of imaging for a continuous or well-sampled imager. In this case, the input scene is convolved with the imager PSF (i.e., the impulse response of the system). With sampled imaging systems, the process is different. As an image traverses through a sampled imaging system, the image undergoes a three-step process. Figure 2.15 shows this process as a presample blur, a sampling action, and a postsample blur (reconstruction). The image is blurred by the optics, the detector angular subtense, the spatial integration scan, if needed, and any other effects appropriate to presampling. This presample blur, h(x,y), is applied to the image in the manner of an impulse response, so the response is convolved with the input scene ( ) ( ) ( ) ( ) ( )o x y i x y h d d i x y h x y1 , , , , ** ,= − − = −∞ ∞ −∞ ∞ ∫∫ α β α β α β (2.46) 28 Imaging Systems Horizontal system MTFs 0 0.2 0.4 0.6 0.8 1 0 5 10 15 Cycles/mrad Pre-MTF System transfer Post-MTF Figure 2.14 System transfer function.
  41. 41. where o1(x, y) is the presampled blur image or the output of the presample blur pro- cess. The convolution is denoted by the *, so ** denotes the two-dimensional con- volution. The sampling process can be modeled with the multiplication of the presample blur image with the sampling function. For convention, we use Gaskill’s comb function [9] ( ) ( )comb x a y b a b x ma y nb nm ,       = − − =−∞ ∞ =−∞ ∞ ∑∑ δ δ (2.47) which is a two-dimensional separable function. Now the output of the sampling process can be written as the product of the presample blurred image with the sam- pling function (note that a and b are the distances in milliradians or millimeters between samples) ( ) ( ) ( ) ( )[ ]o x y o x y ab comb x a y b i x y h x y ab comb x a 2 1 1 1 , , , , ** , ,=       = y b       (2.48) At this point, all that is present is a set of discrete values that represent the presample blurred image at discrete locations. This output can be thought of as a weighted “bed of nails” that is meaningless to look at unless the “image” is recon- structed. The display and the eye, if applied properly, reconstruct the image to a function that is interpretable. This reconstruction is modeled as the convolution of the display and eye blur (and any other spatial postsample blur) and the output of the sampling process; that is, ( ) ( ) ( ) ( ) ( )[ ]{o x y o x y d x y i x y h x y ab comb x a y b , , ** , , ** , , * = = ×          2 1 ( )* ,d x y (2.49) While (2.49) appears to be a simple spatial process, there is a great deal that is inherent in the calculation. We have simplified the equation with the aggregate presample blur effects and the aggregate postsample reconstruction effects. The frequency analysis of the three-step process shown in Figure 2.15 can be presented simply by taking the Fourier transform of each process step. Consider the first step in the process, the presample blur. The transform of the convolution in space is equivalent to a product in spatial frequency 2.5 Sampled Imaging Systems 29 Presample blur Image sample Reconstruction x x x i x,y( ) h x,y( ) s x,y( ) d x,y( ) o x,y( ) o1( )x,y o2( )x,y Figure 2.15 Three-step imaging process.
  42. 42. ( ) ( ) ( )O f f I f f H f fx y x y pre x y1 , , ,= (2.50) where fx and fy are the horizontal and vertical spatial frequencies. If x and y are in milliradians, then the spatial frequencies are in cycles per milliradian. Hpre(fx, fy) is the Fourier transform of the presample blur spot. Note that the output spectrum can be normalized to the input spectrum so that Hpre(fx, fy) is a transfer function that fol- lows the linear systems principles. Consider the presample blur spectrum (i.e., the presample blur transfer function given in Figure 2.16). Note that this is the image spectrum on the output of the blur that would occur if an impulse were input to the system. Next, we address the sampling process. The Fourier transform of (2.48) gives ( ) ( ) ( )[ ] ( )O f f I f f H f f comb af bfx y x y pre x y x y2 , , , ** ,= (2.51) where ( ) ( ) ( )comb af bf f kf f lf f a f bx y x xs y ys xs ys l , ,= − − = = =−∞ δ δ 1 1and ∞ =−∞ ∞ ∑∑k (2.52) If an impulse were input to the system, the response would be ( ) ( ) ( )O f f H f f comb af bfx y pre x y x y2 , , ** ,= (2.53) which is a replication of the presample blur at sample spacings of 1/a and 1/b. Con- sider the case shown in Figure 2.17. The sampled spectrum shown corresponds to a Gaussian blur of 0.5-milliradian radius (to the 0.043 cutoff) and a sample spacing of 0.5 milliradian. Note that the reproduction in frequency of the presample blur is at 2 cycles per milliradian. The so-called Nyquist rate of the sensor (the sensor half-sample rate) is at 1 cycle per milliradian. Any frequency from the presample blur baseband that is greater than the half-sample rate is also present as a mirror signal, or classical aliasing, under the half-sample rate by the first-order reproduction. The amount of classical aliasing is easily computed as the area of this mirrored signal. However, this is not the aliased signal seen on the output of the display as the display transfer 30 Imaging Systems 0 0.2 0.4 0.6 0.8 1 1.2 −3 −2 −1 0 1 2 3 Cycles/mrad H (fx,fy)pre Figure 2.16 Presample blur transfer function.
  43. 43. has not been applied to the signal. The higher-order replications of the baseband are real-frequency components. The curves to the left and the right of the central curve are the first- and second-order replications at the positive and negative positions. The current state of the sample signal is tiny infinitesimal points weighted with the presample blurred image values. These points have spectra that extend in the fre- quency domain to very high frequencies. The higher-order replications are typically filtered with a reconstruction function involving the display and the eye. There is no practical way to implement the perfect reconstruction filter; however, the perfect rectangular filter would eliminate these higher-order replications and result only in the classical aliased signal. The reconstruction filter usually degrades the baseband and allows some of the signal of the higher-order terms through to the observer. The final step in the process corresponds to the reconstruction of the sampled information. This is accomplished simply by blurring the infinitesimal points so that the function looks nearly like that of the continuous input imagery. The blur is convolved in space, so it is multiplied in frequency ( ) ( ) ( )[ ] ( )O f f H f f comb af bf D f fx y pre x y x y x y, , ** , ,= (2.54) where this output corresponds to a point source input. Note that the postsampling transfer function is multiplied by the sampled spectrum to give the output of the whole system. Consider the sampled spectrum and the dashed display transfer func- tion shown in Figure 2.18. The postsampling transfer function is shown in the graph as the display, passes part of the first-order replications. However, this display degrades the baseband signal relatively little. The tradeoff here is baseband resolu- tion versus spurious response content. Given that all of the signals traverse through the postsample blur transfer func- tion, the output is shown in Figure 2.19. There is classical aliasing on the output of the system, but a large signal corresponds to higher-order replication signals that passed through the display. Aliasing and the higher-order signals are collectively the spurious response of the sensor. These signals were not present on the input imag- ery, but there are artifacts on the output imagery. Without sampling, these spurious 2.5 Sampled Imaging Systems 31 Cycles/mrad 0 0.2 0.4 0.6 0.8 1 −5 −3 −1 1 3 5 2O (fx ,fy) Figure 2.17 Output of sampling.
  44. 44. signals would not be present. The higher-order replicated signals are combined at each spatial frequency in terms of a vector sum, so it is convenient to represent the magnitude of the spurious signals as the root-sum-squared (RSS) of the spurious orders that make it through the reconstruction process. Three aggregate quantities have proven useful in describing the spurious response of a sampled imaging system: total integrated spurious response as defined by (2.55), in-band spurious response as defined by (2.56), and out-of-band spurious response as defined by (2.57) [10]; that is, ( ) ( ) SR Spurious df df x x = −∞ ∞ −∞ ∞ ∫ ∫ Response BasebandSignal (2.55) ( ) SR Spurious df in band x f f s s − − = ∫ Response BasebandSigna 2 2 ( )l dfx −∞ ∞ ∫ (2.56) 32 Imaging Systems Cycles/mrad 0 0.2 0.4 0.6 0.8 1 −5 −3 −1 1 3 5 Sampled signal and display transfer Figure 2.18 Sampled signal and display transfer. Cycles/mrad System output spectrum 0 0.2 0.4 0.6 0.8 1 −5 −3 −1 1 3 5 Figure 2.19 System output signal.
  45. 45. SR SR SRout of band in band− − −= − (2.57) where fs is the sampling frequency. Examples of total and in-band spurious response are illustrated in Figures 2.20 and 2.21, respectively. The spurious responses of the higher-order replications could be constructive or destructive in nature, depending on the phase and frequency content of the spurious signals. The combination in magnitude was identical to that of a vector sum. This magnitude, on average, was the quadrature sum of the signals. This integrated RSS spurious response value was normalized by the integral of the baseband area. This ratio was the spurious response ratio. There is good experimental and theoretical evidence to generalize the effects of in-band spurious response and out-of-band spurious response. An in-band spurious response is the same as classical aliasing in communication systems. These are sig- nals that are added to real signals and can corrupt an image by making the image look jagged, misplaced, (spatial registering), or even the wrong size (wider or thin- ner than the original object). The only way to decrease the amount of in-band spuri- ous response is to increase the sample rate or increase the amount of presample blur. Increasing presample blur, or reducing the MTF, only reduces the performance of the imaging system. Blur causes more severe degradation than aliased signals. The effects of out-of-band spurious response are manifested by display artifacts. Common out-of-band spurious response is raster where the display spot is small compared to the line spacing or pixelization where the display looks blocky. Pixelization occurs on flat-panel displays where the display elements are large or in cases where pixel replication is used as a reconstruction technique. 2.5 Sampled Imaging Systems 33 xf )( xfGδ Transfer response Spurious response Figure 2.20 Example of total spurious response. xf )( xfGδ Transfer response Spurious response Figure 2.21 Example of in-band spurious response.
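Definitions (2.55)-(2.57) can be evaluated directly once the presample and reconstruction transfer functions are known. The sketch below forms the RSS of the replicated orders that leak through a reconstruction filter and integrates it against the baseband. The Gaussian transfer functions, sample rate, and integration limits are illustrative assumptions rather than a modeled sensor.

```python
import numpy as np

def spurious_response_ratios(h_pre, h_post, f_s, f_max=50.0, n_orders=10, n_pts=20001):
    """Total, in-band, and out-of-band spurious response per (2.55)-(2.57).

    h_pre, h_post : callables giving the presample and reconstruction
                    (display/eye) transfer functions versus frequency
    f_s           : sample rate, same frequency units as the transfer functions
    """
    f = np.linspace(-f_max, f_max, n_pts)
    df = f[1] - f[0]
    baseband = np.abs(h_pre(f) * h_post(f))
    # RSS of the replicated orders that leak through the reconstruction filter
    orders = [np.abs(h_pre(f - n * f_s) * h_post(f))
              for n in range(-n_orders, n_orders + 1) if n != 0]
    spurious = np.sqrt(np.sum(np.square(orders), axis=0))
    total = np.sum(spurious) * df / (np.sum(baseband) * df)
    in_band = np.sum(spurious[np.abs(f) <= f_s / 2]) * df / (np.sum(baseband) * df)
    return total, in_band, total - in_band

# Example: Gaussian presample blur sampled at 2 samples/mrad with a Gaussian
# display/eye reconstruction (all parameters illustrative)
h_pre = lambda f: np.exp(-(f / 1.2) ** 2)
h_post = lambda f: np.exp(-(f / 2.5) ** 2)
print(spurious_response_ratios(h_pre, h_post, f_s=2.0))
```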
  46. 46. The sampling artifacts associated with out-of-band spurious response can be removed by the display or image reconstruction process. Multiple display pixels per sensor sample can be used, and the sensor samples are interpolated to provide the intensity values for the added pixels. It is possible to remove essentially all of the out-of-band spurious response without degrading the transfer response of the sen- sor. That is, image interpolation can remove much of the bad without affecting the good; there is no performance down side to interpolation [11]. 2.6 Signal-to-Noise Ratio In the past few sections, we have discussed primarily resolution. Just as important as resolution is sensitivity. The foundation for sensitivity is signal-to-noise ratio (SNR). SNR is of importance in both electro-optical and infrared systems, but in this section we focus on the calculation of SNR in the infrared. A very similar analysis is applied to electro-optical systems in which the signal is reflected light instead of emitted light. The noise concepts are the same. The differences between electro-optical and infrared systems are briefly discussed in the next section. Using Figure 2.22 as a guide, we first consider calculating the signal leaving the source object. In this case, the source object is resolved; that is, it is larger than a sin- gle detector footprint on the object. We start with the emittance in W/cm2 -µm leav- ing the object. The emittance is the flux being emitted from the object, while an electro-optical system would use the quantity of exitance in the same units. For most sources in the infrared, the object is Lambertian so that the radiance of the object is related to the emittance by L M source source = − − π µW cm m sr2 (2.58) where Msource is the emittance of the source. 34 Imaging Systems IFOVΩ D Sensor Range, R ddet foptics Collecting optic Detector Area of source seen by the detector Extended source (resolved) sensor Figure 2.22 System radiometry.
  47. 47. To calculate the intensity of the source associated with the footprint of the detector (i.e., the only part of the source from which the detector can receive light), we multiply (2.58) by the effective source area I M A M A R f source source source source = = − − π π µdet 2 2 W m sr (2.59) where the area of the source seen by the detector, Asource, is related to the area of the detector, Adet, the focal length, f, and the range to the target, R. The intensity can be used to determine the amount of power that enters the sensor aperture; that is, ( ) P I M A R f D R M F source sensor source source = = =Ω π π det 2 2 2 2 4 4 # 2 Adet W m−µ (2.60) so that the power entering the aperture is related to the area of the detector and the sensor f-number (F/#). It is written as various forms, such as f/4, f4, F/4, and the like. This power entering the aperture is the power on the detector (since we only consid- ered the source seen by the detector), except that the power is reduced by the optical transmission of the optical system, optics ( ) P M A F source optics = det τ µ 4 2 # W m (2.61) This power on the detector must be integrated with wavelength to provide the overall power. The noise on the detector is described by the noise equivalent power (NEP). The detector NEP is related to the area of the detector, the detector bandwidth, and the normalized detectivity; that is, ( ) NEP A f D = det ∆ * λ W (2.62) where D*(λ) is the detectivity of the detector in cm(Hz) 0.5 /W and ∆f is the temporal bandwidth associated with the detector in hertz. The SNR is determined by taking the ratio of the signal in (2.61) to the noise in (2.62); that is, ( ) ( ) ( ) ( ) SNR M A D F f d source optics = ∫ λ τ λ λ λ λ det * #4 2 ∆ (unitless) (2.63) This SNR determines how “noisy” the image appears. An SNR of 10 appears very noisy, an SNR of 100 looks acceptable, and an SNR of 1,000 appears pristine. In fact, an image with an SNR of 1,000 appears with no perceivable noise since most displays have a dynamic range of 7 or 8 bits (less than 256 gray levels). In this case, the noise is smaller than the minimum dynamic range of the display. 2.6 Signal-to-Noise Ratio 35
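The SNR chain above can be collapsed into a band-averaged sketch: signal power on the detector from (2.61) divided by the NEP of (2.62). Setting the SNR to 1 and converting the emittance difference to a blackbody temperature difference gives the NETD developed in the next paragraphs; the function below assumes the standard rearrangement. The example call uses the LWIR parameter set quoted later in the text, with ∂L/∂T taken as an assumed typical 8-12-µm value of about 6.7 × 10⁻⁵ W/(cm²-sr-K).

```python
import numpy as np

def snr_band_averaged(delta_emittance, detector_area_cm2, trans_optics,
                      f_number, detectivity, bandwidth_hz):
    """Band-averaged sketch of (2.61)-(2.63): signal power over NEP.

    delta_emittance : target-minus-background emittance difference, W/cm^2
    detectivity     : band-averaged D*, cm-sqrt(Hz)/W
    """
    signal_power = delta_emittance * detector_area_cm2 * trans_optics / (4.0 * f_number**2)
    nep = np.sqrt(detector_area_cm2 * bandwidth_hz) / detectivity
    return signal_power / nep

def netd_kelvin(f_number, trans_optics, detectivity,
                detector_area_cm2, bandwidth_hz, dL_dT):
    """Blackbody temperature difference giving SNR = 1 (the NETD rearrangement)."""
    return (4.0 * f_number**2 * np.sqrt(bandwidth_hz) /
            (np.pi * trans_optics * np.sqrt(detector_area_cm2) *
             detectivity * dL_dT))

# LWIR example: F/1.75, tau = 0.7, D* = 5e10 cm-sqrt(Hz)/W, Ad = 49e-6 cm^2,
# 55.9-kHz bandwidth, dL/dT ~ 6.7e-5 W/(cm^2-sr-K) for 8-12 um at 300K
print(netd_kelvin(1.75, 0.7, 5e10, 49e-6, 55.9e3, 6.7e-5))   # ~0.056 K, about 60 mK
```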
Equation (2.63) can be rearranged for infrared systems so that, when the SNR is set to 1, the blackbody temperature difference that creates the corresponding emittance difference can be determined. This temperature difference that creates an SNR of 1 is called the noise equivalent temperature difference (NETD) and is sometimes called the random spatio-temporal noise. The derivation is provided in [12]. An NETD of 50 millikelvins (mK) means that a differential scene temperature (in equivalent blackbody temperature) of 50 mK will create an SNR of 1. Band-averaged detectivity is usually specified by detector manufacturers, so a useful form of NETD is

NETD = \frac{4 (F/\#)^2 \sqrt{\Delta f}}{\pi\,\tau_{optics}\,\sqrt{A_{det}}\,D^*\,(\Delta L/\Delta T)}   (2.64)

As an example, an LWIR system with an F/# of 1.75, a 35-cm focal length, an optical transmission of 0.7, a detectivity of 5 × 10¹⁰ cm(Hz)^0.5/W, a detector area of 49 × 10⁻⁶ cm², and a bandwidth of 55.9 kHz yields an NETD of 0.06K, or 60 mK. The LWIR band is from 8 to 12 µm, and ΔL/ΔT is 6.7 × 10⁻⁵ W/(cm²-sr-K) for this band.
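The example above can be checked by evaluating (2.64) directly; the short sketch below does so with the quoted parameter values.

```python
import math

def netd(f_number, tau_optics, a_det_cm2, d_star, delta_f_hz, dl_dt):
    """Evaluate (2.64); returns NETD in kelvins for a band-averaged D*.

    dl_dt is the in-band radiance contrast dL/dT in W/(cm^2-sr-K).
    """
    numerator = 4.0 * f_number**2 * math.sqrt(delta_f_hz)
    denominator = math.pi * tau_optics * math.sqrt(a_det_cm2) * d_star * dl_dt
    return numerator / denominator

# Parameters of the LWIR example in the text (8- to 12-um band).
print(netd(f_number=1.75, tau_optics=0.7, a_det_cm2=49e-6,
           d_star=5e10, delta_f_hz=55.9e3, dl_dt=6.7e-5))
# -> roughly 0.056 K, i.e., the 60 mK quoted above
```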
With the development of advanced scanning arrays, including line scanners and focal plane arrays (FPAs), a single-valued temporal noise could no longer characterize imaging system noise in an adequate manner. The nonuniformities of the detector arrays contributed significantly to the overall noise of the system, and these nonuniformity noise values are not represented in the classical NETD. In 1989 and 1990, the U.S. Army Night Vision and Electronic Sensors Directorate (NVESD) developed the 3-D noise technique, along with a laboratory procedure, to address these problems. The concept of directional averaging allows the characterization of complex noise patterns. Consider the 3-D noise coordinate system shown in Figure 2.23.

Figure 2.23 3-D noise coordinate system. (Axes: V, rows; H, columns; T, time.)

The 3-D noise method applies directional averages at the system output port to produce eight noise components. These components are described in Table 2.1. Note that the subscripts of the noise components indicate the dimensions in which the noise components fluctuate. The symbol σtvh is the parameter that resembles NETD. References by D'Agostino [13] and Webb et al. [14, 15] provide the measurement and calculation of 3-D noise.

Table 2.1 3-D Noise Components

  Symbol   Noise Component                Potential Source
  σtvh     Random spatio-temporal noise   Detector temporal noise
  σtv      Temporal row bounce            Line processing, 1/f, readout
  σth      Temporal column bounce         Scan effects
  σvh      Random spatial noise           Pixel processing, detector nonuniformity
  σv       Fixed row noise                Detector nonuniformity, 1/f
  σh       Fixed column noise             Detector nonuniformity, scan effects
  σt       Frame-to-frame noise           Frame processing
  S        Mean of all components

In Table 2.1, σvh is the most common fixed-pattern noise seen in advanced FPAs. In many cases, this random spatial noise is the only significant noise inherent in the sensor other than the random spatio-temporal noise. Therefore, the random spatial noise is given as the imager fixed-pattern noise.

The 3-D noise is not the only method for characterizing fixed-pattern noise in FPA imagers. Inhomogeneity equivalent temperature difference (IETD) is defined as the blackbody temperature difference that produces a signal equal to the signal caused by the differing responses of the detectors. It is important in staring arrays because it can be the main source of noise. In terms of 3-D noise, IETD is the collective noise attributed to σvh, σv, and σh; that is,

IETD = \sqrt{\sigma_{vh}^2 + \sigma_v^2 + \sigma_h^2}   (2.65)

Again, in many advanced FPAs, the random spatial noise is the only significant factor, so IETD is approximately the random spatial noise. Note that IETD is small when nonuniformity correction (NUC) has been applied to the sensor under test.

Finally, correctability describes the residual spatial noise after the calibration and NUC of the sensor, normalized to the random spatio-temporal noise, σtvh. A value of one means that the spatial noise after correction is equal to the random spatio-temporal noise of the system; that is,

C = \frac{\sqrt{\sigma_{vh}^2 + \sigma_v^2 + \sigma_h^2}}{\sigma_{tvh}}   (2.66)

The most desirable situation occurs when the sensor is limited by the random spatio-temporal noise (i.e., the correctability is less than one).
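A compact sketch of the directional-averaging idea is given below. It decomposes a measured data cube, ordered as (T, V, H), into the Table 2.1 components using a simplified additive-component estimate (the full NVESD laboratory procedure is given in [13-15]), and then forms IETD and correctability from (2.65) and (2.66). The synthetic noise cube and its amplitudes are purely illustrative.

```python
import numpy as np

def three_d_noise(cube):
    """Simplified 3-D noise decomposition of a data cube shaped (T, V, H).

    Directional means isolate each set of fluctuations; the residual after
    removing all lower-order terms estimates the random spatio-temporal
    noise.  This is a sketch, not the full measurement procedure of [13-15].
    """
    s = cube.mean()
    m_t = cube.mean(axis=(1, 2))          # S + n_t(t)
    m_v = cube.mean(axis=(0, 2))          # S + n_v(v)
    m_h = cube.mean(axis=(0, 1))          # S + n_h(h)
    m_tv = cube.mean(axis=2)              # S + n_t + n_v + n_tv
    m_th = cube.mean(axis=1)
    m_vh = cube.mean(axis=0)

    n_tv = m_tv - m_t[:, None] - m_v[None, :] + s
    n_th = m_th - m_t[:, None] - m_h[None, :] + s
    n_vh = m_vh - m_v[:, None] - m_h[None, :] + s
    n_tvh = (cube - m_tv[:, :, None] - m_th[:, None, :] - m_vh[None, :, :]
             + m_t[:, None, None] + m_v[None, :, None] + m_h[None, None, :] - s)

    return {"sigma_t": np.std(m_t), "sigma_v": np.std(m_v), "sigma_h": np.std(m_h),
            "sigma_tv": np.std(n_tv), "sigma_th": np.std(n_th),
            "sigma_vh": np.std(n_vh), "sigma_tvh": np.std(n_tvh), "mean": s}

# Synthetic cube in kelvins: 64 frames of 120 x 160 pixels with temporal noise
# of about 0.05 K and fixed-pattern (spatial) noise of about 0.02 K.
rng = np.random.default_rng(0)
cube = 0.05 * rng.standard_normal((64, 120, 160))
cube += 0.02 * rng.standard_normal((120, 160))

n = three_d_noise(cube)
ietd = np.sqrt(n["sigma_vh"]**2 + n["sigma_v"]**2 + n["sigma_h"]**2)   # (2.65)
correctability = ietd / n["sigma_tvh"]                                 # (2.66)
print(ietd, correctability)   # roughly 0.02 K and 0.4 for this synthetic cube
```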
In modeling the effect of noise on a sensor whose noise comprises only random spatio-temporal and random spatial noise, the contributions of the noise parameters are

\Omega(f_x) = \sigma_{tvh}^2\,E_t\,E_v(f_x)\,E_h(f_x) + \sigma_{vh}^2\,E_v(f_x)\,E_h(f_x)   (2.67)

where E_t, E_v(f_x), and E_h(f_x) are the temporal, vertical, and horizontal integrations associated with the eye/brain, and f_x is the spatial frequency in cycles per milliradian. For uncorrelated noise, the temporal integration can be estimated by

E_t = \frac{1}{F_R\,\tau_e}   (2.68)

where F_R is the frame rate of the sensor and τ_e is the integration time constant of the eye. Note that the denominator of (2.68) gives the number of frames that the eye integrates in one time constant. Therefore, the noise contribution of the random spatio-temporal component in (2.67) is reduced by the number of frames that are integrated by the eye, while the random spatial noise contribution remains constant with time.

2.7 Electro-Optical and Infrared Imaging Systems

Numerous engineers and scientists consider electro-optical imaging systems to be those that view reflected light and infrared imaging systems to be those that view emitted light. In Figure 2.3, the image on the left was formed when the imager viewed light that was completely reflected by the target and background; for the image on the right, the light imaged was emitted by the target and the background.

Electro-optical systems cover the 0.4- to 3.0-µm bands. The infrared band certainly covers the 8- to 12-µm band (LWIR), and most of the time it covers the 3- to 5-µm band (MWIR). At night, the MWIR band provides a mostly emissive target and background signature, while in the daytime the signature is the combination of emitted light and solar light that is reflected by the target and the background. This MWIR daytime case, as well as the case for electro-optical systems with all reflected light, is extremely difficult to characterize.

In both measurements and performance modeling, the reflected-light case associated with electro-optical systems is the more difficult problem because it is a two-path problem. The first path is the path from the illuminator (i.e., the sun in most cases) to the target or the background. The light from the first path is multiplied by the target or background reflectivities, and the second path is the path from the target, or background, to the imager. Along the second path, the reflected flux passes through the atmosphere into the sensor aperture and onto the focal plane, is converted to electrons by the detector, and is then processed and displayed for human consumption. The infrared case is much simpler and is a single-path problem: the light is emitted from the target, traverses the single atmospheric path, enters the optics, is converted to electrons by the detector, and is processed and displayed for human consumption. In Chapter 3, the overall system performance is considered to include sensitivity and resolution.
2.8 Summary

This chapter introduced the basic imaging system and its components, along with the concepts of resolution and sensitivity. The contribution of each component to overall system resolution and sensitivity was presented. These discussions, together with the sampling theory presented, aid in the formation of the overall system performance metrics that are developed in Chapter 3.

References

[1] Driggers, R. G., P. Cox, and T. Edwards, Introduction to Infrared and Electro-Optical Systems, Norwood, MA: Artech House, 1999, p. 8.
[2] Goodman, J., Introduction to Fourier Optics, New York: McGraw-Hill, 1968, pp. 17–18.
[3] Gaskill, J., Linear Systems, Fourier Transforms, and Optics, New York: Wiley, 1978, p. 72.
[4] Gaskill, J., Linear Systems, Fourier Transforms, and Optics, New York: Wiley, 1978, p. 47.
[5] Holst, G., Electro-Optical Imaging System Performance, Orlando, FL: JCD Publishing, 1995, p. 127.
[6] Vollmerhausen, R., and R. Driggers, Analysis of Sampled Imaging Systems, Ch. 4, Bellingham, WA: SPIE Press, 2001.
[7] Overington, I., Vision and Acquisition, New York: Crane and Russak, 1976.
[8] Vollmerhausen, R., Electro-Optical Imaging System Performance Modeling, Ch. 23, Bellingham, WA: ONTAR and SPIE Press, 2000.
[9] Gaskill, J., Linear Systems, Fourier Transforms, and Optics, New York: Wiley, 1978, p. 60.
[10] Vollmerhausen, R., and R. Driggers, Analysis of Sampled Imaging Systems, Bellingham, WA: SPIE Press, 2001, pp. 68–69.
[11] Vollmerhausen, R., and R. Driggers, Analysis of Sampled Imaging Systems, Bellingham, WA: SPIE Press, 2001, pp. 73–85.
[12] Lloyd, M., Thermal Imaging Systems, New York: Plenum Press, 1975, p. 166.
[13] D'Agostino, J., "Three Dimensional Noise Analysis Framework and Measurement Methodology for Imaging System Noise," Proceedings of SPIE, Vol. 1488, Infrared Imaging Systems: Design, Analysis, Modeling, and Testing II, Orlando, FL, April 3, 1991, pp. 110–121.
[14] Webb, C., P. Bell, and G. Mayott, "Laboratory Procedures for the Characterization of 3-D Noise in Thermal Imaging Systems," Proceedings of the IRIS Passive Sensors Symposium, Laurel, MD, March 1991, pp. 23–30.
[15] Webb, C., "Approach to 3-D Noise Spectral Analysis," Proceedings of SPIE, Vol. 2470, Infrared Imaging Systems: Design, Analysis, Modeling, and Testing VI, Orlando, FL, April 19, 1995, pp. 288–299.
C H A P T E R 3
Target Acquisition and Image Quality

3.1 Introduction

In Chapter 2, we reviewed the basic principles of imaging systems. In this chapter, we study methods of determining image performance, including target acquisition theory and an image quality metric.

Signal or image processing is often used to enhance the amount of information in an image available to an observer. If an observer is using imagery to perform a task, then an enhancement in the information content available to the observer will result in an improvement in observer performance. It is logical, then, to assess image processing techniques based on observer performance with and without the application of the technique being assessed.

One application in which this approach has been used extensively is military target acquisition with imaging sensors. These sensors often operate in conditions in which the image formed is significantly degraded by blur, noise, or sampling artifacts. Image processing is used to improve the image quality for the purpose of improving target acquisition performance. An example of this type of assessment is given by Driggers et al. [1].

In this chapter, a theory of target acquisition is reviewed. First, a brief history of the military development of target acquisition is presented, followed by a discussion of human threshold vision. A metric based on threshold vision is then provided and related to the statistical performance of human observers. The chapter concludes with a discussion of how these results can be used in the assessment of image processing techniques.

3.2 A Brief History of Target Acquisition Theory

In 1958, John Johnson of the U.S. Army Engineer Research and Development Laboratories, now NVESD, presented a methodology for predicting the performance of observers using electro-optic sensors [2]. The Johnson methodology proceeds from two basic assumptions:

1. Target acquisition performance is related to perceived image quality;
2. Perceived image quality is related to threshold vision.

The first assumption recognizes that our ability to see a target in an image is strongly dependent on the clarity of the image. A blurred and noisy
