Detection, Estimation, and Modulation Theory
Detection, Estimation,
and Modulation Theory
Part I. Detection, Estimation,
and Linear Modulation Theory
HARRY L. VAN TREES
George Mason University
A Wiley-Interscience Publication
JOHN WILEY & SONS, INC.
New York • Chichester • Weinheim • Brisbane • Singapore • Toronto
This text is printed on acid-free paper.
Copyright© 2001 by John Wiley & Sons, Inc. All rights reserved.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any
form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise,
except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without
either the prior written permission of the Publisher, or authorization through payment of the
appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA
01923, (978) 750-8400, fax (978) 750-4744. Requests to the Publisher for permission should be
addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York,
NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail: PERMREQ@WILEY.COM.
For ordering and customer service, call 1-800-CALL-WILEY.
Library of Congress Cataloging in Publication Data is available.
ISBN 0-471-22108-2
This title is also available in print as ISBN 0-471-09517-6.
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
To Diane
and Stephen, Mark, Kathleen, Patricia,
Eileen, Harry, and Julia
and the next generation-
Brittany, Erin, Thomas, Elizabeth, Emily,
Dillon, Bryan, Julia, Robert, Margaret,
Peter, Emma, Sarah, Harry, Rebecca,
and Molly
Preface for Paperback Edition
In 1968, Part I of Detection, Estimation, and Modulation Theory [VT68] was pub-
lished. It turned out to be a reasonably successful book that has been widely used by
several generations of engineers. There were thirty printings, but the last printing
was in 1996. Volumes II and III ([VT71a], [VT71b]) were published in 1971 and fo-
cused on specific application areas such as analog modulation, Gaussian signals
and noise, and the radar-sonar problem. Volume II had a short life span due to the
shift from analog modulation to digital modulation. Volume III is still widely used
as a reference and as a supplementary text. In a moment of youthful optimism, I in-
dicated in the Preface to Volume III and in Chapter III-14 that a short mono-
graph on optimum array processing would be published in 1971. The bibliography
lists it as a reference, Optimum Array Processing, Wiley, 1971, which has been sub-
sequently cited by several authors. After a 30-year delay, Optimum Array Process-
ing, Part IV of Detection, Estimation, and Modulation Theory will be published this
year.
A few comments on my career may help explain the long delay. In 1972, MIT
loaned me to the Defense Communication Agency in Washington, D.C. where I
spent three years as the Chief Scientist and the Associate Director of Technology. At
the end of the tour, I decided, for personal reasons, to stay in the Washington, D.C.
area. I spent three years as an Assistant Vice-President at COMSAT where my
group did the advanced planning for the INTELSAT satellites. In 1978, I became
the Chief Scientist of the United States Air Force. In 1979, Dr. Gerald Dinneen, the
former Director of Lincoln Laboratories, was serving as Assistant Secretary of De-
fense for C3I. He asked me to become his Principal Deputy and I spent two years in
that position. In 1981, I joined M/A-COM Linkabit. Linkabit is the company that Ir-
win Jacobs and Andrew Viterbi had started in 1969 and sold to M/A-COM in 1979.
I started an Eastern operation which grew to about 200 people in three years. After
Irwin and Andy left M/A-COM and started Qualcomm, I was responsible for the
government operations in San Diego as well as Washington, D.C. In 1988, M/A-
COM sold the division. At that point I decided to return to the academic world.
I joined George Mason University in September of 1988. One of my priorities
was to finish the book on optimum array processing. However, I found that I needed
to build up a research center in order to attract young research-oriented faculty and
doctoral students. The process took about six years. The Center for Excellence in
Command, Control, Communications, and Intelligence has been very successful
and has generated over $300 million in research funding during its existence. Dur-
ing this growth period, I spent some time on array processing but a concentrated ef-
fort was not possible. In 1995, I started a serious effort to write the Array Process-
ing book.
Throughout the Optimum Array Processing text there are references to Parts I
and III of Detection, Estimation, and Modulation Theory. The referenced material is
available in several other books, but I am most familiar with my own work. Wiley
agreed to publish Parts I and III in paperback so the material will be readily avail-
able. In addition to providing background for Part IV, Part I is still useful as a text
for a graduate course in Detection and Estimation Theory. Part III is suitable for a
second level graduate course dealing with more specialized topics.
In the 30-year period, there has been a dramatic change in the signal processing
area. Advances in computational capability have allowed the implementation of
complex algorithms that were only of theoretical interest in the past. In many appli-
cations, algorithms can be implemented that reach the theoretical bounds.
The advances in computational capability have also changed how the material is
taught. In Parts I and III, there is an emphasis on compact analytical solutions to
problems. In Part IV, there is a much greater emphasis on efficient iterative solu-
tions and simulations. All of the material in Parts I and III is still relevant. The books
use continuous time processes but the transition to discrete time processes is
straightforward. Integrals that were difficult to do analytically can be done easily in
Matlab®. The various detection and estimation algorithms can be simulated and
their performance compared to the theoretical bounds. We still use most of the prob-
lems in the text but supplement them with problems that require Matlab® solutions.
We hope that a new generation of students and readers find these reprinted edi-
tions to be useful.
Fairfax, Virginia
June 2001
HARRY L. VAN TREES
Preface
The area of detection and estimation theory that we shall study in this book
represents a combination of the classical techniques of statistical inference
and the random process characterization of communication, radar, sonar,
and other modern data processing systems. The two major areas of statis-
tical inference are decision theory and estimation theory. In the first case
we observe an output that has a random character and decide which of two
possible causes produced it. This type of problem was studied in the middle
of the eighteenth century by Thomas Bayes [1]. In the estimation theory
case the output is related to the value of some parameter of interest, and
we try to estimate the value of this parameter. Work in this area was
published by Legendre [2] and Gauss [3] in the early nineteenth century.
Significant contributions to the classical theory that we use as background
were developed by Fisher [4] and Neyman and Pearson [5] more than
30 years ago. In 1941 and 1942 Kolmogoroff [6] and Wiener [7] applied
statistical techniques to the solution of the optimum linear filtering
problem. Since that time the application of statistical techniques to the
synthesis and analysis of all types of systems has grown rapidly. The
application of these techniques and the resulting implications are the
subject of this book.
This book and the subsequent volume, Detection, Estimation, and
Modulation Theory, Part II, are based on notes prepared for a course
entitled "Detection, Estimation, and Modulation Theory," which is taught
as a second-level graduate course at M.I.T. My original interest in the
material grew out of my research activities in the area of analog modulation
theory. A preliminary version of the material that deals with modulation
theory was used as a text for a summer course presented at M.I.T. in 1964.
It turned out that our viewpoint on modulation theory could best be
understood by an audience with a clear understanding of modern detection
and estimation theory. At that time there was no suitable text available to
cover the material of interest and emphasize the points that I felt were
important, so I started writing notes. It was clear that in order to present
the material to graduate students in a reasonable amount of time it would
be necessary to develop a unified presentation of the three topics: detection,
estimation, and modulation theory, and exploit the fundamental ideas that
connected them. As the development proceeded, it grew in size until the
material that was originally intended to be background for modulation
theory occupies the entire contents of this book. The original material on
modulation theory starts at the beginning of the second book. Collectively,
the two books provide a unified coverage of the three topics and their
application to many important physical problems.
For the last three years I have presented successively revised versions of
the material in my course. The audience consists typically of 40 to 50
students who have completed a graduate course in random processes which
covered most of the material in Davenport and Root [8]. In general, they
have a good understanding of random process theory and a fair amount of
practice with the routine manipulation required to solve problems. In
addition, many of them are interested in doing research in this general area
or closely related areas. This interest provides a great deal of motivation
which I exploit by requiring them to develop many of the important ideas
as problems. It is for this audience that the book is primarily intended. The
appendix contains a detailed outline of the course.
On the other hand, many practicing engineers deal with systems that
have been or should have been designed and analyzed with the techniques
developed in this book. I have attempted to make the book useful to them.
An earlier version was used successfully as a text for an in-plant course for
graduate engineers.
From the standpoint of specific background little advanced material is
required. A knowledge of elementary probability theory and second
moment characterization of random processes is assumed. Some familiarity
with matrix theory and linear algebra is helpful but certainly not necessary.
The level of mathematical rigor is low, although in most sections the results
could be rigorously proved by simply being more careful in our derivations.
We have adopted this approach in order not to obscure the important
ideas with a lot of detail and to make the material readable for the kind of
engineering audience that will find it useful. Fortunately, in almost all
cases we can verify that our answers are intuitively logical. It is worthwhile
to observe that this ability to check our answers intuitively would be
necessary even if our derivations were rigorous, because our ultimate
objective is to obtain an answer that corresponds to some physical system
of interest. It is easy to find physical problems in which a plausible mathe-
matical model and correct mathematics lead to an unrealistic answer for the
original problem.
We have several idiosyncrasies that it might be appropriate to mention.
In general, we look at a problem in a fair amount of detail. Many times we
look at the same problem in several different ways in order to gain a better
understanding of the meaning of the result. Teaching students a number of
ways of doing things helps them to be more flexible in their approach to
new problems. A second feature is the necessity for the reader to solve
problems to understand the material fully. Throughout the course and the
book we emphasize the development of an ability to work problems. At the
end of each chapter are problems that range from routine manipulations to
significant extensions of the material in the text. In many cases they are
equivalent to journal articles currently being published. Only by working a
fair number of them is it possible to appreciate the significance and
generality of the results. Solutions for an individual problem will be
supplied on request, and a book containing solutions to about one third
of the problems is available to faculty members teaching the course. We
are continually generating new problems in conjunction with the course
and will send them to anyone who is using the book as a course text. A
third issue is the abundance of block diagrams, outlines, and pictures. The
diagrams are included because most engineers (including myself) are more
at home with these items than with the corresponding equations.
One problem always encountered is the amount of notation needed to
cover the large range of subjects. We have tried to choose the notation in a
logical manner and to make it mnemonic. All the notation is summarized
in the glossary at the end of the book. We have tried to make our list of
references as complete as possible and to acknowledge any ideas due to
other people.
A number of people have contributed in many ways and it is a pleasure
to acknowledge them. Professors W. B. Davenport and W. M. Siebert have
provided continual encouragement and technical comments on the various
chapters. Professors Estil Hoversten and Donald Snyder of the M.I.T.
faculty and Lewis Collins, Arthur Baggeroer, and Michael Austin, three
of my doctoral students, have carefully read and criticized the various
chapters. Their suggestions have improved the manuscript appreciably. In
addition, Baggeroer and Collins contributed a number of the problems in
the various chapters and Baggeroer did the programming necessary for many
of the graphical results. Lt. David Wright read and criticized Chapter 2.
L. A. Frasco and H. D. Goldfein, two of my teaching assistants, worked
all of the problems in the book. Dr. Howard Yudkin of Lincoln Laboratory
read the entire manuscript and offered a number of important criticisms.
In addition, various graduate students taking the course have made
suggestions which have been incorporated. Most of the final draft was
typed by Miss Aina Sils. Her patience with the innumerable changes is
sincerely appreciated. Several other secretaries, including Mrs. Jarmila
Hrbek, Mrs. Joan Bauer, and Miss Camille Tortorici, typed sections of the
various drafts.
As pointed out earlier, the books are an outgrowth of my research
interests. This research is a continuing effort, and I shall be glad to send our
current work to people working in this area on a regular reciprocal basis.
My early work in modulation theory was supported by Lincoln Laboratory
as a summer employee and consultant in groups directed by Dr. Herbert
Sherman and Dr. Barney Reiffen. My research at M.I.T. was partly
supported by the Joint Services and the National Aeronautics and Space
Administration under the auspices of the Research Laboratory of Elec-
tronics. This support is gratefully acknowledged.
Cambridge, Massachusetts
October, 1967.
HARRY L. VAN TREES

REFERENCES
[1] Thomas Bayes, "An Essay Towards Solving a Problem in the Doctrine of Chances," Phil. Trans., 53, 370-418 (1764).
[2] A. M. Legendre, Nouvelles Méthodes pour la Détermination des Orbites des Comètes, Paris, 1806.
[3] K. F. Gauss, Theory of Motion of the Heavenly Bodies Moving About the Sun in Conic Sections, reprinted by Dover, New York, 1963.
[4] R. A. Fisher, "Theory of Statistical Estimation," Proc. Cambridge Philos. Soc., 22, 700 (1925).
[5] J. Neyman and E. S. Pearson, "On the Problem of the Most Efficient Tests of Statistical Hypotheses," Phil. Trans. Roy. Soc. London, A 231, 289 (1933).
[6] A. Kolmogoroff, "Interpolation und Extrapolation von stationären zufälligen Folgen," Bull. Acad. Sci. USSR, Ser. Math. 5, 1941.
[7] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, Tech. Press of M.I.T. and Wiley, New York, 1949 (originally published as a classified report in 1942).
[8] W. B. Davenport and W. L. Root, Random Signals and Noise, McGraw-Hill, New York, 1958.
Contents
1 Introduction 1
1.1 Topical Outline 1
1.2 Possible Approaches 12
1.3 Organization 15
2 Classical Detection and Estimation Theory 19
2.1 Introduction 19
2.2 Simple Binary Hypothesis Tests 23
Decision Criteria. Performance: Receiver Operating Characteristic.
2.3 M Hypotheses 46
2.4 Estimation Theory 52
Random Parameters: Bayes Estimation. Real (Nonrandom) Parameter Estimation. Multiple Parameter Estimation. Summary of Estimation Theory.
2.5 Composite Hypotheses 86
2.6 The General Gaussian Problem 96
Equal Covariance Matrices. Equal Mean Vectors. Summary.
2.7 Performance Bounds and Approximations 116
2.8 Summary 133
2.9 Problems 133
References 163
3 Representations of Random Processes 166
3.1 Introduction 166
3.2 Deterministic Functions: Orthogonal Representations 169
3.3 Random Process Characterization 174
Random Processes: Conventional Characterizations. Series
Representation of Sample Functions of Random Processes.
Gaussian Processes.
3.4 Homogeneous Integral Equations and Eigenfunctions 186
Rational Spectra. Bandlimited Spectra. Nonstationary
Processes. White Noise Processes. The Optimum Linear
Filter. Properties of Eigenfunctions and Eigenvalues.
3.5 Periodic Processes 209
3.6 Infinite Time Interval: Spectral Decomposition 212
Spectral Decomposition. An Application of Spectral Decomposition: MAP Estimation of a Gaussian Process.
3.7 Vector Random Processes 220
3.8 Summary 224
3.9 Problems 226
References 237
4 Detection of Signals-Estimation of Signal Parameters 239
4.1 Introduction 239
Models. Format.
4.2 Detection and Estimation in White Gaussian Noise 246
Detection of Signals in Additive White Gaussian Noise. Linear
Estimation. Nonlinear Estimation. Summary: Known Signals
in White Gaussian Noise.
4.3 Detection and Estimation in Nonwhite Gaussian Noise 287
"Whitening" Approach. A Direct Derivation Using the
Karhunen-Loeve Expansion. A Direct Derivation with a
Sufficient Statistic. Detection Performance. Estimation.
Solution Techniques for Integral Equations. Sensitivity.
Known Linear Channels.
4.4 Signals with Unwanted Parameters: The Composite Hypothesis Problem 333
Random Phase Angles. Random Amplitude and Phase.
4.5 Multiple Channels 366
Formulation. Application.
4.6 Multiple Parameter Estimation 370
Additive White Gaussian Noise Channel. Extensions.
4.7 Summary and Omissions 374
Summary. Topics Omitted.
4.8 Problems 377
References 418
5 Estimation of Continuous Waveforms 423
5.1 Introduction 423
5.2 Derivation of Estimator Equations 426
No-Memory Modulation Systems. Modulation Systems with
Memory.
5.3 A Lower Bound on the Mean-Square Estimation Error 437
5.4 Multidimensional Waveform Estimation 446
Examples of Multidimensional Problems. Problem Formula-
tion. Derivation of Estimator Equations. Lower Bound on
the Error Matrix. Colored Noise Estimation.
5.5 Nonrandom Waveform Estimation 456
5.6 Summary 459
5.7 Problems 460
References 465
6 Linear Estimation 467
6.1 Properties of Optimum Processors 468
6.2 Realizable Linear Filters: Stationary Processes, Infinite Past: Wiener Filters 481
Solution of Wiener-Hopf Equation. Errors in Optimum Systems. Unrealizable Filters. Closed-Form Error Expressions. Optimum Feedback Systems. Comments.
6.3 Kalman-Bucy Filters 515
Differential Equation Representation of Linear Systems and
Random Process Generation. Derivation of Estimator Equa-
tions. Applications. Generalizations.
6.4 Linear Modulation: Communications Context 575
DSB-AM: Realizable Demodulation. DSB-AM: Demodula-
tion with Delay. Amplitude Modulation: Generalized Carriers.
Amplitude Modulation: Single-Sideband Suppressed-Carrier.
6.5 The Fundamental Role of the Optimum Linear Filter 584
6.6 Comments 585
6.7 Problems 586
References 619
7 Discussion 623
7.1 Summary 623
7.2 Preview of Part II 625
7.3 Unexplored Issues 627
References 629
Appendix: A Typical Course Outline 635
Glossary 671
Author Index 683
Subject Index 687
1
Introduction
In these two books, we shall study three areas of statistical theory, which
we have labeled detection theory, estimation theory, and modulation
theory. The goal is to develop these theories in a common mathematical
framework and to demonstrate how they can be used to solve a wealth of
practical problems in many diverse physical situations.
In this chapter we present three outlines of the material. The first is a
topical outline in which we develop a qualitative understanding of the three
areas by examining some typical problems of interest. The second is a
logical outline in which we explore the various methods of attacking the
problems. The third is a chronological outline in which we explain the
structure of the books.
1.1 TOPICAL OUTLINE
An easy way to explain what is meant by detection theory is to examine
several physical situations that lead to detection theory problems.
A simple digital communication system is shown in Fig. 1.1. The source
puts out a binary digit every T seconds. Our object is to transmit this
sequence of digits to some other location. The channel available for trans-
mitting the sequence depends on the particular situation. Typically, it
could be a telephone line, a radio link, or an acoustical channel. For
Fig. 1.1 Digital communication system.
purposes of illustration, we shall consider a radio link. In order to transmit
the information, we must put it into a form suitable for propagating over
the channel. A straightforward method would be to build a device that
generates a sine wave,
s1(t) = sin w1t, (1)
for T seconds if the source generated a "one" in the preceding interval,
and a sine wave of a different frequency,
s0(t) = sin w0t, (2)
for T seconds if the source generated a "zero" in the preceding interval.
The frequencies are chosen so that the signals s0(t) and s1(t) will propagate
over the particular radio link of concern. The output of the device is fed
into an antenna and transmitted over the channel. Typical source and
transmitted signal sequences are shown in Fig. 1.2. In the simplest kind of
channel the signal sequence arrives at the receiving antenna attenuated but
essentially undistorted. To process the received signal we pass it through
the antenna and some stages of rf-amplification, in the course of which a
thermal noise n(t) is added to the message sequence. Thus in any T-second
interval we have available a waveform r(t) in which
r(t) = s1(t) + n(t), 0 ≤ t ≤ T, (3)
if s1(t) was transmitted, and
r(t) = s0(t) + n(t), 0 ≤ t ≤ T, (4)
if s0(t) was transmitted. We are now faced with the problem of deciding
which ofthe two possible signals was transmitted. We label the device that
does this a decision device. It is simply a processor that observes r(t) and
guesses whether s1(t) or s0(t) was sent according to some set of rules. This
is equivalent to guessing what the source output was in the preceding
interval. We refer to designing and evaluating the processor as a detection
Fig. 1.2 Typical sequences.
Fig. 1.3 Sequence with phase shifts.
theory problem. In this particular case the only possible source of error in
making a decision is the additive noise. If it were not present, the input
would be completely known and we could make decisions without errors.
We denote this type of problem as the known signal in noise problem. It
corresponds to the lowest level (i.e., simplest) of the detection problems of
interest.
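This first-level problem is easy to simulate. The following sketch (in Python; the frequencies, sampling rate, noise level, and the simple correlation receiver are all assumed here for illustration, and the optimum receiver for this problem is derived in Chapter 4) transmits random bits according to (3) and (4) and counts decision errors:

    import numpy as np

    rng = np.random.default_rng(0)
    fs, T = 1000, 1.0                          # assumed sampling rate and bit interval
    t = np.arange(0, T, 1 / fs)
    w1, w0 = 2 * np.pi * 10, 2 * np.pi * 6     # assumed values of w1 and w0
    s1, s0 = np.sin(w1 * t), np.sin(w0 * t)

    errors, n_trials = 0, 2000
    for _ in range(n_trials):
        bit = rng.integers(0, 2)
        s = s1 if bit == 1 else s0
        r = s + rng.normal(0.0, 1.0, t.size)   # r(t) = si(t) + n(t), as in (3) and (4)
        # correlate r(t) against both candidate signals and choose the larger
        decide = 1 if np.dot(r, s1) > np.dot(r, s0) else 0
        errors += decide != bit
    print("empirical error probability:", errors / n_trials)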
An example of the next level of detection problem is shown in Fig. 1.3.
The oscillators used to generate s1(t) and s0(t) in the preceding example
have a phase drift. Therefore in a particular T-second interval the received
signal corresponding to a "one" is
r(t) = sin (w1t + θ1) + n(t), (5)
and the received signal corresponding to a "zero" is
r(t) = sin (w0t + θ0) + n(t), (6)
where θ0 and θ1 are unknown constant phase angles. Thus even in the
absence of noise the input waveform is not completely known. In a practical
system the receiver may include auxiliary equipment to measure the oscilla-
tor phase. If the phase varies slowly enough, we shall see that essentially
perfect measurement is possible. If this is true, the problem is the same as
above. However, if the measurement is not perfect, we must incorporate
the signal uncertainty in our model.
A corresponding problem arises in the radar and sonar areas. A con-
ventional radar transmits a pulse at some frequency wc with a rectangular
envelope:
s1(t) = sin wct, 0 ≤ t ≤ T. (7)
If a target is present, the pulse is reflected. Even the simplest target will
introduce an attenuation and phase shift in the transmitted signal. Thus
the signal available for processing in the interval of interest is
r(t) = Vr sin [wc(t − τ) + θr] + n(t), τ ≤ t ≤ τ + T,
     = n(t), 0 ≤ t < τ, τ + T < t < ∞, (8)
if a target is present and
r(t) = n(t), 0 ≤ t < ∞, (9)
if a target is absent. We see that in the absence of noise the signal still
contains three unknown quantities: Vr, the amplitude; θr, the phase; and
τ, the round-trip travel time to the target.
These two examples represent the second level of detection problems.
We classify them as signal with unknown parameters in noise problems.
Detection problems of a third level appear in several areas. In a passive
sonar detection system the receiver listens for noise generated by enemy
vessels. The engines, propellers, and other elements in the vessel generate
acoustical signals that travel through the ocean to the hydrophones in the
detection system. This composite signal can best be characterized as a
sample function from a random process. In addition, the hydrophone
generates self-noise and picks up sea noise. Thus a suitable model for the
detection problem might be
r(t) = s0(t) + n(t) (10)
if the target is present and
r(t) = n(t) (11)
if it is not. In the absence of noise the signal is a sample function from a
random process (indicated by the subscript 0).
In the communications field a large number of systems employ channels
in which randomness is inherent. Typical systems are tropospheric scatter
links, orbiting dipole links, and chaff systems. A common technique is to
transmit one of two signals separated in frequency. (We denote these
frequencies as w1 and w0.) The resulting received signal is
r(t) = s01(t) + n(t) (12)
if s1(t) was transmitted and
r(t) = s00(t) + n(t) (13)
if s0(t) was transmitted. Here s01(t) is a sample function from a random
process centered at w1, and s00(t) is a sample function from a random
process centered at w0. These examples are characterized by the lack of any
deterministic signal component. Any decision procedure that we design
will have to be based on the difference in the statistical properties of the
two random processes from which s00(t) and s01(t) are obtained. This is
the third level of detection problem and is referred to as a random signal
in noise problem.
In our examination of representative examples we have seen that detec-
tion theory problems are characterized by the fact that we must decide
which of several alternatives is true. There were only two alternatives in
the examples cited; therefore we refer to them as binary detection prob-
lems. Later we will encounter problems in which there are M alternatives
available (the M-ary detection problem). Our hierarchy of detection
problems is presented graphically in Fig. 1.4.
There is a parallel set of problems in the estimation theory area. A
simple example is given in Fig. 1.5, in which the source puts out an
analog message a(t) (Fig. 1.5a). To transmit the message we first sample it
every T seconds. Then, every T seconds we transmit a signal that contains
Detection theory
Level 1. Known signals in noise:
1. Synchronous digital communication
2. Pattern recognition problems
Level 2. Signals with unknown parameters in noise:
1. Conventional pulsed radar or sonar, target detection
2. Target classification (orientation of target unknown)
3. Digital communication systems without phase reference
4. Digital communication over slowly-fading channels
Level 3. Random signals in noise:
1. Digital communication over scatter link, orbiting dipole channel, or chaff link
2. Passive sonar
3. Seismic detection system
4. Radio astronomy (detection of noise sources)
Fig. 1.4 Detection theory hierarchy.
Fig. 1.5 (a) Sampling an analog source; (b) pulse-amplitude modulation; (c) pulse-frequency modulation; (d) waveform reconstruction.
a parameter which is uniquely related to the last sample value. In Fig. 1.5b
the signal is a sinusoid whose amplitude depends on the last sample. Thus,
if the sample at time nT is An, the signal in the interval [nT, (n + 1)T] is
s(t, An) = An sin wct, nT ≤ t ≤ (n + 1)T. (14)
A system of this type is called a pulse amplitude modulation (PAM)
system. In Fig. 1.5c the signal is a sinusoid whose frequency in the interval
differs from the reference frequency wc by an amount proportional to the
preceding sample value,
s(t, An) = sin (wct + Ant), nT ≤ t ≤ (n + 1)T. (15)
A system of this type is called a pulse frequency modulation (PFM) system.
Once again there is additive noise. The received waveform, given that An
was the sample value, is
r(t) = s(t, An) + n(t), nT ≤ t ≤ (n + 1)T. (16)
During each interval the receiver tries to estimate An. We denote these
estimates as Ân. Over a period of time we obtain a sequence of estimates,
as shown in Fig. 1.5d, which is passed into a device whose output is an
estimate of the original message a(t). If a(t) is a band-limited signal, the
device is just an ideal low-pass filter. For other cases it is more involved.
If, however, the parameters in this example were known and the noise
were absent, the received signal would be completely known. We refer
to problems in this category as known signal in noise problems. If we
assume that the mapping from An to s(t, An) in the transmitter has an
inverse, we see that if the noise were not present we could determine An
unambiguously. (Clearly, if we were allowed to design the transmitter, we
should always choose a mapping with an inverse.) The known signal in
noise problem is the first level of the estimation problem hierarchy.
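Because the mapping from An to s(t, An) is invertible, this first-level estimation problem is equally easy to simulate. The sketch below (in Python; the carrier frequency, noise level, and sample value are assumed for illustration) recovers An from the PAM waveform of (14) by projecting r(t) onto the known carrier:

    import numpy as np

    rng = np.random.default_rng(1)
    fs, T = 1000, 1.0
    t = np.arange(0, T, 1 / fs)
    wc = 2 * np.pi * 8                          # assumed carrier frequency
    carrier = np.sin(wc * t)

    A_true = 0.7                                # the sample value An to be estimated
    r = A_true * carrier + rng.normal(0.0, 0.5, t.size)   # r(t), per (16)
    # project r(t) onto the known carrier; this inverts the mapping An -> s(t, An)
    A_hat = np.dot(r, carrier) / np.dot(carrier, carrier)
    print("estimate of An:", A_hat)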
Returning to the area of radar, we consider a somewhat different
problem. We assume that we know a target is present but do not know
its range or velocity. Then the received signal is
r(t) = Vr sin [(wc + wd)(t − τ) + θr] + n(t), τ ≤ t ≤ τ + T,
     = n(t), 0 ≤ t < τ, τ + T < t < ∞, (17)
where wd denotes a Doppler shift caused by the target's motion. We want
to estimate τ and wd. Now, even if the noise were absent and τ and wd
were known, the signal would still contain the unknown parameters Vr
and θr. This is a typical second-level estimation problem. As in detection
theory, we refer to problems in this category as signal with unknown
parameters in noise problems.
At the third level the signal component is a random process whose
statistical characteristics contain parameters we want to estimate. The
received signal is of the form
r(t) = s0(t, A) + n(t), (18)
where s0(t, A) is a sample function from a random process. In a simple
case it might be a stationary process with the narrow-band spectrum shown
in Fig. 1.6. The shape of the spectrum is known but the center frequency
Fig. 1.6 Spectrum of random signal.
Estimation Theory
Level 1. Known signals in noise:
1. PAM, PFM, and PPM communication systems with phase synchronization
2. Inaccuracies in inertial systems (e.g., drift angle measurement)
Level 2. Signals with unknown parameters in noise:
1. Range, velocity, or angle measurement in radar/sonar problems
2. Discrete time, continuous amplitude communication system (with unknown amplitude or phase in channel)
Level 3. Random signals in noise:
1. Power spectrum parameter estimation
2. Range or Doppler spread target parameters in radar/sonar problem
3. Velocity measurement in radio astronomy
4. Target parameter estimation: passive sonar
5. Ground mapping radars
Fig. 1.7 Estimation theory hierarchy.
is not. The receiver must observe r(t) and, using the statistical properties
of s0(t, A) and n(t), estimate the value of A. This particular example could
arise in either radio astronomy or passive sonar. The general class of
problem in which the signal containing the parameters is a sample function
from a random process is referred to as the random signal in noise problem.
The hierarchy of estimation theory problems is shown in Fig. 1.7.
We note that there appears to be considerable parallelism in the detection
and estimation theory problems. We shall frequently exploit these parallels
to reduce the work, but there is a basic difference that should be em-
phasized. In binary detection the receiver is either "right" or "wrong."
In the estimation of a continuous parameter the receiver will seldom be
exactly right, but it can try to be close most of the time. This difference
will be reflected in the manner in which we judge system performance.
The third area of interest is frequently referred to as modulation theory.
We shall see shortly that this term is too narrow for the actual problems.
Once again a simple example is useful. In Fig. 1.8 we show an analog
message source whose output might typically be music or speech. To
convey the message over the channel, we transform it by using a modula-
tion scheme to get it into a form suitable for propagation. The transmitted
signal is a continuous waveform that depends on a(t) in some deterministic
manner. In Fig. 1.8 it is an amplitude modulated waveform:
s[t, a(t)] = [1 + ma(t)] sin (wct). (19)
(This is conventional double-sideband AM with modulation index m.) In
Fig. 1.8c the transmitted signal is a frequency modulated (FM) waveform:
s[t, a(t)] = sin [wct + ∫^t a(u) du]. (20)
When noise is added the received signal is
r(t) = s[t, a(t)] + n(t). (21)
Now the receiver must observe r(t) and put out a continuous estimate of
the message a(t), as shown in Fig. 1.8. This particular example is a first-
level modulation problem, for if n(t) were absent and a(t) were known the
received signal would be completely known. Once again we describe it as
a known signal in noise problem.
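The two transmitted waveforms of (19) and (20) are easy to generate numerically. The sketch below (in Python; the message, carrier frequency, modulation index, and the FM deviation constant are all assumed here for illustration) constructs both:

    import numpy as np

    fs = 8000
    t = np.arange(0, 1.0, 1 / fs)
    wc = 2 * np.pi * 400                        # assumed carrier frequency
    a = np.sin(2 * np.pi * 3 * t)               # a toy message a(t)
    m = 0.5                                     # modulation index

    s_am = (1 + m * a) * np.sin(wc * t)         # DSB-AM, per (19)
    phase = np.cumsum(a) / fs                   # running approximation of the integral in (20)
    s_fm = np.sin(wc * t + 100.0 * phase)       # FM, per (20); a deviation constant is added for visibility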
Another type of physical situation in which we want to estimate a
continuous function is shown in Fig. 1.9. The channel is a time-invariant
linear system whose impulse response h(τ) is unknown. To estimate the
impulse response we transmit a known signal x(t). The received signal is
r(t) = ∫_0^∞ h(τ) x(t − τ) dτ + n(t). (22)
Fig. 1.8 A modulation theory example: (a) analog transmission system; (b) amplitude modulated signal; (c) frequency modulated signal; (d) demodulator.
Fig. 1.9 Channel measurement.
The receiver observes r(t) and tries to estimate h(τ). This particular example
could best be described as a continuous estimation problem. Many other
problems of interest in which we shall try to estimate a continuous wave-
form will be encountered. For convenience, we shall use the term modula-
tion theory for this category, even though the term continuous waveform
estimation might be more descriptive.
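In discrete time the channel-measurement problem of (22) becomes a linear least-squares problem. The sketch below (in Python; the channel taps, probe signal, and noise level are assumed for illustration) estimates the impulse response from the known input and the noisy output:

    import numpy as np

    rng = np.random.default_rng(2)
    h_true = np.array([0.5, 0.3, -0.2, 0.1])    # unknown channel taps (toy values)
    x = rng.normal(0.0, 1.0, 200)               # known probe signal x(t)
    r = np.convolve(x, h_true)[: x.size] + rng.normal(0.0, 0.1, x.size)  # (22), discretized

    # build the matrix of delayed copies of x and solve in the least-squares sense
    L = h_true.size
    X = np.column_stack([np.concatenate([np.zeros(k), x[: x.size - k]]) for k in range(L)])
    h_hat, *_ = np.linalg.lstsq(X, r, rcond=None)
    print("estimated taps:", h_hat)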
The other levels of the modulation theory problem follow by direct
analogy. In the amplitude modulation system shown in Fig. 1.8b the
receiver frequently does not know the phase angle of the carrier. In this
case a suitable model is
r(t) = [1 + ma(t)] sin (wct + θ) + n(t), (23)
Modulation Theory (Continuous waveform estimation)
1. Known signals in noise:
1. Conventional communication systems such as AM (DSB-AM, SSB), FM, and PM with phase synchronization
2. Optimum filter theory
3. Optimum feedback systems
4. Channel measurement
5. Orbital estimation for satellites
6. Signal estimation in seismic and sonar classification systems
7. Synchronization in digital systems
2. Signals with unknown parameters in noise:
1. Conventional communication systems without phase synchronization
2. Estimation of channel characteristics when phase of input signal is unknown
3. Random signals in noise:
1. Analog communication over randomly varying channels
2. Estimation of statistics of time-varying processes
3. Estimation of plant characteristics
Fig. 1.10 Modulation theory hierarchy.
where θ is an unknown parameter. This is an example of a signal with
unknown parameter problem in the modulation theory area.
A simple example of a third-level problem (random signal in noise) is one
in which we transmit a frequency-modulated signal over a radio link whose
gain and phase characteristics are time-varying. We shall find that if we
transmit the signal in (20) over this channel the received waveform will be
r(t) = V(t) sin [wct + ∫^t a(u) du + θ(t)] + n(t), (24)
where V(t) and θ(t) are sample functions from random processes. Thus,
even if a(u) were known and the noise n(t) were absent, the received signal
would still be a random process. An over-all outline of the problems of
interest to us appears in Fig. 1.10. Additional examples included in the
table to indicate the breadth of the problems that fit into the outline are
discussed in more detail in the text.
Now that we have outlined the areas of interest it is appropriate to
determine how to go about solving them.
1.2 POSSIBLE APPROACHES
From the examples we have discussed it is obvious that an inherent
feature of all the problems is randomness of source, channel, or noise
(often all three). Thus our approach must be statistical in nature. Even
assuming that we are using a statistical model, there are many different
ways to approach the problem. We can divide the possible approaches into
two categories, which we denote as "structured" and "nonstructured."
Some simple examples will illustrate what we mean by a structured
approach.
Example 1. The input to a linear time-invariant system is r(t):
r(t) = s(t) + w(t), 0 ≤ t ≤ T,
     = 0, elsewhere. (25)
The impulse response of the system is h(τ). The signal s(t) is a known function with
energy Es,
Es = ∫_0^T s²(t) dt, (26)
and w(t) is a sample function from a zero-mean random process with a covariance
function:
Kw(t, u) = (N0/2) δ(t − u). (27)
We are concerned with the output of the system at time T. The output due to the
signal is a deterministic quantity:
s0(T) = ∫_0^T h(τ) s(T − τ) dτ. (28)
The output due to the noise is a random variable:
n0(T) = ∫_0^T h(τ) w(T − τ) dτ. (29)
We can define the output signal-to-noise ratio at time T as
S/N ≜ s0²(T) / E[n0²(T)], (30)
where E(·) denotes expectation.
Substituting (28) and (29) into (30), we obtain
S/N = [∫_0^T h(τ) s(T − τ) dτ]² / E[∫_0^T ∫_0^T h(τ) h(u) w(T − τ) w(T − u) dτ du]. (31)
By bringing the expectation inside the integral, using (27), and performing the
integration with respect to u, we have
S/N = [∫_0^T h(τ) s(T − τ) dτ]² / [(N0/2) ∫_0^T h²(τ) dτ]. (32)
The problem of interest is to choose h(τ) to maximize the signal-to-noise ratio.
The solution follows easily, but it is not important for our present discussion. (See
Problem 3.3.1.)
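A numerical check of (32) is straightforward. The sketch below (in Python; s(t) and N0 are arbitrary choices for illustration) evaluates a discrete version of (32) for the filter h(τ) = s(T − τ) and for an arbitrary competitor; by the Schwarz inequality the first choice attains the maximum value 2Es/N0, the familiar matched-filter result:

    import numpy as np

    rng = np.random.default_rng(3)
    fs, T = 1000, 1.0
    t = np.arange(0, T, 1 / fs)
    s = np.sin(2 * np.pi * 5 * t) * np.exp(-t)  # an arbitrary known signal s(t)
    N0 = 2.0

    def snr(h):
        # discrete version of (32)
        num = (np.dot(h, s[::-1]) / fs) ** 2
        den = (N0 / 2) * np.dot(h, h) / fs
        return num / den

    h_matched = s[::-1]                         # h(tau) = s(T - tau)
    h_other = rng.normal(0.0, 1.0, t.size)      # an arbitrary competing filter
    print(snr(h_matched), snr(h_other))         # the first equals 2Es/N0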
This example illustrates the three essential features of the structured
approach to a statistical optimization problem:
Structure. The processor was required to be a linear time-invariant
filter. We wanted to choose the best system in this class. Systems that were
not in this class (e.g., nonlinear or time-varying) were not allowed.
Criterion. In this case we wanted to maximize a quantity that we called
the signal-to-noise ratio.
Information. To write the expression for S/N we had to know the signal
shape and the covariance function of the noise process.
If we knew more about the process (e.g., its first-order probability
density), we could not use it, and if we knew less, we could not solve the
problem. Clearly, if we changed the criterion, the information required
might be different. For example, to maximize
χ = s0²(T) / E[n0⁴(T)], (33)
the covariance function of the noise process would not be adequate. Alter-
natively, if we changed the structure, the information required might
change. Thus the three ideas of structure, criterion, and information are
closely related. It is important to emphasize that the structured approach
does not imply a linear system, as illustrated by Example 2.
Example 2. The input to the nonlinear no-memory device shown in Fig. 1.11 is r(t),
where
r(t) = s(t) + n(t), −∞ < t < ∞. (34)
At any time t, s(t) is the value of a random variable s with known probability
density ps(S). Similarly, n(t) is the value of a statistically independent random
variable n with known density pn(N). The output of the device is y(t), where
y(t) = a0 + a1[r(t)] + a2[r(t)]² (35)
is a quadratic no-memory function of r(t). [The adjective no-memory emphasizes that
the value of y(t0) depends only on r(t0).] We want to choose the coefficients a0, a1, and
a2 so that y(t) is the minimum mean-square error estimate of s(t). The mean-square
error is
ξ(t) ≜ E{[y(t) − s(t)]²}
     = E({a0 + a1[r(t)] + a2[r(t)]² − s(t)}²) (36)
and a0, a1, and a2 are chosen to minimize ξ(t). The solution to this particular problem
is given in Chapter 3.
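The minimization in (36) is also easy to carry out numerically. The sketch below (in Python; the densities ps and pn are assumed, uniform and Gaussian respectively, purely for illustration) approximates the expectation by a sample average and fits a0, a1, a2 by least squares:

    import numpy as np

    rng = np.random.default_rng(4)
    n_samples = 100_000
    s = rng.uniform(-1.0, 1.0, n_samples)       # assumed density ps(S)
    n = rng.normal(0.0, 0.5, n_samples)         # assumed density pn(N), independent of s
    r = s + n                                   # r(t) = s(t) + n(t), per (34)

    # choose a0, a1, a2 to minimize the sample version of the mean-square error (36)
    Phi = np.column_stack([np.ones_like(r), r, r ** 2])
    a0, a1, a2 = np.linalg.lstsq(Phi, s, rcond=None)[0]
    y = a0 + a1 * r + a2 * r ** 2               # the quadratic estimate (35)
    print((a0, a1, a2), np.mean((y - s) ** 2))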
The technique for solving structured problems is conceptually straight-
forward. We allow the structure to vary within the allowed class and choose
the particular system that maximizes (or minimizes) the criterion of
interest.
An obvious advantage to the structured approach is that it usually
requires only a partial characterization of the processes. This is important
because, in practice, we must measure or calculate the process properties
needed.
An obvious disadvantage is that it is often impossible to tell if the struc-
ture chosen is correct. In Example 1 a simple nonlinear system might
}--1--~ y(t)
Nonlinear no-memory device
Fig. 1.11 A structured nonlinear device.
Nonstructured Approach 15
be far superior to the best linear system. Similarly, in Example 2 some
other nonlinear system might be far superior to the quadratic system.
Once a class of structure is chosen we are committed. A number of trivial
examples demonstrate the effect of choosing the wrong structure. We shall
encounter an important practical example when we study frequency
modulation in Chapter II-2.
At first glance it appears that one way to get around the problem of
choosing the proper structure is to let the structure be an arbitrary non-
linear time-varying system. In other words, the class of structure is chosen
to be so large that every possible system will be included in it. The difficulty
is that there is no convenient tool, such as the convolution integral, to
express the output of a nonlinear system in terms of its input. This means
that there is no convenient way to investigate all possible systems by using
a structured approach.
The alternative to the structured approach is a nonstructured approach.
Here we refuse to make any a priori guesses about what structure the
processor should have. We establish a criterion, solve the problem, and
implement whatever processing procedure is indicated.
A simple example of the nonstructured approach can be obtained by
modifying Example 2. Instead of assigning characteristics to the device,
we denote the estimate by y(t). Letting
ξ(t) ≜ E{[y(t) − s(t)]²}, (37)
we solve for the y(t) that is obtained from r(t) in any manner to minimize ξ.
The obvious advantage is that if we can solve the problem we know that
our answer is, with respect to the chosen criterion, the best processor of all
possible processors. The obvious disadvantage is that we must completely
characterize all the signals, channels, and noises that enter into the
problem. Fortunately, it turns out that there are a large number of
problems of practical importance in which this complete characterization
is possible. Throughout both books we shall emphasize the nonstructured
approach.
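For the criterion in (37) the unconstrained minimizer is the conditional mean E[s(t) | r(t)], a result established in Chapter 2. Continuing the assumed uniform-signal, Gaussian-noise model used above to illustrate Example 2, a minimal sketch (in Python) computes this estimate by numerical integration of Bayes' rule:

    import numpy as np

    rng = np.random.default_rng(5)
    s = rng.uniform(-1.0, 1.0, 5000)            # same assumed densities as before
    n = rng.normal(0.0, 0.5, 5000)
    r = s + n

    def cond_mean(R, sigma=0.5, grid=np.linspace(-1.0, 1.0, 401)):
        # E[s | r = R]: the posterior is proportional to pn(R - S) ps(S), ps uniform
        w = np.exp(-((R - grid) ** 2) / (2 * sigma ** 2))
        return np.trapz(grid * w, grid) / np.trapz(w, grid)

    y = np.array([cond_mean(R) for R in r])
    print("unconstrained mean-square error:", np.mean((y - s) ** 2))

Its mean-square error can be compared directly with that of the quadratic processor fitted earlier; the unconstrained estimate can never do worse.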
Our discussion up to this point has developed the topical and logical
basis of these books. We now discuss the actual organization.
1.3 ORGANIZATION
The material covered in this book and Volume II can be divided into
five parts. The first can be labeled Background and consists of Chapters 2
and 3. In Chapter 2 we develop in detail a topic that we call Classical
Detection and Estimation Theory. Here we deal with problems in which
the observations are sets of random variables instead of random wave-
forms. The theory needed to solve problems of this type has been studied
by statisticians for many years. We therefore use the adjective classical
to describe it. The purpose of the chapter is twofold: first, to derive all
the basic statistical results we need in the remainder of the chapters;
second, to provide a general background in detection and estimation theory
that can be extended into various areas that we do not discuss in detail.
To accomplish the second purpose we keep the discussion as general as
possible. We consider in detail the binary and M-ary hypothesis testing
problem, the problem of estimating random and nonrandom variables, and
the composite hypothesis testing problem. Two more specialized topics,
the general Gaussian problem and performance bounds on binary tests,
are developed as background for specific problems we shall encounter later.
The next step is to bridge the gap between the classical case and the
waveform problems discussed in Section 1.1. Chapter 3 develops the
necessary techniques. The key to the transition is a suitable method for
characterizing random processes. When the observation interval is finite,
the most useful characterization is by a series expansion of the random
process which is a generalization of the conventional Fourier series. When
the observation interval is infinite, a transform characterization, which is a
generalization of the usual Fourier transform, is needed. In the process of
developing these characterizations, we encounter integral equations and we
digress briefly to develop methods of solution. Just as in Chapter 2, our
discussion is general and provides background for other areas of
application.
With these two chapters in the first part as background, we are prepared
to work our way through the hierarchy of problems outlined in Figs. 1.4,
1.7, and 1.10. The second part of the book (Chapter 4) can be labeled
Elementary Detection and Estimation Theory. Here we develop the first
two levels described in Section 1.1. (This material corresponds to the
upper two levels in Figs. 1.4 and 1.7.) We begin by looking at the simple
binary digital communication system described in Fig. 1.1 and then
proceed to more complicated problems in the communications, radar, and
sonar area involving M-ary communication, random phase channels,
random amplitude and phase channels, and colored noise interference. By
exploiting the parallel nature of the estimation problem, results are
obtained easily for the estimation problem outlined in Fig. 1.5 and other
more complex systems. The extension of the results to include the multiple
channel (e.g., frequency diversity systems or arrays) and multiple parameter
(e.g., range and Doppler) problems completes our discussion. The results
in this chapter are fundamental to the understanding of modern com-
munication and radar/sonar systems.
The third part, which can be labeled Modulation Theory or Continuous
Estimation Theory, consists of Chapters 5 and 6 and Chapter 2 of Volume
II. In Chapter 5 we formulate a quantitative model for the first two levels
of the continuous waveform estimation problem and derive a set of
integral equations whose solution is the optimum estimate of the message.
We also derive equations that give bounds on the performance of the
estimators. In order to study solution techniques, we divide the estimation
problem into two categories, linear and nonlinear.
In Chapter 6 we study linear estimation problems in detail. In the first
section of the chapter we discuss the relationships between various criteria,
process characteristics, and the structure of the processor. In the next
section we discuss the special case in which the processes are stationary and
the infinite past is available. This case, the Wiener problem, leads to
straightforward solution techniques. The original work of Wiener is ex-
tended to obtain some important closed-form error expressions. In the
next section we discuss the case in which the processes can be characterized
by using state-variable techniques. This case, the Kalman-Bucy problem,
enables us to deal with nonstationary, finite-interval problems and adds
considerable insight to the results of the preceding section.
The material in Chapters 1 through 6 has two characteristics:
1. In almost all cases we can obtain explicit, exact solutions to the
problems that we formulate.
2. Most of the topics discussed are of such fundamental interest that
everyone concerned with the statistical design of communication, radar, or
sonar systems should be familiar with them.
As soon as we try to solve the nonlinear estimation problem, we see a
sharp departure. To obtain useful results we must resort to approximate
solution techniques. To decide what approximations are valid, however,
we must consider specific nonlinear modulation systems. Thus the precise
quantitative results are only applicable to the specific system. In view of
this departure, we pause briefly in our logical development and summarize
our results in Chapter 7.
After a brief introduction we return to the nonlinear modulation problem
in Chapter 2 of Volume II and consider angle modulation systems in great
detail. After an approximation to the optimum processor is developed, its
performance and possible design modification are analyzed both theoreti-
cally and experimentally. More advanced techniques from Markov process
theory and information theory are used to obtain significant results.
In the fourth part we revisit the problems of detection, estimation, and
modulation theory at the third level of the hierarchy described in Section
1.1. Looking at the bottom boxes in Figs. 1.4, 1.7, and 1.10, we see that
this is the Random Signals in Noise problem. Chapter II-3 studies it in
detail. We find that the linear processors developed in Chapter I-6 play a
fundamental role in the random signal problem. This result, coupled with
the corresponding result in Chapter II-2, emphasizes the fundamental im-
portance of the results in Chapter I-6. They also illustrate the inherent
unity of the various problems. Specific topics such as power-spectrum
parameter estimation and analog transmission over time-varying channels
are also developed.
The fifth part is labeled Applications and includes Chapters II-4 and II-5.
Throughout the two books we emphasize applications of the theory to
models of practical problems. In most of them the relation of the actual
physical situation can be explained in a page or two. The fifth part deals
with physical situations in which developing the model from the physical
situation is a central issue. Chapter II-4 studies the radar/sonar problem in
depth. It builds up a set of target and channel models, starting with slowly
fluctuating point targets and culminating in deep targets that fluctuate at
arbitrary rates. This set of models enables us to study the signal design
problem for radar and sonar, the resolution problem in mapping radars,
the effect of reverberation on sonar-system performance, estimation of
parameters of spread targets, communication over spread channels, and
other important problems.
In Chapter II-5 we study various multidimensional problems such as
multiplex communication systems and multivariable processing problems
encountered in continuous receiving apertures and optical systems. The
primary emphasis in the chapter is on optimum array processing in sonar
(or seismic) systems. Both active and passive sonar systems are discussed;
specific processor configurations are developed and their performance is
analyzed.
Finally, in Chapter II-6 we summarize some of the more important
results, mention some related topics that have been omitted, and suggest
areas of future research.
2
Classical Detection
and Estimation Theory
2.1 INTRODUCTION
In this chapter we develop in detail the basic ideas of classical detection and
estimation theory. The first step is to define the various terms.
The basic components of a simple decision-theory problem are shown in
Fig. 2.1. The first is a source that generates an output. In the simplest case
this output is one of two choices. We refer to them as hypotheses and label
them H0 and H1 in the two-choice case. More generally, the output might
be one of M hypotheses, which we label H0, H1, ..., HM−1. Some typical
source mechanisms are the following:
1. A digital communication system transmits information by sending
ones and zeros. When "one" is sent, we call it H1, and when "zero" is
sent, we call it H0.
2. In a radar system we look at a particular range and azimuth and try
Fig. 2.1 Components of a decision theory problem.
to decide whether a target is present; H1 corresponds to the presence of a
target and H0 corresponds to no target.
3. In a medical diagnosis problem we examine an electrocardiogram.
Here H1 could correspond to the patient having had a heart attack and H0
to the absence of one.
4. In a speaker classification problem we know the speaker is German,
British, or American and either male or female. There are six possible
hypotheses.
In the cases of interest to us we do not know which hypothesis is true.
The second component of the problem is a probabilistic transition
mechanism; the third is an observation space. The transition mechanism
Fig. 2.2 A simple decision problem: (a) model; (b) probability densities.
can be viewed as a device that knows which hypothesis is true. Based on
this knowledge, it generates a point in the observation space according to
some probability law.
A simple example to illustrate these ideas is given in Fig. 2.2. When H1 is
true, the source generates +1. When H0 is true, the source generates −1.
An independent discrete random variable n whose probability density is
shown in Fig. 2.2b is added to the source output. The sum of the source
output and n is the observed variable r.
Under the two hypotheses, we have
H1: r = 1 + n,
H0: r = −1 + n. (1)
The probability densities of r on the two hypotheses are shown in Fig.
2.2b. The observation space is one-dimensional, for any output can be
plotted on a line.
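The error behavior of any decision rule for this example can be checked by simulation. The sketch below (in Python) uses a Gaussian density for n in place of the density of Fig. 2.2b, and a threshold at zero, both purely as illustrative assumptions; criteria for choosing the rule are developed in Section 2.2:

    import numpy as np

    rng = np.random.default_rng(6)
    n_trials = 100_000
    hyp = rng.integers(0, 2, n_trials)              # 1 -> H1, 0 -> H0
    source = np.where(hyp == 1, 1.0, -1.0)          # source output, per (1)
    r = source + rng.normal(0.0, 1.0, n_trials)     # observed variable r
    decide = (r > 0).astype(int)                    # guess H1 whenever r > 0
    print("empirical error probability:", np.mean(decide != hyp))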
A related example is shown in Fig. 2.3a in which the source generates
two numbers in sequence. A random variable n1 is added to the first
number and an independent random variable n2 is added to the second.
Thus
H1: r1 = 1 + n1,
    r2 = 1 + n2,
H0: r1 = −1 + n1,
    r2 = −1 + n2. (2)
The joint probability density of r1 and r2 when H1 is true is shown in
Fig. 2.3b. The observation space is two-dimensional and any observation
can be represented as a point in a plane.
In this chapter we confine our discussion to problems in which the
observation space is finite-dimensional. In other words, the observations
consist of a set of N numbers and can be represented as a point in an
N-dimensional space. This is the class of problem that statisticians have
treated for many years. For this reason we refer to it as the classical
decision problem.
The fourth component of the detection problem is a decision rule. After
observing the outcome in the observation space we shall guess which
hypothesis was true, and to accomplish this we develop a decision rule that
assigns each point to one of the hypotheses. Suitable choices for decision
rules will depend on several factors which we discuss in detail later. Our
study will demonstrate how these four components fit together to form the
total decision (or hypothesis-testing) problem.
The classical estimation problem is closely related to the detection
problem. We describe it in detail later.
Fig. 2.3 A two-dimensional problem: (a) model; (b) probability density.
Organization. This chapter is organized in the following manner. In
Section 2.2 we study the binary hypothesis testing problem. Then in
Section 2.3 we extend the results to the case of M hypotheses. In Section
2.4 classical estimation theory is developed.
The problems that we encounter in Sections 2.2 and 2.3 are characterized
by the property that each source output corresponds to a different hypoth-
esis. In Section 2.5 we shall examine the composite hypothesis testing
problem. Here a number of source outputs are lumped together to form a
single hypothesis.
All of the developments through Section 2.5 deal with arbitrary prob-
ability transition mechanisms. In Section 2.6 we consider in detail a special
class of problems that will be useful in the sequel. We refer to it as the
general Gaussian class.
In many cases of practical importance we can develop the "optimum"
decision rule according to certain criteria but cannot evaluate how well the
test will work. In Section 2.7 we develop bounds and approximate expres-
sions for the performance that will be necessary for some of the later
chapters.
Finally, in Section 2.8 we summarize our results and indicate some of
the topics that we have omitted.
2.2 SIMPLE BINARY HYPOTHESIS TESTS
As a starting point we consider the decision problem in which each of
two source outputs corresponds to a hypothesis. Each hypothesis maps
into a point in the observation space. We assume that the observation
space corresponds to a set of N observations: r1, r2, r3, ..., rN. Thus each
set can be thought of as a point in an N-dimensional space and can be
denoted by a vector r:

\mathbf{r} \triangleq \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_N \end{bmatrix}. \quad (3)
The probabilistic transition mechanism generates points in accord with
the two known conditional probability densities p_{r|H_1}(R|H_1) and
p_{r|H_0}(R|H_0). The object is to use this information to develop a suitable
decision rule. To do this we must look at various criteria for making
decisions.
2.2.1 Decision Criteria
In the binary hypothesis problem we know that either H0 or H1 is true.
We shall confine our discussion to decision rules that are required to make
a choice. (An alternative procedure would be to allow decision rules with
three outputs (a) H0 true, (b) H1 true, (c) don't know.) Thus each time the
experiment is conducted one of four things can happen:
1. H0 true; choose H0.
2. H0 true; choose H1.
3. H1 true; choose H1.
4. H1 true; choose H0.
The first and third alternatives correspond to correct choices. The second
and fourth alternatives correspond to errors. The purpose of a decision
criterion is to attach some relative importance to the four possible courses
of action. It might be expected that the method of processing the received
data (r) would depend on the decision criterion we select. In this section
we show that for the two criteria of most interest, the Bayes and the
Neyman-Pearson, the operations on r are identical.
Bayes Criterion. A Bayes test is based on two assumptions. The first is
that the source outputs are governed by probability assignments, which are
denoted by P1 and P0, respectively, and called the a priori probabilities.
These probabilities represent the observer's information about the source
before the experiment is conducted. The second assumption is that a cost is
assigned to each possible course of action. We denote the cost for the four
courses of action as C00, C10, C11, C01, respectively. The first subscript
indicates the hypothesis chosen and the second, the hypothesis that was
true. Each time the experiment is conducted a certain cost will be incurred.
We should like to design our decision rule so that on the average the cost
will be as small as possible. To do this we first write an expression for the
expected value of the cost. We see that there are two probabilities that we
must average over; the a priori probability and the probability that a
particular course of action will be taken. Denoting the expected value of
the cost as the risk ℛ, we have:

ℛ = C00 P0 Pr(say H0 | H0 is true)
  + C10 P0 Pr(say H1 | H0 is true)
  + C11 P1 Pr(say H1 | H1 is true)
  + C01 P1 Pr(say H0 | H1 is true). \quad (4)
Because we have assumed that the decision rule must say either H1 or
H0 , we can view it as a rule for dividing the total observation space Z into
two parts, Z0 and Z1, as shown in Fig. 2.4. Whenever an observation falls
in Z0 we say H0, and whenever an observation falls in Z1 we say H1.
Fig. 2.4 Decision regions.
We can now write the expression for the risk in terms of the transition
probabilities and the decision regions:
\mathcal{R} = C_{00} P_0 \int_{Z_0} p_{r|H_0}(R|H_0)\,dR
+ C_{10} P_0 \int_{Z_1} p_{r|H_0}(R|H_0)\,dR
+ C_{11} P_1 \int_{Z_1} p_{r|H_1}(R|H_1)\,dR
+ C_{01} P_1 \int_{Z_0} p_{r|H_1}(R|H_1)\,dR. \quad (5)

For an N-dimensional observation space the integrals in (5) are N-fold
integrals.
We shall assume throughout our work that the cost of a wrong decision
is higher than the cost of a correct decision. In other words,
C_{10} > C_{00},
C_{01} > C_{11}. \quad (6)
Now, to find the Bayes test we must choose the decision regions Z0 and
Z1 in such a manner that the risk will be minimized. Because we require
that a decision be made, this means that we must assign each point R in
the observation space Z to Z0 or Z1. Thus

Z = Z_0 + Z_1 \triangleq Z_0 \cup Z_1. \quad (7)
Rewriting (5), we have

\mathcal{R} = P_0 C_{00} \int_{Z_0} p_{r|H_0}(R|H_0)\,dR
+ P_0 C_{10} \int_{Z - Z_0} p_{r|H_0}(R|H_0)\,dR
+ P_1 C_{11} \int_{Z - Z_0} p_{r|H_1}(R|H_1)\,dR
+ P_1 C_{01} \int_{Z_0} p_{r|H_1}(R|H_1)\,dR. \quad (8)

Observing that

\int_{Z} p_{r|H_0}(R|H_0)\,dR = \int_{Z} p_{r|H_1}(R|H_1)\,dR = 1, \quad (9)

(8) reduces to

\mathcal{R} = P_0 C_{10} + P_1 C_{11}
+ \int_{Z_0} \left\{ \left[ P_1 (C_{01} - C_{11})\, p_{r|H_1}(R|H_1) \right]
- \left[ P_0 (C_{10} - C_{00})\, p_{r|H_0}(R|H_0) \right] \right\} dR. \quad (10)
The first two terms represent the fixed cost. The integral represents the
cost controlled by those points R that we assign to Z0• The assumption in
(6) implies that the two terms inside the brackets are positive. Therefore
all values of R where the second term is larger than the first should be
included in Z0 because they contribute a negative amount to the integral.
Similarly, all values of R where the first term is larger than the second
should be excluded from Z0 (assigned to Z1) because they would contribute
a positive amount to the integral. Values of R where the two terms are
equal have no effect on the cost and may be assigned arbitrarily. We shall
assume that these points are assigned to H1 and ignore them in our sub-
sequent discussion. Thus the decision regions are defined by the statement:
If

P_1 (C_{01} - C_{11})\, p_{r|H_1}(R|H_1) \geq P_0 (C_{10} - C_{00})\, p_{r|H_0}(R|H_0), \quad (11)

assign R to Z1 and consequently say that H1 is true. Otherwise assign R
to Z0 and say H0 is true.
Alternately, we may write

\frac{p_{r|H_1}(R|H_1)}{p_{r|H_0}(R|H_0)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{P_0 (C_{10} - C_{00})}{P_1 (C_{01} - C_{11})}. \quad (12)
The quantity on the left is called the likelihood ratio and denoted by Λ(R):

\Lambda(R) \triangleq \frac{p_{r|H_1}(R|H_1)}{p_{r|H_0}(R|H_0)}. \quad (13)
Because it is the ratio of two functions of a random variable, it is a
random variable. We see that regardless of the dimensionality of R, A(R)
is a one-dimensional variable.
The quantity on the right of (12) is the threshold of the test and is
denoted by η:

\eta \triangleq \frac{P_0 (C_{10} - C_{00})}{P_1 (C_{01} - C_{11})}. \quad (14)

Thus Bayes criterion leads us to a likelihood ratio test (LRT):

\Lambda(R) \underset{H_0}{\overset{H_1}{\gtrless}} \eta. \quad (15)
We see that all the data processing is involved in computing A(R) and
is not affected by a priori probabilities or cost assignments. This invariance
of the data processing is of considerable practical importance. Frequently
the costs and a priori probabilities are merely educated guesses. The result
in (15) enables us to build the entire processor and leave η as a variable
threshold to accommodate changes in our estimates of a priori probabilities
and costs.
Because the natural logarithm is a monotonic function, and both sides
of (15) are positive, an equivalent test is

\ln \Lambda(R) \underset{H_0}{\overset{H_1}{\gtrless}} \ln \eta. \quad (16)
Two forms of a processor to implement a likelihood ratio test are
shown in Fig. 2.5.
Before proceeding to other criteria, we consider three simple examples.
Example 1. We assume that under H1 the source output is a constant voltage m.
Under H0 the source output is zero. Before observation the voltage is corrupted by
an additive noise. We sample the output waveform each second and obtain N samples.
Each noise sample is a zero-mean Gaussian random variable n with variance σ². The
noise samples at various instants are independent random variables and are
independent of the source output. Looking at Fig. 2.6, we see that the observations
under the two hypotheses are

H_1: r_i = m + n_i, \quad i = 1, 2, \ldots, N,
and
H_0: r_i = n_i, \quad i = 1, 2, \ldots, N, \quad (17)

where

p_{n_i}(X) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{X^2}{2\sigma^2}\right), \quad (18)

because the noise samples are Gaussian.
The probability density of r_i under each hypothesis follows easily:

p_{r_i|H_1}(R_i|H_1) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(R_i - m)^2}{2\sigma^2}\right) \quad (19)

and

p_{r_i|H_0}(R_i|H_0) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{R_i^2}{2\sigma^2}\right). \quad (20)

Fig. 2.5 Likelihood ratio processors: (a) Λ(R) compared with a threshold η; (b) ln Λ(R) compared with ln η.
Fig. 2.6 Model for Example 1.
Because the n_i are statistically independent, the joint probability density of the r_i
(or, equivalently, of the vector r) is simply the product of the individual probability
densities. Thus

p_{r|H_1}(R|H_1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(R_i - m)^2}{2\sigma^2}\right) \quad (21)

and

p_{r|H_0}(R|H_0) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{R_i^2}{2\sigma^2}\right). \quad (22)

Substituting into (13), we have

\Lambda(R) = \frac{\displaystyle\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(R_i - m)^2}{2\sigma^2}\right)}{\displaystyle\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{R_i^2}{2\sigma^2}\right)}. \quad (23)

After canceling common terms and taking the logarithm, we have

\ln \Lambda(R) = \frac{m}{\sigma^2} \sum_{i=1}^{N} R_i - \frac{Nm^2}{2\sigma^2}. \quad (24)

Thus the likelihood ratio test is

\frac{m}{\sigma^2} \sum_{i=1}^{N} R_i - \frac{Nm^2}{2\sigma^2} \underset{H_0}{\overset{H_1}{\gtrless}} \ln \eta \quad (25)

or, equivalently,

\sum_{i=1}^{N} R_i \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\sigma^2}{m} \ln \eta + \frac{Nm}{2} \triangleq \gamma. \quad (26)
We see that the processor simply adds the observations and compares them with a
threshold.
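The sum test (26) is simple enough to check by simulation. The following is a minimal Python sketch (the values of m, σ, N, and η below are illustrative assumptions, not values from the text) that estimates P_F and P_D for the threshold γ of (26) by Monte Carlo:

import numpy as np

rng = np.random.default_rng(0)
m, sigma, N, eta = 1.0, 1.0, 16, 1.0              # assumed parameters
gamma = (sigma**2 / m) * np.log(eta) + N * m / 2  # threshold from (26)

trials = 100_000
sum_h0 = rng.normal(0.0, sigma, (trials, N)).sum(axis=1)  # r_i = n_i under H0
sum_h1 = rng.normal(m, sigma, (trials, N)).sum(axis=1)    # r_i = m + n_i under H1
print(f"P_F ~ {np.mean(sum_h0 > gamma):.3f}")  # fraction of H0 trials that say H1
print(f"P_D ~ {np.mean(sum_h1 > gamma):.3f}")  # fraction of H1 trials that say H1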
In this example the only way the data appear in the likelihood ratio test
is in a sum. This is an example of a sufficient statistic, which we denote by
l(R) (or simply l when the argument is obvious). It is just a function of the
received data which has the property that Λ(R) can be written as a function
of l. In other words, when making a decision, knowing the value of the
sufficient statistic is just as good as knowing R. In Example 1, l is a linear
function of the R_i. A case in which this is not true is illustrated in Example 2.
Example 2. Several different physical situations lead to the mathematical model of
interest in this example. The observations consist of a set of N values: r1, r2, r3, ..., rN.
Under both hypotheses, the r_i are independent, identically distributed, zero-mean
Gaussian random variables. Under H1 each r_i has a variance σ1². Under H0 each r_i
has a variance σ0². Because the variables are independent, the joint density is simply
the product of the individual densities. Therefore

p_{r|H_1}(R|H_1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_1} \exp\left(-\frac{R_i^2}{2\sigma_1^2}\right) \quad (27)

and

p_{r|H_0}(R|H_0) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_0} \exp\left(-\frac{R_i^2}{2\sigma_0^2}\right). \quad (28)

Substituting (27) and (28) into (13) and taking the logarithm, we have

\ln \Lambda(R) = \frac{1}{2}\left(\frac{1}{\sigma_0^2} - \frac{1}{\sigma_1^2}\right) \sum_{i=1}^{N} R_i^2 + N \ln \frac{\sigma_0}{\sigma_1}. \quad (29)

In this case the sufficient statistic is the sum of the squares of the observations,

l(R) = \sum_{i=1}^{N} R_i^2, \quad (30)

and an equivalent test for \sigma_1^2 > \sigma_0^2 is

l(R) \underset{H_0}{\overset{H_1}{\gtrless}} \frac{2\sigma_0^2 \sigma_1^2}{\sigma_1^2 - \sigma_0^2} \left(\ln \eta + N \ln \frac{\sigma_1}{\sigma_0}\right) \triangleq \gamma. \quad (31)

For \sigma_1^2 < \sigma_0^2 the inequality is reversed because we are multiplying by a negative
number:

l(R) \underset{H_1}{\overset{H_0}{\gtrless}} \frac{2\sigma_0^2 \sigma_1^2}{\sigma_0^2 - \sigma_1^2} \left(N \ln \frac{\sigma_0}{\sigma_1} - \ln \eta\right) \triangleq \gamma. \quad (32)
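A parallel sketch for this example (σ0, σ1, N, and η are again assumed for illustration) simulates the sum-of-squares statistic (30) against the threshold of (31):

import numpy as np

rng = np.random.default_rng(1)
sigma0, sigma1, N, eta = 1.0, 2.0, 8, 1.0         # assumed, with sigma1 > sigma0
gamma = (2 * sigma0**2 * sigma1**2 / (sigma1**2 - sigma0**2)) \
        * (np.log(eta) + N * np.log(sigma1 / sigma0))  # threshold from (31)

trials = 100_000
l_h0 = (rng.normal(0, sigma0, (trials, N))**2).sum(axis=1)  # l(R) under H0
l_h1 = (rng.normal(0, sigma1, (trials, N))**2).sum(axis=1)  # l(R) under H1
print(f"P_F ~ {np.mean(l_h0 > gamma):.3f}, P_D ~ {np.mean(l_h1 > gamma):.3f}")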
These two examples have emphasized Gaussian variables. In the next
example we consider a different type of distribution.
Example 3. The Poisson distribution of events is encountered frequently as a model of
shot noise and other diverse phenomena (e.g., [1] or [2]). Each time the experiment is
conducted a certain number of events occur. Our observation is just this number,
which ranges from 0 to ∞ and obeys a Poisson distribution on both hypotheses; that is,

\Pr(n \text{ events}) = \frac{m_i^n}{n!} e^{-m_i}, \quad n = 0, 1, 2, \ldots, \quad i = 0, 1, \quad (33)

where m_i is the parameter that specifies the average number of events:

E(n) = m_i. \quad (34)
It is this parameter m_i that is different in the two hypotheses. Rewriting (33) to
emphasize this point, we have for the two Poisson distributions

H_0: \Pr(n \text{ events}) = \frac{m_0^n}{n!} e^{-m_0}, \quad n = 0, 1, 2, \ldots, \quad (35)

H_1: \Pr(n \text{ events}) = \frac{m_1^n}{n!} e^{-m_1}, \quad n = 0, 1, 2, \ldots. \quad (36)

Then the likelihood ratio test is

\Lambda(n) = \left(\frac{m_1}{m_0}\right)^n \exp\left[-(m_1 - m_0)\right] \underset{H_0}{\overset{H_1}{\gtrless}} \eta \quad (37)

or, equivalently,

n \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\ln \eta + m_1 - m_0}{\ln m_1 - \ln m_0}. \quad (38)
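A short sketch of the test in (38) follows (m0, m1, and η are assumed values). Because the observation is an integer, the effective threshold is the smallest integer at or above γ, with boundary points assigned to H1 as in our earlier convention:

import math

m0, m1, eta = 2.0, 4.0, 1.0                                # assumed values
gamma = (math.log(eta) + m1 - m0) / (math.log(m1) - math.log(m0))  # from (38)
k = math.ceil(gamma)                                       # say H1 when n >= k

def tail(m, k):
    """Pr(n >= k) for a Poisson random variable with mean m."""
    return 1.0 - sum(math.exp(-m) * m**j / math.factorial(j) for j in range(k))

print(f"gamma = {gamma:.3f}, P_F = {tail(m0, k):.3f}, P_D = {tail(m1, k):.3f}")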
This example illustrates how the likelihood ratio test which we originally
wrote in terms of probability densities can be simply adapted to accom-
modate observations that are discrete random variables. We now return
to our general discussion of Bayes tests.
There are several special kinds of Bayes test which are frequently used
and which should be mentioned explicitly.
If we assume that C00 and C11 are zero and C01 = C10 = 1, the expression
for the risk in (8) reduces to

\mathcal{R} = P_0 \int_{Z_1} p_{r|H_0}(R|H_0)\,dR + P_1 \int_{Z_0} p_{r|H_1}(R|H_1)\,dR. \quad (39)

We see that (39) is just the total probability of making an error. Therefore
for this cost assignment the Bayes test is minimizing the total
probability of error. The test is

\ln \Lambda(R) \underset{H_0}{\overset{H_1}{\gtrless}} \ln \frac{P_0}{P_1} = \ln P_0 - \ln(1 - P_0). \quad (40)
When the two hypotheses are equally likely, the threshold is zero. This
assumption is normally true in digital communication systems. These
processors are commonly referred to as minimum probability of error
receivers.
A second special case of interest arises when the a priori probabilities
are unknown. To investigate this case we look at (8) again. We observe
that once the decision regions Z0 and Z1 are chosen, the values of the
integrals are determined. We denote these values in the following manner:
P_F \triangleq \int_{Z_1} p_{r|H_0}(R|H_0)\,dR,
P_D \triangleq \int_{Z_1} p_{r|H_1}(R|H_1)\,dR, \quad (41)
P_M \triangleq \int_{Z_0} p_{r|H_1}(R|H_1)\,dR = 1 - P_D.
We see that these quantities are conditional probabilities. The subscripts
are mnemonic and chosen from the radar problem in which hypothesis H1
corresponds to the presence of a target and hypothesis H0 corresponds to
its absence. PF is the probability of a false alarm (i.e., we say the target is
present when it is not); PD is the probability of detection (i.e., we say the
target is present when it is); PM is the probability of a miss (we say the
target is absent when it is present). Although we are interested in a much
larger class of problems than this notation implies, we shall use it for
convenience.
For any choice of decision regions the risk expression in (8) can be
written in the notation of (41):

\mathcal{R} = P_0 C_{10} + P_1 C_{11} + P_1 (C_{01} - C_{11}) P_M - P_0 (C_{10} - C_{00})(1 - P_F). \quad (42)

Because

P_1 = 1 - P_0, \quad (43)

(42) becomes

\mathcal{R}(P_1) = C_{00}(1 - P_F) + C_{10} P_F
+ P_1 \left[ (C_{11} - C_{00}) + (C_{01} - C_{11}) P_M - (C_{10} - C_{00}) P_F \right]. \quad (44)
Now, if all the costs and a priori probabilities are known, we can find a
Bayes test. In Fig. 2.7a we plot the Bayes risk, ℛ_B(P1), as a function of P1.
Observe that as P1 changes the decision regions for the Bayes test change
and therefore PF and PM change.
Now consider the situation in which a certain P1 (say P1 = P1*) is
assumed and the corresponding Bayes test designed. We now fix the
threshold and assume that P1 is allowed to change. We denote the risk for
this fixed threshold test as ℛ_F(P1*, P1). Because the threshold is fixed, PF
and PM are fixed, and (44) is just a straight line. Because it is a Bayes test
for P1 = P1*, it touches the ℛ_B(P1) curve at that point. Looking at (14),
we see that the threshold changes continuously with P1. Therefore, whenever
P1 ≠ P1*, the threshold in the Bayes test will be different. Because the
Bayes test minimizes the risk,

\mathcal{R}_F(P_1^*, P_1) \geq \mathcal{R}_B(P_1). \quad (45)
Fig. 2.7 Risk curves: (a) fixed risk versus typical Bayes risk; (b)-(d) maximum value of ℛ_B(P1) at P1 = 0, at P1 = 1, and interior to the interval.
If Λ is a continuous random variable with a probability distribution
function that is strictly monotonic, then changing η always changes the
risk. ℛ_B(P1) is strictly concave downward and the inequality in (45) is
strict. This case, which is one of particular interest to us, is illustrated in
Fig. 2.7a. We see that ℛ_F(P1*, P1) is tangent to ℛ_B(P1) at P1 = P1*. These
curves demonstrate the effect of incorrect knowledge of the a priori
probabilities.
An interesting problem is encountered if we assume that the a priori
probabilities are chosen to make our performance as bad as possible. In
other words, P1 is chosen to maximize our risk ℛ_F(P1*, P1). Three possible
examples are given in Figs. 2.7b, c, and d. In Fig. 2.7b the maximum of
ℛ_B(P1) occurs at P1 = 0. To minimize the maximum risk we use a Bayes
test designed assuming P1 = 0. In Fig. 2.7c the maximum of ℛ_B(P1) occurs
at P1 = 1. To minimize the maximum risk we use a Bayes test designed
assuming P1 = 1. In Fig. 2.7d the maximum occurs inside the interval
[0, 1], and we choose ℛ_F to be the horizontal line. This implies that the
coefficient of P1 in (44) must be zero:

(C_{11} - C_{00}) + (C_{01} - C_{11}) P_M - (C_{10} - C_{00}) P_F = 0. \quad (46)

A Bayes test designed to minimize the maximum possible risk is called a
minimax test. Equation 46 is referred to as the minimax equation and is
useful whenever the maximum of ℛ_B(P1) is interior to the interval.
A special cost assignment that is frequently logical is

C_{00} = C_{11} = 0 \quad (47)

(this guarantees that the maximum is interior). Denoting

C_{01} = C_M, \quad C_{10} = C_F, \quad (48)

the risk is

\mathcal{R}_F = C_F P_F + P_1 (C_M P_M - C_F P_F) = P_0 C_F P_F + P_1 C_M P_M, \quad (49)

and the minimax equation is

C_M P_M = C_F P_F. \quad (50)
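For the Gaussian problem of Example 1 the minimax threshold can be found numerically from (50). The sketch below (d, C_M, and C_F are assumed values) anticipates the expressions (67) and (68) of Section 2.2.2, which give P_F and P_D as Gaussian tail integrals, and bisects on ln η:

import math

def erfc_star(x):                      # Gaussian tail integral, see (66) below
    return 0.5 * math.erfc(x / math.sqrt(2))

d, C_M, C_F = 2.0, 1.0, 1.0            # assumed values

def g(ln_eta):                         # C_M P_M - C_F P_F, increasing in ln_eta
    P_F = erfc_star(ln_eta / d + d / 2)
    P_M = 1.0 - erfc_star(ln_eta / d - d / 2)
    return C_M * P_M - C_F * P_F

lo, hi = -20.0, 20.0                   # bisection bracket on ln(eta)
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
print(f"minimax ln(eta) ~ {lo:.4f}")   # ~0 by symmetry when C_M = C_F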
Before continuing our discussion of likelihood ratio tests we shall discuss
a second criterion and prove that it also leads to a likelihood ratio test.
Neyman-Pearson Tests. In many physical situations it is difficult to
assign realistic costs or a priori probabilities. A simple procedure to
bypass this difficulty is to work with the conditional probabilities PF and PD.
In general, we should like to make PF as small as possible and PD as large
as possible. For most problems of practical importance these are
conflicting objectives. An obvious criterion is to constrain one of the
probabilities and maximize (or minimize) the other. A specific statement of this
criterion is the following:
Neyman-Pearson Criterion. Constrain PF = α′ ≤ α and design a test to
maximize PD (or minimize PM) under this constraint.
The solution is obtained easily by using Lagrange multipliers. We
construct the function F,

F = P_M + \lambda \left[ P_F - \alpha' \right], \quad (51)

or

F = \int_{Z_0} p_{r|H_1}(R|H_1)\,dR + \lambda \left[ \int_{Z_1} p_{r|H_0}(R|H_0)\,dR - \alpha' \right]. \quad (52)

Clearly, if PF = α′, then minimizing F minimizes PM.
or

F = \lambda (1 - \alpha') + \int_{Z_0} \left[ p_{r|H_1}(R|H_1) - \lambda\, p_{r|H_0}(R|H_0) \right] dR. \quad (53)
Now observe that for any positive value of λ an LRT will minimize F.
(A negative value of λ gives an LRT with the inequalities reversed.)
This follows directly, because to minimize F we assign a point R to Z0
only when the term in the bracket is negative. This is equivalent to the test

\frac{p_{r|H_1}(R|H_1)}{p_{r|H_0}(R|H_0)} < \lambda, \quad \text{assign point to } Z_0 \text{ or say } H_0. \quad (54)
The quantity on the left is just the likelihood ratio. Thus F is minimized
by the likelihood ratio test

\Lambda(R) \underset{H_0}{\overset{H_1}{\gtrless}} \lambda. \quad (55)
To satisfy the constraint we choose λ so that PF = α′. If we denote the
density of Λ when H0 is true as p_{Λ|H_0}(Λ|H_0), then we require

P_F = \int_{\lambda}^{\infty} p_{\Lambda|H_0}(\Lambda|H_0)\,d\Lambda = \alpha'. \quad (56)

Solving (56) for λ gives the threshold. The value of λ given by (56) will be
non-negative because p_{Λ|H_0}(Λ|H_0) is zero for negative values of Λ. Observe
that decreasing λ is equivalent to increasing Z1, the region where we say
H1. Thus PD increases as λ decreases. Therefore we decrease λ until we
obtain the largest possible α′ ≤ α. In most cases of interest to us PF is a
continuous function of λ and we have PF = α. We shall assume this
continuity in all subsequent discussions. Under this assumption the Neyman-
Pearson criterion leads to a likelihood ratio test. On p. 41 we shall see the
effect of the continuity assumption not being valid.
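As a concrete sketch of this procedure for the Gaussian problem of Example 1 (the constraint α and the distance d = √N m/σ are assumed values), the threshold follows directly from the constraint, again anticipating (67) and (68) of Section 2.2.2:

from statistics import NormalDist

alpha, d = 0.05, 2.0                      # assumed constraint and distance
x = NormalDist().inv_cdf(1 - alpha)       # point where the Gaussian tail equals alpha
ln_eta = d * (x - d / 2)                  # invert P_F = erfc_*((ln eta)/d + d/2)
P_D = 1 - NormalDist().cdf(x - d)         # resulting detection probability
print(f"ln(eta) = {ln_eta:.3f}, P_D = {P_D:.3f}")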
Summary. In this section we have developed two ideas of fundamental
importance in hypothesis testing. The first result is the demonstration that
for a Bayes or a Neyman-Pearson criterion the optimum test consists of
processing the observation R to find the likelihood ratio A(R) and then
comparing A(R) to a threshold in order to make a decision. Thus, regard-
less of the dimensionality of the observation space, the decision space is
one-dimensional.
The second idea is that of a sufficient statistic l(R). The idea of a sufficient
statistic originated when we constructed the likelihood ratio and saw that
it depended explicitly only on l(R). If we actually construct Λ(R) and then
recognize l(R), the notion of a sufficient statistic is perhaps of secondary
value. A more important case is when we can recognize l(R) directly. An
easy way to do this is to examine the geometric interpretation of a sufficient
statistic. We considered the observations r1, r2, ..., rN as a point r in an
N-dimensional space, and one way to describe this point is to use these
coordinates. When we choose a sufficient statistic, we are simply describing
the point in a coordinate system that is more useful for the decision
problem. We denote the first coordinate in this system by l, the sufficient
statistic, and the remaining N − 1 coordinates which will not affect our
decision by the (N − 1)-dimensional vector y. Thus

\Lambda(R) = \Lambda(L, Y) = \frac{p_{l,y|H_1}(L, Y|H_1)}{p_{l,y|H_0}(L, Y|H_0)}. \quad (57)
Now the expression on the right can be written as

\Lambda(L, Y) = \frac{p_{l|H_1}(L|H_1)\; p_{y|l,H_1}(Y|L, H_1)}{p_{l|H_0}(L|H_0)\; p_{y|l,H_0}(Y|L, H_0)}. \quad (58)

If l is a sufficient statistic, then Λ(R) must reduce to Λ(L). This implies
that the second terms in the numerator and denominator must be equal.
In other words,

p_{y|l,H_1}(Y|L, H_1) = p_{y|l,H_0}(Y|L, H_0), \quad (59)
because the density of y cannot depend on which hypothesis is true. We
see that choosing a sufficient statistic simply amounts to picking a
coordinate system in which one coordinate contains all the information
necessary to make a decision. The other coordinates contain no
information and can be disregarded for the purpose of making a decision.
In Example 1 the new coordinate system could be obtained by a simple
rotation. For example, when N = 2,

L = \frac{1}{\sqrt{2}} (R_1 + R_2),
Y = \frac{1}{\sqrt{2}} (R_1 - R_2). \quad (60)
In Example 2 the new coordinate system corresponded to changing to
polar coordinates. For N = 2,

L = R_1^2 + R_2^2,
Y = \tan^{-1} \frac{R_2}{R_1}. \quad (61)
Notice that the vector y can be chosen in order to make the demonstration
of the condition in (59) as simple as possible. The only requirement is
that the pair (l, y) must describe any point in the observation space. We
should also observe that the condition

p_{y|H_1}(Y|H_1) = p_{y|H_0}(Y|H_0) \quad (62)
does not imply (59) unless l and y are independent under H1 and H0.
Frequently we will choose y to obtain this independence and then use (62)
to verify that l is a sufficient statistic.
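A quick numerical sketch of the rotation in (60) (the parameter values and sample counts are assumptions for illustration): the coordinate Y has the same density under both hypotheses, which is exactly the condition (62):

import numpy as np

rng = np.random.default_rng(3)
m, sigma, trials = 1.0, 1.0, 100_000
r_h0 = rng.normal(0.0, sigma, (trials, 2))     # observations under H0
r_h1 = rng.normal(m, sigma, (trials, 2))       # observations under H1
Y0 = (r_h0[:, 0] - r_h0[:, 1]) / np.sqrt(2)    # ancillary coordinate under H0
Y1 = (r_h1[:, 0] - r_h1[:, 1]) / np.sqrt(2)    # ancillary coordinate under H1
# Both are N(0, sigma^2); the mean m cancels in the difference.
print(f"H0: mean {Y0.mean():+.3f}, var {Y0.var():.3f}")
print(f"H1: mean {Y1.mean():+.3f}, var {Y1.var():.3f}")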
2.2.2 Performance: Receiver Operating Characteristic
To complete our discussion of the simple binary problem we must
evaluate the performance of the likelihood ratio test. For a Neyman-
Pearson test the values of PF and PD completely specify the test
performance. Looking at (42) we see that the Bayes risk ℛ_B follows easily if PF and
PD are known. Thus we can concentrate our efforts on calculating PF and
PD.
We begin by considering Example 1 in Section 2.2.1.
Example 1. From (25) we see that an equivalent test is

l = \frac{1}{\sqrt{N}\,\sigma} \sum_{i=1}^{N} R_i \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\sigma}{\sqrt{N}\,m} \ln \eta + \frac{\sqrt{N}\,m}{2\sigma}. \quad (63)

Fig. 2.8 Error probabilities: (a) P_F calculation; (b) P_D calculation.
We have multiplied (25) by σ/(√N m) to normalize the next calculation. Under H0,
l is obtained by adding N independent zero-mean Gaussian variables with variance
σ² and then dividing by √N σ. Therefore l is N(0, 1).
Under H1, l is N(√N m/σ, 1). The probability densities on the two hypotheses are
sketched in Fig. 2.8a. The threshold is also shown. Now, PF is simply the integral of
p_{l|H_0}(L|H_0) to the right of the threshold. Thus

P_F = \int_{(\ln \eta)/d + d/2}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right) dx, \quad (64)

where d ≜ √N m/σ is the distance between the means of the two densities. The
integral in (64) is tabulated in many references (e.g., [3] or [4]).
We generally denote

\text{erf}_*(X) \triangleq \int_{-\infty}^{X} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right) dx, \quad (65)

where erf_* is an abbreviation for the error function† and

\text{erfc}_*(X) \triangleq \int_{X}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right) dx \quad (66)

is its complement. In this notation

P_F = \text{erfc}_*\left(\frac{\ln \eta}{d} + \frac{d}{2}\right). \quad (67)
Similarly, PD is the integral of p_{l|H_1}(L|H_1) to the right of the threshold, as shown in
Fig. 2.8b:

P_D = \int_{(\ln \eta)/d + d/2}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left[-\frac{(x - d)^2}{2}\right] dx
= \int_{(\ln \eta)/d - d/2}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{y^2}{2}\right) dy
\triangleq \text{erfc}_*\left(\frac{\ln \eta}{d} - \frac{d}{2}\right). \quad (68)
In Fig. 2.9a we have plotted PD versus PF for various values of d with η as the varying
parameter. For η = 0, ln η = −∞, and the processor always guesses H1. Thus PF = 1
and PD = 1. As η increases, PF and PD decrease. When η = ∞, the processor always
guesses H0 and PF = PD = 0.
As we would expect from Fig. 2.8, the performance increases monotonically with d.
In Fig. 2.9b we have replotted the results to give PD versus d with PF as a parameter
on the curves. For a particular d we can obtain any point on the curve by choosing η
appropriately (0 ≤ η ≤ ∞).
The result in Fig. 2.9a is referred to as the receiver operating characteristic (ROC).
It completely describes the performance of the test as a function of the parameter of
interest.
A special case that will be important when we look at communication systems is
the case in which we want to minimize the total probability of error,

\Pr(\epsilon) = P_0 P_F + P_1 P_M. \quad (69a)

† The function that is usually tabulated is \text{erf}(X) = \frac{2}{\sqrt{\pi}} \int_0^X \exp(-y^2)\,dy, which is
related to (65) in an obvious way.
Fig. 2.9 (a) Receiver operating characteristic: Gaussian variables with unequal means.
The threshold for this criterion was given in (40). For the special case in which
P0 = P1 the threshold η equals one and

\Pr(\epsilon) = \tfrac{1}{2}(P_F + P_M). \quad (69b)

Using (67) and (68) in (69), we have

\Pr(\epsilon) = \int_{d/2}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right) dx = \text{erfc}_*\left(\frac{d}{2}\right). \quad (70)

It is obvious from (70) that we could also obtain the Pr(ε) from the ROC. However,
if this is the only threshold setting of interest, it is generally easier to calculate the
Pr(ε) directly.
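The ROC of Fig. 2.9a and the error probability (70) are both easy to evaluate numerically. A small sketch (the value of d and the grid of thresholds are assumed):

import math

def erfc_star(x):                             # see (66)
    return 0.5 * math.erfc(x / math.sqrt(2))

d = 2.0                                       # assumed distance
for ln_eta in (-2.0, -1.0, 0.0, 1.0, 2.0):
    P_F = erfc_star(ln_eta / d + d / 2)       # (67)
    P_D = erfc_star(ln_eta / d - d / 2)       # (68)
    print(f"ln(eta) = {ln_eta:+.1f}: P_F = {P_F:.3f}, P_D = {P_D:.3f}")
print(f"Pr(eps) for eta = 1, P0 = P1: {erfc_star(d / 2):.3f}")   # (70)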
Before calculating the performance of the other two examples, it is
worthwhile to point out two simple bounds on erfc_*(X). They will enable
us to discuss its approximate behavior analytically. For X > 0,

\frac{1}{\sqrt{2\pi}\,X}\left(1 - \frac{1}{X^2}\right) \exp\left(-\frac{X^2}{2}\right) < \text{erfc}_*(X) < \frac{1}{\sqrt{2\pi}\,X} \exp\left(-\frac{X^2}{2}\right). \quad (71)

This can be derived by integrating by parts. (See Problem 2.2.15 or Feller
[30].) A second bound is

\text{erfc}_*(X) < \tfrac{1}{2} \exp\left(-\frac{X^2}{2}\right), \quad X > 0, \quad (72)

which can also be derived easily (see Problem 2.2.16).

Fig. 2.9 (b) Detection probability versus d.

The four curves are plotted in Fig. 2.10. We note that erfc_*(X) decreases exponentially.
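A few sample points suffice to see how tight the bounds are; the grid of X values below is an arbitrary assumption:

import math

def erfc_star(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

for X in (1.0, 2.0, 3.0, 4.0):
    phi = math.exp(-X**2 / 2) / math.sqrt(2 * math.pi)
    lower = (phi / X) * (1 - 1 / X**2)        # left side of (71)
    upper = phi / X                           # right side of (71)
    crude = 0.5 * math.exp(-X**2 / 2)         # bound (72)
    print(f"X = {X}: {lower:.2e} < {erfc_star(X):.2e} < {upper:.2e}; (72) gives {crude:.2e}")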
The receiver operating characteristics for the other two examples are
also of interest.
Fig. 2.10 Plot of erfc_*(X) and related functions.
Example 2. In this case the test is

l(R) = \sum_{i=1}^{N} R_i^2 \underset{H_0}{\overset{H_1}{\gtrless}} \gamma. \quad (73)

The performance calculation for arbitrary N is somewhat tedious, so we defer it
until Section 2.6. A particularly simple case appearing frequently in practice is
N = 2. Under H0 the r_i are independent zero-mean Gaussian variables with variances
equal to σ0²:

P_F = \Pr(l \geq \gamma | H_0) = \Pr(r_1^2 + r_2^2 \geq \gamma | H_0). \quad (74)

To evaluate the expression on the right, we change to polar coordinates:

r_1 = z \cos \theta, \quad r_2 = z \sin \theta. \quad (75)

Then

\Pr(z^2 \geq \gamma | H_0) = \int_0^{2\pi} d\theta \int_{\sqrt{\gamma}}^{\infty} \frac{Z}{2\pi\sigma_0^2} \exp\left(-\frac{Z^2}{2\sigma_0^2}\right) dZ. \quad (76)

Integrating with respect to θ, we have

P_F = \int_{\sqrt{\gamma}}^{\infty} \frac{Z}{\sigma_0^2} \exp\left(-\frac{Z^2}{2\sigma_0^2}\right) dZ. \quad (77)

We observe that l, the sufficient statistic, equals z². Changing variables, we have

P_F = \int_{\gamma}^{\infty} \frac{1}{2\sigma_0^2} \exp\left(-\frac{L}{2\sigma_0^2}\right) dL = \exp\left(-\frac{\gamma}{2\sigma_0^2}\right). \quad (78)

(Note that the probability density of the sufficient statistic is exponential.)
Similarly,

P_D = \exp\left(-\frac{\gamma}{2\sigma_1^2}\right). \quad (79)

To construct the ROC we can combine (78) and (79) to eliminate the threshold γ.
This gives

P_D = P_F^{\sigma_0^2/\sigma_1^2}. \quad (80)

In terms of logarithms,

\ln P_D = \frac{\sigma_0^2}{\sigma_1^2} \ln P_F. \quad (81)

As expected, the performance improves monotonically as the ratio σ1²/σ0² increases.
We shall study this case and its generalizations in more detail in Section 2.6.
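The power-law ROC in (80) can be confirmed by simulation. A sketch for N = 2 (σ0, σ1, γ, and the trial count are assumed values):

import numpy as np

rng = np.random.default_rng(2)
sigma0, sigma1, gamma, trials = 1.0, 2.0, 3.0, 200_000
l_h0 = (rng.normal(0, sigma0, (trials, 2))**2).sum(axis=1)
l_h1 = (rng.normal(0, sigma1, (trials, 2))**2).sum(axis=1)
P_F, P_D = (l_h0 > gamma).mean(), (l_h1 > gamma).mean()
# By (80) the two printed numbers should agree (up to simulation error).
print(f"P_D ~ {P_D:.3f};  P_F**(sigma0^2/sigma1^2) = {P_F**(sigma0**2 / sigma1**2):.3f}")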
The two Poisson distributions are the third example.
Example 3. From (38), the likelihood ratio test is

n \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\ln \eta + m_1 - m_0}{\ln m_1 - \ln m_0} = \gamma. \quad (82)

Because n takes on only integer values, it is more convenient to rewrite (82) as

n \underset{H_0}{\overset{H_1}{\gtrless}} \gamma_I, \quad \gamma_I = 0, 1, 2, \ldots, \quad (83)

where γ_I takes on only integer values. Using (35),

P_F = \sum_{n=\gamma_I}^{\infty} \frac{m_0^n}{n!} e^{-m_0}, \quad \gamma_I = 0, 1, 2, \ldots, \quad (84)

and from (36),

P_D = \sum_{n=\gamma_I}^{\infty} \frac{m_1^n}{n!} e^{-m_1}, \quad \gamma_I = 0, 1, 2, \ldots. \quad (85)
The resulting ROC is plotted in Fig. 2.11a for some representative values of m0
and m1.
We see that it consists of a series of points and that PF goes from 1 to 1 − e^{−m0}
when the threshold is changed from 0 to 1. Now suppose we wanted PF to have an
intermediate value, say 1 − ½e^{−m0}. To achieve this performance we proceed in the
following manner. Denoting the LRT with γ_I = 0 as LRT No. 0 and the LRT with
γ_I = 1 as LRT No. 1, we have the following table:
LRT No.    γ_I    P_F              P_D
  0         0     1                1
  1         1     1 − e^{−m0}      1 − e^{−m1}

Fig. 2.11 (a) Receiver operating characteristic, Poisson problem (shown for m0 = 2, m1 = 4 and m0 = 4, m1 = 10).
Fig. 2.11 (b) Receiver operating characteristic with randomized decision rule.
To get the desired value of PF we use LRT No. 0 with probability ½ and LRT No. 1
with probability ½. The test is:

If n = 0, say H1 with probability ½ and say H0 with probability ½;
if n ≥ 1, say H1.

This procedure, in which we mix two likelihood ratio tests in some probabilistic
manner, is called a randomized decision rule. The resulting PD is simply a weighted
combination of detection probabilities for the two tests:

P_D = 0.5(1) + 0.5(1 - e^{-m_1}) = 1 - 0.5\,e^{-m_1}. \quad (86)
We see that the ROC for randomized tests consists of straight lines which connect
the points in Fig. 2.11a, as shown in Fig. 2.11b. The reason that we encounter a
randomized test is that the observed random variables are discrete. Therefore Λ(R)
is a discrete random variable and, using an ordinary likelihood ratio test, only certain
values of PF are possible.
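The randomized point is a probabilistic mixture of the two deterministic operating points, so its coordinates are the corresponding weighted averages. A sketch with the values used above:

import math

m0, m1, p = 2.0, 4.0, 0.5    # example values from the text; p is the mixing probability
# LRT No. 0 (gamma_I = 0): always say H1, so P_F = 1 and P_D = 1.
# LRT No. 1 (gamma_I = 1): say H1 iff n >= 1, so P_F = 1 - e^{-m0}, P_D = 1 - e^{-m1}.
P_F = p * 1.0 + (1 - p) * (1 - math.exp(-m0))
P_D = p * 1.0 + (1 - p) * (1 - math.exp(-m1))   # this is (86)
print(f"P_F = {P_F:.4f}  (= 1 - 0.5 e^-m0)")
print(f"P_D = {P_D:.4f}  (= 1 - 0.5 e^-m1)")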
Looking at the expression for PF in (56) and denoting the threshold by η,
we have

P_F(\eta) = \int_{\eta}^{\infty} p_{\Lambda|H_0}(\Lambda|H_0)\,d\Lambda. \quad (87)

If PF(η) is a continuous function of η, we can achieve a desired value from
0 to 1 by a suitable choice of η and a randomized test will never be needed.
This is the only case of interest to us in the sequel (see Prob. 2.2.12).
With these examples as a background, we now derive a few general
properties of receiver operating characteristics. We confine our discussion
to continuous likelihood ratio tests.
Two properties of all ROC's follow immediately from this example.
Property 1. All continuous likelihood ratio tests have ROC's that are
concave downward. If they were not, a randomized test would be better. This
would contradict our proof that a LRT is optimum (see Prob. 2.2.12).
Property 2. All continuous likelihood ratio tests have ROC's that are above
the PD = PF line. This is just a special case of Property 1 because the points
(PF = 0, PD = 0) and (PF = 1, PD = 1) are contained on all ROC's.
Property 3. The slope of a curve in a ROC at a particular point is equal to
the value of the threshold η required to achieve the PD and PF of that point.
Proof.

P_D = \int_{\eta}^{\infty} p_{\Lambda|H_1}(\Lambda|H_1)\,d\Lambda,
P_F = \int_{\eta}^{\infty} p_{\Lambda|H_0}(\Lambda|H_0)\,d\Lambda. \quad (88)

Differentiating both expressions with respect to η and writing the results
as a quotient, we have

\frac{dP_D/d\eta}{dP_F/d\eta} = \frac{-p_{\Lambda|H_1}(\eta|H_1)}{-p_{\Lambda|H_0}(\eta|H_0)} = \frac{dP_D}{dP_F}. \quad (89)
We now show that

p_{\Lambda|H_1}(\eta|H_1) = \eta\; p_{\Lambda|H_0}(\eta|H_0). \quad (90)

Let

\Omega(\eta) \triangleq \{R : \Lambda(R) \geq \eta\} = \left\{R : \frac{p_{r|H_1}(R|H_1)}{p_{r|H_0}(R|H_0)} \geq \eta\right\}. \quad (91)

Then

P_D(\eta) \triangleq \Pr\{\Lambda(R) \geq \eta | H_1\} = \int_{\Omega(\eta)} p_{r|H_1}(R|H_1)\,dR
= \int_{\Omega(\eta)} \Lambda(R)\, p_{r|H_0}(R|H_0)\,dR, \quad (92)
where the last equality follows from the definition of the likelihood ratio.
Using the definition of Ω(η), we can rewrite the last integral as

P_D(\eta) = \int_{\Omega(\eta)} \Lambda(R)\, p_{r|H_0}(R|H_0)\,dR = \int_{\eta}^{\infty} X\, p_{\Lambda|H_0}(X|H_0)\,dX. \quad (93)

Differentiating (93) with respect to η, we obtain

\frac{dP_D(\eta)}{d\eta} = -\eta\; p_{\Lambda|H_0}(\eta|H_0). \quad (94)

Equating the expression for dP_D(η)/dη in the numerator of (89) to the
right side of (94) gives the desired result.
We see that this result is consistent with Example 1. In Fig. 2.9a, the
curves for nonzero d have zero slope at PF = PD = 1 (η = 0) and infinite
slope at PF = PD = 0 (η = ∞).
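Property 3 is easy to verify numerically on this ROC. The sketch below (d, η, and the step size h are assumed) compares a finite-difference slope with the threshold:

import math

def erfc_star(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

d, eta, h = 2.0, 2.0, 1e-5                     # assumed values

def roc(ln_eta):                               # (P_F, P_D) from (67) and (68)
    return (erfc_star(ln_eta / d + d / 2),
            erfc_star(ln_eta / d - d / 2))

ln_eta = math.log(eta)
(f1, d1), (f2, d2) = roc(ln_eta - h), roc(ln_eta + h)
slope = (d2 - d1) / (f2 - f1)                  # dP_D/dP_F at this operating point
print(f"slope = {slope:.4f}, eta = {eta}")     # the two should agree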
Property 4. Whenever the maximum value of the Bayes risk is interior to
the interval (0, 1) on the P1 axis, the minimax operating point is the
intersection of the line

(C_{11} - C_{00}) + (C_{01} - C_{11})(1 - P_D) - (C_{10} - C_{00}) P_F = 0 \quad (95)

and the appropriate curve of the ROC (see (46)). In Fig. 2.12 we show the
special case defined by (50),

C_M (1 - P_D) = C_F P_F, \quad (96)
Fig. 2.12 Determination of minimax operating point.
superimposed on the ROC of Example 1. We see that it starts at the point
PF = 0, PD = 1, and intersects the PF = 1 line at

P_D = 1 - \frac{C_F}{C_M}. \quad (97)
This completes our discussion of the binary hypothesis testing problem.
Several key ideas should be re-emphasized:
1. Using either a Bayes criterion or a Neyman-Pearson criterion, we
find that the optimum test is a likelihood ratio test. Thus, regardless of the
dimensionality of the observation space, the test consists of comparing a
scalar variable Λ(R) with a threshold. (We assume PF(η) is continuous.)
2. In many cases construction of the LRT can be simplified if we can
identify a sufficient statistic. Geometrically, this statistic is just that
coordinate in a suitable coordinate system which describes the observation
space that contains all the information necessary to make a decision.
3. A complete description of the LRT performance was obtained
by plotting the conditional probabilities PD and PF as the threshold η was
varied. The resulting ROC could be used to calculate the Bayes risk for
any set of costs. In many cases only one value of the threshold is of interest
and a complete ROC is not necessary.
A number of interesting binary tests are developed in the problems.
2.3 M HYPOTHESES
The next case of interest is one in which we must choose one of M
hypotheses. In the simple binary hypothesis test there were two source
outputs, each of which corresponded to a single hypothesis. In the simple
M-ary test there are M source outputs, each of which corresponds to one
of M hypotheses. As before, we assume that we are forced to make a
decision. Thus there are M² alternatives that may occur each time the
experiment is conducted. The Bayes criterion assigns a cost to each of these
alternatives, assumes a set of a priori probabilities P0, P1, ..., P_{M-1}, and
minimizes the risk. The generalization of the Neyman-Pearson criterion to
M hypotheses is also possible. Because it is not widely used in practice, we
shall discuss only the Bayes criterion in the text.
Bayes Criterion. To find a Bayes test we denote the cost of each course
of action as C_ij. The first subscript signifies that the ith hypothesis is
chosen. The second subscript signifies that the jth hypothesis is true. We
denote the region of the observation space in which we choose H_i as Z_i
and the a priori probabilities are P_i. The model is shown in Fig. 2.13. The
expression for the risk is

\mathcal{R} = \sum_{j=0}^{M-1} \sum_{i=0}^{M-1} P_j C_{ij} \int_{Z_i} p_{r|H_j}(R|H_j)\,dR. \quad (98)
To find the optimum Bayes test we simply vary the Z_i to minimize ℛ.
This is a straightforward extension of the technique used in the binary case.
For simplicity of notation, we shall only consider the case in which M = 3
in the text.
Noting that Z0 = Z − Z1 − Z2, because the regions are disjoint, we
obtain

\mathcal{R} = P_0 C_{00} \int_{Z - Z_1 - Z_2} p_{r|H_0}(R|H_0)\,dR + P_0 C_{10} \int_{Z_1} p_{r|H_0}(R|H_0)\,dR
+ P_0 C_{20} \int_{Z_2} p_{r|H_0}(R|H_0)\,dR + P_1 C_{11} \int_{Z - Z_0 - Z_2} p_{r|H_1}(R|H_1)\,dR
+ P_1 C_{01} \int_{Z_0} p_{r|H_1}(R|H_1)\,dR + P_1 C_{21} \int_{Z_2} p_{r|H_1}(R|H_1)\,dR
+ P_2 C_{22} \int_{Z - Z_0 - Z_1} p_{r|H_2}(R|H_2)\,dR + P_2 C_{02} \int_{Z_0} p_{r|H_2}(R|H_2)\,dR
+ P_2 C_{12} \int_{Z_1} p_{r|H_2}(R|H_2)\,dR. \quad (99)

This reduces to

\mathcal{R} = P_0 C_{00} + P_1 C_{11} + P_2 C_{22}
+ \int_{Z_0} \left[ P_2 (C_{02} - C_{22}) p_{r|H_2}(R|H_2) + P_1 (C_{01} - C_{11}) p_{r|H_1}(R|H_1) \right] dR
+ \int_{Z_1} \left[ P_0 (C_{10} - C_{00}) p_{r|H_0}(R|H_0) + P_2 (C_{12} - C_{22}) p_{r|H_2}(R|H_2) \right] dR
+ \int_{Z_2} \left[ P_0 (C_{20} - C_{00}) p_{r|H_0}(R|H_0) + P_1 (C_{21} - C_{11}) p_{r|H_1}(R|H_1) \right] dR. \quad (100)
As before, the first three terms represent the fixed cost and the integrals
represent the variable cost that depends on our choice of Z0, Z1, and Z2.
Clearly, we assign each R to the region in which the value of the integrand
is the smallest. Labeling these integrands I_0(R), I_1(R), and I_2(R), we have
the following rule:
if I_0(R) < I_1(R) and I_2(R), choose H0,
if I_1(R) < I_0(R) and I_2(R), choose H1, \quad (101)
if I_2(R) < I_0(R) and I_1(R), choose H2.
Fig. 2.13 M hypothesis problem.
We can write these terms in terms of likelihood ratios by defining

\Lambda_1(R) \triangleq \frac{p_{r|H_1}(R|H_1)}{p_{r|H_0}(R|H_0)}, \quad
\Lambda_2(R) \triangleq \frac{p_{r|H_2}(R|H_2)}{p_{r|H_0}(R|H_0)}. \quad (102)

Using (102) in (100) and (101), we have

P_1 (C_{01} - C_{11}) \Lambda_1(R) \underset{H_0 \text{ or } H_2}{\overset{H_1 \text{ or } H_2}{\gtrless}} P_0 (C_{10} - C_{00}) + P_2 (C_{12} - C_{02}) \Lambda_2(R), \quad (103)

P_2 (C_{02} - C_{22}) \Lambda_2(R) \underset{H_0 \text{ or } H_1}{\overset{H_2 \text{ or } H_1}{\gtrless}} P_0 (C_{20} - C_{00}) + P_1 (C_{21} - C_{01}) \Lambda_1(R), \quad (104)

P_2 (C_{12} - C_{22}) \Lambda_2(R) \underset{H_1 \text{ or } H_0}{\overset{H_2 \text{ or } H_0}{\gtrless}} P_0 (C_{20} - C_{10}) + P_1 (C_{21} - C_{11}) \Lambda_1(R). \quad (105)
We see that the decision rules correspond to three lines in the Λ1, Λ2
plane. It is easy to verify that these lines intersect at a common point and
therefore uniquely define three decision regions, as shown in Fig. 2.14.
The decision space is two-dimensional for the three-hypothesis problem.
It is easy to verify that M hypotheses always lead to a decision space
which has, at most, (M − 1) dimensions.
Several special cases will be useful in our later work. The first is defined
by the assumptions

C_{00} = C_{11} = C_{22} = 0,
C_{ij} = 1, \quad i \neq j. \quad (106)
These equations indicate that any error is of equal importance. Looking
at (98), we see that this corresponds to minimizing the total probability of
error.
Fig. 2.14 Decision space.
Substituting into (103)-(105), we have

P_1 \Lambda_1(R) \underset{H_0 \text{ or } H_2}{\overset{H_1 \text{ or } H_2}{\gtrless}} P_0,

P_2 \Lambda_2(R) \underset{H_0 \text{ or } H_1}{\overset{H_2 \text{ or } H_1}{\gtrless}} P_0, \quad (107)

P_2 \Lambda_2(R) \underset{H_1 \text{ or } H_0}{\overset{H_2 \text{ or } H_0}{\gtrless}} P_1 \Lambda_1(R).

Fig. 2.15 Decision spaces.
The decision regions in the (Λ1, Λ2) plane are shown in Fig. 2.15a. In this
particular case, the transition to the (ln Λ1, ln Λ2) plane is
straightforward (Fig. 2.15b). The equations are

\ln \Lambda_1(R) \underset{H_0 \text{ or } H_2}{\overset{H_1 \text{ or } H_2}{\gtrless}} \ln \frac{P_0}{P_1},

\ln \Lambda_2(R) \underset{H_0 \text{ or } H_1}{\overset{H_2 \text{ or } H_1}{\gtrless}} \ln \frac{P_0}{P_2}, \quad (108)

\ln \Lambda_2(R) \underset{H_1 \text{ or } H_0}{\overset{H_2 \text{ or } H_0}{\gtrless}} \ln \Lambda_1(R) + \ln \frac{P_1}{P_2}.
The expressions in (107) and (108) are adequate, but they obscure an
important interpretation of the processor. The desired interpretation is
obtained by a little manipulation.
Substituting (102) into (103)-(105) and multiplying both sides by
p_{r|H_0}(R|H_0), we have

P_1\, p_{r|H_1}(R|H_1) \underset{H_0 \text{ or } H_2}{\overset{H_1 \text{ or } H_2}{\gtrless}} P_0\, p_{r|H_0}(R|H_0),

P_2\, p_{r|H_2}(R|H_2) \underset{H_0 \text{ or } H_1}{\overset{H_2 \text{ or } H_1}{\gtrless}} P_0\, p_{r|H_0}(R|H_0), \quad (109)

P_2\, p_{r|H_2}(R|H_2) \underset{H_1 \text{ or } H_0}{\overset{H_2 \text{ or } H_0}{\gtrless}} P_1\, p_{r|H_1}(R|H_1).
Looking at (109), we see that an equivalent test is to compute the a
posteriori probabilities Pr[H0|R], Pr[H1|R], and Pr[H2|R] and choose
the largest. (Simply divide both sides of each equation by p_r(R) and
examine the resulting test.) For this reason the processor for the minimum
probability of error criterion is frequently referred to as a maximum a
posteriori probability computer. The generalization to M hypotheses is
straightforward.
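A minimal sketch of such a maximum a posteriori computer for M = 3 (the priors and the scalar Gaussian densities below are assumptions chosen only to make the example concrete):

import numpy as np

priors = np.array([0.5, 0.25, 0.25])            # assumed P_0, P_1, P_2
means, sigma = np.array([0.0, 1.0, 2.0]), 1.0   # assumed densities p(R|H_i)

def map_decide(R):
    # Choose the hypothesis with the largest P_i p(R|H_i); the common
    # factor 1/(sqrt(2 pi) sigma) cancels and is omitted.
    likes = np.exp(-(R - means)**2 / (2 * sigma**2))
    return int(np.argmax(priors * likes))

print([map_decide(R) for R in (-0.5, 0.7, 2.5)])  # decisions for three observations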
The next two topics deal with degenerate tests. Both results will be useful
in later applications. A case of interest is a degenerate one in which we
combine H1 and H2. Then

C_{12} = C_{21} = 0, \quad (110)

and, for simplicity, we can let

C_{01} = C_{10} = C_{20} = C_{02} \quad (111)

and

C_{00} = C_{11} = C_{22} = 0. \quad (112)

Then (103) and (104) both reduce to

P_1 \Lambda_1(R) + P_2 \Lambda_2(R) \underset{H_0}{\overset{H_1 \text{ or } H_2}{\gtrless}} P_0 \quad (113)

and (105) becomes an identity.
Fig. 2.16 Decision spaces.
The decision regions are shown in Fig. 2.16. Because we have eliminated
all of the cost effect of a decision between H1 and H2, we have reduced it
to a binary problem.
We next consider the dummy hypothesis technique. A simple example
illustrates the idea. The actual problem has two hypotheses, H1 and H2,
but occasionally we can simplify the calculations by introducing a dummy
hypothesis H0 which occurs with zero probability. We let

P_0 = 0 \quad \text{and} \quad P_1 + P_2 = 1. \quad (114)

Substituting these values into (103)-(105), we find that (103) and (104)
imply that we always choose H1 or H2 and the test reduces to

P_2 (C_{12} - C_{22}) \Lambda_2(R) \underset{H_1}{\overset{H_2}{\gtrless}} P_1 (C_{21} - C_{11}) \Lambda_1(R). \quad (115)
Looking at (12) and recalling the definition of Λ1(R) and Λ2(R), we see
that this result is exactly what we would expect. [Just divide both sides of
(12) by p_{r|H_0}(R|H_0).] On the surface this technique seems absurd, but it
will turn out to be useful when the ratio

\frac{p_{r|H_2}(R|H_2)}{p_{r|H_1}(R|H_1)}

is difficult to work with and the ratios Λ1(R) and Λ2(R) can be made
simple by a proper choice of p_{r|H_0}(R|H_0).
In this section we have developed the basic results needed for the M-
hypothesis problem. We have not considered any specific examples
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf
Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf


Detection, Estimation, and Modulation Theory, Part I ( PDFDrive ).pdf

  • 7. …the Chief Scientist of the United States Air Force. In 1979, Dr. Gerald Dinneen, the former Director of Lincoln Laboratories, was serving as Assistant Secretary of Defense for C3I. He asked me to become his Principal Deputy and I spent two years in that position. In 1981, I joined M/A-COM Linkabit. Linkabit is the company that Irwin Jacobs and Andrew Viterbi had started in 1969 and sold to M/A-COM in 1979. I started an Eastern operation which grew to about 200 people in three years. After Irwin and Andy left M/A-COM and started Qualcomm, I was responsible for the government operations in San Diego as well as Washington, D.C. In 1988, M/A-COM sold the division. At that point I decided to return to the academic world. I joined George Mason University in September of 1988. One of my priorities was to finish the book on optimum array processing. However, I found that I needed to build up a research center in order to attract young research-oriented faculty and
  • 8. doctoral students. The process took about six years. The Center for Excellence in Command, Control, Communications, and Intelligence has been very successful and has generated over $300 million in research funding during its existence. During this growth period, I spent some time on array processing but a concentrated effort was not possible. In 1995, I started a serious effort to write the Array Processing book.
Throughout the Optimum Array Processing text there are references to Parts I and III of Detection, Estimation, and Modulation Theory. The referenced material is available in several other books, but I am most familiar with my own work. Wiley agreed to publish Parts I and III in paperback so the material will be readily available. In addition to providing background for Part IV, Part I is still useful as a text for a graduate course in Detection and Estimation Theory. Part III is suitable for a second-level graduate course dealing with more specialized topics.
In the 30-year period, there has been a dramatic change in the signal processing area. Advances in computational capability have allowed the implementation of complex algorithms that were only of theoretical interest in the past. In many applications, algorithms can be implemented that reach the theoretical bounds.
The advances in computational capability have also changed how the material is taught. In Parts I and III, there is an emphasis on compact analytical solutions to problems. In Part IV, there is a much greater emphasis on efficient iterative solutions and simulations. All of the material in Parts I and III is still relevant. The books use continuous time processes but the transition to discrete time processes is straightforward. Integrals that were difficult to do analytically can be done easily in Matlab®. The various detection and estimation algorithms can be simulated and their performance compared to the theoretical bounds. We still use most of the problems in the text but supplement them with problems that require Matlab® solutions.
We hope that a new generation of students and readers find these reprinted editions to be useful.
Fairfax, Virginia
June 2001
HARRY L. VAN TREES
  • 9. Preface
The area of detection and estimation theory that we shall study in this book represents a combination of the classical techniques of statistical inference and the random process characterization of communication, radar, sonar, and other modern data processing systems. The two major areas of statistical inference are decision theory and estimation theory. In the first case we observe an output that has a random character and decide which of two possible causes produced it. This type of problem was studied in the middle of the eighteenth century by Thomas Bayes [1]. In the estimation theory case the output is related to the value of some parameter of interest, and we try to estimate the value of this parameter. Work in this area was published by Legendre [2] and Gauss [3] in the early nineteenth century. Significant contributions to the classical theory that we use as background were developed by Fisher [4] and Neyman and Pearson [5] more than 30 years ago. In 1941 and 1942 Kolmogoroff [6] and Wiener [7] applied statistical techniques to the solution of the optimum linear filtering problem. Since that time the application of statistical techniques to the synthesis and analysis of all types of systems has grown rapidly. The application of these techniques and the resulting implications are the subject of this book.
This book and the subsequent volume, Detection, Estimation, and Modulation Theory, Part II, are based on notes prepared for a course entitled "Detection, Estimation, and Modulation Theory," which is taught as a second-level graduate course at M.I.T. My original interest in the material grew out of my research activities in the area of analog modulation theory. A preliminary version of the material that deals with modulation theory was used as a text for a summer course presented at M.I.T. in 1964. It turned out that our viewpoint on modulation theory could best be understood by an audience with a clear understanding of modern detection and estimation theory. At that time there was no suitable text available to cover the material of interest and emphasize the points that I felt were
  • 10. important, so I started writing notes. It was clear that in order to present the material to graduate students in a reasonable amount of time it would be necessary to develop a unified presentation of the three topics: detection, estimation, and modulation theory, and exploit the fundamental ideas that connected them. As the development proceeded, it grew in size until the material that was originally intended to be background for modulation theory occupies the entire contents of this book. The original material on modulation theory starts at the beginning of the second book. Collectively, the two books provide a unified coverage of the three topics and their application to many important physical problems.
For the last three years I have presented successively revised versions of the material in my course. The audience consists typically of 40 to 50 students who have completed a graduate course in random processes which covered most of the material in Davenport and Root [8]. In general, they have a good understanding of random process theory and a fair amount of practice with the routine manipulation required to solve problems. In addition, many of them are interested in doing research in this general area or closely related areas. This interest provides a great deal of motivation which I exploit by requiring them to develop many of the important ideas as problems. It is for this audience that the book is primarily intended. The appendix contains a detailed outline of the course.
On the other hand, many practicing engineers deal with systems that have been or should have been designed and analyzed with the techniques developed in this book. I have attempted to make the book useful to them. An earlier version was used successfully as a text for an in-plant course for graduate engineers.
From the standpoint of specific background little advanced material is required. A knowledge of elementary probability theory and second moment characterization of random processes is assumed. Some familiarity with matrix theory and linear algebra is helpful but certainly not necessary. The level of mathematical rigor is low, although in most sections the results could be rigorously proved by simply being more careful in our derivations. We have adopted this approach in order not to obscure the important ideas with a lot of detail and to make the material readable for the kind of engineering audience that will find it useful. Fortunately, in almost all cases we can verify that our answers are intuitively logical. It is worthwhile to observe that this ability to check our answers intuitively would be necessary even if our derivations were rigorous, because our ultimate objective is to obtain an answer that corresponds to some physical system of interest. It is easy to find physical problems in which a plausible mathematical model and correct mathematics lead to an unrealistic answer for the original problem.
  • 11. We have several idiosyncrasies that it might be appropriate to mention. In general, we look at a problem in a fair amount of detail. Many times we look at the same problem in several different ways in order to gain a better understanding of the meaning of the result. Teaching students a number of ways of doing things helps them to be more flexible in their approach to new problems. A second feature is the necessity for the reader to solve problems to understand the material fully. Throughout the course and the book we emphasize the development of an ability to work problems. At the end of each chapter are problems that range from routine manipulations to significant extensions of the material in the text. In many cases they are equivalent to journal articles currently being published. Only by working a fair number of them is it possible to appreciate the significance and generality of the results. Solutions for an individual problem will be supplied on request, and a book containing solutions to about one third of the problems is available to faculty members teaching the course. We are continually generating new problems in conjunction with the course and will send them to anyone who is using the book as a course text. A third issue is the abundance of block diagrams, outlines, and pictures. The diagrams are included because most engineers (including myself) are more at home with these items than with the corresponding equations.
One problem always encountered is the amount of notation needed to cover the large range of subjects. We have tried to choose the notation in a logical manner and to make it mnemonic. All the notation is summarized in the glossary at the end of the book. We have tried to make our list of references as complete as possible and to acknowledge any ideas due to other people.
A number of people have contributed in many ways and it is a pleasure to acknowledge them. Professors W. B. Davenport and W. M. Siebert have provided continual encouragement and technical comments on the various chapters. Professors Estil Hoversten and Donald Snyder of the M.I.T. faculty and Lewis Collins, Arthur Baggeroer, and Michael Austin, three of my doctoral students, have carefully read and criticized the various chapters. Their suggestions have improved the manuscript appreciably. In addition, Baggeroer and Collins contributed a number of the problems in the various chapters and Baggeroer did the programming necessary for many of the graphical results. Lt. David Wright read and criticized Chapter 2. L. A. Frasco and H. D. Goldfein, two of my teaching assistants, worked all of the problems in the book. Dr. Howard Yudkin of Lincoln Laboratory read the entire manuscript and offered a number of important criticisms. In addition, various graduate students taking the course have made suggestions which have been incorporated. Most of the final draft was typed by Miss Aina Sils. Her patience with the innumerable changes is
  • 12. sincerely appreciated. Several other secretaries, including Mrs. Jarmila Hrbek, Mrs. Joan Bauer, and Miss Camille Tortorici, typed sections of the various drafts.
As pointed out earlier, the books are an outgrowth of my research interests. This research is a continuing effort, and I shall be glad to send our current work to people working in this area on a regular reciprocal basis. My early work in modulation theory was supported by Lincoln Laboratory as a summer employee and consultant in groups directed by Dr. Herbert Sherman and Dr. Barney Reiffen. My research at M.I.T. was partly supported by the Joint Services and the National Aeronautics and Space Administration under the auspices of the Research Laboratory of Electronics. This support is gratefully acknowledged.
Cambridge, Massachusetts
October, 1967
Harry L. Van Trees
REFERENCES
[1] Thomas Bayes, "An Essay Towards Solving a Problem in the Doctrine of Chances," Phil. Trans., 53, 370-418 (1764).
[2] A. M. Legendre, Nouvelles Méthodes pour la Détermination des Orbites des Comètes, Paris, 1806.
[3] K. F. Gauss, Theory of Motion of the Heavenly Bodies Moving About the Sun in Conic Sections, reprinted by Dover, New York, 1963.
[4] R. A. Fisher, "Theory of Statistical Estimation," Proc. Cambridge Philos. Soc., 22, 700 (1925).
[5] J. Neyman and E. S. Pearson, "On the Problem of the Most Efficient Tests of Statistical Hypotheses," Phil. Trans. Roy. Soc. London, A 231, 289 (1933).
[6] A. Kolmogoroff, "Interpolation und Extrapolation von stationären zufälligen Folgen," Bull. Acad. Sci. USSR, Ser. Math., 5, 1941.
[7] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, Tech. Press of M.I.T. and Wiley, New York, 1949 (originally published as a classified report in 1942).
[8] W. B. Davenport and W. L. Root, Random Signals and Noise, McGraw-Hill, New York, 1958.
  • 13. Contents
1 Introduction
1.1 Topical Outline
1.2 Possible Approaches
1.3 Organization
2 Classical Detection and Estimation Theory
2.1 Introduction
2.2 Simple Binary Hypothesis Tests
Decision Criteria. Performance: Receiver Operating Characteristic.
2.3 M Hypotheses
2.4 Estimation Theory
Random Parameters: Bayes Estimation. Real (Nonrandom) Parameter Estimation. Multiple Parameter Estimation. Summary of Estimation Theory.
2.5 Composite Hypotheses
2.6 The General Gaussian Problem
Equal Covariance Matrices. Equal Mean Vectors. Summary.
2.7 Performance Bounds and Approximations
2.8 Summary
2.9 Problems
References
  • 14. 3 Representations of Random Processes
3.1 Introduction
3.2 Deterministic Functions: Orthogonal Representations
3.3 Random Process Characterization
Random Processes: Conventional Characterizations. Series Representation of Sample Functions of Random Processes. Gaussian Processes.
3.4 Homogeneous Integral Equations and Eigenfunctions
Rational Spectra. Bandlimited Spectra. Nonstationary Processes. White Noise Processes. The Optimum Linear Filter. Properties of Eigenfunctions and Eigenvalues.
3.5 Periodic Processes
3.6 Infinite Time Interval: Spectral Decomposition
Spectral Decomposition. An Application of Spectral Decomposition: MAP Estimation of a Gaussian Process.
3.7 Vector Random Processes
3.8 Summary
3.9 Problems
References
4 Detection of Signals-Estimation of Signal Parameters
4.1 Introduction
Models. Format.
4.2 Detection and Estimation in White Gaussian Noise
Detection of Signals in Additive White Gaussian Noise. Linear Estimation. Nonlinear Estimation. Summary: Known Signals in White Gaussian Noise.
4.3 Detection and Estimation in Nonwhite Gaussian Noise
"Whitening" Approach. A Direct Derivation Using the Karhunen-Loève Expansion. A Direct Derivation with a Sufficient Statistic. Detection Performance. Estimation. Solution Techniques for Integral Equations. Sensitivity. Known Linear Channels.
  • 15. 4.4 Signals with Unwanted Parameters: The Composite Hypothesis Problem
Random Phase Angles. Random Amplitude and Phase.
4.5 Multiple Channels
Formulation. Application.
4.6 Multiple Parameter Estimation
Additive White Gaussian Noise Channel. Extensions.
4.7 Summary and Omissions
Summary. Topics Omitted.
4.8 Problems
References
5 Estimation of Continuous Waveforms
5.1 Introduction
5.2 Derivation of Estimator Equations
No-Memory Modulation Systems. Modulation Systems with Memory.
5.3 A Lower Bound on the Mean-Square Estimation Error
5.4 Multidimensional Waveform Estimation
Examples of Multidimensional Problems. Problem Formulation. Derivation of Estimator Equations. Lower Bound on the Error Matrix. Colored Noise Estimation.
5.5 Nonrandom Waveform Estimation
5.6 Summary
5.7 Problems
References
6 Linear Estimation
6.1 Properties of Optimum Processors
6.2 Realizable Linear Filters: Stationary Processes, Infinite Past: Wiener Filters
Solution of Wiener-Hopf Equation. Errors in Optimum Systems. Unrealizable Filters. Closed-Form Error Expressions. Optimum Feedback Systems. Comments.
  • 16. 6.3 Kalman-Bucy Filters
Differential Equation Representation of Linear Systems and Random Process Generation. Derivation of Estimator Equations. Applications. Generalizations.
6.4 Linear Modulation: Communications Context
DSB-AM: Realizable Demodulation. DSB-AM: Demodulation with Delay. Amplitude Modulation: Generalized Carriers. Amplitude Modulation: Single-Sideband Suppressed-Carrier.
6.5 The Fundamental Role of the Optimum Linear Filter
6.6 Comments
6.7 Problems
References
7 Discussion
7.1 Summary
7.2 Preview of Part II
7.3 Unexplored Issues
References
Appendix: A Typical Course Outline
Glossary
Author Index
Subject Index
  • 17. 1 Introduction
In these two books, we shall study three areas of statistical theory, which we have labeled detection theory, estimation theory, and modulation theory. The goal is to develop these theories in a common mathematical framework and to demonstrate how they can be used to solve a wealth of practical problems in many diverse physical situations.
In this chapter we present three outlines of the material. The first is a topical outline in which we develop a qualitative understanding of the three areas by examining some typical problems of interest. The second is a logical outline in which we explore the various methods of attacking the problems. The third is a chronological outline in which we explain the structure of the books.
1.1 TOPICAL OUTLINE
An easy way to explain what is meant by detection theory is to examine several physical situations that lead to detection theory problems.
A simple digital communication system is shown in Fig. 1.1.
[Fig. 1.1 Digital communication system.]
The source puts out a binary digit every T seconds. Our object is to transmit this sequence of digits to some other location. The channel available for transmitting the sequence depends on the particular situation. Typically, it could be a telephone line, a radio link, or an acoustical channel. For
  • 18. purposes of illustration, we shall consider a radio link. In order to transmit the information, we must put it into a form suitable for propagating over the channel. A straightforward method would be to build a device that generates a sine wave,

$$s_1(t) = \sin \omega_1 t, \qquad 0 \le t \le T, \tag{1}$$

for T seconds if the source generated a "one" in the preceding interval, and a sine wave of a different frequency,

$$s_0(t) = \sin \omega_0 t, \qquad 0 \le t \le T, \tag{2}$$

for T seconds if the source generated a "zero" in the preceding interval. The frequencies are chosen so that the signals s₀(t) and s₁(t) will propagate over the particular radio link of concern. The output of the device is fed into an antenna and transmitted over the channel. Typical source and transmitted signal sequences are shown in Fig. 1.2.
[Fig. 1.2 Typical sequences: a source output sequence and the corresponding transmitted sequence of sin ω₁t and sin ω₀t bursts.]
In the simplest kind of channel the signal sequence arrives at the receiving antenna attenuated but essentially undistorted. To process the received signal we pass it through the antenna and some stages of rf-amplification, in the course of which a thermal noise n(t) is added to the message sequence. Thus in any T-second interval we have available a waveform r(t) in which

$$r(t) = s_1(t) + n(t), \qquad 0 \le t \le T, \tag{3}$$

if s₁(t) was transmitted, and

$$r(t) = s_0(t) + n(t), \qquad 0 \le t \le T, \tag{4}$$

if s₀(t) was transmitted. We are now faced with the problem of deciding which of the two possible signals was transmitted. We label the device that does this a decision device. It is simply a processor that observes r(t) and guesses whether s₁(t) or s₀(t) was sent according to some set of rules. This is equivalent to guessing what the source output was in the preceding interval. We refer to designing and evaluating the processor as a detection theory problem.
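To make the model in (1)-(4) concrete, here is a brief simulation sketch that is not part of the original text; the carrier frequencies, sampling rate, and noise level are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the text): T = 1 s,
# f1 = 10 Hz, f0 = 6 Hz, sampling rate fs = 1 kHz, noise std sigma.
fs, T = 1000, 1.0
t = np.arange(0, T, 1 / fs)
w1, w0 = 2 * np.pi * 10, 2 * np.pi * 6
sigma = 0.5
rng = np.random.default_rng(0)

def received(bit):
    """Generate r(t) = s_i(t) + n(t) for one T-second interval, per (3)-(4)."""
    s = np.sin(w1 * t) if bit == 1 else np.sin(w0 * t)
    return s + sigma * rng.standard_normal(t.size)

r = received(1)  # waveform available to the decision device when a "one" was sent
```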
  • 19. In this particular case the only possible source of error in making a decision is the additive noise. If it were not present, the input would be completely known and we could make decisions without errors. We denote this type of problem as the known signal in noise problem. It corresponds to the lowest level (i.e., simplest) of the detection problems of interest.
An example of the next level of detection problem is shown in Fig. 1.3.
[Fig. 1.3 Sequence with phase shifts: sin(ω₁t + θ₁), sin(ω₀t + θ₀), sin(ω₁t + θ₁).]
The oscillators used to generate s₁(t) and s₀(t) in the preceding example have a phase drift. Therefore in a particular T-second interval the received signal corresponding to a "one" is

$$r(t) = \sin(\omega_1 t + \theta_1) + n(t), \qquad 0 \le t \le T, \tag{5}$$

and the received signal corresponding to a "zero" is

$$r(t) = \sin(\omega_0 t + \theta_0) + n(t), \qquad 0 \le t \le T, \tag{6}$$

where θ₀ and θ₁ are unknown constant phase angles. Thus even in the absence of noise the input waveform is not completely known. In a practical system the receiver may include auxiliary equipment to measure the oscillator phase. If the phase varies slowly enough, we shall see that essentially perfect measurement is possible. If this is true, the problem is the same as above. However, if the measurement is not perfect, we must incorporate the signal uncertainty in our model.
A corresponding problem arises in the radar and sonar areas. A conventional radar transmits a pulse at some frequency ω_c with a rectangular envelope:

$$s_1(t) = \sin \omega_c t, \qquad 0 \le t \le T. \tag{7}$$

If a target is present, the pulse is reflected. Even the simplest target will introduce an attenuation and phase shift in the transmitted signal. Thus the signal available for processing in the interval of interest is

$$r(t) = V_r \sin[\omega_c(t - \tau) + \theta_r] + n(t), \qquad \tau \le t \le \tau + T,$$
$$r(t) = n(t), \qquad 0 \le t < \tau, \quad \tau + T < t < \infty, \tag{8}$$
  • 20. if a target is present and

$$r(t) = n(t), \qquad 0 \le t < \infty, \tag{9}$$

if a target is absent. We see that in the absence of noise the signal still contains three unknown quantities: V_r, the amplitude, θ_r, the phase, and τ, the round-trip travel time to the target. These two examples represent the second level of detection problems. We classify them as signal with unknown parameters in noise problems.
Detection problems of a third level appear in several areas. In a passive sonar detection system the receiver listens for noise generated by enemy vessels. The engines, propellers, and other elements in the vessel generate acoustical signals that travel through the ocean to the hydrophones in the detection system. This composite signal can best be characterized as a sample function from a random process. In addition, the hydrophone generates self-noise and picks up sea noise. Thus a suitable model for the detection problem might be

$$r(t) = s_0(t) + n(t) \tag{10}$$

if the target is present and

$$r(t) = n(t) \tag{11}$$

if it is not. In the absence of noise the signal is a sample function from a random process (indicated by the subscript 0).
In the communications field a large number of systems employ channels in which randomness is inherent. Typical systems are tropospheric scatter links, orbiting dipole links, and chaff systems. A common technique is to transmit one of two signals separated in frequency. (We denote these frequencies as ω₁ and ω₀.) The resulting received signal is

$$r(t) = s_{01}(t) + n(t) \tag{12}$$

if s₁(t) was transmitted and

$$r(t) = s_{00}(t) + n(t) \tag{13}$$

if s₀(t) was transmitted. Here s₀₁(t) is a sample function from a random process centered at ω₁, and s₀₀(t) is a sample function from a random process centered at ω₀. These examples are characterized by the lack of any deterministic signal component. Any decision procedure that we design will have to be based on the difference in the statistical properties of the two random processes from which s₀₀(t) and s₀₁(t) are obtained. This is the third level of detection problem and is referred to as a random signal in noise problem.
  • 21. In our examination of representative examples we have seen that detection theory problems are characterized by the fact that we must decide which of several alternatives is true. There were only two alternatives in the examples cited; therefore we refer to them as binary detection problems. Later we will encounter problems in which there are M alternatives available (the M-ary detection problem). Our hierarchy of detection problems is presented graphically in Fig. 1.4.
Fig. 1.4 Detection theory hierarchy:
Level 1 (Known signals in noise): 1. Synchronous digital communication. 2. Pattern recognition problems.
Level 2 (Signals with unknown parameters in noise): 1. Conventional pulsed radar or sonar, target detection. 2. Target classification (orientation of target unknown). 3. Digital communication systems without phase reference. 4. Digital communication over slowly-fading channels.
Level 3 (Random signals in noise): 1. Digital communication over scatter link, orbiting dipole channel, or chaff link. 2. Passive sonar. 3. Seismic detection system. 4. Radio astronomy (detection of noise sources).
There is a parallel set of problems in the estimation theory area. A simple example is given in Fig. 1.5, in which the source puts out an analog message a(t) (Fig. 1.5a). To transmit the message we first sample it every T seconds. Then, every T seconds we transmit a signal that contains
  • 22. a parameter which is uniquely related to the last sample value.
[Fig. 1.5 (a) Sampling an analog source; (b) pulse-amplitude modulation; (c) pulse-frequency modulation; (d) waveform reconstruction.]
In Fig. 1.5b the signal is a sinusoid whose amplitude depends on the last sample. Thus, if the sample at time nT is Aₙ, the signal in the interval [nT, (n + 1)T] is

$$s(t, A_n) = A_n \sin \omega_c t, \qquad nT \le t \le (n+1)T. \tag{14}$$

A system of this type is called a pulse amplitude modulation (PAM) system. In Fig. 1.5c the signal is a sinusoid whose frequency in the interval
  • 23. differs from the reference frequency ω_c by an amount proportional to the preceding sample value,

$$s(t, A_n) = \sin(\omega_c t + A_n t), \qquad nT \le t \le (n+1)T. \tag{15}$$

A system of this type is called a pulse frequency modulation (PFM) system. Once again there is additive noise. The received waveform, given that Aₙ was the sample value, is

$$r(t) = s(t, A_n) + n(t), \qquad nT \le t \le (n+1)T. \tag{16}$$

During each interval the receiver tries to estimate Aₙ. We denote these estimates as Âₙ. Over a period of time we obtain a sequence of estimates, as shown in Fig. 1.5d, which is passed into a device whose output is an estimate of the original message a(t). If a(t) is a band-limited signal, the device is just an ideal low-pass filter. For other cases it is more involved. If, however, the parameters in this example were known and the noise were absent, the received signal would be completely known. We refer to problems in this category as known signal in noise problems. If we assume that the mapping from Aₙ to s(t, Aₙ) in the transmitter has an inverse, we see that if the noise were not present we could determine Aₙ unambiguously. (Clearly, if we were allowed to design the transmitter, we should always choose a mapping with an inverse.) The known signal in noise problem is the first level of the estimation problem hierarchy.
Returning to the area of radar, we consider a somewhat different problem. We assume that we know a target is present but do not know its range or velocity. Then the received signal is

$$r(t) = V_r \sin[(\omega_c + \omega_d)(t - \tau) + \theta_r] + n(t), \qquad \tau \le t \le \tau + T,$$
$$r(t) = n(t), \qquad 0 \le t < \tau, \quad \tau + T < t < \infty, \tag{17}$$

where ω_d denotes a Doppler shift caused by the target's motion. We want to estimate τ and ω_d. Now, even if the noise were absent and τ and ω_d were known, the signal would still contain the unknown parameters V_r and θ_r. This is a typical second-level estimation problem. As in detection theory, we refer to problems in this category as signal with unknown parameters in noise problems.
At the third level the signal component is a random process whose statistical characteristics contain parameters we want to estimate. The received signal is of the form

$$r(t) = s_0(t, A) + n(t), \tag{18}$$

where s₀(t, A) is a sample function from a random process. In a simple case it might be a stationary process with the narrow-band spectrum shown in Fig. 1.6. The shape of the spectrum is known but the center frequency is not.
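As an illustrative aside that is not part of the original text, the PFM transmission of (15)-(16) is easy to simulate; the sample value, carrier frequency, and noise level below are assumptions.

```python
import numpy as np

# Simulate one PFM interval, per (15)-(16): s(t, A_n) = sin(w_c t + A_n t),
# observed in additive Gaussian noise. All parameter values are illustrative.
fs, T = 1000, 1.0
t = np.arange(0, T, 1 / fs)                 # time within one interval
wc = 2 * np.pi * 20                         # reference (carrier) frequency
A_n = 2 * np.pi * 3                         # frequency offset set by the sample
rng = np.random.default_rng(4)

s = np.sin(wc * t + A_n * t)                # transmitted signal s(t, A_n)
r = s + 0.3 * rng.standard_normal(t.size)   # received waveform r(t) of (16)
```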
  • 24. [Fig. 1.6 Spectrum of random signal: the narrow-band spectrum of s₀(t, A), with an unknown center frequency.]
Fig. 1.7 Estimation theory hierarchy:
Level 1 (Known signals in noise): 1. PAM, PFM, and PPM communication systems with phase synchronization. 2. Inaccuracies in inertial systems (e.g., drift angle measurement).
Level 2 (Signals with unknown parameters in noise): 1. Range, velocity, or angle measurement in radar/sonar problems. 2. Discrete time, continuous amplitude communication system (with unknown amplitude or phase in channel).
Level 3 (Random signals in noise): 1. Power spectrum parameter estimation. 2. Range or Doppler spread target parameters in radar/sonar problem. 3. Velocity measurement in radio astronomy. 4. Target parameter estimation: passive sonar. 5. Ground mapping radars.
  • 25. The receiver must observe r(t) and, using the statistical properties of s₀(t, A) and n(t), estimate the value of A. This particular example could arise in either radio astronomy or passive sonar. The general class of problem in which the signal containing the parameters is a sample function from a random process is referred to as the random signal in noise problem. The hierarchy of estimation theory problems is shown in Fig. 1.7.
We note that there appears to be considerable parallelism in the detection and estimation theory problems. We shall frequently exploit these parallels to reduce the work, but there is a basic difference that should be emphasized. In binary detection the receiver is either "right" or "wrong." In the estimation of a continuous parameter the receiver will seldom be exactly right, but it can try to be close most of the time. This difference will be reflected in the manner in which we judge system performance.
The third area of interest is frequently referred to as modulation theory. We shall see shortly that this term is too narrow for the actual problems. Once again a simple example is useful. In Fig. 1.8 we show an analog message source whose output might typically be music or speech. To convey the message over the channel, we transform it by using a modulation scheme to get it into a form suitable for propagation. The transmitted signal is a continuous waveform that depends on a(t) in some deterministic manner. In Fig. 1.8b it is an amplitude modulated waveform:

$$s[t, a(t)] = [1 + m a(t)] \sin(\omega_c t). \tag{19}$$

(This is conventional double-sideband AM with modulation index m.) In Fig. 1.8c the transmitted signal is a frequency modulated (FM) waveform:

$$s[t, a(t)] = \sin\left[\omega_c t + \int^{t} a(u)\, du\right]. \tag{20}$$

When noise is added the received signal is

$$r(t) = s[t, a(t)] + n(t). \tag{21}$$

Now the receiver must observe r(t) and put out a continuous estimate of the message a(t), as shown in Fig. 1.8d. This particular example is a first-level modulation problem, for if n(t) were absent and a(t) were known the received signal would be completely known. Once again we describe it as a known signal in noise problem.
Another type of physical situation in which we want to estimate a continuous function is shown in Fig. 1.9. The channel is a time-invariant linear system whose impulse response h(τ) is unknown. To estimate the impulse response we transmit a known signal x(t). The received signal is

$$r(t) = \int_0^{\infty} h(\tau)\, x(t - \tau)\, d\tau + n(t). \tag{22}$$
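The two modulated waveforms in (19) and (20) can be generated directly; the sketch below is illustrative and not from the text, with an assumed sinusoidal message, carrier frequency, and modulation index.

```python
import numpy as np

# Generate the DSB-AM waveform of (19) and the FM waveform of (20) for an
# assumed message a(t); the message, carrier, and index m are illustrative.
fs, T = 8000, 1.0
t = np.arange(0, T, 1 / fs)
a = 0.8 * np.sin(2 * np.pi * 3 * t)              # analog message a(t)
wc, m = 2 * np.pi * 100, 0.5

s_am = (1 + m * a) * np.sin(wc * t)              # DSB-AM, per (19)
s_fm = np.sin(wc * t + np.cumsum(a) / fs)        # FM, per (20): running integral of a(u)
```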
  • 26. [Fig. 1.8 A modulation theory example: (a) analog transmission system; (b) amplitude modulated signal; (c) frequency modulated signal; (d) demodulator.]
[Fig. 1.9 Channel measurement: a known signal x(t) drives a linear time-invariant channel whose output is observed in additive noise.]
  • 27. The receiver observes r(t) and tries to estimate h(τ). This particular example could best be described as a continuous estimation problem. Many other problems of interest in which we shall try to estimate a continuous waveform will be encountered. For convenience, we shall use the term modulation theory for this category, even though the term continuous waveform estimation might be more descriptive.
The other levels of the modulation theory problem follow by direct analogy. In the amplitude modulation system shown in Fig. 1.8b the receiver frequently does not know the phase angle of the carrier. In this case a suitable model is

$$r(t) = [1 + m a(t)] \sin(\omega_c t + \theta) + n(t), \tag{23}$$

where θ is an unknown parameter. This is an example of a signal with unknown parameter problem in the modulation theory area.
Fig. 1.10 Modulation theory hierarchy (continuous waveform estimation):
Level 1 (Known signals in noise): 1. Conventional communication systems such as AM (DSB-AM, SSB), FM, and PM with phase synchronization. 2. Optimum filter theory. 3. Optimum feedback systems. 4. Channel measurement. 5. Orbital estimation for satellites. 6. Signal estimation in seismic and sonar classification systems. 7. Synchronization in digital systems.
Level 2 (Signals with unknown parameters in noise): 1. Conventional communication systems without phase synchronization. 2. Estimation of channel characteristics when phase of input signal is unknown.
Level 3 (Random signals in noise): 1. Analog communication over randomly varying channels. 2. Estimation of statistics of time-varying processes. 3. Estimation of plant characteristics.
  • 28. A simple example of a third-level problem (random signal in noise) is one in which we transmit a frequency-modulated signal over a radio link whose gain and phase characteristics are time-varying. We shall find that if we transmit the signal in (20) over this channel the received waveform will be

$$r(t) = V(t) \sin\left[\omega_c t + \int^{t} a(u)\, du + \theta(t)\right] + n(t), \tag{24}$$

where V(t) and θ(t) are sample functions from random processes. Thus, even if a(u) were known and the noise n(t) were absent, the received signal would still be a random process.
An over-all outline of the problems of interest to us appears in Fig. 1.10. Additional examples included in the table to indicate the breadth of the problems that fit into the outline are discussed in more detail in the text. Now that we have outlined the areas of interest it is appropriate to determine how to go about solving them.
1.2 POSSIBLE APPROACHES
From the examples we have discussed it is obvious that an inherent feature of all the problems is randomness of source, channel, or noise (often all three). Thus our approach must be statistical in nature. Even assuming that we are using a statistical model, there are many different ways to approach the problem. We can divide the possible approaches into two categories, which we denote as "structured" and "nonstructured." Some simple examples will illustrate what we mean by a structured approach.
Example 1. The input to a linear time-invariant system is r(t):

$$r(t) = s(t) + w(t), \qquad 0 \le t \le T,$$
$$r(t) = 0, \qquad \text{elsewhere}. \tag{25}$$

The impulse response of the system is h(τ). The signal s(t) is a known function with energy E_s,

$$E_s = \int_0^T s^2(t)\, dt, \tag{26}$$

and w(t) is a sample function from a zero-mean random process with a covariance function

$$K_w(t, u) = \frac{N_0}{2}\, \delta(t - u). \tag{27}$$

We are concerned with the output of the system at time T. The output due to the signal is a deterministic quantity:

$$s_0(T) = \int_0^T h(\tau)\, s(T - \tau)\, d\tau. \tag{28}$$
  • 29. The output due to the noise is a random variable:

$$n_0(T) = \int_0^T h(\tau)\, n(T - \tau)\, d\tau. \tag{29}$$

We can define the output signal-to-noise ratio at time T as

$$\frac{S}{N} \triangleq \frac{s_0^2(T)}{E[n_0^2(T)]}, \tag{30}$$

where E(·) denotes expectation. Substituting (28) and (29) into (30), we obtain

$$\frac{S}{N} = \frac{\left[\int_0^T h(\tau)\, s(T-\tau)\, d\tau\right]^2}{E\left[\int_0^T \int_0^T h(\tau)\, h(u)\, n(T-\tau)\, n(T-u)\, d\tau\, du\right]}. \tag{31}$$

By bringing the expectation inside the integral, using (27), and performing the integration with respect to u, we have

$$\frac{S}{N} = \frac{\left[\int_0^T h(\tau)\, s(T-\tau)\, d\tau\right]^2}{\dfrac{N_0}{2} \displaystyle\int_0^T h^2(\tau)\, d\tau}. \tag{32}$$

The problem of interest is to choose h(τ) to maximize the signal-to-noise ratio. The solution follows easily, but it is not important for our present discussion. (See Problem 3.3.1.)
This example illustrates the three essential features of the structured approach to a statistical optimization problem:
Structure. The processor was required to be a linear time-invariant filter. We wanted to choose the best system in this class. Systems that were not in this class (e.g., nonlinear or time-varying) were not allowed.
Criterion. In this case we wanted to maximize a quantity that we called the signal-to-noise ratio.
Information. To write the expression for S/N we had to know the signal shape and the covariance function of the noise process.
If we knew more about the process (e.g., its first-order probability density), we could not use it, and if we knew less, we could not solve the problem. Clearly, if we changed the criterion, the information required might be different. For example, to maximize

$$\chi \triangleq \frac{s_0^2(T)}{E[n_0^4(T)]}, \tag{33}$$

the covariance function of the noise process would not be adequate. Alternatively, if we changed the structure, the information required might change.
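As a concrete numerical illustration that is not part of the original text, the ratio in (32) can be evaluated for any candidate filter by discretizing the integrals. The pulse shape and N₀ below are assumptions; the comparison shows that the matched choice h(τ) = s(T − τ) beats an arbitrary alternative.

```python
import numpy as np

# Discretized version of (32): S/N = [∫ h(τ)s(T-τ)dτ]² / ((N0/2)∫ h²(τ)dτ).
# The pulse shape and N0 are illustrative assumptions, not from the text.
dt, T, N0 = 1e-3, 1.0, 0.1
tau = np.arange(0, T, dt)
s = np.sin(2 * np.pi * 5 * tau)           # known signal s(t) with energy Es

def snr(h):
    num = (np.sum(h * s[::-1]) * dt) ** 2       # [∫ h(τ) s(T-τ) dτ]²
    den = (N0 / 2) * np.sum(h ** 2) * dt        # (N0/2) ∫ h²(τ) dτ
    return num / den

h_matched = s[::-1]                        # h(τ) = s(T-τ), the maximizing choice
h_flat = np.ones_like(tau)                 # a rectangular filter for comparison
print(snr(h_matched), snr(h_flat))         # matched filter yields the larger S/N
```

For the matched choice this evaluates (up to discretization error) to 2E_s/N₀, the familiar matched-filter result.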
  • 30. Thus the three ideas of structure, criterion, and information are closely related. It is important to emphasize that the structured approach does not imply a linear system, as illustrated by Example 2.
Example 2. The input to the nonlinear no-memory device shown in Fig. 1.11 is r(t), where

$$r(t) = s(t) + n(t), \qquad -\infty < t < \infty. \tag{34}$$

At any time t, s(t) is the value of a random variable s with known probability density p_s(S). Similarly, n(t) is the value of a statistically independent random variable n with known density p_n(N). The output of the device is y(t), where

$$y(t) = a_0 + a_1 r(t) + a_2 r^2(t) \tag{35}$$

is a quadratic no-memory function of r(t). [The adjective no-memory emphasizes that the value of y(t₀) depends only on r(t₀).] We want to choose the coefficients a₀, a₁, and a₂ so that y(t) is the minimum mean-square error estimate of s(t). The mean-square error is

$$\xi(t) \triangleq E\{[y(t) - s(t)]^2\} = E\left(\{a_0 + a_1 r(t) + a_2 r^2(t) - s(t)\}^2\right) \tag{36}$$

and a₀, a₁, and a₂ are chosen to minimize ξ(t). The solution to this particular problem is given in Chapter 3.
The technique for solving structured problems is conceptually straightforward. We allow the structure to vary within the allowed class and choose the particular system that maximizes (or minimizes) the criterion of interest.
An obvious advantage to the structured approach is that it usually requires only a partial characterization of the processes. This is important because, in practice, we must measure or calculate the process properties needed. An obvious disadvantage is that it is often impossible to tell if the structure chosen is correct. In Example 1 a simple nonlinear system might be far superior to the best linear system.
[Fig. 1.11 A structured nonlinear device: a nonlinear no-memory device mapping r(t) into y(t).]
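Since the text defers the analytical solution to Chapter 3, here is an empirical sketch, not from the text, that finds the minimizing coefficients in (35)-(36) by linear least squares over Monte Carlo samples; the Gaussian densities for s and n are assumptions.

```python
import numpy as np

# Empirical version of Example 2: choose a0, a1, a2 in (35) to minimize the
# mean-square error (36). The error is quadratic in the coefficients, so a
# least-squares fit on the basis functions 1, r, r² approximates the optimum.
# The Gaussian densities for s and n are illustrative assumptions.
rng = np.random.default_rng(1)
s = rng.normal(0.0, 1.0, 100_000)          # samples of s with density p_s(S)
n = rng.normal(0.0, 0.5, 100_000)          # independent noise with density p_n(N)
r = s + n                                  # observed r(t) = s(t) + n(t), per (34)

Phi = np.column_stack([np.ones_like(r), r, r * r])   # basis 1, r, r²
a, *_ = np.linalg.lstsq(Phi, s, rcond=None)          # normal-equations solution
y = Phi @ a                                          # quadratic estimate y(t)
print(a, np.mean((y - s) ** 2))                      # coefficients and MSE
```

For jointly Gaussian s and n the fitted a₂ comes out near zero, a hint that the quadratic term buys nothing in that special case.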
  • 31. Similarly, in Example 2 some other nonlinear system might be far superior to the quadratic system. Once a class of structure is chosen we are committed. A number of trivial examples demonstrate the effect of choosing the wrong structure. We shall encounter an important practical example when we study frequency modulation in Chapter II-2.
At first glance it appears that one way to get around the problem of choosing the proper structure is to let the structure be an arbitrary nonlinear time-varying system. In other words, the class of structure is chosen to be so large that every possible system will be included in it. The difficulty is that there is no convenient tool, such as the convolution integral, to express the output of a nonlinear system in terms of its input. This means that there is no convenient way to investigate all possible systems by using a structured approach.
The alternative to the structured approach is a nonstructured approach. Here we refuse to make any a priori guesses about what structure the processor should have. We establish a criterion, solve the problem, and implement whatever processing procedure is indicated.
A simple example of the nonstructured approach can be obtained by modifying Example 2. Instead of assigning characteristics to the device, we denote the estimate by y(t). Letting

$$\xi(t) \triangleq E\{[y(t) - s(t)]^2\}, \tag{37}$$

we solve for the y(t) that is obtained from r(t) in any manner to minimize ξ. The obvious advantage is that if we can solve the problem we know that our answer is, with respect to the chosen criterion, the best processor of all possible processors. The obvious disadvantage is that we must completely characterize all the signals, channels, and noises that enter into the problem. Fortunately, it turns out that there are a large number of problems of practical importance in which this complete characterization is possible.
Throughout both books we shall emphasize the nonstructured approach.
Our discussion up to this point has developed the topical and logical basis of these books. We now discuss the actual organization.
  • 32. 1.3 ORGANIZATION
The material covered in this book and Volume II can be divided into five parts. The first can be labeled Background and consists of Chapters 2 and 3. In Chapter 2 we develop in detail a topic that we call Classical Detection and Estimation Theory. Here we deal with problems in which the observations are sets of random variables instead of random waveforms. The theory needed to solve problems of this type has been studied by statisticians for many years. We therefore use the adjective classical to describe it. The purpose of the chapter is twofold: first, to derive all the basic statistical results we need in the remainder of the chapters; second, to provide a general background in detection and estimation theory that can be extended into various areas that we do not discuss in detail. To accomplish the second purpose we keep the discussion as general as possible. We consider in detail the binary and M-ary hypothesis testing problem, the problem of estimating random and nonrandom variables, and the composite hypothesis testing problem. Two more specialized topics, the general Gaussian problem and performance bounds on binary tests, are developed as background for specific problems we shall encounter later.
The next step is to bridge the gap between the classical case and the waveform problems discussed in Section 1.1. Chapter 3 develops the necessary techniques. The key to the transition is a suitable method for characterizing random processes. When the observation interval is finite, the most useful characterization is by a series expansion of the random process which is a generalization of the conventional Fourier series. When the observation interval is infinite, a transform characterization, which is a generalization of the usual Fourier transform, is needed. In the process of developing these characterizations, we encounter integral equations and we digress briefly to develop methods of solution. Just as in Chapter 2, our discussion is general and provides background for other areas of application.
With these two chapters in the first part as background, we are prepared to work our way through the hierarchy of problems outlined in Figs. 1.4, 1.7, and 1.10.
The second part of the book (Chapter 4) can be labeled Elementary Detection and Estimation Theory. Here we develop the first two levels described in Section 1.1. (This material corresponds to the upper two levels in Figs. 1.4 and 1.7.) We begin by looking at the simple binary digital communication system described in Fig. 1.1 and then proceed to more complicated problems in the communications, radar, and sonar area involving M-ary communication, random phase channels, random amplitude and phase channels, and colored noise interference. By exploiting the parallel nature of the estimation problem, results are obtained easily for the estimation problem outlined in Fig. 1.5 and other more complex systems. The extension of the results to include the multiple channel (e.g., frequency diversity systems or arrays) and multiple parameter (e.g., range and Doppler) problems completes our discussion. The results in this chapter are fundamental to the understanding of modern communication and radar/sonar systems.
  • 33. The third part, which can be labeled Modulation Theory or Continuous Estimation Theory, consists of Chapters 5 and 6 and Chapter 2 of Volume II. In Chapter 5 we formulate a quantitative model for the first two levels of the continuous waveform estimation problem and derive a set of integral equations whose solution is the optimum estimate of the message. We also derive equations that give bounds on the performance of the estimators. In order to study solution techniques, we divide the estimation problem into two categories, linear and nonlinear.
In Chapter 6 we study linear estimation problems in detail. In the first section of the chapter we discuss the relationships between various criteria, process characteristics, and the structure of the processor. In the next section we discuss the special case in which the processes are stationary and the infinite past is available. This case, the Wiener problem, leads to straightforward solution techniques. The original work of Wiener is extended to obtain some important closed-form error expressions. In the next section we discuss the case in which the processes can be characterized by using state-variable techniques. This case, the Kalman-Bucy problem, enables us to deal with nonstationary, finite-interval problems and adds considerable insight to the results of the preceding section.
The material in Chapters 1 through 6 has two characteristics:
1. In almost all cases we can obtain explicit, exact solutions to the problems that we formulate.
2. Most of the topics discussed are of such fundamental interest that everyone concerned with the statistical design of communication, radar, or sonar systems should be familiar with them.
As soon as we try to solve the nonlinear estimation problem, we see a sharp departure. To obtain useful results we must resort to approximate solution techniques. To decide what approximations are valid, however, we must consider specific nonlinear modulation systems. Thus the precise quantitative results are only applicable to the specific system. In view of this departure, we pause briefly in our logical development and summarize our results in Chapter 7.
After a brief introduction we return to the nonlinear modulation problem in Chapter 2 of Volume II and consider angle modulation systems in great detail. After an approximation to the optimum processor is developed, its performance and possible design modification are analyzed both theoretically and experimentally. More advanced techniques from Markov process theory and information theory are used to obtain significant results.
In the fourth part we revisit the problems of detection, estimation, and modulation theory at the third level of the hierarchy described in Section 1.1. Looking at the bottom boxes in Figs. 1.4, 1.7, and 1.10, we see that this is the Random Signals in Noise problem. Chapter II-3 studies it in
  • 34. detail. We find that the linear processors developed in Chapter I-6 play a fundamental role in the random signal problem. This result, coupled with the corresponding result in Chapter II-2, emphasizes the fundamental importance of the results in Chapter I-6. They also illustrate the inherent unity of the various problems. Specific topics such as power-spectrum parameter estimation and analog transmission over time-varying channels are also developed.
The fifth part is labeled Applications and includes Chapters II-4 and II-5. Throughout the two books we emphasize applications of the theory to models of practical problems. In most of them the relation of the actual physical situation can be explained in a page or two. The fifth part deals with physical situations in which developing the model from the physical situation is a central issue. Chapter II-4 studies the radar/sonar problem in depth. It builds up a set of target and channel models, starting with slowly fluctuating point targets and culminating in deep targets that fluctuate at arbitrary rates. This set of models enables us to study the signal design problem for radar and sonar, the resolution problem in mapping radars, the effect of reverberation on sonar-system performance, estimation of parameters of spread targets, communication over spread channels, and other important problems.
In Chapter II-5 we study various multidimensional problems such as multiplex communication systems and multivariable processing problems encountered in continuous receiving apertures and optical systems. The primary emphasis in the chapter is on optimum array processing in sonar (or seismic) systems. Both active and passive sonar systems are discussed; specific processor configurations are developed and their performance is analyzed.
Finally, in Chapter II-6 we summarize some of the more important results, mention some related topics that have been omitted, and suggest areas of future research.
  • 35. 2 Classical Detection and Estimation Theory
2.1 INTRODUCTION
In this chapter we develop in detail the basic ideas of classical detection and estimation theory. The first step is to define the various terms.
The basic components of a simple decision-theory problem are shown in Fig. 2.1.
[Fig. 2.1 Components of a decision theory problem: source, probabilistic transition mechanism, observation space, decision rule.]
The first is a source that generates an output. In the simplest case this output is one of two choices. We refer to them as hypotheses and label them H₀ and H₁ in the two-choice case. More generally, the output might be one of M hypotheses, which we label H₀, H₁, ..., H_{M-1}. Some typical source mechanisms are the following:
1. A digital communication system transmits information by sending ones and zeros. When "one" is sent, we call it H₁, and when "zero" is sent, we call it H₀.
2. In a radar system we look at a particular range and azimuth and try
  • 36. to decide whether a target is present; H₁ corresponds to the presence of a target and H₀ corresponds to no target.
3. In a medical diagnosis problem we examine an electrocardiogram. Here H₁ could correspond to the patient having had a heart attack and H₀ to the absence of one.
4. In a speaker classification problem we know the speaker is German, British, or American and either male or female. There are six possible hypotheses.
In the cases of interest to us we do not know which hypothesis is true.
The second component of the problem is a probabilistic transition mechanism; the third is an observation space. The transition mechanism
  • 37. can be viewed as a device that knows which hypothesis is true. Based on this knowledge, it generates a point in the observation space according to some probability law.
A simple example to illustrate these ideas is given in Fig. 2.2.
[Fig. 2.2 A simple decision problem: (a) model; (b) probability densities of n and of r under H₀ and H₁.]
When H₁ is true, the source generates +1. When H₀ is true, the source generates -1. An independent discrete random variable n whose probability density is shown in Fig. 2.2b is added to the source output. The sum of the source output and n is the observed variable r. Under the two hypotheses, we have

$$H_1: r = 1 + n,$$
$$H_0: r = -1 + n. \tag{1}$$

The probability densities of r on the two hypotheses are shown in Fig. 2.2b. The observation space is one-dimensional, for any output can be plotted on a line.
A related example is shown in Fig. 2.3a in which the source generates two numbers in sequence. A random variable n₁ is added to the first number and an independent random variable n₂ is added to the second. Thus

$$H_1: r_1 = 1 + n_1, \quad r_2 = 1 + n_2,$$
$$H_0: r_1 = -1 + n_1, \quad r_2 = -1 + n_2. \tag{2}$$

The joint probability density of r₁ and r₂ when H₁ is true is shown in Fig. 2.3b. The observation space is two-dimensional and any observation can be represented as a point in a plane.
In this chapter we confine our discussion to problems in which the observation space is finite-dimensional. In other words, the observations consist of a set of N numbers and can be represented as a point in an N-dimensional space. This is the class of problem that statisticians have treated for many years. For this reason we refer to it as the classical decision problem.
The fourth component of the detection problem is a decision rule. After observing the outcome in the observation space we shall guess which hypothesis was true, and to accomplish this we develop a decision rule that assigns each point to one of the hypotheses. Suitable choices for decision rules will depend on several factors which we discuss in detail later. Our study will demonstrate how these four components fit together to form the total decision (or hypothesis-testing) problem.
The classical estimation problem is closely related to the detection problem. We describe it in detail later.
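The transition mechanism of (1) is easy to mimic numerically; the sketch below is illustrative and not from the text, and the particular values and probabilities assumed for the discrete noise n stand in for the density the text specifies pictorially in Fig. 2.2b.

```python
import numpy as np

# Simulate the simple decision problem of (1): r = ±1 + n, with a discrete
# noise variable n. The support and probabilities of n are assumptions.
rng = np.random.default_rng(5)
noise_values = np.array([-1, 0, 1])          # assumed support of n
noise_probs = np.array([0.25, 0.5, 0.25])    # assumed probabilities

def observe(h1_true, size=1):
    source = 1 if h1_true else -1
    n = rng.choice(noise_values, p=noise_probs, size=size)
    return source + n                        # points in the 1-D observation space

print(observe(True, 5), observe(False, 5))
```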
  • 38. [Fig. 2.3 A two-dimensional problem: (a) model; (b) joint probability density of r₁ and r₂ when H₁ is true.]
Organization. This chapter is organized in the following manner. In Section 2.2 we study the binary hypothesis testing problem. Then in Section 2.3 we extend the results to the case of M hypotheses. In Section 2.4 classical estimation theory is developed. The problems that we encounter in Sections 2.2 and 2.3 are characterized by the property that each source output corresponds to a different hypothesis. In Section 2.5 we shall examine the composite hypothesis testing problem. Here a number of source outputs are lumped together to form a single hypothesis.
All of the developments through Section 2.5 deal with arbitrary probability transition mechanisms. In Section 2.6 we consider in detail a special class of problems that will be useful in the sequel. We refer to it as the general Gaussian class.
In many cases of practical importance we can develop the "optimum" decision rule according to certain criteria but cannot evaluate how well the
  • 39. test will work. In Section 2.7 we develop bounds and approximate expressions for the performance that will be necessary for some of the later chapters. Finally, in Section 2.8 we summarize our results and indicate some of the topics that we have omitted.
2.2 SIMPLE BINARY HYPOTHESIS TESTS
As a starting point we consider the decision problem in which each of two source outputs corresponds to a hypothesis. Each hypothesis maps into a point in the observation space. We assume that the observation space corresponds to a set of N observations: r₁, r₂, r₃, ..., r_N. Thus each set can be thought of as a point in an N-dimensional space and can be denoted by a vector r:

$$\mathbf{r} \triangleq \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_N \end{bmatrix}. \tag{3}$$

The probabilistic transition mechanism generates points in accord with the two known conditional probability densities p_{r|H₁}(R|H₁) and p_{r|H₀}(R|H₀). The object is to use this information to develop a suitable decision rule. To do this we must look at various criteria for making decisions.
2.2.1 Decision Criteria
In the binary hypothesis problem we know that either H₀ or H₁ is true. We shall confine our discussion to decision rules that are required to make a choice. (An alternative procedure would be to allow decision rules with three outputs: (a) H₀ true, (b) H₁ true, (c) don't know.) Thus each time the experiment is conducted one of four things can happen:
1. H₀ true; choose H₀.
2. H₀ true; choose H₁.
3. H₁ true; choose H₁.
4. H₁ true; choose H₀.
The first and third alternatives correspond to correct choices. The second and fourth alternatives correspond to errors. The purpose of a decision criterion is to attach some relative importance to the four possible courses of action. It might be expected that the method of processing the received
  • 40. data (r) would depend on the decision criterion we select. In this section we show that for the two criteria of most interest, the Bayes and the Neyman-Pearson, the operations on r are identical.
Bayes Criterion. A Bayes test is based on two assumptions. The first is that the source outputs are governed by probability assignments, which are denoted by P₁ and P₀, respectively, and called the a priori probabilities. These probabilities represent the observer's information about the source before the experiment is conducted. The second assumption is that a cost is assigned to each possible course of action. We denote the cost for the four courses of action as C₀₀, C₁₀, C₁₁, C₀₁, respectively. The first subscript indicates the hypothesis chosen and the second, the hypothesis that was true. Each time the experiment is conducted a certain cost will be incurred. We should like to design our decision rule so that on the average the cost will be as small as possible. To do this we first write an expression for the expected value of the cost. We see that there are two probabilities that we must average over: the a priori probability and the probability that a particular course of action will be taken. Denoting the expected value of the cost as the risk ℛ, we have

$$\mathcal{R} = C_{00} P_0 \Pr(\text{say } H_0 \mid H_0 \text{ is true}) + C_{10} P_0 \Pr(\text{say } H_1 \mid H_0 \text{ is true}) + C_{11} P_1 \Pr(\text{say } H_1 \mid H_1 \text{ is true}) + C_{01} P_1 \Pr(\text{say } H_0 \mid H_1 \text{ is true}). \tag{4}$$

Because we have assumed that the decision rule must say either H₁ or H₀, we can view it as a rule for dividing the total observation space Z into two parts, Z₀ and Z₁, as shown in Fig. 2.4. Whenever an observation falls in Z₀ we say H₀, and whenever an observation falls in Z₁ we say H₁.
[Fig. 2.4 Decision regions: the observation space Z divided into Z₀ (say H₀) and Z₁ (say H₁).]
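As a numerical aside not found in the text, the risk in (4) can be estimated by Monte Carlo simulation for any fixed decision rule. The costs, priors, Gaussian noise, and zero-threshold rule below are all illustrative assumptions.

```python
import numpy as np

# Monte Carlo estimate of the risk (4) for the ±1-plus-noise example under the
# simple rule "say H1 if r > 0". Costs, priors, and noise are assumptions.
C00, C10, C11, C01 = 0.0, 1.0, 0.0, 1.0
P0, P1, sigma = 0.5, 0.5, 0.5
rng = np.random.default_rng(3)
M = 200_000

h_true = rng.random(M) < P1                    # True where H1 holds
r = np.where(h_true, 1.0, -1.0) + sigma * rng.standard_normal(M)
say_h1 = r > 0.0                               # fixed decision rule

cost = np.select(
    [~h_true & ~say_h1, ~h_true & say_h1, h_true & say_h1, h_true & ~say_h1],
    [C00, C10, C11, C01])
print(cost.mean())                             # estimate of the risk R
```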
We can now write the expression for the risk in terms of the transition probabilities and the decision regions:

    \mathcal{R} = C_{00} P_0 \int_{Z_0} p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R} + C_{10} P_0 \int_{Z_1} p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R}
        + C_{11} P_1 \int_{Z_1} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R} + C_{01} P_1 \int_{Z_0} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R}.    (5)

For an N-dimensional observation space the integrals in (5) are N-fold integrals. We shall assume throughout our work that the cost of a wrong decision is higher than the cost of a correct decision. In other words,

    C_{10} > C_{00}, \qquad C_{01} > C_{11}.    (6)

Now, to find the Bayes test we must choose the decision regions Z_0 and Z_1 in such a manner that the risk will be minimized. Because we require that a decision be made, this means that we must assign each point R in the observation space Z to Z_0 or Z_1. Thus

    Z = Z_0 + Z_1 \triangleq Z_0 \cup Z_1.    (7)

Rewriting (5), we have

    \mathcal{R} = P_0 C_{00} \int_{Z_0} p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R} + P_0 C_{10} \int_{Z - Z_0} p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R}
        + P_1 C_{11} \int_{Z - Z_0} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R} + P_1 C_{01} \int_{Z_0} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R}.    (8)

Observing that

    \int_{Z} p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R} = \int_{Z} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R} = 1,    (9)

(8) reduces to

    \mathcal{R} = P_0 C_{10} + P_1 C_{11} + \int_{Z_0} \left\{ [P_1 (C_{01} - C_{11})\, p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)] - [P_0 (C_{10} - C_{00})\, p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)] \right\} d\mathbf{R}.    (10)
The first two terms represent the fixed cost. The integral represents the cost controlled by those points R that we assign to Z_0. The assumption in (6) implies that the two terms inside the brackets are positive. Therefore all values of R where the second term is larger than the first should be included in Z_0 because they contribute a negative amount to the integral. Similarly, all values of R where the first term is larger than the second should be excluded from Z_0 (assigned to Z_1) because they would contribute a positive amount to the integral. Values of R where the two terms are equal have no effect on the cost and may be assigned arbitrarily. We shall assume that these points are assigned to H_1 and ignore them in our subsequent discussion. Thus the decision regions are defined by the statement: If

    P_1 (C_{01} - C_{11})\, p_{\mathbf{r}|H_1}(\mathbf{R}|H_1) \ge P_0 (C_{10} - C_{00})\, p_{\mathbf{r}|H_0}(\mathbf{R}|H_0),    (11)

assign R to Z_1 and consequently say that H_1 is true. Otherwise assign R to Z_0 and say H_0 is true. Alternately, we may write

    \frac{p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)}{p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{P_0 (C_{10} - C_{00})}{P_1 (C_{01} - C_{11})}.    (12)

The quantity on the left is called the likelihood ratio and denoted by Λ(R):

    \Lambda(\mathbf{R}) \triangleq \frac{p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)}{p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)}.    (13)

Because it is the ratio of two functions of a random variable, it is a random variable. We see that regardless of the dimensionality of R, Λ(R) is a one-dimensional variable. The quantity on the right of (12) is the threshold of the test and is denoted by η:

    \eta \triangleq \frac{P_0 (C_{10} - C_{00})}{P_1 (C_{01} - C_{11})}.    (14)

Thus the Bayes criterion leads us to a likelihood ratio test (LRT):

    \Lambda(\mathbf{R}) \underset{H_0}{\overset{H_1}{\gtrless}} \eta.    (15)

We see that all the data processing is involved in computing Λ(R) and is not affected by a priori probabilities or cost assignments. This invariance of the data processing is of considerable practical importance. Frequently the costs and a priori probabilities are merely educated guesses. The result in (15) enables us to build the entire processor and leave η as a variable threshold to accommodate changes in our estimates of a priori probabilities and costs.
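The structure of (13)-(15) is easy to demonstrate numerically. The following is a minimal sketch in Python (NumPy assumed); the density functions, costs, and a priori probabilities are placeholders supplied by the user, not quantities taken from the text:

```python
import numpy as np

def lrt_decide(R, p1, p0, P0, P1, C00, C01, C10, C11):
    """Bayes likelihood ratio test of (13)-(15).

    R      : observation vector (NumPy array)
    p1, p0 : callables returning the joint densities
             p_{r|H1}(R|H1) and p_{r|H0}(R|H0)
    Returns 1 if H1 is chosen and 0 if H0 is chosen.
    """
    Lambda = p1(R) / p0(R)                           # likelihood ratio, (13)
    eta = (P0 * (C10 - C00)) / (P1 * (C01 - C11))    # threshold, (14)
    return 1 if Lambda >= eta else 0                 # LRT, (15)
```

Note that the costs and a priori probabilities enter only through eta; the data processing itself, the computation of Λ(R), is unchanged when they are revised.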
Because the natural logarithm is a monotonic function, and both sides of (15) are positive, an equivalent test is

    \ln \Lambda(\mathbf{R}) \underset{H_0}{\overset{H_1}{\gtrless}} \ln \eta.    (16)

Two forms of a processor to implement a likelihood ratio test are shown in Fig. 2.5.

[Fig. 2.5 Likelihood ratio processors: (a) compute Λ(R) and compare it with η in a threshold device; (b) compute ln Λ(R) and compare it with ln η.]

Before proceeding to other criteria, we consider three simple examples.

Example 1. We assume that under H_1 the source output is a constant voltage m. Under H_0 the source output is zero. Before observation the voltage is corrupted by an additive noise. We sample the output waveform each second and obtain N samples. Each noise sample is a zero-mean Gaussian random variable n with variance σ². The noise samples at various instants are independent random variables and are independent of the source output. Looking at Fig. 2.6, we see that the observations under the two hypotheses are

    H_1: r_i = m + n_i, \qquad i = 1, 2, \ldots, N,    (17)

and

    H_0: r_i = n_i, \qquad i = 1, 2, \ldots, N,    (18)

with

    p_{n_i}(X) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{X^2}{2\sigma^2}\right),    (19)

because the noise samples are Gaussian. The probability density of r_i under each hypothesis follows easily:

    p_{r_i|H_1}(R_i|H_1) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(R_i - m)^2}{2\sigma^2}\right) \quad \text{and} \quad p_{r_i|H_0}(R_i|H_0) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{R_i^2}{2\sigma^2}\right).    (20)
[Fig. 2.6 Model for Example 1: the source output under H_1 or H_0 is corrupted by additive noise; r(t) is sampled to give N samples, which are applied to the processor.]

Because the n_i are statistically independent, the joint probability density of the r_i (or, equivalently, of the vector r) is simply the product of the individual probability densities. Thus

    p_{\mathbf{r}|H_1}(\mathbf{R}|H_1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(R_i - m)^2}{2\sigma^2}\right)    (21)

and

    p_{\mathbf{r}|H_0}(\mathbf{R}|H_0) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{R_i^2}{2\sigma^2}\right).    (22)

Substituting into (13), we have

    \Lambda(\mathbf{R}) = \frac{\displaystyle\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(R_i - m)^2}{2\sigma^2}\right)}{\displaystyle\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{R_i^2}{2\sigma^2}\right)}.    (23)

After canceling common terms and taking the logarithm, we have

    \ln \Lambda(\mathbf{R}) = \frac{m}{\sigma^2} \sum_{i=1}^{N} R_i - \frac{N m^2}{2\sigma^2}.    (24)

Thus the likelihood ratio test is

    \frac{m}{\sigma^2} \sum_{i=1}^{N} R_i - \frac{N m^2}{2\sigma^2} \underset{H_0}{\overset{H_1}{\gtrless}} \ln \eta    (25)

or, equivalently,

    \sum_{i=1}^{N} R_i \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\sigma^2}{m} \ln \eta + \frac{N m}{2} \triangleq \gamma.    (26)

We see that the processor simply adds the observations and compares them with a threshold.
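A short simulation sketch of this processor (the values m = 1, σ = 1, N = 10, and η = 1 are illustrative choices, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma, N = 1.0, 1.0, 10     # illustrative parameter values
eta = 1.0                      # illustrative threshold setting

gamma = (sigma**2 / m) * np.log(eta) + N * m / 2.0   # threshold gamma of (26)

def decide(R):
    # sufficient statistic: the sum of the observations
    return 1 if R.sum() >= gamma else 0

R_H1 = m + sigma * rng.normal(size=N)   # one trial under H1: constant plus noise
R_H0 = sigma * rng.normal(size=N)       # one trial under H0: noise only
print(decide(R_H1), decide(R_H0))
```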
In this example the only way the data appear in the likelihood ratio test is in a sum. This is an example of a sufficient statistic, which we denote by l(R) (or simply l when the argument is obvious). It is just a function of the received data which has the property that Λ(R) can be written as a function of l. In other words, when making a decision, knowing the value of the sufficient statistic is just as good as knowing R. In Example 1, l is a linear function of the R_i. A case in which this is not true is illustrated in Example 2.

Example 2. Several different physical situations lead to the mathematical model of interest in this example. The observations consist of a set of N values: r_1, r_2, r_3, ..., r_N. Under both hypotheses, the r_i are independent, identically distributed, zero-mean Gaussian random variables. Under H_1 each r_i has a variance σ_1². Under H_0 each r_i has a variance σ_0². Because the variables are independent, the joint density is simply the product of the individual densities. Therefore

    p_{\mathbf{r}|H_1}(\mathbf{R}|H_1) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_1} \exp\left(-\frac{R_i^2}{2\sigma_1^2}\right)    (27)

and

    p_{\mathbf{r}|H_0}(\mathbf{R}|H_0) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_0} \exp\left(-\frac{R_i^2}{2\sigma_0^2}\right).    (28)

Substituting (27) and (28) into (13) and taking the logarithm, we have

    \ln \Lambda(\mathbf{R}) = \frac{1}{2}\left(\frac{1}{\sigma_0^2} - \frac{1}{\sigma_1^2}\right) \sum_{i=1}^{N} R_i^2 + N \ln \frac{\sigma_0}{\sigma_1}.    (29)

In this case the sufficient statistic is the sum of the squares of the observations,

    l(\mathbf{R}) = \sum_{i=1}^{N} R_i^2,    (30)

and an equivalent test for σ_1² > σ_0² is

    l(\mathbf{R}) \underset{H_0}{\overset{H_1}{\gtrless}} \frac{2\sigma_0^2 \sigma_1^2}{\sigma_1^2 - \sigma_0^2} \left( \ln \eta + N \ln \frac{\sigma_1}{\sigma_0} \right) \triangleq \gamma.    (31)

For σ_1² < σ_0² the inequality is reversed because we are multiplying by a negative number:

    l(\mathbf{R}) \underset{H_1}{\overset{H_0}{\gtrless}} \frac{2\sigma_0^2 \sigma_1^2}{\sigma_0^2 - \sigma_1^2} \left( N \ln \frac{\sigma_0}{\sigma_1} - \ln \eta \right) \triangleq \gamma.    (32)
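The test of (30)-(31) is equally simple to simulate. A sketch under assumed values σ_1 = 2, σ_0 = 1, N = 10, η = 1 (illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma1, sigma0, N, eta = 2.0, 1.0, 10, 1.0   # illustrative values, sigma1 > sigma0

# threshold gamma of (31)
gamma = (2 * sigma0**2 * sigma1**2 / (sigma1**2 - sigma0**2)) \
        * (np.log(eta) + N * np.log(sigma1 / sigma0))

def decide(R):
    # sufficient statistic: the sum of the squares of the observations, (30)
    return 1 if (R**2).sum() >= gamma else 0

print(decide(sigma1 * rng.normal(size=N)),   # a trial drawn under H1
      decide(sigma0 * rng.normal(size=N)))   # a trial drawn under H0
```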
These two examples have emphasized Gaussian variables. In the next example we consider a different type of distribution.

Example 3. The Poisson distribution of events is encountered frequently as a model of shot noise and other diverse phenomena (e.g., [1] or [2]). Each time the experiment is conducted a certain number of events occur. Our observation is just this number, which ranges from 0 to ∞ and obeys a Poisson distribution on both hypotheses; that is,

    \Pr(n \text{ events}) = \frac{m_i^n}{n!}\, e^{-m_i}, \qquad n = 0, 1, 2, \ldots, \quad i = 0, 1,    (33)

where m_i is the parameter that specifies the average number of events:

    E(n) = m_i.    (34)

It is this parameter m_i that is different in the two hypotheses. Rewriting (33) to emphasize this point, we have for the two Poisson distributions

    H_1: \Pr(n \text{ events}) = \frac{m_1^n}{n!}\, e^{-m_1}, \qquad n = 0, 1, 2, \ldots,    (35)

    H_0: \Pr(n \text{ events}) = \frac{m_0^n}{n!}\, e^{-m_0}, \qquad n = 0, 1, 2, \ldots.    (36)

Then the likelihood ratio test is

    \Lambda(n) = \left(\frac{m_1}{m_0}\right)^n \exp[-(m_1 - m_0)] \underset{H_0}{\overset{H_1}{\gtrless}} \eta    (37)

or, equivalently (for m_1 > m_0),

    n \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\ln \eta + m_1 - m_0}{\ln m_1 - \ln m_0}.    (38)

This example illustrates how the likelihood ratio test, which we originally wrote in terms of probability densities, can be simply adapted to accommodate observations that are discrete random variables.
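A corresponding sketch for the Poisson test of (38), with m_0 = 2, m_1 = 4, and η = 1 as illustrative parameters:

```python
import numpy as np

m0, m1, eta = 2.0, 4.0, 1.0    # illustrative parameters, m1 > m0

# threshold of (38); the decision depends only on the observed count n
gamma = (np.log(eta) + m1 - m0) / (np.log(m1) - np.log(m0))

def decide(n):
    return 1 if n >= gamma else 0

rng = np.random.default_rng(2)
print(decide(rng.poisson(m1)),   # one count drawn under H1
      decide(rng.poisson(m0)))   # one count drawn under H0
```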
We now return to our general discussion of Bayes tests. There are several special kinds of Bayes test which are frequently used and which should be mentioned explicitly.

If we assume that C_00 and C_11 are zero and C_01 = C_10 = 1, the expression for the risk in (8) reduces to

    \mathcal{R} = P_0 \int_{Z_1} p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R} + P_1 \int_{Z_0} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R}.    (39)

We see that (39) is just the total probability of making an error. Therefore for this cost assignment the Bayes test is minimizing the total probability of error. The test is

    \ln \Lambda(\mathbf{R}) \underset{H_0}{\overset{H_1}{\gtrless}} \ln \frac{P_0}{P_1} = \ln P_0 - \ln(1 - P_0).    (40)

When the two hypotheses are equally likely, the threshold is zero. This assumption is normally true in digital communication systems. These processors are commonly referred to as minimum probability of error receivers.

A second special case of interest arises when the a priori probabilities are unknown. To investigate this case we look at (8) again. We observe that once the decision regions Z_0 and Z_1 are chosen, the values of the integrals are determined. We denote these values in the following manner:

    P_F \triangleq \int_{Z_1} p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R},
    P_D \triangleq \int_{Z_1} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R},
    P_M \triangleq \int_{Z_0} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R} = 1 - P_D.    (41)

We see that these quantities are conditional probabilities. The subscripts are mnemonic and chosen from the radar problem in which hypothesis H_1 corresponds to the presence of a target and hypothesis H_0 corresponds to its absence. P_F is the probability of a false alarm (i.e., we say the target is present when it is not); P_D is the probability of detection (i.e., we say the target is present when it is); P_M is the probability of a miss (we say the target is absent when it is present). Although we are interested in a much larger class of problems than this notation implies, we shall use it for convenience.

For any choice of decision regions the risk expression in (8) can be written in the notation of (41):

    \mathcal{R} = P_0 C_{10} + P_1 C_{11} + P_1 (C_{01} - C_{11}) P_M - P_0 (C_{10} - C_{00})(1 - P_F).    (42)

Because

    P_0 = 1 - P_1,    (43)

(42) becomes

    \mathcal{R}(P_1) = C_{00}(1 - P_F) + C_{10} P_F + P_1 \left[ (C_{11} - C_{00}) + (C_{01} - C_{11}) P_M - (C_{10} - C_{00}) P_F \right].    (44)

Now, if all the costs and a priori probabilities are known, we can find a Bayes test. In Fig. 2.7a we plot the Bayes risk, \mathcal{R}_B(P_1), as a function of P_1. Observe that as P_1 changes the decision regions for the Bayes test change and therefore P_F and P_M change. Now consider the situation in which a certain P_1 (say P_1 = P_1^*) is assumed and the corresponding Bayes test designed. We now fix the threshold and assume that P_1 is allowed to change. We denote the risk for this fixed threshold test as \mathcal{R}_F(P_1^*, P_1). Because the threshold is fixed, P_F and P_M are fixed, and (44) is just a straight line. Because it is a Bayes test for P_1 = P_1^*, it touches the \mathcal{R}_B(P_1) curve at that point. Looking at (14), we see that the threshold changes continuously with P_1. Therefore, whenever P_1 ≠ P_1^*, the threshold in the Bayes test will be different. Because the Bayes test minimizes the risk,

    \mathcal{R}_F(P_1^*, P_1) \ge \mathcal{R}_B(P_1).    (45)
[Fig. 2.7 Risk curves: (a) fixed risk versus typical Bayes risk; (b) maximum value of \mathcal{R}_B at P_1 = 0; (c) maximum value of \mathcal{R}_B at P_1 = 1; (d) maximum value of \mathcal{R}_B interior to the interval.]

If Λ is a continuous random variable with a probability distribution function that is strictly monotonic, then changing η always changes the risk. \mathcal{R}_B(P_1) is strictly concave downward and the inequality in (45) is strict. This case, which is one of particular interest to us, is illustrated in Fig. 2.7a. We see that \mathcal{R}_F(P_1^*, P_1) is tangent to \mathcal{R}_B(P_1) at P_1 = P_1^*. These curves demonstrate the effect of incorrect knowledge of the a priori probabilities.

An interesting problem is encountered if we assume that the a priori probabilities are chosen to make our performance as bad as possible. In other words, P_1 is chosen to maximize our risk \mathcal{R}_F(P_1^*, P_1). Three possible examples are given in Figs. 2.7b, c, and d. In Fig. 2.7b the maximum of \mathcal{R}_B(P_1) occurs at P_1 = 0. To minimize the maximum risk we use a Bayes test designed assuming P_1 = 0. In Fig. 2.7c the maximum of \mathcal{R}_B(P_1) occurs at P_1 = 1. To minimize the maximum risk we use a Bayes test designed assuming P_1 = 1. In Fig. 2.7d the maximum occurs inside the interval
[0, 1], and we choose \mathcal{R}_F to be the horizontal line. This implies that the coefficient of P_1 in (44) must be zero:

    (C_{11} - C_{00}) + (C_{01} - C_{11}) P_M - (C_{10} - C_{00}) P_F = 0.    (46)

A Bayes test designed to minimize the maximum possible risk is called a minimax test. Equation 46 is referred to as the minimax equation and is useful whenever the maximum of \mathcal{R}_B(P_1) is interior to the interval.

A special cost assignment that is frequently logical is

    C_{00} = C_{11} = 0    (47)

(this guarantees the maximum is interior) and

    C_{01} = C_M, \qquad C_{10} = C_F.    (48)

With this assignment the risk is

    \mathcal{R}_F = C_F P_F + P_1 (C_M P_M - C_F P_F) = P_0 C_F P_F + P_1 C_M P_M,    (49)

and the minimax equation is

    C_M P_M = C_F P_F.    (50)

Before continuing our discussion of likelihood ratio tests we shall discuss a second criterion and prove that it also leads to a likelihood ratio test.

Neyman-Pearson Tests. In many physical situations it is difficult to assign realistic costs or a priori probabilities. A simple procedure to bypass this difficulty is to work with the conditional probabilities P_F and P_D. In general, we should like to make P_F as small as possible and P_D as large as possible. For most problems of practical importance these are conflicting objectives. An obvious criterion is to constrain one of the probabilities and maximize (or minimize) the other. A specific statement of this criterion is the following:

Neyman-Pearson Criterion. Constrain P_F = α' ≤ α and design a test to maximize P_D (or minimize P_M) under this constraint.

The solution is obtained easily by using Lagrange multipliers. We construct the function F,

    F = P_M + \lambda (P_F - \alpha'),    (51)

or

    F = \int_{Z_0} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R} + \lambda \left[ \int_{Z_1} p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R} - \alpha' \right].    (52)

Clearly, if P_F = α', then minimizing F minimizes P_M.
Rewriting (52), we have

    F = \lambda(1 - \alpha') + \int_{Z_0} \left[ p_{\mathbf{r}|H_1}(\mathbf{R}|H_1) - \lambda\, p_{\mathbf{r}|H_0}(\mathbf{R}|H_0) \right] d\mathbf{R}.    (53)

Now observe that for any positive value of λ an LRT will minimize F. (A negative value of λ gives an LRT with the inequalities reversed.) This follows directly, because to minimize F we assign a point R to Z_0 only when the term in the bracket is negative. This is equivalent to the test

    \frac{p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)}{p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)} < \lambda; \qquad \text{assign point to } Z_0 \text{ or say } H_0.    (54)

The quantity on the left is just the likelihood ratio. Thus F is minimized by the likelihood ratio test

    \Lambda(\mathbf{R}) \underset{H_0}{\overset{H_1}{\gtrless}} \lambda.    (55)

To satisfy the constraint we choose λ so that P_F = α'. If we denote the density of Λ when H_0 is true as p_{\Lambda|H_0}(\Lambda|H_0), then we require

    P_F = \int_{\lambda}^{\infty} p_{\Lambda|H_0}(\Lambda|H_0)\, d\Lambda = \alpha'.    (56)

Solving (56) for λ gives the threshold. The value of λ given by (56) will be non-negative because p_{\Lambda|H_0}(\Lambda|H_0) is zero for negative values of Λ. Observe that decreasing λ is equivalent to increasing Z_1, the region where we say H_1. Thus P_D increases as λ decreases. Therefore we decrease λ until we obtain the largest possible α' ≤ α. In most cases of interest to us P_F is a continuous function of λ and we have P_F = α. We shall assume this continuity in all subsequent discussions. Under this assumption the Neyman-Pearson criterion leads to a likelihood ratio test. On p. 41 we shall see the effect of the continuity assumption not being valid.
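For Example 1 the Neyman-Pearson threshold can be set in closed form, because the sufficient statistic is Gaussian under both hypotheses. A sketch using SciPy's Gaussian tail functions (the constraint α and the parameter values are illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm

alpha = 0.1                  # false-alarm constraint (illustrative)
m, sigma, N = 1.0, 1.0, 10   # Example 1 parameters (illustrative)

# Under H0 the statistic sum(R) is N(0, N sigma^2); pick the threshold so P_F = alpha.
gamma = np.sqrt(N) * sigma * norm.isf(alpha)
# Under H1 it is N(N m, N sigma^2), which fixes the resulting detection probability.
P_D = norm.sf((gamma - N * m) / (np.sqrt(N) * sigma))
print(gamma, P_D)
```

No costs or a priori probabilities appear; the constraint α alone determines the threshold, as the criterion requires.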
Summary. In this section we have developed two ideas of fundamental importance in hypothesis testing. The first result is the demonstration that for a Bayes or a Neyman-Pearson criterion the optimum test consists of processing the observation R to find the likelihood ratio Λ(R) and then comparing Λ(R) to a threshold in order to make a decision. Thus, regardless of the dimensionality of the observation space, the decision space is one-dimensional.

The second idea is that of a sufficient statistic l(R). The idea of a sufficient statistic originated when we constructed the likelihood ratio and saw that it depended explicitly only on l(R). If we actually construct Λ(R) and then recognize l(R), the notion of a sufficient statistic is perhaps of secondary value. A more important case is when we can recognize l(R) directly. An easy way to do this is to examine the geometric interpretation of a sufficient statistic. We considered the observations r_1, r_2, ..., r_N as a point r in an N-dimensional space, and one way to describe this point is to use these coordinates. When we choose a sufficient statistic, we are simply describing the point in a coordinate system that is more useful for the decision problem. We denote the first coordinate in this system by l, the sufficient statistic, and the remaining N − 1 coordinates, which will not affect our decision, by the (N − 1)-dimensional vector y. Thus

    \Lambda(\mathbf{R}) = \Lambda(L, \mathbf{Y}) = \frac{p_{l,\mathbf{y}|H_1}(L, \mathbf{Y}|H_1)}{p_{l,\mathbf{y}|H_0}(L, \mathbf{Y}|H_0)}.    (57)

Now the expression on the right can be written as

    \Lambda(L, \mathbf{Y}) = \frac{p_{l|H_1}(L|H_1)\; p_{\mathbf{y}|l,H_1}(\mathbf{Y}|L, H_1)}{p_{l|H_0}(L|H_0)\; p_{\mathbf{y}|l,H_0}(\mathbf{Y}|L, H_0)}.    (58)

If l is a sufficient statistic, then Λ(R) must reduce to Λ(L). This implies that the second terms in the numerator and denominator must be equal. In other words,

    p_{\mathbf{y}|l,H_1}(\mathbf{Y}|L, H_1) = p_{\mathbf{y}|l,H_0}(\mathbf{Y}|L, H_0),    (59)

because the density of y cannot depend on which hypothesis is true. We see that choosing a sufficient statistic simply amounts to picking a coordinate system in which one coordinate contains all the information necessary to making a decision. The other coordinates contain no information and can be disregarded for the purpose of making a decision.

In Example 1 the new coordinate system could be obtained by a simple rotation. For example, when N = 2,

    L = \frac{1}{\sqrt{2}} (R_1 + R_2), \qquad Y = \frac{1}{\sqrt{2}} (R_1 - R_2).    (60)

In Example 2 the new coordinate system corresponded to changing to polar coordinates. For N = 2,

    L = R_1^2 + R_2^2, \qquad Y = \tan^{-1} \frac{R_2}{R_1}.    (61)

Notice that the vector y can be chosen in order to make the demonstration of the condition in (59) as simple as possible. The only requirement is that the pair (l, y) must describe any point in the observation space. We should also observe that the condition

    p_{\mathbf{y}|H_1}(\mathbf{Y}|H_1) = p_{\mathbf{y}|H_0}(\mathbf{Y}|H_0)    (62)
does not imply (59) unless l and y are independent under H_1 and H_0. Frequently we will choose y to obtain this independence and then use (62) to verify that l is a sufficient statistic.

2.2.2 Performance: Receiver Operating Characteristic

To complete our discussion of the simple binary problem we must evaluate the performance of the likelihood ratio test. For a Neyman-Pearson test the values of P_F and P_D completely specify the test performance. Looking at (42) we see that the Bayes risk \mathcal{R}_B follows easily if P_F and P_D are known. Thus we can concentrate our efforts on calculating P_F and P_D. We begin by considering Example 1 in Section 2.2.1.

Example 1. From (25) we see that an equivalent test is

    l = \frac{1}{\sqrt{N}\,\sigma} \sum_{i=1}^{N} R_i \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\sigma}{\sqrt{N}\, m} \ln \eta + \frac{\sqrt{N}\, m}{2\sigma}.    (63)

[Fig. 2.8 Error probabilities: (a) P_F calculation; (b) P_D calculation. The threshold is (ln η)/d + d/2, where d = √N m/σ.]
We have multiplied (25) by σ/(√N m) to normalize the next calculation. Under H_0, l is obtained by adding N independent zero-mean Gaussian variables with variance σ² and then dividing by √N σ. Therefore l is N(0, 1). Under H_1, l is N(√N m/σ, 1). The probability densities on the two hypotheses are sketched in Fig. 2.8a. The threshold is also shown. Now, P_F is simply the integral of p_{l|H_0}(L|H_0) to the right of the threshold. Thus

    P_F = \int_{(\ln \eta)/d + d/2}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right) dx,    (64)

where d ≜ √N m/σ is the distance between the means of the two densities. The integral in (64) is tabulated in many references (e.g., [3] or [4]). We generally denote

    \operatorname{erf_*}(X) \triangleq \int_{-\infty}^{X} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right) dx,    (65)

where erf_* is an abbreviation for the error function,† and

    \operatorname{erfc_*}(X) \triangleq \int_{X}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right) dx    (66)

is its complement. In this notation

    P_F = \operatorname{erfc_*}\left(\frac{\ln \eta}{d} + \frac{d}{2}\right).    (67)

Similarly, P_D is the integral of p_{l|H_1}(L|H_1) to the right of the threshold, as shown in Fig. 2.8b:

    P_D = \int_{(\ln \eta)/d + d/2}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left[-\frac{(x - d)^2}{2}\right] dx = \int_{(\ln \eta)/d - d/2}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{y^2}{2}\right) dy = \operatorname{erfc_*}\left(\frac{\ln \eta}{d} - \frac{d}{2}\right).    (68)

In Fig. 2.9a we have plotted P_D versus P_F for various values of d with η as the varying parameter. For η = 0, ln η = −∞, and the processor always guesses H_1. Thus P_F = 1 and P_D = 1. As η increases, P_F and P_D decrease. When η = ∞, the processor always guesses H_0 and P_F = P_D = 0. As we would expect from Fig. 2.8, the performance increases monotonically with d. In Fig. 2.9b we have replotted the results to give P_D versus d with P_F as a parameter on the curves. For a particular d we can obtain any point on the curve by choosing η appropriately (0 ≤ η ≤ ∞).

The result in Fig. 2.9a is referred to as the receiver operating characteristic (ROC). It completely describes the performance of the test as a function of the parameter of interest.

A special case that will be important when we look at communication systems is the case in which we want to minimize the total probability of error:

    \Pr(\epsilon) = P_0 P_F + P_1 P_M.    (69a)

† The function that is usually tabulated is \operatorname{erf}(X) = \frac{2}{\sqrt{\pi}} \int_0^X \exp(-y^2)\, dy, which is related to (65) in an obvious way.
[Fig. 2.9 (a) Receiver operating characteristic: Gaussian variables with unequal means.]

The threshold for this criterion was given in (40). For the special case in which P_0 = P_1 the threshold η equals one and

    \Pr(\epsilon) = \tfrac{1}{2}(P_F + P_M).    (69b)

Using (67) and (68) in (69), we have

    \Pr(\epsilon) = \int_{d/2}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right) dx = \operatorname{erfc_*}\left(\frac{d}{2}\right).    (70)

It is obvious from (70) that we could also obtain the Pr(ε) from the ROC. However, if this is the only threshold setting of interest, it is generally easier to calculate the Pr(ε) directly.
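Equations (67), (68), and (70) give the entire ROC of Fig. 2.9 in closed form. A sketch that evaluates a few points (erfc_*(X) is the Gaussian tail integral, available in SciPy as norm.sf; the value of d and the η values are illustrative):

```python
import numpy as np
from scipy.stats import norm

def roc_point(d, eta):
    """P_F and P_D of (67) and (68)."""
    x = np.log(eta) / d
    return norm.sf(x + d / 2), norm.sf(x - d / 2)

d = 2.0                         # illustrative value of d = sqrt(N) m / sigma
for eta in (0.1, 1.0, 10.0):    # sweeping eta traces out one ROC curve
    print(eta, roc_point(d, eta))

print(norm.sf(d / 2))           # Pr(error) at eta = 1, from (70)
```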
[Fig. 2.9 (b) Detection probability versus d, with P_F as a parameter.]

Before calculating the performance of the other two examples, it is worthwhile to point out two simple bounds on erfc_*(X). They will enable us to discuss its approximate behavior analytically. For X > 0,

    \frac{1}{\sqrt{2\pi}\, X} \left(1 - \frac{1}{X^2}\right) \exp\left(-\frac{X^2}{2}\right) < \operatorname{erfc_*}(X) < \frac{1}{\sqrt{2\pi}\, X} \exp\left(-\frac{X^2}{2}\right).    (71)

This can be derived by integrating by parts. (See Problem 2.2.15 or Feller [30].) A second bound is

    \operatorname{erfc_*}(X) < \frac{1}{2} \exp\left(-\frac{X^2}{2}\right), \qquad X > 0,    (72)
which can also be derived easily (see Problem 2.2.16). The four curves are plotted in Fig. 2.10. We note that erfc_*(X) decreases exponentially.

[Fig. 2.10 Plot of erfc_*(X) and the related functions \tfrac{1}{2} e^{-X^2/2}, \frac{1}{\sqrt{2\pi}\,X} e^{-X^2/2}, and \frac{1}{\sqrt{2\pi}\,X}\left(1 - \frac{1}{X^2}\right) e^{-X^2/2}.]
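The bounds in (71) and (72) are easy to check numerically; a small sketch (the test points are arbitrary):

```python
import numpy as np
from scipy.stats import norm

for X in (0.5, 1.0, 2.0, 4.0):
    q = norm.sf(X)                                            # erfc_*(X)
    upper = np.exp(-X**2 / 2) / (np.sqrt(2 * np.pi) * X)      # upper bound of (71)
    lower = upper * (1 - 1 / X**2)                            # lower bound of (71)
    print(X, lower < q < upper, q < 0.5 * np.exp(-X**2 / 2))  # (71) and (72)
```

For large X the two bounds in (71) pinch together, which is what makes the exponential approximation useful analytically.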
The receiver operating characteristics for the other two examples are also of interest.

Example 2. In this case the test is

    l(\mathbf{R}) = \sum_{i=1}^{N} R_i^2 \underset{H_0}{\overset{H_1}{\gtrless}} \gamma.    (73)

The performance calculation for arbitrary N is somewhat tedious, so we defer it until Section 2.6. A particularly simple case appearing frequently in practice is N = 2. Under H_0 the r_i are independent zero-mean Gaussian variables with variances equal to σ_0²:

    P_F = \Pr(l \ge \gamma \mid H_0) = \Pr(r_1^2 + r_2^2 \ge \gamma \mid H_0).    (74)

To evaluate the expression on the right, we change to polar coordinates:

    r_1 = z \cos \theta, \qquad r_2 = z \sin \theta.    (75)

Then

    \Pr(z^2 \ge \gamma \mid H_0) = \int_0^{2\pi} d\theta \int_{\sqrt{\gamma}}^{\infty} \frac{Z}{2\pi \sigma_0^2} \exp\left(-\frac{Z^2}{2\sigma_0^2}\right) dZ.    (76)

Integrating with respect to θ, we have

    P_F = \int_{\sqrt{\gamma}}^{\infty} \frac{Z}{\sigma_0^2} \exp\left(-\frac{Z^2}{2\sigma_0^2}\right) dZ.    (77)

We observe that l, the sufficient statistic, equals z². Changing variables, we have

    P_F = \int_{\gamma}^{\infty} \frac{1}{2\sigma_0^2} \exp\left(-\frac{L}{2\sigma_0^2}\right) dL = \exp\left(-\frac{\gamma}{2\sigma_0^2}\right).    (78)

(Note that the probability density of the sufficient statistic is exponential.) Similarly,

    P_D = \exp\left(-\frac{\gamma}{2\sigma_1^2}\right).    (79)

To construct the ROC we can combine (78) and (79) to eliminate the threshold γ. This gives

    P_D = P_F^{\sigma_0^2 / \sigma_1^2}.    (80)

In terms of logarithms,

    \ln P_D = \frac{\sigma_0^2}{\sigma_1^2} \ln P_F.    (81)

As expected, the performance improves monotonically as the ratio σ_1²/σ_0² increases. We shall study this case and its generalizations in more detail in Section 2.6.
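The power-law relation (80) can also be checked by direct simulation of the N = 2 problem. A sketch with illustrative values σ_0 = 1, σ_1 = 2, γ = 5:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma0, sigma1, gamma, trials = 1.0, 2.0, 5.0, 200_000   # illustrative values, N = 2

r0 = sigma0 * rng.normal(size=(trials, 2))   # observations under H0
r1 = sigma1 * rng.normal(size=(trials, 2))   # observations under H1
P_F = np.mean((r0**2).sum(axis=1) >= gamma)
P_D = np.mean((r1**2).sum(axis=1) >= gamma)
print(P_D, P_F ** (sigma0**2 / sigma1**2))   # the two values should nearly agree, per (80)
```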
The two Poisson distributions are the third example.

Example 3. From (38), the likelihood ratio test is

    n \underset{H_0}{\overset{H_1}{\gtrless}} \frac{\ln \eta + m_1 - m_0}{\ln m_1 - \ln m_0} = \gamma.    (82)

Because n takes on only integer values, it is more convenient to rewrite (82) as

    n \underset{H_0}{\overset{H_1}{\gtrless}} \gamma_I, \qquad \gamma_I = 0, 1, 2, \ldots,    (83)

where γ_I takes on only integer values. Using (35),

    P_D = 1 - \sum_{n=0}^{\gamma_I - 1} \frac{m_1^n}{n!}\, e^{-m_1}, \qquad \gamma_I = 0, 1, 2, \ldots,    (84)

and from (36)

    P_F = 1 - \sum_{n=0}^{\gamma_I - 1} \frac{m_0^n}{n!}\, e^{-m_0}, \qquad \gamma_I = 0, 1, 2, \ldots.    (85)

The resulting ROC is plotted in Fig. 2.11a for some representative values of m_0 and m_1. We see that it consists of a series of points and that P_F goes from 1 to 1 − e^{−m_0} when the threshold is changed from 0 to 1.

[Fig. 2.11 (a) Receiver operating characteristic, Poisson problem: discrete operating points, shown for m_0 = 2, m_1 = 4 and for m_0 = 4, m_1 = 10.]

Now suppose we wanted P_F to have an intermediate value, say 1 − ½ e^{−m_0}. To achieve this performance we proceed in the following manner. Denoting the LRT with γ_I = 0 as LRT No. 0 and the LRT with γ_I = 1 as LRT No. 1, we have the following table:

    LRT    No. 0    No. 1
    γ_I      0        1
[Fig. 2.11 (b) Receiver operating characteristic with randomized decision rule: straight-line segments connect the points of Fig. 2.11a (shown for m_0 = 2, m_1 = 4 and for m_0 = 4, m_1 = 10).]

To get the desired value of P_F we use LRT No. 0 with probability ½ and LRT No. 1 with probability ½. The test is:

    If n = 0, say H_1 with probability ½ and say H_0 with probability ½;
    if n ≥ 1, say H_1.

This procedure, in which we mix two likelihood ratio tests in some probabilistic manner, is called a randomized decision rule. The resulting P_D is simply a weighted combination of detection probabilities for the two tests:

    P_D = 0.5(1) + 0.5(1 - e^{-m_1}) = 1 - 0.5\, e^{-m_1}.    (86)

We see that the ROC for randomized tests consists of straight lines which connect the points in Fig. 2.11a, as shown in Fig. 2.11b. The reason that we encounter a randomized test is that the observed random variables are discrete. Therefore Λ(R) is a discrete random variable and, using an ordinary likelihood ratio test, only certain values of P_F are possible.
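A sketch of this randomized rule, again with the illustrative parameters m_0 = 2 and m_1 = 4; the empirical rates are compared with the target value 1 − ½e^{−m_0} and with (86):

```python
import numpy as np

m0, m1, trials = 2.0, 4.0, 200_000   # illustrative parameters

def randomized_decide(n, rng):
    # mix LRT No. 0 (always say H1) and LRT No. 1 (say H1 iff n >= 1),
    # each used with probability 1/2
    if n >= 1:
        return 1
    return 1 if rng.random() < 0.5 else 0

rng = np.random.default_rng(4)
P_F = np.mean([randomized_decide(n, rng) for n in rng.poisson(m0, trials)])
P_D = np.mean([randomized_decide(n, rng) for n in rng.poisson(m1, trials)])
print(P_F, 1 - 0.5 * np.exp(-m0))   # empirical P_F versus the target value
print(P_D, 1 - 0.5 * np.exp(-m1))   # empirical P_D versus (86)
```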
Looking at the expression for P_F in (56) and denoting the threshold by η, we have

    P_F(\eta) = \int_{\eta}^{\infty} p_{\Lambda|H_0}(\Lambda|H_0)\, d\Lambda.    (87)

If P_F(η) is a continuous function of η, we can achieve a desired value from 0 to 1 by a suitable choice of η, and a randomized test will never be needed. This is the only case of interest to us in the sequel (see Prob. 2.2.12).

With these examples as a background, we now derive a few general properties of receiver operating characteristics. We confine our discussion to continuous likelihood ratio tests. Two properties of all ROCs follow immediately from this example.

Property 1. All continuous likelihood ratio tests have ROCs that are concave downward. If they were not, a randomized test would be better. This would contradict our proof that a LRT is optimum (see Prob. 2.2.12).

Property 2. All continuous likelihood ratio tests have ROCs that are above the P_D = P_F line. This is just a special case of Property 1 because the points (P_F = 0, P_D = 0) and (P_F = 1, P_D = 1) are contained on all ROCs.

Property 3. The slope of a curve in a ROC at a particular point is equal to the value of the threshold η required to achieve the P_D and P_F of that point.

Proof.

    P_D = \int_{\eta}^{\infty} p_{\Lambda|H_1}(\Lambda|H_1)\, d\Lambda, \qquad P_F = \int_{\eta}^{\infty} p_{\Lambda|H_0}(\Lambda|H_0)\, d\Lambda.    (88)

Differentiating both expressions with respect to η and writing the results as a quotient, we have

    \frac{dP_D/d\eta}{dP_F/d\eta} = \frac{-p_{\Lambda|H_1}(\eta|H_1)}{-p_{\Lambda|H_0}(\eta|H_0)} = \frac{dP_D}{dP_F}.    (89)

We now show that

    p_{\Lambda|H_1}(\eta|H_1) = \eta\, p_{\Lambda|H_0}(\eta|H_0),    (90)

which, together with (89), gives the desired result. Let

    \Omega(\eta) \triangleq \{\mathbf{R} : \Lambda(\mathbf{R}) \ge \eta\} = \left\{\mathbf{R} : \frac{p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)}{p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)} \ge \eta\right\}.    (91)

Then

    P_D(\eta) \triangleq \Pr\{\Lambda(\mathbf{R}) \ge \eta \mid H_1\} = \int_{\Omega(\eta)} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R} = \int_{\Omega(\eta)} \Lambda(\mathbf{R})\, p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R},    (92)
where the last equality follows from the definition of the likelihood ratio. Using the definition of Ω(η), we can rewrite the last integral:

    P_D(\eta) = \int_{\Omega(\eta)} \Lambda(\mathbf{R})\, p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R} = \int_{\eta}^{\infty} X\, p_{\Lambda|H_0}(X|H_0)\, dX.    (93)

Differentiating (93) with respect to η, we obtain

    \frac{dP_D(\eta)}{d\eta} = -\eta\, p_{\Lambda|H_0}(\eta|H_0).    (94)

Equating the expression for dP_D(η)/dη in the numerator of (89) to the right side of (94) gives the desired result. We see that this result is consistent with Example 1. In Fig. 2.9a, the curves for nonzero d have zero slope at P_F = P_D = 1 (η = 0) and infinite slope at P_F = P_D = 0 (η = ∞).

Property 4. Whenever the maximum value of the Bayes risk is interior to the interval (0, 1) on the P_1 axis, the minimax operating point is the intersection of the line

    (C_{11} - C_{00}) + (C_{01} - C_{11})(1 - P_D) - (C_{10} - C_{00}) P_F = 0    (95)

and the appropriate curve of the ROC (see (46)). In Fig. 2.12 we show the special case defined by (50),

    C_M P_M = C_F P_F,    (96)

[Fig. 2.12 Determination of minimax operating point.]
superimposed on the ROC of Example 1. We see that the minimax line starts at the point (P_F = 0, P_D = 1) and intersects the P_F = 1 line at

    P_D = 1 - \frac{C_F}{C_M}.    (97)

This completes our discussion of the binary hypothesis testing problem. Several key ideas should be re-emphasized:

1. Using either a Bayes criterion or a Neyman-Pearson criterion, we find that the optimum test is a likelihood ratio test. Thus, regardless of the dimensionality of the observation space, the test consists of comparing a scalar variable Λ(R) with a threshold. (We assume P_F(η) is continuous.)

2. In many cases construction of the LRT can be simplified if we can identify a sufficient statistic. Geometrically, this statistic is just that coordinate, in a suitable coordinate system describing the observation space, which contains all the information necessary to make a decision.

3. A complete description of the LRT performance was obtained by plotting the conditional probabilities P_D and P_F as the threshold η was varied. The resulting ROC could be used to calculate the Bayes risk for any set of costs. In many cases only one value of the threshold is of interest and a complete ROC is not necessary.

A number of interesting binary tests are developed in the problems.

2.3 M HYPOTHESES

The next case of interest is one in which we must choose one of M hypotheses. In the simple binary hypothesis test there were two source outputs, each of which corresponded to a single hypothesis. In the simple M-ary test there are M source outputs, each of which corresponds to one of M hypotheses. As before, we assume that we are forced to make a decision. Thus there are M² alternatives that may occur each time the experiment is conducted. The Bayes criterion assigns a cost to each of these alternatives, assumes a set of a priori probabilities P_0, P_1, ..., P_{M−1}, and minimizes the risk. The generalization of the Neyman-Pearson criterion to M hypotheses is also possible. Because it is not widely used in practice, we shall discuss only the Bayes criterion in the text.

Bayes Criterion. To find a Bayes test we denote the cost of each course of action as C_ij. The first subscript signifies that the ith hypothesis is chosen. The second subscript signifies that the jth hypothesis is true. We denote the region of the observation space in which we choose H_i as Z_i,
and the a priori probabilities are P_i. The model is shown in Fig. 2.13. The expression for the risk is

    \mathcal{R} = \sum_{i=0}^{M-1} \sum_{j=0}^{M-1} P_j C_{ij} \int_{Z_i} p_{\mathbf{r}|H_j}(\mathbf{R}|H_j)\, d\mathbf{R}.    (98)

To find the optimum Bayes test we simply vary the Z_i to minimize \mathcal{R}. This is a straightforward extension of the technique used in the binary case. For simplicity of notation, we shall only consider the case in which M = 3 in the text.

Noting that Z_0 = Z − Z_1 − Z_2, because the regions are disjoint, we obtain

    \mathcal{R} = P_0 C_{00} \int_{Z - Z_1 - Z_2} p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R} + P_0 C_{10} \int_{Z_1} p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R} + P_0 C_{20} \int_{Z_2} p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)\, d\mathbf{R}
        + P_1 C_{11} \int_{Z - Z_0 - Z_2} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R} + P_1 C_{01} \int_{Z_0} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R} + P_1 C_{21} \int_{Z_2} p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)\, d\mathbf{R}
        + P_2 C_{22} \int_{Z - Z_0 - Z_1} p_{\mathbf{r}|H_2}(\mathbf{R}|H_2)\, d\mathbf{R} + P_2 C_{02} \int_{Z_0} p_{\mathbf{r}|H_2}(\mathbf{R}|H_2)\, d\mathbf{R} + P_2 C_{12} \int_{Z_1} p_{\mathbf{r}|H_2}(\mathbf{R}|H_2)\, d\mathbf{R}.    (99)

This reduces to

    \mathcal{R} = P_0 C_{00} + P_1 C_{11} + P_2 C_{22}
        + \int_{Z_0} [P_2 (C_{02} - C_{22})\, p_{\mathbf{r}|H_2}(\mathbf{R}|H_2) + P_1 (C_{01} - C_{11})\, p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)]\, d\mathbf{R}
        + \int_{Z_1} [P_0 (C_{10} - C_{00})\, p_{\mathbf{r}|H_0}(\mathbf{R}|H_0) + P_2 (C_{12} - C_{22})\, p_{\mathbf{r}|H_2}(\mathbf{R}|H_2)]\, d\mathbf{R}
        + \int_{Z_2} [P_0 (C_{20} - C_{00})\, p_{\mathbf{r}|H_0}(\mathbf{R}|H_0) + P_1 (C_{21} - C_{11})\, p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)]\, d\mathbf{R}.    (100)

As before, the first three terms represent the fixed cost and the integrals represent the variable cost that depends on our choice of Z_0, Z_1, and Z_2. Clearly, we assign each R to the region in which the value of the integrand is the smallest. Labeling these integrands I_0(R), I_1(R), and I_2(R), we have the following rule:

    if I_0(R) < I_1(R) and I_2(R), choose H_0,
    if I_1(R) < I_0(R) and I_2(R), choose H_1,    (101)
    if I_2(R) < I_0(R) and I_1(R), choose H_2.
[Fig. 2.13 M hypothesis problem: the source selects one of M outputs; the probabilistic transition mechanism generates a point in the observation space Z, which is divided into M decision regions.]

We can write these terms in terms of likelihood ratios by defining

    \Lambda_1(\mathbf{R}) \triangleq \frac{p_{\mathbf{r}|H_1}(\mathbf{R}|H_1)}{p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)}, \qquad \Lambda_2(\mathbf{R}) \triangleq \frac{p_{\mathbf{r}|H_2}(\mathbf{R}|H_2)}{p_{\mathbf{r}|H_0}(\mathbf{R}|H_0)}.    (102)

Using (102) in (100) and (101), we have

    P_1 (C_{01} - C_{11})\, \Lambda_1(\mathbf{R}) \underset{H_0 \text{ or } H_2}{\overset{H_1 \text{ or } H_2}{\gtrless}} P_0 (C_{10} - C_{00}) + P_2 (C_{12} - C_{02})\, \Lambda_2(\mathbf{R}),    (103)

    P_2 (C_{02} - C_{22})\, \Lambda_2(\mathbf{R}) \underset{H_0 \text{ or } H_1}{\overset{H_2 \text{ or } H_1}{\gtrless}} P_0 (C_{20} - C_{00}) + P_1 (C_{21} - C_{01})\, \Lambda_1(\mathbf{R}),    (104)

    P_2 (C_{12} - C_{22})\, \Lambda_2(\mathbf{R}) \underset{H_1 \text{ or } H_0}{\overset{H_2 \text{ or } H_0}{\gtrless}} P_1 (C_{21} - C_{11})\, \Lambda_1(\mathbf{R}) + P_0 (C_{20} - C_{10}).    (105)

We see that the decision rules correspond to three lines in the Λ_1, Λ_2 plane. It is easy to verify that these lines intersect at a common point and therefore uniquely define three decision regions, as shown in Fig. 2.14. The decision space is two-dimensional for the three-hypothesis problem. It is easy to verify that M hypotheses always lead to a decision space which has, at most, (M − 1) dimensions.

Several special cases will be useful in our later work. The first is defined by the assumptions

    C_{00} = C_{11} = C_{22} = 0, \qquad C_{ij} = 1, \quad i \ne j.    (106)

These equations indicate that any error is of equal importance. Looking at (98), we see that this corresponds to minimizing the total probability of error.
[Fig. 2.14 Decision space: the three lines defined by (103)-(105) partition the Λ_1, Λ_2 plane into the regions H_0, H_1, and H_2.]

Substituting (106) into (103)-(105), we have

    P_1 \Lambda_1(\mathbf{R}) \underset{H_0 \text{ or } H_2}{\overset{H_1 \text{ or } H_2}{\gtrless}} P_0,
    P_2 \Lambda_2(\mathbf{R}) \underset{H_0 \text{ or } H_1}{\overset{H_2 \text{ or } H_1}{\gtrless}} P_0,
    P_2 \Lambda_2(\mathbf{R}) \underset{H_1 \text{ or } H_0}{\overset{H_2 \text{ or } H_0}{\gtrless}} P_1 \Lambda_1(\mathbf{R}).    (107)

[Fig. 2.15 Decision spaces: (a) the decision regions in the (Λ_1, Λ_2) plane, with boundaries at Λ_1 = P_0/P_1 and Λ_2 = P_0/P_2; (b) the corresponding regions in the (ln Λ_1, ln Λ_2) plane, with boundaries at ln(P_0/P_1) and ln(P_0/P_2).]
The decision regions in the (Λ_1, Λ_2) plane are shown in Fig. 2.15a. In this particular case, the transition to the (ln Λ_1, ln Λ_2) plane is straightforward (Fig. 2.15b). The equations are

    \ln \Lambda_1(\mathbf{R}) \underset{H_0 \text{ or } H_2}{\overset{H_1 \text{ or } H_2}{\gtrless}} \ln \frac{P_0}{P_1},
    \ln \Lambda_2(\mathbf{R}) \underset{H_0 \text{ or } H_1}{\overset{H_2 \text{ or } H_1}{\gtrless}} \ln \frac{P_0}{P_2},
    \ln \Lambda_2(\mathbf{R}) \underset{H_1 \text{ or } H_0}{\overset{H_2 \text{ or } H_0}{\gtrless}} \ln \Lambda_1(\mathbf{R}) + \ln \frac{P_1}{P_2}.    (108)

The expressions in (107) and (108) are adequate, but they obscure an important interpretation of the processor. The desired interpretation is obtained by a little manipulation. Substituting (102) into (107) and multiplying both sides by p_{r|H_0}(R|H_0), we have

    P_1\, p_{\mathbf{r}|H_1}(\mathbf{R}|H_1) \underset{H_0 \text{ or } H_2}{\overset{H_1 \text{ or } H_2}{\gtrless}} P_0\, p_{\mathbf{r}|H_0}(\mathbf{R}|H_0),
    P_2\, p_{\mathbf{r}|H_2}(\mathbf{R}|H_2) \underset{H_0 \text{ or } H_1}{\overset{H_2 \text{ or } H_1}{\gtrless}} P_0\, p_{\mathbf{r}|H_0}(\mathbf{R}|H_0),
    P_2\, p_{\mathbf{r}|H_2}(\mathbf{R}|H_2) \underset{H_1 \text{ or } H_0}{\overset{H_2 \text{ or } H_0}{\gtrless}} P_1\, p_{\mathbf{r}|H_1}(\mathbf{R}|H_1).    (109)

Looking at (109), we see that an equivalent test is to compute the a posteriori probabilities Pr[H_0|R], Pr[H_1|R], and Pr[H_2|R] and choose the largest. (Simply divide both sides of each equation by p_r(R) and examine the resulting test.) For this reason the processor for the minimum probability of error criterion is frequently referred to as a maximum a posteriori probability computer. The generalization to M hypotheses is straightforward.
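The maximum a posteriori rule of (109) is direct to implement for any M. A minimal sketch (the three Gaussian hypotheses and uniform priors are illustrative assumptions, not an example from the text):

```python
import numpy as np
from scipy.stats import norm

def map_decide(R, priors, densities):
    # choose the hypothesis with the largest P_i * p_{r|Hi}(R|Hi), as in (109)
    scores = [P * p(R) for P, p in zip(priors, densities)]
    return int(np.argmax(scores))

# illustrative M = 3 problem: N i.i.d. unit-variance Gaussian samples, mean 0, 1, or 2
means = [0.0, 1.0, 2.0]
densities = [lambda R, mu=mu: np.prod(norm.pdf(R, loc=mu)) for mu in means]
priors = [1 / 3, 1 / 3, 1 / 3]

rng = np.random.default_rng(5)
R = means[2] + rng.normal(size=10)       # data generated under H2
print(map_decide(R, priors, densities))  # should usually print 2
```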
The next two topics deal with degenerate tests. Both results will be useful in later applications. A case of interest is a degenerate one in which we combine H_1 and H_2. Then

    C_{12} = C_{21} = 0,    (110)

and, for simplicity, we can let

    C_{01} = C_{10} = C_{20} = C_{02}    (111)

and

    C_{00} = C_{11} = C_{22} = 0.    (112)

Then (103) and (104) both reduce to

    P_1 \Lambda_1(\mathbf{R}) + P_2 \Lambda_2(\mathbf{R}) \underset{H_0}{\overset{H_1 \text{ or } H_2}{\gtrless}} P_0,    (113)

and (105) becomes an identity.

[Fig. 2.16 Decision spaces: a single boundary line running from P_0/P_2 on the Λ_2 axis to P_0/P_1 on the Λ_1 axis separates H_0 from the combined region "H_1 or H_2".]

The decision regions are shown in Fig. 2.16. Because we have eliminated all of the cost effect of a decision between H_1 and H_2, we have reduced it to a binary problem.

We next consider the dummy hypothesis technique. A simple example illustrates the idea. The actual problem has two hypotheses, H_1 and H_2, but occasionally we can simplify the calculations by introducing a dummy hypothesis H_0 which occurs with zero probability. We let

    P_0 = 0, \qquad C_{01} = C_{21}, \qquad C_{02} = C_{12}.    (114)

Substituting these values into (103)-(105), we find that (103) and (104) imply that we always choose H_1 or H_2, and the test reduces to

    P_2 (C_{12} - C_{22})\, \Lambda_2(\mathbf{R}) \underset{H_1}{\overset{H_2}{\gtrless}} P_1 (C_{21} - C_{11})\, \Lambda_1(\mathbf{R}).    (115)

Looking at (12) and recalling the definition of Λ_1(R) and Λ_2(R), we see that this result is exactly what we would expect. [Just divide both sides of (12) by p_{r|H_0}(R|H_0).] On the surface this technique seems absurd, but it will turn out to be useful when the ratio p_{r|H_2}(R|H_2)/p_{r|H_1}(R|H_1) is difficult to work with and the ratios Λ_1(R) and Λ_2(R) can be made simple by a proper choice of p_{r|H_0}(R|H_0).

In this section we have developed the basic results needed for the M-hypothesis problem. We have not considered any specific examples