This document provides information about the course COMM 1004: Detection & Estimation taught by Prof. Ahmed El-Mahdy at The German University in Cairo. It lists the required textbooks and outlines the grading breakdown, course contents, and topics covered in the first lecture. The course contents include estimation theory, parameter estimation, detection techniques like binary hypothesis testing, and applications in areas like communications and signal processing.
1. COMM 1004: Detection & Estimation
Prof. Ahmed El-Mahdy
Dean of the Faculty of IET
The German University in Cairo
2. Text Books
• H. L. Van Trees, Detection, Estimation, and Linear Modulation Theory, Vol. I, John Wiley & Sons, New York, 2001.
• Don H. Johnson, Statistical Signal Processing: Detection Theory, Houston, TX, 2013.
• S. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice Hall, 1993.
• S. Kay, Fundamentals of Statistical Signal Processing: Detection Theory, Prentice Hall, 1998.
6. Introduction to Detection & Estimation
What are detection and estimation?
Goal: extract useful information from noisy signals.
Detection: a decision between two (or a small number of) possible hypotheses, choosing the best one.
Parameter Estimation: given a set of observations and an assumed probabilistic model, obtain the best estimate of the parameters of the model.
9. Detection example 3: In a speaker classification
problem we know the speaker is German, British, or
American. There are three possible hypotheses H0, H1, and H2.
Decision: After observing the outcome in the observation
space, we guess which hypothesis is true.
16. Performance of Estimators
1- Unbiased Estimators:
- For an estimator to be unbiased we mean that on the average the estimator will yield the true value of the unknown parameter:
E[θ̂] = θ
- Since the parameter value may in general lie anywhere in an interval a < θ < b, unbiasedness asserts that no matter what the true value of θ, our estimator will yield it on the average.
Otherwise, the estimate is said to be biased: E[θ̂] ≠ θ
17. The bias b(θ) is usually considered to be additive, so that: E[θ̂] = θ + b(θ).
When we have a biased estimate, the bias usually depends on the number of observations N. An estimate is said to be asymptotically unbiased if the bias tends to zero for large N:
lim_{N→∞} b(θ) = 0
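As a quick numerical illustration (not from the slides), consider the classic variance estimator that divides by N instead of N − 1: its bias is −σ²/N, which vanishes as N grows, making it asymptotically unbiased. A minimal Monte Carlo sketch:

```python
import random

random.seed(0)

def biased_var(xs):
    """Variance estimate dividing by N; E[.] = sigma^2 (N-1)/N, so bias = -sigma^2/N."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def avg_estimate(n, trials=10000):
    """Average the estimate over many size-n data sets (approximates E[theta_hat])."""
    return sum(biased_var([random.gauss(0.0, 1.0) for _ in range(n)])
               for _ in range(trials)) / trials

# True variance is 1.0; the bias -1/n shrinks toward zero as n grows.
print(avg_estimate(2))   # close to 0.5
print(avg_estimate(50))  # close to 0.98
```

The averaged estimate approaches the true value 1.0 as the number of observations grows, exactly the behavior lim_{N→∞} b = 0 describes.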
Variance of Estimator: The variance of an estimator θ̂ is defined as:
var(θ̂) = E[(θ̂ − E[θ̂])²]
Expectations are taken over x (meaning θ̂ is random but θ is not).
An estimate’s variance equals the mean-squared estimation error
only if the estimate is unbiased.
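The claim that variance equals mean-squared error only for unbiased estimates can be checked numerically. In this sketch (not from the slides) the 0.8 shrinkage factor is an arbitrary choice used purely to manufacture a biased estimator:

```python
import random

random.seed(1)

TRUE_MEAN = 2.0

def sample_mean(xs):   # unbiased estimator of the mean
    return sum(xs) / len(xs)

def shrunk_mean(xs):   # deliberately biased estimator (illustrative 0.8 factor)
    return 0.8 * sum(xs) / len(xs)

def mse_and_var(estimator, n=25, trials=10000):
    """Monte Carlo mean-squared error and variance of an estimator of TRUE_MEAN."""
    ests = [estimator([random.gauss(TRUE_MEAN, 1.0) for _ in range(n)])
            for _ in range(trials)]
    avg = sum(ests) / trials
    var = sum((e - avg) ** 2 for e in ests) / trials
    mse = sum((e - TRUE_MEAN) ** 2 for e in ests) / trials
    return mse, var

mse_u, var_u = mse_and_var(sample_mean)
mse_b, var_b = mse_and_var(shrunk_mean)
print(mse_u - var_u)  # near 0: MSE = variance for the unbiased estimator
print(mse_b - var_b)  # near bias^2 = (0.8*2 - 2)^2 = 0.16
```

The gap between MSE and variance is exactly the squared bias, which is why the two coincide only in the unbiased case.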
20. Unbiased Estimators
• That an estimator is unbiased does not necessarily mean that it is a good estimator; we need to check other performance measures.
• Unbiasedness only guarantees that on the average the estimator will attain the true value.
• A persistent bias will always result in a poor estimator.
22. 2- Efficiency:
An unbiased estimator is said to be efficient if it has lower variance than any other unbiased estimator.
Example: comparing two unbiased estimators θ̂₁ and θ̂₂, θ̂₁ is more efficient than θ̂₂ if var(θ̂₁) < var(θ̂₂).
The Cramér-Rao bound is a lower bound on the variance of any unbiased estimator. An estimator is therefore said to be efficient if:
- it is unbiased, and
- it attains the Cramér-Rao bound.
If an efficient estimate exists, it is optimum in the mean-squared sense: no other estimate has a smaller mean-squared error. Efficiency states that the estimator is “best”.
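A numerical sketch of efficiency (not from the slides), using the standard result that the Cramér-Rao bound for the mean of Gaussian data N(θ, σ²) with N samples is σ²/N: the sample mean attains the bound, while the sample median, although also unbiased here, has larger variance.

```python
import random
import statistics

random.seed(2)

N, TRIALS, SIGMA = 25, 10000, 1.0
crlb = SIGMA ** 2 / N  # Cramer-Rao lower bound for the mean of N(theta, sigma^2)

means, medians = [], []
for _ in range(TRIALS):
    xs = [random.gauss(0.0, SIGMA) for _ in range(N)]
    means.append(sum(xs) / N)
    medians.append(statistics.median(xs))

var_mean = statistics.pvariance(means)
var_median = statistics.pvariance(medians)

print(crlb)        # 0.04
print(var_mean)    # close to 0.04: the sample mean attains the bound (efficient)
print(var_median)  # larger (roughly pi/2 times the bound): unbiased, not efficient
```

Both estimators are unbiased, so by the variance comparison above the sample mean is the more efficient of the two for Gaussian data.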
23. 3- Consistency:
• An unbiased estimator is consistent if its variance decreases as the sample size increases:
lim_{n→∞} var(θ̂) = 0
• For a consistent unbiased estimator, the distribution of the estimator converges to the true value as the sample size increases.
• Consistency is a relatively weak property in contrast to optimal properties such as efficiency.
(Figure: unbiased and consistent estimator.)
Thus, a consistent estimate must be at least asymptotically unbiased.
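Consistency of the sample mean can be seen directly in simulation (a sketch, not from the slides): its variance is σ²/n, so the Monte Carlo variance shrinks toward zero as n grows.

```python
import random
import statistics

random.seed(3)

def estimator_variance(n, trials=3000):
    """Monte Carlo variance of the sample mean over many data sets of size n."""
    ests = [sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
            for _ in range(trials)]
    return statistics.pvariance(ests)

# Var(sample mean) = sigma^2 / n, so it shrinks toward 0 as n grows: consistency.
for n in (10, 100, 1000):
    print(n, estimator_variance(n))
```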
26. For the matrices A, B, and C:
(1) (Aᵀ)ᵀ = A
(2) If A = Aᵀ, then A is a symmetric matrix.
(3) (aA)ᵀ = a Aᵀ for a constant a.
(4) (A + B)ᵀ = Aᵀ + Bᵀ
(5) (AB)ᵀ = Bᵀ Aᵀ
(6) (ABC)ᵀ = Cᵀ Bᵀ Aᵀ
(7) If Aᵀ = A⁻¹, then A is called a unitary matrix.
(8) For a diagonal matrix B = [b₁ 0; 0 b₂] with b₁ ≠ 0 and b₂ ≠ 0:
B⁻¹ = [1/b₁ 0; 0 1/b₂]
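A few of the transpose and diagonal-inverse properties can be verified numerically. The tiny 2×2 helpers below are illustrative, not part of the course material:

```python
def T(M):
    """Transpose of a matrix given as a list of rows."""
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    """Matrix product of two conformable matrices."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]

assert T(T(A)) == A                           # (A^T)^T = A
assert T(matmul(A, B)) == matmul(T(B), T(A))  # (AB)^T = B^T A^T

# Diagonal matrix: the inverse just inverts each nonzero diagonal entry.
D = [[2.0, 0.0], [0.0, 4.0]]
D_inv = [[0.5, 0.0], [0.0, 0.25]]
assert matmul(D, D_inv) == [[1.0, 0.0], [0.0, 1.0]]
print("transpose and diagonal-inverse properties verified")
```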
29. Inverse of matrices
There exists an inverse of the matrix A when det(A) ≠ 0.
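For the 2×2 case, the determinant condition and the inverse can be sketched as follows (an illustrative adjugate-formula helper, not taken from the slides):

```python
def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula; requires det(M) != 0."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular: det(A) = 0, so no inverse exists")
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[4.0, 7.0], [2.0, 6.0]]  # det = 4*6 - 7*2 = 10, so the inverse exists
print(inv2(A))                # [[0.6, -0.7], [-0.2, 0.4]]
```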
31. Eigenvalues and eigenvectors of a matrix:
Given a square matrix A, an eigenvector is any vector v such that Av = λv. λ is called the eigenvalue.
- To determine the eigenvalues, solve the characteristic equation det(A − λI) = 0 for the values of λ.
- To determine the eigenvectors, solve (A − λI)v = 0 for each value of λ.
32. Example to find the eigenvalues and eigenvectors of a matrix:
Find the eigenvalues and the corresponding eigenvectors of the matrix:
A = [3 2; 3 −2]
The characteristic equation is:
det(A − λI) = |3−λ 2; 3 −2−λ| = λ² − λ − 12 = 0
Then the eigenvalues are: λ1 = 4, λ2 = −3.
To find the eigenvectors, for λ1 = 4:
A − λ1 I = [3 2; 3 −2] − [4 0; 0 4] = [−1 2; 3 −6]
Solving [−1 2; 3 −6][v11; v12] = [0; 0], we get: v12 = 0.5 v11.
Assume v12 = 1; then v11 = 2, so V1 = (2, 1)ᵀ.
For the vector to be unit, divide by the length of the vector: V1 = (2/√5, 1/√5)ᵀ.
Repeat for λ2 = −3: solving (A + 3I)V2 = 0 gives V2 = (1, −3)ᵀ, which normalizes to V2 = (1/√10, −3/√10)ᵀ.
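Assuming the example matrix is A = [[3, 2], [3, −2]] (as read from the slide), the result can be double-checked by solving the characteristic quadratic directly and verifying Av = λv:

```python
import math

# Characteristic equation of a 2x2 matrix: lambda^2 - trace*lambda + det = 0
A = [[3.0, 2.0], [3.0, -2.0]]
trace = A[0][0] + A[1][1]                    # 1
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # -12
disc = math.sqrt(trace ** 2 - 4 * det)       # sqrt(49) = 7
lam1, lam2 = (trace + disc) / 2, (trace - disc) / 2
print(lam1, lam2)  # 4.0 -3.0

# Check A v = lambda v for the unit eigenvector (2, 1)/sqrt(5) at lambda = 4
v = (2 / math.sqrt(5), 1 / math.sqrt(5))
Av = (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])
print(Av[0] / v[0], Av[1] / v[1])  # both approximately 4.0
```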
38. Remember: Two Statistically Independent Random Variables
If X and Y are statistically independent, then:
E[XY] = E[X] E[Y]
var(X + Y) = var(X) + var(Y)
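Both identities are easy to confirm by simulation. A minimal sketch (the particular Gaussian and uniform distributions are arbitrary choices, not from the slides):

```python
import random
import statistics

random.seed(4)

N = 100000
xs = [random.gauss(1.0, 2.0) for _ in range(N)]    # X: mean 1, variance 4
ys = [random.uniform(0.0, 3.0) for _ in range(N)]  # Y: independent of X, variance 0.75

e_xy = sum(x * y for x, y in zip(xs, ys)) / N
e_x, e_y = sum(xs) / N, sum(ys) / N
print(e_xy, e_x * e_y)  # both near E[X]E[Y] = 1 * 1.5 = 1.5

var_sum = statistics.pvariance([x + y for x, y in zip(xs, ys)])
var_x, var_y = statistics.pvariance(xs), statistics.pvariance(ys)
print(var_sum, var_x + var_y)  # both near 4 + 0.75 = 4.75
```

Note that for dependent variables both identities pick up a covariance term, which is exactly what the independence assumption removes.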