2. Table of Contents
◊ 1 Introduction
◊ 2 Probability
◊ 3 Random Variables
◊ 4 Statistical Averages
◊ 5 Random Processes
◊ 6 Mean, Correlation and Covariance Functions
◊ 7 Transmission of a Random Process through a Linear Filter
◊ 8 Power Spectral Density
◊ 9 Gaussian Process
◊ 10 Noise
◊ 11 Narrowband Noise
3. Introduction
◊ The Fourier transform is a mathematical tool for the representation of
deterministic signals.
◊ Deterministic signals: the class of signals that may be modeled as
completely specified functions of time.
◊ A signal is “random” if it is not possible to predict its precise value in advance.
◊ A random process consists of an ensemble (family) of sample
functions, each of which varies randomly with time.
◊ A random variable is obtained by observing a random process at a
fixed instant of time.
4. Probability
◊ Probability theory is rooted in phenomena that, explicitly or implicitly, can be modeled by an experiment with an outcome that is subject to chance.
◊ Example: Experiment may be the observation of the result of tossing a fair
coin. In this experiment, the possible outcomes of a trial are “heads” or
“tails”.
◊ If an experiment has K possible outcomes, then for the kth possible outcome
we have a point called the sample point, which we denote by sk. With this
basic framework, we make the following definitions:
◊ The set of all possible outcomes of the experiment is called the
sample space, which we denote by S.
◊ An event corresponds to either a single sample point or a set of sample
points in the space S.
6. Probability
◊ A single sample point is called an elementary event.
◊ The entire sample space S is called the sure event; and the null set
is called the null or impossible event.
◊ Two events are mutually exclusive if the occurrence of one event
precludes the occurrence of the other event.
◊ A probability measure P is a function that assigns a non-negative
number to an event A in the sample space S and satisfies the
following three properties (axioms):
1. $0 \le P[A] \le 1$ (5.1)
2. $P[S] = 1$ (5.2)
3. If A and B are two mutually exclusive events, then $P[A \cup B] = P[A] + P[B]$ (5.3)
11. Conditional Probability
◊ Example 5.1 Binary Symmetric Channel
◊ This channel is said to be discrete in that it is designed to handle
discrete messages.
◊ The channel is memoryless in the sense that the channel output at
any time depends only on the channel input at that time.
◊ The channel is symmetric, which means that the probability of
receiving symbol 1 when 0 is sent is the same as the probability
of receiving symbol 0 when symbol 1 is sent.
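◊ A minimal numpy sketch of such a binary symmetric channel (the crossover probability p_error = 0.1 and the bit counts are illustrative assumptions, not values from the text): each bit is flipped independently with the same probability, which captures the memoryless and symmetric behaviour described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def bsc(bits, p_error, rng):
    """Binary symmetric channel: flip each bit independently with probability p_error."""
    flips = rng.random(bits.shape) < p_error
    return np.bitwise_xor(bits, flips.astype(bits.dtype))

bits = rng.integers(0, 2, size=100_000)          # equally likely 0s and 1s
received = bsc(bits, p_error=0.1, rng=rng)       # assumed crossover probability

# The empirical error rate is the same regardless of whether a 0 or a 1 was sent.
print("P[error | 0 sent] ~", np.mean(received[bits == 0] != 0))
print("P[error | 1 sent] ~", np.mean(received[bits == 1] != 1))
```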
15. Random Variables
◊ We denote the random variable as X(s) or just X.
◊ X is a function.
◊ Random variable may be discrete or continuous.
◊ Consider the random variable X and the probability of the event
X ≤ x. We denote this probability by P[X ≤ x].
◊ To simplify our notation, we write
$F_X(x) = P[X \le x]$ (5.15)
◊ The function FX(x) is called the cumulative distribution
function (cdf) or simply the distribution function of the random
variable X.
◊ The distribution function FX(x) has the following properties:
1. $0 \le F_X(x) \le 1$
2. $F_X(x_1) \le F_X(x_2)$ if $x_1 \le x_2$
17. Random Variables
◊ If the distribution function is continuously differentiable, then
$f_X(x) = \frac{d}{dx} F_X(x)$ (5.17)
◊ fX(x) is called the probability density function (pdf) of the random
variable X.
◊ Probability of the event x1 < X ≤ x2 equals
$P[x_1 < X \le x_2] = P[X \le x_2] - P[X \le x_1] = F_X(x_2) - F_X(x_1) = \int_{x_1}^{x_2} f_X(x)\,dx$ (5.19)
◊ Equivalently, the distribution function is obtained from the density by
$F_X(x) = \int_{-\infty}^{x} f_X(\xi)\,d\xi$
◊ Probability density function must always be a nonnegative function,
and with a total area of one.
18. Random Variables
◊ Example 5.2 Uniform Distribution
$f_X(x) = \begin{cases} 0, & x \le a \\ \dfrac{1}{b-a}, & a < x \le b \\ 0, & x > b \end{cases}$
$F_X(x) = \begin{cases} 0, & x \le a \\ \dfrac{x-a}{b-a}, & a < x \le b \\ 1, & x > b \end{cases}$
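◊ A short numpy check of this pair of formulas (the endpoints a = 1 and b = 3 are arbitrary illustrative choices): numerically integrating the density reproduces the closed-form distribution function.

```python
import numpy as np

a, b = 1.0, 3.0                                   # assumed interval endpoints for illustration

def pdf_uniform(x, a, b):
    """f_X(x) = 1/(b - a) on (a, b], zero elsewhere."""
    return np.where((x > a) & (x <= b), 1.0 / (b - a), 0.0)

def cdf_uniform(x, a, b):
    """F_X(x) = 0 for x <= a, (x - a)/(b - a) on (a, b], 1 for x > b."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

# Numerically integrate the pdf and compare with the closed-form cdf.
xs = np.linspace(a - 1.0, b + 1.0, 2001)
numeric_cdf = np.cumsum(pdf_uniform(xs, a, b)) * (xs[1] - xs[0])
print("max |numeric - exact| =", np.max(np.abs(numeric_cdf - cdf_uniform(xs, a, b))))
```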
19. Random Variables
◊ Several Random Variables
◊ Consider two random variables X and Y. We define the joint
distribution function FX,Y(x,y) as the probability that the random
variable X is less than or equal to a specified value x and that the
random variable Y is less than or equal to a specified value y.
$F_{X,Y}(x,y) = P[X \le x, Y \le y]$ (5.23)
◊ Suppose that joint distribution function FX,Y(x,y) is continuous
everywhere, and that the partial derivative
$f_{X,Y}(x,y) = \frac{\partial^2 F_{X,Y}(x,y)}{\partial x\,\partial y}$ (5.24)
exists and is continuous everywhere. We call the function fX,Y(x,y)
the joint probability density function of the random variables X
and Y.
20. Random Variables
◊ Several Random Variables
◊ The joint distribution function FX,Y(x,y) is a monotone-nondecreasing function of both x and y.
◊ The joint density integrates to one:
$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(\xi,\eta)\,d\xi\,d\eta = 1$
◊ Marginal density fX(x):
$F_X(x) = \int_{-\infty}^{x}\int_{-\infty}^{\infty} f_{X,Y}(\xi,\eta)\,d\eta\,d\xi$, and $f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x,\eta)\,d\eta$ (5.27)
◊ Suppose that X and Y are two continuous random variables with
joint probability density function fX,Y(x,y). The conditional
probability density function of Y given that X = x is defined by
$f_Y(y \mid x) = \frac{f_{X,Y}(x,y)}{f_X(x)}$ (5.28)
21. Random Variables
◊ Several Random Variables
◊ If the random variables X and Y are statistically independent, then knowledge of the outcome of X can in no way affect the distribution of Y:
$f_Y(y \mid x) = f_Y(y)$
which, by (5.28), is equivalent to
$f_{X,Y}(x,y) = f_X(x)\, f_Y(y)$ (5.32)
$P[X \in A, Y \in B] = P[X \in A]\, P[Y \in B]$ (5.33)
22. Random Variables
◊ Example 5.3 Binomial Random Variable
◊ Consider a sequence of coin-tossing experiments where the
probability of a head is p and let Xn be the Bernoulli random
variable representing the outcome of the nth toss.
◊ Let Y be the number of heads that occur on N tosses of the coins:
$Y = \sum_{n=1}^{N} X_n$
$P[Y = y] = \binom{N}{y} p^{y} (1-p)^{N-y}, \qquad \binom{N}{y} = \frac{N!}{y!\,(N-y)!}$
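◊ A quick numpy simulation of this result (N = 10, p = 0.3 and the number of trials are illustrative assumptions): the empirical distribution of the number of heads matches the binomial formula.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
N, p, trials = 10, 0.3, 200_000              # assumed values for illustration

# Simulate Y = X_1 + ... + X_N, a sum of N Bernoulli(p) variables, many times.
tosses = rng.random((trials, N)) < p
Y = tosses.sum(axis=1)

# Compare the empirical pmf with P[Y = y] = C(N, y) p^y (1 - p)^(N - y).
for y in range(N + 1):
    exact = comb(N, y) * p**y * (1 - p) ** (N - y)
    print(y, round(float(np.mean(Y == y)), 4), round(exact, 4))
```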
23. Statistical Averages
◊ The expected value or mean of a random variable X is defined by
$\mu_X = E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx$ (5.36)
◊ Function of a Random Variable
◊ Let X denote a random variable, and let g(X) denote a real-valued function defined on the real line. We write
$Y = g(X)$ (5.37)
◊ The expected value of the random variable Y is
$E[Y] = \int_{-\infty}^{\infty} y f_Y(y)\,dy = E[g(X)] = \int_{-\infty}^{\infty} g(x) f_X(x)\,dx$ (5.38)
24. Statistical Averages
◊ Example 5.4 Cosinusoidal Random Variable
◊ Let Y=g(X)=cos(X)
◊ X is a random variable uniformly distributed in the interval (-π, π)
$f_X(x) = \begin{cases} \dfrac{1}{2\pi}, & -\pi < x \le \pi \\ 0, & \text{otherwise} \end{cases}$
$E[Y] = \int_{-\pi}^{\pi} \cos x \cdot \frac{1}{2\pi}\,dx = \frac{1}{2\pi}\Big[\sin x\Big]_{-\pi}^{\pi} = 0$
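◊ A one-line Monte Carlo confirmation (the sample size is an arbitrary choice): drawing X uniformly on (-π, π) gives a sample mean of cos(X) near 0, and a sample mean of cos²(X) near 1/2.

```python
import numpy as np

rng = np.random.default_rng(2)

# X ~ Uniform(-pi, pi); Example 5.4 predicts E[Y] = E[cos(X)] = 0.
x = rng.uniform(-np.pi, np.pi, size=1_000_000)
print("E[cos(X)]   ~", np.cos(x).mean())          # close to 0
print("E[cos^2(X)] ~", (np.cos(x) ** 2).mean())   # close to 1/2
```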
25. Statistical Averages
◊ Moments
◊ For the special case of g(X) = $X^n$, we obtain the nth moment of the probability distribution of the random variable X; that is
$E[X^n] = \int_{-\infty}^{\infty} x^n f_X(x)\,dx$ (5.39)
◊ Mean-square value of X:
$E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx$ (5.40)
◊ The nth central moment is
$E[(X - \mu_X)^n] = \int_{-\infty}^{\infty} (x - \mu_X)^n f_X(x)\,dx$ (5.41)
26. Statistical Averages
◊ For n = 2 the second central moment is referred to as the variance of
the random variable X, written as
$\mathrm{var}[X] = E[(X - \mu_X)^2] = \int_{-\infty}^{\infty} (x - \mu_X)^2 f_X(x)\,dx$ (5.42)
◊ The variance of a random variable X is commonly denoted as $\sigma_X^2$.
◊ The square root of the variance is called the standard deviation of
the random variable X.
◊ Expanding the square gives a useful alternative expression:
$\sigma_X^2 = \mathrm{var}[X] = E[(X - \mu_X)^2] = E[X^2 - 2\mu_X X + \mu_X^2] = E[X^2] - 2\mu_X E[X] + \mu_X^2 = E[X^2] - \mu_X^2$ (5.44)
27. Statistical Averages
◊ Chebyshev inequality
◊ Suppose X is an arbitrary random variable with finite mean $m_x$ and finite variance $\sigma_x^2$. For any positive number δ:
$P[\,|X - m_x| \ge \delta\,] \le \frac{\sigma_x^2}{\delta^2}$
◊ Proof:
$\sigma_x^2 = \int_{-\infty}^{\infty} (x - m_x)^2 p(x)\,dx \ge \int_{|x-m_x| \ge \delta} (x - m_x)^2 p(x)\,dx \ge \delta^2 \int_{|x-m_x| \ge \delta} p(x)\,dx = \delta^2\, P(|X - m_x| \ge \delta)$
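◊ A small numerical check of how tight this bound is (the exponential distribution with mean 1 and variance 1 is an arbitrary test case): the true tail probability is compared with the Chebyshev upper bound for a few values of δ, consistent with the remark on the next slides that the bound is often extremely loose.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed test case: an exponential distribution with rate 1, so m_x = 1 and sigma_x^2 = 1.
x = rng.exponential(scale=1.0, size=1_000_000)
m_x, var_x = 1.0, 1.0

for delta in (1.0, 2.0, 3.0, 4.0):
    tail = np.mean(np.abs(x - m_x) >= delta)   # true tail probability P(|X - m_x| >= delta)
    bound = var_x / delta**2                   # Chebyshev upper bound sigma_x^2 / delta^2
    print(f"delta={delta}: tail={tail:.4f}  bound={bound:.4f}")
```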
28. Statistical Averages
◊ Chebyshev inequality
◊ Another way to view the Chebyshev bound is to work with the zero-mean random variable Y = X - $m_x$.
◊ Define a function g(Y) as:
$g(Y) = \begin{cases} 1, & |Y| \ge \delta \\ 0, & |Y| < \delta \end{cases} \qquad \text{and} \qquad E[g(Y)] = P(|Y| \ge \delta)$
◊ Upper-bound g(Y) by the quadratic $(Y/\delta)^2$, i.e., $g(Y) \le (Y/\delta)^2$.
◊ The tail probability is then bounded by
$P(|Y| \ge \delta) = E[g(Y)] \le E\left[\frac{Y^2}{\delta^2}\right] = \frac{E[Y^2]}{\delta^2} = \frac{\sigma_y^2}{\delta^2} = \frac{\sigma_x^2}{\delta^2}$
29. Statistical Averages
◊ Chebyshev inequality
◊ A quadratic upper bound on g(Y) used in obtaining the tail
probability (Chebyshev bound)
◊ For many practical applications, the Chebyshev bound is
extremely loose.
30. Statistical Averages
◊ The characteristic function $\phi_X(\upsilon)$ is defined as the expectation of the complex exponential function $\exp(j\upsilon X)$, as shown by
$\phi_X(\upsilon) = E[\exp(j\upsilon X)] = \int_{-\infty}^{\infty} f_X(x)\exp(j\upsilon x)\,dx$ (5.45)
◊ In other words, the characteristic function $\phi_X(\upsilon)$ is the Fourier transform of the probability density function fX(x).
◊ Analogous to the inverse Fourier transform:
$f_X(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \phi_X(\upsilon)\exp(-j\upsilon x)\,d\upsilon$ (5.46)
31. Statistical Averages
◊ Characteristic functions
◊ First moment (mean) can be obtained by:
$E(X) = m_x = -j\,\frac{d\psi(jv)}{dv}\bigg|_{v=0}$
◊ Since the differentiation process can be repeated, the n-th moment can be calculated by:
$E(X^n) = (-j)^n\,\frac{d^n\psi(jv)}{dv^n}\bigg|_{v=0}$
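◊ A symbolic sketch of this moment-generating recipe using sympy (the Gaussian characteristic function of Example 5.5 below is used as the test case): differentiating ψ(jv) and evaluating at v = 0 recovers the first and second moments.

```python
import sympy as sp

v, m, sigma = sp.symbols("v m sigma", real=True)

# Characteristic function of a Gaussian random variable: psi(jv) = exp(j*v*m - v^2*sigma^2/2).
psi = sp.exp(sp.I * v * m - sp.Rational(1, 2) * v**2 * sigma**2)

# E[X^n] = (-j)^n * d^n psi / dv^n evaluated at v = 0.
first_moment = sp.simplify((-sp.I) * sp.diff(psi, v).subs(v, 0))
second_moment = sp.simplify((-sp.I) ** 2 * sp.diff(psi, v, 2).subs(v, 0))

print(first_moment)    # m
print(second_moment)   # m**2 + sigma**2
```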
32. Statistical Averages
◊ Characteristic functions
◊ Determining the PDF of a sum of statistically independent random variables:
$Y = \sum_{i=1}^{n} X_i$
$\psi_Y(jv) = E(e^{jvY}) = E\left[\exp\left(jv\sum_{i=1}^{n} X_i\right)\right] = E\left[\prod_{i=1}^{n} e^{jvX_i}\right] = \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\prod_{i=1}^{n} e^{jvx_i}\, p(x_1, x_2, \ldots, x_n)\,dx_1\,dx_2\cdots dx_n$
◊ Since the random variables are statistically independent, $p(x_1, x_2, \ldots, x_n) = p(x_1)p(x_2)\cdots p(x_n)$, and therefore
$\psi_Y(jv) = \prod_{i=1}^{n} \psi_{X_i}(jv)$
◊ If the $X_i$ are iid (independent and identically distributed),
$\psi_Y(jv) = \left[\psi_X(jv)\right]^n$
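◊ A numpy sketch of this product rule (three iid Uniform(0, 1) variables and the test frequency v = 2 are arbitrary illustrative choices): the empirical characteristic function of the sum agrees with the cube of the empirical characteristic function of a single summand.

```python
import numpy as np

rng = np.random.default_rng(4)
n, samples = 3, 500_000

# n iid Uniform(0, 1) random variables; Y is their sum.
X = rng.random((samples, n))
Y = X.sum(axis=1)

v = 2.0   # an arbitrary test frequency
psi_X = np.mean(np.exp(1j * v * X[:, 0]))   # empirical characteristic function of a single X_i
psi_Y = np.mean(np.exp(1j * v * Y))         # empirical characteristic function of the sum

print(psi_X ** n)    # product form [psi_X(jv)]^n
print(psi_Y)         # agrees with the product up to Monte Carlo error
```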
33. Statistical Averages
◊ Characteristic functions
◊ The PDF of Y is determined from the inverse Fourier transform of ΨY(jv).
◊ Since the characteristic function of the sum of n statistically independent random variables is equal to the product of the characteristic functions of the individual random variables, it follows that the PDF of Y is the n-fold convolution of the PDFs of the Xi.
◊ Usually, the n-fold convolution is more difficult to perform than the characteristic function
method in determining the PDF of Y.
34. Statistical Averages
◊ Example 5.5 Gaussian Random Variable
◊ The probability density function of such a Gaussian random
variable is defined by:
$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X}\exp\left(-\frac{(x-\mu_X)^2}{2\sigma_X^2}\right), \qquad -\infty < x < \infty$
◊ The characteristic function of a Gaussian random variable with mean $m_x$ and variance $\sigma^2$ is (Problem 5.1):
$\psi(jv) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{jvx}\, e^{-(x-m_x)^2/2\sigma^2}\,dx = e^{jvm_x - \frac{1}{2}v^2\sigma^2}$
◊ It can be shown that the central moments of a Gaussian random
variable are given by:
$E[(X - m_x)^k] = \begin{cases} 1\cdot 3\cdot 5\cdots(k-1)\,\sigma^k, & \text{even } k \\ 0, & \text{odd } k \end{cases}$
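◊ A brief Monte Carlo check of this central-moment formula (the mean 2.0, standard deviation 1.5 and sample size are arbitrary illustrative choices): the sample central moments up to order 6 match the double-factorial expression for even orders and are near zero for odd orders.

```python
import numpy as np

rng = np.random.default_rng(5)
m_x, sigma = 2.0, 1.5                        # assumed mean and standard deviation
x = rng.normal(m_x, sigma, size=2_000_000)

# Central moments E[(X - m_x)^k]: 1*3*...*(k-1) * sigma^k for even k, 0 for odd k.
for k in range(1, 7):
    empirical = np.mean((x - m_x) ** k)
    exact = np.prod(np.arange(1, k, 2)) * sigma**k if k % 2 == 0 else 0.0
    print(k, round(float(empirical), 3), round(float(exact), 3))
```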
35. Statistical Averages
◊ Example 5.5 Gaussian Random Variable (cont.)
◊ The sum of n statistically independent Gaussian random
variables is also a Gaussian random variable.
◊ Proof:
$Y = \sum_{i=1}^{n} X_i$
$\psi_Y(jv) = \prod_{i=1}^{n} \psi_{X_i}(jv) = \prod_{i=1}^{n} e^{jvm_i - v^2\sigma_i^2/2} = e^{jvm_y - v^2\sigma_y^2/2}$
where $m_y = \sum_{i=1}^{n} m_i$ and $\sigma_y^2 = \sum_{i=1}^{n} \sigma_i^2$.
◊ Therefore, Y is Gaussian-distributed with mean $m_y$ and variance $\sigma_y^2$.
36. Statistical Averages
◊ Joint Moments
◊ Consider next a pair of random variables X and Y. A set of
statistical averages of importance in this case are the joint
moments, namely, the expected value of $X^i Y^k$, where i and k may assume any positive integer values. We may thus write
$E[X^i Y^k] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^i y^k f_{X,Y}(x,y)\,dx\,dy$ (5.51)
◊ A joint moment of particular importance is the correlation
defined by E[XY], which corresponds to i = k = 1.
◊ Covariance of X and Y:
$\mathrm{cov}[XY] = E[(X - E[X])(Y - E[Y])] = E[XY] - \mu_X\mu_Y$ (5.53)
37. Statistical Averages
◊ Correlation coefficient of X and Y:
$\rho = \frac{\mathrm{cov}[XY]}{\sigma_X\sigma_Y}$ (5.54)
◊ σX and σY denote the standard deviations of X and Y.
◊ We say X and Y are uncorrelated if and only if cov[XY] = 0.
◊ Note that if X and Y are statistically independent, then they are
uncorrelated.
◊ The converse of the above statement is not necessarily true.
◊ We say X and Y are orthogonal if and only if E[XY] = 0.
38. Statistical Averages
◊ Example 5.6 Moments of a Bernoulli Random Variable
◊ Consider the coin-tossing experiment where the probability of a
head is p. Let X be a random variable that takes the value 0 if the
result is a tail and 1 if it is a head. We say that X is a Bernoulli
random variable.
$P[X = x] = \begin{cases} 1-p, & x = 0 \\ p, & x = 1 \\ 0, & \text{otherwise} \end{cases}$
$E[X] = \sum_{k=0}^{1} k\,P[X = k] = 0\cdot(1-p) + 1\cdot p = p$
$\sigma_X^2 = E[X^2] - (E[X])^2 = \sum_{k=0}^{1} k^2\,P[X = k] - p^2 = 0^2(1-p) + 1^2 p - p^2 = p(1-p)$
◊ For the outcomes of two tosses j and k:
$E[X_j X_k] = \begin{cases} E[X_j^2] = p, & j = k \\ E[X_j]\,E[X_k] = p^2, & j \ne k \end{cases}$
where $E[X^2] = \sum_{k=0}^{1} k^2\,P[X = k]$.
40. Random Processes
◊ At any given time instant, the value of a stochastic process
is a random variable indexed by the parameter t. We
denote such a process by X(t).
◊ In general, the parameter t is continuous, whereas X may
be either continuous or discrete, depending on the
characteristics of the source that generates the stochastic
process.
◊ The noise voltage generated by a single resistor or a single
information source represents a single realization of the
stochastic process. It is called a sample function.
41. Random Processes
◊ The set of all possible sample functions constitutes an
ensemble of sample functions or, equivalently, the
stochastic process X(t).
◊ In general, the number of sample functions in the
ensemble is assumed to be extremely large; often it is
infinite.
◊ Having defined a stochastic process X(t) as an ensemble of
sample functions, we may consider the values of the
process at any set of time instants t1>t2>t3>…>tn, where n
is any positive integer.
◊ In general, the random variables $X_{t_i} \equiv X(t_i)$, $i = 1, 2, \ldots, n$, are characterized statistically by their joint PDF $p(x_{t_1}, x_{t_2}, \ldots, x_{t_n})$.
42. Random Processes
◊ Stationary stochastic processes
◊ Consider another set of n random variables $X_{t_i + t} \equiv X(t_i + t)$, $i = 1, 2, \ldots, n$, where $t$ is an arbitrary time shift. These random variables are characterized by the joint PDF $p(x_{t_1+t}, x_{t_2+t}, \ldots, x_{t_n+t})$.
◊ The joint PDFs of the random variables $X_{t_i}$ and $X_{t_i+t}$, $i = 1, 2, \ldots, n$, may or may not be identical. When they are identical, i.e., when
$p(x_{t_1}, x_{t_2}, \ldots, x_{t_n}) = p(x_{t_1+t}, x_{t_2+t}, \ldots, x_{t_n+t})$
for all $t$ and all $n$, the process is said to be stationary in the strict sense (SSS).
◊ When the joint PDFs are different, the stochastic process is
non-stationary.
43. Random Processes
◊ Averages for a stochastic process are called ensemble averages.
◊ The nth moment of the random variable $X_{t_i}$ is defined as:
$E[X_{t_i}^n] = \int_{-\infty}^{\infty} x_{t_i}^n\, p(x_{t_i})\,dx_{t_i}$
◊ In general, the value of the nth moment will depend on the time instant $t_i$ if the PDF of $X_{t_i}$ depends on $t_i$.
◊ When the process is stationary, $p(x_{t_i + t}) = p(x_{t_i})$ for all $t$. Therefore, the PDF is independent of time, and, as a consequence, the nth moment is independent of time.
44. Random Processes
◊ Two random variables: $X_{t_i} \equiv X(t_i)$, $i = 1, 2$.
◊ The correlation is measured by the joint moment:
$E[X_{t_1} X_{t_2}] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_{t_1} x_{t_2}\, p(x_{t_1}, x_{t_2})\,dx_{t_1}\,dx_{t_2}$
◊ Since this joint moment depends on the time instants t1 and
t2, it is denoted by RX(t1 ,t2).
◊ RX(t1 ,t2) is called the autocorrelation function of the
stochastic process.
◊ For a stationary stochastic process, the joint moment is:
$E[X_{t_1} X_{t_2}] = R_X(t_1, t_2) = R_X(t_1 - t_2) = R_X(\tau)$
◊ With $t_1' = t_1 + \tau$,
$R_X(-\tau) = E[X_{t_1} X_{t_1+\tau}] = E[X_{t_1'} X_{t_1'-\tau}] = R_X(\tau)$
◊ Average power in the process X(t): $R_X(0) = E[X_t^2]$.
45. Random Processes
◊ Wide-sense stationary (WSS)
◊ A wide-sense stationary process has the property that the
mean value of the process is independent of time (a
constant) and where the autocorrelation function satisfies
the condition that RX(t1,t2)=RX(t1-t2).
◊ Wide-sense stationarity is a less stringent condition than
strict-sense stationarity.
46. Random Processes
◊ Auto-covariance function
◊ The auto-covariance function of a stochastic process is
defined as:
$\mu(t_1, t_2) = E\left[\big(X_{t_1} - m(t_1)\big)\big(X_{t_2} - m(t_2)\big)\right] = R_X(t_1, t_2) - m(t_1)\,m(t_2)$
◊ When the process is stationary, the auto-covariance
function simplifies to:
$\mu(t_1, t_2) = \mu(t_1 - t_2) = \mu(\tau) = R_X(\tau) - m^2$
◊ For a Gaussian random process, higher-order moments can
be expressed in terms of first and second moments.
Consequently, a Gaussian random process is completely
characterized by its first two moments.
47. Mean, Correlation and Covariance Functions
◊ Consider a random process X(t). We define the mean of the process X(t) as the expectation of the random variable obtained by observing the process at some time t, as shown by
$\mu_X(t) = E[X(t)] = \int_{-\infty}^{\infty} x f_{X(t)}(x)\,dx$ (5.57)
◊ A random process is said to be stationary to first order if the
distribution function (and therefore density function) of X(t) does
not vary with time.
$f_{X(t_1)}(x) = f_{X(t_2)}(x)$ for all $t_1$ and $t_2$, and hence $\mu_X(t) = \mu_X$ for all $t$ (5.59)
◊ The mean of the random process is a constant.
◊ The variance of such a process is also constant.
48. Mean, Correlation and Covariance Functions
◊ We define the autocorrelation function of the process X(t) as the
expectation of the product of two random variables X(t1) and X(t2).
$R_X(t_1, t_2) = E[X(t_1)X(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\, f_{X(t_1),X(t_2)}(x_1, x_2)\,dx_1\,dx_2$ (5.60)
◊ We say a random process X(t) is stationary to second order if the joint distribution $f_{X(t_1),X(t_2)}(x_1, x_2)$ depends only on the difference between the observation times $t_1$ and $t_2$:
$R_X(t_1, t_2) = R_X(t_2 - t_1)$ for all $t_1$ and $t_2$ (5.61)
◊ The autocovariance function of a stationary random process X(t) is
written as
$C_X(t_1, t_2) = E\left[\big(X(t_1) - \mu_X\big)\big(X(t_2) - \mu_X\big)\right] = R_X(t_2 - t_1) - \mu_X^2$ (5.62)
49. Mean, Correlation and Covariance Functions
◊ For convenience of notation, we redefine the autocorrelation function of
a stationary process X(t) as
$R_X(\tau) = E[X(t+\tau)X(t)]$ for all $t$ (5.63)
◊ This autocorrelation function has several important properties:
1. $R_X(0) = E[X^2(t)]$ (5.64)
2. $R_X(\tau) = R_X(-\tau)$ (5.65)
3. $|R_X(\tau)| \le R_X(0)$ (5.67)
◊ Proof of (5.64) can be obtained from (5.63) by putting τ = 0.
51. 5.6 Mean, Correlation and Covariance Functions
◊ The physical significance of the autocorrelation function RX(τ) is that
it provides a means of describing the “interdependence” of two
random variables obtained by observing a random process
X(t) at times τ seconds apart.
52. 5.6 Mean, Correlation and Covariance Functions
◊ Example 5.7 Sinusoidal Signal with Random Phase
◊ Consider a sinusoidal signal with random phase:
$X(t) = A\cos(2\pi f_c t + \Theta)$, where $f_\Theta(\theta) = \begin{cases} \dfrac{1}{2\pi}, & -\pi \le \theta \le \pi \\ 0, & \text{elsewhere} \end{cases}$
$R_X(\tau) = E[X(t+\tau)X(t)] = E\left[A^2\cos(2\pi f_c t + 2\pi f_c\tau + \Theta)\cos(2\pi f_c t + \Theta)\right]$
$= \frac{A^2}{2}E[\cos(4\pi f_c t + 2\pi f_c\tau + 2\Theta)] + \frac{A^2}{2}\cos(2\pi f_c\tau)$
$= \frac{A^2}{2}\int_{-\pi}^{\pi}\frac{1}{2\pi}\cos(4\pi f_c t + 2\pi f_c\tau + 2\theta)\,d\theta + \frac{A^2}{2}\cos(2\pi f_c\tau) = \frac{A^2}{2}\cos(2\pi f_c\tau)$
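◊ An ensemble-average check of this result (the amplitude, carrier frequency, observation time and lag below are arbitrary illustrative choices): averaging X(t+τ)X(t) over many random phases reproduces (A²/2)cos(2πf_cτ).

```python
import numpy as np

rng = np.random.default_rng(6)
A, fc = 1.0, 5.0                                 # assumed amplitude and carrier frequency
t, tau = 0.3, 0.07                               # arbitrary observation time and lag
realizations = 500_000

theta = rng.uniform(-np.pi, np.pi, size=realizations)
x_t = A * np.cos(2 * np.pi * fc * t + theta)
x_t_tau = A * np.cos(2 * np.pi * fc * (t + tau) + theta)

# Ensemble average E[X(t + tau) X(t)] versus the closed form (A^2 / 2) cos(2*pi*fc*tau).
print("ensemble estimate:", np.mean(x_t_tau * x_t))
print("closed form      :", A**2 / 2 * np.cos(2 * np.pi * fc * tau))
```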
53. 5.6 Mean, Correlation and Covariance Functions
◊ Averages for joint stochastic processes
◊ Let X(t) and Y(t) denote two stochastic processes and let $X_{t_i} \equiv X(t_i)$, $i = 1, 2, \ldots, n$, and $Y_{t_j'} \equiv Y(t_j')$, $j = 1, 2, \ldots, m$, represent the random variables at times $t_1 > t_2 > \cdots > t_n$ and $t_1' > t_2' > \cdots > t_m'$, respectively. The two processes are characterized statistically by their joint PDF:
$p(x_{t_1}, x_{t_2}, \ldots, x_{t_n}, y_{t_1'}, y_{t_2'}, \ldots, y_{t_m'})$
◊ The cross-correlation function of X(t) and Y(t), denoted by $R_{xy}(t_1, t_2)$, is defined as the joint moment:
$R_{xy}(t_1, t_2) = E[X_{t_1} Y_{t_2}] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_{t_1}\, y_{t_2}\, p(x_{t_1}, y_{t_2})\,dx_{t_1}\,dy_{t_2}$
◊ The cross-covariance is:
$\mu_{xy}(t_1, t_2) = R_{xy}(t_1, t_2) - m_x(t_1)\,m_y(t_2)$
54. 5.6 Mean, Correlation and Covariance Functions
◊ Averages for joint stochastic processes
◊ When the processes are jointly and individually stationary, we have $R_{xy}(t_1, t_2) = R_{xy}(t_1 - t_2)$ and $\mu_{xy}(t_1, t_2) = \mu_{xy}(t_1 - t_2)$. With $t_1' = t_1 + \tau$,
$R_{xy}(-\tau) = E[X_{t_1} Y_{t_1+\tau}] = E[X_{t_1'-\tau} Y_{t_1'}] = E[Y_{t_1'} X_{t_1'-\tau}] = R_{yx}(\tau)$
◊ The stochastic processes X(t) and Y(t) are said to be statistically independent if and only if
$p(x_{t_1}, x_{t_2}, \ldots, x_{t_n}, y_{t_1'}, y_{t_2'}, \ldots, y_{t_m'}) = p(x_{t_1}, x_{t_2}, \ldots, x_{t_n})\, p(y_{t_1'}, y_{t_2'}, \ldots, y_{t_m'})$
for all choices of $t_i$ and $t_i'$ and for all positive integers n and m.
◊ The processes are said to be uncorrelated if
$R_{xy}(t_1, t_2) = E[X_{t_1}]\,E[Y_{t_2}]$, i.e., $\mu_{xy}(t_1, t_2) = 0$
55. 5.6 Mean, Correlation and Covariance Functions
◊ Example 5.9 Quadrature-Modulated Processes
◊ Consider a pair of quadrature-modulated processes X1(t) and X2(t):
$X_1(t) = X(t)\cos(2\pi f_c t + \Theta)$
$X_2(t) = X(t)\sin(2\pi f_c t + \Theta)$
$R_{12}(\tau) = E[X_1(t)X_2(t-\tau)] = E[X(t)X(t-\tau)]\,E[\cos(2\pi f_c t + \Theta)\sin(2\pi f_c t - 2\pi f_c\tau + \Theta)]$
$= \frac{1}{2}R_X(\tau)\,E[\sin(4\pi f_c t - 2\pi f_c\tau + 2\Theta) - \sin(2\pi f_c\tau)]$
$= -\frac{1}{2}R_X(\tau)\sin(2\pi f_c\tau)$
$R_{12}(0) = E[X_1(t)X_2(t)] = 0$
56. 5.6 Mean, Correlation and Covariance Functions
◊ Ergodic Processes
◊ In many instances, it is difficult or impossible to observe all sample
functions of a random process at a given time.
◊ It is often more convenient to observe a single sample function for a
long period of time.
◊ For a sample function x(t), the time average of the mean value over an observation period 2T is
$\mu_{x,T} = \frac{1}{2T}\int_{-T}^{T} x(t)\,dt$ (5.84)
◊ For many stochastic processes of interest in communications, the
time averages and ensemble averages are equal, a property known as
ergodicity.
◊ This property implies that whenever an ensemble average is required,
we may estimate it by using a time average.
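◊ A numpy illustration of this idea for the random-phase sinusoid of Example 5.7 (window length, carrier frequency and sampling grid are illustrative assumptions): the time average of one long sample function is close to the ensemble mean, which is zero for this process.

```python
import numpy as np

rng = np.random.default_rng(7)
A, fc, T = 1.0, 5.0, 200.0                 # amplitude, carrier frequency, half window (assumed values)
t = np.linspace(-T, T, 400_001)

# One sample function of X(t) = A cos(2*pi*fc*t + theta): a single random phase is drawn.
theta = rng.uniform(-np.pi, np.pi)
x = A * np.cos(2 * np.pi * fc * t + theta)

# Time average over the window (-T, T), Eq. (5.84); uniform sampling makes this a simple mean.
mu_time = x.mean()
print("time average:", mu_time)            # close to the ensemble mean, which is 0
```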
57. 5.6 Mean, Correlation and Covariance Functions
◊ Cyclostationary Processes (in the wide sense)
◊ There is another important class of random processes commonly
encountered in practice, the mean and autocorrelation function of
which exhibit periodicity:
$\mu_X(t_1 + T) = \mu_X(t_1)$
$R_X(t_1 + T, t_2 + T) = R_X(t_1, t_2)$
for all t1 and t2.
◊ Modeling the process X(t) as cyclostationary adds a new
dimension, namely, period T to the partial description of the
process.
58. 5.7 Transmission of a Random Process Through a Linear Filter
◊ Suppose that a random process X(t) is applied as input to a linear
time-invariant filter of impulse response h(t), producing a new
random process Y(t) at the filter output.
◊ Assume that X(t) is a wide-sense stationary random process.
◊ The mean of the output random process Y(t) is given by
$\mu_Y(t) = E[Y(t)] = E\left[\int_{-\infty}^{\infty} h(\tau_1)X(t-\tau_1)\,d\tau_1\right] = \int_{-\infty}^{\infty} h(\tau_1)\,E[X(t-\tau_1)]\,d\tau_1 = \int_{-\infty}^{\infty} h(\tau_1)\,\mu_X(t-\tau_1)\,d\tau_1$ (5.86)
59. 5.7 Transmission of a Random Process Through a Linear Filter
◊ When the input random process X(t) is wide-sense stationary, the mean $\mu_X(t)$ is a constant $\mu_X$; then the mean $\mu_Y(t)$ is also a constant $\mu_Y$:
$\mu_Y = \mu_X\int_{-\infty}^{\infty} h(\tau_1)\,d\tau_1 = \mu_X H(0)$ (5.87)
where H(0) is the zero-frequency (dc) response of the system.
◊ The autocorrelation function of the output random process Y(t) is
given by:
$R_Y(t, u) = E[Y(t)Y(u)] = E\left[\int_{-\infty}^{\infty} h(\tau_1)X(t-\tau_1)\,d\tau_1\int_{-\infty}^{\infty} h(\tau_2)X(u-\tau_2)\,d\tau_2\right]$
$= \int_{-\infty}^{\infty} d\tau_1\, h(\tau_1)\int_{-\infty}^{\infty} d\tau_2\, h(\tau_2)\,E[X(t-\tau_1)X(u-\tau_2)]$
$= \int_{-\infty}^{\infty} d\tau_1\, h(\tau_1)\int_{-\infty}^{\infty} d\tau_2\, h(\tau_2)\,R_X(t-\tau_1, u-\tau_2)$
60. 5.7 Transmission of a Random Process Through a Linear Filter
◊ When the input X(t) is a wide-sense stationary random process, the
autocorrelation function of X(t) is only a function of the difference
between the observation times:
$R_Y(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)\,h(\tau_2)\,R_X(\tau - \tau_1 + \tau_2)\,d\tau_1\,d\tau_2$ (5.90)
◊ If the input to a stable linear time-invariant filter is a wide-sense
stationary random process, then the output of the filter is also a
wide-sense stationary random process.
61. 5.8 Power Spectral Density
◊ The Fourier transform of the autocorrelation function RX(τ) is called
the power spectral density SX( f ) of the random process
X(t).
$S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\exp(-j2\pi f\tau)\,d\tau$ (5.91)
$R_X(\tau) = \int_{-\infty}^{\infty} S_X(f)\exp(j2\pi f\tau)\,df$ (5.92)
◊ Equations (5.91) and (5.92) are basic relations in the theory of
spectral analysis of random processes, and together they constitute
what are usually called the Einstein-Wiener-Khintchine relations.
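◊ A numerical sketch of Eq. (5.91) (the exponential autocorrelation R_X(τ) = e^{-|τ|} is an assumed test case, not an example from the text): evaluating the integral on a grid reproduces its known transform 2/(1 + (2πf)²).

```python
import numpy as np

# Assumed test case: R_X(tau) = exp(-|tau|), whose transform is S_X(f) = 2 / (1 + (2*pi*f)^2).
tau = np.linspace(-50.0, 50.0, 200_001)
dtau = tau[1] - tau[0]
R = np.exp(-np.abs(tau))

for f in (0.0, 0.1, 0.5, 1.0):
    S_numeric = np.sum(R * np.exp(-2j * np.pi * f * tau)).real * dtau   # Eq. (5.91) on a grid
    S_exact = 2.0 / (1.0 + (2.0 * np.pi * f) ** 2)
    print(f, round(S_numeric, 4), round(S_exact, 4))
```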
62. 5.8 Power Spectral Density
◊ Properties of the Power Spectral Density
◊ Property 1: $S_X(0) = \int_{-\infty}^{\infty} R_X(\tau)\,d\tau$ (5.93)
◊ Proof: Let f = 0 in Eq. (5.91).
◊ Property 2: $E[X^2(t)] = \int_{-\infty}^{\infty} S_X(f)\,df$ (5.94)
◊ Proof: Let τ = 0 in Eq. (5.92) and note that $R_X(0) = E[X^2(t)]$.
◊ Property 3: $S_X(f) \ge 0$ for all $f$ (5.95)
◊ Property 4: $S_X(-f) = S_X(f)$ (5.96)
◊ Proof of Property 4: From (5.91), substituting $\tau \to -\tau$ and using $R_X(-\tau) = R_X(\tau)$,
$S_X(-f) = \int_{-\infty}^{\infty} R_X(\tau)\exp(j2\pi f\tau)\,d\tau = \int_{-\infty}^{\infty} R_X(-\tau)\exp(-j2\pi f\tau)\,d\tau = S_X(f)$
63. Proof of Eq. (5.95)
◊ It can be shown that (see Eq. (5.106)) $S_Y(f) = S_X(f)\,|H(f)|^2$.
$R_Y(\tau) = \int_{-\infty}^{\infty} S_Y(f)\exp(j2\pi f\tau)\,df = \int_{-\infty}^{\infty} S_X(f)\,|H(f)|^2\exp(j2\pi f\tau)\,df$
$R_Y(0) = E[Y^2(t)] = \int_{-\infty}^{\infty} S_X(f)\,|H(f)|^2\,df \ge 0 \quad \text{for any } H(f)$
◊ Suppose we let $|H(f)|^2 = 1$ for an arbitrarily small interval $f_1 \le f \le f_2$, and $H(f) = 0$ outside this interval. Then we have:
$\int_{f_1}^{f_2} S_X(f)\,df \ge 0$
This is possible if and only if $S_X(f) \ge 0$ for all $f$.
◊ Conclusion: $S_X(f) \ge 0$ for all $f$.
64. 5.8 Power Spectral Density
◊ Example 5.10 Sinusoidal Signal with Random Phase
◊ Consider the random process X(t)=Acos(2πfc t+Θ), where Θ is a
uniformly distributed random variable over the interval (-π,π).
◊ The autocorrelation function of this random process is given in Example 5.7:
$R_X(\tau) = \frac{A^2}{2}\cos(2\pi f_c\tau)$ (5.74)
◊ Taking the Fourier transform of both sides of this relation:
$S_X(f) = \frac{A^2}{4}\left[\delta(f - f_c) + \delta(f + f_c)\right]$ (5.97)
65. 5.8 Power Spectral Density
◊ Example 5.12 Mixing of a Random Process with a
Sinusoidal Process
◊ A situation that often arises in practice is that of mixing (i.e.,
multiplication) of a WSS random process X(t) with a sinusoidal
signal cos(2πfc t+Θ), where the phase Θ is a random variable that
is uniformly distributed over the interval (0,2π).
◊ Determining the power spectral density of the random process Y(t)
defined by:
$Y(t) = X(t)\cos(2\pi f_c t + \Theta)$ (5.101)
◊ We note that the random variable Θ is independent of X(t).
66. 5.8 Power Spectral Density
◊ Example 5.12 Mixing of a Random Process with a
Sinusoidal Process (continued)
◊ The autocorrelation function of Y(t) is given by:
$R_Y(\tau) = E[Y(t+\tau)Y(t)] = E\left[X(t+\tau)\cos(2\pi f_c t + 2\pi f_c\tau + \Theta)\,X(t)\cos(2\pi f_c t + \Theta)\right]$
$= E[X(t+\tau)X(t)]\,E\left[\cos(2\pi f_c t + 2\pi f_c\tau + \Theta)\cos(2\pi f_c t + \Theta)\right]$
$= \frac{1}{2}R_X(\tau)\,E\left[\cos(2\pi f_c\tau) + \cos(4\pi f_c t + 2\pi f_c\tau + 2\Theta)\right] = \frac{1}{2}R_X(\tau)\cos(2\pi f_c\tau)$
◊ Taking the Fourier transform:
$S_Y(f) = \frac{1}{4}\left[S_X(f - f_c) + S_X(f + f_c)\right]$ (5.103)
67. 5.8 Power Spectral Density
◊ Relation among the Power Spectral Densities of the Input and
Output Random Processes
◊ Let SY( f ) denote the power spectral density of the output random process Y(t)
obtained by passing the random process through a linear filter of transfer
function H( f ).
$S_Y(f) = \int_{-\infty}^{\infty} R_Y(\tau)\,e^{-j2\pi f\tau}\,d\tau, \qquad R_Y(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)\,h(\tau_2)\,R_X(\tau - \tau_1 + \tau_2)\,d\tau_1\,d\tau_2 \quad (5.90)$
$S_Y(f) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)\,h(\tau_2)\,R_X(\tau - \tau_1 + \tau_2)\,e^{-j2\pi f\tau}\,d\tau_1\,d\tau_2\,d\tau$
Let $\tau_0 = \tau - \tau_1 + \tau_2$, i.e., $\tau = \tau_0 + \tau_1 - \tau_2$:
$S_Y(f) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)\,h(\tau_2)\,R_X(\tau_0)\,e^{-j2\pi f(\tau_0 + \tau_1 - \tau_2)}\,d\tau_1\,d\tau_2\,d\tau_0$
$= \int_{-\infty}^{\infty} h(\tau_1)\,e^{-j2\pi f\tau_1}\,d\tau_1\int_{-\infty}^{\infty} h(\tau_2)\,e^{j2\pi f\tau_2}\,d\tau_2\int_{-\infty}^{\infty} R_X(\tau_0)\,e^{-j2\pi f\tau_0}\,d\tau_0$
$= H(f)\,H^*(f)\,S_X(f) = |H(f)|^2\,S_X(f)$ (5.106)
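◊ A discrete-time numpy sketch of Eq. (5.106) (the white-noise input, the FIR impulse response [0.5, 1.0, 0.5] and the record lengths are illustrative assumptions): averaged periodograms of the filter output agree with |H(f)|² times the flat input spectral density.

```python
import numpy as np

rng = np.random.default_rng(8)
n_real, n_samp = 2000, 1024

# Input: discrete-time white Gaussian noise with unit variance, so its spectral density is flat (S_X ~ 1).
x = rng.normal(0.0, 1.0, size=(n_real, n_samp))

h = np.array([0.5, 1.0, 0.5])                      # assumed FIR impulse response, for illustration only
H = np.fft.rfft(h, n=n_samp)

# Filter every realization; circular convolution via the FFT keeps Y(f) = H(f) X(f) exact bin by bin.
X = np.fft.rfft(x, axis=1)
y = np.fft.irfft(X * H, n=n_samp, axis=1)

# Averaged periodogram of the output versus |H(f)|^2 * S_X(f).
S_Y_est = np.mean(np.abs(np.fft.rfft(y, axis=1)) ** 2, axis=0) / n_samp
S_Y_theory = np.abs(H) ** 2 * 1.0

mask = S_Y_theory > 0.1                            # avoid dividing by the spectral null of the filter
print(np.max(np.abs(S_Y_est[mask] / S_Y_theory[mask] - 1.0)))   # small: estimation noise only
```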
68. 5.8 Power Spectral Density
◊ Example 5.13 Comb Filter
◊ Consider the filter of Figure (a) consisting of a delay line and a
summing device. We wish to evaluate the power spectral density
of the filter output Y(t).
69. 5.8 Power Spectral Density
◊ Example 5.13 Comb Filter (continued)
◊ The transfer function of this filter is
$H(f) = 1 - \exp(-j2\pi fT) = 1 - \cos(2\pi fT) + j\sin(2\pi fT)$
$|H(f)|^2 = \left[1 - \cos(2\pi fT)\right]^2 + \sin^2(2\pi fT) = 2\left[1 - \cos(2\pi fT)\right] = 4\sin^2(\pi fT)$
◊ Because of the periodic form of this frequency response (Fig. (b)),
the filter is sometimes referred to as a comb filter.
◊ The power spectral density of the filter output is:
$S_Y(f) = |H(f)|^2 S_X(f) = 4\sin^2(\pi fT)\,S_X(f)$
◊ If $fT$ is very small,
$S_Y(f) \approx 4\pi^2 f^2 T^2 S_X(f)$ (5.107)
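◊ A numpy check of this magnitude response (the delay T = 1 ms, sampling rate and FFT length are illustrative assumptions): the FFT of the delay-and-subtract impulse response reproduces |H(f)|² = 4 sin²(πfT).

```python
import numpy as np

T, fs = 1e-3, 100e3                              # assumed delay T = 1 ms and sampling rate 100 kHz
delay = int(round(T * fs))                       # delay expressed in samples

# Impulse response of the delay-and-subtract filter: y(t) = x(t) - x(t - T).
h = np.zeros(delay + 1)
h[0], h[delay] = 1.0, -1.0

# |H(f)|^2 from the FFT of h versus the closed form 4 sin^2(pi * f * T).
n_fft = 8192
H = np.fft.rfft(h, n=n_fft)
f = np.fft.rfftfreq(n_fft, d=1.0 / fs)
print(np.allclose(np.abs(H) ** 2, 4.0 * np.sin(np.pi * f * T) ** 2, atol=1e-9))   # True
```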
70. 5.9 Gaussian Process
◊ A random variable Y is defined by:
$Y = \int_{0}^{T} g(t)\,X(t)\,dt$
◊ We refer to Y as a linear functional of X(t).
◊ By contrast, for $Y = \sum_{i=1}^{N} a_i X_i$, where the $a_i$ are constants and the $X_i$ are random variables, Y is a linear function of the $X_i$.
◊ If the weighting function g(t) is such that the mean-square value of
the random variable Y is finite, and if the random variable Y is a
Gaussian-distributed random variable for every g(t) in this class of
functions, then the process X(t) is said to be a Gaussian process.
◊ In other words, the process X(t) is a Gaussian process if every
linear functional of X(t) is a Gaussian random variable.
◊ The Gaussian process has many properties that make analytic
results possible.
◊ The random processes produced by physical phenomena are often
such that a Gaussian model is appropriate.
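◊ A small numpy sketch of this definition (the weighting function g(t) = sin(2πt), the smoothed-noise sample functions and the Riemann-sum approximation are all illustrative assumptions): when the sample functions are Gaussian, the approximated linear functional Y has the standardized third and fourth moments of a Gaussian variable (close to 0 and 3).

```python
import numpy as np

rng = np.random.default_rng(9)
n_real, n_t, T = 20_000, 200, 1.0
t = np.linspace(0.0, T, n_t)
dt = t[1] - t[0]
g = np.sin(2 * np.pi * t)                         # an arbitrary weighting function g(t)

# Sample functions of a Gaussian process: smoothed white Gaussian noise (jointly Gaussian samples).
w = rng.normal(size=(n_real, n_t))
kernel = np.ones(10) / 10.0
X = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, w)

# Y = integral of g(t) X(t) dt, approximated by a Riemann sum for every realization.
Y = (X * g).sum(axis=1) * dt

# If X(t) is a Gaussian process, Y is Gaussian: standardized 3rd and 4th moments near 0 and 3.
Z = (Y - Y.mean()) / Y.std()
print("skewness ~ 0:", np.mean(Z**3))
print("kurtosis ~ 3:", np.mean(Z**4))
```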