Stochastic Section 4
Random Signals & Random Signal Models
Eslam Adel
March 21, 2018
1 Random Signals
1. A random signal is a sequence of random variables [x_1, x_2, x_3, \ldots, x_N].
2. The mean of the random signal, E[x], can be approximated as

   E[x] = \frac{1}{N} \sum_{i=0}^{N-1} x_i    (1)
3. The sample autocorrelation R_{x_n x_{n+k}} is defined as:

   R_{x_n x_{n+k}} = \frac{1}{N} \sum_{i=0}^{N-1-k} x_{i+k} x_i,   k = 0, 1, 2, \ldots, N-1    (2)
4. We always assume that the signal is zero mean. If the signal is not zero mean, subtract the mean from it:

   x \to x - E[x]

   Note: for a zero-mean signal, R_{xx}(0) = \sigma^2.
5. A random signal is assumed to be wide-sense stationary (WSS), where:
   1. E[x] = constant
   2. R_{x_{n+k} x_n} = E[x_{n+k} x_n] = R_{xx}(k) (not a function of the time variable n)
6. Stationarity means that the statistical parameters computed over different windows of the same signal are almost the same.
7. For a WSS signal, R_{xx}(k) = R_{xx}(-k).
1.1 Example
Assume signal x[n] = [1, 2, 3, 4, 5]
Evaluate the following:
1. Rxx(0)
2. Rxx(−1)
3. Rxx(1)
Solution:
First make sure that the signal is zero mean. Here m = 3, so x[n] becomes

x[n] = [-2, -1, 0, 1, 2]

1. R_{xx}(0)

   R_{xx}(0) = \frac{1}{5} \left[ (-2)(-2) + (-1)(-1) + (0)(0) + (1)(1) + (2)(2) \right] = \frac{10}{5} = 2

2. R_{xx}(1)

   R_{xx}(1) = \frac{1}{5} \left[ (-1)(-2) + (0)(-1) + (1)(0) + (2)(1) \right] = \frac{4}{5}

3. R_{xx}(-1)

   R_{xx}(-1) = \frac{1}{5} \left[ (-2)(-1) + (-1)(0) + (0)(1) + (1)(2) \right] = \frac{4}{5}

Note: for a WSS signal, R_{xx}(k) = R_{xx}(-k).
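The example above can be reproduced with a short script. A minimal sketch in Python using NumPy (the helper name `sample_autocorr` is our own, not from the notes):

```python
import numpy as np

def sample_autocorr(x, k):
    """Sample autocorrelation R_xx(k) = (1/N) * sum_{i=0}^{N-1-k} x[i+k] * x[i]."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()              # enforce the zero-mean assumption
    N = len(x)
    k = abs(k)                    # use R_xx(k) = R_xx(-k)
    return float(np.dot(x[k:], x[:N - k]) / N)

x = [1, 2, 3, 4, 5]
print(sample_autocorr(x, 0))      # 2.0
print(sample_autocorr(x, 1))      # 0.8  (= 4/5)
print(sample_autocorr(x, -1))     # 0.8
```

Note that the estimate is biased (we divide by N, not N - k), matching equation (2).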
1.2 Problem 1.25
A random signal x(n) is defined as a linear function of time by

x(n) = an + b    (3)

where a and b are independent zero-mean Gaussian random variables with variances \sigma_a^2 and \sigma_b^2, respectively.

1. Compute E[x(n)^2].
2. Is x(n) a stationary process? Explain.
3. For each fixed n, compute the probability density p(x(n)).
Solution:

1. E[x(n)^2]

   E[x(n)^2] = E[(an + b)^2] = E[a^2 n^2 + 2abn + b^2]

   E[x(n)^2] = n^2 E[a^2] + E[b^2] + 2n E[ab]

   Since E[a^2] = \sigma_a^2, E[b^2] = \sigma_b^2, and E[ab] = E[a]E[b] = 0:

   E[x(n)^2] = n^2 \sigma_a^2 + \sigma_b^2
2. To check whether the signal is WSS:

   1. E[x(n)] = nE[a] + E[b] = 0 = constant, so we still have to check the autocorrelation.
   2. R_{x_{n+k} x_n} = E[x_{n+k} x_n] = E[(a(n+k) + b)(an + b)] = n(n+k)E[a^2] + E[b^2]

      R_{x_{n+k} x_n} = n(n+k)\sigma_a^2 + \sigma_b^2

   The autocorrelation is a function of the time variable n, so x(n) is not WSS.
3. p(x(n))

   x(n) = an + b, where a \sim N(0, \sigma_a^2) and b \sim N(0, \sigma_b^2). A linear combination of independent Gaussians is Gaussian, so x(n) \sim N(0, \sigma_x^2) with

   \sigma_x^2 = E[x^2] - m_x^2 = n^2 \sigma_a^2 + \sigma_b^2

   p(x(n)) = \frac{1}{\sqrt{2\pi}\,\sigma_x} e^{-x^2 / 2\sigma_x^2}
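The result E[x(n)^2] = n^2 \sigma_a^2 + \sigma_b^2 can be sanity-checked by Monte Carlo simulation. A minimal sketch (the variances and the time index n below are illustrative choices, not values from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_a, sigma_b, n = 2.0, 1.0, 3         # illustrative values, not from the problem
a = rng.normal(0.0, sigma_a, 200_000)     # a ~ N(0, sigma_a^2)
b = rng.normal(0.0, sigma_b, 200_000)     # b ~ N(0, sigma_b^2)
x = n * a + b                             # x(n) = a*n + b at the fixed time n

empirical = float(np.mean(x ** 2))
theoretical = n ** 2 * sigma_a ** 2 + sigma_b ** 2   # n^2 sigma_a^2 + sigma_b^2 = 37
print(empirical, theoretical)
```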
2 Random Signal Models

The idea is that we do not need to store the whole signal: we can model it and store only the model parameters, then regenerate the signal using a random signal generator and the stored model. See Figure 1.
Applications:

1. Signal synthesis:
   for example, speech synthesis.
2. Classification:
   model parameters are used as a feature vector in the classification of different signals.
3. Data compression:
   for data transmission, there may not be enough bandwidth to transmit all of the data. Only the model parameters are transmitted, and the signal is recovered from the model at the receiver.
Figure 1: General Block Diagram for signal modeling
2.1 Moving Average Model (MA)

X(z) = C(z)\,\epsilon(z)

where \epsilon(z) is uncorrelated white Gaussian noise (WGN). In the time domain:

x(n) = \sum_{i=0}^{N-1} c_{n-i}\, \epsilon_i
2.2 Auto Regressive Model (AR)

X(z) = \frac{b_0}{A(z)}\,\epsilon(z)

In the time domain:

\sum_{i=0}^{N-1} a_{n-i}\, x_i = b_0\, \epsilon(n)
2.3 Auto Regressive Moving Average Model (ARMA)

X(z) = \frac{B(z)}{A(z)}\,\epsilon(z)

In the time domain:

\sum_{i=0}^{N-1} a_{n-i}\, x_i = \sum_{i=0}^{N-1} b_{n-i}\, \epsilon_i
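These models can be simulated directly from their difference equations. A minimal sketch, assuming a unit-variance WGN input and arbitrary illustrative coefficients (a = 0.5 and the MA taps are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
eps = rng.normal(0.0, 1.0, 100_000)   # unit-variance white Gaussian noise

# AR(1): x(n) = a * x(n-1) + eps(n)   <->   X(z) = eps(z) / (1 - a z^-1)
a = 0.5
x = np.zeros_like(eps)
for n in range(1, len(x)):
    x[n] = a * x[n - 1] + eps[n]

# MA: y(n) = sum_i c_i * eps(n-i)     <->   Y(z) = C(z) eps(z)
c = np.array([1.0, 0.5, 0.25])
y = np.convolve(eps, c)[:len(eps)]

# For AR(1) the stationary variance is 1 / (1 - a^2); for MA it is sum(c_i^2).
print(np.var(x), 1 / (1 - a ** 2))
print(np.var(y), np.sum(c ** 2))
```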
2.4 Problem 1.30

A random signal y(n) is generated by sending unit-variance zero-mean white noise \epsilon(n) through the filters defined by the following difference equations:

1. y(n) = -0.9\, y(n-1) + \epsilon(n)
2. y(n) = 0.9\, y(n-1) + \epsilon(n) + \epsilon(n-1)
3. y(n) = \epsilon(n) + 2\epsilon(n-1) + \epsilon(n-2)
4. y(n) = -0.81\, y(n-2) + \epsilon(n)
5. y(n) = 0.1\, y(n-1) + 0.72\, y(n-2) + \epsilon(n) - 2\epsilon(n-1) + \epsilon(n-2)

a) For each case, determine the transfer function of the filter and decide whether the model is ARMA, MA, or AR.
b) Write explicitly the power spectrum S_{yy}(\omega).
Solution:
1. Taking the z-transform of both sides:

   Y(z) = -0.9\, Y(z) z^{-1} + \epsilon(z)

   (1 + 0.9 z^{-1})\, Y(z) = \epsilon(z)

   Y(z) = \frac{1}{1 + 0.9 z^{-1}}\,\epsilon(z)

   So the transfer function is \frac{1}{1 + 0.9 z^{-1}}, and the model type is AR.

   Power spectrum: S_{yy}(\omega) = \left| \frac{1}{1 + 0.9 e^{-j\omega}} \right|^2
2. Taking the z-transform of both sides:

   Y(z) = 0.9\, Y(z) z^{-1} + \epsilon(z) + \epsilon(z) z^{-1}

   (1 - 0.9 z^{-1})\, Y(z) = (1 + z^{-1})\,\epsilon(z)

   Y(z) = \frac{1 + z^{-1}}{1 - 0.9 z^{-1}}\,\epsilon(z)

   So the transfer function is \frac{1 + z^{-1}}{1 - 0.9 z^{-1}}, and the model type is ARMA.

   Power spectrum: S_{yy}(\omega) = \left| \frac{1 + e^{-j\omega}}{1 - 0.9 e^{-j\omega}} \right|^2
3. Taking the z-transform of both sides:

   Y(z) = \epsilon(z) + 2\epsilon(z) z^{-1} + \epsilon(z) z^{-2}

   Y(z) = (1 + 2z^{-1} + z^{-2})\,\epsilon(z)

   So the transfer function is 1 + 2z^{-1} + z^{-2}, and the model type is MA.

   Power spectrum: S_{yy}(\omega) = \left| 1 + 2e^{-j\omega} + e^{-2j\omega} \right|^2
4. Solve as above
5. Solve as above
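For case 1, the closed-form power spectrum can be checked against an averaged periodogram of simulated data. A sketch, assuming unit-variance white Gaussian input (the signal length and number of trials are arbitrary choices):

```python
import numpy as np

# Case 1: y(n) = -0.9 y(n-1) + eps(n)  ->  H(z) = 1 / (1 + 0.9 z^-1), an AR model.
def syy_case1(w):
    """Theoretical power spectrum |H(e^{jw})|^2 for unit-variance white input."""
    H = 1.0 / (1.0 + 0.9 * np.exp(-1j * w))
    return np.abs(H) ** 2

rng = np.random.default_rng(2)
N, trials = 512, 400
acc = np.zeros(N)
for _ in range(trials):
    eps = rng.normal(size=N)
    y = np.zeros(N)
    for n in range(1, N):
        y[n] = -0.9 * y[n - 1] + eps[n]
    acc += np.abs(np.fft.fft(y)) ** 2 / N    # periodogram of this realization
S_est = acc / trials

# The pole near z = -0.9 boosts power near w = pi: S_yy(pi) = 1 / (1 - 0.9)^2 = 100.
print(S_est[N // 2], syy_case1(np.pi))
```

The estimate is noisy and slightly biased by spectral leakage and the zero initial condition, so only rough agreement should be expected.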
Figure 2: Linear estimation of a signal
3 Linear Estimation of Signal

What comes after building a signal model? We have to determine the model parameters, but how do we select them? Several different methods have been proposed; we will cover two of them.

The setting is this: a signal x is changed by noise into another signal y. We apply our model to y to estimate x, obtaining \tilde{x}. See Figure 2.
3.1 Maximum Likelihood Method (ML)

The idea is to maximize the joint probability density of all random variables, f(x_1, x_2, \ldots, x_N).

Applying this to a first-order autoregressive model AR(1):

X(z) = \frac{1}{1 - a z^{-1}}\,\epsilon(z)

The model parameters are a and \sigma^2. The solution is:

a = \frac{\sum_{n=1}^{N-1} x_n x_{n-1}}{\sum_{n=1}^{N-1} x_{n-1}^2}

\sigma^2 = \frac{1}{N} \sum_{n=1}^{N-1} \left( x(n) - a\, x(n-1) \right)^2
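These ML formulas can be tested on synthetic AR(1) data with known parameters. A sketch (the true values a = 0.7 and \sigma^2 = 1 are illustrative choices):

```python
import numpy as np

def ml_ar1(x):
    """ML estimates of (a, sigma^2) for the AR(1) model x(n) = a x(n-1) + eps(n)."""
    x = np.asarray(x, dtype=float)
    a = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
    sigma2 = np.sum((x[1:] - a * x[:-1]) ** 2) / len(x)
    return a, sigma2

# Generate an AR(1) realization with known a = 0.7, sigma^2 = 1.
rng = np.random.default_rng(3)
eps = rng.normal(size=50_000)
x = np.zeros_like(eps)
for n in range(1, len(x)):
    x[n] = 0.7 * x[n - 1] + eps[n]

a_hat, s2_hat = ml_ar1(x)
print(a_hat, s2_hat)   # should be close to 0.7 and 1.0
```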
3.2 Mean Square Error Method (MS)

The error is defined as:

e = x - \tilde{x}

so the mean square error is E[(x - \tilde{x})^2]. To minimize the error, the derivative of the MSE with respect to each model parameter must be zero. Let's see an example:
3.2.1 Example 1
Assume

\tilde{x} = ay

Determine the value of a that minimizes the mean square error.
Solution:

MSE = E[e^2] = E[(x - \tilde{x})^2] = E[(x - ay)^2]

\frac{\partial\, MSE}{\partial a} = 0

E[2(x - ay)(-y)] = 0

-E[xy] + aE[y^2] = 0

a = \frac{E[xy]}{E[y^2]} = \frac{R_{xy}(0)}{R_{yy}(0)}
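As a concrete check, take the common additive-noise setting y = x + v (an assumption for illustration, not stated in the example) and estimate a from sample correlations:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(0.0, 2.0, 100_000)   # zero-mean signal, variance 4
v = rng.normal(0.0, 1.0, 100_000)   # independent noise, variance 1
y = x + v                           # observed signal (illustrative model)

# Optimal scalar gain from the example: a = R_xy(0) / R_yy(0)
a = np.mean(x * y) / np.mean(y * y)
x_hat = a * y
mse = np.mean((x - x_hat) ** 2)

# Theory for this setup: a = 4 / (4 + 1) = 0.8 and MSE = 4*1 / (4 + 1) = 0.8
print(a, mse)
```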
3.2.2 Example 2

Given

\tilde{x} = a\, y(n) + b\, y(n-1)

determine the values of a and b that minimize the mean square error.

Solution:

MSE = E[e^2] = E[(x - \tilde{x})^2] = E[(x - a y_n - b y_{n-1})^2]

\frac{\partial\, MSE}{\partial a} = 0

E[2(x - a y_n - b y_{n-1})(-y_n)] = 0

-E[x y_n] + aE[y_n^2] + bE[y_{n-1} y_n] = 0

a R_{yy}(0) + b R_{yy}(1) = R_{xy}(0)    (4)

Similarly for b:

\frac{\partial\, MSE}{\partial b} = 0

E[2(x - a y_n - b y_{n-1})(-y_{n-1})] = 0

-E[x y_{n-1}] + aE[y_n y_{n-1}] + bE[y_{n-1}^2] = 0

a R_{yy}(1) + b R_{yy}(0) = R_{xy}(1)    (5)

We can put this in matrix form:

\begin{bmatrix} R_{yy}(0) & R_{yy}(1) \\ R_{yy}(1) & R_{yy}(0) \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} R_{xy}(0) \\ R_{xy}(1) \end{bmatrix}    (6)

Finally:

\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} R_{yy}(0) & R_{yy}(1) \\ R_{yy}(1) & R_{yy}(0) \end{bmatrix}^{-1} \begin{bmatrix} R_{xy}(0) \\ R_{xy}(1) \end{bmatrix}    (7)
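The normal equations (6)-(7) can be solved numerically from sample correlations. A sketch, where x is taken as an AR(1) signal observed in additive noise (this setup is an illustrative assumption, not part of the example):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000
eps = rng.normal(size=N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = 0.8 * x[n - 1] + eps[n]       # hidden signal: AR(1)
y = x + rng.normal(0.0, 1.0, N)          # observation: x plus unit-variance noise

# Sample correlations appearing in equation (6)
Ryy0 = np.mean(y[1:] * y[1:])
Ryy1 = np.mean(y[1:] * y[:-1])
Rxy0 = np.mean(x[1:] * y[1:])
Rxy1 = np.mean(x[1:] * y[:-1])

R = np.array([[Ryy0, Ryy1],
              [Ryy1, Ryy0]])
r = np.array([Rxy0, Rxy1])
a, b = np.linalg.solve(R, r)             # equation (7)

x_hat = a * y[1:] + b * y[:-1]           # two-tap estimate a*y(n) + b*y(n-1)
mse2 = np.mean((x[1:] - x_hat) ** 2)
mse1 = np.mean((x[1:] - (Rxy0 / Ryy0) * y[1:]) ** 2)   # one-tap estimate from Example 1
print(a, b, mse2, mse1)
```

Because the two-tap estimator minimizes the same sample quadratic that the one-tap estimator is a special case of, its MSE can never exceed the one-tap MSE.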