This document discusses three methods for equalization in wideband TDMA systems: linear equalization, decision feedback equalization, and maximum likelihood sequence estimation using the Viterbi algorithm. Linear equalization methods such as least-mean-square aim to minimize intersymbol interference but offer limited performance. Decision feedback equalization performs better than linear equalization by cancelling interference from previously decided symbols. Maximum likelihood sequence estimation using the Viterbi algorithm provides the best performance, at the highest complexity, by estimating the most likely transmitted sequence. The document provides examples of equalizer structures and of algorithms such as LMS for adjusting filter coefficients to minimize intersymbol interference.
1. Equalization in a wideband TDMA system
• Three basic equalization methods
• Linear equalization (LE)
• Decision feedback equalization (DFE)
• Sequence estimation (MLSE-VA)
• Example of channel estimation circuit
2. Three basic equalization methods (1)
Linear equalization (LE):
Performance is not very good when the frequency response
of the frequency selective channel contains deep fades.
Zero-forcing algorithm aims to eliminate the intersymbol
interference (ISI) at decision time instants (i.e. at the
center of the bit/symbol interval).
Least-mean-square (LMS) algorithm will be investigated in
greater detail in this presentation.
Recursive least-squares (RLS) algorithm offers faster
convergence, but is computationally more complex than
LMS (since matrix inversion is required).
3. Three basic equalization methods (2)
Decision feedback equalization (DFE):
Performance better than LE, due to ISI cancellation of tails
of previously received symbols.
Decision feedback equalizer structure:
[Figure: the input passes through a feed-forward filter (FFF); the feed-back filter (FBF) output is subtracted from the FFF output; the sum goes to the symbol decision, which produces the output; the decided symbols feed the FBF, and the error drives the adjustment of the filter coefficients.]
4. Three basic equalization methods (3)
Maximum Likelihood Sequence Estimation using
the Viterbi Algorithm (MLSE-VA):
Best performance. Operation of the Viterbi algorithm can be
visualized by means of a trellis diagram with m^(K-1) states,
where m is the symbol alphabet size and K is the length of
the overall channel impulse response (in samples).
[Figure: state trellis diagram showing the states at successive sample time instants and the allowed transitions between states.]
5. Linear equalization, zero-forcing algorithm
Basic idea: the equalizer frequency response E(f) is chosen so that
the overall response equals a raised cosine spectrum:
Z(f) = B(f) H(f) E(f)
where B(f) is the transmitted symbol spectrum, H(f) is the channel
frequency response (incl. T & R filters), and Z(f) is the resulting
raised cosine spectrum.
[Figure: spectra B(f), H(f), E(f) and Z(f) plotted versus frequency over 0 ... f_s = 1/T.]
6. Zero-forcing equalizer
[Figure: the transmitted impulse sequence passes through the communication channel (overall channel impulse response h(k), FIR filter with 2N+1 coefficients), producing r(k), and then through the equalizer (impulse response c(k), FIR filter with 2M+1 coefficients), producing z(k), the input to the decision circuit.]
Coefficients of the equivalent FIR filter:
f_k = Σ_{m=−M}^{M} c_m h_{k−m},  −M ≤ k ≤ M
(in fact the equivalent FIR filter consists of 2M+1+2N coefficients,
but the equalizer can only “handle” 2M+1 equations)
7. Zero-forcing equalizer
We want the overall filter response to be non-zero at decision time
k = 0 and zero at all other sampling times k ≠ 0:

f_k = Σ_{m=−M}^{M} c_m h_{k−m} = { 1, k = 0
                                  { 0, k ≠ 0

This leads to a set of 2M+1 equations:

h_0 c_{−M} + h_{−1} c_{−M+1} + … + h_{−2M} c_M = 0   (k = −M)
  :
h_M c_{−M} + h_{M−1} c_{−M+1} + … + h_{−M} c_M = 1   (k = 0)
  :
h_{2M} c_{−M} + h_{2M−1} c_{−M+1} + … + h_0 c_M = 0   (k = M)
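The 2M+1 zero-forcing equations form a linear system that can be solved directly. A minimal sketch in Python (the 3-tap channel values and M = 2 are illustrative assumptions, not from the slides):

```python
import numpy as np

def zero_forcing_taps(h, M):
    """Solve the 2M+1 zero-forcing equations f_k = sum_m c_m h_{k-m}.

    h: dict mapping integer delay k to channel tap h_k (zero elsewhere).
    Returns equalizer taps c_{-M} ... c_{M} as a length-(2M+1) array.
    """
    size = 2 * M + 1
    A = np.zeros((size, size))
    for i, k in enumerate(range(-M, M + 1)):      # equation index k
        for j, m in enumerate(range(-M, M + 1)):  # unknown coefficient c_m
            A[i, j] = h.get(k - m, 0.0)
    f = np.zeros(size)
    f[M] = 1.0                                    # f_0 = 1, f_k = 0 otherwise
    return np.linalg.solve(A, f)

# Example: 3-tap channel with a small precursor and postcursor
h = {-1: 0.1, 0: 1.0, 1: 0.2}
c = zero_forcing_taps(h, M=2)
# Check: the equivalent filter is forced to 1 at k = 0
f0 = sum(c[m + 2] * h.get(0 - m, 0.0) for m in range(-2, 3))
print(round(f0, 6))  # -> 1.0
```

The matrix A is the convolution matrix of the channel taps; for channels with deep spectral fades it becomes ill-conditioned, which is the weakness of linear equalization mentioned earlier.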
8. Minimum Mean Square Error (MMSE)
The aim is to minimize:
J = E[ |e_k|² ]
where
e_k = z_k − b̂_k   (or e_k = b̂_k − z_k, depending on the source)
[Figure: the transmitted symbols s(k) pass through the channel to give the received samples r(k); the equalizer output z_k (input to the decision circuit) is compared with the estimate of the k:th symbol b̂_k to form the error e_k.]
9. MSE vs. equalizer coefficients
J = E[ |e_k|² ] is a quadratic multi-dimensional function of the
equalizer coefficient values.
[Figure: paraboloid surface of J over coefficients c_1 and c_2 — illustration of the case of two real-valued equalizer coefficients (or one complex-valued coefficient).]
MMSE aim: find the minimum value directly (Wiener solution),
or use an algorithm that recursively changes the equalizer
coefficients in the correct direction (towards the minimum
value of J)!
10. Wiener solution
We start with the Wiener-Hopf equations in matrix form:
R c_opt = p
R = correlation matrix (M x M) of received (sampled) signal
values r_k
p = vector (of length M) indicating the cross-correlation between
the received signal values and the estimate of the received symbol b̂_k
c_opt = vector (of length M) consisting of the optimal equalizer
coefficient values
(We assume here that the equalizer contains M taps, not
2M+1 taps like in other parts of this presentation.)
11. Correlation matrix R & vector p
R = E[ r_k* r_k^T ]
p = E[ r_k b̂_k* ]
where r_k = [ r_k, r_{k−1}, …, r_{k−M+1} ]^T   (M samples)
Before we can perform the statistical expectation operation,
we must know the statistical properties of the transmitted
signal (and of the channel, if it is changing). Usually we do not
have this information => some non-stochastic algorithm like
least-mean-square (LMS) must be used.
12. Algorithms
Statistical information (R and p) is available:
1. Direct solution of the Wiener-Hopf equations:
   c_opt = R^{−1} p   (inverting a large matrix is difficult!)
2. Newton’s algorithm (fast iterative algorithm)
3. Method of steepest descent (this iterative algorithm is slow
but easier to implement)
R and p are not available:
Use an algorithm that is based on the received signal sequence
directly. One such algorithm is least-mean-square (LMS).
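As a sketch of the direct solution, R and p can be estimated by sample averaging over a training block and the Wiener-Hopf equations solved numerically (the BPSK symbols, 2-tap channel, noise level, and equalizer length M = 5 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training setup: BPSK symbols through a 2-tap channel plus noise.
b = rng.choice([-1.0, 1.0], size=5000)
r = b + 0.4 * np.concatenate(([0.0], b[:-1])) + 0.05 * rng.standard_normal(5000)

M = 5  # equalizer length (M taps, as on the Wiener-solution slide)
# Received vectors r_k = [r_k, r_{k-1}, ..., r_{k-M+1}]^T stacked as rows
X = np.array([r[k - M + 1:k + 1][::-1] for k in range(M - 1, len(r))])
d = b[M - 1:]                        # desired symbol aligned with r_k

R = X.T @ X / len(X)                 # sample estimate of the correlation matrix R
p = X.T @ d / len(X)                 # sample estimate of the cross-correlation p
c_opt = np.linalg.solve(R, p)        # Wiener solution: R c_opt = p

z = X @ c_opt                        # equalizer output for the training block
print(np.mean((z - d) ** 2) < np.mean((r[M - 1:] - d) ** 2))  # -> True
```

The final line checks that the Wiener equalizer reduces the mean square error compared with using the raw received samples directly.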
13. Conventional linear equalizer of LMS type
[Figure: transversal FIR filter with 2M+1 filter taps — the received complex signal samples r_{k+M} … r_{k−M} enter a T-spaced delay line, are weighted by the complex-valued tap coefficients c_{−M} … c_M of the equalizer filter, and summed to give z_k; the error e_k between z_k and the estimate of the k:th symbol after symbol decision, b̂_k, drives the LMS algorithm for adjustment of the tap coefficients (Widrow).]
14. Joint optimization of coefficients and phase
[Figure: the received samples r(k) pass through the equalizer filter and a phase rotator e^{jφ} to give z_k; the error e_k between z_k and b̂_k drives both the coefficient updating and the phase synchronization.]
Minimize:
J = E[ |e_k|² ],  where
e_k = z_k − b̂_k = e^{jφ} Σ_{m=−M}^{M} c_m r_{k−m} − b̂_k
(Godard; Proakis, Ed.3, Section 11-5-2)
15. Least-mean-square (LMS) algorithm
(derived from the “method of steepest descent”)
for convergence towards the minimum mean square error (MMSE)
Real part of n:th coefficient:
Re c_n(i+1) = Re c_n(i) − Δ · ∂|e_k|² / ∂(Re c_n)
Imaginary part of n:th coefficient:
Im c_n(i+1) = Im c_n(i) − Δ · ∂|e_k|² / ∂(Im c_n)
Phase:
φ(i+1) = φ(i) − Δ · ∂|e_k|² / ∂φ
In total 2(2M+1)+1 equations (i = iteration index, Δ = step size
of the iteration), where |e_k|² = e_k e_k*.
16. LMS algorithm (cont.)
After some calculation, the recursion equations are obtained in
the form
Re c_n(i+1) = Re c_n(i) − 2Δ Re{ (e^{jφ} Σ_{m=−M}^{M} c_m r_{k−m} − b̂_k) r*_{k−n} e^{−jφ} }
Im c_n(i+1) = Im c_n(i) − 2Δ Im{ (e^{jφ} Σ_{m=−M}^{M} c_m r_{k−m} − b̂_k) r*_{k−n} e^{−jφ} }
φ(i+1) = φ(i) − 2Δ Im{ b̂_k* e^{jφ} Σ_{m=−M}^{M} c_m r_{k−m} }
(the expression in parentheses is the error e_k)
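A minimal real-valued sketch of the LMS tap recursion in training mode (phase synchronization omitted; the symbols, 2-tap channel, noise level, and step size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training setup: known symbols b_k, 2-tap channel, small noise.
N = 20000
b = rng.choice([-1.0, 1.0], size=N)
r = b + 0.3 * np.concatenate(([0.0], b[:-1])) + 0.02 * rng.standard_normal(N)

M = 2                     # taps c_{-M} ... c_{M}, i.e. 2M+1 = 5 coefficients
c = np.zeros(2 * M + 1)
delta = 0.01              # step size (Delta)

for k in range(M, N - M):
    window = r[k - M:k + M + 1][::-1]   # r_{k-m} for m = -M ... M
    z = c @ window                      # equalizer output z_k
    e = z - b[k]                        # error e_k = z_k - b_k (training mode)
    c -= 2 * delta * e * window         # steepest-descent update of every tap

# After convergence the squared output error should be small
errs = [(c @ r[k - M:k + M + 1][::-1] - b[k]) ** 2 for k in range(N - 1000, N - M)]
print(np.mean(errs) < 0.05)  # -> True
```

With complex taps, the update `c -= 2 * delta * e * window` would use the conjugated samples, splitting into the real and imaginary recursions shown on this slide.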
17. Effect of iteration step size
A smaller step size Δ gives slow acquisition and poor tracking
performance.
A larger step size Δ gives poor stability and large variation
around the optimum value.
18. Decision feedback equalizer
[Figure: DFE structure — the received samples r_{k+M} … r_{k−M} feed the T-spaced feed-forward filter (FFF) with taps c_{−M} … c_M; the decided symbols b̂_{k−1} … b̂_{k−Q} feed the feed-back filter (FBF) with taps q_1 … q_Q; the FBF output is subtracted from the FFF output to give z_k, the symbol decision yields b̂_k, and the error e_k drives the LMS algorithm for tap coefficient adjustment.]
19. Decision feedback equalizer (cont.)
The purpose is again to minimize J = E[ |e_k|² ], where
e_k = z_k − b̂_k = Σ_{m=−M}^{M} c_m r_{k−m} − Σ_{n=1}^{Q} q_n b̂_{k−n} − b̂_k
The feedforward filter (FFF) is similar to the filter in a linear equalizer:
tap spacing smaller than the symbol interval is allowed =>
fractionally spaced equalizer
=> oversampling by a factor of 2 or 4 is common
The feedback filter (FBF) is used for either reducing or canceling
(difference: see next slide) samples of previous symbols at
decision time instants:
tap spacing must be equal to the symbol interval
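A toy illustration of the DFE decision loop (single FFF and FBF taps, noiseless channel; all values are hypothetical): with q_1 matched to the channel postcursor, the feedback term removes the tail of the previous decision exactly.

```python
import numpy as np

def dfe_output(c, q, r_window, past_decisions):
    """One DFE output sample: FFF over received samples minus FBF over
    past decisions, z_k = sum_m c_m r_{k-m} - sum_n q_n b_{k-n}."""
    return c @ r_window - q @ past_decisions

# Hypothetical example: channel 1 + 0.5 z^{-1}, single FFF tap c_0 = 1,
# FBF tap q_1 = 0.5 cancels the postcursor of the previous symbol exactly.
b = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
r = b + 0.5 * np.concatenate(([0.0], b[:-1]))   # noiseless for clarity

c = np.array([1.0])               # FFF taps
q = np.array([0.5])               # FBF taps
decisions = [float(r[0])]         # first sample carries no ISI here
for k in range(1, len(b)):
    z = dfe_output(c, q, np.array([r[k]]), np.array([decisions[-1]]))
    decisions.append(1.0 if z >= 0 else -1.0)

print(decisions)  # -> [1.0, -1.0, 1.0, 1.0, -1.0]
```

Note the error-propagation risk inherent in the structure: a wrong decision is fed back and corrupts the cancellation of the following symbols.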
20. Decision feedback equalizer (cont.)
The coefficients of the feedback filter (FBF) can be
obtained in either of two ways:
Recursively (using the LMS algorithm), in a similar
fashion as the FFF coefficients
By calculation from the FFF coefficients and the channel
coefficients (we achieve exact ISI cancellation in
this way, but channel estimation is necessary):
q_n = Σ_{m=−M}^{M} c_m h_{n−m},  n = 1, 2, …, Q
Proakis, Ed.3, Section 11-2
Proakis, Ed.3, Section 10-3-1
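The calculation q_n = Σ c_m h_{n−m} is a plain convolution of the FFF taps with the channel estimate; a short sketch (the tap values are illustrative assumptions):

```python
import numpy as np

def fbf_from_fff(c, h, M, Q):
    """FBF taps for exact ISI cancellation: q_n = sum_{m=-M}^{M} c_m h_{n-m},
    n = 1 ... Q.  c holds c_{-M} ... c_{M}; h maps delay -> channel tap."""
    return np.array([sum(c[m + M] * h.get(n - m, 0.0)
                         for m in range(-M, M + 1))
                     for n in range(1, Q + 1)])

# Hypothetical example: a trivial FFF (c_0 = 1) just copies the channel tail,
# so the FBF must cancel exactly the postcursor taps h_1 and h_2.
h = {0: 1.0, 1: 0.4, 2: 0.1}
c = np.array([0.0, 1.0, 0.0])        # c_{-1}, c_0, c_1  (M = 1)
print(fbf_from_fff(c, h, M=1, Q=2))  # -> [0.4 0.1]
```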
21. Channel estimation circuit
[Figure: the estimated symbols b̂_k … b̂_{k−M} feed a T-spaced FIR filter with coefficients c_0 … c_M; its output r̂_k, a replica of the k:th sample of the received signal r_k, is compared with r_k, and the error drives the LMS algorithm that adjusts the coefficients.]
Estimated channel coefficients: ĥ_m = c_m
Filter length = CIR length
Proakis, Ed.3, Section 11-3
22. Channel estimation circuit (cont.)
1. Acquisition phase
Uses a “training sequence”
Symbols are known at the receiver: b̂_k = b_k
2. Tracking phase
Uses estimated symbols (decision directed mode)
Symbol estimates are obtained from the decision
circuit (note the delay in the feedback loop!)
Alternatively: blind estimation (no training sequence)
Since the estimation circuit is adaptive, time-varying
channel coefficients can be tracked to some extent.
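A sketch of the acquisition phase (known training symbols; the 3-tap channel, noise level, and step size are hypothetical): the LMS recursion drives the replica-filter coefficients towards the true channel taps.

```python
import numpy as np

rng = np.random.default_rng(2)

h_true = np.array([1.0, 0.5, -0.2])   # channel taps to be estimated
K = len(h_true)                       # filter length = CIR length

N = 5000
b = rng.choice([-1.0, 1.0], size=N)                  # known training symbols
r = np.convolve(b, h_true)[:N] + 0.01 * rng.standard_normal(N)

c = np.zeros(K)          # estimated channel coefficients (h_hat_m = c_m)
delta = 0.02
for k in range(K - 1, N):
    past = b[k - K + 1:k + 1][::-1]   # b_k, b_{k-1}, ..., b_{k-K+1}
    e = r[k] - c @ past               # error between r_k and its replica
    c += 2 * delta * e * past         # LMS update of the channel estimate

print(np.allclose(c, h_true, atol=0.05))  # -> True
```

In the tracking phase the same loop would run with the decided symbols b̂_k in place of b, which is why decision errors and the feedback delay degrade the estimate.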
23. Channel estimation circuit in receiver
[Figure: the received signal samples r(k) enter both the channel estimation circuit and the equalizer & decision circuit; the estimation circuit outputs the estimated channel coefficients ĥ(m) and is fed either training symbols b(k) (no errors) or symbol estimates b̂(k) (with errors) from the decision circuit; the equalizer & decision circuit outputs the “clean” output symbols.]
Mandatory for MLSE-VA, optional for DFE
24. Theoretical ISI cancellation receiver
(extension of DFE, for simulation of the matched filter bound)
[Figure: the received samples r_{k+P} enter a filter matched to the sampled channel impulse response; postcursor cancellation of the previous symbols b̂_{k−1} … b̂_{k−Q} and precursor cancellation of the future symbols b̂_{k+1} … b̂_{k+P} are subtracted before the decision b̂_k.]
If previous and future symbols can be estimated without error
(impossible in a practical system), matched filter performance
can be achieved.
25. MLSE-VA receiver structure
[Figure: the received signal r(t) passes through a matched filter and a NW filter to give the samples y(k); the MLSE (VA) block outputs the symbol estimates b̂(k); the channel estimation circuit supplies the overall channel estimate f̂(k), based on f(k) and the estimated symbols.]
The MLSE-VA circuit causes a delay of the estimated symbol sequence
before it is available for channel estimation
=> channel estimates may be out-of-date
(in a fast time-varying channel)
26. MLSE-VA receiver structure (cont.)
The probability of receiving sample sequence y (note: vector form)
of length N, conditioned on a certain symbol sequence estimate b̂ and
overall channel estimate f̂ (the length of f(k) is K):

p(y | b̂, f̂) = Π_{k=0}^{N−1} p(y_k | b̂, f̂) = (1/(2πσ²))^{N/2} exp( −(1/(2σ²)) Σ_{k=0}^{N−1} | y_k − Σ_{n=0}^{K−1} f̂_n b̂_{k−n} |² )

Objective: find the symbol sequence that maximizes this probability.
The sum in the exponent is the metric to be minimized (select the
best b̂ using the VA).
Writing the probability as a product over samples is allowed since
the noise samples are uncorrelated due to the NW (= noise whitening)
filter; the Gaussian form holds since we have AWGN.
27. MLSE-VA receiver structure (cont.)
We want to choose that symbol sequence estimate and overall
channel estimate which maximizes the conditional probability.
Since product of exponentials <=> sum of exponents, the
metric to be minimized is a sum expression.
If the length of the overall channel impulse response in
samples (or channel coefficients) is K, in other words the time
span of the channel is (K-1)T, the next step is to construct a
state trellis where a state is defined as a certain combination
of K-1 previous symbols causing ISI on the k:th symbol.
Note: the overall channel impulse response f(k), k = 0 … K−1,
includes the response of the matched filter and the NW filter.
28. MLSE-VA receiver structure (cont.)
At adjacent time instants, the symbol sequences causing ISI
are correlated. As an example (m = 2, K = 5), the K−1 = 4 previous
bits cause ISI on the bit detected at each time instant, giving
2^4 = 16 states.
[Figure: sliding 5-bit windows over the bit sequence at times k−3, k−2, k−1 and k, marking the bit detected at each time instant, the four bits causing ISI, and the oldest bit that no longer causes ISI at that time instant.]
29. MLSE-VA receiver structure (cont.)
[Figure: state trellis diagram over the sample time instants k−3 … k+1, with m^(K−1) states (m = alphabet size, K = CIR length); the “best” state sequence is estimated by means of the Viterbi algorithm (VA).]
Of the transitions terminating in a certain state at a certain
time instant, the VA selects the transition associated with the
highest accumulated probability (up to that time instant) for
further processing.
Proakis, Ed.3, Section 10-1-3
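The VA selection rule described above can be sketched as follows (a brute-force trellis implementation for clarity; the channel taps, symbol sequence, and binary alphabet are illustrative assumptions, and the branch metric is the squared-error term from the metric on slide 26):

```python
from itertools import product

def mlse_viterbi(y, f, symbols=(-1.0, 1.0)):
    """MLSE sketch via the Viterbi algorithm: trellis with m^(K-1) states
    (m = alphabet size, K = len(f)); branch metric |y_k - sum_n f_n b_{k-n}|^2."""
    K = len(f)
    states = list(product(symbols, repeat=K - 1))  # (b_k, ..., b_{k-K+2})
    cost = {s: 0.0 for s in states}
    path = {s: [] for s in states}
    for yk in y:
        new_cost, new_path = {}, {}
        for s in states:                     # state reached after deciding b_k
            best = None
            for prev in states:
                if prev[:-1] != s[1:]:       # keep only allowed transitions
                    continue
                bk = s[0]
                pred = f[0] * bk + sum(f[n] * prev[n - 1] for n in range(1, K))
                metric = cost[prev] + abs(yk - pred) ** 2
                if best is None or metric < best[0]:
                    best = (metric, path[prev] + [bk])
            new_cost[s], new_path[s] = best
        cost, path = new_cost, new_path
    return path[min(cost, key=cost.get)]     # survivor with the best metric

# Hypothetical example: K = 2 overall CIR, noiseless received samples
f = [1.0, 0.4]
b = [1.0, -1.0, -1.0, 1.0]
y = [f[0] * b[k] + (f[1] * b[k - 1] if k > 0 else 0.0) for k in range(len(b))]
print(mlse_viterbi(y, f))  # -> [1.0, -1.0, -1.0, 1.0]
```

Keeping one survivor per state is what bounds the complexity to m^(K−1) states per time instant instead of m^N full sequences.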