1. April 2009
Channel Matched Iterative Decoding for
Magnetic Recording Systems
Final Oral Examination
Hakim Alhussien, PhD Candidate
Adviser: Jae Moon
Communications and Data Storage (CDS) Laboratory
Department of Electrical and Computer Engineering
University of Minnesota
April 06, 2009
1
2. Hakim, April 2009
Outline
Perpendicular magnetic recording channel.
• ECC for recording channels.
• Error Pattern Correction Coding (EPCC).
EPCC enhanced TE (TE-EPCC).
• Error rate analysis of TE-EPCC.
• TE-EPCC and TP-EPCC for PMRC.
Tensor product parity codes (TPPC).
• Linear-time Encoding of tensor product codes.
• Hard decoding of EPC-RS tensor product codes.
• Error rate analysis of EPC-RS tensor product codes.
EPC-LDPC tensor product codes.
• Soft-syndrome decoding of EPC-LDPC tensor product code.
• Simulation study of EPC-LDPC.
Thesis contributions.
2
3. Hakim, April 2009
Perpendicular Magnetic Recording (PMR) Channel
Recording channel is “transition-response fixed”
• To achieve the same normalized user density at a lower coding rate, the
SNR is degraded by ∼ 10 × log10(1/R^2) dB → use high-rate codes.
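The rate penalty above is simple arithmetic; a small sketch makes the trade-off concrete:

```python
import math

def rate_penalty_db(R: float) -> float:
    """SNR penalty (dB) of running a transition-response-fixed channel
    at code rate R while holding the user density fixed: 10*log10(1/R^2)."""
    return 10.0 * math.log10(1.0 / (R * R))

# The penalty grows quickly as the rate drops, which is why
# recording systems favor very high-rate codes.
for R in (0.98, 0.9, 0.5):
    print(f"R = {R:.2f}: penalty = {rate_penalty_db(R):.2f} dB")
```

At R = 0.98 the loss is under 0.2 dB, while R = 0.5 already costs about 6 dB.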
Saturated-level recording (binary-constrained input)
• Optimal-precoding or SNR water-filling not possible.
Channel impaired by long error bursts.
• Due to ISI, disk defects, and thermal asperities.
• Symbol-correcting codes effective in burst correction,
such as RS, LDPC over GF(q).
Data reread is expensive in terms of latency
• The standard frame error rate target is very low: 10^-13 ∼ 10^-14.
Fixed ISI channel with dominant odd and even error events
• Utilize ECC targeting dominant error events after ML detection.
DC full PRML target
• A DC wandering compensation loop is required.
Transition-dependent medium noise due to zigzag domain boundaries
• Channel detector trellis incorporates PDNP.
3
4. Hakim, April 2009
ECC for PMR Read Channel
Reed Solomon (RS)
• Minimum distance of RS > that of LDPC for the same block length and rate.
• ML decoding of RS outperforms ML decoding of LDPC.
• Iterative Belief propagation decoding approaches ML performance.
• RS parity check matrix very dense – large number of 4-cycles.
• Iterative decoding of LDPC significantly outperforms RS iterative decoding.
RS with inner LDPC or turbo codes
• Error behavior of LDPC is catastrophic for strong codes.
• Requires high-rate low-column weight LDPC – weak family of codes.
• Convolutional based Turbo: long tail in symbol-error distribution.
Stand-alone LDPC
• Extensive research on lowering the SER error floor.
• LDPC with sector-wide codeword has low minimum distance.
• Sparse LDPC: improved iterative decoding – larger girth.
• Dense or large-block length LDPC: better Hamming weight spectrum
• Consider: Sparse non-binary LDPC of sector-length codeword!
4
6. Hakim, April 2009
The Channel Matched ECC Paradigm
Premise: for a given ISI channel, all dominant error patterns are known a priori.
High-density perpendicular recording channel example:
• Hyperbolic tangent transition response at a channel density of 1.4.
• 10% AWGN and 90% jitter noise.
• Target response: 1+0.9D.
• Bit error rate: 2.3276×10-3 (1 PDNP tap).
• Captured # of error patterns: 223,676.
• Edt/N90 = 13.5 dB.
[Block diagram: a strong general ECC encoder/decoder wraps a channel-matched EPC encoder/decoder around the write head/medium/read head and the equalizer/detector. The channel-matched EPC focuses on correcting a few dominant error patterns; the strong general ECC corrects the remaining errors.]
6
7. Hakim, April 2009
EPCC Design: Target List = 5 most dominant errors
Target the 5 most dominant errors, which account for 92.04% of possible errors.
Syndrome sets produced by g(x) = 1 + x + x^3 + x^5 + x^6:
• Order of g(x) = 12.
• Total number of distinct syndrome sets: 5.
• 5 distinct, non-overlapping syndrome sets are utilized to distinguish the 5 target errors.
• The cyclic generator polynomial is used to design a cyclic (12,6) code of rate 0.5 and codeword length 12.

Target error polynomial        Syndrome period
1                              12
1 + x                          12
1 + x + x^2                    6
1 + x + x^2 + x^3              12
1 + x + x^2 + x^3 + x^4        12

Single occurrences of error types {1,2,4,5} are decoded without ambiguity.
Via channel reliability information and the polarity of the data support, error type 3 can be decoded reliably.
Unique syndrome-error mapping via channel side information.
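The syndrome periods in the table can be checked with a few lines of GF(2) polynomial arithmetic (a sketch; polynomials are encoded as integer bitmasks, bit i = coefficient of x^i):

```python
def gf2_mod(a: int, m: int) -> int:
    """Remainder of GF(2) polynomial a modulo m (bit i = coeff of x^i)."""
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

g = 0b1101011  # g(x) = 1 + x + x^3 + x^5 + x^6, order 12

def syndrome_set(e: int, n: int = 12) -> set:
    """Syndromes of all cyclic shifts x^i * e(x) mod g(x), i = 0..n-1."""
    return {gf2_mod(e << i, g) for i in range(n)}

def period(e: int, n: int = 12) -> int:
    """Smallest p >= 1 with x^p * e(x) = e(x) mod g(x)."""
    s0 = gf2_mod(e, g)
    return next(p for p in range(1, n + 1) if gf2_mod(e << p, g) == s0)

targets = [0b1, 0b11, 0b111, 0b1111, 0b11111]  # the 5 target error polynomials
print([period(e) for e in targets])            # [12, 12, 6, 12, 12]

# The five syndrome sets are non-overlapping: 12+12+6+12+12 = 54 distinct syndromes.
sets = [syndrome_set(e) for e in targets]
print(len(set().union(*sets)))                 # 54
```

The non-overlap of the five sets is what allows the decoder to identify the error type from the syndrome alone.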
7
8. Hakim, April 2009
EPCC Design: Target List = 10 most dominant errors
Target the 10 most dominant errors, which account for 99.67% of possible errors.
g1(x) = 1 + x^2 + x^3 + x^5 + x^6 + x^8:
• Order of g1(x) = 18.
• 10 distinct syndrome sets.
• Cyclic generator polynomial used to design a cyclic (18,10) code of rate 0.56 and codeword length 18.
g2(x) = 1 + x^3 + x^5 + x^8:
• Order of g2(x) = 30.
• 10 distinct syndrome sets.
• Cyclic generator polynomial used to design a cyclic (30,22) code of rate 0.73 and codeword length 30.
Unique syndrome-error mapping via channel side information.

Target error polynomial                              Period g1(x)   Period g2(x)
1                                                    18             30
1 + x                                                9              15
1 + x + x^2                                          18             10
1 + x + x^2 + x^3                                    9              15
1 + x + x^2 + x^3 + x^4                              18             6
1 + x + x^2 + x^3 + x^4 + x^5                        9              5
1 + x + x^2 + x^3 + x^4 + x^5 + x^6                  18             30
1 + x + x^2 + x^3 + x^4 + x^5 + x^6 + x^7            9              15
1 + x + x^2 + x^3 + x^4 + x^5 + x^6 + x^7 + x^8      2              10
1 + x + x^2 + x^3 + x^4 + x^5 + x^6 + x^7 + x^8 + x^9   9           3
8
9. Hakim, April 2009
Approaches to Increase Code Rate of EPCC
Syndrome sets produced by g(x) = 1 + x^3 + x^5 + x^8:
• Order of g(x): 30 → (30, 22) base cyclic code.
• 10+3 extra distinct, non-overlapping syndrome sets are utilized to distinguish 13
target error patterns.
Multiply g(x) by a degree-6 primitive polynomial that is not a factor of any target error polynomial:
• The periods of the syndrome sets produced by g′(x) are extended accordingly.
• The extended code is a (630,616) code of rate 0.98.
Tensor product coding paradigm:
• Short codeword length (outer ECC symbol length), very high total code rate.
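The (630,616) extension can be sanity-checked numerically. Below, x^6 + x + 1 stands in for the primitive degree-6 polynomial (the slides do not name the one actually used; any primitive degree-6 polynomial coprime to the target error polynomials works the same way). The order of g′(x) = g(x)·p(x) is lcm(30, 63) = 630, and deg g′ = 14 gives the (630, 616) code:

```python
def gf2_mod(a: int, m: int) -> int:
    """Remainder of GF(2) polynomial a modulo m (bit i = coeff of x^i)."""
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def gf2_mul(a: int, b: int) -> int:
    """Carry-less (GF(2)) polynomial product."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def order(g: int, limit: int = 1000) -> int:
    """Smallest n with g(x) dividing x^n + 1, i.e. the order of g(x)."""
    return next(n for n in range(1, limit) if gf2_mod((1 << n) ^ 1, g) == 0)

g = 0b100101001   # g(x) = 1 + x^3 + x^5 + x^8
p = 0b1000011     # p(x) = 1 + x + x^6 (assumed primitive choice, order 63)
gp = gf2_mul(g, p)  # g'(x), degree 14 -> 14 parity bits, (630, 616)

print(order(g), order(p), order(gp))   # 30 63 630
```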
9
11. Hakim, April 2009
WER ML Bound
Word error probability (union bound over the M codewords):

$$P_W \le \frac{1}{M} \sum_{m=1}^{M} \sum_{m' \ne m} Q\!\left( \frac{\lVert x_m - x_{m'} \rVert}{2\sigma} \right)$$

Grouping codeword pairs by their Euclidean distance d_E:

$$P_W \le \frac{1}{M} \sum_{m=1}^{M} \sum_{d_E=1}^{\infty} T_{m,d_E}\, Q\!\left( \frac{d_E}{2\sigma} \right) = \sum_{d_E = d_{min}}^{\infty} T(d_E)\, Q\!\left( \frac{d_E}{2\sigma} \right)$$

[Figure: signal-space sketch of codewords x1 … x4, with T_{1,d_min} nearest neighbors of x1 at d_E^2 = d_min^2.]
Two routes to a lower bound: decrease the number of codewords at the Euclidean minimum distance (turbo codes), or increase the Euclidean minimum distance (trellis-coded modulation).
11
12. Hakim, April 2009
BER ML Bound
• Bit error probability:

$$P_b \le \sum_{d_E = d_{min}}^{\infty} \frac{T(d_E)\, \bar{w}(d_E)}{K}\, Q\!\left( \frac{d_E}{2\sigma} \right)$$

• Average number of codeword sequences of channel noiseless outputs separated by d_E:

$$T(d_E, C) = \sum_{d=1}^{N} A(d)\, \Pr(d_E \mid d, C)$$

• Average Hamming distance between information words that generate codewords of channel noiseless outputs separated by d_E:

$$\bar{w}(d_E, C) = \frac{1}{T(d_E, C)} \sum_{d=1}^{N} \bar{A}(d)\, A(d)\, \Pr(d_E \mid d, C)$$

where A(d) is the number of codeword sequences of weight d and \bar{A}(d) is the average input Hamming weight of codewords of weight d. Combining:

$$P_b \le \sum_{d_E = d_{min}}^{\infty} \sum_{d=1}^{N} \frac{\bar{A}(d)\, A(d)\, \Pr(d_E \mid d, C)}{K}\, Q\!\left( \frac{d_E}{2\sigma} \right)$$
12
14. Hakim, April 2009
Partial Response Class-1 (PR1) Channel (1+D)
[Trellis of the PR1 (1+D) channel: two states (0, 1); branch labels input/output 0/0, 1/1, 0/1, 1/2. A non-dominant error event accumulates per-branch distances d_E^2 = 1, …, 1, 4 for a total d_E^2 = 2 + 4·bcr (bcr = number of branch crossings), while the dominant error event accumulates d_E^2 = 1, …, 1, 0 for a total d_E^2 = 2.]
14
15. Hakim, April 2009
Dicode Channel (1-D)
[Trellis of the dicode (1−D) channel: two states (0, 1); branch labels input/output 0/0, 1/1, 0/−1, 1/0. The dominant error event accumulates per-branch distances d_E^2 = 1, …, 1, 0 for a total d_E^2 = 2, while a non-dominant error event accumulates d_E^2 = 1, …, 1, 4 for a total d_E^2 = 2 + 4·bcr.]
15
16. Hakim, April 2009
A Dicode multiple error occurrence
m: # of error patterns in an EPCC sub-code.
[Figure: a trellis path containing m separate error patterns; merging branches correspond to zero error Hamming weight, and d_E^2 is the sum of the squared per-branch Euclidean distances.]
16
17. Hakim, April 2009
Distribution of dE given d and m
$$\Pr(d_E \mid d, m) = \begin{cases} \left(\dfrac{1}{2}\right)^{\frac{d-m}{2}} \dbinom{(d-m)/2}{(d_E^2 - 2m)/4}, & \dfrac{d_E^2 - 2m}{4} > 0 \text{ integer},\; m_{dom} < m \\[6pt] \left(\dfrac{1}{2}\right)^{\frac{d-m_{dom}}{2}}, & d_E^2 = 2 m_{dom},\; m_{dom} = m \\[6pt] 0, & \text{otherwise} \end{cases}$$

where d is the Hamming weight of the multiple error, m the number of error patterns, and m_dom the number of dominant error patterns; (d_E^2 − 2m)/4 is the number of crossing branches, and the binomial coefficient counts the ways the crossing branches can occur.
17
18. Hakim, April 2009
Enumerators for error Hamming weights
[Figure: enumerator chain. A weight-i information sequence (length K) maps through A(d, i) to an RSCC codeword of weight d (length N), which is interleaved by Π and split into L EPCC sub-codes of length N_c each, with sub-code weights d_i and d = Σ_{i=1}^{L} d_i; an EPCC codeword carrying d_i input ones has weight d_i·N_c + P_c. Within a sub-code, arrangements of d_i ones into m_i error patterns are counted by C(N_c − d_i, m_i)·C(d_i − 1, m_i − 1) for closed error patterns, and by the analogous open-pattern terms C(N − d_i, m_i)·C(d_i − 1, m_i − 1).]
18
19. Hakim, April 2009
Enumerators for error Hamming weights
Factor the Euclidean distance enumerator through the sub-code weights and error-pattern counts:

$$\Pr(d_E \mid d) = \Pr(d_E \mid d, d_1, \dots, d_L) \times \Pr(d_1, \dots, d_L \mid d)$$
$$= \Pr(d_E \mid d, d_1, \dots, d_L, m, m_1, \dots, m_L) \times \Pr(m, m_1, \dots, m_L \mid d_1, \dots, d_L, d) \times \Pr(d_1, \dots, d_L \mid d)$$

i.e., the distribution of Euclidean distance given the sub-code Hamming weights, the distribution of the sub-code multiple error patterns given the sub-code Hamming weights, and the distribution of the sub-code Hamming weights given the outer-code Hamming weight. Hence

$$\Pr(d_E \mid d) = \sum_{d_1=0}^{d} \cdots \sum_{d_L=0}^{d} \Pr(d_1, \dots, d_L \mid d) \times \sum_{m=1}^{d} \sum_{m_1=0}^{d_1} \cdots \sum_{m_L=0}^{d_L} \Pr(d_E \mid d, m) \prod_{i=1}^{L} \Pr(m_i \mid d_i)$$

subject to $d = \sum_{i=1}^{L} d_i$ and $m = \sum_{i=1}^{L} m_i$.
19
20. Hakim, April 2009
Enumerators for error Hamming weights
Joint distribution of the sub-code Hamming weights:

$$\Pr(d_1, \dots, d_L \mid d) = \frac{\binom{N_c}{d_1} \binom{N_c}{d_2} \cdots \binom{N_c}{d_L}}{\binom{N}{d}}$$

(numerator: # of sub-code words of each weight d_i; denominator: # of interleaved RSCC words of weight d).

Distribution of the number of error patterns per sub-code:

$$\Pr(m_i \mid d_i) = \frac{\binom{N_c - d_i}{m_i} \binom{d_i - 1}{m_i - 1}}{\binom{N_c}{d_i}}$$

(# of ways m_i error patterns are arranged in sub-code i, times # of ways d_i is decomposed into m_i error patterns, over # of sub-code words of weight d_i).
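The joint weight distribution above normalizes to one by the Vandermonde convolution (the sum over d_1 + … + d_L = d of ∏ C(N_c, d_i) equals C(N, d) when N = L·N_c). A quick exact check with small, arbitrary parameters:

```python
from fractions import Fraction
from itertools import product
from math import comb

def pr_subcode_weights(dvec, d, Nc, N):
    """Pr(d_1..d_L | d): hypergeometric split of d ones over L sub-codes."""
    num = 1
    for di in dvec:
        num *= comb(Nc, di)
    return Fraction(num, comb(N, d))

L, Nc, d = 3, 6, 5          # small illustrative parameters
N = L * Nc
total = sum(
    pr_subcode_weights(dvec, d, Nc, N)
    for dvec in product(range(d + 1), repeat=L)
    if sum(dvec) == d
)
print(total)   # 1
```

Exact rational arithmetic (`Fraction`) avoids any floating-point doubt in the normalization check.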
20
21. Hakim, April 2009
Euclidean distance enumerator of TE-EPCC when EPCC is turned off:

$$\Pr(d_E \mid d) = \frac{1}{\binom{N}{d}} \sum_{d_1=0}^{d} \cdots \sum_{d_L=0}^{d} \; \sum_{m=1}^{d} \sum_{m_1=0}^{d_1} \cdots \sum_{m_L=0}^{d_L} \left(\frac{1}{2}\right)^{\frac{d-m}{2}} \binom{(d-m)/2}{(d_E^2 - 2m)/4} \prod_{j=1}^{L} \binom{N_c - d_j}{m_j} \binom{d_j - 1}{m_j - 1}$$

subject to $d = \sum_{i=1}^{L} d_i$, $m = \sum_{i=1}^{L} m_i$, and $d_E^2 - 2m \equiv 0 \bmod 4$.

Euclidean distance enumerator of all correctable TE-EPCC codewords:

$$\Pr(d_E \mid d, C) = \frac{1}{\binom{N}{d}} \sum_{d_1=0}^{\min(d, d_c)} \cdots \sum_{d_L=0}^{\min(d, d_c)} \; \sum_{m=1}^{d} \sum_{m_1=0}^{\min(d_1, m_c)} \cdots \sum_{m_L=0}^{\min(d_L, m_c)} \left(\frac{1}{2}\right)^{\frac{d-m}{2}} \prod_{j=1}^{L} \binom{N_c - d_j}{m_j} \binom{d_j - 1}{m_j - 1}$$

subject to $d = \sum_{i} d_i$, $m = d_E^2 / 2$, and $m = \sum_{i} m_i$.

Euclidean distance enumerator of non-correctable TE-EPCC codewords:

$$\overline{\Pr}(d_E \mid d, C) = \Pr(d_E \mid d) - \Pr(d_E \mid d, C)$$
21
22. Hakim, April 2009
Interleaver Gain Exponent of TE
Approximations:

$$\frac{(N-d+1)^d}{d!} < \binom{N}{d} < \frac{N^d}{d!}$$

$$\binom{N-d}{m-\mu} = \frac{m-\mu+1}{N-d+1} \binom{N-d+1}{m-\mu+1} < \frac{(N-d+1)^{m-\mu+1}}{(m-\mu+1)!} < \frac{N^{m-\mu+1}}{(m-\mu+1)!}$$

$$Q\!\left(\frac{d_E}{2\sigma}\right) \le \frac{1}{2}\, e^{-\frac{d_E^2}{4\sigma^2}}$$

Modified TE bound:

$$P_b < \frac{1}{2K} \sum_{d_E=1}^{\infty} \sum_{d=2}^{d_T} \sum_{\mu=0}^{1} \sum_{\substack{m=1 \\ d_E^2 - 2m + \mu \,\equiv\, 0 \bmod 4}}^{d} B_{d_E,d,m,\mu}\, N^{m-\mu-d}\, e^{-\frac{d_E^2}{4\sigma^2}}$$

$$B_{d_E,d,m,\mu} = \bar{A}(d)\, A(d)\, \frac{d!}{(m-\mu)!} \left(\frac{1}{2}\right)^{\frac{d-m}{2}} \binom{(d-m)/2}{(d_E^2 - 2m + \mu)/4} \binom{d-1}{m-1}$$
22
23. Hakim, April 2009
Interleaver Gain Exponent of
TE-EPCC(dc = 10, mc = 3, L = 1)
$$P_b < \frac{1}{2K} \sum_{d_E=1}^{\infty} e^{-\frac{d_E^2}{4\sigma^2}} \sum_{\mu=0}^{1} \left[ \sum_{d=2}^{d_T} \sum_{\substack{m=1 \\ d_E^2 - 2m + \mu \,\equiv\, 0 \bmod 4}}^{d} B_{d_E,d,m,\mu}\, N^{m-\mu-d} \;-\; \sum_{d=2}^{\min(d_T,\, d_c)} \sum_{\substack{m=1 \\ d_E^2 = 2m - \mu}}^{\min(d,\, m_c)} B_{d_E,d,m,\mu}\, N^{m-\mu-d} \right]$$
23
24. Hakim, April 2009
Interleaver Gain Exponent of
TE-EPCC(dc = 10, mc = 3, L = 1)
[Slides 24–27: plots of the interleaver gain exponent of TE-EPCC(dc = 10, mc = 3, L = 1); figures omitted.]
24
28. Hakim, April 2009
Interleaver Gain Exponent of
TE-EPCC(dc = 10, mc = 3, L = 1)
Asymptotic BER bound of conventional TE:

$$P_b < \frac{\bar{A}(2)A(2)}{2KN^2} e^{-\frac{1}{4\sigma^2}} + \frac{\bar{A}(2)A(2)}{2KN} e^{-\frac{1}{2\sigma^2}} + \frac{\bar{A}(2)A(2)}{KN} e^{-\frac{3}{4\sigma^2}} + \frac{\bar{A}(2)A(2)}{2K} e^{-\frac{1}{\sigma^2}} + \frac{3\bar{A}(3)A(3)}{2KN} e^{-\frac{5}{4\sigma^2}} + \frac{\bar{A}(3)A(3)}{2K} e^{-\frac{3}{2\sigma^2}} + \frac{2\bar{A}(4)A(4)}{KN} e^{-\frac{7}{4\sigma^2}} + O(\cdot)$$

Asymptotic BER bound of TE-EPCC(dc = 10, mc = 3, L = 1):

$$P_b < \frac{155925\,\bar{A}(10)A(10)}{8KN^{11}} e^{-\frac{1}{4\sigma^2}} + \frac{155925\,\bar{A}(10)A(10)}{8KN^{10}} e^{-\frac{1}{2\sigma^2}} + \frac{779625\,\bar{A}(10)A(10)}{2KN^{10}} e^{-\frac{3}{4\sigma^2}} + \frac{779625\,\bar{A}(10)A(10)}{4KN^{9}} e^{-\frac{1}{\sigma^2}} + \frac{\bar{A}(2)A(2)}{2KN^{2}} e^{-\frac{5}{4\sigma^2}} + \frac{\bar{A}(2)A(2)}{2KN} e^{-\frac{3}{2\sigma^2}} + \frac{2\bar{A}(4)A(4)}{KN} e^{-\frac{7}{4\sigma^2}} + O(\cdot)$$
28
29. Hakim, April 2009
“Spectral Thinning” of TE-EPCC
[Plot: log T(d_E) versus d_E^2 (1–25) for precoded Dicode TE, unprecoded Dicode TE, and unprecoded Dicode TE-EPCC, showing the thinned distance spectrum of TE-EPCC.]
• TE: K = 4096, punctured R = 8/9, (31, 33) RSCC.
• TE-EPCC: (L = 7) EPCC, mc = 3, dc = 10.
• EPCC sub-code: (630, 616), R = 0.98.
29
30. Hakim, April 2009
Precoded TE
[Block diagram of precoded TE: convolutional encoder (RSCC) → interleaver Π → 1/(1⊕D) precoder → Dicode channel (1−D); trellis branch labels 0/0, 1/1, 1/−1, 0/0.]
Unprecoded Dicode: trellis paths corresponding to different code bits are at 0 Euclidean distance → long error events have a high probability of generating low Euclidean distance errors.
Precoded Dicode: trellis paths corresponding to different code bits accumulate Euclidean distance → ONLY low Hamming weight errors generate low Euclidean distance errors.
The average number of Hamming weight-2 errors that generate d_E^2 = 2 is higher for precoded than for unprecoded Dicode.
• Hence unprecoded TE achieves a lower error floor than precoded TE.
30
38. Hakim, April 2009
Perpendicular Magnetic Recording (PMR) channel
Hyperbolic tangent transition response for perpendicular recording:

$$h(t) = \tanh\!\left( \frac{2t}{0.5795 \cdot \pi \cdot pw_{50}} \right)$$

(H. Sawaguchi et al., "Performance analysis of modified PRML channels for perpendicular recording systems," J. Magn. Magn. Mater., 2001.)
Channel density: D_s ≡ pw50 / T
• pw50: −50% to 50% width of the transition response.
• T: symbol period.
38
39. Hakim, April 2009
PMR Continuous-time Channel Model
Continuous-time channel model:
• h(t): hyperbolic tangent transition response, h(t) = tanh(λt).
• s(t): dibit response, s(t) = (1/2)[h(t) − h(t − T)].
• h′(t): first-order time derivative of h(t), h′(t) = λ sech^2(λt).
• p(t): front-end band-limiting filter (7th-order Butterworth filter).
• n(t): additive white Gaussian noise.
• j_k: random transition position jitter.
• Definition of the derivative energy: E_dt = ∫_{−∞}^{∞} [h′(t)]^2 dt.
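For the tanh transition, E_dt has the closed form 4λ/3 (since ∫ sech^4(u) du = 4/3 over the real line); a quick numerical check of the definition above, with an arbitrary λ:

```python
import math

def h_prime(t: float, lam: float) -> float:
    """Derivative of the tanh transition response: lam * sech^2(lam * t)."""
    return lam / math.cosh(lam * t) ** 2

def edt_numeric(lam: float, half_span: float = 50.0, n: int = 200_000) -> float:
    """Midpoint Riemann-sum approximation of Edt = integral of h'(t)^2 dt."""
    dt = 2 * half_span / n
    return sum(h_prime(-half_span + (k + 0.5) * dt, lam) ** 2 for k in range(n)) * dt

lam = 0.7  # arbitrary transition slope for the check
print(edt_numeric(lam), 4 * lam / 3)   # both ≈ 0.9333
```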
39
40. Hakim, April 2009
PMR Discrete-time Channel Model
Discrete-time channel model:
• s_k ≡ [s(t) ∗ p(t)]_{t=kT},  h_k^j ≡ [h′(t) ∗ p(t)]_{t=kT},  h_k^n ∗ h_{−k}^n = [p(t) ∗ p(−t)]_{t=kT}.
• Variance of the additive white Gaussian noise (AWGN) sequence n_k: σ_n^2 = N_o/2.
• Variance of the jitter noise j_k: σ_j^2 = M_o/2.
• Spectral height of the mixed noise: N_α = N_o + M_o; N_α signifies α% jitter noise, i.e., α = M_o/(N_o + M_o) × 100.
• SNR can be defined as SNR ≡ E_dt / N_α.
40
41. Hakim, April 2009
Partial Response Maximum Likelihood System
Channel density: 1.1
• Mixed noise: 10% AWGN and 90% jitter noise, DC full dibit response.
• Target response: 1+0.85D , optimized to whiten noise for the all-transition input.
[Figure: four panels. (1) Discrete-time dibit response at Ds = 1.1; (2) 15-tap RLS equalizer taps; (3) dibit vs. target frequency responses (dB vs. fT); (4) target vs. equalized dibit frequency responses (dB vs. fT).]
41
42. Hakim, April 2009
EPCC-TE
Encoder: x_k → RS encoder (t = 20) → (11,10) convolutional encoder (RSCC) → Π → (630,616) EPCC encoder → write head/medium/read head, modeled as a 1+0.9D PR channel with 90% media noise + 10% electronic noise.
Decoder (EPCC-enhanced turbo equalizer): the SISO equalizer (4-state BCJR, 1 PDNP tap) exchanges extrinsic information λ_k^e with the EPCC SISO list decoder (4-state BCJR, rate ≈ 1) through Π / Π^-1; the (11,10) RSCC SISO decoder participates in the loop, and the RS decoder (t = 20) delivers x̂_k.
42
47. Hakim, April 2009
An EPC- Tensor Product Code
Chaichanavong and Siegel (2006) proposed a tensor product code based on a
single parity code + BCH as an inner code for an outer RS ECC.
• Suitable for low-density longitudinal recording channels where dominant errors
have odd weight, of the form [+2], [+2, −2, +2].
• Code combined with MTR for perpendicular recording channels.
• Tensor product code has a much higher rate than a short parity code.
• Parity code on the symbol level – fewer multiple error occurrences.
To achieve performance gains with respect to QLDPC, we investigate a
tensor product code based on a short inner multi-parity code (EPCC) and an
outer QLDPC ECC.
• The EPC multi-parity code corrects any single occurrence of a dominant targeted
error in a tensor symbol.
• An EPCC sequence of syndromes forms a codeword of QLDPC.
• EPCC is decoded jointly with the channel using post-processing techniques that
generate a soft "syndrome-codeword" to be decoded by the QLDPC non-binary
message-passing decoder.
• Via channel side information, EPCC has a unique syndrome per dominant-error
single occurrence. A list decoding scheme increases the decoding sphere radius
of EPCC to target multiple error occurrences.
47
48. Hakim, April 2009
Introduction to Tensor Product Codes
Jack K. Wolf, “On Codes Derivable form the Tensor Product of check Matrices,” IT 1965.
Jack K. Wolf, "On Codes Derivable from the Tensor Product of Check Matrices," IT 1965.
Constituent codes:
• Binary (3,1) single-error-correcting code:
  H1 = [1 0 1; 0 1 1], whose columns read in GF(2^2) as [1 α α^2].
• Doubly-extended t=1 (5,3) RS code over GF(2^2):
  H2 = [1 0 1 α α^2; 0 1 1 α^2 α].
The tensor product parity check matrix over GF(2^2) is
  H_GF(2^2) = [1 α α^2 0 0 0 1 α α^2 α α^2 1 α^2 1 α;
               0 0 0 1 α α^2 1 α α^2 α^2 1 α α α^2 1]
and its binary expansion (tensor symbols of 3 bits each):
  H_GF(2) = [101 000 101 011 110;
             011 000 011 110 101;
             000 101 101 110 011;
             000 011 011 101 110]
1. This binary (15,11) tensor product code corrects any single tensor-symbol error provided it contains a single bit error.
2. The binary constituent code has rate 1/3 and codeword length 3 bits.
3. The tensor product code has rate 11/15 ≈ 0.73 and codeword length 15 bits.
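A quick check of the binary expansion above: the 15 columns of H_GF(2) are exactly the 15 distinct nonzero 4-bit vectors, so every single-bit error has a unique nonzero syndrome (rows are encoded below as 15-bit integers, MSB = first column):

```python
# Rows of H_GF(2) from the slide, one 15-bit integer per row (MSB = column 1).
H_ROWS = [
    0b101000101011110,
    0b011000011110101,
    0b000101101110011,
    0b000011011101110,
]

def syndrome(error_vec: int) -> int:
    """4-bit syndrome of a 15-bit error vector (H * e over GF(2))."""
    s = 0
    for row in H_ROWS:
        parity = bin(row & error_vec).count("1") & 1
        s = (s << 1) | parity
    return s

# Syndromes of the 15 single-bit errors: all distinct and nonzero.
synds = [syndrome(1 << j) for j in range(15)]
print(sorted(synds) == list(range(1, 16)))   # True
```

Since the columns exhaust all 15 nonzero 4-bit vectors, the parity check matrix has rank 4, confirming the (15,11) parameters.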
48
49. Hakim, April 2009
Encoding of Tensor Product Codes
Encoding of a tensor product code of a binary code C1: (n1, k1) and a non-binary code C2: (n2, k2):
• Divide the n1·k2 information bits into k2 columns.
• Encode each column using C1.
• Convert the intermediate syndromes to GF(2^p1).
• Encode the intermediate non-binary syndromes using C2.
• Convert back to GF(2).
• Use the remaining p2·k1 information bits and the calculated syndrome bits to compute the p1·p2 parity bits using back substitution and a systematic H1.
• Result: if C1 and C2 are linear-time encodable, then C1 ⊗ C2 is linear-time encodable!
[Figure: the transmitted codeword as a binary array with its row of intermediate syndromes over GF(2^p1), e.g. (α^2, 1, 0, α, α^2).]
49
50. Hakim, April 2009
An EPC-RS Tensor Product code
EPC-RS constituent codes:
• (18,10) EPCC over GF(2), rate = 0.556, 8 parity bits, with
  H = [1 α α^2 α^3 α^4 α^5 α^6 α^7 α^133 α^134 α^96 α^90 α^82 α^236 α^234 α^217 α^92 α^93] read over GF(2^8).
• (255,195) RS over GF(2^8), rate = 0.765, t = 30, 60 parity symbols.
The EPC-RS tensor product code is a binary (4590,4110) code, rate = 0.895, 480 parity bits.
• Codeword length = 18×255 bits; parity = 8×60 bits.
[Figure: a codeword as 255 tensor symbols of 18 bits each.]
The tensor code can correct any combination of 30 or fewer tensor symbol errors,
given that each 18-bit tensor symbol has a single occurrence of a dominant
error that is correctable by EPCC.
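The code parameters follow directly from the constituent codes; a two-line check (n1 and the parity count from the inner EPCC, n2 and the parity-symbol count from the outer RS code):

```python
n1, p1 = 18, 8            # inner EPCC: (18,10) -> 8 parity bits per tensor symbol
n2, rs_parity = 255, 60   # outer RS: (255,195) -> 60 parity symbols

n = n1 * n2               # tensor codeword length in bits
parity = p1 * rs_parity   # total parity bits
k = n - parity
print(n, parity, k, round(k / n, 3))   # 4590 480 4110 0.895
```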
50
51. Hakim, April 2009
Hard Decoding of RS-EPC tensor product code
The received ML word is processed as 255 tensor symbols of 18 bits each:
• Compute the EPCC binary syndromes and convert them to GF(2^8), giving EPCC syndromes (1) … (255).
• RS hard decoding of the 255 syndrome symbols in GF(2^8) (or any list/soft decoding algorithm).
• Convert the decoded RS symbols back to corrected binary EPCC syndromes; their difference from the received syndromes gives the EPCC error syndromes (1) … (255).
• From each error syndrome, find the most likely single and double dominant errors.
• Add the likely dominant errors to the ML word.
51
52. Hakim, April 2009
RS-EPC TPPC Residual Errors
Non-targeted single error occurrences (e.g., an error polynomial e13(x) spanning 13 bits of an 18-bit tensor symbol).
More than double multiple error occurrences (e.g., separate 2-bit, 1-bit, and 3-bit patterns inside one 18-bit tensor symbol).
Double error occurrences that have a zero EPCC syndrome, since RS hands EPCC the syndromes of errors as its input.
Residual errors can be corrected by an outer RS code of small correction power, since the number of residual tensor symbols in error is small.
EPCC can work as an error-locating code: erasure decoding of the outer RS.
52
53. Hakim, April 2009
EPC-RS Hard Decoder
Tensor product hard decoder:
• The received sequence r_k is detected by a binary Viterbi detector (output ĉ_k); q_k = r_k − ĉ_k ∗ h_k feeds the post-processing stage.
• EPCC syndrome generator → RS decoder (t = 27, GF(2^8)) → modulo-2 correction of the detected word.
• EPCC list decoder (25 test words) → outer RS decoder (t = 3, GF(2^10)) → decoded bits b̂_k.
53
54. Hakim, April 2009
Semi-Analytic & Fully-Analytic Multinomial SER estimations
Step 1: Estimate P1, …, Pm
• Simulation:
1. slide a window of size m symbols over the channel detector’s simulated hard output
and count occurrences of 1 to m consecutive symbol errors.
2. divide the m cumulative sums by the number of simulated symbols.
• Analytic:
1. P1=∑ (probability of 1 dominant error-pattern that spans 1 symbol).
2. P2=∑ (probability of 1 dominant error-pattern that spans 2 symbols)
+ ∑(probability of 2 dominant error-patterns encapsulated in two separate
consecutive symbols).
3. P3=∑ (probability of 1 dominant error-pattern that spans 3 symbols)
+ ∑ (probability of 1 dominant error-pattern that spans 2 consecutive symbols)
×(probability of 1 dominant error-pattern that spans a 3rd succeeding
symbol )
+ ∑ (probability of 3 dominant error-patterns encapsulated in 3 separate
consecutive symbols).
Step 2:

$$P_W \ge 1 - \sum_{s_0} \sum_{s_1} \cdots \sum_{s_m} \frac{n!}{s_0!\, s_1! \cdots s_m!}\, P_0^{s_0} P_1^{s_1} \cdots P_m^{s_m}$$

over all $s_i$ with $\sum_{i=0}^{m} i\, s_i \le t$ and $\sum_{i=0}^{m} s_i = n$, where $P_0 = 1 - \sum_{i=1}^{m} P_i$.
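The Step 2 bound is straightforward to evaluate by enumerating the correctable occupancy vectors (s_0, …, s_m); a small sketch with illustrative probabilities:

```python
from itertools import product
from math import factorial

def multinomial_wer(n: int, t: int, P: list) -> float:
    """Step-2 multinomial word-error estimate.

    P = [P1, ..., Pm]: probabilities of 1..m consecutive symbol errors.
    Sums the probability of every correctable composition (sum i*s_i <= t)
    and returns 1 minus it.
    """
    m = len(P)
    P0 = 1.0 - sum(P)
    ok = 0.0
    for s in product(range(n + 1), repeat=m):          # s = (s1, ..., sm)
        if sum(s) > n or sum((i + 1) * si for i, si in enumerate(s)) > t:
            continue
        s0 = n - sum(s)
        coef = factorial(n)
        for si in (s0, *s):
            coef //= factorial(si)                     # exact at each step
        term = coef * P0 ** s0
        for Pi, si in zip(P, s):
            term *= Pi ** si
        ok += term
    return 1.0 - ok

P = [1e-3, 2e-4]                 # illustrative P1, P2 only
print(multinomial_wer(50, t=2 * 50, P=P))   # ≈ 0: every pattern is correctable here
```

With t at its maximum (m·n), the sum telescopes to (P0 + P1 + … + Pm)^n = 1, a useful sanity check.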
54
55. Hakim, April 2009
Symbol Error Event Probabilities
Single-level RS Vs EPC-RS
• ISI channel 5+6D−D^3, AWGN.
• Shortened (450, 450−2T) RS over GF(2^10).
• (18, 10) EPCC + shortened (250, 250−2T_tp) RS over GF(2^8).
[Plot: symbol error event probabilities P1, P2, P3 versus SNR (6.5–9.5 dB) for 10-bit and 18-bit symbols.]
55
57. Hakim, April 2009
Difference of Minimum SNR Required for SFR=10-13
Single-level RS Vs EPC-RS
• ISI channel 5+6D−D^3, AWGN.
• Shortened (450, 450−2T) RS over GF(2^10).
• (18, 10) EPCC + shortened (250, 250−2T_tp) RS over GF(2^8).
[Plot: minSNR(RS) − minSNR(TPRS) in dB versus code rate R (0.5–1.0); the gap ranges roughly from 0.5 to 0.9 dB.]
57
58. Hakim, April 2009
Minimum SNR Required for SFR=10-13
Single-level RS Vs EPC-RS
• ISI channel 5+6D−D^3, AWGN.
• Shortened RS, GF(2^12), R = 0.89.
• (24, 14) EPCC + shortened RS over GF(2^10), total R = 0.89.
[Plot: minimum SNR (dB) required for SFR = 10^-13 versus sector size (1/2K–4K), roughly 7.2–9 dB.]
58
60. Hakim, April 2009
Non-binary LDPC: Complexity and Performance
Davey and MacKay (1998) have shown that the near Shannon limit
performance of binary LDPC codes in AWGN can be significantly enhanced
by a move to fields of higher order.
For monotonic improvement in performance with field order, the parity
check matrix for short blocks has to be very sparse:
• Column weight 3 codes over GF(q) exhibit worse BER as q increases.
• Column weight 2 codes over GF(q) exhibit monotonically lower BER as q
increases.
• Results confirmed by Hu, Eleftheriou, and Arnold (2005): optimum degree
sequence favors a regular graph of degree-2 in all symbol nodes.
Chang and Cruz (2008) studied the decoding time complexity of non-binary
LDPC for PR channels
• Moving from binary to non-binary LDPC results in a gain of around 1 dB.
• Size of the Galois field does not affect the decoding complexity.
• The decoding complexity ratios of non-binary to binary LDPC-coded system
can be as high as 7.42 (in the number of FLP ops).
• Time complexity ratios are always smaller than the ratios of FLP ops.
60
61. Hakim, April 2009
Soft Decoding of EPC-LDPC tensor product code
Decoder structure:
• The received sequence r_k is detected by a binary Viterbi detector (output ĉ_k); q_k = r_k − ĉ_k ∗ h_k feeds a bank of error-pattern correlators, Correlator(e1) … Correlator(e_lmax).
• For each tensor symbol i (1 ≤ i ≤ 390), the correlators produce a list of likely errors and reliabilities, giving channel syndrome reliabilities γ(Syn_i^ch = j), j ∈ GF(2^6), 0 ≤ j ≤ α^63.
• These are convolved with the EPCC list-decoder extrinsic reliabilities γ(Syn_i^e = j) and mapped to LDPC bit-level a priori information λ_k, then decoded by the FFT-based SPA over GF(2^6), with LDPC iterations nested inside global iterations.
• An RS decoder (t = 6) produces the final decoded bits b̂_k.
61
62. Hakim, April 2009
p.m.f. of Tensor Symbol i
[Bar plot: Pr[Syndrome(i) = j] over j = 0…255, with mass concentrated on a few syndrome values, e.g. j = 233 (≈0.8), j = 66, j = 127.]
How to generate the syndrome p.m.f. for each tensor symbol?
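One way to form such a p.m.f. (a sketch of the general idea, not the exact decoder of the thesis): treat each candidate error's correlator metric as a log-likelihood, accumulate the metrics per syndrome value, and normalize over the field:

```python
import math
from collections import defaultdict

def syndrome_pmf(candidates, q: int = 64):
    """Turn (syndrome_value, metric) pairs into a p.m.f. over GF(q) syndromes.

    candidates: list of (j, metric), where j is the syndrome of a candidate
    error pattern and metric is its correlator log-likelihood (assumed form).
    Unlisted syndromes share a small residual probability floor.
    """
    acc = defaultdict(float)
    for j, metric in candidates:
        acc[j] += math.exp(metric)
    floor = 1e-9                       # residual mass for unlisted syndromes
    probs = [acc.get(j, 0.0) + floor for j in range(q)]
    z = sum(probs)
    return [p / z for p in probs]

# Hypothetical correlator output for one tensor symbol:
pmf = syndrome_pmf([(33, 2.1), (7, 0.4), (33, 0.9), (0, 1.5)])
assert abs(sum(pmf) - 1.0) < 1e-12
print(max(range(64), key=lambda j: pmf[j]))   # 33
```

The residual floor keeps every syndrome representable, so the non-binary message-passing decoder never sees a hard zero.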
62
63. Hakim, April 2009
SFR Comparison of Single-level LDPC Systems
• (4550, 4095) GF(2)-LDPC, col. wt. = 5, cycle size = 91, binary BCJR, 10×50 TE.
• (570, 510) GF(2^8)-LDPC (4560 bits), col. wt. = 2, cycle size = 15 symbols, GF(2^8)-BCJR, 0×50 TE.
• (760, 684) GF(2^6)-LDPC (4560 bits), col. wt. = 2, cycle size = 19 symbols, GF(2^6)-BCJR, 0×50 TE.
• (775, 700) GF(2^6)-LDPC (4650 bits), col. wt. = 3, cycle size = 25 symbols, GF(2^6)-BCJR, 0×50 TE.
[Plot: sector error rate (10^0–10^-5) versus Eb/No (3.8–5.8 dB); legend: GF(256) LDPC col. wt. 2, GF(64) LDPC col. wt. 2, GF(64) LDPC col. wt. 3, GF(2) LDPC col. wt. 5.]
63
67. Hakim, April 2009
Thesis Contributions
Proposed a channel matched turbo equalization scheme based on the SISO list
decoder of EPCC, termed TE-EPCC.
Demonstrated the “Spectral Thinning” effect achieved by incorporating EPCC in
TE of the Dicode channel.
Derived an upper bound on the ML BER of TE-EPCC.
Proposed a turbo-product code based on EPCC.
Proposed an error-pattern correcting tensor product code that is linear time
encodable.
Derived a fully analytic multinomial method to estimate the SER of RS over ISI channels.
Designed a two-level coding scheme based on the tensor product of EPCC and
QLDPC that achieves a better complexity-performance trade-off compared to
single-level QLDPC.
Designed a soft iterative decoder of T-EPCC-QLDPC.
67