Hossein Taghavi : Codes on Graphs


Hossein Taghavi's talk organized by the Knowledge Diffusion Network at Sharif University

  1. 1. Codes on Graphs: Introduction and Recent Advances Mohammad Hossein Taghavi In Collaboration with Prof. Paul H. Siegel University of California, San Diego E-mail: mtaghavi@ucsd.edu
  2. 2. Outline <ul><li>Introduction to Coding Theory </li></ul><ul><ul><li>Shannon’s Channel Coding Theorem </li></ul></ul><ul><ul><li>Error-Correcting Codes – State-of-the-Art </li></ul></ul><ul><li>Low-Density Parity-Check Codes </li></ul><ul><ul><li>Design and Message-Passing Decoding </li></ul></ul><ul><ul><li>Performance </li></ul></ul><ul><li>Linear Programming Decoding </li></ul><ul><ul><li>LP Relaxation </li></ul></ul><ul><ul><li>Properties </li></ul></ul><ul><ul><li>Improvements </li></ul></ul><ul><li>Conclusion and Open Problems </li></ul>
  3. 3. Outline <ul><li>Introduction to Coding Theory </li></ul><ul><ul><li>Shannon’s Channel Coding Theorem </li></ul></ul><ul><ul><li>Error-Correcting Codes – State-of-the-Art </li></ul></ul><ul><li>Low-Density Parity-Check Codes </li></ul><ul><ul><li>Design and Message-Passing Decoding </li></ul></ul><ul><ul><li>Performance </li></ul></ul><ul><li>Linear Programming Decoding </li></ul><ul><ul><li>LP Relaxation </li></ul></ul><ul><ul><li>Properties </li></ul></ul><ul><ul><li>Improvements </li></ul></ul><ul><li>Conclusion and Open Problems </li></ul>
  4. 4. A Noisy Communication System [Shannon's block diagram: information source → transmitter → signal → channel (with noise source) → received signal → receiver → destination]
  5. 5. Common Channels <ul><li>Binary Erasure Channel BEC( ε ): each bit is erased (received as ?) with probability ε , and received correctly with probability 1− ε . </li></ul><ul><li>Binary Symmetric Channel BSC( p ): each bit is flipped with probability p , and received correctly with probability 1− p . </li></ul><ul><li>Additive White Gaussian Noise Channel </li></ul>
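As a rough illustration of these two channels (a Python sketch, not part of the original slides; the function names are mine):

```python
import random

def bsc(bits, p, rng=None):
    """Binary symmetric channel BSC(p): flip each bit independently w.p. p."""
    rng = rng or random.Random()
    return [b ^ (rng.random() < p) for b in bits]

def bec(bits, eps, rng=None):
    """Binary erasure channel BEC(eps): erase each bit (None = '?') w.p. eps."""
    rng = rng or random.Random()
    return [None if rng.random() < eps else b for b in bits]
```

On average, a fraction p of the bits are flipped by the BSC and a fraction ε are erased by the BEC.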
  6. 6. Coding <ul><li>We add redundancy to the message for protection against noise: k information bits are encoded into n > k code bits, giving code rate R = k / n . </li></ul><ul><li>Coding Theory: </li></ul><ul><li>Design Good Codes (mappings) </li></ul><ul><li>Design Good Encoding and Decoding Algorithms </li></ul>[Block diagram: Source → Encoder → Channel → Decoder → Sink]
  7. 7. Shannon’s Coding Theorem <ul><li>Every communication channel is characterized by a single number C , called the channel capacity . </li></ul><ul><li>If a code has rate R > C , then the probability of error in decoding that code is bounded away from 0, regardless of block length. </li></ul><ul><li>For any information rate R < C and any δ > 0 , there exists a code of length n ( δ ) and rate R such that the probability of error under maximum-likelihood decoding is at most δ . </li></ul><ul><li>Bottom line: Reliable communication is possible if and only if R < C . </li></ul>
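As a concrete instance of the capacity C: for the BSC( p ) it is C = 1 − H2( p ), with H2 the binary entropy function. A small sketch (Python, mine, not from the slides):

```python
from math import log2

def h2(p):
    """Binary entropy function H2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Capacity of the BSC(p): C = 1 - H2(p) bits per channel use."""
    return 1.0 - h2(p)
```

A noiseless channel ( p = 0) has capacity 1 bit per use; a completely noisy one ( p = 1/2) has capacity 0.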
  8. 8. Designing Binary Codes <ul><li>Coding generally involves mapping each of the 2^k vectors in {0,1}^k to some vector (codeword) in {0,1}^n . </li></ul><ul><li>A good mapping places the codewords as far apart as possible in Hamming distance. </li></ul><ul><li>The proof of Shannon’s theorem is based on (long) random codes and optimal decoding. </li></ul><ul><ul><li>i.e. a random mapping from {0,1}^k to {0,1}^n </li></ul></ul><ul><ul><li>The decoder uses a look-up table (practically infeasible) </li></ul></ul><ul><li>Algebraic coding theory studies highly-structured codes </li></ul><ul><ul><li>e.g. Reed-Solomon codes have efficient decoders but cannot get very close to the capacity of binary channels. </li></ul></ul>
  9. 9. State-of-the-Art <ul><li>Solution </li></ul><ul><ul><li>Long, structured, “pseudorandom” codes </li></ul></ul><ul><ul><li>Practical, near-optimal decoding algorithms </li></ul></ul><ul><li>Examples </li></ul><ul><ul><li>Turbo codes (1993) </li></ul></ul><ul><ul><li>Low-density parity-check (LDPC) codes (1960, 1999) </li></ul></ul><ul><li>State-of-the-art </li></ul><ul><ul><li>Turbo codes and LDPC codes have brought Shannon limits to within reach on a wide range of channels. </li></ul></ul>
  10. 10. Evolution of Coding Technology from Trellis and Turbo Coding, Schlegel and Perez, IEEE Press, 2004 LDPC codes
  11. 11. Outline <ul><li>Introduction to Coding Theory </li></ul><ul><ul><li>Shannon’s Channel Coding Theorem </li></ul></ul><ul><ul><li>Error-Correcting Codes – State-of-the-Art </li></ul></ul><ul><li>Low-Density Parity-Check Codes </li></ul><ul><ul><li>Design and Message-Passing Decoding </li></ul></ul><ul><ul><li>Performance </li></ul></ul><ul><li>Linear Programming Decoding </li></ul><ul><ul><li>LP Relaxation </li></ul></ul><ul><ul><li>Properties </li></ul></ul><ul><ul><li>Improvements </li></ul></ul><ul><li>Conclusion and Open Problems </li></ul>
  12. 12. Binary Linear Codes <ul><li>Codewords are chosen to satisfy a number of (binary) linear constraints. </li></ul><ul><li>Parameters of binary linear block code C </li></ul><ul><ul><li>k = number of information bits </li></ul></ul><ul><ul><li>n = number of code bits </li></ul></ul><ul><ul><li>R = k / n </li></ul></ul><ul><ul><li>d min = minimum distance </li></ul></ul><ul><li>There are many ways to describe C </li></ul><ul><ul><li>Codebook (list) </li></ul></ul><ul><ul><li>Parity-check matrix / generator matrix </li></ul></ul><ul><ul><li>Graphical representation (“Tanner graph”) </li></ul></ul>
  13. 13. Linear Block Codes on Graphs <ul><li>A binary linear code C is the collection of points x^(i) in {0,1}^n that satisfy </li></ul><ul><ul><li>H x^(i) = 0 mod 2 , where H is an m × n binary matrix. </li></ul></ul><ul><ul><li>If H is full rank, there are 2^(n−m) codewords. </li></ul></ul><ul><li>The code can be described by a Tanner Graph: </li></ul><ul><ul><li>Neighborhood N_j : The set of nodes directly connected to node j . </li></ul></ul><ul><ul><li>Degree d_j : The size of the neighborhood N_j . </li></ul></ul><ul><li>Example : </li></ul>[Tanner graph: n = 7 variable nodes x_1 , …, x_7 and m check nodes; one check enforces x_1 + x_3 + x_6 = 0 mod 2 ]
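Codeword membership is just the parity-check equation above. A Python sketch (not from the slides), using a hypothetical 3 × 7 matrix whose first row encodes the slide's example check x_1 + x_3 + x_6 = 0; the other two rows are made up for illustration:

```python
# Hypothetical parity-check matrix; row 1 encodes x1 + x3 + x6 = 0 mod 2.
H = [[1, 0, 1, 0, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def is_codeword(H, x):
    """True iff H x = 0 (mod 2), i.e. every parity check is satisfied."""
    return all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H)
```

With m = 3 independent checks and n = 7 bits, this code has 2^(7−3) = 16 codewords.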
  14. 14. Low-Density Parity-Check Codes <ul><li>Proposed by Gallager (1960) </li></ul><ul><li>“ Sparseness” of matrix and graph descriptions </li></ul><ul><ul><li>Number of 1’s in H grows linearly with block length </li></ul></ul><ul><ul><li>Number of edges in Tanner graph grows linearly with block length </li></ul></ul><ul><li>“ Randomness” of construction in: </li></ul><ul><ul><li>Placement of 1’s in H </li></ul></ul><ul><ul><li>Connectivity of variable and check nodes </li></ul></ul><ul><li>Iterative, message-passing decoder </li></ul><ul><ul><li>Simple “local” decoding at nodes </li></ul></ul><ul><ul><li>Iterative exchange of information (message-passing) </li></ul></ul><ul><li>Other families of graph-based codes: </li></ul><ul><ul><li>Repeat Accumulate Codes </li></ul></ul><ul><ul><li>Fountain Codes </li></ul></ul>
  15. 15. Code Construction <ul><li>LDPC codes are generally constructed randomly, according to certain distributions on the degrees of variable and check nodes. </li></ul><ul><ul><li>Once the node degrees are selected, connections are made randomly </li></ul></ul><ul><li>Regular LDPC: Each check node has degree d c , and each variable node has degree d v </li></ul><ul><li>Irregular LDPC: The degrees of variable and check nodes need not be constant. (generalization of regular LDPC) </li></ul><ul><li>Ensemble defined by “node degree distribution” functions. </li></ul>
  16. 16. Optimal Bit Decoding <ul><li>Consider transmission of binary inputs X ∈ {±1} over a memoryless channel using linear code C . </li></ul><ul><li>Assume codewords are transmitted equiprobably. </li></ul><ul><li>Maximum a posteriori (MAP) bit decoding rule minimizes the bit error probability: each bit is decoded to the value with the larger posterior marginal, obtained by summing the posterior over codewords. </li></ul><ul><li>The posterior involves an indicator term that is 1 if x is a codeword of C , and 0 otherwise. </li></ul>
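For tiny codes, bitwise MAP decoding can be written out by brute force: marginalize the (unnormalized) posterior over all codewords. This exponential-time sketch (Python, illustrative only; a BSC is used as the memoryless channel, and the small matrix H is the same hypothetical example as above) computes what message passing computes efficiently on cycle-free graphs:

```python
from itertools import product

# Hypothetical small parity-check matrix (same toy example as before).
H = [[1, 0, 1, 0, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def map_bit_decode(H, y, p):
    """Brute-force bitwise MAP decoding over a BSC(p): for each bit,
    sum the (unnormalized) posterior over all codewords with that bit
    value.  Exponential in n -- illustration only."""
    n = len(y)
    codewords = [x for x in product([0, 1], repeat=n)
                 if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0
                        for row in H)]
    decoded = []
    for i in range(n):
        w = [0.0, 0.0]
        for x in codewords:
            d = sum(a != b for a, b in zip(x, y))   # Hamming distance to y
            w[x[i]] += p ** d * (1 - p) ** (n - d)  # BSC likelihood
        decoded.append(0 if w[0] >= w[1] else 1)
    return decoded
```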
  17. 17. Belief Propagation <ul><li>If the Tanner graph is cycle-free, there is a message-passing approach to bit-wise MAP decoding. </li></ul><ul><li>The nodes of the Tanner graph exchange updated messages, i.e. conditional bit distributions, denoted u = [ u (1), u (-1)] . </li></ul><ul><li>The initial messages presented by the channel to the variable nodes are the channel likelihoods of the two bit values. </li></ul><ul><li>The variable-to-check and check-to-variable message updates are determined by the “sum-product” update rule. </li></ul>[Figure: Tanner graph with variable nodes x_1 , …, x_7 ]
  18. 18. Sum-Product Update Rule <ul><li>Variable to check </li></ul><ul><li>Check to Variable </li></ul><ul><ul><li>where f is the parity-check indicator function. </li></ul></ul>[Figure: a node with incoming messages v_1 , …, v_{d−1} and channel message v_0 , producing outgoing message u ]
  19. 19. Log-Likelihood Formulation <ul><li>The sum-product update is simplified using log-likelihoods </li></ul><ul><li>For a message u , define L ( u ) = log( u (1) / u (−1)) . </li></ul><ul><li>Variable-to-check update: L ( v ) is the channel LLR plus the sum of the incoming LLRs from the other neighboring checks. </li></ul><ul><li>Check-to-variable update (the “tanh rule”): L ( v ) = 2 tanh^{−1} ( ∏_i tanh( L ( v_i )/2) ) over the other neighboring variables. </li></ul>[Figure: edge e with incoming LLRs L ( v_1 ), …, L ( v_{d−1} ) and the channel LLR, producing outgoing LLR L ( u ) ]
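The two LLR-domain updates can be sketched directly (Python, not from the slides; a minimal sketch assuming messages are already in LLR form):

```python
from math import tanh, atanh

def var_to_check(L_channel, incoming):
    """Variable-node update: channel LLR plus the other incoming check LLRs."""
    return L_channel + sum(incoming)

def check_to_var(incoming):
    """Check-node update ('tanh rule'): L = 2 * atanh(prod_i tanh(L_i / 2))."""
    prod = 1.0
    for L in incoming:
        prod *= tanh(L / 2.0)
    return 2.0 * atanh(prod)
```

Note the sign of the check output is the product of the incoming signs, and its magnitude is dominated by the least reliable incoming message.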
  20. 20. Performance Analysis <ul><li>In the spirit of Shannon, we can analyze the performance of message-passing decoding on ensembles of LDPC codes with specified degree distributions ( λ , ρ ) . </li></ul><ul><li>The results provide criteria for designing LDPC codes that transmit reliably with MP decoding at rates very close to the Shannon capacity. </li></ul><ul><li>The analysis can assume the all-0’s codeword is sent. </li></ul><ul><li>The results are asymptotic , i.e. hold for very large block lengths. </li></ul>
  21. 21. Key Results <ul><li>Concentration </li></ul><ul><ul><li>With high probability, the performance of ℓ rounds of BP decoding on a randomly selected code converges to the ensemble average performance as the length n → ∞ . </li></ul></ul><ul><li>Convergence to cycle-free performance </li></ul><ul><ul><li>The average performance of ℓ rounds of BP decoding on the ( n , λ , ρ ) ensemble converges to the performance on a graph with no cycles of length ≤ 2 ℓ as the length n → ∞ . </li></ul></ul>
  22. 22. Cycle-free Performance: Density Evolution <ul><li>We can get asymptotic results by looking at the cycle-free case. </li></ul><ul><li>The cycle-free performance can be computed using density evolution . </li></ul><ul><li>For an ensemble of LDPC codes: </li></ul><ul><ul><li>the incoming messages to each node are i.i.d. </li></ul></ul><ul><ul><li>the channel observations, L ( u_0 ) , at different variable nodes are i.i.d. </li></ul></ul>
  23. 23. Density Evolution, cont. <ul><li>So at each iteration, we can compute the pdf of the outgoing messages in terms of those of the incoming messages. </li></ul><ul><ul><li>Having the pdf of the LLRs after ℓ iterations, we can compute the bit error probability. </li></ul></ul><ul><li>Threshold calculation </li></ul><ul><ul><li>There is a threshold channel parameter p *( λ , ρ ) such that, for any “better” channel parameter p , the cycle-free error probability approaches 0 as the number of iterations ℓ → ∞. </li></ul></ul><ul><li>For some channels, we can optimize the degree distributions of irregular LDPC codes for the best p * . </li></ul><ul><li>This technique has produced rate-1/2 LDPC ensembles with thresholds within 0.0045 dB of the Shannon limit on the AWGN channel! </li></ul><ul><li>A rate-1/2 code with block length 10^7 provided BER of 10^−6 within 0.04 dB of the Shannon limit! </li></ul>
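On the BEC the message densities collapse to a single erasure probability, so density evolution for a regular ( d_v , d_c ) ensemble is one recursion: x ← ε (1 − (1 − x)^(d_c−1))^(d_v−1). A threshold finder by bisection (a Python sketch, mine, not from the slides):

```python
def bec_threshold(dv, dc, iters=2000, tol=1e-7):
    """Bisection for the BEC density-evolution threshold of a regular
    (dv, dc) LDPC ensemble.  One DE iteration maps the erasure fraction
    x -> eps * (1 - (1 - x)**(dc - 1))**(dv - 1)."""
    def converges(eps):
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                return True
        return False

    lo, hi = 0.0, 1.0
    while hi - lo > 1e-5:
        mid = (lo + hi) / 2.0
        if converges(mid):
            lo = mid   # decoding succeeds: threshold is higher
        else:
            hi = mid   # decoding fails: threshold is lower
    return lo
```

For the regular (3, 6) ensemble (rate 1/2) this gives a threshold near 0.429, below the capacity limit of 1 − R = 0.5 for the BEC.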
  24. 24. Finite-length Performance <ul><li>Remaining questions: </li></ul><ul><ul><li>How does the waterfall region scale with block length? </li></ul></ul><ul><ul><li>Where does the error floor occur? </li></ul></ul><ul><li>In practice, the algorithm sometimes does not converge. </li></ul><ul><ul><li>We can view MP decoding as an optimization algorithm that sometimes gets trapped in local minima. </li></ul></ul><ul><ul><li>These events are analytically characterized for the BEC, but are still not fully understood in general. </li></ul></ul><ul><li>There are some promising results obtained by techniques from the statistical mechanics literature. </li></ul>
  25. 25. Outline <ul><li>Introduction to Coding Theory </li></ul><ul><ul><li>Shannon’s Channel Coding Theorem </li></ul></ul><ul><ul><li>Error-Correcting Codes – State-of-the-Art </li></ul></ul><ul><li>Low-Density Parity-Check Codes </li></ul><ul><ul><li>Design and Message-Passing Decoding </li></ul></ul><ul><ul><li>Performance </li></ul></ul><ul><li>Linear Programming Decoding </li></ul><ul><ul><li>LP Relaxation </li></ul></ul><ul><ul><li>Properties </li></ul></ul><ul><ul><li>Improvements </li></ul></ul><ul><li>Conclusion and Open Problems </li></ul>
  26. 26. Linear Programming Decoding: Motivation <ul><li>Linear Programming (LP) decoding is an alternative to MP decoding for Turbo and LDPC codes. </li></ul><ul><li>Its performance has connections to that of MP decoding. </li></ul><ul><li>Advantages: </li></ul><ul><ul><li>Amenability to finite-length analysis </li></ul></ul><ul><ul><li>Potential for improvement </li></ul></ul><ul><ul><li>Detectable failures </li></ul></ul><ul><li>Major drawback: </li></ul><ul><ul><li>Higher complexity than MP </li></ul></ul><ul><li>Sometimes seen as a tool to characterize MP decoding. </li></ul><ul><ul><li>The figure is courtesy of Jon Feldman. </li></ul></ul>
  27. 27. Maximum-likelihood (ML) Decoding of Binary Linear Codes <ul><li>ML (MAP) Block Decoding: Find the sequence , x , that maximizes the likelihood of the received vector. </li></ul><ul><li>Optimization with a linear objective function, but nonlinear constraints: </li></ul><ul><ul><ul><li>Minimize γ^T x </li></ul></ul></ul><ul><ul><ul><li>Subject to x ∈ C </li></ul></ul></ul>[Block diagram: encoder maps bits {0,1} to channel symbols {1, −1}; the decoder sees the channel output]
  28. 28. Feldman’s LP Relaxation of ML <ul><li>LP decoding: </li></ul><ul><ul><li>Each binary parity-check constraint Σ_{i ∈ N_j} x_i = 0 mod 2 is replaced by the linear inequalities Σ_{i ∈ S} x_i − Σ_{i ∈ N_j \ S} x_i ≤ | S | − 1 , for every odd-sized subset S ⊆ N_j . </li></ul></ul><ul><ul><li>Each binary condition x_i ∈ {0,1} is relaxed to a box constraint 0 ≤ x_i ≤ 1 . </li></ul></ul><ul><li>These linear inequalities define the fundamental polytope , P . </li></ul><ul><li>Linear optimization: </li></ul><ul><ul><ul><li>Minimize γ^T x </li></ul></ul></ul><ul><ul><ul><li>Subject to x ∈ P </li></ul></ul></ul>
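The relaxed inequalities for a single check can be enumerated directly; a degree- d check has 2^(d−1) odd-sized subsets, hence 2^(d−1) inequalities. A Python sketch (mine, not from the slides):

```python
from itertools import combinations

def check_inequalities(N):
    """All of Feldman's inequalities for one parity check with neighborhood N:
    for every odd-sized subset S of N,
        sum_{i in S} x_i - sum_{i in N \\ S} x_i <= |S| - 1.
    Each is returned as (coeffs, rhs), with coeffs a dict {index: +1 or -1}."""
    ineqs = []
    for r in range(1, len(N) + 1, 2):          # odd subset sizes only
        for S in combinations(N, r):
            coeffs = {i: (1 if i in S else -1) for i in N}
            ineqs.append((coeffs, r - 1))
    return ineqs
```

Every even-parity 0/1 assignment satisfies all of these inequalities, while every odd-parity one violates at least one of them.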
  29. 29. The Fundamental Polytope <ul><li>Properties: </li></ul><ul><ul><li>The integer vertices of P are exactly the codewords. </li></ul></ul><ul><ul><li>There are additional “non-integral” vertices, too: Pseudo-Codewords (PCW) </li></ul></ul><ul><li>γ determines a direction to search for the minimum-cost vertex, x . </li></ul><ul><li>Possible algorithm outputs: </li></ul><ul><ul><li>ML codeword (if the solution is integral) </li></ul></ul><ul><ul><li>A non-integral vector (PCW) => declare failure </li></ul></ul><ul><ul><li>We always know if LP finds the ML solution. </li></ul></ul>[Figure: the fundamental polytope with codeword and pseudo-codeword vertices; − γ is the search direction]
  30. 30. Some Performance Results <ul><li>For binary linear codes, WER is independent of the transmitted codeword. </li></ul><ul><li>Motivated by the Hamming weight, we can define the pseudo-weight w_p . </li></ul><ul><li>For AWGN: w_p ( x ) = (Σ_i x_i )^2 / Σ_i x_i ^2 . </li></ul><ul><ul><li>For binary vectors, it is equal to the Hamming weight. </li></ul></ul><ul><li>[Koetter et al.]: If 1^n is transmitted and r is received, the Euclidean distance in the signal space between 1^n and r is given by w_p ( r ). </li></ul><ul><ul><li>So, the performance can be described by the pseudo-weight spectrum. </li></ul></ul><ul><ul><li>The minimum pseudo-weight describes the error floor. </li></ul></ul><ul><li>More results: </li></ul><ul><ul><li>The minimum w_p grows sublinearly in n for regular codes. </li></ul></ul><ul><ul><li>Bounds on the threshold of LP decoding </li></ul></ul>
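The AWGN pseudo-weight formula above is a one-liner, and it is easy to see it reduces to the Hamming weight on 0/1 vectors (Python sketch, mine):

```python
def awgn_pseudoweight(x):
    """AWGN pseudo-weight: (sum_i x_i)^2 / sum_i x_i^2.
    For a 0/1 vector both numerator base and denominator equal the
    Hamming weight w, giving w^2 / w = w."""
    s = sum(x)
    q = sum(xi * xi for xi in x)
    return s * s / q if q else 0.0
```

A fractional pseudo-codeword such as (1/2, 1/2, 1/2, 1/2) can have a larger pseudo-weight (here 4) than its support size might suggest.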
  31. 31. Size of the LP Decoding Problem <ul><li>For every check node with a neighborhood N of size d : </li></ul><ul><ul><li>2^( d −1) constraints of the form Σ_{i ∈ S} x_i − Σ_{i ∈ N \ S} x_i ≤ | S | − 1 </li></ul></ul><ul><li>Total of Σ_j 2^( d_j −1) constraints. </li></ul><ul><li>High-density codes : </li></ul><ul><ul><li>The complexity is exponential in n . </li></ul></ul><ul><ul><li>[Feldman et al.]: An equivalent relaxation with O ( n^3 ) constraints. </li></ul></ul><ul><li>Do we need all the constraints? </li></ul>
  32. 32. Properties <ul><li>Definition: </li></ul><ul><li>A constraint of the form a^T x ≤ b is a cut at a point x̂ if it is violated at x̂ , </li></ul><ul><ul><li>i.e. a^T x̂ > b . </li></ul></ul><ul><li>Theorem 1: At any given point x̂ , at most one of the constraints introduced by each parity check can be violated. </li></ul><ul><li>There is an O ( m d_max ) way to find all these cuts (if any). </li></ul><ul><ul><li>d_max is the maximum check-node degree. </li></ul></ul>
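One common way to exploit Theorem 1 (a Python sketch; the subset-selection heuristic is my reconstruction of the standard approach, not copied from the slides): take S = { i : x̂_i > 1/2 }, fix its parity by toggling the coordinate closest to 1/2, and test that single inequality.

```python
def find_cut(N, xhat):
    """Return the violated Feldman inequality (as the subset S) for one
    check with neighborhood N at point xhat, or None if none is violated.
    By Theorem 1 at most one can be violated, so one candidate suffices."""
    S = {i for i in N if xhat[i] > 0.5}
    if len(S) % 2 == 0:                       # need an odd-sized subset
        j = min(N, key=lambda i: abs(xhat[i] - 0.5))
        S ^= {j}                              # toggle coordinate nearest 1/2
    lhs = sum(xhat[i] for i in S) - sum(xhat[i] for i in N if i not in S)
    if lhs > len(S) - 1 + 1e-9:
        return S
    return None
```

Each check costs O( d ) work (plus the argmin), giving the O( m d_max ) total mentioned on the slide.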
  33. 33. Adaptive LP Decoding <ul><li>Reduce the complexity by decreasing the number of constraints/vertices. </li></ul><ul><ul><li>Do not use a constraint until it is violated. </li></ul></ul><ul><li>Algorithm 1 (Adaptive LP): </li></ul><ul><ul><li>Set up the problem with a minimal number of constraints to guarantee boundedness of the result. </li></ul></ul><ul><ul><li>Find the optimum point x ( k ) by linear programming. </li></ul></ul><ul><ul><li>For each check node, </li></ul></ul><ul><ul><ul><li>Check if this check node introduces a cut; if so, add it to the set of constraints. </li></ul></ul></ul><ul><ul><li>If at least one cut is added, go to step 2; otherwise, we have found the LP solution. </li></ul></ul>
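A minimal sketch of Algorithm 1, assuming SciPy's linprog as the generic LP solver (the cost convention, that gamma is e.g. a vector of negative channel LLRs, and all function names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import linprog

def adaptive_lp_decode(H, gamma, max_rounds=50):
    """Sketch of adaptive LP decoding.  H: 0/1 parity-check matrix
    (2-D numpy array); gamma: cost vector.  Start from the box
    constraints alone; each round, add the violated parity inequality
    (cut) for each check, if any, and re-solve."""
    m, n = H.shape
    A, b = [], []
    x = None
    for _ in range(max_rounds):
        res = linprog(gamma,
                      A_ub=np.array(A) if A else None,
                      b_ub=np.array(b) if b else None,
                      bounds=[(0.0, 1.0)] * n, method="highs")
        x = res.x
        added = False
        for row in H:
            Nj = np.flatnonzero(row)
            if Nj.size == 0:
                continue
            S = {i for i in Nj if x[i] > 0.5}
            if len(S) % 2 == 0:               # need an odd-sized subset
                j = min(Nj, key=lambda i: abs(x[i] - 0.5))
                S ^= {j}
            a = np.zeros(n)
            for i in Nj:
                a[i] = 1.0 if i in S else -1.0
            if a @ x > len(S) - 1 + 1e-7:     # violated -> it is a cut
                A.append(a)
                b.append(len(S) - 1)
                added = True
        if not added:                         # no cuts left: LP optimum
            return x
    return x
```

For a single check x_0 + x_1 + x_2 = 0 mod 2 with costs favoring all-ones, the first round returns (1, 1, 1), one cut is added, and the second round lands on the even-parity vertex (1, 1, 0).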
  34. 34. Upper Bound on the Complexity <ul><li>Theorem 2: Algorithm 1 converges in at most n iterations. </li></ul><ul><li>Proof Outline : The final solution can be determined by n independent constraints κ_1 , …, κ_n . </li></ul><ul><ul><li>Each intermediate solution, x ( k ) , must violate at least one of the κ_i ’s. </li></ul></ul><ul><ul><li>Hence, there are at most n intermediate solutions/iterations. </li></ul></ul><ul><li>Corollary 2: Algorithm 1 has at most n + nm constraints at the final iteration. </li></ul><ul><ul><li>Proof: We start with n constraints, and each of the m parity checks adds at most one constraint per iteration. </li></ul></ul>
  35. 35. Numerical Results at Low SNR [Plots: fixed length n = 360 and rate R = 0.5 ; fixed length n = 120 and variable-node degree d_v = 3 ] <ul><li>Observation: Adaptive LP decoding converges with O (1) iterations and fewer than 2 constraints per parity check. </li></ul>
  36. 36. Numerical Results: Gain in Running Time <ul><li>Reducing the number of constraints translates into a gain in running time: </li></ul><ul><ul><li>~10^2 times faster than the standard implementation for d_c = 8 . </li></ul></ul><ul><ul><li>Even faster if we use a “warm start” at each iteration. </li></ul></ul><ul><ul><li>The time remains constant even with a high-density code ( d_c = O ( n ) ) </li></ul></ul><ul><ul><li>The LP solver is not making use of the sparsity! </li></ul></ul>[Plot legend: dashed lines d_c = 6 , solid lines d_c = 8 ]
  37. 37. Open Problems <ul><li>Finite-length analysis of ensembles of LDPC codes under LP decoding </li></ul><ul><li>Computing the thresholds under LP decoding </li></ul><ul><li>Design LP solvers that exploit the properties of the decoding problem </li></ul><ul><li>Given a code, find its best Tanner graph representation for LP decoding </li></ul>
  38. 38. Outline <ul><li>Introduction to Coding Theory </li></ul><ul><ul><li>Shannon’s Channel Coding Theorem </li></ul></ul><ul><ul><li>Error-Correcting Codes – State-of-the-Art </li></ul></ul><ul><li>Low-Density Parity-Check Codes </li></ul><ul><ul><li>Design and Message-Passing Decoding </li></ul></ul><ul><ul><li>Performance </li></ul></ul><ul><li>Linear Programming Decoding </li></ul><ul><ul><li>LP Relaxation </li></ul></ul><ul><ul><li>Properties </li></ul></ul><ul><ul><li>Improvements </li></ul></ul><ul><li>Conclusion </li></ul>
  39. 39. Conclusion <ul><li>LDPC codes are becoming the mainstream in coding technology. </li></ul><ul><ul><li>Already implemented in 3G and LAN standards </li></ul></ul><ul><li>Many new applications, beyond LDPC decoding, are being studied for MP algorithms: </li></ul><ul><ul><li>Joint equalization and decoding </li></ul></ul><ul><ul><li>Classical combinatorial problems, e.g. SAT </li></ul></ul><ul><li>Connections to other fields are being made </li></ul><ul><ul><li>Coding theory is being invaded by statistical physicists! </li></ul></ul><ul><li>Many important questions are still open, waiting for you! </li></ul>
  40. 40. Some References <ul><li>R. G. Gallager, Low-Density Parity-Check Codes, M.I.T. Press, Cambridge, MA, 1963. </li></ul><ul><li>T. Richardson and R. Urbanke, Modern Coding Theory, Cambridge University Press (preliminary version available online at Urbanke’s webpage at EPFL). </li></ul><ul><li>Special Issue on Codes on Graphs and Iterative Algorithms, IEEE Transactions on Information Theory, February 2001. </li></ul><ul><li>Thanks! </li></ul>
