The document discusses low-density parity-check (LDPC) codes. It begins with a brief history of LDPC codes, invented by Gallager in 1960 but rediscovered in the 1990s. It then discusses linear block codes and how they can be represented by generator and parity check matrices. The key properties of LDPC codes are described, including their sparse parity check matrix and regular or irregular structure. Decoding of LDPC codes using Tanner graphs and the hard-decision bit-flipping algorithm is explained. Finally, some applications of LDPC codes in communication systems and data storage are given.
1. Group 02-Dark Knight
Anand Kotadiya_131003
Devanshi Piprottar_
Jaysheel Shah_
Manindar Sambhi_
Suhani Ladani_131057
Institute of Engineering and Technology
Linear Algebra_2014
LDPC Bit Flipping Decoding
2. Outline
History of LDPC Codes
Linear Block Codes
Properties of LDPC Codes
Decoding of LDPC Code
Tanner Graph
Hard Decision Decoding (Bit Flipping)
Applications of LDPC Code
References
3. History of LDPC Codes
Low Density Parity Check Code
A class of Linear Block Codes
Invented by Robert Gallager in his 1960 MIT Ph.D.
dissertation.
Ignored for a long time due to
the high computational complexity required
the introduction of Reed-Solomon codes
Concatenated RS and convolutional codes were considered
perfectly suitable for error control coding
LDPC codes were rediscovered in the mid-1990s by R. Neal
and D. MacKay at the University of Cambridge.
LDPC codes are arguably the best error correction codes in
existence at present.
4. Linear Block Codes
A linear code can be described by a
generator matrix G or a parity check matrix H.
An (N, K) block encoder accepts a K-bit input and
produces an N-bit codeword
c = xG, and cH^T = 0
where c = codeword, x = information vector,
G = generator matrix, H = parity check matrix
G can be found from H by Gaussian elimination
In systematic form, H = [P^T : I]
and the generator matrix is G = [I : P].
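The relations c = xG and cH^T = 0 can be sketched in a few lines of Python. The P matrix below (giving the systematic (7,4) Hamming code) is an illustrative assumption; the slides do not fix a concrete code.

```python
# Systematic encoding c = xG and the parity check cH^T = 0 over GF(2).
# P is an assumed example (the (7,4) Hamming code), not from the slides.
P = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
]
K, M = 4, 3                 # K info bits, M parity bits, N = K + M
G = [[int(i == j) for j in range(K)] + P[i] for i in range(K)]      # G = [I : P]
H = [[P[j][i] for j in range(K)] + [int(i == j) for j in range(M)]  # H = [P^T : I]
     for i in range(M)]

def gf2_row_times_matrix(vec, mat):
    """Row vector times matrix over GF(2)."""
    return [sum(v * row[j] for v, row in zip(vec, mat)) % 2
            for j in range(len(mat[0]))]

x = [1, 0, 1, 1]                          # information bits
c = gf2_row_times_matrix(x, G)            # codeword c = xG
syndrome = [sum(ci * hi for ci, hi in zip(c, row)) % 2 for row in H]  # cH^T
print(c, syndrome)                        # syndrome is all zeros for a valid codeword
```

Because G is in the systematic form [I : P], the first K bits of c are the information bits x themselves, and the last M bits are the parity bits.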
5. Properties of LDPC Codes
LDPC codes are defined by a sparse parity-check matrix
The parity check matrix H used for decoding is sparse:
very few 1's in each row and column.
A large minimum distance is expected.
Regular LDPC codes
H is m x n, where (n - m) information bits are encoded into
n-bit codewords
H contains exactly Wc 1's per column and exactly Wr = Wc(n/m) 1's
per row, where Wc << m.
This definition implies that Wr << n.
Wc ≥ 3 is necessary for good codes.
If the number of 1's per column or row is not constant, the code
is an irregular LDPC code.
Irregular LDPC codes usually outperform regular LDPC codes.
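The regularity condition above is easy to check in code. The small H below is an illustrative assumption (practical LDPC matrices are far larger, and Wc ≥ 3 would be used for a good code; this toy has Wc = 2).

```python
# Toy regularity check: constant column weight Wc, constant row weight
# Wr = Wc * (n/m). H is an assumed small example, not from the slides.
H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]
m, n = len(H), len(H[0])
col_weights = [sum(row[i] for row in H) for i in range(n)]
row_weights = [sum(row) for row in H]
wc, wr = col_weights[0], row_weights[0]
# Regular: every column has the same weight, and so does every row.
is_regular = len(set(col_weights)) == 1 and len(set(row_weights)) == 1
print(is_regular, wc, wr, wr == wc * n // m)   # True 2 4 True
```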
6. Decoding of LDPC Codes
General decoding of linear block codes
c is a valid codeword if and only if
cH^T = 0
For a binary symmetric channel (BSC), the received word is c
plus an error vector e
The decoder needs to find e and flip the corresponding bits
The decoding algorithm is based on linear algebra
Graph-based algorithms
Sum-product algorithm for general graph-based codes
MAP algorithm for trellis graph-based codes
Bit flipping (a message-passing algorithm) for bipartite
graph-based codes
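The BSC model above can be made concrete: since cH^T = 0 for any valid codeword, the syndrome of the received word r = c XOR e equals eH^T, so it depends only on the error pattern. H and c below are an assumed (7,4) Hamming example, not taken from the slides.

```python
# Syndrome decoding idea on a BSC: rH^T = (c XOR e)H^T = eH^T.
# H and c are illustrative assumptions (a (7,4) Hamming example).
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]
c = [1, 0, 1, 1, 0, 1, 0]             # a valid codeword (cH^T = 0)
e = [0, 0, 0, 0, 1, 0, 0]             # single-bit error from the channel
r = [ci ^ ei for ci, ei in zip(c, e)] # received word r = c XOR e

def syndrome(H, word):
    """Parity-check results over GF(2); all zeros means a valid codeword."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

print(syndrome(H, r), syndrome(H, e))   # identical: the syndrome sees only e
```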
7. Tanner Graph
Tanner showed that a parity check matrix can be represented
effectively by a bipartite graph, now called a Tanner
graph.
A bipartite graph is an undirected graph whose nodes can be
separated into two classes, where edges only connect two nodes
not residing in the same class
A Tanner graph has two classes of nodes
variable nodes (bit or symbol nodes)
check nodes (function nodes)
The Tanner graph is drawn according to the following
rule
check node j is connected to variable node i
whenever element hji in H is 1.
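The rule above translates directly to code: one edge per nonzero entry of H. The matrix below is an assumed small example, not given on the slide.

```python
# Build the Tanner graph of H: check node f_j is adjacent to variable
# node c_i exactly when h_ji = 1. H is an illustrative assumption.
H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]
# Edge list of the bipartite graph: (check node j, variable node i).
edges = [(j, i) for j, row in enumerate(H) for i, h in enumerate(row) if h]
# Neighbours of each check node, i.e. the bits in its parity check.
check_neighbours = {j: [i for i, h in enumerate(row) if h]
                    for j, row in enumerate(H)}
print(len(edges), check_neighbours[0])   # 16 edges; f_0 checks c_1, c_3, c_4, c_7
```

The number of edges equals the number of 1's in H, which is why a sparse H gives a sparse graph and cheap message passing.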
9. Decoding of LDPC Codes
Decoding complexity grows as O(n^2)
Even with sparse matrices, direct decoding performs poorly if
the block length n gets very large
So iterative decoding algorithms are used
These algorithms perform local calculations and pass the local
results around as messages
This step is typically repeated several times
Iterative decoding of sparse codes has been observed to
perform very close to the optimal decoder
Hard Decision Decoding (Bit Flipping)
Assume the codeword c = [1 0 0 1 0 1 0 1] and the
received word c' = [1 1 0 1 0 1 0 1]
1. All v-nodes ci send a “message” to their c-nodes fj containing
the bit they believe to be the correct one for them. At this
stage the only information a v-node ci has is the corresponding
received i-th bit of c'.
10. Hard Decision Decoding(Bit Flipping)
2. Every check node fj calculates a response to every
connected variable node. The response message
contains the bit that fj believes to be the correct one for
this v-node ci, assuming that the other v-nodes
connected to fj are correct.
11. Hard Decision Decoding(Bit Flipping)
3. The v-nodes receive the messages from the
check nodes and use this additional information
to decide whether their originally received bit is correct. A
simple way to do this is a majority vote.
4. Go to step 2; stop when all parity checks are satisfied
or a maximum number of iterations is reached.
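The steps above can be sketched in a compact variant: instead of exchanging explicit messages, each bit counts how many of its parity checks fail and the worst bits are flipped. The 4x8 parity-check matrix H is an illustrative assumption (the slides give only c and c'); it was chosen so that c = [1 0 0 1 0 1 0 1] is a valid codeword.

```python
# Hard-decision bit-flipping decoding of the received word from the slides.
# H is an assumed (8,4) example matrix; the slides do not specify one.
H = [
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
]

def syndrome(H, word):
    """Parity-check results (0 = check satisfied)."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def bit_flip_decode(H, word, max_iters=10):
    word = list(word)
    for _ in range(max_iters):
        s = syndrome(H, word)
        if not any(s):                      # all checks satisfied: done
            return word
        # For each bit, count the failed checks it participates in.
        fails = [sum(s[j] for j, row in enumerate(H) if row[i])
                 for i in range(len(word))]
        # Flip the bit(s) involved in the most failed checks.
        worst = max(fails)
        word = [b ^ 1 if f == worst else b for b, f in zip(word, fails)]
    return word

received = [1, 1, 0, 1, 0, 1, 0, 1]   # c' from the slides (bit 2 in error)
decoded = bit_flip_decode(H, received)
print(decoded)                        # recovers c = [1, 0, 0, 1, 0, 1, 0, 1]
```

Here the flipped second bit fails both of its checks while every other bit fails at most one, so a single iteration corrects the error.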
13. Applications of LDPC Code
Today the Internet, communication links and digital circuits are huge sources
of shared data. To transmit confidential data, it must first be converted
into a codeword (encoding).
But while it travels through the transmission channel, noise can corrupt it,
so the receiver may get wrong information.
So we need linear block codes, which encode the data before it is transmitted
over the channel; if errors occur in the received data, the decoder should be
capable of finding and correcting them.
Some real-life applications of LDPC codes
1. COMMUNICATION SYSTEMS/INTERNET
Teletext systems, satellite communication, broadcasting (radio and digital TV),
telecommunications (digital phones)
Ethernet, cellular wireless
2. INFORMATION SYSTEMS
Logic circuits, semiconductor memories.
Data storage: magnetic disks (HD), optical discs (CD-ROM).
3. AUDIO AND VIDEO SYSTEMS
Digital sound (CD) and digital video (DVD)
14. References
“Lecture 10 on LDPC Codes”, Information Electronics
Engineering, Ewha Womans University
Ryan, W., “An Introduction to Low Density Parity Check Codes”,
UCLA Short Course Notes, April 2001
R. Gallager, “Low-Density Parity-Check Codes”, IRE Trans.
Information Theory, Jan. 1962