UNIT-III
CHANNEL CODING
• Types of transmission errors
• Need for error control coding
• Linear Block Codes (LBC): Description of LBC, Generation
• Syndrome and error detection
• Minimum distance of Linear block code
• Error correction and error detection capabilities
• Standard array and syndrome decoding
• Hamming codes
• Binary cyclic codes (BCC): Description of cyclic codes, encoding
• Decoding and Error correction using shift registers
• Convolution codes: description, encoding
• Decoding - Code tree, state diagram.
UNIT-III
Channel Coding
INTRODUCTION
Channel coding is intended to introduce controlled redundancy in order to provide some
amount of error-detecting and correcting capability to the data being transmitted. This controlled
redundancy helps in detecting erroneously decoded bits and makes it possible to correct the errors
before passing on the data to the source decoder. Channel coding may be used even for conserving
transmitted power, for a given probability of error. Channel coding may be used either for error-
detection or error-correction, depending on the amount of redundancy introduced.
Need for Error Control Coding
The primary communication resources are the transmitted signal power and the channel
bandwidth, which together determine the signal energy per bit-to-noise power density ratio, Eb/No.
This ratio Eb/No uniquely determines the probability of error (Pe), or bit error rate (BER), for a
particular modulation scheme.
The channel-induced noise can introduce errors in the transmitted binary data, i.e., a bit 0
may change to bit 1 or a bit 1 may change to bit 0. The reliability of data transmission is severely
affected by these errors. Accordingly, in practice, the available modulation schemes alone may not
provide an acceptably low error rate. There is also a limit on the maximum achievable value of the
ratio Eb/No. Therefore, for a fixed Eb/No, the only practical option for improving data quality from
a problematic to an acceptable level is to use coding. Another practical motivation for coding is to
reduce the required Eb/No for a fixed bit error rate. The reduction in Eb/No reduces the required
transmitted power. This, in turn, reduces hardware costs by allowing a smaller antenna size.
Channel coding is used for the reliable transmission of digital information over the
channel. Channel coding improves communication performance by enabling the transmitted
signals to better withstand the effects of various channel impairments, such as noise, interference
and fading.
Channel coding methods introduce controlled redundancy in order to provide error
detecting and correcting capability to the data being transmitted. Hence channel coding is also
called error control coding. We shall now see error control coding in detail.
Channel coding increases the transmission bandwidth as the data rate is increased due to
the redundancy introduced. It also increases the system complexity in the form of a channel
encoder at the transmitter and a channel decoder at the receiver.
Types of transmission errors:
Depending upon the nature of the noise, the codewords transmitted through the channel are
affected differently. Hence, there is a possibility that a transmitted bit 0 is received as bit 1, or
vice versa. This is called an error introduced by the noise in the transmitted codeword.
There are mainly two types of errors introduced during data transmission.
1) Random error and
2) Burst error.
Both random errors and burst errors occur in the contents of a message; hence they may also be
referred to as “content errors”. Alternatively, a data block may be lost in the network because it
has been delivered to a wrong destination. This is referred to as a “flow integrity error”.
1. Random Errors: Random errors are caused by Additive White Gaussian Noise (AWGN)
in the channel. Noise affects the transmitted symbols independently. Hence, an error
introduced in a particular interval does not affect the performance of the system in
subsequent intervals. The errors are totally uncorrelated and are therefore also called
independent errors. The channels that are mostly influenced by white Gaussian noise are
satellite and deep-space communication links. The use of forward-error correcting codes is
well suited for these channels.
2. Burst Errors: Burst errors are caused by Impulse noise in the channel. Impulse noise
affects several consecutive bits and errors tend to occur in clusters. Hence the burst errors
are dependent on each other in successive message intervals. The channels that are mostly
influenced by impulse noise are telephone channels and radio channels. In telephone
channels, burst of errors result from impulse noise on circuits due to lightning, and
transients in central office switching equipment. In radio channels, bursts of errors are
produced by atmospherics, multi-path fading, and interferences from other users of the
frequency band. An effective method for error protection over these channels is based on
ARQ method.
3. Compound Errors: In many of the real communication channels, there is a possibility that
both the white Gaussian noise and impulse noise will affect the channel. Hence the errors
introduced will be of both random (independent) and burst errors. Therefore, if there is a
mixture of random and burst errors, then such errors are called as compound errors.
ERROR CONTROL CODING METHODS: There are two main methods used for error control
coding. They are
1) Error detection and retransmission or Automatic Repeat Request (ARQ)
2) Forward acting Error Correction (FEC) (error detection and correction)
Sometimes, a hybrid system employing both FEC and ARQ may be used.
Automatic Repeat Request (ARQ): In ARQ, the receiver detects an error and requests the
transmitter for retransmission; it does not attempt to correct the detected error itself. ARQ requires
a return path, or feedback path, from the receiver to the transmitter and makes use of error detection at the receiver.
Basically there are two types of ARQ. They are
1) Stop-and-wait ARQ: In the stop-and-wait ARQ, the transmitter transmits a codeword and
then waits. On receiving the transmitted codeword, the receiver checks up whether there
are any errors in it. If no errors are detected, the receiver sends an ‘acknowledgement’
(ACK) signal through the return or feedback path. On receipt of an acknowledgement
(ACK) signal, the transmitter transmits the next codeword. In case, one or more errors are
detected in the received codeword, the receiver sends a negative acknowledgement (NAK)
to the transmitter, which, on receipt of the NAK, retransmits the same codeword that was
sent earlier.
Fig 3.1: Stop-and-wait ARQ
Disadvantage: A serious drawback of the stop-and-wait system is that the time interval
between two successive transmissions is slightly greater than the round trip delay. So, in
satellite channels, in which the round trip delay is quite large, use of stop-and-wait ARQ
will very much degrade the transmission efficiency.
Advantage: These ARQ systems are very simple and so they are used on terrestrial
microwave links as the round-trip delay is very small in these links.
2) Continuous ARQ: Continuous ARQ systems are of two types:
(a) Go back-N ARQ systems: In a Go back-N ARQ system, the transmitter sends the message
continuously without waiting for an ACK signal from the receiver. However, if the receiver detects
an error in say the kth message, a NAK signal is sent to the transmitter indicating that the kth
message is in error. The transmitter, on receiving the NAK signal, goes back to the kth codeword
and starts transmitting all the codewords from the kth onwards. The Go back-N ARQ is quite
useful in satellite links in which the round trip delay is quite large. But buffering is generally its
greatest drawback.
Fig 3.2: Go-back-N ARQ with N=7
(b) Selective repeat ARQ systems: In a selective repeat ARQ system, the transmitter goes on
sending the messages one after the other without waiting for an ACK. In case the receiver detects
an error in the kth codeword, it informs the transmitter indicating that the kth word is in error. The
transmitter then immediately sends the kth word and then resumes transmission of the messages
in a sequential order starting from where it broke the sequence in order to send the kth word.
Fig 3.3: Selective-Repeat ARQ
From the throughput efficiency point of view, the selective ARQ is the best among all the ARQ
systems; but its implementation is expensive.
Forward Error-Correction (FEC): It consists of a channel encoder at the transmitter and a
channel decoder at the receiver, as shown in Fig. 3.4, and depends upon error-correcting codes.
Fig 3.4: Block diagram of FEC
The FEC encoder and modulator are shown as separate units in the transmitter and correspondingly
the detector and FEC decoder are also shown as two separate units in the receiver. However, in
certain cases, where bandwidth efficiency is of major concern, the functions of the FEC encoder
and modulator at the transmitter and those of the FEC decoder and the demodulator at the receiver
are combined.
The advantages and disadvantages in using FEC are as follows.
a. No return path, or feedback channel is needed as in the case of ARQ systems.
b. The ratio of the number of information, or message bits to the total number of bits
transmitted, defined as the information throughput efficiency, is constant in FEC systems
and constant overall delay is obtained.
c. FEC systems need expensive input and output buffers for the encoders and decoders and
sometimes buffer overflows cause problems.
d. When very high reliability is needed, selection of an appropriate error-correcting code and
implementing its decoding algorithm may be difficult.
e. Reliability of the received data is sensitive to channel degradations.
TYPES OF CODES: Error correcting codes are divided into two broad categories:
Block Codes: In a block code, (n-k) check bits (redundant bits) are added to k information bits
to form an n-bit codeword. These (n-k) check bits are derived from the k information bits. At the
receiver, the check bits are used to detect and correct errors which may occur anywhere in the
entire n-bit codeword.
Convolutional Codes: In a convolutional code, a block of n code digits depends not only on the
block of k information bits but also on the preceding (N-1) blocks of information bits. Here, check
bits are continuously interleaved with information bits, which helps to correct errors not only in a
particular block but in other blocks as well.
Block codes are better suited for error detection, and convolutional codes for error
correction.
IMPORTANT TERMS USED IN ERROR CONTROL CODING:
Codeword:
The encoded block of ‘n’ bits is called a codeword. It contains message bits (k) and redundant
check bits.
Block length: The number of bits ‘n’ after coding is called the block length of the code.
Code rate: The code rate ‘r’ is defined as the ratio of the number of message bits (k) to the number
of encoder output bits (n). Hence,

Code rate, r = k/n,  where 0 < r < 1
Code Vector: An ‘n’ bit codeword can be visualized in an n-dimensional space as a vector whose
elements or co-ordinates are the bits of the codeword.
Code Efficiency: The code efficiency is the ratio of the message bits to the transmitted bits for
that block by the encoder. Hence,
Code efficiency = (Message bits / Transmitted bits) × 100 % = (k/n) × 100 %
Weight of the code: The number of non-zero elements in the transmitted code vector is called
code vector weight.
Hamming distance: The hamming distance (d) between the two code vectors is equal to the
number of elements in which they differ. Eg. Let X = 101 and Y = 110. Then hamming distance
(d) between X and Y code vectors is 2.
Minimum hamming distance:
The smallest hamming distance between the valid code vectors is termed as the minimum
hamming distance (dmin).
Modulo-2 arithmetic
(i) Addition: Modulo-2 addition is an exclusive-OR (XOR) operation.

0 ⊕ 0 = 0,  0 ⊕ 1 = 1,  1 ⊕ 0 = 1,  1 ⊕ 1 = 0

(ii) Multiplication: Modulo-2 multiplication of binary digits follows AND logic.

0 · 0 = 0,  0 · 1 = 0,  1 · 0 = 0,  1 · 1 = 1
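As a quick illustration, the two tables above can be verified with a few lines of code (the helper names are illustrative):

```python
# Modulo-2 arithmetic sketch: addition is exclusive OR, multiplication is AND.

def mod2_add(a, b):
    """Modulo-2 addition (exclusive OR)."""
    return a ^ b

def mod2_mul(a, b):
    """Modulo-2 multiplication (logical AND)."""
    return a & b

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} = {mod2_add(a, b)} (mod 2),  {a} x {b} = {mod2_mul(a, b)}")
```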
LINEAR BLOCK CODES (LBC): LBC involves encoding a block of source information
bits into another block of bits with addition of error control bits to combat channel errors induced
during transmission.
An (n, k) linear block code encodes k message bits into a block of n bits by adding (n-k) number
of check bits.
Fig 3.5: Systematic Format of LBC
An (n, k) block code is said to be “(n, k) linear block code” if it satisfies the condition given
below:
Let C1 and C2 be any two code vectors (n bits each) belonging to a set of (n, k) block codes. If
C1 ⊕ C2 is also an n-bit codeword belonging to the same set of (n, k) block codes, then such a
block code is called an (n, k) LBC. In other words, the linear sum of any two codewords must result
in another code vector of the set of (n, k) block codes.
An (n, k) LBC is said to be systematic if the k message bits appear either at the beginning of the
codeword or at the end of the codeword.
The codeword structure for a systematic code is:

C = [(n-k) check bits : k message bits]   or   C = [k message bits : (n-k) check bits]

i.e., C = (C0, C1, C2, C3, ......, C(n-1))

For a systematic code (check bits first),

Ci = bi            for i = 0, 1, ....., (n-k-1)
Ci = m(i-(n-k))    for i = (n-k), (n-k+1), ....., (n-1)
Generator Matrix [G]:
Let the message block of k bits be represented as a row vector, called the message vector:

[M](1×k) = [m0, m1, m2, m3, ......, m(k-1)]        (1)

where m0, m1, m2, m3, ......, m(k-1) are each either 0 or 1. Thus there are 2^k distinct message
vectors.
The channel encoder systematically adds (n-k) check bits to form the (n, k) LBC.

Parity bits/check bits, [B](1×(n-k)) = [b0, b1, b2, b3, ......, b(n-k-1)]        (2)

Thus we have 2^k distinct code vectors, corresponding to the 2^k distinct message vectors, each
forming an n-bit codeword:

[C](1×n) = [C0, C1, C2, C3, ......, C(n-1)]        (3)

The 2^k distinct code vectors, each of length n, form a subspace of the set of all possible 2^n
vectors of length n. Among the 2^n distinct n-length binary sequences, only 2^k are valid code
vectors; the remaining (2^n - 2^k) vectors are invalid and form the error vectors.
The code vector is written as

C = [b : m] = [b0, b1, b2, b3, ......, b(n-k-1), m0, m1, m2, m3, ......, m(k-1)]        (4)
The (n-k) check bits b0, b1, b2, b3, ......, b(n-k-1) are derived from the k message bits using a
predetermined rule as below:

b0 = m0 p0,0 + m1 p1,0 + m2 p2,0 + ...... + m(k-1) p(k-1),0
b1 = m0 p0,1 + m1 p1,1 + m2 p2,1 + ...... + m(k-1) p(k-1),1
......
b(n-k-1) = m0 p0,(n-k-1) + m1 p1,(n-k-1) + m2 p2,(n-k-1) + ...... + m(k-1) p(k-1),(n-k-1)        (5)
Expressing eq (5) in matrix form,

[b0, b1, b2, b3, ......, b(n-k-1)] = [m0, m1, m2, m3, ......, m(k-1)] · [P](k×(n-k))

or [b] = [m][P]        (6)
where the parity matrix [P](k×(n-k)) is

        | p0,0       p0,1       ......  p0,(n-k-1)     |
[P]  =  | p1,0       p1,1       ......  p1,(n-k-1)     |
        | ......                                       |
        | p(k-1),0   p(k-1),1   ......  p(k-1),(n-k-1) |
so that

[C] = [m][P : I_k] = [m][G]        (7)

where G is called the Generator matrix, given by

[G] = [P : I_k]   or   [G] = [I_k : P]        (8)
Written out, with rows g0, g1, ......, g(k-1),

             | p0,0       p0,1       ......  p0,(n-k-1)      1  0  ......  0 |
[G](k×n)  =  | p1,0       p1,1       ......  p1,(n-k-1)      0  1  ......  0 |
             | ......                                                        |
             | p(k-1),0   p(k-1),1   ......  p(k-1),(n-k-1)  0  0  ......  1 |

               └──────────────── P matrix ─────────────────┘└ k×k identity ┘
The k rows of the generator matrix G are linearly independent, in the sense that no linear
combination of its rows results in any of the other rows. Each row of the G matrix is also a
codeword. The matrix G is therefore said to be in canonical form.
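As a sketch of codeword generation C = mG from Eq. (7), the example below uses a (7, 4) code with one particular choice of the P sub-matrix; this specific P is an assumption for illustration, not the only valid one:

```python
# Sketch of systematic LBC encoding C = m·G (mod 2) for an illustrative (7, 4) code.

K, N = 4, 7

# Assumed P sub-matrix (k x (n-k)); rows are distinct and non-zero.
P = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 1],
    [1, 0, 1],
]

# G = [P : I_k], as in Eq. (8)
G = [P[i] + [1 if j == i else 0 for j in range(K)] for i in range(K)]

def encode(m):
    """Return the n-bit codeword C = m·G over GF(2)."""
    return [sum(m[i] * G[i][j] for i in range(K)) % 2 for j in range(N)]

codeword = encode([1, 0, 1, 1])
print(codeword)   # first (n-k) bits are checks, last k bits are the message
```

Since the code is systematic, the message [1, 0, 1, 1] reappears unchanged as the last four bits of the codeword.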
PARITY CHECK MATRIX [H]:
The generator matrix [G] completely characterizes a linear (n, k) block code in the sense
that knowledge of [G] enables us to determine all the 2^k codewords of the code. Apart from the [G]
matrix, there is another matrix, [H] which also completely characterizes the code. This H-matrix
is called the parity check matrix. Each one of the parity check bits is a linear combination of the
message digits. Thus, in an (n, k) block code, the (n–k) parity check digits can be determined for
any arbitrary set of k message digits, provided we have (n–k) parity-check equations. Thus, the
parity-check equations give another way of characterizing a block code.
Let us consider an (n-k) × n matrix H defined as:

[H]((n-k)×n) = [I(n-k) : P^T]   or   [P^T : I(n-k)]        (9)

Then,

[H^T](n×(n-k)) = | I(n-k) |        (10)
                 |   P    |
[G] = [P : I_k]   or   [I_k : P]        (11)

Then,

[H][G]^T = [I(n-k) : P^T] | P^T | = P^T + P^T = [0]((n-k)×k)        (12)
                          | I_k |

[G][H]^T = [P : I_k] | I(n-k) | = P + P = [0](k×(n-k))        (13)
                     |   P    |

Therefore, [H][G]^T = [G][H]^T = [0], and G and H are said to be dual matrices.
We know that [C] = [m][G]        (14)

Post-multiplying by H^T on both sides: C·H^T = [m]·G·H^T = [m]·[0] = 0

∴ C·H^T = 0        (15)
We know that

        | p0,0       p0,1       ......  p0,(n-k-1)      1  0  ......  0 |
[G]  =  | p1,0       p1,1       ......  p1,(n-k-1)      0  1  ......  0 |        (16)
        | ......                                                        |
        | p(k-1),0   p(k-1),1   ......  p(k-1),(n-k-1)  0  0  ......  1 |

and

[H] = [I(n-k) : P^T]        (17)
Written out,

                | 1  0  ......  0    p0,0         p1,0         ......  p(k-1),0       |
[H]((n-k)×n) =  | 0  1  ......  0    p0,1         p1,1         ......  p(k-1),1       |        (18)
                | ......                                                              |
                | 0  0  ......  1    p0,(n-k-1)   p1,(n-k-1)   ......  p(k-1),(n-k-1) |
Let the codewords be generated as

C = [C0, C1, C2, C3, ......, C(n-1)] = [b0, b1, b2, b3, ......, b(n-k-1), m0, m1, m2, m3, ......, m(k-1)]

so that the transpose C^T is the n-element column vector

C^T = [b0, b1, b2, ......, b(n-k-1), m0, m1, m2, ......, m(k-1)]^T        (19)
Then the parity-check condition HC^T = 0 reads

        | 1  0  ......  0    p0,0         ......  p(k-1),0       |   | b0       |
HC^T =  | 0  1  ......  0    p0,1         ......  p(k-1),1       |   | ...      |  =  [0]        (20)
        | ......                                                 |   | b(n-k-1) |
        | 0  0  ......  1    p0,(n-k-1)   ......  p(k-1),(n-k-1) |   | m0       |
                                                                     | ...      |
                                                                     | m(k-1)   |
This gives (n-k) equations, which are actually parity-check equations. Let us take the first equation:

b0 + (b1 + b2 + b3 + ...... + b(n-k-1))·0 + m0 p0,0 + m1 p1,0 + ...... + m(k-1) p(k-1),0 = 0

Hence, b0 = m0 p0,0 + m1 p1,0 + ...... + m(k-1) p(k-1),0        (21)

which is the parity check equation that gives the parity check bit b0. Likewise, these (n-k)
equations give the (n-k) parity check bits of the code vector; the remaining elements are the k
message bits.
The encoding circuit for a systematic (n, k) LBC is shown in Fig 3.6, in which the k message bits
are represented by u0, u1, u2, u3, ......, u(k-1) and the parity bits are represented by
v0, v1, v2, ......, v(n-k-1).
Fig 3.6: Encoding Circuit for Systematic (n, k) LBC
While the G-matrix of a linear block code is useful in generating the code vectors (as the output of
the channel encoder at the transmitter), the H-matrix is useful at the decoder in the receiver.
Since Eq. (15) is satisfied by C if and only if it is a legitimate code vector, the decoder of the
receiver uses the received vector r in Eq. (15) in the place of C to check whether r satisfies
that equation. If it does, then it is a valid code vector. If it does not, then the receiver decides
that one or more bits of the received vector are erroneous.
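This validity check can be sketched numerically. The example below assumes an illustrative (7, 4) single-error-correcting code; the particular P sub-matrix is an assumption, and H = [I(n-k) : P^T] follows Eq. (9):

```python
# Sketch of the validity check r·H^T = 0 from Eq. (15) for an assumed (7, 4) code.

N_K, N = 3, 7

# Assumed P sub-matrix of the illustrative code.
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]

# H = [I_{n-k} : P^T], as in Eq. (9); row i holds the i-th parity-check equation.
H = [[1 if j == i else 0 for j in range(N_K)] + [P[r][i] for r in range(4)]
     for i in range(N_K)]

def is_codeword(r):
    """True if r·H^T = 0 over GF(2), i.e. r satisfies every parity check."""
    return all(sum(r[j] * H[i][j] for j in range(N)) % 2 == 0 for i in range(N_K))

print(is_codeword([1, 0, 0, 1, 0, 1, 1]))   # a valid codeword of this code
print(is_codeword([1, 1, 0, 1, 0, 1, 1]))   # same word with one bit flipped
```

A single flipped bit violates at least one parity-check equation, so the second call reports an invalid word.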
SYNDROME AND ERROR DETECTION:
Consider an (n, k) LBC with G and H matrices. Let C = [C0, C1, C2, C3, ......, C(n-1)] be a codeword
that was transmitted over a noisy communication channel, and let R = [r0, r1, r2, ......, r(n-1)] be the
received vector at the output of the channel. Because of the channel noise, R may differ from C.

i.e., e = R + C = [e0, e1, e2, ......, e(n-1)]        (22)

where e is called the error pattern (or) error vector.

ei = 1 if ri ≠ ci
ei = 0 if ri = ci        (23)

The 1’s in e are called the transmission errors caused by the channel noise. The received
vector is the sum of the transmitted code vector and the error vector:

R = C + e        (24)
Upon receiving R, the decoder must first determine whether R has transmission errors. If the
presence of errors is detected then the decoder will take action to locate and correct using FEC or
ARQ methods.
When R is received, the decoder computes

Syndrome, S = R·H^T = [s0, s1, s2, ......, s(n-k-1)]        (25)

If S = 0, then R is (possibly) a codeword.
If S ≠ 0, then R is not a codeword.

Sometimes, even though R contains errors, S = 0; error patterns of this kind are undetectable
error patterns.
The syndrome vector S = R·H^T is given, in expanded form, by

[s0, s1, s2, ......, s(n-k-1)] = [r0, r1, r2, ......, r(n-1)] · [H^T](n×(n-k))

i.e.,

s0 = r0 + r(n-k) p0,0 + r(n-k+1) p1,0 + ...... + r(n-1) p(k-1),0
s1 = r1 + r(n-k) p0,1 + r(n-k+1) p1,1 + ...... + r(n-1) p(k-1),1
......
s(n-k-1) = r(n-k-1) + r(n-k) p0,(n-k-1) + r(n-k+1) p1,(n-k-1) + ...... + r(n-1) p(k-1),(n-k-1)        (26)
Fig3.7: Syndrome decoding Circuit for Systematic (n, k) LBC
Properties:
1. The syndrome is independent of the transmitted code vector; it depends only on the error
pattern, i.e., S = e·H^T.

Proof: We know that S = R·H^T = (C + e)·H^T = C·H^T + e·H^T

Since C·H^T = 0,   S = e·H^T

In expanded form:

s0 = e0 + e(n-k) p0,0 + e(n-k+1) p1,0 + ...... + e(n-1) p(k-1),0
s1 = e1 + e(n-k) p0,1 + e(n-k+1) p1,1 + ...... + e(n-1) p(k-1),1
......
s(n-k-1) = e(n-k-1) + e(n-k) p0,(n-k-1) + ...... + e(n-1) p(k-1),(n-k-1)        (27)
2. Error patterns differing by a codeword have the same syndrome.

Proof: Suppose e1 is an error pattern and e2 = e1 + C. Then

Syndrome s1 = e1·H^T, and
Syndrome s2 = e2·H^T = (e1 + C)·H^T = e1·H^T + C·H^T = e1·H^T

∴ s2 = s1

Thus, error patterns differing by a codeword have the same syndrome.
Co-sets: Suppose e is some arbitrary error pattern, and let ei = e + Ci for i = 0, 1, 2, ...., (2^k - 1).
Then, from property 2 of syndromes, all 2^k such error patterns have the same syndrome. This set
of 2^k error patterns sharing a unique syndrome is called a coset of the code.
“A coset of an (n, k) block code is a set of 2^k error patterns, characterized by a unique syndrome
for all its elements.”
From an (n, k) block code we can form 2^n distinct error patterns. Among them, every 2^k error
patterns that share a common syndrome form a coset.
Therefore, the number of cosets = 2^(n-k)
Hamming weight: It is defined as the number of non-zero elements of a codeword. For example,
if C = 0010110, then the Hamming weight Hw = 3 (the number of 1’s).
Hamming Distance: The Hamming distance d(C1, C2) between two code vectors having the same
number of elements is defined as the number of locations in which their respective elements differ.
Minimum Distance (dmin): The minimum distance dmin of a linear block code is the smallest
Hamming distance between any two code vectors of the code.
Error detection and Correction Capabilities of LBC:
1. The minimum distance of an LBC is equal to the minimum Hamming weight of a non-zero
code vector.

Proof: From the definitions of Hamming distance and modulo-2 addition, it follows that the
Hamming distance between two n-tuples Ci and Cj is equal to the Hamming weight of their sum:

d(Ci, Cj) = Hw(Ci + Cj)
dmin = min{d(Ci, Cj)}

By linearity, the sum Ci + Cj is itself a third code vector Ck in C, so the Hamming distance
between two code vectors in C equals the Hamming weight of Ck:

dmin = min{d(Ci, Cj)} = min{Hw(Ci + Cj)} = min{Hw(Ck)} = Hw,min        (28)
Note:
1. The minimum distance of an LBC is equal to the minimum number of columns of H which,
when added together, result in a zero vector.
2. The minimum distance of an LBC is equal to the minimum number of rows of H^T which,
when added together, result in a zero vector.
A linear block code with minimum distance dmin can detect up to (dmin - 1) errors in each code
vector and can correct up to ⌊(dmin - 1)/2⌋ errors, where ⌊(dmin - 1)/2⌋ denotes the largest integer
not greater than (dmin - 1)/2.
(i) The error detecting capability of an LBC with minimum distance dmin is (dmin - 1).
(ii) If dmin is odd, it is capable of detecting all error patterns of weight ≤ (dmin - 1), and the
number of errors it can correct is t ≤ (dmin - 1)/2.
(iii) If dmin is even, it is capable of detecting all error patterns of weight ≤ dmin/2, and the
number of errors it can correct is t ≤ (dmin - 2)/2.
SYNDROME DECODING USING STANDARD ARRAY:
The 2^n received vectors are partitioned into 2^k non-overlapping subsets in an array called the
“standard array”. Thus we have 2^k columns, each led by a code vector, commencing with the
all-zero vector at the left-most column corner, and we have 2^(n-k) rows.
Each of these rows forms a ‘coset’, and the left-most element of each coset is called the
‘coset leader’. The first row comprises the 2^k possible zero-error received vectors. The coset leader
of the second row is some error pattern, say e2; the other elements of that row are
C2 + e2, C3 + e2, ........, C(2^k) + e2. The coset leaders must be so chosen that they are the most
likely error patterns – those with the smallest Hamming weight.
The general decoding procedure consists of the following steps:
1. Determine the syndrome of the received vector R: S = R·H^T
2. Identify the coset with this syndrome, and let its coset leader be the error pattern e.
3. Decode the received vector R into the code vector C = R + e.
Fig 3.8: Standard Array syndrome Decoding
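The three decoding steps can be sketched as a syndrome table lookup. The (7, 4) code and its P sub-matrix below are assumptions for illustration; the coset leaders are the most likely error patterns, namely the all-zero pattern and the seven single-bit errors:

```python
# Sketch of standard-array (table-lookup) syndrome decoding for an
# illustrative (7, 4) single-error-correcting code.

# Assumed P sub-matrix; H = [I_{n-k} : P^T].
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
H = [[1 if j == i else 0 for j in range(3)] + [P[r][i] for r in range(4)]
     for i in range(3)]

def syndrome(r):
    return tuple(sum(r[j] * H[i][j] for j in range(7)) % 2 for i in range(3))

# Step 2's table: syndrome -> coset leader (2^(n-k) = 8 entries in all).
leaders = {syndrome([0] * 7): [0] * 7}
for pos in range(7):
    e = [0] * 7
    e[pos] = 1
    leaders[syndrome(e)] = e

def decode(r):
    e = leaders[syndrome(r)]                          # steps 1 and 2
    return [(rj + ej) % 2 for rj, ej in zip(r, e)]    # step 3: C = R + e

sent = [1, 0, 0, 1, 0, 1, 1]
received = sent[:]
received[4] ^= 1                  # single transmission error
print(decode(received) == sent)   # the coset leader cancels the error
```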
The storage or memory space required for array decoding increases exponentially with
the number of parity check bits used in the code. To store the 2^(n-k) coset leaders, each with
n digits, we need n·2^(n-k) digits in total. To store the 2^(n-k) syndromes, each with (n-k) digits,
we need (n-k)·2^(n-k) digits in total. Altogether, (2n - k)·2^(n-k) digits are needed to store the
coset leaders and syndromes.
Table-Lookup Decoding: Table-lookup decoding can be applied to any (n, k) LBC,
resulting in minimum decoding delay and minimum probability of error. The standard array can be
regarded as the truth table of n switching functions:

e0 = f0(S0, S1, S2, ..........., S(n-k-1))
e1 = f1(S0, S1, S2, ..........., S(n-k-1))
......
e(n-1) = f(n-1)(S0, S1, S2, ..........., S(n-k-1))        (29)

where S0, S1, S2, ..........., S(n-k-1) are the syndrome digits, which are regarded as switching
variables, and e0, e1, e2, ..........., e(n-1) are the estimated error digits. When these n switching
functions are derived and simplified, a combinational circuit with the (n-k) syndrome digits as
inputs and the estimated error digits as outputs is realized. The general table-lookup decoder for
an (n, k) LBC is shown in Figure 3.9.
Fig 3.9: General Decoder for an LBC
HAMMING CODE:
Hamming codes are a family of (n, k) block error-correcting codes having the following
properties:
• Number of user data bits, k = 2^m - m - 1, where m is the number of check bits (redundancy).
• Number of encoded data bits, n = 2^m - 1
• Number of check bits, n - k = m
• Minimum Hamming distance, dmin = 3
• Error correcting capability, t = ⌊(dmin - 1)/2⌋ = 1
• The number of check bits satisfies 2^m ≥ (k + m + 1)
Procedure to calculate Hamming Check bits:
1. Hamming check bits are inserted among the original data bits at positions that are powers of
2, i.e., 2^0, 2^1, 2^2, ....., 2^(m-1).
2. Using the condition 2^m ≥ (k + m + 1), the number of check bits m is calculated.
3. The total number of encoded bits is k + m.
4. Each data position holding a value 1 is represented by a binary value equal to its position.
5. All of these position values are XORed together to produce the check bits of the Hamming
code.
Decoding of Hamming Code:
1. All data bit positions with binary value 1, together with the Hamming check bits (which
occupy the bit positions that are powers of 2), are XORed to form the syndrome.
2. If the syndrome contains all 0s, no error is detected.
3. If the syndrome contains one and only one bit set to 1, then an error has occurred in one of
the check bits; no correction is required in the received decoded data.
4. If the syndrome contains more than one bit set to 1, then its decimal equivalent value
indicates the position of the data bit which is in error. This data bit is simply inverted for
correction.
Design of Hamming Codes using H-Matrix:
We know that [H] = [P^T : I(n-k)], so

H^T = | P      |        (30)
      | I(n-k) |

It is clear that H^T has (n-k) columns; therefore each row of H^T has (n-k) entries, each of
which can be ‘0’ or ‘1’. Thus there are 2^(n-k) distinct rows possible. But a row of all 0’s cannot
be used, as this corresponds to the syndrome of no error. Thus we are left with (2^(n-k) - 1)
distinct usable rows.
The single-error-correcting (n, k) Hamming code has the following parameters:
Code length: n = 2^(n-k) - 1
Number of message bits: k = n - log2(n + 1)
Number of parity check bits: (n - k)
Error correcting capability: t = ⌊(dmin - 1)/2⌋ = 1
The [P] matrix is constructed such that:
1. H^T does not contain a row of 0’s.
2. No two rows of H^T are the same.
BINARY CYCLIC CODES:
Cyclic codes are a subclass of linear block codes that have distinct advantages over general
linear codes:
(i) The encoding and decoding circuits for cyclic codes can be easily implemented using
shift registers with feedback connections and some basic gates.
(ii) They have an excellent mathematical structure, which makes the design of error-
correcting codes with multiple-error-correction capability relatively easy.
(iii) Very efficient decoding methods are available that do not depend upon a look-up table,
so large memories are not needed; cyclic codes can also correct errors caused by bursts
of noise that affect several successive bits.
It is because of these attractive features that almost all Forward Error Correcting (FEC) systems
make use of cyclic codes.
Description of Cyclic Codes:
Def: An (n, k) linear code ‘C’ is called a cyclic code if every cyclic shift of a code vector in
C is also a code vector in C.
If the components of the n-tuple C = (C0, C1, C2, C3, ......, C(n-1)) are cyclically shifted one
place to the right, we obtain another n-tuple

C^(1) = (C(n-1), C0, C1, C2, C3, ......, C(n-2))

which is called a cyclic shift of ‘C’. If the components of ‘C’ are cyclically shifted ‘i’ places
to the right, the result is

C^(i) = (C(n-i), C(n-i+1), ....., C(n-1), C0, C1, ......, C(n-i-1))

Cyclically shifting ‘C’ ‘i’ places to the right is equivalent to cyclically shifting C (n-i)
places to the left. This property of cyclic codes allows us to treat the elements of each code
vector as the coefficients of a polynomial of degree (n-1) or less.
i.e.,

C(X) = C0 + C1 X + C2 X^2 + ....... + C(n-1) X^(n-1)

If C(n-1) ≠ 0, the degree of C(X) is (n-1); if C(n-1) = 0, the degree is less than (n-1).
The code polynomial that corresponds to the code vector C^(i) is

C^(i)(X) = C(n-i) + C(n-i+1) X + ..... + C(n-1) X^(i-1) + C0 X^i + C1 X^(i+1) + ...... + C(n-i-1) X^(n-1)

There exists an interesting algebraic relationship between C(X) and C^(i)(X). Multiplying
C(X) by X^i, we obtain

X^i C(X) = C0 X^i + C1 X^(i+1) + ........ + C(n-i-1) X^(n-1) + ....... + C(n-1) X^(n+i-1)
The above equation can be manipulated as:

X^i C(X) = C(n-i) + C(n-i+1) X + ..... + C(n-1) X^(i-1) + C0 X^i + C1 X^(i+1) + ...... + C(n-i-1) X^(n-1)
           + C(n-i)(X^n + 1) + C(n-i+1) X(X^n + 1) + ......... + C(n-1) X^(i-1)(X^n + 1)

i.e., X^i C(X) = q(X)[X^n + 1] + C^(i)(X)

where q(X) = C(n-i) + C(n-i+1) X + ..... + C(n-1) X^(i-1)
The code polynomial C^(i)(X) is simply the remainder that results from dividing X^i C(X)
by X^n + 1, i.e.,

X^i C(X) / (X^n + 1) = q(X) + C^(i)(X) / (X^n + 1)

with quotient q(X) and remainder C^(i)(X). Thus, if C(X) is a code polynomial, then

C^(i)(X) = X^i C(X) modulo (X^n + 1)

is also a code polynomial for any cyclic shift ‘i’.
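This relationship is easy to verify numerically. The sketch below represents a code polynomial as a coefficient list [C0, C1, ..., C(n-1)] over GF(2) and checks that a right cyclic shift matches X^i C(X) mod (X^n + 1):

```python
# Numerical check (a sketch) that a cyclic shift of i places to the right
# equals X^i·C(X) modulo (X^n + 1).

def cyclic_shift_right(c, i):
    """Cyclic right shift of the coefficient list c by i places."""
    n = len(c)
    return [c[(j - i) % n] for j in range(n)]

def xi_c_mod_xn_plus_1(c, i):
    """Coefficients of X^i·C(X) mod (X^n + 1) over GF(2)."""
    n = len(c)
    out = [0] * n
    for j, coeff in enumerate(c):
        # X^(j+i) reduces to X^((j+i) mod n), since X^n = 1 modulo (X^n + 1)
        out[(j + i) % n] ^= coeff
    return out

c = [1, 1, 0, 1, 0, 0, 0]             # C(X) = 1 + X + X^3, with n = 7
print(cyclic_shift_right(c, 2) == xi_c_mod_xn_plus_1(c, 2))   # True
```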
Generator Polynomial: Let g(X) be a polynomial of degree (n-k) that is a factor of X^n + 1;
g(X) is called a generator polynomial of the (n, k) cyclic code.
        g(X) = g0 + g1 X + g2 X^2 + ... + gn-k X^(n-k)
where g0 = gn-k = 1, because X cannot be a factor of X^n + 1 and the minimum-degree factor is
(1 + X). Hence,
        g(X) = 1 + Σ(i=1 to n-k-1) gi X^i + X^(n-k)
A cyclic code is uniquely determined by the generator polynomial g(X), in which each
code polynomial can be expressed as the product
        C(X) = a(X) g(X)
The degree of g(X) = number of parity-check bits = (n-k), and the degree of a(X) is (k-1).
Systematic Encoding of Cyclic Codes:
Let the message sequence be M = {m0, m1, m2, m3, ..., mk-1}.
Let the message polynomial be
        M(X) = m0 + m1 X + m2 X^2 + ... + mk-1 X^(k-1)
Let the parity bits be b = {b0, b1, b2, b3, ..., bn-k-1}.
The parity polynomial is
        b(X) = b0 + b1 X + b2 X^2 + ... + bn-k-1 X^(n-k-1)
Let the code sequence be C = {b0, b1, ..., bn-k-1, m0, m1, ..., mk-1}.
The code polynomial is
        C(X) = b0 + b1 X + ... + bn-k-1 X^(n-k-1) + m0 X^(n-k) + m1 X^(n-k+1) + ... + mk-1 X^(n-1)
        C(X) = b(X) + X^(n-k) M(X)
Since every code polynomial is a multiple of the generator polynomial, C(X) = a(X) g(X). Therefore,
        a(X) g(X) = b(X) + X^(n-k) M(X)
and, using mod-2 addition,
        X^(n-k) M(X) = b(X) + a(X) g(X)
Dividing both sides by g(X),
        X^(n-k) M(X) / g(X) = a(X) + b(X)/g(X)
i.e. the quotient is a(X) and the remainder b(X) gives the PARITY bits. Finally,
        C(X) = b(X) + X^(n-k) M(X)
Steps:
1. Multiply the message polynomial M(X) by X^(n-k).
2. Divide X^(n-k) M(X) by the given generator polynomial g(X) to obtain the remainder b(X)
   (the parity bits).
3. Combine b(X) and X^(n-k) M(X) to give the code polynomial C(X) = b(X) + X^(n-k) M(X).
Note: The non-systematic cyclic code vector for each message can be found as
C(X) = M(X) G(X).
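The three encoding steps can be sketched in Python (illustrative helper names, polynomials held as integer bit masks); the example reuses g(X) = 1 + X + X^3 and message 1011 from the drill problems later in this unit.

```python
def poly_divmod(dividend, divisor):
    # GF(2) polynomial division; polynomials as integer bit masks (bit i = X^i)
    q, dlen = 0, divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        q |= 1 << shift
        dividend ^= divisor << shift
    return q, dividend                        # quotient, remainder

def systematic_encode(msg_bits, g, n, k):
    # Steps 1-3: multiply M(X) by X^(n-k), divide by g(X), append remainder
    m = sum(bit << i for i, bit in enumerate(msg_bits))
    _, b = poly_divmod(m << (n - k), g)       # b(X) = parity polynomial
    c = b | (m << (n - k))                    # C(X) = b(X) + X^(n-k) M(X)
    return [(c >> i) & 1 for i in range(n)]

# (7,4) code with g(X) = 1 + X + X^3, message (m0..m3) = 1011
print(systematic_encode([1, 0, 1, 1], 0b1011, 7, 4))   # -> [1, 0, 0, 1, 0, 1, 1]
```

The first (n-k) entries of the result are the parity bits and the last k entries are the message, matching the systematic code structure C = {b0, ..., bn-k-1, m0, ..., mk-1}.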
Systematic Encoding of Cyclic Codes using (n-k) Feedback Shift Registers:
The systematic encoding of an (n, k) cyclic code can be accomplished with a division circuit,
which is a linear (n-k)-stage shift register with feedback connections based on the generator
polynomial
        g(X) = 1 + g1 X + g2 X^2 + ... + gn-k-1 X^(n-k-1) + X^(n-k)
Fig 3.10: Encoding circuit for an (n, k) cyclic code with g(X)
Steps:
1. Gate-1 is closed during the first k shifts, to allow transmission of the message bits into
(n- k) stage encoding shift register
2. Gate-2 is in the up position to allow transmission of the message bits directly to an output
register during the first k shifts
3. After transmission of the kth message bit, Gate-1 is opened and Gate-2 is moved to the
down position to give parity bits.
4. The remaining (n – k) shifts clear the encoding register by moving the parity bits to the
output register.
5. The total number of shifts is equal to n, and the contents of the output register form the
codeword polynomial C(X) = b(X) + X^(n-k) M(X).
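A bit-level simulation can make the shift sequence of the steps above concrete. The sketch below is an assumption about the usual wiring of such a circuit (feedback = input XOR last stage, taps at the gi coefficients), with helper names of my own; after the k message shifts the register holds the parity bits.

```python
def encoder_shift_register(msg_bits, g_taps):
    # g_taps = [g0, g1, ..., g_{n-k}] coefficients of g(X); g0 = g_{n-k} = 1
    r = [0] * (len(g_taps) - 1)               # (n-k)-stage register, initially cleared
    for u in reversed(msg_bits):              # highest-order message bit enters first
        fb = u ^ r[-1]                        # feedback = input XOR last stage
        for j in range(len(r) - 1, 0, -1):
            r[j] = r[j - 1] ^ (fb & g_taps[j])
        r[0] = fb
    return r                                  # parity bits [b0, ..., b_{n-k-1}]

# g(X) = 1 + X + X^3, message 1011: register holds parity 100 after k shifts
print(encoder_shift_register([1, 0, 1, 1], [1, 1, 0, 1]))   # -> [1, 0, 0]
```

This agrees with the polynomial-division result for the same message and generator, as expected since the circuit implements the division by g(X).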
Syndrome Calculation and Decoding of Cyclic Codes:
If C(X) is the transmitted code polynomial and R(X) is the received polynomial then,
R(X) = C(X) + e(X)
Where e(X) is error polynomial corresponding to error pattern created by the channel.
Let R(X) = r0 + r1 X + r2 X^2 + ... + rn-1 X^(n-1). Then
        R(X) / g(X) = q(X) + S(X)/g(X)
with quotient q(X) and remainder S(X), the syndrome. Since the degree of g(X) is (n-k), the
degree of S(X) is at most (n-k-1). The remainder polynomial is called the syndrome
polynomial, of degree (n-k-1) or less.
Property-1: If S(X) is a syndrome for R(X) then it is also a syndrome for error polynomial.
Proof: Let r(X) =C(X) +e(X)
e(X)= r(X)+C(X)
But C(X) =a(X)g(X)
Therefore, e(X) =r(X) +a(X)g(X)
But r(X) =q(X)g(X)+S(X)
therefore, e(X) = q(X)g(X) + S(X) + a(X)g(X)
              = [q(X) + a(X)]g(X) + S(X)
Hence, dividing e(X) by g(X) leaves the same remainder S(X) as dividing r(X) by g(X), since
the terms [q(X)+a(X)]g(X) and a(X)g(X) are both divisible by the generator polynomial. The
syndromes of r(X) and e(X) are therefore the same.
Property-2: If S(X) is the syndrome for r(X) then XS(X) is the syndrome for X.r(X).
Proof: r(X) =q(X)g(X)+S(X)
Xr(X) =Xq(X)g(X)+XS(X)
i.e., multiplying r(X) by X is equivalent to giving one right cyclic shift to the received
word R; the syndrome of the shifted word is then XS(X) (reduced modulo g(X)).
Property-3: If the errors occur only in the parity check bits of the transmitted codeword, the
syndrome polynomial and the error pattern polynomial will be the same.
Proof: Let r(X) = r0 + r1 X + r2 X^2 + ... + rn-1 X^(n-1), in which r0, r1, r2, ..., rn-k-1
are the received parity-check bits. If the errors are confined to the parity positions,
        e(X) = e0 + e1 X + e2 X^2 + ... + en-k-1 X^(n-k-1)
and the coefficients en-k to en-1 are all zero. Since the degree of e(X) is then less than
the degree (n-k) of g(X), when e(X) is divided by g(X) the remainder is e(X) itself. Thus
S(X) = e(X).
Syndrome Calculation using (n-k) shift register and error correction:
Computation of the syndrome s(X) of r(X) can be accomplished with a division circuit as
shown in Figure 3.11.
Fig 3.11: An (n-k) syndrome calculation circuit for (n, k) cyclic code
1. The register is first initialized. With Gate-2 turned ON and Gate-1 OFF, the received
vector r(X) is entered into the shift register.
2. After the entire received vector has been shifted into the register, the contents of the
register form the syndrome.
3. Now Gate-2 is turned OFF and Gate-1 ON, and the syndrome vector is shifted out of the
register. The circuit is then ready for processing the next received vector.
For error correction, the decoder has to determine a correctable error pattern e(X) from the
syndrome S(X) and add e(X) to r(X) to determine the transmitted code-vector C(X) as shown in
Figure 3.12.
Fig 3.12: General form of decoder for cyclic codes.
1. The received vector is shifted into the buffer register and the syndrome register.
2. After the syndrome for the received vector is calculated and placed in the syndrome
register, the contents of the syndrome register are read into the detector. If the detector
output is ‘1’ then the received digit at the right-most stage of the buffer register is assumed
to be erroneous and hence is corrected. If the detector output is ‘0’, the digit is assumed
to be correct.
3. The first received digit is shifted out of the buffer and at the same time the syndrome
register is shifted right once. If the first received digit is in error, the detector output will
be ‘1’ which is used for correcting the first received digit. The output of the detector is also
fed into the syndrome register to modify the syndrome. This results in a new syndrome
corresponding to the altered received vector shifted to the right by one place.
4. The new syndrome is now used to check whether or not the second received digit, now at
the right-most stage of the buffer register, is an erroneous digit. If so, it is corrected,
a new syndrome is calculated as in step 3, and the procedure is repeated.
5. The decoder operates on the received vector digit by digit until the entire received vector
is shifted out of the buffer.
At the end of the decoding operation, errors will have been corrected if they correspond to an error
pattern built into the detector and the syndrome register will contain all 0’s. If the syndrome
register does not contain all 0’s then an uncorrectable error pattern has been detected. The above
decoder circuit is called Meggit decoder.
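The syndrome-forming part of the Meggitt decoder can be sketched with the same register simulation used for the encoder (helper names are mine; this variant effectively premultiplies by X^(n-k), so the register reads all zeros exactly when R(X) is divisible by g(X)). A valid codeword drives the register to zero, while a corrupted word leaves a nonzero syndrome.

```python
def syndrome_register(received_bits, g_taps):
    # Shift the received word (highest-order bit first) through an
    # (n-k)-stage division circuit; final contents indicate the syndrome
    r = [0] * (len(g_taps) - 1)
    for z in reversed(received_bits):
        fb = z ^ r[-1]
        for j in range(len(r) - 1, 0, -1):
            r[j] = r[j - 1] ^ (fb & g_taps[j])
        r[0] = fb
    return r

# Codeword 1001011 of the g(X) = 1 + X + X^3 code gives the all-zero register;
# flipping one bit leaves a nonzero result
print(syndrome_register([1, 0, 0, 1, 0, 1, 1], [1, 1, 0, 1]))   # [0, 0, 0]
print(syndrome_register([1, 1, 0, 1, 0, 1, 1], [1, 1, 0, 1]))   # nonzero
```

An all-zero final register means "accept"; anything else triggers the error-pattern detector of Fig 3.12.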
Convolutional Codes:
In convolutional coding, a block of ‘n’ code digits generated by the encoder in a time unit depends
on not only the block of ‘k’ message digits within that time unit, but also on the preceding (m-1)
blocks of message digits(m>1). Usually the values of k and n will be small.
Encoder for convolutional codes: A convolutional encoder takes a sequence of message digits
and generates sequences of code digits. In any time unit, a message block consists of k-digits is
fed into the encoder and the encoder generates a code block consisting of ‘n’ code digits (k<n).
The n-digit code block depends not only on the k-digit message block of the same time unit but
also on the previous (m-1) message blocks. The code generated by the above encoder is called (n,
k, m) convolutional code.
where
        n → number of output bits in a time unit (equal to the number of modulo-2 adders)
        k → number of input bits entering at any time
        m → number of flip-flop (shift register) stages
        L → length of the message to be encoded
        n(L+m) → length of the encoded sequence
Constraint length K is defined as the number of encoder output bits influenced by each
message bit; for the encoders considered here, K = n(m+1).
Rate efficiency = k/n.
Note: The encoder should always start in the all-zero state and end in the all-zero state.
To bring it back to zero, m zeros (one per register stage) are appended to the input.
Time Domain Approach of Convolutional Encoder:
Let us consider (n, k, m) as (2, 1, 3) convolutional encoder as shown in Figure 3.13.
Fig 3.13: A (2, 1, 3) binary Convolutional Encoder
The time-domain behavior of a binary convolutional encoder is defined in terms of n
"impulse responses". Let the sequence g^(j) = [g1^(j), g2^(j), g3^(j), ..., gm+1^(j)] denote
the "impulse response", also called the "generator sequence", of the input-output path
through the jth modulo-2 adder.
In the encoder there are two modulo-2 adders, labelled the top adder and the bottom adder;
hence there are two generator sequences. Let u = (u1, u2, u3, ..., uL) represent the input
message sequence that enters the encoder one bit at a time, starting with u1. The encoder
then generates two output sequences, v^(1) and v^(2), defined by the discrete convolution
sums
        v^(1) = u * g^(1)
        v^(2) = u * g^(2)
From the definition of discrete convolution, we have
        vl^(j) = Σ(i=0 to m) u(l-i) g(i+1)^(j)    (mod 2)
In the given encoder, j = 1, 2 and i = 0 to m = 3.
Let the message sequence be u = (u1 u2 u3 u4 u5) = 10111. The output sequences are
calculated as follows.
For j = 1:
        vl^(1) = Σ(i=0 to 3) u(l-i) g(i+1)^(1)
        vl^(1) = ul g1^(1) + u(l-1) g2^(1) + u(l-2) g3^(1) + u(l-3) g4^(1)
The index l runs from 1 to (L+m). In the given encoder L = 5 and m = 3, so l = 1 to 8:
        l = 1: v1^(1) = 1        l = 5: v5^(1) = 0
        l = 2: v2^(1) = 0        l = 6: v6^(1) = 0
        l = 3: v3^(1) = 0        l = 7: v7^(1) = 0
        l = 4: v4^(1) = 0        l = 8: v8^(1) = 1
        v^(1) = [1 0 0 0 0 0 0 1]
For j = 2:
        vl^(2) = Σ(i=0 to 3) u(l-i) g(i+1)^(2)
        vl^(2) = ul g1^(2) + u(l-1) g2^(2) + u(l-2) g3^(2) + u(l-3) g4^(2)
        l = 1: v1^(2) = 1        l = 5: v5^(2) = 1
        l = 2: v2^(2) = 1        l = 6: v6^(2) = 1
        l = 3: v3^(2) = 0        l = 7: v7^(2) = 0
        l = 4: v4^(2) = 1        l = 8: v8^(2) = 1
        v^(2) = [1 1 0 1 1 1 0 1]
After encoding, the two output sequences are multiplexed into a single sequence, called the
"codeword", for transmission over the channel. The codeword at the output of the
convolutional encoder is
        V = [v1^(1) v1^(2), v2^(1) v2^(2), v3^(1) v3^(2), ..., v(L+m)^(1) v(L+m)^(2)]
        V = [11, 01, 00, 01, 01, 01, 00, 11]
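The convolution sum and the multiplexing step can be written directly in Python. The sketch below (function name is illustrative) reproduces the worked example: u = 10111 with g^(1) = 1011 and g^(2) = 1111 yields the codeword 11 01 00 01 01 01 00 11.

```python
def conv_encode(u, gens):
    # Discrete convolution v_l^(j) = sum_i u_{l-i} g_{i+1}^(j) over GF(2),
    # then multiplex the n output streams into one codeword
    m = len(gens[0]) - 1                      # number of register stages
    streams = []
    for g in gens:
        v = [0] * (len(u) + m)
        for l in range(len(v)):
            v[l] = sum(u[l - i] * g[i] for i in range(len(g))
                       if 0 <= l - i < len(u)) % 2
        streams.append(v)
    return [s[l] for l in range(len(u) + m) for s in streams]

# (2,1,3) encoder of Fig 3.13: g1 = 1011, g2 = 1111, message u = 10111
code = conv_encode([1, 0, 1, 1, 1], [[1, 0, 1, 1], [1, 1, 1, 1]])
print(code)   # pairs: 11 01 00 01 01 01 00 11
```

Reading the flat list two bits at a time recovers the multiplexed branch words of the text.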
Matrix Method: The generator sequences g1^(1), g2^(1), ..., gm+1^(1) for the top adder and
g1^(2), g2^(2), ..., gm+1^(2) for the bottom adder can be interlaced and arranged in matrix
form with number of rows = L and number of columns = n(L+m). Such a matrix of order
[L] x [n(L+m)] is called the "generator matrix" of the convolutional encoder:

        | g1(1)g1(2)  g2(1)g2(2)  ...  gm+1(1)gm+1(2)   00   00  ...  00 |
        | 00   g1(1)g1(2)  g2(1)g2(2)  ...  gm+1(1)gm+1(2)   00  ...  00 |
    G = | 00   00   g1(1)g1(2)  g2(1)g2(2)  ...  gm+1(1)gm+1(2)  ...  00 |
        | ...                                                            |

Each row is the previous row shifted to the right by n positions; hence in the 2nd row the
number of leading 0's is equal to the number of modulo-2 adders, n.
V = u · G
Given u = 10111, g^(1) = [1 0 1 1], g^(2) = [1 1 1 1].
Rows: L = 5; columns: n(L+m) = 16. Interlacing the two generator sequences gives the row
pattern 11 01 11 11, so

        | 11 01 11 11 00 00 00 00 |
        | 00 11 01 11 11 00 00 00 |
    G = | 00 00 11 01 11 11 00 00 |
        | 00 00 00 11 01 11 11 00 |
        | 00 00 00 00 11 01 11 11 |

    V = [1 0 1 1 1] · G = [11, 01, 00, 01, 01, 01, 00, 11]
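The generator-matrix construction and the product V = u·G can be sketched as follows (helper names are mine); it rebuilds the 5 x 16 matrix above and reproduces the same codeword as the time-domain approach.

```python
def generator_matrix(gens, L):
    # Interlace the generator sequences and shift each successive row by n positions
    n, taps = len(gens), len(gens[0])
    interlaced = [g[i] for i in range(taps) for g in gens]
    width = n * (L + taps - 1)
    return [[0] * (n * r) + interlaced + [0] * (width - n * r - len(interlaced))
            for r in range(L)]

def encode_by_matrix(u, G):
    # V = u.G over GF(2)
    return [sum(u[r] * G[r][c] for r in range(len(u))) % 2
            for c in range(len(G[0]))]

G = generator_matrix([[1, 0, 1, 1], [1, 1, 1, 1]], L=5)
print(encode_by_matrix([1, 0, 1, 1, 1], G))
```

The second row of G starts with n = 2 zeros, matching the shift rule stated in the text.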
Transfer-Domain Approach: Let the impulse response of each path in the encoder be
replaced by a polynomial whose coefficients are the respective elements of the impulse
response:
        g^(j)(X) = g1^(j) + g2^(j) X + g3^(j) X^2 + ... + gm+1^(j) X^m
The corresponding output of each of the adders is given by
        V^(j)(X) = u(X) g^(j)(X)
The final encoder output is given as
        V(X) = V^(1)(X^n) + X V^(2)(X^n) + X^2 V^(3)(X^n) + ... + X^(n-1) V^(n)(X^n)
For example, consider u = 10111:
        U(X) = 1 + X^2 + X^3 + X^4
        g^(1) = [1 0 1 1]  →  g^(1)(X) = 1 + X^2 + X^3
        g^(2) = [1 1 1 1]  →  g^(2)(X) = 1 + X + X^2 + X^3
        V^(1)(X) = U(X) g^(1)(X) = 1 + X^7
        V^(2)(X) = U(X) g^(2)(X) = 1 + X + X^3 + X^4 + X^5 + X^7
        V(X) = V^(1)(X^2) + X V^(2)(X^2)
        V^(1)(X^2) = 1 + X^14
        X V^(2)(X^2) = X + X^3 + X^7 + X^9 + X^11 + X^15
        V(X) = 1 + X + X^3 + X^7 + X^9 + X^11 + X^14 + X^15
        V = [11, 01, 00, 01, 01, 01, 00, 11]
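The transfer-domain computation is ordinary GF(2) polynomial arithmetic, sketched below with polynomials as integer bit masks; the spread helper (my own name) substitutes X → X^2 for the interleaving step.

```python
def poly_mul_gf2(a, b):
    # Multiply GF(2) polynomials held as integer bit masks (bit i = X^i)
    result = 0
    while b:
        if b & 1:
            result ^= a
        a, b = a << 1, b >> 1
    return result

# U(X) = 1 + X^2 + X^3 + X^4, g1(X) = 1 + X^2 + X^3, g2(X) = 1 + X + X^2 + X^3
U, g1, g2 = 0b11101, 0b1101, 0b1111
v1 = poly_mul_gf2(U, g1)                  # 1 + X^7
v2 = poly_mul_gf2(U, g2)                  # 1 + X + X^3 + X^4 + X^5 + X^7

def spread(p):
    # Substitute X -> X^2: move every coefficient to an even-degree position
    return sum(1 << (2 * i) for i in range(p.bit_length()) if (p >> i) & 1)

V = spread(v1) ^ (spread(v2) << 1)        # V(X) = V1(X^2) + X V2(X^2)
print([(V >> i) & 1 for i in range(16)])  # multiplexed codeword bits
```

Reading the printed bit list in pairs gives the same codeword 11 01 00 01 01 01 00 11 as the time-domain and matrix approaches.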
STATE DIAGRAM: The state of the encoder is defined by its shift-register contents. For an
(n, k, m) code with k > 1, the ith shift register contains Ki previous information bits;
there are 2^K possible states in total, where K is the total number of stored bits. Each
block of k inputs causes a transition to a new state; hence there are 2^k branches leaving
each state, one for each possible input block. The state of an (n, 1, m) encoder is defined
as the contents of the first (m-1) register stages; thus the encoder can be represented as a
machine with 2^(m-1) possible states. Knowing the present state and the next input, we can
determine the next state and then the output. Transition between the states is governed by
the particular incoming bit (0 or 1). For the encoder of Fig 3.13, m = 3, so there are 4
states, designated as a = 00, b = 10, c = 01, d = 11.
Fig 3.14: State Diagram
There are only two transitions emanating from each state, corresponding to the two possible
input bits (0 or 1). A transition from the present state to the next state is represented by
a solid line for input bit 0 and by a dashed line for input bit 1. The bits appearing on
each path correspond to the output bits. Initially the encoder is in the all-zero state
(state a), and the codeword corresponding to any given input sequence is obtained by noting
down the outputs on the branch labels. An encoder will start in the all-zero state and
should return to the all-zero state.
Procedure for writing the state diagram:
1. The different possible states are identified and state table is created.
2. State transition table is constructed showing clearly the present state, next state, input bits
and output bits.
3. The state diagram is drawn showing all the states of flip-flops, interconnection between
them for various input combinations and the corresponding outputs.
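A state-transition table can be generated programmatically. Note that the sketch below assumes a 4-state rate-1/2 encoder with generator sequences 111 and 101; this choice is an assumption made to match the branch words (11, 10, 01, 00) appearing in the code-tree and sequential-decoding figures, not the 4-tap generators of the time-domain example.

```python
def state_table(g1=(1, 1, 1), g2=(1, 0, 1)):
    # Transition table for a 4-state rate-1/2 encoder; state = last two input bits.
    # Generators are assumed (see lead-in), chosen to match the figures.
    table = {}
    for state in ((0, 0), (1, 0), (0, 1), (1, 1)):       # a, b, c, d
        for u in (0, 1):
            window = (u,) + state                        # (u_l, u_{l-1}, u_{l-2})
            out = (sum(a * b for a, b in zip(window, g1)) % 2,
                   sum(a * b for a, b in zip(window, g2)) % 2)
            table[(state, u)] = ((u, state[0]), out)     # (next state, output)
    return table

t = state_table()
print(t[((0, 0), 1)])    # from state a = 00, input 1: next state b = 10, output 11
```

Each dictionary entry is one arrow of the state diagram: solid-line arrows correspond to the u = 0 entries and dashed-line arrows to the u = 1 entries.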
CODE TREE: The tree diagram adds the dimension of time to the state diagram. The tree
diagram for the convolutional encoder is shown in Figure 3.15. At each successive input bit
time, the encoding procedure can be described by traversing the diagram from left to right,
each tree branch describing an output branch word. The branching rule for finding a codeword
sequence is as follows: if the input bit is a zero, its associated branch word is found by
moving to the next rightmost branch in the upward direction; if the input bit is a one, its
branch word is found by moving to the next rightmost branch in the downward direction.
Assume that the initial contents of the encoder are all zeros. The diagram shows that if the
first input bit is a zero the output branch word is 00, and if the first input bit is a one
the output branch word is 11. Similarly, if the first input bit is a one and the second
input bit is a zero, the second output branch word is 10; if the first input bit is a one
and the second input bit is a one, the second output branch word is 01. A vertical line
represents a "node" and a horizontal line is called a "branch". The encoder output for any
information sequence can be traced through the code-tree paths.
Fig 3.15: Code-Tree Diagram
Sequential Decoding of Convolutional codes:
A sequential decoder works by generating hypotheses about the transmitted codeword
sequence; it computes a metric between these hypotheses and the received signal. It goes
forward as long as the metric indicates that its choices are likely; otherwise, it goes
backward, changing hypotheses until, through a systematic trial-and-error search, it finds a
likely hypothesis. The decoder starts at the time t1 node of the tree and generates both
paths leaving that node. The decoder follows the path which agrees with the received n code
symbols. At the next level in the tree, the decoder again generates both paths leaving that
node and follows the path agreeing with the second group of n code symbols. Proceeding in
this manner, the decoder quickly penetrates the tree.
If the received n code symbols coincide with one of the generated paths, the decoder
follows that path. If there is no agreement, the decoder follows the most likely path but
keeps a cumulative count of the number of disagreements between the received symbols and the
branch words on the path being followed. If two branches appear equally likely, the receiver
uses an arbitrary rule, such as following the zero-input path. At each new level in the
tree, the decoder generates new branches and compares them with the next set of n received
code symbols. The search continues to penetrate the tree along the most likely path while
maintaining the cumulative disagreement count. If the disagreement count exceeds a certain
number, the decoder decides that it is on an incorrect path, backs out of the path, and
tries another. The decoder keeps track of the discarded pathways to avoid repeating any path
excursions.
Example: Consider the below sequence,
Steps:
1. At time t1 we receive symbols 11 and compare them with the branch words leaving the
first node.
2. The most likely branch is the one with branch word 11, so the decoder decides that input
bit one is the correct decoding, and moves to the next level.
3. At time t2 the decoder receives symbols 00 and compares them with the available branch
words 10 and 01 at this second level.
4. There is no 'best' path, so the decoder arbitrarily takes the input bit zero (or branch
word 10) path, and the disagreement count registers a disagreement of 1.
5. At time t3, the decoder receives symbols 01 and compares them with the available branch
words 11 and 00 at this third level.
6. Again, there is no best path, so the decoder arbitrarily takes the input zero (or branch
word 11) path, and the disagreement count is increased to 2.
7. At time t4 the decoder receives symbols 10 and compares them with the available branch
words 00 and 11 at this fourth level.
8. Again, there is no best path, so the decoder takes the input bit zero (or branch word 00)
path, and the disagreement count is increased to 3.
9. But a disagreement count of 3 is the turnaround criterion, so the decoder 'backs out' and
tries the alternative path. The disagreement counter is reset to 2.
10. The alternative path is the input bit one (or branch word 11) path at the t4 level. The
decoder tries this, but compared with the received symbols 10 there is still a disagreement
of 1, and the counter is reset to 3.
11. Since 3 is the turnaround criterion, the decoder backs out of this path and the counter
is reset to 2. All of the alternatives have now been traversed at this t4 level, so the
decoder returns to the node at t3 and resets the counter to 1.
12. At the t3 node the decoder compares the symbols received at time t3, namely 01, with the
untried 00 path. There is a disagreement of 1, and the counter is increased to 2.
13. At the t4 node, the decoder follows the branch word 10 that matches its t4 code symbols
of 10. The counter remains unchanged at 2.
14. At the t5 node, there is no best path, so the decoder follows the upper branch as is the
rule, and the counter is increased to 3.
15. At this count, the decoder backs up, resets the counter to 2, and tries the alternative
path at node t5. Since the alternative branch word is 00, there is a disagreement of 1 with
the received code symbols 01 at time t5, and the counter is again increased to 3.
16. The decoder backs out of this path and the counter is reset to 2. All of the
alternatives have now been traversed at this t5 level, so the decoder returns to the node at
t4 and resets the counter to 1.
Fig 3.16: Sequential decoding Example
17. The decoder tries the alternative path at t4, which raises the metric to 3 since there is a
disagreement in two positions of the branch word. This time the decoder must back up all
the way to the time t2 node because all of the other paths at higher levels have been tried.
The counter is now decremented to zero.
18. At the t2 node, the decoder now follows the branch word 01, and because there is a
disagreement of 1 with the received code symbols 00 at time t2, counter is increased to 1.
The decoder continues in this way and yields the correctly decoded message sequence,
1 1 0 1 1.
Sequential decoding can be viewed as a trial-and-error technique for searching out the
correct path in the code tree. It performs the search in a sequential manner, always
operating on just a single path at a time. If an incorrect decision is made, subsequent
extensions of the path will be wrong. The decoder can eventually recognize its error by
monitoring the path metric. The algorithm is similar to an automobile traveler following a
road map.
Comparison between Linear Block Codes and Convolutional Codes

Sl. No. | Linear Block Codes | Convolutional Codes
1. | Block codes are generated by X = MG (or) X(p) = M(p).G(p). | Convolutional codes are generated by convolution between the message sequence and the generator sequences.
2. | For a block of message bits, an encoded block (code vector) is generated. | Each message bit is encoded separately; for every message bit, two or more encoded bits are generated.
3. | Coding is block by block. | Coding is bit by bit.
4. | Syndrome decoding is used for maximum-likelihood decoding. | Viterbi decoding is used for maximum-likelihood decoding.
5. | Generator matrices, parity-check matrices and syndrome vectors are used for analysis. | Code tree, code trellis and state diagrams are used for analysis.
6. | Generator polynomial and generator matrix are used to get code vectors. | Generator sequences are used to get code vectors.
7. | Error correction and detection capability depends upon the minimum distance dmin. | Error correction and detection capability depends upon the free distance dfree.
Drill Problems
1. For a binary (7, 4) cyclic code with g(X) = 1 + X + X^3 and message bits 1011, design an
encoder using feedback shift registers.
Sol: Number of shift register stages = n - k = 7 - 4 = 3
Given g(X) = 1 + X + X^3, so g0 = 1, g1 = 1, g2 = 0, g3 = 1.

    Message bits remaining | Shift | Register contents | Output
    1 0 1 1                |  -    | 0 0 0 (initial)   |  -
    1 0 1                  |  1    | 1 1 0             |  1
    1 0                    |  2    | 1 0 1             |  1
    1                      |  3    | 1 0 0             |  0
    -                      |  4    | 1 0 0             |  1
    -                      |  5    | 0 1 0             |  0
    -                      |  6    | 0 0 1             |  0
    -                      |  7    | 0 0 0             |  1

After 4 shifts the contents of the register are the parity bits "100". The remaining (n-k)
shifts clear the encoding register by moving the parity bits to the output register.
C = 1 0 0 1 0 1 1  (parity bits 100, message bits 1011)
2. For a (7, 4) cyclic code, the received vector Z is 1110101 and g(X) = 1 + X + X^3. Draw
the syndrome calculation circuit and correct the error, if any.
Sol: Given g(X) = 1 + X + X^3, so g0 = 1, g1 = 1, g2 = 0, g3 = 1. The received bits are
entered highest-order bit first.

    No. of shifts | Input (Z) | Shift register | Comment
    -             | -         | 0 0 0          | initial
    1             | 1         | 1 0 0          |
    2             | 0         | 0 1 0          |
    3             | 1         | 1 0 1          |
    4             | 0         | 1 0 0          |
    5             | 1         | 1 1 0          |
    6             | 1         | 1 1 1          |
    7             | 1         | 0 0 1          | syndrome non-zero: error
    8             | 0         | 1 1 0          |
    9             | 0         | 0 1 1          |
    10            | 0         | 1 1 1          |
    11            | 0         | 1 0 1          |
    12            | 0         | 1 0 0          | End of shift

When all 7 received bits have been entered into the syndrome calculator, 0's are fed in from
the 8th shift onwards. Each time a 0 is fed into the circuit, the fresh shift-register
contents are tabulated. This process of feeding 0's is continued until the shift-register
contents read 100; in general, for an (n-k)-stage shift register the contents should read
100...0. In the above table we find that the register reads 100 at the 12th shift, which
locates the error at the third received bit, as shown below.

Z = 1 1 1 0 1 0 1   (error in the third bit)
Error vector E = 0 0 1 0 0 0 0
C = Z + E = 1110101 + 0010000 = 1100101 (corrected codeword)
3. The generator matrix for a block code is given below. Find all the code vectors of this
code.

        | 1 0 0 ⋮ 1 0 0 |
    G = | 0 1 0 ⋮ 0 1 1 |
        | 0 0 1 ⋮ 1 0 1 |

Sol:
The generator matrix G of a systematic code is generally represented as
    [G]k x n = [I k x k ⋮ P k x q]
On comparing,
The number of message bits, k = 3
The number of codeword bits, n = 6
The number of check bits, q = n - k = 6 - 3 = 3
Hence, the code is a (6, 3) systematic linear block code. From the generator matrix, we have

    Identity matrix I k x k = | 1 0 0 |      Coefficient submatrix P k x q = | 1 0 0 |
                              | 0 1 0 |                                     | 0 1 1 |
                              | 0 0 1 |                                     | 1 0 1 |

Therefore, the check-bit vector is given by
    [C]1 x q = [M]1 x k [P]k x q
On substituting the matrix form,
    [c1 c2 c3] = [m1 m2 m3] | 1 0 0 |
                            | 0 1 1 |
                            | 1 0 1 |
From the matrix multiplication, we have
    c1 = (1·m1) ⊕ (0·m2) ⊕ (1·m3) = m1 ⊕ m3
    c2 = (0·m1) ⊕ (1·m2) ⊕ (0·m3) = m2
    c3 = (0·m1) ⊕ (1·m2) ⊕ (1·m3) = m2 ⊕ m3
Hence the check bits (c1 c2 c3) for each block of (m1 m2 m3) message bits can be determined.
(i) For the message block (m1 m2 m3) = (0 0 0), we have
    c1 = m1 ⊕ m3 = 0 ⊕ 0 = 0,   c2 = m2 = 0,   c3 = m2 ⊕ m3 = 0 ⊕ 0 = 0
(ii) For the message block (m1 m2 m3) = (0 0 1), we have
    c1 = m1 ⊕ m3 = 0 ⊕ 1 = 1,   c2 = m2 = 0,   c3 = m2 ⊕ m3 = 0 ⊕ 1 = 1
Similarly, we can obtain the check bits for the other message blocks. The table lists all
the message bits, their check bits and the codeword vectors.

    Message (m1 m2 m3) | Check bits (c1 = m1⊕m3, c2 = m2, c3 = m2⊕m3) | Code word (m1 m2 m3 c1 c2 c3)
    0 0 0              | 0 0 0                                        | 0 0 0 0 0 0
    0 0 1              | 1 0 1                                        | 0 0 1 1 0 1
    0 1 0              | 0 1 1                                        | 0 1 0 0 1 1
    0 1 1              | 1 1 0                                        | 0 1 1 1 1 0
    1 0 0              | 1 0 0                                        | 1 0 0 1 0 0
    1 0 1              | 0 0 1                                        | 1 0 1 0 0 1
    1 1 0              | 1 1 1                                        | 1 1 0 1 1 1
    1 1 1              | 0 1 0                                        | 1 1 1 0 1 0
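The whole codeword table can be generated from the P submatrix. A short sketch (the helper name is mine):

```python
from itertools import product

def all_codewords(P):
    # Enumerate codewords of a systematic (n,k) LBC: X = [M | M.P] over GF(2)
    k, q = len(P), len(P[0])
    words = []
    for m in product((0, 1), repeat=k):
        c = [sum(m[i] * P[i][j] for i in range(k)) % 2 for j in range(q)]
        words.append(list(m) + c)
    return words

# P submatrix of the (6, 3) code above
P = [[1, 0, 0],
     [0, 1, 1],
     [1, 0, 1]]
for w in all_codewords(P):
    print(w)
```

Running this prints the eight rows of the table above, message bits first and check bits last.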
4. The parity-check matrix of a (7, 4) linear block code is given by

        | 1 1 1 0 ⋮ 1 0 0 |
    H = | 1 1 0 1 ⋮ 0 1 0 |
        | 1 0 1 1 ⋮ 0 0 1 |
i. Find the generator matrix (G).
ii. List all the code vectors.
iii. How many errors can be detected?
iv. How many errors can be corrected?
Solution:
For a (7, 4) linear block code, we have n = 7, k = 4 and q = n – k = 7 – 4 = 3.
Thus n = 2^q - 1 = 2^3 - 1 = 7.
Hence, the given code is a Hamming code.
(i) To find the generator matrix:
We know that [H]q x n = [P^T ⋮ I q x q]
Given [H]3 x 7 above, the transpose of the submatrix is

    [P^T]3 x 4 = | 1 1 1 0 |
                 | 1 1 0 1 |
                 | 1 0 1 1 |

The submatrix is given by changing rows to columns:

    [P]4 x 3 = | 1 1 1 |
               | 1 1 0 |
               | 1 0 1 |
               | 0 1 1 |

Therefore, the generator matrix is given by
    [G]k x n = [I k x k ⋮ P k x q]

    [G]4 x 7 = | 1 0 0 0 ⋮ 1 1 1 |
               | 0 1 0 0 ⋮ 1 1 0 |
               | 0 0 1 0 ⋮ 1 0 1 |
               | 0 0 0 1 ⋮ 0 1 1 |
(ii) To find all the code words
The check bits vector is given by
[𝐶]1 x 𝑞 = [𝑀]1 x 𝑘 [𝑃] 𝑘 x 𝑞
Hence,
    [c1 c2 c3] = [m1 m2 m3 m4] | 1 1 1 |
                               | 1 1 0 |
                               | 1 0 1 |
                               | 0 1 1 |
From the matrix multiplication, we have
c1 = m1  m2  m3
c2 = m1  m2  m4
c3 = m1  m3  m4
We can now determine the check bits (c1 c2 c3) for each block of (m1 m2 m3 m4) message bits. The
generated code words are given in Table.
    Sl. No. | Message (m1 m2 m3 m4) | Check bits (c1 = m1⊕m2⊕m3, c2 = m1⊕m2⊕m4, c3 = m1⊕m3⊕m4) | Code word Vector | Weight w(X)
    1.      | 0 0 0 0               | 0 0 0                                                    | 0 0 0 0 0 0 0    | 0
    2.      | 0 0 0 1               | 0 1 1                                                    | 0 0 0 1 0 1 1    | 3
    3.      | 0 0 1 0               | 1 0 1                                                    | 0 0 1 0 1 0 1    | 3
    4.      | 0 0 1 1               | 1 1 0                                                    | 0 0 1 1 1 1 0    | 4
    5.      | 0 1 0 0               | 1 1 0                                                    | 0 1 0 0 1 1 0    | 3
    6.      | 0 1 0 1               | 1 0 1                                                    | 0 1 0 1 1 0 1    | 4
    7.      | 0 1 1 0               | 0 1 1                                                    | 0 1 1 0 0 1 1    | 4
    8.      | 0 1 1 1               | 0 0 0                                                    | 0 1 1 1 0 0 0    | 3
    9.      | 1 0 0 0               | 1 1 1                                                    | 1 0 0 0 1 1 1    | 4
    10.     | 1 0 0 1               | 1 0 0                                                    | 1 0 0 1 1 0 0    | 3
    11.     | 1 0 1 0               | 0 1 0                                                    | 1 0 1 0 0 1 0    | 3
    12.     | 1 0 1 1               | 0 0 1                                                    | 1 0 1 1 0 0 1    | 4
    13.     | 1 1 0 0               | 0 0 1                                                    | 1 1 0 0 0 0 1    | 3
    14.     | 1 1 0 1               | 0 1 0                                                    | 1 1 0 1 0 1 0    | 4
    15.     | 1 1 1 0               | 1 0 0                                                    | 1 1 1 0 1 0 0    | 4
    16.     | 1 1 1 1               | 1 1 1                                                    | 1 1 1 1 1 1 1    | 7
The smallest weight of any non-zero code vector is 3. Hence the minimum hamming
distance is dmin = 3.
(iii) Error detection capability
Error detection of up to 's' errors per word requires dmin ≥ s + 1. The minimum distance is
dmin = 3; therefore 3 ≥ s + 1, i.e., s ≤ 2. Thus errors in up to two bits can be detected.
(iv) Error correction capability
Error correction of up to 't' errors per word requires dmin ≥ 2t + 1. Hence 3 ≥ 2t + 1,
i.e., 2t ≤ 2, so t ≤ 1. Thus an error in one bit can be corrected.
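Both capabilities follow from dmin, which for a linear code equals the smallest weight of any nonzero codeword and can be found by enumerating the code (a brute-force sketch, fine for small k; helper names are mine):

```python
from itertools import product

def min_distance(codewords):
    # d_min of a linear code = smallest Hamming weight of a nonzero codeword
    return min(sum(w) for w in codewords if any(w))

# (7, 4) Hamming code from the table above: P rows give c1, c2, c3
P = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [0, 1, 1]]
words = [list(m) + [sum(m[i] * P[i][j] for i in range(4)) % 2 for j in range(3)]
         for m in product((0, 1), repeat=4)]
d = min_distance(words)
print(d, "detects", d - 1, "corrects", (d - 1) // 2)   # 3 detects 2 corrects 1
```

The printed values match the hand calculation: dmin = 3, so s = 2 errors detected and t = 1 error corrected.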
5. The generator polynomial of a (7, 4) cyclic code is G(p) = p^3 + p + 1. Find all the code
vectors for the code in non-systematic form.
Solution:
Here n = 7, k = 4; therefore q = n - k = 7 - 4 = 3.
Since k = 4, there will be a total of 2^k = 2^4 = 16 message vectors (from 0 0 0 0 to
1 1 1 1). Each can be coded into a 7-bit codeword.
(i) Consider any message vector as
    M = (m3 m2 m1 m0) = (1 0 0 1)
The general message polynomial is
    M(p) = m3 p^3 + m2 p^2 + m1 p + m0, for k = 4
For the message vector (1 0 0 1), the polynomial is
    M(p) = 1·p^3 + 0·p^2 + 0·p + 1 = p^3 + 1
The given generator polynomial is G(p) = p^3 + p + 1.
In non-systematic form, the codeword polynomial is
    X(p) = M(p) · G(p)
On substituting,
    X(p) = (p^3 + 1)(p^3 + p + 1)
         = p^6 + p^4 + p^3 + p^3 + p + 1
         = p^6 + p^4 + (1 ⊕ 1)p^3 + p + 1 = p^6 + p^4 + p + 1
         = 1·p^6 + 0·p^5 + 1·p^4 + 0·p^3 + 0·p^2 + 1·p + 1
The code vector corresponding to this polynomial is
    X = (x6 x5 x4 x3 x2 x1 x0) = (1 0 1 0 0 1 1)
(ii) Consider another message vector as
    M = (m3 m2 m1 m0) = (0 1 1 0)
The polynomial is
    M(p) = 0·p^3 + 1·p^2 + 1·p + 0 = p^2 + p
The codeword polynomial is X(p) = M(p) · G(p):
    X(p) = (p^2 + p)(p^3 + p + 1)
         = p^5 + p^3 + p^2 + p^4 + p^2 + p
         = p^5 + p^4 + p^3 + (1 ⊕ 1)p^2 + p
         = p^5 + p^4 + p^3 + p
         = 0·p^6 + 1·p^5 + 1·p^4 + 1·p^3 + 0·p^2 + 1·p + 0
The code vector is X = (0 1 1 1 0 1 0).
Similarly, we can find the code vectors for the other message vectors using the same
procedure.
6. The generator polynomial of a (7, 4) cyclic code is G(p) = p^3 + p + 1. Find all the code
vectors for the code in systematic form.
Solution:
Here n = 7, k = 4; therefore q = n - k = 7 - 4 = 3.
Since k = 4, there will be a total of 2^k = 2^4 = 16 message vectors (from 0 0 0 0 to
1 1 1 1). Each can be coded into a 7-bit codeword.
(i) Consider any message vector as
    M = (m3 m2 m1 m0) = (1 1 1 0)
By the message polynomial, M(p) = m3 p^3 + m2 p^2 + m1 p + m0, for k = 4.
For the message vector (1 1 1 0), the polynomial is
    M(p) = 1·p^3 + 1·p^2 + 1·p + 0 = p^3 + p^2 + p
The given generator polynomial is G(p) = p^3 + p + 1.
The check-bit vector polynomial is
    C(p) = rem[ p^q · M(p) / G(p) ]
         = rem[ p^3 (p^3 + p^2 + p) / (p^3 + p + 1) ]
         = rem[ (p^6 + p^5 + p^4) / (p^3 + p + 1) ]
We perform the division as follows (mod-2 addition):

                        p^3 + p^2
    p^3 + p + 1 )  p^6 + p^5 + p^4
                   p^6       + p^4 + p^3
                   ---------------------
                         p^5       + p^3
                         p^5       + p^3 + p^2
                         ---------------------
                                           p^2   (remainder)

Thus the remainder polynomial is p^2, i.e. the check-bit polynomial is
    C(p) = 1·p^2 + 0·p + 0
The check bits are c = (1 0 0).
Hence the code vector for the message vector (1 1 1 0) in systematic form is
    X = (m3 m2 m1 m0 ⋮ c2 c1 c0) = (1 1 1 0 1 0 0)
Similarly, we can find the code vectors for the other message vectors using the same
procedure.
7. Consider a data (message) block of 1 1 0 1. The hamming code adds three parity bits
to the message bits in such a way that both message bits and check bits get mixed. The
check bit locations are as shown below.
1 2 3 4 5 6 7 → Bit location
p1 p2 D p3 D D D
 Here p1, p2 and p3 represent the parity check bits to be added. D represents the data
(message) bits. Then we have
1 2 3 4 5 6 7
p1 p2 1 p3 1 0 1
 The first parity bit, p1 provides even parity from a check of bit locations 3, 5 and 7. Here
they are 1, 1 and 1 respectively. Hence p1 will therefore be 1 to achieve even parity.
 The second parity bit, p2 checks locations 3, 6 and 7. Here they are 1, 0 and 1 respectively.
Hence p2 will be 0 to achieve even parity.
 The third parity bit p3, checks locations 5, 6 and 7. Here they are 1, 0 and 1 respectively.
Hence p3 will be 0 to achieve even parity.
 The resulting 7-bit code word generated is as below.
1 2 3 4 5 6 7
p1 p2 D p3 D D D
1 0 1 0 1 0 1 → Code word transmitted
 Suppose that this code word is altered during transmission. Assume that location 5 changes
from 1 to 0. Hence the received code word with error is given below.
1 2 3 4 5 6 7
p1 p2 D p3 D D D
1 0 1 0 0 0 1
 At the decoder, we evaluate the parity bits to determine where the error occurred. This is
accomplished by assigning a 1 to any parity bit that is incorrect and a 0 to any parity bit
that is correct.
 We check p1 for locations 3, 5 and 7. Here they are 1, 0 and 1. For even parity p1 should
be 0, but we have received p1 as 1, which is incorrect. We assign a 1 to p1.
 We check p2 for locations 3, 6 and 7. Here they are 1, 0 and 1 respectively. For even parity
p2 should be 0 and we have also received p2 as 0, which is correct. We assign a 0 to p2.
 We check p3 for locations 5, 6 and 7. Here they are 0, 0 and 1 respectively. For even parity
p3 should be 1, but we have received p3 as 0, which is incorrect. We assign a 1 to p3.
 The three assigned values, read in the order (p3 p2 p1), give the binary number 1 0 1, which
has a decimal value of 5. This means that the bit location containing the error is 5. The
decoder then changes the bit in location 5 from 0 to 1.
 The Hamming code is therefore capable of locating a single error. However, it fails if
multiple errors occur in one data block.
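The encode-corrupt-correct cycle above can be sketched as follows. The function names are illustrative, and 1-based indexing is used to mirror the bit locations in the example.

```python
def hamming74_encode(d3, d5, d6, d7):
    """Place data at positions 3, 5, 6, 7 and compute even-parity bits."""
    bits = [0] * 8                          # index 0 unused (1-based positions)
    bits[3], bits[5], bits[6], bits[7] = d3, d5, d6, d7
    bits[1] = bits[3] ^ bits[5] ^ bits[7]   # p1 checks locations 3, 5, 7
    bits[2] = bits[3] ^ bits[6] ^ bits[7]   # p2 checks locations 3, 6, 7
    bits[4] = bits[5] ^ bits[6] ^ bits[7]   # p3 checks locations 5, 6, 7
    return bits[1:]

def hamming74_correct(word):
    """Return the corrected 7-bit word; fixes at most one bit error."""
    bits = [0] + list(word)
    s1 = bits[1] ^ bits[3] ^ bits[5] ^ bits[7]
    s2 = bits[2] ^ bits[3] ^ bits[6] ^ bits[7]
    s3 = bits[4] ^ bits[5] ^ bits[6] ^ bits[7]
    pos = s3 * 4 + s2 * 2 + s1              # (p3 p2 p1) read as a binary number
    if pos:
        bits[pos] ^= 1                      # flip the erroneous bit
    return bits[1:]

cw = hamming74_encode(1, 1, 0, 1)           # message 1 1 0 1
print(cw)                                   # -> [1, 0, 1, 0, 1, 0, 1]
bad = cw.copy(); bad[4] ^= 1                # corrupt location 5
print(hamming74_correct(bad))               # -> [1, 0, 1, 0, 1, 0, 1]
```

A syndrome of 0 means no single-bit error was detected, so an uncorrupted codeword passes through unchanged.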
Channel Coding Techniques and Error Control Methods

  • 1. UNIT-III CHANNEL CODING  Types of transmission errors  Need for error control coding  Linear Block Codes (LBC): Description of LBC, Generation  Syndrome and error detection  Minimum distance of Linear block code  Error correction and error detection capabilities  Standard array and syndrome decoding  Hamming codes  Binary cyclic codes (BCC): Description of cyclic codes, encoding  Decoding and Error correction using shift registers  Convolution codes: description, encoding  Decoding-Code tree, state diagram.
  • 2. UNIT-III Channel Coding INTRODUCTION Channel coding is intended to introduce controlled redundancy in order to provide some amount of error-detecting and correcting capability to the data being transmitted. This controlled redundancy helps in detecting erroneously decoded bits and makes it possible to correct the errors before passing on the data to the source decoder. Channel coding may be used even for conserving transmitted power, for a given probability of error. Channel coding may be used either for error- detection or error-correction, depending on the amount of redundancy introduced. Need for Error Control Coding The primary communication resources are the transmitted signal power and channel bandwidth, together determine the signal energy per bit-to-noise power density ratio, Eb/No. This ratio Eb/No uniquely determines the probability of error (Pe) or bit error rate (BER), for a particular modulation scheme. The channel induced noise can introduce errors in the transmitted binary data. ie., a bit 0 may change to bit 1 or a bit 1 may change to bit 0. The reliability of data transmission gets severely affected because of these errors. Accordingly, in practice with the available modulation schemes, it is not possible to provide acceptable data quality of low error performance. Also, there is a limitation on the achieved maximum value of the ratio Eb/No. Therefore, for a fixed Eb/No, the only practical option available for changing data quality from problematic to acceptable level is to use coding. Another practical requirement for the use of coding is to reduce the required Eb/No for a fixed bit error rate. The reduction in Eb/No will reduce the required transmitted power. This in turn, reduces the hardware costs by requiring a smaller antenna size. Channel coding is used for the reliable transmission of digital information over the channel. 
Channel coding improves communication performance by enabling the transmitted signals to better withstand the effects of various channel impairments, such as noise, interference and fading. Channel coding methods introduce controlled redundancy in order to provide error detecting and correcting capability to the data being transmitted. Hence channel coding is also called as error control coding. Here we shall see in detail about error control coding. Channel coding increases the transmission bandwidth as the data rate is increased due to the redundancy introduced. It also increases the system complexity in the form of a channel encoder at the transmitter and a channel decoder at the receiver. Types of transmission errors: Depending upon the nature of the noise, the codewords transmitted through the channel is affected differently. Hence, there is a possibility that bit 0 transmitted may be received as bit 1 or vice versa. This is called as the error introduced by the noise in the transmitted code word.
  • 3. There are mainly two types of errors introduced during data transmission. 1) Random error and 2) Burst error. Both random error and burst errors occur in the contents of a message. Hence they may also be referred to as “content errors”. Alternatively, it is possible that a data block may be lost in the network as it has been delivered to a wrong destination. It is referred as the “Flow Integrity error”. 1. Random Errors: Random errors are caused by Additive White Gaussian Noise (AWGN) in the channel. Noise affects the transmitted symbols independently. Hence, the error introduced in the particular interval does not affect the performance of the system in subsequent intervals. The errors are totally uncorrelated. Therefore, they are also called as independent errors. The channels that are mostly influenced by white Gaussian noise are satellite and deep-space communication links. The use of forward-error correcting codes is well suited for these channels. 2. Burst Errors: Burst errors are caused by Impulse noise in the channel. Impulse noise affects several consecutive bits and errors tend to occur in clusters. Hence the burst errors are dependent on each other in successive message intervals. The channels that are mostly influenced by impulse noise are telephone channels and radio channels. In telephone channels, burst of errors result from impulse noise on circuits due to lightning, and transients in central office switching equipment. In radio channels, bursts of errors are produced by atmospherics, multi-path fading, and interferences from other users of the frequency band. An effective method for error protection over these channels is based on ARQ method. 3. Compound Errors: In many of the real communication channels, there is a possibility that both the white Gaussian noise and impulse noise will affect the channel. Hence the errors introduced will be of both random (independent) and burst errors. 
Therefore, if there is a mixture of random and burst errors, then such errors are called as compound errors. ERROR CONTROL CODING METHODS: There are two main methods used for error control coding. They are 1) Error detection and retransmission or Automatic Repeat Request (ARQ) 2) Forward acting Error Correction (FEC) (error detection and correction) Sometimes, a hybrid system employing both FEC and ARQ may be used. Automatic Repeat Request (ARQ): In this receiver detects an error and request the transmitter for retransmission that does not endeavour to rectify the detected error. It requires a return path or feedback path from the receiver to the transmitter. It makes use of error-detection at the receiver. Basically there are two types of ARQ. They are
  • 4. 1) Stop-and-wait ARQ: In the stop-and-wait ARQ, the transmitter transmits a codeword and then waits. On receiving the transmitted codeword, the receiver checks up whether there are any errors in it. If no errors are detected, the receiver sends an ‘acknowledgement’ (ACK) signal through the return or feedback path. On receipt of an acknowledgement (ACK) signal, the transmitter transmits the next codeword. In case, one or more errors are detected in the received codeword, the receiver sends a negative acknowledgement (NAK) to the transmitter, which, on receipt of the NAK, retransmits the same codeword that was sent earlier. Fig 3.1: Stop-and-wait ARQ Disadvantage: A serious drawback of the stop-and-wait system is that the time interval between two successive transmissions is slightly greater than the round trip delay. So, in satellite channels, in which the round trip delay is quite large, use of stop-and-wait ARQ will very much degrade the transmission efficiency. Advantage: These ARQ systems are very simple and so they are used on terrestrial microwave links as the round-trip delay is very small in these links. 2) Continuous ARQ: Continuous ARQ systems are of two types: (a) Go back-N ARQ systems: In a Go back-N ARQ system, the transmitter sends the message continuously without waiting for an ACK signal from the receiver. However, if the receiver detects an error in say the kth message, a NAK signal is sent to the transmitter indicating that the kth message is in error. The transmitter, on receiving the NAK signal, goes back to the kth codeword and starts transmitting all the codewords from the kth onwards. The Go back-N ARQ is quite useful in satellite links in which the round trip delay is quite large. But buffering is generally its greatest drawback.
  • 5. Fig 3.2: Go-back-N ARQ with N=7 (b) Selective repeat ARQ systems: In a selective repeat ARQ system, the transmitter goes on sending the messages one after the other without waiting for an ACK. In case the receiver detects an error in the kth codeword, it informs the transmitter indicating that the kth word is in error. The transmitter then immediately sends the kth word and then resumes transmission of the messages in a sequential order starting from where it broke the sequence in order to send the kth word. Fig 3.3: Selective-Repeat ARQ From the throughput efficiency point of view, the selective ARQ is the best among all the ARQ systems; but its implementation is expensive. Forward Error-Correction (FEC): It consists of a channel encoder at the transmitter and a channel decoder at the receiver, as shown in Fig. 3.4, and depends upon error-correcting codes. Fig 3.4: Block diagram of FEC The FEC encoder and modulator are shown as separate units in the transmitter and correspondingly the detector and FEC decoder are also shown as two separate units in the receiver. However, in certain cases, where bandwidth efficiency is of major concern, the functions of the FEC encoder
  • 6. and modulator at the transmitter and those of the FEC decoder and the demodulator at the receiver are combined. The advantages and disadvantages in using FEC are as follows. a. No return path, or feedback channel is needed as in the case of ARQ systems. b. The ratio of the number of information, or message bits to the total number of bits transmitted, defined as the information throughput efficiency, is constant in FEC systems and constant overall delay is obtained. c. FEC systems need expensive input and output buffers for the encoders and decoders and sometimes buffer overflows cause problems. d. When very high reliability is needed, selection of an appropriate error-correcting code and implementing its decoding algorithm may be difficult. e. Reliability of the received data is sensitive to channel degradations. TYPES OF CODES: Error correcting codes are divided into two broad categories: Block Codes: it consists of (n-k) number of check bits (redundant bits) being added to k number of information bits to form a n-bit codeword. These (n-k) number of check bits are derived from k-information bits. At the receiver, these check bits are used to detect and correct errors which may occur in the entire n-bit code word. Convolutional Codes: Block of n-code digits not only depends on the block of k information bits but also on the preceding (N-1) blocks of information bits. In this check bits are continuously interleaved with information bits which helps to correct errors not only in that particular block but also in other blocks as well. Block codes are better suited for error detections and convolutional codes for error correction. IMPORTANT TERMS USED IN ERROR CONTROL CODING: Codeword: The encoded block of ‘n’ bits is called a codeword. It contains message bits (k) and redundant check bits. Block length: The number of bits ‘n’ after coding is called the block length of the code. 
Code rate: The code rate ‘r’ is defined as the ratio of message bits (k) and the encoder output bits (n). Hence, Code rate, r = k n where 0 < r < 1 Code Vector: An ‘n’ bit code word can be visualized in an N-dimensional space as a vector whose elements or co-ordinates are the bits in the code word. Code Efficiency: The code efficiency is the ratio of the message bits to the transmitted bits for that block by the encoder. Hence, Code efficiency = Message bits Transmitted bits = [ k n × 100] %
Weight of the code: the number of non-zero elements in a code vector is called the code vector weight.
Hamming distance: the Hamming distance d between two code vectors is the number of elements in which they differ. For example, let X = 101 and Y = 110; the Hamming distance between X and Y is d = 2.
Minimum Hamming distance: the smallest Hamming distance between any pair of valid code vectors is the minimum Hamming distance, d_min.

Modulo-2 arithmetic
(i) Addition: modulo-2 addition is the exclusive-OR operation:
    0 ⊕ 0 = 0,  0 ⊕ 1 = 1,  1 ⊕ 0 = 1,  1 ⊕ 1 = 0
(ii) Multiplication: multiplication of binary digits follows AND logic:
    0 · 0 = 0,  0 · 1 = 0,  1 · 0 = 0,  1 · 1 = 1

LINEAR BLOCK CODES (LBC):
Linear block coding encodes a block of source information bits into a longer block by adding error-control bits to combat channel errors induced during transmission. An (n, k) linear block code encodes k message bits into a block of n bits by adding (n-k) check bits.

Fig 3.5: Systematic format of an LBC

An (n, k) block code is said to be an (n, k) linear block code if it satisfies the following condition: let C1 and C2 be any two code vectors (of n bits each) belonging to the set of (n, k) block codewords. If C1 ⊕ C2 is also an n-bit codeword belonging to the same set, the code is an (n, k) LBC. In other words, the linear (modulo-2) sum of any two codewords must be another code vector of the same (n, k) block code. An (n, k) LBC is said to be systematic if the k message bits appear either at the beginning or at the end of each codeword.
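As a quick sketch of the definitions above (assuming code vectors are plain Python lists of 0/1 integers):

```python
def hamming_weight(v):
    """Number of non-zero elements in a code vector."""
    return sum(v)

def hamming_distance(x, y):
    """Number of positions in which x and y differ; equivalently, the
    weight of the modulo-2 (XOR) sum of the two vectors."""
    return sum(a ^ b for a, b in zip(x, y))

X = [1, 0, 1]
Y = [1, 1, 0]
print(hamming_distance(X, Y))   # 2, as in the example above
```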
The codeword structure for a systematic code is:

    C = [(n-k) check bits | k message bits]   or   C = [k message bits | (n-k) check bits]

i.e. C = (C_0, C_1, C_2, C_3, ......, C_{n-1}). For a systematic code (check bits first),

    C_i = b_i,            i = 0, 1, ....., (n-k-1)
    C_i = m_{i-(n-k)},    i = (n-k), (n-k+1), ....., (n-1)

Generator Matrix [G]:
Let the message block of k bits be represented as a row vector, called the message vector:

    [M]_{1×k} = [m_0, m_1, m_2, m_3, ......, m_{k-1}]        (1)

where m_0, m_1, m_2, m_3, ......, m_{k-1} are each either 0 or 1. Thus there are 2^k distinct message vectors. The channel encoder systematically adds (n-k) check bits to form the (n, k) LBC:

    Parity (check) bits: [B]_{1×(n-k)} = [b_0, b_1, b_2, b_3, ......, b_{n-k-1}]        (2)

Thus we have 2^k distinct code vectors, one for each distinct message vector, each an n-bit codeword:

    [C]_{1×n} = [C_0, C_1, C_2, C_3, ......, C_{n-1}]        (3)

The 2^k distinct code vectors, each of length n, form a subspace of the set of all possible 2^n vectors of length n. Among the 2^n distinct n-bit binary sequences, only 2^k are valid code vectors; the remaining (2^n - 2^k) vectors are invalid and correspond to error patterns. The code vector is written as

    C = [b | m] = [b_0, b_1, b_2, b_3, ......, b_{n-k-1}, m_0, m_1, m_2, m_3, ......, m_{k-1}]        (4)

The (n-k) check bits b_0, b_1, b_2, b_3, ......, b_{n-k-1} are derived from the k message bits using a predetermined rule:

    b_0       = m_0 p_{0,0}       + m_1 p_{1,0}       + m_2 p_{2,0}       + ...... + m_{k-1} p_{k-1,0}
    b_1       = m_0 p_{0,1}       + m_1 p_{1,1}       + m_2 p_{2,1}       + ...... + m_{k-1} p_{k-1,1}
    ⋮
    b_{n-k-1} = m_0 p_{0,n-k-1}   + m_1 p_{1,n-k-1}   + m_2 p_{2,n-k-1}   + ...... + m_{k-1} p_{k-1,n-k-1}        (5)
Expressing eq. (5) in matrix form,

    [b_0, b_1, ......, b_{n-k-1}] = [m_0, m_1, ......, m_{k-1}] · [P]

or

    [b] = [m][P]_{k×(n-k)}        (6)

where

    [P]_{k×(n-k)} = | p_{0,0}    p_{0,1}    ...  p_{0,n-k-1}   |
                    | p_{1,0}    p_{1,1}    ...  p_{1,n-k-1}   |
                    |    ⋮          ⋮                ⋮          |
                    | p_{k-1,0}  p_{k-1,1}  ...  p_{k-1,n-k-1} |

The codeword is then

    C = [b | m] = [m][P | I_k] = [m][G]        (7)

where G is the generator matrix, given by

    [G] = [P | I_k]   or   [G] = [I_k | P]        (8)

          | p_{0,0}    p_{0,1}    ...  p_{0,n-k-1}    1 0 ... 0 |   = g_0
    [G] = | p_{1,0}    p_{1,1}    ...  p_{1,n-k-1}    0 1 ... 0 |   = g_1
          |    ⋮                           ⋮             ⋮      |
          | p_{k-1,0}  p_{k-1,1}  ...  p_{k-1,n-k-1}  0 0 ... 1 |   = g_{k-1}
            └──────────── P matrix ─────────────┘  └── I_k ──┘

The k rows of the generator matrix G are linearly independent, in the sense that no linear combination of any of its rows results in another of its rows. Each row of G is itself a codeword. The matrix G is therefore said to be in canonical form.
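The encoding rule C = m·G of eq. (7) can be sketched in a few lines. The (7, 4) parity matrix P below is an illustrative assumption for the example, not a code specified in the text:

```python
from itertools import product

def encode(m, G):
    """C = m*G over GF(2): each codeword bit is the mod-2 inner product
    of the message with one column of G."""
    return [sum(mi & gij for mi, gij in zip(m, col)) % 2
            for col in zip(*G)]

# Illustrative (7,4) parity matrix P (an assumption, not from the text);
# G = [P | I_k] as in eq. (8), so codewords are [parity bits | message bits].
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
k = 4
G = [P[i] + [1 if j == i else 0 for j in range(k)] for i in range(k)]

C = encode([1, 0, 1, 1], G)   # 3 parity bits followed by the 4 message bits

# Minimum distance of a linear code = minimum weight of a non-zero codeword:
dmin = min(sum(encode(list(m), G)) for m in product([0, 1], repeat=k)
           if any(m))
```

For this particular P the brute-force search gives dmin = 3, so the code can correct t = (dmin - 1)//2 = 1 error per codeword.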
Consider an (n-k) × n matrix H defined as

    [H]_{(n-k)×n} = [I_{n-k} | P^T]   or   [P^T | I_{n-k}]        (9)

Then

    [H^T]_{n×(n-k)} = [ I_{n-k} ]
                      [ P       ]        (10)

With [G] = [P | I_k] or [I_k | P],        (11)

    [H][G]^T = [I_{n-k} | P^T] [ P^T ] = P^T + P^T = [0]        (12)
                               [ I_k ]

    [G][H]^T = [P | I_k] [ I_{n-k} ] = P + P = [0]        (13)
                         [ P       ]

Thus H·G^T = [0] and G·H^T = [0]. We know that

    [C] = [m][G]        (14)

Post-multiplying both sides by H^T,

    C·H^T = [m]·G·H^T = [m]·[0] = 0

    ∴ C·H^T = 0        (15)

With G in the canonical form of eq. (8),        (16)

    [H] = [I_{n-k} | P^T]        (17)

          | 1 0 ... 0   p_{0,0}      p_{1,0}      ...  p_{k-1,0}     |
    [H] = | 0 1 ... 0   p_{0,1}      p_{1,1}      ...  p_{k-1,1}     |        (18)
          | ⋮     ⋮        ⋮                              ⋮          |
          | 0 0 ... 1   p_{0,n-k-1}  p_{1,n-k-1}  ...  p_{k-1,n-k-1} |
Let the codewords be generated as

    C = [C_0, C_1, C_2, C_3, ......, C_{n-1}] = [b_0, b_1, ......, b_{n-k-1}, m_0, m_1, ......, m_{k-1}]

written as the column vector C^T.        (19)

Then

    H·C^T = [I_{n-k} | P^T] · [b_0, b_1, ......, b_{n-k-1}, m_0, m_1, ......, m_{k-1}]^T = [0]        (20)

This gives (n-k) equations, which are the parity-check equations. Taking the first of them,

    b_0 + m_0 p_{0,0} + m_1 p_{1,0} + ...... + m_{k-1} p_{k-1,0} = 0

Hence,

    b_0 = m_0 p_{0,0} + m_1 p_{1,0} + ...... + m_{k-1} p_{k-1,0}        (21)

which is the parity check equation that gives the parity check bit b_0. In the same way, the (n-k) equations give the (n-k) parity check bits of the code vector; the remaining elements are the k message bits. The encoding circuit for a systematic (n, k) LBC is shown in Fig 3.6, in which the k message bits are denoted u_0, u_1, u_2, u_3, ......, u_{k-1} and the parity bits are denoted v_0, v_1, v_2, ......, v_{n-k-1}.
Fig 3.6: Encoding circuit for a systematic (n, k) LBC

The G-matrix of a linear block code is useful for generating the code vectors (at the output of the channel encoder in the transmitter), while the H-matrix is useful at the decoder in the receiver. Since eq. (15) is satisfied by C if and only if C is a legitimate code vector, the decoder substitutes the received vector r for C in eq. (15) and checks whether r satisfies it. If it does, r is taken to be a valid code vector; if it does not, the receiver decides that one or more bits of the received vector are erroneous.
SYNDROME AND ERROR DETECTION:
Consider an (n, k) LBC with matrices G and H. Let C = [C_0, C_1, C_2, C_3, ......, C_{n-1}] be a codeword that was transmitted over a noisy communication channel, and let R = [r_0, r_1, r_2, ......, r_{n-1}] be the vector received at the output of the channel. Because of the channel noise, R may differ from C:

    e = R + C = [e_0, e_1, e_2, ......, e_{n-1}]        (22)

where e is called the error pattern (or error vector), with

    e_i = 1 if r_i ≠ c_i,   e_i = 0 if r_i = c_i        (23)

The 1's in e mark the transmission errors caused by the channel noise. The received vector is the sum of the transmitted codeword and the error vector:

    R = C + e        (24)

Upon receiving R, the decoder must first determine whether R contains transmission errors. If errors are detected, the decoder takes action to locate and correct them (FEC) or requests retransmission (ARQ). On receiving R, the decoder computes the syndrome

    S = R·H^T = [s_0, s_1, s_2, ......, s_{n-k-1}]        (25)

If S = 0, R is (presumed to be) a codeword; if S ≠ 0, R is not a codeword. Note that R may contain errors and still yield S = 0; error patterns of this kind are undetectable. Expanding S = R·H^T with H as in eq. (18) gives the syndrome digits

    s_0       = r_0       + r_{n-k} p_{0,0}      + r_{n-k+1} p_{1,0}      + ...... + r_{n-1} p_{k-1,0}
    s_1       = r_1       + r_{n-k} p_{0,1}      + r_{n-k+1} p_{1,1}      + ...... + r_{n-1} p_{k-1,1}
    ⋮
    s_{n-k-1} = r_{n-k-1} + r_{n-k} p_{0,n-k-1}  + r_{n-k+1} p_{1,n-k-1}  + ...... + r_{n-1} p_{k-1,n-k-1}        (26)

Fig 3.7: Syndrome decoding circuit for a systematic (n, k) LBC
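A minimal sketch of the syndrome computation of eq. (25), with H = [I_{n-k} | P^T] built from an illustrative (7, 4) parity matrix P (an assumption for the example, not a code given in the text):

```python
# Illustrative (7,4) parity matrix (assumed); H = [I_(n-k) | P^T] as in eq. (18).
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
H = [[1 if j == row else 0 for j in range(3)] + [P[i][row] for i in range(4)]
     for row in range(3)]

def syndrome(R):
    """S = R * H^T over GF(2); all-zero for every valid codeword."""
    return [sum(r & h for r, h in zip(R, hrow)) % 2 for hrow in H]

C = [1, 0, 0, 1, 0, 1, 1]        # a codeword of this example code
R = [1, 0, 0, 1, 0, 0, 1]        # the same word with bit 5 flipped in the channel
print(syndrome(C), syndrome(R))  # the error moves the syndrome away from all-zero
```

Note that the non-zero syndrome equals the column of H at the flipped position, consistent with property S = e·H^T discussed next.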
Properties:
1. The syndrome is independent of the transmitted code vector; it depends only on the error pattern, i.e. S = e·H^T.
Proof: we know that

    S = R·H^T = (C + e)·H^T = C·H^T + e·H^T = 0 + e·H^T = e·H^T

Expanding, the syndrome digits are

    s_0       = e_0       + e_{n-k} p_{0,0}      + ...... + e_{n-1} p_{k-1,0}
    s_1       = e_1       + e_{n-k} p_{0,1}      + ...... + e_{n-1} p_{k-1,1}
    ⋮
    s_{n-k-1} = e_{n-k-1} + e_{n-k} p_{0,n-k-1}  + ...... + e_{n-1} p_{k-1,n-k-1}        (27)

2. Error patterns differing by a codeword have the same syndrome.
Proof: suppose e_1 is an error pattern and e_2 = e_1 + C. Then

    S_1 = e_1·H^T
    S_2 = e_2·H^T = (e_1 + C)·H^T = e_1·H^T + C·H^T = e_1·H^T = S_1

Thus, error patterns differing by a codeword have the same syndrome.

Cosets: suppose e is some arbitrary error pattern, and let e_i = e + C_i, i = 0, 1, 2, ....., (2^k - 1). Then, from property 2 of the syndromes, all 2^k of these error patterns have the same syndrome. A set of 2^k error patterns sharing one syndrome is called a coset of the code:
"A coset of an (n, k) block code is a set of 2^k error patterns, characterized by a unique syndrome common to all its elements."
From an (n, k) block code we can form 2^n distinct error patterns. Among them, each group of 2^k error patterns producing a common syndrome forms one coset; therefore the number of cosets is 2^(n-k).

Hamming weight: the number of non-zero elements of a codeword. For example, if C = 0010110, then the Hamming weight is 3 (the number of 1's).
Hamming distance: the Hamming distance d(C1, C2) between two code vectors having the same number of elements is defined as the number of locations in which their respective elements differ.
Minimum distance (d_min): the minimum distance of a linear block code is the smallest Hamming distance between any two code vectors of the code.

Error Detection and Correction Capabilities of an LBC:
1. The minimum distance of an LBC is equal to the minimum Hamming weight of a non-zero code vector.
Proof: from the definitions of Hamming distance and modulo-2 addition, it follows that the Hamming distance between two n-tuples C_i and C_j equals the Hamming weight of their sum:

    d(C_i, C_j) = Hw(C_i + C_j)
    d_min = min{ d(C_i, C_j) }
Since the code is linear, the sum C_i + C_j is itself a third code vector C_k of the code, so the Hamming distance between two code vectors equals the Hamming weight of C_k:

    d_min = min{ d(C_i, C_j) } = min{ Hw(C_i + C_j) } = min{ Hw(C_k) } = Hw_min        (28)

Note:
1. The minimum distance of an LBC is equal to the minimum number of columns of H which, when added together, give the zero vector.
2. Equivalently, it is the minimum number of rows of H^T which, when added together, give the zero vector.

A linear block code with minimum distance d_min can detect up to (d_min - 1) errors in each code vector and can correct up to ⌊(d_min - 1)/2⌋ errors, where ⌊(d_min - 1)/2⌋ denotes the largest integer not exceeding (d_min - 1)/2.
(i) The error-detecting capability of an LBC with minimum distance d_min is (d_min - 1).
(ii) If d_min is odd, the code can detect all error patterns of weight ≤ (d_min - 1), and the number of errors it can correct is t ≤ (d_min - 1)/2.
(iii) If d_min is even, the code can detect all error patterns of weight ≤ d_min/2, and the number of errors it can correct is t ≤ (d_min - 2)/2.

SYNDROME DECODING USING THE STANDARD ARRAY:
The 2^n possible received vectors are partitioned into 2^k non-overlapping subsets in an array called the "standard array". It has 2^k columns, each headed by a code vector, with the all-zero vector at the top-left corner, and 2^(n-k) rows. Each row forms a 'coset', and the left-most element of each coset is called the 'coset leader'. The first row comprises the 2^k possible zero-error received vectors. The coset leader of the second row is some error pattern e_2; the other elements of that row are C_2 + e_2, C_3 + e_2, ........, C_{2^k} + e_2. The coset leaders must be chosen to be the most likely error patterns, i.e. those with the smallest Hamming weight.

The general decoding procedure consists of the following steps:
1. Determine the syndrome of the received vector R: S = R·H^T.
2. Identify the coset having this syndrome, and let its coset leader be the error pattern e.
3. Decode the received vector R into the code vector C = R + e.
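The three-step procedure above can be sketched as a syndrome look-up of coset leaders. The (7, 4) H matrix here is an illustrative assumption, and since the example code corrects single errors, its correctable coset leaders are the single-bit error patterns:

```python
# Assumed (7,4) parity check matrix for the illustration, H = [I_3 | P^T].
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]
n = 7

def syndrome(R):
    """Step 1: S = R * H^T over GF(2)."""
    return tuple(sum(r & h for r, h in zip(R, row)) % 2 for row in H)

# Build the look-up table: syndrome -> coset leader (lowest-weight pattern).
table = {}
for i in range(n):
    e = [0] * n
    e[i] = 1
    table[syndrome(e)] = e

def decode(R):
    """Steps 2 and 3: find the coset leader e for R's syndrome, return R + e."""
    s = syndrome(R)
    if not any(s):
        return list(R)          # zero syndrome: accept R as a codeword
    e = table[s]
    return [r ^ b for r, b in zip(R, e)]
```

A usage sketch: flipping one bit of a valid codeword and calling `decode` restores the original word, since the corrupted word's syndrome indexes the matching single-bit coset leader.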
Fig 3.8: Standard array syndrome decoding

The storage (memory) requirement for array decoding increases exponentially with the number of parity check bits used in the code. To store the 2^(n-k) coset leaders, each of n digits, we need n·2^(n-k) digits in total. To store the 2^(n-k) syndromes, each of (n-k) digits, we need (n-k)·2^(n-k) digits in total. Altogether, (2n-k)·2^(n-k) digits are needed to store the coset leaders and syndromes.

Table-Lookup Decoding:
Table-lookup decoding can be applied to any (n, k) LBC, giving minimum decoding delay and minimum probability of error. The standard array can be regarded as the truth table of n switching functions:

    e_0     = f_0(s_0, s_1, s_2, ..........., s_{n-k-1})
    e_1     = f_1(s_0, s_1, s_2, ..........., s_{n-k-1})
    ⋮
    e_{n-1} = f_{n-1}(s_0, s_1, s_2, ..........., s_{n-k-1})        (29)

where s_0, s_1, s_2, ..........., s_{n-k-1} are the syndrome digits, regarded as switching variables, and e_0, e_1, e_2, ..........., e_{n-1} are the estimated error digits. When these n switching functions are derived and simplified, a combinational circuit with the (n-k) syndrome digits as inputs and the estimated error digits as outputs is realized. The general table-lookup decoder for an (n, k) LBC is shown in Figure 3.9.
Fig 3.9: General decoder for an LBC

HAMMING CODES:
Hamming codes are a family of (n, k) block error-correcting codes having the following properties:
• Number of user data bits: k = 2^m - m - 1, where m is the number of check (redundant) bits.
• Number of encoded data bits: n = 2^m - 1.
• Number of check bits: n - k = m.
• Minimum Hamming distance: d_min = 3.
• Error-correcting capability: t = (d_min - 1)/2 = 1.
The number of check bits is the smallest m satisfying 2^m ≥ (m + k + 1).

Procedure to calculate the Hamming check bits:
1. The Hamming check bits are inserted among the original data bits at the positions that are powers of 2, i.e. positions 2^0, 2^1, 2^2, ......, 2^(n-k-1).
2. Using the condition 2^m ≥ (m + k + 1), the number of check bits is calculated.
3. The total number of encoded bits is k + m.
4. Each data position holding the value 1 is represented by a binary value equal to its position.
5. All of these position values are XORed together to produce the check bits of the Hamming code.
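The condition 2^m ≥ (m + k + 1) used in step 2 can be evaluated mechanically, for instance:

```python
def check_bits_needed(k):
    """Smallest m satisfying 2**m >= m + k + 1, i.e. enough check bits to
    address every single-bit error position plus the no-error case."""
    m = 1
    while 2 ** m < m + k + 1:
        m += 1
    return m

# k = 4 needs m = 3 (the (7,4) code); k = 11 needs m = 4 (the (15,11) code).
print(check_bits_needed(4), check_bits_needed(11))
```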
Decoding of Hamming Codes:
1. All data-bit positions holding binary 1, together with the check bits (which occupy the power-of-2 positions), are XORed to form the syndrome.
2. If the syndrome contains all 0's, no error is detected.
3. If the syndrome contains one and only one bit set to 1, an error has occurred in one of the check bits; no correction of the received decoded data is required.
4. If the syndrome contains more than one bit set to 1, its decimal equivalent indicates the position of the data bit in error. That data bit is simply inverted for correction.

Design of Hamming Codes using the H-Matrix: we know that

    [H] = [P^T | I_{n-k}],  so that  [H^T] = [ P       ]
                                             [ I_{n-k} ]        (30)

Clearly H^T has (n-k) columns, so each row of H^T has (n-k) entries, each of which may be '0' or '1'. Thus there are 2^(n-k) distinct possible rows. A row of all 0's cannot be used, since it represents the syndrome of the no-error condition; this leaves (2^(n-k) - 1) usable distinct rows. The single-error-correcting (n, k) Hamming code therefore has:
    Code length: n = 2^(n-k) - 1
    Number of message bits: k = n - log2(n + 1)
    Number of parity check bits: (n-k)
    Error-correcting capability: t = (d_min - 1)/2 = 1
The [P] matrix is chosen so that:
1. H^T contains no all-zero row.
2. No two rows of H^T are the same.
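A sketch of the position-XOR procedure above for the (7, 4) case, with data bits at positions 3, 5, 6, 7 and check bits at the power-of-2 positions 1, 2, 4 (positions are 1-based, as in the procedure):

```python
def hamming_encode(data):
    """Place 4 data bits at positions 3,5,6,7 and choose check bits at
    positions 1,2,4 so that the XOR of the positions of all 1-bits is 0."""
    code = [0] * 8                       # index 0 unused; positions 1..7
    for pos, bit in zip((3, 5, 6, 7), data):
        code[pos] = bit
    s = 0
    for pos in range(1, 8):
        if code[pos]:
            s ^= pos                     # XOR of positions holding a 1
    for p in (1, 2, 4):                  # binary digits of s become check bits
        if s & p:
            code[p] = 1
    return code[1:]

def hamming_decode(received):
    """XOR the positions of all 1-bits; a non-zero result is the position
    of the single erroneous bit, which is inverted for correction."""
    s = 0
    for pos, bit in enumerate(received, start=1):
        if bit:
            s ^= pos
    if s:
        received = received[:]
        received[s - 1] ^= 1
    return received
```

For example, corrupting any single position of an encoded word and running `hamming_decode` restores the original codeword, because the syndrome value equals the 1-based position of the error.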
BINARY CYCLIC CODES:
Cyclic codes are a subclass of the linear block codes with distinct advantages over general linear codes:
(i) The encoding and decoding circuits for cyclic codes can be easily implemented using shift registers with feedback connections and some basic gates.
(ii) They have an excellent mathematical structure, which makes the design of error-correcting codes with multiple-error-correction capability relatively easy.
(iii) Very efficient decoding methods are available that do not depend on a look-up table, so large memories are not needed; cyclic codes can also correct errors caused by bursts of noise affecting several successive bits.
These attractive features are why almost all Forward Error Correcting (FEC) systems make use of cyclic codes.

Description of Cyclic Codes:
Definition: an (n, k) linear code C is called a cyclic code if every cyclic shift of a code vector in C is also a code vector in C. If the components of the n-tuple C = (C_0, C_1, C_2, C_3, ......, C_{n-1}) are cyclically shifted one place to the right, we obtain another n-tuple,

    C^(1) = (C_{n-1}, C_0, C_1, C_2, ......, C_{n-2})

which is called a cyclic shift of C. If the components of C are cyclically shifted i places to the right, the result is

    C^(i) = (C_{n-i}, C_{n-i+1}, ....., C_{n-1}, C_0, C_1, ......, C_{n-i-1})

Cyclically shifting C by i places to the right is equivalent to cyclically shifting it (n-i) places to the left. This property of cyclic codes allows us to treat the elements of each code vector as the coefficients of a polynomial of degree (n-1) or less:

    C(X) = C_0 + C_1 X + C_2 X^2 + ....... + C_{n-1} X^{n-1}

If C_{n-1} ≠ 0, the degree is (n-1); if C_{n-1} = 0, the degree is less than (n-1). The code polynomial that corresponds to the code vector C^(i) is

    C^(i)(X) = C_{n-i} + C_{n-i+1} X + ..... + C_{n-1} X^{i-1} + C_0 X^i + C_1 X^{i+1} + ...... + C_{n-i-1} X^{n-1}

There is an interesting algebraic relationship between C(X) and C^(i)(X):
multiplying C(X) by X^i, we obtain

    X^i C(X) = C_0 X^i + C_1 X^{i+1} + ........ + C_{n-i-1} X^{n-1} + ....... + C_{n-1} X^{n+i-1}

This equation can be manipulated as

    X^i C(X) = C_{n-i} + C_{n-i+1} X + ..... + C_{n-1} X^{i-1} + C_0 X^i + ...... + C_{n-i-1} X^{n-1}
               + C_{n-i}(X^n + 1) + C_{n-i+1} X(X^n + 1) + ......... + C_{n-1} X^{i-1}(X^n + 1)

    X^i C(X) = q(X)[X^n + 1] + C^(i)(X)
where q(X) = C_{n-i} + C_{n-i+1} X + ..... + C_{n-1} X^{i-1}. The code polynomial C^(i)(X) is simply the remainder that results from dividing X^i C(X) by (X^n + 1):

    X^i C(X) / (X^n + 1) = q(X) + C^(i)(X) / (X^n + 1)   (quotient q(X), remainder C^(i)(X))

Hence, if C(X) is a code polynomial, then

    C^(i)(X) = X^i C(X) modulo (X^n + 1)

is also a code polynomial, for any cyclic shift i.

Generator Polynomial: a polynomial g(X) of degree (n-k) that is a factor of (X^n + 1) is called a generator polynomial of the (n, k) cyclic code:

    g(X) = g_0 + g_1 X + g_2 X^2 + ....... + g_{n-k} X^{n-k}

where g_0 = g_{n-k} = 1, because X cannot be a factor of (X^n + 1), whose minimum-degree factor is (1 + X). Hence

    g(X) = 1 + Σ_{i=1}^{n-k-1} g_i X^i + X^{n-k}

A cyclic code is uniquely determined by its generator polynomial g(X), in that each code polynomial can be expressed as the product

    C(X) = a(X)·g(X)

where the degree of g(X) equals the number of parity check bits, (n-k), and the degree of a(X) is at most (k-1).

Systematic Encoding of Cyclic Codes: let the message sequence be M = {m_0, m_1, m_2, m_3, ......, m_{k-1}}, with message polynomial

    M(X) = m_0 + m_1 X + m_2 X^2 + ....... + m_{k-1} X^{k-1}

Let the parity bits be b = {b_0, b_1, b_2, b_3, ......, b_{n-k-1}}, with parity polynomial

    b(X) = b_0 + b_1 X + b_2 X^2 + ....... + b_{n-k-1} X^{n-k-1}

Let the code sequence be C = {b_0, b_1, b_2, b_3, ......, b_{n-k-1}, m_0, m_1, m_2, m_3, ......, m_{k-1}}, with code polynomial

    C(X) = b_0 + b_1 X + ....... + b_{n-k-1} X^{n-k-1} + m_0 X^{n-k} + m_1 X^{n-k+1} + ....... + m_{k-1} X^{n-1}
         = b(X) + X^{n-k} M(X) = a(X) g(X)

so that

    X^{n-k} M(X) + b(X) = a(X) g(X)
Dividing through by g(X),

    X^{n-k} M(X) / g(X) = a(X) + b(X)/g(X)   (quotient a(X); remainder b(X) = PARITY)

    ∴ C(X) = b(X) + X^{n-k} M(X)

Steps:
1. Multiply the message polynomial by X^{n-k}.
2. Divide X^{n-k} M(X) by the given generator polynomial g(X) to obtain the remainder b(X) (the parity bits).
3. Combine b(X) and X^{n-k} M(X) to give the code polynomial C(X) = b(X) + X^{n-k} M(X).

Note: the non-systematic cyclic code vector for each message is found simply as C(X) = M(X)·g(X).

Systematic Encoding of Cyclic Codes using an (n-k)-stage Feedback Shift Register:
The systematic encoding of an (n, k) cyclic code can be accomplished with a division circuit, which is a linear (n-k)-stage shift register with feedback connections based on the generator polynomial

    g(X) = 1 + g_1 X + g_2 X^2 + ....... + g_{n-k-1} X^{n-k-1} + X^{n-k}

Fig 3.10: Encoding circuit for an (n, k) cyclic code with generator g(X)

Steps:
1. Gate-1 is closed during the first k shifts, to allow transmission of the message bits into the (n-k)-stage encoding shift register.
2. Gate-2 is in the up position during the first k shifts, to allow transmission of the message bits directly to the output register.
3. After transmission of the k-th message bit, Gate-1 is opened and Gate-2 is moved to the down position to pass the parity bits.
4. The remaining (n-k) shifts clear the encoding register by moving the parity bits to the output register.
5. The total number of shifts is equal to n, and the contents of the output register form the codeword polynomial C(X) = b(X) + X^{n-k} M(X).

Syndrome Calculation and Decoding of Cyclic Codes:
If C(X) is the transmitted code polynomial and R(X) is the received polynomial, then

    R(X) = C(X) + e(X)

where e(X) is the error polynomial corresponding to the error pattern created by the channel. Let R(X) = r_0 + r_1 X + r_2 X^2 + ....... + r_{n-1} X^{n-1}. Then

    R(X) = q(X) g(X) + S(X)   (quotient q(X), remainder S(X))

Since the degree of g(X) is (n-k), the degree of S(X) is at most (n-k-1). The remainder polynomial S(X) is called the syndrome polynomial, of degree (n-k-1) or less.

Property 1: if S(X) is the syndrome of R(X), it is also the syndrome of the error polynomial e(X).
Proof: let r(X) = C(X) + e(X), so e(X) = r(X) + C(X). But C(X) = a(X) g(X); therefore

    e(X) = r(X) + a(X) g(X)

Also r(X) = q(X) g(X) + S(X); therefore

    e(X) = q(X) g(X) + S(X) + a(X) g(X) = [q(X) + a(X)] g(X) + S(X)

Hence dividing e(X) by g(X) leaves the same remainder S(X) as dividing r(X) by g(X): the syndromes of r(X) and e(X) are the same.

Property 2: if S(X) is the syndrome of r(X), then X·S(X) (reduced modulo g(X)) is the syndrome of X·r(X).
Proof: r(X) = q(X) g(X) + S(X), so X r(X) = X q(X) g(X) + X S(X). That is, multiplying r(X) by X is equivalent to giving one right cyclic shift to the received word R, and the syndrome becomes X S(X).
Property 3: if errors occur only in the parity-check bits of the transmitted codeword, the syndrome polynomial and the error-pattern polynomial are identical.
Proof: let r(X) = r_0 + r_1 X + r_2 X^2 + ....... + r_{n-1} X^{n-1}, in which r_0, r_1, r_2, ......, r_{n-k-1} are the received parity-check bits. Then

    e(X) = e_0 + e_1 X + e_2 X^2 + ....... + e_{n-k-1} X^{n-k-1}

with the coefficients e_{n-k} to e_{n-1} all zero. Since the degree of e(X) is then less than the degree of g(X), when e(X) is divided by g(X) the remainder is e(X) itself. Thus S(X) = e(X).

Syndrome Calculation using an (n-k)-stage Shift Register:
Computation of the syndrome S(X) of r(X) can be accomplished with a division circuit, as shown in Figure 3.11.

Fig 3.11: An (n-k)-stage syndrome calculation circuit for an (n, k) cyclic code

1. The register is first initialized. With Gate-2 turned ON and Gate-1 OFF, the received vector r(X) is shifted into the register.
2. After the entire received vector has been shifted in, the contents of the register form the syndrome.
3. Now Gate-2 is turned OFF and Gate-1 ON, and the syndrome vector is shifted out of the register. The circuit is then ready to process the next received vector.
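Both the systematic encoding steps (multiply by X^{n-k}, divide by g(X), append the remainder) and the syndrome calculation reduce to GF(2) polynomial division, and can be sketched as below. The example uses g(X) = 1 + X + X^3, a known factor of X^7 + 1, giving an illustrative (7, 4) cyclic code:

```python
def gf2_remainder(poly, g):
    """Remainder of poly(X) / g(X) over GF(2). Coefficient lists are
    low-power-first, e.g. g = [1,1,0,1] means g(X) = 1 + X + X^3."""
    rem = list(poly)
    d = len(g) - 1                        # degree of g(X)
    for i in range(len(rem) - 1, d - 1, -1):
        if rem[i]:                        # cancel X^i with X^(i-d) * g(X)
            for j, gj in enumerate(g):
                rem[i - d + j] ^= gj
    return rem[:d]

def cyclic_encode(m, g, n):
    """Systematic encode: C(X) = b(X) + X^(n-k) M(X),
    where b(X) = X^(n-k) M(X) mod g(X)."""
    r = n - len(m)                        # number of parity bits, n-k
    shifted = [0] * r + list(m)           # X^(n-k) * M(X)
    b = gf2_remainder(shifted, g)         # remainder = parity polynomial
    return b + list(m)

g = [1, 1, 0, 1]                          # g(X) = 1 + X + X^3
C = cyclic_encode([1, 0, 0, 1], g, 7)     # parity bits first, then the message
S = gf2_remainder(C, g)                   # all-zero syndrome for a valid codeword
```

Flipping a parity-position bit of C and recomputing the remainder illustrates property 3: the syndrome equals the error polynomial when the error is confined to the check bits.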
For error correction, the decoder has to determine a correctable error pattern e(X) from the syndrome S(X), and add e(X) to r(X) to recover the transmitted code vector C(X), as shown in Figure 3.12.

Fig 3.12: General form of a decoder for cyclic codes

1. The received vector is shifted into the buffer register and into the syndrome register.
2. After the syndrome for the received vector has been calculated and placed in the syndrome register, its contents are read into the error-pattern detector. If the detector output is '1', the received digit at the right-most stage of the buffer register is assumed to be erroneous and is corrected; if the detector output is '0', it is assumed to be correct.
3. The first received digit is shifted out of the buffer, and at the same time the syndrome register is shifted right once. If the first received digit is in error, the detector output is '1', which is used to correct it. The output of the detector is also fed back into the syndrome register to modify the syndrome, producing a new syndrome corresponding to the altered received vector shifted one place to the right.
4. The new syndrome is then used to check whether or not the second received digit, now at the right-most stage of the buffer register, is erroneous. If so, it is corrected, a new syndrome is calculated as in step 3, and the procedure is repeated.
5. The decoder operates on the received vector digit by digit until the entire received vector has been shifted out of the buffer.
At the end of the decoding operation, the errors will have been corrected if they correspond to an error pattern built into the detector, and the syndrome register will contain all 0's. If the syndrome register does not contain all 0's, an uncorrectable error pattern has been detected. The decoder circuit described above is called a Meggitt decoder.

CONVOLUTIONAL CODES:
In convolutional coding, the block of n code digits generated by the encoder in a given time unit depends not only on the block of k message digits within that time unit, but also on the preceding (m-1) blocks of message digits (m > 1). Usually the values of k and n are small.

Encoder for Convolutional Codes:
A convolutional encoder takes a sequence of message digits and generates a sequence of code digits. In each time unit, a message block of k digits is fed into the encoder, and the encoder generates a code block of n code digits (k < n). The n-digit code block depends not only on the k-digit message block of the same time unit but also on the previous (m-1) message blocks. The code generated by such an encoder is called an (n, k, m) convolutional code, where:
    n = number of output bits per time unit (number of modulo-2 adders)
    k = number of input bits entering at any time
    m = number of flip-flop (shift register) stages, i.e. number of shifts
    L = length of the message to be encoded
    n(L+m) = length of the encoded sequence
The constraint length K is defined as the number of encoder output bits influenced by each message bit. The rate efficiency is k/n.
Note: the encoder must start in the all-zero state and end in the all-zero state. To bring it back to zero, 0's are appended to the input; the number of zeros appended equals the number of register stages.

Time-Domain Approach to Convolutional Encoding:
Consider the (n, k, m) = (2, 1, 3) convolutional encoder shown in Figure 3.13.

Fig 3.13: A (2, 1, 3) binary convolutional encoder
The time-domain behavior of a binary convolutional encoder is defined in terms of n "impulse responses". Let the sequence

    g^(j) = [g_1^(j), g_2^(j), g_3^(j), ......., g_{m+1}^(j)]

denote the impulse response, also called the generator sequence, of the path from the input to the j-th of the n modulo-2 adders. In this encoder there are two modulo-2 adders, labelled the top adder and the bottom adder, and hence two generator sequences. Let u = (u_1, u_2, u_3, ......, u_L) be the input message sequence, entering the encoder one bit at a time starting with u_1. The encoder then generates two output sequences, v^(1) and v^(2), defined by the discrete convolution sums

    v^(1) = u * g^(1),   v^(2) = u * g^(2)

From the definition of discrete convolution,

    v_l^(j) = Σ_{i=0}^{m} u_{l-i} g_{i+1}^(j)

In the given encoder, j = 1, 2 and i runs from 0 to m = 3. Let the message sequence be u = (u_1 u_2 u_3 u_4 u_5) = 10111. The output sequences are calculated as follows.

For j = 1:

    v_l^(1) = u_l g_1^(1) + u_{l-1} g_2^(1) + u_{l-2} g_3^(1) + u_{l-3} g_4^(1)

The index l runs from 1 to (L+m). In the given encoder L = 5 and m = 3, so l runs from 1 to 8:

    l = 1: v_1^(1) = 1    l = 5: v_5^(1) = 0
    l = 2: v_2^(1) = 0    l = 6: v_6^(1) = 0
    l = 3: v_3^(1) = 0    l = 7: v_7^(1) = 0
    l = 4: v_4^(1) = 0    l = 8: v_8^(1) = 1

    ∴ v^(1) = [1 0 0 0 0 0 0 1]

For j = 2:

    v_l^(2) = u_l g_1^(2) + u_{l-1} g_2^(2) + u_{l-2} g_3^(2) + u_{l-3} g_4^(2)

    l = 1: v_1^(2) = 1    l = 5: v_5^(2) = 1
    l = 2: v_2^(2) = 1    l = 6: v_6^(2) = 1
    l = 3: v_3^(2) = 0    l = 7: v_7^(2) = 0
    l = 4: v_4^(2) = 1    l = 8: v_8^(2) = 1
    ∴ v^(2) = [1 1 0 1 1 1 0 1]

After encoding, the two output sequences are multiplexed into a single sequence, called the codeword, for transmission over the channel. The codeword at the output of the convolutional encoder is

    V = [v_1^(1) v_1^(2), v_2^(1) v_2^(2), v_3^(1) v_3^(2), ........., v_{L+m}^(1) v_{L+m}^(2)]

    ∴ V = [11, 01, 00, 01, 01, 01, 00, 11]

Matrix Method:
The generator sequences g_1^(1), g_2^(1), g_3^(1), ......., g_{m+1}^(1) of the top adder and g_1^(2), g_2^(2), g_3^(2), ......., g_{m+1}^(2) of the bottom adder can be interlaced and arranged in matrix form, with the number of rows = L and the number of columns = n(L+m). Such a matrix of order [L] × [n(L+m)] is called the generator matrix of the convolutional encoder:

    G = | g_1^(1)g_1^(2)  g_2^(1)g_2^(2)  ...  g_{m+1}^(1)g_{m+1}^(2)  00  00  ...  00 |
        | 00  g_1^(1)g_1^(2)  g_2^(1)g_2^(2)  ...  g_{m+1}^(1)g_{m+1}^(2)  00  ...  00 |
        | 00  00  g_1^(1)g_1^(2)  ...                                                  |
        |  ⋮                                                                           |

In each successive row the interlaced generator pairs are shifted right by one pair of columns; the number of 0's inserted at the start of each row equals the number of modulo-2 adders. Then

    V = u·G

Given u = 10111, g^(1) = [1 0 1 1] and g^(2) = [1 1 1 1], with rows L = 5 and columns n(L+m) = 16:

    G = | 11 01 11 11 00 00 00 00 |
        | 00 11 01 11 11 00 00 00 |
        | 00 00 11 01 11 11 00 00 |
        | 00 00 00 11 01 11 11 00 |
        | 00 00 00 00 11 01 11 11 |

    V = [1 0 1 1 1]·G = [11, 01, 00, 01, 01, 01, 00, 11]
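The discrete convolution sums of the worked example can be checked with a short sketch, using the message and generator sequences exactly as above:

```python
def conv_encode(u, gens):
    """Rate-1/n convolutional encoder: v_l^(j) = sum_i u_(l-i) g_(i+1)^(j)
    (mod 2), then the n output streams are interleaved into one codeword."""
    L, m = len(u), len(gens[0]) - 1
    streams = []
    for g in gens:
        v = []
        for l in range(L + m):           # 0-based; l runs over L+m output slots
            v.append(sum(u[l - i] & g[i] for i in range(m + 1)
                         if 0 <= l - i < L) % 2)
        streams.append(v)
    # Multiplex: take one bit from each stream per time unit.
    return [s[l] for l in range(L + m) for s in streams]

u = [1, 0, 1, 1, 1]
V = conv_encode(u, [[1, 0, 1, 1], [1, 1, 1, 1]])
# Read pairwise, V is 11 01 00 01 01 01 00 11, matching the worked example.
```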
  • 28. Transform-Domain Approach: Replace the impulse response of each path in the encoder by a polynomial whose coefficients are the respective elements of that impulse response:
g^(j)(X) = g1^(j) + g2^(j) X + g3^(j) X^2 + ... + g(m+1)^(j) X^m
The corresponding output of each adder is V^(j)(X) = U(X) g^(j)(X), and the final encoder output is
V(X) = V^(1)(X^n) + X V^(2)(X^n) + X^2 V^(3)(X^n) + ... + X^(n-1) V^(n)(X^n)
For example, consider u = (1 0 1 1 1):
U(X) = 1 + X^2 + X^3 + X^4
g^(1) = (1 0 1 1) gives g^(1)(X) = 1 + X^2 + X^3
g^(2) = (1 1 1 1) gives g^(2)(X) = 1 + X + X^2 + X^3
V^(1)(X) = U(X) g^(1)(X) = 1 + X^7
V^(2)(X) = U(X) g^(2)(X) = 1 + X + X^3 + X^4 + X^5 + X^7
With n = 2:
V^(1)(X^2) = 1 + X^14
X V^(2)(X^2) = X + X^3 + X^7 + X^9 + X^11 + X^15
V(X) = 1 + X + X^3 + X^7 + X^9 + X^11 + X^14 + X^15
Therefore V = (11, 01, 00, 01, 01, 01, 00, 11)
STATE DIAGRAM: The state of the encoder is defined by its shift-register contents. For an (n, k, m) code with k > 1, the ith shift register holds the previous information bits of its input, and with total encoder memory K there are 2^K different possible states. Each block of k input bits causes a transition to a new state, so there are 2^k branches leaving each state, one for each possible input block. The state of an (n, 1, m) encoder is defined by the contents of the first (m - 1) register stages, so the encoder can be represented as a 2^(m-1)-state machine. Knowing the present state and the next input, we can determine the next state and then the output. The transition between states is governed by the particular incoming bit (0 or 1). For the encoder of Fig 3.13, m = 3, so there are 4 states, designated a = 00, b = 10, c = 01, d = 11.
  • 29. Fig 3.14: State Diagram
There are only two transitions emanating from each state, corresponding to the two possible input bits (0 or 1). A transition from the present state to the next state is represented by a solid line for input bit 0 and by a dashed line for input bit 1. The bits appearing on each path are the corresponding output bits. Initially the encoder is in the all-zero state (state a), and the codeword corresponding to any given input sequence is obtained by noting down the outputs on the branch labels. An encoder starts in the all-zero state and should return to the all-zero state.
Procedure for writing the state diagram:
1. Identify the different possible states and create the state table.
2. Construct the state-transition table, showing clearly the present state, next state, input bits and output bits.
3. Draw the state diagram showing all the states of the flip-flops, the interconnections between them for the various input combinations, and the corresponding outputs.
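Steps 1 and 2 of the procedure can be sketched in code. The taps used below (g1 = 111, g2 = 101, a common 4-state rate-1/2 example) are an assumption, since the taps of the Fig 3.13 encoder are not reproduced here:

```python
# State-transition table for a rate-1/2 encoder whose state is the previous
# `mem` input bits. Taps g1 = (1,1,1), g2 = (1,0,1) are an assumed example.
def transition_table(g1, g2):
    mem = len(g1) - 1
    table = {}
    for s in range(2 ** mem):
        # state bits, most recent input first
        sbits = [(s >> (mem - 1 - i)) & 1 for i in range(mem)]
        for u in (0, 1):
            window = [u] + sbits           # current input plus state bits
            out = tuple(sum(w & g for w, g in zip(window, gen)) % 2
                        for gen in (g1, g2))
            nxt = int("".join(map(str, [u] + sbits[:-1])), 2)
            table[(s, u)] = (nxt, out)
    return table

t = transition_table((1, 1, 1), (1, 0, 1))
# e.g. from state a = 00, input 1 -> state b = 10 with branch word 11
```

Each entry maps (present state, input bit) to (next state, output branch word), i.e. one arrow of the state diagram.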
  • 30. CODE TREE: The tree diagram adds the dimension of time to the state diagram. The tree diagram for the convolutional encoder is shown in Figure 3.15. At each successive input-bit time, the encoding procedure can be described by traversing the diagram from left to right, each tree branch describing an output branch word. The branching rule for finding a codeword sequence is as follows: if the input bit is a zero, the associated branch word is found by moving to the next rightmost branch in the upward direction; if the input bit is a one, its branch word is found by moving to the next rightmost branch in the downward direction. Assume that the initial contents of the encoder are all zeros. The diagram shows that if the first input bit is a zero, the output branch word is 00, and if the first input bit is a one, the output branch word is 11. Similarly, if the first input bit is a one and the second input bit is a zero, the second output branch word is 10; if the first input bit is a one and the second input bit is a one, the second output branch word is 01. A vertical line represents a "node" and a horizontal line is called a "branch". The encoder output for any information sequence can be traced through the code-tree paths. Fig 3.15: Code-Tree Diagram
  • 31. Sequential Decoding of Convolutional codes: A sequential decoder works by generating hypotheses about the transmitted codeword sequence; it computes a metric between these hypotheses and the received signal. It moves forward as long as the metric indicates that its choices are likely; otherwise it moves backward, changing hypotheses until, through a systematic trial-and-error search, it finds a likely hypothesis. The decoder starts at the time t1 node of the code tree and generates both paths leaving that node. If the received n code symbols coincide with one of the generated paths, the decoder follows that path. If there is no agreement, the decoder follows the most likely path but keeps a cumulative count of the number of disagreements between the received symbols and the branch words on the path being followed. If two branches appear equally likely, the receiver uses an arbitrary rule, such as following the zero-input path. At each new level in the tree, the decoder generates new branches and compares them with the next set of n received code symbols. Proceeding in this manner, the search continues to penetrate the tree along the most likely path while maintaining the cumulative disagreement count. If the disagreement count exceeds a certain number, the decoder decides that it is on an incorrect path, backs out of that path, and tries another. The decoder keeps track of the discarded pathways to avoid repeating any path excursions. Example: Consider the below sequence, Steps: 1. At time t1 we receive symbols 11 and compare them with the branch words leaving the first node. 2.
The most likely branch is the one with branch word 11, so the decoder decides that input bit one is the correct decoding and moves to the next level. 3. At time t2 the decoder receives symbols 00 and compares them with the available branch words 10 and 01 at this second level. 4. There is no "best" path, so the decoder arbitrarily takes the input-bit-zero (branch word 10) path, and the disagreement count registers a disagreement of 1. 5. At time t3 the decoder receives symbols 01 and compares them with the available branch words 11 and 00 at this third level. 6. Again there is no best path, so the decoder arbitrarily takes the input-bit-zero (branch word 11) path, and the disagreement count is increased to 2.
  • 32. 7. At time t4 the decoder receives symbols 10 and compares them with the available branch words 00 and 11 at this fourth level. 8. Again there is no best path, so the decoder takes the input-bit-zero (branch word 00) path, and the disagreement count is increased to 3. 9. But a disagreement count of 3 is the turnaround criterion, so the decoder "backs out" and tries the alternative path. The disagreement counter is reset to 2. 10. The alternative path is the input-bit-one (branch word 11) path at the t4 level. The decoder tries this, but compared with the received symbols 10 there is still a disagreement of 1, and the counter is raised to 3. 11. With 3 being the turnaround criterion, the decoder backs out of this path and the counter is reset to 2. All of the alternatives have now been traversed at this t4 level, so the decoder returns to the node at t3 and resets the counter to 1. 12. At the t3 node the decoder compares the symbols received at time t3, namely 01, with the untried 00 path. There is a disagreement of 1, and the counter is increased to 2. 13. At the t4 node, the decoder follows the branch word 10 that matches its t4 code symbols of 10. The counter remains unchanged at 2. 14. At the t5 node there is no best path, so the decoder follows the upper branch, as is the rule, and the counter is increased to 3. 15. At this count the decoder backs up, resets the counter to 2, and tries the alternative path at node t5. Since the alternative branch word is 00, there is a disagreement of 1 with the received code symbols 01 at time t5, and the counter is again increased to 3. Fig 3.16: Sequential decoding Example
  • 33. 16. The decoder backs out of this path and the counter is reset to 2. All of the alternatives have now been traversed at this t5 level, so the decoder returns to the node at t4 and resets the counter to 1. 17. The decoder tries the alternative path at t4, which raises the metric to 3, since there is a disagreement in two positions of the branch word. This time the decoder must back up all the way to the time t2 node, because all of the other paths at higher levels have been tried. The counter is now decremented to zero. 18. At the t2 node the decoder now follows the branch word 01, and because there is a disagreement of 1 with the received code symbols 00 at time t2, the counter is increased to 1. Continuing in this way, the decoder yields the correctly decoded message sequence 1 1 0 1 1. Sequential decoding can be viewed as a trial-and-error technique for searching out the correct path in the code tree. It performs the search in a sequential manner, always operating on just a single path at a time. If an incorrect decision is made, subsequent extensions of the path will be wrong, but the decoder can eventually recognize its error by monitoring the path metric. The algorithm is similar to an automobile traveller following a road map. Comparison between Linear Block codes and Convolutional codes Sl. No. Linear Block Codes Convolutional Codes 1. Block codes are generated by X = MG or X(p) = M(p).G(p). Convolutional codes are generated by convolution between the message sequence and the generator sequences. 2. For a block of message bits, an encoded block (code vector) is generated. Each message bit is encoded separately; for every message bit, two or more encoded bits are generated. 3. Coding is block by block. Coding is bit by bit. 4. Syndrome decoding is used for maximum-likelihood decoding. Viterbi decoding is used for maximum-likelihood decoding. 5. Generator matrices, parity-check matrices and syndrome vectors are used for analysis.
Code tree, code trellis and state diagrams are used for analysis. 6. A generator polynomial and generator matrix are used to get the code vectors. Generator sequences are used to get the code vectors. 7. Error-correction and detection capability depends upon the minimum distance dmin. Error-correction and detection capability depends upon the free distance dfree.
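The trial-and-error tree search described above can be mimicked, for short messages, by an exhaustive search over all candidate paths. This is only a stand-in sketch (a full sequential decoder would back-track with a threshold instead), and the encoder taps g1 = 111, g2 = 101 are assumed rather than taken from the figure:

```python
# Brute-force path search: encode every candidate message with an assumed
# rate-1/2 encoder and keep the path that disagrees least with the received
# symbols -- an exhaustive stand-in for the sequential tree search.
from itertools import product

def encode(u, g1=(1, 1, 1), g2=(1, 0, 1)):
    regs = [0, 0]                           # two memory stages -> 4 states
    out = []
    for bit in u:
        window = [bit] + regs
        out.append(sum(w & g for w, g in zip(window, g1)) % 2)
        out.append(sum(w & g for w, g in zip(window, g2)) % 2)
        regs = [bit] + regs[:-1]
    return out

def decode(received, L):
    return list(min(product((0, 1), repeat=L),
                    key=lambda u: sum(r != c
                                      for r, c in zip(received, encode(u)))))

sent = encode([1, 0, 1, 1, 0])
corrupted = sent[:]
corrupted[3] ^= 1                           # one channel error
recovered = decode(corrupted, 5)            # message recovered despite it
```

The cost grows as 2^L, which is exactly why practical decoders search the tree sequentially instead.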
  • 34. Drill Problems
1. Consider a binary cyclic code with g(X) = 1 + X + X^3. For the message bits 1011, design a (7, 4) encoder with feedback elements.
Sol: Number of shift-register stages, n - k = 7 - 4 = 3.
Given g(X) = 1 + X + X^3, the coefficients are g0 = 1, g1 = 1, g2 = 0, g3 = 1.
The message bits are shifted in one at a time, highest-degree bit first:
Shift | Input bit | Register contents (r0 r1 r2) | Output
  -   |     -     | 0 0 0 (initial)              |   -
  1   |     1     | 1 1 0                        |   1
  2   |     1     | 1 0 1                        |   1
  3   |     0     | 1 0 0                        |   0
  4   |     1     | 1 0 0                        |   1
After 4 shifts the register contents "1 0 0" are the parity bits. The next (n - k) shifts clear the encoding register by moving the parity bits to the output register.
C = 1 0 0 1 0 1 1 (parity bits followed by message bits)
2. For a (7, 4) cyclic code, the received vector is Z = 1110101 and g(X) = 1 + X + X^3. Draw the syndrome calculation circuit and correct any error.
Sol: Given g(X) = 1 + X + X^3, the coefficients are g0 = 1, g1 = 1, g2 = 0, g3 = 1. The received bits are shifted into the three-stage syndrome register, highest-degree bit first, and the register contents are tabulated at each shift.
  • 35.
Shift | Input (Z) | Register contents (s0 s1 s2) | Comment
  -   |     -     | 0 0 0                        | initial
  1   |     1     | 1 0 0                        |
  2   |     0     | 0 1 0                        |
  3   |     1     | 1 0 1                        |
  4   |     0     | 1 0 0                        |
  5   |     1     | 1 1 0                        |
  6   |     1     | 1 1 1                        |
  7   |     1     | 0 0 1                        | syndrome nonzero: error present
  8   |     0     | 1 1 0                        |
  9   |     0     | 0 1 1                        |
 10   |     0     | 1 1 1                        |
 11   |     0     | 1 0 1                        |
 12   |     0     | 1 0 0                        | end of shifting
When all 7 received bits have been entered into the syndrome calculator, 0s are fed in from the 8th shift onwards. Each time a 0 is fed into the circuit, the fresh shift-register contents are tabulated. This feeding of 0s continues until the register contents read 1 0 0 (in general, for an (n - k)-stage register, until they read 1 0 0 ... 0). In the table above, the register reads 1 0 0 at the 12th shift, which locates the error at the 3rd bit of Z.
Z = 1 1 1 0 1 0 1 (error in the 3rd position)
Error vector E = 0 0 1 0 0 0 0
C = Z + E = 1110101 + 0010000 = 1100101 (corrected codeword)
3. The generator matrix for a block code is given below. Find all code vectors of this code.
G = [ 1 0 0 : 1 0 0 ]
    [ 0 1 0 : 0 1 1 ]
    [ 0 0 1 : 1 0 1 ]
Sol: The generator matrix G is generally represented as [G] k x n = [I k x k : P k x q] k x n. On comparing: the number of message bits k = 3, the number of codeword bits n = 6, and the number of check bits q = n - k = 6 - 3 = 3. Hence the code is a (6, 3) systematic linear block code.
  • 36. From the generator matrix, the identity matrix is
I 3 x 3 = [ 1 0 0 ]
          [ 0 1 0 ]
          [ 0 0 1 ]
and the coefficient (parity) submatrix is
P 3 x 3 = [ 1 0 0 ]
          [ 0 1 1 ]
          [ 1 0 1 ]
Therefore the check-bit vector is given by [C] 1 x q = [M] 1 x k [P] k x q, i.e.
[c1 c2 c3] = [m1 m2 m3] P
From the matrix multiplication:
c1 = m1 (+) m3
c2 = m2
c3 = m2 (+) m3
Hence the check bits (c1 c2 c3) for each block of (m1 m2 m3) message bits can be determined.
(i) For the message block (m1 m2 m3) = (0 0 0): c1 = 0 (+) 0 = 0, c2 = 0, c3 = 0 (+) 0 = 0.
(ii) For the message block (m1 m2 m3) = (0 0 1): c1 = 0 (+) 1 = 1, c2 = 0, c3 = 0 (+) 1 = 1.
Similarly, the check bits for the other message blocks are obtained. The table lists all the message bits, their check bits and the codeword vectors.
Message (m1 m2 m3) | Check bits (c1 c2 c3) | Codeword
0 0 0 | 0 0 0 | 0 0 0 0 0 0
0 0 1 | 1 0 1 | 0 0 1 1 0 1
0 1 0 | 0 1 1 | 0 1 0 0 1 1
0 1 1 | 1 1 0 | 0 1 1 1 1 0
1 0 0 | 1 0 0 | 1 0 0 1 0 0
1 0 1 | 0 0 1 | 1 0 1 0 0 1
1 1 0 | 1 1 1 | 1 1 0 1 1 1
1 1 1 | 0 1 0 | 1 1 1 0 1 0
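The table can be reproduced from c = m·P (mod 2); a minimal sketch:

```python
# Check-bit computation for the (6,3) systematic code: codeword = (m, mP).
P = [(1, 0, 0), (0, 1, 1), (1, 0, 1)]      # parity submatrix rows

def codeword(m):
    c = [0, 0, 0]
    for bit, row in zip(m, P):             # c = m.P over GF(2)
        if bit:
            c = [a ^ b for a, b in zip(c, row)]
    return list(m) + c                     # message bits, then check bits

# e.g. codeword([1, 0, 1]) gives 1 0 1 0 0 1, matching the table
```

Looping `codeword` over all eight 3-bit messages regenerates the full table.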
  • 37.
4. The parity-check matrix of a (7, 4) linear block code is given by
H = [ 1 1 1 0 1 0 0 ]
    [ 1 1 0 1 0 1 0 ]
    [ 1 0 1 1 0 0 1 ]
i. Find the generator matrix (G).
ii. List all the code vectors.
iii. How many errors can be detected?
iv. How many errors can be corrected?
Solution: For a (7, 4) linear block code, n = 7, k = 4 and q = n - k = 7 - 4 = 3. Thus n = 2^q - 1 = 2^3 - 1 = 7, so the given code is a Hamming code.
(i) To find the generator matrix: We know that [H] q x n = [P^T : I] q x n. Hence the transpose of the submatrix is
P^T (3 x 4) = [ 1 1 1 0 ]
              [ 1 1 0 1 ]
              [ 1 0 1 1 ]
The submatrix P is obtained by changing rows to columns:
P (4 x 3) = [ 1 1 1 ]
            [ 1 1 0 ]
            [ 1 0 1 ]
            [ 0 1 1 ]
Therefore the generator matrix [G] k x n = [I k x k : P k x q] is
G (4 x 7) = [ 1 0 0 0 : 1 1 1 ]
            [ 0 1 0 0 : 1 1 0 ]
            [ 0 0 1 0 : 1 0 1 ]
            [ 0 0 0 1 : 0 1 1 ]
(ii) To find all the code words: The check-bit vector is [C] 1 x q = [M] 1 x k [P] k x q, i.e. [c1 c2 c3] = [m1 m2 m3 m4] P. From the matrix multiplication:
c1 = m1 (+) m2 (+) m3
c2 = m1 (+) m2 (+) m4
c3 = m1 (+) m3 (+) m4
  • 38. We can now determine the check bits (c1 c2 c3) for each block of (m1 m2 m3 m4) message bits, where c1 = m1 (+) m2 (+) m3, c2 = m1 (+) m2 (+) m4 and c3 = m1 (+) m3 (+) m4. The generated code words are given in the table.
Sl. No. | Message (m1 m2 m3 m4) | Check bits (c1 c2 c3) | Codeword | Weight w(X)
 1. | 0 0 0 0 | 0 0 0 | 0 0 0 0 0 0 0 | 0
 2. | 0 0 0 1 | 0 1 1 | 0 0 0 1 0 1 1 | 3
 3. | 0 0 1 0 | 1 0 1 | 0 0 1 0 1 0 1 | 3
 4. | 0 0 1 1 | 1 1 0 | 0 0 1 1 1 1 0 | 4
 5. | 0 1 0 0 | 1 1 0 | 0 1 0 0 1 1 0 | 3
 6. | 0 1 0 1 | 1 0 1 | 0 1 0 1 1 0 1 | 4
 7. | 0 1 1 0 | 0 1 1 | 0 1 1 0 0 1 1 | 4
 8. | 0 1 1 1 | 0 0 0 | 0 1 1 1 0 0 0 | 3
 9. | 1 0 0 0 | 1 1 1 | 1 0 0 0 1 1 1 | 4
10. | 1 0 0 1 | 1 0 0 | 1 0 0 1 1 0 0 | 3
11. | 1 0 1 0 | 0 1 0 | 1 0 1 0 0 1 0 | 3
12. | 1 0 1 1 | 0 0 1 | 1 0 1 1 0 0 1 | 4
13. | 1 1 0 0 | 0 0 1 | 1 1 0 0 0 0 1 | 3
14. | 1 1 0 1 | 0 1 0 | 1 1 0 1 0 1 0 | 4
15. | 1 1 1 0 | 1 0 0 | 1 1 1 0 1 0 0 | 4
16. | 1 1 1 1 | 1 1 1 | 1 1 1 1 1 1 1 | 7
The smallest weight of any non-zero code vector is 3. Hence the minimum Hamming distance is dmin = 3.
(iii) Error-detection capability: detection of up to s errors per word requires dmin >= s + 1. With dmin = 3: 3 >= s + 1, so s <= 2. Thus errors in up to two bits will be detected.
(iv) Error-correction capability: correction of up to t errors per word requires dmin >= 2t + 1. With dmin = 3: 2t <= 2, so t <= 1. Thus an error in one bit will be corrected.
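As a cross-check of parts (i)-(iv), the sketch below rebuilds G from H, verifies that every codeword is orthogonal to the rows of H, and recomputes dmin from all sixteen codewords:

```python
# Verify the (7,4) Hamming code: G = [I | P] from H = [P^T | I], G.H^T = 0,
# and minimum distance from the full codeword list.
from itertools import product

H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
k, q = 4, 3
P = [[H[r][c] for r in range(q)] for c in range(k)]        # transpose of P^T
G = [[int(i == j) for j in range(k)] + P[i] for i in range(k)]

def encode(m):                             # codeword = m.G over GF(2)
    x = [0] * (k + q)
    for bit, row in zip(m, G):
        if bit:
            x = [a ^ b for a, b in zip(x, row)]
    return x

for m in product((0, 1), repeat=k):
    for hrow in H:                         # every codeword satisfies x.h = 0
        assert sum(a & b for a, b in zip(encode(m), hrow)) % 2 == 0

dmin = min(sum(encode(m)) for m in product((0, 1), repeat=k) if any(m))
```

For a linear code, dmin equals the smallest non-zero codeword weight, which is why the minimum over weights suffices here.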
  • 39.
5. The generator polynomial of a (7, 4) cyclic code is G(p) = p^3 + p + 1. Find all the code vectors for the code in non-systematic form.
Solution: Here n = 7, k = 4, therefore q = n - k = 7 - 4 = 3. Since k = 4, there will be a total of 2^k = 2^4 = 16 message vectors (from 0 0 0 0 to 1 1 1 1), each coded into a 7-bit codeword.
(i) Consider the message vector M = (m3 m2 m1 m0) = (1 0 0 1).
The general message polynomial for k = 4 is M(p) = m3 p^3 + m2 p^2 + m1 p + m0.
For the message vector (1 0 0 1): M(p) = p^3 + 1.
The given generator polynomial is G(p) = p^3 + p + 1. In non-systematic form, the codeword polynomial is X(p) = M(p) . G(p). On substituting:
X(p) = (p^3 + 1)(p^3 + p + 1)
     = p^6 + p^4 + p^3 + p^3 + p + 1
     = p^6 + p^4 + (1 (+) 1) p^3 + p + 1
     = p^6 + p^4 + p + 1
The code vector corresponding to this polynomial is X = (x6 x5 x4 x3 x2 x1 x0) = (1 0 1 0 0 1 1).
(ii) Consider another message vector M = (m3 m2 m1 m0) = (0 1 1 0), so M(p) = p^2 + p. Then
X(p) = (p^2 + p)(p^3 + p + 1)
     = p^5 + p^3 + p^2 + p^4 + p^2 + p
     = p^5 + p^4 + p^3 + (1 (+) 1) p^2 + p
     = p^5 + p^4 + p^3 + p
The code vector is X = (0 1 1 1 0 1 0).
Similarly, we can find the code vectors for the other message vectors using the same procedure.
  • 40.
6. The generator polynomial of a (7, 4) cyclic code is G(p) = p^3 + p + 1. Find all the code vectors for the code in systematic form.
Solution: Here n = 7, k = 4, therefore q = n - k = 7 - 4 = 3. Since k = 4, there will be a total of 2^k = 16 message vectors (from 0 0 0 0 to 1 1 1 1), each coded into a 7-bit codeword.
(i) Consider the message vector M = (m3 m2 m1 m0) = (1 1 1 0).
By the message polynomial, M(p) = m3 p^3 + m2 p^2 + m1 p + m0 for k = 4. For the message vector (1 1 1 0):
M(p) = p^3 + p^2 + p
The given generator polynomial is G(p) = p^3 + p + 1. The check-bit polynomial is
C(p) = rem[ p^q M(p) / G(p) ] = rem[ (p^6 + p^5 + p^4) / (p^3 + p + 1) ]
The modulo-2 long division proceeds as follows: the quotient term p^3 gives p^3 . G(p) = p^6 + p^4 + p^3; subtracting (mod-2 addition) from p^6 + p^5 + p^4 leaves p^5 + p^3. The quotient term p^2 gives p^2 . G(p) = p^5 + p^3 + p^2; subtracting leaves p^2, which is of lower degree than G(p).
Thus the quotient is p^3 + p^2 and the remainder is the check-bit polynomial C(p) = p^2, so the check bits are c = (c2 c1 c0) = (1 0 0).
Hence the code vector for the message vector (1 1 1 0) in systematic form is
X = (m3 m2 m1 m0 : c2 c1 c0) = (1 1 1 0 1 0 0)
Similarly, we can find the code vectors for the other message vectors using the same procedure.
  • 41.
7. Consider a data (message) block of 1 1 0 1. The Hamming code adds three parity bits to the message bits in such a way that the message bits and check bits get mixed. The check-bit locations are as shown below:
1  2  3  4  5  6  7  → bit location
p1 p2 D  p3 D  D  D
Here p1, p2 and p3 represent the parity check bits to be added and D represents the data (message) bits. For the data 1 1 0 1 we then have:
1  2  3  4  5  6  7
p1 p2 1  p3 1  0  1
 The first parity bit p1 provides even parity over a check of bit locations 3, 5 and 7. Here they are 1, 1 and 1 respectively. Hence p1 will be 1 to achieve even parity.
 The second parity bit p2 checks locations 3, 6 and 7. Here they are 1, 0 and 1 respectively. Hence p2 will be 0 to achieve even parity.
 The third parity bit p3 checks locations 5, 6 and 7. Here they are 1, 0 and 1 respectively. Hence p3 will be 0 to achieve even parity.
 The resulting 7-bit code word is:
1  2  3  4  5  6  7
p1 p2 D  p3 D  D  D
1  0  1  0  1  0  1  → code word transmitted
 Suppose that this code word is altered during transmission. Assume that location 5 changes from 1 to 0. The received code word with the error is:
1  2  3  4  5  6  7
p1 p2 D  p3 D  D  D
1  0  1  0  0  0  1
 At the decoder, we re-evaluate the parity bits to determine where the error occurred. This is accomplished by assigning a 1 to any parity check that is incorrect and a 0 to any parity check that is correct.
 We check p1 for locations 3, 5 and 7. Here they are 1, 0 and 1. For even parity p1 should be 0, but we received p1 as 1, which is incorrect. We assign a 1 to p1.
 We check p2 for locations 3, 6 and 7. Here they are 1, 0 and 1 respectively. For even parity p2 should be 0, and we received p2 as 0, which is correct. We assign a 0 to p2.
 We check p3 for locations 5, 6 and 7. Here they are 0, 0 and 1 respectively. For even parity p3 should be 1, but we received p3 as 0, which is incorrect. We assign a 1 to p3.
 The three assigned values form the binary word p3 p2 p1 = 1 0 1, which has a decimal value of 5. This means that the bit location containing the error is 5. The decoder then changes the bit in the 5th location from 0 to 1.
 The Hamming code is therefore capable of locating a single error, but it fails if multiple errors occur in one data block.
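The whole procedure of Problem 7 can be sketched as follows (positions are 1-indexed as in the text):

```python
# (7,4) Hamming code with parity bits at positions 1, 2 and 4: encode the
# data 1 1 0 1, corrupt one bit, and locate it from the failed parity checks.
def hamming_encode(d):                     # d = data at positions 3, 5, 6, 7
    word = [0, 0, d[0], 0, d[1], d[2], d[3]]
    word[0] = word[2] ^ word[4] ^ word[6]  # p1 checks locations 3, 5, 7
    word[1] = word[2] ^ word[5] ^ word[6]  # p2 checks locations 3, 6, 7
    word[3] = word[4] ^ word[5] ^ word[6]  # p3 checks locations 5, 6, 7
    return word

def error_position(w):                     # 0 means no error detected
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]         # 1 if the p1 check fails
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]         # 1 if the p2 check fails
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]         # 1 if the p3 check fails
    return s3 * 4 + s2 * 2 + s1            # binary word p3 p2 p1 as a number

sent = hamming_encode([1, 1, 0, 1])        # -> 1 0 1 0 1 0 1
recv = sent[:]
recv[4] ^= 1                               # corrupt location 5
pos = error_position(recv)                 # -> 5
recv[pos - 1] ^= 1                         # flip it back: error corrected
```

A double error would produce a syndrome pointing at the wrong location, which is exactly the failure mode noted above.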