Error Control Coding
Prepared by :
Submitted to : Dr. Allam Mousa
Definition of channel coding
Error control coding detects, and often corrects, symbols which are
received in error.
The channel encoder separates or segments the incoming bit stream into
equal-length blocks of L binary digits and maps each L-bit message
block into an N-bit codeword, where N > L.
There are M = 2^L messages and 2^L codewords of length N bits.
The channel decoder has the task of detecting that there has been a bit error and (if
possible) correcting the bit error.
ARQ (Automatic Repeat Request): if the channel decoder performs error
detection, then errors can be detected and a feedback channel from the channel
decoder to the channel encoder can be used to control retransmission of the
code word until it is received without detectable errors.
There are two major ARQ techniques: stop and wait, and continuous ARQ.
FEC (Forward Error Correction): if the channel decoder performs error correction, then
errors are not only detected but the bits in error can be identified and corrected (by
inverting the bits concerned).
There are two major ARQ techniques:
stop and wait, in which each block of data is positively, or
negatively, acknowledged by the receiving terminal as being error
free before the next data block is transmitted;
continuous ARQ, in which blocks of data continue to be transmitted
without waiting for each previous block to be acknowledged.
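The stop-and-wait behaviour described above can be sketched in a few lines of Python; the channel model, function names and retry limit below are illustrative assumptions, not part of the slides.

```python
# Minimal stop-and-wait ARQ sketch. The channel model, function names and
# retry limit are illustrative assumptions, not part of the slides.

def send_stop_and_wait(block, channel, max_attempts=10):
    """Retransmit one block until the receiver acknowledges it as error free."""
    for attempt in range(1, max_attempts + 1):
        received, error_detected = channel(block)  # transmit over noisy channel
        if not error_detected:
            return received, attempt               # positive acknowledgement
        # negative acknowledgement: loop round and retransmit the same block
    raise RuntimeError("block not delivered within max_attempts")

def flaky_channel(block, _state={"fails_left": 2}):
    """Toy channel: the first two transmissions arrive corrupted."""
    if _state["fails_left"] > 0:
        _state["fails_left"] -= 1
        return block, True     # receiver's error detection fires -> NAK
    return block, False        # error free -> ACK

received, attempts = send_stop_and_wait([1, 0, 1, 1], flaky_channel)
print(attempts)  # 3: two retransmissions before success
```

Continuous ARQ differs only in that the sender keeps transmitting later blocks while acknowledgements for earlier ones are still outstanding.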
Error Control Coding (Channel Coding)
Particular error control methods: linear group
codes, cyclic codes, the Golay code, BCH
codes, Reed–Solomon codes and Hamming codes.
Block coding vs. convolutional coding
An (n, k) block code converts k bits of the message signal into n bits.
It works block by block: it takes a number of bits from the message,
adds redundant bits (parity digits) to them, and does the same for the
rest of the bits.
A convolutional code, in contrast, encodes a stream of data rather than
blocks of data: the sequence of bits in a convolutional code depends
not only on the current bits of the message but also on previous bits.
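To illustrate the stream nature of convolutional coding, here is a minimal sketch of a rate-1/2 convolutional encoder; the generator polynomials (7 and 5 in octal) are a common textbook choice assumed here, not taken from the slides.

```python
def conv_encode(bits, state=(0, 0)):
    """Rate-1/2 convolutional encoder with generators 7 and 5 (octal):
    each input bit yields two output bits that depend on the current bit
    AND on the two previous bits held in the shift register."""
    s1, s2 = state
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # generator 111 (octal 7)
        out.append(b ^ s2)       # generator 101 (octal 5)
        s1, s2 = b, s1           # shift the register: history moves along
    return out

print(conv_encode([1, 0, 1, 1]))        # [1, 1, 1, 0, 0, 0, 0, 1]
# Same current bit, different history -> different output:
print(conv_encode([1]), conv_encode([1], state=(1, 1)))  # [1, 1] [1, 0]
```

The last line shows the defining property from the slide: the output for a given input bit changes with the previous bits in the register.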
Error rate control concepts
How do we measure error performance?
The answer is the BER: the average rate at which errors occur, given
by the product Pb Rb, where
Pb is the probability of bit error, and
Rb is the bit transmission rate in the channel.
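A quick worked example of the Pb Rb product (the numbers below are illustrative assumptions, not from the slides):

```python
# Error-rate arithmetic from the definition above: average errors per
# second = Pb * Rb. The numbers are illustrative assumptions.
Pb = 1e-5        # probability of bit error
Rb = 1_000_000   # bit transmission rate (bit/s)
error_rate = Pb * Rb
print(error_rate)  # about 10 errors per second on average
```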
If the BER is too large, what can be done to make it smaller?
• Increase transmitter power (not efficient).
• Diversity: frequency diversity employs two different frequencies to
transmit the same information; in time diversity systems the same
message is transmitted more than once at different times.
• Introduce full-duplex transmission, implying simultaneous two-way
transmission.
• ARQ and FEC.
The Hamming distance between two codewords is defined as the number of places,
bits or digits in which they differ.
•The distance is an important factor since it indicates how easy it is to change one valid
codeword into another.
•The weight of the codeword is defined as the number of ones in the codeword.
• Example: calculate the Hamming distance and the weights of the codewords 11100 and 11011.
Hamming distance = 3 bits
(the codeword 11100 is changed into 11011 by altering three bits).
The weight of codeword 1 (11100) = 3.
The weight of codeword 2 (11011) = 4.
The minimum codeword weight = 3.
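The example above can be checked with a few lines of Python (a sketch; the slides themselves use MATLAB):

```python
def hamming_distance(c1, c2):
    """Number of places (bits) in which two equal-length codewords differ."""
    return sum(a != b for a, b in zip(c1, c2))

def weight(codeword):
    """Number of ones in the codeword."""
    return codeword.count("1")

c1, c2 = "11100", "11011"
print(hamming_distance(c1, c2))       # 3
print(weight(c1), weight(c2))         # 3 4
print(min(weight(c1), weight(c2)))    # minimum codeword weight = 3
```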
(n, k) block codes:
k information digits
going into the coder,
n digits coming out,
(n − k) redundant
parity check digits.
The rate, or efficiency, of this code is
R = k/n.
The rate is normally in the range 1/2 to 1.
Linear group codes
Group codes contain the all-zeros codeword and have the
property referred to as closure .
Advantage: closure makes performance calculations with linear group
codes particularly easy.
Taking any two codewords Ci and Cj, then Ci ⊕ Cj = Ck, another codeword.
Example: codewords a, b, c, d, where a is the all-zeros codeword:
b ⊕ c = d, c ⊕ d = b, b ⊕ d = c.
Hamming distances are measured to determine the overall performance of a
code by comparing each of the codewords with the all-zeros codeword.
Dmin = 3 for this (5,2) code.
Consider the four codewords; their weights are 0, 3, 3 and 4.
The minimum non-zero weight in the weight structure (3) is equal to Dmin,
the minimum Hamming distance for the code.
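The closure property and the weight/distance relationship can be verified numerically. The (5,2) codewords below are a hypothetical choice consistent with the stated weight structure 0, 3, 3, 4; the slides do not list them explicitly.

```python
from itertools import combinations

# Hypothetical (5,2) group code with weight structure 0, 3, 3, 4.
# The slides state the weights but not the codewords; these four are an
# illustrative choice that satisfies closure.
codewords = [0b00000, 0b11100, 0b00111, 0b11011]

def weight(c):
    return bin(c).count("1")

# Closure: the XOR of any two codewords is again a codeword.
closed = all((ci ^ cj) in codewords for ci, cj in combinations(codewords, 2))

# Minimum Hamming distance over all pairs vs minimum non-zero weight.
dmin = min(weight(ci ^ cj) for ci, cj in combinations(codewords, 2))
min_nonzero_weight = min(weight(c) for c in codewords if c != 0)
print(closed, dmin, min_nonzero_weight)  # True 3 3
```

Because each pairwise XOR is itself a codeword, the pairwise distances are exactly the non-zero codeword weights, which is why Dmin equals the minimum non-zero weight.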
Performance prediction (review)
Consider the probability of the ith codeword (Ci) being
misinterpreted as the jth codeword (Cj). This probability depends on
the distance between these two codewords (Dij).
By closure, Dij is equal to the weight of a third codeword Ck.
• The probability of Ci being mistaken for Cj is therefore equal to the
probability of Ck being mistaken for C0, the all-zeros codeword.
• The probability of C0 being misinterpreted as Ck
depends only on the weight of Ck.
The performance of such a code can therefore be determined completely by:
1. consideration of C0 alone, and
2. the weight structure of the code.
Error detection and correction capability
Dmin: the minimum Hamming distance of the code.
e: the number of errors a code can detect; for detection alone, e = Dmin − 1.
t: the maximum number of errors a code can correct; t = (Dmin − 1)/2, rounded down.
t ≤ e.
Example: for a code with Dmin = 3, e = 1 and t = 1 when detection and
correction are combined. If any single error occurs in one of the
codewords it can therefore be corrected. Alternatively, Dmin − 1 = 2
errors can be detected if there is no error correction.
Longer codes with larger Hamming distances
offer greater detection and correction capability
by selecting different t and e
The UK Post Office Code Standards Advisory Group (POCSAG) code has
k = 21 and n = 32
(so R ≈ 2/3)
and Dmin = 6,
giving 3-bit detection and 2-bit correction capability.
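The standard relationships between Dmin and the capabilities t and e can be sketched as follows; they reproduce both the Dmin = 3 example above (2 errors detect-only, t = 1) and the POCSAG figures (t = 2, e = 3 for Dmin = 6).

```python
def detect_only(dmin):
    """Errors guaranteed detectable when no correction is attempted."""
    return dmin - 1

def correct_only(dmin):
    """Errors guaranteed correctable: t = (Dmin - 1) // 2."""
    return (dmin - 1) // 2

def simultaneous(dmin):
    """Largest (t, e) with e >= t and t + e + 1 <= Dmin (combined mode)."""
    best = (0, 0)
    for t in range(dmin):
        e = dmin - 1 - t
        if e >= t:
            best = (t, e)
    return best

print(detect_only(3), correct_only(3), simultaneous(3))  # 2 1 (1, 1)
print(simultaneous(6))  # (2, 3): POCSAG's 2-bit correction, 3-bit detection
```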
The syndrome is independent of the transmitted codeword and depends
only on the error sequence.
A decoding table tells a decoder how to correct errors that might have
corrupted the codeword during transmission.
The codeword is generated as c = d G, where:
d is a message vector of k digits,
G is the k × n generator matrix,
c is the n-digit codeword corresponding to the message d.
H is the (even) parity check matrix corresponding to G; every codeword satisfies H c = 0.
r is the sequence received after transmitting c,
e is an error vector representing the locations of the errors
which occur in the received sequence r, so that r = c ⊕ e.
The syndrome vector s:
s = H r = H (c ⊕ e) = H c ⊕ H e = 0 ⊕ H e = H e
so s is easily calculated, and depends only on the error pattern.
The generator matrix(G) : The generator matrix G for
an (n, k) block code can be used to generate the appropriate
n-digit codeword from any given k-digit data sequence .
Parity check matrix (H): does not contain any codewords.
(7,4) block code H matrix.
The right side of G is the transpose of the left hand
portion of H.
The parity check section must contain at least
two ones, and its rows cannot be identical.
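As a concrete sketch of c = d G and s = H r for a (7,4) code, the following Python fragment builds a systematic Hamming code; the particular parity submatrix P is an assumed standard choice, since the slides do not list the matrices.

```python
# Systematic (7,4) Hamming code: G = [I | P], H = [P^T | I]. The parity
# submatrix P is an assumed standard choice (the slides omit the matrices).
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]
k, n = 4, 7

def encode74(d):
    """c = dG: the k message digits followed by (n - k) parity digits."""
    parity = [sum(d[i] * P[i][j] for i in range(k)) % 2 for j in range(n - k)]
    return d + parity

def syndrome(r):
    """s = Hr with H = [P^T | I]: zero for every valid codeword."""
    return [(sum(P[i][j] * r[i] for i in range(k)) + r[k + j]) % 2
            for j in range(n - k)]

d = [1, 0, 1, 1]
c = encode74(d)
print(c, syndrome(c))            # codeword satisfies Hc = 0

e = [0, 0, 1, 0, 0, 0, 0]        # single error, position 3
r = [ci ^ ei for ci, ei in zip(c, e)]
print(syndrome(r), syndrome(e))  # equal: the syndrome depends only on e
```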
Use this MATLAB syntax to produce a syndrome decoding table:
t = syndtable(h)
which returns a decoding table for an error-correcting binary
code having codeword length n and message length k.
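In the same spirit as syndtable, a minimal decoding table can be built by mapping each correctable (here, single-bit) error pattern to its syndrome. The (7,4) parity check matrix below is an assumed example; syndtable itself works from whatever binary parity-check matrix it is given.

```python
# Sketch of the kind of table syndtable produces: one correctable error
# pattern per syndrome. H is a parity check matrix for a (7,4) Hamming
# code (an assumed example, not taken from the slides).
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
n = 7

def syndrome(e):
    return tuple(sum(h * b for h, b in zip(row, e)) % 2 for row in H)

# Map each syndrome to the lowest-weight error pattern that produces it:
# the zero pattern, plus one pattern per single-bit error position.
table = {syndrome([0] * n): [0] * n}
for pos in range(n):
    e = [1 if i == pos else 0 for i in range(n)]
    table[syndrome(e)] = e

print(len(table))  # 8: all 2^3 syndromes are covered for this code
```

A decoder then computes s = H r, looks up table[s], and XORs the stored error pattern into r to correct a single-bit error.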
• Error control coding, generally, is applied
widely in control and communications
systems for aerospace applications, in
mobile (GSM) cellular telephony, and for
enhancing security in banking.
n = 6; k = 4; % Set codeword length and message length
% for a [6,4] code.
msg = [1 0 0 1 1 0 1 0 1 0 1 1]'; % Message is a binary column.
code = encode(msg,n,k,'cyclic'); % Code will be a binary column.
msg consists of 12 entries, which are interpreted as three 4-digit
(because k = 4) messages. The resulting vector code comprises three
6-digit (because n = 6) codewords, which are concatenated to form a
vector of length 18. The parity bits are at the beginning of each codeword.
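An equivalent sketch of the systematic cyclic encoding step in Python (parity digits first, as in the MATLAB output above). The generator polynomial g(x) = 1 + x + x^2 is an assumed valid choice, since it divides x^6 + 1 over GF(2); MATLAB derives its own via cyclpoly(n,k).

```python
# Systematic cyclic encoding sketch mirroring encode(msg,n,k,'cyclic'):
# parity digits first, then the message digits. The generator polynomial
# g(x) = 1 + x + x^2 is an assumed valid choice (it divides x^6 + 1 over
# GF(2)); MATLAB derives its own via cyclpoly(n,k).

def gf2_remainder(dividend, divisor):
    """Remainder of GF(2) polynomial division (lists, lowest degree first)."""
    r = dividend[:]
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:
            for j, dj in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= dj
    return r[:len(divisor) - 1]

def cyclic_encode(msg, n, k, g):
    """Codeword = parity digits + message digits, chosen so the codeword
    polynomial p(x) + x^(n-k) m(x) is divisible by g(x)."""
    shifted = [0] * (n - k) + msg          # x^(n-k) * m(x)
    return gf2_remainder(shifted, g) + msg

g = [1, 1, 1]                              # g(x) = 1 + x + x^2
code = cyclic_encode([1, 0, 0, 1], 6, 4, g)
print(code)                                # [0, 0, 1, 0, 0, 1]
print(gf2_remainder(code, g))              # [0, 0]: divisible by g(x)
```

Every valid codeword polynomial leaves a zero remainder on division by g(x), which is exactly the property a cyclic decoder exploits to compute syndromes.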