UNIT IV - INFORMATION THEORY AND CODING
SYLLABUS
• Measure of information
• Entropy
• Source coding theorem
• Shannon–Fano coding, Huffman Coding, LZ Coding
• Channel capacity
• Shannon-Hartley law, Shannon's limit
• Error control codes – Cyclic codes, Syndrome calculation
• Convolutional coding, Sequential and Viterbi decoding
INFORMATION THEORY INTRODUCTION
• The purpose of a communication system is to facilitate the transmission of signals generated by an information source to the receiver end over a communication channel.
• Information theory is a branch of probability
theory which may be applied to the study of
communication systems. Information theory allows
us to determine the information content in a
message signal leading to different source coding
techniques for efficient transmission of message.
• In the context of communication, information theory deals with mathematical modeling and analysis of communication systems rather than with the physical sources and physical channels themselves.
• In particular, it provides answers to two
fundamental questions,
• What is the irreducible complexity below which a signal
cannot be compressed?
• What is the ultimate transmission rate for reliable
communication over a noisy channel?
• The answers to these questions lie in the entropy of the source and the capacity of the channel, respectively.
• Entropy is defined in terms of the probabilistic
behavior of a source of information.
• Channel capacity is defined as the intrinsic ability of a channel to convey information; it is naturally defined in terms of the mutual information between the channel input and output.
INFORMATION
• Information is central to a communication system, whether it is analog or digital. Information is related to the probability of occurrence of an event. The unit of information is called the bit.
• Based on memory, information sources can be classified as follows:
• A source with memory is one for which the current symbol depends on the previous symbols.
• A memoryless source is one for which each symbol produced is independent of the previous symbols.
• A Discrete Memoryless Source (DMS) can be characterized by the list of symbols, the probability assignment to these symbols, and the specification of the rate at which the source generates these symbols.
DISCRETE MEMORYLESS SOURCE (DMS)
• Consider a probabilistic experiment that involves the observation of the output emitted by a discrete source during every unit of time (signaling interval). The source output is modeled as a discrete random variable S, which takes on symbols from a fixed finite alphabet
S = { s0, s1, s2, … , sK-1 }
with probabilities
P(S = sk) = pk , k = 0, 1, 2, …, K-1.
The set of probabilities must satisfy the condition
Σk pk = 1, with pk ≥ 0 for all k.
• We assume that the symbols emitted by the source during successive
signaling intervals are statistically independent.
• A source having the above-described properties is called a Discrete Memoryless Source (DMS).
INFORMATION AND UNCERTAINTY
• Information is related to the probability of occurrence of the event: the greater the uncertainty, the more information is associated with it.
• Consider the event S = sk, describing the emission of symbol sk by the source with probability pk. Clearly, if pk = 1 and pi = 0 for all i ≠ k, there is no surprise and therefore no information when symbol sk is emitted.
• If, on the other hand, the source symbols occur with different probabilities and the probability pk is low, then there is more surprise and, therefore, more information when symbol sk is emitted by the source.
INFORMATION AND UNCERTAINTY
• Example:
• Sun rises in East : Here
Uncertainty is zero there is no
surprise in the statement. The
probability of occurrence is 1
(pk=1).
• Sun does not rise in the East: Here the probability of occurrence approaches zero, so observing this event would carry maximum surprise and hence maximum information.
• The amount of information is related to the inverse of the probability of occurrence of the event S = sk, as shown in the figure. The amount of information gained after observing the event S = sk, which occurs with probability pk, is
I(sk) = log2(1/pk) = −log2(pk) bits
PROPERTIES OF INFORMATION
• If we are absolutely certain of the outcome of an event, even before it occurs, there is no information gained.
• The occurrence of an event S = sk either provides some
or no information, but never brings about a loss of
information.
• That is, the less probable an event is, the more
information we gain when it occurs.
• The additive property: for statistically independent symbols sk and sl, I(sk sl) = I(sk) + I(sl). This follows from the logarithmic definition of I(sk).
• It is standard practice in information theory to use a
logarithm to base 2 with binary signalling in mind. The
resulting unit of information is called the bit, which is a
contraction of the words binary digit.
• Hence, one bit is the amount of information that we gain when one of two possible and equally likely (i.e., equiprobable) events occurs.
• Note that the information I(sk) is positive,
because the logarithm of a number less than
one, such as a probability, is negative.
ENTROPY (AVERAGE INFORMATION)
• The entropy of a discrete random variable, representing the
output of a source of information, is a measure of the
average information content per source symbol.
• The amount of information I(sk) produced by the source
during an arbitrary signalling interval depends on the symbol
sk emitted by the source at the time. The self-information
I(sk) is a discrete random variable that takes on the values
I(s0), I(s1), …, I(sK – 1) with probabilities p0, p1, ….., pK – 1
respectively
• The expectation of I(sk) over all the probable values taken by the random variable S gives the entropy:
H(S) = E[I(sk)] = Σk pk I(sk) = Σk pk log2(1/pk) bits/symbol
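A minimal Python sketch (not part of the original slides) that computes the self-information I(sk) and the entropy H(S) for an arbitrary probability distribution:

```python
import math

def self_information(p):
    """I(sk) = log2(1/pk) in bits, valid for 0 < p <= 1."""
    return math.log2(1.0 / p)

def entropy(probs):
    """H(S) = sum_k pk*log2(1/pk) in bits/symbol; pk = 0 terms contribute 0."""
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

print(self_information(0.5))               # 1.0 bit
print(entropy([0.5, 0.5]))                 # 1.0 = log2(K) for K = 2 (upper bound)
print(entropy([0.4, 0.2, 0.2, 0.1, 0.1]))  # ~2.122 bits/symbol
```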
PROPERTIES OF ENTROPY
Consider a discrete memoryless source whose mathematical model is defined by
S = { s0, s1, s2, … , sK-1 }
with probabilities
P(S = sk) = pk , k = 0, 1, 2, …, K-1.
The entropy H(S) of the discrete random variable S is bounded as follows:
0 ≤ H(S) ≤ log2(K)
where K is the number of symbols in the alphabet.
Property 1: H(S) = 0 if, and only if, the probability pk = 1 for some k and the remaining probabilities in the set are all zero; this lower bound on entropy corresponds to no uncertainty.
Property 2: H(S) = log2(K) if, and only if, pk = 1/K for all k; this upper bound on entropy corresponds to maximum uncertainty.
Property 3: The upper bound Hmax on entropy is therefore Hmax = log2(K), attained when all K symbols are equiprobable.
INFORMATION RATE (R)
The information rate (R) is the average number of bits of information transmitted per second.
R = r H(S)
Where,
R - information rate ( Information bits / second )
H(S) - the Entropy or average information (bits / symbol)
and
r - the rate at which messages are generated (symbols /
second).
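As a quick worked example (not from the original slides): if a source emits r = 2000 symbols/second with entropy H(S) = 1.5 bits/symbol, the information rate is R = r H(S) = 2000 × 1.5 = 3000 bits/second.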
MUTUAL INFORMATION
• The mutual information I(X;Y) is a measure of the uncertainty
about the channel input, which is resolved by observing the
channel output.
• Mutual information I(xi, yj) of a channel is defined as the amount of information transferred when xi is transmitted and yj is received.
DISCRETE MEMORYLESS CHANNELS
Discrete Memoryless Channel is a statistical model
with an input X and output Y (i.e., noisy version of X).
Both X and Y are random variables.
The channel is said to be discrete when both X and Y have finite sizes. It is said to be memoryless when the current output symbol depends only on the current input symbol and not on any previous ones.
TYPES OF CHANNELS
• LOSSLESS CHANNELS
A channel described by a channel matrix with only one non-zero element in each column is called a lossless channel.
DETERMINISTIC CHANNEL
• A channel described by a channel matrix with
one non-zero element in each row is called a
deterministic channel
BINARY SYMMETRIC CHANNEL
• Binary Symmetric Channel has two input symbols x0=0
and x1=1 and two output symbols y0=0 and y1=1. The
channel is symmetric because the probability of receiving a 1
if a 0 is sent is the same as the probability of receiving a 0 if
a 1 is sent.
• In other words, the correct bit is transmitted with probability 1 − p and the wrong bit with probability p, which is also called the cross-over probability. The conditional probability of error is denoted as p.
BINARY ERASURE CHANNEL (BEC)
A Binary erasure channel (BEC) has two inputs (0,1)
and three outputs (0,y,1). The symbol y indicates that, due to
noise, no deterministic decision can be made as to whether
the received symbol is a 0 or 1. In other words the symbol y
represents that the output is erased. Hence the name Binary
Erasure Channel (BEC)
CHANNEL CAPACITY
The channel capacity of a discrete memoryless channel is defined as the maximum average mutual information I(X;Y) in any single use of the channel (i.e., signaling interval), where the maximization is over all possible input probability distributions {p(xi)} on X. It is measured in bits per channel use.
Channel capacity C = max I(X;Y)
The channel capacity C is a function only of the transition probabilities p(yj|xi), which define the channel. The calculation of C involves maximization of the average mutual information I(X;Y) over the K input probabilities.
CHANNEL CODING THEOREM
• The design goal of channel coding is to increase the
resistance of a digital communication system to channel
noise.
• Specifically, channel coding consists of mapping the
incoming data sequence into a channel input sequence and
inverse mapping the channel output sequence into an output
data sequence in such a way that the overall effect of
channel noise on the system is minimized.
• Mapping operation is performed in the transmitter by a
channel encoder, whereas the inverse mapping operation is
performed in the receiver by a channel decoder.
THEOREM
The channel-coding theorem for a discrete memoryless
channel is stated as follows:
• Let a discrete memoryless source with an alphabet 𝒮 have
entropy H(S) for random variable S and produce symbols
once every Ts seconds. Let a discrete memoryless channel
have capacity C and be used once every Tc seconds. Then, if
H(S)/Ts ≤ C/Tc
there exists a coding scheme for which the source
output can be transmitted over the channel and be
reconstructed with an arbitrarily small probability of error.
The parameter C/Tc is called the critical rate; the system is said to be signalling at the critical rate when H(S)/Ts = C/Tc.
Conversely, if
H(S)/Ts > C/Tc
it is not possible to transmit information over the channel and
reconstruct it with an arbitrarily small probability of error.
The ratio Tc/Ts equals the code rate of the encoder, denoted by r, where r = k/n.
• In short, the channel coding theorem states that if a discrete memoryless channel has capacity C and a source generates information at a rate less than C, then there exists a coding technique such that the output of the source may be transmitted over the channel with an arbitrarily low probability of symbol error.
• Conversely, it is not possible to find such a code if the code rate r is greater than the channel capacity C. If r ≤ C, there exists a code capable of achieving an arbitrarily low probability of error.
Limitations
It does not show us how to construct a good code.
The theorem does not have a precise result for the
probability of symbol error after decoding the channel
output.
MAXIMUM ENTROPY FOR GAUSSIAN CHANNEL

CHANNEL (INFORMATION) CAPACITY THEOREM OR SHANNON'S THEOREM
The information capacity (C) of a continuous channel of bandwidth B hertz, affected by AWGN of total noise power N within the channel bandwidth, is given by the formula
C = B log2(1 + S/N) bits/second
where
C - information capacity
B - channel bandwidth
S - signal power (or average transmitted power)
N - total noise power within the channel bandwidth B
It is easier to increase the information capacity of a continuous communication channel by expanding its bandwidth than by increasing the transmitted power for a prescribed noise variance.
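A one-function sketch of the formula (the bandwidth and S/N figures below are illustrative assumptions, not from the slides):

```python
import math

def shannon_capacity(bandwidth_hz, signal_power, noise_power):
    """Shannon-Hartley law: C = B*log2(1 + S/N) in bits/second."""
    return bandwidth_hz * math.log2(1.0 + signal_power / noise_power)

# e.g. a 3.1 kHz telephone-grade channel with S/N = 1000 (30 dB):
print(shannon_capacity(3100, 1000, 1))   # ~30898 bits/second
```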
Trade-off
• A noiseless channel has infinite capacity: if there is no noise in the channel, then N = 0, S/N → ∞, and hence C → ∞.
• Infinite bandwidth does not yield infinite capacity: as B increases, the noise power N = N0·B also increases, so the S/N ratio decreases. Hence, even as B approaches infinity, the capacity approaches a finite limit C∞ = (S/N0) log2 e ≈ 1.44 S/N0 (Shannon's limit).
SOURCE CODING THEOREM
Efficient representation of data generated by a discrete
source is accomplished by some encoding process. The
device that performs the representation is called a source
encoder.
An efficient source encoder must satisfy two functional requirements:
• Code words produced by the encoder are in binary form.
• The source code is uniquely decodable, so that the original source sequence can be reconstructed perfectly from the encoded binary sequence.
THEOREM STATEMENT
• Given a discrete memoryless source of entropy H(S) (or H(X)), the average code word length L for any distortionless source encoding is bounded as
L ≥ H(S)
• where the entropy H(S) (or H(X)) is the fundamental limit on the average number of bits per source symbol.
AVERAGE CODE WORD LENGTH
L = Σk pk lk
where pk is the probability of symbol sk and lk is the number of bits in the code word assigned to it.
CODING EFFICIENCY (η)
η = H(S) / L
The value of the coding efficiency η is always less than or equal to 1. The source encoder is said to be efficient when η approaches unity.
VARIANCE
The variance of the code word lengths is defined as
σ² = Σk pk (lk − L)²
CODING TECHNIQUES
The two popular coding techniques used in information theory are
(i) Shannon-Fano coding
(ii) Huffman coding.
ALGORITHM OF SHANNON-FANO CODING:
(i) List the source symbols in order of decreasing probability.
(ii) Partition the set into two sets that are as close to equiprobable as possible.
(iii) Assign "0" to the upper set and "1" to the lower set.
(iv) Continue this process, each time partitioning the sets with as nearly equal probabilities as possible, until further partitioning is not possible; a code sketch follows.
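A compact recursive sketch of the algorithm (ties between equally balanced partitions are broken toward the first candidate split, so individual code words may differ from a hand construction while the average length is the same):

```python
def shannon_fano(symbols):
    """Shannon-Fano coding. symbols: list of (name, probability) pairs.
    Returns {name: binary code string}."""
    codes = {}

    def split(items, prefix):
        if len(items) == 1:
            codes[items[0][0]] = prefix or "0"
            return
        total = sum(p for _, p in items)
        acc, best_i, best_diff = 0.0, 1, float("inf")
        for i in range(1, len(items)):          # find the most equiprobable split
            acc += items[i - 1][1]
            diff = abs(2 * acc - total)         # |upper sum - lower sum|
            if diff < best_diff:
                best_diff, best_i = diff, i
        split(items[:best_i], prefix + "0")     # "0" to the upper set
        split(items[best_i:], prefix + "1")     # "1" to the lower set

    split(sorted(symbols, key=lambda s: -s[1]), "")
    return codes

print(shannon_fano([("x1", 0.4), ("x2", 0.2), ("x3", 0.1),
                    ("x4", 0.2), ("x5", 0.1)]))
```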
SHANNON-FANO CODING TECHNIQUES
(i) A discrete memoryless source has symbols x1, x2, x3, x4, x5 with probabilities 0.4, 0.2, 0.1, 0.2, 0.1 respectively. Construct a Shannon-Fano code for the source and calculate the code efficiency η.

Solution (one valid partitioning):

Symbol   pk     Code   lk
x1       0.4    00     2
x2       0.2    01     2
x4       0.2    10     2
x3       0.1    110    3
x5       0.1    111    3

L = Σ pk lk = 2.2 bits/symbol, H(S) ≈ 2.122 bits/symbol, so η = H(S)/L ≈ 96.45 %.

2. For a discrete memoryless source X with 6 symbols x1, x2, …, x6, the probabilities are p(x1) = 0.3, p(x2) = 0.25, p(x3) = 0.2, p(x4) = 0.12, p(x5) = 0.08, p(x6) = 0.05 respectively. Using Shannon-Fano coding, calculate the entropy, average code length, efficiency and redundancy of the code.

Solution (one valid partitioning):

Symbol   pk      Code    lk
x1       0.30    00      2
x2       0.25    01      2
x3       0.20    10      2
x4       0.12    110     3
x5       0.08    1110    4
x6       0.05    1111    4

H(S) ≈ 2.360 bits/symbol, L = 2.38 bits/symbol,
η = H(S)/L ≈ 99.17 %, redundancy 1 − η ≈ 0.83 %.
2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 62
• A discrete memoryless source has five symbols
x1,x2,x3,x4 and x5 with probabilities
0.4,0.19,0.16,0.15 and 0.15 respectively attached
to every symbol.Construct a Shannon-Fano code
for the source and calculate code efficiency.
HUFFMAN CODING
• Huffman coding is a source coding method used for data compression applications.
• It generates variable-length codes and gives the lowest possible average code word length.
• Huffman coding provides maximum efficiency and minimum redundancy.
• Huffman coding is also known as the minimum redundancy code or optimum code.
ALGORITHM FOR HUFFMAN CODING
• Step 1: Arrange the given messages in decreasing order of probabilities.
• Step 2: Group the M least probable messages and assign them symbols for coding.
• Step 3: Add the probabilities of the grouped messages, place the sum as high as possible, and rearrange in decreasing order again. This process is known as reduction.
• Step 4: Repeat steps 2 and 3 until only M or fewer probabilities remain.
• Step 5: Obtain the code word for each message by tracing the probabilities from left to right in the direction of the arrows; a code sketch follows this list.
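A minimal Huffman sketch using a binary heap (tie-breaking among equal probabilities is arbitrary, so individual code words may differ from the tables that follow, but the average code word length is the same):

```python
import heapq

def huffman(symbols):
    """Binary Huffman coding. symbols: dict {name: probability}.
    Returns {name: binary code string}."""
    heap = [(p, i, {name: ""}) for i, (name, p) in enumerate(symbols.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)          # two least probable groups
        p1, _, c1 = heapq.heappop(heap)
        merged = {n: "0" + c for n, c in c0.items()}        # 0 to one group
        merged.update({n: "1" + c for n, c in c1.items()})  # 1 to the other
        counter += 1
        heapq.heappush(heap, (p0 + p1, counter, merged))
    return heap[0][2]

probs = {"m1": 0.4, "m2": 0.2, "m3": 0.2, "m4": 0.1, "m5": 0.1}
codes = huffman(probs)
print(codes, sum(probs[s] * len(codes[s]) for s in probs))  # avg length 2.2
```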
PROBLEMS OF HUFFMAN CODING
• 1) Using the Huffman algorithm, find the average code word length and code efficiency. Consider the 5 source symbols of a DMS with probabilities p(m1) = 0.4, p(m2) = 0.2, p(m3) = 0.2, p(m4) = 0.1 and p(m5) = 0.1.

Message (mk)   Probability (pk)
m1             0.4
m2             0.2
m3             0.2
m4             0.1
m5             0.1
PROBLEMS OF HUFFMAN CODING
The reduction proceeds in stages: at each stage the two lowest probabilities are combined (0 is assigned to one branch of each combination and 1 to the other), and the combined probability is placed as high as possible in the next stage.

Message (mk)   pk    Stage I   Stage II   Stage III   Code word   lk
m1             0.4   0.4       0.4        0.6         00          2
m2             0.2   0.2       0.4        0.4         10          2
m3             0.2   0.2       0.2                    11          2
m4             0.1   0.2                              010         3
m5             0.1                                    011         3

Average code word length: L = Σ pk lk = 0.4×2 + 0.2×2 + 0.2×2 + 0.1×3 + 0.1×3 = 2.2 bits/symbol
Entropy: H(S) = Σ pk log2(1/pk) ≈ 2.122 bits/symbol
Code efficiency: η = H(S)/L ≈ 96.45 %

PROBLEMS OF HUFFMAN CODING
• For a discrete memoryless source X with 6 symbols x1, x2, …, x6, the probabilities are p(x1) = 0.3, p(x2) = 0.25, p(x3) = 0.2, p(x4) = 0.12, p(x5) = 0.08, p(x6) = 0.05 respectively. Using Huffman coding, calculate the entropy, average length of code, efficiency and redundancy of the code.

Message (mk)   Probability (pk)
x1             0.30
x2             0.25
x3             0.20
x4             0.12
x5             0.08
x6             0.05
PROBLEMS OF HUFFMAN CODING

Message (mk)   pk     Stage I   Stage II   Stage III   Stage IV   Code word   lk
x1             0.30   0.30      0.30       0.45        0.55       00          2
x2             0.25   0.25      0.25       0.30        0.45       10          2
x3             0.20   0.20      0.25       0.25                   11          2
x4             0.12   0.13      0.20                              011         3
x5             0.08   0.12                                        0100        4
x6             0.05                                               0101        4

Average code word length: L = Σ pk lk = 0.3×2 + 0.25×2 + 0.2×2 + 0.12×3 + 0.08×4 + 0.05×4 = 2.38 bits/symbol
Entropy: H(S) ≈ 2.360 bits/symbol
Code efficiency: η = H(S)/L ≈ 99.17 %; redundancy = 1 − η ≈ 0.83 %
LEMPEL ZIV CODING
ALGORITHM FOR LEMPEL ZIV CODING
• Principle: The source data stream is parsed into segments that are the shortest subsequences not encountered previously.
Step 1: Divide the given input data stream into subsequences.
Step 2: Assign a numerical position to each subsequence.
Step 3: Assign a numerical representation to each subsequence (the position of its prefix followed by the innovation, i.e., last, symbol) and encode it in binary; a code sketch follows.
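A short sketch of the parsing step, following the convention of Examples 1 and 2 below in which the two single symbols occupy codebook positions 1 and 2 (Examples 3 and 4 instead start from an empty prefix φ):

```python
def lempel_ziv_parse(stream, alphabet="01"):
    """Parse the stream into the shortest subsequences not seen before.
    Returns (subsequence, prefix_position, innovation_symbol) triples."""
    book = {sym: i + 1 for i, sym in enumerate(alphabet)}  # positions 1 and 2
    parsed, cur = [], ""
    for sym in stream:
        cur += sym
        if cur not in book:
            book[cur] = len(book) + 1
            parsed.append((cur, book[cur[:-1]], sym))
            cur = ""
    return parsed

for subseq, prefix_pos, bit in lempel_ziv_parse("000101110010100101"):
    print(subseq, prefix_pos, bit)
# 00:1,0  01:1,1  011:4,1  10:2,0  010:4,0  100:6,0  101:6,1
```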
PROBLEMS FOR LEMPEL ZIV CODING
• Example 1: Given data stream:
0 0 0 1 0 1 1 1 0 0 1 0 1 0 0 1 0 1

Step 1: Divide the given input data stream into subsequences (the single symbols 0 and 1 occupy codebook positions 1 and 2):

Numerical position:         1    2    3     4     5     6     7     8     9
Subsequence:                0    1    00    01    011   10    010   100   101
Numerical representation:             11    12    42    21    41    61    62
Binary encoded block:                 0010  0011  1001  0100  1000  1100  1101

Final encoded block sequence for the given data stream:
0010 0011 1001 0100 1000 1100 1101
PROBLEMS FOR LEMPEL ZIV CODING
• Example 2: Given data stream:
1 1 1 0 1 0 0 0 1 1 0 1 0 1 1 0 1 0

Step 1: Divide the given input data stream into subsequences (the single symbols 1 and 0 occupy codebook positions 1 and 2):

Numerical position:         1    2    3     4     5     6     7     8     9
Subsequence:                1    0    11    10    100   01    101   011   010
Numerical representation:             11    12    42    21    41    61    62
Binary encoded block:                 0011  0010  1000  0101  1001  1101  1100

Final encoded block sequence for the given data stream:
0011 0010 1000 0101 1001 1101 1100
PROBLEMS FOR LEMPEL ZIV CODING
• Example 3: Given data stream:
A A B A B B B A B A A B A B B B A B B A B B

Step 1: Divide the given input data stream into subsequences:
A | AB | ABB | B | ABA | ABAB | BB | ABBA | BB

Numerical position:         1    2    3     4    5     6      7     8      9
Subsequence:                A    AB   ABB   B    ABA   ABAB   BB    ABBA   BB
Numerical representation:   φA   1B   2B    φB   2A    5B     4B    3A     7
Binary encoded block:       0    11   101   1    100   1011   1001  110    111
(φ denotes the empty prefix; A is encoded as 0 and B as 1.)

Maximum number of bits per block: 4. Apply zero padding to all shorter code blocks so that every block has the same length and can be decoded without ambiguity.

Final encoded block sequence for the given data stream:
0000 0011 0101 0001 0100 1011 1001 0110 0111
2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 103
• Example 4 : Given data stream :
B B A B A A A B A B B A B A A A B A
A B A A
Step 1: Divide the given input data stream into
sub sequences
B B A BAA A B A B B A B A A A B A
A B AA
Numerical
Position
1 2 3 4 5 6 7 8 9
Sub
sequences
B BA BAA A BAB BABA AA BAAB AA
PROBLEMS FOR LEMPEL ZIV CODING
2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 104
Numerical
Position
1 2 3 4 5 6 7 8 9
Sub
sequences
B B
A
B A
A
A BAB BAB
A
AA BAA
B
AA
Numerical
Position
1 2 3 4 5 6 7 8 9
Sub
sequences
B B
A
B A
A
A BAB BAB
A
AA BAA
B
AA
Numerical
representati
on
φB 1A 2A φA 2B 5A 4A 3B 7
PROBLEMS FOR LEMPEL ZIV CODING
2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 105
Numerical
Position
1 2 3 4 5 6 7 8 9
Sub
sequences
B B
A
B A
A
A BAB BAB
A
AA BAA
B
AA
Numerical
representati
on
φB 1A 2A φA 2B 5A 4A 3B 7
Binary
Encoded
Blocks
0 11 101 1 100 1011 1001 110 111
Max no of bits :
4
Apply Zero Padding to all other codes to get decoding
without redudancies
PROBLEMS FOR LEMPEL ZIV CODING
2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 106
Numerical
Position
1 2 3 4 5 6 7 8 9
Sub
sequences
B B
A
B A
A
A BAB BABA AA BAAB AA
Numerical
representatio
n
φB 1A 2A φA 2B 5A 4A 3B 7
Binary
Encoded
Blocks
0 11 101 1 100 1011 1001 110 111
Binary
Encoded
Blocks
000
0
0011 0101 0001 0100 1011 1001 0110 0111
Final Encoded Block sequence for the given data stream :
0000 0011 0101 0001 0100 1011 1001
0110 0111
ERROR CONTROL CODES
ERROR:
An error is a condition where the output information does not match the input information. During transmission, digital signals suffer from noise that can introduce errors in the binary bits travelling from one system to another: a 0 bit may change to 1, or a 1 bit may change to 0.
Block diagram:
Transmitter: Digital Source → Source Encoder → Error Control Coding → Line Coding → Modulator → X(w)
Channel: Y(w) = X(w) plus noise N(w)
Receiver: Demodulator → Line Decoding → Error Control Decoding → Source Decoder → Digital Sink
CLASSIFICATION OF ERRORS:
Content errors:
These are errors in the content of the message, introduced due to noise during transmission.
Flow integrity errors:
These errors are missing blocks of data, data lost in the network, or data delivered to the wrong destination.
CLASSIFICATION OF CODES:
 Error detecting codes
 Error correcting codes
IMPORTANT DEFINITIONS RELATED TO CODES
 Code Word:
The code word is the n bit encoded block
of bits. It contains message bits and parity or
redundant bits.
 Block Length:
The number of bits ‘n’ after coding is known as
block length.
 Code Rate (or) Code Efficiency:
The code rate is defined as the ratio of the number of message bits (k) to the total number of bits (n) in a code word:
Code rate (r) = k / n
 Code Vector:
An ‘n’ bit code word can be visualized in an n – dimensional
space as a vector whose elements or coordinates are bits in the code
word.
Hamming Distance:
The Hamming distance between two code words is equal
to the number of differences between them, e.g.,
1 0 0 1 1 0 1 1
1 1 0 1 0 0 1 0
Hamming distance = 3
Minimum Distance dmin:
It is defined as the smallest Hamming distance between
any pair of code vectors in the code.
Hamming Weight of a Code Word [w(x)]:
It is defined as the number of non – zero elements in the
code word. It is denoted by w(x)
for eg: X= 01110101, then w(x) = 5
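Both quantities are one-liners to compute; a small sketch using the example values above:

```python
def hamming_distance(x, y):
    """Number of positions in which two equal-length code words differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def hamming_weight(x):
    """Number of non-zero elements in the code word."""
    return sum(bit == "1" for bit in x)

print(hamming_distance("10011011", "11010010"))  # 3
print(hamming_weight("01110101"))                # 5
```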
ERROR CONTROL CODES
Error Control Coding
The main purpose of error control coding is to enable the receiver to detect, and even correct, errors by introducing some redundancy into the data to be transmitted. There are basically two mechanisms for adding redundancy:
 Block coding
 Convolutional coding
CLASSIFICATION OF ERROR–CORRECTING CODES:
Block Codes (no memory is required):
An (n, k) block code is generated when the channel encoder accepts information in successive k-bit blocks. At the end of each such block, (n − k) parity bits are added; they carry no new information and are termed redundant bits.
Convolutional Codes (memory is required):
Here the code words are generated by discrete-time convolution of the input sequence with the impulse response of the encoder. Unlike block codes, the channel encoder accepts messages as a continuous sequence and generates a continuous sequence of encoded bits at the output.
Linear Codes and Non-linear Codes
Linear codes have the unique property that when any two code words of a linear code are added in a modulo-2 adder, a third code word is produced which is also a valid code word; this is not the case for non-linear codes.
CLASSIFICATION OF ERROR CONTROL CODING
TECHNIQUES
TYPES OF ERROR CONTROL TECHNIQUES
AUTOMATIC REPEAT REQUEST (ARQ)
 Here, when an error is detected, a request is made
for retransmission of signal.
 A feedback channel is necessary for this
retransmission.
 It differs from the FEC system in the following three aspects:
1. Fewer check bits (parity bits) are required, increasing the (k/n) ratio for an (n, k) block code.
2. Additional hardware to implement the feedback path is required.
3. The forward-transmission bit rate must make allowance for retransmissions.
 For each message signal at the input, the encoder produces code words
which are stored temporarily at encoder output and transmitted over forward
transmission channel.
 At the destination, the decoder decodes the signal and returns a positive acknowledgement (ACK) if no error is detected and a negative acknowledgement (NAK) if an error is detected.
 On receipt of NAK, the controller retransmits the appropriate word
stored in the input buffer.
FORWARD ERROR CORRECTION (FEC)
TECHNIQUE
 It is a digital modulation system where a discrete source generates information in binary form.
 The channel encoder accepts these message bits and adds redundant bits to them, resulting in a higher bit rate for transmission.
 The channel decoder uses the redundant bits to detect, and where possible correct, errors in the transmitted messages.
ERROR DETECTION TECHNIQUE
Whenever a message is transmitted, it may get
scrambled by noise or data may get corrupted. To avoid
this, we use error-detecting codes which are additional
data added to a given digital message to help us detect if
an error occurred during transmission of the message. A
simple example of error-detecting code is parity check.
i) Parity Checking
ii) Check sum Error Detection
iii) Cyclic Redundancy Check (CRC)
PARITY CHECKING OF ERROR DETECTION
• It is the simplest technique for detecting and correcting
errors. The MSB of an 8-bits word is used as the parity bit
and the remaining 7 bits are used as data or message bits.
The parity of 8-bits transmitted word can be either even parity
or odd parity.
• Even parity -- Even parity means the number of 1's in the
given word including the parity bit should be even (2,4,6,....).
• Odd parity -- Odd parity means the number of 1's in the
given word including the parity bit should be odd (1,3,5,....).
PARITY CHECKING OF ERROR DETECTION
USE OF PARITY BIT
• The parity bit can be set to 0 and 1 depending on the type of
the parity required.
• For even parity, this bit is set to 1 or 0 such that the no. of "1
bits" in the entire word is even. Shown in fig. (a).
• For odd parity, this bit is set to 1 or 0 such that the no. of "1
bits" in the entire word is odd. Shown in fig (b)
fig. (a): even parity example, where the parity bit makes the total number of 1s in the word (parity bit plus message or data bits) even.
fig. (b): odd parity example, where the parity bit makes the total number of 1s in the word odd.
PARITY CHECKING OF ERROR DETECTION
• ERROR DETECTION USING PARITY BIT
Parity checking at the receiver can detect the presence
of an error if the parity of the receiver signal is different from
the expected parity. That means, if it is known that the parity
of the transmitted signal is always going to be "even" and if
the received signal has an odd parity, then the receiver can
conclude that the received signal is not correct. If an error is
detected, then the receiver will ignore the received byte and
request for retransmission of the same byte to the transmitter.
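A minimal sketch of even-parity generation and checking (the 7-bit data word is an arbitrary illustration):

```python
def add_even_parity(data_bits):
    """Append a parity bit so the total number of 1s is even."""
    parity = sum(b == "1" for b in data_bits) % 2
    return data_bits + str(parity)

def check_even_parity(word):
    """True if the received word still has an even number of 1s."""
    return sum(b == "1" for b in word) % 2 == 0

sent = add_even_parity("0011101")
print(sent, check_even_parity(sent))                          # no error
bad = sent[:2] + ("1" if sent[2] == "0" else "0") + sent[3:]  # flip one bit
print(bad, check_even_parity(bad))                            # error detected
```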
CHECK SUM ERROR DETECTION
• This is a block code method where a checksum is created
based on the data values in the data blocks to be transmitted
using some algorithm and appended to the data.
• When the receiver gets this data, a new checksum is
calculated and compared with the existing checksum. A non-
match indicates an error.
Error Detection by Checksums
• For error detection by checksums, data is divided into fixed
sized frames or segments.
• Sender’s End − The sender adds the segments using 1’s
complement arithmetic to get the sum. It then complements
the sum to get the checksum and sends it along with the data
frames.
• Receiver's End − The receiver adds the incoming segments along with the checksum using 1's complement arithmetic to get the sum and then complements it; if the result is all zeros, the data is accepted, otherwise an error is detected.
CHECK SUM ERROR DETECTION
Example
• Suppose that the sender wants to
send 4 frames each of 8 bits,
where the frames are 11001100,
10101010, 11110000 and
11000011.
• The sender adds the bits using 1s
complement arithmetic. While
adding two numbers using 1s
complement arithmetic, if there is
a carry over, it is added to the
sum.
• After adding all the 4 frames, the
sender complements the sum to
get the checksum, 11010011, and
sends it along with the data
frames.
• The receiver performs the 1's complement arithmetic sum of all the frames including the checksum; complementing the result gives all zeros, confirming that no error has occurred.
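A sketch of the checksum arithmetic, reproducing the frame values of the example above:

```python
def ones_complement_sum(words, bits=8):
    """Add words using 1's complement arithmetic (end-around carry)."""
    total = 0
    for w in words:
        total += w
        if total >> bits:                            # carry out of the top bit
            total = (total & ((1 << bits) - 1)) + 1  # wrap it around
    return total

def checksum(frames, bits=8):
    """Complement of the 1's complement sum of the frames."""
    return ones_complement_sum(frames, bits) ^ ((1 << bits) - 1)

frames = [0b11001100, 0b10101010, 0b11110000, 0b11000011]
cs = checksum(frames)
print(f"{cs:08b}")                                 # 11010011, as in the example
print(ones_complement_sum(frames + [cs]) ^ 0xFF)   # 0 at the receiver: no error
```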
CYCLIC REDUNDANCY CHECK (CRC)
 The concept of parity checking can be extended from
detection to correction of single error by arranging the data
block in rectangular matrix.
 This will lead to two sets of parity bits, viz.
• Longitudinal Redundancy Check (LRC), and
• Vertical Redundancy Check (VRC).
LONGITUDINAL REDUNDANCY CHECK (LRC)
 In Longitudinal Redundancy Check, one row is taken
up at a time and counting the number of 1s, the
parity bit is adjusted to achieve even parity.
 Here for checking the message block, a complete
character known as Block Check Character (BCC)
is added at the end of the block of information,
which may be of even or odd parity
VERTICAL REDUNDANCY CHECK (VRC)
In VRC, the ASCII code for individual alphabets are
considered arranged vertically and then counting the
number of 1s, the parity bit is adjusted to achieve even
parity.
LONGITUDINAL REDUNDANCY CHECK (LRC)
 A single error in any bit will result in a non – correct LRC in
the row and a non – correct VRC in the column. The bit
which is common to both the row and column is the bit in
error. The limitation is though it can detect multiple errors
but is capable to correct only a single error as for multiple
errors it is not suitable to locate the position of the errors.
LINEAR BLOCK CODES
PRINCIPLE OF BLOCK CODING:
 For the block of k message bits, (n-k) parity bits or check bits
are added.
 Hence the total bits at the output of channel encoder are ‘n’.
 Such codes are called (n,k) block codes
Message block (k bits) → CHANNEL ENCODER → Code block (n bits) = MESSAGE (k bits) + CHECK BITS (n − k bits)
LINEAR BLOCK CODES
SYSTEMATIC CODES:
In a systematic block code, the message bits appear at the beginning of the code word, followed by the check bits transmitted as a block. This type of code is called a systematic code.
NON-SYSTEMATIC CODES:
In non-systematic codes it is not possible to identify the message bits and the check bits separately; they are mixed within the block code.
Here q = n − k is the number of redundant bits added by the encoder. The code vector can also be written as
X = (M | C)
where
M = k-bit message vector
C = q-bit check vector
The check bits play the role of error detection and correction. The job of the linear block code is to generate those check bits.
MATRIX DESCRIPTION OF LINEAR BLOCK CODES:
The code vector can be represented as,
X = MG
Here, X = Code vector of 1 × n size or n bits
M = Message vector of 1 × k size bits
G = Generator matrix of k × n size.
In systematic form the generator matrix is G = [Ik | P], where Ik is the k × k identity matrix and P is the k × q parity (coefficient) matrix.
Note: All the additions are mod-2 additions.
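A small sketch of systematic encoding X = MG with mod-2 arithmetic. The parity matrix P below is a hypothetical illustration, not the P matrix of the slides' own (6,3) example:

```python
import numpy as np

P = np.array([[1, 0, 1],          # hypothetical k x q parity matrix
              [0, 1, 1],
              [1, 1, 0]])
k, q = P.shape
G = np.hstack([np.eye(k, dtype=int), P])   # G = [Ik | P]

def encode(message_bits):
    """X = MG (mod 2): message bits followed by q check bits."""
    return np.array(message_bits) @ G % 2

print(encode([1, 0, 1]))   # [1 0 1 0 1 1]
```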
STRUCTURE OF LINEAR BLOCK CODER
LINEAR BLOCK CODES:
ii) To obtain the equations for the check bits:
Here k = 3, q = 3, and n = 6. Since the block size of the message vector is 3 bits, 2^k = 2^3 = 8 message vectors are possible.
iii) To determine the check bits and code vectors for every message vector:

PROPERTIES OF LINEAR BLOCK CODES

ERROR DETECTION AND CORRECTION OF LINEAR BLOCK CODES

HAMMING CODES
Hamming codes are (n, k) linear block codes with q ≥ 3 check bits, block length n = 2^q − 1, number of message bits k = n − q, and minimum distance dmin = 3.
ERROR DETECTION AND CORRECTION CAPABILITIES OF
HAMMING CODES:
For a Hamming code, dmin = 3.
So the number of errors detected by a Hamming code is s ≤ dmin − 1 = 2, and the number of errors corrected is t ≤ (dmin − 1)/2 = 1.
HAMMING CODES:

iv) To determine the check bits and code vectors for every message vector:
v) To determine error detection and error correction:
SYNDROME DECODING
Comparing the received vector with every valid code vector to locate errors becomes complex for long codes. To avoid such complexity, syndrome decoding is used in linear block codes: the syndrome S = Y·H^T (mod 2) is computed from the received vector Y and the parity check matrix H = [P^T | Iq]; S = 0 indicates that no error has been detected.
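A sketch of the syndrome computation, reusing the same illustrative (6,3) code as in the encoding sketch earlier:

```python
import numpy as np

P = np.array([[1, 0, 1],          # same hypothetical parity matrix as before
              [0, 1, 1],
              [1, 1, 0]])
q = P.shape[1]
H = np.hstack([P.T, np.eye(q, dtype=int)])   # parity check matrix H = [P^T | Iq]

def syndrome(received):
    """S = Y*H^T (mod 2); the all-zero syndrome means no detectable error."""
    return np.array(received) @ H.T % 2

codeword = np.array([1, 0, 1, 0, 1, 1])      # valid code vector
print(syndrome(codeword))                    # [0 0 0]
corrupted = codeword.copy(); corrupted[1] ^= 1
print(syndrome(corrupted))                   # [0 1 1]: column 2 of H, bit 2 in error
```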
ERROR CORRECTION USING SYNDROME VECTOR
• Here the block size of the message vector is 3 bits; hence 2^k = 2^3 = 8 message vectors are possible.
• Similarly, by calculating the syndrome for each of the other single-bit error vectors, a decoding table mapping every syndrome to its error pattern is obtained.
CYCLIC CODES
CYCLIC CODES:
 Cyclic codes are the sub class of linear block codes.
 Cyclic codes can be in systematic or non-systematic form.
 In systematic form, the check bits are calculated separately and the code vector takes the form X = (M : C), where M represents the message bits and C represents the check bits.
Definition:
A linear code is called cyclic code if every cyclic shift of
the code vector produces some other code vector.
Properties of Cyclic Codes:
Cyclic codes exhibit two fundamental properties:
(i) Linearity
(ii) Cyclic property
Algebraic structure of cyclic codes:
The code words can be represented by a polynomial. For example, consider the n-bit code word
X = { xn-1, xn-2, … , x1, x0 }
This code word can be represented by a polynomial of degree less than or equal to (n − 1), i.e.,
X(p) = xn-1 p^(n-1) + xn-2 p^(n-2) + … + x1 p + x0
Here X(p) is the polynomial of degree (n − 1), and p is the arbitrary variable of the polynomial. The power of p represents the position of the bit in the code word, i.e.,
p^(n-1) represents the MSB
p^0 represents the LSB
p^1 represents the second bit from the LSB side
Generation of Code Vectors in Non-Systematic Form:
Let M = { mk-1, mk-2, … , m1, m0 } be the k bits of the message vector. It can be represented by the polynomial
M(p) = mk-1 p^(k-1) + mk-2 p^(k-2) + … + m1 p + m0
Let X(p) represent the code word polynomial. It is given as
X(p) = M(p) G(p)
Here G(p) is the generating polynomial of degree q. For an (n, k) cyclic code, q = n − k represents the number of parity bits. The generating polynomial is given as
G(p) = p^q + gq-1 p^(q-1) + … + g1 p + 1
If M1, M2, M3,…..etc are the other message
vectors, then the corresponding code vectors
can be calculated as,
X1(p) = M1(p)G(p)
X2(p) = M2(p)G(p)
X3(p) = M3(p)G(p) and so on.
All the above code vectors X1, X2, X3, … are in non-systematic form and they satisfy the cyclic property.
Note: The generator polynomial G(p) remains the same for all message vectors.
Example: The generator polynomial of a (7,4) cyclic code is G(p) = p^3 + p + 1. Find all the code vectors for the code in non-systematic form.
Sol:
Here n = 7 and k = 4, so q = n − k = 3.
To find the possible message vectors: 2^k = 2^4 = 16 message vectors of four bits are possible.
To check whether the cyclic property is satisfied:
Consider the code vector X9 from the table:
X9 = (1 0 1 1 0 0 0)
Shifting this code vector to the left by one bit position gives
X' = (0 1 1 0 0 0 1)
From the table, X' = X8 = (0 1 1 0 0 0 1).
Thus a cyclic shift of X9 produces X8. This cyclic property can be verified for the other code vectors as well.
GENERATION OF CODE VECTORS IN SYSTEMATIC FORM:
In systematic form the code word polynomial is
X(p) = p^q M(p) + C(p)
where the check-bit polynomial C(p) is the remainder obtained on dividing p^q M(p) by G(p):
C(p) = rem [ p^q M(p) / G(p) ]
The above equation implies:
(i) Multiply the message polynomial by p^q.
(ii) Divide p^q M(p) by the generator polynomial.
(iii) The remainder of the division is C(p).
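A sketch of these three steps as mod-2 polynomial long division on bit strings, reproducing the (7,4) example worked out below:

```python
def mod2_remainder(dividend, divisor):
    """GF(2) polynomial long division on bit strings; returns the remainder."""
    rem = [int(b) for b in dividend]
    div = [int(b) for b in divisor]
    for i in range(len(rem) - len(div) + 1):
        if rem[i]:                        # leading term present: XOR-subtract
            for j, d in enumerate(div):
                rem[i + j] ^= d
    return "".join(map(str, rem[-(len(div) - 1):]))

def cyclic_encode(message, generator):
    """Systematic cyclic code word: message followed by C = rem[p^q M(p)/G(p)]."""
    q = len(generator) - 1
    return message + mod2_remainder(message + "0" * q, generator)

# (7,4) code with G(p) = p^3 + p + 1  ->  generator bits 1011
print(cyclic_encode("0101", "1011"))   # 0101100
```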
Example: The generator polynomial of a (7,4) cyclic code is G(p) = p^3 + p + 1. Find all the code vectors in systematic form.
Sol:
Here n = 7 and k = 4, so q = n − k = 3.
To find the possible message vectors: 2^k = 2^4 = 16 message vectors of four bits are possible.
To find the message polynomial:
Consider the message vector M = (m3 m2 m1 m0) = (0 1 0 1). Then
M(p) = m3 p^3 + m2 p^2 + m1 p + m0 = p^2 + 1
The given generator polynomial is
G(p) = p^3 + p + 1
To obtain p^q M(p):
Since q = 3,
p^q M(p) = p^3 M(p) = p^3 (p^2 + 1) = p^5 + p^3
Written out with all terms for the division:
p^q M(p) = p^5 + 0·p^4 + p^3 + 0·p^2 + 0·p + 0
and G(p) = p^3 + 0·p^2 + p + 1
Dividing p^5 + p^3 by G(p) gives the remainder C(p) = p^2, i.e., the check bits C = (1 0 0). Hence
X = (M : C) = (0 1 0 1 1 0 0)
This is the required code vector for the message vector (0 1 0 1) in systematic form. The code vectors for the other message vectors can be obtained by following the same procedure.
ENCODING USING AN (N - K) BIT SHIFT REGISTER:
OPERATION:
 The feedback switch is first closed. The output switch is
connected to message input.
 All the shift registers are initialized to all zero state. The k
message bits are shifted to the transmitter as well as shifted into
the registers.
 After the shift of the k message bits, the registers contain the q check bits. The feedback switch is now opened and the output switch is connected to the check-bit position. With every shift, the check bits are then shifted to the transmitter.
Here q = 3, so there are 3 flip-flops in the shift register to hold the check bits c1, c2, c0.
Since g2 = 0, its link is not connected; g1 = 1, hence its link is connected.
Figure: shift register bit positions for the input message 1100.
Figure: operation of the (7,4) cyclic code encoder.
CONVOLUTIONAL CODES
• Convolutional coding is done by combining a fixed number of input bits.
• The input bits are stored in a fixed-length shift register and combined with the help of mod-2 adders. This operation is equivalent to binary convolution, hence the name convolutional coding.
STATES OF THE ENCODER
• In the above convolutional encoder, the previous two
successive message bits m1 and m2 represents state.
• The input message bits m affects the state of the encoder as
well as outputs x1 and x2 during that state.
• Whenever a new message bit is shifted into m, the contents of m1 and m2 define a new state, and the outputs x1 and x2 change according to the new state m1, m2 and the message bit m.
• Let the initial values of the bits stored in m1 and m2 be zero. That is, m1 m2 = 00 initially and the encoder is in state 'a'.
m2 m1   State of encoder
0  0    a
0  1    b
1  0    c
1  1    d
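A sketch of this encoder, assuming the output equations x1 = m ⊕ m1 ⊕ m2 and x2 = m ⊕ m2, which are consistent with the state table and trellis given later in this section:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder; state (m1, m2) holds the two
    previous message bits, initially zero."""
    m1 = m2 = 0
    out = []
    for m in bits:
        out.append((m ^ m1 ^ m2, m ^ m2))   # (x1, x2)
        m1, m2 = m, m1                      # shift the register
    return out

print(conv_encode([1, 1, 0]))   # [(1, 1), (0, 1), (0, 1)] -> 11 01 01
```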
DEVELOPMENT OF THE CODE TREE
DEVELOPMENT OF THE CODE TREE
2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 246
• The development of code tree for the message sequence m
= 110 is given below. Initially let us assume that encoder is
in state ‘a’ . i.e., m1m2 = 00
(i) When message bit m = 1, (first bit)
The first message input is m = 1 With this input x1 and x2 is
calculated as follows,
For the given convolutional encoder, the outputs x1 and x2
are given as,
[Step-by-step growth of the code tree for m = 110 — figures not reproduced.]
Input bit   Present state   Next state   Output of the encoder
0           a (00)          a (00)       00
1           a (00)          b (10)       11
1           b (10)          d (11)       01
0           d (11)          c (01)       01
CODE TRELLIS
• The code trellis is a more compact representation of the code tree.
• In the code tree there are four states or nodes, and every state goes to some other state depending on the input bit.
• The trellis represents all such transitions in a single, unique diagram.
STATE TABLE
Input bit   Present state   Next state   Output of the encoder
0           a (00)          a (00)       00
1           a (00)          b (10)       11
0           b (10)          c (01)       10
1           b (10)          d (11)       01
0           c (01)          a (00)       11
1           c (01)          b (10)       00
0           d (11)          c (01)       01
1           d (11)          d (11)       10
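The table can be regenerated mechanically from the encoder output equations; a short sketch (same state convention as above, names illustrative):

# state = (m1, m2): a = 00, b = 10, c = 01, d = 11
names = {(0, 0): 'a', (1, 0): 'b', (0, 1): 'c', (1, 1): 'd'}
for (m1, m2), name in names.items():
    for m in (0, 1):
        x1, x2 = m ^ m1 ^ m2, m ^ m2        # encoder outputs
        nxt = names[(m, m1)]                # shift: new state is (m, m1)
        print(f"{m}  {name}({m1}{m2}) -> {nxt}  output {x1}{x2}")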
CODE TRELLIS
[Trellis diagram: nodes a, b, c, d at successive time steps; branch labels are the output pairs 00, 11, 10, 01 from the state table — figure not reproduced.]
• The nodes on the left side denote the four possible current states and those on the right represent the next states.
• A solid transition line represents input m = 0 and a broken line represents input m = 1.
• Along each transition line the output x1 x2 produced during the transition is written.
• For example, let the encoder be in the current state 'a'. If input m = 0, the next state will be 'a', with the output x1 x2 = 00.
• Thus the code trellis is a compact representation of the code tree.
STATE DIAGRAM
• If the current and next states of the encoder are combined, the behaviour can be represented by a state diagram.
• For example, let the encoder be in state 'a'. If the input bit m = 0, the next state is the same, i.e., 'a', with the output x1 x2 = 00. This is shown as a self-loop at node 'a' in the state diagram.
• If the input m = 1, the state diagram shows the next state as 'b', with outputs x1 x2 = 11.
[State diagram with nodes a, b, c, d; self-loops and transitions labelled with the outputs 00, 11, 10, 01 — figure not reproduced.]
SOLVED EXAMPLE
Example: A code rate 1/3 convolutional encoder has generating vectors
g1 = (1 0 0), g2 = (1 1 1) and g3 = (1 0 1).
(i) Sketch the encoder configuration.
(ii) Draw the code tree, state diagram and trellis diagram.
(iii) If the input message sequence is 10110, determine the output sequence of the encoder.
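Before the diagram-based solution, a quick cross-check in code. This is a sketch only: it assumes each generating vector taps (m, m1, m2) in that order, the same convention as the rate-1/2 encoder earlier.

def conv_encode(bits, taps):
    """General feed-forward convolutional encoder over (m, m1, m2)."""
    m1 = m2 = 0
    out = []
    for m in bits:
        for g0, g1, g2 in taps:
            out.append((g0 & m) ^ (g1 & m1) ^ (g2 & m2))
        m1, m2 = m, m1
    return out

g = ((1, 0, 0), (1, 1, 1), (1, 0, 1))          # g1, g2, g3 from the problem
v = conv_encode([1, 0, 1, 1, 0], g)
print(' '.join(''.join(map(str, v[i:i+3])) for i in range(0, len(v), 3)))
# -> 111 010 100 101 001

So the encoder output for 10110 is 111 010 100 101 001, which the trellis-based working below should reproduce.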
[Worked solution: encoder configuration, step-by-step code tree development and state transitions — figures not reproduced.]
CODE TRELLIS
[Trellis diagram for the rate-1/3 encoder; branch labels are the output triplets 000, 111, 011, 100, 010, 101, 001, 110 — figure not reproduced.]
STATE DIAGRAM
[State diagram for the rate-1/3 encoder with nodes a, b, c, d; transitions labelled with the output triplets 000, 111, 011, 100, 010, 101, 001, 110 — figure not reproduced.]
CODE TREE
[Code tree for the rate-1/3 encoder; the two branches from the root are labelled 000 (m = 0) and 111 (m = 1) — figure not reproduced.]
TO FIND OUTPUT SEQUENCE
[Step-by-step trace of the message 10110 through the trellis — figures not reproduced. The resulting output sequence is 111 010 100 101 001.]
VITERBI DECODING ALGORITHM
ALGORITHM STEPS
[Original step figures not reproduced. In outline: at each time step, compute the Hamming distance (branch metric) between the received symbol pair and the output labelling each trellis branch; add it to the accumulated path metric of the branch's starting state; at every state retain only the survivor path with the smaller accumulated metric; after the last received pair, trace the minimum-metric survivor back through the trellis to obtain the decoded message.]
Example: Viterbi Decoding
Input data m:     1 1 0 1 1
Codeword U:       11 01 01 00 01
Received seq. Z:  11 01 01 10 01
States: a = 00, b = 10, c = 01, d = 11
[Sequence of trellis diagrams from t1 to t6 showing the branch metrics and survivor selection at each stage — figures not reproduced. The accumulated survivor path metrics develop as follows:]

State     t2   t3   t4   t5   t6
a = 00     2    3    3    1    2
b = 10     0    3    3    1    2
c = 01     -    2    0    3    2
d = 11     -    0    2    2    1

At t6 the minimum path metric (1) is at state d. Tracing this survivor back through the trellis corrects the single channel error in the fourth received pair (10 instead of 00) and yields the decoded sequence 1 1 0 1 1, equal to the transmitted data.
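To make the procedure executable, here is a hedged hard-decision Viterbi decoder sketch for the same rate-1/2 encoder (x1 = m ⊕ m1 ⊕ m2, x2 = m ⊕ m2); the structure and names are illustrative, not from a specific library.

from math import inf

def viterbi_decode(received, n_out=2):
    """Hard-decision Viterbi decoding for the rate-1/2 encoder used above."""
    def branch(state, m):                        # one encoder transition
        m1, m2 = state
        return (m ^ m1 ^ m2, m ^ m2), (m, m1)   # (output pair, next state)

    states = [(0, 0), (1, 0), (0, 1), (1, 1)]
    metric = {s: 0 if s == (0, 0) else inf for s in states}  # start in state a
    path = {s: [] for s in states}
    for t in range(0, len(received), n_out):
        z = tuple(received[t:t + n_out])
        new_metric = {s: inf for s in states}
        new_path = {}
        for s in states:
            if metric[s] == inf:
                continue
            for m in (0, 1):
                out, nxt = branch(s, m)
                bm = sum(a ^ b for a, b in zip(out, z))   # branch metric (Hamming)
                if metric[s] + bm < new_metric[nxt]:      # keep the survivor
                    new_metric[nxt] = metric[s] + bm
                    new_path[nxt] = path[s] + [m]
        metric, path = new_metric, new_path
    best = min(states, key=lambda s: metric[s])
    return path[best], metric[best]

Z = [1,1, 0,1, 0,1, 1,0, 0,1]
print(viterbi_decode(Z))    # -> ([1, 1, 0, 1, 1], 1)

Running it on Z reproduces the result above: decoded bits 1 1 0 1 1 with a final path metric of 1.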

More Related Content

Similar to Communication engineering -UNIT IV .pptx

Similar to Communication engineering -UNIT IV .pptx (20)

DC@UNIT 2 ppt.ppt
DC@UNIT 2 ppt.pptDC@UNIT 2 ppt.ppt
DC@UNIT 2 ppt.ppt
 
Analysis of Space Time Codes Using Modulation Techniques
Analysis of Space Time Codes Using Modulation TechniquesAnalysis of Space Time Codes Using Modulation Techniques
Analysis of Space Time Codes Using Modulation Techniques
 
Unit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptxUnit-1_Digital_Communication-Information_Theory.pptx
Unit-1_Digital_Communication-Information_Theory.pptx
 
Information theory
Information theoryInformation theory
Information theory
 
Modified Koblitz Encoding Method for ECC
Modified Koblitz Encoding Method for ECCModified Koblitz Encoding Method for ECC
Modified Koblitz Encoding Method for ECC
 
Unit I.pptx INTRODUCTION TO DIGITAL COMMUNICATION
Unit I.pptx INTRODUCTION TO DIGITAL COMMUNICATIONUnit I.pptx INTRODUCTION TO DIGITAL COMMUNICATION
Unit I.pptx INTRODUCTION TO DIGITAL COMMUNICATION
 
Digital Communication: Information Theory
Digital Communication: Information TheoryDigital Communication: Information Theory
Digital Communication: Information Theory
 
AUTHENTICATED PUBLIC KEY ENCRYPTION SCHEME USING ELLIPTIC CURVE CRYPTOGRAPHY
AUTHENTICATED PUBLIC KEY ENCRYPTION SCHEME USING ELLIPTIC CURVE CRYPTOGRAPHYAUTHENTICATED PUBLIC KEY ENCRYPTION SCHEME USING ELLIPTIC CURVE CRYPTOGRAPHY
AUTHENTICATED PUBLIC KEY ENCRYPTION SCHEME USING ELLIPTIC CURVE CRYPTOGRAPHY
 
AUTHENTICATED PUBLIC KEY ENCRYPTION SCHEME USING ELLIPTIC CURVE CRYPTOGRAPHY
AUTHENTICATED PUBLIC KEY ENCRYPTION SCHEME USING ELLIPTIC CURVE CRYPTOGRAPHYAUTHENTICATED PUBLIC KEY ENCRYPTION SCHEME USING ELLIPTIC CURVE CRYPTOGRAPHY
AUTHENTICATED PUBLIC KEY ENCRYPTION SCHEME USING ELLIPTIC CURVE CRYPTOGRAPHY
 
Information Theory - Introduction
Information Theory  -  IntroductionInformation Theory  -  Introduction
Information Theory - Introduction
 
Digital Communication Techniques
Digital Communication TechniquesDigital Communication Techniques
Digital Communication Techniques
 
40120140501016
4012014050101640120140501016
40120140501016
 
INFORMATION_THEORY.pdf
INFORMATION_THEORY.pdfINFORMATION_THEORY.pdf
INFORMATION_THEORY.pdf
 
Unit 3 ppt
Unit 3 pptUnit 3 ppt
Unit 3 ppt
 
Information Theory and Coding
Information Theory and CodingInformation Theory and Coding
Information Theory and Coding
 
Analysis and Implementation of Hard-Decision Viterbi Decoding In Wireless Com...
Analysis and Implementation of Hard-Decision Viterbi Decoding In Wireless Com...Analysis and Implementation of Hard-Decision Viterbi Decoding In Wireless Com...
Analysis and Implementation of Hard-Decision Viterbi Decoding In Wireless Com...
 
Information theory & coding PPT Full Syllabus.pptx
Information theory & coding PPT Full Syllabus.pptxInformation theory & coding PPT Full Syllabus.pptx
Information theory & coding PPT Full Syllabus.pptx
 
Outage analysis of simo system over nakagami n fading channel
Outage analysis of simo system over nakagami n fading channelOutage analysis of simo system over nakagami n fading channel
Outage analysis of simo system over nakagami n fading channel
 
DC@UNIT 5 ppt.ppt
DC@UNIT 5 ppt.pptDC@UNIT 5 ppt.ppt
DC@UNIT 5 ppt.ppt
 
Convolutional Codes.pdf
Convolutional Codes.pdfConvolutional Codes.pdf
Convolutional Codes.pdf
 

Recently uploaded

Seizure stage detection of epileptic seizure using convolutional neural networks
Seizure stage detection of epileptic seizure using convolutional neural networksSeizure stage detection of epileptic seizure using convolutional neural networks
Seizure stage detection of epileptic seizure using convolutional neural networks
IJECEIAES
 

Recently uploaded (20)

15-Minute City: A Completely New Horizon
15-Minute City: A Completely New Horizon15-Minute City: A Completely New Horizon
15-Minute City: A Completely New Horizon
 
Research Methodolgy & Intellectual Property Rights Series 1
Research Methodolgy & Intellectual Property Rights Series 1Research Methodolgy & Intellectual Property Rights Series 1
Research Methodolgy & Intellectual Property Rights Series 1
 
Seizure stage detection of epileptic seizure using convolutional neural networks
Seizure stage detection of epileptic seizure using convolutional neural networksSeizure stage detection of epileptic seizure using convolutional neural networks
Seizure stage detection of epileptic seizure using convolutional neural networks
 
Worksharing and 3D Modeling with Revit.pptx
Worksharing and 3D Modeling with Revit.pptxWorksharing and 3D Modeling with Revit.pptx
Worksharing and 3D Modeling with Revit.pptx
 
Artificial Intelligence in due diligence
Artificial Intelligence in due diligenceArtificial Intelligence in due diligence
Artificial Intelligence in due diligence
 
Developing a smart system for infant incubators using the internet of things ...
Developing a smart system for infant incubators using the internet of things ...Developing a smart system for infant incubators using the internet of things ...
Developing a smart system for infant incubators using the internet of things ...
 
CLOUD COMPUTING SERVICES - Cloud Reference Modal
CLOUD COMPUTING SERVICES - Cloud Reference ModalCLOUD COMPUTING SERVICES - Cloud Reference Modal
CLOUD COMPUTING SERVICES - Cloud Reference Modal
 
analog-vs-digital-communication (concept of analog and digital).pptx
analog-vs-digital-communication (concept of analog and digital).pptxanalog-vs-digital-communication (concept of analog and digital).pptx
analog-vs-digital-communication (concept of analog and digital).pptx
 
Basics of Relay for Engineering Students
Basics of Relay for Engineering StudentsBasics of Relay for Engineering Students
Basics of Relay for Engineering Students
 
Dynamo Scripts for Task IDs and Space Naming.pptx
Dynamo Scripts for Task IDs and Space Naming.pptxDynamo Scripts for Task IDs and Space Naming.pptx
Dynamo Scripts for Task IDs and Space Naming.pptx
 
Software Engineering Practical File Front Pages.pdf
Software Engineering Practical File Front Pages.pdfSoftware Engineering Practical File Front Pages.pdf
Software Engineering Practical File Front Pages.pdf
 
The Entity-Relationship Model(ER Diagram).pptx
The Entity-Relationship Model(ER Diagram).pptxThe Entity-Relationship Model(ER Diagram).pptx
The Entity-Relationship Model(ER Diagram).pptx
 
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdfInvolute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
Involute of a circle,Square, pentagon,HexagonInvolute_Engineering Drawing.pdf
 
Filters for Electromagnetic Compatibility Applications
Filters for Electromagnetic Compatibility ApplicationsFilters for Electromagnetic Compatibility Applications
Filters for Electromagnetic Compatibility Applications
 
Circuit Breakers for Engineering Students
Circuit Breakers for Engineering StudentsCircuit Breakers for Engineering Students
Circuit Breakers for Engineering Students
 
Diploma Engineering Drawing Qp-2024 Ece .pdf
Diploma Engineering Drawing Qp-2024 Ece .pdfDiploma Engineering Drawing Qp-2024 Ece .pdf
Diploma Engineering Drawing Qp-2024 Ece .pdf
 
Interfacing Analog to Digital Data Converters ee3404.pdf
Interfacing Analog to Digital Data Converters ee3404.pdfInterfacing Analog to Digital Data Converters ee3404.pdf
Interfacing Analog to Digital Data Converters ee3404.pdf
 
Maximizing Incident Investigation Efficacy in Oil & Gas: Techniques and Tools
Maximizing Incident Investigation Efficacy in Oil & Gas: Techniques and ToolsMaximizing Incident Investigation Efficacy in Oil & Gas: Techniques and Tools
Maximizing Incident Investigation Efficacy in Oil & Gas: Techniques and Tools
 
Insurance management system project report.pdf
Insurance management system project report.pdfInsurance management system project report.pdf
Insurance management system project report.pdf
 
8th International Conference on Soft Computing, Mathematics and Control (SMC ...
8th International Conference on Soft Computing, Mathematics and Control (SMC ...8th International Conference on Soft Computing, Mathematics and Control (SMC ...
8th International Conference on Soft Computing, Mathematics and Control (SMC ...
 

Communication engineering -UNIT IV .pptx

  • 1. UNIT IV - INFORMATION THEORY AND CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 1 SYLLABUS • Measure of information • Entropy • Source coding theorem • Shannon–Fano coding, Huffman Coding, LZ Coding • Channel capacity • Shannon-Hartley law ,Shannon's limit • Error control codes – Cyclic codes, Syndrome calculation • Convolution Coding, Sequential and Viterbi decoding
  • 2. INFORMATION THEORY INTRODUCTION 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 2 • The purpose of a communication system is to facilitate the transmission of signals generated by a information source to the receiver end over a communication channel. • Information theory is a branch of probability theory which may be applied to the study of communication systems. Information theory allows us to determine the information content in a message signal leading to different source coding techniques for efficient transmission of message. • In the context of communication, information theory deals with mathematical modeling and analysis of communication rather than with physical sources
  • 3. INFORMATION THEORY INTRODUCTION 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 3 • In particular, it provides answers to two fundamental questions, • What is the irreducible complexity below which a signal cannot be compressed? • What is the ultimate transmission rate for reliable communication over a noisy channel? • The answers to these questions lie on the entropy of the source and capacity of a channel respectively. • Entropy is defined in terms of the probabilistic behavior of a source of information. • Channel Capacity is defined as the intrinsic ability of a channel to convey information; it is naturally
  • 4. INFORMATION 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 4 • Information is the source of a communication system, whether it is analog or digital communication. Information is related to the probability of occurrence of the event. The unit of information is called the bit. • Based on memory, information source can be classified as follows: • A source with memory is one for which a current symbol depends on the previous symbols. • A memory less source is one for which each symbol produced is independent of the previous symbols. • A Discrete Memory less Source (DMS) can be characterized by list of symbols, the probability assignment to these symbols, and the specification of
  • 5. DISCRETE MEMORY LESS SOURCE (DMS) 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 5 • Consider a probabilistic experiment involves the observation of the output emitted by a discrete source during every unit of time (Signaling Interval). The source output is modeled as a discrete random variable (S), which takes on symbols from a fixed finite alphabet. S= { s0,s1,s2,… ,sK-1} with probabilities, P(S=sk)= pk , k=0,1,2,…….,K-1. The set of probabilities that must satisfy the condition • We assume that the symbols emitted by the source during successive signaling intervals are statistically independent. • A source having the alone described properties is called a Discrete Memory less Source (DMS).
  • 6. INFORMATION AND UNCERTAINTY 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 6 • Information is related to the probability of occurrence of the event. More is the uncertainty, more is the information associated or related with it. • Consider the event S=sk, describing the emission of symbol sk by the source with probability pk clearly, if the probability pk=1 and pi=0 for all i≠k, then • , there is no surprise and therefore no information when symbol sk is emitted. • If, on the other hand, the source symbols occur with different probabilities and the probability pk is low, then there is more surprise and, therefore, information when symbol sk is emitted by the
  • 7. INFORMATION AND UNCERTAINTY • Example: • Sun rises in East : Here Uncertainty is zero there is no surprise in the statement. The probability of occurrence is 1 (pk=1). • Sun does not rise in East : Here uncertainty is high, because there is maximum surprise, maximum information is not possible. • The amount of information is related to the inverse of the probability of occurrence of the event S = sk as shown in Figure. The amount of information gained after observing the event S = sk, which occurs with probability 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 7
  • 8. PROPERTIES OF INFORMATION 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 8 • Even if we are absolutely certain of the outcome of an event, even before it occurs, there is no information gained. • The occurrence of an event S = sk either provides some or no information, but never brings about a loss of information. • That is, the less probable an event is, the more information we gain when it occurs.
  • 9. PROPERTIES OF INFORMATION 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 9 • This additive property follow from the logarithmic definition of I(sk). • If sk and sl are statistically independent. • It is standard practice in information theory to use a logarithm to base 2 with binary signalling in mind. The resulting unit of information is called the bit, which is a contraction of the words binary digit.
  • 10. PROPERTIES OF INFORMATION 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 10 • Hence , one bit is the amount of information that we gain when one of two possible and equally likely (i.e..equiprobable) event occurs. • Note that the information I(sk) is positive, because the logarithm of a number less than one, such as a probability, is negative.
  • 11. ENTROPHY (AVERAGE INFORMATION) 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 11 • The entropy of a discrete random variable, representing the output of a source of information, is a measure of the average information content per source symbol. • The amount of information I(sk) produced by the source during an arbitrary signalling interval depends on the symbol sk emitted by the source at the time. The self-information I(sk) is a discrete random variable that takes on the values I(s0), I(s1), …, I(sK – 1) with probabilities p0, p1, ….., pK – 1 respectively • The expectation of I(sk) over all the probable values taken by the random variable S is given by
  • 12. PROPERTIES OF ENTROPY 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 12 Consider a discrete memory less source whose mathematical model is defined by S= { s0,s1,s2,… ,sK-1} with probabilities, P(S=sk)= pk , k=0,1,2,…….,K-1. The entropy H(S) of the discrete random variable S is bounded as follows: 0 ≤ H (s ) ≤ log2 (K ) where K is the number of symbols in the alphabet
  • 13. PROPERTIES OF ENTROPY 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 13 Property 1: H(S) = 0, if, and only if, the probability pk = 1 for some k, and the remaining probabilities in the set are all zero; this lower bound on entropy corresponds to no uncertainty
  • 14. PROPERTIES OF ENTROPY 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 14 Property 2: H(S) = log K, if, and only if, pk = 1/K for all k ; this upper bound on entropy corresponds to maximum uncertainty.
  • 15. PROPERTIES OF ENTROPY 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 15
  • 16. PROPERTIES OF ENTROPY 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 16 Property 3: The upper bound (Hmax) on Entropy is given as
  • 17. PROPERTIES OF ENTROPY 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 17
  • 18. PROPERTIES OF ENTROPY 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 18
  • 19. PROPERTIES OF ENTROPY 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 19
  • 20. PROPERTIES OF ENTROPY 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 20
  • 21. INFORMATION RATE (R) 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 21 Information rate (R) is represented in average number of bits of information per second. R = r H(S) Where, R - information rate ( Information bits / second ) H(S) - the Entropy or average information (bits / symbol) and r - the rate at which messages are generated (symbols / second).
  • 22. MUTUAL INFORMATION 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 22 • The mutual information I(X;Y) is a measure of the uncertainty about the channel input, which is resolved by observing the channel output. • Mutual information I(xi,yj) of a channel is defined as the amount of information transferred when xi transmitted and yj received.
  • 23. DISCRETE MEMORYLESS CHANNELS 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 23 Discrete Memoryless Channel is a statistical model with an input X and output Y (i.e., noisy version of X). Both X and Y are random variables. The channel is said to be discrete ,when both X and Y have finite sizes. It is said to be memoryless when the current output symbol depends only on the current
  • 24. DISCRETE MEMORYLESS CHANNELS 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 24
  • 25. TYPES OF CHANNELS 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 25 • LOSSLESS CHANNELS A channel described by a channel matrix with only one non-zero element in each column is called a lossless channel. The channel matrix of a lossless channel will be like,
  • 26. TYPES OF CHANNELS 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 26 DETERMINISTIC CHANNEL • A channel described by a channel matrix with one non-zero element in each row is called a deterministic channel
  • 27. TYPES OF CHANNELS 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 27 BINARY SYMMETRIC CHANNEL • Binary Symmetric Channel has two input symbols x0=0 and x1=1 and two output symbols y0=0 and y1=1. The channel is symmetric because the probability of receiving a 1 if a 0 is sent is the same as the probability of receiving a 0 if a 1 is sent. • In other words correct bit transmitted with probability 1-p, wrong bit transmitted with probability p. It is also called “cross-over probability”. The conditional probability of error is denoted as p.
  • 28. TYPES OF CHANNELS 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 28 BINARY ERASURE CHANNEL (BEC) A Binary erasure channel (BEC) has two inputs (0,1) and three outputs (0,y,1). The symbol y indicates that, due to noise, no deterministic decision can be made as to whether the received symbol is a 0 or 1. In other words the symbol y represents that the output is erased. Hence the name Binary Erasure Channel (BEC)
  • 29. CHANNEL CAPACITY 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 29 Channel Capacity of a discrete memory less channel as the maximum average mutual information I(X:Y) in any single use of channel i.e., signaling interval, where the maximization is over all possible input probability distribution {p(xi)} on X. It is measured in bits per channel use. Channel capacity (C) =max I (X:Y) The channel capacity C is a function only of the transition probabilities p(yj|xi), which defines the channel. The calculation of channel capacity (C) involves maximization of average mutual information I(X:Y) over K variables
  • 30. CHANNEL CAPACITY 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 30
  • 31. CHANNEL CODING THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 31 • The design goal of channel coding is to increase the resistance of a digital communication system to channel noise. • Specifically, channel coding consists of mapping the incoming data sequence into a channel input sequence and inverse mapping the channel output sequence into an output data sequence in such a way that the overall effect of channel noise on the system is minimized.
  • 32. CHANNEL CODING THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 32 • Mapping operation is performed in the transmitter by a channel encoder, whereas the inverse mapping operation is performed in the receiver by a channel decoder. THEOREM The channel-coding theorem for a discrete memoryless channel is stated as follows: • Let a discrete memoryless source with an alphabet 𝒮 have entropy H(S) for random variable S and produce symbols once every Ts seconds. Let a discrete memoryless channel have capacity C and be used once every Tc seconds. Then, if
  • 33. CHANNEL CODING THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 33 there exists a coding scheme for which the source output can be transmitted over the channel and be reconstructed with an arbitrarily small probability of error. The parameter C/Tc is called the critical rate. When the system is said to be signalling at the critical rate then it need to satisfy the below condition Conversely, if
  • 34. CHANNEL CODING THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 34 it is not possible to transmit information over the channel and reconstruct it with an arbitrarily small probability of error. The ratio Tc/Ts equals the code rate of encoder denoted by ‘r’, where • In short, the Channel Coding Theorem states that if a discrete memoryless channel has capacity C and a source generates information at a rate less than C, then there exists a coding technique such that the output of the source maybe transmitted over the channel with an arbitrarily low probability of symbol error. • Conversely, it is not possible to find such a code if the code rate ‘r’ is greater than the channel capacity C. If r ≤ C, there exists a code capable of achieving an arbitrarily low
  • 35. CHANNEL CODING THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 35 Limitations It does not show us how to construct a good code. The theorem does not have a precise result for the probability of symbol error after decoding the channel output.
  • 36. MAXIMUM ENTROPY FOR GAUSSIAN CHANNEL 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 36
  • 37. MAXIMUM ENTROPY FOR GAUSSIAN CHANNEL 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 37
  • 38. MAXIMUM ENTROPY FOR GAUSSIAN CHANNEL 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 38
  • 39. CHANNEL (INFORMATION ) CAPACITY THEOREM OR SHANNON’S THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 39 The information capacity (C) of a continuous channel of bandwidth B hertz, affected by AWGN of total noise power with in the channel bandwidth N, is given by the formula, where, C - Information Capacity. B - Channel Bandwidth S - Signal power (or) Average Transmitted power N – Total noise power within the channel bandwidth (B) It is easier to increase the information capacity of a continuous communication channel by expanding its bandwidth than by increasing the transmitted power for a prescribed noise variance.
  • 40. CHANNEL (INFORMATION ) CAPACITY THEOREM OR SHANNON’S THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 40
  • 41. CHANNEL (INFORMATION ) CAPACITY THEOREM OR SHANNON’S THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 41
  • 42. CHANNEL (INFORMATION ) CAPACITY THEOREM OR SHANNON’S THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 42 Trade off Noiseless Channel has infinite bandwidth. If there is no noise in the channel , then N=0 hence Then Infinite bandwidth has limited capacity. If B is infinite, channel capacity is limited, because noise power (N) increases, (S/N) ratio decreases. Hence, even B approaches infinity, capacity does not approach infinity.
  • 43. SOURCE CODING THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 43 Efficient representation of data generated by a discrete source is accomplished by some encoding process. The device that performs the representation is called a source encoder. Efficient source encoder must satisfy two functional requirements • Code Nodes produced by the encoder are in binary form. • The source code is uniquely decodable, so that the original source sequence can be reconstructed perfectly from the encoded binary sequence.
  • 44. SOURCE CODING THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 44 THEOREM STATEMENT • Given a discrete memory less source of entropy H(S) or H(X) then the average code word length L for any distortion less source encoding is bounded as, • where, the Entropy H(S) or H(X) is the fundamental limit on the average number of bits/source symbol.
  • 45. SOURCE CODING THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 45 AVERAGE CODE WORD LENGTH
  • 46. SOURCE CODING THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 46 CODING EFFICIENCY ( Ƞ ) The value of coding efficiency (Ƞ) is always lesser or equal to 1. The source encoder is said to be efficient when Ƞ approaches unity.
  • 47. SOURCE CODING THEOREM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 47 VARIANCE The variance is defined as,
  • 48. CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 48 The two popular coding technique used in Information theory are (i) Shannon-Fano Coding (ii) Huffman Coding. ALGORITHM OF SHANNON-FANO CODING: (i) List the source symbols in order of decreasing probability. (ii) Partition the set into two sets that are as close to equiprobables as possible. (iii) Now assign “0” to the upper set and assign “1” to the lower set. (iv) Continue this process, each time partitioning the sets with as nearly equal probabilities as possible until further
  • 49. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 49 STEP 1 (i) A discrete memoryless source has symbols x1, x2, x3, x4, x5 with probabilities of 0.4, 0.2, 0.1, 0.2, 0.1 respectively. Construct a Shannon-Fano code for the source and calculate code efficiency η
  • 50. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 50 STEP 2
  • 51. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 51 STEP 3
  • 52. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 52 STEP 4
  • 53. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 53
  • 54. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 54
  • 55. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 55 2 .For a discrete memoryless source ‘X” with 6 symbols x1, x2,….., x6 the probabilities are p(x1) =0.3, p(x2) =0.25, p(x3) = 0.2, p(x4) = 0.12, p(x5) = 0.08, p(x6) = 0.05 respectively. Using Shannon-Fano coding, calculate entropy, average length of code, efficiency and redundancy of the code. SYMBOL PROBABILITY X1 p(x1) =0.3 X2 p(x2) =0.25 X3 p(x3) = 0.2 X4 p(x4) = 0.12 X5 p(x5) = 0.08 X6 p(x6) = 0.05
  • 56. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 56
  • 57. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 57
  • 58. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 58
  • 59. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 59
  • 60. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 60
  • 61. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 61
  • 62. SHANNON-FANO CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 62 • A discrete memoryless source has five symbols x1,x2,x3,x4 and x5 with probabilities 0.4,0.19,0.16,0.15 and 0.15 respectively attached to every symbol.Construct a Shannon-Fano code for the source and calculate code efficiency.
  • 63. HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 63 • Huffman coding is a source coding method used for data compression applications • It is used to generate variable length codes and gives lowest possible values of average length of code. • Huffman coding provides maximum efficiency and minimum redundancy • Huffman coding is also known as minimum redundancy code or optimum code
  • 64. ALGORITHM FOR HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 64 • Step 1 : Arrange given messages in decreasing order of probabilities. • Step 2: Group the least probable M messages and assign them symbols for coding. • Step 3 : Add the probabilities of grouped messages and place them as high as possible and rearrange them in decreasing order again. This process is known as reduction. • Step 4: Repeat the steps 2 & 3 until only M or less than M probabilities remains. • Step 5: obtain the code words for each message by tracing the probabilities from left to right in direction of the
  • 65. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 65 • 1)Using Huffman algorithm find the average code word length and code efficiency. Consider the 5 source symbols of a DMS with probabilities p(m1)=0.4, p(m2)=0.2, p(m3)=0.2, p(m4)=0.1 and p(m5)=0.1. Message (mk) probabilities(Pk) m1 0.4 m2 0.2 m3 0.2 m4 0.1 m5 0.1
  • 66. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 66 Message (mk) probabilities(Pk) Stage I m1 0.4 0.4 m2 0.2 0.2 m3 0.2 0.2 m4 0.1 0.2 m5 0.1 0 1
  • 67. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 67 Message (mk) probabilities(P k) Stage I Stage II m1 0.4 0.4 0.4 m2 0.2 0.2 0.4 m3 0.2 0.2 0.2 m4 0.1 0.2 m5 0.1 0 1 0 1
  • 68. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 68 Message (mk) probabilitie s(Pk) Stage I Stage II Stage III m1 0.4 0.4 0.4 0.6 m2 0.2 0.2 0.4 0.4 m3 0.2 0.2 0.2 m4 0.1 0.2 m5 0.1 0 1 0 1 0 0 1 1
  • 69. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 69 Message (mk) probabilit ies(Pk) Stage I Stage II Stage III Code Word m1 0.4 0.4 0.4 0.6 00 m2 0.2 0.2 0.4 0.4 m3 0.2 0.2 0.2 m4 0.1 0.2 m5 0.1 0 1 0 1 0 0 1 1
  • 70. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 70 Message (mk) probabilit ies(Pk) Stage I Stage II Stage III Code Word m1 0.4 0.4 0.4 0.6 00 m2 0.2 0.2 0.4 0.4 10 m3 0.2 0.2 0.2 m4 0.1 0.2 m5 0.1 0 1 0 1 0 0 1 1
  • 71. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 71 Message (mk) probabilit ies(Pk) Stage I Stage II Stage III Code Word m1 0.4 0.4 0.4 0.6 00 m2 0.2 0.2 0.4 0.4 10 m3 0.2 0.2 0.2 11 m4 0.1 0.2 m5 0.1 0 1 0 1 0 0 1 1
  • 72. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 72 Message (mk) probabilit ies(Pk) Stage I Stage II Stage III Code Word m1 0.4 0.4 0.4 0.6 00 m2 0.2 0.2 0.4 0.4 10 m3 0.2 0.2 0.2 11 m4 0.1 0.2 010 m5 0.1 0 1 0 1 0 0 1 1
  • 73. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 73 Message (mk) probabilit ies(Pk) Stage I Stage II Stage III Code Word m1 0.4 0.4 0.4 0.6 00 m2 0.2 0.2 0.4 0.4 10 m3 0.2 0.2 0.2 11 m4 0.1 0.2 010 m5 0.1 011 0 1 0 1 0 0 1 1
  • 74. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 74 Messag e (mk) probabi lities(Pk ) Stage I Stage II Stage III Code Word No of bit per message (Ik) m1 0.4 0.4 0.4 0.6 00 2 m2 0.2 0.2 0.4 0.4 10 2 m3 0.2 0.2 0.2 11 2 m4 0.1 0.2 010 3 m5 0.1 011 3
  • 75. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 75
  • 76. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 76
  • 77. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 77 • For a discrete memoryless source ‘X” with 6 symbols x1, x2,….., x6 the probabilities are p(x1) =0.3, p(x2) =0.25, p(x3) = 0.2, p(x4) = 0.12, p(x5) = 0.08, p(x6) = 0.05 respectively. Using Huffman coding, calculate entropy, average length of code, efficiency and redundancy of the code. Message (mk) probabilities(Pk ) X1 0.3 X2 0.25 X3 0.2 X4 0.12 X5 0.08 x6 0.05
  • 78. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 78 Message (mk) probabilities(Pk) Stage I X1 0.3 0.3 X2 0.25 0.25 X3 0.2 0.2 X4 0.12 0.13 X5 0.08 0.12 x6 0.05 0 1
  • 79. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 79 Message (mk) probabilities(P k) Stage I Stage II X1 0.3 0.3 0.3 X2 0.25 0.25 0.25 X3 0.2 0.2 0.25 X4 0.12 0.13 0.2 X5 0.08 0.12 x6 0.05 0 1 0 1
  • 80. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 80 Messag e (mk) probabilities(P k) Stage I Stage II Stage III X1 0.3 0.3 0.3 0.45 X2 0.25 0.25 0.25 0.3 X3 0.2 0.2 0.25 0.25 X4 0.12 0.13 0.2 X5 0.08 0.12 x6 0.05 0 1 0 1 0 1
  • 81. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 81 Messa ge (mk) probabilities (Pk) Stage I Stage II Stage III Stage IV X1 0.3 0.3 0.3 0.45 0.55 X2 0.25 0.25 0.25 0.3 0.45 X3 0.2 0.2 0.25 0.25 X4 0.12 0.13 0.2 X5 0.08 0.12 x6 0.05 0 1 0 1 0 1 0 1 0 1
  • 82. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 82 Mess age (mk) probabilitie s(Pk) Stage I Stage II Stage III Stage IV Code word X1 0.3 0.3 0.3 0.45 0.55 00 X2 0.25 0.25 0.25 0.3 0.45 X3 0.2 0.2 0.25 0.25 X4 0.12 0.13 0.2 X5 0.08 0.12 x6 0.05 0 1 0 1 0 1 0 1 0 1
  • 83. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 83 Mess age (mk) probabilitie s(Pk) Stage I Stage II Stage III Stage IV Code word X1 0.3 0.3 0.3 0.45 0.55 00 X2 0.25 0.25 0.25 0.3 0.45 10 X3 0.2 0.2 0.25 0.25 X4 0.12 0.13 0.2 X5 0.08 0.12 x6 0.05 0 1 0 1 0 1 0 1 0 1
  • 84. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 84 Mess age (mk) probabilitie s(Pk) Stage I Stage II Stage III Stage IV Code word X1 0.3 0.3 0.3 0.45 0.55 00 X2 0.25 0.25 0.25 0.3 0.45 10 X3 0.2 0.2 0.25 0.25 11 X4 0.12 0.13 0.2 X5 0.08 0.12 x6 0.05 0 1 0 1 0 1 0 1 0 1
  • 85. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 85 Mess age (mk) probabilitie s(Pk) Stage I Stage II Stage III Stage IV Code word X1 0.3 0.3 0.3 0.45 0.55 00 X2 0.25 0.25 0.25 0.3 0.45 10 X3 0.2 0.2 0.25 0.25 11 X4 0.12 0.13 0.2 011 X5 0.08 0.12 x6 0.05 0 1 0 1 0 1 0 1 0 1
  • 86. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 86 Mess age (mk) probabilitie s(Pk) Stage I Stage II Stage III Stage IV Code word X1 0.3 0.3 0.3 0.45 0.55 00 X2 0.25 0.25 0.25 0.3 0.45 10 X3 0.2 0.2 0.25 0.25 11 X4 0.12 0.13 0.2 011 X5 0.08 0.12 0100 x6 0.05 0 1 0 1 0 1 0 1 0 1
  • 87. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 87 Mess age (mk) probabilitie s(Pk) Stage I Stage II Stage III Stage IV Code word X1 0.3 0.3 0.3 0.45 0.55 00 X2 0.25 0.25 0.25 0.3 0.45 10 X3 0.2 0.2 0.25 0.25 11 X4 0.12 0.13 0.2 011 X5 0.08 0.12 0100 x6 0.05 0101 0 1 0 1 0 1 0 1 0 1
  • 88. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 88 Mes sag e (mk) probabili ties(Pk) Stage I Stage II Stage III Stage IV Cod e wor d No of bit per messag e (Ik) X1 0.3 0.3 0.3 0.45 0.55 00 2 X2 0.25 0.25 0.25 0.3 0.45 10 2 X3 0.2 0.2 0.25 0.25 11 2 X4 0.12 0.13 0.2 011 3 X5 0.08 0.12 010 0 4
  • 89. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 89
  • 90. PROBLEMS OF HUFFMAN CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 90
  • 91. LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 91
  • 92. ALGORITHM FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 92 • Principle: The source data streams is parsed into segments that are the shortest redundancies not encountered previously Step 1: Divide the given input data stream into sub sequences Step 2: Assign numerical position for each sub sequences Step 3: Assign numerical representation for each
  • 93. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 93 • Example 1: Given data stream : 0 0 0 1 0 1 1 1 0 0 1 0 1 0 0 1 0 1 Step 1: Divide the given input data stream into sub sequences 0 1 00 01 011 10 010 100 101 Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences 0 1 00 01 011 10 010 100 101
  • 94. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 94 Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences 0 1 0 0 0 1 01 1 1 0 01 0 10 0 10 1 Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences 0 1 0 0 0 1 01 1 1 0 01 0 10 0 10 1 Numerical Representatio n 11 12 4 2 2 1 4 1 6 1 6 2
  • 95. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 95 • Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences 0 1 0 0 0 1 01 1 1 0 01 0 10 0 10 1 Numerical Representatio n - - 11 12 4 2 2 1 4 1 6 1 6 2 Binary Encoded Blocks 0 1 001 0 001 1 1001 010 0 100 0 110 0 110 1 Final Encoded Block sequence for the given data stream : 0010 0011 1001 0100 1000 1100 1101
  • 96. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 96 • Example 2: Given data stream : 1 1 1 0 1 0 0 0 1 1 0 1 0 1 1 0 1 0 Step 1: Divide the given input data stream into sub sequences 1 0 11 10 100 01 101 011 010 Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences 1 0 11 10 100 01 101 011 010
  • 97. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 97 Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences 1 0 11 10 100 01 101 011 010 Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences 1 0 11 10 100 01 101 011 010 Numerical Representatio n 11 12 4 2 2 1 4 1 6 1 6 2
  • 98. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 98 • Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences 1 0 0 0 0 1 01 1 1 0 01 0 10 0 10 1 Numerical Representatio n - - 11 12 4 2 2 1 4 1 6 1 6 2 Binary Encoded Blocks 1 0 001 0 001 1 1001 010 0 100 0 110 0 110 1 Final Encoded Block sequence for the given data stream : 0010 0011 1001 0100 1000 1100 1101
  • 99. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 99 • Example 3 : Given data stream : A A B A B B B A B A A B A B B B A B B A B B Step 1: Divide the given input data stream into sub sequences A A B A B B B A B A A B A B B B A B B A BB Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences A A B A B B B ABA ABAB BB ABBA BB
  • 100. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 100 Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences A A B A B B B ABA ABA B BB ABB A BB Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences A A B A B B B ABA ABA B BB ABB A BB Numerical representati on φA 1B 2B φB 2A 5B 4B 3A 7
  • 101. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 101 Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences A A B A B B B ABA ABA B BB ABB A BB Numerical representati on φA 1B 2B φB 2A 5B 4B 3A 7 Binary Encoded Blocks 0 11 101 1 100 1011 1001 110 111 Max no of bits : 4 Apply Zero Padding to all other codes to get decoding without redudancies
  • 102. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 102 Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences A A B A B B B ABA ABAB BB ABBA BB Numerical representatio n φA 1B 2B φB 2A 5B 4B 3A 7 Binary Encoded Blocks 0 11 101 1 100 1011 1001 110 111 Binary Encoded Blocks 000 0 0011 0101 0001 0100 1011 1001 0110 0111 Final Encoded Block sequence for the given data stream : 0000 0011 0101 0001 0100 1011 1001 0110 0111
  • 103. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 103 • Example 4 : Given data stream : B B A B A A A B A B B A B A A A B A A B A A Step 1: Divide the given input data stream into sub sequences B B A BAA A B A B B A B A A A B A A B AA Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences B BA BAA A BAB BABA AA BAAB AA
  • 104. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 104 Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences B B A B A A A BAB BAB A AA BAA B AA Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences B B A B A A A BAB BAB A AA BAA B AA Numerical representati on φB 1A 2A φA 2B 5A 4A 3B 7
  • 105. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 105 Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences B B A B A A A BAB BAB A AA BAA B AA Numerical representati on φB 1A 2A φA 2B 5A 4A 3B 7 Binary Encoded Blocks 0 11 101 1 100 1011 1001 110 111 Max no of bits : 4 Apply Zero Padding to all other codes to get decoding without redudancies
  • 106. PROBLEMS FOR LEMPEL ZIV CODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 106 Numerical Position 1 2 3 4 5 6 7 8 9 Sub sequences B B A B A A A BAB BABA AA BAAB AA Numerical representatio n φB 1A 2A φA 2B 5A 4A 3B 7 Binary Encoded Blocks 0 11 101 1 100 1011 1001 110 111 Binary Encoded Blocks 000 0 0011 0101 0001 0100 1011 1001 0110 0111 Final Encoded Block sequence for the given data stream : 0000 0011 0101 0001 0100 1011 1001 0110 0111
  • 107. ERROR CONTROL CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 107 ERROR: Error is a condition when the output information does not match with the input information. During transmission, digital signals suffer from noise that can introduce errors in the binary bits travelling from one system to other. That means a 0 bit may change to 1 or a 1 bit may change to 0 Digital Source Source Encod er Error Control Coding Line Coding Modulat or Chann el Noise Digital Sink Source Decod er Error Control Decoding Line Decoding Demod ulator + Transmitter Receiver X(w) N(w) Y(w)
  • 108. ERROR CONTROL CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 108 CLASSIFICATION OF ERRORS: Content Error: These are errors in the content of messageintroduced due to noise during transmission Flow integrity Error These errors are missing blocks of data, data lost in network or data delivered to wrong destination CLASSIFICATION OF CODES:  Error Detecting  Error Correcting
  • 109. IMPORTANT DEFINITIONS RELATED TO CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 109  Code Word: The code word is the n bit encoded block of bits. It contains message bits and parity or redundant bits.  Block Length: The number of bits ‘n’ after coding is known as block length.  Code Rate (or) Code Efficiency: The code rate is defined astheratio of the number of message bits (k) to the total number of bits (n) in a code word. 𝐶𝑜𝑑𝑒 𝑟𝑎𝑡𝑒 (𝑟 )= number of message bits (k)
  • 110. IMPORTANT DEFINITIONS RELATED TO CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 110  Code Vector: An ‘n’ bit code word can be visualized in an n – dimensional space as a vector whose elements or coordinates are bits in the code word.
  • 111. IMPORTANT DEFINITIONS RELATED TO CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 111 Hamming Distance: The Hamming distance between two code words is equal to the number of differences between them, e.g., 1 0 0 1 1 0 1 1 1 1 0 1 0 0 1 0 Hamming distance = 3 Minimum Distance dmin: It is defined as the smallest Hamming distance between any pair of code vectors in the code. Hamming Weight of a Code Word [w(x)]: It is defined as the number of non – zero elements in the code word. It is denoted by w(x) for eg: X= 01110101, then w(x) = 5
  • 112. ERROR CONTROL CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 112 Error Control Coding The main purpose of Error control coding is to enable the receiver to detect and ever correct the errors by introducing some redundancies into the data which is to be transmitted. There are basically two mechanisms to for adding redundancy. They are  Block Coding  Convolutional coding
  • 113. CLASSIFICATION OF ERROR–CORRECTING CODES: Block Codes (No memory is required): (n,k) block code is generated when the channel encoder accepts information in successive k bit blocks. At the end of each such block, (n- k) parity bit is added, which contains no information and termed as Convolutional Codes (Memory is required) Here the code words are generated by discrete – time convolution of the input sequence with impulse response of the encoder. Unlike block codes, channel encoder accepts messages as a continuous sequence and generates a continuous sequence of encoded bits at the output 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 113
  • 114. CLASSIFICATION OF ERROR–CORRECTING CODES: Linear Codes and non linear Codes Linear codes have the unique property that when any two code words of linear code are added in modulo – 2 adder, a third code word is produced which is also a code, which is not the case for non – linear codes. 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 114
  • 115. CLASSIFICATION OF ERROR CONTROL CODING TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 115
  • 116. TYPES OF ERROR CONTROL TECHNIQUES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 116 AUTOMATIC REPEAT REQUEST (ARQ)  Here, when an error is detected, a request is made for retransmission of signal.  A feedback channel is necessary for this retransmission.  It differs from the FEC system in the following three aspects. 1. Less number of check bits (parity bits) are required increasing the (k/n) ratio for (n, k) block code. 2. Additional hardware to implement feedback path is required. 3. But rate at forward transmission needs to make
  • 117. AUTOMATIC REPEAT REQUEST (ARQ) 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 117  For each message signal at the input, the encoder produces code words which are stored temporarily at encoder output and transmitted over forward transmission channel.  At the destination, decoders decode the signals and give a positive acknowledgement (ACK) and negative acknowledgement (NAK) respectively in case no error and error is detected.  On receipt of NAK, the controller retransmits the appropriate word stored in the input buffer.
  • 118. FORWARD ERROR CORRECTION (FEC) TECHNIQUE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 118  It is a digital modulation system where discrete source generates information in binary form.  The channel encoder accepts these message bits and add redundant bits to them leaving higher bit rate for transmission.  Channel decoders uses the redundant bits to check for actually and erroneous transmitted messages.
  • 119. ERROR DETECTION TECHNIQUE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 119 Whenever a message is transmitted, it may get scrambled by noise or data may get corrupted. To avoid this, we use error-detecting codes which are additional data added to a given digital message to help us detect if an error occurred during transmission of the message. A simple example of error-detecting code is parity check. i) Parity Checking ii) Check sum Error Detection iii) Cyclic Redundancy Check (CRC)
  • 120. PARITY CHECKING OF ERROR DETECTION • It is the simplest technique for detecting and correcting errors. The MSB of an 8-bits word is used as the parity bit and the remaining 7 bits are used as data or message bits. The parity of 8-bits transmitted word can be either even parity or odd parity. • Even parity -- Even parity means the number of 1's in the given word including the parity bit should be even (2,4,6,....). • Odd parity -- Odd parity means the number of 1's in the given word including the parity bit should be odd (1,3,5,....). 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 120
  • 121. PARITY CHECKING OF ERROR DETECTION USE OF PARITY BIT • The parity bit can be set to 0 and 1 depending on the type of the parity required. • For even parity, this bit is set to 1 or 0 such that the no. of "1 bits" in the entire word is even. Shown in fig. (a). • For odd parity, this bit is set to 1 or 0 such that the no. of "1 bits" in the entire word is odd. Shown in fig (b) fig. (a). fig. (b). 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 121 0 0 0 1 1 1 0 1 1 1 0 0 1 1 1 1 0 0 1 1 Message or data bits Parity bit Message or data bits Parity bit Message or data bits Parity bit Parity bit Message or data bits
  • 122. PARITY CHECKING OF ERROR DETECTION • ERROR DETECTION USING PARITY BIT Parity checking at the receiver can detect the presence of an error if the parity of the receiver signal is different from the expected parity. That means, if it is known that the parity of the transmitted signal is always going to be "even" and if the received signal has an odd parity, then the receiver can conclude that the received signal is not correct. If an error is detected, then the receiver will ignore the received byte and request for retransmission of the same byte to the transmitter. 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 122
  • 123. CHECK SUM ERROR DETECTION • This is a block code method where a checksum is created based on the data values in the data blocks to be transmitted using some algorithm and appended to the data. • When the receiver gets this data, a new checksum is calculated and compared with the existing checksum. A non- match indicates an error. Error Detection by Checksums • For error detection by checksums, data is divided into fixed sized frames or segments. • Sender’s End − The sender adds the segments using 1’s complement arithmetic to get the sum. It then complements the sum to get the checksum and sends it along with the data frames. • Receiver’s End − The receiver adds the incoming segments along with the checksum using 1’s complement arithmetic to 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 123
  • 124. CHECK SUM ERROR DETECTION Example • Suppose that the sender wants to send 4 frames each of 8 bits, where the frames are 11001100, 10101010, 11110000 and 11000011. • The sender adds the bits using 1s complement arithmetic. While adding two numbers using 1s complement arithmetic, if there is a carry over, it is added to the sum. • After adding all the 4 frames, the sender complements the sum to get the checksum, 11010011, and sends it along with the data frames. • The receiver performs 1s complement arithmetic sum of all the frames including the 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 124
• 125. CYCLIC REDUNDANCY CHECK (CRC)  The concept of parity checking can be extended from detection to correction of a single error by arranging the data block in a rectangular matrix.  This leads to two sets of parity bits, viz. • Longitudinal Redundancy Check (LRC) and • Vertical Redundancy Check (VRC). 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 125
• 126. LONGITUDINAL REDUNDANCY CHECK (LRC) 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 126  In a Longitudinal Redundancy Check, one row is taken at a time; the number of 1s in the row is counted and the parity bit is adjusted to achieve even parity.  For checking the message block, a complete character known as the Block Check Character (BCC) is added at the end of the block of information, which may be of even or odd parity.
• 127. VERTICAL REDUNDANCY CHECK (VRC) 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 127 In a VRC, the ASCII codes of the individual characters are arranged vertically; the number of 1s in each column is counted and the parity bit is adjusted to achieve even parity.
• 128. LONGITUDINAL REDUNDANCY CHECK (LRC) 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 128  A single error in any bit will result in an incorrect LRC in its row and an incorrect VRC in its column; the bit common to both that row and that column is the bit in error. The limitation is that although the scheme can detect multiple errors, it can correct only a single error, since with multiple errors it cannot locate their positions.  The boxed 1 in the next table is the bit in error, as is evident from the erroneous results in both the LRC and the VRC columns.
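A minimal Python sketch of this two-dimensional parity scheme: LRC over each row, VRC over each column, and a single-bit error located at the intersection of the failing row and column.

```python
# Two-dimensional parity: row parities (LRC) and column parities (VRC).
def parities(block):
    row_par = [sum(r) % 2 for r in block]        # LRC per row
    col_par = [sum(c) % 2 for c in zip(*block)]  # VRC per column
    return row_par, col_par

block = [[0, 1, 1, 0],
         [1, 0, 1, 1],
         [0, 0, 1, 0]]
rp, cp = parities(block)

block[1][2] ^= 1                                 # inject a single-bit error
rp2, cp2 = parities(block)
bad_row = [i for i, (a, b) in enumerate(zip(rp, rp2)) if a != b]
bad_col = [j for j, (a, b) in enumerate(zip(cp, cp2)) if a != b]
print(bad_row, bad_col)                          # [1] [2] -> flip block[1][2]
```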
• 129. LINEAR BLOCK CODES PRINCIPLE OF BLOCK CODING:  For a block of k message bits, (n-k) parity bits or check bits are added.  Hence the total number of bits at the output of the channel encoder is n.  Such codes are called (n,k) block codes. 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 129 [Figure: the channel encoder takes a k-bit message block as input and produces an n-bit code block: the k message bits followed by (n - k) check bits.]
• 130. LINEAR BLOCK CODES SYSTEMATIC CODES: In a systematic block code, the message bits appear at the beginning of the code word, followed by the check bits transmitted as a block. This type of code is called a systematic code. NON-SYSTEMATIC CODES: In non-systematic codes it is not possible to identify the message bits and check bits; they are mixed in the code block. 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 130
  • 131. LINEAR BLOCK CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 131
• 132. LINEAR BLOCK CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 132 Here q = n - k is the number of redundant bits added by the encoder. The above code vector can also be written as X = (M│C), where M is the k-bit message vector and C is the q-bit check vector. The check bits play the role of error detection and correction; the job of the linear block code is to generate those check bits.
• 133. MATRIX DESCRIPTION OF LINEAR BLOCK CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 133 The code vector can be represented as X = MG, where X = code vector of size 1 × n (n bits), M = message vector of size 1 × k (k bits), and G = generator matrix of size k × n.
  • 134. MATRIX DESCRIPTION OF LINEAR BLOCK CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 134
• 135. MATRIX DESCRIPTION OF LINEAR BLOCK CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 135 Note: All the additions are mod-2 additions.
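As an illustration of X = MG, here is a minimal Python sketch for a (6,3) systematic code with G = [I_k | P]; the parity submatrix P used below is only an assumed example, since the slides' own matrix is shown as an image.

```python
import numpy as np

# Assumed example parity submatrix P for a (6,3) systematic code.
P = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])
G = np.hstack([np.eye(3, dtype=int), P])   # G = [I_k | P], k = 3, n = 6

for m in range(8):                         # all 2^k = 8 message vectors
    M = np.array([(m >> 2) & 1, (m >> 1) & 1, m & 1])
    X = (M @ G) % 2                        # X = M G with mod-2 arithmetic
    print(M, X)                            # X = (M | C): message then checks
```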
  • 136. STRUCTURE OF LINEAR BLOCK CODER 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 136
  • 137. LINEAR BLOCK CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 137
  • 138. LINEAR BLOCK CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 138
• 139. LINEAR BLOCK CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 139 ii) To obtain the equations for the check bits: Here k = 3, q = 3, and n = 6. The block size of the message vector is 3 bits, hence 2^k = 2^3 = 8 message vectors are possible, as shown below.
  • 140. LINEAR BLOCK CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 140
  • 141. LINEAR BLOCK CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 141
• 142. LINEAR BLOCK CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 142 iii) To determine the check bits and code vectors for every message vector:
  • 143. LINEAR BLOCK CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 143
  • 144. LINEAR BLOCK CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 144
  • 145. PROPERTIES OF LINEAR BLOCK CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 145
  • 146. PROPERTIES OF LINEAR BLOCK CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 146
• 147. PROPERTIES OF LINEAR BLOCK CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 147
  • 148. ERROR DETECTION AND CORRECTION OF LINEAR BLOCK CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 148
  • 149. HAMMING CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 149
  • 150. HAMMING CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 150
• 151. ERROR DETECTION AND CORRECTION CAPABILITIES OF HAMMING CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 151 For a Hamming code, dmin = 3. Number of errors detected by a Hamming code: s ≤ dmin - 1, so s ≤ 2. Number of errors corrected by a Hamming code: t ≤ (dmin - 1)/2, so t ≤ 1.
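Since a linear code's dmin equals the minimum weight of its nonzero code words, this can be checked by brute force. The sketch below does so for a (7,4) Hamming code; the parity submatrix P is an assumed standard choice, as the slides' own matrices are shown as images.

```python
from itertools import product
import numpy as np

# Assumed parity submatrix of a (7,4) Hamming code.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])   # G = [I_k | P]

weights = []
for m in product([0, 1], repeat=4):        # all 16 message vectors
    x = (np.array(m) @ G) % 2
    if x.any():
        weights.append(int(x.sum()))       # weight of a nonzero codeword
print(min(weights))                        # 3 = dmin, so detect 2, correct 1
```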
  • 152. HAMMING CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 152
  • 153. HAMMING CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 153
  • 154. HAMMING CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 154
  • 155. HAMMING CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 155
  • 156. HAMMING CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 156
  • 157. HAMMING CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 157
  • 158. HAMMING CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 158
  • 159. HAMMING CODES: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 159
• 160. iv) To determine check bits and code vectors for every message vector: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 160
• 161. iv) To determine check bits and code vectors for every message vector: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 161
  • 162. HAMMING CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 162
  • 163. HAMMING CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 163
  • 175. iv) To determine Error Detection and Error Correction 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 175
• 176. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 176 To avoid such complexity (comparing the received vector against every possible code vector), syndrome decoding is used in linear block codes.
  • 177. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 177
  • 178. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 178
  • 179. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 179
  • 180. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 180
  • 181. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 181
  • 182. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 182
  • 183. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 183
  • 184. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 184
  • 185. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 185
  • 186. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 186
  • 187. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 187
  • 188. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 188
  • 189. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 189 S = [1 0 1]
  • 190. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 190
  • 191. SYNDROME DECODING 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 191
  • 192. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 192
  • 193. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 193
  • 194. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 194
  • 195. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 195
  • 196. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 196
  • 197. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 197
  • 198. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 198
  • 199. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 199
  • 200. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 200
• 201. ERROR CORRECTION USING SYNDROME VECTOR • Here the block size of the message vector is 3 bits. Hence 2^k = 2^3 = 8 message vectors are possible, as shown below. 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 201
  • 202. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 202
  • 203. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 203
  • 204. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 204
  • 205. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 205
  • 206. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 206
  • 207. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 207
• 208. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 208 Similarly, by calculating the syndrome for each of the other single-bit error vectors, we obtain the following decoding table.
  • 209. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 209
  • 210. ERROR CORRECTION USING SYNDROME VECTOR 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 210
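Pulling the above slides together, here is a minimal Python sketch of syndrome decoding for a systematic (7,4) code: encode with G = [I_k | P], compute the syndrome S = Y H^T, and correct a single-bit error by matching S against the columns of H. The particular P is an assumed example, since the slides' matrices are shown as images.

```python
import numpy as np

# Assumed parity submatrix; any valid P works the same way.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
G = np.hstack([np.eye(4, dtype=int), P])     # G = [I_k | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])   # H = [P^T | I_q]

def encode(m):
    return (np.array(m) @ G) % 2             # X = M G (mod-2)

def correct(y):
    s = (np.array(y) @ H.T) % 2              # syndrome S = Y H^T
    if s.any():                              # nonzero syndrome -> error
        for i, col in enumerate(H.T):        # match S against columns of H
            if np.array_equal(col, s):
                y = np.array(y); y[i] ^= 1   # flip the erroneous bit
                break
    return y

x = encode([1, 0, 1, 1])
y = x.copy(); y[2] ^= 1                      # inject a single-bit error
print(x, correct(y))                         # corrected word equals x
```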
• 211. CYCLIC CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 211 CYCLIC CODES:  Cyclic codes are a subclass of linear block codes.  Cyclic codes can be in systematic or non-systematic form.  In systematic form, the check bits are calculated separately and the code vector has the form X = (M : C), where 'M' represents the message bits and 'C' represents the check bits. Definition: A linear code is called a cyclic code if every cyclic shift of a code vector produces some other code vector. Properties of Cyclic Codes: Cyclic codes exhibit two fundamental properties: (i) Linearity (ii) Cyclic property
  • 212. CYCLIC CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 212
  • 213. CYCLIC CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 213
• 214. CYCLIC CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 214 Algebraic structure of cyclic codes: the code words can be represented by polynomials. For example, consider the n-bit code word X = (x_{n-1}, x_{n-2}, ..., x_1, x_0). It can be represented by a polynomial of degree less than or equal to (n-1):
X(p) = x_{n-1} p^{n-1} + x_{n-2} p^{n-2} + ... + x_1 p + x_0
Here X(p) is the polynomial of degree at most (n-1) and p is the arbitrary variable of the polynomial. The power of p represents the position of the bit: p^{n-1} represents the MSB, p^0 represents the LSB, and p^1 represents the second bit from the LSB side.
• 215. CYCLIC CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 215 Generation of code vectors in non-systematic form: Let M = (m_{k-1}, m_{k-2}, ..., m_1, m_0) be the k-bit message vector, represented by the polynomial
M(p) = m_{k-1} p^{k-1} + m_{k-2} p^{k-2} + ... + m_1 p + m_0
Let X(p) be the code-word polynomial. It is given as X(p) = M(p) G(p), where G(p) is the generator polynomial of degree q. For an (n,k) cyclic code, q = n - k is the number of parity bits. The generator polynomial has the form G(p) = p^q + g_{q-1} p^{q-1} + ... + g_1 p + 1.
• 216. GENERATION OF CODE VECTORS IN NON- SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 216 If M1, M2, M3, ... etc. are the other message vectors, then the corresponding code vectors can be calculated as X1(p) = M1(p)G(p), X2(p) = M2(p)G(p), X3(p) = M3(p)G(p), and so on. All the above code vectors X1, X2, X3, ... are in non-systematic form and they satisfy the cyclic property. Note: The generator polynomial G(p) remains the same for all message vectors.
• 217. Example: The generator polynomial of a (7,4) cyclic code is G(p) = p^3 + p + 1. Find all the code vectors for the code in non-systematic form. Sol: Here n = 7, k = 4 and q = n - k = 3. To find the possible message vectors: 2^k = 2^4 = 16 message vectors of four bits are possible. 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 217
  • 218. GENERATION OF CODE VECTORS IN NON- SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 218
  • 219. GENERATION OF CODE VECTORS IN NON- SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 219
  • 220. GENERATION OF CODE VECTORS IN NON- SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 220
• 221. GENERATION OF CODE VECTORS IN NON- SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 221 To check whether the cyclic property is satisfied: consider the code vector X9 from the above table, X9 = (1 0 1 1 0 0 0). Shifting this code vector to the left by one bit position gives X' = (0 1 1 0 0 0 1). From the table, X' = X8 = (0 1 1 0 0 0 1). Thus a cyclic shift of X9 produces X8. The cyclic property can be verified for the other code vectors in the same way.
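As a quick check of both steps, here is a minimal Python sketch of the multiplication X(p) = M(p)G(p) and of the cyclic-shift test above; polynomials are stored as integers, with bit i holding the coefficient of p^i. (X9 = 1011000 corresponds to the message 1000, i.e. M(p) = p^3, since p^3 · G(p) = p^6 + p^4 + p^3.)

```python
G = 0b1011                                   # G(p) = p^3 + p + 1

def gf2_mul(a, b):
    """Polynomial multiplication over GF(2)."""
    out = 0
    while b:
        if b & 1:
            out ^= a                         # mod-2 add a shifted copy of a
        a <<= 1
        b >>= 1
    return out

x = gf2_mul(0b1000, G)                       # message (1 0 0 0), M(p) = p^3
print(format(x, '07b'))                      # 1011000 -> the vector X9 above

shifted = ((x << 1) | (x >> 6)) & 0b1111111  # one cyclic left shift
print(format(shifted, '07b'))                # 0110001 = X8, another codeword
```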
  • 222. GENERATION OF CODE VECTORS IN SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 222
• 223. GENERATION OF CODE VECTORS IN SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 223 The above equation implies: • (i) multiply the message polynomial by p^q. • (ii) Divide p^q M(p) by the generator polynomial G(p). • (iii) The remainder of the division is C(p).
• 224. Example: The generator polynomial of a (7,4) cyclic code is G(p) = p^3 + p + 1. Find all the code vectors in systematic form. Sol: Here n = 7, k = 4 and q = n - k = 3. To find the possible message vectors: 2^k = 2^4 = 16 message vectors of four bits are possible. 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 224
• 225. GENERATION OF CODE VECTORS IN SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 225 To find the message polynomial, consider the message vector M = (m3 m2 m1 m0) = (0 1 0 1). Then
M(p) = m3 p^3 + m2 p^2 + m1 p + m0 = p^2 + 1
The given generator polynomial is G(p) = p^3 + p + 1.
• 226. GENERATION OF CODE VECTORS IN SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 226 To obtain p^q M(p): since q = 3,
p^q M(p) = p^3 M(p) = p^3 (p^2 + 1) = p^5 + p^3
i.e., p^q M(p) = p^5 + 0p^4 + p^3 + 0p^2 + 0p + 0
and G(p) = p^3 + p + 1, i.e., G(p) = p^3 + 0p^2 + p + 1.
  • 227. GENERATION OF CODE VECTORS IN SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 227
  • 228. GENERATION OF CODE VECTORS IN SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 228
• 229. GENERATION OF CODE VECTORS IN SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 229 This is the required code vector for the message vector (0 1 0 1) in systematic form. The code vectors for the other message vectors can be obtained by following the same procedure; they are listed in the table below.
  • 230. GENERATION OF CODE VECTORS IN SYSTEMATIC FORM: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 230
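The division carried out on the slides can be reproduced with a short sketch: the check bits are the remainder of p^q M(p) divided by G(p), computed here by bitwise mod-2 long division (polynomials as integers, bit i = coefficient of p^i).

```python
def gf2_mod(dividend, divisor):
    """Remainder of polynomial division over GF(2)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift        # subtract (XOR) aligned divisor
    return dividend

G = 0b1011                                  # G(p) = p^3 + p + 1, q = 3
M = 0b0101                                  # message (0 1 0 1)
C = gf2_mod(M << 3, G)                      # remainder of p^3 M(p) / G(p)
X = (M << 3) | C                            # X = (M : C) in systematic form
print(format(C, '03b'), format(X, '07b'))   # 100 and 0101100, as worked above
```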
  • 231. ENCODING USING AN (N - K) BIT SHIFT REGISTER: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 231
• 232. ENCODING USING AN (N - K) BIT SHIFT REGISTER: OPERATION:  The feedback switch is first closed and the output switch is connected to the message input.  All the shift registers are initialized to the all-zero state. The k message bits are shifted to the transmitter as well as into the registers.  After the shift of the k message bits, the registers contain the q check bits. The feedback switch is then opened and the output switch is connected to the check-bit position. With every further shift, the check bits are shifted to the transmitter. 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 232
  • 233. ENCODING USING AN (N - K) BIT SHIFT REGISTER: 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 233
• 234. Here q = 3, so there are 3 flip-flops in the shift register to hold the check bits c0, c1 and c2. Since g2 = 0, its link is not connected; g1 = 1, hence its link is connected. 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 234
  • 235. Shift register bit position for input message 1100 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 235
  • 236. Operation of (7,4) cyclic code encoder 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 236
• 237. CONVOLUTIONAL CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 237 • Convolutional coding is done by combining a fixed number of input bits. • The input bits are stored in a fixed-length shift register and combined with the help of mod-2 adders. This operation is equivalent to binary convolution, and hence it is called convolutional coding.
  • 238. CONVOLUTIONAL CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 238
  • 239. CONVOLUTIONAL CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 239
  • 240. CONVOLUTIONAL CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 240
  • 241. CONVOLUTIONAL CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 241
  • 242. CONVOLUTIONAL CODES 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 242
• 243. STATES OF THE ENCODER 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 243 • In the above convolutional encoder, the two previous message bits m1 and m2 represent the state. • The input message bit m affects the state of the encoder as well as the outputs x1 and x2 during that state. • Whenever a new message bit is shifted into m, the contents of m1 and m2 define a new state, and the outputs x1 and x2 change according to the new state m1, m2 and the message bit m. • Let the initial values of the bits stored in m1 and m2 be zero. That is, m1 m2 = 00 initially and the encoder is in state 'a'.
• 244. STATES OF THE ENCODER 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 244
m2 m1 | State of encoder
0  0  | a
0  1  | b
1  0  | c
1  1  | d
  • 245. DEVELOPMENT OF THE CODE TREE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 245
• 246. DEVELOPMENT OF THE CODE TREE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 246 • The development of the code tree for the message sequence m = 110 is given below. Initially, assume the encoder is in state 'a', i.e., m1 m2 = 00. (i) When the message bit m = 1 (first bit): with this input, x1 and x2 are calculated from the encoder's mod-2 adders, which (consistently with the state table that follows) are x1 = m ⊕ m1 ⊕ m2 and x2 = m ⊕ m2.
  • 247. DEVELOPMENT OF THE CODE TREE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 247
  • 248. DEVELOPMENT OF THE CODE TREE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 248
  • 249. DEVELOPMENT OF THE CODE TREE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 249
  • 250. DEVELOPMENT OF THE CODE TREE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 250
• 251. DEVELOPMENT OF THE CODE TREE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 251
Input bit | Present state | Next state | Encoder output
0 | a (00) | a (00) | 00
1 | a (00) | b (01) | 11
1 | b (01) | d (11) | 01
0 | d (11) | c (10) | 01
• 253. CODE TRELLIS 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 253 • The code trellis is a more compact representation of the code tree. • In the code tree there are four states or nodes; every state goes to some other state depending upon the input bit. • The trellis represents all such transitions in a single, unique diagram.
• 254. STATE TABLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 254
Input bit | Present state | Next state | Encoder output
0 | a (00) | a (00) | 00
1 | a (00) | b (10) | 11
0 | b (10) | c (01) | 10
1 | b (10) | d (11) | 01
0 | c (01) | a (00) | 11
1 | c (01) | b (10) | 00
0 | d (11) | c (01) | 01
1 | d (11) | d (11) | 10
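A minimal Python sketch of the encoder behind this table, assuming the taps x1 = m ⊕ m1 ⊕ m2 and x2 = m ⊕ m2 (these taps reproduce every row above, and encoding m = 110 from state 'a' gives the outputs used in the code tree):

```python
def conv_encode(bits):
    """Rate-1/2 encoder with taps x1 = m^m1^m2, x2 = m^m2 (assumed)."""
    m1 = m2 = 0                          # registers start in state a (00)
    out = []
    for m in bits:
        x1 = m ^ m1 ^ m2
        x2 = m ^ m2
        out.append(f"{x1}{x2}")
        m2, m1 = m1, m                   # shift: new state (m1 m2)
    return out

print(conv_encode([1, 1, 0]))            # ['11', '01', '01'], as in the tree
```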
• 255. CODE TRELLIS 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 255 [Figure: code trellis; the branches between current and next states carry the outputs 00, 11, 11, 00, 10, 01, 01, 10 from the state table.]
• 256. CODE TRELLIS 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 256 • The nodes on the left side denote the four possible current states, and those on the right represent the next states. • A solid transition line represents input m = 0 and a broken line represents input m = 1. • Along each transition line the output x1 x2 produced during the transition is marked. • For example, let the encoder be in current state 'a'. If the input is m = 0, the next state will be 'a', with the output x1 x2 = 00. • Thus the code trellis is a compact representation of the code tree.
• 257. STATE DIAGRAM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 257 • If the current and next states of the encoder are combined, the encoder can be represented by a state diagram. • For example, let the encoder be in state 'a'. If the input bit is m = 0, the next state is the same, i.e., 'a', with the output x1 x2 = 00. This is shown as a self-loop at node 'a' in the state diagram. • If the input is m = 1, the state diagram shows the next state as 'b', with outputs x1 x2 = 11.
• 258. STATE DIAGRAM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 258 [Figure: state diagram with nodes a, b, c, d; transitions labeled with the outputs 00, 11, 10, 01 from the state table, including self-loops at a and d.]
• 259. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 259 Example: A code-rate 1/3 convolutional encoder has generating vectors g1 = (1 0 0), g2 = (1 1 1) and g3 = (1 0 1). (i) Sketch the encoder configuration. (ii) Draw the code tree, state diagram, and trellis diagram. (iii) If the input message sequence is 10110, determine the output sequence of the encoder.
  • 260. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 260
  • 261. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 261
  • 262. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 262
  • 263. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 263
  • 264. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 264
  • 265. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 265
  • 266. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 266
  • 267. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 267
  • 268. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 268
  • 269. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 269
• 270. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 270
  • 271. SOLVED EXAMPLE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 271
• 272. CODE TRELLIS 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 272 [Figure: code trellis for the rate-1/3 encoder; branch outputs 000, 111, 011, 100, 010, 101, 001, 110.]
• 273. STATE DIAGRAM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 273 [Figure: state diagram for the rate-1/3 encoder; nodes a, b, c, d with transitions labeled 000, 111, 011, 100, 010, 101, 001, 110.]
• 274. CODE TREE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 274 [Figure: code tree; from each node the upper branch corresponds to m = 0 and the lower branch to m = 1, with branch outputs starting 000 and 111 from the initial state.]
  • 275. TO FIND OUTPUT SEQUENCE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 275
  • 276. TO FIND OUTPUT SEQUENCE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 276
  • 277. TO FIND OUTPUT SEQUENCE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 277
  • 278. TO FIND OUTPUT SEQUENCE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 278
  • 279. TO FIND OUTPUT SEQUENCE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 279
  • 280. TO FIND OUTPUT SEQUENCE 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 280
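The slides work this out on the trellis; as a cross-check, a minimal Python sketch applying the stated generators to the current bit and the two previous bits gives the output sequence directly.

```python
def encode_r13(bits):
    """Rate-1/3 encoder: g1 = (1 0 0), g2 = (1 1 1), g3 = (1 0 1)."""
    m1 = m2 = 0                     # the two previous message bits
    out = []
    for m in bits:
        x1 = m                      # g1 = (1 0 0): current bit only
        x2 = m ^ m1 ^ m2            # g2 = (1 1 1): all three taps
        x3 = m ^ m2                 # g3 = (1 0 1): current and oldest
        out.append(f"{x1}{x2}{x3}")
        m2, m1 = m1, m
    return out

print(encode_r13([1, 0, 1, 1, 0]))  # ['111', '010', '100', '101', '001']
```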
  • 281. VITERBI DECODING ALGORITHM 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 281
  • 282. ALGORITHM STEPS 2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 282
• 283–300. Example : Viterbi Decoding
Input data m: 1 1 0 1 1
Codeword U: 11 01 01 00 01
Received seq. Z: 11 01 01 10 01 (one channel error, in the fourth pair)
[Trellis figures, stages t1 to t6: at each stage the branch metric, i.e. the Hamming distance between the received pair and the branch output, is added to the incoming path metric; at each state the path with the smaller path metric is kept as the survivor and the other path is discarded. The surviving state metrics shown on the slides include t2: a = 2, b = 0; t3: a = 3, b = 3, c = 2, d = 0; t5: a = 1, b = 1, c = 3, d = 2; t6: a = 2, b = 2, c = 2, d = 1. Tracing back from the minimum-metric state at t6 (state d, metric 1) recovers the transmitted data m = 1 1 0 1 1 in spite of the channel error.]
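For completeness, here is a compact hard-decision Viterbi decoder sketch in Python. The encoder taps assumed here (x1 = m ^ m1 ^ m2, x2 = m ^ m2, the standard constraint-length-3 code) reproduce the codeword U above, and decoding Z recovers m = 1 1 0 1 1 despite the single channel error.

```python
def branch_out(state, m):
    """Encoder output for input m from state (m1, m2); taps assumed."""
    m1, m2 = state
    return (m ^ m1 ^ m2, m ^ m2)

def viterbi(pairs):
    states = [(0, 0), (1, 0), (0, 1), (1, 1)]        # a, b, c, d
    metric = {s: (0 if s == (0, 0) else float('inf')) for s in states}
    paths = {s: [] for s in states}
    for r in pairs:
        new_metric, new_paths = {}, {}
        for s in states:                             # add-compare-select
            best = None
            for prev in states:
                for m in (0, 1):
                    if (m, prev[0]) != s:            # prev -> s on bit m?
                        continue
                    x = branch_out(prev, m)
                    d = (x[0] ^ r[0]) + (x[1] ^ r[1])  # Hamming distance
                    cand = metric[prev] + d
                    if best is None or cand < best[0]:
                        best = (cand, paths[prev] + [m])
            new_metric[s], new_paths[s] = best       # keep the survivor
        metric, paths = new_metric, new_paths
    return paths[min(metric, key=metric.get)]        # trace back from min

Z = [(1, 1), (0, 1), (0, 1), (1, 0), (0, 1)]
print(viterbi(Z))                                    # [1, 1, 0, 1, 1]
```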