Communication Engineering, Unit IV: notes for the Anna University Regulation 2017 syllabus (III semester), covering the measure of information, entropy, the source coding theorem, Huffman coding, error control codes and convolutional coding.
Communication engineering -UNIT IV .pptx
1. UNIT IV - INFORMATION THEORY AND CODING
2/27/2024 MAHENDRA COLLEGE OF ENGINEERING 1
SYLLABUS
• Measure of information
• Entropy
• Source coding theorem
• Shannon–Fano coding, Huffman Coding, LZ Coding
• Channel capacity
• Shannon-Hartley law, Shannon's limit
• Error control codes – Cyclic codes, Syndrome calculation
• Convolutional Coding, Sequential and Viterbi decoding
2. INFORMATION THEORY INTRODUCTION
• The purpose of a communication system is to facilitate the transmission of signals generated by an information source to the receiver end over a communication channel.
• Information theory is a branch of probability theory which may be applied to the study of communication systems. It allows us to determine the information content of a message signal, leading to different source coding techniques for efficient transmission of messages.
• In the context of communication, information theory deals with the mathematical modeling and analysis of communication rather than with physical sources and channels.
3. INFORMATION THEORY INTRODUCTION
• In particular, it provides answers to two fundamental questions:
• What is the irreducible complexity below which a signal cannot be compressed?
• What is the ultimate transmission rate for reliable communication over a noisy channel?
• The answers to these questions lie in the entropy of the source and the capacity of the channel, respectively.
• Entropy is defined in terms of the probabilistic behavior of a source of information.
• Channel capacity is defined as the intrinsic ability of a channel to convey information; it is naturally related to the noise characteristics of the channel.
4. INFORMATION
• Information is what the source of a communication system produces, whether the system is analog or digital. Information is related to the probability of occurrence of an event. The unit of information is called the bit.
• Based on memory, information sources can be classified as follows:
• A source with memory is one for which a current symbol depends on the previous symbols.
• A memoryless source is one for which each symbol produced is independent of the previous symbols.
• A Discrete Memoryless Source (DMS) can be characterized by the list of symbols, the probability assignment to these symbols, and the specification of the rate at which the source generates symbols.
5. DISCRETE MEMORYLESS SOURCE (DMS)
• Consider a probabilistic experiment involving the observation of the output emitted by a discrete source during every unit of time (signaling interval). The source output is modeled as a discrete random variable S, which takes on symbols from a fixed finite alphabet
S = { s0, s1, s2, … , sK-1 }
with probabilities
P(S = sk) = pk , k = 0, 1, 2, … , K-1.
The set of probabilities must satisfy the conditions pk ≥ 0 and
p0 + p1 + … + pK-1 = 1.
• We assume that the symbols emitted by the source during successive signaling intervals are statistically independent.
• A source having the above described properties is called a Discrete Memoryless Source (DMS).
6. INFORMATION AND UNCERTAINTY
• Information is related to the probability of occurrence of an event: the greater the uncertainty, the greater the information associated with it.
• Consider the event S = sk, describing the emission of symbol sk by the source with probability pk. Clearly, if the probability pk = 1 and pi = 0 for all i ≠ k, then there is no surprise and therefore no information when symbol sk is emitted.
• If, on the other hand, the source symbols occur with different probabilities and the probability pk is low, then there is more surprise and, therefore, more information when symbol sk is emitted by the source.
7. INFORMATION AND UNCERTAINTY
• Example:
• "The sun rises in the east": here uncertainty is zero; there is no surprise in the statement, since the probability of occurrence is 1 (pk = 1).
• "The sun does not rise in the east": here uncertainty is high, because there is maximum surprise and hence maximum information.
• The amount of information is related to the inverse of the probability of occurrence of the event S = sk, as shown in the figure. The amount of information gained after observing the event S = sk, which occurs with probability pk, is
I(sk) = log2(1/pk) = -log2(pk) bits
8. PROPERTIES OF INFORMATION
• If we are absolutely certain of the outcome of an event, even before it occurs, there is no information gained.
• The occurrence of an event S = sk either provides some
or no information, but never brings about a loss of
information.
• That is, the less probable an event is, the more
information we gain when it occurs.
9. PROPERTIES OF INFORMATION
• The additive property follows from the logarithmic definition of I(sk): if sk and sl are statistically independent, then
I(sk sl) = I(sk) + I(sl).
• It is standard practice in information theory to use a logarithm to base 2, with binary signalling in mind. The resulting unit of information is called the bit, which is a contraction of the words binary digit.
10. PROPERTIES OF INFORMATION
• Hence, one bit is the amount of information that we gain when one of two possible and equally likely (i.e., equiprobable) events occurs.
• Note that the information I(sk) is positive, because the logarithm of a number less than one, such as a probability, is negative.
11. ENTROPY (AVERAGE INFORMATION)
• The entropy of a discrete random variable, representing the output of a source of information, is a measure of the average information content per source symbol.
• The amount of information I(sk) produced by the source during an arbitrary signalling interval depends on the symbol sk emitted by the source at that time. The self-information I(sk) is a discrete random variable that takes on the values I(s0), I(s1), …, I(sK-1) with probabilities p0, p1, …, pK-1 respectively.
• The expectation of I(sk) over all the values taken by the random variable S gives the entropy:
H(S) = E[I(sk)] = Σk pk I(sk) = Σk pk log2(1/pk) bits/symbol
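The two definitions above, self-information and its average (the entropy), can be checked numerically. A minimal Python sketch (the function names are our own):

```python
import math

def self_information(p):
    """I(sk) = log2(1/pk): information gained, in bits, when a symbol of probability p occurs."""
    return math.log2(1 / p)

def entropy(probs):
    """H(S) = sum over k of pk * log2(1/pk), in bits per symbol."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

# A certain event (p = 1) carries no information; two equiprobable
# symbols carry exactly 1 bit each on average.
print(self_information(1.0))   # 0.0
print(entropy([0.5, 0.5]))     # 1.0
```

Note that a uniform source of K = 4 symbols gives entropy log2(4) = 2 bits/symbol, the upper bound discussed in the entropy properties below.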
12. PROPERTIES OF ENTROPY
Consider a discrete memoryless source whose mathematical model is defined by
S= { s0,s1,s2,… ,sK-1}
with probabilities,
P(S=sk)= pk , k=0,1,2,…….,K-1.
The entropy H(S) of the discrete random variable S
is bounded as follows:
0 ≤ H (s ) ≤ log2 (K )
where K is the number of symbols in the alphabet
13. PROPERTIES OF ENTROPY
Property 1: H(S) = 0, if, and only if, the probability pk = 1 for
some k, and the remaining probabilities in the set are all
zero; this lower bound on entropy corresponds to no
uncertainty
14. PROPERTIES OF ENTROPY
Property 2: H(S) = log K, if, and only if, pk = 1/K for all k ; this
upper bound on entropy corresponds to maximum
uncertainty.
21. INFORMATION RATE (R)
Information rate (R) is the average number of bits of information transmitted per second.
R = r H(S)
Where,
R - information rate ( Information bits / second )
H(S) - the Entropy or average information (bits / symbol)
and
r - the rate at which messages are generated (symbols /
second).
22. MUTUAL INFORMATION
• The mutual information I(X;Y) is a measure of the uncertainty
about the channel input, which is resolved by observing the
channel output.
• Mutual information I(xi,yj) of a channel is defined as the
amount of information transferred when xi transmitted and yj
received.
23. DISCRETE MEMORYLESS CHANNELS
A Discrete Memoryless Channel is a statistical model with an input X and output Y (i.e., a noisy version of X). Both X and Y are random variables.
The channel is said to be discrete when both X and Y have finite alphabet sizes. It is said to be memoryless when the current output symbol depends only on the current input symbol and not on any previous ones.
25. TYPES OF CHANNELS
• LOSSLESS CHANNELS
A channel described by a channel matrix with only one non-zero element in each column is called a lossless channel.
26. TYPES OF CHANNELS
DETERMINISTIC CHANNEL
• A channel described by a channel matrix with
one non-zero element in each row is called a
deterministic channel
27. TYPES OF CHANNELS
BINARY SYMMETRIC CHANNEL
• Binary Symmetric Channel has two input symbols x0=0
and x1=1 and two output symbols y0=0 and y1=1. The
channel is symmetric because the probability of receiving a 1
if a 0 is sent is the same as the probability of receiving a 0 if
a 1 is sent.
• In other words, a correct bit is transmitted with probability 1 - p and a wrong bit with probability p. The conditional probability of error p is also called the "cross-over probability".
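A standard result for the BSC (not derived on these slides, but consistent with the channel capacity definition given later) is that its capacity is C = 1 - H(p), where H(p) is the binary entropy of the cross-over probability. A short Python sketch:

```python
import math

def binary_entropy(p):
    """H(p) = p*log2(1/p) + (1-p)*log2(1/(1-p)), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with cross-over probability p,
    in bits per channel use: C = 1 - H(p)."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))   # 1.0 (noiseless: each use carries one full bit)
print(bsc_capacity(0.5))   # 0.0 (output is independent of the input)
```

The capacity is symmetric about p = 0.5: a channel that flips every bit (p = 1) is as useful as a noiseless one, since the receiver can simply invert its output.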
28. TYPES OF CHANNELS
BINARY ERASURE CHANNEL (BEC)
A Binary erasure channel (BEC) has two inputs (0,1)
and three outputs (0,y,1). The symbol y indicates that, due to
noise, no deterministic decision can be made as to whether
the received symbol is a 0 or 1. In other words the symbol y
represents that the output is erased. Hence the name Binary
Erasure Channel (BEC)
29. CHANNEL CAPACITY
The channel capacity of a discrete memoryless channel is defined as the maximum average mutual information I(X;Y) in any single use of the channel (i.e., signaling interval), where the maximization is over all possible input probability distributions {p(xi)} on X. It is measured in bits per channel use.
Channel capacity (C) = max I(X;Y)
The channel capacity C is a function only of the transition probabilities p(yj|xi), which define the channel. The calculation of the channel capacity C involves maximization of the average mutual information I(X;Y) over K variables.
31. CHANNEL CODING THEOREM
• The design goal of channel coding is to increase the
resistance of a digital communication system to channel
noise.
• Specifically, channel coding consists of mapping the
incoming data sequence into a channel input sequence and
inverse mapping the channel output sequence into an output
data sequence in such a way that the overall effect of
channel noise on the system is minimized.
32. CHANNEL CODING THEOREM
• The mapping operation is performed in the transmitter by a channel encoder, whereas the inverse mapping operation is performed in the receiver by a channel decoder.
THEOREM
The channel-coding theorem for a discrete memoryless channel is stated as follows:
• Let a discrete memoryless source with an alphabet 𝒮 have entropy H(S) for random variable S and produce symbols once every Ts seconds. Let a discrete memoryless channel have capacity C and be used once every Tc seconds. Then, if
H(S)/Ts ≤ C/Tc
33. CHANNEL CODING THEOREM
there exists a coding scheme for which the source output can be transmitted over the channel and be reconstructed with an arbitrarily small probability of error.
The parameter C/Tc is called the critical rate. When
H(S)/Ts = C/Tc
the system is said to be signalling at the critical rate.
Conversely, if
H(S)/Ts > C/Tc
34. CHANNEL CODING THEOREM
it is not possible to transmit information over the channel and reconstruct it with an arbitrarily small probability of error.
The ratio Tc/Ts equals the code rate of the encoder, denoted by r:
r = Tc/Ts
• In short, the Channel Coding Theorem states that if a discrete memoryless channel has capacity C and a source generates information at a rate less than C, then there exists a coding technique such that the output of the source may be transmitted over the channel with an arbitrarily low probability of symbol error.
• Conversely, it is not possible to find such a code if the code rate r is greater than the channel capacity C. If r ≤ C, there exists a code capable of achieving an arbitrarily low probability of error.
35. CHANNEL CODING THEOREM
Limitations
It does not show us how to construct a good code.
The theorem does not give a precise result for the probability of symbol error after decoding the channel output.
36. MAXIMUM ENTROPY FOR GAUSSIAN CHANNEL
39. CHANNEL (INFORMATION) CAPACITY THEOREM OR SHANNON'S THEOREM
The information capacity (C) of a continuous channel of bandwidth B hertz, affected by AWGN of total noise power N within the channel bandwidth, is given by the formula
C = B log2(1 + S/N) bits per second
where, C - Information Capacity.
B - Channel Bandwidth
S - Signal power (or) Average Transmitted power
N – Total noise power within the channel bandwidth (B)
It is easier to increase the information capacity of
a continuous communication channel by expanding its
bandwidth than by increasing the transmitted power for a
prescribed noise variance.
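As a numerical illustration of the formula (the bandwidth and S/N values below are assumed for the example, not taken from the slides):

```python
import math

def channel_capacity(B, snr):
    """Shannon-Hartley: C = B * log2(1 + S/N), in bits per second."""
    return B * math.log2(1 + snr)

# Assumed example: B = 3000 Hz, S/N = 1023 (about 30 dB).
print(channel_capacity(3000, 1023))   # 30000.0

# Doubling the bandwidth doubles C, while C grows only
# logarithmically in the signal-to-noise ratio.
print(channel_capacity(6000, 1023))   # 60000.0
print(channel_capacity(3000, 2047))   # 33000.0
```

This illustrates the remark above: for a fixed noise variance, expanding the bandwidth raises the capacity far faster than raising the transmitted power does.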
42. CHANNEL (INFORMATION) CAPACITY THEOREM OR SHANNON'S THEOREM
Trade off
A noiseless channel has infinite capacity: if there is no noise in the channel, then N = 0, so S/N → ∞ and hence C → ∞.
A channel of infinite bandwidth has limited capacity: as B increases, the noise power N = N0B also increases, so the S/N ratio decreases. Hence, even as B approaches infinity, the capacity does not approach infinity but a finite limit.
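The second point can be verified numerically. With total noise power N = N0·B (N0 being the noise power spectral density), the capacity approaches the finite limit (S/N0)·log2(e) ≈ 1.44·S/N0 as B grows. The power values below are assumed for illustration:

```python
import math

S = 1.0      # assumed signal power, watts
N0 = 1e-3    # assumed noise power spectral density, watts/hertz

def capacity(B):
    """C = B * log2(1 + S/(N0*B)); the noise power N = N0*B grows with B."""
    return B * math.log2(1 + S / (N0 * B))

for B in (1e3, 1e5, 1e7, 1e9):
    print(B, capacity(B))

# As B -> infinity, C approaches the finite value (S/N0) * log2(e),
# about 1.44 * (S/N0), not infinity.
print((S / N0) * math.log2(math.e))
```

Running this shows the capacity climbing toward roughly 1442.7 bits/s and then saturating, however large B becomes.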
43. SOURCE CODING THEOREM
Efficient representation of data generated by a discrete
source is accomplished by some encoding process. The
device that performs the representation is called a source
encoder.
An efficient source encoder must satisfy two functional requirements:
• Code words produced by the encoder are in binary form.
• The source code is uniquely decodable, so that the original source sequence can be reconstructed perfectly from the encoded binary sequence.
44. SOURCE CODING THEOREM
THEOREM STATEMENT
• Given a discrete memoryless source of entropy H(S) (or H(X)), the average code word length L for any distortionless source encoding scheme is bounded as
L ≥ H(S)
• where the entropy H(S) (or H(X)) is the fundamental limit on the average number of bits per source symbol.
46. SOURCE CODING THEOREM
CODING EFFICIENCY (Ƞ)
Ƞ = H(S) / L
where L is the average code word length. The value of the coding efficiency Ƞ is always less than or equal to 1. The source encoder is said to be efficient when Ƞ approaches unity.
48. CODING TECHNIQUES
The two popular coding techniques used in information theory are
(i) Shannon-Fano Coding
(ii) Huffman Coding.
ALGORITHM OF SHANNON-FANO CODING:
(i) List the source symbols in order of decreasing probability.
(ii) Partition the set into two sets that are as close to equiprobable as possible.
(iii) Now assign "0" to the upper set and assign "1" to the lower set.
(iv) Continue this process, each time partitioning the sets with as nearly equal probabilities as possible, until further partitioning is not possible.
49. SHANNON-FANO CODING TECHNIQUES
(i) A discrete memoryless source has symbols x1, x2, x3, x4, x5 with probabilities 0.4, 0.2, 0.1, 0.2, 0.1 respectively. Construct a Shannon-Fano code for the source and calculate the code efficiency η.
62. SHANNON-FANO CODING TECHNIQUES
• A discrete memoryless source has five symbols x1, x2, x3, x4 and x5 with probabilities 0.4, 0.19, 0.16, 0.15 and 0.15 respectively. Construct a Shannon-Fano code for the source and calculate the code efficiency.
63. HUFFMAN CODING
• Huffman coding is a source coding method used for data compression applications.
• It is used to generate variable-length codes and gives the lowest possible value of average code word length.
• Huffman coding provides maximum efficiency and minimum redundancy.
• Huffman coding is also known as a minimum redundancy code or optimum code.
64. ALGORITHM FOR HUFFMAN CODING
• Step 1 : Arrange given messages in decreasing order of
probabilities.
• Step 2: Group the least probable M messages and
assign them symbols for coding.
• Step 3 : Add the probabilities of grouped messages and
place them as high as possible and rearrange them in
decreasing order again. This process is known as
reduction.
• Step 4: Repeat steps 2 & 3 until only M or fewer probabilities remain.
• Step 5: Obtain the code words for each message by tracing the probabilities from left to right, in the direction of the arrows.
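The steps above can be sketched in Python for the binary case (M = 2) using a priority queue (an illustrative implementation; the exact 0/1 branch labelling, and hence the code words, may differ from a hand-drawn reduction, but the code word lengths and the average length agree):

```python
import heapq
import itertools

def huffman(probs):
    """probs: {symbol: probability}. Returns {symbol: code} for binary Huffman."""
    tie = itertools.count()   # tie-breaker so heap entries always compare
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, group1 = heapq.heappop(heap)   # the two least probable groups
        p2, _, group2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in group1.items()}
        merged.update({s: "1" + c for s, c in group2.items()})
        # The combined probability re-enters the reduction (steps 2 and 3).
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

probs = {"m1": 0.4, "m2": 0.2, "m3": 0.2, "m4": 0.1, "m5": 0.1}
codes = huffman(probs)
avg_len = sum(probs[s] * len(c) for s, c in codes.items())
print(codes)
print(avg_len)   # ~2.2 bits/message, matching the worked example below
```

Note the heap always merges the two least probable entries, which is the binary special case of grouping the least probable M messages.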
65. PROBLEMS OF HUFFMAN CODING
• 1)Using Huffman algorithm find the average code
word length and code efficiency. Consider the 5
source symbols of a DMS with probabilities
p(m1)=0.4, p(m2)=0.2, p(m3)=0.2, p(m4)=0.1 and
p(m5)=0.1.
Message (mk) probabilities(Pk)
m1 0.4
m2 0.2
m3 0.2
m4 0.1
m5 0.1
73. PROBLEMS OF HUFFMAN CODING
Message (mk)   probabilities (Pk)   Stage I   Stage II   Stage III   Code word   No. of bits per message (Ik)
m1             0.4                  0.4       0.4        0.6         00          2
m2             0.2                  0.2       0.4        0.4         10          2
m3             0.2                  0.2       0.2        -           11          2
m4             0.1                  0.2       -          -           010         3
m5             0.1                  -         -          -           011         3
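From the code word lengths of this problem, the average length, entropy and efficiency follow in a few lines of Python:

```python
import math

probs   = [0.4, 0.2, 0.2, 0.1, 0.1]   # m1 ... m5
lengths = [2, 2, 2, 3, 3]             # code word lengths found above

L = sum(p * l for p, l in zip(probs, lengths))    # average code word length
H = sum(p * math.log2(1 / p) for p in probs)      # entropy H(S)
print(L)       # ~2.2 bits/message
print(H)       # ~2.12 bits/message
print(H / L)   # efficiency ~0.9645, i.e. about 96.45%
```

Since L is within 4% of H(S), the code is close to the source coding theorem's lower bound.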
77. PROBLEMS OF HUFFMAN CODING
• For a discrete memoryless source X with 6 symbols x1, x2, …, x6 the probabilities are p(x1) = 0.3, p(x2) = 0.25, p(x3) = 0.2, p(x4) = 0.12, p(x5) = 0.08, p(x6) = 0.05 respectively. Using Huffman coding, calculate the entropy, average length of code, efficiency and redundancy of the code.
Message (mk)   probabilities (Pk)
X1             0.3
X2             0.25
X3             0.2
X4             0.12
X5             0.08
x6             0.05
87. PROBLEMS OF HUFFMAN CODING
Message (mk)   probabilities (Pk)   Stage I   Stage II   Stage III   Stage IV   Code word   No. of bits per message (Ik)
X1             0.3                  0.3       0.3        0.45        0.55       00          2
X2             0.25                 0.25      0.25       0.3         0.45       10          2
X3             0.2                  0.2       0.25       0.25        -          11          2
X4             0.12                 0.13      0.2        -           -          011         3
X5             0.08                 0.12      -          -           -          0100        4
x6             0.05                 -         -          -           -          0101        4
92. ALGORITHM FOR LEMPEL ZIV CODING
• Principle: the source data stream is parsed into segments that are the shortest subsequences not encountered previously.
Step 1: Divide the given input data stream into subsequences.
Step 2: Assign a numerical position to each subsequence.
Step 3: Assign a numerical representation to each subsequence.
93. PROBLEMS FOR LEMPEL ZIV CODING
• Example 1: Given data stream:
0 0 0 1 0 1 1 1 0 0 1 0 1 0 0 1 0 1
Step 1: Divide the given input data stream into subsequences:
0, 1, 00, 01, 011, 10, 010, 100, 101
Numerical position:   1   2   3    4    5     6    7     8     9
Subsequences:         0   1   00   01   011   10   010   100   101
95. PROBLEMS FOR LEMPEL ZIV CODING
Numerical position:         1   2   3      4      5      6      7      8      9
Subsequences:               0   1   00     01     011    10     010    100    101
Numerical representation:   -   -   11     12     42     21     41     61     62
Binary encoded blocks:      0   1   0010   0011   1001   0100   1000   1100   1101
Final encoded block sequence for the given data stream:
0010 0011 1001 0100 1000 1100 1101
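The parsing step of these binary examples can be sketched in Python (illustrative; the single symbols are seeded as the first two positions, as in the tables, though here in the fixed order 0 then 1, so the numbering of those two positions may differ from a given table):

```python
def lz_parse(stream, alphabet=("0", "1")):
    """Split a stream into the shortest subsequences not encountered before.
    The single symbols are treated as already known (positions 1 and 2)."""
    seen = set(alphabet)
    phrases, current = [], ""
    for symbol in stream:
        current += symbol
        if current not in seen:       # shortest subsequence not seen before
            seen.add(current)
            phrases.append(current)
            current = ""
    if current:                       # a trailing phrase may repeat an earlier one
        phrases.append(current)
    return list(alphabet) + phrases

print(lz_parse("000101110010100101"))
# ['0', '1', '00', '01', '011', '10', '010', '100', '101']
```

Concatenating the parsed phrases (after the two seeded symbols) reproduces the original stream, which is what makes the parsing uniquely decodable.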
96. PROBLEMS FOR LEMPEL ZIV CODING
• Example 2: Given data stream:
1 1 1 0 1 0 0 0 1 1 0 1 0 1 1 0 1 0
Step 1: Divide the given input data stream into subsequences:
1, 0, 11, 10, 100, 01, 101, 011, 010
Numerical position:   1   2   3    4    5     6    7     8     9
Subsequences:         1   0   11   10   100   01   101   011   010
98. PROBLEMS FOR LEMPEL ZIV CODING
Numerical position:         1   2   3      4      5      6      7      8      9
Subsequences:               1   0   11     10     100    01     101    011    010
Numerical representation:   -   -   11     12     42     21     41     61     62
Binary encoded blocks:      1   0   0010   0011   1001   0100   1000   1100   1101
Final encoded block sequence for the given data stream:
0010 0011 1001 0100 1000 1100 1101
99. PROBLEMS FOR LEMPEL ZIV CODING
• Example 3: Given data stream:
A A B A B B B A B A A B A B B B A B B A B B
Step 1: Divide the given input data stream into subsequences:
A, AB, ABB, B, ABA, ABAB, BB, ABBA, BB
Numerical position:   1   2    3     4   5     6      7    8      9
Subsequences:         A   AB   ABB   B   ABA   ABAB   BB   ABBA   BB
101. PROBLEMS FOR LEMPEL ZIV CODING
Numerical position:         1    2    3     4    5     6      7      8      9
Subsequences:               A    AB   ABB   B    ABA   ABAB   BB     ABBA   BB
Numerical representation:   φA   1B   2B    φB   2A    5B     4B     3A     7
Binary encoded blocks:      0    11   101   1    100   1011   1001   110    111
Max no. of bits: 4. Apply zero padding to all the other codes to get decoding without redundancies.
Zero-padded blocks:         0000 0011 0101  0001 0100  1011   1001   0110   0111
Final encoded block sequence for the given data stream:
0000 0011 0101 0001 0100 1011 1001 0110 0111
103. PROBLEMS FOR LEMPEL ZIV CODING
• Example 4: Given data stream:
B B A B A A A B A B B A B A A A B A A B A A
Step 1: Divide the given input data stream into subsequences:
B, BA, BAA, A, BAB, BABA, AA, BAAB, AA
Numerical position:   1   2    3     4   5     6      7    8      9
Subsequences:         B   BA   BAA   A   BAB   BABA   AA   BAAB   AA
105. PROBLEMS FOR LEMPEL ZIV CODING
Numerical position:         1    2    3     4    5     6      7      8      9
Subsequences:               B    BA   BAA   A    BAB   BABA   AA     BAAB   AA
Numerical representation:   φB   1A   2A    φA   2B    5A     4A     3B     7
Binary encoded blocks:      0    11   101   1    100   1011   1001   110    111
Max no. of bits: 4. Apply zero padding to all the other codes to get decoding without redundancies.
Zero-padded blocks:         0000 0011 0101  0001 0100  1011   1001   0110   0111
Final encoded block sequence for the given data stream:
0000 0011 0101 0001 0100 1011 1001 0110 0111
107. ERROR CONTROL CODES
ERROR:
An error is a condition in which the output information does not match the input information. During transmission, digital signals suffer from noise that can introduce errors in the binary bits travelling from one system to another: a 0 bit may change to 1, or a 1 bit may change to 0.
Block diagram:
Transmitter: Digital Source → Source Encoder → Error Control Coding → Line Coding → Modulator → X(w) → Channel
In the channel, noise N(w) is added to the transmitted signal X(w), giving Y(w).
Receiver: Y(w) → Demodulator → Line Decoding → Error Control Decoding → Source Decoder → Digital Sink
108. ERROR CONTROL CODES
CLASSIFICATION OF ERRORS:
Content Error:
These are errors in the content of the message, introduced due to noise during transmission.
Flow Integrity Error:
These errors are missing blocks of data, data lost in the network, or data delivered to the wrong destination.
CLASSIFICATION OF CODES:
Error Detecting codes
Error Correcting codes
109. IMPORTANT DEFINITIONS RELATED TO CODES
Code Word:
The code word is the n-bit encoded block of bits. It contains message bits and parity or redundant bits.
Block Length:
The number of bits n after coding is known as the block length.
Code Rate (or) Code Efficiency:
The code rate is defined as the ratio of the number of message bits (k) to the total number of bits (n) in a code word:
Code rate (r) = k / n
110. IMPORTANT DEFINITIONS RELATED TO CODES
Code Vector:
An ‘n’ bit code word can be visualized in an n – dimensional
space as a vector whose elements or coordinates are bits in the code
word.
111. IMPORTANT DEFINITIONS RELATED TO CODES
Hamming Distance:
The Hamming distance between two code words is equal
to the number of differences between them, e.g.,
1 0 0 1 1 0 1 1
1 1 0 1 0 0 1 0
Hamming distance = 3
Minimum Distance dmin:
It is defined as the smallest Hamming distance between
any pair of code vectors in the code.
Hamming Weight of a Code Word [w(x)]:
It is defined as the number of non-zero elements in the code word. It is denoted by w(x).
For example, if X = 01110101, then w(x) = 5.
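Both definitions can be computed directly; a minimal Python sketch using the examples above:

```python
def hamming_distance(x, y):
    """Number of positions in which two equal-length code words differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def hamming_weight(x):
    """Number of non-zero elements in a code word."""
    return sum(c != "0" for c in x)

print(hamming_distance("10011011", "11010010"))   # 3, as in the example
print(hamming_weight("01110101"))                 # 5
```

Note that for binary code words the Hamming distance between x and y equals the Hamming weight of their modulo-2 sum, which is why dmin governs a code's error detecting and correcting ability.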
112. ERROR CONTROL CODES
Error Control Coding
The main purpose of error control coding is to enable the receiver to detect, and even correct, errors by introducing some redundancy into the data which is to be transmitted. There are basically two mechanisms for adding redundancy. They are
Block Coding
Convolutional coding
113. CLASSIFICATION OF ERROR–CORRECTING
CODES:
Block Codes (no memory is required):
An (n, k) block code is generated when the channel encoder accepts information in successive k-bit blocks. At the end of each such block, (n - k) parity bits are added; these carry no new information and are termed redundant bits.
Convolutional Codes (memory is required):
Here the code words are generated by discrete-time convolution of the input sequence with the impulse response of the encoder. Unlike block codes, the channel encoder accepts messages as a continuous sequence and generates a continuous sequence of encoded bits at the output.
114. CLASSIFICATION OF ERROR–CORRECTING
CODES:
Linear Codes and Non-linear Codes
Linear codes have the property that when any two code
words of a linear code are added in a modulo-2 adder, a third
code word is produced which is also a valid code word; this is
not the case for non-linear codes.
115. CLASSIFICATION OF ERROR CONTROL CODING
TECHNIQUES
116. TYPES OF ERROR CONTROL TECHNIQUES
AUTOMATIC REPEAT REQUEST (ARQ)
Here, when an error is detected, a request is made
for retransmission of the signal.
A feedback channel is necessary for this
retransmission.
It differs from the FEC system in the following three
aspects:
1. Fewer check bits (parity bits) are required,
increasing the (k/n) ratio for an (n, k) block code.
2. Additional hardware to implement the feedback path is
required.
3. The forward transmission bit rate must make allowance
for retransmissions.
117. AUTOMATIC REPEAT REQUEST (ARQ)
For each message at the input, the encoder produces code words
which are stored temporarily at the encoder output and transmitted over the
forward transmission channel.
At the destination, the decoder decodes the signal and returns a
positive acknowledgement (ACK) if no error is detected, or a negative
acknowledgement (NAK) if an error is detected.
On receipt of a NAK, the controller retransmits the appropriate word
stored in the input buffer.
118. FORWARD ERROR CORRECTION (FEC)
TECHNIQUE
It is used in digital modulation systems where a discrete source
generates information in binary form.
The channel encoder accepts these message bits and adds redundant
bits to them, resulting in a higher bit rate for transmission.
The channel decoder uses the redundant bits to detect and correct
erroneously transmitted messages.
119. ERROR DETECTION TECHNIQUE
Whenever a message is transmitted, it may get
scrambled by noise or data may get corrupted. To avoid
this, we use error-detecting codes which are additional
data added to a given digital message to help us detect if
an error occurred during transmission of the message. A
simple example of error-detecting code is parity check.
i) Parity Checking
ii) Check sum Error Detection
iii) Cyclic Redundancy Check (CRC)
120. PARITY CHECKING OF ERROR DETECTION
• It is the simplest technique for detecting errors. The MSB of
an 8-bit word is used as the parity bit and the remaining 7 bits
are used as data or message bits. The parity of the 8-bit
transmitted word can be either even parity or odd parity.
• Even parity -- Even parity means the number of 1's in the
given word including the parity bit should be even (2,4,6,....).
• Odd parity -- Odd parity means the number of 1's in the
given word including the parity bit should be odd (1,3,5,....).
121. PARITY CHECKING OF ERROR DETECTION
USE OF PARITY BIT
• The parity bit can be set to 0 and 1 depending on the type of
the parity required.
• For even parity, this bit is set to 1 or 0 such that the no. of "1
bits" in the entire word is even. Shown in fig. (a).
• For odd parity, this bit is set to 1 or 0 such that the no. of "1
bits" in the entire word is odd. Shown in fig (b)
fig. (a): even-parity word (parity bit + message or data bits)
fig. (b): odd-parity word (parity bit + message or data bits)
122. PARITY CHECKING OF ERROR DETECTION
• ERROR DETECTION USING PARITY BIT
Parity checking at the receiver can detect the presence
of an error if the parity of the received signal differs from
the expected parity. That is, if it is known that the parity
of the transmitted signal is always going to be "even" and
the received signal has odd parity, the receiver can
conclude that the received signal is not correct. If an error is
detected, the receiver ignores the received byte and
requests retransmission of the same byte from the transmitter.
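The transmitter/receiver behaviour described above can be sketched in Python; the function names are illustrative:

```python
def parity_bit(data_bits, parity="even"):
    """Choose the parity bit so the whole word has the required parity."""
    bit = sum(data_bits) % 2            # this choice makes the total even
    return bit if parity == "even" else bit ^ 1

def check_parity(word, parity="even"):
    """True if the received word (data bits + parity bit) is consistent."""
    expected = 0 if parity == "even" else 1
    return sum(word) % 2 == expected

data = [1, 0, 1, 1, 0, 0, 1]            # 7 data bits
word = data + [parity_bit(data, "even")]
print(check_parity(word, "even"))       # True
word[3] ^= 1                            # one bit flipped in transit
print(check_parity(word, "even"))       # False - error detected
```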
123. CHECK SUM ERROR DETECTION
• This is a block code method where a checksum is created
based on the data values in the data blocks to be transmitted
using some algorithm and appended to the data.
• When the receiver gets this data, a new checksum is
calculated and compared with the existing checksum. A
non-match indicates an error.
Error Detection by Checksums
• For error detection by checksums, data is divided into fixed
sized frames or segments.
• Sender’s End − The sender adds the segments using 1’s
complement arithmetic to get the sum. It then complements
the sum to get the checksum and sends it along with the data
frames.
• Receiver’s End − The receiver adds the incoming segments
along with the checksum using 1’s complement arithmetic to
get the sum and then complements it. If the result is all
zeros, the data is accepted; otherwise it is rejected.
124. CHECK SUM ERROR DETECTION
Example
• Suppose that the sender wants to
send 4 frames each of 8 bits,
where the frames are 11001100,
10101010, 11110000 and
11000011.
• The sender adds the bits using 1s
complement arithmetic. While
adding two numbers using 1s
complement arithmetic, if there is
a carry over, it is added to the
sum.
• After adding all the 4 frames, the
sender complements the sum to
get the checksum, 11010011, and
sends it along with the data
frames.
• The receiver performs a 1s
complement arithmetic sum of all
the frames including the
checksum. The complement of
this sum is all zeros when no
error has occurred, so the data
is accepted.
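The worked example can be reproduced with a short sketch of 1's complement (end-around carry) addition; the helper name is illustrative:

```python
def ones_complement_sum(frames, width=8):
    """Add frames with 1's complement arithmetic (end-around carry)."""
    total = 0
    mask = (1 << width) - 1
    for f in frames:
        total += f
        if total > mask:                 # carry out of the word...
            total = (total & mask) + 1   # ...is added back into the sum
    return total

frames = [0b11001100, 0b10101010, 0b11110000, 0b11000011]
checksum = ones_complement_sum(frames) ^ 0xFF   # complement of the sum
print(format(checksum, "08b"))                  # 11010011, as in the example

# Receiver side: the sum of all frames plus the checksum complements to zero
total = ones_complement_sum(frames + [checksum])
print(format(total ^ 0xFF, "08b"))              # 00000000 - accepted
```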
125. CYCLIC REDUNDANCY CHECK (CRC)
The concept of parity checking can be extended from
detection to correction of a single error by arranging the data
block in a rectangular matrix.
This leads to two sets of parity bits, viz.
• Longitudinal Redundancy Check (LRC) and
• Vertical Redundancy Check (VRC).
126. LONGITUDINAL REDUNDANCY CHECK (LRC)
In a Longitudinal Redundancy Check, one row is taken
up at a time and, counting the number of 1s, the
parity bit is adjusted to achieve even parity.
For checking the message block, a complete
character known as the Block Check Character (BCC)
is added at the end of the block of information;
it may be of even or odd parity.
127. VERTICAL REDUNDANCY CHECK (VRC)
In VRC, the ASCII codes for the individual characters are
arranged vertically and, counting the number of 1s in each
column, the parity bit is adjusted to achieve even parity.
128. LONGITUDINAL REDUNDANCY CHECK (LRC)
A single error in any bit will result in an incorrect LRC in
its row and an incorrect VRC in its column. The bit which is
common to both that row and that column is the bit in error.
The limitation is that although the scheme can detect multiple
errors, it can correct only a single error, since with multiple
errors it cannot locate their positions.
The 1 in the square box in the next table is the bit in error, as
is evident from the erroneous results in both the LRC and the
VRC columns.
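The row/column intersection argument can be sketched in Python; the block below is an illustrative example in which every row and column already carries even parity:

```python
def locate_single_error(block):
    """Find a single-bit error from row (LRC) and column (VRC) even parity."""
    bad_rows = [r for r, row in enumerate(block) if sum(row) % 2 != 0]
    bad_cols = [c for c in range(len(block[0]))
                if sum(row[c] for row in block) % 2 != 0]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        return bad_rows[0], bad_cols[0]   # intersection is the bit in error
    return None                           # no error, or not correctable

# 4x4 block chosen so every row and column has even parity
block = [[1, 0, 1, 0],
         [0, 1, 1, 0],
         [1, 1, 0, 0],
         [0, 0, 0, 0]]
block[2][1] ^= 1                          # introduce a single-bit error
r, c = locate_single_error(block)
print(r, c)                               # 2 1
block[r][c] ^= 1                          # corrected
```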
129. LINEAR BLOCK CODES
PRINCIPLE OF BLOCK CODING:
For the block of k message bits, (n-k) parity bits or check bits
are added.
Hence the total bits at the output of channel encoder are ‘n’.
Such codes are called (n,k) block codes
Message block input → CHANNEL ENCODER → Code block output

Input block: MESSAGE (k bits)
Output code word: MESSAGE (k bits) | Check bits (n − k bits)
130. LINEAR BLOCK CODES
SYSTEMATIC CODES:
In a systematic block code, the message bits appear at
the beginning of the code word, and the check bits are then
transmitted in a block. This type of code is called a
systematic code.
NON-SYSTEMATIC CODES:
In non-systematic codes it is not possible to identify
the message bits and check bits; they are mixed in the
block code.
132. LINEAR BLOCK CODES
i.e., q is the number of redundant bits added by the encoder.
The above code vector can also be written as,
X = (M | C)
Here,
M = k-bit message vector
C = q-bit check vector
The check bits play the role of error detection and
correction. The job of the linear block code is to generate
those “check bits”.
133. MATRIX DESCRIPTION OF LINEAR BLOCK CODES:
The code vector can be represented as,
X = MG
Here, X = code vector of size 1 × n (n bits)
M = message vector of size 1 × k (k bits)
G = generator matrix of size k × n.
134. MATRIX DESCRIPTION OF LINEAR BLOCK CODES:
135. MATRIX DESCRIPTION OF LINEAR BLOCK CODES:
Note: All the additions are mod – 2 additions
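The product X = MG with mod-2 additions can be sketched in a few lines; the (6,3) systematic generator matrix below (G = [I | P]) is an illustrative example, not taken from the slides:

```python
def encode(m, G):
    """Code vector X = M*G with all additions taken modulo 2."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(n)]

# Illustrative systematic (6,3) generator matrix G = [I_k | P]
G = [[1, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]

print(encode([0, 0, 1], G))   # [0, 0, 1, 1, 0, 1] - third row of G
print(encode([1, 1, 0], G))   # [1, 1, 0, 1, 0, 1] - mod-2 sum of rows 1 and 2
```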
136. STRUCTURE OF LINEAR BLOCK CODER
139. LINEAR BLOCK CODES:
ii) To obtain the equation for check bits:
Here k = 3, q = 3, and n = 6
The block size of the message vector is 3 bits; hence
2^k = 2^3 = 8 message vectors are possible, as shown below.
151. ERROR DETECTION AND CORRECTION CAPABILITIES OF
HAMMING CODES:
For the Hamming code, dmin = 3
So, s ≤ dmin − 1 = 3 − 1
Number of errors detected by the Hamming code: s ≤ 2
Number of errors corrected by the Hamming code: t ≤ 1
201. ERROR CORRECTION USING SYNDROME VECTOR
• Here the block size of the message vector is 3 bits;
hence 2^k = 2^3 = 8 message vectors are possible,
as shown below.
208. ERROR CORRECTION USING SYNDROME VECTOR
Similarly, by calculating the syndromes for the other
bit-error vectors, we obtain the following decoding table.
211. CYCLIC CODES
CYCLIC CODES:
Cyclic codes are a subclass of linear block codes.
Cyclic codes can be in systematic or non-systematic form.
In systematic form, the check bits are calculated separately and
the code vector is of the form X = (M : C). Here ‘M’ represents
the message bits and ‘C’ represents the check bits.
Definition:
A linear code is called a cyclic code if every cyclic shift of
a code vector produces some other code vector.
Properties of Cyclic Codes:
Cyclic codes exhibit two fundamental properties:
(i) Linearity
(ii) Cyclic property
214. CYCLIC CODES
Algebraic structure of cyclic codes:
The code words can be represented by polynomials.
For example, consider the n-bit code word
X = (x_(n−1), x_(n−2), …, x_1, x_0)
This code word can be represented by a polynomial of degree
less than or equal to (n − 1), i.e.,
X(p) = x_(n−1)p^(n−1) + x_(n−2)p^(n−2) + … + x_1p + x_0
Here X(p) is the polynomial of degree (n − 1) and
p is the arbitrary variable of the polynomial.
The powers of ‘p’ represent the positions of the code-word bits, i.e.,
p^(n−1) represents the MSB
p^0 represents the LSB
p^1 represents the second bit from the LSB side.
215. CYCLIC CODES
Generation of Code Vectors in Non-Systematic
form:
Let M = (m_(k−1), m_(k−2), …, m_1, m_0) be the k-bit message
vector. It can be represented by the polynomial
M(p) = m_(k−1)p^(k−1) + m_(k−2)p^(k−2) + … + m_1p + m_0
Let X(p) represent the code-word polynomial. It is
given as,
X(p) = M(p)G(p)
Here G(p) is the generating polynomial of degree q.
For an (n, k) cyclic code, q = n − k represents the
number of parity bits. The generating polynomial is of the
form
G(p) = p^q + g_(q−1)p^(q−1) + … + g_1p + 1
216. GENERATION OF CODE VECTORS IN NON-
SYSTEMATIC FORM:
If M1, M2, M3, … are the other message
vectors, then the corresponding code vectors
can be calculated as,
X1(p) = M1(p)G(p)
X2(p) = M2(p)G(p)
X3(p) = M3(p)G(p) and so on.
All the above code vectors X1, X2, X3, … are in
non-systematic form and they satisfy the cyclic
property.
Note: The generator polynomial G(p) remains the
same for all code vectors.
217. Example: The generator polynomial of a (7,4) cyclic code
is G(p) = p^3 + p + 1.
Find all the code vectors for the code in non-systematic form.
Sol:
Here n = 7 and k = 4, so q = n − k = 3.
To find the possible message vectors:
2^k = 2^4 = 16 message vectors of four bits are possible.
218. GENERATION OF CODE VECTORS IN NON-
SYSTEMATIC FORM:
221. GENERATION OF CODE VECTORS IN NON-
SYSTEMATIC FORM:
To check whether the cyclic property is satisfied:
Consider code vector X9 from the above table,
X9 = (1 0 1 1 0 0 0)
Shifting this code vector cyclically to the left by
one bit position gives
X’ = (0 1 1 0 0 0 1)
From the table it is seen that
X’ = X8 = (0 1 1 0 0 0 1)
Thus a cyclic shift of X9 produces X8. This
cyclic property can be verified for the other code
vectors as well.
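A short GF(2) sketch can generate all 16 non-systematic code vectors for G(p) = p^3 + p + 1 and confirm the cyclic property; the helper name is illustrative, and polynomials are held as integers with bit i as the coefficient of p^i:

```python
def poly_mul_gf2(a, b):
    """Carry-less (GF(2)) product of two polynomials held as integers."""
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

G = 0b1011                      # G(p) = p^3 + p + 1
codewords = {format(poly_mul_gf2(m, G), "07b") for m in range(16)}

# Cyclic property: a one-bit rotation of any code word is again a code word
for c in codewords:
    assert c[1:] + c[0] in codewords
print(sorted(codewords))
```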
222. GENERATION OF CODE VECTORS IN SYSTEMATIC
FORM:
223. GENERATION OF CODE VECTORS IN SYSTEMATIC
FORM:
The above equation implies:
• (i) Multiply the message polynomial M(p) by p^q.
• (ii) Divide p^q M(p) by the generator polynomial G(p).
• (iii) The remainder of the division is C(p).
224. Example: The generator polynomial of a (7,4) cyclic
code is G(p) = p^3 + p + 1.
Find all the code vectors in systematic form.
Sol:
Here n = 7 and k = 4, so q = n − k = 3.
To find the possible message vectors:
2^k = 2^4 = 16 message vectors of four bits are possible.
225. GENERATION OF CODE VECTORS IN SYSTEMATIC
FORM:
To find the message polynomial,
Consider the message vector M = (m3 m2 m1 m0) = (0 1 0 1)
Then, M(p) = m3p^3 + m2p^2 + m1p + m0
M(p) = p^2 + 1
The given generator polynomial is,
G(p) = p^3 + p + 1
226. GENERATION OF CODE VECTORS IN SYSTEMATIC
FORM:
To obtain p^q M(p):
Since q = 3, p^q M(p) will be,
p^q M(p) = p^3 M(p) = p^3 (p^2 + 1)
p^q M(p) = p^5 + p^3
i.e., p^q M(p) = p^5 + 0p^4 + p^3 + 0p^2 + 0p + 0
and G(p) = p^3 + 0p^2 + p + 1
229. GENERATION OF CODE VECTORS IN SYSTEMATIC
FORM:
This is the required code vector for the message vector (0 1 0 1)
in systematic form. The code vectors for the other message
vectors can be obtained by following the same procedure; they
are listed in the table below.
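The three steps (multiply by p^q, divide by G(p), append the remainder as the check bits) can be sketched for this (7,4) example; the function name is illustrative, with polynomials held as integers (bit i = coefficient of p^i):

```python
def systematic_encode(m, g, n, k):
    """Systematic cyclic codeword: append the remainder of p^q*M(p) / G(p)."""
    q = n - k
    rem = m << q                        # p^q * M(p)
    for shift in range(n - 1, q - 1, -1):
        if rem & (1 << shift):          # cancel the leading term
            rem ^= g << (shift - q)     # mod-2 subtraction of G(p)*p^(shift-q)
    return (m << q) | rem               # X = (M | C)

G = 0b1011                              # G(p) = p^3 + p + 1
x = systematic_encode(0b0101, G, n=7, k=4)
print(format(x, "07b"))                 # 0101100 - check bits C = 100
```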
230. GENERATION OF CODE VECTORS IN SYSTEMATIC
FORM:
231. ENCODING USING AN (N - K) BIT SHIFT REGISTER:
232. ENCODING USING AN (N - K) BIT SHIFT REGISTER:
OPERATION:
The feedback switch is first closed and the output switch is
connected to the message input.
All the shift registers are initialized to the all-zero state. The k
message bits are shifted to the transmitter and simultaneously shifted into
the registers.
After the shift of the k message bits, the registers contain the q check
bits. The feedback switch is now opened and the output switch is
connected to the check-bit position. With every shift, the check
bits are then shifted out to the transmitter.
233. ENCODING USING AN (N - K) BIT SHIFT REGISTER:
234. Here q = 3, so there are 3 flip-flops in the shift register to hold
the check bits c1, c2, c0.
Since g2 = 0, its link is not connected; g1 = 1, hence its link is
connected.
235. Shift register bit position for input message 1100
236. Operation of (7,4) cyclic code encoder
237. CONVOLUTIONAL CODES
• Convolutional coding is done by combining a fixed
number of input bits.
• The input bits are stored in a fixed-length shift register
and are combined with the help of mod-2 adders. This
operation is equivalent to binary convolution, and hence it is
called convolutional coding.
243. STATES OF THE ENCODER
• In the above convolutional encoder, the two previous
message bits m1 and m2 represent the state.
• The input message bit m affects the state of the encoder as
well as the outputs x1 and x2 during that state.
• Whenever a new message bit is shifted into m, the contents of m1
and m2 define a new state, and the outputs x1 and x2 change
according to the new state m1, m2 and the message bit m.
• Let the initial values of the bits stored in m1 and m2 be zero, i.e.,
m1m2 = 00 initially, and the encoder is in state ‘a’.
244. STATES OF THE ENCODER
m2 m1 States of encoder
0 0 a
0 1 b
1 0 c
1 1 d
246. DEVELOPMENT OF THE CODE TREE
• The development of the code tree for the message sequence
m = 110 is given below. Initially, assume the encoder is
in state ‘a’, i.e., m1m2 = 00.
(i) When the message bit m = 1 (first bit):
The first message input is m = 1. With this input, x1 and x2 are
calculated as follows.
For the given convolutional encoder, the outputs x1 and x2
are given as,
251. DEVELOPMENT OF THE CODE TREE
Input bit | Present state | Next state | Encoder output
0 | a (00) | a (00) | 00
1 | a (00) | b (01) | 11
1 | b (01) | d (11) | 01
0 | d (11) | c (10) | 01
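The state transitions above can be reproduced with a short sketch, assuming the usual rate-1/2 encoder equations x1 = m ⊕ m1 ⊕ m2 and x2 = m ⊕ m2 (the equations themselves were on a slide figure not present in the text; this choice reproduces the table):

```python
def conv_encode(msg):
    """Rate-1/2 convolutional encoder, x1 = m^m1^m2, x2 = m^m2 (assumed taps)."""
    m1 = m2 = 0                         # encoder starts in state 'a' (00)
    out = []
    for m in msg:
        x1 = m ^ m1 ^ m2
        x2 = m ^ m2
        out.append((x1, x2))
        m2, m1 = m1, m                  # shift the register
    return out

print(conv_encode([1, 1, 0]))           # [(1, 1), (0, 1), (0, 1)]
```

The printed outputs 11, 01, 01 match the last three rows of the table for the message sequence 110 starting from state ‘a’.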
253. CODE TRELLIS
• The code trellis is a more compact representation of
the code tree.
• In the code tree there are four states or nodes. Every
state goes to some other state depending upon
the input bit.
• The trellis represents a single, unique diagram for
such transitions.
254. STATE TABLE
Input bit | Present state | Next state | Encoder output
0 | a (00) | a (00) | 00
1 | a (00) | b (10) | 11
0 | b (10) | c (01) | 10
1 | b (10) | d (11) | 01
0 | c (01) | a (00) | 11
1 | c (01) | b (10) | 00
0 | d (11) | c (01) | 01
1 | d (11) | d (11) | 10
256. CODE TRELLIS
• The nodes on the left side denote the four
possible current states and those on the right
represent the next states.
• A solid transition line represents input m = 0
and a broken line represents input m = 1.
• Along each transition line, the output x1x2
produced during the transition is indicated.
• For example, let the encoder be in the current state
‘a’. If the input m = 0, the next state will be ‘a’,
with the output x1x2 = 00.
• Thus the code trellis is a compact representation
of the code tree.
257. STATE DIAGRAM
• If the current and next states of the encoder
are combined, the encoder can be represented by
a state diagram.
• For example, let the encoder be in state ’a’. If the input
bit m = 0, then the next state is the same, i.e., ‘a’,
with the output x1x2 = 00. This is shown as a
self-loop at node ‘a’ in the state diagram.
• If the input m = 1, then the state diagram shows the
next state as ‘b’, with output x1x2 = 11.
259. SOLVED EXAMPLE
Example: A code-rate 1/3 convolutional
encoder has generating vectors
g1 = (1 0 0), g2 = (1 1 1) and g3 = (1 0 1)
(i) Sketch the encoder configuration.
(ii) Draw the code tree, state diagram, and
trellis diagram.
(iii) If the input message sequence is 10110,
determine the output sequence of the
encoder.
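Part (iii) can be checked with a sketch that applies each generating vector to the shift-register contents. The tap convention assumed here is register = [m, m1, m2] with the newest bit first and the register initially zero:

```python
def conv_encode_r13(msg, gens):
    """Rate-1/3 encoder: each output bit is the mod-2 dot product of a
    generating vector with the register [m, m1, m2] (newest bit first)."""
    reg = [0, 0, 0]
    out = []
    for m in msg:
        reg = [m, reg[0], reg[1]]       # shift the new bit in
        out.append(tuple(sum(g * r for g, r in zip(gen, reg)) % 2
                         for gen in gens))
    return out

gens = [(1, 0, 0), (1, 1, 1), (1, 0, 1)]    # g1, g2, g3
for triple in conv_encode_r13([1, 0, 1, 1, 0], gens):
    print(*triple)
```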