2.
Communication System
The purpose of a Communication System is to transport an information
bearing signal from a source to a user destination via a communication
channel.
ELEMENTS OF DIGITAL COMMUNICATION SYSTEMS:
1. Analog Information Sources.
2. Digital Information Sources.
(i). Analog Information Sources → Microphone actuated by a speech, TV
Camera scanning a scene, continuous amplitude signals.
(ii). Digital Information Sources → These are teletype or the numerical
output of computer which consists of a sequence of discrete symbols or
letters.
DCT Notes By, Er. Swapnil Kaware
3.
(i). Source Coding: Code data to more efficiently represent
the information.
*Reduces “size” of data.
*Analog → Encode analog source data into a binary format.
*Digital → Reduce the “size” of digital source data.
(ii). Channel Coding: Code data for transmission over a
noisy communication channel.
*Increases “size” of data.
*Digital → Add redundancy to identify and correct errors.
*Analog → Represent digital values by analog signals.
Communication System
4.
Communication System
5.
Coding in communication system
• In the engineering sense, coding can be classified into four
areas:
• Encryption: to encrypt information for security purpose.
• Data Compression: to reduce space for the data stream.
• Data Translation: to change the form of representation of
the information so that it can be transmitted over a
communication channel.
• Error Control: to encode a signal so that error occurred
can be detected and possibly corrected.
6.
Source Coding
(i). The process by which information symbols are mapped to alphabetical symbols
is called source coding.
(ii). The mapping is generally performed in sequences or groups of information and
alphabetical symbols.
(iii). It must be performed in such a manner that it guarantees the exact recovery
of the information symbols from the alphabetical symbols; otherwise it will
defeat the basic purpose of source coding.
(iv). The source coding is called lossless compression if the information symbols are
exactly recovered from the alphabetical symbols otherwise it is called lossy
compression.
(v). Source coding is also known as the compression or bit-rate reduction process.
7.
(vi). It is the process of removing redundancy from the
source symbols, which essentially reduces data size.
(vii). Source coding is a vital part of any communication
system as it helps to use disk space and transmission
bandwidth efficiently.
(viii). The source coding can either be lossy or lossless.
(ix). In case of lossless encoding, error free
reconstruction of source symbols is possible,
whereas, exact reconstruction of source symbols is
not possible in case of lossy encoding.
Source Coding
8.
• Suppose a word ‘Zebra’ is going to be sent out. Before this
information can be transmitted to the channel, it is first
translated into a stream of bits (‘0’ and ‘1’).
• The process is called source coding. There are many
commonly used ways to translate that.
• For example, if the ASCII code is used, each letter is
represented by a 7-bit pattern, the so-called code word.
• The letters ‘Z’, ‘e’, ‘b’, ‘r’, ‘a’ will be encoded as
‘1011010’, ‘1100101’, ‘1100010’, ‘1110010’, ‘1100001’.
Source Coding
9.
Source Encoder / Decoder
Source Encoder ( or Source Coder):
(i). It converts the input i.e. symbol sequence into a binary sequence of 0’s and 1’s by
assigning code words to the symbols in the input sequence.
(ii). For e.g.: If a source set has a hundred symbols, then the number of bits
used to represent each symbol will be 7, because 2⁷ = 128 unique combinations
are available (while 2⁶ = 64 would be too few).
(iii). The important parameters of a source encoder are block size, code word
lengths, average data rate and the efficiency of the coder (i.e. actual output data
rate compared to the minimum achievable rate).
10.
Source Decoder:
(i). Source decoder converts the binary output of the channel
decoder into a symbol sequence.
(ii). The decoder for a system using fixed-length code words is quite
simple, but the decoder for a system using variable-length code
words will be very complex.
(iii). Aim of the source coding is to remove the redundancy in the
transmitting information, so that bandwidth required for
transmission is minimized.
(iv). Code words are assigned based on the symbol probabilities:
the higher the probability, the shorter the code word.
Ex: Huffman coding.
Source Encoder / Decoder
11.
Huffman coding.
• Rules:
• Order the symbols (‘Z’, ‘e’, ‘b’, ‘r’, ‘a’) by decreasing probability and
denote them as S1 to Sn (n = 5 for this case).
• Combine the two symbols (Sn, Sn−1) having the lowest probabilities.
• Assign ‘1’ as the last bit of Sn−1’s code and ‘0’ as the last bit of Sn’s code.
• Form a new source alphabet of (n − 1) symbols by combining Sn−1 and Sn
into a new symbol S′n−1 with probability P′n−1 = Pn−1 + Pn.
• Repeat the above steps until the final source alphabet has only one
symbol with probability equal to 1.
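The merging rule above can be sketched in Python. The symbol probabilities below are hypothetical, since the slides give no values for ‘Z’, ‘e’, ‘b’, ‘r’, ‘a’:

```python
import heapq

def huffman_codes(probs):
    """Huffman coding per the rules above: repeatedly merge the two
    least-probable symbols, prefixing '0' to the lowest group's codes
    and '1' to the second-lowest's, until one symbol of probability 1
    remains."""
    heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # lowest probability (Sn)
        p2, _, c2 = heapq.heappop(heap)   # second lowest (Sn-1)
        merged = {s: '0' + c for s, c in c1.items()}
        merged.update({s: '1' + c for s, c in c2.items()})
        tie += 1                          # tie-breaker for equal probabilities
        heapq.heappush(heap, (p1 + p2, tie, merged))
    return heap[0][2]
```

With the illustrative probabilities {‘e’: 0.4, ‘a’: 0.3, ‘r’: 0.15, ‘b’: 0.1, ‘Z’: 0.05}, the most probable symbol gets a 1-bit code and the least probable a 4-bit code, and the resulting code is prefix-free.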
13.
• The Channel Encoder will add bits to the message bits to be transmitted
systematically.
• After passing through the channel, the Channel decoder will detect and correct
the errors.
• A simple example is to send ‘000’ (‘111’ correspondingly) instead of sending
only one ‘0’ (‘1’ correspondingly) to the channel.
• Due to noise in the channel, the received bits may become ‘001’. But
only ‘000’ or ‘111’ could have been sent.
• By the majority-logic decoding scheme, it will be decoded as ‘000’ and
therefore the message must have been a ‘0’.
• In general, the channel encoder divides the input message bits into blocks of
k message bits and replaces each k-bit message block with an n-bit code word
by introducing (n − k) check bits into each message block.
Channel Encoder / Decoder:
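The 3-repetition scheme described above can be sketched directly (function names are illustrative):

```python
def repeat3_encode(bits):
    """Encode each message bit as three copies ('0' -> '000', '1' -> '111')."""
    return ''.join(b * 3 for b in bits)

def repeat3_decode(received):
    """Majority-logic decoding: each 3-bit block decodes to its majority bit."""
    out = []
    for i in range(0, len(received), 3):
        block = received[i:i + 3]
        out.append('1' if block.count('1') >= 2 else '0')
    return ''.join(out)
```

A received block ‘001’ decodes to ‘0’, as in the example: one channel error per block is corrected.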
14.
Channel Encoder / Decoder:
There are two methods of channel coding.
1. Block Coding: (i). The encoder takes a block of ‘k’ information
bits from the source encoder and adds ‘r’ error control bits,
where ‘r’ is dependent on ‘k’ and error control capabilities
desired.
2. Convolutional Coding: (i). The information-bearing message
stream is encoded in a continuous fashion by continuously
interleaving information bits and error control bits.
15.
Block Coding:
• Denoted by (n, k) a block code is a collection of code words
each with length n, k information bits and r = n – k check bits.
• It is linear if it is closed under addition mod 2.
• A Generator Matrix G (of order k × n) is used to generate the
code: G = [ Ik | P ],
• where Ik is the k × k identity matrix and P is a k × (n – k) matrix
selected to give desirable properties to the code produced.
• For example, denote D to be the message, G to be the
generator matrix, C to be code word.
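The systematic encoding C = D·G (mod 2) can be sketched as follows; the (6, 3) parity matrix P below is a made-up example, not one from the slides:

```python
def encode_block(d, p):
    """Systematic block encoding: codeword = [message | parity], mod 2.

    d : list of k message bits; p : k x (n - k) parity matrix.
    Equivalent to C = D * G with G = [I_k | P].
    """
    k = len(d)
    r = len(p[0])
    parity = [sum(d[i] * p[i][j] for i in range(k)) % 2 for j in range(r)]
    return d + parity
```

Because the arithmetic is mod 2, the code is linear: the bitwise sum of any two codewords is again a codeword.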
16.
Average Mutual Information & Entropy
Information theory answers two fundamental
questions in communication theory:
1. What is the ultimate lossless data compression?
2. What is the ultimate transmission rate of reliable
communication?
Information theory gives insight into the problems of
statistical inference, computer science, investments
and many other fields.
17.
• Entropy is a measure of the number of specific ways in
which a system may be arranged, often taken to be a
measure of disorder, or a measure of progressing towards
thermodynamic equilibrium.
• The entropy of an isolated system never decreases, because
isolated systems spontaneously evolve towards
thermodynamic equilibrium, which is the state of maximum
entropy.
• Entropy was originally defined for a thermodynamically
reversible process as dS = dQ/T, where the entropy increment (dS)
equals an incremental reversible transfer of heat into a closed
system (dQ) divided by the uniform thermodynamic temperature (T)
of that system.
Entropy
18.
• The above definition is sometimes called the macroscopic
definition of entropy because it can be used without regard
to any microscopic picture of the contents of a system.
• In thermodynamics, entropy has been found to be more
generally useful and it has several other formulations.
• Entropy was discovered when it was noticed to be a
quantity that behaves as a function of state.
• Entropy is an extensive property, but it is often given as
an intensive property, the specific entropy, expressed as entropy
per unit mass or entropy per mole.
Entropy
19.
Entropy
(i). The entropy of a random variable is a function which attempts to
characterize the “unpredictability” of a random variable.
(ii). Consider a random variable X representing the number that
comes up on a roulette wheel and a random variable Y
representing the number that comes up on a fair 6-sided die.
(iii). The entropy of X is greater than the entropy of Y. In addition to
the numbers 1 through 6, the values on the roulette wheel can
take on the values 7 through 36.
(iv). In some sense, X is less predictable.
20.
(iv). But entropy is not just about the number of possible
outcomes. It is also about their frequency.
(v). For example, let Z be the outcome of a weighted six-
sided die that comes up 90% of the time as a “2”. Z has
lower entropy than Y, the fair 6-sided die.
(vi). The weighted die is less unpredictable, in some sense.
But entropy is not a vague concept.
(vii). It has a precise mathematical definition. In particular, if
a random variable X takes on values in a set X = {x1, x2, ...,
xn}, its entropy is defined in terms of the probabilities of those values.
Entropy
21.
The most fundamental concept of information
theory is the entropy. The entropy of a random
variable X is defined by
H(X) = − Σx p(x) log2 p(x).
The entropy is non-negative. It is zero when the
random variable is “certain”, i.e., its value can be predicted with
probability 1.
Entropy
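The roulette, fair-die and weighted-die examples from the previous slides can be checked numerically with this definition (a small sketch):

```python
import math

def entropy(probs):
    """H(X) = -sum p(x) * log2 p(x), skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair_die = [1/6] * 6                              # Y: fair 6-sided die
roulette = [1/36] * 36                            # X: 36 equally likely numbers
weighted = [0.02, 0.9, 0.02, 0.02, 0.02, 0.02]    # Z: "2" comes up 90% of the time
```

For equiprobable outcomes the entropy is log2 of their count, so the roulette wheel beats the fair die, the weighted die falls below it, and a certain outcome gives zero.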
22.
Joint Entropy
(i). Joint entropy is the entropy of a joint probability
distribution, or of a multi-valued random variable.
(ii). For example, one might wish to know the joint
entropy of a distribution of people defined by hair
color C and eye color E, where C can take on 4
different values from a set C and E can take on 3
values from a set E.
(iii). If P(E, C) defines the joint probability distribution of
hair color and eye color, its entropy H(E, C) is the joint entropy.
23.
Joint and Conditional Entropy
For two random variables X and Y, the joint
entropy is defined by
H(X, Y) = − Σx Σy p(x, y) log2 p(x, y).
The conditional entropy is defined by
H(Y | X) = − Σx Σy p(x, y) log2 p(y | x).
24.
Mutual information
(i). Mutual information is a quantity that measures a relationship
between two random variables that are sampled simultaneously.
(ii). In particular, it measures how much information is
communicated, on average, in one random variable about
another.
(iii). Intuitively, one might ask, how much does one random variable
tell me about another?
• For example, suppose X represents the roll of a fair 6-sided die,
and Y represents whether the roll is even (0 if even, 1 if odd).
Clearly, the value of Y tells us something about the value of X and
vice versa.
• That is, these variables share mutual information.
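The die-and-parity example can be computed directly from the definition I(X; Y) = Σ p(x, y) log2[p(x, y)/(p(x)p(y))] (a sketch):

```python
import math

def mutual_information(joint):
    """I(X;Y) from a joint distribution given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p          # marginal p(x)
        py[y] = py.get(y, 0) + p          # marginal p(y)
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# X: fair die roll; Y: 0 if the roll is even, 1 if odd.
die_parity = {(x, x % 2): 1/6 for x in range(1, 7)}
# X, Z: two independent fair dice.
two_dice = {(x, z): 1/36 for x in range(1, 7) for z in range(1, 7)}
```

Knowing the parity resolves exactly one bit of the roll, so I(X; Y) = 1 bit, while two independent dice share zero mutual information.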
25.
(iv). On the other hand, if X represents the roll of one
fair die, and Z represents the roll of another fair
die, then X and Z share no mutual information.
(v). The roll of one die does not contain any
information about the outcome of the other die.
(vi). An important theorem from information theory
says that the mutual information between two
variables is 0 if and only if the two variables are
statistically independent.
Mutual information
26.
Mutual Information
(i). The mutual information of X and Y is defined by
I(X; Y) = Σx Σy p(x, y) log2 [ p(x, y) / ( p(x) p(y) ) ].
(ii). Note that the mutual information is symmetric
in its arguments. That is, I(X; Y) = I(Y; X).
(iii). Mutual information is also non-negative, as we
will show in a minute.
27.
Mutual Information and Entropy
It follows from the definitions of entropy and mutual
information that
I(X; Y) = H(X) − H(X | Y) = H(Y) − H(Y | X).
The mutual information is the reduction in the entropy
of X when Y is known.
28.
Conditional Mutual Information
The conditional mutual information of X and Y
given Z is defined by
I(X; Y | Z) = H(X | Z) − H(X | Y, Z).
29.
Chain Rules of Mutual Information
It can be shown from the definitions that the
mutual information of (X, Y) and Z is the sum of
the mutual information of X and Z and the
conditional mutual information of Y and Z given X.
That is,
I(X, Y; Z) = I(X; Z) + I(Y; Z | X).
30.
Chain Rules of Entropy
From the definition of entropy, it can be shown that
for two random variables X and Y, the joint entropy
is the sum of the entropy of X and the conditional
entropy of Y given X:
H(X, Y) = H(X) + H(Y | X).
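The chain rule H(X, Y) = H(X) + H(Y | X) can be verified numerically on any joint distribution; the distribution below is arbitrary:

```python
import math

def H(probs):
    """Entropy of a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def chain_rule_holds(joint):
    """Check H(X,Y) == H(X) + H(Y|X) for a joint distribution {(x, y): p}."""
    px = {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
    joint_entropy = H(list(joint.values()))
    # H(Y|X) = -sum p(x,y) log2 p(y|x), with p(y|x) = p(x,y)/p(x)
    cond = -sum(p * math.log2(p / px[x]) for (x, y), p in joint.items() if p > 0)
    return abs(joint_entropy - (H(list(px.values())) + cond)) < 1e-12
```

Since this is an identity, it holds (up to floating-point error) for any valid joint distribution you try.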
31.
Discrete Memoryless Source
(i). Suppose that a probabilistic experiment involves the observation of the output
emitted by a discrete source during every unit of time (signaling interval).
(ii). The source output is modeled as a discrete random variable S, which takes on
symbols from a fixed finite alphabet.
(iii). We assume that the symbols emitted by the source during successive signaling
intervals are statistically independent.
(iv). A source having the properties just described is called a discrete memoryless
source.
(v). If the source symbols occur with different probabilities, and the probability ‘pk’
is low, then there is more surprise, and therefore information, when symbol ‘sk’
is emitted by the source.
32.
Lempel Ziv algorithm
An innovative, radically different method was introduced in 1977 by Abraham
Lempel and Jacob Ziv.
This technique (called Lempel–Ziv) actually consists of two considerably
different algorithms, LZ77 and LZ78. LZ78 inserts one- or multi-character, non-
overlapping, distinct patterns of the message to be encoded in a dictionary.
The multi-character patterns are of the form C0 C1 … Cn−1 Cn. The prefix of
a pattern consists of all the pattern characters except the last: C0 C1 … Cn−1.
33.
Lempel–Ziv algorithm
(i). Encode a string by finding the longest match
anywhere within a window of past symbols, and
represent the string by a pointer to the location of the
match within the window and the length of the
match.
(ii). This algorithm is simple to implement and has
become popular as one of the early standard
algorithms for file compression on computers because
of its speed and efficiency.
(iii). It is also used for data compression in high-speed
modems.
34.
Lempel–Ziv algorithm
As mentioned earlier, static coding schemes require some
knowledge about the data before encoding takes place.
Universal coding schemes, like LZW, do not require advance
knowledge and can build such knowledge on the fly.
LZW is the foremost technique for general purpose data
compression due to its simplicity and versatility.
It is the basis of many PC utilities that claim to “double the
capacity of your hard drive”.
LZW compression uses a code table, with 4096 as a common
choice for the number of table entries.
35.
Codes 0–255 in the code table are always assigned to
represent single bytes from the input file.
When encoding begins the code table contains only the first
256 entries, with the remainder of the table being blanks.
Compression is achieved by using codes 256 through 4095
to represent sequences of bytes.
As the encoding continues, LZW identifies repeated
sequences in the data, and adds them to the code table.
Decoding is achieved by taking each code from the
compressed file, and translating it through the code table to
find what character or characters it represents.
Lempel–Ziv algorithm
36.
Initialize table with single-character strings
P = first input character
WHILE not end of input stream
    C = next input character
    IF P + C is in the string table
        P = P + C
    ELSE
        output the code for P
        add P + C to the string table
        P = C
END WHILE
output code for P
Lempel–Ziv algorithm
37.
Example 1: Compression using LZW
Example 1: Use the LZW algorithm to compress the
string
BABAABAAA
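The compression pseudocode above can be sketched in Python and applied to this string:

```python
def lzw_compress(s):
    """LZW compression following the pseudocode on the earlier slide.

    The table starts with the 256 single-byte strings (codes 0-255);
    new multi-character strings receive codes 256, 257, ...
    """
    table = {chr(i): i for i in range(256)}
    next_code = 256
    p = s[0]
    out = []
    for c in s[1:]:
        if p + c in table:
            p = p + c                    # extend the current match
        else:
            out.append(table[p])         # output the code for P
            table[p + c] = next_code     # add P + C to the string table
            next_code += 1
            p = c
    out.append(table[p])                 # output code for the final P
    return out
```

For ‘BABAABAAA’ this yields the code sequence <66><65><256><257><65><260>, the output used in Example 2.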
44.
LZW Decompression
The LZW decompressor creates the same string table
during decompression.
It starts with the first 256 table entries initialized to single
characters.
The string table is updated for each character in the input
stream, except the first one.
Decoding is achieved by reading codes and translating them
through the code table being built.
45.
LZW Decompression Algorithm
Initialize table with single-character strings
OLD = first input code
output translation of OLD
WHILE not end of input stream
    NEW = next input code
    IF NEW is not in the string table
        S = translation of OLD
        S = S + C
    ELSE
        S = translation of NEW
    output S
    C = first character of S
    add translation of OLD + C to the string table
    OLD = NEW
END WHILE
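The decompression algorithm, including the special case of a code not yet in the table, can be sketched as:

```python
def lzw_decompress(codes):
    """LZW decompression, rebuilding the same string table as the encoder."""
    table = {i: chr(i) for i in range(256)}
    next_code = 256
    old = codes[0]
    out = table[old]
    for new in codes[1:]:
        if new in table:
            s = table[new]
        else:
            # Code not yet in the table: it must be translation(OLD)
            # plus its own first character.
            s = table[old] + table[old][0]
        out += s
        table[next_code] = table[old] + s[0]   # add translation(OLD) + C
        next_code += 1
        old = new
    return out
```

Decompressing the output of Example 1 recovers ‘BABAABAAA’.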
46.
Example 2: LZW Decompression
Example 2: Use LZW to decompress the output sequence of
Example 1:
<66><65><256><257><65><260>.
47.
Example 2: LZW Decompression Steps
<66><65><256><257><65><260>
Step 0: read <66>; output ‘B’. OLD = 66.
Step 1: NEW = 65, in table → S = ‘A’; output ‘A’; C = ‘A’; add BA = 256. OLD = 65.
Step 2: NEW = 256, in table → S = ‘BA’; output ‘BA’; C = ‘B’; add AB = 257. OLD = 256.
Step 3: NEW = 257, in table → S = ‘AB’; output ‘AB’; C = ‘A’; add BAA = 258. OLD = 257.
Step 4: NEW = 65, in table → S = ‘A’; output ‘A’; C = ‘A’; add ABA = 259. OLD = 65.
Step 5: NEW = 260, not yet in table → S = translation of OLD plus its first character = ‘AA’; output ‘AA’; add AA = 260. OLD = 260.
String table built: BA = 256, AB = 257, BAA = 258, ABA = 259, AA = 260.
Decoded output: B, A, BA, AB, A, AA → ‘BABAABAAA’, matching the input of Example 1.
52.
LZW Summary
This algorithm compresses repetitive sequences of data well.
Since the code words are 12 bits long, any single encoded character
expands the data size rather than reducing it.
In this example, the 72-bit input (9 characters × 8 bits) is represented
with 72 bits of output (6 code words × 12 bits). After
a reasonable string table is built, compression improves
dramatically.
Advantages of LZW over Huffman:
LZW requires no prior information about the input data stream.
LZW can compress the input stream in one single pass.
Another advantage of LZW is its simplicity, allowing fast
execution.
53.
LZW: Limitations
What happens when the dictionary gets too large (i.e., when all the 4096
locations have been used)?
Here are some options usually implemented:
Simply forget about adding any more entries and use the table as is.
Throw the dictionary away when it reaches a certain size.
Throw the dictionary away when it is no longer effective at compression.
Clear entries 256–4095 and start building the dictionary again.
Some clever schemes rebuild a string table from the last N input
characters.
54.
Coding of analog sources
Three types of analog source encoding:
• Temporal waveform coding: designed to digitally represent the time-domain characteristics of the signal.
(i). Pulse-code modulation (PCM).
(ii). Differential pulse-code modulation (DPCM).
(iii). Delta modulation (DM).
• Spectral waveform coding: the signal waveform is subdivided into different frequency
bands, and either the time waveform in each band or its spectral characteristics are encoded.
• Each subband can be encoded as a time-domain waveform, or each subband can be encoded as a
frequency-domain waveform.
• Model-based coding: based on a mathematical model of the source (the source is modeled as
a linear system that results in the observed source output).
• Instead of transmitting samples of the source, the parameters of the linear system are transmitted with
an appropriate excitation.
• If the number of parameters is sufficiently small, this provides large compression.
55.
Rate–Distortion Theory
(i). In rate–distortion theory, the rate is usually understood as the number
of bits per data sample to be stored or transmitted.
(ii). In the simplest case (which is actually used in most cases), the
distortion is defined as the expected value of the square of the
difference between the input and output signals (i.e., the mean
squared error).
(iii). However, most lossy compression techniques
operate on data that will be perceived by human consumers (listening
to music, watching pictures and video).
(iv). The distortion measure should preferably be modeled on
human perception and perhaps aesthetics: much like the use
of probability in lossless compression, distortion measures can
ultimately be identified with loss functions as used in
Bayesian estimation and decision theory.
56.
(v). In audio compression, perceptual models (and
therefore perceptual distortion measures) are
relatively well developed and routinely used in
compression techniques such as MP3 or Vorbis, but
are often not easy to include in rate–distortion
theory.
(vi). In image and video compression, the human
perception models are less well developed and
inclusion is mostly limited to
the JPEG and MPEG weighting
(quantization, normalization) matrix.
Rate–Distortion Theory
57.
Rate Distortion Function
• RATE :
• It is the number of bits per data sample to be stored or transmitted.
• DISTORTION:
• It is defined as the expected value of a distance measure between the
input and output, for example:
• 1. Hamming distance.
• 2. Squared error.
• Rate distortion theory is the branch of information theory addressing
the problem of determining the minimal amount of entropy or
information that should be communicated over a channel such that the
source can be reconstructed at the receiver with given distortion.
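The two distortion measures named above can be sketched as per-symbol averages:

```python
def hamming_distortion(x, y):
    """Average Hamming distance: fraction of positions where x and y differ."""
    return sum(a != b for a, b in zip(x, y)) / len(x)

def squared_error_distortion(x, y):
    """Mean squared error between input and output sequences."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
```

Hamming distortion suits discrete sources (a bit is either right or wrong), while squared error is the usual choice for sampled waveforms.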
58.
Rate Distortion Function
59.
Variance of Input and Output Image
(Example of distortion)
60.
Quantization
• Quantization, in mathematics and digital signal processing, is the
process of mapping a large set of input values to a smaller set –
such as rounding values to some unit of precision.
• A device or algorithmic function that performs quantization is
called a quantizer. The round-off error introduced by quantization
is referred to as quantization error.
• In analogtodigital conversion, the difference between the actual
analog value and quantized digital value is called quantization
error or quantization distortion.
• This error is either due to rounding or truncation.
61.
• The error signal is sometimes considered as an additional
random signal called quantization noise.
• Quantization is involved to some degree in nearly all digital
signal processing, as the process of representing a signal in
digital form ordinarily involves rounding.
• Quantization also forms the core of essentially all lossy
compression algorithms.
• A quantizer describes the relation between the encoder
input values and the decoder output values.
Quantization
63.
Quantizer
• The design of the quantizer has a significant impact on the
amount of compression obtained and loss incurred in a lossy
compression scheme.
• Quantizer: A quantizer is nothing but the combination of an encoder
mapping and a decoder mapping.
(i). Encoder Mapping:
• – The encoder divides the range of the source into a number of
intervals.
• – Each interval is represented by a distinct codeword.
(ii). Decoder Mapping:
• – For each received codeword, the decoder generates a
reconstruction value.
64.
Components of a Quantizer
1. Encoder mapping: Divides the range of values that the
source generates into a number of intervals.
2. Each interval is then mapped to a codeword. It is a
many-to-one, irreversible mapping.
3. The code word only identifies the interval, not the
original value.
4. If the source or sample value comes from an analog
source, the quantizer is called an A/D converter.
67.
Components of a Quantizer
Decoder: Given the code word, the decoder gives an
estimated value that the source might have generated.
Usually, it is the midpoint of the interval but a more
accurate estimate will depend on the distribution of the
values in the interval.
In estimating the value, the decoder might generate some
errors.
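The encoder/decoder mappings described on these slides can be sketched as a uniform interval quantizer with midpoint reconstruction; the range [lo, hi) and interval count are illustrative parameters, not values from the notes:

```python
def encode_sample(x, lo, hi, n_intervals):
    """Encoder mapping: map x in [lo, hi) to the index of its interval."""
    step = (hi - lo) / n_intervals
    index = int((x - lo) // step)
    return min(max(index, 0), n_intervals - 1)   # clamp out-of-range samples

def decode_index(index, lo, hi, n_intervals):
    """Decoder mapping: reconstruct as the midpoint of the indexed interval."""
    step = (hi - lo) / n_intervals
    return lo + (index + 0.5) * step
```

The index alone identifies the interval, not the original value, so the reconstruction error is at most half the interval width.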
71.
Scalar Quantization
• Many of the fundamental ideas of quantization and
compression are easily introduced in the simple context
of scalar quantization.
• An example: any real number x can be rounded off to the
nearest integer, say
q(x) = round(x)
• Maps the real line R (a continuous space) into a discrete
space.
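The rounding quantizer q(x) = round(x) generalizes to an arbitrary step size Δ; a sketch (math.floor(x/Δ + 0.5) is used to avoid Python's round-half-to-even behavior):

```python
import math

def quantize(x, delta=1.0):
    """Uniform scalar quantizer: round x to the nearest multiple of delta."""
    return delta * math.floor(x / delta + 0.5)
```

With delta = 1.0 this is exactly q(x) = round(x); the quantization error never exceeds delta / 2.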
72.
Vector Quantization
• Vector Quantization Rule: Vector quantization (VQ) of X
may be viewed as the classification of the outcomes of X
into a discrete number of sets or cells in N-space.
• Each cell is represented by a vector output Yj.
• Given a distance measure d(x, y), we have
• VQ output: Q(X) = Yj iff d(X, Yj) < d(X, Yi), ∀ i ≠ j.
• Quantization region: Vj = { X : d(X, Yj) < d(X, Yi), ∀ i ≠ j }.
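The nearest-neighbour rule Q(X) = Yj can be sketched with squared Euclidean distance; the codebook values below are illustrative:

```python
def vq_encode(x, codebook):
    """Return index j of the codevector Yj closest to x (squared Euclidean)."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(codebook)), key=lambda j: d2(x, codebook[j]))

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Each input vector is classified into the cell Vj of its nearest codevector, and only the index j needs to be transmitted.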
74.
The Schematic of a Vector Quantizer
75.
BCH Codes
• BCH (Bose–Chaudhuri–Hocquenghem) codes form
a large class of multiple random error-correcting
codes.
• They were first discovered by A. Hocquenghem in
1959 and independently by R. C. Bose and D. K. Ray-
Chaudhuri in 1960.
• BCH codes are cyclic codes. Binary BCH codes are the most
popular.
• The first decoding algorithm for binary BCH codes was devised by Peterson in 1960.
76.
BCH codes
• In coding theory, the BCH codes form a class of cyclic error-
correcting codes that are constructed using finite fields.
• BCH codes were invented in 1959 by French
mathematician Alexis Hocquenghem, and independently in
1960 by Raj Bose and D. K. Ray-Chaudhuri.
• The acronym BCH comprises the initials of these inventors'
names.
• One of the key features of BCH codes is that during code
design, there is a precise control over the number of symbol
errors correctable by the code.
77.
• It is possible to design binary BCH codes that can correct
multiple bit errors.
• Another advantage of BCH codes is the ease with which
they can be decoded, namely, via an algebraic method
known as syndrome decoding.
• This simplifies the design of the decoder for these codes,
using small low-power electronic hardware.
• BCH codes are used in applications like satellite
communications, compact disc players, DVDs, disk
drives, solid-state drives and two-dimensional bar codes.
BCH codes
78.
Parameters of BCH codes:
• Block length: n = 2^m − 1.
• Message size (bits): k ≥ n − mt.
• Minimum distance: d_min ≥ 2t + 1.
• The code is a t-error-
correcting code.
• For example, for m = 6, t = 3: n = 63, n − k ≤ 18 (so k ≥ 45),
and d_min ≥ 7. This is a triple-error-
correcting (63, 45) BCH code.
79.
Generator Polynomial of
BCH Codes
• Let α be a primitive element in GF(2^m).
For 1 ≤ i ≤ t, let m_{2i−1}(x) be the
minimal polynomial of the field
element α^{2i−1}.
• The generator polynomial g(x) of a t-
error-correcting primitive BCH code of
length 2^m − 1 is given by
g(x) = LCM{ m_1(x), m_3(x), …, m_{2t−1}(x) }.
(LCM: Least Common Multiple.)
• Note that the degree of g(x) is at most mt.
80.
BCH Encoding
• Let m(x) be the message
polynomial to be encoded:
m(x) = m_0 + m_1 x + … + m_{k−1} x^{k−1},
where m_i ∈ GF(2).
• Dividing x^{n−k} m(x) by g(x), we have
x^{n−k} m(x) = q(x) g(x) + p(x),
where p(x) is the remainder:
p(x) = p_0 + p_1 x + … + p_{n−k−1} x^{n−k−1}.
• Then
u(x) = p(x) + x^{n−k} m(x)
is the codeword polynomial for the message m(x).
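The division step u(x) = p(x) + x^(n−k) m(x) can be sketched with polynomials stored as integer bit masks. The generator g(x) = x³ + x + 1 below is that of the small (7, 4) cyclic Hamming code, used here only as an illustration; a real BCH generator comes from the LCM construction on the previous slide:

```python
def gf2_mod(a, g):
    """Remainder of polynomial a(x) divided by g(x) over GF(2).

    Polynomials are integers: bit i is the coefficient of x^i.
    """
    glen = g.bit_length()
    while a.bit_length() >= glen:
        a ^= g << (a.bit_length() - glen)
    return a

def systematic_encode(m, g, n, k):
    """u(x) = p(x) + x^(n-k) m(x), where p(x) = x^(n-k) m(x) mod g(x)."""
    shifted = m << (n - k)               # x^(n-k) m(x)
    return shifted ^ gf2_mod(shifted, g)
```

Because the remainder is appended, every codeword polynomial is divisible by g(x), which is what the decoder's syndrome computation exploits.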
81.
Decoding of BCH codes
• Consider a BCH code with n = 2^m − 1
and generator polynomial g(x).
• Suppose a code polynomial
v(x) = v_0 + v_1 x + … + v_{n−1} x^{n−1}
is transmitted.
• Let
r(x) = r_0 + r_1 x + … + r_{n−1} x^{n−1}
be the received polynomial.
82.
Reed–Solomon (RS) codes
• In coding theory, Reed–Solomon (RS) codes are non-
binary cyclic error-correcting codes invented by Irving S.
Reed and Gustave Solomon.
• They described a systematic way of building codes that could
detect and correct multiple random symbol errors.
• By adding t check symbols to the data, an RS code can detect
any combination of up to t erroneous symbols, or correct up to
⌊t/2⌋ symbols.
• As an erasure code, it can correct up to t known erasures, or it
can detect and correct combinations of errors and erasures.
83.
• Furthermore, RS codes are suitable as multiple-burst bit-error-
correcting codes, since a sequence of b + 1 consecutive bit
errors can affect at most two symbols of size b.
• The choice of t is up to the designer of the code, and may be
selected within wide limits.
• In Reed–Solomon coding, source symbols are viewed as
coefficients of a polynomial p(x) over a finite field.
• The original idea was to create n code symbols from k source
symbols by oversampling p(x) at n > k distinct points, transmit
the sampled points, and use interpolation techniques at the
receiver to recover the original message.
Reed–Solomon (RS) codes
84.
• That is not how RS codes are used today. Instead, RS codes are viewed
as cyclic BCH codes, where encoding symbols are derived from the
coefficients of a polynomial constructed by multiplying p(x) with a
cyclic generator polynomial.
• This gives rise to efficient decoding algorithms (described below).
• Reed–Solomon codes have since found important applications from
deepspace communication to consumer electronics.
• They are prominently used in consumer electronics such
as CDs, DVDs, Blu-ray Discs, in data transmission technologies such
as DSL and WiMAX, in broadcast systems such as DVB and ATSC, and in
computer applications such as RAID 6 systems.
Reed–Solomon (RS) codes
85.
Motivation for RS Soft Decision
Decoder
A hard-decision decoder does not fully exploit the decoding capability.
Efficient soft-decision decoding of RS codes remains an open problem.
RS Coded Turbo Equalization
System
[Block diagram: source → RS encoder → interleaver (Π) → PR encoder →
channel with AWGN → channel equalizer ↔ RS decoder, the equalizer and
decoder exchanging a priori and extrinsic information through the
interleaver (Π) and deinterleaver (Π⁻¹); the RS decoder’s hard
decisions go to the sink.]
A soft-input soft-output (SISO) algorithm is favorable.
86.
Applications
• Data storage
• Bar codes
• Data transmission
• Space transmission
87.
Reed Muller Codes
• Reed Muller codes are some of the oldest error correcting codes.
• Error correcting codes are very useful in sending information over long
distances or through channels where errors might occur in the message.
• They have become more prevalent as telecommunications have
expanded and developed a use for codes that can selfcorrect.
• Reed Muller codes were invented in 1954 by D. E. Muller and I. S. Reed.
• In 1972, a Reed Muller code was used by Mariner 9 to transmit black and white
photographs of Mars.
• Reed Muller codes are relatively easy to decode.
88.
Decoding Reed Muller:
Decoding Reed Muller encoded messages is more complex than
encoding them.
The theory behind encoding and decoding is based on the distance
between vectors.
The distance between any two vectors is the number of places in the
two vectors that have different values.
The basis for Reed Muller decoding is the assumption that the closest
codeword in R(r, m) to the received message is the original encoded
message.
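This minimum-distance rule can be sketched in Python (the helper names are hypothetical, not from the notes):

```python
def hamming_distance(u, v):
    """Number of positions in which two equal-length vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def nearest_codeword(received, codewords):
    """Minimum-distance decoding: assume the closest codeword to the
    received word is the original encoded message."""
    return min(codewords, key=lambda c: hamming_distance(received, c))

# Example: the received word differs from (0,0,0,0) in one place,
# so it decodes to the all-zero codeword.
codewords = [(0, 0, 0, 0), (1, 1, 1, 1)]
print(nearest_codeword((0, 1, 0, 0), codewords))
```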
Reed Muller Codes
89.
• This method of decoding is carried out by an algorithm applied in
successive steps.
• The vector spaces used here consist of strings of
length 2^m, where m is a positive integer, of numbers in
F2 = {0, 1}.
• The code words of a Reed Muller code form a subspace of
such a space.
• Vectors can be manipulated by three main operations:
addition, multiplication, and the dot product.
Reed Muller Codes
90.
• For two vectors x = (x1, x2, …, xn) and y = (y1, y2, …, yn),
addition is defined by x + y = (x1 + y1, x2 + y2, …, xn + yn),
• where each xi or yi is either 1 or 0, and
• 1 + 1 = 0; 0 + 1 = 1; 1 + 0 = 1; 0 + 0 = 0.
• For example, if x and y are defined as x = (10011110)
and y = (11100001), then the sum of x and y is x + y
= (10011110) + (11100001) = (01111111).
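Componentwise mod-2 addition is just XOR; a minimal sketch:

```python
def add_mod2(x, y):
    """Componentwise mod-2 (XOR) addition of two binary vectors."""
    return [a ^ b for a, b in zip(x, y)]

x = [1, 0, 0, 1, 1, 1, 1, 0]   # (10011110)
y = [1, 1, 1, 0, 0, 0, 0, 1]   # (11100001)
print(add_mod2(x, y))          # the sum (01111111) from the example
```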
Reed Muller Codes
92.
• In telecommunication, a convolutional code is a type of error-correcting
code in which each m-bit information symbol (each m-bit
string) to be encoded is transformed into an n-bit symbol, where m/n is
the code rate (n ≥ m).
• The transformation is a function of the last k information symbols,
where k is the constraint length of the code.
• Convolutional codes are used extensively in numerous applications in
order to achieve reliable data transfer, including digital video,
radio, mobile communication, and satellite communication.
• These codes are often implemented in concatenation with a hard-decision
code, particularly Reed–Solomon.
• Prior to turbo codes, such constructions were the most efficient,
coming closest to the Shannon limit.
Convolutional codes
93.
• Convolutional Encoding:
• To convolutionally encode data, start with k memory
registers, each holding 1 input bit. Unless otherwise
specified, all memory registers start with a value of 0.
• The encoder has n modulo-2 adders (a modulo-2 adder
can be implemented with a single Boolean XOR gate,
where the logic is: 0+0 = 0, 0+1 = 1, 1+0 = 1, 1+1 = 0),
and n generator polynomials, one for each adder (see
figure below).
• An input bit m1 is fed into the leftmost register. Using the
generator polynomials and the existing values in the
remaining registers, the encoder outputs n bits.
Convolutional codes
94.
• Now bit shift all register values to the right (m1 moves
to m0, m0 moves to m−1) and wait for the next input bit.
• If there are no remaining input bits, the encoder continues
output until all registers have returned to the zero state.
• The figure below is a rate 1/3 (m/n) encoder with
constraint length (k) of 3. Generator polynomials are G1 =
(1,1,1), G2 = (0,1,1), and G3 = (1,0,1). Therefore, output bits
are calculated (modulo 2) as follows:
• n1 = m1 + m0 + m−1; n2 = m0 + m−1; n3 = m1 + m−1.
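The encoder described above can be sketched as follows (a hypothetical `conv_encode` helper; the generators are the G1, G2, G3 from the slide):

```python
def conv_encode(bits, generators=((1, 1, 1), (0, 1, 1), (1, 0, 1))):
    """Rate-1/3, constraint-length-3 convolutional encoder.
    The registers hold (m1, m0, m-1); each generator taps a subset."""
    state = [0, 0]                  # m0, m-1, initially zero
    out = []
    for m1 in list(bits) + [0, 0]:  # flush so registers return to zero
        regs = [m1] + state
        for g in generators:        # one modulo-2 adder per generator
            out.append(sum(r for r, tap in zip(regs, g) if tap) % 2)
        state = [m1, state[0]]      # shift right: m1 -> m0, m0 -> m-1
    return out

print(conv_encode([1]))   # nine output bits for a single input 1
```

At each step this computes n1 = m1 + m0 + m−1, n2 = m0 + m−1, and n3 = m1 + m−1 modulo 2, matching the generator polynomials above.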
Convolutional codes
96.
Convolutional codes
• Convolutional codes map information to code bits
sequentially by convolving a sequence of information
bits with “generator” sequences.
• A convolutional encoder encodes K information bits to
N>K code bits at one time step.
• Convolutional codes can be regarded as block codes
for which the encoder has a certain structure such
that we can express the encoding operation as
convolution.
97.
• The convolutional code is linear.
• Code bits generated at time step i are affected by information
bits up to M time steps back in time (i − 1, i − 2, …, i − M). M is
the maximal delay of information bits in the encoder.
• Code memory is the (minimal) number of registers needed to construct
an encoding circuit for the code.
• Constraint length is the overall number of information bits
affecting code bits generated at time step i:
constraint length = code memory + K = MK + K = (M + 1)K.
• A convolutional code is systematic if the N code bits generated
at time step i contain the K information bits.
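A quick numeric check of the relation above (the values M = 2, K = 1 are chosen for illustration, matching a rate-1/3 encoder with three registers):

```python
K = 1   # information bits encoded per time step
M = 2   # maximal delay of information bits in the encoder
code_memory = M * K
constraint_length = code_memory + K       # = M*K + K = (M + 1)*K
print(code_memory, constraint_length)     # 2 3
```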
Convolutional codes
98.
Modified State Diagram
A path from state (00) back to (00) is labeled by D^i L^j N^k,
where i is the path's weight, j its length, and k the number of
information 1's on it.
99.
Transfer Function
• The transfer function T(D,L,N) of this code is

T(D,L,N) = D^5 L^3 N / (1 − D L N (1 + L))
100.
• The distance properties and the error rate performance
of a convolutional code can be obtained from its transfer
function.
• Since a convolutional code is linear, the set of Hamming
distances of the code sequences generated up to some
stage in the trellis, measured from the all-zero code sequence, is
the same as the set of distances of the code sequences
with respect to any other code sequence.
• Thus, we assume that the all-zero path is the input to the
encoder.
Transfer Function
102.
Transfer Function
• Performing long division:
T(D,L,N) = D^5L^3N + D^6L^4N^2 + D^6L^5N^2 + D^7L^5N^3 + …
• If interested only in the Hamming distance property of the code,
set N = 1 and L = 1 to get the distance transfer function:
T(D) = D^5 + 2D^6 + 4D^7 + …
• There is one code sequence of weight 5; therefore dfree = 5.
• There are two code sequences of weight 6,
four code sequences of weight 7, and so on.
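The long division above can be checked mechanically by expanding the geometric series T = D^5 L^3 N · Σ_k (D L N (1 + L))^k; a sketch:

```python
from collections import Counter
from math import comb

def distance_spectrum(max_weight=7):
    """Expand T(D,L,N) = D^5 L^3 N / (1 - D L N (1 + L)) and collect the
    coefficient of each power of D after setting L = N = 1."""
    spectrum = Counter()
    k = 0
    while 5 + k <= max_weight:
        # (D L N (1 + L))^k contributes sum_j C(k, j) D^k L^(k+j) N^k
        for j in range(k + 1):
            spectrum[5 + k] += comb(k, j)
        k += 1
    return dict(spectrum)

print(distance_spectrum())   # {5: 1, 6: 2, 7: 4}: T(D) = D^5 + 2D^6 + 4D^7
```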
103.
Performance
• The event error probability is
defined as the probability that the
decoder selects a code sequence
that was not transmitted.
• For two code sequences separated by Hamming distance d,
on a BSC with crossover probability p, the pairwise
error probability is upper-bounded by
PEP(d) ≤ [4p(1 − p)]^(d/2)
• The upper bound for the event
error probability is given by
P_event ≤ Σ_{d ≥ dfree} A(d) · PEP(d)
where A(d) is the number of codewords at distance d.
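A small numeric sketch of these bounds (the crossover probability p = 0.01 and the distance spectrum values are illustrative, taken from the example code's T(D) = D^5 + 2D^6 + 4D^7 + …):

```python
def pep_bound(d, p):
    """Chernoff-style bound on the pairwise error probability:
    PEP(d) <= [4 p (1 - p)]^(d/2) for a BSC with crossover probability p."""
    return (4 * p * (1 - p)) ** (d / 2)

def event_error_bound(spectrum, p):
    """Union bound: P_event <= sum over d >= dfree of A(d) * PEP(d)."""
    return sum(A_d * pep_bound(d, p) for d, A_d in spectrum.items())

spectrum = {5: 1, 6: 2, 7: 4}   # A(d) for the example code, dfree = 5
print(event_error_bound(spectrum, p=0.01))
```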
104.
Performance
• Using T(D,L,N), we can formulate this as
P_event ≤ T(D,L,N) evaluated at L = 1, N = 1, D = 2√(p(1 − p))
• The bit error rate (not probability) is written as
P_bit ≤ dT(D,L,N)/dN evaluated at L = 1, N = 1, D = 2√(p(1 − p))
105.
Viterbi Decoding Algorithm
• The Viterbi algorithm is a standard component of tens of millions of high-speed
modems. It is a key building block of modern information infrastructure.
• The symbol "VA" is ubiquitous in the block diagrams of modern receivers.
• Essentially, the VA finds a path through any Markov graph, i.e., a sequence
of states governed by a Markov chain.
• Many practical applications:
– convolutional decoding and channel trellis decoding.
– fading communication channels,
– partial response channels in recording systems,
– optical character recognition,
– voice recognition.
– DNA sequence analysis
– etc.
106.
• A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream
that has been encoded using a convolutional code.
• There are other algorithms for decoding a convolutionally encoded
stream (for example, the Fano algorithm).
• The Viterbi algorithm is the most resource-consuming, but it performs
maximum likelihood decoding.
• It is most often used for decoding convolutional codes with constraint
lengths k ≤ 10, but values up to k = 15 are used in practice.
• Viterbi decoding was developed by Andrew J. Viterbi and published in
the paper "Error Bounds for Convolutional Codes and an
Asymptotically Optimum Decoding Algorithm", IEEE Transactions on
Information Theory, Volume IT-13, pages 260–269, April 1967.
Viterbi Decoding Algorithm
107.
• Applications: The Viterbi decoding algorithm is widely used in the following areas:
(i). Radio communication: digital TV (ATSC, QAM, DVB-T, etc.), radio
relay, satellite communications, the PSK31 digital mode for amateur radio.
(ii). Decoding trellis-coded modulation (TCM), the technique used in
telephone-line modems to squeeze high spectral efficiency out of
3 kHz-bandwidth analog telephone lines.
(iii). Computer storage devices such as hard disk drives.
(iv). Automatic speech recognition.
Viterbi Decoding Algorithm
108.
Viterbi Decoding
• Viterbi decoding is one of two types of decoding algorithms
used with convolutional encoding; the other type is sequential
decoding.
• Sequential decoding has the advantage that it can perform
very well with long-constraint-length convolutional codes, but
it has a variable decoding time.
• A discussion of sequential decoding algorithms is beyond the
scope of this tutorial; the reader can find sources discussing
this topic in the Books about Forward Error Correction section
of the bibliography.
• Viterbi decoding has the advantage that it has a fixed decoding
time. It is well suited to hardware decoder implementation.
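As a concrete sketch, here is a hard-decision Viterbi decoder for the rate-1/3, constraint-length-3 code from the encoder slides (generators G1 = (1,1,1), G2 = (0,1,1), G3 = (1,0,1); the function name is hypothetical):

```python
def viterbi_decode(received, generators=((1, 1, 1), (0, 1, 1), (1, 0, 1))):
    """Hard-decision Viterbi decoding: track, per trellis state, the
    minimum-Hamming-distance path, then read off its information bits."""
    n = len(generators)
    states = [(a, b) for a in (0, 1) for b in (0, 1)]   # (m0, m-1)
    INF = float("inf")
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    paths = {s: [] for s in states}
    for t in range(0, len(received), n):
        block = received[t:t + n]
        new_metric = {s: INF for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            if metric[s] == INF:
                continue
            for bit in (0, 1):                  # hypothesized input bit
                regs = (bit,) + s
                out = [sum(r for r, tap in zip(regs, g) if tap) % 2
                       for g in generators]
                dist = sum(a ^ b for a, b in zip(out, block))
                ns = (bit, s[0])                # shifted register state
                if metric[s] + dist < new_metric[ns]:
                    new_metric[ns] = metric[s] + dist
                    new_paths[ns] = paths[s] + [bit]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])   # best surviving path
    return paths[best]

# Codeword for input bits 1,0,1 (plus two flush zeros), first bit flipped:
rx = [0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1]
print(viterbi_decode(rx)[:3])   # recovers [1, 0, 1] despite the error
```

Note that the decoding time is fixed by the received length and the number of states, which is what makes the algorithm attractive for hardware.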
109.
• But its computational requirements grow exponentially as a function of
the constraint length, so it is usually limited in practice to constraint
lengths of K = 9 or less.
• Stanford Telecom produces a K = 9 Viterbi decoder that operates at rates
up to 96 kbps, and a K = 7 Viterbi decoder that operates at up to 45
Mbps.
• Advanced Wireless Technologies offers a K = 9 Viterbi decoder that
operates at rates up to 2 Mbps.
• NTT has announced a Viterbi decoder that operates at 60 Mbps, but I
don't know its commercial availability.
• Moore's Law applies to Viterbi decoders as well as to microprocessors,
so consider the rates mentioned above as a snapshot of the
state-of-the-art taken in early 1999.
Viterbi Decoding
110.
Trellis Coded Modulation.
• In telecommunication, trellis modulation is also known as trellis-coded
modulation, or simply TCM.
• TCM is a modulation scheme which allows highly efficient
transmission of information over band-limited channels such
as telephone lines.
• Trellis modulation was invented by Gottfried Ungerboeck, who was
working for IBM in the 1970s.
111.
• The name trellis was coined because a state diagram of
the technique, when drawn on paper, closely resembles
the trellis lattice used in rose gardens.
• The scheme is basically a convolutional code of rate
r/(r + 1).
• Ungerboeck's unique contribution is to apply the parity
check on a per-symbol basis instead of the older technique
of applying it to the bit stream and then modulating the bits.
• The key idea he termed Mapping by Set Partitions.
Trellis Coded Modulation.
112.
• This idea was to group the symbols in a tree-like fashion,
then separate them into two limbs of equal size.
• At each limb of the tree, the symbols are spaced further apart.
• Although hard to visualize in multiple dimensions, a simple
one-dimensional example illustrates the basic procedure.
• Suppose the symbols are located at [1, 2, 3, 4, …].
• Then take all odd symbols and place them in one group,
and the even symbols in the second group.
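This one-dimensional partitioning step can be sketched as follows (a hypothetical `partition` helper):

```python
def partition(symbols):
    """One level of set partitioning: split a 1-D constellation into two
    subsets so that the spacing within each subset doubles."""
    return symbols[::2], symbols[1::2]

odd, even = partition([1, 2, 3, 4, 5, 6, 7, 8])
print(odd, even)   # [1, 3, 5, 7] [2, 4, 6, 8]
```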
Trellis Coded Modulation.
113.
• This is not quite accurate because Ungerboeck was looking at the
two-dimensional problem, but the principle is the same: take every other one for
each group and repeat the procedure for each tree limb.
• He next described a method of assigning the encoded bit stream onto the
symbols in a very systematic procedure. Once this procedure was fully
described, his next step was to program the algorithms into a computer and let
the computer search for the best codes.
• The results were astonishing. Even the simplest code (4 state) produced
error rates nearly one one-thousandth of an equivalent uncoded system.
• For two years Ungerboeck kept these results private and only conveyed them
to close colleagues.
• Finally, in 1982, Ungerboeck published a paper describing the principles of
trellis modulation.
Trellis Coded Modulation.
114.
References
1. K. Sam Shanmugam, “Digital and Analog Communication Systems”, John Wiley.
2. Simon Haykin, “An Introduction to Analog and Digital Communication”, John Wiley.
3. Bernard Sklar, “Digital Communication: Fundamentals & Applications”, Pearson Education.
4. Hsu, “Analog & Digital Communications”, Tata McGraw-Hill, 2nd edition.
115.
Thank You!!!!!
Have A Nice Day!!!!!
DCT Notes By, Er. Swapnil Kaware
(svkaware@yahoo.co.in)