Convolutional Codes
Dr. Himansu Shekhar Pradhan
Assistant Professor
Department of Electronics & communication Engineering
National Institute of Technology, Warangal - 506004, Telangana, INDIA
Introduction:
• Convolutional codes were first introduced by P. Elias in 1955.
• The structure of convolutional codes is quite different from that of block codes.
• During each unit of time, the input to a convolutional encoder is a k-bit
message block and the corresponding output is an n-bit coded block with
k < n.
• Each coded n-bit output block depends not only on the corresponding k-bit input
message block at the same time unit but also on the m previous message blocks.
• Thus the encoder has k input lines, n output lines, and a memory of order m.
Introduction:
• Each message (or information) sequence is encoded into a code sequence.
• The set of all possible code sequences produced by the encoder is called an (n,k,m)
convolutional code.
• The parameters k and n are normally small, say 1 ≤ k ≤ 8 and 2 ≤ n ≤ 9.
• The ratio R = k/n is called the code rate.
• Typical values for the code rate are 1/2, 1/3, and 2/3.
• The parameter m is called the memory order of the code.
• Note that the number of redundant (or parity) bits in each coded block is small.
However, more redundancy can be added by increasing the memory order m of the
code while holding k and n fixed.
Convolutional codes and Encoders
• Convolutional encoding is accomplished using shift registers (D flip-flops)
and combinational logic that performs modulo-two addition.
Convolutional Codes and Encoders
• We’ll also need an output selector to toggle between the two
modulo-two adders.
• The output selector (SEL A/B block) cycles through two states;
In the first state,
it selects and outputs the output of the upper modulo-two adder;
In the second state,
it selects and outputs the output of the lower modulo-two adder.
Convolutional Codes
• Convolutional codes are characterized by three parameters:
(n, k, m)
where, n = number of output bits
k = number of input bits
m = number of memory registers
• Code Rate = k/n
= number of input bits / number of output bits
• Constraint length L = k(m - 1)
L represents the number of bits in the encoder memory that affect the generation of the n output bits.
An example: (2,1,4) Coder
• n = number of output bits = 2
k = number of input bits = 1
m = number of memory registers = 4
• Constraint length L = 3.
• The bits in the shaded registers are called the state of the code, and the
number of states is
No. of states = 2^L
(2,1,4) Coder:
• The constraint length = 3.
• The shaded registers hold these bits,
while the un-shaded register holds the incoming bit.
• This means 3 bits, i.e., 8 different combinations
of the bits, can be held in the registers.
• These 8 combinations determine which
output will be produced as the 2 output bits v1 and v2.
Input sequence: 1, Output: 11 11 10 11
The Catastrophic code
• Providing input of 1 to the coder provides what is called the
"impulse response" of that coder.
• Here, providing 1 as input (and then 'flushing' it out of the
registers using zeros) gives the output sequence: 11 11 10 11.
• Similarly, the response to input bit 0 would be: 00 00 00 00.
• The output sequence can be computed simply by convolving the
input sequence u with the impulse response g.
Or, v = u * g
Example:
• To find the coded sequence for the input 1011, we just have to add the shifted
versions of the individual responses:
This result can be verified by the encoder model too.
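As a check, here is a minimal Python sketch of this convolution; the taps g1 = (1,1,1,1) and g2 = (1,1,0,1) are read off the impulse response 11 11 10 11 quoted above (first and second output bit of each pair, respectively):

```python
# Mod-2 convolution of an input sequence with each generator's impulse
# response, then interleaving of the two output streams (v = u * g).

def conv_mod2(u, g):
    """Discrete convolution of bit sequences u and g over GF(2)."""
    out = [0] * (len(u) + len(g) - 1)
    for i, ub in enumerate(u):
        for j, gb in enumerate(g):
            out[i + j] ^= ub & gb
    return out

# Taps read off the impulse response 11 11 10 11 of the (2,1,4) coder.
g1, g2 = [1, 1, 1, 1], [1, 1, 0, 1]

def encode(u):
    v1, v2 = conv_mod2(u, g1), conv_mod2(u, g2)
    return " ".join(f"{a}{b}" for a, b in zip(v1, v2))

print(encode([1]))           # 11 11 10 11  (the impulse response itself)
print(encode([1, 0, 1, 1]))  # superposition of shifted impulse responses
```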
Output bits and the encoder bits through the (2,1,4) code:
Graphical representation of Convolutional codes
• 3 different but related graphical representations can be used to study the
convolutional encoding.
1) Code tree = Tree diagram
2) Code trellis = Trellis diagram
3) State diagram
• Note that we can easily find output of the encoder from any of the above
diagrams.
• Given a sequence of message bits and the initial state, you can use any of
following 3 diagrams to find the resulting output bits.
Code Tree
• The convention used to distinguish the input binary symbols is as follows:
Input 0 — specifies upper branch
Input 1 — specifies lower branch
• Each branch of the tree represents an input symbol, with the corresponding
pair of output binary symbols indicated on the branch.
• It is often convenient to represent the codewords of a convolutional code as
paths through a code tree. A convolutional code is sometimes called a (linear)
tree code.
• Code tree: the leftmost node is called the root.
Since the encoder has 1 binary input, there are 2 branches stemming from
each node (starting at the root). The upper branch leaving each node
corresponds to the input digit 0 and the lower branch corresponds to the input digit 1.
• On each branch we have 2 binary code digits viz., the 2 outputs from the
encoder.
(2,1,1) convolutional code |solved problem |Tree diagram
Q. The figure below depicts a rate 1/2, constraint length L = 1 convolutional
encoder. Sketch the tree diagram. Also find the encoder output for the input
data: 11101
Step-1:
Step-2:
Step-3:
Step-4:
Step-5:
Trellis Diagram Representation
• The trellis diagram is basically a redrawing of the state diagram. It shows all possible
state transitions at each time step.
• The trellis diagram is drawn by lining up all the possible states (2^L) in the vertical axis.
Then we connect each state to the next state by the allowable codewords for that
state.
• There are only two choices possible at each state. These are determined by the
arrival of either a 0 or a 1 bit.
• The arrows show the input bit, and the output bits are shown in parentheses.
• The arrows going upwards represent a 0 bit and those going downwards represent a 1 bit.
Trellis Diagram Representation
Steps to construct trellis diagram
• It starts from scratch (all 0's in the SR, i.e., state a) and makes transitions
corresponding to each input data digit.
• These transitions are denoted by a solid line for the next data digit 0 and by
a dashed line for the next data digit 1.
• Thus when the first input digit is 0, the encoder output is 00 (solid line)
• When the input digit is 1, the encoder output is 11 (dashed line).
• We continue this way for the second input digit and so on.
ENCODER REPRESENTATIONS
Example: Encoding of convolutional codes using Trellis Representation
k = 1, n = 2 convolutional code
• We begin with state 00:
• Input data: 0 1 0 1 1 0 0
• Output: 00 11 01 00 10 10 11
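A minimal state-machine sketch of this encoding, assuming generator taps (1,0,1) and (1,1,1) for the two output bits; these taps are an assumption chosen because they reproduce the output sequence listed above, with the state being the two previous input bits:

```python
# Trellis (state-machine) encoder for a k = 1, n = 2 code with two memory
# cells. The state is the last two input bits (newest first).

def trellis_encode(bits, g=((1, 0, 1), (1, 1, 1))):
    s1 = s2 = 0                          # shift-register contents
    out = []
    for u in bits:
        window = (u, s1, s2)
        pair = "".join(str(sum(t & w for t, w in zip(taps, window)) % 2)
                       for taps in g)
        out.append(pair)
        s1, s2 = u, s1                   # shift the register
    return " ".join(out)

print(trellis_encode([0, 1, 0, 1, 1, 0, 0]))   # 00 11 01 00 10 10 11
```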
Polynomial description of Convolutional codes
• It is also called the description in the D-transform domain.
• Ex: (2,1,2) convolutional code encoder:
• G^(1)(D) = 1 + D + D^2
• G^(2)(D) = 1 + D^2
• G(D) = [1 + D + D^2, 1 + D^2]
• U(D) = 1 + D + D^2 + D^4
Polynomial description of Convolutional codes
• Ex: (2,1,2) convolutional code encoder:
V(D) = U(D) G(D)
V1(D) = U(D) G^(1)(D)
• V1(D) = (1 + D + D^2 + D^4)(1 + D + D^2)
= 1 + D + D^2 + D^4 + D + D^2 + D^3 + D^5 + D^2 + D^3 + D^4 + D^6
• V1(D) = 1 + D^2 + D^5 + D^6
Polynomial description of Convolutional codes
• Ex: (2,1,2) convolutional code encoder:
V(D) = U(D) G(D)
V2(D) = U(D) G^(2)(D)
• V2(D) = (1 + D + D^2 + D^4)(1 + D^2)
= 1 + D + D^2 + D^4 + D^2 + D^3 + D^4 + D^6
• V2(D) = 1 + D + D^3 + D^6
Polynomial description of Convolutional codes
• Ex: (2,1,2) convolutional code encoder:
• V1(D) = 1 + D^2 + D^5 + D^6
• V2(D) = 1 + D + D^3 + D^6
• In the time domain:
V1 = (1010011)
V2 = (1101001)
• Then the code word (interleaving V1 and V2) is:
V = (11 01 10 01 00 10 11)
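A minimal sketch verifying this example by GF(2) polynomial multiplication (coefficient lists run from D^0 upward):

```python
# V(D) = U(D) G(D) over GF(2) for the (2,1,2) example above.

def poly_mul_gf2(a, b):
    """Multiply two polynomials over GF(2), coefficients from D^0 up."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

U  = [1, 1, 1, 0, 1]          # 1 + D + D^2 + D^4
G1 = [1, 1, 1]                # 1 + D + D^2
G2 = [1, 0, 1]                # 1 + D^2

V1 = poly_mul_gf2(U, G1)      # -> [1,0,1,0,0,1,1] = 1 + D^2 + D^5 + D^6
V2 = poly_mul_gf2(U, G2)      # -> [1,1,0,1,0,0,1] = 1 + D + D^3 + D^6
print(" ".join(f"{a}{b}" for a, b in zip(V1, V2)))   # 11 01 10 01 00 10 11
```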
Polynomial description – Example 1
A convolutional encoder with two streams of encoded bits with message sequence
10011 as an input
[Encoder figure: the two coded output streams are c'_j (stream 1) and c''_j (stream 2).]
First, we find the impulse response of both streams to a symbol 1.
Impulse response of stream 1 = (1 1 1)
Impulse response of stream 2 = (1 0 1)
Then, write the corresponding generator polynomial of both streams:
g(D) = g_0 + g_1 D + g_2 D^2 + ... + g_M D^M
Polynomial description– Example 1
First, we find the impulse response of both streams to a symbol 1.
Impulse response of stream 1 = (1 1 1)
Impulse response of stream 2 = (1 0 1)
Then, write the corresponding generator polynomials of both streams:
g^(1)(D) = 1 + D + D^2
g^(2)(D) = 1 + D^2
Then, write the message polynomial for input message (10011):
m(D) = 1 + D^3 + D^4
Then, find the output polynomial for both streams by multiplying the generator
polynomial and the message polynomial
Polynomial description– Example 1
Then, find the output polynomial for both streams by multiplying the generator
polynomial and the message polynomial:
c^(1)(D) = g^(1)(D) · m(D) = (1 + D + D^2)(1 + D^3 + D^4) = 1 + D + D^2 + D^3 + D^6
c^(2)(D) = g^(2)(D) · m(D) = (1 + D^2)(1 + D^3 + D^4) = 1 + D^2 + D^3 + D^4 + D^5 + D^6
So, the output sequence for stream 1 is 1111001.
The output sequence for stream 2 is 1011111.
Interleave
C = 11,10,11,11,01,01,11
Original message (10011)
Encoded sequence (11101111010111)
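The same GF(2) polynomial product reproduces Example 1:

```python
# Example 1 checked with a GF(2) polynomial product.
def poly_mul_gf2(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

m  = [1, 0, 0, 1, 1]                     # 10011 -> 1 + D^3 + D^4
c1 = poly_mul_gf2(m, [1, 1, 1])          # stream 1 -> 1111001
c2 = poly_mul_gf2(m, [1, 0, 1])          # stream 2 -> 1011111
print(" ".join(f"{a}{b}" for a, b in zip(c1, c2)))   # 11 10 11 11 01 01 11
```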
Polynomial description– Example 2
A convolutional encoder with two streams of encoded bits with message
sequence 110111001 as an input
[Encoder figure: the two coded output streams are c'_j (stream 1) and c''_j (stream 2).]
Polynomial description– Example 2
First, we find the impulse response of both streams to a symbol 1.
Impulse response of stream 1 = (1 1 1)
Impulse response of stream 2 = (1 0 1)
Then, write the corresponding generator polynomials of both streams:
g^(1)(D) = 1 + D + D^2
g^(2)(D) = 1 + D^2
Then, write the message polynomial for input message (110111001):
m(D) = 1 + D + D^3 + D^4 + D^5 + D^8
Then, find the output polynomial for both streams by multiplying the generator
polynomial and the message polynomial
Polynomial description– Example 2
Then, find the output polynomial for both streams by multiplying the generator
polynomial and the message polynomial:
c^(1)(D) = g^(1)(D) · m(D) = 1 + D^5 + D^7 + D^8 + D^9 + D^10
c^(2)(D) = g^(2)(D) · m(D) = 1 + D + D^2 + D^4 + D^6 + D^7 + D^8 + D^10
So, the output sequence for stream 1
is 10000101111
The output sequence for stream 2
is 11101011101
Interleave
C = 11 01 01 00 01 10 01 11 11 10 11
Original message (110111001)
Encoded sequence (11 01 01 00 01 10 01 11 11 10 11 )
Distance notions for convolutional code
Parameters in Convolutional Codes
• For generating a convolutional code, the information is passed sequentially through
a linear finite-state shift register. The shift register comprises K (k-bit) stages and
n Boolean function generators.
A convolutional code can be represented as (n, k, K), where
• k is the number of bits shifted into the encoder at one time. Generally, k = 1.
• n is the number of encoder output bits corresponding to the k information bits.
• The encoder memory, a shift register of size K, is the constraint length.
• The n output bits are a function of the present input bits and the contents of the K-stage register.
• The state of the encoder is given by the value of (K - 1) bits.
Distance notions for convolutional code
• In particular, the Hamming distance of any linear code, i.e., the minimum Hamming
distance between any two valid codewords, is equal to the weight of the
minimum-weight non-zero codeword, where the weight of a codeword
is the number of ones it contains.
• In the context of convolutional codes, the smallest Hamming distance between any
two valid codewords is called the free distance.
• Specifically, the free distance of a convolutional code is the difference in path metrics
between the all-zeroes output and the path with the smallest non-zero path metric
going from the initial 00 state to some future 00 state.
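A hedged sketch of computing the free distance from the trellis: a shortest-path search, with Hamming weight as the path metric, for the minimum-weight path that leaves the all-zero state and first remerges with it. The (2,1,2) code with taps (1,0,1) and (1,1,1) is an assumed illustrative example; its free distance comes out as 5.

```python
# Free distance via a minimum-weight path search on the trellis.
import heapq

def step(state, u, g=((1, 0, 1), (1, 1, 1))):
    """One trellis transition: returns (next state, output Hamming weight)."""
    s1, s2 = state
    window = (u, s1, s2)
    w = sum(sum(t & x for t, x in zip(taps, window)) % 2 for taps in g)
    return (u, s1), w

def free_distance():
    start, w0 = step((0, 0), 1)          # force a non-zero departure from 00
    heap = [(w0, start)]
    best = {start: w0}
    while heap:
        w, state = heapq.heappop(heap)
        if state == (0, 0):
            return w                     # first return to 00 = minimum weight
        for u in (0, 1):
            nxt, dw = step(state, u)
            if w + dw < best.get(nxt, float("inf")):
                best[nxt] = w + dw
                heapq.heappush(heap, (w + dw, nxt))

print(free_distance())                   # 5 for this example code
```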
Matrix description of convolutional codes
• For an input sequence m, the output sequence can be written as c(j)=m*g(j), where *
denotes discrete-time convolution. The operation of convolution can also be
represented using matrices.
• Let m = [m0, m1, m2, ...]. Then for g^(j) = [g_0^(j), g_1^(j), ..., g_r^(j)], the convolution c^(j) = m g^(j) can
be represented as
where empty entries in the matrix indicate zeros.
Matrix description of convolutional codes
• For a rate 1/2 code, the operation of convolution and interleaving the output
sequences is represented by the following matrix, where the columns of different
matrices are interleaved:
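A minimal sketch of the single-stream matrix view, under the assumption of one generator g and plain mod-2 arithmetic: the generator matrix consists of shifted copies of g (the banded structure described above), and m·G mod 2 reproduces the convolution m * g.

```python
# Convolution as multiplication by a banded generator matrix (mod 2).
import numpy as np

def generator_matrix(g, n_in):
    """Rows are shifted copies of g; empty entries are zeros."""
    r = len(g)
    G = np.zeros((n_in, n_in + r - 1), dtype=int)
    for i in range(n_in):
        G[i, i:i + r] = g
    return G

m = np.array([1, 0, 1, 1])
g = np.array([1, 1, 1])
G = generator_matrix(g, len(m))
print(G)
print((m @ G) % 2)               # matrix form of the convolution
print(np.convolve(m, g) % 2)     # cross-check: identical result
```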
The Viterbi Algorithm
• The Viterbi algorithm performs Maximum Likelihood decoding.
• It finds the path through the trellis with the largest metric (maximum correlation or
minimum distance).
- It processes the demodulator outputs in an iterative manner.
- At each step in the trellis, it compares the metric of all paths
entering each state, and keeps only the path with the largest metric,
called the survivor, together with its metric.
- It proceeds in the trellis by eliminating the least likely paths.
• It reduces the decoding complexity to the order of L · 2^(K-1) operations.
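A hedged sketch of hard-decision Viterbi decoding for the trellis example given earlier (taps (1,0,1) and (1,1,1) assumed, Hamming distance as the branch metric); at each step only the survivor per state is kept:

```python
# Hard-decision Viterbi decoding: one survivor path per trellis state.

def encode_step(state, u, g=((1, 0, 1), (1, 1, 1))):
    s1, s2 = state
    out = tuple(sum(t & w for t, w in zip(taps, (u, s1, s2))) % 2
                for taps in g)
    return (u, s1), out

def viterbi(received):                   # received: list of 2-bit tuples
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for r in received:
        new_metric = {s: float("inf") for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            for u in (0, 1):
                nxt, out = encode_step(s, u)
                d = metric[s] + sum(a ^ b for a, b in zip(out, r))
                if d < new_metric[nxt]:  # keep only the survivor
                    new_metric[nxt] = d
                    new_paths[nxt] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])
    return paths[best]

rx = [(0, 0), (1, 1), (0, 1), (0, 0), (1, 0), (1, 0), (1, 1)]
print(viterbi(rx))                       # -> [0, 1, 0, 1, 1, 0, 0]
```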
Overview of Trellis Coded Modulation(TCM)
Conventional coding
• Separate from modulation, performed at the digital level
before modulation
• The insertion of redundant bits
− Given the same information transmission rate, the symbol rate
must be (n/k) times that of the uncoded system.
− The redundancy provides coding gain; however, it requires extra
bandwidth.
• In a band-limited channel, the required additional bandwidth is unavailable.
Overview of TCM
• Solution: Trellis coded modulation (TCM)
− The combination of coding and modulation
− Coding gain without expanding bandwidth
• Using a constellation with more points than required without coding.
• Typically, the number of points is doubled.
• The symbol rate is unchanged and the bandwidth remains unchanged.
Basic Principles of TCM
• TCM is to devise an effective method for mapping the coded bits into signal
points such that the minimum Euclidean distance is maximized.
• Ungerboeck's idea: mapping by set partitioning
− The signal constellation is partitioned in a systematic manner to form a
series of smaller subsets.
− The resulting subsets have a larger minimum distance than their “parent”.
− The goal of partitioning: each partition should produce subsets with
increased minimum distance.
Example of Set Partitioning
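The partitioning figure itself is not reproduced here. As a numerical sketch of the idea, assuming 8-PSK as the example constellation: each partition level increases the minimum intra-subset Euclidean distance (about 0.765 → 1.414 → 2.0 for unit-energy 8-PSK).

```python
# Minimum intra-subset Euclidean distance at each 8-PSK partition level.
import cmath
from itertools import combinations

points = [cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]

def min_dist(indices):
    return min(abs(points[a] - points[b])
               for a, b in combinations(indices, 2))

print(round(min_dist(range(8)), 3))        # full 8-PSK set:   0.765
print(round(min_dist(range(0, 8, 2)), 3))  # 4-point subset:   1.414
print(round(min_dist(range(0, 8, 4)), 3))  # 2-point subset:   2.0
```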
Basic Principles of TCM
• In general, the encoding is performed as follows:
− A block of m information bits is separated into two groups of length k1 and k2 ,
respectively.
− The k1 bits are encoded into n bits, while the k2 bits are left uncoded.
− The n bits from the encoder are used to select one of the possible subsets in the
partitioned signal set, while the k2 bits are used to select one of the 2^k2 signal points in
each subset.
 The coder need not code all the incoming bits. When k2=0, all m information bits
are encoded.
There are many ways to map the coded bits into symbols. The choice of mapping will
drastically affect the performance of the code.
Basic Principles of TCM
• General structure of encoder:
Basic Principles of TCM
 The basic rules for the assignment of signal subsets to state transitions in the
trellis
− Use all subsets with equal frequency in the trellis
− Transitions originating from the same state or merging into the same state
in the trellis are assigned subsets that are separated by the largest
Euclidean distance
− Parallel state transitions (when they occur) are assigned signal points
separated by the largest Euclidean distance.
• Parallel transitions in the trellis are characteristic of TCM that contains one or
more uncoded information bits.
Examples of TCM
Turbo codes
• Parallel-Concatenated Convolutional Codes (PCCC), called turbo codes,
have solved the dilemma of structure and randomness through concatenation
and interleaving, respectively.
• The introduction of turbo codes has given most of the gain promised by the
channel- coding theorem.
• Turbo codes have an astonishing performance of bit error rate (BER) at
relatively low Eb /No.
Turbo Codes: Encoder
[Encoder block diagram: the data source output X feeds Convolutional Encoder 1 directly and Convolutional Encoder 2 through an interleaver; the transmitted stream is X (the information) together with the redundancy outputs Y1 and Y2 of the two encoders.]
Turbo Codes: Decoder
[Decoder block diagram: Convolutional Decoder 1 operates on X and Y1; its output is interleaved and passed to Convolutional Decoder 2, which operates on the interleaved data and Y2; de-interleavers return the information to Decoder 1 and produce the decoded output X'.]
Interleaving
• An interleaver is a device that rearranges the ordering of a sequence of symbols
in a deterministic manner.
• The two main issues in the interleaver design are the interleaver size and the
interleaver map.
• Interleaving is used to feed the encoders with permutations so that the
generated redundancy sequences can be assumed independent.
Interleaving
The input data a1, a2, a3, ... is written row-wise into a 4×4 array:
a1, a2, a3, a4
a5, a6, a7, a8
a9, a10, a11, a12
a13, a14, a15, a16
and read out column-wise, so the transmitted order is a1, a5, a9, a13, a2, a6, a10, a14, a3, ...
The de-interleaver writes the received symbols column-wise into the same array and reads
them row-wise, restoring the original order a1, a2, a3, a4, a5, ...
Interleaving (Example)
Suppose the transmitted sequence 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, ... arrives with a burst error
(the run of four 1s). Writing it column-wise into the 4×4 array gives the rows
0, 1, 0, 0
0, 1, 0, 0
0, 1, 0, 0
1, 0, 0, 0
and reading row-wise yields the de-interleaved output 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, ...
The burst error has been converted into isolated (discrete) errors.
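A minimal sketch of the 4×4 block interleaver described above (write row-wise, read column-wise), showing a burst of four channel errors being spread into isolated errors:

```python
# 4x4 block interleaver: write row-wise, read column-wise; the
# de-interleaver does the inverse.

def interleave(seq, rows=4, cols=4):
    return [seq[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(seq, rows=4, cols=4):
    return [seq[c * rows + r] for r in range(rows) for c in range(cols)]

data = [f"a{i}" for i in range(1, 17)]
tx = interleave(data)
print(tx[:5])                        # ['a1', 'a5', 'a9', 'a13', 'a2']
print(deinterleave(tx) == data)      # True: original order restored

# A burst hitting transmitted positions 4..7 ...
rx = ["E" if 4 <= i <= 7 else s for i, s in enumerate(tx)]
print(deinterleave(rx))              # ... lands 4 positions apart at the output
```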
Punctured code
• In coding theory, puncturing is the process of removing some of the parity bits after
encoding with an error-correction code. This has the same effect as encoding with an
error-correction code with a higher rate, or less redundancy.
• However, with puncturing the same decoder can be used regardless of how many
bits have been punctured, thus puncturing considerably increases the flexibility of
the system without significantly increasing its complexity.
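A minimal sketch of puncturing, assuming a rate-1/2 mother code and the common period-2 puncturing matrix [[1,1],[1,0]] (an illustrative choice, not prescribed here), which raises the rate to 2/3:

```python
# Puncture two rate-1/2 output streams with a period-2 pattern:
# row = stream, column = time index mod 2; a 0 drops the bit.

def puncture(v1, v2, pattern=((1, 1), (1, 0))):
    out = []
    for t, (a, b) in enumerate(zip(v1, v2)):
        if pattern[0][t % 2]:
            out.append(a)
        if pattern[1][t % 2]:
            out.append(b)
    return out

v1 = [1, 0, 1, 1, 0, 0]
v2 = [1, 1, 0, 1, 0, 1]
print(puncture(v1, v2))   # 9 coded bits for 6 input bits -> rate 2/3
```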
Encoding Parallel Concatenated Codes
• The conventional arrangement for the (unpunctured) turbo encoder
is shown in figure
Turbo Encoding
• A block of input symbols x = {x_0, x_1, ..., x_{N-1}} is presented to the encoder, or it may
simply be a message sequence, x = m = (m_0, m_1, ..., m_{N-1}).
• In the encoder, the input sequence x is used three ways.
• First, it is copied directly to the output to produce the systematic output
sequence v_t^(0) = x_t, t = 0, 1, ..., N-1.
• Second, the input sequence runs through the first RSC encoder with
transfer function G(x), resulting in the parity sequence (v_0^(1), v_1^(1), ..., v_{N-1}^(1)).
• The combination of the sequence {v_t^(0)} and the sequence {v_t^(1)} results in
a rate R = 1/2 (neglecting the length of the zero-forcing tail, if any)
systematic convolutionally encoded sequence.
• Third, the sequence x is also passed through an interleaver or permuter of
length N, denoted by П , which produces the permuted output sequence
x' = П(x).
• The sequence x' is passed through another convolutional encoder with transfer
function G(x), which produces the output sequence
v^(2) = (v_0^(2), v_1^(2), ..., v_{N-1}^(2)).
The three output sequences are multiplexed together to form the output
sequence
v = {(v_0^(0), v_0^(1), v_0^(2)), (v_1^(0), v_1^(1), v_1^(2)), ..., (v_{N-1}^(0), v_{N-1}^(1), v_{N-1}^(2))},
resulting in an overall rate R = 1/3 linear, systematic block code. The code has
two sets of parity information, v^(1) and v^(2), which, because of the interleaving, are
fairly independent.
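A structural sketch of this rate-1/3 arrangement. The RSC generator (feedback 1 + D + D^2, feedforward 1 + D^2) and the random interleaver are assumed illustrative choices, not the specific G(x) and Π of the original figure:

```python
# Rate-1/3 turbo encoding: systematic stream, RSC parity on x, and RSC
# parity on the permuted stream Pi(x), multiplexed together.
import random

def rsc_parity(bits):
    """Recursive systematic convolutional parity (illustrative taps)."""
    s1 = s2 = 0
    parity = []
    for x in bits:
        fb = x ^ s1 ^ s2             # feedback polynomial 1 + D + D^2
        parity.append(fb ^ s2)       # feedforward polynomial 1 + D^2
        s1, s2 = fb, s1
    return parity

def turbo_encode(x, perm):
    v0 = list(x)                             # systematic output v(0)
    v1 = rsc_parity(x)                       # parity v(1) from encoder 1
    v2 = rsc_parity([x[i] for i in perm])    # parity v(2) via interleaver
    return [b for triple in zip(v0, v1, v2) for b in triple]

x = [1, 0, 1, 1, 0, 0, 1, 0]
perm = list(range(len(x)))
random.seed(1)
random.shuffle(perm)                         # the interleaver map Pi
print(turbo_encode(x, perm))                 # 3 output bits per input bit
```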
Turbo decoding
• The data (r^(0), r^(1)) associated with the first encoder are fed to Decoder I.
• This decoder initially uses uniform priors on the transmitted bits and
produces probabilities of the bits conditioned on the observed data.
• These probabilities are called the extrinsic probabilities, as described
below.
• The output probabilities of Decoder I are interleaved and passed to
Decoder II, where they are used as "prior" probabilities in the decoder,
along with the data associated with the second encoder, which is r^(0)
(interleaved) and r^(2).
• The extrinsic output probabilities of Decoder II are deinterleaved and
passed back to become prior probabilities to Decoder I.
• The process of passing probability information back and forth continues
until the decoder determines (somehow) that the process has
converged, or until some maximum number of iterations is reached.