INFORMATION NETWORK SECURITY ADMINISTRATION
INTELLIGENCE TECHNOLOGY EXCELLENCE DIVISION
SIGNAL ANALYSIS TEAM
NAME TEMESGEN TIZAZU
Outline
 Introduction
 Huffman coding
 Convolutional code
 Interleaving
 Conclusion
 MATLAB Implementation
Introduction
 Codes are used for data transmission, data compression, cryptography, and error
correction
 Coding is a form of data organization or data presentation
There are four types of coding
 Data compression (source coding)
 Error correction (channel coding)
 Cryptographic coding
 Line coding
Continued …
 Data Compression: the process of coding that will effectively reduce the total
number of bits needed to represent certain information
 Source coding: used to remove redundancy from source information for
efficient transmission
 Data redundancy is the existence of data that is additional to the actual data
 Decreasing the level of redundancy is equivalent to lossless compression
 Because of that, source coding is often treated as lossless compression.
Continued…
 Lossless: the compression and decompression processes induce no
information loss; otherwise, the method is lossy
 Lossy methods are used when the data are images (JPEG), video (MPEG), or audio
(MP3)
 Lossless methods are used when the data are text or programs,
e.g., run-length coding, Huffman coding, Lempel-Ziv, etc.
Continued…
Compression ratio = B0/B1, where
 B0 = number of bits before compression (e.g., 7 bits per character are needed for a 128-character alphabet)
 B1 = number of bits after compression
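As a quick, hedged illustration (the message and code lengths below are assumptions made for the example, not taken from the slides), the ratio can be computed directly from bit counts in Python:

```python
# Minimal sketch: compression ratio B0/B1 for a fixed-length vs. a shorter prefix code.
message = "abracadabra"

bits_before = 7 * len(message)          # B0: 7 bits/character for a 128-character alphabet

# Assumed variable-length prefix code lengths (e.g., a=0, b=100, r=101, c=110, d=111)
code_length = {"a": 1, "b": 3, "r": 3, "c": 3, "d": 3}
bits_after = sum(code_length[ch] for ch in message)   # B1

compression_ratio = bits_before / bits_after
print(bits_before, bits_after, round(compression_ratio, 2))   # 77 23 3.35
```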
Huffman encoding
 Design the mapping from source symbols to codewords
 Entropy coding (prefix-free code): no codeword is a prefix of another one
 Lossless compression
 Symbols that occur more frequently will have shorter Huffman codes than
symbols that occur less frequently
 In an optimum prefix-free code, the two codewords that occur least frequently
will have the same length; differing only at the last bit
Huffman encoding
 Used to generate a uniquely decodable code: good codewords approach the lower
bound
L = average Huffman codeword length: L = Σ pᵢ lᵢ, where lᵢ is the length of the codeword for symbol i
H(S) = entropy: H(S) = −Σ pᵢ log₂ pᵢ
In general: H(S) ≤ L < H(S) + 1; H(S) is the lower bound
 Goal: minimizing the average codeword length
Continued…
Huffman algorithm:
Arrange the given messages in decreasing order of probability
Group the two least probable messages and assign them the coding symbols (0 and 1)
Add the probabilities of the grouped messages, place the sum as high as possible, and
rearrange in decreasing order again
Repeat the above steps until only two probabilities remain
Obtain the codeword for each message by tracing the probabilities from left to right
Write the respective symbols (0 or 1) above the path while tracing from right to
left
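A minimal Python sketch of the merging procedure above, using a heap in place of manual re-sorting; the function name is illustrative, and the demo uses the five-symbol source from the example later in this section:

```python
import heapq

def huffman_code(probabilities):
    """Build a Huffman code for {symbol: probability} by repeatedly
    merging the two least probable groups (a 0/1 bit is assigned at each merge)."""
    # Heap entries: [probability, tie_breaker, {symbol: partial_codeword}]
    heap = [[p, i, {sym: ""}] for i, (sym, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p_lo, _, lo = heapq.heappop(heap)   # least probable group
        p_hi, _, hi = heapq.heappop(heap)   # second least probable group
        for sym in lo:                      # prepend one more code bit
            lo[sym] = "0" + lo[sym]
        for sym in hi:
            hi[sym] = "1" + hi[sym]
        lo.update(hi)                       # merged group with summed probability
        heapq.heappush(heap, [p_lo + p_hi, counter, lo])
        counter += 1
    return heap[0][2]

if __name__ == "__main__":
    probs = {"a": 0.2, "b": 0.4, "c": 0.05, "d": 0.1, "e": 0.25}
    codes = huffman_code(probs)
    avg_len = sum(probs[s] * len(codes[s]) for s in probs)
    print(codes, round(avg_len, 2))         # average length 2.1 bits
```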
Continued…
 Huffman Coding: Example
Continued…
The two solutions have the same average codeword length
They differ in the variance of the codeword lengths
Solution 1: variance = 0.16
 Solution 2: variance = 1.36
 So the solution with the least variance is preferred
Continued…
 Consider a source which generates one of five possible symbols
 S = [a, b, c, d, e]; their probabilities are
 P = [0.2, 0.4, 0.05, 0.1, 0.25]
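For reference, applying the procedure above to this source gives (under the natural pairing of S and P, and as one of several equally good assignments) codeword lengths of 1 bit for b, 2 for e, 3 for a, and 4 each for c and d, so L = 0.4(1) + 0.25(2) + 0.2(3) + 0.05(4) + 0.1(4) = 2.1 bits, while H(S) ≈ 2.04 bits, consistent with H(S) ≤ L < H(S) + 1.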
Convolutional code
Channel coding :-
 When digital data is transmitted from one system to another, an unwanted
electrical disturbance may be added to it
 This can cause an 'error' in the digital information; that is, a 0 can change to a 1 or
a 1 can change to a 0
 To detect and correct such errors, special types of codes capable of detecting and
correcting them are used
Convolutional code
 There are many different types of error control codes, such as polar codes, Reed-
Solomon codes, turbo codes, convolutional codes, cyclic codes, and LDPC
codes
 Block codes and convolutional codes are commonly used in digital
communications
 Different factors affect the choice of a particular coding scheme,
 such as cost, power, bandwidth, decoding delay, and data rate
Continued…
 In block codes there is a one-to-one correspondence between uncoded blocks of
symbols and coded blocks of symbols
 Convolutional codes have memory: previous bits are used in encoding and decoding
 Each information bit remains in the encoder for up to m+1 time units
 During each time unit it can affect any of the n encoder outputs (depending on the shift-
register connections)
 The encoder output at any given time depends on the present as well as past inputs
Continued…
 Convolutional codes are commonly specified by three parameters (n, k, m)
 A convolutional encoder can be described by shift registers and modulo 2 adders
 The content of the shift registers determines the state of the encoder
 The coding rate of convolutional codes is given by R = k/n,
 Constraint length Lc = m+1
 A convolutional encoder has generator sequences (g), given in octal form and each
of length Lc.
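For example, for the (2, 1, 3) encoder used below (Lc = 4), the generator sequences (1 0 1 1) and (1 1 1 1) are written in octal as 13 and 17, respectively.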
Continued…
A (2, 1, 3) binary convolutional encoder contains a linear feedforward shift register
In the linear feedforward shift register, the information sequence u enters the encoder one
bit at a time
 The two encoder output sequences can be obtained by convolving the input
sequence u with the two encoder impulse responses (generator sequences)
Continued…
 So, the two encoder output sequences can be written as
v^(1) = (v_0^(1), v_1^(1), v_2^(1), …) and v^(2) = (v_0^(2), v_1^(2), v_2^(2), …)
 The two encoder impulse responses (generator sequences) can be written as
g^(1) = (g_0^(1), g_1^(1), …, g_m^(1)) and g^(2) = (g_0^(2), g_1^(2), …, g_m^(2))
The encoding equations can be written as v^(1) = u * g^(1) and v^(2) = u * g^(2), where * denotes mod-2 convolution
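A minimal Python sketch of this encoding step, assuming the (2, 1, 3) example worked a few slides below; the function names are illustrative:

```python
def mod2_convolve(u, g):
    """Discrete convolution of binary sequences u and g, reduced mod 2."""
    out = [0] * (len(u) + len(g) - 1)
    for i, ui in enumerate(u):
        for j, gj in enumerate(g):
            out[i + j] ^= ui & gj
    return out

def conv_encode(u, generators):
    """Encode u with each generator, then multiplex the outputs into one codeword."""
    streams = [mod2_convolve(u, g) for g in generators]
    codeword = []
    for bits in zip(*streams):          # v0(1) v0(2), v1(1) v1(2), ...
        codeword.extend(bits)
    return codeword

if __name__ == "__main__":
    u  = [1, 0, 1, 1, 1]
    g1 = [1, 0, 1, 1]                   # generator sequences of the (2, 1, 3) example
    g2 = [1, 1, 1, 1]
    print(mod2_convolve(u, g1))         # [1, 0, 0, 0, 0, 0, 0, 1]
    print(mod2_convolve(u, g2))         # [1, 1, 0, 1, 1, 1, 0, 1]
    print(conv_encode(u, [g1, g2]))     # multiplexed codeword V
```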
Continued…
After encoding, the two encoder output sequences are multiplexed into a single
sequence, called the codeword
 The codeword is given by
V = (v_0^(1) v_0^(2), v_1^(1) v_1^(2), v_2^(1) v_2^(2), v_3^(1) v_3^(2), v_4^(1) v_4^(2), …)
 Codeword V has length n(m + L), where L is the length of the input sequence and m
is the memory order
Continued…
 We can obtain the generator matrix by interlacing the generator sequences g^(1) and g^(2)
and arranging them in a matrix
Continued…
 Each row of G is identical to the preceding row but shifted n places to the right
 If u has finite length L, then G has L rows and n(m + L) columns, and V has length
n(m + L)
 The encoding equation can be written in matrix form as
V = u·G
 where G is the generator matrix of the code and u is the information sequence
Continued…
 In the general case of an (n, k, m) code, the generator matrix has the same banded structure,
where each G_l is a k × n submatrix
Continued…
 Example: u = (1 0 1 1 1), g^(1) = (1 0 1 1), g^(2) = (1 1 1 1). Find the transmitted
codeword.
 Solution
v^(1) = u * g^(1)
v^(1) = (1 0 1 1 1) * (1 0 1 1)
 v^(1) = (1 0 0 0 0 0 0 1)
Continued…
 v^(2) = u * g^(2)
 v^(2) = (1 0 1 1 1) * (1 1 1 1)
 v^(2) = (1 1 0 1 1 1 0 1)
Continued…
So, the transmitted codeword is obtained by taking the corresponding bits of
v^(1) and v^(2):
v^(1) = (1 0 0 0 0 0 0 1)
v^(2) = (1 1 0 1 1 1 0 1)
 V = (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1)
Continued…
The generator matrix G for this example is obtained by interlacing g^(1) and g^(2) and shifting each row n = 2 places to the right
Continued…
So, the transmitted codeword can also be obtained from the matrix form
V = u·G
V = (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1)
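A minimal sketch of the matrix form under the same assumptions: G is built by interlacing g^(1) and g^(2), shifting each row n = 2 places to the right, and V = u·G (mod 2) reproduces the codeword above (function names are illustrative):

```python
def generator_matrix(g1, g2, L):
    """Build the L x n(L+m) generator matrix of a rate-1/2 convolutional code
    by interlacing g1 and g2 and shifting each row n = 2 places to the right."""
    n, m = 2, len(g1) - 1
    row0 = [b for pair in zip(g1, g2) for b in pair]   # g0(1) g0(2) g1(1) g1(2) ...
    width = n * (L + m)
    G = []
    for i in range(L):
        row = [0] * (n * i) + row0
        row += [0] * (width - len(row))
        G.append(row)
    return G

def encode_with_matrix(u, G):
    """V = u.G over GF(2)."""
    width = len(G[0])
    return [sum(u[i] * G[i][j] for i in range(len(u))) % 2 for j in range(width)]

if __name__ == "__main__":
    u = [1, 0, 1, 1, 1]
    G = generator_matrix([1, 0, 1, 1], [1, 1, 1, 1], L=len(u))
    print(encode_with_matrix(u, G))   # matches V = (11, 01, 00, 01, 01, 01, 00, 11)
```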
Interleaving
An interleaver takes a given sequence of symbols and permutes their positions,
arranging them in a different temporal order
 To convert error patterns that contain long sequences of serial erroneous data into a
more random error pattern
 Distributing errors among many code vectors
 For a channel with memory and burst errors, rearranging the symbols causes the
bursts of channel errors to be spread out in time
Interleaving
To separate the codeword symbols in time:
 This codeword-symbol separation in time effectively transforms a channel with
memory into a memoryless one
 The basic goal of an interleaver is to randomize the data sequence
 This allows random-error-correcting codes to be useful in burst-noise channels
Continued…
There are three types of interleavers that are commonly used in communications:
 Block interleavers, convolutional interleavers, and pseudorandom interleavers
Block interleavers
 A block interleaver accepts the coded symbols in blocks from the encoder,
 shuffles the symbols, and then feeds the rearranged symbols to the modulator
 The shuffling of the block is accomplished by filling the columns of an M-row by
N-column (M×N) array with the encoded sequence
Continued…
After the array is filled, the symbols are then fed to the modulator one row at a time
and transmitted over the channel
 Input to the interleaver is column–wise, while output is row–wise
 At the receiver, the deinterleaver performs the inverse operation: the symbols are
entered by rows and removed one column at a time
Continued…
Example: we have the following data sequence before interleaving
The given data sequence is the input to a 4×4 block interleaver
 The output of the 4×4 block interleaver is as follows
Continued…
We have the following data sequence before deinterleaving
The given data sequence is the input to a 4×4 block deinterleaver
 The output of the 4×4 block deinterleaver is as follows
Continued…
The following sequence is the input to a 4×6 block interleaver
1101 0101 1111 0000 1010 1011
Let there be a noise burst of five symbol times, so that five consecutive transmitted
symbols experience errors in transmission
 Input the bit stream column by column
The output of the interleaver is then read row by row
101011 111000 001011 111001
Continued…
 The output of the interleaver (101011 111000 001011 111001) is the input to the
deinterleaver as follows
Then input the bit stream row by row
The output of the deinterleaver is then read column by column
1101 0101 1111 0000 1010 1011
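A minimal Python sketch of the M-row by N-column block interleaver and deinterleaver described above; running it on the 4×6 example reproduces the interleaved sequence and recovers the original data (function names are illustrative):

```python
def block_interleave(bits, rows, cols):
    """Write bits into a rows x cols array column by column, read row by row."""
    assert len(bits) == rows * cols
    array = [[None] * cols for _ in range(rows)]
    for k, b in enumerate(bits):
        array[k % rows][k // rows] = b      # fill column-wise
    return [array[r][c] for r in range(rows) for c in range(cols)]  # read row-wise

def block_deinterleave(bits, rows, cols):
    """Inverse operation: write row by row, read column by column."""
    assert len(bits) == rows * cols
    array = [[None] * cols for _ in range(rows)]
    for k, b in enumerate(bits):
        array[k // cols][k % cols] = b      # fill row-wise
    return [array[r][c] for c in range(cols) for r in range(rows)]  # read column-wise

if __name__ == "__main__":
    data = [int(b) for b in "110101011111000010101011"]   # 1101 0101 1111 0000 1010 1011
    tx = block_interleave(data, rows=4, cols=6)
    print("".join(map(str, tx)))        # 101011 111000 001011 111001
    rx = block_deinterleave(tx, rows=4, cols=6)
    print(rx == data)                   # True: original sequence recovered
```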
Continued…
 Convolutional Interleavers:-
 The code symbols are sequentially shifted into the bank of N registers
 Each successive register provides J symbols more storage than did the preceding one
 The zeroth register provides no storage; the symbol is transmitted immediately
 With each new code symbol, the commutator switches to a new register
 The new code symbol is shifted in while the oldest code symbol in that register is shifted
out to the modulator
Continued…
After the (𝑁 − 1)𝑡ℎ register, the commutator returns to the zeroth register and starts
again
 The deinterleaver performs the inverse operation
 The input and output commutators for both interleaving and deinterleaving must be
synchronized
The synchronized deinterleaver is shown simultaneously feeding the deinterleaved symbols
to the decoder
Continued…
 Convolutional Interleavers:-
The code symbols are sequentially shifted into the bank of N registers
Continued…
 Example: we have the following data sequence before interleaving
 A convolutional four-register (J = 1) interleaver is being loaded by a sequence of code
symbols
 Symbols 1 to 4 are being loaded; the x's represent unknown states
Continued…..
 Find the interleaving matrix by entering the bits column-wise
 We then add columns of x's to the left and right of the columns containing the bits
The output of the interleaver is then read diagonally, from the top-left of the message bits
to the bottom-left of the x's
Continued…..
1 x x x, 5 2 x x, 9 6 3 x, 13 10 7 4, x 14 11 8, x x 15 12, x x x 16
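A minimal Python simulation of the four-register (J = 1) structure described above, modeling each register as a FIFO of length i·J with unknown initial contents x; feeding in symbols 1 to 16 and then flushing reproduces the sequence above (function name is illustrative):

```python
from collections import deque

def convolutional_interleave(symbols, N=4, J=1, fill="x"):
    """Commutate symbols through N registers; register i delays its input
    by i*J commutator cycles. Initial register contents are unknown (fill)."""
    registers = [deque([fill] * (i * J)) for i in range(N)]
    out = []
    for k, sym in enumerate(symbols):
        reg = registers[k % N]              # commutator position
        if len(reg) == 0:                   # zeroth register: no storage
            out.append(sym)
        else:
            reg.append(sym)                 # shift the new symbol in ...
            out.append(reg.popleft())       # ... and the oldest symbol out
    return out

if __name__ == "__main__":
    N, J = 4, 1
    data = list(range(1, 17)) + ["x"] * ((N - 1) * N * J)   # symbols 1..16, then flush
    print(convolutional_interleave(data, N, J))
    # [1, 'x', 'x', 'x', 5, 2, 'x', 'x', 9, 6, 3, 'x', 13, 10, 7, 4, 'x', 14, 11, 8, ...]
```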
Application Area
Convolutional codes are used extensively in numerous applications in order to
achieve reliable data transfer
 Including digital video, radio, mobile communication, and satellite
communication
 These codes are often implemented in concatenation with a hard-decision code,
particularly Reed-Solomon codes, and are also used as constituent codes in turbo codes
Continued…..
This interleaving technique provides a high degree of robustness to variability in the
burst parameters
 Most block or convolutional codes are designed to combat random independent
errors
 The interleaver is one of the most important components used to construct a turbo code
 It enhances overall turbo code performance against burst errors
Thank you!!!!!
