Channel Coding (Error Control Coding)

PowerPoint presentation
By: Ola Mashaqi, Suhad Malayshe, Mais Masri
An-Najah National University
Telecommunications Engineers


  1. Error Control Coding. Prepared by: Ola Mashaqi, Suhad Malayshe, Mais Masri. Submitted to: Dr. Allam Mousa
  2. Definition of channel coding: Error control coding detects, and often corrects, symbols that are received in error. The channel encoder separates or segments the incoming bit stream into equal-length blocks of L binary digits and maps each L-bit message block into an N-bit codeword, where N > L. There are M = 2^L messages and therefore 2^L codewords of length N bits. The channel decoder has the task of detecting that a bit error has occurred and, if possible, correcting it.
  3. ARQ (Automatic Repeat Request): If the channel decoder performs error detection, then errors can be detected, and a feedback channel from the channel decoder to the channel encoder can be used to control retransmission of the codeword until it is received without detectable errors. There are two major ARQ techniques: stop-and-wait and continuous ARQ. FEC (Forward Error Correction): If the channel decoder performs error correction, then errors are not only detected but the bits in error can be identified and corrected (by bit inversion).
  4. There are two major ARQ techniques: stop-and-wait, in which each block of data is positively or negatively acknowledged by the receiving terminal as being error free before the next data block is transmitted, and continuous ARQ, in which blocks of data continue to be transmitted without waiting for each previous block to be acknowledged.
  5. Error Control Coding (Channel Coding). Particular error control methods: linear group codes, cyclic codes, the Golay code, BCH codes, Reed–Solomon codes and Hamming codes.
  6. Block coding vs. convolutional coding. Block coding: the (n, k) block code converts k bits of the message signal into n bits. It is called a block code because it takes a number of bits from the message (information digits), adds redundant bits (parity digits) to them, and does the same for the rest of the bit stream, block by block. Convolutional coding: encodes a stream of data rather than blocks of data; the output bit sequence of a convolutional code depends not only on the current data bits but also on previous data bits, as illustrated in the sketch below.
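To make the dependence on previous bits concrete, here is a minimal base-MATLAB sketch of a rate-1/2 convolutional encoder. The constraint length of 3, the generator taps (octal 7 and 5) and the example message are illustrative assumptions, not parameters taken from the slides:

     msg = [1 0 1 1 0];          % example message bits (assumed)
     g1  = [1 1 1];              % generator taps, octal 7 (assumed)
     g2  = [1 0 1];              % generator taps, octal 5 (assumed)
     state = [0 0];              % shift register holding the two previous bits
     out = [];
     for b = msg
         window = [b state];                 % current bit plus previous bits
         v1 = mod(sum(window .* g1), 2);     % first output bit
         v2 = mod(sum(window .* g2), 2);     % second output bit
         out = [out v1 v2];                  % two coded bits per message bit
         state = [b state(1)];               % shift the register
     end
     disp(out)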
  7. Error rate control concepts. How do we measure error performance? The answer is the bit error rate (BER), i.e., the probability of bit error Pb; the average rate at which errors occur is given by the product Pb·Rb, where Rb is the bit transmission rate in the channel (for example, Pb = 10^-6 at Rb = 10^6 bit/s gives on average one error per second). But if the BER is too large, what can be done to make it smaller? Increase transmitter power (not efficient); use diversity: frequency diversity employs two different frequencies to transmit the same information, while in time diversity the same message is transmitted more than once at different times; introduce full-duplex transmission, implying simultaneous two-way transmission; or use ARQ and FEC.
  8. Hamming distance. The Hamming distance between two codewords is defined as the number of places (bits or digits) in which they differ. The distance is an important factor since it indicates how easily one valid codeword can be changed into another. The weight of a codeword is defined as the number of ones in the codeword. Example: calculate the Hamming distance and weights of the codewords 11100 and 11011. The Hamming distance is 3 bits, since 11100 can be changed into 11011 by inverting 3 bits. The weight of codeword 1 is 3, the weight of codeword 2 is 4, and the minimum codeword weight is 3. A quick check in MATLAB follows below.
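A minimal base-MATLAB check of the distance and weight values above, using the two codewords from the example (no toolbox functions are needed):

     c1 = [1 1 1 0 0];                 % codeword 11100
     c2 = [1 1 0 1 1];                 % codeword 11011
     d  = sum(c1 ~= c2);               % Hamming distance: positions that differ -> 3
     w1 = sum(c1);                     % weight of codeword 1 -> 3
     w2 = sum(c2);                     % weight of codeword 2 -> 4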
  9. (n, k) block codes: k information digits go into the coder and n digits come out, after (n − k) redundant parity-check digits have been added. The rate, or efficiency, of this code is R = k/n; the rate is normally in the range 1/2 to unity.
  10. Linear group codes. Group codes contain the all-zeros codeword and have the property referred to as closure: taking any two codewords Ci and Cj, the sum Ci ⊕ Cj = Ck is also a codeword. Advantage: this makes performance calculations with linear group codes particularly easy.
  11. Example: a is the all-zeros codeword, and b ⊕ c = d, c ⊕ d = b, b ⊕ d = c.
  12. Performance prediction. Hamming distances are the measure used to determine the overall performance of a block code, and comparing each codeword with the all-zeros codeword is sufficient. Example: the (5,2) code with codewords 00000, 00111, 11100, 11011 has Dmin = 3. Considering the four codewords, their weights are 0, 3, 3 and 4; the minimum nonzero weight in the weight structure (3) is equal to Dmin, the minimum Hamming distance of the code. This is checked in the sketch below.
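A minimal sketch, assuming the four (5,2) codewords listed on the slide, that checks both the closure property and Dmin from the weight structure:

     C = [0 0 0 0 0;                    % the four codewords of the (5,2) code
          0 0 1 1 1;
          1 1 1 0 0;
          1 1 0 1 1];
     w = sum(C, 2);                     % weight structure: 0, 3, 3, 4
     Dmin = min(w(w > 0));              % minimum nonzero weight -> 3
     % Closure check: the mod-2 sum of any two codewords is again a codeword.
     s = mod(C(2,:) + C(3,:), 2);       % 00111 + 11100 = 11011, which is row 4
     isCodeword = ismember(s, C, 'rows')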
  13. Performance prediction (note on the original slide, in Arabic: "review!!!! — this slide can be deleted"). Consider the probability of the ith codeword Ci being misinterpreted as the jth codeword Cj. This probability depends on the distance between the two codewords, Dij. For a linear code, Dij is equal to the weight of a third codeword Ck, so the probability of Ci being mistaken for Cj is equal to the probability of Ck being mistaken for C0, and the probability of C0 being misinterpreted as Ck depends only on the weight of Ck. The performance of such a code can therefore be determined completely from (1) consideration of C0 and (2) the weight structure alone.
  14. Error detection and correction capability. t: the maximum number of errors the code can correct. Dmin: the minimum Hamming distance. e: the number of errors the code is able to detect. t ≤ e.
  15. Error detection and correction capability. With Dmin = 3, the code can operate with e = 1 and t = 1; for example, 11001 and 11000 differ in a single bit, and if any single error occurs in one of the codewords it can be corrected. If there is no error correction, Dmin − 1 errors can be detected. Longer codes with larger Hamming distances offer greater detection and correction capability by selecting different values of t and e, as sketched below.
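A hedged sketch of the standard relations consistent with the slide's Dmin = 3 example: detection-only gives e = Dmin − 1, correction-only gives t = floor((Dmin − 1)/2), and a combined split must satisfy Dmin ≥ t + e + 1 with e ≥ t:

     Dmin = 3;
     e_detect_only  = Dmin - 1;              % 2 errors detectable (no correction)
     t_correct_only = floor((Dmin - 1)/2);   % 1 error correctable
     % Combined mode as used on the slide: t = 1, e = 1 satisfies Dmin >= t + e + 1.
     t = 1; e = 1;
     ok = (Dmin >= t + e + 1)                % true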
  16. Standard. The UK Post Office Code Standards Advisory Group (POCSAG) code has k = 21 and n = 32, so R ≈ 2/3, and Dmin = 6, giving 3-bit detection and 2-bit correction capability.
  17. Syndrome decoding. The syndrome is independent of the transmitted codeword and depends only on the error sequence. A decoding table tells a decoder how to correct errors that might have corrupted the codeword during transmission. Error location → syndrome: 0000000 → 000, 1000000 → 111, 0100000 → 011, 0010000 → 101, 0001000 → 110, 0000100 → 100, 0000010 → 010, 0000001 → 001. This table is reproduced in the sketch below.
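The table can be reproduced by multiplying each single-error pattern by a parity-check matrix whose columns are the listed syndromes; the H below is inferred from the table itself and is an assumption, not necessarily the matrix used in the original presentation:

     % H inferred from the table (column i is the syndrome for an error in bit i).
     H = [1 0 1 1 1 0 0;
          1 1 0 1 0 1 0;
          1 1 1 0 0 0 1];
     for i = 1:7
         e = zeros(1, 7);  e(i) = 1;        % single-error pattern
         s = mod(H * e', 2)';               % syndrome = i-th column of H
         fprintf('%s -> %s\n', num2str(e, '%d'), num2str(s, '%d'));
     end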
  18. Syndrome decoding. d is a message vector of k digits, G is the k × n generator matrix, and c is the n-digit codeword corresponding to the message d: dG = c. Furthermore, Hc = 0, where H is the (even) parity-check matrix corresponding to G.
  19. Syndrome decoding. r = c ⊕ e, where r is the sequence received after transmitting c and e is an error vector representing the locations of the errors that occur in the received sequence r. The syndrome vector s is easily calculated: s = Hr = H(c ⊕ e) = Hc ⊕ He = 0 ⊕ He = He. This is illustrated numerically below.
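A short numerical illustration of s = Hr = He, assuming the H inferred from the syndrome table above and a matching systematic (7,4) generator matrix G; the message and error pattern are arbitrary choices for the example:

     G = [1 0 0 0 1 1 1;
          0 1 0 0 0 1 1;
          0 0 1 0 1 0 1;
          0 0 0 1 1 1 0];
     H = [1 0 1 1 1 0 0;
          1 1 0 1 0 1 0;
          1 1 1 0 0 0 1];
     d = [1 0 1 1];                    % message (assumed)
     c = mod(d * G, 2);                % codeword, so mod(H*c',2) is all zeros
     e = [0 0 1 0 0 0 0];              % single error in bit 3 (assumed)
     r = mod(c + e, 2);                % received word r = c XOR e
     s = mod(H * r', 2)'               % syndrome: equals mod(H*e',2)', the 3rd column of H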
  20. The generator matrix (G): the generator matrix G for an (n, k) block code can be used to generate the appropriate n-digit codeword from any given k-digit data sequence. The parity-check matrix (H) does not contain any codewords. For the (7,4) block code H matrix, the right-hand side of G is the transpose of the left-hand portion of H. The parity-check section must contain at least two ones per row, and its rows cannot be identical.
  21. G is the k × n generator matrix; the right-hand side of G is the transpose of the left-hand portion of H, as verified in the sketch below.
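The transpose relationship can be checked directly: writing G = [I_k P] and H = [P' I_(n−k)], the product GH' is the all-zeros matrix modulo 2. A minimal sketch assuming the same (7,4) parity section P as in the examples above:

     P = [1 1 1;                        % parity section of G: each row has at least two ones,
          0 1 1;                        % and no two rows are identical
          1 0 1;
          1 1 0];
     G = [eye(4) P];                    % G = [I_k  P]
     H = [P' eye(3)];                   % H = [P'  I_(n-k)]: left part of H is the transpose of P
     check = mod(G * H', 2)             % all-zeros 4x3 matrix, so Hc' = 0 for every codeword c = dG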
  22. Example.
  23. Use the following syntax to produce a syndrome decoding table: t = syndtable(h) returns a decoding table for an error-correcting binary code having codeword length n and message length k. See http://www.mathworks.com/help/comm/ref/syndtable.html — a usage sketch follows below.
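Following the linked MathWorks documentation, a typical usage pairs syndtable with hammgen, which returns the parity-check matrix of a Hamming code (both are Communications Toolbox functions); the m = 3 case here is just an illustration:

     m = 3;                      % number of parity bits
     h = hammgen(m);             % parity-check matrix of the (7,4) Hamming code
     t = syndtable(h);           % 2^m-by-7 table: row i is the error pattern for syndrome i-1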
  24. Applications. Error control coding, generally, is applied widely in control and communications systems for aerospace applications, in mobile (GSM) cellular telephony, and for enhancing security in banking and barcode readers.
  25. MATLAB:
     n = 6; k = 4;                          % Set codeword length and message length for a [6,4] code.
     msg = [1 0 0 1 1 0 1 0 1 0 1 1]';      % Message is a binary column.
     code = encode(msg,n,k,'cyclic');       % Code will be a binary column.
     msg'
     code'
  26. msg consists of 12 entries, which are interpreted as three 4-digit (because k = 4) messages. The resulting vector code comprises three 6-digit (because n = 6) codewords, which are concatenated to form a vector of length 18. The parity bits are at the beginning of each codeword. A matching decode sketch follows below.
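As a hedged extension of the slide's example, the matching decode call should recover the original messages, assuming the same toolbox encode/decode pair and an error-free channel:

     n = 6; k = 4;
     msg = [1 0 0 1 1 0 1 0 1 0 1 1]';
     code = encode(msg, n, k, 'cyclic');
     decoded = decode(code, n, k, 'cyclic');   % decode the three 6-bit codewords
     isequal(decoded, msg)                     % expected to return 1 (true) with no channel errors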
