Error Detection N Correction
Error Detection N Correction Presentation Transcript

  • 1. Error Detection & Correction
    -=namma Angels™=-
    » Niketha Dalmia
    » Neha Sharma
    » Chanchal Jalan
  • 2. The problem»
    Many factors can alter one or more bits of a message during transmission.
    Single-bit error
    Burst error (2 or more bits altered)
    Solution?
    Detect whether there was an error.
    It is easy to determine if an error has occurred.
    Simple answer: Y/N.
    Correction:
    Requires knowledge of the exact number of corrupted bits.
    Requires knowledge of the exact position of each error.
  • 3. So how do we correct?»
    Retransmission
    Detect an error.
    Repeatedly request the sender to resend.
    Forward Error Correction
    The receiver tries to guess the correct code word.
    Coding:
    Block coding
    Convolutional coding (Wikipedia definition included)
    In telecommunication, a convolutional code is a type of error-correcting code in which (a) each m-bit information symbol (each m-bit string) to be encoded is transformed into an n-bit symbol, where m/n is the code rate (n ≥ m), and (b) the transformation is a function of the last k information symbols, where k is the constraint length of the code.
  • 4. Coding
    The addition of redundant bits.
    The concept of block coding:
    Each k-bit message is a dataword.
    Add r redundant bits.
    The resulting n = k + r bit blocks are codewords.
    The number of possible codewords is larger than the number of possible datawords.
    2^n – 2^k codewords are not used.
    These are invalid, and they help in error detection.
  • 5. Our own C++ implementation of error detection and correction.
    A few terms to grasp first.
  • 6. Hamming code:
    It is easy to determine the number of bit positions in which two sequences A and B differ; for example, 1010101 and 1111111 differ in 3 bits.
    The number of bits in which two codewords differ is called the Hamming distance.
    Note: for two codewords with Hamming distance d, a total of d single-bit errors are needed to convert one codeword into the other.
    The Hamming distance of a complete code (a code consists of a number of codewords) is the minimum Hamming distance between any two codewords of that code.
  • 7. A simple example:
    A codeword is a binary sequence used to represent an item.
    ASCII is one such example, where a 7- or 8-bit code is used to represent a character.
    Note that the Hamming distance of ASCII is 1, i.e. changing one bit of an ASCII code will result in another valid ASCII code.
  • 8. Detection/correction by Hamming code:
    The error-detecting and error-correcting properties of a code depend upon its Hamming distance.
    To detect d errors, a Hamming distance of d+1 is needed (thereby ensuring that it is impossible for d errors to change one valid codeword into another valid codeword).
    To correct d errors, a Hamming distance of 2d+1 is needed (thereby ensuring that after d errors the original codeword is still the closest match to the corrupted signal, and hence the best match).
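Rearranged, the two conditions above say that a code of minimum distance dmin can detect dmin − 1 errors and correct ⌊(dmin − 1)/2⌋ errors. A minimal sketch (the function names are mine, not from the slides):

```cpp
// To detect d errors requires distance d + 1, so a code of minimum
// distance dmin can detect dmin - 1 errors.
int detectableErrors(int dmin) { return dmin - 1; }

// To correct d errors requires distance 2d + 1, so a code of minimum
// distance dmin can correct (dmin - 1) / 2 errors (integer division).
int correctableErrors(int dmin) { return (dmin - 1) / 2; }
```

With dmin = 3 (the minimum distance of the Hamming(7,4) code introduced next), these give: detect 2 errors, correct 1.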
  • 9. The Hamming(7,4) algorithm:
    4-bit dataword » 7-bit codeword, by adding 3 redundant bits (anywhere).
    This is done by the encoding algorithm.
    The received codeword is decoded.
    Errors of up to 2 bits can be detected,
    but only errors of up to 1 bit can be corrected.
    (Recall the mechanism of Hamming codes.)
    Bit positions: P1 P2 P3 D1 D2 D3 D4
  • 10. Encoding:
    p1 = d2 + d3 + d4
    p2 = d1 + d3 + d4
    p3 = d1 + d2 + d4
    Each of the three parity bits is the parity of three of the four data bits, and no two parity bits cover the same three data bits. All of the parity bits are even parity.
    There is a fourth equation for a parity bit that may be used in Hamming codes: p4 = d1 + d2 + d3. Any three of the four may be used.
    One method for transforming four bits of data into a seven-bit Hamming code word is to use a 4×7 generator matrix [G]. Define d to be the 1×4 vector [d1 d2 d3 d4].
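The three parity equations above translate directly into code, with modulo-2 addition implemented as XOR. This is an illustrative sketch of the parity step, not the presentation's own C++ source (which the transcript does not include):

```cpp
#include <array>

// Compute the three even-parity bits from a 4-bit dataword
// d = [d1 d2 d3 d4], using the slide's equations (addition mod 2 = XOR):
//   p1 = d2 + d3 + d4
//   p2 = d1 + d3 + d4
//   p3 = d1 + d2 + d4
std::array<int, 3> parityBits(const std::array<int, 4>& d) {
    int p1 = d[1] ^ d[2] ^ d[3];
    int p2 = d[0] ^ d[2] ^ d[3];
    int p3 = d[0] ^ d[1] ^ d[3];
    return {p1, p2, p3};
}
```

For the dataword 1010 this yields p1 = 1, p2 = 0, p3 = 1, consistent with the worked example on slide 13.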
  • 11. So why a generator matrix?
    We could have kept it simple:
    But after thorough analysis we realized…
    The 4×7 generator matrix prevails.
                       | 0 1 1 1 |
    | 1 0 1 0 |  ×  (  | 1 0 1 1 |  )ᵀ  =  | p1 p2 p3 |
                       | 1 1 0 1 |
    (parity equation matrix: row i lists the data bits that feed pi; multiplication is mod 2)
  • 12. The 4×7 generator matrix:
    Encoding each of the four unit datawords gives one column vector (rows d1–d4, then p1–p3):
    d1 = | 1 |      | 0 |      | 0 |      | 0 |
    d2 = | 0 |      | 1 |      | 0 |      | 0 |
    d3 = | 0 |      | 0 |      | 1 |      | 0 |
    d4 = | 0 |      | 0 |      | 0 |      | 1 |
    p1 = | 0 |      | 1 |      | 1 |      | 1 |
    p2 = | 1 |      | 0 |      | 1 |      | 1 |
    p3 = | 1 |      | 1 |      | 0 |      | 1 |
    Arrange the column vectors from the previous steps into a 4×7 matrix such that the columns are ordered to match their corresponding bits in a code word. Using the vectors from the previous steps, the following will produce code words of the form [p1 p2 p3 d1 d2 d3 d4]:
        | 0 1 1 1 0 0 0 |
    G = | 1 0 1 0 1 0 0 |
        | 1 1 0 0 0 1 0 |
        | 1 1 1 0 0 0 1 |
    Arranging the columns in any other order will just change the positions of bits in the code word.
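The full encoding step, c = d·G over GF(2), can be sketched as follows. This is my illustration of the matrix method described above, using the G matrix from this slide (the presentation's actual code is not in the transcript):

```cpp
#include <array>

// Encode a 4-bit dataword into a 7-bit codeword of the form
// [p1 p2 p3 d1 d2 d3 d4] by multiplying d (1x4) with the 4x7
// generator matrix G, with arithmetic mod 2 (XOR for add, AND for multiply).
std::array<int, 7> encode(const std::array<int, 4>& d) {
    static const int G[4][7] = {
        {0, 1, 1, 1, 0, 0, 0},  // row for d1
        {1, 0, 1, 0, 1, 0, 0},  // row for d2
        {1, 1, 0, 0, 0, 1, 0},  // row for d3
        {1, 1, 1, 0, 0, 0, 1},  // row for d4
    };
    std::array<int, 7> c{};
    for (int j = 0; j < 7; ++j)
        for (int i = 0; i < 4; ++i)
            c[j] ^= d[i] & G[i][j];
    return c;
}
```

encode({1, 0, 1, 0}) produces {1, 0, 1, 1, 0, 1, 0}, i.e. 1010 encodes to 1011010, matching the example on the next slide.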
  • 13. Example:
    So 1010 encodes to 1011010. Equivalent Hamming codes represented by different generator matrices will produce different results.
    Let's test the C++ encoding code we wrote (not copied).
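The transcript stops at encoding, but the single-error correction the deck promises follows from the same parity equations: recomputing each parity over a received word gives a 3-bit syndrome, and a nonzero syndrome matches exactly one column of the parity-check matrix, locating the flipped bit. The H matrix below is my reconstruction from the slide's equations, not code from the presentation:

```cpp
#include <array>

// Correct up to one flipped bit in a received 7-bit word
// r = [p1 p2 p3 d1 d2 d3 d4]. Each row of H re-evaluates one parity
// equation; the resulting syndrome (s1 s2 s3) is zero for a valid
// codeword, and otherwise equals the H column of the corrupted bit.
std::array<int, 7> correct(std::array<int, 7> r) {
    static const int H[3][7] = {
        {1, 0, 0, 0, 1, 1, 1},  // checks p1 = d2 + d3 + d4
        {0, 1, 0, 1, 0, 1, 1},  // checks p2 = d1 + d3 + d4
        {0, 0, 1, 1, 1, 0, 1},  // checks p3 = d1 + d2 + d4
    };
    int s[3] = {0, 0, 0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 7; ++j)
            s[i] ^= H[i][j] & r[j];           // syndrome, mod 2
    if (s[0] || s[1] || s[2])                 // nonzero: find the matching
        for (int j = 0; j < 7; ++j)           // column and flip that bit
            if (H[0][j] == s[0] && H[1][j] == s[1] && H[2][j] == s[2])
                r[j] ^= 1;
    return r;
}
```

Flipping any single bit of the codeword 1011010 and passing the result through correct() recovers 1011010; a clean codeword passes through unchanged.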