
Error Detection and Correction - Data link Layer


Networks must be able to transfer data from one device to another with acceptable
accuracy. For most applications, a system must guarantee that the data received are
identical to the data transmitted. Any time data are transmitted from one node to the
next, they can become corrupted in passage. Many factors can alter one or more bits of
a message. Some applications require a mechanism for detecting and correcting errors.
Some applications can tolerate a small level of error. For example, random errors
in audio or video transmissions may be tolerable, but when we transfer text, we expect
a very high level of accuracy.
At the data-link layer, if a frame is corrupted between the two nodes, it needs to be
corrected before it continues its journey to other nodes. However, most link-layer protocols
simply discard the frame and let the upper-layer protocols handle the retransmission
of the frame. Some multimedia applications, however, try to correct the corrupted frame.
This chapter is divided into five sections.
❑ The first section introduces types of errors, the concept of redundancy, and distinguishes
between error detection and correction.
❑ The second section discusses block coding. It shows how errors can be detected
using block coding and also introduces the concept of Hamming distance.
❑ The third section discusses cyclic codes. It discusses a subset of cyclic code, CRC,
that is very common in the data-link layer. The section shows how CRC can be
easily implemented in hardware and represented by polynomials.
❑ The fourth section discusses checksums. It shows how a checksum is calculated for
a set of data words. It also gives some other approaches to traditional checksum.
❑ The fifth section discusses forward error correction. It shows how Hamming distance
can also be used for this purpose. The section also describes cheaper methods
to achieve the same goal, such as XORing of packets, interleaving chunks, or
compounding high- and low-resolution packets.



  1. 1. Error Detection and Correction Presented by: Abdulaziz Mabrouk Tagawy
  2. 2. INTRODUCTION Types of Errors: Whenever bits flow from one point to another, they are subject to unpredictable changes because of interference. This interference can change the shape of the signal.
  3. 3. INTRODUCTION Types of Errors: The term single-bit error means that only 1 bit of a given data unit (such as a byte, character, or packet) is changed from 1 to 0 or from 0 to 1.
  4. 4. INTRODUCTION Types of Errors: The term burst error means that 2 or more bits in the data unit have changed from 1 to 0 or from 0 to 1.
  5. 5. INTRODUCTION Redundancy: The central concept in detecting or correcting errors is redundancy. To be able to detect or correct errors, we need to send some extra bits with our data. These redundant bits are added by the sender and removed by the receiver. Their presence allows the receiver to detect or correct corrupted bits.
  6. 6. INTRODUCTION Detection versus Correction: The correction of errors is more difficult than detection. In error detection, we are only looking to see if any error has occurred; the answer is a simple yes or no. We are not even interested in the number of corrupted bits.
  7. 7. INTRODUCTION Detection versus Correction: In error correction, we need to know the exact number of bits that are corrupted and, more importantly, their location in the message. The number of errors and the size of the message are important factors.
  8. 8. INTRODUCTION Coding: Redundancy is achieved through various coding schemes. The ratio of redundant bits to data bits and the robustness of the process are important factors in any coding scheme. Coding schemes can be divided into two broad categories: block coding and convolution coding.
  9. 9. BLOCK CODING In block coding, we divide the message into blocks, each of k bits, called datawords. We then add r redundant bits to each block to make the length n = k + r. The resulting n-bit blocks are called codewords. If the receiver receives an invalid codeword, this indicates that the data was corrupted during transmission.
  10. 10. BLOCK CODING Error Detection: The receiver can detect a change in the original codeword if the following two conditions are met: 1. The receiver has (or can find) a list of valid codewords. 2. The original codeword has changed to an invalid one.
  11. 11. BLOCK CODING
  12. 12. BLOCK CODING Hamming Distance: The Hamming distance between two words (of the same size) is the number of differences between the corresponding bits. We denote the Hamming distance between two words x and y as d(x, y). For example, the Hamming distance d(00000, 01101) is 3. If the Hamming distance between the sent and the received codeword is not zero, the codeword has been corrupted during transmission.
  13. 13. BLOCK CODING Hamming Distance: It can easily be found by applying the XOR operation (⊕) to the two words and counting the number of 1s in the result.
  14. 14. BLOCK CODING Hamming Distance, example (see the sketch below): Let us find the Hamming distance between two pairs of words. 1. The Hamming distance d(000, 011) is 2 because (000 ⊕ 011) is 011 (two 1s). 2. The Hamming distance d(10101, 11110) is 3 because (10101 ⊕ 11110) is 01011 (three 1s).
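A minimal Python sketch (not part of the original slides) of the XOR-based Hamming distance calculation; the two word pairs are the ones from the example above.

```python
def hamming_distance(x: str, y: str) -> int:
    """Number of positions in which two equal-length bit strings differ."""
    assert len(x) == len(y), "Hamming distance is defined for words of the same size"
    # Counting differing bit positions is the same as counting the 1s in x XOR y.
    return sum(a != b for a, b in zip(x, y))

# The two pairs from the slide:
assert hamming_distance("000", "011") == 2
assert hamming_distance("10101", "11110") == 3
```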
  15. 15. BLOCK CODING Minimum Hamming Distance for Error Detection: The minimum Hamming distance is the smallest Hamming distance between all possible pairs of codewords.
  16. 16. BLOCK CODING Minimum Hamming Distance for Error Detection, example: The minimum Hamming distance for the code scheme in the table is 2. This code guarantees detection of only a single error. For example, if the third codeword (101) is sent and one error occurs, the received codeword does not match any valid codeword. If two errors occur, however, the received codeword may match a valid codeword and the errors are not detected.
  17. 17. BLOCK CODING Minimum Hamming Distance for Error Detection, example: A code scheme has a minimum Hamming distance dmin = 4. This code guarantees the detection of up to three errors (dmin = s + 1, so s = 3).
  18. 18. BLOCK CODING Linear Block Codes: Almost all block codes used today belong to a subset of block codes called linear block codes. The use of nonlinear block codes for error detection and correction is not as widespread because their structure makes theoretical analysis and implementation difficult.
  19. 19. BLOCK CODING Linear Block Codes, example: The code in the table is a linear block code because the result of XORing any codeword with any other codeword is a valid codeword. For example, XORing the second and third codewords creates the fourth one.
  20. 20. BLOCK CODING Linear Block Codes: The minimum Hamming distance of a linear block code is the number of 1s in the nonzero valid codeword with the smallest number of 1s. Example: in the table, the numbers of 1s in the nonzero codewords are 2, 2, and 2, so the minimum Hamming distance is dmin = 2 (a code sketch follows).
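As a rough illustration (the table contents are an assumption: the even-parity code {000, 011, 101, 110}, consistent with the codeword 101 mentioned on slide 16), a sketch that computes dmin both by checking all pairs and, for a linear code, by taking the minimum weight of the nonzero codewords.

```python
from itertools import combinations

def d_min(codewords):
    """Smallest Hamming distance over all distinct pairs of codewords."""
    return min(sum(a != b for a, b in zip(x, y))
               for x, y in combinations(codewords, 2))

code = ["000", "011", "101", "110"]                      # assumed even-parity code table
min_weight = min(c.count("1") for c in code if "1" in c) # smallest number of 1s in a nonzero codeword
assert d_min(code) == min_weight == 2                    # dmin = 2: detects any single-bit error
```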
  21. 21. BLOCK CODING Parity-Check Code: It is a linear block code in which a k-bit dataword is changed to an n-bit codeword where n = k + 1. The extra bit, called the parity bit, is selected to make the total number of 1s in the codeword even (although some implementations specify an odd number of 1s).
  22. 22. BLOCK CODING Parity-Check Code: The minimum Hamming distance for this category is dmin = 2, so it is a single-bit error-detecting code.
  23. 23. BLOCK CODING
  24. 24. BLOCK CODING
  25. 25. BLOCK CODING Parity-Check Code, example: Let us look at some transmission scenarios (a code sketch follows this list). Assume the sender sends the dataword 1011. The codeword created from this dataword is 10111, which is sent to the receiver. We examine five cases:
     1. No error occurs; the received codeword is 10111. The syndrome is 0. The dataword 1011 is created.
     2. One single-bit error changes a1. The received codeword is 10011. The syndrome is 1. No dataword is created.
     3. One single-bit error changes r0. The received codeword is 10110. The syndrome is 1. No dataword is created. Note that although none of the dataword bits are corrupted, no dataword is created because the code is not sophisticated enough to show the position of the corrupted bit.
     4. An error changes r0 and a second error changes a3. The received codeword is 00110. The syndrome is 0. The dataword 0011 is wrongly created at the receiver: the simple parity-check decoder cannot detect an even number of errors because the errors cancel each other out and leave the syndrome at 0.
     5. Three bits (a3, a2, and a1) are changed by errors. The received codeword is 01011. The syndrome is 1. The dataword is not created. This shows that the simple parity check, guaranteed to detect one single error, can also find any odd number of errors.
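A minimal sketch of the even-parity encoder and decoder described above (the helper names parity_encode and parity_decode are my own); it reproduces cases 1, 2, and 4 from the list.

```python
def parity_encode(dataword: str) -> str:
    """Append one parity bit so the codeword has an even number of 1s."""
    return dataword + str(dataword.count("1") % 2)

def parity_decode(codeword: str):
    """Return the dataword if the syndrome is 0; otherwise discard it (return None)."""
    syndrome = codeword.count("1") % 2
    return codeword[:-1] if syndrome == 0 else None

assert parity_encode("1011") == "10111"
assert parity_decode("10111") == "1011"    # case 1: no error
assert parity_decode("10011") is None      # case 2: one error, caught
assert parity_decode("00110") == "0011"    # case 4: two errors cancel, wrongly accepted
```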
  26. 26. BLOCK CODING Parity-Check Code:
  27. 27. CYCLIC CODES Cyclic codes are special linear block codes with one extra property: if a codeword is cyclically shifted (rotated), the result is another codeword. Example: if 1011000 is a codeword and we cyclically left-shift it, then 0110001 is also a codeword.
  28. 28. CYCLIC CODES Cyclic Redundancy Check (CRC): It is used in networks such as LANs and WANs.
  29. 29. CYCLIC CODES Cyclic Redundancy Check (CRC) :
  30. 30. CYCLIC CODES Cyclic Redundancy Check (CRC): In the encoder, the dataword has k bits (4 here) and the codeword has n bits (7 here). The size of the dataword is augmented by adding n - k (3 here) 0s to the right-hand side of the word. The n-bit result is fed into the generator. The generator uses a divisor of size n - k + 1 (4 here), predefined and agreed upon. The generator divides the augmented dataword by the divisor (modulo-2 division). The quotient of the division is discarded; the remainder (r2r1r0) is appended to the dataword to create the codeword.
  31. 31. CYCLIC CODES Cyclic Redundancy Check (CRC): The decoder receives the codeword (possibly corrupted in transition). A copy of all n bits is fed to the checker, which is a replica of the generator. The remainder produced by the checker is a syndrome of n - k (3 here) bits, which is fed to the decision logic analyzer. The analyzer has a simple function: if the syndrome bits are all 0s, the 4 leftmost bits of the codeword are accepted as the dataword (interpreted as no error); otherwise, the 4 bits are discarded (error). A code sketch follows the encoder and decoder slides.
  32. 32. CYCLIC CODES Encoder:
  33. 33. CYCLIC CODES Decoder:
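A rough Python sketch of the CRC generator and checker described on the previous slides, using modulo-2 (XOR) division. The dataword 1001 and the divisor 1011 are assumed example values for the k = 4, n = 7 code; they are not taken from the slides' figures.

```python
def crc_remainder(bits: str, divisor: str) -> str:
    """Remainder of dividing `bits` by `divisor` using modulo-2 (XOR) division."""
    n_k = len(divisor) - 1                    # number of remainder (syndrome) bits
    value, div = int(bits, 2), int(divisor, 2)
    for shift in range(len(bits) - len(divisor), -1, -1):
        if (value >> (shift + n_k)) & 1:      # leading bit is 1: "subtract" (XOR) the divisor
            value ^= div << shift
    return format(value, f"0{n_k}b")

def crc_encode(dataword: str, divisor: str) -> str:
    """Augment the dataword with n-k zeros, divide, and append the remainder."""
    return dataword + crc_remainder(dataword + "0" * (len(divisor) - 1), divisor)

def crc_check(codeword: str, divisor: str) -> bool:
    """Receiver side: accept only if the syndrome (remainder) is all zeros."""
    return set(crc_remainder(codeword, divisor)) == {"0"}

codeword = crc_encode("1001", "1011")      # "1001" + remainder "110"
assert codeword == "1001110"
assert crc_check(codeword, "1011")         # syndrome 000: accepted
assert not crc_check("1011110", "1011")    # a single corrupted bit is detected
```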
  34. 34. CYCLIC CODES Polynomials: A pattern of 0s and 1s can be represented as a polynomial with coefficients of 0 and 1. The power of each term shows the position of the bit; the coefficient shows the value of the bit.
  35. 35. CYCLIC CODES Polynomials:
  36. 36. CYCLIC CODES Polynomials: The degree of a polynomial is the highest power in the polynomial; for example, the degree of x^6 + x + 1 is 6. Adding and subtracting polynomials is done by adding or subtracting the coefficients of terms with the same power. Example: adding x^5 + x^4 + x^2 and x^6 + x^4 + x^2 gives just x^6 + x^5; the terms x^4 and x^2 are deleted.
  37. 37. CYCLIC CODES Polynomials: Multiplying a term by another term is very simple; we just add the powers. Example: x^4 times x^3 is x^7. For dividing, we just subtract the power of the second term from the power of the first. For example, x^5 / x^2 is x^3. Example: (x^5 + x^3 + x^2 + x)(x^2 + x + 1) = x^7 + x^6 + x^5 + x^5 + x^4 + x^3 + x^4 + x^3 + x^2 + x^3 + x^2 + x = x^7 + x^6 + x^3 + x.
  38. 38. CYCLIC CODES Polynomials, Shifting: A binary pattern is often shifted a number of bits to the right or left. Shifting to the left means adding extra 0s as rightmost bits; shifting to the right means deleting some rightmost bits. Shifting to the left is accomplished by multiplying each term of the polynomial by x^m, where m is the number of shifted bits; shifting to the right is accomplished by dividing each term of the polynomial by x^m. The following shows shifting to the left and to the right; note that we do not have negative powers in the polynomial representation. Shifting left 3 bits: 10011 becomes 10011000, i.e. x^4 + x + 1 becomes x^7 + x^4 + x^3. Shifting right 3 bits: 10011 becomes 10, i.e. x^4 + x + 1 becomes x.
  39. 39. CYCLIC CODES Cyclic Code Encoder Using Polynomials :
  40. 40. CYCLIC CODES Cyclic Code Encoder Using Polynomials: The divisor in a cyclic code is normally called the generator polynomial or simply the generator.
  41. 41. CYCLIC CODES Cyclic Code Analysis: f(x) denotes a polynomial with binary coefficients. Dataword: d(x); Codeword: c(x); Generator: g(x); Syndrome: s(x); Error: e(x).
  42. 42. CYCLIC CODES Cyclic Code Analysis: In a cyclic code, 1. if s(x) ≠ 0, one or more bits is corrupted; 2. if s(x) = 0, either (a) no bit is corrupted, or (b) some bits are corrupted, but the decoder failed to detect them.
  43. 43. CYCLIC CODES Cyclic Code Analysis: Received codeword / g(x) = c(x)/g(x) + e(x)/g(x), where the received codeword is c(x) + e(x). The first term on the right-hand side of the equality has a remainder of zero (by the definition of a codeword), so the syndrome is actually the remainder of the second term. If this term does not have a remainder (syndrome = 0), either e(x) is 0 or e(x) is divisible by g(x). We do not have to worry about the first case (there is no error); the second case is very important: in a cyclic code, those e(x) errors that are divisible by g(x) are not caught.
  44. 44. CYCLIC CODES Single-Bit Error: A single-bit error is e(x) = x^i, where i is the position of the bit. If a single-bit error is caught, then x^i is not divisible by g(x). (Note that when we say not divisible, we mean that there is a remainder.) If g(x) has at least two terms (which is normally the case) and the coefficient of x^0 is not zero (the rightmost bit is 1), then e(x) cannot be divided by g(x). If the generator has more than one term and the coefficient of x^0 is 1, all single-bit errors can be caught.
  45. 45. CYCLIC CODES Single-Bit Error, example: Which of the following g(x) values guarantees that a single-bit error is caught? For each case, what is the error that cannot be caught? a. x + 1   b. x^3   c. 1
     Solution (a code sketch follows):
     a. No x^i is divisible by x + 1; in other words, x^i / (x + 1) always has a remainder, so the syndrome is nonzero and any single-bit error can be caught.
     b. If i is equal to or greater than 3, x^i is divisible by g(x). The remainder of x^i / x^3 is zero, and the receiver is fooled into believing that there is no error, although there might be one. Note that in this case the corrupted bit must be in position 4 or above; all single-bit errors in positions 1 to 3 are caught.
     c. All values of i make x^i divisible by g(x), so no single-bit error can be caught. In addition, this g(x) is useless because it means the codeword is just the dataword augmented with n - k zeros.
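A small sketch that checks the three cases by testing whether e(x) = x^i is divisible by g(x) under modulo-2 division (assumption: generators are written as integer bit patterns, e.g. 0b11 for x + 1).

```python
def divides(g: int, e: int) -> bool:
    """True if polynomial g(x) divides e(x) in modulo-2 arithmetic (remainder 0)."""
    while e and e.bit_length() >= g.bit_length():
        e ^= g << (e.bit_length() - g.bit_length())   # cancel the leading term
    return e == 0

# a. g(x) = x + 1 catches every single-bit error e(x) = x^i
assert not any(divides(0b11, 1 << i) for i in range(16))
# b. g(x) = x^3 misses single-bit errors in position 4 and above (x^3, x^4, ... are divisible)
assert divides(0b1000, 1 << 5)
# c. g(x) = 1 divides everything, so no single-bit error is caught
assert all(divides(0b1, 1 << i) for i in range(16))
```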
  46. 46. CYCLIC CODES Two Isolated Single-Bit Errors
  47. 47. CYCLIC CODES Two Isolated Single-Bit Errors
  48. 48. CYCLIC CODES
  49. 49. CYCLIC CODES Standard Polynomials
  50. 50. CHECKSUM The checksum is an error-detecting technique that can be applied to a message of any length. In the Internet, the checksum technique is mostly used at the network and transport layers rather than the data-link layer. At the source, the message is first divided into m-bit units. The generator then creates an extra m-bit unit called the checksum, which is sent with the message.
  51. 51. CHECKSUM At the destination, the checker creates a new checksum from the combination of the message and the sent checksum. If the new checksum is all 0s, the message is accepted; otherwise, the message is discarded. The checksum unit is not necessarily added at the end of the message; it can be inserted in the middle of the message.
  52. 52. CHECKSUM
  53. 53. CHECKSUM Concept, example: Suppose the message is a list of five 4-bit numbers that we want to send to a destination. In addition to sending these numbers, we send the sum of the numbers. For example, if the set of numbers is (7, 11, 12, 0, 6), we send (7, 11, 12, 0, 6, 36), where 36 is the sum of the original numbers.
  54. 54. CHECKSUM Concept: The receiver adds the five numbers and compares the result with the sum. If the two are the same, the receiver assumes no error, accepts the five numbers, and discards the sum. Otherwise, there is an error somewhere and the message is not accepted.
  55. 55. CHECKSUM One's Complement Addition: In the previous example, the decimal number 36 in binary is (100100)2. To change it to a 4-bit number, we wrap the extra leftmost bits around and add them to the rightmost four bits: (10)2 + (0100)2 = (0110)2 = 6. We can therefore send 6 as the sum: (7, 11, 12, 0, 6, 6). The receiver adds the first five numbers in one's complement arithmetic. If the result is 6, the numbers are accepted; otherwise, they are rejected.
  56. 56. CHECKSUM Checksum: The complement of a number is found by complementing all bits (changing all 1s to 0s and all 0s to 1s), which is the same as subtracting the number from 2^m - 1. When the receiver adds all the numbers (including the checksum), it gets a negative zero. The positive zero has all m bits set to 0; the negative zero has all bits set to 1 (it is 2^m - 1). The receiver can complement the result again to get a positive zero.
  57. 57. CHECKSUM Checksum, example: The sender adds all five numbers in one's complement to get the sum = 6, then complements the result to get the checksum = 9, which is 15 - 6. Note that 6 = (0110)2 and 9 = (1001)2; they are complements of each other. The sender then sends the five data numbers and the checksum: (7, 11, 12, 0, 6, 9).
  58. 58. CHECKSUM Checksum, example (cont.): If there is no corruption in transmission, the receiver receives (7, 11, 12, 0, 6, 9) and adds them in one's complement to get 15. The receiver complements 15 to get 0, which shows that the data have not been corrupted. A code sketch of this calculation follows.
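A minimal sketch of this 4-bit one's complement example; ones_complement_sum is a hypothetical helper that folds the carry back into the low bits (wrapped addition).

```python
def ones_complement_sum(numbers, m=4):
    """Add m-bit numbers in one's complement: wrap any carry back into the low m bits."""
    mask = (1 << m) - 1
    total = 0
    for n in numbers:
        total += n
        total = (total & mask) + (total >> m)   # wrap-around carry
    return total

data = [7, 11, 12, 0, 6]
s = ones_complement_sum(data)                   # 36 wraps around to 6
checksum = (~s) & 0xF                           # complement: 15 - 6 = 9
assert (s, checksum) == (6, 9)

received = ones_complement_sum(data + [checksum])   # receiver adds all six numbers -> 15
assert (~received) & 0xF == 0                       # complementing gives (positive) zero
```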
  59. 59. CHECKSUM Checksum, example (cont.):
  60. 60. CHECKSUM Internet Checksum: Traditionally, the Internet has used a 16-bit checksum. How is the traditional checksum calculated? The procedure is given on the next slide.
  61. 61. CHECKSUM Internet Checksum, sender side:
     1. The message is divided into 16-bit words.
     2. The value of the checksum word is initially set to zero.
     3. All words including the checksum are added using one's complement addition.
     4. The sum is complemented and becomes the checksum.
     5. The checksum is sent with the data.
     Internet Checksum, receiver side (a code sketch follows):
     1. The message and the checksum are received.
     2. The message is divided into 16-bit words.
     3. All words are added using one's complement addition.
     4. The sum is complemented and becomes the new checksum.
     5. If the value of the new checksum is 0, the message is accepted; otherwise, it is rejected.
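A compact sketch of the sender and receiver procedures above, operating on a list of 16-bit words (the sample words are made up for illustration).

```python
def internet_checksum(words):
    """Sender: one's complement sum of 16-bit words, then complement the result."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap-around carry
    return (~total) & 0xFFFF

def accept(words, checksum):
    """Receiver: recomputing the checksum over words + checksum must give 0."""
    return internet_checksum(list(words) + [checksum]) == 0

message = [0x4500, 0x0073, 0x0000]      # hypothetical 16-bit words
assert accept(message, internet_checksum(message))
```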
  62. 62. CHECKSUM Performance: The traditional checksum uses a small number of bits (16) to detect errors in a message of any size (sometimes thousands of bits). However, it is not as strong as the CRC in error-checking capability, so the tendency in the Internet, particularly in designing new protocols, is to replace the checksum with a CRC. Example: if the value of one word is incremented and the value of another word is decremented by the same amount, the two errors cannot be detected because the sum and checksum remain the same.
  63. 63. CHECKSUM Fletcher Checksum: It was devised to weight each data item according to its position. Fletcher proposed two algorithms: 8-bit and 16-bit. The first, 8-bit Fletcher, operates on 8-bit data items and creates a 16-bit checksum; the second, 16-bit Fletcher, operates on 16-bit data items and creates a 32-bit checksum.
  64. 64. CHECKSUM The 8-bit Fletcher: It is calculated over data octets (bytes) and creates a 16-bit checksum. The calculation is done modulo 256 (2^8): the intermediate results are divided by 256 and the remainder is kept. The algorithm uses two accumulators, L and R: the first simply adds the data items together, while the second adds a weight to the calculation. A code sketch follows.
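A sketch of the 8-bit Fletcher checksum as described on this slide, with two accumulators taken modulo 256; note this follows the slide's modulo-256 wording (the classic Fletcher algorithm uses modulo 255), and the sample input is arbitrary.

```python
def fletcher8(data: bytes) -> int:
    """8-bit Fletcher checksum: L sums the bytes, R sums the running sums (the weight)."""
    L = R = 0
    for byte in data:
        L = (L + byte) % 256    # plain sum of data items
        R = (R + L) % 256       # weighted sum: earlier bytes are counted more often
    return (R << 8) | L         # 16-bit checksum from the two 8-bit accumulators

checksum = fletcher8(b"network")   # arbitrary sample input
print(f"{checksum:04x}")
```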
  65. 65. FORWARD ERROR CORRECTION We need to correct the error or reproduce the packet immediately. Several schemes have been designed and used for this purpose; they are collectively referred to as forward error correction (FEC) techniques.
  66. 66. FORWARD ERROR CORRECTION Using Hamming Distance: To detect s errors, the minimum Hamming distance should be dmin = s + 1. For error correction, to correct t errors, we need dmin = 2t + 1. Example: to correct up to 10 bit errors in a packet, we need to make the minimum Hamming distance 21 bits.
  67. 67. FORWARD ERROR CORRECTION Using Hamming Distance, example: Consider a BCH code: if the data is 99 bits, we need to send 255 bits (156 extra bits) to correct just 23 possible bit errors. Most of the time we cannot afford such redundancy.
  68. 68. FORWARD ERROR CORRECTION Using XOR: If we take the exclusive OR of N data items (P1 to PN) to form an extra item R, any one of the data items can be recreated by exclusive-ORing all of the other items with R. So we can divide a packet into N chunks, create the exclusive OR of all the chunks, and send N + 1 chunks; if any one chunk is lost or corrupted, it can be recreated at the receiver site. What should the value of N be? If N = 4, we send 25 percent extra data and can correct the data if only one chunk out of four is lost. A code sketch follows.
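A small sketch of the XOR scheme with N = 4 chunks (the chunk contents are made up); the parity chunk recreates any one lost chunk.

```python
from functools import reduce

def xor_chunks(chunks):
    """Bitwise XOR of equal-length byte chunks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

data_chunks = [b"\x01\x02", b"\x0f\x00", b"\xa0\x5a", b"\x33\x44"]   # N = 4 chunks
parity = xor_chunks(data_chunks)               # the extra (N + 1)-th chunk that is sent

lost = 2                                       # suppose chunk 2 never arrives
survivors = [c for i, c in enumerate(data_chunks) if i != lost]
assert xor_chunks(survivors + [parity]) == data_chunks[lost]   # recreated at the receiver
```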
  69. 69. FORWARD ERROR CORRECTION Chunk Interleaving: Another way to achieve FEC in multimedia is to allow some small chunks to be missing at the receiver; we can afford to let one chunk be missing in each packet. Example: divide each packet into 5 chunks (normally the number is much larger). We can then create the data chunk by chunk (horizontally), but combine the chunks into packets vertically.
  70. 70. FORWARD ERROR CORRECTION Chunk Interleaving, example (cont.): Each packet sent carries a chunk from several original packets. If one packet is lost, we miss only one chunk in each original packet, which is normally acceptable in multimedia communication. A code sketch follows the figure slide.
  71. 71. FORWARD ERROR CORRECTION Chunk Interleaving, example (cont.):
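A toy sketch of chunk interleaving with 4 original packets of 5 chunks each (the chunk labels are invented): packets are filled row by row but transmitted column by column, so losing one transmitted packet costs only one chunk per original packet.

```python
packets = [[f"p{p}c{c}" for c in range(5)] for p in range(4)]   # 4 packets, 5 chunks each

# Transmit column by column: each sent packet carries one chunk of every original packet.
sent = [list(column) for column in zip(*packets)]               # 5 interleaved packets

sent[2] = None                                                  # one interleaved packet is lost
rebuilt = [[col[p] if col is not None else None for col in sent]
           for p in range(len(packets))]
assert all(row.count(None) == 1 for row in rebuilt)             # each packet misses one chunk
```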
  72. 72. FORWARD ERROR CORRECTION Combining Hamming Distance and Interleaving: Hamming distance and interleaving can be combined. We can first create n-bit packets that can correct t-bit errors, then interleave m rows and send the bits column by column. In this way, we can automatically correct burst errors of up to m × t bits.
  73. 73. FORWARD ERROR CORRECTION Compounding High- and Low-Resolution Packets: Create a duplicate of each packet with low-resolution redundancy and combine the redundant version with the next packet. Example: create four low-resolution packets out of five high-resolution packets and send them. If a packet is lost, we can use the low-resolution version from the next packet; note that the low-resolution section in the first packet is empty.
  74. 74. FORWARD ERROR CORRECTION Compounding High- and Low-Resolution Packets, example (cont.): If the last packet is lost, it cannot be recovered. The audio and video reproduction does not have the same quality, but the lack of quality is not noticed most of the time.
  75. 75. FORWARD ERROR CORRECTION Compounding High- and Low-Resolution Packets, example (cont.):
  76. 76. Error Detection and Correction. Completed, by the grace of God.

Γ—