# Error detection and correction



## 1. Error Detection and Correction

Environmental interference and physical defects in the communication medium can cause random bit errors during data transmission. Error coding is a method of detecting and correcting these errors to ensure that information is transferred intact from its source to its destination. Error coding is used for fault-tolerant computing in computer memory, magnetic and optical data storage media, satellite and deep-space communications, network communications, cellular telephone networks, and almost every other form of digital data communication.

Error coding uses mathematical formulas to encode data bits at the source into longer bit words for transmission. The "code word" can then be decoded at the destination to retrieve the information. The extra bits in the code word provide redundancy that, depending on the coding scheme used, allows the destination to determine whether the communication medium introduced errors and, in some cases, to correct them so that the data need not be retransmitted.

### 1.1 Types of Errors

Interference can change the timing and shape of the signal. If the signal is carrying binary-encoded data, such changes can alter the meaning of the data. These errors fall into two types: single-bit errors and burst errors.

Single-bit Error

The term single-bit error means that only one bit of a given data unit (such as a byte, character, or word) is changed from 1 to 0 or from 0 to 1, as shown in Fig. 1.1.1.

Fig. 1.1.1 Single-bit error

Example: Single-bit errors are the least likely type of error in serial data transmission. To see why, imagine a sender transmitting at 10 Mbps. Each bit then lasts only 0.1 μs, so for a single-bit error to occur the noise must last only 0.1 μs, which is very rare. However, a single-bit error can happen in parallel data transmission.
For example, if 16 wires are used to send all 16 bits of a word at the same time and one of the wires is noisy, one bit is corrupted in each word.

Burst Error

The term burst error means that two or more bits in the data unit have changed from 0 to 1 or vice versa. Note that a burst error does not necessarily mean that the errors occur in consecutive bits: the length of the burst is measured from the first corrupted bit to the last corrupted bit, and some bits in between may not be corrupted.

Fig. 1.1.2 Burst error
Example: Burst errors are most likely to happen in serial transmission. The duration of the noise is normally longer than the duration of a single bit, so the noise affects a set of bits, as shown in Fig. 1.1.2. The number of bits affected depends on the data rate and the duration of the noise.

Error-Correcting Codes

One approach is to include enough redundant information (extra bits introduced into the data stream at the transmitter on a regular and logical basis) along with each block of data to enable the receiver to deduce what the transmitted data must have been. This method is called forward error correction.

Error-Detecting Codes

The other approach is to include only enough redundancy to allow the receiver to deduce that an error has occurred, but not which error, and to ask for a retransmission.

### 1.2 Error Detecting Codes

The basic approach used for error detection is redundancy: additional bits are added to facilitate detection and correction of errors. Popular techniques are:

- Simple parity check
- Two-dimensional parity check
- Checksum
- Cyclic redundancy check (CRC)

#### 1.2.1 Simple Parity Check (One-dimensional Parity Check)

The most common and least expensive mechanism for error detection is the simple parity check. In this technique a redundant bit, called the parity bit, is appended to every data unit so that the number of 1s in the unit (including the parity bit) becomes even. Blocks of data from the source pass through a parity-bit generator, which appends a 1 if the block contains an odd number of 1s and a 0 if it contains an even number of 1s. At the receiving end the parity bit is computed from the received data bits and compared with the received parity bit, as shown in Fig. 1.2.1. Because this scheme makes the total number of 1s even, it is called even-parity checking.

Fig. 1.2.1 Even-parity checking scheme
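The even-parity generation and checking just described can be sketched as follows (an illustrative sketch; the function names are my own):

```python
def parity_bit(data_bits):
    """Even parity: append 1 if the number of 1s is odd, else 0."""
    return sum(data_bits) % 2

def check_even_parity(code_word):
    """A code word (data bits + parity bit) is valid if its count of 1s is even."""
    return sum(code_word) % 2 == 0

data = [1, 0, 1, 1, 0, 0, 0]          # three 1s (odd) -> parity bit is 1
code = data + [parity_bit(data)]
assert check_even_parity(code)

corrupted = code.copy()
corrupted[2] ^= 1                      # a single-bit error is detected
assert not check_even_parity(corrupted)

corrupted[5] ^= 1                      # a second error makes the word look valid
assert check_even_parity(corrupted)
```

The last assertion illustrates the limitation discussed below: two bit errors cancel each other out in the parity count.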
Note that for the sake of simplicity we discuss even-parity checking, where the number of 1s should be even. It is also possible to use odd-parity checking, where the number of 1s should be odd.

Performance

A simple parity check can detect any single-bit error. However, if two errors occur in the code word, the result is another valid member of the code set: the decoder sees a valid code word and learns nothing of the error. Thus a double-bit error cannot be detected; in fact, it can be shown that a single parity check code can detect only an odd number of errors in a code word.

#### 1.2.2 Two-dimensional Parity Check

Performance can be improved by using a two-dimensional parity check, which organizes the block of bits in the form of a table. Parity check bits are calculated for each row, which is equivalent to a simple parity check, and parity check bits are also calculated for each column; both are sent along with the data. At the receiving end these are compared with the parity bits calculated on the received data.

Fig. 1.2.2 Two-dimensional parity checking

Performance

Two-dimensional parity checking increases the likelihood of detecting burst errors: a burst error of more than n bits is also detected with high probability. There is, however, one pattern of error that remains elusive: if two bits in one data unit are damaged and two bits in exactly the same positions in another data unit are also damaged, the row and column parities both still check out and the errors go undetected.

#### 1.2.3 Checksum

In the checksum error detection scheme, the data is divided into k segments of m bits each. At the sender's end the segments are added using 1's complement arithmetic to get the sum, and the sum is complemented to get the checksum. The checksum segment is sent along with the data segments, as shown in Fig. 1.2.3 (a). At the receiver's end, all received segments (including the checksum) are added using 1's complement arithmetic and the sum is complemented. If the result is zero, the received data is accepted; otherwise it is discarded, as shown in Fig. 1.2.3 (b).
Performance

The checksum detects all errors involving an odd number of bits, as well as most errors involving an even number of bits.
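The sender and receiver procedures of the checksum scheme can be sketched as follows (illustrative helper names; the segment values are arbitrary examples):

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments with end-around carry (1's complement arithmetic)."""
    mask = (1 << m) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> m)   # wrap the carry back in
    return total

def make_checksum(segments, m):
    """Sender: the complement of the 1's-complement sum."""
    return ones_complement_sum(segments, m) ^ ((1 << m) - 1)

# Receiver adds data segments plus checksum; the complement of the sum must be 0.
data = [0b10110011, 0b01101100, 0b11110000]      # k = 3 segments, m = 8 bits
chk = make_checksum(data, 8)
total = ones_complement_sum(data + [chk], 8)
assert total ^ 0xFF == 0                          # zero result: accept the data
```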
(a) (b)

Figure 1.2.3 (a) Sender's end: calculation of the checksum; (b) receiver's end: checking the checksum

#### 1.2.4 Cyclic Redundancy Check (CRC)

The cyclic redundancy check is the most powerful and easiest to implement of these techniques. Unlike the checksum scheme, which is based on addition, CRC is based on binary division. In CRC, a sequence of redundant bits, called the cyclic redundancy check bits, is appended to the end of the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number. At the destination, the incoming data unit is divided by the same number. If this division leaves no remainder, the data unit is assumed to be correct and is accepted; a remainder indicates that the data unit was damaged in transit and must be rejected.

The generalized technique can be explained as follows. If a k-bit message is to be transmitted, the transmitter generates an r-bit sequence, known as the Frame Check Sequence (FCS), so that (k + r) bits are actually transmitted. The r-bit FCS is generated by dividing the original message, appended with r zeros, by a predetermined number. This number, which is (r + 1) bits long, can also be considered as the coefficients of a polynomial, called the generator polynomial. The remainder of this division is the r-bit FCS. On receiving the packet, the receiver divides the (k + r)-bit frame by the same predetermined number; if this produces no remainder, it can be assumed that no error occurred during transmission. The operations at both the sender and the receiver end are shown in Fig. 1.2.4.
Fig. 1.2.4 Basic scheme for cyclic redundancy checking

The mathematical operation is illustrated in Fig. 1.2.4 by dividing a sample 4-bit number by the coefficients of the generator polynomial x^3 + x + 1, which is 1011, using modulo-2 arithmetic. Modulo-2 arithmetic is binary addition without any carry-over, which is just the exclusive-OR operation. Consider the case where k = 1101. We divide 1101000 (i.e., k appended with 3 zeros) by 1011, which produces the remainder r = 001, so the frame (k + r) = 1101001 is actually transmitted through the communication channel. At the receiving end, if the received number 1101001 divided by the same generator 1011 gives the remainder 000, it can be assumed that the data is free of errors.

Fig. 1.2.4 Cyclic redundancy check example

All the values can be expressed as polynomials of a dummy variable X. For example, for P = 11001 the corresponding polynomial is X^4 + X^3 + 1. A generator polynomial is selected to have at least the following properties:

- It should not be divisible by X.
- It should not be divisible by (X + 1).

The first condition guarantees that all burst errors of length less than or equal to the degree of the polynomial are detected. The second condition guarantees that all burst errors affecting an odd number of bits are detected.
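The modulo-2 division in the worked example above (message 1101, generator 1011) can be reproduced in Python (a sketch; the function name is illustrative):

```python
def crc_remainder(data_bits, divisor_bits):
    """Modulo-2 long division (XOR, no carries); returns the r-bit remainder."""
    data = list(data_bits)
    r = len(divisor_bits) - 1
    for i in range(len(data) - r):
        if data[i] == 1:                  # divide only where the leading bit is 1
            for j, d in enumerate(divisor_bits):
                data[i + j] ^= d
    return data[-r:]

divisor = [1, 0, 1, 1]                    # coefficients of x^3 + x + 1
message = [1, 1, 0, 1]                    # k = 1101
fcs = crc_remainder(message + [0, 0, 0], divisor)
assert fcs == [0, 0, 1]                   # r = 001 as in the worked example

frame = message + fcs                     # transmitted frame 1101001
assert crc_remainder(frame, divisor) == [0, 0, 0]   # no remainder: no error
```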
In a cyclic code, where s(x) is the syndrome:

- If s(x) ≠ 0, one or more bits are corrupted.
- If s(x) = 0, either (a) no bit is corrupted, or (b) some bits are corrupted but the decoder failed to detect them.

Performance

CRC is a very effective error detection technique. If the divisor is chosen according to the rules above, its performance can be summarized as follows:

- CRC can detect all single-bit errors.
- CRC can detect all double-bit errors (provided the divisor has at least three 1s).
- CRC can detect any odd number of errors (provided the divisor is divisible by X + 1).
- CRC can detect all burst errors of length less than or equal to the degree of the polynomial.
- CRC detects most larger burst errors with high probability.

## 2. Framing and Synchronization

Normally, units of data transfer are larger than a single analog or digital encoding symbol. It is necessary to recover clock information for the signal and to obtain synchronization for larger units of data (such as data words and frames). The data must be recovered in words or blocks, because this is the only way the receiver process can interpret a given bit stream; so it is necessary to add bits to the block that convey the control information used in the data link control procedures. The data, along with the preamble, postamble, and control information, forms a frame. Frame synchronization or delineation (or simply framing) is the process of defining and locating frame boundaries (the start and end of the frame) in a bit sequence. Framing is necessary for synchronization and other data control functions.
Framing Methods

The problem of framing is solved in different ways depending on whether frames have a fixed (known) length or a variable length.

For frames of fixed length (e.g., a physical-layer SONET/SDH frame or an ATM cell), it is only necessary to identify the start of the frame and add the frame size to locate its end; framing methods can thus exploit the occurrence of periodic patterns, or of known correlations that occur periodically in the bit sequence (the latter is exploited in ATM).

For frames of variable size, special synchronization characters or bit patterns are used to identify the start of a frame, while different explicit or implicit methods can be used to identify the end of a frame (e.g., special characters or bit patterns, a length field, or some event associated with the end of the frame).

### 2.1 Character-oriented Framing

Character-oriented protocols, also known as byte-oriented protocols, are used in variable-size framing by the data link layer for data link control. Data are 8-bit characters encoded in ASCII. Along with the header and the trailer, two flags are included in each frame to separate it from other frames: an 8-bit (1-byte) flag is added at the beginning and at the end of the frame. The flag, composed of protocol-dependent special characters, signals the start or end of a frame.
### 4.1 Stop-and-Wait ARQ

If the receiver gets a frame correctly it sends an ACK; otherwise it sends a NACK. Station A sends a new frame after receiving an ACK, and retransmits the old frame if it receives a NACK. This is illustrated in Fig. 4.1.2.

Fig. 4.1.2 Stop-and-Wait ARQ technique

To tackle the problem of a lost or damaged frame, the sender is equipped with a timer. In case of a lost ACK, the sender retransmits the old frame. In Fig. 4.1.3, the second data PDU is lost during transmission. The sender is unaware of this loss, but starts a timer after sending each PDU. Normally an ACK PDU is received before the timer expires; here no ACK is received, so the timer counts down to zero and triggers retransmission of the same PDU. The sender again starts a timer after this transmission, and this time receives an ACK PDU before the timer expires, indicating that the data has now been received by the remote node. The receiver can identify a duplicate frame from its label and discard it.

Figure 4.1.3 shows an example of Stop-and-Wait ARQ. Frame 0 is sent and acknowledged. Frame 1 is lost and resent after the time-out; the resent frame 1 is acknowledged and the timer stops. Frame 0 is sent and acknowledged, but the acknowledgment is lost. The sender has no idea whether the frame or the acknowledgment was lost, so after the time-out it resends frame 0, which is acknowledged.

Fig. 4.1.3 Flow diagram for an example of Stop-and-Wait ARQ
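The behavior in Fig. 4.1.3 (retransmission after a lost ACK, with the duplicate discarded by sequence number) can be sketched as a tiny simulation; the loss model and function names are illustrative, not part of the protocol definition:

```python
def stop_and_wait(frames, lost_acks):
    """Sketch of Stop-and-Wait ARQ delivery with 1-bit sequence numbers.

    lost_acks holds the transmission-attempt numbers whose ACK is lost,
    forcing a time-out and retransmission. Returns the delivered payloads.
    """
    delivered, expected, attempt = [], 0, 0
    for seq, payload in enumerate(frames):
        seq %= 2                          # frame numbering is modulo-2
        while True:
            attempt += 1
            if seq == expected:           # new frame: accept and flip expectation
                delivered.append(payload)
                expected ^= 1
            # else: a duplicate whose ACK was lost -> discard it silently
            if attempt not in lost_acks:  # ACK arrives before the timer expires
                break
            # ACK lost: the timer expires and the sender retransmits
    return delivered

# Frame "b"'s first ACK is lost; the retransmitted duplicate is discarded.
assert stop_and_wait(["a", "b", "c"], lost_acks={2}) == ["a", "b", "c"]
```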
### 4.2 Go-Back-N ARQ

Fig. 4.2.2 Lost frames in Go-Back-N ARQ

Fig. 4.2.3 Lost ACK in Go-Back-N ARQ

If no acknowledgement is received after sending N frames, the sender takes the help of a timer: after the time-out, it resumes retransmission. The Go-Back-N protocol also takes care of damaged frames and damaged ACKs. This scheme is a little more complex than the previous one, but gives much higher throughput. Stop-and-Wait ARQ is a special case of Go-Back-N ARQ in which the size of the send window is 1.
### 4.3 Selective-Repeat ARQ

The Selective-Repeat ARQ scheme retransmits only those frames for which a NAK is received or whose timer has expired, as shown in Fig. 4.3.1. This is the most efficient of the ARQ schemes, but the sender must be more complex so that it can send frames out of order, and the receiver must have storage space to hold the post-NAK frames and the processing power to reinsert frames in the proper sequence.

Fig. 4.3.1 Selective-repeat reject

Q: Mention the key advantages and disadvantages of the Stop-and-Wait ARQ technique.

Ans: Advantages of Stop-and-Wait ARQ: (a) it is simple to implement; (b) frame numbering is modulo-2, i.e., only 1 bit is required. The main disadvantage is that when the propagation delay is long, it is extremely inefficient.

Q: Consider the use of 10 kbit frames on a 10 Mbps satellite channel with 270 ms delay. What is the link utilization for the Stop-and-Wait ARQ technique, assuming P = 10^-3?

Ans: Link utilization = (1 − P) / (1 + 2a), where P is the probability of a single frame error and a = (propagation time) / (transmission time). Propagation time = 270 ms; transmission time = (frame length) / (data rate) = (10 kbit) / (10 Mbps) = 1 ms. Hence a = 270/1 = 270, and link utilization = 0.999 / (1 + 2 × 270) ≈ 0.0018 = 0.18%.

Q: What is the channel utilization for the Go-Back-N protocol with a window size of 7 for the previous problem?

Ans: Channel utilization for Go-Back-N = N(1 − P) / [(1 + 2a)(1 − P + NP)], with P ≈ 10^-3. Channel utilization ≈ 0.01285 = 1.285%.
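The two utilization calculations above can be checked numerically (illustrative function names):

```python
from math import isclose

def stop_and_wait_util(P, a):
    """Link utilization for Stop-and-Wait ARQ."""
    return (1 - P) / (1 + 2 * a)

def go_back_n_util(P, a, N):
    """Link utilization for Go-Back-N ARQ with window size N (N < 1 + 2a)."""
    return N * (1 - P) / ((1 + 2 * a) * (1 - P + N * P))

# 10 kbit frames at 10 Mbps over a 270 ms satellite link, P = 1e-3
a = 270e-3 / (10e3 / 10e6)         # propagation time / transmission time = 270
u_sw = stop_and_wait_util(1e-3, a)
u_gbn = go_back_n_util(1e-3, a, N=7)
assert isclose(u_sw, 0.0018, abs_tol=1e-4)     # about 0.18%
assert isclose(u_gbn, 0.01285, abs_tol=1e-4)   # about 1.285%
```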
## 6. Point-to-Point Protocol

Although HDLC is a general protocol that can be used for both point-to-point and multipoint configurations, one of the most common protocols for point-to-point access is the Point-to-Point Protocol (PPP). Today, millions of Internet users who need to connect their home computers to the server of an Internet service provider use PPP. The majority of these users have a traditional modem; they are connected to the Internet through a telephone line, which provides the services of the physical layer. But to control and manage the transfer of data, there is a need for a point-to-point protocol at the data link layer, and PPP is by far the most common.

PPP provides several services:

- PPP defines the format of the frame to be exchanged between devices.
- PPP defines how two devices can negotiate the establishment of the link and the exchange of data.
- PPP defines how network-layer data are encapsulated in the data link frame.
- PPP defines how two devices can authenticate each other.
- PPP provides multiple network-layer services, supporting a variety of network-layer protocols.
- PPP provides connections over multiple links.
- PPP provides network address configuration. This is particularly useful when a home user needs a temporary network address to connect to the Internet.

On the other hand, to keep PPP simple, several services are missing:

- PPP does not provide flow control. A sender can send several frames one after another with no concern about overwhelming the receiver.
- PPP has a very simple mechanism for error control.
- PPP does not provide a sophisticated addressing mechanism to handle frames in a multipoint configuration.

Framing

PPP is a byte-oriented protocol.
Flag: A PPP frame starts and ends with a 1-byte flag with the bit pattern 01111110. The flag is the same as in HDLC, but PPP is a byte-oriented protocol whereas HDLC is bit-oriented.

Address: The address field is a constant value, set to 11111111 (the broadcast address).

Control: This field is set to the constant value 11000000.

Protocol: The protocol field defines what is being carried in the data field: either user data or other information.

Payload field: This field carries either user data or other information. The data field is a sequence of bytes with a default maximum of 1500 bytes, but this can be changed during negotiation. The data field is byte-stuffed if the flag byte pattern appears in it. Because there is no field defining the size of the data field, padding is needed if the size is less than the maximum default or the maximum negotiated value.

FCS: The frame check sequence (FCS) is simply a 2-byte or 4-byte standard CRC.

Transition Phases

A PPP connection goes through phases, which can be shown in a transition phase diagram, Fig. 6.1.1.

Fig. 6.1.1 Transition phases

Dead: In the dead phase the link is not being used. There is no active carrier (at the physical layer) and the line is quiet.

Establish: When one of the nodes starts the communication, the connection goes into this phase. In this phase, options are negotiated between the two parties; Link Control Protocol packets are used for this purpose.

Authenticate: The authentication phase is optional; the two nodes may decide, during the establishment phase, whether to skip this phase.

Network: In the network phase, negotiation for the network-layer protocols takes place. PPP specifies that two nodes establish a network-layer agreement before data at the network layer can be exchanged.
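The byte stuffing mentioned for the payload field can be sketched as follows. Over asynchronous links (RFC 1662), PPP escapes a flag or escape byte by prefixing the escape byte 0x7D and flipping bit 5 of the offending byte; the helper names below are illustrative:

```python
FLAG, ESC = 0x7E, 0x7D

def byte_stuff(payload):
    """Escape any flag or escape byte so it cannot be read as a delimiter."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # prefix ESC, flip bit 5
        else:
            out.append(b)
    return bytes(out)

def byte_unstuff(stuffed):
    """Reverse the stuffing at the receiver."""
    out, it = bytearray(), iter(stuffed)
    for b in it:
        out.append(next(it) ^ 0x20 if b == ESC else b)
    return bytes(out)

payload = bytes([0x01, FLAG, 0x02, ESC, 0x03])
frame = bytes([FLAG]) + byte_stuff(payload) + bytes([FLAG])  # delimited frame
assert byte_unstuff(frame[1:-1]) == payload
```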
The ID field holds a value that matches a request with a reply. The information field is divided into three fields: option type, option length, and option data.

Authentication Protocols

Authentication means validating the identity of a user who needs to access a set of resources. PPP defines two protocols for authentication: the Password Authentication Protocol and the Challenge Handshake Authentication Protocol.

PAP

The Password Authentication Protocol (PAP) is a simple authentication procedure with a two-step process: the user who wants to access a system sends an authentication identification (usually the user name) and a password; the system checks the validity of the identification and password and either accepts or denies the connection. When a PPP frame is carrying a PAP packet, the value of the protocol field is 0xC023. The three PAP packets are authenticate-request, authenticate-ack, and authenticate-nak, as shown in Fig. 6.1.3. The first packet is used by the user to send the user name and password, the second is used by the system to allow access, and the third is used by the system to deny access.

Fig. 6.1.3 PAP packets encapsulated in a PPP frame

CHAP

The Challenge Handshake Authentication Protocol (CHAP) is a three-way handshaking authentication protocol that provides greater security than PAP. In this method the password is kept secret; it is never sent on the line. The system sends the user a challenge packet containing a challenge value, usually a few bytes. The user applies a predefined function that takes the challenge value and the user's own password and creates a result, and sends that result in a response packet to the system.
The system does the same: it applies the same function to the user's password (known to the system) and the challenge value to create a result. If the result created is the same as the result sent in the response packet, access is granted; otherwise, it is denied. CHAP is more secure than PAP, especially if the system continually changes the challenge value: even if an intruder learns the challenge value and the result, the password remains secret. Figure 6.1.4 shows the packets and how they are used.

Figure 6.1.4 CHAP packets encapsulated in a PPP frame

CHAP packets are encapsulated in the PPP frame with the protocol value 0xC223. There are four CHAP packets: challenge, response, success, and failure. The first packet is used by the system to send the challenge value, the second by the user to return the result of the calculation, the third by the system to allow access, and the fourth by the system to deny access.

Network Control Protocols

PPP is a multiple-network-layer protocol. It can carry a network-layer data packet from protocols defined by the Internet, OSI, Xerox, DECnet, AppleTalk, Novell, and so on. To do this, PPP defines a specific Network Control Protocol (NCP) for each network protocol. NCP packets do not carry network-layer data; they just configure the link at the network layer for the incoming data. One NCP is the Internet Protocol Control Protocol (IPCP), which configures the link used to carry IP packets in the Internet and is especially of interest to us. The format of an IPCP packet is shown in Figure 6.1.5.
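The CHAP challenge-response computation described above can be sketched as follows. RFC 1994 defines the result as an MD5 hash over the packet identifier, the shared secret, and the challenge value; the function and variable names below are illustrative:

```python
import hashlib
import os

def chap_response(identifier, secret, challenge):
    """CHAP result: a one-way MD5 hash over the packet identifier, the
    shared secret, and the challenge value (per RFC 1994); the secret
    itself never crosses the link."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"user-password"            # known to both the user and the system
challenge = os.urandom(16)           # the system picks a fresh challenge value
response = chap_response(1, secret, challenge)

# The system recomputes the hash with its stored copy of the password.
assert response == chap_response(1, secret, challenge)          # access granted
assert response != chap_response(1, b"wrong-pass", challenge)   # access denied
```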
Fairness: The technique should treat each station fairly in terms of the time it is made to wait until it gains entry to the network, its access time, and the time it is allowed to spend transmitting.

Priority: In managing access and communication time, the technique should be able to give some stations priority over others to facilitate the different types of service needed.

Limitation to one station: The technique should allow transmission by only one station at a time.

Receipt: The technique should ensure that message packets are actually received (no lost packets), delivered only once (no duplicate packets), and received in the proper order.

Error limitation: The method should be capable of incorporating an appropriate error detection scheme.

Recovery: If two packets collide (are present on the network at the same time), or if notice of a collision appears, the method should be able to recover, i.e., halt all transmissions and select one station to retransmit.

### 7.1 Random Access

In random access methods, no station is superior to another and none is assigned control over another. Each station can transmit when it desires, on the condition that it follows the predefined procedure, including testing the state of the medium. There is no scheduled time for a station to transmit; transmission is random among the stations, which is why these methods are called random access. Second, no rules specify which station should send next: stations compete with one another to access the medium, which is why these methods are also called contention methods. In a random access method, each station has the right to the medium without being controlled by any other station. However, if more than one station tries to send, there is an access conflict (a collision) and the frames will be either destroyed or modified. The method was improved with the addition of a procedure that forces a station to sense the medium before transmitting.
This was called carrier sense multiple access (CSMA). It later evolved into two parallel methods: carrier sense multiple access with collision detection (CSMA/CD), which tells the station what to do when a collision is detected, and carrier sense multiple access with collision avoidance (CSMA/CA), which tries to avoid the collision.

#### 7.1.1 ALOHA

ALOHA, the earliest random access method, was developed at the University of Hawaii in the early 1970s. It was designed for a radio (wireless) LAN, but it can be used on any shared medium.

Pure ALOHA

The original ALOHA protocol is called pure ALOHA. This is a simple but elegant protocol: each station sends a frame whenever it has a frame to send. However, since there is only one channel to share, there is the possibility of collision between frames from different stations. Figure 7.1.2 shows an example of frame collisions in pure ALOHA: four stations (an unrealistic assumption) contend with one another for access to the shared channel, and of the eight frames on the shared medium, some collide because multiple frames contend for the channel.

Obviously, we need to resend the frames that were destroyed during transmission. The pure ALOHA protocol relies on acknowledgments from the receiver. If all the colliding stations tried to resend their frames right after the time-out, the frames would collide again. Pure ALOHA therefore dictates that when the time-out period passes, each station waits a random amount of time before resending its frame. The randomness helps avoid further collisions; we call this time the back-off time TB.
Figure 7.1.2 Frames in a pure ALOHA network

Pure ALOHA has a second method to prevent congesting the channel with retransmitted frames: after a maximum number of retransmission attempts Kmax, a station must give up and try later. Figure 7.1.3 shows the procedure for pure ALOHA based on this strategy.

Figure 7.1.3 Procedure for the pure ALOHA protocol

The time-out period is equal to the maximum possible round-trip propagation delay, which is twice the time required for a signal to travel between the two most widely separated stations (2 × Tp). The back-off time TB is a random value that normally depends on K, the number of unsuccessful transmission attempts. The formula for TB depends on the implementation; one common choice is binary exponential back-off, in which a multiplier in the range 0 to 2^K − 1 is chosen at random and multiplied by Tp (the maximum propagation time) or Tfr (the average time required to send a frame) to find TB.
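The binary exponential back-off rule can be sketched as follows (illustrative names; the Kmax limit and the choice of Tfr as the multiplier base follow the text):

```python
import random

def backoff_time(K, Tfr, K_max=15):
    """Binary exponential back-off: after the K-th unsuccessful attempt,
    wait a random multiple (0 .. 2^K - 1) of the frame time Tfr; past
    K_max (an assumed limit) the station gives up and tries later."""
    if K > K_max:
        return None
    R = random.randint(0, 2**K - 1)     # random multiplier
    return R * Tfr

Tfr = 1e-3                              # 1 ms frame time, for illustration
for K in (1, 2, 3):
    TB = backoff_time(K, Tfr)
    assert 0 <= TB <= (2**K - 1) * Tfr  # wait grows with each failed attempt
assert backoff_time(20, Tfr) is None    # too many attempts: give up
```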
Vulnerable time

Let us find the length of time, the vulnerable time, in which there is a possibility of collision. We assume that the stations send fixed-length frames, each taking Tfr to send. Station A sends a frame at time t. Now imagine station B has already sent a frame between t − Tfr and t. This leads to a collision between the frames from stations A and B: the end of B's frame collides with the beginning of A's frame. On the other hand, suppose that station C sends a frame between t and t + Tfr. Here there is a collision between the frames from stations A and C: the beginning of C's frame collides with the end of A's frame.

Figure 7.1.4 Vulnerable time for the pure ALOHA protocol

Looking at Figure 7.1.4, we see that the vulnerable time, during which a collision may occur in pure ALOHA, is twice the frame transmission time:

Pure ALOHA vulnerable time = 2 × Tfr

The throughput for pure ALOHA is S = G × e^(−2G). The maximum throughput Smax = 0.184 occurs when G = 1/2.

Example: A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the throughput if the system (all stations together) produces (a) 1000 frames per second, (b) 500 frames per second, (c) 250 frames per second?

Solution: The frame transmission time is 200 bits / 200 kbps = 1 ms.

a. If the system creates 1000 frames per second, this is 1 frame per millisecond, so the load G = 1. In this case S = G × e^(−2G) = 0.135 (13.5 percent). The throughput is 1000 × 0.135 = 135 frames: only 135 frames out of 1000 will probably survive.

b. If the system creates 500 frames per second, this is 1/2 frame per millisecond, so G = 1/2. In this case S = G × e^(−2G) = 0.184 (18.4 percent). The throughput is 500 × 0.184 = 92, so only 92 frames out of 500 will probably survive. Note that this is the maximum throughput case, percentage-wise.

c. If the system creates 250 frames per second, this is 1/4 frame per millisecond, so G = 1/4.
In this case S = G × e^(−2G) = 0.152 (15.2 percent). The throughput is 250 × 0.152 = 38, so only 38 frames out of 250 will probably survive.
Slotted ALOHA

Slotted ALOHA was invented to improve the efficiency of pure ALOHA. In slotted ALOHA, time is divided into slots of Tfr and a station is forced to send only at the beginning of a time slot. Figure 7.1.5 shows an example of frame collisions in slotted ALOHA.

Figure 7.1.5 Frames in a slotted ALOHA network

Because a station may send only at the beginning of a slot, a station that misses this moment must wait until the beginning of the next slot; a station that started at the beginning of the current slot has already finished sending its frame. Of course, there is still the possibility of collision if two stations try to send at the beginning of the same time slot. However, the vulnerable time is now reduced to one-half, equal to Tfr. Figure 7.1.6 shows the situation.

Figure 7.1.6 Vulnerable time for the slotted ALOHA protocol

The throughput for slotted ALOHA is S = G × e^(−G). The maximum throughput Smax = 0.368 occurs when G = 1.

Fig.: Throughput versus offered load for the ALOHA protocols
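The throughput formulas for both ALOHA variants, together with the worked pure-ALOHA example above, can be checked numerically (a sketch; function names are illustrative):

```python
from math import exp, isclose

def pure_aloha(G):
    """Throughput S = G * e^(-2G); the vulnerable time is 2 * Tfr."""
    return G * exp(-2 * G)

def slotted_aloha(G):
    """Throughput S = G * e^(-G); the vulnerable time is Tfr."""
    return G * exp(-G)

# Worked example: 200-bit frames at 200 kbps, so Tfr = 1 ms.
for fps, survivors in [(1000, 135), (500, 92), (250, 38)]:
    G = fps * 1e-3                    # offered load in frames per frame time
    assert round(fps * pure_aloha(G)) == survivors

assert isclose(pure_aloha(0.5), 0.184, abs_tol=1e-3)     # pure Smax at G = 1/2
assert isclose(slotted_aloha(1.0), 0.368, abs_tol=1e-3)  # slotted Smax at G = 1
```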
Carrier Sense Multiple Access (CSMA)

To minimize the chance of collision and thereby increase performance, the CSMA method was developed. The chance of collision can be reduced if a station senses the medium before trying to use it. Carrier sense multiple access (CSMA) requires that each station first listen to the medium (check the state of the medium) before sending: "sense before transmit" or "listen before talk."

CSMA can reduce the possibility of collision, but it cannot eliminate it. The possibility of collision still exists because of propagation delay: when a station sends a frame, it still takes time (although very short) for the first bit to reach every station and for every station to sense it. In Fig. 7.1.7, at time t1 station B senses the medium, finds it idle, and sends a frame. At time t2 (t2 > t1) station C senses the medium and finds it idle because, at this time, the first bits from station B have not yet reached station C. Station C also sends a frame. The two signals collide and both frames are destroyed.

Fig. 7.1.7 Space/time model of a collision in CSMA

Vulnerable Time

The vulnerable time for CSMA is the propagation time Tp, the time needed for a signal to propagate from one end of the medium to the other. When a station sends a frame and any other station tries to send a frame during this time, a collision will result. But once the first bit of the frame reaches the end of the medium, every station will already have heard it and will refrain from sending. Figure 7.1.8 shows the worst case: the leftmost station, A, sends a frame at time t1, which reaches the rightmost station, D, at time t1 + Tp. The gray area shows the vulnerable area in time and space.

Figure 7.1.8 Vulnerable time in CSMA
31. 31. Persistence Methods What should a station do after it senses the channel? Three persistence methods have been devised to answer this question: the 1-persistent method, the non-persistent method, and the p-persistent method. Figure 7.1.9 shows the behavior of the three persistence methods when a station finds a channel busy. 1-Persistent: The 1-persistent method is simple and straightforward. In this method, after the station finds the line idle, it sends its frame immediately (with probability 1). This method has the highest chance of collision because two or more stations may find the line idle and send their frames immediately. Non-persistent: In the non-persistent method, a station that has a frame to send senses the line. If the line is idle, it sends immediately. If the line is not idle, it waits a random amount of time and then senses the line again. The non-persistent approach reduces the chance of collision because it is unlikely that two or more stations will wait the same amount of time and retry simultaneously. However, this method reduces the efficiency of the network because the medium remains idle when there may be stations with frames to send. Fig. 7.1.9 Behavior of three persistence methods P-Persistent: The p-persistent method is used if the channel has time slots with slot duration equal to or greater than the maximum propagation time. It reduces the chance of collision and improves efficiency. In this method, after the station finds the line idle it follows these steps: 1. With probability p, the station sends its frame. 2. With probability q = 1 − p, the station waits for the beginning of the next time slot and checks the line again. a. If the line is idle, it goes to step 1. b. If the line is busy, it acts as though a collision has occurred and uses the back-off procedure.
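The p-persistent steps above can be sketched as a single-slot decision function. The names and structure here are purely illustrative, not from any protocol implementation:

```python
import random

def p_persistent_step(p: float, line_is_idle, rng=random.random) -> str:
    """One decision in the p-persistent method, assuming the station has
    already found the line idle.  Returns what the station does next:
    'send', 'retry' (re-run this step next slot), or 'backoff'."""
    if rng() < p:
        return "send"            # step 1: transmit with probability p
    # step 2 (probability q = 1 - p): wait for the next slot, sense again
    if line_is_idle():
        return "retry"           # step 2a: idle again -> back to step 1
    return "backoff"             # step 2b: busy -> act as if a collision occurred

# Deterministic demonstration (the random source is stubbed out):
print(p_persistent_step(0.5, lambda: True, rng=lambda: 0.1))   # send
print(p_persistent_step(0.5, lambda: True, rng=lambda: 0.9))   # retry
print(p_persistent_step(0.5, lambda: False, rng=lambda: 0.9))  # backoff
```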
32. 32. Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Carrier sense multiple access with collision detection (CSMA/CD) augments the CSMA algorithm to handle collisions. A station monitors the medium after it sends a frame to see if the transmission was successful. If so, the station is finished. If, however, there is a collision, the frame is resent. To better understand CSMA/CD, let us look at the first bits transmitted by the two stations involved in the collision. Although each station continues to send bits in the frame until it detects the collision, Fig. 7.1.10 shows what happens as the first bits collide. Fig. 7.1.10 Collision and abortion in CSMA/CD At time t1, station A has executed its persistence procedure and starts sending the bits of its frame. At time t2, station C has not yet sensed the first bit sent by A; station C executes its persistence procedure and starts sending the bits in its frame, which propagate both to the left and to the right. The collision occurs sometime after time t2. Station C detects the collision at time t3 when it receives the first bit of A's frame. Station C immediately (or after a short time, but we assume immediately) aborts transmission. Station A detects the collision at time t4 when it receives the first bit of C's frame; it also immediately aborts transmission. Looking at the figure, we see that A transmits for the duration t4 − t1; C transmits for the duration t3 − t2. Minimum Frame Size For CSMA/CD to work, we need a restriction on the frame size. Before sending the last bit of the frame, the sending station must detect a collision, if any, and abort the transmission. This is so because the station, once the entire frame is sent, does not keep a copy of the frame and does not monitor the line for collision detection. Therefore, the frame transmission time Tfr must be at least two times the maximum propagation time Tp.
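The restriction Tfr ≥ 2 × Tp translates directly into a minimum frame length once the bandwidth is known. As a sketch, we assume for illustration the classic 10 Mbps Ethernet numbers with a maximum propagation time of 25.6 µs (these example figures are ours, not from the text above):

```python
def min_frame_bits(bandwidth_bps: float, max_Tp_s: float) -> float:
    """Smallest frame size (in bits) satisfying Tfr >= 2 * Tp, so a
    sending station is still transmitting when a collision returns."""
    return bandwidth_bps * 2 * max_Tp_s

# 10 Mbps with a 25.6 us maximum propagation time:
print(round(min_frame_bits(10e6, 25.6e-6)))  # 512 bits, i.e. 64 bytes
```

This is why classic Ethernet specifies a 64-byte minimum frame: any shorter frame could finish transmitting before a collision at the far end of the medium is detected.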
Procedure Now let us look at the flow diagram for CSMA/CD in Figure 7.1.11. It is similar to the one for the ALOHA protocol, but there are three differences. The first difference is the addition of the persistence process in CSMA/CD, which is not in ALOHA. The second difference is the frame transmission: in ALOHA, we first transmit the entire frame and then wait for an acknowledgment; in CSMA/CD, transmission and collision detection form a continuous process. The third difference is the sending of a short jamming signal that enforces the collision in case other stations have not yet sensed it.
33. 33. Figure 7.1.11 Flow diagram for CSMA/CD Throughput The throughput of CSMA/CD is greater than that of pure or slotted ALOHA. The maximum throughput occurs at a different value of G and depends on the persistence method and the value of p in the p-persistent approach. For the 1-persistent method, the maximum throughput is around 50 percent when G = 1. For the non-persistent method, the maximum throughput can go up to 90 percent when G is between 3 and 8. Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) On wireless networks, collisions cannot be detected, so they must be avoided instead. Carrier sense multiple access with collision avoidance (CSMA/CA) was invented for such networks. Collisions are avoided through the use of CSMA/CA's three strategies: the interframe space, the contention window, and acknowledgments, as shown in Figure 7.1.12. Figure 7.1.12 Timing in CSMA/CA Interframe Space (IFS) First, collisions are avoided by deferring transmission even if the channel is found idle. When an idle channel is found, the station does not send immediately. It waits for a period of time called the interframe space, or IFS. Even though the channel may appear idle when it is sensed, a distant station may have already started transmitting. The IFS time allows the front of the transmitted
34. 34. signal from the distant station to reach this station. If after the IFS time the channel is still idle, the station can send, but it still needs to wait a time equal to the contention time. The IFS variable can also be used to prioritize stations or frame types. For example, a station assigned a shorter IFS has a higher priority. Contention Window The contention window is an amount of time divided into slots. A station that is ready to send chooses a random number of slots as its wait time. The number of slots in the window changes according to the binary exponential back-off strategy: it is set to one slot the first time and then doubles each time the station cannot detect an idle channel after the IFS time. In CSMA/CA, if the station finds the channel busy, it does not restart the timer of the contention window; it stops the timer and restarts it when the channel becomes idle. Acknowledgment With all these precautions, there still may be a collision resulting in destroyed data. In addition, the data may be corrupted during the transmission. The positive acknowledgment and the time-out timer can help guarantee that the receiver has received the frame. Procedure Figure 7.1.13 shows the procedure. Figure 7.1.13 Flow diagram for CSMA/CA
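The binary exponential back-off growth of the contention window can be sketched as follows. The cap of 2^10 slots is an assumption we add for the sketch; real MAC layers impose some such cap, but the exact value varies:

```python
import random

def contention_window_slots(failures: int, max_exponent: int = 10) -> int:
    """Window size: 1 slot initially, doubling after each failure to
    find the channel idle (capped at 2**max_exponent slots)."""
    return 2 ** min(failures, max_exponent)

def random_wait(failures: int) -> int:
    """Random slot number chosen from the current contention window."""
    return random.randrange(contention_window_slots(failures))

# Window growth over the first few failed attempts:
print([contention_window_slots(k) for k in range(4)])  # [1, 2, 4, 8]
```

Because each station draws its wait independently from a window that keeps growing, repeated contenders are increasingly likely to pick different slots, which is what makes collision avoidance effective under load.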