TCP, RemoteFX, and IPQ
Impact of Latency and Loss on TCP Throughput, including RemoteFX and IPQ Throughput Test Results
July 2010

The IPQ Benefits Hierarchy

The benefits of IPQ are most often described in terms of a remarkable improvement in the user experience of real-time network applications.  There is, however, an inter-related hierarchy of benefits associated with the use of IPQ.

STEP 1 is accomplished as a core IPQ capability and enables all remaining steps in the benefits hierarchy.  All steps can add significant value to the enterprise in terms of enhanced productivity and reduced costs, but the benefits implied at STEP 2 are the most profound and the most complex.  This brief backgrounder describes the relationship between packet loss and IP network performance and quality.

Packet Loss and UDP

IP networks offer two principal transport protocols: UDP and TCP.  UDP is a ‘best efforts’ protocol that sends each packet once and only once.  There are no retransmissions.  If packets are lost, it is up to the application expecting the lost data to recover, mask, or otherwise compensate for that loss.  Real-time, interactive network applications like video conferencing use UDP not because packet loss is not a problem – it is a big problem for these applications – but because the alternative, TCP, adds latency, and latency is unacceptable in a real-time interactive environment.  Unlike UDP, TCP is a guaranteed-delivery transport protocol.  TCP is the right choice for applications that must receive every packet and for which the time it takes to receive them is of secondary importance.  Email and web browsing are good examples of such applications.

Packet Loss and TCP

To guarantee the delivery of all packets, TCP implements an acknowledgement and retransmission scheme.  The source expects an acknowledgement for every packet that successfully reaches the destination.
When packets are lost – that is, when the destination fails to acknowledge receipt – the TCP source retransmits the lost packets.  But it also treats the loss as a sign that the link cannot sustain the current data rate, and it reduces the rate at which it sends.  In a lossy network environment, the net effect of this scheme is a reduction in the so-called good throughput, or “goodput”.  Goodput refers to the total packets successfully received.  Simple throughput, by contrast, refers to the total packets sent, whether or not those packets reach the destination.  (In simple terms, goodput equals throughput minus retransmissions.)  This reduction in goodput is sometimes called “TCP clamping”.

Latency and TCP

Network latency has a similar effect on TCP throughput.  All else being equal, the time required to send a packet and receive its acknowledgement sets the maximum data rate.  When the source and destination are in close physical proximity and network latency is low, acknowledgements arrive quickly and the next sequence of packets can be transmitted quickly.  When distance and high latency separate the source and destination, acknowledgements take longer and the next transmission of packets is delayed.  The net effect is, again, “TCP clamping” – a reduction in goodput.

When packet loss and latency are both present, they interact to cause even greater reductions in TCP throughput.  The TCP Goodput Base Case chart summarizes their combined impact on network performance and quality.

The starting point in this chart is a 100 Mbps TCP session over a near-perfect network: zero packet loss and no more than 5 milliseconds of latency.
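The two clamping effects described above can be sanity-checked with two classic back-of-the-envelope models: the window/RTT bound (a sender with a fixed window can have at most one window of data in flight per round trip) and the Mathis et al. approximation for loss-limited TCP throughput.  The sketch below uses illustrative assumptions – a 64 KB receive window with no window scaling, a 1460-byte Ethernet MSS, and a model constant of 1 – not parameters taken from the tests described in this paper:

```python
import math

def window_bound_bps(window_bytes: float, rtt_s: float) -> float:
    """Latency bound: at most one receive window in flight per round trip."""
    return window_bytes * 8 / rtt_s

def mathis_bound_bps(mss_bytes: float, rtt_s: float, loss: float) -> float:
    """Loss bound (Mathis et al.): rate ~ MSS / (RTT * sqrt(p)), times a
    constant near 1 that depends on the ACK strategy (taken as 1 here)."""
    return mss_bytes * 8 / (rtt_s * math.sqrt(loss))

# A cross-country path: 100 ms round-trip time, 1% packet loss.
rtt, loss = 0.100, 0.01

lat_limit = window_bound_bps(65_535, rtt)        # classic 64 KB window
loss_limit = mathis_bound_bps(1_460, rtt, loss)  # typical Ethernet MSS

print(f"latency-only bound: {lat_limit / 1e6:.2f} Mbps")   # ~5.24 Mbps
print(f"loss-limited bound: {loss_limit / 1e6:.2f} Mbps")  # ~1.17 Mbps

# Time to move a 1 GB (8e9-bit) file at each rate:
for bps in (loss_limit, lat_limit):
    print(f"1 GB at {bps / 1e6:.2f} Mbps takes {8e9 / bps / 60:.0f} minutes")
```

With a 100 ms RTT and 1% loss, these rough models land very close to the 1.16 Mbps and 5.24 Mbps goodput figures quoted in this paper, which suggests the base case reflects a standard 64 KB window and loss-driven clamping of the kind the models describe.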
Over such a network, there will be almost no TCP clamping, and a full 100 Mbps of goodput will be achieved.

If, however, the network suffers 100 ms of latency (as is common for a cross-country connection) and 1% packet loss (as is common on the public Internet), that same TCP session will be severely clamped and goodput will fall to a mere 1.16 Mbps.  To frame the impact of this reduction, consider a 1 GB file transfer.  Over the degraded network (shown in red on the chart), that transfer would take about 2 hours.  With protection against packet loss, however, goodput could be increased to 5.24 Mbps and the same transfer would complete in about 25 minutes.

By reducing packet loss, and thereby improving network performance, the performance of the network application (in this case, the file transfer) is greatly improved, and this in turn makes for a greatly improved user experience.

Another important variable in TCP network performance is packet size.  TCP achieves its greatest throughput (and therefore, in theory, its best goodput) with large packets.  Multimedia streams over TCP (e.g., RemoteFX) generally carry both audio and video channels.  Audio packets are much smaller than video packets and, when the two are combined, they tend to pull the maximum achievable TCP throughput downward.  This is consistent with our RemoteFX test results shown below.

This chart shows the measured maximum TCP goodput when displaying HD video in a RemoteFX virtual desktop over varying network conditions.

The control for this series of measurements was established by running the HD video over perfect network conditions: zero latency and zero packet loss.  The measured TCP goodput with RemoteFX over a perfect network was 27 Mbps.  (This control point is shown in green text in the upper left-hand corner of the chart.)
Measurements were then taken after adding latency in amounts representative of typical local, metro, cross-country, and global IP network conditions.  The same measurements were repeated with packet loss added in increments representative of typical LAN, public Internet, and WiFi conditions.  Finally, the entire set of measurements was repeated with IPQ protection against the packet loss on the network.

As the chart shows, when 30 milliseconds of latency and 1% packet loss were added to emulate a typical, unprotected metro connection over the public Internet, RemoteFX goodput fell to 3.4 Mbps, a data rate unlikely to deliver a “best” user experience of HD video.  Under the same conditions but with IPQ protection against the packet loss, RemoteFX goodput roughly doubled to 6.2 Mbps, a rate that easily supports a best user experience of HD video.

To emphasize the potential benefits of IPQ for network performance and quality, the chart uses colored backgrounds to highlight the ranges of goodput that support the best user experience (green background), a good user experience (yellow background), and a poor user experience (pink background) of HD video running over RemoteFX.  With IPQ protection, shown on the right-hand side of the chart, the range of latency and loss conditions that can still support a best or good user experience of HD video is far greater than the same range without protection against packet loss.

IPeak Networks has concluded that, by adding IPQ protection against packet loss, the RemoteFX value proposition can be expanded to include not only the LAN but also the WAN.  While competing protocols such as VMware’s PCoIP do claim to support the WAN, they still require a high-quality, low-loss network.  That is, they require an MPLS WAN.
By adding IPQ protection against packet loss, RemoteFX can become the only remote display protocol that supports virtualized desktops running over the public Internet – the true WAN.  The net effect would be to grow the addressable market for RDS to include deployments for nomadic sales forces, for workshifting, and for home consumers, as well as deployments in global markets where readily accessible IP networks are of particularly low quality and the high cost of better-quality network services (i.e., MPLS) is a significant obstacle to the adoption of virtualization technologies.
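IPQ’s internals are proprietary and not described in this paper, but the class of technique it represents – masking packet loss below the transport layer so that TCP never observes the loss and never clamps – can be illustrated with a toy forward-error-correction scheme.  This is a generic sketch, not IPQ’s actual algorithm: the sender emits one XOR parity packet per block of equal-sized packets, and the receiver can rebuild any single lost packet in the block without a retransmission.

```python
from functools import reduce

def xor_parity(packets: list) -> bytes:
    """Parity packet: byte-wise XOR of every packet in the block."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received: dict, parity: bytes, block_size: int) -> dict:
    """Rebuild at most one missing packet from the parity packet."""
    missing = [i for i in range(block_size) if i not in received]
    if len(missing) == 1:
        # XOR of the surviving packets and the parity yields the lost packet.
        received[missing[0]] = xor_parity(list(received.values()) + [parity])
    return received

# Sender side: a block of 4 equal-sized packets plus one parity packet.
block = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(block)

# The network drops packet 2; the receiver rebuilds it locally,
# so the transport layer above never sees a loss.
arrived = {0: block[0], 1: block[1], 3: block[3]}
repaired = recover(arrived, parity, block_size=4)
print(repaired[2])  # b'pkt2'
```

The cost of any scheme in this family is modest extra bandwidth for the redundant packets; the benefit, as the measurements in this paper illustrate, is that TCP sessions riding over the protected path avoid the retransmissions and rate reductions that loss would otherwise trigger.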