This paper presents a technique for end hosts to detect whether intermediaries such as routers are applying compression to traffic flows without the end hosts' knowledge. The technique is non-intrusive, uses only packet inter-arrival times for detection, and requires no changes to or cooperation from intermediaries. Simulations and Internet experiments show the approach can accurately detect compression applied by intermediaries. The technique could help end hosts optimize their own use of compression resources by avoiding redundant compression when intermediaries are already compressing traffic.
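To make the idea concrete, here is a minimal sketch of how inter-arrival times could expose compression. This is not the paper's implementation; the probe count, payload size, port, and the 1.5x threshold are illustrative assumptions. The sender emits one back-to-back UDP train of highly compressible bytes and one of incompressible random bytes; the receiver compares the gaps. If an intermediary compresses payloads, the low-entropy train shrinks on the wire and clears the bottleneck with smaller gaps.

import os
import socket
import statistics
import time

PROBE_COUNT = 100        # packets per train (illustrative)
PAYLOAD_LEN = 1100       # probe payload bytes (illustrative)

def make_receiver(port=9999):
    # Plain UDP socket on the receiving host; the sender mirrors this
    # with sendto() calls toward (receiver_ip, port).
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    return sock

def send_train(sock, addr, payload):
    # Send a back-to-back train of identical UDP probes.
    for _ in range(PROBE_COUNT):
        sock.sendto(payload, addr)

def inter_arrival_gaps(sock):
    # Timestamp each probe on arrival and return the gaps between them.
    arrivals = []
    for _ in range(PROBE_COUNT):
        sock.recv(2048)
        arrivals.append(time.monotonic())
    return [b - a for a, b in zip(arrivals, arrivals[1:])]

def compression_detected(gaps_compressible, gaps_random, factor=1.5):
    # If random (incompressible) probes arrive noticeably farther apart
    # than all-zero (compressible) probes, an intermediary is likely
    # shrinking the compressible train on the wire.
    return statistics.median(gaps_random) > factor * statistics.median(gaps_compressible)

low_entropy_payload = bytes(PAYLOAD_LEN)        # compresses extremely well
high_entropy_payload = os.urandom(PAYLOAD_LEN)  # effectively incompressible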
Extending TCP, the Major Protocol of the Transport Layer (Scientific Review)
We've known for a while that the Internet has ossified as a result of the race to optimise existing applications or enhance security. NATs, performance-enhancing proxies, firewalls, and traffic normalizers are only a few of the middleboxes that are deployed in the network and look beyond the IP header to do their job. IP itself can't be extended because "IP options are not an option" [1]. Is the same true for TCP? In this research we develop a methodology for evaluating middlebox behavior relating to TCP extensions and present the results of measurements conducted from multiple vantage points. The short answer is that yes, we can still extend TCP, but extension design is very constrained, as it needs to take into account prevalent middlebox behaviors. For instance, absolute sequence numbers cannot be embedded in options, as middleboxes can rewrite the ISN while preserving undefined options. Sequence numbering must also be consistent for a TCP connection, because many middleboxes only allow through contiguous flows. We used these findings to analyze three proposed extensions to TCP. We find that MPTCP is likely to work correctly in the Internet or fall back to regular TCP. TcpCrypt seems ready to be deployed, though it is fragile if resegmentation happens, for instance with hardware offload. Finally, TCP extended options in its current form is not safe to deploy.
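A probe in the spirit of that methodology can be sketched with the scapy packet library. The library itself and the cooperating test server are assumptions of this sketch: a stock server simply ignores unknown options, so the far end must be instrumented to echo them back in its SYN-ACK. Option kind 254 is from the experimental range; running raw-packet probes requires root privileges.

from scapy.all import IP, TCP, sr1  # scapy assumed installed; run as root

def probe_unknown_option(dst, dport=80):
    # Send a SYN carrying an experimental TCP option (kind 254) and report
    # whether the SYN-ACK comes back with the option intact, stripped, or
    # not at all -- three behaviors middleboxes commonly exhibit.
    syn = IP(dst=dst) / TCP(dport=dport, flags="S",
                            options=[(254, b"PROBE")])
    reply = sr1(syn, timeout=2, verbose=False)
    if reply is None:
        return "dropped"        # a middlebox may discard unknown options
    kinds = [opt[0] for opt in reply[TCP].options]
    return "echoed" if 254 in kinds else "stripped"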
IRJET - Simulation Analysis of a New Startup Algorithm for TCP New Reno (IRJET Journal)
This document presents a simulation analysis of a new startup algorithm for TCP New Reno to improve responsiveness for short-lived applications. The proposed TCP SYN Loss (TSL) startup algorithm uses a less conservative congestion response than standard TCP when connection setup packets are lost. Simulations are conducted using the ns-2 network simulator to evaluate the performance of TSL variants under different levels of congestion. The main results show that TSL variants can achieve an average latency gain of 15 round-trip times compared to standard TCP at up to 90% link utilization with a packet loss rate of 1%.
This paper focuses on packet routing in Delay Tolerant Networks (DTNs) where end-to-end connectivity is intermittent. It studies routing policies for transferring files when packets arrive progressively at the source node. It analyzes the optimality conditions for routing policies in terms of delivery probability and delay. It proposes piecewise-threshold policies that perform better than existing work-conserving policies, especially when there is an energy constraint. It extends the analysis to coded packets generated using linear block codes and rateless coding. Numerical results show piecewise-threshold policies have higher efficiency than work-conserving policies.
Token Based Packet Loss Control Mechanism for Networks (IJMER)
This document summarizes a research paper that proposes a new congestion control mechanism using tokens. It begins with background on congestion control and modern IP networks. The proposed approach uses edge and core routers to write quality-of-service measures into packet headers as tokens. Tokens are interpreted by routers to gauge congestion, especially at edge routers. Based on the tokens, edge routers can shape traffic from sources to reduce congestion. The mechanism aims to provide fairness while controlling packet loss. Key aspects discussed include stable token-limit congestion control, core routers, edge routers, and how the approach compares to related work like CSFQ.
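The paper's stable token-limit scheme is more elaborate, but the core idea of shaping a source's traffic against a token budget can be illustrated with a plain token bucket. This is a generic sketch, not the paper's mechanism; the rate and burst figures are arbitrary.

import time

class TokenBucket:
    # Minimal token-bucket shaper of the kind an edge router could use to
    # limit a source's ingress rate. 'rate' is tokens (bytes) added per
    # second; 'burst' caps how many tokens can accumulate.
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, packet_len):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to the burst cap.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True   # forward the packet
        return False      # drop or queue: the source exceeds its share

shaper = TokenBucket(rate=125_000, burst=10_000)  # ~1 Mbit/s, 10 kB burst
print(shaper.allow(1500))                         # True while budget lasts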
TRIDNT: THE TRUST-BASED ROUTING PROTOCOL WITH CONTROLLED DEGREE OF NODE SELFI... (IJNSA Journal)
In mobile ad hoc networks (MANETs), nodes must cooperate to achieve routing, and node misbehaviour due to selfish or malicious intent can significantly degrade performance because most existing MANET routing protocols aim only at finding the most efficient path. In this paper, we propose a Two node-disjoint Routes protocol for Isolating Dropper Nodes in MANETs (TRIDNT) to deal with such misbehaviour. TRIDNT allows some degree of selfishness, giving selfish nodes an incentive to declare themselves to their neighbours, which reduces the time spent searching for misbehaving nodes. In TRIDNT, two node-disjoint routes between the source and destination are selected based on their trust values. We use both DLL-ACK and end-to-end TCP-ACK to monitor the behaviour of nodes on the routing path: if malicious behaviour is detected, a path-searching tool starts to identify the malicious nodes and isolate them. Finally, through mathematical analysis we find that our proposed protocol reduces the time to find malicious nodes relative to the expected route lifetime, and prevents isolated misbehaving nodes from participating in any future routes, which improves overall network throughput.
Dynamic control of coding for progressive packet arrivals in DTNs (IEEEFINALYEARPROJECTS)
This document summarizes a survey and analysis of various host-to-host congestion control proposals for TCP data transmission. It discusses the basic principles that underlie current host-to-host algorithms, including probing available network resources, estimating congestion through packet loss or delay, and quickly detecting packet losses. The document then analyzes specific algorithms like slow start, congestion avoidance, and fast recovery. It also examines calculating retransmission timeout and round-trip time, congestion avoidance and packet recovery techniques, and data transmission in TCP. The overall goal of these proposals is to control congestion in a distributed manner without relying on explicit network notifications.
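As a brief illustration of the three classic pieces named above, the window updates can be written in a few lines. This is a simplified sketch in units of MSS, ignoring receiver window, SACK, and timer details.

def on_ack(cwnd, ssthresh, mss=1):
    # One step of the classic window update: exponential growth in slow
    # start, additive increase in congestion avoidance.
    if cwnd < ssthresh:
        return cwnd + mss               # slow start: +1 MSS per ACK
    return cwnd + mss * mss / cwnd      # congestion avoidance: ~+1 MSS per RTT

def on_loss(cwnd, fast_retransmit):
    # Multiplicative decrease: halve on triple-duplicate ACK (fast
    # recovery), collapse to one segment on a retransmission timeout.
    ssthresh = max(cwnd / 2, 2)
    cwnd = ssthresh if fast_retransmit else 1
    return cwnd, ssthresh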
In the last decade Peer to Peer technology has been thoroughly explored, because it overcomes many limitations compared to the traditional client-server paradigm. Despite its advantages over a traditional approach, the ubiquitous availability of high-speed, high-bandwidth and low-latency networks has supported the traditional client-server paradigm. Recently, however, the surge of streaming services has spawned renewed interest in Peer to Peer technologies. In addition, services like geolocation databases and browser technologies like Web-RTC make a hybrid approach attractive.
In this paper we present algorithms for the construction and the maintenance of a hybrid P2P overlay multicast tree based on topological distances. The essential idea of these algorithms is to build a multicast tree by choosing neighbours close to each other. The topological distances can be easily obtained by the browser using the geolocation API. Thus the implementation of algorithms can be done web-based in a distributed manner.
We present proofs of our algorithms as well as experimental results and evaluations.
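For a sense of how neighbour choice by topological distance could look, here is a hedged sketch. It is not the paper's algorithm: pick_parent and the max_children fan-out limit are invented for illustration. It attaches a joining peer to the nearest tree node with spare capacity, using the great-circle distance a geolocation lookup yields.

import math

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) pairs in degrees --
    # the kind of topological proxy a browser geolocation lookup provides.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def pick_parent(joining_peer, tree_peers, max_children=4):
    # Attach a joining peer to the geographically closest tree node that
    # still has spare fan-out, greedily keeping nearby peers adjacent.
    candidates = [p for p in tree_peers if len(p["children"]) < max_children]
    parent = min(candidates,
                 key=lambda p: haversine_km(p["pos"], joining_peer["pos"]))
    parent["children"].append(joining_peer)
    return parent

root = {"pos": (48.1, 11.6), "children": []}
pick_parent({"pos": (48.2, 11.5), "children": []}, [root])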
A dynamic performance-based flow control (ingenioustech)
This document discusses a proposed congestion control mechanism called Network Border Protocol (NBP) that aims to prevent congestion collapse and unfairness in networks. NBP works by having edge routers monitor and control the ingress rates of individual flows to prevent packets from entering the network faster than they can leave. It uses feedback exchanged between ingress and egress routers to inform them of flow rates. While adding complexity to edge routers, NBP's approach aims to isolate this within the network borders and not require changes to end systems or transport protocols. The key components of NBP include its rate control algorithm, use of leaky bucket algorithms at ingress routers, and feedback control between edge routers.
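A minimal leaky-bucket regulator of the kind the summary attributes to NBP's ingress routers might look as follows. This is a sketch under assumptions: the drain rate would come from egress feedback in NBP, and the depth bound here is arbitrary.

from collections import deque

class LeakyBucket:
    # Leaky-bucket regulator at an ingress router: packets enter a bounded
    # queue and drain at the rate the egress router reports as sustainable.
    def __init__(self, drain_rate_pps, depth):
        self.queue = deque()
        self.drain_rate = drain_rate_pps  # would be set from egress feedback
        self.depth = depth

    def arrive(self, pkt):
        if len(self.queue) >= self.depth:
            return False                  # bucket full: drop at the border
        self.queue.append(pkt)
        return True

    def tick(self, seconds):
        # Release packets at the configured drain rate.
        for _ in range(min(len(self.queue), int(self.drain_rate * seconds))):
            yield self.queue.popleft()

bucket = LeakyBucket(drain_rate_pps=1000, depth=64)
bucket.arrive("pkt-1")
list(bucket.tick(0.01))   # forwards up to 10 queued packets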
ROUTING PROTOCOLS FOR DELAY TOLERANT NETWORKS: SURVEY AND PERFORMANCE EVALUATION (ijwmn)
Delay Tolerant Networking (DTN) is a promising technology that aims to provide efficient communication between devices in a network with no guaranteed continuous connectivity. Most existing routing schemes for DTNs exploit the advantage of message replication to achieve a high message delivery rate. However, these schemes commonly suffer from large communication overhead due to the lack of efficient mechanisms to control message replication. In this paper we give a brief survey on routing protocols designed for DTNs, and evaluate the performance of several representative routing protocols, including Epidemic, Spray and Wait, PRoPHET, and 3R, through extensive trace-driven simulations. Another objective of this work is to evaluate the security strength of different routing schemes under common DTN attacks such as the black hole attack. The results and analysis presented in this paper can provide useful guidance on the design and selection of routing protocols for given delay-tolerant applications.
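As an illustration of one of the surveyed protocols, binary Spray and Wait can be captured in a few lines. This is a simplified sketch with invented Message/Node types, not any paper's reference code: each message starts with a copy budget, half of which is handed to every newly met node until a single copy remains.

from dataclasses import dataclass, field

@dataclass
class Message:
    dest: str
    copies: int   # logical copy budget (L in the original scheme)

@dataclass
class Node:
    id: str
    buffer: dict = field(default_factory=dict)

def on_contact(msg, peer):
    # Binary Spray and Wait, sketched: while a node holds more than one
    # logical copy it hands half of them to each newly met peer (spray);
    # with a single copy left it only delivers to the destination (wait).
    if peer.id == msg.dest:
        print(f"delivered to {peer.id}")
    elif msg.copies > 1 and msg.dest not in peer.buffer:
        half = msg.copies // 2
        peer.buffer[msg.dest] = Message(msg.dest, half)  # spray half the budget
        msg.copies -= half
    # copies == 1 and peer is not the destination: keep carrying the message

b = Node("b")
m = Message(dest="sink", copies=8)
on_contact(m, b)   # b now holds 4 copies, m keeps 4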
A Distributed Approach to Solving Overlay Mismatching Problem (Zhenyun Zhuang)
This document proposes an algorithm called Adaptive Connection Establishment (ACE) to address the topology mismatch problem between the logical overlay network and physical underlying network in unstructured peer-to-peer systems. ACE builds a minimum spanning tree among each source node and its neighbors within a certain diameter, optimizes connections not on the tree to reduce redundant traffic, while retaining search scope. It evaluates tradeoffs between topology optimization and information exchange overhead by changing the diameter. Simulation results show ACE can significantly reduce unnecessary P2P traffic by efficiently matching the overlay and physical network topologies.
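The minimum-spanning-tree construction at ACE's core is standard; a compact Prim's-algorithm sketch over an arbitrary distance function (illustrative only, not the authors' code) could read:

import heapq
import math

def minimum_spanning_tree(nodes, dist):
    # Prim's algorithm: returns tree edges connecting the nodes (a source
    # node and its nearby neighbors) with minimum total distance.
    root, *rest = nodes
    in_tree, edges = {root}, []
    frontier = [(dist(root, v), root, v) for v in rest]
    heapq.heapify(frontier)
    while frontier and len(in_tree) < len(nodes):
        d, u, v = heapq.heappop(frontier)
        if v in in_tree:
            continue
        in_tree.add(v)
        edges.append((u, v, d))
        for w in nodes:
            if w not in in_tree:
                heapq.heappush(frontier, (dist(v, w), v, w))
    return edges

coords = {"s": (0, 0), "a": (1, 0), "b": (2, 1)}
print(minimum_spanning_tree(list(coords),
                            lambda u, v: math.dist(coords[u], coords[v])))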
Hybrid Periodical Flooding in Unstructured Peer-to-Peer Networks (Zhenyun Zhuang)
This document proposes a new search mechanism called Hybrid Periodical Flooding (HPF) for unstructured peer-to-peer networks. HPF aims to reduce unnecessary traffic like blind flooding while also addressing the "partial coverage problem" of some statistics-based search mechanisms. It introduces the concept of Periodical Flooding (PF), which controls the number of neighbors a query is forwarded to based on the time-to-live value. This allows the forwarding behavior to change periodically over the query's lifetime. HPF then combines PF with weighted selection of neighbors based on multiple metrics to guide queries towards potentially relevant results while exploring more of the network.
Performance improvement of bottleneck link in red vegas over heterogeneous ne... (eSAT Publishing House)
1) The document analyzes delay performance in multihop wireless networks and develops techniques to derive lower bounds on average packet delay under any scheduling policy.
2) It introduces the concept of a k-bottleneck, where k or fewer links can transmit simultaneously due to interference constraints.
3) A key technique, called reduction, simplifies analysis of queues upstream of a k-bottleneck by reducing it to a single queue system with k servers and appropriate arrival processes.
This document summarizes research into the resilience of deployed TCP implementations to "blind" in-window attacks from off-path adversaries. The authors tested major operating systems and network infrastructure and found that:
1) Over 50% of web server connections were vulnerable to at least one blind in-window attack, with 44% accepting invalid data packets.
2) All routers and switches tested had some form of TCP vulnerability, despite more recent systems being resistant to SYN and reset attacks.
3) Ephemeral port selection on real systems remains predictable, potentially aiding attackers, though adoption of new operating systems may improve this over time.
A NEW ALGORITHM FOR CONSTRUCTION OF A P2P MULTICAST HYBRID OVERLAY TREE BASED... (csandit)
An Extensive Literature Review of Various Routing Protocols in Delay Tolerant... (IRJET Journal)
This document summarizes an extensive literature review on routing protocols in delay tolerant networks (DTNs). It begins by defining DTNs as wireless networks with intermittent connectivity where nodes use a store-carry-forward mechanism. Common routing protocols for DTNs like Epidemic, Spray and Wait, and PRoPHET are described. The document then reviews several papers that propose and evaluate new routing algorithms or improvements for DTNs, analyzing metrics like delivery ratio, overhead, and latency. Key factors considered include node contact histories, social characteristics, energy constraints, and message prioritization. Finally, it suggests the contact duration between nodes could be an important parameter to further optimize routing in DTNs.
Developing QoS by Priority Routing for Real Time Data in Internet of Things (IoT)... (IJCNCJournal)
- The document discusses developing quality of service (QoS) in Internet of Things (IoT) networks by using priority routing for real-time data.
- It proposes a novel solution that integrates mobile ad hoc networks (MANETs) to improve delivery of high priority real-time application data over low latency MANET paths.
- Experimental results showed the proposed approach was effective in reducing network overhead and congestion, while also improving routing protocol performance in terms of packet delay and throughput.
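The priority treatment described above boils down to serving real-time traffic ahead of everything else at each forwarding step; a strict-priority queue sketch (illustrative, with invented class names and priority values) shows the idea:

import heapq
import itertools

class PriorityForwarder:
    # Strict-priority egress queue: real-time packets (priority 0) always
    # leave before best-effort ones -- the behavior low-latency paths
    # rely on for high-priority application data.
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # keeps FIFO order within a class

    def enqueue(self, pkt, priority):
        heapq.heappush(self._heap, (priority, next(self._seq), pkt))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = PriorityForwarder()
q.enqueue("sensor-telemetry", priority=0)   # real-time class
q.enqueue("bulk-sync", priority=2)          # background class
assert q.dequeue() == "sensor-telemetry"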
The document provides step-by-step instructions on how to use Google Drive and its various productivity apps, including Google Docs, Sheets, Slides, and Forms. It explains how to create a Google account, access Drive, and explores the basic functions of the apps, highlighting their similarities to Microsoft Office programs like Word, Excel, and PowerPoint. The overall purpose is to introduce the reader to Google's online office suite and file storage system.
Digital Nativity: Education in the Generation of the Tech-Savvy (Chris Mogensen)
"The newest generation of learners arriving at our shores have never been without technology in their lives…how does this simple fact change their perception of education? What does it mean for them, and us? Explore the paradigm of teaching to the Digital Native."
Presentation given at the Association of Adult Educators conference on October 23rd, 2015 at Nova Scotia Community College - Waterfront Campus in Dartmouth, Nova Scotia, Canada.
Bibliography available on request.
This document provides an overview of trigonometry including definitions of basic trigonometric functions, identities, graphs of trig functions, inverse trig functions, laws of sines and cosines, and vectors. It defines angles, conversions between degrees and radians, trig functions of right triangles, trig identities, graphs of sine, cosine, tangent and cotangent waves, inverse trig functions, and vector operations like addition, subtraction, scalar multiplication and dot products. The document is a tutorial covering the essential topics of trigonometry.
The document outlines a social media strategy for Ryan's SuperValu Glanmire. It discusses researching customers through interviews to understand engagement and identify key themes. The research found that customers are interested in fresh foods, community updates, competitions, healthy recipes and online shopping. The current strategy's strengths are giveaways and updates, but it could post more frequently and promote events better. Recommendations include running online and in-store competitions, promoting the bakery, posting community news and health/fitness, and varying timely content without overloading followers. The goal is to increase interaction and store footfall through an effective social media presence.
The document provides success indicators for students with ADHD in the classroom and in the school. In the classroom, it recommends placing the student's desk near the board to maintain eye contact, structuring tasks into short periods, and allowing movement when necessary. At the school level, it suggests managing support resources; organizing spaces, schedules, and resources; and having an action protocol for suspected ADHD.
The UK National Screening Committee commissioned a review in 2010 to evaluate screening for atrial fibrillation in individuals over 65 years old. The review found the cost-effectiveness of a national screening program was uncertain and that current management of atrial fibrillation was poor. While most respondents to a public consultation favored screening, concerns remained about improving clinical management. Therefore, the Committee recommended retaining the current policy that screening this population is not recommended due to potential for more harm than good.
The document provides an overview of lung cancer screening and key events since the last UK National Screening Committee guidance on the topic in 2006 and 2007. It summarizes evidence from the National Lung Screening Trial showing a reduction in lung cancer mortality from low-dose CT screening. It also discusses the 2013 USPSTF recommendation based on this evidence and ongoing trials like NELSON. The PenTAG group at the University of Exeter will produce a health technology assessment on the clinical and cost-effectiveness of low-dose CT screening for lung cancer, incorporating results from the NELSON trial expected in late 2016, to inform future UK guidance.
This document discusses digital humanities in Portugal and ways to make museums and cultural institutions more accessible and humane through technology. It provides examples of several museums that have incorporated digital technologies like Google Art Project, virtual tours, live streaming of performances and events, and 3D scanning of artifacts to allow remote access to their collections and expand their reach worldwide. The talk emphasizes that digital initiatives should be carefully considered to fit the goals and strengths of each institution rather than just copying others.
The document discusses ways to express agreeing and disagreeing in dialogues. It provides example phrases like "I agree with you" and "I disagree with you". It then gives an example dialogue where the characters discuss ways to address climate change caused by issues like illegal logging and pollution. The dialogue is incomplete and the students must arrange sentences to complete it. They are then instructed to create their own dialogue in pairs on the topic and present it to the class.
This document discusses brand management concepts like brand equity, brand value, and the brand equity pyramid. It defines brand equity as the goodwill and recognition that builds customer affection and loyalty over time, translating into higher sales and profits. Brand value refers to the monetary worth of a brand as a business asset. The brand equity pyramid shows how consumer recognition, perception, response, and bonding are built up from a brand's identity. Strong brands achieve high equity through a large base of loyal customers. While equity measures loyalty, value looks at monetary returns; a brand can have high equity but low value, or vice versa.
The document concerns the certification in intensive use of technologies II of the student Gaby Tania Carrion Alegre of the Universidad Católica Los Ángeles de Chimbote. The document analyzes figures of peace.
How to use Google Calendar to create an event (Andrea Viernes)
The document provides instructions for using Google Calendar. It explains how to sign in with a Google account, navigate to the Google Calendar page, and view the calendar interface which includes a small monthly calendar and sections to view different calendars. It describes how to create new calendars, choose a calendar view, create and edit events, customize calendar and event colors, and change calendar settings.
The document discusses the history and techniques of breast incisions for cancer surgery and reconstruction. It outlines how the transverse elliptical incision became standard but often led to unnatural breast shapes. The Wise pattern introduced in 1956 uses skin tailoring within a keyhole pattern to better preserve the breast's natural appearance and shape. Today, multidisciplinary teams plan oncologically justified incisions tailored to each patient's breast within the Wise pattern principles for optimal cosmetic and medical outcomes.
A mixed circuit combines elements connected in series and in parallel in any arrangement, using both systems. Ohm's law states that the current (I) flowing through a conductor is directly proportional to the applied potential difference (V) and inversely proportional to the resistance (R) of the material, as expressed by the equation V = I × R.
HOW TO DETECT MIDDLEBOXES: GUIDELINES ON A METHODOLOGY (cscpconf)
Internet middleboxes such as VPNs, firewalls, and proxies can significantly change the handling of traffic streams. They play an increasingly important role in various types of IP networks. If end hosts can detect them, these hosts can make beneficial, and in some cases crucial, improvements in security and performance. But because middleboxes have widely varying behavior and effects on the traffic they handle, no single technique has been discovered that can detect all of them.
Devising a mechanism to detect any particular type of middlebox interference involves many design decisions and has numerous dimensions. One approach to managing the complexity of this process is to provide a set of systematic guidelines. This paper is the first attempt to introduce a set of general guidelines (as well as the rationale behind them) to assist researchers in devising methodologies for end hosts to detect middleboxes. The guidelines presented here take some inspiration from the previous work of other researchers using various and often ad hoc approaches; they are, however, mainly based on our own experience with research on the detection of middleboxes. To assist researchers in using these guidelines, we also provide an example of how to bring them into play for detection of network compression.
USING A DEEP UNDERSTANDING OF NETWORK ACTIVITIES FOR SECURITY EVENT MANAGEMENT (IJNSA Journal)
With the growing deployment of host-based and network-based intrusion detection systems in increasingly large and complex communication networks, managing low-level alerts from these systems becomes critically important. Probes of multiple distributed firewalls (FWs), intrusion detection systems (IDSs), or intrusion prevention systems (IPSs) are collected throughout a monitored network, such that large series of alerts (alert streams) need to be fused. An alert indicates an abnormal behavior, which could potentially be a sign of an ongoing cyber attack. Unfortunately, in a real data communication network, administrators cannot manage the large number of alerts occurring per second, in particular since most alerts are false positives. Hence, an emerging track of security research has focused on alert correlation to better distinguish true positives from false positives. To achieve this goal we introduce Mission Oriented Network Analysis (MONA). This method builds on data correlation to derive network dependencies and manage security events by linking incoming alerts to network dependencies.
A novel token based approach towards packet loss control (eSAT Journals)
This document summarizes a research paper that proposes a novel congestion control mechanism called Stable Token-Limited Congestion Control (STLCC). STLCC monitors inter-domain traffic rates and limits the number of tokens to control congestion and improve network performance. The authors implemented STLCC in a prototype application and found that it was effective at controlling packet loss and improving network performance compared to other congestion control methods. They concluded that STLCC can automatically measure and reduce congestion to allocate network resources stably.
This document analyzes data from Measurement Lab (M-Lab) to study the impact of interconnection between internet service providers (ISPs) on consumer internet performance in the United States. The study found sustained performance degradation for customers of major access ISPs like AT&T and Comcast when their traffic passed over interconnections with transit ISPs like Cogent, Level 3, and XO. This degradation was often diurnal and worst during peak usage hours, indicating it was caused by network congestion. While the study could not determine fault or the details of interconnection agreements, it provides evidence that issues at interconnection points can significantly impact the quality of internet access experienced by consumers.
This document summarizes a study on the performance of LTE networks. The researchers conducted passive and active measurements on a commercial LTE network with over 300,000 users to analyze network characteristics and resource utilization. They found that while LTE provides higher bandwidth than 3G, TCP flows often underutilize available bandwidth due to factors like limited receive windows. On average, flows used only 52% of available bandwidth, lengthening transfers and wasting energy. The researchers developed techniques to estimate bandwidth and identify inefficient application behaviors to recommend protocol and design improvements.
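The receive-window limitation the researchers describe follows directly from the fact that a flow can carry at most one window per round trip, which a few lines make concrete. The numbers below are illustrative, not figures from the study.

def achievable_throughput(capacity_bps, rwnd_bytes, rtt_s):
    # A flow can never carry more than one receive window per RTT, so the
    # usable rate is the smaller of link capacity and rwnd/RTT.
    window_limited = 8 * rwnd_bytes / rtt_s   # bits per second
    return min(capacity_bps, window_limited)

# Illustrative: a 64 kB window over a 60 ms path caps a flow at about
# 8.7 Mbit/s even if the radio link offers 20 Mbit/s.
rate = achievable_throughput(20e6, 64 * 1024, 0.060)
print(f"utilization: {rate / 20e6:.0%}")      # ~44% of the 20 Mbit/s link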
SIP Overload Control Testbed: Design, Building and Evaluation (ijasa)
This document describes the design, implementation, and evaluation of a SIP overload control testbed using a window-based mechanism on the Asterisk open source proxy platform. The mechanism aims to maintain server throughput near capacity during overload by adjusting the window size for active transactions based on average transaction delay. Evaluation results show that with this mechanism, the proxy maintains maximum throughput even under heavy loads and reduces average call establishment delays and message resend rates compared to without overload control.
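One control step of such a delay-driven window mechanism might be sketched as follows. This is a simplification with illustrative constants; the testbed's actual controller tracks average transaction delay on the Asterisk proxy.

def adjust_window(window, avg_delay_s, target_delay_s=0.5,
                  min_window=1, max_window=200):
    # Shrink the number of admitted in-flight transactions when average
    # transaction delay exceeds the target; grow it additively otherwise.
    if avg_delay_s > target_delay_s:
        return max(min_window, window // 2)   # back off under overload
    return min(max_window, window + 1)        # cautiously admit more calls

w = 100
w = adjust_window(w, avg_delay_s=0.9)   # overload: window halves to 50
w = adjust_window(w, avg_delay_s=0.2)   # recovery: window creeps to 51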
CPCRT: Crosslayered and Power Conserved Routing Topology for congestion Cont... (IOSR Journals)
The document describes a proposed Crosslayered and Power Conserved Routing Topology (CPCRT) for congestion control in mobile ad hoc networks. The CPCRT aims to improve transmission performance by distinguishing between packet loss due to link failure versus other causes, while also conserving power used for packet transmission. It builds upon an earlier Crosslayered Routing Topology (CRT) approach by incorporating power conservation. The CPCRT is intended to identify the root cause of packet loss, avoid unnecessary congestion handling from link failures, allow congestion handling at specific high-traffic nodes rather than all nodes, and optimize resource and power usage for packet routing in mobile ad hoc networks.
Comparative Analysis of Different TCP Variants in Mobile Ad-Hoc Network (partha pratim deb)
The document analyzes the performance of different TCP variants (New Reno, Reno, Tahoe) with MANET routing protocols (AODV, DSR, TORA) through simulation. It finds that in scenarios with 3 and 5 nodes, AODV has better throughput than DSR and TORA for all TCP variants. Throughput decreases for all variants as node count increases. New Reno provides multiple packet loss recovery and is the best choice for AODV in MANETs due to its consistent performance with changes in node count. Further analysis of additional protocols and TCP variants is recommended.
Network Traffic Anomaly Detection Through Bayes Net (Gyan Prakash)
Traffic anomaly detection using high-performance measurement systems offers the possibility of improving the speed of detection and enabling detection of important, short-lived anomalies. In this paper we investigate the problem of detecting anomalies using traffic measurements with fine-grained timestamps. We develop a new detection algorithm (called KS3) that utilizes a Bayes net to efficiently consider multiple input signals and to explicitly define what is considered "anomalous". The input signals considered by KS3 are traffic volumes and correlations between ingress/egress packet and bit rates. These complementary signals enable identification of an expanded range of anomalies. Using a set of high-precision traffic measurements collected at our campus border router over a 10-month period and an annotated anomaly log supplied by our network operators, we show that KS3 is highly accurate, identifying 86% of the anomalies listed in the log. Compared with well-known time-series-based and wavelet-based detectors, this represents over a 20% improvement in accuracy. Investigation of events identified by KS3 that did not appear in the operator log indicates that many are, in fact, true positives. Deployment of KS3 in an operational environment supports this, showing zero false positives during initial tests.
TCP INCAST AVOIDANCE BASED ON CONNECTION SERIALIZATION IN DATA CENTER NETWORKS (IJCNCJournal)
In distributed file systems, a well-known congestion collapse called TCP incast (Incast for short) occurs because many servers send data to the same client almost simultaneously, and the resulting packets overflow the port buffer of the link connecting to the client. Incast leads to throughput degradation in the network. In this paper, we propose three methods to avoid Incast based on the fact that the bandwidth-delay product is small in current data center networks. The first method completely serializes connection establishments: with serialization, the number of packets in the port buffer becomes very small, which leads to Incast avoidance. The second and third methods overlap the slow start period of the next connection with the currently established connection to improve on the throughput of the first method. Numerical results from extensive simulation runs show the effectiveness of our three proposed methods.
Cataloging Of Sessions in Genuine Traffic by Packet Size Distribution and Ses...IOSR Journals
Abstract: Classifying traffic into specific network applications is vital for application-aware network management, and it becomes more challenging because modern applications obscure their network behaviors. Whereas port-number-based classifiers work only for a few well-known applications and signature-based classifiers do not apply to encrypted packet payloads, researchers increasingly classify network traffic based on behaviors observed in network applications. In this document, a session-level flow classification (SLFC) approach is proposed to classify network flows as a session, which comprises the flows in the same conversation. SLFC first classifies flows into the corresponding applications by packet size distribution (PSD) and subsequently groups flows into sessions by port locality. With PSD, each flow is transformed into a set of points in a two-dimensional space, and the distances between each flow and the representatives of preselected applications are computed. The flow is predicted as the application with the minimum distance. Meanwhile, port locality is used to group flows into sessions, since an application often uses consecutive port numbers within a session. If the flows of a session are classified into different applications, an arbitration algorithm is invoked to correct the classification.
Keywords: flow classification; session grouping; session classification; packet size distribution
Reduce the False Positive and False Negative from Real Traffic with Intrusion...inventy
End-to-End Detection of Compression of
Traffic Flows by Intermediaries
Vahab Pournaghshband
Computer Science Department
University of California, Los Angeles
vahab@cs.ucla.edu
Alexander Afanasyev
Computer Science Department
University of California, Los Angeles
afanasev@cs.ucla.edu
Peter Reiher
Computer Science Department
University of California, Los Angeles
reiher@cs.ucla.edu
Abstract—Routers or nodes on the Internet sometimes apply
link-layer or IP-level compression on traffic flows without the
knowledge of the end-hosts. If the end-host applications are aware
of the compression already provided by an intermediary, they can
save time and resources by not applying compression themselves.
The savings are even greater in mobile applications.
We present a probing technique to detect the compression of
traffic flows by intermediaries. Our technique is non-intrusive
and robust to cross traffic. It is entirely end-to-end, requiring
neither changes to nor information from intermediate nodes.
We present two different but similar approaches based on how
cooperative the end-hosts are. Our proposed technique only
uses packet inter-arrival times for detection. It does not require
synchronized clocks at the sender and receiver. Simulations
and Internet experiments were used to evaluate our approach.
Our findings demonstrate an accurate detection of compression
applied to traffic flows by intermediaries.
I. INTRODUCTION
On the Internet, every packet sent goes through numerous
routers or intermediaries until it gets to the intended receiver.
While routing the traffic, these intermediaries are potentially
capable of making serious changes to what happens to a traffic
stream on the network. One class of intermediaries makes no
changes to the content of the traffic, giving the appearance
that nothing has been done to the stream other than routing
it to the destination. This transparency property may make
end-to-end detection of such intermediaries harder in most
cases. The class of such intermediaries is very broad (e.g.,
performance enhancing proxies, VPN gateways, Internet cen-
sors, and network dissuasion [29]), and some intermediaries
have been deployed worldwide for decades. Investigating the
detectability of such intermediaries leads to two questions:
(i) can the sender or receiver (or both if they cooperate)
determine that something of this kind has been done if they
pay attention, and/or (ii) is it possible for such an intermediary
to work by stealth and remain undetected? Another example
of these intermediaries is network compression, which
happens at intermediate nodes transparently to the end-
hosts. As an example of determining the detectability of third
party influences of this kind, in this paper we investigate the
feasibility of detecting network compression on the path.
One way to increase network throughput is to compress
data that is being transmitted. Network compression may
happen at different network layers and in different forms:
Application layer. Compression at the application layer
is widely used, especially for applications that use highly
compressible data such as VoIP and video streaming. At
the application layer, both compression and decompression
happen at the end-hosts.
TCP/IP Header. Often header compression is possible when
there is significant redundancy between header fields; within
the headers of a single packet, but in particular between
consecutive packets belonging to the same flow. This is
mainly achieved by first sending header field information
that is expected to remain static for most of the lifetime
of the packet flow. Since these methods are used to avoid
sending unchanged header fields for a network flow, no data
compression algorithm is actually applied here [6].
Early TCP/IP header compressions such as CTP [17] and
IPHC [11] were designed for slow serial links of 32 kbps or
less to produce a significant performance impact [8]. More
recent header compressions have since been developed, such
as ROHC [19], [27].
IP payload. IPComp [33] is the de facto method in
this case. LZS [12], Deflate [28], and ITU-T V.44 [5] are
the well-known compression algorithms that work with
IPComp. IPComp is generally used with IPsec. IP payload
compression is something of a niche optimization. It is
necessary because IP-level security converts IP payloads to
random bitstreams, defeating commonly deployed link-layer
compression mechanisms that are faced with payloads
which have no redundant information that can be more
compactly represented. However, many IP payloads are
already compressed (images, audio, video, or zipped files
being FTPed), or are already encrypted above the IP layer
(e.g., SSL). These payloads will typically not compress
further, limiting the benefit of this optimization. In general,
application-level compression can often outperform IPComp
because of the opportunity to use compression dictionaries
based on knowledge of the specific data being compressed.
This makes the mechanism less useful, and hence reduces the
need for IPComp [9].
Link layer. Commonly used link compression schemes
include the Stacker [13] and the Predictor [31][8]. The
Stacker compression algorithm is based on the Lempel-Ziv
[12] compression algorithm. The Predictor compression
scheme works by predicting the next sequence of characters
in a data stream using an index to look up a sequence in
a compression dictionary. There is no information on how
widely link-layer compression is used in practice.
Except for application-layer compression, compression hap-
pens at intermediate nodes, often without the knowledge of
end-users. For example, in January 2013, a researcher discov-
ered that Nokia had been applying compression to its users’
data without their knowledge [3]. In this case, the intermediary
was surreptitiously decrypting and re-encrypting user packets
in order to effectively apply compression. Users surely would
have preferred to know that this was happening, both because
of the security risk and because it would render their own
application-level compression unnecessary.
However, performing compression and decompression re-
quires many resources at the intermediate nodes, and the
resulting overhead can overload the intermediary’s queue,
causing delay and packet losses. Further, not all commercial
routers come with compression capabilities [16]. Thus, some
intermediaries apply compression, some do not, and generally
they do not tell end-users whether they do. While managing
resources effectively at end-hosts is not as crucial as it is at
routers, it is still beneficial—particularly for mobile devices
where resources are limited. Wasting these resources on redun-
dant compression is undesirable. End-hosts can benefit from
recognizing when compression has already been applied on
a network connection.
Ideally, end-hosts and intermediaries should coordinate their
compression decision, but practical problems make that ideal
unlikely. Therefore, since the end-hosts have the greatest
interest in proper compression choices for their data, they
could detect if intermediate compression is present and adjust
their behavior accordingly. An end-to-end approach to detect
compression by intermediaries can help to save end-hosts’
resources by not compressing data when intermediaries are
already doing so.
This paper describes a method to allow end-to-end detection
of intermediary-provided link-layer or IP-level compression.
Such an end-to-end approach does not require any changes to
or cooperation from intermediary nodes, making its deploy-
ment and use much more practical. We propose two end-to-
end approaches based on a receiver’s cooperativeness in the
detection process. A cooperative receiver is willing to make
necessary changes on its machine or system to fully cooperate
with the sender in the detection process. A responsive receiver
only responds to the sender’s requests as long as they do not
require any changes on the receiver’s machine. For example,
our approach assumes that the receiver responds to the sender’s
ICMP requests. This paper deals with a single-sender/single-
receiver path of a communication network. Our approach is
resilient to cross traffic and other Internet variabilities, and is
non-intrusive, making it both practical and deployable. Also,
our proposed solution uses only regular unicast probes, and
thus it is applicable in today’s Internet.
Simulation and Internet experiments show that our approach
works. Our approach detects both software and hardware com-
pression. We use only the relative delays between arrival times
of our probing packets for detection, so clock synchronization
is unnecessary. The approach requires no special network
support. However, it is not designed to detect compression
that is not based on the entropy of the data, such as dictionary-based
or TCP/IP packet header compression.
While IPComp is not widely used, there is no evidence
related to how commonly link-layer compression is deployed.
Knowing this is essential before performing further research
in this direction, which suggests an investigation of the preva-
lence of link-layer compression as a next step. We plan to use
our findings here in a longitudinal study of the prevalence of
this type of compression in the Internet.
End-to-end detection of compression by intermediaries is
also valuable for bandwidth availability and capacity estima-
tion. Bandwidth availability and path capacity are among the
most important characteristics of Internet paths. IP-level and
link-layer compression directly influence the estimation of
these path properties, since compression has a considerable
effect on the assessment of both capacity and bandwidth. Not
taking such effects into consideration will lead to bandwidth
or capacity under- or overestimation.
While investigating the detectability of link-layer compres-
sion is valuable by itself, we believe this work is the beginning
of a much broader area of research — that is, exploring
the detectability of the class of intermediaries that influences
traffic but leaves the packet payloads within the traffic stream
unchanged. This area, in turn, feeds into the highly important
research question of what, in general, can an end user know
about what happens to the packets he submits to the Internet.
The remainder of the paper is organized as follows: Section
II presents related work, followed by detection methodology in
Section III. Implementation, simulations, Internet evaluation,
and discussion are presented in Sections IV, V, VI, and VII
respectively. Section VIII concludes the paper.
II. RELATED WORK
While the problem of detecting compression has not been
addressed in the literature in the past, the detection of
the presence of redundancy elimination (RE) on bandwidth-
constrained links has. RE-enabled routers identify and then
remove redundant packets (i.e., multiple copies of the same
packet) [4]. Han et al. [15] briefly outline an approach to
detecting RE-enabled routers on the path and leave elaboration
and implementation to future work. The detection of compres-
sion, however, is different from detecting RE-enabled routers
on the path, since the two third parties are looking to reduce
(or ideally eliminate) different kinds of redundancies.
Because of the nature of network compression effects on the
available bandwidth and the fact that our approach is inspired
by the algorithms used to estimate bandwidth, in this section
we present end-to-end techniques and tools for measurements
of the available bandwidth and capacity of a network path. The
problem of bandwidth estimation has been extensively studied
in the past. Many approaches are designed for cooperative end-
hosts, and some are designed to work with responsive hosts.
End-to-end active probing schemes for bandwidth estima-
tion are classified into three categories [30]: Packet Pair/Train
Dispersion (PPTD), Self-Loading Periodic Streams (SLoPS),
and Trains of Packet Pairs (TOPP). In this section we briefly
describe each of these techniques.
A. Packet Pair/Train Dispersion (PPTD)
The packet-pair technique was first introduced by Keshav
[20]. In this technique, the source sends multiple pairs of pack-
ets to the receiver. Each packet pair consists of two packets
of the same size sent back-to-back. Then, the dispersion of a
packet pair is used to measure the capacity of the path. The
dispersion of a packet pair, δi, at a particular link of the path,
is defined as the time distance between the last bit of each
packet. With the assumption of no cross traffic, δi is:
$$\delta_i = \max\left(\delta_{i-1},\ \frac{L}{C_i}\right) \quad (1)$$

where $\delta_{i-1}$ is the dispersion prior to link $i$, $C_i$ is the
capacity of link $i$, $L$ is the packet size, and $\delta_0 = L/C_0$.
Measuring the dispersion at the receiver, $\delta_R$, is what is used
to estimate the path capacity $C$ ($H$ is the number of hops
between the end-hosts):

$$\delta_R = \max_{0 \le i \le H} \frac{L}{C_i} = \frac{L}{\min_{0 \le i \le H} C_i} = \frac{L}{C} \quad (2)$$

$$C = \frac{L}{\delta_R} \quad (3)$$
Jain et al. [18] extended the packet-pair probing technique
to packet trains, where more than two packets are sent back-
to-back. The dispersion of a packet train at a link is defined
as the time between the last bits of the first and last packets
in the train.
PPTD probing techniques typically require cooperative end-
hosts. It is, however, possible to perform PPTD measurements
with only a responsive receiver. In that case, the receiver
is expected to, for instance, respond to ICMP messages.
However, the reverse path capacities and cross traffic may
affect the results.
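To make Equations (1)–(3) concrete, the following minimal sketch (illustrative names, not a tool from the paper) propagates the pair dispersion across a path and recovers the bottleneck capacity, assuming FIFO store-and-forward links and no cross traffic:

```python
# Sketch of packet-pair capacity estimation per Eqs. (1)-(3). Assumes FIFO
# store-and-forward links and no cross traffic; names are illustrative.

def path_dispersion(capacities_bps, packet_size_bits):
    """Propagate the pair dispersion across the path, Eq. (1)."""
    dispersion = packet_size_bits / capacities_bps[0]       # delta_0 = L/C_0
    for c in capacities_bps[1:]:
        dispersion = max(dispersion, packet_size_bits / c)  # max(delta_{i-1}, L/C_i)
    return dispersion

def capacity_from_dispersion(dispersion_at_receiver_s, packet_size_bits):
    """Eq. (3): C = L / delta_R, the capacity of the narrowest link."""
    return packet_size_bits / dispersion_at_receiver_s

# Example: three-hop path whose 5 Mbps middle link is the bottleneck.
L = 1100 * 8                                     # probe size in bits
delta_r = path_dispersion([10e6, 5e6, 10e6], L)
print(capacity_from_dispersion(delta_r, L))      # -> 5000000.0
```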
B. Self-Loading Periodic Streams (SLoPS)
SLoPS is another methodology for measuring end-to-end
available bandwidth. In this technique, the sender periodically
sends a number of equal-sized packets to the receiver at a
certain rate. This measurement methodology involves moni-
toring the arrival time variations of the probing packets. If the
sending rate is greater than the path’s available bandwidth, it
will overload the queue of the bottleneck, which results in an
increasing trend of one-way delay.
On the other hand, if the stream rate is lower than the
available bandwidth, the probing packets will go through the
path without overloading the queue: thus we do not expect to
see an increasing trend. In this approach, the sender, through
iterations, attempts to adjust the sending rate to get close to
the available bandwidth.
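A minimal sketch of the SLoPS decision loop follows. The trend test and the `measure_delays_at_rate` callback are hypothetical stand-ins, since the text above does not prescribe a particular statistic:

```python
# Sketch of the SLoPS iteration: if the one-way delays of a periodic stream
# show an increasing trend, the probing rate exceeds the available bandwidth.
# The pairwise-comparison trend test below is one possible choice.

def increasing_trend(one_way_delays, frac=0.66):
    """True if most consecutive delay differences are positive."""
    ups = sum(1 for a, b in zip(one_way_delays, one_way_delays[1:]) if b > a)
    return ups >= frac * (len(one_way_delays) - 1)

def slops_estimate(measure_delays_at_rate, lo_bps, hi_bps, iterations=10):
    """Binary-search the sending rate toward the available bandwidth."""
    for _ in range(iterations):
        rate = (lo_bps + hi_bps) / 2
        if increasing_trend(measure_delays_at_rate(rate)):
            hi_bps = rate        # stream overloads the bottleneck queue
        else:
            lo_bps = rate        # path absorbs the stream without queuing
    return (lo_bps + hi_bps) / 2
```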
C. Trains of Packet Pairs (TOPP)
Unlike the Self-Loading Periodic Streams technique that
measures the end-to-end one-way delays of a packet train
arriving at the receiver, TOPP [25] increases the sending rate
(RS) until a point where the sender is sending faster than the
path capacity (C). The receiver cannot receive faster than the
available bandwidth at the bottleneck (RR < RS), so further
increasing the sending rate above the available capacity means
that the packets will get queued at the intermediate routers. As
long as the sender is still sending within the path capacity, the
receiving rate is not more than the available capacity. Thus,
the ratio of sending rate to receiving rate is close to unity (i.e.,
RR ≈ RS). Hence, TOPP estimates the available bandwidth
to be the maximum sending rate such that RS ≈ RR. The
following equation is used to estimate the capacity C from
the slope of RS/RR versus RS:
$$\frac{R_S}{R_R} = \frac{R_S + R_C}{C} \quad (4)$$
where RC is the average cross traffic rate. TOPP is quite
similar to SLoPS. In fact, most of the differences between
the two methods are related to the statistical processing of the
measurements.
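Because Equation (4) is linear in R_S with slope 1/C, the capacity can be recovered with an ordinary least-squares fit, as in this illustrative sketch (it assumes every probing rate already exceeds the available bandwidth, where the linear regime holds):

```python
# Sketch of the TOPP estimate: fit R_S/R_R against R_S; by Eq. (4) the
# slope is 1/C. Assumes all probing rates are in the linear (overload)
# regime. Plain least squares, illustrative only.

def topp_capacity(sending_rates_bps, receiving_rates_bps):
    xs = list(sending_rates_bps)
    ys = [s / r for s, r in zip(xs, receiving_rates_bps)]      # R_S / R_R
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return 1.0 / slope                                         # C = 1 / slope
```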
Table I summarizes some of the publicly available band-
width estimation tools and the methodology used in their
underlying estimation algorithm.
TABLE I
END-TO-END BANDWIDTH ESTIMATION TOOLS

Tool       Measurement Metric    Methodology
bing       Path capacity         PPTD
bprobe     Path capacity         PPTD
nettimer   Path capacity         PPTD
pathrate   Path capacity         PPTD
sprobe     Path capacity         TOPP
cprobe     Available bandwidth   PPTD
pipechar   Available bandwidth   PPTD
pathload   Available bandwidth   SLoPS
IGI        Available bandwidth   SLoPS
pathchirp  Available bandwidth   SLoPS
III. DETECTION METHODOLOGY
A. Assumptions
In our approach to detecting compression on a path, we
assume that the network consists of a series of store-and-
forward nodes, each equipped with a FIFO queue and a
constant service rate. We also assume that the
packet delay results from propagation delay, service time and
variable queuing delay. Lastly, our approach is based on the
assumption that, in the absence of compression, packets of the
same size are not treated differently based on the entropy of
their payload.
B. Approach Overview
To detect if compression is provided on the network we ex-
ploit the unique effects of compression on network flows. As-
suming the original packets were of the same size, compressed
low entropy data packets are expected to be considerably
smaller than compressed packets containing high entropy data,
which in turn leads to a shorter transmission delay. The added
processing delay (dC) caused by compression/decompression
methods for low entropy packets is not greater than the high
entropy packets (dCL
≤ dCH
where L: low entropy, H: high
entropy) [32]. Based on these facts, the sketch of our approach
is as follows:
Send a train of fixed-size packets back-to-back with pay-
loads consisting of only low entropy data. Then send a similar
train of packets, except these payloads contain high entropy
data instead. We then measure the arrival times of the first
and the last packet in the train, independently for the low
entropy train ($t_{L_1}$ and $t_{L_N}$, where $N$ is the number of
packets in a single train) and the high entropy train ($t_{H_1}$
and $t_{H_N}$). Since the number of packets in the two trains is
known and all of the packets have the same uncompressed size,
the following inequality will hold if some kind of network
compression is performed on the path:

$$\Delta t_L = t_{L_N} - t_{L_1} \;<\; \Delta t_H = t_{H_N} - t_{H_1} \quad (5)$$
Inequality (5) suggests that the train of highly compressible
low entropy packets gets to the destination faster than the
train of less compressible high entropy packets. Conversely,
if the packets are not being compressed by any intermediary,
then the two sides of inequality (5) should be almost equal.
This suggests that a threshold should be specified to distinguish
the effects of compression from normal Internet variability:

$$\Delta t_H - \Delta t_L > \tau \quad (6)$$
The underlying rationale behind this approach is that be-
cause of the presence of compression and decompression, the
receiving party should sense a relatively higher bandwidth
when the train of low entropy data is sent, since the same
amount of data is received, but in shorter time.
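The decision rule of inequalities (5) and (6) reduces to a few lines; the sketch below is illustrative, with τ set to the 100 ms threshold discussed later in Section IV.E. Only arrival times observed at a single host are compared, so no clock synchronization is required:

```python
# Sketch of the detection rule in inequalities (5) and (6): compare the
# durations of the low- and high-entropy probe trains against a threshold.

def train_duration(arrival_times_s):
    """Delta t = t_N - t_1: spread between first and last packet arrivals."""
    return arrival_times_s[-1] - arrival_times_s[0]

def compression_detected(low_entropy_arrivals, high_entropy_arrivals, tau=0.1):
    delta_t_low = train_duration(low_entropy_arrivals)    # Delta t_L
    delta_t_high = train_duration(high_entropy_arrivals)  # Delta t_H
    return (delta_t_high - delta_t_low) > tau             # inequality (6)
```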
IV. IMPLEMENTATION
We used UDP packets to generate our probe train. When the
receiver is cooperative, with the help of two TCP connections
before and after the probe UDP train, the sender is able to
send experiment parameters to the receiver, and the receiver
uses the second TCP connection to send the recorded arrival
times of the received packets to the sender for further analysis.
If the receiver does not cooperate, but is responsive, we
attached an ICMP ping request packet to the head and tail of
the UDP probing train. In this way, the sender performs the
detection by analyzing the difference between the arrival time
of the two ICMP ping reply packets, without relying on the
receiver to provide any measurement information.
Our techniques are not designed to handle receivers who
are neither cooperative nor responsive.
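The sketch below illustrates the responsive-receiver variant under stated assumptions: it uses scapy, which the paper does not name, and it is deliberately simplified in that it blocks on each echo request, whereas a real tool would capture the two replies asynchronously while the train is in flight:

```python
# Simplified sketch of the responsive-receiver probe: an ICMP echo request
# brackets each end of the UDP train, and the spacing of the two echo
# replies approximates the train duration at the receiver. Requires root;
# scapy is an assumed tool choice, and time.sleep only approximates the
# intended 100 us inter-packet gap.

import time
from scapy.all import ICMP, IP, Raw, UDP, send, sr1

def probe_train_spread(dst, payload, n=6000, gap_s=100e-6):
    probe = IP(dst=dst) / UDP(dport=9) / Raw(load=payload)
    head = sr1(IP(dst=dst) / ICMP(), timeout=2)    # echo at the head
    for _ in range(n):
        send(probe, verbose=False)
        time.sleep(gap_s)
    tail = sr1(IP(dst=dst) / ICMP(), timeout=2)    # echo at the tail
    if head is None or tail is None:
        raise RuntimeError("receiver did not answer the echo requests")
    return tail.time - head.time                   # reply arrival spread
```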
A. Content of packet’s payload
The payloads of the low entropy packets are filled with
0’s. The payloads of the high entropy packets are filled with
random bytes read from /dev/random, independently for
each new experiment.
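A sketch of the two payload generators; `os.urandom` stands in here for the /dev/random source named above (on Linux both draw on the kernel's randomness pool):

```python
# Sketch of the probe payloads: all zeros for the low-entropy train,
# fresh random bytes per experiment for the high-entropy train.

import os

PAYLOAD_SIZE = 1100  # probe packet size used in the experiments (bytes)

def low_entropy_payload():
    return bytes(PAYLOAD_SIZE)        # b"\x00" * 1100, highly compressible

def high_entropy_payload():
    return os.urandom(PAYLOAD_SIZE)   # practically incompressible
```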
B. Packet size
Dovrolis et al. [10] argued that a maximum transmission
unit (MTU) is not optimal for accurate bandwidth estimation.
However, we used large packet size probes (1100 bytes) since
the larger the packets, the more apparent are the effects of
compression, which in turn leads to a more accurate detection.
We emphasize that while our technique is based on measuring
the bandwidth, for detecting compression we do not need to
estimate the bandwidth accurately.
C. Inter-packet departure spacing
In our experiments we use an empirically set value of 100 µs.
In general, this value should not be too small, so that our
experiment does not cause queue overflows in the intermediate
routers. Conversely, this value should not be
too large, since sending the packets at a slow rate removes the
aggregate effects of compression on the traffic flow.
D. Number of measurements
The presence of cross traffic, on average, differs at certain
times of the day and follows a particular pattern [34]. By
performing measurements throughout the day, we hope to
capture the effects of time-dependent cross traffic variation.
We ran each scenario at every hour throughout an entire day,
resulting in a total of 24 measurements.
E. Threshold
The selection of this value highly depends on the time
precision and resolution of the machine performing the mea-
surement time analysis. If the time precision on a typical
machine with a typical operating system is 10 ms [26], then
we believe any value at least an order of magnitude higher
(e.g. 100 ms) is a suitable selection for this parameter.
F. Number of packets
A careful selection of this number is vital, since a small
train may not be sufficient to introduce a noticeable gap of
aggregate compression, and on the other hand, a large number
of packets would make our approach highly intrusive. In the
next section, we show, through simulations, that this number
is positively correlated with the available bandwidth. In our
experiments we used 6000 packets for each train. Section VI.B
shows that this number is sufficient without being intrusive
(Section VII.A).
[Figure: a sender and a receiver connected through an intermediate link with compression.]
Fig. 1. Topology used for simulation
V. SIMULATIONS
We simulated our approach using the ns-3 simulator [2]
under different network scenarios using the simple topology
depicted in Figure 1.
We used simulation to investigate how network character-
istics can influence our detection rate. Each of the following
subsections refers to a particular scenario. Within each sub-
section, we first describe the scenario setup and then present
the obtained results.
A. Scenario I
In the first scenario we tested our approach using the
topology in Figure 1, with the capacities of all three links
equal (Figure 2). As these results show, we can detect
compression on a 100 Kbps or 5 Mbps link with a 100 msec
threshold and a small to moderate number of packets.
[Figure: ∆tH − ∆tL (s), in log scale, versus number of packets for 100 Kbps, 5 Mbps, and 100 Mbps links, with the 100 ms threshold marked.]
Fig. 2. Comparison of number of probe packets required for detection for
different bandwidths (in log-scale).
While the results verify our proposed approach, they show
that in our detection mechanism, as the path capacity gets
higher, more probe packets are required for the detection of
compression effects. With 100 Mbps links, we cannot detect
at a threshold of 100 msec even with 6000 packets. The trend
of the curve suggests we would need several thousand more
packets. This observation suggests that both the path capacity
and the available bandwidth of the path in question have direct
effects on the number of packets required for each measure-
ment. It also suggests that an orders-of-magnitude larger
number of probe packets is required for detection in high-speed
networks (e.g., 1 Gbps). However, this is not a major disad-
vantage for our approach, since compression on high-speed
networks is rarely deployed because the hardware required for
compression for high-speed networks is expensive. In addition,
the typical hardware compression components have proved to
be unable to compress/decompress fast enough for a high-
speed network, thus creating a bottleneck on that link.
B. Scenario II
In this scenario we examined the relationship between how
effective the deployed compression is and the detection rate
of our detection scheme.¹ To carry out this simulation, we set
the link capacity of the first and last links to a fixed rate of
5 Mbps. We then ran the simulation with several values of the
capacity of the compression link, C ∈ {1, 4, 5, 6} Mbps.

¹ Compression efficiency is determined by how much the packet size is
reduced by applying the compression algorithm [19].
[Figure: ∆tH − ∆tL (s), in log scale, versus number of packets for compression-link capacities of 1, 4, 5, and 6 Mbps, with the 100 ms threshold marked.]
Fig. 3. Comparison of scenarios based on how narrow the compression
bottleneck link is (in log-scale).
As the results show (Figure 3), our detection works well
only when the compression link is indeed the bottleneck of
the path. There is a correlation between how narrow the
compression link is compared to the overall path capacity, and
the number of probe packets needed to detect compression.
The narrower the link, the smaller the number of the probe
packets needed for detection. In fact, sometimes an order-
of-magnitude fewer packets are needed, while maintaining
approximately the same detection accuracy.
This stems from the fact that the effects of compression are
more significant as the bottleneck becomes very narrow. On
the other hand, compression is ineffective when it is not on the
bottleneck. The result is that our detection method becomes
rather ineffective (6 Mbps case in Figure 3). However, we
know that employing compression when it is not placed on the
bottleneck link is a poor practice. For this reason, we expect
that similar scenarios are relatively rare to find in practice, and
hence this makes handling such scenarios less important. To
summarize, simulation has confirmed that our approach detects
only effective compression.
VI. INTERNET EVALUATION
In this section, we present our Internet evaluation to confirm
that our method works on a real network. Here, we begin with
the experiment setup, and follow with a demonstration of the
results and an analysis.
A. Experiment Setup
To simulate the effects of link-layer compression, we used the Click Mod-
ular Software Router [21]. The experiment environment and
topology setup is depicted in Figure 4. The compression and
decompression components, as well as the receiver, were all
located at UCLA. The senders are, however, remote PlanetLab
[1] nodes. We implemented LZCompressor and LZDecompressor
Click elements for our experiment.²

[Figure: remote PlanetLab sender nodes connect across the Internet to the compressor, decompressor, and receiver at UCLA.]
Fig. 4. Environment used in our experiments

² Our code and implemented Click elements are publicly available at
http://lasr.cs.ucla.edu/triton.
Also, we reduced the transmission rate of the compression
element to 1 Mbps, making the compression link the narrow
link of the path. To confirm that our compression link is
indeed the bottleneck, we used pathrate [10], a capacity
estimation tool that has been shown experimentally to work
well with PlanetLab nodes [22].
We could have placed both the sender and the receiver
remotely and simulated a single bottleneck link on the path
by routing the traffic stream through our local network. But
by using a remote sender and a local receiver, we simulate a
more common scenario, ensuring that the experiment matches
the no-valley property, which derives from the provider-to-
customer relationship and is the most commonly adopted
routing policy by ASes [14], [35].
B. Results
A set of ten geographically distributed PlanetLab nodes
connected via the open Internet was selected for our experiment
(Table II). We define an experiment scenario uniquely by three
elements: (1) a remote PlanetLab node, (2) whether we applied
link-layer compression or not in the experiment, and (3)
whether the cooperative receiver approach or the responsive
receiver approach was used to detect compression in the
experiment. For every scenario, we performed 24 individual
experiments, within a span of 24 hours, running only one
experiment every hour.
[Figure: histogram of the number of measurements versus ∆tH − ∆tL (s), showing separate Compression and No Compression distributions.]
Fig. 5. Histogram of ∆tH − ∆tL for two sets of 24 measurements from a
PlanetLab node in Singapore (the gray region indicates the gap between the
means of the two distributions).
To illustrate how the aggregate data looks, we depicted the
measurement histogram for one node when testing with the
cooperative receiver (Figure 5). Looking at the histogram,
we observe that each set of data can be described by a
normal distribution. The noticeable gap between the two
distributions is an indication of the compression effects.
Another observation is that the measurements are slightly more
spread out with compression than without. As discussed
in Section IV, we use different sets of random bytes for
the payloads of high entropy probe packets used for each
experiment. This results in inconsistent compression ratios,
which we suspect are responsible for the wider spread of
timings when compression is applied.
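For illustration, here is a sketch of the per-scenario analysis with hypothetical sample values, fitting a normal distribution to each set of measurements and reporting the gap between the means:

```python
# Sketch of the per-scenario analysis behind Fig. 5 and Table II: fit
# (mu, sigma) to each set of Delta t_H - Delta t_L measurements and report
# the gap between the means. Sample values below are hypothetical.

from statistics import mean, stdev

def fit_normal(samples):
    return mean(samples), stdev(samples)

with_compression = [5.1, 4.8, 5.3, 5.0, 4.6]      # seconds, hypothetical
without_compression = [0.05, 0.02, 0.09, 0.06, 0.03]

mu_c, sigma_c = fit_normal(with_compression)
mu_n, sigma_n = fit_normal(without_compression)
print(f"gap between means: {mu_c - mu_n:.2f} s")  # the gray region in Fig. 5
```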
Table II summarizes our results for all the experiment
scenarios we performed by illustrating the normal distribution
parameters in each scenario. The results confirm the existence
of a significant gap in the presence and absence of network
compression for all scenarios tested.
An observation from Table II is that the distributions for
the responsive approach scenarios are relatively more spread
compared to those of the cooperative approach. This is because
in the responsive receiver approach, the ICMP reply packets
travel the reverse path back to the sender, adding additional
variability to the delay observations. In addition, since the
effects of double compression are not very different from those
of single compression, our results further suggest that, apart
from our own intermediary compression, no effective compression
was provided between the end-hosts selected in our scenarios
for the duration of our experiment.
[Figure: ∆tH − ∆tL (s) versus number of packets for nodes in Australia, Singapore, and Sweden, with the 100 ms threshold marked.]
Fig. 6. Comparison of the number of probe packets used for detection when
using only a single measurement from three geographically diverse distant
PlanetLab nodes.
Figure 6 shows the difference in duration between the high
and low entropy packet trains for three different nodes,
considering only their first measurements.
As can be seen, the plots are not as linear as those presented in
the simulations in Section V.A where the measurements were
performed in a clean and more controlled environment, yet
they follow a similar increasing trend.
VII. DISCUSSION
A. Desirable Characteristics
In general, for any end-to-end active network measurements
there is a set of desirable, and in some cases necessary,
characteristics that should be satisfied.

TABLE II
SUMMARY OF THE RESULTS FROM THE INTERNET EXPERIMENTS (MSEC)
(µ and σ of ∆tH − ∆tL per scenario)

                                          Cooperative Receiver            Responsive Receiver
                                          Compression   No Compression    Compression   No Compression
Location        IP Address       Est RTT  µ      σ      µ      σ          µ      σ      µ      σ
Singapore       203.30.39.238    201      5119   300    54     320        5010   430    78     450
California      128.111.52.59    5        5052   640    -42    290        4870   890    18     360
Brazil          200.17.202.195   188      5000   450    -46    130        5120   470    -35    620
Czech Republic  147.229.10.250   190      4389   440    -34    230        4102   830    25     1020
New Zealand     130.195.4.69     136      4632   530    79     410        5103   1050   -10    1205
Massachusetts   75.130.96.13     80       5388   410    13     230        5202   410    28     730
Canada          216.48.80.12     94       4624   440    161    700        4700   450    130    410
Sweden          192.16.125.11    185      1601   1220   115    390        2440   1300   150    1380
South Korea     143.248.55.128   173      4823   1140   155    550        4322   930    94     1100
Australia       130.194.252.8    188      4827   500    50     550        4730   780    -12    1240
1) Working with uncooperative intermediaries: Obtaining
any information directly from routers is usually impossible,
so end-to-end measurements should not rely on cooperation
from them.
2) Robust against cross traffic: Cross traffic is usually
present and can significantly affect the network measurements.
Hence, measurement mechanisms should be accurate even
in the presence of cross traffic. In addition, no assumption
should be made on the characteristics of the cross traffic when
considering its impact on the measurement.
We ran our experiment for a span of 24 hours and from
different parts of the world. While we cannot truly confirm
that during our measurements the network path experienced
high volumes of cross traffic and congestion, the experiments
were performed in realistic scenarios, accurately discovering
the effects of compression in all of the scenarios. Normally
(unless there is an ongoing long-term DDoS attack), cross
traffic and high levels of congestion on a particular path persist
for only a limited amount of time [24].
3) Non-intrusive: Any detection approach using network
measurements should not significantly affect the traffic in the
path or the throughput of the other connections. If some active
probing is necessary, it should be minimized. Also, a non-
intrusive network measurement technique should not affect the
actual property being measured. Our experiment consists of
two sets of 6000 packets of 1100 bytes each; the sets are sent
one minute apart. Each set adds up to a total size of 6.6 MB,
which is equivalent to the size of a typical high-quality MP3
song [7].
4) Short measurement time: Short measurement time is
desirable, but not always required. For instance, it is required
for available bandwidth measurements because the average
available bandwidth can change over time; therefore, it is
important to measure it quickly. On the other hand, IP-level
or link-layer compression on the path is likely to persist for a
longer period, so a quick measurement is less vital.
5) Minimal performance overhead: Minimal overhead en-
sures that the measurement can be performed on typical
machines with typical resources, and also that it does not
interfere with other processes running on those machines.
B. Timestamp Precision and Resolution Effects on Detection
Quality
End-hosts that perform Internet measurements can introduce
delays and bottlenecks that are due to hardware or operating
system effects and have nothing to do with the network
behavior they are measuring. This problem is particularly
acute when the timestamping of network events occurs at the
application level.
Clearly the more accurately we can measure the time, the
better the outcome of the detection process. This is also true
as the end-hosts use more accurate time resolution. However,
in devising our technique and in the implementation of our
experiments, we intentionally used only standard hardware
and software. For instance, we avoided using any special
hardware components for precise timestamping of packet
arrivals, capturing packets at the kernel level, or packet sniffing
applications such as libpcap [23]. This was done to ensure
that our detection technique works for typical end-hosts with
typical machines and resources. Our results confirmed this.
VIII. CONCLUDING REMARKS
In this paper we examined the feasibility of detecting
whether intermediaries have performed compression on the
path. We presented an end-to-end packet-train-like approach
that works in both cooperative and responsive environments.
Our Internet experiments confirmed our detection approach.
While this work constitutes a significant advance, we believe
that this is just the beginning of further research in this
direction.
Currently, there is no information about how commonly
link-layer compression is deployed. This suggests that the next
step will be an investigation of the prevalence of link-layer
compression. We believe this realization is important before
proceeding with further research in this direction. We plan
to examine the prevalence of link-layer compression in the
Internet, using our findings presented in this paper. Based on
these results, we will be able to take further steps to complete
this research.
For practical purposes, a desirable tool should be able to
do the detection with just a few measurements—and ideally,
with only one. A lightweight probing technique could even be
more attractive and useful, particularly for mobile applications
where resources are limited. But a tool that takes only one
or a few measurements must handle variability well, since it
must then cope with clock skew and timing precision, context-
switching effects, congestion, and so on. In addition, an
automated method for choosing suitable values of the detection
parameters based on the network environment would benefit
any automated network tool.
The existing bandwidth and capacity estimation techniques
do not take into consideration the presence of compression.
Another significant focus of future work would be to test and
observe how accurately the current tools respond, and how
they should be adjusted, in the presence of compression on
the path.
An area that makes our approach particularly attractive is the
mobile environment, since mobile devices typically have con-
siderably lower available bandwidth. We will test our approach
in a mobile environment where some of its characteristics are
different from the wired Internet results reported here. For
example, we will experiment with considerably higher loss
rates. We will expand our approach so that it also works in
this type of environment.
Finally, as suggested in the introduction, detecting compres-
sion is one important element of the more general problem of
detecting all third-party influences on packets submitted to the
Internet. Currently, users can learn very little about the fate of
their traffic once it is sent. Ideally, for many good reasons, they
should be able to know more. From a pure research point of
view, it would be valuable to better understand what is possible
to know about traffic handling on the Internet, and what can
be done to acquire this knowledge. This work represents a step
in improving that knowledge.
REFERENCES
[1] “PlanetLab: An Open network platform,” http://www.planet-lab.org/.
[2] “ns-3: An Open Simulation Environment,” http://www.nsnam.org/.
[3] “Nokia hijacks mobile browser traffic, decrypts HTTPS data,”
http://www.zdnet.com/nokia-hijacks-mobile-browser-traffic-decrypts-
https-data-7000009655/, 2013.
[4] A. Anand, A. Gupta, A. Akella, S. Seshan, and S. Shenker, “Packet
caches on routers: the implications of universal redundant traffic elimi-
nation,” in ACM SIGCOMM Computer Communication Review, vol. 38,
no. 4. ACM, 2008, pp. 219–230.
[5] J. Border and J. Heath, “RFC3051: IP payload compression using ITU-T
V. 44 packet method,” 2001.
[6] S. Casner and V. Jacobson, “RFC2508: Compressing IP/UDP/RTP
headers for low-speed serial links,” 1999.
[7] J. C. Chu, K. S. Labonte, and B. N. Levine, “Availability and locality
measurements of peer-to-peer file systems,” in ITCom 2002: The Conver-
gence of Information Technologies and Communications. International
Society for Optics and Photonics, 2002, pp. 310–321.
[8] Cisco Documents - Document ID: 14156, Understanding Data Com-
pression. www.cisco.com/application/pdf/paws/9289/wan_compression_faq.pdf.
[9] S. Dawkins, “Internet Draft: Performance Implications of Link-Layer
Characteristics: Slow Links,” 1998.
[10] C. Dovrolis, P. Ramanathan, and D. Moore, “What do packet dis-
persion techniques measure?” in INFOCOM 2001. Twentieth Annual
Joint Conference of the IEEE Computer and Communications Societies.
Proceedings. IEEE, vol. 2. IEEE, 2001, pp. 905–914.
[11] M. Engan, S. Casner, C. Bormann, and T. Koren, “RFC 2509: IP header
compression over PPP,” RFC 2509, February, Tech. Rep., 1999.
[12] R. Friend and R. Monsour, “RFC2395: IP payload compression using
LZS,” 1998.
[13] R. Friend and W. Simpson, “RFC1974: PPP Stac LZS Compression
Protocol,” 1996.
[14] L. Gao, “On inferring autonomous system relationships in the internet,”
IEEE/ACM Transactions on Networking (ToN), vol. 9, no. 6, pp. 733–
745, 2001.
[15] D. Han, A. Anand, A. Akella, and S. Seshan, “RPT: Re-architecting
loss protection for content-aware networks,” in Proceedings of the 9th
USENIX conference on Networked Systems Design and Implementation,
NSDI, vol. 12, 2011, pp. 6–6.
[16] HP Support FAQs, Using Compression with HP Router Products.
http://www.hp.com/rnd/support/faqs/pdf/comp.pdf.
[17] V. Jacobson, “RFC 1144: Compressing TCP/IP headers for low-speed
serial links,” 1990.
[18] R. Jain and S. Routhier, “Packet trains–measurements and a new model
for computer network traffic,” Selected Areas in Communications, IEEE
Journal on, vol. 4, no. 6, pp. 986–995, 1986.
[19] L. Jonsson, G. Pelletier, and K. Sandlund, “RFC 4995: The Robust
Header Compression (ROHC) Framework,” Network Working Group,
pp. 1–40, 2007.
[20] S. Keshav, “A control-theoretic approach to flow control,” ACM SIG-
COMM Computer Communication Review, vol. 25, no. 1, pp. 188–201,
1995.
[21] E. Kohler, R. Morris, B. Chen, J. Jannotti, and M. F. Kaashoek, “The
Click modular router,” ACM Transactions on Computer Systems (TOCS),
vol. 18, no. 3, pp. 263–297, 2000.
[22] S.-J. Lee, P. Sharma, S. Banerjee, S. Basu, and R. Fonseca, “Measuring
bandwidth between planetlab nodes,” in Proceedings of the 6th inter-
national conference on Passive and Active Network Measurement, ser.
PAM’05. Berlin, Heidelberg: Springer-Verlag, 2005, pp. 292–305.
[23] S. McCanne, C. Leres, and V. Jacobson, “Libpcap,” 1989. [Online].
Available: http://www.tcpdump.org
[24] D. McPherson, R. Dobbins, M. Hollyman, C. Labovitzh, and J. Nazario,
“Worldwide infrastructure security report, Volume V,” Arbor Networks,
2010.
[25] B. Melander, M. Bjorkman, and P. Gunningberg, “A new end-to-end
probing and analysis method for estimating bandwidth bottlenecks,” in
IEEE Global Telecommunications Conference, vol. 1, 2000, pp. 415–
420.
[26] V. Paxson, “Strategies for sound internet measurement,” in Proceedings
of the 4th ACM SIGCOMM conference on Internet measurement, ser.
IMC ’04. New York, NY, USA: ACM, 2004, pp. 263–271.
[27] G. Pelletier and K. Sandlund, “RFC 5225: Robust header compression
version 2 (ROHCv2): Profiles for RTP,” 2008.
[28] R. Pereira, “RFC2394: IP payload compression using DEFLATE,” 1998.
[29] V. Pournaghshband, L. Kleinrock, P. L. Reiher, and A. Afanasyev,
“Controlling applications by managing network characteristics,” in
IEEE International Conference on Communications (ICC), 2012.
[Online]. Available: http://dx.doi.org/10.1109/ICC.2012.6364064
[30] R. Prasad, C. Dovrolis, M. Murray, and K. Claffy, “Bandwidth esti-
mation: metrics, measurement techniques, and tools,” Network, IEEE,
vol. 17, no. 6, pp. 27–35, 2003.
[31] D. Rand, “RFC1978: PPP Predictor Compression Protocol,” 1996.
[32] D. Salomon, Data Compression: The Complete Reference. Springer,
2004.
[33] A. Shacham, B. Monsour, R. Pereira, and M. Thomas, “RFC 3173: IP
Payload Compression Protocol (IPComp),” 2001.
[34] A. Soule, A. Nucci, R. Cruz, E. Leonardi, and N. Taft, “How to
identify and estimate the largest traffic matrix elements in a dynamic
environment,” in Proceedings of the joint international conference on
measurement and modeling of computer systems, ser. SIGMETRICS ’04.
New York, NY, USA: ACM, 2004, pp. 73–84.
[35] F. Wang, Z. M. Mao, J. Wang, L. Gao, and R. Bush, “A measurement
study on the impact of routing events on end-to-end internet path
performance,” in ACM SIGCOMM Computer Communication Review,
vol. 36, no. 4. ACM, 2006, pp. 375–386.