This document discusses application-aware acceleration (A3) for improving application performance over wireless networks. It presents results showing that while enhanced transport protocols improve performance for FTP, they provide little benefit for other popular applications like CIFS, SMTP, and HTTP. This is because the behavior of these applications, designed for reliable LANs, negatively impacts their performance over lossy wireless links. The document proposes A3 as a middleware solution that offsets these behavioral problems through application-specific design principles, while remaining transparent to applications.
Mutual Exclusion in Wireless Sensor and Actor Networks (Zhenyun Zhuang)
This document discusses mutual exclusion in wireless sensor and actor networks. It begins by introducing wireless sensor networks and how they have evolved into wireless sensor and actor networks which can both sense and act on their environments. This introduces new challenges around resource utilization that must be addressed. Specifically, the document identifies the problem of mutual exclusion - ensuring only a minimum necessary subset of actors take action for a given event to avoid issues like inefficient resource usage. It defines different types of mutual exclusion and proposes both a greedy centralized approach and a distributed localized approach to address this problem efficiently while meeting application-specific delay bounds and fully covering the event region.
WebAccel: Accelerating Web access for low-bandwidth hosts (Zhenyun Zhuang)
The document describes problems with how current web browsers access web pages in low-bandwidth environments. It analyzes factors that cause large response times, such as properties of typical web pages, interactions between HTTP and TCP protocols, and impact of server-side optimizations. It proposes a new solution called WebAccel that uses three browser-side mechanisms - prioritized fetching, object reordering, and connection management - to reduce user response time in an easy-to-deploy way. Simulation results and a prototype implementation show that WebAccel brings significant performance benefits over current browsers.
Client-side web acceleration for low-bandwidth hosts (Zhenyun Zhuang)
This document proposes client-side optimizations to reduce web page load times for users on low-bandwidth networks. It analyzes problems with how current web browsers fetch entire pages greedily without prioritizing visible content. This wastes bandwidth and increases load times. The document proposes three browser-side mechanisms: 1) prioritizing the fetching of objects visible on the initial screen over other objects, 2) reordering object fetching to better utilize bandwidth, and 3) improving connection management. Simulations show these techniques can significantly reduce user-perceived response times compared to current browsers for low-bandwidth conditions.
Effective Replicated Server Allocation Algorithms in Mobile Computing Systems (ijwmn)
In mobile environments, mobile device users access and transfer a great deal of data through online servers. To improve users' access speed in a wireless network, replicated servers must be placed appropriately throughout the network. Previous work on this problem focused on placing replicated servers along the users' movement paths so as to maximize the hit ratio; when a miss occurred, the file request was simply ignored. We therefore propose a solution that handles such misses by forwarding the file request to a nearby replicated server in the network.
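The miss-forwarding idea above can be sketched as follows; the function name, the cache/replica representation, and the distance callable are illustrative assumptions, not the paper's actual design:

```python
def serve_request(file_id, local_cache, replicas, distance):
    """On a cache miss, instead of dropping the request (as in the prior
    work summarized above), forward it to the nearest replicated server
    that holds the file. replicas maps server name -> set of file ids;
    distance maps server name -> cost to reach it. Illustrative sketch."""
    if file_id in local_cache:
        return "local"  # hit: served from the local replica
    holders = [s for s, files in replicas.items() if file_id in files]
    if not holders:
        return None  # no replica holds the file anywhere
    return min(holders, key=distance)  # nearest holder handles the miss
```

A request for a locally cached file returns "local"; otherwise the nearest replica that stores the file is chosen.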
An Approach for Enhanced Performance of Packet Transmission over Packet Switc... (ijceronline)
With the increased use of real-time applications, there is a need for improved network traffic and bandwidth management. Computer networks use switches to connect hosts that are not joined by a direct link. When two or more hosts attempt to transmit packets at the same time, data packet collisions occur. In this paper, the performance of a local area network is investigated in terms of collision count and other parameters using a simulation model. Simulation results are obtained in different network scenarios by varying the number of devices in the network.
Analysis of LTE Radio Load and User Throughput (IJCNC Journal)
A recurring topic in LTE radio planning pertains to the maximum acceptable LTE radio interface load, up to which a targeted user data rate can be maintained. We explore this topic by using Queuing Theory elements to express the downlink user throughput as a function of the LTE Physical Resource Block (PRB) utilization. The resulting formulas are expressed in terms of standardized 3GPP KPIs and can be readily evaluated from network performance counters. Examples from live networks are given to illustrate the results, and the suitability of a linear decrease model is quantified upon data from a commercial LTE network.
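The linear decrease model quantified above can be sketched in a few lines; the peak-rate parameter and the strictly linear shape are illustrative assumptions here, not the paper's exact 3GPP-KPI formulas:

```python
def user_throughput(peak_mbps, prb_utilization):
    """Downlink user throughput under a linear decrease model: the rate
    falls from the peak at zero load toward zero as PRB utilization
    approaches 100%. Illustrative sketch only."""
    if not 0.0 <= prb_utilization <= 1.0:
        raise ValueError("PRB utilization must be in [0, 1]")
    return peak_mbps * (1.0 - prb_utilization)

def max_tolerable_load(peak_mbps, target_mbps):
    """Largest PRB utilization at which a targeted user data rate is
    still met, the planning question the abstract poses."""
    return max(0.0, 1.0 - target_mbps / peak_mbps)
```

For example, with an assumed 100 Mbps peak rate, a 20 Mbps target is maintained up to 80% PRB utilization under this model.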
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT (IJCNC Journal)
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic environment calls for sophisticated algorithms to solve the task-allotment problem, and the overall performance of cloud systems is rooted in the efficiency of their task scheduling algorithms. The dynamic nature of cloud systems makes it difficult to find an optimal solution satisfying all evaluation metrics. The new approach is built on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, while Shortest Job First decreases the average waiting time. This work combines the advantages of both algorithms to improve the makespan of user tasks.
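One plausible reading of the Round Robin / Shortest Job First combination is to order the ready queue by burst time (SJF) and then serve it with a fixed quantum (RR); the paper's exact mechanism may differ, so treat this as a sketch under that assumption:

```python
from collections import deque

def hybrid_schedule(tasks, quantum):
    """Hybrid RR+SJF sketch: sort tasks by burst time (SJF lowers the
    average waiting time), then time-slice them round-robin with a fixed
    quantum (RR limits starvation). tasks maps name -> burst time.
    Returns (completion time per task, makespan)."""
    queue = deque(sorted(tasks.items(), key=lambda kv: kv[1]))
    clock, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for one quantum at most
        clock += run
        if remaining > run:
            queue.append((name, remaining - run))  # back of the queue
        else:
            completion[name] = clock    # task finished
    return completion, clock
```

With bursts {a: 2, b: 4, c: 1} and quantum 2, the short tasks c and a finish at times 1 and 3 before b completes at 7.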
VIRTUAL ROUTING FUNCTION DEPLOYMENT IN NFV-BASED NETWORKS UNDER NETWORK DELAY... (IJCNC Journal)
An NFV-based network implements a variety of network functions in software on general-purpose servers, which allows the network operator to select any capacity and location for network functions without physical constraints. The authors previously proposed an algorithm for virtual routing function allocation in the NFV-based network that minimizes the total power consumption or the total network cost, and developed effective allocation guidelines for virtual routing functions.
This paper evaluates the effect of the maximum tolerable network delay on the guidelines for allocating virtual routing functions so as to minimize the total network cost. Quantitative evaluations make the following points clear: (1) the shorter the maximum tolerable network delay, the greater the number of areas where the routing function must be allocated, resulting in an increase in the total network cost; (2) the greater the routing function cost relative to the circuit bandwidth cost, the greater the increase in the total network cost caused by the maximum tolerable network delay. This paper also provides a possible guideline for deciding the value of the maximum tolerable network delay when an allowable increase in network cost is given.
AN EFFECTIVE CONTROL OF HELLO PROCESS FOR ROUTING PROTOCOL IN MANETS (IJCNC Journal)
In a mobile ad hoc network (MANET), link connectivity updates are necessary to refresh the neighbour tables used in data transfer. The existing hello process exchanges link connectivity information periodically, which is not adequate for a dynamic topology: slow updates of neighbour table entries cause link failures, which degrade performance through packet drops, higher maximum delay, increased energy consumption, and reduced throughput. In the dynamic hello technique, counts of new and lost neighbour nodes are used to compute the link change rate (LCR) and the hello interval/refresh rate (r). Exchanging link connectivity information too fast, however, consumes unnecessary bandwidth and energy. In MANETs, resource wastage can be controlled by avoiding re-route discovery, frequent error notification, and local repair across the entire network. We enhance the existing hello process accordingly, which yields a significant improvement in performance.
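The LCR-driven adaptation can be sketched as below; the definition of LCR as churn per table entry per second, and the inverse-proportional interval rule with its bounds, are illustrative assumptions rather than the paper's exact formulas:

```python
def link_change_rate(new_neighbours, lost_neighbours, interval, table_size):
    """Link change rate (LCR): fraction of the neighbour table that
    changed per second over the last hello interval. Illustrative
    definition; the paper's exact formula may differ."""
    if table_size == 0 or interval <= 0:
        return 0.0
    return (new_neighbours + lost_neighbours) / (table_size * interval)

def next_hello_interval(lcr, base=1.0, min_i=0.25, max_i=4.0):
    """Adapt the hello interval r to topology dynamics: refresh fast
    when links churn (high LCR), slowly when the neighbourhood is
    stable, saving bandwidth and energy. Bounds keep the interval in
    a sane range. All constants here are assumed values."""
    return max(min_i, min(max_i, base / (1.0 + lcr)))
```

A stable neighbourhood (LCR = 0) keeps the base 1 s interval, while heavy churn drives the interval down toward the 0.25 s floor.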
Recital Study of Various Congestion Control Protocols in wireless network (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document discusses dynamic adaptation techniques for optimizing data transfer performance over networks. It describes how the number of concurrent data transfer streams can be adjusted dynamically according to changing network conditions, without relying on historical measurements or external profiling. The proposed approach gradually increases the level of parallelism during a transfer to find a near-optimal number of streams based on instant throughput measurements, allowing it to adapt to varying environments and network utilization over time.
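The gradual parallelism search described above can be sketched as a simple hill climb on instantaneous throughput; the stopping threshold and function names are assumptions, not the document's actual algorithm:

```python
def tune_streams(measure_throughput, max_streams=16, gain_threshold=0.05):
    """Gradually raise the number of concurrent transfer streams while
    each extra stream still improves measured throughput by more than
    gain_threshold (here an assumed 5%); stop at the first count where
    the gain fades. measure_throughput: callable n -> observed
    throughput with n streams. Illustrative sketch of the adaptive,
    history-free approach described above."""
    best_n, best_tp = 1, measure_throughput(1)
    for n in range(2, max_streams + 1):
        tp = measure_throughput(n)
        if tp <= best_tp * (1 + gain_threshold):
            break  # diminishing returns: near-optimal count reached
        best_n, best_tp = n, tp
    return best_n
```

Against a link that saturates at four streams, the search stops at four; on a link that gains nothing from parallelism it stays at one.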
This document discusses database system architectures and distributed database systems. It covers transaction server systems, distributed database definitions, promises of distributed databases, complications introduced, and design issues. It also provides examples of horizontal and vertical data fragmentation and discusses parallel database architectures, components, and data partitioning techniques.
A XMLRPC Approach to the Management of Cloud Infrastructure (iosrjce)
This paper proposes and evaluates a communication-aware load balancing scheme for parallel applications on clusters. It models the communication, I/O, and CPU load of parallel jobs. The proposed scheme balances loads while keeping network resource utilization high. Experimental results show it can improve performance by 206-235% under high communication demands compared to other schemes. The paper also discusses how network delays can affect the accuracy of load balancing decisions in highly interactive distributed environments and proposes a centralized approach with delay adjustment to address this issue.
SECTOR TREE-BASED CLUSTERING FOR ENERGY EFFICIENT ROUTING PROTOCOL IN HETEROG... (IJCNC Journal)
The document proposes a new routing protocol called Sector Tree-Based Clustering for Energy Efficient Routing Protocol (STB-EE) for wireless sensor networks. STB-EE partitions the sensor field into dynamic sectors to balance the number of nodes per cluster. Within each sector, STB-EE constructs a minimum spanning tree to connect nodes and reduce long-distance communication. STB-EE selects cluster heads based on remaining energy and distance to the base station. Simulation results show STB-EE can improve network lifespan by about 15-16% compared to other protocols.
The document presents PACK, a novel end-to-end traffic redundancy elimination system designed for cloud computing customers. PACK aims to minimize processing costs for the cloud server by offloading traffic elimination efforts to end clients. It uses a receiver-based approach where the client analyzes incoming data streams, identifies redundant content, and sends predictions to the server. If a prediction matches, the server only needs to send an acknowledgment instead of the actual data, reducing bandwidth costs. The authors implemented and tested PACK, finding it can achieve up to 30% redundancy elimination with low server overhead, representing a cost savings of around 20% for cloud users.
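The receiver-driven bookkeeping behind such a scheme can be sketched as follows; fixed-size chunking and SHA-1 signatures are deliberate simplifications here, not PACK's actual chunking or prediction protocol:

```python
import hashlib

def chunk_signatures(data, size=64):
    """Split a byte stream into fixed-size chunks and sign each with
    SHA-1: the kind of client-side state a PACK-like receiver keeps
    about data it has already seen. Illustrative simplification."""
    return [hashlib.sha1(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

def redundant_fraction(old_stream, new_stream, size=64):
    """Fraction of the new stream's chunks the receiver already holds,
    i.e. data the server could replace with short acknowledgements of
    the client's predictions instead of retransmitting."""
    known = set(chunk_signatures(old_stream, size))
    new = chunk_signatures(new_stream, size)
    hits = sum(1 for sig in new if sig in known)
    return hits / len(new) if new else 0.0
```

If half of an incoming stream repeats previously received chunks, half of its bytes (modulo signalling overhead) need not be resent.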
Enhancing Cloud Computing Security for Data Sharing Within Group Members (iosrjce)
Enhancement of qos in multihop wireless networks by delivering cbr using lb a... (eSAT Journals)
Abstract: One of the most complicated issues in multi-hop wireless networks is measuring the end-to-end delay between nodes. Two nodes communicate by hopping over multiple wireless links, and each node must carry not only its own generated traffic but also relayed traffic; unfairness is observed particularly for transmissions among nodes that are more than one hop apart. Most existing work addresses joint congestion control and scheduling without focusing on delay performance, treating throughput as the sole metric. Packet delay also matters, because practical congestion control protocols set retransmission timeouts based on it, and these parameters significantly affect the speed of recovery when packets are lost. Two delay-performance issues arise: first, for long flows, the end-to-end delay may grow quadratically with the number of hops; second, it is difficult to control the end-to-end delay of each flow. TDMA schedules transmissions fairly, in terms of throughput per connection, given the communication requirements of the network's active flows, but it does not work properly in the multi-hop scenario because it is designed for single-hop networks. We propose the Leaky Bucket Algorithm, in addition to a joint congestion control and scheduling algorithm, for multi-hop wireless networks. The proposed algorithm not only achieves provable throughput but also enforces upper bounds on the delay of each flow. It reduces transmission time by delivering packets at a constant bit rate even when they arrive in bursts. Keywords: multi-hop wireless networks, congestion control, performance, delay, flow, throughput.
Enhancement of qos in multihop wireless networks by delivering cbr using lb a... (eSAT Publishing House)
This document summarizes a research paper that proposes using the Leaky Bucket Algorithm to enhance quality of service (QoS) in multi-hop wireless networks delivering constant bit rate (CBR) traffic. The Leaky Bucket Algorithm aims to reduce transmission delay by delivering packets at a constant rate even when packets arrive in bursts. It combines joint congestion control and a scheduling algorithm to not only achieve provable throughput guarantees, but also place explicit upper bounds on the end-to-end delay of each flow. Simulation results show the proposed approach reduces transmission time and improves throughput compared to existing scheduling algorithms that do not consider delay performance.
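The constant-rate shaping idea can be shown with a textbook leaky bucket; this is the classic discrete-time shaper, not the paper's joint congestion control and scheduling algorithm:

```python
def leaky_bucket(arrivals, rate, capacity):
    """Classic leaky-bucket shaper: bursty arrivals are queued in a
    bucket of bounded capacity and drained at a constant rate per tick,
    turning bursty input into CBR output. arrivals: packets arriving at
    each tick. Returns (packets sent per tick, packets dropped)."""
    queued, sent, dropped = 0, [], 0
    for burst in arrivals:
        room = capacity - queued
        accepted = min(burst, room)
        dropped += burst - accepted   # overflow beyond the bucket
        queued += accepted
        out = min(rate, queued)       # drain at the constant rate
        queued -= out
        sent.append(out)
    return sent, dropped
```

A burst of 5 packets into a bucket of capacity 4 drained at 2 per tick leaves the shaper as a smooth 2, 2 sequence, with one packet dropped on overflow.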
Task mapping and routing optimization for hard real-time Networks-on-Chip (journalBEEI)
This document discusses an optimization technique that simultaneously explores task mapping and network routing for hard real-time Networks-on-Chip (NoCs). The technique aims to reduce optimization time by evaluating both parameters simultaneously, rather than in separate stages. It proposes using a genetic algorithm with a chromosome structure that encodes both task mapping and routing information. The algorithm evaluates configurations based on schedulability analysis to minimize the number of unschedulable tasks and messages. The goals are to find a schedulable configuration from the large design space and ensure the mapping remains schedulable under different routing mechanisms.
GTSH: A New Channel Assignment Algorithm in Multi-Radio Multi-channel Wireles... (IJERA Editor)
This document presents a new channel assignment algorithm called GTSH for multi-radio multi-channel wireless mesh networks. It combines the genetic algorithm and tabu search algorithm to maximize throughput. The genetic algorithm is used to generate initial solutions while tabu search explores neighbors of the best solution to avoid getting stuck in local optima. Simulation results using the NS2 simulator showed the hybrid GTSH method achieved significantly higher throughput than using genetic or tabu search alone.
PAGE: A Partition Aware Engine for Parallel Graph Computation (1crore projects)
This document provides summaries of 15 networking projects from TTA including the project code, title, description, and reference. The projects cover topics like delay analysis of opportunistic spectrum access MAC protocols, load balancing for network traffic measurement, key exchange protocols for parallel network file systems, anomaly detection in intrusion detection systems, and energy efficient group key agreement for wireless networks. The document provides contact information at the end for obtaining full project papers.
Wired and Wireless Computer Network Performance Evaluation Using OMNeT++ Simu... (Jaipal Dhobale)
This document summarizes the performance evaluation of wired and wireless computer networks using the OMNeT++ simulation environment. The performance is evaluated based on throughput. For the wired network simulation, the Nclients application from INET is used, while the Wireless Host to Host application is used for the wireless network simulation. Throughput is measured for both networks by varying the data rate and number of clients. The results show that throughput from the wired server generally increases with more clients, while throughput from the wireless server is highest with a lower number of clients. Throughput to the server is observed to increase with data rate for both networks.
Application Aware Topology Generation for Surface Wave Networks-on-Chip (zhao fu)
This document summarizes a research paper that proposes a new algorithm called the maximal declining sorting algorithm (MDSA) to optimize network-on-chip (NoC) performance and power consumption. The MDSA aims to reduce communication time in NoCs by minimizing the number of hops between cores that communicate frequently. It does this by adding wireless transceivers between cores based on the product of their traffic volume and number of hops. The MDSA is shown to reduce communication time by up to 35.6% compared to the conventional genetic algorithm approach, and also lowers power consumption.
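The ranking criterion named above (traffic volume times hop count) can be sketched as a greedy selection; the real MDSA's sorting details differ, so this shows only the criterion, under an assumed transceiver budget:

```python
def select_wireless_links(traffic, hops, budget):
    """Greedy sketch of the MDSA ranking idea: score each communicating
    core pair by traffic volume x hop distance (the pairs that cost the
    most on the wired mesh), then spend the wireless-transceiver budget
    on the top-scoring pairs. traffic, hops: dict (src, dst) -> value.
    Returns the chosen pairs, best first. Illustrative only."""
    ranked = sorted(traffic, key=lambda p: traffic[p] * hops[p], reverse=True)
    return ranked[:budget]
```

A low-volume pair separated by many hops can outrank a high-volume adjacent pair, which is exactly why the product, not traffic alone, drives the choice.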
On the Impact of Mobile Hosts in Peer-to-Peer Data Networks (Zhenyun Zhuang)
This document analyzes the performance issues faced by mobile hosts participating in peer-to-peer (P2P) data networks like BitTorrent. It finds that the design of P2P networks is incompatible with the characteristics of wireless networks, causing poor performance for mobile users. It then presents a solution called wireless P2P (wP2P) that addresses these issues through techniques only applied on mobile hosts, improving performance for both mobile and fixed peers. An evaluation shows wP2P provides significant gains over existing P2P applications on mobile networks.
PAIDS: A Proximity-Assisted Intrusion Detection System for Unidentified Worms (Zhenyun Zhuang)
This document proposes a new intrusion detection system called PAIDS (Proximity-Assisted Intrusion Detection System) to identify unknown worms. Existing signature-based and anomaly-based detection systems are ineffective against new worms that spread quickly. PAIDS takes advantage of the clustered spread of worms among nearby hosts, especially in the early stages, rather than relying on signatures. It aims to detect worm outbreaks when they first begin spreading to limit their propagation. Preliminary simulations show PAIDS has a high detection rate and low false positive rate.
Hazard avoidance in wireless sensor and actor networks (Zhenyun Zhuang)
This document discusses hazards that can occur in wireless sensor and actor networks due to out-of-order execution of queries and commands. It identifies three types of hazards:
1) Command-after-command (CAC) hazard occurs when the order of two sequential commands is reversed.
2) Query-after-command (QAC) hazard occurs when a query is executed before the corresponding command.
3) Command-after-query (CAQ) hazard is the reverse of QAC, where a command is executed before its preceding query.
The document uses an example of a fire detection and suppression system to illustrate these hazards and their undesirable consequences. It also discusses challenges in addressing hazards such as parallel
Optimizing Streaming Server Selection for CDN-delivered Live Streaming (Zhenyun Zhuang)
LNCS 2012
Content Delivery Networks (CDNs) have been widely used to deliver web contents on today's Internet. Gaining tremendous popularity, live streaming is also increasingly being delivered by CDNs. Compared to conventional static or dynamic web contents, the new application type of live streaming exposes unique characteristics that pose challenges to the underlying CDN infrastructure. Unlike traditional web-object fetching, which allows Edge Servers to cache contents and thus typically involves only Edge Servers in delivering contents, live streaming requires real-time full CDN-streaming paths that span Ingest Servers, Origin Servers and Edge Servers.
DNS is the standard practice for enabling dynamic assignment of servers. GeoDNS, a specialized DNS system, provides DNS resolution by taking into account the geographical locations of end-users and CDN servers. Though GeoDNS effectively redirects users to the nearest CDN Edge Servers, it may not select the optimal Origin Server for relaying a live stream to Edge Servers, due to the unique characteristics of live streaming. In this work, we consider the requirements of delivering live streaming with a CDN and propose an advanced design for selecting optimal Origin Streaming Servers, in order to reduce network transit cost and improve viewers' experience. We further propose a live-streaming-specific GeoDNS design for selecting optimal Origin Servers to serve Edge Servers.
VIRTUAL ROUTING FUNCTION DEPLOYMENT IN NFV-BASED NETWORKS UNDER NETWORK DELAY...IJCNC Journal
NFV-based network implements a variety of network functions with software on general-purpose servers
and this allows the network operator to select any capacity and location of network functions without any
physical constraints. The authors proposed an algorithm of virtual routing function allocation in the NFVbased
network for minimizing the total power consumption or the total network cost, and developed
effective allocation guidelines for virtual routing functions.
This paper evaluates the effect of the maximum tolerable network delay on the guidelines for the allocation
of virtual routing functions, which minimizes the total network cost. The following points are clear from
quantitative evaluations: (1) The shorter the maximum tolerable network delay, the greater the number of
areas where the routing function must be allocated, resulting in an increase in the total network cost. (2)
The greater the routing function cost relative to the circuit bandwidth cost, the greater the increase in the
total network cost caused by the maximum tolerable network delay. This paper also provides the possible
guideline how to decide the value of maximum tolerable network delay when the condition of allowable
increase in network cost is given.
AN EFFECTIVE CONTROL OF HELLO PROCESS FOR ROUTING PROTOCOL IN MANETSIJCNCJournal
In the mobile ad hoc network (MANET) update of link connectivity is necessary to refresh the neighbor tables in data transfer. A existing hello process periodically exchanges the link connectivity information, which is not adequate for dynamic topology. Here, slow update of neighbour table entries causes link failures which affect performance parameter as packet drop, maximum delay, energy consumption, and reduced throughput. In the dynamic hello technique, new neighbour nodes and lost neighbour nodes are used to compute link change rate (LCR) and hello-interval/refresh rate (r). Exchange of link connectivity information at a fast rate consumes unnecessary bandwidth and energy. In MANET resource wastage can be controlled by avoiding the re-route discovery, frequent error notification, and local repair in the entire network. We are enhancing the existing hello process, which shows significant improvement in performance.
Recital Study of Various Congestion Control Protocols in wireless networkiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document discusses dynamic adaptation techniques for optimizing data transfer performance over networks. It describes how the number of concurrent data transfer streams can be adjusted dynamically according to changing network conditions, without relying on historical measurements or external profiling. The proposed approach gradually increases the level of parallelism during a transfer to find a near-optimal number of streams based on instant throughput measurements, allowing it to adapt to varying environments and network utilization over time.
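The core idea above — probing for a near-optimal number of parallel streams from instant throughput measurements alone — can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm; `measure_throughput` is a hypothetical callback that transfers a probe chunk with `n` concurrent streams and returns the observed throughput, and the 5% gain threshold is an assumed tuning parameter.

```python
def tune_streams(measure_throughput, max_streams=16, min_gain=0.05):
    """Gradually add parallel streams until instant throughput stops improving.

    Stops as soon as an extra stream fails to improve throughput by at
    least `min_gain` (relative), so the probe adapts to current network
    conditions without any historical measurements.
    """
    best_n, best_tp = 1, measure_throughput(1)
    for n in range(2, max_streams + 1):
        tp = measure_throughput(n)
        if tp < best_tp * (1 + min_gain):  # no meaningful gain: stop probing
            break
        best_n, best_tp = n, tp
    return best_n
```

Because the probe runs during the transfer itself, it can be repeated periodically to re-adapt as network utilization changes.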
This document discusses database system architectures and distributed database systems. It covers transaction server systems, distributed database definitions, promises of distributed databases, complications introduced, and design issues. It also provides examples of horizontal and vertical data fragmentation and discusses parallel database architectures, components, and data partitioning techniques.
A XMLRPC Approach to the Management of Cloud Infrastructureiosrjce
This paper proposes and evaluates a communication-aware load balancing scheme for parallel applications on clusters. It models the communication, I/O, and CPU load of parallel jobs. The proposed scheme balances loads while keeping network resource utilization high. Experimental results show it can improve performance by up to 206-235% under high communication demands compared to other schemes. The paper also discusses how network delays can affect the accuracy of load balancing solutions in highly interactive distributed environments and proposes using a centralized approach with delay adjustment to address this issue.
SECTOR TREE-BASED CLUSTERING FOR ENERGY EFFICIENT ROUTING PROTOCOL IN HETEROG...IJCNCJournal
The document proposes a new routing protocol called Sector Tree-Based Clustering for Energy Efficient Routing Protocol (STB-EE) for wireless sensor networks. STB-EE partitions the sensor field into dynamic sectors to balance the number of nodes per cluster. Within each sector, STB-EE constructs a minimum spanning tree to connect nodes and reduce long-distance communication. STB-EE selects cluster heads based on remaining energy and distance to the base station. Simulation results show STB-EE can improve network lifespan by about 15-16% compared to other protocols.
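The intra-sector tree construction the summary describes — connecting a sector's nodes with a minimum spanning tree to avoid long-distance links — can be sketched with Prim's algorithm. This is a generic MST sketch under assumed 2-D coordinates, not the authors' exact STB-EE procedure.

```python
import math

def prim_mst(coords):
    """Minimum spanning tree over node coordinates via Prim's algorithm.

    coords: list of (x, y) positions; returns tree edges as (i, j) index
    pairs. Each step attaches the out-of-tree node closest to the tree,
    which keeps individual link distances (and thus radio energy) low.
    """
    n = len(coords)
    dist = lambda a, b: math.dist(coords[a], coords[b])
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        i, j = min(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: dist(*e))
        edges.append((i, j))
        in_tree.add(j)
    return edges
```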
The document presents PACK, a novel end-to-end traffic redundancy elimination system designed for cloud computing customers. PACK aims to minimize processing costs for the cloud server by offloading traffic elimination efforts to end clients. It uses a receiver-based approach where the client analyzes incoming data streams, identifies redundant content, and sends predictions to the server. If a prediction matches, the server only needs to send an acknowledgment instead of the actual data, reducing bandwidth costs. The authors implemented and tested PACK, finding it can achieve up to 30% redundancy elimination with low server overhead, representing a cost savings of around 20% for cloud users.
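The receiver-driven prediction loop described above can be sketched in a few lines: the receiver remembers which chunk followed which in past traffic, predicts the next chunk's digest, and the sender replies with a short acknowledgment when the prediction matches. This is a simplified illustration of the idea, not PACK's actual chunking or signature scheme.

```python
import hashlib

def chunk_hash(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

class Receiver:
    """Receiver-side store: maps the digest of each seen chunk to the
    digest of the chunk that followed it, enabling next-chunk predictions."""
    def __init__(self):
        self.successor = {}  # hash(prev chunk) -> hash(next chunk)
        self.prev = None
    def observe(self, chunk: bytes):
        h = chunk_hash(chunk)
        if self.prev is not None:
            self.successor[self.prev] = h
        self.prev = h
    def predict(self):
        """Predicted digest of the next chunk, or None if unseen context."""
        return self.successor.get(self.prev)

def sender_reply(next_chunk: bytes, prediction):
    """Sender side: a matching prediction is answered with a short ACK
    instead of the chunk itself, saving bandwidth on redundant data."""
    if prediction == chunk_hash(next_chunk):
        return b"ACK"
    return next_chunk
```

The asymmetry is the point: the matching work happens on the client, so the cloud server's per-byte processing cost stays low.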
Enhancing Cloud Computing Security for Data Sharing Within Group Membersiosrjce
Enhancement of qos in multihop wireless networks by delivering cbr using lb a...eSAT Journals
Abstract: Measuring the end-to-end delay performance between nodes in multi-hop wireless networks is one of the most complicated issues. Two nodes communicate by hopping over multiple wireless links, and each node must carry not only its own generated traffic but also relayed traffic; unfairness is observed particularly for transmissions among nodes more than one hop apart. Most existing work addresses joint congestion control and scheduling but does not focus on delay performance, considering the throughput metric alone. Although throughput is the most frequently targeted metric for congestion-controlled flows, packet delay also matters: practical congestion control protocols set retransmission timeouts based on packet delay, and such parameters significantly affect the speed of recovery when packets are lost. Two delay-related issues arise. First, for long flows, the end-to-end delay may grow quadratically with the number of hops. Second, it is difficult to control the end-to-end delay of each flow. TDMA schedules transmissions fairly in terms of per-connection throughput given the communication requirements of the network's active flows, but it does not work properly in the multi-hop scenario because it was designed for single-hop networks. We propose the Leaky Bucket Algorithm in addition to a joint congestion control and scheduling algorithm in multi-hop wireless networks. The proposed algorithm not only achieves provable throughput but also places upper bounds on the delay of each flow. It reduces transmission time by delivering packets at a constant bit rate even when packets arrive in bursts. Keywords: multi-hop wireless networks, congestion control, performance, delay, flow, throughput.
Enhancement of qos in multihop wireless networks by delivering cbr using lb a...eSAT Publishing House
This document summarizes a research paper that proposes using the Leaky Bucket Algorithm to enhance quality of service (QoS) in multi-hop wireless networks delivering constant bit rate (CBR) traffic. The Leaky Bucket Algorithm aims to reduce transmission delay by delivering packets at a constant rate even when packets arrive in bursts. It combines joint congestion control and a scheduling algorithm to not only achieve provable throughput guarantees, but also place explicit upper bounds on the end-to-end delay of each flow. Simulation results show the proposed approach reduces transmission time and improves throughput compared to existing scheduling algorithms that do not consider delay performance.
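The classic leaky bucket mechanism both summaries refer to — bursty arrivals in, constant-rate departures out — can be sketched as a small queue with a fixed drain rate. This is a textbook illustration of the algorithm, not the paper's scheduler; the rate and capacity values are arbitrary.

```python
class LeakyBucket:
    """Shape bursty arrivals into a constant bit rate: packets drain at a
    fixed rate per tick; arrivals beyond the bucket capacity are dropped."""
    def __init__(self, rate, capacity):
        self.rate = rate          # packets released per tick
        self.capacity = capacity  # queue limit
        self.queue = []

    def arrive(self, packets):
        """Enqueue a burst; returns the number of packets dropped."""
        space = self.capacity - len(self.queue)
        accepted = packets[:space]
        self.queue.extend(accepted)
        return len(packets) - len(accepted)

    def tick(self):
        """Release up to `rate` packets, regardless of how bursty input was."""
        out, self.queue = self.queue[:self.rate], self.queue[self.rate:]
        return out
```

The drain rate gives the constant bit rate on the output side, and the capacity bounds how much queueing delay any accepted packet can accumulate.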
Task mapping and routing optimization for hard real-time Networks-on-ChipjournalBEEI
This document discusses an optimization technique that simultaneously explores task mapping and network routing for hard real-time Networks-on-Chip (NoCs). The technique aims to reduce optimization time by evaluating both parameters simultaneously, rather than in separate stages. It proposes using a genetic algorithm with a chromosome structure that encodes both task mapping and routing information. The algorithm evaluates configurations based on schedulability analysis to minimize the number of unschedulable tasks and messages. The goals are to find a schedulable configuration from the large design space and ensure the mapping remains schedulable under different routing mechanisms.
GTSH: A New Channel Assignment Algorithm in Multi-Radio Multi-channel Wireles...IJERA Editor
This document presents a new channel assignment algorithm called GTSH for multi-radio multi-channel wireless mesh networks. It combines the genetic algorithm and tabu search algorithm to maximize throughput. The genetic algorithm is used to generate initial solutions while tabu search explores neighbors of the best solution to avoid getting stuck in local optima. Simulation results using the NS2 simulator showed the hybrid GTSH method achieved significantly higher throughput than using genetic or tabu search alone.
PAGE: A Partition Aware Engine for Parallel Graph Computation1crore projects
This document provides summaries of 15 networking projects from TTA including the project code, title, description, and reference. The projects cover topics like delay analysis of opportunistic spectrum access MAC protocols, load balancing for network traffic measurement, key exchange protocols for parallel network file systems, anomaly detection in intrusion detection systems, and energy efficient group key agreement for wireless networks. The document provides contact information at the end for obtaining full project papers.
Wired and Wireless Computer Network Performance Evaluation Using OMNeT++ Simu...Jaipal Dhobale
This document summarizes the performance evaluation of wired and wireless computer networks using the OMNeT++ simulation environment. The performance is evaluated based on throughput. For the wired network simulation, the Nclients application from INET is used, while the Wireless Host to Host application is used for the wireless network simulation. Throughput is measured for both networks by varying the data rate and number of clients. The results show that throughput from the wired server generally increases with more clients, while throughput from the wireless server is highest with a lower number of clients. Throughput to the server is observed to increase with data rate for both networks.
Application Aware Topology Generation for Surface Wave Networks-on-Chipzhao fu
This document summarizes a research paper that proposes a new algorithm called the maximal declining sorting algorithm (MDSA) to optimize network-on-chip (NoC) performance and power consumption. The MDSA aims to reduce communication time in NoCs by minimizing the number of hops between cores that communicate frequently. It does this by adding wireless transceivers between cores based on the product of their traffic volume and number of hops. The MDSA is shown to reduce communication time by up to 35.6% compared to the conventional genetic algorithm approach, and also lowers power consumption.
On the Impact of Mobile Hosts in Peer-to-Peer Data NetworksZhenyun Zhuang
This document analyzes the performance issues faced by mobile hosts participating in peer-to-peer (P2P) data networks like BitTorrent. It finds that the design of P2P networks is incompatible with the characteristics of wireless networks, causing poor performance for mobile users. It then presents a solution called wireless P2P (wP2P) that addresses these issues through techniques only applied on mobile hosts, improving performance for both mobile and fixed peers. An evaluation shows wP2P provides significant gains over existing P2P applications on mobile networks.
PAIDS: A Proximity-Assisted Intrusion Detection System for Unidentified WormsZhenyun Zhuang
This document proposes a new intrusion detection system called PAIDS (Proximity-Assisted Intrusion Detection System) to identify unknown worms. Existing signature-based and anomaly-based detection systems are ineffective against new worms that spread quickly. PAIDS takes advantage of the clustered spread of worms among nearby hosts, especially in the early stages, rather than relying on signatures. It aims to detect worm outbreaks when they first begin spreading to limit their propagation. Preliminary simulations show PAIDS has a high detection rate and low false positive rate.
Hazard avoidance in wireless sensor and actor networksZhenyun Zhuang
This document discusses hazards that can occur in wireless sensor and actor networks due to out-of-order execution of queries and commands. It identifies three types of hazards:
1) Command-after-command (CAC) hazard occurs when the order of two sequential commands is reversed.
2) Query-after-command (QAC) hazard occurs when a query is executed before the corresponding command.
3) Command-after-query (CAQ) hazard is the reverse of QAC, where a command is executed before its preceding query.
The document uses an example of a fire detection and suppression system to illustrate these hazards and their undesirable consequences. It also discusses challenges in addressing hazards such as parallel
Optimizing Streaming Server Selection for CDN-delivered Live StreamingZhenyun Zhuang
LNCS 2012
Content Delivery Networks (CDNs) have been widely used to deliver web contents on today's Internet. Gaining tremendous popularity, live streaming is also increasingly being delivered by CDNs. Compared to conventional static or dynamic web contents, the new application type of live streaming exposes unique characteristics that pose challenges to the underlying CDN infrastructure. Unlike traditional web-object fetching, which allows Edge Servers to cache contents and thus typically involves only Edge Servers in delivering contents, live streaming requires real-time full CDN-streaming paths that span Ingest Servers, Origin Servers, and Edge Servers.

DNS is the standard practice for enabling dynamic assignment of servers. GeoDNS, a specialized DNS system, provides DNS resolution by taking into account the geographical locations of end-users and CDN servers. Though GeoDNS effectively redirects users to the nearest CDN Edge Servers, it may not be able to select the optimal Origin Server for relaying a live stream to Edge Servers, due to the unique characteristics of live streaming. In this work, we consider the requirements of delivering live streaming with a CDN and propose an advanced design for selecting optimal Origin Streaming Servers, in order to reduce network transit cost and improve viewers' experience. We further propose a live-streaming-specific GeoDNS design for selecting optimal Origin Servers to serve Edge Servers.
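The selection problem sketched in the abstract — pick an origin that keeps transit cost down without hurting viewer experience — can be illustrated as cost minimization under a latency bound. This is a toy model with assumed inputs (the per-origin cost/latency figures are hypothetical), not the paper's design.

```python
def pick_origin(candidates, max_latency):
    """Pick the origin with minimum transit cost among those whose path
    latency toward the edge server stays within the viewer-experience bound.

    candidates: {origin_name: (transit_cost, latency_ms)}; returns the
    chosen origin name, or None if no candidate satisfies the bound.
    """
    feasible = [(cost, lat, name)
                for name, (cost, lat) in candidates.items()
                if lat <= max_latency]
    return min(feasible)[2] if feasible else None
```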
AOTO: Adaptive overlay topology optimization in unstructured P2P systemsZhenyun Zhuang
IEEE GLOBECOM 2003
Peer-to-Peer (P2P) systems are self-organized and decentralized. However, the mechanism of a peer randomly joining and leaving a P2P network causes topology mismatching between the P2P logical overlay network and the physical underlying network. The topology mismatching problem brings great stress on the Internet infrastructure and seriously limits the performance gain from various search or routing techniques. We propose the Adaptive Overlay Topology Optimization (AOTO) technique, an algorithm for building an overlay multicast tree among each source node and its direct logical neighbors so as to alleviate the mismatching problem by choosing closer nodes as logical neighbors, while providing a larger query coverage range. AOTO is scalable and completely distributed in the sense that it does not require global knowledge of the whole overlay network when each node is optimizing the organization of its logical neighbors. The simulation shows that AOTO can effectively solve the mismatching problem and reduce more than 55% of the traffic generated by the P2P system itself.
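The local, distributed flavor of AOTO — each node improving only its own neighbor set using pairwise delay measurements — can be illustrated with a single optimization step. This sketch assumes a `delay(a, b)` measurement callback and a simple "swap the farthest neighbor for a closer candidate" rule; the real algorithm builds an overlay multicast tree, which this does not reproduce.

```python
def optimize_neighbors(node, neighbors, candidates, delay):
    """One local optimization step (sketch): if some candidate peer is
    closer (lower measured delay) than the current farthest logical
    neighbor, swap them. Uses only local measurements, never global
    knowledge of the overlay."""
    if not neighbors or not candidates:
        return neighbors
    worst = max(neighbors, key=lambda n: delay(node, n))
    best = min(candidates, key=lambda c: delay(node, c))
    if delay(node, best) < delay(node, worst):
        neighbors = [n for n in neighbors if n != worst] + [best]
    return neighbors
```

Repeating such steps at every node gradually pulls the logical overlay toward the physical topology, which is the mismatch reduction the abstract describes.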
Ensuring High-performance of Mission-critical Java Applications in Multi-tena...Zhenyun Zhuang
The document discusses problems with ensuring high performance of mission-critical Java applications in multi-tenant cloud environments. It identifies issues caused by resource sharing between applications on the same platform, such as memory pressure triggering page swapping and direct reclaiming, which can severely degrade Java application performance through increased garbage collection pauses and reduced throughput. The authors investigate two scenarios in a production environment and determine that transparent huge pages, memory pressure from other applications, and interactions between the JVM and Linux memory management are key factors impacting Java application performance in multi-tenant cloud setups.
Programmatic Right Here, Right Now ( English Version )Xavier Garrido
Presentation given at the II Forum Programmatic on March 1st, 2016 in Madrid.
In this presentation, I tried to bring programmatic closer to the audience in a simple way, emphasizing that we are facing a paradigm shift in the advertising industry.
Legal aspects of religion in the workplaceRonald Brown
1. Managing religion in the workplace is challenging as people have strong spiritual beliefs but companies must also avoid religious discrimination. While some firms welcome religious expressions, others keep religion out of the workplace.
2. A survey found 20% of workers reported experiencing religious prejudice or knew of others facing discriminatory treatment. This has led to lawsuits against companies over religious discrimination or failure to accommodate religious practices.
3. Employers must strive to balance religious expression with inclusion of all beliefs to avoid lawsuits. They should provide religious accommodations when possible without causing undue hardship. Anti-harassment policies and training can help prevent problems related to religion in the workplace.
The document discusses definitions of leadership and what makes a good leader. It provides several definitions:
1) "Leadership [is] creating the conditions in organizational systems so that people can do their best work" and "Leaders define or clarify goals for a group, which can be as small as a seminar or as large as a nation-state and mobilize the energies of members of the group to pursue those goals."
2) "Problem solving is the core of leadership" and "the art of accomplishing more than the science of management says is possible."
3) Leadership "is about coping with change", about "motivating and inspiring---keeping people moving in the right direction, despite major obstacles".
What makes a leader and what is leadershpRonald Brown
Leadership is defined in multiple ways in the document. It involves creating conditions for people to do their best work, defining goals for a group, and motivating people to pursue those goals. It also involves problem solving and accomplishing more than what seems possible. Effective leadership requires vision, managing change, and having a clear sense of direction.
The document discusses biometrics, which uses physiological or behavioral human characteristics to identify individuals. It defines biometrics and describes a generic biometric system involving enrollment, sensors, feature extraction, and matching. The document outlines several types of biometrics including face recognition, fingerprints, hand geometry, iris/retina scans, DNA, keystrokes, and voice. It also discusses vulnerabilities in biometric systems such as spoofing attacks, template database leaks, and intrinsic limitations like false matches. The document proposes security approaches like feature transformations and cryptosystems to enhance biometric security.
Building Cloud-ready Video Transcoding System for Content Delivery Networks (...Zhenyun Zhuang
GLOBECOM 2012
Video streaming traffic of both VoD (Video on Demand) and Live is exploding. Various types of businesses and many people rely on video streaming to attract customers/users and for other purposes. Given the vast number of video stream formats (e.g., MP4, FLV) and transmission protocols (e.g., HTTP, RTMP, RTSP) for supporting varying types of playback terminals (particularly mobile devices such as iPhone/iPad and Android phones), video content providers often need to transcode videos to multiple formats in order to stream to different types of users.

Being time-sensitive and requiring high bandwidth, video streaming exerts high pressure on underlying delivery networks. Content Delivery Network (CDN) providers can help their customers quickly and reliably distribute stream contents to end users. In addition to distributing video streams, CDN providers typically allow their customers to perform video transcoding on CDN platforms. With the high volume of video streams and the bursty transcoding workload, CDN providers are eager to deploy elastic and optimized cloud-based transcoding platforms.
Hybrid Periodical Flooding in Unstructured Peer-to-Peer NetworksZhenyun Zhuang
This document proposes a new search mechanism called Hybrid Periodical Flooding (HPF) for unstructured peer-to-peer networks. HPF aims to reduce unnecessary traffic like blind flooding while also addressing the "partial coverage problem" of some statistics-based search mechanisms. It introduces the concept of Periodical Flooding (PF), which controls the number of neighbors a query is forwarded to based on the time-to-live value. This allows the forwarding behavior to change periodically over the query's lifetime. HPF then combines PF with weighted selection of neighbors based on multiple metrics to guide queries towards potentially relevant results while exploring more of the network.
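The Periodical Flooding idea in the summary — letting the forwarding degree depend on a query's remaining TTL, combined with weighted neighbor selection — can be sketched as below. The TTL-to-fraction schedule and the scoring are illustrative assumptions, not the paper's parameters.

```python
def select_neighbors(neighbors, ttl, scores):
    """HPF-style forwarding sketch: forward a query to a TTL-dependent
    fraction of neighbors, preferring those with higher weighted scores.

    High-TTL (young) queries flood widely; low-TTL queries are forwarded
    selectively, cutting the blind-flooding traffic the summary mentions.
    """
    fraction = 1.0 if ttl >= 6 else 0.5 if ttl >= 3 else 0.25
    k = max(1, int(len(neighbors) * fraction))
    return sorted(neighbors, key=lambda n: scores[n], reverse=True)[:k]
```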
Eliminating OS-caused Large JVM Pauses for Latency-sensitive Java-based Cloud...Zhenyun Zhuang
For PaaS-deployed (Platform as a Service) customer-facing applications (e.g., online gaming and online chatting), ensuring low latencies is not just a preferred feature, but a must-have feature. Given the popularity and power of Java platforms, a significant portion of today's PaaS platforms run Java. The JVM (Java Virtual Machine) manages a heap space to hold application objects. The heap space is frequently GC-ed (garbage collected), and applications can occasionally be stopped for a long time during some GC and JVM activities.

In this work, we investigated the JVM pause problem. We found that some large JVM STW (stop-the-world) pauses cannot be explained by application-level activities or by JVM activities during GC; instead, they are caused by OS mechanisms. We successfully reproduced such problems and identified their root causes. The findings can be used to enhance JVM implementations. We also propose a set of solutions to mitigate and eliminate these large STW pauses, and we share the knowledge and experience in this writing.
Mobile Hosts Participating in Peer-to-Peer Data Networks: Challenges and Solu...Zhenyun Zhuang
Wireless Networks (2010)
http://dl.acm.org/citation.cfm?id=1873504
Peer-to-peer (P2P) data networks dominate Internet traffic, accounting for over 60% of the overall traffic in a recent study. In this work, we study the problems that arise when mobile hosts participate in P2P networks. We primarily focus on the performance issues as experienced by the mobile host, but also study the impact on other fixed peers. Using BitTorrent as a key example, we identify several unique problems that arise due to the design aspects of P2P networks being incompatible with typical characteristics of wireless and mobile environments. Using the insights gained through our study, we present a wireless P2P (wP2P) client application that is backward compatible with existing fixed-peer client applications, but when used on mobile hosts can provide significant performance improvements.
Guarding Fast Data Delivery in Cloud: an Effective Approach to Isolating Perf...Zhenyun Zhuang
LNCS 2015
Cloud-based products rely heavily on fast data delivery between data centers and remote users; when data delivery is slow, the products' performance is crippled. When slow data delivery occurs, engineers need to investigate the issue and find the root cause. The investigation requires experience and time, as data delivery involves multiple components, including the sender, the receiver, and the network.

To facilitate such investigations, we propose an algorithm to automatically identify the performance bottleneck. The algorithm aggregates information from multiple layers of the data sender and receiver, and isolates the problem type by identifying which of the sender, receiver, or network is the bottleneck. After isolation, further efforts can be taken to root-cause the exact problem. We also build a prototype to demonstrate the effectiveness of the algorithm.
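The isolation idea — combining simple metrics from the sender, receiver, and network layers into a bottleneck verdict — can be illustrated with a toy classifier. The metric names, thresholds, and decision rules here are all assumptions for illustration, not the paper's algorithm.

```python
def isolate_bottleneck(sender_cpu, receiver_cpu, send_buf_full, recv_buf_full):
    """Classify which component limits data delivery (sketch).

    Intuition: data piling up in the sender's socket buffer while the
    receiver's buffer stays empty points at the network path; a full
    receive buffer or a busy receiver points at the receiver; a busy
    sender with empty buffers points at the sender itself.
    """
    if send_buf_full and not recv_buf_full:
        return "network"    # sender has data queued, receiver is starving
    if recv_buf_full or receiver_cpu > 0.9:
        return "receiver"   # data arrives faster than it is consumed
    if sender_cpu > 0.9:
        return "sender"     # sender cannot produce data fast enough
    return "none"
```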
OCPA: An Algorithm for Fast and Effective Virtual Machine Placement and Assig...Zhenyun Zhuang
Optimizing CDN Infrastructure for Live Streaming with Constrained Server Chai...Zhenyun Zhuang
This document proposes a method called Constrained Server Chaining (CSC) to optimize CDN infrastructure for live streaming. CSC allows CDN streaming servers to dynamically select upstream servers to pull live streams from, rather than only pulling from fixed ingest servers. This allows streaming servers to form constrained chains to minimize total transit costs for the CDN provider while ensuring end user experience is not compromised by capping delivery path lengths. The document outlines the problem definition, design overview, and software architecture of CSC and provides an example to motivate how CSC can reduce costs compared to traditional layered CDN structures.
Application-Aware Acceleration for Wireless Data Networks: Design Elements an...Zhenyun Zhuang
This document discusses an approach called Application-Aware Acceleration (A3) to improve application performance over wireless networks. It finds that while transport layer protocols improve performance for FTP, they provide little benefit for other applications like CIFS, SMTP, and HTTP due to the applications' behaviors. A3 addresses this by using principles like transaction prediction, prioritized fetching, and redundant transmissions to offset applications' typical problems when used over wireless networks. The document presents the motivation and design of A3, and evaluates its effectiveness through emulations and a proof-of-concept prototype using NetFilter.
This document is a project report on computer networking prepared by Surender Singh for his summer training. It provides an introduction to networking and covers topics such as network types (LAN and WAN), network models (OSI model), networking cables, devices, IP addressing, routing, firewalls, wireless networks, and ISDN. The report defines what a computer network is, outlines the requirements and benefits of networking, and describes different network components and concepts at a high level.
Performing Network Simulators of TCP with E2E Network Model over UMTS NetworksAM Publications,India
Wireless link losses result in poor TCP throughput, since losses are perceived as congestion by TCP. With the evolution of 3G technologies like the Universal Mobile Telecommunication System (UMTS), the usage of TCP has become more popular for reliable end-to-end (e2e) data delivery. However, TCP was initially designed for wired networks, and therefore it suffers performance degradation when the radio signal is affected by fading, shadowing, and interference. The research community has proposed many strategies for improving the performance of TCP over wireless links, such as introducing link-layer retransmission, explicitly notifying the sender of network conditions, or using new variants of TCP. As UMTS network coverage and availability are currently experiencing rapid growth, optimization of various internal components of its wireless network is very important. One such optimization is the introduction of High Speed Downlink Packet Access (HSDPA). This architecture not only allows higher data rates but also more reliable data transfer through the introduction of Hybrid ARQ (HARQ). With this enhancement to the UMTS network, it becomes vital to examine the performance of TCP in such a network. This thesis therefore evaluates two aspects of UMTS networks: first, the impact of HSDPA parameters like the scheduling algorithm and RLC/MAC-hs buffer size on overall TCP performance; and second, the behaviour of two categories of TCP rate and flow control, loss-based and delay-based. Our simulations show that delay-based TCP tends to perform better than loss-based TCP in the selected scenarios. The simulations are performed using the network simulator NS-2 with an e2e network model for enhanced UMTS (EURANE).
The document discusses the layered architecture of internet networks. It explains that networks are composed of multiple interconnected components and protocols arranged in layers, with each layer providing services to the layer above. The layers include the physical, data link, network, transport, and application layers. Data moves between hosts by being encapsulated with headers at each layer and de-encapsulated at the receiving end. The end-to-end principle guides that core network functions operate at the lower layers, leaving application-specific functions to the higher layers.
A Machine Learning based Network Sharing System Design with MPTCPIJMREMJournal
Information and communication technologies (ICT) integrate different types of wireless communication to provide IT-enabled services and applications. The great majority of end devices are equipped with multiple network interfaces such as Wi-Fi and 4G. Our goal is to integrate the available network interfaces and technologies to enhance seamless communication efficiency and increase resource utilization. We propose a heterogeneous network management algorithm based on machine learning methods, which includes roaming and sharing functions. The roaming function provides multiple network resources at the physical and media access control layers. The sharing function supports multiple network resource allocation and the service handover process based on the Multipath TCP protocol. Simulation results show that the proposed scheme can increase network bandwidth utilization effectively. The sharing system could be used in home, mobile, and vehicular environments to realize ubiquitous social sharing networks.
A Machine Learning based Network Sharing System Design with MPTCPIJMREMJournal
1) The document describes a machine learning-based network sharing system that uses Multipath TCP to integrate multiple network interfaces and allocate bandwidth resources for multiple users.
2) The system includes roaming and sharing functions, where roaming chooses the best network and sharing allocates resources across available networks.
3) A heterogeneous network management algorithm is proposed that monitors network status, predicts handovers between networks, and uses a machine learning approach to optimize resource utilization and load balancing across different network interfaces.
The document provides information about the CCNA certification options and Cisco networking concepts including the OSI model. It can be used to study for the CCNA exam. There are two options to obtain the CCNA: pass a single exam or two exams. The document then explains the OSI model in detail including mnemonics to remember the layer names and summaries of what occurs at each layer of the OSI model to help understand how data flows through a network.
The Utility based AHP& TOPSIS Methods for Smooth Handover in Wireless NetworksIRJET Journal
1) The document presents a method for network selection in heterogeneous wireless networks using the Analytic Hierarchy Process (AHP) and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) along with utility functions.
2) It aims to select the best network to avoid excessive switching between networks and provide smooth handover for different application types.
3) The proposed AHP and TOPSIS method incorporates quality of service parameters like data rate, delay, jitter and cost to calculate scores for each network and select the most suitable one for different application types including conversational, streaming and interactive.
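As a rough illustration of the TOPSIS step described above (attribute values and weights below are invented for illustration; a real system would derive the weights from AHP and the values from measurements):

```python
import math

# TOPSIS-style ranking of candidate networks.
# Columns: data rate (Mbps, benefit), delay (ms, cost),
# jitter (ms, cost), price (cost). All numbers are made up.
networks = {
    "WLAN": [54.0, 20.0, 5.0, 1.0],
    "UMTS": [2.0, 60.0, 10.0, 4.0],
    "LTE":  [100.0, 30.0, 4.0, 6.0],
}
weights = [0.4, 0.3, 0.15, 0.15]
benefit = [True, False, False, False]   # True = larger is better

names = list(networks)
cols = list(zip(*networks.values()))

# 1) Vector-normalize each column, then apply the weights.
norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
weighted = {
    n: [w * v / norm for v, w, norm in zip(networks[n], weights, norms)]
    for n in names
}

# 2) Ideal and anti-ideal points per attribute.
wcols = list(zip(*weighted.values()))
ideal = [max(c) if b else min(c) for c, b in zip(wcols, benefit)]
anti  = [min(c) if b else max(c) for c, b in zip(wcols, benefit)]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# 3) Closeness coefficient: nearer the ideal (and farther from the
# anti-ideal) means a higher score.
scores = {
    n: dist(weighted[n], anti)
       / (dist(weighted[n], ideal) + dist(weighted[n], anti))
    for n in names
}
best = max(scores, key=scores.get)
print(best, scores)
```

With these invented numbers the high-rate, low-jitter network wins despite its higher price; changing the weights per application class (conversational vs. streaming vs. interactive) shifts the ranking, which is exactly the knob the summarized method turns.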
This document discusses application requirements for computer networks. It defines application requirements as requirements determined from application information, experience, or testing that represent what is needed by applications. It provides examples of different types of application requirements like web page requests, database transactions, messaging protocols, and API calls. It also discusses how to classify applications based on service and performance requirements into categories like mission-critical, rate-critical, and real-time/interactive applications. Finally, it describes the different tiers in an application system including the web, application, and database tiers.
Performance Analysis of Data Traffic Offload Scheme on Long Term Evolution (L...TELKOMNIKA JOURNAL
Long Term Evolution (LTE) is one of the new mobile technologies being developed by 3GPP. LTE is widely used because it provides high data rates. With so much traffic sent over LTE, several users do not get good Quality of Service (QoS). Traffic diversion is needed to increase QoS, and it can be done by offloading data from LTE to a Wi-Fi network. This paper uses the 802.11ah standard to evaluate the Wi-Fi network. IEEE 802.11ah has a 1000-meter coverage area and an energy-efficiency mechanism, and is proposed for M2M in 5G technology. Prior research has shown that traffic diversion through offloading can increase network performance. The contribution of this paper is to evaluate the impact of traffic offload between LTE and the IEEE 802.11ah standard. The paper proposes two scenarios, increasing the number of users and increasing user mobility speed, to evaluate throughput and delay before and after the offload process. The simulations are run in Network Simulator-3. We conclude that network performance after offloading is better in every scenario. In the increasing-number-of-users scenario, throughput increases by 29.08% and delay decreases by 8.12%. The increasing-mobility-speed scenario obtains a throughput increase of 37.57% and a delay decrease of 27.228%.
This document discusses the need for network simulation tools to test telecom network components before they are deployed. It describes the key requirements for building an efficient simulation tool that can accurately model a complex telecom network, including 3G and UMTS networks. Specifically, it discusses the need to generate realistic traffic patterns and loads using semi-Markovian models, to model protocols and interfaces, and to consider physical-layer factors like RF path loss and power-control mechanisms. It also outlines the overall architecture of a packet load generator tool to simulate network elements and evaluate their performance and capacity under different traffic scenarios.
Call Admission Control (CAC) with Load Balancing Approach for the WLAN NetworksIJARIIT
Cell migrations take place between different network operators and require significant information exchange between the operators to handle migratory users. New user registration requires pre-shared information from the user's equipment, which enables user recognition before the new user is registered on the network. In this thesis, the proposed model aims at developing a new call admission control mechanism with sub-channel assignment. The basic purpose of the proposed model is to increase the number of users over the given cell units, which is realized by assigning sub-channels to the users of the network. The model addresses the issue by assigning dual sub-channels over a single communication channel. It also handles minimum-resource users by incorporating a load-balancing approach over the given network segment: the load-balancing approach shares the load of an overloaded cell with the cell that has the lowest resource utilization. The model's performance has been evaluated in various scenarios and over all of the BTS nodes, with results obtained in the form of resource utilization, network load, transmission delay, consumed bandwidth and data loss. The results show the efficiency obtained by using the proposed call admission control (CAC) along with the new load-balancing mechanism, and the robustness of the model in handling cell-overloading factors.
The document discusses network models and compares the OSI model and TCP/IP model. It provides details on the layers of the OSI model including the 7 layers from physical to application layer. It describes the functions of each layer such as physical dealing with raw bit transmission, data link framing bits into frames, network routing packets, transport ensuring reliable data delivery, session controlling connections, presentation translating between systems, and application providing user interfaces. It also summarizes the similarities and differences between the OSI and TCP/IP models.
This document discusses LTE Advanced and WiMAX2 (IEEE 802.16m) technologies and the use of relay stations to improve network performance. It aims to cost-effectively deploy relay stations in an LTE Advanced network while enhancing quality of service. The author outlines objectives to study different relay station deployment methods and techniques to minimize costs and improve metrics like throughput, delay and network load. The document also describes relay station types, operations and techniques like amplify-and-forward, decode-and-forward and compress-and-forward. The research methodology involves identifying the problem, conducting literature review, simulation-based testing and analyzing results.
ENHANCING AND MEASURING THE PERFORMANCE IN SOFTWARE DEFINED NETWORKINGIJCNCJournal
Software Defined Networking (SDN) is a challenging chapter in today's networking era. It is a network design approach that enables the network to be controlled, or "programmed", intelligently and centrally using software applications. SDN is a promising technology that offers a better strategy for delivering Quality of Service (QoS) than present communication systems. SDN fundamentally changes the behavior and management of network devices through a single high-level program. It separates the network control and forwarding functions, enabling the network control to become directly programmable, and it provides more functionality and more flexibility than traditional networks. A network administrator can easily shape traffic without touching the individual switches and services in a network. The main technologies for implementing SDN are the separation of the data plane and control plane, and network virtualization through programmability. Response time is the total amount of time in which the system responds to a user, and throughput is how fast a network can send data. In this paper, we design a network through which we measure response time and throughput, comparing Real-time Online Interactive Applications (ROIA), a multiple packet scheduler, and NOX.
The document presents information about the Open Systems Interconnection (OSI) model. The OSI model describes how data moves through seven layers of a network from one device to another. It was the first standard model for network communications and was adopted by all major computer and telecommunication companies in the early 1980s. The seven layers are the physical, data link, network, transport, session, presentation, and application layers. Each layer has a specific role to play and the layers work together to successfully transmit data between devices on a network.
The document provides an overview of computer network models and layers. It discusses the OSI reference model and TCP/IP model. The OSI model has 7 layers - application, presentation, session, transport, network, data link, and physical. Each layer has a specific role and the layers work together to transmit data between applications on different devices. The TCP/IP model has 4 layers - application, transport, internet, and link. It then provides details on the functions and protocols used at each individual layer of the OSI model.
Similar to A3: application-aware acceleration for wireless data networks (20)
Designing SSD-friendly Applications for Better Application Performance and Hi...Zhenyun Zhuang
This document discusses how applications can be designed to take advantage of the unique characteristics of solid state drives (SSDs) in order to improve application performance, storage input/output (IO) efficiency, and SSD lifespan. It proposes nine SSD-friendly application design changes and explains how they can result in better application performance by fully utilizing SSDs' internal parallelism, more efficient storage IO by reducing write amplification, and longer SSD lifespan by decreasing write amplification.
Optimized Selection of Streaming Servers with GeoDNS for CDN Delivered Live S...Zhenyun Zhuang
This document proposes a new DNS design called Sticky-DNS to optimize server selection for CDN-delivered live streaming. Sticky-DNS aims to minimize CDN transit costs while maintaining good viewer experience. Unlike traditional GeoDNS which selects the nearest origin server to an edge server, Sticky-DNS considers the full ingest-origin and origin-edge paths to potentially select a non-nearest origin server that results in lower overall transit costs. It does this by maintaining cost values for all server pairs and selecting origins to serve edges in a way that minimizes total path costs. For less popular streams, origins are chosen based on end-to-end path lengths, while for popular streams Sticky-DNS adapts to encourage reuse
A Distributed Approach to Solving Overlay Mismatching ProblemZhenyun Zhuang
This document proposes an algorithm called Adaptive Connection Establishment (ACE) to address the topology mismatch problem between the logical overlay network and physical underlying network in unstructured peer-to-peer systems. ACE builds a minimum spanning tree among each source node and its neighbors within a certain diameter, optimizes connections not on the tree to reduce redundant traffic, while retaining search scope. It evaluates tradeoffs between topology optimization and information exchange overhead by changing the diameter. Simulation results show ACE can significantly reduce unnecessary P2P traffic by efficiently matching the overlay and physical network topologies.
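The tree-building step of an ACE-like optimization can be sketched with a standard Prim's minimum spanning tree over a source and its nearby neighbors (link costs below are invented for illustration; ACE's neighbor discovery and diameter control are omitted):

```python
import heapq

# A source S builds an MST over itself and neighbors A..D using measured
# link costs; overlay connections that are not on the tree become
# candidates for pruning, reducing redundant P2P traffic.
edges = {  # undirected link costs (made-up values)
    ("S", "A"): 2, ("S", "B"): 5, ("A", "B"): 1,
    ("A", "C"): 4, ("B", "C"): 3, ("C", "D"): 2, ("S", "D"): 6,
}

def prim_mst(nodes, edges, root="S"):
    adj = {n: [] for n in nodes}
    for (u, v), w in edges.items():
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    seen, tree = {root}, []
    heap = list(adj[root])
    heapq.heapify(heap)
    while heap and len(seen) < len(nodes):
        w, u, v = heapq.heappop(heap)   # cheapest edge leaving the tree
        if v in seen:
            continue
        seen.add(v)
        tree.append((u, v, w))
        for e in adj[v]:
            heapq.heappush(heap, e)
    return tree

nodes = {"S", "A", "B", "C", "D"}
tree = prim_mst(nodes, edges)
kept = {frozenset((u, v)) for u, v, _ in tree}
pruned = [e for e in edges if frozenset(e) not in kept]
print("tree:", tree)
print("prune candidates:", pruned)
```

The trade-off the paper evaluates shows up here as the neighborhood diameter: a larger diameter yields a better-matched tree but costs more information exchange to learn the link costs.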
Enhancing Intrusion Detection System with Proximity InformationZhenyun Zhuang
This document proposes PAIDS, a Proximity-Assisted Intrusion Detection System that identifies unknown worm outbreaks by leveraging proximity information of compromised hosts. PAIDS operates independently from existing signature-based and anomaly-based IDS approaches. It observes that compromised hosts tend to cluster geographically and remain active for long periods, allowing proximity to infected machines to indicate higher infection risk. The document motivates PAIDS based on limitations of other IDSes and clustered/long-term nature of worm spread. It then outlines PAIDS design, deployment model, software architecture, and key components for detecting outbreaks using proximity information.
SLA-aware Dynamic CPU Scaling in Business Cloud Computing EnvironmentsZhenyun Zhuang
IEEE CLOUD 2015
Modern cloud computing platforms (e.g., Linux on Intel CPUs) feature the ACPI (Advanced Configuration and Power Interface) mechanism, which dynamically scales CPU frequencies/voltages based on workload intensity. With this feature, the CPU frequency is reduced when the workload is relatively light in order to save energy, and increased when the workload intensity is relatively high.

In business cloud computing environments, software products/services often need to "scale out" to multiple machines to form a cluster that achieves a pre-defined aggregated performance goal (e.g., an SLA-devised throughput). To reduce business operation cost, minimizing the provisioned cluster size is critical. However, as we show in this work, the behavior of ACPI in today's modern OS may result in more machines being provisioned, and hence higher business operation cost.

To deal with this problem, we propose an SLA-aware CPU scaling algorithm based on the business SLA (Service Level Agreement). The proposed design rationale and algorithm are a fundamental rethinking of how ACPI mechanisms should be implemented in business cloud computing environments. Contrary to current forms of ACPI, which simply adapt CPU power levels based only on workload intensity, the proposed SLA-aware algorithm is primarily driven by current application performance relative to the pre-defined SLA. Specifically, the algorithm targets achieving the pre-defined SLA as the top-level goal, while saving energy as the second-level goal.
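The contrast with utilization-driven ACPI scaling can be sketched as follows; the frequency steps, thresholds, and control rule are illustrative inventions, not the paper's actual algorithm:

```python
# Sketch of SLA-aware frequency selection: the next CPU frequency is
# chosen from observed application throughput relative to the SLA
# target, not from raw CPU utilization. All values are hypothetical.
FREQ_STEPS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]  # made-up P-states

def sla_aware_step(cur_freq, throughput, sla_target, margin=0.05):
    """Return the next CPU frequency given throughput vs. the SLA."""
    i = FREQ_STEPS_GHZ.index(cur_freq)
    if throughput < sla_target:
        # SLA at risk: scale up one step (top-level goal).
        return FREQ_STEPS_GHZ[min(i + 1, len(FREQ_STEPS_GHZ) - 1)]
    if throughput > sla_target * (1 + margin):
        # Comfortable headroom: scale down to save energy (second-level goal).
        return FREQ_STEPS_GHZ[max(i - 1, 0)]
    return cur_freq  # within the margin band: hold steady

# Under-target throughput pushes the frequency up one step.
print(sla_aware_step(2.0, throughput=900, sla_target=1000))  # -> 2.4
```

The key difference from a utilization governor is the input signal: a lightly utilized CPU that is still missing its SLA scales up here, whereas a busy CPU that comfortably exceeds the SLA is allowed to scale down.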
Optimizing JMS Performance for Cloud-based Application ServersZhenyun Zhuang
IEEE CLOUD 2012
http://dl.acm.org/citation.cfm?id=2353798
Many business-oriented services will be gradually offered in the Cloud. Java Message Service (JMS) is a critical messaging technology in Java-based business applications, particularly those based on the Java Enterprise Edition (Java EE) open standard. Maintaining high performance in the horizontally scaled, elastic cloud environment is critical to the success of these business applications. In this paper, we present practical considerations in optimizing JMS performance for cloud deployment, where some of the findings may also serve to improve the design of JMS containers so they adapt well to cloud computing. Our work also includes a performance evaluation of the proposed strategies.
Capacity Planning and Headroom Analysis for Taming Database Replication LatencyZhenyun Zhuang
ACM ICPE 2015
http://dl.acm.org/citation.cfm?id=2688054
Internet companies like LinkedIn handle a large amount of incoming web traffic. Events generated in response to user input or actions are stored in a source database. These database events feature the typical characteristics of Big Data: high volume, high velocity and high variability. Database events are replicated to isolate the source database and to form a consistent view across data centers. Ensuring a low replication latency of database events is critical to business values. Given the inherent characteristics of Big Data, minimizing the replication latency is a challenging task.

In this work we study the problem of taming database replication latency through effective capacity planning. Based on our observations of LinkedIn's production traffic and the various parts at play, we develop a practical and effective model to answer a set of business-critical questions related to capacity planning: future traffic rate forecasting, replication latency prediction, replication capacity determination, replication headroom determination and SLA determination.
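As a toy illustration of the capacity and headroom questions listed above (all numbers invented; the paper's actual model accounts for traffic forecasts, variability, and SLAs in far more detail):

```python
import math

# Given a forecast peak event rate and a per-instance replication
# capacity, size the replication tier so no instance exceeds a safe
# utilization cap, then report the remaining headroom at peak.

def required_instances(peak_events_per_sec, capacity_per_instance,
                       utilization_cap=0.7):
    """Instances needed so each stays under the utilization cap."""
    usable = capacity_per_instance * utilization_cap
    return math.ceil(peak_events_per_sec / usable)

def headroom(peak_events_per_sec, instances, capacity_per_instance,
             utilization_cap=0.7):
    """Fraction of safely usable capacity still unused at peak."""
    usable = instances * capacity_per_instance * utilization_cap
    return 1.0 - peak_events_per_sec / usable

n = required_instances(peak_events_per_sec=50_000,
                       capacity_per_instance=12_000)
print(n, round(headroom(50_000, n, 12_000), 3))
```

Running the forecast rate through `required_instances` answers the capacity-determination question, and `headroom` answers how much traffic growth the current provisioning can absorb before the replication latency SLA is threatened.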
OS caused Large JVM pauses: Deep dive and solutionsZhenyun Zhuang
We have found that many large JVM GC pauses are caused not by the application itself, but by interactions between the JVM and the OS. We characterize these issues into three scenarios: (1) application startup; (2) application steady state under memory pressure; and (3) application steady state with heavy IO. The root causes are quite complicated, so we share our experiences with them.
(This slide deck is for the QCon Beijing 2016 talk.)
Wireless memory: Eliminating communication redundancy in Wi-Fi networksZhenyun Zhuang
This document describes a proposed system called Wireless Memory (WM) to eliminate communication redundancy in Wi-Fi networks. The authors first analyze real Wi-Fi traces from multiple buildings and observe significant redundancy both between users and over time for individual users. Based on these insights, they propose WM, which equips access points and clients with memory to store transmitted data. When sending new data, the access point can retrieve stored data from the client's memory by sending a reference rather than the full data, reducing transmission size. The authors evaluate WM through simulations using the collected traces and find it can improve network throughput by up to 93% in some scenarios by eliminating redundancy.
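The reference-based transfer idea behind Wireless Memory can be sketched as follows; fixed-size chunking and SHA-1 digests are our simplifying assumptions rather than details from the paper:

```python
import hashlib

# Both endpoints keep a memory of previously transferred chunks. When a
# chunk repeats, the sender ships a short digest ("reference") instead
# of the full bytes, shrinking the over-the-air transmission.
CHUNK = 64  # bytes per chunk (arbitrary choice for this sketch)

class Endpoint:
    def __init__(self):
        self.memory = {}  # digest -> chunk bytes

    def encode(self, data: bytes):
        out = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            d = hashlib.sha1(chunk).digest()
            if d in self.memory:
                out.append(("ref", d))        # 20-byte reference
            else:
                self.memory[d] = chunk
                out.append(("raw", chunk))    # full chunk, sent once
        return out

    def decode(self, msgs):
        parts = []
        for kind, payload in msgs:
            if kind == "raw":
                self.memory[hashlib.sha1(payload).digest()] = payload
                parts.append(payload)
            else:  # reference: look the chunk up in local memory
                parts.append(self.memory[payload])
        return b"".join(parts)

ap, client = Endpoint(), Endpoint()
page = b"HTTP/1.1 200 OK\r\n" + b"A" * 200
first = ap.encode(page)
second = ap.encode(page)   # all chunks now hit the AP's memory
assert client.decode(first) == page
assert client.decode(second) == page
print(sum(1 for k, _ in second if k == "ref"),
      "of", len(second), "chunks sent as references")
```

The repeated transfer is reduced to references, which mirrors both redundancy sources in the paper: repetition over time for one user, and repetition across users sharing the same access point.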
Improving energy efficiency of location sensing on smartphonesZhenyun Zhuang
The document proposes an adaptive location-sensing framework to improve energy efficiency on smartphones running location-based applications. The framework uses four design principles: substitution replaces GPS with less power-intensive location services when possible; suppression avoids unnecessary GPS use through sensors like accelerometers; piggybacking synchronizes location requests from multiple apps; and adaptation adjusts location sensing based on battery level. An implementation on Android phones reduces GPS use by up to 98% and improves battery life by up to 75%.
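A toy decision rule combining the four principles might look like this (the thresholds and return labels are invented for illustration, not the framework's actual policy):

```python
# Sketch of an adaptive location-sensing policy:
#  - suppression: skip sensing when the accelerometer says stationary
#  - piggybacking: one fix serves several pending app requests
#  - substitution/adaptation: fall back to cheaper network positioning
#    when battery is low or accuracy demands are loose
def choose_location_source(accuracy_need_m, moving, battery_pct,
                           pending_requests):
    if not moving:
        return "suppress"    # reuse the last fix; device hasn't moved
    if pending_requests > 1:
        return "piggyback"   # answer all queued requests with one fix
    if battery_pct < 20 or accuracy_need_m >= 500:
        return "network"     # substitution: cheaper cell/Wi-Fi positioning
    return "gps"             # only high-accuracy, well-powered cases pay for GPS

print(choose_location_source(50, moving=True, battery_pct=80,
                             pending_requests=1))
```

Each branch corresponds to one of the summarized design principles; the large energy savings reported come precisely from how rarely the final `"gps"` branch needs to fire.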
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
The 6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology and applications of Machine Learning.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been referred to as the "New Great Game." This research centres on this power struggle, considering geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil politics, and traditional and non-traditional security are explored and explained. Using Mackinder's Heartland theory, Spykman's Rimland theory, and Hegemonic Stability theory, it examines China's role in Central Asia. The study adheres to an empirical epistemological method and takes care to remain objective, critically analyzing primary and secondary research documents to elaborate the role of China's geo-economic outreach in Central Asian countries and its future prospects. According to this study, China is seeing significant success in trade, pipeline politics, and gaining influence over other governments, thanks to the effective use of key instruments such as the Shanghai Cooperation Organisation and the Belt and Road Economic Initiative.
A3: application-aware acceleration for wireless data networks
A3: Application-Aware Acceleration for Wireless Data Networks∗

Zhenyun Zhuang†, Tae-Young Chang†, Raghupathy Sivakumar†§, and Aravind Velayutham§

†Georgia Institute of Technology, Atlanta, GA 30332, USA
§Asankya Networks, Inc., Atlanta, GA 30308, USA
zhenyun@cc.gatech.edu, {key4078,siva}@ece.gatech.edu, vel@asankya.com
ABSTRACT
A tremendous amount of research has been done toward improving transport layer performance over wireless data networks. The improved transport layer protocols are typically application-unaware. In this paper, we argue that the behavior of applications can and does dominate the actual performance experienced. More importantly, we show that for practical applications, application behavior all but completely negates any improvements achievable through better transport layer protocols. In this context, we motivate an application-aware, but application-transparent, solution suite called A3 (application-aware acceleration) that uses a set of design principles realized in an application-specific fashion to overcome the typical behavioral problems of applications. We demonstrate the performance of A3 through emulations using realistic application traffic traces.
Categories and Subject Descriptors
C.2.1 [Network Architecture and Design]: Wireless
Communication; C.2.2 [Network Protocols]: Applications;
D.4.8 [Performance]: Simulation
General Terms
Algorithms, Design, Performance
Keywords
Wireless Networks, Application-Aware Acceleration
1. INTRODUCTION
A significant amount of research has been done toward the development of better transport layer protocols that can alleviate the problems the transmission control protocol (TCP) exhibits in wireless environments [9, 18, 11, 12]. Such protocols, and several more, have novel and unique design components that are indeed important for tackling the unique characteristics of wireless environments. However, in this paper we ask a somewhat orthogonal question in the very context the above protocols were designed for: How does the application's behavior impact the performance deliverable to wireless users?

∗This work was funded in part by NSF grants CNS-0519733, CNS-0519841, ECS-0428329 and CCR-0313005.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
MobiCom'06, September 23–26, 2006, Los Angeles, California, USA.
Copyright 2006 ACM 1-59593-286-0/06/0009 ...$5.00.
Toward answering this question, we explore the impact of typical wireless characteristics on the performance experienced by very popularly used real-world applications, including the File Transfer Protocol (FTP), the Common Internet File System protocol (CIFS) [1], the Simple Mail Transfer Protocol (SMTP) [7], and the Hyper-Text Transfer Protocol (HTTP) [4]. Through our experiments, we arrive at an impactful result: except for FTP, which has a simple application layer behavior, for all other applications considered, not only is the performance experienced when using vanilla TCP-NewReno much worse than for FTP, but the applications see negligible or no performance enhancements even when they are made to use the wireless-aware protocols.

We delve deeper into the above observation and identify several common behavioral characteristics of the applications that fundamentally limit the performance achievable when operating over wireless data networks. Such characteristics stem from the design of the applications, which is typically tailored for operation in substantially higher-quality local-area network (LAN) environments. Hence, we pose the question: if application behavior is a major cause of performance degradation, as observed through the experiments, what can be done to improve end-user application performance?
In answering the above question, we present a new solution called application-aware acceleration (A3), which is a middleware that offsets the typical behavioral problems of real-life applications through an effective set of principles and design elements. We present A3 as a platform solution requiring entities at both ends of the end-to-end communication, but also describe a variation of A3 called A3•, which is a point solution but is not as effective as A3. One of the keystone aspects of the A3 design is that it is application-aware, but application-transparent.

The rest of the paper is organized as follows: Section 2 presents the motivation results for A3. Section 3 presents the key design elements underlying the A3 solution. Section 4 describes the realization of A3 for specific applications. Section 5 evaluates A3. Section 6 discusses related works, and Section 7 concludes the paper.
2. MOTIVATION
The focus of this work is entirely on applications that require reliable and in-sequence delivery. In other words, we consider only applications that are traditionally developed with the assumption of using the TCP transport layer.
2.1 Evaluation Model
We now briefly present the setting and methodology em-
ployed for the results presented in the rest of the section.
Applications: For the results presented in this section, we
consider four different applications. Besides FTP, the appli-
cations are: (i) CIFS - The Common Internet File System
is a platform independent network protocol used for shar-
ing files, printers, and other communications between com-
puters. While originally developed by Microsoft, CIFS is
currently an open technology that is used for all Windows
workgroup file sharing, NT printing, and the Linux Samba
server1
. (ii) SMTP - the simple mail transfer protocol is used
for the exchange of e-mails either between mail servers, or
between a client and its server. Most e-mail systems that
use the Internet for communication use SMTP. (iii) HTTP
- the hypertext transfer protocol is the underlying protocol
used by the World Wide Web.
Traffic generator: We use the IxChariot to generate accu-
rate application specific traffic patterns. IxChariot[13] is a
commercial tool for emulating most real-world applications.
It is comprised of the IxChariot console (for control), per-
formance end-points (for traffic generation and reception),
and IxProfile (for characterizing performance).
Testbed: We use a combination of a real test-bed and em-
ulation to construct the test-bed for the results presented in
the section. Since IxChariot is a a software tool that gen-
erates actual application traffic, it is hosted on the sender
and the receiving machines shown in Figure 12. The path
from the sender to the receiver goes through a node running
the ns2 network simulator in emulation mode. The net-
work emulator is configured to represent desired topologies
including the different types of wireless technologies. More
information on the test-bed is presented in Section 5.
Transport protocols: Since we consider wireless LANs
(WLAN), wireless WANs (WWAN), and wireless satellite
area networks (SAT), we use transport layer protocols pro-
posed in related literature for each of these environments.
Specifically, we use TCP-ELN (NewReno with explicit loss
notification)[9], WTCP (Wide-area Wireless TCP)[18], and
STP (Satellite transport protocol)[11] as enhanced transport
protocols for WLANs, WWANs, and SATs respectively.
Parameters: We use average RTT values of 5 ms, 200 ms,
and 1000 ms, average loss rates of 1 %, 8 %, and 3 %, and
average bandwidths of 5 Mbps, 0.1 Mbps, and 1 Mbps for
WLANs, WWANs, and SATs respectively. We use appli-
cation perceived throughput as the key metric of interest.
Each data point is taken as the average of 10 experimental runs.
1 Samba uses SMB, on which CIFS is based.
2.2 Quantitative Analysis
Figure 1(a) presents the performance results for FTP un-
der varying loss conditions in WLANs, WWANs, and SAT
environments. The tailored protocols uniformly show con-
siderable performance improvements. The results illustrate
that the design of the enhancement protocols TCP-ELN, WTCP, and STP is sufficient to deliver considerable performance improvements in wireless data networks when FTP is the application. In the rest of
the section, we discuss the impact of using such protocols
for other applications such as CIFS, SMTP, and HTTP.
Figures 1(b)-(d) show the performance experienced by
CIFS, SMTP, and HTTP respectively under varying loss
conditions for the different wireless environments. It can be
observed that the performance improvements demonstrated
by the enhancement protocols for FTP do not carry over to
these three applications. It can also be observed that the
maximum performance improvement delivered by the en-
hancement protocols is less than 5 % across all scenarios.
While the trend evident from the results discussed above
is that the enhanced wireless transport protocols do not provide any meaningful performance improvements for three very popular applications, we argue in the rest of the section that this
is not due to any fundamental limitations of the transport
protocols themselves, but due to the specifics of the behavior
of the three applications under consideration.
2.3 Impact of Application Behavior
We now explain the lack of performance improvements
when using enhanced wireless transport protocols with ap-
plications such as CIFS, SMTP, and HTTP. We use the con-
ceptual application traffic patterns for the three applications in Figure 2 for most of our reasoning [1, 7, 4].
2.3.1 Thin session control messages
All three applications, as can be observed in Figures 2(a)-
(c), use thin session control message exchanges before the
actual data transfer occurs, and thin request messages dur-
ing the actual data transfer phase as well. We use the term
“thin” to refer to the fact that such messages are almost
always contained in a single packet of MSS (maximum seg-
ment size).
The observation above has two key consequences: (i)
When a loss occurs to a thin message, an entire round-trip
is taken to recover from such a loss. When the round-trip
time is large like in WWANs and SATs, this can result in
considerably inflating the overall transaction time for the ap-
plications. Note that a loss during the data phase will not
have such an adverse impact, as the recovery of that loss can
be multiplexed with other new data transmissions, whereas
for thin message losses, no other traffic can be sent anyway.
(ii) Most protocols, including TCP, rely on the arrival of
out-of-order packets to infer packet losses and hence trigger
loss recovery. In the case of thin messages, since there are
no packets following the lost message, the only means for
loss detection is the expiry of the retransmission timer. Re-
transmission timers typically have coarse minimum values
to keep overheads low. TCP, for example, uses a minimum Retransmission Time Out (RTO) value of one second.2
2 While newer Linux releases have lower minimum RTO values, they are still on the order of several hundred milliseconds.
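The cost of this timer-only recovery for thin messages can be sketched with a simple model (a back-of-the-envelope illustration; the helper name is ours, and exponential RTO backoff is deliberately ignored):

```python
def expected_thin_msg_delay(rtt_s: float, loss_rate: float,
                            min_rto_s: float = 1.0) -> float:
    """Expected completion time of a single thin (one-packet) exchange.

    A lost thin message is only detected by the retransmission timer,
    so each loss adds at least min_rto_s. With independent losses at
    rate p, the expected number of timeouts before success is p/(1-p).
    """
    p = loss_rate
    return rtt_s + (p / (1.0 - p)) * min_rto_s

# On a SAT path (RTT = 1 s, 3% loss), the 1 s RTO floor inflates the
# mean exchange time only slightly, but each individual timeout costs
# a full extra second on top of the RTT - doubling that transaction.
base = expected_thin_msg_delay(1.0, 0.0)     # loss-free baseline
lossy = expected_thin_msg_delay(1.0, 0.03)   # 3% loss
```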
[Figure 1 plots throughput against loss rate for (a) FTP, (b) CIFS, (c) SMTP, and (d) HTTP, comparing NewReno with TCP-ELN (WLAN), WTCP (WWAN), and STP (SAT) in each environment.]
Figure 1: Impact of Wireless Environment Characteristics
2.3.2 Batched data fetches
Another characteristic of the applications, especially CIFS
and HTTP, is that although the total amount of data to
be fetched can be large, the data transfer is performed in
batches, with each batch including a “request-response” ex-
change. CIFS uses its request-data-block message to send
the batched requests, with each request typically requesting
only 16 KB - 32 KB of data.
Such batched fetching of data has two implications for performance: (i) When the size of the requested data is
smaller than the Bandwidth Delay Product (BDP), there is
a gross underutilization of the available resources. Hence,
when the SAT network has a BDP of 128 KB, and CIFS
uses a 16 KB request size, the utilization is 12.5 %. (ii)
Independent of the size of each requested data batch, one RTT is spent sending the next request once the currently requested data arrives. When the RTT of the path is large, as in WWANs and SATs, this inflates the overall transaction time and hence lowers throughput.
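Both effects can be quantified with a small sketch (helper names are ours; the SAT numbers are the example from the text):

```python
import math

def link_utilization(batch_bytes: int, bdp_bytes: int) -> float:
    """Fraction of the pipe a single outstanding request can fill."""
    return min(1.0, batch_bytes / bdp_bytes)

def batched_throughput_bps(file_bytes: int, batch_bytes: int,
                           rtt_s: float, bw_bps: float) -> float:
    """Throughput when each batch costs one RTT plus its serialization
    time, with only one request-response exchange in flight at a time."""
    n_batches = math.ceil(file_bytes / batch_bytes)
    total_s = n_batches * (rtt_s + batch_bytes * 8 / bw_bps)
    return file_bytes * 8 / total_s

# SAT example from the text: a 128 KB BDP with 16 KB CIFS requests
# leaves the link only 12.5% utilized.
util = link_utilization(16 * 1024, 128 * 1024)
```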
2.3.3 Flow control bottlenecked operations
Flow control is an important function in communication
that helps in preventing the source from overwhelming the
receiver. In a mobile/wireless setting, flow control can kick in and become the bottleneck for the connection's progress for two reasons: (i) If the application on the mobile de-
vice reads slowly or is temporarily halted for some other
reason, the receiver buffer fills up and the source is even-
tually frozen till the buffer empties. (ii) When there are
losses in the network, and the receiver buffer size is of the
same order as the BDP (which is typically true), flow control
can prevent new data transmissions, even when techniques such as fast recovery are used, due to the unavailability of buffer space at the receiver. When flow control dominates in this fashion, the result is undesirably low throughput in wireless environments.
2.3.4 Other reasons
While the above discussed reasons are behavioral “acts
of commission” by the applications that result in lowered
performance, we now discuss two more reasons that can be
seen as behavioral “acts of omission”. These are techniques
that the applications could have used to address conditions
in a wireless environment, but do not.
Non-prioritization of data: For all three applications
considered, no explicit prioritization of data to be fetched is
performed, and hence all the data to be fetched are given
equal importance. However, for certain applications prior-
itizing data in a meaningful fashion can have a profound
impact on the performance experienced by the end-system
or user. For example, consider the case of HTTP used for
browsing on a small-screen PDA. When a webpage URL re-
quest is issued, HTTP fetches all the data for the webpage
with equal importance. However, the data corresponding to
the visible portion of the webpage on the PDA’s screen is
obviously of more importance and will have a higher impact
on the performance perceived by the end-user. By not leveraging such means of prioritizing data, HTTP's performance is dictated by the full page size and the low bandwidth of the wireless environment.
Non-use of data reduction techniques: Finally, another
issue is applications not using knowledge specific to their
content or behavior to employ effective data reduction tech-
niques. For example, consider SMTP: the “email vocabulary” of users has evolved over the last couple
of decades to be very independent of traditional “writing vo-
cabulary” and “verbal vocabulary” of the users. Hence, it is
an interesting question as to whether SMTP can use email
vocabulary based techniques to reduce the actual content
transferred between SMTP servers, or a SMTP server and
a client. Not leveraging such aspects proves more costly in wireless environments, where the baseline performance is poor to start with.
3. DESIGN
Since we have outlined several behavioral problems with
[Figure 2 depicts the client-server message exchanges for each application: (a) CIFS - NetBIOS session establishment, dialect negotiation, user login, resource connection, file open, and per-block data requests and responses, labeled CIFS-1 through CIFS-12; (b) SMTP - server greeting, HELO, MAIL FROM, RCPT TO, DATA, message body, end of data, and QUIT/221, labeled SMTP-1 through SMTP-14; (c) HTTP - repeated HTTP GET requests, 200 OK responses, and DATA transfers, labeled HTTP-1 through HTTP-4.]
Figure 2: Application Traffic Patterns
[Figure 3 plots (a) CIFS and FTP throughput against file size, (b) the number of CIFS requests against file size, and (c) SMTP throughput against loss rate for TCP NewReno versus the ideal.]
Figure 3: Motivation for TP and RAR
applications in Section 2, an obvious question to ask is:
“Why not change the applications to address these prob-
lems?” We believe that is indeed one possible solution.
Hence, we structure the presentation of the A3
solution into
two distinct components: (i) the key design elements or prin-
ciples that underlie A3
; and (ii) the actual realization of the
design elements for specific applications in the form of an op-
timization middleware that is application-aware, but appli-
cation transparent. The design elements generically present
strategies to improve application behavior and can be used
by application developers to improve performance by incor-
porating changes to the applications directly. In the rest of
this section, we outline the design of five principles in the
A3
solution.
3.1 Transaction Prediction
Transaction prediction (TP) is an approach to determinis-
tically predict future application data requests to the server,
and issue them ahead of time. Note that this is differ-
ent from techniques such as “prefetching” where content
is heuristically fetched to speed up later access, but is not
guaranteed to be used. In TP, A3
is fully aware of applica-
tion semantics, and knows exactly what data to fetch and
that the data will be used. TP will aid in conditions where
the BDP is larger than the default application batch fetch
size, and where the RTT is very large. Under both cases,
the overall throughput will improve when TP is used. Fig-
ure 3(a) shows the throughput performance of CIFS when
fetching files of varying sizes. It can be seen that the perfor-
mance is substantially lower than that of FTP, and this is
due to the batched fetching mechanism described in Section
2. Figure 3(b) shows the number of transactions it takes
CIFS to actually fetch a single file, and it can be observed
that the number of transactions increases linearly with file
size. Under such conditions, TP will“parallelize” the trans-
actions and hence improve throughput performance. Good
examples of applications that will benefit from using TP
include CIFS and HTTP for reasons outlined in Section 2.
3.2 Redundant and Aggressive Retransmissions
Redundant and aggressive retransmissions (RAR) is an
approach to protect thin session control and data request
messages better from losses. The technique involves recog-
nizing thin application messages, and using a combination
of packet level redundancy, and aggressive retransmissions
to protect such messages. RAR will help address both is-
sues with thin messages identified in Section 2. The redun-
dant transmissions reduce probability of message losses, and
the aggressive retransmissions that operate on tight RTT
granularity timeouts reduce the loss recovery time. The key challenge in RAR is to recognize thin messages in an
application-aware fashion. Note that only thin messages re-
quire RAR because of reasons outlined in Section 2. Regular
data messages should not be subjected to RAR both because
their loss recovery can be masked in the overall transaction
time by performing the recovery simultaneously with other
data packet transmissions, and because the overheads of per-
forming RAR will become untenable when applied to large
volume messages such as the data. Figure 3(c) shows the
throughput performance of SMTP under lossy conditions.
[Figure 4 plots (a) the cumulative data size against the number of screens for popular webpages, (b) SMTP throughput against application read rate, and (c) SMTP throughput against loss rate, the latter two comparing TCP NewReno with the ideal.]
Figure 4: Motivation for PF and IB
The 35 % drop in throughput for a loss-rate increase from 0 % to 3 % is far more dramatic than the 15 % drop that FTP suffers for the same loss-rate increase in Section 2. Typical applications that can benefit from RAR
include CIFS, SMTP, and HTTP.
3.3 Prioritized Fetching
Prioritized fetching (PF) is an approach to prioritize sub-
sets of data to be fetched as being more important than
others, and to fetch the higher priority data faster than the
lower priority data. A simple approach to achieve the dual
rate fetching is to use default TCP-like congestion control
for the high priority data, but use congestion control like
in TCP-LP[14] for low priority data. An important con-
sideration in PF is to devise a strategy to prioritize data
intelligently, and on the fly. Figure 4(a) shows the average
transfer size per screen for the top fifty accessed webpages
on the world-wide web [2]. It can be seen that nearly 80 % of the data (belonging to screen 2 and higher) does not directly impact the response time experienced by the user, and hence can be de-prioritized relative to the data pertaining to the
first screen. Note that the results are for a 1024x768 reso-
lution laptop screen, and will in fact be better for smaller
screen devices such as PDAs. Good examples of applications
that can benefit from PF include HTTP and SMTP.
3.4 Infinite Buffering
Infinite buffering (IB) is an approach that prevents flow
control from throttling the progression of a network connec-
tion terminating at the mobile wireless device. IB prevents
flow control from impacting performance by providing the
sender the impression of an infinite buffer at the receiver.
Secondary storage is used to realize such an infinite buffer, the rationale being that, once space opens up in the actual connection buffer, reading the data from secondary storage is faster than fetching it again from the sender over the wireless network. With typical hard-disk data transfer rates today around 250 Mbps [5],
the abovementioned rationale is well justified for wireless en-
vironments. Note that the trigger for using IB can be either the application reading slowly (or temporarily not reading from the connection buffer), or losses on the wireless
path. Figures 4(b)-(c) show the throughput performance of
SMTP under both conditions. It can be observed that for
both scenarios, the impact of flow control drastically lowers
performance compared to what is achievable. Due to lack
of space, in the rest of the paper we focus on IB specifically
in the context of the more traditional trigger for flow con-
trol - application reading bottleneck. Typical applications
that can benefit from IB include CIFS, SMTP, and HTTP
- essentially, any application that transfers multiple BDPs
worth of data at a time.
3.5 Application-aware Encoding
Unique Words   Total Words   Char. per Word   Bits per Email (Binary Coding)   Bits per Email (Simple AE)
1362           6383          6.22             3176                             664.6
Table 1: Averaged Statistics of 10 Email Folders
Application-aware encoding (AE) is an approach that uses
application specific information to better encode or com-
press data during communication. Traditional compression
tools such as zip operate on a given content in isolation
without any context for the application corresponding to
the content. AE, on the other hand, explicitly uses this
contextual information to achieve better performance. Note
that AE is not a better compression algorithm per se; rather, it is a better way of identifying the data sets to be operated on by a given compression algorithm. Table 1 shows
the average e-mail vocabulary characteristics of ten differ-
ent graduate students based on 100 emails sent by each per-
son during two weeks. It is interesting to see the following
characteristics in the results: (i) the e-mail vocabulary size
across the ten people is relatively small - a few thousand
words; and (ii) even a simple encoding involving this knowl-
edge will result in every word being encoded with only 10 -
12 bits, which is substantially lower than the 40 - 48 bits required by standard binary encoding. In Section 5, we
show that such vocabulary based encoding can considerably
outperform other standard compression tools such as zip as
well. Moreover, further benefits can be attained if more sophisticated compression schemes, such as Huffman encoding, are employed instead of simple binary encoding. Typical ap-
plications that can benefit from using AE include SMTP
and HTTP.
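The arithmetic behind Table 1 can be reproduced with a fixed-width vocabulary index, a sketch of what the text calls simple binary AE coding (the helper function is ours):

```python
import math

def bits_per_word(vocab_size: int) -> int:
    """Fixed-width index size for a vocabulary shared by sender and
    receiver: every word is replaced by a ceil(log2(V))-bit index."""
    return math.ceil(math.log2(vocab_size))

# The 1362 unique words from Table 1 fit in 11-bit indices; at 6.22
# characters per word, plain 8-bit character coding needs roughly 50
# bits per word instead - consistent with the 3176 vs. 664.6 bits per
# email reported in the table.
assert bits_per_word(1362) == 11
```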
4. SOLUTION
4.1 Deployment Model and Architecture
The A3
deployment model is shown in Figure 5. Since
A3
is a platform solution, it requires two entities at either
end of the communication session that are A3
aware. At
the mobile device, A3
is a software module that is installed
in user-space. At the server side, while A3
can be deployed
as a software module on all servers, a more elegant solution
would be to deploy a packet processing network appliance
that processes all content flowing from the servers to the
wide-area network. We assume the latter model for our
[Figure 6 shows (a) the A3 deployment on the mobile device, with the application recognition and A3 management layer, its recognition/acceleration rules and session table, and the TP, RAR, PF, IB, and AE components sitting between the application and the TCP/IP stack; and (b) the software architecture, with the A3 components in user space attached to the Local-In and Local-Out NetFilter hooks (alongside pre-routing, forward, and post-routing) in kernel space.]
Figure 6: A3 Architecture
discussions. However, note that A3
can be deployed in either
fashion as it is purely a software solution.
This deployment model will help in any communication
between a server behind the A3
server, and the mobile de-
vice running the A3
module. However, if the mobile device
communicates with a non A3
enabled server, one of two op-
tions exists: (i) As we discuss later in the paper, A3
can be
used as a point-solution with lesser effectiveness; or (ii) the
A3
server is brought closer to the mobile device, perhaps
within the wireless network provider’s access network. In
the rest of the paper, we don’t delve into the latter option.
However, we do revisit the point-solution mode of operation
of A3
.
We present an A3
implementation that resides in user-
space, and uses the NetFilter utility in Linux to capture outgoing and incoming traffic at the mobile device.
NetFilter is a Linux specific packet capture tool that has
hooks at multiple points in the Linux kernel. The A3
hooks
are registered at the Local-In and Local-Out stages of the
chain of hooks in NetFilter. While our discussion is Linux-centric, it can be mapped onto the Windows operating system through the Windows Packet Filtering interface, or wrappers such as PktFilter that are built
around the interface. Figure 6(a) shows the A3
deployment
on the mobile device using NetFilter.
The A3
software architecture is shown in Figure 6(b).
Since the design elements in A3
are to a large extent inde-
pendent of each other, a simple chaining of the elements in
an appropriate fashion results in an integrated A3
architec-
ture. The specific order in which the elements are chained in
the A3
realization is TP, RAR, PF, IB, and AE. While RAR
protects the initial session control exchanges and the data
requests, it operates on traffic after TP, given that TP can
generate new requests for data. PF manipulates the prior-
ity with which different requests are served, and IB ensures
that data responses are not throttled by flow control. Fi-
nally, AE compresses any data outgoing, and decompresses
any data incoming.
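A minimal sketch of this chaining, with stage behavior stubbed out so only the traversal order is visible (class and method names are ours, not the paper's):

```python
class Stage:
    """Stub for one A3 element; real stages would transform messages."""
    def __init__(self, name: str):
        self.name = name
    def outbound(self, msg: str) -> str:   # application -> network
        return f"{msg}>{self.name}"
    def inbound(self, msg: str) -> str:    # network -> application
        return f"{msg}>{self.name}"

class A3Chain:
    """Chains the elements in order TP, RAR, PF, IB, AE; the inbound
    path traverses the same chain in reverse."""
    def __init__(self, stages):
        self.stages = stages
    def to_network(self, msg: str) -> str:
        for s in self.stages:
            msg = s.outbound(msg)
        return msg
    def to_application(self, msg: str) -> str:
        for s in reversed(self.stages):
            msg = s.inbound(msg)
        return msg

chain = A3Chain([Stage(n) for n in ("TP", "RAR", "PF", "IB", "AE")])
```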
[Figure 5 depicts the deployment model: A3-enabled clients attach through an access point and the wireless access network to the Internet, behind which the A3 server fronts the application servers in the enterprise/content network.]
Figure 5: Deployment Model
4.2 Application Overviews
Since we describe the actual operations of the mechanisms
in A3
in the context of one of the three applications, we now
briefly comment on the specific message types involved in
typical transactions by those applications. We then refer to
the specific message types when describing the operations of
A3
subsequently.
Due to lack of space, instead of presenting all message
types again, we refer readers back to Figure 2 to observe the
message exchanges for the three applications. The labels
such as CIFS-x refer to particular message types in CIFS
and will be referred to in the A3
realization descriptions
that follow.
CIFS, also sometimes known as Server Message Block
(SMB), is a platform independent protocol for file shar-
ing. The typical message exchanges in a CIFS session are as
shown in Figure 2(a). Overall, TP manipulates the CIFS-11
message, RAR operates on CIFS-1 through CIFS-11, and
IB aids in CIFS-12.
SMTP[7] is Internet’s standard host-to-host mail trans-
port protocol and traditionally operates over TCP. The typ-
ical message exchanges in an SMTP session are shown in
Figure 2(b). Overall, RAR operates on SMTP-1 through SMTP-8 and SMTP-12 through SMTP-14, while IB and AE operate on SMTP-9 and SMTP-10.
The standard HTTP message exchanges are relatively simple, and typically consist of the messages shown in Fig-
ure 2(c). Typical HTTP sessions consist of multiple ob-
jects, including the original HTML file, and hence appear
as a sequence of overlapping exchanges of the above format.
Overall, RAR operates on HTTP-1, and PF and IB operate
on HTTP-3.
4.3 A3 Implementation
In the rest of the section, we take one design element
at a time, and walk through the algorithmic details of the
element with respect to a single application. Note that A3
is
an application-aware solution, and hence its operations will
be application specific. Since we describe each element in
isolation, we assume that the element resides between the
application and the network. In an actual usage of A3
, the
elements will have to be chained as discussed earlier.
4.3.1 Transaction Prediction
Figure 7 shows the flow chart for the implementation of
TP for CIFS. When A3
receives a message from the appli-
cation, it checks to see if the message is CIFS-9, and records
state for the file transfer in its TP-File-States data structure.
[Figure 7 shows the TP flow chart: application messages update the File-TP-States structure; requests for locally cached blocks are served from the local cache, while requests for new blocks are expanded into predicted requests (recorded in the Predicted-Request-States structure) toward the network; incoming data for predicted requests is stored in the local cache, and other data is passed to the application.]
Figure 7: Transaction Prediction
It then passes the message through. If the incoming message
was a request, TP checks to see if the request is for a locally
cached block, or for a new block. If the latter, it updates the
request for more blocks, stores information about the pre-
dicted requests generated in the Predicted-Request-States
data structure, and forwards the requests.
In the reverse direction, when data comes in from the network, TP checks to see if the data is for a predicted request. If so, it caches the data in secondary storage and updates its state information; otherwise, it forwards the data to the application.
The number of additional blocks to request is an interest-
ing design decision. For CIFS, A3
uses a TP request for the
entire file size, since the overall performance is not affected in any way, given that CIFS server semantics allow multiple simultaneous requests. The file size information
can be retrieved from the CIFS-10 message. If the incoming
message is for an earlier retrieved block, TP retrieves the
block from secondary storage, and provides it to the appli-
cation.
While CIFS servers accept multiple data requests from
the same client simultaneously, it is possible that for some
applications, the server might not be willing to accept mul-
tiple data requests simultaneously. In such an event, the
A3
server will let only one of the client requests go through
to the server at any point in time, and will send the other
requests one at a time once the previous requests are served.
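The TP request path described above can be sketched as follows (class, method, and structure names are ours; simple block-granularity bookkeeping stands in for the File-TP-States and Predicted-Request-States structures, and an in-memory dict stands in for the secondary-storage cache):

```python
class TransactionPredictor:
    """Sketch of TP for CIFS: on the first block request, predict and
    issue requests for the rest of the file; serve predicted data that
    has already arrived from a local cache."""

    def __init__(self, send_to_network):
        self.cache = {}           # block_id -> data fetched ahead of time
        self.predicted = set()    # block_ids already requested upstream
        self.send = send_to_network

    def on_app_request(self, block_id: int, file_blocks: int):
        if block_id in self.cache:
            # Earlier predicted data: serve it locally, no network RTT.
            return self.cache.pop(block_id)
        # New block: predict the remaining blocks of the file and issue
        # all requests up front so they are served in parallel.
        for b in range(block_id, file_blocks):
            if b not in self.predicted:
                self.predicted.add(b)
                self.send(b)
        return None               # data will arrive from the network

    def on_network_data(self, block_id: int, data: bytes):
        # Stage data for predicted requests until the application asks.
        self.cache[block_id] = data
```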
4.3.2 Redundant and Aggressive Retransmissions
Figure 8 shows the flow chart for the implementation of
[Figure 8 shows the RAR flow chart: thin messages from the application are duplicated, their state and send time recorded in the Thin-Message-States structure, and the copies staggered onto the network with a timer started; responses update the RTT estimate and stop the timer.]
Figure 8: Redundant and Aggressive Retransmissions
[Figure 9 shows the PF flow chart: requests from the application are checked, via an application plug-in for content requirements, for whether all requested contents are immediately required; required requests are fetched immediately, while the rest are split off and fetched slowly.]
Figure 9: Prioritized Fetching
RAR for CIFS. When A3
receives a message from the ap-
plication, it checks to see if it is a thin message. The way
A3
performs the check is to see if the message is one of the
messages between CIFS-1 and CIFS-11. All such messages
are interpreted as thin messages.
If the incoming message is not a thin message, it is let
through as-is. Otherwise, redundant copies of the message are created, the current time is noted, a retransmission alarm is started, and the copies are sent out in a staggered fashion. When a response arrives, the timestamp for the corresponding request is checked, and the RTT estimate is updated.
The message is then passed on to the application.
If the alarm expires for a particular thin message, the
message is again subjected to the redundant transmissions.
Successful arrivals of redundant copies of the same message
are filtered at the A3
server.
The key issues of interest in the RAR implementation
are: (i) How many redundant transmissions are performed?
Since packet loss rates in wireless data networks rarely ex-
ceed 10 %, even a redundancy factor of two (two addi-
tional copies created) reduces the effective loss-rate to 0.1 %.
Hence, A3
uses a redundancy factor of two. (ii) How should
the redundant messages be staggered? The answer to this
question lies in the specific channel characteristics experi-
enced by the mobile device. However, at the same time,
the staggered delay should not exceed the round-trip time
of the connection, as otherwise the mechanism would lose
its significance. Hence, A3 uses a staggering delay of RTT/10 between any two copies of the same message. This ensures that all copies are sent out at the mobile device within 20 % of the RTT duration. (iii) How is the aggressive timeout
value determined? Note that while the aggressive timeout
mechanism will help under conditions when all copies of a
message are lost, the total message overhead by such ag-
gressive loss recovery is negligible when compared to the
overall size of data transferred by the application. Hence,
A3
uses a timeout value of RTTavg + α, where α is a
small guard constant, and RTTavg is the average RTT ob-
served so far. This ensures that the timeout values are tight,
and at the same time the mechanism adapts to changes in
network characteristics.
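The three RAR parameter choices above can be summarized in a short sketch (helper names are ours; the redundancy factor, stagger, and timeout values follow the text):

```python
def rar_schedule(rtt_avg_s: float, redundancy: int = 2,
                 alpha_s: float = 0.05):
    """Return (send offsets for each copy, aggressive timeout).

    Copies are staggered RTT/10 apart, so all redundancy+1 copies
    leave within 20% of an RTT; the retransmission alarm fires at
    RTT_avg + alpha, where alpha is a small guard constant.
    """
    stagger = rtt_avg_s / 10.0
    offsets = [i * stagger for i in range(redundancy + 1)]
    timeout = rtt_avg_s + alpha_s
    return offsets, timeout

def effective_loss(p: float, redundancy: int = 2) -> float:
    """Assuming independent losses, the message is lost only if every
    copy is lost: p raised to the number of copies."""
    return p ** (redundancy + 1)

# 10% per-packet loss with two extra copies -> 0.1% effective loss,
# matching the figure quoted in the text.
```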
4.3.3 Prioritized Fetching
Figure 9 shows the flow chart for the implementation of
PF in the context of HTTP. Once again, the key goal in PF
for HTTP is to retrieve HTTP objects that are required for
the display of the visible portion of the webpage quickly in
relation to the objects on the page that are not visible.
Unlike the other mechanisms, PF cannot be implemented without some additional interaction with the application itself. Fortunately, browser applications have well-
[Figure 10 shows the IB flow chart: ACKs from the connection that advertise less than the maximum window are rewritten to the maximum, with buffer occupancy and maximum in-sequence ACK state tracked; incoming data overflows into the local cache when the connection buffer is full, with proxy ACKs generated toward the server; and the application is fed from the local cache as buffer space frees up.]
Figure 10: Infinite Buffering
defined interfaces for querying state of the browser includ-
ing the current focus window, scrolling information, etc.
Hence, the implementation of PF relies on a separate mod-
ule called the application state monitor (ASM) that is akin
to a browser plug-in to coordinate its operations.
When a message comes in from the application, PF checks
to see if the message is a request. If it is not, it is let through.
If it is, PF checks with the ASM to see if all the requested content is immediately required. ASM classifies the ob-
jects requested as being of immediate need (visible portion
of webpage) or as those that are not immediately required.
PF then sends out fetch requests immediately for the first
category of objects, and uses a low-priority fetching mecha-
nism for the remaining objects.
Since A3 is a platform solution, all PF has to do is inform the A3 server, through A3-specific piggybacked information, that certain objects are of low priority. The A3 server then
de-prioritizes the transmission of those objects in preference
to those that are of higher priority. Note that the relative prioritization is used not only among the content of a single end-device, but also across end-devices, to improve overall system performance. Approaches such as
TCP-LP[14] are candidates that can be used for the relative
prioritization, although A3
currently uses a simple priority
queuing scheme at the A3
server.
Note that while the ASM might classify objects in a particular fashion, changes in the application (e.g., the user scrolling down) will result in a corresponding re-prioritization of the objects. Hence, the ASM has the capability of gratuitously informing PF about priority changes. Such changes are immediately conveyed to the A3 server through appropriate requests.
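A minimal sketch of the PF request path described above. The ASM interface, the message format, the per-object requests, and the piggybacked priority field are simplifying assumptions for illustration, not the actual A3 implementation:

```python
# Hypothetical sketch of Prioritized Fetching (PF). The ASM query and the
# "a3_priority" marking are simplified stand-ins, not the real A3 interfaces.

HIGH, LOW = 0, 1  # priority classes: visible content vs. everything else

class ApplicationStateMonitor:
    """Classifies requested objects by whether they are currently visible."""
    def __init__(self, visible_objects):
        self.visible = set(visible_objects)

    def classify(self, obj):
        return HIGH if obj in self.visible else LOW

def handle_message(msg, asm, send):
    """PF intercepts outgoing messages; non-requests pass through unchanged."""
    if msg.get("type") != "request":
        send(msg)
        return
    # Simplification: issue one request per object so each can carry its own
    # priority, which the A3 server uses to de-prioritize LOW transmissions.
    for obj in msg["objects"]:
        send({"type": "request", "objects": [obj],
              "a3_priority": asm.classify(obj)})

asm = ApplicationStateMonitor(visible_objects={"index.html", "logo.png"})
sent = []
handle_message({"type": "request", "objects": ["index.html", "footer.png"]},
               asm, sent.append)
```

Re-prioritization on a scroll event would simply update the ASM's visible set and re-issue the markings for still-pending objects.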
4.3.4 Infinite Buffering
Figure 10 shows the flow chart for the implementation of IB in the context of SMTP. IB keeps track of the TCP connection status, and monitors all ACKs that are sent out by the TCP connection serving the SMTP application for SMTP-9 and SMTP-10. If the advertised window in an ACK is less than the maximum possible, IB immediately resets the advertised window to the maximum value, and appropriately updates its current knowledge of the connection's buffer occupancy and maximum in-sequence ACK.
Hence, IB prevents anything less than the maximum buffer size from being advertised. When data packets arrive from the network, IB receives the packets and checks whether the local disk-based cache is empty and the connection buffer can accommodate more packets. If both conditions are true, IB delivers the packet to the application. If the disk
Figure 11: Application-aware Encoding. [Flow chart: outgoing DATA from the application is compressed based on the application vocabulary (a common table, a user coding table, and a new-words space), marked as compressed, and sent to the network; incoming data marked as compressed is decompressed based on the same application vocabulary.]
cache is non-empty, the incoming packet is added directly to the cache. In this case, IB generates a proxy ACK back to the server. Then, if the connection buffer has space, packets are retrieved from the disk cache and given to the application until the buffer becomes full again. When the connection sends an ACK for a packet already ACKed by IB, IB suppresses the ACK. When the connection state is torn down for the CIFS application, IB resets its state accordingly.
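The IB behavior above can be approximated in a short sketch. The packet and ACK structures, the fixed maximum window, and the list-based disk cache are all illustrative assumptions rather than the A3 implementation:

```python
# Simplified sketch of Infinite Buffering (IB). Packet/ACK fields and the
# disk cache are stand-ins for the actual A3 data path.

MAX_WINDOW = 65535  # the window IB always presents to the sender (assumed)

class InfiniteBuffer:
    def __init__(self, app_buffer_size):
        self.app_buffer_size = app_buffer_size
        self.app_buffer = []   # connection buffer (delivered to application)
        self.disk_cache = []   # local disk-based overflow cache
        self.max_acked = 0     # highest sequence number proxy-ACKed by IB

    def on_outgoing_ack(self, ack):
        """Rewrite small advertised windows; suppress ACKs IB already sent."""
        if ack["seq"] <= self.max_acked:
            return None        # suppress: duplicates a proxy ACK
        ack["window"] = max(ack["window"], MAX_WINDOW)
        return ack

    def on_incoming_data(self, pkt):
        """Deliver directly when possible; otherwise spill to disk and proxy-ACK."""
        if not self.disk_cache and len(self.app_buffer) < self.app_buffer_size:
            self.app_buffer.append(pkt)
        else:
            self.disk_cache.append(pkt)
            self.max_acked = max(self.max_acked, pkt["seq"])  # proxy ACK
        # Drain the disk cache while the connection buffer has space.
        while self.disk_cache and len(self.app_buffer) < self.app_buffer_size:
            self.app_buffer.append(self.disk_cache.pop(0))

ib = InfiniteBuffer(app_buffer_size=2)
for seq in range(1, 5):          # four packets arrive from the network
    ib.on_incoming_data({"seq": seq})
```

The net effect is that the sender never sees flow control engage: the advertised window stays at the maximum, and overflow is absorbed by local storage.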
4.3.5 Application-aware Encoding
Figure 11 shows the flow chart for the implementation of AE for SMTP. When AE receives data (SMTP-9) from the SMTP application, it uses its application vocabulary table to compress the data, marks the message as compressed, and forwards it to the network. The marking is done to inform the A3 server about the need to perform decompression. Similarly, when incoming data arrives for the SMTP server and the data is marked as compressed, AE performs the necessary decompression.
The mechanisms used for the actual creation and manipulation of the vocabulary tables are of importance to AE. In A3, the SMTP vocabulary tables are created and maintained purely on a user pair-wise basis. Not only are the tables created in this fashion, but the data sets over which the vocabulary tables are created are also restricted to this pair-wise model. In other words, if A is the sender and B is the receiver, A uses its earlier emails to B as the data set on which the A-B vocabulary table is created, and then uses this table for encoding. B, having the data set already (since the emails were sent to B), can exactly recreate the table on its side and hence decode any compressed data. This precludes the need for exchanging tables periodically, and also takes advantage of changes in vocabulary sets that might occur based on the recipient.
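One plausible realization of the pair-wise table is a frequency-ordered word index derived deterministically from the shared email history, so that both sides compute identical tables without ever exchanging them. The sketch below is purely illustrative; the actual AE coding scheme and table layout are not specified at this level of detail here:

```python
# Illustrative pair-wise vocabulary coding for AE. Sender and receiver build
# the table from the same shared email history, so the tables match without
# being exchanged. The coding itself is a deliberate simplification.
from collections import Counter

def build_table(history):
    """Map each word in the shared history to an index, most frequent first."""
    counts = Counter(w for email in history for w in email.split())
    words = sorted(counts, key=lambda w: (-counts[w], w))  # deterministic order
    return {w: i for i, w in enumerate(words)}

def encode(text, table):
    # Known words become short indices; unknown words are sent literally.
    return [table[w] if w in table else w for w in text.split()]

def decode(tokens, table):
    inv = {i: w for w, i in table.items()}
    return " ".join(inv[t] if isinstance(t, int) else t for t in tokens)

history = ["please review the draft", "the draft looks fine"]
sender_table = build_table(history)    # built by A from its emails to B
receiver_table = build_table(history)  # rebuilt identically by B

msg = "the draft review is done"
assert decode(encode(msg, sender_table), receiver_table) == msg
```

Because the construction is deterministic over a data set both parties already hold, a vocabulary update is simply a re-run of the builder over the extended history on each side.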
4.4 A3 Point Solution - A3•
While the A3 deployment model assumed so far is a platform model requiring participation by A3-enabled devices at both the client and server ends, in this section we describe how A3 can be used as a point solution, albeit with somewhat limited capabilities. We refer to the point-solution version of A3 as A3•.
Of the five design elements in A3, the only design element for which the platform model is mandatory is the application-aware encoding mechanism. Since compression or encoding is an end-to-end process, A3• cannot be used with AE. However, each of the other four principles can be
Figure 12: Evaluation Network Topology. [AppEm (client) and AppEm (server) end-hosts, each fronted by an A3-Em module implementing AE, IB, PF, RAR, and TP, connected through WNetEm, the ns2-based wireless network emulator.]
employed with minimal changes in A3•.
TP involves the generation of predictive data requests, and hence can be performed in A3• as long as the application server can accept multiple simultaneous requests. For CIFS and HTTP, the servers do accept simultaneous requests. IB is purely a flow-control avoidance mechanism, and can be realized in A3•. RAR involves redundant transmissions of messages, and hence can be implemented in A3• as long as the application servers are capable of filtering duplicate messages. If the application servers are not capable of doing so (e.g., HTTP, where the server would respond to each request), the redundant transmissions have to be performed at the granularity of transport-layer segments as opposed to application-layer messages, since protocols such as TCP provide redundant-packet filtering. Finally, PF can be accomplished in A3• in terms of classifying requests and treating them differently. However, the slow fetching of data that is not required immediately has to be realized through coarser receiver-based mechanisms, such as delayed requests, as opposed to the best possible strategy of slowing down responses as in A3.
5. EVALUATION
5.1 Experimental Setup
The experimental setup for the performance evaluation is shown in Figure 12. The setup consists of three desktop machines running the Fedora Core 4 operating system with the Linux 2.6 kernel. An application-emulator (AppEm) module runs on both end machines. The AppEm module is a custom-built user-level module that generates traffic patterns and content for three different application protocols: CIFS, SMTP, and HTTP.
The traffic patterns are modeled based on traffic traces generated by the IxChariot emulator and on documented standards for the application protocols. The AppEm module also generates traffic content based on both real-life input data sets (for Email and Web content) and random data sets (file transfer)3. The traffic patterns shown in Figure 2 are representative of the traffic patterns generated by AppEm.
The system connecting the two end-systems runs the emulators for both A3 and the wireless network. Both emulators, A3-Em and WNetEm, are implemented within the framework of the ns2 simulator, with ns2 running in emulation mode. Running ns2 in its emulation mode allows for the capture and processing of live network traffic. The emulator object in ns2 taps directly into the device driver of the interface cards to capture and inject real packets into the network.
3 While the IxChariot emulator can generate representative traffic traces, it does not allow for specific data sets to be used for the content; hence the need for the custom-built emulator.
All five of the A3 mechanisms are implemented in the A3-Em module, and each mechanism can be enabled either independently or in tandem with the other mechanisms. The WNetEm module is used for emulating different wireless network links representing the WLAN, WWAN, and SAT environments. The specific characteristics used to represent the wireless network environments are the same as those presented in Section 2.
The primary metrics monitored are throughput, response time (for HTTP), and confidence intervals for the throughput and response time. Each data point is the average of 20 simulation runs, and in addition we show the 90% confidence intervals. The results of the evaluation study are presented in two stages. We first present the results of the performance evaluation of the A3 principles in isolation. Then, we discuss the combined performance improvements delivered by A3.
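For reference, a 90% confidence interval over 20 runs can be computed from the sample mean and standard deviation using the Student t distribution (critical value approximately 1.729 for 19 degrees of freedom). A minimal sketch, with illustrative throughput samples:

```python
# Sketch: 90% confidence interval for the mean of 20 runs.
# The t critical value t(0.95, df=19) ~= 1.729 is hardcoded for n = 20.
import statistics

def ci90(samples):
    n = len(samples)
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / n ** 0.5  # standard error of the mean
    t = 1.729                                   # t critical value for df = 19
    return mean - t * sem, mean + t * sem

# Hypothetical per-run throughputs in Mb/s (20 runs of one data point).
runs = [3.8, 4.1, 3.9, 4.0, 4.2, 3.7, 4.0, 3.9, 4.1, 4.0,
        3.8, 4.2, 3.9, 4.0, 4.1, 3.7, 4.0, 3.9, 4.1, 4.0]
lo, hi = ci90(runs)
```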
5.2 Principles in Isolation
5.2.1 Transaction Prediction
We use CIFS as the application traffic for evaluating the performance of Transaction Prediction. The results of the TP evaluation are shown in Figure 13. The x-axes of the graphs show the size of the transferred file in MBytes and the y-axes show the application throughput in Mbps. The results show several trends: (i) With wireless-aware TCPs (such as ELN, WTCP, and STP), the increase in throughput is negligible. This trend is consistent with the results in Section 2. (ii) Transaction Prediction improves CIFS application throughput significantly. In the SAT network, for instance, TP improves CIFS throughput by more than 80% when transferring a 10 MByte file. (iii) The improvement achieved by TP increases with increasing file size. This is because TP is able to eliminate more request-response interactions as the file size grows. (iv) TP achieves the highest improvement in the SAT network. This is because TP's benefits are directly proportional to the RTT and the BDP of the network, and SATs have high RTTs and large BDPs when compared to the other wireless environments.
5.2.2 Redundant and Aggressive Retransmissions
We evaluate the effectiveness of the RAR principle using the CIFS application protocol. The results of the RAR evaluation are presented in Figure 14. The x-axis in the graphs is the requested file size in MBytes and the y-axis is the CIFS application throughput in Mbps. We observe that RAR delivers better performance than both TCP-NewReno and the tailored transport protocols, delivering up to an 80% improvement in throughput for SATs. RAR reduces the chances of experiencing a timeout when a wireless packet loss occurs, and this reduction in TCP timeouts leads to the better performance.
5.2.3 Prioritized Fetching
The performance of the PF principle was evaluated with HTTP traffic, and the results are shown in Figure 15. The x-axis in the graphs is the requested web-page size in KBytes, and the y-axis is the response time in seconds for the initial screen. As the figure shows, the response-time difference between default content fetching and PF increases as a user accesses larger web pages. PF consistently delivers a 15% to 30% improvement in response-time performance. PF reduces aggressive traffic volumes by de-prioritizing the out-of-sequence fetching of unseen objects. Note that PF, while improving the response time, does not improve raw throughput performance. In other words, only the effective throughput, as experienced by the end-user, increases when using PF.
Figure 13: Transaction Prediction: CIFS. [Three panels: (a) WLAN, (b) WWAN, (c) SAT. Throughput (Mb/s) vs. file size (MB) for TCP NewReno, a wireless-aware TCP (ELN/WTCP/STP), and NewReno with TP.]
Figure 14: Redundant and Aggressive Retransmissions: CIFS. [Three panels: (a) WLAN, (b) WWAN, (c) SAT. Throughput (Mb/s) vs. file size (MB) for TCP NewReno, a wireless-aware TCP (ELN/WTCP/STP), and NewReno with RAR.]
5.2.4 Infinite Buffering
The effectiveness of IB is evaluated using CIFS traffic, and the results are shown in Figure 16. The x-axes of the graphs show the requested file size in MBytes and the y-axes show the application throughput in Mbps. We can see that: (i) Transferring larger amounts of data with IB achieves higher throughput. This is because IB helps most during the actual data-transfer phase, and will not help when the amount of data to be transferred is less than a few times the BDP of the network. (ii) IB performs much better in the SAT network than in the other two networks, delivering almost a 400% improvement in performance. Again, the results are as expected, because IB's benefits are higher when the BDP of the network is higher.
5.2.5 Application-aware Encoding
Application-aware Encoding is designed primarily to accelerate e-mail delivery using SMTP, and hence we evaluate the effectiveness of AE for SMTP traffic. In the evaluation, emails of sizes ranging from 1 KByte to 10 KBytes (around 120 to 1200 words) are used. We show the results in Figure 17, where the x-axis is the e-mail size in KBytes and the y-axis is the application throughput in Mbps. Varying degrees of throughput improvement are achieved; in the WWAN, an increase of 80% is observed when transferring a 10 KByte email. AE achieves the highest improvement in the WWAN due to its relatively low bandwidth.
We also show the effectiveness of AE in terms of compression ratio in Figure 19, which presents the results for ten persons' emails using three compression estimators (WinRAR, WinZip, and AE). WinRAR and WinZip can compress an email by a factor of 2 to 3, while AE achieves a compression ratio of about 5.
5.3 Integrated Performance Evaluation
In this section, we present the results of the combined effectiveness of all applicable principles for the three applications: CIFS, SMTP, and HTTP. We employ RAR, TP, and IB on the CIFS traffic. For SMTP, the RAR, AE, and IB principles are used. In the case of HTTP, the A3 principles applied are RAR, PF, and IB. As expected, the throughput of the applications (CIFS and SMTP) when using the integrated A3 principles is higher than when any individual principle is employed in isolation, while the response time of HTTP is lower than with any individual principle. The results are shown in Figure 18, with A3 delivering performance improvements of approximately 70%, 110%, and 30% for CIFS, SMTP, and HTTP, respectively.
Figure 19: Efficiency of AE. [Compression ratio (%) for ten persons' emails (person IDs 1-10) under RAR, ZIP, and VBC (AE).]
6. RELATED WORK
6.1 Wireless-aware Middleware/Applications
The Wireless Application Protocol (WAP) is a protocol developed to allow efficient transmission of WWW content to handheld wireless devices. The transport-layer protocols in WAP consist of the Wireless Transaction Protocol and the Wireless Datagram Protocol, which are designed for use over narrow-band bearers in wireless networks and are not compatible with TCP. WAP is highly WWW-centric, and does not aim to optimize any of the application behavioral patterns identified earlier in the paper. Browsers such as Pocket Internet Explorer (PIE) [6] are developed with capabilities that can address resource constraints on mobile devices. However, they do not optimize communication performance, which is the focus of A3.
Figure 15: Prioritized Fetching: HTTP. [Three panels: (a) WLAN, (b) WWAN, (c) SAT. Response time (s) vs. web-page size (KB) for TCP NewReno, a wireless-aware TCP (ELN/WTCP/STP), and NewReno with PF.]
Figure 16: Infinite Buffering: CIFS. [Three panels: (a) WLAN, (b) WWAN, (c) SAT. Throughput (Mb/s) vs. file size (MB) for TCP NewReno, a wireless-aware TCP (ELN/WTCP/STP), and ELN with IB.]
Work in [15] aims to save bandwidth/power by adapting content based on user semantics and contexts. The adaptations, however, are exposed to the end-applications and users. This is different from the A3 approach, which is application-transparent.
The Odyssey project [16] focuses on system support for collaboration between the operating system and individual applications by letting both be aware of the wireless environment and thus adapt their behaviors. Comparatively, A3 does not rely on OS-level support, and is totally transparent both to the underlying OS and to the applications.
The Coda file system [17] is based on the Andrew File System (AFS), but supports disconnected operations for mobile hosts. When the client is connected to the network, it hoards files for later use during disconnected operations. During disconnections, Coda emulates the server, serving files from its local cache. Coda's techniques are specific to file systems, and require applications to have changed semantics for the data that they use.
6.2 Related Design Principles
Some related work in the literature has proposed accelerating applications with various mechanisms. We present a few of these here, and identify the differences vis-a-vis A3.
TP-related: In [10], the authors propose to "upload" clients' tasks to the server side, thus eliminating many of the RTTs required for applications like SMTP. This approach differs from the A3 approach in the application protocols targeted and in the overall mechanism.
RAR-related: Mechanisms like FEC use error-control coding for digital communication systems. Another work [19] proposes an aggressive retransmission mechanism that encourages legitimate clients to behave more aggressively in order to fight attacks against servers. Compared to these approaches, A3 applies RAR only to control messages in application protocols, and it does so by retransmitting the control message when a maintained timer expires. We presented arguments earlier in the paper as to why protecting control-message exchanges is a major factor affecting application performance.
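The timer-driven retransmission of control messages can be sketched as follows. The API, message identifiers, and timer value are illustrative assumptions, not the A3 implementation:

```python
# Hypothetical sketch of RAR for an application control message: retransmit
# redundantly when a short timer expires, relying on the receiver (or TCP,
# in the segment-granularity case) to filter duplicates.
import threading

class RARSender:
    def __init__(self, send, timeout=0.2):
        self.send = send        # underlying transmit function
        self.timeout = timeout  # aggressive retransmission timer (seconds)
        self.pending = {}       # msg_id -> armed timer

    def send_control(self, msg_id, msg):
        old = self.pending.get(msg_id)
        if old:
            old.cancel()        # re-arm rather than stack timers
        self.send(msg)
        timer = threading.Timer(self.timeout, self._retransmit, (msg_id, msg))
        self.pending[msg_id] = timer
        timer.start()

    def _retransmit(self, msg_id, msg):
        if msg_id in self.pending:   # not yet acknowledged: send again
            self.send_control(msg_id, msg)

    def on_ack(self, msg_id):
        timer = self.pending.pop(msg_id, None)
        if timer:
            timer.cancel()

sent = []
rar = RARSender(sent.append, timeout=5.0)
rar.send_control(1, "control-message")
rar.on_ack(1)  # response arrives before the timer fires: no retransmission
```

Data messages bypass this path entirely; only the small, loss-sensitive control exchanges carry the redundancy overhead.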
PF-related: To improve web-access performance, work in [15] proposes out-of-order transmission of HTTP objects over UDP, breaking the in-order delivery of an object. However, unlike the A3 framework, it requires the cooperation of both the client and server sides.
IB-related: [8] shows that overbuffering in routers increases end-to-end delay in the presence of congestion and complicates the design of high-speed routers. IB is different from overbuffering: it aims at fully utilizing network resources by removing the buffer-length constraint, applies specifically to applications with large bulk-data transfers, such as FTP, and is meant to counter the impact of flow control.
AE-related: Companies like Converged Access [3] provide application-aware compression solutions by compressing the data of some applications based on priority and application characteristics. These mechanisms share the property of being application-aware, meaning only a subset of applications is compressed. However, AE has the additional property of being user-aware; that is, it takes user-pattern information into consideration, and thus can achieve better performance.
6.3 Commercial WAN Optimizers
Several companies, such as Riverbed and Peribit, sell WAN-optimization application-acceleration products. However, (1) almost all of the commercial solutions are proprietary; (2) the A3 principles such as RAR, IB, AE, and PF are not seen in commercial solutions; and (3) many of the techniques used in commercial solutions, such as bit-level caching and LZ-based compression, are hardware-based approaches and require large amounts of storage. These properties render the commercial solutions inapplicable in environments where easy deployment is required. A3, in contrast, is a middleware approach, and does not require large amounts of storage.
Figure 17: Application-aware Encoding: SMTP. [Three panels: (a) WLAN, (b) WWAN, (c) SAT. Throughput (Mb/s) vs. email size (KB) for TCP NewReno, a wireless-aware TCP (ELN/WTCP/STP), and NewReno with AE.]
Figure 18: Integrated A3 Results in WWAN. [Three panels: (a) CIFS, throughput (Mb/s) vs. file size (MB) for TCP NewReno, WTCP, and ELN with RAR+TP+IB; (b) SMTP, throughput (Mb/s) vs. email size (KB) for TCP NewReno, WTCP, and NewReno with RAR+IB+AE; (c) HTTP, response time (s) vs. web-page size (KB) for TCP NewReno, WTCP, and NewReno with RAR+IB+AE.]
7. CONCLUSIONS
In this paper, we motivate the need for application acceleration in wireless data networks, and present the A3 solution, which is application-aware but application-transparent. Using a combination of principles targeted toward tackling design problems in popular real-world applications, A3 provides significant improvements in end-user application performance.
8. REFERENCES
[1] CIFS: A common internet file system.
http://www.microsoft.com/mind/1196/cifs.asp.
[2] Comscore media metrix top 50 online property ranking.
http://www.comscore.com/press/release.asp?press=547.
[3] Converged access wan optimization.
http://www.convergedaccess.com/.
[4] Hypertext transfer protocol - HTTP/1.1.
http://www.ietf.org/rfc/rfc2616.txt.
[5] Linux magazine.
http://www.linux-magazine.com/issue/15/.
[6] Pocket internet explorer.
http://www.microsoft.com/windowsmobile/.
[7] Simple mail transfer protocol.
http://www.ietf.org/rfc/rfc2821.txt.
[8] G. Appenzeller, I. Keslassy, and N. McKeown. Sizing
router buffers. In Proceedings of ACM SIGCOMM,
Portland, Oregon, 2004.
[9] H. Balakrishnan and R. Katz. Explicit loss notification
and wireless web performance. In Proceedings of IEEE
GLOBECOM, Sydney, Australia, Nov. 1998.
[10] S. Czerwinski and A. Joseph. Using simple remote
evaluation to enable efficient application protocols in
mobile environments. In Proceedings of the 1st IEEE
International Symposium on Network Computing and
Applications, Cambridge, MA, 2001.
[11] T. Henderson and R. Katz. Transport protocols for
Internet-compatible satellite networks. IEEE Journal
on Selected Areas in Communications (JSAC),
17(2):345–359, Feb. 1999.
[12] H. Hsieh, K. Kim, Y. Zhu, and R. Sivakumar. A
receiver-centric transport protocol for mobile hosts
with heterogeneous wireless interfaces. In Proceedings
of ACM MOBICOM, 2003.
[13] IXIA. http://www.ixiacom.com/.
[14] A. Kuzmanovic and E. Knightly. TCP-LP: A
distributed algorithm for low priority data transfer. In
Proceedings of IEEE INFOCOM, 2003.
[15] I. Mohomed, J. C. Cai, S. Chavoshi, and E. de Lara.
Context-aware interactive content adaptation. In
Proceedings of the 4th International Conference on
Mobile Systems, Applications, and Services (MobiSys),
Uppsala, Sweden, 2006.
[16] B. D. Noble, M. Satyanarayanan, D. Narayanan, J. E.
Tilton, J. Flinn, and K. R. Walker. Agile
application-aware adaptation for mobility. In
Proceedings of the 16th ACM Symposium on Operating
System Principles, Saint Malo, France, 1997.
[17] M. Satyanarayanan, J. J. Kistler, P. Kumar, M. E.
Okasaki, E. H. Siegel, and D. C. Steere. Coda: A
highly available file system for a distributed
workstation environment. IEEE Transactions on
Computers, 39(4):447–459, 1990.
[18] P. Sinha, N. Venkitaraman, R. Sivakumar, and
V. Bharghavan. WTCP: A reliable transport protocol
for wireless wide-area networks. In Proceedings of
ACM MOBICOM, Seattle, WA, USA, Aug. 1999.
[19] M. Walfish, H. Balakrishnan, D. Karger, and
S. Shenker. DoS: Fighting fire with fire. In Proceedings
of the 4th ACM Workshop on Hot Topics in Networks
(HotNets), College Park, MD, 2005.