To provide users with highly available data and content, content delivery networks (CDNs) must balance requests across servers while assigning clients to the closest servers. In this paper, we describe a new CDN design that associates artificial load-aware coordinates with clients and data servers and uses them to direct content requests to cached data. This approach maintains good accuracy and service quality when request workloads and resource availability in the CDN are dynamic. A deployment and evaluation of our system on PlanetLab demonstrates that it achieves low request times and high cache hit ratios compared to other CDN approaches.
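The load-aware-coordinate idea in the abstract can be sketched as follows. In this minimal illustration (the function name, the coordinate model, and the load-inflation formula are all assumptions for exposition, not the paper's actual construction), a server's load inflates its apparent distance, so requests drift toward nearby but lightly loaded servers:

```python
import math

def pick_server(client, servers, alpha=1.0):
    """Pick the server minimizing network distance inflated by load.

    `client` is an (x, y) coordinate; `servers` maps name -> ((x, y), load)
    with load in [0, 1].  `alpha` weights how strongly load repels requests.
    Illustrative sketch only.
    """
    def cost(coord, load):
        dist = math.dist(client, coord)
        return dist * (1.0 + alpha * load)   # load inflates apparent distance
    return min(servers, key=lambda name: cost(*servers[name]))

# A nearby but overloaded server loses to a slightly farther, idle one.
servers = {"near-busy": ((1.0, 0.0), 0.9), "far-idle": ((1.5, 0.0), 0.0)}
print(pick_server((0.0, 0.0), servers))  # far-idle
```

With `alpha = 0` the choice degenerates to plain nearest-server assignment, which is the behaviour the paper's load-aware coordinates are meant to improve on.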
VIRTUAL CACHE & VIRTUAL WAN ACCELERATOR FUNCTION PLACEMENT FOR COST-EFFECTIVE...IJCNCJournal
An algorithm that determines where network functions are placed and how much capacity each function flexibly requires is essential for cost-effective design of NFV (Network Functions Virtualization)-based networks. The authors propose a placement algorithm for virtual routing and virtual firewall functions in an NFV-based network that minimizes total network cost, and develop effective allocation guidelines for these virtual functions.
Design of Equitable Dominating Set Based Semantic Overlay Networks with Optim...IDES Editor
Content Distribution Networks (CDNs) are overlay networks that place content near end clients, with the aim of reducing delay and network congestion and balancing workload, hence improving the service quality perceived by end clients. The main objective of this work is to construct a semantic overlay network (SON) of surrogate servers based on an equitable dominating set. This allows any replication algorithm to replicate content to a minimum number of surrogate servers within the SON, and such servers can be accessed from anywhere. We then propose a content distribution algorithm named Optimal Fast Replica (O-FR) and apply it to distribute content over the Equitable Dominating Set based Semantic Overlay Network (EDSON). We analyze the performance of O-FR in terms of average and maximum replication time, and compare it with the existing content distribution algorithms Fast Replica and Resilient Fast Replica. The approach improves the service quality perceived by end clients. The paper also analyzes the use of equitable dominating sets for constructing semantic overlay networks and investigates how they help maintain uniform utilization of the surrogate servers.
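The dominating-set construction the abstract relies on can be illustrated with a plain greedy approximation (a sketch under simplifying assumptions: the *equitable* degree-balance condition of EDSON is omitted, and the graph representation is hypothetical):

```python
def greedy_dominating_set(adj):
    """Greedy approximation of a dominating set.

    `adj` maps node -> set of neighbours.  Repeatedly pick the node that
    covers the most still-uncovered nodes (itself plus its neighbours).
    This is a plain dominating set; the paper's *equitable* variant adds
    a balance condition not modelled here.
    """
    uncovered = set(adj)
    dom = set()
    while uncovered:
        best = max(adj, key=lambda v: len((adj[v] | {v}) & uncovered))
        dom.add(best)
        uncovered -= adj[best] | {best}
    return dom

# Star graph: the centre alone dominates every node.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(greedy_dominating_set(star))  # {0}
```

Replicating content only to such a set is what lets a replication algorithm reach every client through at most one overlay hop from a dominating server.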
Prediction System for Reducing the Cloud Bandwidth and Cost — ijceronline
Dynamic Routing for Data Integrity and Delay Differentiated Services in Wirel...syeda yasmeen
The document proposes a routing algorithm called IDDR for wireless sensor networks that aims to provide both low delay for delay-sensitive applications and high data integrity for integrity-sensitive applications. IDDR constructs a virtual hybrid potential field to separate packets by application type and route them along different paths, improving data integrity by caching packets on underloaded paths while reducing delay by sending delay-sensitive packets along shorter paths. Using theoretical analysis, the authors prove IDDR is stable, and simulations show it provides differentiated quality of service for different application requirements.
SELFLESS DISTRIBUTED CREDIT BASED SCHEDULING FOR IMPROVED QOS IN IEEE 802.16 ...ijwmn
Packet and flow scheduling algorithms for WiMAX have been a topic of interest since the very inception of WiMAX networks. WiMAX offers advantages particularly in the quality of service it provides over longer ranges at the MAC level. In this paper, we propose two credit-based scheduling schemes: one in which completed flows distribute their leftover credits equally to all higher-priority flows (FDCBSS), and another in which completed flows give all excess credits to the highest-priority uncompleted flow (SDCBSS). Both schemes are compatible with the 802.16 MAC protocol and can efficiently serve real-time bursty traffic with reduced latency, and hence improved QoS for real-time flows. We compare the two proposed schemes against the basic Deficit Round Robin scheduling scheme in terms of latency, bandwidth utilization, and throughput for real-time bursty flows.
A-Serv: A Novel Architecture Providing Scalable Quality of ServiceCSCJournals
QoS architectures define how routers process packets to ensure that QoS guarantees are enforced. Existing QoS architectures such as Integrated Services (IntServ), Differentiated Services (DiffServ), and Dynamic Packet State (DPS) share one common property: the packet structure and the functions of the routers are closely coupled, so packets of one data flow are treated the same way at every router. We propose to decouple packet structures from router functions. In our solution, packets carry as much information as possible, while routers process packets in as much detail as their load allows. We call this novel QoS architecture Adaptive Services (A-Serv). A-Serv utilizes our newly designed Load Adaptive Router to provide adaptive QoS to data flows: treatment of a data flow is not predefined but depends on the load in the Load Adaptive Routers. A-Serv overcomes the scalability problem of IntServ, provides better service guarantees to individual data flows than DiffServ, and can be deployed incrementally. Our empirical analysis shows that, compared with the DiffServ architecture, A-Serv can provide differentiated services to data flows within the same DiffServ class and better-guaranteed QoS to individual flows. Furthermore, A-Serv protects data flows better than DiffServ when malicious data flows exist.
Dual-resource TCP/AQM for Processing-constrained Networks — ambitlick
This document summarizes a research paper that examines congestion control issues for TCP flows that require in-network processing by network elements like routers. It proposes a new active queue management (AQM) scheme called Dual-Resource Queue (DRQ) that can approximate proportional fairness for bandwidth and CPU resources without changing TCP stacks. The study finds that current TCP/AQM schemes can lose throughput and cause unfairness in networks where CPU cycles are scarce. DRQ maintains separate queues for each resource and works with classes of flows based on their known processing requirements, making it scalable without per-flow state. Simulation results show DRQ achieves fair and efficient resource allocation with low implementation costs.
JPJ1410 PACK: Prediction-Based Cloud Bandwidth and Cost Reduction Systemchennaijp
This paper presents PACK, a receiver-based end-to-end traffic redundancy elimination system for cloud computing. PACK aims to reduce bandwidth costs for cloud customers by having the client predict and eliminate redundant data sent by the cloud server. It does this by having the client maintain chunks of previously received data and use these to identify redundant chunks in newly received data, allowing it to send "predictions" to the server about future chunks. This avoids the need for the server to do processing for traffic redundancy elimination.
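The receiver-side prediction that PACK relies on can be sketched in miniature. In this toy version (tiny chunk size, truncated SHA-1 signatures, and all helper names are illustrative assumptions, not the real PACK protocol), the receiver chains previously seen chunks by signature and, after matching one chunk, predicts the chunk that followed it last time:

```python
import hashlib

CHUNK = 4  # tiny chunk size for illustration; real systems use KBs

def sig(chunk: bytes) -> str:
    """Short signature of a chunk (truncated SHA-1, illustrative only)."""
    return hashlib.sha1(chunk).hexdigest()[:8]

def build_chain(data: bytes) -> dict:
    """Map each chunk's signature to the chunk that followed it."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    return {sig(a): b for a, b in zip(chunks, chunks[1:])}

def predicted_next(chain: dict, received: bytes):
    """After receiving `received`, predict the next chunk (or None).

    In PACK the receiver sends the signature of this prediction to the
    sender, which replies with a tiny confirmation instead of the data
    when the prediction hits.  Simplified sketch, not the real protocol.
    """
    return chain.get(sig(received[-CHUNK:]))

old = b"HEADERpayload-v1TRAILER!"
chain = build_chain(old)
# Having seen the old stream, the receiver predicts what follows "HEAD".
print(predicted_next(chain, b"HEAD"))  # b'ERpa'
```

The point of the design is visible even here: all chunk storage and lookup happens at the receiver, so the sender only verifies signatures rather than maintaining redundancy state per client.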
A Fuzzy Based Dynamic Queue Management Approach to Improve QOS in Wireless se...IJARIDEA Journal
Abstract— Wireless Sensor Networks (WSNs) are predicted to be the next generation of networks, forming an indispensable part of everyday life and providing a bridge between the real physical and virtual worlds. WSNs must be able to support various applications over the same platform, and specific applications may have unique QoS requirements. Delivering quality of service (QoS), which is fundamental for numerous applications, is directly related to energy consumption, delay, reliability, distortion, and network lifetime. There is an inevitable trade-off between the achievable service levels in WSNs and the power consumption of these networks: improving either one strongly influences the other.
Keywords—Data integrity, Delay differentiated services, Dynamic routing, Potential field, Wireless sensor
networks.
Integrated services and RSVP - ProtocolPradnya Saval
- MPLS can be used to create virtual private networks (VPNs) that provide wide-area connectivity between sites of a large organization through dedicated label switched paths. This gives the appearance of a dedicated network while transmitting through a public or shared MPLS network.
- The integrated services model was developed by IETF to provide different levels of quality of service in the Internet. It uses resource reservation, packet classification, and scheduling to ensure applications receive their requested QoS.
- The Resource Reservation Protocol (RSVP) is used by the integrated services model to signal resource requirements and set up flows with a requested QoS across a network. RSVP uses soft state and receiver-initiated reservations.
A Novel Rebroadcast Technique for Reducing Routing Overhead In Mobile Ad Hoc ...IOSR Journals
This document presents a novel rebroadcast technique called Neighbor Coverage based Probabilistic Rebroadcast (NCPR) protocol to reduce routing overhead in mobile ad hoc networks. The NCPR protocol calculates a rebroadcast delay based on the number of common neighbors between nodes to prioritize dissemination of neighbor information. It also calculates a rebroadcast probability based on additional neighbor coverage ratio and connectivity factor to reduce unnecessary rebroadcasts while maintaining network connectivity. The protocol is implemented by enhancing the AODV routing protocol in NS-2 to reduce overhead from hello packets and neighbor lists in route requests. Its performance is evaluated under varying network sizes, traffic loads, and packet loss conditions.
A Survey on Cross Layer Routing Protocol with Quality of ServiceIJSRD
Wireless networking plays a wide role in today's industrial applications. The central idea of this paper is to enhance quality of service (QoS) for multimedia transmission over ad hoc networks. The paper describes the operation of different QoS routing protocols, their properties, various parameters, and their advantages and disadvantages, and also describes the use of QoS in cross-layer routing protocols. Finally, it concludes with a study of all these cross-layer QoS routing protocols.
Token Based Packet Loss Control Mechanism for NetworksIJMER
This document summarizes a research paper that proposes a new congestion control mechanism using tokens. It begins with background on congestion control and modern IP networks. In the proposed approach, edge and core routers write quality-of-service measures into packet headers as tokens. Tokens are interpreted by routers to gauge congestion, especially at edge routers. Based on the tokens, edge routers can shape traffic from sources to reduce congestion. The mechanism aims to provide fairness while controlling packet loss. Key aspects discussed include stable token-limit congestion control, core routers, edge routers, and how the approach compares to related work such as CSFQ.
Assessing Buffering with Scheduling Schemes in a QoS Internet RouterIOSR Journals
This document examines different scheduling algorithms that could be used with RIO-C penalty enforcement buffering in a multi-queue QoS router to improve network performance. It simulates priority, round robin, and weighted round robin scheduling with RIO-C buffering. The results show that priority scheduling achieved the lowest loss rates, with 29.46% scheduler drop rate and 14.95% RED loss rate. Round robin was second with 29.53% and 10.50% losses. Weighted round robin was third with 30.28% and 3.04% losses. The document concludes that a network seeking quality of service could adopt priority scheduling with RIO-C admission control.
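The weighted round-robin discipline compared in that study can be sketched in a few lines (queue contents, weights, and the drain loop are illustrative assumptions, not the simulation setup used in the document):

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """Drain packets from per-class queues in weighted round-robin order.

    `queues` is a list of deques and `weights` gives how many packets
    each queue may send per round.  Sketch of the discipline, not the
    RIO-C simulation itself.
    """
    out = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    out.append(q.popleft())
    return out

hi = deque(["h1", "h2", "h3"])
lo = deque(["l1", "l2", "l3"])
print(weighted_round_robin([hi, lo], [2, 1]))
# ['h1', 'h2', 'l1', 'h3', 'l2', 'l3']
```

Setting one weight much larger than the others makes the scheme approach strict priority, which is one way to read the loss-rate ordering reported above.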
This document describes a custom reliable file transfer protocol over UDP that is designed to perform better than TCP in lossy network conditions. It discusses the protocol design which uses sequence numbers and negative acknowledgements to provide reliability over UDP. The protocol is tested on a simulated network with varying packet loss rates and delays. Results show the protocol achieves higher throughput than TCP-based file transfer methods on lossy links.
Packet delivery ratio, delay, throughput, and routing overhead are strict quality-of-service requirements for applications in ad hoc networks. The routing protocol must therefore not only find a suitable path but also ensure the path satisfies the QoS constraints. QoS-aware routing is performed on the basis of resource availability in the network and the QoS requirements of the flow. In this paper we develop a source routing protocol that satisfies link-bandwidth and end-to-end delay constraints. Our protocol finds multiple paths between the source and the destination; one is selected for data transfer and the others are held in reserve at the source node for route maintenance. Path selection is based strictly on bandwidth and end-to-end delay; if two or more paths have the same values for the QoS constraints, hop count is used as the tie-breaking parameter.
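The selection rule described in that abstract (feasibility check, then bandwidth and delay, then hop count as tie-breaker) can be sketched as follows; the field names and example values are illustrative assumptions, not from the paper:

```python
def select_path(paths, min_bw, max_delay):
    """Pick a feasible path: prefer higher bandwidth, then lower delay,
    then fewer hops as the tie-breaker described in the abstract.

    Each path is a dict with 'bw', 'delay' and 'hops' fields (names are
    hypothetical).  Returns None when no path meets the constraints.
    """
    feasible = [p for p in paths if p["bw"] >= min_bw and p["delay"] <= max_delay]
    if not feasible:
        return None
    return min(feasible, key=lambda p: (-p["bw"], p["delay"], p["hops"]))

paths = [
    {"name": "A", "bw": 10, "delay": 40, "hops": 5},
    {"name": "B", "bw": 10, "delay": 40, "hops": 3},  # same QoS, fewer hops
    {"name": "C", "bw": 20, "delay": 90, "hops": 2},  # violates delay bound
]
best = select_path(paths, min_bw=8, max_delay=50)
print(best["name"])  # B
```

The unselected feasible paths (here "A") are exactly the ones the protocol would keep in reserve at the source for route maintenance.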
Title: Ephemeral Performance of CHOKe
Author: Suhitha K.C
International Journal of Recent Research in Mathematics Computer Science and Information Technology (ISSN 2350-1022), Paper Publications
Abstract: CHOKe is a simple and stateless active queue management (AQM) scheme. A highly attractive property of CHOKe is that it can protect responsive TCP flows from unresponsive UDP flows, using the packets currently queued in the buffer to penalize high-bandwidth flows. It can be implemented on top of the RED algorithm. When a packet arrives at a congested router, CHOKe draws a packet at random from the FIFO buffer and compares it with the arriving packet. If both belong to the same flow, they are both dropped; otherwise the randomly chosen packet is left intact and the arriving packet is admitted into the buffer with a probability that depends on the level of congestion. Congestion-control algorithms are typically implemented in the transport protocols (e.g., TCP) of end hosts; to ensure global fairness, such schemes require all users to adopt them and respond to network congestion properly.
Keywords: CHOKe, Random Early Detection (RED), Congestion.
Dynamic routing for data integrity and delay differentiated services in wireless sensor networks — LeMeniz Infotech
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
This document summarizes a research paper that proposes a "packet weighted energy drain rate" mechanism to detect selfish nodes in mobile ad hoc networks. The mechanism calculates the packet weighted energy drain rate of each node to identify those that are not fairly forwarding packets and draining battery. It uses this metric along with optimal change requests and topology maintenance to mitigate the impacts of selfish nodes. Simulations were conducted in NS-2 to evaluate the performance of this approach under varying packet sizes, and it was found to improve metrics like packet delivery ratio and reduce overhead compared to existing selfish-aware scheduling.
Clustering based Time Slot Assignment Protocol for Improving Performance in U...journal ijrtem
Recently, numerous approaches have been proposed for designing medium access control (MAC) in underwater acoustic networks (UANs). Some of these works tried to adapt MAC protocols proposed for terrestrial networks. However, the unique environmental characteristics of UANs make such MAC protocols hard to use and degrade network performance. To improve network performance, the COD-TS MAC protocol was proposed. COD-TS supports both single-hop and multi-hop modes and uses CDMA to exchange schedule information between cluster heads. However, COD-TS has shortcomings such as collisions, additional energy consumption from exchanging schedule information, and the near-far effect of CDMA. To overcome these shortcomings, we propose a clustering-based time slot assignment protocol. In the proposed protocol, nodes are clustered, and each cluster head performs a two-hop neighbor-cluster discovery operation from which it obtains its own relative position information. Finally, the cluster head assigns its own time slot for data transmission based on that information. Simulation results show that the proposed protocol consistently outperforms COD-TS.
Improvement of Congestion window and Link utilization of High Speed Protocols...IOSR Journals
This document summarizes a research paper that proposes using a k-nearest neighbors (k-NN) algorithm to help high-speed transport layer protocols like CUBIC better distinguish between packet drops due to network congestion versus other factors like noise. The k-NN algorithm would analyze patterns in packet drop history to classify new drops, helping protocols avoid unnecessary window size reductions when drops are not actually due to congestion. The document provides background on high-speed protocols, issues like underutilization from treating all drops as congestion, and how incorporating k-NN classification could improve protocols' performance in noisy network conditions.
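The k-NN classification step that summary describes can be sketched as follows (the feature choices and labels are illustrative assumptions, not the features used in the paper; a CUBIC-like sender would reduce its window only for drops classified as congestion):

```python
from collections import Counter
import math

def knn_classify(history, features, k=3):
    """Classify a packet drop as 'congestion' or 'noise' by k-NN vote.

    `history` is a list of (feature_vector, label) pairs from past drops;
    `features` describes the new drop (e.g. queueing-delay trend and
    loss-burst length -- hypothetical features).  Majority label among
    the k nearest neighbours wins.
    """
    nearest = sorted(history, key=lambda h: math.dist(h[0], features))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

history = [
    ((0.9, 0.8), "congestion"), ((0.8, 0.9), "congestion"),
    ((0.1, 0.1), "noise"), ((0.2, 0.05), "noise"), ((0.15, 0.2), "noise"),
]
print(knn_classify(history, (0.12, 0.1)))  # noise
```

A drop classified as "noise" would leave the congestion window untouched, which is how the scheme avoids the underutilization the document attributes to treating every drop as congestion.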
Delay jitter control for real time communicationMasud Rana
This document proposes a method for controlling delay jitter for real-time communication channels in a packet-switching network. It extends an existing scheme that provides bounds on maximum delay. The key aspects are:
1) Each network node contains "regulators" for each channel that reconstruct the original packet arrival pattern to preserve jitter, and a scheduler that ensures low distortion of patterns between nodes.
2) Clients specify maximum delay and jitter bounds when establishing a channel. The establishment procedure sets local jitter bounds at each node to ensure the end-to-end bound is met.
3) Regulators at each node attempt to faithfully preserve the original packet arrival pattern, so that the last node sees essentially the original pattern.
GPRS was established by ETSI to provide packet-switched data services in GSM networks. It introduces two new core network nodes, SGSN and GGSN, to route packets between external data networks and mobile stations. GPRS supports bit rates up to 170kbps and quality of service features. It allows dynamic allocation of radio resources and efficient delivery of packet data using concepts like always-on connectivity and burst transmissions. GPRS uses concepts like point-to-point and point-to-multipoint connections to provide services like IP, X.25, SMS and other applications to mobile users.
This document summarizes a dissertation on an improved load balancing technique for secure data in cloud computing. The dissertation discusses research issues in load balancing and data security in cloud computing. It proposes a load balancing methodology that uses a load balancer, Kerberos authentication, and Nginx load balancing algorithms like round robin and least connections to securely store and balance load of encrypted data across multiple cloud nodes. The methodology is implemented using tools like HP LoadRunner, Amazon Web Services, and Jelastic cloud platform. Performance is analyzed in terms of transaction time. The proposed technique aims to improve resource utilization, access control, data security, and efficiency in cloud environments.
This document presents a new control strategy for an islanded microgrid consisting of PV and fuel cell distributed generation units supplying both local and nonlocal unbalanced loads. The control strategy comprises voltage control using a proportional resonance controller, droop-based power sharing, and a negative sequence impedance controller. The voltage controller regulates load voltages while the droop controller shares average power. The negative sequence controller minimizes negative sequence currents in the microgrid lines by adjusting each DG unit's negative sequence output impedance, improving power quality under unbalanced conditions. Time-domain simulations in MATLAB/Simulink validate the control strategy's performance for the PV and fuel cell microgrid.
1) The document describes a case where a cadaver was found to have a variant anatomy of the flexor carpi ulnaris muscle and ulnar artery.
2) Specifically, the ulnar head of the flexor carpi ulnaris muscle was more prominent and separated the ulnar artery and nerve. The humeral and ulnar heads of the muscle were also separated by the ulnar nerve.
3) This variation could be clinically significant for plastic surgeons performing flap surgeries and surgeons dealing with cubital tunnel syndrome where the ulnar nerve passes. Knowledge of unusual anatomy in the forearm is important for surgical planning and interpretation of imaging studies.
This document summarizes a research paper that models and simulates the behavior of a Doubly Fed Induction Generator (DFIG) during faulty grid conditions using Matlab. It describes how a three-phase fault is created using a fault block, and the converter connected to the rotor is disconnected and the rotor is shorted by dump resistances after the fault occurs. The paper presents the normal and faulty operating equivalent circuits of the DFIG. It also describes the Matlab simulation model developed, including blocks for the DFIG and dump resistor protection. Simulation results are shown graphically comparing stator current with time during a 0.2 to 0.3 second fault period.
Remote Access and Dual Authentication for Cloud StorageIJMER
Cloud computing is an emerging technology that provides services such as software, hardware, network, and storage over the Internet. The key enabler for cloud computing is virtualization, which reduces total cost and yields reliable, flexible, and secure services. Compute services, however, are chosen among providers located in multiple data centres. One of the major security concerns relates to virtualization and storage, where outside attackers can access files in storage while the data owners have no way of knowing about the attacks. In this paper we propose high-level authentication for the cloud user together with remote monitoring and control of cloud storage. Our model provides dual authentication for the cloud and yields runtime records of the logs along with secured application controls; the logs are remotely accessed and controlled by the owner of the data.
A Fuzzy Based Dynamic Queue Management Approach to Improve QOS in Wireless se...IJARIDEA Journal
Abstract— Wireless Sensor Networks (WSNs) are expected to be the next generation of networks,
forming an indispensable part of everyday life and providing a bridge between the physical and
virtual worlds. WSNs must be able to support various applications over the same platform, and specific
applications may have unique QoS requirements. Delivering Quality of Service (QoS), which is
fundamental for numerous purposes, is directly related to energy consumption, delay, reliability,
distortion, and network lifetime. There is an inevitable trade-off between the achievable service
levels in WSNs and their power consumption: improving either one strongly influences the other.
Keywords—Data integrity, Delay differentiated services, Dynamic routing, Potential field, Wireless sensor
networks.
Integrated services and RSVP - ProtocolPradnya Saval
- MPLS can be used to create virtual private networks (VPNs) that provide wide-area connectivity between sites of a large organization through dedicated label switched paths. This gives the appearance of a dedicated network while transmitting through a public or shared MPLS network.
- The integrated services model was developed by IETF to provide different levels of quality of service in the Internet. It uses resource reservation, packet classification, and scheduling to ensure applications receive their requested QoS.
- The Resource Reservation Protocol (RSVP) is used by the integrated services model to signal resource requirements and set up flows with a requested QoS across a network. RSVP uses soft state and receiver-initiated reservations.
A Novel Rebroadcast Technique for Reducing Routing Overhead In Mobile Ad Hoc ...IOSR Journals
This document presents a novel rebroadcast technique called Neighbor Coverage based Probabilistic Rebroadcast (NCPR) protocol to reduce routing overhead in mobile ad hoc networks. The NCPR protocol calculates a rebroadcast delay based on the number of common neighbors between nodes to prioritize dissemination of neighbor information. It also calculates a rebroadcast probability based on additional neighbor coverage ratio and connectivity factor to reduce unnecessary rebroadcasts while maintaining network connectivity. The protocol is implemented by enhancing the AODV routing protocol in NS-2 to reduce overhead from hello packets and neighbor lists in route requests. Its performance is evaluated under varying network sizes, traffic loads, and packet loss conditions.
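The NCPR decisions summarized above can be sketched in a few lines. The function names and the exact formulas are illustrative assumptions; the paper's own definitions of the delay and probability may differ in detail:

```python
def rebroadcast_delay(sender_nbrs, my_nbrs, max_delay=0.01):
    """Nodes sharing fewer neighbors with the sender wait less, so the
    best-placed rebroadcasters disseminate neighbor information first."""
    if not sender_nbrs:
        return max_delay
    common = len(sender_nbrs & my_nbrs)
    return max_delay * (1 - common / len(sender_nbrs))

def rebroadcast_probability(my_nbrs, covered, avg_degree):
    """Additional-coverage ratio times a connectivity factor, capped at 1."""
    if not my_nbrs:
        return 0.0
    extra = len(my_nbrs - covered) / len(my_nbrs)   # additional coverage ratio
    fc = min(avg_degree / len(my_nbrs), 1.0)        # connectivity factor
    return min(extra * fc, 1.0)
```

A node with many uncovered neighbors and a sparse neighborhood ends up with a high rebroadcast probability, which is the behavior the protocol relies on to keep connectivity while cutting redundant rebroadcasts.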
A Survey on Cross Layer Routing Protocol with Quality of ServiceIJSRD
Wireless networking plays a wide role in today's industrial applications. The central idea of this paper is to enhance quality of service (QoS) for multimedia transmission over ad-hoc networks. The paper describes the operation of different QoS routing protocols, their properties, parameters, advantages and disadvantages. It also describes the use of QoS in cross-layer routing protocols. Finally, it concludes with a study of these cross-layer QoS routing protocols.
Token Based Packet Loss Control Mechanism for NetworksIJMER
This document summarizes a research paper that proposes a new congestion control mechanism using tokens. It begins with background on congestion control and modern IP networks. The proposed approach uses edge and core routers to write quality-of-service measures into packet headers as tokens. Tokens are interpreted by routers to gauge congestion, especially at edge routers. Based on the tokens, edge routers can shape traffic from sources to reduce congestion. The mechanism aims to provide fairness while controlling packet loss. Key aspects discussed include stable token-limit congestion control, core routers, edge routers, and how the approach compares to related work such as CSFQ.
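The edge-router shaping mentioned above resembles classic token-bucket shaping; the following is a generic sketch of that idea, not the paper's exact token encoding:

```python
class TokenBucket:
    """Token-bucket shaper sketch: tokens accrue at `rate` per second up
    to `burst`; a packet of `size` is forwarded only if enough tokens
    remain, otherwise it is held back or dropped by the shaper."""
    def __init__(self, rate, burst):
        self.rate = rate        # tokens (e.g., bytes) added per second
        self.burst = burst      # bucket capacity
        self.tokens = burst
        self.last = 0.0

    def allow(self, size, now):
        # refill for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False
```

Bursts up to `burst` bytes pass immediately, while sustained traffic is limited to `rate`, which is the basic effect an edge router needs to relieve congestion in the core.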
Assessing Buffering with Scheduling Schemes in a QoS Internet RouterIOSR Journals
This document examines different scheduling algorithms that could be used with RIO-C penalty enforcement buffering in a multi-queue QoS router to improve network performance. It simulates priority, round robin, and weighted round robin scheduling with RIO-C buffering. The results show that priority scheduling achieved the lowest loss rates, with 29.46% scheduler drop rate and 14.95% RED loss rate. Round robin was second with 29.53% and 10.50% losses. Weighted round robin was third with 30.28% and 3.04% losses. The document concludes that a network seeking quality of service could adopt priority scheduling with RIO-C admission control.
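Of the three schedulers compared above, weighted round robin is the least obvious to implement; a minimal dequeue sketch (illustrative only — the simulated router also applies RIO-C admission control, omitted here) looks like:

```python
class WeightedRoundRobin:
    """Minimal WRR scheduler: queue i is served weights[i] times per
    service cycle, skipping empty queues."""
    def __init__(self, weights):
        # expand weights into a fixed service cycle, e.g. [2,1] -> [0,0,1]
        self.cycle = [i for i, w in enumerate(weights) for _ in range(w)]
        self.pos = 0

    def dequeue(self, queues):
        # scan at most one full cycle looking for a nonempty queue
        for _ in range(len(self.cycle)):
            i = self.cycle[self.pos]
            self.pos = (self.pos + 1) % len(self.cycle)
            if queues[i]:
                return queues[i].pop(0)
        return None
```

With weights [2, 1], queue 0 is served twice for every service of queue 1, which is how WRR trades some of strict priority's low high-class loss for fairness across classes.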
This document describes a custom reliable file transfer protocol over UDP that is designed to perform better than TCP in lossy network conditions. It discusses the protocol design which uses sequence numbers and negative acknowledgements to provide reliability over UDP. The protocol is tested on a simulated network with varying packet loss rates and delays. Results show the protocol achieves higher throughput than TCP-based file transfer methods on lossy links.
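The receiver side of a sequence-number/NACK scheme like the one above can be sketched as follows; the structure is an assumption, since the document does not give the protocol's header layout or retransmission timers:

```python
class NackReceiver:
    """Receiver sketch for a NACK-based reliable-UDP transfer: buffer
    out-of-order segments and report gaps as negative acknowledgements."""
    def __init__(self):
        self.buffer = {}          # out-of-order segments by sequence number
        self.next_expected = 0    # first in-order sequence not yet delivered

    def on_segment(self, seq, data):
        """Store a segment; return the gap sequence numbers to NACK."""
        self.buffer[seq] = data
        nacks = [s for s in range(self.next_expected, seq) if s not in self.buffer]
        while self.next_expected in self.buffer:
            self.next_expected += 1
        return nacks
```

Because feedback is sent only for gaps, the sender keeps streaming at full rate on lossy links instead of stalling on per-segment acknowledgements, which is the source of the throughput advantage over TCP claimed above.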
International Refereed Journal of Engineering and Science (IRJES) is a peer reviewed online journal for professionals and researchers in the field of computer science. The main aim is to resolve emerging and outstanding problems revealed by recent social and technological change. IJRES provides the platform for the researchers to present and evaluate their work from both theoretical and technical aspects and to share their views.
Packet delivery ratio, delay, throughput and routing overhead are among the strict quality-of-service requirements
for applications in ad hoc networks. The routing protocol must therefore not only find a suitable path
but also ensure that the path satisfies the QoS constraints. QoS-aware routing is performed on the basis
of resource availability in the network and the QoS requirements of the flow. In this paper we develop a
source routing protocol that satisfies link-bandwidth and end-to-end delay constraints. Our protocol
finds multiple paths between the source and the destination; one is selected for data
transfer and the others are held in reserve at the source node for route maintenance purposes. Path
selection is based strictly on bandwidth and end-to-end delay; if two or more paths have the same
values for the QoS constraints, hop count is used as a tie-breaking parameter.
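The selection rule in the abstract can be sketched directly; the path record's field names are illustrative:

```python
def select_path(paths, min_bw, max_delay):
    """Keep only paths meeting the bandwidth and end-to-end delay
    constraints, then prefer higher bandwidth, lower delay, and finally
    fewer hops as the tie-breaker. Each path is a dict with 'bw',
    'delay' and 'hops' keys (illustrative names)."""
    feasible = [p for p in paths if p['bw'] >= min_bw and p['delay'] <= max_delay]
    if not feasible:
        return None
    return min(feasible, key=lambda p: (-p['bw'], p['delay'], p['hops']))
```

The unselected feasible paths would be the ones the source keeps in reserve for route maintenance.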
Abstract: CHOKe is a simple and stateless active queue management (AQM) scheme. A highly attractive property of CHOKe is that it can protect responsive TCP flows from unresponsive UDP flows, using the packets currently queued in the buffer to penalize high-bandwidth flows. It can be implemented on top of the RED algorithm: when a packet arrives at a congested router, CHOKe draws a packet at random from the FIFO buffer and compares it with the arriving packet. If both belong to the same flow, both are dropped; otherwise the randomly chosen packet is left intact and the arriving packet is admitted into the buffer with a probability that depends on the level of congestion. Congestion control schemes are otherwise typically implemented in the transport protocols (e.g., TCP) of end-hosts; to ensure global fairness, such schemes require all users to adopt them and respond to network congestion properly. Keywords: CHOKe, Random Early Detection (RED), Congestion.
Title: Ephemeral Performance of Choke
Author: Suhitha K.C
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
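The drop-comparison step described in the CHOKe abstract above can be sketched as a single admission function (the RED probability computation itself is taken as an input here):

```python
import random

def choke_admit(buffer, arriving, red_drop_prob, rng):
    """CHOKe admission sketch: compare the arrival with one randomly
    drawn queued packet; on a flow-ID match drop both, otherwise fall
    back to RED's drop probability. Packets are (flow_id, payload)
    tuples; `buffer` is a plain list standing in for the FIFO queue and
    `rng` is e.g. random.Random()."""
    if buffer:
        victim = rng.randrange(len(buffer))
        if buffer[victim][0] == arriving[0]:
            del buffer[victim]   # drop the matched queued packet...
            return False         # ...and the arriving one
    if rng.random() < red_drop_prob:
        return False             # RED-style congestion drop
    buffer.append(arriving)
    return True
```

An unresponsive flow that dominates the buffer is increasingly likely to match the randomly drawn packet, so it suffers double drops, which is exactly how CHOKe penalizes high-bandwidth flows without keeping per-flow state.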
Dynamic routing for data integrity and delay differentiated services in wirel...LeMeniz Infotech
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
This document summarizes a research paper that proposes a "packet weighted energy drain rate" mechanism to detect selfish nodes in mobile ad hoc networks. The mechanism calculates the packet weighted energy drain rate of each node to identify those that are not fairly forwarding packets and draining battery. It uses this metric along with optimal change requests and topology maintenance to mitigate the impacts of selfish nodes. Simulations were conducted in NS-2 to evaluate the performance of this approach under varying packet sizes, and it was found to improve metrics like packet delivery ratio and reduce overhead compared to existing selfish-aware scheduling.
Engineering Research Publication
International Journal of Engineering & Technical Research
ISSN : 2321-0869 (O) 2454-4698 (P)
www.erpublication.org
Clustering based Time Slot Assignment Protocol for Improving Performance in U...journal ijrtem
Recently, numerous approaches have been proposed for designing medium access control (MAC)
in underwater acoustic networks (UANs). Some of these works tried to adapt MAC protocols proposed for
terrestrial networks. However, the unique environmental characteristics of UANs make such MAC protocols hard to
use in UANs and degrade network performance. To improve network performance, the COD-TS MAC
protocol was proposed. COD-TS targets both single-hop and multi-hop modes and utilizes CDMA for
exchanging schedule information between cluster heads. COD-TS has shortcomings such as collisions, additional
energy consumption from exchanging schedule information, and the near-far effect of CDMA. To overcome these
shortcomings, we propose a clustering-based time slot assignment protocol. In the proposed protocol, nodes are
clustered, and each cluster head performs a two-hop neighbor cluster discovery operation. The cluster head then
obtains its own relative position information and, finally, assigns its own time slot for data
transmission based on that information. Simulation results show that the proposed protocol consistently
outperforms COD-TS.
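The slot-assignment idea above — a cluster head picking a slot that no head within two hops is using — can be sketched as a greedy coloring; the data structures and the deterministic processing order are assumptions for the sake of a small example:

```python
def assign_slots(neighbors):
    """Greedy sketch: each cluster head takes the smallest slot unused
    within its two-hop neighborhood, so heads that could interfere never
    share a slot. `neighbors` maps a cluster head to its one-hop
    neighbor heads (the discovery operation itself is omitted)."""
    slots = {}
    for head in sorted(neighbors):           # deterministic order for the sketch
        two_hop = set(neighbors[head])
        for n in neighbors[head]:
            two_hop |= set(neighbors[n])
        two_hop.discard(head)
        used = {slots[h] for h in two_hop if h in slots}
        slot = 0
        while slot in used:
            slot += 1
        slots[head] = slot
    return slots
```

In the real protocol each head would derive this locally from the relative position information it gathered, rather than from a global map.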
Improvement of Congestion window and Link utilization of High Speed Protocols...IOSR Journals
This document summarizes a research paper that proposes using a k-nearest neighbors (k-NN) algorithm to help high-speed transport layer protocols like CUBIC better distinguish between packet drops due to network congestion versus other factors like noise. The k-NN algorithm would analyze patterns in packet drop history to classify new drops, helping protocols avoid unnecessary window size reductions when drops are not actually due to congestion. The document provides background on high-speed protocols, issues like underutilization from treating all drops as congestion, and how incorporating k-NN classification could improve protocols' performance in noisy network conditions.
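A drop classifier of the kind described above can be sketched with a tiny k-NN over past drop events; the feature choice is an assumption, since the summary does not list the paper's exact features:

```python
def classify_drop(history, features, k=3):
    """Hypothetical k-NN over past drop events. Each history entry is
    (feature_vector, label), where label is 'congestion' or 'noise';
    the new drop is labeled by majority vote of its k nearest events."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(history, key=lambda h: dist(h[0], features))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```

A protocol like CUBIC would then skip the window reduction when the classifier returns 'noise', avoiding the underutilization the paper attributes to treating every drop as congestion.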
Delay jitter control for real time communicationMasud Rana
This document proposes a method for controlling delay jitter for real-time communication channels in a packet-switching network. It extends an existing scheme that provides bounds on maximum delay. The key aspects are:
1) Each network node contains "regulators" for each channel that reconstruct the original packet arrival pattern to preserve jitter, and a scheduler that ensures low distortion of patterns between nodes.
2) Clients specify maximum delay and jitter bounds when establishing a channel. The establishment procedure sets local jitter bounds at each node to ensure the end-to-end bound is met.
3) Regulators at each node attempt to faithfully preserve the original packet arrival pattern, so that the last node sees essentially the original pattern and the end-to-end jitter bound is met.
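The regulator's pattern-reconstruction step can be sketched for the simple case of a nominally evenly spaced stream; this is a simplification of the per-channel regulators described above, with the holding rule assumed for illustration:

```python
def regulate(arrivals, nominal_gap):
    """Jitter-regulator sketch: hold each packet until at least
    `nominal_gap` after the previous release, so departures reconstruct
    the original evenly spaced pattern despite upstream jitter."""
    releases = []
    for a in arrivals:
        if not releases:
            releases.append(a)
        else:
            releases.append(max(a, releases[-1] + nominal_gap))
    return releases
```

Packets that arrived bunched together are spread back out, at the cost of a bounded extra holding delay, which is the trade each node's local jitter bound controls.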
GPRS was established by ETSI to provide packet-switched data services in GSM networks. It introduces two new core network nodes, SGSN and GGSN, to route packets between external data networks and mobile stations. GPRS supports bit rates up to 170kbps and quality of service features. It allows dynamic allocation of radio resources and efficient delivery of packet data using concepts like always-on connectivity and burst transmissions. GPRS uses concepts like point-to-point and point-to-multipoint connections to provide services like IP, X.25, SMS and other applications to mobile users.
This document summarizes a dissertation on an improved load balancing technique for secure data in cloud computing. The dissertation discusses research issues in load balancing and data security in cloud computing. It proposes a load balancing methodology that uses a load balancer, Kerberos authentication, and Nginx load balancing algorithms like round robin and least connections to securely store and balance load of encrypted data across multiple cloud nodes. The methodology is implemented using tools like HP LoadRunner, Amazon Web Services, and Jelastic cloud platform. Performance is analyzed in terms of transaction time. The proposed technique aims to improve resource utilization, access control, data security, and efficiency in cloud environments.
This document presents a new control strategy for an islanded microgrid consisting of PV and fuel cell distributed generation units supplying both local and nonlocal unbalanced loads. The control strategy comprises voltage control using a proportional resonance controller, droop-based power sharing, and a negative sequence impedance controller. The voltage controller regulates load voltages while the droop controller shares average power. The negative sequence controller minimizes negative sequence currents in the microgrid lines by adjusting each DG unit's negative sequence output impedance, improving power quality under unbalanced conditions. Time-domain simulations in MATLAB/Simulink validate the control strategy's performance for the PV and fuel cell microgrid.
1) The document describes a case where a cadaver was found to have a variant anatomy of the flexor carpi ulnaris muscle and ulnar artery.
2) Specifically, the ulnar head of the flexor carpi ulnaris muscle was more prominent and separated the ulnar artery and nerve. The humeral and ulnar heads of the muscle were also separated by the ulnar nerve.
3) This variation could be clinically significant for plastic surgeons performing flap surgeries and surgeons dealing with cubital tunnel syndrome where the ulnar nerve passes. Knowledge of unusual anatomy in the forearm is important for surgical planning and interpretation of imaging studies.
This document summarizes a technique called mixed single frequency delay line filtering that is proposed to optimize computational complexity and processing delay for multichannel audio crosstalk cancellation. The technique combines mixed filtering and single frequency delay line filtering. Mixed filtering allows all filtering operations to be performed in a single equation in the frequency domain, reducing computations. Single frequency delay line filtering partitions long filter impulse responses into shorter partitions that are filtered using overlap-save, reducing delay. The proposed technique partitions impulse responses and applies mixed filtering to further reduce computations over existing methods. It is shown to provide less computational complexity and delay than overlap-save filtering while maintaining performance for long impulse responses.
Design of Neural Network Controller for Active Vibration control of Cantileve...IJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, and Assessment…. And many more.
Influence choice of the injection nodes of energy source on on-line losses of...IJMER
A Novel Method for Blocking Misbehaving Users over Anonymizing NetworksIJMER
The Method of Repeated Application of a Quadrature Formula of Trapezoids and ...IJMER
Optimal Converge cast Methods for Tree- Based WSNsIJMER
Combating Bit Losses in Computer Networks using Modified Luby Transform CodeIJMER
This document summarizes a paper that discusses generating and transmitting electrical power through a Solar Power Satellite (SPS). An SPS would collect solar energy in space using solar cells on a large satellite. The energy would be converted to microwaves and transmitted to a rectenna on Earth, which would convert it back to electricity. Key aspects of an SPS system discussed are the components of the satellite including solar panels and microwave generators, transmitting power through microwaves, and receiving power on a rectenna on Earth. Safety considerations around human exposure to microwave power levels are also addressed.
Visual Quality for both Images and Display of Systems by Visual Enhancement u...IJMER
Acquisition of Long Pseudo Code in Dsss SignalIJMER
This document discusses a mobile app called "Road Factor" that uses GPS to provide information on road conditions. It allows users to view details of the road they are currently on, like when it was last resurfaced. Government agencies can use the data to monitor roads and make planning/budget decisions to improve road maintenance. The app aims to help build better transportation infrastructure and facilitate governance through electronic monitoring of roadwork. It connects to a centralized database containing road condition details for cities across India.
The document summarizes a study on the effect of additives on the electrical resistivity of pulp black liquor-sawdust blends. Scanning electron microscopy and Fourier-transform infrared spectroscopy were used to analyze the morphology and chemical structure of samples. Electrochemical impedance spectroscopy and cyclic voltammetry techniques demonstrated that adding natural binders like wax and starch increases the electrical resistivity of the blends by filling pores in the black liquor and retarding the movement of electric charges. The addition of wax or starch decreases the conductivity of black liquor composite samples and makes them more insulating.
1) The document describes the development of a fuzzy rule-based (FRB) model for obtaining optimal reservoir releases using data from the Ukai Reservoir project in India.
2) The FRB model operates on an "if-then" principle to determine optimal monthly reservoir releases based on inputs of inflow, storage, and demand. Membership functions were created and fuzzy rules were formulated based on these inputs and the output of release.
3) Results show the FRB model was able to satisfy demand completely in all months considered for 2007 and 2011 while saving a significant amount of water compared to actual historical releases.
MCDM Techniques for the Selection of Material Handling Equipment in the Autom...IJMER
This document discusses different multi-criteria decision making (MCDM) techniques for selecting material handling equipment in the automobile industry. It first provides background on material handling and the automobile industry. It then reviews literature on relevant criteria for equipment selection: material, move, and method. The document proceeds to describe four MCDM techniques in detail: Analytic Hierarchy Process (AHP), Fuzzy Analytic Hierarchy Process (FAHP), Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), and provides numerical examples of applying each technique to different shops (body, paint, trim, final assembly) to determine the best material handling equipment.
AC-DC-AC-DC Converter Using Silicon Carbide Schottky DiodeIJMER
This document discusses the advantages of silicon carbide (SiC) diodes compared to silicon diodes for use in power electronic applications. SiC diodes have several benefits including lower leakage current even at high temperatures, faster switching speeds, and negligible reverse recovery current. The document provides details on the crystal structure of SiC and compares static and dynamic characteristics of SiC and silicon diodes. It also presents simulation results of an AC-DC-DC converter using a SiC diode that demonstrate lower power losses compared to using a silicon diode. In conclusion, the SiC diode enables more efficient power conversion due to its material properties and performance advantages over silicon diodes.
This document discusses an email automation system created using Java Server Pages (JSP) and SQL Server 2000. The system allows users to send and receive emails and includes features like tracking emails, reporting, adding/removing accounts. It uses a client-server architecture for storage and access. Dataflow diagrams and database tables are presented. The system provides a user-friendly interface and leverages security and recovery of the database platform. Automating email could provide significant benefits in efficiency and reduced workload compared to a manual system.
An Efficient Distributed Control Law for Load Balancing in Content Delivery N...IJMER
Fast Distribution of Replicated Content to Multi- Homed ClientsIDES Editor
Clients nowadays can potentially have access to more than
one communication network due to the availability of a wide
variety of access technologies. Meanwhile, service replication
has become a common approach in overlay networks to provide
high availability of data and better QoS. In this paper, we
consider such a multi-homed client seeking a replicated service
in an overlay network (e.g., CDN, peer-to-peer). Our aim is to
improve content distribution by proposing a new model applied
at the application level and in a fully distributed way.
Basically, our model determines the best mirror server reachable
through each of the client's network interfaces based on an
application utility function. The requested content is then
downloaded from the determined best servers simultaneously
through their associated interfaces. Each best server delivers
a specific estimated range of bytes (i.e., a content chunk) to
an independent TCP socket opened at the client side, and the
chunks are finally aggregated at the application level. Our
real experiments show that our model considerably improves the
QoS (e.g., content transfer time) perceived by the client
compared to traditional content distribution techniques.
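The per-interface byte-range split described above can be sketched as follows. The abstract does not specify how the estimated ranges are sized, so sizing them in proportion to each best server's estimated rate is an assumption for illustration:

```python
def chunk_ranges(content_len, est_rates):
    """Split `content_len` bytes into one inclusive byte range per
    interface, sized roughly in proportion to each best server's
    estimated rate; the last range absorbs the rounding remainder."""
    total = sum(est_rates)
    ranges, start = [], 0
    for i, r in enumerate(est_rates):
        end = content_len if i == len(est_rates) - 1 else start + content_len * r // total
        ranges.append((start, end - 1))   # inclusive, HTTP Range-style
        start = end
    return ranges
```

Each range would then be requested over its own TCP socket (one per interface), and the client reassembles the chunks in order at the application level.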
A Distributed Control Law for Load Balancing in Content Delivery NetworksSruthi Kamal
1. The document presents a novel load balancing algorithm for content delivery networks that aims to minimize load imbalance and metric movement costs.
2. It proposes estimating system state through probability distributions of node capacities and load to help peers schedule transfers without centralized control.
3. Each peer independently manipulates partial system information and reassigns virtual servers based on the approximated system state.
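The reassignment step in point 3 can be sketched as a greedy load mover; the paper's probabilistic estimation of node capacities and loads is omitted here, and the unit-at-a-time move rule is an assumption:

```python
def rebalance(loads, capacities):
    """Greedy sketch: repeatedly move one unit of load (one small
    virtual server) from the most utilized node to the least utilized
    one, stopping once a further move would only invert the imbalance.
    Mutates `loads` in place and returns the list of (from, to) moves."""
    util = lambda i: loads[i] / capacities[i]
    moves = []
    while True:
        hi = max(range(len(loads)), key=util)
        lo = min(range(len(loads)), key=util)
        if loads[hi] == 0 or util(hi) - util(lo) <= 1e-9:
            break
        # stop when one more move would just swap which node is loaded
        if (loads[hi] - 1) / capacities[hi] < (loads[lo] + 1) / capacities[lo]:
            break
        loads[hi] -= 1
        loads[lo] += 1
        moves.append((hi, lo))
    return moves
```

Bounding the number of moves like this is what keeps the metric movement cost low while the utilizations converge.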
Content Distribution Network(CDN) Report(IEEE Format)KeshavKumar315
A content delivery network (CDN) is a system of distributed servers (network) that deliver pages and other Web content to a user, based on the geographic locations of the user, the origin of the webpage and the content delivery server.
It is also known as a content distribution network.
35 content distribution with dynamic migration of services for minimum cost u...INFOGAIN PUBLICATION
Content Delivery Networks are key to today's internet content delivery. Users access CDNs via the internet, knowingly or unknowingly: however much data a user retrieves, a CDN may stand behind every character of text and every pixel of an image. CDNs came into existence to solve the delay problem: between the moment a user requests a web page and the response being delivered to the user's web browser, a large delay can occur. The main goal of this paper is the content distribution of web services to multiple data centers placed in different geographical locations, while providing security. A content distribution service is a major part of popular Internet applications. The proposed system uses hybrid clouds, i.e., both private and public clouds, with one data center allocated to each region. Providing security to the data is always an important issue because of the critical nature of the cloud and the very large amount of complicated data it carries; a ciphertext-policy algorithm is used for this purpose. An authentication technique verifies the user: only if the user is authorized to access the services does he receive the configuration key to use them.
Cost-Minimizing Dynamic Migration of Content Distribution Services into Hybri...nexgentechnology
This document discusses overlay networks, with a focus on content distribution networks (CDNs). It defines what an overlay network is and provides examples like MBone, 6Bone, and peer-to-peer networks. It then describes the motivations for content networks like reducing congestion and improving performance. Finally, it explains the components of a typical CDN including request routing, content distribution, and accounting infrastructures.
A content delivery network (CDN) distributes content across multiple servers located in different geographic locations. This improves performance by serving content from the server closest to the user. CDNs provide advantages like decreasing server load, faster content delivery, 100% availability even during outages, increased concurrent users, and more control over asset delivery. While CDNs improve performance, they also have disadvantages to consider like increased complexity and upfront costs.
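The "closest server" idea above can be sketched as a simple great-circle lookup: route each client to the edge node with the smallest geographic distance. The node names and coordinates are invented for the example, and a production CDN would also weigh measured latency and current load, not distance alone.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def closest_edge(client, edges):
    """edges: dict of node name -> (lat, lon). Returns the nearest node's name."""
    return min(edges, key=lambda name: haversine_km(client, edges[name]))

# Illustrative edge locations (made up for this example)
edges = {"frankfurt": (50.1, 8.7), "singapore": (1.35, 103.8), "virginia": (39.0, -77.5)}
```

Real request routing is usually done via DNS or anycast rather than an explicit distance computation, but the selection criterion is the same in spirit.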
HYBRID OPTICAL AND ELECTRICAL NETWORK FLOWS SCHEDULING IN CLOUD DATA CENTRESijcsit
This document summarizes a research paper on scheduling flows in hybrid optical and electrical networks for cloud data centers. The paper proposes a strategy for selecting which flows are suitable to switch from the electrical packet network to the optical circuit network. It presents techniques for detecting bottlenecks in the packet network and selecting flows to offload. Simulation results showed improved network performance from this flow selection approach, including higher average throughput, lower configuration delay, and more stable offloaded flows.
International Journal of Engineering and Science Invention (IJESI)inventionjournals
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
A Survey on CDN Vulnerability to DoS AttacksIJCNCJournal
This document summarizes research on vulnerabilities in content delivery networks (CDNs) to denial-of-service (DoS) attacks, specifically forwarding loop attacks. CDNs improve website performance but expose nodes to disproportionate traffic from requests circulating endlessly between nodes. The paper analyzes vulnerabilities in commercial CDNs and proposes defensive strategies. Key findings include that adding random search strings to URLs forces edge servers to retrieve content from origin servers, rather than caches. This can be exploited to amplify attacks by overloading origin servers with traffic even if the attacking client reduces its own bandwidth.
A Survey on CDN Vulnerability to DoS AttacksIJCNCJournal
Content Delivery Networks (CDN), or ”content distribution networks” have been introduced to improve performance, scalability, and security of data distributed through the web. To reduce the response time of a web page when certain content is requested, the CDN redirects requests from users’ browsers to geographically distributed surrogate nodes, thus having a positive impact on the response time and network load. As a side effect, the surrogate servers manage possible attacks, especially denial of service attacks, by distributing the considerable amount of traffic generated by malicious activities among different data centers. Some CDNs provide additional services to normalize traffic and filter intrusion attacks, thus further mitigating the effects of possible unpleasant scenarios. Despite the presence of these native protective mechanisms, a malicious user can undermine the stability of a CDN by generating a disproportionate amount of traffic within a CDN thanks to endless cycles of requests circulating between nodes of the same network or between several distinct networks. We refer in particular to Forwarding Loops Attacks, a collection of techniques that can alter the regular forwarding process inside CDNs. In this paper, we analyze the vulnerability of some commercial CDNs to this type of attacks and then propose some possible useful defensive strategies.
Service Request Scheduling based on Quantification Principle using Conjoint A...IJECEIAES
This document presents a service request scheduling technique for heterogeneous distributed systems using quantification principles. It uses conjoint analysis to identify the most influential server attribute, and z-score to quantify attribute values. Servers are assigned a "servicing cutoff" percentage based on z-scores, indicating each server's share of total requests. Requests are prioritized and assigned to servers without exceeding capacity limits. The technique aims to evenly distribute workload among servers according to their quantified capacities. Experimental results showed improved performance over other scheduling principles.
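A minimal sketch of the z-score-based "servicing cutoff" described above: quantify one server attribute with z-scores, then convert the scores into each server's share of the total request load. The shift used to make all weights positive is our own assumption, not the paper's exact formula.

```python
from statistics import mean, pstdev

def servicing_cutoffs(attr_values):
    """Map raw attribute values (e.g. CPU capacity) to load shares summing to 1."""
    mu, sigma = mean(attr_values), pstdev(attr_values)
    if sigma == 0:
        return [1.0 / len(attr_values)] * len(attr_values)  # identical servers: equal shares
    z = [(v - mu) / sigma for v in attr_values]
    shifted = [s - min(z) + 1.0 for s in z]   # shift so every weight is positive
    total = sum(shifted)
    return [w / total for w in shifted]
```

Requests would then be dispatched to each server in proportion to its share, never exceeding its quantified capacity.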
Traffic-aware adaptive server load balancing for softwaredefined networks IJECEIAES
Servers in data center networks handle heterogeneous bulk loads. Load balancing, therefore, plays an important role in optimizing network bandwidth and minimizing response time. A complete knowledge of the current network status is needed to provide a stable load in the network. The process of network status catalog in a traditional network needs additional processing which increases complexity, whereas, in software defined networking, the control plane monitors the overall working of the network continuously. Hence it is decided to propose an efficient load balancing algorithm that adapts SDN. This paper proposes an efficient algorithm TAASLB-traffic-aware adaptive server load balancing to balance the flows to the servers in a data center network. It works based on two parameters, residual bandwidth, and server capacity. It detects the elephant flows and forwards them towards the optimal server where it can be processed quickly. It has been tested with the Mininet simulator and gave considerably better results compared to the existing server load balancing algorithms in the floodlight controller. After experimentation and analysis, it is understood that the method provides comparatively better results than the existing load balancing algorithms.
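A toy version of the two-parameter selection TAASLB describes: flag elephant flows by a rate threshold and steer them to the server with the best blend of residual bandwidth and spare capacity. The threshold, the equal weights, and the bandwidth normalization are illustrative assumptions, not the algorithm's published parameters.

```python
def is_elephant(flow_rate_mbps, threshold=100):
    """Flows above the threshold are candidates for rerouting."""
    return flow_rate_mbps > threshold

def pick_server(servers):
    """Choose the server maximizing a blend of residual bandwidth and spare capacity.

    servers: list of dicts with keys name, residual_bw (Mbps), spare_capacity (0-1).
    """
    def score(s):
        return 0.5 * (s["residual_bw"] / 1000.0) + 0.5 * s["spare_capacity"]
    return max(servers, key=score)["name"]
```

In an SDN deployment, the controller would compute this score from its continuously monitored network view and install the forwarding rule for the elephant flow.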
The document proposes a method for grouping files and allocating jobs using server scheduling to balance load. It involves splitting a server into multiple sub-servers. The performance of client machines is analyzed based on factors like processing speed, bandwidth, and memory usage. Jobs are then assigned to sub-servers based on these performance analyses, with the goal of completing all tasks quickly. Once tasks are complete, files are distributed to the respective client machines. The proposed method aims to reduce the workload on servers and improve response times compared to the existing system.
The Grouping of Files in Allocation of Job Using Server Scheduling In Load Ba...iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
E VALUATION OF T WO - L EVEL G LOBAL L OAD B ALANCING F RAMEWORK IN C L...ijcsit
With technological advancements and constant changes of the Internet, cloud computing has become today's trend. Given the lower cost and convenience of cloud computing services, users have increasingly put their Web resources and information in the cloud environment, so the availability and reliability of these systems become increasingly important. Today, even the slightest interruption of a cloud application has a significant impact on users, making it an important issue to ensure the reliability and stability of cloud sites. Load balancing would be one good solution.
This paper presents a framework for global server load balancing of Web sites in a cloud, using a two-level load balancing model. The proposed framework adapts an open-source load-balancing system and allows the network service provider to deploy load balancers in different data centers dynamically when customers need more load balancers to increase availability.
Similar to Load balancing in Content Delivery Networks in Novel Distributed Equilibrium (20)
A Study on Translucent Concrete Product and Its Properties by Using Optical F...IJMER
Translucent concrete is a concrete-based material with light-transmitting properties, obtained by embedding optical elements such as optical fibers in the concrete. Light is conducted through the concrete from one end to the other, resulting in a certain light pattern on the other surface depending on the fiber structure. Optical fibers transmit light so effectively that there is virtually no loss of light conducted through them. This paper deals with the modeling of such translucent or transparent concrete blocks and panels, their usage, and the advantages they bring to the field. The main purpose is to use sunlight as a light source to reduce the power consumption of illumination, to use the optical fiber to sense the stress of structures, and to use this concrete for architectural purposes in buildings.
Developing Cost Effective Automation for Cotton Seed DelintingIJMER
A low-cost automation system for removing lint from cottonseed is designed and developed. The setup consists of a stainless steel drum with a stirrer in which linted cottonseed is mixed with concentrated sulphuric acid so that the lint is burned off. The lint-free cottonseed is then treated with lime water to neutralize the acid and, after water washing, is used for agricultural purposes.
Study & Testing Of Bio-Composite Material Based On Munja FibreIJMER
The incorporation of natural fibres such as munja fibre into composites has gained increasing application in many areas of engineering and technology, mainly because of their practical benefits: they are lightweight and low cost compared to synthetic fibre composites. The aim of this study is to evaluate mechanical properties such as the flexural and tensile properties of reinforced epoxy composites. Munja fibres have recently become a substitute material in many weight-critical applications in areas such as aerospace, automotive, and other demanding industrial sectors. In this study, natural munja fibre composites and munja/fibreglass hybrid composites were fabricated by a combination of hand lay-up and cold-press methods. The present work studies a new variety of munja fibre; its main aim is to extract the neat fibre and characterize its flexural behaviour. The composites are fabricated by reinforcing untreated and treated fibre and are tested for their mechanical properties strictly per ASTM procedures.
Hybrid Engine (Stirling Engine + IC Engine + Electric Motor)IJMER
A hybrid engine is a combination of a Stirling engine, an IC engine, and an electric motor, all three connected to a single shaft. The power source of the Stirling engine will be a solar panel. The aim is to run the automobile using this hybrid engine.
Fabrication & Characterization of Bio Composite Materials Based On Sunnhemp F...IJMER
This document summarizes research on the fabrication and characterization of bio-composite materials using sunnhemp fibre. The document discusses how sunnhemp fibre was used to reinforce an epoxy matrix through hand lay-up methods. Various mechanical properties of the bio-composites were tested, including tensile, flexural, and impact properties. The results of the mechanical tests on the bio-composite specimens are presented. Potential applications of the sunnhemp fibre bio-composites are also suggested, such as in fall ceilings, partitions, packaging, automotive interiors, and toys.
Geochemistry and Genesis of Kammatturu Iron Ores of Devagiri Formation, Sandu...IJMER
The greenstone belts of Karnataka are enriched in BIFs in the Dharwar craton, where iron formations are confined to the basin shelf, clearly separated from the deeper-water iron formation that accumulated at the basin margin flanking the marine basin. Geochemical data procured in terms of major elements, trace elements, and REE are plotted in various diagrams to interpret the genesis of the BIFs. Al2O3, Fe2O3(T), TiO2, CaO, and SiO2 abundances and ratios show wide variation. Ni, Co, Zr, Sc, V, Rb, Sr, U, Th, ΣREE, La, Ce, and Eu anomalies and their binary relationships indicate that wherever the terrigenous component has increased, the concentration of felsic elements such as Zr and Hf has gone up. Elevated concentrations of Ni, Co, and Sc are contributed by chlorite and other components characteristic of basic volcanic debris. The data suggest that these formations were generated by chemical and clastic sedimentary processes on a shallow shelf. During transgression, chemical precipitation took place at the sediment-water interface, whereas at the time of regression, iron ore formed with sedimentary structures and textures in the Kammatturu area, in a setting where the water column was oxygenated.
Experimental Investigation on Characteristic Study of the Carbon Steel C45 in...IJMER
In this paper, the mechanical characteristics of C45 medium carbon steel are investigated under various working conditions. The main characteristic studied is the impact toughness of the material in different configurations, and the experiments were carried out on Charpy impact testing equipment. The study reveals the ability of the material to absorb energy up to failure for various specimen configurations under different heat-treated conditions, and the corresponding results were compared with the analysis outcome.
Non linear analysis of Robot Gun Support Structure using Equivalent Dynamic A...IJMER
Robot guns are being increasingly employed in automotive manufacturing to replace
risky jobs and also to increase productivity. Using a single robot for a single operation proves to be
expensive. Hence for cost optimization, multiple guns are mounted on a single robot and multiple
operations are performed. Robot Gun structure is an efficient way in which multiple welds can be done
simultaneously. However mounting several weld guns on a single structure induces a variety of
dynamic loads, especially during movement of the robot arm as it maneuvers to reach the weld
locations. The primary idea employed in this paper is to model those dynamic loads as equivalent G-force loads in FEA. This approach is on the conservative side, saves time, and is consequently cost efficient. The paper works toward a standard operating procedure for the analysis of such structures, with emphasis on deploying various technical aspects of FEA such as nonlinear geometry, the multipoint-constraint contact algorithm, and multizone meshing.
Static Analysis of Go-Kart Chassis by Analytical and Solid Works SimulationIJMER
This paper presents the modelling, simulation, and static analysis of a go-kart chassis consisting of circular beams. Modelling, simulation, and analysis are performed using the 3-D modelling software SolidWorks and ANSYS according to the rulebook provided by the Indian Society of New Era Engineers (ISNEE) for the National Go Kart Championship (NGKC-14). The maximum deflection is determined by static analysis. Computed results are then compared with analytical calculations; the location of maximum deflection agrees well with the theoretical approximation but varies in magnitude.
In recent years various vehicles have been introduced in the market, but limits on carbon emissions and BS-series standards restrict the speeds available, and fuel vehicles have caused environmental pollution for years, so there is a need to decrease dependency on them. The bicycle can be modified as an option for the future. We implement a new technique using a change in the pedal assembly and a variable-speed gearbox, such as a planetary gear, to optimize vehicle speed with a variable speed ratio, to increase the efficiency of the bicycle for a comfortable ride, and to reduce the torque applied to the bicycle. We introduce an epicyclic gearbox in which transmission to the rear wheel is done through a chain drive (i.e., sprocket), with the epicyclic gearbox giving a number of different speeds during driving and the changed pedal mechanism reducing the torque required.
Integration of Struts & Spring & Hibernate for Enterprise ApplicationsIJMER
This document discusses integrating the Spring, Struts, and Hibernate frameworks to develop enterprise applications. It provides an overview of each framework and their features. The Spring Framework is a lightweight, modular framework that allows for inversion of control and aspect-oriented programming. It can be used to develop any or all tiers of an application. The document proposes an architecture for an e-commerce website that integrates these three frameworks, with Spring handling the business layer, Struts the presentation layer, and Hibernate the data access layer. This modular approach allows for clear separation of concerns and reduces complexity in application development.
Microcontroller Based Automatic Sprinkler Irrigation SystemIJMER
The microcontroller-based automatic sprinkler system is a new concept that applies the intelligence of embedded technology to sprinkler irrigation. The designed system replaces the conventional manual work involved in sprinkler irrigation with an automatic process. Using this system, a farmer is protected against adverse weather conditions, the tedious work of changing over sprinkler water pipelines, and the risk of accidents due to high pressure in the water pipeline. Overall, sprinkler irrigation is transformed into comfortable automatic work. The system provides flexibility and accuracy in the times set for operating the sprinkler water pipelines. In the present work, the author has designed and developed an automatic sprinkler irrigation system controlled and monitored by a microcontroller interfaced with solenoid valves.
On some locally closed sets and spaces in Ideal Topological SpacesIJMER
This document introduces and studies the concept of δˆ s-locally closed sets in ideal topological spaces. Some key points:
- A subset A is δˆ s-locally closed if A can be written as the intersection of a δˆ s-open set and a δˆ s-closed set.
- Various properties of δˆ s-locally closed sets are introduced and characterized, including relationships to other concepts like generalized locally closed sets.
- It is shown that a subset A is δˆ s-locally closed if and only if A can be written as the intersection of a δˆ s-open set and the δˆ s-closure of A.
Intrusion Detection and Forensics based on decision tree and Association rule...IJMER
This paper presents an approach based on the combination of two techniques, decision trees and association rule mining, for probe attack detection. The approach proves better than the traditional approach of generating rules for a fuzzy expert system by clustering methods. Association rule mining selects the best attributes together, and decision trees identify the best parameters for creating the rules of the fuzzy expert system. Rules for the fuzzy expert system are then generated using association rule mining and decision trees: a decision tree is generated for the dataset to find the basic parameters for creating the membership functions of the fuzzy inference system, and membership functions are generated for the probe attack. Based on these rules, the fuzzy inference system is created and used as input to a neuro-fuzzy system; it is loaded into the neuro-fuzzy toolbox, and the final ANFIS structure is generated as the outcome of the neuro-fuzzy approach. Experiments and evaluations of the proposed method were done with the NSL-KDD intrusion detection dataset. The experimental results show that the proposed combined approach efficiently detects probe attacks and performs better at detecting intrusions than existing methods.
Natural Language Ambiguity and its Effect on Machine LearningIJMER
This document discusses natural language ambiguity and its effect on machine learning. It begins by introducing different types of ambiguity that exist in natural languages, including lexical, syntactic, semantic, discourse, and pragmatic ambiguities. It then examines how these ambiguities present challenges for computational linguistics and machine translation systems. Specifically, it notes that ambiguity is a major problem for computers in processing human language as they lack the world knowledge and context that humans use to resolve ambiguities. The document concludes by outlining the typical process of machine translation and how ambiguities can interfere with tasks like analysis, transfer, and generation of text in the target language.
In today's software industry there is no perfect framework available for analysis and software development. An enormous number of software development processes exist that can be implemented to stabilize the process of developing a software system, but no perfect system has yet been recognized that can help developers choose the best development process. This paper presents the framework of a skillful system combined with a Likert scale. With the help of the Likert scale we define a rule-based model, delegate a mass score to every process, and develop a tool named MuxSet that helps software developers select an appropriate development process, enhancing the probability of system success.
Material Parameter and Effect of Thermal Load on Functionally Graded CylindersIJMER
The present study investigates creep in thick-walled composite cylinders made up of an aluminum/aluminum-alloy matrix reinforced with silicon carbide particles. The distribution
of SiCp is assumed to be either uniform or decreasing linearly from the inner to the outer radius of
the cylinder. The creep behavior of the cylinder has been described by threshold stress based creep
law with a stress exponent of 5. The composite cylinders are subjected to internal pressure which is
applied gradually and steady state condition of stress is assumed. The creep parameters required to
be used in creep law, are extracted by conducting regression analysis on the available experimental
results. The mathematical models have been developed to describe steady state creep in the composite
cylinder by using von-Mises criterion. Regression analysis is used to obtain the creep parameters
required in the study. The basic equilibrium equation of the cylinder and other constitutive equations
have been solved to obtain creep stresses in the cylinder. The effect of varying particle size, particle
content and temperature on the stresses in the composite cylinder has been analyzed. The study
revealed that the stress distributions in the cylinder do not vary significantly for various combinations
of particle size, particle content and operating temperature except for slight variation observed for
varying particle content. Functionally Graded Materials (FGMs) emerged and led to the development
of superior heat resistant materials.
An energy audit is the systematic process of finding energy conservation opportunities in industrial processes. The project carried out studies on the application of various energy conservation measures in areas like lighting, motors, compressors, transformers, and ventilation systems. This investigation studied the technical aspects of the various measures along with their cost-benefit analysis.
Investigation found that major areas of energy conservation are-
1. Energy efficient lighting schemes.
2. Use of electronic ballast instead of copper ballast.
3. Use of wind ventilators for ventilation.
4. Use of VFD for compressor.
5. Transparent roofing sheets to reduce energy consumption.
An energy audit is thus the most thorough and analytical way of achieving industrial energy conservation.
An Implementation of I2C Slave Interface using Verilog HDLIJMER
This document describes the implementation of an I2C slave interface using Verilog HDL. It introduces the I2C protocol which uses only two bidirectional lines (SDA and SCL) for communication. The document discusses the I2C protocol specifications including start/stop conditions, addressing, read/write operations, and acknowledgements. It then provides details on designing an I2C slave module in Verilog that responds to commands from an I2C master and allows synchronization through clock stretching. The module is simulated in ModelSim and synthesized in Xilinx. Simulation waveforms demonstrate successful read and write operations to the slave device.
Discrete Model of Two Predators competing for One PreyIJMER
This paper investigates the dynamical behavior of a discrete model of a one-prey, two-predator system. The equilibrium points and their stability are analyzed. Time series plots are obtained for different sets of parameter values, and bifurcation diagrams are plotted to show the dynamical behavior of the system over a selected range of the growth parameter.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
AI for Legal Research with applications, toolsmahaffeycheryld
AI applications in legal research include rapid document analysis, case law review, and statute interpretation. AI-powered tools can sift through vast legal databases to find relevant precedents and citations, enhancing research accuracy and speed. They assist in legal writing by drafting and proofreading documents. Predictive analytics help foresee case outcomes based on historical data, aiding in strategic decision-making. AI also automates routine tasks like contract review and due diligence, freeing up lawyers to focus on complex legal issues. These applications make legal research more efficient, cost-effective, and accessible.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main objective of this survey is to determine energy efficiency knowledge among consumers. Two districts in Bangladesh were selected for a survey of households, showrooms, and sellers covering energy use. The survey data are used to fit regression equations from which energy efficiency knowledge can be predicted, and are analyzed and calculated based on five important criteria. The initial target was to find factors that help predict a person's energy efficiency knowledge. The survey found that energy efficiency awareness among the country's people is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior: younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons, and education also influences attitudes toward energy conservation in Bangladesh. Both low-education and high-education households indicate that environmental concerns are a primary motivation for saving electricity.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELijaia
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Digital Twins Computer Networking Paper Presentation.pptx
Load balancing in Content Delivery Networks in Novel Distributed Equilibrium
International Open Access Journal of Modern Engineering Research (IJMER)
IJMER | ISSN: 2249–6645 | www.ijmer.com | Vol. 4 | Iss. 6 | June 2014

V. Kamakshamma¹, D. Anil²
¹Department of CSE, VEC, Kavali
²Asst. Professor, Department of CSE, VEC, Kavali
I. Introduction
The average size of web objects has grown over the years. Today Internet users regularly download rented films (e.g., from iTunes and Netflix), TV programmes (e.g., using BBC iPlayer), large security updates, virtual machine images and entire operating system distributions. File sizes for this content can range from a few megabytes (security patches) to several gigabytes (rented high-definition films). Content providers use content delivery networks (CDNs), such as Akamai [16], Limelight, CoralCDN [3] and CoDeeN [17], to provide files to millions of Internet users through a distributed network of content servers.
To achieve a scalable and reliable service, a CDN should have two desirable properties. First, load awareness should partition requests across a group of servers with replicated content, balancing computational load and network congestion. This increases the number of users requesting the same content that can be served. The degree of replication may be chosen dynamically to handle surges of incoming requests (flash crowds) when content suddenly becomes popular (known as the Slashdot effect) [2]. Second, locality awareness should exploit proximity relationships in the network, such as geographic distance, round-trip latency or topological distance, when assigning client requests to content servers.
Intuitively, by keeping network paths short, a CDN can provide better quality-of-service (QoS) to clients. It is challenging to design a CDN that makes a trade-off between load- and locality-awareness. Simple CDNs that always redirect clients to the geographically-closest content servers lack load-balancing and suffer from overload when local content popularity increases. More advanced CDNs that are based on distributed hash tables (DHTs) [15] primarily focus on load-balancing. They use network locality only as a tie breaker between multiple servers, leading to sub-optimal decisions about network locality. State-of-the-art CDNs such as Akamai [16] are proprietary, with little public knowledge of the complex optimizations that they perform.
In this paper, we describe a new type of CDN that uses load-aware network coordinates (LANCs) to naturally capture the tension between load and locality awareness. LANCs are synthetic coordinates (calculated using a metric embedding [10] of application-level delay measurements) that incorporate the network location and load of content servers. Our CDN uses LANCs to map clients dynamically to the most appropriate server in a decentralised fashion. Popular content is replicated among nearby content servers with low load. By combining locality and load using the unified mechanism of LANCs, we simplify the design of the CDN and discard the need for ad-hoc solutions. The main contributions of this work are: (1) the introduction of LANCs, showing how they react to CPU load and network congestion; (2) the design and implementation of a CDN that uses LANCs to route client requests to content servers and replicate content; (3) a preliminary evaluation of our LANC-based CDN on PlanetLab, demonstrating its benefits in comparison to other approaches.
Our results show that, with a locality-heavy workload, our approach achieves almost an order of magnitude lower request times compared to direct retrieval of content. The rest of the paper is structured as follows. In Section 2, we discuss the requirements for CDNs in more detail and, in Section 3, we describe LANCs. We present our CDN design in Section 4, focusing on request mapping and request redirection. In Section 5, we evaluate LANCs on a local network and also present results from a large-scale PlanetLab deployment. Section 6 describes related work and Section 7 concludes with an outlook on future work.
Abstract: To serve users with highly available data, content delivery networks (CDNs) must balance requests between servers while assigning clients to the closest servers. In this paper, we describe a new CDN design that associates artificial load-aware coordinates with clients and data servers and uses them to direct content requests to cached data. This approach maintains good accuracy and service quality when request workloads and resource availability in the CDN are dynamic. A deployment and evaluation of our system on PlanetLab demonstrates that it achieves low request times with high cache hit ratios compared to other CDN approaches.
II. Content Delivery Networks
Global CDNs are typically implemented as a distributed network of servers hosting replicated content. Servers may be located at geographically-diverse data centres, providing content to users in a given region. Web clients issue requests to fetch content from the CDN, which are satisfied by content servers according to a particular mapping strategy. A goal of the CDN is to minimize total request time, i.e., the time until a client has successfully retrieved desired content.
Load awareness. For a CDN to be load-aware, there must be a mechanism that balances load between content servers. This may be done by, for example, distributing requests uniformly across all content servers or redirecting requests to the least-loaded server. If content requests go to overloaded servers, clients experience poor performance. In our model, we distinguish between load balancing for computational load and network congestion. (1) Computational load at a content server is caused by the processing associated with content requests. This involves request parsing, retrieval of cached content if there is a cache hit and content delivery to the client. (2) Network congestion is caused by the limited capacity of network paths to carry traffic. This creates bottleneck links that determine the throughput at which content can be delivered to clients. Considering the Internet path from client to server, the bottleneck may be: (a) at the client’s access link, in which case only the performance of this client is affected and the CDN cannot provide any remedy; (b) at the client’s upstream ISP or Internet backbone, affecting a larger number of clients. Here, the choice of a different content server may lead to improved performance; (c) at the content servers’ access link, affecting all client requests to this server. In this case, the CDN should redirect requests away from the congested server.
Locality awareness.
A CDN is locality-aware if network paths are kept short. For example, a CDN can take request origins
into account and return content from nearby servers with low load. Proximity may be defined in terms of
geographic distance, latency, number of routing hops or overlap between address prefixes. By minimising
network path lengths, clients are more likely to experience better QoS [11]. Intuitively, this is because: (1) short
paths offer low latencies, which helps TCP obtain high throughput quickly, therefore reducing transmission times
for small content; (2) short paths are less likely to encounter congestion hot-spots, resulting in improved
throughput; (3) short paths tend to be more reliable as they involve fewer network links and routers; (4) short
paths decrease overall network saturation, leaving more spare network capacity for other traffic.
III. Load Aware Network Coordinates
Overlay networks can use network coordinates (NCs) [10] to become locality-aware [13]. In a NC
system, each node maintains a synthetic n-dimensional coordinate (of low dimensionality, typically 2 ≤ n ≤ 5)
based on round-trip latency measurements between nodes. The NCs of nodes are calculated by embedding the
graph of latency measurements into a metric space. Euclidean distances between NCs then predict the actual
communication latencies. NCs are updated dynamically to reflect changes in Internet latencies. A benefit of NCs
is that they estimate missing measurements. They allow Internet nodes to reason about their relative proximities
without having to collect all measurements. However, triangle inequality violations found in Internet latencies
make it impossible to embed measurements without error [9], resulting in less accurate latency prediction. There
is also a constant measurement overhead when maintaining NCs in the background.
Vivaldi. Our CDN uses the Pyxida library [14] to maintain NCs according to the Vivaldi [1] algorithm. Vivaldi is a decentralized algorithm that computes NCs using a spring relaxation technique. Nodes are modeled as point masses and latencies as springs between nodes. The NCs of nodes change as nodes attract and repel each other.
E = \sum_i \sum_j (L_{ij} - \|x_i - x_j\|)^2    (1)
Let x_i be the NC assigned to node i and x_j to node j. L_ij is the actual latency between nodes i and j. Vivaldi characterizes the errors in the NCs using the squared-error function in Eq. 1, where \|x_i - x_j\| is the Euclidean distance between the NCs of nodes i and j. Since Eq. 1 corresponds to energy in a physical mass-spring system, Vivaldi minimises prediction errors by finding a low-energy state of the spring network. NCs are computed by each node in a decentralised fashion: when node i receives a new latency measurement L_ij to node j, it compares the true latency L_ij to the predicted latency \|x_i - x_j\|. It then adjusts its NC x_i to minimize the prediction error according to Eq. 1. Measurements are filtered to remove outliers [7]. The above process converges quickly in practice and leads to stable and accurate NCs [6].
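The per-sample update implied by Eq. 1 can be sketched as follows (a minimal illustration, not the Pyxida implementation, which also filters samples and adapts the timestep from coordinate confidence):

```python
import math

def vivaldi_update(xi, xj, rtt, delta=0.25):
    """One simplified Vivaldi step: move node i's coordinate xi along the
    spring axis so that ||xi - xj|| better predicts the measured RTT."""
    dist = math.dist(xi, xj)
    error = rtt - dist                    # per-sample prediction error (Eq. 1)
    if dist == 0:                         # coincident points: pick an
        direction = [1.0] + [0.0] * (len(xi) - 1)  # arbitrary unit vector
    else:
        direction = [(a - b) / dist for a, b in zip(xi, xj)]
    # delta damps the movement so coordinates converge instead of oscillating.
    return [a + delta * error * d for a, d in zip(xi, direction)]
```

For example, with x_i = (0, 0), x_j = (3, 4) and a measured RTT of 10, the predicted latency (5) is too small, so x_i is pushed away from x_j and the new predicted latency (6.25) is closer to the measurement.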
Load-awareness. It is easy to see how a CDN could benefit from NCs to achieve locality-awareness.
Assuming that clients and servers have known NCs, clients could direct requests to the server with the closest
NC. Also, servers could use their neighbours to replicate popular content, keeping content replication local
while reducing total request times. The lower accuracy of NCs for latency prediction compared to direct
measurements [18] is less important for a CDN. A small latency reduction by choosing a slightly closer server
will only have a marginal impact on the total request time for large content. The goal for locality-awareness in a
CDN is to select a content server with good performance, as opposed to finding the single closest one.
However, regular NCs do not provide load-balancing between content servers. Servers in densely-populated
areas are likely to become overloaded, whereas servers in sparse regions will have spare capacity. To address
this problem, we propose load-aware NCs (LANCs) that are calculated using application-level delay measurements, therefore incorporating computational load and network congestion. Regular NCs are computed from network-level measurements (e.g., ICMP echo requests) to capture pure network latencies between two hosts. In particular, measurements aim to be independent of computational load (e.g., by assigning kernel-space timestamps to packets) and of congestion on network paths (e.g., by using small measurement packets least affected by congestion) [5]. In contrast, LANCs are computed with application-perceived round-trip times (RTTs) between hosts that are measured through dedicated TCP connections with user-space timestamping.
The computational load of a host affects user-space timestamps, leading to higher RTTs for
overloaded hosts. Congestion on network paths results in the retransmission of TCP packets, again increasing
application perceived RTTs. As a result, overloaded and congested servers are assigned more distant
LANCs and can be avoided by clients. (This assumes that clients are, in general, less loaded than servers.)
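As an illustration of the measurement style, the sketch below takes one application-perceived RTT sample over a dedicated TCP connection with user-space timestamps (the loopback peer and helper names are illustrative, not the paper's implementation):

```python
import socket
import threading
import time

def echo_peer(srv):
    """Measurement peer: accept one connection and echo all bytes back."""
    conn, _ = srv.accept()
    with conn:
        while data := conn.recv(65536):
            conn.sendall(data)

def measure_app_rtt(addr, payload_size=4096):
    """One application-perceived RTT sample: user-space timestamps around a
    full TCP round trip, so host load and retransmissions inflate the value."""
    payload = b"x" * payload_size
    with socket.create_connection(addr) as s:
        start = time.monotonic()          # user-space, not kernel, timestamp
        s.sendall(payload)
        received = 0
        while received < payload_size:    # wait until the echo is complete
            received += len(s.recv(65536))
        return time.monotonic() - start   # seconds

# Sample against a loopback peer (in a deployment, another CDN node).
srv = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=echo_peer, args=(srv,), daemon=True).start()
rtt = measure_app_rtt(srv.getsockname())
```

Because the timestamps are taken in user space, a busy CPU delays them, and congestion-induced retransmissions stretch the round trip, which is exactly the load signal LANCs want to capture.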
Figure 1: Overview of the architecture of our LANC-based CDN.
By folding load into LANCs, the CDN does not need to manually tune the trade-off between choosing
a local content server versus a server with low load. Instead, it can map client requests directly to the “best”
content server in terms of expected content delivery performance. The best server has the closest LANC relative
to the client and combines low computational load with little congestion on the network path. We believe that
this results in a simpler and more natural CDN design.
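Request mapping then reduces to a nearest-neighbour query in the LANC space. A minimal sketch, assuming a client-side neighbour set of (name, coordinate) entries (names and data layout are illustrative):

```python
import math

# Hypothetical neighbour set: each entry holds a server name and its LANC.
neighbours = [
    {"name": "server-a", "lanc": (0.0, 0.0)},
    {"name": "server-b", "lanc": (3.0, 4.0)},
    {"name": "server-c", "lanc": (9.0, 12.0)},  # overloaded servers drift away
]

def best_server(client_lanc, neighbours):
    """Map a request to the server whose LANC is closest to the client's.
    Load is already folded into the coordinates, so the nearest server
    combines good locality with low load."""
    return min(neighbours, key=lambda s: math.dist(s["lanc"], client_lanc))
```

A client at (2.5, 3.5) would be mapped to server-b; if server-b became overloaded, its coordinate would drift away and the same query would pick a different server without any explicit load check.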
IV. CDN Design
Next we describe the architecture of our LANC-based CDN, how it processes requests, and how
requests are redirected between servers. As shown in Figure 1, we distinguish between the content server, the
client and the origin web server. Clients generate requests and forward them to content servers.
Servers receive requests and handle them by delivering the requested content, potentially fetching an
authoritative copy of the content from the origin web server.
Content servers have three components: (1) The cache manager provides an interface to store and retrieve content from the local disk. It also defines the cache replacement strategy (e.g., LRU or LFU) when disk space is low and content must be discarded. (2) The server HTTP proxy is the point of entry for HTTP requests by providing a proxy interface. For each request, the proxy decides to (a) retrieve content locally if the cache manager indicates that the requested content exists in the local cache; (b) request content from a nearby server with the help of the server coordinate manager (Section 4.2); or (c) return content from the origin web server to the requester, while caching it locally for future access. (3) The server coordinate manager maintains the LANC. It takes application-level delay measurements to random other servers and clients and updates the LANC accordingly. It also maintains a routing table of neighbours that is used for mapping clients to servers (Section 4.1).
Clients generate HTTP requests with a regular web browser. To redirect requests to content servers, clients run two components as a separate process (or as part of a browser plugin): (1) The client HTTP proxy provides a local proxy. The local browser is configured to send all HTTP requests through this proxy. The proxy interacts with the client coordinate manager to redirect requests to content servers. (2) The client coordinate manager is similar to the one on the server-side. It makes delay measurements to maintain a LANC. It also manages a fixed-sized neighbour set with nearby content servers used for mapping requests to servers.
Figure 2: Mapping of requests to content servers using LANCs
4.1 Request Mapping
Clients map requests to the content server with the closest LANC compared to their own. This guarantees that they use a server with good locality and low load. As shown in Figure 2, each server maintains a neighbour set of nearby servers. Requests are redirected to the closest server from that set. Overloaded content servers will move away from other clients by settling for “distant” coordinates in the space. Each client coordinate manager must dynamically keep track of a small set of nearby servers. For this, we use a geometric routing approach in the coordinate space created by the LANCs of all servers. The algorithm, described in previous work [6], constructs a Θ-graph spanner and uses a greedy approach to route messages to a target coordinate through O(log n) hops. Informally, each server has a routing table that stores O(log n) servers at various angles and distances in the LANC space. To route a message to a given target LANC, a node greedily forwards the message to a node from its routing table that is closest to the target. A more detailed description can be found in [6]. When a new client joins the CDN, it contacts an arbitrary server and routes a message with its own LANC as the destination. This message is guaranteed to arrive at the closest existing server to the given coordinate. The client then adds this server to its neighbour set. It may also include other, more distant servers to increase failure resilience. Periodically, clients rejoin the CDN to update the mapping as the LANCs of servers change. Servers follow the regular join protocol described in [6] to construct their routing tables. Note that multi-hop geometric routing is only done when a client joins the CDN (and also periodically to refresh the mapping) but not for each content
request. A content request only requires a look-up of the best content server from the local neighbour set.
4.2 Request Redirection
So far, our LANC-based CDN is only populated with new content when a server
fetches content from the origin web server after a cache miss (local-only redirection). However, we could take advantage of the proximity between servers and have them cooperate to retrieve content from each other. This would help exploit spatial in addition to just temporal cache locality. For a performance gain, we must ensure that it is faster to retrieve content from another server than to fetch it from the origin web server. This is likely to be true for local servers. We propose two simple coordinate-based techniques for cooperation between content servers:
Server-centric redirection. When a primary content server receives a request that it cannot satisfy with
its local cache, it forwards the request to a set of secondary servers in parallel. The secondary servers are
neighbours in its geometric routing table within a distance d in the LANC space. The secondary servers then (a)
return the requested content to the primary server if available. The primary server then forwards the content to
the client while caching it; or (b) respond with cache misses.
If all secondary servers had cache misses, the primary server fetches the content from the origin web
server. A low value of d ensures that retrieving content from secondary servers gives better performance than
fetching it directly from the origin web server. A timeout value puts an upper bound on the waiting time for
responses from secondary servers.
Client-centric redirection. In this scheme, a client sends requests in parallel
to nearby servers in the LANC space. Again, the closest server will act as the primary server and fetch the
content from the origin web server on a cache miss. At the same time, secondary servers within distance d
attempt to retrieve the content from their local caches and return it to the client. The client may receive content
multiple times from different servers and therefore needs to abort pending requests after the first successful
retrieval. Client-centric redirection is likely to give better performance when fetching content from secondary
servers. However, it forces the primary server to retrieve content from the origin server, even when a secondary
server has had a cache hit. This unnecessarily increases load on origin web servers. To avoid this (at the cost of
one additional RTT), the client could send the request to the primary server only after receiving negative
responses from the secondary ones. Fundamentally, server-centric redirection spreads popular content more
effectively in the CDN because primary servers retrieve it from secondary servers without involving origin web
servers. Therefore our evaluation in Section 5.2 focuses on server-centric redirection.
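Server-centric redirection at a primary server can be sketched as follows; fetch_peer and fetch_origin are hypothetical stand-ins for the proxy's HTTP calls (not shown in the paper), and the default timeout mirrors the 2-second bound used in the evaluation:

```python
import concurrent.futures as cf

def handle_request(url, cache, secondaries, fetch_peer, fetch_origin,
                   timeout=2.0):
    """Server-centric redirection: serve from the local cache; on a miss,
    query secondary servers in parallel; if all report misses (or the
    timeout expires), fall back to the origin web server."""
    if url in cache:                              # local cache hit
        return cache[url]
    if secondaries:
        with cf.ThreadPoolExecutor(max_workers=len(secondaries)) as pool:
            futures = [pool.submit(fetch_peer, s, url) for s in secondaries]
            try:
                for fut in cf.as_completed(futures, timeout=timeout):
                    content = fut.result()
                    if content is not None:       # a secondary had a cache hit
                        cache[url] = content      # keep a local copy
                        return content
            except cf.TimeoutError:
                pass                              # bound the waiting time
    content = fetch_origin(url)                   # all secondaries missed
    cache[url] = content
    return content
```

Here fetch_peer(server, url) is assumed to return the content on a secondary cache hit and None on a miss; because the primary caches whatever it returns, popular content spreads between nearby servers without involving the origin.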
V. Evaluation
Our evaluation goals were (a) to investigate the behavior of LANCs under CPU load and network
congestion in a controlled environment; and (b) to observe the performance of a large-scale deployment of our
LANC-based CDN with a non-trivial workload on PlanetLab.
5.1 LAN Experiment
The first experiment examines how LANCs are influenced by computational load on hosts. We set up our CDN on a LAN with 6 nodes (1 content server and 5 clients). The content server ran on a resource-constrained laptop connected via a wireless link and acted as the bootstrap node. To achieve high sensitivity to load changes, all nodes performed aggressive RTT measurements every 10 seconds with 100 KB payloads. The nodes were left until their LANCs stabilised. We then generated synthetic load on the server with a CPU-bound process for 10 minutes.
Figure 3: Increase in RTT and coordinate distances under high content server load
Figure 4: Increase in RTT samples and coordinate distances under network congestion
Figure 3 shows the application-level RTT samples from the five clients to the server (shown as dots;
left axis) and the corresponding coordinate distances (shown as lines; right axis) between the clients and the
server. The CPU load on the content server causes an increase in RTT, which is then (with a small lag) also
reflected by the LANC distances. The change in LANC distances is due to a change of the server coordinate.
Clients observe that the RTTs among them have not changed, whereas the server measures higher RTTs to all
clients and adapts its coordinate accordingly.
In the second experiment, we ran 1 (well-provisioned) content server and 5 clients in a LAN. After a
stabilization period, we added 3 elastic TCP flows between the node running the content server and 3 other
(non-client) nodes. As shown in Figure 4, the resulting network congestion increases the RTT samples (dots; left
axis) between the clients and server and makes them vary between 80 and 120 ms. This is picked up by the LANC
coordinate of the server, leading to increased distances (lines; right axis). After the additional flows terminate,
the coordinate returns to its original value.
5.2 PlanetLab Deployment
We deployed our LANC-based CDN on PlanetLab (PL) to investigate how it exploits temporal and
spatial locality between client requests. We ran content servers on 108 PL nodes distributed world-wide and 16
clients on hosts at our university. We then conducted six experiments with different configurations and
measured the performance of satisfying content requests:
(a) LANC+SC (server-centric). This configuration uses the LANC-based request mapping with server-centric redirection of requests. On a cache miss, a content server forwards a request to all its known neighbours and waits for responses with a 2 second timeout.
Figure 5: CDF plot of the distribution of transfer data rates for six configurations. (Faster is better)
(b) LANC+LO (local-only).This uses the LANC-based request mapping but only satisfies requests from the
local server cache.
(c) Nearest.This directs all requests to the single, nearest content server (located at one of the Imperial PL
nodes).
(d) Random.This directs requests to random servers.
(e) Direct. For comparison, this retrieves content directly from the origin web servers without caching.
(f) CoralCDN. This configuration directs all requests to the local CoralCDN node running on PL.
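The server-centric redirection in configuration (a) can be sketched as follows. This is an illustrative reconstruction rather than the deployed implementation: `fetch_from` and `fetch_origin` are hypothetical callables standing in for the CDN's transfer protocol, and the 2-second timeout mirrors the value quoted above.

```python
import concurrent.futures

def handle_request(url, cache, neighbours, fetch_from, fetch_origin, timeout=2.0):
    # serve from the local cache on a hit
    if url in cache:
        return cache[url]
    # on a miss, ask all known neighbours in parallel (LANC+SC behaviour)
    with concurrent.futures.ThreadPoolExecutor(max_workers=max(1, len(neighbours))) as pool:
        futures = [pool.submit(fetch_from, n, url) for n in neighbours]
        try:
            for fut in concurrent.futures.as_completed(futures, timeout=timeout):
                data = fut.result()
                if data is not None:          # first neighbour holding the object wins
                    cache[url] = data
                    return data
        except concurrent.futures.TimeoutError:
            pass                              # no neighbour answered within the timeout
    # fall back to the origin web server and cache the result
    data = fetch_origin(url)
    cache[url] = data
    return data
```

LANC+LO corresponds to skipping the neighbour round entirely and going straight from the cache check to the origin fetch, which explains its lower hit ratio below.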
Figure 6: Cache hit ratio as a function of completed requests (for first 2000 requests only)
Each experiment was set up in the same way. The first content server acted as the bootstrap node for all
other nodes. After starting the servers, we added the client nodes and let coordinates stabilise for one hour. RTT
measurements between nodes were taken with 4 KB payloads exchanged through TCP connections every
minute. All content servers started with cold caches. We generated a synthetic request workload on the 16
clients. Each client continuously requested a Gentoo Linux distribution file with a size of 3.28 MB from a list of
100 web servers distributed world-wide [4]. (Since the URLs were different, all CDNs assumed these to be
different files, with no caching between URLs.) This request load provided spatial locality (all requests came
from clients at our university) and temporal locality (clients eventually requested the same URL repeatedly).
The high load on PL nodes exercised our load balancing mechanism. We deliberately chose a relatively small
file size to stay below per-slice transmission limits on PL. Each configuration ran for one hour.
The results are summarized in Table 1. Figure 5 shows the distribution of average transfer data rates
per request across all clients on a logarithmic scale. LANC+SC manages to satisfy the most requests (12,922) in
one hour. The low value for the 90th percentile of request times (4.57 seconds) is due to its good choice of
content servers (with nearby LANCs), which results in high transfer rates, and its aggressive fetching of cached
content from other servers, which means a high cache hit ratio of 97.3%. During the experiment, we observed
that most clients requested content from a small set of servers running on PL nodes with low load at Imperial, in
France and in Germany.
LANC+LO satisfies fewer requests compared to LANC+SC. As seen in Figure 5, its performance
suffers from a small fraction of requests with low transfer rates. We observed that these were mostly cache
misses that forced the retrieval of content from slow web servers. At the same time, the 90th percentile of its
transfer rate is slightly higher than for LANC+SC because servers do not have the overhead of relaying content
when fetching it from neighbours. Since LANC+LO only considers a single content server for satisfying
requests, it has a lower cache hit ratio (89.2%).
Nearest gives poor performance because the single content server becomes a bottleneck and can only
satisfy requests with a consistently low transfer rate. It has a high cache hit ratio (90.2%) because all requests
are directed to the same server (but not the highest because clients do not manage to submit all requests twice
within one hour). Random balances requests across many servers but fails to exploit spatial or temporal locality,
leading to a low cache hit ratio (4.6%). Selected servers often take a long time to retrieve content due to high
load on PL nodes and/or low available bandwidth.
Direct illustrates the benefit of caching with a CDN compared to retrieving content directly from
hosting web servers. Finally, we used CoralCDN on PL to retrieve content. Although this is not a fair
comparison because, as a public service, CoralCDN handles a higher workload than just the requests we
directed at it, we believe that it illustrates the potential of our approach. As expected, CoralCDN showed a
substantially higher average request time (18.41 seconds) than our LANC-based CDN. As shown in Figure 5, its
average transfer rates were lower than fetching content directly (while presumably significantly reducing load
on the origin web servers). For now, we can only speculate whether this is due to CoralCDN's high workload or
a less optimal choice of content servers. In future work, we plan to repeat this experiment in a controlled setting.
Figure 6 illustrates the difference between LANC+SC and LANC+LO in more detail. It shows the change in
cache hit ratio as a function of completed requests. The cache hit ratio of LANC+SC grows faster because it
considers nearby servers for requested content. This means that LANC+SC reduces the load on the origin web
servers by retrieving more content from the CDN. With our workload, eventually all requests can be satisfied by
the CDN and the cache hit ratio asymptotically approaches unity.
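The cumulative hit-ratio curve of Figure 6 is straightforward to reproduce from a request log. A hypothetical sketch, assuming each completed request is recorded as a hit/miss boolean in completion order:

```python
def running_hit_ratio(events):
    """Yield the cumulative cache hit ratio after each completed request.
    events: iterable of booleans, True meaning the request was a cache hit."""
    hits = 0
    for completed, hit in enumerate(events, start=1):
        hits += hit
        yield hits / completed

# a workload with growing temporal locality approaches a ratio of 1.0
print(list(running_hit_ratio([False, True, True, True])))
```

On a workload where every URL is eventually re-requested, the miss count stops growing while the request count keeps increasing, which is why the ratio approaches unity asymptotically.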
VI. Related Work
CoralCDN [3] is a peer-to-peer CDN built to help web servers handle the sudden demand of flash crowds.
With cooperating cache nodes, CoralCDN minimises the load on the original web server and avoids the creation
of hot spots. It builds a load-balanced, latency-optimised hierarchical indexing infrastructure based on a weakly-
consistent DHT and achieves locality-awareness by making nodes members of multiple DHTs called clusters.
Clusters are specified by a maximum RTT and can reduce request latencies by prioritizing nearby nodes. Our
work shares many of the design goals of CoralCDN in terms of locality- and load-awareness.
However, our approach differs in using a unified scheme based on LANCs, aiming for good request
mappings in environments with dynamic load and network congestion.
CoDeeN [17] is a network of open proxy servers running on PL. The system leaves it up to the user to
choose a suitable proxy server for browser requests. If the local proxy cannot satisfy a request, it selects another
proxy based on locality, load and reliability. Our LANC-based CDN automates the initial mapping step at the
cost of running a client component on user machines.
Globule [12] is a collaborative CDN that exploits client resources for caching using a browser plug-in.
Because Globule runs on client hosts, it must make different assumptions about churn and security. It uses
landmark-based NCs for locality-awareness. Replicated content is placed according to the locations of clients in
a coordinate space. Since it leverages many client machines for caching, balancing computational load is less
important.
Meridian [18] builds an overlay network for locality-aware node selection with on-demand probing to
estimate node distances. To find the nearest node to a client, Meridian uses a set of ICMP echo requests to move
logarithmically closer to the target. Similar to our geometric routing tables, each node maintains a fixed number
of links to other nodes with exponentially increasing distances. Meridian aims to return the closest existing node
to a client. We argue that such accuracy is not necessary when selecting content servers, as other factors such as
load will be equally important.
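Meridian's rings, like our geometric routing tables, group peers by exponentially increasing distance. A small sketch under assumed parameters (the inner radius `alpha` in ms, growth factor `s`, and cap of `k` entries per ring are all illustrative; Meridian's actual ring-membership management is more involved):

```python
import math

def ring_index(distance_ms, alpha=1.0, s=2.0):
    """Ring 0 holds peers closer than alpha; ring i >= 1 holds peers
    with distance in [alpha * s**(i-1), alpha * s**i)."""
    if distance_ms < alpha:
        return 0
    return int(math.log(distance_ms / alpha, s)) + 1

def build_rings(peer_distances, k=4):
    """Assign each peer to its ring, keeping at most k peers per ring."""
    rings = {}
    for peer, d in sorted(peer_distances.items(), key=lambda kv: kv[1]):
        ring = rings.setdefault(ring_index(d), [])
        if len(ring) < k:
            ring.append(peer)
    return rings

# peers at 0.5, 3.0, 3.5 and 40.0 ms land in rings 0, 2, 2 and 6
print(build_rings({'a': 0.5, 'b': 3.0, 'c': 3.5, 'd': 40.0}))
```

Because ring widths double, a fixed number of entries per ring yields logarithmically many links overall while still covering both nearby and distant peers.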
VII. Conclusions
We have described a novel CDN that uses a coordinate space with application-level latency
measurements between clients and content servers for the mapping and redirection of requests. We
demonstrated how our LANC-based CDN reacts to computational load and network congestion. A large-scale
deployment on PL with a locality-heavy workload highlights the benefits of our approach. As future work, we
plan to study the stability and load awareness of LANCs under dynamic workloads (e.g., flash crowds) and
varying resource availability. We also plan a more extensive comparison against other CDNs.
Finally, we want to investigate how proxy NCs [8] can relieve client nodes of the burden of maintaining LANCs.
REFERENCES
[1] F. Dabek, R. Cox, F. Kaashoek, and R. Morris. Vivaldi: A Decentralized Network Coordinate System. In Proc. of
SIGCOMM, Aug. 2004.
[2] J. Elson and J. Howell. Handling Flash Crowds from Your Garage. In Proc. of USENIX, 2008.
[3] M. J. Freedman, E. Freudenthal, and D. Mazières. Democratizing Content Publication with Coral. In Proc. of NSDI,
2004.
[4] Gentoo. Gentoo Mirror List. www.gentoo.org/main/en/mirrors2.xml, Aug. 2008.
[5] J. Ledlie, P. Gardner, and M. Seltzer. Network Coordinates in the Wild. In Proc. of NSDI, 2007.
[6] J. Ledlie, M. Mitzenmacher, M. Seltzer, and P. Pietzuch. Wired Geometric Routing. In Proc. Of IPTPS, Bellevue,
WA, USA, Feb. 2007.
[7] J. Ledlie, P. Pietzuch, and M. Seltzer. Stable and Accurate Network Coordinates. In ICDCS, July 2006.
[8] J. Ledlie, M. Seltzer, and P. Pietzuch. Proxy Network Coordinates. TR 2008/4, Imperial College, Feb. 2008.
[9] E. K. Lua, T. Griffin, et al. On the Accuracy of Embeddings for Internet Coord. Sys. In IMC, 2005.
[10] T. E. Ng and H. Zhang. Predicting Internet Network Distance with Coordinates-Based Approaches. In Proc. of
INFOCOM’02, New York, NY, June 2002.
[11] D. Oppenheimer, J. Albrecht, D. Patterson, and A. Vahdat. Distributed Resource Discovery on PlanetLab with
SWORD. In WORLDS, Dec. 2004.
[12] G. Pierre and M. van Steen. Globule: a Collaborative CDN. IEEE Comms. Magazine, 44(8), Aug. 2006.
[13] P. Pietzuch, J. Ledlie, M. Mitzenmacher, and M. Seltzer. Network-Aware Overlays with Network Coordinates. In
Proc. of IWDDS, July 2006.
[14] Pyxida Project. pyxida.sourceforge.net, 2006.
[15] I. Stoica, R. Morris, D. Karger, et al. Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications. In
Proc. of SIGCOMM, Aug. 2001.
[16] A.-J. Su, D. Choffnes, A. Kuzmanovic, et al. Drafting Behind Akamai. In SIGCOMM, 2006.
[17] L. Wang, K. Park, R. Pang, V. S. Pai, and L. Peterson. Reliability and Security in the CoDeeN Content Distribution
Network. In USENIX, 2004.
[18] B. Wong et al. Meridian: A Lightweight Network Location Service without Virtual Coordinates. In SIGCOMM, 2005.