A web hotspot is a serious problem often experienced by popular websites. It produces a dramatic load spike on a website, occurring when a huge number of users access the same site simultaneously. A prominent solution to this problem is server load balancing. Dynamic load balancing allocates requests to servers or processors as they arrive. For effective load balancing, a near-optimal schedule of incoming requests or processes must be determined "on the fly", so that execution of requests can be completed in the shortest possible time. We therefore propose a Genetic Algorithm-based load balancing scheme that relies on a process scheduling policy. A Genetic Algorithm searches a space of candidate solutions for a near-optimal one, following the survival-of-the-fittest principle over a number of generations. The proposed algorithm is evaluated for various population sizes and numbers of generations, with the objective of maximizing the utilization of the nodes/processors in the system.
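The abstract leaves the GA's encoding and operators unspecified. The following Python sketch fills them in with common choices — a direct request-to-processor chromosome, one-point crossover, random mutation, and a utilization-based fitness — all of which are assumptions, not the authors' implementation:

```python
import random

def fitness(assignment, burst, m):
    # Average processor utilization relative to the busiest processor;
    # 1.0 means the load is perfectly balanced.
    loads = [0] * m
    for req, proc in zip(burst, assignment):
        loads[proc] += req
    busiest = max(loads)
    return sum(l / busiest for l in loads) / m

def ga_schedule(burst, m, pop_size=30, generations=100, seed=0):
    """Evolve request-to-processor assignments for m processors."""
    rng = random.Random(seed)
    n = len(burst)
    # A chromosome maps each request index to a processor index.
    pop = [[rng.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda a: fitness(a, burst, m), reverse=True)
        survivors = pop[: pop_size // 2]           # survival of the fittest
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:                 # mutation: move one request
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda a: fitness(a, burst, m))
```

Fitness approaches 1.0 as processor loads equalize, which matches the paper's stated goal of maximizing processor utilization.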
AWSQ: an approximated web server queuing algorithm for heterogeneous web serv...IJECEIAES
With the rising popularity of web-based applications, cluster-based web servers have become the primary and consistent resource in the infrastructure of the World Wide Web. Particularly for dynamic content and database-driven applications, and especially under heavy load, managing cluster performance is a serious task. Without efficient mechanisms, an overloaded web server cannot deliver good performance. In clusters, this overload condition can be avoided by load balancing mechanisms that share the load among the available web servers. Existing load balancing mechanisms intended for static content suffer substantial performance degradation under database-driven and dynamic content. The most serviceable load balancing approaches under specific conditions are Web Server Queuing (WSQ), Server Content based Queue (QSC), and Remaining Capacity (RC). Considering this, we propose an approximated Web Server Queuing mechanism for web server clusters, together with an analytical model for calculating the load of a web server. Requests are classified by service time, and the number of outstanding requests at each web server is tracked to achieve better performance. The approximated load of each web server is then used for load balancing. Experimental results illustrate the effectiveness of the proposed mechanism, which improves the mean response time, throughput, and drop rate of the server cluster.
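The paper's analytical load model is not reproduced in the abstract. One plausible reading — outstanding requests per class weighted by per-class service times, with dispatch to the lowest approximated load — can be sketched as follows; the class names and weights are hypothetical:

```python
# Hypothetical service-time weights per request class (assumption).
SERVICE_WEIGHT = {"static": 1.0, "dynamic": 4.0, "database": 9.0}

class Server:
    def __init__(self, name):
        self.name = name
        self.outstanding = {c: 0 for c in SERVICE_WEIGHT}

    def approx_load(self):
        # Approximated load: outstanding requests weighted by class service time.
        return sum(SERVICE_WEIGHT[c] * n for c, n in self.outstanding.items())

def dispatch(servers, req_class):
    # Send the request to the server with the lowest approximated load.
    target = min(servers, key=lambda s: s.approx_load())
    target.outstanding[req_class] += 1
    return target
```

In this reading, a single outstanding database request counts as much as nine static ones, so the balancer avoids stacking expensive requests on one node.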
Load Balancing in Cloud Computing Environment: A Comparative Study of Service...Eswar Publications
Load balancing is a computer networking method to distribute workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The
load balancing service is usually provided by dedicated software or hardware, such as a multilayer switch or a Domain Name System server. In this paper, the existing static algorithms used for simple cloud load balancing have been identified and also a hybrid algorithm for developments in the future is suggested.
The Grouping of Files in Allocation of Job Using Server Scheduling In Load Ba...iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
Dynamic Cloud Partitioning and Load Balancing in Cloud Shyam Hajare
Cloud computing is an emerging and transformational paradigm in information technology. It focuses on providing various services on demand; resource allocation and secure data storage are among them. Storing huge amounts of data and retrieving data from such stores is a new challenge. Distributing and balancing load over a cloud using cloud partitioning can ease the situation. Implementing load balancing that considers both static and dynamic parameters can improve the performance of the cloud service provider and increase user satisfaction. Implementing the model can provide a dynamic way of selecting resources depending on the state of the cloud environment at the time cloud provisions are accessed, based on cloud partitioning. This model can provide an effective load balancing algorithm for the cloud environment, better refresh-time methods, and better load status evaluation methods.
This document discusses client-side load balancing in a cloud computing environment. It describes how a client-side load balancer can distribute requests across backend web servers in a scalable way without requiring control of the infrastructure. The proposed architecture uses static anchor pages hosted on Amazon S3 that contain JavaScript code to select a web server based on its reported load. The JavaScript then proxies the request to that server and updates the page content. This approach achieves high scalability and adaptiveness without hardware load balancers or layer 2 optimizations.
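The selection logic the anchor-page JavaScript would run is not shown in the summary. A hedged sketch of load-weighted server selection (expressed here in Python; the jitter term is an assumption to keep all clients from stampeding to the single least-loaded server between load updates) is:

```python
import random

def pick_server(reported_loads, jitter=0.05, rng=random):
    """Pick a backend by load-weighted random choice, as a client-side
    balancer might: lower reported load -> higher selection probability.

    reported_loads: list of (host, load) pairs with load in [0, 1].
    """
    # Invert loads into weights; jitter spreads choices around.
    weights = [1.0 / (load + jitter) for host, load in reported_loads]
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for (host, _), w in zip(reported_loads, weights):
        acc += w
        if r <= acc:
            return host
    return reported_loads[-1][0]   # guard against floating-point edge case
```

Randomized (rather than strictly greedy) selection is a common way to keep a client-side scheme adaptive without any coordination between clients.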
IRJET- An Improved Weighted Least Connection Scheduling Algorithm for Loa...IRJET Journal
This document proposes an improved weighted least connection (IWLC) scheduling algorithm to balance load among web servers in a cluster system. It aims to prevent overloading newly added servers by not assigning requests to a new server more than a maximum number of times (C) consecutively. If a new server receives more than C consecutive requests, it is deactivated and excluded from scheduling for C-1 rounds; it is then reactivated and included in the scheduling list again. The algorithm balances load by avoiding overloads on new servers. The performance of the IWLC algorithm is evaluated through a simulation using Docker containers to create virtual web servers handling simultaneous client requests sent via Node.js. Results show the proposed algorithm distributes load more evenly.
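The summary gives the IWLC rule only informally. The sketch below is one interpretation: the connections/weight ratio picks the winner, a streak counter enforces the cap C, and a bench counter implements the C-1 round exclusion. Tie-breaking and the exact bookkeeping are assumptions:

```python
class IWLCScheduler:
    """Sketch of the improved weighted least-connection idea."""

    def __init__(self, weights, C=3):
        self.weights = dict(weights)           # server -> weight
        self.conns = {s: 0 for s in weights}   # active connections
        self.streak = {s: 0 for s in weights}  # consecutive assignments
        self.benched = {s: 0 for s in weights} # exclusion rounds left
        self.C = C

    def schedule(self):
        # Servers still benched this round are excluded, then tick down.
        active = [s for s in self.weights if self.benched[s] == 0]
        for s in self.benched:
            if self.benched[s] > 0:
                self.benched[s] -= 1
        # Classic weighted least connection: lowest conns/weight wins.
        target = min(active, key=lambda s: self.conns[s] / self.weights[s])
        self.conns[target] += 1
        for s in self.streak:
            self.streak[s] = self.streak[s] + 1 if s == target else 0
        if self.streak[target] > self.C:       # over the cap: bench it
            self.benched[target] = self.C - 1
            self.streak[target] = 0
        return target
```

With a heavily weighted "new" server the plain ratio rule would assign to it almost exclusively; the streak cap forces the scheduler to hand a round to other servers.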
A Proposed Model for Web Proxy Caching Techniques to Improve Computer Network...Hossam Al-Ansary
This document proposes a model for using web proxy caching techniques to improve the performance of computer networks. It discusses how web proxy caching works by storing frequently requested web objects on proxy caches located within the network. This reduces network traffic, server load, and retrieval delays for users. The document also outlines some challenges with web proxy caching, such as cache size, consistency, and overhead. It then proposes using a combination of forward and reverse proxy caching to take advantage of the benefits of caching while overcoming its issues. This could help improve poor network communication in the Egyptian National Railways organization by enhancing efficiency and access for remote users.
The document discusses improving request routing mechanisms in content delivery networks. It outlines comparing CDN and non-CDN networks, analyzing request routing techniques like DNS-based routing. The author proposes a new CDN DNS request routing technique, simulates and compares it to existing techniques. Results show the proposed technique reduces packet loss and round-trip time by using client location to directly route requests. Local load balancing techniques are also evaluated, with round-trip found to perform best.
The Concept of Load Balancing Server in Secured and Intelligent NetworkIJAEMSJORNAL
Computer networks are complex systems that route hundreds of thousands of data packets every second. To handle large amounts of data, packets must be routed efficiently. Load balancing is a core networking solution responsible for distributing incoming traffic among servers hosting the same content. For example, if there are ten servers within a network and two of them are doing 95% of the work, the network is not running very efficiently; if each server handled about 10% of the traffic, the network would run much faster. Load balancing makes networks more efficient: traffic is evenly distributed across the network, ensuring no single device is overwhelmed. Balancing requests across multiple servers also prevents any one server from becoming a single point of failure, and it improves overall availability and responsiveness. Web servers often use load balancing to split the traffic load evenly among several servers. Whether on a local network or a large web server, load balancing requires hardware or software that divides incoming traffic among the available servers. A network that receives a high volume of traffic may dedicate one server to balancing the load among the other servers and devices in the network; this server is often known as a load balancer. Clusters, or multiple computers that work together, use load balancing to spread processing jobs among the available systems.
A load balancing model based on cloud partitioning for the public cloud. ppt Lavanya Vigrahala
Load balancing in the cloud computing environment has an important impact on performance; good load balancing makes cloud computing more efficient and improves user satisfaction. This article introduces a better load balance model for the public cloud based on the cloud partitioning concept, with a switch mechanism to choose different strategies for different situations. The algorithm applies game theory to the load balancing strategy to improve efficiency in the public cloud environment.
This document summarizes a dissertation on an improved load balancing technique for secure data in cloud computing. The dissertation discusses research issues in load balancing and data security in cloud computing. It proposes a load balancing methodology that uses a load balancer, Kerberos authentication, and Nginx load balancing algorithms like round robin and least connections to securely store and balance load of encrypted data across multiple cloud nodes. The methodology is implemented using tools like HP LoadRunner, Amazon Web Services, and Jelastic cloud platform. Performance is analyzed in terms of transaction time. The proposed technique aims to improve resource utilization, access control, data security, and efficiency in cloud environments.
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
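The threshold test described in point 2 can be sketched as follows; the saturation value and the fallback for fully loaded data centers are assumptions, since the summary does not give them:

```python
SATURATION = 0.8  # assumed saturation threshold (fraction of capacity)

def place_task(task_size, datacenters):
    """Bind the task locally while load stays under the threshold,
    otherwise migrate it to the next data center (sketch of the idea).

    datacenters: ordered list of dicts, current data center first.
    """
    for dc in datacenters:
        if (dc["load"] + task_size) / dc["capacity"] <= SATURATION:
            dc["load"] += task_size
            return dc["name"]
    # Every data center saturated: fall back to the least-loaded one.
    dc = min(datacenters, key=lambda d: d["load"] / d["capacity"])
    dc["load"] += task_size
    return dc["name"]
```

Ordering the list with the current data center first encodes the paper's preference for local binding, migrating only when the threshold would be exceeded.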
Porcupine is a highly available cluster-based mail service that uses commodity hardware to provide scalable email services. It addresses challenges of conventional mail solutions in performance, manageability and availability. Key techniques used include functional homogeneity, automatic reconfiguration, replication and load balancing to provide better availability, manageability and linear performance scaling with cluster size. Evaluation shows it efficiently handles failures, heterogeneous hardware and skewed workloads.
Guarding Fast Data Delivery in Cloud: an Effective Approach to Isolating Perf...Zhenyun Zhuang
LNCS 2015
Cloud-based products rely heavily on fast data delivery between data centers and remote users: when data delivery is slow, the products' performance is crippled. When slow data delivery occurs, engineers need to investigate the issue and find the root cause. The investigation requires experience and time, as data delivery involves multiple moving parts, including the sender, the receiver, and the network.
To facilitate these investigations, we propose an algorithm to automatically identify the performance bottleneck. The algorithm aggregates information from multiple layers of the data sender and receiver. It automatically isolates the problem type by identifying which of sender, receiver, or network is the bottleneck. After isolation, successive efforts can be taken to root-cause the exact problem. We also build a prototype to demonstrate the effectiveness of the algorithm.
Implementing a Caching Scheme for Media Streaming in a Proxy ServerAbdelrahman Hosny
In the past few years, websites have moved from static web pages to rich media applications that use audio, images, and video heavily in their interaction with users. This shift has dramatically changed network traffic. Organizations spend a great deal of effort, time, and money to improve response time and to design intermediary systems that enhance the overall user experience. Media traffic represents about 69.9-88.8% of all traffic, so enhancing networks to accommodate this large traffic volume is a major trend. Content Distribution Networks (CDNs) are now widely deployed for faster delivery of media. Redundancy and caching are also implemented to decrease response time.
In this project, we implement a caching scheme for media streaming in a proxy server. Unlike CDNs, which require huge infrastructure, our caching proxy server is as simple as a piece of portable software that can be installed at small as well as large scales. It may be deployed in a university network, a company's private network, or on ISP servers. This caching scheme, specially tailored for media streaming, will reduce traffic and enhance network efficiency in general.
Index Terms – Proxy servers, Caching, Media streaming
Hybrid Scheduling Algorithm for Efficient Load Balancing In Cloud ComputingEswar Publications
This document presents a hybrid scheduling algorithm for efficient load balancing in cloud computing. The algorithm uses both round robin and priority-based scheduling approaches. It first assigns priorities to incoming job requests and then executes them in a round robin fashion. The algorithm aims to minimize overall response time and data center processing time. It is evaluated through simulation and found to perform better than round robin, priority-based, and equally spread current execution algorithms alone in terms of optimized response time and data center service time.
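The two-phase idea — priority ordering first, then round-robin execution — can be sketched as follows; the time quantum and the lower-value-means-higher-priority convention are assumptions:

```python
import heapq
from collections import deque

def hybrid_schedule(jobs, quantum=2):
    """Sketch of a hybrid scheduler: jobs are first ordered by priority
    (lower value = higher priority), then served round-robin.

    jobs: list of (name, priority, burst_time) tuples.
    Returns the sequence of job names in execution order, one quantum each.
    """
    # Phase 1: priority ordering (index i breaks ties stably).
    heap = [(prio, i, name, burst)
            for i, (name, prio, burst) in enumerate(jobs)]
    heapq.heapify(heap)
    queue = deque()
    while heap:
        _, _, name, burst = heapq.heappop(heap)
        queue.append((name, burst))
    # Phase 2: round-robin execution with a fixed quantum.
    order = []
    while queue:
        name, burst = queue.popleft()
        order.append(name)                     # run for one quantum
        if burst > quantum:
            queue.append((name, burst - quantum))
    return order
```

High-priority jobs enter the round-robin queue first, so they get their first quantum earlier, while round-robin still prevents starvation of the rest.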
Server load balancing (SLB) distributes network traffic across multiple servers to optimize resource utilization and maximize throughput. It intercepts traffic destined for a website and redirects requests to various backend servers using techniques like network address translation. SLB aims to improve performance, increase scalability, and maintain high availability by monitoring servers and routing traffic around failures to keep applications running if servers go down. Both hardware and software-based solutions exist, with hardware providing higher performance but at greater cost than software-based options.
LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTINGijccsa
Load balancing techniques in cloud computing can be applied at different levels. There are two main levels: load balancing on physical servers and load balancing on virtual servers. Load balancing on a physical server is a policy for allocating physical servers to virtual machines, while load balancing on virtual machines is a policy for allocating resources from the physical server to the virtual machines for the tasks or applications running on them. Depending on whether the user's request on cloud computing is for SaaS (Software as a Service), PaaS (Platform as a Service), or IaaS (Infrastructure as a Service), an appropriate load balancing policy applies. When receiving tasks, the cloud data center must allocate them efficiently so that response time is minimized and congestion is avoided. Load balancing should also be performed between different data centers in the cloud to ensure minimum transfer time. In this paper, we propose a virtual machine-level load balancing algorithm that aims to improve the average response time and average processing time of the system in the cloud environment. The proposed algorithm is compared to the Avoid Deadlocks [5], Maxmin [6], and Throttled [8] algorithms, and the results show that our algorithm achieves optimized response times.
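The paper's algorithm itself is not reproduced in the abstract. A generic VM-level balancer in the same spirit — assign each task to the VM with the earliest expected finish time, given its speed rating and queued work — might look like:

```python
def assign_tasks(task_lengths, vm_mips):
    """Sketch of a VM-level balancer: each task goes to the VM with the
    earliest expected finish time.

    task_lengths: task sizes in instructions.
    vm_mips: processing speed of each VM (instructions per unit time).
    Returns the VM index chosen for each task, in arrival order.
    """
    queued = [0.0] * len(vm_mips)   # pending work per VM, in instructions
    placement = []
    for length in task_lengths:
        # Expected finish time if this task were appended to VM i.
        finish = [(queued[i] + length) / vm_mips[i]
                  for i in range(len(vm_mips))]
        best = finish.index(min(finish))
        queued[best] += length
        placement.append(best)
    return placement
```

Unlike plain round robin, this accounts for heterogeneous VM speeds: a fast VM absorbs more work before a slower one becomes the better choice.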
Service Request Scheduling based on Quantification Principle using Conjoint A...IJECEIAES
This document presents a service request scheduling technique for heterogeneous distributed systems using quantification principles. It uses conjoint analysis to identify the most influential server attribute, and z-score to quantify attribute values. Servers are assigned a "servicing cutoff" percentage based on z-scores, indicating each server's share of total requests. Requests are prioritized and assigned to servers without exceeding capacity limits. The technique aims to evenly distribute workload among servers according to their quantified capacities. Experimental results showed improved performance over other scheduling principles.
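The z-score-to-cutoff step can be illustrated as follows; the positive shift and the rounding are assumptions, since the paper's exact quantification formula is not given in the summary:

```python
def servicing_cutoffs(capacities):
    """Sketch: z-score each server's most influential attribute value,
    then turn the scores into each server's percentage share of requests
    (the 'servicing cutoff')."""
    n = len(capacities)
    mean = sum(capacities) / n
    var = sum((c - mean) ** 2 for c in capacities) / n
    std = var ** 0.5 or 1.0          # guard: identical servers -> std 0
    z = [(c - mean) / std for c in capacities]
    # Shift scores to be positive so they can act as proportional shares.
    shifted = [s - min(z) + 1.0 for s in z]
    total = sum(shifted)
    return [round(100 * s / total, 1) for s in shifted]
```

A stronger server gets a larger cutoff, so the dispatcher can cap each server's share of total requests at its quantified capacity.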
Efficient and secure content processing and distribution by cooperative inter...Mumbai Academisc
The document proposes an approach for efficient and secure content processing and distribution through cooperative intermediaries. It allows multiple proxies to simultaneously perform adaptation services on different portions of content while preserving data integrity and confidentiality. The approach supports decentralized proxy and key management and flexible delegation of services. Experimental results showed the approach minimizes network data transmission and improves performance.
The document discusses load balancing and intelligent load balancing. It covers load balancing architecture, how the data collector and dynamic store work, and how performance counters are used. Intelligent load balancing techniques like load throttling are explained. Potential issues that could cause load imbalances like the "black hole effect" or failing to read performance counters are also reviewed. Troubleshooting techniques for resolving common problems are provided.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZE RESPONSE TIMEijccsa
To improve the performance of cloud computing, there are many parameters and issues to consider, including resource allocation, resource responsiveness, connectivity to resources, discovery of unused resources, resource mapping, and resource planning. Planning the use of resources can be based on many kinds of parameters, and service response time is one of them. Users can easily determine the response time of their requests, so it becomes one of the important QoS metrics. Explored further, response time can drive solutions for the distribution and load balancing of resources with better efficiency. This is one of the most promising research directions for improving cloud technology. Therefore, this paper proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA models to predict the upcoming response time, thus giving a better way of effectively resolving resource allocation against a threshold value. The experimental results are promising and valuable for load balancing with predicted response time, showing that prediction is a good direction for load balancing.
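APRA's ARIMA fit is beyond a short sketch; substituting a one-step exponential-smoothing forecast for the prediction step (an explicit simplification, not the paper's model), the threshold-based routing idea looks like:

```python
def forecast_next(history, alpha=0.5):
    """One-step forecast via exponential smoothing; a stand-in for the
    ARIMA prediction step in APRA."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def route(server_histories, threshold=200.0):
    """Predict each server's next response time (ms); prefer servers whose
    prediction stays below the threshold, then pick the fastest."""
    predicted = {s: forecast_next(h) for s, h in server_histories.items()}
    under = {s: p for s, p in predicted.items() if p < threshold}
    pool = under or predicted   # all over threshold: degrade gracefully
    return min(pool, key=pool.get)
```

The threshold plays the role described in the abstract: predicted response time decides whether a server is still an acceptable allocation target.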
This document discusses and compares various load balancing techniques in cloud computing. It begins by introducing load balancing as an important issue in cloud computing for efficiently scheduling user requests and resources. Several load balancing algorithms are then described, including honeybee foraging algorithm, biased random sampling, active clustering, OLB+LBMM, and Min-Min. Metrics for evaluating and comparing load balancing techniques are defined, such as throughput, overhead, fault tolerance, migration time, response time, resource utilization, scalability, and performance. The algorithms are then analyzed based on these metrics.
Enhanced Dynamic Web Caching: For Scalability & Metadata ManagementDeepak Bagga
Abstract: These days web caching suffers from many problems like scalability, robustness, metadata management etc. These problems degrade the performance of the network and can also create frustrating situations for the clients. This paper discusses several web caching schemes such as Distributed Web Caching (DWC), Distributed Web Caching with Clustering (DWCC), Robust Distributed Web Caching (RDWC), Distributed Web Caching for Robustness, Low latency & Disconnection Handling (DWCRLD). Clustering improves the retrieval latency and also helps to provide load balancing in distributed environment. But this cannot ensure the scalability issues, easy handling of frequent disconnections of proxy servers and metadata management issues in the network. This paper presents a strategy that enhances the clustering scheme to provide scalability even if size of the cluster grows, easy handling of frequent disconnections of proxy servers and a structure for proper management of cluster’s metadata. Then a comparative table is given that shows its comparison with these schemes.
This document describes a server load balancing system for structured data. The objectives are to develop a load balancer that can manage large amounts of data and provide functionality for uploading, downloading, and deleting data, while providing reliability, scalability, and high performance. The system uses a master server to distribute loads to slave servers and track their locations. Clients communicate directly with slave servers to access data using unique keys. This allows for horizontal scaling and fault tolerance. The system is designed to handle large volumes of data across multiple servers and provide reliable access even if servers fail.
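A minimal sketch of the master/slave split described above, with hash-based key placement standing in for the paper's location tracking (the placement rule is an assumption), might look like:

```python
import hashlib

class MasterServer:
    """Sketch: the master only maps keys to slave servers; clients then
    talk to the chosen slave directly (modeled here as in-process dicts)."""

    def __init__(self, slaves):
        self.slaves = slaves                  # slave name -> dict store

    def locate(self, key):
        # Deterministic placement: hash the key onto a slave.
        h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
        names = sorted(self.slaves)
        return names[h % len(names)]

    def upload(self, key, data):
        slave = self.locate(key)
        self.slaves[slave][key] = data        # client -> slave in reality
        return slave

    def download(self, key):
        return self.slaves[self.locate(key)].get(key)

    def delete(self, key):
        self.slaves[self.locate(key)].pop(key, None)
```

Because the master never touches the data itself, adding slaves scales capacity horizontally, matching the system's stated objectives.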
A COMPREHENSIVE SOLUTION TO CLOUD TRAFFIC TRIBULATIONSijwscjournal
Cloud computing is generally believed to be the most promising technological revolution in computing, and it will soon become an industry standard. It is believed that the cloud will replace the traditional office setup. However, a big question mark hangs over network performance when cloud traffic explodes. We call it an "explosion" because, in the future, various cloud services replacing desktop computing will be accessed via the cloud, and traffic will increase exponentially. This journal aims to address some of these doubts, better called "dangers", about network performance when the cloud becomes a global standard, and to provide a comprehensive solution to those problems. Our study observes that despite offering better round-trip times and throughput, the cloud appears to consistently lose large amounts of the data it is required to send to clients. In this journal, we give a concise survey of the research efforts in this area. Our survey findings show that the networking research community has converged on the common understanding that the current measurement infrastructure is insufficient for the optimal operation and future growth of the cloud. Despite many proposals from the research community on building a network measurement infrastructure, we believe that such an infrastructure will not be fully deployed and operational in the near future, due to both the scale and the complexity of the network. We also suggest a set of technologies to identify and manage cloud traffic using the IP header DS field, QoS protocols, MPLS/IP header compression, high-speed edge routers, and cloud traffic flow measurement. In the solution, the DS field of the IP header is used to recognize cloud traffic separately, and QoS protocols give cloud traffic the type of QoS it requires by allocating resources and marking cloud traffic identification. Further, MPLS/IP header compression is performed so that the traffic can pass through the existing network efficiently and quickly. The solution also suggests deploying high-speed edge routers to improve network conditions, and finally it suggests measuring traffic flow using meters for better cloud network management. Our solutions assume that the cloud is accessed via a basic public network.
The document proposes a method for grouping files and allocating jobs using server scheduling to balance load. It involves splitting a server into multiple sub-servers. The performance of client machines is analyzed based on factors like processing speed, bandwidth, and memory usage. Jobs are then assigned to sub-servers based on these performance analyses, with the goal of completing all tasks quickly. Once tasks are complete, files are distributed to the respective client machines. The proposed method aims to reduce the workload on servers and improve response times compared to the existing system.
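The performance-based allocation can be sketched as follows; the factor weights and the greedy biggest-job-first policy are assumptions, not the paper's exact method:

```python
def performance_score(speed_ghz, bandwidth_mbps, free_mem_gb):
    # Assumed weighting of the three analyzed factors.
    return 0.5 * speed_ghz + 0.3 * (bandwidth_mbps / 100) + 0.2 * free_mem_gb

def allocate_jobs(jobs, sub_servers):
    """Greedy sketch: biggest jobs first, each to the sub-server with the
    most remaining capacity (performance score minus assigned work).

    jobs: list of (job_name, size) tuples.
    sub_servers: list of (server_name, performance_score) tuples.
    """
    assigned = {name: [] for name, _ in sub_servers}
    load = {name: 0.0 for name, _ in sub_servers}
    score = dict(sub_servers)
    for job, size in sorted(jobs, key=lambda j: -j[1]):
        best = max(score, key=lambda n: score[n] - load[n])
        assigned[best].append(job)
        load[best] += size
    return assigned
```

Scoring sub-servers before allocation lets a split server behave like a heterogeneous cluster: faster, better-connected sub-servers absorb more of the work.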
A Distributed Control Law for Load Balancing in Content Delivery NetworksSruthi Kamal
1. The document presents a novel load balancing algorithm for content delivery networks that aims to minimize load imbalance and metric movement costs.
2. It proposes estimating system state through probability distributions of node capacities and load to help peers schedule transfers without centralized control.
3. Each peer independently manipulates partial system information and reassigns virtual servers based on the approximated system state.
The document discusses improving request routing mechanisms in content delivery networks. It outlines comparing CDN and non-CDN networks, analyzing request routing techniques like DNS-based routing. The author proposes a new CDN DNS request routing technique, simulates and compares it to existing techniques. Results show the proposed technique reduces packet loss and round-trip time by using client location to directly route requests. Local load balancing techniques are also evaluated, with round-trip found to perform best.
The Concept of Load Balancing Server in Secured and Intelligent NetworkIJAEMSJORNAL
Computer networks are complex systems that route hundreds of thousands of data packets every second, and that traffic must be routed efficiently to handle large amounts of data. Load balancing is a core networking technique responsible for distributing incoming traffic among servers hosting the same content. For example, if two of ten servers in a network are doing 95% of the work, the network is running inefficiently; if each server handled about 10% of the traffic, the network would run much faster. Load balancing makes networks more efficient by spreading traffic evenly so that no single device is overwhelmed. Balancing requests across multiple servers also prevents any one server from becoming a single point of failure, which improves overall availability and responsiveness. Web servers often use load balancing to split the traffic load evenly among several servers. Whether on a local network or a large web site, load balancing requires hardware or software that divides incoming traffic among the available servers. A network that receives high traffic often dedicates one server, known as the load balancer, to distributing load among the other servers and devices. Clusters of computers working together also use load balancing to spread processing jobs among the available systems.
A load balancing model based on cloud partitioning for the public cloud. ppt Lavanya Vigrahala
Load balancing has an important impact on performance in the cloud computing environment. Good load balancing makes cloud computing more efficient and improves user satisfaction. This article introduces an improved load balancing model for the public cloud based on the cloud partitioning concept, with a switch mechanism that chooses different strategies for different situations. The algorithm applies game theory to the load balancing strategy to improve efficiency in the public cloud environment.
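The switch mechanism described above can be sketched in a few lines: a partition's average load decides its status, and the status decides how an incoming job is assigned. The thresholds, partition layout, and fallback strategy below are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch of the partition-switch idea: the partition status
# (idle / normal / overloaded) selects the assignment strategy for a job.
# Thresholds and the fallback policy are illustrative assumptions.
import random

IDLE_THRESHOLD, OVERLOAD_THRESHOLD = 0.2, 0.8

def partition_status(loads):
    """Classify a partition by the average load of its nodes (0..1)."""
    avg = sum(loads) / len(loads)
    if avg < IDLE_THRESHOLD:
        return "idle"
    return "overloaded" if avg > OVERLOAD_THRESHOLD else "normal"

def assign(partitions):
    """Pick a (partition, node index) for one incoming job, or None."""
    # Strategy 1: prefer an idle partition, taking its least-loaded node.
    for name, loads in partitions.items():
        if partition_status(loads) == "idle":
            return name, min(range(len(loads)), key=loads.__getitem__)
    # Strategy 2: otherwise pick among non-overloaded partitions.
    candidates = {n: l for n, l in partitions.items()
                  if partition_status(l) != "overloaded"}
    if not candidates:
        return None  # every partition overloaded: queue or reject the job
    name, loads = random.choice(sorted(candidates.items()))
    return name, min(range(len(loads)), key=loads.__getitem__)
```

With one idle partition available, the job goes to its least-loaded node; when everything is overloaded the function signals that the job must wait.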
This document summarizes a dissertation on an improved load balancing technique for secure data in cloud computing. The dissertation discusses research issues in load balancing and data security in cloud computing. It proposes a load balancing methodology that uses a load balancer, Kerberos authentication, and Nginx load balancing algorithms like round robin and least connections to securely store and balance load of encrypted data across multiple cloud nodes. The methodology is implemented using tools like HP LoadRunner, Amazon Web Services, and Jelastic cloud platform. Performance is analyzed in terms of transaction time. The proposed technique aims to improve resource utilization, access control, data security, and efficiency in cloud environments.
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
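The threshold rule in point 2 can be sketched roughly as follows; the 0.75 saturation value, the capacity model, and the function names are assumptions for illustration, not the paper's algorithm:

```python
# Illustrative sketch of the threshold rule: bind the task in the current
# data center while it stays below saturation, otherwise migrate the task
# to the next data center in order. The 0.75 threshold is an assumption.
SATURATION = 0.75

def place_task(task_load, centers, current=0):
    """Return the index of the data center that should run the task.

    centers: list of (used_capacity, total_capacity) per data center,
    ordered so that centers[current] is the local one.
    """
    n = len(centers)
    for step in range(n):
        i = (current + step) % n
        used, total = centers[i]
        if (used + task_load) / total <= SATURATION:
            return i  # bind locally (step == 0) or migrate (step > 0)
    return current  # all centers saturated: run locally and queue
```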
Porcupine is a highly available cluster-based mail service that uses commodity hardware to provide scalable email services. It addresses challenges of conventional mail solutions in performance, manageability and availability. Key techniques used include functional homogeneity, automatic reconfiguration, replication and load balancing to provide better availability, manageability and linear performance scaling with cluster size. Evaluation shows it efficiently handles failures, heterogeneous hardware and skewed workloads.
Guarding Fast Data Delivery in Cloud: an Effective Approach to Isolating Perf...Zhenyun Zhuang
LNCS 2015
Cloud-based products rely heavily on fast data delivery between data centers and remote users; when data delivery is slow, the products' performance is crippled. When slow data delivery occurs, engineers need to investigate the issue and find the root cause. The investigation requires experience and time, as data delivery involves multiple moving parts: the sender, the receiver, and the network.
To facilitate these investigations, we propose an algorithm that automatically identifies the performance bottleneck. The algorithm aggregates information from multiple layers of the data sender and receiver, and isolates the problem type by identifying which of the sender, receiver, or network is the bottleneck. After isolation, follow-up efforts can root-cause the exact problem. We also built a prototype to demonstrate the effectiveness of the algorithm.
Implementing a Caching Scheme for Media Streaming in a Proxy ServerAbdelrahman Hosny
In the past few years, websites have moved from static web pages to rich media applications that use audio, images, and video heavily in their interaction with users. This shift has dramatically changed network traffic: media now represents about 69.9-88.8% of all traffic, so organizations spend considerable effort, time, and money improving response time and designing intermediary systems that enhance the overall user experience. Enhancing networks to accommodate this large volume of traffic is a major trend. Content Distribution Networks (CDNs) are now widely deployed for faster media delivery, and redundancy and caching are also used to decrease response time.
In this project, we implement a caching scheme for media streaming in a proxy server. Unlike CDNs, which require huge infrastructure, our caching proxy server is a simple, portable piece of software that can be installed at small as well as large scales; it may be deployed in a university network, a company's private network, or on ISP servers. This caching scheme, tailored specifically for media streaming, will reduce traffic and enhance network efficiency in general.
Index Terms – Proxy servers, Caching, Media streaming
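A caching proxy of the kind the project describes might keep a cache of media segments; the sketch below assumes media is fetched in fixed-size segments keyed by (URL, index), and the LRU policy and fetch callback are illustrative design choices, not the project's actual code:

```python
# A minimal LRU segment cache a streaming proxy might keep. The
# (url, index) keying and the fetch callback are assumptions for
# illustration, not the project's documented design.
from collections import OrderedDict

class SegmentCache:
    def __init__(self, capacity, fetch):
        self.capacity = capacity      # max number of cached segments
        self.fetch = fetch            # callable hit on a cache miss
        self.store = OrderedDict()    # (url, idx) -> bytes, in LRU order
        self.hits = self.misses = 0

    def get(self, url, idx):
        key = (url, idx)
        if key in self.store:
            self.store.move_to_end(key)     # mark as recently used
            self.hits += 1
            return self.store[key]
        self.misses += 1
        data = self.fetch(url, idx)         # go to the origin server
        self.store[key] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return data
```

Hot segments stay cached near the clients, which is exactly the traffic reduction the abstract is after.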
Hybrid Scheduling Algorithm for Efficient Load Balancing In Cloud ComputingEswar Publications
This document presents a hybrid scheduling algorithm for efficient load balancing in cloud computing. The algorithm uses both round robin and priority-based scheduling approaches. It first assigns priorities to incoming job requests and then executes them in a round robin fashion. The algorithm aims to minimize overall response time and data center processing time. It is evaluated through simulation and found to perform better than round robin, priority-based, and equally spread current execution algorithms alone in terms of optimized response time and data center service time.
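The hybrid idea, bucketing requests by priority and then serving each bucket round-robin across VMs, might be sketched as follows; the priority encoding and VM names are invented for illustration:

```python
# Hedged sketch of a priority + round-robin hybrid scheduler: requests
# are grouped by priority, then each group is spread round-robin over
# the VMs. Priority levels and VM names are illustrative assumptions.
from collections import deque
from itertools import cycle

def hybrid_schedule(requests, vms):
    """requests: list of (name, priority), lower number = higher priority.
    Returns a list of (request_name, vm) assignments in service order."""
    buckets = {}
    for name, prio in requests:
        buckets.setdefault(prio, deque()).append(name)
    rr = cycle(vms)                  # round-robin cursor over the VMs
    order = []
    for prio in sorted(buckets):     # serve the highest priority first
        while buckets[prio]:
            order.append((buckets[prio].popleft(), next(rr)))
    return order
```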
Server load balancing (SLB) distributes network traffic across multiple servers to optimize resource utilization and maximize throughput. It intercepts traffic destined for a website and redirects requests to various backend servers using techniques like network address translation. SLB aims to improve performance, increase scalability, and maintain high availability by monitoring servers and routing traffic around failures to keep applications running if servers go down. Both hardware and software-based solutions exist, with hardware providing higher performance but at greater cost than software-based options.
LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTINGijccsa
Load balancing techniques in cloud computing can be applied at two main levels: on physical servers and on virtual servers. Load balancing on physical servers is the policy of allocating physical servers to virtual machines; load balancing on virtual machines is the policy of allocating resources from physical servers to the virtual machines for the tasks or applications running on them. Whichever service model the user requests, SaaS (Software as a Service), PaaS (Platform as a Service), or IaaS (Infrastructure as a Service), an appropriate load balancing policy is needed. When receiving tasks, the cloud data center must allocate them efficiently so that response time is minimized and congestion is avoided. Load balancing should also be performed between different data centers in the cloud to ensure minimum transfer time. In this paper, we propose a virtual machine-level load balancing algorithm that aims to improve the average response time and average processing time of the system in the cloud environment. The proposed algorithm is compared with the Avoid Deadlocks [5], Maxmin [6], and Throttled [8] algorithms, and the results show that our algorithm achieves optimized response times.
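One simple reading of a VM-level policy that minimizes response time is to estimate each VM's finish time from its queued work and speed, and assign the task to the VM that would respond soonest. The queue/speed model below is an assumption for illustration, not the paper's algorithm:

```python
# Illustrative VM selection by estimated response time: each VM is
# modeled by its queued work (units) and speed (units/s); the new task
# goes to the VM with the smallest estimated completion time. The model
# is an assumption, not the paper's.
def pick_vm(task_size, vms):
    """vms: list of dicts with 'queued' and 'speed'.
    Returns the index of the VM with the smallest estimated response time."""
    def eta(vm):
        return (vm["queued"] + task_size) / vm["speed"]
    return min(range(len(vms)), key=lambda i: eta(vms[i]))
```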
Service Request Scheduling based on Quantification Principle using Conjoint A...IJECEIAES
This document presents a service request scheduling technique for heterogeneous distributed systems using quantification principles. It uses conjoint analysis to identify the most influential server attribute, and z-score to quantify attribute values. Servers are assigned a "servicing cutoff" percentage based on z-scores, indicating each server's share of total requests. Requests are prioritized and assigned to servers without exceeding capacity limits. The technique aims to evenly distribute workload among servers according to their quantified capacities. Experimental results showed improved performance over other scheduling principles.
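The quantification step might look roughly like this: z-score the dominant attribute across servers, then convert the scores into per-server shares ("servicing cutoffs") that sum to 100%. Shifting by the minimum before normalizing is one possible choice, not necessarily the paper's:

```python
# Hypothetical sketch of z-score based "servicing cutoff" shares: the
# dominant attribute value of each server is standardized, shifted to be
# positive, and normalized to percentages. The shift-by-minimum step is
# an assumption, not necessarily the paper's formula.
from statistics import mean, pstdev

def servicing_cutoffs(values):
    """values: one attribute value per server. Returns % shares per server."""
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return [round(100 / len(values), 1)] * len(values)  # equal shares
    z = [(v - mu) / sigma for v in values]
    shifted = [s - min(z) + 1 for s in z]   # make every weight positive
    total = sum(shifted)
    return [round(100 * w / total, 1) for w in shifted]
```

A more capable server (a larger attribute value) receives a larger share of the total requests, which matches the intent described in the abstract.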
Efficient and secure content processing and distribution by cooperative inter...Mumbai Academisc
The document proposes an approach for efficient and secure content processing and distribution through cooperative intermediaries. It allows multiple proxies to simultaneously perform adaptation services on different portions of content while preserving data integrity and confidentiality. The approach supports decentralized proxy and key management and flexible delegation of services. Experimental results showed the approach minimizes network data transmission and improves performance.
The document discusses load balancing and intelligent load balancing. It covers load balancing architecture, how the data collector and dynamic store work, and how performance counters are used. Intelligent load balancing techniques like load throttling are explained. Potential issues that could cause load imbalances like the "black hole effect" or failing to read performance counters are also reviewed. Troubleshooting techniques for resolving common problems are provided.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZE RESPONSE TIMEijccsa
To improve the performance of cloud computing, many parameters and issues must be considered, including resource allocation, resource responsiveness, connectivity to resources, discovery of unused resources, resource mapping, and resource planning. Planning the use of resources can be based on many kinds of parameters, and service response time is one of them. Users can easily observe the response time of their requests, so it has become one of the important QoS metrics. Explored further, response time can drive more efficient resource distribution and load balancing, making this one of the most promising research directions for improving cloud technology. This paper therefore proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA algorithms to predict the coming response time, giving a better way of resolving resource allocation against a threshold value. The experimental results are promising for load balancing with predicted response times, and they show that prediction is a valuable direction for load balancing.
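A full ARIMA forecaster needs a statistics library, so the sketch below substitutes a simple exponentially weighted moving average as a stand-in predictor: forecast the next response time, and flag a server for offloading once the forecast crosses a threshold. The smoothing factor and threshold are illustrative assumptions:

```python
# Simplified stand-in for the ARIMA predictor: an exponentially weighted
# moving average forecasts the next response time, and a threshold rule
# decides whether to offload. Alpha and the 200 ms threshold are
# assumptions for illustration, not the paper's parameters.
def forecast_next(history, alpha=0.5):
    """EWMA one-step forecast of the next response time (ms)."""
    est = history[0]
    for x in history[1:]:
        est = alpha * x + (1 - alpha) * est
    return est

def should_offload(history, threshold_ms=200.0):
    """True when the predicted response time exceeds the threshold."""
    return forecast_next(history) > threshold_ms
```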
This document discusses and compares various load balancing techniques in cloud computing. It begins by introducing load balancing as an important issue in cloud computing for efficiently scheduling user requests and resources. Several load balancing algorithms are then described, including honeybee foraging algorithm, biased random sampling, active clustering, OLB+LBMM, and Min-Min. Metrics for evaluating and comparing load balancing techniques are defined, such as throughput, overhead, fault tolerance, migration time, response time, resource utilization, scalability, and performance. The algorithms are then analyzed based on these metrics.
Enhanced Dynamic Web Caching: For Scalability & Metadata ManagementDeepak Bagga
Abstract: Web caching today suffers from problems such as scalability, robustness, and metadata management, which degrade network performance and frustrate clients. This paper discusses several web caching schemes: Distributed Web Caching (DWC), Distributed Web Caching with Clustering (DWCC), Robust Distributed Web Caching (RDWC), and Distributed Web Caching for Robustness, Low Latency & Disconnection Handling (DWCRLD). Clustering improves retrieval latency and also helps provide load balancing in a distributed environment, but it cannot by itself ensure scalability, easy handling of frequent proxy-server disconnections, or proper metadata management in the network. This paper presents a strategy that enhances the clustering scheme to provide scalability even as the cluster grows, easy handling of frequent proxy-server disconnections, and a structure for proper management of the cluster's metadata. A comparative table then shows how the strategy compares with these schemes.
This document describes a server load balancing system for structured data. The objectives are to develop a load balancer that can manage large amounts of data and provide functionality for uploading, downloading, and deleting data, while providing reliability, scalability, and high performance. The system uses a master server to distribute loads to slave servers and track their locations. Clients communicate directly with slave servers to access data using unique keys. This allows for horizontal scaling and fault tolerance. The system is designed to handle large volumes of data across multiple servers and provide reliable access even if servers fail.
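One plausible way for the master to map a data key to a slave server, so that clients can contact the slave directly, is consistent hashing on a sorted ring; the scheme below is an illustrative guess at such a design, not the system's documented mechanism:

```python
# Hypothetical key-to-slave mapping via consistent hashing: slaves are
# placed on a hash ring, and each key is served by the first slave at or
# after its hash. The ring design and naming are assumptions.
import bisect
import hashlib

def _h(s):
    """Stable hash of a string onto the ring."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Master:
    def __init__(self, slaves):
        self.ring = sorted((_h(s), s) for s in slaves)

    def locate(self, key):
        """Return the slave responsible for this key."""
        hashes = [h for h, _ in self.ring]
        i = bisect.bisect(hashes, _h(key)) % len(self.ring)
        return self.ring[i][1]
```

Because the mapping is a pure function of the key, any client holding the slave list can compute it too, which is what lets clients bypass the master for data access.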
A COMPREHENSIVE SOLUTION TO CLOUD TRAFFIC TRIBULATIONSijwscjournal
Cloud computing is widely believed to be the most promising technological revolution in computing and will soon become an industry standard; it is believed that the cloud will replace the traditional office setup. However, a big question mark hangs over network performance when cloud traffic explodes. We call it an "explosion" because, as various cloud services replace desktop computing, they will be accessed via the cloud and traffic will increase exponentially. This journal aims to address some of these doubts, better called "dangers", about network performance when the cloud becomes a global standard, and to provide a comprehensive solution to those problems. Our study finds that, despite offering better round-trip times and throughput, the cloud appears to consistently lose large amounts of the data it is required to send to clients. In this journal, we give a concise survey of the research efforts in this area. Our survey findings show that the networking research community has converged on the common understanding that the existing measurement infrastructure is insufficient for the optimal operation and future growth of the cloud. Despite many proposals from the research community for building a network measurement infrastructure, we believe such an
infrastructure will not be fully deployed and operational in the near future, due to both the scale and the complexity of the network. We also suggest a set of technologies to identify and manage cloud traffic: the IP header DS field, QoS protocols, MPLS/IP header compression, high-speed edge routers, and cloud traffic flow measurement. In this solution, the DS field of the IP header is used to recognize cloud traffic separately, and QoS protocols give cloud traffic the type of QoS it requires by allocating resources and marking cloud traffic identification. MPLS/IP header compression is then performed so that the traffic can pass through the existing network efficiently and speedily. The solution also suggests deploying high-speed edge routers to improve network conditions, and finally measuring the traffic flow with meters for better cloud network management. Our solution assumes that the cloud is accessed via a basic public network.
The document proposes a method for grouping files and allocating jobs using server scheduling to balance load. It involves splitting a server into multiple sub-servers. The performance of client machines is analyzed based on factors like processing speed, bandwidth, and memory usage. Jobs are then assigned to sub-servers based on these performance analyses, with the goal of completing all tasks quickly. Once tasks are complete, files are distributed to the respective client machines. The proposed method aims to reduce the workload on servers and improve response times compared to the existing system.
A Distributed Control Law for Load Balancing in Content Delivery NetworksSruthi Kamal
1. The document presents a novel load balancing algorithm for content delivery networks that aims to minimize load imbalance and metric movement costs.
2. It proposes estimating system state through probability distributions of node capacities and load to help peers schedule transfers without centralized control.
3. Each peer independently manipulates partial system information and reassigns virtual servers based on the approximated system state.
EVALUATION OF TWO-LEVEL GLOBAL LOAD BALANCING FRAMEWORK IN CL...ijcsit
With technological advancements and the constant changes of the Internet, cloud computing has become today's trend. Given the lower cost and convenience of cloud computing services, users have increasingly put their Web resources and information in the cloud environment, so the availability and reliability of client systems will become increasingly important. Today, even the slightest interruption of a cloud application has a significant impact on users, so ensuring the reliability and stability of cloud sites is an important issue, and load balancing is one good solution. This paper presents a framework for global server load balancing of Web sites in a cloud with a two-level load balancing model. The proposed framework is intended to adapt an open-source load-balancing system, and it allows the network service provider to deploy load balancers in different data centers dynamically when customers need more load balancers to increase availability.
PROPOSED LOAD BALANCING ALGORITHM TO REDUCE RESPONSE TIME AND PROCESSING TIME...IJCNCJournal
Cloud computing is a new technology that brings new challenges to organizations around the world. Improving the response time of user requests on cloud computing is a critical issue in combating bottlenecks: bandwidth to and from cloud service providers is a bottleneck, and with the rapid growth in the scale and number of applications, this access is often threatened by overload. This paper therefore proposes the Throttled Modified Algorithm (TMA) for improving the response time of VMs on cloud computing, to improve performance for end users. We simulated the proposed algorithm with the CloudAnalyst simulation tool, and the algorithm improved the response times and processing time of the cloud data center.
DYNAMIC ALLOCATION METHOD FOR EFFICIENT LOAD BALANCING IN VIRTUAL MACHINES FO...acijjournal
This paper proposes a dynamic resource allocation method for cloud computing. Cloud computing is a model for delivering information technology services in which resources are retrieved from the Internet through web-based tools and applications rather than through a direct connection to a server. Users can set up and boot the required resources and pay only for what they use. Providing a mechanism for efficient resource management and assignment will therefore be an important objective of cloud computing. In this project we propose a dynamic scheduling and consolidation mechanism that allocates resources based on the load of Virtual Machines (VMs) on Infrastructure as a Service (IaaS). This method enables users to dynamically add and/or delete one or more instances on the basis of the load and the conditions specified by the user. Our objective is to develop an effective load balancing algorithm using virtual machine monitoring to maximize or minimize different performance parameters (throughput, for example) for clouds of different sizes, with the virtual topology depending on the application requirements.
This document discusses load balancing techniques used by high-traffic websites like Yahoo to distribute server loads and improve performance. It explains that load balancing involves distributing network traffic, processing, and other loads across multiple servers to prevent any single server from being overwhelmed. Common load balancing methods mentioned are round robin DNS, hardware load balancing using network gateways, and software load balancing using integrated components in web and application servers.
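Round robin DNS, the first method mentioned, can be mimicked with a rotating answer list: each lookup returns the address records rotated by one, so successive clients favor different servers. The hostnames and addresses below are invented for illustration:

```python
# Toy model of round-robin DNS: each resolve() returns the record list
# in its current order, then rotates it so the next client prefers a
# different server. Hostnames and IPs are invented examples.
from collections import deque

class RoundRobinDNS:
    def __init__(self, records):
        self.records = {host: deque(ips) for host, ips in records.items()}

    def resolve(self, host):
        ips = self.records[host]
        answer = list(ips)   # current ordering; clients try the first IP
        ips.rotate(-1)       # next query sees a rotated list
        return answer
```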
Cost-Minimizing Dynamic Migration of Content Distribution Services into Hybri...nexgentechnology
An Efficient Distributed Control Law for Load Balancing in Content Delivery N...IJMER
IRJET- Commercial Web Application Load Balancing based on Hybrid CloudIRJET Journal
This document discusses load balancing of traffic from a commercial web application across instances on Amazon Web Services (AWS) cloud infrastructure. It begins with an abstract describing the goal of balancing traffic across AWS to reduce hardware needs. It then covers background on load balancing, AWS services like EC2 instances, and algorithms like round robin. The methodology section outlines deploying a website to a private cloud, using AWS services like Elastic Load Balancer and auto-scaling instances to balance traffic. Load is balanced using a round robin algorithm across instances to improve website performance and user experience. In conclusion, balancing traffic across a hybrid cloud provides smooth access to websites for users.
Role of Virtual Machine Live Migration in Cloud Load BalancingIOSR Journals
Abstract: Cloud computing has touched almost every field of life. The number of cloud application consumers is increasing every day, and so is the number of application requests to the cloud provider, which increases the workload on many cloud nodes. The motive for using load balancing in a cloud environment is to utilize the available resources efficiently, ensuring that no single system is heavily loaded and no system sits idle during the active phase of request completion. Although cloud computing is most often a software facility, this paper discusses how it actually performs at the processor level in a heavily loaded environment. The paper aims to shed light on what cloud load balancing is and on the role of virtual machine migration in improving it.
Keywords: Cloud load balancing, Live Migration, Migration, Virtualization, Virtual machine.
Modified Active Monitoring Load Balancing with Cloud Computingijsrd.com
Cloud computing is Internet-based computing in which large groups of remote servers are networked to allow centralized data storage and online access to computer services or resources. Load balancing is essential for efficient operation in distributed environments. As cloud computing grows rapidly and clients demand more services and better results, load balancing for the cloud has become a very interesting and important research area; without a proper load balancing strategy or technique, cloud computing will not grow as predicted. The main focus of this paper is to verify the approach proposed in the model paper [3]. An efficient load balancing algorithm can reduce data center processing time and overall response time and cope with the dynamic changes of cloud computing environments. The traditional Active Monitoring load balancing algorithm has been modified to achieve better data center processing time and overall response time. The algorithm presented in this paper efficiently distributes requests to all the VMs for execution, considering the CPU utilization of all VMs.
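The modification described, factoring CPU utilization into Active Monitoring's least-allocations rule, might be sketched as a tie-break: among the VMs with the fewest active allocations, prefer the one with the lowest CPU utilization. This tie-breaking rule is an assumed interpretation of the modification, not a quote of the paper:

```python
# Sketch of modified active monitoring: the balancer tracks active
# request counts per VM and breaks ties on the least-allocated VMs by
# CPU utilization. The tie-break interpretation is an assumption.
def choose_vm(allocations, cpu_util):
    """allocations: active request count per VM; cpu_util: 0..1 per VM.
    Returns the index of the VM to receive the next request."""
    least = min(allocations)
    candidates = [i for i, a in enumerate(allocations) if a == least]
    return min(candidates, key=lambda i: cpu_util[i])
```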
This document proposes a new retrieval strategy called CoRe for peer-to-peer video-on-demand systems. CoRe aims to minimize response time and maximize throughput. It does this by selecting multiple serving peers to collaboratively service each request based on factors like distance and available resources. The document outlines limitations of existing strategies, describes the CoRe algorithm in detail, and presents experimental results showing CoRe performs better than the Least Load First algorithm, especially under heavy workloads.
The document discusses server virtualization and consolidation in enterprise data centers. It notes that many servers are underutilized but some become overloaded during peaks, and server consolidation aims to increase utilization while maintaining performance. Two main virtualization technologies are hypervisor-based (e.g. VMware, Xen) and operating system-level (e.g. OpenVZ, Linux VServer). The document evaluates the performance and scalability of a multi-tier application running on these virtualization platforms under different consolidation scenarios. It also examines the impact on underlying system metrics to understand virtualization overhead.
This document discusses real-time issues in cloud computing and proposes a framework for real-time service-oriented cloud computing. It presents challenges at both the client-side and server-side. At the client-side, issues include efficient execution, caching, paging, stream filtering, runtime checking and environment-aware adaptation. At the server-side, major issues are customization to serve multiple tenants simultaneously, and scalability to provide additional resources proportional to customer demand while maintaining performance. The paper proposes a novel real-time architecture to address these new challenges in cloud computing.
Load Balancing Algorithm to Improve Response Time on Cloud Computingneirew J
Cloud Computing Load Balancing Algorithms Comparison Based SurveyINFOGAIN PUBLICATION
Cloud computing is an online primarily based computing. This computing paradigm has increased the employment of network wherever the potential of 1 node may be used by alternative node. Cloud provides services on demand to distributive resources like info, servers, software, infrastructure etc. in pay as you go basis. Load reconciliation is one amongst the vexing problems in distributed atmosphere. Resources of service supplier have to be compelled to balance the load of shopper request. Totally different load reconciliation algorithms are planned so as to manage the resources of service supplier with efficiency and effectively. This paper presents a comparison of assorted policies used for load reconciliation.
Static Enabler: A Response Enhancer for Dynamic Web ApplicationsOsama M. Khaled
In this paper, we describe a solution for dynamic web applications that need to optimize their performance for
responsiveness. The solution is described as a design pattern to be shared among developers from different
technical backgrounds. The Static Enabler design pattern enhances the performance of a dynamic web
application by converting some of its pages into static ones without losing the reference to the original pages.
For citation:
10. Osama M. Khaled and Hoda M. Hosny. Static Enabler: A Response Enhancer for Dynamic Web Applications. In the (VikingPLOP 2004) September 16th-19th, Uppsala, Sweden.
Similar to A Modified Genetic Algorithm based Load Distribution Approach towards Web Hotspot rescue (20)
optimization, which is used to distribute the load that would otherwise be assigned to a single server across a network of processing elements or servers, so as to equalize the load among the servers at any point in time.
A commonly used load balancing technique is DNS Round Robin, a DNS-based load balancing process. This technique associates more than one IP address with a single hostname, as shown in Fig. 1 [16]. For example, the hostname www.vegan.net is associated with multiple IP addresses, so that traffic is distributed evenly among them. However, DNS Round Robin suffers from limitations such as caching issues and uneven traffic distribution. Nowadays, the SLB (server load balancing) process is quite effective at solving problems of redundancy, scalability and server management. SLB generally comes with components such as the VIP (virtual IP address), servers, user access levels, redundancy, persistence, service checking, load balancing algorithms etc. Load balancing algorithms are programmed into the SLB device and are assigned to individual VIPs. There are a number of load balancing algorithms, which can be categorized as global or local, static or dynamic, centralized or distributed etc.
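The DNS Round Robin idea can be sketched in a few lines of Python. This is an illustrative toy, not a real DNS implementation: the hostname www.vegan.net comes from the text above, while the TEST-NET IP addresses are made up for the example.

```python
from itertools import cycle

class RoundRobinResolver:
    """Toy DNS Round Robin: one hostname maps to several IP
    addresses, which are handed out in rotation."""

    def __init__(self, records):
        # records: hostname -> list of IP address strings
        self._cycles = {host: cycle(ips) for host, ips in records.items()}

    def resolve(self, hostname):
        # Each lookup returns the next address in the rotation.
        return next(self._cycles[hostname])

resolver = RoundRobinResolver({
    "www.vegan.net": ["192.0.2.10", "192.0.2.11", "192.0.2.12"],
})
answers = [resolver.resolve("www.vegan.net") for _ in range(4)]
# the fourth lookup cycles back to the first address
```

Note the caching limitation mentioned above: a resolver that caches the first answer defeats this rotation entirely, which is one reason SLB devices displaced plain DNS Round Robin.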
In our proposed system, the concept of SLB is implemented with the application of an optimization algorithm, namely the Genetic Algorithm. A Genetic Algorithm combines the exploitation of previous results with the exploration of new solutions in the search space. It generally follows the survival-of-the-fittest technique [2]. The Genetic Algorithm maintains a population of candidate solutions that evolves over time and ultimately converges to the optimal solution. In a population, individuals are represented by chromosomes, each expressed as a string of bits. To evolve the best solution and to implement natural selection, an objective function is defined, which helps to measure a candidate solution's relative fitness.
The domain of our key problem is the distributed system. Generally, a distributed system comprises a number of computers acting as clients, which access services from another set of computers acting as servers. The most common example of a distributed system is the World Wide Web (WWW). Every day the WWW receives a large volume of traffic, which is directed to web server systems. The purpose of a web server is to store information and serve client requests. A web server system consists of multiple web server hosts, running a number of web applications simultaneously.
Dynamic load balancing requires allocating servers/resources to client requests at the moment they arrive. It is "mission-critical", as the incoming load cannot be predicted in advance. It involves key issues such as task migration and load sharing. According to Ref. [3], load sharing manages the tasks in the system in such a way that no processor in the system is idle. Generally, a process is migrated to another processor if the migration cost or overhead is less than some predetermined metric, in order to improve processor utilization. Migration of processes generally requires additional hardware support, which in turn increases the cost of execution. A load balancing strategy tries to ensure that the processors or servers in the system are equally loaded and that every processor or server processes a similar number of requests. Once requests are received, a good scheduling policy should assign them to appropriate servers so that execution completes in the shortest possible time. In this paper, we treat load balancing as a process scheduling problem: every incoming request is taken as one process and assigned to a processor or server for processing.
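Treating each request as a process suggests a natural chromosome encoding: gene i holds the index of the server assigned to request i. The sketch below assumes known per-request service times and uses the inverse of the makespan as fitness; this objective is an illustrative stand-in, not necessarily the exact function used in the proposed scheme.

```python
import random

def random_chromosome(num_requests, num_servers):
    # Gene i = index of the server that will process request i.
    return [random.randrange(num_servers) for _ in range(num_requests)]

def fitness(chromosome, service_times, num_servers):
    # A server's load is the total service time of its assigned requests.
    loads = [0.0] * num_servers
    for req, server in enumerate(chromosome):
        loads[server] += service_times[req]
    # Lower makespan (the busiest server's load) -> higher fitness.
    return 1.0 / max(loads)

times = [4.0, 2.0, 1.0, 3.0, 2.0]   # illustrative service times
chrom = random_chromosome(len(times), num_servers=3)
score = fitness(chrom, times, num_servers=3)
```

For example, the assignment [0, 1, 2, 0, 1] yields server loads (7, 4, 1), so its fitness is 1/7; an evenly balanced assignment would score higher.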
The rest of the paper is organized as follows: Section 2 gives a brief description of related work. Section 3 describes the theoretical details of the Genetic Algorithm. Section 4 introduces the system and process model. Section 5 presents the proposed Genetic Algorithm based load balancing approach. The implementation and results are discussed in Section 6. Section 7 concludes the paper.
Figure 1: DNS Round Robin mechanism
II. RELATED WORKS
Web hotspot is a serious problem, as it degrades the quality of the website, and manual control of the whole rescue process would further affect website quality. Ref. [1] describes the DotSlash autonomic rescue system by Weibin Zhao, which provides a solution to this problem. DotSlash enables a web site to create a distributed web server system on the fly, adapting to the changing environment. In the design of the DotSlash autonomic rescue system, a cost-effective mechanism is applied to handle the increased request load: different web sites form a mutual-aid community of web servers, so that during critical periods a site can use the spare capacity of the other web sites in the community.
The working of the DotSlash rescue system can be summarized by the following steps:
Dynamic Virtual Hosting
Request Redirection
Workload Monitoring
Rescue Control
Service Recovery.
A. Dynamic Load Balancing Approaches
In the client-based approach, requested documents can be routed to any replicated web server even when the nodes are loosely (or not) coordinated. Routing of requests to the web clusters can be done either by web clients or by client-side proxy servers [4]. The DNS-based approach overcomes the limitations of the client-based approach, as it uses a request routing mechanism on the cluster side. The cluster DNS, i.e. the authoritative DNS server for the nodes of the distributed web server system, translates the URL to the IP address of one server, thereby providing architecture transparency at the URL level [4] [5]. Based on the scheduling algorithm used by the cluster DNS to balance the load on the web server nodes, the DNS-based approach can be categorized into constant TTL algorithms and adaptive (dynamic) TTL algorithms.
For a cluster-based approach in peer-to-peer systems, B. Mortazavi and G. Kesidis [6] used a reputation framework, based on which they designed a game in which players compete to receive the maximum number of files from the system. Brighten Godfrey et al. [7] proposed an algorithm for load balancing in heterogeneous and dynamic P2P systems. Kalman Graffi et al. [8] used a DHT-based information gathering and system analysis technique. To address the load balancing problem in P2P systems, Ananth Rao et al. [9] proposed an algorithm based on the idea of virtual servers. Song Fu et al. [10] characterized the behaviour of randomized search algorithms in the general P2P environment. For the dispatcher-based approach, Harikesh Singh et al. [5] presented an advanced DNS dispatching technique that distributes HTTP requests from clients using round robin and proximity-based scheduling algorithms.
Many load balancing approaches also involve optimization techniques such as fuzzy logic and Genetic Algorithms. The load balancing problem is known to be NP-hard with respect to the number of requests versus the number of machines/servers, which motivates the search for near-optimal solutions. Yu-Kwong Kwok et al. [14] defined a new dynamic fuzzy-decision-based load balancing system incorporated in a distributed object computing environment. Borrowing from conventional control theory, a sudden increase in load is considered an external force on the system, and a feedback mechanism is maintained to minimize the effect of this external force. A Genetic Algorithm based approach was introduced by Bibhudatta Sahoo et al. [15] for dynamic load distribution in heterogeneous distributed systems; it defines load balancing as a job scheduling mechanism and compares the proposed system with two scheduling policies, LERT-MW and LERT-MWM. Priyanka Gonade et al. [3] defined a modified Genetic Algorithm approach with an objective function that minimizes the load deviation of a node.
III. GENETIC ALGORITHM - THEORETICAL CONCEPT
Genetic Algorithm (GA) is a search-based method that works on the principles of natural selection and genetics. It seeks the optimal solution in a search space consisting of a population of potential solutions. The algorithm follows the principle of survival of the fittest, where each individual represents a point in the search space of the problem's solutions. An individual, which represents a candidate solution, can be expressed as a string of bits, referred to as a chromosome. Each chromosome is composed of variables called genes, and the values associated with the genes are termed alleles. To evolve the best solution and to implement natural selection, an objective function is defined, which helps to measure a candidate solution's relative fitness. The objective function is an important concept, as it is subsequently used by the GA to guide the evolution of the best solutions. After the problem has been encoded in a chromosomal manner and an objective function has been chosen, the solution to the search problem can be evolved using the following steps [11]:
INITIALIZATION:
The initial population of candidate solutions is usually generated randomly across the search space.
EVALUATION:
After initialization of the population, the fitness values of all the candidate solutions are evaluated by using
the objective function.
SELECTION:
Selection carries solutions with higher fitness values into the next generation, imposing the
survival-of-the-fittest mechanism on the candidate solutions. The main idea of selection is to prefer better
solutions to worse ones, and many selection procedures have been proposed to accomplish this. Common
selection techniques are roulette-wheel selection, stochastic universal selection, ranking selection and
tournament selection.
RECOMBINATION:
Recombination combines parts of two or more parental candidate solutions to create new,
possibly better solutions, termed offspring. Offspring produced by recombination are not identical to any
particular parent; instead they combine parental traits in a novel manner [13].
MUTATION:
The task of mutation is to modify a solution locally but randomly. It generally involves changing one or more
traits of an individual; in effect, mutation performs a random walk in the space of candidate
solutions.
REPLACEMENT:
The offspring population created by selection, recombination, and mutation replaces the original parental
population. Many replacement techniques such as elitist replacement, generation-wise replacement and
steady-state replacement methods are used in GAs.
TERMINATION CONDITIONS OR STOPPING CONDITIONS:
Termination conditions are generally problem dependent. Common stopping conditions include reaching the
optimal solution, or observing the same best fitness value over several consecutive generations.
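The steps above can be sketched as a single evolutionary loop. The following is a minimal outline on a toy bit-counting objective, not the paper's implementation; an elitist copy of the best individual is carried over so the best fitness never decreases (class and method names are illustrative):

```java
import java.util.*;

class GaSkeleton {
    // Toy fitness function (count of 1-bits), standing in for the
    // problem-specific objective function described above.
    static int fitness(boolean[] chromosome) {
        int f = 0;
        for (boolean gene : chromosome) if (gene) f++;
        return f;
    }

    // SELECTION: binary tournament -- the fitter of two random individuals wins.
    static boolean[] tournament(List<boolean[]> pop, Random rng) {
        boolean[] a = pop.get(rng.nextInt(pop.size()));
        boolean[] b = pop.get(rng.nextInt(pop.size()));
        return fitness(a) >= fitness(b) ? a : b;
    }

    // One full GA run following the steps above; returns the best individual found.
    static boolean[] run(int popSize, int genes, int maxGenerations, long seed) {
        Random rng = new Random(seed);
        // INITIALIZATION: random population across the search space.
        List<boolean[]> pop = new ArrayList<>();
        for (int i = 0; i < popSize; i++) {
            boolean[] c = new boolean[genes];
            for (int g = 0; g < genes; g++) c[g] = rng.nextBoolean();
            pop.add(c);
        }
        Comparator<boolean[]> byFitness = Comparator.comparingInt(GaSkeleton::fitness);
        for (int gen = 0; gen < maxGenerations; gen++) {
            boolean[] best = Collections.max(pop, byFitness);  // EVALUATION
            if (fitness(best) == genes) break;                 // TERMINATION: optimum found
            List<boolean[]> next = new ArrayList<>();
            next.add(best.clone());                            // elitist copy
            while (next.size() < popSize) {
                boolean[] p1 = tournament(pop, rng), p2 = tournament(pop, rng);
                // RECOMBINATION: one-point crossover.
                int cut = rng.nextInt(genes);
                boolean[] child = new boolean[genes];
                for (int g = 0; g < genes; g++) child[g] = (g < cut) ? p1[g] : p2[g];
                // MUTATION: flip each bit with a small probability.
                for (int g = 0; g < genes; g++) if (rng.nextDouble() < 0.02) child[g] = !child[g];
                next.add(child);
            }
            pop = next;                                        // REPLACEMENT: generation-wise
        }
        return Collections.max(pop, byFitness);
    }
}
```

With elitism, running more generations can only improve (never worsen) the best fitness found for a given random seed.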
A. Basic Genetic Algorithm Operators
SELECTION OPERATOR:
The basic selection techniques can be distinguished into two categories:
FITNESS PROPORTIONATE SELECTION
This includes methods such as roulette-wheel selection and stochastic universal selection [11]. In
roulette-wheel selection, each individual in the population is assigned a roulette-wheel slot sized according
to its fitness value; thus a fitter solution occupies a larger slot than a less fit one.
ORDINAL SELECTION
This includes methods such as tournament selection and truncation selection [11]. In tournament selection,
s chromosomes are selected at random and put in a tournament against each other; the k fittest individuals
of the group are selected as parents.
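A minimal sketch of the roulette-wheel mechanism described above (the select() helper and its signature are our own illustration):

```java
import java.util.Random;

class RouletteWheel {
    // Roulette-wheel selection: each individual gets a wheel slot proportional
    // to its fitness, so fitter individuals are chosen more often.
    // Returns the index of the selected individual.
    static int select(double[] fitness, Random rng) {
        double total = 0;
        for (double f : fitness) total += f;
        double spin = rng.nextDouble() * total;  // random point on the wheel
        double cumulative = 0;
        for (int i = 0; i < fitness.length; i++) {
            cumulative += fitness[i];
            if (spin < cumulative) return i;
        }
        return fitness.length - 1;               // guard against rounding error
    }
}
```

An individual with zero fitness gets a zero-width slot and is never selected, while an individual holding all the fitness mass is always selected.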
RECOMBINATION OPERATOR:
After selection, the selected individuals are recombined (crossed over) to create new, hopefully
better, offspring. In the recombination process, two individuals are selected at random and recombined with a
predefined probability pc, termed the crossover probability. A uniform random number r is drawn and
compared with pc: if r <= pc, the individuals are recombined; if r > pc, the offspring are simply
copies of their parents. Pseudocode for this mechanism is given below:
Pseudo Code:
[1] Start
[2] Draw a uniform random number r
[3] Let pc = crossover probability
[4] If r <= pc
[5] then perform recombination
[6] else
[7] copy the parents to the next generation
[8] End
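The pseudocode above can be made runnable roughly as follows, together with a one-point crossover of the kind commonly paired with it (names are illustrative, not the paper's code):

```java
import java.util.Random;

class CrossoverGate {
    // Decides whether two selected parents are recombined or copied unchanged,
    // following the pseudocode above: draw a uniform random number r and
    // compare it with the crossover probability pc.
    static boolean shouldRecombine(double pc, Random rng) {
        double r = rng.nextDouble();   // uniform random number in [0, 1)
        return r <= pc;                // r <= pc -> recombine, else copy parents
    }

    // One-point crossover of two equal-length integer chromosomes:
    // genes before 'cut' come from one parent, the rest from the other.
    static int[][] crossover(int[] p1, int[] p2, int cut) {
        int n = p1.length;
        int[] c1 = new int[n], c2 = new int[n];
        for (int g = 0; g < n; g++) {
            c1[g] = (g < cut) ? p1[g] : p2[g];
            c2[g] = (g < cut) ? p2[g] : p1[g];
        }
        return new int[][]{c1, c2};
    }
}
```

With pc = 1.0 the parents are always recombined; with pc = 0.0 they are always copied unchanged.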
MUTATION OPERATOR:
The significance of the mutation operator is to add diversity to the population and to ensure exploration of
the entire search space. Mutation is the primary variation/search operator and is performed with low
probability in a GA. Bit-flip mutation is the most common mutation technique: a mutation probability pm
is defined, and each bit in a binary string is flipped with that probability (a 0 is converted to 1, and vice
versa).
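Bit-flip mutation as described can be sketched as follows (the helper name and signature are illustrative):

```java
import java.util.Random;

class BitFlipMutation {
    // Bit-flip mutation: each bit of the binary string is flipped
    // independently with probability pm (0 becomes 1 and vice versa).
    // Returns a mutated copy; the input chromosome is left untouched.
    static boolean[] mutate(boolean[] chromosome, double pm, Random rng) {
        boolean[] out = chromosome.clone();
        for (int i = 0; i < out.length; i++) {
            if (rng.nextDouble() < pm) out[i] = !out[i];
        }
        return out;
    }
}
```

Setting pm = 1.0 flips every bit and pm = 0.0 flips none; in practice pm is kept small (e.g. 1/string-length) so mutation only perturbs solutions locally.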
REPLACEMENT OPERATOR:
Replacement techniques are used to introduce the newly generated offspring into the parental population.
Some of the replacement techniques are:
Delete-All:
Deletes all individuals in the current population and replaces them with the same number of newly
created offspring.
Steady-State:
Deletes n old members of the population and replaces them with n new offspring. The number n to delete
and replace at any one time is a parameter of this technique.
Steady-state-no-duplicates:
While replacing n parents with n offspring, this technique ensures that no duplicate
chromosomes are added to the population.
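The steady-state scheme, for example, can be sketched as follows (here an individual is reduced to its integer fitness value purely for illustration; a real GA would sort full chromosomes by fitness):

```java
import java.util.*;

class SteadyStateReplacement {
    // Steady-state replacement: delete the n worst individuals of the current
    // population and insert the n newly created offspring in their place.
    static List<Integer> replace(List<Integer> population, List<Integer> offspring) {
        List<Integer> next = new ArrayList<>(population);
        next.sort(Comparator.naturalOrder());       // worst (lowest fitness) first
        next.subList(0, offspring.size()).clear();  // delete n old members
        next.addAll(offspring);                     // add the n offspring
        return next;
    }
}
```

For example, replacing one member of the population {5, 1, 3} with offspring {9} removes the worst member (1) and yields {3, 5, 9}.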
IV. SYSTEM AND PROCESS MODEL
In this paper, the load balancing problem is formulated as a process scheduling policy. Each incoming client
request is treated as one process. We have to find the optimum schedule according to which the processes/
requests are allocated to different servers, according to their demands. The process scheduling mechanism
is implemented in two phases:
PROCESS DISTRIBUTION:
Distributes the load equally across the processors.
PROCESS EXECUTION ORDERING:
The genetic algorithm is applied in this stage. GAs are randomized search methods that mimic
the principles of evolution and natural selection; from an entire solution space, the GA searches for the
optimal solution.
Every request that arrives at the distributed server system is considered one process. A request queue is
defined into which every received request is placed; requests are taken
out of the queue for processing in FCFS order.
Let P = (p1, p2, p3, …, pn) denote the set of processors or servers in the distributed system, with the
constraint that one processor can execute only one request at a time.
J = (j1, j2, j3, …, jm) denotes the set of processes to be executed.
An n×m assignment matrix is maintained, where the entry a_ik, 1 ≤ i ≤ n, 1 ≤ k ≤ m, denotes the number of
times a process jk is allocated to a specific processor pi. The matrix is updated with every schedule.
The process scheduling mechanism is depicted in Fig. 2 [3][12], which shows the request queue and
the processors or servers 1, 2, …, m of the distributed system.
In this paper, the underlying system architecture is considered to have the following components, as shown in
Figure 3:
Clients
Forwarding Machine
MASTER
Servers
Figure 2: Process scheduling mechanism
Clients are connected through a network to the distributed server system, which consists
of a number of interconnected servers. When clients send requests, they are first received
by the FMs (forwarding machines), which are responsible for forwarding the requests to the servers. Behind
every FM there is a server, or sometimes a cluster of servers, to process different requests according to
their demands. The MASTER performs the role of server load balancer. Server load balancing can be
defined as the process of distributing the traffic arriving at a web site among a number of servers, using a
network-based device; it is generally a user-transparent process. In our underlying system model, the
MASTER is responsible for making decisions about process/request assignment.
To avoid deadlock-type situations, the process scheduling mechanism must fulfil the following two
constraints:
Time Constraint (CT): Processes/requests with the same demand cannot be allocated to a server simultaneously;
a specific time interval must separate the allocation of processors/servers to different requests/processes.
Activity Constraint (CA): No server should be active or idle forever. After a specific time period, each
server should be assigned to the sleep (idle) state while other servers process the requests.
To state the load balancing problem simply, suppose we have a set of n requests or tasks that must be
assigned to m machines/servers. We are given an array of non-negative elements T[1, 2, …, n], where the value
T[i] represents the running time of task i. The assignment is given by the assignment matrix.
A. Performance Metrics:
To evaluate the performance of the proposed model, we have considered the following metrics
PROCESSOR LOAD:
It is defined as the number of processes allocated to a specific processor/server. Denoted load(p_i), it
gives the total number of processes a processor holds, which is the sum of the processes already allocated
to that processor and the processes newly assigned to it.
Mathematically,

load(p_i) = \sum_{k=1}^{m} a_{ik}^{old} + \sum_{k=1}^{m} a_{ik}^{new}        (1)

where a_{ik}^{old} and a_{ik}^{new} are, respectively, the existing and newly added entries of the
assignment matrix for processor p_i.
MAKESPAN:
It is defined as the maximum finishing time, i.e. the total execution time required to complete the maximum
load on any processor p_i at any time t.
Mathematically,

makespan(t) = \max_{1 \le i \le n} load(p_i)        (2)
PROCESSOR UTILIZATION:
Processor utilization for any processor p_i is obtained by dividing the processor load, load(p_i), by the
value of the makespan.
Mathematically,

U(p_i) = \frac{load(p_i)}{makespan}        (3)
Figure 3: Underlying system architecture
Average processor utilization is given by:

U_{avg} = \frac{1}{n} \sum_{i=1}^{n} U(p_i)        (4)
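Under the assumption that the assignment-matrix entries a[i][k] already contain the combined old and new process counts of Eq. (1), the metrics of Eqs. (1)-(4) can be sketched as:

```java
class Metrics {
    // Processor load (Eq. 1): total number of processes assigned to processor i,
    // i.e. the i-th row sum of the n-by-m assignment matrix a[i][k].
    static int load(int[][] a, int i) {
        int sum = 0;
        for (int k = 0; k < a[i].length; k++) sum += a[i][k];
        return sum;
    }

    // Makespan (Eq. 2): the maximum processor load at the current time.
    static int makespan(int[][] a) {
        int max = 0;
        for (int i = 0; i < a.length; i++) max = Math.max(max, load(a, i));
        return max;
    }

    // Processor utilization (Eq. 3): load(p_i) divided by the makespan.
    static double utilization(int[][] a, int i) {
        return (double) load(a, i) / makespan(a);
    }

    // Average processor utilization (Eq. 4): mean utilization over all n processors.
    static double averageUtilization(int[][] a) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += utilization(a, i);
        return sum / a.length;
    }
}
```

For instance, with two processors carrying 3 and 1 processes the makespan is 3, the utilizations are 1 and 1/3, and the average utilization is 2/3.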
V. PROPOSED GENETIC ALGORITHM BASED APPROACH
Generally the Genetic Algorithm provides an efficient way to search for an optimal solution. The algorithm
starts by randomly generating an initial population of possible solutions. In this paper, the proposed GAimplemented load balancing provides distribution of processes, among different processor or server, based on
processor load. When a process is assigned to a processor, the processor load is updated with the latest
assigning process to that processor or server, which is given by the assignment (n×m) matrix. In context to
our problem, the initial population is created by randomly taking incoming requests/ processes. A request
queue is defined with all un- processed requests/ processes within it. After a specific time- interval, the
request/ processes are taken out from the queue in FCFS order and randomly allocated to the processors
(servers). Then each schedule is evaluated according to a fitness function. Two best schedules are selected, to
produce the next generation. Mutation and crossover functions are performed over the selected schedules to
produce schedules with higher fitness value, in order to maintain the population size. In every generation,
individuals are evaluated with the fitness function and less fit solutions are got rejected.
The algorithm can be implemented in the following phases:
INITIALIZATION PHASE:
The genetic algorithm searches over a large population of individuals. The initial population is created
by selecting processes/requests from the request queue in FCFS order and assigning them randomly
to processors (servers). The order in which processes are assigned is treated as a condition, and the initial
population is obtained by swapping the assignment orders of processes a fixed number of times.
EVALUATION PHASE:
The evaluation phase provides a quality measure to determine how fit an individual is within the
population. In the context of our key problem, the "web hotspot", where the load on the server suddenly
becomes very high, we define the fitness of a schedule as its number of un-processed requests. This is
because our foremost aim is to distribute the load among the processors (servers) so that every incoming
process/request receives a response. The fittest schedule has zero un-processed requests.
The fitness function can be written as:

fit(s) = \{\, s : u_i(s) = 0,\ \forall i = 1, 2, \dots, n \,\}        (5)

where u_i(s) is the number of un-processed requests at processor p_i under schedule s. A schedule s is thus
said to be fittest when no request remains un-processed. This fitness function is applied to the individuals
of the initial population.
SELECTION PHASE:
The selection phase makes more copies of the solutions with higher fitness, implementing the survival-of-
the-fittest mechanism on the candidate solutions and improving the total fitness of the population. For the
selection of individuals, a quality measure q is defined with q = u; schedules with fewer un-processed
requests/processes are therefore selected for the next generation.
RECOMBINATION PHASE:
The recombination phase creates new and better offspring by combining two or more parental solutions chosen
in the selection phase. Several recombination operators are defined to accomplish this, as follows:
Crossover
Crossover exchanges parts of information between two selected individuals. The two best individuals are
taken, a process ji is chosen at random from one string and placed into the second string; for two
processes ji and jk, with ji, jk ∈ J, the processes are exchanged between the two individuals. If one
of the parents is itself the best individual, we simply mutate the second string.
Mutation
Mutation changes gene values in a chromosome, replacing a gene value with a new value selected
from a defined domain for that gene. For mutating two processes, two numbers r and c are defined with the
conditions:
i) r ≠ c
ii) the set r is not empty.
One process is selected at random from the set r and replaces the one at c.
TERMINATION PHASE:
Stopping conditions are the situations upon which the program terminates. Some of them are:
the maximum number of generations is reached,
the best fitness stays equal over a number of generations,
the desired solution is obtained.
All these steps are combined to give the GA based load balancing algorithm.
Here is the algorithm:
Algorithm: GA based Load Balancing
{
[1] Initialization
[2] Load checking
[3] Repeat steps [4] through [9] until the request_queue is empty
    or the stopping conditions are TRUE
{
[4] Randomly create the initial population
[5] Apply the fitness function
[6] Choose the two best individuals from the population
[7] Cross over the selected individuals
[8] Mutate the children
[9] Replace the worse individuals in the population with the better ones
}
}
[10] End
The whole mechanism is shown in figure 4.
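A compact sketch of the algorithm above is given below. Since "un-processed requests" need a concrete meaning in code, we assume each server can process at most a fixed number of requests per interval; that capacity model, and all names (evolve, unprocessed), are our own illustration, not the paper's implementation:

```java
import java.util.*;

class GaLoadBalancer {
    static final Random RNG = new Random(7);

    // Fitness of a schedule: the number of un-processed requests. A schedule is
    // an array mapping each request to a server index. We assume each server
    // processes at most 'capacity' requests per interval (our assumption).
    static int unprocessed(int[] schedule, int servers, int capacity) {
        int[] load = new int[servers];
        for (int server : schedule) load[server]++;
        int over = 0;
        for (int l : load) over += Math.max(0, l - capacity);
        return over;
    }

    // Evolve a schedule of 'requests' requests over 'servers' servers.
    static int[] evolve(int requests, int servers, int capacity,
                        int popSize, int generations) {
        List<int[]> pop = new ArrayList<>();
        for (int i = 0; i < popSize; i++) {              // random initial population
            int[] s = new int[requests];
            for (int r = 0; r < requests; r++) s[r] = RNG.nextInt(servers);
            pop.add(s);
        }
        Comparator<int[]> byFitness =
            Comparator.comparingInt(s -> unprocessed(s, servers, capacity));
        for (int g = 0; g < generations; g++) {
            pop.sort(byFitness);                         // best (fewest unprocessed) first
            if (unprocessed(pop.get(0), servers, capacity) == 0) break; // fittest found
            int[] child = pop.get(0).clone();            // crossover: head of best ...
            int cut = RNG.nextInt(requests);
            int[] second = pop.get(1);
            for (int r = cut; r < requests; r++) child[r] = second[r]; // ... tail of 2nd best
            child[RNG.nextInt(requests)] = RNG.nextInt(servers);       // mutation
            pop.set(pop.size() - 1, child);              // replace the worst individual
        }
        pop.sort(byFitness);
        return pop.get(0);
    }
}
```

The loop mirrors steps [4]-[9]: build a random population, evaluate with the un-processed-request fitness, cross over the two best schedules, mutate the child, and replace the worst member, stopping when a schedule with zero un-processed requests appears.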
VI. IMPLEMENTATION AND RESULTS
We have implemented our proposed algorithm on Pentium Core-i3-540 with 3.06 GHz processor and with
500GB HDD and 4GB RAM. We have used JDK 1.6 as the coding language and Netbeans IDE 7.0.1 as the
front-end tool. For application of our proposed Genetic Algorithm to solve the load balancing problem, we
have set the parameters as follows:
Population size: It defines the number of processes/ requests taken at random, in every execution.
The population size will vary from 20- 100.
Number of Generations: It defines the number of cycles the algorithm is run, to converge towards
the optimal solution.
16
Figure 4: Flowchart for the GA based load balancing mechanism
MI (million instructions): the process length, i.e. the number of instructions each process
contains as its processing requirement. It varies from 1 to 10 MI per request.
For simulating the proposed algorithm, we implemented it with the GridSim 5.4 toolkit. The GridSim
toolkit provides multiple entities such as users, brokers, resources, GIS (Grid Information Service) and
input/output. In our implementation of Genetic Algorithm based load balancing, each process is given as
input with varying processing time and input file size. The "Gridlet" package contains all the information
related to a process and its execution. During simulation, GridSim schedules processes/jobs
in one of two modes, time-shared or space-shared. For easier implementation of our proposed
algorithm, we use space-shared scheduling: each incoming process/request is allocated a
machine/server immediately if one is available, and is queued otherwise. During Gridlet assignment,
the processing time of each request is determined and an event is scheduled. After a scheduled Gridlet
completes execution, the resource simulator frees the machine/server, checks for requests in the queue,
and assigns them to the available machines/servers. In Table 1, the scheduling statistics of a space-shared
scenario are given for four Gridlet processes with processing requirements of 6.5, 4.6, 10 and 8 MI
respectively.
We simulated the proposed algorithm for different population sizes and numbers of generations, and then
evaluated the GA convergence towards maximizing processor utilization. The results are shown in Fig. 5 and
Fig. 6 respectively.
TABLE I: A SCHEDULING STATISTICS SCENARIO FOR SPACE-SHARED RESOURCES IN GRIDSIM

Gridlet | Request Length (MI) | Arrival Time (a) | Start Time (s) | Finish Time (f) | Elapsed Time (f-a)
G1      | 6.5                 | 0                | 0              | 6.5             | 6.5
G2      | 4.6                 | 4                | 4              | 8.6             | 4.6
G3      | 10                  | 6                | 6.5            | 16.5            | 10.5
G4      | 8                   | 8                | 8.6            | 16.6            | 8.6
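The space-shared behaviour behind Table 1 can be reproduced with a small sketch, assuming two machines each rated at 1 MI per time unit (the schedule() helper is our own, not part of GridSim):

```java
import java.util.Arrays;

class SpaceShared {
    // Space-shared scheduling as described for GridSim: each arriving job gets
    // a free machine immediately if one exists, otherwise it waits until the
    // earliest machine frees up. Jobs are {length in MI, arrival time}, sorted
    // by arrival; each machine is assumed to process 1 MI per time unit.
    static double[][] schedule(double[][] jobs, int machines) {
        double[] freeAt = new double[machines];        // time each machine becomes free
        double[][] out = new double[jobs.length][2];   // {start, finish} per job
        for (int j = 0; j < jobs.length; j++) {
            int m = 0;                                 // pick the machine free earliest
            for (int k = 1; k < machines; k++) if (freeAt[k] < freeAt[m]) m = k;
            double start = Math.max(jobs[j][1], freeAt[m]); // wait if no machine is free
            double finish = start + jobs[j][0];        // 1 MI per time unit
            freeAt[m] = finish;
            out[j] = new double[]{start, finish};
        }
        return out;
    }

    public static void main(String[] args) {
        // G1..G4 from Table 1: lengths 6.5, 4.6, 10, 8 MI; arrivals 0, 4, 6, 8.
        double[][] jobs = {{6.5, 0}, {4.6, 4}, {10, 6}, {8, 8}};
        // Expected {start, finish} pairs (up to rounding), matching Table 1:
        // G1 (0, 6.5), G2 (4, 8.6), G3 (6.5, 16.5), G4 (8.6, 16.6).
        System.out.println(Arrays.deepToString(schedule(jobs, 2)));
    }
}
```

With two machines, G1 and G2 start on arrival; G3 waits until the first machine frees at 6.5, and G4 until the second frees at 8.6, giving exactly the start and finish times in the table.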
[Figure 5 plot: utilization (%) on the y-axis versus number of processors on the x-axis, with curves for population sizes 40, 80 and 100.]
Figure 5: GA convergence for various populations with respect to processor utilization
VII. CONCLUSION
With the rapid increase in the number of internet users, the problem of load balancing is becoming
"mission-critical", as its solution must cover issues such as redundancy, scalability, flexibility and QoS.
In this paper, we considered the web hotspot as our key problem, with distributed systems as the problem
domain, and proposed a GA approach for load distribution. Our aim was to assign the requests among the
servers in such a way that every request gets processed, even in situations like a "web hotspot", where
the load on a site suddenly becomes very high. In implementing the GA, we formulated the load balancing
problem as a process scheduling policy. A modified genetic algorithm was introduced with an objective
function equal to the number of un-processed requests/processes in a random population. We simulated the
proposed model using the GridSim distributed system simulator, with space-shared scheduling of resources.
The results show that, across different population sizes, the GA converges well towards maximizing
processor utilization. Even with an increasing number of processes, the proposed algorithm converges
towards the optimal solution, and for varying numbers of generations it gives a near-optimal result.
In this work, we did not compare our approach with previous work, and more parameters remain to be
evaluated under various situations. In future work, we will simulate the proposed algorithm with more
parameters, and in different problem domains.
VIII. ACKNOWLEDGEMENT
While concluding the paper, we would like to thank those people who endowed upon us their constant
guidance and encouragement during the work. We would like to thank and express our sense of gratitude to
the faculty members of Department of Computer Science Engineering & Information Technology, for their
kind help and encouragement. Lastly we express our gratitude to our parents and all the friends who helped
us in one way or the other.
REFERENCES
[1] Weibin Zhao, “Towards Autonomic Computing: Service Discovery and Web Hotspot Rescue”, COLUMBIA
UNIVERSITY, 2006.
[2] Albert Y. Zomaya, Yee-Hwei Teh, “Observations on Using Genetic Algorithms for Dynamic Load-Balancing”,
IEEE Transactions on Parallel and Distributed Systems, Vol. 12, No. 9, September 2001.
[3] Priyanka Gonnade, Sonali Bodkhe, “An Efficient load balancing using Genetic algorithm in Hierarchical structured
distributed system”, International Journal of Advanced Computer Research, Volume-2 Number-4 Issue-6
December-2012.
[4] Valeria Cardellini, Michele Colajanni, Philip S. Yu, “Dynamic Load Balancing on Web-Server Systems”, IEEE
Internet Computing, Vol. 3, No. 3, pp. 28-39, May-June 1999.
[Figure 6 plot: processor utilization (%) on the y-axis versus number of generations (10 to 30) on the x-axis.]
Figure 6: Performance comparison of processor utilization with respect to number of generations
[5] Harikesh Singh, Dr. Shishir Kumar, “Dispatcher Based Dynamic Load Balancing on Web Server System”,
International Journal of Grid and Distributed Computing Vol. 4, Number. 3, September, 2011.
[6] B. Mortazavi and G. Kesidis, "Cumulative Reputation Systems for Peer-to-Peer Content Distribution", in
proceedings of IEEE Annual Conference on Information Sciences and Systems, PP 1546- 1552, 22-24 March 2006.
[7] Brighten Godfrey, Karthik Lakshminarayanan, Sonesh Surana, Richard Karp, Ion Stoica, “Load Balancing in
Dynamic Structured P2P Systems”, IEEE INFOCOM 2004.
[8] Kalman Graffi, Sebastian Kaune, Konstantin Pussep, Aleksandra Kovacevic, Ralf Steinmetz, “Load Balancing for
Multimedia Streaming in Heterogeneous Peer-to-Peer Systems” NOSSDAV, Braunschweig, Germany, 2008.
[9] Ananth Rao, Karthik Lakshminarayanan, Sonesh Surana, Richard Karp, Ion Stoica, “Load Balancing in Structured
P2P Systems”, Elsevier Science Publishers B. V. Amsterdam, The Netherlands, volume 63, Issue 3, March 2006.
[10] Song Fu, Cheng-Zhong Xu, Haiying Shen, “Random Choices for Churn Resilient Load Balancing in Peer-to-Peer
Networks”, In Proceedings of the 22nd ACM/IEEE International Parallel and Distributed Processing Symposium
(IPDPS), 2008.
[11] Kumara Sastry, David Goldberg, Graham Kendall, “GENETIC ALGORITHMS”, Search Methodologies, Springer,
PP 97-125, 2005.
[12] Vinay Harsora, Apurva Shah, “A Modified Genetic Algorithm for Process Scheduling in Distributed System”, IJCA
Special Issue on “Artificial Intelligence Techniques - Novel Approaches & Practical Applications”, AIT, 2011.
[13] D. E. Goldberg, “Design of Innovation: Lessons From and For Competent Genetic Algorithms”, Kluwer, Boston,
MA, 2002.
[14] Yu-Kwong Kwok and Lap-Sun Cheung, “A new fuzzy-decision based load balancing system for distributed object
computing”, Elsevier Journal of Parallel and Distributed Computing, 2003.
[15] Bibhudatta Sahoo, Sudipta Mohapatra, and Sanjay Kumar Jena, “A Genetic Algorithm Based Dynamic Load
Balancing Scheme for Heterogeneous Distributed Systems”, Proceedings of the International Conference on Parallel
and Distributed Processing Techniques and Applications, Vol. 2, July 2008.
Books:
[16] Tony Bourke, “Server Load Balancing”, O'Reilly & Associates, Inc., Sebastopol, CA, August 2001.