This document discusses various load balancing algorithms that can be applied in cloud computing. It begins with an introduction to cloud computing models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It then discusses the goals of load balancing in cloud computing. The main part of the document describes and provides examples of several load balancing algorithms: Round Robin, Opportunistic Load Balancing, Minimum Completion Time, and Minimum Execution Time. For each algorithm, it explains the basic approach and provides an example to illustrate how it works.
LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTING (ijccsa)
Load balancing techniques in cloud computing can be applied at different levels. There are two main
levels: load balancing on physical servers and load balancing on virtual servers. Load balancing on a
physical server is a policy for allocating physical servers to virtual machines, while load balancing on
virtual machines is a policy for allocating resources from physical servers to the virtual machines for
the tasks or applications running on them. Depending on whether the user's request on the cloud is for
SaaS (Software as a Service), PaaS (Platform as a Service) or IaaS (Infrastructure as a Service), an
appropriate load balancing policy applies. When receiving tasks, the cloud data center must allocate
them efficiently so that response time is minimized and congestion is avoided. Load balancing should
also be performed between different data centers in the cloud to ensure minimum transfer time. In this
paper, we propose a virtual machine-level load balancing algorithm that aims to improve the average
response time and average processing time of the system in the cloud environment. The proposed
algorithm is compared to the Avoid Deadlocks [5], Max-min [6] and Throttled [8] algorithms, and the
results show that our algorithm achieves optimized response times.
Task scheduling is an important aspect of improving resource utilization in cloud computing. This paper proposes a divide-and-conquer based approach to the heterogeneous earliest finish time (HEFT) algorithm. The proposed system works in two phases. In the first phase it ranks the incoming tasks by size. In the second phase, it assigns each task to a virtual machine taking the idle time of the respective virtual machine into account. This helps achieve more effective resource utilization in cloud computing. Experimental results using the CyberShake scientific workflow show that the proposed Divide and Conquer HEFT (DCHEFT) outperforms HEFT in terms of task finish time and response time.
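The two-phase idea described in this abstract can be sketched in a few lines. The task sizes, VM speeds, and the largest-first ranking below are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch of a two-phase scheduler in the spirit of DCHEFT:
# phase 1 ranks incoming tasks by size, phase 2 places each task on the VM
# whose current idle point lets it finish earliest.

def schedule(tasks, vm_speeds):
    """tasks: list of task sizes; vm_speeds: list of VM processing rates."""
    # Phase 1: rank tasks by size (largest first, so big tasks start early).
    ranked = sorted(tasks, reverse=True)
    # Phase 2: track when each VM next becomes idle; place every task on the
    # VM that can finish it soonest.
    idle_at = [0.0] * len(vm_speeds)
    placement = []
    for size in ranked:
        finish = [idle_at[i] + size / vm_speeds[i] for i in range(len(vm_speeds))]
        vm = finish.index(min(finish))
        idle_at[vm] = finish[vm]
        placement.append((size, vm))
    return placement, max(idle_at)  # assignments and overall makespan
```

For example, `schedule([4, 2, 6], [1.0, 2.0])` sends the largest task to the faster VM first and yields a makespan of 4.0.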
Cloud Computing Load Balancing Algorithms Comparison Based Survey (INFOGAIN PUBLICATION)
Cloud computing is an online, Internet-based form of computing. This paradigm has increased the use of networks, where the capacity of one node can be used by another. The cloud provides on-demand services for distributed resources such as data, servers, software and infrastructure on a pay-as-you-go basis. Load balancing is one of the vexing problems in distributed environments: the service provider's resources must balance the load of client requests. Different load balancing algorithms have been proposed to manage the service provider's resources efficiently and effectively. This paper presents a comparison of various policies used for load balancing.
(Slides) Task scheduling algorithm for multicore processor system for minimiz... (Naoki Shibata)
Shohei Gotoda, Naoki Shibata and Minoru Ito : "Task scheduling algorithm for multicore processor system for minimizing recovery time in case of single node fault," Proceedings of IEEE International Symposium on Cluster Computing and the Grid (CCGrid 2012), pp.260-267, DOI:10.1109/CCGrid.2012.23, May 15, 2012.
In this paper, we propose a task scheduling algorithm for a multicore processor system which reduces
the recovery time in case of a single fail-stop failure of a multicore processor. Many recently
developed processors have multiple cores on a single die, so one failure of a computing node results
in the failure of many processors. In the case of a failure of a multicore processor, all tasks which
have been executed on the failed multicore processor have to be recovered at once. The proposed
algorithm is based on an existing checkpointing technique, and we assume that state is saved when
nodes send results to the next node. If a series of computations that depends on earlier results is
executed on a single die, all parts of that series must be executed again if the processor fails. The
proposed scheduling algorithm therefore tries not to concentrate tasks on the processors of a single
die. We designed our algorithm as a parallel algorithm that achieves O(n) speedup, where n is the
number of processors. We evaluated our method using simulations and experiments with four PCs.
Compared with an existing scheduling method in simulation, the execution time including recovery time
in the case of a node failure is reduced by up to 50%, while the overhead in the case of no failure was
a few percent in typical scenarios.
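A toy illustration of the placement idea above (not the authors' actual algorithm): rotating the tasks of a dependency chain across dies bounds how much work a single-die failure can invalidate. The `spread_chain` helper and its round-robin mapping are assumptions made for illustration:

```python
# Spread a chain of dependent tasks across dies so that a single-die
# fail-stop failure invalidates at most the tasks placed on that die,
# rather than the entire chain.

def spread_chain(chain, num_dies):
    """Map each task in a dependency chain to a die, rotating across dies."""
    return {task: i % num_dies for i, task in enumerate(chain)}

def tasks_lost(mapping, failed_die):
    """Tasks whose results must be recomputed if failed_die crashes."""
    return [t for t, d in mapping.items() if d == failed_die]
```

With a four-task chain on two dies, a crash of either die costs only two tasks instead of all four, at the price of cross-die communication between consecutive tasks.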
Modified Active Monitoring Load Balancing with Cloud Computing (ijsrd.com)
Cloud computing is Internet-based computing in which large groups of remote servers are networked to allow centralized data storage and online access to computer services or resources. Load balancing is essential for efficient operation in distributed environments. As cloud computing grows rapidly and clients demand more services and better results, load balancing for the cloud has become a very interesting and important research area. In the absence of a proper load balancing strategy or technique, the growth of cloud computing will not meet predictions. The main focus of this paper is to verify the approach proposed in the model paper [3]. An efficient load balancing algorithm can reduce data center processing time and overall response time, and cope with the dynamic changes of cloud computing environments. The traditional Active Monitoring load balancing algorithm has been modified to achieve better data center processing time and overall response time. The algorithm presented in this paper efficiently distributes requests to all the VMs for execution, taking the CPU utilization of all VMs into account.
This note is compiled with reference to many sites and follows the syllabus of Real Time System (6th semester CSIT). Dive deep into the never-ending knowledge.
Performance Comparison of Dynamic Load Balancing Algorithm in Cloud Computing (Eswar Publications)
Cloud computing, as a distributed paradigm, has the potential to transform a large part of the IT industry. It brings together technologies such as distributed computing, virtualization, web services and networking. We review current cloud computing technologies and indicate the main challenges for their future development, among which the load balancing problem stands out. Load balancing in networking and load balancing in the cloud are quite different concepts: in networking it is mainly concerned with avoiding overloading and under-loading of any server, while in cloud computing it involves additional metrics such as security, reliability, throughput, fault tolerance, on-demand service and cost. Poor balancing leads to situations where some nodes sit idle waiting for requests while others are heavily loaded, which increases response time and degrades performance. In this paper we first classify algorithms as static or dynamic, and then analyze dynamic algorithms applied in dynamic cloud environments. We compare several dynamic algorithms, including the honey bee algorithm, the throttled algorithm and the biased random algorithm, and describe which performs best in a cloud environment under different metrics; the main metrics used are performance, resource utilization and minimum cost. The main focus of this paper is the analysis of various load
balancing algorithms and their applicability in a cloud environment.
VIRTUAL MACHINE SCHEDULING IN CLOUD COMPUTING ENVIRONMENT (ijmpict)
Cloud computing is an emerging technology in distributed computing that facilitates a pay-per-use model
according to user demand and need. A cloud consists of a set of virtual machines offering both storage
and computational facilities. The fundamental goal of cloud computing is to provide efficient access to
remote and geographically distributed resources. As the cloud grows, it faces many problems, scheduling
among them. Scheduling refers to a collection of policies that regulate the order in which tasks are
executed by a computer system. A good scheduler adapts its scheduling strategy to the type of work and
the changing environment. This paper presents a generalized priority algorithm for efficient execution
of tasks and compares it with Round Robin and FCFS scheduling. The algorithm is tested in the CloudSim
toolkit, and the results show that it performs better than some conventional scheduling algorithms.
Covers basic concepts of real-time system task scheduling: tasks, instances, data sharing and their types, along with important terminology for scheduling algorithms.
Cost Optimization in Multi Cloud Platforms using Priority Assignment (ijceronline)
An adaptive algorithm for task scheduling for computational grid (eSAT Journals)
Abstract
Grid computing is a collection of computing and storage resources drawn from multiple administrative domains. Grid resources can be applied to reach a common goal. Since computational grids enable the sharing and aggregation of a wide variety of geographically distributed computational resources, effective task scheduling is vital for managing the tasks. Efficient scheduling algorithms are needed to achieve efficient utilization of the unused CPU cycles distributed across various geographic locations. Existing job scheduling algorithms in grid computing mainly concentrate on system performance rather than user satisfaction. This work presents a new algorithm that focuses on better meeting the deadlines of statically available jobs as expected by users, while also improving utilization of the available heterogeneous resources.
Keywords: Task Scheduling, Computational Grid, Adaptive Scheduling and User Deadline.
A survey of various scheduling algorithm in cloud computing environment (eSAT Journals)
Abstract: Cloud computing is known as a provider of dynamic services, using very large, scalable and virtualized resources over the Internet. Owing to the novelty of the field, there are few standard task scheduling algorithms for the cloud environment; in particular, high communication costs prevent well-known task schedulers from being applied in large-scale distributed environments. Researchers are therefore attempting to build job scheduling algorithms that are compatible with and applicable to cloud computing environments. Job scheduling is a key task in cloud computing because users pay for resources based on usage time; efficient utilization of resources is therefore essential, and scheduling plays a vital role in getting the maximum benefit from the resources. In this paper we study various scheduling algorithms and the issues related to them in cloud computing. Index Terms: cloud computing, scheduling, algorithm
Optimized Assignment of Independent Task for Improving Resources Performance ... (ijgca)
Grid computing has emerged from the category of distributed and parallel computing, in which
heterogeneous resources from different networks are used simultaneously to solve a particular problem
that needs a huge amount of resources. The potential of grid computing depends on many issues, such as
security of resources, heterogeneity of resources, fault tolerance, resource discovery and job
scheduling. Scheduling is one of the core steps in efficiently exploiting the capabilities of
heterogeneous distributed computing resources, and is an NP-complete problem. To achieve the promising
potential of grid computing, an effective and efficient job scheduling algorithm is proposed that
optimizes two important criteria for resource performance: makespan and resource utilization. In
addition, we classify various task scheduling heuristics in the grid on the basis of their
characteristics.
Enhanced equally distributed load balancing algorithm for cloud computing (eSAT Journals)
Abstract: Cloud computing, as the name suggests, is a style of computing where users access resources on the go, i.e. over the Internet. In recent years this technology has emerged as a strong option not only for large-scale organizations but also for small-scale organizations that access and use only the resources they need. Recent studies show that many organizations lose a significant part of their revenue handling client requests at their web servers: an inability to balance web server load results in data loss, delays and increased costs. This paper presents a new, enhanced load balancing algorithm that can increase the performance of such web applications by addressing major drawbacks such as delay and the response-to-request ratio.
Application of selective algorithm for effective resource provisioning in clo... (ijccsa)
The continued modern-day demand for resource-hungry services and applications in the IT sector has led
to the development of cloud computing. A cloud computing environment involves high-cost infrastructure
on the one hand and the need for large-scale computational resources on the other. These resources need
to be provisioned (allocated and scheduled) to end users in the most efficient manner so that the
tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper we discuss
a selective algorithm for on-demand allocation of cloud resources to end users. The algorithm is based
on min-min and max-min, two conventional task scheduling algorithms, and uses certain heuristics to
select between them so that the overall makespan of tasks on the machines is minimized. The tasks are
scheduled on machines in either a space-shared or a time-shared manner. We evaluate our provisioning
heuristics using a cloud simulator, CloudSim, and compare our approach with the statistics obtained
when resources were provisioned in a First-Come-First-Served (FCFS) manner. The experimental results
show that the overall makespan of tasks on a given set of VMs decreases significantly in different
scenarios.
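A hedged sketch of such a selective scheduler. The greedy loop below is the standard formulation of min-min and max-min, but the switching heuristic (use max-min when the mean task length exceeds the median, i.e. a few large tasks dominate) is an assumption made for illustration; the paper's exact criterion may differ.

```python
# Selective scheduler sketch: choose between min-min and max-min per batch.
import statistics

def _assign(tasks, vm_speeds, pick_largest):
    """Repeatedly pick the task whose best-case completion time is smallest
    (min-min) or largest (max-min) and place it on its fastest-finishing VM."""
    ready = [0.0] * len(vm_speeds)   # time at which each VM frees up
    pending = list(tasks)
    while pending:
        best = [(min(ready[i] + t / vm_speeds[i] for i in range(len(vm_speeds))), t)
                for t in pending]
        _, task = (max if pick_largest else min)(best)
        vm = min(range(len(vm_speeds)), key=lambda i: ready[i] + task / vm_speeds[i])
        ready[vm] += task / vm_speeds[vm]
        pending.remove(task)
    return max(ready)                # makespan of the schedule

def selective(tasks, vm_speeds):
    # Assumed heuristic: mean > median indicates a few dominant large tasks,
    # which max-min handles well by scheduling them first.
    use_max_min = statistics.mean(tasks) > statistics.median(tasks)
    return _assign(tasks, vm_speeds, pick_largest=use_max_min)
```

On a workload dominated by one large task, e.g. `selective([1, 1, 1, 10], [1.0, 1.0])`, the heuristic picks max-min, which schedules the large task first and yields a shorter makespan than min-min would on the same input.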
Vol-1 Issue-2 2015, IJARIIE-ISSN(O)-2395-4396, www.ijariie.com
Various Load Balancing Algorithms in Cloud Computing

Bhavisha Patel (M.E Student, Department of CSE, Parul Institute of Engineering & Technology, Vadodara, India, bhavu2761991@gmail.com)
Mr. Shreyas Patel (Assistant Professor, Department of CSE, Parul Institute of Engineering & Technology, Vadodara, India, Shreyaspatel_89@yahoo.com)
ABSTRACT
Cloud computing is the use of computing resources that are delivered as a service over a network. In
cloud computing, many tasks need to be executed by the available resources in order to achieve the best
performance: utilize the resources efficiently under heavy or light network load, minimize response
time and delay, maintain system stability, increase the throughput of the system, decrease
communication overhead and minimize computation cost. This paper describes various load balancing
algorithms that can be applied in cloud computing. These algorithms differ in how they work and in
their underlying principles.
Keywords: Cloud Computing, Load Balancing, Load Balancing Algorithms, Round Robin, Max-Min, Min-Min,
ESCE, AMLB, Throttled Algorithm, Modified Throttled Algorithm, Weighted Active Monitoring Algorithm,
MET, MCT, Improved Max-Min, OLB, Improved Cost Based Algorithm
1. INTRODUCTION
Cloud computing is an Internet-based technology in which the Internet is represented as a cloud. The
cloud is a platform that provides a pool of resources such as applications, services, storage and
networking to computers and devices on a pay-per-use model [1]. Some characteristics of cloud computing
are:
Scalability and elasticity
Reliability
Device and location independence
Security
On-demand service
Pay-per-use model
Cloud can be Public, Private or Hybrid [1].
Public cloud: A cloud is called a "public cloud" when the services are rendered over a network that is open for
public use. Public cloud services may be free or offered on a pay-per-usage model.
Private cloud: Private cloud is cloud infrastructure operated for a single organization, whether managed internally
or by a third-party, and hosted either internally or externally.
Hybrid cloud: Hybrid cloud is a combination of two or more private, community or public clouds that remain
distinct entities offering the benefits of multiple deployment models.
2. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 188
Cloud computing providers offer their services according to several fundamental models:
Infrastructure as a service (IaaS): Cloud infrastructure services, known as Infrastructure as a Service (IaaS). In
IaaS, the physical resources like storage and networking services can be split into a number of logical units called
Virtual Machine (VM). These models manage virtualization, servers, hard drives, storage, and networking. Amazon
EC2, Google Compute Engine (GCE).
Platform as a Service (PaaS):
Cloud platform services, or Platform as a Service (PaaS), are used for applications, and other development. PaaS is a
framework they can build upon to develop or customize applications. Example of PaaS is Google app engine, AWS,
Apache.
Software as a Service (SaaS):
Cloud application services, or Software as a Service (SaaS), use the web to deliver applications. SaaS eliminates the
need to install and run applications on individual computers. Example of SaaS is Google Gmail, Microsoft 365.
2. LOAD BALANCING IN CLOUD COMPUTING [13]
Load balancing is the process of distribution of user request on available resources to maximize the
throughput of the system and utilize the resources effectively. Load balancing required to improve the performance
of the system by minimize the overall completion time and avoid the situation where some resources are heavily
loaded or others remains under loaded in the system.
The goals of load balancing are:
Improve the performance
Maintain system stability
Build fault tolerance system
Accommodate future modification.
Energy is saved in case of low load
Cost of using resources is reduced
Maximize throughput of the system
Minimize communication overhead
Resources are easily available on demand
Resources are efficiently utilized under condition of high/low load
Minimize overall completion time (makespan)
3. LOAD BALANCING ALGORITHMS IN CLOUD COMPUTING
Introduction related your research work Introduction related your research work Introduction related your research
work Introduction related your research work Introduction related your research work Introduction related your
research work Introduction related your research work Introduction related your research work Introduction related
your research work Introduction related your research work
A. Round Rubin [2]
It is a task scheduling technique in which tasks are scheduled on the basis of time quantum. A small unit of
time is called time slice which is defined in this algorithm. It is also called time sharing system in which each process
will execute in particular time slot without any priority. These techniques divide the task request equally on the
available service providers or resources. Round rubin is a static algorithm. The main advantage is that it is a starvation
free. Every process will be executed by CPU for fixed time slice. So in this way no process left waiting for its turn to
be executed by the CPU. All tasks will executed without any prioritization. It selects the load randomly and leads to
3. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 189
the situation where some nodes are heavily loaded and some are lightly loaded. Though the algorithm is very simple
and easy to implement but there is an additional load on the scheduler to decide the time quantum. Round-rubin
scheduling algorithm doesn‟t give special priority to more important tasks. This means an urgent request doesn't get
handled any faster than other requests in queue. Throughput is low as the large process is holding up the Central
processing unit for execution. In round rubin, equal amount of time will be assigned to each and every process for
execution, but it can frustrate those with medium-length tasks. And it will take a more time to execute small task. For
example in the supermarket its only allows shoppers to check out 11 items at a time. People with 10 or fewer items
are unaffected. People with 100 items take longer, but there's a sense of fairness for loads that size. A customer with
10 items should go to the end of the line after checking out 10, making her wait in line seem longer than is normally
appropriate. This isn't a problem for computers, but it can frustrate users and customers. Another disadvantage is that
it will not consider the current load of the servers.
As shown in figure 3.1 there are three resources R1, R2, R3 and five tasks T1,T2, T3, T4 and T5. Here time
slice/quantum is predefined. So tasks T1 and T4 are scheduled on resource R1 for a fixed time slot. And tasks T2 and
T5 are scheduled on R2. Last one is scheduled on resource R3.
Fig 3.1 Round Rubin Algorithm [3]
B. OPPORTUNISTIC LOAD BALANCING ALGORITHM (OLB) [5]
This algorithm consider the allocated job request of each available virtual machine and assign the selected
unexecuted task to the available virtual machine which has the lowest load among all the virtual machines. OLB
assign the job in random order without considering the expectation execution time for the job on that virtual
machine. As a result it provides better load balanced but it gives poor overall completion time (makespan). This
algorithm is very simple and easy to implement and each virtual machine often keep busy. The main goal of OLB
algorithm is to maintain load balance and makes each node in a working state.
C. Minimum Completion Time (MCT) [3]
The Minimum Completion Time job scheduling algorithm dispatches the selected job to the available VM that can
provide the minimum completion time. Due to this reason there are some tasks to be assigned to machines that do
not have the minimum execution time for them. The main
Idea of determining the minimum completion time in the scheduling algorithm is the processor speed and the cur-
rent load on each VM. The algorithm first scans the available VMs in order to determine the most suitable virtual
machine to perform the unexecuted job then dispatches that virtual machine and starts execution. Minimum
Completion time is calculated according to this equation:
4. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 190
Ctij = Etij + rtj
Where, rtj = Ready Time of resource Rj
Etij = Execution Time of task Ti on resource Rj
Ctij = Completion Time of task Ti on resource Rj
Example:
Table 1 describes the list of tasks and list of available resources.
Resource MIPS Task MI
R1 20 T1 10
R2 18 T2 5
R3 14 T3 22
T4 15
Table 1 Resource and Task Specification
Fig 3.2 Minimum Completion Time (MCT)
5. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 191
As shown in figure 2.2, task T2 and T3 are scheduled on resource R2 and task T1 is executed by resource R1 and
task t4 is scheduled on resource R3. [3]
D. Minimum Execution Time (MET) [8]
Minimum Execution Time (MET) assigns each task, in random order, to the machine with the minimum
expected execution time for that task, regardless of that machine's availability. The aim behind MET is to give each
task to its best machine. This can cause a severe load imbalance across machines. So some virtual machines are
overloaded and some are underutilized in the system.
Expected Execution time is calculated according to this equation:
Etij = Task Length (MI) / Processing Speed (MIPS)
Where, Etij = Execution time of the task Ti on Rj
Example:
Resource MIPS Task MI
R1 10 T1 10
R2 8 T2 15
R3 5 T3 20
T4 25
Table 2 Resource and Task Specification
Fig 3.3 Minimum Execution Time (MET)
6. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 192
As shown in figure 3.3 Minimum Execution Time (MCT) considers only the processing power of the virtual
machine. According to the ratio of processing speed and the task length current task is scheduled on that resource
which gives minimum execution time. In given example tasks T1, T2, T3, T4 are scheduled on resource R1. And
resource R2 and R3 are remaining idle.
E. Throttled Load Balancing Algorithm
(TLB) [6]
Throttled load balancing algorithm maintains the index list for all virtual machines. Index list maintain the
information of the state (Busy/Ideal) of each virtual machine. When a new request arrives, throttled load balancer
scan the indexed list from the top until the first free virtual machine is found. Throttled load balance return the ideal
virtual machine id to the data center controller. Further, the data center controller allocate job to that identified
virtual machine. If suitable virtual machine is not present to execute the task then throttled load balancer returns -1
to the data center controller and it will queued until the any one virtual machine becomes free. This algorithm is
dynamic.
ALGORITHM [7]:
1) Throttled load balancer maintains index Table of virtual machines and state (BUSY/AVAILABLE) of
virtual machines. At the starting all virtual machines are free.
2) Data Center Controller receives a job request.
3) Data Center Controller sends the request to the Throttled Load Balancer for the allocation of request.
4) Throttled Load Balancer starts with the available virtual machine list from the first index for the availability
of free virtual machine.
Case 1: if virtual machine found then
a. the Throttled Load Balancer returns the Virtual Machine id to the Data Center Controller
b. Data Center Controller sends the request to that identified virtual machine.
c. Data Center Controller notifies the Throttled Load Balancer of the new allocation of job
d. Throttled Load Balancer updates the allocation Table
Case 2: if no Virtual machine is found
a. The Throttled Load Balancer returns -1.
5) When the Virtual machine finishes execution of the request, and the Data Center Controller receives the
response cloudlet, it notifies the Throttled Load Balancer of the Virtual machine deallocation.
6) If there are more requests, Data Center Controller repeats step 4 with first index again and process is
repeated until size of index Table is reached.
7) Continues from step 2.
Throttled Load Balancing Algorithm parse the index from the beginning every time whenever the new request is
arrive at the Data Center Controller. So resulting some virtual machines are overloaded and some are underutilized.
Resulting load sharing is not uniform on available servers. And also resources utilization is not properly done by this
algorithm.
Another major disadvantage is that this algorithm does not consider the advance load balancing requirements like
execution time and completion time of each request on the available resources for the load balancing [2]
F. Equally Spread Current Execution Algorithm (ESCE) [2]
In Equally spread current execution technique load balancer spread equal load to all the available servers or
virtual machines which connected with the data centre. Load balancer maintains an index Table which contains the
record of Virtual machines and number of requests currently assigned to the Virtual Machine (VM). If the request
comes from the data centre to allocate the new VM, it scans the index Table and fine the least loaded virtual
7. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 193
machine. The load balancer also returns the least loaded VM id to the data centre controller. In case there are more
than one VM is found than first identified VM is selected for handling the request of the client/node. The data centre
revises the index Table by increasing the allocation count of identified VM. When VM completes the assigned task,
a request is communicated to data centre which is further notified by the load balancer. The load balancer again
revises the index Table by decreasing the allocation count for identified VM by one but there is an additional
computation overhead to scan the queue again and again. In this algorithm load is equally spread so whole system is
load balanced and no virtual machines are under loaded or overloaded. And because of this data migration and
virtual machine cost is also reduced. The main advantage is that there is a no overhead to assign the time slot for
each process.
G. Min-Min [4]
Min-Min algorithm starts with a set of unmapped / unscheduled jobs. Min-Min Task Scheduling Algorithm
is a static scheduling algorithm. This algorithm first identifies the jobs having minimum execution time and these
tasks are scheduled first in this algorithm. Then it will calculate the expected completion time for each tasks
according to available virtual machines and the resource that has the minimum completion time for selected task is
scheduled on that resource. The ready time of that resource is updated and the process is repeated until all the
unexecuted tasks are scheduled. Hence Min-Min algorithm chooses the smallest size tasks first and assigned these
tasks to fastest resource. So it leaves the some resource overloaded and other remains underutilized or idle. In
addition it does not provide load balanced in the system. It will provide a better makespan and resource utilization
when the number of the large task is more than the number of the small task in meta-task. Main disadvantage of this
algorithm is that it selects small tasks to be executed firstly, which in turn large task delays for long time. Min=Min
algorithm is failed to utilize resources efficiently which lead to a load imbalance.
Expected completion time of each task to the available resources is calculated using this equation:
Ctij = Etij + rtj
Where, rtj = Ready Time of resource Rj
Etij = Execution Time of task Ti on resource Rj
Ctij = Completion Time of task Ti on resource Rj
Example:
Resource MIPS Task MI
R1 10 T1 10
R2 8 T2 15
R3 5 T3 20
T4 25
8. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 194
T5 50
Table 3, represents the processing power (MIPS) of each resource and the task size (MI) of each task. Data given in
Table III are used to calculate the expected completion time and execution time of the tasks on each of the resources.
Figure 3.4 demonstrates calculated expected complete time for each task. On next step of the algorithm iteration,
data in figure will be updated until all tasks are allocated.
As shown in Figure 3.4, Tasks T1. T3, T5 are scheduled on resource R1 and tasks T2, T4 are executed by resource
R2.
The Min-Min algorithm achieves total makespan (overall completion time) 8 seconds but uses only two resources
R1, R2. And resource R3 is still remains idle.
Fig 3.4 Min-Min Task Scheduling Algorithm
H. Max-Min
Max-min algorithm allocates task Ti on the resource Rj where large tasks have highest priority rather than
smaller tasks in the meta-task. It is begin with a set of unmapped tasks. Then calculate expected execution time and
expected completion time of each task on available resource. Then select the task with overall maximum completion
time and assign this task to the resource which give overall minimum execution time. The ready time of that
resource is updated. Finally, the new mapped task is removed from the meta-task, and the process keeps on
repeating until all tasks are mapped which implies till meta-task is empty. The idea of Max-Min algorithm is to
reduce the waiting time of large tasks. As Max-min gives highest priority to long tasks, it increases average response
time for small tasks. This algorithm performs better when the number of small task is much more than the long one.
9. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 195
Example:
Table 4 describes the list of tasks and list of available resources.
Resource MIPS Task MI
R1 10 T1 10
R2 8 T2 15
R3 5 T3 20
T4 25
T5 50
Table 4 Resource and Task Specification
Task/Resource R1 R2 R3
T1 1 1.25 2
T2 1.5 1.875 3
T3 2 2.5 4
T4 2.5 3.12 5
T5 5 6.25 10
10. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 196
Fig 3.5 Max-Min Task Scheduling Algorithm
Table 5 demonstrates the calculated expected execution time for each task on available resources. Figure 3.5 shows
the scheduled task on the resources with minimum execution time.
I. Improved Max-Min Task Scheduling Algorithm [9]
Improved max-min algorithm is a modified version of Max-Min algorithm. Improved Max-Min algorithm
based on expected execution time instead of completion time as a selection basis of task and resources. this algorithm
applied on set of unscheduled/unmapped tasks. First it will calculate the Expected Execution time and completion
time of each task on available resources. Then it identifies the task with maximum execution time (largest task in the
meta-tasks) and scheduled this task on that resource which gives the minimum completion time (slowest resource).
After this remove the mapped task from the meta-tasks and update the ready time of the selected resource. While
meta-tasks is not empty then apply the Max-Min algorithm on the remaining small tasks. This algorithm try to
minimize the waiting time of small tasks by executing large task on slowest resource and concurrently assign the
small task on another available resources by max-min strategy. It will improve the makespan and response time for
the small tasks concurrently.
Example:
Resource MIPS Task MI
R1 100 T1 500
R2 300 T2 780
T3 800
T4 1200
11. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 197
T5 900
Table 6 Resource and Task Specification
Task (Maximum Expected Execution Time – Largest Task) = T4
Resource (Minimum Completion Time -- Slowest Resource) = R1
Task/Resource R1 R2
T1 5 1.6
T2 7.8 2.6
T3 8 2.7
T4 12 4
T5 9 3
Table 7 Expected Execution Time of the tasks on each Resources
12. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 198
Fig 3.6 Gantt chart of Improved Max-min Algorithm
J. Improved Cost-Based Algorithm [10]
Traditional way for task scheduling in cloud computing use the direct task of users as the overhead
application basis. The problem is that there are no relationship between the overhead application basis and the way
that different task cause overhead cost of resources. For large number of simple tasks this increases the cost and the
cost is decreased if we have a small number of complex tasks. To minimize the cost and minimize the makespan
improved activity based cost algorithm is proposed. The objective of this algorithm is to schedule task groups to the
available resources in the cloud computing environment, where all resources have different processing power and
different computation cost. For reducing the communication overhead and computation time this algorithm is
proposed. In this algorithm selection of resources is based on the cost of resource. Job grouping is done on the basis
of the resource‟s processing capability.
In improves activity based costing algorithm job scheduling strategy consider processing requirement of each
task/job and processing capabilities of resources. This algorithm focuses on scheduling of independent tasks. It will
create coarse-gained jobs by grouping fine-gained jobs together to reduce job assignment overhead and also reduce
the computation ratio. Scheduler receives number of tasks and calculates their priority level and put into appropriate
list high, medium, low.
Now job grouping algorithm is applied to this lists to allocate the task-groups to different available resources. Then
scheduler gathers characteristics of available resources. Select particular resource and multiplies MIPS with the
granularity size. Grouping is done according to the capability of the resource. Granularity size is the time within
which a job is processed at the resources. Granularity size is used to determine total amount of jobs that can be
completed within a specified time in a particular resource. If the task-groups total length is less than the MIPS of
resources then we can add another task from the list. And subtract the length the last task from the task group.
Repeat this process until the all tasks in the list are grouped into task-groups.
The priority level of each task can be calculated as in Equation (1),
Lk = nΣ i =0Ri,k * Ci,k / Pk
13. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 199
Where,
Ri,k : The ith individual use of resources by the kth task.
Ci,k : The cost of the ith individual use of resources by the kth task.
Pk : The profit earned from the kth task.
Lk : The priority level of the kth task.
Fig 3.7 Flowchart of Improved ABC Algorithm
14. Vol-1 Issue-2 2015 IJARIIE-ISSN(O)-2395-4396
1161 www.ijariie.com 200
In figure 3.8 user tasks and available resource is given. According to given granularity size tasks 0,1,2,3 are grouped
on the basis of resource capability and requirement of tasks and assigned this task to the available resource R1. Next
tasks are scheduled on another capable resource.
Fig 3.8 Example of Job Grouping in Improved ABC Algorithm
K. Weighted Active Monitoring Load Balancing Algorithm
The Weighted Active Monitoring Load Balancing Algorithm is proposed by modifying the Active
Monitoring Load Balancer by assigning a weight to each Virtual Machine. In this proposed Load balancing
algorithm all VM are assigned different amount of the available processing power. First create virtual machines of
different computing power in terms of its core processor, processing speed (MIPS), memory, storage. Allocate
weighted count according to computing power of virtual machines. If one VM is capable of having twice as much
load as the other, then assigned weight is „2‟ or if it can take four times load then assigned weight is „4‟ and so on.
For example:
Host server with single core processor, 1GB of memory, 1TB of Storage space, 1000000 bandwidth will
have weighted count=1
Host server with 2 core processor, 4GB of memory, 2TB of Storage space and 1000000 bandwidth will
have weighted count=2
Host server with quard core processor, 8GB of memory 4TB of Storage space and 1000000 bandwidth will
have weighted count=4 and so on..
In this algorithm WeightedActiveVmLoadBalancer maintains an index Table of VMs, associated weighted
count and the number of requests currently allocated to the VM. When a request will arrive, load balancer parse the
index and find the least loaded virtual machine. It allocate request to the most powerful VM according to the weight
assigned. If there are more than one, the first identified is selected. After finishing processing request on the virtual
machine, load balancer dealloacate the virtual machine and update the resource allocation Table by decreasing
allocation count by one.