This document presents a scheduling strategy that performs dynamic job grouping at runtime to optimize the execution of applications with many fine-grained tasks on global grids. The strategy groups individual jobs into larger "job groups" based on the processing requirements of each job, the capabilities of available grid resources, and a defined granularity size. It aims to minimize overall job execution time and cost while maximizing resource utilization. The strategy is evaluated through simulations using the GridSim toolkit, which models grid resources and application scheduling.
Effective and Efficient Job Scheduling in Grid Computing (Aditya Kokadwar)
The integration of remote, diverse resources and the growing computational needs of Grand Challenge problems, combined with the rapid growth of the internet and communication technologies, have led to the development of global computational grids. Grid computing is a prevailing technology that unites underutilized resources to support the sharing of resources and services distributed across numerous administrative regions. An efficient and effective scheduling system is essential to realize the promised capacity of grids. The main goal of scheduling is to maximize resource utilization while minimizing the processing time and cost of jobs. In this research, the objective is to prioritize jobs by execution cost and then allocate the minimum-cost resources, merging this with a conventional job-grouping strategy to provide better, more efficient job scheduling that benefits both the user and the resource broker. The proposed approach employs a dynamic cost-based job scheduling algorithm to map jobs efficiently to available grid resources. It also improves the communication-to-computation ratio (CCR) and the utilization of available resources by grouping user jobs before resource allocation.
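The grouping step described above can be sketched as a greedy packer: jobs sized in million instructions (MI) are accumulated into a group until the group reaches what a resource (rated in MIPS) can process within a chosen granularity time. This is an illustrative reconstruction under those assumptions, not the paper's actual code; all names and parameters are invented.

```python
# Hypothetical sketch of runtime job grouping: pack fine-grained jobs
# (sized in million instructions, MI) into groups matched to a resource's
# processing power (MIPS) and a chosen granularity window (seconds).

def group_jobs(job_lengths_mi, resource_mips, granularity_s):
    """Greedily pack jobs so each group's total MI fits what the
    resource can process within one granularity window."""
    capacity_mi = resource_mips * granularity_s  # MI per window
    groups, current, current_mi = [], [], 0
    for mi in job_lengths_mi:
        if current and current_mi + mi > capacity_mi:
            groups.append(current)       # group is full; start a new one
            current, current_mi = [], 0
        current.append(mi)
        current_mi += mi
    if current:
        groups.append(current)
    return groups
```

Each resulting group is then dispatched to the resource as a single coarse-grained job, which is what reduces per-job scheduling and transfer overhead.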
Scheduling Algorithm Based Simulator for Resource Allocation Task in Cloud Co... (IRJET Journal)
This document proposes a scheduling algorithm for allocating resources in cloud computing based on the Project Evaluation and Review Technique (PERT). It aims to address issues like starvation of lower priority tasks. The algorithm models task allocation as a directed acyclic graph and uses PERT to schedule critical and non-critical tasks, prioritizing higher priority tasks. The algorithm is evaluated against other scheduling methods and shows improvements in reducing completion time and optimizing resource allocation for all tasks.
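The PERT-style treatment of a task DAG described above rests on a critical-path (longest-path) computation. A minimal sketch, with task names and durations invented for the example:

```python
# Illustrative critical-path computation over a task DAG, in the spirit of
# the PERT-based scheduling summarized above (not the paper's code).

def critical_path_length(durations, deps):
    """Longest path (by total duration) through a DAG.
    durations: {task: time}; deps: {task: [prerequisite tasks]}."""
    memo = {}

    def finish(task):
        # Earliest finish = latest prerequisite finish + own duration.
        if task not in memo:
            start = max((finish(p) for p in deps.get(task, [])), default=0)
            memo[task] = start + durations[task]
        return memo[task]

    return max(finish(t) for t in durations)
```

Tasks whose finish time equals the critical-path length lie on the critical path and get scheduling priority; the remaining (non-critical) tasks have slack that a scheduler can exploit to avoid starving lower-priority work.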
An enhanced adaptive scoring job scheduling algorithm with replication strate... (eSAT Publishing House)
This document describes an enhanced adaptive scoring job scheduling algorithm with replication strategy for grid environments. The algorithm aims to improve upon an existing adaptive scoring job scheduling algorithm by identifying whether jobs are data-intensive or computation-intensive. It then divides large jobs into subtasks, replicates the subtasks, and allocates the replicas to clusters based on a computed cluster score in order to improve resource utilization and job completion times. The algorithm is evaluated through simulation using the GridSim toolkit.
GROUPING BASED JOB SCHEDULING ALGORITHM USING PRIORITY QUEUE AND HYBRID ALGOR... (ijgca)
This document describes a proposed grouping based job scheduling algorithm for grid computing that aims to maximize resource utilization and minimize job processing times. It discusses related work on job scheduling algorithms and then presents the steps of the proposed algorithm. The algorithm uses shortest job first, first-in first-out, and round robin scheduling to process jobs in groups. The algorithm is evaluated experimentally in MATLAB and shown to reduce total job processing time compared to using only first-in first-out scheduling. Graphs demonstrate the processing time improvements achieved by the combined scheduling approach.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Time Efficient VM Allocation using KD-Tree Approach in Cloud Server Environment (rahulmonikasharma)
This document summarizes a research paper that proposes a new algorithm called KD-Tree approach for efficient virtual machine (VM) allocation in cloud computing environments. The algorithm aims to minimize the response time for allocating VMs to user requests. It does this by adopting a KD-Tree data structure to index physical host machines, allowing the scheduler to quickly find the host that can accommodate a new VM request with the minimum latency in O(Log n) time. The proposed approach is evaluated through simulations using the CloudSim toolkit and is shown to outperform an existing linear scheduling strategy (LSTR) algorithm in terms of reducing VM allocation times.
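The paper indexes hosts across multiple resource dimensions with a KD-tree; a one-dimensional analogue over a single sorted capacity axis illustrates the same O(log n) lookup idea without reproducing the full structure. The class and method names below are invented for the sketch.

```python
import bisect

# One-dimensional stand-in for the KD-Tree idea above: keep hosts indexed
# by free capacity so a host able to fit a VM request is found by binary
# search in O(log n) rather than by a linear scan over all hosts.

class HostIndex:
    def __init__(self, free_capacities):
        self.capacities = sorted(free_capacities)

    def allocate(self, vm_demand):
        """Find and remove the smallest host capacity that fits the demand
        (best fit); returns None if no host can accommodate the VM."""
        i = bisect.bisect_left(self.capacities, vm_demand)
        if i == len(self.capacities):
            return None
        return self.capacities.pop(i)
```

A real KD-tree generalizes this search to several axes at once (CPU, memory, bandwidth), which is what lets the scheduler answer multi-dimensional fit queries in logarithmic time.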
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Service Request Scheduling in Cloud Computing using Meta-Heuristic Technique:... (IRJET Journal)
This document discusses using the Teaching Learning Based Optimization (TLBO) meta-heuristic technique for service request scheduling between users and cloud service providers. TLBO is a nature-inspired algorithm that mimics the teacher-student learning process. It is compared to other meta-heuristic algorithms such as the Genetic Algorithm. The key steps of TLBO involve initializing a population, evaluating fitness, selecting the best solution as the teacher, and updating the population through teacher and learner phases until the termination criterion is met. The document proposes using the number of users and the number of virtual machines as parameters for TLBO scheduling in cloud computing. MATLAB simulation results show the initial and final iterations converging to an optimal scheduling solution.
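The teacher and learner phases listed above can be sketched as follows. This is a generic TLBO minimizer, not the paper's MATLAB code; the sphere objective stands in for a real scheduling cost, and all parameter values are illustrative.

```python
import random

# Minimal TLBO sketch: the teacher phase pulls the population toward the
# best solution relative to the population mean; the learner phase lets
# pairs of learners exchange information. Greedy acceptance keeps only
# improving moves.

def tlbo_minimize(cost, dim, pop_size=10, iters=50, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

    def clamp(x):
        return [min(hi, max(lo, v)) for v in x]

    for _ in range(iters):
        teacher = min(pop, key=cost)
        mean = [sum(p[d] for p in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):
            tf = rng.choice([1, 2])  # teaching factor
            # Teacher phase: move toward the teacher, away from the mean.
            cand = clamp([x[d] + rng.random() * (teacher[d] - tf * mean[d])
                          for d in range(dim)])
            if cost(cand) < cost(x):
                pop[i] = x = cand
            # Learner phase: learn from a randomly chosen other learner.
            other = pop[rng.randrange(pop_size)]
            sign = 1 if cost(x) < cost(other) else -1
            cand = clamp([x[d] + sign * rng.random() * (x[d] - other[d])
                          for d in range(dim)])
            if cost(cand) < cost(x):
                pop[i] = cand
    return min(pop, key=cost)

best = tlbo_minimize(lambda v: sum(x * x for x in v), dim=2)
```

For scheduling, each individual would instead encode a mapping of service requests to virtual machines, with the cost function scoring makespan or response time.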
Task Scheduling methodology in cloud computing (Qutub-ud-Din)
This document outlines a proposed methodology for developing efficient task scheduling strategies in cloud computing. It begins with introductions to cloud computing and task scheduling. It then reviews several relevant existing task scheduling algorithms from literature that focus on objectives like reducing costs, minimizing completion time, and maximizing resource utilization. The problem statement indicates the goals are to reduce costs, minimize completion time, and maximize resource allocation. An overview of the proposed methodology's flow is then provided, followed by references.
This document summarizes a research paper that proposes a strategy to improve resource provisioning in heterogeneous cloud environments. The strategy uses an electronic auction model that considers workload selection factors like job deadlines and CPU time. It also presents workflow optimization logic to minimize costs while meeting performance requirements. The strategy employs fault tolerance services using job migration. It is evaluated based on metrics like execution time, makespan time, migration frequency and energy consumption, showing improved performance over existing approaches. Future work plans to introduce new resource provisioning mechanisms considering load, energy and network factors to optimize resource selection and reduce transmission costs and times.
The document discusses using a genetic algorithm to schedule tasks in a cloud computing environment. It aims to minimize task execution time and reduce computational costs compared to the traditional Round Robin scheduling algorithm. The proposed genetic algorithm mimics natural selection and genetics to evolve optimal task schedules. It was tested using the CloudSim simulation toolkit and results showed the genetic algorithm provided better performance than Round Robin scheduling.
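The genetic approach described above can be sketched compactly: a chromosome assigns each task to a VM, fitness is the resulting makespan, and selection, one-point crossover, and mutation evolve better schedules. This is a toy reconstruction; task lengths, VM speeds, and GA parameters are invented for illustration.

```python
import random

# Toy GA for task-to-VM scheduling: minimize makespan over assignments.

def ga_schedule(task_lengths, vm_speeds, pop_size=20, gens=40, seed=7):
    rng = random.Random(seed)
    n_tasks, n_vms = len(task_lengths), len(vm_speeds)

    def makespan(chrom):
        load = [0.0] * n_vms
        for task, vm in enumerate(chrom):
            load[vm] += task_lengths[task] / vm_speeds[vm]
        return max(load)

    pop = [[rng.randrange(n_vms) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]       # selection: keep fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_tasks)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:             # mutation: reassign one task
                child[rng.randrange(n_tasks)] = rng.randrange(n_vms)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=makespan)
    return best, makespan(best)
```

Round Robin, by contrast, would assign tasks to VMs cyclically regardless of task length or VM speed, which is why the evolved schedule tends to finish sooner on heterogeneous VMs.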
A Review on Scheduling in Cloud Computing (ijujournal)
Cloud computing provides software, infrastructure, and platform as services to clients on a pay-per-use basis. The main goal of scheduling is to achieve accuracy and correctness in task completion, and scheduling in the cloud environment enables the various cloud services to support framework implementation. This survey therefore covers a wide range of scheduling algorithms in cloud computing environments, including workflow scheduling and grid scheduling, and gives an elaborate view of grid, cloud, and workflow scheduling aimed at minimizing energy cost and improving the efficiency and throughput of the system.
Bragged Regression Tree Algorithm for Dynamic Distribution and Scheduling of ... (Editor IJCATR)
In the past few years, Grid computing emerged as a next-generation computing platform that combines heterogeneous computing resources, connected by a network, across dynamic and geographically separated organizations. It therefore provides a suitable computing environment for solving large-scale computational demands. Demand for Grid computing continues to rise due to the growing number of complex jobs worldwide, and jobs can take much longer to complete when batches or groups of jobs are poorly distributed to inappropriate CPUs. There is therefore a need for an efficient dynamic job scheduling algorithm that assigns jobs to appropriate CPUs dynamically. The main problem dealt with in the paper is how to distribute jobs when the payload, importance, urgency, flow time, and so on keep changing dynamically as the grid expands or is flooded with job requests from different machines within the grid.
In this paper, we present a scheduling strategy that takes advantage of a decision tree algorithm to make dynamic decisions based on the current scenario, and which automatically incorporates factor analysis when considering the distribution of jobs.
This document proposes a genetic algorithm called Workflow Scheduling for Public Cloud Using Genetic Algorithm (WSGA) to optimize the cost of executing workflows in the public cloud. It discusses how genetic algorithms can be applied to the workflow scheduling problem to generate optimal schedules. The WSGA represents potential scheduling solutions as chromosomes, uses a fitness function to evaluate scheduling costs, and applies genetic operators like selection, crossover and mutation to evolve new schedules over multiple iterations. The goal is to minimize total execution cost while meeting workflow dependencies and deadline constraints. An experimental setup is described and the WSGA approach is claimed to reduce costs more than other heuristic scheduling algorithms for communication-intensive workflows.
Sharing of cluster resources among multiple Workflow Applications (ijcsit)
Many computational solutions can be expressed as workflows. A cluster of processors is a shared resource among several users, hence the need for a scheduler that deals with multi-user jobs presented as workflows. The scheduler must find the number of processors to be allotted to each workflow and schedule tasks on the allotted processors. In this work, a new method to find the optimal and maximum number of processors that can be allotted to a workflow is proposed. Regression analysis is used to find the best possible way to share the available processors among a suitable number of submitted workflows. An instance of a scheduler is created for each workflow, which schedules tasks on the allotted processors. Toward this end, a new framework to receive online submissions of workflows, allot processors to each workflow, and schedule tasks is proposed and evaluated using a discrete-event simulator. This space-sharing of processors among multiple workflows performs better than the other methods found in the literature. Because of space-sharing, an instance of a scheduler must be used for each workflow within the allotted processors. Since the number of processors for each workflow is known only at runtime, a static schedule cannot be used; hence a hybrid scheduler that combines the advantages of static and dynamic schedulers is proposed. The proposed framework is thus a promising solution to scheduling multiple workflows on a cluster.
QoS aware scientific application scheduling algorithm in cloud environment (Alexander Decker)
This document summarizes a research paper that proposes a scheduling algorithm for scientific applications in cloud environments. The algorithm aims to schedule tasks in workflows based on user preferences for quality of service (QoS), such as time and cost. It ranks tasks and uses a UPFF function to select resources that meet the user's desired QoS. The algorithm is compared to other similar algorithms through scenarios, and the results show it has better efficiency. The full paper provides more details on scientific workflows, cloud computing, and related work on workflow scheduling algorithms, and defines the problem of scheduling tasks to resources while considering costs and times.
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID (ijgca)
Grid computing is an accumulation of heterogeneous, dynamic resources from multiple administrative areas, geographically distributed, that can be utilized to reach a common end. The development of resource-provisioning-based scheduling in large-scale distributed environments like grid computing brings new requirement challenges not considered in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to the resources that complete it, in an environment that otherwise cannot adequately fulfill user requirements. Satisfying users while provisioning resources can also increase the benefit to resource suppliers. Resource scheduling has to satisfy the multiple constraints specified by the user, and selecting a resource that satisfies multiple constraints is a tedious process. This problem is addressed by introducing a particle swarm optimization based heuristic scheduling algorithm, which attempts to select the most suitable resource from the set of available resources. The primary parameters taken in this work for selecting the most suitable resource are makespan and cost. Experimental results show that the proposed method yields optimal scheduling while satisfying all user requirements.
Quality of Service based Task Scheduling Algorithms in Cloud Computing (IJECEIAES)
In cloud computing, resources are treated as services, so efficient resource utilization is achieved through task scheduling and load balancing. Quality of service is an important factor in measuring the trustworthiness of the cloud, and using quality of service in task scheduling addresses security problems in cloud computing. This paper studies quality-of-service-based task scheduling algorithms and the parameters used for scheduling. By comparing the results, the efficiency of each algorithm is measured and its limitations are given. The efficiency of quality-of-service-based task scheduling algorithms can be improved by considering the arrival time of the task, the time taken by the task to execute on the resource, and the communication cost.
Demand-driven Gaussian window optimization for executing preferred population... (IJECEIAES)
Scheduling is one of the essential enabling techniques for cloud computing, facilitating efficient resource utilization among the jobs scheduled for processing. However, it incurs performance overheads when resources are provisioned inappropriately to requesting jobs, so cloud performance must be achieved through intelligent scheduling and allocation of resources. In this paper, we propose the application of a Gaussian window in which heterogeneous jobs are scheduled in round-robin fashion across different cloud clusters. The clusters are heterogeneous, with datacenters of varying server capacity. Performance evaluation results show that the proposed algorithm enhances the QoS of the computing model: allocating jobs to specific clusters improves system throughput and reduces latency.
1. The document proposes a new framework for scheduling multiple DAG applications on a cluster of processors. It involves finding the optimal and maximum number of processors that can be allotted to each DAG.
2. Regression analysis is used to model the reduction in makespan for each additional processor allotted to a DAG. This information helps determine the best way to share available processors among submitted DAGs.
3. The framework receives DAG submissions, allocates processors to each DAG, and schedules tasks on the allotted processors. The goal is to maximize resource utilization and minimize overall completion time. Experiments show this approach performs better than other methods in literature.
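The regression step in point 2 above can be sketched with ordinary least squares. A simple model of makespan versus processor count, m(p) ≈ a + b/p, is an assumption made for this sketch (not the paper's stated model); it lets the scheduler predict the makespan saved by granting a DAG one more processor.

```python
# Least-squares fit of observed makespans against 1/p (p = processors),
# so the marginal benefit of each additional processor can be estimated.

def fit_makespan(procs, makespans):
    """Fit m = a + b/p by ordinary least squares; returns (a, b)."""
    xs = [1.0 / p for p in procs]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(makespans) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, makespans)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def marginal_gain(a, b, p):
    """Predicted makespan saved by going from p to p + 1 processors."""
    return (a + b / p) - (a + b / (p + 1))
```

When the marginal gain for one DAG drops below that of another, the next free processor is better spent on the other DAG, which is the essence of sharing processors among submitted DAGs.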
A BAYE'S THEOREM BASED NODE SELECTION FOR LOAD BALANCING IN CLOUD ENVIRONMENT (hiij)
Cloud computing is a popular computing model, as it serves a large number of user requests on the fly, which has led to a proliferation of cloud users. This in turn has led to overloaded nodes in the cloud environment and to load imbalance among the cloud servers, which impacts performance. Hence, in this paper a heuristic Bayes' theorem approach is combined with clustering to identify the optimal node for load balancing. Experiments using the proposed approach are carried out on the CloudSim simulator and compared with an existing approach. The results demonstrate that task deployment performed using this approach improves performance in terms of utilization and throughput compared to the existing approaches.
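One way the Bayes' theorem idea above could look in code: estimate, from past deployments, the posterior probability that a node completes a task on time given its observed load level, then pick the node with the highest posterior. This is a hedged sketch, not the paper's method; the load levels, counts, and function names are invented.

```python
# P(success | load) = P(load | success) * P(success) / P(load),
# estimated from (load_level, succeeded) observations per node.

def posterior_success(history, load_level):
    total = len(history)
    succ = [h for h in history if h[1]]
    p_success = len(succ) / total
    p_load = sum(1 for h in history if h[0] == load_level) / total
    p_load_given_s = sum(1 for h in succ if h[0] == load_level) / len(succ)
    return p_load_given_s * p_success / p_load

def pick_node(node_histories, current_loads):
    """Choose the node with the highest posterior for its current load."""
    return max(current_loads,
               key=lambda n: posterior_success(node_histories[n],
                                               current_loads[n]))
```

Clustering, as mentioned in the abstract, would narrow the candidate set before this posterior comparison is applied.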
Deadline and Suffrage Aware Task Scheduling Approach for Cloud Environment (IRJET Journal)
The document proposes a deadline and suffrage aware task scheduling approach for cloud environments. It discusses limitations of existing approaches that can cause system imbalances. The proposed approach considers both task deadlines and priorities assigned by user votes ("suffrage") to schedule tasks. It was tested using CloudSim simulator and found to outperform the basic min-min approach in reducing completion times and improving resource utilization and provider profits while still meeting task deadlines.
Scheduling Divisible Jobs to Optimize the Computation and Energy Costs (inventionjournals)
ABSTRACT: An important challenge in the cloud computing environment is to design a scheduling strategy that handles jobs and processes them in a heterogeneous environment with shared data centers. In this paper, we investigate a new analytical framework that enables an existing private cloud data center to schedule jobs while minimizing the overall computation and energy cost together. Our model is based on the Divisible Load Theory (DLT) model and derives a closed-form solution for the load fractions to be assigned to each machine, considering computation and energy cost. Our analysis also attempts to schedule jobs in such a way that the cloud provider gains maximum benefit from its service while the Quality of Service (QoS) requirements of users' jobs are met. Finally, we quantify the performance of the strategies via rigorous simulation studies.
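A simplified closed form in the spirit of the DLT model above: split a divisible load so that all machines finish simultaneously. Ignoring communication delay and the energy-cost term (both simplifications made here, not in the paper), equal finish time T = alpha_i * W / s_i for every machine i gives load fractions alpha_i proportional to machine speed s_i.

```python
# Closed-form load split for a divisible load across machines with
# speeds s_i, under the equal-finish-time condition (communication and
# energy terms omitted for this sketch).

def load_fractions(speeds):
    """Fractions alpha_i of the load so all finish times are equal."""
    total = sum(speeds)
    return [s / total for s in speeds]

def finish_time(total_load, speeds):
    alphas = load_fractions(speeds)
    times = [a * total_load / s for a, s in zip(alphas, speeds)]
    return max(times)  # all equal by construction
```

The paper's full derivation additionally weighs energy cost per machine, which shifts the fractions away from the purely speed-proportional split shown here.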
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin... (csandit)
A Computational Grid (CG) creates a large heterogeneous and distributed paradigm for managing and executing computationally intensive applications. In grid scheduling, tasks are assigned to the proper processors in the grid system for execution, considering the execution policy and the optimization objectives. In this paper, makespan and the fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and optimized. As grid scheduling is considered NP-hard, meta-heuristic evolutionary techniques are often used to find a solution, and we propose an NSGA-II for this purpose. The performance of the proposed Fault-tolerance Aware NSGA-II (FTNSGA II) has been estimated using a MATLAB program. The simulation results evaluate the performance of the proposed algorithm, and the results of the proposed model are compared with the existing Min-Min and Max-Min algorithms, demonstrating the model's effectiveness.
GROUPING BASED JOB SCHEDULING ALGORITHM USING PRIORITY QUEUE AND HYBRID ALGOR... (ijgca)
Grid computing extends the computing platform with a collection of heterogeneous computing resources connected by a network, across dynamic and geographically dispersed organizations, to form a distributed high-performance computing infrastructure. Grid computing solves complex computing problems across multiple machines and meets large-scale computational demands in a high-performance computing environment. The main emphasis in grid computing is on resource management and the job scheduler, whose goal is to maximize resource utilization and minimize the processing time of jobs. Existing approaches to grid scheduling give little emphasis to the scheduler's performance with respect to processing time: schedulers allocate resources to jobs using the First Come First Serve algorithm. In this paper, we provide an optimized algorithm for the scheduler's queue using various scheduling methods: Shortest Job First, First In First Out, and Round Robin. The job scheduling system is responsible for selecting the best suitable machines in a grid for user jobs; the management and scheduling system generates job schedules for each machine in the grid, taking static restrictions and dynamic parameters of jobs and machines into consideration. The main purpose of this paper is to develop an efficient job scheduling algorithm that maximizes resource utilization and minimizes the processing time of jobs. Queues can be optimized using various scheduling algorithms depending on the performance criteria to be improved, e.g., response time and throughput. The work has been done in MATLAB using the Parallel Computing Toolbox.
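The benefit of reordering the scheduler's queue, as discussed above, can be seen in a small sketch: the same job set is served in arrival order (FIFO) and shortest-first (SJF), and average waiting time drops under SJF. Job lengths are arbitrary example values; this illustrates the queue orderings, not the paper's full hybrid algorithm.

```python
# Average waiting time of jobs served sequentially in list order.

def avg_waiting_time(job_lengths):
    wait, elapsed = 0, 0
    for length in job_lengths:
        wait += elapsed          # this job waited for all previous jobs
        elapsed += length
    return wait / len(job_lengths)

jobs = [8, 1, 3, 5]
fifo_wait = avg_waiting_time(jobs)          # serve in arrival order
sjf_wait = avg_waiting_time(sorted(jobs))   # serve shortest job first
```

Round Robin would instead interleave fixed time slices across the jobs, trading throughput for fairer response time, which is why the paper combines the three policies according to the performance criterion being optimized.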
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
RSDC (Reliable Scheduling Distributed in Cloud Computing)IJCSEA Journal
This document summarizes the PPDD algorithm for scheduling divisible loads originating from multiple sites in distributed computing environments. The PPDD algorithm is a two-phase approach that first derives a near-optimal load distribution and then considers actual communication delays when transferring load fractions. It guarantees a near-optimal solution and improved performance over previous algorithms like RSA by avoiding unnecessary load transfers between processors.
The document proposes an Earthquake Disaster Based Resource Scheduling (EDBRS) framework for efficiently allocating cloud computing resources during earthquake disasters. The framework aims to minimize execution costs and times of cloud workloads by prioritizing urgent workloads related to emergency response. It models the resource scheduling problem and considers factors like workload deadlines, resource speeds and costs. The framework also presents algorithms for optimally assigning equal-length and variable-length workloads across multiple public and private cloud resources to balance performance and cost. The goal is to efficiently allocate cloud resources to disaster response zones based on urgency to reduce loss of life during earthquakes.
This document proposes an Earthquake Disaster Based Resource Scheduling (EDBRS) framework for efficiently allocating cloud computing resources during earthquake disasters. The framework prioritizes resource allocation based on the urgency of workloads, with more urgent workloads related to earthquake response and rescue receiving resources first. An algorithm is proposed that schedules resources to workloads based on this urgency criterion. The algorithm aims to reduce the execution time and costs of cloud workloads submitted during disasters as compared to existing scheduling algorithms. The performance of the proposed algorithm is evaluated using CloudSim simulation software, and it is shown to outperform existing algorithms.
Task Scheduling methodology in cloud computing Qutub-ud- Din
This document outlines a proposed methodology for developing efficient task scheduling strategies in cloud computing. It begins with introductions to cloud computing and task scheduling. It then reviews several relevant existing task scheduling algorithms from literature that focus on objectives like reducing costs, minimizing completion time, and maximizing resource utilization. The problem statement indicates the goals are to reduce costs, minimize completion time, and maximize resource allocation. An overview of the proposed methodology's flow is then provided, followed by references.
This document summarizes a research paper that proposes a strategy to improve resource provisioning in heterogeneous cloud environments. The strategy uses an electronic auction model that considers workload selection factors like job deadlines and CPU time. It also presents workflow optimization logic to minimize costs while meeting performance requirements. The strategy employs fault tolerance services using job migration. It is evaluated based on metrics like execution time, makespan time, migration frequency and energy consumption, showing improved performance over existing approaches. Future work plans to introduce new resource provisioning mechanisms considering load, energy and network factors to optimize resource selection and reduce transmission costs and times.
The document discusses using a genetic algorithm to schedule tasks in a cloud computing environment. It aims to minimize task execution time and reduce computational costs compared to the traditional Round Robin scheduling algorithm. The proposed genetic algorithm mimics natural selection and genetics to evolve optimal task schedules. It was tested using the CloudSim simulation toolkit and results showed the genetic algorithm provided better performance than Round Robin scheduling.
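As a rough illustration of how a genetic algorithm can evolve task schedules, the sketch below encodes a task-to-VM assignment as a chromosome and minimizes makespan. The population size, mutation rate, and toy task/VM data are arbitrary assumptions, not the paper's parameters:

```python
import random

def makespan(assignment, task_len, vm_speed):
    """Finish time of the most loaded VM for a task->VM mapping."""
    load = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        load[vm] += task_len[task] / vm_speed[vm]
    return max(load)

def ga_schedule(task_len, vm_speed, pop=30, gens=100, seed=1):
    """Evolve task->VM assignments: selection of the fittest half,
    one-point crossover, and occasional random mutation."""
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_speed)
    popu = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda c: makespan(c, task_len, vm_speed))
        survivors = popu[: pop // 2]            # selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # mutation
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        popu = survivors + children
    return min(popu, key=lambda c: makespan(c, task_len, vm_speed))
```

For comparison, a naive Round Robin assignment of the six toy tasks below across the two VMs yields a makespan of 15.0, which the evolved schedule comfortably beats.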
A Review on Scheduling in Cloud Computingijujournal
Cloud computing delivers software, infrastructure and platform as services to clients on a
pay-per-use basis. The main goal of scheduling is to achieve accuracy and correctness in task
completion, and scheduling in the cloud environment enables the various cloud services to support
framework implementation. This survey therefore covers a far-reaching range of scheduling
algorithms in the cloud computing environment, including workflow scheduling and grid scheduling,
and gives an elaborate view of grid, cloud and workflow scheduling aimed at minimizing the energy
cost while improving the efficiency and throughput of the system.
Bragged Regression Tree Algorithm for Dynamic Distribution and Scheduling of ...Editor IJCATR
In the past few years, Grid computing has emerged as a next-generation computing platform: a combination of
heterogeneous computing resources joined by a network across dynamic and geographically separated organizations. It
therefore provides an ideal computing environment for large-scale computational demands. As those demands keep
rising day by day with the growing number of complex jobs worldwide, jobs may take much longer to complete
because batches or groups of jobs are poorly distributed to inappropriate CPUs. There is therefore a need for an
efficient dynamic job scheduling algorithm that assigns jobs to appropriate CPUs at runtime. The main problem dealt
with in the paper is how to distribute jobs when the payload, importance, urgency, flow time, etc. keep changing
dynamically as the grid expands or is flooded with job requests from different machines within the grid.
In this paper, we present a scheduling strategy that takes advantage of a decision tree algorithm to make dynamic
decisions based on the current scenario, and which automatically incorporates factor analysis when considering the
distribution of jobs.
This document proposes a genetic algorithm called Workflow Scheduling for Public Cloud Using Genetic Algorithm (WSGA) to optimize the cost of executing workflows in the public cloud. It discusses how genetic algorithms can be applied to the workflow scheduling problem to generate optimal schedules. The WSGA represents potential scheduling solutions as chromosomes, uses a fitness function to evaluate scheduling costs, and applies genetic operators like selection, crossover and mutation to evolve new schedules over multiple iterations. The goal is to minimize total execution cost while meeting workflow dependencies and deadline constraints. An experimental setup is described and the WSGA approach is claimed to reduce costs more than other heuristic scheduling algorithms for communication-intensive workflows.
Sharing of cluster resources among multiple Workflow Applicationsijcsit
Many computational solutions can be expressed as workflows. A cluster of processors is a shared
resource among several users, hence the need for a scheduler that deals with multi-user jobs
presented as workflows. The scheduler must find the number of processors to be allotted to each
workflow and schedule tasks on the allotted processors. In this work, a new method to find the
optimal and maximum number of processors that can be allotted to a workflow is proposed.
Regression analysis is used to find the best possible way to share the available processors among
a suitable number of submitted workflows. An instance of a scheduler is created for each
workflow, which schedules tasks on the allotted processors. Towards this end, a new framework to
receive online submissions of workflows, allot processors to each workflow, and schedule tasks is
proposed and experimented with using a discrete-event based simulator. This space-sharing of
processors among multiple workflows shows better performance than the other methods found in the
literature. Because of space-sharing, an instance of a scheduler must be used for each workflow
within the allotted processors. Since the number of processors for each workflow is known only at
runtime, a static schedule cannot be used; hence a hybrid scheduler that tries to combine the
advantages of static and dynamic schedulers is proposed. The proposed framework is thus a
promising solution to scheduling multiple workflows on a cluster.
QoS aware scientific application scheduling algorithm in cloud environmentAlexander Decker
This document summarizes a research paper that proposes a scheduling algorithm for scientific applications in cloud environments. The algorithm aims to schedule tasks in workflows based on user preferences for quality of service (QoS), like time and cost. It ranks tasks and uses an UPFF function to select resources that meet the user's desired QoS. The algorithm is compared to other similar algorithms through scenarios, and results show it has better efficiency. The full paper provides more details on scientific workflows, cloud computing, related work on workflow scheduling algorithms, and defines the problem of scheduling tasks to resources while considering costs and times.
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID ijgca
Grid computing is an accumulation of heterogeneous, dynamic resources from multiple geographically distributed administrative domains that can be utilized to reach a common goal. Resource provisioning-based scheduling in large-scale distributed environments such as grids introduces requirements not considered in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to the resources that complete it; a schedule that cannot adequately fulfill the user's requirements is of little value, and satisfying users while provisioning resources also raises the benefit to resource suppliers. Resource scheduling must satisfy the multiple constraints specified by the user, and selecting a resource under all of these constraints is a tedious process. This problem is addressed by introducing a particle swarm optimization based heuristic scheduling algorithm that attempts to select the most suitable resource from the set of available resources. The primary parameters used in this work to select the most suitable resource are makespan and cost. Experimental results show that the proposed method yields optimal scheduling while satisfying all user requirements.
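As a general illustration of the idea (not the paper's exact formulation), the sketch below runs a plain particle swarm over continuous positions that decode to task-to-resource assignments, scored by a weighted makespan-plus-cost fitness. The weights, PSO coefficients, and toy data are all assumptions:

```python
import random

def fitness(pos, task_len, speed, price, w_time=0.5, w_cost=0.5):
    """Decode a continuous particle into a task->resource mapping and
    score it by a weighted sum of makespan and total cost."""
    m = len(speed)
    assign = [min(m - 1, max(0, int(round(x)))) for x in pos]
    load = [0.0] * m
    cost = 0.0
    for t, r in enumerate(assign):
        load[r] += task_len[t] / speed[r]
        cost += task_len[t] * price[r]
    return w_time * max(load) + w_cost * cost

def pso_schedule(task_len, speed, price, particles=20, iters=80, seed=7):
    """Textbook PSO: inertia 0.7, cognitive and social weights 1.5."""
    rng = random.Random(seed)
    n, m = len(task_len), len(speed)
    pos = [[rng.uniform(0, m - 1) for _ in range(n)] for _ in range(particles)]
    vel = [[0.0] * n for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=lambda p: fitness(p, task_len, speed, price))[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(n):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i], task_len, speed, price) < fitness(pbest[i], task_len, speed, price):
                pbest[i] = pos[i][:]
            if fitness(pbest[i], task_len, speed, price) < fitness(gbest, task_len, speed, price):
                gbest = pbest[i][:]
    return gbest
```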
Quality of Service based Task Scheduling Algorithms in Cloud Computing IJECEIAES
In cloud computing, resources are considered as services; hence efficient utilization of resources is achieved through task scheduling and load balancing. Quality of service is an important factor in measuring the trustworthiness of the cloud, and using quality of service in task scheduling addresses security concerns in cloud computing. This paper studies quality-of-service based task scheduling algorithms and the parameters used for scheduling. By comparing their results, the efficiency of each algorithm is measured and its limitations are given. The efficiency of quality-of-service based task scheduling algorithms can be improved by considering the arrival time of a task, the time the task takes to execute on a resource, and the communication cost involved.
Demand-driven Gaussian window optimization for executing preferred population...IJECEIAES
Scheduling is one of the essential enabling techniques for Cloud computing, facilitating efficient resource utilization among the jobs scheduled for processing. However, it experiences performance overheads due to inappropriate provisioning of resources to requesting jobs, so it is essential that Cloud performance is achieved through intelligent scheduling and allocation of resources. In this paper, we propose the application of a Gaussian window where jobs of a heterogeneous nature are scheduled round-robin across different Cloud clusters. The clusters are heterogeneous, having datacenters with varying server capacity. Performance evaluation results show that the proposed algorithm enhances the QoS of the computing model: allocating jobs to specific clusters improves system throughput and reduces latency.
1. The document proposes a new framework for scheduling multiple DAG applications on a cluster of processors. It involves finding the optimal and maximum number of processors that can be allotted to each DAG.
2. Regression analysis is used to model the reduction in makespan for each additional processor allotted to a DAG. This information helps determine the best way to share available processors among submitted DAGs.
3. The framework receives DAG submissions, allocates processors to each DAG, and schedules tasks on the allotted processors. The goal is to maximize resource utilization and minimize overall completion time. Experiments show this approach performs better than other methods in literature.
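The regression step in point 2 can be pictured as a least-squares fit of makespan against 1/p; the inverse model and the marginal-gain stopping rule below are illustrative assumptions, not the paper's exact method:

```python
def fit_inverse_model(procs, makespans):
    """Least-squares fit of makespan ~ a + b/p over observed
    (processors, makespan) pairs -- a stand-in for the regression step."""
    xs = [1.0 / p for p in procs]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(makespans) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, makespans)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def max_useful_processors(a, b, min_gain=1.0, cap=64):
    """Largest p whose predicted makespan reduction over p-1 still
    exceeds min_gain -- one plausible allotment cutoff."""
    p = 2
    while p < cap and (a + b / (p - 1)) - (a + b / p) > min_gain:
        p += 1
    return p - 1 if p > 2 else 1
```

With measurements that follow makespan = 10 + 100/p exactly, the fit recovers a = 10 and b = 100, and the cutoff lands where the marginal gain of one more processor drops below the threshold.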
A BAYE'S THEOREM BASED NODE SELECTION FOR LOAD BALANCING IN CLOUD ENVIRONMENThiij
Cloud computing is a popular computing model as it renders service to a large number of user
requests on the fly, which has led to the proliferation of a large number of cloud users. This has
led to overloaded nodes in the cloud environment, along with the problem of load imbalance among
the cloud servers, and thereby impacts performance. Hence, in this paper a heuristic Bayes'
theorem approach is considered along with clustering to identify the optimal node for load
balancing. Experiments using the proposed approach are carried out on the CloudSim simulator and
compared with the existing approach. Results demonstrate that task deployment performed using
this approach improves performance in terms of utilization and throughput when compared to
existing approaches.
Deadline and Suffrage Aware Task Scheduling Approach for Cloud EnvironmentIRJET Journal
The document proposes a deadline and suffrage aware task scheduling approach for cloud environments. It discusses limitations of existing approaches that can cause system imbalances. The proposed approach considers both task deadlines and priorities assigned by user votes ("suffrage") to schedule tasks. It was tested using CloudSim simulator and found to outperform the basic min-min approach in reducing completion times and improving resource utilization and provider profits while still meeting task deadlines.
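Since the proposal above is benchmarked against the basic min-min heuristic, a compact sketch of min-min itself may help; the expected-time-to-compute matrix here is invented for illustration:

```python
def min_min(etc):
    """Min-min heuristic: etc[t][m] is the expected time of task t on
    machine m. Repeatedly pick the task with the smallest earliest
    completion time and bind it to that machine."""
    ready = [0.0] * len(etc[0])          # machine-available times
    unscheduled = set(range(len(etc)))
    schedule = {}
    while unscheduled:
        best = None
        for t in unscheduled:
            for m in range(len(ready)):
                ct = ready[m] + etc[t][m]
                if best is None or ct < best[0]:
                    best = (ct, t, m)
        ct, t, m = best
        ready[m] = ct                    # machine busy until ct
        schedule[t] = m
        unscheduled.remove(t)
    return schedule, max(ready)
```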
Scheduling Divisible Jobs to Optimize the Computation and Energy Costsinventionjournals
ABSTRACT: An important challenge in the cloud computing environment is to design a scheduling strategy that handles jobs and processes them in a heterogeneous environment with shared data centers. In this paper, we investigate a new analytical framework model that enables an existing private cloud data center to schedule jobs while minimizing the overall computation and energy cost together. Our model is based on the Divisible Load Theory (DLT) model to derive a closed-form solution for the load fractions to be assigned to each machine, considering computation and energy cost. Our analysis also attempts to schedule jobs in such a way that the cloud provider gains the maximum benefit for its service while meeting the Quality of Service (QoS) requirements of the user's jobs. Finally, we quantify the performance of the strategies via rigorous simulation studies.
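Ignoring communication delay and the energy terms, the core DLT idea of closed-form load fractions reduces to making all machines finish simultaneously: alpha_i * w_i must be equal for all i, so alpha_i is proportional to 1/w_i. The sketch below shows only that simplified case, not the paper's full computation-plus-energy model:

```python
def dlt_fractions(total_load, unit_time):
    """Divisible Load Theory sketch (no communication delay): split the
    load so every machine finishes at the same instant. unit_time[i] is
    the time machine i needs per unit of load."""
    inv = [1.0 / w for w in unit_time]
    s = sum(inv)
    return [total_load * v / s for v in inv]
```

For machines needing 1, 2, and 4 time units per unit of load, a load of 100 splits roughly 57.1 / 28.6 / 14.3, and every machine finishes at the same time.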
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...csandit
Computational Grid (CG) creates a large heterogeneous, distributed paradigm to manage and execute computationally intensive applications. In grid scheduling, tasks are assigned to appropriate processors in the grid system for execution, considering the execution policy and the optimization objectives. In this paper, makespan and the fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and optimized. As grid scheduling is considered NP-hard, meta-heuristic evolutionary techniques are often used to find a solution, and we propose an NSGA-II based approach for this purpose. The performance of the proposed Fault-tolerance Aware NSGA-II (FTNSGA II) has been estimated with a program written in Matlab. The simulation results evaluate the performance of the proposed algorithm, and comparison against the existing Min-Min and Max-Min algorithms demonstrates the effectiveness of the model.
GROUPING BASED JOB SCHEDULING ALGORITHM USING PRIORITY QUEUE AND HYBRID ALGOR...ijgca
Grid computing extends the computing platform with a collection of heterogeneous computing resources connected by a network across dynamic and geographically dispersed organizations to form a distributed high-performance computing infrastructure. Grid computing solves complex computing
problems across multiple machines and meets large-scale computational demands in a high-performance computing environment. The main emphasis in grid computing is on resource management and the job scheduler, whose goal is to maximize resource utilization and minimize the processing time of jobs. Existing grid scheduling approaches place little emphasis on the processing-time performance of the scheduler: schedulers allocate resources to jobs using the First Come First Serve algorithm. In this paper, we optimize the scheduler's queue using various scheduling methods such as Shortest Job First, First In First Out, and Round Robin. The job scheduling system is responsible for selecting the most suitable machines in a grid for user jobs. The management and scheduling system generates job schedules for each machine in the grid, taking static restrictions and dynamic parameters of jobs and machines
into consideration. The main purpose of this paper is to develop an efficient job scheduling algorithm that maximizes resource utilization and minimizes the processing time of jobs. Queues can be optimized using various scheduling algorithms depending on the performance criteria to be improved, e.g. response
time or throughput. The work has been done in MATLAB using the parallel computing toolbox.
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENTIJCNCJournal
Cloud computing plays an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands sophisticated algorithms to solve the task-allotment problem, and the overall performance of cloud systems is rooted in the efficiency of their task scheduling algorithms. The dynamic nature of cloud systems makes it challenging to find an optimal solution satisfying all evaluation metrics. The new approach is built on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
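A minimal sketch of the hybrid idea, combining an SJF initial ordering with round-robin time slicing; the quantum and burst times are invented, and this is only one plausible reading of how the two policies combine:

```python
from collections import deque

def sjf_round_robin(bursts, quantum):
    """Hybrid sketch: order jobs by burst time first (SJF), then serve
    them round-robin with a fixed quantum so long jobs cannot starve
    short ones. Returns per-job completion times."""
    order = sorted(range(len(bursts)), key=lambda j: bursts[j])
    queue = deque(order)
    remaining = list(bursts)
    clock = 0.0
    finish = [0.0] * len(bursts)
    while queue:
        j = queue.popleft()
        run = min(quantum, remaining[j])
        clock += run
        remaining[j] -= run
        if remaining[j] > 0:
            queue.append(j)      # not done: back to the end of the queue
        else:
            finish[j] = clock
    return finish
```

With bursts [6, 2, 4] and a quantum of 2, the shortest job completes at time 2 while the longest finishes last at time 12, so short jobs see low waiting times without the long job starving.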
MULTIPLE DAG APPLICATIONS SCHEDULING ON A CLUSTER OF PROCESSORScscpconf
Many computational solutions can be expressed as a Directed Acyclic Graph (DAG), in which
nodes represent tasks to be executed and edges represent precedence constraints among tasks.
A cluster of processors is a shared resource among several users, hence the need for a
scheduler that deals with multi-user jobs presented as DAGs. The scheduler must find the
number of processors to be allotted to each DAG and schedule tasks on the allotted processors.
In this work, a new method to find the optimal and maximum number of processors that can be
allotted to a DAG is proposed. Regression analysis is used to find the best possible way to
share the available processors among a suitable number of submitted DAGs. An instance of a
scheduler for each DAG schedules tasks on the allotted processors. Towards this end, a new
framework to receive online submissions of DAGs, allot processors to each DAG, and schedule
tasks is proposed and experimented with using a simulator. This space-sharing of processors
among multiple DAGs shows better performance than the other methods found in the literature.
Because of space-sharing, an online scheduler can be used for each DAG within the allotted
processors. The use of an online scheduler overcomes the drawbacks of static scheduling, which
relies on inaccurate estimated computation and communication costs. The proposed framework is
thus a promising solution for online scheduling of tasks using static information of the DAG, a
kind of hybrid scheduling.
Optimized Assignment of Independent Task for Improving Resources Performance ...ijgca
Grid computing has emerged from the category of distributed and parallel computing, where
heterogeneous resources from different networks are used simultaneously to solve a particular
problem that needs a huge amount of resources. The potential of Grid computing depends on many
issues such as security of resources, heterogeneity of resources, fault tolerance, resource
discovery, and job scheduling. Scheduling is one of the core steps to efficiently exploit the
capabilities of heterogeneous distributed computing resources and is an NP-complete problem. To
achieve the promising potential of grid computing, an effective and efficient job scheduling
algorithm is proposed which optimizes two important criteria to improve resource performance:
makespan time and resource utilization. In addition, we classify various task scheduling
heuristics in the grid on the basis of their characteristics.
Max Min Fair Scheduling Algorithm using In Grid Scheduling with Load Balancing IJORCS
This paper shows the importance of fair scheduling in a grid environment, where all tasks get an equal amount of time for their execution so that no task starves. Load balancing of the available resources in the computational grid is another important factor; this paper assumes a uniform load is given to the resources, and to achieve this, load balancing is applied after the jobs are scheduled. It also considers the execution cost and bandwidth cost of the algorithms used here, because in a grid environment the resources are geographically distributed. In the implementation of this approach, the proposed algorithm reaches an optimal solution and minimizes the makespan as well as the execution cost and bandwidth cost.
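Fair scheduling of this kind is often grounded in max-min fairness. The sketch below shows the textbook max-min fair allocation of a shared capacity among demands, as a general illustration rather than the paper's algorithm:

```python
def max_min_fair(capacity, demands):
    """Max-min fair allocation: process demands in ascending order; each
    gets the lesser of its demand and an equal share of what remains, so
    small demands are fully satisfied and any surplus is redistributed."""
    n = len(demands)
    alloc = [0.0] * n
    order = sorted(range(n), key=lambda i: demands[i])
    remaining = capacity
    left = n
    for i in order:
        share = remaining / left     # equal share of what is left
        alloc[i] = min(demands[i], share)
        remaining -= alloc[i]
        left -= 1
    return alloc
```

With capacity 10 and demands [2, 8, 5], the small demand of 2 is granted in full and the remaining 8 units are split evenly between the two larger demands.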
QoS aware scientific application scheduling algorithm in cloud environmentAlexander Decker
The document describes a QoS-aware scientific application scheduling algorithm for cloud environments. It proposes an algorithm that ranks tasks in a workflow and uses a user preference fitness function to select resources based on the user's desired quality of service, such as time and cost. The algorithm is compared to other similar works through several scenarios, and results show the proposed algorithm has better efficiency. Key aspects considered include task dependencies, data sizes, compute times, data transfer times, workflow makespan, resource costs and attributes.
Task Scheduling using Hybrid Algorithm in Cloud Computing Environmentsiosrjce
The document summarizes a proposed hybrid task scheduling algorithm called PSOCS that combines particle swarm optimization (PSO) and cuckoo search (CS) for scheduling tasks in cloud computing environments. The PSOCS algorithm aims to minimize task completion time (makespan) and improve resource utilization. It was tested in a simulation using CloudSim and showed reductions in makespan and increases in utilization compared to PSO and random scheduling algorithms.
This document summarizes a research paper that proposes a hybrid task scheduling algorithm for cloud computing environments called PSOCS. PSOCS combines the Particle Swarm Optimization (PSO) algorithm and Cuckoo Search (CS) algorithm to optimize task scheduling and minimize completion time while increasing resource utilization. The paper describes PSO and CS algorithms individually, then defines the proposed PSOCS algorithm. It evaluates PSOCS using a simulation and finds it reduces makespan and increases utilization compared to PSO and random allocation algorithms.
Providing a multi-objective scheduling tasks by Using PSO algorithm for cost ...Editor IJCATR
This article applies the multi-objective PSO algorithm to task scheduling for cost management in cloud computing. Migration costs due to supply failure are treated as one objective; each task is a small particle, recognized through an appropriate fitness schedule function (governing how the particles are arranged) so that the total expense is minimized. In addition, a weight is assigned to each expenditure, reflecting the importance of its cost. The data used to simulate the proposed method are series of academic and research data prepared from the Internet, and MATLAB software is used for the simulation. We simulate two problems: in the first, four tasks are divided among four vehicles; the second is more complicated, with six tasks and four vehicles. We record PSO's output for both problems over various iterations. Finally, the particle dispersion as well as the output of the cost function were computed for each particle.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZE RESPONE TIMEijccsa
To improve the performance of cloud computing, there are many parameters and issues that we should consider, including resource allocation, resource responsiveness, connectivity to resources, unused resources exploration, corresponding resource mapping and planning for resource. The planning for the use of resources can be based on many kinds of parameters, and the service response time is one of them.
The users can easily figure out the response time of their requests, and it becomes one of the important QoSs. When we discover and explore more on this, response time can provide solutions for the distribution, the load balancing of resources with better efficiency. This is one of the most promising
research directions for improving the cloud technology. Therefore, this paper proposes a load balancing algorithm based on response time of requests on cloud with the name APRA (ARIMA Prediction of Response Time Algorithm), the main idea is to use ARIMA algorithms to predict the coming response time, thus giving a better way of effectively resolving resource allocation with threshold value. The experiment
result outcomes are potential and valuable for load balancing with predicted response time, it shows that prediction is a great direction for load balancing.
DGBSA : A BATCH JOB SCHEDULINGALGORITHM WITH GA WITH REGARD TO THE THRESHOLD ...IJCSEA Journal
In this paper , we will provide a scheduler on batch jobs with GA regard to the threshold detector. In The algorithm proposed in this paper, we will provide the batch independent jobs with a new technique ,so we can optimize the schedule them. To do this, we use a threshold detector then among the selected jobs, processing resources can process batch jobs with priority. Also hierarchy of tasks in each batch, will be determined with using DGBSA algorithm. Now , with the regard to the works done by previous ,we can provide an algorithm that by adding specific parameters to fitness function of the previous algorithms ,develop a optimum fitness function that in the proposed algorithm has been used. According to assessment done on DGBSA algorithm, in compare with the similar algorithms, it has more performance. The effective parameters that used in the proposed algorithm can reduce the total wasting time in compare with previous algorithms. Also this algorithm can improve the previous problems in batch processing with a new technique.
Grid computing can involve lot of computational tasks which requires trustworthy computational nodes. Load balancing in grid computing is a technique which overall optimizes the whole process of assigning computational tasks to processing nodes. Grid computing is a form of distributed computing but different from conventional distributed computing in a manner that it tends to be heterogeneous, more loosely coupled and dispersed geographically. Optimization of this process must contains the overall maximization of resources utilization with balance load on each processing unit and also by decreasing the overall time or output. Evolutionary algorithms like genetic algorithms have studied so far for the implementation of load balancing across the grid networks. But problem with these genetic algorithm is that they are quite slow in cases where large number of tasks needs to be processed. In this paper we give a novel approach of parallel genetic algorithms for enhancing the overall performance and optimization of managing the whole process of load balancing across the grid nodes.
The strategy aims to minimize the overall job execution
time and cost, and to maximize utilization of the Grid
resources. In order to evaluate the proposed job
scheduler, the GridSim toolkit, as discussed in Buyya and
Murshed (2002), is used to model and simulate Grid
resources and application scheduling.
The rest of this paper is organized as follows: Section 2
briefly discusses related work, whereas Section 3 presents
the proposed job grouping algorithm and its strategy.
Some simulations and experiments were conducted on the
proposed scheduler algorithm using GridSim toolkit and
the results are presented in Section 4. Finally, Section 5
concludes the paper and mentions some future work.
2 Related Work
In cellular manufacturing systems, job grouping has been
used to enhance the efficiency of machinery utilization, as
mentioned by Logendran, Carson and Hanson (2002).
Similarly, Gerasoulis and Yang (1992), in the context of
Directed Acyclic Graph (DAG) scheduling in parallel
computing environments, referred to grouping jobs to
reduce the communication dependencies among them as
clustering. The aim of clustering is to reduce
inter-job communication and thus decrease the time
required for parallel execution. For example,
Edge-Zeroing, as discussed in Sarkar (1989), tries to reduce the
critical path of the job graph. Another example is
Dominant Sequence Clustering (DSC), as explained by
Yang and Gerasoulis (1994), which tries to reduce the
longest path in a scheduled DAG. Once the clustering is
complete, mapping of clusters to processors becomes
another hard problem. Some heuristics for cluster
mapping are discussed and compared in Radulescu and
van Gemund (1998). These heuristics aim to maximize
the number of jobs that can be executed in parallel on
different processors.
In this work, we focus on scheduling jobs which do not
require communication with each other. Also, the overall
aim of this work is to create coarse-grained jobs by
grouping fine-grained jobs together in order to reduce the
job assignment overhead, that is, the overhead of starting
a new job on a remote node.
A study of scheduling heuristics for such jobs and a
similar problem was conducted in James, Hawick and
Coddington (1999). Among others, two clustering
algorithms - round-robin with clustering and continual
adaptive scheduling - were discussed and compared for
various job distributions. Within the former algorithm,
jobs were grouped in equal numbers, while in the latter
algorithm, the nodes are made to synchronize after each
round of execution. In our case, as we will describe later
on, the jobs are grouped according the ability of the
remote node. Also, the job groups are dispatched as and
when the nodes become available thus eliminating the
overhead of a synchronisation step.
3 Algorithm Listing
Figure 1 shows the terms that are used throughout this
paper and their definitions. The job grouping and
scheduling algorithm is presented in Figure 2. Figure 3
depicts an example of job grouping and scheduling
scenario where 100 user jobs with small processing
requirements (MI) are grouped into six job groups
according to the processing capabilities (MIPS) of the
available resources and the granularity size.
The overall explanation of Figure 2 is as follows. Once
the user jobs are submitted to the broker or scheduler, the
scheduler gathers the characteristics of the available Grid
resources. Then it selects a particular resource and
multiplies the resource MIPS by the granularity size;
the resulting value indicates the total MI the resource
can process within the specified granularity size.
The scheduler groups the user jobs by accumulating the
MI of each user job while comparing the resulting job
total MI against the resource total MI. If the total MI of
the user jobs exceeds the resource total MI, the last job
added is removed from the group.
Eventually, a new job (job group) with the accumulated
total MI is created, given a unique ID, and scheduled to be
executed on the selected resource. This process continues
until all the user jobs are grouped into a few groups and
assigned to the Grid resources. The scheduler then sends
the job groups to their corresponding resources for further
computation. The Grid resources process the received job
Figure 1: List of terms and their definitions
MI : Million instructions, the processing requirement of a user job
MIPS : Million instructions per second, the processing capability of a resource
Processing Time : Total time taken for executing the user jobs on the Grid
Computation Time : Time taken for computing a job on a Grid resource
JobList : List of user jobs submitted to the broker
RList : List of available Grid resources
JobList_Size : Total number of user jobs
RList_Size : Total number of available Grid resources
JobListi_MI : MI of the i-th user job
RListj_MIPS : MIPS of the j-th Grid resource
Granularity_Size : Granularity size (time in seconds) for the job grouping activity
Total_JMI : Total processing requirement (MI) of a job group
Total_RMIj : Total processing capability (MI) of the j-th resource, where
Total_RMIj = RListj_MIPS * Granularity_Size
GJobList : List of job groups after the job grouping activity
TargetRList : List of target resources of each job group
groups and send back the computed job groups to the
Grid user. The scheduler then gathers the computed job
groups from the network through its I/O port or queue.
As an example, in Figure 3 the granularity size is set to
3 seconds. The scheduler selects a resource of 33 MIPS
and multiplies its MIPS by the given granularity size: in
total, that particular resource can process 99 MI of user
jobs within 3 seconds. The scheduler then gathers the user
jobs by accumulating their MI up to 99 MI. In this case,
the first four jobs are grouped together, resulting in 85 MI.
The fifth job has an MI of 22, and grouping five jobs
would result in 107 MI, which is more than the total
processing capability of the selected resource. Once the
group of the first four jobs is created, the scheduler
assigns a unique ID to that group. It then selects another
resource and performs the same grouping operations. This
process continues until all the jobs are grouped into a
number of groups. Finally, the scheduler sends the groups
to their resources for job computation.
4 Evaluation
4.1 Implementation with GridSim
The GridSim toolkit is used to conduct simulations of
the developed scheduling algorithm. Figure 4 depicts
the simulation strategy of the proposed dynamic job
grouping-based scheduler, which is implemented using the
GridSim toolkit. The system accepts the total number of
user jobs, the processing requirement or average MI of
those jobs, the allowed deviation percentage of the MI,
the processing overhead time of each user job on the
Grid, the granularity size of the job grouping activity,
and the available Grid resources in the Grid environment
(steps 1-3). Details of the available Grid resources are
obtained from the Grid Information Service entity, which
keeps track of the resources available in the Grid
environment. Each Grid resource is described in terms of
various characteristics, such as resource ID, name, total
number of machines in the resource, total processing
elements (PE) in each machine, MIPS of each PE, and
bandwidth speed.
In this simulation, the details of the Grid resources are
-------------------------------------------------------------------------
Algorithm 1.0 Job Grouping and Scheduling Algorithm
-------------------------------------------------------------------------
1   m := 0;
2   for i := 0 to JobList_Size-1 do
3     for j := 0 to RList_Size-1 do
4       Total_JMI := 0;
5       Total_RMIj := RListj_MIPS * Granularity_Size;
6       while Total_JMI <= Total_RMIj and i <= JobList_Size-1 do
7         Total_JMI := Total_JMI + JobListi_MI;
8         i++;
9       endwhile
10      i--;
11      if Total_JMI > Total_RMIj then
12        Total_JMI := Total_JMI - JobListi_MI;
13        i--;
14      endif
15      Create a new job with total MI equal to Total_JMI;
16      Assign a unique ID to the newly created job;
17      Place the job in GJobListm;
18      Place RListj in TargetRListm;
19      m++;
20    endfor
21  endfor
22  for i := 0 to GJobList_Size-1 do
23    Send GJobListi to TargetRListi for job computation;
24  endfor
25  // Job computation at the Grid resources
26  for i := 0 to GJobList_Size-1 do
27    Receive computed GJobListi from TargetRListi;
28  endfor
Figure 2: Listing of the Job Grouping and Scheduling Algorithm
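The listing above can be rendered as a runnable sketch. This is a minimal Python version (names such as `group_jobs` are mine, and the cyclic resource selection follows the Figure 3 walk-through rather than the exact nested loops of the listing):

```python
def group_jobs(job_mis, resource_mips, granularity_size):
    """Group fine-grained jobs into coarser job groups.

    job_mis: processing requirement (MI) of each user job
    resource_mips: MIPS rating of each available resource
    granularity_size: time window (s) a resource is given per group
    Returns a list of (group_total_mi, resource_index) pairs.
    """
    groups = []
    i, j = 0, 0
    while i < len(job_mis):
        res = j % len(resource_mips)
        total_rmi = resource_mips[res] * granularity_size  # Total_RMIj
        total_jmi = 0
        # Accumulate jobs while the group still fits the resource capacity.
        while i < len(job_mis) and total_jmi + job_mis[i] <= total_rmi:
            total_jmi += job_mis[i]
            i += 1
        if total_jmi == 0:            # a single job exceeds the capacity:
            total_jmi = job_mis[i]    # dispatch it on its own
            i += 1
        groups.append((total_jmi, res))
        j += 1
    return groups

# Figure 3 example: the first four jobs (20+21+21+23 = 85 MI) fit the
# 33-MIPS resource's capacity of 33 * 3 = 99 MI; job 4 (22 MI) would
# overflow it, so it starts the next group on the next resource.
print(group_jobs([20, 21, 21, 23, 22], [33, 35, 70], 3))
# -> [(85, 0), (22, 1)]
```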
Figure 3: An example of the job grouping strategy. With a granularity size of 3 seconds, 100 user jobs (labelled User Job ID/MI, e.g. Job 0/20, Job 1/21, ..., Job 99/24) are grouped against three resources (labelled Resource ID/MIPS: 11/33 with Total_RMI 99, 15/35 with Total_RMI 105, and 11/70 with Total_RMI 210) into six job groups (labelled Job Group ID/MI): 0/85, 1/103, 2/200, 3/88, 4/100 and 5/97.
Resource   MIPS   Cost per second
R1         200    100
R2         160    200
R3         210    300
R4         480    210
R5         270    200
R6         390    210
R7         540    320
Table 1: Grid resources setup for the simulation.
stored in a file which is retrieved during the
simulations.
After gathering the details of user jobs and the available
resources, the system randomly creates jobs according to
the given average MI and MI deviation percentage (step
4). The scheduler will then select a resource and multiply
the resource MIPS with the given granularity size (step
5). The jobs will be gathered or grouped according to the
resulting total MI of the resource (step 6), and each
created group will be stored in a list with its associated
resource ID (step 7). Eventually, after grouping all jobs,
the scheduler will submit the job groups to their
corresponding resources for job computation (step 8).
4.2 Experimental Setup
Figure 5 lists the terms used within this section and their
definitions. The inputs to the simulations are the total
number of Gridlets, the average MI of the Gridlets, the MI
deviation percentage, the granularity size, the resource
MIPS ratings and the Gridlet processing overhead time.
The tests are conducted using seven resources of different
MIPS ratings, as shown in Table 1. The MIPS of each
resource is computed as follows:
Resource MIPS = Total_PE * PE_MIPS, where
Total_PE = Total number of PEs at the resource,
PE_MIPS = MIPS of PE
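This aggregate rating can be illustrated directly. The PE decomposition below is hypothetical, since the paper lists only aggregate MIPS values:

```python
def resource_mips(total_pe, pe_mips):
    """Aggregate MIPS rating of a resource with homogeneous PEs."""
    return total_pe * pe_mips

# Hypothetical decomposition: 4 PEs of 120 MIPS each would yield
# R4's aggregate rating of 480 MIPS (cf. Table 1).
print(resource_mips(4, 120))  # -> 480
```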
Each resource has its own predefined cost rate used to
compute the charges imposed on a Grid user for
executing jobs at that resource. The MIPS ratings and
cost per second were selected randomly for the simulation.
In the simulation, the total processing time is calculated
in seconds based on the overhead time for processing
Figure 4: The simulation strategy for the dynamic job grouping-based scheduler. The user jobs (total number of jobs, average MI, MI deviation percentage, overhead processing time) and the granularity size are passed to the job scheduler, which obtains the Grid resources' characteristics (resource IDs and MIPS) and dispatches the resulting job groups to their target resources (steps 1-8, as described in the text).
Figure 5: List of terms used within the evaluation and their definitions.
Gridlet : User job
Group : Total number of Gridlet groups created from Gridlet grouping process
R : Resource
A_MI : Average MI rating of Gridlet or Gridlet length in MI
G_Size : Granularity size in seconds
R_MIPS : Resource processing capabilities in MIPS
D_% : MI deviation percentage
OH_Time : Processing overhead time of each Gridlet in seconds
Process_Time : Gridlet processing time in seconds
Process_Cost : Processing cost of the Gridlets
PE : Processing elements in each resource
each Gridlet, the time taken for performing the Gridlet
(job) grouping process, sending the Gridlets to the
resources, processing the Gridlets at the resources, and
receiving back the processed Gridlets. This time computation is
depicted in Figure 6. In the real world, the overhead time
for each job depends on the current network load and speed.
In the simulations, the processing overhead time
(OH_Time) of each Gridlet is set to 10 seconds.
The total processing cost is computed from the actual
CPU time taken for computing the Gridlets at each Grid
resource and the cost rate specified by that resource, as
summarized below:
Process_Cost = T * C, where
T = Total CPU Time for Gridlet execution, and
C = Cost per second of the resources.
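This cost model can be transcribed directly. A minimal sketch, with per-group CPU times passed in (in GridSim the CPU time comes back with each processed Gridlet):

```python
def process_cost(gridlet_cpu_times, cost_per_sec):
    """Total cost: each Gridlet (group) is charged at the rate of the
    resource that executed it, for the CPU time it actually used."""
    return sum(t * c for t, c in zip(gridlet_cpu_times, cost_per_sec))

# e.g. one group using 30 s of CPU on R1 (100 units/s) and one using
# 20 s on R2 (200 units/s):
print(process_cost([30, 20], [100, 200]))  # -> 7000
```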
4.3 Experiments, Results and Discussions
4.3.1 Experiment 1: Simulation with and
without Job Grouping
Simulations are conducted to analyse and compare two
scheduling algorithms, first-come-first-served and the
job grouping-based algorithm described in Section 3, in
terms of processing time and cost. Resources R1 through
R4 are used for these simulations.
Table 2 shows the results of the simulations with and
without the job grouping method, conducted with a
granularity size of 30 seconds and a Gridlet average MI
of 200. The simulations executed up to a maximum of 150
Gridlets. As depicted in Figure 7, the total processing
time and cost grow far more steeply for the simulations
without the job grouping method than for the simulations
with it.
When scheduling 25 Gridlets, the simulation with the job
grouping method groups the Gridlets into one group
according to resource R1's Total_RMI of 6000 MI (200 * 30).
Therefore, the total OH_Time is only 10 seconds and the
resulting total Process_Time is 64 seconds; the job
grouping, scheduling and deploying activities take up to
54 seconds. On the other hand, the simulation without job
grouping sends all the Gridlets individually to resource
R1, and the total OH_Time of 250 seconds (25 * 10) leads
to a total Process_Time of 280 seconds. In this case, the
total Gridlet computation time (30 seconds) is much less
than the total communication time (250 seconds). Without
grouping, going from 25 to 100 Gridlets yields a massive
increase of 297% in total Process_Time, whereas with
grouping the rise is only 112.5%. As the number of
Gridlets grows, the total Process_Time increases linearly
for the simulation without job grouping, since the total
communication time grows with the number of Gridlets.
In the simulation with grouping, the communication time
remains constant and the major contribution to the total
Process_Time comes from the Gridlet computation time at
the resources. With 150 Gridlets, four Gridlet groups are
created, and each resource receives one Gridlet group.
Here, only 1.48% of the total Process_Time is spent on
communication, whereas in the simulation without
grouping, 90.3% of the total Process_Time is spent on
communication.
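These figures can be approximated with a back-of-the-envelope model. This is a sketch, not the simulator: it ignores grouping and queueing time, which is why it gives roughly 275 s rather than the observed 280 s for 25 ungrouped Gridlets:

```python
def time_without_grouping(n_jobs, avg_mi, mips, oh_time):
    """Each job pays its own dispatch overhead; computation is serial
    on a single resource of the given MIPS rating."""
    communication = n_jobs * oh_time        # 25 * 10 = 250 s
    computation = n_jobs * avg_mi / mips    # 25 * 200 / 200 = 25 s
    return communication + computation

print(time_without_grouping(25, 200, 200, 10))  # -> 275.0
```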
Number of    With Grouping                               Without Grouping
Gridlets     Groups  Process_Time (sec)  Process_Cost    Process_Time (sec)  Process_Cost
25           1       64                  4979            280                 9333
50           2       82                  15992           561                 38946
75           3       99                  35904           838                 73485
100          4       136                 55332           1112                97741
125          4       186                 72332           1388                115673
150          4       270                 90124           1662                134843
A_MI: 200; D_%: 20%; G_Size: 30 sec; R_MIPS: 200, 160, 210, 480; OH_Time: 10 sec
Table 2: Simulation with and without job grouping for average MI of 200 and granularity size of 30 seconds
Figure 6: Processing time. The total processing overhead time is the sum of the processing overhead times of Grouped_Gridlets 0 through N; the total processing time is the Gridlet grouping time, plus the time taken to submit all the groups to the resources, plus the total processing overhead time, plus the Gridlet processing time, plus the time taken to receive all the processed Gridlets.
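The time composition depicted in Figure 6 can be written out directly (a sketch; the argument names are mine):

```python
def total_processing_time(grouping_time, submit_time, oh_times,
                          gridlet_processing_time, receive_time):
    """Figure 6 composition: the per-group overheads plus the grouping,
    submission, computation and collection phases."""
    return (grouping_time + submit_time + sum(oh_times)
            + gridlet_processing_time + receive_time)

# e.g. two groups with 10 s overhead each (OH_Time = 10 s):
print(total_processing_time(1, 2, [10, 10], 50, 2))  # -> 75
```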
In terms of Process_Cost, the time each Gridlet spends at
the Grid resource is taken into account when computing
the total Process_Cost. In the simulation with job
grouping, only a small number of Gridlet groups are sent
to each resource, so the total overhead time is reduced.
In the simulation without job grouping, each small-scale
Gridlet incurs a small amount of overhead time at the
Grid resources, and the total overhead incurred by all the
Gridlets leads to a higher processing cost. For example,
processing 25 Gridlets individually at the Grid resources
costs 9333 units in total, whereas the simulation with job
grouping reduces this cost to 4979 units.
4.3.2 Experiment 2: Simulation of Different
Granularity Sizes with Job Scheduling
Simulations are conducted using different granularity
sizes to examine the total time and cost taken to execute
100 Gridlets on the Grid. Resources R1 through R7 are
used for these simulations.
Table 3 and Figure 8 depict the results gained from
simulations carried out on 100 Gridlets of 200 average
MI using different granularity sizes. Table 4 and Figure 9
show the processing load at each Grid resource when
different granularity sizes are used. The term 'Gridlet
Computation Time' in Table 4 refers to the total time
taken by each resource to compute its assigned Gridlet
groups; communication time is not included in this
computation time.
Figure 7: Processing time (a) and cost (b) for executing up to 150 Gridlets of 200 average MI within the granularity size of 30 seconds, for scheduling with and without task grouping.
Granularity Size (sec)   10      20      30      40      50      60
Process_Time (sec)       160     196     136     120     135     143
Process_Cost             61231   60073   55333   48179   38878   31890
Number of Groups         7       4       4       3       3       2
Gridlets: 100; A_MI: 200; D_%: 20%; OH_Time: 10 sec; Resources: R1-R7
Table 3: Simulation with job grouping for different granularity sizes
Figure 8: (a) Processing time and (b) cost for executing 100 Gridlets of 200 average MI using different granularity sizes.
From the simulation, it is observed that the total
Process_Time for a granularity size of 10 seconds is less
than that for a granularity size of 20 seconds. When the
granularity size is 10 seconds, seven job groups are
created (from 100 user jobs) and each resource computes
one job group of almost balanced MI. Since the Gridlet
computations at the Grid resources are done in parallel
and each resource has a light processing load (balanced
Gridlet MI), all the Gridlet groups can be computed
rapidly, in 86 seconds.
In the case of a granularity size of 20 seconds, four
Gridlet groups are created and 44% of the total Gridlet
MI is scheduled to be computed at resource R4, since it
can support up to 9600 MI. The average Gridlet MI share
at the other resources is about 18.7%. Therefore, R4
spends more time computing its Gridlet group, which
leads to a higher total Process_Time.
For a granularity size of 30 seconds, four Gridlet groups
are produced and resource R3 receives the most MI,
about 30.6% of the total. The MI scheduled to the
resources does not differ as much as in the previous case,
so all the resources can complete the Gridlet computation
in 91 seconds.
The minimum Process_Time is achieved when the
granularity size is 40 seconds. The Gridlet computation
time is the same as for the granularity size of 10 seconds,
but less communication time (30 seconds) is needed to
deal with only three Gridlet groups.
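The number-of-groups row of Table 3 can be reproduced with a rough estimate, assuming the roughly 20000 MI of total work (100 Gridlets of 200 average MI) fills the resources' Total_RMI capacities in order. The resource order and MIPS values are taken from Table 1:

```python
def estimate_groups(total_mi, resource_mips, g_size):
    """Estimate how many job groups a granularity size yields: keep
    filling resources (Total_RMI = MIPS * g_size) until no work remains."""
    groups, remaining = 0, total_mi
    for mips in resource_mips:
        if remaining <= 0:
            break
        remaining -= mips * g_size
        groups += 1
    return groups

r = [200, 160, 210, 480, 270, 390, 540]   # R1-R7 from Table 1
print([estimate_groups(20000, r, g) for g in (10, 20, 30, 40, 50, 60)])
# -> [7, 4, 4, 3, 3, 2], matching the Number of Groups row of Table 3
```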
In terms of Process_Cost, the resulting cost depends
highly on the cost per second specified at each resource
and the total Gridlet MI assigned to it. In the simulations,
the cost per second of resources R3 (300 units) and R7
(320 units) is higher than that of the other resources, so
involving these resources in Gridlet computation
increases the total Process_Cost; e.g. all the resources are
used for Gridlet computation when the granularity size is
10 seconds, which costs 61231 units. When the
granularity size is 20 seconds, R7 is not engaged in the
computation; however, assigning a large amount of
Gridlet MI (8756 MI) to R4 results in a high total
Process_Cost of 60073 units. When the granularity size
is 30 seconds, a balanced distribution of the MI among
four resources reduces the total Process_Cost. Another
point is that the total MI assigned to resource R1
increases as the granularity size increases; since R1's cost
per second is very low (100 units), the total Process_Cost
decreases gradually for granularity sizes of 40, 50 and 60
seconds.
From the experiments, it is clear that the job grouping
method decreases the total processing time and cost.
However, assigning a large amount of Gridlet MI to one
particular resource will increase the total processing time
and cost. Therefore, during the job grouping activity, a
balance should be struck between the total number of
groups created by the job grouping
Granularity   Processing load (MI) per resource (Resource/MIPS)                 Gridlet Computation
Size (sec)    R1/200   R2/160   R3/210   R4/480   R5/270   R6/390   R7/540      Time (sec)
10            1995     1549     2094     4771     2509     3761     3217        86
20            3904     3126     4108     8756     -        -        -           152
30            5809     4775     6094     3217     -        -        -           91
40            7843     6337     5715     -        -        -        -           86
50            9802     7940     2153     -        -        -        -           97
60            11898    7997     -        -        -        -        -           117
Table 4: Processing load at the grid resources for different granularity sizes
Figure 9: Processing load (MI) at the grid resources R1-R7 for different granularity sizes.
method, the resources' cost per second, and the MI
distribution among the selected resources.
5 Conclusion and Future Work
The job grouping strategy increases performance, in
terms of lower processing time and cost, when applied to
a Grid application with a large number of jobs where
each user job has small processing requirements. Sending
and receiving each small job individually to and from the
resources increases the total communication time and
cost. In addition, the total processing capability of a
resource may not be fully utilized each time it receives a
small-scale job. The job grouping strategy aims to reduce
the impact of these drawbacks on the total processing
time and cost: it groups the small-scale user jobs into a
few job groups according to the processing capabilities of
the available Grid resources, which reduces the
communication overhead time and processing overhead
time of each user job.
Future work would involve developing a more
comprehensive job grouping-based scheduling system
that takes into account the QoS (Quality of Service)
requirements of each user job, as mentioned by
Abramson, Buyya and Giddy (2002), before performing
the grouping method. In addition, each resource should
be examined for its current processing load, and jobs
should be grouped according to its available processing
capability. Finally, grouping jobs that use common data
for execution should also be considered.
6 References
Abramson, D., Buyya, R. and Giddy, J. (2002): A
Computational Economy for Grid Computing, and its
Implementation in the Nimrod-G Resource Broker.
Journal of Future Generation Computer Systems
(FGCS), 18(8): 1061-1074.
Berman, F., Fox, G. and Hey, A. (2003): Grid Computing
– Making the Global Infrastructure a Reality. London,
Wiley.
Buyya, R. and Murshed, M. (2002): GridSim: A Toolkit
for the Modeling and Simulation of Distributed
Resource Management and Scheduling for Grid
Computing. Journal of Concurrency and Computation:
Practice and Experience (CCPE), 14(13-15):1175-
1220.
Buyya, R., Date, S., Mizuno-Matsumoto, Y., Venugopal,
S. and Abramson, D. (2004): Neuroscience
Instrumentation and Distributed Analysis of Brain
Activity Data: A Case for eScience on Global Grids.
Journal of Concurrency and Computation: Practice
and Experience, (accepted in Jan. 2004 and in print).
Foster, I. and Kesselman, C. (1999): The Grid: Blueprint
for a New Computing Infrastructure. San Francisco,
Morgan Kaufmann Publisher, Inc.
Gerasoulis, A. and Yang, T. (1992): A comparison of
clustering heuristics for scheduling directed graphs on
multiprocessors. Journal of Parallel and Distributed
Computing, 16(4):276-291.
Gray, J. (2003): Distributed Computing Economics.
Newsletter of the IEEE Task Force on Cluster
Computing, 5(1), July/August.
James, H. A., Hawick, K. A. and Coddington, P. D.
(1999): Scheduling Independent Tasks on
Metacomputing Systems. Proc. of Parallel and
Distributed Computing (PDCS ’99), Fort Lauderdale,
USA.
Logendran, R., Carson, S. and Hanson, E. (2002): Group
Scheduling Problems in Flexible Flow Shops. Proc. of
the Annual Conference of Institute of Industrial
Engineers, USA.
Radulescu, A. and van Gemund, A. (1998): GLB: A
Low-Cost Scheduling Algorithm for Distributed-
Memory Architectures. Proc. of the Fifth International
Conference on High Performance Computing (HiPC
'98), Madras, India, pp. 294-301, IEEE Press.
Sarkar, V. (1989): Partitioning and Scheduling Parallel
Programs for Execution on Multiprocessors,
Cambridge, MIT Press.
Yang, T. and Gerasoulis, A. (1994): DSC: Scheduling
Parallel Tasks on an Unbounded Number of
Processors. IEEE Transactions on Parallel and
Distributed Systems, 5(9):951-967.