Dynamic Three Stages Task Scheduling Algorithm on Cloud Computing
Naglaa Sayed Abdelrehem, Fathi Ahmed Amer, Imane Aly Saroit,
Department of Information Technology, Faculty of Computer and Artificial Intelligence, Cairo University, Cairo, Egypt.
A Survey on Service Request Scheduling in Cloud Based Architecture (IJSRD)
Cloud computing has become quite popular nowadays. It allows users to store and process their data in third-party data centers, and in today's IT sector almost everything is run and managed in a cloud environment. As the number of users grows day by day, faster and more efficient processing of large volumes of data and resources is needed at all levels, so resource management attains prime importance. Cloud computing raises various issues such as load balancing and computation traffic. Job scheduling is one solution to these problems: it reduces waiting time and maximizes quality of service, and in job scheduling "priority" is an important factor. This paper discusses various scheduling algorithms and reviews a dynamic priority scheduling algorithm.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZED RESPONSE TIME (IJCCSA)
To improve the performance of cloud computing, many parameters and issues must be considered, including resource allocation, resource responsiveness, connectivity to resources, discovery of unused resources, resource mapping, and resource planning. Resource planning can be based on many kinds of parameters, and service response time is one of them. Users can easily observe the response time of their requests, so it has become an important QoS metric. Explored further, response time can drive more efficient resource distribution and load balancing, which is one of the most promising research directions for improving cloud technology. This paper therefore proposes a load balancing algorithm based on the response time of requests in the cloud, named APRA (ARIMA Prediction of Response time Algorithm). The main idea is to use ARIMA models to predict the upcoming response time, giving a better way of resolving resource allocation against a threshold value. The experimental results are promising and show that prediction is a valuable direction for load balancing.
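The APRA idea can be illustrated in miniature. The paper's actual model is ARIMA (normally fitted with a statistics library); the sketch below substitutes a naive drift-based predictor and a hypothetical threshold rule, so it only shows the shape of predict-then-route:

```python
# Illustrative sketch only: a naive drift-based predictor stands in for
# ARIMA, and the threshold rule is one reading of "resolving resource
# allocation with threshold value".

def predict_next(history):
    """Predict the next response time as the last value plus the mean drift."""
    if len(history) < 2:
        return float(history[-1])
    drifts = [b - a for a, b in zip(history, history[1:])]
    return history[-1] + sum(drifts) / len(drifts)

def pick_server(histories, threshold):
    """Prefer servers whose predicted response time stays under threshold;
    fall back to the lowest prediction if none qualifies."""
    preds = {s: predict_next(h) for s, h in histories.items()}
    under = {s: p for s, p in preds.items() if p <= threshold}
    pool = under if under else preds
    return min(pool, key=pool.get)

choice = pick_server(
    {"vm1": [120, 130, 145, 160],   # rising response times (ms)
     "vm2": [100, 98, 101, 99]},    # stable
    threshold=150,
)
```

Here "vm1" is trending upward and its prediction exceeds the threshold, so new requests are routed to "vm2".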
Deadline and Suffrage Aware Task Scheduling Approach for Cloud Environment (IRJET Journal)
The document proposes a deadline- and suffrage-aware task scheduling approach for cloud environments. It discusses limitations of existing approaches that can cause system imbalance. The proposed approach considers both task deadlines and priorities assigned by user votes ("suffrage") to schedule tasks. Tested in the CloudSim simulator, it outperforms the basic min-min approach in reducing completion times and improving resource utilization and provider profits while still meeting task deadlines.
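A minimal sketch of the deadline side of this approach (the suffrage vote weighting is not detailed here, so the code below only layers a deadline filter on plain min-min; unit-speed VMs, where finish time is ready time plus task length, are an assumption):

```python
# Deadline-filtered min-min sketch: repeatedly pick the unscheduled task
# whose best feasible completion time is smallest, skipping assignments
# that would miss a deadline.

def deadline_min_min(tasks, vm_ready):
    """tasks: {name: (length, deadline)}; vm_ready: {vm: ready_time}.
    Returns {task: (vm, finish_time)} for tasks that can meet deadlines."""
    schedule, pending = {}, dict(tasks)
    while pending:
        best = None                      # (finish, task, vm)
        for t, (length, deadline) in pending.items():
            for vm, ready in vm_ready.items():
                finish = ready + length
                if finish <= deadline and (best is None or finish < best[0]):
                    best = (finish, t, vm)
        if best is None:                 # nothing feasible remains
            break
        finish, t, vm = best
        schedule[t] = (vm, finish)
        vm_ready[vm] = finish
        del pending[t]
    return schedule

plan = deadline_min_min({"a": (4, 10), "b": (2, 5)}, {"v1": 0, "v2": 0})
```

With these inputs the short task "b" is placed first (min-min's signature behavior), and "a" then lands on the other VM.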
A HYPER-HEURISTIC METHOD FOR SCHEDULING THE JOBS IN CLOUD ENVIRONMENT (IEIJ Journal)
The document proposes a hyper-heuristic method for scheduling jobs in a cloud environment. It combines two low-level heuristics - Ant Colony Optimization and Particle Swarm Optimization - and uses two operators, intensification and diversity revealing, to select the heuristics. It also uses a conditional revealing operator to identify job failures while allocating resources. The hyper-heuristic aims to achieve better results than individual heuristics in terms of lower makespan time.
This document proposes ANGEL, an agent-based scheduling algorithm for real-time tasks in virtualized cloud environments. It employs a bidirectional announcement-bidding mechanism between agents to allocate tasks and dynamically provision resources. The mechanism consists of three phases: basic matching, forward announcement-bidding, and backward announcement-bidding. ANGEL also dynamically adds virtual machines to improve schedulability. Extensive experiments show ANGEL efficiently solves real-time task scheduling in clouds.
International Journal of Engineering Research and Development (IJERD Editor)
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture Engineering,
Aerospace Engineering.
The cloud environment is an appropriate platform for running a huge range of scientific applications. However, the major challenge in existing workflows is to assign resources to tasks in a well-organized way, so that finishing time is reduced and the load on every virtual machine is balanced. To overcome this problem, GA_MINMIN has been proposed, combining the features of the GA and MIN-MIN scheduling algorithms. The algorithm is fundamentally a three-layer structure. At the first level, a genetic algorithm distributes resources in an optimized way. At the second level, the execution order of the tasks is resolved based on their size, with the help of MIN-MIN. At the third level, all virtual machines run in parallel so that task response time is decreased further. The proposed algorithm has been evaluated in a simulation environment.
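As a rough illustration of the GA layer only (the paper's encoding, operators, and MIN-MIN coupling are not given here, so the chromosome shape, tournament selection, and mutation scheme below are all assumptions): chromosomes map tasks to VMs and fitness is makespan, lower being better.

```python
import random

def makespan(assign, sizes, n_vms):
    """Makespan of an assignment: the load of the most loaded VM."""
    loads = [0.0] * n_vms
    for task, vm in enumerate(assign):
        loads[vm] += sizes[task]
    return max(loads)

def ga_schedule(sizes, n_vms, pop=30, gens=60, seed=1):
    """Toy GA: tournament selection of two plus single-point mutation."""
    rng = random.Random(seed)
    popn = [[rng.randrange(n_vms) for _ in sizes] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a, b = rng.sample(popn, 2)     # tournament of two
            winner = min(a, b, key=lambda c: makespan(c, sizes, n_vms))
            child = winner[:]
            child[rng.randrange(len(sizes))] = rng.randrange(n_vms)  # mutate
            nxt.append(child)
        popn = nxt
    return min(popn, key=lambda c: makespan(c, sizes, n_vms))

best = ga_schedule([5, 3, 8, 2, 7, 4], n_vms=2)
```

With total work 29 split over two VMs, a good assignment has makespan close to half the total.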
Energy Efficient Heuristic Based Job Scheduling Algorithms in Cloud Computing (IOSR Journal of Computer Engineering)
The cloud computing environment provides a cost-efficient solution to customers through resource provisioning and flexible, customized configuration. Interest in cloud computing is growing around the globe at a very fast pace because it provides a scalable virtualized infrastructure through which extensive computing capabilities can be used by cloud clients to execute their submitted jobs. It is a challenge for the cloud infrastructure to manage and schedule the jobs originated by different cloud users onto the available resources in a way that strengthens the overall performance of the system; as the number of users increases, job scheduling becomes an intensive task. Energy-efficient job scheduling is one constructive solution to streamline resource utilization and reduce energy consumption. Although several scheduling algorithms are available, this paper presents job scheduling based on two heuristic approaches, Efficient MQS (multi-queue job scheduling) and ACO (ant colony optimization), and evaluates the effectiveness of both by considering energy consumption and time in cloud computing.
Cloud computing is the fastest emerging technology and a buzzword in the IT domain. It offers distinct services and applications and focuses on providing sustainable, reliable, scalable, and virtualized resources to its consumers. The main aim of cloud computing is to enhance the use of distributed resources to achieve higher throughput and resource utilization in large-scale computation problems. Scheduling affects the efficiency of the cloud and plays a significant role in creating a high-performance environment; the Quality of Service (QoS) requirements of the user application define how resources are scheduled. Many researchers have tried to solve these scheduling problems using different QoS-based scheduling techniques. This paper presents a detailed analysis of resource scheduling methodology, discussing different types of scheduling based on soft computing techniques, their comparisons, benefits, and results. The major findings help researchers choose a suitable approach for scheduling users' applications given their QoS requirements.
IRJET: Scheduling of Independent Tasks over Virtual Machines on Computati... (IRJET Journal)
This document discusses scheduling independent tasks over virtual machines in a cloud computing environment. It compares the performance of four scheduling algorithms: First Come First Serve (FCFS), Shortest Job First (SJF), Round Robin, and Particle Swarm Optimization (PSO). The algorithms are tested on virtual machines with 1, 2, and 4 CPU cores. PSO consistently achieves the shortest makespan (task completion time). While FCFS, SJF, and Round Robin perform similarly on single-core and dual-core VMs, Round Robin's performance degrades on quad-core VMs likely due to core collision issues. Overall, PSO schedules tasks most efficiently across all virtual machine configurations.
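SJF's advantage over FCFS shows up most clearly in average waiting time on a single queue (on one queue, makespan is the same for any order). A tiny illustrative calculation, not taken from the paper's experiment:

```python
def avg_waiting_time(order):
    """Average waiting time of jobs run back-to-back on one core."""
    waited, elapsed = 0.0, 0.0
    for length in order:
        waited += elapsed      # this job waited for everything before it
        elapsed += length
    return waited / len(order)

bursts = [7, 3, 9, 2, 5, 4]
fcfs = avg_waiting_time(bursts)           # arrival order
sjf = avg_waiting_time(sorted(bursts))    # shortest job first
```

Running the short jobs first strictly lowers the average wait, which is why SJF is the classical single-queue baseline to beat.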
LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTING (IJCCSA)
Load balancing techniques in cloud computing can be applied at different levels. There are two main levels: load balancing on physical servers and load balancing on virtual servers. Load balancing on a physical server is the policy of allocating physical servers to virtual machines, while load balancing on virtual machines is the policy of allocating resources from the physical server to the virtual machines for the tasks or applications running on them. Each kind of user request on cloud computing, whether SaaS (Software as a Service), PaaS (Platform as a Service), or IaaS (Infrastructure as a Service), needs a proper load balancing policy. When receiving tasks, the cloud data center has to allocate them efficiently so that response time is minimized and congestion is avoided. Load balancing should also be performed between different data centers in the cloud to ensure minimum transfer time. In this paper, we propose a virtual-machine-level load balancing algorithm that aims to improve the average response time and average processing time of the system in the cloud environment. The proposed algorithm is compared to the Avoid Deadlocks [5], Max-min [6], and Throttled [8] algorithms, and the results show that our algorithm achieves optimized response times.
Cloud Computing: Review over Various Scheduling Algorithms (IJEEE)
Cloud computing has taken an important position in the field of research as well as in government organisations. Cloud computing uses virtual network technology to provide computer resources to end users and customers. Due to the complexity of the computing environment, the use of elaborate logic and task scheduler algorithms increases, which results in costly operation of the cloud network. Researchers are attempting to build job scheduling algorithms that are compatible with and applicable in the cloud computing environment. In this paper, we review research work recently proposed on the basis of energy-saving scheduling techniques. We also study various scheduling algorithms and the issues related to them in cloud computing.
This document discusses scheduling in cloud computing. It proposes a priority-based scheduling protocol to improve resource utilization, server performance, and minimize makespan. The protocol assigns priorities to jobs, allocates jobs to processors based on completion time, and processes jobs in parallel queues to efficiently schedule jobs in cloud computing. Future work includes analyzing time complexity and completion times through simulation to validate the protocol's efficiency.
A Hybrid Approach for Scheduling Applications in Cloud Computing Environment (IJECE, IAES)
Cloud computing plays an important role in our daily life. It has a direct and positive impact on sharing and updating data, knowledge, storage, and scientific resources between various regions. Cloud computing performance is heavily based on the job scheduling algorithms used for queue management in modern scientific applications, and researchers consider cloud computing a popular platform for new deployments. Scheduling algorithms help in designing efficient queues in the cloud and play a vital role in reducing waiting and processing time. A novel job scheduling algorithm is proposed in this paper to enhance the performance of cloud computing and reduce the delay of jobs waiting in the queue. The proposed algorithm tries to avoid some significant challenges that hold back the development of cloud computing applications, offering a smart scheduling technique to improve processing performance. Our experimental results show that the proposed job scheduling algorithm achieves outstanding improvement rates, with a reduction in the waiting time of jobs in the queue.
IRJET: Time and Resource Efficient Task Scheduling in Cloud Computing Environ... (IRJET Journal)
This document summarizes a research paper that proposes a Task Based Allocation (TBA) algorithm to efficiently schedule tasks in a cloud computing environment. The algorithm aims to minimize makespan (completion time of all tasks) and maximize resource utilization. It first generates an Expected Time to Complete (ETC) matrix that estimates the time each task will take on different virtual machines. It then sorts tasks by length and allocates each task to the VM that minimizes its completion time, updating the VM wait times. The algorithm is evaluated using CloudSim simulation and is shown to reduce makespan, execution time and costs compared to random and first-come, first-served scheduling approaches.
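The TBA flow described above can be sketched directly. The VM speeds and task lengths below are illustrative, as are two assumptions: completion time is VM ready time plus the ETC entry, and tasks are processed longest-first (the summary says "sorts tasks by length" without giving the direction):

```python
# TBA sketch: build an ETC (Expected Time to Complete) matrix, walk tasks
# in sorted order, and give each to the VM with the earliest finish time.

def tba_schedule(lengths, mips):
    """lengths: task lengths (instructions); mips: VM speeds.
    Returns ({task_index: vm_index}, makespan)."""
    etc = [[t / m for m in mips] for t in lengths]   # ETC matrix
    ready = [0.0] * len(mips)                        # VM wait times
    placement = {}
    order = sorted(range(len(lengths)),
                   key=lambda i: lengths[i], reverse=True)
    for i in order:
        vm = min(range(len(mips)), key=lambda v: ready[v] + etc[i][v])
        ready[vm] += etc[i][vm]
        placement[i] = vm
    return placement, max(ready)

placement, span = tba_schedule([400, 200, 600], [100, 200])
```

The longest task lands on the fast VM first, and the final `max(ready)` is the makespan the paper minimizes.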
Cost-Efficient Task Scheduling with Ant Colony Algorithm for Executing Large ... (Editor, IJCATR)
This document summarizes a research paper that proposes an optimized ant colony optimization (ACO) algorithm for task scheduling in cloud computing. The goal is to minimize makespan and cost while improving fairness and load balancing. The ACO algorithm is adapted to prioritize and fairly allocate tasks to machines based on their performance. Simulations show the proposed ACO algorithm reduces makespan by 80% compared to Berger and greedy algorithms. It also increases processor utilization and balances loads across machines better than the other algorithms. The researchers conclude the optimized ACO approach improves resource usage and user satisfaction for task scheduling in cloud computing.
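A miniature ACO for task-to-VM assignment may help make the idea concrete. The ant count, evaporation rate, and the load-based heuristic term below are generic choices, not the paper's tuned settings:

```python
import random

def aco_schedule(lengths, n_vms, ants=15, iters=40, rho=0.3, seed=7):
    """Assign tasks (by length) to VMs, minimizing makespan: ants build
    assignments guided by pheromone, and the best assignment found so far
    deposits more pheromone each iteration."""
    rng = random.Random(seed)
    tau = [[1.0] * n_vms for _ in lengths]        # pheromone per (task, vm)
    best, best_span = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            loads, assign = [0.0] * n_vms, []
            for t, length in enumerate(lengths):
                # pheromone times a simple load-balancing heuristic
                weights = [tau[t][v] / (1.0 + loads[v]) for v in range(n_vms)]
                v = rng.choices(range(n_vms), weights=weights)[0]
                assign.append(v)
                loads[v] += length
            span = max(loads)
            if span < best_span:
                best, best_span = assign, span
        for t, v in enumerate(best):              # evaporate, reinforce best
            for u in range(n_vms):
                tau[t][u] *= 1.0 - rho
            tau[t][v] += 1.0 / best_span
    return best, best_span

best, span = aco_schedule([4, 7, 2, 5, 6], n_vms=2)
```

Reinforcing only the best-so-far tour is one common ACO variant; the paper's fairness and cost terms would enter through the heuristic weights.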
Job Resource Ratio Based Priority Driven Scheduling in Cloud Computing (ijsrd.com)
Cloud computing is an emerging technology in the area of parallel and distributed computing. Clouds consist of a collection of virtualized resources, including both computational and storage facilities, that can be provisioned on demand depending on users' needs. Job scheduling is one of the major activities performed in all computing environments, and in cloud computing it is performed to gain maximum profit. In this paper we propose a new scheduling algorithm based on priority, where priority is based on the ratio of job to resource. To calculate the priority of a job we use the analytic hierarchy process. We also compare the results with other algorithms such as first-come-first-serve and round robin.
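The job-to-resource ratio priority can be illustrated very simply. This is a hypothetical reading: priority is taken as job size divided by the capacity of the resource it requests, and the AHP pairwise-comparison step the paper uses is omitted entirely:

```python
# Hypothetical ratio-based priority: bigger job relative to its resource
# means higher priority (the paper's AHP weighting is not reproduced here).

def priority_order(jobs):
    """jobs: {name: (job_size, resource_capacity)} -> names, highest ratio first."""
    return sorted(jobs, key=lambda j: jobs[j][0] / jobs[j][1], reverse=True)

order = priority_order({"a": (10, 2), "b": (4, 4), "c": (9, 1)})
```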
Dynamic Cloud Partitioning and Load Balancing in Cloud (Shyam Hajare)
Cloud computing is an emerging and transformational paradigm in the field of information technology. It mostly focuses on providing various services on demand; resource allocation and secure data storage are some of them. Storing huge amounts of data and accessing data from such metadata is a new challenge. Distributing and balancing the load over a cloud using cloud partitioning can ease the situation. Implementing load balancing that considers static as well as dynamic parameters can improve the performance of the cloud service provider and increase user satisfaction. Implementing the model can provide a dynamic way of resource selection depending on the situation of the cloud environment at the time of accessing cloud provisions, based on cloud partitioning. This model can provide an effective load balancing algorithm over the cloud environment, better refresh-time methods, and better load status evaluation methods.
A Survey of Various Scheduling Algorithms in Cloud Computing Environment (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
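The bind-or-migrate rule in point 2) can be sketched as a threshold check. The threshold value, the load measure, and the fallback choice of the least-loaded data center are assumptions; the paper's exact saturation test is not given above:

```python
# Threshold-based bind-or-migrate sketch (values illustrative).

def place_task(task_len, dc_loads, capacity, threshold=0.8, local=0):
    """Bind to the local data center while its load stays under the
    saturation threshold; otherwise migrate to the least-loaded one."""
    if dc_loads[local] + task_len <= threshold * capacity:
        target = local
    else:
        target = min(range(len(dc_loads)), key=lambda d: dc_loads[d])
    dc_loads[target] += task_len
    return target
```

A real implementation would also weigh the inter-datacenter bandwidth cost of a migration, which is the paper's main addition.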
VIRTUAL MACHINE SCHEDULING IN CLOUD COMPUTING ENVIRONMENT (IJMPICT)
Cloud computing is an upcoming technology in distributed computing that facilitates a pay-per-use model according to user demand and need. The cloud incorporates a set of virtual machines comprising both storage and computational facilities. The fundamental goal of cloud computing is to offer effective access to remote and geographically distributed resources. The cloud is growing every day and experiences numerous problems, such as scheduling. Scheduling refers to a collection of policies to regulate the order in which tasks are executed by a computer system. A good scheduler derives its scheduling plan from the type of work and the varying environment. This paper demonstrates a generalized precedence algorithm for efficient execution of work and contrasts it with Round Robin and FCFS scheduling. The algorithm was tested with the CloudSim toolkit, and the outcome shows that it performs well compared to some customary scheduling algorithms.
This document summarizes and compares various scheduling algorithms used in cloud computing environments. It begins with an introduction to cloud computing and the need for scheduling algorithms in cloud environments. It then describes several existing scheduling algorithms, including compromised-time-cost scheduling, particle swarm optimization-based heuristic, improved cost-based algorithm, resource-aware scheduling, innovative transaction intensive cost-constraint scheduling, scalable heterogeneous earliest-finish-time algorithm, and multiple QoS constrained scheduling strategy of multi-workflows. These algorithms aim to optimize metrics such as execution time, cost, deadline, load balancing, and quality of service. The document concludes by comparing the different scheduling strategies.
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
Service Request Scheduling in Cloud Computing using Meta-Heuristic Technique:... (IRJET Journal)
This document discusses using the Teaching Learning Based Optimization (TLBO) meta-heuristic technique for service request scheduling between users and cloud service providers. TLBO is a nature-inspired algorithm that mimics the teacher-student learning process. It is compared to other meta-heuristic algorithms like Genetic Algorithm. The key steps of TLBO involve initializing a population, evaluating fitness, selecting the best solution as teacher, and updating the population through teacher and learner phases until termination criteria is met. The document proposes using number of users and virtual machines as parameters for TLBO scheduling in cloud computing. MATLAB simulation results show the initial and final iterations converging to an optimal scheduling solution.
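The teacher and learner phases can be sketched on a toy continuous objective. A sphere function stands in for the scheduling fitness here, and the population size, bounds, and teaching factor follow the generic TLBO formulation rather than the paper's settings:

```python
import random

def tlbo(fitness, dim, bounds, pop=20, iters=50, seed=3):
    """Generic TLBO sketch: teacher phase pulls learners toward the best
    solution minus the class mean; learner phase lets pairs of learners
    move toward whichever of them is fitter. Minimizes `fitness`."""
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda x: max(lo, min(hi, x))
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(iters):
        teacher = min(X, key=fitness)
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        for i in range(pop):
            tf = rng.choice((1, 2))                     # teaching factor
            cand = [clip(X[i][d] + rng.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]                # teacher phase
            if fitness(cand) < fitness(X[i]):
                X[i] = cand
            j = rng.randrange(pop)                      # learner phase
            if j != i:
                sign = 1 if fitness(X[i]) < fitness(X[j]) else -1
                cand = [clip(X[i][d] + sign * rng.random() * (X[i][d] - X[j][d]))
                        for d in range(dim)]
                if fitness(cand) < fitness(X[i]):
                    X[i] = cand
    return min(X, key=fitness)

best = tlbo(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```

For cloud scheduling, a solution vector would encode a request-to-VM mapping and the fitness would be makespan or cost; TLBO's appeal is that it has no algorithm-specific tuning parameters beyond population size and iterations.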
An Optimized Scientific Workflow Scheduling in Cloud Computing (Digvijay Shinde)
The document discusses optimizing scientific workflow scheduling in cloud computing. It begins with definitions of workflow and cloud computing. Workflow is a group of repeatable dependent tasks, while cloud computing provides applications and hardware resources over the Internet. There are three cloud service models: SaaS, PaaS, and IaaS. The document explores how to efficiently schedule workflows in the cloud to reduce makespan, cost, and energy consumption. It reviews different scheduling algorithms like FCFS, genetic algorithms, and discusses optimizing objectives like time and cost. The document provides a literature review comparing various workflow scheduling methods and algorithms. It concludes with discussing open issues and directions for future work in optimizing workflow scheduling for cloud computing.
Abstract: An efficient task scheduling method can meet users' requirements, improve resource utilization, and thereby increase the overall performance of the cloud computing environment. Cloud computing has new features such as flexibility and virtualization. In this paper we propose a two-level task scheduling method based on load balancing in cloud computing. This method meets the user's requirements and achieves high resource utilization, as simulation results in the CloudSim simulator demonstrate.
Keywords: cloud computing; task scheduling; virtualization.
Title: A Task Scheduling Algorithm in Cloud Computing
Author: Ali Bagherinia
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
Load Balancing in Auto Scaling Enabled Cloud Environments (neirew J)
Cloud computing is growing in popularity and has been continuously updated with improvements. Auto scaling is one such improvement, helping to maintain the availability of a customer's subscribed cloud system. How an auto scaling mechanism interacts with the many existing mechanisms of a cloud system is an issue that needs to be considered, because a new part added to a stable system normally comes with drawbacks. In this paper, we consider how existing load balancing and auto scaling impact each other. For this purpose, we have modeled a cloud system with an auto scaler and a load balancer and implemented simulations based on the constructed model. Based on the results of the computer simulations, we make a proposal about choosing load balancers for a subscribed cloud system with an auto scaling service.
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENT (IJCNC Journal)
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements which keep on varying. This dynamic cloud environment demands the necessity of complex algorithms to resolve the trouble of task allotment. The overall performance of cloud systems is rooted in the efficiency of task scheduling algorithms. The dynamic property of cloud systems makes it challenging to find an optimal solution satisfying all the evaluation metrics. The new approach is formulated on the Round Robin and the Shortest Job First algorithms. The Round Robin method reduces starvation, and the Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are incorporated to improve the makespan of user tasks.
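One plausible reading of this hybrid (the quantum value and the exact way the two policies are combined are assumptions, not taken from the paper) is to sort the ready queue by burst time and then serve it round-robin, so short jobs finish early while long jobs still make steady progress:

```python
from collections import deque

def hybrid_rr_sjf(bursts, quantum):
    """Sort tasks SJF, then serve them round-robin with a fixed quantum.
    Returns {task_index: completion_time}."""
    queue = deque(sorted(range(len(bursts)), key=lambda i: bursts[i]))
    remaining = list(bursts)
    clock, done = 0, {}
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] == 0:
            done[i] = clock
        else:
            queue.append(i)        # long job goes to the back, no starvation
    return done

finish = hybrid_rr_sjf([6, 2, 4], quantum=3)
```

The SJF ordering keeps average waiting time low, while the round-robin quantum bounds how long any single job can monopolize the processor.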
A HYPER-HEURISTIC METHOD FOR SCHEDULING THE JOBS IN CLOUD ENVIRONMENT - ieijjournal1
Currently cloud computing has turned into a promising technology: it provides flexible, service-oriented, online provisioning and storage of computing resources and users' information at low expense, within a dynamic framework, on a pay-per-use basis. In this technology the job scheduling problem is a critical issue, and scheduling plays a vital role in the well-organized management and handling of resources. This paper presents an improved Hyper-Heuristic Scheduling Approach to schedule resources, taking account of computation time and makespan, with two detection operators used to select the low-level heuristics automatically. The Conditional Revealing Algorithm (CRA) idea is applied to detect job failures while allocating resources. We believe the proposed hyper-heuristic achieves better results than the individual heuristics alone.
Cloud computing is the fastest emerging technology and a buzzword in the IT domain; it offers distinct services and applications and focuses on providing sustainable, reliable, scalable and virtualized resources to its consumers. The main aim of cloud computing is to enhance the use of distributed resources to achieve higher throughput and resource utilization in large-scale computation problems. Scheduling affects the efficiency of the cloud and plays a significant role in creating a high-performance environment. The Quality of Service (QoS) requirements of a user application define the scheduling of resources, and a number of researchers have tried to solve these scheduling problems using different QoS-based scheduling techniques. In this paper, a detailed analysis of resource scheduling methodology is presented; different types of scheduling based on soft computing techniques are discussed, along with their comparisons, benefits and results. The major findings of this paper help researchers decide on a suitable approach for scheduling users' applications considering their QoS requirements.
IRJET - Scheduling of Independent Tasks over Virtual Machines on Computati... - IRJET Journal
This document discusses scheduling independent tasks over virtual machines in a cloud computing environment. It compares the performance of four scheduling algorithms: First Come First Serve (FCFS), Shortest Job First (SJF), Round Robin, and Particle Swarm Optimization (PSO). The algorithms are tested on virtual machines with 1, 2, and 4 CPU cores. PSO consistently achieves the shortest makespan (task completion time). While FCFS, SJF, and Round Robin perform similarly on single-core and dual-core VMs, Round Robin's performance degrades on quad-core VMs likely due to core collision issues. Overall, PSO schedules tasks most efficiently across all virtual machine configurations.
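The makespan comparison above can be made concrete with a small list-scheduling sketch: tasks are placed, in a given order, on whichever core frees up earliest, and the makespan is the finish time of the last task. The PSO search itself is omitted here; the task lengths are illustrative assumptions. Note that SJF ordering optimizes average waiting time, not makespan, which is one reason metaheuristics such as PSO, which search over orderings directly, can do better.

```python
import heapq

def makespan(task_lengths, cores, order):
    """List-schedule tasks onto `cores` identical cores in the given
    order; each task goes to the core that frees up earliest.
    Returns the makespan (finish time of the last task)."""
    free_at = [0.0] * cores            # min-heap of core free times
    heapq.heapify(free_at)
    for t in order:
        start = heapq.heappop(free_at)
        heapq.heappush(free_at, start + task_lengths[t])
    return max(free_at)

tasks = [8, 2, 4, 6, 1]                # illustrative task lengths
fcfs = makespan(tasks, 2, range(len(tasks)))
sjf = makespan(tasks, 2, sorted(range(len(tasks)), key=lambda i: tasks[i]))
```

On this instance the two orderings give makespans of 12 (FCFS) and 13 (SJF) on two cores, illustrating that simple orderings behave similarly and neither is makespan-optimal.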
LOAD BALANCING ALGORITHM TO IMPROVE RESPONSE TIME ON CLOUD COMPUTING - ijccsa
Load balancing techniques in cloud computing can be applied at two main levels: on physical servers and
on virtual servers. Load balancing on a physical server is a policy for allocating physical servers to
virtual machines, while load balancing on virtual machines is a policy for allocating physical-server
resources to the tasks or applications running on those virtual machines. Depending on whether the user's
request concerns SaaS (Software as a Service), PaaS (Platform as a Service) or IaaS (Infrastructure as a
Service), an appropriate load balancing policy applies. When receiving tasks, the cloud data center has
to allocate them efficiently so that response time is minimized and congestion avoided. Load balancing
should also be performed between different data centers in the cloud to ensure minimum transfer time. In
this paper, we propose a virtual machine-level load balancing algorithm that aims to improve the average
response time and average processing time of the system in the cloud environment. The proposed algorithm
is compared to the Avoid Deadlocks [5], Maxmin [6] and Throttled [8] algorithms, and the results show
that our algorithm achieves better response times.
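A VM-level balancer of the kind described can be sketched as a greedy rule: send each incoming task to the VM that would finish it earliest, given its queued work and speed. This is a minimal sketch under assumed units (task length in instructions, speed in instructions per second), not the paper's algorithm.

```python
def assign_task(task_len, vm_loads, vm_speeds):
    """Pick the VM that would finish the new task earliest, i.e. the
    one minimizing (queued work + task length) / speed. Returns the
    chosen VM index and updates its queued load in place."""
    best = min(range(len(vm_loads)),
               key=lambda v: (vm_loads[v] + task_len) / vm_speeds[v])
    vm_loads[best] += task_len
    return best
```

For example, with loads `[10, 0]` and speeds `[2, 1]`, a task of length 4 goes to the idle slow VM (expected finish 4) rather than the busy fast one (expected finish 7).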
Cloud computing: Review over various scheduling algorithms - IJEEE
Cloud computing has taken an important position in the field of research as well as in government organisations. It uses virtual network technology to provide computing resources to end users and customers. Due to the complex computing environment, the use of high-level logic and task scheduler algorithms increases, which results in costly operation of the cloud network. Researchers are attempting to build job scheduling algorithms that are compatible with and applicable in the cloud computing environment. In this paper, we review research recently proposed on the basis of energy-saving scheduling techniques; we also study various scheduling algorithms and the issues related to them in cloud computing.
This document discusses scheduling in cloud computing. It proposes a priority-based scheduling protocol to improve resource utilization, server performance, and minimize makespan. The protocol assigns priorities to jobs, allocates jobs to processors based on completion time, and processes jobs in parallel queues to efficiently schedule jobs in cloud computing. Future work includes analyzing time complexity and completion times through simulation to validate the protocol's efficiency.
A hybrid approach for scheduling applications in cloud computing environment - IJECEIAES
Cloud computing plays an important role in our daily life. It has a direct, positive impact on sharing and updating data, knowledge, storage and scientific resources between various regions. Cloud computing performance is heavily based on the job scheduling algorithms used to manage queue waiting in modern scientific applications, and researchers consider cloud computing a popular platform for new deployments. These scheduling algorithms help design efficient queue lists in the cloud and play a vital role in reducing waiting and processing time. A novel job scheduling algorithm is proposed in this paper to enhance the performance of cloud computing and reduce the delay jobs spend waiting in the queue. The proposed algorithm tries to avoid some significant challenges that hinder the development of cloud computing applications; in short, a smart scheduling technique is proposed to improve processing performance in cloud applications. Our experimental results show that the proposed job scheduling algorithm achieves outstanding enhancement rates, with a reduction in the time jobs wait in the queue list.
IRJET - Time and Resource Efficient Task Scheduling in Cloud Computing Environ... - IRJET Journal
This document summarizes a research paper that proposes a Task Based Allocation (TBA) algorithm to efficiently schedule tasks in a cloud computing environment. The algorithm aims to minimize makespan (completion time of all tasks) and maximize resource utilization. It first generates an Expected Time to Complete (ETC) matrix that estimates the time each task will take on different virtual machines. It then sorts tasks by length and allocates each task to the VM that minimizes its completion time, updating the VM wait times. The algorithm is evaluated using CloudSim simulation and is shown to reduce makespan, execution time and costs compared to random and first-come, first-served scheduling approaches.
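The TBA steps above (build an ETC matrix, sort tasks by length, place each task on the VM with the earliest resulting finish time, updating VM wait times) can be sketched as follows. The sort direction (longest first) and the MIPS-based ETC formula are assumptions for illustration; the paper's exact details may differ.

```python
def tba_schedule(task_lengths, vm_mips):
    """Task Based Allocation sketch: build an ETC matrix (expected time
    of each task on each VM), sort tasks by length, and greedily place
    each task on the VM with the earliest resulting finish time."""
    etc = [[t / m for m in vm_mips] for t in task_lengths]  # ETC matrix
    wait = [0.0] * len(vm_mips)                             # VM ready times
    plan = {}
    for t in sorted(range(len(task_lengths)),
                    key=lambda i: task_lengths[i], reverse=True):
        v = min(range(len(vm_mips)), key=lambda v: wait[v] + etc[t][v])
        wait[v] += etc[t][v]
        plan[t] = v
    return plan, max(wait)             # task-to-VM mapping and makespan
```

With tasks `[6, 2, 4]` and VM speeds `[2, 1]`, the long task lands on the fast VM, the medium task on the slow one, and the small task fills the fast VM's remaining gap.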
Cost-Efficient Task Scheduling with Ant Colony Algorithm for Executing Large ... - Editor IJCATR
This document summarizes a research paper that proposes an optimized ant colony optimization (ACO) algorithm for task scheduling in cloud computing. The goal is to minimize makespan and cost while improving fairness and load balancing. The ACO algorithm is adapted to prioritize and fairly allocate tasks to machines based on their performance. Simulations show the proposed ACO algorithm reduces makespan by 80% compared to Berger and greedy algorithms. It also increases processor utilization and balances loads across machines better than the other algorithms. The researchers conclude the optimized ACO approach improves resource usage and user satisfaction for task scheduling in cloud computing.
Job Resource Ratio Based Priority Driven Scheduling in Cloud Computing - ijsrd.com
Cloud Computing is an emerging technology in the area of parallel and distributed computing. Clouds consist of a collection of virtualized resources, including both computational and storage facilities, that can be provisioned on demand depending on users' needs. Job scheduling is one of the major activities performed in all computing environments, and to efficiently improve the working of cloud computing environments it is performed so as to gain maximum profit. In this paper we propose a new scheduling algorithm based on priority, where the priority is based on the ratio of job to resource; to calculate a job's priority we use the analytic hierarchy process. We also compare our results with other algorithms such as first come first serve and round robin.
Dynamic Cloud Partitioning and Load Balancing in Cloud - Shyam Hajare
Cloud computing is an emerging and transformational paradigm in the field of information technology. It mostly focuses on providing various services on demand; resource allocation and secure data storage are some of them. Storing huge amounts of data and accessing it through such metadata is a new challenge. Distributing and balancing the load over a cloud using cloud partitioning can ease the situation, and implementing load balancing that considers static as well as dynamic parameters can improve the performance of the cloud service provider and user satisfaction. Implementing the model can provide a dynamic way of selecting resources depending on the situation of the cloud environment at the time cloud provisions are accessed, based on cloud partitioning. This model can provide an effective load balancing algorithm over the cloud environment, better refresh-time methods and better load status evaluation methods.
A survey of various scheduling algorithms in cloud computing environment - eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
1) The document proposes a bandwidth-aware virtual machine migration policy for cloud data centers that considers both the bandwidth and computing power of resources when scheduling tasks of varying sizes.
2) It presents an algorithm that binds tasks to virtual machines in the current data center if the load is below the saturation threshold, and migrates tasks to the next data center if the load is above the threshold, in order to minimize completion time.
3) Experimental results show that the proposed algorithm has lower completion times compared to an existing single data center scheduling algorithm, demonstrating the benefits of considering bandwidth and utilizing multiple data centers.
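The threshold rule in point 2 above can be sketched as a simple placement loop: bind the task to the current datacenter if its load is below the saturation threshold, otherwise try the next one. The dict-based datacenter representation and the load/capacity ratio are illustrative assumptions; the paper's policy also weighs bandwidth, which is omitted here.

```python
def place_task(task_load, datacenters, threshold):
    """Bind the task to the first datacenter whose utilization is below
    the saturation threshold; migrate onward otherwise. `datacenters`
    is an ordered list of dicts with 'load' and 'capacity' keys.
    Returns the index of the chosen datacenter."""
    for i, dc in enumerate(datacenters):
        if dc['load'] / dc['capacity'] < threshold:
            dc['load'] += task_load    # bind task here
            return i
    raise RuntimeError("all datacenters saturated")
```

With a threshold of 0.8, a datacenter at 90% utilization is skipped and the task migrates to the next one.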
VIRTUAL MACHINE SCHEDULING IN CLOUD COMPUTING ENVIRONMENT - ijmpict
Cloud computing is an upcoming technology in distributed computing that facilitates a pay-per-use model
according to user demand and need. A cloud incorporates a set of virtual machines comprising both storage
and computational facilities. The fundamental goal of cloud computing is to offer effective access to
remote and geographically distributed resources. The cloud is growing every day and faces numerous
problems, one of which is scheduling. Scheduling refers to a collection of policies to regulate the order
in which tasks are executed by a computer system; an excellent scheduler derives its scheduling plan
according to the type of work and the varying environment. This research paper demonstrates a generalized
precedence algorithm for effective performance of work and contrasts it with Round Robin and FCFS
scheduling. The algorithm is tested within the CloudSim toolkit, and the outcome illustrates that it
performs well compared to some customary scheduling algorithms.
This document summarizes and compares various scheduling algorithms used in cloud computing environments. It begins with an introduction to cloud computing and the need for scheduling algorithms in cloud environments. It then describes several existing scheduling algorithms, including compromised-time-cost scheduling, particle swarm optimization-based heuristic, improved cost-based algorithm, resource-aware scheduling, innovative transaction intensive cost-constraint scheduling, scalable heterogeneous earliest-finish-time algorithm, and multiple QoS constrained scheduling strategy of multi-workflows. These algorithms aim to optimize metrics such as execution time, cost, deadline, load balancing, and quality of service. The document concludes by comparing the different scheduling strategies.
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
Service Request Scheduling in Cloud Computing using Meta-Heuristic Technique:... - IRJET Journal
This document discusses using the Teaching Learning Based Optimization (TLBO) meta-heuristic technique for service request scheduling between users and cloud service providers. TLBO is a nature-inspired algorithm that mimics the teacher-student learning process; it is compared to other meta-heuristic algorithms such as the Genetic Algorithm. The key steps of TLBO involve initializing a population, evaluating fitness, selecting the best solution as the teacher, and updating the population through teacher and learner phases until the termination criteria are met. The document proposes using the number of users and virtual machines as parameters for TLBO scheduling in cloud computing. MATLAB simulation results show the initial and final iterations converging to an optimal scheduling solution.
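The teacher and learner phases mentioned above follow a standard pattern, sketched here for a real-valued minimization problem rather than the paper's MATLAB scheduling setup (the objective function and population shape are illustrative assumptions):

```python
import random

def tlbo_step(pop, fitness):
    """One TLBO iteration (teacher phase + learner phase) over a
    population of real-valued vectors, minimizing `fitness`.
    Candidates are accepted greedily: a solution never gets worse."""
    dim = len(pop[0])
    teacher = min(pop, key=fitness)            # best solution teaches
    mean = [sum(x[d] for x in pop) / len(pop) for d in range(dim)]

    new_pop = []
    for x in pop:
        # Teacher phase: move toward the teacher, away from the mean
        tf = random.choice((1, 2))             # teaching factor
        cand = [x[d] + random.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        x = cand if fitness(cand) < fitness(x) else x
        # Learner phase: learn from a randomly chosen classmate
        other = random.choice(pop)
        sign = 1 if fitness(x) < fitness(other) else -1
        cand = [x[d] + sign * random.random() * (x[d] - other[d])
                for d in range(dim)]
        new_pop.append(cand if fitness(cand) < fitness(x) else x)
    return new_pop
```

Because both phases accept a candidate only when it improves, the best fitness in the population is non-increasing across iterations.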
An optimized scientific workflow scheduling in cloud computing - DIGVIJAY SHINDE
The document discusses optimizing scientific workflow scheduling in cloud computing. It begins with definitions of workflow and cloud computing. Workflow is a group of repeatable dependent tasks, while cloud computing provides applications and hardware resources over the Internet. There are three cloud service models: SaaS, PaaS, and IaaS. The document explores how to efficiently schedule workflows in the cloud to reduce makespan, cost, and energy consumption. It reviews different scheduling algorithms like FCFS, genetic algorithms, and discusses optimizing objectives like time and cost. The document provides a literature review comparing various workflow scheduling methods and algorithms. It concludes with discussing open issues and directions for future work in optimizing workflow scheduling for cloud computing.
Abstract: An efficient task scheduling method can meet users' requirements, improve resource utilization, and thereby increase the overall performance of the cloud computing environment. Cloud computing has new features such as flexibility and virtualization; in this paper we propose a two-level task scheduling method based on load balancing in cloud computing. This method meets users' requirements and achieves high resource utilization, as simulation results in the CloudSim simulator confirm. Keywords: cloud computing; task scheduling; virtualization.
Title: A Task Scheduling Algorithm in Cloud Computing
Author: Ali Bagherinia
ISSN 2350-1022
International Journal of Recent Research in Mathematics Computer Science and Information Technology
Paper Publications
Multi-objective tasks scheduling using bee colony algorithm in cloud computing - IJECEIAES
This document presents a new approach for scheduling multi-objective tasks in cloud computing using an artificial bee colony algorithm. The proposed algorithm aims to optimize response time, schedule length ratio, and efficiency. It models tasks as bees that are assigned to processing elements in data centers to minimize completion time while balancing resource loads. The results showed the bee colony algorithm achieved better performance than other scheduling methods in cloud computing environments.
IRJET - Advance Approach for Load Balancing in Cloud Computing using (HMSO) Hy... - IRJET Journal
This document proposes a new hybrid multi-swarm optimization (HMSO) algorithm for load balancing in cloud computing. It aims to minimize response time and costs while improving resource utilization and customer satisfaction. The HMSO algorithm uses multi-level particle swarm optimization to find an optimal resource allocation solution. Simulation results show that the proposed HMSO technique reduces response time and datacenter costs compared to other algorithms. It also achieves a more balanced load distribution across resources.
This document proposes a new task scheduling algorithm called Dynamic Heterogeneous Shortest Job First (DHSJF) for heterogeneous cloud computing systems. DHSJF aims to improve performance metrics like reduced makespan and low energy consumption by considering the heterogeneity of resources and workloads. It discusses existing scheduling algorithms like Round Robin, First Come First Serve and their limitations. The proposed DHSJF algorithm prioritizes tasks with the shortest estimated completion time to optimize resource utilization and improve overall performance of the cloud computing system. Simulation results show that DHSJF provides better results for metrics like average waiting time and turnaround time as compared to Round Robin and First Come First Serve scheduling algorithms.
A Novel Dynamic Priority Based Job Scheduling Approach for Cloud Environment - IRJET Journal
The document proposes a new dynamic priority-based job scheduling algorithm for cloud environments to optimize the problem of starvation. It assigns priority to jobs based on criteria like CPU requirements, I/O requirements, and job criticality. The algorithm aims to reduce wait time, turnaround time, and increase throughput and CPU utilization. It was tested against the Shortest Job First algorithm in CloudSim simulation software. The results showed improvements in wait time, turnaround time, and total finish time compared to the SJF algorithm.
Cloud computing has become an important topic in the area of high-performance distributed computing.
Task scheduling is considered one of the most significant issues in the cloud, where the user has to pay
for resource usage based on time; distributing cloud resources among users' applications should therefore
maximize resource utilization and minimize task execution time. The goal of task scheduling is to assign
tasks to appropriate resources so as to optimize one or more performance parameters (i.e., completion
time, cost, resource utilization, etc.). In addition, scheduling belongs to the category of problems
known as NP-complete, so heuristic algorithms can be applied to solve it. In this paper, an enhanced
dependent task scheduling algorithm based on a Genetic Algorithm (DTGA) is introduced for mapping and
executing an application's tasks, with the aim of minimizing completion time. The performance of the
proposed algorithm has been evaluated using the WorkflowSim toolkit and the Standard Task
Graph Set (STG) benchmark.
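The genetic-algorithm formulation described above can be sketched in miniature: a chromosome maps each task to a VM, fitness is the makespan, and selection, crossover and mutation evolve the population. This sketch ignores task dependencies (which DTGA handles) and uses illustrative parameter values, so it is not the paper's algorithm.

```python
import random

def ga_schedule(task_lengths, vm_speeds, pop_size=30, gens=50):
    """Tiny GA sketch for task-to-VM mapping. A chromosome is a list:
    chromosome[i] = VM assigned to task i; fitness is the makespan,
    which the GA minimizes. Dependencies are omitted for brevity."""
    n, m = len(task_lengths), len(vm_speeds)

    def makespan(chrom):
        loads = [0.0] * m
        for t, v in enumerate(chrom):
            loads[v] += task_lengths[t] / vm_speeds[v]
        return max(loads)

    pop = [[random.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)
        elite = pop[:pop_size // 2]           # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:         # mutation: reassign a task
                child[random.randrange(n)] = random.randrange(m)
            children.append(child)
        pop = elite + children
    best = min(pop, key=makespan)
    return best, makespan(best)
```

Because the elite half is carried over unchanged each generation, the best makespan found never worsens.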
Application of selective algorithm for effective resource provisioning in clo... - ijccsa
The continued modern-day demand for resource-hungry services and applications in the IT sector has led to
the development of cloud computing. The cloud computing environment involves high-cost infrastructure on
one hand and needs large-scale computational resources on the other. These resources need to be
provisioned (allocated and scheduled) to end users in the most efficient manner so that the tremendous
capabilities of the cloud are utilized effectively. In this paper we discuss a selective algorithm for
on-demand allocation of cloud resources to end users. It is based on min-min and max-min, two
conventional task scheduling algorithms, and uses certain heuristics to select between the two so that
the overall makespan of tasks on the machines is minimized. Tasks are scheduled on machines in either a
space-shared or a time-shared manner. We evaluate our provisioning heuristics using a cloud simulator
called CloudSim, and also compare our approach to the statistics obtained when resources were provisioned
in a First-Come-First-Serve (FCFS) manner. The experimental results show that the overall makespan of
tasks on a given set of VMs decreases significantly in different scenarios.
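Min-min and max-min differ only in which task they commit each round, so both fit one routine parameterized by a chooser. The selection rule in `selective` below (prefer min-min when short tasks dominate) is an illustrative stand-in; the paper's exact heuristic may differ.

```python
import statistics

def schedule(etc, chooser):
    """Run min-min (chooser=min) or max-min (chooser=max) over an ETC
    matrix etc[t][v]. Each round computes every unscheduled task's best
    (earliest) completion time; `chooser` picks which task to commit."""
    ready = [0.0] * len(etc[0])
    unsched = set(range(len(etc)))
    while unsched:
        best = {t: min((ready[v] + etc[t][v], v) for v in range(len(ready)))
                for t in unsched}
        t = chooser(best, key=lambda t: best[t][0])
        ctime, v = best[t]
        ready[v] = ctime
        unsched.remove(t)
    return max(ready)                  # makespan

def selective(etc):
    """Selective sketch: use min-min when short tasks dominate the
    workload, max-min otherwise (illustrative selection rule)."""
    lengths = [min(row) for row in etc]
    short = sum(1 for l in lengths if l <= statistics.mean(lengths))
    return schedule(etc, min if short >= len(etc) - short else max)
```

On an ETC matrix with one dominant long task, max-min schedules the long task first and overlaps the short ones with it, which is exactly the case where switching away from min-min pays off.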
Differentiating Algorithms of Cloud Task Scheduling Based on various Parameters - iosrjce
Cloud computing is a new design structure for large, distributed data centers. Cloud computing systems
promise to offer the end user a "pay as you go" model. To meet the expected quality requirements of
users, cloud computing needs to offer differentiated services, and QoS differentiation is very important
for satisfying users with different QoS requirements. In this paper, various QoS-based scheduling
algorithms, their scheduling parameters, and their future scope have been studied. The paper summarizes
various cloud scheduling algorithms, the findings of those algorithms, scheduling factors, types of
scheduling, and the parameters considered.
This document discusses different algorithms for task scheduling in cloud computing environments based on various quality of service (QoS) parameters. It summarizes several QoS-based scheduling algorithms including QDA, Improved Cost Based, PAPRIKA, ANT Colony, CMultiQoSSchedule, and SHEFT Workflow. It also provides a comparative table of these algorithms and discusses the various metrics considered by QoS-based scheduling algorithms like time, cost, makespan, trust, and resource utilization. The paper concludes that scheduling is an important factor for cloud environments and that existing algorithms can be improved by considering additional parameters like trust values, execution rates, and success rates.
Cloud computing gives on-demand access to computing resources in a metered and dynamically adaptable
way; it empowers the client to access fast and flexible resources through virtualization, and is widely
adaptable for various applications. To provide assurance of productive computation, task scheduling is
very important in a cloud infrastructure environment; the main aim of task execution is to reduce
execution time and conserve infrastructure. Moreover, for large applications, workflow scheduling has
drawn considerable attention in business as well as scientific areas. Hence, in this research work we
design and develop an optimized load balancing in parallel computing (OLBP) mechanism to distribute the
load: first the different parameters of the workload are computed, and then loads are distributed. The
OLBP mechanism treats makespan time and energy as constraints, and task offloading is done considering
server speed, which balances the workflow. The OLBP mechanism is evaluated using the CyberShake workflow
dataset and outperforms existing workflow mechanisms.
Time and Reliability Optimization Bat Algorithm for Scheduling Workflow in Cloud - IRJET Journal
This document describes using a meta-heuristic optimization algorithm called the Bat Algorithm (BA) to schedule workflows in cloud computing environments. The BA is applied to optimize a multi-objective function that minimizes workflow execution time and maximizes reliability while keeping costs within a user-specified budget. The BA is compared to a basic randomized evolutionary algorithm (BREA) that uses greedy approaches. Experimental results show the BA performs better by finding schedules that have lower execution times and higher reliability within the given budget constraints. The BA is well-suited for this problem because it can efficiently search large solution spaces and automatically focus on optimal regions like other metaheuristics.
This document proposes a genetic algorithm called Workflow Scheduling for Public Cloud Using Genetic Algorithm (WSGA) to optimize the cost of executing workflows in the public cloud. It discusses how genetic algorithms can be applied to the workflow scheduling problem to generate optimal schedules. The WSGA represents potential scheduling solutions as chromosomes, uses a fitness function to evaluate scheduling costs, and applies genetic operators like selection, crossover and mutation to evolve new schedules over multiple iterations. The goal is to minimize total execution cost while meeting workflow dependencies and deadline constraints. An experimental setup is described and the WSGA approach is claimed to reduce costs more than other heuristic scheduling algorithms for communication-intensive workflows.
This document provides an overview of load balancing techniques in cloud computing. It discusses how load balancing aims to efficiently distribute workload across nodes to maximize resource utilization and minimize response time. The document categorizes load balancing algorithms as either static or dynamic. It further classifies dynamic algorithms as centralized, distributed, cooperative or non-cooperative. Several common load balancing algorithms for cloud computing are then described, including Round Robin, Throttled Load Balancing, Modified Throttled, Min-Min Scheduling, and Load Balance Min-Min.
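Of the algorithms listed above, Throttled is the simplest to sketch: each VM may run only a bounded number of tasks at once, and the allocator returns the first VM below its limit, or signals that the request must wait. This is a minimal sketch with a single shared limit, which is an assumption; real implementations keep a per-VM availability index.

```python
def throttled_allocate(active, limit):
    """Throttled policy sketch: return the index of the first VM whose
    count of active tasks is below `limit`, incrementing its count, or
    None when every VM is saturated (the request waits in a queue)."""
    for v, count in enumerate(active):
        if count < limit:
            active[v] = count + 1
            return v
    return None
```

For example, with active counts `[2, 1]` and a limit of 2, the request goes to VM 1; once all VMs hit the limit, allocation returns None and the request queues.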
A cloud computing scheduling and its evolutionary approaches (nooriasukmaningtyas)
Despite the increasing use of cloud computing technology, which offers unique features to serve its customers, exploiting its full potential is difficult due to many problems and challenges; resource scheduling is one of these challenges. Researchers still find it difficult to determine which scheduling algorithms are appropriate and effective and help increase system performance. This paper provides a broad and detailed examination of resource scheduling algorithms in the cloud computing environment and highlights the advantages and disadvantages of some algorithms, to help researchers select the best algorithm to schedule a particular workload, satisfy quality of service, guarantee good utilization of cloud resources, and minimize the makespan.
A STUDY ON JOB SCHEDULING IN CLOUD ENVIRONMENT (pharmaindexing)
This document discusses job scheduling algorithms in cloud computing environments. It begins with an introduction to cloud computing and job scheduling challenges. It then reviews several existing job scheduling algorithms that aim to minimize completion time and costs while improving performance and quality of service. These algorithms use approaches like genetic algorithms, priority queues, and workload prediction. The document also discusses issues like priority-based scheduling and balancing mixed workloads. Overall, the document analyzes the problem of job scheduling in clouds and surveys different proposed scheduling algorithms and their objectives.
Optimized Assignment of Independent Task for Improving Resources Performance ... (ijgca)
Grid computing has emerged from the category of distributed and parallel computing, where heterogeneous resources from different networks are used simultaneously to solve a particular problem that needs a huge amount of resources. The potential of grid computing depends on many issues such as security of resources, heterogeneity of resources, fault tolerance, resource discovery and job scheduling. Scheduling is one of the core steps to efficiently exploit the capabilities of heterogeneous distributed computing resources, and is an NP-complete problem. To achieve the promising potential of grid computing, an effective and efficient job scheduling algorithm is proposed that optimizes two important criteria to improve the performance of resources: makespan and resource utilization. In addition, various task scheduling heuristics in the grid are classified on the basis of their characteristics.
Dynamic Three Stages Task Scheduling Algorithm on Cloud Computing
(IJCSIS) International Journal of Computer Science and Information Security, Vol. 18, No. 6, June 2020
Dynamic Three Stages Task Scheduling Algorithm
on Cloud Computing
Naglaa Sayed Abdelrehem Fathi Ahmed Amer Imane Aly Saroit
Department of Information Technology
Faculty of Computer and Artificial Intelligence
Cairo University
Cairo, Egypt
Naglaasayed.fci@gmail.com fathi.amer.csis@o6u.edu.eg i.saroit@fci-cu.edu.eg
Abstract— The scheduling process is one of the main challenges in cloud computing: it manages and coordinates tasks and their appropriate resources to get the best and most efficient use of the available cloud resources. This paper proposes a cloud scheduling mechanism that works as a three-stage strategy. In the first stage, a task classification is performed using a job classifier to pre-create different types of Virtual Machines (VMs), which saves the time needed during the scheduling process to create these VMs and decreases the failure rate. In the second stage, tasks are sorted by priority; each task is then checked to see whether its expected execution time is less than or equal to its deadline, the state of a VM that satisfies the deadline constraint is marked as successful, and tasks that cannot be executed within their deadline are rejected and saved in the database to be handled later. In the third stage, tasks are paired dynamically with their matching VMs with the minimum completion time. To evaluate the proposed protocol, a simulation is performed using the CloudSim Plus simulator, comparing the proposed algorithm with the standard Min-Min algorithm and the two-stage scheduling algorithm. The results show that the proposed algorithm reduced the average waiting time, average makespan and failure rate, and maximized the virtual machine utilization rate, task guarantee ratio and VM load balancing compared to the two other algorithms.
Keywords- Cloud Computing; Scheduling; Virtual Machines (VMs); Makespan; Waiting Time; Resource Utilization; Failure Rate.
I. INTRODUCTION
Cloud computing is known as on-demand sharing of resources, services or infrastructure over the internet, paying only for what is used. Tasks are scheduled depending on the users' different needs [1] [2]. To organize resource usage on the cloud and find the appropriate deployment method, we need a suitable scheduling mechanism that makes efficient use of cloud resources at minimum cost. The cloud scheduling process aims to define the most suitable deployment method to satisfy the users' requirements and to help service providers obtain the highest economic benefits [3]. Many different cloud applications are received by the data center to get services using the pay-per-use policy. Owing to the limited resources with different functionalities and different capacities on the cloud, cloud scheduling has turned into a challenging process [4]. Various scheduling algorithms have been suggested by different researchers to define the most convenient deployment method for the resources in the cloud [5] [6]. Many tasks are scheduled in different cloud environments with different Quality of Service (QoS) requirements [7]. Some of the suggested cloud task scheduling mechanisms aim to optimize the deadline [8]. Others suggested enhancements in load balancing [9] [10], optimizing the QoS requirements and maximizing the total revenue [11], or minimizing the costs [12]. Others aimed for the best service level agreement and the best energy consumption level [13].
The cloud scheduling process has become an urgent and sensitive matter: finding the best deployment for cloud resources helps enhance the overall cloud performance, increase the quality of service, minimize costs and failure rate, and maximize utilization and total revenue. The main problems we face during the scheduling process are finding the most convenient task-VM pairs to match; waiting times can be too high, and many tasks may enter the system, wait to be processed, and then fail because they have exhausted their deadline constraint. To resolve this problem, a new scheduling algorithm is introduced in this paper to organize the task and virtual machine mapping process, based on a dynamic three-stage strategy. In the first stage, it detects the common task and VM types based on a historical stored data set, which helps to predict and pre-create a convenient number of VMs based on the types in the database; this step saves the time needed to create the VMs during the scheduling process. The algorithm then receives dynamic task sets in the cloud, classifies them based on the historical database, and first checks whether the incoming tasks can be executed within their deadline to follow the scheduling sequence; tasks that cannot are rejected to avoid waiting time and are saved in the historical database to be handled later. Among the successful tasks that meet the deadline constraint, some may not find a suitable matching VM of the same type, so the algorithm searches for the most similar available VM with the minimum completion time. If no convenient one is found, the task is saved in a waiting queue until a convenient matching VM is created. This helps to minimize the waiting time and makespan, maximize the task guarantee ratio, and enhance the load balance among the VMs.
The paper is organized as follows: section 2 reviews related work; section 3 explains the proposed algorithm in detail; section 4 presents the simulation, the evaluation of the proposed scenario and the comparison results with other algorithms; section 5 concludes the work and suggests future work.
II. RELATED WORK
SaeMi Shin and SuKyoung Lee suggested a scheduling mechanism that receives all the data center jobs, sorts them in ascending order to serve the high-priority jobs first, and then chooses the largest backfill job to satisfy the deadline guarantee constraint. Their algorithm enhanced the scheduling performance by improving the deadline guarantee ratio and the utilization of resources [2]. Atul Vikas Lakra and Dharmendra Kumar Yadav presented a simplified survey of different cloud task scheduling algorithms; each algorithm enhanced one or more of the following factors: quality of service, load balancing, minimized makespan, consistency, maximum resource utilization, energy efficiency, effective implementation, fairness among tasks, high profits and bandwidth utilization. Meanwhile, all the algorithms share the problem that none can enhance all of these factors together, so none of them achieves 100% efficiency [4]. M. Vijayalakshmi and V. Venkatesa Kumar surveyed different scheduling techniques such as Round Robin, Minimum Completion Time, Random Resource Selection, and load balancing algorithms; Round Robin had minimum cost compared to the minimum completion time and load balancing algorithms, while in terms of total cost the Random algorithm was the best, and the Random and Round Robin algorithms reached the same cost as the number of jobs increased [6]. Mousa and Abdelouahed Gherbi suggested a scheduling mechanism that divides tasks into different groups using RAM and CPU utilization based on data from log files, where various tasks can share the same VM resources; their algorithm enhances the QoS requirements, maximizes resource usage, increases user satisfaction and reduces the number of job rejections [10]. PeiYun Zhang and MengChu Zhou proposed a cloud scheduling technique that works as a two-stage strategy: in the first stage, a task classification is performed depending on past documented data; in the second stage, the tasks are matched dynamically to their corresponding VMs. Their algorithm enhanced the load balancing and the scheduling performance compared to the Min-Min and Max-Min algorithms [14]. Mokhtar A. Alworafi and Suresha Mallappa suggested a cloud scheduling mechanism that sorts the tasks by length priority in ascending order, marks the state of a virtual machine that satisfies the deadline constraint as successful, and then pairs each task with its convenient VM. Their algorithm enhanced the task guarantee ratio and the utilization of resources, and reduced the average response time and the makespan compared with other algorithms (Min-Min, GA, SJF, and Round Robin) [15]. Amit Agarwal and Saloni Jain suggested a Generalized Priority Algorithm where the priority is defined with respect to the user demands; their algorithm has a minimum execution time compared with the FCFS and RR algorithms [16]. Xiaoping Li and Rubén Ruiz suggested a task scheduling mechanism that combines the priority scheduling algorithm and the RR algorithm to enhance the execution time and throughput by dividing the tasks into two groups (deadline-based and cost-based); the deadline-based tasks are arranged in ascending order of deadline and the cost-based tasks in descending order of task length [17]. Mehwish Awan and Munam Ali Shah suggested a multi-objective scheduling algorithm that relates a group of tasks received by the broker to the received virtual machines list; they reduced the execution time of the workload to the minimum optimized time, and compared their algorithm to the FCFS algorithm and the priority scheduling algorithm [18].
III. PROPOSED ALGORITHM
In this section, we explain the proposed cloud scheduling strategy, named the Dynamic Three Stages scheduling algorithm. The proposed algorithm aims to find the best sequence to be followed as a task scheduling mechanism that maps the tasks dynamically to the most appropriate virtual machine with the minimum time and the maximum utilization rate. The algorithm acts as a series of three stages, as described in the next sections.
A. The First Stage
In the first stage, a task classification is performed using a job classifier, depending on the previously stored historical data and the cloud environment's current state. This step saves the time needed to create virtual machines during the scheduling process and also decreases the task scheduling failure rate, as it pre-creates a convenient number of the various predicted virtual machine types.
Fig. 1 illustrates the sequence followed in the first stage of
the algorithm in brief:
Figure 1: The First Stage Sequence
B. The Second Stage
The second stage works on decreasing the average makespan and maximizing the resource utilization. The tasks are sorted by the priority of their lengths, in order to process critical or high-priority tasks before the ordinary ones. Then, each task's expected execution time is checked against its deadline: if it is less than or equal to the task deadline, the task can be executed within its deadline. After that, the state of the virtual machine that satisfies the deadline constraint is marked as successful so the process can complete, and the unsuccessful tasks are discarded after being saved in the database, to be used later as historical data from which convenient VMs that can execute them easily are created.
Fig. 2 illustrates the sequence followed in the second
stage of the algorithm in brief:
Figure 2: The Second Stage Sequence
C. The Third Stage
The third stage maps the tasks dynamically to the convenient virtual machines. If a task has no convenient matching VM of the same type available, we check whether there are extra available VMs of other, similar types to match with; this helps to enhance the load balancing among all the available VMs and saves the time the task would spend in the waiting queue until a matching VM of the same type is created. Otherwise, we put the task in the waiting queue and create a convenient VM type during the dynamic scheduling process. The algorithm is compared with two existing algorithms, namely the Min-Min and the two-stage algorithms.
Figure 3 illustrates the sequence followed in the third stage
of the algorithm in brief:
Figure 3: The Third Stage Sequence
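The third-stage pairing described above can be sketched as follows. This is a minimal illustration with simplified Task and VM records; the field names and the similar-type lookup table are our assumptions, not the paper's data structures:

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class VM:
    vm_id: int
    vm_type: int
    speed: float          # MIPS
    busy: bool = False

@dataclass
class Task:
    task_id: int
    vm_type: int          # type predicted by the first-stage classifier
    length: float         # task length in million instructions

waiting_queue: deque = deque()

def map_task(task, vms, similar_types):
    """Pair a task with the free VM giving the minimum completion time.

    Tries VMs of the task's own type first, then VMs of similar types,
    and finally parks the task in the waiting queue so a convenient VM
    can be created during the dynamic scheduling process.
    """
    def best(candidates):
        free = [v for v in candidates if not v.busy]
        return min(free, key=lambda v: task.length / v.speed) if free else None

    vm = best([v for v in vms if v.vm_type == task.vm_type])
    if vm is None:   # fall back to the most similar VM types
        vm = best([v for v in vms
                   if v.vm_type in similar_types.get(task.vm_type, [])])
    if vm is None:   # no convenient VM: wait until one is created
        waiting_queue.append(task)
        return None
    vm.busy = True
    return vm
```

For example, with two free VMs of the requested type, `map_task` picks the faster one (lower `length / speed`); with none, it falls back to the `similar_types` table before queueing the task.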
D. The task and VM mapping
Assume we have a set of VMs defined as V = {1, 2, …, N}, where Vi, i ∈ {1, 2, …, N}, represents VM number i. Vi is defined by four attributes, denoted Vi(a), where a ∈ {1, 2, 3, 4}; these represent the CPU resources (such as the CPU clock speed), the memory resources, the network bandwidth and the hard disk storage, respectively. So we have Vi = {Vi(1), Vi(2), Vi(3), Vi(4)}.
Assume we have a set of tasks provided by the users, defined as T = {1, 2, …, M}, where Tj, j ∈ {1, 2, …, M}, represents task number j. Task j is defined by the attributes Tj = {Tj(id), Tj(r), Tj(d), Tj(p), Tj(L)}, where Tj(id) is the unique ID of task j and Tj(r) contains the requirements of task j: Tj(r) = {Tj1, Tj2, Tj3, Tj4} specifies the requirements for CPU, memory, network bandwidth and hard disk storage of task Tj. Tj(d) is the deadline of task Tj. When the deadline of Tj is
violated, the task fails to be scheduled. Tj(p) is the priority of task j: if Tj is urgent or a high-paying user's job, it is a high-priority task; otherwise it is a regular job.
Assume that the cloud holds a set of hosts defined as Hk, where k ∈ {1, 2, …, K} represents host number k, which can create Gk VMs, i.e., Hk = {vnk | n ∈ {1, 2, …, Gk}}, where vnk represents VM number n of host number k.
To reduce the complexity of the mapping, we divide the tasks that need to be matched with VMs into task types. Assume we have a set of VM types, VMtype = {1, 2, …, L}, where L is the number of VM types.
Assume that the data center DC has a set of servers defined as S = {S1, S2, …, Ss}, where Si = {Vi1, Vi2, …, ViN} is the set of virtual machines in server Si. Each virtual machine has a specified speed, defined by the number of million instructions per second (MIPS) and denoted Vs; the number of instructions per task (the task length) is denoted TL.
The expected execution time (EET) of the task on each VM is computed and compared with the deadline constraint, to find which VMs achieve the deadline: if one does, the task is defined as successful and completes the scheduling sequence; if none does, the task is defined as unsuccessful and saved to the database to be handled later. The expected execution time is calculated as in equation 2 [15]:

EET = TL / Vs (2)

where TL is the task length and Vs is the VM speed.
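With equation 2, the second-stage deadline check reduces to a simple comparison; a minimal sketch (the function names and the list-based VM representation are ours):

```python
def expected_execution_time(task_length, vm_speed):
    """Equation 2: EET = task length (MI) / VM speed (MIPS)."""
    return task_length / vm_speed

def deadline_check(task_length, deadline, vm_speeds):
    """Return the VM speeds that satisfy the deadline constraint.

    A VM with EET <= deadline marks the task as successful; a task
    with no such VM is rejected and saved to the historical database.
    """
    return [s for s in vm_speeds
            if expected_execution_time(task_length, s) <= deadline]
```

For instance, a 2000 MI task with a 2.5 s deadline succeeds only on VMs of at least 800 MIPS.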
The task scheduling process here is defined as a function that maps tasks to the convenient VM types. The matching degree between a task and a type-k virtual machine is computed as in equation 3 [14], based on the maximum Vk(a) of the type-k virtual machine.
Assume a variable Yi that denotes a VM of type i. For task j, we can compute the probability that task Tj is related to type Yi using a Bayes classifier, as in equation 4 [14].
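The bodies of equations 3 and 4 are not reproduced in this text. Purely as an illustration of the matching-degree idea, a normalized score over the four attributes could be sketched as follows; the capping at 1 and the averaging are our simplification, not necessarily the paper's exact equation 3:

```python
def matching_degree(task_req, vm_max):
    """Illustrative matching degree between a task and a VM type.

    task_req and vm_max each hold the four attributes (CPU, memory,
    bandwidth, storage); each ratio is capped at 1.0 so a task whose
    requirements are fully covered scores 1.0.
    """
    ratios = [min(r / c, 1.0) for r, c in zip(task_req, vm_max)]
    return sum(ratios) / len(ratios)
```

A higher score means the VM type's maximum capacities more closely cover the task's requirements, which is the quantity the first-stage classifier needs when assigning a task to a VM type.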
E. Model Description
Fig. 4 illustrates a flow chart for the proposed protocol.
Figure 4: A Detailed Task Scheduling Based on a Three Stages Strategy
IV. SIMULATION AND EXPERIMENTS
A. Simulation Environment
We ran our experiments on a Lenovo laptop with an Intel(R) Core(TM) i3-3110M CPU at 2.40 GHz, 4.0 GB of memory, Intel HD Graphics 4000, the Windows 10 operating system, NetBeans IDE 8.2, JDK 8.0, Microsoft SQL Server 2012, and the CloudSim Plus simulator, which is a
well-known and widely used cloud simulator. We simulated the cloud by inheriting and extending some of the CloudSim classes such as Vm, DatacenterBroker, Cloudlet and Host, and defined the mapping policies as in the CloudletSchedulerSpaceShared policy, extended from CloudletSchedulerAbstract.
We ran the model simulation with a data set from "http://www.mediafire.com/file/birsbbpo7e8nwom/LCG._ARCH.swf/file" with a total size of 10000 records. In the first simulation we used a data set of 1000 records, divided into five groups of 200 records each. We started the simulation with the set from 0 to 200, then added the second set from 200 to 400 to work on a set of 400, then 600, 800 and finally 1000; each time we add a set of 200 to the previous set until we work on the full 1000-record set. The simulation results may differ in value from run to run because of the dynamic data set, although the result analysis shows a good enhancement in several measurement factors in all the simulation trials.
B. Experimental Results and Analysis
Based on the above experimental configuration, and analysing the simulation results for the proposed algorithm, we compared our algorithm with the previously proposed two-stage scheduling algorithm [14] and the standard Min-Min algorithm [21], using the following evaluation metrics:
1. The Time complexity:
The time complexity of the Min-Min algorithm is O(n³) [14] [21], and that of the two-stage algorithm is also O(n³) [14]. After simulating the algorithm and computing the theoretical time complexity of the overall proposed algorithm, it is likewise O(n³), the same as in the two compared algorithms. So, the three algorithms are equivalent with respect to time complexity.
2. The average makespan:
This is the total time needed for tasks to be scheduled and completed in the cloud; the smaller the makespan value, the better the scheduling and the quality of service. It is calculated as in equation 5 [14]:

Average makespan = (Σj Cj) / M (5)

where Cj is the completion time of task Tj in the cloud, and M is the total number of tasks.
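Given per-task completion and waiting times, equations 5 and 6 are straightforward averages; a minimal sketch (the list-based inputs are our own representation):

```python
def average_makespan(completion_times):
    """Equation 5: average of the completion times Cj over M tasks."""
    return sum(completion_times) / len(completion_times)

def average_waiting_time(waiting_times):
    """Equation 6: average of the waiting times Wj over M tasks."""
    return sum(waiting_times) / len(waiting_times)
```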
By calculating the average makespan according to the simulation results, we found that the total time taken for tasks to complete the scheduling process with the proposed three-stage algorithm is 1.88 ms, while the total time using the two-stage algorithm is 0.48 ms and Min-Min takes a total time of 2.90 ms. This means that the total time for tasks to be scheduled on the cloud using the proposed three-stage scheduling algorithm is less than the total time obtained using the two other algorithms: the proposed algorithm minimized the average makespan value by about 74.5% compared to the two-stage algorithm and by about 74.7% compared to the Min-Min algorithm on a set of 200 tasks; in the same way, the algorithm minimized the makespan value by about 70.7% compared to the two-stage algorithm and by 88% compared with the Min-Min algorithm on a set of 1000 tasks.
Fig. 5 shows the simulation results for the three algorithms with respect to the average makespan, for sets of 200, 400, 600, …, 1000 tasks:
Figure 5: The Average Makespan vs Number of Tasks
By analysing the data set that contains 1000 samples, we found it divided into 12 types, with varying results from type 1 to type 12. The final results, as shown in Fig. 6, are that the proposed algorithm minimized the total makespan value by 71% compared to the two-stage algorithm and by 88% compared to the Min-Min algorithm, which is a good enhancement ratio.
Figure 6: The Average Makespan vs Task Types
3. The task Average Waiting Time:
This reflects the overall processing capacity and the throughput of the cloud; the smaller the waiting time value, the better the scheduling and the quality of service. It is calculated as in equation 6 [14]:

Average waiting time = (Σj Wj) / M (6)

where Wj is the waiting time of task Tj, and M is the total number of tasks.
By calculating the average waiting time according to the simulation results, we found that with the proposed algorithm the tasks have an average waiting time of 0.07 ms, while with the two-stage algorithm the waiting time is 0.09 ms and with the Min-Min algorithm the tasks wait about 0.24 ms on average. This indicates that the proposed algorithm minimized the average waiting time of tasks inside the scheduling process: by about 22% compared to the two-stage algorithm and by about 71% compared to the Min-Min algorithm on a set of 200 tasks; in the same way, the algorithm enhanced the waiting time value by about 70% compared to the two-stage algorithm and by 88% compared with the Min-Min algorithm on a set of 1000 tasks.
Fig. 7 shows the simulation results for the three algorithms with respect to the average waiting time, for sets of 200, 400, 600, …, 1000 tasks:
Figure 7: The Average Waiting Time vs Number of Tasks
By analysing the data set that contains 1000 samples, we found it divided into 12 types, with varying results from type 1 to type 12. The final results, as shown in Fig. 8, are that the proposed algorithm minimized the total average waiting time by 70% compared to the two-stage algorithm and by 88% compared with the Min-Min algorithm. These results also indicate a good enhancement of the VMs' load balancing, which comes from using the most similar VM with the minimum completion time when no matching VM of the same type is available.
Figure 8: Average Waiting Time vs Task Types
4. The VMs Utilization Rate:
This is a ratio that shows how effective the scheduling is with respect to resource usage and resource deployment; the bigger the utilization ratio, the better the scheduling and the quality of service. It is calculated as in equation 7.
The calculation is based on the number of successful tasks.
By calculating the utilization rate from the simulation results, we found that the proposed algorithm achieves a utilization rate of 86%, while the two-stage algorithm achieves 81% and the Min-Min algorithm 80%. This means that the proposed algorithm managed the available cloud resources well, found the best deployment method for resources, and matched each task with the most appropriate VM. It increased the utilization rate by about 6.2% compared to the two-stage algorithm and by about 7.5% compared to the Min-Min algorithm on a set of 200 tasks. Likewise, it increased the utilization ratio by about 10% compared to the two-stage algorithm and by 42.6% compared with the Min-Min algorithm on a set of 1000 tasks.
Fig. 9 shows the simulation results for the three algorithms with respect to the utilization ratio on sets of 200, 400, 600, ..., 1000 tasks:
Figure 9: The Utilization Rate vs Number of Tasks
By analysing the data set, which contains 1000 samples divided into 12 task types with varying results from type 1 to type 12, we found (as shown in Fig. 10) that the proposed algorithm improved the total average utilization compared to the two-stage and Min-Min algorithms. This enhancement in resource utilization reflects the good deployment of resources.
Figure 10: The Utilization Rate vs Task Types
5. The Task Scheduling Failure Rate:
It is the ratio between the number of failed tasks and the total number of tasks, and it measures the cloud's stability. It is calculated as in equation 8 [15]:

Failure Rate = FT / M

where FT is the number of tasks that have a scheduling failure, and M is the total number of tasks.
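The failure rate of equation 8 reduces to a single division; a minimal sketch with hypothetical counts:

```python
def failure_rate(failed_tasks: int, total_tasks: int) -> float:
    """Equation 8: FT / M, the share of tasks with a scheduling failure."""
    return failed_tasks / total_tasks

# Hypothetical: 7 failures among 200 tasks -> 3.5%
print(round(failure_rate(7, 200) * 100, 1))  # 3.5
```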
By calculating the failure rate from the simulation results, we found that the proposed algorithm fails on 3.5% of all tasks, while the two-stage algorithm has a failure ratio of 5.5% and the Min-Min algorithm a ratio of 3.5%. This means that the proposed algorithm reduced the failure rate by about 36.4% compared to the two-stage algorithm and by 0% compared to the Min-Min algorithm on a set of 200 tasks. Likewise, the algorithm reduced the failure rate by 0% compared to the two-stage algorithm and by 17% compared with the Min-Min algorithm on a set of 1000 tasks.
Fig. 11 shows the simulation results for the three algorithms with respect to the failure rate on sets of 200, 400, 600, ..., 1000 tasks:
Figure 11: The Failure Rate vs Number of Tasks
By analysing the data set, which contains 1000 samples divided into 12 task types with varying results from type 1 to type 12, we found (as shown in Fig. 12) that the proposed algorithm reduced the failure rate by about 0% compared to the two-stage algorithm and by 17% compared with the Min-Min algorithm. As the graph shows, the failure-rate ratio varies from low to high between types 1 and 12 according to the tasks' complexity. Sometimes the failure rate of the proposed algorithm is slightly greater than or equal to that of the other algorithms; we plan to address this point in future work.
Figure 12: The Failure Rate vs Task Types
6. The task guarantee ratio:
It is defined as the ratio of the tasks that are successfully matched to a convenient VM to the total number of tasks. We first define a binary variable x_ijk that takes the value 1 only when a task Tj is assigned to a virtual machine Vi at host Hk, as represented in equation 9 [14]:

x_ijk = 1 if task Tj is assigned to VM Vi at host Hk, and x_ijk = 0 otherwise.

The task guarantee ratio (Gr) of a given host Hk over its VMs is then computed as in equation 10 [14]:

Gr = (sum over i and j of x_ijk) / M

where M is the total number of tasks.
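A minimal sketch of the guarantee-ratio computation following equations 9 and 10 (the assignment map below is a hypothetical example, not data from the paper):

```python
def guarantee_ratio(assignments, total_tasks):
    """Equations 9-10: fraction of tasks assigned to some VM on some host.

    assignments maps task_id -> (vm_id, host_id) for placed tasks only;
    an absent task has indicator x = 0, a present one has x = 1.
    """
    return len(assignments) / total_tasks

# Hypothetical: 1999 of 2000 tasks were matched to a VM.
placed = {t: ("vm0", "host0") for t in range(1999)}
print(round(guarantee_ratio(placed, 2000) * 100, 3))  # 99.95
```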
By calculating the guarantee ratio from the simulation results, we found that the proposed algorithm achieves a guarantee ratio of 99.965%, while the two-stage algorithm achieves 99.945% and the Min-Min algorithm 99.965%. This means that the proposed algorithm improved the guarantee ratio by about 0.02% compared to the two-stage algorithm and by about 0% compared to the Min-Min algorithm on a set of 200 tasks. Likewise, the algorithm has the same guarantee ratio as the two-stage algorithm and improved the value by 0.005% compared with the Min-Min algorithm on a set of 1000 tasks.
Fig. 13 shows the simulation results for the three algorithms with respect to the guarantee ratio on sets of 200, 400, 600, ..., 1000 tasks:
Figure 13: The Guarantee Ratio vs Number of Tasks
By analysing the data set, which contains 1000 samples divided into 12 task types with varying results from type 1 to type 12, we found (as shown in Fig. 14) that the guarantee ratio values vary from higher to lower between types 1 and 12 according to the task type complexity.
Figure 14: Guarantee Ratio vs Task Types
CONCLUSION AND FUTURE WORK
In this paper, we propose a dynamic three-stage scheduling algorithm: a three-stage task scheduling technique that finds the best matched pairs of tasks and VMs, achieves the best results of task scheduling and execution, and enhances the quality of service on the cloud based on the latest stored scheduling data for tasks and their matched VMs. This history helps to pre-create a convenient number of VMs with different resource attributes, saving much time and many resources and enhancing the quality of service. The simulation results, compared with other algorithms, show that the proposed algorithm achieves the minimum waiting time, the minimum makespan, and the maximum utilization rate, enhances the load balance on the virtual machines, and attains a slight enhancement in the task guarantee ratio. Despite these improvements, the algorithm sometimes has a failure rate that is slightly higher than or equal to those of the other algorithms, depending on the dynamic dataset. In future work, we aim to minimize the failure rate and to apply machine learning concepts so that the system acts as a self-learning scheduler for any type of data that needs to be scheduled.
REFERENCES
[1] Panda, S. K., & Jana, P. K. "Efficient task scheduling
algorithms for heterogeneous multi-cloud environment". The
Journal of Supercomputing, Vol.71, No.4, pp.1505–1533,
2015.
[2] SaeMi Shin, Yena Kim, and SuKyoung Lee." Deadline-
guaranteed scheduling algorithm with improved resource
utilization for cloud computing". 2015 12th Annual IEEE
Consumer Communications and Networking Conference
(CCNC), 2015.
[3] Razaque, Abdul, et al. "Task scheduling in cloud
computing." 2016 IEEE Long Island Systems, Applications
and Technology Conference (LISAT). IEEE, pp.1-5, 2016.
[4] Lakra, A. V., & Yadav, D. K "Multi-Objective Tasks
Scheduling Algorithm for Cloud Computing Throughput
Optimization". Procedia Computer Science. Vol.48, pp.107–
113. 2015.
[5] Arunarani, A. R., D. Manjula, and Vijayan Sugumaran.
"Task scheduling techniques in cloud computing: A
literature survey." Future Generation Computer Systems,
Vol.91, pp.407-415, 2019.
[6] M. Vijayalakshmi, V. Venkatesa Kumar, "Investigations on
Job Scheduling Algorithms in Cloud Computing",
International Journal of Advanced Research in Computer
Science & Technology (IJARCST) Vol.2, No.1, 2014.
[7] Babur Hayat Malik, Mehwashma Amir, Bilal Mazhar,
Shehzad Ali, Rabiya Jalil, Javaria Khalid. "Comparison of
Task Scheduling Algorithms in Cloud Environment".
(IJACSA) International Journal of Advanced Computer
Science and Applications. Vol.9, No.5, 2018.
[8] Jain, N., Menache, I., Naor, J. (Seffi), & Yaniv, J. "Near-
Optimal Scheduling Mechanisms for Deadline-Sensitive
Jobs in Large Computing Clusters". ACM Transactions on
Parallel Computing, Vol.2, No.1, pp.1–29, 2015.
[9] Raza Abbas Haidri, C. P. Katti, P. C. Saxena. "A Load
Balancing Strategy for Cloud Computing", IEEE
International Conference on Signal Propagation and
Computer Technology, 2014.
[10] Elrotub, M., & Gherbi, A., "Virtual machine classification-
based approach to enhanced workload balancing for cloud
computing applications", Procedia computer
science, Vol.130, pp.683-688, 2018.
[11] Delimitrou, C., & Kozyrakis, C. "QoS-Aware scheduling in
heterogeneous data centers with paragon". ACM
Transactions on Computer Systems, Vol.31, No.4, pp.1–34,
2013.
[12] Rodriguez, Maria Alejandra, and Rajkumar Buyya.
"Deadline based resource provisioning and scheduling
algorithm for scientific workflows on clouds." IEEE
transactions on cloud computing, Vol.2, No.2, pp.222-235,
2014.
[13] Zhou, Zhou, et al. "Minimizing SLA violation and power
consumption in Cloud data centers using adaptive energy-
aware algorithms." Future Generation Computer
Systems, Vol.86, pp.836-850, 2018.
[14] Zhang, P., & Zhou, M., "Dynamic Cloud Task Scheduling
Based on a Two-Stage Strategy", IEEE Transactions on
Automation Science and Engineering, Vol.15, No.2,
pp.772–783, 2018.
DOI:10.1109/tase.2017.2693688
[15] Alworafi, M. A., & Mallappa, S. "An Enhanced Task
Scheduling in Cloud Computing Based on Deadline-Aware
Model", International Journal of Grid and High-Performance
Computing, Vol.10, No.1, pp.31–53, 2018.
[16] Amit Agarwal, Saloni Jain. "Efficient Optimal Algorithm
of Task Scheduling in Cloud Computing Environment".
International Journal of Computer Trends and Technology
(IJCTT), Vol.9, No.7, pp.344-349, 2014.
[17] Li, X., Qian, L., & Ruiz, R. "Cloud Workflow Scheduling
with Deadlines and Time Slot Availability". IEEE
Transactions on Services Computing, Vol.11, No.2, pp.329–
340, 2018.
[18] Mehwish Awan, Munam Ali Shah. "A Survey on Task
Scheduling Algorithms in Cloud Computing Environment".
International Journal of Computer and Information
Technology, Vol.4, No.2, 2015.
[19] Khanghahi, N., & Ravanmehr, R., "Cloud Computing
Performance Evaluation: Issues and Challenges",
International Journal on Cloud Computing: Services and
Architecture, Vol.3, No.5, pp.29–41, 2013.
[20] Zhang, Xinqian, et al. "Energy-aware virtual machine
allocation for cloud with resource reservation." Journal of
Systems and Software, Vol.147, pp.147-161, 2019.
[21] Devipriya, S., and C. Ramesh. "Improved Max-min heuristic
model for task scheduling in cloud." 2013 International
Conference on Green Computing, Communication and
Conservation of Energy (ICGCE). IEEE, 2014.