This paper shows the importance of fair scheduling in a grid environment, in which every task receives an equal share of execution time so that no task suffers starvation. Load balancing of the available resources in the computational grid is another important factor; this paper assumes a uniform load is given to the resources, and to achieve this, load balancing is applied after the jobs are scheduled. Execution cost and bandwidth cost are also considered, because resources in a grid environment are geographically distributed. In the implementation of this approach, the proposed algorithm reaches an optimal solution and minimizes the makespan as well as the execution cost and the bandwidth cost.
Optimized Assignment of Independent Task for Improving Resources Performance ...ijgca
Grid computing has emerged from the category of distributed and parallel computing, where heterogeneous resources from different networks are used simultaneously to solve a particular problem that needs a huge amount of resources. The potential of grid computing depends on many issues, such as security of resources, heterogeneity of resources, fault tolerance, resource discovery, and job scheduling. Scheduling is one of the core steps in efficiently exploiting the capabilities of heterogeneous distributed computing resources, and it is an NP-complete problem. To achieve the promising potential of grid computing, an effective and efficient job scheduling algorithm is proposed that optimizes two important criteria for improving resource performance: makespan time and resource utilization. In addition, we classify various task scheduling heuristics in the grid on the basis of their characteristics.
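The two criteria the abstract optimizes, makespan and resource utilization, can be computed directly from an assignment of tasks to resources. A minimal sketch (the schedule, resource names, and execution times below are invented for illustration, not taken from the paper):

```python
# Given a task-to-resource assignment, compute makespan (time until the
# last resource finishes) and average resource utilization.

def makespan_and_utilization(assignment):
    """assignment: {resource_name: [task execution times on that resource]}."""
    finish = {r: sum(ts) for r, ts in assignment.items()}
    makespan = max(finish.values())
    # Utilization of one resource = its busy time / makespan; report the mean.
    avg_util = sum(f / makespan for f in finish.values()) / len(finish)
    return makespan, avg_util

schedule = {"R1": [4.0, 2.0], "R2": [3.0], "R3": [1.0, 1.0, 1.0]}
ms, util = makespan_and_utilization(schedule)
```

Here R1 finishes at 6.0, which sets the makespan, while R2 and R3 sit idle half the time, so average utilization is 2/3.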
Deadline and Suffrage Aware Task Scheduling Approach for Cloud EnvironmentIRJET Journal
The document proposes a deadline and suffrage aware task scheduling approach for cloud environments. It discusses limitations of existing approaches that can cause system imbalances. The proposed approach considers both task deadlines and priorities assigned by user votes ("suffrage") to schedule tasks. It was tested using the CloudSim simulator and found to outperform the basic min-min approach in reducing completion times and improving resource utilization and provider profits while still meeting task deadlines.
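The min-min baseline the abstract compares against is a standard heuristic: repeatedly pick the task whose minimum completion time across machines is smallest, and assign it to that machine. A minimal sketch (the ETC matrix below is a made-up example):

```python
def min_min(etc, n_machines):
    """Min-min heuristic. etc[t][m] = expected execution time of task t
    on machine m. Returns the assignment order and the makespan."""
    ready = [0.0] * n_machines          # time each machine becomes free
    unscheduled = set(range(len(etc)))
    order = []
    while unscheduled:
        # Among all (task, machine) pairs, take the smallest completion time.
        ct, t, m = min(
            (ready[m] + etc[t][m], t, m)
            for t in unscheduled for m in range(n_machines)
        )
        ready[m] = ct
        unscheduled.remove(t)
        order.append((t, m))
    return order, max(ready)

order, ms = min_min([[3, 5], [1, 2], [4, 4]], 2)
```

With this instance, the shortest task (task 1) is scheduled first, and the final makespan is 4.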
An Improved Parallel Activity scheduling algorithm for large datasetsIJERA Editor
Parallel processing can execute a large number of tasks on a multiprocessor in the same time period, and it is one of the emerging concepts in computing. Complex computational problems can be resolved efficiently with the help of parallel processing. Parallel processing systems can be divided into two categories depending on the nature of the tasks: homogeneous parallel systems and heterogeneous parallel systems. In a homogeneous environment, the processors executing the different tasks are similar in capacity. In a heterogeneous environment, tasks are allocated to processors of differing capacity and speed. The main objective of parallel processing is to optimize execution speed and shorten the duration of task execution independently of the environment. In this proposed work, an optimized parallel project selection method was implemented to find the optimal resource utilization and project scheduling. With the task scheduling algorithm, the execution speed of the tasks increases and the overall average execution time decreases by allocating different tasks to the various processors.
This document discusses resource management for computer operating systems. It argues that traditional OS architecture is outdated given changes in hardware and software. The authors propose an approach where the OS allocates resources like CPU cores, memory, and bandwidth to processes to optimize responsiveness based on penalty functions that model how run time affects user experience. The goal is to continuously minimize the total penalty by adjusting resource allocations over time as user needs and process requirements change.
This document summarizes an adaptive checkpointing and replication strategy to tolerate faults in computational grids. It proposes maintaining a balance between the overheads of replication and checkpointing. Tasks are replicated on up to three resources based on each resource's probability of permanent failure. Checkpoints are taken adaptively based on the probability of recoverable failure. If a resource fails permanently, the task resumes from the last checkpoint. If a failure is recoverable, the task resumes on the same resource. This strategy aims to minimize resource wastage from replication while utilizing different resource speeds.
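The balance described above, more replicas when permanent failure is likely and more frequent checkpoints when recoverable failure is likely, can be sketched as a small decision function. The thresholds and the linear scaling here are assumptions for illustration, not the paper's actual policy:

```python
def fault_plan(p_permanent, p_recoverable, max_replicas=3,
               base_interval=600.0):
    """Return (replica count, checkpoint interval in seconds).
    Replicas grow with the probability of permanent failure, capped at
    max_replicas; the checkpoint interval shrinks as the probability of
    recoverable failure grows."""
    replicas = min(max_replicas, 1 + int(p_permanent * max_replicas))
    interval = base_interval * (1.0 - p_recoverable)  # shorter when risky
    return replicas, interval

plan_risky = fault_plan(0.8, 0.5)   # unreliable resource
plan_safe = fault_plan(0.0, 0.0)    # reliable resource
```

A resource with high failure probabilities gets the full three replicas and checkpoints twice as often; a reliable one runs a single copy at the base interval.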
This document discusses adaptive system-level scheduling under fluid traffic flow conditions in multiprocessor systems. It proposes a scheduling mechanism that accounts for traffic-centric system design. The mechanism evaluates scheduling methods based on effectiveness, robustness, and flexibility. It also introduces a processor-FPGA scheduling approach that reduces schedule length by taking advantage of FPGA reconfiguration. Simulation results show that processor-FPGA scheduling outperforms multiprocessor-only scheduling under certain traffic conditions. Future work will focus on formulating a traffic-centric scheduling method.
Comparative Analysis of Various Grid Based Scheduling Algorithmsiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Reinforcement learning based multi core scheduling (RLBMCS) for real time sys...IJECEIAES
This document summarizes a reinforcement learning based multi-core scheduling (RLBMCS) algorithm for real-time systems. The algorithm uses reinforcement learning to dynamically assign task priorities and place tasks in a multi-level feedback queue to schedule tasks across multiple processor cores. It aims to optimize metrics like CPU utilization, throughput, turnaround time, waiting time, response time and deadline meet ratio. Tasks can transition between four states - initial, objective degradation, objective progression, and objective stabilization - based on changes to a multi-objective optimization function. The scheduler acts as the agent and assigns tasks to queues/actions based on task and system states to maximize the optimization function over time.
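The queue-assignment step of such a scheduler can be sketched as a tiny tabular Q-learning agent: the state is a coarse system condition, the action is the queue a task is placed in, and the reward reflects the scheduling objectives. The state/reward encoding below is a placeholder, far simpler than RLBMCS's multi-objective function:

```python
import random

class QueueScheduler:
    """Tabular Q-learning agent: state = coarse system-load bucket,
    action = which priority queue an arriving task goes into."""
    def __init__(self, n_states, n_queues, alpha=0.5, gamma=0.9, eps=0.1):
        self.q = [[0.0] * n_queues for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state, rng=random):
        # Epsilon-greedy: mostly exploit the best-known queue.
        if rng.random() < self.eps:
            return rng.randrange(len(self.q[state]))
        row = self.q[state]
        return row.index(max(row))

    def learn(self, s, a, reward, s_next):
        # Standard Q-learning update toward reward + discounted best next value.
        target = reward + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (target - self.q[s][a])

sched = QueueScheduler(n_states=2, n_queues=3, eps=0.0)
sched.learn(0, 1, reward=1.0, s_next=0)
```

After one positive reward for queue 1 in state 0, the greedy policy picks queue 1 there.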
This document presents a genetic algorithm approach for process scheduling in distributed operating systems. It aims to minimize total execution time, maximize processor utilization, and balance load across processors. The algorithm represents each schedule as a chromosome and uses genetic operators like selection, crossover and mutation to evolve better schedules over generations. Experimental results show the proposed genetic algorithm can optimize multiple scheduling objectives simultaneously in distributed systems.
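The chromosome-and-operators scheme described above can be sketched in a few lines: a chromosome maps each task to a processor, fitness is the makespan, and selection/crossover/mutation evolve the population. Population size, rates, and the elitist selection below are illustrative choices, not the paper's parameters:

```python
import random

def makespan(chrom, times, n_procs):
    load = [0.0] * n_procs
    for task, proc in enumerate(chrom):
        load[proc] += times[task]
    return max(load)

def ga_schedule(times, n_procs, pop_size=30, gens=50, rng=None):
    rng = rng or random.Random(0)
    pop = [[rng.randrange(n_procs) for _ in times] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: makespan(c, times, n_procs))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(times))    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # mutation
                child[rng.randrange(len(times))] = rng.randrange(n_procs)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: makespan(c, times, n_procs))

times = [4, 3, 3, 2, 2, 2]
best = ga_schedule(times, n_procs=2)
best_ms = makespan(best, times, 2)
```

For this instance the ideal split is 8/8; the GA's result always lies between that lower bound and the 16-unit worst case.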
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...csandit
Computational Grid (CG) creates a large heterogeneous and distributed paradigm to manage and execute computationally intensive applications. In grid scheduling, tasks are assigned to the proper processors in the grid system for execution, considering the execution policy and the optimization objectives. In this paper, makespan and the fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and optimized. As grid scheduling is considered to be NP-hard, meta-heuristic evolutionary techniques are often used to find a solution. We propose an NSGA-II based algorithm for this purpose. The performance of the proposed Fault-Tolerance Aware NSGA-II (FTNSGA-II) has been estimated by a program written in Matlab. The simulation results evaluate the performance of the proposed algorithm, and comparison with the existing Min-Min and Max-Min algorithms demonstrates the effectiveness of the model.
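The core building block of NSGA-II style multi-objective scheduling is Pareto dominance over objective vectors (here makespan and failure probability, both minimized). A minimal sketch of extracting the non-dominated front (the objective values below are invented):

```python
def dominates(a, b):
    """a, b: objective tuples (makespan, failure_prob); both minimized.
    a dominates b if it is no worse everywhere and strictly better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

candidates = [(5, 0.1), (4, 0.2), (6, 0.05), (7, 0.3)]
front = pareto_front(candidates)
```

Here (7, 0.3) is dominated by (5, 0.1) and dropped; the other three trade makespan against fault risk and all survive.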
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING TO OPTIMIZE RESPONSE TIMEijccsa
To improve the performance of cloud computing, there are many parameters and issues to consider, including resource allocation, resource responsiveness, connectivity to resources, discovery of unused resources, resource mapping, and resource planning. Planning for the use of resources can be based on many kinds of parameters, and service response time is one of them. Users can easily measure the response time of their requests, and it has become one of the important QoS metrics. Explored further, response time can provide solutions for distribution and load balancing of resources with better efficiency, which makes it one of the most promising research directions for improving cloud technology. Therefore, this paper proposes a load balancing algorithm based on the response time of requests in the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA models to predict the coming response time, thus giving a better way of effectively resolving resource allocation with a threshold value. The experimental results are promising and valuable for load balancing with predicted response time, showing that prediction is a fruitful direction for load balancing.
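The predict-then-compare-to-threshold loop of APRA can be sketched with a simple moving average standing in for the ARIMA model (a deliberate simplification; the window size and the 200 ms threshold are invented):

```python
from collections import deque

class ResponseTimePredictor:
    """Simplified stand-in for APRA's ARIMA step: predict the next
    response time as a moving average over recent observations and
    flag a node whose predicted response time crosses the threshold."""
    def __init__(self, window=3, threshold=200.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold        # ms; hypothetical cutoff

    def observe(self, rt):
        self.history.append(rt)

    def predict(self):
        return sum(self.history) / len(self.history)

    def should_redirect(self):
        # Redirect new requests away from this node when it looks overloaded.
        return self.predict() > self.threshold

p = ResponseTimePredictor(window=3, threshold=200.0)
for rt in [100.0, 150.0, 350.0]:
    p.observe(rt)
```

With these three samples the prediction sits exactly at the threshold, so no redirect; one more slow sample (e.g. 400 ms) pushes the window average over it.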
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTINGijdpsjournal
Cloud computing has become an ideal computing paradigm for scientific and commercial applications. The increased availability of cloud models and allied developing models creates an easier cloud computing environment. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling (DVFS) based strategy of task allocation can minimize the overall consumption of energy and meet the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which causes performance degradation. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the consumption of energy and time during computation, reconfiguration, and communication. Our proposed model confirms the effectiveness of its implementation, scalability, power consumption, and execution time with respect to other existing approaches.
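The DVFS trade-off underlying such schedulers can be sketched with the common simplification that, when voltage scales with frequency, energy for a fixed amount of work grows roughly with f squared; the lowest frequency that still meets the deadline is therefore the energy-optimal choice. The constant k and the frequency list below are illustrative, not from the paper:

```python
def pick_frequency(cycles, deadline, freqs, k=1e-27):
    """Choose the lowest frequency (Hz) that still meets the deadline.
    Uses the simplified model E = k * f^2 * cycles (V scaled with f):
    time = cycles / f, dynamic power ~ k * f^3, so E = power * time."""
    for f in sorted(freqs):
        if cycles / f <= deadline:
            return f, k * f ** 2 * cycles
    raise ValueError("deadline infeasible at every available frequency")

# A 2-gigacycle task with a 1.5 s deadline: 1 GHz is too slow (2 s),
# so the scheduler settles on 2 GHz rather than burning energy at 3 GHz.
freq, energy = pick_frequency(2e9, 1.5, [1e9, 2e9, 3e9])
```

Running at 3 GHz would meet the deadline too, but at 2.25x the energy of 2 GHz under this model.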
A Framework and Methods for Dynamic Scheduling of a Directed Acyclic Graph on...IDES Editor
The data flow model is gaining popularity as a programming paradigm for multi-core processors. Efficient scheduling of an application modeled by a Directed Acyclic Graph (DAG) is a key issue when performance is very important. A DAG represents a computational solution in which the nodes represent tasks to be executed and the edges represent precedence constraints among the tasks. The task scheduling problem in general is NP-complete [2]. Several static scheduling heuristics have been proposed, but the major problem in static list scheduling is the inherent difficulty of exactly estimating task cost and edge cost in a DAG, together with its inability to manage the runtime behavior of tasks. This underlines the need for dynamic scheduling of a DAG. This paper presents how, in general, dynamic scheduling of a DAG can be done, and proposes four simple methods to perform it. These methods have been simulated and experimented with using a representative set of DAG-structured computations from both synthetic and real problems. The proposed dynamic scheduler's performance is found to be comparable with that of static scheduling methods. A performance comparison of the proposed dynamic scheduling methods is also carried out.
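The general idea of dynamic DAG scheduling, dispatching a task as soon as its predecessors finish rather than fixing the whole order up front, can be sketched as a small simulation. This is a generic list-scheduling sketch, not one of the paper's four methods; the diamond DAG below is invented:

```python
from collections import deque

def dynamic_dag_schedule(tasks, deps, n_workers):
    """tasks: {name: duration}; deps: {name: set of predecessor names}.
    Dispatch each task once its predecessors finish, onto the worker
    that frees up earliest. Returns each task's finish time."""
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    succ = {t: [] for t in tasks}
    for t, ps in deps.items():
        for p in ps:
            succ[p].append(t)
    ready = deque(t for t in tasks if indeg[t] == 0)
    workers = [0.0] * n_workers
    finish = {}
    while ready:
        t = ready.popleft()
        w = min(range(n_workers), key=lambda i: workers[i])
        start = max(workers[w],
                    max((finish[p] for p in deps.get(t, ())), default=0.0))
        finish[t] = start + tasks[t]
        workers[w] = finish[t]
        for s in succ[t]:          # unlock successors at run time
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return finish

# Diamond DAG: A feeds B and C, which both feed D.
fin = dynamic_dag_schedule({"A": 2, "B": 1, "C": 3, "D": 1},
                           {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}, 2)
```

B and C run in parallel after A; D must wait for the slower branch (C, done at 5) and finishes at 6.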
This document proposes a fair scheduling algorithm with dynamic load balancing for grid computing. It begins by introducing grid computing and the need for efficient load balancing algorithms to distribute tasks. It then describes dynamic load balancing approaches, including information, triggering, resource type, location, and selection policies. The proposed algorithm uses a fair scheduling approach that assigns tasks to processors based on their estimated fair completion times to ensure tasks receive equal shares of computing resources. It also includes a dynamic load balancing component that migrates tasks between processors to maintain balanced loads across all resources. Simulation results demonstrated the algorithm achieved balanced loads across processors and reduced overall task completion times.
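The fair-completion-time idea above, estimate when a task would finish if the processor's capacity were shared equally among its tasks, and place it where that estimate is smallest, can be sketched as follows (task sizes and processor capacities are invented):

```python
def fair_assign(task_sizes, capacities):
    """Greedy fair-share placement sketch. A task's estimated fair
    completion time on processor p is size / (capacity of p divided
    equally among the tasks already there plus this one)."""
    placed = [[] for _ in capacities]
    assignment = []
    for i, size in enumerate(task_sizes):
        def fair_ct(p):
            share = capacities[p] / (len(placed[p]) + 1)
            return size / share
        p = min(range(len(capacities)), key=fair_ct)
        placed[p].append(i)
        assignment.append(p)
    return assignment

# Three equal tasks, two equal processors: fair estimates alternate
# the placements, 2-1, rather than piling tasks on one processor.
assignment = fair_assign([10.0, 10.0, 10.0], [5.0, 5.0])
```

A dynamic load-balancing pass, as in the proposed algorithm, would then migrate tasks if the actual loads drift from these estimates.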
Scheduling of Heterogeneous Tasks in Cloud Computing using Multi Queue (MQ) A...IRJET Journal
This document proposes a Multi Queue (MQ) task scheduling algorithm for heterogeneous tasks in cloud computing. It aims to improve upon the Round Robin and Weighted Round Robin algorithms by overcoming their drawbacks. The MQ algorithm splits tasks and resources into separate queues based on size/length and speed. Small tasks are scheduled on slower resources and large tasks on faster resources. The document compares the performance of MQ to Round Robin and Weighted Round Robin algorithms based on makespan, average resource utilization, and load balancing level using CloudSim simulations. The results show that MQ scheduling performs better than the other algorithms in most cases in terms of these metrics.
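The MQ split described above, tasks partitioned by length, resources by speed, small tasks to slow resources and large tasks to fast ones, can be sketched directly. The cutoffs, task lengths (MI), and speeds (MIPS) below are invented:

```python
def multi_queue_assign(tasks, resources, cutoff_len, cutoff_mips):
    """tasks: {name: length in MI}; resources: {name: speed in MIPS}.
    Split both into small/slow and large/fast queues, then schedule
    each task queue round-robin onto its matching resource pool."""
    small = {t: l for t, l in tasks.items() if l < cutoff_len}
    large = {t: l for t, l in tasks.items() if l >= cutoff_len}
    slow = [r for r, s in resources.items() if s < cutoff_mips]
    fast = [r for r, s in resources.items() if s >= cutoff_mips]
    plan = {}
    for queue, pool in ((small, slow), (large, fast)):
        for i, t in enumerate(sorted(queue)):
            plan[t] = pool[i % len(pool)]   # round-robin within the queue
    return plan

plan = multi_queue_assign({"t1": 100, "t2": 900, "t3": 50},
                          {"r_slow": 10, "r_fast": 100},
                          cutoff_len=500, cutoff_mips=50)
```

Reserving the fast resource for the 900 MI task keeps it off the critical path, which is how MQ improves makespan over plain round-robin.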
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENTIJCNCJournal
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements, which keep varying. This dynamic cloud environment demands complex algorithms to resolve the problem of task allotment. The overall performance of cloud systems is rooted in the efficiency of the task scheduling algorithms, and the dynamic nature of cloud systems makes it challenging to find an optimal solution satisfying all the evaluation metrics. The new approach is formulated on the Round Robin and Shortest Job First algorithms: Round Robin reduces starvation, and Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are combined to improve the makespan of user tasks.
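One plausible way to combine the two algorithms, this is a sketch of the general idea, not necessarily the paper's exact formulation, is to order tasks by burst time (SJF) and then serve them round-robin with a quantum so long tasks cannot starve short ones:

```python
from collections import deque

def sjf_round_robin(bursts, quantum):
    """Hybrid sketch: SJF ordering + round-robin quantum.
    bursts: {task: burst time}. Returns completion time of each task."""
    queue = deque(sorted(bursts.items(), key=lambda kv: kv[1]))
    remaining = dict(queue)
    clock = 0
    completion = {}
    while queue:
        name, _ = queue.popleft()
        run = min(quantum, remaining[name])   # run for one quantum at most
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = clock
        else:
            queue.append((name, remaining[name]))   # back of the queue
    return completion

done = sjf_round_robin({"a": 2, "b": 5, "c": 1}, quantum=2)
```

The short tasks c and a complete at times 1 and 3, well before the long task b at 8, keeping average waiting time low while b still makes steady progress.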
This document proposes a new task scheduling algorithm called Dynamic Heterogeneous Shortest Job First (DHSJF) for heterogeneous cloud computing systems. DHSJF aims to improve performance metrics like reduced makespan and low energy consumption by considering the heterogeneity of resources and workloads. It discusses existing scheduling algorithms like Round Robin, First Come First Serve and their limitations. The proposed DHSJF algorithm prioritizes tasks with the shortest estimated completion time to optimize resource utilization and improve overall performance of the cloud computing system. Simulation results show that DHSJF provides better results for metrics like average waiting time and turnaround time as compared to Round Robin and First Come First Serve scheduling algorithms.
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDM O...ijgca
The ever-increasing status of the cloud computing hypothesis and the budding concept of federated cloud computing have enthused research efforts towards intellectual cloud service selection, aimed at developing techniques for enabling cloud users to gain maximum benefit from cloud computing by selecting services which provide optimal performance at the lowest possible cost. Cloud computing is a novel paradigm for the provision of computing infrastructure, which aims to shift the location of the computing infrastructure to the network in order to reduce the maintenance costs of hardware and software resources. Cloud computing systems vitally provide access to large pools of resources. Resources provided by cloud computing systems hide a great deal of services from the user through virtualization. In this paper, the cloud data center is modelled as an M/G/1 queuing system with single task arrivals and a task request buffer of infinite capacity.
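For an M/G/1 queue with infinite buffer, the standard performance result is the Pollaczek-Khinchine formula for the mean waiting time, which needs only the arrival rate and the first two moments of the service time. A small sketch (the paper's actual performance factors may go further than this):

```python
def mg1_mean_response(lam, es, es2):
    """Pollaczek-Khinchine formula for an M/G/1 queue.
    lam -- arrival rate; es -- E[S]; es2 -- E[S^2].
    Returns (utilization rho, mean waiting time Wq, mean response time)."""
    rho = lam * es
    if rho >= 1:
        raise ValueError("unstable queue: utilization >= 1")
    wq = lam * es2 / (2 * (1 - rho))
    return rho, wq, wq + es

# Sanity check against M/M/1: exponential service with mean 1 has
# E[S^2] = 2, and with lam = 0.5 the M/M/1 answer is T = 1/(mu - lam) = 2.
rho, wq, t = mg1_mean_response(0.5, 1.0, 2.0)
```

The formula also shows why service-time *variability* (through E[S^2]) degrades a data center's response time even at fixed utilization.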
A MULTI-OBJECTIVE PERSPECTIVE FOR OPERATOR SCHEDULING USING FINEGRAINED DVS A...VLSICS Design
The stringent power budgets of fine-grained power-managed digital integrated circuits have driven chip designers to optimize power at the cost of area and delay, which were the traditional cost criteria for circuit optimization. The emerging scenario motivates us to revisit the classical operator scheduling problem under the availability of DVFS-enabled functional units that can trade off cycles with power. We study the design space defined by this trade-off and present a branch-and-bound (B/B) algorithm to explore this state space and report the Pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and is able to attain sufficient area and power gains for complex benchmarks when timing constraints are relaxed by a sufficient amount. Experimental results show that the algorithm, operating without any user constraint (area/power), is able to solve the problem for most available benchmarks, and that the use of power budget or area budget constraints leads to significant performance gains.
A survey of various scheduling algorithm in cloud computing environmenteSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Job Resource Ratio Based Priority Driven Scheduling in Cloud Computingijsrd.com
Cloud Computing is an emerging technology in the area of parallel and distributed computing. Clouds consist of a collection of virtualized resources, which include both computational and storage facilities that can be provisioned on demand, depending on the users' needs. Job scheduling is one of the major activities performed in all computing environments, and in cloud computing it is performed to increase efficiency and gain maximum profit. In this paper we propose a new scheduling algorithm based on priority, where the priority is based on the ratio of job to resource. To calculate the priority of a job we use the analytic hierarchy process. We also compare results with other algorithms, such as first-come-first-serve and round-robin.
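The analytic hierarchy process step can be sketched with the common normalized-column shortcut: given a pairwise-comparison matrix over jobs, average the normalized columns to approximate the principal eigenvector, which serves as the priority vector. The matrix below is an invented example, not the paper's criteria:

```python
def ahp_priorities(matrix):
    """Approximate AHP priority vector from a pairwise-comparison matrix
    by averaging the normalized columns (a standard shortcut for the
    principal eigenvector; exact when the matrix is consistent)."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

# Job 1 judged twice as important as job 2 (consistent 2x2 matrix).
pr = ahp_priorities([[1.0, 2.0], [0.5, 1.0]])
```

The priorities sum to 1 and preserve the 2:1 judgment, so the scheduler can rank jobs directly by these weights.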
The document discusses using a genetic algorithm to schedule tasks in a cloud computing environment. It aims to minimize task execution time and reduce computational costs compared to the traditional Round Robin scheduling algorithm. The proposed genetic algorithm mimics natural selection and genetics to evolve optimal task schedules. It was tested using the CloudSim simulation toolkit and results showed the genetic algorithm provided better performance than Round Robin scheduling.
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID ijgca
Grid computing is an accumulation of heterogeneous, dynamic resources from multiple administrative areas, geographically distributed, that can be utilized to reach a mutual end. Resource provisioning-based scheduling in large-scale distributed environments such as grid computing brings in new requirement challenges that do not arise in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to the resources that complete it, in an environment that otherwise cannot fulfill the user requirements adequately. User satisfaction while providing resources can increase the benefit to resource suppliers. Resource scheduling has to satisfy the multiple constraints specified by the user, and choosing a resource that satisfies multiple constraints is a tedious process. This problem is solved by introducing a particle swarm optimization based heuristic scheduling algorithm, which attempts to select the most suitable resource from the set of available resources. The primary parameters taken in this work for selecting the most suitable resource are makespan and cost. The experimental results show that the proposed method yields optimal scheduling while satisfying all user requirements.
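A PSO-based resource selection over makespan and cost can be sketched as follows. This is a generic discrete-PSO adaptation, not the paper's algorithm: each particle holds one real value per task, rounded into a resource index, and fitness mixes makespan and total cost with equal weights (an assumption):

```python
import random

def pso_schedule(times, costs, n_res, w=0.7, c1=1.5, c2=1.5,
                 particles=20, iters=60, rng=None):
    """times[t][r], costs[t][r]: execution time / cost of task t on
    resource r. Returns a task -> resource mapping found by PSO."""
    rng = rng or random.Random(1)
    n = len(times)

    def decode(pos):                 # real position -> resource indices
        return [min(n_res - 1, max(0, round(x))) for x in pos]

    def fitness(pos):                # makespan + total cost, minimized
        sched = decode(pos)
        load = [0.0] * n_res
        cost = 0.0
        for t, r in enumerate(sched):
            load[r] += times[t][r]
            cost += costs[t][r]
        return max(load) + cost

    swarm = [[rng.uniform(0, n_res - 1) for _ in range(n)]
             for _ in range(particles)]
    vel = [[0.0] * n for _ in range(particles)]
    pbest = [p[:] for p in swarm]
    gbest = min(swarm, key=fitness)[:]
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for d in range(n):       # standard velocity/position update
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - p[d])
                             + c2 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
                if fitness(p) < fitness(gbest):
                    gbest = p[:]
    return decode(gbest)

# Two tasks, two resources; each task is cheap and fast on a different one.
sched = pso_schedule([[2, 4], [4, 2]], [[1, 3], [3, 1]], n_res=2)
```

On this tiny instance the best mapping puts each task on its preferred resource, balancing makespan and cost.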
This document proposes a genetic algorithm called Workflow Scheduling for Public Cloud Using Genetic Algorithm (WSGA) to optimize the cost of executing workflows in the public cloud. It discusses how genetic algorithms can be applied to the workflow scheduling problem to generate optimal schedules. The WSGA represents potential scheduling solutions as chromosomes, uses a fitness function to evaluate scheduling costs, and applies genetic operators like selection, crossover and mutation to evolve new schedules over multiple iterations. The goal is to minimize total execution cost while meeting workflow dependencies and deadline constraints. An experimental setup is described and the WSGA approach is claimed to reduce costs more than other heuristic scheduling algorithms for communication-intensive workflows.
Effective and Efficient Job Scheduling in Grid ComputingAditya Kokadwar
The integration of remote and diverse resources and the increasing computational needs of Grand Challenge problems, combined with the rapid growth of the internet and communication technologies, have led to the development of global computational grids. Grid computing is a prevailing technology which unites underutilized resources in order to support sharing of resources and services distributed across numerous administrative regions. An efficient and effective scheduling system is essential in order to achieve the promising capacity of grids. The main goal of scheduling is to maximize resource utilization and minimize the processing time and cost of jobs. In this research, the objective is to prioritize jobs based on execution cost and then allocate resources with minimum cost, merging this with a conventional job grouping strategy to provide better and more efficient job scheduling that benefits both the user and the resource broker. The proposed scheduling approach employs a dynamic cost-based job scheduling algorithm to map jobs efficiently onto available grid resources. It also improves the communication-to-computation ratio (CCR) and the utilization of available resources by grouping user jobs before resource allocation.
This document presents a genetic algorithm approach for process scheduling in distributed operating systems. It aims to minimize total execution time, maximize processor utilization, and balance load across processors. The algorithm represents each schedule as a chromosome and uses genetic operators like selection, crossover and mutation to evolve better schedules over generations. Experimental results show the proposed genetic algorithm can optimize multiple scheduling objectives simultaneously in distributed systems.
Fault-Tolerance Aware Multi Objective Scheduling Algorithm for Task Schedulin...csandit
Computational Grid (CG) creates a large heterogeneous and distributed paradigm to manage and execute computationally intensive applications. In grid scheduling, tasks are assigned to the proper processors in the grid system for execution, considering the execution policy and the optimization objectives. In this paper, makespan and the fault tolerance of the computational nodes of the grid, two important parameters for task execution, are considered and optimized. As grid scheduling is considered NP-hard, meta-heuristic evolutionary techniques are often used to find a solution. We have proposed an NSGA-II for this purpose. The performance of the proposed Fault-Tolerance Aware NSGA-II (FTNSGA II) has been estimated with a program written in MATLAB. The simulation results evaluate the performance of the proposed algorithm, and comparison with the existing Min-Min and Max-Min models demonstrates the effectiveness of the model.
LOAD BALANCING ALGORITHM ON CLOUD COMPUTING FOR OPTIMIZE RESPONSE TIMEijccsa
To improve the performance of cloud computing there are many parameters and issues to consider, including resource allocation, resource responsiveness, connectivity to resources, exploration of unused resources, corresponding resource mapping, and resource planning. Planning for the use of resources can be based on many kinds of parameters, and service response time is one of them. Users can easily observe the response time of their requests, and it has become one of the important QoS metrics. Explored further, response time can drive solutions for distribution and load balancing of resources with better efficiency; this is one of the most promising research directions for improving cloud technology. Therefore, this paper proposes a load balancing algorithm based on the response time of requests on the cloud, named APRA (ARIMA Prediction of Response Time Algorithm). The main idea is to use ARIMA models to predict the coming response time, thus giving a better way of effectively resolving resource allocation against a threshold value. The experimental results are promising and valuable for load balancing with predicted response time, showing that prediction is a strong direction for load balancing.
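APRA's ARIMA model is not reproduced here, but the underlying idea — predict each node's next response time from recent history and migrate load when the prediction crosses a threshold — can be sketched with a simple moving-average predictor standing in for ARIMA. Node names, histories, and the threshold below are all hypothetical:

```python
from collections import deque

WINDOW = 4
THRESHOLD_MS = 200.0   # hypothetical response-time ceiling

def predict_next(history):
    """Stand-in predictor: mean of the last WINDOW observations.
    APRA fits an ARIMA model here instead."""
    recent = list(history)[-WINDOW:]
    return sum(recent) / len(recent)

def balance(nodes):
    """Plan one migration away from every node whose predicted
    response time exceeds the threshold, toward the coolest node."""
    loads = {n: predict_next(h) for n, h in nodes.items()}
    coolest = min(loads, key=loads.get)
    return [(n, coolest) for n, p in loads.items()
            if p > THRESHOLD_MS and n != coolest]

# Hypothetical per-node response-time histories (ms).
nodes = {
    "vm-a": deque([120, 130, 125, 140]),
    "vm-b": deque([210, 230, 260, 290]),
    "vm-c": deque([90, 95, 100, 98]),
}
print(balance(nodes))   # vm-b is predicted hot -> migrate toward vm-c
```

The benefit of predicting rather than reacting is that vm-b's rising trend triggers a migration before its current load actually breaches the threshold badly.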
REAL-TIME ADAPTIVE ENERGY-SCHEDULING ALGORITHM FOR VIRTUALIZED CLOUD COMPUTINGijdpsjournal
Cloud computing has become an ideal computing paradigm for scientific and commercial applications. The increased availability of cloud models and allied developing models creates an easier cloud computing environment. Energy consumption and effective energy management are two important challenges in virtualized computing platforms. Energy consumption can be minimized by allocating computationally intensive tasks to a resource at a suitable frequency. An optimal Dynamic Voltage and Frequency Scaling (DVFS) based strategy of task allocation can minimize the overall consumption of energy and meet the required QoS. However, such strategies do not control the internal and external switching of server frequencies, which degrades performance. In this paper, we propose the Real-Time Adaptive Energy-Scheduling (RTAES) algorithm, which exploits the reconfiguration capability of Cloud Computing Virtualized Data Centers (CCVDCs) for computationally intensive applications. The RTAES algorithm minimizes the consumption of energy and time during computation, reconfiguration, and communication. Our proposed model confirms its effectiveness in implementation, scalability, power consumption, and execution time with respect to other existing approaches.
A Framework and Methods for Dynamic Scheduling of a Directed Acyclic Graph on...IDES Editor
The data flow model is gaining popularity as a programming paradigm for multi-core processors. Efficient scheduling of an application modeled by a Directed Acyclic Graph (DAG) is a key issue when performance is very important. A DAG represents a computational solution in which the nodes represent tasks to be executed and the edges represent precedence constraints among the tasks. The task scheduling problem in general is an NP-complete problem [2]. Several static scheduling heuristics have been proposed, but the major problem in static list scheduling is the inherent difficulty of exactly estimating task and edge costs in a DAG, as well as its inability to account for the runtime behavior of tasks. This underlines the need for dynamic scheduling of a DAG. This paper presents how dynamic scheduling of a DAG can be done in general, and proposes four simple methods to perform it. These methods have been simulated and experimented with using a representative set of DAG-structured computations from both synthetic and real problems. The proposed dynamic scheduler's performance is found to be comparable with that of static scheduling methods. A performance comparison of the proposed dynamic scheduling methods is also carried out.
This document proposes a fair scheduling algorithm with dynamic load balancing for grid computing. It begins by introducing grid computing and the need for efficient load balancing algorithms to distribute tasks. It then describes dynamic load balancing approaches, including information, triggering, resource type, location, and selection policies. The proposed algorithm uses a fair scheduling approach that assigns tasks to processors based on their estimated fair completion times to ensure tasks receive equal shares of computing resources. It also includes a dynamic load balancing component that migrates tasks between processors to maintain balanced loads across all resources. Simulation results demonstrated the algorithm achieved balanced loads across processors and reduced overall task completion times.
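The fair-scheduling step summarized above — assign each task to the processor giving the smallest estimated completion time, so loads stay even and no task starves — can be sketched as a greedy loop. Task sizes and processor speeds below are hypothetical:

```python
def fair_schedule(tasks, speeds):
    """Greedy min-completion-time assignment: each task goes to the
    processor that would finish it earliest given current queues."""
    finish = [0.0] * len(speeds)          # running finish time per processor
    placement = []
    for size in tasks:
        # Estimated completion time of this task on each processor.
        etc = [finish[p] + size / speeds[p] for p in range(len(speeds))]
        best = etc.index(min(etc))
        finish[best] = etc[best]
        placement.append(best)
    return placement, max(finish)         # assignment and makespan

tasks = [40, 10, 25, 30, 5, 50]
speeds = [1.0, 2.0]                       # processor 1 is twice as fast
plan, makespan = fair_schedule(tasks, speeds)
print(plan, makespan)
```

The dynamic load-balancing component of the paper would then migrate queued tasks between processors when their running finish times drift apart; the greedy pass above only balances at assignment time.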
Scheduling of Heterogeneous Tasks in Cloud Computing using Multi Queue (MQ) A...IRJET Journal
This document proposes a Multi Queue (MQ) task scheduling algorithm for heterogeneous tasks in cloud computing. It aims to improve upon the Round Robin and Weighted Round Robin algorithms by overcoming their drawbacks. The MQ algorithm splits tasks and resources into separate queues based on size/length and speed. Small tasks are scheduled on slower resources and large tasks on faster resources. The document compares the performance of MQ to Round Robin and Weighted Round Robin algorithms based on makespan, average resource utilization, and load balancing level using CloudSim simulations. The results show that MQ scheduling performs better than the other algorithms in most cases in terms of these metrics.
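The MQ idea as summarized — partition tasks by length and resources by speed, then pair small tasks with slow resources and large tasks with fast ones — could be sketched as below. The median split points are an assumption; the paper's exact queue policy may differ:

```python
def multi_queue(tasks, resources):
    """Split tasks and resources at their midpoints, then map the
    small-task queue onto slow resources and the large onto fast ones."""
    tasks = sorted(tasks)                       # task lengths, ascending
    resources = sorted(resources)               # resource speeds, ascending
    half_t, half_r = len(tasks) // 2, len(resources) // 2
    small, large = tasks[:half_t], tasks[half_t:]
    slow, fast = resources[:half_r], resources[half_r:]
    schedule = []
    for i, t in enumerate(small):               # round-robin within a queue
        schedule.append((t, slow[i % len(slow)]))
    for i, t in enumerate(large):
        schedule.append((t, fast[i % len(fast)]))
    return schedule

print(multi_queue([100, 20, 300, 50], [500, 2000]))
```

Keeping long tasks off slow resources is what lowers the makespan relative to plain Round Robin, which ignores both task length and resource speed.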
DYNAMIC TASK SCHEDULING BASED ON BURST TIME REQUIREMENT FOR CLOUD ENVIRONMENTIJCNCJournal
Cloud computing has an indispensable role in the modern digital scenario. The fundamental challenge of cloud systems is to accommodate user requirements which keep on varying. This dynamic cloud environment demands the necessity of complex algorithms to resolve the trouble of task allotment. The overall performance of cloud systems is rooted in the efficiency of task scheduling algorithms. The dynamic property of cloud systems makes it challenging to find an optimal solution satisfying all the evaluation metrics. The new approach is formulated on the Round Robin and the Shortest Job First algorithms. The Round Robin method reduces starvation, and the Shortest Job First decreases the average waiting time. In this work, the advantages of both algorithms are incorporated to improve the makespan of user tasks.
This document proposes a new task scheduling algorithm called Dynamic Heterogeneous Shortest Job First (DHSJF) for heterogeneous cloud computing systems. DHSJF aims to improve performance metrics like reduced makespan and low energy consumption by considering the heterogeneity of resources and workloads. It discusses existing scheduling algorithms like Round Robin, First Come First Serve and their limitations. The proposed DHSJF algorithm prioritizes tasks with the shortest estimated completion time to optimize resource utilization and improve overall performance of the cloud computing system. Simulation results show that DHSJF provides better results for metrics like average waiting time and turnaround time as compared to Round Robin and First Come First Serve scheduling algorithms.
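The hybrid described in these two abstracts — Shortest Job First ordering to cut average waiting time, with a Round Robin quantum so long jobs cannot starve the queue — might be sketched as follows; the quantum value is hypothetical:

```python
def sjf_rr(bursts, quantum=10):
    """Run the ready queue shortest-remaining-first, but cap each
    dispatch at `quantum` units so long jobs cannot monopolize the CPU."""
    remaining = dict(enumerate(bursts))
    clock, finish = 0, {}
    while remaining:
        job = min(remaining, key=remaining.get)   # SJF pick
        run = min(quantum, remaining[job])        # RR slice
        clock += run
        remaining[job] -= run
        if remaining[job] == 0:
            del remaining[job]
            finish[job] = clock
    return finish

bursts = [24, 3, 7]
finish = sjf_rr(bursts)
waits = [finish[j] - bursts[j] for j in range(len(bursts))]
print(finish, sum(waits) / len(waits))
```

With these bursts the short jobs complete first (average wait 13/3 units), while the 24-unit job still progresses in bounded slices rather than waiting indefinitely behind a stream of short arrivals.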
PERFORMANCE FACTORS OF CLOUD COMPUTING DATA CENTERS USING [(M/G/1) : (∞/GDM O...ijgca
The ever-increasing status of the cloud computing hypothesis and the budding concept of federated cloud computing have enthused research efforts towards intellectual cloud service selection, aimed at developing techniques for enabling cloud users to gain maximum benefit from cloud computing by selecting services which provide optimal performance at the lowest possible cost. Cloud computing is a novel paradigm for the provision of computing infrastructure, which aims to shift the location of the computing infrastructure to the network in order to reduce the maintenance costs of hardware and software resources. Cloud computing systems vitally provide access to large pools of resources. Resources provided by cloud computing systems hide a great deal of services from the user through virtualization. In this paper, the cloud data center is modelled as a queuing system with single task arrivals and a task request buffer of infinite capacity.
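For an M/G/1 queue with an infinite buffer like the one modelled here, the mean waiting time follows the Pollaczek-Khinchine formula, W = λE[S²]/(2(1 − ρ)) with ρ = λE[S]. A small sketch, with the arrival and service figures invented for illustration:

```python
def mg1_wait(lam, es, es2):
    """Pollaczek-Khinchine mean waiting time for an M/G/1 queue:
    W = lambda * E[S^2] / (2 * (1 - rho)), where rho = lambda * E[S]."""
    rho = lam * es
    assert rho < 1, "queue is unstable"
    return lam * es2 / (2 * (1 - rho))

# Hypothetical data-center figures: 40 requests/s, exponential service
# with mean 10 ms (for the exponential case, E[S^2] = 2 * E[S]^2).
lam, es = 40.0, 0.010
es2 = 2 * es ** 2
w = mg1_wait(lam, es, es2)
print(round(w * 1000, 3), "ms")   # mean wait in queue: 6.667 ms
```

Because the formula depends on E[S²], service-time variability raises waiting times even at the same utilization — which is why a general (G) service distribution, not just the mean rate, matters in data-center modelling.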
A MULTI-OBJECTIVE PERSPECTIVE FOR OPERATOR SCHEDULING USING FINEGRAINED DVS A...VLSICS Design
The stringent power budgets of fine-grained power-managed digital integrated circuits have driven chip designers to optimize power at the cost of area and delay, which were the traditional cost criteria for circuit optimization. The emerging scenario motivates us to revisit the classical operator scheduling problem under the availability of DVFS-enabled functional units that can trade off cycles with power. We study the design space defined by this trade-off and present a branch-and-bound (B&B) algorithm to explore this state space and report the Pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and is able to attain sufficient area and power gains for complex benchmarks when timing constraints are relaxed by a sufficient amount. Experimental results show that the algorithm, operating without any user constraint (area/power), is able to solve the problem for most available benchmarks, and the use of power-budget or area-budget constraints leads to significant performance gains.
A survey of various scheduling algorithm in cloud computing environmenteSAT Publishing House
Job Resource Ratio Based Priority Driven Scheduling in Cloud Computingijsrd.com
Cloud computing is an emerging technology in the area of parallel and distributed computing. Clouds consist of a collection of virtualized resources, which include both computational and storage facilities that can be provisioned on demand, depending on users' needs. Job scheduling is one of the major activities performed in all computing environments, and in cloud computing it is performed to increase efficiency and gain maximum profit. In this paper we propose a new scheduling algorithm based on priority, where the priority is derived from the ratio of job to resource. To calculate the priority of a job we use the analytic hierarchy process. We also compare the results with other algorithms such as First Come First Serve and Round Robin.
The document discusses using a genetic algorithm to schedule tasks in a cloud computing environment. It aims to minimize task execution time and reduce computational costs compared to the traditional Round Robin scheduling algorithm. The proposed genetic algorithm mimics natural selection and genetics to evolve optimal task schedules. It was tested using the CloudSim simulation toolkit and results showed the genetic algorithm provided better performance than Round Robin scheduling.
OPTIMIZED RESOURCE PROVISIONING METHOD FOR COMPUTATIONAL GRID ijgca
Grid computing is an accumulation of heterogeneous, dynamic resources from multiple administrative areas, geographically distributed, that can be utilized to reach a mutual end. The development of resource provisioning-based scheduling in large-scale distributed environments like grid computing brings in new requirement challenges that are not considered in traditional distributed computing environments. A computational grid applies the resources of many systems in a network to a single problem at the same time. Grid scheduling is the method by which specified work is assigned to the resources that complete it, in an environment that otherwise cannot fulfill user requirements considerably. Satisfying users while providing resources can increase the benefit to resource suppliers. Resource scheduling has to satisfy the multiple constraints specified by the user, and selecting a resource that satisfies multiple constraints is the most tedious part of the process. This problem is addressed by introducing a particle swarm optimization-based heuristic scheduling algorithm, which attempts to select the most suitable resource from the set of available resources. The primary parameters used in this work for selecting the most suitable resource are makespan and cost. The experimental results show that the proposed method yields optimal scheduling while satisfying all user requirements.
This document provides a comparative analysis of various grid-based scheduling algorithms. It discusses six different algorithms: Min-Min, Sufferage, Heterogeneous Earliest Finish Time (HEFT), Critical Path-On-a-Processor (CPOP), Reliability Aware Scheduling Algorithm with Duplication of HDC System (RASD), and Hierarchical Job Scheduling for Clusters of Workstations (HJS). It compares the algorithms based on parameters like response time, resource utilization, load balancing, and considers factors like architecture, environment, and dynamicity. The document concludes that grid scheduling is important for optimizing resource allocation in distributed, heterogeneous environments.
An enhanced adaptive scoring job scheduling algorithm with replication strate...eSAT Publishing House
This document describes an enhanced adaptive scoring job scheduling algorithm with replication strategy for grid environments. The algorithm aims to improve upon an existing adaptive scoring job scheduling algorithm by identifying whether jobs are data-intensive or computation-intensive. It then divides large jobs into subtasks, replicates the subtasks, and allocates the replicas to clusters based on a computed cluster score in order to improve resource utilization and job completion times. The algorithm is evaluated through simulation using the GridSim toolkit.
RSDC (Reliable Scheduling Distributed in Cloud Computing)IJCSEA Journal
This document summarizes the PPDD algorithm for scheduling divisible loads originating from multiple sites in distributed computing environments. The PPDD algorithm is a two-phase approach that first derives a near-optimal load distribution and then considers actual communication delays when transferring load fractions. It guarantees a near-optimal solution and improved performance over previous algorithms like RSA by avoiding unnecessary load transfers between processors.
DGBSA : A BATCH JOB SCHEDULINGALGORITHM WITH GA WITH REGARD TO THE THRESHOLD ...IJCSEA Journal
In this paper, we provide a scheduler for batch jobs using a GA with a threshold detector. In the proposed algorithm, we schedule batches of independent jobs with a new technique so that their schedule can be optimized. To do this, we use a threshold detector; then, among the selected jobs, processing resources can process batch jobs by priority. The hierarchy of tasks in each batch is determined using the DGBSA algorithm. Building on previous work, we provide an algorithm that, by adding specific parameters to the fitness functions of earlier algorithms, develops an optimized fitness function that is used in the proposed algorithm. According to assessments of the DGBSA algorithm, it performs better than similar algorithms: the effective parameters used in the proposed algorithm reduce the total wasted time compared with previous algorithms, and the algorithm improves on the problems of earlier batch-processing approaches with a new technique.
Scheduling Algorithm Based Simulator for Resource Allocation Task in Cloud Co...IRJET Journal
This document proposes a scheduling algorithm for allocating resources in cloud computing based on the Project Evaluation and Review Technique (PERT). It aims to address issues like starvation of lower priority tasks. The algorithm models task allocation as a directed acyclic graph and uses PERT to schedule critical and non-critical tasks, prioritizing higher priority tasks. The algorithm is evaluated against other scheduling methods and shows improvements in reducing completion time and optimizing resource allocation for all tasks.
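The PERT step described above — longest-path analysis over the task DAG to identify critical tasks, which then get scheduling priority — can be sketched as below. The small DAG and the task durations are hypothetical:

```python
def critical_path(duration, deps):
    """Longest path through a task DAG: earliest-finish times via a
    memoized forward pass, then backtrack one critical chain."""
    ef = {}
    def finish(t):
        if t not in ef:
            ef[t] = duration[t] + max((finish(d) for d in deps[t]), default=0)
        return ef[t]
    for t in duration:
        finish(t)
    # Backtrack from the task that finishes last, always following
    # the predecessor with the latest earliest-finish time.
    chain, t = [], max(ef, key=ef.get)
    while True:
        chain.append(t)
        if not deps[t]:
            break
        t = max(deps[t], key=ef.get)
    return ef, list(reversed(chain))

# Hypothetical workflow: durations and predecessor lists.
duration = {"A": 3, "B": 2, "C": 4, "D": 1}
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
ef, chain = critical_path(duration, deps)
print(ef["D"], chain)   # makespan and one critical chain
```

Tasks on the chain (here A, C, D) have zero slack: delaying any of them delays the whole schedule, which is why a PERT-based scheduler runs them at higher priority than off-path tasks like B.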
Adaptive check-pointing and replication strategy to tolerate faults in comput...IOSR Journals
This document summarizes an adaptive checkpointing and replication strategy to tolerate faults in computational grids. It proposes maintaining a balance between the overheads of replication and checkpointing. Tasks are replicated on up to three resources based on each resource's probability of permanent failure. Checkpoints are taken adaptively based on the probability of recoverable failure. If a resource fails permanently, the task resumes from the last checkpoint. If a failure is recoverable, the task resumes on the same resource. This strategy aims to minimize resource wastage from replication while utilizing different resource speeds.
Propose a Method to Improve Performance in Grid Environment, Using Multi-Crit...Editor IJCATR
The most important purpose of grid networks is resource sharing in a dynamic and heterogeneous environment, accessible through various methods; the shared resources have mainly computational, scientific, and other uses. In order to reach grid purposes and use the available resources in a grid environment, subtasks are distributed among resources and scheduled by considering quality of service, the aim being to distribute subtasks between resources in a way that maximizes QoS. In this study, a method is presented that takes three parameters into account: the send and transfer time between the RMS and a resource, the processing time of a subtask on the resource, and the load of tasks already queued at the resource. A multi-criteria decision is then made using the TOPSIS method, and the resulting priority of the resources determines their assignment to subtasks. Finally, response time, as an efficiency parameter, is improved and optimized by the optimal assignment of resources to subtasks.
This document provides an overview of scheduling mechanisms in cloud computing. It discusses task scheduling, gang scheduling based on performance and cost evaluation, and resource scheduling. For task scheduling, it describes classifying tasks based on quality of service parameters and MapReduce level scheduling. It then explains two gang scheduling algorithms - Adaptive First Come First Serve (AFCFS) and Largest Job First Serve (LJFS) - and how they are used to evaluate performance and cost. Finally, it briefly discusses resource scheduling and factors that affect scheduling mechanisms in cloud computing like efficiency, fairness, costs, and communication patterns.
GROUPING BASED JOB SCHEDULING ALGORITHM USING PRIORITY QUEUE AND HYBRID ALGOR...ijgca
Grid computing extends the computing platform to a collection of heterogeneous computing resources, connected by a network across dynamic and geographically dispersed organizations, that forms a distributed high-performance computing infrastructure. Grid computing solves complex computing problems across multiple machines and meets large-scale computational demands in a high-performance computing environment. The main emphasis in grid computing is on resource management and the job scheduler, whose goal is to maximize resource utilization and minimize the processing time of jobs. Existing approaches to grid scheduling do not give much emphasis to the performance of a grid scheduler on the processing-time parameter; schedulers allocate resources to jobs using the First Come First Serve algorithm. In this paper, we provide an optimized algorithm for the scheduler queue using various scheduling methods such as Shortest Job First, First In First Out, and Round Robin. The job scheduling system is responsible for selecting the best suitable machines in a grid for user jobs; the management and scheduling system generates job schedules for each machine in the grid by taking static restrictions and dynamic parameters of jobs and machines into consideration. The main purpose of this paper is to develop an efficient job scheduling algorithm to maximize resource utilization and minimize the processing time of jobs. Queues can be optimized using various scheduling algorithms depending on the performance criteria to be improved, e.g., response time or throughput. The work has been done in MATLAB using the Parallel Computing Toolbox.
GROUPING BASED JOB SCHEDULING ALGORITHM USING PRIORITY QUEUE AND HYBRID ALGOR...ijgca
R.Gogulan, A.Kavitha, U.Karthick Kumar

II. RELATED WORK

In Fair Share scheduling [4], the Simple Fair Task Order, Adjusted Fair Task Order and Max-Min Fair Share scheduling algorithms are developed and tested against existing scheduling algorithms. K. Somasundaram and S. Radhakrishnan compare the Swift Scheduler with First Come First Serve, Shortest Job First and Simple Fair Task Order on the basis of processing time analysis, cost analysis and resource utilization [5]. Thamarai Selvi describes the advantages of standard algorithms such as shortest processing time, longest processing time, and earliest deadline first.

Pal Nilsson and Michal Pioro have discussed Max-Min Fair Allocation for a routing problem in a communication network [8]. Hans Jorgen Bang, Torbjorn Ekman and David Gesbert have proposed proportional fair scheduling, which addresses the problem of multiuser diversity scheduling together with channel prediction [9]. Daphne Lopez and S. V. Kasmir Raja have described and compared a Fair Scheduling algorithm with First Come First Serve and Round Robin schemes [10]. Load balancing is one of the big issues in Grid computing [11], [12]. B. Yagoubi described a framework consisting of a distributed dynamic load balancing algorithm aimed at minimizing the average response time of applications submitted to Grid computing.

Grosu and Chronopoulos [13] and Penmatsa and Chronopoulos [14] considered static load balancing in a system with servers and computers, where servers balance load among all computers in a round robin fashion. Qin Zheng, Chen-Khong Tham and Bharadwaj Veeravalli addressed the problem of determining which group an arriving job should be allocated to and how its load can be distributed among the computers in the group to optimize performance, and also proposed algorithms which guarantee finding a load distribution over the computers in a group that leads to the minimum response time or computational cost [12].

III. NOTATION AND PROBLEM FORMULATION

The proposed algorithm proceeds through the following steps:

• Initialization of Algorithm: the number of tasks and the number of resources are initialized at the beginning of the algorithm.
• Calculate total processor capacity and demand rate: the demand rate is calculated from the workload and the difference between the deadline and the grid access delay.
• Evaluate fair rate: from the max-min fair share approach, calculate the fair rate depending on the number of processors and the processor capacities.
• Non-adjusted and adjusted FCT: using the fair rate, the adjusted and non-adjusted fair completion times are calculated as per SFTO and AFTO.
• SFTO and AFTO rule: the non-adjusted fair completion time is used in SFTO to order the processors in increasing order, and the adjusted fair completion time is used in AFTO to order the processors in increasing order.
• MMFS rule: MMFS is applied to compensate the overflow and underflow processors.
• LB rule: after the MMFS rule, the LB rule is applied only to the overflow processors to reduce the overall completion time.

[Flowchart: Begin → Initialization of Algorithm → Calculate total processor capacity and demand rate → Apply fair share approach to evaluate fair rate → Find non-adjusted and adjusted FCT → Apply SFTO and AFTO rule → Apply MMFS rule for overflow and underflow processors → Apply LB rule for overflow processors → Step = Step + 1; repeat while Step <= N → Return the best solution]

Let N be the number of tasks that have to be scheduled, and let the workload wi of task Ti, i = 1, 2, ..., N, be the duration of the task when executed on a processor of unit computation capacity. Let M be the number of processors, and let the computation capacity of processor j be equal to cj units of capacity. The total computation capacity C of the Grid is defined [4] as

C = ∑_{j=1}^{M} cj   (1)
www.ijorcs.org
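The capacity and demand-rate computations described above can be sketched in Python. This is a minimal illustration under our own naming, not the authors' implementation; it combines Eq. (1) with the "demand rate" step (workload divided by the difference between deadline and grid access delay).

```python
def total_capacity(capacities):
    """Total Grid computation capacity, Eq. (1): C = sum of the c_j."""
    return sum(capacities)

def demand_rate(workload, deadline, access_delay):
    """Demanded computation rate of a task: the capacity needed to finish
    exactly at the deadline when execution effectively starts after the
    grid access delay (cf. the 'demand rate' bullet above)."""
    return workload / (deadline - access_delay)
```

For example, three processors with capacities 2, 3 and 5 give C = 10, and a task with workload 20, deadline 12 and access delay 2 has a demanded rate of 20 / (12 - 2) = 2.0.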
Max-Min Fair Scheduling Algorithm in Grid Scheduling with Load Balancing
Let dij be the communication delay between user i and processor j. More precisely, dij is the time that elapses between the time a decision is made by the resource manager to assign task Ti to processor j and the arrival at processor j of all files necessary to run task Ti. Each task Ti is characterized by a deadline Di that defines the time by which it is desirable for the task to complete execution. Let γj be the estimated completion time of the tasks that are already running on or already scheduled on processor j. γj is equal to zero when no task has been allocated to processor j at the time a task assignment is about to be made; otherwise, γj corresponds to the remaining time until the completion of the tasks already allocated to processor j. We define the earliest starting time of task Ti on processor j [4] as

δij = max{dij, γj}   (2)

δij is the earliest time at which it is feasible for task Ti to start execution on processor j. We define the capacity-weighted average of the earliest starting times of task Ti over all the M available processors [4] as

δi = ( ∑_{j=1}^{M} δij · cj ) / ( ∑_{j=1}^{M} cj )   (3)

where δi is the grid access delay for task Ti. In the fair scheduling algorithm, the demanded computation rate Xi of a task Ti plays an important role and is defined [4] as

Xi = wi / (Di − δi)   (4)

Here, Xi can be viewed as the computation capacity that the Grid should allocate to task Ti for it to finish just before its requested deadline Di, if the allocated computation capacity could be accessed at the mean access delay δi.

IV. EXISTING METHOD

The usual scheduling algorithms do not adequately address congestion, and they do not take fairness considerations into account. For example, with the ECT rule, tasks that have long execution times have a higher probability of missing their deadlines even when their deadlines are late. Also, with the EDF rule, a task with a late deadline is given low priority until its deadline approaches, giving no incentive to the users. To overcome these difficulties, this section provides an alternative approach, in which the tasks requesting service are queued for scheduling according to their fair completion times. The fair completion time of a task is found by first estimating its fair task rates using a max-min fair sharing algorithm.

A. Estimation of the Task Fair Rates

In the Max-Min Fair Sharing scheme, small demanded computation rates Xi get all the computation power they require, whereas larger rates share the leftovers. The Max-Min Fair Sharing algorithm is described as follows.

The demanded computation rates Xi, i = 1, 2, ..., N, of the tasks are sorted in ascending order, say, X1 < X2 < ... < XN. Initially, we assign capacity C/N to the task T1 with the smallest demand X1, where C is the total grid computation capacity. If the fair share C/N is more than the demanded rate X1 of task T1, the unused excess capacity C/N − X1 is equally shared among the remaining N − 1 tasks, so that each of them gets an additional capacity (C/N − X1)/(N − 1) on top of its initial share C/N.

This may be larger than task T2 needs, in which case the excess capacity is again equally shared among the remaining N − 2 tasks, and this process continues until there is no computation capacity left to distribute or until all tasks have been assigned capacity equal to their demanded computation rates. When the process terminates, each task has been assigned no more capacity than it needs and, if its demand was not satisfied, no less capacity than any other task with a greater demand has been assigned. We denote by ri(n) the non-adjusted fair computation rate of task Ti at the nth iteration of the algorithm. Then, ri(n) is given [4] by

ri(n) = Xi,                  if Xi < ∑_{k=0}^{n} O(k)
ri(n) = ∑_{k=0}^{n} O(k),    if Xi ≥ ∑_{k=0}^{n} O(k),    n ≥ 0   (5)

where

O(n) = ( C − ∑_{i=1}^{N} ri(n − 1) ) / Card{N(n)},   n ≥ 1   (6)

with

O(0) = C/N   (7)

where N(n) is the set of tasks whose assigned fair rates are smaller than their demanded computation rates at the beginning of the nth iteration, that is,

N(n) = {Ti : Xi > ri(n − 1)} and N(0) = N   (8)
www.ijorcs.org
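The max-min fair sharing procedure of Eqs. (5)-(8) can be sketched as follows. This is an illustrative Python sketch, not the authors' code: the incremental-sharing loop below replaces the explicit O(n) bookkeeping, but it terminates under the same two conditions (no capacity left, or all demands satisfied).

```python
def max_min_fair_rates(demands, capacity):
    """Max-min fair sharing: repeatedly grant each unsatisfied task an
    equal share of the leftover capacity; excess above a task's demand
    is reshared in the next round (cf. Eqs. 5-8)."""
    rates = [0.0] * len(demands)
    unsatisfied = set(range(len(demands)))  # the set N(n) of Eq. (8)
    leftover = float(capacity)
    while unsatisfied and leftover > 1e-12:
        share = leftover / len(unsatisfied)  # equal share, as in O(n)
        leftover = 0.0
        for i in list(unsatisfied):
            grant = min(share, demands[i] - rates[i])
            rates[i] += grant
            leftover += share - grant        # unused excess, reshared
            if rates[i] >= demands[i] - 1e-12:
                unsatisfied.discard(i)       # demand met: leaves N(n)
    return rates
```

For instance, demands (2, 4, 10) with C = 12 yield rates (2, 4, 6): the two small demands are fully satisfied and the largest task receives the remainder.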
The function card(·) returns the cardinality of a set. The process is terminated at the first iteration n0 at which either O(n0) = 0 or card{N(n0)} = 0. The former case indicates congestion, whereas the latter indicates that the total grid computation capacity can satisfy all the demanded task rates [4], that is,

∑_{i=1}^{N} Xi < C   (9)

The non-adjusted fair computation rate ri of task Ti is obtained at the end of the process as

ri = ri(n0)   (10)

B. Fair Task Queue Order Estimation

A scheduling algorithm must settle two issues. First, it has to choose the order in which the tasks are considered for assignment to a processor (the queue ordering problem). Second, for the task located each time at the front of the queue, the scheduler has to decide the processor to which the task is assigned (the processor assignment problem). To solve the queue ordering problem in fair scheduling, SFTO and AFTO are discussed.

C. Simple Fair Task Order

In SFTO, the tasks are ordered in the queue in increasing order of their non-adjusted fair completion times ti. The non-adjusted fair completion time ti of task Ti is defined [4] as

ti = δi + wi / ri   (11)

where ti can be thought of as the time at which the task would be completed if it could obtain a constant computation rate equal to its fair computation rate ri, starting at time δi.

D. Adjusted Fair Task Order

In the AFTO scheme, the tasks are ordered in the queue in increasing order of their adjusted fair completion times tia. The AFTO scheme results in schedules that are fairer than those produced by the SFTO rule; it is, however, more difficult to implement and more computationally demanding than the SFTO scheme, since the adjusted fair completion times tia are more difficult to obtain than the non-adjusted fair completion times ti.

i. Adjusted Fair Completion Times Estimation:

To compute the adjusted fair completion times tia, the fair rate of the active tasks at each time instant must be estimated. This can be done in two ways. In the first approach, each time unused processor capacity becomes available, it is equally divided among all active tasks. In the second approach, the rates of all active tasks are recalculated using the max-min fair sharing algorithm, based on their respective demanded rates.

The estimated fair rate of each task is a function of time, denoted by ri(t). Here, we introduce a variable called the round number, which defines the number of rounds of service that have been completed at a given time. A non-integer round number represents a partial round of service. The round number depends on the number and the rates of the active tasks at a given time. In particular, the round number increases at a rate inversely proportional to the sum of the rates of all active tasks, equal to 1 / ∑i ri(t). Thus, the rate at which the round number increases changes, and has to be recalculated, each time a new arrival or task completion takes place. Based on the round number, we define the finish number Fi(t) of task Ti at time t as in [4]:

Fi(t) = R(τ) + wi / ri(t)   (12)

where τ is the last time a change in the number of active tasks occurred, and R(τ) is the round number at time τ. Fi(t) is recalculated each time new arrivals or task completions take place. Note that Fi(t) is not the time at which task Ti will complete its execution. It is only a service tag used to determine the order in which the tasks are assigned to processors.

The adjusted fair completion time tia can be computed as the time at which the round number reaches the estimated finish number of the respective task. Thus, as in [4],

tia : R(tia) = Fi(tia)   (13)

The task adjusted fair completion times determine the order in which the tasks are considered for assignment to processors in the AFTO scheme: the task with the earliest adjusted fair completion time is assigned first, followed by the second earliest, and so on.

E. Max-Min Fair Scheduling

In MMFS, the tasks are non-preemptable, the sum of the rates of the tasks assigned for execution to a processor may be smaller than the processor capacity, and some processors may not be fully utilized. A processor with unused capacity will be called an underflow processor. In an optimal solution, tasks assigned to underflow processors have schedulable rates equal to their respective fair rates, ris = ri. The overflow Oj of processor j is defined [4] as

Oj = max{0, ∑_{i∈Pj} ri − cj}   (14)
www.ijorcs.org
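As an illustration, the SFTO queue ordering of Eq. (11) amounts to a single sort on the non-adjusted fair completion times. This is a minimal Python sketch under our own naming, not the paper's implementation:

```python
def sfto_order(workloads, access_delays, fair_rates):
    """Order task indices by non-adjusted fair completion time,
    t_i = delta_i + w_i / r_i (Eq. 11), smallest first."""
    t = [d + w / r for w, d, r in zip(workloads, access_delays, fair_rates)]
    return sorted(range(len(t)), key=lambda i: t[i])
```

For example, two tasks with (wi, δi, ri) = (10, 0, 2) and (4, 0, 4) have fair completion times 5.0 and 1.0, so the second task is queued first. AFTO differs only in that the sort key is the adjusted fair completion time tia of Eq. (13).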
The underflow Uk of processor k is defined as

Uk = max{0, ck − ∑_{i∈Pk} ri}   (15)

Processors for which Oj > 0 will be referred to as overflow processors, whereas underflow processors are those for which Uk > 0. In an optimal solution, we have

∑_{i∈Pj} ris = cj for all j for which Oj > 0   (16)

i. Processor Assignment

This algorithm combines processors with capacity overflow and processors with capacity underflow to obtain a better exploitation of the overall processor capacity. More specifically, given an assignment of tasks to processors, we consider the rearrangement in which a task of rate rl assigned to an overflow processor j is substituted for a task of rate rm assigned to an underflow processor k. After the task rearrangement, the overflow (underflow) capacity of the processors is updated as [4]:

Rj = Oj − ε
Rk = Uk − ε   (17)

where

ε = rl − rm   (18)

ε expresses the task rate difference between the two selected tasks, and Rj and Rk are the updated processor residuals. If Rj > 0, processor j remains in the overflow state after the task rearrangement, whereas if Rj < 0, processor j turns to the underflow state. A reduction is accomplished only if the task rate difference ε satisfies [4]

ε : Oj¹ + Ok¹ < Oj   (19)

where Oj¹ = max(0, Rj) and Ok¹ = max(0, Rk); this satisfies the processor requirements.

V. PROPOSED METHOD

A. Load Balancing

The existing method achieves good fair completion times, but the load is not balanced: sometimes one processor's task allocation is excessive compared with the others, and it may take more time to complete the whole job. To address this difficulty, we propose a new algorithm, called the Load Balance Algorithm, which gives a uniform load to the resources. The overflows On and Op of processors n and p are defined as

On = max{0, ∑_{i∈Pn} ri − cn}   (20)
Op = max{0, ∑_{i∈Pp} ri − cp}   (21)

where On > 0 and Op > 0, i.e., n and p are both overflow processors. If On > Op, then processor n takes more time to complete the job; if Op > On, then processor p takes more time. Either way, the completion of the full job is delayed. To recover from this, the Load Balance Algorithm rearranges the fair rates of the affected processors so as to reduce the overall completion time. The proposed algorithm combines, two at a time, the processor with the first (largest) overflow and the processor with the second overflow to obtain a better exploitation of the overall processor capacity. More specifically, given an assignment of tasks to processors with On > Op, we consider the rearrangement in which a task of rate rx assigned to overflow processor n is substituted for a task of rate ry assigned to overflow processor p. After the task rearrangement, the overflow capacities of the processors are updated as follows:

Rn = On − ε
Rp = Op + ε   (22)

where ε = rx − ry expresses the task rate difference between the two selected tasks, and Rn and Rp are the updated processor residuals. If Rn > Rp, processor n still has the larger completion time, so the procedure continues from step (1) until Rn is more or less equal to Rp.

B. Execution Cost

We also compute the Execution Cost for all the algorithms used in this paper. The Execution Cost of processor j, Cexe(Pj), is defined as

Cexe(Pj) = P(tia)j × costj   (23)

where P(tia)j is the fair completion time of processor j and costj is its cost rate.

C. Communication Cost

We also compute the Communication (Bandwidth) Cost, defined as

Cb(Pj) = Cexe(Pj) + F(Pj)   (24)

where Cexe(Pj) is the execution cost of processor j and F(Pj) is the fitness of processor j.

VI. RESULTS

This paper applies Load Balancing to MMFS to obtain better load balance. Here, a cost rate in the range 5-10 units is randomly chosen and assigned according to the speed of each processor. Processor speeds in the range 0-1 MIPS are randomly assigned to the M processors. The proposed method is compared with the existing ones for different numbers of processors and tasks: the numbers of processors taken are 8, 16, 32 and 64, and the numbers of tasks are 256, 512, 1024 and 2048 (MI), giving the resource matrices shown in the tables below.
www.ijorcs.org
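The LB step of Eqs. (20)-(22) can be sketched as a pairwise task swap between the two overflow processors. Note that the rule for picking which pair to swap is our own reading, not spelled out in the paper: we aim the rate difference ε = rx − ry at (On − Op)/2, so that the residuals Rn and Rp come out as close as possible.

```python
def overflow(rates, capacity):
    """O = max{0, sum of assigned fair rates - capacity} (Eqs. 20-21)."""
    return max(0.0, sum(rates) - capacity)

def lb_swap(tasks_n, c_n, tasks_p, c_p):
    """One Load Balance step: swap one task of rate x on the more
    overloaded processor for one of rate y on the other, giving
    R_n = O_n - eps and R_p = O_p + eps with eps = x - y (Eq. 22)."""
    o_n, o_p = overflow(tasks_n, c_n), overflow(tasks_p, c_p)
    if o_n < o_p:  # make n the more overloaded processor
        tasks_n, tasks_p = tasks_p, tasks_n
        o_n, o_p = o_p, o_n
    target = (o_n - o_p) / 2.0  # ideal eps: residuals become equal
    pairs = [(x, y) for x in tasks_n for y in tasks_p if x > y]
    if not pairs:
        return tasks_n, tasks_p  # no swap can reduce the imbalance
    x, y = min(pairs, key=lambda p: abs((p[0] - p[1]) - target))
    tasks_n[tasks_n.index(x)] = y  # task of rate y moves to n
    tasks_p[tasks_p.index(y)] = x  # task of rate x moves to p
    return tasks_n, tasks_p
```

For example, with task rates (5, 3) on a processor of capacity 4 (On = 4) and (4, 2) on one of capacity 5 (Op = 1), swapping the 5-rate task for the 4-rate task moves the overflows from (4, 1) to (3, 2), narrowing the gap; repeating the step converges toward Rn ≈ Rp.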
The tables below compare load balancing in MMFS with the existing algorithms EDF, SFTO, AFTO and MMFS for 8, 16, 32 and 64 processors. For makespan, the proposed work gives approximately 25%-45% lower values than EDF, 5%-7% lower than SFTO and AFTO, and 2%-5% lower than MMFS. For execution cost and bandwidth cost, MMFS + LB shows approximately 25%-30% less than EDF, 6%-7% less than SFTO and AFTO, and 1%-2% less than MMFS. The results also show better performance for the larger matrices. The following are the comparison results of the existing and proposed methods.

Table 1: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for 8 processors

| Resource Matrix | Scheduling Algorithm | Makespan | Execution Cost | Communication Cost |
|---|---|---|---|---|
| 256 x 8 | EDF | 917.82 | 5506.91 | 6424.73 |
| | SFTO | 447.74 | 4477.44 | 4925.19 |
| | AFTO | 444.39 | 4468.54 | 4912.94 |
| | MMFS | 439.61 | 4446.77 | 4886.39 |
| | MMFS + LB | 418.13 | 4181.27 | 4599.4 |
| 512 x 8 | EDF | 1121.32 | 7849.21 | 8970.53 |
| | SFTO | 1022.36 | 5111.8 | 6134.16 |
| | AFTO | 1010.09 | 5050.45 | 6060.53 |
| | MMFS | 858.54 | 4292.71 | 5151.26 |
| | MMFS + LB | 836.72 | 4183.58 | 5020.3 |
| 1024 x 8 | EDF | 1825.33 | 10951.97 | 12777.3 |
| | SFTO | 1651.45 | 13211.63 | 14863.08 |
| | AFTO | 1686.17 | 11803.21 | 13489.38 |
| | MMFS | 1643.32 | 13180.96 | 14824.28 |
| | MMFS + LB | 1599.82 | 12798.55 | 14398.36 |
| 2048 x 8 | EDF | 3596.42 | 25174.94 | 28771.36 |
| | SFTO | 3280.39 | 26243.11 | 29523.5 |
| | AFTO | 3247.63 | 25981.06 | 29228.69 |
| | MMFS | 3137.59 | 25100.75 | 28238.34 |
| | MMFS + LB | 3095.82 | 24766.55 | 27862.37 |

Fig 1: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for makespan, 8 processors

Fig 2: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for execution cost, 8 processors

Fig 3: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for bandwidth cost, 8 processors

Table 2: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for 16 processors
| Resource Matrix | Scheduling Algorithm | Makespan | Execution Cost | Communication Cost |
|---|---|---|---|---|
| 256 x 16 | EDF | 1466.72 | 7332.11 | 8798.53 |
| | SFTO | 304 | 1520 | 1824 |
| | AFTO | 300.65 | 1511.1 | 1811.75 |
| | MMFS | 295.87 | 1489.33 | 1785.2 |
| | MMFS + LB | 209 | 1045 | 1254 |
| 512 x 16 | EDF | 1366.48 | 13664.81 | 15031.29 |
| | SFTO | 553.89 | 5538.91 | 6092.8 |
| | AFTO | 555.37 | 5553.75 | 6109.12 |
| | MMFS | 545.76 | 5508.24 | 6054 |
| | MMFS + LB | 483.57 | 4835.69 | 5319.26 |
| 1024 x 16 | EDF | 1540.27 | 9241.6 | 10781.86 |
| | SFTO | 1309.94 | 6549.72 | 7859.66 |
| | AFTO | 1296.35 | 6481.77 | 7778.13 |
| | MMFS | 1301.81 | 6519.05 | 7820.86 |
| | MMFS + LB | 1231.43 | 6157.14 | 7388.57 |
| 2048 x 16 | EDF | 3352.67 | 23468.72 | 26821.39 |
| | SFTO | 2742.53 | 24682.76 | 27425.29 |
| | AFTO | 2761.98 | 27619.81 | 30381.79 |
| | MMFS | 2734.4 | 24652.09 | 27386.49 |
| | MMFS + LB | 2641.04 | 23769.35 | 26410.39 |

Fig 4: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for makespan, 16 processors

Fig 5: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for execution cost, 16 processors

Fig 6: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for communication cost, 16 processors

Table 3: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for 32 processors

| Resource Matrix | Scheduling Algorithm | Makespan | Execution Cost | Communication Cost |
|---|---|---|---|---|
| 256 x 32 | EDF | 206.05 | 1648.40 | 1854.45 |
| | SFTO | 183.43 | 917.15 | 1100.58 |
| | AFTO | 180.08 | 908.25 | 1088.33 |
| | MMFS | 175.30 | 886.48 | 1061.78 |
| | MMFS + LB | 114.64 | 573.22 | 687.86 |
| 512 x 32 | EDF | 744.80 | 5958.36 | 6703.16 |
| | SFTO | 580.60 | 5225.43 | 5806.04 |
| | AFTO | 577.25 | 5216.53 | 5793.79 |
| | MMFS | 574.27 | 3445.64 | 4019.91 |
| | MMFS + LB | 464.54 | 2787.23 | 3251.77 |
| 1024 x 32 | EDF | 966.47 | 7731.78 | 8698.26 |
| | SFTO | 912.96 | 4564.78 | 5477.73 |
| | AFTO | 937.07 | 7512.53 | 8451.59 |
| | MMFS | 904.83 | 4534.11 | 5438.93 |
| | MMFS + LB | 863.54 | 4317.72 | 5181.26 |
| 2048 x 32 | EDF | 2675.05 | 18725.38 | 21400.44 |
| | SFTO | 2427.97 | 21851.76 | 24279.74 |
| | AFTO | 2375.23 | 21377.09 | 23752.32 |
| | MMFS | 2370.11 | 21330.99 | 23701.10 |
| | MMFS + LB | 2262 | 20359.85 | 22622.06 |
Table 4: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for 64 processors

| Resource Matrix | Scheduling Algorithm | Makespan | Execution Cost | Communication Cost |
|---|---|---|---|---|
| 256 x 64 | EDF | 305.35 | 2748.13 | 3053.47 |
| | SFTO | 281.93 | 1691.60 | 1973.53 |
| | AFTO | 278.58 | 1682.70 | 1961.28 |
| | MMFS | 273.80 | 1660.93 | 1934.73 |
| | MMFS + LB | 211.45 | 1268.7 | 1480.15 |
| 512 x 64 | EDF | 966.67 | 5800.02 | 6766.69 |
| | SFTO | 600 | 3000 | 3600 |
| | AFTO | 596.65 | 2991.10 | 3587.75 |
| | MMFS | 591.87 | 2969.33 | 3561.20 |
| | MMFS + LB | 450 | 2700 | 3150 |
| 1024 x 64 | EDF | 968.49 | 5810.95 | 6779.44 |
| | SFTO | 978.87 | 9788.75 | 10767.62 |
| | AFTO | 975.52 | 9779.85 | 10755.37 |
| | MMFS | 970.74 | 9758.08 | 10728.82 |
| | MMFS + LB | 795.34 | 7953.36 | 8748.69 |
| 2048 x 64 | EDF | 2984.98 | 23879.85 | 26864.83 |
| | SFTO | 2630.08 | 26300.85 | 28930.93 |
| | AFTO | 2626.73 | 26291.95 | 28918.68 |
| | MMFS | 2621.95 | 26270.18 | 28892.13 |
| | MMFS + LB | 2330.86 | 23308.63 | 25639.50 |

Fig 7: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for makespan, 32 processors

Fig 8: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for execution cost, 32 processors

Fig 9: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for communication cost, 32 processors

Fig 10: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for makespan, 64 processors

Fig 11: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for execution cost, 64 processors
Fig 12: Performance comparison of proposed MMFS + LB with existing algorithms EDF, SFTO, AFTO and MMFS for communication cost, 64 processors

VII. CONCLUSION

In this paper, the Load Balancing algorithm is compared with a standard scheduling algorithm, Earliest Deadline First, and with fair scheduling algorithms such as SFTO, AFTO and MMFS. Our proposed algorithm also shows better results for execution cost and bandwidth cost. The results show that load balancing combined with scheduling produces a smaller makespan than the other approaches. Future work will focus on how fair scheduling can be applied to optimization techniques, and on using QoS constraints such as reliability as performance measures.

VIII. REFERENCES

Books
[1] Rajkumar Buyya, David Abramson, Jonathan Giddy, "A Case for Economy Grid Architecture for Service Oriented Grid Computing".
[2] I. Foster, C. Kesselman (1999) The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers, USA.
[3] R. Wolski, J. Brevik, J. Plank, T. Bryan (2003) "Grid Resource Allocation and Control Using Computational Economies", in Grid Computing: Making the Global Infrastructure a Reality, F. Berman, G. Fox, T. Hey (eds.), Wiley and Sons, pp. 747-772.

Conferences
[4] N.D. Doulamis, A.D. Doulamis, E.A. Varvarigos, T.A. Varvarigou (2007) "Fair Scheduling Algorithms in Grids", IEEE Transactions on Parallel and Distributed Systems, Vol. 18, No. 11, pp. 1630-1648.

Journals
[5] K. Somasundaram, S. Radhakrishnan (2009) "Task Resource Allocation in Grid using Swift Scheduler", International Journal of Computers, Communications & Control, ISSN 1841-9836, E-ISSN 1841-9844, Vol. IV.
[6] M.L. Bote-Lorenzo, Y.A. Dimitriadis, E. Gomez-Sanchez (2004) "Grid Characteristics and Uses: a Grid Definition", Springer-Verlag LNCS 2970, pp. 291-298.
[7] Parvin Asadzadeh, Rajkumar Buyya, Chun Ling Kei, Deepa Nayar, Srikumar Venugopal, "Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies".
[8] Pal Nilsson, Michal Pioro, "Unsplittable max-min demand allocation - a routing problem".
[9] Hans Jorgen Bang, Torbjorn Ekman, David Gesbert, "A Channel Predictive Proportional Fair Scheduling Algorithm".
[10] Daphne Lopez, S.V. Kasmir Raja (2009) "A Dynamic Error Based Fair Scheduling Algorithm for a Computational Grid", Journal of Theoretical and Applied Information Technology (JATIT).
[11] Qin Zheng, Chen-Khong Tham, Bharadwaj Veeravalli (2008) "Dynamic Load Balancing and Pricing in Grid Computing with Communication Delay", Journal of Grid Computing.
[12] Stefan Schamberger (2005) "A Shape Optimizing Load Distribution Heuristic for Parallel Adaptive FEM Computations", Springer-Verlag Berlin Heidelberg.
[13] D. Grosu, A.T. Chronopoulos (2005) "Noncooperative load balancing in distributed systems", Journal of Parallel and Distributed Computing, 65(9), pp. 1022-1034.
[14] S. Penmatsa, A.T. Chronopoulos (2005) "Job allocation schemes in computational Grids based on cost optimization", in Proceedings of the 19th IEEE International Parallel and Distributed Processing Symposium, Denver.