This document summarizes a research article that proposes a hybrid scheduling algorithm for real-time operating systems that combines Earliest Deadline First (EDF) and Ant Colony Optimization (ACO) algorithms. The hybrid algorithm aims to overcome the limitations of EDF, which cannot handle overloaded systems well, and ACO, which has longer execution times than EDF. The document provides background on real-time operating systems and describes EDF and ACO scheduling algorithms. It then proposes the hybrid algorithm that automatically switches between EDF and ACO to leverage the advantages of both. The goal is to enhance system performance, reduce failures, and decrease missed deadlines under different load conditions.
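The EDF half of such a hybrid reduces to always running the ready task with the nearest absolute deadline. The sketch below uses illustrative task data and field names, not the paper's:

```python
# Minimal Earliest Deadline First (EDF) sketch: at each scheduling
# decision, pick the ready task whose absolute deadline is earliest.
# Task fields and the workload are illustrative assumptions.

def edf_order(ready_tasks):
    """Return ready tasks sorted by absolute deadline (earliest first)."""
    return sorted(ready_tasks, key=lambda t: t["deadline"])

tasks = [
    {"name": "T1", "deadline": 10},
    {"name": "T2", "deadline": 4},
    {"name": "T3", "deadline": 7},
]
print([t["name"] for t in edf_order(tasks)])  # T2 runs first
```

Under overload, this greedy deadline ordering is exactly where EDF degrades, which is the situation the summarized paper hands over to ACO.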
Proposing a Scheduling Algorithm to Balance the Time and Energy Using an Impe...Editor IJCATR
Computational grids have become an appealing research area as they solve compute-intensive problems within the scientific community and in industry. A grid's computational power is aggregated from a huge set of distributed heterogeneous workers; hence, it is becoming a mainstream technology for large-scale distributed resource sharing and system integration. Unfortunately, current grid schedulers suffer from the haste problem: the scheduler's inability to successfully allocate all input tasks. Accordingly, some tasks fail to complete execution because they are allocated to unsuitable workers, while others may not start execution because suitable workers were previously allocated to other peers. This paper presents an imperialist competition algorithm (ICA) method to solve the grid scheduling problem. The objective is to minimize the makespan and energy of the grid. Simulation results show that the grid scheduling problem can be solved efficiently by the proposed method.
Hybrid of Ant Colony Optimization and Gravitational Emulation Based Load Bala...IRJET Journal
This document proposes a hybrid Ant Colony Optimization (ACO) and Gravitational Emulation Local Search (GELS) algorithm for load balancing in cloud computing. ACO is combined with GELS to take advantage of both algorithms - ACO is good for global search using pheromone trails while GELS is powerful for local search based on gravitational attraction. The hybrid algorithm is tested using CloudSim and shows improvements over existing algorithms like GA-GELS in metrics like resource utilization, makespan, and load balancing level.
This document presents a genetic algorithm approach for scheduling jobs with burst times and priorities to find a schedule that is near or equal to the optimal shortest job first (SJF) schedule. It discusses related work on using genetic algorithms for scheduling problems. The proposed algorithm uses a genetic algorithm to generate priorities that are assigned to jobs to find a schedule with a total turnaround time close to the SJF schedule. Experimental results show that the genetic algorithm approach produces solutions very close to SJF and better than a priority-based algorithm in terms of total turnaround time.
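The SJF baseline that the genetic algorithm is measured against is straightforward to compute directly. A minimal sketch, assuming all jobs arrive at time zero (an illustrative simplification; burst times are made up):

```python
# Total turnaround time of the Shortest Job First (SJF) schedule,
# assuming all jobs arrive at time 0. This is the optimum the GA's
# priority assignments try to approach.
def sjf_total_turnaround(burst_times):
    total, clock = 0, 0
    for b in sorted(burst_times):  # run shortest job first
        clock += b                 # this job finishes now
        total += clock             # turnaround = finish - arrival(0)
    return total

print(sjf_total_turnaround([6, 2, 8, 3]))  # finishes at 2, 5, 11, 19 -> 37
```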
Schedulability of Rate Monotonic Algorithm using Improved Time Demand Analysi...IJECEIAES
Rate Monotonic Algorithm (RMA) is a widely used static-priority scheduling algorithm. To apply RMA to a system, it is essential to determine the system's feasibility first. Existing algorithms perform this analysis by reducing the scheduling points in a given task set. In this paper we propose a schedulability test algorithm that reduces the number of tasks to be analyzed instead of reducing the scheduling points of a given task, which significantly reduces the number of iterations taken to compute feasibility. The algorithm can be used alongside existing algorithms to effectively reduce the high complexity encountered in processing large task sets. We also extend the algorithm to multiprocessor environments and compare the number of iterations across different numbers of processors. The paper then compares the proposed algorithm with existing algorithms; results show that it performs better.
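The classical time-demand analysis that such feasibility tests refine can be sketched as a fixed-point iteration per task. This is the textbook formulation, not the paper's reduced-task-set variant, and the task set is illustrative:

```python
import math

# Classical time-demand analysis for RMA feasibility: tasks are given
# as (C, T) pairs (worst-case execution time, period) sorted by period
# ascending, with deadlines equal to periods.
def rm_schedulable(tasks):
    """Return True if every task meets its deadline under RMA."""
    for i, (c_i, t_i) in enumerate(tasks):
        w = c_i
        while True:
            # demand at time w: own cost plus preemptions by higher-
            # priority (shorter-period) tasks
            demand = c_i + sum(math.ceil(w / t_j) * c_j
                               for c_j, t_j in tasks[:i])
            if demand > t_i:
                return False   # task i misses its deadline
            if demand == w:
                break          # fixed point reached: task i is feasible
            w = demand
    return True

print(rm_schedulable([(1, 4), (2, 6), (3, 12)]))  # True
```

Note this set has utilization 1/4 + 2/6 + 3/12 ≈ 0.83, above the Liu-Layland bound for three tasks, yet time-demand analysis still proves it schedulable; that precision is why reducing the cost of the iteration matters.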
An improved approach to minimize context switching in round robin scheduling ...eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of engineering and technology. It brings together scientists, academicians, field engineers, scholars and students of related fields of engineering and technology.
IRJET- Advance Approach for Load Balancing in Cloud Computing using (HMSO) Hy...IRJET Journal
This document proposes a new hybrid multi-swarm optimization (HMSO) algorithm for load balancing in cloud computing. It aims to minimize response time and costs while improving resource utilization and customer satisfaction. The HMSO algorithm uses multi-level particle swarm optimization to find an optimal resource allocation solution. Simulation results show that the proposed HMSO technique reduces response time and datacenter costs compared to other algorithms. It also achieves a more balanced load distribution across resources.
This document discusses using a genetic algorithm to solve the job shop scheduling problem. It begins with an abstract that introduces using genetic algorithms for job shop scheduling. It then provides more details on the problem and discusses using genetic algorithms with modifications like generating the initial population using priority rules and incorporating critical block and disjunctive graph distance in the crossover and mutation operations. The document outlines the genetic algorithm approach with sections on chromosome representation and decoding, data structures, generating the initial population, crossover and mutation operations. The goal is to minimize the makespan value for job shop scheduling.
Job Resource Ratio Based Priority Driven Scheduling in Cloud Computingijsrd.com
Cloud computing is an emerging technology in the area of parallel and distributed computing. Clouds consist of a collection of virtualized resources, including both computational and storage facilities, that can be provisioned on demand depending on users' needs. Job scheduling is one of the major activities performed in all computing environments, and in cloud computing it is performed to make the environment work efficiently and gain maximum profit. In this paper we propose a new scheduling algorithm based on priority, where the priority is derived from the ratio of job to resource and calculated using the analytic hierarchy process. We also compare its results with other algorithms such as first-come-first-serve and round robin.
A Heterogeneous Static Hierarchical Expected Completion Time Based Scheduling...IRJET Journal
The document presents a new scheduling algorithm called Hierarchical Expected Completion Time based Scheduling (HECTS) for tasks on multiprocessor systems. HECTS has two phases: 1) It prioritizes tasks based on their level in the task graph and calculates an expected completion time value for each task. Tasks are sorted by completion time value. 2) It uses an insertion-based approach to assign tasks to processors, trying to find the best time slot between already scheduled tasks without violating dependencies. The algorithm is evaluated based on speedup, efficiency and schedule length, and compared to other list scheduling algorithms. Simulation results show HECTS improves performance metrics over existing approaches.
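The insertion-based placement in phase 2 amounts to searching a processor's timeline for the earliest gap that fits the task. A minimal sketch under stated assumptions (the interval representation and function name are illustrative, not taken from the paper):

```python
# Insertion-based slot search of the kind HECTS's second phase uses:
# find the earliest idle gap on one processor that can hold a task of
# length `dur`, given the task cannot start before `ready` (when its
# predecessors' results arrive).
def earliest_slot(busy, dur, ready):
    """busy: sorted, non-overlapping (start, end) intervals already scheduled."""
    t = ready
    for start, end in busy:
        if t + dur <= start:      # task fits in the gap before this interval
            return t
        t = max(t, end)           # otherwise try after this interval
    return t                      # append after the last scheduled task

print(earliest_slot([(0, 3), (5, 9)], 2, 1))  # fits the 3-5 gap -> starts at 3
```

In the full algorithm this search is repeated over every processor and the task goes wherever its expected completion time is smallest.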
A New Approach for Dynamic Load Balancing Using Simulation In Grid ComputingIRJET Journal
This document proposes a new dynamic load balancing approach for grid computing using simulation. It discusses how dynamic load balancing algorithms can improve performance by reallocating tasks from heavily loaded nodes to lightly loaded nodes. The proposed approach implements a dynamic load balancing algorithm in a simulated grid environment. The algorithm uses information about current resource loads to schedule tasks in a way that aims to optimize resource usage and achieve high performance computing across the distributed grid resources.
This paper presents a particle swarm optimization (PSO) algorithm to tune the gains of an integral controller for load frequency control (LFC) in a single-area power system with multiple energy sources (thermal, hydro, gas, and wind). The objective is to minimize the integral absolute error of frequency deviations following a step load change. Simulation results show the PSO-tuned integral controller provides better transient response than an uncontrolled system, with reduced settling time, peak overshoot, and oscillations.
Time and Reliability Optimization Bat Algorithm for Scheduling Workflow in CloudIRJET Journal
This document describes using a meta-heuristic optimization algorithm called the Bat Algorithm (BA) to schedule workflows in cloud computing environments. The BA is applied to optimize a multi-objective function that minimizes workflow execution time and maximizes reliability while keeping costs within a user-specified budget. The BA is compared to a basic randomized evolutionary algorithm (BREA) that uses greedy approaches. Experimental results show the BA performs better by finding schedules that have lower execution times and higher reliability within the given budget constraints. The BA is well-suited for this problem because it can efficiently search large solution spaces and automatically focus on optimal regions like other metaheuristics.
OPTIMAL PID CONTROLLER DESIGN FOR SPEED CONTROL OF A SEPARATELY EXCITED DC MO...ijscmcjournal
This paper presents a new approach to determining the optimal proportional-integral-derivative controller parameters for the speed control of a separately excited DC motor using the firefly optimization technique. The firefly algorithm is a recent evolutionary method inspired by the behavior of fireflies in nature. The firefly optimization technique is implemented using MATLAB software, and a comparison is drawn between the results obtained with the linear quadratic regulator and the firefly optimization technique. Simulation results are presented to illustrate the performance and validity of the design method.
This document proposes a fair scheduling algorithm with dynamic load balancing for grid computing. It begins by introducing grid computing and the need for efficient load balancing algorithms to distribute tasks. It then describes dynamic load balancing approaches, including information, triggering, resource type, location, and selection policies. The proposed algorithm uses a fair scheduling approach that assigns tasks to processors based on their estimated fair completion times to ensure tasks receive equal shares of computing resources. It also includes a dynamic load balancing component that migrates tasks between processors to maintain balanced loads across all resources. Simulation results demonstrated the algorithm achieved balanced loads across processors and reduced overall task completion times.
Efficient Resource Management Mechanism with Fault Tolerant Model for Computa...Editor IJCATR
Grid computing provides a framework and deployment environment that enables resource sharing, accessing, aggregation and management. It allows coordinated use of various resources in dynamic, distributed virtual organizations. The grid scheduler is responsible for resource discovery, resource selection and job assignment over a decentralized heterogeneous system. In the existing system, a primary-backup approach is used for fault tolerance in a single environment: each task has a primary copy and a backup copy on two different processors. For dependent tasks, precedence constraints among tasks must be considered when scheduling backup copies and overloading backups; two algorithms have been developed to schedule backups of dependent and independent tasks. The proposed work manages resource failures in grid job scheduling. In this method, data sources and resources are integrated from different geographical environments, and fault-tolerant scheduling with the primary-backup approach is used to handle job failures in the grid environment. The impact of communication protocols is also considered: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are used to distribute the messages of each task to grid resources.
This document proposes a new one-step method for tuning PI/PID controllers based on closed-loop experiments. It derives simple correlations between data from a proportional-only closed-loop step response experiment and PI/PID settings that provide good performance and robustness. Specifically:
1) A proportional-only controller is used to generate a step response with 10-60% overshoot. The gain, overshoot, peak time, and steady-state change are recorded.
2) Simulations show the proposed controller gain is proportional to the proportional gain used in the experiment, with the ratio dependent only on overshoot. Simple equations are derived relating overshoot and peak time to the PI/PID settings.
Quadrotor control is needed so that the quadrotor can hover close to a stationary state, and one control technique that can be designed and implemented on a quadrotor is PID control. Tuning the PID parameters with a genetic algorithm can speed up the manual tuning process. The weakness of the standard genetic algorithm is that it often discards important information found in other individuals, causing premature convergence, especially in early generations. These problems can be overcome by using crossover and mutation rules with different probability levels according to fitness values and the evolutionary process. Using the fast genetic algorithm technique, the study obtained constants Kp, Ki and Kd with the lowest rise time and overshoot: 0.010, 0.001 and 0.036 at the pitch angle; 0.010, 0.001 and 0.03 at the roll angle; and 0.018, 0.006 and 0.043 at the yaw angle. Comparing PID tuning simulations of the fast genetic algorithm against the standard genetic algorithm shows that the fast variant reaches the optimum generation sooner by 26.67% at the pitch angle, 44% at the roll angle and 20% at the yaw angle. This also improves simulation execution time, with the fast genetic algorithm 26.4% faster at the pitch angle, 38.05% at the roll angle, and 24.19% at the yaw angle.
Optimization of Unit Commitment Problem using Classical Soft Computing Techni...IRJET Journal
The document describes using a particle swarm optimization (PSO) algorithm to solve the unit commitment problem (UCP) in electrical power systems. The UCP involves determining the optimal daily startup and shutdown schedule for power generating units to minimize costs while meeting demand and operational constraints. PSO is a soft computing technique inspired by animal social behavior that is applied to find near-optimal solutions. Test results are presented applying PSO to solve the UCP for 6-unit and 10-unit power system models using load data over a 24-hour period. The results demonstrate the effectiveness of PSO for solving the short-term UCP.
This document presents a method for tuning PID controller parameters for an automatic voltage regulator (AVR) system using particle swarm optimization (PSO) and genetic algorithm (GA). It compares the step response results of using PSO versus GA for tuning the PID controller. The key steps are: 1) Define an objective function to minimize transient response characteristics like overshoot and settling time, 2) Minimize the objective function using PSO and GA to determine optimal PID parameters, 3) Compare the closed-loop step response results of the PSO-PID and GA-PID controllers, with the PSO-PID controller showing better performance.
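The "minimize the objective function using PSO" step can be sketched with a bare-bones PSO loop. Since the AVR model itself is not reproduced in the summary, a simple quadratic stands in for the transient-response objective, and all parameter values and names below are illustrative assumptions:

```python
import random

# Minimal particle swarm optimization of the kind used to tune (Kp, Ki, Kd):
# particles are candidate gain vectors; `objective` would normally score a
# simulated step response (overshoot, settling time), but here a quadratic
# stand-in with a known minimum is used so the sketch is self-contained.
def pso(objective, dim=3, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)
    pos = [[random.uniform(0, 2) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    gbest = min(pbest, key=objective)[:]        # swarm's best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# Stand-in objective: minimized at the hypothetical gains (1.0, 0.5, 0.1).
target = [1.0, 0.5, 0.1]
best = pso(lambda g: sum((a - b) ** 2 for a, b in zip(g, target)))
print([round(v, 2) for v in best])
```

Swapping the stand-in for a function that simulates the closed loop and returns a weighted sum of overshoot and settling time recovers the scheme the paper describes.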
The simulation process in this research evaluated the cost of the proposed algorithms: LA, CA and LOAC. NETBEANS software was employed to implement these algorithms, and the simulation results were validated against a benchmark. The results were examined from two aspects: cost, across four scenarios, and processing time, across seven different data centers. The cost results show that the lowest cost occurred in the first scenario and the maximum cost in the third scenario. As for processing time, the maximum delay occurred in data center No. 6, while the minimum processing time occurred in data center No. 2.
Energy power efficient real time systemspragya arya
This document discusses energy-efficient real-time systems. It introduces real-time systems and divides them into hard and soft categories: hard real-time systems suffer serious consequences when deadlines are missed, while soft systems tolerate occasional misses. The document then discusses embedded systems and the need for energy efficiency in real-time systems. It presents scheduling and dynamic voltage scaling techniques to improve energy efficiency. Finally, it proposes a dynamic scheduling algorithm that minimizes energy consumption using an energy-beneficial break-even time approach.
Design of spark ignition engine speed control using bat algorithm IJECEIAES
This document summarizes a research paper that proposes using the bat algorithm to design a speed controller for a spark ignition engine. The paper first provides background on spark ignition engines, PI controllers, and the bat algorithm. It then describes using the bat algorithm to optimize the proportional and integral gains of a PI controller for a simulated spark ignition engine model in MATLAB/SIMULINK. The objective function is to minimize the integrated time absolute error of the engine speed response. Simulation results under different speed variations and load conditions are presented and analyzed to demonstrate that the bat algorithm can enhance engine speed performance compared to a conventional PI controller.
Task Scheduling using Hybrid Algorithm in Cloud Computing Environmentsiosrjce
The document summarizes a proposed hybrid task scheduling algorithm called PSOCS that combines particle swarm optimization (PSO) and cuckoo search (CS) for scheduling tasks in cloud computing environments. The PSOCS algorithm aims to minimize task completion time (makespan) and improve resource utilization. It was tested in a simulation using CloudSim and showed reductions in makespan and increases in utilization compared to PSO and random scheduling algorithms.
IRJET- Comparative Analysis between Critical Path Method and Monte Carlo S...IRJET Journal
This document compares the Critical Path Method (CPM) and Monte Carlo simulation for project scheduling. CPM uses deterministic activity durations to calculate the critical path and project duration. Monte Carlo simulation incorporates uncertainty by using three time estimates per activity - optimistic, pessimistic, and most likely - represented as probability distributions. It runs thousands of simulations to determine the likely project duration based on random sampling from these distributions. The document reviews literature on applying Monte Carlo simulation in construction projects. It then describes a study that uses both CPM and Monte Carlo simulation on a real construction project to compare the results and evaluate Monte Carlo simulation's usefulness for the construction industry.
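The Monte Carlo side of the comparison can be sketched with triangular sampling over the three time estimates per activity. The serial activity chain below is an illustrative stand-in for a real project network (CPM on an actual network would sum only the critical path):

```python
import random

# Monte Carlo duration estimate: each activity gets (optimistic,
# most likely, pessimistic) estimates and is sampled from a triangular
# distribution; thousands of runs give a distribution of project
# durations instead of CPM's single deterministic value.
def simulate_duration(activities, runs=10_000, seed=1):
    random.seed(seed)
    total = 0.0
    for _ in range(runs):
        # random.triangular(low, high, mode) samples one activity duration
        total += sum(random.triangular(o, p, m) for o, m, p in activities)
    return total / runs  # mean simulated project duration

# (optimistic, most likely, pessimistic) per activity, in days
acts = [(2, 4, 8), (3, 5, 9), (1, 2, 4)]
print(round(simulate_duration(acts), 1))
```

The triangular mean is (low + mode + high) / 3, so the expected total here is about 12.7 days; collecting the per-run totals instead of just the mean yields the percentile estimates (for example, an 80%-confidence completion date) that make the technique attractive for construction schedules.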
IMPROVING REAL TIME TASK AND HARNESSING ENERGY USING CSBTS IN VIRTUALIZED CLOUDijcax
Cloud computing allows business customers to scale their resource usage up and down based on need, thanks to virtualization technology. The scheduling objectives are to improve the system's schedulability for real-time tasks and to save energy. To achieve these objectives, we employed the virtualization technique and rolling-horizon optimization with a vertical scheduling operation. The project considers a Cluster Scoring Based Task Scheduling (CSBTS) algorithm that aims to decrease task completion time, while policies for VM creation, migration and cancellation dynamically adjust the scale of the cloud so that real-time requirements are met and energy is saved.
Survey of Real Time Scheduling AlgorithmsIOSR Journals
This document summarizes and reviews real-time scheduling algorithms. It discusses both static and dynamic scheduling algorithms for real-time tasks on uniprocessors and multiprocessors. The document reviews algorithms such as Earliest Deadline First, Least Laxity First, and Modified Instantaneous Utilization Factor Scheduling. It concludes that Modified Instantaneous Utilization Factor Scheduling provides better results than previous algorithms in terms of context switching, response time, and CPU utilization for uniprocessor scheduling.
Reinforcement learning based multi core scheduling (RLBMCS) for real time sys...IJECEIAES
This document summarizes a reinforcement learning based multi-core scheduling (RLBMCS) algorithm for real-time systems. The algorithm uses reinforcement learning to dynamically assign task priorities and place tasks in a multi-level feedback queue to schedule tasks across multiple processor cores. It aims to optimize metrics like CPU utilization, throughput, turnaround time, waiting time, response time and deadline meet ratio. Tasks can transition between four states - initial, objective degradation, objective progression, and objective stabilization - based on changes to a multi-objective optimization function. The scheduler acts as the agent and assigns tasks to queues/actions based on task and system states to maximize the optimization function over time.
This document provides an analytical survey of real-time system scheduling techniques. It discusses the types of real-time systems as either hard or soft, and the importance of scheduling in meeting tasks' predetermined deadlines. The document outlines several static and dynamic scheduling techniques used in real-time systems, including table-driven, static priority preemptive, and static priority non-preemptive approaches. It provides examples of algorithms that use these techniques, such as earliest deadline first (EDF) and rate monotonic (RM), and discusses their advantages and limitations for different real-time system needs.
STATISTICAL APPROACH TO DETERMINE MOST EFFICIENT VALUE FOR TIME QUANTUM IN RO...ijcsit
Scheduling processes is one of the most fundamental functions of an operating system. In that context, one of the most common scheduling algorithms used in most operating systems is the Round Robin method, in which the ready processes waiting in the ready queue take control of the processor circularly, each for a short period of time known as the time quantum (or time slice). Here we use statistics to develop a mathematical model that determines the most efficient value for the time quantum. This is strictly theoretical, as the execution times of the various processes are not known beforehand. The proposed approach is nevertheless compared with recently developed algorithms in this regard to determine its efficiency.
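How strongly Round Robin metrics depend on the quantum is visible in a small simulation. Burst times and the quantum below are illustrative, and all processes are assumed to arrive at time zero:

```python
from collections import deque

# Round Robin simulation: both the context-switch count and the average
# waiting time depend on the quantum, which is the value the statistical
# model above tries to optimize.
def round_robin(bursts, quantum):
    remaining = list(bursts)
    waiting = [0] * len(bursts)
    queue = deque(range(len(bursts)))     # all processes ready at time 0
    switches = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])  # run process i for one slice
        for j in queue:                   # everyone still queued waits
            waiting[j] += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)               # not finished: back of the queue
        switches += 1
    return sum(waiting) / len(bursts), switches

print(round_robin([5, 3, 8], quantum=4))  # (avg wait ~6.33, 5 dispatches)
```

Rerunning with different quanta shows the trade-off the paper models: a tiny quantum inflates the switch count, while a huge one degenerates toward first-come-first-serve waiting times.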
design method.
This document proposes a fair scheduling algorithm with dynamic load balancing for grid computing. It begins by introducing grid computing and the need for efficient load balancing algorithms to distribute tasks. It then describes dynamic load balancing approaches, including information, triggering, resource type, location, and selection policies. The proposed algorithm uses a fair scheduling approach that assigns tasks to processors based on their estimated fair completion times to ensure tasks receive equal shares of computing resources. It also includes a dynamic load balancing component that migrates tasks between processors to maintain balanced loads across all resources. Simulation results demonstrated the algorithm achieved balanced loads across processors and reduced overall task completion times.
Efficient Resource Management Mechanism with Fault Tolerant Model for Computa...Editor IJCATR
Grid computing provides a framework and deployment environment that enables resource
sharing, accessing, aggregation and management. It allows coordinated use of various
resources in dynamic, distributed virtual organizations. Grid scheduling is responsible for resource
discovery, resource selection and job assignment over a decentralized heterogeneous system. In the
existing system, a primary-backup approach is used for fault tolerance in a single environment. In this
approach, each task has a primary copy and a backup copy on two different processors. For dependent
tasks, precedence constraints among tasks must be considered when scheduling backup copies and
overloading backups. Two algorithms have been developed to schedule backups of dependent and
independent tasks. The proposed work manages resource failures in grid job scheduling. In this
method, data sources and resources are integrated from different geographical environments. Fault-tolerant
scheduling with the primary-backup approach is used to handle job failures in the grid environment.
The impact of communication protocols is also considered: the Transmission Control Protocol (TCP)
and User Datagram Protocol (UDP) are used to distribute each task's messages to grid resources.
This document proposes a new one-step method for tuning PI/PID controllers based on closed-loop experiments. It derives simple correlations between data from a proportional-only closed-loop step response experiment and PI/PID settings that provide good performance and robustness. Specifically:
1) A proportional-only controller is used to generate a step response with 10-60% overshoot. The gain, overshoot, peak time, and steady-state change are recorded.
2) Simulations show the proposed controller gain is proportional to the proportional gain used in the experiment, with the ratio dependent only on overshoot. Simple equations are derived relating overshoot and peak time to the PI/PID settings.
3
Quadrotor control is needed so that the quadrotor can hover close to a stationary state, which requires suitable control techniques. One control technique that can be designed and implemented on a quadrotor is PID control. Tuning the PID parameters with a Genetic Algorithm can speed up the manual tuning process. A weakness of the standard Genetic Algorithm is that it often discards important information found in other individuals, causing premature convergence, especially in early generations. These problems can be overcome by using crossover and mutation rules with different probability levels according to fitness values and the evolutionary process. Using this fast genetic algorithm, the study obtained the constants Kp, Ki and Kd with the lowest rise time and overshoot: 0.010, 0.001 and 0.036 at the pitch angle; 0.010, 0.001 and 0.030 at the roll angle; and 0.018, 0.006 and 0.043 at the yaw angle. A comparison of PID tuning simulations shows that the fast genetic algorithm reaches the optimum generation faster than the standard genetic algorithm by 26.67% at the pitch angle, 44% at the roll angle and 20% at the yaw angle. This in turn shortens simulation execution time, with the fast genetic algorithm 26.4% faster at the pitch angle, 38.05% at the roll angle, and 24.19% at the yaw angle.
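A toy sketch of the fitness-adaptive idea summarized above follows. The quadrotor PID simulation is replaced by a stand-in objective (squared distance to the reported pitch-angle gains), and the population size, probabilities and adaptation rule are illustrative guesses, not the paper's:

```python
import random

# Toy GA with fitness-adaptive mutation ("fast GA" idea). The quadrotor
# PID simulation is replaced by a stand-in objective: squared distance to
# the pitch-angle gains reported in the paper. All GA parameters and the
# adaptation rule itself are illustrative assumptions.
def fast_ga(f, dim=3, pop_size=20, gens=60, lo=0.0, hi=1.0, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                     # lower objective = fitter
        nxt = pop[:2]                       # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:10], 2)  # select among the fitter half
            child = [(x + y) / 2 for x, y in zip(a, b)]
            # fitter parents get a lower mutation probability
            p_m = 0.05 + 0.5 * (pop.index(a) + pop.index(b)) / (2 * pop_size)
            for d in range(dim):
                if rng.random() < p_m:
                    child[d] = min(hi, max(lo, child[d] + rng.gauss(0, 0.1)))
            nxt.append(child)
        pop = nxt
    return min(pop, key=f)

target = [0.010, 0.001, 0.036]              # reported pitch-angle Kp, Ki, Kd
best = fast_ga(lambda k: sum((x - t) ** 2 for x, t in zip(k, target)))
err = sum((x - t) ** 2 for x, t in zip(best, target))
```

The rank-based mutation probability is one plausible reading of "different probability levels according to fitness values"; the paper's actual rule may differ.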
Optimization of Unit Commitment Problem using Classical Soft Computing Techni...IRJET Journal
The document describes using a particle swarm optimization (PSO) algorithm to solve the unit commitment problem (UCP) in electrical power systems. The UCP involves determining the optimal daily startup and shutdown schedule for power generating units to minimize costs while meeting demand and operational constraints. PSO is a soft computing technique inspired by animal social behavior that is applied to find near-optimal solutions. Test results are presented applying PSO to solve the UCP for 6-unit and 10-unit power system models using load data over a 24-hour period. The results demonstrate the effectiveness of PSO for solving the short-term UCP.
This document presents a method for tuning PID controller parameters for an automatic voltage regulator (AVR) system using particle swarm optimization (PSO) and genetic algorithm (GA). It compares the step response results of using PSO versus GA for tuning the PID controller. The key steps are: 1) Define an objective function to minimize transient response characteristics like overshoot and settling time, 2) Minimize the objective function using PSO and GA to determine optimal PID parameters, 3) Compare the closed-loop step response results of the PSO-PID and GA-PID controllers, with the PSO-PID controller showing better performance.
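The PSO update step itself is simple. Below is a minimal sketch with standard inertia, cognitive and social terms; the AVR step-response objective is replaced by a stand-in quadratic around hypothetical ideal gains, and all coefficients are illustrative:

```python
import random

# Minimal PSO sketch. The AVR step-response objective is replaced by a
# stand-in quadratic around hypothetical ideal PID gains; the inertia and
# acceleration coefficients are common textbook values, not the paper's.
def pso(f, dim=3, n=20, iters=100, lo=0.0, hi=2.0, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]            # personal bests
    g = min(P, key=f)[:]             # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]                         # inertia
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
    return g

ideal = [0.8, 0.5, 0.2]              # hypothetical Kp, Ki, Kd optimum
best = pso(lambda k: sum((x - t) ** 2 for x, t in zip(k, ideal)))
```

In the paper, evaluating `f` would mean running a closed-loop step-response simulation and scoring overshoot and settling time rather than a simple distance.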
In this research, the simulation process was based on the cost of the proposed
algorithms, namely the LA, CA and LOAC algorithms. The NETBEANS software was
employed to implement these algorithms, and the simulation results were validated
against a benchmark. The results cover two aspects: cost, across four scenarios, and
processing time, across seven different data centers. For cost, the lowest cost occurred
in the first scenario and the maximum cost in the third scenario. For processing time,
the maximum delay occurred in data center No. 6, while the minimum processing time
occurred in data center No. 2.
Energy power efficient real time systemspragya arya
This document discusses energy efficient real-time systems. It introduces real-time systems and divides them into hard and soft categories. Hard real-time systems have consequences for missing deadlines while soft systems do not. The document then discusses embedded systems and the need for energy efficiency in real-time systems. It presents scheduling and dynamic voltage scaling techniques to improve energy efficiency. Finally, it proposes a dynamic scheduling algorithm to minimize energy consumption using an energy beneficial break even time approach.
Design of spark ignition engine speed control using bat algorithm IJECEIAES
This document summarizes a research paper that proposes using the bat algorithm to design a speed controller for a spark ignition engine. The paper first provides background on spark ignition engines, PI controllers, and the bat algorithm. It then describes using the bat algorithm to optimize the proportional and integral gains of a PI controller for a simulated spark ignition engine model in MATLAB/SIMULINK. The objective function is to minimize the integrated time absolute error of the engine speed response. Simulation results under different speed variations and load conditions are presented and analyzed to demonstrate that the bat algorithm can enhance engine speed performance compared to a conventional PI controller.
Task Scheduling using Hybrid Algorithm in Cloud Computing Environmentsiosrjce
The document summarizes a proposed hybrid task scheduling algorithm called PSOCS that combines particle swarm optimization (PSO) and cuckoo search (CS) for scheduling tasks in cloud computing environments. The PSOCS algorithm aims to minimize task completion time (makespan) and improve resource utilization. It was tested in a simulation using CloudSim and showed reductions in makespan and increases in utilization compared to PSO and random scheduling algorithms.
IRJET- Comparative Analysis between Critical Path Method and Monte Carlo S...IRJET Journal
This document compares the Critical Path Method (CPM) and Monte Carlo simulation for project scheduling. CPM uses deterministic activity durations to calculate the critical path and project duration. Monte Carlo simulation incorporates uncertainty by using three time estimates per activity - optimistic, pessimistic, and most likely - represented as probability distributions. It runs thousands of simulations to determine the likely project duration based on random sampling from these distributions. The document reviews literature on applying Monte Carlo simulation in construction projects. It then describes a study that uses both CPM and Monte Carlo simulation on a real construction project to compare the results and evaluate Monte Carlo simulation's usefulness for the construction industry.
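The contrast between the two methods can be sketched in a few lines. The activity network and the three-point estimates below are illustrative, not data from the studied project:

```python
import random

# Monte Carlo duration sketch: each activity has (optimistic, most likely,
# pessimistic) estimates sampled from a triangular distribution; per trial,
# the longer of the two paths A->B and C is the project duration. The
# network and the numbers are illustrative, not from the studied project.
activities = {"A": (2, 4, 8), "B": (3, 5, 9), "C": (6, 8, 16)}

def simulate(trials=10_000, seed=42):
    rng = random.Random(seed)
    out = []
    for _ in range(trials):
        d = {k: rng.triangular(lo, hi, mode)      # note the argument order
             for k, (lo, mode, hi) in activities.items()}
        out.append(max(d["A"] + d["B"], d["C"]))  # critical path this trial
    return out

durations = simulate()
mean = sum(durations) / len(durations)
# Deterministic CPM on the most-likely values alone gives max(4 + 5, 8) = 9
```

Because the pessimistic tails are long and either path can become critical, the simulated mean duration exceeds the deterministic CPM estimate, which is the practical argument for Monte Carlo in construction scheduling.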
IMPROVING REAL TIME TASK AND HARNESSING ENERGY USING CSBTS IN VIRTUALIZED CLOUDijcax
Cloud computing gives business customers the facility to scale their resource usage up and down
based on need, thanks to virtualization technology. The scheduling objectives are to improve
the system's schedulability for real-time tasks and to save energy. To achieve these objectives, we
employed the virtualization technique and rolling-horizon optimization with a vertical scheduling operation.
The project considers the Cluster Scoring Based Task Scheduling (CSBTS) algorithm, which aims to decrease
task completion time; policies for VM creation, migration and cancellation dynamically adjust the
scale of the cloud so that it meets real-time requirements while saving energy.
Survey of Real Time Scheduling AlgorithmsIOSR Journals
This document summarizes and reviews real-time scheduling algorithms. It discusses both static and dynamic scheduling algorithms for real-time tasks on uniprocessors and multiprocessors. The document reviews algorithms such as Earliest Deadline First, Least Laxity First, and Modified Instantaneous Utilization Factor Scheduling. It concludes that Modified Instantaneous Utilization Factor Scheduling provides better results than previous algorithms in terms of context switching, response time, and CPU utilization for uniprocessor scheduling.
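As a reference point for the surveyed baselines, a unit-time preemptive EDF simulator for one-shot (aperiodic) tasks can be sketched as follows; the task sets are hypothetical:

```python
import heapq

# Unit-time preemptive EDF sketch for one-shot tasks (no periods): at each
# tick, run the ready task with the earliest absolute deadline. Task sets
# used below are hypothetical.
def edf(tasks):
    """tasks: list of (release, wcet, deadline) tuples."""
    pending = sorted(tasks)          # by release time
    ready = []                       # min-heap keyed by absolute deadline
    remaining = {}
    done, missed = [], []
    time = i = 0
    while i < len(pending) or ready:
        while i < len(pending) and pending[i][0] <= time:
            heapq.heappush(ready, (pending[i][2], i))
            remaining[i] = pending[i][1]
            i += 1
        if not ready:
            time = pending[i][0]     # idle until the next release
            continue
        d, j = ready[0]              # earliest-deadline ready task
        remaining[j] -= 1            # run it for one time unit
        time += 1
        if remaining[j] == 0:
            heapq.heappop(ready)
            (done if time <= d else missed).append(j)
    return done, missed

ok, late = edf([(0, 2, 4), (0, 3, 6), (1, 1, 3)])    # feasible set
ok2, late2 = edf([(0, 3, 3), (0, 3, 3)])             # overloaded set
```

The second call shows EDF's well-known weakness under overload: one of the two tasks inevitably misses its deadline, which is the situation hybrid schemes try to handle more gracefully.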
Reinforcement learning based multi core scheduling (RLBMCS) for real time sys...IJECEIAES
This document summarizes a reinforcement learning based multi-core scheduling (RLBMCS) algorithm for real-time systems. The algorithm uses reinforcement learning to dynamically assign task priorities and place tasks in a multi-level feedback queue to schedule tasks across multiple processor cores. It aims to optimize metrics like CPU utilization, throughput, turnaround time, waiting time, response time and deadline meet ratio. Tasks can transition between four states - initial, objective degradation, objective progression, and objective stabilization - based on changes to a multi-objective optimization function. The scheduler acts as the agent and assigns tasks to queues/actions based on task and system states to maximize the optimization function over time.
This document provides an analytical survey of real-time system scheduling techniques. It discusses the types of real-time systems as either hard or soft, and the importance of scheduling in meeting tasks' predetermined deadlines. The document outlines several static and dynamic scheduling techniques used in real-time systems, including table-driven, static priority preemptive, and static priority non-preemptive approaches. It provides examples of algorithms that use these techniques, such as earliest deadline first (EDF) and rate monotonic (RM), and discusses their advantages and limitations for different real-time system needs.
SCHEDULING DIFFERENT CUSTOMER ACTIVITIES WITH SENSING DEVICEijait
Most periodic tasks are assigned to processors using a partition scheduling policy after checking feasibility conditions. A new approach is proposed for scheduling different activities together with one periodic task within the system. In this paper, control strategies are identified for allocating different types of tasks (activities) to
individual computing elements such as smartphones or microphones. In our simulation model, the aperiodic tasks generated by each periodic task are taken into consideration, and different sets of periodic and aperiodic tasks are scheduled together. This new approach shows that scheduling all the different activities
with one periodic task leads to better performance.
This document describes an efficient dynamic scheduling algorithm for real-time multi-core systems. It proposes a task split myopic scheduling algorithm that exploits parallelism in tasks to meet deadlines. The algorithm splits tasks that cannot meet their deadline into mandatory and optional sub-tasks, which can then execute concurrently on multiple cores. Simulation studies show the proposed algorithm has higher schedulability than myopic and improved myopic algorithms. The algorithm aims to better utilize executing cores to increase the probability of tasks meeting their deadlines while maintaining a low computational complexity of O(kn), the same as myopic algorithms.
Efficient Dynamic Scheduling Algorithm for Real-Time MultiCore Systems iosrjce
An imprecise computation model is used in a dynamic scheduling algorithm with a heuristic function to
schedule task sets. A task is characterized by its ready time, worst-case computation time, deadline and resource
requirements. A task failing to meet its deadline and resource requirements on time is split into a mandatory part
and an optional part. These sub-tasks can execute concurrently on multiple cores, thus achieving the
parallelization provided by the multi-core system. The mandatory part produces acceptable results, while the optional
part refines the result further. To study the effectiveness of the proposed scheduling algorithm, extensive simulation
studies have been carried out, comparing its performance with the myopic and improved myopic scheduling
algorithms. The simulation studies show that the schedulability of the task split myopic
algorithm is always higher than that of the myopic and improved myopic algorithms.
This document discusses various load balancing algorithms that can be applied in cloud computing. It begins with an introduction to cloud computing models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It then discusses the goals of load balancing in cloud computing. The main part of the document describes and provides examples of several load balancing algorithms: Round Robin, Opportunistic Load Balancing, Minimum Completion Time, and Minimum Execution Time. For each algorithm, it explains the basic approach and provides an example to illustrate how it works.
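Of these heuristics, Minimum Completion Time is the easiest to sketch: each task, in arrival order, goes to the machine that would finish it earliest given the current load. The execution-time matrix below is hypothetical:

```python
# Minimum Completion Time (MCT) sketch: each task, in arrival order, is
# mapped to the machine where it would finish earliest given the current
# load. The execution-time matrix used below is hypothetical.
def mct(exec_times):
    """exec_times[t][m] = runtime of task t on machine m."""
    n_machines = len(exec_times[0])
    load = [0] * n_machines
    assignment = []
    for times in exec_times:
        m = min(range(n_machines), key=lambda j: load[j] + times[j])
        assignment.append(m)
        load[m] += times[m]
    return assignment, max(load)    # mapping and resulting makespan

assignment, makespan = mct([[3, 5], [4, 2], [6, 7], [2, 3]])
```

Minimum Execution Time would instead pick `min(times)` regardless of `load`, which is why MET can pile work onto one fast machine while MCT keeps loads balanced.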
Bounded ant colony algorithm for task Allocation on a network of homogeneous ...ijcsit
This document summarizes a research paper that proposes a bounded ant colony algorithm (BTS-ACO) for task scheduling on a network of homogeneous processors using a primary site. The algorithm uses an initial bound on each processor's load to control task allocation. It investigates scheduling tasks from a sorted list (SLoT) versus a random list (RLoT). Simulation results show that BTS-ACO with a sorted task list achieves better performance than a random list in terms of scheduling time, makespan, and load balancing.
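The pheromone mechanics can be shown in miniature. This is a generic ACO task-allocation toy, not BTS-ACO itself: the load bound, the primary site and any heuristic visibility term are omitted, and all parameters are illustrative:

```python
import random

# Miniature ACO for task-to-processor allocation, in the spirit of the
# paper but not BTS-ACO itself (the load bound, primary site and any
# heuristic visibility term are omitted; all parameters are illustrative).
def aco_schedule(durations, n_procs=2, ants=10, iters=40, rho=0.3, seed=0):
    rng = random.Random(seed)
    tau = [[1.0] * n_procs for _ in durations]   # pheromone per (task, proc)
    best, best_span = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            load = [0] * n_procs
            assign = []
            for t, dur in enumerate(durations):
                # pick a processor with probability proportional to pheromone
                p = rng.choices(range(n_procs), weights=tau[t])[0]
                assign.append(p)
                load[p] += dur
            if max(load) < best_span:
                best, best_span = assign, max(load)
        # evaporate all trails, then reinforce the best-so-far assignment
        for t in range(len(durations)):
            for p in range(n_procs):
                tau[t][p] *= 1 - rho
            tau[t][best[t]] += 1.0
    return best, best_span

best, span = aco_schedule([4, 3, 3, 2])   # optimal split: {4, 2} vs {3, 3}
```

Evaporation (`rho`) keeps early reinforcement from freezing the search, while depositing pheromone on the best-so-far assignment steers later ants toward balanced allocations.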
The document presents a novel hyper-heuristic scheduling algorithm called HHSA for cloud computing systems. HHSA aims to find better scheduling solutions than traditional rule-based algorithms by employing diversity and improvement detection operators to dynamically determine which low-level heuristic to use. The performance of HHSA is evaluated on CloudSim and Hadoop and shown to significantly reduce makespan compared to other algorithms.
In a multiprogramming system, multiple processes exist concurrently in main memory. Each process alternates between using a processor and waiting for some event to occur, such as the completion of an I/O operation. The processor or processors are kept busy by executing one process while the others wait. The key to multiprogramming is scheduling: CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. By switching the CPU among processes, the operating system can make the computer more productive. Scheduling affects the performance of the system because it determines which processes will wait and which will progress. In this paper, the scheduling algorithms First Come First Served (FCFS), Round Robin (RR), Shortest Process Next (SPN) and Shortest Remaining Time (SRT) are simulated in C. Daw Khin Po, "Simulation of Process Scheduling Algorithms", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 3, Issue 4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd25124.pdf
Paper URL: https://www.ijtsrd.com/computer-science/operating-system/25124/simulation-of-process-scheduling-algorithms/daw-khin-po
Integrating fault tolerant scheme with feedback control scheduling algorithm ...ijics
In order to provide Quality of Service (QoS) in open and unpredictable environment, Feedback based
Control Scheduling Algorithm (FCSA) is designed to keep the processor utilization at the scheduling
utilization bound. FCSA controls CPU utilization by assigning task periods that optimize overall control
performance, meeting deadlines even if the task execution time is unpredictable and through performance
control feedback loop. Current FCSA doesn’t ensure Fault Tolerance (FT) while providing QoS in terms of
CPU utilization and resource management. In order to assure that tasks should meet their deadlines even
in the presence of faults, a FT scheme has to be integrated at control scheduling co-design level. This paper
presents a novel approach on integrating FT scheme with FCSA for real time embedded systems. This
procedure is especially important for control scheduling co-design of embedded systems.
Developing Scheduler Test Cases to Verify Scheduler Implementations In Time-T...ijesajournal
Although there is a “one-to-many” mapping between scheduling algorithms and scheduler implementations, only a few studies have discussed the challenges and consequences of translating between these two system models. It has been argued that a wide gap exists between scheduling theory and scheduling implementation in practical systems, and that this gap must be bridged to obtain an effective validation of embedded systems. In this paper, we introduce a technique called “Scheduler Test Case” (STC) aimed at bridging the gap between scheduling algorithms and scheduler implementations in single-processor embedded systems implemented using Time-Triggered Co-operative (TTC) architectures. We demonstrate how the STC technique can provide a simple and systematic way of documenting, verifying (testing) and comparing various TTC scheduler implementations on particular hardware. STC is, however, a generic technique that provides a black-box tool for assessing and predicting the behaviour of representative implementation sets of any real-time scheduling algorithm.
1. The document presents a mathematical model for scheduling jobs in a three-stage flow shop to minimize total elapsed time and rental costs under constraints.
2. It considers processing times, setup times, transportation times between machines, and machine breakdown intervals.
3. The key concepts are: starting machines at optimal times to keep completion times and total elapsed time unchanged while minimizing machine rental costs.
This document proposes a new task scheduling algorithm called Dynamic Heterogeneous Shortest Job First (DHSJF) for heterogeneous cloud computing systems. DHSJF aims to improve performance metrics like reduced makespan and low energy consumption by considering the heterogeneity of resources and workloads. It discusses existing scheduling algorithms like Round Robin, First Come First Serve and their limitations. The proposed DHSJF algorithm prioritizes tasks with the shortest estimated completion time to optimize resource utilization and improve overall performance of the cloud computing system. Simulation results show that DHSJF provides better results for metrics like average waiting time and turnaround time as compared to Round Robin and First Come First Serve scheduling algorithms.
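The core DHSJF observation, that running shorter jobs first reduces average waiting time, is easy to verify in a few lines; resource heterogeneity and dynamic arrivals are omitted here, and the burst times are hypothetical:

```python
# Contrast FCFS with shortest-job-first ordering, the core idea behind
# DHSJF (heterogeneity and dynamic arrivals are omitted; burst times
# are hypothetical and all jobs arrive at t = 0).
def avg_waiting(bursts):
    wait = elapsed = 0
    for b in bursts:      # each job waits for everything scheduled before it
        wait += elapsed
        elapsed += b
    return wait / len(bursts)

jobs = [8, 4, 2, 6]
fcfs = avg_waiting(jobs)             # run in arrival order
sjf = avg_waiting(sorted(jobs))      # run shortest first
```

Sorting by burst length drops the average wait from 8.5 to 5.0 on this set; DHSJF applies the same principle using estimated completion times on heterogeneous cloud resources.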
Optimizing Task Scheduling in Mobile Cloud Computing Using Particle Swarm Opt...IRJET Journal
The document discusses optimizing task scheduling in mobile cloud computing using the particle swarm optimization algorithm. It proposes using PSO to develop a task scheduling optimization model that reduces task transmission time, execution time, and costs. PSO is a dynamic scheduling algorithm that could help speed up task execution and decrease costs compared to other algorithms. The document reviews background on task scheduling and cloud computing and analyzes related work on using algorithms like genetic algorithms and ant colony optimization for task scheduling.
Proposing a New Job Scheduling Algorithm in Grid Environment Using a Combinat...Editor IJCATR
Scheduling jobs to resources in grid computing is complicated due to the distributed and heterogeneous nature of the resources.
The purpose of job scheduling in grid environment is to achieve high system throughput and minimize the execution time of applications.
The complexity of scheduling problem increases with the size of the grid and becomes highly difficult to solve effectively.
To obtain a good and efficient method for solving scheduling problems in the grid, a new area of research has emerged. In this paper, a job
scheduling algorithm is proposed to assign jobs to available resources in a grid environment. The proposed algorithm is based on the Ant
Colony Optimization (ACO) algorithm, combined with one of the best scheduling algorithms, Suffrage; the result of Suffrage is used within
the proposed ACO algorithm. The main contribution of this work is to minimize the makespan of a given set of
jobs. The experimental results show that the proposed algorithm can deliver significant performance gains in a grid environment.