Grid computing utilizes distributed, heterogeneous resources to support complicated computing problems. Job
scheduling in a computing grid is therefore a very important problem: to utilize a grid efficiently, we need a good scheduling algorithm to assign
jobs to its resources.
In nature, ants have a tremendous ability to team up and find an optimal path to food sources. An ant algorithm
simulates this behavior. In this paper, a new Ant Colony Optimization (ACO) algorithm is proposed for job scheduling in the
grid environment. The main contribution of this paper is to minimize the makespan of a given set of jobs. According to the
experimental results, the proposed algorithm outperforms the other job scheduling algorithms compared.
Proposing a New Job Scheduling Algorithm in Grid Environment Using a Combinat... - Editor IJCATR
Scheduling jobs to resources in grid computing is complicated because of the distributed and heterogeneous nature of the resources.
The purpose of job scheduling in a grid environment is to achieve high system throughput and to minimize the execution time of applications.
The complexity of the scheduling problem grows with the size of the grid and quickly becomes very difficult to solve effectively,
which has made efficient grid scheduling methods an active area of research. In this paper, a job
scheduling algorithm is proposed to assign jobs to available resources in a grid environment. The proposed algorithm is based on Ant
Colony Optimization (ACO) and is combined with Suffrage, one of the best-known scheduling heuristics: the result of Suffrage is used
within the proposed ACO algorithm. The main contribution of this work is to minimize the makespan of a given set of
jobs. The experimental results show that the proposed algorithm achieves significant performance gains in the grid environment.
The document describes a novel approach called Enhanced Ant Colony Optimization (EACO) for scheduling tasks in a grid computing environment. EACO aims to improve task scheduling by minimizing makespan compared to existing algorithms such as Modified Ant Colony Optimization, MAX-MIN, and the Resource Aware Scheduling Algorithm. It does this by considering system and network performance in dynamic grids and selecting resources according to their availability. The document presents the procedures of EACO and the existing algorithms, experimental results showing EACO achieves a lower makespan, and concludes that EACO is effective for task scheduling in grids.
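As a concrete illustration of how an ACO-style scheduler of this kind can minimize makespan, the following sketch assigns independent jobs to machines from an ETC (expected time to compute) matrix. It is a generic ant-system sketch, not the exact EACO or Suffrage-seeded procedure of the papers above; all parameter names and values are illustrative assumptions.

```python
import random

def aco_schedule(etc, n_ants=20, n_iters=50, alpha=1.0, beta=2.0, rho=0.1, seed=0):
    """Assign each job to one machine, minimizing makespan.
    etc[j][m] = estimated time to compute job j on machine m."""
    rng = random.Random(seed)
    n_jobs, n_machines = len(etc), len(etc[0])
    # Pheromone on each (job, machine) pairing, initially uniform.
    tau = [[1.0] * n_machines for _ in range(n_jobs)]
    best_assign, best_makespan = None, float("inf")

    for _ in range(n_iters):
        for _ in range(n_ants):
            load = [0.0] * n_machines
            assign = []
            for j in range(n_jobs):
                # Heuristic: prefer machines that would finish job j sooner.
                weights = [(tau[j][m] ** alpha)
                           * ((1.0 / (load[m] + etc[j][m])) ** beta)
                           for m in range(n_machines)]
                # Roulette-wheel selection over the weights.
                r, acc, choice = rng.random() * sum(weights), 0.0, n_machines - 1
                for m, w in enumerate(weights):
                    acc += w
                    if r <= acc:
                        choice = m
                        break
                assign.append(choice)
                load[choice] += etc[j][choice]
            makespan = max(load)
            if makespan < best_makespan:
                best_assign, best_makespan = assign, makespan
        # Evaporate, then reinforce the best-so-far assignment.
        for j in range(n_jobs):
            for m in range(n_machines):
                tau[j][m] *= (1.0 - rho)
        for j, m in enumerate(best_assign):
            tau[j][m] += 1.0 / best_makespan
    return best_assign, best_makespan
```

On a tiny ETC matrix such as `[[3, 5], [2, 4], [6, 1]]` (three jobs, two machines) the sketch settles on the balanced assignment with makespan 5.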
Proposing a Scheduling Algorithm to Balance the Time and Energy Using an Impe... - Editor IJCATR
Computational grids have become an appealing research area as they solve compute-intensive problems within the scientific
community and in industry. A grid computational power is aggregated from a huge set of distributed heterogeneous workers; hence, it
is becoming a mainstream technology for large-scale distributed resource sharing and system integration. Unfortunately, current grid
schedulers suffer from the haste problem: the scheduler's inability to successfully allocate all input tasks. Accordingly, some tasks
fail to complete execution because they are allocated to unsuitable workers, while others never start execution because the suitable
workers were previously allocated to other peers. This paper presents an imperialist competition algorithm (ICA) method to solve grid
scheduling problems. The objective is to minimize the makespan and the energy consumption of the grid. Simulation results show that
the grid scheduling problem can be solved efficiently by the proposed method.
Presenting an Algorithm for Tasks Scheduling in Grid Environment along with I... - Editor IJCATR
Nowadays, we face huge amounts of data: with the expansion of computer technology and detectors, terabytes of data are
produced. To respond to this demand, grid computing is considered one of the most important research fields. Grid technology
and concepts were originally used to share resources between scientific units, the purpose being to use the resources of the grid
environment to solve complex problems.
In this paper, a new algorithm based on the Mamdani fuzzy system is proposed for task scheduling in a computing grid. The Mamdani
fuzzy approach is a technique that measures criteria using membership functions; the criterion considered in this paper is response
time. The results of the proposed algorithm, implemented on grid systems, indicate the superiority of the proposed method in terms of
standard validation criteria for scheduling algorithms, such as task completion time, and efficiency increases considerably.
Proposing a New Job Scheduling Algorithm in Grid Environment Using a Combinat... - Editor IJCATR
Grid computing is a hardware and software infrastructure that provides affordable, sustainable, and reliable access to resources; its aim
is to create a supercomputer out of idle resources. One of the challenges of grid computing is the scheduling problem, which is regarded
as a tough issue: since scheduling in the grid is a non-deterministic problem, deterministic algorithms cannot be used to improve
scheduling. In this paper, a combination of the imperialist competition algorithm (ICA) and gravitational attraction is used to address the
problem of independent task scheduling in a grid environment, with the aim of reducing the makespan and energy consumption.
Experimental results compare ICA with other algorithms and show that ICA finds a shorter makespan and lower energy than the others.
Moreover, it converges quickly, finding its optimum solution in less time than the other algorithms.
Proposing a scheduling algorithm to balance the time and cost using a genetic... - Editor IJCATR
The document proposes a genetic algorithm combined with a local search inspired by binary gravitational attraction to solve scheduling problems in grid computing. The algorithm aims to minimize task completion time and cost by optimizing resource selection and load balancing. Experimental results showed that the proposed algorithm achieved better time and cost optimization and better resource selection than other algorithms.
An Automatic Clustering Technique for Optimal Clusters - IJCSEA Journal
This document presents a new automatic clustering algorithm called Automatic Merging for Optimal Clusters (AMOC). AMOC is a two-phase iterative extension of k-means clustering that aims to automatically determine the optimal number of clusters for a given dataset. In the first phase, AMOC initializes a large number of clusters k using k-means. In the second phase, it iteratively merges the lowest probability cluster with its closest neighbor, recomputing metrics each time to evaluate if the merge improved clustering quality. The algorithm stops merging once no improvements are found. Experimental results on synthetic and real datasets show AMOC finds nearly optimal cluster structures in terms of number, compactness and separation of clusters.
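The merge phase described above can be sketched as follows. This is a toy reconstruction under stated assumptions: it uses cluster population as the "lowest probability" criterion and a simple compactness-plus-cluster-count score in place of the paper's validity metrics; the `penalty` weight is an illustrative assumption.

```python
import math

def amoc_merge(points, labels, penalty=0.05):
    """AMOC-style merge phase (toy sketch): repeatedly merge the smallest
    cluster into its nearest neighbour, keeping the merge only if a
    compactness-plus-cluster-count score improves."""

    def centroid(members):
        dims = len(members[0])
        return tuple(sum(p[d] for p in members) / len(members) for d in range(dims))

    def score(groups):
        # Mean point-to-centroid distance plus a penalty per cluster
        # (lower is better; the penalty rewards fewer clusters).
        total = sum(math.dist(p, centroid(g)) for g in groups for p in g)
        n = sum(len(g) for g in groups)
        return total / n + penalty * len(groups)

    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    groups = list(clusters.values())

    improved = True
    while improved and len(groups) > 1:
        improved = False
        groups.sort(key=len)                      # smallest cluster first
        cs = centroid(groups[0])
        # Nearest neighbouring cluster by centroid distance.
        j = min(range(1, len(groups)),
                key=lambda i: math.dist(cs, centroid(groups[i])))
        candidate = [groups[0] + groups[j]] + \
                    [g for i, g in enumerate(groups) if i not in (0, j)]
        if score(candidate) < score(groups):      # keep only improving merges
            groups = candidate
            improved = True
    return groups
```

Given five points forming two tight blobs but labelled as three initial clusters, the loop merges the stray singleton into its neighbouring blob and then stops, leaving two clusters.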
A PREFIXED-ITEMSET-BASED IMPROVEMENT FOR APRIORI ALGORITHM - csandit
Association rule mining is a very important part of data mining; it is used to find interesting patterns in transaction databases. The Apriori algorithm is one of the most classical algorithms
for association rules, but it has an efficiency bottleneck. In this article, we propose a prefixed-itemset-based data structure for candidate itemset generation; with the help of this structure we improve the efficiency of the classical Apriori algorithm.
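For reference, the prefix-based join at the heart of Apriori candidate generation looks like this. The sketch implements the standard algorithm, not the paper's specific prefixed-itemset structure: two sorted frequent k-itemsets are joined only when they share the same (k-1)-prefix, followed by the usual subset prune. Items are assumed to be mutually comparable (e.g. all strings).

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Plain Apriori with prefix-join candidate generation.
    Returns {sorted itemset tuple: support count}."""
    transactions = [frozenset(t) for t in transactions]
    # Frequent 1-itemsets.
    counts = {}
    for t in transactions:
        for item in t:
            counts[item] = counts.get(item, 0) + 1
    frequent = {(i,): c for i, c in counts.items() if c >= min_support}
    all_frequent = dict(frequent)

    while frequent:
        prev = frequent
        itemsets = sorted(prev)
        candidates = []
        for a, b in combinations(itemsets, 2):
            if a[:-1] == b[:-1]:                  # shared (k-1)-prefix: join
                cand = a + (b[-1],)
                # Apriori prune: every (k-1)-subset must itself be frequent.
                if all(cand[:i] + cand[i + 1:] in prev
                       for i in range(len(cand))):
                    candidates.append(cand)
        # Count support and keep the frequent candidates.
        frequent = {}
        for cand in candidates:
            cs = frozenset(cand)
            support = sum(1 for t in transactions if cs <= t)
            if support >= min_support:
                frequent[cand] = support
        all_frequent.update(frequent)
    return all_frequent
```

For example, with five small transactions and `min_support=3`, all three pairs of items {a, b, c} are frequent but the triple {a, b, c} is pruned away by its support count.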
THE ACTIVE CONTROLLER DESIGN FOR ACHIEVING GENERALIZED PROJECTIVE SYNCHRONIZA... - ijait
This paper discusses the design of active controllers for achieving generalized projective synchronization (GPS) of identical hyperchaotic Lü systems (Chen, Lu, Lü and Yu, 2006), identical hyperchaotic Cai systems (Wang and Cai, 2009) and non-identical hyperchaotic Lü and hyperchaotic Cai systems. The synchronization results (GPS) for the hyperchaotic systems have been derived using active control method and established using Lyapunov stability theory. Since the Lyapunov exponents are not required for these calculations, the active control method is very effective and convenient for achieving the GPS of the
hyperchaotic systems addressed in this paper. Numerical simulations are provided to illustrate the effectiveness of the GPS synchronization results derived in this paper.
Hybrid aco iwd optimization algorithm for minimizing weighted flowtime in clo... - eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Hybrid aco iwd optimization algorithm for minimizing weighted flowtime in clo... - eSAT Journals
Abstract: Scientists and engineers conduct many experiments by executing the same code against various input data, a pattern known as Parameter Sweep Experiments (PSEs). This can result in very many jobs with high computational requirements, so distributed environments, particularly clouds, are used to meet these demands. Since job scheduling is an NP-complete problem, it is quite challenging. The proposed work builds a cloud scheduler based on bio-inspired techniques, since these approximate such problems well with little input. Existing proposals, however, ignore job priority, which is an important aspect of PSEs because it accelerates obtaining and visualizing the results of a PSE on scientific clouds. The weighted flowtime is minimized by a cloud scheduler based on Ant Colony Optimization (ACO): Intelligent Water Drops (IWDs) find the resources matching the job's requirements and the routing information needed to reach them, and ACO then selects the best among the matching resources. The main aims of this approach are to converge to the optimal schedule faster, minimize job makespan, and improve load balancing. Keywords: Ant Colony Optimization, Intelligent Water Drops, Parameter Sweep Experiments, Weighted Flowtime.
Bounded ant colony algorithm for task Allocation on a network of homogeneous ... - ijcsit
This document summarizes a research paper that proposes a bounded ant colony algorithm (BTS-ACO) for task scheduling on a network of homogeneous processors using a primary site. The algorithm uses an initial bound on each processor's load to control task allocation. It investigates scheduling tasks from a sorted list (SLoT) versus a random list (RLoT). Simulation results show that BTS-ACO with a sorted task list achieves better performance than a random list in terms of scheduling time, makespan, and load balancing.
USING CUCKOO ALGORITHM FOR ESTIMATING TWO GLSD PARAMETERS AND COMPARING IT WI... - ijcsit
This study introduces and compares different methods for estimating the two parameters of generalized logarithmic series distribution. These methods are the cuckoo search optimization, maximum likelihood estimation, and method of moments algorithms. All the required derivations and basic steps of each algorithm are explained. The applications for these algorithms are implemented through simulations using different sample sizes (n = 15, 25, 50, 100). Results are compared using the statistical measure mean square error.
IRJET- PSO based PID Controller for Bidirectional Inductive Power Transfer Sy... - IRJET Journal
This document presents a Particle Swarm Optimization (PSO) based PID controller for a bidirectional inductive power transfer system. PSO is used to optimize the parameters of a PID controller (proportional, integral and derivative terms) to minimize error and improve system performance. The proposed PSO-tuned PID controller is simulated and its performance is compared to other tuning methods. Results demonstrate the validity and effectiveness of using PSO to optimize PID control parameters for bidirectional power transfer systems.
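A minimal PSO loop of the kind used for such tuning can be sketched as follows. The coefficients, bounds, and cost function here are illustrative assumptions; for PID tuning, `cost` would simulate the closed loop for candidate gains (Kp, Ki, Kd) and return an error integral such as ISE.

```python
import random

def pso(cost, dim, n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=0.0, hi=10.0, seed=1):
    """Minimal particle swarm optimization: minimize cost over [lo, hi]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

On a toy quadratic cost with minimum at (3, 3, 3), standing in for a simulated control error, the swarm converges close to the optimum in 100 iterations.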
The document summarizes electricity load forecasting techniques for power system planning. It discusses using curve fitting algorithms to forecast electricity load based on analyzing past load data from 2012. Specifically, it proposes using a Fourier series curve fitting model to predict future load based on factors like temperature, humidity, and time of day or year. The document also briefly describes other common load forecasting techniques including multiple regression, exponential smoothing, and neural networks.
ADAPTIVE CONTROL AND SYNCHRONIZATION OF SPROTT-I SYSTEM WITH UNKNOWN PARAMETERS - ijscai
This paper derives new results for the adaptive control and synchronization design of the Sprott-I chaotic system (1994), when the system parameters are unknown. First, we build an adaptive controller to stabilize the Sprott-I chaotic system to its unstable equilibrium at the origin. Then we build an adaptive
synchronizer to achieve global chaos synchronization of the identical Sprott-I chaotic systems with unknown parameters. The results derived for adaptive stabilization and adaptive synchronization for the Sprott-I chaotic system have been established using adaptive control theory and Lyapunov stability
theory. Numerical simulations have been shown to demonstrate the effectiveness of the adaptive control and synchronization schemes derived in this paper for the Sprott-I chaotic system.
Time and Reliability Optimization Bat Algorithm for Scheduling Workflow in Cloud - IRJET Journal
This document describes using a meta-heuristic optimization algorithm called the Bat Algorithm (BA) to schedule workflows in cloud computing environments. The BA is applied to optimize a multi-objective function that minimizes workflow execution time and maximizes reliability while keeping costs within a user-specified budget. The BA is compared to a basic randomized evolutionary algorithm (BREA) that uses greedy approaches. Experimental results show the BA performs better by finding schedules that have lower execution times and higher reliability within the given budget constraints. The BA is well-suited for this problem because it can efficiently search large solution spaces and automatically focus on optimal regions like other metaheuristics.
SWARM INTELLIGENCE SCHEDULING OF SOFT REAL-TIME TASKS IN HETEROGENEOUS MULTIP... - ecij
The document presents a hybrid swarm intelligence algorithm called VNABCSA for scheduling soft real-time tasks in heterogeneous multiprocessor systems. VNABCSA combines artificial bee colony and simulated annealing algorithms. It aims to minimize total tardiness, number of processors used, completion time, total waiting time of tasks and processors. The algorithm represents solutions as an ordering of tasks and assignment to processors. It uses artificial bee colony for global search and simulated annealing for local search to improve convergence. Simulation results show it performs better than existing scheduling algorithms.
A HYBRID K-HARMONIC MEANS WITH ABC CLUSTERING ALGORITHM USING AN OPTIMAL K VAL... - IJCI JOURNAL
Large quantities of data emerge every year, and accurate clustering algorithms are needed to derive
information from these data. The K-means clustering algorithm is popular and simple, but has many limitations,
such as sensitivity to initialization and convergence to local optima. K-harmonic means clustering is an
improved variant of K-means that is insensitive to the initialization of centroids, but in some cases it
still ends up in local optima, whereas clustering with the Artificial Bee Colony (ABC) algorithm tends
toward globally optimal solutions. In this paper a new hybrid clustering algorithm (KHM-ABC) is presented
that combines K-harmonic means and the ABC algorithm to perform accurate clustering. Experimental
results indicate that the performance of the proposed algorithm is superior to the available algorithms in
terms of the quality of clusters.
This document describes a study that developed a hybrid page replacement algorithm (HRA) by combining the Least Recently Used (LRU) and Optimal page replacement algorithms. The HRA was evaluated by comparing its performance in terms of number of page faults generated against the First In First Out (FIFO), LRU, and Optimal algorithms. Results showed that the HRA outperformed FIFO, Optimal, and LRU when the number of frames was 4 or more. The document provides background on page replacement algorithms and discusses related works that modified standard algorithms or proposed new algorithms for page replacement.
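The paper's HRA itself is not reproduced here, but the page-fault comparison it runs against the standard policies is easy to reconstruct. The following sketch counts page faults for LRU and FIFO on a reference string, the metric used in the evaluation; the reference string in the usage example is the classic textbook one, not necessarily the study's.

```python
from collections import OrderedDict, deque

def lru_faults(refs, frames):
    """Count page faults for LRU with a fixed number of frames."""
    mem = OrderedDict()                  # keys = resident pages, order = recency
    faults = 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)        # hit: refresh recency
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict least recently used
            mem[page] = True
    return faults

def fifo_faults(refs, frames):
    """Count page faults for FIFO with a fixed number of frames."""
    mem, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.popleft())  # evict oldest resident page
            mem.add(page)
            queue.append(page)
    return faults
```

On the classic reference string 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1 with 3 frames, LRU generates 12 faults and FIFO 15, illustrating the kind of gap the study measures.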
Short Term Electrical Load Forecasting by Artificial Neural Network - IJERA Editor
This paper presents an application of artificial neural networks to short-term time-series electrical load
forecasting. An adaptive learning algorithm is derived from system stability analysis to ensure the convergence of the
training process. Historical data on hourly power load, as well as hourly wind power generation, are sourced from
the European Open Power System Platform. The simulation demonstrates that errors steadily decrease during training
with the adaptive learning factor regardless of its initial value, whereas errors behave volatilely with constant
learning factors of different values.
Effects of The Different Migration Periods on Parallel Multi-Swarm PSO - csandit
In recent years, there has been an increasing interest in parallel computing. In parallel computing, multiple computing resources are used simultaneously in solving a problem: multiple processors work concurrently, and the program is divided into different tasks to be simultaneously solved. Recently, a considerable literature has grown up around the theme of metaheuristic algorithms. The particle swarm optimization (PSO) algorithm is a popular metaheuristic algorithm. The parallel comprehensive learning particle swarm optimization (PCLPSO) algorithm, based on PSO, has multiple swarms following the master-slave paradigm and works cooperatively and concurrently. The migration period is an important parameter in PCLPSO and affects the efficiency of the algorithm. We used the well-known benchmark functions in the experiments and analysed the performance of PCLPSO using different migration periods.
An enhanced adaptive scoring job scheduling algorithm with replication strate... - eSAT Publishing House
This document describes an enhanced adaptive scoring job scheduling algorithm with replication strategy for grid environments. The algorithm aims to improve upon an existing adaptive scoring job scheduling algorithm by identifying whether jobs are data-intensive or computation-intensive. It then divides large jobs into subtasks, replicates the subtasks, and allocates the replicas to clusters based on a computed cluster score in order to improve resource utilization and job completion times. The algorithm is evaluated through simulation using the GridSim toolkit.
In the present work, Lyapunov stability theory, a nonlinear adaptive control law, and a parameter update
law were utilized to derive the synchronized state of two new chaotic reversal systems using the
function projective method. This technique allows a scaling function instead of a constant, thereby
giving a better method for applications in secure communication. Numerical simulations are presented to
demonstrate the effectiveness of the proposed synchronization scheme for the new chaotic reversal
system.
This document summarizes a presentation on modeling and simulating a solar PV and battery-based DC microgrid system. It discusses the advantages of DC microgrids over AC microgrids. It also outlines several real-world DC microgrid implementations worldwide and describes the designed microgrid system. Three scenarios are simulated to test the energy management control algorithm in maintaining the DC bus voltage and regulating the battery state of charge. The results demonstrate the controls working as expected. Future work is proposed to expand the system and design protections.
Ant colony optimization based routing algorithm in various wireless sensor ne... - Editor Jacotech
Wireless Sensor Networks (WSNs) have several issues and challenges due to limited battery backup and limited computation capability. These must be taken into account when designing algorithms to increase the network lifetime of a WSN. Routing, the act of moving information across a network from a source to a destination, is one of the vital issues in WSNs. The Ant Colony Optimization (ACO) algorithm is a probabilistic technique for solving computational problems that can be used to find optimal paths through graphs: a short route is increasingly reinforced and therefore becomes more attractive. The foraging behavior and optimal route-finding capability of ants is the inspiration for ACO-based algorithms in WSNs. Ants wander randomly in search of food from their nest, laying down a pheromone trail on the ground as they move. This chemical pheromone evaporates over time, and ants have the ability to smell it; when selecting their path, they tend to choose, probabilistically, the paths with strong pheromone concentrations. As soon as an ant finds a food source, it carries some of the food back to the nest, and the quantity of pheromone it lays down on the return trip may depend on the quantity and quality of the food. The pheromone trails lead other ants toward the food source, and the path with the strongest pheromone concentration, which the ants end up following, is the shortest path between their nest and the food source. This paper surveys ACO-based routing in various networking domains such as Wireless Sensor Networks and Mobile Ad Hoc Networks.
Presentation from Sierra Club panel discussion on Microgrids in DC - Hugh Youngblood
This document summarizes a panel discussion on microgrids in DC power. It describes the motivations for microgrids such as reliability, efficiency, sustainability and energy independence. It then provides details on the Crispus Attucks Park Microgrid Project in Washington DC which aims to reduce costs, enhance safety and experience for users through a standalone solar+storage microgrid. The project objectives, design considerations and next steps are outlined. Finally, it discusses subsequent microgrid projects and how communities can implement their own microgrids.
Ant Colony Optimization: The Algorithm and Its Applications - adil raja
The document discusses ant colony optimization, an algorithm inspired by the foraging behavior of ants. It describes how ants communicate indirectly via pheromone trails to find the shortest paths between their nests and food sources. The algorithm emulates this behavior in artificial ant colonies to solve discrete optimization problems. It outlines various applications of the algorithm to routing problems, assignment problems, scheduling problems, and machine learning. In conclusion, it praises ant colony optimization as an intuitive, effective algorithm with many successful applications and variants.
Ant colony optimization is an optimization technique inspired by the behavior of real ant colonies. The technique was introduced in the 1990s and uses indirect coordination between agents through pheromone trails to solve problems. Ants communicate by laying pheromone trails and tend to follow stronger trails, with the result that the paths between food sources emerge from their collective behavior without centralized control. The ant colony optimization algorithm applies this behavior to problems by having artificial "ants" probabilistically build solutions and adjust pheromone levels to guide future construction. The algorithm has been successfully applied to problems like the traveling salesman problem.
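The construction-and-reinforcement loop described above can be sketched for the travelling salesman problem. This is a minimal ant-system variant with illustrative parameters, not a tuned implementation: each ant builds a tour probabilistically from pheromone and inverse distance, pheromone evaporates every iteration, and the best-so-far tour is reinforced.

```python
import math
import random

def aco_tsp(coords, n_ants=10, n_iters=100, alpha=1.0, beta=3.0, rho=0.3, seed=0):
    """Minimal ant system for the symmetric Euclidean TSP."""
    rng = random.Random(seed)
    n = len(coords)
    d = [[math.dist(coords[i], coords[j]) for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]          # pheromone on each edge
    best_tour, best_len = None, float("inf")

    def tour_len(tour):
        return sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))

    for _ in range(n_iters):
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # Desirability = pheromone^alpha * (1/distance)^beta.
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / d[i][j]) ** beta))
                           for j in unvisited]
                r, acc = rng.random() * sum(w for _, w in weights), 0.0
                nxt = weights[-1][0]
                for j, w in weights:
                    acc += w
                    if r <= acc:
                        nxt = j
                        break
                tour.append(nxt)
                unvisited.remove(nxt)
            length = tour_len(tour)
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporation, then reinforcement of the best-so-far tour.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for i in range(n):
            a, b = best_tour[i], best_tour[(i + 1) % n]
            tau[a][b] += 1.0 / best_len
            tau[b][a] += 1.0 / best_len
    return best_tour, best_len
```

On four cities at the corners of a unit square, the sketch recovers the perimeter tour of length 4, avoiding the longer diagonal-crossing tours.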
THE ACTIVE CONTROLLER DESIGN FOR ACHIEVING GENERALIZED PROJECTIVE SYNCHRONIZA...ijait
This paper discusses the design of active controllers for achieving generalized projective synchronization (GPS) of identical hyperchaotic Lü systems (Chen, Lu, Lü and Yu, 2006), identical hyperchaotic Cai systems (Wang and Cai, 2009) and non-identical hyperchaotic Lü and hyperchaotic Cai systems. The synchronization results (GPS) for the hyperchaotic systems have been derived using active control method and established using Lyapunov stability theory. Since the Lyapunov exponents are not required for these calculations, the active control method is very effective and convenient for achieving the GPS of the
hyperchaotic systems addressed in this paper. Numerical simulations are provided to illustrate the effectiveness of the GPS synchronization results derived in this paper.
Hybrid aco iwd optimization algorithm for minimizing weighted flowtime in clo...eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Hybrid aco iwd optimization algorithm for minimizing weighted flowtime in clo...eSAT Journals
Abstract Scientists and engineers often run the same code against many different input datasets, a pattern known as Parameter Sweep Experiments (PSEs). This can produce very many jobs with high computational requirements, so distributed environments, particularly clouds, are used to meet the demand. Because job scheduling is an NP-complete problem, it is highly challenging. The proposed work builds a cloud scheduler on bio-inspired techniques, which work well at approximating problems with little input. Existing proposals, however, ignore job priority, which is an important aspect of PSEs because it accelerates the delivery of PSE results and their visualization in scientific clouds. The proposed cloud scheduler minimizes weighted flowtime using Ant Colony Optimization (ACO), while Intelligent Water Drops (IWDs) identify all resources matching a job's requirements, along with the routing information needed to reach them; ACO then selects the best among the matching resources. The aims of this approach are to converge to the optimal schedule faster, minimize job makespan, and improve load balancing. Keywords: Ant Colony Optimization, Intelligent Water Drops, Parameter Sweep Experiments, Weighted Flowtime.
Bounded ant colony algorithm for task Allocation on a network of homogeneous ...ijcsit
This document summarizes a research paper that proposes a bounded ant colony algorithm (BTS-ACO) for task scheduling on a network of homogeneous processors using a primary site. The algorithm uses an initial bound on each processor's load to control task allocation. It investigates scheduling tasks from a sorted list (SLoT) versus a random list (RLoT). Simulation results show that BTS-ACO with a sorted task list achieves better performance than a random list in terms of scheduling time, makespan, and load balancing.
USING CUCKOO ALGORITHM FOR ESTIMATING TWO GLSD PARAMETERS AND COMPARING IT WI...ijcsit
This study introduces and compares different methods for estimating the two parameters of generalized logarithmic series distribution. These methods are the cuckoo search optimization, maximum likelihood estimation, and method of moments algorithms. All the required derivations and basic steps of each algorithm are explained. The applications for these algorithms are implemented through simulations using different sample sizes (n = 15, 25, 50, 100). Results are compared using the statistical measure mean square error.
IRJET- PSO based PID Controller for Bidirectional Inductive Power Transfer Sy...IRJET Journal
This document presents a Particle Swarm Optimization (PSO) based PID controller for a bidirectional inductive power transfer system. PSO is used to optimize the parameters of a PID controller (proportional, integral and derivative terms) to minimize error and improve system performance. The proposed PSO-tuned PID controller is simulated and its performance is compared to other tuning methods. Results demonstrate the validity and effectiveness of using PSO to optimize PID control parameters for bidirectional power transfer systems.
The document summarizes electricity load forecasting techniques for power system planning. It discusses using curve fitting algorithms to forecast electricity load based on analyzing past load data from 2012. Specifically, it proposes using a Fourier series curve fitting model to predict future load based on factors like temperature, humidity, and time of day or year. The document also briefly describes other common load forecasting techniques including multiple regression, exponential smoothing, and neural networks.
ADAPTIVE CONTROL AND SYNCHRONIZATION OF SPROTT-I SYSTEM WITH UNKNOWN PARAMETERSijscai
This paper derives new results for the adaptive control and synchronization design of the Sprott-I chaotic system (1994), when the system parameters are unknown. First, we build an adaptive controller to stabilize the Sprott-I chaotic system to its unstable equilibrium at the origin. Then we build an adaptive
synchronizer to achieve global chaos synchronization of the identical Sprott-I chaotic systems with unknown parameters. The results derived for adaptive stabilization and adaptive synchronization for the Sprott-I chaotic system have been established using adaptive control theory and Lyapunov stability
theory. Numerical simulations have been shown to demonstrate the effectiveness of the adaptive control and synchronization schemes derived in this paper for the Sprott-I chaotic system.
Time and Reliability Optimization Bat Algorithm for Scheduling Workflow in CloudIRJET Journal
This document describes using a meta-heuristic optimization algorithm called the Bat Algorithm (BA) to schedule workflows in cloud computing environments. The BA is applied to optimize a multi-objective function that minimizes workflow execution time and maximizes reliability while keeping costs within a user-specified budget. The BA is compared to a basic randomized evolutionary algorithm (BREA) that uses greedy approaches. Experimental results show the BA performs better by finding schedules that have lower execution times and higher reliability within the given budget constraints. The BA is well-suited for this problem because it can efficiently search large solution spaces and automatically focus on optimal regions like other metaheuristics.
SWARM INTELLIGENCE SCHEDULING OF SOFT REAL-TIME TASKS IN HETEROGENEOUS MULTIP...ecij
The document presents a hybrid swarm intelligence algorithm called VNABCSA for scheduling soft real-time tasks in heterogeneous multiprocessor systems. VNABCSA combines artificial bee colony and simulated annealing algorithms. It aims to minimize total tardiness, number of processors used, completion time, total waiting time of tasks and processors. The algorithm represents solutions as an ordering of tasks and assignment to processors. It uses artificial bee colony for global search and simulated annealing for local search to improve convergence. Simulation results show it performs better than existing scheduling algorithms.
A HYBRID K-HARMONIC MEANS WITH ABCCLUSTERING ALGORITHM USING AN OPTIMAL K VAL...IJCI JOURNAL
Large quantities of data are emerging every year and an accurate clustering algorithm is needed to derive
information from these data. K-means clustering algorithm is popular and simple, but has many limitations
like its sensitivity to initialization, provides local optimum solutions. K-harmonic means clustering is an
improved variant of K-means which is insensitive to the initialization of centroids, but still in some cases it
ends up with local optimum solutions. Clustering using Artificial Bee Colony (ABC) algorithm always gives
global optimum solutions. In this paper a new hybrid clustering algorithm (KHM-ABC) is presented by
combining both K-harmonic means and ABC algorithm to perform accurate clustering. Experimental
results indicate that the performance of the proposed algorithm is superior to the available algorithms in
terms of the quality of clusters.
This document describes a study that developed a hybrid page replacement algorithm (HRA) by combining the Least Recently Used (LRU) and Optimal page replacement algorithms. The HRA was evaluated by comparing its performance in terms of number of page faults generated against the First In First Out (FIFO), LRU, and Optimal algorithms. Results showed that the HRA outperformed FIFO, Optimal, and LRU when the number of frames was 4 or more. The document provides background on page replacement algorithms and discusses related works that modified standard algorithms or proposed new algorithms for page replacement.
Short Term Electrical Load Forecasting by Artificial Neural NetworkIJERA Editor
This paper presents an application of artificial neural networks for short-term times series electrical load
forecasting. An adaptive learning algorithm is derived from system stability to ensure the convergence of
training process. Historical data of hourly power load as well as hourly wind power generation are sourced from
European Open Power System Platform. The simulations demonstrate that errors steadily decrease in training with the adaptive learning factor starting from different initial values, while errors remain volatile with constant learning factors of different values.
Effects of The Different Migration Periods on Parallel Multi-Swarm PSO csandit
In recent years, there has been an increasing interest in parallel computing. In parallel computing, multiple computing resources are used simultaneously in solving a problem. There are multiple processors that will work concurrently and the program is divided into different tasks to be simultaneously solved. Recently, a considerable literature has grown up around the theme of metaheuristic algorithms. The particle swarm optimization (PSO) algorithm is a popular metaheuristic algorithm. The parallel comprehensive learning particle swarm optimization (PCLPSO) algorithm based on PSO has multiple swarms based on the master-slave paradigm and works cooperatively and concurrently. The migration period is an important parameter in PCLPSO and affects the efficiency of the algorithm. We used the well-known benchmark functions in the experiments and analysed the performance of PCLPSO using different migration periods.
An enhanced adaptive scoring job scheduling algorithm with replication strate...eSAT Publishing House
This document describes an enhanced adaptive scoring job scheduling algorithm with replication strategy for grid environments. The algorithm aims to improve upon an existing adaptive scoring job scheduling algorithm by identifying whether jobs are data-intensive or computation-intensive. It then divides large jobs into subtasks, replicates the subtasks, and allocates the replicas to clusters based on a computed cluster score in order to improve resource utilization and job completion times. The algorithm is evaluated through simulation using the GridSim toolkit.
In the present work, Lyapunov stability theory, a nonlinear adaptive control law, and the parameter update law were utilized to synchronize the states of two new chaotic reversal systems via the function projective method. This technique allows a scaling function instead of a constant, giving a better method for applications in secure communication. Numerical simulations are presented to demonstrate the effectiveness of the proposed synchronization scheme for the new chaotic reversal system.
This document summarizes a presentation on modeling and simulating a solar PV and battery-based DC microgrid system. It discusses the advantages of DC microgrids over AC microgrids. It also outlines several real-world DC microgrid implementations worldwide and describes the designed microgrid system. Three scenarios are simulated to test the energy management control algorithm in maintaining the DC bus voltage and regulating the battery state of charge. The results demonstrate the controls working as expected. Future work is proposed to expand the system and design protections.
Ant colony optimization based routing algorithm in various wireless sensor ne...Editor Jacotech
Wireless Sensor Networks face several issues and challenges due to limited battery backup, limited computation capability, and limited communication capability. These issues and challenges must be taken into account while designing algorithms to increase the network lifetime of a WSN. Routing, the act of moving information across a network from a source to a destination, is one of the vital issues associated with Wireless Sensor Networks. The Ant Colony Optimization (ACO) algorithm is a probabilistic technique for solving computational problems that can be used to find optimal paths through graphs. The foraging behavior and optimal route-finding capability of ants are the inspiration for ACO-based algorithms in WSNs. Ants wander randomly in search of food from their nest. While moving, they lay down a pheromone trail on the ground; this chemical pheromone evaporates with time. Ants can smell pheromone, and when selecting a path they tend to choose the paths with strong pheromone concentrations. As soon as an ant finds a food source, it carries some of the food back to the nest; while returning, the quantity of pheromone it lays down may depend on the quantity and quality of the food. The pheromone trails lead other ants toward the food source, so short routes are increasingly reinforced and become more attractive; the path with the strongest pheromone concentration, which the ants end up following, is the shortest path between nest and food source. This paper surveys ACO-based routing in various networking domains such as Wireless Sensor Networks and Mobile Ad Hoc Networks.
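The pheromone-guided path choice and evaporation described in this summary can be illustrated with a minimal Python sketch. The topology, trail strengths, and evaporation rate below are invented for illustration and do not come from any surveyed protocol:

```python
import random

pheromone = {"A": 0.6, "B": 0.3, "C": 0.1}  # trail strength per neighbor node
EVAPORATION = 0.1                            # fraction of pheromone lost per step

def choose_next_hop(trails):
    """Pick a neighbor with probability proportional to its trail strength,
    mirroring how an ant tends toward stronger pheromone concentrations."""
    nodes = list(trails)
    return random.choices(nodes, weights=[trails[n] for n in nodes])[0]

def evaporate(trails):
    """Trails decay over time, so stale routes gradually lose influence."""
    return {n: s * (1.0 - EVAPORATION) for n, s in trails.items()}

hop = choose_next_hop(pheromone)   # most often "A", the strongest trail
pheromone = evaporate(pheromone)
```

Selection is stochastic rather than greedy, which is what lets the colony keep exploring weaker routes while still favoring the strongest trail.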
Presentation from Sierra Club panel discussion on Microgrids in DCHugh Youngblood
This document summarizes a panel discussion on microgrids in DC power. It describes the motivations for microgrids such as reliability, efficiency, sustainability and energy independence. It then provides details on the Crispus Attucks Park Microgrid Project in Washington DC which aims to reduce costs, enhance safety and experience for users through a standalone solar+storage microgrid. The project objectives, design considerations and next steps are outlined. Finally, it discusses subsequent microgrid projects and how communities can implement their own microgrids.
Ant Colony Optimization: The Algorithm and Its Applicationsadil raja
The document discusses ant colony optimization, an algorithm inspired by the foraging behavior of ants. It describes how ants communicate indirectly via pheromone trails to find the shortest paths between their nests and food sources. The algorithm emulates this behavior in artificial ant colonies to solve discrete optimization problems. It outlines various applications of the algorithm to routing problems, assignment problems, scheduling problems, and machine learning. In conclusion, it praises ant colony optimization as an intuitive, effective algorithm with many successful applications and variants.
Ant colony optimization is an optimization technique inspired by the behavior of real ant colonies. The technique was introduced in the 1990s and uses indirect coordination between agents through pheromone trails to solve problems. Ants communicate by laying pheromone trails and tend to follow stronger trails, with the result that the paths between food sources emerge from their collective behavior without centralized control. The ant colony optimization algorithm applies this behavior to problems by having artificial "ants" probabilistically build solutions and adjust pheromone levels to guide future construction. The algorithm has been successfully applied to problems like the traveling salesman problem.
Show ant-colony-optimization-for-solving-the-traveling-salesman-problemjayatra
The document describes using ant colony optimization to solve the traveling salesman problem. It outlines the traveling salesman problem and introduces ant colony optimization as a metaheuristic for solving optimization problems inspired by ant behavior. The document then provides an example of using ant colony optimization to iteratively find the shortest route between 5 cities, with ants probabilistically choosing paths based on pheromone levels and distance.
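The iterative shortest-route search described in this summary can be sketched as a minimal ACO for a small TSP instance. The distance matrix and all parameter values (ALPHA, BETA, RHO, Q, colony size, iteration count) are invented for illustration, not taken from the summarized document:

```python
import random

DIST = [                 # symmetric distance matrix for 5 cities
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
N = len(DIST)
ALPHA, BETA = 1.0, 2.0   # weight of pheromone vs. heuristic (1/distance)
RHO, Q = 0.5, 100.0      # evaporation rate and pheromone deposit constant

tau = [[1.0] * N for _ in range(N)]  # uniform initial pheromone

def build_tour():
    """One ant builds a tour, choosing each next city with probability
    proportional to tau^ALPHA * (1/dist)^BETA over unvisited cities."""
    tour = [random.randrange(N)]
    while len(tour) < N:
        i = tour[-1]
        choices = [j for j in range(N) if j not in tour]
        weights = [tau[i][j] ** ALPHA * (1.0 / DIST[i][j]) ** BETA
                   for j in choices]
        tour.append(random.choices(choices, weights=weights)[0])
    return tour

def tour_length(tour):
    return sum(DIST[tour[k]][tour[(k + 1) % N]] for k in range(N))

best = None
for _ in range(50):                              # iterations
    tours = [build_tour() for _ in range(10)]    # 10 ants per iteration
    # evaporation: every trail decays, so poor edges are forgotten
    for i in range(N):
        for j in range(N):
            tau[i][j] *= (1.0 - RHO)
    # deposit: shorter tours lay more pheromone on their edges
    for t in tours:
        length = tour_length(t)
        for k in range(N):
            a, b = t[k], t[(k + 1) % N]
            tau[a][b] += Q / length
            tau[b][a] += Q / length
        if best is None or length < tour_length(best):
            best = t

print(tour_length(best))
```

The interplay of evaporation and length-weighted deposits is what concentrates pheromone, and hence future ants, on short tours over successive iterations.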
The document discusses ant colony optimization (ACO) algorithms. It introduces ACO as a probabilistic metaheuristic technique inspired by the behavior of ants seeking paths between their colony and food sources. It outlines the ACO metaheuristic and describes key ACO algorithms like Ant System, Ant Colony System, and MAX-MIN Ant System. The document also covers applications of ACO, advantages like inherent parallelism and efficient solutions to problems like the traveling salesman problem, and disadvantages like difficulty analyzing ACO theoretically.
This document summarizes a study on integrating wind and solar power generation into microgrids using battery energy storage and power electronic converters. The study proposes an aggregated model for forecasting renewable generation and develops an adaptive droop control method for battery inverters. The droop control curves are made asymmetric and adjusted based on the battery state of charge to maintain supply-demand balance while ensuring the battery state of charge stays within a desired range. The controls are demonstrated in a case study of a DC microgrid integrated within a high-rise building that harvests wind and solar power on the roof to provide power for electric vehicles.
Ant colony optimization is a swarm intelligence technique inspired by the behavior of ants. It is used to find optimal paths or solutions to problems. The key aspects are that ants deposit pheromones as they move, influencing the paths other ants take, with shorter paths receiving more pheromones over time. This results in the emergence of the shortest path as the most favorable route. The algorithm is often applied to problems like the traveling salesman problem to find the shortest route between nodes.
Ant colony optimization (ACO) is a heuristic optimization algorithm inspired by the foraging behavior of ants. It is used to find optimal paths in graph problems. The algorithm operates by simulating ants walking around the problem space, depositing and following pheromone trails. Over time, as ants discover short paths, the pheromone density increases on those paths, making them more desirable for future ants. This positive feedback eventually leads all ants to converge on the shortest path. ACO has been applied successfully to problems like the traveling salesman problem.
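The positive-feedback update described in this summary is conventionally written as the Ant System pheromone rule (standard ACO notation; the symbols below are not drawn from the summarized document):

```latex
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} + \sum_{k=1}^{m} \Delta\tau_{ij}^{k},
\qquad
\Delta\tau_{ij}^{k} =
\begin{cases}
Q / L_k & \text{if ant } k \text{ used edge } (i,j),\\
0 & \text{otherwise,}
\end{cases}
```

where ρ is the evaporation rate, m the number of ants, L_k the length of ant k's path, and Q a constant. Evaporation shrinks every trail while deposits grow in inverse proportion to path length, which is exactly the feedback that drives convergence onto the shortest path.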
This document discusses DC microgrids and the research being done on them at Aalborg University. It provides an overview of the microgrid research programme, including its focus areas and team members. It then discusses various DC microgrid projects, from residential to industrial applications. It also covers DC microgrid control architectures, including primary, secondary and tertiary control levels.
The document discusses ant colony optimization (ACO), which is a metaheuristic algorithm inspired by the behavior of real ant colonies. It describes how real ants deposit pheromone trails to communicate indirectly and find the shortest path between their colony and food sources. The algorithm works by "artificial ants" probabilistically building solutions to optimization problems and adjusting pheromone levels based on solution quality, similar to how real ants reinforce shorter paths. It provides examples of how ACO has been applied to problems like the traveling salesman problem and discusses some extensions to the basic ACO algorithm.
This is an easy introduction to the concept of Genetic Algorithms. It gives Simple explanation of Genetic Algorithms. Covers the major steps that are required to implement the GA for your tasks.
For other resources visit: http://pimpalepatil.googlepages.com/
For more information mail me on pbpimpale@gmail.com
The document proposes a hybrid algorithm combining genetic algorithm and cuckoo search optimization to solve job shop scheduling problems. It aims to minimize makespan (completion time of all jobs) by scheduling jobs on machines. The genetic algorithm is used to explore the search space but can get trapped in local optima. Cuckoo search optimization performs local search faster than genetic algorithm and helps avoid local optima. Experimental results on benchmark problems show the hybrid algorithm yields better solutions in terms of makespan and runtime compared to genetic algorithm and ant colony optimization algorithms.
Job Scheduling on the Grid Environment using Max-Min Firefly AlgorithmEditor IJCATR
Grid computing is the next generation of distributed systems; its goal is to create a powerful, large, autonomous virtual computer from countless heterogeneous resources, with the purpose of sharing those resources. Scheduling is one of the main steps in exploiting the capabilities of emerging computing systems such as the grid. Scheduling of jobs in computational grids, due to resource heterogeneity, is known to be an NP-complete problem. Grid resources belong to different management domains, each applying different management policies. Since the nature of the grid is heterogeneous and dynamic, techniques used in traditional systems cannot be applied to grid scheduling, and new methods must be found. This paper proposes a new algorithm which combines the firefly algorithm with the Max-Min algorithm for scheduling jobs on the grid. The firefly algorithm is a recent swarm-based technique inspired by the social behavior of fireflies in nature: fireflies move through the search space of the problem to find optimal or near-optimal solutions. The goals of this paper are to minimize the makespan and the flowtime of the jobs simultaneously. Experiments and simulation results show that the proposed method is more efficient than the compared algorithms.
This document summarizes a research article that proposes a hybrid scheduling algorithm for real-time operating systems that combines Earliest Deadline First (EDF) and Ant Colony Optimization (ACO) algorithms. The hybrid algorithm aims to overcome the limitations of EDF, which cannot handle overloaded systems well, and ACO, which has longer execution times than EDF. The document provides background on real-time operating systems and describes EDF and ACO scheduling algorithms. It then proposes the hybrid algorithm that automatically switches between EDF and ACO to leverage the advantages of both. The goal is to enhance system performance, reduce failures, and decrease missed deadlines under different load conditions.
AN ANT COLONY OPTIMIZATION ALGORITHM FOR JOB SHOP SCHEDULING PROBLEMijaia
This document summarizes an ant colony optimization algorithm for solving job shop scheduling problems. It describes how ant colony optimization is inspired by the behavior of real ants finding shortest paths between their nest and food sources. The algorithm models artificial ants probabilistically constructing solutions to the job shop scheduling problem. The ants are guided by pheromone trails and heuristic information associated with edges in a graph representation of the problem. The pheromone trails, representing learned desirability of choices, are updated based on the quality of the solutions constructed by the ants. The algorithm aims to find high-quality solutions with relatively few evaluations of the objective function for minimizing makespan in job shop scheduling problems.
In industries, the completion time of job problems in the manufacturing unit has risen significantly. In several current studies, the job's completion time, or makespan, is reduced by taking straight paths, which is time-consuming. In this paper, we use an Improved Ant Colony Optimization and Tabu Search (ACOTS) algorithm to solve this problem by precisely locating where a fault occurs so that the search can roll back. We use a short-term-memory-based rollback recovery strategy to minimize the job's completion time, rolling back via the algorithm's own short-term memory; in Tabu search, the recent moves are tracked using this short-term memory. Compared with the ACO algorithm, our proposed ACOTS-Cmax approach is more efficient and takes less time to complete.
The International Journal of Engineering and Science (The IJES)theijes
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
This document describes an efficient dynamic scheduling algorithm for real-time multi-core systems. It proposes a task split myopic scheduling algorithm that exploits parallelism in tasks to meet deadlines. The algorithm splits tasks that cannot meet their deadline into mandatory and optional sub-tasks, which can then execute concurrently on multiple cores. Simulation studies show the proposed algorithm has higher schedulability than myopic and improved myopic algorithms. The algorithm aims to better utilize executing cores to increase the probability of tasks meeting their deadlines while maintaining a low computational complexity of O(kn), the same as myopic algorithms.
Efficient Dynamic Scheduling Algorithm for Real-Time MultiCore Systems iosrjce
An imprecise computation model is used in a dynamic scheduling algorithm with a heuristic function to schedule task sets. A task is characterized by ready time, worst-case computation time, deadline, and resource requirements. A task failing to meet its deadline and resource requirements on time is split into a mandatory part and an optional part. These sub-tasks can execute concurrently on multiple cores, thus exploiting the parallelization provided by the multi-core system. The mandatory part produces acceptable results, while the optional part refines the result further. To study the effectiveness of the proposed scheduling algorithm, extensive simulation studies have been carried out, comparing its performance with the myopic and improved myopic scheduling algorithms. The simulation studies show that the schedulability of the task split myopic algorithm is always higher than that of the myopic and improved myopic algorithms.
Hybrid of Ant Colony Optimization and Gravitational Emulation Based Load Bala...IRJET Journal
This document proposes a hybrid Ant Colony Optimization (ACO) and Gravitational Emulation Local Search (GELS) algorithm for load balancing in cloud computing. ACO is combined with GELS to take advantage of both algorithms - ACO is good for global search using pheromone trails while GELS is powerful for local search based on gravitational attraction. The hybrid algorithm is tested using CloudSim and shows improvements over existing algorithms like GA-GELS in metrics like resource utilization, makespan, and load balancing level.
Scalable Rough C-Means clustering using Firefly algorithm
Abhilash Namdev and B.K. Tripathy
Significance of Embedded Systems to IoT
P. R. S. M. Lakshmi, P. Lakshmi Narayanamma and K. Santhi Sri
Cognitive Abilities, Information Literacy Knowledge and Retrieval Skills of Undergraduates: A Comparison of Public and Private Universities in Nigeria
Janet O. Adekannbi and Testimony Morenike Oluwayinka
Risk Assessment in Constructing Horseshoe Vault Tunnels using Fuzzy Technique
Erfan Shafaghat and Mostafa Yousefi Rad
Evaluating the Adoption of Deductive Database Technology in Augmenting Criminal Intelligence in Zimbabwe: Case of Zimbabwe Republic Police
Mahlangu Gilbert, Furusa Samuel Simbarashe, Chikonye Musafare and Mugoniwa Beauty
Analysis of Petrol Pumps Reachability in Anand District of Gujarat
Nidhi Arora
EFFECTS OF THE DIFFERENT MIGRATION PERIODS ON PARALLEL MULTI-SWARM PSOcscpconf
This document discusses the effects of different migration periods on the Parallel Multi-Swarm Particle Swarm Optimization (PCLPSO) algorithm. PCLPSO is a parallel metaheuristic algorithm based on Particle Swarm Optimization and the Comprehensive Learning Particle Swarm Optimization. It uses multiple swarms that work cooperatively and concurrently. The migration period, which is the frequency at which swarms share information, is an important parameter. The document analyzes PCLPSO performance on benchmark functions using different migration periods, finding the best periods varied with problem dimension and type.
Multiprocessor scheduling of dependent tasks to minimize makespan and reliabi...ijfcstjournal
Algorithms developed for scheduling applications on heterogeneous multiprocessor systems focus on a single objective such as execution time, cost, or total data transmission time. However, if more than one objective (e.g. execution cost and time, which may be in conflict) is considered, the problem becomes more challenging. This project develops a multiobjective scheduling algorithm using evolutionary techniques for scheduling a set of dependent tasks on available resources in a multiprocessor environment, minimizing both makespan and reliability cost. A Non-dominated Sorting Genetic Algorithm-II (NSGA-II) procedure has been developed to obtain the Pareto-optimal solutions. NSGA-II is an elitist evolutionary algorithm: it carries the parental solutions unchanged into every iteration to avoid the loss of Pareto-optimal solutions, and it uses a crowding-distance concept to create diversity among the solutions.
DGBSA : A BATCH JOB SCHEDULINGALGORITHM WITH GA WITH REGARD TO THE THRESHOLD ...IJCSEA Journal
In this paper, we provide a scheduler for batch jobs using a genetic algorithm (GA) together with a threshold detector. The proposed algorithm schedules batches of independent jobs with a new technique so that the schedule can be optimized: a threshold detector selects jobs, and processing resources then process the selected batch jobs by priority. The ordering of tasks within each batch is determined using the DGBSA algorithm. Building on previous work, we add specific parameters to the fitness functions of earlier algorithms to develop the optimized fitness function used in the proposed algorithm. According to the assessment of DGBSA, it performs better than comparable algorithms: the parameters used in the proposed algorithm reduce total wasted time compared with previous algorithms, and the algorithm improves on earlier problems in batch processing with a new technique.
Task Scheduling using Hybrid Algorithm in Cloud Computing Environmentsiosrjce
This document summarizes a research paper that proposes PSOCS, a hybrid task scheduling algorithm for cloud computing environments that combines Particle Swarm Optimization (PSO) and Cuckoo Search (CS). PSOCS aims to minimize task completion time (makespan) and improve resource utilization. The paper describes the PSO and CS algorithms individually, then defines the proposed hybrid and evaluates it in a CloudSim simulation, where PSOCS reduces makespan and increases utilization compared with PSO and random allocation algorithms.
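The makespan objective that PSOCS minimizes can be stated concretely. This is a minimal sketch under assumed conventions (task lengths in instructions, VM speeds in instructions per unit time, an assignment array), not the paper's implementation:

```python
def makespan(task_lengths, vm_speeds, assignment):
    """Finish time of the busiest VM, given assignment[i] = VM running task i."""
    finish = [0.0] * len(vm_speeds)
    for length, vm in zip(task_lengths, assignment):
        finish[vm] += length / vm_speeds[vm]   # execution time of this task
    return max(finish)
```

A scheduler like PSOCS searches over the `assignment` vector to drive this value down while keeping all VMs busy, which is why makespan and utilization tend to improve together.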
ANALYSING THE MIGRATION PERIOD PARAMETER IN PARALLEL MULTI-SWARM PARTICLE SW...ijcsit
This document analyzes the effect of different migration periods on the Parallel Comprehensive Learning Particle Swarm Optimization (PCLPSO) algorithm. PCLPSO is a parallel multi-swarm algorithm based on Particle Swarm Optimization (PSO) and Comprehensive Learning PSO (CLPSO). It uses multiple swarms that work cooperatively and concurrently. The migration period determines how often each swarm shares its best solution with other swarms, and affects the algorithm's efficiency. The document tests PCLPSO on 14 benchmark optimization functions using different migration periods, to analyze their impact on the algorithm's performance.
This document summarizes an article that proposes improvements to an existing algorithm for resource scheduling in cloud computing environments. The existing algorithm uses a hybrid of ant colony optimization and particle swarm optimization. The proposed improvements add an initial phase that uses an enhanced fish swarm search algorithm to help find more global optimal solutions. This global optimal solution found by fish swarm search is then used to guide the existing ant colony optimization and particle swarm optimization hybrid to find more locally optimal solutions. The document provides background on resource scheduling, metaheuristic algorithms, and describes the specific implementations of the improved fish swarm search algorithm and the overall proposed methodology.
A MODIFIED ANT COLONY ALGORITHM FOR SOLVING THE UNIT COMMITMENT PROBLEMAEIJjournal2
Solving the unit commitment (UC) problem is one of the most complicated issues in power systems: an
exact solution requires enumerating every possible combination of generating units. UC is
formulated as a large-scale nonlinear optimization problem, whose goal is to schedule the
generating units optimally so as to minimize the total operating cost subject to the problem constraints. In this
article, a modified ant colony optimization (MACO) algorithm is introduced for solving the UC problem in
a power system. ACO is a powerful optimization method that can escape
local minima by exploiting a flexible memory system. The efficiency of the proposed method is demonstrated on two power
systems containing 4 and 10 generating units. A comparison of the results obtained by the proposed
method with those of well-known earlier methods confirms the suitability of the introduced
algorithm for the economic commitment and dispatch of generating units.
Advanced Energy: An International Journal (AEIJ)AEIJjournal2
A modified ant colony optimization algorithm is proposed to solve the unit commitment problem in power systems. The unit commitment problem aims to minimize generation costs while meeting demand and operational constraints of generation units. The modified ant colony optimization algorithm uses multiple pheromone matrices, one for each hour, to represent the problem. It also uses discrete power levels for units and considers their minimum up and down times. The algorithm evaluates solutions based on generation costs while ensuring demand is met and system reserves are maintained at each step. The algorithm is tested on 4 and 10 unit systems and shows improvements over previous ant colony optimization methods for unit commitment.
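The per-hour pheromone bookkeeping described above can be sketched as follows. This is an illustrative example of the standard ACO evaporation-and-deposit step applied to one pheromone matrix per hour; the parameter names `rho` and `Q` are the conventional ACO ones and, like the data layout, are assumptions rather than details from the paper:

```python
def update_pheromone(pheromone, best_schedule, cost, rho=0.1, Q=1.0):
    """Evaporate all trails, then reinforce the best solution's choices.

    pheromone[h][u][s]: trail for unit u being in power level s at hour h.
    best_schedule[h][u]: power-level index chosen for unit u at hour h.
    cost: total generation cost of the best solution found this iteration.
    """
    deposit = Q / cost                     # cheaper solutions deposit more
    for h, matrix in enumerate(pheromone):
        for u, states in enumerate(matrix):
            for s in range(len(states)):
                states[s] *= (1.0 - rho)   # evaporation on every trail
            states[best_schedule[h][u]] += deposit
    return pheromone
```

Keeping a separate matrix per hour lets the colony learn a different commitment pattern for each step of the horizon, while the demand and reserve constraints are enforced when candidate schedules are built, not in this update.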
This video focuses on automated letter generation for Bonterra Impact Management using Google Workspace or Microsoft 365.
Interested in deploying letter generation automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Nunit vs XUnit vs MSTest Differences Between These Unit Testing Frameworks.pdfflufftailshop
When it comes to unit testing in the .NET ecosystem, developers have a wide range of options available. Among the most popular choices are NUnit, XUnit, and MSTest. These unit testing frameworks provide essential tools and features to help ensure the quality and reliability of code. However, understanding the differences between these frameworks is crucial for selecting the most suitable one for your projects.
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Programming Foundation Models with DSPy - Meetup Slides
International Journal of Computer Applications Technology and Research, Volume 5, Issue 1, pp. 15-19, 2016, ISSN: 2319-8656, www.ijcat.com
Presenting a new Ant Colony Optimization Algorithm
(ACO) for Efficient Job Scheduling in Grid Environment
Firoozeh Ghazipour
Department of Computer
Science and Research Branch
Islamic Azad University, Kish, Iran
Seyed Javad Mirabedini
Member of Science Board of
Computer Group in Azad Islamic
University of Tehran Center, Iran
Abstract: Grid computing utilizes distributed, heterogeneous resources to solve complicated computing problems. Job scheduling in a computing grid is therefore a very important problem: to utilize grids efficiently, we need a good job scheduling algorithm to assign jobs to resources.
In nature, ants have a tremendous ability to team up and find an optimal path to food sources, and an ant algorithm simulates this behavior. In this paper, a new Ant Colony Optimization (ACO) algorithm is proposed for job scheduling in the grid environment. The main contribution of this paper is to minimize the makespan of a given set of jobs. According to the experimental results, the proposed algorithm outperforms other job scheduling algorithms.
Keywords: jobs, scheduling, Grid environment, Ant Colony Optimization (ACO), makespan.
1. INTRODUCTION
Current scientific problems are very complex and need huge computing power and storage space. Earlier technologies such as distributed or parallel computing are unsuitable for current scientific problems with large amounts of data, because processing and storing massive volumes of data can take a very long time. Grid computing [1] is a newer paradigm for solving such complex problems. In grids, we need to consider conditions such as network status and resource status: if the network or resources are unstable, jobs may fail or the total computation time may become very large. We therefore need an efficient job scheduling algorithm for these problems in the grid environment.
The purpose of job scheduling is to balance the entire system load while completing all the jobs at hand as soon as possible, according to the environment status. Because the environment status may change frequently, traditional job scheduling algorithms may not be suitable for the dynamic environment in grids.
In grids, users may face hundreds of thousands of computers to utilize, and it is impossible for anyone to assign jobs to computing resources manually. Grid job scheduling is therefore a very important issue in grid computing, and because of its importance, many job scheduling algorithms for grids [2-5] have been proposed.
A good scheduler would adjust its scheduling strategy according to the changing status of the entire environment and the types of jobs. Therefore, a dynamic algorithm such as Ant Colony Optimization (ACO) [6, 7] is appropriate for job scheduling in grids. ACO is a heuristic algorithm with efficient local search for combinatorial problems. It imitates the behavior of real ant colonies in nature, which search for food and communicate with each other through pheromone laid on the paths traveled. Many studies use ACO to solve NP-hard problems such as the traveling salesman problem [8], the graph coloring problem [9], the vehicle routing problem [10], and so on.
This paper applies the ACO algorithm to job scheduling problems in grid computing. We treat each job as an ant, and the algorithm sends the ants to search for resources. We also modify the global and local pheromone update functions of the ACO algorithm in order to balance the load across grid resources. Finally, we compare the proposed ACO algorithm with Min-Min [11] and Max-Min [12]. The experimental results show that the proposed ACO algorithm achieves system load balance and decreases the makespan better than the other job scheduling algorithms. The rest of the paper is organized as follows. Section 2 explains the background of the ACO algorithm and the scheduling problem. Section 3 introduces related work. Section 4 details the proposed ACO algorithm for job scheduling. Section 5 presents the experimental results and, finally, Section 6 concludes the paper.
2. BACKGROUND
2.1 Characteristics of ACO
Dorigo introduced the ant algorithm [18], a new heuristic algorithm based on the behavior of real ants. When these nearly blind insects look for food, each moving ant lays some pheromone on the ground, marking the path it followed with a trail of this substance. While an isolated ant moves essentially at random, an ant encountering a previously laid trail can detect it and decide, with high probability, to follow it, thereby reinforcing the trail with its own pheromone. The collective behavior that emerges is that the more ants follow a trail, the more attractive that trail becomes to follow. The process is thus characterized by a positive feedback loop, in which the probability that an ant chooses a path increases with the number of ants that chose the same path in the preceding steps. These observations inspired a new class of algorithms called ant algorithms or ant systems, presented by Dorigo and Gambardella [19]. The ACO algorithm uses a colony of artificial ants that behave as cooperative agents in a mathematical space where they are allowed to search for and reinforce pathways (solutions) in order to find the optimal ones. A solution that satisfies the constraints is feasible.
All ACO algorithms adopt the specific algorithmic scheme shown in Figure 1.
Figure 1. The general ACO algorithm scheme.
We utilize the characteristics of ant algorithms described above to schedule jobs: new scheduling decisions can build on the experience gained from past scheduling results, which is very helpful within the grid environment.
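The generic scheme of Figure 1 can be sketched as a simple loop. This is a hedged illustration, not the paper's exact procedure; the function names and the parameters n_ants and n_iterations are illustrative assumptions:

```python
import random

def aco_scheme(construct_solution, evaluate, update_pheromone,
               n_ants=10, n_iterations=50, seed=0):
    """Generic ACO loop: in each iteration every ant builds a feasible
    solution, the best solution found so far is remembered, and the
    pheromone trails are updated to reinforce good solutions."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_iterations):
        for _ in range(n_ants):
            solution = construct_solution(rng)
            cost = evaluate(solution)
            if cost < best_cost:
                best, best_cost = solution, cost
        update_pheromone(best, best_cost)  # bias future construction
    return best, best_cost
```

Any concrete ACO variant plugs its own solution construction and pheromone update into this skeleton.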
2.2 Scheduling Problem Formulation
The grid environment consists of a large number of resources and
jobs which should be assigned to the resources. The jobs cannot
divide into smaller parts and after assigning a job to a resource,
its executing cannot be stopped.
The main challenge in scheduling problems is time. Finding a
solution in these problems tries to decrease the time of executing
all jobs. In this case, the most popular criterion is makespan and
our purpose in this paper is reducing the makespan with the aid
of ACO.
In grid, we have a set of resources (Resources = {m1, m2, …,
mm}) and a set of jobs (Jobs = {t1, t2, …, tn}) which should be
assigned to the resources and executed on them. There is a matrix
ETC [Resources] [Jobs] (Figure 2) that represents the time of
executing (EX) ti on mj.
Figure 2. Matrix ETC
Suppose that Eij (i ∈ Jobs, j ∈ Resources) is the execution time of job i on resource j, and Wj (j ∈ Resources) is the time of executing the jobs previously assigned to resource j. Then equation (1) gives the time of executing all jobs allocated to mj:

Σ_{job i allocated to resource j} (Eij + Wj)    (1)

We can find the value of the makespan according to equation (2):

makespan = max_j { Σ_{job i allocated to resource j} (Eij + Wj) },  i ∈ Jobs, j ∈ Resources    (2)
Looking at these definitions another way, we can calculate the completion time on a resource with equation (3):

CTij = Eij + Wj    (3)

There is a scheduling list for all resources that shows the jobs assigned to each resource. Each resource has a completion time and, according to equation (4), the makespan equals the maximum completion time:

makespan = max_{(i,j) ∈ Scheduling List} (CTij)    (4)

The makespan is a criterion to evaluate the grid system, and the main purpose of a scheduler is to minimize it.
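Equations (3) and (4) translate directly into code. The following is a minimal Python sketch (variable names mirror the paper's notation; the per-resource load plays the role of Wj, accumulated as jobs are assigned):

```python
def makespan(etc, assignment, n_resources):
    """etc[i][j]: execution time E_ij of job i on resource j.
    assignment[i]: the resource chosen for job i.
    Returns the maximum completion time over all resources."""
    load = [0.0] * n_resources        # W_j: accumulated time on resource j
    for i, j in enumerate(assignment):
        load[j] += etc[i][j]          # CT grows by E_ij for each assigned job
    return max(load)                  # equation (4): max completion time

# Example: 3 jobs, 2 resources
etc = [[3.0, 5.0],
       [2.0, 4.0],
       [6.0, 1.0]]
print(makespan(etc, [0, 0, 1], 2))    # jobs 0,1 on m1; job 2 on m2 -> 5.0
```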
3. RELATED WORK
Jobs submitted to a grid computing system need to be processed
by the available resources.
Best resources are categorized as optimal resources. In a research
by [13], Ant Colony Optimization (ACO) has been used as an
effective algorithm in solving the scheduling problem in grid
computing.
ACO has been applied to many scheduling problems, such as the Job Shop Problem, Open Shop Problem, Permutation Flow Shop Problem, Single Machine Total Tardiness Problem, Single Machine Total Weighted Tardiness Problem, Resource-Constrained Project Scheduling Problem, Group Shop Problem, and Single Machine Total Tardiness Problem with Sequence-Dependent Setup Times [6]. A recent line of ACO research applies ACO to job scheduling in grid computing [14]. The ACO algorithm has been used in grid computing because it is easily adapted to solve both static and dynamic combinatorial optimization problems, and job scheduling in grid computing is an example of such a problem.
Balanced job assignment based on an ant algorithm for computing grids, called BACO, was proposed by [15]. That research aims to minimize the computation time of job execution in the Taiwan UniGrid environment, focusing on the load-balancing factor of each resource. By considering the resource status and the size of the given job, the BACO algorithm chooses optimal resources to process the submitted jobs, applying local and global pheromone update techniques to balance the system load. The local pheromone update function updates the status of the selected resource after a job has been assigned, so the job scheduler has the newest information about the selected resource for the next job submission. The global pheromone update function updates the status of each resource for all jobs after the jobs complete. By using these two update techniques, the job scheduler obtains the newest information about all resources for the next job submission. According to the experimental results, BACO is capable of balancing the entire system load regardless of the size of the jobs. However, BACO was only tested in the Taiwan UniGrid environment.
An ant colony optimization for dynamic job scheduling in the grid environment was proposed by [16], which aimed to minimize the total job tardiness time. The initial pheromone value of each resource is based on the expected execution time and actual execution time of each job. The pheromone value on each resource is updated using local and global update rules as in ACS. In that study, the ACO algorithm performed best when compared with the First Come First Serve, Minimal Tardiness Earliest Due Date, and Minimal Tardiness Earliest Release Date techniques.
A study improving the ant algorithm for job scheduling in grid computing, based on the basic idea of ACO, was proposed by [17]. The pheromone update function in that research adds an encouragement coefficient, a punishment coefficient, and a load-balancing factor. The initial pheromone value of each resource is based on its status, and each job is assigned to the resource with the maximum pheromone value. The pheromone strength of each resource is updated after completion of the job. The encouragement, punishment, and load-balancing coefficients are defined by users and are used to update the pheromone values of the resources. If a resource completes a job successfully, more pheromone is added through the encouragement coefficient so that the resource is more likely to be selected for the next job execution. If a resource fails to complete a job, it is punished by adding less pheromone. The load of each resource is also taken into account, and the balancing factor is applied to change the pheromone value of each resource.
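The encouragement/punishment idea in [17] can be illustrated roughly as follows. The coefficient values and the exact update formula here are illustrative assumptions, not taken from [17]:

```python
def update_resource_pheromone(pheromone, resource, succeeded,
                              encourage=0.1, punish=0.05, balance=1.0):
    """Reward a resource that completed its job, penalize one that failed,
    scaled by a load-balancing factor (all values illustrative)."""
    if succeeded:
        pheromone[resource] += encourage * balance
    else:
        pheromone[resource] -= punish * balance
    return pheromone

p = update_resource_pheromone({"m1": 1.0, "m2": 1.0}, "m1", succeeded=True)
```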
4. THE PROPOSED ALGORITHM
Real ants foraging for food lay down quantities of pheromone (a chemical substance), marking the paths they follow. An isolated ant moves essentially at random, but an ant encountering a previously laid pheromone trail will detect it, decide to follow it with high probability, and thereby reinforce it with a further quantity of pheromone. The repetition of this mechanism represents the autocatalytic behavior of a real ant colony: the more ants follow a trail, the more attractive that trail becomes [13].
The ACO algorithm uses a colony of artificial ants that behave as cooperative agents in a mathematical space where they are allowed to search for and reinforce pathways (solutions) in order to find the optimal ones. A solution that satisfies the constraints is feasible. After initialization of the pheromone trails, ants construct feasible solutions, starting from random nodes, and then the pheromone trails are updated. At each step, ants compute a set of feasible moves and select the best one (according to probabilistic rules) to carry out the rest of the tour. The transition probability is based on the heuristic information and the pheromone trail level of the move: the higher the pheromone value and the heuristic information, the more profitable it is to select this move and resume the search.
4.1 Initialize Pheromone and Probability List
At the beginning, a population of ants is generated and each ant starts its own search from one of the resources (the ants are assigned to resources randomly). The initial pheromone and probability list values are set to a small positive value t0, and the ants update these values after completing the construction stage. In nature there is no pheromone on the ground at the beginning, i.e., the initial pheromone is t0 = 0. However, if the initial pheromone in the ACO algorithm were zero, the probability of choosing any next state would be zero and the search process would stop immediately. It is therefore important to set the initial pheromone and probability list values to a positive value. The value of t0 is calculated with equation (5):

t0 = 1 / (Jobs × Resources)    (5)
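Equation (5) can be used to seed the pheromone trails, as in this minimal sketch (Jobs and Resources are taken to be the sizes of the two sets):

```python
def initial_pheromone(n_jobs, n_resources):
    """Equation (5): a small positive seed value so the search can start."""
    t0 = 1.0 / (n_jobs * n_resources)
    # one pheromone trail per (job, resource) pair, all starting at t0
    pheromone = [[t0] * n_resources for _ in range(n_jobs)]
    return t0, pheromone

t0, pheromone = initial_pheromone(100, 10)
print(t0)  # 0.001
```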
4.2 Local Pheromone Update
Moving from one state to another means that a job is assigned to a resource. After the ants choose the next node, the pheromone trail should be updated. This is the local pheromone update, and equation (6) shows how it happens:

Pheromone_ij^k = ((1 − ε) × Pheromone_ij^k) + (ε × θ)    (6)

In the local pheromone update equation, θ is a coefficient which obtains its value from equation (7):

θ = t0 / CTij(Ant k)    (7)
The smaller the value of CTij, the larger the value of θ; and the larger θ is, the more pheromone is deposited. Therefore, the chance of choosing resource j in the next assignment is higher than for other resources.
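Equations (6) and (7) can be combined into a single update step, as in this sketch (ε is the local update rate; its value here is illustrative):

```python
def local_update(pheromone, i, j, ct_ij, t0, eps=0.1):
    """Equations (6)-(7): theta = t0 / CT_ij, blended into the trail.
    A smaller completion time CT_ij gives a larger theta, so resources
    that finish jobs faster accumulate more pheromone."""
    theta = t0 / ct_ij
    pheromone[i][j] = (1 - eps) * pheromone[i][j] + eps * theta
    return pheromone[i][j]
```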
4.3 Probability List Update
In addition to updating the pheromone, the probability list should be updated, too. The ants choose the next states based on heuristic information, equation (8):

Heuristic_ij^k = 1 / (Wj × ETCij)    (8)

With the heuristic information, we can update the probability list, equation (9):

Probability List_ij^k = (Pheromone_ij^k) × (Heuristic_ij^k)^β    (9)
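Equations (8) and (9) give the ant's preference for assigning job i to resource j. A sketch (β is the heuristic weight; note that the heuristic assumes Wj > 0):

```python
def transition_weight(pheromone_ij, w_j, etc_ij, beta=2.0):
    """Equation (8): heuristic = 1 / (W_j * ETC_ij), so lightly loaded,
    fast resources score higher. Equation (9): the probability-list entry
    is pheromone * heuristic**beta."""
    heuristic = 1.0 / (w_j * etc_ij)
    return pheromone_ij * heuristic ** beta

# A lightly loaded resource (W_j = 1) beats a heavily loaded one (W_j = 4)
print(transition_weight(1.0, 1.0, 2.0) > transition_weight(1.0, 4.0, 2.0))  # True
```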
4.4 Global Pheromone Update
In nature, some of the pheromone on the trails evaporates. At the end of each iteration of the proposed algorithm, when all ants have finished the search process, every pheromone value is reduced by the evaporation rule, equation (10):

Pheromone_ij^k = (Pheromone_ij^k) × (1 − ρ)    (10)

When all ants have constructed a solution, the ants have moved from the nest to the food source and finished the search process (all the jobs are assigned to resources in the grid). In the proposed algorithm, the best solution and the best ant which constructed it are then found. The global pheromone update applies only to the ant that finds the best solution; this ant is the best ant of the iteration. At this stage, the pheromone value is updated according to equation (11):

Pheromone_ij^BestAnt = Pheromone_ij^BestAnt + (ρ × Δ) + (ε / makespan(Best Ant))    (11)

In the global pheromone update, ρ is the elitism coefficient and Δ is calculated by equation (12):

Δ = 1 / makespan(Best Ant)    (12)
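Equations (10)-(12) can be sketched together as follows (the values of ρ and ε are illustrative; Table 1 gives the values actually used):

```python
def evaporate(pheromone, rho=0.1):
    """Equation (10): every trail loses a fraction rho per iteration."""
    for row in pheromone:
        for j in range(len(row)):
            row[j] *= (1 - rho)

def global_update(pheromone, best_schedule, best_makespan, rho=0.1, eps=0.1):
    """Equations (11)-(12): only the best ant's (job, resource) pairs are
    reinforced, with delta = 1 / makespan(best ant)."""
    delta = 1.0 / best_makespan
    for i, j in best_schedule:
        pheromone[i][j] += (rho * delta) + (eps / best_makespan)

pheromone = [[1.0]]
evaporate(pheromone)                       # 1.0 -> 0.9
global_update(pheromone, [(0, 0)], 2.0)    # 0.9 + 0.1*0.5 + 0.1/2 = 1.0
```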
The pseudo-code of the proposed algorithm is presented in
Figure 3.
5. EXPERIMENTAL RESULTS
This section presents the evaluation of the proposed algorithm against the two algorithms Min-Min [11] and Max-Min [12] for scheduling independent jobs in the grid environment.
All experiments were run on a system with a 2 GHz CPU and 2 GB of RAM, running the Windows 7 Professional operating system.
Figure 3. Pseudo-code of the proposed algorithm.
Table 1 lists the parameter values used in executing the proposed algorithm.
Table 1. Parameters of the proposed algorithm.
A real heterogeneous computational system such as a grid is a combination of hardware and software elements, and comparing scheduling techniques in this environment is often complicated. To address this problem, Braun et al. [20] proposed a simulation model. They defined a grid environment which consists of a set of resources and a set of independent jobs, where the scheduling algorithms aim to minimize the makespan; all scheduling algorithms need to know the completion time of each job on each resource. The model consists of 12 different kinds of instances: u_c_hihi, u_c_hilo, u_c_lohi, u_c_lolo, u_i_hihi, u_i_hilo, u_i_lohi, u_i_lolo, u_s_hihi, u_s_hilo, u_s_lohi, u_s_lolo; each of them can be represented as a matrix. This model uses an ETC matrix which contains the estimated times to completion (Figure 2).
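An ETC matrix in the spirit of this model can be generated as follows. This is a hedged sketch: the heterogeneity ranges and the purely random (inconsistent) structure are illustrative assumptions, not Braun et al.'s exact procedure:

```python
import random

def random_etc(n_jobs, n_resources, job_het=3000.0, res_het=1000.0, seed=42):
    """Inconsistent ETC matrix: each job gets a random baseline cost
    (job heterogeneity), scaled per resource by a random factor
    (resource heterogeneity). Ranges are illustrative."""
    rng = random.Random(seed)
    etc = []
    for _ in range(n_jobs):
        base = rng.uniform(1, job_het)
        etc.append([base * rng.uniform(1, res_het)
                    for _ in range(n_resources)])
    return etc

etc = random_etc(512, 16)   # 512 jobs x 16 resources, as in [20]'s benchmark
```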
In this paper, the same model is used to evaluate the proposed algorithm and the two scheduling algorithms Min-Min and Max-Min. After executing these three algorithms, the different makespan values shown in Table 2 are obtained.
Table 2. Comparison of three algorithms’ makespans.
The results of the experiments are shown as charts in Figures 4, 5, 6, and 7.
Figure 4. Algorithms’ makespans based on u-*-hihi.
Figure 5. Algorithms’ makespans based on u-*-hilo.
Figure 6. Algorithms’ makespans based on u-*-lohi.
Figure 7. Algorithms’ makespans based on u-*-lolo.
Min-Min and Max-Min are among the best of the existing scheduling algorithms, but the results of the experiment (Table 2) indicate that the proposed algorithm has higher performance and a lower makespan than these two scheduling algorithms.
6. CONCLUSIONS
In this paper, a new ACO algorithm is proposed to choose suitable resources for executing jobs according to the completion times of the resources and the size of the given job in the grid environment. The local and global pheromone update functions are modified to balance the system load. The local pheromone update function updates the status of the selected resource after each job assignment, and the global pheromone update function updates the status of the scheduling list of the best solution. The purpose of this paper is to minimize the makespan, and the experimental results show that the proposed algorithm minimizes the makespan better than the other two scheduling algorithms.
7. REFERENCES
[1] Reed, D. A., "Grids, the TeraGrid and beyond", IEEE, Vol. 36, Issue 1, 2003.
[2] Chang, R. S., Chang, J. S., Lin, P. S., "An ant algorithm for balanced job scheduling in grids", Future Generation Computer Systems, Vol. 25, pp. 20-27, 2009.
[3] Ku-Mahamud, K. R., Nasir, H. J. A., "Ant Colony Algorithm for Job Scheduling in Grid Computing", Fourth Asia International Conference on Mathematical/Analytical Modeling and Computer Simulation, pp. 40-45, 2010.
[4] Chang, R. S., Chang, J. S., Lin, S. Y., "Job scheduling and data replication on data grids", Future Generation Computer Systems, Vol. 23, Issue 7, pp. 846-860, 2007.
[5] Gao, Y., Rong, H., Huang, J. Z., "Adaptive grid job scheduling with genetic algorithms", Future Generation Computer Systems, Vol. 21, Issue 1, pp. 151-161, 2005.
[6] Dorigo, M., Stutzle, T., "Ant Colony Optimization", A Bradford Book, 2004.
[7] Dorigo, M., Blum, C., "Ant colony optimization theory: A survey", Theoretical Computer Science, Vol. 344, Issue 2, pp. 243-278, 2005.
[8] Dorigo, M., Gambardella, L. M., "Ant colony system: a cooperative learning approach to the traveling salesman problem", IEEE Transactions on Evolutionary Computation, Vol. 1, Issue 1, pp. 53-66, 1997.
[9] Salari, E., Eshghi, K., "An ACO algorithm for graph coloring problem", ICSC Congress on Computational Intelligence Methods and Applications, 2005.
[10] Zhang, X., Tang, L., "CT-ACO-hybridizing ant colony optimization with cyclic transfer search for the vehicle routing problem", ICSC Congress on Computational Intelligence Methods and Applications, 2005.
[11] Maheswaran, M., Ali, S., Siegel, H. J., Hensgen, D., Freund, R. F., "Dynamic matching and scheduling of a class of independent tasks onto heterogeneous computing systems", Journal of Parallel and Distributed Computing, pp. 107-131, 1999.
[12] Casanova, H., Legrand, A., Zagorodnov, D., Berman, F., "Heuristics for scheduling parameter sweep applications in grid environments", Heterogeneous Computing Workshop, pp. 349-363, 2000.
[13] Fidanova, S., Durchova, M., "Ant Algorithm for Grid Scheduling Problem", Springer, pp. 405-412, 2006.
[14] Pavani, G. S., Waldman, H., "Grid Resource Management by means of Ant Colony Optimization", 3rd International Conference on Broadband Communications, Networks and Systems, pp. 1-9, 2006.
[15] Chang, R. S., Chang, J. S., Lin, P. S., "Balanced Job Assignment Based on Ant Algorithm for Computing Grids", The 2nd IEEE Asia-Pacific Service Computing Conference, pp. 291-295, 2007.
[16] Lorpunmanee, S., Sap, M. N., Abdullah, A. H., Chompoo-inwai, C., "An Ant Colony Optimization for Dynamic Job Scheduling in Grid Environment", Proceedings of World Academy of Science, Engineering and Technology, Vol. 23, pp. 314-321, 2007.
[17] Yan, H., Shen, X. Q., Li, X., Wu, M. H., "An improved ant algorithm for job scheduling in grid computing", Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, Vol. 5, pp. 2957-2961, 2005.
[18] Dorigo, M., Maniezzo, V., Colorni, A., "Ant system: optimization by a colony of cooperating agents", IEEE Transactions on Systems, Man and Cybernetics - Part B, Vol. 26, No. 1, pp. 1-13, 1996.
[19] Dorigo, M., Gambardella, L. M., "Ant colonies for the traveling salesman problem", Biosystems, Vol. 43, Issue 2, pp. 73-81, 1997.
[20] Braun, T. D., Siegel, H. J., Beck, N., "A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems", Journal of Parallel and Distributed Computing, Vol. 61, pp. 810-837, 2001.