This document analyzes the effect of different migration periods on the Parallel Comprehensive Learning Particle Swarm Optimization (PCLPSO) algorithm. PCLPSO is a parallel multi-swarm algorithm based on Particle Swarm Optimization (PSO) and Comprehensive Learning PSO (CLPSO). It uses multiple swarms that work cooperatively and concurrently. The migration period determines how often each swarm shares its best solution with other swarms, and affects the algorithm's efficiency. The document tests PCLPSO on 14 benchmark optimization functions using different migration periods, to analyze their impact on the algorithm's performance.
Effects of the Different Migration Periods on Parallel Multi-Swarm PSO (csandit)
In recent years, there has been an increasing interest in parallel computing. In parallel computing, multiple computing resources are used simultaneously to solve a problem: multiple processors work concurrently, and the program is divided into tasks that are solved at the same time. Recently, a considerable literature has grown up around the theme of metaheuristic algorithms. The particle swarm optimization (PSO) algorithm is a popular metaheuristic. The parallel comprehensive learning particle swarm optimization (PCLPSO) algorithm, based on PSO, has multiple swarms organized in a master-slave paradigm that work cooperatively and concurrently. The migration period is an important parameter in PCLPSO and affects the efficiency of the algorithm. We used well-known benchmark functions in the experiments and analysed the performance of PCLPSO under different migration periods.
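For readers who want a concrete picture of the migration mechanism this abstract discusses, the following is a minimal, self-contained sketch of a multi-swarm PSO with a migration period. It is not the authors' PCLPSO implementation (which uses comprehensive learning and a master-slave parallel layout); the ring migration topology, the coefficient values, and the toy sphere objective are all assumptions made purely for illustration:

```python
import random

def sphere(x):
    # Toy benchmark: global minimum 0 at the origin.
    return sum(v * v for v in x)

def multi_swarm_pso(f=sphere, n_swarms=4, swarm_size=10, dim=5,
                    iterations=200, migration_period=20, seed=1):
    rng = random.Random(seed)
    W, C1, C2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients (assumed)

    # Initialise every swarm with random positions and zero velocities.
    swarms = []
    for _ in range(n_swarms):
        swarm = []
        for _ in range(swarm_size):
            x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
            swarm.append({"x": x, "v": [0.0] * dim, "px": x[:], "pf": f(x)})
        swarms.append(swarm)
    # Per-swarm best as (fitness, position) pairs.
    gbest = [min((p["pf"], p["px"][:]) for p in s) for s in swarms]

    for t in range(1, iterations + 1):
        for si, swarm in enumerate(swarms):
            gx = gbest[si][1]
            for p in swarm:
                for d in range(dim):
                    p["v"][d] = (W * p["v"][d]
                                 + C1 * rng.random() * (p["px"][d] - p["x"][d])
                                 + C2 * rng.random() * (gx[d] - p["x"][d]))
                    p["x"][d] += p["v"][d]
                fx = f(p["x"])
                if fx < p["pf"]:
                    p["px"], p["pf"] = p["x"][:], fx
                    if fx < gbest[si][0]:
                        gbest[si] = (fx, p["x"][:])
        # Migration: every `migration_period` iterations, each swarm receives
        # the best solution of its ring neighbour and overwrites its worst particle.
        if t % migration_period == 0:
            for si, swarm in enumerate(swarms):
                df, dx = gbest[(si - 1) % n_swarms]
                worst = max(swarm, key=lambda p: p["pf"])
                worst["x"], worst["px"], worst["pf"] = dx[:], dx[:], df
                worst["v"] = [0.0] * dim
                if df < gbest[si][0]:
                    gbest[si] = (df, dx[:])
    return min(gbest)[0]
```

In this sketch a smaller `migration_period` spreads good solutions faster but reduces diversity between swarms, which is exactly the trade-off the experiments with different migration periods probe.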
Proposing a scheduling algorithm to balance the time and cost using a genetic... (Editor IJCATR)
Grid computing is a hardware and software infrastructure that provides affordable, sustainable, and reliable access to computational resources. Its aim is to create a supercomputer out of free resources. One of the challenges in Grid computing is the scheduling problem, which is regarded as a hard problem. Since scheduling in the Grid is non-deterministic, deterministic algorithms cannot be used to improve it.
In this paper, a combination of genetic algorithms and binary gravitational attraction is used to solve the scheduling problem, investigating both the reduction of task execution time and the cost-effective use of simultaneous resources. In this setting, the user determines the execution time parameter and the cost-effective use of resources. The algorithm selects resources using a new approach that leads to a balanced load on the resources. Experimental results reveal that, in terms of cost, time, and selection of the best resource, the proposed algorithm achieves better results than the other algorithms.
AN IMPROVED MULTIMODAL PSO METHOD BASED ON ELECTROSTATIC INTERACTION USING NN... (ijaia)
In this paper, an improved multimodal optimization (MMO) algorithm, called LSEPSO, is proposed. LSEPSO combines the Electrostatic Particle Swarm Optimization (EPSO) algorithm with a local search method and makes some modifications to them, which is shown to improve the algorithm's ability to find global and local optima. The algorithm uses a modified local search to improve each particle's personal best, based on n-nearest-neighbour instead of nearest-neighbour search. By creating n new points between each particle and its n nearest particles, it tries to find a point that can replace the particle's personal best. This method prevents particle attenuation and stops neighbours from simply following a specific particle. Tests on a number of benchmark functions clearly demonstrate that the improved algorithm is able to solve MMO problems and outperforms the other algorithms tested in this article.
Proposing a New Job Scheduling Algorithm in Grid Environment Using a Combinat... (Editor IJCATR)
Scheduling jobs to resources in grid computing is complicated due to the distributed and heterogeneous nature of the resources. The purpose of job scheduling in a grid environment is to achieve high system throughput and to minimize the execution time of applications. The complexity of the scheduling problem increases with the size of the grid, and it becomes highly difficult to solve effectively. To obtain a good and efficient method for solving scheduling problems in the grid, new approaches are being investigated. In this paper, a job scheduling algorithm is proposed to assign jobs to the available resources in a grid environment. The proposed algorithm is based on the Ant Colony Optimization (ACO) algorithm, combined with one of the best scheduling algorithms, Suffrage; the result of Suffrage is used within the proposed ACO algorithm. The main contribution of this work is to minimize the makespan of a given set of jobs. The experimental results show that the proposed algorithm can lead to significant performance improvements in a grid environment.
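The abstract does not state the exact update rule, but a standard ant-system pheromone update of the kind commonly paired with makespan objectives, shown here purely as background (the symbols are the conventional ones, not taken from the paper), is:

```latex
\tau_{ij} \leftarrow (1-\rho)\,\tau_{ij} \;+\; \sum_{k=1}^{m} \Delta\tau_{ij}^{k},
\qquad
\Delta\tau_{ij}^{k} =
\begin{cases}
Q / L_k, & \text{if ant $k$ assigned job $i$ to resource $j$,}\\[2pt]
0, & \text{otherwise,}
\end{cases}
```

where $\rho$ is the evaporation rate, $Q$ a constant, and $L_k$ the makespan of ant $k$'s schedule, so shorter schedules deposit more pheromone on their job-to-resource assignments.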
A presentation on PSO with videos and animations to illustrate the concept. The slides cover the concept, the algorithm, the applications, and a comparison of PSO with GA and DE.
MARKOV CHAIN AND ADAPTIVE PARAMETER SELECTION ON PARTICLE SWARM OPTIMIZER (ijsc)
The Particle Swarm Optimizer (PSO) is such a complex stochastic process that analysing its stochastic behavior is not easy. The choice of parameters plays an important role, since it is critical to the performance of PSO. As far as our investigation is concerned, most of the relevant research is based on computer simulations, and little of it is based on a theoretical approach. In this paper, a theoretical approach is used to investigate the behavior of PSO. Firstly, a state of PSO is defined which contains all the information needed for the future evolution. The memoryless property of this state is then investigated and proved. Secondly, by using the concept of the state and suitably dividing the whole process of PSO into a countable number of stages (levels), a stationary Markov chain is established. Finally, using the properties of a stationary Markov chain, an adaptive method for parameter selection is proposed.
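The memoryless property this abstract refers to can be written as the standard Markov condition (using a generic state symbol $S_t$; the paper's own notation may differ):

```latex
P\bigl(S_{t+1} = s \mid S_t,\, S_{t-1},\, \dots,\, S_0\bigr)
\;=\;
P\bigl(S_{t+1} = s \mid S_t\bigr),
```

i.e. once the state at step $t$ is known, the earlier history of the swarm adds no information about its future evolution.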
Job Scheduling on the Grid Environment using Max-Min Firefly Algorithm (Editor IJCATR)
Grid computing is indeed the next generation of distributed systems; its goal is to create a powerful, large, autonomous virtual computer out of countless heterogeneous resources, with the purpose of sharing those resources. Scheduling is one of the main steps in exploiting the capabilities of emerging computing systems such as the grid. Scheduling jobs in computational grids, due to the heterogeneous resources, is known to be an NP-complete problem. Grid resources belong to different management domains, and each applies different management policies. Since the nature of the grid is heterogeneous and dynamic, techniques used in traditional systems cannot be applied to grid scheduling, so new methods must be found. This paper proposes a new algorithm which combines the firefly algorithm with the Max-Min algorithm for scheduling jobs on the grid. The firefly algorithm is a recent swarm-based technique inspired by the social behavior of fireflies in nature. Fireflies move in the search space of the problem to find optimal or near-optimal solutions. The goals of this paper are the simultaneous minimization of the makespan and flowtime of completing the jobs. Experiments and simulation results show that the proposed method is more efficient than the other compared algorithms.
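As background for the firefly component (this is the standard Yang formulation, not necessarily the exact variant used in the paper): a firefly $i$ attracted to a brighter firefly $j$ moves according to

```latex
x_i \;\leftarrow\; x_i \;+\; \beta_0\, e^{-\gamma r_{ij}^{2}}\,(x_j - x_i) \;+\; \alpha\,\varepsilon_i,
```

where $r_{ij}$ is the distance between the two fireflies, $\beta_0$ the attractiveness at zero distance, $\gamma$ the light absorption coefficient, and $\alpha\,\varepsilon_i$ a small random step that keeps the swarm exploring.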
EVOLVING CONNECTION WEIGHTS FOR PATTERN STORAGE AND RECALL IN HOPFIELD MODEL ... (ijsc)
In this paper, the implementation of a genetic algorithm is described to store, and later recall, prototype patterns in a Hopfield neural network associative memory. Various genetic algorithm operators (mutation, crossover, elitism, etc.) are used to evolve a population of optimal weight matrices for storing the patterns; the patterns are then recalled under induced noise, again using a genetic algorithm. The optimal weight matrices obtained during training are used as the seed for starting the GA in recall, instead of starting with a random weight matrix. A detailed comparison of the results thus obtained with earlier results has been carried out. It has been observed that, for Hopfield neural networks, recall of patterns is more successful if evolution of the weight matrices is also applied during training.
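The storage-and-recall cycle this abstract describes can be sketched with the classical Hopfield rules. This is an illustrative baseline only: the paper evolves the weight matrix with a GA, whereas the sketch below uses the closed-form Hebbian rule that the GA approach replaces:

```python
def train_hebbian(patterns):
    # Closed-form Hebbian outer-product rule. In the paper the weight matrix
    # is evolved by a GA instead; this rule only illustrates what is stored.
    # Each pattern is a list of +1/-1 values.
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, sweeps=5):
    # Recall by repeated threshold updates: each unit takes the sign of its
    # net input until the state (ideally) settles on a stored pattern.
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s
```

Storing a single 6-unit pattern and flipping one bit, `recall` restores the original pattern; the GA-based approach in the paper substitutes an evolved weight matrix for `train_hebbian` to improve recall under noise.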
Comparative Study of Ant Colony Optimization And Gang Scheduling (IJTET Journal)
Abstract: Ant Colony Optimization (ACO) is a well-known and rapidly evolving metaheuristic technique. Many optimization problems have already taken advantage of the ACO technique, and countless others are on their way. ACO has been used as an effective algorithm for solving the scheduling problem in grid computing. Gang scheduling, in contrast, is a scheduling algorithm for parallel systems that schedules related threads or processes to run simultaneously on different processors. The scheduled threads usually belong to the same process, but in some cases they come from different processes, for example when the processes have a producer-consumer relationship or when all processes come from the same MPI program.
SWARM INTELLIGENCE SCHEDULING OF SOFT REAL-TIME TASKS IN HETEROGENEOUS MULTIP... (ecij)
In this paper, a hybrid swarm intelligence algorithm (named VNABCSA) is presented for the scheduling of non-preemptive soft real-time tasks in heterogeneous multiprocessor platforms. The method is based on a combination of artificial bee colony and simulated annealing algorithms. The multi-objective function of the VNABCSA algorithm is defined to minimize the total tardiness of all tasks, total number of utilized processors, total completion time, total waiting time for all tasks, and total waiting time for all processors. We introduce a hybrid variable neighborhood search strategy to improve the convergence speed of the algorithm. Simulation results demonstrate the efficiency of the proposed methodology as compared with the
existing scheduling algorithms.
A New Multi-Objective Mixed-Discrete Particle Swarm Optimization Algorithm (Weiyang Tong)
A new multi-objective optimization algorithm to handle problems that are highly constrained, highly nonlinear, and involve mixed types of design variables.
Efficient steganography techniques are needed for the security of digital information over the Internet and for secret data communication, and many techniques have therefore been proposed for steganography. One of these intelligent techniques is the Particle Swarm Optimization (PSO) algorithm. Recently, many modifications have been made to Standard PSO (SPSO), such as Human-Based Particle Swarm Optimization (HPSO). This paper therefore presents image steganography using HPSO in order to find the best locations in a cover image to hide a secret text message. A comparison is then made between image steganography using PSO and using HPSO. Experimental results on six (256×256) cover images and secret messages of different sizes show that the performance of the proposed image steganography using HPSO is improved in comparison with SPSO.
An Effective PSO-inspired Algorithm for Workflow Scheduling (IJECEIAES)
The Cloud is a computing platform that provides on-demand access to a shared pool of configurable resources such as networks, servers and storage, which can be rapidly provisioned and released with minimal management effort from clients. At its core, Cloud computing focuses on maximizing the effectiveness of the shared resources. Workflow scheduling is therefore one of the challenges the Cloud must tackle, especially when a large number of tasks are executed on geographically distributed servers. This entails the need for an effective scheduling algorithm that minimizes task completion time (makespan). Although workflow scheduling has been the focus of many researchers, only a handful of efficient solutions have been proposed for Cloud computing. In this paper, we propose LPSO, a novel algorithm for the workflow scheduling problem based on the Particle Swarm Optimization method. Our proposed algorithm not only ensures fast convergence but also prevents getting trapped in local extrema. We ran realistic scenarios using CloudSim and found that LPSO is superior to previously proposed algorithms, with a negligible deviation between the solution found by LPSO and the optimal solution.
Biogeography-Based Optimization (BBO) is an evolutionary algorithm for global optimization that was introduced in 2008. BBO is an application of biogeography to evolutionary algorithms. Biogeography is the study of the distribution of biodiversity over space and time; it aims to analyze where organisms live, and in what abundance. BBO has certain features in common with other population-based optimization methods. Like GA and PSO, BBO can share information between solutions. This makes BBO applicable to many of the same types of problems that GA and PSO are used for, including unimodal, multimodal and deceptive functions. This paper explains the methodology of applying the BBO algorithm to constrained task scheduling problems.
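The information sharing in BBO happens through migration. In the standard linear model (as introduced with BBO in 2008; the constrained-scheduling paper may use a variant), a habitat hosting $k$ of at most $n$ species has immigration rate $\lambda_k$ and emigration rate $\mu_k$:

```latex
\lambda_k = I\left(1 - \frac{k}{n}\right),
\qquad
\mu_k = E\,\frac{k}{n},
```

where $I$ and $E$ are the maximum immigration and emigration rates. Good (species-rich) solutions thus tend to emigrate features to others, while poor solutions tend to immigrate features from them.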
Multiprocessor scheduling of dependent tasks to minimize makespan and reliabi... (ijfcstjournal)
Algorithms developed for scheduling applications on heterogeneous multiprocessor systems focus on a single objective such as execution time, cost, or total data transmission time. However, if more than one objective (e.g. execution cost and time, which may be in conflict) is considered, the problem becomes more challenging. This project proposes a multiobjective scheduling algorithm using evolutionary techniques for scheduling a set of dependent tasks on the available resources of a multiprocessor environment so as to minimize makespan and reliability cost. A Non-dominated Sorting Genetic Algorithm-II (NSGA-II) procedure has been developed to obtain the Pareto-optimal solutions. NSGA-II is an elitist evolutionary algorithm: it carries the parental solutions unchanged into every iteration, eliminating the problem of losing some Pareto-optimal solutions. NSGA-II uses the crowding distance concept to create diversity among the solutions.
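The crowding distance mentioned at the end can be computed as in standard NSGA-II. The following is a generic sketch over raw objective vectors, not the project's own code:

```python
def crowding_distance(front):
    # front: list of objective vectors (one per solution) in a Pareto front.
    # Returns the NSGA-II crowding distance of each solution; boundary
    # solutions get infinity so they are always preferred.
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for obj in range(m):
        # Sort solution indices by this objective's value.
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # all values equal: this objective adds no distance
        for rank in range(1, n - 1):
            i = order[rank]
            prev_v = front[order[rank - 1]][obj]
            next_v = front[order[rank + 1]][obj]
            # Normalised side length of the cuboid around solution i.
            dist[i] += (next_v - prev_v) / (hi - lo)
    return dist
```

Solutions at the boundary of each objective receive infinite distance and are always kept; interior solutions with larger distance lie in less crowded regions and are preferred when truncating the population, which is how NSGA-II maintains diversity along the front.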
The International Journal of Engineering and Science (The IJES) (theijes)
The International Journal of Engineering & Science aims to provide a platform for researchers, engineers, scientists and educators to publish their original research results, exchange new ideas, and disseminate information on innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal are blind peer-reviewed. Only original articles will be published.
The Effect of Updating the Local Pheromone on ACS Performance using Fuzzy Log... (IJECEIAES)
The Fuzzy Logic Controller (FLC) has become one of the most frequently used artificial intelligence techniques for adapting metaheuristic parameters. In this paper, the ξ parameter of the Ant Colony System (ACS) algorithm is adapted using an FLC, and the behaviour of the algorithm during this adaptation is studied. The proposed approach is compared with the standard ACS algorithm. Computational experiments are carried out on a library of sample instances of the Traveling Salesman Problem (TSPLIB).
An Improved Adaptive Multi-Objective Particle Swarm Optimization for Disassem... (IJRESJOURNAL)
With the development of productivity and the fast growth of the economy, environmental pollution, resource utilization and low product recovery rates have become pressing issues, so more and more attention is being paid to the recycling and reuse of products. However, since the complexity of the disassembly line balancing problem (DLBP) increases with the number of parts in the product, finding the optimal balance is computationally intensive. To improve the ability of the particle swarm optimization (PSO) algorithm to solve the DLBP, this paper proposes an improved adaptive multi-objective particle swarm optimization (IAMOPSO) algorithm. Firstly, an evolution factor parameter is introduced to judge the state of evolution using the idea of fuzzy classification, and the feedback from the evolutionary environment is then used to adjust the inertia weight and acceleration coefficients dynamically. Finally, a dimensional learning strategy based on information entropy is used, in which each learning object is uncertain. Test results on a series of instances of different sizes verify the effectiveness of the proposed algorithm.
Reliable and accurate estimation of software has always been a matter of concern for industry and
academia. Numerous estimation models have been proposed by researchers, but no model is suitable for all
types of datasets and environments. Since the motive of estimation model is to minimize the gap between
actual and estimated effort, the effort estimation process can be viewed as an optimization problem to tune
the parameters. In this paper, evolutionary computing techniques, including, Bee colony optimization,
Particle swarm optimization and Ant colony optimization have been employed to tune the parameters of
COCOMO Model. The performance of these techniques has been analysed by established performance
measure. The results obtained have been validated by using data of Interactive voice response (IVR)
projects. Evolutionary techniques have been found to be more accurate than existing estimation models.
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATIONijcsit
Reliable and accurate estimation of software has always been a matter of concern for industry and academia. Numerous estimation models have been proposed by researchers, but no model is suitable for all types of datasets and environments. Since the motive of estimation model is to minimize the gap between actual and estimated effort, the effort estimation process can be viewed as an optimization problem to tune
the parameters. In this paper, evolutionary computing techniques, including, Bee colony optimization, Particle swarm optimization and Ant colony optimization have been employed to tune the parameters of COCOMO Model. The performance of these techniques has been analysed by established performance measure. The results obtained have been validated by using data of Interactive voice response (IVR)
projects. Evolutionary techniques have been found to be more accurate than existing estimation models.
Reliable and accurate estimation of software has always been a matter of concern for industry and academia. Numerous estimation models have been proposed by researchers, but no model is suitable for all types of datasets and environments. Since the motive of estimation model is to minimize the gap between actual and estimated effort, the effort estimation process can be viewed as an optimization problem to tune the parameters. In this paper, evolutionary computing techniques, including, Bee colony optimization, Particle swarm optimization and Ant colony optimization have been employed to tune the parameters of COCOMO Model. The performance of these techniques has been analysed by established performance measure. The results obtained have been validated by using data of Interactive voice response (IVR) projects. Evolutionary techniques have been found to be more accurate than existing estimation models.
A REVIEW OF PARTICLE SWARM OPTIMIZATION (PSO) ALGORITHMIAEME Publication
Particle swarm optimization (PSO) is a population-based stochastic optimization technique that is inspired by the intelligent collective behaviour of certain animals, such as flocks of birds or schools of fish. It has undergone numerous improvements since its debut in 1995. As academics became more familiar with the technique, they produced additional versions aimed at different demands, created new applications in a variety of fields, published theoretical analyses of the impacts of various factors, and offered other variants of the algorithm. This paper discusses the PSO's origins and background, as well as its theory analysis. Then, we examine the current state of research and application in algorithm structure, parameter selection, topological structure, discrete and parallel PSO algorithms, multi-objective optimization PSO, and engineering applications. Finally, existing difficulties are discussed, and new study directions are proposed.
Embellished Particle Swarm Optimization Algorithm for Solving Reactive Power ...ijeei-iaes
This paper proposes Embellished Particle Swarm Optimization (EPSO) algorithm for solving reactive power problem .The main concept of Embellished Particle Swarm Optimization is to extend the single population PSO to the interacting multi-swarm model. Through this multi-swarm cooperative approach, diversity in the whole swarm community can be upheld. Concurrently, the swarm-to-swarm mechanism drastically speeds up the swarm community to converge to the global near optimum. In order to evaluate the performance of the proposed algorithm, it has been tested in standard IEEE 57,118 bus systems and results show that Embellished Particle Swarm Optimization (EPSO) is more efficient in reducing the Real power losses when compared to other standard reported algorithms.
This study introduces and compares different methods for estimating the two parameters of generalized logarithmic series distribution. These methods are the cuckoo search optimization, maximum likelihood estimation, and method of moments algorithms. All the required derivations and basic steps of each algorithm are explained. The applications for these algorithms are implemented through simulations using different sample sizes (n = 15, 25, 50, 100). Results are compared using the statistical measure mean square error.
USING CUCKOO ALGORITHM FOR ESTIMATING TWO GLSD PARAMETERS AND COMPARING IT WI...ijcsit
This study introduces and compares different methods for estimating the two parameters of generalized logarithmic series distribution. These methods are the cuckoo search optimization, maximum likelihood estimation, and method of moments algorithms. All the required derivations and basic steps of each algorithm are explained. The applications for these algorithms are implemented through simulations using different sample sizes (n = 15, 25, 50, 100). Results are compared using the statistical measure mean square error.
This study introduces and compares different methods for estimating the two parameters of generalized logarithmic series distribution. These methods are the cuckoo search optimization, maximum likelihood estimation, and method of moments algorithms. All the required derivations and basic steps of each algorithm are explained. The applications for these algorithms are implemented through simulations using different sample sizes (n = 15, 25, 50, 100). Results are compared using the statistical measure mean square error.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
ANALYSING THE MIGRATION PERIOD PARAMETER IN PARALLEL MULTI-SWARM PARTICLE SWARM OPTIMIZATION
International Journal of Computer Science & Information Technology (IJCSIT) Vol 8, No 3, June 2016
DOI: 10.5121/ijcsit.2016.8303
Şaban Gülcü¹ and Halife Kodaz²
¹Department of Computer Engineering, Necmettin Erbakan University, Konya, Turkey
²Department of Computer Engineering, Selcuk University, Konya, Turkey
ABSTRACT
In recent years, there has been an increasing interest in parallel computing. In parallel computing, multiple computing resources are used simultaneously to solve a problem: multiple processors work concurrently, and the program is divided into different tasks that are solved simultaneously. Recently, a considerable literature has grown up around the theme of metaheuristic algorithms. The particle swarm optimization (PSO) algorithm is a popular metaheuristic algorithm. The parallel comprehensive learning particle swarm optimization (PCLPSO) algorithm, based on PSO, has multiple swarms organised in the master-slave paradigm that work cooperatively and concurrently. The migration period is an important parameter in PCLPSO and affects the efficiency of the algorithm. We used well-known benchmark functions in the experiments and analysed the performance of PCLPSO under different migration periods.
KEYWORDS
Particle Swarm Optimization, Migration Period, Parallel Algorithm, Global Optimization
1. INTRODUCTION
In recent years, there has been an increasing interest in parallel computing. Software applications developed with conventional methods run on a computer with limited resources as serial computing: the software executed by a processor consists of a collection of instructions, the instructions are processed one after another, and only one instruction is processed at a time. In parallel computing, by contrast, multiple computing resources are used simultaneously to solve a problem. Multiple processors work concurrently, and the program is divided into different tasks that are solved simultaneously. Each task is divided into different instructions, and the instructions are processed on different processors at the same time. Thus, performance increases and computer programs run in a shorter time. Parallel computing has been used in many different fields such as cloud computing [1], physics [2] and nanotechnology [3].
Recently, a considerable literature has grown up around the theme of metaheuristic algorithms. The particle swarm optimization (PSO) algorithm, developed by Kennedy and Eberhart in 1995 [4], is a popular metaheuristic algorithm. It is a population-based, stochastic optimization technique inspired by the social behaviour of bird flocks. Each individual in the population, called a particle, represents a potential solution. PSO has been applied to many fields such as the automotive industry [5], energy [6], synchronous motor design [7] and bioinformatics [8]. In recent years, many algorithms based on PSO have been developed, such as the comprehensive learning PSO (CLPSO) algorithm [9] and the parallel comprehensive learning particle swarm optimization (PCLPSO) algorithm [10]. Devising parallel models of algorithms has been a healthy field for developing more efficient optimization procedures [11-14]. Parallelism is an approach not only to reduce the resolution time but also to improve the quality
of the provided solutions. In CLPSO, instead of using only a particle's own best information as in the original PSO, the historical best information of all other particles is used to update a particle's velocity. Furthermore, the global best position of the population, used in PSO, is never used in CLPSO. With this strategy, CLPSO searches a larger area and the probability of finding the global optimum increases. The PCLPSO algorithm, based on CLPSO, has multiple swarms organised in the master-slave paradigm that work cooperatively and concurrently. Through PCLPSO, the solution quality and the global search ability are improved. This article studies the effect of different migration periods on the PCLPSO algorithm.

This article is organized in the following way: Section 2 is concerned with the methodologies used in this study. Section 3 presents the experimental results and the findings of the research. Finally, the article is concluded in Section 4.
2. MATERIALS & METHODS
2.1. PSO
Each particle in PSO represents a bird and offers a candidate solution. Each particle has a fitness value calculated by a fitness function, and carries velocity and position information that is updated during the optimization process. Using this velocity and position information, each particle searches for food in the search space. PSO aims to find the global optimum, or a solution close to it, and is therefore launched with a random population. The particles update their velocity and position information using Equations (1) and (2) respectively. To update the position of a particle, the pbest of the particle and the gbest of the whole population are used; pbest and gbest are repeatedly updated during the optimization process. Thus, the global optimum, or a solution close to it, is found at the end of the algorithm.
V_i^d = w · V_i^d + c1 · rand1_i^d · (pbest_i^d − X_i^d) + c2 · rand2_i^d · (gbest^d − X_i^d)   (1)

X_i^d = X_i^d + V_i^d   (2)
where V_i^d and X_i^d represent the velocity and the position of the dth dimension of particle i. The constant w, called the inertia weight, balances the global search ability and the local search ability [15]. c1 and c2 are the acceleration coefficients. rand1 and rand2 are two random numbers between 0 and 1; they provide the stochastic nature of the algorithm [16]. pbest_i is the best position of particle i, and gbest is the best position in the entire swarm. The inertia weight w is updated according to Equation (3) during the optimization process.

w(t) = w_max − (w_max − w_min) · t / T   (3)

where w_max and w_min are the maximum and minimum inertia weights, usually set to 0.9 and 0.2 respectively [15], t is the actual iteration number and T is the maximum number of iteration cycles.
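The update rules of Equations (1)–(3) can be sketched as follows. This is a minimal illustration of the standard PSO step, not the paper's implementation (the authors' experiments ran in Java with the Jade framework):

```python
import numpy as np

def pso_step(X, V, pbest, gbest, t, T, c1=1.49445, c2=1.49445,
             w_max=0.9, w_min=0.2):
    """One PSO iteration implementing Equations (1)-(3).

    X, V, pbest : arrays of shape (n_particles, D)
    gbest       : array of shape (D,), best position of the whole swarm
    """
    # Equation (3): linearly decreasing inertia weight
    w = w_max - (w_max - w_min) * t / T
    n, D = X.shape
    rand1 = np.random.rand(n, D)
    rand2 = np.random.rand(n, D)
    # Equation (1): velocity update from personal and global bests
    V = w * V + c1 * rand1 * (pbest - X) + c2 * rand2 * (gbest - X)
    # Equation (2): position update
    X = X + V
    return X, V
```

A full optimizer would evaluate fitness after each step and refresh pbest and gbest before the next call.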
2.2. CLPSO
CLPSO, based on PSO, was proposed by Liang, Qin, Suganthan and Baskar [9]. PSO has some deficiencies; for instance, if gbest falls into a local minimum, the population can easily fall into this local minimum with it. For this reason, CLPSO does not use gbest. Another property of CLPSO is that a particle also uses the pbests of all other particles. This method is called the comprehensive learning approach. The velocity of a particle in CLPSO is updated using Equation (4).
V_i^d = w · V_i^d + c · rand_i^d · (pbest_{fi(d)}^d − X_i^d)   (4)

where fi = [fi(1), fi(2), …, fi(D)] is a list of randomly selected particles, which can be any particles in the swarm including particle i itself. They are determined by the Pc value, called the learning probability, given in Equation (5). pbest_{fi(d)}^d denotes the pbest value, for the dth dimension, of the particle stored in the list fi of particle i. How a particle selects the pbests for each dimension is explained in [9].
Pc_i = 0.05 + 0.45 · (exp(10(i − 1)/(ps − 1)) − 1) / (exp(10) − 1)   (5)

where ps is the number of particles in the swarm.
CLPSO uses a parameter m, called the refreshing gap; it is used to learn from good exemplars and to escape from local optima. The flowchart of the CLPSO algorithm is given in [9].
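The comprehensive learning update of Equation (4) can be sketched as below. This is a simplified stand-in for the exemplar selection of [9] (which uses a tournament between two random particles with probability Pc, and the particle's own pbest otherwise); the refreshing-gap logic is omitted:

```python
import numpy as np

def clpso_velocity(X, V, pbest, fit_pbest, Pc, w=0.7, c=1.49445):
    """Comprehensive learning velocity update (Equation (4)).

    For each dimension d of particle i, the exemplar fi(d) is either the
    particle itself or, with probability Pc[i], the fitter of two randomly
    chosen particles (simplified tournament, minimisation assumed).
    """
    n, D = X.shape
    V_new = np.empty_like(V)
    for i in range(n):
        for d in range(D):
            if np.random.rand() < Pc[i]:
                a, b = np.random.randint(n), np.random.randint(n)
                exemplar = a if fit_pbest[a] < fit_pbest[b] else b
            else:
                exemplar = i  # learn from the particle's own pbest
            V_new[i, d] = (w * V[i, d]
                           + c * np.random.rand()
                           * (pbest[exemplar, d] - X[i, d]))
    return V_new
```

Note that gbest never appears here: every dimension learns from some particle's pbest, which is what lets CLPSO cover a larger search area.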
2.3. PCLPSO
Although PSO has many advantages, its main deficiency is premature convergence [16]. Like many PSO variants, PCLPSO tries to overcome this deficiency. The PCLPSO algorithm, based on CLPSO, was proposed by Gülcü and Kodaz [10]. The solution quality is enhanced through its multi-swarm and cooperation properties, and computational efficiency is improved because PCLPSO runs in parallel on a distributed environment.

A population is split into subpopulations. Each subpopulation represents a swarm, and each swarm runs the PCLPSO algorithm independently; thus the swarms explore the search space. There are two types of swarms: the master swarm and the slave swarms. The number of swarms is an important parameter in PCLPSO, and we analysed its effect on the PCLPSO algorithm in our previous work [17]. In the cooperation technique, each swarm periodically shares its own global best position with the other swarms. The parallelism property is that each swarm runs the algorithm on a different computer at the same time to achieve computational efficiency. The topology is shown in Figure 1. Each swarm runs the PCLPSO algorithm cooperatively and synchronously to find the global optimum. PCLPSO uses the Jade middleware framework [18] to establish the parallelism. The cluster specifications are as follows: Windows XP operating system, Pentium i5 3.10 GHz, 2 GB memory, Java SE 1.7, Jade 4.2 and Gigabit Ethernet. The flowchart of the PCLPSO algorithm is given in [10].
Figure 1. The communication topology [10]
In the communication topology, there is no direct communication between the slave swarms, as shown in Figure 1. The migration process occurs periodically, after a certain number of cycles. In PCLPSO's migration process, each swarm sends its own local best solution to the master. The master collects the local best solutions into a pool, called the ElitePool. It chooses the best solution
from the ElitePool. This solution is sent to all slave swarms by the master. Thus, PCLPSO obtains better and more robust solutions.
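The master's side of the migration step can be sketched as a serial simulation. The real PCLPSO distributes the swarms over Jade agents on separate machines; here `migrate` and the loop fragment are illustrative names, not the paper's API:

```python
def migrate(swarm_bests, swarm_best_fitness):
    """Master-side migration: collect each swarm's local best into the
    ElitePool, pick the overall best, and return it for broadcast.

    swarm_bests        : list of position vectors (one local best per swarm)
    swarm_best_fitness : list of their fitness values (minimisation)
    """
    elite_pool = list(zip(swarm_best_fitness, swarm_bests))
    best_fit, best_pos = min(elite_pool, key=lambda e: e[0])
    return best_pos

# In the main loop, migration fires once every `period` iterations:
# if t % period == 0:
#     elite = migrate(bests, fits)
#     for swarm in slave_swarms:
#         swarm.receive(elite)
```

The migration period `period` is exactly the parameter studied in this paper: small values exchange information often (better quality, more communication cost), large values exchange rarely (cheaper, but possibly worse solutions).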
3. EXPERIMENTAL RESULTS
The experiments performed in this section were designed to study the behaviour of PCLPSO by varying the migration period. The migration period is an important parameter in PCLPSO and affects the efficiency of the algorithm. This article analyses its effect on the PCLPSO algorithm.

Two unimodal and 12 multimodal benchmark functions, which are well known to the global optimization community and commonly used to test optimization algorithms, were selected. The formulas of the functions are given in the next subsection, and their properties are given in Table 1. The number of particles per swarm is 15. According to the dimensions of the functions, the experiments are split into three groups; the properties of these groups are given in Table 2. The term FE in the table refers to the maximum number of fitness evaluations.

The experiments were carried out on a cluster whose specifications are: Windows XP operating system, Pentium i5 3.10 GHz, 2 GB memory, Java SE 1.7, Jade 4.2 and Gigabit Ethernet. The inertia weight w linearly decreases from 0.9 to 0.2 during the iterations, the acceleration coefficient c equals 1.49445, and the refreshing gap m equals five. 30 independent runs were carried out for each function. The results are given in the next subsections.
Table 1. Global minimum, function value, search and initialization ranges of the benchmark functions.

f    Global minimum x*           f(x*)  Search range          Initialization range
f1   [0,0,…,0]                   0      [-100, 100]^D         [-100, 50]^D
f2   [1,1,…,1]                   0      [-2.048, 2.048]^D     [-2.048, 2.048]^D
f3   [0,0,…,0]                   0      [-32.768, 32.768]^D   [-32.768, 16]^D
f4   [0,0,…,0]                   0      [-600, 600]^D         [-600, 200]^D
f5   [0,0,…,0]                   0      [-0.5, 0.5]^D         [-0.5, 0.2]^D
f6   [0,0,…,0]                   0      [-5.12, 5.12]^D       [-5.12, 2]^D
f7   [0,0,…,0]                   0      [-5.12, 5.12]^D       [-5.12, 2]^D
f8   [420.96,420.96,…,420.96]    0      [-500, 500]^D         [-500, 500]^D
f9   [0,0,…,0]                   0      [-32.768, 32.768]^D   [-32.768, 16]^D
f10  [0,0,…,0]                   0      [-600, 600]^D         [-600, 200]^D
f11  [0,0,…,0]                   0      [-0.5, 0.5]^D         [-0.5, 0.2]^D
f12  [0,0,…,0]                   0      [-5.12, 5.12]^D       [-5.12, 2]^D
f13  [0,0,…,0]                   0      [-5.12, 5.12]^D       [-5.12, 2]^D
f14  [0,0,…,0]                   0      [-500, 500]^D         [-5.12, 5.12]^D
Table 2. Parameters used in the experiments.

Dimension  FE       Number of swarms  Number of particles
10         3×10^4   4                 15
30         2×10^5   4                 15
100        3×10^5   4                 15
3.1. FUNCTIONS
The functions used in the experiments are the following:
Sphere function:

f_1(x) = Σ_{i=1}^{D} x_i²   (6)
Rosenbrock function:

f_2(x) = Σ_{i=1}^{D−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]   (7)
Ackley function:

f_3(x) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i²)) − exp((1/D) Σ_{i=1}^{D} cos(2πx_i)) + 20 + e   (8)
Griewank function:

f_4(x) = (1/4000) Σ_{i=1}^{D} x_i² − Π_{i=1}^{D} cos(x_i/√i) + 1   (9)
Functions f1 and f2 are unimodal. Unimodal functions have only one optimum and no local
minima.
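Equations (6)–(9) can be written directly in code; a sketch of the four base functions (the bounds of Table 1 are enforced elsewhere in the algorithm, not here):

```python
import numpy as np

def sphere(x):
    """Equation (6): unimodal, minimum 0 at the origin."""
    return np.sum(x ** 2)

def rosenbrock(x):
    """Equation (7): unimodal, minimum 0 at [1, 1, ..., 1]."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1) ** 2)

def ackley(x):
    """Equation (8): minimum 0 at the origin."""
    D = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / D))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / D) + 20.0 + np.e)

def griewank(x):
    """Equation (9): minimum 0 at the origin."""
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```

All four attain the global minimum f(x*) = 0 at the optima listed in Table 1.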
Weierstrass function:

f_5(x) = Σ_{i=1}^{D} Σ_{k=0}^{k_max} [a^k cos(2πb^k(x_i + 0.5))] − D Σ_{k=0}^{k_max} [a^k cos(2πb^k · 0.5)]   (10)

where a = 0.5, b = 3 and k_max = 20.
Rastrigin function:

f_6(x) = 10D + Σ_{i=1}^{D} [x_i² − 10 cos(2πx_i)]   (11)
Noncontinuous Rastrigin function:

f_7(x) = 10D + Σ_{i=1}^{D} [y_i² − 10 cos(2πy_i)]   (12)

where y_i = x_i if |x_i| < 1/2, and y_i = round(2x_i)/2 if |x_i| ≥ 1/2, for i = 1, 2, …, D.
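The piecewise mapping in Equation (12), which f7 applies to x (and f13 later applies to the rotated vector y), can be sketched as:

```python
import numpy as np

def noncontinuous(x):
    """y_i = x_i where |x_i| < 0.5, otherwise round(2*x_i)/2 (Equation (12))."""
    return np.where(np.abs(x) < 0.5, x, np.round(2.0 * x) / 2.0)
```

The mapping snaps coordinates outside (-0.5, 0.5) onto a grid of step 0.5, which is what makes the function noncontinuous.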
Schwefel function:

f_8(x) = 418.9829 · D − Σ_{i=1}^{D} x_i sin(√|x_i|)   (13)
Functions f3 - f8 are multimodal. Multimodal functions have one global optimum and many local minima. They are treated as a difficult class of benchmark functions by researchers because the number of local minima of a function grows exponentially as its dimension increases [19-22]. Therefore, obtaining good results on multimodal functions is very important for optimization algorithms.
Functions f9 - f14 are the rotated versions of f3 - f8. Rotation turns the separable functions into nonseparable ones, which are harder to solve. A separable function is rotated using Equation (14), where the matrix M is an orthogonal matrix [23] and the variable y is the new input vector of the function.

y = M · x   (14)
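A rotated variant of any base function follows directly from Equation (14). As a sketch, the orthogonal matrix is drawn here via QR decomposition of a Gaussian matrix, a common stand-in for the construction of [23]:

```python
import numpy as np

def random_orthogonal(D, seed=0):
    """Random orthogonal matrix M via QR decomposition of a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((D, D)))
    return Q

def rotated(func, M):
    """Wrap `func` so it is evaluated at y = M @ x (Equation (14))."""
    return lambda x: func(M @ x)
```

For example, `rotated(rastrigin, random_orthogonal(D))` gives f12; a rotation mixes the coordinates, so the dimensions can no longer be optimized independently.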
Rotated Ackley function:

f_9(x) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} y_i²)) − exp((1/D) Σ_{i=1}^{D} cos(2πy_i)) + 20 + e   (15)

where y = M · x.
Rotated Griewank function:

f_10(x) = (1/4000) Σ_{i=1}^{D} y_i² − Π_{i=1}^{D} cos(y_i/√i) + 1   (16)

where y = M · x.
Rotated Weierstrass function:

f_11(x) = Σ_{i=1}^{D} Σ_{k=0}^{k_max} [a^k cos(2πb^k(y_i + 0.5))] − D Σ_{k=0}^{k_max} [a^k cos(2πb^k · 0.5)]   (17)

where a = 0.5, b = 3, k_max = 20 and y = M · x.
Rotated Rastrigin function:

f_12(x) = 10D + Σ_{i=1}^{D} [y_i² − 10 cos(2πy_i)]   (18)

where y = M · x.
Rotated Noncontinuous Rastrigin function:

f_13(x) = 10D + Σ_{i=1}^{D} [z_i² − 10 cos(2πz_i)]   (19)

where z_i = y_i if |y_i| < 1/2, and z_i = round(2y_i)/2 if |y_i| ≥ 1/2, for i = 1, …, D, and y = M · x.
Rotated Schwefel function:

f_14(x) = 418.9829 · D − Σ_{i=1}^{D} z_i   (20)

where

z_i = y_i sin(√|y_i|)            if |y_i| ≤ 500
z_i = −10⁻³ · (|y_i| − 500)²     if |y_i| > 500

for i = 1, …, D, with y = yʹ + 420.96 and yʹ = M · (x − 420.96).
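Putting Equation (20) together in code (a sketch; the out-of-range branch follows the reconstruction above, so its exact form should be checked against [9, 10]):

```python
import numpy as np

def rotated_schwefel(x, M):
    """Rotated Schwefel function (Equation (20)).

    The shift by 420.96 keeps the global optimum at x* = [420.96, ...],
    and the out-of-range branch penalises |y_i| > 500.
    """
    y = M @ (x - 420.96) + 420.96
    inside = np.abs(y) <= 500
    z = np.where(inside,
                 y * np.sin(np.sqrt(np.abs(y))),
                 -1e-3 * (np.abs(y) - 500.0) ** 2)  # penalty outside bounds
    return 418.9829 * x.size - np.sum(z)
```

With M equal to the identity, the function reduces to the shifted Schwefel of Equation (13) and evaluates to approximately 0 at x* = [420.96, …, 420.96].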
3.2. RESULTS OF THE 10-D PROBLEMS
Table 3 presents the mean of the function values for the 10-D problems under the different
migration periods, and Table 4 presents the corresponding calculation times. In [10], the
importance of the migration period is emphasized: if information is exchanged very often, the
solution quality may be better, but the computational efficiency deteriorates; if the migration
period is longer, the computational efficiency improves, but the solution quality may be worse. It
is apparent from these tables that the computational efficiency is best when the migration period
is equal to 100, as expected. However, the best values of functions f1-f14 are obtained when the
migration period is around 6. The bold text in the tables indicates the best results.
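The trade-off above can be illustrated with a toy model of the exchange: each swarm sends its local best to the master every `period` iterations, so communication volume scales as iterations/period. This is an illustrative sketch, not the actual PCLPSO implementation:

```python
def migration_events(num_swarms, iterations, period):
    """Toy model of PCLPSO's master-slave exchange: count how many
    local-best messages are sent over a run."""
    events = 0
    for it in range(1, iterations + 1):
        if it % period == 0:
            events += num_swarms  # each swarm sends its local best
    return events

print(migration_events(8, 1000, 1))    # 8000 exchanges
print(migration_events(8, 1000, 100))  # 80 exchanges
```

This inverse scaling matches the pattern in the calculation-time tables, where doubling the migration period roughly halves the run time at small periods, suggesting that migration traffic dominates the cost there.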
Table 8. The calculation time (ms) for 100-D problems. (cont.)
P f8 f9 f10 f11 f12 f13 f14
1 8416360 8465946 14887180 107772532 49520680 38914469 8426766
2 4212494 4264592 7455683 54305695 25255891 20959078 4219047
3 2811024 2825860 4945657 36529329 14836586 11975688 2811625
4 2111044 2134315 3753972 26980211 11635633 9480899 2111984
5 1694133 1733743 3175391 21495318 9477302 7026943 1693172
6 1411989 1528062 2471500 17938292 7900083 5856531 1408031
7 1211146 1309620 2204610 15424662 6778599 5022761 1209568
8 1059526 1147609 1983295 13512172 5927734 4393734 1058516
9 940396 1018443 1872491 12005406 5267141 3902958 940396
10 847583 918953 1602417 10818677 4742500 3516010 847760
11 770406 833901 1527842 9821073 4306505 3190875 769755
12 704885 763354 1372219 8991464 3942151 2923417 703380
13 651985 706177 1236329 8320052 3640547 2697932 651417
14 605307 656714 1129041 7723333 3387615 2510213 605740
15 568490 613292 1097830 7225792 3161109 2342984 565031
16 530781 572339 1005362 6760963 2959318 2194333 530792
17 499156 539521 923150 6362182 2790052 2068448 499276
18 468990 509729 857309 5997474 2628448 1949177 469271
19 446589 482698 817491 5701068 2494927 1850594 447359
20 424125 459135 779244 5417932 2372333 1759313 423594
50 170109 184245 327097 2188411 949104 704172 170130
100 85651 92641 170664 1110604 476083 353761 85646
4. CONCLUSIONS
The purpose of the current study was to analyse the effect of the migration period parameter on
the PCLPSO algorithm. PCLPSO, which is based on the master-slave paradigm, has multiple
swarms that work cooperatively and concurrently on distributed computers. Each swarm runs the
algorithm independently, and during cooperation the swarms exchange their local best particles
with each other at every migration step. Thus, the multiple swarms and their cooperation increase
the diversity of the solutions. PCLPSO runs on a cluster. In the experiments, the performance of
PCLPSO was analysed on well-known benchmark functions using different migration periods.
This study has shown that the calculation time decreases as the migration period grows longer.
Better results were obtained for low-dimensional problems when the migration period is around
6, for 30-dimensional problems when it is around 13, and for high-dimensional problems when it
is around 12. The migration period should therefore be tuned for each problem, as its best value
varies with the difficulty of the problem. As future work, we plan to investigate how the number
of particles exchanged between swarms affects the performance of the PCLPSO algorithm.
REFERENCES
[1] M. Mezmaz, N. Melab, Y. Kessaci, Y.C. Lee, E.-G. Talbi, A.Y. Zomaya, D. Tuyttens, A parallel bi-
objective hybrid metaheuristic for energy-aware scheduling for cloud computing systems, Journal of
Parallel and Distributed Computing, 71 (2011) 1497-1508.
[2] Z. Guo, J. Mi, P. Grant, An implicit parallel multigrid computing scheme to solve coupled thermal-
solute phase-field equations for dendrite evolution, Journal of Computational Physics, 231 (2012)
1781-1796.
[3] J. Pang, A.R. Lebeck, C. Dwyer, Modeling and simulation of a nanoscale optical computing system,
Journal of Parallel and Distributed Computing, 74 (2014) 2470-2483.
[4] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of the 1995 IEEE
International Conference on Neural Networks, Vols 1-6, IEEE, 1995, pp. 1942-1948.
[5] A.R. Yildiz, A new hybrid particle swarm optimization approach for structural design optimization
in the automotive industry, Proceedings of the Institution of Mechanical Engineers, Part D: Journal
of Automobile Engineering, 226 (2012) 1340-1351.
[6] M.S. Kıran, E. Özceylan, M. Gündüz, T. Paksoy, A novel hybrid approach based on particle swarm
optimization and ant colony algorithm to forecast energy demand of Turkey, Energy conversion and
management, 53 (2012) 75-83.
[7] M. Mutluer, O. Bilgin, Design optimization of PMSM by particle swarm optimization and genetic
algorithm, in: Innovations in Intelligent Systems and Applications (INISTA), 2012 International
Symposium on, IEEE, 2012, pp. 1-4.
[8] G.E. Güraksın, H. Haklı, H. Uğuz, Support vector machines classification based on particle swarm
optimization for bone age determination, Applied Soft Computing, 24 (2014) 597-602.
[9] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer
for global optimization of multimodal functions, IEEE Transactions on Evolutionary Computation,
10 (2006) 281-295.
[10] Ş. Gülcü, H. Kodaz, A novel parallel multi-swarm algorithm based on comprehensive learning
particle swarm optimization, Engineering Applications of Artificial Intelligence, 45 (2015) 33-45.
[11] E. Alba, Parallel metaheuristics: a new class of algorithms, John Wiley & Sons, 2005.
[12] G.-W. Zhang, Z.-H. Zhan, K.-J. Du, Y. Lin, W.-N. Chen, J.-J. Li, J. Zhang, Parallel particle swarm
optimization using message passing interface, in: Proceedings of the 18th Asia Pacific Symposium
on Intelligent and Evolutionary Systems, Volume 1, Springer, 2015, pp. 55-64.
[13] M. Pedemonte, S. Nesmachnow, H. Cancela, A survey on parallel ant colony optimization, Applied
Soft Computing, 11 (2011) 5181-5197.
[14] B. Li, K. Wada, Communication latency tolerant parallel algorithm for particle swarm optimization,
Parallel Computing, 37 (2011) 1-10.
[15] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: Proceedings of the 1998 IEEE
International Conference on Evolutionary Computation (IEEE World Congress on Computational
Intelligence), IEEE, 1998, pp. 69-73.
[16] F. Van Den Bergh, An analysis of particle swarm optimizers, PhD thesis, University of Pretoria, 2006.
[17] Ş. Gülcü, H. Kodaz, Effects of the number of swarms on parallel multi-swarm PSO, International
Journal of Computing, Communication and Instrumentation Engineering, 3 (2016) 201-204.
[18] F.L. Bellifemine, G. Caire, D. Greenwood, Developing multi-agent systems with JADE, John Wiley
& Sons, 2007.
[19] X. Yao, Y. Liu, G. Lin, Evolutionary programming made faster, IEEE Transactions on Evolutionary
Computation, 3 (1999) 82-102.
[20] B.-Y. Qu, P.N. Suganthan, S. Das, A distance-based locally informed particle swarm model for
multimodal optimization, IEEE Transactions on Evolutionary Computation, 17 (2013) 387-402.
[21] X. Li, Niching without niching parameters: particle swarm optimization using a ring topology,
IEEE Transactions on Evolutionary Computation, 14 (2010) 150-169.
[22] S.C. Esquivel, C.A. Coello Coello, On the use of particle swarm optimization with multimodal
functions, in: Evolutionary Computation, 2003. CEC'03. The 2003 Congress on, IEEE, 2003, pp.
1130-1136.
[23] R. Salomon, Re-evaluating genetic algorithm performance under coordinate rotation of benchmark
functions. A survey of some theoretical and practical aspects of genetic algorithms, BioSystems, 39
(1996) 263-278.