https://iaeme.com/Home/journal/IJMET 19 editor@iaeme.com
International Journal of Mechanical Engineering and Technology (IJMET)
Volume 13, Issue 7, July 2022, pp. 19-44. Article ID: IJMET_13_07_003
Available online at https://iaeme.com/Home/issue/IJMET?Volume=13&Issue=7
ISSN Print: 0976-6340 and ISSN Online: 0976-6359
DOI: https://doi.org/10.17605/OSF.IO/KQ34H
© IAEME Publication
A REVIEW OF PARTICLE SWARM
OPTIMIZATION (PSO) ALGORITHM
Mubeen Shaikh1 and Dr. Dhananjay Yadav2
1Research Scholar, Department of Mechanical Engineering, SSSUTMS, Sehore, Madhya Pradesh, India
2Professor, Department of Mechanical Engineering, SSSUTMS, Sehore, Madhya Pradesh, India
ABSTRACT
Particle swarm optimization (PSO) is a population-based stochastic optimization
technique that is inspired by the intelligent collective behaviour of certain animals, such
as flocks of birds or schools of fish. It has undergone numerous improvements since its
debut in 1995. As academics became more familiar with the technique, they produced
additional versions aimed at different demands, created new applications in a variety
of fields, published theoretical analyses of the impacts of various factors, and offered
other variants of the algorithm. This paper discusses the PSO's origins and background,
as well as its theory analysis. Then, we examine the current state of research and
application in algorithm structure, parameter selection, topological structure, discrete
and parallel PSO algorithms, multi-objective optimization PSO, and engineering
applications. Finally, existing difficulties are discussed, and new study directions are
proposed.
Keywords: Topology structure, Particle swarm optimization, Multi-objective optimization, Discrete PSO, Parallel PSO.
Cite this Article: Mubeen Shaikh and Dhananjay Yadav, A Review of Particle Swarm
Optimization (PSO) Algorithm, International Journal of Mechanical Engineering and
Technology (IJMET), 13(7), 2022, pp. 19-44.
https://iaeme.com/Home/issue/IJMET?Volume=13&Issue=7
1. INTRODUCTION
The particle swarm optimization (PSO) algorithm is a swarm-based stochastic optimization
technique proposed by Eberhart and Kennedy (1995) and Kennedy and Eberhart (1995).
The PSO algorithm models the social behaviour of animals such as insects, herds, birds, and
fishes. These swarms work together to find food, and each member of the swarm changes the
search pattern based on its own and other members' learning experiences. The PSO algorithm's main design concept is closely tied to two lines of research: one is evolutionary algorithms; like evolutionary algorithms, PSO uses a swarm mode, which allows it to simultaneously search a vast region of the solution space of the optimised objective function. The other is artificial life, i.e., the study of artificial systems with the characteristics of life.
In researching the behaviour of social animals with artificial life theory, Millonas offered five essential principles for developing swarm artificial life systems with cooperative behaviour by computer (van den Bergh 2001):
(1) Proximity: The swarm should be able to do basic space and time computations.
(2) Quality: The swarm should be able to detect and respond to changes in the environment's
quality.
(3) Diverse response: The swarm should not limit its approach to obtaining resources to a
narrow range.
(4) Stability: the swarm should not change its behaviour mode with every change in the environment.
(5) Adaptability: the swarm should modify its behaviour mode when it is justified.
These five concepts encompass the primary characteristics of artificial life systems and have
served as guiding principles in the development of the swarm artificial life system. Particles in
PSO can update their positions and velocities as the environment changes, thereby meeting the
requirements of proximity and quality. Furthermore, in PSO, the swarm does not restrict its
mobility but instead continuously searches for the optimal solution in the possible solution
space. Particles in PSO can maintain a stable movement in the search space while changing
their movement mode to react to environmental changes. As a result, particle swarm systems
satisfy the five principles listed above.
2. ORIGIN AND BACKGROUND
In order to demonstrate the production background and evolution of the PSO algorithm, we first
offer the early simple model, known as the Boid (Bird-oid) model (Reynolds 1987). This model
is intended to replicate bird behaviour and is also a direct source of the PSO algorithm.
In the most basic model, each bird is represented by a point in the Cartesian coordinate system, with its initial velocity and position assigned at random. The simulation is then run in accordance with the "nearest proximity velocity match rule", so that each individual takes the same speed as its nearest neighbour. As the iteration continues in this manner, all of the points quickly reach the same velocity. Because this model is overly simplistic and far from realistic, a random variable is added to the speed item: besides satisfying "the nearest proximity velocity match", each speed is perturbed by a random variable at each iteration, bringing the simulation closer to the real scenario.
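The velocity-matching rule with a random perturbation can be sketched in a few lines. This is an illustrative 2-D sketch; the function and parameter names are assumptions, not part of the original model description:

```python
import random

def nearest_neighbor_velocity_match(positions, velocities, noise=0.1, steps=50):
    """Iteratively copy each bird's velocity from its nearest neighbour,
    then add a small random perturbation, and move the birds."""
    n = len(positions)
    for _ in range(steps):
        new_v = []
        for i in range(n):
            # find the nearest neighbour by squared Euclidean distance in 2-D
            j = min((k for k in range(n) if k != i),
                    key=lambda k: (positions[k][0] - positions[i][0]) ** 2
                                + (positions[k][1] - positions[i][1]) ** 2)
            vx, vy = velocities[j]
            # the random term that keeps the flock from collapsing to one state
            new_v.append((vx + random.uniform(-noise, noise),
                          vy + random.uniform(-noise, noise)))
        velocities = new_v
        positions = [(p[0] + v[0], p[1] + v[1])
                     for p, v in zip(positions, velocities)]
    return positions, velocities
```

Without the noise term, neighbouring birds simply exchange or copy velocities; the perturbation is what makes the simulation resemble a real flock.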
Heppner designed a "cornfield model" to imitate the foraging behaviour of a flock of birds (Clerc and Kennedy 2002). Assume there was a "cornfield" on the plane, i.e., food randomly distributed across the plane at the start. The birds moved in accordance with the following principles in order to locate the food.
Assume that the swarm size is N, that each particle's position vector in D-dimensional space is Xi = (xi1, xi2, ..., xid, ..., xiD), that the velocity vector is Vi = (vi1, vi2, ..., vid, ..., viD), that the individual's optimal position (i.e., the best position that the particle itself has experienced) is Pi = (pi1, pi2, ..., pid, ..., piD), and that the swarm's optimal position is Pg = (pg1, pg2, ..., pgd, ..., pgD). Using the minimization problem as an example, without loss of generality, each particle is updated in every generation as

vid = w vid + c1 r1 (pid - xid) + c2 r2 (pgd - xid)
xid = xid + vid

where w is the inertia weight, c1 and c2 are the learning factors, and r1 and r2 are independent random numbers uniformly distributed in [0, 1].
The velocity and position update formulas define the iteration procedure of every particle in each generation. From a sociological standpoint, we can observe that the first part of the velocity update formula represents the influence of the particle's past velocity. It signifies that the particle is confident in its current moving state and performs inertial movement in accordance with its own velocity, hence the parameter w is known as the inertia weight. The second part, known as the "cognitive" item, is determined by the distance between the particle's current position and its own optimal position. It refers to the particle's own
thinking, i.e. the particle's movement as a result of its own experience. As a result, parameter
c1 is known as the cognitive learning factor (also called cognitive acceleration factor). The third
component, dubbed "social," is based on the distance between the particle's current position and
the global (or local) ideal position in the swarm. It refers to the sharing of knowledge and
cooperation among particles, specifically particle movement resulting from the experience of
other particles in the swarm. Because it models particle movement guided by the cognition of good particles, the parameter c2 is known as the social learning factor (also called the social acceleration factor).
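The update rules just described translate directly into code. Below is a minimal, self-contained sketch of the canonical global-best PSO for minimization; the function name, interface, and the default parameter values (common constriction-equivalent choices) are illustrative assumptions:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200,
        w=0.7298, c1=1.49618, c2=1.49618, seed=None):
    """Canonical global-best PSO for minimizing f over [lo, hi]^dim.

    w  : inertia weight (confidence in the particle's own velocity)
    c1 : cognitive learning factor (pull toward the particle's own best)
    c2 : social learning factor (pull toward the swarm's best)
    """
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive + social parts of the velocity update
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest
```

For example, minimizing the 2-D sphere function `sum(x_d^2)` with this sketch drives the best value close to zero within a few hundred iterations.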
The PSO method has received a lot of attention since it was proposed because of its intuitive
basis, simple and easy implementation, and extensive applicability to many kinds of functions.
The theory and use of the PSO algorithm have advanced significantly over the last two decades.
Researchers have gained a preliminary understanding of the theory, and its application in
several domains has been
realised.
PSO is a parallel and stochastic optimization algorithm. Its benefits are summarised as follows: it does not require the optimised function to be differentiable, derivable, or continuous; its convergence rate is fast; and the algorithm is simple and easy to implement through programming. Unfortunately, it also has drawbacks (Wang 2012): (1) For functions with several local extremes, it is likely to fall into a local extreme and fail to produce the correct result.
This phenomenon is caused by two factors: the properties of the optimised functions, and the particles' diversity vanishing too quickly, causing premature convergence. These two factors are usually tightly linked. (2) The PSO algorithm cannot produce satisfactory results when it lacks the cooperation of good search methods, because it does not make adequate use of the information collected during the computation step. Instead, it only uses the information from the swarm and individual optima during each iteration. (3)
While the PSO algorithm allows for global search, it cannot guarantee convergence to global
optima. (4) The PSO method is a meta-heuristic bionic optimization technique with no formal
theoretical underpinning. It is merely intended to simplify and simulate the search phenomenon
of some swarms, but it neither explains why this algorithm is useful in principle nor determines
its relevant range. As a result, the PSO technique is often appropriate for a class of optimization
problems that are high dimensional and do not require particularly exact result.
There are now numerous studies on the PSO algorithm, which can be classified into the
eight groups listed below:
(1) Conduct theoretical analysis of the PSO algorithm in order to understand its working mechanism.
(2) Modify the algorithm's structure in order to improve performance.
(3) Investigate the effect of various parameter configurations on the PSO algorithm.
(4) Investigate the impact of alternative topological structures on the PSO algorithm.
(5) Investigate the parallel PSO algorithm.
(6) Investigate the discrete PSO algorithm.
(7) Investigate multi-objective optimization using the PSO algorithm.
(8) Apply the PSO algorithm in a variety of technical domains.
The remainder of this paper outlines current research on PSO algorithms from the eight categories listed above. Since there are too many related studies to review them all, we select a few representative ones to review.
3. THEORETICAL ANALYSIS
Current theoretical study of the PSO algorithm focuses mainly on its working mechanism, i.e., how the particles interact with each other, and why the algorithm works well for many optimization problems but poorly for others. Research on this problem can be classified into three categories: the travelling trajectory of a single particle, the convergence problem, and the evolution and distribution of the complete particle system over time. Kennedy (1998) performed the first examination of simplified particle behaviour by simulating different particle trajectories under a range of design choices. Ozcan and Mohan (1998) published the first theoretical study of the simplified PSO algorithm, indicating that in a simplified one-dimensional PSO system, a particle travelled along a path defined by a sinusoidal wave, with its amplitude and frequency determined randomly. Their analysis, however, was limited to the simple PSO model without the inertia weight, and assumed that Pid and Pgd remained constant.
Actually, Pid and Pgd varied regularly, resulting in a sine wave with numerous different
amplitudes and frequencies. As a result, the overall trajectory appeared disorderly. This
drastically weakened the impact of their conclusions.
Clerc and Kennedy (2002) performed the first formal examination of the PSO algorithm's
stability, but this method regarded the random coefficients as constants, reducing the typical
stochastic PSO to a deterministic dynamic system. The resulting system was a second-order linear dynamic system whose stability was determined by the system poles, i.e., the eigenvalues of the state matrix. van den Bergh (2001) performed a similar analysis on the deterministic version of the
PSO method and discovered the regions in the parameter space where stability could be
ensured. The literature also addressed convergence and parameter selection (Trelea 2003;
Yasuda et al. 2003). However, the authors recognised that they did not account for the stochastic
nature of the PSO method, therefore their results were limited. Emara and Fattah performed a
similar investigation on the continuous version of the PSO algorithm (2004).
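A concrete by-product of the Clerc and Kennedy (2002) stability analysis is the constriction coefficient that guarantees convergence of the deterministic system. The helper below is an illustrative sketch; the default values c1 = c2 = 2.05 and kappa = 1 follow common practice and are assumptions, not mandated by the analysis itself:

```python
import math

def constriction_factor(c1=2.05, c2=2.05, kappa=1.0):
    """Clerc-Kennedy constriction coefficient.

    For phi = c1 + c2 > 4, the coefficient
        chi = 2 * kappa / |2 - phi - sqrt(phi**2 - 4*phi)|
    keeps the deterministic second-order particle dynamics stable
    when it multiplies the whole velocity update.
    """
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("constriction requires c1 + c2 > 4")
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

For the standard choice c1 = c2 = 2.05, this gives chi of roughly 0.73, which corresponds to the widely used inertia-weight setting w = 0.7298 with scaled learning factors.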
As originally proposed, the PSO method employs constant learning factors and uniformly distributed random numbers. How would the first- and second-order stability regions of the particle trajectories change if the inertia weight were also treated as a random variable, and/or c1 and c2 followed statistical distributions other than the uniform distribution? First-order stability analysis (Clerc and Kennedy 2002; Trelea 2003; Bergh and Engelbrecht 2006) sought to determine whether the stability of the mean trajectories depended on the parameters (w, φ), where φ = (ag + al)/2 and c1 and c2 were uniformly distributed in the intervals [0, ag] and [0, al], respectively. Higher-order
moments were found in stochastic stability analysis, which proved to be highly valuable for
understanding particle swarm dynamics and clarifying PSO convergence properties
(Fernandez-Martinez and Garcia-Gonzalo 2011; Poli 2009).
Kennedy (2003) presented the Bare Bones PSO (BBPSO) model of PSO dynamics. Its
particle velocity update has a Gaussian distribution. Although Kennedy's initial formulation is
not competitive with regular PSO, adding a component-wise jumping mechanism and adjusting
the standard deviation can result in a competitive optimization technique. As a result, al Rifaie
and Blackwell (2012) suggested a Bare Bones with Jumps (BBJ) algorithm with a modified
search spread component and a lower jump probability. It used the difference between the
neighbourhood best and the current position rather than the difference between the particle's
personal best and the neighbourhood best (in the local neighbourhood) (in global
neighbourhood). Three performance criteria (accuracy, efficiency, and dependability) were
used to compare the BBJ against other standard Clerc–Kennedy PSOs and BBJ modifications.
Using these measurements, it was demonstrated that when benchmarks with successful
convergence were evaluated, the accuracy of BBJ was significantly superior than other
algorithms. Furthermore, BBJ has been empirically demonstrated to be the most efficient and
reliable algorithm in both local and global neighbourhoods.
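The Bare Bones position update is simple to state in code. This sketch assumes the commonly cited formulation (a Gaussian with mean at the midpoint of the personal and global bests and standard deviation equal to their separation); the function name is illustrative:

```python
import random

def bare_bones_update(personal_best, global_best, rng=random):
    """One Bare Bones PSO (Kennedy 2003) position update: no velocity term;
    each coordinate is sampled from a Gaussian whose mean is the average of
    the personal and global best and whose standard deviation is their
    absolute difference."""
    new_pos = []
    for pid, gd in zip(personal_best, global_best):
        mu = 0.5 * (pid + gd)
        sigma = abs(pid - gd)
        # when the two bests agree, the distribution collapses to a point
        new_pos.append(rng.gauss(mu, sigma) if sigma > 0 else mu)
    return new_pos
```

Note how the search spread shrinks automatically as the personal and global bests converge, which is exactly the behaviour the jump mechanism of BBJ is designed to counteract.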
Poli (2008) was also interested in the social-only variant of PSO (al = 0) and the fully informed particle swarm (Mendes et al. 2004). Garcia-Gonzalo and Fernandez-Martinez (2014) provided the
convergence and stochastic stability study of a number of PSO variants, differing from the
classical PSO in the statistical distribution of the three PSO parameters: inertia weight, local
and global acceleration factors. They presented an analytical expression for the upper limit of the particle trajectories' second-order stability regions (the so-called USL curves), which is available for most PSO algorithms. Numerical experiments revealed that adjusting the PSO parameters close to the USL curve yielded the best algorithm performance.
Kadirkamanathan et al. (2006) used the Lyapunov stability analysis and the idea of passive
system to investigate the stability of particle dynamics. This analysis did not presume that all
parameters were non-random, and it obtained the necessary stability requirements. It was based
on random particle dynamics, which were represented as a nonlinear feedback control system.
The feedback loop of such a system had a deterministic linear and a nonlinear component, as
well as a time-varying gain. Though it evaluated the influence of random components, its
stability analysis was conducted with the goal of achieving the optimal position; thus, the
conclusion cannot be directly transferred to non-optimal particles.
Even though the original PSO method could converge, it only converged to the best position that the swarm had searched, and it could not ensure that the attained solution was the global optimum, or even a local optimum. van den Bergh and Engelbrecht (2002) suggested
a PSO method to guarantee algorithm convergence. It used a new update equation for the global
optimal particle, causing it to generate a random search near the global optimal position, while
other particles used their original equations to update. This approach could guarantee the PSO algorithm's convergence to a local optimal solution, at the expense of convergence speed, but its performance in multi-modal situations was inferior to the canonical PSO algorithm.
Lack of population diversity was recognised early (Kennedy and Eberhart 1995) as a significant factor in the swarm's premature convergence toward a local optimum; thus, increasing diversity was regarded as a useful way of escaping from local optima (Kennedy and Eberhart 1995; Zhan et al. 2009). However, increasing swarm diversity is detrimental to
fast convergence toward the ideal solution. This phenomenon is well known since Wolpert and
Macready (1997) demonstrated that no algorithm can outperform all others on every type of
task. As a result, research trials to improve the performance of an optimization algorithm should
not be designed to find a general function optimizer (Mendes et al. 2004; Wolpert and Macready
1997), but rather to find a general problem-solver capable of performing well on a wide range
of well-balanced practical benchmark problems (Garcia-Martinez and Rodriguez 2012).
A few PSO versions have been proposed to avoid premature convergence on a local
optimum solution while retaining the quick convergence aspect of the original PSO formulation
(Valle et al. 2008). These methods include fine-tuning the PSO parameters to control particle
velocity updating (Nickabadi et al. 2011), using different PSO local formulations to consider
the best solution within a local topological particle neighbourhood rather than the entire swarm
(Kennedy and Mendes 2002, 2003; Mendes et al. 2004), and integrating the PSO with other
heuristic algorithms (Chen et al. 2013). For instance, comprehensive learning PSO (Liang et al. 2006) used a new learning technique to promote swarm diversity and avoid premature convergence in multi-modal problem solving. ALC-PSO (Chen et al. 2013) gave the swarm
leader increasing age and lifetime in order to escape from local optima and avoid premature
convergence. Tanweer et al. (2016) used self-regulating inertia weights and self-perception on
the global search direction to achieve faster convergence and better outcomes.
Blackwell (2005) theoretically investigated and empirically validated the speed features
with diversity loss in the PSO method for spherically symmetric local neighbourhood functions.
Kennedy (2005) conducted a comprehensive study of how speed influences the PSO algorithm,
which was useful in understanding the impact of speed to PSO performance. Clerc (2006)
thoroughly investigated the PSO iteration process at the stationary stage, as well as the roles of
each random coefficient; eventually, he provided the probability density functions for each
random coefficient.
4. ALGORITHM STRUCTURE
There is a sea of enhancement approaches for the PSO algorithm structure, which can be classified into the eight main sub-sections that follow.
4.1. Adopting Multi-Sub-Populations
Drawing on the concept of subpopulations in the genetic algorithm, Lovbjerg et al. (2001) introduced subpopulations and a reproduction operator into the PSO algorithm. Liang and Suganthan (2005) presented a
dynamic multi-swarm PSO in which the swarm was separated into numerous sub-swarms that
were reassembled often to communicate information. To improve neural fuzzy networks, Peng
and Chen (2015) introduced a symbiotic particle swarm optimization (SPSO) algorithm. The
described SPSO algorithm employed a multi-swarm technique, in which each particle
represented a single fuzzy rule, and each particle in each swarm evolved independently to avoid falling into local optima. To handle multi-modal function optimization problems, Chang
(2015) presented a modified PSO technique. It fragmented the original swarm into multiple
sub-swarms depending on particle order. The best particle in each sub-swarm was recorded and
then used to replace the original global best particle in the whole population in the velocity
updating calculation. The improved velocity formula was used to update all particles in each
sub-swarm.
Tanweer et al. (2016) also proposed a new dynamic mentoring and self-regulation-based
particle swarm optimization (DMeSR-PSO) method that classified particles into mentor,
mentee, and independent learner groups based on fitness differences and Euclidian distances
from the best particle.
The PSO approach requires too many particles for the high-dimensional optimization
problem, resulting in significant computational complexity; consequently, achieving a
satisfactory solution is challenging. Recently, the cooperative particle swarm algorithm (CPSO-H) (Bergh and Engelbrecht 2004) was proposed, which divided the input vector into several sub-vectors and employed a separate particle swarm to optimise each sub-vector. Although the CPSO-H algorithm used one-dimensional swarms to search each dimension, the search results were then integrated by a global swarm, and improved performance on multi-modal problems has been demonstrated.
Furthermore, Niu et al. (2005) proposed a multi-population cooperative PSO method and
introduced master–slave sub-population mode into the PSO algorithm. Similarly, Seo et al.
(2006) proposed a multi- grouped PSO that used N groups of particles to simultaneously explore
N peaks of multi-modal issues.
Selleri et al. (2006) used numerous independent sub-populations and added some new terms to the particle velocity update formula, causing the particles to move toward the historical optimal position of their own sub-population or away from the gravity centre of other sub-populations.
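A core ingredient shared by several of these multi-population variants is periodic regrouping of the sub-swarms. The helper below is an illustrative sketch of random repartitioning in the spirit of dynamic multi-swarm PSO (Liang and Suganthan 2005), not any author's exact procedure:

```python
import random

def regroup(particle_ids, n_subswarms, rng=random):
    """Randomly repartition particles into sub-swarms.

    Each sub-swarm searches independently between regroupings; reshuffling
    the membership every few generations lets information flow across
    sub-swarms without merging them permanently."""
    ids = list(particle_ids)
    rng.shuffle(ids)
    size = len(ids) // n_subswarms
    groups = [ids[i * size:(i + 1) * size] for i in range(n_subswarms - 1)]
    groups.append(ids[(n_subswarms - 1) * size:])  # last group takes the remainder
    return groups
```

In a full algorithm, `regroup` would be called every fixed number of generations, with each sub-swarm running its own velocity and position updates in between.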
4.2. Improving the Selection Strategy for the Particle Learning Object
Al-kazemi and Mohan (2002) introduced a multi-phase PSO algorithm in which particles were
grouped in different phases based on temporary search objectives, and these temporary targets
allowed the particles to move toward or away from their own or the global best location. Ting
et al. (2003) modified every particle's pBest so that each dimension learned from randomly chosen other dimensions. If the new pBest was superior, it was used to replace the original pBest.
Yang and Wang (2006) devised the roulette selection strategy to decide the gBest in the
PSO algorithm, such that in the early stages of evolution, all individuals had a chance to drive
the search direction to avoid premature evolution. Zhan et al. (2011) proposed an orthogonal
learning PSO that used an orthogonal learning strategy to provide efficient examples. Abdelbar
et al. (2005) developed a fuzzy measure in which multiple particles with the highest fitness
values in each neighbour could influence other particles.
In addition to these methods, the particle positions in the Bare bones PSO algorithm
(Kennedy 2003) were updated using a Gaussian distribution. This distribution was beneficial
for optimization methods because many foragers and wandering animals followed a Levy
distribution of steps. So Richer and Blackwell (2006) substituted random sampling from a Levy
distribution for particle dynamics within PSO. To evaluate its performance, a variety of
benchmark problems were used; the resulting Levy PSO performed as well as, if not better than, a standard PSO or the equivalent Gaussian models. Furthermore, Hendtlass (2003) gave each particle a memory ability in the speed update equation, while He et al. (2004) included a passive congregation method. Zeng et al. (2005) introduced an acceleration term into the PSO algorithm to enhance its global search capability, transforming it from a second-order stochastic system into a third-order stochastic system.
4.3. Modifying the Velocity Update Strategy
Despite the fact that PSO performance has increased over the years, how to choose an
appropriate velocity update approach and parameters remains an important research subject.
Ardizzon et al. (2015) offered a novel application of the original particle swarm concept, with
two types of agents in the swarm, "explorers" and "settlers," that may dynamically swap roles
during the search procedure. This method may dynamically adjust the particle velocities at each
time step based on the particle's current distance from the optimal place determined so far by
the swarm.
The uniformly distributed random numbers in the velocity update formula may also affect the particles' movement and exploration ability. To improve PSO performance, Fan and Yan (2014) proposed a self-adaptive PSO with multiple velocity strategies (SAPSO-MVS). SAPSO-MVS could generate self-adaptive control parameters during the whole evolution procedure and used a novel velocity updating strategy to optimise the balance between the PSO algorithm's exploration and exploitation capabilities while avoiding manual tuning of the PSO parameters. Crazy PSO was proposed by Roy and Ghoshal (2008), in which particle velocity was randomised within predefined boundaries. Its goal was to randomise the velocity of some particles, known as "crazy particles", by applying a predefined probability of craziness, in order to maintain diversity for global search and better convergence. Unfortunately, suitable values of the predefined probability of craziness could only be obtained after a few experiments.
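The craziness mechanism can be sketched as follows; the default probability value and the function name are illustrative assumptions:

```python
import random

def apply_craziness(velocities, v_min, v_max, p_craziness=0.2, rng=random):
    """With probability p_craziness, re-randomize a particle's velocity
    within predefined limits, in the spirit of Crazy PSO
    (Roy and Ghoshal 2008). Suitable probability values have to be
    found experimentally; 0.2 here is purely illustrative."""
    out = []
    for v in velocities:
        if rng.random() < p_craziness:
            # this particle "goes crazy": its velocity is drawn afresh
            out.append([rng.uniform(v_min, v_max) for _ in v])
        else:
            out.append(list(v))
    return out
```

Such a step would typically be applied once per generation, after the ordinary velocity update, to inject diversity into a converging swarm.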
According to Liu et al. (2004), too frequent velocity updates weaken the particles' local exploitation ability and impede convergence, so he introduced a relaxation velocity update strategy, which updated the speed only when the original speed could no longer improve the particle's fitness value. Experimental results demonstrated that this strategy could significantly reduce the computational load and accelerate convergence. Diosan and Oltean (2006) employed a genetic algorithm to evolve the structure of the PSO algorithm, i.e., the updating order and frequency of the particles. Peram et al. (2003) presented a fitness-distance-ratio-based PSO (FDR-PSO), in which a new velocity updating equation was used to regenerate the velocity of each particle. Li et al. (2012) presented a self-learning PSO in which the velocity update scheme could be automatically modified during the evolution procedure. Lu et al. (2015b) proposed a mode-dependent velocity updating equation with Markovian switching parameters in switching PSO to overcome the contradiction between local search and global search, making it easier to jump out of local minima.
4.4. Modifying the Speed or Position Constrain Method and Dynamically Determining the Search Space
Chaturvedi et al. (2008) regulated the acceleration coefficients dynamically within maximum and minimum bounds. However, determining the bound values of the acceleration coefficients was a tough task that required several simulations. Stacey et al. (2003) proposed a new speed constrain method for re-randomizing particle speed, as well as a novel position constrain method for re-randomizing particle position. Clerc (2004) introduced a contraction-expansion coefficient into the evolution equations to ensure algorithm convergence while relaxing the speed bound. Other ways of dynamically determining the search space, such as squeezing the search space (Barisal 2013), have also been presented.
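The speed and position constrain ideas above can be sketched in a few lines. The particular combination shown (velocity clamping plus position re-randomization on boundary violation) is an illustrative assumption, not a specific author's method:

```python
import random

def constrain(position, velocity, lo, hi, v_max, rng=random):
    """Clamp each velocity component to [-v_max, v_max]; when a position
    component leaves the search space [lo, hi], re-randomize it inside
    the bounds instead of letting the particle wander outside."""
    new_v = [max(-v_max, min(v_max, v)) for v in velocity]
    new_x = [x if lo <= x <= hi else rng.uniform(lo, hi) for x in position]
    return new_x, new_v
```

Clamping bounds how far a particle can travel per step, while re-randomization keeps out-of-bounds particles productive rather than stuck on the boundary.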
4.5. Combining PSO with Other Search Techniques
This has two main goals: one is to increase diversity and avoid premature convergence, and the other is to improve the PSO algorithm's local search ability. A plethora of models have been investigated in order to enhance search diversity in the PSO (Poli et al. 2007). These hybrid
algorithms included introducing various genetic operators to the PSO algorithm, such as
selection (Angeline 1998a, b; Lovbjerg et al. 2001), crossover (Angeline 1998b; Chen et al.
2014), mutation (Tsafarakis et al. 2013), or Cauchy mutation (Wang et al. 2011), to increase
diversity and improve the algorithm's ability to escape from local minima. Meng et al. (2015)
introduced crisscross search particle swarm optimization (CSPSO), a new hybrid optimization
technique.
Lim and Isa (2015) proposed a hybrid PSO method that used fuzzy reasoning and a
weighted particle to build a novel search behaviour model that improved the search ability of
the traditional PSO algorithm. Shin and Kita (2014) used information from the second global
best and second individual best particles, in addition to the information from the first global
best and first individual best particles, to improve the search performance of the original PSO.
Tanweer et al. (2016) created a particle swarm optimization approach called self-regulating particle swarm optimization (SRPSO) that incorporated the best human learning strategies for finding the optimum. The SRPSO employed two learning strategies: the first used a self-regulating inertia weight, and the second used self-perception of the global search direction. Other mechanisms and techniques that have been combined with the PSO algorithm include the predator-prey model (Gosciniak 2015), uncorrelative component analysis model (Fan et al. 2009), dissipative model (Xie et al. 2002), self-organizing model (Xie et al. 2004), life cycle model (Krink and Lovbjerg 2002), Bayesian optimization model (Monson and Seppi 2005), chemical reaction optimization (Li et al. 2015b), neighborhood search mechanism (Wang et al. 2013), collision-avoiding mechanism (Blackwell and Bentley 2002), information sharing mechanism (Li et al. 2015a), local search technique (Sharifi et al. 2015), cooperative behavior (Bergh and Engelbrecht 2004), hierarchical fair competition (Chen et al. 2006b), external memory (Acan and Gunay 2005), gradient descent technique (Noel and Jannett 2004), simplex method operator (Qian et al. 2012; El-Wakeel 2014), hill-climbing method (Lin et al. 2006b), division of labor (Lim and Isa 2015), principal component analysis (Mu et al. 2015), Kalman filtering (Monson and Seppi 2004), genetic algorithm (Soleimani and Kannan 2015), shuffled frog leaping algorithm (Samuel and Rajan 2015), random search algorithm (Ciuprina et al. 2007), Gaussian local search (Jia et al. 2011), simulated annealing (Liu et al. 2014; Geng et al. 2014), taboo search (Wen and Liu 2005), Levenberg-Marquardt algorithm (Shirkhani et al. 2014), ant colony algorithm (Shelokar et al. 2007), artificial bee colony (Vitorino et al. 2015; Li et al. 2011), chaos algorithm (Yuan et al. 2015), differential evolution (Zhai and Jiang 2015), evolutionary programming (Jamian et al. 2015), and multi-objective cultural algorithm (Zhang et al. 2013). The PSO algorithm was also extended into quantum space by Sun et al. (2004); this novel PSO model was based on the delta potential well and modeled the particles as having quantum behaviour. Furthermore, Medasani and Owechko (2005) expanded the PSO algorithm by introducing possibilistic c-means and probability theory, putting forward a probabilistic PSO algorithm.
4.1.4. Improving for Multi-modal Problems
The seventh solution is intended specifically for multi-modal problems, with the hope of finding
multiple good solutions. To obtain several high-quality solutions to an optimization problem,
Parsopoulos and Vrahatis (2004) used deflection, stretching, and repulsion, among other
strategies, to find as many minima as possible by preventing the particles from travelling to
minimal regions already found. This strategy, however, generates additional local optima at
both ends of the detected local minima, potentially causing the optimization algorithm to fall
into these new local optima. As a result, Jin et al. (2005) developed a new kind of function
transformation that avoids this drawback.
Benameur et al. (2006) presented an adaptive method to determine the niching parameters.
Brits et al. (2003) suggested a niche PSO algorithm to discover and monitor numerous optima
by exploiting many sub-populations at the same time. Brits et al. (2002) investigated a strategy
for simultaneously finding numerous optimal solutions by changing the fitness value
calculating approach. Schoeman and Engelbrecht (2005) used vector dot-product operations to
determine the candidate solution and its boundary in each niche, and parallelized this process
to produce better results based on the niche PSO algorithm. However, every niche PSO
algorithm shared a common drawback: it required a predefined niche radius, and the method's
performance was very sensitive to that radius; adaptive methods such as that of Benameur et
al. (2006) were proposed to address this issue.
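The radius sensitivity criticised above is easy to see in code. Below is a minimal, hypothetical sketch of greedy radius-based niche assignment, not the NichePSO of Brits et al.; the particle positions, fitness values, and grouping rule are illustrative assumptions:

```python
def assign_niches(particles, fitness, radius):
    """Greedy niche formation for minimization: the best unassigned particle
    seeds a niche that absorbs every unassigned particle within `radius`."""
    order = sorted(range(len(particles)), key=lambda i: fitness[i])
    niches, assigned = [], set()
    for i in order:
        if i in assigned:
            continue
        members = [
            j for j in order
            if j not in assigned and sum(
                (particles[i][d] - particles[j][d]) ** 2
                for d in range(len(particles[i]))
            ) ** 0.5 <= radius
        ]
        assigned.update(members)
        niches.append(members)
    return niches

# Two clusters around x = 0 and x = 10: radius 1.0 separates them,
# while an oversized radius merges everything into one niche.
pts = [[0.0], [0.1], [9.9], [10.0]]
fit = [0.0, 0.01, 0.01, 0.0]
print(assign_niches(pts, fit, 1.0))    # [[0, 1], [3, 2]]
print(assign_niches(pts, fit, 100.0))  # [[0, 3, 1, 2]]
```

Shrinking or growing `radius` changes how many niches are found, which is precisely why adaptive, radius-free schemes were proposed.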
4.1.5. Keeping diversity of the Population
Population diversity is very crucial for improving the PSO algorithm's global convergence.
When population variety was very low, the simplest way to maintain it was to reset some
particles or the entire particle swarm. Lovbjerg and Krink (2002) used self-organized
criticality in the PSO algorithm to measure the degree of proximity among the particles in the
swarm and to decide whether or not to re-initialize the particle positions. Clerc (1999)
presented Re-Hope, a deterministic algorithm that reset the swarm when the search space had
become quite small without a solution being found (No-Hope). To preserve population
diversity and to balance
global and local searches, Fang et al. (2016) suggested a decentralised quantum-inspired
particle swarm optimization (QPSO) method with cellular structured populations (called
cQPSO). The performance of cQPSO-lbest was evaluated on 42 benchmark functions with
varying features (unimodal, multi-modal, separable, shifted, rotated, noisy, and mis-scaled)
and compared against a set of PSO variants with different topologies and swarm-based
evolutionary algorithms (EAs).
Park et al.'s (2010) modified PSO incorporated a chaotic inertia weight that declined and
oscillated simultaneously beneath a decreasing line. This introduced additional diversity into
the PSO, but the chaotic control settings needed to be tuned. Netjinda et al. (2015) recently
introduced a novel mechanism into PSO to boost swarm diversity, inspired by the collective
response behaviour of starlings. Through this collective response mechanism, the Starling
PSO explored a broader scope of the search space, avoiding poor solutions. The improved
performance comes at a price: the approach adds extra processes to the original algorithm, so
more parameters were required, and the new collective response step increased the execution
time. The algorithmic complexity of the Starling PSO, however, remained the same as that of
the original PSO.
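The reset strategies above share a simple skeleton: measure swarm diversity and re-randomize when it collapses. The sketch below is a generic illustration, not Re-Hope or any specific published variant; the centroid-distance diversity measure and the threshold are assumptions:

```python
import random

def diversity(swarm):
    """Mean Euclidean distance of the particles from the swarm centroid."""
    dim = len(swarm[0])
    centroid = [sum(p[d] for p in swarm) / len(swarm) for d in range(dim)]
    return sum(
        sum((p[d] - centroid[d]) ** 2 for d in range(dim)) ** 0.5
        for p in swarm
    ) / len(swarm)

def reset_if_converged(swarm, bounds, threshold):
    """Re-initialize the whole swarm uniformly at random when diversity collapses."""
    if diversity(swarm) < threshold:
        lo, hi = bounds
        return [[random.uniform(lo, hi) for _ in p] for p in swarm]
    return swarm

random.seed(0)
collapsed = [[1.0, 1.0] for _ in range(10)]  # all particles piled on one point
fresh = reset_if_converged(collapsed, (-5.0, 5.0), threshold=0.1)
print(diversity(collapsed), diversity(fresh) > 0.1)
```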
5. PARAMETER SELECTION
The inertia weight ω (or constriction factor χ), the learning factors c1 and c2, the speed limit
Vmax, the position limit Xmax, the swarm size, and the initial swarm are all key parameters in the
PSO algorithm. Some researchers fixed other factors and simply evaluated the impact of a
single parameter on the algorithm, whilst others studied the impact of numerous parameters on
the algorithm.
6. INERTIA WEIGHT
According to current research, the inertia weight has the biggest influence on the performance
of the PSO algorithm, and it is therefore the most studied parameter.
Shi and Eberhart (1998) were the first to address PSO parameter selection: they introduced an
inertia weight into PSO and improved its convergence behaviour. An extension of this study
used fuzzy systems to adjust the inertia weight nonlinearly during optimization (Shi and
Eberhart 2001). In general, the inertia weight in PSO is assumed to balance global and local
search: a larger inertia weight leans toward global search and a smaller one toward local
search, hence the value of the inertia weight should gradually decrease over time. Shi and
Eberhart (1998) proposed that the inertia weight be set within [0.9, 1.2], and showed that a
linearly decreasing inertia weight could considerably improve PSO performance.
Because fixed inertia weights rarely produce satisfactory results, PSO variants appeared
whose inertia weight decreased linearly with the iteration count (Shi and Eberhart 1998),
changed adaptively (Nickabadi et al. 2011), was adjusted by a quadratic function (Tang et al.
2011) or by population information (Zhan et al. 2009), was adjusted with Bayesian techniques
(Zhang et al. 2015), or decreased exponentially. At the same time, numerous indices have
been used to adapt the inertia weight, such as the success history of the search (Fourie and
Groenwold 2002), individual search ability (Yang et al. 2007), particle average velocity
(Yasuda and Iwasaki 2004), population diversity (Jie et al. 2006), smoothness of change in the
objective function (Wang et al. 2005), and particle swarm evolutionary speed and aggregation
degree (Qin et al. 2006).
Similarly, Liu et al. (2005) used the Metropolis criterion to decide whether or not to accept an
inertia weight adjustment. Others used a random inertia weight, such as [0.5 + rnd/2.0], where
rnd is a uniformly distributed random number in [0, 1] (Eberhart and Shi 2001; Zhang et al.
2003). Jiang and Bompard (2005) used a chaotic process to pick the inertia weight, allowing it
to traverse [0, 1]. The improved PSO of Park et al. (2010) introduced a chaotic inertia weight
that oscillated and dropped simultaneously beneath a decreasing line, but the chaotic control
parameters needed to be tuned.
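Two of the schedules above can be stated in a few lines. The sketch below shows the linearly decreasing schedule of Shi and Eberhart (1998) and the random inertia weight [0.5 + rnd/2.0]; the bounds 0.9 and 0.4 are commonly used defaults assumed here, not values the text prescribes:

```python
import random

def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight (Shi and Eberhart 1998):
    large w early for global search, small w late for local search."""
    return w_start - (w_start - w_end) * t / t_max

def random_inertia():
    """Random inertia weight 0.5 + rnd/2.0, rnd uniform on [0, 1)
    (Eberhart and Shi 2001)."""
    return 0.5 + random.random() / 2.0

print(linear_inertia(0, 100))    # 0.9  (start of run: global search)
print(linear_inertia(100, 100))  # 0.4  (end of run: local search)
```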
Learning factors c1 and c2
The learning factors c1 and c2 represent the weights of the stochastic acceleration terms that
pull each particle toward pBest and gBest (or nBest). Often, c1 and c2 are set to 2.0, causing
the search to cover the region centred on pBest and gBest.
Another typical value is 1.49445, which can ensure PSO algorithm convergence (Clerc and
Kennedy 2002). After extensive testing, Carlisle and Dozier (2001) proposed a superior
parameter set in which c1 and c2 were set to 2.8 and 1.3, respectively, and the performance of
this option was confirmed by Schutte and Groenwold (2005). Inspired by the concept of time-
varying inertia weight, many PSO variants appeared whose learning factors changed with time
(Ivatloo 2013), such as learning factors that linearly decreased with time (Ratnaweera et al.
2004), dynamically adjusted based on the particles' evolutionary states (Ide and Yasuda 2005),
dynamically adjusted in accordance with the number of fitness values that deteriorate
persistently and the swarm (Chen et al. 2006a).
In most circumstances, the two learning variables c1 and c2 have the same value, resulting
in the same weight for social and cognitive search. Kennedy (1997) investigated two types of
extremes: models with only the social term and models with only the cognitive term, and the
results revealed that these two components were critical to the success of swarm search, while
no definitive conclusions could be drawn about the asymmetric learning factor. There have been
studies that determined the inertia weight and learning factors at the same time. Many
researchers used optimization techniques such as genetic algorithms (Yu et al. 2005), adaptive
fuzzy algorithms (Juang et al. 2011), and differential evolutionary algorithms to dynamically
calculate the inertia weight and learning parameters (Parsopoulos and Vrahatis 2002b).
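The role of c1 and c2 is clearest in the velocity update itself. The sketch below is the standard per-dimension update; pairing w ≈ 0.7298 with c1 = c2 = 1.49445 is a commonly quoted constriction-equivalent setting, used here as an assumption rather than a prescription:

```python
import random

def update_velocity(v, x, pbest, gbest, w=0.7298, c1=1.49445, c2=1.49445):
    """Per-dimension PSO velocity update:
    v_d = w*v_d + c1*r1*(pbest_d - x_d) + c2*r2*(gbest_d - x_d)."""
    return [
        w * vd
        + c1 * random.random() * (pb - xd)   # cognitive pull toward pBest
        + c2 * random.random() * (gb - xd)   # social pull toward gBest
        for vd, xd, pb, gb in zip(v, x, pbest, gbest)
    ]

random.seed(1)
# Both attractors lie below the particle, so every component pulls it down.
v = update_velocity([0.0, 0.0], [1.0, 1.0], [0.5, 0.5], [0.0, 0.0])
print(all(vd < 0 for vd in v))
```

Setting c2 = 0 gives the cognition-only model and c1 = 0 the social-only model that Kennedy (1997) compared.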
7. SPEED LIMITS VMAX
The particles' speed is controlled by a maximum speed Vmax, which can be used to control the
particle swarm's global search ability. In the original PSO method (ω = 1, c1 = c2 = 2),
particle speed quickly climbs to very high values, degrading the algorithm's performance,
hence particle velocity must be limited. Later, Clerc and Kennedy (2002) pointed out that it
was not essential to limit particle velocity; instead, incorporating a constriction factor into the
speed update formula could achieve the same effect. Even with the constriction factor,
however, research showed that additionally limiting the particle velocity produced better
results (Eberhart and Shi 2000), so the concept of a speed limit was kept in the PSO
algorithm. In general, Vmax is set to the dynamic range of each variable and is normally a fixed
number, but it could alternatively decline linearly with time (Fan 2002) or change
dynamically depending on the success history of the search (Fourie and Groenwold 2002).
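Velocity clamping itself is a one-line rule per component; a minimal sketch:

```python
def clamp_velocity(v, v_max):
    """Limit each velocity component to the interval [-v_max, +v_max]."""
    return [max(-v_max, min(v_max, vd)) for vd in v]

print(clamp_velocity([7.5, -0.3, -12.0], 4.0))  # [4.0, -0.3, -4.0]
```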
Position limits Xmax
Particle positions can be controlled by a maximum position Xmax to prevent particles from
flying out of the physical solution space. Robinson and Rahmat-Samii (2004) proposed three
control techniques: absorbing walls, reflecting walls, and invisible walls. When one of a
particle's dimensions crossed the boundary of the solution space, the absorbing wall set the
velocity in that dimension to zero, while the reflecting wall reversed the direction of the
particle velocity; either wall eventually pulled the particle back into the allowable solution
space. To save computation time and avoid interfering with the motion of other particles, the
invisible walls did not evaluate the fitness of particles that flew out. However, the
performance of the PSO method was heavily influenced by the problem dimension and the
relative position of
the global optima and the search space border. Huang and Mohan (2005) developed a hybrid
damping boundary to achieve robust and consistent performance by integrating the features of
the absorbing and reflecting walls. Mikki and Kishk (2005) integrated hard position limits,
absorbing walls, and reflecting walls to achieve superior outcomes.
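The absorbing and reflecting wall rules described above can be sketched per dimension as follows (invisible walls would instead simply skip the fitness evaluation of an out-of-bounds particle); the exact clamping detail is an assumption of this sketch:

```python
def apply_wall(x, v, lo, hi, mode="absorbing"):
    """Boundary handling for one dimension of one particle:
    absorbing wall: clamp the position and zero the velocity component;
    reflecting wall: clamp the position and reverse the velocity component."""
    if lo <= x <= hi:
        return x, v  # inside the solution space: nothing to do
    clamped = max(lo, min(hi, x))
    if mode == "absorbing":
        return clamped, 0.0
    return clamped, -v  # reflecting

print(apply_wall(6.2, 1.5, -5.0, 5.0, "absorbing"))   # (5.0, 0.0)
print(apply_wall(6.2, 1.5, -5.0, 5.0, "reflecting"))  # (5.0, -1.5)
```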
8. POPULATION SIZE
The chosen population size is related to the problem to be solved, though performance is not
particularly sensitive to it. The typical range is 20–50; in some instances, a larger population
is used to address special demands.
8.1. Initialization of the Population
The population's initialization is likewise a critical issue. In general, the initial population is
generated randomly, but there are many intelligent population initialization methods, such as
using the nonlinear simplex method (Parsopoulos and Vrahatis 2002a), centroidal Voronoi
tessellations (Richards and Ventura 2004), and orthogonal design (Zhan et al. 2011), to
determine the initial population of the PSO algorithm; these methods make the distribution of
the initial population as even as possible and help the algorithm explore the search space.
Robinson et al. (2002) stated that the PSO algorithm and the GA algorithm could be used
in tandem, i.e., using the population optimised by the PSO algorithm as the initial population
of the GA algorithm, or using the population optimised by the GA algorithm as the initial
population of the PSO algorithm; both methods could produce better results. Yang et al.
(2015) introduced LHNPSO, a new PSO technique with particles initialized by a
low-discrepancy sequence, a high-order (1/2) nonlinear time-varying inertia weight, and
constant acceleration coefficients. To populate the search space adequately, the Halton
sequence was used to generate the initial population.
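A Halton-sequence initialization of the kind used in LHNPSO can be sketched as follows; the radical-inverse construction is standard, while the prime bases (2, 3) and the two-dimensional box are assumptions of this example:

```python
def halton(index, base):
    """Radical inverse of `index` in the given base (one Halton coordinate)."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def halton_init(n_particles, bounds, bases=(2, 3)):
    """Spread the initial swarm evenly over the box `bounds` (one (lo, hi)
    pair per dimension) using a Halton low-discrepancy sequence."""
    return [
        [lo + halton(i + 1, b) * (hi - lo) for (lo, hi), b in zip(bounds, bases)]
        for i in range(n_particles)
    ]

swarm = halton_init(8, [(-5.0, 5.0), (-5.0, 5.0)])
print(swarm[0][0])  # 0.0: the first base-2 Halton value 1/2, mapped to [-5, 5]
```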
Furthermore, PSO algorithm parameters can be tuned using methods such as sensitivity
analysis (Bartz-Beielstein et al. 2002), regression trees (Bartz-Beielstein et al. 2004a), and
computational statistics (Bartz-Beielstein et al. 2004b) to improve PSO performance on
practical problems.
Beheshti and Shamsuddin (2015) presented a nonparametric particle swarm optimization
(NP-PSO) to enhance global exploration and local exploitation in PSO without tuning
algorithm parameters. To improve the search capability, the technique combined local and
global topologies with two quadratic interpolation operations.
9. MULTI-OBJECTIVE OPTIMIZATION PSO
Multi-objective (MO) optimization has become an important research area in recent years. In
a multi-objective optimization problem, each objective function could in principle be
optimised independently to find its individual optimum. Unfortunately, because the objectives
conflict, a solution that is perfect for all of them can rarely be found; only Pareto optimal
solutions are attainable. The information exchange mechanism of the PSO algorithm differs
considerably from those of other swarm optimization tools. In the genetic algorithm (GA),
chromosomes share information with each other via the crossover operation, a bidirectional
information sharing process, whereas in most PSO algorithms only gBest (or nBest) passes
information to the other particles. Owing to this point-attraction property, traditional PSO
algorithms cannot simultaneously discover the numerous optimal points that define the Pareto
frontier. Although multiple optimal solutions can be obtained by assigning different weights
to the objective functions, combining them, and running the algorithm many times, a method
that obtains a group of Pareto optimal solutions in a single run is still desirable.
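The Pareto-dominance relation that underlies the MOPSO variants discussed below can be written compactly (minimization assumed); the objective vectors are made-up values:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Filter a list of objective vectors down to its Pareto front."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

objs = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(non_dominated(objs))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

An external archive, as in Clerc's approach below, is essentially a running `non_dominated` set maintained across iterations.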
Following the presentation of the vector-evaluated genetic algorithm by Peram et al. (2003), a
wave of multi-objective optimization algorithms, such as NSGA-II, was introduced one after
another (Coello et al. 2004). Liu et al. (2016) were the first to investigate the use of the PSO
algorithm in multi-objective optimization, emphasising the importance of individual and
swarm searches, but they did not employ any strategy to maintain diversity. Clerc (2004)
employed an external archive to store the non-dominated members and determine which
particles belonged to it, and these members were used to guide the flight of other particles
based on the idea of non-dominated optimality.
Kennedy (2003) employed the NSGA-II algorithm's core mechanism to select the local
optimal particle from among the current local best particles and their offspring, and suggested
a non-dominated sorting PSO that used the max–min strategy in the fitness function to
determine
Pareto dominance. Furthermore, Goldbarg et al. (2006) optimised a U-tube steam generator
mathematical model in a nuclear power plant using the non-dominated sorting PSO. To handle
multi-objective optimization problems, Ghodratnama et al. (2015) used the comprehensive
learning PSO method in conjunction with Pareto dominance. Ozcan and Mohan (1998) created
an elitist multi-objective PSO that used the elitist mutation coefficient to optimise particle
exploitation and exploration. Wang et al. (2011) developed an iterative multi-objective particle
swarm optimization-based control vector parameterization to deal with state-constrained
chemical and biochemical engineering issues. Clerc and Kennedy (2002), Fan and Yan (2014),
Chen et al. (2014), Lei et al. (2005), and others have suggested multi-objective PSO algorithms
in recent studies. Because the fitness calculation requires a significant amount of computational
resources, it is necessary to decrease the number of fitness functions evaluated in order to reduce
the calculation cost. Pampara et al. (2005) used a fitness inheritance strategy and an estimate
technique to accomplish this goal, comparing the effects of fifteen different inheritance
techniques and four estimation techniques applied to a multi-objective PSO algorithm.
An MOPSO's diversity can be maintained using two methods: the Sigma technique
(Lovbjerg and Krink 2002) and the ε-dominance method (Juang et al. 2011; Robinson and
Rahmat-Samii 2004). Robinson and Rahmat-Samii (2004) proposed a multi-swarm PSO
method that divided the entire swarm into three equal-sized sub-swarms. Each sub-swarm used
a separate mutation coefficient, and this method increased the particles' search capability.
Engineering applications of the PSO are attached in the supplementary file due to page
limitations; interested readers are welcome to refer to it.
10. NOISE AND DYNAMIC ENVIRONMENTS
Brits et al. (2003) proposed using the PSO algorithm to track a dynamic system by regularly
resetting all particles' memories; Deb and Pratap (2002) took a similar approach. Following
that, Geng et al. (2014) presented an adaptive PSO algorithm that could automatically track
changes in a dynamic system, and several environment detection and response strategies were
tested on the parabolic benchmark function. Testing and reinitializing the best particle in the
swarm effectively boosted the ability to track environmental change. Later, Carlisle and
Dozier (2000) re-evaluated a random point in the search space to determine whether the
environment had changed, but this required centralised control, which was incompatible with
the distributed processing architecture of the PSO algorithm. As a result, Clerc (2006)
suggested a Tracking Dynamical PSO (TDPSO) in which the fitness value of the best
historical position decreased over time, eliminating the need for centralised control. To cope
with rapidly changing dynamic environments, Binkley and Hagiwara (2005) introduced a
penalty term into the particles' update formula to keep the particles in an expanding swarm;
this method did not need to detect whether the optimal point had changed.
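The sentry-point detection attributed to Carlisle and Dozier (2000) amounts to re-evaluating a fixed point and comparing fitness values. The sketch below uses a made-up drifting sphere function; the sentry location and tolerance are assumptions:

```python
def environment_changed(f, sentry, last_value, tol=1e-9):
    """Re-evaluate a fixed sentry point; a changed fitness value signals that
    the environment (and hence the optimum) has moved."""
    return abs(f(sentry) - last_value) > tol

# Hypothetical dynamic objective: the optimum drifts from the origin to (1, 1).
f_old = lambda x: sum(xi ** 2 for xi in x)
f_new = lambda x: sum((xi - 1.0) ** 2 for xi in x)

sentry = [0.2, 0.2]
baseline = f_old(sentry)
print(environment_changed(f_old, sentry, baseline))  # False: nothing moved
print(environment_changed(f_new, sentry, baseline))  # True: reset memories
```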
Monson and Seppi (2004) demonstrated that the basic PSO algorithm may perform
efficiently and stably in a noisy environment; in fact, noise could help the PSO algorithm avoid
falling into local optima in many circumstances. Furthermore, Mostaghim and Teich (2003)
investigated the performance of the unified particle swarm method in a dynamic context.
The preceding works study simple dynamic systems: the test functions are simple single-mode
functions and the changes are uniform (that is, fixed-step). Real dynamic systems, in contrast,
are frequently nonlinear and change non-uniformly in complicated multi-modal search spaces.
Kennedy (2005) studied a variety of
dynamic situations using four PSO models (a basic PSO, two randomised PSO algorithms, and
a fine-grained PSO).
11. NUMERICAL EXPERIMENTS
PSO was also employed in a number of numerical investigations. Carlisle and Dozier (2001)
used a modified variant of a probabilistic environment-based particle swarm optimization
approach to solve an aggregate production plan model that used the strategy of simultaneously
minimising the most probable value of the imprecise total costs, maximising the possibility of
obtaining lower total costs, and minimising the possibility of obtaining higher total costs. This
method provides a novel approach to considering the inherent uncertainty of the parameters in
an aggregate production plan problem, and it can be used in ambiguous and indeterminate real-
world production planning and scheduling problems with ill-defined data.
Ganesh et al. (2014) used the PSO to optimise the cutting conditions for the response surface
models they constructed. The PSO software provided the minimal values of the relevant criteria
as well as the ideal cutting conditions. To minimise the squared error between measured and
modelled values in system identification problems, Lu developed an upgraded PSO method with a
combined fitness function. To validate the feasibility of PSO, numerical simulations with five
benchmark functions were employed, and numerical tests were also carried out to evaluate the
performance of the upgraded PSO. Consistent results revealed that the combined fitness
function-based PSO method was viable and efficient for system identification, and that it could
outperform the conventional PSO approach.
Lu et al. (2015a) used eight numerical benchmark functions representing diverse aspects of
typical problems, as well as a real-world data clustering application, to test the Starling PSO.
The experimental results indicated that the Starling PSO outperformed the
original PSO and produced the best solution in several numerical benchmarking functions and
the majority of real-world problems in the clustering studies.
Sierra and Coello (2005) performed numerical experiments using high-dimensional
benchmark objective functions to validate the convergence and effectiveness of the proposed
PSO initialization. Salehian and Subraminiam (2015) used an updated PSO to optimise wireless
sensor network performance in terms of the number of alive nodes. The numerical experiments
in a conventional background validated the performance of the selected modified PSO.
12. CONCLUSIONS AND DISCUSSION
The PSO algorithm has attracted widespread attention in recent years as a relatively new
approach. The following are some of the benefits of the PSO algorithm:
(1) It is quite robust and can be used in a variety of application environments with few
modifications.
(2) It has high distributed capability, because the algorithm is essentially a swarm
evolutionary algorithm, making parallel processing simple.
(3) It converges quickly to the optimization value.
(4) It is simple to combine with other algorithms to boost performance.
There are several unsolved problems in PSO algorithm research, including but not limited to:
(1) Stochastic convergence analysis. Although the PSO method has been demonstrated to be
effective in real-world applications and preliminary theoretical results exist, rigorous
mathematical proofs of algorithm convergence and estimates of the convergence rate are still
lacking.
(2) How to determine the algorithm parameters. PSO parameters are typically established
based on the specific problem, application experience, and repeated experimental testing, so
the process lacks adaptability. How to set the algorithm parameters in a convenient and
effective manner is therefore another pressing issue.
(3) Discrete/binary PSO algorithms. Most of the research surveyed in this paper deals with
continuous variables, and the limited available evidence suggests that the PSO algorithm has
difficulties handling discrete variables.
(4) Designing effective algorithms around the features of specific problems. For specific
applications, the PSO algorithm should be thoroughly investigated and its applicability
broadened and deepened. At the same time, attention should be paid to highly efficient PSO
designs that combine PSO with the structure or rules of the optimised problem, or with neural
networks, fuzzy logic, evolutionary algorithms, simulated annealing, taboo search, biological
intelligence, chaos, and so on, to address PSO's tendency to become trapped in local optima.
(5) Study of PSO algorithm design. More emphasis should be placed on highly efficient PSO
algorithms, with an appropriate core update formula and an effective method for balancing
global exploration and local exploitation.
(6) Broadening PSO applications. Because most current PSO applications are limited to
continuous, single-objective, unconstrained, deterministic optimization problems, discrete,
multi-objective, constrained, non-deterministic, and dynamic optimization problems deserve
more attention, and PSO's application areas should be expanded at the same time.
REFERENCES
[1] Abdelbar AM, Abdelshahid S, Wunsch DCI (2005) Fuzzy PSO: a generalization of particle
swarm optimization. In: Proceedings of 2005 IEEE international joint conference on neural
networks (IJCNN'05), Montreal, Canada, July 31–August 4, pp 1086–1091
[2] Acan A, Gunay A (2005) Enhanced particle swarm optimization through external memory
support. In: Proceedings of 2005 IEEE congress on evolutionary computation, Edinburgh, UK,
Sept 2–4, pp 1875–1882
[3] Afshinmanesh F, Marandi A, Rahimi-Kian A (2005) A novel binary particle swarm
optimization method using artificial immune system. In: Proceedings of the international
conference on computer as a tool (EUROCON 2005), Belgrade, Serbia, Nov 21–24, pp 217–220
[4] Al-kazemi B, Mohan CK (2002) Multi-phase generalization of the particle swarm optimization
algorithm. In: Proceedings of 2002 IEEE congress on evolutionary computation, Honolulu,
Hawaii, August 7–9, pp 489–494
[5] al-Rifaie MM, Blackwell T (2012) Bare bones particle swarms with jumps. Lect Notes
Comput Sci 7461(1):49–60
[6] Angeline PJ (1998a) Evolutionary optimization versus particle swarm optimization:
philosophy and performance difference. In: Evolutionary programming, Lecture notes in
computer science, vol VII. Springer, Berlin
[7] Angeline PJ (1998b) Using selection to improve particle swarm optimization. In: Proceedings
of the 1998 IEEE international conference on evolutionary computation, Anchorage, Alaska,
USA, May 4–9, pp 84–89
[8] Ardizzon G, Cavazzini G, Pavesi G (2015) Adaptive acceleration coefficients for a new search
diversification strategy in particle swarm optimization algorithms. Inf Sci 299:337–378
[9] Banka H, Dara S (2015) A Hamming distance based binary particle swarm optimization
(HDBPSO) algorithm for high dimensional feature selection, classification and validation.
Pattern Recognit Lett 52:94–100
[10] Barisal AK (2013) Dynamic search space squeezing strategy based intelligent algorithm
solutions to economic dispatch with multiple fuels. Electr Power Energy Syst 45:50–59
[11] Bartz-Beielstein T, Parsopoulos KE, Vrahatis MN (2002) Tuning PSO parameters through
sensitivity analysis. Technical Report CI 124/02, SFB 531. University of Dortmund, Dortmund,
Germany, Department of Computer Science
[12] Bartz-Beielstein T, Parsopoulos KE, Vegt MD, Vrahatis MN (2004a) Designing particle
swarm optimization with regression trees. Technical Report CI 173/04, SFB 531. University of
Dortmund, Dortmund, Germany, Department of Computer Science
[13] Bartz-Beielstein T, Parsopoulos KE, Vrahatis MN (2004b) Analysis of particle swarm
optimization using computational statistics. In: Proceedings of the international conference of
numerical analysis and applied mathematics (ICNAAM 2004), Chalkis, Greece, pp 34–37
[14] Beheshti Z, Shamsuddin SM (2015) Non-parametric particle swarm optimization for global
optimization. Appl Soft Comput 28:345–359
[15] Benameur L, Alami J, Imrani A (2006) Adaptively choosing niching parameters in a PSO. In:
Proceedings of the genetic and evolutionary computation conference (GECCO 2006), Seattle,
Washington, USA, July 8–12, pp 3–9
[16] Binkley KJ, Hagiwara M (2005) Particle swarm optimization with area of influence:
increasing the effectiveness of the swarm. In: Proceedings of 2005 IEEE swarm intelligence
symposium (SIS 2005), Pasadena, California, USA, June 8–10, pp 45–52
[17] Blackwell TM (2005) Particle swarms and population diversity. Soft Comput 9(11):793–802
[18] Blackwell TM, Bentley PJ (2002) Don't push me! Collision-avoiding swarms. In:
Proceedings of IEEE congress on evolutionary computation, Honolulu, HI, USA, August 7–9,
pp 1691–1697
[19] Bratton D, Kennedy J (2007) Defining a standard for particle swarm optimization. In:
Proceedings of the 2007 IEEE swarm intelligence symposium (SIS 2007), Honolulu, HI, USA,
April 19–23, pp 120–127
[20] Brits R, Engelbrecht AP, van den Bergh F (2002) Solving systems of unconstrained equations
using particle swarm optimization. In: Proceedings of IEEE international conference on
systems, man, and cybernetics, Hammamet, Tunisia, October 6–9, pp 1–9
[21] Brits R, Engelbrecht AP, van den Bergh F (2003) Scalability of NichePSO. In: Proceedings of
the IEEE swarm intelligence symposium, Indianapolis, Indiana, USA, April 24–26, pp 228–234
[22] Carlisle A, Dozier G (2000) Adapting particle swarm optimization to dynamic environments.
In: Proceedings of the international conference on artificial intelligence, Athens, GA, USA,
July 31–August 5, pp 429–434
[23] Carlisle A, Dozier G (2001) An off-the-shelf PSO. In: Proceedings of the workshop on
particle swarm optimization, Indianapolis, Indiana, USA
[24] Chang WD (2015) A modified particle swarm optimization with multiple subpopulations for
multimodal function optimization problems. Appl Soft Comput 33:170–182
[25] Chatterjee A, Siarry P (2006) Nonlinear inertia weight variation for dynamic adaptation in
particle swarm optimization. Comput Oper Res 33:859–871
[26] Chaturvedi KT, Pandit M, Shrivastava L (2008) Self-organizing hierarchical particle swarm
optimization for non-convex economic dispatch. IEEE Trans Power Syst 23(3):1079–1087
[27] Chen J, Pan F, Cai T (2006a) Acceleration factor harmonious particle swarm optimizer. Int J
Autom Comput 3(1):41–46
[28] Chen K, Li T, Cao T (2006b) Tribe-PSO: a novel global optimization algorithm and its
application in molecular docking. Chemom Intell Lab Syst 82:248–259
[29] Chen W, Zhang J, Lin Y, Chen N, Zhan Z, Chung H, Li Y, Shi Y (2013) Particle swarm
optimization with an aging leader and challengers. IEEE Trans Evolut Comput 17(2):241–258
[30] Chen Y, Feng Y, Li X (2014) A parallel system for adaptive optics based on parallel mutation
PSO algorithm. Optik 125:329–332
[31] Ciuprina G, Ioan D, Munteanu I (2007) Use of intelligent-particle swarm optimization in
electromagnetics. IEEE Trans Magn 38(2):1037–1040
[32] Clerc M (1999) The swarm and the queen: towards a deterministic and adaptive particle
swarm optimization. In: Proceedings of the IEEE congress on evolutionary computation (CEC
1999), Washington, DC, USA, July 6–9, pp 1951–1957
[33] Clerc M (2004) Discrete particle swarm optimization. In: Onwubolu GC (ed) New
optimization techniques in engineering. Springer, Berlin
[34] Clerc M (2006) Stagnation analysis in particle swarm optimisation or what happens when
nothing happens. Technical Report CSM-460, Department of Computer Science, University of
Essex, Essex, UK
[35] Clerc M, Kennedy J (2002) The particle swarm: explosion, stability and convergence in a
multidimensional complex space. IEEE Trans Evolut Comput 6(2):58–73
[36] Coelho LDS, Lee CS (2008) Solving economic load dispatch problems in power systems
using chaotic and Gaussian particle swarm optimization approaches. Electr Power Energy Syst
30:297–307
[37] Coello CAC, Pulido G, Lechuga M (2004) Handling multiple objectives with particle swarm
optimization. IEEE Trans Evolut Comput 8(3):256–279
[38] Deb K, Pratap A (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE
Trans Evolut Comput 6(2):182–197
[39] del Valle Y, Venayagamoorthy GK, Mohagheghi S, Hernandez JC, Harley RG (2008)
Particle swarm optimization: basic concepts, variants and applications in power systems. IEEE
Trans Evolut Comput 12:171–195
[40] Diosan L, Oltean M (2006) Evolving the structure of the particle swarm optimization
algorithms. In: Proceedings of the European conference on evolutionary computation in
combinatorial optimization (EvoCOP 2006), Budapest, Hungary, April 10–12, pp 25–36
[41] Doctor S, Venayagamoorthy GK (2005) Improving the performance of particle swarm
optimization using adaptive critics designs. In: Proceedings of 2005 IEEE swarm intelligence
symposium (SIS 2005), Pasadena, California, USA, June 8–10, pp 393–396
[42] Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In:
Proceedings of the 6th international symposium on micro machine and human science, Nagoya,
Japan, Mar 13–16, pp 39–43
[43] Eberhart RC, Shi Y (2000) Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the IEEE congress on evolutionary computation (CEC 2000), pp 84–88, San Diego, CA, USA, July 16–19, 2000
[44] Eberhart RC, Shi Y (2001) Particle swarm optimization: developments, applications and resources. In: Proceedings of the IEEE congress on evolutionary computation (CEC 2001), pp 81–86, Seoul, Korea, May 27–30
[45] El-Wakeel AS (2014) Design optimization of PM couplings using hybrid particle swarm optimization-simplex method (PSO-SM) algorithm. Electr Power Syst Res 116:29–35
[46] Emara HM, Fattah HAA (2004) Continuous swarm optimization technique with stability analysis. In: Proceedings of the American Control Conference, pp 2811–2817, Boston, MA, USA, June 30–July 2, 2004
[47] Engelbrecht AP, Masiye BS, Pampard G (2005) Niching ability of basic particle swarm optimization algorithms. In: Proceedings of the 2005 IEEE swarm intelligence symposium (SIS 2005), pp 397–400, Pasadena, CA, USA, June 8–10, 2005
[48] Fan H (2002) A modification to particle swarm optimization algorithm. Eng Comput 19(8):970–989
[49] Fan Q, Yan X (2014) Self-adaptive particle swarm optimization with multiple velocity strategies and its application for p-xylene oxidation reaction process optimization. Chemom Intell Lab Syst 139:15–25
[50] Fan SKS, Lin Y, Fan C, Wang Y (2009) Process identification using a new component analysis model and particle swarm optimization. Chemom Intell Lab Syst 99:19–29
[51] Fang W, Sun J, Chen H, Wu X (2016) A decentralized quantum-inspired particle swarm optimization algorithm with cellular structured population. Inf Sci 330:19–48
[52] Fernandez-Martinez JL, Garcia-Gonzalo E (2011) Stochastic stability analysis of the linear continuous and discrete PSO models. IEEE Trans Evolut Comput 15(3):405–423
[53] Fourie PC, Groenwold AA (2002) The particle swarm optimization algorithm in size and shape optimization. Struct Multidiscip Optim 23(4):259–267
[54] Ganesh MR, Krishna R, Manikantan K, Ramachandran S (2014) Entropy based binary particle swarm optimization and classification for ear detection. Eng Appl Artif Intell 27:115–128
[55] Garcia-Gonzalo E, Fernandez-Martinez JL (2014) Convergence and stochastic stability analysis of particle swarm optimization variants with generic parameter distributions. Appl Math Comput 249:286–302
[56] Garcia-Martinez C, Rodriguez FJ (2012) Arbitrary function optimisation with metaheuristics: no free lunch and real-world problems. Soft Comput 16:2115–2133
[57] Geng J, Li M, Dong Z, Liao Y (2014) Port throughput forecasting by MARS-RSVR with chaotic simulated annealing particle swarm optimization algorithm. Neurocomputing 147:239–250
[58] Ghodratnama A, Jolai F, Tavakkoli-Moghaddam R (2015) Solving a new multi-objective multi-route flexible flow line problem by multi-objective particle swarm optimization and NSGA-II. J Manuf Syst 36:189–202
[59] Goldbarg EFG, de Souza GR, Goldbarg MC (2006) Particle swarm for the traveling salesman problem. In: Proceedings of the European conference on evolutionary computation in combinatorial optimization (EvoCOP 2006), pp 99–110, Budapest, Hungary, April 10–12, 2006
[60] Gosciniak I (2015) A new approach to particle swarm optimization algorithm. Expert Syst Appl 42:844–854
[61] Hanafi I, Cabrera FM, Dimane F, Manzanares JT (2016) Application of particle swarm optimization for optimizing the process parameters in turning of PEEK CF30 composites. Procedia Technol 22:195–202
[62] He S, Wu Q, Wen J (2004) A particle swarm optimizer with passive congregation. BioSystems 78:135–147
[63] Hendtlass T (2003) Preserving diversity in particle swarm optimisation. In: Proceedings of the 16th international conference on industrial engineering applications of artificial intelligence and expert systems, pp 31–40, Loughborough, UK, June 23–26, 2003
[64] Ho S, Yang S, Ni G (2006) A particle swarm optimization method with enhanced global search ability for design optimizations of electromagnetic devices. IEEE Trans Magn 42(4):1107–1110
[65] Hu X, Eberhart RC (2002) Adaptive particle swarm optimization: detection and response to dynamic systems. In: Proceedings of the IEEE congress on evolutionary computation, pp 1666–1670, Honolulu, HI, USA, May 10–14, 2002
[66] Huang T, Mohan AS (2005) A hybrid boundary condition for robust particle swarm optimization. Antennas Wirel Propag Lett 4:112–117
[67] Ide A, Yasuda K (2005) A basic study of adaptive particle swarm optimization. Electr Eng Jpn 151(3):41–49
[68] Ivatloo BM (2013) Combined heat and power economic dispatch problem solution using particle swarm optimization with time varying acceleration coefficients. Electr Power Syst Res 95(1):9–18
[69] Jamian JJ, Mustafa MW, Mokhlis H (2015) Optimal multiple distributed generation output through rank evolutionary particle swarm optimization. Neurocomputing 152:190–198
[70] Jia D, Zheng G, Qu B, Khan MK (2011) A hybrid particle swarm optimization algorithm for high-dimensional problems. Comput Ind Eng 61:1117–1122
[71] Jian W, Xue Y, Qian J (2004) An improved particle swarm optimization algorithm with neighborhoods topologies. In: Proceedings of the 2004 international conference on machine learning and cybernetics, pp 2332–2337, Shanghai, China, August 26–29, 2004
[72] Jiang CW, Bompard E (2005) A hybrid method of chaotic particle swarm optimization and linear interior for reactive power optimization. Math Comput Simul 68:57–65
[73] Jie J, Zeng J, Han C (2006) Adaptive particle swarm optimization with feedback control of diversity. In: Proceedings of the 2006 international conference on intelligent computing (ICIC 2006), pp 81–92, Kunming, China, August 16–19, 2006
[74] Jin Y, Cheng H, Yan J (2005) Local optimum embranchment based convergence guarantee particle swarm optimization and its application in transmission network planning. In: Proceedings of the 2005 IEEE/PES transmission and distribution conference and exhibition: Asia and Pacific, pp 1–6, Dalian, China, Aug 15–18, 2005
[75] Juang YT, Tung SL, Chiu HC (2011) Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions. Inf Sci 181:4539–4549
[76] Kadirkamanathan V, Selvarajah K, Fleming PJ (2006) Stability analysis of the particle dynamics in particle swarm optimizer. IEEE Trans Evolut Comput 10(3):245–255
[77] Kennedy J (1997) Minds and cultures: particle swarm implications. In: Proceedings of the AAAI Fall 1997 symposium on communicative action in humans and machines, pp 67–72, Cambridge, MA, USA, Nov 8–10, 1997
[78] Kennedy J (1998) The behavior of particles. In: Proceedings of the 7th annual conference on evolutionary programming, pp 581–589, San Diego, CA, Mar 10–13, 1998
[79] Kennedy J (1999) Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance. In: Proceedings of the IEEE international conference on evolutionary computation, pp 1931–1938, San Diego, CA, Mar 10–13
[80] Kennedy J (2000) Stereotyping: improving particle swarm performance with cluster analysis. In: Proceedings of the IEEE international conference on evolutionary computation, pp 303–308
[81] Kennedy J (2003) Bare bones particle swarms. In: Proceedings of the 2003 IEEE swarm intelligence symposium (SIS'03), pp 80–87, Indianapolis, IN, USA, April 24–26, 2003
[82] Kennedy J (2004) Probability and dynamics in the particle swarm. In: Proceedings of the IEEE international conference on evolutionary computation, pp 340–347, Washington, DC, USA, July 6–9, 2004; Kennedy J (2005) Why does it need velocity? In: Proceedings of the IEEE swarm intelligence symposium (SIS'05), pp 38–44, Pasadena, CA, USA, June 8–10, 2005
[83] Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, pp 1942–1948, Perth, Australia
[84] Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of the IEEE international conference on evolutionary computation, pp 1671–1676, Honolulu, HI, USA, Sept 22–25, 2002
[85] Kennedy J, Mendes R (2003) Neighborhood topologies in fully-informed and best-of-neighborhood particle swarms. In: Proceedings of the 2003 IEEE international workshop on soft computing in industrial applications (SMCia/03), pp 45–50, Binghamton, New York, USA, Oct 12–14, 2003
[86] Krink T, Lovbjerg M (2002) The life cycle model: combining particle swarm optimisation, genetic algorithms and hillclimbers. In: Lecture notes in computer science (LNCS) No. 2439: proceedings of parallel problem solving from nature VII (PPSN 2002), pp 621–630, Granada, Spain, 7–11 Dec 2002
[87] Lee S, Soak S, Oh S, Pedrycz W, Jeon M (2008) Modified binary particle swarm optimization. Prog Nat Sci 18:1161–1166
[88] Lei K, Wang F, Qiu Y (2005) An adaptive inertia weight strategy for particle swarm optimizer. In: Proceedings of the third international conference on mechatronics and information technology, pp 51–55, Chongqing, China, Sept 21–24, 2005
[89] Leontitsis A, Kontogiorgos D, Pagge J (2006) Repel the swarm to the optimum. Appl Math Comput 173(1):265–272
[90] Li X (2004) Better spread and convergence: particle swarm multi-objective optimization using the maximin fitness function. In: Proceedings of the genetic and evolutionary computation conference (GECCO 2004), pp 117–128, Seattle, WA, USA, June 26–30, 2004
[91] Li X (2010) Niching without niching parameters: particle swarm optimization using a ring topology. IEEE Trans Evolut Comput 14(1):150–169
[92] Li X, Dam KH (2003) Comparing particle swarms for tracking extrema in dynamic environments. In: Proceedings of the 2003 congress on evolutionary computation (CEC'03), pp 1772–1779, Canberra, Australia, Dec 8–12, 2003
[93] Li Z, Wang W, Yan Y, Li Z (2011) PS-ABC: a hybrid algorithm based on particle swarm and artificial bee colony for high-dimensional optimization problems. Expert Syst Appl 42:8881–8895
[94] Li C, Yang S, Nguyen TT (2012) A self-learning particle swarm optimizer for global optimization problems. IEEE Trans Syst Man Cybernet Part B Cybernet 42(3):627–646
[95] Li Y, Zhan Z, Lin S, Zhang J, Luo X (2015a) Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems. Inf Sci 293:370–382
[96] Li Z, Nguyen TT, Chen S, Khac Truong T (2015b) A hybrid algorithm based on particle swarm and chemical reaction optimization for multi-object problems. Appl Soft Comput 35:525–540
[97] Liang JJ, Suganthan PN (2005) Dynamic multi-swarm particle swarm optimizer. In: Proceedings of the IEEE swarm intelligence symposium, pp 124–129, Pasadena, CA, USA, June 8–10, 2005
[98] Liang JJ, Qin AK, Suganthan PN, Baskar S (2006) Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evolut Comput 10(3):281–295
[99] Lim W, Isa NAM (2014) Particle swarm optimization with adaptive time-varying topology connectivity. Appl Soft Comput 24:623–642
[100] Lim W, Isa NAM (2015) Adaptive division of labor particle swarm optimization. Expert Syst Appl 42:5887–5903
[101] Lin Q, Li J, Du Z, Chen J, Ming Z (2006a) A novel multi-objective particle swarm optimization with multiple search strategies. Eur J Oper Res 247:732–744
[102] Lin X, Li A, Chen B (2006b) Scheduling optimization of mixed model assembly lines with hybrid particle swarm optimization algorithm. Ind Eng Manag 11(1):53–57
[103] Liu Y, Qin Z, Xu Z (2004) Using relaxation velocity update strategy to improve particle swarm optimization. In: Proceedings of the third international conference on machine learning and cybernetics, pp 2469–2472, Shanghai, China, August 26–29, 2004
[104] Liu F, Zhou J, Fang R (2005) An improved particle swarm optimization and its application in long-term streamflow forecast. In: Proceedings of the 2005 international conference on machine learning and cybernetics, pp 2913–2918, Guangzhou, China, August 18–21, 2005
[105] Liu H, Yang G, Song G (2014) MIMO radar array synthesis using QPSO with normal distributed contraction-expansion factor. Procedia Eng 15:2449–2453
[106] Liu T, Jiao L, Ma W, Ma J, Shang R (2016) A new quantum-behaved particle swarm optimization based on cultural evolution mechanism for multiobjective problems. Knowl Based Syst 101:90–99
[107] Lovbjerg M, Krink T (2002) Extending particle swarm optimizers with self-organized criticality. In: Proceedings of the IEEE congress on evolutionary computation (CEC 2002), pp 1588–1593, Honolulu, HI, USA, May 7–11, 2002
[108] Lovbjerg M, Rasmussen TK, Krink T (2001) Hybrid particle swarm optimizer with breeding and subpopulations. In: Proceedings of the third genetic and evolutionary computation conference (GECCO-2001), pp 469–476, San Francisco-Silicon Valley, CA, USA, July 7–11, 2001
[109] Lu J, Hu H, Bai Y (2015a) Generalized radial basis function neural network based on an improved dynamic particle swarm optimization and AdaBoost algorithm. Neurocomputing 152:305–315
[110] Lu Y, Zeng N, Liu Y, Zhang Z (2015b) A hybrid wavelet neural network and switching particle swarm optimization algorithm for face direction recognition. Neurocomputing 155:219–244
[111] Medasani S, Owechko Y (2005) Possibilistic particle swarms for optimization. In: Applications of neural networks and machine learning in image processing IX, vol 5673, pp 82–89
[112] Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: simpler, maybe better. IEEE Trans Evolut Comput 8(3):204–210
[113] Meng A, Li Z, Yin H, Chen S, Guo Z (2015) Accelerating particle swarm optimization using crisscross search. Inf Sci 329:52–72
[114] Mikki S, Kishk A (2005) Improved particle swarm optimization technique using hard boundary conditions. Microw Opt Technol Lett 46(5):422–426
[115] Mohais AS, Mendes R, Ward C (2005) Neighborhood re-structuring in particle swarm optimization. In: Proceedings of the Australian conference on artificial intelligence, pp 776–785, Sydney, Australia, Dec 5–9, 2005
[116] Monson CK, Seppi KD (2004) The Kalman swarm: a new approach to particle motion in swarm optimization. In: Proceedings of the genetic and evolutionary computation conference (GECCO 2004), pp 140–150, Seattle, WA, USA, June 26–30, 2004
[117] Monson CK, Seppi KD (2005) Bayesian optimization models for particle swarms. In: Proceedings of the genetic and evolutionary computation conference (GECCO 2005), pp 193–200, Washington, DC, USA, June 25–29, 2005
[118] Mostaghim S, Teich J (2003) Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO). In: Proceedings of the 2003 IEEE swarm intelligence symposium (SIS'03), pp 26–33, Indianapolis, Indiana, USA, April 24–26, 2003
[119] Mu B, Wen S, Yuan S, Li H (2015) PPSO: PCA based particle swarm optimization for solving conditional nonlinear optimal perturbation. Comput Geosci 83:65–71
[120] Netjinda N, Achalakul T, Sirinaovakul B (2015) Particle swarm optimization inspired by starling flock behavior. Appl Soft Comput 35:411–422
[121] Ngo TT, Sadollah A, Kim JH (2016) A cooperative particle swarm optimizer with stochastic movements for computationally expensive numerical optimization problems. J Comput Sci 13:68–82
[122] Nickabadi AA, Ebadzadeh MM, Safabakhsh R (2011) A novel particle swarm optimization algorithm with adaptive inertia weight. Appl Soft Comput 11:3658–3670
[123] Niu B, Zhu Y, He X (2005) Multi-population cooperative particle swarm optimization. In: Proceedings of advances in artificial life, the eighth European conference (ECAL 2005), pp 874–883, Canterbury, UK, Sept 5–9, 2005
[124] Noel MM, Jannett TC (2004) Simulation of a new hybrid particle swarm optimization algorithm. In: Proceedings of the thirty-sixth IEEE southeastern symposium on system theory, pp 150–153, Atlanta, Georgia, USA, March 14–16, 2004
[125] Ozcan E, Mohan CK (1998) Analysis of a simple particle swarm optimization system. In: Intelligent engineering systems through artificial neural networks, pp 253–258
[126] Pampara G, Franken N, Engelbrecht AP (2005) Combining particle swarm optimization with angle modulation to solve binary problems. In: Proceedings of the 2005 IEEE congress on evolutionary computation, pp 89–96, Edinburgh, UK, Sept 2–4, 2005
[127] Park JB, Jeong YW, Shin JR, Lee KY (2010) An improved particle swarm optimization for nonconvex economic dispatch problems. IEEE Trans Power Syst 25(1):156–166
[128] Parsopoulos KE, Vrahatis MN (2002a) Initializing the particle swarm optimizer using the nonlinear simplex method. WSEAS Press, Rome
[129] Parsopoulos KE, Vrahatis MN (2002b) Recent approaches to global optimization problems through particle swarm optimization. Nat Comput 1:235–306
[130] Parsopoulos KE, Vrahatis MN (2004) On the computation of all global minimizers through particle swarm optimization. IEEE Trans Evolut Comput 8(3):211–224
[131] Peer E, van den Bergh F, Engelbrecht AP (2003) Using neighborhoods with the guaranteed convergence PSO. In: Proceedings of the IEEE swarm intelligence symposium (SIS 2003), pp 235–242, Indianapolis, IN, USA, April 24–26, 2003
[132] Peng CC, Chen CH (2015) Compensatory neural fuzzy network with symbiotic particle swarm optimization for temperature control. Appl Math Model 39:383–395
[133] Peram T, Veeramachaneni K, Mohan CK (2003) Fitness-distance-ratio based particle swarm optimization. In: Proceedings of the 2003 IEEE swarm intelligence symposium, pp 174–181, Indianapolis, Indiana, USA, April 24–26, 2003
[134] Poli R (2008) Dynamics and stability of the sampling distribution of particle swarm optimisers via moment analysis. J Artif Evol Appl 2008:10–34
[135] Poli R (2009) Mean and variance of the sampling distribution of particle swarm optimizers during stagnation. IEEE Trans Evolut Comput 13(4):712–721
[136] Poli R, Kennedy J, Blackwell T (2007) Particle swarm optimization: an overview. Swarm Intell 1(1):33–57
[137] Qian X, Cao M, Su Z, Chen J (2012) A hybrid particle swarm optimization (PSO)-simplex algorithm for damage identification of delaminated beams. Math Probl Eng 2012:1–11
[138] Qin Z, Yu F, Shi Z (2006) Adaptive inertia weight particle swarm optimization. In: Proceedings of the genetic and evolutionary computation conference, pp 450–459, Zakopane, Poland, June 25–29, 2006
[139] Ratnaweera A, Halgamuge S, Watson H (2004) Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans Evolut Comput 8(3):240–255
[140] Reynolds CW (1987) Flocks, herds, and schools: a distributed behavioral model. Comput Graph 21(4):25–34
[141] Richards M, Ventura D (2004) Choosing a starting configuration for particle swarm optimization. In: Proceedings of the 2004 IEEE international joint conference on neural networks, pp 2309–2312, Budapest, Hungary, July 25–29, 2004
[142] Richer TJ, Blackwell TM (2006) The Levy particle swarm. In: Proceedings of the IEEE congress on evolutionary computation, pp 808–815, Vancouver, BC, Canada, July 16–21, 2006
[143] Riget J, Vesterstrom JS (2002) A diversity-guided particle swarm optimizer: the ARPSO. Technical Report 2002-02, Department of Computer Science, Aarhus University, Aarhus, Denmark
[144] Robinson J, Rahmat-Samii Y (2004) Particle swarm optimization in electromagnetics. IEEE Trans Antennas Propag 52(2):397–407
[145] Robinson J, Sinton S, Rahmat-Samii Y (2002) Particle swarm, genetic algorithm, and their hybrids: optimization of a profiled corrugated horn antenna. In: Proceedings of the 2002 IEEE international symposium on antennas and propagation, pp 314–317, San Antonio, Texas, USA, June 16–21, 2002
[146] Roy R, Ghoshal SP (2008) A novel crazy swarm optimized economic load dispatch for various types of cost functions. Electr Power Energy Syst 30:242–253
[147] Salehian S, Subraminiam SK (2015) Unequal clustering by improved particle swarm optimization in wireless sensor network. Procedia Comput Sci 62:403–409
[148] Samuel GG, Rajan CCA (2015) Hybrid: particle swarm optimization-genetic algorithm and particle swarm optimization-shuffled frog leaping algorithm for long-term generator maintenance scheduling. Electr Power Energy Syst 65:432–442
[149] Schaffer JD (1985) Multiobjective optimization with vector evaluated genetic algorithms. In: Proceedings of the IEEE international conference on genetic algorithms, pp 93–100, Pittsburgh, Pennsylvania, USA
[150] Schoeman IL, Engelbrecht AP (2005) A parallel vector-based particle swarm optimizer. In: Proceedings of the international conference on neural networks and genetic algorithms (ICANNGA 2005), pp 268–271, Portugal
[151] Schutte JF, Groenwold AA (2005) A study of global optimization using particle swarms. J Glob Optim 31:93–108
[152] Selleri S, Mussetta M, Pirinoli P (2006) Some insight over new variations of the particle swarm optimization method. IEEE Antennas Wirel Propag Lett 5(1):235–238
[153] Selvakumar AI, Thanushkodi K (2009) Optimization using civilized swarm: solution to economic dispatch with multiple minima. Electr Power Syst Res 79:8–16
[154] Seo JH, Im CH, Heo CG (2006) Multimodal function optimization based on particle swarm optimization. IEEE Trans Magn 42(4):1095–1098
[155] Sharifi A, Kordestani JK, Mahdaviani M, Meybodi MR (2015) A novel hybrid adaptive collaborative approach based on particle swarm optimization and local search for dynamic optimization problems. Appl Soft Comput 32:432–448
[156] Shelokar PS, Siarry P, Jayaraman VK, Kulkarni BD (2007) Particle swarm and ant colony algorithms hybridized for improved continuous optimization. Appl Math Comput 188:129–142
[157] Shi Y, Eberhart RC (1998) A modified particle swarm optimizer. In: Proceedings of the IEEE international conference on evolutionary computation, pp 69–73, Anchorage, Alaska, USA, May 4–9, 1998
[158] Shi Y, Eberhart RC (2001) Fuzzy adaptive particle swarm optimization. In: Proceedings of the congress on evolutionary computation, pp 101–106, IEEE Service Center, Seoul, Korea, May 27–30, 2001
[159] Shin Y, Kita E (2014) Search performance improvement of particle swarm optimization by second best particle information. Appl Math Comput 246:346–354
[160] Shirkhani R, Jazayeri-Rad H, Hashemi SJ (2014) Modeling of a solid oxide fuel cell power plant using an ensemble of neural networks based on a combination of the adaptive particle swarm optimization and Levenberg-Marquardt algorithms. J Nat Gas Sci Eng 21:1171–1183
[161] Sierra MR, Coello CAC (2005) Improving PSO-based multi-objective optimization using crowding, mutation and epsilon-dominance. Lect Notes Comput Sci 3410:505–519
[162] Soleimani H, Kannan G (2015) A hybrid particle swarm optimization and genetic algorithm for closed-loop supply chain network design in large-scale networks. Appl Math Model 39:3990–4012
[163] Stacey A, Jancic M, Grundy I (2003) Particle swarm optimization with mutation. In: Proceedings of the IEEE congress on evolutionary computation 2003 (CEC 2003), pp 1425–1430, Canberra, Australia, December 8–12, 2003
[164] Suganthan PN (1999) Particle swarm optimizer with neighborhood operator. In: Proceedings of the congress on evolutionary computation, pp 1958–1962, Washington, D.C., USA, July 6–9, 1999
[165] Sun J, Feng B, Xu W (2004) Particle swarm optimization with particles having quantum behavior. In: Proceedings of the congress on evolutionary computation, pp 325–331, Portland, OR, USA, June 19–23, 2004
[166] Tang Y, Wang Z, Fang J (2011) Feedback learning particle swarm optimization. Appl Soft Comput 11:4713–4725
[167] Tanweer MR, Suresh S, Sundararajan N (2016) Dynamic mentoring and self-regulation based particle swarm optimization algorithm for solving complex real-world optimization problems. Inf Sci 326:1–24
[168] Tatsumi K, Ibuki T, Tanino T (2013) A chaotic particle swarm optimization exploiting a virtual quartic objective function based on the personal and global best solutions. Appl Math Comput 219(17):8991–9011
[169] Tatsumi K, Ibuki T, Tanino T (2015) Particle swarm optimization with stochastic selection of perturbation-based chaotic updating system. Appl Math Comput 269:904–929
[170] Ting T, Rao MVC, Loo CK (2003) A new class of operators to accelerate particle swarm optimization. In: Proceedings of the IEEE congress on evolutionary computation 2003 (CEC 2003), pp 2406–2410, Canberra, Australia, Dec 8–12, 2003
[171] Trelea IC (2003) The particle swarm optimization algorithm: convergence analysis and parameter selection. Inf Process Lett 85(6):317–325
[172] Tsafarakis S, Saridakis C, Baltas G, Matsatsinis N (2013) Hybrid particle swarm optimization with mutation for optimizing industrial product lines: an application to a mixed solution space considering both discrete and continuous design variables. Ind Market Manage 42(4):496–506
[173] van den Bergh F (2001) An analysis of particle swarm optimizers. Ph.D. dissertation, University of Pretoria, Pretoria, South Africa
[174] van den Bergh F, Engelbrecht AP (2002) A new locally convergent particle swarm optimizer. In: Proceedings of the IEEE conference on systems, man and cybernetics, pp 96–101, Hammamet, Tunisia, October 2002
[175] van den Bergh F, Engelbrecht AP (2004) A cooperative approach to particle swarm optimization. IEEE Trans Evolut Comput 8(3):225–239
[176] van den Bergh F, Engelbrecht AP (2006) A study of particle swarm optimization particle trajectories. Inf Sci 176:937–971
[177] Vitorino LN, Ribeiro SF, Bastos-Filho CJA (2015) A mechanism based on artificial bee colony to generate diversity in particle swarm optimization. Neurocomputing 148:39–45
[178] Vlachogiannis JG, Lee KY (2009) Economic load dispatch: a comparative study on heuristic optimization techniques with an improved coordinated aggregation-based PSO. IEEE Trans Power Syst 24(2):991–1001
[179] Wang W (2012) Research on particle swarm optimization algorithm and its application. Doctoral dissertation, Southwest Jiaotong University, pp 36–37
[180] Wang Q, Wang Z, Wang S (2005) A modified particle swarm optimizer using dynamic inertia weight. China Mech Eng 16(11):945–948
[181] Wang H, Wu Z, Rahnamayan S, Liu Y, Ventresca M (2011) Enhancing particle swarm optimization using generalized opposition-based learning. Inf Sci 181:4699–4714
[182] Wang H, Sun H, Li C, Rahnamayan S, Pan J (2013) Diversity enhanced particle swarm optimization with neighborhood search. Inf Sci 223:119–135
[183] Wen W, Liu G (2005) Swarm double-tabu search. In: Proceedings of the first international conference on intelligent computing, pp 1231–1234, Changsha, China, August 23–26, 2005
[184] Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evolut Comput 1(1):67–82
[185] Xie X, Zhang W, Yang Z (2002) A dissipative particle swarm optimization. In: Proceedings of the IEEE congress on evolutionary computation, pp 1456–1461, Honolulu, HI, USA, May 2002
[186] Xie X, Zhang W, Bi D (2004) Optimizing semiconductor devices by self-organizing particle swarm. In: Proceedings of the congress on evolutionary computation (CEC 2004), pp 2017–2022, Portland, Oregon, USA, June 19–23, 2004
[187] Yang C, Simon D (2005) A new particle swarm optimization technique. In: Proceedings of the 17th international conference on systems engineering (ICSEng 2005), pp 164–169, Las Vegas, Nevada, USA, Aug 16–18, 2005
[188] Yang Z, Wang F (2006) An analysis of roulette selection in early particle swarm optimizing. In: Proceedings of the 1st international symposium on systems and control in aerospace and astronautics (ISSCAA 2006), pp 960–970, Harbin, China, Jan 19–21, 2006
[189] Yang X, Yuan J, Yuan J, Mao H (2007) A modified particle swarm optimizer with dynamic adaptation. Appl Math Comput 189:1205–1213
[190] Yang C, Gao W, Liu N, Song C (2015) Low-discrepancy sequence initialized particle swarm optimization algorithm with high-order nonlinear time-varying inertia weight. Appl Soft Comput 29:386–394
[191] Yasuda K, Ide A, Iwasaki N (2003) Adaptive particle swarm optimization. In: Proceedings of the IEEE international conference on systems, man and cybernetics, pp 1554–1559, Washington, DC, USA, October 5–8, 2003
[192] Yasuda K, Iwasaki N (2004) Adaptive particle swarm optimization using velocity information of swarm. In: Proceedings of the IEEE international conference on systems, man and cybernetics, pp 3475–3481, Hague, Netherlands, October 10–13, 2004
[193] Yu H, Zhang L, Chen D, Song X, Hu S (2005) Estimation of model parameters using composite particle swarm optimization. J Chem Eng Chin Univ 19(5):675–680
[194] Yuan Y, Ji B, Yuan X, Huang Y (2015) Lockage scheduling of Three Gorges-Gezhouba dams by hybrid of chaotic particle swarm optimization and heuristic-adjusted strategies. Appl Math Comput 270:74–89
[195] Zeng J, Cui Z, Wang L (2005) A differential evolutionary particle swarm optimization with controller. In: Proceedings of the first international conference on intelligent computing (ICIC 2005), pp 467–476, Hefei, China, Aug 23–25, 2005
[196] Zhai S, Jiang T (2015) A new sense-through-foliage target recognition method based on hybrid differential evolution and self-adaptive particle swarm optimization-based support vector machine. Neurocomputing 149:573–584
[197] Zhan Z, Zhang J, Li Y, Chung HH (2009) Adaptive particle swarm optimization. IEEE Trans Syst Man Cybernet Part B Cybernet 39(6):1362–1381
[198] Zhan Z, Zhang J, Li Y, Shi Y (2011) Orthogonal learning particle swarm optimization. IEEE Trans Evolut Comput 15(6):832–847
[199] Zhang L, Yu H, Hu S (2003) A new approach to improve particle swarm optimization. In: Proceedings of the genetic and evolutionary computation conference 2003 (GECCO 2003), pp 134–139, Chicago, IL, USA, July 12–16, 2003
[200] Zhang R, Zhou J, Mo L, Ouyang S, Liao X (2013) Economic environmental dispatch using an enhanced multi-objective cultural algorithm. Electr Power Syst Res 99:18–29
[201] Zhang L, Tang Y, Hua C, Guan X (2015) A new particle swarm optimization algorithm with adaptive inertia weight based on Bayesian techniques. Appl Soft Comput 28:138–149

PSO and Its application in Engineering
 
Particle swarm optimization
Particle swarm optimizationParticle swarm optimization
Particle swarm optimization
 
Firefly algorithm
Firefly algorithmFirefly algorithm
Firefly algorithm
 
First Order Logic resolution
First Order Logic resolutionFirst Order Logic resolution
First Order Logic resolution
 
Application of machine learning in industrial applications
Application of machine learning in industrial applicationsApplication of machine learning in industrial applications
Application of machine learning in industrial applications
 
Expectation Maximization and Gaussian Mixture Models
Expectation Maximization and Gaussian Mixture ModelsExpectation Maximization and Gaussian Mixture Models
Expectation Maximization and Gaussian Mixture Models
 
Crow search algorithm
Crow search algorithmCrow search algorithm
Crow search algorithm
 
Nature-inspired metaheuristic algorithms for optimization and computional int...
Nature-inspired metaheuristic algorithms for optimization and computional int...Nature-inspired metaheuristic algorithms for optimization and computional int...
Nature-inspired metaheuristic algorithms for optimization and computional int...
 
Lossless predictive coding in Digital Image Processing
Lossless predictive coding in Digital Image ProcessingLossless predictive coding in Digital Image Processing
Lossless predictive coding in Digital Image Processing
 
DataEngConf: Feature Extraction: Modern Questions and Challenges at Google
DataEngConf: Feature Extraction: Modern Questions and Challenges at GoogleDataEngConf: Feature Extraction: Modern Questions and Challenges at Google
DataEngConf: Feature Extraction: Modern Questions and Challenges at Google
 
Defuzzification
DefuzzificationDefuzzification
Defuzzification
 
Particle swarm optimization
Particle swarm optimization Particle swarm optimization
Particle swarm optimization
 
Nature-Inspired Optimization Algorithms
Nature-Inspired Optimization Algorithms Nature-Inspired Optimization Algorithms
Nature-Inspired Optimization Algorithms
 
Ant Colony Optimization - ACO
Ant Colony Optimization - ACOAnt Colony Optimization - ACO
Ant Colony Optimization - ACO
 
Inductive analytical approaches to learning
Inductive analytical approaches to learningInductive analytical approaches to learning
Inductive analytical approaches to learning
 
Ant colony optimization (aco)
Ant colony optimization (aco)Ant colony optimization (aco)
Ant colony optimization (aco)
 
backpropagation in neural networks
backpropagation in neural networksbackpropagation in neural networks
backpropagation in neural networks
 
Deep Reinforcement Learning
Deep Reinforcement LearningDeep Reinforcement Learning
Deep Reinforcement Learning
 
Object Detection & Tracking
Object Detection & TrackingObject Detection & Tracking
Object Detection & Tracking
 

Similar to A REVIEW OF PARTICLE SWARM OPTIMIZATION (PSO) ALGORITHM

5 multi robot path planning algorithms
5 multi robot path planning algorithms5 multi robot path planning algorithms
5 multi robot path planning algorithmsprjpublications
 
5 multi robot path planning algorithms
5 multi robot path planning algorithms5 multi robot path planning algorithms
5 multi robot path planning algorithmsprj_publication
 
5 multi robot path planning algorithms
5 multi robot path planning algorithms5 multi robot path planning algorithms
5 multi robot path planning algorithmsprj_publication
 
Markov Chain and Adaptive Parameter Selection on Particle Swarm Optimizer
Markov Chain and Adaptive Parameter Selection on Particle Swarm Optimizer  Markov Chain and Adaptive Parameter Selection on Particle Swarm Optimizer
Markov Chain and Adaptive Parameter Selection on Particle Swarm Optimizer ijsc
 
Evolutionary Computing Techniques for Software Effort Estimation
Evolutionary Computing Techniques for Software Effort EstimationEvolutionary Computing Techniques for Software Effort Estimation
Evolutionary Computing Techniques for Software Effort EstimationAIRCC Publishing Corporation
 
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATION
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATIONEVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATION
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATIONijcsit
 
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATION
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATIONEVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATION
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATIONAIRCC Publishing Corporation
 
Swarm intelligence pso and aco
Swarm intelligence pso and acoSwarm intelligence pso and aco
Swarm intelligence pso and acosatish561
 
MARKOV CHAIN AND ADAPTIVE PARAMETER SELECTION ON PARTICLE SWARM OPTIMIZER
MARKOV CHAIN AND ADAPTIVE PARAMETER SELECTION ON PARTICLE SWARM OPTIMIZERMARKOV CHAIN AND ADAPTIVE PARAMETER SELECTION ON PARTICLE SWARM OPTIMIZER
MARKOV CHAIN AND ADAPTIVE PARAMETER SELECTION ON PARTICLE SWARM OPTIMIZERijsc
 
AN IMPROVED MULTIMODAL PSO METHOD BASED ON ELECTROSTATIC INTERACTION USING NN...
AN IMPROVED MULTIMODAL PSO METHOD BASED ON ELECTROSTATIC INTERACTION USING NN...AN IMPROVED MULTIMODAL PSO METHOD BASED ON ELECTROSTATIC INTERACTION USING NN...
AN IMPROVED MULTIMODAL PSO METHOD BASED ON ELECTROSTATIC INTERACTION USING NN...ijaia
 
Impact of initialization of a modified particle swarm optimization on coopera...
Impact of initialization of a modified particle swarm optimization on coopera...Impact of initialization of a modified particle swarm optimization on coopera...
Impact of initialization of a modified particle swarm optimization on coopera...IJECEIAES
 
Rhizostoma optimization algorithm and its application in different real-world...
Rhizostoma optimization algorithm and its application in different real-world...Rhizostoma optimization algorithm and its application in different real-world...
Rhizostoma optimization algorithm and its application in different real-world...IJECEIAES
 
Improved Particle Swarm Optimization
Improved Particle Swarm OptimizationImproved Particle Swarm Optimization
Improved Particle Swarm Optimizationvane sanchez
 
an improver particle optmizacion plan de negocios
an improver particle optmizacion plan de negociosan improver particle optmizacion plan de negocios
an improver particle optmizacion plan de negociosCarlos Iza
 
Bat Algorithm is Better Than Intermittent Search Strategy
Bat Algorithm is Better Than Intermittent Search StrategyBat Algorithm is Better Than Intermittent Search Strategy
Bat Algorithm is Better Than Intermittent Search StrategyXin-She Yang
 
ANALYSINBG THE MIGRATION PERIOD PARAMETER IN PARALLEL MULTI-SWARM PARTICLE SW...
ANALYSINBG THE MIGRATION PERIOD PARAMETER IN PARALLEL MULTI-SWARM PARTICLE SW...ANALYSINBG THE MIGRATION PERIOD PARAMETER IN PARALLEL MULTI-SWARM PARTICLE SW...
ANALYSINBG THE MIGRATION PERIOD PARAMETER IN PARALLEL MULTI-SWARM PARTICLE SW...ijcsit
 
Particle Swarm Optimization: The Algorithm and Its Applications
Particle Swarm Optimization: The Algorithm and Its ApplicationsParticle Swarm Optimization: The Algorithm and Its Applications
Particle Swarm Optimization: The Algorithm and Its Applicationsadil raja
 
IRJET- PSO based PID Controller for Bidirectional Inductive Power Transfer Sy...
IRJET- PSO based PID Controller for Bidirectional Inductive Power Transfer Sy...IRJET- PSO based PID Controller for Bidirectional Inductive Power Transfer Sy...
IRJET- PSO based PID Controller for Bidirectional Inductive Power Transfer Sy...IRJET Journal
 
An Hybrid Learning Approach using Particle Intelligence Dynamics and Bacteri...
An Hybrid Learning Approach using Particle Intelligence  Dynamics and Bacteri...An Hybrid Learning Approach using Particle Intelligence  Dynamics and Bacteri...
An Hybrid Learning Approach using Particle Intelligence Dynamics and Bacteri...IJMER
 

Similar to A REVIEW OF PARTICLE SWARM OPTIMIZATION (PSO) ALGORITHM (20)

5 multi robot path planning algorithms
5 multi robot path planning algorithms5 multi robot path planning algorithms
5 multi robot path planning algorithms
 
5 multi robot path planning algorithms
5 multi robot path planning algorithms5 multi robot path planning algorithms
5 multi robot path planning algorithms
 
5 multi robot path planning algorithms
5 multi robot path planning algorithms5 multi robot path planning algorithms
5 multi robot path planning algorithms
 
Markov Chain and Adaptive Parameter Selection on Particle Swarm Optimizer
Markov Chain and Adaptive Parameter Selection on Particle Swarm Optimizer  Markov Chain and Adaptive Parameter Selection on Particle Swarm Optimizer
Markov Chain and Adaptive Parameter Selection on Particle Swarm Optimizer
 
Evolutionary Computing Techniques for Software Effort Estimation
Evolutionary Computing Techniques for Software Effort EstimationEvolutionary Computing Techniques for Software Effort Estimation
Evolutionary Computing Techniques for Software Effort Estimation
 
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATION
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATIONEVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATION
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATION
 
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATION
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATIONEVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATION
EVOLUTIONARY COMPUTING TECHNIQUES FOR SOFTWARE EFFORT ESTIMATION
 
Swarm intelligence pso and aco
Swarm intelligence pso and acoSwarm intelligence pso and aco
Swarm intelligence pso and aco
 
MARKOV CHAIN AND ADAPTIVE PARAMETER SELECTION ON PARTICLE SWARM OPTIMIZER
MARKOV CHAIN AND ADAPTIVE PARAMETER SELECTION ON PARTICLE SWARM OPTIMIZERMARKOV CHAIN AND ADAPTIVE PARAMETER SELECTION ON PARTICLE SWARM OPTIMIZER
MARKOV CHAIN AND ADAPTIVE PARAMETER SELECTION ON PARTICLE SWARM OPTIMIZER
 
AN IMPROVED MULTIMODAL PSO METHOD BASED ON ELECTROSTATIC INTERACTION USING NN...
AN IMPROVED MULTIMODAL PSO METHOD BASED ON ELECTROSTATIC INTERACTION USING NN...AN IMPROVED MULTIMODAL PSO METHOD BASED ON ELECTROSTATIC INTERACTION USING NN...
AN IMPROVED MULTIMODAL PSO METHOD BASED ON ELECTROSTATIC INTERACTION USING NN...
 
Impact of initialization of a modified particle swarm optimization on coopera...
Impact of initialization of a modified particle swarm optimization on coopera...Impact of initialization of a modified particle swarm optimization on coopera...
Impact of initialization of a modified particle swarm optimization on coopera...
 
Rhizostoma optimization algorithm and its application in different real-world...
Rhizostoma optimization algorithm and its application in different real-world...Rhizostoma optimization algorithm and its application in different real-world...
Rhizostoma optimization algorithm and its application in different real-world...
 
Improved Particle Swarm Optimization
Improved Particle Swarm OptimizationImproved Particle Swarm Optimization
Improved Particle Swarm Optimization
 
an improver particle optmizacion plan de negocios
an improver particle optmizacion plan de negociosan improver particle optmizacion plan de negocios
an improver particle optmizacion plan de negocios
 
SI and PSO --Machine Learning
SI and PSO --Machine Learning SI and PSO --Machine Learning
SI and PSO --Machine Learning
 
Bat Algorithm is Better Than Intermittent Search Strategy
Bat Algorithm is Better Than Intermittent Search StrategyBat Algorithm is Better Than Intermittent Search Strategy
Bat Algorithm is Better Than Intermittent Search Strategy
 
ANALYSINBG THE MIGRATION PERIOD PARAMETER IN PARALLEL MULTI-SWARM PARTICLE SW...
ANALYSINBG THE MIGRATION PERIOD PARAMETER IN PARALLEL MULTI-SWARM PARTICLE SW...ANALYSINBG THE MIGRATION PERIOD PARAMETER IN PARALLEL MULTI-SWARM PARTICLE SW...
ANALYSINBG THE MIGRATION PERIOD PARAMETER IN PARALLEL MULTI-SWARM PARTICLE SW...
 
Particle Swarm Optimization: The Algorithm and Its Applications
Particle Swarm Optimization: The Algorithm and Its ApplicationsParticle Swarm Optimization: The Algorithm and Its Applications
Particle Swarm Optimization: The Algorithm and Its Applications
 
IRJET- PSO based PID Controller for Bidirectional Inductive Power Transfer Sy...
IRJET- PSO based PID Controller for Bidirectional Inductive Power Transfer Sy...IRJET- PSO based PID Controller for Bidirectional Inductive Power Transfer Sy...
IRJET- PSO based PID Controller for Bidirectional Inductive Power Transfer Sy...
 
An Hybrid Learning Approach using Particle Intelligence Dynamics and Bacteri...
An Hybrid Learning Approach using Particle Intelligence  Dynamics and Bacteri...An Hybrid Learning Approach using Particle Intelligence  Dynamics and Bacteri...
An Hybrid Learning Approach using Particle Intelligence Dynamics and Bacteri...
 

More from IAEME Publication

IAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME Publication
 
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...IAEME Publication
 
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSA STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSIAEME Publication
 
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSBROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSIAEME Publication
 
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSDETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSIAEME Publication
 
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSIAEME Publication
 
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOVOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOIAEME Publication
 
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IAEME Publication
 
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYVISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYIAEME Publication
 
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...IAEME Publication
 
GANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEGANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEIAEME Publication
 
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...IAEME Publication
 
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...IAEME Publication
 
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...IAEME Publication
 
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...IAEME Publication
 
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...IAEME Publication
 
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...IAEME Publication
 
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...IAEME Publication
 
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...IAEME Publication
 
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTA MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTIAEME Publication
 

More from IAEME Publication (20)

IAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdf
 
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
 
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSA STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
 
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSBROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
 
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSDETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
 
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
 
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOVOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
 
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
 
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYVISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
 
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
 
GANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEGANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICE
 
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
 
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
 
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
 
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
 
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
 
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
 
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
 
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
 
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTA MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
 

Recently uploaded

CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfAsst.prof M.Gokilavani
 
An introduction to Semiconductor and its types.pptx
An introduction to Semiconductor and its types.pptxAn introduction to Semiconductor and its types.pptx
An introduction to Semiconductor and its types.pptxPurva Nikam
 
Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...121011101441
 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girlsssuser7cb4ff
 
Correctly Loading Incremental Data at Scale
Correctly Loading Incremental Data at ScaleCorrectly Loading Incremental Data at Scale
Correctly Loading Incremental Data at ScaleAlluxio, Inc.
 
Work Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvWork Experience-Dalton Park.pptxfvvvvvvv
Work Experience-Dalton Park.pptxfvvvvvvvLewisJB
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)Dr SOUNDIRARAJ N
 
Concrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptxConcrete Mix Design - IS 10262-2019 - .pptx
Concrete Mix Design - IS 10262-2019 - .pptxKartikeyaDwivedi3
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxbritheesh05
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx959SahilShah
 
A REVIEW OF PARTICLE SWARM OPTIMIZATION (PSO) ALGORITHM

1. INTRODUCTION

The particle swarm optimization (PSO) algorithm is a swarm-based stochastic optimization technique proposed by Eberhart and Kennedy (1995) and Kennedy and Eberhart (1995). The PSO algorithm models the social behaviour of animals such as insects, herds, birds, and fish. These swarms cooperate to find food, and each member of the swarm continually changes its search pattern based on its own learning experience and those of the other members. The main design concept of the PSO algorithm is closely tied to two strands of research: like evolutionary algorithms, PSO uses a swarm mode, which allows it to simultaneously search a vast region of the solution space of the optimized objective function.
Millonas proposed five basic principles for developing, by computer, swarm artificial life systems with cooperative behaviour, in the course of studying the behaviour of social animals with artificial life theory (van den Bergh 2001):
(1) Proximity: the swarm should be able to perform simple space and time computations.
(2) Quality: the swarm should be able to sense and respond to quality changes in the environment.
(3) Diverse response: the swarm should not restrict its way of obtaining resources to a narrow range.
(4) Stability: the swarm should not change its behaviour mode with every environmental change.
(5) Adaptability: the swarm should change its behaviour mode when doing so is justified.
These five principles encompass the main characteristics of artificial life systems and have served as guiding principles in developing swarm artificial life systems. Particles in PSO can update their positions and velocities as the environment changes, thereby meeting the requirements of proximity and quality. Furthermore, the swarm in PSO does not restrict its movement but continuously searches for the optimal solution in the feasible solution space. Particles in PSO can keep a stable movement in the search space while changing their movement mode to adapt to environmental changes. As a result, particle swarm systems satisfy the five principles listed above.

2. ORIGIN AND BACKGROUND

To illustrate the origin and development of the PSO algorithm, we first describe an early simple model, the Boid (Bird-oid) model (Reynolds 1987). This model was designed to simulate the behaviour of birds and is also a direct source of the PSO algorithm. In the most basic version, each bird is represented by a point in the Cartesian coordinate system and is randomly assigned an initial velocity and position.
The program then runs according to the "nearest proximity velocity match rule", so that each individual takes the speed of its nearest neighbour. As the iterations continue in the same manner, all points quickly end up with the same velocity. Because this model is too simple and far from realistic, a random variable is added to the speed term: besides satisfying "the nearest proximity velocity match", each speed is perturbed by a random variable at every iteration, bringing the simulation closer to the real scenario. Heppner designed a "cornfield model" to simulate the foraging behaviour of a flock of birds (Clerc and Kennedy 2002). Assume there was a "cornfield" on the plane, i.e., food was randomly scattered on the plane at the start; the birds moved according to the following rules in order to find the food. Assume that the swarm size is N, each particle's position vector in D-dimensional space is Xi = (xi1, xi2, ..., xid, ..., xiD), the velocity vector is Vi = (vi1, vi2, ..., vid, ..., viD), the individual's optimal position (i.e., the best position the particle has experienced) is Pi = (pi1, pi2, ..., pid, ..., piD), and the swarm's optimal position (i.e., the best position any particle in the swarm has experienced) is Pg = (pg1, pg2, ..., pgd, ..., pgD). Taking the minimization problem as an example, without loss of generality, the velocity and position of any particle in each generation are updated as

vid = ω vid + c1 r1 (pid − xid) + c2 r2 (pgd − xid),
xid = xid + vid,

where r1 and r2 are random numbers uniformly distributed in [0, 1]. From a sociological standpoint, the first part of the velocity update formula is the influence of the particle's previous velocity: the particle is confident in its current moving state and performs inertial movement according to its own velocity, hence the parameter ω is known as the inertia weight. The second part, known as the "cognitive" term, is determined by the distance between the particle's current position and its own best position. It refers to the particle's own
thinking, i.e., the particle's movement resulting from its own experience. Accordingly, the parameter c1 is known as the cognitive learning factor (also called the cognitive acceleration factor). The third part, dubbed "social", is based on the distance between the particle's current position and the global (or local) best position in the swarm. It refers to the sharing of knowledge and cooperation among particles, specifically particle movement resulting from the experience of the other particles in the swarm. Because it models the particle following the good particles through cognition, the parameter c2 is known as the social learning factor (also called the social acceleration factor). The PSO method has received much attention since it was proposed because of its intuitive background, simple and easy implementation, and wide applicability to many kinds of functions. The theory and application of the PSO algorithm have advanced significantly over the last two decades: researchers have gained a preliminary understanding of the theory, and the algorithm has been applied in many domains. PSO is a parallel and stochastic optimization algorithm. Its benefits are summarized as follows: it does not require the optimized function to be differentiable, derivable, or continuous; its convergence rate is fast; and the algorithm is simple and straightforward to implement through programming. Unfortunately, it has some drawbacks (Wang 2012): (1) For functions with several local extremes, it is likely to fall into a local extreme and fail to produce the correct result. This phenomenon is caused by two factors: the properties of the optimized function, and the particles' diversity vanishing too quickly, causing premature convergence. These two factors are frequently tightly linked.
(2) The PSO algorithm cannot always produce satisfactory results, owing to the lack of cooperation with good search methods. The reason is that the PSO algorithm does not make adequate use of the information collected during the computation: in each iteration it only uses the information of the swarm optimum and the individual optima. (3) Although the PSO algorithm allows for global search, it cannot guarantee convergence to the global optimum. (4) The PSO method is a meta-heuristic bionic optimization technique with no rigorous theoretical foundation. It merely simplifies and simulates the search phenomenon of some swarms, but it neither explains in principle why the algorithm is effective nor determines its range of applicability. As a result, the PSO technique is generally appropriate for a class of optimization problems that are high dimensional and do not require particularly precise results. There are now numerous studies on the PSO algorithm, which can be classified into the eight categories listed below:
(1) Conduct a theoretical analysis of the PSO algorithm in order to understand its working mechanism.
(2) Modify its structure in order to improve performance.
(3) Investigate the effect of various parameter configurations on the PSO algorithm.
(4) Investigate the impact of different topological structures on the PSO algorithm.
(5) Investigate the parallel PSO algorithm.
(6) Investigate the discrete PSO algorithm.
(7) Investigate multi-objective optimization with the PSO algorithm.
(8) Apply the PSO algorithm to a variety of engineering domains.
The remainder of this paper outlines current research on PSO algorithms in the eight categories listed above. Since there are too many related studies to review them all, we select a few representative ones.
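To make the update rules of Sect. 2 concrete, the following is a minimal, self-contained sketch of the canonical PSO (inertia weight plus cognitive and social terms) minimizing the sphere function. The parameter values (ω = 0.729, c1 = c2 = 1.49445) are the commonly used constriction-equivalent settings; all names and bounds are illustrative, not taken from any cited implementation.

```python
import random

def pso_sphere(dim=2, n_particles=20, iters=200,
               w=0.729, c1=1.49445, c2=1.49445, seed=1):
    """Minimal canonical PSO minimizing the sphere function sum(x_d^2)."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    # random initial positions and velocities
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]                # global best position/value
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive + social terms, as in Sect. 2
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:                   # update personal best
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:                  # update global best
                    gbest, G = fx, X[i][:]
    return gbest, G
```

Running `pso_sphere()` drives the best objective value close to zero, illustrating the fast convergence (and the simplicity) claimed above.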
3. THEORETICAL ANALYSIS

Current theoretical study of the PSO algorithm mostly focuses on the mechanism of the PSO method, i.e., how the particles interact with each other, and why the algorithm is effective for many optimization problems yet unsatisfactory for others. Research on this problem can be classified into three categories: the moving trajectory of a single particle, the convergence problem, and the evolution and distribution of the whole particle system over time. Kennedy (1998) performed the first analysis of simplified particle behaviour by simulating different particle trajectories under a range of design choices. Ozcan and Mohan (1998) published the first theoretical study of the simplified PSO algorithm, showing that in a simplified one-dimensional PSO system a particle travelled along a path defined by a sinusoidal wave, with randomly determined amplitude and frequency. Their analysis, however, was limited to the simple PSO model without the inertia weight and assumed that Pid and Pgd remained constant. In fact, Pid and Pgd changed frequently, so the trajectory was composed of sine waves with many different amplitudes and frequencies and appeared disordered overall. This considerably weakened the impact of their conclusions. Clerc and Kennedy (2002) performed the first formal analysis of the PSO algorithm's stability, but this method treated the random coefficients as constants, reducing the standard stochastic PSO to a deterministic dynamic system. The resulting system was a second-order linear dynamic system whose stability was determined by the system poles or the eigenvalues of the state matrix. van den Bergh (2001) performed a similar analysis on the deterministic version of the PSO method and determined the regions in parameter space where stability could be guaranteed.
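The deterministic analyses just described can be checked numerically. In the sketch below (an illustration, not code from the cited papers) the random coefficients are replaced by their mean φ; the 2×2 iteration matrix follows from v(t+1) = ω v(t) + φ(p − x(t)) and x(t+1) = x(t) + v(t+1) with p = 0, and stability of the mean trajectory corresponds to a spectral radius below 1 (equivalently, |ω| < 1 and 0 < φ < 2(1 + ω)).

```python
import cmath

def spectral_radius(w, phi):
    """Spectral radius of the deterministic PSO iteration matrix
    [[1 - phi, w], [-phi, w]] (random coefficients replaced by their
    mean phi, attractor placed at the origin)."""
    tr = (1.0 - phi) + w          # trace of the 2x2 matrix
    det = w                       # determinant of the 2x2 matrix
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    l1, l2 = (tr + disc) / 2.0, (tr - disc) / 2.0
    return max(abs(l1), abs(l2))

def mean_trajectory_stable(w, phi):
    # Stable iff both eigenvalues lie strictly inside the unit circle,
    # which matches the known region |w| < 1 and 0 < phi < 2(1 + w).
    return spectral_radius(w, phi) < 1.0
```

For example, (ω, φ) = (0.7, 1.5) lies inside the region 0 < φ < 2(1 + ω) = 3.4 and is stable, while (0.7, 3.6) lies outside and diverges.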
The literature also addressed convergence and parameter selection (Trelea 2003; Yasuda et al. 2003). However, the authors acknowledged that they did not account for the stochastic nature of the PSO method, so their results were of limited validity. Emara and Fattah (2004) performed a similar investigation on the continuous version of the PSO algorithm. The PSO method, as originally proposed, employs random numbers c1 and c2 drawn from constant uniform distributions. How would the first- and second-order stability regions of the particle trajectories change if the inertia weight were also a random variable, and/or c1 and c2 followed other statistical distributions instead of the uniform distribution? First-order stability analysis (Clerc and Kennedy 2002; Trelea 2003; Bergh and Engelbrecht 2006) sought to determine whether the stability of the mean trajectories depended on the parameters (ω, φ), where φ = (ag + al)/2 and c1 and c2 were uniformly distributed in the intervals [0, ag] and [0, al], respectively. Higher-order moments were used in stochastic stability analysis, which proved highly valuable for understanding particle swarm dynamics and clarifying PSO convergence properties (Fernandez-Martinez and Garcia-Gonzalo 2011; Poli 2009). Kennedy (2003) presented the Bare Bones PSO (BBPSO) as a model of PSO dynamics, in which the particle position update follows a Gaussian distribution. Although Kennedy's initial formulation is not competitive with standard PSO, adding a component-wise jumping mechanism and adjusting the standard deviation can produce a competitive optimization technique. Accordingly, al-Rifaie and Blackwell (2012) suggested a Bare Bones with Jumps (BBJ) algorithm with a modified search-spread component and a lower jump probability. It used the difference between the neighbourhood best and the current position (in the local neighbourhood) rather than the difference between the particle's personal best and the neighbourhood best (in the global neighbourhood).
Three performance criteria (accuracy, efficiency, and reliability) were used to compare BBJ against the standard Clerc–Kennedy PSO and other BBJ variants. Using these measures, it was shown that, on benchmarks with successful convergence, the accuracy of BBJ was significantly better than that of the other algorithms. Furthermore, BBJ was empirically demonstrated to be the most efficient and reliable algorithm in both local and global neighbourhoods.
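As a concrete illustration of Kennedy's Bare Bones formulation discussed above: each coordinate is resampled from a Gaussian whose mean is the midpoint of the personal and global bests and whose standard deviation is their distance. This is a simplified sketch of that scheme, without the jumping mechanism of BBJ; all names and settings are illustrative.

```python
import random

def bbpso(f, dim, n=20, iters=300, seed=2):
    """Bare Bones PSO sketch: no velocity term at all -- each coordinate
    is drawn from N((pbest + gbest)/2, |pbest - gbest|)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    P = [x[:] for x in X]                   # personal bests
    pbest = [f(x) for x in X]
    gi = min(range(n), key=lambda i: pbest[i])
    G, gbest = P[gi][:], pbest[gi]          # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                mu = 0.5 * (P[i][d] + G[d])         # midpoint of the bests
                sigma = abs(P[i][d] - G[d])         # spread shrinks on convergence
                X[i][d] = rng.gauss(mu, sigma)
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return gbest
```

As the personal and global bests approach each other, the sampling spread collapses and the swarm converges, which is the behaviour the theoretical analyses above set out to explain.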
Poli (2008) was also interested in the social-only variant of PSO (al = 0) and the fully informed particle swarm (Mendes et al. 2004). Fernandez-Martinez and Garcia-Gonzalo (2014) provided the convergence and stochastic stability analysis of a family of PSO variants that differ from the classical PSO in the statistical distribution of the three PSO parameters: inertia weight and local and global acceleration factors. They presented an analytical expression for the upper limit of the second-order stability regions of the particle trajectories (the so-called USL curves), which is available for most PSO algorithms. Numerical experiments showed that adjusting the PSO parameters close to the USL curve yielded the best algorithm performance. Kadirkamanathan et al. (2006) used Lyapunov stability analysis and the concept of passive systems to investigate the stability of particle dynamics. This analysis did not assume that all parameters were non-random, and it obtained the necessary stability conditions. It was based on the random particle dynamics represented as a nonlinear feedback control system, whose feedback loop contained a deterministic linear part and a nonlinear part as well as a time-varying gain. Although it considered the influence of the random components, its stability analysis was conducted with respect to the optimal position; thus, the conclusions cannot be directly transferred to non-optimal particles. Even when the original PSO method converged, it could only converge to the best position the swarm had searched, and it could not guarantee that the attained solution was the global optimum, or even a local optimum. van den Bergh and Engelbrecht (2002) suggested a PSO method to guarantee algorithm convergence.
It used a new update equation for the global best particle, making it perform a random search around the global best position, while the other particles were updated with their original equations. This approach could guarantee the PSO algorithm's convergence to a local optimal solution at the cost of convergence speed, but its performance on multi-modal problems was inferior to the canonical PSO algorithm. Lack of population diversity was identified early (Kennedy and Eberhart 1995) as a major factor in the swarm's premature convergence toward a local optimum; thus, increasing diversity was regarded as a useful way of escaping from local optima (Kennedy and Eberhart 1995; Zhan et al. 2009). However, increasing swarm diversity is detrimental to fast convergence toward the optimal solution. This trade-off is well known, since Wolpert and Macready (1997) demonstrated that no algorithm can outperform all others on every type of problem. As a result, research efforts to improve the performance of an optimization algorithm should not aim at a general function optimizer (Mendes et al. 2004; Wolpert and Macready 1997), but rather at a general problem-solver capable of performing well on a wide range of well-balanced practical benchmark problems (Garcia-Martinez and Rodriguez 2012). A number of PSO variants have been proposed to avoid premature convergence to a local optimum while retaining the fast-convergence feature of the original PSO formulation (Valle et al. 2008). These methods include fine-tuning the PSO parameters to control particle velocity updating (Nickabadi et al. 2011), using different local PSO formulations that consider the best solution within a local topological particle neighbourhood rather than the entire swarm (Kennedy and Mendes 2002, 2003; Mendes et al. 2004), and integrating PSO with other heuristic algorithms (Chen et al. 2013). For instance, comprehensive learning PSO (Liang et al.
2006) used a novel learning strategy to increase swarm diversity and avoid premature convergence in solving multi-modal problems. ALC-PSO (Chen et al. 2013) assigned the swarm leader an increasing age and a lifespan in order to escape from local optima and avoid premature convergence. Tanweer et al. (2016) used a self-regulating inertia weight and self-perception of the global search direction to achieve faster convergence and better results. Blackwell (2005) theoretically analysed and empirically verified the speed characteristics associated with diversity loss in the PSO method for spherically symmetric local neighbourhood functions. Kennedy (2005) conducted a comprehensive study of how velocity influences the PSO algorithm,
which was helpful in understanding the impact of velocity on PSO performance. Clerc (2006) thoroughly analysed the PSO iteration process at the stagnation stage, as well as the roles of each random coefficient, and finally provided the probability density functions of each random coefficient.

4. ALGORITHM STRUCTURE

There are a great many enhancement approaches for the PSO algorithm structure, which can be classified into the eight main sub-sections below.

4.1. Adopting Multi-Sub-Populations

Suganthan (1999) developed the concept of sub-populations in the genetic algorithm, and a reproduction operator was introduced into the PSO algorithm in 2001. Liang and Suganthan (2005) presented a dynamic multi-swarm PSO in which the swarm was divided into several sub-swarms that were frequently regrouped to exchange information. To optimise neural fuzzy networks, Peng and Chen (2015) introduced a symbiotic particle swarm optimization (SPSO) algorithm. The SPSO algorithm employed a multi-swarm strategy in which each particle represented a single fuzzy rule, and each particle in each swarm evolved independently to avoid falling into a local optimum. To handle multi-modal function optimization problems, Chang (2015) presented a modified PSO technique that divided the original swarm into several sub-swarms based on the order of the particles. The best particle in each sub-swarm was recorded and then used, in place of the global best particle of the whole population, in the velocity-update calculation, and the improved velocity formula was used to update all particles in each sub-swarm. Tanweer et al. (2016) also proposed a dynamic mentoring and self-regulation-based particle swarm optimization (DMeSR-PSO) method that classified particles into mentor, mentee, and independent-learner groups based on fitness differences and Euclidean distances from the best particle.
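A simplified sketch in the spirit of the dynamic multi-swarm idea above (Liang and Suganthan 2005): the swarm is split into sub-swarms, each particle follows its own sub-swarm best, and membership is randomly reshuffled every few iterations to exchange information. Group sizes, the regrouping period, and the parameter values are illustrative assumptions, not the published algorithm.

```python
import random

def dms_pso(f, dim=2, n=24, k=4, iters=200, regroup=10,
            w=0.729, c=1.49445, seed=3):
    """Dynamic multi-swarm PSO sketch: k sub-swarms, periodic regrouping."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                   # personal bests
    pbest = [f(x) for x in X]
    order = list(range(n))
    for t in range(iters):
        if t % regroup == 0:
            rng.shuffle(order)              # reassign particles to sub-swarms
        groups = [order[g::k] for g in range(k)]
        for members in groups:
            li = min(members, key=lambda i: pbest[i])
            L = P[li]                       # sub-swarm best replaces gbest
            for i in members:
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    V[i][d] = (w * V[i][d]
                               + c * r1 * (P[i][d] - X[i][d])
                               + c * r2 * (L[d] - X[i][d]))
                    X[i][d] += V[i][d]
                fx = f(X[i])
                if fx < pbest[i]:
                    pbest[i], P[i] = fx, X[i][:]
    return min(pbest)
```

Because each sub-swarm follows only its local best between regroupings, premature collapse onto a single attractor is delayed, which is the motivation for the multi-sub-population designs reviewed in this section.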
For high-dimensional optimization problems the PSO approach requires too many particles, resulting in high computational complexity, so obtaining a satisfactory solution is difficult. The cooperative particle swarm algorithm (CPSO-H) (Bergh and Engelbrecht 2004) was therefore proposed, which divided the input vector into several sub-vectors and employed a particle swarm to optimise each sub-vector. Although the CPSO-H algorithm used one-dimensional swarms to search each dimension separately, once the search results were combined by a global swarm its performance on multi-modal problems was demonstrated. Furthermore, Niu et al. (2005) proposed a multi-population cooperative PSO method that introduced a master–slave sub-population mode into the PSO algorithm. Similarly, Seo et al. (2006) proposed a multi-grouped PSO that used N groups of particles to simultaneously search the N peaks of multi-modal problems. Selleri et al. (2006) used several independent sub-populations and added new terms to the particle velocity update formula, moving the particles toward the historical best position of their sub-population or away from the centre of gravity of the other sub-populations.

4.1.1. Improving the Selection Strategy for Particle Learning Objects

Al-kazemi and Mohan (2002) introduced a multi-phase PSO algorithm in which particles were grouped in different phases according to temporary search objectives, and these temporary targets allowed the particles to move toward or away from their own or the global best position. Ting
et al. (2003) modified every particle's pBest so that every dimension learned from randomly chosen other dimensions; if the new pBest was better, it replaced the original pBest. Yang and Wang (2006) devised a roulette selection strategy to determine the gBest in the PSO algorithm, so that in the early stages of evolution every individual had a chance to steer the search direction, avoiding premature convergence. Zhan et al. (2011) proposed an orthogonal learning PSO that used an orthogonal learning strategy to construct efficient exemplars. Abdelbar et al. (2005) developed a fuzzy measure in which several particles with the highest fitness values in each neighbourhood could influence the other particles. In addition to these methods, the particle positions in the Bare Bones PSO algorithm (Kennedy 2003) were updated using a Gaussian distribution. Since many foragers and wandering animals follow a Levy distribution of step lengths, Richer and Blackwell (2006) substituted random sampling from a Levy distribution for the particle dynamics within PSO. A variety of benchmark problems were used to evaluate its performance; the resulting Levy PSO performed as well as, or better than, a standard PSO or equivalent Gaussian models. Furthermore, Hendtlass (2003) gave each particle a memory in the speed update equation, He et al. (2004) included a passive congregation term, and Zeng et al. (2005) added an acceleration term to the PSO algorithm, transforming it from a second-order stochastic system into a third-order stochastic system, in order to enhance the PSO algorithm's global search capability.

4.1.2. Modifying Velocity Update Strategy

Although PSO performance has improved over the years, how to choose an appropriate velocity update strategy and parameters remains an important research topic.
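Two velocity-control heuristics recur throughout the strategies surveyed in this sub-section: a time-varying inertia weight and velocity clamping. A minimal sketch follows; the 0.9 → 0.4 linear schedule and the fixed Vmax bound are commonly quoted settings and are illustrative here, not prescriptions from any single cited paper.

```python
def inertia_and_clamp(t, t_max, v, w_start=0.9, w_end=0.4, v_max=4.0):
    """Two common velocity-control heuristics:
    - linearly decreasing inertia weight: large w early (exploration),
      small w late (exploitation);
    - symmetric velocity clamping to keep |v| <= v_max."""
    w = w_start - (w_start - w_end) * t / t_max
    v_clamped = max(-v_max, min(v_max, v))
    return w, v_clamped
```

Inside a PSO loop, `w` would multiply the previous velocity and `v_clamped` would replace any updated velocity component before the position update.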
Ardizzon et al. (2015) offered a novel variant of the original particle swarm concept, with two types of agents in the swarm, "explorers" and "settlers", that could dynamically exchange roles during the search. This method could dynamically adjust the particle velocities at each time step based on each particle's current distance from the best position found so far by the swarm. To improve PSO performance, Fan and Yan (2014) proposed a self-adaptive PSO with multiple velocity strategies (SAPSO-MVS). SAPSO-MVS generated self-adaptive control parameters over the whole evolution procedure and adopted a novel velocity updating scheme to improve the balance between the exploration and exploitation capabilities of the PSO algorithm, while avoiding manual tuning of the PSO parameters. Roy and Ghoshal (2008) proposed Crazy PSO, in which particle velocity was randomized within predefined limits. Its aim was to randomize the velocity of some particles, named "crazy particles", using a predefined probability of craziness, in order to maintain the diversity needed for global search and better convergence. Unfortunately, suitable values of the predefined probability of craziness could only be obtained after a number of experiments. Peram et al. (2003) presented a fitness–distance-ratio-based PSO (FDR-PSO), in which a new velocity updating equation was used to regenerate the velocity of each particle. Li et al. (2012) presented a self-learning PSO in which the velocity update scheme could be automatically modified during the evolution procedure. Lu et al. (2015b) proposed a mode-dependent velocity updating equation with Markovian switching parameters in a switching PSO to overcome the contradiction between local search and global search, making it easier to jump out of local minima. Liu et al. (2004) argued that too-frequent velocity updates would weaken the particles' local exploitation ability and slow convergence, so they proposed a relaxation velocity update strategy that updated the speed only when the original speed could no longer improve the particle's fitness value. Experimental results showed that this strategy could greatly reduce the computational load and accelerate convergence. Diosan and Oltean (2006) used a genetic algorithm to evolve the PSO algorithm structure, i.e., the updating order and frequency of the particles.

4.1.3. Modifying the Speed or Position Constraint Method and Dynamically Determining the Search Space

Chaturvedi et al. (2008) regulated the acceleration coefficients dynamically within maximum and minimum ranges. However, determining the bound values of the acceleration coefficients was a difficult task that required several simulations.
Stacey et al. (2003) proposed a new speed constrain method for re-randomizing particle speed, as well as a novel position constrain method for re-randomizing particle location. Clerc (2004) introduced a contraction-expansion coefficient into evolutionary algorithms to guarantee algorithm convergence while loosening the speed bound. Other ways of dynamically determining the search space, such as squeezing the search space (Barisal 2013), have also been presented.

4.1.4. Combining PSO with other Search Techniques

This has two main goals: one is to raise diversity and avoid premature convergence, and the other is to improve the PSO algorithm's local search ability. A plethora of models have been investigated in order to enhance search diversity in the PSO (Poli et al. 2007). These hybrid algorithms introduced various genetic operators into the PSO algorithm, such as selection (Angeline 1998a, b; Lovbjerg et al. 2001), crossover (Angeline 1998b; Chen et al. 2014), mutation (Tsafarakis et al. 2013), or Cauchy mutation (Wang et al. 2011), to increase diversity and improve the algorithm's ability to escape from local minima. Meng et al. (2015) introduced crisscross search particle swarm optimization (CSPSO), a new hybrid optimization technique. Lim and Isa (2015) proposed a hybrid PSO method that used fuzzy reasoning and a weighted particle to build a novel search behaviour model that improved the search ability of the traditional PSO algorithm. Shin and Kita (2014) used information from the second global best and second individual best particles, in addition to information from the first global best and first individual best particles, to improve the search performance of the original PSO. Tanweer et al. (2016) created a particle swarm optimization approach called self-regulating particle swarm optimization (SRPSO) that incorporated the best human learning strategies for finding the optimum results. The SRPSO employed two learning strategies.
The first design used a self-regulating inertia weight, and the second used a fixed inertia weight. Other hybridizations combined PSO with the predator-prey model (Gosciniak 2015), uncorrelative component analysis model (Fan et al. 2009), dissipative model (Xie et al. 2002), self-organizing model (Xie et al. 2004), life cycle model (Krink and Lovbjerg 2002), Bayesian optimization model (Monson and Seppi 2005), chemical reaction optimization (Li et al. 2015b), neighborhood search mechanism (Wang et al. 2013), collision-avoiding mechanism (Blackwell and Bentley 2002), information sharing mechanism (Li et al. 2015a), local search technique (Sharifi et al. 2015), cooperative behavior (van den Bergh and Engelbrecht 2004), hierarchical fair competition (Chen et al. 2006b), external memory (Acan and Gunay 2005), gradient descent technique (Noel and Jannett 2004), simplex method operator (Qian et al. 2012; El-Wakeel 2014), hill climbing method (Lin et al. 2006b), division of labor (Lim and Isa 2015), principal component analysis (Mu et al. 2015), Kalman filtering (Monson and Seppi 2004), genetic algorithm (Soleimani and Kannan 2015), shuffled frog leaping algorithm (Samuel and Rajan 2015), random search algorithm (Ciuprina et al. 2007), Gaussian local search (Jia et al. 2011), simulated annealing (Liu et al. 2014; Geng et al. 2014), taboo search (Wen and Liu 2005), Levenberg-Marquardt algorithm (Shirkhani et al. 2014), ant colony algorithm (Shelokar et al. 2007), artificial bee colony (Vitorino et al. 2015; Li et al. 2011), chaos algorithm (Yuan et al. 2015), differential evolution (Zhai and Jiang 2015), evolutionary programming (Jamian et al. 2015), and the multi-objective cultural algorithm (Zhang et al. 2013). The PSO algorithm was also extended to quantum space by Sun et al. (2004); this novel PSO model was based on the delta potential well and modeled the particles as having quantum behaviors. Furthermore, Medasani and Owechko (2005) expanded the PSO algorithm by introducing the possibility of c-means and probability theory, putting forward the probabilistic PSO algorithm.
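In the quantum-behaved model of Sun et al. (2004), particles carry no velocity: each coordinate is sampled around a local attractor, with a spread governed by the delta potential well. A sketch of the commonly published form of this update follows; the contraction-expansion coefficient alpha and all names are illustrative, and details vary across QPSO papers:

```python
import math
import random

def qpso_step(x, pbest_i, gbest, mbest, alpha=0.75):
    """One QPSO position update: sample each coordinate around a local attractor.

    mbest is the mean of all personal-best positions in the swarm; there is
    no velocity term in this model."""
    new_x = []
    for d in range(len(x)):
        phi = random.random()
        p = phi * pbest_i[d] + (1.0 - phi) * gbest[d]  # attractor between the two bests
        L = alpha * abs(mbest[d] - x[d])               # well width from the mean best
        u = 1.0 - random.random()                      # u in (0, 1] avoids log(1/0)
        step = L * math.log(1.0 / u)
        new_x.append(p + step if random.random() < 0.5 else p - step)
    return new_x
```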
Improving for Multi-modal Problems

This direction is intended specifically for multi-modal problems, with the hope of finding multiple superior solutions. To acquire several good solutions for the optimization problem, Parsopoulos and Vrahatis (2004) used deflection, stretching, and repulsion, among other strategies, to find as many minima as possible by preventing the particles from travelling to the minimal regions found before. This strategy, however, would generate new local optima at both ends of the detected local ones, potentially causing the optimization algorithm to fall into local optima. As a result, Jin et al. (2005) developed a novel type of function transformation that could prevent this drawback. Benameur et al. (2006) presented an adaptive method to determine the niching parameters. Brits et al. (2003) suggested a niche PSO algorithm to discover and track multiple optima by exploiting several sub-populations simultaneously. Brits et al. (2002) investigated a strategy for simultaneously finding multiple optimal solutions by changing the fitness value calculation approach. Schoeman and Engelbrecht (2005) used vector dot-product operations to determine each candidate solution and its boundary in every niche, and parallelized this process to produce better results based on the niche PSO algorithm. However, every niche PSO algorithm shared a common drawback: it needed to set a niche radius, and the method's performance was very sensitive to this radius; later variants attempted to address this issue.

4.1.5. Keeping Diversity of the Population

Population diversity is crucial for improving the PSO algorithm's global convergence. When population diversity was very low, the simplest way to restore it was to reset some particles or the entire particle swarm.
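The reset strategy described above can be sketched with a simple diversity trigger; the centroid-distance measure, threshold, and elitist `keep` parameter below are illustrative choices, not taken from the cited works:

```python
import random

def diversity(swarm):
    """Mean Euclidean distance of the particles from the swarm centroid."""
    dim = len(swarm[0])
    centroid = [sum(p[d] for p in swarm) / len(swarm) for d in range(dim)]
    return sum(
        sum((p[d] - centroid[d]) ** 2 for d in range(dim)) ** 0.5 for p in swarm
    ) / len(swarm)

def reinit_if_collapsed(swarm, bounds, threshold=1e-3, keep=1):
    """Re-randomize all but the first `keep` particles when diversity collapses."""
    if diversity(swarm) >= threshold:
        return swarm
    lo, hi = bounds
    return swarm[:keep] + [
        [random.uniform(lo, hi) for _ in swarm[0]] for _ in swarm[keep:]
    ]
```

Resetting everything except the best particle is the crudest option; the works cited below refine both the trigger (e.g. self-organized criticality) and the reset itself.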
Lovbjerg and Krink (2002) used self-organized criticality in the PSO algorithm to describe the degree of proximity among the particles in the swarm and to decide whether or not to re-initialize the particle positions. Clerc (1999) presented Re-Hope, a deterministic algorithm that reset the swarm when the search space had shrunk considerably but no solution had yet been found (No-Hope). To preserve population diversity and to balance global and local searches, Fang et al. (2016) suggested a decentralised quantum-inspired particle swarm optimization (QPSO) method with cellular structured populations (called cQPSO). The performance of cQPSO-lbest was evaluated on 42 benchmark functions with varying features (including unimodal, multi-modal, separated, shifted, rotated, noisy, and mis-scaled) and compared against a set of PSO variants with different topologies and swarm-based evolutionary algorithms (EAs). The modified PSO of Park et al. (2010) incorporated a chaotic inertia weight that declined and oscillated simultaneously under a decreasing line. Additional diversity was introduced into the PSO in this way, but the chaotic control settings needed to be tuned. Netjinda et al. (2015) recently introduced a novel mechanism into PSO to boost swarm diversity, inspired by the collective response behaviour of starlings. Thanks to this collective response mechanism, the Starling PSO covered a wider scope of the search space, avoiding poor solutions. The improved performance came at a cost: the approach added extra processes to the original algorithm, so more parameters were required, and the new collective response step increased the execution time of the algorithm. The algorithmic complexity of the Starling PSO, however, remained the same as that of the original PSO.

5. PARAMETER SELECTION

The inertia weight ω (or constriction factor χ), learning factors c1 and c2, speed limit Vmax, position limit Xmax, swarm size, and initial swarm are all key parameters of the PSO algorithm. Some researchers fixed the other factors and evaluated the impact of a single parameter on the algorithm, while others studied the impact of several parameters simultaneously.
6. INERTIA WEIGHT

According to current research, the inertia weight has the biggest influence on the performance of the PSO algorithm, and it has therefore attracted the most study. Shi and Eberhart (1998) were the first to address PSO parameter selection: they introduced an inertia weight into PSO and showed that it promoted convergence. An extension of this work used fuzzy systems to adjust the inertia weight nonlinearly during optimization (Shi and Eberhart 2001). In general, the inertia weight in PSO is understood to balance global and local search: a larger inertia weight leans toward global search while a smaller one tends toward local search, hence the value of the inertia weight should gradually decrease over time. Shi and Eberhart (1998) proposed that the inertia weight be set in [0.9, 1.2], and that a linearly decreasing inertia weight could considerably improve PSO performance. Because fixed inertia weights rarely produce satisfactory results, many PSO variants appeared whose inertia weight decreased linearly with iteration number (Shi and Eberhart 1998), changed adaptively (Nickabadi et al. 2011), was adjusted by a quadratic function (Tang et al. 2011) or by population information (Zhan et al. 2009), was adjusted based on Bayesian techniques (Zhang et al. 2015), or decreased exponentially. At the same time, there are numerous techniques for changing the inertia weight adaptively according to some evaluation index, such as the success history of the search (Fourie and Groenwold 2002), individual search ability (Yang et al. 2007), particle average velocity (Yasuda and Iwasaki 2004), population diversity (Jie et al. 2006), smoothness of change in the objective function (Wang et al. 2005), and particle swarm evolutionary speed and aggregation degree (Qin et al. 2006). Similarly, Liu et al. (2005) used the Metropolis criterion to decide whether or not to accept an inertia weight adjustment. Others used a random inertia weight, such as [0.5 + (rnd/2.0)] (Eberhart and Shi 2001), or random numbers uniformly distributed in [0, 1] (Zhang et al. 2003). Jiang and Bompard (2005) used a chaotic process to pick the inertia weight, allowing it to traverse [0, 1]. The improved PSO of Park et al. (2010) introduced a chaotic inertia weight that oscillated and dropped simultaneously beneath a decreasing line, but the chaotic control parameters required tuning.

Learning Factors c1 and c2

The learning factors c1 and c2 represent the weights of the stochastic acceleration terms that drive each particle toward pBest and gBest (or nBest). In many cases, c1 and c2 are set to 2.0, causing the search to cover the region centered on pBest and gBest. Another typical value is 1.49445, which can ensure convergence of the PSO algorithm (Clerc and Kennedy 2002). After extensive testing, Carlisle and Dozier (2001) proposed a better parameter set in which c1 and c2 were set to 2.8 and 1.3, respectively, and the merit of this choice was confirmed by Schutte and Groenwold (2005). Inspired by the concept of time-varying inertia weight, many PSO variants appeared whose learning factors changed with time (Ivatloo 2013), such as learning factors that decreased linearly with time (Ratnaweera et al. 2004), or that were dynamically adjusted based on the particles' evolutionary states (Ide and Yasuda 2005), or in accordance with the number of fitness values that deteriorated persistently and the swarm's dispersion (Chen et al. 2006a). In most circumstances, the two learning factors c1 and c2 take the same value, giving the same weight to social and cognitive search. Kennedy (1997) investigated two extremes: models with only the social term and models with only the cognitive term. The results revealed that both components are critical to the success of swarm search, while no definitive conclusions could be drawn about asymmetric learning factors. There have been studies that determined the inertia weight and learning factors at the same time.
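The linearly decreasing inertia weight (Shi and Eberhart 1998), the time-varying learning factors (Ratnaweera et al. 2004), and the Clerc-Kennedy constriction coefficient behind the 1.49445 value all reduce to a few lines of arithmetic. The endpoint values below are commonly quoted ones and may differ across papers; in one common time-varying scheme c1 shrinks while c2 grows:

```python
import math

def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Inertia weight decreasing linearly from w_start to w_end over t_max iterations."""
    return w_start - (w_start - w_end) * t / t_max

def time_varying_coeffs(t, t_max, c1_range=(2.5, 0.5), c2_range=(0.5, 2.5)):
    """c1 shrinks and c2 grows: cognitive-heavy exploration early, social-heavy
    convergence late."""
    c1 = c1_range[0] + (c1_range[1] - c1_range[0]) * t / t_max
    c2 = c2_range[0] + (c2_range[1] - c2_range[0]) * t / t_max
    return c1, c2

def constriction(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient chi for phi = c1 + c2 > 4; about
    0.7298 at the defaults, giving effective coefficients chi * c of about 1.49."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```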
Many researchers used optimization techniques such as genetic algorithms (Yu et al. 2005), adaptive fuzzy algorithms (Juang et al. 2011), and differential evolution (Parsopoulos and Vrahatis 2002b) to dynamically calculate the inertia weight and learning factors.

7. SPEED LIMITS VMAX

The particles' speed is controlled by a maximum speed Vmax, which can be used to control the particle swarm's global search ability. In the original PSO method (ω = 1, c1 = c2 = 2), a particle's speed quickly climbs to a very high value, degrading the PSO algorithm's performance, hence particle velocity must be limited. Later, Clerc and Kennedy (2002) pointed out that it was not essential to limit particle velocity; instead, incorporating a constriction factor into the speed update equation could achieve the same result. Even when the constriction factor was applied, research revealed that limiting the particle velocity at the same time produced better results (Eberhart and Shi 2000). As a result, the concept of a speed limit was kept in the PSO algorithm. In general, Vmax was set to the dynamic range of each variable and was normally a fixed number, but it could also decline linearly with time (Fan 2002) or be adjusted dynamically depending on the success history of the search (Fourie and Groenwold 2002).

Position Limits Xmax

Particle positions can be controlled by a maximum position Xmax to prevent particles from flying out of the physical solution space. Robinson and Rahmat-Samii (2004) proposed three control techniques: absorbing walls, reflecting walls, and invisible walls. When one of a particle's dimensions crossed the boundary of the solution space, the absorbing wall set the velocity in that dimension to zero, while the reflecting wall reversed the direction of the particle's velocity; either way, the particle was eventually pulled back into the allowable solution space.
To save computation time and avoid interfering with the motions of other particles, the invisible walls did not calculate the fitness values of particles that flew out. However, the performance of the PSO method was heavily influenced by the problem dimension and the relative position of the global optimum and the search space boundary. Huang and Mohan (2005) developed a hybrid damping boundary, integrating the features of the absorbing and reflecting walls, to achieve robust and consistent performance. And Mikki and Kishk (2005) combined the techniques of hard position limits, absorbing walls, and reflecting walls to achieve better outcomes.

8. POPULATION SIZE

The population size chosen is related to the problems to be solved, but the algorithm is not particularly sensitive to it. The typical range is 20–50. In some instances, a larger population is used to address particular demands.

8.1. Initialization of the Population

The population's initialization is likewise a critical issue. In general, the initial population is generated randomly, but there are many intelligent population initialization methods, such as using the nonlinear simplex method (Parsopoulos and Vrahatis 2002a), centroidal Voronoi tessellations (Richards and Ventura 2004), and orthogonal design (Zhan et al. 2011) to determine the initial population of the PSO algorithm, making the distribution of the initial population as even as possible and helping the algorithm explore the search space more effectively. Robinson et al. (2002) stated that the PSO algorithm and the GA algorithm could be used in tandem, i.e., using the population optimised by the PSO algorithm as the initial population of the GA algorithm, or vice versa; both methods could produce better results. Yang et al. (2015) introduced LHNPSO, a new PSO technique with low-discrepancy-sequence-initialized particles, a high-order nonlinear time-varying inertia weight, and constant acceleration coefficients. To adequately populate the search space, the Halton sequence was used to generate the initial population.
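A Halton-sequence initializer of the kind used in LHNPSO can be sketched as follows; the radical-inverse construction is the standard definition, while the prime-per-dimension mapping and the bounds handling are illustrative choices:

```python
def halton(index, base):
    """Radical-inverse (van der Corput) value of `index` in the given prime base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def halton_population(n, dim, bounds, primes=(2, 3, 5, 7, 11, 13)):
    """n particles in [lo, hi]^dim, spread more evenly than uniform random points."""
    lo, hi = bounds
    return [
        [lo + (hi - lo) * halton(i + 1, primes[d]) for d in range(dim)]
        for i in range(n)
    ]
```

Low-discrepancy points cover the search space more uniformly than pseudo-random draws, which is exactly the property the initialization methods above are after.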
Furthermore, PSO algorithm parameters could be adjusted using methods such as sensitivity analysis (Bartz-Beielstein et al. 2002), regression trees (Bartz-Beielstein et al. 2004a), and computational statistics (Bartz-Beielstein et al. 2004b) to improve PSO algorithm performance on practical problems. Beheshti and Shamsuddin (2015) presented a nonparametric particle swarm optimization (NP-PSO) to enhance global exploration and local exploitation in PSO without tuning algorithm parameters. This technique combined local and global topologies with two quadratic interpolation operations to improve the algorithm's search capacity.

9. MULTI-OBJECTIVE OPTIMIZATION PSO

Multi-objective (MO) optimization has become an important research area in recent years. In multi-objective optimization problems, each objective function can be optimized independently and the optimal value found for each. Unfortunately, because the objectives conflict, it is almost impossible to find a single perfect solution for all of them, so only Pareto optimal solutions can be sought. The information sharing mechanism of the PSO algorithm differs considerably from those of other swarm optimization tools. In the genetic algorithm (GA), chromosomes share information with each other via the crossover operation, a bidirectional information sharing process, whereas in most PSO algorithms only gBest (or nBest) passes information to the other particles. Due to this point-attraction property, traditional PSO algorithms cannot simultaneously discover the numerous optimal points that define the Pareto frontier. Though multiple optimal solutions can be obtained by assigning different weights to the objective functions, combining them, and running the algorithm many times, we still want a method that can obtain a group of Pareto optimal solutions in a single run.
Following the presentation of the vector-evaluated genetic algorithm (Peram et al. 2003), an ocean of multi-objective optimization algorithms, such as NSGA-II, was introduced one after another (Coello et al. 2004). Liu et al. (2016) were the first to investigate the use of the PSO algorithm in multi-objective optimization, emphasising the importance of individual and swarm searches, but they did not employ any strategy to maintain diversity. Clerc (2004) employed an external archive to store the non-dominated members and determine which particles belonged to it, and these members were used to guide the flight of other particles based on the idea of non-dominated optimality. Kennedy (2003) employed the NSGA-II algorithm's core mechanism to discover the local optimal particle among the local optimal particles and their offspring, and suggested a non-dominated sorting PSO that used the max-min strategy in the fitness function to determine Pareto dominance. Furthermore, Goldbarg et al. (2006) optimised a U-tube steam generator mathematical model in a nuclear power plant using the non-dominated sorting PSO. To handle multi-objective optimization problems, Ghodratnama et al. (2015) used the comprehensive learning PSO method in conjunction with Pareto dominance. Ozcan and Mohan (1998) created an elitist multi-objective PSO that used an elitist mutation coefficient to improve particle exploitation and exploration. Wang et al. (2011) developed an iterative multi-objective particle swarm optimization-based control vector parameterization to deal with state-constrained chemical and biochemical engineering problems. Clerc and Kennedy (2002), Fan and Yan (2014), Chen et al. (2014), Lei et al. (2005), and others have also suggested multi-objective PSO algorithms in recent studies.
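The archive-based MOPSO variants above all rest on the same Pareto-dominance test and non-dominated archive update, which can be sketched as follows (minimization assumed; the function names are illustrative):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """External archive update: keep only mutually non-dominated objective vectors."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive                     # candidate is dominated; archive unchanged
    return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]
```

In a full MOPSO, members of this archive (rather than a single gBest) guide the flight of the other particles, which is how a whole Pareto front can be approximated in one run.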
Because the fitness calculation consumes significant computational resources, the number of fitness function evaluations must be decreased in order to reduce the computational cost. Pampara et al. (2005) used a fitness inheritance strategy and an estimation technique to accomplish this goal, comparing the effects of fifteen different inheritance techniques and four estimation techniques applied to a multi-objective PSO algorithm. The diversity of the MOPSO can be maintained using two methods: the sigma method (Lovbjerg and Krink 2002) and the ε-dominance method (Juang et al. 2011; Robinson and Rahmat-Samii 2004). Robinson and Rahmat-Samii (2004) also proposed a multi-swarm PSO method that divided the entire swarm into three equal-sized sub-swarms, each using a separate mutation coefficient; this method increased the particles' search capability. Engineering applications of the PSO are provided in the supplementary file due to page limitations; interested readers are welcome to refer to it.

10. NOISE AND DYNAMIC ENVIRONMENTS

Brits et al. (2003) proposed using the PSO algorithm to monitor dynamic systems, tracking the dynamic system by regularly resetting all particles' memories. Deb and Pratap (2002) took a similar approach. Following that, Geng et al. (2014) presented an adaptive PSO algorithm that could automatically track changes in the dynamic system, and several environment detection and response strategies were tested on the parabolic benchmark function. It effectively boosted the capability of tracking environmental change by testing and reinitializing the best particle in the swarm. Later, Carlisle and Dozier (2000) used a random point in the search space to determine whether or not the environment had changed; however, this required centralised control, which was incompatible with the distributed processing architecture of the PSO algorithm.
As a result, Clerc (2006) suggested a Tracking Dynamical PSO (TDPSO) that caused the fitness value of the best historical position to decrease over time, eliminating the requirement for centralised control. To cope with rapidly changing dynamic environments, Binkley and Hagiwara (2005) introduced a penalty term into the particles' update formula to keep the particles within an expanding swarm, and this method does not need to assess whether the optimal point has changed or not.
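The memory-reset idea behind several of these dynamic-environment approaches can be sketched as follows; detecting change by re-evaluating the remembered best is only one of the detection strategies mentioned above, and all names here are illustrative:

```python
def refresh_memories(pbests, f, gbest, gbest_val, tol=1e-9):
    """Re-evaluate gbest; if the environment changed, re-score every pbest memory.

    pbests is a list of (position, remembered_fitness) pairs; f is the current
    (possibly changed) objective function, to be minimized."""
    if abs(f(gbest) - gbest_val) <= tol:
        return pbests, gbest, gbest_val               # no change detected
    rescored = [(p, f(p)) for p, _ in pbests]         # stale memories discarded
    best_p, best_v = min(rescored, key=lambda pv: pv[1])
    return rescored, best_p, best_v
```

Re-scoring keeps the particles' positions but discards outdated fitness memories, so the swarm is not attracted toward a gbest that is no longer optimal.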
Monson and Seppi (2004) demonstrated that the basic PSO algorithm can perform efficiently and stably in a noisy environment; in fact, noise can help the PSO algorithm avoid falling into local optima in many circumstances. Furthermore, Mostaghim and Teich (2003) investigated the performance of the unified particle swarm method in a dynamic environment. The research objects of the preceding works are simple dynamic systems, their experiment functions are simple single-mode functions, and the changes are uniform in a simple setting (that is, of fixed step). In reality, genuine dynamic systems are frequently nonlinear and change non-uniformly in a complicated multi-modal search space. Kennedy (2005) studied a variety of dynamic situations using four PSO models (a basic PSO, two randomised PSO algorithms, and a fine-grained PSO).

11. NUMERICAL EXPERIMENTS

PSO was also employed in a number of numerical investigations. Carlisle and Dozier (2001) used a modified variant of a probabilistic environment-based particle swarm optimization approach to solve an aggregate production plan model that simultaneously minimised the most probable value of the imprecise total costs, maximised the possibility of obtaining lower total costs, and minimised the risk of obtaining higher total costs. This method provides a novel way of accounting for the inherent uncertainty of the parameters in an aggregate production plan problem, and it can be applied to ambiguous and indeterminate real-world production planning and scheduling problems with ill-defined data. Ganesh et al. (2014) used the PSO to optimise the cutting conditions for the response surface models they constructed. The PSO program provided the minimal values of the relevant criteria as well as the optimal cutting conditions.
To minimise the squared error between measured and modelled values in system identification problems, Lu developed an upgraded PSO method with a combined fitness function. Numerical simulations with five benchmark functions were employed to validate the feasibility of the PSO, and numerical tests were also carried out to evaluate the performance of the upgraded PSO. Consistent results revealed that the combined fitness function-based PSO method was viable and efficient for system identification, and that it could outperform the conventional PSO approach. Lu et al. (2015a) used eight numerical benchmarking functions representing diverse aspects of typical problems, as well as a real-world data clustering application, to test the Starling PSO. The experimental results indicated that the Starling PSO outperformed the original PSO, producing the best solution on several numerical benchmarking functions and on the majority of the real-world clustering problems. Sierra and Coello (2005) performed numerical experiments using high-dimensional benchmark objective functions to validate the convergence and effectiveness of the proposed PSO initialization. Salehian and Subraminiam (2015) used a modified PSO to optimise wireless sensor network performance in terms of the number of alive nodes; numerical experiments in a conventional setting validated the performance of the selected modified PSO.

12. CONCLUSIONS AND DISCUSSION

The PSO algorithm has attracted widespread attention in recent years as a relatively new approach. The following are some of the benefits of the PSO algorithm:

(1) It is quite robust and may be applied in a variety of application environments with few modifications.

(2) It has high distributed capability, because the algorithm is essentially a swarm evolutionary algorithm, making parallel processing simple.
(3) It converges quickly to the optimal value.

(4) It is easy to combine with other algorithms to boost performance.

There are several unsolved problems in PSO algorithm research, including but not limited to:

(1) Analysis of stochastic convergence. Although the PSO method has been demonstrated to be effective in real-world applications and preliminary theoretical results exist, mathematical proofs of algorithm convergence and estimates of the convergence rate are still lacking.

(2) How to set the algorithm parameters. PSO parameters are typically established based on the specific problem, application experience, and repeated experimental testing, so the algorithm lacks adaptability. How to establish the algorithm parameters in a convenient and effective manner is therefore another pressing issue to be addressed.

(3) Discrete/binary PSO algorithms. The majority of the research covered in this paper deals with continuous variables. Limited study evidences that the PSO algorithm has some issues dealing with discrete variables.

(4) Designing an effective algorithm based on the features of specific problems is a very meaningful effort. For specific application challenges, we should thoroughly investigate the PSO algorithm and broaden and deepen its applicability. Simultaneously, we should focus on highly efficient PSO design, combining the PSO with the optimised problem or its rules, and with neural networks, fuzzy logic, evolutionary algorithms, simulated annealing, taboo search, biological intelligence, chaos, and so on, to address the problem that the PSO is easily trapped in local optima.

(5) Study of PSO algorithm design. More emphasis should be placed on highly efficient PSO algorithms, on an appropriate core update formula, and on effective methods for balancing global and local exploitation and exploration.

(6) Broadening PSO applications. Because most PSO applications are currently limited to continuous, single-objective, unconstrained, deterministic optimization problems, we should focus on discrete, multi-objective, constrained, non-deterministic, and dynamic optimization problems. PSO's application areas should be extended at the same time.

REFERENCES

[1] Abdelbar AM, Abdelshahid S, Wunsch DC II (2005) Fuzzy PSO: a generalization of particle swarm optimization. In: Proceedings of the 2005 IEEE international joint conference on neural networks (IJCNN'05), Montreal, Canada, July 31–August 4, pp 1086–1091

[2] Acan A, Gunay A (2005) Enhanced particle swarm optimization through external memory support. In: Proceedings of the 2005 IEEE congress on evolutionary computation, Edinburgh, UK, Sept 2–4, pp 1875–1882

[3] Afshinmanesh F, Marandi A, Rahimi-Kian A (2005) A novel binary particle swarm optimization method using artificial immune system. In: Proceedings of the international conference on computer as a tool (EUROCON 2005), Belgrade, Serbia, Nov 21–24, pp 217–220

[4] Al-kazemi B, Mohan CK (2002) Multi-phase generalization of the particle swarm optimization algorithm. In: Proceedings of the 2002 IEEE congress on evolutionary computation, Honolulu, Hawaii, August 7–9, pp 489–494

[5] al Rifaie MM, Blackwell T (2012) Bare bones particle swarms with jumps. Lect Notes Comput Sci 7461(1):49–60

[6] Angeline PJ (1998a) Evolutionary optimization versus particle swarm optimization: philosophy and performance difference. In: Evolutionary programming VII, Lecture notes in computer science. Springer, Berlin
[7] Angeline PJ (1998b) Using selection to improve particle swarm optimization. In: Proceedings of the 1998 IEEE international conference on evolutionary computation, Anchorage, Alaska, USA, May 4–9, pp 84–89

[8] Ardizzon G, Cavazzini G, Pavesi G (2015) Adaptive acceleration coefficients for a new search diversification strategy in particle swarm optimization algorithms. Inf Sci 299:337–378

[9] Banka H, Dara S (2015) A Hamming distance based binary particle swarm optimization (HDBPSO) algorithm for high dimensional feature selection, classification and validation. Pattern Recognit Lett 52:94–100

[10] Barisal AK (2013) Dynamic search space squeezing strategy based intelligent algorithm solutions to economic dispatch with multiple fuels. Electr Power Energy Syst 45:50–59

[11] Bartz-Beielstein T, Parsopoulos KE, Vrahatis MN (2002) Tuning PSO parameters through sensitivity analysis. Technical Report CI 124/02, SFB 531. University of Dortmund, Department of Computer Science, Dortmund, Germany

[12] Bartz-Beielstein T, Parsopoulos KE, Vegt MD, Vrahatis MN (2004a) Designing particle swarm optimization with regression trees. Technical Report CI 173/04, SFB 531. University of Dortmund, Department of Computer Science, Dortmund, Germany

[13] Bartz-Beielstein T, Parsopoulos KE, Vrahatis MN (2004b) Analysis of particle swarm optimization using computational statistics. In: Proceedings of the international conference of numerical analysis and applied mathematics (ICNAAM 2004), Chalkis, Greece, pp 34–37

[14] Beheshti Z, Shamsuddin SM (2015) Non-parametric particle swarm optimization for global optimization. Appl Soft Comput 28:345–359

[15] Benameur L, Alami J, Imrani A (2006) Adaptively choosing niching parameters in a PSO. In: Proceedings of the genetic and evolutionary computation conference (GECCO 2006), Seattle, Washington, USA, July 8–12, pp 3–9

[16] Binkley KJ, Hagiwara M (2005) Particle swarm optimization with area of influence: increasing the effectiveness of the swarm. In: Proceedings of the 2005 IEEE swarm intelligence symposium (SIS 2005), Pasadena, California, USA, June 8–10, pp 45–52

[17] Blackwell TM (2005) Particle swarms and population diversity. Soft Comput 9(11):793–802

[18] Blackwell TM, Bentley PJ (2002) Don't push me! Collision-avoiding swarms. In: Proceedings of the IEEE congress on evolutionary computation, Honolulu, HI, USA, August 7–9, pp 1691–1697

[19] Bratton D, Kennedy J (2007) Defining a standard for particle swarm optimization. In: Proceedings of the 2007 IEEE swarm intelligence symposium (SIS 2007), Honolulu, HI, USA, April 19–23, pp 120–127

[20] Brits R, Engelbrecht AP, van den Bergh F (2002) Solving systems of unconstrained equations using particle swarm optimization. In: Proceedings of the IEEE international conference on systems, man, and cybernetics, Hammamet, Tunisia, October 6–9, 2002

[21] Brits R, Engelbrecht AP, van den Bergh F (2003) Scalability of niche PSO. In: Proceedings of the IEEE swarm intelligence symposium, Indianapolis, Indiana, USA, April 24–26, pp 228–234

[22] Carlisle A, Dozier G (2000) Adapting particle swarm optimization to dynamic environments. In: Proceedings of the international conference on artificial intelligence, Athens, GA, USA, July 31–August 5, pp 429–434

[23] Carlisle A, Dozier G (2001) An off-the-shelf PSO. In: Proceedings of the workshop on particle swarm optimization, Indianapolis, Indiana, USA
[24] Chang WD (2015) A modified particle swarm optimization with multiple subpopulations for multimodal function optimization problems. Appl Soft Comput 33:170–182
[25] Chatterjee A, Siarry P (2006) Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization. Comput Oper Res 33:859–871
[26] Chaturvedi KT, Pandit M, Shrivastava L (2008) Self-organizing hierarchical particle swarm optimization for non-convex economic dispatch. IEEE Trans Power Syst 23(3):1079–1087
[27] Chen J, Pan F, Cai T (2006a) Acceleration factor harmonious particle swarm optimizer. Int J Autom Comput 3(1):41–46
[28] Chen K, Li T, Cao T (2006b) Tribe-PSO: a novel global optimization algorithm and its application in molecular docking. Chemom Intell Lab Syst 82:248–259
[29] Chen W, Zhang J, Lin Y, Chen N, Zhan Z, Chung H, Li Y, Shi Y (2013) Particle swarm optimization with an aging leader and challengers. IEEE Trans Evolut Comput 17(2):241–258
[30] Chen Y, Feng Y, Li X (2014) A parallel system for adaptive optics based on parallel mutation PSO algorithm. Optik 125:329–332
[31] Ciuprina G, Ioan D, Munteanu I (2007) Use of intelligent-particle swarm optimization in electromagnetics. IEEE Trans Magn 38(2):1037–1040
[32] Clerc M (1999) The swarm and the queen: towards a deterministic and adaptive particle swarm optimization. In: Proceedings of the IEEE congress on evolutionary computation (CEC 1999), Washington, DC, USA, July 6–9, pp 1951–1957
[33] Clerc M (2004) Discrete particle swarm optimization. In: Onwubolu GC (ed) New optimization techniques in engineering. Springer, Berlin
[34] Clerc M (2006) Stagnation analysis in particle swarm optimisation or what happens when nothing happens. Technical Report CSM-460, Department of Computer Science, University of Essex, Essex, UK, August 2006
[35] Clerc M, Kennedy J (2002) The particle swarm—explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evolut Comput 6(2):58–73
[36] Coelho LDS, Lee CS (2008) Solving economic load dispatch problems in power systems using chaotic and gaussian particle swarm optimization approaches. Electr Power Energy Syst 30:297–307
[37] Coello CAC, Pulido G, Lechuga M (2004) Handling multiple objectives with particle swarm optimization. IEEE Trans Evolut Comput 8(3):256–279
[38] Deb K, Pratap A (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evolut Comput 6(2):182–197
[39] del Valle Y, Venayagamoorthy GK, Mohagheghi S, Hernandez JC, Harley RG (2008) Particle swarm optimization: basic concepts, variants and applications in power systems. IEEE Trans Evolut Comput 12:171–195
[40] Diosan L, Oltean M (2006) Evolving the structure of the particle swarm optimization algorithms. In: Proceedings of European conference on evolutionary computation in combinatorial optimization (EvoCOP 2006), Budapest, Hungary, April 10–12, pp 25–36
[41] Doctor S, Venayagamoorthy GK (2005) Improving the performance of particle swarm optimization using adaptive critics designs. In: Proceedings of 2005 IEEE swarm intelligence symposium (SIS 2005), Pasadena, California, USA, June 8–10, pp 393–396
[42] Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the 6th international symposium on micro machine and human science, Nagoya, Japan, March 13–16, pp 39–43
[43] Eberhart RC, Shi Y (2000) Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the IEEE congress on evolutionary computation (CEC 2000), San Diego, CA, USA, July 16–19, pp 84–88
[44] Eberhart RC, Shi Y (2001) Particle swarm optimization: developments, applications and resources. In: Proceedings of the IEEE congress on evolutionary computation (CEC 2001), Seoul, Korea, May 27–30, pp 81–86
[45] El-Wakeel AS (2014) Design optimization of PM couplings using hybrid particle swarm optimization-simplex method (PSO-SM) algorithm. Electr Power Syst Res 116:29–35
[46] Emara HM, Fattah HAA (2004) Continuous swarm optimization technique with stability analysis. In: Proceedings of American control conference, Boston, MA, USA, June 30–July 2, pp 2811–2817
[47] Engelbrecht AP, Masiye BS, Pampard G (2005) Niching ability of basic particle swarm optimization algorithms. In: Proceedings of 2005 IEEE swarm intelligence symposium (SIS 2005), Pasadena, CA, USA, June 8–10, pp 397–400
[48] Fan H (2002) A modification to particle swarm optimization algorithm. Eng Comput 19(8):970–989
[49] Fan Q, Yan X (2014) Self-adaptive particle swarm optimization with multiple velocity strategies and its application for p-xylene oxidation reaction process optimization. Chemom Intell Lab Syst 139:15–25
[50] Fan SKS, Lin Y, Fan C, Wang Y (2009) Process identification using a new component analysis model and particle swarm optimization. Chemom Intell Lab Syst 99:19–29
[51] Fang W, Sun J, Chen H, Wu X (2016) A decentralized quantum-inspired particle swarm optimization algorithm with cellular structured population. Inf Sci 330:19–48
[52] Fernandez-Martinez JL, Garcia-Gonzalo E (2011) Stochastic stability analysis of the linear continuous and discrete PSO models. IEEE Trans Evolut Comput 15(3):405–423
[53] Fourie PC, Groenwold AA (2002) The particle swarm optimization algorithm in size and shape optimization. Struct Multidiscip Optim 23(4):259–267
[54] Ganesh MR, Krishna R, Manikantan K, Ramachandran S (2014) Entropy based binary particle swarm optimization and classification for ear detection. Eng Appl Artif Intell 27:115–128
[55] Garcia-Gonzalo E, Fernandez-Martinez JL (2014) Convergence and stochastic stability analysis of particle swarm optimization variants with generic parameter distributions. Appl Math Comput 249:286–302
[56] Garcia-Martinez C, Rodriguez FJ (2012) Arbitrary function optimisation with metaheuristics: no free lunch and real-world problems. Soft Comput 16:2115–2133
[57] Geng J, Li M, Dong Z, Liao Y (2014) Port throughput forecasting by MARS-RSVR with chaotic simulated annealing particle swarm optimization algorithm. Neurocomputing 147:239–250
[58] Ghodratnama A, Jolai F, Tavakkoli-Moghaddam R (2015) Solving a new multi-objective multi-route flexible flow line problem by multi-objective particle swarm optimization and NSGA-II. J Manuf Syst 36:189–202
[59] Goldbarg EFG, de Souza GR, Goldbarg MC (2006) Particle swarm for the traveling salesman problem. In: Proceedings of European conference on evolutionary computation in combinatorial optimization (EvoCOP 2006), Budapest, Hungary, April 10–12, pp 99–110
[60] Gosciniak I (2015) A new approach to particle swarm optimization algorithm. Expert Syst Appl 42:844–854
[61] Hanafi I, Cabrera FM, Dimane F, Manzanares JT (2016) Application of particle swarm optimization for optimizing the process parameters in turning of PEEK CF30 composites. Procedia Technol 22:195–202
[62] He S, Wu Q, Wen J (2004) A particle swarm optimizer with passive congregation. BioSystems 78:135–147
[63] Hendtlass T (2003) Preserving diversity in particle swarm optimisation. In: Proceedings of the 16th international conference on industrial engineering applications of artificial intelligence and expert systems, Loughborough, UK, June 23–26, pp 31–40
[64] Ho S, Yang S, Ni G (2006) A particle swarm optimization method with enhanced global search ability for design optimizations of electromagnetic devices. IEEE Trans Magn 42(4):1107–1110
[65] Hu X, Eberhart RC (2002) Adaptive particle swarm optimization: detection and response to dynamic systems. In: Proceedings of IEEE congress on evolutionary computation, Honolulu, HI, USA, May 10–14, pp 1666–1670
[66] Huang T, Mohan AS (2005) A hybrid boundary condition for robust particle swarm optimization. Antennas Wirel Propag Lett 4:112–117
[67] Ide A, Yasuda K (2005) A basic study of adaptive particle swarm optimization. Electr Eng Jpn 151(3):41–49
[68] Ivatloo BM (2013) Combined heat and power economic dispatch problem solution using particle swarm optimization with time varying acceleration coefficients. Electr Power Syst Res 95(1):9–18
[69] Jamian JJ, Mustafa MW, Mokhlis H (2015) Optimal multiple distributed generation output through rank evolutionary particle swarm optimization. Neurocomputing 152:190–198
[70] Jia D, Zheng G, Qu B, Khan MK (2011) A hybrid particle swarm optimization algorithm for high-dimensional problems. Comput Ind Eng 61:1117–1122
[71] Jian W, Xue Y, Qian J (2004) An improved particle swarm optimization algorithm with neighborhood topologies. In: Proceedings of 2004 international conference on machine learning and cybernetics, Shanghai, China, August 26–29, pp 2332–2337
[72] Jiang CW, Bompard E (2005) A hybrid method of chaotic particle swarm optimization and linear interior for reactive power optimization. Math Comput Simul 68:57–65
[73] Jie J, Zeng J, Han C (2006) Adaptive particle swarm optimization with feedback control of diversity. In: Proceedings of 2006 international conference on intelligent computing (ICIC 2006), Kunming, China, August 16–19, pp 81–92
[74] Jin Y, Cheng H, Yan J (2005) Local optimum embranchment based convergence guarantee particle swarm optimization and its application in transmission network planning. In: Proceedings of 2005 IEEE/PES transmission and distribution conference and exhibition: Asia and Pacific, Dalian, China, August 15–18, pp 1–6
[75] Juang YT, Tung SL, Chiu HC (2011) Adaptive fuzzy particle swarm optimization for global optimization of multimodal functions. Inf Sci 181:4539–4549
[76] Kadirkamanathan V, Selvarajah K, Fleming PJ (2006) Stability analysis of the particle dynamics in particle swarm optimizer. IEEE Trans Evolut Comput 10(3):245–255
[77] Kennedy J (1997) Minds and cultures: particle swarm implications. In: Proceedings of the AAAI fall 1997 symposium on communicative action in humans and machines, Cambridge, MA, USA, November 8–10, pp 67–72
[78] Kennedy J (1998) The behavior of particles. In: Proceedings of the 7th annual conference on evolutionary programming, San Diego, CA, March 10–13, pp 581–589
[79] Kennedy J (1999) Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance. In: Proceedings of the IEEE international conference on evolutionary computation, San Diego, CA, March 10–13, pp 1931–1938
[80] Kennedy J (2000) Stereotyping: improving particle swarm performance with cluster analysis. In: Proceedings of the IEEE international conference on evolutionary computation, pp 303–308
[81] Kennedy J (2003) Bare bones particle swarms. In: Proceedings of the 2003 IEEE swarm intelligence symposium (SIS'03), Indianapolis, IN, USA, April 24–26, pp 80–87
[82] Kennedy J (2004) Probability and dynamics in the particle swarm. In: Proceedings of the IEEE international conference on evolutionary computation, Washington, DC, USA, July 6–9, pp 340–347; Kennedy J (2005) Why does it need velocity? In: Proceedings of the IEEE swarm intelligence symposium (SIS'05), Pasadena, CA, USA, June 8–10, pp 38–44
[83] Kennedy J, Eberhart RC (1995) Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks, Perth, Australia, pp 1942–1948
[84] Kennedy J, Mendes R (2002) Population structure and particle swarm performance. In: Proceedings of the IEEE international conference on evolutionary computation, Honolulu, HI, USA, September 22–25, pp 1671–1676
[85] Kennedy J, Mendes R (2003) Neighborhood topologies in fully-informed and best-of-neighborhood particle swarms. In: Proceedings of the 2003 IEEE international workshop on soft computing in industrial applications (SMCia/03), Binghamton, New York, USA, October 12–14, pp 45–50
[86] Krink T, Lovbjerg M (2002) The life cycle model: combining particle swarm optimisation, genetic algorithms and hillclimbers. In: Lecture notes in computer science (LNCS) No. 2439: proceedings of parallel problem solving from nature VII (PPSN 2002), Granada, Spain, December 7–11, pp 621–630
[87] Lee S, Soak S, Oh S, Pedrycz W, Jeon M (2008) Modified binary particle swarm optimization. Prog Nat Sci 18:1161–1166
[88] Lei K, Wang F, Qiu Y (2005) An adaptive inertia weight strategy for particle swarm optimizer. In: Proceedings of the third international conference on mechatronics and information technology, Chongqing, China, September 21–24, pp 51–55
[89] Leontitsis A, Kontogiorgos D, Pagge J (2006) Repel the swarm to the optimum. Appl Math Comput 173(1):265–272
[90] Li X (2004) Better spread and convergence: particle swarm multi-objective optimization using the maximin fitness function. In: Proceedings of genetic and evolutionary computation conference (GECCO 2004), Seattle, WA, USA, June 26–30, pp 117–128
[91] Li X (2010) Niching without niching parameters: particle swarm optimization using a ring topology. IEEE Trans Evolut Comput 14(1):150–169
[92] Li X, Dam KH (2003) Comparing particle swarms for tracking extrema in dynamic environments. In: Proceedings of the 2003 congress on evolutionary computation (CEC'03), Canberra, Australia, December 8–12, pp 1772–1779
[93] Li Z, Wang W, Yan Y, Li Z (2011) PS-ABC: a hybrid algorithm based on particle swarm and artificial bee colony for high-dimensional optimization problems. Expert Syst Appl 42:8881–8895
[94] Li C, Yang S, Nguyen TT (2012) A self-learning particle swarm optimizer for global optimization problems. IEEE Trans Syst Man Cybernet Part B Cybernet 42(3):627–646
[95] Li Y, Zhan Z, Lin S, Zhang J, Luo X (2015a) Competitive and cooperative particle swarm optimization with information sharing mechanism for global optimization problems. Inf Sci 293:370–382
[96] Li Z, Nguyen TT, Chen S, Khac Truong T (2015b) A hybrid algorithm based on particle swarm and chemical reaction optimization for multi-object problems. Appl Soft Comput 35:525–540
[97] Liang JJ, Suganthan PN (2005) Dynamic multi-swarm particle swarm optimizer. In: Proceedings of IEEE swarm intelligence symposium, Pasadena, CA, USA, June 8–10, pp 124–129
[98] Liang JJ, Qin AK, Suganthan PN, Baskar S (2006) Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evolut Comput 10(3):281–295
[99] Lim W, Isa NAM (2014) Particle swarm optimization with adaptive time-varying topology connectivity. Appl Soft Comput 24:623–642
[100] Lim W, Isa NAM (2015) Adaptive division of labor particle swarm optimization. Expert Syst Appl 42:5887–5903
[101] Lin Q, Li J, Du Z, Chen J, Ming Z (2006a) A novel multi-objective particle swarm optimization with multiple search strategies. Eur J Oper Res 247:732–744
[102] Lin X, Li A, Chen B (2006b) Scheduling optimization of mixed model assembly lines with hybrid particle swarm optimization algorithm. Ind Eng Manag 11(1):53–57
[103] Liu Y, Qin Z, Xu Z (2004) Using relaxation velocity update strategy to improve particle swarm optimization. In: Proceedings of third international conference on machine learning and cybernetics, Shanghai, China, August 26–29, pp 2469–2472
[104] Liu F, Zhou J, Fang R (2005) An improved particle swarm optimization and its application in long-term stream flow forecast. In: Proceedings of 2005 international conference on machine learning and cybernetics, Guangzhou, China, August 18–21, pp 2913–2918
[105] Liu H, Yang G, Song G (2014) MIMO radar array synthesis using QPSO with normal distributed contraction-expansion factor. Procedia Eng 15:2449–2453
[106] Liu T, Jiao L, Ma W, Ma J, Shang R (2016) A new quantum-behaved particle swarm optimization based on cultural evolution mechanism for multiobjective problems. Knowl Based Syst 101:90–99
[107] Lovbjerg M, Krink T (2002) Extending particle swarm optimizers with self-organized criticality. In: Proceedings of IEEE congress on evolutionary computation (CEC 2002), Honolulu, HI, USA, May 7–11, pp 1588–1593
[108] Lovbjerg M, Rasmussen TK, Krink T (2001) Hybrid particle swarm optimizer with breeding and subpopulations. In: Proceedings of third genetic and evolutionary computation conference (GECCO 2001), San Francisco-Silicon Valley, CA, USA, July 7–11, pp 469–476
[109] Lu J, Hu H, Bai Y (2015a) Generalized radial basis function neural network based on an improved dynamic particle swarm optimization and AdaBoost algorithm. Neurocomputing 152:305–315
[110] Lu Y, Zeng N, Liu Y, Zhang Z (2015b) A hybrid wavelet neural network and switching particle swarm optimization algorithm for face direction recognition. Neurocomputing 155:219–244
[111] Medasani S, Owechko Y (2005) Possibilistic particle swarms for optimization. In: Applications of neural networks and machine learning in image processing IX, vol 5673, pp 82–89
[112] Mendes R, Kennedy J, Neves J (2004) The fully informed particle swarm: simpler, maybe better. IEEE Trans Evolut Comput 8(3):204–210
[113] Meng A, Li Z, Yin H, Chen S, Guo Z (2015) Accelerating particle swarm optimization using crisscross search. Inf Sci 329:52–72
[114] Mikki S, Kishk A (2005) Improved particle swarm optimization technique using hard boundary conditions. Microw Opt Technol Lett 46(5):422–426
[115] Mohais AS, Mendes R, Ward C (2005) Neighborhood re-structuring in particle swarm optimization. In: Proceedings of Australian conference on artificial intelligence, Sydney, Australia, December 5–9, pp 776–785
[116] Monson CK, Seppi KD (2004) The Kalman swarm: a new approach to particle motion in swarm optimization. In: Proceedings of genetic and evolutionary computation conference (GECCO 2004), Seattle, WA, USA, June 26–30, pp 140–150
[117] Monson CK, Seppi KD (2005) Bayesian optimization models for particle swarms. In: Proceedings of genetic and evolutionary computation conference (GECCO 2005), Washington, DC, USA, June 25–29, pp 193–200
[118] Mostaghim S, Teich J (2003) Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO). In: Proceedings of the 2003 IEEE swarm intelligence symposium (SIS'03), Indianapolis, Indiana, USA, April 24–26, pp 26–33
[119] Mu B, Wen S, Yuan S, Li H (2015) PPSO: PCA based particle swarm optimization for solving conditional nonlinear optimal perturbation. Comput Geosci 83:65–71
[120] Netjinda N, Achalakul T, Sirinaovakul B (2015) Particle swarm optimization inspired by starling flock behavior. Appl Soft Comput 35:411–422
[121] Ngo TT, Sadollah A, Kim JH (2016) A cooperative particle swarm optimizer with stochastic movements for computationally expensive numerical optimization problems. J Comput Sci 13:68–82
[122] Nickabadi A, Ebadzadeh MM, Safabakhsh R (2011) A novel particle swarm optimization algorithm with adaptive inertia weight. Appl Soft Comput 11:3658–3670
[123] Niu B, Zhu Y, He X (2005) Multi-population cooperative particle swarm optimization. In: Proceedings of advances in artificial life—the eighth European conference (ECAL 2005), Canterbury, UK, September 5–9, pp 874–883
[124] Noel MM, Jannett TC (2004) Simulation of a new hybrid particle swarm optimization algorithm. In: Proceedings of the thirty-sixth IEEE southeastern symposium on system theory, Atlanta, Georgia, USA, March 14–16, pp 150–153
[125] Ozcan E, Mohan CK (1998) Analysis of a simple particle swarm optimization system. In: Intelligent engineering systems through artificial neural networks, pp 253–258
[126] Pampara G, Franken N, Engelbrecht AP (2005) Combining particle swarm optimization with angle modulation to solve binary problems. In: Proceedings of the 2005 IEEE congress on evolutionary computation, Edinburgh, UK, September 2–4, pp 89–96
[127] Park JB, Jeong YW, Shin JR, Lee KY (2010) An improved particle swarm optimization for nonconvex economic dispatch problems. IEEE Trans Power Syst 25(1):156–166
[128] Parsopoulos KE, Vrahatis MN (2002a) Initializing the particle swarm optimizer using the nonlinear simplex method. WSEAS Press, Rome
[129] Parsopoulos KE, Vrahatis MN (2002b) Recent approaches to global optimization problems through particle swarm optimization. Nat Comput 1:235–306
[130] Parsopoulos KE, Vrahatis MN (2004) On the computation of all global minimizers through particle swarm optimization. IEEE Trans Evolut Comput 8(3):211–224
[131] Peer E, van den Bergh F, Engelbrecht AP (2003) Using neighborhoods with the guaranteed convergence PSO. In: Proceedings of IEEE swarm intelligence symposium (SIS 2003), Indianapolis, IN, USA, April 24–26, pp 235–242
[132] Peng CC, Chen CH (2015) Compensatory neural fuzzy network with symbiotic particle swarm optimization for temperature control. Appl Math Model 39:383–395
[133] Peram T, Veeramachaneni K, Mohan CK (2003) Fitness-distance-ratio based particle swarm optimization. In: Proceedings of 2003 IEEE swarm intelligence symposium, Indianapolis, Indiana, USA, April 24–26, pp 174–181
[134] Poli R (2008) Dynamics and stability of the sampling distribution of particle swarm optimisers via moment analysis. J Artif Evol Appl 2008:10–34
[135] Poli R (2009) Mean and variance of the sampling distribution of particle swarm optimizers during stagnation. IEEE Trans Evolut Comput 13(4):712–721
[136] Poli R, Kennedy J, Blackwell T (2007) Particle swarm optimization—an overview. Swarm Intell 1(1):33–57
[137] Qian X, Cao M, Su Z, Chen J (2012) A hybrid particle swarm optimization (PSO)-simplex algorithm for damage identification of delaminated beams. Math Probl Eng 2012:1–11
[138] Qin Z, Yu F, Shi Z (2006) Adaptive inertia weight particle swarm optimization. In: Proceedings of the genetic and evolutionary computation conference, Zakopane, Poland, June 25–29, pp 450–459
[139] Ratnaweera A, Halgamuge S, Watson H (2004) Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans Evolut Comput 8(3):240–255
[140] Reynolds CW (1987) Flocks, herds, and schools: a distributed behavioral model. Comput Graph 21(4):25–34
[141] Richards M, Ventura D (2004) Choosing a starting configuration for particle swarm optimization. In: Proceedings of 2004 IEEE international joint conference on neural networks, Budapest, Hungary, July 25–29, pp 2309–2312
[142] Richer TJ, Blackwell TM (2006) The Levy particle swarm. In: Proceedings of the IEEE congress on evolutionary computation, Vancouver, BC, Canada, July 16–21, pp 808–815
[143] Riget J, Vesterstrom JS (2002) A diversity-guided particle swarm optimizer—the ARPSO. Technical Report 2002-02, Department of Computer Science, Aarhus University, Aarhus, Denmark
[144] Robinson J, Rahmat-Samii Y (2004) Particle swarm optimization in electromagnetics. IEEE Trans Antennas Propag 52(2):397–407
[145] Robinson J, Sinton S, Rahmat-Samii Y (2002) Particle swarm, genetic algorithm, and their hybrids: optimization of a profiled corrugated horn antenna. In: Proceedings of 2002 IEEE international symposium on antennas and propagation, San Antonio, Texas, USA, June 16–21, pp 314–317
[146] Roy R, Ghoshal SP (2008) A novel crazy swarm optimized economic load dispatch for various types of cost functions. Electr Power Energy Syst 30:242–253
[147] Salehian S, Subraminiam SK (2015) Unequal clustering by improved particle swarm optimization in wireless sensor network. Procedia Comput Sci 62:403–409
[148] Samuel GG, Rajan CCA (2015) Hybrid: particle swarm optimization-genetic algorithm and particle swarm optimization-shuffled frog leaping algorithm for long-term generator maintenance scheduling. Electr Power Energy Syst 65:432–442
[149] Schaffer JD (1985) Multiobjective optimization with vector evaluated genetic algorithms. In: Proceedings of the IEEE international conference on genetic algorithms, Pittsburgh, Pennsylvania, USA, pp 93–100
[150] Schoeman IL, Engelbrecht AP (2005) A parallel vector-based particle swarm optimizer. In: Proceedings of the international conference on neural networks and genetic algorithms (ICANNGA 2005), Portugal, pp 268–271
[151] Schutte JF, Groenwold AA (2005) A study of global optimization using particle swarms. J Glob Optim 31:93–108
[152] Selleri S, Mussetta M, Pirinoli P (2006) Some insight over new variations of the particle swarm optimization method. IEEE Antennas Wirel Propag Lett 5(1):235–238
[153] Selvakumar AI, Thanushkodi K (2009) Optimization using civilized swarm: solution to economic dispatch with multiple minima. Electr Power Syst Res 79:8–16
[154] Seo JH, Im CH, Heo CG (2006) Multimodal function optimization based on particle swarm optimization. IEEE Trans Magn 42(4):1095–1098
[155] Sharifi A, Kordestani JK, Mahdaviani M, Meybodi MR (2015) A novel hybrid adaptive collaborative approach based on particle swarm optimization and local search for dynamic optimization problems. Appl Soft Comput 32:432–448
[156] Shelokar PS, Siarry P, Jayaraman VK, Kulkarni BD (2007) Particle swarm and ant colony algorithms hybridized for improved continuous optimization. Appl Math Comput 188:129–142
[157] Shi Y, Eberhart RC (1998) A modified particle swarm optimizer. In: Proceedings of the IEEE international conference on evolutionary computation, Anchorage, Alaska, USA, May 4–9, pp 69–73
[158] Shi Y, Eberhart RC (2001) Fuzzy adaptive particle swarm optimization. In: Proceedings of the congress on evolutionary computation, Seoul, Korea, May 27–30, pp 101–106
[159] Shin Y, Kita E (2014) Search performance improvement of particle swarm optimization by second best particle information. Appl Math Comput 246:346–354
[160] Shirkhani R, Jazayeri-Rad H, Hashemi SJ (2014) Modeling of a solid oxide fuel cell power plant using an ensemble of neural networks based on a combination of the adaptive particle swarm optimization and Levenberg-Marquardt algorithms. J Nat Gas Sci Eng 21:1171–1183
[161] Sierra MR, Coello CAC (2005) Improving PSO-based multi-objective optimization using crowding, mutation and epsilon-dominance. Lect Notes Comput Sci 3410:505–519
[162] Soleimani H, Kannan G (2015) A hybrid particle swarm optimization and genetic algorithm for closed-loop supply chain network design in large-scale networks. Appl Math Model 39:3990–4012
[163] Stacey A, Jancic M, Grundy I (2003) Particle swarm optimization with mutation. In: Proceedings of IEEE congress on evolutionary computation 2003 (CEC 2003), Canberra, Australia, December 8–12, pp 1425–1430
[164] Suganthan PN (1999) Particle swarm optimizer with neighborhood operator. In: Proceedings of the congress on evolutionary computation, Washington, DC, USA, July 6–9, pp 1958–1962
[165] Sun J, Feng B, Xu W (2004) Particle swarm optimization with particles having quantum behavior. In: Proceedings of the congress on evolutionary computation, Portland, OR, USA, June 19–23, pp 325–331
[166] Tang Y, Wang Z, Fang J (2011) Feedback learning particle swarm optimization. Appl Soft Comput 11:4713–4725
[167] Tanweer MR, Suresh S, Sundararajan N (2016) Dynamic mentoring and self-regulation based particle swarm optimization algorithm for solving complex real-world optimization problems. Inf Sci 326:1–24
[168] Tatsumi K, Ibuki T, Tanino T (2013) A chaotic particle swarm optimization exploiting a virtual quartic objective function based on the personal and global best solutions. Appl Math Comput 219(17):8991–9011
[169] Tatsumi K, Ibuki T, Tanino T (2015) Particle swarm optimization with stochastic selection of perturbation-based chaotic updating system. Appl Math Comput 269:904–929
[170] Ting T, Rao MVC, Loo CK (2003) A new class of operators to accelerate particle swarm optimization. In: Proceedings of IEEE congress on evolutionary computation 2003 (CEC 2003), Canberra, Australia, December 8–12, pp 2406–2410
[171] Trelea IC (2003) The particle swarm optimization algorithm: convergence analysis and parameter selection. Inf Process Lett 85(6):317–325
[172] Tsafarakis S, Saridakis C, Baltas G, Matsatsinis N (2013) Hybrid particle swarm optimization with mutation for optimizing industrial product lines: an application to a mixed solution space considering both discrete and continuous design variables. Ind Market Manage 42(4):496–506
[173] van den Bergh F (2001) An analysis of particle swarm optimizers. Ph.D. dissertation, University of Pretoria, Pretoria, South Africa
[174] van den Bergh F, Engelbrecht AP (2002) A new locally convergent particle swarm optimizer. In: Proceedings of IEEE conference on systems, man and cybernetics, Hammamet, Tunisia, October 2002, pp 96–101
[175] van den Bergh F, Engelbrecht AP (2004) A cooperative approach to particle swarm optimization. IEEE Trans Evolut Comput 8(3):225–239
[176] van den Bergh F, Engelbrecht AP (2006) A study of particle swarm optimization particle trajectories. Inf Sci 176:937–971
[177] Vitorino LN, Ribeiro SF, Bastos-Filho CJA (2015) A mechanism based on artificial bee colony to generate diversity in particle swarm optimization. Neurocomputing 148:39–45
[178] Vlachogiannis JG, Lee KY (2009) Economic load dispatch—a comparative study on heuristic optimization techniques with an improved coordinated aggregation based PSO. IEEE Trans Power Syst 24(2):991–1001
[179] Wang W (2012) Research on particle swarm optimization algorithm and its application. Doctoral dissertation, Southwest Jiaotong University, pp 36–37
[180] Wang Q, Wang Z, Wang S (2005) A modified particle swarm optimizer using dynamic inertia weight. China Mech Eng 16(11):945–948
[181] Wang H, Wu Z, Rahnamayan S, Liu Y, Ventresca M (2011) Enhancing particle swarm optimization using generalized opposition-based learning. Inf Sci 181:4699–4714
[182] Wang H, Sun H, Li C, Rahnamayan S, Pan J (2013) Diversity enhanced particle swarm optimization with neighborhood search. Inf Sci 223:119–135
[183] Wen W, Liu G (2005) Swarm double-tabu search. In: First international conference on intelligent computing, Changsha, China, August 23–26, pp 1231–1234
[184] Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evolut Comput 1(1):67–82
[185] Xie X, Zhang W, Yang Z (2002) A dissipative particle swarm optimization. In: Proceedings of IEEE congress on evolutionary computation, Honolulu, HI, USA, May 2002, pp 1456–1461
[186] Xie X, Zhang W, Bi D (2004) Optimizing semiconductor devices by self-organizing particle swarm. In: Proceedings of congress on evolutionary computation (CEC 2004), Portland, Oregon, USA, June 19–23, pp 2017–2022
[187] Yang C, Simon D (2005) A new particle swarm optimization technique. In: Proceedings of 17th international conference on systems engineering (ICSEng 2005), Las Vegas, Nevada, USA, August 16–18, pp 164–169
[188] Yang Z, Wang F (2006) An analysis of roulette selection in early particle swarm optimizing. In: Proceedings of the 1st international symposium on systems
and control in aerospace and astronautics (ISSCAA 2006), Harbin, China, January 19–21, pp 960–970
[189] Yang X, Yuan J, Yuan J, Mao H (2007) A modified particle swarm optimizer with dynamic adaptation. Appl Math Comput 189:1205–1213
[190] Yang C, Gao W, Liu N, Song C (2015) Low-discrepancy sequence initialized particle swarm optimization algorithm with high-order nonlinear time-varying inertia weight. Appl Soft Comput 29:386–394
[191] Yasuda K, Ide A, Iwasaki N (2003) Adaptive particle swarm optimization. In: Proceedings of IEEE international conference on systems, man and cybernetics, Washington, DC, USA, October 5–8, pp 1554–1559
[192] Yasuda K, Iwasaki N (2004) Adaptive particle swarm optimization using velocity information of swarm. In: Proceedings of IEEE international conference on systems, man and cybernetics, Hague, Netherlands, October 10–13, pp 3475–3481
[193] Yu H, Zhang L, Chen D, Song X, Hu S (2005) Estimation of model parameters using composite particle swarm optimization. J Chem Eng Chin Univ 19(5):675–680
[194] Yuan Y, Ji B, Yuan X, Huang Y (2015) Lockage scheduling of Three Gorges-Gezhouba dams by hybrid of chaotic particle swarm optimization and heuristic-adjusted strategies. Appl Math Comput 270:74–89
[195] Zeng J, Cui Z, Wang L (2005) A differential evolutionary particle swarm optimization with controller. In: Proceedings of the first international conference on intelligent computing (ICIC 2005), Hefei, China, August 23–25, pp 467–476
[196] Zhai S, Jiang T (2015) A new sense-through-foliage target recognition method based on hybrid differential evolution and self-adaptive particle swarm optimization-based support vector machine. Neurocomputing 149:573–584
[197] Zhan Z, Zhang J, Li Y, Chung HH (2009) Adaptive particle swarm optimization. IEEE Trans Syst Man Cybernet Part B Cybernet 39(6):1362–1381
[198] Zhan Z, Zhang J, Li Y, Shi Y (2011) Orthogonal learning particle swarm optimization. IEEE Trans Evolut Comput 15(6):832–847
[199] Zhang L, Yu H, Hu S (2003) A new approach to improve particle swarm optimization. In: Proceedings of the genetic and evolutionary computation conference 2003 (GECCO 2003), Chicago, IL, USA, July 12–16, pp 134–139
[200] Zhang R, Zhou J, Mo L, Ouyang S, Liao X (2013) Economic environmental dispatch using an enhanced multi-objective cultural algorithm. Electr Power Syst Res 99:18–29
[201] Zhang L, Tang Y, Hua C, Guan X (2015) A new particle swarm optimization algorithm with adaptive inertia weight based on Bayesian techniques. Appl Soft Comput 28:138–149