A presentation on PSO with videos and animations to illustrate the concept. The slides cover the concept, the algorithm, applications, and a comparison of PSO with GA and DE.
Particle swarm optimization
3. Group Member Details
NAME                     UNIVERSITY ROLL NO   COLLEGE ROLL NO
Ananga Mohan Chatterjee  11500112046          142212
Aniket Anand             11500112047          142213
Madhuja Roy              11500112078          142244
Mahesh Tibrewal          11500112079          142245
5. What is Swarm Intelligence?
The term swarm represents an aggregation of animals or insects that work collectively to accomplish their day-to-day tasks in an intelligent and efficient manner.
SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment.
Natural examples of SI include ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling.
6. Origin of Particle Swarm Optimization
Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique developed by Dr. Russell C. Eberhart and Dr. James Kennedy in 1995, inspired by the social behaviour of bird flocking and fish schooling.
7. Origin of Particle Swarm Optimization (contd.)
Dr. Eberhart and Dr. Kennedy were inspired by the flocking and schooling patterns of birds and fish. They originally started out developing computer software simulations of birds flocking around food sources, and later realized how well their algorithms worked on optimization problems.
8. Concept of Particle Swarm Optimization
PSO is an artificial intelligence (AI) technique that can be used to find approximate solutions to extremely difficult or impossible numeric maximization and minimization problems.
In PSO, a swarm of n individuals communicates search directions (gradients) with one another, either directly or indirectly.
It is a simple algorithm, easy to implement, with few parameters to adjust, mainly those governing the velocity update.
10. Parameters in PSO
• The population is initialized by assigning random positions (Xi) and velocities (Vi); potential solutions are then flown through hyperspace.
• Each particle keeps track of its “best” (highest fitness) position in hyperspace; this position is called its pbest.
• At each time step, each particle stochastically accelerates towards its pbest and gbest (or lbest):
o “pbest”: the best position found so far by an individual particle.
o “gbest”: the best position found by the group as a whole.
o “lbest”: the best position found in a particle’s neighbourhood.
11. Flowchart
The PSO loop, from the initial position to the target position:
1. Initialize particles.
2. Calculate the fitness value for each particle.
3. If the current fitness value is better than the particle’s pBest, assign the current fitness as the new pBest; otherwise keep the previous pBest.
4. Assign the best particle’s pBest value to gBest.
5. Calculate the velocity for each particle.
6. Use each particle’s velocity value to update its position.
7. If the target or the maximum number of epochs is reached, end; otherwise return to step 2.
12. Mathematical Approach
Equations:
Vi(t+1) = Vi(t) + C1·rand()·(Pbesti − Xi(t)) + C2·Rand()·(Gbest − Xi(t))   ... (1)
Xi(t+1) = Xi(t) + Vi(t+1)   ... (2)
where
Vi = [vi1, vi2, ..., vin] is the velocity of particle i.
Xi = [xi1, xi2, ..., xin] is the position of particle i.
Pbesti : the best previous position of particle i (i.e., its local-best position or experience).
Gbest : the best position among all particles in the population X = [X1, X2, ..., XN] (i.e., the global-best position).
rand(.) and Rand(.) : two random variables uniformly distributed in [0, 1].
C1 and C2 : positive numbers called acceleration coefficients that pull each particle toward the individual best and the swarm best positions, respectively.
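As a minimal sketch of equations (1) and (2), a single update step for one two-dimensional particle might look like this in Python (the positions, velocities, and coefficient values below are illustrative, not taken from the slides):

```python
import random

# One PSO update step for a single 2-D particle, following Eqs. (1) and (2).
c1, c2 = 2.0, 2.0        # acceleration coefficients C1, C2
x = [1.0, 2.0]           # current position Xi
v = [0.1, -0.2]          # current velocity Vi
pbest = [0.8, 1.5]       # particle's own best position
gbest = [0.0, 0.0]       # swarm's best position

r1, r2 = random.random(), random.random()  # rand(), Rand() in [0, 1]
v = [v[d] + c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
     for d in range(2)]                    # Eq. (1): velocity update
x = [x[d] + v[d] for d in range(2)]        # Eq. (2): position update
```

In a real run this step is applied to every particle at every iteration, with fresh random numbers each time.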
13. PSO Pseudo Code
For each particle:
    Initialize particle
Do:
    For each particle:
        Calculate fitness value
        If the fitness value is better than the best fitness value (pBest) in history
            Set current value as the new pBest
    End
    For each particle:
        Find, in the particle’s neighborhood, the particle with the best fitness
        Calculate particle velocity according to the velocity equation (1)
        Apply the velocity constriction
        Update particle position according to the position equation (2)
        Apply the position constriction
    End
While maximum iterations or minimum error criterion is not attained
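The pseudo code above can be sketched as a runnable global-best PSO in Python. This is a minimal illustration, not the authors' implementation: the function name, parameter values, velocity clamping, and the inertia weight w are all assumed choices.

```python
import random

def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), vmax=1.0):
    """Minimal global-best PSO for minimization (illustrative sketch)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[random.uniform(-vmax, vmax) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [x[:] for x in X]                    # each particle's best position
    pbest_val = [fitness(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))  # velocity constriction
                X[i][d] += V[i][d]                        # position update
            f = fitness(X[i])
            if f < pbest_val[i]:                 # better than pBest in history?
                pbest[i], pbest_val[i] = X[i][:], f
                if f < gbest_val:
                    gbest, gbest_val = X[i][:], f
    return gbest, gbest_val

# Example: minimize the sphere function in 3 dimensions.
random.seed(42)
best, val = pso(lambda x: sum(t * t for t in x), dim=3)
```

On a smooth unimodal function like the sphere, a swarm of this size typically converges very close to the optimum within a few hundred iterations.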
14. Modifications in PSO structure
1. Selection of maximum velocity:
The velocities may become too high, so that the particles become uncontrolled and exceed the search space. Therefore, velocities are bounded by a maximum value Vmax, that is:
if vid > Vmax, set vid = Vmax; if vid < −Vmax, set vid = −Vmax.
2. Adding inertia weight:
A new parameter w for the PSO, named the inertia weight, is added in order to better control the scope of the search. Eq. (1) now becomes:
Vi(t+1) = w·Vi(t) + C1·rand()·(Pbesti − Xi(t)) + C2·Rand()·(Gbest − Xi(t))   ... (3)
15. Modifications in PSO structure (contd.)
3. Constriction factor:
If the algorithm is run without restraining the velocity, the system explodes after a few iterations. A constriction coefficient 𝜒 is therefore introduced in order to control the convergence properties:
𝜒 = 2 / |2 − C − √(C² − 4C)|, where C = C1 + C2 > 4   ... (4)
With the constriction factor, the PSO equation for computing the velocity is:
Vi(t+1) = 𝜒 [Vi(t) + C1·rand()·(Pbesti − Xi(t)) + C2·Rand()·(Gbest − Xi(t))]   ... (5)
Note that:
• if C = 5 then 𝜒 = 0.38 from Eq. (4), which causes a very pronounced damping effect;
• but if C is set to 4.1 then 𝜒 = 0.729, which works well.
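The two values quoted above can be checked directly from Eq. (4) with a few lines of Python:

```python
import math

def constriction(C):
    """Constriction coefficient chi from Eq. (4), valid for C = C1 + C2 > 4."""
    return 2.0 / abs(2.0 - C - math.sqrt(C * C - 4.0 * C))

print(round(constriction(5.0), 2))   # 0.38: very pronounced damping
print(round(constriction(4.1), 4))   # 0.7298: the commonly used value
```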
16. Population Topology
The pattern of connectedness between individuals is like a social network. The connection pattern controls how solutions can flow through the solution space.
[Figure: PSO gbest and lbest topologies]
18. Effect of Re-Initialization
Among the three algorithms, PSO has the highest tendency to cluster rapidly, and the swarm may quickly become stagnant. To remedy this drawback, several sub-grouping approaches have been proposed to reduce the dominant influence of the global best particle. A much simpler and frequently used alternative is to keep the global best particle and regenerate all or part of the remaining particles. This has the effect of generating a new swarm with the global best as one of its particles, and this process is called re-initialization.
In GA, the clustering is less obvious, but the top part of the population often looks similar, and re-initialization can likewise inject randomness into the population to improve diversity.
In DE, the clustering is the least pronounced, and re-initialization has the least effect.
19. Effect of Local Search
In GA, the density of solution space is less, so it is often found that the GA
operators cannot produce all potential solutions. A popular fix is the use of
local search to see if a better solution can be found around the solutions
produced by GA operators. The local search process is often time consuming,
and to apply it over the whole population could lead to a long solution time.
For PSO, the best particle has a dominant influence over the whole swarm, and
a time saving strategy is to only apply local search to the best particle, and this
can lead to solution improvement with shorter solution time. This strategy was
demonstrated to be highly effective for job shop scheduling in Pratchayaborirak
and Kachitvichyanukul (2011).
This same strategy may not yield the same effect in DE since the best particle
does not have a dominant influence on the population of solutions.
20. Effect of Subgrouping
The use of sub-grouping of homogenous population to improve
solution quality has been demonstrated in GA and PSO.
This sub-grouping allows some groups of solutions to be freed from
the influence of the dominant solutions, and thus the group may be
searching in a different area of the solution space and improve the
exploration aspect of the algorithms.
For DE, the best particle has little influence on the perturbation
process so it is rational to presume that sub-grouping with
homogeneous population may have limited effect on the solution
quality of DE.
21. Qualitative comparison of GA, PSO and DE

Property                                               GA           PSO     DE
Requires ranking of solutions                          Yes          No      No
Influence of population size on solution time          Exponential  Linear  Linear
Influence of best solution on population               Medium       Most    Less
Average fitness cannot be worse                        False        False   True
Tendency for premature convergence                     Medium       High    Low
Continuity (density) of search space                   Less         More    More
Ability to reach a good solution without local search  Less         More    More
Homogeneous sub-grouping improves convergence          Yes          Yes     N/A
23. Neural Network (NN) Training using PSO
A neural network is a complex function that accepts some numeric inputs and generates some numeric outputs.
A good way to get an idea of what training a neural network using PSO is like is to look at a program that creates a neural network predictor for a set of iris flowers, where the goal is to predict the species based on sepal length and width, and petal length and width.
The program uses an artificially small, 30-item subset of the famous 150-item benchmark data set called Fisher’s “Iris Data”.
A 4-input, 6-hidden, 3-output neural network is instantiated.
A fully connected 4-6-3 neural network has (4 * 6) + (6 * 3) + (6 + 3) = 51 weights and bias values.
A swarm of 12 virtual particles attempts to find the best set of neural network weights and bias values within a maximum of 700 iterations.
After PSO training has completed, the 51 best weight and bias values found are displayed. Using those weights and biases, the trained network’s predictions on the training items can then be evaluated.
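The weight count above, which sets the dimensionality of the PSO search space, can be checked with a few lines of Python:

```python
# Dimensionality of the PSO search space for a fully connected 4-6-3 network:
# one weight per input-to-hidden and hidden-to-output connection, plus one
# bias per hidden and output node. Each PSO particle is then a position
# vector of this length.
n_in, n_hid, n_out = 4, 6, 3
n_weights = (n_in * n_hid) + (n_hid * n_out) + (n_hid + n_out)
print(n_weights)  # 51
```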
24. Mobile Robot Navigation Using Particle Swarm Optimization and Adaptive NN
Improved particle swarm optimization (PSO) is used to optimize the path of a mobile robot through an environment containing static obstacles.
Unlike many optimization methods that produce non-smooth paths, the PSO method can generate smooth paths, which are preferable when designing continuous control technologies that realize path following on mobile robots.
To reduce the computational cost of optimization, a stochastic PSO (S-PSO) with high exploration ability is developed, so that a swarm of small size can accomplish path planning.
25. Hybridization of PSO with Other Evolutionary
Techniques
A popular research trend is to merge or combine PSO with other techniques, especially operators from other evolutionary computation techniques such as selection, crossover, and mutation.
Some improved and extended PSO methods:
Improved PSO (IPSO): It uses a combination of chaotic sequences and conventional linearly decreasing inertia
weights and crossover operation to increase both exploration and exploitation capability of PSO.
Modified PSO (MPSO): This approach is a mechanism to cope with the equality and inequality constraints.
Furthermore, a dynamic search-space reduction strategy is employed to accelerate the optimization process.
New PSO (NPSO): In this method, the particle is modified so that it also remembers its worst position. This modification improves exploration of the search space.
Improved Coordinated Aggregation based PSO (ICA-PSO): In this algorithm each particle in the swarm retains
a memory of its best position ever encountered, and is attracted only by other particles with better
achievements than its own with the exception of the particle with the best achievement, which moves
randomly.
Hybrid PSO with Sequential Quadratic Programming (PSO-SQP): The SQP method seems to be the best
nonlinear programming method for constrained optimization. It outperforms every other nonlinear
programming method in terms of efficiency, accuracy, and percentage of successful solutions, over a large
number of test problems.
27. ◊ The PSO algorithm finds optimal values by following the workings of an animal society that has no leader.
◊ Particle swarm optimization consists of a swarm of particles, where each particle represents a potential solution.
◊ Each particle moves through a multidimensional search space to find the best position in that space (the best position may correspond to a maximum or a minimum value).
◊ A constraint to keep in mind is that the velocity should take an optimum value: if it is too low the search is too slow, and if it is too high the method becomes unstable.
29. We would like to express our gratitude to all the respected faculty members of our department for providing us with this opportunity to give a presentation on a topic that was interesting to research. We thank our seniors for their able guidance and support in completing the presentation. Finally, a word of thanks to all those who were directly or indirectly involved in this presentation.