2. Swarm Intelligence
● Swarm intelligence (SI) is a branch of artificial intelligence (AI) based on
the collective behavior of elements in decentralized, self-organized
systems.
● Any attempt to design algorithms or distributed problem-solving devices
inspired by the collective behavior of social insect colonies and other animal
societies.
● An artificial intelligence (AI) technique based on the collective behavior in
decentralized, self-organized systems
● Generally made up of agents who interact with each other and the
environment
● No centralized control structures
● Based on group behavior found in nature
3. What is PSO?
● Particle swarm optimization (PSO) is a population-based
stochastic optimization technique.
● Developed by Dr. Eberhart and Dr. Kennedy in 1995.
● Inspired by social behavior of bird flocking or fish schooling.
● The swarm searches for the food in a cooperative way.
● Each member in the swarm learns from its experience and also
from other members for changing the search pattern to locate
the food.
4. How It Works?
● The goal of PSO is to find the optimal solution to an optimization problem
within a given search space.
● The space of all feasible solutions (the set of all possible solution candidates) is
called the search space. Each point in the search space represents one possible
solution.
● PSO starts by initializing the population randomly.
● Solutions are assigned randomized velocities to explore the search space.
● Each solution in PSO is referred to as a particle.
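The initialization step above can be sketched as follows (a minimal example assuming NumPy; the 2-D search space, bounds, and swarm size are illustrative choices, not part of the original slides):

```python
import numpy as np

# Illustrative setup: 30 particles in a 2-D search space bounded by [-5, 5].
rng = np.random.default_rng(seed=0)
n_particles, dim = 30, 2
lo, hi = -5.0, 5.0

# Each row is one particle, i.e. one candidate solution (a point in the
# search space) placed at random.
positions = rng.uniform(lo, hi, size=(n_particles, dim))

# Each particle also receives a randomized velocity so it can start
# exploring the search space.
velocities = rng.uniform(-(hi - lo), hi - lo, size=(n_particles, dim))
```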
5. How It Works?
● Three distinct features of PSO are
○ Best fitness of each particle (pbest)
○ Best fitness of the swarm (gbest)
○ Velocity and position update of each particle
● PSO is initialized with a group of random particles (solutions) and then
searches for the optimum by updating generations.
● Particles move through the solution space, and are evaluated according to
some fitness criterion after each time step. In every iteration, each particle is
updated following two “best” values.
6. How It Works?
● The first one is the best solution the particle has obtained so far. This is called pbest.
● The best value obtained so far by any particle in the population or the global
best is called gbest.
● Velocity and position are updated for exploring and exploiting the search
space to locate the optimal solution.
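The pbest/gbest bookkeeping described above can be sketched in a few lines (a minimal example assuming NumPy; the sphere function and all variable names are illustrative stand-ins for a real objective):

```python
import numpy as np

def sphere(x):
    # Toy objective (lower is better); stands in for the real fitness function.
    return float(np.sum(x ** 2))

rng = np.random.default_rng(seed=1)
positions = rng.uniform(-5.0, 5.0, size=(20, 3))

# pbest: best position each particle has found so far; gbest: best of the swarm.
pbest_pos = positions.copy()
pbest_val = np.array([sphere(p) for p in positions])

# Suppose the particles have moved; re-evaluate and update both "best" records.
new_positions = positions + rng.normal(0.0, 0.1, size=positions.shape)
new_val = np.array([sphere(p) for p in new_positions])

improved = new_val < pbest_val               # particles that beat their own pbest
pbest_pos[improved] = new_positions[improved]
pbest_val[improved] = new_val[improved]

gbest_pos = pbest_pos[np.argmin(pbest_val)]  # best position in the whole swarm
gbest_val = float(pbest_val.min())
```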
7. Algorithm
For each particle
    Initialize particle
End

Do
    For each particle
        Calculate fitness value
        If the fitness value is better than the best fitness value (pBest) in history
            Set current value as the new pBest
    End
8. Algorithm
    Choose the particle with the best fitness value of all the particles as the gBest
    For each particle
        Calculate particle velocity according to equation (a)
        Update particle position according to equation (b)
    End
While maximum iterations or minimum error criterion is not attained
9. Equations to update position and velocity
● v[] = v[] + c1 * rand() * (pbest[] - present[]) + c2 * rand() * (gbest[] - present[]) (a)
● present[] = present[] + v[] (b)
where
v[] is the particle velocity,
present[] is the current particle (solution),
pbest[] is the personal best,
gbest[] is the global best,
rand() is a random number in (0, 1),
c1, c2 are learning factors, usually c1 = c2 = 2.
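Putting the pseudocode and equations (a) and (b) together gives a minimal NumPy sketch (function name, bounds, and the sphere objective are illustrative; a velocity clamp is added, as in early PSO variants, to keep the swarm stable):

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=200, c1=2.0, c2=2.0,
        lo=-5.0, hi=5.0, seed=0):
    # Minimal PSO for minimization, following equations (a) and (b).
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # present[]
    v = rng.uniform(-1.0, 1.0, (n_particles, dim))   # v[]
    vmax = 0.5 * (hi - lo)                           # velocity clamp (assumption)

    pbest = x.copy()                                 # pbest[]
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()       # gbest[]

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # (a): v[] = v[] + c1*rand()*(pbest[] - present[]) + c2*rand()*(gbest[] - present[])
        v = np.clip(v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x), -vmax, vmax)
        # (b): present[] = present[] + v[]
        x = x + v

        vals = np.array([fitness(p) for p in x])
        better = vals < pbest_val                    # update each particle's pbest
        pbest[better] = x[better]
        pbest_val[better] = vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()   # update the swarm's gbest

    return gbest, float(pbest_val.min())

best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=2)
```

On a smooth 2-D objective like this, the swarm typically drives the best fitness close to zero within a few hundred iterations.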
10. Stopping Criteria
● The maximum number of iterations the PSO executes: set a predefined
maximum number of iterations or generations; the algorithm terminates when
it reaches this limit.
● Terminate the algorithm if the best-known solution remains unchanged or
shows no significant improvement over a specified number of iterations.
● The minimum error requirement. If prior knowledge of the desired solution
quality is known, stop the algorithm once the best solution meets or exceeds
this quality.
● This stop condition depends on the problem to be optimized.
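The stagnation-based stop in the second bullet can be sketched as a small helper (function and parameter names are illustrative assumptions, not from the slides):

```python
def should_stop(gbest_history, patience=20, tol=1e-8):
    # Stop when the best-known fitness (lower is better) has not improved
    # by more than `tol` over the last `patience` iterations.
    if len(gbest_history) <= patience:
        return False
    improvement = gbest_history[-patience - 1] - min(gbest_history[-patience:])
    return improvement <= tol
```

For example, `should_stop([3.0] * 25)` signals a stop (no improvement for 20 iterations), while a history that is still shrinking keeps the loop running.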
12. Advantages
● Easy to understand and implement
● Very few algorithm parameters to adjust
● There are no evolution operators such as crossover and mutation
● PSO requires less computation, so it is often more efficient
● In many cases PSO is more flexible in maintaining a balance between
global and local search of the search space
● Easily parallelized for concurrent processing
13. Disadvantages
● Can easily fall (get trapped) into a local optimum in high-dimensional spaces
● Low convergence rate in the iterative process
● Needs memory to update velocity
14. Applications
Smart City
GIS-based placement of charging stations for electric vehicles
Forecasting day-ahead traffic flow on highways
Health Care
Diagnosis of Alzheimer’s Disease
• Outperforming several SVM models and two other state-of-the-art deep learning
methods
Intelligent Leukaemia diagnosis
• Escaping from the local optima trap
15. Applications
Environment
Forecasting short-term atmospheric pollutant concentration based on PSO-SVM
• High forecasting accuracy
Industry
Positioning a 3D wind turbine with multiple hub heights on flat terrain
General Purpose
Travelling salesman problem
Path planning of multi-robots
17. Comparison between GA and PSO
● Both algorithms start with a randomly generated population
● Both have fitness values to evaluate the population
● Both update the population and search for the optimum with random
techniques.
● Neither system guarantees success.
● PSO does not have genetic operators like crossover and mutation. Particles
update themselves with the internal velocity. They also have memory, which
is important to the algorithm.
18. Comparison between GA and PSO
● Compared with genetic algorithms (GAs), the information sharing mechanism
in PSO is significantly different.
● In GAs, chromosomes share information with each other, so the whole
population moves as one group towards an optimal area.
● In PSO, only gBest (or lBest) gives out the information to others. It is a
one-way information-sharing mechanism.
● The evolution only looks for the best solution. Compared with GA, all the
particles tend to converge to the best solution quickly in most cases, even in
the local version.