Particle Swarm Optimization (PSO)
 


    Particle Swarm Optimization (PSO): Presentation Transcript

    • Department of Mechanical and Material Engineering: Particle Swarm Optimization (PSO)
    • Goal of Optimization: Find values of the variables that minimize or maximize the objective function while satisfying the constraints.
    • Flowchart of Optimal Design Procedure: Need for optimization → Choose design variables → Formulate constraints → Formulate objective function → Set up variable bounds → Select an optimization algorithm → Obtain solution(s)
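    To make these formulation steps concrete, here is a minimal Python sketch of posing such a problem. The two-variable sphere function, its bounds, and the feasibility check are illustrative assumptions, not content from the slides.

        import numpy as np

        # Design variables: x = (x1, x2), each bounded to [-5, 5]
        bounds = np.array([[-5.0, 5.0],
                           [-5.0, 5.0]])

        def objective(x):
            # Objective to minimize: sphere function f(x) = x1^2 + x2^2
            return float(np.sum(np.asarray(x) ** 2))

        def is_feasible(x):
            # Simple constraint: the point must stay inside the variable bounds
            x = np.asarray(x)
            return bool(np.all(x >= bounds[:, 0]) and np.all(x <= bounds[:, 1]))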
    • Particle Swarm Optimization Swarm Intelligence (SI) • SI is artificial intelligence based on the collective behavior of decentralized, self-organized systems. • The expression was introduced by Gerardo Beni and Jing Wang in 1989, in the context of cellular robotic systems. • SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment. • Natural examples of SI include ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling.
    • Particle Swarm Optimization Some SI Applications • The U.S. military is investigating swarm techniques for controlling unmanned vehicles. • NASA is investigating the use of swarm technology for planetary mapping.
    • Particle Swarm Optimization • The PSO algorithm was first described in 1995 by James Kennedy and Russell C. Eberhart, inspired by the social behavior of bird flocking and fish schooling. • PSO is an artificial intelligence (AI) technique that can be used to find approximate solutions to extremely difficult or impossible numeric maximization and minimization problems. • Candidate solutions (hypotheses) are plotted in the search space and seeded with an initial velocity, as well as a communication channel between the particles. • It is a simple algorithm, easy to implement, with few parameters to adjust, mainly those of the velocity update.
    • Particle Swarm Optimization How it works: • PSO is initialized with a group of random particles (solutions) and then searches for optima by updating generations. • Particles move through the solution space and are evaluated according to some fitness criterion after each time step. In every iteration, each particle is updated by following two "best" values. • The first one is the best solution (fitness) the particle has achieved so far (the fitness value is also stored). This value is called pbest.
    • Particle Swarm Optimization How it works: • Another "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population. This second best value is a global best, called gbest. • When a particle takes only part of the population as its topological neighbors, the second best value is a local best, called lbest. Neighborhood bests allow parallel exploration of the search space and reduce the susceptibility of PSO to falling into local minima, but slow down convergence speed.
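    A minimal sketch of how pbest and gbest might be tracked, assuming fitness is being minimized and all inputs are NumPy arrays; the function and array names are illustrative, not from the slides.

        import numpy as np

        def update_bests(positions, fitness, pbest_pos, pbest_fit):
            # Personal best (pbest): keep the best position each particle has seen so far
            improved = fitness < pbest_fit
            pbest_pos[improved] = positions[improved]
            pbest_fit[improved] = fitness[improved]
            # Global best (gbest): the best personal best over the whole swarm
            g = int(np.argmin(pbest_fit))
            return pbest_pos, pbest_fit, pbest_pos[g].copy(), pbest_fit[g]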
    • Particle Swarm Optimization Each particle tries to modify its current position and velocity according to the distance between its current position and pbest, and the distance between its current position and gbest.

      v[n+1] = v[n] + c1 * rand1() * (pbest - CurrentPosition[n]) + c2 * rand2() * (gbest - CurrentPosition[n])
      CurrentPosition[n+1] = CurrentPosition[n] + v[n+1]

      CurrentPosition[n+1]: position of the particle at the (n+1)th iteration
      CurrentPosition[n]: position of the particle at the nth iteration
      v[n+1]: velocity of the particle at the (n+1)th iteration
      v[n]: velocity of the particle at the nth iteration
      c1: acceleration factor related to pbest
      c2: acceleration factor related to gbest
      rand1(): random number between 0 and 1
      rand2(): random number between 0 and 1
      gbest: gbest position of the swarm
      pbest: pbest position of the particle
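    In code, the two update rules above could look like the following sketch; the default values c1 = c2 = 2.0 are commonly used settings assumed here, and the random numbers are drawn per dimension (one common convention), neither of which is specified on the slide.

        import numpy as np

        def update_particle(x, v, pbest, gbest, c1=2.0, c2=2.0, rng=np.random):
            # Velocity update: previous velocity plus pulls toward pbest and gbest
            r1 = rng.random(len(x))   # rand1() in [0, 1), one value per dimension
            r2 = rng.random(len(x))   # rand2() in [0, 1), one value per dimension
            v_new = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            # Position update: move the particle by its new velocity
            x_new = x + v_new
            return x_new, v_new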
    • Particle Swarm Optimization Algorithm

      For each particle
          Initialize particle with feasible random number
      End

      Do
          For each particle
              Calculate the fitness value
              If the fitness value is better than the best fitness value (pbest) in history
                  Set current value as the new pbest
          End
          Choose the particle with the best fitness value of all the particles as the gbest
          For each particle
              Calculate particle velocity according to the velocity update equation
              Update particle position according to the position update equation
          End
      While maximum iterations or minimum error criteria is not attained
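    The pseudocode above translates fairly directly into the following runnable Python sketch. It minimizes the sphere function as a stand-in objective, and the swarm size, iteration count, and acceleration factors are illustrative assumptions rather than values from the slides.

        import numpy as np

        def pso(objective, dim, n_particles=30, n_iter=100,
                c1=2.0, c2=2.0, bounds=(-5.0, 5.0), seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            # Initialize each particle with a feasible random position and velocity
            x = rng.uniform(lo, hi, size=(n_particles, dim))
            v = rng.uniform(-(hi - lo), hi - lo, size=(n_particles, dim))
            pbest = x.copy()
            pbest_fit = np.array([objective(p) for p in x])
            g = int(np.argmin(pbest_fit))
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]

            for _ in range(n_iter):
                # Evaluate fitness and update personal and global bests
                fit = np.array([objective(p) for p in x])
                better = fit < pbest_fit
                pbest[better], pbest_fit[better] = x[better], fit[better]
                g = int(np.argmin(pbest_fit))
                if pbest_fit[g] < gbest_fit:
                    gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]
                # Velocity and position updates (see the equations above)
                r1 = rng.random((n_particles, dim))
                r2 = rng.random((n_particles, dim))
                v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)   # keep particles inside the search area
            return gbest, gbest_fit

        best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=2)
        print(best_x, best_f)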
    • Particle Swarm Optimization Swarm Topology • In PSO, two basic topologies have been used in the literature: the star topology (global neighborhood) and the ring topology (neighborhood of 3). [Figure: five particles I0 to I4 connected in each of the two topologies]
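    A sketch of how each topology decides which best a particle follows: in the star (global) topology every particle follows the single best of the whole swarm, while in the ring topology each particle follows the best of a small neighborhood (here of size 3, matching the slide). It assumes pbest_fit is a NumPy array of personal-best fitness values being minimized.

        import numpy as np

        def lbest_index(i, pbest_fit):
            # Ring topology, neighborhood of 3: particle i plus its left and right neighbors
            n = len(pbest_fit)
            neighbors = [(i - 1) % n, i, (i + 1) % n]
            return neighbors[int(np.argmin(pbest_fit[neighbors]))]

        def gbest_index(pbest_fit):
            # Star topology: global neighborhood, one shared best for the whole swarm
            return int(np.argmin(pbest_fit))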
    • Particle Swarm Optimization Variants of PSO
    • The Basic Variants of PSO

      Velocity Clamping (VC)
      Function: Controls the global exploration of the particle; reduces the size of the velocity step so that the particles remain in the search area, but it cannot change the search direction of the particle.
      Advantage: By reducing the size of the velocity step, VC controls the movement of the particle.
      Disadvantage: If all velocities become equal to the maximum velocity, the particle will continue to search within a hypercube and will probably remain near the optima but will not converge to the local area.

      Inertia Weight
      Function: Controls the momentum of the particle by weighing the contribution of the previous velocity.
      Advantage: A larger inertia weight at the start of the search encourages global exploration, while a smaller one toward the end fosters the convergence ability.
      Disadvantage: Convergence to the optimum is strongly influenced by the choice of inertia weight.

      Constriction Coefficient
      Function: Ensures the stable convergence of the PSO algorithm [21].
      Advantage: Similar to the inertia weight.
      Disadvantage: When the algorithm converges, the fixed values of the parameters might cause unnecessary fluctuation of the particles.

      Synchronous and Asynchronous Updates
      Function: Optimization in parallel processing.
      Advantages: Improved convergence rate; higher throughput, allowing more sophisticated finite element formulations and higher accuracy (finer mesh densities).
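    The first three variants all act on the velocity update; the sketch below shows one common way each is applied. The parameter values (vmax, w, and the constriction constants) are illustrative assumptions, not values from the table; the constriction formula is the widely used Clerc-Kennedy form.

        import numpy as np

        def clamp_velocity(v, vmax=1.0):
            # Velocity clamping: limit each velocity component to [-vmax, vmax]
            return np.clip(v, -vmax, vmax)

        def inertia_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=np.random):
            # Inertia weight: scale the previous velocity by w before adding the pulls
            r1, r2 = rng.random(len(x)), rng.random(len(x))
            return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

        def constriction_velocity(v, x, pbest, gbest, c1=2.05, c2=2.05, rng=np.random):
            # Constriction coefficient: chi scales the whole update for stable convergence
            phi = c1 + c2                                  # must satisfy phi > 4
            chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))
            r1, r2 = rng.random(len(x)), rng.random(len(x))
            return chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))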
    • Particle Swarm Optimization in Summary The PSO algorithm finds optimal values by following the workings of an animal society that has no leader. Particle swarm optimization consists of a swarm of particles, where each particle represents a potential solution. Particles move through a multidimensional search space to find the best position in that space (the best position may correspond to the maximum or the minimum of the objective function).
    • Question and Answer