
- 1. Optimisation Techniques. Presented by Sarbjeet Singh, ME-ECE (regular), NITTTR Chandigarh. Under the guidance of Dr. Swapna Devi, Asso. Prof., ECE Dept., Sector-26, NITTTR, Chandigarh.
- 2. Optimization Methods: GA, PSO, ACO, BFO, BBO, etc.
- 3. Conceptual Algorithm
- 4. Genetic Algorithm • Problems are solved by an evolutionary process resulting in a best (fittest) solution (survivor). • Evolutionary Computing – 1960s, by I. Rechenberg. • Genetic Algorithms – invented by John Holland in 1975, made popular by John Koza in 1992.
- 5. How GAs Differ from Traditional Search Methods • GAs work with a coding of the parameter set, not the parameters themselves. • GAs search from a population of points, not a single point. • GAs use payoff information, not derivatives or auxiliary knowledge. • GAs use probabilistic transition rules, not deterministic rules.
- 6. Vocabulary • Gene – A single encoding of part of the solution space. • Chromosome – A string of “Genes” that represents a solution. • Population – The number of “Chromosomes” available to test.
- 7. Simple Example • f(x) = { MAX(x²): 0 <= x <= 31 } • Encode solution: just use 5 bits (1 or 0). • Generate initial population: A = 01101, B = 11000, C = 01000, D = 10011. • Evaluate each solution against the objective: A (01101) fitness 169, 14.4% of total; B (11000) fitness 576, 49.2%; C (01000) fitness 64, 5.5%; D (10011) fitness 361, 30.9%.
- 8. Simple Example (cont.) • Create the next generation of solutions – the probability of “being a parent” depends on fitness. • Ways for parents to create the next generation: – Reproduction: use a string again unmodified. – Crossover: cut and paste portions of one string to another. – Mutation: randomly flip a bit. – A combination of all of the above.
- 9. The Basic Genetic Algorithm 1. [Start] Generate a random population of n chromosomes (suitable solutions for the problem). 2. [Fitness] Evaluate the fitness f(x) of each chromosome x in the population. 3. [New population] Create a new population by repeating the following steps until the new population is complete: 1. [Selection] Select two parent chromosomes from the population according to their fitness (the better the fitness, the bigger the chance of being selected). 2. [Crossover] With a crossover probability, cross over the parents to form new offspring (children). If no crossover is performed, the offspring are exact copies of the parents. 3. [Mutation] With a mutation probability, mutate the new offspring at each locus (position in the chromosome). 4. [Accepting] Place the new offspring in the new population. 4. [Replace] Use the newly generated population for a further run of the algorithm. 5. [Test] If the end condition is satisfied, stop and return the best solution in the current population. 6. [Loop] Go to step 2.
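The basic GA above can be sketched in a few lines of Python, applied to the toy f(x) = x² problem from slide 7. This is a minimal illustration, not the presenter's code; the population size, crossover rate, and mutation rate are assumed illustrative values.

```python
import random

def fitness(chrom):
    x = int("".join(map(str, chrom)), 2)   # decode 5-bit string to integer
    return x * x

def select(pop):
    # [Selection] fitness-proportionate (roulette-wheel) selection.
    fits = [fitness(c) for c in pop]
    total = sum(fits)
    if total == 0:
        return random.choice(pop)
    return random.choices(pop, weights=fits, k=1)[0]

def crossover(a, b, pc=0.7):
    # [Crossover] single-point crossover with probability pc;
    # otherwise the offspring is a copy of the first parent.
    if random.random() < pc:
        point = random.randint(1, len(a) - 1)
        return a[:point] + b[point:]
    return a[:]

def mutate(chrom, pm=0.02):
    # [Mutation] flip each bit independently with probability pm.
    return [bit ^ 1 if random.random() < pm else bit for bit in chrom]

def ga(n=10, bits=5, generations=50):
    # [Start] random population, then repeat selection/crossover/mutation.
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(n)]
    for _ in range(generations):
        pop = [mutate(crossover(select(pop), select(pop)))
               for _ in range(n)]
    return max(pop, key=fitness)

best = ga()
print("best chromosome:", best, "fitness:", fitness(best))
```

With a fixed iteration budget as the end condition, the population typically drifts toward x = 31 (fitness 961), the maximum of x² on 5 bits.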
- 10. Genetic Algorithm • Based on the Darwinian paradigm: reproduction, competition, survival, selection. • Intrinsically a robust search and optimization mechanism.
- 11. Particle Swarm Optimization-PSO
- 12. Particle Swarm Optimization (PSO)
- 13. Particle Swarm Optimization-PSO • Particle Swarm Optimization (PSO) uses the concept of social interaction for problem solving. • It was developed in 1995 by James Kennedy and Russ Eberhart [Kennedy, J. and Eberhart, R. (1995). “Particle Swarm Optimization”, Proceedings of the 1995 IEEE International Conference on Neural Networks, pp. 1942-1948, IEEE Press.] • It has been applied successfully to a wide variety of search and optimization problems. • In PSO, a swarm of n individuals communicate either directly or indirectly with each other about search directions. • PSO is a simple but powerful search technique.
- 14. Collision Avoidance • Rule 1: Avoid collision with neighboring birds.
- 15. Velocity Matching• Rule 2: Match the velocity of neighboring birds
- 16. Flock Centering• Rule 3: Stay near neighboring birds
- 17. Particle Swarming – Characteristics • Simple rules for each individual • No central control – decentralized and hence robust • Emergent behavior – performs complex functions
- 18. • Bird flocking is one of the best examples of PSO in nature. • One of the motives for the development of PSO was to model social behavior.
- 19. Particle Swarm Optimization: The Anatomy of a Particle • A particle (individual) is composed of: – Three vectors: • the x-vector, X = <xk0, xk1, …, xk(n-1)>, records the current position (location) of the particle in the search space; • the p-vector, P = <pk0, pk1, …, pk(n-1)>, records the location of the best solution found so far by the particle; and • the v-vector, V = <vk0, vk1, …, vk(n-1)>, contains the direction in which the particle will travel if undisturbed. – Two fitness values: • the x-fitness records the fitness of the x-vector, and • the p-fitness records the fitness of the p-vector.
- 20. PSO: Swarm Search • Particles are agents that fly through the search space and record (and communicate) the best solution that they have discovered. • A particle moves from one location in the search space to another by adding the v-vector to the x-vector to get a new x-vector (Xi = Xi + Vi). • Once the particle computes the new Xi, it evaluates its new location. If the x-fitness is better than the p-fitness, then Pi = Xi and p-fitness = x-fitness.
- 21. PSO: Swarm Search • The v-vector is calculated before adding it to the x-vector, as follows: – vid = vid + c1*rnd()*(pid-xid) + c2*rnd()*(pgd-xid); – xid = xid + vid; • c1, c2 are learning rate / acceleration constants governing the cognitive and social components • where i is the index of the particle, g is the index of the particle with the best p-fitness, p is the personal best, and d is the dimension.
- 22. Update of Velocity & Position • vid = vid (inertia) + c1*r()*(pid – xid) (cognitive learning) + c2*r()*(pgd – xid) (social learning) • vid is clamped to [-Vmax, Vmax] • Update position: xid = xid + vid
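The per-dimension update on this slide can be sketched directly. This is an illustrative fragment; the values of c1, c2 and Vmax here are assumptions, not values from the slides.

```python
import random

def update_particle(x, v, pbest, gbest, c1=2.0, c2=2.0, vmax=4.0):
    # One velocity-and-position update for a single particle.
    new_x, new_v = [], []
    for d in range(len(x)):
        vd = (v[d]                                         # inertia
              + c1 * random.random() * (pbest[d] - x[d])   # cognitive learning
              + c2 * random.random() * (gbest[d] - x[d]))  # social learning
        vd = max(-vmax, min(vmax, vd))                     # clamp to [-Vmax, Vmax]
        new_v.append(vd)
        new_x.append(x[d] + vd)                            # xid = xid + vid
    return new_x, new_v
```

Note that r() is drawn independently for the cognitive and social terms, and the clamp is applied before the position update, matching the order on the slide.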
- 23. PSO: Swarm Search • Initially, the values of the velocity vectors are randomly generated within the range [-Vmax, Vmax], where Vmax is the maximum value that can be assigned to any vid.
- 24. PSO: Swarm Types • In his paper [Kennedy, J. (1997), “The Particle Swarm: Social Adaptation of Knowledge”, Proceedings of the 1997 International Conference on Evolutionary Computation, pp. 303-308, IEEE Press.], Kennedy identifies 4 types of PSO based on c1 and c2. • Given: vid = vid + c1*rnd()*(pid-xid) + c2*rnd()*(pgd-xid); xid = xid + vid; – Full Model (c1, c2 > 0) – Cognition Only (c1 > 0 and c2 = 0) – Social Only (c1 = 0 and c2 > 0) – Selfless (c1 = 0, c2 > 0, and g ≠ i)
- 25. Parameters • vid – velocity of the ith particle • pid – pBest position of the ith particle • pgd – the gBest position among the particles • xid – current position of the ith particle • c1 & c2 – acceleration constants • r() – random function in the range [0, 1] • w – inertia weight
- 26. PSO Algorithm (flowchart): Start → initialize particles with random positions and zero velocities → evaluate fitness values → compare & update fitness values with pbest and gbest (pbest = the best solution (fitness) a particle has achieved so far; gbest = the global best solution of all particles) → if the stopping criterion is met, end; otherwise update velocity and position and repeat.
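The flowchart above can be sketched as a complete loop. The sphere function f(x) = Σ x_d², the swarm size, the iteration budget, and the c1/c2/Vmax values are illustrative assumptions; velocities start at zero, as the flowchart specifies.

```python
import random

def pso(f, dim=2, n=20, iters=100, c1=2.0, c2=2.0, vmax=1.0):
    # Initialize particles with random positions and zero velocities.
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]                # best position per particle
    pbest_f = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best of all particles
    for _ in range(iters):                    # stopping criterion: iteration budget
        for i in range(n):
            for d in range(dim):
                vs[i][d] += (c1 * random.random() * (pbest[i][d] - xs[i][d])
                             + c2 * random.random() * (gbest[d] - xs[i][d]))
                vs[i][d] = max(-vmax, min(vmax, vs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])                     # evaluate fitness
            if fx < pbest_f[i]:               # compare & update pbest ...
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:              # ... and gbest
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

best, best_f = pso(lambda x: sum(xi * xi for xi in x))
print("gbest:", best, "f(gbest):", best_f)
```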
- 27. Particle Swarm Optimization: Swarm Topology • In PSO, two basic topologies are used: – Ring topology (neighborhood of 3) – Star topology (global neighborhood)
- 28. First Iteration (minimization problem; figure shows the best particle and other particles) 1. Initialize position 2. Create velocity (vector)
- 29. Second Iteration (minimization problem; figure shows the best particle and other particles) 1. Update new position 2. Create velocity (vector)
- 30. Learning Structures: “Inertia Term”
- 31. Learning Structures: “Personal Best (Pbest)” (figure shows the current position (X) and the personal best (Pbest))
- 32. Learning Structures: “Global Best (Gbest)” (figure shows the current position (X), the personal best (Pbest), and the global best (Gbest))
- 33. Properties of Particles1. Ability to exchange information with its neighbors2. Ability to memorize a previous position3. Ability to use information to make a decision.
- 34. Velocity Updating • Three terms update the velocity: 1. Inertia term 2. Cognitive term 3. Social learning term • Inertia term: this term forces the particle to move in the same direction – following its own way using the old velocity.
- 35. Velocity Updating • Cognitive term (personal best): this term forces the particle to go back to its previous best position – a conservative tendency.
- 36. Velocity Updating • Social learning term (global best): this term forces the particle to go to the global best position – a group-following tendency.
- 37. PSO Basic Idea: Cognitive Behavior • An individual remembers its past knowledge. (Figure: “Where should I move to?”, with locations of Food: 80, Food: 50, and Food: 100.)
- 38. Particle Swarm Optimization ~ Basic Idea: Social Behavior ~ • An individual gains knowledge from other population members. (Figure: “Where should I move to?”, with Bird 1: Food 150, Bird 2: Food 100, Bird 3: Food 100, and Bird 4: Food 400.)
- 39. Pitfalls of PSO • Particles tend to cluster, i.e., converge too fast, and may get stuck at a local optimum. • The movement of a particle may carry it into an infeasible region. • Inappropriate mapping of particle space into solution space.
- 40. Problem: particles tend to cluster in the same area. This reduces the movement of the swarm, as the particles are trapped in a small local area.
- 41. To solve this problem, some particles can be re-initialized into new positions, which may be in a better area. Other particles will then be pulled to the new area!
- 42. Modified PSO algorithm (flowchart): Start → initialize particles with random positions and zero velocities → evaluate fitness values → if the local search criterion is met, perform a local search → compare/update fitness values with pbest and gbest → if the stopping criterion is met, end; otherwise, if the re-initialization criterion is met, re-initialize some particles; then update velocity and position and repeat.
- 43. PSO: Related Issues• There are a number of related issues concerning PSO: – Controlling velocities (determining the best value for Vmax), – Swarm Size, – Neighborhood Size, – Updating X and Velocity Vectors, – Robust Settings for (ϕ1 and ϕ2),• Carlisle, A. and Dozier, G. (2001). “An Off-The-Shelf PSO”, Proceedings of the 2001 Workshop on Particle Swarm Optimization, pp. 1-6, Indianapolis, IN. ( http://antho.huntingdon.edu/publications/Off-The-Shelf_PSO.pdf)
- 44. Particle Swarm: Controlling Velocities• When using PSO, it is possible for the magnitude of the velocities to become very large.• Performance can suffer if Vmax is inappropriately set.• Two methods were developed for controlling the growth of velocities: – A dynamically adjusted inertia factor, and – A constriction coefficient.
- 45. PSO: The Inertia Factor• When the inertia factor is used, the equation for updating velocities is changed to: – vid = ω*vid + ϕ1*rnd()*(pid-xid) + ϕ2*rnd()*(pgd-xid);• Where ω is initialized to 1.0 and is gradually reduced over time (measured by cycles through the algorithm).
- 46. PSO: The Inertia Factor • The following weighting function is usually utilized: w = wMax – [(wMax – wMin) × iter] / maxIter, where wMax = initial weight, wMin = final weight, maxIter = maximum iteration number, and iter = current iteration number. • The position update remains s_i^(k+1) = s_i^k + V_i^(k+1).
- 47. PSO: Comments on the Inertia Weight Factor • A large inertia weight (w) facilitates a global search, while a small inertia weight facilitates a local search. • Linearly decreasing the inertia weight from a relatively large value to a small value through the course of the PSO run gives better PSO performance than fixed inertia weight settings. • Larger w → greater global search ability; smaller w → greater local search ability.
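The linearly decreasing schedule w = wMax – [(wMax – wMin) × iter] / maxIter from the slides is a one-liner. The bounds wMax = 0.9 and wMin = 0.4 below are assumed, commonly used illustrative values, not ones stated in the slides.

```python
def inertia_weight(iteration, max_iter, w_max=0.9, w_min=0.4):
    # Decreases linearly from w_max at iteration 0 to w_min at max_iter,
    # shifting the swarm from global search toward local search.
    return w_max - (w_max - w_min) * iteration / max_iter
```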
- 48. PSO: The Constriction Coefficient • In 1999, Maurice Clerc developed a constriction coefficient for PSO: – vid = K[vid + ϕ1*rnd()*(pid-xid) + ϕ2*rnd()*(pgd-xid)]; – where K = 2/|2 - ϕ - sqrt(ϕ² - 4ϕ)|, ϕ = ϕ1 + ϕ2, and ϕ > 4.
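Computing K from the formula on this slide is straightforward. The default ϕ1 = ϕ2 = 2.05 (so ϕ = 4.1) is a commonly used setting, assumed here for illustration; it gives K ≈ 0.729.

```python
import math

def constriction(phi1=2.05, phi2=2.05):
    # K = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, with phi = phi1 + phi2 > 4.
    phi = phi1 + phi2
    if phi <= 4:
        raise ValueError("constriction requires phi > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

print(constriction())
```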
- 49. PSO: Swarm and Neighborhood Size • Concerning the swarm size for PSO, as with other evolutionary computation methods there is a trade-off between solution quality and computational cost (in terms of function evaluations). • Global neighborhoods seem to be better in terms of computational cost, with performance similar to the ring topology (or neighborhoods greater than 3). • There is scope for research on the effects of swarm topology on the search behavior of PSO.
- 50. PSO: Particle Update Methods • There are two ways that particles can be updated: – synchronously – asynchronously • Asynchronous update allows newly discovered solutions to be used more quickly. • The asynchronous update method is similar to ____.
- 51. Comparison of GA & PSO • Genetic Algorithm (GA): begins with a population of random chromosomes; three operators (selection, crossover, mutation); high computational cost; high memory requirement; works in parallel; global search. • Particle Swarm Optimization (PSO): social sharing of information among individuals of a population; no operators; pbest (or local best) and global best; velocity & displacement; low computational cost; low memory requirement; moves quickly towards the best; tendency to lock into local optima.
- 52. • GA: population consists of solutions or chromosomes; terminates on the best chromosomes, and if they do not satisfy, it must be restarted; randomly selected population; more time to determine the results; more complex mechanism. • PSO: population consists of solutions or particles; no question of restarting PSO for the best particles; randomly selected initial particle velocity & displacement; less time to determine the results; simpler mechanism.
- 53. GLN-PSO • The combination vector created by pbest, gbest, lbest, and nbest pulls each particle in a better direction than previous versions. • Standard PSO uses pbest and gbest; GLN-PSO adds lbest and nbest: more good information sources, better performance.
- 54. Experiments • Tested on 4 multi-modal or complex functions: – Ackley – Griewank – Rastrigin – Rosenbrock
- 55. Results 1
- 56. Results 2
- 57. The Binary PSO • Based on two simple modifications of the real-valued PSO: • The position x(t) is a bit-string of length n. • The velocity v(t) is still a vector in real-valued n-dimensional space. • The velocity update formula is left unchanged – the bits in x(t) are treated as reals. • However, the position update formula is modified!
- 58. The position update formula: xi(t+1) = 1 if ρ < s(vi(t+1)), and 0 otherwise, where ρ is a random number between 0 and 1 and s is the sigmoid function s(x) = 1 / (1 + e^(-x)).
- 59. Meaning of the Velocity • Now, vi(t) is the probability of xi(t) being “set”. • vi(t) changes when xi differs from pi or gi. • An example: – xi(t) = 1, vi(t) = 2.4, pi(t) = 1, gi(t) = 0 – What will xi(t+1) be? – vi(t+1) = vi(t) + φ1(1-1) + φ2(0-1) = vi(t) – φ2 • Control of the maximum velocity is important: – we can use vmax or ω to limit the velocity.
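The binary PSO position update from the two slides above can be sketched as follows: velocities stay real-valued, and each bit of the new position is set to 1 with probability s(vi). This is an illustrative fragment, not code from the presentation.

```python
import math
import random

def sigmoid(v):
    # s(x) = 1 / (1 + e^(-x)): maps a real velocity to a bit probability.
    return 1.0 / (1.0 + math.exp(-v))

def update_position(v):
    # v: real-valued velocity vector; returns a new bit-string position,
    # setting each bit with probability s(vd) (rho < s(v) => bit = 1).
    return [1 if random.random() < sigmoid(vd) else 0 for vd in v]

# Large positive velocities make bits almost certainly 1;
# large negative velocities make them almost certainly 0.
print(update_position([10.0, -10.0, 0.0]))
```

This is why limiting the velocity (via vmax or ω) matters: saturated velocities pin bits at 0 or 1 and stop exploration.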
- 60. Fitness-Distance Ratio • Evaluating the influence of the jth particle on the ith particle (along the dth dimension): FDR(j, i, d) = (Fitness(Pj) − Fitness(Xi)) / |Pjd − Xid|, where Pj is the previous best position visited by the jth particle and Xi is the position of the particle under consideration.
- 61. FDR-PSO Algorithm • Each particle is influenced by: – its own previous best (pbest) – the global best particle (gbest) – the particle that maximizes the FDR (nbest) • Velocity update equation: Vid(t+1) = ω × Vid(t) + rand() × ψ1 × (Pid − Xid) + rand() × ψ2 × (Pgd − Xid) + rand() × ψ3 × (Pnd − Xid) • Position update equation: Xid = Xid + Vid(t+1)
- 62. FDR-PSO Algorithm: For t = 1 to the max bound on the number of generations: for i = 1 to the population size: find gbest; for d = 1 to the problem dimensionality: find the nbest which maximizes the FDR; apply the velocity update equation; update the position; end-for-d; compute the fitness of the particle; if needed, update historical information regarding Pi and Pg; end-for-i; terminate if Pg meets the problem requirements; end-for-t. End algorithm.
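The inner step "find the nbest which maximizes the FDR" can be sketched per dimension. The absolute value in the denominator, the maximization sense of fitness, and all function and variable names below are illustrative assumptions; the sketch also assumes Pjd ≠ Xid so the ratio is defined.

```python
def fdr(p_j, fit_pj, x_i, fit_xi, d):
    # FDR(j, i, d) = (Fitness(Pj) - Fitness(Xi)) / |Pjd - Xid|
    # (assumes p_j[d] != x_i[d]).
    return (fit_pj - fit_xi) / abs(p_j[d] - x_i[d])

def nbest_dimension(pbests, pbest_fits, x_i, fit_xi, i, d):
    # Pick the other particle j whose previous best maximizes the FDR,
    # and return its d-th coordinate (the Pnd used in the velocity update).
    best_j = max((j for j in range(len(pbests)) if j != i),
                 key=lambda j: fdr(pbests[j], pbest_fits[j], x_i, fit_xi, d))
    return pbests[best_j][d]
```

A nearby neighbor with even a modest fitness advantage can out-score a distant one with a large advantage, which is what pulls particles toward good, close solutions.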
- 63. Hybrid Particle Swarm Optimizations • GA-PSO • ACO-PSO • PSO-BFG • PSO-ANN • PSO-Fuzzy • FDTD-PSO • etc.
- 64. Applications • Signal Processing • Artificial Neural Networks • Bio-Medical Applications • Control Engineering • Image Processing • Mechanical Engineering • Instrumentation • Computer Engineering • Communication Engineering
