PARTICLE SWARM INTELLIGENCE: A PARTICLE SWARM OPTIMIZER WITH ENHANCED GLOBAL SEARCH QUALITIES AND GUARANTEED CONVERGENCE
 

A4 Model-Based Process Optimisation and Control
PARTICLE SWARM INTELLIGENCE: A PARTICLE SWARM OPTIMIZER WITH ENHANCED GLOBAL SEARCH QUALITIES AND GUARANTEED CONVERGENCE

Hendrik Große-Löscher
Postgraduate, Faculty of Computer Science and Automation, Institute for Automation and Systems Engineering, Ilmenau University of Technology

ABSTRACT

A new particle swarm optimizer is presented. Particle swarm optimization (PSO) is a relatively young optimization algorithm related to evolutionary computation. This metaheuristic is best suited to optimizing nonlinear functions. After a brief introduction to particle swarm optimization giving insight into the underlying paradigms, advantages and drawbacks are highlighted. The operation of the canonical PSO in a synthetically generated 2D test environment with three different dynamic landscapes shows the failure of the original version in transient conditions, leading to a curious phenomenon of 'linear collapse'. Subsequently, an advanced algorithm demonstrates the potential of PSO in dynamic applications. Based on the experience of two engineering optimization tasks, a refined optimizer using a constriction coefficient strategy is introduced and compared to the classic algorithm and a later-introduced version with inertia weight. The new optimizer for static optimization problems incorporates superior global search characteristics and guarantees final convergence.

Index Terms – Particle swarm optimization, dynamic environments, linear collapse, constriction coefficient, inertia weight, global search, convergence

1. INTRODUCTION

Particle swarm optimization (PSO) was introduced in 1995 by Russell Eberhart and James Kennedy, an electrical engineer and a social psychologist. In recent years particle swarm intelligence has gained a lot of recognition, as PSO has proved to be an effective method to handle different kinds of optimization problems. Meanwhile, the application areas span from engineering tasks to economics. Related to swarm intelligence and evolutionary computation, PSO is a metaheuristic whose paradigms are inspired by bird flocking and fish schooling. The precursor of the canonical PSO algorithm was originally intended as a graphical simulator of the graceful but unpredictable choreography of a flock of birds. The originators discovered the potential of the method to optimize continuous nonlinear functions through the simulation of a simplified social milieu. PSO does not incorporate the principle 'survival of the fittest' as it is used in genetic algorithms. From the beginning to the end of an optimization run all particles 'survive' and actively search the function area for optima. A particle is a tuple of all parameters and thus contains a possible solution of a given function. After an initial distribution within the search area, which is often random, the particles move through the search space with a direction-dependent velocity. After each movement the fitness of a particle is evaluated. Fitness equals the numerical function value where the particle is located. As the swarm members interact with each other based on the information of the best solution of the entire swarm found so far (global best) and the best individual location determined (personal best), cognitive/perception-based and social patterns can be derived. The particles are attracted by the best positions and move towards the global best. During the optimization process the particles may encounter positions with higher fitness values. Depending on the chosen PSO variant and the adjusted parameters, the global search ability, swarm behaviour, convergence rate and the quality of the final solution are fundamentally influenced. The algorithm has proved to be very simple, robust and highly efficient. Regarding the dimensionality and the complexity of the optimization task, no limitations or restrictions occur.

The outline of the paper is as follows. Section 2 briefly explains the basic principles of PSO. Following the scheme for static optimization tasks, a special focus is put on dynamic environments with three different kinds of optimization problems (height change of peaks, location change of peaks with constant heights, and location change of peaks with changing heights). The application of the canonical PSO to a synthetically generated dynamic 2D test case demonstrates the failure of the algorithm to track changing peaks, leading to a remarkable appearance of 'linear collapse'. An implementation of an advanced PSO technique demonstrates its feasibility in changing conditions. A distinguished PSO variant is described in Section 3. After the recapitulation of the commonly used constriction coefficient limiting the particles' velocities, a refined algorithm with a simple
but effective new strategy is introduced. The implementation leads to a significant improvement of the global search ability and enables an enhanced control of the swarm's convergence behaviour at the end of an optimization run. Consequently, the quality of the final result is influenced and the swarm can be adapted to the optimization task. Experimental investigations are illustrated and indicate the outstanding performance of the new approach. Section 4 briefly describes the application of PSO to two engineering tasks with 11 and 361 parameters respectively. Finalizing the work, Section 5 summarizes the results.

2. PARTICLE SWARM OPTIMIZATION

Optimization aims to find the minimum or maximum of an objective function within a predefined search area. Beside other mathematical scopes, this work is focused on nonlinear function optimization including constraints. The objective function can either be single-objective or multi-objective, e.g. a function with standardized weighted objectives where constraints are implemented by penalty terms and the problem is converted into a single-objective function.

2.1. Static environments

PSO is a stochastic, gradient-less and derivative-free nature-analogous algorithm that is based on a set of particles. A particle comprises a tuple of all parameters of the optimization task and thus provides a potential solution. After the initial distribution of the particles within the search space, all particles attain direction-dependent and dimension-individual velocities. The operational sequence for static applications is as follows [4, 5, 6].

1. Stochastic initialization of the particle population within the function/search area.
2. Calculation of the fitness of each particle.
3. Modification of the individual velocity based on the best individual and best global position so far (neighbourhood).
4. Determination of the new positions of the particles.
5. Fitness evaluation (step 2) of each particle; if the convergence/termination criterion is met: END, otherwise go to step 3.

The velocity update is given by

vid(t) = vid(t−1) + c1 · rand() · (pid − xid(t−1)) + c2 · Rand() · (pgd − xid(t−1))   (1)

The new positions are updated by

xid(t) = xid(t−1) + vid(t)   (2)

Equations (1) and (2) employ the denotations

xid : potential solution, location of particle i in dimension d
vid : velocity of particle i in dimension d
pid : best location so far of particle i in dimension d
pgd : best location so far of the best particle g of all neighbours of particle i in dimension d
c1 : cognitive parameter/acceleration constant
c2 : social parameter/acceleration constant
rand(), Rand() : equally distributed random numbers from [0,1]

In equation (1), t is the current iteration step. The movement of the particles following equation (1) is based on a so-called cognitive term (term 2) and a social term (term 3) along with the velocity vid(t−1) (term 1), which is equal to a momentum. The cognitive constant c1 influences the individual particle behaviour regarding its own best position. The social constant c2 controls the movement towards the direction of the particle which currently has the best position in the swarm. c2 influences the behaviour of particle i with reference to the fitness of the neighbours and thus describes a component of social behaviour. In each iteration, the cognitive and social parts of the movement are varied at random to maintain diversity. During their move, particles can find locations which are characterized by a higher fitness than the best optimum found so far. The behaviour of the swarm is defined by the experience of each individual swarm member, its current position and the exchange of information among the individual swarm members (orientation of the individuals to group orders). Thereby, PSO successfully imitates the natural behaviour of animals in groups or swarms.

The swarm size, neighbourhood size and topology affect the swarm behaviour and thus influence the search characteristics. Beside several approaches, two main formulations concerning the particles' communication topology exist, namely gbest and lbest. In the gbest model, each particle is influenced by the best particle of the entire swarm, while in the lbest model each particle is influenced by the particles of its local neighbourhood. In many applications, the gbest model tends to converge faster.

Another mentionable topic is the information analysis of the best position. In a so-called synchronous PSO, to which the original version is related, the best positions are updated after all particle movements in one iteration step. The asynchronous PSO updates the best positions after each particle movement, which allows an immediate feedback about the best regions and leads to a higher convergence rate. Based on application experiences with the canonical PSO, numerous optimization runs with small populations are more effective in finding an admissible solution than few runs with large populations. This surprising effect is based on the fast convergence of the PSO.
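The five-step operational sequence and the update equations (1) and (2) can be sketched in Python. This is an illustrative reconstruction, not the author's code: the sphere test function, the swarm size, the iteration count and the velocity clamp ±vmax are assumptions chosen for the example.

```python
import random

def canonical_pso(f, dim, bounds, n_particles=20, iters=100,
                  c1=2.05, c2=2.05, vmax=0.5):
    """Canonical gbest PSO: velocity update (1), position update (2)."""
    lo, hi = bounds
    # step 1: stochastic initialization of the particle population
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    # step 2: initial fitness evaluation, personal and global bests
    p = [xi[:] for xi in x]          # personal best positions (pid)
    p_val = [f(xi) for xi in x]      # personal best fitness values
    g = min(range(n_particles), key=lambda i: p_val[i])  # gbest index
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # step 3: velocity update, equation (1), clamped to ±vmax
                v[i][d] = (v[i][d]
                           + c1 * random.random() * (p[i][d] - x[i][d])
                           + c2 * random.random() * (p[g][d] - x[i][d]))
                v[i][d] = max(-vmax, min(vmax, v[i][d]))
                # step 4: position update, equation (2)
                x[i][d] = x[i][d] + v[i][d]
            # step 5: fitness evaluation and best-position bookkeeping
            val = f(x[i])
            if val < p_val[i]:
                p[i], p_val[i] = x[i][:], val
                if val < p_val[g]:
                    g = i
    return p[g], p_val[g]
```

For instance, minimizing the sphere function with `canonical_pso(lambda xs: sum(t * t for t in xs), 2, (-5.5, 5.5))` typically returns a point near the origin.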
2.2. Dynamic environments

Based on the belief that PSO converges fast, the dynamic tracking of peaks seemed to be possible [10]. Today, with deeper insights into PSO and many applications at hand, the conventional PSO is considered inapplicable to dynamic environments. Due to its immanent paradigms, PSO fails in most dynamic environments.

A synthetic 2D test environment mainly consisting of trigonometric expressions is generated to provide three different dynamic test cases:

y(x1, x2) = [[(−x1² − x2) · sin(x1²/4) · 2 · x1 · x2 · csch(x1) · csch(x2) + sin(3·x1) + sin(3·x2)] · sin(3·x2) + sin(3·x1)] · cos(x1) · sin(x2) · tan(x1)   (3)

with x1, x2 ∈ [−5.5; 5.5]

This unsimplified equation provides the following search area.

Figure 1: 2D test environment

To develop a proper test suite, three different dynamic tasks are included:
- height change of peaks (α)
- location change of peaks with constant heights (β)
- location change of peaks with changing heights (δ)

During all calculations, 155 iterations are performed in one direction and another 155 iterations in the other direction, so that the search area moves forward and backward. A sidewise slide with constant heights (β) equals ∆x1=0.05, while ∆x1=0.02 when the peaks also encounter a change in heights (δ). ∆x2 is constantly kept at zero. Two additional terms in equation (3) ensure the increasing and decreasing heights of the peaks.

The application of the conventional algorithm to the three different dynamic tasks indicates the failure of the PSO technique to identify and trace single peaks as well as to find and trace the global optimum. The main deficit of the classic PSO version in dynamic environments is the outdated memory of the particles when a change in the function occurs, in addition to a fatal loss of diversity. Attuned with standard parameter settings, the PSO rapidly converges to a single peak and only performs a weak local search around the peak (test cases α and δ). The parameters are c1=c2=2.05, as commonly applied [4, 6]; vmax (the maximum move of a particle in one dimension) is set to 0.5, and if a particle tends to leave the search space, it is placed at the limit value of the dimension restricting the search area. Soon after starting test instance β, all particles collapse to a single line, showing a curious phenomenon called 'linear collapse' [2]. The length of the line is almost constant for approximately two thirds of the iterations and amounts to ∆x2=1.44.

Figure 3: 'linear collapse' as seen from above

Figure 4: 'linear collapse' seen laterally

As the phenomenon appears, all velocities in direction of x1 permanently receive the maximum velocity −vmax,1=−0.5, while the velocities in direction of x2 alternate between ±vmax,2=±0.5. Animations and the analysis of the velocities indicate the establishment of alternating linear particle trajectories. While v1=const., all particles move between the best particles/attractors, which are mostly at both ends of the path.

The utilisation of PSO for dynamic environments is still a young field of activity. Figure 2 shows the flow diagram of a proposed algorithm for dynamic tasks which is mainly based on [3]. Without going into detail, the foremost principles are the permanent update of the best positions, the identification of species seeds and their members with a predefined minimum and maximum amount of particles, and the initialisation of neutral and quantum particles around the species' seed. These particles are initialised when the species has converged, to achieve a balanced ratio of convergence and diversity. The subsequent results indicate the performance of the quoted algorithm.

Figure 2: flow diagram of PSO for dynamic tasks

Figure 5: PSO applied to test case α

Figure 6: PSO in dynamic test case β

Figure 7: PSO in dynamic environment δ

All procedures employ 200 particles. The radius of a species is set to rspecies=4 around a seed; the minimum amount of particles per species is 15, while 35 particles are the maximum quantity. To initialize the neutral and quantum particles around a species' seed, a radius of rcloud=0.2 is applied. The convergence threshold determining that a species has converged is set to ∆=0.0001. ∆ correlates to a mean distance between a species and the species' seed. Neutral particles provide convergence, while the randomly initialized quantum particles within rcloud ensure diversity of the species. The current PSO version is capable of detecting and tracing all major dynamic peaks. The algorithm and its success seriously depend on the problem-specific parameter settings. Future work is encouraged to generalize this tuning.
3. PSO WITH CONSTRICTION COEFFICIENT

Numerous works are dedicated to improving PSO and its premature convergence. Most versions study the swarm behaviour by employing different inertia weight (adding a factor to term 1 in equation (1)) and constriction coefficient approaches. The aim is to control both exploration and exploitation, global and local search.

3.1. Common constriction coefficient strategy

Mathematically, the implementation of a constriction coefficient is a special case of the inertia weight version. The standard constriction coefficient algorithm [1, 4, 6, 10] is an extension of equation (1).

vid(t) = χ · [vid(t−1) + c1 · rand() · (pid − xid(t−1)) + c2 · Rand() · (pgd − xid(t−1))]   (4)

According to [4], χ is defined as

χ = 2 / |2 − ϕ − √(ϕ² − 4·ϕ)|   with ϕ = c1 + c2, ϕ > 4   (5)

Commonly, c1=c2=2.05, resulting in χ=0.72984.

3.2. Refined constriction coefficient strategy

Based on application experiences [7, 8, 9], the refined utilisation of χ can significantly improve the swarm behaviour.

Figure 8: exemplary trend of χ

The simple but effective procedure consists of uniformly distributed random numbers within the first 75% of the iterations in a proposed range of at least [0.1;1.3]. Between 75% and 95% of the iterations, χ decreases linearly from 1 to 0.20871, which results from equation (5) with c1=2 and c2=5. Within the last 5% of the iterations, χ is kept constant at 0.20871. An innovative implementation of a random χ>1 leads to a significant improvement of the global search ability (exploration). The idea is to provide an exhaustive global search that maintains diversity and is followed by an intensive local search (exploitation) with controlled convergence, suitable for many tasks.

3.3. Experimental results

The sequences below demonstrate the various swarm behaviours of different PSO versions in the static 2D test case of equation (3).

Figure 9: positions visited by conventional PSO

Figure 10: positions visited by PSO with constriction

The figures show all visited positions of the PSO with 75 particles on their way through 200 iterations at 1 (1), 10 (2), 30 (3), 50 (4), 100 (5) and 200 iterations (6), seen from above. The conventional PSO and the version
with inertia weight (not presented) have a quite similar performance and show premature convergence with a weak global search. The conventional constriction version keeps χ constant and incorporates an early convergence after one third of the iterations with no further global search. The refined constriction algorithm has a superior global search capability followed by a controlled local search. The conventional version as well as the algorithm with a linearly decreasing inertia weight from 0.9 to 0.4 [6, 10] utilize c1=c2=2.05, vmax=0.5 and set the particles to the border line when they tend to leave the search area. An improved version of [1] randomly varies χ within [0.2;0.9] over all iterations. The refined constriction algorithm employs the presented trend of χ, but within a range of [0.1;1.7]. The choice of c1 and c2 results from the desire to boost global search, but as animations indicate, the strategy massively softens the swarm characteristics and solidarity inside the random phase of χ, so that their proper selection becomes less important. The search seems to become fully random. When particles desire to leave the search area, they are stochastically reinitialized. This is an essential mechanism, as otherwise particles would only sit on the borderline without active search. A great improvement is thus the discontinuation of the sensitive parameter vmax, reducing the number of parameters. The upper bound of the constriction coefficient should be at least 1.5 to ensure global search, while the lower bound is less important. Experiments with higher values (up to 30) do not show significant changes in the swarm behaviour for the proposed task. The final value of χ is sensitive: if the value is too low, particles might be slowed down too much, so that they cannot perform a local search, while an excessive χ allows steps that are too vast, so that the convergence is weak.

4. ENGINEERING APPLICATIONS

The constriction PSO as described has been applied to several engineering applications with prosperous results.

Figure 11: optimization of injection nozzle and cam

The optimizations concern 11 parameters, 47 constraints, 23 equations and 6 objective criteria (nozzle) as well as up to 361 parameters, 7 constraints, 86 equations and 6 objective criteria (cam) [7, 8, 9].

5. CONCLUSIONS

A new PSO algorithm with a refined constriction coefficient strategy is presented. The algorithm provides superior global search quality and guarantees convergence to balance exploration and exploitation.

6. REFERENCES

[1] Achtnig, J.: Particle Swarm Optimization with Mutation for High Dimensional Problems. Studies in Computational Intelligence (SCI) 82, pp. 423-439, Springer-Verlag, Berlin, 2008

[2] Bentley, P., Blackwell, T.: Dynamic Search with Charged Swarms. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '02), pp. 19-26, Morgan Kaufmann Publishers Inc., San Francisco, CA, 2002

[3] Blackwell, T., Branke, J., Li, X.: Particle Swarm with Speciation and Adaptation in a Dynamic Environment. Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '06), pp. 51-58, ACM, New York, NY, 2006

[4] Clerc, M.: Particle Swarm Optimization. ISTE - Hermes Science Publishing, London, 2006

[5] Eberhart, R., Kennedy, J.: Particle Swarm Optimization. Proceedings of the IEEE International Conference on Neural Networks, IV, pp. 1942-1948, Piscataway, NJ, 1995

[6] Eberhart, R., Kennedy, J.: Swarm Intelligence. Morgan Kaufmann Publishers, San Francisco, 2001

[7] Große-Löscher, H., Haberland, H., Yalcin, H.: Verfahren zur Optimierung einer Einspritzdüse für eine Brennkraftmaschine [Method for optimizing an injection nozzle for an internal combustion engine]. Offenlegungsschrift, Deutsches Patent- und Markenamt, DE 102006043460 A1 2008.03.27, München, 2008

[8] Große-Löscher, H., Haberland, H.: Schwarmintelligenz zur Optimierung von Einspritzdüsen [Swarm intelligence for the optimization of injection nozzles]. MTZ - Motortechnische Zeitschrift, Nr. 2, S. 80-85, Springer Automotive Media, Wiesbaden, 2010

[9] Große-Löscher, H.: Application of PSO for the optimization of Diesel engine operation. Deliverables D11.2.b+c, HERCULES Integrated Project, EU Sixth Framework Program, TIP3-CT-2003-506676, Augsburg, 2007

[10] Shi, Y.: Particle Swarm Optimization. Feature Article, IEEE Neural Networks Society, 2004
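For completeness, the constriction coefficient of equation (5) and the refined χ schedule of Section 3.2 can be reproduced numerically. This is an illustrative sketch, not the author's code; the function names are assumptions, while the random-phase range [0.1;1.3], the breakpoints at 75% and 95% of the iterations and the final value 0.20871 are taken from the text.

```python
import math
import random

def constriction(c1, c2):
    """Constriction coefficient chi of equation (5), phi = c1 + c2 > 4."""
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("equation (5) requires phi = c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def refined_chi(t, total_iters, lo=0.1, hi=1.3):
    """Refined schedule: random chi in [lo, hi] for the first 75% of the
    iterations, a linear decrease from 1 to constriction(2, 5) between
    75% and 95%, then constant for the final 5%."""
    chi_final = constriction(2.0, 5.0)   # = 0.20871...
    frac = t / total_iters
    if frac < 0.75:
        return random.uniform(lo, hi)    # exploration phase
    if frac < 0.95:
        s = (frac - 0.75) / 0.20         # 0 -> 1 across the decreasing phase
        return 1.0 + s * (chi_final - 1.0)
    return chi_final                     # controlled final convergence
```

With c1=c2=2.05 the function reproduces the commonly quoted χ=0.72984, and with c1=2, c2=5 the final value 0.20871.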