A Fast and Inexpensive Particle Swarm Optimization for Drifting Problem-Spaces

2012 International Conference on Emerging Trends in Science, Engineering and Technology

Zubin Bhuyan
Department of Computer Science and Engineering, Tezpur University, Tezpur, India
zubin_csi11@agnee.tezu.ernet.in

Sourav Hazarika
Department of Computer Science and Engineering, Tezpur University, Tezpur, India
sourav_csi11@agnee.tezu.ernet.in

Abstract: Particle Swarm Optimization is a class of stochastic, population-based optimization techniques which are mostly suitable for static problems. However, real-world optimization problems are time variant, i.e., the problem space changes over time. Several studies have addressed this dynamic optimization problem using particle swarms. In this paper we probe the issues of tracking and optimizing particle swarms in a dynamic system where the problem-space drifts in a particular direction. Our assumption is that the approximate amount of drift is known, but the direction of the drift is unknown. We propose a Drift Predictive PSO (DriP-PSO) model which does not incur high computation cost, and is fast and accurate. The main idea behind this technique is to use a few stagnant particles to determine the approximate direction in which the problem-space is drifting, so that the particle velocities may be adjusted accordingly in the subsequent iteration of the algorithm.

Keywords: PSO, dynamic exploration, drifting problem-space

I. INTRODUCTION

Swarm intelligence may be defined as the collective behavior of simple rule-following agents in a decentralized system, where the overall behavior of the entire system appears intelligent to an external observer. In nature, this kind of behavior is seen in bird flocks, fish schools, ant colonies and animal herds. Given a large space of possibilities, a population of agents is often able to solve difficult problems by finding multivariate solutions or patterns through a simplified form of social interaction [1].

Particle swarm optimization was first put forward by Kennedy and Eberhart in 1995 [2, 3]. The PSO algorithm exhibits all the common evolutionary computation characteristics, viz., initialization with a random population, searching for optima by updating generations, and updating generations based on previous ones. It has been implemented with different approaches for a wide range of generic problems, as well as for case-specific applications focused on a precise requirement [4, 5, 6, 7].

However, almost all practical problems are time-varying or dynamic, i.e., the environment and the characteristics of the global optimum change over time. More formally, a dynamic system is one whose state changes in a repeated or non-repeated manner. In such cases a standard PSO might not give satisfactory results. There are also several ways in which a system may change over time: the changes may occur periodically, in some predefined sequence, or in a random fashion. References [8, 9] define three kinds of dynamic systems. First, the location of the optimum value in the problem space may change. Second, the location can remain constant while the optimum value varies. Third, both the location and the value of the optimum may vary.

II. BACKGROUND

A. Standard Particle Swarm Optimization

PSO is initialized with a population of random solutions called particles. Each particle moves about, or flies, in the given problem space with a velocity that varies continuously according to its own flying experience as well as that of the other particles.
In a D-dimensional space the location of the i-th particle is represented as $X_i = (x_{i1}, \dots, x_{id}, \dots, x_{iD})$, and its velocity as $V_i = (v_{i1}, \dots, v_{id}, \dots, v_{iD})$. The best previous position of the i-th particle is called pbest_i, and the best pbest among all the particles is called gbest. Equations (1a) and (1b) are used to update the particles' velocities and positions:

$v_{id} = w \times v_{id} + c_1 \times rand() \times (p_{id} - x_{id}) + c_2 \times rand() \times (p_{gd} - x_{id})$   (1a)

$x_{id} = x_{id} + v_{id}$   (1b)

Equation (1a) calculates the new velocity of a particle based on its previous velocity ($v_{id}$), the location where the particle achieved its best value (pbest_i, or $p_{id}$), and the location where the best of all pbest values was achieved (gbest, or $p_{gd}$); w is the inertia weight, $c_1$ and $c_2$ are the cognitive and social acceleration constants, and rand() is a random number generator function. The new position of each particle is then updated using equation (1b). In both equations the subscript d indicates the d-th dimension.

If the current fitness of a particle is better than its pbest, the pbest is replaced by the current solution. Again, if that pbest is better than the existing gbest, the pbest becomes the new gbest. This process is repeated until a satisfactory result is obtained.
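By way of illustration, the following is a minimal sketch of this standard update loop for a minimization problem, using the inertia weight and acceleration constants reported in Section IV; the `Particle` class and `pso_step` function are illustrative names, not code from the paper's test tool.

```python
import random

class Particle:
    def __init__(self, position, velocity):
        self.position = position          # X_i = (x_i1, ..., x_iD)
        self.velocity = velocity          # V_i = (v_i1, ..., v_iD)
        self.pbest_pos = list(position)   # best position found so far
        self.pbest_val = float("inf")     # assuming minimization

def pso_step(particles, gbest_pos, fitness, w=0.729844, c1=1.49618, c2=1.49618):
    """One iteration of the standard PSO of equations (1a) and (1b)."""
    for p in particles:
        for d in range(len(p.position)):
            # Equation (1a): inertia + cognitive + social components.
            p.velocity[d] = (w * p.velocity[d]
                             + c1 * random.random() * (p.pbest_pos[d] - p.position[d])
                             + c2 * random.random() * (gbest_pos[d] - p.position[d]))
            # Equation (1b): move the particle.
            p.position[d] += p.velocity[d]
        # Update the personal best if the new position is fitter.
        val = fitness(p.position)
        if val < p.pbest_val:
            p.pbest_val, p.pbest_pos = val, list(p.position)
    # The best pbest among all particles becomes the new gbest.
    gbest = min(particles, key=lambda q: q.pbest_val)
    return list(gbest.pbest_pos)
```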
B. PSO for Dynamic Systems

Several propositions have been made for modifying the PSO algorithm to address the dynamic optimization problem, i.e., scenarios where the problem space changes over time. In such situations the particles may lose their global exploration ability because the position of the global optimum keeps changing, which usually leads to unsatisfactory, sub-optimal results.

Eberhart and Hu in [8] use a "fixed gbest-value method" in which the gbest value and the second-best gbest value are monitored. If these two values do not change for a certain number of iterations, a possible optimum change is declared. Both the gbest value and the second-best gbest value are monitored in order to increase accuracy and prevent false alarms.

Another very successful PSO algorithm for dynamic systems is the charged PSO developed by Blackwell and Bentley [10]. The driving principle behind charged PSO is a good balance between exploration and exploitation, which in turn results in a continuous search for better solutions while refining the current solution. Rakitianskaia and Engelbrecht in [11] further modified the charged PSO (CPSO) by incorporating within it the concept of the cooperative split PSO (CSPSO). CSPSO is an approach in which the search space is divided into smaller subspaces, with each subspace being optimised by a separate swarm [12].

Hashemi and Meybodi introduced cellular PSO [13]. This is a hybrid model of particle swarm optimization and cellular automata in which the population of particles is split into different groups across the cells of a cellular automaton by imposing a restriction on the number of particles in each cell. This was further modified for dynamic systems by introducing temporary quantum particles [14].

III. PROPOSED DRIFT PREDICTIVE PSO MODEL (DRIP-PSO)

In this paper we propose a cost-effective and accurate PSO model, DriP-PSO, designed specifically for the scenario where the problem-space drifts in an unknown direction over time and the approximate amount of drift is known. The algorithm determines the approximate direction in which the problem-space is drifting so that the particle velocities may be adjusted accordingly in the subsequent iteration. This is achieved by selecting a few stagnant particles which try to detect the direction of drift.

In each iteration of the DriP-PSO algorithm, a small number of stagnant particles are selected randomly. The stagnant particles do not change their positions for that particular round. Each stagnant particle then compares its previous fitness value to its current fitness value. If a change is detected, the stagnant particle generates four sub-particles which rest on a circular orbit of radius ρ. Every stagnant particle is the centre of its circular orbit, and the sub-particles are placed at right angles to one another, as in the sketch below.
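A minimal sketch of this sub-particle placement in a two-dimensional problem space follows. The paper does not fix the starting orientation of the four sub-particles, so placing them along the coordinate axes (0°, 90°, 180°, 270°) and the function name `make_sub_particles` are assumptions made for illustration.

```python
import math

def make_sub_particles(center, rho):
    """Return four (angle, position) pairs on a circle of radius rho around
    `center`, spaced at right angles to one another (a stagnant particle's orbit)."""
    sub_particles = []
    for k in range(4):
        theta = k * math.pi / 2.0            # 0, 90, 180, 270 degrees
        pos = (center[0] + rho * math.cos(theta),
               center[1] + rho * math.sin(theta))
        sub_particles.append((theta, pos))
    return sub_particles
```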
For example, consider a particle $P_i$ which has been selected as a stagnant particle for a particular round. In order to determine the direction of drift, we select two sub-particles, $S_{j,P_i}$ and $S_{k,P_i}$, from among the four sub-particles of $P_i$ such that the previous fitness of $P_i$ lies between the fitness values of the two selected sub-particles. The approximate direction of drift, i.e., the direction in which the adjustment ξ is required, is calculated by equation (2):

$\theta(\xi) = \theta_{S_{j,P_i}} + (\theta_{S_{k,P_i}} - \theta_{S_{j,P_i}}) \times \dfrac{\alpha - S_{j,P_i}}{S_{k,P_i} - S_{j,P_i}}$   (2)

In equation (2), $\theta(\xi)$ is the angle representing the direction in which the adjustment ξ has to be made, α is the previous fitness value of the particle $P_i$, $S_{j,P_i}$ and $S_{k,P_i}$ are the fitness values of the selected sub-particles between which α lies, and $\theta_{S_{j,P_i}}$ and $\theta_{S_{k,P_i}}$ are the angles at which the selected sub-particles are oriented.

If the previous fitness of the particle is greater than the fitness values of all the sub-particles, the direction along the sub-particle with the highest value is chosen. If the previous fitness of the particle is smaller than the fitness values of all the sub-particles, the direction along the sub-particle with the smallest value is chosen. A graphical representation is shown in Fig. 1.

Figure 1. Graphical representation of drift evaluation using sub-particles $S_{j,P_i}$ and $S_{k,P_i}$ of particle $P_i$. The orbit radius is ρ.
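A sketch of this drift-direction estimate is given below, assuming each sub-particle is represented as a (theta, fitness) pair for the stagnant particle under consideration. The way the bracketing pair is chosen when several pairs qualify is an interpretation; the paper only requires that α lie between the two fitness values.

```python
def estimate_drift_angle(alpha, sub_particles):
    """alpha: previous fitness of the stagnant particle.
    sub_particles: list of (theta, fitness) tuples for its four sub-particles.
    Returns theta(xi), the direction in which the adjustment xi is applied."""
    fitnesses = [f for _, f in sub_particles]
    # Boundary cases described in the text: alpha above or below all sub-particles.
    if alpha >= max(fitnesses):
        return max(sub_particles, key=lambda s: s[1])[0]
    if alpha <= min(fitnesses):
        return min(sub_particles, key=lambda s: s[1])[0]
    # Otherwise pick two sub-particles S_j, S_k whose fitness values bracket alpha.
    for theta_j, f_j in sub_particles:
        for theta_k, f_k in sub_particles:
            if f_j <= alpha <= f_k and f_j != f_k:
                # Equation (2): linear interpolation between the two angles.
                return theta_j + (theta_k - theta_j) * (alpha - f_j) / (f_k - f_j)
    return 0.0  # not reached when the fitness values are distinct
```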
Then, for all stagnant particles, the values of ξ are averaged with weights and added as an extra term to the velocity equation, as shown in equation (4a). The weights are evaluated using the occurrence frequency of the adjustment values. The weight $w_i$ for adjustment $\xi_i$ is calculated using equation (3):

$w_i = \dfrac{f_{\xi_i}}{n}$   (3)

Here $f_{\xi_i}$ is the number of times the value $\xi_i$ occurs, and n is the total number of stagnant particles. The significance of the weight is that values of ξ which occur more frequently are given more importance.

The assumption here is that the drift rate is close to ρ, i.e., the sub-particle orbit radius. The value of ρ is chosen in such a way that any change in the particle's vicinity, due to problem-space drift, is contained in close proximity to the sub-particles.

Algorithm 1. Drift Predictive PSO:
1. Initialize a population of particles scattered randomly over the problem space. These particles have arbitrary initial velocities.
2. Randomly select n stagnant particles.
3. For each particle:
   a. Evaluate the fitness of the particle.
   b. Evaluation of drift: for each stagnant particle, evaluate the probable drift using equation (2).
   c. Calculate the weight $w_i$ corresponding to each $\xi_i$ using equation (3).
   d. Change the velocity according to equation (4a):

      $v_{id} = w \times v_{id} + c_1 \times rand() \times (p_{id} - x_{id}) + c_2 \times rand() \times (p_{gd} - x_{id}) + \dfrac{\sum_i \xi_{id} w_i}{\sum_i w_i}$   (4a)

      and change the position according to equation (4b):

      $x_{id} = x_{id} + v_{id}$   (4b)

      Here w is the inertia weight, $c_1$ and $c_2$ are the cognitive and social acceleration constants, and rand() is a random number generator function; i is the particle index, g is the index of the particle with the best fitness, $\xi_i$ is the adjustment required due to the dynamic change in the problem space, and $w_i$ is the weight reflecting the accuracy with which the drift is predicted. The subscript d indicates the d-th dimension.
   e. If the current fitness of the particle is better than its pbest, set the pbest value equal to the current fitness and the pbest location to the current location.
   f. If the current fitness is better than gbest, reset gbest to the current fitness value and set the gbest location to the current location of the particle.
   g. Loop back to Step 2 until the end criterion is satisfied or the maximum number of iterations is completed.

Algorithm 1 illustrates the step-by-step working of the proposed DriP-PSO for drifting problem-spaces.

IV. SIMULATION AND EXPERIMENTAL RESULTS

We designed and implemented a test tool in WPF (.NET Framework 4.0) for testing and comparing the proposed DriP-PSO model with the standard PSO. Fig. 2 shows a screenshot of the PSO test tool. The proposed PSO model was tested on five functions, viz. the Sphere, Step, Rastrigin, Rosenbrock and an arbitrary peaks function, as shown in Table I.

Figure 2. PSO Test Tool screenshot showing the arbitrary peaks function.

In order to simulate a dynamic system, we designed the test tool to drift the problem space by applying an offset λ in every dimension, as given by equation (5):

$f_{n+1}(x, y) = f_n(x - \lambda, y - \lambda)$   (5)

TABLE I. FUNCTIONS USED FOR TESTING

Function          Formula
Sphere            $f(x, y) = x^2 + y^2$
Step              $f(x, y) = |x| + |y|$
Rastrigin         $f(x, y) = 20 + x^2 + y^2 - 10(\cos(2\pi x) + \cos(2\pi y))$
Rosenbrock        $f(x, y) = (1 - x)^2 + 100(y - x^2)^2$
Arbitrary Peaks   $f(x, y) = 1 - [3(1 - x)^2 e^{-x^2 - (y+1)^2} + 10(x/5 - x^3 - y^5) e^{-(x^2 + y^2)} - \frac{1}{3} e^{-(x+1)^2 - y^2}]$
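The following sketch illustrates the weighting of the adjustments (equation (3)) and the drift of equation (5), under the reading that the weighted average of the ξ values from all stagnant particles is the extra term added to every particle's velocity in equation (4a). The rounding used to count "occurrences" of ξ and the names `weighted_drift` and `drifted` are assumptions made for illustration.

```python
from collections import Counter

def weighted_drift(xi_list, decimals=3):
    """xi_list: one per-dimension adjustment (xi_x, xi_y) per stagnant particle.
    Returns the weighted average adjustment added to the velocity in equation (4a)."""
    n = len(xi_list)
    keys = [tuple(round(x, decimals) for x in xi) for xi in xi_list]
    counts = Counter(keys)                      # occurrence frequency of each xi value
    weights = [counts[k] / n for k in keys]     # equation (3): w_i = f_xi_i / n
    total = sum(weights)
    dims = len(xi_list[0])
    return tuple(sum(w * xi[d] for w, xi in zip(weights, xi_list)) / total
                 for d in range(dims))

def drifted(f, lam):
    """Equation (5): shift a 2-D test function by an offset lam in each dimension."""
    return lambda x, y: f(x - lam, y - lam)

# Example: the Sphere function from Table I, drifted by lambda = 0.05 per iteration,
# and a weighted average over three hypothetical adjustments.
sphere = lambda x, y: x ** 2 + y ** 2
sphere_next = drifted(sphere, 0.05)
print(weighted_drift([(0.05, 0.0), (0.05, 0.0), (0.04, 0.01)]))
```

In this reading, adjustments reported by many stagnant particles dominate the correction, which matches the statement above that more frequently occurring values of ξ are given more importance.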
For all functions the offset is varied in the range [0.01, 0.09]. Based on [15], in all experiments the inertia weight w was set to 0.729844, and c1 and c2 were set to 1.49618, to encourage convergent behavior. The stagnant particles are selected randomly at run time.

Tables II and III show a comparison between the standard PSO and the Drift Predictive PSO in a dynamic environment. The error percentage is calculated on the basis of the actual minimum and the minimum detected by the PSO. All results are averages over 20 different runs.

TABLE II. RESULTS OF DIFFERENT PSOs IN A DYNAMIC SCENARIO USING 25 PARTICLES (PERCENT ERROR IN FINDING THE GLOBAL MINIMUM)

Function          Standard PSO    Drift Predictive PSO
Sphere            6.799%          2.571%
Step              9.847%          2.091%
Rastrigin         29.900%         9.143%
Rosenbrock        24.616%         3.592%
Arbitrary Peaks   27.629%         5.126%

TABLE III. RESULTS OF DIFFERENT PSOs IN A DYNAMIC SCENARIO USING 35 PARTICLES (PERCENT ERROR IN FINDING THE GLOBAL MINIMUM)

Function          Standard PSO    Drift Predictive PSO
Sphere            5.021%          1.871%
Step              8.268%          1.438%
Rastrigin         25.728%         7.895%
Rosenbrock        21.616%         2.332%
Arbitrary Peaks   25.744%         4.661%

V. CONCLUSION

The experimental results presented in this paper show that the proposed Drift Predictive PSO gives accurate results for drifting problem spaces. It is stable and incurs low computational cost.

ACKNOWLEDGEMENTS

We are grateful to Tuhin Bhuyan of Jorhat Engineering College, India, who helped us in designing the class structure of the PSO Test Tool.

REFERENCES

[1] J. Kennedy, R. C. Eberhart and Y. Shi, Swarm Intelligence, The Morgan Kaufmann Series in Evolutionary Computation. San Francisco: Morgan Kaufmann Publishers, 2001, p. 287.
[2] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," Proc. of the Sixth Int. Symp. on Micro Machine and Human Science, Nagoya, Japan, pp. 39-43, 1995.
[3] R. C. Eberhart and J. Kennedy, "Particle swarm optimization," Proc. IEEE Int. Conf. on Neural Networks, vol. IV, Piscataway, NJ: IEEE Press, pp. 1942-1948, 1995.
[4] F. van den Bergh, "An Analysis of Particle Swarm Optimizers," PhD dissertation, Department of Computer Science, University of Pretoria, Pretoria, South Africa, 2002.
[5] E. Papacostantis, "Coevolving probabilistic game playing agents using particle swarm optimization algorithms," IEEE Symp. on Computational Intelligence for Financial Engineering and Economics, 2011.
[6] L. Kezhong and W. Ruchuan, "Application of PSO and QPSO algorithm to estimate parameters from kinetic model of glutamic acid batch fermentation," 7th World Congr. on Intelligent Control and Automation, 2008.
[7] E. Assareh, M. A. Behrang, M. R. Assari and A. Ghanbarzadeh, "Application of PSO and GA techniques on demand estimation of oil in Iran," 3rd Int. Conf. on Sustainable Energy and Environmental Protection (SEEP '09), 2009.
[8] X. Hu and R. C. Eberhart, "Adaptive particle swarm optimization: response to dynamic systems," Proc. of the 2002 Congr. on Evolutionary Computation, 2002.
[9] X. Hu and R. C. Eberhart, "Tracking dynamic systems with PSO: where's the cheese?," Proc. of the Workshop on Particle Swarm Optimization, Purdue School of Engineering and Technology, Indianapolis, 2001.
[10] T. M. Blackwell and P. J. Bentley, "Dynamic search with charged swarms," in Proc. of the Genetic and Evolutionary Computation Conf. (GECCO '02), Morgan Kaufmann Publishers, 2002, pp. 9-13.
[11] A. Rakitianskaia and A. P. Engelbrecht, "Cooperative charged particle swarm optimiser," IEEE Congr. on Evolutionary Computation (CEC 2008), pp. 933-939, 2008.
[12] F. van den Bergh and A. Engelbrecht, "A cooperative approach to particle swarm optimization," IEEE Trans. on Evol. Comput., vol. 8, no. 3, pp. 225-239, June 2004.
[13] A. B. Hashemi and M. R. Meybodi, "Cellular PSO: a PSO for dynamic environments," Proc. of the 4th Int. Conf. on Intelligence Computation and Applications (ISICA 2009), Huangshi, China, 2009.
[14] A. B. Hashemi and M. R. Meybodi, "A multi-role cellular PSO for dynamic environments," Proc. of the 14th Int. CSI Computer Conference (CSICC 2009), 2009.
[15] R. C. Eberhart and Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," in Proc. of the Congr. on Evolutionary Computation, pp. 84-88, 2000.