Swarm intelligence pso and aco



  1. 1. ISE 410 Heuristics in Optimization Particle Swarm Optimization http://www.particleswarm.info/ http://www.swarmintelligence.org/
  2. 2. Swarm Intelligence <ul><li>Origins in Artificial Life (ALife) research </li></ul><ul><ul><li>ALife studies how computational techniques can help when studying biological phenomena </li></ul></ul><ul><ul><li>ALife studies how biological techniques can help out with computational problems </li></ul></ul><ul><li>Two main Swarm Intelligence based methods </li></ul><ul><ul><li>Particle Swarm Optimization (PSO) </li></ul></ul><ul><ul><li>Ant Colony Optimization (ACO) </li></ul></ul>
  3. 3. Swarm Intelligence <ul><li>Swarm Intelligence (SI) is the property of a system whereby </li></ul><ul><ul><li>the collective behaviors of (unsophisticated) agents </li></ul></ul><ul><ul><li>interacting locally with their environment </li></ul></ul><ul><ul><li>cause coherent functional global patterns to emerge. </li></ul></ul><ul><li>SI provides a basis with which it is possible to explore collective (or distributed) problem solving without centralized control or the provision of a global model. </li></ul><ul><li>Leverage the power of complex adaptive systems to solve difficult non-linear stochastic problems </li></ul>
  4. 4. Swarm Intelligence <ul><li>Characteristics of a swarm: </li></ul><ul><ul><li>Distributed, no central control or data source; </li></ul></ul><ul><ul><li>Limited communication </li></ul></ul><ul><ul><li>No (explicit) model of the environment; </li></ul></ul><ul><ul><li>Perception of environment (sensing) </li></ul></ul><ul><ul><li>Ability to react to environment changes. </li></ul></ul>
  5. 5. Swarm Intelligence <ul><li>Social interactions (locally shared knowledge) provide the basis for unguided problem solving </li></ul><ul><li>The efficiency of the effort is related to, but not dependent upon, the degree of connectedness of the network and the number of interacting agents </li></ul>
  6. 6. Swarm Intelligence <ul><li>Robust exemplars of problem-solving in Nature </li></ul><ul><ul><li>Survival in stochastic hostile environment </li></ul></ul><ul><ul><li>Social interaction creates complex behaviors </li></ul></ul><ul><ul><li>Behaviors modified by dynamic environment. </li></ul></ul><ul><li>Emergent behavior observed in: </li></ul><ul><ul><li>Bacteria, immune system, ants, birds </li></ul></ul><ul><ul><li>And other social animals </li></ul></ul>
  7. 7. Particle Swarm Optimization (PSO) <ul><li>History </li></ul><ul><li>Main idea and Algorithm </li></ul><ul><li>Comparisons with GA </li></ul><ul><li>Advantages and Disadvantages </li></ul><ul><li>Implementation and Applications </li></ul>
  9. 9. <ul><li>Population based stochastic optimization technique inspired by social behaviour of bird flocking or fish schooling. </li></ul><ul><li>Developed by Jim Kennedy, Bureau of Labor Statistics, U.S. Department of Labor and Russ Eberhart, Purdue University </li></ul><ul><li>A concept for optimizing nonlinear functions using particle swarm methodology </li></ul>Origins and Inspiration of PSO
  10. 10. <ul><li>Inspired by simulated social behavior </li></ul><ul><li>Related to bird flocking, fish schooling and swarming theory </li></ul><ul><li>- steer toward the center </li></ul><ul><li>- match neighbors’ velocity </li></ul><ul><li>- avoid collisions </li></ul><ul><li>Suppose </li></ul><ul><ul><li>a group of birds is randomly searching for food in an area, </li></ul></ul><ul><ul><li>there is only one piece of food in the area being searched, </li></ul></ul><ul><ul><li>and none of the birds knows where the food is, but each knows how far away it is on every iteration. </li></ul></ul><ul><ul><li>What is the best strategy to find the food? An effective one is to follow the bird nearest to the food. </li></ul></ul>
  11. 11. What is PSO? <ul><li>In PSO, each single solution is a &quot;bird&quot; in the search space. </li></ul><ul><li>Call it a &quot;particle&quot;. </li></ul><ul><li>All particles have fitness values </li></ul><ul><ul><li>which are evaluated by the fitness function to be optimized, and </li></ul></ul><ul><li>have velocities </li></ul><ul><ul><li>which direct the flying of the particles. </li></ul></ul><ul><li>The particles fly through the problem space by following the current optimum particles. </li></ul>
  12. 12. PSO Algorithm <ul><li>Initialize with randomly generated particles. </li></ul><ul><li>Update through generations in search for optima </li></ul><ul><li>Each particle has a velocity and position </li></ul><ul><li>Update for each particle uses two “best” values. </li></ul><ul><ul><li>Pbest: best solution (fitness) it has achieved so far. (The fitness value is also stored.) </li></ul></ul><ul><ul><li>Gbest: best value, obtained so far by any particle in the population. </li></ul></ul>
  13. 13. <ul><li>PSO algorithm is not only a tool for optimization, but also a tool for representing sociocognition of human and artificial agents, based on principles of social psychology. </li></ul><ul><li>A PSO system combines local search methods with global search methods, attempting to balance exploration and exploitation. </li></ul>
  14. 14. <ul><li>Population-based search procedure in which individuals called particles change their position (state) with time. </li></ul><ul><li>Each individual has a position </li></ul><ul><li>and changes its velocity over time </li></ul>
  15. 15. <ul><li>Particles fly around in a multidimensional search space. During flight, each particle adjusts its position according to its own experience, and according to the experience of a neighboring particle, making use of the best position encountered by itself and its neighbor. </li></ul>
  16. 16. <ul><li>Initialize population in hyperspace </li></ul><ul><li>Evaluate fitness of individual particles </li></ul><ul><li>Modify velocities based on previous best and global (or neighborhood) best positions </li></ul><ul><li>Terminate on some condition </li></ul><ul><li>Go to step 2 </li></ul>Particle Swarm Optimization (PSO) Process
  17. 17. PSO Algorithm <ul><li>Update each particle, each generation </li></ul><ul><li>v[i] = v[i] + c1 * rand() * (pbest[i] - present[i]) + c2 * rand() * (gbest[i] - present[i]) and </li></ul><ul><li>present[i] = present[i] + v[i] </li></ul><ul><li>where c1 and c2 are learning factors (weights) </li></ul>
  18. 18. PSO Algorithm <ul><li>Update each particle, each generation </li></ul><ul><li>v[i] = v[i] + c1 * rand() * (pbest[i] - present[i]) + c2 * rand() * (gbest[i] - present[i]) and </li></ul><ul><li>present[i] = present[i] + v[i] </li></ul><ul><li>where c1 and c2 are learning factors (weights) </li></ul><ul><li>The three velocity terms are, in order: inertia, personal influence, and social (global) influence. </li></ul>
  19. 19. <ul><li>Inertia Weight </li></ul><ul><li>v_id = w * v_id + c1 * rand1 * (p_id - x_id) + c2 * rand2 * (p_gd - x_id) </li></ul><ul><li>x_id = x_id + v_id </li></ul><ul><li>where d is the dimension, c1 and c2 are positive constants, rand1 and rand2 are random numbers, and w is the inertia weight </li></ul><ul><li>Velocity can be limited to Vmax </li></ul>PSO Algorithm
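The update rules above can be sketched as a complete, runnable routine. This is a minimal global-best PSO; the function name, default parameter values, and search bounds are illustrative choices, not from the slides:

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, vmax=1.0):
    """Minimize f over [lo, hi]^dim with a basic global-best PSO."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [f(p) for p in pos]                  # personal best fitnesses
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v = (w * vel[i][d]                               # inertia
                     + c1 * r1 * (pbest[i][d] - pos[i][d])       # personal influence
                     + c2 * r2 * (gbest[d] - pos[i][d]))         # social influence
                vel[i][d] = max(-vmax, min(vmax, v))             # clamp to Vmax
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Decreasing w over the run, as the next slides suggest, typically shifts the swarm from global exploration toward local exploitation.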
  20. 20. Particle Swarm Optimization (PSO) <ul><li>History </li></ul><ul><li>Main idea and Algorithm </li></ul><ul><li>Comparisons with GA </li></ul><ul><li>Advantages and Disadvantages </li></ul><ul><li>Implementation and Applications </li></ul>
  21. 21. PSO and GA Comparison <ul><li>Commonalities </li></ul><ul><ul><li>PSO and GA are both population based stochastic optimization methods; </li></ul></ul><ul><ul><li>both algorithms start with a randomly generated population, </li></ul></ul><ul><ul><li>and both use fitness values to evaluate the population. </li></ul></ul><ul><ul><li>Both update the population and search for the optimum with random techniques. </li></ul></ul><ul><ul><li>Neither system guarantees success. </li></ul></ul>
  22. 22. PSO and GA Comparison <ul><li>Differences </li></ul><ul><ul><li>PSO does not have genetic operators like crossover and mutation. Particles update themselves with their internal velocity. </li></ul></ul><ul><ul><li>They also have memory, which is important to the algorithm. </li></ul></ul><ul><ul><li>Particles do not die. </li></ul></ul><ul><ul><li>The information sharing mechanism in PSO is significantly different: </li></ul></ul><ul><ul><ul><li>in PSO only the best particle passes information to the others, whereas in GA chromosomes share information with each other, so the whole population moves together. </li></ul></ul></ul>
  23. 23. <ul><li>PSO has a memory </li></ul><ul><li> not “what” that best solution was, but “ where ” that best solution was </li></ul><ul><li>Quality : population responds to quality factors pbest and gbest </li></ul><ul><li>Diverse response: responses allocated between pbest and gbest </li></ul><ul><li>Stability : population changes state only when gbest changes </li></ul><ul><li>Adaptability : population does change state when gbest changes </li></ul>
  24. 24. <ul><li>There is no selection in PSO </li></ul><ul><li> all particles survive for the length of the run </li></ul><ul><li> PSO is the only EA that does not remove candidate population members </li></ul><ul><li>In PSO, topology is constant; a neighbor is a neighbor </li></ul><ul><li>Population size: Kennedy suggests 10-20, Eberhart 30-40 </li></ul>
  25. 25. <ul><li>Global version vs. neighborhood version </li></ul><ul><li> change p_gd to p_ld, </li></ul><ul><li>where p_gd is the global best position </li></ul><ul><li>and p_ld is the neighborhood best position </li></ul>PSO Velocity Update Equations
  26. 26. <ul><li>A large inertia weight facilitates global exploration; a small one facilitates local exploitation </li></ul><ul><li>w must be selected carefully and/or decreased over the run </li></ul><ul><li>The inertia weight seems to play a role similar to temperature in simulated annealing </li></ul>Inertia Weight
  27. 27. <ul><li>An important parameter in PSO; typically the only one adjusted </li></ul><ul><li>Clamps particles’ velocities on each dimension </li></ul><ul><li>Determines the “fineness” with which regions are searched </li></ul><ul><li> if too high, particles can fly past optimal solutions </li></ul><ul><li> if too low, particles can get stuck in local minima </li></ul>Vmax
  28. 28. PSO – Pros and Cons <ul><li>Simple in concept </li></ul><ul><li>Easy to implement </li></ul><ul><li>Computationally efficient </li></ul><ul><li>Application to combinatorial problems? </li></ul><ul><li> Binary PSO </li></ul>
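For combinatorial problems the slide points to binary PSO. In Kennedy and Eberhart's binary variant the velocity update is unchanged, but the sigmoid of each velocity component is used as the probability that the corresponding bit is 1. A minimal sketch of one particle's update; parameter defaults are illustrative:

```python
import math
import random

def binary_pso_step(vel, pos, pbest, gbest, w=0.7, c1=1.5, c2=1.5, vmax=4.0):
    """One binary-PSO update for a single particle: the usual velocity
    update, then each bit is resampled with probability sigmoid(v)."""
    for d in range(len(pos)):
        r1, r2 = random.random(), random.random()
        v = (w * vel[d]
             + c1 * r1 * (pbest[d] - pos[d])
             + c2 * r2 * (gbest[d] - pos[d]))
        vel[d] = max(-vmax, min(vmax, v))            # Vmax bounds the probabilities
        prob_one = 1.0 / (1.0 + math.exp(-vel[d]))   # sigmoid of velocity
        pos[d] = 1 if random.random() < prob_one else 0
    return vel, pos
```

Here Vmax plays a slightly different role than in continuous PSO: clamping to ±4 keeps the bit-flip probabilities away from 0 and 1, preserving exploration.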
  29. 29. <ul><li>Swarm Intelligence by Kennedy, Eberhart, and Shi, Morgan Kaufmann division of Academic Press, 2001. </li></ul><ul><li>http://www.engr.iupui.edu/~eberhart/web/PSObook.html </li></ul><ul><li>http://www.particleswarm.net/ </li></ul><ul><li>http://web.ics.purdue.edu/~hux/PSO.shtml </li></ul><ul><li>http://www.cis.syr.edu/~mohan/pso/ </li></ul><ul><li>http://clerc.maurice.free.fr/PSO/index.htm </li></ul><ul><li>http://users.erols.com/cathyk/jimk.html </li></ul>Books and Website
  30. 30. Ant Colony Optimization
  31. 31. ACO Concept <ul><li>Ants (blind) navigate from nest to food source </li></ul><ul><li>The shortest path is discovered via pheromone trails </li></ul><ul><ul><li>each ant moves at random </li></ul></ul><ul><ul><li>pheromone is deposited on the path </li></ul></ul><ul><ul><li>ants detect the paths of preceding ants and are inclined to follow them </li></ul></ul><ul><ul><li>more pheromone on a path increases the probability of that path being followed </li></ul></ul>
  32. 32. ACO System <ul><li>Virtual “trail” accumulated on path segments </li></ul><ul><li>Starting node selected at random </li></ul><ul><li>Path selected at random </li></ul><ul><ul><li>based on amount of “trail” present on possible paths from starting node </li></ul></ul><ul><ul><li>higher probability for paths with more “trail” </li></ul></ul><ul><li>Ant reaches next node, selects next path </li></ul><ul><li>Continues until reaches starting node </li></ul><ul><li>Finished “tour” is a solution </li></ul>
  33. 33. ACO System, cont. <ul><li>A completed tour is analyzed for optimality </li></ul><ul><li>“ Trail” amount adjusted to favor better solutions </li></ul><ul><ul><li>better solutions receive more trail </li></ul></ul><ul><ul><li>worse solutions receive less trail </li></ul></ul><ul><ul><li>higher probability of ant selecting path that is part of a better-performing tour </li></ul></ul><ul><li>New cycle is performed </li></ul><ul><li>Repeated until most ants select the same tour on every cycle (convergence to solution) </li></ul>
  34. 34. ACO System, cont. <ul><li>Often applied to the TSP (Travelling Salesman Problem): shortest tour through n nodes </li></ul><ul><li>Algorithm in pseudocode: </li></ul><ul><ul><li>Initialize Trail </li></ul></ul><ul><ul><li>Do While (Stopping Criteria Not Satisfied) – Cycle Loop </li></ul></ul><ul><ul><ul><li>Do Until (Each Ant Completes a Tour) – Tour Loop </li></ul></ul></ul><ul><ul><ul><ul><li>Local Trail Update </li></ul></ul></ul></ul><ul><ul><ul><li>End Do </li></ul></ul></ul><ul><ul><ul><li>Analyze Tours </li></ul></ul></ul><ul><ul><ul><li>Global Trail Update </li></ul></ul></ul><ul><ul><li>End Do </li></ul></ul>
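The pseudocode above can be fleshed out into a small runnable Ant System for the TSP. This is a sketch under simplifying assumptions (symmetric distances, pheromone-times-heuristic arc choice, evaporation followed by a deposit proportional to tour quality); names and parameter defaults are illustrative:

```python
import random

def aco_tsp(dist, n_ants=20, iters=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """Minimize tour length on a symmetric distance matrix with a basic
    Ant System: build tours probabilistically, evaporate, then deposit."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # constant initial trail on all arcs
    best_tour, best_len = None, float("inf")

    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)          # starting node selected at random
            tour, seen = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in seen]
                # weight = trail^alpha * (1/distance)^beta
                wts = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in cand]
                j = random.choices(cand, weights=wts)[0]
                tour.append(j)
                seen.add(j)
            tours.append(tour)
        # evaporation on all arcs, then deposit proportional to tour quality
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for t in tours:
            length = tour_len(t)
            if length < best_len:
                best_tour, best_len = t[:], length
            for i in range(n):
                a, b = t[i], t[(i + 1) % n]
                tau[a][b] += q / length          # shorter tours deposit more
                tau[b][a] += q / length
    return best_tour, best_len
```

On small instances the pheromone matrix quickly concentrates on the best tour's arcs, which is the convergence behavior the slide describes.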
  35. 41. ACO Background <ul><li>Discrete optimization problems difficult to solve </li></ul><ul><li>“ Soft computing techniques” developed in past ten years: </li></ul><ul><ul><li>Genetic algorithms (GAs) </li></ul></ul><ul><ul><ul><li>based on natural selection and genetics </li></ul></ul></ul><ul><ul><li>Ant Colony Optimization (ACO) </li></ul></ul><ul><ul><ul><li>modeling ant colony behavior </li></ul></ul></ul>
  36. 42. ACO Background, cont. <ul><li>Developed by Marco Dorigo (Milan, Italy), and others in early 1990s </li></ul><ul><li>Some common applications: </li></ul><ul><ul><li>Quadratic assignment problems </li></ul></ul><ul><ul><li>Scheduling problems </li></ul></ul><ul><ul><li>Dynamic routing problems in networks </li></ul></ul><ul><li>Theoretical analysis difficult </li></ul><ul><ul><li>algorithm is based on a series of random decisions (by artificial ants) </li></ul></ul><ul><ul><li>probability of decisions changes on each iteration </li></ul></ul>
  37. 43. What is ACO as an Optimization Technique? <ul><li>A probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs </li></ul><ul><li>It is inspired by the behavior of ants in finding paths from the colony to food. </li></ul>
  38. 44. Implementation <ul><li>Can be used for both static and dynamic combinatorial optimization problems </li></ul><ul><li>Convergence is guaranteed, although the speed is unknown </li></ul>
  39. 45. The Algorithm <ul><li>Ant colony algorithms are typically used to solve minimum cost problems. </li></ul><ul><li>The problem is usually represented as a graph with N nodes and A undirected arcs </li></ul><ul><li>There are two working modes for the ants: forward and backward. </li></ul><ul><li>Pheromones are only deposited in backward mode (so that we know how good the path was when updating its trail) </li></ul>
  40. 46. The Algorithm <ul><li>An ant’s memory allows it to retrace the path it has followed while searching for the destination node </li></ul><ul><li>Before moving backward along their memorized path, the ants eliminate any loops from it. While moving backward, the ants leave pheromones on the arcs they traversed. </li></ul>
  41. 47. The Algorithm <ul><li>The ants evaluate the cost of the paths they have traversed. </li></ul><ul><li>Shorter paths receive a greater deposit of pheromones. An evaporation rule is tied to the pheromones, which reduces the chance of poor quality solutions being reinforced. </li></ul>
  42. 48. The ACO Algorithm <ul><li>At the beginning of the search process, a constant amount of pheromone is assigned to all arcs. When located at a node i, an ant k uses the pheromone trail to compute the probability of choosing j as the next node: </li></ul><ul><li>p_ij^k = (tau_ij)^alpha / sum over l in N_i^k of (tau_il)^alpha, for j in N_i^k (and 0 otherwise), </li></ul><ul><li>where N_i^k is the neighborhood of ant k when in node i. </li></ul>
  43. 49. The Algorithm <ul><li>When the arc (i,j) is traversed, the pheromone value changes as follows: </li></ul><ul><li>tau_ij = tau_ij + delta_tau </li></ul><ul><li>By using this rule, the probability increases that forthcoming ants will use this arc. </li></ul>
  44. 50. The Algorithm <ul><li>After each ant k has moved to the next node, the pheromones evaporate by the following equation, applied to all the arcs: </li></ul><ul><li>tau_ij = (1 - rho) * tau_ij </li></ul><ul><li>where rho, a parameter between 0 and 1, is the evaporation rate. An iteration is a complete cycle involving ants’ movement, pheromone evaporation, and pheromone deposit. </li></ul>
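Taken together, the choice, deposit, and evaporation rules on the last three slides amount to a few lines of code. A sketch: the pheromone-only choice rule here deliberately omits any heuristic term, matching the equations above, and the function names are my own:

```python
import random

def choose_next(tau, i, neighborhood, alpha=1.0):
    """Pick the next node j with probability tau[i][j]^alpha,
    normalized over the ant's current neighborhood."""
    wts = [tau[i][j] ** alpha for j in neighborhood]
    return random.choices(neighborhood, weights=wts)[0]

def deposit(tau, i, j, dtau):
    """Reinforce a traversed arc: tau_ij = tau_ij + delta_tau."""
    tau[i][j] += dtau

def evaporate(tau, rho):
    """Decay every arc by the factor (1 - rho)."""
    for i in range(len(tau)):
        for j in range(len(tau[i])):
            tau[i][j] *= (1.0 - rho)
```

One iteration of the algorithm is then: each ant calls `choose_next` until its path is complete, `evaporate` runs once over all arcs, and `deposit` is applied along each backward path.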
  45. 51. Steps for Solving a Problem by ACO <ul><li>Represent the problem in the form of sets of components and transitions, or by a set of weighted graphs, on which ants can build solutions </li></ul><ul><li>Define the meaning of the pheromone trails </li></ul><ul><li>Define the heuristic preference for the ant while constructing a solution </li></ul><ul><li>If possible, implement an efficient local search algorithm for the problem to be solved </li></ul><ul><li>Choose a specific ACO algorithm and apply it to the problem being solved </li></ul><ul><li>Tune the parameters of the ACO algorithm </li></ul>
  46. 52. Applications <ul><li>Produces good solutions to NP-hard problems </li></ul><ul><li>Routing </li></ul><ul><ul><li>TSP (Traveling Salesman Problem) </li></ul></ul><ul><ul><li>Vehicle Routing </li></ul></ul><ul><ul><li>Sequential Ordering </li></ul></ul><ul><li>Assignment </li></ul><ul><ul><li>QAP (Quadratic Assignment Problem) </li></ul></ul><ul><ul><li>Graph Coloring </li></ul></ul><ul><ul><li>Generalized Assignment </li></ul></ul><ul><ul><li>Frequency Assignment </li></ul></ul><ul><ul><li>University Course Time Scheduling </li></ul></ul>
  47. 53. Applications <ul><li>Scheduling </li></ul><ul><ul><li>Job Shop </li></ul></ul><ul><ul><li>Open Shop </li></ul></ul><ul><ul><li>Flow Shop </li></ul></ul><ul><ul><li>Total tardiness (weighted/non-weighted) </li></ul></ul><ul><ul><li>Project Scheduling </li></ul></ul><ul><ul><li>Group Shop </li></ul></ul><ul><li>Subset </li></ul><ul><ul><li>Multi-Knapsack </li></ul></ul><ul><ul><li>Max Independent Set </li></ul></ul><ul><ul><li>Redundancy Allocation </li></ul></ul><ul><ul><li>Set Covering </li></ul></ul><ul><ul><li>Weight Constrained Graph Tree partition </li></ul></ul><ul><ul><li>Arc-weighted L cardinality tree </li></ul></ul><ul><ul><li>Maximum Clique </li></ul></ul>
  48. 54. Applications <ul><li>Other </li></ul><ul><ul><li>Shortest Common Sequence </li></ul></ul><ul><ul><li>Constraint Satisfaction </li></ul></ul><ul><ul><li>2D-HP protein folding </li></ul></ul><ul><ul><li>Bin Packing </li></ul></ul><ul><li>Machine Learning </li></ul><ul><ul><li>Classification Rules </li></ul></ul><ul><ul><li>Bayesian networks </li></ul></ul><ul><ul><li>Fuzzy systems </li></ul></ul><ul><li>Network Routing </li></ul><ul><ul><li>Connection oriented network routing </li></ul></ul><ul><ul><li>Connection network routing </li></ul></ul><ul><ul><li>Optical network routing </li></ul></ul>
  49. 55. Ant Colony Algorithms <ul><li>Let u_m and l_m be the number of ants that have used the upper and lower branches. </li></ul><ul><li>The probability P_u(m) with which the (m+1)th ant chooses the upper branch is: </li></ul><ul><li>P_u(m) = (u_m + k)^h / [ (u_m + k)^h + (l_m + k)^h ], with P_l(m) = 1 - P_u(m), </li></ul><ul><li>where k and h are parameters fitted to the experimental (double-bridge) data. </li></ul>
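The branch-choice model above is easy to evaluate directly. The defaults k = 20 and h = 2 below are values commonly reported for fits to the double-bridge experiments, used here only as illustrative defaults:

```python
def p_upper(u_m, l_m, k=20, h=2):
    """Probability that ant m+1 takes the upper branch, given that u_m
    and l_m earlier ants used the upper and lower branches respectively."""
    return (u_m + k) ** h / ((u_m + k) ** h + (l_m + k) ** h)
```

With no prior traffic the choice is unbiased (P_u = 0.5), and the probability shifts toward whichever branch has carried more ants, which is the positive-feedback mechanism behind trail formation.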
  50. 56. Traveling Salesperson Problem <ul><li>Famous NP-hard optimization problem </li></ul><ul><li>Given a fully connected, symmetric graph G(V,E) with known edge costs, find the minimum cost tour. </li></ul><ul><li>Artificial ants move from vertex to vertex in order to find the minimum cost tour using only pheromone-mediated trails. </li></ul>
  51. 57. Traveling Salesperson Problem <ul><li>The three main ideas that this ant colony algorithm has adopted from real ant colonies are: </li></ul><ul><ul><li>The ants have a probabilistic preference for paths with high pheromone value </li></ul></ul><ul><ul><li>Shorter paths tend to have a higher rate of growth in pheromone value </li></ul></ul><ul><ul><li>It uses an indirect communication system through pheromone in edges </li></ul></ul>
  52. 58. Traveling Salesperson Problem <ul><li>Ants select the next vertex based on a weighted probability function based on two factors: </li></ul><ul><ul><li>The number of edges and the associated cost </li></ul></ul><ul><ul><li>The trail (pheromone) left behind by other ant agents. </li></ul></ul><ul><li>Each agent modifies the environment in two different ways : </li></ul><ul><ul><li>Local trail updating: As the ant moves between cities it updates the amount of pheromone on the edge </li></ul></ul><ul><ul><li>Global trail updating: When all ants have completed a tour the ant that found the shortest route updates the edges in its path </li></ul></ul>
  53. 59. Traveling Salesperson Problem <ul><li>Local Updating is used to avoid very strong pheromone edges and hence increase exploration (and hopefully avoid locally optimal solutions). </li></ul><ul><li>The Global Updating function gives the shortest path higher reinforcement by increasing the amount of pheromone on the edges of the shortest path. </li></ul>
  54. 60. Empirical Results <ul><li>Compared Ant Colony Algorithm to standard algorithms and meta-heuristic algorithms on Oliver 30 – a 30 city TSP </li></ul><ul><li>Standard: 2-Opt, Lin-Kernighan, </li></ul><ul><li>Meta-Heuristics: Tabu Search and Simulated Annealing </li></ul><ul><li>Conducted 10 replications of each algorithm and provided averaged results </li></ul>
  55. 61. Comparison to Standard Algorithms <ul><li>Examined solution quality – not speed; in general, the standard algorithms were significantly faster. </li></ul><ul><li>Best ACO solution: 420 </li></ul><ul><li>Tour lengths by construction heuristic (2-Opt / L-K): </li></ul><ul><ul><li>Near Neighbor: 437 / 421 </li></ul></ul><ul><ul><li>Far Insert: 421 / 420 </li></ul></ul><ul><ul><li>Random: 663 / 421 </li></ul></ul><ul><ul><li>Sweep: 426 / 421 </li></ul></ul><ul><ul><li>Near Insert: 492 / 420 </li></ul></ul><ul><ul><li>Space Fill: 431 / 421 </li></ul></ul>
  56. 62. Comparison to Meta-Heuristic Algorithms <ul><li>Meta-heuristics are algorithms that can be applied to a variety of problems with a minimum of customization. </li></ul><ul><li>Comparing ACO to other meta-heuristics provides a “fair market” comparison (as opposed to TSP-specific algorithms). </li></ul><ul><li>Results (Best / Mean / Std Dev): </li></ul><ul><ul><li>ACO: 420 / 420.4 / 1.3 </li></ul></ul><ul><ul><li>Tabu: 420 / 420.6 / 1.5 </li></ul></ul><ul><ul><li>SA: 422 / 459.8 / 25.1 </li></ul></ul>
  57. 63. Other Application Areas <ul><li>Scheduling : Scheduling is a widespread problem of practical importance. </li></ul><ul><li>Paul Forsyth & Anthony Wren, University of Leeds Computer Science department developed a bus driver scheduling application using ant colony concepts. </li></ul>
  58. 64. Advantages and Disadvantages
  59. 65. Advantages and Disadvantages <ul><li>For TSPs (Traveling Salesman Problems), relatively efficient </li></ul><ul><ul><li>for a small number of nodes, TSPs can be solved by exhaustive search </li></ul></ul><ul><ul><li>for a large number of nodes, TSPs are very computationally difficult to solve (NP-hard) – exponential time to convergence </li></ul></ul><ul><li>Performs well compared with other global optimization techniques for the TSP (neural nets, genetic algorithms, simulated annealing) </li></ul><ul><li>Compared to GAs (Genetic Algorithms): </li></ul><ul><ul><li>retains memory of the entire colony instead of the previous generation only </li></ul></ul><ul><ul><li>less affected by poor initial solutions (due to the combination of random path selection and colony memory) </li></ul></ul>
  60. 66. Advantages and Disadvantages, cont. <ul><li>Can be used in dynamic applications (adapts to changes such as new distances, etc.) </li></ul><ul><li>Has been applied to a wide variety of applications </li></ul><ul><li>As with GAs, good choice for constrained discrete problems (not a gradient-based algorithm) </li></ul>
  61. 67. Advantages and Disadvantages, cont. <ul><li>Theoretical analysis is difficult: </li></ul><ul><ul><li>Due to sequences of random decisions (not independent) </li></ul></ul><ul><ul><li>Probability distribution changes by iteration </li></ul></ul><ul><ul><li>Research is experimental rather than theoretical </li></ul></ul><ul><li>Convergence is guaranteed, but time to convergence uncertain </li></ul>
  62. 68. Advantages and Disadvantages, cont. <ul><li>Tradeoffs in evaluating convergence: </li></ul><ul><ul><li>In NP-hard problems, need high-quality solutions quickly – focus is on quality of solutions </li></ul></ul><ul><ul><li>In dynamic network routing problems, need solutions for changing conditions – focus is on effective evaluation of alternative paths </li></ul></ul><ul><li>Coding is somewhat complicated, not straightforward </li></ul><ul><ul><li>Pheromone “trail” additions/deletions, global updates and local updates </li></ul></ul><ul><ul><li>Large number of different ACO algorithms to exploit different problem characteristics </li></ul></ul>
  63. 69. Sources <ul><li>Dorigo, Marco and Stützle, Thomas. (2004) Ant Colony Optimization, Cambridge, MA: The MIT Press. </li></ul><ul><li>Dorigo, Marco, Gambardella, Luca M., Middendorf, Martin. (2002) “Guest Editorial,” IEEE Transactions on Evolutionary Computation, 6(4): 317-320. </li></ul><ul><li>Thompson, Jonathan, “Ant Colony Optimization.” http://www.orsoc.org.uk/region/regional/swords/swords.ppt , accessed April 24, 2005. </li></ul><ul><li>Camp, Charles V., Bichon, Barron J. and Stovall, Scott P. (2005) “Design of Steel Frames Using Ant Colony Optimization,” Journal of Structural Engineering, 131(3): 369-379. </li></ul><ul><li>Fjalldal, Johann Bragi, “An Introduction to Ant Colony Algorithms.” http://www.informatics.sussex.ac.uk/research/nlp/gazdar/teach/atc/1999/web/johannf/ants.html , accessed April 24, 2005. </li></ul>
  64. 70. Questions?