Gary Yen: "Multi-objective Optimization and Performance Metrics Ensemble"


  1. MULTIOBJECTIVE OPTIMIZATION AND PERFORMANCE METRICS ENSEMBLE
     Gary G. Yen, FIEEE, gyen@okstate.edu
     Professor, Oklahoma State University; Past President, IEEE Computational Intelligence Society
  2. ieee-wcci2014.org
  3. Multiobjective Optimization
     • Optimization problems involving more than one objective function are very common, yet difficult, in science, engineering, and business management.
     • Nonconflicting objectives: a single optimal solution satisfies all objectives simultaneously (SOPs).
     • Competing objectives: the objectives cannot be optimized simultaneously. For an MOP, the goal is to search for a set of "acceptable" solutions, each of which may be only suboptimal with respect to any single objective.
     • In operations research/management terms: multiple criteria decision making (MCDM) (International Society on MCDM; http://www.terry.uga.edu/mcdm/)
  4. Why MOP? Buying an Automobile
     • Objectives: reduce cost while maximizing comfort.
     • Which solution (1, A, B, C, 2) is best? No solution from this set makes both objectives better than any other solution in the set.
     • There is no single optimal solution, only a trade-off between the conflicting objectives, cost and comfort.
  5. Mathematical Definition
     • Mathematical model used to formulate the optimization problem, with objective vector $\mathbf{y}$, decision vector $\mathbf{x}$, environment states $\mathbf{e}$, equality constraints $\mathbf{h}$, inequality constraints $\mathbf{g}$, and variable bounds $\mathbf{x}^L, \mathbf{x}^U$:

       $\min_{\mathbf{x}} \{\, \mathbf{y} = \mathbf{f}(\mathbf{x}, \mathbf{e}) : \mathbf{h}(\mathbf{x}, \mathbf{e}) = \mathbf{0},\ \mathbf{g}(\mathbf{x}, \mathbf{e}) \le \mathbf{0},\ \mathbf{x}^L \le \mathbf{x} \le \mathbf{x}^U \,\}$

     o Design variables: the decision vector and the objective vector
     o Constraints: equality and inequality
     o A greater-than-or-equal-to inequality constraint can be converted to a less-than-or-equal-to constraint by multiplying by $-1$
     o An objective to be maximized can be converted to minimization by the duality principle: $\max f(\mathbf{x}) = -\min(-f(\mathbf{x}))$
     (Both conversions are sketched in code below.)
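A minimal Python sketch of the two conversion rules on this slide; the names `to_min` and `to_leq` are illustrative, not from the slides.

```python
# Normalize an MOP into the canonical "minimize, with <=-constraints" form.

def to_min(f):
    """Duality: max f(x) is equivalent to min -f(x); negate the objective."""
    return lambda x: -f(x)

def to_leq(g):
    """A constraint g(x) >= 0 becomes -g(x) <= 0 after multiplying by -1."""
    return lambda x: -g(x)

# Example: maximizing comfort becomes minimizing its negation.
comfort = lambda x: 10.0 - x
assert to_min(comfort)(3.0) == -7.0
```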
  6. Pareto Optimality
     • Formal definition: consider the minimization of the $n$ components $f_k$, $k = 1, \dots, n$, of a vector function $\mathbf{f}$ of a vector variable $\mathbf{x}$ in a universe $\mu$, where $\mathbf{f}(\mathbf{x}) = (f_1(\mathbf{x}), f_2(\mathbf{x}), \dots, f_n(\mathbf{x}))$.
     • Then a decision vector $\mathbf{x}_u \in \mu$ is said to be Pareto-optimal if and only if there is no $\mathbf{x}_v \in \mu$ for which $\mathbf{v} = \mathbf{f}(\mathbf{x}_v) = (v_1, \dots, v_n)$ dominates $\mathbf{u} = \mathbf{f}(\mathbf{x}_u) = (u_1, \dots, u_n)$; that is, there is no $\mathbf{x}_v \in \mu$ such that $\forall i \in \{1, \dots, n\},\ v_i \le u_i$ and $\exists i \in \{1, \dots, n\} \mid v_i < u_i$.
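The dominance relation above translates directly into code. A minimal sketch for minimization problems (function names are illustrative):

```python
def dominates(v, u):
    """v dominates u iff v is no worse in every objective and strictly
    better in at least one (all objectives to be minimized)."""
    return (all(vi <= ui for vi, ui in zip(v, u))
            and any(vi < ui for vi, ui in zip(v, u)))

def pareto_front(points):
    """Filter a list of objective vectors down to the non-dominated ones."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (1, 4) and (2, 2) are mutually non-dominated; (3, 3) is dominated by (2, 2).
assert pareto_front([(1, 4), (2, 2), (3, 3)]) == [(1, 4), (2, 2)]
```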
  7. When encountering problems with many objectives (more than five), nearly all algorithms perform poorly because of the loss of selection pressure when fitness evaluation is based solely upon Pareto domination.
  8. Distinctions from SOP
     • Multiple conflicting objectives as opposed to a single one
     • Multiple optima vs. a single optimum
     • Two goals instead of one:
       o progressing towards the Pareto front
       o maintaining a diverse set of solutions in the non-dominated front
     • Dealing with two search spaces:
       o a decision variable space plus an objective space
       o proximity of two solutions in one space does not imply proximity in the other
       o the search is performed in the decision space
  9. Disadvantages of Classical Methods
     • Prior knowledge of the problem domain is needed to reduce the MOP to a single-objective optimization problem (e.g., a weight vector, ε-constraints; see the weighted-sum sketch below)
     • Each run results in a single solution
     • Non-uniformity in the Pareto-optimal solutions obtained
     • They require the fitness function to be linear, continuous, and differentiable
     • They cannot deal with MOPs having discontinuous or concave Pareto fronts
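For contrast, a minimal sketch of the classical weighted-sum approach criticized here: the weight vector must be chosen a priori, and each run yields at most one solution.

```python
def weighted_sum(objectives, weights):
    """Collapse an MOP into a single-objective problem; optimizing the
    returned function yields at most one Pareto point per weight vector,
    and points on concave front regions are unreachable for any weights."""
    return lambda x: sum(w * f(x) for w, f in zip(weights, objectives))

# Two conflicting objectives scalarized with an a-priori weight vector.
cost = lambda x: x
discomfort = lambda x: 1.0 / (x + 1.0)
scalar = weighted_sum([cost, discomfort], [0.7, 0.3])
print(scalar(2.0))  # 0.7 * 2.0 + 0.3 * (1 / 3) = 1.5
```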
  10. Why Population-Based Heuristics?
      • An unorthodox, stochastic, population-based parallel search algorithm may be more suitable for MOPs
      • Classification of EAs:
        o Genetic Algorithm
        o Genetic Programming
        o Evolutionary Strategy
        o Ant Colony
        o Artificial Immune System
        o Particle Swarm Optimization
        o Differential Evolution
        o Memetic Algorithm
  11. Efforts in Enhancing a PSO for MOPs
      • Modifying the fitness assignment
      • Improving the PSO flight mechanism
      • Enhancing the convergence
      • Preserving the diversity
      • Managing the population
      • Constraint and uncertainty handling
      • Knowledge management through culture/meme
  12. Performance Metrics
      To quantify the performance of evolutionary multiobjective algorithms according to the two essential measures dictated by Pareto optimality:
      • a convergence measure
      • a diversity measure
  13. Current Practice
      In the literature, when an MOEA is proposed, a number of benchmark problems are chosen to quantify its performance. Based on a set of heuristically chosen performance metrics, the proposed MOEA and some competitive representatives are evaluated statistically over a large number of independent trials. The conclusion, if any is drawn, is often indecisive and reveals no additional insight into the specific problem characteristics for which the proposed MOEA would do best.
  14. By the No Free Lunch theorem, any algorithm's elevated performance over one class of problems is exactly paid for in loss over another class. Our goal is to rank the MOEAs considered based on a more comprehensive measure (a hybrid performance metric), revealing the specific problem characteristics for which the underlying MOEA performs best.
  15. Case Study
      • Five state-of-the-art MOEAs: SPEA 2, NSGA-II, PESA-II, IBEA, and MOEA/D
      • Nine benchmark problems:
        o 2-objective ZDT1, ZDT2, ZDT3, ZDT4, ZDT6
        o 3-objective DTLZ2
        o 5-objective WFG1 and WFG2
        o 10-objective DTLZ1
      • Five performance metrics (IGD is sketched below):
        o Inverted Generational Distance (IGD)
        o Pareto Dominance Indicator (NR)
        o Maximum Spread (MS)
        o Spacing
        o Hypervolume Indicator
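As one concrete example of the convergence-oriented metrics listed above, here is a minimal NumPy sketch of Inverted Generational Distance in its common "mean nearest distance" form (definitions vary slightly across papers):

```python
import numpy as np

def igd(reference_front, approximation_front):
    """Mean Euclidean distance from each reference point to its nearest
    neighbour in the approximation front (lower is better); it penalizes
    both poor convergence and poor coverage of the true front."""
    R = np.asarray(reference_front, dtype=float)
    A = np.asarray(approximation_front, dtype=float)
    nearest = np.linalg.norm(R[:, None, :] - A[None, :, :], axis=2).min(axis=1)
    return float(nearest.mean())

ref = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(igd(ref, ref))           # 0.0: the front matches the reference exactly
print(igd(ref, [[2.0, 2.0]]))  # large: the front is far from the reference
```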
  16. Performance Metrics Ensemble
      For the same initial population, each of the five MOEAs generates a non-dominated front for a given benchmark function with specific problem characteristics. A randomly chosen performance metric is used to identify the winning non-dominated front and its associated MOEA. This process is repeated 50 times to gain meaningful statistics. These 50 non-dominated fronts can come from any of the five MOEAs, and each of the five performance metrics can be used multiple times.
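A minimal sketch of the ensemble's core comparison step, assuming each metric is a callable that scores a single front; the dictionary layout and the tie handling are my own illustration, not specified on the slides.

```python
import random

def ensemble_tournament(front_a, front_b, metrics, rng=random):
    """One binary tournament of the ensemble: draw a metric uniformly at
    random and let it decide which of two non-dominated fronts wins.
    `metrics` maps a metric name to (score_fn, higher_is_better)."""
    name = rng.choice(sorted(metrics))
    score_fn, higher_is_better = metrics[name]
    a, b = score_fn(front_a), score_fn(front_b)
    if higher_is_better:
        winner = front_a if a >= b else front_b   # ties favour front_a
    else:
        winner = front_a if a <= b else front_b
    return winner, name
```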
  17. ZDT1
      Generate 50 non-dominated fronts as the initial population of the double-elimination tournament selection:

      Fronts per algorithm:  SPEA 2: 19,  NSGA-II: 11,  IBEA: 3,  PESA-II: 5,  MOEA/D: 12
      Metric usage:          IGD: 11,  NR: 10,  Spacing: 12,  S-metric: 10,  MS: 7
  18. Flow Chart
      Input: the MOEAs, a benchmark problem, and 50 approximation fronts. Output: a specific rank value for every MOEA.
      Loop: run double elimination to obtain the best front; identify the winner algorithm and assign it the next rank value; eliminate all fronts from the winner algorithm; if the number of remaining fronts is not 0, repeat. (A sketch of this loop follows.)
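The flow chart reduces to a short loop. A sketch, where `run_double_elimination` is an assumed callable that takes the surviving fronts and returns the algorithm that produced the winning front:

```python
def rank_algorithms(fronts_by_algorithm, run_double_elimination):
    """Assign ranks 1, 2, ... by repeated double elimination: the winning
    front's algorithm takes the next rank, and then all of its fronts are
    withdrawn before the tournament is run again."""
    ranking = []
    remaining = dict(fronts_by_algorithm)   # algorithm name -> list of fronts
    while remaining:
        winner = run_double_elimination(remaining)
        ranking.append(winner)
        del remaining[winner]               # eliminate all fronts of the winner
    return ranking
```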
  19. Double Elimination Tournament
      The 50 winners from the 50 runs enter a winner bracket (25) and a loser bracket (25). In each round, winners of the winner bracket are reserved as the winner bracket of the next round; losers of the winner bracket drop into the loser bracket; winners of the loser bracket are reserved as the loser bracket of the next round; losers of the loser bracket are eliminated. (Here, 13 winners and 13 losers survive into the next round.)
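A minimal sketch of one bracket round under these rules; the pairing scheme and bye handling are simplified assumptions, since the slides do not specify them.

```python
def play_pairs(entrants, beat):
    """Pair adjacent entrants; an odd entrant receives a bye."""
    winners, losers = [], []
    for i in range(0, len(entrants) - 1, 2):
        a, b = entrants[i], entrants[i + 1]
        w = a if beat(a, b) else b
        winners.append(w)
        losers.append(b if w is a else a)
    if len(entrants) % 2:
        winners.append(entrants[-1])
    return winners, losers

def double_elimination_round(winner_bracket, loser_bracket, beat):
    """One round: winner-bracket losers drop into the loser bracket, and
    loser-bracket losers are eliminated, so every front must lose twice
    before it is out of the tournament."""
    new_winners, dropped = play_pairs(winner_bracket, beat)
    survivors, _eliminated = play_pairs(loser_bracket + dropped, beat)
    return new_winners, survivors
```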
  20. Round 1
      50 fronts compete and are down-selected to 26 fronts (13 in the winner bracket and 13 in the loser bracket) through 25 + 12 + 12 + 13 = 62 binary tournaments:

      Surviving fronts per algorithm:  SPEA 2: 9,  NSGA-II: 8,  IBEA: 0,  PESA-II: 1,  MOEA/D: 8
      Metric usage:                    IGD: 13,  NR: 13,  Spacing: 11,  S-metric: 13,  MS: 12
  21. Round 2
      Winner bracket (13) and loser bracket (13); 7 winners and 7 losers survive into the next round under the same bracket rules, and losers of the loser bracket are eliminated.
  22. 26 fronts compete and are down-selected to 14 fronts (7 in the winner bracket and 7 in the loser bracket) through 6 + 6 + 7 = 19 binary tournaments:

      Surviving fronts per algorithm:  SPEA 2: 6,  NSGA-II: 2,  IBEA: 0,  PESA-II: 1,  MOEA/D: 5
      Metric usage:                    IGD: 5,  NR: 4,  Spacing: 5,  S-metric: 2,  MS: 3
  23. Round 3
      Winner bracket (7) and loser bracket (7); 4 winners and 4 losers survive into the next round, and losers of the loser bracket are eliminated.
  24. 14 fronts compete and are down-selected to 8 fronts (4 in the winner bracket and 4 in the loser bracket) through 3 + 3 + 4 = 10 binary tournaments:

      Surviving fronts per algorithm:  SPEA 2: 3,  NSGA-II: 2,  IBEA: 0,  PESA-II: 0,  MOEA/D: 3
      Metric usage:                    IGD: 3,  NR: 1,  Spacing: 3,  S-metric: 3,  MS: 0
  25. Round 4
      Winner bracket (4) and loser bracket (4); 2 winners and 2 losers survive into the next round, and losers of the loser bracket are eliminated.
  26. 8 fronts compete and are down-selected to 4 fronts (2 in the winner bracket and 2 in the loser bracket) through 2 + 2 + 2 = 6 binary tournaments:

      Surviving fronts per algorithm:  SPEA 2: 2,  NSGA-II: 0,  IBEA: 0,  PESA-II: 0,  MOEA/D: 2
      Metric usage:                    IGD: 1,  NR: 0,  Spacing: 2,  S-metric: 1,  MS: 2
  27. Round 5
      Winner bracket (2) and loser bracket (2); 1 winner and 1 loser survive into the next round, and the loser of the loser bracket is eliminated.
  28. 4 fronts compete and are down-selected to 2 fronts (1 in the winner bracket and 1 in the loser bracket) through 1 + 1 + 1 = 3 binary tournaments:

      Surviving fronts per algorithm:  SPEA 2: 1,  NSGA-II: 0,  IBEA: 0,  PESA-II: 0,  MOEA/D: 1
      Metric usage:                    IGD: 1,  NR: 0,  Spacing: 0,  S-metric: 1,  MS: 1
  29. Round 6
      Winner bracket (1) vs. loser bracket (1): the final match produces the 1 final winner.
  30. In the final, the last 2 fronts compete to produce the final winner. In total, about 152 binary tournaments were held to decide a final winner:

      Final winner's algorithm:  SPEA 2: 1,  NSGA-II: 0,  IBEA: 0,  PESA-II: 0,  MOEA/D: 0
      Metric usage:              IGD: 0,  NR: 0,  Spacing: 0,  S-metric: 1,  MS: 0

      Removing the 18 fronts generated by SPEA 2, the remaining 32 fronts go through the process again to determine the next rank, and so on.
  31. Final Ranking
      35 repeated, independent experiments were done for each function, and the findings have been consistent:

      Rank | 2-obj ZDT1 | 2-obj ZDT2 | 2-obj ZDT3 | 2-obj ZDT4 | 2-obj ZDT6 | 3-obj DTLZ2 | 5-obj WFG1 | 5-obj WFG2 | 10-obj DTLZ1
      1    | SPEA 2     | SPEA 2     | NSGA-II    | MOEA/D     | MOEA/D     | IBEA        | IBEA       | IBEA       | IBEA
      2    | MOEA/D     | MOEA/D     | MOEA/D     | NSGA-II    | IBEA       | MOEA/D      | MOEA/D     | MOEA/D     | NSGA-II
      3    | NSGA-II    | NSGA-II    | IBEA       | PESA-II    | NSGA-II    | SPEA 2      | SPEA 2     | NSGA-II    | MOEA/D
      4    | PESA-II    | IBEA       | SPEA 2     | IBEA       | SPEA 2     | NSGA-II     | NSGA-II    | SPEA 2     | SPEA 2
      5    | IBEA       | PESA-II    | PESA-II    | SPEA 2     | PESA-II    | PESA-II     | PESA-II    | PESA-II    | PESA-II
  32. Observations on SPEA 2
      • It is the final winner on ZDT1 and ZDT2. ZDT1 and ZDT2 have no local Pareto-optimal fronts, and their global Pareto-optimal fronts are continuous.
      • IBEA and PESA-II dropped out of the competition in the first round; SPEA 2, MOEA/D, and NSGA-II competed fiercely until round 4.
      • SPEA 2 performs well on problems whose Pareto-optimal fronts are continuous and that have no local Pareto-optimal fronts.
  33. • On ZDT1, SPEA 2 is the final winner; it wins under the other four metrics but is inferior to NSGA-II in the S-metric.
      • On ZDT2, SPEA 2 is the final winner; it wins under the other four metrics but is slightly worse than NSGA-II in the Spacing metric.
      • On ZDT3, NSGA-II is the final winner; it wins under the other four metrics but is inferior to MOEA/D in the S-metric.
      • On ZDT4, MOEA/D is the final winner; it wins under the other four metrics but is slightly worse than NSGA-II in the NR metric.
      • On ZDT6, MOEA/D is the final winner but is inferior to IBEA in the MS metric and slightly worse than NSGA-II in the Spacing metric.
      • On DTLZ2, IBEA is the final winner; it wins under the other four metrics but is inferior to MOEA/D in the Spacing metric.
  34. Observations on NSGA-II
      • It has the best performance on ZDT3. ZDT3 features discreteness: its Pareto-optimal front consists of several non-contiguous convex parts. MOEA/D is comparable in performance.
      • NSGA-II performs well on problems whose Pareto-optimal front consists of several non-contiguous convex parts.
  35. Observations on MOEA/D
      • It wins on both ZDT4 and ZDT6. ZDT4 has many local Pareto-optimal fronts, forcing EAs to exhibit their ability to deal with multi-modality; ZDT6's Pareto-optimal solutions are non-uniformly distributed.
      • On ZDT4, SPEA 2 was eliminated at an early stage of the competition; on ZDT6, SPEA 2 and PESA-II were eliminated very early.
      • MOEA/D performs well on problems with many local Pareto-optimal fronts, or whose Pareto-optimal solutions are not uniformly distributed along the global Pareto front.
  36. Observations on IBEA
      • It wins on DTLZ2, WFG1, WFG2, and DTLZ1, the test problems having more than two objectives. Many credible publications support this ranking on higher-dimensional benchmark problems.
      • We can draw the comparative conclusion that IBEA performs better than the others on test problems with high-dimensional objective spaces.
  37. Overall Findings
      The double-elimination design allows a quality algorithm whose performance is poor under some specific problem characteristics to survive the competition and still win it all. It gives every individual two chances to take part in the competition, which helps preserve good individuals, especially under special conditions.
  38. Remarks
      Knowing that no single metric alone can faithfully quantify the performance of a given MOEA under real-world scenarios, this study is intended to reveal insight into the specific problem characteristics for which the underlying MOEA performs best. For a given real-world problem, if we know its problem characteristics (e.g., a Pareto front with a number of disconnected segments and a high number of local optima), we can make an educated judgment and choose the specific MOEA with superior performance under those characteristics.
  39. Grand Challenges in EMO
      • Groundbreaking applications with smashing success
      • Toward many-objective optimization under constraints and uncertainties
      • Universal fundamentals in all algorithm formulations
      • Publicity in an interdisciplinary world
      • Education for the next generations
  40. Q&A
