Cooperative Co-evolutionary (CC) techniques have
demonstrated promising performance in dealing with large-scale optimization problems. However, in many applications, their performance may drop due to imbalanced contributions to the objective function value from different subsets of decision variables. To remedy this drawback, Contribution-Based Cooperative Co-evolutionary (CBCC) algorithms have been proposed.
They have shown significant improvements over traditional CC techniques when the decomposition is accurate and the imbalance level is very high. However, in real-world scenarios, we might not know the ideal decomposition or the actual imbalance level of the problem to be solved. Therefore, this study analyses the performance of existing CBCC techniques in more realistic settings, i.e., when decomposition errors are unavoidable and the imbalance level is low or moderate.
Our in-depth analysis reveals that even in these situations, CBCC algorithms are superior alternatives to traditional CC techniques. We also observe that different CBCC variants can lead to significantly different performance. Thus, we recommend that practitioners carefully choose the CBCC variant that best suits their particular application.
4. Background
Large-Scale Black-Box Optimization
• Optimization:
– x* = arg min_{x ∈ S} f(x)
• Black-Box Optimization:
– f is unknown.
– Most mathematical models cannot be applied (as they make assumptions on f)
• Large-Scale Problems:
– The dimensionality d is very large (e.g., 1,000) and the computational budget is limited
– Can be interpreted as a form of the curse of dimensionality, which causes dramatic
performance drops
CEC 2015, Sendai, Japan. Sensitivity Analysis of CBCC Algorithms
5. Background
Cooperative Co-evolution
• Cooperative Co-evolutionary (CC) EAs:
– Follow the famous divide and conquer approach
– f(x) = Σ_i f_i(x_i)
• Procedure:
– Decomposition: Divide f into a set of smaller sub-problems (f_1, …, f_m).
– Credit Assignment: Divide the allocated computational budget equally to all
components.
– Optimization: Optimize each component almost separately.
– Merge: Merge all sub-solutions to form a solution for f
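The four-step CC procedure above can be sketched as a short program. This is an illustrative toy, not the optimizers from the study: the sphere components, the (1+1)-style per-variable mutation, and all parameter values are our assumptions.

```python
import random

# Toy round-robin CC loop for an additively separable f(x) = sum_i f_i(x_i).
# Each component here is a sphere function over its own block of variables.

def sphere(block):
    return sum(v * v for v in block)

def cc_optimize(groups, dim, cycles=50, seed=0):
    """groups: list of index lists, one per component (the decomposition)."""
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    for _ in range(cycles):
        for g in groups:                 # credit assignment: equal budget per component
            for i in g:                  # optimize: crude accept-if-better mutation
                trial = x[i] + rng.gauss(0, 0.5)
                if trial * trial < x[i] * x[i]:
                    x[i] = trial
    # merge: the shared vector x already holds all sub-solutions
    return x, sum(sphere([x[i] for i in g]) for g in groups)

x, fx = cc_optimize([[0, 1], [2, 3]], dim=4)
print(fx)  # should be far below the random starting fitness
```

Note how the inner loop gives every component the same share of the budget regardless of how much it contributes, which is exactly the weakness CBCC targets.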
Do all the components contribute equally
to the objective function?
8. Background
Imbalanced Problems
• Imbalanced Problems:
– Different components/sub-components may have different levels of contribution to
the objective function.
• Sources of Imbalance:
– Different landscapes (basis functions)
– Different dimensionalities
– For example: |x_i| ≠ |x_j| where i ≠ j
– Different weights
– For example: f(x) = Σ_i w_i f_i(x_i), where w_i ≠ w_j for all i ≠ j
• Challenge:
– For the best result, the most contributing components should receive larger shares
of the computational budget.
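To make the weighted-sum source of imbalance concrete, a tiny illustrative computation (the weights and sub-solutions are made up) shows how one heavily weighted component can dominate f(x) = Σ_i w_i f_i(x_i):

```python
# Illustrative only: two components with identical landscapes and identical
# sub-solutions, but very different weights w_i.

def sphere(block):
    return sum(v * v for v in block)

weights = [1.0, 1000.0]            # hypothetical imbalance
blocks = [[1.0, 1.0], [1.0, 1.0]]  # same sub-solutions in both components

contribs = [w * sphere(b) for w, b in zip(weights, blocks)]
total = sum(contribs)
shares = [c / total for c in contribs]
print(shares)  # the heavily weighted component dominates the objective
```

With a fixed, equal budget per component, most evaluations would be spent on the first component even though improving it barely moves f.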
9. Background
Contribution-Based CC
• Contribution-Based Cooperative Co-evolutionary (CBCC)[1]:
– Goals:
– Estimate the contribution of each component to the improvement of the objective
value.
– Assign proper portion of computational budget to components according to their
contribution.
– Assumptions:
– The budget is limited
– The problem is partially separable
– Challenges:
– Contribution of components may be unknown to the practitioners/algorithms
– Contributions may change during the optimization
[1] M. N. Omidvar, X. Li, and X. Yao, "Smart use of computational resources based on contribution for cooperative co-evolutionary algorithms," in Proceedings of GECCO '11.
10. Background
Contribution-Based CC
• CBCC1 [1]
1. Exploration: Optimize all components for one cycle each and measure the
improvements in fitness values (ΔF_i)
2. Exploitation: Optimize the most contributing component (i* = arg max_{i ∈ {1..m}} ΔF_i) for
only one extra cycle.
3. Repeat the above steps until the stopping criterion is met.
• CBCC2 [1]
1. Exploration: Optimize all components for one cycle each and measure the
improvements in fitness values (ΔF_i)
2. Exploitation: Optimize the most contributing component (i* = arg max_{i ∈ {1..m}} ΔF_i)
over and over until its improvement becomes negligible.
3. Repeat the above steps until the stopping criterion is met.
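The CBCC1 control loop above can be sketched as follows. The `Toy` component and its halving-improvement model are our placeholders, not the optimizers used in the paper; only the explore/exploit schedule mirrors the description.

```python
# Minimal sketch of the CBCC1 schedule: one exploration cycle per component,
# then one extra (exploitation) cycle for the best contributor.

def cbcc1(components, budget):
    """components: objects with .step() returning the fitness improvement of
    one optimization cycle; budget is counted in cycles."""
    delta = [0.0] * len(components)
    spent = 0
    while spent < budget:
        # Exploration: one cycle per component, recording improvements.
        for i, comp in enumerate(components):
            delta[i] = comp.step()
            spent += 1
        # Exploitation: one extra cycle for the most contributing component.
        best = max(range(len(components)), key=delta.__getitem__)
        delta[best] = components[best].step()
        spent += 1
    return delta

class Toy:
    """Toy component whose per-cycle improvement halves on every cycle."""
    def __init__(self, start):
        self.imp, self.calls = start, 0
    def step(self):
        self.calls += 1
        self.imp *= 0.5
        return self.imp

comps = [Toy(1.0), Toy(8.0)]
cbcc1(comps, budget=9)
print(comps[0].calls, comps[1].calls)  # prints 3 6
```

The bigger contributor ends up with twice the cycles; CBCC2 differs only in repeating the exploitation step until the improvement becomes negligible.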
12. Contributions
• Previous works:
– CBCC1 and CBCC2 were only studied under perfect conditions:
– Ideal decomposition (assuming the structure of the problem is known)
– High imbalance levels
• This work:
– Studies CBCC1 and CBCC2 under more realistic conditions:
– Noisy decompositions
– Low and medium levels of imbalance
14. Research Questions
1. To what extent is CBCC sensitive to the accuracy of decomposition techniques?
2. In the presence of decomposition errors, is it still beneficial to employ CBCC instead
of traditional CC?
3. To what extent does the imbalance level influence the performance of CBCC?
4. Is it still worthwhile to choose CBCC over traditional CC when the level of imbalance
is unknown (or known but not very significant)?
Questions 1 and 2 address the decomposition accuracy;
questions 3 and 4 address the imbalance level.
16. Experiments
• Part A: Decomposition Accuracy
– Design: Randomly select a percentage of variables from all components
and aggregate them into a new group (unlabelled group).
– Outcome: The resulting component contains variables from all other
groups (strong interactions with all other components)
– Error levels: 0% (noise-free, i.e., ideal decomposition), 5%, 10%, 20%, 30% and 50%
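The Part-A noise model might be sketched as below. The function name and list-of-index-lists representation of a decomposition are our assumptions; the idea of sampling a fraction p of variables into one extra "unlabelled" group follows the design above.

```python
import random

# Sketch of the Part-A decomposition noise: move a fraction p of variables,
# drawn across all groups, into a new unlabelled group.

def add_decomposition_noise(groups, p, seed=0):
    """groups: list of index lists; returns a noisy decomposition with one
    extra group holding the displaced variables."""
    rng = random.Random(seed)
    all_vars = [v for g in groups for v in g]
    moved = set(rng.sample(all_vars, int(len(all_vars) * p)))
    noisy = [[v for v in g if v not in moved] for g in groups]
    noisy.append(sorted(moved))  # the unlabelled group
    return noisy

groups = [list(range(0, 5)), list(range(5, 10))]
print(add_decomposition_noise(groups, 0.2))
```

Because the unlabelled group mixes variables from every original group, it interacts strongly with all other components, as the Outcome bullet states.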
• Part B: Imbalance Level
– Design: In f(x) = Σ_{i=1}^{m} w_i · f_i(x_i), set w_i = 10^{ω·N(0,1)}
– Outcome: Problems with varying levels of imbalance
– Imbalance levels: ω = 0, 1, 2, 3
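A quick sketch of the Part-B weight scheme as we have reconstructed it from the slide (the exact distribution, w_i = 10^(ω·N(0,1)), is our reading of the garbled formula): ω = 0 makes every weight exactly 1 (a balanced problem), while larger ω spreads the contributions over more orders of magnitude.

```python
import random

# Hypothetical generator for the Part-B imbalance weights: w_i = 10**(omega * N(0,1)).

def make_weights(m, omega, seed=1):
    rng = random.Random(seed)
    return [10 ** (omega * rng.gauss(0, 1)) for _ in range(m)]

for omega in (0, 1, 2, 3):
    w = make_weights(10, omega)
    print(omega, max(w) / min(w))  # imbalance ratio grows with omega
```

With a shared seed the ratio max(w)/min(w) equals 10^(ω·(g_max − g_min)), so it grows strictly with ω, matching "varying levels of imbalance".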
17. Experiments Setup
• Benchmark
– CEC 2013 LSGO Benchmarks
– Dimensions: 1,000
– Budget: 3,000,000 function evaluations
– Categories
1. fully separable functions (f1 - f3),
2. partially separable functions with a separable subcomponent (f4 - f7),
3. partially separable functions with no separable subcomponents (f8 - f11),
4. overlapping functions (f12 - f14),
5. fully non-separable function (f15).
• Statistical Tests
– Kruskal-Wallis rank sum test to obtain p-values.
– Wilcoxon rank sum test for pair-wise comparisons.
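For intuition on the pairwise comparisons, here is a no-ties, large-sample sketch of the Wilcoxon rank-sum z statistic (the made-up error values stand in for per-run results; in practice `scipy.stats.ranksums` and `scipy.stats.kruskal` provide full implementations of the two tests named above):

```python
import math

# Large-sample z statistic of the Wilcoxon/Mann-Whitney rank-sum test.
# This sketch assumes all pooled values are distinct (no tie correction).

def rank_sum_z(a, b):
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # 1-based ranks
    w = sum(rank[v] for v in a)                      # rank sum of sample a
    mean = n1 * (n1 + n2 + 1) / 2                    # E[w] under H0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)     # sd of w under H0
    return (w - mean) / sd

# Hypothetical final errors of two algorithms over 5 runs each:
z = rank_sum_z([1.1, 1.9, 3.2, 4.0, 4.8], [6.3, 7.1, 8.4, 9.0, 9.9])
print(z)  # |z| > 1.96 indicates significance at the 5% level
```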
20. Results
Part A: Basic Statistics
21. Results
Part A: WDL & Discussion
• As the decomposition noise level increases, the performance of CBCCs drops.
• Overall, CBCC1 performs better than CBCC2.
• Except for the very poor decomposition (50% error level), CBCC1 either outperforms or
performs statistically similarly to traditional CC.
• It is beneficial to employ CBCC1 instead of traditional CC even when the
decomposition is not very accurate.
22. Results
Part B: Basic Statistics
23. Results
Part B: WDL & Discussion
• As the imbalance level increases, the performance of CBCCs improves.
• Overall, CBCC1 performs better than CBCC2.
• In all cases, CBCC1 either outperforms or performs statistically similarly to traditional CC.
• It is beneficial to employ CBCC1 instead of traditional CC even when the
imbalance level is not very high.
24. Results
Measures
• Self-improvement:
– Reflects how much the performance of an algorithm varies when the noise or
imbalance level changes.
– Self(A, n_1 → n_2) = (score(A, n_1) − score(A, n_2)) / score(A, n_1) × 100
• Relative improvement:
– Shows how well or badly an algorithm performs in comparison with the baseline
(i.e., DECC).
– Relative(A, n) = (score(DECC, n) − score(A, n)) / score(DECC, n) × 100
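The two measures can be computed directly from their definitions; the function and argument names below are ours, reconstructed from the formulas above, and the score values are made-up examples.

```python
# Self-improvement: percent change of one algorithm's score across two
# noise/imbalance levels (n_1 -> n_2).
def self_improvement(score_level1, score_level2):
    return (score_level1 - score_level2) / score_level1 * 100

# Relative improvement: percent improvement of an algorithm over the
# baseline (e.g., DECC) at a single level.
def relative_improvement(score_baseline, score_algorithm):
    return (score_baseline - score_algorithm) / score_baseline * 100

print(self_improvement(1000.0, 500.0))      # 50.0: error halved across levels
print(relative_improvement(1000.0, 200.0))  # 80.0: 80% lower error than DECC
```

Positive values mean the algorithm improved (scores here are errors, so lower is better); negative values mean it got worse.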
28. Results
Part A: Self-improvement
• Major trend: Increasing the noise level reduces the performance of CC and
CBCCs in most cases because it:
– Increases the interaction between components (less separability).
– Creates very large components (up to 500 variables).
29. Results
Part A: Self-improvement
• Minor trend: Increasing the noise level improves the performance of CC and
CBCC1 in some cases (e.g., f6) because:
– The noise may spread the variables of the most contributing component,
which results in a less imbalanced problem.
32. Results
Part A: Relative improvement
• Main trend: Increasing the noise level decreases the relative improvements of
both CBCCs.
• Minor trend: CBCC2 is more sensitive to the decomposition accuracy than
CBCC1.
35. Results
Part B: Relative improvement
• Main trend: Increasing the imbalance level improves the relative improvements
of both CBCCs.
• Minor trend: CBCC2 is very sensitive to the imbalance level, while CBCC1
works relatively well in almost all situations.
37. Conclusion
Conclusion:
1. The CBCC framework is still effective even when
1. the decomposition accuracy is poor, or
2. the imbalance is marginal.
2. CBCC1 is more effective in realistic settings than CBCC2.
38. Future Work
Future Work:
1. Studying the sensitivity of CBCC to other types of decomposition errors.
– Breaking a non-separable component into some smaller components
– Merging some separable components into one larger component
2. Investigating the sensitivity of CC (in general) and CBCC (in particular) to the cycle
length of the sub-problem optimizer.
– Accuracy vs. Risk of concept drift
3. Improving existing CBCC variants:
– A better exploration-exploitation balance will result in a more efficient algorithm.