Meta-Lamarckian 3SOME algorithm for real-valued optimization
1. Meta-Lamarckian Learning in Three Stage Optimal Memetic Exploration
Ferrante Neri, Matthieu Weber, Fabio Caraffini, and Ilpo Poikolainen
CCI, De Montfort University, United Kingdom
University of Jyväskylä, Finland
05.09.2012 (UKCI2012, Edinburgh)
2. Outline
Background
Ockham’s Razor and Multiple Stage Optimal Memetic Exploration
Sequential Structure
Meta-Lamarckian Learning
Meta-Lamarckian Learning in Three Stage Optimal Memetic Exploration
Numerical Results
Conclusion
3. Background
Memetic Algorithm (MA): an evolutionary framework + one or more local search components
Memetic Computing (MC): a structured set of heterogeneous components for solving problems
Ockham’s Razor in MC: simple algorithms can display a performance as good as that of complex algorithms¹
Three Stage Optimal Memetic Exploration (3SOME): a sequential structure composed of three components that progressively perturb a single solution
¹ G. Iacca, F. Neri, E. Mininno, Y.S. Ong, M.H. Lim, Ockham’s Razor in Memetic Computing: Three Stage Optimal Memetic Exploration, Information Sciences, Elsevier, Vol. 188, pp. 17-43, April 2012
4. The 3SOME algorithm: algorithmic components
The current best solution is named the elite xe, while a trial solution is named xt
Long distance exploration (L): a solution xt is first randomly generated and then the exponential crossover (as in Differential Evolution) with xe is applied
Middle distance exploration (M): a hypercube of side δ, centred on xe, is constructed and the trial point xt is generated within it
Short distance exploration (S): a steepest descent local search attempts to improve upon xe by separately perturbing each variable
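The three operators above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the crossover rate formula, the parameter names (alpha, delta, rho), and the bound handling are all assumptions made for the sketch.

```python
import random

def long_distance(xe, bounds, alpha=0.05):
    """L: restart xt at random, then exponential crossover copies a
    consecutive run of genes from the elite xe into xt (sketch)."""
    n = len(xe)
    xt = [random.uniform(lo, hi) for lo, hi in bounds]
    i = random.randrange(n)
    xt[i] = xe[i]
    Cr = 0.5 ** (1.0 / (alpha * n))  # hypothetical crossover rate, not from the paper
    while random.random() < Cr:      # geometric run length, as in DE's exponential crossover
        i = (i + 1) % n
        xt[i] = xe[i]
    return xt

def middle_distance(xe, bounds, delta):
    """M: sample xt inside a hypercube of side delta centred on xe,
    clamped to the search bounds."""
    return [min(hi, max(lo, xi + random.uniform(-delta / 2, delta / 2)))
            for xi, (lo, hi) in zip(xe, bounds)]

def short_distance(xe, f, rho):
    """S: greedy steepest-descent-style search perturbing each
    variable separately by -rho, then +rho/2."""
    xs = list(xe)
    for i in range(len(xs)):
        for step in (-rho, rho / 2):
            trial = list(xs)
            trial[i] += step
            if f(trial) < f(xs):
                xs = trial
                break
    return xs
```

Each operator returns a candidate; acceptance (replacing xe when the trial is better) is left to the coordination logic.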
5. The 3SOME algorithm: algorithmic structure
L is repeated until a new promising solution is found
M is repeated while it is successful
S is performed and, if successful, activates M; if it fails, it activates L
Note: in the flowchart, S stands for Success and F for Failure
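The success/failure wiring described on this slide can be written as a small state machine. This is a hedged sketch: the operator interface (each meme returning one trial per call) and the single-evaluation bookkeeping are simplifying assumptions, not the paper's exact budget handling.

```python
def three_some_loop(f, xe, operators, budget):
    """Sketch of 3SOME's coordination: L until an improvement is found,
    M while successful, S once; S success -> M, S failure -> L.
    operators maps 'L'/'M'/'S' to callables taking xe and returning a trial."""
    state, evals = 'L', 0
    fe = f(xe)
    while evals < budget:
        xt = operators[state](xe)
        ft = f(xt)
        evals += 1
        success = ft < fe
        if success:
            xe, fe = xt, ft            # elite replacement on improvement
        if state == 'L':
            state = 'M' if success else 'L'   # L repeats until success
        elif state == 'M':
            state = 'M' if success else 'S'   # M repeats while successful
        else:
            state = 'M' if success else 'L'   # S: success -> M, failure -> L
    return xe, fe
```

Because a single solution is perturbed throughout, the whole algorithm reduces to this loop plus the three operators.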
6. Sequential structures
Composed of a set of algorithmic components (memes)
A solution (or a population) is progressively perturbed by each component
A set of conditions determines which component is activated for the subsequent perturbation
7. Meta-Lamarckian Learning²
A probabilistic, success-based adaptive criterion to coordinate the memes in MAs
Each local search component is associated with an activation probability:

η_p(t) = fe*_p(t) / fe_p(t)    (1)

where fe_p(t) is the budget spent by operator p at iteration t, and fe*_p(t) is the portion of that budget that led to an improvement.
These probability values are then used to select the memes by means of roulette-wheel selection
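Eq. (1) and the roulette selection can be sketched as below. The normalisation step and the small probability floor (which keeps a so-far-unsuccessful meme selectable) are assumptions of this sketch, not details taken from the slide.

```python
import random

def activation_probabilities(spent, improving):
    """Compute eta_p = fe*_p / fe_p for each meme p (Eq. 1), then
    normalise so the values can drive a roulette wheel.
    spent[p]: budget consumed by meme p; improving[p]: budget that
    led to an improvement."""
    eta = {p: (improving[p] / spent[p]) if spent[p] > 0 else 1.0
           for p in spent}
    floor = 0.01                       # hypothetical minimum, not from the paper
    eta = {p: max(v, floor) for p, v in eta.items()}
    total = sum(eta.values())
    return {p: v / total for p, v in eta.items()}

def roulette_select(probs):
    """Roulette-wheel selection over the normalised eta values."""
    r = random.random()
    acc = 0.0
    for p, prob in probs.items():
        acc += prob
        if r <= acc:
            return p
    return p  # guard against floating-point shortfall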
² Y. S. Ong and A. J. Keane, Meta-Lamarckian Learning in Memetic Algorithms, IEEE Transactions on Evolutionary Computation, Vol. 8, No. 2, pp. 99-110, April 2004.
8. What is this paper about?
We applied Meta-Lamarckian Learning to the three memes composing the 3SOME algorithm
Research Question: does the algorithmic performance vary when the same components are coordinated by means of a different scheme?
(Implicit) Research Question 1: is Meta-Lamarckian Learning any better than the simplistic 3SOME scheme at coordinating the memes?
(Implicit) Research Question 2: how important is the structure with respect to the memes composing it?
9. Experimental Setup
BBOB2010³ at 10, 20, and 40 dimensions
CEC2010⁴ at 1000 dimensions
100 runs, each performed for 5000 × n fitness evaluations
Meta-Lamarckian 3SOME (ML3SOME) has been compared against the original 3SOME
³ N. Hansen, A. Auger, S. Finck, R. Ros et al., Real-parameter black-box optimization benchmarking 2010: Noiseless functions definitions, INRIA, Tech. Rep. RR-6829, 2010
⁴ K. Tang, X. Li, P. N. Suganthan, Z. Yang, and T. Weise, Benchmark functions for the CEC 2010 special session and competition on large-scale global optimization, University of Science and Technology of China (USTC), Tech. Rep., 2010
10. Summary of the Results
In low dimensions the Meta-Lamarckian scheme appears to have an advantage over the original memetic structure
The Meta-Lamarckian adaptation does not display a clear advantage on large scale problems
In high dimensions the Meta-Lamarckian scheme does not always display a reliable behaviour
In some cases ML3SOME appears preferable, while in others the original structure displays a higher performance
13. Remarks and Future Work
Meta-Lamarckian Learning has been compared against a simplistic scheme for meme coordination
Meta-Lamarckian Learning appears efficient on low-dimensional problems
In high dimensions, the increase in problem complexity does not require an increase in algorithmic complexity
The same memes under different coordination schemes lead to very different results
The coordination scheme is a crucially important factor in Memetic Computing, even for the simplest structures!