Algorithmic issues in computational intelligence optimization:
from design to implementation,
from implementation to design
Fabio Caraffini
Faculty of Information Technology
Department of Mathematical Information Technology
September 2016
Can a machine think?
And make its own decisions?
In Computer Science both Artificial Intelligence (AI) and
Computational Intelligence (CI) seek the same goal, i.e. to make a
machine able to perform intellectual tasks.
Figure: Alan Turing
AI: based on hard computing techniques (which work following a
binary logic: Boolean true or false).
CI: based on soft computing methods (which work following a “fuzzy”
logic), enabling adaptation to many situations.
Optimisation problems
Any time we make a decision or a choice, we face an optimisation problem:
\[
\begin{aligned}
\text{Maximise/Minimise} \quad & f_m(x), && m = 1, 2, \dots, M \\
\text{subject to} \quad & g_j(x) \ge 0, && j = 1, 2, \dots, J \\
& h_k(x) = 0, && k = 1, 2, \dots, K \\
& x_i^{L} \le x_i \le x_i^{U}, && i = 1, 2, \dots, n
\end{aligned}
\]
My work focuses on real-valued, single-objective, box-constrained optimisation:
\[
x^{*} \equiv \arg\min_{x \in D} f(x) = \{\, x^{*} \in D \mid f(x^{*}) \le f(x) \;\; \forall\, x \in D \,\}
\]
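The formulation above can be made concrete with a minimal sketch (Python, added here for illustration and not part of the original slides): box-constrained, single-objective minimisation of a black-box function by uniform random sampling inside the box. The sphere function is only a stand-in for the black box.

```python
import random

def random_search(f, lower, upper, budget=10_000):
    """Minimal box-constrained minimiser: sample uniformly inside the box
    [lower_i, upper_i] and keep the best point seen so far."""
    n = len(lower)
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = [random.uniform(lower[i], upper[i]) for i in range(n)]
        fx = f(x)                      # one call to the black-box objective
        if fx < best_f:                # minimisation: keep the lower value
            best_x, best_f = x, fx
    return best_x, best_f

# Placeholder black box: the sphere function on D = [-5, 5]^10.
sphere = lambda x: sum(v * v for v in x)
x_star, f_star = random_search(sphere, [-5.0] * 10, [5.0] * 10)
print(f_star)
```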
Classification of optimisation approaches
(Knowledge of the problem is available)
Analytical Approach: the function has an explicit analytical
expression, differentiable with respect to all the variables (and, de facto,
not highly multivariate).
Exact Methods: the function satisfies some specific
hypotheses, e.g. linear or quadratic problems. The method
converges to the exact solution after a finite number of steps
of an iterative procedure.
Approximate Iterative Methods: the problem satisfies some
hypotheses and can be solved by an iterative procedure that
converges in infinitely many steps. Applying the procedure
for a finite number of steps still yields an approximation of
the optimum.
Classification of optimisation approaches
(black-box problems, overly complex problems, stringent time and memory constraints)
Metaheuristics: the problem is generic, as often happens in
real-world cases. We give up on knowing the optimum and
are satisfied with some point that is good enough for our
purpose. In most cases there is no guarantee of convergence.
Computational Intelligence Optimisation (CIO) is a field that
uses CI to solve optimisation problems, especially when
no hypotheses hold and a metaheuristic is the only option!
Real-world problems
Normally, the objective function of a real-world problem can be
a piece of software, a simulator, an experiment, etc., also
known as a black-box function.
Optimisation problems are often rather easy to formulate but
very hard to solve when they come from an application.
In fact, some features characterising the problem
can make it extremely challenging.
Some of these features are summarised in the following slides:
Hard real-life:
HIGH NON-LINEARITY.
Optimisation problems are usually characterised by nonlinear
functions. Optima are not on the bounds!
In real-world optimisation problems, the physical phenomenon,
due to its nature (e.g. in the case of saturation phenomena
or for systems which employ electronic components), cannot
be approximated by a linear function, not even in some areas of
the decision space.
Hard real-life:
HIGH MULTI-MODALITY.
It often happens that the fitness landscape contains many
local optima and that many of these have an unsatisfactory
performance (fitness value).
These fitness landscapes are usually rather difficult to
handle, since optimisation algorithms which employ
gradient-based information to detect the search direction
can easily converge to a suboptimal basin of attraction.
Hard real-life:
OPTIMISATION IN UNCERTAIN ENVIRONMENTS.
Noisy fitness function: noise in fitness evaluations may come
from many different sources, such as sensory measurement
errors or randomized simulations (a simple averaging remedy is
sketched below).
Approximated fitness function: when the fitness function is
very expensive to evaluate, or an analytical fitness function is
not available, approximated fitness functions are often used
instead.
Robustness: often, when a solution is implemented, the
design variables or the environmental parameters are subject to
perturbations or changes (e.g. control problems).
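A common remedy for noisy fitness values is resampling: re-evaluating the same candidate several times and averaging. The sketch below is a generic illustration under an assumed Gaussian noise model, not a method from the thesis.

```python
import random

def noisy_fitness(x, sigma=0.5):
    """Black-box objective corrupted by Gaussian measurement noise (assumed model)."""
    true_value = sum(v * v for v in x)          # underlying 'true' fitness
    return true_value + random.gauss(0.0, sigma)

def averaged_fitness(x, samples=20):
    """Re-evaluate the same solution several times and average:
    the standard error of the estimate shrinks as 1/sqrt(samples)."""
    return sum(noisy_fitness(x) for _ in range(samples)) / samples

x = [0.1, -0.2, 0.3]
print(noisy_fitness(x), averaged_fitness(x))
```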
Hard real-life:
COMPUTATIONALLY EXPENSIVE PROBLEMS.
Optimisation problems can be
computationally expensive
for two reasons:
large-scale problems (needle in a haystack);
a computationally expensive fitness function (e.g. design of
aircraft, control of an on-line electric drive).
Hard real-life:
MEMORY/TIME CONSTRAINTS
Many engineering problems are
plagued by modest hardware
and stringent time constraints.
This can happen:
due to cost limitations (e.g. vacuum cleaner robot);
due to space limitations (e.g. use of minimalistic embedded systems
in wearable technology, wireless sensor networks, etc.);
in real-time systems (e.g. telecommunications, video games, etc.).
Light (and simple) algorithms can be used in a modular way to tackle
complex problems: if the hardware is limited, an intelligent solution must
be found!
Hard real-life:
MEMORY/TIME CONSTRAINTS
These scenarios have to be carefully addressed by designing
algorithms tailored to the specific case:
the design is implementation-driven!
(from implementation to design)
Implementation limitations have to be considered first, in order to
carry out the optimization process under such constraints.
Chapter 3 of my thesis addresses memory-saving and
real-time optimization (a minimal sketch of the general memory-saving
idea follows the list below):
PI: “compact differential evolution light”
[Iacca et al., 2012a]
PII: “space robot base disturbance optimization”
[Iacca et al., 2012b]
PIII: “MC for path-following mobile robots”
[Iacca et al., 2013]
PIV: “µ-DE with extra moves along the axes”
[Caraffini et al., 2013c]
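For illustration only, the sketch below captures the general memory-saving (“compact”) idea behind this line of work: the explicit population is replaced by a per-variable probability model (a mean and a spread) that is sampled and updated, so memory grows only linearly with the problem dimension. This is a simplified, hypothetical scheme, not the compact Differential Evolution light algorithm of PI.

```python
import random

def compact_minimise(f, n, budget=20_000, lr=0.05):
    """Memory-saving search: instead of storing a population, keep only a
    mean and a standard deviation per variable (O(n) memory)."""
    mu = [0.0] * n          # per-variable mean of the sampling model
    sigma = [3.0] * n       # per-variable spread of the sampling model
    elite = list(mu)
    elite_f = f(elite)
    for _ in range(budget):
        x = [random.gauss(mu[i], sigma[i]) for i in range(n)]
        fx = f(x)
        if fx < elite_f:
            elite, elite_f = x, fx
        # Move the model towards the elite and slowly shrink its spread.
        for i in range(n):
            mu[i] += lr * (elite[i] - mu[i])
            sigma[i] = max(1e-3, sigma[i] * (1.0 - lr / 10.0))
    return elite, elite_f

print(compact_minimise(lambda x: sum(v * v for v in x), n=10)[1])
```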
Hard real-life:
BLACK-BOX SYSTEMS
Black-box systems make it very difficult for the designer to
produce a tailored and efficient general-purpose algorithm.
The designer has to make sure that the optimizer performs an
unbiased search at the beginning of the process
(EXPLORATION), to then converge on and refine the global
optimum (EXPLOITATION).
The optimization process is usually carried out off-line and the
design takes into consideration “algorithmic theory”, as the
implementation of the algorithm is not problematic and comes
at a later stage.
(from design to implementation)
Historical successful strategies
Fogel and Owens (USA, 1964): Evolutionary Programming (EP)
[Fogel et al., 1964], see also [Fogel et al., 1966]
Holland (USA, 1975): Genetic Algorithm (GA)
[Holland, 1975]
Rechenberg and Schwefel (Germany, 1971): Evolution Strategies (ES)
[Rechenberg, 1971]
Koza (USA, 1990): Genetic Programming (GP)
[Koza, 1990], see also [Koza, 1992b] and [Koza, 1992a]
Moscato (Argentina-USA, 1989): Memetic Algorithms (MA)
[Moscato, 1989]
Storn and Price (Germany-USA, 1995): Differential Evolution (DE)
[Storn and Price, 1995]
Eberhart and Kennedy (USA, 1995): Particle Swarm Optimization (PSO)
[Eberhart and Kennedy, 1995]
What is the best strategy?
There is no best optimiser! [Wolpert and Macready, 1997]
The 1st of the No Free Lunch Theorems (NFLT) presented in
[Wolpert and Macready, 1997] states that for a given pair of
algorithms A and B:
\[
\sum_{f} P(x_m \mid m, f, A) = \sum_{f} P(x_m \mid m, f, B)
\]
where P(x_m | m, f, A) is the probability that algorithm A
detects, after m iterations, the optimal solution x_m for a
generic objective function f (and analogously for P(x_m | m, f, B)).
The performance of every pair of algorithms over all the
possible problems is the same.
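This claim can be checked numerically on a toy search space (an illustrative script, not part of the original slides): enumerate every possible objective function from a small domain to a small codomain and average the best-so-far fitness after m evaluations; any two non-revisiting deterministic algorithms, here a forward scan and a backward scan, produce identical curves.

```python
from itertools import product

X = list(range(5))            # tiny search space
Y = [0, 1, 2]                 # tiny set of fitness values

def best_so_far(order, f):
    """Best fitness seen after 1..len(X) evaluations, visiting X in 'order'."""
    best, curve = float("-inf"), []
    for x in order:
        best = max(best, f[x])
        curve.append(best)
    return curve

forward = X
backward = list(reversed(X))

# Average the best-so-far curve over ALL |Y|^|X| possible objective functions.
n_funcs, sum_fwd, sum_bwd = 0, [0.0] * len(X), [0.0] * len(X)
for values in product(Y, repeat=len(X)):
    f = dict(zip(X, values))
    for m, (a, b) in enumerate(zip(best_so_far(forward, f), best_so_far(backward, f))):
        sum_fwd[m] += a
        sum_bwd[m] += b
    n_funcs += 1

print([s / n_funcs for s in sum_fwd])
print([s / n_funcs for s in sum_bwd])   # identical, as predicted by the NFLT
```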
General message of NFLT
Every problem is a separate story and the algorithm should be
connected to the problem!
So, how do we pick the right strategy?
How can we specialize a general-purpose algorithm to a
black-box problem?
Coexistence of exploration and exploitation capabilities
All the aforementioned strategies share similar working
principles.
All the optimization algorithms are the implementation of
the same concept!
An effective informed sampling strategy guides the generation of
new candidate solutions based on the fitness values and locations of
previously visited points.
The search has to guarantee
an unbiased exploration phase;
an exploitation phase (local search) that is efficient on the
“unknown” problem.
(A minimal two-phase sketch is shown below.)
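A minimal sketch of this two-phase idea (a generic illustration, not one of the algorithms from the thesis): uniform sampling over the box gives an unbiased exploration phase, followed by Gaussian local search around the incumbent as the exploitation phase.

```python
import random

def explore_then_exploit(f, lower, upper, n_explore=2_000, n_exploit=8_000):
    n = len(lower)
    # EXPLORATION: unbiased uniform sampling over the whole box.
    best = [random.uniform(lower[i], upper[i]) for i in range(n)]
    best_f = f(best)
    for _ in range(n_explore):
        x = [random.uniform(lower[i], upper[i]) for i in range(n)]
        fx = f(x)
        if fx < best_f:
            best, best_f = x, fx
    # EXPLOITATION: local Gaussian perturbations around the incumbent,
    # clipped back into the box; the step size shrinks when moves fail.
    step = 0.1 * max(upper[i] - lower[i] for i in range(n))
    for _ in range(n_exploit):
        x = [min(upper[i], max(lower[i], best[i] + random.gauss(0.0, step)))
             for i in range(n)]
        fx = f(x)
        if fx < best_f:
            best, best_f = x, fx
        else:
            step = max(1e-9, step * 0.999)
    return best, best_f

print(explore_then_exploit(lambda x: sum(v * v for v in x), [-5.0] * 10, [5.0] * 10)[1])
```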
Structural bias
Chapter 5 of my dissertation: PXI [Kononova et al., 2015]
The capability of equally exploring the search space can be
measured and visually displayed in terms of structural bias.
The bias reveals itself as a non-uniform clustering of the population,
even in problems where we would expect individuals to disperse over
the search space.
The deleterious strength of the bias increases with an increase of
the population size;
the problem complexity.
(A minimal empirical test is sketched below.)
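The kind of empirical test used in PXI can be approximated along the following lines (a simplified sketch of the idea, not the exact protocol of [Kononova et al., 2015]): run a toy population-based algorithm on a fitness function that returns independent uniform random values, so the landscape carries no information, and look at where the final best solutions land across many runs; any systematic clustering indicates structural bias introduced by the operators themselves.

```python
import random

def run_once(pop_size=30, dim=2, gens=100):
    """One run of a toy population-based algorithm on a purely random fitness:
    f(x) is an independent Uniform(0,1) draw, so the landscape is information-free."""
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    fit = [random.random() for _ in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b = random.sample(range(pop_size), 2)
            # Arithmetic crossover between two parents plus small noise, clipped to [0, 1].
            child = [min(1.0, max(0.0, 0.5 * (pop[a][d] + pop[b][d])
                                  + random.gauss(0.0, 0.05))) for d in range(dim)]
            child_fit = random.random()          # fitness is pure noise
            if child_fit < fit[i]:               # minimisation-style replacement
                pop[i], fit[i] = child, child_fit
    return pop[fit.index(min(fit))]              # best individual of the run

# Where do the final best solutions land over many independent runs?
# On an information-free landscape they should spread uniformly over [0, 1];
# systematic clustering (e.g. towards the centre) reveals structural bias.
firsts = [run_once()[0] for _ in range(200)]
bins = [0] * 10
for v in firsts:
    bins[min(9, int(v * 10))] += 1
print(bins)
```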
Adaptation
Chapter 4 of my dissertation
To efficiently exploit a solution after exploring the search space,
the algorithm has to adapt to the landscape.
Adaptation can be obtained by
tuning the algorithm’s parameters on-the-fly (a minimal example is sketched after this list);
embedding local searchers in population-based algorithms;
performing a preliminary landscape analysis; etc.
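As an example of the first item, the sketch below adapts the step size of a simple (1+1) evolution strategy on-the-fly with the classical 1/5th success rule; this is a textbook scheme used here purely as an illustration, not one of the adaptive algorithms of the publications listed below.

```python
import random

def one_plus_one_es(f, x0, budget=10_000, sigma=1.0):
    """(1+1)-ES with on-the-fly step-size adaptation (1/5th success rule):
    enlarge sigma when mutations succeed often, shrink it when they rarely do."""
    x, fx = list(x0), f(x0)
    successes, window = 0, 20
    for t in range(1, budget + 1):
        y = [xi + random.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
            successes += 1
        if t % window == 0:
            rate = successes / window
            sigma *= 1.22 if rate > 0.2 else 0.82   # adapt the step size
            successes = 0
    return x, fx

print(one_plus_one_es(lambda x: sum(v * v for v in x), [3.0] * 10)[1])
```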
Examples from my dissertation are:
PV: “multicriteria adaptive differential evolution” [Cheng et al., 2015]
PVI: “super-fit MADE” [Caraffini et al., 2013b]
PVII: “super-fit RIS” [Caraffini et al., 2013a]
PVIII: “EPSDE with a pool of LS algorithms” [Iacca et al., 2014b]
PIX: “Multi-strategy coevolving aging particle optimization”
[Iacca et al., 2014a]
PX: “Hyper-SPAM with Adaptive Operator Selection”
[Epitropakis et al., 2014]
THANKS FOR LISTENING. . .
. . . ANY QUESTIONS?
References I
Caraffini, F., Iacca, G., Neri, F., Picinali, L., and Mininno, E. (2013a).
A CMA-ES super-fit scheme for the re-sampled inheritance search.
In Evolutionary Computation (CEC), 2013 IEEE Congress on, pages
1123–1130.
Caraffini, F., Neri, F., Cheng, J., Zhang, G., Picinali, L., Iacca, G., and
Mininno, E. (2013b).
Super-fit multicriteria adaptive differential evolution.
In Evolutionary Computation (CEC), 2013 IEEE Congress on, pages
1678–1685.
Caraffini, F., Neri, F., and Poikolainen, I. (2013c).
Micro-differential evolution with extra moves along the axes.
In IEEE Symposium Series on Computational Intelligence, Symposium on
Differential Evolution, pages 46–53.
Cheng, J., Zhang, G., Caraffini, F., and Neri, F. (2015).
Multicriteria adaptive differential evolution for global numerical
optimization.
Integrated Computer-Aided Engineering.
References II
Eberhart, R. C. and Kennedy, J. (1995).
A new optimizer using particle swarm theory.
In Proceedings of the Sixth International Symposium on Micromachine
and Human Science, pages 39–43.
Epitropakis, M. G., Caraffini, F., Neri, F., and Burke, E. K. (2014).
A separability prototype for automatic memes with adaptive operator
selection.
In Foundations of Computational Intelligence (FOCI), 2014 IEEE
Symposium on, pages 70–77. IEEE.
Fogel, L. J., Owens, A., and Walsh, M. (1964).
On the evolution of artificial intelligence (artificial intelligence generated by
natural evolution process).
In National Symposium on Human Factors in Electronics, 5th, San Diego,
California, pages 63–76.
Fogel, L. J., Owens, A. J., and Walsh, M. J. (1966).
Artificial Intelligence through Simulated Evolution.
John Wiley.
References III
Holland, J. (1975).
Adaptation in Natural and Artificial Systems.
University of Michigan Press.
Iacca, G., Caraffini, F., and Neri, F. (2012a).
Compact differential evolution light: high performance despite limited
memory requirement and modest computational overhead.
Journal of Computer Science and Technology, 27(5):1056–1076.
Iacca, G., Caraffini, F., and Neri, F. (2013).
Memory-saving memetic computing for path-following mobile robots.
Applied Soft Computing, 13(4):2003–2016.
Iacca, G., Caraffini, F., and Neri, F. (2014a).
Multi-strategy coevolving aging particle optimization.
International Journal of Neural Systems, 24(1).
Iacca, G., Caraffini, F., Neri, F., and Mininno, E. (2012b).
Robot base disturbance optimization with compact differential evolution
light.
In EvoApplications, pages 285–294.
References IV
Iacca, G., Neri, F., Caraffini, F., and Suganthan, P. N. (2014b).
A differential evolution framework with ensemble of parameters and
strategies and pool of local search algorithms.
In EvoApplications, to appear.
Kononova, A. V., Corne, D. W., Wilde, P. D., Shneer, V., and Caraffini,
F. (2015).
Structural bias in population-based algorithms.
Information Sciences, 298:468–490.
Koza, J. R. (1990).
Concept formation and decision tree induction using the genetic
programming paradigm.
In Parallel Problem Solving from Nature, pages 124–128. Springer.
Koza, J. R. (1992a).
Genetic programming: on the programming of computers by means of
natural selection, volume 1.
MIT press.
References V
Koza, J. R. (1992b).
Genetic Programming: vol. 1, On the programming of computers by
means of natural selection, volume 1.
MIT press.
Moscato, P. (1989).
On evolution, search, optimization, genetic algorithms and martial arts:
Towards memetic algorithms.
Technical Report 826.
Rechenberg, I. (1971).
Evolutionsstrategie – Optimierung technischer Systeme nach Prinzipien
der biologischen Evolution.
PhD thesis, Technical University of Berlin.
Storn, R. and Price, K. (1995).
Differential evolution - a simple and efficient adaptive scheme for global
optimization over continuous spaces.
Technical Report TR-95-012, ICSI.
References VI
Wolpert, D. and Macready, W. (1997).
No free lunch theorems for optimization.
IEEE Transactions on Evolutionary Computation, 1(1):67–82.