In this Lunch & Learn session, Chirag Jain gives us a friendly and gentle introduction to machine learning and walks through high-level learning frameworks using linear classifiers.
Lectures of CS-721 (Network Performance Evaluation) taught for the Virtual University by Junaid Qadir.
To access other resources, visit http://sites.google.com/site/netperfeval
Introduction to machine learning and model building using linear regression - Girish Gore
A basic introduction to machine learning and a kick-start to the model-building process using linear regression. Covers the fundamentals of machine learning, focusing on the supervised learning method of linear regression. Importantly, it does so using the R language and shows how to interpret the results of a linear regression model. Interpretation of results, tuning, and accuracy metrics such as RMSE (root mean squared error) are covered here.
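Although the talk works in R, the fit-and-evaluate loop it describes is easy to sketch in a few lines; the toy data and coefficients below are illustrative inventions, not material from the talk:

```python
import math
import random

# Toy data: y = 2.5*x + 1 plus Gaussian noise (illustrative values only).
random.seed(0)
xs = [i * 0.2 for i in range(50)]
ys = [2.5 * x + 1.0 + random.gauss(0, 1.0) for x in xs]

# Ordinary least squares for y = b0 + b1*x, in closed form.
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b0 = my - b1 * mx

# RMSE: the square root of the mean squared residual, the metric named above.
rmse = math.sqrt(sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys)) / n)
print(f"intercept={b0:.2f} slope={b1:.2f} RMSE={rmse:.2f}")
```

With noise of standard deviation 1, the fitted slope lands near the true 2.5 and the RMSE near 1, which is how one reads the metric: it is in the units of y.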
Part of the ongoing effort with Skater for enabling better Model Interpretation for Deep Neural Network models presented at the AI Conference.
https://conferences.oreilly.com/artificial-intelligence/ai-ny/public/schedule/detail/65118
Professor Maria Petrou gave a lecture on "A Classification Framework for Software Component Models" in the Distinguished Lecturer Series - Leon The Mathematician.
More Information available at:
http://dls.csd.auth.gr
An overview of media analytics outlining the evolution of image classification and knowledge extraction. The presentation offers insight into big-data analytics for media management.
Introduction to machine learning: unsupervised learning - Sardar Alam
Introduction to machine learning and unsupervised learning, based on material by Andrew Ng, Associate Professor at Stanford, Chief Scientist of Baidu, and Chairman and Co-Founder of Coursera. Interesting slides; the video lectures are also on Coursera.
Computational Rationality I - a Lecture at Aalto University by Antti Oulasvirta - Aalto University
This 2-hour lecture looks at the emerging field of Computational Rationality. Lecture given March 12, 2018, for the Aalto University Master's level course on "Probabilistic Programming and Reinforcement Learning for Cognition and Interaction." Based on: Gershman et al 2015 Science, Lewis et al 2014 Topics in Cog Sci, and Gershman & Daw 2017 Annu Rev Psych
For the full video of this presentation, please visit:
https://www.embedded-vision.com/platinum-members/embedded-vision-alliance/embedded-vision-training/videos/pages/may-2019-embedded-vision-summit-parodi
For more information about embedded vision, please visit:
http://www.embedded-vision.com
Facundo Parodi, Research and Machine Learning Engineer at Tryolabs, presents the "An Introduction to Machine Learning and How to Teach Machines to See" tutorial at the May 2019 Embedded Vision Summit.
What is machine learning? How can machines distinguish a cat from a dog in an image? What’s the magic behind convolutional neural networks? These are some of the questions Parodi answers in this introductory talk on machine learning in computer vision.
Parodi introduces machine learning and explores the different types of problems it can solve. He explains the main components of practical machine learning, from data gathering and training to deployment. Parodi then focuses on deep learning as an important machine learning technique and provides an introduction to convolutional neural networks and how they can be used to solve image classification problems. He also touches on recent advancements in deep learning and how they have revolutionized the entire field of computer vision.
Continuous Unsupervised Training of Deep Architectures - Vincenzo Lomonaco
A number of successful computer vision applications based on convolutional networks have recently been proposed. However, in most cases the system is fully supervised, the training set is fixed, and the task is completely defined a priori. Even though transfer learning approaches have proved very useful for adapting heavily pre-trained models to ever-changing scenarios, the incremental learning and adaptation capabilities of existing models are still limited, and catastrophic forgetting is very difficult to control. In this talk we discuss our experience in designing deep architectures and algorithms capable of learning objects incrementally, both in a supervised and an unsupervised way. Finally, we introduce a new dataset and benchmark (CORe50) that we collected specifically to focus on continuous object recognition for robotic vision.
A short description of machine learning: what machine learning is, along with its categories, terminology, and applications, all explained briefly.
Continual Learning with Deep Architectures - Tutorial ICML 2021 - Vincenzo Lomonaco
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use these as the foundation for later learning. One of the grand goals of Artificial Intelligence (AI) is building an artificial "continual learning" agent that constructs a sophisticated understanding of the world from its own experience through the autonomous incremental development of ever more complex knowledge and skills (Parisi, 2019). However, despite early speculations and a few pioneering works (Ring, 1998; Thrun, 1998; Carlson, 2010), very little research effort has been devoted to addressing this vision. Current AI systems suffer greatly when exposed to new data or environments that differ even slightly from those they were trained on (Goodfellow, 2013). Moreover, the learning process is usually constrained to fixed datasets within narrow and isolated tasks, which can hardly lead to the emergence of more complex and autonomous intelligent behaviors. In essence, continual learning and adaptation capabilities, while often regarded as fundamental pillars of every intelligent agent, have been mostly left out of the main AI research focus.
In this tutorial, we propose to summarize the application of these ideas in light of the more recent advances in machine learning research and in the context of deep architectures for AI (Lomonaco, 2019). Starting from a motivation and a brief history, we link recent Continual Learning advances to previous research endeavours on related topics and we summarize the state-of-the-art in terms of major approaches, benchmarks and key results. In the second part of the tutorial we plan to cover more exploratory studies about Continual Learning with low supervised signals and the relationships with other paradigms such as Unsupervised, Semi-Supervised and Reinforcement Learning. We will also highlight the impact of recent Neuroscience discoveries in the design of original continual learning algorithms as well as their deployment in real-world applications. Finally, we will underline the notion of continual learning as a key technological enabler for Sustainable Machine Learning and its societal impact, as well as recap interesting research questions and directions worth addressing in the future.
Authors: Vincenzo Lomonaco, Irina Rish
Official Website: https://sites.google.com/view/cltutorial-icml2021
A data science observatory based on RAMP - rapid analytics and model prototyping - Akin Osman Kazakci
The RAMP approach to analytics: Rapid Analytics and Model Prototyping; collaborative data challenges with built-in data science process management tools and analytics; an observatory of data science and scientists. Presented at the Design Theory Special Interest Group of the International Design Society. Mines ParisTech and Centre for Data Science.
Opening the “GAIT” For Future Academic Advisors: Developing a Meaningful Grad... - Margaret G. Garry
Slides for Pre-Conference Presentation, NACADA Region 7, February 29, 2016.
Slides by Kristopher Infante
Presented by Kristopher Infante, Ashley McCall, and Margaret Garry
A Few Useful Things to Know about Machine Learning - nep_test_account
Machine learning algorithms can figure out how to perform important tasks by generalizing from examples. This is often feasible and cost-effective where manual programming is not. As more data becomes available, more ambitious problems can be tackled. As a result, machine learning is widely used in computer science and other fields. However, developing successful machine learning applications requires a substantial amount of “black art” that is hard to find in textbooks. This article summarizes twelve key lessons that machine learning researchers and practitioners have learned. These include pitfalls to avoid, important issues to focus on, and answers to common questions.
Model-Based User Interface Optimization: Part IV: ADVANCED TOPICS - At SICSA ... - Aalto University
Tutorial on Model-Based User Interface Optimization. Part IV: ADVANCED TOPICS.
Presented by Antti Oulasvirta (Aalto University) at SICSA Summer School on Computational Interaction in 2015 in Glasgow. Note: This one-day lecture is divided into multiple parts.
COMPARISON BETWEEN THE GENETIC ALGORITHMS OPTIMIZATION AND PARTICLE SWARM OPT... - IAEME Publication
Close-range photogrammetry network design refers to the process of placing a set of cameras in order to achieve photogrammetric tasks. The main objective of this paper is to find the best locations for two or three camera stations. Genetic algorithm optimization and particle swarm optimization are developed to determine the optimal camera stations for computing the three-dimensional coordinates. In this research, a mathematical model representing genetic algorithm optimization and particle swarm optimization for the close-range photogrammetry network is developed. This paper also gives the sequence of field operations and computational steps for this task. A test field is included to reinforce the theoretical aspects.
This digital material covers an operations research book spanning all units, including linear programming problems, integer programming problems, queuing theory, Monte Carlo simulation, and more.
Introduction to Data and Computation: Essential capabilities for everyone in ... - Kim Flintoff
An overview seminar about the themes of the Curtin Institute for Computation, and some thoughts on the future role of these capabilities in Learning and Teaching.
1. Algorithmic issues in computational intelligence optimization
from design to implementation
from implementation to design
Fabio Caraffini
Faculty of Information Technology
Department of Mathematical Information Technology
September 2016
2. Lectio Precursoria References
Can a machine think?
and make its decisions?
In Computer Science both Artificial Intelligence (AI) and
Computational Intelligence (CI) seek the same goal, i.e. to make a
machine able to perform intellectual tasks.
Figure: Alan Turing
AI: based on hard computing techniques (work following a
binary logic: Booleans true or false).
CI: based on soft computing methods (work following a “fuzzy”
logic), which enables adaptation to many situations.
Fabio Caraffini University of Jyväskylä
3. Optimisation problems
Any time we pick a decision/make a choice we face an optimisation problem
Maximise/Minimise $f_m(\mathbf{x})$, $m = 1, 2, \ldots, M$
subject to $g_j(\mathbf{x}) \geq 0$, $j = 1, 2, \ldots, J$
$h_k(\mathbf{x}) = 0$, $k = 1, 2, \ldots, K$
$x_i^L \leq x_i \leq x_i^U$, $i = 1, 2, \ldots, n$
My work focusses on real-valued, single-objective and box constrained optimisation:
$\mathbf{x}^* \equiv \arg\min_{\mathbf{x} \in D} f(\mathbf{x}) = \{\mathbf{x}^* \in D \mid f(\mathbf{x}^*) \leq f(\mathbf{x}) \ \forall\, \mathbf{x} \in D\}$
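The defining property of the argmin can be checked numerically on a discretised version of the box-constrained domain; the one-dimensional f and grid below are hypothetical, chosen purely for illustration:

```python
# Numeric illustration of the box-constrained argmin definition: on a
# discretised domain D = [-2, 2], x* satisfies f(x*) <= f(x) for all x in D.
def f(x):
    return (x - 0.5) ** 2 + 1.0     # toy objective with its minimum at 0.5

D = [-2 + i * 0.01 for i in range(401)]   # grid over the box constraint
x_star = min(D, key=f)

# The defining inequality of x* holds at every grid point.
assert all(f(x_star) <= f(x) for x in D)
print(x_star)
```

On this grid the returned point is the grid node closest to 0.5; refining the grid tightens the approximation of the true optimum.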
4. Classification of optimisation approaches
(Knowledge of the problem is available)
Analytical Approach: the function has an explicit analytical
expression, derivable over all the variables (and de facto not
highly multivariate).
Exact Methods: the function respects some specific
hypotheses, e.g. linear or quadratic problems. The method
converges to the exact solution after a finite amount of steps
of an iterative procedure.
Approximate Iterative Methods: the problem respects some
hypotheses and can be solved by applying an iterative
procedure in infinite steps. The application of the procedure
for a finite amount of steps still leads to an approximation of
the optimum.
5. Classification of optimisation approaches
(black-box problems, overly complex problems, stringent time and memory constraints)
Metaheuristics: the problem is generic, as often happens in real-world cases. We give up on knowing the exact optimum and are satisfied with some point that is good enough for our purpose. There is no guarantee of convergence in most cases.
Computational Intelligence Optimisation (CIO) is a subject that
uses CI to solve optimisation problems, especially in the cases when
there are no hypotheses and a metaheuristic is the only option!
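A metaheuristic in its simplest form is stochastic hill climbing: perturb the incumbent, keep the better point, and settle for a good-enough solution. The black-box objective below is a made-up stand-in with no exploitable hypotheses:

```python
import random

# Stochastic hill climbing on a black-box function: no derivatives, no
# convergence guarantee; we just keep the best point found so far.
def black_box(x):                          # stand-in objective, could be a simulator
    return abs(x) + 2 * abs(x - 3) ** 0.5  # non-smooth, multimodal

random.seed(2)
x_best = random.uniform(-10, 10)
f_best = black_box(x_best)
for _ in range(5000):
    cand = x_best + random.gauss(0, 0.5)   # perturb the incumbent
    cand = max(-10.0, min(10.0, cand))     # respect the box constraints
    if black_box(cand) < f_best:           # greedy acceptance
        x_best, f_best = cand, black_box(cand)

print(f"good-enough solution: x={x_best:.3f}, f={f_best:.3f}")
```

Depending on the starting point, the search settles into one of the function's two basins; that is exactly the "good enough, no guarantee" behaviour the slide describes.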
6. Real-world problems
Normally, the objective function of a real-world problem can be a piece of software, a simulator, an experiment, etc., also known as a black-box function.
Optimisation problems are often rather easily formulated but very hard to solve when the problem comes from an application. In fact, some features characterising the problem can make it extremely challenging.
Some of these features are summarised in the following slides:
7. Hard real-life:
HIGH NON-LINEARITY.
Usually optimisation problems are characterised by nonlinear functions. Optima are not on the bounds!
In real-world optimisation problems, the physical phenomenon, due to its nature (e.g. in the case of saturation phenomena or for systems which employ electronic components), cannot be approximated by a linear function, not even in some areas of the decision space.
8. Hard real-life:
HIGH MULTI-MODALITY.
It often happens that the fitness landscape contains many
local optima and that many of these have an unsatisfactory
performance (fitness value).
These fitness landscapes are usually rather difficult to handle, since optimisation algorithms which employ gradient-based information to detect the search direction can easily converge to a suboptimal basin of attraction.
9. Hard real-life:
OPTIMISATION IN UNCERTAIN ENVIRONMENTS.
Noisy fitness function: noise in fitness evaluations may come
from many different sources such as sensory measurement
errors or randomized simulations.
Approximated fitness function: when the fitness function is
very expensive to evaluate, or an analytical fitness function is
not available, approximated fitness functions are often used
instead.
Robustness: often, when a solution is implemented, the
design variables or the environmental parameters are subject to
perturbations or changes (e.g. control problems).
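The resampling idea for noisy fitness functions can be sketched in a few lines; averaging repeated evaluations shrinks the noise roughly with the square root of the sample count. The objective and noise level below are invented for illustration:

```python
import random

# A noisy fitness: the true value plus Gaussian "sensor" noise, as when
# fitness comes from measurements or randomized simulations.
random.seed(3)

def noisy_fitness(x):
    return (x - 1.0) ** 2 + random.gauss(0, 0.5)  # true fitness + noise

def averaged_fitness(x, n=50):
    # Resampling: average n independent evaluations to reduce the noise.
    return sum(noisy_fitness(x) for _ in range(n)) / n

single = noisy_fitness(1.0)      # one sample: error can be large
averaged = averaged_fitness(1.0) # 50 samples: standard error shrinks ~1/sqrt(50)
print(single, averaged)
```

The trade-off is cost: each averaged evaluation spends n fitness calls, which matters when the fitness is itself expensive (next slide).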
10. Hard real-life:
COMPUTATIONALLY EXPENSIVE PROBLEMS.
Optimisation problems can be computationally expensive for two reasons:
large-scale problems (needle in a haystack);
computationally expensive fitness functions (e.g. design of aircraft, control of an on-line electric drive).
11. Hard real-life:
MEMORY/TIME CONSTRAINTS
Many engineering problems are plagued by modest hardware and stringent time constraints. This can happen:
due to cost limitations (e.g. vacuum-cleaner robots);
due to space limitations (e.g. use of minimalistic embedded systems in wearable technology, wireless sensor networks, etc.);
in real-time systems (e.g. telecommunications, video games, etc.).
Light (and simple) algorithms can be used in a modular way to tackle complex problems: if the hardware is limited, an intelligent solution must be found!
12. Hard real-life:
MEMORY/TIME CONSTRAINTS
These scenarios have to be carefully addressed by designing algorithms tailored to the specific case:
the design is implementation-driven!
(from implementation to design)
Implementation limitations have to be considered first, so that the optimization process can be carried out under such constraints.
Chapter 3 of my thesis addresses memory-saving and real-time optimization:
PI: “compact differential evolution light”
[Iacca et al., 2012a]
PII: “space robot base disturbance optimization”
[Iacca et al., 2012b]
PIII: “MC for path-following mobile robots”
[Iacca et al., 2013]
PIV: “µ-DE with extra moves along the axes”
[Caraffini et al., 2013c]
13. Hard real-life:
BLACK-BOX SYSTEMS
Black-box systems make it very difficult for the designer to produce a tailored and efficient general-purpose algorithm.
The designer has to make sure that the optimizer performs an unbiased search at the beginning of the process (EXPLORATION) and then converges on and refines the global optimum (EXPLOITATION).
The optimization process is usually carried out off-line, and the design takes into consideration “algorithmic theory”, as the implementation of the algorithm is not problematic and comes at a later stage.
(from design to implementation)
14. Historical successful strategies
Fogel, Owens and Walsh (USA, 1964): Evolutionary Programming (EP) [Fogel et al., 1964], see also [Fogel et al., 1966]
Holland (USA, 1975): Genetic Algorithm (GA) [Holland, 1975]
Rechenberg and Schwefel (Germany, 1971): Evolution Strategies (ES) [Rechenberg, 1971]
Koza (USA, 1990): Genetic Programming (GP) [Koza, 1990], see also [Koza, 1992b] and [Koza, 1992a]
Moscato (Argentina-USA, 1989): Memetic Algorithms (MA) [Moscato, 1989]
Storn and Price (Germany-USA, 1995): Differential Evolution (DE) [Storn and Price, 1995]
Eberhart and Kennedy (USA, 1995): Particle Swarm Optimization (PSO) [Eberhart and Kennedy, 1995]
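As one concrete example from the list, a bare-bones DE/rand/1/bin scheme in the spirit of [Storn and Price, 1995] can be sketched as follows; F and CR are common default values and the sphere function is a stand-in test problem, not taken from any particular paper:

```python
import random

# Minimal DE/rand/1/bin: rand/1 mutation, binomial crossover, greedy
# one-to-one selection, minimising a 5-D sphere function.
random.seed(4)
DIM, NP, F, CR = 5, 30, 0.5, 0.9

def sphere(x):
    return sum(v * v for v in x)

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]
for _ in range(300):                                   # generations
    for i in range(NP):
        r1, r2, r3 = random.sample([j for j in range(NP) if j != i], 3)
        mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(DIM)]
        jrand = random.randrange(DIM)                  # force at least one mutant gene
        trial = [mutant[d] if (random.random() < CR or d == jrand) else pop[i][d]
                 for d in range(DIM)]
        if sphere(trial) <= sphere(pop[i]):            # greedy selection
            pop[i] = trial

best = min(pop, key=sphere)
print(f"best fitness: {sphere(best):.6e}")
```

The entire algorithm is three operators in a loop, which is part of why DE became one of the most widely reused strategies on this list.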
15. What is the best strategy?
There is no best optimiser! [Wolpert and Macready, 1997]
The 1st of the No Free Lunch Theorems (NFLT) presented in
[Wolpert and Macready, 1997] states that for a given pair of
algorithms A and B:
$\sum_{f} P(x_m \mid m, f, A) = \sum_{f} P(x_m \mid m, f, B)$
where $P(x_m \mid m, f, A)$ is the probability that algorithm A detects, after m iterations, the optimal solution $x_m$ for a generic objective function f (analogously for $P(x_m \mid m, f, B)$).
The performance of every pair of algorithms over all the
possible problems is the same.
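The statement can be checked exhaustively on a toy instance: with a tiny domain and codomain we can enumerate every possible objective function and verify that two fixed search orders achieve the same average performance. The 3-point domain, 2-value codomain, and "best value after m evaluations" metric are chosen purely to keep the enumeration small:

```python
from itertools import product

# Toy NFL check: averaged over ALL functions f: X -> Y, two non-repeating
# search strategies perform identically.
X, Y, m = [0, 1, 2], [0, 1], 2

def best_after(order, f):
    # Performance metric: best (lowest) value seen after m evaluations.
    return min(f[x] for x in order[:m])

left_to_right = [0, 1, 2]                  # algorithm A's sampling order
right_to_left = [2, 1, 0]                  # algorithm B's sampling order

all_functions = [dict(zip(X, values)) for values in product(Y, repeat=len(X))]
avg_A = sum(best_after(left_to_right, f) for f in all_functions) / len(all_functions)
avg_B = sum(best_after(right_to_left, f) for f in all_functions) / len(all_functions)
print(avg_A, avg_B)
```

Both averages over the 8 possible functions come out equal, as the theorem demands; either strategy can still be better on a specific f, just not over all of them.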
16. General message of NFLT
Every problem is a separate story and the algorithm should be
connected to the problem!
So, how do we pick the right strategy?
How can we specialize a general-purpose algorithm to a black
box problem?
17. Coexistence of exploration and exploitation capabilities
All the aforementioned strategies share similar working principles.
All optimization algorithms are implementations of the same concept!
An effective informed sampling strategy guides the generation of new candidate solutions based on the fitness values and locations of previously visited points.
The search has to guarantee
an unbiased exploration phase;
an exploitation phase (local search) that is efficient on the
“unknown” problem.
18. Structural bias
Chapter 5 of my dissertation: PXI [Kononova et al., 2015]
The capability of exploring the search space equally can be measured and visually displayed in terms of structural bias.
The bias reveals itself as non-uniform clustering of the population, even on problems where we expect individuals to disperse over the search space.
The deleterious strength of the bias increases with:
the population size;
the problem complexity.
19. Adaptation
Chapter 4 of my dissertation
To efficiently exploit a solution after exploring the search space, the algorithm has to adapt to the landscape.
Adaptation can be obtained by:
tuning the algorithm's parameters on-the-fly;
embedding local searchers in population-based algorithms;
performing a preliminary landscape analysis; etc.
Examples from my dissertation are:
PV: “multicriteria adaptive differential evolution” [Cheng et al., 2015]
PVI: “super-fit MADE” [Caraffini et al., 2013b]
PVII: “super-fit RIS” [Caraffini et al., 2013a]
PVIII: “EPSDE with a pool of LS algorithms” [Iacca et al., 2014b]
PIX: “Multi-strategy coevolving aging particle optimization”
[Iacca et al., 2014a]
PX: “Hyper-SPAM with Adaptive Operator Selection”
[Epitropakis et al., 2014]
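As a sketch of the first mechanism, on-the-fly parameter tuning, here is a (1+1)-ES whose step size is adapted with the textbook 1/5th success rule. This is a generic illustration of the idea, not one of the dissertation's algorithms, and all constants are conventional choices:

```python
import random

# (1+1)-ES with 1/5th success rule: the mutation step size sigma is tuned
# on-the-fly from the observed success rate, adapting to the landscape.
random.seed(5)

def f(x):
    return sum(v * v for v in x)        # sphere landscape, 5 dimensions

x = [random.uniform(-5, 5) for _ in range(5)]
sigma, successes, window = 1.0, 0, 50
for t in range(1, 3001):
    child = [v + sigma * random.gauss(0, 1) for v in x]
    if f(child) < f(x):                 # success: child replaces parent
        x = child
        successes += 1
    if t % window == 0:                 # adapt sigma every `window` trials
        rate = successes / window
        # Expand sigma when succeeding often (explore), shrink it otherwise (exploit).
        sigma *= 1.5 if rate > 0.2 else 0.6
        successes = 0

print(f"final fitness: {f(x):.3e}, final sigma: {sigma:.3e}")
```

The step size shrinks automatically as the search closes in on the optimum, which is the exploration-to-exploitation transition the previous slides argue every optimizer needs.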
21. References I
Caraffini, F., Iacca, G., Neri, F., Picinali, L., and Mininno, E. (2013a). A CMA-ES super-fit scheme for the re-sampled inheritance search. In Evolutionary Computation (CEC), 2013 IEEE Congress on, pages 1123–1130.
Caraffini, F., Neri, F., Cheng, J., Zhang, G., Picinali, L., Iacca, G., and Mininno, E. (2013b). Super-fit multicriteria adaptive differential evolution. In Evolutionary Computation (CEC), 2013 IEEE Congress on, pages 1678–1685.
Caraffini, F., Neri, F., and Poikolainen, I. (2013c). Micro-differential evolution with extra moves along the axes. In IEEE Symposium Series on Computational Intelligence, Symposium on Differential Evolution, pages 46–53.
Cheng, J., Zhang, G., Caraffini, F., and Neri, F. (2015). Multicriteria adaptive differential evolution for global numerical optimization. Integrated Computer-Aided Engineering.
22. References II
Eberhart, R. C. and Kennedy, J. (1995). A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micromachine and Human Science, pages 39–43.
Epitropakis, M. G., Caraffini, F., Neri, F., and Burke, E. K. (2014). A separability prototype for automatic memes with adaptive operator selection. In Foundations of Computational Intelligence (FOCI), 2014 IEEE Symposium on, pages 70–77. IEEE.
Fogel, L. J., Owens, A., and Walsh, M. (1964). On the evolution of artificial intelligence (artificial intelligence generated by natural evolution process). In National Symposium on Human Factors in Electronics, 5th, San Diego, California, pages 63–76.
Fogel, L. J., Owens, A. J., and Walsh, M. J. (1966). Artificial Intelligence through Simulated Evolution. John Wiley.
23. References III
Holland, J. (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press.
Iacca, G., Caraffini, F., and Neri, F. (2012a). Compact differential evolution light: high performance despite limited memory requirement and modest computational overhead. Journal of Computer Science and Technology, 27(5):1056–1076.
Iacca, G., Caraffini, F., and Neri, F. (2013). Memory-saving memetic computing for path-following mobile robots. Applied Soft Computing, 13(4):2003–2016.
Iacca, G., Caraffini, F., and Neri, F. (2014a). Multi-strategy coevolving aging particle optimization. International Journal of Neural Systems, 24(01).
Iacca, G., Caraffini, F., Neri, F., and Mininno, E. (2012b). Robot base disturbance optimization with compact differential evolution light. In EvoApplications, pages 285–294.
24. References IV
Iacca, G., Neri, F., Caraffini, F., and Suganthan, P. N. (2014b). A differential evolution framework with ensemble of parameters and strategies and pool of local search algorithms. In EvoApplications, to appear.
Kononova, A. V., Corne, D. W., Wilde, P. D., Shneer, V., and Caraffini, F. (2015). Structural bias in population-based algorithms. Information Sciences, 298:468–490.
Koza, J. R. (1990). Concept formation and decision tree induction using the genetic programming paradigm. In Parallel Problem Solving from Nature, pages 124–128. Springer.
Koza, J. R. (1992a). Genetic programming: on the programming of computers by means of natural selection, volume 1. MIT Press.
25. References V
Koza, J. R. (1992b). Genetic Programming: vol. 1, On the programming of computers by means of natural selection, volume 1. MIT Press.
Moscato, P. (1989). On evolution, search, optimization, genetic algorithms and martial arts: Towards memetic algorithms. Technical Report 826.
Rechenberg, I. (1971). Evolutionsstrategie – Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. PhD thesis, Technical University of Berlin.
Storn, R. and Price, K. (1995). Differential evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012, ICSI.
26. References VI
Wolpert, D. and Macready, W. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82.