This document discusses covariance matrix adaptation evolution strategy (CMA-ES), an optimization technique. It begins with an introduction to optimization and evolution strategies. CMA-ES adapts the covariance matrix of a multivariate normal distribution used to sample new solutions, allowing it to better model the objective function. The document covers step-size adaptation, cumulative step-size adaptation, and covariance matrix adaptation, with examples provided.
1. Covariance Matrix Adaptation Evolution Strategy (CMA-ES)
BY: OSAMA SALAH ELDIN
UNDER SUPERVISION: PROF. MAGDA B. FAYEK
6/3/2016 CAIRO UNIVERSITY - COMPUTER ENGINEERING - 2015
2. Outline
o What is Optimization?
o What is an Evolution Strategy?
o Step-size Adaptation
o Cumulative Step-size Adaptation
o Covariance Matrix Adaptation
o Application - Modeling
3. What is optimization?
o Optimization is the minimization or the maximization of a function
(figure: a curve y = f(x) with two local minima and the global minimum marked along the x-axis)
4. What is optimization?
o Try to solve these problems:
x³ − 8 = 0   →   x = 2
5. What is optimization?
o Try to solve these problems:
x² + 3·y − 15 = 0   →   x = 3, y = 2
6. What is optimization?
o Try to solve these problems:
x² + y + 2·z − 15 = 0   →   x = 3, y = 2, z = 2
7. What is optimization?
o Try to solve these problems:
8. What is optimization?
o Try to solve these problems:
Can one try all combinations? This is not recommended.
12. What is an Evolution Strategy?
o It is a technique that searches for the optimum solution in a search-space
o Evolution Strategies belong to the family of Evolutionary Computation
o Evolution strategy steps:
1. Generate a population of candidate solutions
2. Evaluate every individual in the population
3. Select parents from the fittest individuals
4. Reproduce offspring of the next generation (recombination & mutation)
5. Repeat until a termination criterion is met
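The five steps above can be sketched as a minimal (µ, λ) evolution strategy in Python. The sphere objective, population sizes, and fixed mutation strength here are illustrative choices, not from the slides:

```python
import random

def basic_es(objective, x0, mu=5, lam=20, sigma=0.3, generations=100):
    """Minimal (mu, lam) evolution strategy following the five steps above."""
    mean = list(x0)
    n = len(mean)
    for _ in range(generations):
        # 1. Generate a population of candidate solutions around the mean
        population = [[m + sigma * random.gauss(0, 1) for m in mean]
                      for _ in range(lam)]
        # 2. Evaluate every individual (lower objective value is better)
        scored = sorted(population, key=objective)
        # 3. Select parents from the fittest individuals (truncation selection)
        parents = scored[:mu]
        # 4. Reproduce: recombination takes the average of the parents;
        #    mutation happens at the top of the next iteration
        mean = [sum(p[i] for p in parents) / mu for i in range(n)]
        # 5. Repeat until the termination criterion (generation budget) is met
    return mean

# Example: minimize the sphere function f(x) = x1^2 + x2^2
random.seed(0)
best = basic_es(lambda x: sum(v * v for v in x), x0=[3.0, -2.0])
```

With a fixed mutation strength the mean settles near the optimum but keeps jittering at the scale of sigma; the step-size adaptation covered later in the deck addresses exactly this.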
13. Evolution Strategies vs. Genetic Algorithms

                      ES                             GA
Initial Population    Random mutations of the        Random or seeded
                      initial guess
Evaluation            Objective Function             Fitness (Evaluation) Function
Selection             Truncation Selection           Different methods
Reproduction          Recombination + Mutation       Crossover + Mutation
Termination           Almost similar stop conditions
14. What is an Evolution Strategy? - Example
1. Generate a population of candidate solutions
(figure: candidate solutions sampled along the curve y = f(x))
15. What is an Evolution Strategy? - Example
2. Evaluate every individual in the population
(figure: each sampled point labeled with its fitness on the curve y = f(x))
16. What is an Evolution Strategy? - Example
3. Select parents from the fittest individuals
(figure: the fittest individuals highlighted on the curve y = f(x))
17. What is an Evolution Strategy? - Example
4. Reproduce offspring of the next generation (recombination & mutation)
(figure: offspring sampled around the selected parents on the curve y = f(x))
18. What is an Evolution Strategy? - Example
5. Repeat until a termination criterion is met
(figure: evaluate & select on the curve y = f(x))
19. What is an Evolution Strategy? - Example
5. Repeat until a termination criterion is met
(figure: evaluate & select, then reproduce)
20. What is an Evolution Strategy? - Example
5. Repeat until a termination criterion is met
(figure: evaluate, select, and reproduce repeatedly until the population converges on the optimum solution, then terminate)
21. The Basic Evolution Strategy
o The basic evolution strategy is defined by:
(µ/ρ, λ)-ES and (µ/ρ + λ)-ES
where:
µ  The number of selected individuals per generation
ρ  The number of parents (selected from µ) involved in recombination (ρ ≤ µ)
λ  The number of individuals per generation (population size)
,  Comma Selection: µ parents are selected from the λ individuals
+  Plus Selection: µ parents are selected from the λ individuals plus the current ρ parents
22. The Basic Evolution Strategy - Example
(10/6, 50)-ES
Select the fittest 10 individuals from the 50 individuals of the current population, then pick 6 of them at random. Recombine these 6 parents to generate 50 new offspring.
(10/6 + 50)-ES
Select the fittest 10 individuals from the 50 individuals of the current population together with their 6 parents (56 in total), then pick 6 of them at random. Recombine these 6 parents to generate 50 new offspring.
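The difference between comma and plus selection can be sketched as follows. The fitness values and solution labels are made up for illustration; lower fitness is better, as in minimization:

```python
def select(offspring, parents, mu, scheme="comma"):
    """Truncation selection for minimization.
    offspring, parents: lists of (fitness, solution) pairs, lower is better.
    'comma': choose the mu best from the lam offspring only.
    'plus' : choose the mu best from the offspring plus the current parents.
    """
    pool = list(offspring)
    if scheme == "plus":
        pool += list(parents)
    return sorted(pool, key=lambda pair: pair[0])[:mu]

offspring = [(3.0, "a"), (1.0, "b"), (2.0, "c")]
parents = [(0.5, "p")]
comma = select(offspring, parents, mu=2, scheme="comma")  # old parent is discarded
plus = select(offspring, parents, mu=2, scheme="plus")    # old parent can survive
```

Comma selection forces the population to move every generation, while plus selection never loses the best solution found so far.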
23. The Structure of an Individual
Object Parameter Vector (Y) | Strategy Parameter Vector (S) | Individual's Fitness (F)
Y  The candidate solution of the problem (e.g. an (x, y) point)
S  The parameters used by the strategy (e.g. mutation strength)
F  The fitness of the candidate solution Y as measured by the fitness function (i.e. the value of the objective function)
e.g. Y = {x1, x2, z}
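A sketch of this individual structure; the field contents are illustrative, and a sphere function stands in for the fitness function:

```python
from collections import namedtuple

# One ES individual: object parameters Y, strategy parameters S, fitness F.
Individual = namedtuple("Individual", ["Y", "S", "F"])

def sphere(y):
    """Objective function used here as the fitness measure (illustrative)."""
    return sum(v * v for v in y)

y = (1.0, 2.0, 0.5)                        # candidate solution, e.g. {x1, x2, z}
ind = Individual(Y=y, S={"sigma": 0.3}, F=sphere(y))
```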
24. The Structure of an Individual
Object Parameter Vector (Y) | Strategy Parameter Vector (S) | Individual's Fitness (F)
• Evolution strategies search in two spaces for the optimum:
1. Solution: the highest fitness
2. Strategy parameters: the fastest improvement
34. The Basic Evolution Strategy
(steps: 1 - Initial Solution, 2 - Initial Population, 3 - Evaluation, 4 - Selection, 5 - Reproduction, 6 - Termination)
1 - Initial Solution: an initial guess, which should be as close as possible to the expected solution
35. The Basic Evolution Strategy
2 - Initial Population: the initial population is generated by mutating the initial solution
36. The Basic Evolution Strategy
3 - Evaluation: every individual is evaluated by the objective function
(figure: the evaluated population; best fitness = 0)
37. The Basic Evolution Strategy
4 - Selection: truncation selection is used: select the fittest µ individuals and drop the other individuals
38. The Basic Evolution Strategy
5 - Reproduction = Recombination + Mutation
Recombination: combining two or more parents to produce a mean for the new generation
Mutation: adding normally-distributed random vectors to the new mean
39. The Basic Evolution Strategy
5 - Reproduction: Recombination
A simple recombination is taking the average:

       Solution   Strategy Parameters   Fitness
P1     1   3      S1                    F1
P2     4   6      S2                    F2
Mean   2.5 4.5    S3                    To be calculated
41. The Basic Evolution Strategy
5 - Reproduction: Mutation (adding normally-distributed random vectors to the new mean)
42. The Basic Evolution Strategy
5 - Reproduction: Mutation
Parent: 5.5  8.0
Generate λ normally-distributed random vectors (RX1, RY1), (RX2, RY2), (RX3, RY3), and add each of the λ mutating vectors to the initial solution:
5.5 + RX1    8.0 + RY1
5.5 + RX2    8.0 + RY2
5.5 + RX3    8.0 + RY3
50. Step-size Adaptation (σSA-ES)
The parent of a generation is an individual in the previous generation
60. Cumulative Step-size Adaptation (CSA)
1. Calculate the average Z̄t of the fittest µ solutions
2. Calculate the cumulative path Pc at generation t
The parameter c is called the cumulation parameter; it determines how rapidly the information stored in Pc fades. The typical value of c is between 1/n and 1/√n.
61. Cumulative Step-size Adaptation (CSA)
3. Update the mutation strength (i.e. step-size) σ
The damping parameter dσ determines how much the step-size can change (normally it is set to 1).
||X|| denotes the Euclidean norm of the vector X.
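The CSA updates above can be sketched as follows. The exponential update form follows Hansen-style CSA; the cumulation parameter c = 0.2 and damping dσ = 1 are illustrative defaults, and √n approximates the expected length E||N(0, I)||:

```python
import math

def csa_update(p_c, z_mean, sigma, n, c=0.2, d_sigma=1.0):
    """One cumulative step-size adaptation update (sketch).
    p_c    : current cumulative path (length-n list)
    z_mean : average mutation vector of the fittest mu offspring
    The path is a fading memory of selected steps: if consecutive steps point
    the same way the path grows long and sigma is increased; if they cancel
    each other out the path stays short and sigma is decreased.
    """
    # cumulate: fade the stored information at rate c, add the new step
    p_c = [(1 - c) * p + math.sqrt(c * (2 - c)) * z
           for p, z in zip(p_c, z_mean)]
    # compare ||p_c|| with its expected length under purely random selection;
    # E||N(0, I)|| is roughly sqrt(n)
    norm = math.sqrt(sum(v * v for v in p_c))
    sigma *= math.exp((c / d_sigma) * (norm / math.sqrt(n) - 1))
    return p_c, sigma
```

Feeding the same direction repeatedly makes the path long and sigma grow; feeding cancelling (or zero) steps makes the path short and sigma shrink.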
74. Covariance-Matrix Adaptation (CMA)
o To which direction should the population be directed?
(figure: two sampling clouds, labeled Variance and Covariance)
75. Variance
o Variance is a measure of how far a variable changes away from its mean:
Var(X) = the average of (Xi − X̄)² over the samples, where X̄ is the mean of the samples of X
79. Covariance-Matrix
o It is a matrix whose (i, j) element is the covariance between the i-th and the j-th variables
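A small sketch of this definition. The n − 1 (sample covariance) denominator is an assumption, since the slide's exact formula was an image:

```python
def covariance_matrix(samples):
    """Empirical covariance matrix: element (i, j) is the covariance between
    the i-th and the j-th variables, matching the definition on the slide."""
    n = len(samples)         # number of sample points
    dim = len(samples[0])    # number of variables
    means = [sum(s[i] for s in samples) / n for i in range(dim)]
    return [[sum((s[i] - means[i]) * (s[j] - means[j]) for s in samples) / (n - 1)
             for j in range(dim)]
            for i in range(dim)]

# Two perfectly correlated variables: variances on the diagonal,
# their covariance off the diagonal
C = covariance_matrix([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```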
80. Covariance-Matrix Adaptation (CMA)
o To which direction should the population be directed?
(figure: two sampling clouds, labeled Variance = σ² and Covariance)
81. Principal Component
o CMA-ES performs a type of Principal Component Analysis (PCA)
o Principal Component: the principal variable (component) is equivalent to the principal player:
1. High variance
2. Low covariance with other components
Distinct, or very special
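The principal component of a covariance matrix is its dominant eigenvector, which can be sketched with power iteration. The diagonal input matrix is an illustrative example:

```python
import math

def principal_component(C, iterations=100):
    """Dominant eigenvector of a covariance matrix via power iteration,
    i.e. the direction of highest variance (the principal component)."""
    n = len(C)
    v = [1.0] * n
    for _ in range(iterations):
        # repeatedly apply C and renormalize; the dominant direction survives
        w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Variance is much larger along the second coordinate, so the principal
# component points along it
v = principal_component([[1.0, 0.0], [0.0, 9.0]])
```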
82. Covariance-Matrix Adaptation (CMA)
o To which direction should the population be directed? Towards the principal component
83. Covariance-Matrix Adaptation (CMA)
A practical run of CMA-ES:
• The initial guess is (0, 0)
• The optimum solution is (5, 50)
• The population moves faster towards the direction of the second component (50)
85. CMA-ES (Steps) - 1
o Initial values:
◦ C = I (n × n identity matrix)
◦ An initial guess m (n × 1 mean of the initial population)
◦ An initial step-size (n × 1 standard-deviation matrix)
1. Generate λ offspring by mutating the mean m
2. Evaluate the λ offspring
3. Sort the offspring by fitness (x1:λ denotes the fittest individual)
86. CMA-ES (Steps) - 2
4. Update the mean m of the population as a weighted average of the fittest µ offspring
The constants wi are selected such that they sum to 1 (µ is the number of parents)
87. CMA-ES (Steps) - 3
5. Update the step-size cumulation path Pσ, where zi:λ is the random vector that generated the individual xi:λ
◦ cσ: decay rate of the evolution path for the step-size σ (≈ 4/n)
88. CMA-ES (Steps) - 4
6. Update the covariance-matrix cumulation path Pc ∈ ℝ^(n×1)
cc: decay rate of the evolution path of C
7. Update the step-size σ
||X|| denotes the Euclidean norm of the vector X
89. CMA-ES (Steps) - 5
8. Update the covariance matrix C
c1: learning rate for the rank-one update of C (≈ 2/n²)
cµ: learning rate for the rank-µ update of C (≈ µw/n²)
Repeat the previous steps until a satisfying solution is found, a maximum number of generations is exceeded, or no significant improvement is achieved.
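Steps 1-8 can be sketched end-to-end. This is a simplified CMA-ES, not the production algorithm: it uses only the rank-µ covariance update (no rank-one update and no covariance cumulation path Pc), samples via a Cholesky factor of C, and its constants follow the rough magnitudes quoted on the slides (cσ ≈ 4/n, cµ ≈ µw/n²) with caps added for numerical stability:

```python
import math
import random

def cholesky(C):
    """Lower-triangular L with L L^T = C (C symmetric positive definite)."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(C[i][i] - s)
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    return L

def cma_es(objective, x0, sigma=0.5, lam=16, generations=120):
    n = len(x0)
    mu = lam // 2
    # log-decreasing recombination weights, normalized to sum to 1
    w = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
    tot = sum(w)
    w = [wi / tot for wi in w]
    mu_w = 1.0 / sum(wi * wi for wi in w)
    c_s = 4.0 / (n + 4)                # ~4/n decay rate for the sigma path
    d_s = 1.0                          # damping for step-size changes
    c_mu = min(0.3, mu_w / n ** 2)     # ~mu_w/n^2, capped here for stability
    chi_n = math.sqrt(n) * (1 - 1.0 / (4 * n) + 1.0 / (21 * n * n))
    mean = list(x0)
    C = [[float(i == j) for j in range(n)] for i in range(n)]  # C = I
    p_s = [0.0] * n
    for _ in range(generations):
        # 1.-2. sample lam offspring x = m + sigma * L z and evaluate them
        L = cholesky(C)
        zs = [[random.gauss(0, 1) for _ in range(n)] for _ in range(lam)]
        xs = [[mean[i] + sigma * sum(L[i][k] * z[k] for k in range(i + 1))
               for i in range(n)] for z in zs]
        # 3. sort by fitness (minimization)
        order = sorted(range(lam), key=lambda i: objective(xs[i]))
        old = mean
        # 4. mean update: weighted average of the mu fittest offspring
        mean = [sum(w[k] * xs[order[k]][i] for k in range(mu)) for i in range(n)]
        # 8. rank-mu covariance update from the mu fittest steps
        for i in range(n):
            for j in range(n):
                r = sum(w[k] * (xs[order[k]][i] - old[i])
                             * (xs[order[k]][j] - old[j]) for k in range(mu))
                C[i][j] = (1 - c_mu) * C[i][j] + c_mu * r / sigma ** 2
        # 5.-7. cumulative step-size adaptation
        zm = [sum(w[k] * zs[order[k]][i] for k in range(mu)) for i in range(n)]
        p_s = [(1 - c_s) * p + math.sqrt(c_s * (2 - c_s) * mu_w) * z
               for p, z in zip(p_s, zm)]
        norm = math.sqrt(sum(v * v for v in p_s))
        sigma *= math.exp((c_s / d_s) * (norm / chi_n - 1))
    return mean

random.seed(1)
best = cma_es(lambda x: sum(v * v for v in x), [3.0, -2.0])
```

For real use, prefer the reference implementations linked at the end of the deck, which include the rank-one update, the Pc path, and carefully derived constants.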
90. Advantages of CMA-ES
o CMA-ES can outperform other strategies in the following cases:
◦ Non-separable problems (the parameters of the objective function are dependent)
◦ The derivative of the objective function is not available
◦ High-dimension problems (n is large)
◦ Very large search spaces
91. CMA-ES Limitations
o CMA-ES can be outperformed by other strategies in the following cases:
◦ Partly separable problems (i.e. the optimization of an n-dimension objective function can be divided into a series of n optimizations of every single parameter)
◦ The derivative of the objective function is easily available (gradient descent / ascent)
◦ Small-dimension problems
◦ Problems that can be solved using a relatively small number of function evaluations (e.g. < 10n evaluations)
92. Outline
o What is Optimization?
o What is an Evolution Strategy?
o Step-size Adaptation
o Cumulative Step-size Adaptation
o Covariance Matrix Adaptation
o Application - Modeling
93. Application - Modeling
(figure: a black-box model f(x) mapping input x to output y)
1. Collect samples (x1, y1), (x2, y2), (x3, y3), ..., (xn, yn)
2. Guess a model f(x) = a·x² + b·x + c
3. Optimize the model: find the optimum values of {a, b, c}
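The three modeling steps can be sketched with synthetic samples and a small (µ, λ) evolution strategy standing in for CMA-ES. The true coefficients a = 2, b = −3, c = 1 and the fixed step-size decay are illustrative assumptions:

```python
import random

random.seed(3)

# 1. Collect samples (generated here from known coefficients a=2, b=-3, c=1)
xs = [i / 10.0 for i in range(-20, 21)]
ys = [2.0 * x * x - 3.0 * x + 1.0 for x in xs]

# 2. Guess a model f(x) = a*x^2 + b*x + c; the error to minimize is the
#    sum of squared residuals over all samples
def error(params):
    a, b, c = params
    return sum((a * x * x + b * x + c - y) ** 2 for x, y in zip(xs, ys))

# 3. Optimize the model: find the best values of {a, b, c} with a small
#    (mu, lam) evolution strategy (fixed step-size decay instead of CMA)
mean, sigma, mu, lam = [0.0, 0.0, 0.0], 1.0, 5, 20
for _ in range(300):
    pop = [[m + sigma * random.gauss(0, 1) for m in mean] for _ in range(lam)]
    pop.sort(key=error)
    mean = [sum(p[i] for p in pop[:mu]) / mu for i in range(3)]
    sigma *= 0.99
a, b, c = mean
```

The recovered coefficients land close to the generating values; CMA-ES would adapt the step-size and the sampling shape automatically instead of using a hand-tuned decay.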
94. Application – Modeling in Robocode
Motion Model
(figure: the enemy's path) Find a model for this path
95. Application – Modeling in Robocode
Motion Model – Steps
1. Collect samples: the (x, y) location of the enemy
2. Guess the model (using GA)
3. Optimize the model
96. Application – Modeling in Robocode
Motion Model – Observations
o Different models give different human-like behaviors: careless, reckless, tricky
97. Using the Source Code
o Source code for CMA-ES (C, C++, Java, Fortran, Python, R, Scilab, MATLAB / Octave m-files) is available at:
https://www.lri.fr/~hansen/cmaes_inmatlab.html
◦ purecmaes.m: simple implementation
◦ cmaes.m: production code
1. Specify the initial values of the parameters (step-size, covariance matrix, initial guess, population size, etc.)
2. Define your objective function (MATLAB / Octave):
function f = obj_func(x)
  f = abs(x(1)^3 - 8);  % calculate the error here, e.g. for solving x^3 - 8 = 0
98. Using the Source Code
3. Call the function (MATLAB / Octave):
>> function_mfile(parameters)