Artificial Intelligence
Holland’s GA Schema Theorem
Andres Mendez-Vazquez
April 7, 2015
Outline
1 Introduction
Schema Definition
Properties of Schemas
2 Probability of a Schema
Probability that an individual is in schema H
Surviving Under Gene-wise Mutation
Surviving Under Single-Point Crossover
The Schema Theorem
A More General Version
Problems with the Schema Theorem
Introduction
Consider the Canonical GA
Binary alphabet.
Fixed-length individuals of length l.
Fitness Proportional Selection.
Single-Point Crossover (1X).
Gene-wise mutation, i.e., mutation applied gene by gene.
Definition 1 - Schema H
A schema H is a template that identifies a subset of strings with
similarities at certain string positions.
Schemata are a special case of the natural open sets of a product
topology.
Introduction
Definition 2
If A denotes the alphabet of genes, then A ∪ {∗} is the schema alphabet,
where ∗ is the 'wild card' symbol matching any gene value.
Example: For A = {0, 1}, the schema alphabet is {0, 1, ∗}, where ∗ matches either 0 or 1.
Example
The schema H = [0 1 ∗ 1 ∗] generates the following individuals:
0 1 0 1 0
0 1 0 1 1
0 1 1 1 0
0 1 1 1 1
Not all schemas say the same
Schema [1 ∗ ∗ ∗ ∗ ∗ ∗] carries less information than [0 1 ∗ ∗ 1 1 0].
There is more!
[1 ∗ ∗ ∗ ∗ ∗ 0] spans the entire length of an individual, but
[1 ∗ 1 ∗ ∗ ∗ ∗] does not.
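To make the matching concrete, here is a minimal Python sketch (mine, not part of the slides; the helper names expand and matches are assumptions) that enumerates the individuals generated by a schema and tests membership.

from itertools import product

def expand(schema):
    # Every binary string matched by the schema; "*" is the wild card.
    wild = [i for i, g in enumerate(schema) if g == "*"]
    for bits in product("01", repeat=len(wild)):
        s = list(schema)
        for pos, b in zip(wild, bits):
            s[pos] = b
        yield "".join(s)

def matches(individual, schema):
    # True when the individual agrees with the schema on every non-* position.
    return all(g == "*" or g == b for g, b in zip(schema, individual))

print(sorted(expand("01*1*")))    # ['01010', '01011', '01110', '01111']
print(matches("01110", "01*1*"))  # True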
Schema Order and Length
Definition 3 - Schema Order, o(H)
The schema order, o(H), is the number of non-"∗" genes in schema H.
Example: o(∗∗∗11∗1) = 3.
Definition 4 - Schema Defining Length, δ(H)
The schema defining length, δ(H), is the distance between the first and
last non-"∗" gene in schema H.
Example: δ(∗∗∗11∗1) = 7 − 4 = 3.
Notes
Given an alphabet A with |A| = k, there are (k + 1)^l possible schemas
of length l.
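A small sketch of Definitions 3 and 4 (the helper names order and defining_length are mine). It uses 0-indexed positions, so δ(∗∗∗11∗1) = 6 − 3 = 3, matching the 1-indexed 7 − 4 = 3 above.

def order(schema):
    # o(H): number of non-"*" genes.
    return sum(1 for g in schema if g != "*")

def defining_length(schema):
    # delta(H): distance between the first and last non-"*" gene (0 if o(H) <= 1).
    fixed = [i for i, g in enumerate(schema) if g != "*"]
    return fixed[-1] - fixed[0] if fixed else 0

print(order("***11*1"))            # 3
print(defining_length("***11*1"))  # 3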
Probabilities of belonging to a Schema H
What do we want?
The probability that individual h belongs to schema H:
P(h ∈ H)
We need the following probabilities
Pdisruption(H, 1X) = probability of the schema being disrupted by crossover.
Pdisruption(H, mutation) = probability of the schema being disrupted by mutation.
Pcrossover(H survives)
Probability of Disruption
Consider now
The CGA using
fitness-proportional parent selection
one-point crossover (1X)
bit-wise mutation with probability Pm
genotypes of length l
The schema can be disrupted if the crossover point falls between its
fixed ends:

Pdisruption(H, 1X) = δ(H) / (l − 1)    (1)

(Figure: a genotype 0 1 0 0 1 0 with a crossover point falling between the schema's defining positions.)
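A one-function sketch of Equation (1) (self-contained; the name is mine): of the l − 1 possible cut points, only those falling strictly inside the span between the outermost fixed positions of H can disrupt it.

def p_disruption_1x(schema):
    # delta(H) / (l - 1): fraction of cut points landing inside the schema's span.
    fixed = [i for i, g in enumerate(schema) if g != "*"]
    delta = fixed[-1] - fixed[0] if fixed else 0
    return delta / (len(schema) - 1)

print(p_disruption_1x("*01***0"))  # delta = 5, l - 1 = 6  ->  0.8333...
print(p_disruption_1x("*01*1**"))  # delta = 3, l - 1 = 6  ->  0.5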
Why?
Given that you have
δ(H) = the distance between the first and last non-"∗" gene
last position in the genotype − first position in the genotype = l − 1
Case I
δ(H) = 1 when the non-"∗" positions are next to each other
Case II
δ(H) = l − 1 when the non-"∗" positions are at the extremes
Remarks about Mutation
Observation about Mutation
Mutation is applied gene by gene.
In order for schema H to survive, all non-"∗" genes in the schema must
remain unchanged.
Thus
The probability of not changing a gene is 1 − Pm (Pm is the probability of
mutation).
The probability that all o(H) non-"∗" genes survive is (1 − Pm)^o(H).
Typically the probability of applying the mutation operator is small, Pm ≪ 1.
The probability that mutation disrupts the schema H:

Pdisruption(H, mutation) = 1 − (1 − Pm)^o(H) ≈ o(H) Pm    (2)

after ignoring the higher-order terms of the polynomial.
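A short sketch of Equation (2) (names are mine): the exact disruption probability under gene-wise mutation versus the first-order approximation o(H) · Pm.

def p_disruption_mutation(schema, pm, exact=True):
    # Exact: 1 - (1 - pm)^o(H); approximate: o(H) * pm (valid when pm << 1).
    o = sum(1 for g in schema if g != "*")
    return 1 - (1 - pm) ** o if exact else o * pm

pm = 0.01
print(p_disruption_mutation("***11*1", pm))               # 1 - 0.99**3 = 0.029701
print(p_disruption_mutation("***11*1", pm, exact=False))  # 3 * 0.01    = 0.03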
Gene-wise Mutation
Lemma 1
Under gene-wise mutation (applied gene by gene), a lower bound on the
probability that an order-o(H) schema H survives (is not disrupted) at
generation t is

1 − o(H) Pm    (3)
Probability that an individual is sampled from schema H
Consider the Following
1 The probability of selection depends on
1 The number of instances of schema H in the population.
2 The average fitness of schema H relative to the average fitness of all
individuals in the population.
Thus, we have the following probability
P(h ∈ H) = PUniform(h in Population) × Mean Fitness Ratio
Then
Finally

P(h ∈ H) = [(Number of individuals matching schema H at generation t) / (Population Size)]
× [(Mean fitness of individuals matching schema H) / (Mean fitness of individuals in the population)]    (4)
Then
Finally

P(h ∈ H) = m(H, t) f(H, t) / (M f̄(t))    (5)

where M is the population size, m(H, t) is the number of instances of
schema H at generation t, f(H, t) is the mean fitness of the individuals
matching H, and f̄(t) is the mean fitness of the population.
Lemma 2
Under fitness-proportional selection, the expected number of instances of
schema H at time t + 1 is

E[m(H, t + 1)] = M · P(h ∈ H) = m(H, t) f(H, t) / f̄(t)    (6)
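A tiny sketch of Equations (5) and (6) (parameter names are mine): given the schema count and the mean fitnesses at generation t, the selection probability and the expected number of copies after M fitness-proportional draws.

def expected_instances(m_H, f_H, f_bar, M):
    # Equation (5): probability that one fitness-proportional draw lands in H.
    p_h_in_H = (m_H * f_H) / (M * f_bar)
    # Equation (6): expected copies after M draws, i.e. m(H,t) f(H,t) / f_bar(t).
    return M * p_h_in_H

print(expected_instances(m_H=10, f_H=1.2, f_bar=1.0, M=100))  # 12.0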
Why?
Note the following
M independent samples (with the same probability) are taken to create the next
generation of parents.
Thus
m(H, t + 1) = I_{h_1} + I_{h_2} + · · · + I_{h_M}
Remark: I_{h_i} is the indicator random variable that the i-th sample belongs to H.
Then
E[m(H, t + 1)] = E[I_{h_1}] + E[I_{h_2}] + · · · + E[I_{h_M}]
Finally
But M samples are taken to create the next generation of parents
E[m(H, t + 1)] = P(h_1 ∈ H) + P(h_2 ∈ H) + · · · + P(h_M ∈ H)
Recall Lemma 5.1 in Cormen's book: the expectation of an indicator random
variable equals the probability of its event.
Finally, because P(h_1 ∈ H) = P(h_2 ∈ H) = · · · = P(h_M ∈ H),
E[m(H, t + 1)] = M × P(h ∈ H)
QED
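A quick Monte Carlo illustration of the indicator-variable argument (mine, not from the slides; the toy population and fitness are arbitrary): averaging m(H, t+1) over many generations of M fitness-proportional draws approaches M · P(h ∈ H).

import random

def matches(individual, schema):
    return all(g == "*" or g == b for g, b in zip(schema, individual))

def sample_count(population, fitness, schema, M):
    # One generation of M independent fitness-proportional draws; returns m(H, t+1).
    weights = [fitness(h) for h in population]
    chosen = random.choices(population, weights=weights, k=M)
    return sum(matches(h, schema) for h in chosen)

population = ["01010", "01111", "10101", "11100", "00011"]
fitness = lambda h: 1 + h.count("1")   # arbitrary toy fitness
schema, M, trials = "01*1*", 100, 5000
avg = sum(sample_count(population, fitness, schema, M) for _ in range(trials)) / trials
# Two individuals match "01*1*", with fitnesses 3 and 5 out of a total of 19,
# so M * P(h in H) = 100 * 8/19, roughly 42.1.
print(avg)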
Search Operators – Single-Point Crossover
Observations
Crossover was the first of the two search operators introduced to modify
the distribution of schemata in the population.
Holland concentrated on modeling the lower bound alone.
Crossover
Consider the following (the crossover point is marked by |)
Generated individual h = 1 0 1 | 1 1 0 0
H1 = ∗ 0 1 | ∗ ∗ ∗ 0
H2 = ∗ 0 1 | ∗ ∗ ∗ ∗
Remarks
1 Schema H1 will naturally be broken by the location of the crossover
operator unless the second parent is able to 'repair' the disrupted
gene.
2 Schema H2 emerges unaffected and is therefore independent of the
second parent.
3 Thus, schemata with long defining lengths are more likely to be
disrupted by single-point crossover than schemata with short
defining lengths.
Now, we have
Lemma 3
Under single-point crossover, a lower bound on the probability of schema H
surviving at generation t is

Pcrossover(H survives) = 1 − Pcrossover(H does not survive)
                       = 1 − pc (δ(H) / (l − 1)) Pdiff(H, t)

Where
Pdiff(H, t) is the probability that the second parent does not
match schema H.
pc is the a priori selected rate of applying crossover.
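A sketch of Lemma 3 (self-contained; the parameter names pc and p_diff are mine): a lower bound on the survival probability under single-point crossover.

def p_crossover_survive(schema, pc, p_diff=1.0):
    # 1 - pc * (delta(H) / (l - 1)) * Pdiff(H, t); p_diff = 1 is the worst case.
    fixed = [i for i, g in enumerate(schema) if g != "*"]
    delta = fixed[-1] - fixed[0] if fixed else 0
    return 1 - pc * (delta / (len(schema) - 1)) * p_diff

print(p_crossover_survive("*01***0", pc=0.7))              # worst case: 1 - 0.7 * 5/6
print(p_crossover_survive("*01***0", pc=0.7, p_diff=0.4))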
How?
We can see the following
Pcrossover(H does not survive) = pc × Pdisruption(H, 1X) × Pdiff(H, t)
After all
pc is used to decide whether the crossover will happen.
The second parent could come from the same schema, in which case there
is no disruption.
Then
Pcrossover(H does not survive) = pc × (δ(H) / (l − 1)) × Pdiff(H, t)
In addition
Worst-case lower bound

Pdiff(H, t) = 1    (7)
The Schema Theorem
The Schema Theorem
The expected number of instances of schema H at generation t + 1, when using a
canonical GA with proportional selection, single-point crossover and gene-wise
mutation (the latter two applied at rates pc and Pm), is

E[m(H, t + 1)] ≥ (m(H, t) f(H, t) / f̄(t)) [1 − pc (δ(H) / (l − 1)) Pdiff(H, t) − o(H) Pm]    (8)
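Putting the pieces together, a sketch of the lower bound in Equation (8) (the function and parameter names are mine, not from the slides).

def schema_theorem_bound(m_H, f_H, f_bar, schema, pc, pm, p_diff=1.0):
    # E[m(H, t+1)] >= (m(H,t) f(H,t) / f_bar(t)) * [1 - pc delta/(l-1) Pdiff - o(H) pm]
    fixed = [i for i, g in enumerate(schema) if g != "*"]
    delta = fixed[-1] - fixed[0] if fixed else 0
    o = len(fixed)
    l = len(schema)
    survival = 1 - pc * (delta / (l - 1)) * p_diff - o * pm
    return (m_H * f_H / f_bar) * survival

# e.g. 10 instances of a schema 20% fitter than average, pc = 0.7, pm = 0.01:
print(schema_theorem_bound(10, 1.2, 1.0, "*01***0", pc=0.7, pm=0.01))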
Proof
We use the following quantities
Pcrossover(H survives) = 1 − pc (δ(H) / (l − 1)) Pdiff(H, t) ≤ 1
Pno-disruption(H, mutation) = 1 − o(H) Pm ≤ 1
Then, we have that

E[m(H, t + 1)] = M × P(h ∈ H)
              = M · m(H, t) f(H, t) / (M f̄(t))
              = m(H, t) f(H, t) / f̄(t)
              ≥ (m(H, t) f(H, t) / f̄(t)) × [1 − pc (δ(H) / (l − 1)) Pdiff(H, t)] × [1 − o(H) Pm]
Thus
We have the following

E[m(H, t + 1)] ≥ (m(H, t) f(H, t) / f̄(t)) [1 − pc (δ(H) / (l − 1)) Pdiff(H, t) − o(H) Pm
                 + pc (δ(H) / (l − 1)) Pdiff(H, t) o(H) Pm]
              ≥ (m(H, t) f(H, t) / f̄(t)) [1 − pc (δ(H) / (l − 1)) Pdiff(H, t) − o(H) Pm]

The last inequality holds because pc (δ(H) / (l − 1)) Pdiff(H, t) o(H) Pm ≥ 0.
Remarks
Observations
The theorem is stated in terms of expectation; thus, strictly
speaking, it is only true for a population with an infinite
number of members.
What about a finite population?
In the case of finite population sizes, population drift
plays an increasingly important role.
More General Version
More General Version

E[m(H, t + 1)] ≥ m(H, t) α(H, t) {1 − β(H, t)}    (9)

Where
α(H, t) is the "selection coefficient."
β(H, t) is the "transcription error."
This allows us to say that H survives if
α(H, t) ≥ 1 − β(H, t), or
m(H, t) f(H, t) / f̄(t) ≥ 1 − pc (δ(H) / (l − 1)) Pdiff(H, t) − o(H) Pm
Observation
Observation
This is the basis for the observation that short (defining length), low-order
schemata of above-average fitness will be favored by canonical
GAs; this is the Building Block Hypothesis.
Problems
Problem 1
Only the worst-case scenario is considered.
No positive effects of the search operators are considered.
This has led to the development of Exact Schema Theorems.
Problem 2
The theorem concentrates on the number of schemata surviving, not on
which schemata survive.
Such considerations have been addressed by using Markov
chains to model the behavior associated with specific
individuals in the population.
Problems
Problem 3
Claims of "exponential increases" in fit schemata: if the expectation
operator of the Schema Theorem is ignored and the effects of crossover
and mutation are discounted, the following result was popularized by
Goldberg,

m(H, t + 1) ≥ (1 + c) m(H, t)

where c is the constant by which fit schemata are always fitter than the
population average.
PROBLEM!
Unfortunately, this is rather misleading, as the average population
fitness will tend to increase with t;
thus the population and the fitness of the remaining schemata will tend
to converge with increasing 'time'.
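A tiny illustration of why the literal reading is misleading (mine, not from the slides; the values are arbitrary toy numbers): with a constant c the recurrence grows geometrically and soon exceeds any finite population size M, so the assumed constant advantage cannot persist.

m, c, M = 5, 0.2, 100   # assumed toy values: initial count, constant advantage, population size
for t in range(40):
    if m > M:
        print("m(H, t) exceeds the population size M =", M, "at generation", t)
        break
    m *= 1 + c   # m(H, t+1) = (1 + c) m(H, t)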

VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 

07.2 Holland's Genetic Algorithms Schema Theorem

  • 11. Introduction
  Definition 2
  If A denotes the alphabet of genes, then A ∪ {∗} is the schema alphabet, where ∗ is the "wild card" symbol matching any gene value.
  Example: for the binary alphabet A = {0, 1}, the schema alphabet is {0, 1, ∗}, and ∗ stands for either 0 or 1.
  • 12. Example
  The schema H = [0 1 ∗ 1 ∗] generates the following individuals:
  0 1 0 1 0
  0 1 0 1 1
  0 1 1 1 0
  0 1 1 1 1
  Not all schemas say the same thing:
  Schema [1 ∗ ∗ ∗ ∗ ∗ ∗] carries less information than [0 1 ∗ ∗ 1 1 0].
  Moreover, [1 ∗ ∗ ∗ ∗ ∗ 0] spans the entire length of an individual, but [1 ∗ 1 ∗ ∗ ∗ ∗] does not.
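  To make the schema definition concrete, here is a minimal Python sketch (not from the slides; the helper names matches and expand are made up) that tests whether an individual belongs to a schema and enumerates the individuals a schema generates:

    from itertools import product

    def matches(schema, individual):
        # An individual belongs to schema H if it agrees with H at every non-'*' position.
        return all(s == '*' or s == g for s, g in zip(schema, individual))

    def expand(schema, alphabet='01'):
        # Enumerate every individual the schema generates
        # (2**(number of '*') strings for a binary alphabet).
        options = [alphabet if s == '*' else s for s in schema]
        return [''.join(genes) for genes in product(*options)]

    H = '01*1*'
    print(expand(H))            # ['01010', '01011', '01110', '01111']
    print(matches(H, '01110'))  # True
    print(matches(H, '11110'))  # False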
  • 16. Schema Order and Length
  Definition 3 - Schema Order o(H)
  The schema order o(H) is the number of non-"∗" genes in schema H.
  Example: o(***11*1) = 3.
  Definition 4 - Schema Defining Length δ(H)
  The schema defining length δ(H) is the distance between the first and last non-"∗" genes in schema H.
  Example: δ(***11*1) = 7 − 4 = 3.
  Note
  Given an alphabet A with |A| = k, there are (k + 1)^l possible schemas of length l.
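  A small companion sketch (again illustrative, with made-up helper names) computing o(H) and δ(H) as just defined; note it uses 0-based positions, so δ(***11*1) comes out as 6 − 3 = 3, matching the 1-based 7 − 4 = 3 above:

    def order(schema):
        # o(H): number of non-'*' genes.
        return sum(1 for s in schema if s != '*')

    def defining_length(schema):
        # delta(H): distance between the first and last non-'*' positions.
        fixed = [i for i, s in enumerate(schema) if s != '*']
        return fixed[-1] - fixed[0] if fixed else 0

    print(order('***11*1'))            # 3
    print(defining_length('***11*1'))  # 3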
  • 20. Probabilities of Belonging to a Schema H
  What do we want?
  The probability that an individual h belongs to schema H: P(h ∈ H).
  We need the following probabilities:
  P_disruption(H, 1X): the probability of the schema being disrupted by single point crossover.
  P_disruption(H, mutation): the probability of the schema being disrupted by mutation.
  P_crossover(H survives): the probability that the schema survives crossover.
  • 23. Probability of Disruption
  Consider now the CGA using:
  fitness proportionate parent selection,
  one-point crossover (1X),
  bitwise mutation with probability Pm,
  genotypes of length l.
  The schema can be disrupted if the crossover point falls between its outermost defined positions:
  P_disruption(H, 1X) = δ(H) / (l − 1)    (1)
  (Figure: example genotype 0 1 0 0 1 0.)
  • 29. Why?
  Recall that δ(H) is the distance between the first and last non-"∗" positions, while the number of possible crossover points is
  (last position in genotype) − (first position in genotype) = l − 1.
  Case I: δ(H) = 1, when the non-"∗" positions are next to each other.
  Case II: δ(H) = l − 1, when the non-"∗" positions are at the extremes.
  A crossover point can disrupt H only if it falls among the δ(H) cut points between the outermost defined positions, which gives the ratio δ(H)/(l − 1) in Eq. (1).
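  The ratio δ(H)/(l − 1) in Eq. (1) can be checked empirically. The sketch below (my own illustration, assuming the crossover point is chosen uniformly among the l − 1 cut points and ignoring any 'repair' by the second parent) estimates the disruption probability by simulation:

    import random

    def disrupted_by_1x(schema, cut):
        # A cut between positions cut-1 and cut disrupts H iff it falls strictly
        # between the first and last non-'*' positions.
        fixed = [i for i, s in enumerate(schema) if s != '*']
        return fixed[0] < cut <= fixed[-1]

    def estimate_disruption(schema, trials=100_000):
        l = len(schema)
        return sum(disrupted_by_1x(schema, random.randint(1, l - 1))
                   for _ in range(trials)) / trials

    H = '*0*1**1*'                      # l = 8, delta(H) = 6 - 1 = 5
    print(estimate_disruption(H))       # roughly 5/7 = 0.714...
    print(5 / (len(H) - 1))             # the bound delta(H)/(l - 1)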
  • 33. Remarks about Mutation
  Observation
  Mutation is applied gene by gene. For schema H to survive, all non-"∗" genes in the schema must remain unchanged. Thus:
  Probability of not changing a gene: 1 − Pm (where Pm is the mutation probability).
  Probability that all o(H) non-"∗" genes survive: (1 − Pm)^o(H).
  Typically the mutation rate satisfies Pm ≪ 1, so the probability that mutation disrupts schema H is
  P_disruption(H, mutation) = 1 − (1 − Pm)^o(H) ≈ o(H)·Pm    (2)
  after ignoring higher-order terms in the expansion.
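  A quick numerical check of the approximation in Eq. (2), assuming independent gene-wise flips at rate Pm (the values of Pm and o(H) below are made up); it also shows where the linear approximation starts to break down:

    def p_disruption_mutation(o_H, pm):
        # Exact probability that at least one of the o(H) fixed genes is flipped.
        return 1 - (1 - pm) ** o_H

    o_H = 3
    for pm in (0.001, 0.01, 0.1):
        exact = p_disruption_mutation(o_H, pm)
        approx = o_H * pm               # first-order approximation of Eq. (2)
        print(f"Pm={pm}: exact={exact:.6f}, o(H)*Pm={approx:.6f}")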
  • 40. Gene wise Mutation
  Lemma 1
  Under gene wise mutation (applied gene by gene), a lower bound on the probability that an order-o(H) schema H survives (is not disrupted) at generation t is
  1 − o(H)·Pm    (3)
  • 41. Probability that an individual is sampled from schema H
  Consider the following: the probability of selecting a member of H depends on
  1. the number of instances of schema H in the population, and
  2. the average fitness of schema H relative to the average fitness of all individuals in the population.
  Thus, we have the following probability:
  P(h ∈ H) = P_uniform(h in population) × (mean fitness ratio)
  • 45. Then
  Finally,
  P(h ∈ H) = [(number of individuals matching schema H at generation t) / (population size)] × [(mean fitness of individuals matching schema H) / (mean fitness of individuals in the population)]    (4)
  • 46. Then
  Finally,
  P(h ∈ H) = m(H, t)·f(H, t) / (M·f̄(t))    (5)
  where M is the population size, m(H, t) is the number of instances of schema H at generation t, f(H, t) is the mean fitness of the individuals matching H, and f̄(t) is the mean fitness of the population.
  Lemma 2
  Under fitness proportional selection, the expected number of instances of schema H at time t + 1 is
  E[m(H, t + 1)] = M·P(h ∈ H) = m(H, t)·f(H, t) / f̄(t)    (6)
  • 48. Why?
  Note that M independent samples, each with the same probability, are taken to create the next generation of parents. Thus
  m(H, t + 1) = I_h1 + I_h2 + ... + I_hM
  where I_hi is the indicator random variable equal to one when sample hi belongs to H. Then, by linearity of expectation,
  E[m(H, t + 1)] = E[I_h1] + E[I_h2] + ... + E[I_hM]
  • 51. Finally
  Since the expectation of an indicator variable equals the probability of its event (Lemma 5.1 in Cormen's book, Introduction to Algorithms),
  E[m(H, t + 1)] = P(h1 ∈ H) + P(h2 ∈ H) + ... + P(hM ∈ H)
  and, because P(h1 ∈ H) = P(h2 ∈ H) = ... = P(hM ∈ H),
  E[m(H, t + 1)] = M × P(h ∈ H). QED
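  Lemma 2 can be illustrated with a small roulette-wheel selection experiment. This is only a sketch under made-up assumptions: a OneMax-style toy fitness, a random population, and selection without crossover or mutation:

    import random

    def fitness(ind):                   # toy fitness: number of ones (an assumption)
        return ind.count('1')

    def in_schema(ind, schema):
        return all(s == '*' or s == g for s, g in zip(schema, ind))

    random.seed(0)
    M, l = 200, 8
    population = [''.join(random.choice('01') for _ in range(l)) for _ in range(M)]

    H = '1*****1*'
    members = [h for h in population if in_schema(h, H)]
    f_H = sum(fitness(h) for h in members) / len(members)
    f_bar = sum(fitness(h) for h in population) / M
    expected = len(members) * f_H / f_bar        # Lemma 2: m(H,t) * f(H,t) / f_bar(t)

    # One generation of fitness proportional selection only.
    next_gen = random.choices(population, weights=[fitness(h) for h in population], k=M)
    observed = sum(in_schema(h, H) for h in next_gen)
    print(f"expected about {expected:.1f}, observed {observed}")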
  • 54. Search Operators - Single Point Crossover
  Observations
  Crossover was the first of the two search operators introduced to modify the distribution of schemas in the population.
  Holland concentrated on modeling the lower bound alone.
  • 56. Crossover
  Consider the following individual and schemas, with the crossover point marked by "|":
  h  = 1 0 1 | 1 1 0 0
  H1 = ∗ 0 1 | ∗ ∗ ∗ 0
  H2 = ∗ 0 1 | ∗ ∗ ∗ ∗
  Remarks
  1. Schema H1 will be broken at this crossover point unless the second parent happens to 'repair' the disrupted gene.
  2. Schema H2 emerges unaffected and is therefore independent of the second parent.
  3. Thus, schemas with long defining lengths are more likely to be disrupted by single point crossover than schemas with short defining lengths.
  • 60. Now, we have
  Lemma 3
  Under single point crossover, a lower bound on the probability of schema H surviving at generation t is
  P_crossover(H survives) = 1 − P_crossover(H does not survive) = 1 − pc·[δ(H)/(l − 1)]·P_diff(H, t)
  where P_diff(H, t) is the probability that the second parent does not match schema H, and pc is the a priori probability of applying crossover.
  • 63. How?
  We can see the following:
  P_crossover(H does not survive) = pc × P_disruption(H, 1X) × P_diff(H, t)
  After all, pc decides whether crossover is applied at all, and if the second parent also matches schema H the cut cannot disrupt it. Then
  P_crossover(H does not survive) = pc × [δ(H)/(l − 1)] × P_diff(H, t)
  • 67. In addition
  Worst-case lower bound:
  P_diff(H, t) = 1    (7)
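  A hedged illustration of Lemma 3 with the worst case P_diff(H, t) = 1: the sketch below compares the crossover-survival lower bound for a short and a long defining-length schema (the schemas and pc are made up), echoing the remark on slide 56:

    def crossover_survival_bound(schema, pc, p_diff=1.0):
        # Lower bound 1 - pc * delta(H)/(l - 1) * P_diff on surviving single point crossover.
        fixed = [i for i, s in enumerate(schema) if s != '*']
        delta = fixed[-1] - fixed[0]
        return 1 - pc * delta / (len(schema) - 1) * p_diff

    print(crossover_survival_bound('*11*****', pc=0.7))   # delta(H) = 1 -> 0.90
    print(crossover_survival_bound('1******1', pc=0.7))   # delta(H) = 7 -> 0.30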
  • 69. The Schema Theorem
  The Schema Theorem
  The expected number of instances of schema H at generation t + 1, when using a canonical GA with proportional selection, single point crossover and gene wise mutation (applied at rates pc and Pm respectively), satisfies
  E[m(H, t + 1)] ≥ m(H, t)·[f(H, t)/f̄(t)]·[1 − pc·δ(H)/(l − 1)·P_diff(H, t) − o(H)·Pm]    (8)
  • 70. Proof
  We use the following quantities:
  P_crossover(H survives) = 1 − pc·δ(H)/(l − 1)·P_diff(H, t) ≤ 1
  P_no-disruption(H, mutation) = 1 − o(H)·Pm ≤ 1
  Under selection alone,
  E[m(H, t + 1)] = M × P(h ∈ H) = M·m(H, t)·f(H, t)/(M·f̄(t)) = m(H, t)·f(H, t)/f̄(t)
  and each selected instance then survives crossover and mutation with probability at least the product of the two quantities above, so
  E[m(H, t + 1)] ≥ [m(H, t)·f(H, t)/f̄(t)] × [1 − pc·δ(H)/(l − 1)·P_diff(H, t)] × [1 − o(H)·Pm]
  • 73. Thus
  Expanding the product, we have
  E[m(H, t + 1)] ≥ [m(H, t)·f(H, t)/f̄(t)]·[1 − pc·δ(H)/(l − 1)·P_diff(H, t) − o(H)·Pm + pc·δ(H)/(l − 1)·P_diff(H, t)·o(H)·Pm]
                 ≥ [m(H, t)·f(H, t)/f̄(t)]·[1 − pc·δ(H)/(l − 1)·P_diff(H, t) − o(H)·Pm]
  The last inequality holds because pc·δ(H)/(l − 1)·P_diff(H, t)·o(H)·Pm ≥ 0.
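  Putting the pieces together, the following sketch evaluates the right-hand side of Eq. (8) for a toy population, under the worst-case assumption P_diff(H, t) = 1; the population, fitness function, and parameter values are made up for illustration:

    def schema_theorem_bound(population, fitness, schema, pc, pm):
        # Lower bound on E[m(H, t+1)] from Eq. (8), taking P_diff = 1 (worst case).
        l = len(schema)
        fixed = [i for i, s in enumerate(schema) if s != '*']
        o_H, delta_H = len(fixed), (fixed[-1] - fixed[0] if fixed else 0)

        members = [h for h in population
                   if all(s == '*' or s == g for s, g in zip(schema, h))]
        if not members:
            return 0.0
        f_H = sum(fitness(h) for h in members) / len(members)
        f_bar = sum(fitness(h) for h in population) / len(population)
        return len(members) * (f_H / f_bar) * (1 - pc * delta_H / (l - 1) - o_H * pm)

    pop = ['11010110', '10000001', '01111100', '11111111', '00010010', '11010001']
    print(schema_theorem_bound(pop, lambda h: h.count('1'), '11*****1', pc=0.7, pm=0.01))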
  • 74. Remarks
  Observations
  The theorem is stated in terms of expectation; strictly speaking, it is only exact in the limit of a population with an infinite number of members.
  What about a finite population?
  For finite population sizes, genetic drift plays an increasingly important role.
  • 78. A More General Version
  E[m(H, t + 1)] ≥ m(H, t)·α(H, t)·{1 − β(H, t)}    (9)
  where α(H, t) = f(H, t)/f̄(t) is the "selection coefficient" and β(H, t) = pc·δ(H)/(l − 1)·P_diff(H, t) + o(H)·Pm is the "transcription error".
  This lets us say that schema H is favored (its expected count does not shrink) when α(H, t)·{1 − β(H, t)} ≥ 1, i.e.,
  f(H, t)/f̄(t) ≥ 1 / [1 − pc·δ(H)/(l − 1)·P_diff(H, t) − o(H)·Pm]
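  In this notation, a short sketch (same made-up toy population as above, worst case P_diff = 1) computes α(H, t) and β(H, t) and checks whether the schema is favored:

    def alpha_beta(population, fitness, schema, pc, pm, p_diff=1.0):
        # alpha = f(H,t)/f_bar(t);  beta = pc*delta(H)/(l-1)*P_diff + o(H)*Pm.
        l = len(schema)
        fixed = [i for i, s in enumerate(schema) if s != '*']
        members = [h for h in population
                   if all(s == '*' or s == g for s, g in zip(schema, h))]
        f_H = sum(fitness(h) for h in members) / len(members)
        f_bar = sum(fitness(h) for h in population) / len(population)
        alpha = f_H / f_bar
        beta = pc * (fixed[-1] - fixed[0]) / (l - 1) * p_diff + len(fixed) * pm
        return alpha, beta

    pop = ['11010110', '10000001', '01111100', '11111111', '00010010', '11010001']
    a, b = alpha_beta(pop, lambda h: h.count('1'), '1*1*****', pc=0.7, pm=0.01)
    print(a, b, "favored" if a * (1 - b) >= 1 else "not favored")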
  • 82. Observation
  This is the basis for the observation that short (defining length), low order schemas of above average fitness will be favored by canonical GAs: the Building Block Hypothesis.
  • 84. Problems
  Problem 1
  Only the worst-case scenario is considered; no positive (constructive) effects of the search operators are accounted for. This has led to the development of Exact Schema Theorems.
  Problem 2
  The theorem concentrates on the number of schemas surviving, not on which schemas survive. Such considerations have been addressed by using Markov chain models of the behavior of specific individuals in the population.
  • 90. Problems
  Problem 3
  Claims of "exponential increases" in fit schemas: if the expectation operator of the Schema Theorem is ignored and the effects of crossover and mutation are discounted, the following result, popularized by Goldberg, is obtained:
  m(H, t + 1) ≥ (1 + c)·m(H, t)
  where c is the constant by which fit schemas are assumed to be fitter than the population average.
  The problem: this is rather misleading, because the average population fitness tends to increase with t, so the fitness of the remaining schemas and the population average tend to converge over time and c cannot stay constant.
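  For intuition only: if the expectation operator is dropped and c really were constant, iterating m(H, t + 1) ≥ (1 + c)·m(H, t) would compound geometrically, which is exactly the "exponential increase" reading criticized above (the values of m and c below are made up):

    m, c = 1.0, 0.2        # made-up initial count and fitness-advantage constant
    history = []
    for t in range(10):
        history.append(round(m, 2))
        m *= 1 + c         # m(t) = (1 + c)**t * m(0): geometric growth
    print(history)         # [1.0, 1.2, 1.44, 1.73, ...]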