Animating The Inanimate
Genetic Algorithms
In Complex Problem Space
Thorolf Horn Tonjum.
School of Computing and Technology,
University of Sunderland, The Informatics Centre.
Introduction.
This paper describes a learning scenario in which a creature with three legs learns to walk.
The aim of this project is to see the genetic algorithm converge in a problem domain of high complexity.
To produce a learning scenario of suitable complexity, we use a physics engine to simulate a 3D world, in which we place a learning
agent with three legs of two joints each. Movement becomes a functional of the motion of each limb, with respect to speed, gravity,
weight, position, and rotation.
The Tripod is a learning agent; it is an autonomous entity which generates its actions from definitions expressed in its own gene
structure. The Tripod learns from experience. Learning is the ability to perform better by using experience to alter one's behaviour.
The experience is produced by the agent experimenting with random movements while being graded by a fitness function (tutor).
The fitness function grades the Tripod on the distance it is able to travel from its initial starting coordinate within a
period of 30 seconds. The learning data is the set of movement patterns generated by the Tripod, together with the result of the fitness
function. The Tripod learns from its experience by using a genetic algorithm, a machine learning algorithm
from the class of random heuristic search algorithms.
The genetic algorithm.
Genetic algorithms are widely used in machine learning; they are applicable in situations where specialised learning algorithms are
not available, typically where the underlying function is not known. As this is the case in our learning problem, the genetic
algorithm is suitable.
Suitable but not perfect, as genetic algorithms are machine-time intensive and do not guarantee an optimal, or even a Pareto-optimal,
solution as long as the search space is not completely searchable. (A Pareto-optimal solution is a solution in the complete
hypothesis space in which no sub-part is inferior to the corresponding sub-part of any other solution.)
Random heuristic search algorithms, like evolutionary algorithms, simulated annealing, and tabu search, are problem-independent
optimization algorithms. They are robust, signifying good, but not optimal, behaviour across many problems.
One of the major drawbacks of using a simple genetic algorithm (SGA) is the lack of underlying mathematical theory.
“Analysis reveals that an exact mathematical analysis of SGA is possible only for
small problems. For a binary problem of size n the exact analysis needs the computation of 2^n equations.”
[Heinz Muhlenbein, 1993. "Mathematical Analysis of Evolutionary Algorithms for Optimization"]
One of the reasons for the inherent difficulty of mathematical analysis of genetic algorithms is that a coherent
mathematical model of the genetic algorithm would have to encompass the varying effects of the different, and sometimes dynamic,
mixes of the probabilistic genetic operators: mutation, recombination, crossover, and the selection scheme.
The consequence of this is that we are currently incapable of making mathematically sound
a priori estimates, and thus we cannot answer questions like:
How likely is it that the n'th iteration of the algorithm will find the optimum?
What is the approximate time until a global optimum is found?
How many generations must we wait before finding a solution that fits within
a certain performance confidence interval?
How much variance is there in the measures from run to run?
How much are the results affected by changes in the genetic parameters, like population size, mutation rates, selection schemes, and
crossover rates and types?
Defining convergence for genetic algorithms is not straightforward. Because genetic algorithms are parallel stochastic search
procedures, the results will be stochastic and unpredictable.
A suitable definition of convergence for genetic algorithms is: "An SGA will normally produce populations with consecutively higher
fitness until the fitness increase stops at an optimum. Whether this is a local or a global optimum
cannot be determined unless one knows the underlying functions of the problem space."
"Normally" here signifies an algorithm with properly tuned genetic parameters.
How, then, can one properly tune the genetic parameters?
The answer is through trial and error: by tuning the parameters and observing the
convergence rates and the variance in the results, one can develop a hands-on understanding of the problem and thereby find good
parameters. Finding good parameters involves producing a good mixture between the exploratory pressures of mutation and crossover
and the exploitative pressure of selection. Too much exploitation leads to premature convergence (crowding). Too little exploitation
reduces the algorithm to pure random search.
Due to the lack of underlying mathematical theory,
no theory has yet been developed to compute the Vapnik-Chervonenkis dimension or other performance bounds.
Consequently, convergence rates cannot be estimated, and confidence intervals for the probability of reaching
Pareto-optimal results within a certain number of iterations cannot be established. Likewise, theory for defining halting criteria,
for knowing when continued search is futile, is also lacking. As a consequence, there exists no algorithm to determine whether the
search converges to a local or a global optimum.
Nevertheless, genetic algorithms have a remarkable ability to escape local optima and converge unhindered towards the global
optimum [Ralf Salomon, 1995. "Evaluating Genetic Algorithm performance", Elsevier Science].
Specialised genetic algorithm implementations, as opposed to the simple genetic algorithm (SGA), solve one or more of the
theoretical shortcomings discussed above, but the specialisations restrict the algorithm, modify it, or impose certain
assumptions on the fitness landscape, the search space, or the fitness function in ways not compatible with our model.
Three major approaches to standard performance measures for genetic algorithms
have been undertaken: linkage, evolutionary multiobjective optimization, and genetic algorithms for function optimization.
But none of these are compatible with our model.
Linkage.
Holland (Holland, 1975) suggested that operators learning linkage information to recombine alleles might be necessary for genetic
algorithm success. Linkage is the concept of chromosome sub parts that have to stick together to secure transfer of high fitness
levels to the offspring. If linkage exists between two genes, recombination might result in low fitness if those two genes are not
transferred together. A group of highly linked genes forms a linkage group, or a building block.
An analytical model of time to convergence can then be derived from the linkage model.
Mapping out the linkages in our model is not trivial, as our model has nondeterministic characteristics. It would be possible to make a
nondeterministic, probabilistic mapping of the linkage groups, but this could be detrimental to the mathematical model of time to
convergence. It is not clear how that model should be modified to encompass nondeterministic linkage groups.
[Goldberg, Yassine, & Chen, 2003. “DSM clustering in DSMDGA” ] & [Goldberg & Miller, 1997. “The genetic ordering in LLGA” ]
Evolutionary Multiobjective Optimization.
Evolutionary multiobjective optimization computes convergence bounds by assuming that a Pareto-optimal set can be defined. No
Pareto-optimal set can be defined in our model, as the underlying functions are neither measurable nor approximable to a degree of
accuracy where Pareto-optimal sets are definable.
Genetic Algorithms for Function Optimization.
The theoretical understanding of the properties of genetic algorithms used for function optimization depends on transient
Markov chain analysis and relies on the presence of a fitness function. In our model the fitness function is not mathematically
definable in terms of the control input (limb movements), as the fitness function is merely a sample of travelled distance.
Inductive bias.
The inductive bias of the genetic algorithm is a result of the implementation in question; its determinants are: the fitness function,
the representation of the problem as a search space, the probabilistic genetic operators (selection, crossover, and mutation), and the
size of the population.
System description.
The Tripod is a construct with a body and three legs; each leg has two joints. The Tripod lives in an OpenGL-based 3D world simulation,
modelled with correct physics, including gravity, friction, time, force, and velocity. The 3D world simulator (Breve) and the
programming language (Steve) are made by Jon Klein.
The problem domain.
* The definition of the problem space refers to the Tripod_B edition. See user guide.
The Tripod agent produces many different movement patterns; this is how the agent learns, by trial and error. Each
movement pattern can be seen as one hypothesis out of the vast hypothesis space of all possible movement patterns. This space is
the search space for the discovery of optimal movement patterns.
The hypothesis space is: 630^6 * 500^6 + 200^6 = 9.769 * 10^32.
The Tripod has three legs with two joints each, and each joint is operated by 3 variables.
The 3 variables are the joint angle [±360],
the local velocity of joint movement [±200], and the global velocity of joint movement [±250].
These are represented as floats with 6 digits of precision in the program,
3.149879 being an example of the angle variable. But only the first two digits have a certain effect.
The problem space depends on a physics simulation which employs a random number generator
and produces rounding errors. The simulation behaves in stochastic ways that cannot be predicted; a specific action can cause more
than one result, thus our problem domain has nondeterministic characteristics.
The effect of a movement depends on the previous state space;
therefore it can be seen as a transient Markov chain.
Some movements are self-conflicting, thus the problem space is also disjoint.
The underlying functions are not known, ergo we deal with a black-box system.
Many important parameters are not supplied to the algorithm, like the angle to the ground
or the speed and direction of the Tripod itself.
The problem space can be classified as a multiobjective optimization problem,
where several interdependent functions affect the result.
Given that this is a complex problem domain (defined below),
the genetic algorithm seems a valid choice, as it is classified as a general robust learning algorithm (meaning it will work on
nearly all learning problems). The "No Free Lunch" theorem [Wolpert & Macready, 1995] clearly states that a problem-domain-specific
algorithm would be preferable, but in our instance no such algorithm is available, as the underlying functions are not known, the
problem domain is nondeterministic, and this problem domain combines time dependency (transient Markov chains) with multiobjective
optimization complexity.
This problem domain can be classified as an NP-hard search problem.
Reason: if we continued to add joints to the legs, the algorithm execution time would increase by more than O(n^k) where k is a
constant and n is the complexity. Example: if n were the number of joints, k would not be constant. If you added a joint to each leg
(n = n + 3), the search space would increase by a factor of about 10^17:
From 630^6 * 500^6 + 200^6 = 9.769 * 10^32 to 630^9 * 500^9 + 200^9 = 3.053 * 10^49.
In other words, the growth rate of the search space is a function of the size of the input instance, and in our example this function
grows not linearly but exponentially with the input instance.
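The arithmetic above can be checked by evaluating the paper's own formula directly; a minimal Python sketch (630^j * 500^j + 200^j, with j the number of joints, is taken from the text):

```python
def search_space(joints):
    # The paper's formula: 630^j * 500^j + 200^j for j joints.
    return 630**joints * 500**joints + 200**joints

print(f"{search_space(6):.3e}")   # ~9.769e+32
print(f"{search_space(9):.3e}")   # ~3.053e+49
print(f"growth factor: {search_space(9) / search_space(6):.1e}")  # ~3.1e+16
```

The growth factor of roughly 3 * 10^16 is what the text rounds to "a factor of 10^17".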
Complex.
A conceptual whole made up of sub parts that relate in intricate ways, not readily analyzable.
NP-hard
Definition: The complexity class of decision problems that are intrinsically harder than those that can be solved by a nondeterministic
Turing machine in polynomial time. The decision version of a combinatorial optimization problem is typically proved NP-hard by showing
that it belongs to the class of NP-complete problems, which includes well-known problems such as the travelling salesman problem and
the bin packing problem.
Polynomial time
Definition: When the execution time of a computation, m(n), is no more than a polynomial function of the problem size, n.
More formally m(n) = O(n^k) where k is a constant.
Nondeterministic Turing machine
Definition: A Turing machine which has more than one next state for some combinations of contents of the current cell and current
state.
GP-hard.
Our problem domain can be defined as GP-hard, because it can be defined as a moving-needle-in-the-haystack problem. This is so
because the optimal movement is a result of the previous domain state combined with stochastic effects; thus it has nondeterministic
properties and differs as a function of time / domain state. Definition of GP-hard: a problem domain where no choice of
genetic algorithm properties, representations, and operators exists that makes the problem easy for a genetic algorithm to solve.
Implementation.
The learning algorithm.
The genetic algorithm uses trial and error in intelligent ways, forming emergent patterns in global hyperspace through actions in local
hyperspace. Hyperspace is the space of all possible permutations of the control variables (the variables that compose the
hypothesis); in our example we have 13 elements, and our hyperspace is therefore of 13 dimensions.
The Fitness function.
The fitness function evaluates the result of 3 legs moving concurrently.
Each leg movement affects the effect of the other legs' movements, thus the moving Tripod is a system of nonlinear effects
amid the legs, iterated through time; therefore our system can be described as a nonlinear dynamical system.
Because the physics engine that performs the simulation works in stochastic ways, the exact same movement of the legs from the
exact same velocity and direction state is not guaranteed to produce the exact same effect. One reason for the stochasticity is
rounding errors, another is the timing functions, and a third is the random number generator.
As there exists no determinable underlying function or functional, and the system is obviously too complex to approximate, the best
way to produce a fitness function is to base it on some external factor with some relevance to the success or failure of walking;
measuring the distance travelled is therefore an ample choice.
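In the implementation, the fitness is simply the norm of the Tripod's final position, smoothed against the previous sample. A minimal sketch follows; the smoothing mirrors the change-Genomes method in Appendix D, while the 3D positions themselves of course come from the Breve simulation, not from this sketch:

```python
import math

def fitness(final_position):
    # Distance travelled from the starting coordinate (the origin)
    # after 30 simulated seconds.
    x, y, z = final_position
    return math.sqrt(x * x + y * y + z * z)

def update_accumulated(accumulated, current):
    # The smoothing used in change-Genomes:
    # new = (1 * accumulated + 2 * current) / 3
    return (accumulated + 2 * current) / 3

print(fitness((3.0, 0.0, 4.0)))        # 5.0
print(update_accumulated(10.0, 4.0))   # 6.0
```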
General Guideline:
The anticipated structure of a solution to a given problem is broken down into as many small parts as possible; the algorithm then
searches for the optimal solution as the optimal combination of the different solution sub-parts. This search space can be vast,
resulting in very long search times. The genetic algorithm is capable of performing these vast global searches more efficiently than
random search.
How it works
Gradually some individuals pick up useful traits. These traits are picked up from all over
the global search space; whenever useful traits are discovered, the local search area they arose from is investigated further by
keeping the gene string that contains the coordinates of this hyperlocation alive in future generations. The best traits are then
recombined when the parents reproduce and make children with possibly even more successful hyperlocations, and the population
slowly converges towards an optimal hyperlocation. The optimal hyperlocation is the gene string that contains
the best candidate solution.
Exemplification.
GA = (F, P, R, M, C, N)
GA = The genetic algorithm.
F = Fitness function.
P = Population size.
H = Hypothesis, represented by a vector of floats length N
R = Replacement rate (in percent).
M = Mutation rate (in percent).
C = Crossover probability.
N = Chromosome length (a vector of N floats).
1. Initialise the population with P random vectors of length N.
2. Evaluate each H in P by F.
3. Delete the worst 30% of H in P.
4. Replace the worst 20% by new random chromosomes.
5. Reproduce the best 20% of H in P by crossover.
6. Let the rest of H in P reproduce by crossover at probability C.
7. Mutate the rest of H in P with probability M.
8. Tweak the upper 50-80% of H in P.
Functions:
Random vectors are produced by filling the 13 floats with random numbers using a random number generator.
Crossover is produced by randomly picking 6 of the 13 genes from parent 1 and substituting them with the genes from parent 2.
Mutation is produced by randomly choosing 1 gene and replacing it with a random number.
Selection is done by randomly picking 3 genomes, using 2 of them to reproduce, replacing the third.
Tweaking is done by randomly picking 3 of the 13 genes and changing them by some small random amount. This technique
resembles what is known as "shaking the weights" in neural networks,
and is meant as an aid to get candidates unstuck from local optima.
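The operators and the generational loop above can be sketched in Python. This is a minimal sketch, not the Steve implementation: the percentages are mapped onto the population of 10 used in the simulations, the gene ranges are illustrative, and the fitness callable stands in for the 30-second physics run:

```python
import random

N = 13   # chromosome length: a vector of 13 floats
P = 10   # population size

def random_vector():
    # Illustrative range; the real genes mix angles and velocities.
    return [random.uniform(-3.14, 3.14) for _ in range(N)]

def crossover(parent1, parent2):
    # Randomly pick 6 of the 13 gene positions from parent1,
    # the remaining positions from parent2.
    picks = set(random.sample(range(N), 6))
    return [parent1[i] if i in picks else parent2[i] for i in range(N)]

def mutate(genome):
    # Replace one randomly chosen gene with a fresh random number.
    genome[random.randrange(N)] = random.uniform(-3.14, 3.14)

def tweak(genome):
    # Nudge 3 of the 13 genes by a small amount ("shaking the weights").
    for i in random.sample(range(N), 3):
        genome[i] += random.uniform(-0.05, 0.05)

def step(population, fitness):
    # One generation: evaluate, drop the worst 30%, refill with random
    # genomes, breed the best pair, mutate genomes 5-6, tweak genomes 3-4,
    # keep genomes 0-2 unchanged (the entropy ladder described later).
    ranked = sorted(population, key=fitness, reverse=True)[:7]
    ranked += [random_vector(), random_vector()]
    ranked.append(crossover(ranked[0], ranked[1]))
    for g in ranked[5:7]:
        mutate(g)
    for g in ranked[3:5]:
        tweak(g)
    return ranked
```

With a toy fitness such as minimising the sum of absolute gene values, the best individual never degrades across generations, since genomes 0-2 are carried over untouched.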
Originality.
The reason for this somewhat original approach is that the simulation only
allows testing one candidate at a time. Each test takes 30 simulated seconds, which is approximately 5 real seconds; this means we
are bound by processing time to restrict the population of candidates to an arbitrarily low number.
In the simulations we use a population size of 10. This creates a bias towards small genetic diversity and pushes the population to
premature convergence in the direction of suboptimal local optima. To counter this effect we insert a high percentage
of new random genes in each generation, and we also set the mutation rate unusually high. The crossover rate is lower than normal to
ensure breeding space for evolving individuals. Tweaking is introduced as a means to counter stagnation.
This approach creates very interesting dynamics in our gene pool:
The best genome (nr 0) is ensured crossover with the second best (nr 1), replacing number 7.
Genome x (randomly picked from the population) is reproduced
with a new random genome, replacing nr 8.
Then genome nr 8, which contains 50% random genes, is reproduced with a random genome, replacing nr 9 with a genome containing
75% random genes.
This creates a system where genomes 7-9 are systematically more random.
Genomes 5 and 6 are targeted for mutation. Genomes 3 and 4 are targeted for tweaking. Genomes 0, 1, and 2 are kept unchanged.
This further extends the entropy ladder from 3-9, where each higher number is
successively more random than the one before. This system creates a sliding effect
that makes sure that genomes that are not competitive fall down the ladder, being exposed to consecutively more and more
randomness, which is the only thing that can help them regain competitiveness. The successful genomes, however, move up the
ladder and thus hide from the entropy, preserving their fitness. This creates a king-of-the-hill effect.
Figure 2. Entropy Ladder.
Figure 3. Example Run.
Figure 3 shows an example run. The first number of each column is the fitness, the second number in the column is the id of the
genome; the column position indicates genomes 0-9, with 0 the leftmost one.
Survival of the fittest.
The items marked green have just made a big leap up the ladder;
the red ones have fallen down. The strength of the colour indicates the size of the change.
This chart shows how the dynamics work, with two streams of genes flowing through each other:
the losers flowing out into oblivion, and the winners flowing up into position.
NB: Crossover between floats is performed by letting the child become a random number within the min and max range of the two
parents:
Parent 1 Parent 2 Child
Crossover: 3.020028 -0.382130 2.810501
Crossover: 1.631748 1.229667 1.245619
Crossover: 1.631748 1.229667 1.331516
Crossover: -2.339981 -2.185591 -2.204014
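This per-gene blend rule can be stated in a couple of lines; a sketch consistent with the examples above:

```python
import random

def blend_crossover(gene1, gene2):
    # The child gene is a uniform random number between the two parent genes.
    low, high = min(gene1, gene2), max(gene1, gene2)
    return random.uniform(low, high)

child = blend_crossover(3.020028, -0.382130)
print(child)  # always within [-0.382130, 3.020028]
```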
Evaluating the results.
The model proves powerful and shows fast convergence, performing far better than random search: the fitness
mean of 100 genetic generations reaches 150, while the fitness mean of 100 generations of random search is around 37.
But it seems the convergence is premature.
The implementation is probably a victim of crowding, where above-average individuals
spread their genes so fast to the rest of the population that new genes are blocked from
evolving into competition. This is because the above-average individuals have taken the safe positions
in the entropy ladder with a fitness slightly higher than what can normally be achieved by sheer randomness. The new individuals
carrying vital new gene patterns do not get a chance to evolve, as they are demolished by the high-grade entropy at the unsafe lower
levels of the ladder.
This suggests that the above-average individuals are far enough above average
to suppress random competition, and supports the belief that the algorithm produces very competitive candidate solutions.
The implementation called for a method to directly control the amount of nondeterminism. This was achieved by building a support
platform with legs surrounding the Tripod, keeping it upright and ensuring it does not fall over. Two main editions of the Tripod exist:
Tripod_A and Tripod_B. Tripod_B is without the support structure and exhibits maximum nondeterminism (meaning the maximal
nondeterminism in this model).
When comparing the convergence of models A and B, we clearly see that the maximum-nondeterminism model B behaves much more
chaotically: it takes longer to train, it achieves inferior end results, and it does not quite develop coherent gait patterns.
The degree of nondeterminism can be adjusted by lowering the support structure.
Exploring the effects of varying nondeterminism suggests that, within time constraints, genetic algorithms can only handle a certain
level of nondeterminism before breaking into aimless chaos without converging at the necessary speed.
Given infinite time, only the very tiniest fraction less than total nondeterminism is needed for the algorithm to eventually converge.
Improvements.
The results could be made far better by enlarging the gene pool a hundredfold, from 10 to 1000 individuals.
Alternatively, we could use a tournament solution where the original approach is performed 50 times, followed by
competitions among groups of 10 of the 50 best candidates, ultimately breeding populations of higher and higher fitness. However, this
would increase running times from overnight to the duration of a month.
The problem of crowding can be amended by maintaining diversity within the current
population. This can be done by incorporating density information into the selection process:
an individual's chance of being selected is decreased the greater the density of individuals
in its neighbourhood.
Still, good solutions can be lost due to random events like mutation and crossover. A common way to deal with this problem is to
maintain a secondary population, the so-called archive, to which promising solutions in the population are stored after
each generation. The archive can be integrated into the EA by including archive members in the selection process. Likewise, the
archive can be used to store individuals with exotic gene combinations, conserving the individuals whose genes are farthest from the
density clusters.
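The density penalty described above can be sketched as follows. The neighbourhood radius and the damping formula are illustrative assumptions, not part of the implementation, and the sketch assumes non-negative fitness values (which holds for our distance-based fitness):

```python
import random

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def neighbour_count(genome, population, radius):
    # How many other individuals lie within `radius` of this genome.
    return sum(1 for other in population
               if other is not genome and euclidean(genome, other) < radius)

def density_weighted_select(population, fitnesses, radius=0.5):
    # Selection weight = raw fitness damped by local crowding, so
    # individuals in dense clusters are picked less often.
    weights = [f / (1 + neighbour_count(g, population, radius))
               for g, f in zip(population, fitnesses)]
    return random.choices(population, weights=weights, k=1)[0]
```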
Appendix A. User guide.
Appendix B. Bibliography.
Appendix C. Test runs
Appendix D. Code.
Appendix A. User guide.
On the CD:
The enclosed MPEG movies show the system in use.
The papers folder includes 50 research papers of relevance.
The code is contained in the text file: Tripod_A.tz
To run the software simulation:
Copy the folder breve to C:\, giving
C:\breve
type:
"cmd" at the Windows command Run prompt,
then paste in (use the right mouse button on the DOS prompt):
For Tripod_A:
cd C:\breve\bin
SET BREVE_CLASS_PATH=C:\breve\LIB\CLASSES
breve C:\breve\COMM2C\Tripod_A.tz pico -u
For Tripod_B:
cd C:\breve\bin
SET BREVE_CLASS_PATH=C:\breve\LIB\CLASSES
breve C:\breve\COMM2C\Tripod_B.tz pico -u
For Tripod_B random search:
cd C:\breve\bin
SET BREVE_CLASS_PATH=C:\breve\LIB\CLASSES
breve C:\breve\COMM2C\Tripod_B_Random_Search.tz pico -u
The .tz files are the code in text format.
Commands:
Keypress : 1 : Performs bullet time pan.
Mouse : click-n-hold-left-mouse-button & move mouse: Looks around.
Mouse : Press F2 & click-n-hold-left-mouse-button & push mouse up and down : Zooms.
Mouse : Right click : Brings up command popup.
Mouse : Left click : Selects tripod & draws with sketch style.
Appendix B. Bibliography.
Holland, (1975). “Adaptation in Natural and Artificial Systems.” University of Michigan.
T. C. Fogarty, (1989).
”Varying the probability of mutation in the genetic algorithm. “
In Schaffer, Proceedings of the Third International Conference on Genetic Algorithms and
their Applications, pp. 104–109.
Grefenstette & Baker, (1989).
“How genetic algorithms work: a critical look at implicit parallelism”
In Proceedings of the Third International Conference on Genetic Algorithms.
W Hart & R Belew, (1991).
”Optimizing an arbitrary function is hard for the genetic algorithm.”
In Proceedings of the Fourth International Conference on Genetic Algorithms.
Heinz Muhlenbein, (1993).
“Mathematical Analysis of Evolutionary Algorithms for Optimization”
GMD – Schloss Birlinghoven,Germany
Muehlenbein & Asoh (1994).
”On the mean convergence time of evolutionary algorithms without selection and mutation. “
Parallel Problem Solving from Nature
Mitchell, Holland, & Forrest, (1994).
“When will a genetic algorithm outperform hill climbing? Advances in Neural Information Processing”
Ralf Salomon, (1995). “Evaluating Genetic Algorithm performance” Elsevier Science.
Wolpert & Macready, (1995). The "No Free Lunch" theorem.
Goldberg & Miller, (1997). “The genetic ordering in LLGA”.
Prugel-Bennet & J.L. Shapiro, (1997).
”An analysis of a genetic algorithm for simple randomising systems”
Physica D, 104:75–114,
G. Harik, (1999).
“Linkage learning via probabilistic modeling in the ecga.”
Technical Report IlliGal 99010, University of Illinois, Urbana-Champaign.
Droste, Jansen, & Wegener, (1999).
“Perhaps not a free lunch but at least a free appetizer.”
In GECCO-99: Proceedings of the Genetic and Evolutionary Computation Conference.
Heckendorn & Whitley, (1999).
”Polynomial time summary statistics for a generalization”
of MAXSAT. In GECCO-99: Proceedings of the Genetic and Evolutionary
Computation Conference.
M. Vose, (1999). “The Simple Genetic Algorithm: Foundations and Theory.” MIT Press, Cambridge.
Goldberg, (2001). "Genetic Algorithms in Search, Optimization and Machine Learning."
Goldberg, Yassine, & Chen, (2003). “DSM clustering in DSMDGA”.
Gunter Rudolph & Alexandru Agapie
“Convergence Properties of Some Multi-Objective Evolutionary Algorithms”
Appendix C. Test runs.
Test run of Tripod_A, low degree of nondeterminism:
: 58 5: 34 8: 30 9: 14 7: 13 3: 13 1: 13 2: 9 4: 3 0: 2 6
: 79 5: 45 8: 39 9: 22 6: 21 3: 19 7: 18 4: 17 1: 14 2: 3 0
: 86 5: 50 6: 50 8: 44 3: 43 9: 38 2: 18 4: 9 1: 9 7: 3 0
: 89 5: 60 2: 53 6: 51 8: 45 9: 44 3: 28 7: 24 1: 22 4: 17 0
: 90 5: 81 8: 54 6: 51 3: 48 4: 43 2: 43 9: 27 0: 22 1: 22 7
: 90 5: 67 2: 67 8: 67 4: 53 6: 35 3: 28 9: 19 0: 15 1: 12 7
: 89 8: 75 4: 57 2: 57 5: 52 6: 28 3: 28 1: 20 0: 17 9: 9 7
: 82 4: 79 8: 67 2: 47 3: 44 6: 35 5: 34 1: 16 9: 11 0: 6 7
: 90 5: 83 4: 46 2: 44 7: 40 6: 40 8: 36 9: 30 0: 22 1: 18 3
: 81 5: 66 2: 65 7: 42 8: 42 9: 37 3: 35 4: 20 6: 19 1: 15 0
: 100 5: 68 7: 61 4: 48 9: 42 2: 33 6: 32 3: 20 8: 8 1: 7 0
: 124 5: 90 6: 71 7: 61 2: 53 4: 51 9: 43 1: 35 0: 24 3: 13 8
: 123 5: 103 6: 72 2: 72 0: 70 7: 57 4: 54 1: 38 9: 35 3: 23 8
: 87 0: 79 9: 70 1: 66 7: 55 4: 55 5: 54 2: 43 8: 38 6: 22 3
: 109 0: 81 8: 77 9: 75 1: 58 4: 50 2: 48 7: 47 3: 23 5: 19 6
The algorithm-specific code is the only code that is fully commented,
as the other code is regarded as trivial.
File: Tripod_A.tz :
# Tripod
@use PhysicalControl.
@use Link.
@use File.
@use Genome.
@use Shape.
@use Stationary.
@use MultiBody.
@define SPEED_K 19.
Controller Walker.
PhysicalControl : Walker {
+ variables:
SelectedGenomes, GenePopulation (list).
currentSelectedGenome (int).
Tripod (object).
equalTime (float).
locked (int).
lockMenu (object).
cloudTexture (int).
+ to init:
floorShape (object).
floor (object).
number (int).
item (object).
file (object).
equalTime =0.
self disable-freed-instance-protection.
locked = 0.
self set-random-seed-from-dev-random.
self enable-lighting.
self enable-smooth-drawing.
self move-light to (0, 20, 0).
# Create the floor for the Tripod to walk on.
floorShape = (new Shape init-with-cube size (1000, 2, 1000)).
floor = new Stationary.
floor register with-shape floorShape at-location (0, 0, 0).
floor catch-shadows.
floor set-color to (1.0, 1.0, 1.0).
cloudTexture = (self load-image from "images/clouds.png").
self enable-shadow-volumes.
self enable-reflections.
self half-gravity.
self set-background-color to (.4, .6, .9).
self set-background-texture to cloudTexture.
# Create the Tripod.
Tripod = new TripodTemplate.
Tripod move to (0, 6, 0).
self offset-camera by (3, 13, -13).
self watch item Tripod.
GenePopulation = 10 new genoms.
# Create list GenePopulation
foreach item in GenePopulation: {
(item set-number to number).
# print "Genome ", (GenePopulation{number} get-number), (GenePopulation{number} get-distance).
number += 1.
}
# Starts the program
self pick-Genomes.
# set up the menus...
lockMenu = (self add-menu named "Lock Genome" for-method "toggle-Genome-lock").
self add-menu-separator.
self add-menu named "Save Current Genome" for-method "save-current-genome".
self add-menu named "Load Into Current Genome" for-method "load-into-current-genome".
# schedule the first Genome change and we're ready to go.
self schedule method-call "change-Genomes" at-time (self get-time) + 30.0.
self display-current-Genome.
+ to display-current-Genome:
currentNumber (int).
currentNumber = (SelectedGenomes{currentSelectedGenome} get-number).
self set-display-text to "Genome #$currentNumber" at-x -.95 at-y -.9.
+ to iterate:
SelectedGenomes{currentSelectedGenome} control robot Tripod at-time ((self get-time) - equalTime + 1).
super iterate.
+ to pick-Genomes:
sort GenePopulation with compare-distance.
SelectedGenomes{0} = GenePopulation{0}.
SelectedGenomes{1} = GenePopulation{1}.
SelectedGenomes{2} = GenePopulation{2}.
SelectedGenomes{3} = GenePopulation{3}.
SelectedGenomes{4} = GenePopulation{4}.
SelectedGenomes{5} = GenePopulation{5}.
SelectedGenomes{6} = GenePopulation{6}.
SelectedGenomes{7} = GenePopulation{7}.
SelectedGenomes{8} = GenePopulation{8}.
SelectedGenomes{9} = GenePopulation{9}.
currentSelectedGenome = 0.
+ to change-Genomes:
newGenome (int).
newOffset (vector).
myMobile (list).
Current_distance (float).
Acummulated_distance (float).
New_Acummulated_distance (float).
Current_distance = |(Tripod get-location)|.
Acummulated_distance = SelectedGenomes{currentSelectedGenome} get-distance.
New_Acummulated_distance =((1 * Acummulated_distance) + 2 * Current_distance) / 3.
SelectedGenomes{currentSelectedGenome} set-distance to New_Acummulated_distance .
free Tripod.
Tripod = new TripodTemplate.
Tripod move to (0, 6, 0).
self offset-camera by (3, 5, -23).
self watch item Tripod.
Tripod set-color.
equalTime = 0.
equalTime = (self get-time).
currentSelectedGenome += 1.
if currentSelectedGenome > 9: {
self breed-new-genoms.
self pick-Genomes.
}
newGenome = (SelectedGenomes{currentSelectedGenome} get-number).
# schedule a new Genome change in 30 seconds.
self schedule method-call "change-Genomes" at-time (self get-time) + 30.0.
self display-current-Genome.
+ to breed-new-genoms:
Testnum (int).
number(int).
GenomeNr (int).
Fitness (int).
random_select (int).
item (object).
Da_Genome (object).
sort SelectedGenomes with compare-distance.
number =-1.
foreach item in SelectedGenomes: {
number += 1.
GenePopulation{number} = SelectedGenomes{number}.
Fitness = SelectedGenomes{number} get-distance.
GenomeNr = SelectedGenomes{number} get-number.
printf ":", Fitness, GenomeNr.
}
# secure a flow of fresh genetic material
GenePopulation{9} randomize.
GenePopulation{7} randomize.
# breed
SelectedGenomes{0} True_breed with GenePopulation{(random[9])} to-child GenePopulation{8}.
GenePopulation{8} True_breed with GenePopulation{7} to-child GenePopulation{9}.
SelectedGenomes{0} True_breed with SelectedGenomes{1} to-child GenePopulation{7}.
random_select = random[9].
SelectedGenomes{random_select} True_breed with SelectedGenomes{random[9]} to-child GenePopulation{(random[9])}.
# mutate
(GenePopulation{5} get-genome) mutate.
(GenePopulation{6} get-genome) mutate.
random_select = random[9].
(GenePopulation{random_select} get-genome) mutate.
# Tweak
(GenePopulation{3} get-genome) Tweak.
(GenePopulation{4} get-genome) Tweak.
random_select = random[9].
(GenePopulation{random_select} get-genome) Tweak.
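The generation step above combines elitism (the ten ranked survivors are copied back), injection of fresh random genomes into two slots, and crossovers biased toward the best performers; the steve code adds one more crossover between two random survivors, and the mutate and Tweak operators then run on further slots. A condensed Python sketch of the core of that scheme, under the assumption that `randomize` and `crossover` behave like their steve counterparts (all names here are illustrative):

```python
import random

def next_generation(selected, randomize, crossover):
    """Sketch of breed-new-genoms: elitist copy-back, two
    randomized slots for fresh genetic material, then
    crossovers that favour the top-ranked genomes."""
    pop = list(selected)                 # elitism: survivors carry over
    pop[9] = randomize()                 # fresh random genomes keep
    pop[7] = randomize()                 # diversity in the pool
    pop[8] = crossover(selected[0], random.choice(pop))  # best x random
    pop[9] = crossover(pop[8], pop[7])
    pop[7] = crossover(selected[0], selected[1])         # best two parents
    return pop
```

Note that slots 0 through 6 survive untouched, so a good gait can never be lost between generations; only the weakest slots are recycled.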
+ to compare-distance of a (object) with b (object):
result (float).
result = (b get-distance) - (a get-distance).
return result.
# the following methods are accessed from the simulation menu.
+ to toggle-Genome-lock:
if locked == 1: {
locked = 0.
Tripod center.
self schedule method-call "change-Genomes" at-time (self get-time) + 30.0.
lockMenu uncheck.
} else {
locked = 1.
lockMenu check.
}
+ to save-current-genome:
(SelectedGenomes{currentSelectedGenome} get-genome) save-with-dialog.
+ to load-into-current-genome:
(SelectedGenomes{currentSelectedGenome} get-genome) load-with-dialog.
+ to catch-key-2-down:
self save-as-xml file "world1.xml" .
+ to catch-key-3-down:
self save-as-xml file "world2.xml" .
+ to catch-key-1-down:
newOffset (vector).
newOffset = random[(90, 10, 90)] + (-15, 1, -15).
if |newOffset| < 14: newOffset = 14 * newOffset/|newOffset|.
self bullet-pan-camera-offset by newOffset steps 100.
newOffset = random[(90, 10, 90)] + (-15, 1, -15).
if |newOffset| < 14: newOffset = 14 * newOffset/|newOffset|.
self bullet-pan-camera-offset by newOffset steps 100.
newOffset = random[(90, 10, 90)] + (-15, 1, -15).
if |newOffset| < 14: newOffset = 14 * newOffset/|newOffset|.
self bullet-pan-camera-offset by newOffset steps 100.
# look at self from newOffset.
}
Object : genoms {
+ variables:
distanceTraveled (float).
genome (object).
number (int).
+ to set-number to n (int):
number = n.
+ to get-number:
return number.
+ to init:
genome = new TripodGenome.
self randomize.
+ to randomize:
genome randomize.
+ to get-genome:
return genome.
+ to breed with otherGenome (object) to-child child (object):
(child get-genome) crossover from-parent-1 (otherGenome get-genome) from-parent-2 (self get-genome).
+ to True_breed with otherGenome (object) to-child child (object):
Genome_1 (object).
Genome_2 (object).
Genome_3 (object).
Returnee (float).
Returnee2 (float).
Returnee3 (float).
n, nn (int).
Genome_1 = (otherGenome get-genome).
Genome_2 = (self get-genome).
Genome_3 = (child get-genome).
# Copying Genome_1 to Genome_3
for n=0, n<13, n+=1: {
Returnee = Genome_1 ReturnGeene to n .
Genome_3 SetGeene to Returnee to n.
}
# Producing Crossover
for n=0, n<7, n+=1: {
nn = (random[12]).
Returnee = Genome_2 ReturnGeene to nn .
Returnee2 = Genome_1 ReturnGeene to nn .
# blend: pick a value lying between the two parents' genes.
if Returnee > Returnee2: {
Returnee3 = Returnee2 + random[(Returnee - Returnee2)].
} else {
Returnee3 = Returnee + random[(Returnee2 - Returnee)].
}
# write the blended value, not the raw parent gene, into the child.
Genome_3 SetGeene to Returnee3 to nn.
# print "Crossover: ", Returnee , Returnee2 , Returnee3 .
}
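The arithmetic in True_breed above amounts to a blend (intermediate) crossover: the child starts as a copy of one parent, and for each chosen locus it then receives a value lying between the two parents' genes. A Python sketch of that operator, assuming 13-gene genomes as in TripodGenome (function names are illustrative):

```python
import random

def blend_gene(a: float, b: float) -> float:
    """Value drawn uniformly between the two parent genes."""
    lo, hi = min(a, b), max(a, b)
    return random.uniform(lo, hi)

def blend_crossover(parent1, parent2, n_loci=7):
    """Copy parent1, then overwrite up to n_loci randomly
    chosen loci with a blend of the parents' genes there
    (the same locus may be picked more than once)."""
    child = list(parent1)
    for _ in range(n_loci):
        i = random.randrange(len(child))
        child[i] = blend_gene(parent1[i], parent2[i])
    return child
```

Because the blended gene always lies between the parents' values, this operator interpolates rather than swaps, which suits the real-valued genes of the gait genome better than bit-string crossover would.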
+ to control robot theRobot (object) at-time t (float):
theRobot set-joint-velocity-1 to SPEED_K * (genome calculate-torque-1 at (t )).
theRobot set-joint-velocity-2 to SPEED_K * -(genome calculate-torque-2 at (t )).
theRobot set-joint-velocity-3 to SPEED_K * -(genome calculate-torque-3 at (t )).
theRobot set-joint-velocity-4 to SPEED_K * -(genome calculate-torque-4 at (t )).
theRobot set-joint-velocity-5 to SPEED_K * -(genome calculate-torque-5 at (t )).
theRobot set-joint-velocity-6 to SPEED_K * (genome calculate-torque-6 at (t)).
+ to set-distance to value (float):
distanceTraveled = value.
+ to get-distance:
return distanceTraveled.
}
Genome : TripodGenome {
+ variables:
genomeData (13 floats).
+ to randomize:
genomeData[0] = random[5.0] - 2.5.
genomeData[1] = random[2.0] - 1.0 .
genomeData[2] = random[2.0] - 1.0 .
genomeData[3] = random[2.0] - 1.0 .
genomeData[4] = random[2.0] - 1.0 .
genomeData[5] = random[2.0] - 1.0 .
genomeData[6] = random[2.0] - 1.0 .
genomeData[7] = random[6.3] - 3.15.
genomeData[8] = random[6.3] - 3.15.
genomeData[9] = random[6.3] - 3.15.
genomeData[10] = random[6.3] - 3.15.
genomeData[11] = random[6.3] - 3.15.
genomeData[12] = random[6.3] - 3.15.
+ to PrintGenome:
n (int).
for n=0, n<13, n+=1: {
print "Genome: ", n, " ", genomeData[n].
}
+ to ReturnGeene to value (int):
# print "return genomeData: ", genomeData[value].
return genomeData[value] .
+ to SetGeene to number (int) to value (float):
# print "SetGeene : ", number,value.
genomeData[number] = value.
+ to calculate-torque-1 at time (float):
return .5 * (sin(genomeData[0] * (time + genomeData[7])) - (genomeData[1])).
+ to calculate-torque-2 at time (float):
return .5 * (sin(genomeData[0] * (time + genomeData[8])) - (genomeData[2])).
+ to calculate-torque-3 at time (float):
return .5 * (sin(genomeData[0] * (time + genomeData[9])) - (genomeData[3])).
+ to calculate-torque-4 at time (float):
return .5 * (sin(genomeData[0] * (time + genomeData[10])) - (genomeData[4])).
+ to calculate-torque-5 at time (float):
return .5 * (sin(genomeData[0] * (time + genomeData[11])) - (genomeData[5])).
+ to calculate-torque-6 at time (float):
return .5 * (sin(genomeData[0] * (time + genomeData[12])) - (genomeData[6])).
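Each calculate-torque method above evaluates the same waveform: gene 0 is a frequency shared by all six joints, genes 7-12 are per-joint phase offsets, and genes 1-6 are per-joint bias terms, so a single genome encodes a coordinated sinusoidal gait. The waveform in Python (parameter names are illustrative):

```python
import math

def joint_torque(freq: float, phase: float, bias: float, t: float) -> float:
    """0.5 * (sin(freq * (t + phase)) - bias): one shared
    frequency plus a per-joint phase shift and offset is all
    a genome needs to encode a rhythmic, coordinated gait."""
    return 0.5 * (math.sin(freq * (t + phase)) - bias)
```

Sharing the frequency across joints keeps the limbs synchronised; evolution only has to tune the relative phases and offsets to turn oscillation into forward motion.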
+ to Tweak:
n (int).
n = random[12].
if n < 7: genomeData[n] = genomeData[n] + ((random[2.0] - 1.0)/30).
if n > 6: genomeData[n] = genomeData[n] + ((random[6.3] - 3.15)/30).
if n == 0: genomeData[n] = genomeData[n] + ((random[5.0] - 2.5)/30).
n = random[12].
if n < 7: genomeData[n] = genomeData[n] + ((random[2.0] - 1.0)/10).
if n > 6: genomeData[n] = genomeData[n] + ((random[6.3] - 3.15)/10).
if n == 0: genomeData[n] = genomeData[n] + ((random[5.0] - 2.5)/10).
+ to Shake:
n (int).
for n=0, n<12, n+=1: {
if n < 7: genomeData[n] = genomeData[n] + ((random[2.0] - 1.0)/30).
if n > 6: genomeData[n] = genomeData[n] + ((random[6.3] - 3.15)/30).
if n == 0: genomeData[n] = genomeData[n] + ((random[5.0] - 2.5)/30).
}
+ to mutate:
n (int).
20.
n = random[12].
if n < 7: genomeData[n] = random[2.0] - 1.0.
if n > 6: genomeData[n] = random[6.3] - 3.15.
if n == 0: genomeData[n] = random[5.0] - 2.5.
}
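mutate, Tweak and Shake differ only in scale: mutate redraws one gene from its full range, Tweak nudges a gene by a small fraction of that range, and Shake nudges every gene at once. A Python sketch of the two main operators, with the per-locus ranges taken from randomize (list and function names are illustrative):

```python
import random

# per-locus ranges from randomize: index 0 is the shared
# frequency, 1-6 are joint biases, 7-12 are phase offsets.
RANGES = [(-2.5, 2.5)] + [(-1.0, 1.0)] * 6 + [(-3.15, 3.15)] * 6

def mutate(genome):
    """Coarse operator: redraw one random gene from its full range."""
    n = random.randrange(len(genome))
    lo, hi = RANGES[n]
    genome[n] = random.uniform(lo, hi)

def tweak(genome, damping=30):
    """Fine operator: nudge one random gene by a small fraction
    of its range, giving local hill-climbing around a gait that
    already works."""
    n = random.randrange(len(genome))
    lo, hi = RANGES[n]
    genome[n] += random.uniform(lo, hi) / damping
```

Using both operators lets the search escape local optima with mutate while fine-tuning promising gaits with tweak.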
MultiBody : TripodTemplate {
+ variables:
bodyLink (object).
links (list).
Color1 (vector).
Color2 (vector).
joints (list).
+ to get-root:
return bodyLink.
+ to init:
linkShape, FootShape, SupportShape , lowerLinkShape, bodyShape (object).
self add-menu named "Send to Center" for-method "center".
SupportShape = new Shape.
SupportShape init-with-cube size (.16, 1, .16).
lowerLinkShape = new Shape.
lowerLinkShape init-with-cube size (.26, 2.0, .26).
linkShape = new Shape.
linkShape init-with-cube size (.28, 1.0, .28).
bodyShape = new Shape.
bodyShape init-with-sphere radius (0.5).
FootShape = new Shape.
FootShape init-with-polygon-disk radius (0.5) sides (8) height (0.05).
Color1 = random[(1.0, 1.0, 1.0)].
Color2 = random[(1.0, 1.0, 1.0)].
links = 6 new Links.
joints = 6 new RevoluteJoints.
links{0} set shape linkShape.
links{0} set-color to Color1.
links{2} set shape linkShape.
links{2} set-color to Color1.
links{4} set shape linkShape.
links{4} set-color to Color1.
links{1} set shape lowerLinkShape.
links{1} set-color to (1.0, 1.0, 1.0).
links{3} set shape lowerLinkShape.
links{3} set-color to (0.0, 1.0, 1.0).
links{5} set shape lowerLinkShape.
links{5} set-color to (1.0, 0.0, 1.0).
bodyLink = new Link.
bodyLink set shape bodyShape.
bodyLink set-color to Color2.
joints{0} link parent bodyLink to-child links{0}
with-normal ( 1.5, 0, 1 )
with-parent-point (0.722, 1, 0.552)
with-child-point (0, 2.2, 0).
joints{0} set-joint-limit-vectors min ( 0.7, 0, 0) max ( 1.3, 0, 0).
joints{1} link parent links{0} to-child links{1}
with-normal (-10, 0, 0)
with-parent-point (0, -.5, 0)
with-child-point (0, 1.0, 0).
joints{2} link parent bodyLink to-child links{2}
with-normal (-1.5, 0, 1)
with-parent-point ( -0.722, 1, 0.552)
with-child-point (0, 2.2, 0).
joints{2} set-joint-limit-vectors min ( 0.65, 0, 0) max ( 1.3, 0, 0).
joints{3} link parent links{2} to-child links{3}
with-normal (-10, 0, 0)
with-parent-point (0, -.5, 0)
with-child-point (0, 1.0, 0).
joints{4} link parent bodyLink to-child links{4}
with-normal ( -1.5, 0, -1 )
with-parent-point ( 0 , 1, -0.552)
with-child-point (0, 2.2, 0).
joints{4} set-joint-limit-vectors min ( 0.6, 0, 0) max ( 1.3, 0, 0).
joints{5} link parent links{4} to-child links{5}
with-normal (-10, 0, 0)
with-parent-point (0, -.5, 0)
with-child-point (0, 1.0, 0).
self register with-link bodyLink.
#self rotate around-axis (0, 1, 0) by 1.57.
joints set-double-spring with-strength 30 with-max 1.5 with-min -2 .
joints set-strength-limit to 40.
+ to Re-set:
self unregister.
+ to center:
currentLocation (vector).
currentLocation = (self get-location).
self move to (0, currentLocation::y, 0).
+ to set-color:
Color3 (vector).
Color4 (vector).
Color3 = random[(1.0, 1.0, 1.0)].
Color4 = random[(1.0, 1.0, 1.0)].
links{0} set-color to Color3.
links{2} set-color to Color3.
links{4} set-color to Color3.
links{1} set-color to Color4.
links{3} set-color to Color4.
links{5} set-color to Color4.
bodyLink set-color to Color4.
+ to set-joint-velocity-1 to value (float):
joints{1} set-joint-velocity to value.
+ to set-joint-velocity-2 to value (float):
joints{2} set-joint-velocity to value.
+ to set-joint-velocity-3 to value (float):
joints{3} set-joint-velocity to value.
+ to set-joint-velocity-4 to value (float):
joints{4} set-joint-velocity to -value.
+ to set-joint-velocity-5 to value (float):
joints{5} set-joint-velocity to -value.
+ to set-joint-velocity-6 to value (float):
joints{0} set-joint-velocity to value.
+ to Re-set-shape:
joints{0} set-joint-velocity to 2.
joints{1} set-joint-velocity to 2.
joints{2} set-joint-velocity to 2.
joints{3} set-joint-velocity to 2.
joints{4} set-joint-velocity to 2.
joints{5} set-joint-velocity to 2.
+ to destroy:
free bodyLink.
free links{0} .
free links{2} .
free links{4} .
free links{1} .
free links{3} .
free links{5} .
free joints{0} .
free joints{1} .
free joints{2} .
free joints{3} .
free joints{4} .
free joints{5}.
}