Genetic Algorithms and Particle Swarm Optimisation @ Flash Brighton, 28th July 2009
Presentation Transcript

  • 1. The Genetic Algorithm
  • 2. Learning outcomes - What the ‘GA’ is - How it works (with a demo of an implementation) - Where the ‘GA’ fits into the grand scheme of ‘Evolutionary Computing’ - Touch on some other cool stuff in Evolutionary Computing: PSO, Neural Networks, Neutral Networks
  • 3. Optimisation • A class of problem in which some solutions can be described as ‘better’ than others • Examples include :- - Building a paper aeroplane - Sending a rocket to the moon - Escaping from savannah predators - Choosing the right canine companion for Paris Hilton
  • 4. What variables describe a solution to the problem? e.g. Choosing the right canine companion for Paris Hilton - Fur colour - Fur length - Size to aggression ratio - Upkeep costs (recurrent insurance excess payouts on ‘accidental’ maulings, etc.)
  • 5. Fitness Function Create some function to judge how we differently favour these potential solutions: what are our ‘selection criteria’? - Paris doesn’t care much about mean little dogs - She does desperately want a white one though... - I also have it on good authority that short hair is a high priority; small handbags are particularly in fashion this season, and a fluffy dog simply won’t fit. - It absolutely has to be a dog whose intellect does not eclipse her own... Purpose of function: Derive a score based on how closely each solution matches our selection criteria
  • 6. An algorithm? Computational aim: to develop an iterative procedure to generate new solutions and ‘try them out’ against our fitness criteria - Small smart brown dog is better than big brown dog... - But not so good as small dumb white dog... - etc...
  • 7. Random search? Random search as a possible heuristic? 1. Choose a random solution 2. Score it 3. Repeat (1) Cons :- - How long until we find a suitable solution? - Probability dictates that one day, we’ll get it right, but that could be decades away for a complex problem!
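The random-search heuristic above can be sketched in a few lines of Python. The fitness function and search range here are hypothetical stand-ins, not from the slides:

```python
import random

def fitness(x):
    # Hypothetical toy criterion: solutions score better the closer they are to 3.
    return -(x - 3) ** 2

def random_search(iterations=10000):
    """Steps 1-3 from the slide: pick a random solution, score it, repeat,
    remembering the best candidate seen so far."""
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        candidate = random.uniform(-100, 100)  # 1. choose a random solution
        score = fitness(candidate)             # 2. score it
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```

The number of samples needed grows rapidly with the size of the search space, which is exactly the con the slide points out.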
  • 8. Haris Pilton and the temple of pure evil Imagine a gargantuan temple filled with every possible canine companion that our favourite trashy entertainer may ever possibly own... Along the far wall, subjects are sorted by their fur length, in a perpendicular direction, they are sorted in order of intelligence - smart hairy dogs are physically far from dumb bald dogs. In the Y dimension, the Divine Host consider each mutt by its perceived ‘fitness’ - how much each individual is suited as a ‘solution’ in becoming Paris’ new toy; the dogs are elevated by their score...
  • 9. Search Space What it might look like :-
  • 10. A more realistic search space
  • 11. What strikes you about this picture? - It looks like a landscape - The landscape metaphor is a really powerful concept; it leads us to think spatially about the problem... - We can ‘walk about’ on the landscape by stepping between spatially adjacent solutions - In order to find the best solution, we might simply walk uphill - Think about the landscapes associated with other problems, remembering that ‘geographical’ characteristics are defined by our fitness function: How we define ‘better’ solutions defines how easy it is to find them.
  • 12. Hyperdimensional Perambulation - In ‘higher’ dimensions (4, 5, 6, ... 1000, ... N ) the metaphor holds true, except the landscape might look something more like a piece by M.C. Escher
  • 13. Gradient Ascent Gradient Ascent as a possible heuristic? 1. Choose a random solution 2. Score it 3. Score each adjacent solution 4. ‘Step’ onto the highest solution 5. Repeat (3) Cons :- - Getting stuck in ‘local optima’
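Gradient ascent exploits the landscape metaphor directly: step to whichever neighbour scores best. A minimal one-dimensional sketch, where the step size and the caller-supplied fitness function are assumptions for illustration:

```python
def hill_climb(fitness, start, step=0.1, max_iters=1000):
    """Steps 1-5 from the slide, in one dimension: score both neighbours
    and move to the better one until neither improves on where we are."""
    current = start
    for _ in range(max_iters):
        best = max([current - step, current + step], key=fitness)
        if fitness(best) <= fitness(current):
            return current  # stuck: a local optimum (or, if lucky, the global one)
        current = best
    return current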
  • 14. Evolutionary Computing Introducing some other possible algorithms for searching solution space for the ‘best’ solution to our optimisation problem... Inspiration from nature :- - Evolution - Bird flocking
  • 15. Questions?
  • 16. Genetic Algorithm (‘GA’) John Holland, the ‘father’ of GAs (1960s) - In the real world, we can imagine a population of organisms that live for a time and then die - Some produce offspring; typically 'fitter' individuals within the population produce more offspring, or otherwise raise said offspring to a point at which they in turn can successfully produce offspring of their own - Over successive generations, the make-up of the population changes according to environmental influence (the 'fitness function' is the environment). The species 'adapts' to the conditions - Interesting aside - populations tend to remain adaptable too - the real world presents a fitness function that is continually dynamic; more like the surface of a rough ocean than a terrestrial landscape.
  • 17. Charles Darwin You may recognise this dude from that ‘tenner’ in your pocket. Darwin devised the 'magic ingredients' for evolution by natural selection... - Heredity - parental traits passed on to offspring - Variability - recombination and mutation 'mix things up' (random and undirected) - Selection - The environment is harsh; some kids don't make it (non-random and directed)
  • 18. Simulating Evolution - Construct a population of agents or 'solutions' within our search space that live for a time and then die - Individuals live long enough to be assessed by our fitness function - SELECTION: Choose the fittest individuals - HEREDITY: Recombine parental traits into 'offspring' - VARIATION: Mutate each offspring with some small probability - Replace the old population with the offspring and repeat.
  • 19. Recombination (1) Some biology: Organisms are encoded by DNA - polymer chains made from Nucleotide units ATGC ...DNA is transcribed into complementary RNA strands AUGC... ...Triplets of bases encode Amino Acids... ...Amino Acids form polymer chains with structure (primary, secondary, tertiary, quaternary) which roll up and join together to form proteins... ...Proteins form cells and chemical components constituting... ...Organs, which constitute organisms!
  • 20. Recombination (2) Recombination acts at the level of DNA during the process of Meiosis - the formation of Gametes, or 'eggs 'n sperm' Recombination works by 'crossing over' a strand of DNA, mixing Mum's genes with Dad's at choice 'loci' Key point: recombination creates offspring that are similar to parents
  • 21. Digital Recombination - In GAs we simulate this in a much simpler way - encode a parent's solution variables into a string and, at some designated (or randomly chosen) point, swap in another parent's variables, throwing away what's been replaced - Coming up with a string encoding can actually be quite hard, as I'll demonstrate when we get to the Chinchillas. - Some people have luck with bit-string encodings; a real number (e.g. 20) can be represented in string form by turning it into a binary representation ( 10100 ); this can feasibly be recombined against other bit strings - I dislike bit strings because they're not ‘organic’ like real numbers, and they have other issues, e.g... - Hamming Cliffs - small changes in the bit string can represent huge leaps in decimal space, and lots of mutations are required to realise an adjacent step in real-numbered terms.
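Single-point crossover on a real-valued encoding (the representation the slide prefers) takes only a couple of lines - a sketch, assuming both parents are equal-length lists of numbers:

```python
import random

def crossover(mum, dad):
    """Swap in the other parent's variables from a randomly chosen locus
    onwards, discarding what's been replaced."""
    point = random.randrange(1, len(mum))  # choose a crossover locus
    return mum[:point] + dad[point:]       # offspring resembles both parents
```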
  • 22. Mutation Much easier: for each variable within the encoded string, ‘tweak’ the value (by some random amount) with some probability.
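That per-variable tweak can be written straight from the slide's description; the rate and step size here are illustrative assumptions:

```python
import random

def mutate(genome, rate=0.1, step=0.5):
    """With probability `rate`, nudge each variable by a random amount
    drawn uniformly from [-step, step]; otherwise leave it alone."""
    return [g + random.uniform(-step, step) if random.random() < rate else g
            for g in genome]
```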
  • 23. Pseudocode Generate a random initial population While( ! bored ) { evaluate population with fitness function select the fittest individuals as parents generate offspring via recombination mutate offspring with some probability replace parents with offspring }
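Fleshing the pseudocode out into runnable form might look like this - a minimal sketch, assuming real-valued genomes, truncation selection (the fittest half become parents), and a caller-supplied fitness function to maximise; all parameter values are assumptions:

```python
import random

def run_ga(fitness, genome_len=3, pop_size=30, generations=100,
           mutation_rate=0.2, mutation_step=0.3):
    """Minimal generational GA: evaluate, select, recombine, mutate, replace."""
    pop = [[random.uniform(-5, 5) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # evaluate population
        parents = pop[:pop_size // 2]                # select the fittest
        offspring = []
        while len(offspring) < pop_size:
            mum, dad = random.sample(parents, 2)
            point = random.randrange(1, genome_len)  # single-point recombination
            child = mum[:point] + dad[point:]
            child = [g + random.uniform(-mutation_step, mutation_step)
                     if random.random() < mutation_rate else g
                     for g in child]                 # mutate with some probability
            offspring.append(child)
        pop = offspring                              # replace parents with offspring
    return max(pop, key=fitness)
```

For example, with `fitness = lambda g: -sum(x * x for x in g)` the population converges towards the all-zero genome.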
  • 24. Tweakables (1) - Variable mutation rates; high mutation rate == random search! - Mutation step size; ‘large rare’ or ‘small regular’ mutations? - Variable population size; bigger populations usually help (trade off slower process for larger sample size) - No Sex; remove the recombination step - asexual reproduction with mutation only (microbial reproduction - offspring are mutant clones) - Elitism? Parent gets a free pass through to the next generation - Demes? Geographical isolation leads to speciation in the real world; multiple populations with no interbreeding might start in different places in search space
  • 25. Tweakables (2) - Different ways of choosing parents... - Just take the best two? - Choose each parent with a probability proportional to its fitness (roulette-wheel selection) or to its rank (rank selection)
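The fitness-dependent-probability option is commonly implemented as roulette-wheel selection - a sketch, assuming non-negative fitness scores:

```python
import random

def roulette_select(population, fitnesses):
    """Pick one parent with probability proportional to its fitness score;
    random.choices performs the weighted draw."""
    return random.choices(population, weights=fitnesses, k=1)[0]
```

Rank selection would instead sort the population and use each individual's rank as its weight, which softens the pull of outlier fitness scores.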
  • 26. Questions?
  • 27. Particle Swarm Optimisation Particle Swarm Optimisation (PSO) is a population based 'stochastic' (it has random bits) optimisation technique developed by Dr. Eberhart and Dr. Kennedy (social psychology bods) in 1995. PSO: Inspired by the social behavior of bird flocking or fish schooling
  • 28. Swarm Intelligence Swarming: the collective behaviours of simple individuals interacting with their environment and each other in a way that yields some higher function, e.g. ants building a nest, birds locating food - such things are called 'emergent' properties, and are sometimes given the token 'swarm intelligence'.
  • 29. Boids A visual example of simulated flocking - Craig Reynolds’ Boids http://www.red3d.com/cwr/boids/ With a few simple rules and lots of bodies carrying them out, some really cool stuff can happen.
  • 30. Morpheus is Doomed Imagine the following scenario: a group of killer robots is randomly searching an area for Morpheus, with the intention of laying down some silicon law and proving that there's more to life than kung fu, funny coloured pills and dark sunglasses. Morpheus is hiding in one particular area, and while the swarm do not know where he is, each robot is allowed to know how far away he is with each step it takes in the search (and the robots are allowed to communicate this information with each other). So what's a good strategy for the swarm to converge on Mr 'Pheus? An effective strategy is for everyone to follow the bot which is nearest to Morpheus at each successive step in the search.
  • 31. Particle Swarms (1) In PSO, we have a spatially distributed population of agents (much like individuals within the GA) which represent solutions occupying our problem's search space. - Agents have fitness values which are evaluated by the fitness function specific to our problem. - Agents are called 'particles', and indeed act like particles in that they are physically simulated, having variable velocity which affects their position (variables defining an N-dimensional vector solution) in search space. Particles fly through the space, following the current ‘fittest’ particles.
  • 32. Particle Swarms (2) The location of the best yet found solution is shared by all members of the swarm, and individual behaviour is set so that particles accelerate towards this location. As particles fly around in the search space, they are constantly overshooting their target, circling around it, slowly converging etc. This behaviour leads the swarm to explore the search space in a directed fashion, constantly evaluating fitness in a way that concentrates on regions which provide better-fitness solutions. DEMO: http://uk.geocities.com/markcsinclair/pso.html
  • 33. PSO Implementation (1) PSO is initialised with a group of random particles (solutions); the algorithm then searches for optima over iterative generations. In every iteration, each particle is updated by following two ‘best’ values: PBEST: ‘Personal best’ - the vector location of the best solution this particle has achieved so far. GBEST: ‘Global best’ - the best value obtained so far by any particle within the population.
  • 34. PSO Implementation (2) After finding the PBEST and GBEST vectors, the particle updates its velocity and position: Velocity update: vel = last vel + r1*w1*(pbest - current position) + r2*w2*(gbest - current position) (r1, r2 are random numbers in [0, 1]; w1, w2 are weighting values for the pbest and gbest terms) Position update: position = last position + vel Prevent explosion: particles' velocities on each dimension are clamped to a user-specified maximum velocity.
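The two update equations translate directly into code - a one-dimensional sketch, using the commonly seen weight of 2.0 for both terms (an assumption, not from the slides):

```python
import random

def update_particle(pos, vel, pbest, gbest, w1=2.0, w2=2.0, vmax=1.0):
    """One PSO step for a 1-D particle: accelerate towards the personal and
    global bests, clamp the velocity to prevent explosion, then move."""
    r1, r2 = random.random(), random.random()
    vel = vel + r1 * w1 * (pbest - pos) + r2 * w2 * (gbest - pos)
    vel = max(-vmax, min(vmax, vel))  # clamp to the maximum velocity
    return pos + vel, vel
```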
  • 35. PSO Pseudocode Initialize particles While( ! bored ) { For each particle { Calculate fitness value If fitness is better than personal best fitness value (pBest) set current location as the new pBest If fitness is better than global best fitness value (gBest) set current location as the new gBest } For each particle { Calculate particle velocity Clamp velocity Update particle position } }
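The whole loop, fleshed out - a minimal sketch assuming a caller-supplied fitness function to maximise over [-5, 5] in each dimension; all parameter values are illustrative:

```python
import random

def pso(fitness, dims=2, n_particles=20, iterations=100,
        w1=2.0, w2=2.0, vmax=0.5):
    """Minimal PSO: track pbest per particle and a shared gbest, then update
    each particle's velocity (clamped) and position every iteration."""
    pos = [[random.uniform(-5, 5) for _ in range(dims)]
           for _ in range(n_particles)]
    vel = [[0.0] * dims for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=fitness)[:]
    for _ in range(iterations):
        for i, p in enumerate(pos):
            if fitness(p) > fitness(pbest[i]):   # new personal best
                pbest[i] = p[:]
            if fitness(p) > fitness(gbest):      # new global best
                gbest = p[:]
        for i in range(n_particles):
            for d in range(dims):
                r1, r2 = random.random(), random.random()
                v = (vel[i][d]
                     + r1 * w1 * (pbest[i][d] - pos[i][d])
                     + r2 * w2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, v))  # clamp velocity
                pos[i][d] += vel[i][d]
    return gbest
```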
  • 36. Questions?
  • 37. Chinchillas in Space A woefully over-ambitious ‘quick’ demo of a genetic algorithm implementation