Evolutionary Algorithms and their Applications in Civil Engineering - 1
    Presentation Transcript

    • Evolutionary Algorithms and Civil Engineering
      by Vamsidhar Tankala & Shrey Modi, Department of Civil Engineering
    • Consider the following problem
      Minimize the function: f(x1, x2, x3, x4) = x1^2 + x2^2 + x3^2 + x4^2
    • Various methods
      - Mathematical differentiation and other plotting techniques.
      - A computer program based on these techniques can easily be formulated.
      - What are the issues to be considered?
        - Computational time
        - Complexity of the problem: increase the number of parameters and observe the computational time
        - Smoothness of the function: if the landscape is not rugged, these techniques work efficiently
    • Need for Evolutionary Techniques
      Difficulties faced by the traditional techniques:
      - Rugged landscape of the problem
      - Presence of many discontinuities
      - Simulation of real-world applications where mathematical formulations are not available: "black-box" approaches. One example: dynamic traffic simulation.
    • Need for evolutionary procedures
      "Genetic Algorithms are good at taking large, potentially huge search spaces and navigating them, looking for optimal combinations of things, solutions you might not otherwise find in a lifetime."
      - Salvatore Mangano, Computer Design, May 1995
    • Brief introduction to GAs
      - Directed search algorithms based on the mechanics of biological evolution
      - Developed by John Holland, University of Michigan (1970s)
        - To understand the adaptive processes of natural systems
        - To design artificial systems software that retains the robustness of natural systems
      - Provide efficient, effective techniques for optimization and machine learning applications
      - Widely used today in business, scientific and engineering circles
    • Genetic Algorithm
      Outline of the steps involved in a GA:
      - Encoding
      - Initialization
      - Reproduction
      - Selection
      - Termination criteria
    • Deb's example
      Consider a simple can design problem. A cylindrical can is considered to have only two parameters: the diameter d and the height h. The can needs to hold a volume of at least 300 ml, and the objective of the design is to minimize the cost of the can material.
    • Objective function
      Minimize f(d, h) = c (pi d^2 / 2 + pi d h)
      Subject to g1(d, h) = pi d^2 h / 4 >= 300
      Variable bounds: d_min <= d <= d_max, h_min <= h <= h_max
    • Representing a solution
      A can with (d, h) = (8, 10) cm is represented as the chromosome 01000 01010 (d and h encoded as 5-bit binary strings).
    • Fitness Calculation
      Fitness: assigning a "goodness" measure to a solution.
      F(s) = 0.065 [pi (8)^2 / 2 + pi (8)(10)] ≈ 23
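A minimal sketch of the decoding and fitness evaluation for this can example. The 5-bit-per-variable decoding, the material cost c = 0.065 per unit area, and the penalty-based handling of the 300 ml constraint are illustrative assumptions, not values prescribed by the slides.

```python
import math

C = 0.065          # cost per unit surface area (value used in the slide example)
PENALTY = 1e4      # hypothetical penalty weight for violating the volume constraint

def decode(chromosome):
    """Split a 10-bit string into (d, h), e.g. '0100001010' -> (8, 10)."""
    d = int(chromosome[:5], 2)
    h = int(chromosome[5:], 2)
    return d, h

def fitness(chromosome):
    """Material cost of the can, penalised if the 300 ml volume constraint fails."""
    d, h = decode(chromosome)
    cost = C * (math.pi * d**2 / 2 + math.pi * d * h)
    volume = math.pi * d**2 * h / 4
    if volume < 300:
        cost += PENALTY * (300 - volume)   # constraint handling is an assumption
    return cost

print(decode("0100001010"))              # (8, 10)
print(round(fitness("0100001010"), 1))   # ~22.9, the "23" quoted on the slide
```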
    • A sample random generation
      (Figure: six randomly generated cans with costs 23, 30, 11, 24, 9 and 37; the two undersized cans carry additional penalty costs of 30 and 40 respectively.)
    • Selection Operator
      - Identify good (usually above-average) solutions in a population.
      - Make multiple copies of good solutions.
      - Eliminate bad solutions from the population so that multiple copies of good solutions can be placed in the population.
    • Common selection methods
      - Tournament selection
      - Proportionate selection
      - Ranking selection
    • Tournament selection
      (Figure: the six cans with costs 23, 24, 30, 37, 11+30 and 9+40 compete in pairwise tournaments; the winner of each tournament is copied into the mating pool, so good cans such as 23 and 24 appear more than once while the penalised cans are eliminated.)
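A sketch of binary tournament selection as illustrated above, for a minimisation problem. The pool size and the can names/costs in the usage example are illustrative.

```python
import random

def tournament_selection(population, fitness, pool_size=None):
    """Binary tournament: pick two solutions at random, copy the better
    (lower-cost) one into the mating pool; repeat until the pool is full."""
    pool_size = pool_size or len(population)
    pool = []
    while len(pool) < pool_size:
        a, b = random.sample(population, 2)
        pool.append(a if fitness(a) <= fitness(b) else b)
    return pool

# Usage with the can costs from the slide (lower cost = better):
costs = {"can1": 23, "can2": 30, "can3": 11, "can4": 24, "can5": 9, "can6": 37}
pool = tournament_selection(list(costs), lambda s: costs[s])
print(pool)   # good solutions such as 'can3' and 'can5' tend to appear more than once
```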
    • Other Selection Operators
      - Ranking selection
      - Stochastic remainder roulette wheel selection
      - Proportionate selection
    • What happens in the mating pool?
      - Crossover operation
      - Mutation operation
    • Crossover operator
      Parents (8,10) = 01000 01010 (cost 23) and (14,6) = 01110 00110 (cost 37) exchange portions of their strings to produce the offspring (10,6) = 01010 00110 (cost 22) and (12,10) = 01100 01010 (cost 39).
    • Mutation Operator
      A single bit of the string (10,6) = 01010 00110 (cost 22) is flipped to give (8,6) = 01000 00110 (cost 16).
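A minimal sketch of the two operators on bit strings. The crossover point of 3 reproduces the slide example; the bit-flip mutation probability pm is left as a parameter.

```python
import random

def crossover(p1, p2, point=None):
    """Single-point crossover on two bit strings."""
    point = point if point is not None else random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(chromosome, pm):
    """Flip each bit independently with probability pm."""
    return "".join(b if random.random() > pm else str(1 - int(b)) for b in chromosome)

# Reproducing the slide example: parents (8,10) and (14,6), cut after the 3rd bit.
c1, c2 = crossover("0100001010", "0111000110", point=3)
print(c1, c2)   # '0101000110' -> (10, 6) and '0110001010' -> (12, 10)
print(mutate(c1, pm=0.1))   # occasionally flips a bit, e.g. (10,6) -> (8,6)
```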
    • Overall understanding of GAs
    • Step 1: Encoding Problem
      How to encode a solution of the problem into a chromosome?
      Types of encoding:
      - Binary coding, e.g. 1 0 0 1 1 1 0 1 (difficult to apply directly; not a natural coding)
      - Real number coding, e.g. 2.3352 5.3252 6.2895 4.1525 (mainly for constrained optimization problems)
      - Integer coding, e.g. 3 5 1 2 4 8 7 6 (for combinatorial optimization problems, e.g. quadratic assignment problems)
    • Step 1: Encoding Problem (Cont.)
      Coding space and solution space (figure): encoding maps the solution space to the coding space and decoding maps it back; genetic operations act in the coding space, while evaluation and selection act in the solution space.
    • Step 1: Encoding Problem (Cont.)
      Critical issues with encoding:
      - Feasibility of a chromosome: the solution decoded from a chromosome lies in the feasible region of the problem
      - Legality of a chromosome: the chromosome represents a solution to the problem
      - Uniqueness of the mapping between chromosomes and solutions:
        - 1-to-n mapping (undesired)
        - n-to-1 mapping (undesired)
        - 1-to-1 mapping (desired): one chromosome represents exactly one solution to the problem
    • Step 1: Encoding Problem (Cont.)
      (Figure: a chromosome in the coding space may decode to an infeasible point outside the feasible region of the solution space.)
    • Step 2: Initialization
      - Create an initial population of solutions: randomly, by local search, or from known feasible solutions
      - For the optimization problem Minimize F(x1, x2, x3) with binary encoding, each solution is a 12-bit string, e.g. 1 0 1 1 | 0 0 1 1 | 1 0 0 1 for x1, x2, x3
    • Step 2: Initialization (Cont.)
      Population of solutions; the fitness of each solution is evaluated (= objective function value):
      Solution No.   Chromosome                 Fitness value
      1              1 0 1 0 0 1 0 1 1 0 1 0    13.2783
      2              0 1 1 0 1 0 1 0 0 1 0 1    20.3749
      3              0 0 1 0 1 0 1 1 1 1 0 0    19.8302
      4              0 1 0 1 0 0 1 0 0 0 1 1    52.9405
      5              1 0 0 0 1 0 1 0 1 0 0 1    25.8202
      6              1 0 1 1 1 1 0 0 0 0 1 1    36.0282
      7              0 0 1 0 1 0 1 1 0 1 1 0    70.9202
      8              0 1 1 1 1 0 0 1 1 1 0 1    38.9022
      9              0 1 0 1 0 1 0 1 1 0 0 1    29.0292
      10             1 0 0 0 1 1 1 1 1 1 0 0    21.9292
    • Step 3: Reproduction
      Crossover operation (based on crossover probability):
      - Select parents from the population based on the crossover probability
      - Randomly select two points between the strings at which to perform the crossover (figure: crossover points marked on parent 1 and parent 2)
      - Perform the crossover operation on the selected strings to produce offspring 1 and offspring 2
      - Crossover is regarded here as the local search operation
    • Step 3: Reproduction (Cont.)
      For the example optimization problem, let the crossover probability be 0.8. A random value in [0,1] is drawn for each chromosome; solutions whose value is below 0.8 are selected for the crossover operation:
      Solution No.   Chromosome                 Random value      Selected for crossover?
      1              1 0 1 0 0 1 0 1 1 0 1 0    0.9502 > 0.8      NO
      2              0 1 1 0 1 0 1 0 0 1 0 1    0.2191 < 0.8      YES
      3              0 0 1 0 1 0 1 1 1 1 0 0    0.4607 < 0.8      YES
      4              0 1 0 1 0 0 1 0 0 0 1 1    0.6081 < 0.8      YES
      5              1 0 0 0 1 0 1 0 1 0 0 1    0.8128 > 0.8      NO
      6              1 0 1 1 1 1 0 0 0 0 1 1    0.9256 > 0.8      NO
      7              0 0 1 0 1 0 1 1 0 1 1 0    0.7779 < 0.8      YES
      8              0 1 1 1 1 0 0 1 1 1 0 1    0.4596 < 0.8      YES
      9              0 1 0 1 0 1 0 1 1 0 0 1    0.9817 > 0.8      NO
      10             1 0 0 0 1 1 1 1 1 1 0 0    0.7784 < 0.8      YES
    • Step 3: Reproduction (Cont.)
      Crossover applied at randomly chosen points to the selected parents:
      Solution No.   Parent selected            Offspring produced
      2              0 1 1 0 1 0 1 0 0 1 0 1    0 1 1 0 1 0 1 1 0 1 0 1
      3              0 0 1 0 1 0 1 1 1 1 0 0    0 0 1 0 1 0 1 0 1 1 0 0
      4              0 1 0 1 0 0 1 0 0 0 1 1    0 1 0 1 0 0 1 1 0 1 1 1
      7              0 0 1 0 1 0 1 1 0 1 1 0    0 0 1 0 1 0 1 0 0 0 1 0
      8              0 1 1 1 1 0 0 1 1 1 0 1    0 1 0 0 1 0 0 1 1 1 0 1
      10             1 0 0 0 1 1 1 1 1 1 0 0    1 0 1 1 1 1 1 1 1 1 0 0
    • Step 3: Reproduction (Cont.)
      Mutation operation (based on a mutation probability pm):
      - Each bit of every individual is modified with probability pm
      - Main operator for global search (looking at new areas of the search space)
      - pm is usually small, in the range 0.001 to 0.01
      - Rule of thumb: pm = 1 / (number of bits in the chromosome)
    • Step 3: Reproduction (Cont.)
      Mutation of the i-th solution string, 0 0 1 0 1 0 1 1 0 1 0 0, for the problem Minimize F(x1, x2, x3), with pm = 1/12 = 0.083:
      - Generate a random number in [0,1] for each bit: 0.12, 0.57, 0.62, 0.31, 0.01, 0.73, 0.83, 0.63, 0.02, 0.26, 0.94, 0.63
      - Select the bits whose random number is less than pm (here the 5th and 9th bits)
      - Interchange the selected bits with each other
    • Step 4: Selection ("Survival of the fittest")
      - Directs the search towards promising regions in the search space
      - Basic issue involved in the selection phase: the sampling space (parents and offspring)
      - Regular sampling space: all offspring + a few parents = pop_size (figure: the population of size pop_size passes through crossover and mutation to produce the offspring)
    • Step 4: Selection ("Survival of the fittest") (Cont.)
      - Sampling space (continued)
      - Enlarged sampling space: all offspring + all parents (figure: the population of size pop_size and all offspring produced by crossover and mutation enter the selection step together)
    • Step 4: Selection ("Survival of the fittest") (Cont.)
      Sampling mechanism: how to select chromosomes from the sampling space. Basic approaches:
      - Stochastic sampling (roulette wheel selection): the survival probability of the k-th individual is proportional to its fitness value,
        pk = fk / (sum of fj for j = 1 to pop_size), where fk is the fitness value of the k-th individual.
        Based on pk, the cumulative probability is calculated and a roulette wheel is constructed (the zone of the k-th individual is proportional to pk); a random number in [0,1] is generated and the corresponding individual is selected.
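A sketch of the roulette wheel rule above, assuming a maximisation problem with non-negative fitness values; the example fitness vector is illustrative.

```python
import random

def roulette_wheel_select(fitnesses):
    """Pick an index with probability p_k = f_k / sum_j f_j
    (fitness-proportionate selection)."""
    total = sum(fitnesses)
    r = random.uniform(0, total)
    cumulative = 0.0
    for k, f in enumerate(fitnesses):
        cumulative += f
        if r <= cumulative:
            return k
    return len(fitnesses) - 1

# Usage: individual 2 (fitness 50) is selected about half the time.
print(roulette_wheel_select([10, 20, 50, 20]))
```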
    • Step 4: Selection ("Survival of the fittest") (Cont.)
      - Deterministic sampling: select the best pop_size individuals from the parents and offspring, with no duplication of individuals
      - Mixed sampling: both random and deterministic sampling are used
      Step 5: Termination Criteria
      - Repeat the above steps until the termination criterion is satisfied
      - Termination criteria: a maximum number of generations, or no improvement in the fitness values for a fixed number of generations
    • Summary of Genetic Algorithms
      Begin {
          initialize population;
          evaluate population;
          while (TerminationCriteriaNotSatisfied) {
              select parents for reproduction;
              perform crossover and mutation;
              evaluate population;
          }
      }
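A minimal, self-contained sketch of the GA outline above, applied to the 12-bit minimisation example used in Steps 1 to 4. The decoding of the three 4-bit variables, the toy objective, and the parameter values are assumptions for illustration only.

```python
import random

def f(x1, x2, x3):
    return x1**2 + x2**2 + x3**2          # toy objective to minimise

def decode(chrom):
    return int(chrom[:4], 2), int(chrom[4:8], 2), int(chrom[8:], 2)

def evaluate(chrom):
    return f(*decode(chrom))

def mutate(chrom, pm):
    return "".join(b if random.random() > pm else str(1 - int(b)) for b in chrom)

def ga(pop_size=10, n_bits=12, pc=0.8, pm=1/12, generations=50):
    pop = ["".join(random.choice("01") for _ in range(n_bits)) for _ in range(pop_size)]
    for _ in range(generations):
        # binary tournament selection into the mating pool
        pool = [min(random.sample(pop, 2), key=evaluate) for _ in range(pop_size)]
        # crossover (probability pc) and mutation
        offspring = []
        for i in range(0, pop_size, 2):
            p1, p2 = pool[i], pool[(i + 1) % pop_size]
            if random.random() < pc:
                cut = random.randint(1, n_bits - 1)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            offspring += [mutate(p1, pm), mutate(p2, pm)]
        # enlarged sampling space: best pop_size of parents + offspring survive
        pop = sorted(pop + offspring, key=evaluate)[:pop_size]
    return min(pop, key=evaluate)

best = ga()
print(best, decode(best), evaluate(best))   # converges towards (0, 0, 0)
```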
    • Issues for GA Practitioners
      Choosing basic implementation issues:
      - Encoding
      - Population size, mutation rate, crossover rate, ...
      - Selection and deletion policies
      - Types of crossover and mutation operators
      - Termination criteria
      - Performance and scalability
      - The solution is only as good as the evaluation function (often the hardest part)
    • Benefits of Genetic Algorithms
      - Concept is easy to understand
      - Modular, separate from the application
      - Supports multi-objective optimization
      - Good for "noisy" environments
      - Always an answer; the answer gets better with time
      - Inherently parallel; easily distributed
      - Many ways to speed up and improve a GA-based application as knowledge about the problem domain is gained
      - Easy to exploit previous or alternate solutions
      - Flexible building blocks for hybrid applications
      - Substantial history and range of use
    • When to Use a GA
      - Alternative solutions are too slow or overly complicated
      - An exploratory tool is needed to examine new approaches
      - The problem is similar to one that has already been successfully solved with a GA
      - You want to hybridize with an existing solution
      - The benefits of GA technology meet key problem requirements
    • Some GA Application Types
      Domain                       Application types
      Control                      gas pipeline, pole balancing, missile evasion, pursuit
      Design                       semiconductor layout, aircraft design, keyboard configuration, communication networks
      Scheduling                   manufacturing, facility scheduling, resource allocation
      Robotics                     trajectory planning
      Machine Learning             designing neural networks, improving classification algorithms, classifier systems
      Signal Processing            filter design
      Game Playing                 poker, checkers, prisoner's dilemma
      Combinatorial Optimization   set covering, travelling salesman, routing, bin packing, graph colouring and partitioning
    • Sample Applications in Civil Engineering
      - Transportation engineering: brief discussion of the following areas:
        - Dynamic traffic simulation
        - Aggregate blending
        - Back-calculation of pavement layer moduli
      - Numerous applications in structural, environmental, geotechnical and water resources engineering. Research articles on applications of GAs in civil engineering are available in abundance.
    • Ant Colony Optimization
    • Inspiration
      - Ants are practically blind, yet they still manage to find their way to food. How do they do it?
      - These observations inspired a new type of algorithm called ant algorithms (or ant systems).
      - They are the result of research on computational intelligence approaches to combinatorial optimization.
      - The algorithm is modeled after the natural behavior of ants.
    • Natural behavior of ants (figure): ants search for food between the nest and the food source.
    • Natural behavior of ants (figure): an obstacle has blocked the path of the ants.
    • Natural behavior of ants (figure): what to do? Every ant effectively flips a coin and chooses a path around the obstacle.
    • Natural behavior of ants (figure): finally, after some time, the shorter path is reinforced.
    • Natural Ants
      - Almost blind.
      - Incapable of achieving complex tasks alone.
      - Rely on the phenomenon of swarm intelligence for survival.
      - Capable of establishing shortest-route paths from their colony to feeding sources and back.
      - Use stigmergic communication via pheromone trails.
    • Natural Ants
      - Follow existing pheromone trails with high probability.
      - What emerges is a form of autocatalytic behavior: the more ants follow a trail, the more attractive that trail becomes for being followed.
      - The probability of choosing a path increases with the number of times the same path was chosen before.
    • What is Stigmergy?
    • Stigmergy
      - A term coined by the French biologist Pierre-Paul Grassé; it means interaction through the environment.
      - Two individuals interact indirectly when one of them modifies the environment and the other responds to the new environment at a later time. This is stigmergy.
    • Stigmergy
      Ants use stigmergy. But how? Through pheromones.
    • Pheromones
      What is a pheromone? Pheromones are chemical substances that ants drop along their path.
    • Ant Colony Optimization
    • Basic Requirements
      Since ant algorithms are based on the shortest-path-finding methodology used by ants in their search for food, their implementation requires that:
      - The problem to be solved is either already in graph form or can be expressed in graph form.
      - The problem is finite (i.e. it has a start and an end).
    • Ant Algorithms
      - Ant systems are a population-based approach; in this respect they are similar to genetic algorithms.
      - Each ant is a simple agent with the following characteristics:
        - It probabilistically chooses the next node to visit.
        - It uses a tabu list to avoid revisiting nodes.
        - After completing its tour, it lays a pheromone trail on each visited edge.
    • Flowchart of ant algorithms
      Initialize ants -> Find solutions -> Evaluate solutions -> Update pheromone -> Is the termination criterion met?
      - No: probabilistically find new solutions based on the pheromone values, evaluate the solutions, update the pheromone, and check again.
      - Yes: STOP.
    • Initialization
      - Initially, the ants are placed randomly on the nodes.
      - Each edge is initialized with a small amount of pheromone.
      - Each edge's visibility, a heuristic value equal to the inverse of the edge's length, is initialized.
    • Find Solutions
      Each ant probabilistically selects the next node to visit. With tau_ij(t) the quantity of pheromone on edge i-j during cycle t, d_ij the distance of edge i-j, and alpha, beta constants, the probability of the transition from node i to node j is
      P_ij(t) = [tau_ij(t)^alpha (1/d_ij)^beta] / [sum over allowed nodes j of tau_ij(t)^alpha (1/d_ij)^beta]
      where the allowed nodes are those not already identified in the tabu list.
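A sketch of this transition rule. The data layout (dictionaries of pheromone and distance values) and the alpha, beta defaults are illustrative assumptions.

```python
import random

def choose_next_node(current, allowed, tau, dist, alpha=1.0, beta=2.0):
    """Pick the next node j with probability proportional to
    tau[current][j]**alpha * (1/dist[current][j])**beta,
    over the nodes not yet visited."""
    weights = [tau[current][j] ** alpha * (1.0 / dist[current][j]) ** beta
               for j in allowed]
    r = random.uniform(0, sum(weights))
    cumulative = 0.0
    for j, w in zip(allowed, weights):
        cumulative += w
        if r <= cumulative:
            return j
    return allowed[-1]

# Example with 3 candidate nodes, unit pheromone, and distances 1, 2, 4:
tau = {0: {1: 1.0, 2: 1.0, 3: 1.0}}
dist = {0: {1: 1.0, 2: 2.0, 3: 4.0}}
print(choose_next_node(0, [1, 2, 3], tau, dist))   # node 1 is chosen most often
```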
    • Tabu List
      - It is used by an ant to avoid revisiting any node.
      - It stores the nodes already visited by the ant.
    • Pheromone Update
      After each ant completes its tour, the pheromone count on each edge is updated using
      tau_ij(t+1) = (1 - rho) tau_ij(t) + sum over ants k that used edge (i,j) of Q / L_k
      where tau_ij(t+1) is the quantity of pheromone on edge i-j during cycle t+1, the (1 - rho) factor accounts for evaporation, Q is a constant, and L_k is the total distance traveled by ant k during its tour.
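A sketch of this update rule with pheromone stored as a dictionary of dictionaries; the rho and Q defaults are illustrative values, not taken from the slides.

```python
def update_pheromone(tau, ant_tours, tour_lengths, rho=0.5, Q=100.0):
    """tau_ij(t+1) = (1 - rho) * tau_ij(t) + sum_k Q / L_k over the ants k
    whose tour used edge (i, j)."""
    # evaporation on every edge
    for i in tau:
        for j in tau[i]:
            tau[i][j] *= (1.0 - rho)
    # deposit by each ant along its own (closed) tour
    for tour, length in zip(ant_tours, tour_lengths):
        for i, j in zip(tour, tour[1:] + tour[:1]):
            tau[i][j] += Q / length
    return tau

# Example: two ants on 3-node tours with lengths 10 and 20
tau = {i: {j: 1.0 for j in range(3) if j != i} for i in range(3)}
update_pheromone(tau, [[0, 1, 2], [0, 2, 1]], [10.0, 20.0])
print(tau[0][1])   # edge (0,1) reinforced by the shorter tour
```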
    • Termination
      The termination criteria commonly used are:
      - A designated maximum number of cycles.
      - A specified CPU time limit.
      - A maximum number of cycles between two improvements of the global best solution.
    • Control Parameters
      - Number of ants
      - Pheromone weight (alpha)
      - Visibility weight (beta)
      - Pheromone persistence (rho)
      - Number of cycles
    • Ant Algorithms - Applications
      - Travelling Salesman Problem (TSP)
      - Facility layout problem, which can be shown to be a Quadratic Assignment Problem (QAP)
      - Vehicle routing
      - Stock cutting (at Nottingham)
    • ANT COLONY APPLICATION TO TRAVELING SALESMAN PROBLEM – AN EXAMPLE ILLUSTRATION
    • Ant Colony Algorithms and TSP
      - Ant Colony Optimization was initially designed for the Traveling Salesman Problem.
      - At the start of the algorithm, one ant is placed in each city.
      - Assuming the TSP is represented as a fully connected graph, each edge carries an intensity of trail; this represents the pheromone trail laid by the ants.
    • Ant Colony Algorithms and TSP
      - The visibility of the next town, n_ij, is defined as 1/d_ij, where d_ij is the distance between cities i and j.
      - When an ant decides which town to move to next, it does so with a probability based on the visibility of that city and the amount of trail intensity on the connecting edge.
    • Ant Colony Algorithms and TSP
      - At each cycle, pheromone evaporation takes place.
      - The evaporation rate, 1 - rho, is a value between 0 and 1.
      - To stop ants from visiting the same city twice in the same tour, a data structure called the tabu list is maintained.
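A compact, self-contained sketch of the ant system applied to a small TSP, following the slides above. The city coordinates, colony size and parameter values (alpha, beta, rho, Q) are illustrative assumptions, not values from the deck.

```python
import math
import random

def ant_system_tsp(coords, n_ants=10, cycles=100, alpha=1.0, beta=2.0, rho=0.5, Q=100.0):
    n = len(coords)
    dist = [[math.dist(coords[i], coords[j]) or 1e-9 for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]            # small initial pheromone on every edge
    best_tour, best_len = None, float("inf")

    def tour_length(tour):
        return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

    for _ in range(cycles):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, tabu = [start], {start}           # tabu list = cities already visited
            while len(tour) < n:
                i = tour[-1]
                allowed = [j for j in range(n) if j not in tabu]
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta for j in allowed]
                j = random.choices(allowed, weights=weights)[0]
                tour.append(j)
                tabu.add(j)
            tours.append(tour)
        # pheromone evaporation followed by deposit along each ant's tour
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += Q / length
                tau[j][i] += Q / length
    return best_tour, best_len

# 10 random cities, as in the result slides that follow:
random.seed(0)
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(10)]
print(ant_system_tsp(cities))
```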
    • Results on TSP with 10 cities (figures): the tour found by the ant colony improves over successive cycles; the final figure shows the optimal solution.
    • Variants
      - Best and Worst Ant System: the best ant receives a reward while the worst ant is punished; if the search gets stuck at a local optimum, a restart is employed.
      - Max-Min Ant System: upper and lower bounds are imposed on the pheromone levels; the search starts from the maximum.
      - Rank-Based Ant System: the ants are sorted with respect to the fitness of the tours they find, and their pheromone deposits are adjusted accordingly.
      - Elitist Ant System: the best tour found at each step receives extra pheromone.
    • Concluding remarks on ant algorithms
      - Ant algorithms are inspired by real ant colonies.
      - The probability of an ant following a certain route is a function of pheromone intensity, visibility and evaporation.
      - Ant algorithms are well suited to problems with graph structures.
    • Particle Swarm Optimization
    • Inspiration
      - PSO was inspired by swarms in nature such as flocks of birds and schools of fish.
      - The PSO algorithm was originally developed to imitate the motion of a flock of birds.
      - Particle Swarm Optimization (PSO) applies the concept of social interaction to problem solving.
    • Particle Swarm Algorithms
      - Developed in 1995 by James Kennedy and Russ Eberhart.
      - PSO is a robust stochastic optimization technique based on the movement and intelligence of swarms.
      - In PSO, a swarm of n individuals communicate search directions (gradients) either directly or indirectly with one another.
      - It has been applied successfully to a wide variety of search and optimization problems.
    • PSO Formulation
      - The algorithm uses a set of particles flying over a search space to locate a global optimum.
      - A particle encodes a candidate solution to the problem at hand.
      - During an iteration of PSO, each particle updates its position according to its previous experience and the experience of its neighbors.
    • Fundamentals of PSO
      A particle (individual) is composed of three vectors:
      - The x-vector records the current position (location) of the particle in the search space,
      - The p-vector (pbest) records the location of the best solution found so far by the particle, and
      - The v-vector contains a gradient (direction) in which the particle will travel if undisturbed.
    • PSO: Generic Algorithm Schema
      - Start: initialize the swarm with random position vectors (x0) and velocity vectors (v0).
      - For each particle: evaluate its fitness.
      - Update the velocity:
        v(t+1) = W * v(t) + c1 * rand(0,1) * (pbest - x(t)) + c2 * rand(0,1) * (gbest - x(t))
      - Update the position: x(t+1) = x(t) + v(t+1).
      - If fitness(x(t)) is better than fitness(pbest), set pbest = x(t); if it is better than fitness(gbest), set gbest = x(t).
      - If the termination test is false, repeat; if true, output gbest and end.
      (gbest = global best position, pbest = self best position, c1 and c2 = acceleration coefficients, W = inertia weight)
    • Algorithm Implementation
      - The basic concept of PSO lies in accelerating each particle toward the best position it has found so far (pbest) and the global best position (gbest) obtained so far by any particle, with a random weighted acceleration at each time step.
      - This is done by simply adding the v-vector to the x-vector to obtain the new x-vector (Xi = Xi + Vi).
      - Once the particle computes the new Xi, it evaluates its new location. If the x-fitness is better than the p-fitness, then pbest = Xi and p-fitness = x-fitness.
    • Psychosocial compromise (figure: a particle's velocity v combines its current position x, its own best position pbest, and the global best position gbest)
      v(t+1) = W * v(t) + c1 * rand(0,1) * (pbest - x(t)) + c2 * rand(0,1) * (gbest - x(t))
      where gbest is the global best position, pbest the particle's own best position, c1 and c2 the acceleration coefficients, and W the inertia weight.
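A minimal sketch of these update equations, minimising a simple sphere function. The swarm size, iteration count, W, c1, c2 and bounds are illustrative choices, not values from the slides.

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def pso(dim=4, swarm_size=20, iters=200, w=0.7, c1=1.5, c2=1.5, bound=10.0):
    X = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(swarm_size)]
    V = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [x[:] for x in X]
    pbest_val = [sphere(x) for x in X]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(dim):
                # v(t+1) = W*v(t) + c1*rand*(pbest - x) + c2*rand*(gbest - x)
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]                  # x(t+1) = x(t) + v(t+1)
            val = sphere(X[i])
            if val < pbest_val[i]:                  # update the particle's own memory
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:                 # update the global best
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

print(pso())   # converges towards the origin
```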
    • Initial parameters
      - Swarm size
      - Positions of the particles
      - Velocities of the particles
      - Maximum number of iterations
    • Control Parameters
      - Swarm size
      - Inertia weight W
      - Acceleration coefficients c1 and c2
      - Number of iterations
    • Inertia Weight W
      A large inertia weight (W) facilitates a global search, while a small inertia weight facilitates a local search.
      - Larger W: greater global search ability
      - Smaller W: greater local search ability
    • Acceleration Coefficients
      They determine the inclination of the search.
      - c1 larger than c2: greater local search ability
      - c2 larger than c1: greater global search ability
    • Comparison with Evolutionary Algorithms (EAs)
      - Unlike EAs, PSO has no selection operator.
      - PSO does not implement a survival-of-the-fittest strategy; all individuals are kept as members of the population throughout the course of the run.
    • PSO implementation on TSP
    • Encoding Schema
      - Generally, PSO is applied to problems involving real variables.
      - However, with a proper encoding scheme it can be applied to hard combinatorial optimization problems such as the Traveling Salesman Problem, the Knapsack Problem, node coloring, sequencing and scheduling.
    • Encoding Schema
      - For the TSP, each particle's position is coded as a one-dimensional string whose dimension equals the number of cities to be visited.
      - The particles are randomly initialized with rank vectors (priority numbers), e.g. the string 5 9 2 4 6 3 represents the priority numbers for a TSP with 6 cities.
    • Decoding
      Encoded string: 5 9 2 4 6 3. The city holding the smallest priority number is decoded as 1, i.e. it is visited first.
    • Decoding
      The decoded entry is marked (N) so that it is not considered again, and the city with the next smallest priority number is assigned the next position. This is repeated until all cities are assigned.
    • Decoding
      Encoded string: 5 9 2 4 6 3. After decoding, the finally decoded string is 4 6 1 3 5 2.
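A sketch of this priority-based decoding: the city holding the smallest priority number gets rank 1, the next smallest gets rank 2, and so on. The function name is illustrative.

```python
def decode_priorities(priorities):
    """Turn a particle's priority vector into per-city visiting ranks."""
    order = sorted(range(len(priorities)), key=lambda i: priorities[i])
    ranks = [0] * len(priorities)
    for rank, city in enumerate(order, start=1):
        ranks[city] = rank
    return ranks

print(decode_priorities([5, 9, 2, 4, 6, 3]))   # [4, 6, 1, 3, 5, 2], as on the slide
```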
    • Solution Strategy by Particle Swarm Algorithm
      - Randomly initialize the particles' positions (ranks) and velocities.
      - Decode the particles and evaluate the objective.
      - Store the initial position in each particle's memory.
      - Modify the velocity using the cognitive and social components and update the position.
      - Decode the particles' positions and evaluate the objective.
      - If a particle's position is better than the position stored in its memory, update the memory.
      - Update the global best if a better particle is obtained.
      - Repeat the process until the required number of iterations is complete.
      - The particle with the best position is the output.
    • Results of PSO on TSP with 10 nodes (figures): the tour evolves over the iterations; PSO took a relatively long time to evolve the optimal solution.
    • Concluding remarks on Particle Swarm
      - Fast convergence, so the time requirement is low.
      - Has both global and local search components.
      - Less dependent on parameter tuning.
      - More effective on problems involving real values.
      - Risk of early convergence due to the high convergence speed.
    • ARTIFICIAL IMMUNE SYSTEM
    • Artificial Immune Systems
      - AIS are adaptive systems inspired by theoretical immunology and observed immune functions, principles and models, which are applied to complex problem domains (de Castro and Timmis).
      - A recently developed evolutionary technique inspired by the theory of immunology.
      - A way to study the response of the immune system when a non-self antigen pattern is recognized by an antibody.
    • Biological Immune System
      - The efficiency of the acquired response depends upon the ability of antibodies to recognize antigens: around 10^16 possible antigens must be handled by fewer than 100 antibody genes.
      - Key properties: generalization, screening and self/non-self discrimination, and memory (the ability to remember previous infections).
    • Artificial Immune System
      History of Artificial Immune Systems:
      - Initially developed from the theory of immunology in the mid-1980s.
      - In 1990, first use of an immune algorithm to solve an optimization problem.
      - In the mid-1990s, applications to computer security.
      - In the mid-1990s, applications to machine learning.
    • Artificial Immune System
      An optimization view (figure relating the objective functions and constraints to building blocks, entire solutions and feasible solutions).
    • Artificial Immune Systems
      Basic elements:
      - Immune system: protects the body from foreign matter
      - Antigen: any foreign disease-causing element
      - Antibody: used to identify, bind and eliminate antigens
    • General Framework for AIS - the AIS cycle
      Initialization -> population evaluation -> selection -> cloning and hypermutation -> back to evaluation.
    • Artificial Immune System
      AIS: a generic framework (figure): the application domain determines the representation, the representation determines the affinity measures, and the affinity measures drive the immune algorithm.
    • Flow of the Algorithm
      - Maintain a population P of individuals.
      - Probabilistically select the best individuals and build a clone pool of the population.
      - Hypermutate each clone.
      - Keep a repository of good solutions; when the search gets stagnated, good solutions from the repository are sent back into the current population.
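A minimal clonal-selection-style sketch of this cycle, minimising a simple function. The cloning rule, the Gaussian hypermutation scale, the two random newcomers used to escape stagnation, and all parameter values are assumptions for illustration.

```python
import random

def affinity(x):                      # lower objective value = higher affinity
    return sum(xi * xi for xi in x)

def clonal_selection(dim=3, pop_size=10, n_best=3, clones_per_best=5,
                     sigma=0.5, generations=100, bound=10.0):
    pop = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # select the best antibodies and build a clone pool
        pop.sort(key=affinity)
        clone_pool = []
        for antibody in pop[:n_best]:
            for _ in range(clones_per_best):
                # hypermutation: each clone is perturbed around its parent
                clone_pool.append([xi + random.gauss(0, sigma) for xi in antibody])
        # keep the best individuals, plus a couple of random newcomers
        # (a stand-in for the "repository" idea on the slide)
        pop = sorted(pop + clone_pool, key=affinity)[:pop_size - 2]
        pop += [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(2)]
    return min(pop, key=affinity)

best = clonal_selection()
print(best, affinity(best))   # approaches the origin
```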
    • Artificial Immune System
      AIS: an assessment
      - Advantages: general-purpose AIS tools, easily extensible, potential for distribution
      - Disadvantages: parameter sensitive, computationally expensive
    • Artificial Immune System
      Distinctive features and their applications:
      Feature                        Applications
      Learning & adaptation          Security
      Immunological memory           Pattern recognition
      Self/non-self classification   Heuristic optimization
      Self-organizing                Modeling & agent applications
      Localization & circulation     Clustering
      Autonomous/decentralized       Concept learning & recommender systems
    • Artificial Immune System
      AIS: potential application areas
      - Fault and anomaly detection
      - Data mining (machine learning, pattern recognition)
      - Agent-based systems
      - Autonomous control
      - Information security systems
      - Scheduling
    • Dynamic traffic simulation
      CALIBRATION OF MESOSCOPIC TRAFFIC SIMULATION USING POPULATION-BASED EVOLUTIONARY ALGORITHMS: a methodology to calibrate dynamic traffic simulation models with real data acquired from traffic counts and travel-time measurements obtained from GPS devices.
    • Brief outlook
      - The tool used is called METROPOLIS.
      - No mathematical function is involved, hence the need for simulation arises: a simulation of real-world conditions.
      - A simulation of real-time traffic is processed in the model, which produces different indicators as output; one of the indicators is the travel time along the defined paths in a network.
      - Initially tested on toy networks (small networks).
    • Computational Details
      Programmed in the following way:
      - A GUI platform developed in Java, which works like a compiler for optimization.
      - The main features of the platform are:
        - Any EA can be embedded.
        - Any problem can be optimized.
      (Figure: the algorithm and the optimisation problem both plug into the platform.)
    • Overall framework
      (Figure: the platform randomly generates traffic variables, distributes the fitness calculations across several nodes (NODE-1 to NODE-4), collects the fitness values, and assembles the results.)