Methods of Combining Neural Networks and Genetic Algorithms

Transcript

  • 1. Methods of Combining Neural Networks and Genetic Algorithms: A Tutorial
    Talib S. Hussain, Queen's University, hussain@qucis.queensu.ca
    • Introduction to NNs and GAs
    • Approaches to Combining NNs and GAs
      • Supportive: Applied to different stages of the problem
      • Collaborative: Applied concurrently to the entire problem
    • Issues in Research and Applications
      • Baldwin Effect: Learning guides evolution
      • Generalisation: Must avoid over-specialising
      • Genetic Encoding: Wide variety of methods
  • 2. Brief Refresher
    Neural Networks
    • Learning technique
    • Neurons with weighted connections
    • Learning through weight changes
    • Represent a large class of functions
    • Highly biased search
    Genetic Algorithms
    • Optimisation technique
    • Populations of similar solutions
    • Survival of the fittest
    • Propagation by mutation and crossover
    • Weakly biased search
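The refresher slide lists the ingredients of a genetic algorithm (a population, fitness-based survival, mutation and crossover) without showing the loop that ties them together. The following is a minimal sketch in Python, not material from the tutorial: the real-valued genome, the toy "sum of genes" fitness function and all parameter values are illustrative assumptions.

    # Minimal sketch of a genetic algorithm. The genome encoding, fitness
    # function and parameter values are illustrative assumptions.
    import random

    POP_SIZE, GENOME_LEN, GENERATIONS = 20, 8, 50
    MUTATION_RATE = 0.1

    def fitness(genome):
        # Toy objective: larger gene values are fitter.
        return sum(genome)

    def crossover(parent_a, parent_b):
        # Single-point crossover produces one child genome.
        point = random.randint(1, GENOME_LEN - 1)
        return parent_a[:point] + parent_b[point:]

    def mutate(genome):
        # Each gene is perturbed with a small probability.
        return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
                for g in genome]

    population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        # Survival of the fittest: keep the better half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        # Propagation by crossover and mutation refills the population.
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print("Best fitness:", fitness(max(population, key=fitness)))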
  • 3. Collaborative Combination Methods
    Evolution of Connection Weights
    • GA optimises specific NN weights
    • GA used as the learning rule of the NN
    • Population of NNs with the same topology but different weights
    • Pro: May converge faster than gradient descent; less susceptible to local minima
    • Con: Highly inefficient in space and time
    Evolution of Architectures
    • GA optimises general NN structural parameters
    • GA applied in conjunction with neural learning
    • Population of NNs with different topologies
    • Pro: Not limited to a fixed topology; examines a wide variety of solutions
    • Con: Convergence dependent upon the genetic representation; may be highly inefficient in space and/or time
    Evolution of Learning Rules
    • GA optimises general NN structural and learning parameters
    • GA applied in conjunction with (variable) neural learning
    • Population of NNs with different topologies and learning methods
    • Pro: Not limited to a fixed topology or learning rule; applicable to a wide range of problems
    • Con: Techniques are new, few and untested; probably highly inefficient in time
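The first collaborative method on this slide, evolution of connection weights, uses the GA itself as the network's learning rule: each individual in the population is a complete weight vector for one fixed topology, and fitness is the network's error on the task. The sketch below is one possible reading of that idea, not code from the tutorial; the 2-2-1 feed-forward topology, the XOR task, uniform crossover, Gaussian mutation and all parameter values are assumptions made for illustration.

    # Sketch of "evolution of connection weights": a GA searches the weight
    # space of a fixed-topology 2-2-1 network instead of gradient descent.
    # Topology, task and parameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)          # XOR targets
    N_WEIGHTS = 2 * 2 + 2 + 2 * 1 + 1                # weights and biases of a 2-2-1 net

    def forward(weights, x):
        # Decode the flat genome into layer weights and run the network.
        W1 = weights[0:4].reshape(2, 2)
        b1 = weights[4:6]
        W2 = weights[6:8].reshape(2, 1)
        b2 = weights[8]
        h = np.tanh(x @ W1 + b1)
        return 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid output

    def fitness(weights):
        # Higher fitness means lower mean squared error on the task.
        preds = forward(weights, X).ravel()
        return -np.mean((preds - y) ** 2)

    POP, GENS = 50, 200
    population = rng.normal(0, 1, size=(POP, N_WEIGHTS))

    for _ in range(GENS):
        scores = np.array([fitness(ind) for ind in population])
        order = np.argsort(scores)[::-1]
        parents = population[order[:POP // 2]]
        # Uniform crossover plus Gaussian mutation creates the next generation.
        idx_a = rng.integers(0, len(parents), POP - len(parents))
        idx_b = rng.integers(0, len(parents), POP - len(parents))
        mask = rng.random((POP - len(parents), N_WEIGHTS)) < 0.5
        children = np.where(mask, parents[idx_a], parents[idx_b])
        children = children + rng.normal(0, 0.1, children.shape)
        population = np.vstack([parents, children])

    best = max(population, key=fitness)
    print("Network outputs:", np.round(forward(best, X).ravel(), 2))

Note that the same loop illustrates the slide's "Con": every individual stores a full copy of the weight vector and must be evaluated on the whole task each generation, which is why the approach is described as inefficient in space and time compared with gradient descent.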