2. Introduction
• Evolution and lifetime learning combine to create
the capabilities of animal brains
– Investigating the role of Neuro-Evolution in deep learning
• Neuro-Evolution
– To train the feature extractor
• Deep learning
– To learn from the extracted features
3. Neuro-Evolution
• What is HyperNEAT (Hypercube-based NeuroEvolution of
Augmenting Topologies)?
• Weights of the ANN are generated as a function of geometry
(see the sketch below)
• Evolves both the topology and weights of a network to maximize
performance
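A minimal sketch of the "weights as a function of geometry" idea. A fixed-form toy function stands in for the CPPN here; in real HyperNEAT, evolution shapes the CPPN's own topology and weights, and the parameters below are invented for illustration.

```python
import numpy as np

# Toy stand-in for a CPPN: maps the coordinates of two substrate neurons
# to a connection weight. The functional form and `params` are assumptions
# for illustration; HyperNEAT evolves the CPPN itself.
def cppn(x1, y1, x2, y2, params):
    a, b, c = params                       # evolved parameters (assumed here)
    dist = np.hypot(x2 - x1, y2 - y1)      # geometric relationship
    return a * np.sin(b * dist) + c * (x1 * x2 + y1 * y2)

# Query the CPPN once per source/target pair to fill a weight matrix.
src = [(x, y) for x in (-1, 0, 1) for y in (-1, 0, 1)]   # 3x3 source sheet
tgt = [(x, y) for x in (-1, 1) for y in (-1, 1)]          # 2x2 target sheet
params = (1.0, 2.0, 0.5)
W = np.array([[cppn(*s, *t, params) for s in src] for t in tgt])
print(W.shape)   # (4, 9): one weight per connection, all from one small genome
```

The point of the encoding: a compact genome (the CPPN) generates arbitrarily many weights, because weights are computed from neuron geometry rather than stored individually.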
4. Deep learning
• An algorithm that makes an ANN learn multiple levels of
representation, corresponding to different levels of abstraction.
• CNNs learn features based on locality (a convolution sketch follows below).
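A minimal sketch of the locality idea behind CNN features: each output unit sees only a small neighborhood of the input, and the same filter is reused at every location. The filter values and image are arbitrary illustrations, not anything from the experiments.

```python
import numpy as np

# Valid-mode 2D convolution written out by hand to make locality explicit.
def conv2d_valid(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]   # local receptive field
            out[i, j] = np.sum(patch * kernel)  # same weights at every location
    return out

image = np.random.rand(28, 28)                  # one MNIST-sized input
edge_filter = np.array([[1, 0, -1]] * 3)        # toy vertical-edge detector
features = conv2d_valid(image, edge_filter)
print(features.shape)                           # (26, 26) feature map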
6. Experimental set-up
• MNIST data set
– 60,000 training and 10,000 test images (28 × 28 pixels)
• 10 classes (0-9)
• HyperNEAT in 4 flavors (a 2 × 2 design, enumerated below)
– HyperNEAT with traditional ANN architecture
– HyperNEAT with CNN architecture
• Learning to classify images by itself
– ANN for image classification
• Acting as a feature learner
– ANN that transforms images into features
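The four flavors are simply the cross product of substrate architecture and role; this snippet only enumerates the design from the slide.

```python
from itertools import product

# 2x2 experimental design: {architecture} x {role}.
architectures = ["traditional ANN", "CNN"]
roles = ["classifier (learns to classify images itself)",
         "feature learner (transforms images into features)"]
for arch, role in product(architectures, roles):
    print(f"HyperNEAT + {arch} as {role}")
```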
7. Experimental set-up
• Architecture of the HyperNEAT substrate (see the sketch after this list)
– Multiple layers stacked along the z-axis
– Each layer holds a set of features
– Each layer is represented by a triple (X, Y, F), where
• F is the number of features
• X, Y are the pixel dimensions
– The CPPN is queried for the weight of each neuron connection
• Neurons are located at particular (x, y, f, z) coordinates
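A minimal sketch of querying a CPPN over (x, y, f, z) coordinates to fill the weights between two adjacent substrate layers. A random projection stands in for the evolved CPPN, and the layer triples are scaled down from the slide's (28, 28, 1) and (16, 16, 3) to keep the demo fast.

```python
import numpy as np

layers = [(8, 8, 1), (4, 4, 3)]                  # scaled-down (X, Y, F) triples

def coords(X, Y, F, z):
    """(x, y, f, z) coordinates of every neuron in one layer, in [-1, 1]."""
    xs = np.linspace(-1, 1, X)
    ys = np.linspace(-1, 1, Y)
    fs = np.linspace(-1, 1, F) if F > 1 else [0.0]
    return [(x, y, f, z) for f in fs for y in ys for x in xs]

rng = np.random.default_rng(0)
proj = rng.normal(size=8)                        # stand-in for the evolved CPPN

def query_cppn(src, dst):
    """Weight of one connection as a function of the 8-D coordinate pair."""
    return float(np.tanh(np.dot(proj, src + dst)))

W = np.array([[query_cppn(s, d)
               for s in coords(*layers[0], z=0.0)]
              for d in coords(*layers[1], z=1.0)])
print(W.shape)                                   # (48, 64): target x source
```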
8. Experimental set-up
• HyperNEAT with traditional ANN architecture
– An eight-layer neural network
• 1 input, 6 hidden, and 1 output layer
• (28, 28, 1), (16, 16, 3), (8, 8, 3), (6, 6, 8), (3, 3, 8), (1, 1, 100),
(1, 1, 64), and (1, 1, 10)
– Each layer is fully connected to the adjacent layers, and
each neuron has a bipolar sigmoid activation function (sketched below)
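A minimal sketch of a fully connected forward pass with the bipolar sigmoid named on the slide, f(x) = 2 / (1 + e^(-x)) - 1, whose outputs lie in (-1, 1). The random weights here merely stand in for those HyperNEAT would generate; only the first two layer sizes from the slide are used.

```python
import numpy as np

def bipolar_sigmoid(x):
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def forward(layer_weights, x):
    """Propagate through fully connected layers (weights from HyperNEAT)."""
    for W in layer_weights:
        x = bipolar_sigmoid(W @ x)
    return x

rng = np.random.default_rng(0)
# Stand-in weights for the slide's first two layers:
# (28, 28, 1) -> (16, 16, 3), i.e. 784 inputs to 768 units.
W1 = rng.normal(scale=0.1, size=(16 * 16 * 3, 28 * 28 * 1))
x = rng.random(28 * 28)                      # one flattened MNIST image
h = forward([W1], x)
print(h.shape, h.min() > -1, h.max() < 1)    # (768,) True True
```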
10. Experimental set-up
• To act as a feature extractor
– HyperNEAT with traditional ANN architecture
• The (1, 1, 100) layer becomes the new output layer
– HyperNEAT with CNN architecture
• The (1, 1, 120) layer becomes the new output layer
• The feature vectors are fed to backpropagation (BP), which trains the
modified network (a sketch follows below)
• After evolution completes, the generation champions are evaluated
on the MNIST test set (10,000 images)
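A minimal sketch of the feature-extractor pipeline, under stated assumptions: `extract_features` stands in for the truncated substrate (output taken at the 100-unit feature layer), and BP is reduced here to gradient descent on a softmax output layer, which may differ from the paper's exact BP setup. The data is random, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
proj = rng.normal(scale=0.05, size=(100, 784))       # stand-in substrate

def extract_features(images):
    """Truncated network: images -> 100-D feature vectors."""
    return np.tanh(images @ proj.T)

def train_softmax(feats, labels, iters=250, lr=0.1):
    """BP stand-in: gradient descent on a softmax layer over the features."""
    n, d = feats.shape
    W = np.zeros((10, d))
    onehot = np.eye(10)[labels]
    for _ in range(iters):                           # 250 BP iterations
        logits = feats @ W.T
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * (p - onehot).T @ feats / n         # cross-entropy gradient
    return W

images = rng.random((300, 784))                      # 300 training images
labels = rng.integers(0, 10, size=300)
W = train_softmax(extract_features(images), labels)
```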
12. Results
• HyperNEAT as a non-feature extractor
– Fitness is determined by applying the ANN substrate to
the training images (a sketch follows below)
• HyperNEAT as a feature extractor
– Fitness is the test performance of the BP-trained network
• Trained for 250 BP iterations on 300 training images
• Tested on 1,000 training images
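A self-contained sketch of the non-feature-extractor fitness: apply a stand-in substrate (a fixed random linear map here, not an evolved one) to training images and score classification accuracy. The images, labels, and substrate are all toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W_substrate = rng.normal(scale=0.05, size=(10, 784))   # stand-in substrate

def fitness(images, labels):
    """Accuracy of the substrate's predictions serves as fitness."""
    preds = np.argmax(images @ W_substrate.T, axis=1)
    return float(np.mean(preds == labels))

images = rng.random((300, 784))                        # toy training batch
labels = rng.integers(0, 10, size=300)
print(fitness(images, labels))                         # chance level ≈ 0.1
```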