A temporal classifier system using spiking neural networks
Gerard David Howard, Larry Bull & Pier-Luca Lanzi
{david4.howard, larry.bull}@uwe.ac.uk, pierluca.lanzi@polimi.it
Contents
- Intro & motivation
- System architecture: Spiking XCSF
- Constructivism (nodes and connections)
- Working in continuous space; comparison to MLP / Q-learner
- Taking time into consideration; comparison to MLP
- Simulated robotics
Motivation
Many real-world tasks involve continuous space and continuous time. Autonomous robotics remains an open question: robots will require some degree of knowledge "self-shaping", i.e. control over their internal knowledge representation. We introduce an LCS containing spiking networks and demonstrate the usefulness of the representation: it handles continuous space and continuous time, and its structure depends on the environment.
XCSF
Includes computed prediction, calculated from the input state (augmented by a constant x0) and a weight vector; each classifier has its own weight vector. Weights are updated linearly using a modified delta rule. Main differences from the canonical system: an SNN replaces the condition and calculates the action; self-adaptive parameters give autonomous learning control; network topology is altered in the GA cycle. Generalisation comes from computed prediction, computed actions and network topologies.
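As a minimal sketch of the computed-prediction idea (illustrative Python, not the authors' code): the prediction is a linear function of the state augmented by x0, and the weight vector is corrected with the normalised ("modified") delta rule.

```python
import numpy as np

def computed_prediction(weights, state, x0=1.0):
    """Classifier prediction: dot product of the weight vector with (x0, state)."""
    x = np.concatenate(([x0], state))
    return float(np.dot(weights, x))

def delta_rule_update(weights, state, target, eta=0.2, x0=1.0):
    """Modified delta rule: the correction is normalised by |x|^2, as in standard XCSF.
    'eta' is an illustrative correction rate."""
    x = np.concatenate(([x0], state))
    error = target - float(np.dot(weights, x))
    return weights + (eta / float(np.dot(x, x))) * error * x
```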
Spiking networks
Spiking networks have temporal functionality. We use Integrate-and-Fire (IAF) neurons: each neuron has a membrane potential (m) that varies through time, and when m exceeds a threshold the neuron sends a spike to every neuron it has a forward connection to and resets m. Membrane potential is a way of implementing memory.
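A minimal discrete-time sketch of an integrate-and-fire neuron, assuming a simple multiplicative leak and reset-to-zero on spiking (the exact constants used in the slides are not given here):

```python
class IAFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.m = 0.0              # membrane potential persists between steps: a form of memory
        self.threshold = threshold
        self.leak = leak          # assumed simple decay factor

    def step(self, weighted_input):
        """Integrate the weighted input; spike and reset m when the threshold is exceeded."""
        self.m = self.m * self.leak + weighted_input
        if self.m > self.threshold:
            self.m = 0.0
            return 1              # spike sent to every forward-connected neuron
        return 0
```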
Spiking networks (cont.)
An IAF spiking network replaces the condition and action, with 2 input nodes (one per state element) and 3 output nodes. Each input state is processed 5 times by the network, so the neural outputs are spike trains: an output node is HIGH if it produces >= 3 spikes in the 5-element output window, otherwise LOW. The classifier does not match the state if the "don't match" (!M) node is HIGH. Example from the slide: L = 00101 (LOW), R = 01110 (HIGH), !M = 10001 (LOW).
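A sketch of the output decoding just described, assuming a hypothetical `network.step(state)` that returns one 0/1 spike per output node: the same state is presented 5 times, each output's spike count is thresholded at 3 to give HIGH/LOW, and the classifier fails to match when the !M node is HIGH.

```python
def decode_outputs(network, state, window=5, high_at=3):
    """Present the state 'window' times; threshold each output node's spike count."""
    counts = [0, 0, 0]                          # output nodes: L, R, !M
    for _ in range(window):
        spikes = network.step(state)            # hypothetical helper: one 0/1 spike per output node
        counts = [c + s for c, s in zip(counts, spikes)]
    levels = ["HIGH" if c >= high_at else "LOW" for c in counts]
    matches = levels[2] == "LOW"                # classifier does not match if the !M node is HIGH
    return levels[:2], matches                  # (action bits, does-this-classifier-match)
```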
Self-adaptive parameters
During a GA cycle, a parent's µ value is copied to its offspring and altered: µ ← µ · e^N(0,1). The offspring then applies its own µ to itself (bounded to [0, 1]) before being inserted into the population. This is similar to ES mutation-rate adaptation.
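A sketch of the self-adaptive mutation step (log-normal, ES-style); the per-weight mutation itself is illustrative and assumes µ acts as a per-weight mutation probability:

```python
import math
import random

def adapt_mu(parent_mu):
    """Offspring copies the parent's mu, perturbs it as mu <- mu * e^N(0,1), bounds to [0, 1]."""
    mu = parent_mu * math.exp(random.gauss(0.0, 1.0))
    return min(max(mu, 0.0), 1.0)

def mutate_weights(weights, mu, sigma=0.1):
    """The offspring applies its own (already adapted) mu to its weights before insertion.
    'sigma' is an assumed perturbation size."""
    return [w + random.gauss(0.0, sigma) if random.random() < mu else w for w in weights]
```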
Constructivism
Neural constructivism: interaction with the environment guides the learning process by growing/pruning dendritic connectivity. Here, constructivism can add or remove neurons (with randomly initialised weights) from the hidden layer during a GA event. Two new self-adaptive values control NC: ψ (the probability of a constructivism event occurring) and ω (the probability of adding rather than removing a node). These are modified during a GA cycle in the same way as µ.
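A sketch of a constructivism event during the GA, assuming ψ and ω are compared against uniform random draws and that the network exposes simple add/remove helpers (hypothetical names):

```python
import random

def constructivism_event(network, psi, omega):
    """With probability psi, change the hidden layer: add a node with probability omega,
    otherwise remove one (new nodes get randomly initialised weights)."""
    if random.random() < psi:
        if random.random() < omega:
            network.add_hidden_node()            # hypothetical helper
        elif network.num_hidden_nodes() > 1:     # keep at least one hidden node
            network.remove_random_hidden_node()  # hypothetical helper
```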
Connection selection
Automatic feature selection is often used in conjunction with neural networks, reducing the number of inputs to only the highest-utility features. We apply selection to every connection in a network: a connection is enabled/disabled on satisfaction of a new self-adaptive parameter τ. All connections are initially enabled; connections created via node addition are enabled with 50% probability per connection.
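A sketch of connection selection, assuming each connection carries a boolean "enabled" flag that is toggled with probability τ during reproduction:

```python
import random

def mutate_connection_mask(enabled, tau):
    """Toggle each connection's enabled flag with probability tau."""
    return [(not flag) if random.random() < tau else flag for flag in enabled]

def initial_flag_for_new_connection():
    """Connections created via node addition are enabled with 50% probability."""
    return random.random() < 0.5
```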
Effects of SA, NC & CS
Self-adaptation allows the system to control the amount of search taking place in an environmental niche without having to predetermine suitable parameter values. Neural constructivism allows a classifier to automatically grow networks to match task complexity. Connection selection further tailors each network's connectivity, keeping only the highest-utility connections.
Continuous Grid World
A two-dimensional continuous environment running from 0 to 1 on both the x and y axes. The goal state is where x + y > 1.9; darker regions of the grid represent higher expected payoff. Reaching the goal returns a reward of 1000, else 0. The agent starts randomly anywhere except the goal state and aims to reach the goal (moving 0.05 per step) in the fewest possible steps (average optimum: 18.6).
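A sketch of the continuous grid world as described (step size 0.05, goal where x + y > 1.9, reward 1000 at the goal and 0 otherwise); the exact boundary handling is an assumption:

```python
import random

STEP = 0.05
MOVES = {"N": (0.0, STEP), "S": (0.0, -STEP), "E": (STEP, 0.0), "W": (-STEP, 0.0)}

class ContinuousGridWorld:
    def reset(self):
        """Start anywhere except the goal region."""
        while True:
            self.x, self.y = random.random(), random.random()
            if self.x + self.y <= 1.9:
                return (self.x, self.y)

    def step(self, action):
        dx, dy = MOVES[action]
        self.x = min(max(self.x + dx, 0.0), 1.0)   # assumed: clip at the grid edges
        self.y = min(max(self.y + dy, 0.0), 1.0)
        at_goal = self.x + self.y > 1.9
        return (self.x, self.y), (1000.0 if at_goal else 0.0), at_goal
```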
Discrete movement
The agent makes a single discrete movement (N, E, S, W) per step: N = (HIGH, HIGH), E = (HIGH, LOW), etc. Experimental parameters: N = 20000, γ = 0.95, β = 0.2, ε0 = 0.005, θGA = 50, θDEL = 50; other XCSF parameters as normal. Initial prediction error of new classifiers = 0.01, initial fitness = 0.1. An additional trial from a fixed location lets us perform t-tests; "stability" records the first step at which 50 consecutive trials reach the goal from this location.
Discrete movement: results
- Fewer macroclassifiers = greater generalisation
- Lower mutation rate = more stable evolutionary process
Continuous duration actions
Reward was previously calculated with a single discount; it is now calculated with two discount factors that favour overall effectiveness and efficient state transitions respectively (the first parameter = 0.05, ρ = 0.1), where tt = total steps for the entire trial and ti = duration of a single action. Timeout = 20; the new optimal steps-to-goal is 1.5.
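The slide's reward equations are not reproduced in this transcript; purely as an illustration, here is a sketch assuming exponential discounting in both the total trial length tt and the action duration ti (the authors' exact functional form may differ):

```python
import math

def tcs_reward(goal_reward, t_trial, t_action, alpha=0.05, rho=0.1):
    """Illustrative two-factor discount: one term favours overall effectiveness (short trials),
    the other efficient state transitions (short individual actions). Assumed form only."""
    return goal_reward * math.exp(-alpha * t_trial) * math.exp(-rho * t_action)
```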
Continuous Grid World with TCS: results
- Lower mutation rate = more stable evolutionary process
Smaller step size
- A tabular Q-learner cannot learn: too many (s, a) combinations and long action chains!
- Spiking non-TCS cannot learn: too many (s, a) combinations and long action chains!
- MLP TCS cannot learn: lack of memory?
- Spiking TCS can learn to solve this environment optimally by extending an action set across multiple states and recalculating actions where necessary, aided by the temporal element of the networks.
Mountain-car
- Guide a car out of a valley, sometimes requiring non-obvious behaviour (environment sketch below).
- State comprises position and velocity.
- Actions increase/decrease velocity: HIGH/HIGH = increase, LOW/LOW = decrease, anything else = no change.
- Noise! (+/- 5% of both state elements)
- TCS optimal steps to goal = 1
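A sketch of the mountain-car task using the standard dynamics and bounds (Sutton & Barto constants); the authors' exact setup, reward and noise handling may differ. The +/-5% observation noise is applied here as stated on the slide.

```python
import math
import random

class MountainCar:
    def reset(self):
        self.pos, self.vel = -0.5, 0.0
        return self._observe()

    def step(self, action):
        """action in {-1, 0, +1}: decrease velocity, no change, increase velocity."""
        self.vel += 0.001 * action - 0.0025 * math.cos(3.0 * self.pos)
        self.vel = min(max(self.vel, -0.07), 0.07)
        self.pos = min(max(self.pos + self.vel, -1.2), 0.6)
        done = self.pos >= 0.6                    # car has escaped the valley
        return self._observe(), (1000.0 if done else 0.0), done

    def _observe(self):
        """+/- 5% multiplicative noise on both state elements."""
        jitter = lambda v: v * (1.0 + random.uniform(-0.05, 0.05))
        return (jitter(self.pos), jitter(self.vel))
```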
Simulated robotics
- We simulate a Khepera robot that uses 3 IR and 3 light sensors as its input state (a sketch of the control loop follows below).
- Two bump sensors detect a collision; on collision the robot reverses and [M] is reformed.
- Task: similar to the grid world, but with an obstacle to avoid and a light source to reach.
- The robot's start position is constrained so that the obstacle is always between it and the light source.
- Movement is much more granular than the grid world's 0.05 step!
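A sketch of the control loop implied by these slides; all helper names (`read_ir`, `reform_match_set`, ...) are hypothetical stand-ins, not the authors' API:

```python
def build_state(ir_readings, light_readings):
    """Input state: 3 IR readings plus 3 light-sensor readings."""
    return list(ir_readings) + list(light_readings)

def control_step(robot, system):
    state = build_state(robot.read_ir(), robot.read_light())
    action = system.act(state)                    # action computed by the spiking networks in [A]
    robot.move(action)
    if robot.bump_triggered():                    # collision detected by the bump sensors
        robot.reverse()                           # back the robot away from the obstacle
        system.reform_match_set(build_state(robot.read_ir(), robot.read_light()))
```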
[Figure: robotics results, plotting steps to goal, connected hidden-layer nodes and percentage of enabled connections, with the self-adaptive parameters μ, ψ and τ on the right-hand axis.]
Initially seeding networks with 6 hidden-layer nodes still lets us use connection selection to generate behavioural variation in the networks.
The temporal functionality of the networks is exploited so that a single action set can: drop unwanted classifiers to change its favoured action at specific points (e.g. just before a collision), or alter the advocated action of a majority of classifiers in [A] for the same effect.
Thanks for your time!