Jason Tsai (蔡志順)
Oct. 11, 2018 @台灣人工智慧學校 新竹分校
INTRODUCTION TO SPIKING NEURAL
NETWORKS
*Picture adopted from
https://bit.ly/2Rh7cYy
*Copyright Notice:
All figures in this presentation are taken from
the quoted sources as mentioned in the
respective slides and their copyright belongs
to the owners. This presentation itself adopts
Creative Commons license.
Neural Networks 3D Simulation
(Video demo)
*Video from https://youtu.be/3JQ3hYko51Y
Questions
 What are the advantages of spiking
neural networks and neuromorphic
computing?
 What are the current challenges of spiking neural networks (SNNs)?
Characteristics of SNNs
 Spatio-temporal
 Asynchronous
 Sparsity
 Additive weight operation
 Energy-efficient
 Stochastic
 Robust to noise
Outline
• Basic neuroscience
• Learning algorithms
• Neuron models
• Neural encoding schemes
• SNN examples (papers)
• Neuromorphic platforms
Prerequisite
Neuroscience
Nerve Cell
(Neuron)
*Picture taken from https://en.wikipedia.org/wiki/Neuron
Neuron’s Spike: Action Potential
*Figure adopted from https://en.wikipedia.org/wiki/Action_potential & The front cover of
“Spikes: Exploring the Neural Code (1999)”
The Effect of Presynaptic Spikes on
Postsynaptic Neuron
*Figure adopted from Wulfram Gerstner & Werner M. Kistler. Spiking Neuron Models:
Single Neurons, Populations, Plasticity. Cambridge University Press. 2002. Page 5.
The Firing of a Leaky Integrate-and-
Fire Model Neuron
*Figure adopted from https://doi.org/10.1371/journal.pone.0001377
Hebb’s Learning Postulate
 "When an axon of cell A is near enough to excite a cell B and
repeatedly or persistently takes part in firing it, some growth
process or metabolic change takes place in one or both cells such
that A's efficiency, as one of the cells firing B, is increased."*
* Refer to Donald O. Hebb, The Organization of Behavior: A Neuropsychological Theory. 1949 & 2002. Page 62.
 Causality
 Repetition
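The postulate above is often summarized as "cells that fire together wire together": a synapse strengthens in proportion to correlated pre- and postsynaptic activity. A minimal rate-based sketch of such a correlation-driven update (the learning rate and network size are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                         # learning rate (assumed)
w = rng.normal(0.0, 0.1, size=5)   # weights from 5 presynaptic cells onto one cell

for _ in range(100):
    pre = rng.random(5)            # presynaptic activity levels
    post = float(w @ pre)          # simple linear postsynaptic response
    w += eta * pre * post          # Hebb: jointly active pre and post strengthen the synapse
```

In practice a decay or normalization term (e.g. Oja's rule) is added, since the pure Hebbian update only grows weights and never bounds them.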
Long-Term Potentiation (LTP) / Long-
Term Depression (LTD)
 LTP is a long-lasting, activity-dependent increase in synaptic
strength that is a leading candidate as a cellular mechanism
contributing to memory formation in mammals in a very
broadly applicable sense.*
* Refer to J. David Sweatt. Mechanisms of Memory, Second Edition. Academic Press. 2010. Page 112.
Synaptic Plasticity
*Figure adopted from Wulfram Gerstner & Werner M. Kistler. Spiking Neuron Models:
Single Neurons, Populations, Plasticity. Cambridge University Press. 2002. Page 353.
Back-propagating Action Potential (bAP)
*Further reading: https://en.wikipedia.org/wiki/Neural_backpropagation
Induction of tLTP requires activation of the presynaptic
input milliseconds before the bAP in the postsynaptic
dendrite.
*Figure adopted from https://doi.org/10.3389/fnsyn.2011.00004
Spike-Timing-Dependent Plasticity
(STDP)
Experimental Evidence of STDP
 From Wikipedia:
“Henry Markram, when he was in Bert Sakmann's lab and published their
work in 1997, used dual patch clamping techniques to repetitively
activate pre-synaptic neurons 10 milliseconds before activating the post-
synaptic target neurons, and found the strength of the synapse
increased. When the activation order was reversed so that the pre-
synaptic neuron was activated 10 milliseconds after its post-synaptic
target neuron, the strength of the pre-to-post synaptic connection
decreased.
Further work, by Guoqiang Bi, Li Zhang, and Huizhong Tao in Mu-Ming
Poo's lab in 1998, continued the mapping of the entire time course
relating pre- and post-synaptic activity and synaptic change, to show that
in their preparation synapses that are activated within 5-20 ms before a
postsynaptic spike are strengthened, and those that are activated within a
similar time window after the spike are weakened.”
*Further reading: https://en.wikipedia.org/wiki/Spike-timing-dependent_plasticity
Lateral Inhibition
Lateral inhibition is a Central Nervous System process whereby
application of a stimulus to the center of the receptive field excites a
neuron, but a stimulus applied near the edge inhibits it.
*Figure adopted from https://bit.ly/2yaat37
Lateral Inhibition
(Cont’d)
*Figure adopted from http://wei-space.blogspot.tw/2007/11/lateral-inhibition.html
& https://en.wikipedia.org/wiki/Lateral_inhibition
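A toy numerical illustration of the effect described above: each unit is inhibited in proportion to the activity of the others, so differences in drive are exaggerated and weakly driven units near the edges are suppressed (the stimulus values and inhibition strength are invented for illustration):

```python
import numpy as np

stimulus = np.array([0.2, 0.9, 1.0, 0.8, 0.3])   # feedforward drive to 5 neighboring neurons
inhibition = 0.1                                 # lateral inhibition strength (assumed)

# Each neuron is suppressed in proportion to the summed activity of all the others.
lateral = inhibition * (stimulus.sum() - stimulus)
response = np.maximum(stimulus - lateral, 0.0)
print(response)   # contrast is enhanced: the strongly driven center survives, the weak flanks fade
```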
Hierarchical Sparse Distributed
Representations in Visual Cortex
*Figure adopted from https://bit.ly/2Ov5qV2 & https://bit.ly/2xTS1fw
Dopamine: Essential for Reward
Processing in Mammalian Brain
*Figure adopted from http://www.jneurosci.org/content/29/2/444
Dopamine neurons form a huge number of synaptic contacts onto their targets!
Learning Rule
Two Hot Approaches
 Supervised: Stochastic Gradient Descent
based Backpropagation learning rule
(Treat the membrane potentials of spiking neurons as
differentiable signals, where discontinuities at spike
times are considered as noise.*)
 Unsupervised: STDP (Spike-Timing-Dependent Plasticity) based learning rule
*Refer to Jun Haeng Lee, et al., Training Deep Spiking Neural Networks Using Backpropagation.
Frontiers in Neuroscience, 08 November 2016. https://doi.org/10.3389/fnins.2016.00508
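The parenthetical note above is the core trick behind most gradient-based SNN training: keep the hard spike nonlinearity in the forward pass, but back-propagate through a smooth surrogate of its derivative. A generic numpy sketch of that idea (a common surrogate-gradient formulation, not necessarily the exact one used in the cited paper):

```python
import numpy as np

def spike(v, theta=1.0):
    """Forward pass: hard threshold, non-differentiable at v == theta."""
    return (v >= theta).astype(float)

def surrogate_grad(v, theta=1.0, beta=5.0):
    """Backward pass: derivative of a sigmoid centered on the threshold."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - theta)))
    return beta * s * (1.0 - s)

v = np.linspace(0.0, 2.0, 5)       # membrane potentials
print(spike(v))                    # [0. 0. 1. 1. 1.]
print(surrogate_grad(v))           # smooth, peaked at the threshold
```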
*Refer to Yu, Q., Tang, H., Hu, J., Tan, K.C., Neuromorphic Cognitive Systems: A Learning and Memory
Centered Approach. Springer International Publishing. 2017. Page 9.
STDP Learning Rule
STDP Learning Rule (1-to-1)
*Figure adopted from http://dx.doi.org/10.7551/978-0-262-33027-5-ch037
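The figures referenced above plot the classic exponential STDP window. A minimal pair-based implementation sketch (the amplitudes and time constants are typical illustrative values, not taken from the cited chapter):

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair.

    delta_t = t_post - t_pre in ms: positive (pre before post) gives potentiation (LTP),
    negative (post before pre) gives depression (LTD)."""
    if delta_t >= 0:
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)

print(stdp_dw(+10.0))   # pre leads post by 10 ms -> positive change (LTP)
print(stdp_dw(-10.0))   # post leads pre by 10 ms -> negative change (LTD)
```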
STDP Learning Rule (2-to-1)
N0 is stimulated until N1 fires, then e0 is stopped for 30 ms.
N2 is stimulated by e2 during those 30 ms.
*Figure adopted from http://dx.doi.org/10.7551/978-0-262-33027-5-ch037
STDP Finds Spike Patterns
*Figure adopted from https://doi.org/10.1371/journal.pone.0001377
Reward-modulated STDP
*Figure adopted from https://doi.org/10.1371/journal.pcbi.1000180
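In reward-modulated STDP the pairwise STDP update is not applied immediately: it is accumulated in a slowly decaying eligibility trace at each synapse, and only a later reward (a dopamine-like signal) converts that trace into an actual weight change. A schematic, self-contained sketch (all constants are illustrative assumptions):

```python
import numpy as np

def r_stdp_step(stdp_update, reward, w, e, dt=1.0, tau_e=1000.0, lr=0.1):
    """One time step of reward-modulated STDP (schematic).

    stdp_update: weight change suggested by ordinary STDP this step
                 (e.g. from the stdp_dw window in the previous sketch).
    reward:      scalar reward / dopamine-like signal, 0 when absent."""
    e = e * np.exp(-dt / tau_e) + stdp_update   # eligibility trace: leaky accumulator of STDP
    w = w + lr * reward * e                     # reward gates the actual weight change
    return w, e

w, e = 0.5, 0.0
w, e = r_stdp_step(stdp_update=0.01, reward=0.0, w=w, e=e)  # no reward yet: weight unchanged
w, e = r_stdp_step(stdp_update=0.0, reward=1.0, w=w, e=e)   # delayed reward turns the trace into a weight change
print(w, e)
```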
Neural Modeling
1st Generation of Neuron Models
(McCulloch–Pitts Neuron Model)
*Figure adopted from http://wwwold.ece.utep.edu/research/webfuzzy/docs/kk-thesis/kk-thesis-html/node12.html
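A first-generation unit simply thresholds a weighted sum of binary inputs and emits a binary output. A minimal sketch, configured here as an AND gate:

```python
import numpy as np

def mcculloch_pitts(x, w, theta):
    """Return 1 if the weighted sum of the binary inputs reaches the threshold, else 0."""
    return int(np.dot(w, x) >= theta)

# AND gate: the unit fires only when both inputs are on.
print(mcculloch_pitts([1, 1], w=[1, 1], theta=2))   # 1
print(mcculloch_pitts([1, 0], w=[1, 1], theta=2))   # 0
```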
2nd Generation of Neuron Models
*Figure adopted from http://cs231n.github.io/neural-networks-1/
3rd Generation of Neuron Models
(Spiking Neuron Models)
*Figure adopted from http://kzyjc.cnjournals.com/html/2018/5/20180512.htm
Spiking Neuron Models
Commonly used models:
 Hodgkin-Huxley model
 Izhikevich model
 Leaky Integrate-and-Fire (LIF) model
 Spike Response model (SRM)
……
*Further reading: https://en.wikipedia.org/wiki/Biological_neuron_model
& http://www.scholarpedia.org/article/Spike-response_model
Hodgkin-Huxley Model
*Figure adopted from Wulfram Gerstner & Werner M. Kistler. Spiking Neuron Models: Single Neurons,
Populations, Plasticity. Cambridge University Press. 2002. Page 34.
Hodgkin-Huxley Model (Cont’d)
*Taken from: https://www.bonaccorso.eu/2017/08/19/hodgkin-huxley-spiking-neuron-model-python/amp/
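For reference, the membrane equation behind the two slides above, in its standard textbook form (matching the circuit in the cited figure): the capacitive current balances an external current and three ionic currents, each controlled by voltage-dependent gating variables.

```latex
C_m \frac{dV}{dt} = I_{\mathrm{ext}}
  - \bar{g}_{\mathrm{Na}}\, m^{3} h \,(V - E_{\mathrm{Na}})
  - \bar{g}_{\mathrm{K}}\, n^{4} \,(V - E_{\mathrm{K}})
  - g_{L}\,(V - E_{L}),
\qquad
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\, x
\quad \text{for } x \in \{m, h, n\}.
```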
Izhikevich Model
*Taken from: http://www.physics.usyd.edu.au/teach_res/mp/ns/doc/nsIzhikevich3.htm
Izhikevich Model (Cont’d)
*Refer to Simple Model of Spiking Neurons (2003) https://www.izhikevich.org/publications/spikes.htm
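The model above reduces to two coupled equations, v' = 0.04v² + 5v + 140 − u + I and u' = a(bv − u), with the reset v ← c, u ← u + d whenever v reaches 30 mV. A small Python sketch in the style of the published reference implementation (the default parameters are the regular-spiking set from the 2003 paper):

```python
def izhikevich(I=10.0, T=1000, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate T milliseconds of an Izhikevich neuron driven by constant current I.
    v is advanced in two 0.5 ms half-steps, as in the reference MATLAB code."""
    v, u = -65.0, b * -65.0
    spike_times, trace = [], []
    for t in range(T):                  # 1 ms time steps
        for _ in range(2):              # two 0.5 ms half-steps for numerical stability
            v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
        u += a * (b * v - u)
        if v >= 30.0:                   # spike: record, then reset
            spike_times.append(t)
            v, u = c, u + d
        trace.append(v)
    return trace, spike_times

_, spikes = izhikevich(I=10.0)
print(len(spikes), "spikes in 1 s of simulated time")
```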
Leaky Integrate-and-Fire Model
*Figure adopted from Wulfram Gerstner, Werner M. Kistler, Richard Naud and Liam Paninski “Neuronal Dynamics:
From Single Neurons to Networks and Models of Cognition” Cambridge University Press. 2014. Page 11.
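A leaky integrate-and-fire neuron integrates its input toward a threshold while leaking back toward rest, then spikes and resets. A minimal Euler-integration sketch (all constants are illustrative, not taken from the cited figure):

```python
import numpy as np

def lif(I, T=200.0, dt=0.1, tau=10.0, v_rest=-65.0, v_th=-50.0, v_reset=-65.0, R=10.0):
    """Simulate tau * dV/dt = -(V - v_rest) + R*I; emit a spike and reset when V >= v_th."""
    v = v_rest
    spikes, trace = [], []
    for step in range(int(T / dt)):
        v += dt / tau * (-(v - v_rest) + R * I)
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

_, spikes = lif(I=2.0)                 # constant suprathreshold input current
print(f"{len(spikes)} spikes in 200 ms of simulated time")
```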
Neural Coding
Hypothesized Neural Coding Schemes
 Rate Coding
 Temporal Coding
 Population Coding
 Sparse Coding
*Further reading: https://en.wikipedia.org/wiki/Neural_coding
Rate Coding
*Further reading: http://lcn.epfl.ch/~gerstner/SPNM/node7.html
Rate as a Spike Density
Rate as a Population Activity
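A small numpy sketch of the two readouts named above, applied to synthetic Poisson spike trains: a single neuron's rate estimated as a spike count over time, and the instantaneous population activity estimated as the fraction of neurons firing in each time bin (all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.001, 1.0                       # 1 ms bins, 1 s of activity
n_neurons, true_rate = 50, 20.0          # 50 neurons firing at ~20 Hz (illustrative)

# Poisson-like spike trains: spike probability per bin = rate * dt
spikes = rng.random((n_neurons, int(T / dt))) < true_rate * dt

rate_single = spikes[0].sum() / T        # temporal average for one neuron (spikes/s)
pop_activity = spikes.mean(axis=0) / dt  # population activity per bin (Hz)

print(rate_single, pop_activity.mean())  # both fluctuate around 20 Hz
```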
Temporal Coding
*Further reading: http://lcn.epfl.ch/~gerstner/SPNM/node8.html
Time-to-First-Spike
(Latency Code)
Firing at Phases Relative to an Oscillation
Interspike synchrony
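A time-to-first-spike (latency) code can be sketched in one line: stronger inputs fire earlier. The linear mapping and the 100 ms window below are arbitrary illustrative choices:

```python
import numpy as np

def latency_encode(x, t_max=100.0):
    """Map normalized intensities x in [0, 1] to single spike times in ms:
    the strongest input spikes first, the weakest spikes last."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - x)

print(latency_encode([1.0, 0.5, 0.1]))   # [ 0. 50. 90.] -> strongest input fires first
```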
Population Coding
*Figure adopted from https://doi.org/10.1038/35039062
Sparse Coding
*Figure adopted from http://brainworkshow.sparsey.com/measuring-similarity-in-localist-vs-distributed-representations/
Sparse Coding with Inhibitory Neurons
 Population sparseness: Few neurons are
active at any given time
 Lifetime sparseness: Individual neurons
are responsive to few specific stimuli
*Figure adopted from https://doi.org/10.1523/JNEUROSCI.4188-12.2013
Sparse Coding Example based on
Linear Generative Model
*Slide credit: Andrew Ng
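The linear generative model behind the example above reconstructs each input as a linear combination of dictionary elements while penalizing the coefficients, so that only a few of them are active. In its common L1-regularized form (with the dictionary columns constrained to unit norm):

```latex
\min_{D,\; a^{(i)}} \;\; \sum_i \left\lVert x^{(i)} - D\, a^{(i)} \right\rVert_2^2
  \; + \; \lambda \sum_i \left\lVert a^{(i)} \right\rVert_1 .
```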
SNN Examples
Refer to “Milad Mozafari, et al., First-spike-based
visual categorization using reward-modulated STDP
(2018)” https://doi.org/10.1109/TNNLS.2018.2826721
Gabor Filter
*Further reading: https://en.wikipedia.org/wiki/Gabor_filter
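For reference, the real part of the standard 2-D Gabor filter (a Gaussian envelope multiplied by an oriented sinusoidal carrier), as given in the linked article; θ is the orientation, λ the wavelength, ψ the phase, σ the envelope width, and γ the aspect ratio:

```latex
g(x, y) = \exp\!\left(-\frac{x'^{2} + \gamma^{2} y'^{2}}{2\sigma^{2}}\right)
          \cos\!\left(\frac{2\pi x'}{\lambda} + \psi\right),
\qquad
x' = x\cos\theta + y\sin\theta, \quad y' = -x\sin\theta + y\cos\theta .
```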
Model Architecture
Formulae
An S2 neuron fires if its membrane potential V_i(t) reaches the threshold
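Schematically, this corresponds to a non-leaky integrate-and-fire accumulation (a generic form consistent with the slide text; see the cited paper for the exact formulation): the potential of S2 neuron i is the sum of the weights of all afferents j that have spiked before time t, and the neuron emits a spike the first time this sum crosses the threshold.

```latex
V_i(t) \;=\; \sum_{j} \; \sum_{t_j^{f} < t} w_{ij},
\qquad
\text{neuron } i \text{ fires at the first } t \text{ with } V_i(t) \ge V_{\mathrm{th}} .
```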
R-STDP-based Learning Rule
Temporal Discrimination
Refer to “Luziwei Leng, et al., Spiking neurons with
short-term synaptic plasticity form superior
generative networks (2018)”
https://doi.org/10.1038/s41598-018-28999-2
Spiking Network Sampling
Generating Oriented Bars
t-SNE Plots
Modeling an Imbalanced Dataset
Neuromorphic
Computing
Categories of AI Chips
 AI Accelerator
 GPU
 FPGA
 ASIC
 Neuromorphic chip
 Network-on-Chip
 Memory-based
 Memristor-based
 Many-core CPU
 DSP
 Spintronics-based
 Photonics-based
Intel’s Loihi Chip
*Figure adopted from https://doi.org/10.1109/MM.2018.112130359
*Video demo https://youtu.be/cDKnt9ldXv0
ANN-to-SNN Conversion
 Train ANNs with standard supervised techniques such as backpropagation, to leverage the superior performance of trained ANNs, and then convert them to event-driven SNNs for inference on neuromorphic platforms.
 Rate-encoded spikes are approximately
proportional to the magnitude of the original
ANN inputs.
ANN-to-SNN Conversion
(Cont’d)
*Figure adopted from https://arxiv.org/abs/1802.02627
A Poisson event-generation process is used to produce the input spike
train to the network.
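A minimal sketch of that Poisson input generation: each normalized ANN input value becomes an independent spike train whose firing rate is proportional to the value's magnitude (the maximum rate, time step, and duration are illustrative assumptions):

```python
import numpy as np

def poisson_encode(values, t_steps=1000, max_rate=200.0, dt=0.001, seed=0):
    """Turn normalized inputs in [0, 1] into Poisson-like spike trains whose
    rates are proportional to the input magnitudes (Bernoulli draws per step)."""
    rng = np.random.default_rng(seed)
    rates = np.clip(np.asarray(values, dtype=float), 0.0, 1.0) * max_rate   # Hz
    return rng.random((t_steps, rates.size)) < rates * dt

spikes = poisson_encode([0.1, 0.5, 0.9])
print(spikes.sum(axis=0))   # spike counts roughly proportional to 0.1 : 0.5 : 0.9
```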
Neuromorphic Chip Market Forecast
*Figure adopted from https://bit.ly/2xUfxZV
Software Simulation
 MATLAB
 PyNN http://neuralensemble.org/PyNN/
 BindsNET https://github.com/Hananel-Hazan/bindsnet
 Brian http://briansimulator.org/
 Nengo https://www.nengo.ai/
 NEST http://www.nest-simulator.org/
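As a concrete starting point, a minimal leaky integrate-and-fire run in Brian 2, written in the style of its introductory tutorial (the equation and parameters are illustrative, not taken from the slides):

```python
from brian2 import NeuronGroup, SpikeMonitor, StateMonitor, run, ms

tau = 10 * ms
eqs = 'dv/dt = (1.2 - v) / tau : 1'     # relax toward 1.2 with time constant tau

group = NeuronGroup(1, eqs, threshold='v > 1', reset='v = 0', method='exact')
spikes = SpikeMonitor(group)
trace = StateMonitor(group, 'v', record=0)

run(100 * ms)
print(spikes.t)                         # spike times of the single simulated neuron
```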
Further Reading
 Wulfram Gerstner & Werner M. Kistler, “Spiking Neuron Models:
Single Neurons, Populations, Plasticity”. Cambridge University
Press (2002)
 Wulfram Gerstner, Werner M. Kistler, Richard Naud and Liam
Paninski, “Neuronal Dynamics: From Single Neurons to Networks
and Models of Cognition”. Cambridge University Press (2014)
 Eugene M. Izhikevich, “Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting”. The MIT Press (2007)
 Nikola K. Kasabov, “Time-Space, Spiking Neural Networks and
Brain-Inspired Artificial Intelligence”. Springer International
Publishing (2018)
 Amirhossein Tavanaei, et al., “Deep Learning in Spiking Neural
Networks”. arXiv:1804.08150 (2018)