SS2016 Modern Neural
Computation
Lecture 2: Synaptic
Learning Rules
Hirokazu Tanaka
School of Information Science
Japan Advanced Institute of Science and Technology
Neurons communicate through synapses.
In this lecture we will learn:
• Basic anatomy and physiology of synapses
• Rate coding and spike coding
• Hebbian learning
• Spike-timing-dependent plasticity
• Reward-modulated plasticity
Synaptic plasticity underlies behavioral modification.
Kandel (1979) Scientific American; Kandel (2001) Science
Synapses: electrical and chemical neurotransmission
Figure 5.1, Neuroscience 3rd Edition
Long-term potentiation (LTP) of hippocampal synapses
Figures 24.5 & 24.6, Neuroscience 3rd Edition
Long-term potentiation (LTP) of hippocampal synapses
Figures 24.7 & 24.8, Neuroscience 3rd Edition
Molecular mechanisms underlying hippocampal LTP.
Figures 24.9 & 24.10, Neuroscience 3rd Edition
Long-term depression (LTD)
How does a neuron represent information?
Panzeri et al. (2010) Trends in Neurosciences
Rate coding: Number of Spikes matters.
Rate coding hypothesis: a neuron represents information through
its spike rate.
Hartline (1940) Am J Physiol; Hartline (1969) Science
Compound eye of the horseshoe crab; recording from the optic nerve.
Firing patterns of cortical neurons are highly irregular and are well approximated by
a random Poisson process (Softky & Koch (1993) J Neurosci;
Shadlen & Newsome (1994) Curr Opin Neurobiol).
Temporal coding: Spike timing matters.
Temporal coding hypothesis: a neuron represents information
through its spike timings.
Gollisch & Meister (2008) Science Johansson & Birznieks (2004) Nature Neurosci
Hebb’s postulate of activity dependent plasticity.
"Let us assume that the persistence or repetition of a reverberatory
activity (or "trace") tends to induce lasting cellular axon of cell A is
near enough to excite a cell B and repeatedly or persistently takes
part in firing it, some growth process or metabolic change takes
place in one or both cells such that A's efficiency, as one of the cells
firing B, is increased."
Hebbian theory: a theory in neuroscience that proposes an
explanation for the adaptation of neurons in the brain during the
learning process.
Donald O. Hebb (1904-1985)
The Organization of Behavior (1949)
Image source: Wikipedia, Donald O. Hebb
Synaptic plasticity: rate-coding model
A single output neuron with output rate $v$ receives input rates $\mathbf{u} = (u_1, \ldots, u_n)^\mathsf{T}$ through synaptic strengths $\mathbf{w} = (w_1, \ldots, w_n)^\mathsf{T}$:
$$\tau\frac{dv}{dt} = -v + \mathbf{w}^\mathsf{T}\mathbf{u}$$
If we consider a time scale larger than τ, then the output rate is approximated as
$$v \approx \mathbf{w}^\mathsf{T}\mathbf{u}.$$
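As an illustration, here is a minimal MATLAB sketch of this rate dynamics (all numerical values are assumptions for the example, not from the slides):

% Minimal sketch (assumed parameters): firing-rate neuron
% tau * dv/dt = -v + w'*u, integrated with the forward Euler method.
tau = 10/1000;            % rate time constant [s] (assumed)
dt  = 0.1/1000;           % integration step [s]
T   = 0.2;                % simulated duration [s]
t   = 0:dt:T;
w   = [0.5; 0.3; 0.2];    % synaptic strengths (assumed)
u   = [20; 10; 5];        % constant presynaptic rates [Hz] (assumed)
v   = zeros(size(t));     % output rate
for k = 1:numel(t)-1
    v(k+1) = v(k) + dt/tau*(-v(k) + w'*u);   % Euler update
end
% For t >> tau, v approaches the steady state w'*u:
disp([v(end), w'*u]);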
Hebbian plasticity in equations.
Hebbian learning with input vector u and output v.
Vector form:
$$\Delta\mathbf{w} = \eta v\,\mathbf{u}$$
Or component form:
$$\begin{pmatrix}\Delta w_1\\ \vdots\\ \Delta w_n\end{pmatrix} = \eta v\begin{pmatrix}u_1\\ \vdots\\ u_n\end{pmatrix}$$
If the membrane dynamics is fast compared to the timescale of synaptic plasticity, the output is approximated as $v = \mathbf{w}^\mathsf{T}\mathbf{u}$. Then the Hebbian rule reads:
$$\Delta\mathbf{w} = \eta\,\mathbf{u}\mathbf{u}^\mathsf{T}\mathbf{w}$$
This form of learning rule is unstable.
Hebbian learning with a single input u:
$$\Delta\mathbf{w} = \eta\,\mathbf{u}\mathbf{u}^\mathsf{T}\mathbf{w}$$
Hebbian learning with an input ensemble:
$$\langle\Delta\mathbf{w}\rangle = \eta\,\langle\mathbf{u}\mathbf{u}^\mathsf{T}\rangle\,\mathbf{w} = \eta\,\mathbf{C}\mathbf{w}$$
Covariance matrix of random inputs:
$$\mathbf{C} = \langle\mathbf{u}\mathbf{u}^\mathsf{T}\rangle \quad \text{(Wishart matrix)}$$
If inputs u1, …, un are i.i.d., their covariance matrix is called the Wishart matrix (Wishart, 1936).
All eigenvalues of a Wishart matrix are non-negative.
Exercise 1
This form of learning rule is unstable.
Averaged Hebbian learning:
$$\Delta\mathbf{w} = \eta\,\mathbf{C}\mathbf{w}$$
Eigenvalue decomposition:
$$\mathbf{C}\mathbf{e}_i = \lambda_i\mathbf{e}_i \quad (i = 1, \ldots, n), \qquad \lambda_1 \ge \cdots \ge \lambda_n \ge 0$$
All eigenvalues of a Wishart matrix are non-negative.
The eigenvectors form a basis for the n-dimensional space, and the weight vector w may be decomposed into the eigenvectors:
$$\mathbf{w} = \sum_i a_i\mathbf{e}_i$$
Then the Hebbian learning rule reduces to
$$\Delta a_i = \eta\lambda_i a_i.$$
Therefore, each ai grows exponentially, finally diverging to infinity.
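A minimal MATLAB sketch of this divergence (the random covariance matrix and learning rate are assumptions for the example):

% Minimal sketch (assumed parameters): averaged Hebbian learning
% dw = eta*C*w diverges along the leading eigenvector of C.
rng(0);
n   = 5;
A   = randn(n, 2*n);
C   = A*A'/(2*n);                 % a random (Wishart-type) covariance matrix
eta = 0.01;
w   = randn(n, 1);
normw = zeros(1, 500);
for k = 1:500
    w = w + eta*C*w;              % averaged Hebbian update
    normw(k) = norm(w);
end
[E, D] = eig(C);
[~, imax] = max(diag(D));
disp(abs(E(:,imax)'*w)/norm(w));  % alignment with the top eigenvector, approaches 1
semilogy(normw); xlabel('iteration'); ylabel('||w||');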
Covariance matrix of input has non-negative eigenvalues.
For any non-zero vector x:
$$\mathbf{x}^\mathsf{T}\mathbf{C}\,\mathbf{x} = \mathbf{x}^\mathsf{T}\langle\mathbf{u}\mathbf{u}^\mathsf{T}\rangle\mathbf{x} = \langle(\mathbf{u}^\mathsf{T}\mathbf{x})^2\rangle \ge 0$$
If the vector is decomposed in terms of eigenvectors, $\mathbf{x} = \sum_{i=1}^n a_i\mathbf{e}_i$, then
$$\mathbf{x}^\mathsf{T}\mathbf{C}\,\mathbf{x} = \sum_{i,j=1}^n a_i a_j\,\mathbf{e}_i^\mathsf{T}\mathbf{C}\,\mathbf{e}_j = \sum_{i,j=1}^n a_i a_j\lambda_j\delta_{ij} = \sum_{i=1}^n \lambda_i a_i^2.$$
For any {ai} this quantity must be non-negative. Therefore, the eigenvalues {λi} must be non-negative, too.
Generalization of Hebbian learning.
Covariance learning (Sejnowski (1977) Biophys J):
$$\Delta\mathbf{w} = \eta\,(\mathbf{u} - \langle\mathbf{u}\rangle)(v - \langle v\rangle)$$
Synaptic weights change if pre- and post-synaptic activities are positively correlated.
BCM rule (Bienenstock, Cooper & Munro (1982) J Neurosci):
$$\Delta\mathbf{w} = \eta\,\mathbf{u}\,v(v - \theta_M)$$
Synaptic plasticity depends linearly on presynaptic activities and nonlinearly on postsynaptic activity (thresholding nonlinearity φ(v)).
The threshold value changes according to postsynaptic activity (homeostasis).
Generalization of Hebbian learning.
BCM rule (Bienenstock, Cooper & Munro (1982) J Neurosci):
$$\Delta\mathbf{w} = \eta\,\mathbf{u}\,v(v - \theta_M)$$
Synaptic plasticity depends linearly on presynaptic activities and nonlinearly on postsynaptic activity (thresholding). The threshold value changes according to postsynaptic activity (homeostasis), sliding as
$$\theta_M = \mathrm{E}\!\left[v^2\right].$$
For a single input pattern, substituting $\theta_M = v^2$ gives
$$\Delta\mathbf{w} = \eta\,\mathbf{u}\,v^2(1 - v),$$
and there is only one stable fixed point at v=1.
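A minimal MATLAB sketch of the BCM rule with the substituted sliding threshold (the input pattern, initial weights and learning rate are assumptions for the example):

% Minimal sketch (assumed parameters): BCM rule with the sliding threshold
% theta_M = v^2 substituted directly, i.e. dw = eta*u*v^2*(1 - v).
eta = 0.02;               % learning rate (assumed)
u   = [1.0; 0.5];         % fixed presynaptic rates (assumed)
w   = [0.1; 0.1];         % initial weights (assumed)
for k = 1:5000
    v = w'*u;                       % postsynaptic rate
    w = w + eta*u*v^2*(1 - v);      % BCM update with theta_M = v^2
end
disp(w'*u);   % the output rate converges to the stable fixed point v = 1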
Weight normalization: additive or multiplicative.
Hebbian learning, $\Delta\mathbf{w} = \eta v\,\mathbf{u}$, is inherently unstable.
One way to avoid this instability (i.e., divergence) is to impose a constraint over the weight vector w.
Additive normalization keeps the sum of the weights fixed, $\sum_i w_i = 1$:
$$\Delta w_i = \eta v u_i - \frac{\eta v}{n}\sum_j u_j$$
Multiplicative normalization keeps the norm fixed, $\|\mathbf{w}\| = 1$:
$$\mathbf{w}(t+1) = \frac{\mathbf{w}(t) + \Delta\mathbf{w}(t)}{\|\mathbf{w}(t) + \Delta\mathbf{w}(t)\|}$$
Oja (1982) J Math Biol
Oja learning rule as a principal component analyzer.
Oja (1982) J Math Biol
Oja learning rule in discrete time: apply the Hebbian update and then normalize the weight vector; expanding to first order in η,
$$\mathbf{w}(t+1) = \frac{\mathbf{w} + \eta v\,\mathbf{u}}{\|\mathbf{w} + \eta v\,\mathbf{u}\|} = \mathbf{w} + \eta v(\mathbf{u} - v\mathbf{w}) + O(\eta^2),$$
which gives
$$\mathbf{w}(t+1) = \mathbf{w}(t) + \eta v(t)\bigl(\mathbf{u}(t) - v(t)\,\mathbf{w}(t)\bigr).$$
Oja learning rule in continuous time:
$$\frac{d\mathbf{w}}{dt} = \eta v(\mathbf{u} - v\mathbf{w})$$
Averaged over the inputs (with $v = \mathbf{w}^\mathsf{T}\mathbf{u}$ and $\mathbf{C} = \langle\mathbf{u}\mathbf{u}^\mathsf{T}\rangle$):
$$\frac{d\mathbf{w}}{dt} = \eta\bigl(\mathbf{C}\mathbf{w} - (\mathbf{w}^\mathsf{T}\mathbf{C}\mathbf{w})\,\mathbf{w}\bigr)$$
Oja learning rule as a principal component analyzer.
Oja (1982) J Math Biol
$$\frac{d\mathbf{w}}{dt} = \eta\bigl(\mathbf{C}\mathbf{w} - (\mathbf{w}^\mathsf{T}\mathbf{C}\mathbf{w})\,\mathbf{w}\bigr)$$
Eigenvector decomposition: with $\mathbf{C}\mathbf{e}_i = \lambda_i\mathbf{e}_i$ ($i = 1, \ldots, n$), $\lambda_1 \ge \cdots \ge \lambda_n \ge 0$, and $\mathbf{w} = \sum_i a_i\mathbf{e}_i$, the coefficients obey
$$\dot a_i = \lambda_i a_i - a_i\sum_{j=1}^n \lambda_j a_j^2.$$
Defining $b_i \equiv a_i / a_1$,
$$\dot b_i = (\lambda_i - \lambda_1)\,b_i,$$
$$\therefore\ a_1 \rightarrow \text{const}, \qquad a_i \rightarrow 0 \ (i = 2, \ldots, n),$$
so the weight vector converges to the first principal eigenvector of the input covariance.
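A minimal MATLAB sketch illustrating that the Oja rule extracts the first principal component (the input distribution and learning rate are assumptions for the example):

% Minimal sketch (assumed parameters): the Oja rule aligns the weight vector
% with the leading eigenvector of the input covariance.
rng(1);
n   = 2;
R   = [cos(pi/6) -sin(pi/6); sin(pi/6) cos(pi/6)];
U   = R*diag([2 0.5])*randn(n, 20000);     % correlated zero-mean inputs
eta = 0.002;
w   = randn(n, 1);
for k = 1:size(U, 2)
    u = U(:, k);
    v = w'*u;                              % output rate
    w = w + eta*v*(u - v*w);               % Oja update
end
C = U*U'/size(U, 2);                       % input covariance
[E, D] = eig(C); [~, imax] = max(diag(D));
disp([w/norm(w), E(:, imax)]);             % w matches the top eigenvector (up to sign)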
Modeling synapses: conductance-based model.
Gerstner (2014) Neuronal Dynamics, Chapter 3
A leaky integrate-and-fire (LIF) neuron with an excitatory and an inhibitory conductance-based synapse:
$$\tau_m\frac{dV}{dt} = (V_{\text{rest}} - V) + g_{\text{ex}}(t)(E_{\text{ex}} - V) + g_{\text{in}}(t)(E_{\text{in}} - V)$$
Exponential kernel with one decay time constant:
$$g_{\text{syn}}(t) = \bar g_{\text{syn}}\sum_f e^{-(t - t^f)/\tau_{\text{syn}}}\,\Theta(t - t^f)$$
Kernel with one rise and two decay time constants:
$$g_{\text{syn}}(t) = \bar g_{\text{syn}}\sum_f \Bigl(1 - e^{-(t - t^f)/\tau_{\text{rise}}}\Bigr)\Bigl[a\,e^{-(t - t^f)/\tau_{\text{fast}}} + (1 - a)\,e^{-(t - t^f)/\tau_{\text{slow}}}\Bigr]\Theta(t - t^f)$$
Modeling synapses: conductance-based model.
Gerstner (2014) Neuronal Dynamics, Chapter 3
[Figure: time courses of excitatory and inhibitory synaptic conductances modeled with the kernels above.]
Typical time constants: $\tau_{\text{rise}} \approx 1$ ms, $\tau_{\text{fast}} \approx 6$ ms for fast synapses (e.g. GABA$_\text{A}$); $\tau_{\text{rise}} \approx 25{-}50$ ms, $\tau_{\text{fast}} \approx 100{-}300$ ms, $\tau_{\text{slow}} \approx 500{-}1000$ ms for slow synapses (e.g. GABA$_\text{B}$).
Modeling synapses: conductance-based model.
A leaky integrate-and-fire neuron with excitatory and inhibitory synapses:
$$\tau_m\frac{dV}{dt} = (V_{\text{rest}} - V) + g_{\text{ex}}(t)(E_{\text{ex}} - V) + g_{\text{in}}(t)(E_{\text{in}} - V)$$
Dynamics of conductance: each conductance decays exponentially and jumps by its peak value when a presynaptic spike arrives,
$$\tau_{\text{ex}}\frac{dg_{\text{ex}}}{dt} = -g_{\text{ex}}, \qquad g_{\text{ex}}(t) \leftarrow g_{\text{ex}}(t) + \bar g_{\text{ex}},$$
$$\tau_{\text{in}}\frac{dg_{\text{in}}}{dt} = -g_{\text{in}}, \qquad g_{\text{in}}(t) \leftarrow g_{\text{in}}(t) + \bar g_{\text{in}}.$$
Synaptic plasticity: how the peak conductances of excitatory and inhibitory synapses are modified in an activity-dependent manner.
Song et al. (2000) Nature Neurosci
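A minimal MATLAB sketch of this conductance-based LIF model (all parameter values are assumptions, loosely following the exercise skeleton at the end of this deck):

% Minimal sketch (assumed parameters): LIF neuron driven by exponentially
% decaying conductances that jump when Poisson input spikes arrive.
rng(2);
dt = 0.1/1000; T = 0.5; t = 0:dt:T;           % time grid [s]
taum = 20/1000; Vrest = -70; Eex = 0; Ein = -70; Vth = -54; Vreset = -60;
tauex = 5/1000; tauin = 5/1000;
gbar_ex = 0.015; gbar_in = 0.05;              % peak conductance increments
Nex = 1000; Nin = 200; fex = 15; fin = 10;    % input numbers and rates [Hz]
V = Vrest*ones(size(t)); gex = 0; gin = 0;
for k = 1:numel(t)-1
    % conductances: exponential decay plus jumps at incoming spikes
    gex = gex*(1 - dt/tauex) + gbar_ex*sum(rand(1, Nex) < fex*dt);
    gin = gin*(1 - dt/tauin) + gbar_in*sum(rand(1, Nin) < fin*dt);
    % membrane update, threshold crossing and reset
    dV = (Vrest - V(k)) + gex*(Eex - V(k)) + gin*(Ein - V(k));
    V(k+1) = V(k) + dt*dV/taum;
    if V(k+1) >= Vth, V(k+1) = Vreset; end
end
plot(t, V); xlabel('time [s]'); ylabel('membrane potential [mV]');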
Spike-timing dependent plasticity (STDP)
Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
pre-post: potentiation
post-pre: depression
STDP in equations.
Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
The total weight change induced by all pairs of pre- and postsynaptic spikes is
$$\Delta w_{ij} = \sum_{n:\ \text{post spikes}}\ \sum_{f:\ \text{pre spikes}} W\bigl(t_i^n - t_j^f\bigr)$$
with the STDP window function
$$W(t) = \begin{cases} A_+\exp(-t/\tau_+) & \text{for } t > 0 \\ -A_-\exp(t/\tau_-) & \text{for } t < 0 \end{cases}$$
Online implementation of STDP learning
Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
Each synapse keeps a presynaptic trace xj ("remembering when presynaptic neuron j spikes") and each neuron a postsynaptic trace yi ("remembering when postsynaptic neuron i spikes"):
$$\tau_+\frac{dx_j}{dt} = -x_j + a_+(x_j)\sum_{f:\ \text{presynaptic spike}}\delta(t - t_j^f), \qquad
\tau_-\frac{dy_i}{dt} = -y_i + a_-(y_i)\sum_{n:\ \text{postsynaptic spike}}\delta(t - t_i^n)$$
The weight is updated at every spike:
$$\frac{dw_{ij}}{dt} = A_+(w_{ij})\,x_j(t)\sum_{n:\ \text{postsynaptic spikes}}\delta(t - t_i^n) - A_-(w_{ij})\,y_i(t)\sum_{f:\ \text{presynaptic spikes}}\delta(t - t_j^f)$$
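A minimal MATLAB sketch of this online rule for a single synapse, with all-to-all interaction, weight-independent amplitudes, and assumed Poisson spike trains:

% Minimal sketch (assumed parameters): trace-based (online) STDP for one synapse.
dt = 0.1/1000; T = 1; t = 0:dt:T;
taup = 20/1000; taun = 20/1000; Ap = 0.005; An = 0.00525;   % assumed values
pre_spk  = (rand(size(t)) < 10*dt);   % presynaptic Poisson spikes, 10 Hz (assumed)
post_spk = (rand(size(t)) < 10*dt);   % postsynaptic Poisson spikes, 10 Hz (assumed)
x = 0; y = 0; w = 0.5;                % traces and initial weight (assumed)
for k = 1:numel(t)
    x = x*(1 - dt/taup) + pre_spk(k);     % presynaptic trace (a_+ = 1, all-to-all)
    y = y*(1 - dt/taun) + post_spk(k);    % postsynaptic trace (a_- = 1)
    w = w + Ap*x*post_spk(k) ...          % potentiation at postsynaptic spikes
          - An*y*pre_spk(k);              % depression at presynaptic spikes
end
disp(w);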
Weight dependence: hard and soft bounds.
Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
Weight learning dynamics:
$$\frac{dw_{ij}}{dt} = A_+(w_{ij})\,x_j(t)\sum_{n:\ \text{postsynaptic spikes}}\delta(t - t_i^n) - A_-(w_{ij})\,y_i(t)\sum_{f:\ \text{presynaptic spikes}}\delta(t - t_j^f)$$
$A_+(w)$ and $A_-(w)$ determine the weight dependence of the STDP learning rule. For biological reasons, the synaptic weights should be restricted to $w_{\min} < w < w_{\max}$.
Hard bound rule:
$$A_+(w) = \eta_+\,\Theta(w_{\max} - w), \qquad A_-(w) = \eta_-\,\Theta(w)$$
(Linear) soft bound rule:
$$A_+(w) = \eta_+\,(w_{\max} - w), \qquad A_-(w) = \eta_-\,w$$
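A minimal MATLAB sketch comparing the two amplitude functions (parameter values are assumptions for the example):

% Minimal sketch (assumed parameters): weight dependence A±(w) for the hard
% and the (linear) soft bound rules, with w restricted to [0, wmax].
wmax = 1; etap = 0.005; etan = 0.00525;       % assumed values
w    = linspace(0, wmax, 101);
% hard bounds: constant amplitudes, switched off at the boundaries
Ap_hard = etap*(w < wmax);
An_hard = etan*(w > 0);
% soft bounds: amplitudes shrink linearly as the boundary is approached
Ap_soft = etap*(wmax - w);
An_soft = etan*w;
plot(w, Ap_hard, w, An_hard, w, Ap_soft, w, An_soft);
legend('A_+ hard', 'A_- hard', 'A_+ soft', 'A_- soft'); xlabel('w');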
Temporal all-to-all versus nearest-neighbor spike
interaction.
Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
Synaptic trace dynamics:
$$\tau_+\frac{dx_j}{dt} = -x_j + a_+(x_j)\sum_{f:\ \text{presynaptic spike}}\delta(t - t_j^f), \qquad
\tau_-\frac{dy_i}{dt} = -y_i + a_-(y_i)\sum_{n:\ \text{postsynaptic spike}}\delta(t - t_i^n)$$
$a_+(x)$ determines how much the trace is incremented by each spike.
All-to-all interaction, $a_+(x) = 1$: all spikes contribute additively to the trace, and the trace is not upper-bounded.
Nearest-neighbor interaction, $a_+(x) = 1 - x$: only the most recent spike contributes to the trace, and the trace is upper-bounded by 1.
Additive vs multiplicative STDP.
van Rossum et al. (2000) J Neurosci.
Additive STDP:
$$W(t) = \begin{cases} A_+\exp(-t/\tau_+) & \text{for } t > 0 \\ -A_-\exp(t/\tau_-) & \text{for } t < 0 \end{cases}$$
Potentiation and depression are independent of the weight value.
Multiplicative STDP:
$$W(t) = \begin{cases} A_+\exp(-t/\tau_+) & \text{for } t > 0 \\ -A_-\,w\,\exp(t/\tau_-) & \text{for } t < 0 \end{cases}$$
Depression is weight-dependent in a multiplicative way: a large synapse gets depressed more and a weak synapse less.
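A minimal MATLAB sketch comparing the two window functions (amplitudes, time constants and weight values are assumptions for the example):

% Minimal sketch (assumed parameters): additive vs multiplicative STDP windows
% as a function of the spike-time difference t = t_post - t_pre and the current
% weight w (only the multiplicative depression scales with w).
Ap = 0.005; An = 0.00525; taup = 20/1000; taun = 20/1000;   % assumed values
W_add  = @(t, w) (t > 0).*( Ap*exp(-t/taup)) + (t < 0).*(-An  *exp(t/taun));
W_mult = @(t, w) (t > 0).*( Ap*exp(-t/taup)) + (t < 0).*(-An*w*exp(t/taun));
t = (-100:100)/1000;                  % spike-time differences [s]
plot(t*1000, W_add(t, 0.8), t*1000, W_mult(t, 0.8), t*1000, W_mult(t, 0.2));
legend('additive', 'multiplicative, w = 0.8', 'multiplicative, w = 0.2');
xlabel('t_{post} - t_{pre} [ms]'); ylabel('\Delta w');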
Triplet law: three-spike interaction
Pfister & Gerstner (2006) J Neurosci
Dynamics of two presynaptic traces (x1, x2) and two postsynaptic traces (y1, y2), each decaying with its own time constant and incremented by 1 at every pre- or postsynaptic spike:
$$\tau_{\text{pre},1}\frac{dx_1}{dt} = -x_1,\quad x_1 \leftarrow x_1 + 1 \text{ if } t = t^{\text{pre}}; \qquad
\tau_{\text{pre},2}\frac{dx_2}{dt} = -x_2,\quad x_2 \leftarrow x_2 + 1 \text{ if } t = t^{\text{pre}}$$
$$\tau_{\text{post},1}\frac{dy_1}{dt} = -y_1,\quad y_1 \leftarrow y_1 + 1 \text{ if } t = t^{\text{post}}; \qquad
\tau_{\text{post},2}\frac{dy_2}{dt} = -y_2,\quad y_2 \leftarrow y_2 + 1 \text{ if } t = t^{\text{post}}$$
Weight change at pre- and postsynaptic spikes:
$$\Delta w(t) = -y_1(t)\bigl[A_2^- + A_3^-\,x_2(t - \epsilon)\bigr]\,\delta(t - t^{\text{pre}})
+ x_1(t)\bigl[A_2^+ + A_3^+\,y_2(t - \epsilon)\bigr]\,\delta(t - t^{\text{post}})$$
The first term captures post-pre LTD and pre-post-pre LTD; the second term captures pre-post LTP and post-pre-post LTP.
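A minimal MATLAB sketch of the triplet trace updates for a single synapse (all amplitudes, time constants and spike trains are assumptions for illustration):

% Minimal sketch (assumed parameters): triplet STDP trace updates for one synapse.
dt = 0.1/1000; T = 1; t = 0:dt:T;
tau_x1 = 17/1000; tau_x2 = 0.7; tau_y1 = 34/1000; tau_y2 = 0.125;   % assumed
A2p = 5e-3; A3p = 6e-3; A2n = 7e-3; A3n = 2e-4;                     % assumed
pre  = (rand(size(t)) < 10*dt);  post = (rand(size(t)) < 10*dt);    % 10 Hz Poisson
x1 = 0; x2 = 0; y1 = 0; y2 = 0; w = 0.5;
for k = 1:numel(t)
    % weight update uses the traces *before* the current spike is added,
    % which implements the t - epsilon in the rule
    if pre(k),  w = w - y1*(A2n + A3n*x2); end     % post-pre and pre-post-pre LTD
    if post(k), w = w + x1*(A2p + A3p*y2); end     % pre-post and post-pre-post LTP
    x1 = x1*(1 - dt/tau_x1) + pre(k);
    x2 = x2*(1 - dt/tau_x2) + pre(k);
    y1 = y1*(1 - dt/tau_y1) + post(k);
    y2 = y2*(1 - dt/tau_y2) + post(k);
end
disp(w);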
STDP for inhibitory synapses
Vogels et al. (2011) Science
Relation of STDP to other learning rules.
• STDP and rate-based Hebbian learning rules
Kempter, R., Gerstner, W., & Van Hemmen, J. L. (1999). Hebbian learning and spiking
neurons. Physical Review E, 59(4), 4498.
• STDP and Bienenstock-Cooper-Munro (BCM) rule
Izhikevich, E. M., & Desai, N. S. (2003). Relating STDP to BCM. Neural computation, 15(7), 1511-1523.
Pfister, J. P., & Gerstner, W. (2006). Triplets of spikes in a model of spike timing-
dependent plasticity. The Journal of neuroscience, 26(38), 9673-9682.
• STDP and temporal-difference learning rule
Rao, R. P., & Sejnowski, T. J. (2001). Spike-timing-dependent Hebbian plasticity as
temporal difference learning. Neural computation, 13(10), 2221-2237.
Exercise 2
Functional consequence: reduced latency
Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
Song & Abbott (2000) Nature Neurosci
potentiated
depressed
Functional consequence: latent pattern detection
Masquelier et al. (2008) PLoS One; (2009) Neural Comput
Functional consequence: latent pattern detection
Masquelier et al. (2008) PLoS One; (2009) Neural Comput
Functional consequence: latent pattern detection
Masquelier et al. (2008) PLoS One; (2009) Neural Comput
Spike response model (SRM): membrane potential in integral form,
$$v_i(t) = \eta(t - t_i) + \sum_j w_j\,\epsilon(t - t_j),$$
where $\eta(t - t_i)$ is the action-potential kernel (postsynaptic spike at $t_i$) and $\epsilon(t - t_j)$ is the synaptic-potential kernel (presynaptic spike at $t_j$).
Spike-timing-dependent plasticity for a presynaptic spike $t_j$ and a postsynaptic spike $t_i$:
$$w_j \rightarrow \begin{cases} w_j + A_+\,e^{-(t_i - t_j)/\tau_+} & \text{if } t_i > t_j \\ w_j - A_-\,e^{-(t_j - t_i)/\tau_-} & \text{if } t_i < t_j \end{cases}$$
Functional consequence: latent pattern detection
Masquelier et al. (2008) PLoS One; (2009) Neural Comput
Bird-song learning: LMAN provides exploratory noise.
Vocal motor pathway (VMP)
• HVC (High vocal center)
• RA
Anterior forebrain pathway (AFP)
• Area X
• DLM
• LMAN
Kao et al. (2005) Nature
HVC-RA synaptic plasticity modulated by reward.
Fiete & Seung (2007) J Neurophysiol.
Tripartite synaptic plasticity
The HVC→RA weight change is gated by the reward signal and by an eligibility trace that correlates the LMAN-driven conductance on RA neuron i with the presynaptic HVC activity of neuron j:
$$\frac{dW_{ij}}{dt} = \eta\,R(t)\,e_{ij}(t), \qquad e_{ij}(t) = \int_0^t dt'\; G_i^{\text{LMAN}}(t')\,s_j^{\text{HVC}}(t')$$
Fiete & Seung (2007) J Neurophysiol.
Exercise 3
This tripartite learning rule indeed leads to reward maximization.
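A schematic MATLAB sketch of a three-factor update of this form; all signals, names and parameters are illustrative assumptions, not the Fiete-Seung implementation:

% Schematic sketch (illustrative assumptions only): an eligibility trace
% accumulates the product of presynaptic activity and a perturbation signal,
% and the weight change is gated by a scalar reward at the end of a trial.
dt = 1/1000; T = 1; t = 0:dt:T;
eta = 0.1;                               % learning rate (assumed)
s_pre = (rand(size(t)) < 20*dt);         % presynaptic (HVC-like) spikes, 20 Hz
xi    = randn(size(t));                  % exploratory (LMAN-like) perturbation
e = 0;
for k = 1:numel(t)
    e = e + dt*s_pre(k)*xi(k);           % eligibility: presynaptic x perturbation
end
R  = randn;                              % scalar reward signal for this trial (assumed)
dW = eta*R*e;                            % reward-gated weight change
disp(dW);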
Summary
• Synaptic plasticity refers to an activity-dependent change of the
synaptic weight between neurons and provides the
physiological basis for learning and memory.
• Hebbian learning: “Fire together, wire together.”
• Synaptic plasticity may be formulated in terms of rate
coding or spike-timing coding.
• Synaptic plasticity is determined not only by the activities of
the two connected neurons but is also modulated by other
factors (e.g., reward, homeostasis).
Exercises
1. Prove that all eigenvalues of a Wishart matrix are
non-negative, i.e., that the matrix is positive semidefinite.
2. Read the following paper:
Kempter, R., Gerstner, W., & Van Hemmen, J. L. (1999). Hebbian learning and spiking
neurons. Physical Review E, 59(4), 4498.
From the additive STDP learning rule, derive the
following rate-based Hebbian learning rule ($f_i$ and $f_j$ are
pre- and post-synaptic activity, respectively):
$$\Delta w_{ij} = \alpha f_i f_j + \beta f_j$$
3. Read the following paper:
Fiete, I. R., & Seung, H. S. (2006). Gradient learning in spiking neural networks by
dynamic perturbation of conductances. Physical Review Letters, 97(4), 048104.
Prove that the learning rule (slide 46) can be derived as
a consequence of reward maximization.
Exercises: Code Implementation of Song et al. (2000)
Membrane dynamics:
$$\tau_m\frac{dV(t)}{dt} = \bigl(V_{\text{rest}} - V(t)\bigr) + g_{\text{ex}}(t)\bigl(E_{\text{ex}} - V(t)\bigr) + g_{\text{in}}(t)\bigl(E_{\text{in}} - V(t)\bigr)$$
Conductance dynamics:
$$\tau_{\text{ex}}\frac{dg_{\text{ex}}(t)}{dt} = -g_{\text{ex}}(t), \qquad \tau_{\text{in}}\frac{dg_{\text{in}}(t)}{dt} = -g_{\text{in}}(t)$$
$$g_{\text{ex}}(t) \rightarrow g_{\text{ex}}(t) + \bar g_a \quad \text{when the } a\text{-th excitatory input arrives}$$
$$g_{\text{in}}(t) \rightarrow g_{\text{in}}(t) + \bar g_{\text{in}} \quad \text{when any inhibitory input arrives}$$
Goal: Implement the STDP rule in Song, Miller & Abbott (2000).
Exercises: Code Implementation of Song et al. (2000)
Synaptic traces:
$$\tau_+\frac{dP_a(t)}{dt} = -P_a(t), \qquad \tau_-\frac{dM(t)}{dt} = -M(t)$$
STDP for presynaptic firing, when the a-th excitatory input arrives:
$$P_a \rightarrow P_a + A_+, \qquad \bar g_a \rightarrow \max\bigl(\bar g_a + M(t)\,g_{\max},\ 0\bigr)$$
STDP for postsynaptic firing, when the output neuron fires:
$$M \rightarrow M - A_-, \qquad \bar g_a \rightarrow \min\bigl(\bar g_a + P_a(t)\,g_{\max},\ g_{\max}\bigr)$$
Exercises: Code Implementation of Song et al. (2000)
%% parameter setting:
% LIF-neuron parameters:
taum   = 20/1000;   % membrane time constant [s]
Vrest  = -70;       % resting potential [mV]
Eex    = 0;         % excitatory reversal potential [mV]
Ein    = -70;       % inhibitory reversal potential [mV]
Vth    = -54;       % spike threshold [mV]
Vreset = -60;       % reset potential [mV]
% synapse parameters:
Nex    = 1000;      % number of excitatory synapses
Nin    = 200;       % number of inhibitory synapses
tauex  = 5/1000;    % excitatory conductance time constant [s]
tauin  = 5/1000;    % inhibitory conductance time constant [s]
gmaxin = 0.05;      % inhibitory peak conductance (units of leak conductance)
gmaxex = 0.015;     % maximum excitatory peak conductance
% STDP parameters:
Ap   = 0.005;       % potentiation amplitude A_+
An   = Ap*1.05;     % depression amplitude A_-
taup = 20/1000;     % potentiation time constant [s]
taun = 20/1000;     % depression time constant [s]
% simulation parameters:
dt = 0.1/1000;      % time step [s]
T  = 200;           % total simulated time [s]
t  = 0:dt:T;
% input firing rates:
Fex = randi([10 30], 1, Nex);   % excitatory input rates [Hz]
Fin = 10*ones(1, Nin);          % inhibitory input rates [Hz]
%% simulation:
V   = zeros(length(t), 1);      % membrane potential [mV]
M   = zeros(length(t), 1);      % postsynaptic (depression) trace
gex = zeros(length(t), 1);      % total excitatory conductance
gin = zeros(length(t), 1);      % total inhibitory conductance
P   = zeros(1, Nex);            % presynaptic (potentiation) traces, current values
ga  = gmaxex*ones(1, Nex);      % peak conductances of excitatory synapses, current values
V(1) = Vreset;
disp('Now simulating LIF neuron ...');
tic;
for n=1:length(t)-1
% WRITE YOUR CODE HERE:
end
toc;
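A minimal sketch of one possible loop body, assuming the parameter block above has been run; this is an assumed completion for illustration (input spikes are drawn as Poisson within each time bin), not the reference solution of the exercise.

% Assumed completion of the simulation loop (illustrative, not the reference solution).
for n = 1:length(t)-1
    % Poisson input spikes in this time bin
    spk_ex = rand(1, Nex) < Fex*dt;     % excitatory presynaptic spikes
    spk_in = rand(1, Nin) < Fin*dt;     % inhibitory presynaptic spikes
    % conductances: exponential decay plus jumps at arriving spikes
    gex(n+1) = gex(n)*(1 - dt/tauex) + sum(ga(spk_ex));
    gin(n+1) = gin(n)*(1 - dt/tauin) + sum(spk_in)*gmaxin;
    % synaptic traces decay
    P = P*(1 - dt/taup);
    M(n+1) = M(n)*(1 - dt/taun);
    % STDP at presynaptic firing: increment P_a, depress the active synapses
    P(spk_ex) = P(spk_ex) + Ap;
    ga(spk_ex) = max(ga(spk_ex) + M(n+1)*gmaxex, 0);
    % membrane potential update (leaky integrate-and-fire)
    V(n+1) = V(n) + dt/taum*((Vrest - V(n)) + gex(n+1)*(Eex - V(n)) + gin(n+1)*(Ein - V(n)));
    % STDP at postsynaptic firing: decrement M, potentiate all synapses
    if V(n+1) >= Vth
        V(n+1) = Vreset;
        M(n+1) = M(n+1) - An;
        ga = min(ga + P*gmaxex, gmaxex);
    end
end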
JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules

More Related Content

What's hot

Neuronal self-organized criticality (II)
Neuronal self-organized criticality (II)Neuronal self-organized criticality (II)
Neuronal self-organized criticality (II)
Osame Kinouchi
 
Neuronal self-organized criticality
Neuronal self-organized criticalityNeuronal self-organized criticality
Neuronal self-organized criticality
Osame Kinouchi
 
Dynamic response of structures with uncertain properties
Dynamic response of structures with uncertain propertiesDynamic response of structures with uncertain properties
Dynamic response of structures with uncertain properties
University of Glasgow
 
Artificial Neural Network Lect4 : Single Layer Perceptron Classifiers
Artificial Neural Network Lect4 : Single Layer Perceptron ClassifiersArtificial Neural Network Lect4 : Single Layer Perceptron Classifiers
Artificial Neural Network Lect4 : Single Layer Perceptron Classifiers
Mohammed Bennamoun
 
Dynamics of structures with uncertainties
Dynamics of structures with uncertaintiesDynamics of structures with uncertainties
Dynamics of structures with uncertainties
University of Glasgow
 
Dynamics of nonlocal structures
Dynamics of nonlocal structuresDynamics of nonlocal structures
Dynamics of nonlocal structures
University of Glasgow
 
類神經網路、語意相似度(一個不嫌少、兩個恰恰好)
類神經網路、語意相似度(一個不嫌少、兩個恰恰好)類神經網路、語意相似度(一個不嫌少、兩個恰恰好)
類神經網路、語意相似度(一個不嫌少、兩個恰恰好)Ming-Chi Liu
 
Seminar Talk: Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic R...
Seminar Talk: Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic R...Seminar Talk: Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic R...
Seminar Talk: Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic R...
Chiheb Ben Hammouda
 
03 image transform
03 image transform03 image transform
03 image transform
Rumah Belajar
 
Stereographic Circular Normal Moment Distribution
Stereographic Circular Normal Moment DistributionStereographic Circular Normal Moment Distribution
Stereographic Circular Normal Moment Distribution
mathsjournal
 
Active Controller Design for Regulating the Output of the Sprott-P System
Active Controller Design for Regulating the Output of the Sprott-P SystemActive Controller Design for Regulating the Output of the Sprott-P System
Active Controller Design for Regulating the Output of the Sprott-P System
ijccmsjournal
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
 
Lecture 12 (Image transformation)
Lecture 12 (Image transformation)Lecture 12 (Image transformation)
Lecture 12 (Image transformation)
VARUN KUMAR
 
Adaptive dynamic programming for control
Adaptive dynamic programming for controlAdaptive dynamic programming for control
Adaptive dynamic programming for controlSpringer
 
G234247
G234247G234247
02 2d systems matrix
02 2d systems matrix02 2d systems matrix
02 2d systems matrix
Rumah Belajar
 
Unbiased MCMC with couplings
Unbiased MCMC with couplingsUnbiased MCMC with couplings
Unbiased MCMC with couplings
Pierre Jacob
 
Fuzzy and nn
Fuzzy and nnFuzzy and nn
Fuzzy and nn
Shimi Haridasan
 

What's hot (20)

Neuronal self-organized criticality (II)
Neuronal self-organized criticality (II)Neuronal self-organized criticality (II)
Neuronal self-organized criticality (II)
 
Neuronal self-organized criticality
Neuronal self-organized criticalityNeuronal self-organized criticality
Neuronal self-organized criticality
 
Dynamic response of structures with uncertain properties
Dynamic response of structures with uncertain propertiesDynamic response of structures with uncertain properties
Dynamic response of structures with uncertain properties
 
Annintro
AnnintroAnnintro
Annintro
 
Artificial Neural Network Lect4 : Single Layer Perceptron Classifiers
Artificial Neural Network Lect4 : Single Layer Perceptron ClassifiersArtificial Neural Network Lect4 : Single Layer Perceptron Classifiers
Artificial Neural Network Lect4 : Single Layer Perceptron Classifiers
 
Dynamics of structures with uncertainties
Dynamics of structures with uncertaintiesDynamics of structures with uncertainties
Dynamics of structures with uncertainties
 
Ch13
Ch13Ch13
Ch13
 
Dynamics of nonlocal structures
Dynamics of nonlocal structuresDynamics of nonlocal structures
Dynamics of nonlocal structures
 
類神經網路、語意相似度(一個不嫌少、兩個恰恰好)
類神經網路、語意相似度(一個不嫌少、兩個恰恰好)類神經網路、語意相似度(一個不嫌少、兩個恰恰好)
類神經網路、語意相似度(一個不嫌少、兩個恰恰好)
 
Seminar Talk: Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic R...
Seminar Talk: Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic R...Seminar Talk: Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic R...
Seminar Talk: Multilevel Hybrid Split Step Implicit Tau-Leap for Stochastic R...
 
03 image transform
03 image transform03 image transform
03 image transform
 
Stereographic Circular Normal Moment Distribution
Stereographic Circular Normal Moment DistributionStereographic Circular Normal Moment Distribution
Stereographic Circular Normal Moment Distribution
 
Active Controller Design for Regulating the Output of the Sprott-P System
Active Controller Design for Regulating the Output of the Sprott-P SystemActive Controller Design for Regulating the Output of the Sprott-P System
Active Controller Design for Regulating the Output of the Sprott-P System
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
 
Lecture 12 (Image transformation)
Lecture 12 (Image transformation)Lecture 12 (Image transformation)
Lecture 12 (Image transformation)
 
Adaptive dynamic programming for control
Adaptive dynamic programming for controlAdaptive dynamic programming for control
Adaptive dynamic programming for control
 
G234247
G234247G234247
G234247
 
02 2d systems matrix
02 2d systems matrix02 2d systems matrix
02 2d systems matrix
 
Unbiased MCMC with couplings
Unbiased MCMC with couplingsUnbiased MCMC with couplings
Unbiased MCMC with couplings
 
Fuzzy and nn
Fuzzy and nnFuzzy and nn
Fuzzy and nn
 

Viewers also liked

強化学習勉強会・論文紹介(Kulkarni et al., 2016)
強化学習勉強会・論文紹介(Kulkarni et al., 2016)強化学習勉強会・論文紹介(Kulkarni et al., 2016)
強化学習勉強会・論文紹介(Kulkarni et al., 2016)
Sotetsu KOYAMADA(小山田創哲)
 
Why dont you_create_new_spark_jl
Why dont you_create_new_spark_jlWhy dont you_create_new_spark_jl
Why dont you_create_new_spark_jlShintaro Fukushima
 
Probabilistic Graphical Models 輪読会 #1
Probabilistic Graphical Models 輪読会 #1Probabilistic Graphical Models 輪読会 #1
Probabilistic Graphical Models 輪読会 #1
Takuma Yagi
 
KDD2016論文読み会資料(DeepIntent)
KDD2016論文読み会資料(DeepIntent) KDD2016論文読み会資料(DeepIntent)
KDD2016論文読み会資料(DeepIntent)
Sotetsu KOYAMADA(小山田創哲)
 
最近のRのランダムフォレストパッケージ -ranger/Rborist-
最近のRのランダムフォレストパッケージ -ranger/Rborist-最近のRのランダムフォレストパッケージ -ranger/Rborist-
最近のRのランダムフォレストパッケージ -ranger/Rborist-
Shintaro Fukushima
 
機械学習によるデータ分析 実践編
機械学習によるデータ分析 実践編機械学習によるデータ分析 実践編
機械学習によるデータ分析 実践編
Ryota Kamoshida
 
RBM、Deep Learningと学習(全脳アーキテクチャ若手の会 第3回DL勉強会発表資料)
RBM、Deep Learningと学習(全脳アーキテクチャ若手の会 第3回DL勉強会発表資料)RBM、Deep Learningと学習(全脳アーキテクチャ若手の会 第3回DL勉強会発表資料)
RBM、Deep Learningと学習(全脳アーキテクチャ若手の会 第3回DL勉強会発表資料)
Takuma Yagi
 
Kerberos
KerberosKerberos
Kerberos
Gichelle Amon
 
Women in Tech: How to Build A Human Company
Women in Tech: How to Build A Human CompanyWomen in Tech: How to Build A Human Company
Women in Tech: How to Build A Human Company
Luminary Labs
 
Rユーザのためのspark入門
Rユーザのためのspark入門Rユーザのためのspark入門
Rユーザのためのspark入門Shintaro Fukushima
 
【強化学習】Montezuma's Revenge @ NIPS2016
【強化学習】Montezuma's Revenge @ NIPS2016【強化学習】Montezuma's Revenge @ NIPS2016
【強化学習】Montezuma's Revenge @ NIPS2016
Sotetsu KOYAMADA(小山田創哲)
 
論文紹介:Using the Forest to See the Trees: A Graphical. Model Relating Features,...
論文紹介:Using the Forest to See the Trees: A Graphical. Model Relating Features,...論文紹介:Using the Forest to See the Trees: A Graphical. Model Relating Features,...
論文紹介:Using the Forest to See the Trees: A Graphical. Model Relating Features,...
Takuma Yagi
 
機械学習によるデータ分析まわりのお話
機械学習によるデータ分析まわりのお話機械学習によるデータ分析まわりのお話
機械学習によるデータ分析まわりのお話
Ryota Kamoshida
 
What is the maker movement?
What is the maker movement?What is the maker movement?
What is the maker movement?
Luminary Labs
 
The Human Company Playbook, Version 1.0
The Human Company Playbook, Version 1.0The Human Company Playbook, Version 1.0
The Human Company Playbook, Version 1.0
Luminary Labs
 
Hype vs. Reality: The AI Explainer
Hype vs. Reality: The AI ExplainerHype vs. Reality: The AI Explainer
Hype vs. Reality: The AI Explainer
Luminary Labs
 
A Non Linear Model to explain persons with Stroke
A Non Linear Model to explain persons with StrokeA Non Linear Model to explain persons with Stroke
A Non Linear Model to explain persons with Stroke
Hariohm Pandian
 
Migraine: A dynamics disease
Migraine: A dynamics diseaseMigraine: A dynamics disease
Migraine: A dynamics disease
MPI Dresden / HU Berlin
 

Viewers also liked (20)

Os module 2 d
Os module 2 dOs module 2 d
Os module 2 d
 
強化学習勉強会・論文紹介(Kulkarni et al., 2016)
強化学習勉強会・論文紹介(Kulkarni et al., 2016)強化学習勉強会・論文紹介(Kulkarni et al., 2016)
強化学習勉強会・論文紹介(Kulkarni et al., 2016)
 
Why dont you_create_new_spark_jl
Why dont you_create_new_spark_jlWhy dont you_create_new_spark_jl
Why dont you_create_new_spark_jl
 
Probabilistic Graphical Models 輪読会 #1
Probabilistic Graphical Models 輪読会 #1Probabilistic Graphical Models 輪読会 #1
Probabilistic Graphical Models 輪読会 #1
 
KDD2016論文読み会資料(DeepIntent)
KDD2016論文読み会資料(DeepIntent) KDD2016論文読み会資料(DeepIntent)
KDD2016論文読み会資料(DeepIntent)
 
最近のRのランダムフォレストパッケージ -ranger/Rborist-
最近のRのランダムフォレストパッケージ -ranger/Rborist-最近のRのランダムフォレストパッケージ -ranger/Rborist-
最近のRのランダムフォレストパッケージ -ranger/Rborist-
 
機械学習によるデータ分析 実践編
機械学習によるデータ分析 実践編機械学習によるデータ分析 実践編
機械学習によるデータ分析 実践編
 
RBM、Deep Learningと学習(全脳アーキテクチャ若手の会 第3回DL勉強会発表資料)
RBM、Deep Learningと学習(全脳アーキテクチャ若手の会 第3回DL勉強会発表資料)RBM、Deep Learningと学習(全脳アーキテクチャ若手の会 第3回DL勉強会発表資料)
RBM、Deep Learningと学習(全脳アーキテクチャ若手の会 第3回DL勉強会発表資料)
 
Kerberos
KerberosKerberos
Kerberos
 
Women in Tech: How to Build A Human Company
Women in Tech: How to Build A Human CompanyWomen in Tech: How to Build A Human Company
Women in Tech: How to Build A Human Company
 
Rユーザのためのspark入門
Rユーザのためのspark入門Rユーザのためのspark入門
Rユーザのためのspark入門
 
Network security
Network securityNetwork security
Network security
 
【強化学習】Montezuma's Revenge @ NIPS2016
【強化学習】Montezuma's Revenge @ NIPS2016【強化学習】Montezuma's Revenge @ NIPS2016
【強化学習】Montezuma's Revenge @ NIPS2016
 
論文紹介:Using the Forest to See the Trees: A Graphical. Model Relating Features,...
論文紹介:Using the Forest to See the Trees: A Graphical. Model Relating Features,...論文紹介:Using the Forest to See the Trees: A Graphical. Model Relating Features,...
論文紹介:Using the Forest to See the Trees: A Graphical. Model Relating Features,...
 
機械学習によるデータ分析まわりのお話
機械学習によるデータ分析まわりのお話機械学習によるデータ分析まわりのお話
機械学習によるデータ分析まわりのお話
 
What is the maker movement?
What is the maker movement?What is the maker movement?
What is the maker movement?
 
The Human Company Playbook, Version 1.0
The Human Company Playbook, Version 1.0The Human Company Playbook, Version 1.0
The Human Company Playbook, Version 1.0
 
Hype vs. Reality: The AI Explainer
Hype vs. Reality: The AI ExplainerHype vs. Reality: The AI Explainer
Hype vs. Reality: The AI Explainer
 
A Non Linear Model to explain persons with Stroke
A Non Linear Model to explain persons with StrokeA Non Linear Model to explain persons with Stroke
A Non Linear Model to explain persons with Stroke
 
Migraine: A dynamics disease
Migraine: A dynamics diseaseMigraine: A dynamics disease
Migraine: A dynamics disease
 

Similar to JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules

Freezing of energy of a soliton in an external potential
Freezing of energy of a soliton in an external potentialFreezing of energy of a soliton in an external potential
Freezing of energy of a soliton in an external potential
Alberto Maspero
 
RNN and sequence-to-sequence processing
RNN and sequence-to-sequence processingRNN and sequence-to-sequence processing
RNN and sequence-to-sequence processing
Dongang (Sean) Wang
 
Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3
Fabian Pedregosa
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
Institute of Technology Telkom
 
MVPA with SpaceNet: sparse structured priors
MVPA with SpaceNet: sparse structured priorsMVPA with SpaceNet: sparse structured priors
MVPA with SpaceNet: sparse structured priors
Elvis DOHMATOB
 
Restricted Boltzman Machine (RBM) presentation of fundamental theory
Restricted Boltzman Machine (RBM) presentation of fundamental theoryRestricted Boltzman Machine (RBM) presentation of fundamental theory
Restricted Boltzman Machine (RBM) presentation of fundamental theory
Seongwon Hwang
 
Network-Growth Rule Dependence of Fractal Dimension of Percolation Cluster on...
Network-Growth Rule Dependence of Fractal Dimension of Percolation Cluster on...Network-Growth Rule Dependence of Fractal Dimension of Percolation Cluster on...
Network-Growth Rule Dependence of Fractal Dimension of Percolation Cluster on...
Shu Tanaka
 
Physical Chemistry Exam Help
Physical Chemistry Exam HelpPhysical Chemistry Exam Help
Physical Chemistry Exam Help
Live Exam Helper
 
Boundness of a neural network weights using the notion of a limit of a sequence
Boundness of a neural network weights using the notion of a limit of a sequenceBoundness of a neural network weights using the notion of a limit of a sequence
Boundness of a neural network weights using the notion of a limit of a sequence
IJDKP
 
Introducing Zap Q-Learning
Introducing Zap Q-Learning   Introducing Zap Q-Learning
Introducing Zap Q-Learning
Sean Meyn
 
Hodgkin-Huxley & the nonlinear dynamics of neuronal excitability
Hodgkin-Huxley & the nonlinear  dynamics of neuronal excitabilityHodgkin-Huxley & the nonlinear  dynamics of neuronal excitability
Hodgkin-Huxley & the nonlinear dynamics of neuronal excitability
SSA KPI
 
Anomaly Detection in Sequences of Short Text Using Iterative Language Models
Anomaly Detection in Sequences of Short Text Using Iterative Language ModelsAnomaly Detection in Sequences of Short Text Using Iterative Language Models
Anomaly Detection in Sequences of Short Text Using Iterative Language Models
Cynthia Freeman
 
Mathematics and AI
Mathematics and AIMathematics and AI
Mathematics and AI
Marc Lelarge
 
MM2020-AV
MM2020-AVMM2020-AV
MM2020-AV
ArianVezvaee
 
Tutorial APS 2023: Phase transition for statistical estimation: algorithms an...
Tutorial APS 2023: Phase transition for statistical estimation: algorithms an...Tutorial APS 2023: Phase transition for statistical estimation: algorithms an...
Tutorial APS 2023: Phase transition for statistical estimation: algorithms an...
Marc Lelarge
 
4 stochastic processes
4 stochastic processes4 stochastic processes
4 stochastic processes
Solo Hermelin
 
NN-Ch3.PDF
NN-Ch3.PDFNN-Ch3.PDF
NN-Ch3.PDF
gnans Kgnanshek
 
Neural Networks. Overview
Neural Networks. OverviewNeural Networks. Overview
Neural Networks. Overview
Oleksandr Baiev
 
NANO266 - Lecture 10 - Temperature
NANO266 - Lecture 10 - TemperatureNANO266 - Lecture 10 - Temperature
NANO266 - Lecture 10 - Temperature
University of California, San Diego
 
EC8553 Discrete time signal processing
EC8553 Discrete time signal processing EC8553 Discrete time signal processing
EC8553 Discrete time signal processing
ssuser2797e4
 

Similar to JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules (20)

Freezing of energy of a soliton in an external potential
Freezing of energy of a soliton in an external potentialFreezing of energy of a soliton in an external potential
Freezing of energy of a soliton in an external potential
 
RNN and sequence-to-sequence processing
RNN and sequence-to-sequence processingRNN and sequence-to-sequence processing
RNN and sequence-to-sequence processing
 
Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
 
MVPA with SpaceNet: sparse structured priors
MVPA with SpaceNet: sparse structured priorsMVPA with SpaceNet: sparse structured priors
MVPA with SpaceNet: sparse structured priors
 
Restricted Boltzman Machine (RBM) presentation of fundamental theory
Restricted Boltzman Machine (RBM) presentation of fundamental theoryRestricted Boltzman Machine (RBM) presentation of fundamental theory
Restricted Boltzman Machine (RBM) presentation of fundamental theory
 
Network-Growth Rule Dependence of Fractal Dimension of Percolation Cluster on...
Network-Growth Rule Dependence of Fractal Dimension of Percolation Cluster on...Network-Growth Rule Dependence of Fractal Dimension of Percolation Cluster on...
Network-Growth Rule Dependence of Fractal Dimension of Percolation Cluster on...
 
Physical Chemistry Exam Help
Physical Chemistry Exam HelpPhysical Chemistry Exam Help
Physical Chemistry Exam Help
 
Boundness of a neural network weights using the notion of a limit of a sequence
Boundness of a neural network weights using the notion of a limit of a sequenceBoundness of a neural network weights using the notion of a limit of a sequence
Boundness of a neural network weights using the notion of a limit of a sequence
 
Introducing Zap Q-Learning
Introducing Zap Q-Learning   Introducing Zap Q-Learning
Introducing Zap Q-Learning
 
Hodgkin-Huxley & the nonlinear dynamics of neuronal excitability
Hodgkin-Huxley & the nonlinear  dynamics of neuronal excitabilityHodgkin-Huxley & the nonlinear  dynamics of neuronal excitability
Hodgkin-Huxley & the nonlinear dynamics of neuronal excitability
 
Anomaly Detection in Sequences of Short Text Using Iterative Language Models
Anomaly Detection in Sequences of Short Text Using Iterative Language ModelsAnomaly Detection in Sequences of Short Text Using Iterative Language Models
Anomaly Detection in Sequences of Short Text Using Iterative Language Models
 
Mathematics and AI
Mathematics and AIMathematics and AI
Mathematics and AI
 
MM2020-AV
MM2020-AVMM2020-AV
MM2020-AV
 
Tutorial APS 2023: Phase transition for statistical estimation: algorithms an...
Tutorial APS 2023: Phase transition for statistical estimation: algorithms an...Tutorial APS 2023: Phase transition for statistical estimation: algorithms an...
Tutorial APS 2023: Phase transition for statistical estimation: algorithms an...
 
4 stochastic processes
4 stochastic processes4 stochastic processes
4 stochastic processes
 
NN-Ch3.PDF
NN-Ch3.PDFNN-Ch3.PDF
NN-Ch3.PDF
 
Neural Networks. Overview
Neural Networks. OverviewNeural Networks. Overview
Neural Networks. Overview
 
NANO266 - Lecture 10 - Temperature
NANO266 - Lecture 10 - TemperatureNANO266 - Lecture 10 - Temperature
NANO266 - Lecture 10 - Temperature
 
EC8553 Discrete time signal processing
EC8553 Discrete time signal processing EC8553 Discrete time signal processing
EC8553 Discrete time signal processing
 

More from hirokazutanaka

東京都市大学 データ解析入門 10 ニューラルネットワークと深層学習 1
東京都市大学 データ解析入門 10 ニューラルネットワークと深層学習 1東京都市大学 データ解析入門 10 ニューラルネットワークと深層学習 1
東京都市大学 データ解析入門 10 ニューラルネットワークと深層学習 1
hirokazutanaka
 
東京都市大学 データ解析入門 9 クラスタリングと分類分析 2
東京都市大学 データ解析入門 9 クラスタリングと分類分析 2東京都市大学 データ解析入門 9 クラスタリングと分類分析 2
東京都市大学 データ解析入門 9 クラスタリングと分類分析 2
hirokazutanaka
 
東京都市大学 データ解析入門 8 クラスタリングと分類分析 1
東京都市大学 データ解析入門 8 クラスタリングと分類分析 1東京都市大学 データ解析入門 8 クラスタリングと分類分析 1
東京都市大学 データ解析入門 8 クラスタリングと分類分析 1
hirokazutanaka
 
東京都市大学 データ解析入門 7 回帰分析とモデル選択 2
東京都市大学 データ解析入門 7 回帰分析とモデル選択 2東京都市大学 データ解析入門 7 回帰分析とモデル選択 2
東京都市大学 データ解析入門 7 回帰分析とモデル選択 2
hirokazutanaka
 
東京都市大学 データ解析入門 6 回帰分析とモデル選択 1
東京都市大学 データ解析入門 6 回帰分析とモデル選択 1東京都市大学 データ解析入門 6 回帰分析とモデル選択 1
東京都市大学 データ解析入門 6 回帰分析とモデル選択 1
hirokazutanaka
 
東京都市大学 データ解析入門 5 スパース性と圧縮センシング 2
東京都市大学 データ解析入門 5 スパース性と圧縮センシング 2東京都市大学 データ解析入門 5 スパース性と圧縮センシング 2
東京都市大学 データ解析入門 5 スパース性と圧縮センシング 2
hirokazutanaka
 
東京都市大学 データ解析入門 4 スパース性と圧縮センシング1
東京都市大学 データ解析入門 4 スパース性と圧縮センシング1東京都市大学 データ解析入門 4 スパース性と圧縮センシング1
東京都市大学 データ解析入門 4 スパース性と圧縮センシング1
hirokazutanaka
 
東京都市大学 データ解析入門 3 行列分解 2
東京都市大学 データ解析入門 3 行列分解 2東京都市大学 データ解析入門 3 行列分解 2
東京都市大学 データ解析入門 3 行列分解 2
hirokazutanaka
 
東京都市大学 データ解析入門 2 行列分解 1
東京都市大学 データ解析入門 2 行列分解 1東京都市大学 データ解析入門 2 行列分解 1
東京都市大学 データ解析入門 2 行列分解 1
hirokazutanaka
 
Computational Motor Control: Reinforcement Learning (JAIST summer course)
Computational Motor Control: Reinforcement Learning (JAIST summer course) Computational Motor Control: Reinforcement Learning (JAIST summer course)
Computational Motor Control: Reinforcement Learning (JAIST summer course)
hirokazutanaka
 
Computational Motor Control: Introduction (JAIST summer course)
Computational Motor Control: Introduction (JAIST summer course)Computational Motor Control: Introduction (JAIST summer course)
Computational Motor Control: Introduction (JAIST summer course)
hirokazutanaka
 

More from hirokazutanaka (11)

東京都市大学 データ解析入門 10 ニューラルネットワークと深層学習 1
東京都市大学 データ解析入門 10 ニューラルネットワークと深層学習 1東京都市大学 データ解析入門 10 ニューラルネットワークと深層学習 1
東京都市大学 データ解析入門 10 ニューラルネットワークと深層学習 1
 
東京都市大学 データ解析入門 9 クラスタリングと分類分析 2
東京都市大学 データ解析入門 9 クラスタリングと分類分析 2東京都市大学 データ解析入門 9 クラスタリングと分類分析 2
東京都市大学 データ解析入門 9 クラスタリングと分類分析 2
 
東京都市大学 データ解析入門 8 クラスタリングと分類分析 1
東京都市大学 データ解析入門 8 クラスタリングと分類分析 1東京都市大学 データ解析入門 8 クラスタリングと分類分析 1
東京都市大学 データ解析入門 8 クラスタリングと分類分析 1
 
東京都市大学 データ解析入門 7 回帰分析とモデル選択 2
東京都市大学 データ解析入門 7 回帰分析とモデル選択 2東京都市大学 データ解析入門 7 回帰分析とモデル選択 2
東京都市大学 データ解析入門 7 回帰分析とモデル選択 2
 
東京都市大学 データ解析入門 6 回帰分析とモデル選択 1
東京都市大学 データ解析入門 6 回帰分析とモデル選択 1東京都市大学 データ解析入門 6 回帰分析とモデル選択 1
東京都市大学 データ解析入門 6 回帰分析とモデル選択 1
 
東京都市大学 データ解析入門 5 スパース性と圧縮センシング 2
東京都市大学 データ解析入門 5 スパース性と圧縮センシング 2東京都市大学 データ解析入門 5 スパース性と圧縮センシング 2
東京都市大学 データ解析入門 5 スパース性と圧縮センシング 2
 
東京都市大学 データ解析入門 4 スパース性と圧縮センシング1
東京都市大学 データ解析入門 4 スパース性と圧縮センシング1東京都市大学 データ解析入門 4 スパース性と圧縮センシング1
東京都市大学 データ解析入門 4 スパース性と圧縮センシング1
 
東京都市大学 データ解析入門 3 行列分解 2
東京都市大学 データ解析入門 3 行列分解 2東京都市大学 データ解析入門 3 行列分解 2
東京都市大学 データ解析入門 3 行列分解 2
 
東京都市大学 データ解析入門 2 行列分解 1
東京都市大学 データ解析入門 2 行列分解 1東京都市大学 データ解析入門 2 行列分解 1
東京都市大学 データ解析入門 2 行列分解 1
 
Computational Motor Control: Reinforcement Learning (JAIST summer course)
Computational Motor Control: Reinforcement Learning (JAIST summer course) Computational Motor Control: Reinforcement Learning (JAIST summer course)
Computational Motor Control: Reinforcement Learning (JAIST summer course)
 
Computational Motor Control: Introduction (JAIST summer course)
Computational Motor Control: Introduction (JAIST summer course)Computational Motor Control: Introduction (JAIST summer course)
Computational Motor Control: Introduction (JAIST summer course)
 

Recently uploaded

Polish students' mobility in the Czech Republic
Polish students' mobility in the Czech RepublicPolish students' mobility in the Czech Republic
Polish students' mobility in the Czech Republic
Anna Sz.
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
Pavel ( NSTU)
 
Supporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptxSupporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptx
Jisc
 
678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf
CarlosHernanMontoyab2
 
Digital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and ResearchDigital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and Research
Vikramjit Singh
 
Adversarial Attention Modeling for Multi-dimensional Emotion Regression.pdf
Adversarial Attention Modeling for Multi-dimensional Emotion Regression.pdfAdversarial Attention Modeling for Multi-dimensional Emotion Regression.pdf
Adversarial Attention Modeling for Multi-dimensional Emotion Regression.pdf
Po-Chuan Chen
 
How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17
Celine George
 
CACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdfCACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdf
camakaiclarkmusic
 
Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.
Ashokrao Mane college of Pharmacy Peth-Vadgaon
 
2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
Sandy Millin
 
Acetabularia Information For Class 9 .docx
Acetabularia Information For Class 9  .docxAcetabularia Information For Class 9  .docx
Acetabularia Information For Class 9 .docx
vaibhavrinwa19
 
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th SemesterGuidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Atul Kumar Singh
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
Jisc
 
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup   New Member Orientation and Q&A (May 2024).pdfWelcome to TechSoup   New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
TechSoup
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
Thiyagu K
 
Chapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptxChapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptx
Mohd Adib Abd Muin, Senior Lecturer at Universiti Utara Malaysia
 
Palestine last event orientationfvgnh .pptx
Palestine last event orientationfvgnh .pptxPalestine last event orientationfvgnh .pptx
Palestine last event orientationfvgnh .pptx
RaedMohamed3
 
Model Attribute Check Company Auto Property
Model Attribute  Check Company Auto PropertyModel Attribute  Check Company Auto Property
Model Attribute Check Company Auto Property
Celine George
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
EugeneSaldivar
 
Honest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptxHonest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptx
timhan337
 

Recently uploaded (20)

Polish students' mobility in the Czech Republic
Polish students' mobility in the Czech RepublicPolish students' mobility in the Czech Republic
Polish students' mobility in the Czech Republic
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
 
Supporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptxSupporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptx
 
678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf
 
Digital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and ResearchDigital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and Research
 
Adversarial Attention Modeling for Multi-dimensional Emotion Regression.pdf
Adversarial Attention Modeling for Multi-dimensional Emotion Regression.pdfAdversarial Attention Modeling for Multi-dimensional Emotion Regression.pdf
Adversarial Attention Modeling for Multi-dimensional Emotion Regression.pdf
 
How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17
 
CACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdfCACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdf
 
Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.
 
2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
 
Acetabularia Information For Class 9 .docx
Acetabularia Information For Class 9  .docxAcetabularia Information For Class 9  .docx
Acetabularia Information For Class 9 .docx
 
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th SemesterGuidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th Semester
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
 
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup   New Member Orientation and Q&A (May 2024).pdfWelcome to TechSoup   New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
 
Chapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptxChapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptx
 
Palestine last event orientationfvgnh .pptx
Palestine last event orientationfvgnh .pptxPalestine last event orientationfvgnh .pptx
Palestine last event orientationfvgnh .pptx
 
Model Attribute Check Company Auto Property
Model Attribute  Check Company Auto PropertyModel Attribute  Check Company Auto Property
Model Attribute Check Company Auto Property
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
 
Honest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptxHonest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptx
 

JAISTサマースクール2016「脳を知るための理論」講義02 Synaptic Learning rules

  • 1. SS2016 Modern Neural Computation Lecture 2: Synaptic Learning Rules Hirokazu Tanaka School of Information Science Japan Institute of Science and Technology
  • 2. Neurons communicate through synapses. In this lecture we will learn: • Basic anatomy and physiology of synapses • Rate coding and spike coding • Hebbian learning • Spike-timing-dependent plasticity • Reward-modulated plasticity
  • 3. Synaptic plasticity underlies behavioral modification. Kandel (1979) Scientific American; Kandel (2001) Science
  • 4. Synapses: electrical and chemical neurotransmission Figure 5.1, Neuroscience 3rd Edition
  • 5. Long-term potentiation (LTP) of hippocampal synapses Figure 24.6, Neuroscience 3rd EditionFigure 24.5, Neuroscience 3rd Edition
  • 6. Long-term potentiation (LTP) of hippocampal synapses Figures 24.7 & 24.8, Neuroscience 3rd Edition
  • 7. Molecular mechanisms underlying hippocampal LTP. Figures 24.9 & 24.10, Neuroscience 3rd Edition
  • 9. How does a neuron represent information? Panzari et al. (2010) Trends in Neurosciences
  • 10. Rate coding: Number of Spikes matters. Rate coding hypothesis: a neuron represents information through its spike rate. Hartline (1940) Am J Physiol; Hartline (1969) Science Compound eye of horseshoe crab Recoding from optic nerve Firing patterns of cortical neurons are highly irregular, which are well approximated by a random Poisson process (Softky & Koch (1993) J Neurosci; Shadlen & Newsome (1994) Current Biology).
  • 11. Temporal coding: Spike timing matters. Temporal coding hypothesis: a neuron represents information through its spike timings. Gollisch & Meister (2008) Science Johansson & Birznieks (2004) Nature Neurosci
  • 12. Hebb’s postulate of activity dependent plasticity. "Let us assume that the persistence or repetition of a reverberatory activity (or "trace") tends to induce lasting cellular axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." Hebbian theory: a theory in neuroscience that proposes an explanation for the adaptation of neurons in the brain during the learning process. Donald O. Hebb (1904-1985) The Organization of Behavior (1949) Image source: Wikipedia, Donald O. Hebb
  • 13. Synaptic plasticity: rate-coding model 1u 2u 3u 1w 2w 3w T v vτ =− + w u v ( ) ( ) T 1 T 1 n n u u w w = = u w   input ratesoutput rate synaptic strengths T v ≈ w u If we consider a time scale larger than τ, then
  • 14. Hebbian plasticity in equation. vη∆ =w u 1 1 n n w u v w u η ∆        =       ∆      T η∆ =w uu w Hebbian learning with input vector u and output v Vector form: Or component form: If the membrane dynamics is fast compared to the timescale of synaptic plasticity, the output is approximated as: Then the Hebbian rule now reads: T .v = w u
  • 15. This form of learning rule is unstable. T η∆ =w uu w T η η∆= =w uu w Cw Covariance matrix of random inputs T =C uu Wishart matrix If inputs u1, …, un are i.i.d., their covariance matrix is called the Wishart matrix (Wishart, 1936): All eigenvalues of a Wishart matrix are non-negative. Hebbian learning with single input u Hebbian learning with input ensemble Exercise 1
  • 16. This form of learning rule is unstable. Eigenvalue decomposition 1, ,i i i i nλ= =Ce e  1 0nλ λ≥ ≥ ≥ η∆ =w Cw i i i a= ∑w e i i ia aηλ∆ = All eigenvalues of a Wishart matrix are non-negative. The eigenvectors form a basis for the n-dim space, and the weight vector w may be decomposed into the eigenvectors: Then, the Hebbian learning rule is reduced as: Therefore, ai grows exponentially, finally diverging to infinity.
  • 17. Covariance matrix of input has non-negative eigenvalues. Covariance matrix of random inputs ( )T 2T T T 0≥= =x Cx x uu x u x 1 i i n i a e = = ∑x T , 1 , 2 1 , 1 T n n n i i j i j i j j i j j i i i j ia a a a aλ δ λ = = = = = =∑ ∑ ∑x Cx e Ce For any non-zero vector x: If the vector is decomposed in terms of eigenvectors, then, For any {ai} this quantity must be non-negative. Therefore, the eigenvalues {λi} must be non-negative, too.
  • 18. Generalization of Hebbian learning. ( )( )v vη∆ = − −w u u Covariance learning BCM rule ( )Mv vη θ∆= −w u Bienenstock, Cooper & Munro (1982) J Neurosci Sejnowski (1977) Biophys J φ(v) v Synaptic weights change if pre-and post-activities are positively correlated. Synaptic plasticity depends linearly on pre- synaptic activities and nonlinearly on post- synaptic activity (thresholding). The thresholding value changes according to post-synaptic activity (homeostasis).
  • 19. Generalization of Hebbian learning. BCM rule ( )Mv vη θ∆= −w u Bienenstock, Cooper & Munro (1982) J Neurosci φ(v) v Synaptic plasticity depends linearly on pre- synaptic activities and nonlinearly on post- synaptic activity (thresholding). The thresholding value changes according to post-synaptic activity (homeostasis). 2 EM vθ  =   ( )2 1v vη∆= −w u There is only one stable fixed point at v=1.
  • 20. Weight normalization: additive or multiplicative. vη∆ =w uHebbian learning, , is inherently unstable. One way to avoid this instability (i.e., divergence) is to impose a constraint over the weight vector w. 1i i w =∑ Additive normalization Multiplicative normalization i i j j w w v w v n η η∆ = − ∑ ( ) ( ) ( ) ( ) ( ) 1 t t t t t + ∆ + = + ∆ w w w w w 1=w Oja (1982) Neural Networks
  • 21. Oja learning rule as a principle component analyzer. Oja learning rule in discrete time Oja (1982) Neural Networks ( ) ( ) ( )2 1 v t v v v η η η η + + = = + − + + w u w w u w w u  ( ) ( ) ( ) ( ) ( ) ( )( )1t t v t t v t tη+ = + −w w u w ( ) d v v dt η= − w u w ( )Td dt η= − w Cw w Cww Oja learning rule in continuous time Oja learning rule in continuous time
  • 22. Oja learning rule as a principle component analyzer. Oja (1982) Neural Networks ( )Td dt η= − w Cw w Cww i i i a= ∑w e 1, ,i i i i nλ= =Ce e  1 0nλ λ≥ ≥ ≥ 2 1 n i i i j j i j a a a aλ λ =   = −    ∑ 1 i i a b a ≡ ( )1i i ib bλ λ= − ( )1 const, 0 2, ,ia a i n∴ → → = Eigenvector decomposition
  • 23. Modeling synapses: conductance-based model. ( )( ) ( )( )rest ex ex in inm dV V V g t E V g t E V dt τ = − + − + − LIF excitatory synapse inhibitory synapse Gerstner (2014) Neuronal Dynamics, Chapter 3 ( ) ( )syn syn syn f t t t f g t g e t t τ − − = Θ −∑ ( ) ( ) ( )rise fast slow syn syn 1 1 f f f t t t t t t f f g t g e ae a ae t tτ τ τ − − − − − −       = − + − Θ −        ∑ exponential with one decay time constant exponentials with one rise and two decay time constants
  • 24. Modeling synapses: conductance-based model. ( )( ) ( )( )rest ex ex in inm dV V V g t E V g t E V dt τ = − + − + − LIF excitatory synapse inhibitory synapse Gerstner (2014) Neuronal Dynamics, Chapter 3 ( ) ( )syn syn syn f t t t f g t g e t t τ − − = Θ −∑ ( ) ( ) ( )rise fast slow syn syn 1 1 f f f t t t t t t f f g t g e ae a e t tτ τ τ − − − − − −       = − + − Θ −        ∑ exponential with one decay time constant exponentials with one rise and two decay time constants
  • 25. Modeling synapses: conductance-based model. Gerstner (2014) Neuronal Dynamics, Chapter 3 ( ) ( )syn syn syn f t t t f g t g e t t τ − − = Θ −∑ ( ) ( ) ( )rise fast slow syn syn 1 1 f f f t t t t t t f f g t g e ae a e t tτ τ τ − − − − − −       = − + − Θ −        ∑ exponential with one decay time constant exponentials with one rise and two decay time constants excitatory inhibitory rise fast1 ms, 6 msτ τ≈ ≈ rise fast slow25 50 ms, 100 300 ms, 500 1000 msτ τ τ≈ − ≈ − ≈ − GABAA GABAB
  • 26. Modeling synapses: conductance-based model. ( )( ) ( )( )rest ex ex in inm dV V V g t E V g t E V dt τ = − + − + − ex ex ex dg g dt τ = − ( ) ( )ex ex exg t g t g← + in in in dg g dt τ = − ( ) ( )in in ing t g t g← + LIF excitatory synapse inhibitory synapse Dynamics of conductance Synaptic plasticity: how the peak conductances of excitatory and inhibitory synapses is modified in an activity-dependent manner. Song et al. (2000) Nature Neurosci
  • 27. Spike-timing dependent plasticity (STDP) Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362 pre-post: potentiation post-pre: depression
  • 28. STDP in equations. Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362 ( ) :post spikes : pre spikes n f ij i j n f w W t t∆= −∑ ∑ ( ) exp for 0 exp for 0 tA t W t tA t τ τ + + − −   − >    =   − <    
  • 29. Online implementation of STDP learning. Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
Synaptic traces:
  τ_+ dx_j/dt = −x_j + a_+(x_j) Σ_f δ(t − t_j^f)   (presynaptic spikes)
  τ_− dy_i/dt = −y_i + a_−(y_i) Σ_n δ(t − t_i^n)   (postsynaptic spikes)
x_j: presynaptic trace of neuron j, “remembering when presynaptic neuron j spikes”
y_i: postsynaptic trace of neuron i, “remembering when postsynaptic neuron i spikes”
Weight dynamics:
  dw_ij/dt = A_+(w_ij) x_j(t) Σ_n δ(t − t_i^n) − A_−(w_ij) y_i(t) Σ_f δ(t − t_j^f)
(potentiation at postsynaptic spikes, depression at presynaptic spikes)
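A minimal MATLAB sketch of this online trace scheme for a single synapse, assuming all-to-all interaction (a_+ = a_− = 1) and weight-independent amplitudes (the additive rule); the spike trains, rates, and parameter values below are illustrative, not taken from the lecture:

% Online pair-based STDP for one synapse, driven by given pre/post spike trains.
dt = 1e-3; T = 1; t = 0:dt:T;
taup = 20e-3; taun = 20e-3; Ap = 0.01; An = 0.012;
pre  = rand(size(t)) < 10*dt;       % presynaptic spikes (10 Hz Poisson)
post = rand(size(t)) < 10*dt;       % postsynaptic spikes (10 Hz Poisson)
x = 0; y = 0; w = 0.5;
for n = 1:numel(t)
    x = x - dt*x/taup;  y = y - dt*y/taun;   % traces decay between spikes
    if pre(n)                                % presynaptic spike:
        x = x + 1;                           %   increment presynaptic trace
        w = w - An*y;                        %   depress by the postsynaptic trace
    end
    if post(n)                               % postsynaptic spike:
        y = y + 1;                           %   increment postsynaptic trace
        w = w + Ap*x;                        %   potentiate by the presynaptic trace
    end
end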
  • 30. Weight dependence: hard and soft bounds. Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
Weight learning dynamics:
  dw_ij/dt = A_+(w_ij) x_j(t) Σ_n δ(t − t_i^n) − A_−(w_ij) y_i(t) Σ_f δ(t − t_j^f)
The functions A_+(w) and A_−(w) determine the weight dependence of the STDP learning rule. For biological reasons the synaptic weight should be restricted to a bounded range, w_min < w < w_max (here w_min = 0).
Hard bound rule:
  A_+(w) = η_+ Θ(w_max − w),   A_−(w) = η_− Θ(w)
(Linear) soft bound rule:
  A_+(w) = η_+ (w_max − w),   A_−(w) = η_− w
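In code, the two bounds amount to different choices of the amplitude functions A_+(w) and A_−(w); a small MATLAB sketch with illustrative parameter values (eta_p, eta_n, wmax are assumed names):

eta_p = 0.01; eta_n = 0.012; wmax = 1;      % illustrative amplitudes and upper bound
% Hard bounds: constant amplitudes, updates simply stop at the boundaries.
Ap_hard = @(w) eta_p * (w < wmax);          % A_+(w) = eta_+ * Theta(wmax - w)
An_hard = @(w) eta_n * (w > 0);             % A_-(w) = eta_- * Theta(w)
% Soft (linear) bounds: amplitudes shrink as w approaches the boundaries.
Ap_soft = @(w) eta_p * (wmax - w);          % A_+(w) = eta_+ * (wmax - w)
An_soft = @(w) eta_n * w;                   % A_-(w) = eta_- * w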
  • 31. Temporal all-to-all versus nearest-neighbor spike interaction. Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
Synaptic trace dynamics:
  τ_+ dx_j/dt = −x_j + a_+(x_j) Σ_f δ(t − t_j^f),   τ_− dy_i/dt = −y_i + a_−(y_i) Σ_n δ(t − t_i^n)
The function a_+(x) determines how much the trace is incremented by each spike.
All-to-all interaction, a_+(x) = 1: all spikes contribute additively to the trace, and the trace is not upper-bounded.
Nearest-neighbor interaction, a_+(x) = 1 − x: only the most recent spike contributes to the trace, and the trace is upper-bounded by 1.
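In an implementation the two interaction schemes differ only in how a presynaptic spike increments the trace; a small fragment-style sketch (x denotes an example trace value just before the spike):

x = 0.3;              % example value of the presynaptic trace just before a spike
% All-to-all interaction, a_+(x) = 1: every spike adds 1, the trace is unbounded.
x_all = x + 1;
% Nearest-neighbor interaction, a_+(x) = 1 - x: the spike resets the trace to 1,
% so only the most recent presynaptic spike is remembered.
x_nn = x + (1 - x);   % i.e., x_nn = 1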
  • 32. Additive vs multiplicative STDP. van Rossum et al. (2000) J Neurosci
Additive STDP:
  W(t) = A_+ exp(−t/τ_+) for t > 0,   W(t) = −A_− exp(t/τ_−) for t < 0
Potentiation and depression are independent of the current weight value.
Multiplicative STDP:
  W(t) = A_+ exp(−t/τ_+) for t > 0,   W(t) = −w A_− exp(t/τ_−) for t < 0
Depression is weight-dependent in a multiplicative way: a large synapse gets depressed more and a weak synapse less.
  • 33. Triplet rule: three-spike interaction. Pfister & Gerstner (2006) J Neurosci
Dynamics of two presynaptic and two postsynaptic traces:
  τ_x1 dx_1/dt = −x_1,  if t = t^pre then x_1 ← x_1 + 1
  τ_x2 dx_2/dt = −x_2,  if t = t^pre then x_2 ← x_2 + 1
  τ_y1 dy_1/dt = −y_1,  if t = t^post then y_1 ← y_1 + 1
  τ_y2 dy_2/dt = −y_2,  if t = t^post then y_2 ← y_2 + 1
Weight update:
  ∆w(t) = −[A_2^− y_1(t) + A_3^− x_2(t − ε) y_1(t)] δ(t − t^pre) + [A_2^+ x_1(t) + A_3^+ y_2(t − ε) x_1(t)] δ(t − t^post)
Terms: post-pre LTD (A_2^−), pre-post-pre LTD (A_3^−), pre-post LTP (A_2^+), post-pre-post LTP (A_3^+).
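A minimal discrete-time MATLAB sketch of the triplet rule for one synapse; the amplitudes and time constants below are illustrative (of the same order as those fitted by Pfister & Gerstner), and the −ε offset is approximated by using the trace values from just before the current spike:

% Triplet STDP (Pfister & Gerstner 2006) for a single synapse.
dt = 1e-3; T = 1; t = 0:dt:T;
tx1 = 17e-3; tx2 = 100e-3; ty1 = 34e-3; ty2 = 125e-3;   % trace time constants
A2p = 5e-3; A3p = 6e-3; A2n = 7e-3; A3n = 2e-3;         % pair and triplet amplitudes
pre  = rand(size(t)) < 10*dt;  post = rand(size(t)) < 10*dt;
x1 = 0; x2 = 0; y1 = 0; y2 = 0; w = 0.5;
for n = 1:numel(t)
    x2_old = x2;  y2_old = y2;              % trace values "just before" a spike (the -eps)
    x1 = x1 - dt*x1/tx1;  x2 = x2 - dt*x2/tx2;
    y1 = y1 - dt*y1/ty1;  y2 = y2 - dt*y2/ty2;
    if pre(n)                               % presynaptic spike: pair LTD + triplet LTD
        w  = w - y1*(A2n + A3n*x2_old);
        x1 = x1 + 1;  x2 = x2 + 1;
    end
    if post(n)                              % postsynaptic spike: pair LTP + triplet LTP
        w  = w + x1*(A2p + A3p*y2_old);
        y1 = y1 + 1;  y2 = y2 + 1;
    end
end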
  • 34. STDP for inhibitory synapses Vogels et al. (2011) Science
  • 35. Relation of STDP to other learning rules.
• STDP and rate-based Hebbian learning rules: Kempter, R., Gerstner, W., & van Hemmen, J. L. (1999). Hebbian learning and spiking neurons. Physical Review E, 59(4), 4498.
• STDP and the Bienenstock-Cooper-Munro (BCM) rule: Izhikevich, E. M., & Desai, N. S. (2003). Relating STDP to BCM. Neural Computation, 15(7), 1511-1523. Pfister, J. P., & Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. The Journal of Neuroscience, 26(38), 9673-9682.
• STDP and the temporal-difference learning rule: Rao, R. P., & Sejnowski, T. J. (2001). Spike-timing-dependent Hebbian plasticity as temporal difference learning. Neural Computation, 13(10), 2221-2237.
Exercise 2
  • 36. Functional consequence: reduced latency Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362 Song & Abbott (2000) Nature Neurosci potentiated depressed
  • 37. Functional consequence: latent pattern detection Masquelier et al. (2008) PLoS One; (2009) Neural Comput
  • 38. Functional consequence: latent pattern detection Masquelier et al. (2008) PLoS One; (2009) Neural Comput
  • 39. Functional consequence: latent pattern detection. Masquelier et al. (2008) PLoS One; (2009) Neural Comput
Spike response model (SRM), membrane potential in integral form:
  v(t) = η(t − t_i) + Σ_j w_j ε(t − t_j)
where η(t − t_i) is the action-potential kernel triggered by the last postsynaptic spike t_i, and ε(t − t_j) is the synaptic-potential kernel of presynaptic spike t_j.
Spike-timing-dependent plasticity for presynaptic spike t_j and postsynaptic spike t_i:
  w_j → w_j + A_+ exp(−(t_i − t_j)/τ_+)  if t_j ≤ t_i
  w_j → w_j − A_− exp(−(t_j − t_i)/τ_−)  if t_j > t_i
  • 40. Functional consequence: latent pattern detection Masquelier et al. (2008) PLoS One; (2009) Neural Comput
  • 41. Bird-song learning: LMAN provides exploratory noise. Vocal motor pathway (VMP) • HVC (High vocal center) • RA Anterior forebrain pathway (AFP) • Area X • DLM • LMAN Kao et al. (2005) Nature
  • 42. HVC-RA synaptic plasticity modulated by reward. Fiete & Seung (2007) J Neurophysiol.
  • 43. Tripartite synaptic plasticity. Fiete & Seung (2007) J Neurophysiol
HVC→RA weight change gated by the reward R and by the exploratory LMAN input:
  dW_ij/dt = η R(t) e_ij(t) = η R(t) ∫_0^t dt′ G(t − t′) s_i^LMAN(t′) s_j^HVC(t′)
where s_j^HVC is the presynaptic (HVC) spike train, s_i^LMAN the exploratory LMAN-driven input to RA neuron i, and G a temporal filter defining the eligibility trace e_ij.
Exercise 3: this tripartite learning rule indeed leads to reward maximization.
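A minimal discrete-time MATLAB sketch of this reward-gated rule for a single HVC→RA synapse; the exponential eligibility kernel G, the constant reward, and all variable names are illustrative assumptions rather than the exact implementation of Fiete & Seung (2007):

% Reward-modulated tripartite rule: the eligibility trace is a low-pass filtered
% product of the LMAN (exploratory) signal and the HVC (presynaptic) spike train.
dt = 1e-3; T = 1; nsteps = round(T/dt);
tauG = 50e-3; eta = 1e-3;
W = 0.5; elig = 0; R = 1;                       % reward held constant for simplicity
for n = 1:nsteps
    sHVC  = rand < 20*dt;                       % presynaptic HVC spike (20 Hz Poisson)
    sLMAN = randn;                              % exploratory LMAN-driven fluctuation
    elig  = elig - dt*elig/tauG + sLMAN*sHVC;   % eligibility trace e_ij (exponential kernel G)
    W     = W + eta*R*elig*dt;                  % dW/dt = eta * R * e_ij
end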
  • 44. Summary
• Synaptic plasticity refers to the activity-dependent change of the synaptic weight between neurons and provides a physiological basis for learning and memory.
• Hebbian learning: “Fire together, wire together.”
• Synaptic plasticity may be formulated in terms of rate coding or spike-timing coding.
• Synaptic plasticity is determined not only by the activity of the two connected neurons but is also modulated by other factors (e.g., reward, homeostasis).
  • 45. Exercises
1. Prove that all eigenvalues of a Wishart matrix are non-negative (i.e., that the matrix is positive semidefinite).
2. Read the following paper: Kempter, R., Gerstner, W., & van Hemmen, J. L. (1999). Hebbian learning and spiking neurons. Physical Review E, 59(4), 4498. From the additive STDP learning rule, derive the rate-based Hebbian learning rule
  ∆w_ij = α f_i f_j + β f_j
where f_i and f_j are the post- and pre-synaptic firing rates, respectively.
3. Read the following paper: Fiete, I. R., & Seung, H. S. (2006). Gradient learning in spiking neural networks by dynamic perturbation of conductances. Physical Review Letters, 97(4), 048104. Prove that the tripartite learning rule (slide 43) can be derived as a consequence of reward maximization.
  • 46. Exercises: Code Implementation of Song et al. (2000)
Goal: implement the STDP rule in Song, Miller & Abbott (2000).
Membrane dynamics:
  τ_m dV(t)/dt = (V_rest − V(t)) + g_ex(t)(E_ex − V(t)) + g_in(t)(E_in − V(t))
Conductance dynamics:
  τ_ex dg_ex(t)/dt = −g_ex(t),   τ_in dg_in(t)/dt = −g_in(t)
  g_ex(t) → g_ex(t) + ḡ_a   when the a-th excitatory input arrives
  g_in(t) → g_in(t) + ḡ_in  when any inhibitory input arrives
  • 47. Exercises: Code Implementation of Song et al. (2000)
STDP for presynaptic firing (when the a-th excitatory input arrives):
  P_a → P_a + A_+
  ḡ_a → max(ḡ_a + M(t) g_max, 0)
STDP for postsynaptic firing (when the output neuron fires):
  M → M − A_−
  ḡ_a → min(ḡ_a + P_a(t) g_max, g_max)
Synaptic traces:
  τ_− dM(t)/dt = −M(t),   τ_+ dP_a(t)/dt = −P_a(t)
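As a hint for the exercise, the per-time-step bookkeeping of this rule might look like the sketch below; the names follow the skeleton on the next slide, except that M, P, and ga here hold the current values (a scalar and two vectors of length Nex) rather than full time-indexed arrays, and preSpike / outSpike are assumed logical variables for which excitatory inputs fired and whether the output neuron fired in this step:

% One time step of the Song et al. (2000) STDP bookkeeping (not the full loop).
M = M - dt*M/taun;                       % postsynaptic trace decays to zero (M <= 0)
P = P - dt*P/taup;                       % presynaptic traces decay to zero
P(preSpike) = P(preSpike) + Ap;                       % presynaptic firing increments P_a
ga(preSpike) = max(ga(preSpike) + M*gmaxex, 0);       % ...and depresses g_a (lower bound 0)
if outSpike
    M  = M - An;                                      % postsynaptic firing decrements M
    ga = min(ga + P*gmaxex, gmaxex);                  % ...and potentiates all g_a by P_a
end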
  • 48. Exercises: Code Implementation of Song et al. (2000)

%% parameter setting:
% LIF-neuron parameters:
taum = 20/1000; Vrest = -70; Eex = 0; Ein = -70; Vth = -54; Vreset = -60;
% synapse parameters:
Nex = 1000; Nin = 200; tauex = 5/1000; tauin = 5/1000;
gmaxin = 0.05; gmaxex = 0.015;
% STDP parameters:
Ap = 0.005; An = Ap*1.05; taup = 20/1000; taun = 20/1000;
% simulation parameters:
dt = 0.1/1000; T = 200; t = 0:dt:T;
% input firing rates:
Fex = randi([10 30], 1, Nex); Fin = 10*ones(1,Nin);

%% simulation:
V = zeros(length(t), 1); M = zeros(length(t), 1); P = zeros(length(t), Nex);
gex = zeros(length(t), 1); gin = zeros(length(t), 1);
V(1) = Vreset;
ga = zeros(length(t), Nex); ga(1,:) = gmaxex*ones(1,Nex);
disp('Now simulating LIF neuron ...');
tic;
for n = 1:length(t)-1
    % WRITE YOUR CODE HERE:
end
toc;