This lecture discusses synaptic learning rules in neural networks. It introduces the basic anatomy and physiology of synapses and different coding schemes neurons use, such as rate coding and spike timing coding. It then covers several synaptic plasticity rules, including Hebbian learning, spike-timing dependent plasticity (STDP), and the Bienenstock-Cooper-Munro (BCM) rule. It also discusses modeling synapses using the conductance-based model and implementations of STDP learning through online learning rules and weight dependence mechanisms.
2. Neurons communicate through synapses.
In this lecture we will learn:
• Basic anatomy and physiology of synapses
• Rate coding and spike coding
• Hebbian learning
• Spike-timing-dependent plasticity
• Reward-modulated plasticity
9. How does a neuron represent information?
Panzeri et al. (2010) Trends in Neurosciences
10. Rate coding: Number of Spikes matters.
Rate coding hypothesis: a neuron represents information through
its spike rate.
Hartline (1940) Am J Physiol; Hartline (1969) Science
Compound eye of the horseshoe crab; recording from the optic nerve.
Firing patterns of cortical neurons are highly irregular and are well
approximated by a random Poisson process (Softky & Koch (1993) J Neurosci;
Shadlen & Newsome (1994) Current Biology).
11. Temporal coding: Spike timing matters.
Temporal coding hypothesis: a neuron represents information
through its spike timings.
Gollisch & Meister (2008) Science Johansson & Birznieks (2004) Nature Neurosci
12. Hebb’s postulate of activity dependent plasticity.
"Let us assume that the persistence or repetition of a reverberatory
activity (or "trace") tends to induce lasting cellular axon of cell A is
near enough to excite a cell B and repeatedly or persistently takes
part in firing it, some growth process or metabolic change takes
place in one or both cells such that A's efficiency, as one of the cells
firing B, is increased."
Hebbian theory: a theory in neuroscience that proposes an
explanation for the adaptation of neurons in the brain during the
learning process.
Donald O. Hebb (1904-1985)
The Organization of Behavior (1949)
Image source: Wikipedia, Donald O. Hebb
13. Synaptic plasticity: rate-coding model
A single output neuron with output rate $v$ receives input rates $\mathbf{u} = (u_1, \dots, u_n)^T$ through synaptic strengths $\mathbf{w} = (w_1, \dots, w_n)^T$:

$$\tau \frac{dv}{dt} = -v + \mathbf{w}^T \mathbf{u}$$

If we consider a time scale larger than $\tau$, then the output rate is well approximated by

$$v \approx \mathbf{w}^T \mathbf{u}.$$
14. Hebbian plasticity in equation.
Hebbian learning with input vector $\mathbf{u}$ and output $v$.
Vector form:

$$\Delta \mathbf{w} = \eta\, v\, \mathbf{u}$$

Or component form:

$$\Delta w_i = \eta\, v\, u_i \qquad (i = 1, \dots, n)$$

If the membrane dynamics is fast compared to the timescale of
synaptic plasticity, the output is approximated as $v = \mathbf{w}^T \mathbf{u}$.
Then the Hebbian rule now reads:

$$\Delta \mathbf{w} = \eta\, \mathbf{u}\mathbf{u}^T \mathbf{w}$$
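As a minimal numerical sketch (not part of the original slides; the learning rate, inputs, and loop length are illustrative assumptions), the rate-based Hebbian update in MATLAB:

% Sketch: rate-based Hebbian learning (all parameter values assumed).
eta = 0.01;              % learning rate
n = 3;                   % number of input synapses
w = randn(n, 1);         % initial synaptic weights
for step = 1:100
    u = rand(n, 1);      % input rates for this presentation
    v = w' * u;          % steady-state output rate, v = w'*u
    w = w + eta*v*u;     % Hebbian update, dw = eta*v*u
end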
15. This form of learning rule is unstable.
Hebbian learning with a single input $\mathbf{u}$:

$$\Delta \mathbf{w} = \eta\, \mathbf{u}\mathbf{u}^T \mathbf{w}$$

Hebbian learning with an input ensemble:

$$\Delta \mathbf{w} = \eta\, \langle \mathbf{u}\mathbf{u}^T \rangle\, \mathbf{w} = \eta\, \mathbf{C}\mathbf{w}$$

where $\mathbf{C} = \langle \mathbf{u}\mathbf{u}^T \rangle$ is the covariance matrix of the random inputs.
If the inputs $\mathbf{u}_1, \dots, \mathbf{u}_n$ are i.i.d., their sample covariance matrix is
called a Wishart matrix (Wishart, 1928).
All eigenvalues of a Wishart matrix are non-negative (Exercise 1).
16. This form of learning rule is unstable.
Eigenvalue decomposition of the covariance matrix:

$$\mathbf{C}\mathbf{e}_i = \lambda_i \mathbf{e}_i \quad (i = 1, \dots, n), \qquad \lambda_1 \ge \dots \ge \lambda_n \ge 0$$

(all eigenvalues of a Wishart matrix are non-negative).
The eigenvectors form a basis for the n-dim space, and the weight
vector may be decomposed into the eigenvectors:

$$\mathbf{w} = \sum_i a_i \mathbf{e}_i$$

Then, the Hebbian learning rule $\Delta \mathbf{w} = \eta\, \mathbf{C}\mathbf{w}$ is reduced to:

$$\Delta a_i = \eta\, \lambda_i a_i$$

Therefore, $a_i$ grows exponentially, finally diverging to infinity.
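A few lines of MATLAB make the divergence visible (again a sketch; the input ensemble and constants are assumed):

% Sketch: ensemble Hebbian learning diverges exponentially.
n = 5; m = 1000;
U = randn(n, m);          % m random input vectors as columns
C = (U*U')/m;             % input covariance matrix, C = <u*u'>
eta = 0.05;
w = randn(n, 1);
for step = 1:200
    w = w + eta*C*w;      % dw = eta*C*w
end
disp(norm(w))             % grows roughly like (1 + eta*lambda_1)^200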
17. Covariance matrix of input has non-negative eigenvalues.
For any non-zero vector $\mathbf{x}$:

$$\mathbf{x}^T \mathbf{C}\, \mathbf{x} = \mathbf{x}^T \langle \mathbf{u}\mathbf{u}^T \rangle\, \mathbf{x} = \left\langle (\mathbf{u}^T \mathbf{x})^2 \right\rangle \ge 0$$

If the vector is decomposed in terms of eigenvectors, $\mathbf{x} = \sum_{i=1}^n a_i \mathbf{e}_i$, then

$$\mathbf{x}^T \mathbf{C}\, \mathbf{x} = \sum_{i,j=1}^n a_i a_j\, \mathbf{e}_i^T \mathbf{C}\, \mathbf{e}_j = \sum_{i,j=1}^n a_i a_j \lambda_j \delta_{ij} = \sum_{i=1}^n \lambda_i a_i^2$$

For any $\{a_i\}$ this quantity must be non-negative. Therefore, the
eigenvalues $\{\lambda_i\}$ must be non-negative, too.
18. Generalization of Hebbian learning.
Covariance learning (Sejnowski (1977) Biophys J): synaptic weights change
if pre- and post-activities are positively correlated.

$$\Delta \mathbf{w} = \eta\, (\mathbf{u} - \langle \mathbf{u} \rangle)(v - \langle v \rangle)$$

BCM rule (Bienenstock, Cooper & Munro (1982) J Neurosci): synaptic
plasticity depends linearly on pre-synaptic activities and nonlinearly on
post-synaptic activity through the thresholding function $\varphi(v) = v(v - \theta_M)$.
The threshold value changes according to post-synaptic activity (homeostasis).

$$\Delta \mathbf{w} = \eta\, v (v - \theta_M)\, \mathbf{u}$$
19. Generalization of Hebbian learning.
BCM rule (Bienenstock, Cooper & Munro (1982) J Neurosci):

$$\Delta \mathbf{w} = \eta\, v (v - \theta_M)\, \mathbf{u}, \qquad \theta_M = E[v^2]$$

Synaptic plasticity depends linearly on pre-synaptic activities and
nonlinearly on post-synaptic activity (thresholding). The threshold value
changes according to post-synaptic activity (homeostasis).
For a stationary response, $\theta_M = E[v^2] = v^2$, so the rule becomes

$$\Delta \mathbf{w} = \eta\, v^2 (1 - v)\, \mathbf{u}$$

There is only one stable fixed point, at $v = 1$.
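A minimal sketch of the BCM rule with a sliding threshold (the low-pass estimate of $E[v^2]$ and all constants are illustrative assumptions):

% Sketch: BCM rule with a sliding threshold theta ~ E[v^2].
eta = 0.005; tau_theta = 100; dt = 1;
n = 3;
w = 0.1*ones(n, 1);
theta = 0;
for step = 1:5000
    u = rand(n, 1);                                % pre-synaptic rates
    v = w'*u;                                      % post-synaptic rate
    w = w + eta*v*(v - theta)*u;                   % BCM update
    theta = theta + (dt/tau_theta)*(v^2 - theta);  % running estimate of E[v^2]
end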
20. Weight normalization: additive or multiplicative.
Hebbian learning, $\Delta \mathbf{w} = \eta\, v\, \mathbf{u}$, is inherently unstable.
One way to avoid this instability (i.e., divergence) is to impose a
constraint over the weight vector $\mathbf{w}$.
Additive normalization, keeping $\sum_i w_i = 1$ by subtracting the mean
update from every synapse:

$$\Delta w_i = \eta\, v\, u_i - \frac{\eta\, v}{n} \sum_j u_j$$

Multiplicative normalization, keeping $\|\mathbf{w}\| = 1$ by renormalizing
after every update (Oja (1982) J Math Biol):

$$\mathbf{w}(t+1) = \frac{\mathbf{w}(t) + \Delta \mathbf{w}(t)}{\|\mathbf{w}(t) + \Delta \mathbf{w}(t)\|}$$
21. Oja learning rule as a principal component analyzer.
Oja learning rule in discrete time (Oja (1982) J Math Biol): renormalizing
after each Hebbian step and expanding to first order in $\eta$,

$$\mathbf{w}(t+1) = \frac{\mathbf{w} + \eta\, v\, \mathbf{u}}{\|\mathbf{w} + \eta\, v\, \mathbf{u}\|} = \mathbf{w} + \eta\, v\, (\mathbf{u} - v\, \mathbf{w}) + O(\eta^2)$$

i.e.,

$$\mathbf{w}(t+1) = \mathbf{w}(t) + \eta\, v(t) \left( \mathbf{u}(t) - v(t)\, \mathbf{w}(t) \right)$$

Oja learning rule in continuous time:

$$\frac{d\mathbf{w}}{dt} = \eta\, v\, (\mathbf{u} - v\, \mathbf{w})$$

Averaging over the input ensemble with $v = \mathbf{w}^T \mathbf{u}$:

$$\frac{d\mathbf{w}}{dt} = \eta \left( \mathbf{C}\mathbf{w} - (\mathbf{w}^T \mathbf{C}\, \mathbf{w})\, \mathbf{w} \right)$$
22. Oja learning rule as a principal component analyzer.
Oja (1982) J Math Biol. Eigenvector decomposition: with $\mathbf{w} = \sum_i a_i \mathbf{e}_i$,
$\mathbf{C}\mathbf{e}_i = \lambda_i \mathbf{e}_i$ and $\lambda_1 \ge \dots \ge \lambda_n \ge 0$, the averaged dynamics

$$\frac{d\mathbf{w}}{dt} = \eta \left( \mathbf{C}\mathbf{w} - (\mathbf{w}^T \mathbf{C}\, \mathbf{w})\, \mathbf{w} \right)$$

reduce to (absorbing $\eta$ into the time scale)

$$\dot{a}_i = \lambda_i a_i - a_i \sum_{j=1}^n \lambda_j a_j^2$$

Defining $b_i \equiv a_i / a_1$,

$$\dot{b}_i = (\lambda_i - \lambda_1)\, b_i$$

$$\therefore \quad a_1 \to \text{const}, \qquad a_i \to 0 \ (i = 2, \dots, n)$$

so the weight vector converges to the principal eigenvector $\mathbf{e}_1$.
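A sketch of Oja's rule extracting the first principal component (the mixing matrix, learning rate, and iteration count are assumptions); after training, w should align with the leading eigenvector of the input covariance:

% Sketch: Oja's rule converges to the first principal component.
n = 4; eta = 0.001;
A = randn(n)*diag([3 1 0.5 0.1]);   % mixing matrix -> anisotropic inputs
w = randn(n, 1);
for step = 1:50000
    u = A*randn(n, 1);              % zero-mean input sample
    v = w'*u;                       % output rate
    w = w + eta*v*(u - v*w);        % Oja update
end
C = A*A';                           % true input covariance
[E, L] = eig(C);
[~, k] = max(diag(L));              % index of the leading eigenvector
disp(abs(w'*E(:, k))/norm(w))       % close to 1 if w is aligned with it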
23. Modeling synapses: conductance-based model.
A leaky integrate-and-fire (LIF) neuron driven by an excitatory and an
inhibitory synapse (Gerstner (2014) Neuronal Dynamics, Chapter 3):

$$\tau_m \frac{dV}{dt} = -(V - V_\text{rest}) + g_\text{ex}(t)(E_\text{ex} - V) + g_\text{in}(t)(E_\text{in} - V)$$

Exponential with one decay time constant:

$$g_\text{syn}(t) = \bar{g}_\text{syn} \sum_f e^{-(t - t^f)/\tau_\text{syn}}\, \Theta(t - t^f)$$

Exponentials with one rise and two decay time constants:

$$g_\text{syn}(t) = \bar{g}_\text{syn} \sum_f \left( 1 - e^{-(t - t^f)/\tau_\text{rise}} \right) \left( a\, e^{-(t - t^f)/\tau_\text{fast}} + (1 - a)\, e^{-(t - t^f)/\tau_\text{slow}} \right) \Theta(t - t^f)$$
25. Modeling synapses: conductance-based model.
Gerstner (2014) Neuronal Dynamics, Chapter 3.
Typical time constants for the two conductance kernels above:
Excitatory: $\tau_\text{rise} \approx 1$ ms, $\tau_\text{fast} \approx 6$ ms.
Inhibitory (GABA_A, GABA_B): $\tau_\text{rise} \approx 25$-$50$ ms, $\tau_\text{fast} \approx 100$-$300$ ms, $\tau_\text{slow} \approx 500$-$1000$ ms.
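A short sketch plotting the two kernels for a single spike at t = 0 (parameter values are illustrative assumptions):

% Sketch: synaptic conductance kernels for one spike at t = 0 (t in ms).
t = 0:0.1:100;                      % time since the presynaptic spike
tau_syn = 5;
g1 = exp(-t/tau_syn);               % exponential with one decay
tau_rise = 1; tau_fast = 6; tau_slow = 50; a = 0.8;
g2 = (1 - exp(-t/tau_rise)).* ...   % one rise and two decay time constants
     (a*exp(-t/tau_fast) + (1 - a)*exp(-t/tau_slow));
plot(t, g1, t, g2);
xlabel('time since spike (ms)'); ylabel('normalized conductance');
legend('one decay', 'rise + two decays');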
26. Modeling synapses: conductance-based model.

$$\tau_m \frac{dV}{dt} = -(V - V_\text{rest}) + g_\text{ex}(t)(E_\text{ex} - V) + g_\text{in}(t)(E_\text{in} - V)$$

Dynamics of conductance: each conductance decays exponentially and is
incremented by its peak value at every presynaptic spike:

$$\tau_\text{ex} \frac{dg_\text{ex}}{dt} = -g_\text{ex}, \qquad g_\text{ex}(t) \leftarrow g_\text{ex}(t) + \bar{g}_\text{ex}$$

$$\tau_\text{in} \frac{dg_\text{in}}{dt} = -g_\text{in}, \qquad g_\text{in}(t) \leftarrow g_\text{in}(t) + \bar{g}_\text{in}$$

Synaptic plasticity: how the peak conductances of excitatory and
inhibitory synapses are modified in an activity-dependent manner.
Song et al. (2000) Nature Neurosci
28. STDP in equations.
Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
The total weight change sums the STDP window $W$ over all pairs of
post-synaptic spike times $t_i^n$ and pre-synaptic spike times $t_j^f$:

$$\Delta w_{ij} = \sum_{n\,:\,\text{post spikes}} \ \sum_{f\,:\,\text{pre spikes}} W(t_i^n - t_j^f)$$

$$W(t) = \begin{cases} A_+ \exp(-t/\tau_+) & \text{for } t > 0 \\ -A_- \exp(t/\tau_-) & \text{for } t < 0 \end{cases}$$
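A sketch of the pair-based window (amplitude and time-constant values are assumptions, chosen to match the exercise code at the end of this deck):

% Sketch: pair-based STDP window W(t), with t = t_post - t_pre in ms.
Ap = 0.005; An = Ap*1.05; taup = 20; taun = 20;
W = @(t) (t > 0).*( Ap*exp(-t/taup)) + ...
         (t < 0).*(-An*exp( t/taun));
t = -100:0.5:100;
plot(t, W(t));
xlabel('t_{post} - t_{pre} (ms)'); ylabel('W(t)');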
29. Online implementation of STDP learning
Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
Each neuron keeps a trace that decays exponentially and is incremented
at its own spike times:

$$\tau_+ \frac{dx_j}{dt} = -x_j + a_+(x_j) \sum_{f\,:\,\text{presynaptic spike}} \delta(t - t_j^f)$$

$$\tau_- \frac{dy_i}{dt} = -y_i + a_-(y_i) \sum_{n\,:\,\text{postsynaptic spike}} \delta(t - t_i^n)$$

$x_j$: presynaptic trace of neuron $j$, "remembering when presynaptic neuron $j$ spikes"
$y_i$: postsynaptic trace of neuron $i$, "remembering when postsynaptic neuron $i$ spikes"
The weight is potentiated in proportion to the presynaptic trace at every
postsynaptic spike, and depressed in proportion to the postsynaptic trace
at every presynaptic spike:

$$\frac{dw_{ij}}{dt} = A_+(w_{ij})\, x_j(t) \sum_{n\,:\,\text{postsynaptic spikes}} \delta(t - t_i^n) - A_-(w_{ij})\, y_i(t) \sum_{f\,:\,\text{presynaptic spikes}} \delta(t - t_j^f)$$
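In discrete time this becomes a few update lines per step; a sketch for a single synapse with stand-in Poisson spike trains (all values assumed):

% Sketch: online pair-based STDP for a single synapse.
dt = 0.1; taup = 20; taun = 20;          % ms
Ap = 0.005; An = Ap*1.05; wmax = 1;
x = 0; y = 0; w = 0.5*wmax;
for step = 1:100000
    pre  = rand < 0.001;                 % stand-in Poisson presynaptic spike
    post = rand < 0.001;                 % stand-in Poisson postsynaptic spike
    x = x - dt*x/taup + pre;             % presynaptic trace (all-to-all, a+ = 1)
    y = y - dt*y/taun + post;            % postsynaptic trace
    if post, w = min(w + Ap*x*wmax, wmax); end   % potentiation at post spike
    if pre,  w = max(w - An*y*wmax, 0);    end   % depression at pre spike
end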
30. Weight dependence: hard and soft bounds.
Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
Weight learning dynamics:

$$\frac{dw_{ij}}{dt} = A_+(w_{ij})\, x_j(t) \sum_{n\,:\,\text{postsynaptic spikes}} \delta(t - t_i^n) - A_-(w_{ij})\, y_i(t) \sum_{f\,:\,\text{presynaptic spikes}} \delta(t - t_j^f)$$

$A_+(w)$, $A_-(w)$: determine the weight dependence of the STDP learning rule.
For biological reasons, the synaptic weights should be restricted to $w_\text{min} < w < w_\text{max}$.
Hard bound rule:

$$A_+(w) = \eta_+\, \Theta(w_\text{max} - w), \qquad A_-(w) = \eta_-\, \Theta(w)$$

(Linear) soft bound rule:

$$A_+(w) = \eta_+\, (w_\text{max} - w), \qquad A_-(w) = \eta_-\, w$$
31. Temporal all-to-all versus nearest-neighbor spike interaction.
Sjöström & Gerstner, Scholarpedia, 5(2):1362. doi:10.4249/scholarpedia.1362
Synaptic trace dynamics:

$$\tau_+ \frac{dx_j}{dt} = -x_j + a_+(x_j) \sum_{f\,:\,\text{presynaptic spike}} \delta(t - t_j^f), \qquad \tau_- \frac{dy_i}{dt} = -y_i + a_-(y_i) \sum_{n\,:\,\text{postsynaptic spike}} \delta(t - t_i^n)$$

$a_+(x)$: determines how much the trace is incremented by spikes.
All-to-all interaction, $a_+(x) = 1$: all spikes contribute additively to
the trace, and the trace is not upper-bounded.
Nearest-neighbor interaction, $a_+(x) = 1 - x$: only the nearest spike
contributes to the trace, and the trace is upper-bounded by 1.
32. Additive vs multiplicative STDP.
van Rossum et al. (2000) J Neurosci.
Additive STDP: potentiation and depression are independent of the weight value.

$$W(t) = \begin{cases} A_+ \exp(-t/\tau_+) & \text{for } t > 0 \\ -A_- \exp(t/\tau_-) & \text{for } t < 0 \end{cases}$$

Multiplicative STDP: depression is weight-dependent in a multiplicative
way; a large synapse gets depressed more and a weak synapse less.

$$W(t) = \begin{cases} A_+ \exp(-t/\tau_+) & \text{for } t > 0 \\ -A_-\, w\, \exp(t/\tau_-) & \text{for } t < 0 \end{cases}$$
33. Triplet rule: three-spike interaction
Pfister & Gerstner (2006) J Neurosci
Dynamics of two presynaptic and two postsynaptic traces:

$$\tau_1 \frac{dx_1}{dt} = -x_1, \ \text{if } t = t^\text{pre} \text{ then } x_1 \leftarrow x_1 + 1; \qquad \tau_2 \frac{dx_2}{dt} = -x_2, \ \text{if } t = t^\text{pre} \text{ then } x_2 \leftarrow x_2 + 1$$

$$\tau_1' \frac{dy_1}{dt} = -y_1, \ \text{if } t = t^\text{post} \text{ then } y_1 \leftarrow y_1 + 1; \qquad \tau_2' \frac{dy_2}{dt} = -y_2, \ \text{if } t = t^\text{post} \text{ then } y_2 \leftarrow y_2 + 1$$

Weight update combining pair and triplet terms:

$$\Delta w(t) = -\left[ A_2^- y_1(t) + A_3^- x_2(t - \epsilon)\, y_1(t) \right] \delta(t - t^\text{pre}) + \left[ A_2^+ x_1(t) + A_3^+ y_2(t - \epsilon)\, x_1(t) \right] \delta(t - t^\text{post})$$

The four terms correspond to post-pre LTD, pre-post-pre LTD, pre-post LTP,
and post-pre-post LTP, respectively.
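A discrete-time sketch of the triplet rule (spike trains, time constants, and amplitudes are illustrative assumptions, not the paper's fitted values):

% Sketch: triplet STDP with two pre- and two postsynaptic traces.
dt = 1;                                   % ms
tau_x1 = 17; tau_x2 = 100;                % presynaptic trace time constants
tau_y1 = 34; tau_y2 = 125;                % postsynaptic trace time constants
A2p = 1e-3; A3p = 6e-3; A2n = 7e-3; A3n = 2e-4;
x1 = 0; x2 = 0; y1 = 0; y2 = 0; w = 0.5;
for step = 1:100000
    pre  = rand < 0.002;  post = rand < 0.002;   % stand-in spike trains
    % updates use the trace values just before this spike's increment,
    % implementing the t - epsilon in the equation above:
    if pre,  w = w - y1*(A2n + A3n*x2); end      % pair + triplet LTD
    if post, w = w + x1*(A2p + A3p*y2); end      % pair + triplet LTP
    x1 = x1 - dt*x1/tau_x1 + pre;  x2 = x2 - dt*x2/tau_x2 + pre;
    y1 = y1 - dt*y1/tau_y1 + post; y2 = y2 - dt*y2/tau_y2 + post;
end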
35. Relation of STDP to other learning rules.
• STDP and rate-based Hebbian learning rules
Kempter, R., Gerstner, W., & Van Hemmen, J. L. (1999). Hebbian learning and spiking
neurons. Physical Review E, 59(4), 4498.
• STDP and Bienenstock-Cooper-Munro (BCM) rule
Izhikevich, E. M., & Desai, N. S. (2003). Relating STDP to BCM. Neural Computation, 15(7), 1511-1523.
Pfister, J. P., & Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. Journal of Neuroscience, 26(38), 9673-9682.
• STDP and temporal-difference learning rule
Rao, R. P., & Sejnowski, T. J. (2001). Spike-timing-dependent Hebbian plasticity as temporal difference learning. Neural Computation, 13(10), 2221-2237.
Exercise 2
39. Functional consequence: latent pattern detection
Masquelier et al. (2008) PLoS One; (2009) Neural Comput
Spike response model (SRM): membrane potential in integral form, with an
action-potential kernel $\eta$ anchored at the postsynaptic spike $t_i$ and
synaptic-potential kernels $\varepsilon$ anchored at the presynaptic spikes $t_j$:

$$v(t) = \eta(t - t_i) + \sum_j w_j\, \varepsilon(t - t_j)$$

Spike-timing-dependent plasticity for presynaptic spike $t_j$ and
postsynaptic spike $t_i$:

$$w_j \to w_j + A_+\, e^{-(t_i - t_j)/\tau_+} \quad \text{if } t_j < t_i$$

$$w_j \to w_j - A_-\, e^{-(t_j - t_i)/\tau_-} \quad \text{if } t_j > t_i$$
43. Tripartite synaptic plasticity
Fiete & Seung (2007) J Neurophysiol.
(Figure: songbird song-system circuit, including HVC, RA, and LMAN.)
Schematically, the weight change is the product of a global reward signal
$R(t)$ and a synapse-specific eligibility trace $e_{ij}$, accumulated from
the presynaptic spike train $s_j$ and the perturbation $G_i$ of the
postsynaptic conductance:

$$\frac{dW_{ij}}{dt} = \eta\, R(t)\, e_{ij}(t), \qquad e_{ij}(t) = \int^t G_i(t')\, s_j(t')\, dt'$$

Exercise 3
This tripartite learning rule indeed leads to reward maximization.
44. Summary
• Synaptic plasticity refers to the activity-dependent change of
a synaptic weight between neurons, and is the physiological
basis of learning and memory.
• Hebbian learning: "Fire together, wire together."
• Synaptic plasticity may be formulated in terms of rate
coding or spike-timing coding.
• Synaptic plasticity is determined not only by the activities of
the two connected neurons but is also modulated by other
factors (e.g., reward, homeostasis).
45. Exercises
1. Prove that all eigenvalues of a Wishart matrix are
non-negative (i.e., that the matrix is positive semidefinite).
2. Read the following paper:
Kempter, R., Gerstner, W., & Van Hemmen, J. L. (1999). Hebbian learning and spiking neurons. Physical Review E, 59(4), 4498.
From the additive STDP learning rule, derive the following
rate-based Hebbian learning rule ($f_i$ and $f_j$ are post- and
pre-synaptic activity, respectively):

$$\Delta w_{ij} = \alpha f_i f_j + \beta f_j$$

3. Read the following paper:
Fiete, I. R., & Seung, H. S. (2006). Gradient learning in spiking neural networks by dynamic perturbation of conductances. Physical Review Letters, 97(4), 048104.
Prove that the learning rule (slide 43) can be derived as
a consequence of reward maximization.
46. Exercises: Code Implementation of Song et al. (2000)
Goal: Implement the STDP rule in Song, Miller & Abbott (2000).
Membrane dynamics:

$$\tau_m \frac{dV(t)}{dt} = (V_\text{rest} - V(t)) + g_\text{ex}(t)(E_\text{ex} - V(t)) + g_\text{in}(t)(E_\text{in} - V(t))$$

Conductance dynamics:

$$\tau_\text{ex} \frac{dg_\text{ex}(t)}{dt} = -g_\text{ex}(t), \qquad \tau_\text{in} \frac{dg_\text{in}(t)}{dt} = -g_\text{in}(t)$$

$g_\text{ex}(t) \to g_\text{ex}(t) + \bar{g}_a$ when the $a$-th excitatory input arrives

$g_\text{in}(t) \to g_\text{in}(t) + \bar{g}_\text{in}$ when any inhibitory input arrives
47. Exercises: Code Implementation of Song et al. (2000)
STDP for presynaptic firing, when the $a$-th excitatory input arrives:

$$P_a \to P_a + A_+, \qquad \bar{g}_a \to \max\!\left( \bar{g}_a + M(t)\, \bar{g}_\text{max},\ 0 \right)$$

STDP for postsynaptic firing, when the output neuron fires:

$$M \to M - A_-, \qquad \bar{g}_a \to \min\!\left( \bar{g}_a + P_a(t)\, \bar{g}_\text{max},\ \bar{g}_\text{max} \right)$$

Synaptic traces:

$$\tau_- \frac{dM(t)}{dt} = -M(t), \qquad \tau_+ \frac{dP_a(t)}{dt} = -P_a(t)$$
48. Exercises: Code Implementation of Song et al. (2000)
%% parameter setting:
% LIF-neuron parameters:
taum = 20/1000;    % membrane time constant (s)
Vrest = -70;       % resting potential (mV)
Eex = 0;           % excitatory reversal potential (mV)
Ein = -70;         % inhibitory reversal potential (mV)
Vth = -54;         % spike threshold (mV)
Vreset = -60;      % reset potential (mV)
% synapse parameters:
Nex = 1000;        % number of excitatory inputs
Nin = 200;         % number of inhibitory inputs
tauex = 5/1000;    % excitatory conductance time constant (s)
tauin = 5/1000;    % inhibitory conductance time constant (s)
gmaxin = 0.05;     % inhibitory peak conductance
gmaxex = 0.015;    % maximum excitatory peak conductance
% STDP parameters:
Ap = 0.005;        % potentiation amplitude A+
An = Ap*1.05;      % depression amplitude A-
taup = 20/1000;    % potentiation time constant (s)
taun = 20/1000;    % depression time constant (s)
% simulation parameters:
dt = 0.1/1000;     % time step (s)
T = 200;           % total simulated time (s)
t = 0:dt:T;
% input firing rates:
Fex = randi([10 30], 1, Nex);  % excitatory input rates, 10-30 Hz
Fin = 10*ones(1, Nin);         % inhibitory input rates, 10 Hz
%% simulation:
V = zeros(length(t), 1);   % membrane potential trace
M = zeros(length(t), 1);   % postsynaptic (depression) trace, M <= 0
gex = zeros(length(t), 1); % total excitatory conductance
gin = zeros(length(t), 1); % total inhibitory conductance
% keep only the current values of the per-synapse variables; storing their
% full history (length(t)-by-Nex) would need tens of gigabytes of memory:
P = zeros(1, Nex);         % presynaptic (potentiation) traces
ga = gmaxex*ones(1, Nex);  % peak conductances of the excitatory synapses
V(1) = Vreset;
disp('Now simulating LIF neuron ...');
tic;
for n = 1:length(t)-1
    % WRITE YOUR CODE HERE:
end
toc;
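For reference, one possible body for the update loop, a sketch following the update equations on the two preceding slides (the Poisson-input implementation and the update ordering are choices, not the unique solution):

% One possible loop body, replacing "WRITE YOUR CODE HERE":
for n = 1:length(t)-1
    preex = rand(1, Nex) < Fex*dt;   % excitatory inputs spiking in this bin
    prein = rand(1, Nin) < Fin*dt;   % inhibitory inputs spiking in this bin
    % exponential decay of traces and conductances:
    M(n+1) = M(n) - dt/taun*M(n);
    P = P - dt/taup*P;
    gex(n+1) = gex(n) - dt/tauex*gex(n);
    gin(n+1) = gin(n) - dt/tauin*gin(n);
    % presynaptic events: potentiation traces, depression of ga, conductances:
    P(preex) = P(preex) + Ap;
    ga(preex) = max(ga(preex) + M(n+1)*gmaxex, 0);
    gex(n+1) = gex(n+1) + sum(ga(preex));
    gin(n+1) = gin(n+1) + gmaxin*sum(prein);
    % membrane integration:
    V(n+1) = V(n) + dt/taum*((Vrest - V(n)) ...
        + gex(n)*(Eex - V(n)) + gin(n)*(Ein - V(n)));
    % postsynaptic spike: reset, depression trace, potentiation of ga:
    if V(n+1) >= Vth
        V(n+1) = Vreset;
        M(n+1) = M(n+1) - An;
        ga = min(ga + P*gmaxex, gmaxex);
    end
end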