This document summarizes key concepts from a lecture on neural networks and neuroscience:
- Single-layer neural networks like perceptrons can only learn linearly separable patterns, while multi-layer networks can approximate any function. Backpropagation enables training multi-layer networks.
- Recurrent neural networks incorporate memory through recurrent connections between units. Backpropagation through time extends backpropagation to train recurrent networks.
- The cerebellum functions similarly to a perceptron for motor learning and control. Its feedforward circuitry from mossy fibers to Purkinje cells maps to the layers of a perceptron.
JAIST Summer School 2016 "Theories for Understanding the Brain", Lecture 04: Neural Networks and Neuroscience
1. SS2016 Modern Neural Computation
Lecture 5: Neural Networks and Neuroscience
Hirokazu Tanaka
School of Information Science
Japan Advanced Institute of Science and Technology
2. Supervised learning as functional approximation.
In this lecture we will learn:
• Single-layer neural networks
Perceptron and the perceptron theorem.
Cerebellum as a perceptron.
• Multi-layer feedforward neural networks
Universal function approximation, back-propagation algorithms
• Recurrent neural networks
Back-propagation-through-time (BPTT) algorithms
• Tempotron
Spike-based perceptron
3. Gradient-descent learning for optimization.
• Learning is formulated as minimization of a cost function over the network weights.
• Gradient descent updates the weights iteratively in the direction of the negative gradient of the cost.
4. Cost function: classification and regression.
• Classification problem: to output discrete labels.
For a binary classification (i.e., 0 or 1), a cross-entropy is
often used.
• Regression problem: to output continuous values.
Sum of squared errors is often used.
$\hat{y}_i$: output of the network, $y_i$: desired output.

Cross-entropy (classification):
$$ -\log \prod_{i:\,\text{samples}} \hat{y}_i^{\,y_i}\,(1-\hat{y}_i)^{1-y_i} = -\sum_{i:\,\text{samples}} \left[\, y_i \log \hat{y}_i + (1-y_i)\log(1-\hat{y}_i) \,\right] $$

Sum of squared errors (regression):
$$ \sum_{i:\,\text{samples}} \left( y_i - \hat{y}_i \right)^2 $$
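As a quick numerical illustration (variable names and values here are purely illustrative, not from the slides), both costs are one-liners in MATLAB:

% Illustrative sketch: evaluating the two cost functions above.
y    = [1 0 1 1];          % desired outputs (binary labels)
yhat = [0.9 0.2 0.7 0.6];  % network outputs in (0,1)
crossEntropy = -sum(y.*log(yhat) + (1-y).*log(1-yhat));   % classification cost
sse = sum((y - yhat).^2);                                 % regression cost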
5. Perceptron: single-layer neural network.
• Assume a single-layer neural network with an input layer
composed of N units and an output layer composed of
one unit.
• Input units are specified by
$$ \mathbf{x} = (x_1, \ldots, x_N)^{\mathrm{T}} $$
and the output unit is determined by
$$ y = f\!\left( w_0 + \sum_{i=1}^{N} w_i x_i \right) = f\!\left( w_0 + \mathbf{w}^{\mathrm{T}}\mathbf{x} \right), \qquad
f(u) = \begin{cases} 1 & \text{if } u \ge 0 \\ 0 & \text{if } u < 0 \end{cases} $$
7. Perceptron: single-layer neural network.
• [Remark] Instead of using
$$ \mathbf{x} = (x_1, \ldots, x_N)^{\mathrm{T}}, $$
an augmented input vector and weight vector
$$ \mathbf{x} = (1, x_1, \ldots, x_N)^{\mathrm{T}}, \qquad \mathbf{w} = (w_0, w_1, \ldots, w_N)^{\mathrm{T}} $$
are often used. Then
$$ y = f\!\left( w_0 + \mathbf{w}^{\mathrm{T}}\mathbf{x} \right) = f\!\left( \mathbf{w}^{\mathrm{T}}\mathbf{x} \right). $$
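A minimal sketch of computing the perceptron output with the augmented vectors (the numbers are arbitrary illustrations):

% Illustrative perceptron output with an augmented input vector.
x = [1; 0.5; -1.2];        % augmented input: the leading 1 absorbs the bias w0
w = [0.1; 0.8; -0.3];      % augmented weights (w0, w1, ..., wN)
y = double(w'*x >= 0);     % step activation: 1 if w'x >= 0, otherwise 0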
8. Perceptron Learning Algorithm.
• Given a training set:
$$ \left\{ (\mathbf{x}_1, d_1), (\mathbf{x}_2, d_2), \ldots, (\mathbf{x}_P, d_P) \right\} $$
• Perceptron learning rule:
$$ \Delta\mathbf{w} = \eta\,(d_i - y_i)\,\mathbf{x}_i $$
% Batch perceptron learning (assumes augmented inputs X: (N+1)-by-P and labels d: P-by-1, taken as +/-1 since sign is used).
w = randn(size(X,1),1); w = w/norm(w);   % random initial weight vector
err = 1; count = 0;
while err>1e-4 && count<10
    y = sign(w'*X)';                     % current outputs for all P patterns
    wnew = w + X*(d-y)/P;                % batch perceptron update with learning rate 1/P
    wnew = wnew/norm(wnew);              % keep the weight vector normalized
    count = count+1;
    err = norm(w-wnew)/norm(w);          % relative change in the weights
    w = wnew;
end
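To actually run the loop above one needs the data it assumes; a minimal illustrative setup (the names X, d, P match the loop, and the labels are generated by a random "teacher" so they are linearly separable by construction):

% Illustrative training data for the learning loop above.
N = 20; P = 40;                  % input dimension and number of patterns
X = [ones(1,P); randn(N,P)];     % augmented random inputs, one pattern per column
wTeacher = randn(N+1,1);         % a random "teacher" perceptron
d = sign(wTeacher'*X)';          % +/-1 labels, separable by construction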
11. Perceptron’s capacity: Cover’s Counting Theorem.
• Question: Suppose that there are P vectors $\mathbf{x}_1, \ldots, \mathbf{x}_P$, $\mathbf{x}_i \in \mathbb{R}^N$, in N-dimensional Euclidean space. There are $2^P$ possible assignments of the P vectors into two classes. How many of them are linearly separable?
[Remark] The vectors are assumed to be in general position.
• Answer: Cover’s Counting Theorem.
$$ C(P, N) = 2 \sum_{k=0}^{N-1} \binom{P-1}{k} $$
12. Perceptron’s capacity: Cover’s Counting Theorem.
• Cover’s Counting Theorem:
$$ C(P, N) = 2 \sum_{k=0}^{N-1} \binom{P-1}{k} $$
• Case $P \le N$: $\; C(P, N) = 2^P$ (every dichotomy is linearly separable)
• Case $P = 2N$: $\; C(P, N) = 2^{P-1}$ (exactly half of the dichotomies are separable)
• Case $P \gg N$: $\; C(P, N) \approx A\,P^{N-1}$, i.e. only polynomial in P, so the fraction $C(P,N)/2^P \to 0$
Cover (1965) IEEE Trans. Electronic Computers; Sompolinsky (2013) MIT lecture note
13. Perceptron’s capacity: Cover’s Counting Theorem.
• Case for large P (normal approximation to the binomial sum):
$$ \frac{C(P, N)}{2^P} \approx \frac{1}{2}\left[ 1 + \mathrm{erf}\!\left( \frac{2N - P}{\sqrt{2P}} \right) \right] $$
Orhan (2014) “Cover’s Function Counting Theorem”
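A small numerical sketch (not from the slides) comparing the exact fraction C(P,N)/2^P from Cover's formula with the error-function approximation above:

% Illustrative check of Cover's counting theorem against the erf approximation.
N = 50;
P = 1:4*N;
fracExact = zeros(size(P));
for idx = 1:numel(P)
    k = 0:min(N-1, P(idx)-1);
    % C(P,N)/2^P = 2*sum_k nchoosek(P-1,k)/2^P, evaluated via gammaln for stability
    fracExact(idx) = sum(exp(gammaln(P(idx)) - gammaln(k+1) - gammaln(P(idx)-k) ...
                             - (P(idx)-1)*log(2)));
end
fracApprox = 0.5*(1 + erf((2*N - P)./sqrt(2*P)));   % large-P approximation
plot(P/N, fracExact, 'b', P/N, fracApprox, 'r--');
xlabel('P/N'); ylabel('C(P,N)/2^P');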
14. Cerebellum as a Perceptron.
Llinas (1974) Scientific American
15. Cerebellum as a Perceptron.
• Cerebellar cortex has a feedforward structure:
mossy fibers -> granule cells -> parallel fibers -> Purkinje cells
Ito (1984) “Cerebellum and Neural Control”
16. Cerebellum as a Perceptron (or its extensions)
• Perceptron model
Marr (1969): Long-term potentiation (LTP) learning.
Albus (1971): Long-term depression (LTD) learning.
• Adaptive filter theory
Fujita (1982): Reverberation among granule and Golgi cells for generating temporal templates.
• Liquid-state machine model
Yamazaki and Tanaka (2007).
17. Perceptron: a new perspective.
• Evaluation of memory capacity of a Purkinje cell using
perceptron methods (the Gardner limit).
Brunel, N., Hakim, V., Isope, P., Nadal, J. P., & Barbour, B. (2004). Optimal
information storage and the distribution of synaptic weights: perceptron versus
Purkinje cell. Neuron, 43(5), 745-757.
• Estimation of dimensions of neural representations
during visual memory task in the prefrontal cortex using
perceptron methods (Cover’s counting theorem).
Rigotti, M., Barak, O., Warden, M. R., Wang, X. J., Daw, N. D., Miller, E. K., & Fusi,
S. (2013). The importance of mixed selectivity in complex cognitive tasks.
Nature, 497(7451), 585-590.
18. Limitation of Perceptron.
• Only linearly separable input-output sets can be learned.
• Non-linear sets, even a simple one like XOR, CANNOT be
learned.
19. Multilayer neural network: feedforward design
• Feedforward network: a unit in layer n receives inputs from layer n-1 and projects to layer n+1.
• Notation: $x_i^{(n)}$ denotes the activity of unit i in layer n, and $w_{ij}^{(n-1)}$ the weight from unit j in layer n-1 to unit i in layer n (layers 1, ..., n-1, n, ..., N).
20. Multilayer neural network: feedforward design
• Feedforward network: a unit in layer n receives inputs from layer n-1 and projects to layer n+1.
21. Multilayer neural network: forward propagation.
A feedforward multilayer neural network propagates its activities from one layer to the next in one direction:
$$ x_i^{(n)} = f\!\left( u_i^{(n)} \right) = f\!\left( \sum_{j} w_{ij}^{(n-1)} x_j^{(n-1)} \right) $$
Inputs to neurons in layer n are a weighted sum of the activities of neurons in layer n-1:
$$ u_i^{(n)} = \sum_{j} w_{ij}^{(n-1)} x_j^{(n-1)} $$
The function f is called an activation function; for the logistic sigmoid its derivative is easy to compute:
$$ f(u) = \frac{1}{1 + e^{-u}}, \qquad
f'(u) = \frac{e^{-u}}{\left(1 + e^{-u}\right)^2} = f(u)\left(1 - f(u)\right) $$
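A minimal sketch of this forward pass in MATLAB (layer sizes and random weights are illustrative):

% Illustrative forward pass through a feedforward sigmoid network.
sizes = [4 6 3 1];                          % units per layer (layer 1 = input)
W = cell(numel(sizes)-1, 1);
for n = 1:numel(sizes)-1
    W{n} = 0.5*randn(sizes(n+1), sizes(n)); % weights w^(n) from layer n to layer n+1
end
f = @(u) 1./(1 + exp(-u));                  % logistic activation, f'(u) = f(u).*(1-f(u))
x = cell(numel(sizes), 1);
x{1} = randn(sizes(1), 1);                  % input pattern
for n = 2:numel(sizes)
    u = W{n-1}*x{n-1};                      % u^(n) = sum_j w_ij^(n-1) x_j^(n-1)
    x{n} = f(u);                            % x^(n) = f(u^(n))
end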
22. Multilayer neural network: error backpropagation
• Define a cost function as a squared sum of errors in the output units:
$$ E = \frac{1}{2} \sum_{i} \left( x_i^{(N)} - z_i \right)^2 = \frac{1}{2} \sum_{i} \left( \Delta_i^{(N)} \right)^2 $$
Gradients of the cost function with respect to the weights are obtained by propagating the errors backward:
$$ \Delta_j^{(n-1)} = \sum_{i} \Delta_i^{(n)}\, x_i^{(n)} \left( 1 - x_i^{(n)} \right) w_{ij}^{(n-1)} $$
The neurons in the output layer have explicit supervised errors (the difference between the network outputs and the desired outputs). How, then, to compute the supervising signals for neurons in intermediate layers?
23. Multilayer neural network: error backpropagation
1. Compute activations of units in all layers: $\{x_i^{(1)}\}, \ldots, \{x_i^{(n)}\}, \ldots, \{x_i^{(N)}\}$.
2. Compute errors in the output units, $\{\Delta_i^{(N)}\}$.
3. “Back-propagate” the errors to lower layers using
$$ \Delta_j^{(n-1)} = \sum_{i} \Delta_i^{(n)}\, x_i^{(n)} \left( 1 - x_i^{(n)} \right) w_{ij}^{(n-1)} $$
4. Update the weights:
$$ \Delta w_{ij}^{(n)} = -\eta\, \Delta_i^{(n+1)}\, x_i^{(n+1)} \left( 1 - x_i^{(n+1)} \right) x_j^{(n)} $$
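A minimal sketch of these four steps for a small 2-2-1 sigmoid network trained on XOR, the problem a single perceptron cannot learn (all names, sizes, and the learning rate are illustrative):

% Illustrative back-propagation for a 2-2-1 sigmoid network on XOR.
X = [0 0 1 1; 0 1 0 1];  Z = [0 1 1 0];       % inputs (columns) and targets
f = @(u) 1./(1 + exp(-u));
W1 = randn(2,3); W2 = randn(1,3);             % weights; the last column is the bias
eta = 0.5;
for it = 1:20000
    for p = 1:4
        x1 = [X(:,p); 1];                     % 1. forward pass (bias unit appended)
        x2 = [f(W1*x1); 1];
        x3 = f(W2*x2);
        d3 = x3 - Z(p);                       % 2. output error Delta^(N)
        d2 = W2'*(d3.*x3.*(1-x3));            % 3. back-propagate the error
        d2 = d2(1:2);                         %    (drop the bias unit's delta)
        W2 = W2 - eta*(d3.*x3.*(1-x3))*x2';   % 4. update the weights
        x2h = x2(1:2);
        W1 = W1 - eta*(d2.*x2h.*(1-x2h))*x1';
    end
end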
24. Multilayer neural network as a universal machine for functional approximation.
A multilayer neural network is in principle able to approximate any functional relationship between inputs and outputs at any desired accuracy (Funahashi, 1989).
Intuition: A sum or a difference of two sigmoid functions is a “bump-like” function, and a sufficiently large number of bump functions can approximate any function.
25. NETtalk: A parallel network that learns to read aloud.
Sejnowski & Rosenberg (1987) Complex Systems
A feedforward three-layer neural network with delay lines.
26. NETtalk: A parallel network that learns to read aloud.
Sejnowski & Rosenberg (1987) Complex Systems; https://www.youtube.com/watch?v=gakJlr3GecE
A feedforward three-layer neural network with delay lines.
27. NETtalk: A parallel network that learns to read aloud.
Sejnowski & Rosenberg (1987) Complex Systems
Activations of hidden units for the same sound but different inputs
28. Hinton diagrams: characterizing and visualizing connections to and from hidden units.
Hinton (1992) Sci Am
29. Autonomous driving learning by backpropagation.
Pomerleau (1991) Neural Comput
31. Gradient vanishing problem: why is training a multi-layer
neural network so difficult?
Hochreiter et al. (1991)
• The back-propagation algorithm works only for neural networks of
three or four layers.
• Training neural networks with many hidden layers – called “deep
neural networks”- is notoriously difficult.
Back-propagating the error through successive layers multiplies it by a factor $x(1-x)$ at every layer:
$$ \Delta_j^{(N-1)} = \sum_{i} \Delta_i^{(N)}\, x_i^{(N)} \left( 1 - x_i^{(N)} \right) w_{ij}^{(N-1)} $$
$$ \Delta_k^{(N-2)} = \sum_{j} \Delta_j^{(N-1)}\, x_j^{(N-1)} \left( 1 - x_j^{(N-1)} \right) w_{jk}^{(N-2)}
 = \sum_{j} \sum_{i} \Delta_i^{(N)}\, x_i^{(N)} \left( 1 - x_i^{(N)} \right) w_{ij}^{(N-1)}\, x_j^{(N-1)} \left( 1 - x_j^{(N-1)} \right) w_{jk}^{(N-2)} $$
$$ \Delta^{(n)} \sim x^{(n+1)} \left( 1 - x^{(n+1)} \right) \times \cdots \times x^{(N-1)} \left( 1 - x^{(N-1)} \right) \times x^{(N)} \left( 1 - x^{(N)} \right) \times \Delta^{(N)} $$
Because each sigmoid factor $x(1-x)$ is at most 1/4, the back-propagated error shrinks geometrically with the number of layers: the gradient “vanishes” in deep networks.
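A quick illustration of this shrinkage: even at the most favorable activation value x = 0.5, each factor x(1-x) equals 1/4, so the product decays exponentially with depth (purely illustrative):

% Illustrative decay of the back-propagated error factor with depth.
x = 0.5*ones(1,20);                 % sigmoid activations at their most favorable value
factor = cumprod(x.*(1-x));         % product of x(1-x) over successive layers
semilogy(factor);                   % decays as (1/4)^depth even in the best case
xlabel('number of layers back-propagated through'); ylabel('error scaling factor');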
32. Multilayer neural network: recurrent connections
• A feedforward neural network can represent an
instantaneous relationship between inputs and outputs
- memoryless: it depends on current inputs but not on
previous inputs.
• In order to describe a history, a neural network should
have its own dynamics.
• One way to incorporate dynamics into a neural network
is to introduce recurrent connections between units.
33. Working memory in the parietal cortex.
• A feedforward neural network can represent an
instantaneous relationship between inputs and outputs
- memoryless: it depends on current inputs x(t) but not
on previous inputs x(t-1), x(t-2), ...
• In order to describe a history, a neural network should
have its own dynamics.
• One way to incorporate dynamics into a neural network
is to introduce recurrent connections between units.
34. Multilayer neural network: recurrent connections
Recurrent dynamics of the neural network:
$$ x_i(t+1) = f\!\left( u_i(t+1) \right) = f\!\left( \left[ \mathbf{W}\mathbf{x}(t) + \mathbf{U}\mathbf{a}(t) \right]_i \right) $$
Output readout:
$$ z_i(t) = g\!\left( \left[ \mathbf{V}\mathbf{x}(t) \right]_i \right) $$
Here $\mathbf{a}$ is the input, $\mathbf{x}$ the recurrent state, and $\mathbf{z}$ the output; $\mathbf{U}$, $\mathbf{W}$, and $\mathbf{V}$ are the input, recurrent, and readout weight matrices.
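A minimal sketch of simulating these recurrent dynamics forward in time (the dimensions, tanh units, and linear readout are assumptions of the sketch):

% Illustrative simulation of x(t+1) = f(W x(t) + U a(t)), z(t) = g(V x(t)).
Nx = 50; Na = 3; Nz = 2; T = 200;
W = 1.2*randn(Nx)/sqrt(Nx);          % recurrent weights
U = randn(Nx, Na);                   % input weights
V = randn(Nz, Nx)/sqrt(Nx);          % readout weights
f = @tanh;  g = @(u) u;              % activation and (linear) readout
a = randn(Na, T);                    % input time series
x = zeros(Nx, T); z = zeros(Nz, T);
for t = 1:T-1
    z(:,t)   = g(V*x(:,t));
    x(:,t+1) = f(W*x(:,t) + U*a(:,t));
end
z(:,T) = g(V*x(:,T));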
35. Temporal unfolding: backpropagation through time (BPTT)
Training set for a recurrent network:
Input series: $\{ \mathbf{a}_0, \mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_t, \ldots, \mathbf{a}_{T-1} \}$
Output series: $\{ \mathbf{z}_1, \mathbf{z}_2, \mathbf{z}_3, \ldots, \mathbf{z}_t, \ldots, \mathbf{z}_T \}$
Optimize the weight matrices $\mathbf{U}$, $\mathbf{W}$, $\mathbf{V}$ so as to approximate the training set.
36. Temporal unfolding: backpropagation through time (BPTT)
(Figure: the recurrent network unfolded in time from $\mathbf{a}_0, \mathbf{x}_1, \mathbf{z}_1$ up to $\mathbf{x}_T, \mathbf{z}_T$; the same weight matrices $\mathbf{U}$, $\mathbf{W}$, $\mathbf{V}$ are reused at every time step.)
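A hedged sketch of back-propagation through time for this unfolded network, accumulating gradients for the shared matrices U, W, V over all time steps (tanh units, a linear readout, and a squared-error cost are assumed; all names and sizes are illustrative):

% Illustrative BPTT gradients for x_t = tanh(W*x_{t-1} + U*a_{t-1}), z_t = V*x_t.
Nx = 30; Na = 2; Nz = 1; T = 50;
W = randn(Nx)/sqrt(Nx); U = randn(Nx,Na); V = randn(Nz,Nx)/sqrt(Nx);
a = randn(Na,T); zstar = sin((1:T)/5);            % inputs and target output series
x = zeros(Nx,T+1); z = zeros(Nz,T);               % x(:,1) holds the initial state x_0 = 0
for t = 1:T                                       % forward pass through the unfolded net
    x(:,t+1) = tanh(W*x(:,t) + U*a(:,t));
    z(:,t)   = V*x(:,t+1);
end
gradW = zeros(size(W)); gradU = zeros(size(U)); gradV = zeros(size(V));
dx = zeros(Nx,1);                                 % error arriving at x_t from the future
for t = T:-1:1                                    % backward pass (through time)
    e = z(:,t) - zstar(:,t);                      % readout error at time t
    gradV = gradV + e*x(:,t+1)';
    dx = dx + V'*e;                               % total error at x_t
    du = dx.*(1 - x(:,t+1).^2);                   % through the tanh nonlinearity
    gradW = gradW + du*x(:,t)';
    gradU = gradU + du*a(:,t)';
    dx = W'*du;                                   % pass the error one step back in time
end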
42. Tempotron: Spike-based perceptron.
Consider five neurons, each emitting one spike but at a different timing.
Rate coding: Information is coded in the numbers of spikes in a given period.
$$ (r_1, r_2, r_3, r_4, r_5) = (1, 1, 1, 1, 1) $$
Temporal coding: Information is coded in the temporal patterns of spiking.
45. Tempotron: Spike-based perceptron.
Consider a classification problem of two spike patterns. At the decision time, the summed postsynaptic potentials for the two patterns must satisfy
$$ w_1 \left( e^{-3\Delta t} + e^{-\Delta t} \right) + w_2 \left( e^{-2\Delta t} + 1 \right) > \theta, \qquad
   w_1 \left( e^{-2\Delta t} + 1 \right) + w_2 \left( e^{-3\Delta t} + e^{-\Delta t} \right) < \theta. $$
If a vector notation is introduced,
$$ \mathbf{w} = \begin{pmatrix} w_1 \\ w_2 \end{pmatrix}, \qquad
   \mathbf{x}^{(1)} = \begin{pmatrix} e^{-3\Delta t} + e^{-\Delta t} \\ e^{-2\Delta t} + 1 \end{pmatrix}, \qquad
   \mathbf{x}^{(2)} = \begin{pmatrix} e^{-2\Delta t} + 1 \\ e^{-3\Delta t} + e^{-\Delta t} \end{pmatrix}, $$
this classification problem is reduced to a perceptron problem:
$$ \mathbf{w}^{\mathrm{T}} \mathbf{x}^{(1)} > \theta, \qquad \mathbf{w}^{\mathrm{T}} \mathbf{x}^{(2)} < \theta. $$
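A minimal numerical check of this reduction (the values of Δt, w, and θ are arbitrary illustrations):

% Illustrative check that the two spike patterns reduce to a perceptron problem.
dt = 1; theta = 1;
x1 = [exp(-3*dt) + exp(-dt);  exp(-2*dt) + 1];   % pattern 1 summarized as a vector
x2 = [exp(-2*dt) + 1;  exp(-3*dt) + exp(-dt)];   % pattern 2 (the two neurons swapped)
w  = [0.3; 1.0];                                 % candidate synaptic weights
decisions = [w'*x1 > theta, w'*x2 > theta]       % should be [1 0] for correct classification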
47. Learning a tempotron: intuition.
Suppose the second pattern was misclassified, i.e. both patterns crossed the threshold:
$$ w_1 \left( e^{-3\Delta t} + e^{-\Delta t} \right) + w_2 \left( e^{-2\Delta t} + 1 \right) > \theta, \qquad
   w_1 \left( e^{-2\Delta t} + 1 \right) + w_2 \left( e^{-3\Delta t} + e^{-\Delta t} \right) > \theta. $$
What was wrong? The last spike of neuron #1 (the red one) is most responsible for the error, so the synaptic strength of this neuron should be reduced:
$$ \Delta w_1 = -\lambda $$
48. Learning a tempotron: intuition.
Suppose instead the first pattern was misclassified, i.e. neither pattern crossed the threshold:
$$ w_1 \left( e^{-3\Delta t} + e^{-\Delta t} \right) + w_2 \left( e^{-2\Delta t} + 1 \right) < \theta, \qquad
   w_1 \left( e^{-2\Delta t} + 1 \right) + w_2 \left( e^{-3\Delta t} + e^{-\Delta t} \right) < \theta. $$
What was wrong? The last spike of neuron #2 (the red one) is most responsible for the error, so the synaptic strength of this neuron should be potentiated:
$$ \Delta w_2 = +\lambda $$
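A hedged sketch of this error-driven update, reusing w, x1, x2, and theta from the numerical check above; picking the synapse with the largest contribution is a crude stand-in for "the spike most responsible at the decision time" (the full rule is in Gütig & Sompolinsky, 2006):

% Illustrative tempotron-style update for the two-pattern example above.
lambda = 0.05;
v1 = w'*x1;  v2 = w'*x2;                 % peak potentials for the (+) and (-) patterns
if v1 <= theta                           % (+) pattern missed the threshold:
    [~, k] = max(x1);                    % potentiate the synapse contributing most
    w(k) = w(k) + lambda;
end
if v2 > theta                            % (-) pattern wrongly crossed the threshold:
    [~, k] = max(x2);                    % depress the most responsible synapse
    w(k) = w(k) - lambda;
end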
49. Exercise: Capacity of perceptron.
• Generate a set of random vectors.
• Write a code for the Perceptron learning algorithm.
• By randomly relabeling, count how many of them are
linearly separable.
Rigotti, M., Barak, O., Warden, M. R., Wang, X. J., Daw, N. D., Miller, E. K., & Fusi, S.
(2013). The importance of mixed selectivity in complex cognitive tasks. Nature,
497(7451), 585-590.
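One possible (and deliberately crude) sketch of this exercise: draw random points, relabel them repeatedly, and count how often online perceptron learning converges within a fixed budget (all parameters are illustrative):

% Illustrative empirical estimate of the fraction of linearly separable dichotomies.
N = 10; P = 20; nTrials = 200; nSeparable = 0;
X = randn(N, P);                              % P random points in general position
for trial = 1:nTrials
    d = sign(randn(P,1));                     % a random labeling (dichotomy)
    w = zeros(N,1); separable = false;
    for epoch = 1:1000                        % online perceptron learning
        y = sign(X'*w);  y(y==0) = 1;
        mis = find(y ~= d);
        if isempty(mis), separable = true; break; end
        k = mis(1);
        w = w + d(k)*X(:,k);                  % perceptron update on a misclassified point
    end
    nSeparable = nSeparable + separable;
end
fractionSeparable = nSeparable/nTrials        % compare with C(P,N)/2^P from slide 11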
50. Exercise: Training of recurrent neural networks.
Goal: Investigate the effects of chaos and feedback in a recurrent network.
Recurrent dynamics without feedback:
$$ \mathbf{x}_{n+1} = \mathbf{x}_n + \Delta t \left( -\mathbf{x}_n + \mathbf{M}\mathbf{r}_n \right), \qquad
   \mathbf{r}_n = \tanh \mathbf{x}_n, \qquad
   z_n = \mathbf{w}_n^{\mathrm{T}} \tanh \mathbf{x}_n = \mathbf{w}_n^{\mathrm{T}} \mathbf{r}_n, \qquad
   e_n = z_n - f_n $$
Update of covariance matrix ($\mathbf{P}_0 = \mathbf{I}/\alpha$):
$$ \mathbf{P}_{n+1} = \mathbf{P}_n - \frac{ \mathbf{P}_n \mathbf{r}_n \mathbf{r}_n^{\mathrm{T}} \mathbf{P}_n }{ 1 + \mathbf{r}_n^{\mathrm{T}} \mathbf{P}_n \mathbf{r}_n } $$
Update of readout weight vector:
$$ \mathbf{w}_{n+1} = \mathbf{w}_n - e_n \mathbf{P}_n \mathbf{r}_n $$
force_internal_all2all.m
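A hedged sketch of one training pass with these update equations; this is not the provided force_internal_all2all.m script, and the network size, gain, and target function are illustrative:

% Illustrative recursive-least-squares training of the readout w (no feedback).
Nx = 200; T = 2000; dt = 0.1; alpha = 1.0; g = 1.5;
M = g*randn(Nx)/sqrt(Nx);                 % random recurrent connectivity with gain g
x = 0.5*randn(Nx,1); w = zeros(Nx,1);
P = eye(Nx)/alpha;                        % P_0 = I/alpha
f = sin(2*pi*(1:T)*dt/5);                 % target output time series
z = zeros(1,T);
for n = 1:T
    r = tanh(x);
    z(n) = w'*r;
    e = z(n) - f(n);                      % readout error e_n = z_n - f_n
    Pr = P*r;
    w = w - e*Pr;                         % weight update w_{n+1} = w_n - e_n P_n r_n
    P = P - (Pr*Pr')/(1 + r'*Pr);         % covariance (RLS) update
    x = x + dt*(-x + M*r);                % recurrent dynamics without feedback
end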
51. Exercise: Training of recurrent neural networks.
Goal: Investigate the effects of chaos and feedback in a recurrent network.
Recurrent dynamics with feedback:
$$ \mathbf{x}_{n+1} = \mathbf{x}_n + \Delta t \left( -\mathbf{x}_n + \mathbf{M}\mathbf{r}_n + \mathbf{w}^{\mathrm{f}} z_n \right), \qquad
   \mathbf{r}_n = \tanh \mathbf{x}_n, \qquad
   z_n = \mathbf{w}_n^{\mathrm{T}} \mathbf{r}_n, \qquad
   e_n = z_n - f_n $$
Update of covariance matrix ($\mathbf{P}_0 = \mathbf{I}/\alpha$):
$$ \mathbf{P}_{n+1} = \mathbf{P}_n - \frac{ \mathbf{P}_n \mathbf{r}_n \mathbf{r}_n^{\mathrm{T}} \mathbf{P}_n }{ 1 + \mathbf{r}_n^{\mathrm{T}} \mathbf{P}_n \mathbf{r}_n } $$
Update of readout weight vector:
$$ \mathbf{w}_{n+1} = \mathbf{w}_n - e_n \mathbf{P}_n \mathbf{r}_n $$
force_external_feedback_loop.m
52. Exercise: Training of recurrent neural networks.
Goal: Investigate the effects of chaos and feedback in a recurrent
network.
• Investigate the effect of output feedback. Are there any differences in the activities of the recurrent units?
• Investigate the effect of the gain parameter g. What happens if the gain parameter is smaller than 1?
• Try to approximate some other time series, such as chaotic ones. Use the Lorenz model, for example.
53. References
• Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1988). Learning representations by back-propagating errors. Cognitive modeling, 5(3), 1.
• Sejnowski, T. J., & Rosenberg, C. R. (1987). Parallel networks that learn to pronounce English text. Complex Systems, 1(1), 145-168.
• Funahashi, K. I. (1989). On the approximate realization of continuous mappings by neural networks. Neural Networks, 2(3), 183-192.
• Hochreiter, S., Bengio, Y., Frasconi, P., & Schmidhuber, J. (2001). Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press.
• Zipser, D. (1991). Recurrent network model of the neural mechanism of short-term active memory. Neural Computation, 3(2), 179-193.
• Johansson, R. S., & Birznieks, I. (2004). First spikes in ensembles of human tactile afferents code complex spatial fingertip events. Nature Neuroscience, 7(2), 170-177.
• Branco, T., Clark, B. A., & Häusser, M. (2010). Dendritic discrimination of temporal input sequences in cortical neurons. Science, 329(5999), 1671-1675.
• Gütig, R., & Sompolinsky, H. (2006). The tempotron: a neuron that learns spike timing–based decisions. Nature Neuroscience, 9(3), 420-428.
• Sussillo, D., & Abbott, L. F. (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron, 63(4), 544-557.