Workshop: Dendritic integration and
computation with active dendrites
February 8, 2018
Subutai Ahmad
sahmad@numenta.com
The Predictive Neuron: How Active Dendrites Enable
Spatiotemporal Computation In The Neocortex
1) Understand operating principles of the neocortex
- seek biologically derived theories
- test empirically and via simulation
2) Enable technology based on cortical principles
- active open source community
- intelligent machines will be based on cortical principles
Observation:
The neocortex is constantly predicting its inputs.
“the most important and also the most neglected problem of
cerebral physiology” (Lashley, 1951)
Research question:
How can networks of pyramidal neurons learn predictive models of the world?
1) How can neurons learn predictive models of extrinsic temporal sequences?
- Pyramidal neuron uses active dendrites for prediction
- A single layer network model for complex predictions
- Works on real world applications
"Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex"
Hawkins and Ahmad, Frontiers in Neural Circuits, 2016/03/30
2) How can neurons learn predictive models of sensorimotor sequences?
- Extension of sequence memory model
- Learns models of objects using a motion-based context signal
- Can predict sensory features using movement
"A Theory of How Columns in the Neocortex Learn the Structure of the World"
Hawkins, Ahmad, and Cui, Frontiers in Neural Circuits, 2017/10/25
3) Experimentally testable predictions
- Impact of NMDA spikes
- Branch specific plasticity
- Sparse correlation structure
Prediction Starts in the Neuron
Pyramidal Neuron
5K to 30K excitatory synapses
- 10% proximal
- 90% distal
Distal dendrites are pattern detectors
- 8-15 co-active, co-located synapses generate dendritic NMDA spikes
- sustained depolarization of the soma, but does not typically generate an AP
(Mel, 1992; Branco & Häusser, 2011; Schiller et al, 2000; Losonczy, 2006; Antic et al, 2010; Major et al, 2013; Spruston, 2008; Milojkovic et al, 2005, etc.)
Proximal synapses: Cause somatic spikes
Define classic receptive field of neuron
Distal synapses: Cause dendritic spikes
Put the cell into a depolarized, or “predictive” state
Depolarized neurons fire sooner, inhibiting nearby neurons.
A neuron can predict its activity in hundreds of unique contexts.
HTM Neuron Model
(Poirazi et al., 2003; Hawkins & Ahmad, 2016)
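A minimal sketch of the HTM neuron described above, assuming a set-based encoding of presynaptic cells (the class layout and names are illustrative, not Numenta's NuPIC API): each distal segment is an independent coincidence detector, and a single segment crossing the NMDA-spike threshold puts the whole cell into the predictive state without firing it.

```python
# Illustrative HTM-style predictive neuron (not the NuPIC API).
NMDA_SPIKE_THRESHOLD = 10  # within the 8-15 co-active synapse range quoted above

class HTMNeuron:
    def __init__(self, proximal_synapses, distal_segments):
        self.proximal = proximal_synapses   # set of presynaptic cell ids
        self.segments = distal_segments     # list of sets of presynaptic cell ids

    def feedforward_overlap(self, active_inputs):
        """Proximal overlap defines the classic receptive field of the neuron."""
        return len(self.proximal & active_inputs)

    def is_predictive(self, active_cells):
        """Depolarized ("predictive") if ANY distal segment detects its pattern,
        i.e. enough of its synapses see currently active cells."""
        return any(len(seg & active_cells) >= NMDA_SPIKE_THRESHOLD
                   for seg in self.segments)
```

Because each of the thousands of distal synapses belongs to one of many independent segments, one cell can recognize (and thus predict its activity in) hundreds of distinct contexts, as the previous slide states.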
A Single Layer Network Model for Sequence Memory
- Neurons in a mini-column learn the same feedforward receptive field.
- Active dendritic segments form connections to nearby cells.
- Depolarized cells fire first and inhibit other cells within the mini-column.
(Hawkins & Ahmad, 2016; Cui et al, 2016)
[Figure: with no prediction, all cells in the winning mini-columns burst (t=0, t=1); with predicted input, the predicted cells fire at t=1 and inhibit their neighbors, and the resulting activity drives the next prediction at t=2.]
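A sketch of the activation rule implied by these bullets (helper names are hypothetical): if any cell in a winning mini-column was predicted, only those cells fire and suppress the rest; with no prediction, the whole mini-column bursts.

```python
def compute_active_cells(winning_columns, predictive_cells, cells_per_column):
    """One time step of the sequence memory layer.

    winning_columns  : mini-columns selected by the feedforward input
    predictive_cells : set of (column, cell) pairs depolarized at t-1
    """
    active = set()
    for col in winning_columns:
        predicted = [(col, c) for c in range(cells_per_column)
                     if (col, c) in predictive_cells]
        if predicted:
            active.update(predicted)  # predicted cells fire first, inhibit the rest
        else:
            # unanticipated input: every cell in the mini-column bursts
            active.update((col, c) for c in range(cells_per_column))
    return active
```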
Continuous Branch Specific Learning
Synaptic changes are localized to dendritic segments (Stuart and Häusser, 2001; Losonczy et al., 2008):
1. If a cell was correctly predicted, positively reinforce the dendritic segment that caused the prediction.
2. If a cell was incorrectly predicted, slightly negatively reinforce the corresponding dendritic segment.
3. If no cell was predicted in a mini-column, reinforce the dendritic segment that best matched the previous input.
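The three rules translate into segment-local permanence updates. A minimal sketch, assuming a segment is a dict mapping presynaptic cell id to a permanence in [0, 1] (the increment/decrement values are illustrative, not the model's published parameters):

```python
PERM_INC = 0.10   # reinforcement step (illustrative value)
PERM_DEC = 0.02   # small punishment step (illustrative value)

def adapt(segment, prev_active, inc, dec):
    """Segment-local plasticity: synapses onto previously active cells move
    by `inc`, all other synapses on the same segment move by -`dec`."""
    for pre in segment:
        step = inc if pre in prev_active else -dec
        segment[pre] = min(1.0, max(0.0, segment[pre] + step))

# Rule 1: cell correctly predicted -> positively reinforce the predicting segment.
def on_correct_prediction(segment, prev_active):
    adapt(segment, prev_active, PERM_INC, PERM_DEC)

# Rule 2: cell predicted but the input never arrived -> slight negative reinforcement.
def on_false_prediction(segment, prev_active):
    adapt(segment, prev_active, -PERM_DEC, 0.0)

# Rule 3: mini-column burst (nothing predicted) -> reinforce the segment that
# best matched the previous input, so the transition is predicted next time.
def on_burst(best_matching_segment, prev_active):
    adapt(best_matching_segment, prev_active, PERM_INC, PERM_DEC)
```

Note that every update touches only one dendritic segment, which is what makes the learning branch-specific.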
High Order (Non-Markovian) Sequences
Two sequences: A-B-C-D and X-B-C-Y
Before learning: the shared elements B and C activate the same cells in both sequences, so the network cannot tell the two contexts apart.
After learning: A-B'-C'-D' vs. X-B''-C''-Y''. Same columns, but only one cell active per column, so B and C acquire context-specific representations.
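A toy illustration of why this yields high-order memory (the column indices and cell assignments are hypothetical): B in context A and B in context X occupy the same mini-columns but different cells, so downstream predictions can diverge.

```python
# Same mini-columns encode input B in both sequences, but different cells
# within them carry the context, so C' vs. C'' can be predicted correctly.
columns_B = {3, 7, 12}                      # columns active for input B
B_after_A = {(c, 0) for c in columns_B}     # cell 0 per column: "B in context A"
B_after_X = {(c, 1) for c in columns_B}     # cell 1 per column: "B in context X"

assert {c for c, _ in B_after_A} == {c for c, _ in B_after_X}  # same columns
assert B_after_A.isdisjoint(B_after_X)                         # different cells
# Segments trained on B_after_A predict C'; segments trained on B_after_X predict C''.
```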
Application To Real World Streaming Data Sources
- Accuracy is comparable to state-of-the-art ML techniques (LSTM, ARIMA, etc.)
- Continuous unsupervised learning; adapts to changes far better than other techniques
- Top benchmark score in detecting anomalies and unusual behavior
- Extremely fault tolerant (robust to 40% noise and faults)
- Multiple open source implementations (some commercial)
"Continuous online sequence learning with an unsupervised neural network model"
Cui, Ahmad and Hawkins, Neural Computation, 2016
"Unsupervised real-time anomaly detection for streaming data"
Ahmad, Lavin, Purdy and Agha, Neurocomputing, 2017
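For the anomaly detection results, the raw score at each time step is the fraction of the current input that the sequence memory failed to predict, following the prediction-error measure described in Ahmad et al., 2017. A minimal sketch over sets of active columns:

```python
def anomaly_score(predicted_columns, active_columns):
    """Raw anomaly score: fraction of currently active columns that were NOT
    predicted by the previous time step (0 = fully predicted, 1 = fully novel)."""
    if not active_columns:
        return 0.0
    return 1.0 - len(predicted_columns & active_columns) / len(active_columns)

# Example: 40 active columns, 36 of them predicted -> score 0.1
score = anomaly_score(set(range(36)), set(range(40)))
assert abs(score - 0.1) < 1e-9
```

Because the model learns continuously, the score adapts as the statistics of the stream change, which is what drives the streaming-data results below.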
NYC Taxi Demand / Machine Temperature Sensor Data
[Figure A: NYC taxi passenger counts in 30-minute windows over one week, Monday 2015-04-20 through Sunday 2015-04-26 (0-30k passengers).]
[Figures B-D: prediction error (NRMSE, MAPE, negative log-likelihood) for Shift, ARIMA, LSTM trained on 1000/3000/6000-sample windows, and TM (HTM sequence memory).]
1) How can neurons learn predictive models of temporal sequences?
- Pyramidal neuron uses active dendrites for prediction
- A single layer network model for complex predictions
- Works on real world applications
"Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex"
Hawkins and Ahmad, Frontiers in Neural Circuits, 2016/03/30
2) How can neurons learn predictive models of sensorimotor sequences?
- Extension of sequence memory model
- Learns models of objects using a motion-based context signal
- Can predict sensory features using movement
"A Theory of How Columns in the Neocortex Learn the Structure of the World"
Hawkins, Ahmad, and Cui, Frontiers in Neural Circuits, 2017/10/25
3) Experimentally testable predictions
- Impact of NMDA spikes
- Branch specific plasticity
- Sparse correlation structure
How Could Neurons Learn Predictive Models of Sensorimotor Sequences?
[Diagram: sequence memory driven by sensor input alone vs. sensorimotor sequences, where a motor-related context signal accompanies the sensor input.]
Hypothesis:
- We need to provide an "allocentric location" signal, updated through movement
- Grid cell modules, applied to objects, can compute allocentric location
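One way to make "updated through movement" concrete: each grid cell module represents location as a phase that wraps at the module's period, and movement shifts every phase by the displacement (path integration). A minimal sketch, where the module periods and 2D vector arithmetic are illustrative assumptions rather than the model's actual parameters:

```python
import numpy as np

# Illustrative path integration with grid-cell-like modules.
MODULE_PERIODS = [20.0, 31.0, 47.0]   # hypothetical module periods (mm)

def move(phases, displacement):
    """Shift each module's 2D phase by the sensor displacement, wrapping at
    that module's period. The combined phases form the location code."""
    return [(p + displacement) % s for p, s in zip(phases, MODULE_PERIODS)]

phases = [np.zeros(2) for _ in MODULE_PERIODS]  # location code at the start
phases = move(phases, np.array([5.0, -3.0]))    # sensor moves 5mm right, 3mm down
# Because the periods differ, the tuple of phases is (nearly) unique over a
# large range of locations, giving an allocentric location signal per object.
```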
Two Layer Model of Sensorimotor Inference
[Diagram: an input layer (Seq Mem) represents "feature @ location", which changes with each movement; an output layer (Pooling) represents the object and is stable over movement of the sensor. Inputs: the sensed feature plus the allocentric location.]
- Using allocentric location, a cellular layer can accurately predict its input as the sensor moves.
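A sketch of the inference loop these boxes describe (all names are hypothetical): the input layer forms a feature-at-location code on each sensation, and the output layer recognizes the object by narrowing the set of objects consistent with every sensation so far.

```python
def infer_object(model, sensations):
    """model: dict object_name -> set of (feature, location) pairs it contains.
    sensations: iterable of (feature, allocentric_location) from the sensor.
    Returns the set of objects consistent with everything sensed so far."""
    candidates = set(model)                   # output layer: all objects possible
    for feature, location in sensations:      # input layer: feature @ location
        consistent = {obj for obj in candidates
                      if (feature, location) in model[obj]}
        candidates = consistent or candidates # ignore inconsistent input (noise)
    return candidates

# Usage: two objects sharing a feature are disambiguated by WHERE it occurs.
model = {"mug":  {("rim", (0, 5)), ("handle", (4, 2))},
         "bowl": {("rim", (0, 5)), ("curve", (3, 0))}}
print(infer_object(model, [("rim", (0, 5))]))                       # {'mug', 'bowl'}
print(infer_object(model, [("rim", (0, 5)), ("handle", (4, 2))]))   # {'mug'}
```

This also matches the convergence results below: more touches (or more fingers) shrink the candidate set faster.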
Yale-CMU-Berkeley (YCB) Object Benchmark (Calli et al, 2017)
- 80 objects designed for robotic grasping tasks
- Includes high-resolution 3D CAD files
Simulation using the YCB Object Benchmark
- We created a virtual hand using the Unity game engine
- Curvature-based sensor on each fingertip
- 4096 neurons per layer per column
- 98.7% recall accuracy (77/78 uniquely classified)
- Convergence time depends on the object, the sequence of sensations, and the number of fingers.
[Figures: pairwise confusion between objects and convergence, one finger, after 1, 2, 6, and 10 touches.]
1) How can neurons learn predictive models of temporal sequences?
- Pyramidal neuron uses active dendrites for prediction
- A single layer network model for complex predictions
- Works on real world applications
"Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in the Neocortex"
Hawkins and Ahmad, Frontiers in Neural Circuits, 2016/03/30
2) How can neurons learn predictive models of sensorimotor sequences?
- Extension of sequence memory model
- Learns models of objects using a motion-based context signal
- Can predict sensory features using movement
"A Theory of How Columns in the Neocortex Learn the Structure of the World"
Hawkins, Ahmad, and Cui, Frontiers in Neural Circuits, 2017/10/25
3) Experimentally testable predictions
- Impact of NMDA spikes
- Branch specific plasticity
- Sparse correlation structure
Properties And Experimentally Testable Predictions
1) Impact of NMDA spikes:
Dendritic NMDA spikes cause cells to fire faster than they would otherwise.
Fast local inhibitory networks (e.g. minicolumns) inhibit cells that don't fire early.
Sparser activations during a predictable sensory stream.
For predictable natural stimuli, dendritic spikes will be more frequent than APs.
(Vinje & Gallant, 2002; Smith et al, 2013; Wilmes et al, 2016; Moore et al, 2017)
2) Branch specific plasticity:
Strong LTP in a dendritic branch when an NMDA spike is followed by a back-propagating action potential (bAP).
Weak LTP (without an NMDA spike) if a synapse cluster becomes active followed by a bAP.
Weak LTD when an NMDA spike is not followed by an action potential/bAP.
(Holthoff et al, 2004; Losonczy et al, 2008; Yang et al, 2014; Cichon & Gan, 2015)
3) Correlation structure:
Low pair-wise correlations between cells but significant high-order correlations.
High-order assemblies correlated with specific points in a predictable sequence.
Unanticipated inputs lead to a burst of activity, correlated within minicolumns.
Activity during predicted inputs will be a subset of activity during unpredicted inputs.
Neighboring mini-columns will be uncorrelated.
(Ecker et al, 2010; Smith & Häusser, 2010; Schneidman et al, 2006; Miller et al, 2014; Homann et al, 2017)
Correlation Structure With Natural Sequences
20 presentations of a 30-second natural movie.
[Figure: simultaneous recordings from V1 and AL; raster plots of ~350 neurons on trial 1 vs. trial 20; activation density per trial; number of unique cell assemblies by assembly order (3-6) against shuffled and Poisson controls; subset index; probability of observing repeated cell assemblies vs. time jitter.]
Data: Spencer L. Smith and YiYi Yu (Stirman et al, 2016)
Sparser Activity With Repeated Presentations
[Figure: V1 and AL raster plots on trial 1 vs. trial 20; activation density decreases across the 20 presentations of the movie.]
Similar to (Vinje & Gallant, 2002).
Emergence of High Order Cell Assemblies
[Figure: number of unique cell assemblies of order 3-6 in V1 and AL, compared with shuffled and Poisson controls; probability of observing a repeated 3-order cell assembly vs. time jitter, for assemblies and single cells.]
Cell assemblies are significantly more likely to occur in sequences than predicted by a Poisson model (p<0.001).
[Figure: repeated 3-order cell assemblies are time-locked to specific points in the sequence (low time jitter), while single cells are not.]
Sparse code predicts a specific point in a sequence (single cells don't).
Similar to (Miller et al, 2014).
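A sketch of the control analysis behind these panels (the binning, the assembly definition, and the Poisson surrogate are illustrative assumptions about the method, not the exact pipeline used): count distinct sets of k co-active cells, then compare against rate-matched surrogate data that destroys high-order structure.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def count_unique_assemblies(spikes, order):
    """spikes: binary array (time_bins, n_cells). Counts distinct sets of
    `order` cells that are co-active in at least one time bin."""
    assemblies = set()
    for frame in spikes:
        active = np.flatnonzero(frame)
        if len(active) >= order:
            assemblies.update(combinations(active.tolist(), order))
    return len(assemblies)

def poisson_control(spikes):
    """Rate-matched surrogate: independent spikes at each cell's empirical
    rate, preserving firing rates but destroying high-order correlations."""
    rates = spikes.mean(axis=0)
    return (rng.random(spikes.shape) < rates).astype(int)

spikes = (rng.random((600, 100)) < 0.05).astype(int)  # toy data, not real recordings
print(count_unique_assemblies(spikes, order=3),
      count_unique_assemblies(poisson_control(spikes), order=3))
```

In real data the interesting signature is assemblies that repeat at the same point in the sequence across trials, which the jitter analysis in the figure tests.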
Summary
- A model of sequence learning in cortex
- Relies on "predictive neuron" with active dendrites and fast inhibitory networks
- Can learn complex temporal sequences
- Applied to real world streaming applications
- A model of sensorimotor sequence learning in cortex
- Identical network of pyramidal cells can perform a very different computation
- Difference: motion-related context sent to active dendrites
- Detailed list of experimentally testable properties
Open Issues / Discussion
Are active dendrites necessary? (Yes!)
- Is a two layer network of uniform point neurons sufficient? (No!)
How to integrate calcium spikes, BAC firing, and apical dendrites?
Continuous time model of HTM, including inhibitory networks
Collaborations
We are always interested in hosting visiting scholars and interns.
Co-authors: Jeff Hawkins, Scott Purdy, Marcus Lewis (Numenta), Yuwei Cui
Contact info: sahmad@numenta.com
@SubutaiAhmad
