Analyzing large scale spike trains
with spatio-temporal constraints:
application to retinal data
Supervised by Prof. Bruno Cessac
Hassan Nasser
๐‘…๐‘’๐‘™๐‘Ž๐‘ก๐‘–๐‘œ๐‘› ๐‘…|๐‘†
๐‘ƒ ๐‘… ๐‘†
Response
variability
Biological
neural network
Stimulus Spike Response
S R
Neural
prosthetics
Bio-inspired
technologies
Time (ms)
Trial
2
๐‘…๐‘’๐‘™๐‘Ž๐‘ก๐‘–๐‘œ๐‘› ๐‘…|๐‘†
๐‘ƒ ๐‘… ๐‘†
Stimulus Spike Response
S R
Spike train
statistics
3
Memory
4
Probabilistic models
- Maximum entropy: Ising (Schneidman et al. 06), triplets (Ganmor et al. 09), …: spatial, no memory.
- Point process.
5
Probabilistic models
- Maximum entropy:
– Ising (Schneidman et al. 06), triplets (Ganmor et al. 09): spatial, no memory.
– 1 time-step memory (Marre et al. 09): spatio-temporal, but limited to 1 time-step memory.
– General framework (Vasquez et al. 12): spatio-temporal, but limited to small scale.
- Point process: Generalized Linear Model, Hawkes, Linear-Nonlinear models; neurons are considered conditionally independent given the past.
The number of recorded neurons doubles every 8 years!
6
Goal: develop a framework to fit spatio-temporal maximum entropy models on large scale spike trains.
• Definitions
– Basic concepts
– Maximum entropy principle (spatial & spatio-temporal)
• Montecarlo in the service of large neural spike trains
• Fitting parameters
– Tests on synthetic data
– Application on real data
• The EnaS software
• Discussion
7
Spike objects
- Spike train: ω
- Spike pattern: ω(t)
- Spike block: ω_{t1}^{t2}
- Spike event: ω_i(t)
where i is the neuron index and t the time.
Empirical probability: π_ω^T(block, pattern, …), estimated over a spike train of length T.
9
Confidence plot
[Figure: raster (left) and scatter of predicted vs. observed pattern probabilities (right); points should fall between the lower bound (−3σ) and the upper bound (+3σ) around the diagonal]
10
Monomials
m_l(ω) = ∏_r ω_{i_r}(t_r): equal to 1 iff ω_{i_r}(t_r) = 1 for all r, and 0 otherwise.
Examples: ω_0(47)·ω_1(47) (pairwise); ω_7(40)·ω_8(41) and ω_2(21)·ω_4(23) (pairwise with time-step delays); ω_4(28)·ω_6(28)·ω_·(28) (triplet).
Stationarity ≡ π_ω^T(m_l) does not change along the spike train.
11
Imagine a spatial case …
- 2^N possible patterns/states.
- m_l = ω_i(0): N individual-activity monomials.
- m_l = ω_i(0) ω_j(0): N² pairwise-correlation monomials.
- 2^N ≫ N + N²: the measured averages ⟨m_l⟩ do not determine the measure μ[ω(t)]. Which μ to choose? Maximum entropy.
12
Spatial models
Statistical entropy: S(ν) = −Σ_{ω(0)} ν[ω(0)] log ν[ω(0)]
Constraints: μ[m_l] = π_ω^T[m_l]
Sought distribution:
μ = argmax_{ν∈M} { S(ν) + λ_0 (Σ_{ω(0)} ν[ω(0)] − 1) + Σ_{l=1}^{L} λ_l (ν[m_l] − π_ω^T[m_l]) }
(entropy; normalization; parameters λ_l multiplying the gap between predicted and empirical measures)
After fitting the parameters, the solution takes the Gibbs form:
μ[ω(0)] = e^{H_λ(ω(0))} / Z_λ, with potential H_λ(ω(0)) = Σ_l λ_l m_l and partition function Z_λ = Σ_{ω(0)} e^{H_λ(ω(0))}
With pairwise monomials this is the Ising model.
13
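For small N the spatial maximum entropy (Gibbs) distribution can be evaluated by brute force, enumerating all 2^N patterns; a sketch with illustrative names (fields h_i and pairwise couplings J_ij standing in for the λ_l):

```python
import itertools, math

def gibbs_spatial(h, J):
    """Brute-force spatial maximum entropy distribution over all 2^N
    patterns omega(0) in {0,1}^N:
    mu[omega] = exp(H(omega)) / Z,
    H(omega) = sum_i h_i w_i + sum_{(i,j)} J_ij w_i w_j."""
    N = len(h)
    patterns = list(itertools.product([0, 1], repeat=N))
    weights = []
    for w in patterns:
        H = sum(h[i] * w[i] for i in range(N))
        H += sum(J[(i, j)] * w[i] * w[j] for (i, j) in J)
        weights.append(math.exp(H))
    Z = sum(weights)  # partition function Z_lambda
    return {w: wt / Z for w, wt in zip(patterns, weights)}

# With all parameters zero the distribution is uniform over 2^2 = 4 patterns:
mu = gibbs_spatial([0.0, 0.0], {(0, 1): 0.0})
print(mu[(0, 0)])  # 0.25
```

The exponential cost of enumerating 2^N patterns is exactly why large-scale fitting needs sampling instead.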
Prediction with a spatial model
[Scatter plots of predicted vs. observed probability]
- Spatial patterns.
- Spatio-temporal patterns of memory depth 1: predicted by the product μ = μ × μ.
- Spatio-temporal patterns of memory depth 2: predicted by the product μ = μ × μ × μ.
14–17
The spike train as a Markov chain
The pattern at time n depends on the D preceding patterns: transition probability P[ω(n) | ω_{n−D}^{n−1}]. Iterating one step back gives P[ω(n−1) | ω_{n−D−1}^{n−2}], then P[ω(n−2) | ω_{n−D−2}^{n−3}], and so on.
Chapman–Kolmogorov equation:
μ[ω_0^n] = μ[ω_0^D] · P[ω(D+1) | ω_0^D] · … · P[ω(n) | ω_{n−D}^{n−1}]
18–20
๐ท
Markov states with memory
๐‘ค ๐‘คโ€ฒ
๐‘ƒ[๐‘ค โ†’ ๐‘คโ€ฒ]
0 1 0 0 1 1 0 0
1 0 1 0 0 1 0 1
0 1 1 1 0 1 0 1
๐‘ค
๐‘คโ€ฒ
๐‘ค & ๐‘คโ€ฒ
:
blocks of size ๐ท
21
1 0 0
1 0 1
1 0 1
๐‘ค โ†’ ๐‘คโ€ฒ
= 0
๐‘ค โ†’ ๐‘คโ€ฒ = ๐‘’โ„‹ ๐œ”0
๐ท
0 0 1
0 0 0
0 0 0
1 1 1
0 1 1
1 1 1
0 0 1
1 0 0
1 1 0
Legal transitions
Illegal transitions
๐‘ค โ†’ ๐‘คโ€ฒ
1 0 0
0 1 0
1 1 1
๐‘ค ๐‘คโ€ฒ
โ‰ 
0 0 1
0 0 0
0 0 0
1 1 1
0 1 1
1 1 1
0 0 1
1 0 0
1 1 0
Markov states with memory
๐‘ค ๐‘คโ€ฒ
๐‘ƒ[๐‘ค โ†’ ๐‘คโ€ฒ]
Legal transitions
Illegal transitions
๐‘ค โ†’ ๐‘คโ€ฒ
1 0 0
0 1 0
1 1 1
๐‘ค ๐‘คโ€ฒ
0 1 0 0 1 1 0 0
1 0 1 0 0 1 0 1
0 1 1 1 0 1 0 1
โ‰ 
๐‘ค & ๐‘คโ€ฒ
:
blocks of size ๐ท
22
1 0 0
1 0 1
1 0 1
โ„’ ๐‘คโ€ฒ ๐‘ค
๐‘ค โ†’ ๐‘คโ€ฒ
= 0
๐‘ค โ†’ ๐‘คโ€ฒ = ๐‘’โ„‹ ๐œ”0
๐ท
โ„’ ๐‘คโ€ฒ ๐‘ค = ๐‘’โ„‹ ๐œ”0
๐ท , if ๐‘คโ€ฒ ๐‘ค is a legal transition
0 Otherwise
Transfer matrix (non-normalized): the Perron–Frobenius theorem gives its biggest eigenvalue s_λ, with right eigenvector R(·) and left eigenvector L(·).
Using the Chapman–Kolmogorov equation:
μ[ω_0^n] = e^{H(ω_0^n)} R(ω_{n−D}^n) L(ω_0^D) / s_λ^{n−D+1}
Pressure: P(H) = log s_λ
Direct computation of the Kullback–Leibler divergence (pressure, minus the empirical average of the potential, minus the entropy):
d_KL(π_ω^T, μ_λ) = P(H) − π_ω^T[H] − S[π_ω^T]
Averages of monomials: μ[m_l] = ∂P(H)/∂λ_l
Matrix size: 2^{ND} × 2^{ND}.
23
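For a toy case the transfer-matrix machinery fits in a few lines; this sketch (N = 1 neuron, D = 2, an illustrative two-parameter potential of my choosing) builds ℒ over all binary blocks and reads the pressure off the Perron eigenvalue:

```python
import itertools
import numpy as np

# States w are binary blocks of length D; w -> w' is legal when the
# blocks overlap on D - 1 time steps. Illustrative potential on the
# transition: H = lam1 * (last spike) + lam2 * (product of last two spikes).
D, lam1, lam2 = 2, 0.3, -0.5
states = list(itertools.product([0, 1], repeat=D))
L = np.zeros((len(states), len(states)))
for a, w in enumerate(states):
    for b, wp in enumerate(states):
        if w[1:] == wp[:-1]:  # legal transition: overlap on D - 1 steps
            H = lam1 * wp[-1] + lam2 * wp[-1] * wp[-2]
            L[b, a] = np.exp(H)  # entry L_{w' w}; illegal entries stay 0

s = np.max(np.linalg.eigvals(L).real)  # Perron-Frobenius eigenvalue s_lambda
pressure = np.log(s)                   # P(H) = log s_lambda
print(round(pressure, 2))  # 0.73
```

The matrix has 2^{ND} rows and columns, which is exactly the scaling limit discussed next.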
Fitting loop (with the transfer matrix):
1. Set the constraints: compute the empirical distribution π_ω^T(m_l).
2. Start from a random set of parameters.
3. Compute the predicted distribution μ_{λ(it)}[m_l] with the transfer matrix.
4. Compare predicted and empirical averages; update the parameters.
5. End with the final set of parameters and the predicted distribution.
24
Limitation of the transfer matrix
ℒ_{w′w} has 2^{ND} × 2^{ND} entries; even at 1 byte per entry, that is 2^{2ND} bytes.
Example: 20 neurons, range R = D + 1 = 3 → 2^{80} bytes ≈ 1,099,511,627,776 TB.
[Figure: memory need vs. neuron number]
25
๐‘๐‘… = 20
Small scale Large scale
๐‘๐‘… > 20๐‘๐‘… โ‰ค 20
Transfer matrix Montecarlo
26
In the fitting loop, the step that computes the predicted distribution μ_{λ(it)}[m_l], so far done with the transfer matrix, is the one that does not scale: it must be replaced for large networks.
27
Goal
Develop a
framework to fit
spatio temporal
maximum entropy
models on large
scale spike trains
โ€ข Definitions
โ€“ Basic concepts
โ€“ Maximum entropy principle (Spatial
& Spatio-temporal).
โ€ข Montecarlo in the service of
large neural spike trains
โ€ข Fitting parameters
โ€“ Tests on synthetic data
โ€“ Application on real data
โ€ข The EnaS software
โ€ข Discussion
28
Metropolis–Hastings (1970)
• Given λ, sample spike trains distributed as ~μ_λ to estimate μ_λ[m_l].
• Montecarlo states ω^{n,(1)}, ω^{n,(2)}, …; a transition between states is accepted with probability
P[ω^{n,(2)} | ω^{n,(1)}] = min( [Q(ω^{n,(1)} | ω^{n,(2)}) / Q(ω^{n,(2)} | ω^{n,(1)})] · μ[ω^{n,(2)}] / μ[ω^{n,(1)}], 1 )
• Q is the proposal function, symmetric in the Metropolis algorithm: Q(ω^{(1)} → ω^{(2)}) = Q(ω^{(2)} → ω^{(1)}).
• But μ[ω_0^n] = e^{H(ω_0^n)} R(ω_{n−D}^n) L(ω_0^D) / s_λ^{n−D+1}, where R, L and s_λ are unknown.
29
Metropolis–Hastings: avoiding R and L
μ[ω^{n,(2)}] / μ[ω^{n,(1)}] = [e^{H(ω_0^{n,(2)})} R(ω_{n−D}^{n,(2)}) L(ω_0^{D,(2)})] / [e^{H(ω_0^{n,(1)})} R(ω_{n−D}^{n,(1)}) L(ω_0^{D,(1)})]
(the factors s_λ^{n−D+1} cancel). If the proposal leaves the boundary blocks ω_0^D and ω_{n−D}^n untouched, the R and L factors cancel as well:
μ[ω^{n,(2)}] / μ[ω^{n,(1)}] = e^{H(ω_0^{n,(2)})} / e^{H(ω_0^{n,(1)})} = e^{ΔH(λ)}
30–33
Algorithm review
Start: a random spike train of N neurons and length n, with parameters λ.
Loop (N_flip times):
1. Choose a random event, with time index in [D + 1, n − D − 1], and flip it.
2. Compute e^{ΔH(λ)}; ΔH is computed only between [−D, +D] around the flipped time.
3. Draw ε ∈ [0,1]: if e^{ΔH(λ)} > ε, accept the change; otherwise reject it.
Output: updated Montecarlo spike train.
34
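The flip loop above can be sketched for the simplest case of a purely spatial pairwise potential (so the boundary-block handling is omitted); h, J and all names are illustrative, and ΔH is computed locally since only terms touching the flipped event change:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sample(h, J, T=200, n_flip=20000):
    """Flip-based Metropolis sampling of the spatial pairwise potential
    H(omega(t)) = sum_i h_i w_i(t) + sum_{i<j} J_ij w_i(t) w_j(t),
    applied independently to each time column of an N x T raster."""
    N = len(h)
    omega = rng.integers(0, 2, size=(N, T))
    for _ in range(n_flip):
        i, t = rng.integers(N), rng.integers(T)
        col = omega[:, t]
        sign = 1 - 2 * col[i]  # +1 if flipping 0 -> 1, -1 if 1 -> 0
        # Delta H: only the local field and couplings of neuron i change.
        dH = sign * (h[i] + sum(J[i, j] * col[j] for j in range(N) if j != i))
        if np.exp(dH) > rng.uniform():  # accept with probability min(1, e^dH)
            omega[i, t] = 1 - col[i]
    return omega

h = np.array([-1.0, -1.0])
J = np.array([[0.0, 2.0], [2.0, 0.0]])
omega = metropolis_sample(h, J)
print(omega.mean())  # average firing probability of the sampled raster
```

Monomial averages μ_λ[m_l] are then estimated as empirical averages over the sampled raster.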
Hassan Nasser, Olivier Marre, and Bruno Cessac. Spike trains analysis using Gibbs distributions and Montecarlo method. Journal of Statistical Mechanics: Theory and Experiment, 2013.
35
The fitting loop with Montecarlo replacing the transfer matrix: compute the empirical distribution π_ω^T(m_l) (setting the constraints), start from a random set of parameters, estimate the predicted distribution μ_{λ(it)}[m_l] by Monte-Carlo, compare, and update the parameters; the remaining question is how to update them.
36
Goal
Develop a
framework to fit
spatio temporal
maximum entropy
models on large
scale spike trains
โ€ข Definitions
โ€“ Basic concepts
โ€“ Maximum entropy principle (Spatial
& Spatio-temporal).
โ€ข Montecarlo in the service of
large neural spike trains
โ€ข Fitting parameters
โ€“ Tests on synthetic data
โ€“ Application on real data
โ€ข The EnaS software
โ€ข Discussion
37
Fitting parameters / concept
Maximizing the entropy (difficult, because computing the exact entropy is intractable) ≡ minimizing the divergence
d_KL(π_ω^T, μ_λ) = P(λ) − π_ω^T[H] − S[π_ω^T]
Small scale: easy to compute. Large scale: hard to compute.
Approach of Dudík, M., Phillips, S., and Schapire, R. (2004), Performance guarantees for regularized maximum entropy density estimation, Proceedings of the 17th Annual Conference on Computational Learning Theory: bounding the negative log-likelihood, with relaxation.
[Figure: divergence decreasing over iterations, from big d_KL to small d_KL]
38
Fitting parameters / concept
Bounding the divergence, with relaxation.
Hassan Nasser and Bruno Cessac. Parameters fitting for spatio-temporal maximum entropy distributions: application to neural spike trains. Submitted to Entropy.
40
Cost function
C_f = d_KL(π_ω^T, μ_{λ′}) − d_KL(π_ω^T, μ_λ) = P(λ′) − P(λ) − π_ω^T[ΔH_λ]
C_f = lim_{n→∞} (1/n) log Σ_{ω_0^{n−1}} μ_λ[ω_0^{n−1}] e^{ΔH_λ(ω_0^{n−1})} − π_ω^T[ΔH_λ]
Sequential bound (update one parameter l at a time):
C_f^{seq} ≤ (1/(n−D)) [ −δ π_ω^T[m_l] + log(1 + (e^δ − 1) μ_λ[m_l]) + ε(|λ + δ| − |λ|) ]
Parallel bound (update all L parameters at once):
C_f^{par} ≤ (1/(L(n−D))) Σ_l [ −δ_l π_ω^T[m_l] + log(1 + (e^{δ_l} − 1) μ_λ[m_l]) + ε_l(|λ_l + δ_l| − |λ_l|) ]
The relaxation term is > 0; the averages μ_λ[m_l] are estimated using Montecarlo.
Parameters update: λ′ = λ + δ.
41
The complete pipeline: set the constraints by computing the empirical distribution π_ω^T(m_l); start from a random set of parameters; compute the predicted distribution μ_{λ(it)}[m_l] by Monte-Carlo; compare and update the parameters (fitting); end with the final set of parameters and the exact predicted distribution.
42
Updating the target distribution
A new λ gives a new distribution, and re-estimating it by Montecarlo at every iteration is heavy to compute. Instead, if δ is small, the previous distribution is updated with a Taylor expansion (μ is a function of λ):
μ_{λ+δ}[m_j] = μ_λ[m_j] + Σ_k (∂²P[λ]/∂λ_j ∂λ_k) δ_k + (1/2) Σ_{k,l} (∂³P[λ]/∂λ_j ∂λ_k ∂λ_l) δ_k δ_l + …
with second derivatives given by summed correlations:
∂²P[λ]/∂λ_j ∂λ_k = Σ_{n=−∞}^{+∞} C_{jk}(n)
The exponential decay of correlations means that, in practice, the sum over n is finite.
43
Demo
[Figures: evolution of the error across iterations; predicted vs. observed probability]
44–46
Parameters fitting demo: synthetic data sets
• Sparse and dense regimes, with rate parameters and higher-order parameters.
• N = 20, R = 3, hence NR = 60.
• Draw a random (known) set of parameters/monomials, generate a synthetic spike train, and try to recover the parameters.
[Figure: errors on the recovered parameters, dense vs. sparse]
47–48
Comparing block probabilities on synthetic spike trains: N = 40, spatial (NR = 40) vs. spatio-temporal (NR = 80).
49
Application on retinal data
Data courtesy: Michael J. Berry II (Princeton University) and Olivier Marre (Institut de la Vision, Paris).
Models: purely spatial pairwise (Schneidman et al. 2006) and pairwise with 1 time-step memory; data binned at 20 ms.
50
Real data, 20 neurons: spatial pairwise vs. spatio-temporal pairwise.
51
Real data, 40 neurons: spatial pairwise vs. spatio-temporal pairwise.
52
Goal
Develop a
framework to fit
spatio temporal
maximum entropy
models on large
scale spike trains
โ€ข Definitions
โ€“ Basic concepts
โ€“ Maximum entropy principle (Spatial
& Spatio-temporal).
โ€ข Montecarlo in the service of
large neural spike trains
โ€ข Fitting parameters
โ€“ Tests on synthetic data
โ€“ Application on real data
โ€ข The EnaS software
โ€ข Discussion
53
Event neural assembly Simulation (EnaS)
V1 (2007), V2 (2010), V3 (2014, + graphical user interface).
Contributors: Thierry Viéville, Bruno Cessac, Juan-Carlos Vasquez, Horacio Rostro-Gonzalez, Hassan Nasser, Selim Kraria.
Goal: analyzing spike trains; sharing research advances with the community.
C++ & Qt (interfaces: Java, Matlab, Python).
54
Architecture
EnaS components:
- RasterBlock: data management, formats, empirical statistics, grammar.
- Gibbs Potential: defining models, generating artificial spike trains, fitting, Montecarlo process (parallelization).
- Graphical User Interface: interactive environment, simultaneous visualization of stimulus and response, demo.
(RasterBlock and Gibbs Potential are the contributions of this thesis.)
55
Grammar
Store only the transitions actually observed in the raster: at most T − D of them, versus the 2^{ND} states of the transfer matrix.
Needed in: computing the empirical distribution; Montecarlo sampling; divergence and entropy computation; confidence bounds.
56
Grammar data structure
Example: N = 3, D = 2. Each observed transition is split into a prefix (a block of depth D) and a suffix (the next spike pattern). Scanning the raster, each new (prefix, suffix) pair is inserted; when the same transition reappears, its counter is incremented (e.g. a transition that appears 2 times), and a new suffix under an existing prefix adds a new entry.
57–60
Implementation: a C++ map container, sorted in a chosen order, stores the transitions with their occurrence counts (e.g. "it appeared two times").
61–62
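The prefix/suffix counting can be sketched in a few lines; here a Python dict of dicts stands in for the sorted C++ map (names are illustrative, not the EnaS data structures):

```python
import numpy as np

def build_grammar(raster, D):
    """Count observed transitions: at each time t, the prefix is the
    block raster[:, t:t+D] and the suffix the next pattern raster[:, t+D].
    Only observed (prefix, suffix) pairs are stored, with their counts."""
    N, T = raster.shape
    grammar = {}
    for t in range(T - D):
        prefix = raster[:, t:t + D].tobytes()   # hashable key for the block
        suffix = raster[:, t + D].tobytes()
        grammar.setdefault(prefix, {})
        grammar[prefix][suffix] = grammar[prefix].get(suffix, 0) + 1
    return grammar

# N = 3 neurons, T = 6 time steps, D = 2: at most T - D = 4 transitions.
raster = np.array([[0, 1, 0, 0, 1, 0],
                   [1, 0, 1, 1, 0, 1],
                   [0, 1, 1, 0, 1, 1]])
g = build_grammar(raster, D=2)
print(sum(sum(c.values()) for c in g.values()))  # 4
```

Storage grows with the number of observed transitions (at most T − D), never with the 2^{ND} possible states.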
Parallelization of the Montecarlo process
The Montecarlo spike train (length N_times) is split across processors; flips near the boundaries between processors must respect the [−D, +D] dependence range. With 1 processor, the computation would take 14 months + February!
- Personal multi-processor computer: 2–8 processors (OpenMP).
- Cluster: 64-processor machines at INRIA.
- MPI: more processors, but more time consuming in our case.
[Figure: example with 2 processors and the boundary region of width D]
64–66
EnaS demo: modelling window.
Data courtesy: Gerrit Hilgen, Newcastle University, Institute of Neuroscience, United Kingdom. Interface design: Selim Kraria.
Hassan Nasser, Selim Kraria, Bruno Cessac. EnaS: a new software for analyzing large scale spike trains. In preparation.
68–70
Conclusion
A framework to fit spatio-temporal maximum entropy models on large scale spike trains: Montecarlo sampling, parameter fitting, and the EnaS software, with perspectives on models and parameters.
71–72
Synthetic data vs. real data:
- Synthetic data: the potential shape is known (the monomials are known), so only fitting is needed.
- Real data: the potential shape is unknown (the monomials are unknown), so one must guess the shape and then fit.
Canonical models: Ising, pairwise with delay, triplets, …
Large scale issues: big computation time, non-observed monomials, estimation errors. Remedy: pre-selection of monomials (Rodrigo Cofre & Bruno Cessac, 40 neurons).
73
Making sense of parameters
Model parameters allow evaluating the importance of a particular type of correlations, and possibly generalizing the model prediction to a new stimulus.
74
- A stationary maximum entropy model fitted on one stimulus captures its statistics (1) but predicts no new response to a new stimulus (2).
- Stimulus-dependent maximum entropy models (Granot-Atedgi et al. 13): a new stimulus yields a new spike response.
75
EnaS: now and future
Pipeline: stimulus → retina → spike sorting → spike train.
Now: visualization + empirical analysis + maximum entropy modelling.
Future: more empirical observation packages; more neural coding functionalities; spike sorting; receptive field estimation; neuron selection; type identification; stimulus design; feature extraction; retina models (VirtualRetina).
76
Next …
Starting a company in IT/Data Analytics:
– First prize in an innovative project competition (UNICE Foundation).
– Current project: orientation in education using real surveys.
– EnaS is in perspective, in collaboration with INRIA.
With Caty Conraux & Vincent Tricard.
77
Thanks collaborators
โ€ข Adrian Palacios
โ€ข Olivier Marre
โ€ข Michael J. Berry II
โ€ข Gaลกper Tkaฤik
โ€ข Thierry Morra
78
79
Appendix
• Tuning N_times.
• Tuning N_flip.
• Validating the Montecarlo algorithm.
• Tuning delta.
• MPI vs OpenMP, memory.
• Why MPI is not better than OpenMP?
• Computational complexity of the Montecarlo algorithm.
• Review of Montecarlo / N_flip.
• Number of iterations for fitting.
• Fluctuations on parameters / non-existing monomials.
• Epsilon on fitting parameters.
• Binning.
• Tests with several stimuli.
• Granot-Atedgi et al. 2013.
80
Models with random parameters λ
81
Tuning N_times
[Figure: d_KL vs. N_times, dense and sparse regimes]
82
Tuning k ≡ N_flip, with N_flip = k × N × N_times
[Figure: d_KL(k) for dense and sparse λ, Montecarlo vs. the exact transfer-matrix value; marks at k = 10 and k = 50]
83
Taylor expansion (δ test)
[Figure: error vs. ‖δ‖, dense and sparse regimes]
84
โ€ข Multiprocessors computers:
โ€“ Personal computer (2-8 processors).
โ€“ Cluster (64 processors machines at INRIA).
โ€ข Parallel programming frameworks:
โ€“ OpenMp: The processors of the same computer divide
the tasks (live memory (RAM) is shared).
โ€“ MPI: several processors on each computer share the
task (Memory in not shared).
4 processors
๏ƒจ Time/4.
Parallelization
64 processors ๏ƒจ Time/64.
85
MPI
• OpenMP is limited to the number of processors on a single machine.
• With MPI, 64 processors × 10 machines → 640 processors.
• Although we thought it would take less time with MPI, it did not: a master computer distributes the whole Montecarlo spike train over several 64-processor clusters, and at each change of the memory there is a communication between the clusters and the master. At each flip, more time is lost in communication than gained in computing.
86
Computational complexity: Montecarlo (N_times = 10^6; k = 50) vs. transfer matrix
[Figure]
87
Computational complexity
Time taken to run this algorithm:
Computing time = k · N · N_times · t_{ΔH(λ)}, with t_{ΔH(λ)} = f(L)
1- The flip loop (choose a random event, flip it, compute e^{ΔH(λ)}, accept/reject) runs k · N · N_times times.
2- In each iteration, computing e^{ΔH(λ)} requires a loop over the L monomials.
On a cluster of 64 processors: 40 neurons, Ising: 10 min; 40 neurons, pairwise: 20 min.
88
Algorithm review (tuning): the same flip loop, run N_flip = k × N × N_times times on a spike train of length N_times, with the flipped time chosen in [D, N_times − D].
89
How many iterations do we need?
โ€ข ๐‘๐‘… < 20:
โ€“ 50 parallel + 100 sequential
โ€ข ๐‘๐‘… < 150:
โ€“ 100 parallel + 100
sequential
90
๐œ– on parameters fitting
โ€ข Dudik et al does not allow that:
โ€ข ๐›ฝ๐‘™ > ๐œ‹๐‘™ || ๐›ฝ๐‘™ > 1 โˆ’ ๐œ‹๐‘™. In this case โ‡’ ๐›ฝ๐‘™ = 0.9 ๐œ‹๐‘™
โ€ข We avoid dividing by 0 (
โ€ฆ
๐œ‡ ๐‘™
) โ€ฆ by replacing
putting ๐œ†๐‘™ = โˆ’โˆž
91
Problem of non-observed monomials
By the central limit theorem, the empirical monomial averages fluctuate around the true ones: the data give μ_{λ*}[m_l] + η instead of μ_{λ*}[m_l] = ∂P/∂λ_l. Expanding around λ*:
μ_{λ*}[m_l] + η = ∂P/∂λ_l + β ∂²P/∂λ_l² + …
so the fitted parameters fluctuate: λ = λ* + β, with β = X⁻¹ η,
where X is the covariance matrix X_ij = ∂²P/∂λ_i ∂λ_j (the problem is convex).
Computing X over 1000 potentials shows that a big percentage of X is zero, so X⁻¹ has big values and the fluctuations β are big.
92
Binning
• Changes the statistics completely: 700% more new patterns appear when binning at 20 ms.
• Should be studied rigorously.
[Figure: binning a 2-neuron raster with bin size 5: a bin is 1 if at least one spike occurs in the window]
Consequences: loss of information; loss of the biological time scale; a denser spike train; fewer non-observed monomials.
Why have spike trains been binned in the literature?
- No clear answer.
- The relation between binning and a substitute for memory is not convincing.
- It might be because binning allows having more observed monomials → less dangerous for convexity → convergence is more guaranteed.
93–95
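A sketch of the binning operation discussed above (an output bin is set when at least one spike falls in the window, which is the usual convention; the toy raster below is illustrative, not the slide's exact example):

```python
import numpy as np

def bin_raster(raster, bin_size):
    """Bin a binary raster: each output bin is 1 if the neuron spiked
    at least once within the corresponding window of `bin_size` steps.
    An incomplete tail window is dropped."""
    N, T = raster.shape
    n_bins = T // bin_size
    trimmed = raster[:, :n_bins * bin_size]
    return trimmed.reshape(N, n_bins, bin_size).max(axis=2)

# 2 neurons, 10 time steps, bin size 5 -> 2 bins per neuron.
raster = np.array([[0, 1, 1, 0, 0, 0, 0, 1, 0, 1],
                   [0, 0, 1, 0, 1, 0, 0, 0, 0, 0]])
print(bin_raster(raster, 5))
# [[1 1]
#  [1 0]]
```

The OR within each window is what makes the binned train denser and destroys the original fine time scale.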
Making sense of parameters
One model per stimulus: stimulus 1 → P[R|S1], stimulus 2 → P[R|S2], stimulus 3 → P[R|S3], stimulus 4 → P[R|S4]; decoding uses P[S|R].
96
Einat Granot-Atedgi, Gašper Tkačik, Ronen Segev, Elad Schneidman. Stimulus-dependent Maximum Entropy Models of Neural Population Codes. PLoS Comput. Biol. 2013.
97
H(ω) = Σ_i λ_i(t) ω_i(t) + Σ_{i,j} λ_ij ω_i(0) ω_j(0)
The first (time-dependent) term plays the role of an LNL stimulus drive; the second is the Schneidman et al. 2006 pairwise term. Cross-validation on small scale: Vasquez et al. 2013.
98
Relaxation
The equality constraints are relaxed with slack variables β_l: each monomial average is only required to match within a tolerance, |μ_λ[m_l] − π[m_l]| ≤ ε_l + β_l.
99
100
Schneidman et al 2006 stimulus
Confidence bounds in linear scale
101
Confidence bounds in log scale
102
103
Deep learning-for-pose-estimation-wyang-defenseDeep learning-for-pose-estimation-wyang-defense
Deep learning-for-pose-estimation-wyang-defenseWei Yang
ย 
20230213_ComputerVision_์—ฐ๊ตฌ.pptx
20230213_ComputerVision_์—ฐ๊ตฌ.pptx20230213_ComputerVision_์—ฐ๊ตฌ.pptx
20230213_ComputerVision_์—ฐ๊ตฌ.pptxssuser7807522
ย 
Parallel Algorithms for Geometric Graph Problems (at Stanford)
Parallel Algorithms for Geometric Graph Problems (at Stanford)Parallel Algorithms for Geometric Graph Problems (at Stanford)
Parallel Algorithms for Geometric Graph Problems (at Stanford)Grigory Yaroslavtsev
ย 
Intro to Quant Trading Strategies (Lecture 2 of 10)
Intro to Quant Trading Strategies (Lecture 2 of 10)Intro to Quant Trading Strategies (Lecture 2 of 10)
Intro to Quant Trading Strategies (Lecture 2 of 10)Adrian Aley
ย 
Sampling method : MCMC
Sampling method : MCMCSampling method : MCMC
Sampling method : MCMCSEMINARGROOT
ย 
2014.10.dartmouth
2014.10.dartmouth2014.10.dartmouth
2014.10.dartmouthQiqi Wang
ย 
QTML2021 UAP Quantum Feature Map
QTML2021 UAP Quantum Feature MapQTML2021 UAP Quantum Feature Map
QTML2021 UAP Quantum Feature MapHa Phuong
ย 
Regularisation & Auxiliary Information in OOD Detection
Regularisation & Auxiliary Information in OOD DetectionRegularisation & Auxiliary Information in OOD Detection
Regularisation & Auxiliary Information in OOD Detectionkirk68
ย 
Sequence Entropy and the Complexity Sequence Entropy For ๐’๐’๏€ Action
Sequence Entropy and the Complexity Sequence Entropy For ๐’๐’๏€ ActionSequence Entropy and the Complexity Sequence Entropy For ๐’๐’๏€ Action
Sequence Entropy and the Complexity Sequence Entropy For ๐’๐’๏€ ActionIJRES Journal
ย 
Myers_SIAMCSE15
Myers_SIAMCSE15Myers_SIAMCSE15
Myers_SIAMCSE15Karen Pao
ย 
Deconvolution
DeconvolutionDeconvolution
Deconvolutiongregthom992
ย 
Hierarchical matrix techniques for maximum likelihood covariance estimation
Hierarchical matrix techniques for maximum likelihood covariance estimationHierarchical matrix techniques for maximum likelihood covariance estimation
Hierarchical matrix techniques for maximum likelihood covariance estimationAlexander Litvinenko
ย 
MVPA with SpaceNet: sparse structured priors
MVPA with SpaceNet: sparse structured priorsMVPA with SpaceNet: sparse structured priors
MVPA with SpaceNet: sparse structured priorsElvis DOHMATOB
ย 
Deep learning study 2
Deep learning study 2Deep learning study 2
Deep learning study 2San Kim
ย 
DFA minimization algorithms in map reduce
DFA minimization algorithms in map reduceDFA minimization algorithms in map reduce
DFA minimization algorithms in map reduceIraj Hedayati
ย 

Similar to Analysis of large scale spiking networks dynamics with spatio-temporal constraints: application to Multi-Electrodes acquisitions in the retina (20)

Point symmetries of lagrangians
Point symmetries of lagrangiansPoint symmetries of lagrangians
Point symmetries of lagrangians
ย 
Icra 17
Icra 17Icra 17
Icra 17
ย 
Paper Introduction "Density-aware person detection and tracking in crowds"
Paper Introduction "Density-aware person detection and tracking in crowds"Paper Introduction "Density-aware person detection and tracking in crowds"
Paper Introduction "Density-aware person detection and tracking in crowds"
ย 
fuzzy fuzzification and defuzzification
fuzzy fuzzification and defuzzificationfuzzy fuzzification and defuzzification
fuzzy fuzzification and defuzzification
ย 
Deep learning-for-pose-estimation-wyang-defense
Deep learning-for-pose-estimation-wyang-defenseDeep learning-for-pose-estimation-wyang-defense
Deep learning-for-pose-estimation-wyang-defense
ย 
20230213_ComputerVision_์—ฐ๊ตฌ.pptx
20230213_ComputerVision_์—ฐ๊ตฌ.pptx20230213_ComputerVision_์—ฐ๊ตฌ.pptx
20230213_ComputerVision_์—ฐ๊ตฌ.pptx
ย 
Parallel Algorithms for Geometric Graph Problems (at Stanford)
Parallel Algorithms for Geometric Graph Problems (at Stanford)Parallel Algorithms for Geometric Graph Problems (at Stanford)
Parallel Algorithms for Geometric Graph Problems (at Stanford)
ย 
Intro to Quant Trading Strategies (Lecture 2 of 10)
Intro to Quant Trading Strategies (Lecture 2 of 10)Intro to Quant Trading Strategies (Lecture 2 of 10)
Intro to Quant Trading Strategies (Lecture 2 of 10)
ย 
Sampling method : MCMC
Sampling method : MCMCSampling method : MCMC
Sampling method : MCMC
ย 
2014.10.dartmouth
2014.10.dartmouth2014.10.dartmouth
2014.10.dartmouth
ย 
QTML2021 UAP Quantum Feature Map
QTML2021 UAP Quantum Feature MapQTML2021 UAP Quantum Feature Map
QTML2021 UAP Quantum Feature Map
ย 
Regularisation & Auxiliary Information in OOD Detection
Regularisation & Auxiliary Information in OOD DetectionRegularisation & Auxiliary Information in OOD Detection
Regularisation & Auxiliary Information in OOD Detection
ย 
Sequence Entropy and the Complexity Sequence Entropy For ๐’๐’๏€ Action
Sequence Entropy and the Complexity Sequence Entropy For ๐’๐’๏€ ActionSequence Entropy and the Complexity Sequence Entropy For ๐’๐’๏€ Action
Sequence Entropy and the Complexity Sequence Entropy For ๐’๐’๏€ Action
ย 
Myers_SIAMCSE15
Myers_SIAMCSE15Myers_SIAMCSE15
Myers_SIAMCSE15
ย 
Deconvolution
DeconvolutionDeconvolution
Deconvolution
ย 
Hierarchical matrix techniques for maximum likelihood covariance estimation
Hierarchical matrix techniques for maximum likelihood covariance estimationHierarchical matrix techniques for maximum likelihood covariance estimation
Hierarchical matrix techniques for maximum likelihood covariance estimation
ย 
Lecture 3 sapienza 2017
Lecture 3 sapienza 2017Lecture 3 sapienza 2017
Lecture 3 sapienza 2017
ย 
MVPA with SpaceNet: sparse structured priors
MVPA with SpaceNet: sparse structured priorsMVPA with SpaceNet: sparse structured priors
MVPA with SpaceNet: sparse structured priors
ย 
Deep learning study 2
Deep learning study 2Deep learning study 2
Deep learning study 2
ย 
DFA minimization algorithms in map reduce
DFA minimization algorithms in map reduceDFA minimization algorithms in map reduce
DFA minimization algorithms in map reduce
ย 

More from Hassan Nasser

Poster Toward a realistic retinal simulator
Poster Toward a realistic retinal simulatorPoster Toward a realistic retinal simulator
Poster Toward a realistic retinal simulatorHassan Nasser
ย 
Toward a realistic retina simulator
Toward a realistic retina simulatorToward a realistic retina simulator
Toward a realistic retina simulatorHassan Nasser
ย 
Large scale analysis for spiking data
Large scale analysis for spiking dataLarge scale analysis for spiking data
Large scale analysis for spiking dataHassan Nasser
ย 
Large scalespikingnetworkanalysis
Large scalespikingnetworkanalysisLarge scalespikingnetworkanalysis
Large scalespikingnetworkanalysisHassan Nasser
ย 
Presentation of local estimation of pressure wave velocity from dynamic MR im...
Presentation of local estimation of pressure wave velocity from dynamic MR im...Presentation of local estimation of pressure wave velocity from dynamic MR im...
Presentation of local estimation of pressure wave velocity from dynamic MR im...Hassan Nasser
ย 
Mesure locale de la vitesse de lโ€™onde de pression par lโ€™IRM dynamique.
Mesure locale de la vitesse de lโ€™onde de pression par lโ€™IRM dynamique.Mesure locale de la vitesse de lโ€™onde de pression par lโ€™IRM dynamique.
Mesure locale de la vitesse de lโ€™onde de pression par lโ€™IRM dynamique.Hassan Nasser
ย 

More from Hassan Nasser (6)

Poster Toward a realistic retinal simulator
Poster Toward a realistic retinal simulatorPoster Toward a realistic retinal simulator
Poster Toward a realistic retinal simulator
ย 
Toward a realistic retina simulator
Toward a realistic retina simulatorToward a realistic retina simulator
Toward a realistic retina simulator
ย 
Large scale analysis for spiking data
Large scale analysis for spiking dataLarge scale analysis for spiking data
Large scale analysis for spiking data
ย 
Large scalespikingnetworkanalysis
Large scalespikingnetworkanalysisLarge scalespikingnetworkanalysis
Large scalespikingnetworkanalysis
ย 
Presentation of local estimation of pressure wave velocity from dynamic MR im...
Presentation of local estimation of pressure wave velocity from dynamic MR im...Presentation of local estimation of pressure wave velocity from dynamic MR im...
Presentation of local estimation of pressure wave velocity from dynamic MR im...
ย 
Mesure locale de la vitesse de lโ€™onde de pression par lโ€™IRM dynamique.
Mesure locale de la vitesse de lโ€™onde de pression par lโ€™IRM dynamique.Mesure locale de la vitesse de lโ€™onde de pression par lโ€™IRM dynamique.
Mesure locale de la vitesse de lโ€™onde de pression par lโ€™IRM dynamique.
ย 

Recently uploaded

From idea to production in a day โ€“ Leveraging Azure ML and Streamlit to build...
From idea to production in a day โ€“ Leveraging Azure ML and Streamlit to build...From idea to production in a day โ€“ Leveraging Azure ML and Streamlit to build...
From idea to production in a day โ€“ Leveraging Azure ML and Streamlit to build...Florian Roscheck
ย 
Aminabad Call Girl Agent 9548273370 , Call Girls Service Lucknow
Aminabad Call Girl Agent 9548273370 , Call Girls Service LucknowAminabad Call Girl Agent 9548273370 , Call Girls Service Lucknow
Aminabad Call Girl Agent 9548273370 , Call Girls Service Lucknowmakika9823
ย 
RA-11058_IRR-COMPRESS Do 198 series of 1998
RA-11058_IRR-COMPRESS Do 198 series of 1998RA-11058_IRR-COMPRESS Do 198 series of 1998
RA-11058_IRR-COMPRESS Do 198 series of 1998YohFuh
ย 
E-Commerce Order PredictionShraddha Kamble.pptx
E-Commerce Order PredictionShraddha Kamble.pptxE-Commerce Order PredictionShraddha Kamble.pptx
E-Commerce Order PredictionShraddha Kamble.pptxBoston Institute of Analytics
ย 
Digi Khata Problem along complete plan.pptx
Digi Khata Problem along complete plan.pptxDigi Khata Problem along complete plan.pptx
Digi Khata Problem along complete plan.pptxTanveerAhmed817946
ย 
04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationshipsccctableauusergroup
ย 
Low Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service Bhilai
Low Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service BhilaiLow Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service Bhilai
Low Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service BhilaiSuhani Kapoor
ย 
VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...
VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...
VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...Suhani Kapoor
ย 
Log Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptxLog Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptxJohnnyPlasten
ย 
Dubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls DubaiDubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls Dubaihf8803863
ย 
Unveiling Insights: The Role of a Data Analyst
Unveiling Insights: The Role of a Data AnalystUnveiling Insights: The Role of a Data Analyst
Unveiling Insights: The Role of a Data AnalystSamantha Rae Coolbeth
ย 
High Class Call Girls Noida Sector 39 Aarushi ๐Ÿ”8264348440๐Ÿ” Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi ๐Ÿ”8264348440๐Ÿ” Independent Escort...High Class Call Girls Noida Sector 39 Aarushi ๐Ÿ”8264348440๐Ÿ” Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi ๐Ÿ”8264348440๐Ÿ” Independent Escort...soniya singh
ย 
FESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdfFESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdfMarinCaroMartnezBerg
ย 
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptxEMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptxthyngster
ย 
Full night ๐Ÿฅต Call Girls Delhi New Friends Colony {9711199171} Sanya Reddy โœŒ๏ธo...
Full night ๐Ÿฅต Call Girls Delhi New Friends Colony {9711199171} Sanya Reddy โœŒ๏ธo...Full night ๐Ÿฅต Call Girls Delhi New Friends Colony {9711199171} Sanya Reddy โœŒ๏ธo...
Full night ๐Ÿฅต Call Girls Delhi New Friends Colony {9711199171} Sanya Reddy โœŒ๏ธo...shivangimorya083
ย 
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Jack DiGiovanna
ย 
Call Girls in Defence Colony Delhi ๐Ÿ’ฏCall Us ๐Ÿ”8264348440๐Ÿ”
Call Girls in Defence Colony Delhi ๐Ÿ’ฏCall Us ๐Ÿ”8264348440๐Ÿ”Call Girls in Defence Colony Delhi ๐Ÿ’ฏCall Us ๐Ÿ”8264348440๐Ÿ”
Call Girls in Defence Colony Delhi ๐Ÿ’ฏCall Us ๐Ÿ”8264348440๐Ÿ”soniya singh
ย 
Call Girls In Mahipalpur O9654467111 Escorts Service
Call Girls In Mahipalpur O9654467111  Escorts ServiceCall Girls In Mahipalpur O9654467111  Escorts Service
Call Girls In Mahipalpur O9654467111 Escorts ServiceSapana Sha
ย 

Recently uploaded (20)

From idea to production in a day โ€“ Leveraging Azure ML and Streamlit to build...
From idea to production in a day โ€“ Leveraging Azure ML and Streamlit to build...From idea to production in a day โ€“ Leveraging Azure ML and Streamlit to build...
From idea to production in a day โ€“ Leveraging Azure ML and Streamlit to build...
ย 
Aminabad Call Girl Agent 9548273370 , Call Girls Service Lucknow
Aminabad Call Girl Agent 9548273370 , Call Girls Service LucknowAminabad Call Girl Agent 9548273370 , Call Girls Service Lucknow
Aminabad Call Girl Agent 9548273370 , Call Girls Service Lucknow
ย 
RA-11058_IRR-COMPRESS Do 198 series of 1998
RA-11058_IRR-COMPRESS Do 198 series of 1998RA-11058_IRR-COMPRESS Do 198 series of 1998
RA-11058_IRR-COMPRESS Do 198 series of 1998
ย 
E-Commerce Order PredictionShraddha Kamble.pptx
E-Commerce Order PredictionShraddha Kamble.pptxE-Commerce Order PredictionShraddha Kamble.pptx
E-Commerce Order PredictionShraddha Kamble.pptx
ย 
Digi Khata Problem along complete plan.pptx
Digi Khata Problem along complete plan.pptxDigi Khata Problem along complete plan.pptx
Digi Khata Problem along complete plan.pptx
ย 
04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships
ย 
Low Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service Bhilai
Low Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service BhilaiLow Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service Bhilai
Low Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service Bhilai
ย 
VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...
VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...
VIP High Class Call Girls Bikaner Anushka 8250192130 Independent Escort Servi...
ย 
Log Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptxLog Analysis using OSSEC sasoasasasas.pptx
Log Analysis using OSSEC sasoasasasas.pptx
ย 
VIP Call Girls Service Charbagh { Lucknow Call Girls Service 9548273370 } Boo...
VIP Call Girls Service Charbagh { Lucknow Call Girls Service 9548273370 } Boo...VIP Call Girls Service Charbagh { Lucknow Call Girls Service 9548273370 } Boo...
VIP Call Girls Service Charbagh { Lucknow Call Girls Service 9548273370 } Boo...
ย 
Dubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls DubaiDubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls Dubai
ย 
Unveiling Insights: The Role of a Data Analyst
Unveiling Insights: The Role of a Data AnalystUnveiling Insights: The Role of a Data Analyst
Unveiling Insights: The Role of a Data Analyst
ย 
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in  KishangarhDelhi 99530 vip 56974 Genuine Escort Service Call Girls in  Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
ย 
High Class Call Girls Noida Sector 39 Aarushi ๐Ÿ”8264348440๐Ÿ” Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi ๐Ÿ”8264348440๐Ÿ” Independent Escort...High Class Call Girls Noida Sector 39 Aarushi ๐Ÿ”8264348440๐Ÿ” Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi ๐Ÿ”8264348440๐Ÿ” Independent Escort...
ย 
FESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdfFESE Capital Markets Fact Sheet 2024 Q1.pdf
FESE Capital Markets Fact Sheet 2024 Q1.pdf
ย 
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptxEMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptx
ย 
Full night ๐Ÿฅต Call Girls Delhi New Friends Colony {9711199171} Sanya Reddy โœŒ๏ธo...
Full night ๐Ÿฅต Call Girls Delhi New Friends Colony {9711199171} Sanya Reddy โœŒ๏ธo...Full night ๐Ÿฅต Call Girls Delhi New Friends Colony {9711199171} Sanya Reddy โœŒ๏ธo...
Full night ๐Ÿฅต Call Girls Delhi New Friends Colony {9711199171} Sanya Reddy โœŒ๏ธo...
ย 
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
ย 
Call Girls in Defence Colony Delhi ๐Ÿ’ฏCall Us ๐Ÿ”8264348440๐Ÿ”
Call Girls in Defence Colony Delhi ๐Ÿ’ฏCall Us ๐Ÿ”8264348440๐Ÿ”Call Girls in Defence Colony Delhi ๐Ÿ’ฏCall Us ๐Ÿ”8264348440๐Ÿ”
Call Girls in Defence Colony Delhi ๐Ÿ’ฏCall Us ๐Ÿ”8264348440๐Ÿ”
ย 
Call Girls In Mahipalpur O9654467111 Escorts Service
Call Girls In Mahipalpur O9654467111  Escorts ServiceCall Girls In Mahipalpur O9654467111  Escorts Service
Call Girls In Mahipalpur O9654467111 Escorts Service
ย 

Analysis of large scale spiking networks dynamics with spatio-temporal constraints: application to Multi-Electrodes acquisitions in the retina

  • 1. Analyzing large scale spike trains with spatio-temporal constraints: application to retinal data. Hassan Nasser; supervised by Prof. Bruno Cessac.
  • 2. ๐‘…๐‘’๐‘™๐‘Ž๐‘ก๐‘–๐‘œ๐‘› ๐‘…|๐‘† ๐‘ƒ ๐‘… ๐‘† Response variability Biological neural network Stimulus Spike Response S R Neural prosthetics Bio-inspired technologies Time (ms) Trial 2
  • 6. Probabilistic models. Maximum entropy: Ising (Schneidman et al. 06) and triplets (Ganmor et al. 09) are spatial, with no memory; the 1 time-step memory model (Marre et al. 09) is limited to 1 time step of memory; the general spatio-temporal framework (Vasquez et al. 12) is limited to small scale. Point-process models (Generalized Linear Model, Linear-Nonlinear model, Hawkes) treat neurons as conditionally independent given the past. Meanwhile, the number of recorded neurons doubles every 8 years!
  • 7. Goal: develop a framework to fit spatio-temporal maximum entropy models on large scale spike trains. Outline: definitions (basic concepts; maximum entropy principle, spatial and spatio-temporal); Monte Carlo in the service of large neural spike trains; fitting parameters (tests on synthetic data; application to real data); the EnaS software; discussion.
  • 8. (Goal and outline repeated.)
  • 9. Spike objects. Spike train: $\omega$. Spike pattern: $\omega(t)$, the activity of all neurons at time $t$. Spike block: $\omega_{t_1}^{t_2}$. Spike event: $\omega_i(t)$, where $i$ is the neuron index and $t$ the time. From a spike train of length $T$, one estimates the empirical probability $\pi_{\omega^T}$ of a block, pattern, ...
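The empirical probability $\pi_{\omega^T}$ of spike blocks can be sketched in a few lines. A minimal Python illustration, assuming a binary raster representation; the bytes-key encoding of a block is my own convention, not the talk's:

```python
import numpy as np

def empirical_block_probability(raster, depth):
    """Estimate pi_{omega^T} for all spike blocks of the given depth.

    raster: (N, T) binary array; raster[i, t] is the spike event omega_i(t).
    Returns a dict mapping each observed block (encoded as bytes) to its
    empirical frequency over the T - depth + 1 sliding windows.
    """
    N, T = raster.shape
    counts = {}
    n_blocks = T - depth + 1
    for t in range(n_blocks):
        block = raster[:, t:t + depth].tobytes()  # block omega_t^{t+depth-1}
        counts[block] = counts.get(block, 0) + 1
    return {b: c / n_blocks for b, c in counts.items()}

# Toy raster: 2 neurons, 6 time bins.
raster = np.array([[1, 0, 0, 1, 0, 1],
                   [0, 1, 0, 0, 1, 0]], dtype=np.uint8)
probs = empirical_block_probability(raster, depth=2)
```

The frequencies sum to 1 by construction, and a block that occurs twice among the five windows gets probability 2/5.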
  • 10. Confidence plot: predicted probability vs. observed probability for each pattern, with upper bound ($+3\sigma$) and lower bound ($-3\sigma$) confidence lines. [Figure: scatter of predicted vs. observed pattern probabilities.]
  • 11. Monomials: $m_l(\omega) = \prod_r \omega_{i_r}(t_r)$, which equals 1 iff $\omega_{i_r}(t_r) = 1$ for all $r$, and 0 otherwise. Examples: pairwise (e.g. $\omega_0(47)\,\omega_1(47)$), pairwise with 1 time-step delay (e.g. $\omega_7(40)\,\omega_8(41)$), triplet (e.g. $\omega_2(21)\,\omega_4(23)\,\omega_6(28)$). Stationarity: $\pi_{\omega^T}(m_l)$ does not change over the spike train.
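A monomial and its empirical average can be sketched as follows; this is an illustrative snippet, where the event-list format and the random test raster are my own assumptions:

```python
import numpy as np

def monomial_average(raster, events):
    """Empirical average pi_{omega^T}(m_l) of the monomial
    m_l(omega) = prod_r omega_{i_r}(t_r), where events = [(i_r, t_r), ...]
    and t_r is a time offset inside the block (0 <= t_r <= D)."""
    N, T = raster.shape
    depth = max(t for _, t in events)
    vals = [all(raster[i, t0 + t] == 1 for i, t in events)
            for t0 in range(T - depth)]
    return float(np.mean(vals))

# Independent Bernoulli(0.3) raster, 3 neurons, 2000 bins.
rng = np.random.default_rng(0)
raster = (rng.random((3, 2000)) < 0.3).astype(np.uint8)

rate_0  = monomial_average(raster, [(0, 0)])           # omega_0(0)
pair_01 = monomial_average(raster, [(0, 0), (1, 0)])   # omega_0(0) omega_1(0)
delayed = monomial_average(raster, [(0, 0), (1, 1)])   # omega_0(0) omega_1(1)
```

For independent neurons the pairwise averages factorize (about 0.3 for the rate, about 0.09 for the products), which is a quick sanity check of the estimator.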
  • 12. Imagine a spatial case: $2^N$ possible patterns/states. Given some measure, take as constraints the $N$ individual activity monomials $m_l = \omega_i(0)$ and the $N^2$ pairwise correlation monomials $m_l = \omega_i(0)\,\omega_j(0)$, with averages $\langle m_l \rangle$ over $\omega(t)$. Since $2^N \gg N + N^2$, the constraints do not determine the distribution; which one to pick? Maximum entropy: maximize $\mathcal{S}(\mu) = -\sum_{\omega(0)} \mu[\omega(0)] \log \mu[\omega(0)]$ under the constraints $\mu[m_l] = \pi_{\omega^T}[m_l]$.
  • 13. Spatial models. The sought distribution maximizes the statistical entropy under the constraints: $\mu = \arg\max_{\nu \in \mathcal{M}} \mathcal{S}(\nu) + \lambda_0 \big( \sum_{\omega(0)} \nu[\omega(0)] - 1 \big) + \sum_{l=1}^{L} \lambda_l \big( \nu[m_l] - \pi_{\omega^T}[m_l] \big)$, where $\mathcal{S}(\nu) = -\sum_{\omega(0)} \nu[\omega(0)] \log \nu[\omega(0)]$, the $\lambda_l$ are the parameters, $\lambda_0$ enforces normalization, and $\pi_{\omega^T}$ is the empirical measure. After fitting the parameters, the predicted measure is $\mu[\omega(0)] = \frac{1}{Z_\lambda} e^{\mathcal{H}(\omega(0))}$, with potential $\mathcal{H}_\lambda(\omega(0)) = \sum_l \lambda_l m_l$ and partition function $Z_\lambda = \sum_{\omega(0)} e^{\mathcal{H}(\omega(0))}$: the Ising model.
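For small $N$ the Ising model above can be evaluated by brute-force enumeration of the $2^N$ states; a minimal sketch, with arbitrary illustrative parameter values:

```python
import itertools
import numpy as np

def ising_distribution(h, J):
    """Spatial (Ising) maximum entropy model for small N, by enumeration:
    H(omega) = sum_i h_i omega_i + sum_{i<j} J_ij omega_i omega_j,
    mu[omega] = e^{H(omega)} / Z.  Feasible only while 2^N stays small."""
    N = len(h)
    states = list(itertools.product([0, 1], repeat=N))
    weights = []
    for s in states:
        H = sum(h[i] * s[i] for i in range(N))
        H += sum(J[i][j] * s[i] * s[j]
                 for i in range(N) for j in range(i + 1, N))
        weights.append(np.exp(H))
    Z = sum(weights)                      # partition function Z_lambda
    return states, [w / Z for w in weights]

h = [-1.0, -1.0, -0.5]
J = [[0, 0.8, 0.0], [0, 0, 0.3], [0, 0, 0]]
states, mu = ising_distribution(h, J)
# Predicted firing rate mu[omega_0(0)]: sum over states with omega_0 = 1.
rate0 = sum(p for s, p in zip(states, mu) if s[0] == 1)
```

This is exactly the computation that stops scaling: the state list has $2^N$ entries, which motivates the transfer-matrix and Monte Carlo machinery of the later slides.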
  • 14. Prediction with a spatial model: spatial patterns (observed vs. predicted probability).
  • 15. Prediction with a spatial model: spatio-temporal patterns of memory depth 1, predicted with $\mu = \mu \times \mu$.
  • 16. Prediction with a spatial model: spatio-temporal patterns of memory depth 2, predicted with $\mu = \mu \times \mu \times \mu$.
  • 17. Same plot, with the memory patterns highlighted. [Figures: observed vs. predicted probability scatter plots.]
  • 18-20. The spike train as a Markov chain: the probability of the present pattern depends on the $D$ previous ones, through the transition probabilities $P[\omega(n) \mid \omega_{n-D}^{n-1}]$, $P[\omega(n-1) \mid \omega_{n-D-1}^{n-2}]$, $P[\omega(n-2) \mid \omega_{n-D-2}^{n-3}]$, ... By the Chapman-Kolmogorov equation, $\mu[\omega_0^n] = \mu[\omega_0^D] \cdot P[\omega(D+1) \mid \omega_0^D] \cdots P[\omega(n) \mid \omega_{n-D}^{n-1}]$.
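The Chapman-Kolmogorov factorization above can be illustrated for memory depth $D = 1$ with two patterns; the transition matrix below is a toy example, not from the data:

```python
import numpy as np

def block_probability(mu0, P, symbols):
    """Chapman-Kolmogorov factorization for memory depth D = 1:
    mu[omega_0^n] = mu[omega(0)] * prod_t P[omega(t) | omega(t-1)].
    mu0: initial pattern distribution; P[a, b] = P(next = b | current = a)."""
    p = mu0[symbols[0]]
    for a, b in zip(symbols[:-1], symbols[1:]):
        p *= P[a, b]
    return p

# Two patterns (e.g. N = 1 neuron: silent / spiking), toy transition matrix.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
mu0 = np.array([5 / 6, 1 / 6])            # its stationary distribution
p_block = block_probability(mu0, P, [0, 0, 1, 0])
```

Starting the product from the stationary distribution is what makes the block probabilities consistent with stationarity.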
  • 21-22. Markov states with memory: $w$ and $w'$ are blocks of size $D$. A transition $w \to w'$ is legal when the blocks overlap consistently (the last $D-1$ patterns of $w$ equal the first $D-1$ patterns of $w'$); otherwise it is illegal. Transition weights: $w \to w' = e^{\mathcal{H}(\omega_0^D)}$ for a legal transition and $w \to w' = 0$ for an illegal one; these weights are collected in the matrix $\mathcal{L}_{w'w}$.
  • 23. Transfer matrix (non-normalized): $\mathcal{L}_{w'w} = e^{\mathcal{H}(\omega_0^D)}$ if $w \to w'$ is a legal transition, 0 otherwise. Perron-Frobenius theorem: the largest eigenvalue $s_\lambda$, with right eigenvector $R(\cdot)$ and left eigenvector $L(\cdot)$. Using the Chapman-Kolmogorov equation: $\mu[\omega_0^n] = \dfrac{e^{\mathcal{H}(\omega_0^n)}}{s_\lambda^{\,n-D+1}}\, R(\omega_{n-D}^n)\, L(\omega_0^D)$. The pressure $\mathcal{P}(\mathcal{H}) = \log s_\lambda$ gives the average of the monomials, $\mu[m_l] = \partial \mathcal{P}(\mathcal{H}) / \partial \lambda_l$, and allows direct computation of the Kullback-Leibler divergence, $d_{KL}(\pi_{\omega^T}, \mu_\lambda) = \mathcal{P}(\mathcal{H}) - \pi_{\omega^T}[\mathcal{H}] - \mathcal{S}[\pi_{\omega^T}]$ (pressure, empirical average of the potential, entropy). The matrix has size $2^{ND} \times 2^{ND}$.
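A toy transfer-matrix computation of the pressure and of $\mu[m_l] = \partial\mathcal{P}/\partial\lambda_l$ (here by finite differences): the potential, a one-neuron self-correlation with a 1 time-step delay, and the tiny sizes are illustrative assumptions, and $D = 1$ keeps every transition legal:

```python
import numpy as np

def transfer_matrix_pressure(N, lam_pair):
    """Pressure P = log s_lambda for a toy potential with D = 1:
    H(omega_t^{t+1}) = lam_pair * omega_0(t) * omega_0(t+1).
    States w are spike patterns (blocks of size D = 1), so the transfer
    matrix is 2^N x 2^N and every transition is legal."""
    S = 2 ** N
    L = np.zeros((S, S))
    for w in range(S):
        for wp in range(S):
            # w & 1 and wp & 1 read neuron 0's state in patterns w and w'.
            L[wp, w] = np.exp(lam_pair * (w & 1) * (wp & 1))
    s = np.max(np.linalg.eigvals(L).real)   # Perron-Frobenius eigenvalue
    return np.log(s)

# mu[m_l] = dP / d lambda_l, here via a centered finite difference.
eps = 1e-5
mu_ml = (transfer_matrix_pressure(2, 0.5 + eps)
         - transfer_matrix_pressure(2, 0.5 - eps)) / (2 * eps)
```

At $\lambda = 0$ the pressure reduces to $N \log 2$ (uniform measure over patterns), a convenient check of the construction.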
  • 24. Fitting loop: (1) setting the constraints: compute the empirical distribution $\pi_{\omega^T}(m_l)$; (2) start from a random set of parameters; (3) compute the predicted distribution $\mu_{\lambda^{(it)}}[m_l]$ with the transfer matrix; (4) compare and update the parameters; iterate until reaching the final set of parameters and the predicted distribution.
  • 25. Limitation of the transfer matrix: $\mathcal{L}_{w'w}$ is a $2^{ND} \times 2^{ND}$ matrix with entries $\mathcal{L}_{w'w}(i, j) \in \mathbb{R}$, so even at 1 byte per entry the memory need is $2^{ND} \times 2^{ND} = 2^{2ND}$ bytes, growing exponentially with the number of neurons. For 20 neurons and range $R = D + 1 = 3$: 1,099,511,627,776 TB.
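The memory estimate is a one-liner; assuming, as the slide does, 1 byte per matrix entry:

```python
def transfer_matrix_bytes(N, D, bytes_per_entry=1):
    """Memory for the dense 2^{ND} x 2^{ND} transfer matrix, in bytes."""
    return bytes_per_entry * (2 ** (N * D)) ** 2

# 20 neurons, range R = D + 1 = 3, i.e. D = 2.
tb = transfer_matrix_bytes(20, 2) / 2 ** 40   # convert bytes to terabytes
```

This reproduces the slide's figure of roughly $1.1 \times 10^{12}$ TB, which is why the dense transfer matrix is unusable at large scale.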
  • 26. The boundary is $NR = 20$: at small scale ($NR \le 20$), use the transfer matrix; at large scale ($NR > 20$), use Monte Carlo.
  • 27. (The fitting loop repeated: at large scale, the transfer-matrix step computing the predicted distribution $\mu_{\lambda^{(it)}}[m_l]$ has to be replaced.)
  • 28. Goal and outline (repeated); next: Monte Carlo in the service of large neural spike trains.
  • 29. Metropolis-Hastings (1970). Given $\lambda$, sample $\sim \mu_\lambda[m_l]$ from a chain of Monte Carlo states. Transition between states: $P[\omega^{n,(1)} \to \omega^{n,(2)}] = \min\!\left( \frac{Q(\omega^{n,(1)} \mid \omega^{n,(2)})}{Q(\omega^{n,(2)} \mid \omega^{n,(1)})} \times \frac{\mu[\omega^{n,(2)}]}{\mu[\omega^{n,(1)}]},\ 1 \right)$, where $Q$ is the proposal function, symmetric in the Metropolis algorithm: $Q(\omega^{(1)} \to \omega^{(2)}) = Q(\omega^{(2)} \to \omega^{(1)})$. The difficulty: $\mu[\omega_0^n] = \frac{e^{\mathcal{H}(\omega_0^n)}}{s_\lambda^{\,n-D+1}}\, R(\omega_{n-D}^n)\, L(\omega_0^D)$, where $R$, $L$, $s_\lambda$ are unknown.
  • 30-33. Avoiding $R$ and $L$: the acceptance ratio is $\frac{\mu[\omega^{n,(2)}]}{\mu[\omega^{n,(1)}]} = \frac{e^{\mathcal{H}(\omega_0^{n,(2)})}\; s_\lambda^{-(n-D+1)}\; R(\omega_{n-D}^{n,(2)})\; L(\omega_0^{D,(2)})}{e^{\mathcal{H}(\omega_0^{n,(1)})}\; s_\lambda^{-(n-D+1)}\; R(\omega_{n-D}^{n,(1)})\; L(\omega_0^{D,(1)})}$. The factors $s_\lambda^{n-D+1}$ cancel, and when the flipped event lies away from the boundary blocks $\omega_0^D$ and $\omega_{n-D}^n$, the $R$ and $L$ factors cancel as well, leaving $\frac{\mu[\omega^{n,(2)}]}{\mu[\omega^{n,(1)}]} = \frac{e^{\mathcal{H}(\omega_0^{n,(2)})}}{e^{\mathcal{H}(\omega_0^{n,(1)})}} = e^{\Delta\mathcal{H}_\lambda}$.
  • 34. Algorithm review. Start: a random spike train of $N$ neurons and length $n$, with parameters $\lambda$. Loop ($N_{flip}$ times): choose a random event between $[D+1, n-D-1]$ and flip it; compute $e^{\Delta\mathcal{H}_\lambda}$ (only the terms within $[-D, +D]$ of the flip change); if $e^{\Delta\mathcal{H}_\lambda} > \epsilon$, with $\epsilon$ drawn uniformly in $[0, 1]$, accept the change, otherwise reject it. Result: an updated Monte Carlo spike train.
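A minimal sketch of the flip sampler; for simplicity it uses a purely spatial pairwise potential, so $\Delta\mathcal{H}$ is local to one time bin and there is no boundary issue. The potential and all parameter values are illustrative assumptions, not the talk's:

```python
import numpy as np

def metropolis_sample(N, n, lam_h, lam_J, n_flips, seed=0):
    """Flip sampler sketch with the spatial pairwise potential
    H(omega) = sum_t [ sum_i h_i w_i(t) + sum_{i<j} J_ij w_i(t) w_j(t) ],
    so flipping omega_i(t) only changes terms in column t."""
    rng = np.random.default_rng(seed)
    omega = rng.integers(0, 2, size=(N, n))        # random initial spike train
    for _ in range(n_flips):
        i, t = rng.integers(N), rng.integers(n)    # choose a random event
        d = 1 - 2 * omega[i, t]                    # +1 if 0 -> 1, -1 if 1 -> 0
        dH = lam_h[i] * d                          # local energy change
        for j in range(N):
            if j != i:
                dH += lam_J[i, j] * omega[j, t] * d
        if np.exp(dH) > rng.random():              # accept with prob min(1, e^dH)
            omega[i, t] = 1 - omega[i, t]
    return omega

N, n = 3, 500
omega = metropolis_sample(N, n, lam_h=np.full(3, -2.0),
                          lam_J=np.zeros((3, 3)), n_flips=20000)
rate = omega.mean()    # with J = 0, events approach Bernoulli(sigmoid(-2))
```

Comparing `np.exp(dH)` against a uniform draw implements exactly the acceptance rule of the slide: moves that increase the potential are always accepted, the rest with probability $e^{\Delta\mathcal{H}_\lambda}$.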
  • 35. Hassan Nasser, Olivier Marre, and Bruno Cessac. Spike trains analysis using Gibbs distributions and Monte Carlo method. Journal of Statistical Mechanics: Theory and Experiment, 2013.
  • 36. The fitting loop with Monte Carlo now computing the predicted distribution $\mu_{\lambda^{(it)}}[m_l]$; how to update the parameters remains the open question.
  • 37. Goal and outline (repeated); next: fitting parameters.
  • 38. Fitting parameters / concept Maximizing entropy (difficult because computing the exact entropy intractable) โ‰ก minimizing the divergence ๐‘‘ ๐พ๐ฟ ๐œ‹ ๐œ” ๐‘‡ , ๐œ‡ ๐€ = ๐’ซ ๐€ โˆ’ ๐œ‹ ๐œ” ๐‘‡ โ„‹ โˆ’ ๐’ฎ[๐œ‹ ๐œ” ๐‘‡ ] Dudรญk, M., Phillips, S., and Schapire, R. (2004). Performance guarantees for regularized maximum entropy density estimation. Proceedings of the 17th Annual Conference on Computational Learning Theory. Small scale: easy to compute Large scale: hard to compute - Bounding the negative log likelihood Divergence Iterations Big ๐‘‘ ๐พ๐ฟ Small ๐‘‘ ๐พ๐ฟ 38 - Relaxation
  • 40. Fitting parameters / concept: bounding the divergence, with relaxation. Hassan Nasser and Bruno Cessac. Parameters fitting for spatio-temporal maximum entropy distributions: application to neural spike trains. Submitted to Entropy.
  • 41. Cost function, for a parameter update λ' = λ + δ: C_f = d_KL(π_{ω^T}, μ_{λ'}) − d_KL(π_{ω^T}, μ_λ) = 𝒫(λ') − 𝒫(λ) − π_{ω^T}[Δℋ_λ], i.e. C_f = lim_{n→∞} (1/n) log Σ_{ω_0^{n−1}} μ_λ(ω_0^{n−1}) e^{Δℋ_λ(ω_0^{n−1})} − π_{ω^T}[Δℋ_λ] (estimated using Monte Carlo). Sequential bound (one parameter at a time): C_f^{seq} ≤ (1/(n − D)) [ −δ π_{ω^T}(m_l) + log(1 + (e^δ − 1) μ_λ(m_l)) + ε(|λ + δ| − |λ|) ]. Parallel bound (L = number of parameters): C_f^{par} ≤ (1/(L(n − D))) Σ_l [ −δ_l π_{ω^T}(m_l) + log(1 + (e^{δ_l} − 1) μ_λ(m_l)) + ε_l(|λ_l + δ_l| − |λ_l|) ]. The ε terms are the relaxation terms (> 0).
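The sequential bound above has a closed-form minimizer in δ once the ε-relaxation term is dropped. The sketch below (illustrative names, not the EnaS code) applies it to a toy independent-neuron model in which the predicted average is the logistic μ_l = e^{λ_l}/(1 + e^{λ_l}); on that toy model the update converges in a single step.

```python
import math

def sequential_delta(pi_l, mu_l):
    """Closed-form minimizer of the sequential bound
    -delta * pi(m_l) + log(1 + (e^delta - 1) * mu(m_l))
    (epsilon-relaxation term dropped, a sketch assumption):
    delta = log( pi (1 - mu) / (mu (1 - pi)) )."""
    return math.log(pi_l * (1 - mu_l) / (mu_l * (1 - pi_l)))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy independent-neuron model: the predicted average of monomial m_l is
# mu_l = sigmoid(lam_l), so lam_l += delta lands exactly on logit(pi_l).
pi = [0.1, 0.3]          # target empirical averages pi(m_l)
lam = [0.0, 0.0]
for l in range(2):
    lam[l] += sequential_delta(pi[l], sigmoid(lam[l]))
mu = [sigmoid(x) for x in lam]
```

When π = μ the bound is minimized at δ = 0, so a converged model produces no further update.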
  • 42. Fitting loop: set the constraints and compute the empirical distribution π_{ω^T}(m_l); start from a random set of parameters; compute the predicted distribution μ_{λ(it)}(m_l) by Monte Carlo (in place of the exact predicted distribution); compare; update the parameters (fitting); iterate until the final set of parameters.
  • 43. Updating the target distribution. A new λ → a new distribution, since μ = function(λ). Rather than re-sampling, use a Taylor expansion around the previous (Monte Carlo) distribution, valid if ||δ|| is small: μ_{λ+δ}(m_l) = μ_λ(m_l) + Σ_k (∂²𝒫[λ]/∂λ_l∂λ_k) δ_k + (1/2) Σ_{j,k} (∂³𝒫[λ]/∂λ_l∂λ_j∂λ_k) δ_j δ_k + …, with ∂²𝒫[λ]/∂λ_j∂λ_k = Σ_{n=−∞}^{+∞} C_jk(n), heavy to compute; the exponential decay of correlations means that in practice the sum over n is finite.
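The first-order term of the expansion above can be sketched as a matrix-vector update (illustrative names; in practice χ would be the time-summed correlation matrix estimated from the Monte Carlo raster, which is the expensive part):

```python
def taylor_update(mu, chi, delta):
    """First-order Taylor update of the predicted averages when lam -> lam + delta:
    mu'(m_l) ~= mu(m_l) + sum_k chi[l][k] * delta[k],
    where chi[l][k] = d^2 P / d lam_l d lam_k is the time-summed correlation
    sum_n C_lk(n), passed in here as a plain matrix. Valid for small ||delta||;
    avoids a fresh Monte Carlo run."""
    L = len(mu)
    return [mu[l] + sum(chi[l][k] * delta[k] for k in range(L)) for l in range(L)]

updated = taylor_update([0.2, 0.4], [[1.0, 0.0], [0.0, 2.0]], [0.01, -0.01])
```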
  • 47. Synthetic data sets: sparse and dense (rate parameters and higher-order parameters).
  • 48. Synthetic spike train: N = 20, R = 3, NR = 60. Random (known) parameters/monomials; try to recover the parameters; errors on the parameters, for the dense and sparse cases.
  • 49. Comparing block probabilities: N = 40, spatial (NR = 40) vs. spatio-temporal (NR = 80).
  • 50. Application on retinal data: purely spatial pairwise (Schneidman et al 2006) and pairwise with 1 time-step memory, binned at 20 ms. Data courtesy: Michael J. Berry II (Princeton University) and Olivier Marre (Institut de la vision, Paris).
  • 51. Real data, 20 neurons: spatial pairwise vs. spatio-temporal pairwise.
  • 52. Real data, 40 neurons: spatial pairwise vs. spatio-temporal pairwise.
  • 53. Goal: develop a framework to fit spatio-temporal maximum entropy models on large-scale spike trains. Outline: Definitions (basic concepts; maximum entropy principle, spatial & spatio-temporal); Monte Carlo in the service of large neural spike trains; Fitting parameters (tests on synthetic data; application on real data); The EnaS software; Discussion.
  • 54. Event neural assembly Simulation (EnaS). V1 2007, V2 2010, V3 2014. Thierry Viéville, Bruno Cessac, Juan-Carlos Vasquez / Horacio Rostro-Gonzalez / Hassan Nasser, Selim Kraria + graphical user interface. Goal: analyzing spike trains; sharing research advances with the community. C++ & Qt (interfaces: Java, Matlab, Python).
  • 55. Architecture of EnaS. RasterBlock (contributions): data management, formats, empirical statistics, grammar. Gibbs Potential (contributions): defining models, generating artificial spike trains, fitting, Monte Carlo process (parallelization). Graphical user interface: interactive environment, visualization of stimulus and response simultaneously, demo.
  • 56. Grammar: store only the observed transitions (at most T − D of them), instead of the full transfer matrix of size 2^{ND}. Needed in: computing the empirical distribution, Monte Carlo sampling, divergence and entropy computation, confidence bounds. [Example rasters shown.]
  • 57. Grammar data structure (N = 3, D = 2): a transition is a prefix (the D previous spike patterns) followed by a suffix (the current spike pattern), read off the raster.
  • 58. Each new transition encountered in the raster adds a (prefix, suffix) entry to the structure.
  • 59. When the same transition reappears, its counter is incremented: this transition appears 2 times!
  • 60. A new suffix for an already-stored prefix adds a new entry.
  • 61. Map: C++ data container, keeping the transitions sorted in a chosen order.
  • 62. Map: C++ data container; the repeated transition appeared two times!
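The grammar of slides 56-62 can be sketched with a dictionary playing the role of the C++ map (a minimal sketch; the key layout and names are illustrative, not the EnaS data structure):

```python
from collections import Counter

def grammar(raster, D):
    """Dictionary of observed transitions ('grammar') for a binary raster:
    the key is (prefix, suffix), with prefix = the D spike patterns
    omega(t-D), ..., omega(t-1) and suffix = the pattern omega(t).
    At most T - D entries are stored, versus the 2^{N D} rows a transfer
    matrix would need."""
    counts = Counter()
    n_times = len(raster[0])
    for t in range(D, n_times):
        prefix = tuple(tuple(row[s] for row in raster) for s in range(t - D, t))
        suffix = tuple(row[t] for row in raster)
        counts[(prefix, suffix)] += 1
    return counts

# N = 3 neurons, T = 5 time steps, D = 2 (same dimensions as the slides);
# the transitions at t = 2 and t = 4 are identical, so one entry has count 2.
raster = [[0, 1, 0, 1, 0],
          [1, 0, 1, 0, 1],
          [0, 1, 0, 1, 0]]
g = grammar(raster, 2)
```

`Counter` stands in for the sorted `std::map`: both give one entry per observed transition with its multiplicity; only the iteration order differs.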
  • 63. Architecture of EnaS. RasterBlock: data management, grammar, empirical statistics, … Gibbs Potential: defining models, generating artificial spike trains, fitting, Monte Carlo process (parallelization). Graphical user interface: interactive environment, visualization of stimulus and response simultaneously, demo.
  • 64. Parallelization of the Monte Carlo process over the N_times columns of the raster. Personal multi-processor computer: 2-8 processors (OpenMP); cluster: 64-processor machines at INRIA. MPI: more processors, but more time consuming in our case. [Raster-splitting figure.]
  • 65. Example with 2 processors: processor 1 and processor 2 each handle their own part of the raster.
  • 67. Architecture of EnaS. RasterBlock: data management, grammar, empirical statistics, … Gibbs Potential: defining models, generating artificial spike trains, fitting, Monte Carlo process (parallelization). Graphical user interface: interactive environment, visualization of stimulus and response simultaneously, demo.
  • 68. EnaS demo. Data courtesy: Gerrit Hilgen, Newcastle University, Institute of Neuroscience, United Kingdom. Interface design: Selim Kraria.
  • 70. Hassan Nasser, Selim Kraria, Bruno Cessac. EnaS: a new software for analyzing large scale spike trains. In preparation. 70
  • 72. Synthetic data vs. real data. Synthetic data: the potential shape is known (the monomials are known) → fitting only. Real data: the potential shape is unknown (the monomials are unknown) → guessing the shape + fitting.
  • 73. Monomials. Canonical models: Ising, pairwise with delay, triplets, … Fine at small scale; at large scale (e.g. 40 neurons): big computation time, non-observed monomials, estimation errors → pre-selection of monomials (Rodrigo Cofre & Bruno Cessac).
  • 74. Making sense of parameters: the model parameters evaluate the importance of particular types of correlations, and open the possibility of generalizing the model prediction to a new stimulus.
  • 75. A stationary maximum entropy model, given a new stimulus, provides (1) statistics but (2) no new response. Stimulus-dependent maximum entropy models (Granot-Atedgi et al 13): a new stimulus yields a new spike response.
  • 76. EnaS now: stimulus → retina → spike sorting → spike train; visualization + empirical analysis + maximum entropy modelling. Future: more empirical-observation packages and neural-coding functionalities; spike sorting; receptive fields and neuron selection; type identification; stimulus design and feature extraction; retina models (VirtualRetina).
  • 77. Next… Starting a company in IT/data analytics: first prize in an innovative-project competition (UNICE Foundation); current project: orientation in education using real surveys; EnaS is in perspective, in collaboration with INRIA. Caty Conraux & Vincent Tricard.
  • 78. Thanks to collaborators: Adrian Palacios, Olivier Marre, Michael J. Berry II, Gašper Tkačik, Thierry Mora.
  • 80. Appendix • Tuning N_times. • Tuning N_flip. • Validating the Monte Carlo algorithm. • Tuning delta. • MPI vs. OpenMP, memory. • Why MPI is not better than OpenMP. • Computational complexity of the Monte Carlo algorithm. • Review of Monte Carlo / N_flip. • Number of iterations for fitting. • Fluctuations on parameters / non-existing monomials. • Epsilon on fitting parameters. • Binning. • Tests with several stimuli. • Granot-Atedgi et al 2013.
  • 81. Models with random parameters λ.
  • 82. Tuning N_times: d_KL as a function of N_times, for the dense and sparse models. [Plots.]
  • 83. Tuning k ≡ N_flip, with N_flip = k × N × N_times: d_KL(k) between the Monte Carlo estimate and the exact transfer-matrix computation of λ, for the dense and sparse models (k = 10 and k = 50 shown). [Plots.]
  • 84. Taylor expansion (δ test): accuracy as a function of ||δ||, for the dense and sparse models. [Plots.]
  • 85. Parallelization. Multiprocessor computers: personal computer (2-8 processors); cluster (64-processor machines at INRIA). Parallel programming frameworks: OpenMP — the processors of the same computer divide the tasks (live memory (RAM) is shared); MPI — several processors on each computer share the task (memory is not shared). 4 processors → time/4; 64 processors → time/64.
  • 86. MPI. OpenMP is limited to the number of processors on a single machine; with MPI, 64 processors × 10 machines → 640 processors. Although we thought it would take less time with MPI, it did not: the whole Monte Carlo spike train sits on the master computer, with clusters of 64 processors working on it; at each change of the memory there is a communication between the clusters and the master, i.e. at each flip → more time is lost in communication than gained in computing.
  • 88. Computational complexity. Time taken for running this algorithm: computing time = k · N · N_times · t_{Δℋ_λ}, where t_{Δℋ_λ} = f(L): (1) there is a loop over the k · N · N_times flips; (2) in each loop, computing e^{Δℋ_λ} requires a loop over the monomials. On a cluster of 64 processors: 40 neurons, Ising: 10 min; 40 neurons, pairwise: 20 min.
  • 89. Algorithm review (tuning). Start: a random spike train (N neurons, length N_times) and the parameters λ. Loop (N_flip = k × N × N_times times): choose a random event at a time index in [D, N_times − D] and flip it; compute e^{Δℋ_λ}; draw ε ∈ [0, 1]; if e^{Δℋ_λ} > ε, accept the change, otherwise reject it. Output: the updated Monte Carlo spike train.
  • 90. How many iterations do we need? NR < 20: 50 parallel + 100 sequential. NR < 150: 100 parallel + 100 sequential.
  • 91. ε on parameter fitting. Dudík et al. does not allow β_l > π_l or β_l > 1 − π_l; in that case we set β_l = 0.9 π_l. We avoid dividing by 0 (… μ_l …) by setting λ_l = −∞.
  • 92. Problem of non-observed monomials. By the central limit theorem there are fluctuations η on the monomial averages: μ_λ(m_l) = ∂𝒫/∂λ_l, and μ_{λ*}(m_l) + η_l = ∂𝒫/∂λ_l + Σ_j β_j ∂²𝒫/∂λ_l∂λ_j + … Hence λ = λ* + β (fluctuations on the parameters), with β = 𝒳^{−1} η, where 𝒳_ij = ∂²𝒫/∂λ_i∂λ_j is the covariance matrix (𝒫 is convex). Computing 𝒳 over 1000 potentials shows that a big percentage of 𝒳 is zero → 𝒳^{−1} has big values → the fluctuations β are big.
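The amplification mechanism described above can be illustrated on a 2 × 2 example (a sketch, not the thesis's actual susceptibility matrices): solving χβ = η by Cramer's rule shows how a nearly singular χ turns small fluctuations η on the averages into large fluctuations β on the parameters.

```python
def solve2(chi, eta):
    """Solve chi * beta = eta for a 2x2 matrix by Cramer's rule
    (beta = chi^{-1} eta). Illustrates how a nearly singular covariance
    matrix chi amplifies small CLT fluctuations eta on the averages into
    large fluctuations beta on the parameters."""
    (a, b), (c, d) = chi
    det = a * d - b * c
    return [(d * eta[0] - b * eta[1]) / det,
            (a * eta[1] - c * eta[0]) / det]

eta = [0.01, -0.01]                                   # small fluctuation
beta_good = solve2([[1.0, 0.0], [0.0, 1.0]], eta)     # well-conditioned chi
beta_bad = solve2([[1.0, 0.999], [0.999, 1.0]], eta)  # nearly singular chi
```

Here `beta_good` stays of the same order as η, while `beta_bad` is amplified by roughly 1/det.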
  • 93. Binning changes the statistics completely: 700% more new patterns appear when we bin at 20. This should be studied rigorously.
  • 94. Binning example: 2 neurons + a bin of 5 → each binned pattern stands for a 2 × 5 block of the raw raster, i.e. up to 2^10 underlying configurations. [Raster figure.]
  • 95. Binning: loss of information; losing the biological time scale; a denser spike train; fewer non-observed monomials. Why have spike trains been binned in the literature? No clear answer; the relation between binning and taking it as a substitute for memory is not convincing; it might be because binning yields more observed monomials → less dangerous for convexity → convergence is better guaranteed.
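Binning itself is a one-liner; the sketch below uses the usual OR convention (at least one spike in the window ⇒ a 1 in the binned raster), which is what makes the binned train denser, as noted above.

```python
def bin_raster(raster, bin_size):
    """Re-bin a binary spike raster: a binned time step is 1 when the neuron
    spiked at least once inside the window (logical OR over the bin).
    Trailing time steps that do not fill a whole bin are dropped."""
    n_bins = len(raster[0]) // bin_size
    return [[1 if any(row[b * bin_size:(b + 1) * bin_size]) else 0
             for b in range(n_bins)]
            for row in raster]

binned = bin_raster([[0, 0, 1, 0, 0, 0, 0, 0],
                     [1, 0, 0, 1, 0, 0, 1, 0]], 4)
```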
  • 96. Making sense of parameters: stimuli 1-4 each yield their own fitted model, P(R|S1), P(R|S2), P(R|S3), P(R|S4).
  • 97. P[S|R]. Einat Granot-Atedgi, Gašper Tkačik, Ronen Segev, Elad Schneidman. Stimulus-dependent Maximum Entropy Models of Neural Population Codes. PLoS Comput. Biol. 2013. Compared with Schneidman 2006 and LNL. ℋ(ω) = Σ_i λ_i(t) ω_i(t) + Σ_{i,j} λ_ij ω_i(0) ω_j(0).
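The energy above can be evaluated directly; the sketch below (illustrative names and data layout, not the paper's code) computes ℋ(ω) for a single time step, with the fields λ_i(t) carrying the stimulus dependence.

```python
def energy(omega, lam_t, lam_pair):
    """Energy of one spike pattern under the stimulus-dependent pairwise model
    H(omega) = sum_i lam_i(t) omega_i(t) + sum_{i,j} lam_ij omega_i omega_j.
    lam_t holds the stimulus/time-dependent fields lam_i(t) for the current t;
    lam_pair maps index pairs (i, j) to couplings lam_ij."""
    h = sum(l * o for l, o in zip(lam_t, omega))
    for (i, j), l_ij in lam_pair.items():
        h += l_ij * omega[i] * omega[j]
    return h

# 3 neurons, neurons 0 and 1 spiking, one pairwise coupling
h = energy([1, 1, 0], [0.5, 0.2, 0.1], {(0, 1): -0.3})
```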
  • 98. Cross validation on small scale Vasquez et al 2013 98
  • 99. Relaxation: the L equality constraints are replaced by one-sided constraints with slack ε_l: μ_λ(m_l) − π(m_l) ≤ ε_l and π(m_l) − μ_λ(m_l) ≤ ε_l, for l = 1, …, L.
  • 100. Schneidman et al 2006 stimulus.
  • 101. Confidence bounds in linear scale 101
  • 102. Confidence bounds in log scale 102