Towards Modeling Neural Networks with
Physiologically Different Populations:
Constructing a Monte-Carlo Model
By Adam Cone
VIGRE Research Project
Summer 2003
Advisor: _____________________
Prof. Daniel Tranchina
Date:_______________
Table of Contents

Abstract
Introduction
Biological Background
Mathematical Background
Monte-Carlo Network Construction
Network Construction and Representation
Translating Neural Activity into Network Stimulus
Variable Physiological Characteristics
Multi-Population Simulation Results
Mean Field and Monte-Carlo Comparisons
Physiological Parameter Variation: Testing
Conclusion
Literature Review
Appendix A
Appendix B
Abstract
Computing power is a fundamental limitation in mathematically modeling multi-
population neural networks in the visual cortex, and innovative modeling techniques are
continually sought to facilitate more efficient, complete simulation. Recent population-density
methods have shown promise in comparisons with contemporary Monte-Carlo and mean field
methods in single-population regimes,[1] suggesting their potential usefulness in network
modeling. To carry out comparisons in physiologically accurate network regimes, all three
models must be modified and expanded to account not only for multiple external inputs, but
also for network interactions and different population response parameters.
This paper details our construction of multi-population network Monte-Carlo and mean
field models and critically analyzes simulation results to verify their accuracy. We conclude that
the Monte-Carlo method is suitable for use in future population-density method testing. Finally,
we propose additional variables that Monte-Carlo and mean-field models should account for in
future simulations.
Related Fields: computational neural science, numerical computing, computational biology,
applied mathematics, mathematical modeling, neural networks, biomathematics
Introduction
“[A Monte Carlo method] solves a problem by generating suitable random numbers and
observing that fraction of the numbers obeying some [property set]. The method is useful for
obtaining numerical solutions to problems which are too complicated to solve analytically.”[2]
“In the Population-Density approach, integrate-and-fire neurons are grouped into large populations
of similar neurons. For each population, we form a probability density that represents the
distribution of neurons over all possible [voltage] states.”[3],[4] A practical difference between the
two methods is that Monte Carlo simulations are applicable to all kinds of problems, whereas
Population-Density simulations were developed specifically for modeling large populations of
similar neurons with specific properties.
The comparison is important because identifying and using the most computationally
efficient method will enable us to account for more variables, simulate more neurons for longer
durations and, in short, make our simulations more “life-like”. These expanded models can
improve our understanding of neural networks.
[1] Cone
[2] Weisstein
[3] Nykamp and Tranchina
[4] For a thorough derivation of the Population-Density method, see ibid.
Biological Background
From a biological perspective, neurons (Figure 1)[5]
are the brain’s fundamental units. A
neuron’s essential function is to integrate input
from various sources and either fire or not fire
an action potential, the medium of inter-neuron
communication, which, in turn, stimulates
other neurons. Neurons interface with one
another at synapses (Figure 2),[6] where the pre-
synaptic neuron releases molecules called
neurotransmitters that open ion channels on
the post-synaptic neuron. The open ion
channels allow ions from the surrounding
cytosol to enter the neuron, or vice-versa.
Since the ions are carrying charge, their
movement is essentially an electrical current,
and its effect on the neuron voltage relative to
the cytosol can be either excitatory (increases voltage and action potential probability) or
inhibitory (decreases voltage and action potential probability). Whether a neuron ultimately fires
an action potential is a function of its voltage at the axon-hillock, the action potential initiation
area. Each neuron has a threshold voltage and, when this threshold voltage is met or exceeded
at the axon hillock, the neuron fires an action
potential. Once it has fired, the neuron enters
a refractory period; it cannot fire during this
period as its ion concentrations are being
reestablished. A neuron’s, or group of
neurons’, activity is defined as the action
potential firing rate. Determining neural
activity for various groups of neurons is
crucial in human visual cortex analysis
because it indicates what processes are
taking place at various locations.
Figure 1: Neuron Diagram
During its passage through the central
visual system, information from the retina,
encoded as action potentials from retinal
neurons, undergoes several relays and
transformations. Each of these relays and
transformations involves either a redirection
or re-organization, respectively, of the action-
potential-encoded information by neuron
populations, large neuron groups that are
similar in their biophysical properties and synaptic connections. These populations interact in
neural networks to perform the various operations on visual information (e.g. mapping,
rerouting, organizing, filtering, integrating, etc.) that enable us to interpret visual stimuli. Neural
networks exhibit complex behavior, and facilitate high-level processes, such as orientation
tuning. Understanding neural network behavior is one of the central projects in understanding
the visual cortex.
Figure 2: Synapse Diagram
[5] Maizels
[6] Maizels
Mathematical Background

The following integrate-and-fire neuron modeling equations are used directly to update
the mean field method; they form the computational foundation for our simulations. We use
adapted forms of the equations to update the Monte-Carlo simulation. Let g_e(t) and g_i(t) be
the excitatory and inhibitory membrane conductances (nS) as functions of time, and let τ_e and
τ_i be the excitatory and inhibitory decay constants (ms). In the absence of synaptic events, the
excitatory and inhibitory membrane conductances decay according to the first-order differential
equations

\frac{dg_e(t)}{dt} = -\frac{g_e(t)}{\tau_e}, \quad \tau_e \neq 0

\frac{dg_i(t)}{dt} = -\frac{g_i(t)}{\tau_i}, \quad \tau_i \neq 0.
If T_k (ms) is the k-th synaptic event time, f(T_k^+) and f(T_k^-) are the right- and left-hand
limits of a function f at T_k, and Γ_e^k/τ_e and Γ_i^k/τ_i are the random excitatory and inhibitory
conductance boosts (nS) at T_k, then

g_e(T_k^+) = g_e(T_k^-) + \frac{\Gamma_e^k}{\tau_e}, \quad \tau_e \neq 0

g_i(T_k^+) = g_i(T_k^-) + \frac{\Gamma_i^k}{\tau_i}, \quad \tau_i \neq 0.
Combining these equations, we obtain the general equations for membrane
conductance:

\frac{dg_e(t)}{dt} = -\frac{g_e(t)}{\tau_e} + \sum_k \frac{\Gamma_e^k}{\tau_e}\,\delta(t - T_k), \quad \tau_e \neq 0

\frac{dg_i(t)}{dt} = -\frac{g_i(t)}{\tau_i} + \sum_k \frac{\Gamma_i^k}{\tau_i}\,\delta(t - T_k), \quad \tau_i \neq 0.
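Between synaptic events these equations have the exact solution g_e(t) = g_e(T_k^+) e^{-(t - T_k)/τ_e}, and each event instantaneously adds Γ_e^k/τ_e to the conductance. A minimal MATLAB sketch of this two-step update for a single neuron, with illustrative values (the Appendix B code applies the same decay-then-boost update in vectorized form over all neurons):

% Exact update of one neuron's excitatory conductance across an interval of
% length T (ms) that ends with a synaptic event of random area Gamma_e.
% All values below are illustrative, not the simulation's actual parameters.
tau_e   = 5;                      % excitatory conductance decay constant (ms)
g_e     = 0.20;                   % conductance at the start of the interval (nS)
T       = 0.7;                    % time until the next synaptic event (ms)
Gamma_e = 0.05;                   % random area of the unitary conductance event
g_e = g_e*exp(-T/tau_e);          % free decay up to the event time
g_e = g_e + Gamma_e/tau_e;        % instantaneous boost at the event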
Now, let E_r (mV) be the resting membrane voltage, V(t) be the membrane voltage (mV) at time t,
E_e and E_i denote the equilibrium excitatory and inhibitory voltages (mV), C be the
membrane capacitance, and g_r be the resting (leak) conductance. We model membrane voltage with
the following differential equation:

\frac{dV(t)}{dt} = \frac{g_r\,[E_r - V(t)] + g_e(t)\,[E_e - V(t)] + g_i(t)\,[E_i - V(t)]}{C}, \quad C \neq 0.
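A minimal sketch of advancing this voltage equation by one small time increment with a first-order (forward Euler) step, using illustrative values; the actual simulations in Appendix B integrate the same equation between synaptic events with a second-order accurate (trapezoidal) scheme:

% One forward-Euler step of the membrane voltage equation (illustrative values).
C   = 1;                              % membrane capacitance (arbitrary units)
g_r = 0.05; g_e = 0.30; g_i = 0.10;   % resting, excitatory, inhibitory conductances
E_r = -70;  E_e = 0;    E_i = -80;    % reversal potentials (mV)
V   = -65;                            % current membrane voltage (mV)
dt  = 0.1;                            % time-step (ms)
dVdt = (g_r*(E_r - V) + g_e*(E_e - V) + g_i*(E_i - V))/C;
V    = V + dt*dVdt;                   % voltage at the end of the step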
Suppose that excitatory and inhibitory synaptic events are occurring with frequency ν_e(t) (Hz)
and ν_i(t) (Hz), respectively, for a population P, and that the events are Poisson distributed for
each neuron. Then the average excitatory and inhibitory conductances ⟨g_e(t)⟩ and ⟨g_i(t)⟩ for
the neurons in the population are given by

\frac{d\langle g_e(t)\rangle}{dt} = \frac{\nu_e(t)\,\Gamma_e - \langle g_e(t)\rangle}{\tau_e}, \quad \tau_e \neq 0

\frac{d\langle g_i(t)\rangle}{dt} = \frac{\nu_i(t)\,\Gamma_i - \langle g_i(t)\rangle}{\tau_i}, \quad \tau_i \neq 0.
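These are the ordinary differential equations the mean field code advances each time-step. A sketch of one such update using the trapezoidal rule, in the spirit of the g_syn_update function in Appendix B; all values here are illustrative:

% One trapezoidal-rule update of the population-average excitatory conductance.
tau_e      = 5;        % excitatory decay constant (ms)
mu_Gamma_e = 0.05;     % mean area of a unitary conductance event
dt         = 0.5;      % time-step (ms)
g_mean     = 0.20;     % <g_e> at the start of the step
nu_old     = 800e-3;   % synaptic input rate at the start of the step (events/ms)
nu_new     = 820e-3;   % synaptic input rate at the end of the step (events/ms)
g_mean = ( g_mean + (dt/(2*tau_e))*(mu_Gamma_e*(nu_new + nu_old) - g_mean) ) / ...
         (1 + dt/(2*tau_e));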
Now, when we model the evolving mean-field conductance for each population in the
network, we must account not only for synaptic input from external stimuli, but also for synaptic
events generated by network neurons. Physiologically, when a pre-synaptic neuron crosses
voltage threshold and fires an action potential, there is a random delay between the firing time
and the time at which any post-synaptic neuron experiences a resultant synaptic event.
Suppose pre-synaptic neuron A synapses to post-synaptic neuron B and that neuron A fires
an action potential at a time T_ap (ms). We want to know the time T_se (ms) at which neuron B
experiences the resultant synaptic event as a function of T_ap. Computationally, we model this
delay by defining two time quantities: the minimum possible delay between action potential firing
and synaptic event occurrence, T_min (ms); and the maximum possible additional delay time,
T_maxrand (ms). We compute T_se as follows:[7]

T_{se} = T_{ap} + T_{\min} + \mathrm{rand}\cdot T_{maxrand},

where rand (unitless) is a MATLAB function that outputs a uniformly distributed random number
between 0 and 1.
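In MATLAB this delay rule is a single line; a sketch with illustrative values (ms):

% Random synaptic-event time for one action potential (illustrative values, ms).
T_ap      = 12.3;                 % firing time of the pre-synaptic neuron
T_min     = 0.5;                  % minimum axonal/synaptic delay
T_maxrand = 1.0;                  % maximum additional random delay
T_se = T_ap + T_min + rand*T_maxrand;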
In the mean field population model, we compute the rate of synaptic input to population
β from population γ, ν_βγ(t) (Hz), as a function of the activity of population γ, A_γ(t) (Hz), where
t ∈ ℝ,[8] and the distribution function α(t) (unitless), which inputs a synaptic delay and outputs
the probability that it will occur for a given action potential. In our case, because we select the
delay from a uniform distribution, α(t) is a piecewise constant function, namely

\alpha(t) = \begin{cases} 0, & t \in (-\infty,\, T_{\min}) \\ \dfrac{1}{T_{maxrand}}, & t \in [T_{\min},\, T_{\min} + T_{maxrand}] \\ 0, & t \in (T_{\min} + T_{maxrand},\, \infty) \end{cases}, \qquad T_{maxrand} \neq 0.

[7] A potential problem arises if our simulation time-step dt > T_min. To avoid this, we can, for example, set
T_min = T_maxrand = dt, which is physiologically accurate, so that T_se becomes T_se = T_ap + (1 + rand)·dt.
[8] We do not further restrict t because, due to the random part of the synaptic delay, action
potentials from population γ that originated at different times could affect ν_βγ at any given time t.
Finally, let N_βγ (unitless) denote the number of synapses from γ to β. Now we use a
convolution integral as follows:

\nu_{\beta\gamma}(t) = N_{\beta\gamma} \int_{-\infty}^{\infty} A_\gamma(t - t')\,\alpha(t')\,dt'

= N_{\beta\gamma}\left[\int_{-\infty}^{T_{\min}} A_\gamma(t - t')\,\alpha(t')\,dt' + \int_{T_{\min}}^{T_{\min}+T_{maxrand}} A_\gamma(t - t')\,\alpha(t')\,dt' + \int_{T_{\min}+T_{maxrand}}^{\infty} A_\gamma(t - t')\,\alpha(t')\,dt'\right],

where t' (ms) is the 'time ago'. Using the fact that α(t') = 0 when t' ∉ [T_min, T_min + T_maxrand], we
obtain

\nu_{\beta\gamma}(t) = N_{\beta\gamma}\left[0 + \int_{T_{\min}}^{T_{\min}+T_{maxrand}} A_\gamma(t - t')\,\frac{1}{T_{maxrand}}\,dt' + 0\right] = \frac{N_{\beta\gamma}}{T_{maxrand}} \int_{T_{\min}}^{T_{\min}+T_{maxrand}} A_\gamma(t - t')\,dt'.

We now make the substitution

t^* = t - t' \;\Rightarrow\; t' = t - t^*
t' = T_{\min} + T_{maxrand} \;\Rightarrow\; t^* = t - (T_{\min} + T_{maxrand})
t' = T_{\min} \;\Rightarrow\; t^* = t - T_{\min}
\frac{dt^*}{dt'} = -1 \;\Rightarrow\; dt' = -dt^*

so that our new integral is

\nu_{\beta\gamma}(t) = -\frac{N_{\beta\gamma}}{T_{maxrand}} \int_{t - T_{\min}}^{t - (T_{\min} + T_{maxrand})} A_\gamma(t^*)\,dt^* = \frac{N_{\beta\gamma}}{T_{maxrand}} \int_{t - (T_{\min} + T_{maxrand})}^{t - T_{\min}} A_\gamma(t^*)\,dt^*.
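A sketch of evaluating this final integral on the discrete simulation grid with a simple rectangle rule and illustrative values; the Appendix B code accumulates the same delay window with trapezoidal weights:

% Rectangle-rule approximation of the synaptic input rate to beta from gamma
% at time-step k (all values illustrative).
dt        = 0.1;                     % time-step (ms)
T_min     = 0.1;                     % minimum synaptic delay (ms)
T_maxrand = 0.2;                     % maximum additional random delay (ms)
N_bg      = 500;                     % number of synapses from population gamma to beta
k         = 50;                      % current time-step index
A_gamma   = 20*ones(1,k);            % activity history of population gamma (Hz); A_gamma(j) at time j*dt
kmin = round(T_min/dt);              % delay-window start, in time-steps
kwin = round(T_maxrand/dt);          % delay-window length, in time-steps
J    = max(1, k-kmin-kwin+1):(k-kmin);           % grid indices inside the delay window
nu_bg = (N_bg/T_maxrand) * sum(A_gamma(J))*dt;   % synaptic input rate (Hz)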
Monte-Carlo Model Construction
Our goal was to adapt the Monte-Carlo population model to simulate multi-population neural
network activity. For our purposes, a multi-population network is a group of neuron populations,
in which any neuron can, a priori, synapse to any other neuron in the network, including itself.
There were three main challenges in achieving this goal:
1) network construction and representation
2) translating neural activity to network stimulus
3) endowing different populations with different physiological characteristics
Network Construction and Representation
The first problem was to computationally define the following network features:
1) network size
2) number of populations
3) relative population sizes
4) population types (excitatory or inhibitory)
5) network connectivity
The first four quantities are relatively straightforward to implement, but the network connectivity
is not. One approach is to input the data for each synapse that exists between neurons. So, for
example, if we want neuron 3 to synapse to neuron 12, we could manually input those values
to the computer. However, since the neural networks can contain more than 100,000 neurons, and the
number of synapses is far higher, this solution is impractical. Furthermore, we are not concerned
with whether a synapse exists between any two specific neurons - the holistic statistical
properties of network connectivity are our primary concern - so inputting values this way would
be tedious. We need a more efficient method of defining the connectivity.
If one considers the neuron populations as the fundamental functional units of the
network, then one can think of how the neuron populations are connected to one another
without concentrating on the individual neurons. Suppose there is a network with two
populations A and B, with a and b neurons, respectively.[9] From a population perspective, we
completely define the network by declaring the number of synapses from A to B, S_AB; from A
to A, S_AA; from B to A, S_BA; and from B to B, S_BB. Equivalently, we could declare the probability
that a given neuron in population A synapses to a given neuron in population B, given by S_AB/(ab), etc.
Since we have defined the connectivity to our satisfaction, we want to randomly generate a
network with neuron-level connectivity that satisfies our specified population-level connectivity.[10]
We achieve this by generating an N × N matrix, where N (unitless) is the number of neurons
in the network. In this matrix,

x_{ij} = \begin{cases} 1, & \text{neuron } i \text{ has an excitatory synapse to neuron } j \\ -1, & \text{neuron } i \text{ has an inhibitory synapse to neuron } j \\ 0, & \text{neuron } i \text{ doesn't synapse to neuron } j. \end{cases}

[9] Note that, because we assume that each neuron can synapse at most once to any other neuron, ab is
the total number of possible synapses from A to B.

For example, suppose we want a neural network with N neurons and three populations,
A, B, and C. We want A and B to be excitatory, C inhibitory, and their sizes to be a, b, and
c (unitless), respectively. How do we define population-level connectivity? We construct a 3 × 3
matrix M with

m_{ij} = P(\text{any given neuron in population } i \text{ synapses to any given neuron in population } j).

Now, we generate an a × a random matrix L (i.e. a matrix in which each entry l_ij is a uniformly
distributed real number r ∈ [0,1]). We ask the computer to perform a logical operation on
each l_ij such that

\mathcal{L}(l_{ij}) = \begin{cases} 1, & l_{ij} \in \left[0,\, \dfrac{S_{AA}}{aa}\right) \\ 0, & l_{ij} \in \left[\dfrac{S_{AA}}{aa},\, 1\right] \end{cases}, \qquad aa \neq 0.
We now perform similar operations for each 2-population combination (i.e. A:B, A:C, B:A,
B:B, etc.), and concatenate the resulting matrices. Now we have our connectivity matrix – our
representation of the neural network.
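A minimal two-population sketch of this random-thresholding construction, in the spirit of the conmat_5 generator in Appendix B; the sizes and probabilities below are illustrative:

% Two populations: A (excitatory, a neurons) and B (inhibitory, b neurons).
a = 60; b = 40; N = a + b;
p = [0.10 0.05; 0.20 0.15];         % p(i,j) = P(a neuron in pop i synapses to a neuron in pop j)
type  = [1 -1];                     % +1 excitatory population, -1 inhibitory population
sizes = [a b];
edges = [0 cumsum(sizes)];          % population boundaries in the neuron ordering
C = zeros(N,N);                     % connectivity matrix: entries +1, -1, or 0
for i = 1:2
    for j = 1:2
        rows = (edges(i)+1):edges(i+1);
        cols = (edges(j)+1):edges(j+1);
        C(rows,cols) = type(i) * (rand(numel(rows),numel(cols)) < p(i,j));
    end
end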
Translating Neural Activity into Network Stimulus
We are concerned primarily with network activity, specifically the rate at which each
population in the network, as well as the network itself, is firing action potentials at any given
time. To find this, we need to record how many neurons fired in each discretized time step of
length dt, which necessitates updating the conductances and voltage of each neuron in each
time step. In a network model, we need to perform all the operations of the population model, in
addition to accounting for inter-neuron interaction. The following is a checklist of the basic steps:
1) classify synaptic input rate to each neuron from external source
2) translate the input rate into a Poisson-distributed sequence of synaptic events (see the sketch after this list)
3) classify action potential-generated synaptic events from network neurons
4) sort the synaptic events from network and external stimulus into new sequence
5) integrate over time to update the neuron conductances
6) integrate over time to find the neuron voltages
7) decide whether each neuron has fired an action potential
8) record which neurons will experience network-generated synaptic events in next dt
9) store the three-dimensional state-space coordinates.
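A minimal sketch of step 2 above for a single neuron, assuming the external input rate is constant across one time-step; the Appendix B code performs the equivalent operation with poissrnd, vectorized over all neurons (poissrnd requires the Statistics Toolbox):

% Draw the external synaptic events one neuron receives in a single time-step
% of length dt, given excitatory and inhibitory input rates (illustrative values).
dt    = 0.5e-3;                       % time-step (s)
nu_e  = 800;  nu_i = 600;             % external input rates (Hz)
n_ev  = poissrnd(dt*(nu_e + nu_i));   % total number of events in this dt
times = dt*rand(1, n_ev);             % event times within the step
p_e   = nu_e/(nu_e + nu_i);           % probability that an event is excitatory
types = 2*(rand(1, n_ev) <= p_e) - 1; % +1 excitatory, -1 inhibitory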
When the simulation is running, every time a neuron fires, we must
1) reference the connectivity matrix
2) determine which neurons experience synaptic events
3) determine the synaptic event times
4) determine whether the pre-synaptic neuron is excitatory or inhibitory
5) generate the random strength of the synaptic events
6) store the post-synaptic neuron/time/strength data for future reference.[11]
In the Monte-Carlo regime, accounting for inter-network communication is essentially a
bookkeeping problem. In computer science terminology, we need a data object in which we can
easily store and modify data about inter-neuron communication. We first construct this object,
then explain how it is used in the program.
The synaptic input events for each neuron must be sorted by time of occurrence,
because our second-order accurate integration scheme requires integrating voltages and
conductances between these synaptic events. The essential problem is one of data storage and
access, but because we ultimately want synaptic events lined up for use in future time-steps, we
call our data object the queue matrix. The queue matrix dynamically stores future synaptic input
data for each post-neuron. That is, for each neuron, the queue matrix stores the times and types
of future synaptic events.
Now, because the synaptic delay (T_se − T_ap) ∈ [T_min, T_min + T_maxrand], the queue matrix
must have a capacity of at least ceil((T_min + T_maxrand)/dt) + 1 time-steps. Since no resultant synaptic
event can occur more than T_min + T_maxrand after the end of the current time-step, further storage is
extraneous,[12] so we have that one of the matrix dimensions is ceil((T_min + T_maxrand)/dt) + 1, where
ceil is the MATLAB function that rounds its argument up to the nearest integer.

[11] We assume that each pre-synaptic neuron is either excitatory or inhibitory, not both.
[12] That the total simulation length is more than ceil((T_min + T_maxrand)/dt) + 1 time-steps is
irrelevant; we simply erase all events after the current time-step.

Furthermore, since it is possible that a neuron experiences multiple synaptic events from
other network neurons in one time-step, we need to know how much space to allocate for
synaptic events. Let max_in (unitless) denote the maximum number of incoming synapses held
by any neuron in the network and τ_ref (ms) denote the refractory period. Then, in general, the
requisite number of events the queue matrix must store, and, therefore, one of the queue matrix'
dimensions, is given by max_in · ceil((T_maxrand + dt)/τ_ref).

Finally, since the queue matrix must store synaptic event data for each neuron, one of
the matrix' dimensions must be N. By convention, our queue matrix is
(events) × (neurons) × (time-steps). In other words, in q_ijk we find the data about the time and
type of the i-th synaptic event that neuron j will experience during the k-th time-step slot. This
leaves us with the problem of storing two pieces of information, time and type, in a single matrix
cell. For computational reasons, we chose to represent the data as a complex number a + bi.
Let T_se (ms) be the elapsed time between the beginning of the time-step and the synaptic event.
Then the real part is given by

a = \begin{cases} T_{se}, & \text{a synaptic event occurred } (T_{se} > 0) \\ 0, & \text{no event occurred} \end{cases}

and the imaginary part is given by

b = \begin{cases} 1, & \text{an excitatory synaptic event} \\ -1, & \text{an inhibitory synaptic event} \\ 0, & \text{no synaptic event.} \end{cases}
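A small sketch of packing and unpacking one such event as a complex number (illustrative values):

% Pack one future synaptic event into a single complex number.
T_se   = 0.32;                 % event time measured from the start of its time-step (ms)
event  = T_se - 1i;            % imaginary part -1: an inhibitory event (+1 would be excitatory)
t_evt  = real(event);          % recover the event time
is_exc = (imag(event) == 1);   % recover the event type (false here: inhibitory)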
The queue matrix is updated at the end of each time-step; a process which requires
defining two additional objects. Let T_ap,ω (ms) denote the time at which a neuron ω fired, and let
T_0 (ms) denote the beginning of the time-step. After we finish the voltage integration and
determine if/when each neuron has fired, we create a vector of length N of neuron firing times in
which x_i, the firing time of the i-th neuron measured in time-steps, is given by (T_ap,i − T_0)/dt,
dt ≠ 0. Furthermore, we sum down the first dimension of the queue matrix to obtain a
(time-steps) × (neurons) event-counter matrix, in which x_ij is the number of synaptic events the
j-th neuron is already determined to experience in the i-th time-step. So, if further pre-synaptic
neuron firing leads to additional synaptic events, we will reference the event counter matrix to
decide at which positions in the queue matrix column we should put them.

In each dt, after we have defined the neuron firing vector and we know which neurons
fired, we want to 'queue-up' the resultant synaptic events at all the neurons post-synaptic to
firing neurons. To do this, we use the neuron firing vector to find the appropriate rows of the
neuron connectivity matrix and, since the connectivity matrix rows not corresponding to firing
neurons are irrelevant, we condense the relevant rows into a new N_fire × N firing connectivity
matrix, where N_fire denotes the number of neurons that fired in the current dt. Now, we generate
an N_fire × N delay times matrix, with uniformly distributed random values
R_ij ∈ [T_min, T_min + T_maxrand]. We create another N_fire × N matrix, the pre-synaptic firing times
matrix, in which every entry of row i equals the firing time of the i-th firing neuron. Adding these
two matrices, we obtain random, uniformly distributed synaptic event reception times that account
for both the different firing times of the pre-synaptic neurons and the multiple post-synaptic
targets. To rid ourselves of the unwanted data, we element-multiply the times matrix by the
absolute value of the firing connectivity matrix. We call the result the firing times matrix.

We need to associate with each future synaptic event generated in the current dt a
type, excitatory or inhibitory, based on the designation of the pre-synaptic neuron that fired. We
now construct a firing information matrix by adding the firing times matrix to the firing
connectivity matrix multiplied by i, the imaginary unit. So, in each firing information matrix
cell, we have a complex number λ, with real(λ) the synaptic event reception time and
sign(imag(λ)) the synaptic event type: excitatory or inhibitory. Sorting the columns by absolute
value, we arrange the events chronologically from first (top) to last (bottom).
Although we have organized our action potential firing data for the current time-step, it
remains to enter this data into the queue matrix, where it can be efficiently accessed in future
time-loops. We do this by looping over the maximum number of synaptic events any post-
synaptic neuron will experience as a result of action potentials that occurred in the current time-
step; a quantity obtained by summing down the absolute value of the firing connectivity matrix.
In the loop, we increment the relevant cells of the event counter matrix, and define row, column,
and depth indices for the queue matrix.
Variable Physiological Characteristics
Our goal is to model multiple population network activity in the visual cortex. Optimally,
we want our simulation to handle a user-specified number of populations, each with different
qualities (i.e. excitatory vs. inhibitory, different refractory periods, voltage thresholds, etc.),
without having to write individual programs for each case. We construct a variable population
network by the algorithm outlined in Network Construction, but how do we efficiently assign
physiological constants to the different populations? Briefly, the user constructs the following
column vectors, each of length P (unitless), where P is the number of populations in the
network:
excitatory equilibrium voltage
inhibitory equilibrium voltage
resting voltage
excitatory conductance decay constant
inhibitory conductance decay constant
membrane time constant
refractory period
reset voltage
threshold voltage
For example, the i-th element of the reset voltage vector contains the reset voltage
constant for the neurons in population i.
Although we now have sufficient data to simulate interacting populations with different
physiological parameters, the data is not in a convenient form for calculations. Since many of
our Monte-Carlo computations are based on individual neurons, we would like, for each
characteristic, to have a vector of dimension N whose i-th element is the characteristic value of
the i-th neuron in the network. Let P_k be the number of neurons in the k-th population. Then we
can construct a matrix of dimension N × P, where

x_{ij} = \begin{cases} 1, & i \in \left(\sum_{n=1}^{j-1} P_n,\; \sum_{n=1}^{j} P_n\right] \\ 0, & i \notin \left(\sum_{n=1}^{j-1} P_n,\; \sum_{n=1}^{j} P_n\right]. \end{cases}

Now, multiplication by, for instance, the refractory time column vector yields a column vector of
length N, where the i-th element is the refractory time value of the i-th network neuron.
For example, suppose a given neural network has seven neurons and four distinct
populations, a, b, c, and d, with sizes two, one, three, and one, respectively. Further,
suppose that the population refractory time values are 3 µs, 2 µs, 6 µs, and 4 µs, respectively.
Then the computationally convenient vector of neuron refractory time values is given by:

\begin{bmatrix} 1&0&0&0 \\ 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&1&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{bmatrix} \begin{bmatrix} 3\,\mu\mathrm{s} \\ 2\,\mu\mathrm{s} \\ 6\,\mu\mathrm{s} \\ 4\,\mu\mathrm{s} \end{bmatrix} = \begin{bmatrix} 3\,\mu\mathrm{s} \\ 3\,\mu\mathrm{s} \\ 2\,\mu\mathrm{s} \\ 6\,\mu\mathrm{s} \\ 6\,\mu\mathrm{s} \\ 6\,\mu\mathrm{s} \\ 4\,\mu\mathrm{s} \end{bmatrix}.
Performing this operation for each population-variable characteristic, we obtain a per-neuron
vector of values for each characteristic; the data is now in a computationally convenient
form.
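In MATLAB the same expansion can be written as a single matrix-vector product once the indicator matrix is built; a sketch using the worked example above (the Appendix B code builds the analogous matrix Kpop and forms products such as Kpop*tau_R):

% Expand per-population parameters to per-neuron parameters (example values from above).
sizes = [2 1 3 1];                        % neurons in populations a, b, c, d
P     = numel(sizes);
N     = sum(sizes);
edges = [0 cumsum(sizes)];
pop_id = zeros(1,N);
for jp = 1:P
    pop_id((edges(jp)+1):edges(jp+1)) = jp;   % population label of each neuron
end
Kpop = false(N,P);                        % the N-by-P indicator matrix described above
for jp = 1:P
    Kpop(:,jp) = (pop_id' == jp);
end
tau_ref_pops = [3; 2; 6; 4];              % per-population refractory times (microseconds)
tau_ref      = Kpop*tau_ref_pops;         % per-neuron refractory times: [3 3 2 6 6 6 4]'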
Multi-Population Simulation Results
To demonstrate the necessity (over the mean-field method) and versatility of our Monte-
Carlo multi-population method, we present activity and conductance comparisons between the
mean-field and Monte-Carlo simulation data (Figures 1-4); and the results of four simulations,
each run with different physiological parameters and each with 1000 neurons, and two
interacting populations: one excitatory (population 1) and one inhibitory (population 2) (Figures
5-8). The specific physiological parameters of each simulation are found in Appendix A, and the
simulation code is found in Appendix B.
Mean-Field and Monte-Carlo Comparisons
While the Monte-Carlo method is the standard for accuracy in network modeling, it must
ponderously track each individual neuron; a computationally costly process. The mean field
method, by comparison, involves only numerically solving ordinary differential equations, and is
far faster; ideally, we would use it exclusively. However, the mean field method models neuron
interaction poorly, as demonstrated in Figures 1-4. Each figure shows relative agreement
between the mean field and Monte-Carlo methods, but in each one the mean field result
deviates from the Monte-Carlo result. This motivates our need for a Monte-Carlo method.
Physiological Parameter Variation: Testing
One of our primary goals was to account for interactions between physiologically
different neuron populations. After programming the simulations, we varied physiological
parameters (e.g. threshold voltage, refractory period, etc.) and critically examined the results.
We expect, for instance, that if an isolated neuron population X has a higher threshold voltage
than another, otherwise identical, isolated neuron population Y, then, given similar synaptic
input, Y will have a higher activity than X. If a simulation did not show this, it is unlikely that
the simulation accurately modeled reality. The following are similar functionality tests, which
compare population activity.
We ran simulations with four different parameter sets, three of them varying by one or
two physiological variables from the control set, parameter set 1, and compared the results.
Briefly, parameter set 2 differs from parameter set 1 in its threshold voltage values: higher for
the excitatory population and lower for the inhibitory population. In parameter set 3, we
made the following two changes: we modified the connectivity matrix so that there is substantial
population self-synapsing; and we changed the refractory periods for both populations (halved
for population 1, doubled for population 2). Finally, parameter set 4 has a higher resting voltage
for population 1, and a lower resting voltage for population 2.
Parameter Set 2: Threshold Voltage
In comparison with Figure 5, Figure 6 has two salient features (increased population 2
activity; decreased population 1 activity), both of which are consistent with the physiological
differences- a population’s activity is proportional to the ease with which its neurons can reach
threshold voltage. By making the threshold harder to meet for population 1, and easier to meet
for population 2, we decreased and increased their respective activities. This effect is enhanced
by the populations’ connectivity. Since population 2 is inhibitory and synapses to many neurons
in population 1, population 2’s increased activity means increased inhibition for population 1. So,
while population 2’s increased activity results exclusively from the threshold voltage change,
population 1’s decreased activity is the result of both the threshold voltage changes, and the
indirect effect of the threshold voltage changes through the network architecture.
Parameter Set 3: Network Connectivity and Refractory Period
Relative to the simulation activity data from parameter set 1, parameter set 3 data
(Figure 7) exhibits an increased positive disparity between the activities of population 1 and
population 2.
The result of altering the refractory period is that the population 1 neurons have less
forced inactivity, whereas the population 2 neurons have more. When a neuron is in a refractory
period its voltage cannot evolve, and, in some sense, the excitatory synaptic events are ‘wasted’
in that they cannot make the neuron more likely to fire. In parameter set 3, the refractory period
changes minimize this ‘waste’ for population 1, and maximize it for population 2- hence, the
greater disparity.
The effect of allowing population self-synapsing is subtler, but important. Population 1 is
excitatory, which means that its action potentials precipitate excitatory synaptic events at post-
synaptic neurons, including those in inhibitory population 2. In the parameter set 1 network, these
excitatory events make the inhibitory population more likely to fire, which, in turn, decreases the
activity of population 1. For most external synaptic stimulus time-courses, this effect is minor,
but present; the network connectivity means that population 1's activity is, through population 2,
somewhat self-limiting. However, in the parameter set 3 network, the set of neurons post-synaptic to
population 1 includes neurons in population 1, which means that population 1's activity has a
strong self-actuating effect. Furthermore, population 2's self-synapsing has precisely the
opposite effect - its activity is now strongly self-limiting, and, therefore, so is its tempering effect
on population 1. The net result is that population 1’s activity increases and population 2’s activity
decreases. The combination of these two effects explains the large differences between Figures
5 and 7.
Parameter Set 4: Neuron Resting Voltage
Figure 8’s relationship to Figure 5 is similar to Figure 7’s, but less extreme; Population
1’s activity is slightly higher, and population 2’s activity is slightly lower.
When a neuron experiences no synaptic events, its voltage will asymptotically
approach the resting voltage. The higher the resting voltage, the less excitatory synaptic
stimulus a neuron will need to reach threshold voltage, and vice-versa. Hence, when we
increase the resting voltage of population 1 and decrease that of population 2, we see the
corresponding differences. The net effect is weaker than that of the changes in parameter set 3
for two reasons. First, there is only one altered physiological variable. Second, the resting
voltage is less influential when a neuron experiences a high frequency of synaptic events, and,
in all our simulations, the rate of synaptic events is relatively high. However, the effect is still, not
surprisingly, significant.
Conclusion
Although our multi-population simulations generated coherent results, there are several
possible improvements that we plan to implement in future Monte-Carlo simulations. For
example, synaptic depression becomes increasingly important in network-level, as opposed to
population-level, analysis. The voltage-impact of a synaptic event, although partially random, is
a function of the amount and type of neurotransmitter released by the pre-synaptic neuron into
the synaptic cleft. When a synaptic event occurs, the pre-synaptic neuron’s immediately-
available neurotransmitter is depleted by some number of neurotransmitter molecules χ . Given
sufficient time, the pre-synaptic neuron exponentially restores the amount of immediately-available
neurotransmitter to its original value. However, if the pre-synaptic neuron fires action potentials
at some critical frequency, the amount of immediately-available neurotransmitter can fall below
χ , and the synaptic event’s efficacy is reduced or ‘depressed’. In population simulations this
phenomenon is of little importance, since we are concerned only with the activity of the modeled
population. However, in multi-population networks, the generated action potentials have
consequences, and synaptic depression must be accounted for.
Nevertheless, the two-population simulation results are consistent with theoretical
predictions, which suggests that the multi-population Monte-Carlo network model is an
appropriate foil for testing new population density methods.
Literature Review
Cone, Adam Richard. New York University, Courant Institute of Mathematical Sciences. 9/26/03.
<http://www.cims.nyu.edu/vigrenew/ug_research/adam_cone.pdf>

Maizels, Deborah Jane. Zoobotanica. Apple. 10-18-02.
<http://www.zoobotanica.plus.com/portfolio%20medicine%20pages/synapse.htm>

Nykamp, Duane and Daniel Tranchina. "A Population Density Approach That Facilitates Large-
Scale Modeling of Neural Networks: Analysis and an Application to Orientation Tuning." Journal
of Computational Neuroscience 8 (2000): 19-50.

Weisstein, Eric. Eric Weisstein's World of Mathematics (MathWorld). Wolfram Research. 10-18-02.
<http://mathworld.wolfram.com/MonteCarloMethod.html>
Appendix A
Appendix B: Simulation Code
Main Program
function multi_pop4a(tau_E,tau_I,tau_M,tau_R,E_E,E_I,E_R,v_T,v_R,nue_0,nui_0,c_e,c_i,N,Tsim, ...
    randstate,pop_con_mat,pop_type_vec,pop_frac_vec)
%second-order accurate version in which events are generated in groups with ID and sorted
%tau_E - Excitatory synaptic conductance time constants for each population (column vector n_pops long)
%tau_I - Inhibitory synaptic time constants for each population
%tau_M - Membrane time constant
%tau_Ref - Refractory time for each population
%nue_0 - time-average excitatory synaptic input rate
%nu_0 - time-average inhibitory synaptic input rate
%N - number of neurons in the simulation
%c_e - maximum contrast for random excitory conductance for each population
%c_i - maximum contrast for random inhibitory conductance for each population
% synaptic input rates
%tau_ref for each neuron
%dispersion of uniform latencies handled correctly
%calls new qcount: qcount_11
rand('state',randstate)
dt=min(tau_E)/10;
nt=ceil(Tsim/dt); %number of time points
t=(0:(nt))*dt; %the time points
nt=nt+1;
t_max=max(t);
%Generate connectivity matrix
[con_mat, pop_id_vec]= conmat_5(N,pop_con_mat,pop_type_vec,pop_frac_vec);
n_pops=size(pop_con_mat,1);
Kpop=false(N,n_pops); %matrix where each column entry=1 for neurons in population with that column number
for jp=1:n_pops
Kpop(:,jp)=(pop_id_vec'==jp);
end
E_e=(Kpop*E_E)';%row vectors of length N
v_th=(Kpop*v_T)';
E_i=(Kpop*E_I)';
E_r=(Kpop*E_R)';
v_reset=(Kpop*v_R)';
tau_m=(Kpop*tau_M)';
tau_e=(Kpop*tau_E)';
tau_i=(Kpop*tau_I)';
tau_ref=(Kpop*tau_R)';
n_ref=round(tau_ref/dt); %number of time bins in refractory state
rate_mf=zeros(n_pops,nt); % mean-field rate
nu_e=zeros(n_pops,nt); %external excitatory input rate for each pop
nu_i=nu_e; % ditto for inhibitory
%%%UPDATE FOR NPOPS (NOT JUST 2 POPS)
for j=1:n_pops
[nu_e(j,:) nu_i(j,:)]=ex_in_synaptic_rates(nue_0(j),nui_0(j),c_e(j),c_i(j),t,randstate);
end
%check to see whether these have to be defined here
nue_vec=zeros(N,1);
nui_vec=nue_vec;
axon_delay_constant=dt;
axon_delay_rand_range=2*dt;
tmin=axon_delay_constant;
tmax=tmin+axon_delay_rand_range;
Td=tmax-tmin;
kmax=ceil(tmax/dt);
kmin=ceil(tmin/dt);
npoints=kmax-kmin+2;
e1=dt*kmax-tmax;
z=min(dt,kmax*dt-tmin);
weights=zeros(npoints,1);
weights(1)=z-e1 - (1/2)*(z^2/dt -e1^2/dt);
weights(2)=(1/2)*(z^2/dt -e1^2/dt);
kw=2;
while kw < npoints
z=min(dt,(kmax-kw+1)*dt-tmin);
weights(kw)=weights(kw)+z-(1/2)*z^2/dt;
weights(kw+1)=weights(kw+1)+(1/2)*z^2/dt;
kw=kw+1;
end
weights=weights/Td;
q_mat = qmat_6(con_mat,axon_delay_constant,axon_delay_rand_range,dt,max(tau_ref));
[max_event_num,num_neurons,num_dt_slots]=size(q_mat);
event_counter_matrix=zeros(num_dt_slots,num_neurons);
mu_Gamma_E=tau_M./(E_E-E_R); % Expected area under unitary synaptic event for each pop. Gives average EPSP of ~0.5 mV
mu_GE=nue_0.*mu_Gamma_E; %Average of G_e for steady synaptic input rate at nu_0
sigma_sq_Gamma_E=mu_Gamma_E.^2/5; %variance of Gamma_e for parabolic distribution
mu_Gamma_E_sq=sigma_sq_Gamma_E+mu_Gamma_E.^2; %expected square of Gamma_e
sigma_GE=sqrt(nue_0.*mu_Gamma_E_sq./(2*tau_E)); %variance of G_e for steady synaptic input at rate nu_0
mu_Gamma_I=10*mu_Gamma_E;%mean area under inhibitory cinductance for each pop
mu_GI=nui_0.*mu_Gamma_I;%mean inhibitory conductance
sigma_sq_Gamma_I=mu_Gamma_I.^2/5;
mu_Gamma_I_sq=sigma_sq_Gamma_I+mu_Gamma_I.^2;%expected square of Gamma_I
sigma_GI=sqrt(nui_0.*mu_Gamma_I_sq./(2*tau_I)); %variance of G_i for steady synaptic input at rate nu_0
mu_Gamma_e=(Kpop*mu_Gamma_E)';
mu_Gamma_i=(Kpop*mu_Gamma_I)';
g_e=rate_mf; %mean_field E conductance for each pop at each time
g_i=g_e; %ditto for I
g_i_mc=g_i;
g_e_mc=g_i;
g_e(:,1)=nue_0.*mu_Gamma_E; %populations-by-time steps
g_i(:,1)=nui_0.*mu_Gamma_I;
aptimes=zeros(n_pops,ceil(Tsim*200*N)); %action potential times matrix
Jap=zeros(n_pops,1); %index for keeping track of location of ap's in the matrix above
count_down=-ones(1,N); %counts number of time steps until exiting refractory period
delt_fire=zeros(1,N); %time of action potential measured from beginning of time step
G_ep1=zeros(1,N); %conductance values to be used in integration
G_ep2=G_ep1;
G_ip1=G_ep1;
G_ip2=G_ip1;
Dt=zeros(1,N); %time step, either dt, or less for neurons emerging from
%refractory period
t_remain=dt*ones(1,N);
t_elapse=zeros(1,N);
Tzero=zeros(1,N);
ID_vec=Tzero;
dt_vec=dt*ones(1,N);
nET=20;
ETzero=zeros(nET,N);
count_zero=zeros(1,N);
G_e=zeros(nt,N); %Matrix of Ge values (nt+1) time points by N neurons
G_i=zeros(nt,N); %Matrix of Gi values (nt+1) time points by N neurons
V=G_e; %Corresponding membrane voltages
G_e(1,:)=( Kpop*mu_GE+(Kpop*sigma_GE).*randn(N,1) )'; %Initialize G_e at t=0 by choosing values from a gaussian distribution
G_e(1,:)=max(G_e(1,:),zeros(size(G_e(1,:)))); %If negative, set to zero
G_i(1,:)=( Kpop*mu_GI+(Kpop*sigma_GI).*randn(N,1) )'; %Initialize G_i at t=0 by choosing values from a gaussian distribution
G_i(1,:)=max(G_i(1,:),zeros(size(G_i(1,:))));
g_i_mc(1)=mean(G_i(1,:));
g_e_mc(1)=mean(G_e(1,:));
V(1,:)=E_i + (v_th-E_i).*rand(1,N); % Initial random V from uniform disribution
V1 =E_i + (v_th-E_i).*rand(1,N);
%
%Mean-field computation
rate_mf(:,1)=mean_field_rate_vectorized(E_R,E_E,E_I,v_T,v_R,tau_M,tau_R,g_e(:,1),g_i(:,1));
rand('state',randstate+1)
cpt=cputime; %for measuring cpu time for time loop
counter=0;
for k=1:(nt-1) %Step through time
firing_times_vector=Tzero;
klower1=max(1,k-kmax);
kupper1=max(1,k-kmin+1);
nw1=kupper1-klower1+1;
JR1=klower1:kupper1;
JW1=(npoints-nw1+1):npoints;
klower2=max(1,k+1-kmax);
kupper2=max(1,k+1-kmin+1);
nw2=kupper2-klower2+1;
JR2=klower2:kupper2;
JW2=(npoints-nw2+1):npoints;
%take out loop independent stuff, as Adam did
    nu_e1=nu_e(:,k)+ N*(pop_con_mat')*( ...
        (rate_mf(:,JR1)*weights(JW1)).*(pop_type_vec'==1).*(pop_frac_vec') );
    nu_e2=nu_e(:,k+1)+N*(pop_con_mat')*( ...
        (rate_mf(:,JR2)*weights(JW2)).*(pop_type_vec'==1).*(pop_frac_vec') );
    nu_i1=nu_i(:,k)+ N*(pop_con_mat')*( ...
        (rate_mf(:,JR1)*weights(JW1)).*(pop_type_vec'==-1).*(pop_frac_vec') );
    nu_i2=nu_i(:,k+1)+N*(pop_con_mat')*( ...
        (rate_mf(:,JR2)*weights(JW2)).*(pop_type_vec'==-1).*(pop_frac_vec') );
g_e(:,k+1)=g_syn_updatea(g_e(:,k),nu_e1,nu_e2,tau_E,dt,mu_Gamma_E);
g_i(:,k+1)=g_syn_updatea(g_i(:,k),nu_i1,nu_i2,tau_I,dt,mu_Gamma_I);
%Mean-field computation
rate_mf(:,k+1)=mean_field_rate_vectorized(E_R,E_E,E_I,v_T,v_R,tau_M, ...
tau_R,g_e(:,k+1),g_i(:,k+1));
%TAKE AVERAGE FOR TWO ADJACENT GRID POINTS INSTEAD?
nue_vec=Kpop*nu_e(:,k);
nui_vec=Kpop*nu_i(:,k);
rate_tot_vec=nue_vec+nui_vec;
Ns=poissrnd(dt*rate_tot_vec); %number of events for each of N neurons
    p_e_vec=Kpop*( nu_e(:,k)./(nu_e(:,k)+nu_i(:,k)) );%probability that an event is of type E for each neuron
    G_ep1=G_e(k,:); %auxiliary conductance vector, initialized to that at beginning of time step
G_ip1=G_i(k,:); %ditto
G_ep2=G_ep1; %ditto
G_ip2=G_ip1; %ditto
V_ep1=V(k,:);%auxilliary voltage vector
V_ep2=V_ep1; %ditto
    n_max=max(Ns);%initialization of maximum over neurons of number of remaining events to be integrated
if n_max>nET
n_max
warning('Increase first dimension, nET, of ET matrix')
end
counter=counter+1;%pointer to present dt slot in queue matrix (q_mat)
mod_counter = mod(counter,num_dt_slots) + num_dt_slots*(mod(counter,num_dt_slots)==0);
%Generate all times and IDs for events coming from external input
%do this by grouping neurons according to common number of events.
%Start with all neurons tha have maximum number of events.
%Generate all event times and IDs in vectorized manner for these.
%Next, do the same for all neurons with one fewer number of events.
%Repeat until encountering nerons with no events
ETIDtemp=ETzero;
for K=n_max:-1:1%while K > 0
JK=(Ns==K);
ns=sum(JK);
if ns>0
p_e_mat=ones(K,ns)*diag(p_e_vec(JK));
            ETIDtemp(1:K,JK)=dt*rand(K,ns)+i*(-1+2*(rand(K,ns)<=p_e_mat)); %generate times and IDs
            %(E=1 or I=-1) for all neurons with this number of synaptic events
end
end
max_internal_events=max(event_counter_matrix(mod_counter,:));
    ETIDtot=sort([ETIDtemp(1:max(1,n_max),:);q_mat(1:max(1,max_internal_events),:,mod_counter)]);
no_events=sum(n_max+max_internal_events);
[nnn mmm]=size(ETIDtot);
    min_n_zeros=min(sum(ETIDtot==0,1));%minimum number of leading zeros in columns of ETIDtot
if min_n_zeros~=(nnn)
row_start=min_n_zeros+1;
else
row_start=nnn;
end
ETIDtot=ETIDtot(row_start:nnn,:);
t_elapse=Tzero; %Initialize elapsed time at beginning of time step
t_remain=dt_vec; %Initializeremaining time at beginning of time step
T=Tzero; %initlaize time-since-last-event vector
Jint=false(1,N); %intialize vector that points to neurons with events to be integrated
count=count_zero;
%%%%%%%%%%%%%%%%%%%%%%
num_rows=nnn-row_start+1;
num_steps=num_rows;
if no_events>0
num_steps=num_rows+1;
end
%%%%%%%%%%%%%%%%%%%%%% %
for nd=1:num_steps
if nd <=num_rows
Jint=ETIDtot(nd,:)~=0;
n_int=sum(Jint);
if n_int~=0
T(Jint)=real(ETIDtot(nd,Jint))-t_elapse(Jint);
else
Jint=true(1,N);
T=t_remain;
end
else
Jint=true(1,N);
T=t_remain;
end
G_ep2(Jint)=G_ep1(Jint).*exp(-T(Jint)./tau_e(Jint));
G_ip2(Jint)=G_ip1(Jint).*exp(-T(Jint)./tau_i(Jint));
%integrate only the subset of the neurons that are
%out or coming out of refractory period
J_out= Jint & (count_down<0); %index of neurons that are to be integrated that were
%nonrefractory at beginning of time step
        J_coming=Jint & count_down==0 & delt_fire>t_elapse & delt_fire<= ...
            (t_elapse+T); %index of neurons to be integrated that are coming out during current time step
%count_down(J_coming)=count_down(J_coming)-1;
Dt=T; %auxilliary time step, initialized to full time
%to next event
if sum(J_coming)>0 %if any neurons coming out
            G_ep1(J_coming)=G_ep1(J_coming).*exp(-(delt_fire(J_coming)- ...
                t_elapse(J_coming))./tau_e(J_coming));%conductance upon emerging
            G_ip1(J_coming)=G_ip1(J_coming).*exp(-(delt_fire(J_coming)- ...
                t_elapse(J_coming))./tau_i(J_coming));%ditto
            Dt(J_coming)=t_elapse(J_coming)+T(J_coming)-delt_fire(J_coming);%times between emerging and end of time step
end
J=J_out | J_coming;
%integrate all non_refractory neurons
        V_ep2(J) = ( V_ep1(J)-(Dt(J)./(2*tau_m(J))).*( V_ep1(J) - 2*E_r(J) - G_ep2(J).*E_e(J) ...
            - G_ip2(J).*E_i(J) + G_ep1(J).* ...
            (V_ep1(J)-E_e(J)) + G_ip1(J).*(V_ep1(J)-E_i(J)) ) )./( 1 + ...
            (Dt(J)./(2*tau_m(J))).*(1+G_ep2(J)+G_ip2(J)) );
%Find out who has crossed threshold; find times, and reset
%(put into refractory pool)
J=V_ep2>=v_th;
nf=sum(J);
if nf>0
V_ep2(J)=v_reset(J);
            A=( (G_ep2(J)-G_ep1(J)).*(v_th(J)-E_e(J)) + (G_ip2(J)-G_ip1(J)).*(v_th(J)-E_i(J)) ...
                )./(2*tau_m(J).*Dt(J));
            %solution from trapezoidal rule and
            %linear interp of G_e and G_i
            B=( V_ep1(J)+ v_th(J)- 2*E_r(J) + G_ep1(J).*(V_ep1(J)+v_th(J)-2*E_e(J)) + ...
                G_ip1(J).*(V_ep1(J)+v_th(J)-2*E_i(J)) )./(2*tau_m(J));
C=v_th(J)-V_ep1(J);
rp=(-B+sqrt(B.^2-4*A.*C))./(2*A); %possible firing times are roots of a quadratic
rm=(-B-sqrt(B.^2-4*A.*C))./(2*A);
r=[rp; rm];
dtp=[Dt(J);Dt(J)];
tm=sum(r.*((r>0)&(r<dtp))); %take the only sensible root
delt_fire(J)=t_elapse(J)+tm; %record the time of firing within the
%interval
KF=( (ones(n_pops,1)*J) & Kpop' );
Nap=sum(KF,2);
firing_times_vector(J)=delt_fire(J)/dt;
for kap=1:n_pops
Ip=Jap(kap)+(1:Nap(kap));
aptimes(kap,Ip)=t(k)+delt_fire(KF(kap,:));
Jap(kap)=Jap(kap)+Nap(kap);
end
count_down(J)=n_ref(J); %reset the count down vector f
end
%update conductances of all neurons that had event (those with
%n_max>0)
if no_events>0 & (nd<=num_rows)
ID_vec=imag(ETIDtot(nd,:));
Je=Jint & (ID_vec==1); %picks subscripts of neurons that have excitatory events
            Ji=Jint & (ID_vec==-1); %picks out subscripts of neurons that have inhibitory events
N_e=sum(Je);
N_i=sum(Ji);
            z=rand(1,n_int); %First, generate uniformly distributed random number for each event
            theta=(atan2(2*sqrt(z-z.^2),(1-2*z))-2*pi)/3; %Convert this into parabolically distributed number,
            %by this and the following two lines
x=2*cos(theta)+1;
if N_e>0
G_ep2(Je)=G_ep2(Je)+mu_Gamma_e(Je).*x(1:N_e)./tau_e(Je);
end
if N_i>0
G_ip2(Ji)=G_ip2(Ji)+mu_Gamma_i(Ji).*x(N_e+1:n_int)./tau_i(Ji);
end
end
V_ep1=V_ep2;
G_ep1=G_ep2;
G_ip1=G_ip2;
t_remain(Jint)=t_remain(Jint)-T(Jint);
t_elapse(Jint)=t_elapse(Jint)+T(Jint);
end
V(k+1,:)=V_ep2;
G_e(k+1,:)=G_ep2;
G_i(k+1,:)=G_ip2;
count_down=count_down-1; %decrement the count down vector by 1
Jhist=count_down<0;
for kmc=1:n_pops
g_e_mc(kmc,k+1)=mean(G_ep2(Kpop(:,kmc)'));
g_i_mc(kmc,k+1)=mean(G_ip2(Kpop(:,kmc)'));
end
    event_counter_matrix(mod_counter,:)=0;%initialize counter to zero in current slot that has just been used
    q_mat(:,:,mod_counter)=zeros(size(q_mat(:,:,mod_counter)));%zero the dt slot of q_mat just used
[q_mat event_counter_matrix] = qcount_11(con_mat,q_mat,mod_counter,num_dt_slots,...
firing_times_vector,axon_delay_constant,axon_delay_rand_range,dt,N, ...
event_counter_matrix);
end
cpt=cputime-cpt
%---------------PLOTTING STUFF---------------
figure(1)
plot(t,[nu_e;nu_i])
xlabel('Time (s)'); ylabel('Synaptic Input Rate (Hz)');
set(gca,'XLim',[0 Tsim])
legend('E_1','E_2','I_1','I_2')
title('Excitatory and Inhibitory Synaptic Input Rates vs. Time')
dth=0.002;
th=0:dth:Tsim;
t_rate=(dth/2):dth:(t_max-dth/2);
aptimes1=aptimes(1,aptimes(1,:)~=0);
N1=N*pop_frac_vec(1);
figure(2)
rate_monte1=hist(aptimes1,t_rate)/(N1*dth);
plot(t,rate_mf(1,:),'r-')
hold on
bar(t_rate,rate_monte1)
xlabel('Time (s)'); ylabel('Population 1 Firing Rate (Hz)')
hold off
set(gca,'XLim',[0 Tsim])
title('Mean Field and Monte-Carlo Population 1 Firing Rate vs. Time')
legend('Mean Field','Monte Carlo')
aptimes2=aptimes(2,aptimes(2,:)~=0);
N2=N*pop_frac_vec(2);
figure(3)
plot(t,rate_mf(2,:),'r-')
hold on
rate_monte2=hist(aptimes2,t_rate)/(N2*dth);
bar(t_rate,rate_monte2)
xlabel('Time (s)'); ylabel('Population 2 Firing Rate (Hz)')
hold off
legend('Mean Field','Monte Carlo')
set(gca,'XLim',[0 Tsim])
title('Mean Field and Monte-Carlo Population 2 Firing Rate vs. Time')
figure(4)
plot(t_rate,rate_monte1,'c',t_rate,rate_monte2,'k')
xlabel('Time (s)'); ylabel('Firing Rate (Hz)');
legend('Population 1', 'Population 2')
title('Populations 1 and 2 Firing Rates vs. Time')
figure(5)
plot(t,[g_e;g_e_mc])
xlabel('Time (s)'); ylabel('Conductance (nS)');
legend('Mean Field 1','Mean Field 2','Monte-Carlo 1','Monte-Carlo 2')
title('Mean Field and Monte Carlo Excitatory Network Conductances vs. Time')
figure(6)
plot(t,[g_i;g_i_mc])
xlabel('Time (s)'); ylabel('Conductance (nS)')
legend('Mean Field 1','Mean Field 2','Monte-Carlo 1','Monte-Carlo 2')
title('Mean Field and Monte Carlo Inhibitory Network Conductances vs. Time')
save monte_carlo_results_multi_pops t_rate rate_monte1 rate_monte2 t nu_e nu_i ...
g_e g_i g_e_mc g_i_mc t rate_mf randstate
Connectivity Matrix Generator
% This m-file constructs a connectivity matrix from data about the number
% and type of populations and the user-specified output-connectivity of each
% population: 1) N = number of neurons
% 2) pop_connectivity_matrix = if the number of distinct
% populations is pop_number, then pop_connectivity_matrix is
% a pop_number*pop_number matrix, in which entry (i,j) is the
% probability that a neuron in population i synapses to some
% neuron in population j.
% 3) pop_type_vector = 1*pop_number matrix of type of each population
% (1 for excitatory, -1 for inhibitory).
% 4) pop_fraction_vector = 1*pop_number matrix of the proportion
% of total number in
% each population.
function [connectivity_matrix, pop_id_vector]= ...
    conmat_5(N,pop_connectivity_matrix,pop_type_vector,pop_fraction_vector)
cpt = cputime;
% multi_pop_con_mat_generator is short for muliple-population-connectivity-matrix-generator
%Neurons are assigned populations based on their labels. For example,
%if population 1 comprises 20% of the network size, then the first 20%
%of the neurons, counting from 1, will be in population 1. For this reason,
%we construct the following vector which will give us something like "population boundaries"
n_pops=length(pop_fraction_vector);
pop_count_vector=round(N*pop_fraction_vector);%the jth entry in population_count_vector is how many neurons are in population j.
pop_count_vector(n_pops)=N-sum(pop_count_vector(1:(n_pops-1)));
if sum(pop_count_vector==0)~=0
error('ZERO NEURONS IN AT LEAST ONE POPULATION')
end
%now, declare that connectivity_matrix, the eventual output of this function m-file, is an N*N matrix,
%which we will fill with synapse type/presence values (i.e. 1 at (i,j): excitatory synapse from neuron i
%to neuron j, -1 at (i,j): inhibitory synapse from neuron i to neuron j, 0 at (i,j): no synapse from
%neuron i to neuron j)
connectivity_matrix = sparse(zeros(N,N));
%Suppose we want to know whether neuron A will synapse to neuron B. Although each synapse
presence
%value is decided randomly, the weighting is given by the connectivity of
%the population containing A, a, to the population containing B, b. This value is
%found in the user-specified pop_connectivity_matrix, namely, at (a,b).
%This value uniquely determines the probability that neuron A synapses to
%neuron B.
pop_partition_vector =[0,cumsum(pop_count_vector)];
%sub_ab_connectivity_matrix is the "sub-connectivity matrix" between
%pre-synaptic population a and post-synaptic population b. It will
%be assimilated at each step by connectivity_matrix.
pop_id_vector=sparse(zeros(1,N));
for a = 1:n_pops %step through pre-synaptic populations
pop_id_vector((pop_partition_vector(a)+1):pop_partition_vector(a+1))=a;
for b = 1:n_pops %step through post_synaptic populations
        sub_ab_connectivity_matrix = rand(pop_count_vector(a),pop_count_vector(b)) < ...
            pop_connectivity_matrix(a,b);
        connectivity_matrix((pop_partition_vector(a)+1):pop_partition_vector(a+1), ...
            (pop_partition_vector(b)+1):pop_partition_vector(b+1)) = ...
            sub_ab_connectivity_matrix*pop_type_vector(a);
end
end
cpt = cputime-cpt;
Queue Matrix Updating
function [queue, event_counter_matrix] = ...
    qcount_10(connectivity_matrix,queue_matrix,mod_counter,n_dt_slots,...
    firing_times_vector,axon_delay_constant,axon_delay_rand_range,dt,N,event_counter_matrix)
Jfire=(firing_times_vector~=0);
firing_neuron_number=sum(Jfire);
firing_connectivity_matrix = connectivity_matrix(Jfire,:);
random_part = axon_delay_rand_range/dt*rand(size(firing_connectivity_matrix));
constant_part = axon_delay_constant/dt*ones(size(firing_connectivity_matrix));
times_part = ...
    diag(firing_times_vector(find(firing_times_vector)))*ones(size(firing_connectivity_matrix));
firing_times_matrix = ...
    abs(firing_connectivity_matrix).*(times_part+constant_part+random_part);
firing_info_matrix = firing_times_matrix+i*firing_connectivity_matrix;
firing_total_vector = sum(abs(firing_connectivity_matrix),1);
firing_info_matrix = sort(firing_info_matrix,1);
lower_bound = firing_neuron_number-max(firing_total_vector)+1;
firing_info_matrix = firing_info_matrix(lower_bound:firing_neuron_number,:);
for b = 1:max(firing_total_vector)
max_neurons_vector = (firing_total_vector >= (max(firing_total_vector)+1-b));
I = find(max_neurons_vector);
K = mod(mod_counter+floor(real(firing_info_matrix(b,I))),n_dt_slots);
K=K+n_dt_slots*(K==0);
counter_index = sub2ind(size(event_counter_matrix),K,I);
event_counter_matrix(counter_index) = event_counter_matrix(counter_index) + 1;
queue_index = sub2ind(size(queue_matrix),event_counter_matrix(counter_index),I,K);
queue_matrix(queue_index) = i*imag(firing_info_matrix(b,max_neurons_vector))+...
dt*( real(firing_info_matrix(b,max_neurons_vector))-...
floor(real(firing_info_matrix(b,max_neurons_vector))) );
end
queue = queue_matrix;
Queue Matrix Generator
function queue_matrix = ...
    qmat_5(connectivity_matrix,axon_delay_constant,axon_delay_rand_range,dt,tau_ref)
%axon_delay_rand_range=length of random part of delay interval following
%minimum delay, axon_delay_constant
C = ceil((axon_delay_constant+axon_delay_rand_range)/dt)+1;%number of dt slots needed
max_in = max(sum(abs(connectivity_matrix)));
%A =max_in*ceil((C*dt-axon_delay_constant)/tau_ref); %estimate of maximum number of events to be stored
A =max_in*ceil((axon_delay_rand_range+dt)/tau_ref); %estimate of maximum number of events to be stored
%find the number of neurons in the network
B = length(connectivity_matrix);
queue_matrix = zeros(A,B,C);
Mean Field Firing Rate Computation
function rate_mf=mean_field_rate_vectorized(E_r,E_e,E_i,v_th,v_reset,tau_m,tau_ref,g_e,g_i)
Eg=(E_r+g_e.*E_e+g_i.*E_i)./(1+g_e+g_i);
tau_1=tau_m./(1+g_e+g_i);
ts=tau_1.*log( (Eg-v_reset)./(Eg-v_th) );
rate_mf=(Eg>v_th)./(ts+tau_ref);
External Synaptic Input Generator
function [nu_e, nu_i]=ex_in_synaptic_rates(nue_0,nui_0,c_e,c_i,tg,randstate)
rand('state',randstate)
f=[1 3 7 15 31 63];
nf=length(f);
ce=rand(1,nf);
ci=rand(1,nf);
thetae=2*pi*rand(1,nf);
thetai=2*pi*rand(1,nf);
de=zeros(size(tg));
di=zeros(size(tg));
for j=1:nf
de=de+ce(j)*sin(2*pi*f(j)*tg + thetae(j));
di=di+ci(j)*sin(2*pi*f(j)*tg + thetai(j));
end
se=max(abs(de));
si=max(abs(di));
nu_e=nue_0*(1+c_e*(de/se));
nu_i=nui_0*(1+c_i*(di/si));
Mean Field Synaptic Conductance Updating
function g_s=g_syn_update(g_s,nu_s_1,nu_s_2,tau_s,dt,mu_Gamma_s)
g_s=( g_s + (dt./(2*tau_s)).*(mu_Gamma_s.*(nu_s_2 + nu_s_1) - g_s) )./...
(1 + dt./(2*tau_s));
Conductance Computation Program
function [g, dg, sigma_G, mu_G, mu_Gamma_e]=get_gbins(ngbins,tau_m,tau_e,nu_0,nu_e,E_e,E_r)
mu_Gamma_e=tau_m/(E_e-E_r); % Expected area under unitary synaptic event. Gives average EPSP of ~0.5 mV
%mu_Gamma_e=mu_Gamma_e/2;
%
%Assume steady input until time zero, when sinusoidal modulation begins
mu_G=nu_0*mu_Gamma_e; %Average of G_e for steady synaptic input rate at nu_0
sigma_sq_Gamma_e=mu_Gamma_e^2/5; %variance of Gamma_e for parabolic distribution
mu_Gamma_e_sq=sigma_sq_Gamma_e+mu_Gamma_e^2; %expected square of Gamma_e
sigma_G=sqrt(nu_0*mu_Gamma_e_sq/(2*tau_e)); %standard deviation of G_e for steady synaptic input at rate nu_0
%Set up bins for g and also for histogram
mu_G_max=max(nu_e)*mu_Gamma_e;
sigma_G_max=sqrt(max(nu_e)*mu_Gamma_e_sq/(2*tau_e));
gmax=mu_G_max+3*sigma_G_max;
%
dg=gmax/ngbins;
g=(0:(ngbins-1))*dg; % a row vector
%
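For context, our reading of the grid sizing above (not a claim made in the original text): for Poisson input at rate $\nu_0$ with random unitary event areas $\Gamma$, the steady-state conductance statistics used here appear to follow Campbell's theorem for filtered shot noise, and the grid extends three standard deviations beyond the largest mean conductance:

$$
\mu_G = \nu_0\,\mu_\Gamma,\qquad
\sigma_G = \sqrt{\frac{\nu_0\,\langle\Gamma^2\rangle}{2\tau_e}},\qquad
\langle\Gamma^2\rangle = \mu_\Gamma^2 + \frac{\mu_\Gamma^2}{5},
$$

$$
g_{\max} = \max(\nu_e)\,\mu_\Gamma + 3\sqrt{\frac{\max(\nu_e)\,\langle\Gamma^2\rangle}{2\tau_e}},\qquad
\Delta g = \frac{g_{\max}}{n_{\mathrm{gbins}}},\qquad
g_j = (j-1)\,\Delta g .
$$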
Voltage Computation Program
function [v, dv, v_reset, E_r]=get_vbins(E_i,E_r,v_th,nvbins)
%dv=(v_th-E_r)/(nvbins-0.5); %E_r is a grid point
dv=(v_th-E_i)/nvbins;
v=E_i + ((1:nvbins)' - 0.5)*dv; %A column vector of voltages. E_i and v_reset are half-grid points.
E_r=v( floor((E_r-E_i)/dv) + 1 ); %E_r is chosen to be a grid point
v_reset=E_r-dv/2; %v_reset is a half grid point
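Restating the binning above: the voltage grid places $E_i$ and $v_{\mathrm{th}}$ at cell edges and the grid points at cell centres,

$$
\Delta v = \frac{v_{\mathrm{th}} - E_i}{n_{\mathrm{vbins}}},\qquad
v_j = E_i + \Bigl(j - \tfrac12\Bigr)\Delta v,\quad j = 1,\dots,n_{\mathrm{vbins}},
$$

with $E_r$ moved to the grid point of the cell containing its nominal value and $v_{\mathrm{reset}} = E_r - \Delta v/2$ placed at a cell edge (a half-grid point).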
30

More Related Content

What's hot

Boundness of a neural network weights using the notion of a limit of a sequence
Boundness of a neural network weights using the notion of a limit of a sequenceBoundness of a neural network weights using the notion of a limit of a sequence
Boundness of a neural network weights using the notion of a limit of a sequence
IJDKP
 
neural-networks (1)
neural-networks (1)neural-networks (1)
neural-networks (1)
rockeysuseelan
 
Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...
Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...
Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...
cscpconf
 
Neural network & its applications
Neural network & its applications Neural network & its applications
Neural network & its applications
Ahmed_hashmi
 
honn
honnhonn
ssnow_manuscript_postreview
ssnow_manuscript_postreviewssnow_manuscript_postreview
ssnow_manuscript_postreview
Stephen Snow
 
Artificial Neural Network (draft)
Artificial Neural Network (draft)Artificial Neural Network (draft)
Artificial Neural Network (draft)
James Boulie
 
Deep learning algorithms
Deep learning algorithmsDeep learning algorithms
Deep learning algorithms
Revanth Kumar
 
Deep neural networks & computational graphs
Deep neural networks & computational graphsDeep neural networks & computational graphs
Deep neural networks & computational graphs
Revanth Kumar
 
Y. H. Presentation
Y. H. PresentationY. H. Presentation
Y. H. Presentation
yamen78
 
ANALYSIS OF ELEMENTARY CELLULAR AUTOMATA CHAOTIC RULES BEHAVIOR
ANALYSIS OF ELEMENTARY CELLULAR AUTOMATA CHAOTIC RULES BEHAVIORANALYSIS OF ELEMENTARY CELLULAR AUTOMATA CHAOTIC RULES BEHAVIOR
ANALYSIS OF ELEMENTARY CELLULAR AUTOMATA CHAOTIC RULES BEHAVIOR
ijsptm
 
Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9
Randa Elanwar
 
Neural Networks: Self-Organizing Maps (SOM)
Neural Networks:  Self-Organizing Maps (SOM)Neural Networks:  Self-Organizing Maps (SOM)
Neural Networks: Self-Organizing Maps (SOM)
Mostafa G. M. Mostafa
 
Application of artificial_neural_network
Application of artificial_neural_networkApplication of artificial_neural_network
Application of artificial_neural_network
gabo GAG
 
Introduction to Neural Networks
Introduction to Neural NetworksIntroduction to Neural Networks
Introduction to Neural Networks
Databricks
 
Fractals in Small-World Networks With Time Delay
Fractals in Small-World Networks With Time DelayFractals in Small-World Networks With Time Delay
Fractals in Small-World Networks With Time Delay
Xin-She Yang
 
what is neural network....???
what is neural network....???what is neural network....???
what is neural network....???
Adii Shah
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
Institute of Technology Telkom
 
Neural
NeuralNeural
Neural
Vaibhav Shah
 

What's hot (19)

Boundness of a neural network weights using the notion of a limit of a sequence
Boundness of a neural network weights using the notion of a limit of a sequenceBoundness of a neural network weights using the notion of a limit of a sequence
Boundness of a neural network weights using the notion of a limit of a sequence
 
neural-networks (1)
neural-networks (1)neural-networks (1)
neural-networks (1)
 
Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...
Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...
Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...
 
Neural network & its applications
Neural network & its applications Neural network & its applications
Neural network & its applications
 
honn
honnhonn
honn
 
ssnow_manuscript_postreview
ssnow_manuscript_postreviewssnow_manuscript_postreview
ssnow_manuscript_postreview
 
Artificial Neural Network (draft)
Artificial Neural Network (draft)Artificial Neural Network (draft)
Artificial Neural Network (draft)
 
Deep learning algorithms
Deep learning algorithmsDeep learning algorithms
Deep learning algorithms
 
Deep neural networks & computational graphs
Deep neural networks & computational graphsDeep neural networks & computational graphs
Deep neural networks & computational graphs
 
Y. H. Presentation
Y. H. PresentationY. H. Presentation
Y. H. Presentation
 
ANALYSIS OF ELEMENTARY CELLULAR AUTOMATA CHAOTIC RULES BEHAVIOR
ANALYSIS OF ELEMENTARY CELLULAR AUTOMATA CHAOTIC RULES BEHAVIORANALYSIS OF ELEMENTARY CELLULAR AUTOMATA CHAOTIC RULES BEHAVIOR
ANALYSIS OF ELEMENTARY CELLULAR AUTOMATA CHAOTIC RULES BEHAVIOR
 
Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9
 
Neural Networks: Self-Organizing Maps (SOM)
Neural Networks:  Self-Organizing Maps (SOM)Neural Networks:  Self-Organizing Maps (SOM)
Neural Networks: Self-Organizing Maps (SOM)
 
Application of artificial_neural_network
Application of artificial_neural_networkApplication of artificial_neural_network
Application of artificial_neural_network
 
Introduction to Neural Networks
Introduction to Neural NetworksIntroduction to Neural Networks
Introduction to Neural Networks
 
Fractals in Small-World Networks With Time Delay
Fractals in Small-World Networks With Time DelayFractals in Small-World Networks With Time Delay
Fractals in Small-World Networks With Time Delay
 
what is neural network....???
what is neural network....???what is neural network....???
what is neural network....???
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
 
Neural
NeuralNeural
Neural
 

Viewers also liked

SAMRAT SENGUPTA CV
SAMRAT SENGUPTA CVSAMRAT SENGUPTA CV
SAMRAT SENGUPTA CV
SAMRAT SENGUPTA
 
Эми Уайнхаус
Эми УайнхаусЭми Уайнхаус
Эми Уайнхаус
Настя Хамелеон
 
Purification of regucalcin from the seminal vesicular fluid
Purification of regucalcin from the seminal vesicular fluidPurification of regucalcin from the seminal vesicular fluid
Purification of regucalcin from the seminal vesicular fluid
Ana Isabel Valencia Gómez
 
Boston Article
Boston ArticleBoston Article
Boston Article
Brian Perry
 
Modulo2 proyecto de vida
Modulo2 proyecto de vida Modulo2 proyecto de vida
Modulo2 proyecto de vida
emily mejia
 
my last vaciations
my last vaciationsmy last vaciations
my last vaciations
Nebol
 
Adriana santos unidad1 ppi- v-licenciatura 2016
Adriana santos unidad1  ppi- v-licenciatura 2016Adriana santos unidad1  ppi- v-licenciatura 2016
Adriana santos unidad1 ppi- v-licenciatura 2016
adriana santos
 
EPSAHypACone2005
EPSAHypACone2005EPSAHypACone2005
EPSAHypACone2005
Adam Cone
 
John Richard Self
John Richard SelfJohn Richard Self
John Richard Self
John Self
 
Purification of regucalcin from the seminal vesicular fluid
Purification of regucalcin from the seminal vesicular fluidPurification of regucalcin from the seminal vesicular fluid
Purification of regucalcin from the seminal vesicular fluid
Ana Isabel Valencia Gómez
 
WeidlingerPaperACone2005
WeidlingerPaperACone2005WeidlingerPaperACone2005
WeidlingerPaperACone2005
Adam Cone
 
Sistema Nervios Central
Sistema Nervios CentralSistema Nervios Central
Sistema Nervios Central
Selene Peñaloza
 

Viewers also liked (12)

SAMRAT SENGUPTA CV
SAMRAT SENGUPTA CVSAMRAT SENGUPTA CV
SAMRAT SENGUPTA CV
 
Эми Уайнхаус
Эми УайнхаусЭми Уайнхаус
Эми Уайнхаус
 
Purification of regucalcin from the seminal vesicular fluid
Purification of regucalcin from the seminal vesicular fluidPurification of regucalcin from the seminal vesicular fluid
Purification of regucalcin from the seminal vesicular fluid
 
Boston Article
Boston ArticleBoston Article
Boston Article
 
Modulo2 proyecto de vida
Modulo2 proyecto de vida Modulo2 proyecto de vida
Modulo2 proyecto de vida
 
my last vaciations
my last vaciationsmy last vaciations
my last vaciations
 
Adriana santos unidad1 ppi- v-licenciatura 2016
Adriana santos unidad1  ppi- v-licenciatura 2016Adriana santos unidad1  ppi- v-licenciatura 2016
Adriana santos unidad1 ppi- v-licenciatura 2016
 
EPSAHypACone2005
EPSAHypACone2005EPSAHypACone2005
EPSAHypACone2005
 
John Richard Self
John Richard SelfJohn Richard Self
John Richard Self
 
Purification of regucalcin from the seminal vesicular fluid
Purification of regucalcin from the seminal vesicular fluidPurification of regucalcin from the seminal vesicular fluid
Purification of regucalcin from the seminal vesicular fluid
 
WeidlingerPaperACone2005
WeidlingerPaperACone2005WeidlingerPaperACone2005
WeidlingerPaperACone2005
 
Sistema Nervios Central
Sistema Nervios CentralSistema Nervios Central
Sistema Nervios Central
 

Similar to NeurSciACone

project_presentation
project_presentationproject_presentation
project_presentation
Russell Jarvis
 
PR12-225 Discovering Physical Concepts With Neural Networks
PR12-225 Discovering Physical Concepts With Neural NetworksPR12-225 Discovering Physical Concepts With Neural Networks
PR12-225 Discovering Physical Concepts With Neural Networks
Kyunghoon Jung
 
JACT 5-3_Christakis
JACT 5-3_ChristakisJACT 5-3_Christakis
JACT 5-3_Christakis
Vasilis Barbaris
 
Senior thesis
Senior thesisSenior thesis
Senior thesis
Chris Fritz
 
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...
CSCJournals
 
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURON
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURONA PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURON
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURON
ijdpsjournal
 
Basics of Neural Networks
Basics of Neural NetworksBasics of Neural Networks
14 Machine Learning Single Layer Perceptron
14 Machine Learning Single Layer Perceptron14 Machine Learning Single Layer Perceptron
14 Machine Learning Single Layer Perceptron
Andres Mendez-Vazquez
 
EM Term Paper
EM Term PaperEM Term Paper
EM Term Paper
Preeti Sahu
 
Hardware Implementation of Spiking Neural Network (SNN)
Hardware Implementation of Spiking Neural Network (SNN)Hardware Implementation of Spiking Neural Network (SNN)
Hardware Implementation of Spiking Neural Network (SNN)
supratikmondal6
 
Wavelet-based EEG processing for computer-aided seizure detection and epileps...
Wavelet-based EEG processing for computer-aided seizure detection and epileps...Wavelet-based EEG processing for computer-aided seizure detection and epileps...
Wavelet-based EEG processing for computer-aided seizure detection and epileps...
IJERA Editor
 
Neural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics CourseNeural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics Course
Mohaiminur Rahman
 
Dsp review
Dsp reviewDsp review
Dsp review
ChetanShahukari
 
neural pacemaker
neural pacemakerneural pacemaker
neural pacemaker
Steven Yoon
 
Restricted Boltzman Machine (RBM) presentation of fundamental theory
Restricted Boltzman Machine (RBM) presentation of fundamental theoryRestricted Boltzman Machine (RBM) presentation of fundamental theory
Restricted Boltzman Machine (RBM) presentation of fundamental theory
Seongwon Hwang
 
Neural Networks Ver1
Neural  Networks  Ver1Neural  Networks  Ver1
Neural Networks Ver1
ncct
 
Complexity and Quantum Information Science
Complexity and Quantum Information ScienceComplexity and Quantum Information Science
Complexity and Quantum Information Science
Melanie Swan
 
Improvement of a Bidirectional Brain-Computer Interface for Neural Engineerin...
Improvement of a Bidirectional Brain-Computer Interface for Neural Engineerin...Improvement of a Bidirectional Brain-Computer Interface for Neural Engineerin...
Improvement of a Bidirectional Brain-Computer Interface for Neural Engineerin...
HayleyBoyd5
 
P REDICTION F OR S HORT -T ERM T RAFFIC F LOW B ASED O N O PTIMIZED W...
P REDICTION  F OR  S HORT -T ERM  T RAFFIC  F LOW  B ASED  O N  O PTIMIZED  W...P REDICTION  F OR  S HORT -T ERM  T RAFFIC  F LOW  B ASED  O N  O PTIMIZED  W...
P REDICTION F OR S HORT -T ERM T RAFFIC F LOW B ASED O N O PTIMIZED W...
ijcsit
 
Random process and noise
Random process and noiseRandom process and noise
Random process and noise
Punk Pankaj
 

Similar to NeurSciACone (20)

project_presentation
project_presentationproject_presentation
project_presentation
 
PR12-225 Discovering Physical Concepts With Neural Networks
PR12-225 Discovering Physical Concepts With Neural NetworksPR12-225 Discovering Physical Concepts With Neural Networks
PR12-225 Discovering Physical Concepts With Neural Networks
 
JACT 5-3_Christakis
JACT 5-3_ChristakisJACT 5-3_Christakis
JACT 5-3_Christakis
 
Senior thesis
Senior thesisSenior thesis
Senior thesis
 
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...
 
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURON
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURONA PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURON
A PERFORMANCE EVALUATION OF A PARALLEL BIOLOGICAL NETWORK MICROCIRCUIT IN NEURON
 
Basics of Neural Networks
Basics of Neural NetworksBasics of Neural Networks
Basics of Neural Networks
 
14 Machine Learning Single Layer Perceptron
14 Machine Learning Single Layer Perceptron14 Machine Learning Single Layer Perceptron
14 Machine Learning Single Layer Perceptron
 
EM Term Paper
EM Term PaperEM Term Paper
EM Term Paper
 
Hardware Implementation of Spiking Neural Network (SNN)
Hardware Implementation of Spiking Neural Network (SNN)Hardware Implementation of Spiking Neural Network (SNN)
Hardware Implementation of Spiking Neural Network (SNN)
 
Wavelet-based EEG processing for computer-aided seizure detection and epileps...
Wavelet-based EEG processing for computer-aided seizure detection and epileps...Wavelet-based EEG processing for computer-aided seizure detection and epileps...
Wavelet-based EEG processing for computer-aided seizure detection and epileps...
 
Neural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics CourseNeural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics Course
 
Dsp review
Dsp reviewDsp review
Dsp review
 
neural pacemaker
neural pacemakerneural pacemaker
neural pacemaker
 
Restricted Boltzman Machine (RBM) presentation of fundamental theory
Restricted Boltzman Machine (RBM) presentation of fundamental theoryRestricted Boltzman Machine (RBM) presentation of fundamental theory
Restricted Boltzman Machine (RBM) presentation of fundamental theory
 
Neural Networks Ver1
Neural  Networks  Ver1Neural  Networks  Ver1
Neural Networks Ver1
 
Complexity and Quantum Information Science
Complexity and Quantum Information ScienceComplexity and Quantum Information Science
Complexity and Quantum Information Science
 
Improvement of a Bidirectional Brain-Computer Interface for Neural Engineerin...
Improvement of a Bidirectional Brain-Computer Interface for Neural Engineerin...Improvement of a Bidirectional Brain-Computer Interface for Neural Engineerin...
Improvement of a Bidirectional Brain-Computer Interface for Neural Engineerin...
 
P REDICTION F OR S HORT -T ERM T RAFFIC F LOW B ASED O N O PTIMIZED W...
P REDICTION  F OR  S HORT -T ERM  T RAFFIC  F LOW  B ASED  O N  O PTIMIZED  W...P REDICTION  F OR  S HORT -T ERM  T RAFFIC  F LOW  B ASED  O N  O PTIMIZED  W...
P REDICTION F OR S HORT -T ERM T RAFFIC F LOW B ASED O N O PTIMIZED W...
 
Random process and noise
Random process and noiseRandom process and noise
Random process and noise
 

NeurSciACone

  • 1. Towards Modeling Neural Networks with Physiologically Different Populations: Constructing a Monte-Carlo Model By Adam Cone VIGRE Research Project Summer 2003 Advisor: _____________________ Prof. Daniel Tranchina Date:_______________ 1
  • 2. Adam Cone Modeling Neural Networks Table of Contents Abstract …………………...……………………………………………...………………..3 Introduction ………………………………………………………………….…………….......3 Biological Background ……………………………………………………………...…..4 Mathematical Background …………………………………………………….………5 Monte-Carlo Network Construction …………………………….………………………7 Network Construction and Representation .…………………………………………....8 Translating Neural Activity into Network Stimulus …………………………………..……9 Variable Physiological Characteristics ……………………………………………….12 Multi-Population Simulation Results ...…………………………………………………13 Mean Field and Monte-Carlo Comparisons .……………………………………………....13 Physiological Parameter Variation: Testing ….…………………………………………....13 Conclusion ...………………………………………………………………………………...18 Literature Review ……...……………………………………………………………………19 Appendix A …………………….……………………………………………………………..20 Appendix B ………………………….………………………………………………………..21 2
  • 3. Adam Cone Modeling Neural Networks Abstract Computing power is a fundamental limitation in mathematically modeling multi- population neural networks in the visual cortex, and innovative modeling techniques are continually sought to facilitate more efficient, complete simulation. Recent population-density methods have shown promise in comparisons with contemporary Monte-Carlo and mean field methods in single-population regimes,1 suggesting their potential usefulness in network modeling. To carry-out comparisons in physiologically accurate network regimes, all three models must be modified and expanded to account, not only for multiple external inputs, but also for network interactions, and different population response parameters. This paper details our construction of multi-population network Monte-Carlo and mean field models and critically analyses simulation results to verify their accuracy. We conclude that the Monte-Carlo method is suitable for use in future population-density method testing. Finally, we propose additional variables that Monte-Carlo and mean-field models should account for in future simulations. Related Fields: computational neural science, numerical computing, computational biology, applied mathematics, mathematical modeling, neural networks, biomathematics Introduction “[A Monte Carlo method] solves a problem by generating suitable random numbers and observing that fraction of the numbers obeying some [property set]. The method is useful for obtaining numerical solutions to problems which are too complicated to solve analytically.”2 “In the Population-Density approach, integrate-and-fire neurons are grouped into large populations of similar neurons. For each population, we form a probability density that represents the distribution of neurons over all possible [voltage] states.”3,4 A practical difference between the two methods is that Monte Carlo simulations are applicable to all kinds of problems, whereas Population-Density simulations were developed specifically for modeling large populations of similar neurons with specific properties. The comparison is important because identifying and using the most computationally efficient method will enable us to account for more variables, simulate more neurons for longer durations and, in short, make our simulations more “life-like”. These expanded models can improve our understanding of neural networks. 1 Cone 2 Weisstein 3 Nykamp and Tranchina 4 For a thorough derivation of the Population-Density method, see ibid. 3
  • 4. Adam Cone Modeling Neural Networks Biological Background From a biological perspective, neurons (Figure 1)5 are the brain’s fundamental units. A neuron’s essential function is to integrate input from various sources and either fire or not fire an action potential, the medium of inter-neuron communication, which, in turn, stimulates other neurons. Neurons interface with one another at synapses (Figure 2)6 where the pre- synaptic neuron releases molecules called neurotransmitters that open ion channels on the post-synaptic neuron. The open ion channels allow ions from the surrounding cytosol to enter the neuron, or vice-versa. Since the ions are carrying charge, their movement is essentially an electrical current, and it’s effect on the neuron voltage relative to the cytosol can be either excitatory (increases voltage and action potential probability) or inhibitory (decreased voltage and action potential probability). Whether a neuron ultimately fires an action potential is a function of its voltage at the axon-hillock, the action potential initiation area. Each neuron has a threshold voltage and, when this threshold voltage is met or exceeded at the axon hillock, the neuron fires an action potential. Once it has fired, the neuron enters a refractory period; it cannot fire during this period as its ion concentrations are being reestablished. A neuron’s, or group of neurons’, activity is defined as the action potential firing rate. Determining neural activity for various groups of neurons is crucial in human visual cortex analysis because it indicates what processes are taking place at various locations. Figure 1: Neuron Diagram During its passage through the central visual system, information from the retina, encoded as action potentials from retinal neurons, undergoes several relays and transformations. Each of these relays and transformations involves either a redirection or re-organization, respectively, of the action- potential-encoded information by neuron populations, large neuron groups that are similar in their biophysical properties and synaptic connections. These populations interact in neural networks to perform the various operations on visual information (e.g. mapping, rerouting, organizing, filtering, integrating, etc.) that enable us to interpret visual stimuli. Neural networks exhibit complex behavior, and facilitate high-level processes, such as orientation tuning. Understanding neural network behavior is one of the central projects in understanding the visual cortex. Figure 2: Synapse Diagram 5 Maizels 6 Maizels 4
  • 5. Adam Cone Modeling Neural Networks Mathematical Background: The following integrate-and-fire neuron modeling equations are used directly to update the mean field method; they form the computational foundation for our simulations. We use adapted forms of the equations to update the Monte-Carlo simulation. Let and be the excitatory and inhibitory membrane conductances (nS) as functions of time, and let ( )eg t ( )ig t eτ and iτ be the excitatory and inhibitory decay constants (ms) . In the absence of synaptic events, the excitatory and inhibitory membrane conductances decay according to the first-order differential equations ( ) ( ) , 0e e e e dg t g t dt τ τ − = ≠ ( ) ( ) , 0i i i i dg t g t dt τ τ − = ≠ . If T (ms) is the synaptic event time,k ( k )f T + and ( k )f T − are the right- and left-hand limits of a function f at T , andk k e eτ Γ and k i iτ Γ are the random excitatory and inhibitory conductance boosts (nS) at , thenkT ( ) ( ) , 0 ( ) ( ) , 0 k e e k e k e e k i i k i k i i g T g T g T g T τ τ τ τ + − + − Γ = + ≠ Γ = + ≠ Combining these equations, we obtain the general equations for membrane conductance: ( ) ( ) ( ) , 0 ( ) ( ) ( ) , 0 k e e k e k e e k i i k i k i i g t t T dg t dt g t t T dg t dt δ τ τ δ τ τ + Γ − = − ≠ + Γ − = − ≠ ∑ ∑ Now, let (mV) be the resting membrane voltage, V t be the membrane voltage at time (mV), and denote the equilibrium excitatory and inhibitory voltages (mV), and be the membrane capacitance. We model membrane voltage with the following differential equation: rE ( ) t eE iE C [ ( )] [ ( )] [ ( )]( ) , 0r r e e i ig E V t g E V t g E V tdV t C dt C − + − + − = ≠ . 5
  • 6. Adam Cone Modeling Neural Networks Suppose that excitatory and inhibitory synaptic events are occurring with frequency (Hz) and (Hz), respectively, for a population , and that the events are Poisson distributed for each neuron. Then the average excitatory and inhibitory conductances for the neurons in the population and is given by ( )e tν P ( )eg t ( )i tν ( )ig t ( ) ( ) ( ) , 0 ( ) ( ) ( ) , 0 e e e e e e i i i i i i d g t t g t dt d g t t g t dt ν τ τ ν τ τ Γ − = ≠ Γ − = ≠ Now, when we model the evolving mean-field conductance for each population in the network, we must account not only for synaptic input from external stimuli, but also for synaptic events generated by network neurons. Physiologically, when a pre-synaptic neuron crosses voltage threshold and fires an action potential, there is a random delay between the firing time and the time at which any post-synaptic neuron experiences a resultant synaptic event. Suppose pre-synaptic neuron A synapses to post-synaptic neuron B and that neuron A fires an action potential at a time T (ms). We want to know the time T (ms) at which neuron B experiences the resultant synaptic event as a function of T . Computationally, we model this delay by defining two time quantities: the minimum possible delay between action potential firing and synaptic event occurance (ms); and the maximum possible additional delay time (ms). We compute as follows: T T ,7 ap se ap minT maxrandT seT min maxrandse ap randT T= + + randwhere (unitless) is MATLAB function that outputs a uniformly distributed random number between 0 and 1. In the mean field population model, we compute the rate of synaptic input to population β from population γ , (Hz), as a function of the activity of population γ , (Hz), where ,8 and the distribution function α (unitless), which inputs a synaptic delay, and outputs the probability that it will occur for a given action potential. In our case, because we select the delay from a uniform distribution, is a piecewise constant function, namely βγν ( )A tγ t ∈ ( )t ( )tα ( ) [ ] ( ) min min min max max max min max 0, , 1 ( ) , , , 0 0, , rand rand rand rand t T t t T T T T T t T T α  ∈ −∞   = ∈ + ≠   ∈ + ∞ . mindt T> min maxrandT T dt= = (1 rand)se ap dt 7 A potential problem arises if our simulation time-step . To avoid this, we can, for example, set , which is physiologically accurate, so that becomes T T .= + + maxrand randT8 We do not further restrict t because, due to the random part of the synaptic delay, action potentials from population α that originated at different times could effect at any given time .tαβν 6
  • 7. Adam Cone Modeling Neural Networks Finally, let (unitless) denote the number of synapses from γ to β . Now we use a convolution integral as follows: Nβγ ( ) ( )N A t t t dtβγ βγ γν α ∞ −∞ ′ ′ ′= −∫ min maxmin min min max ( ) ( ) ( ) ( ) ( ) ( ) rand rand T TT T T T N A t t t dt A t t t dt A t t t dtβγ γ γ γα α α + ∞ −∞ +   ′ ′ ′ ′ ′ ′ ′ ′ ′= − + − + −     ∫ ∫ ∫ t′ ( ) 0twhere (ms) is the ‘time ago’. Using the fact that when t T , we obtain min min max[ ,( )]randT T∉ +α = min max min max min minmax max 1 0 ( ) 0 ( ) rand randT T T T rand randT T N N A t t dt A t t dt T T βγ βγ βγ γ γν + +   ′ ′ ′ ′= + − + = −     ∫ ∫ We now make the substitution * * * max min max * min min * * ' ' ( ) ( ) 0 1 1 ' ' rand t t t t t t t t T T t t T dt d t t dt dt dt dt = − ⇒ = − ⇒ = − + ⇒ = − ′− ⇒ = = − = − ′ ⇒ = − so that our new integral is min max min min min min max min max ( ) * * max * * * * max max( ) ( ) ( )( ) ( 1)( 1) ( ) ( ) rand rand rand t T T rand t T t T t T rand randt T T t T T N A t dt T N N A t dt A t dt T T βγ βγ γ βγ βγ γ γ ν − + − − − − + − + = − = − − = ∫ ∫ ∫ . Monte-Carlo Model Construction Our goal was to adapt the Monte-Carlo population model to simulate multi-population neural network activity. For our purposes, a multi-population network is a group of neuron populations, in which any neuron can, a priori, synapse to any other neuron in the network, including itself. There were three main challenges in achieving this goal: 7
  • 8. Adam Cone Modeling Neural Networks 1) network construction and representation 2) translating neural activity to network stimulus 3) endowing different populations with different physiological characteristics Network Construction and Representation The first problem was to computationally define the following network features: 1) network size 2) number of populations 3) relative population sizes 4) population types (excitatory or inhibitory) 5) network connectivity The first four quantities are relatively straightforward to implement, but the network connectivity is not. One approach is to input the data for each synapse that exists between neurons. So, for example, if we want neuron 3 to synapses to neuron 12, we could manually input those values to the computer. However, since the neural networks can be larger than 100,000 and the number of synapses far higher, this solution is impractical. Furthermore, we are not concerned with whether a synapse exists between any two specific neurons- the holistic statistical properties of network connectivity are our primary concern- so inputting values this way would be tedious. We need a more efficient method of defining the connectivity. If one considers the neuron populations as the fundamental functional units of the network, then one can think of how the neuron populations are connected to one another without concentrating on the individual neurons. Suppose there is a network with two populations A and B , with a and neurons, respectively.b 9 From a population perspective, we completely define the network by declaring the number of synapses from A to B , ; to A , ; to A , ; and to B , . Equivalently, we could declare the probability that a given neuron in population synapses to a given neuron in population , given by , etc. Since we have defined the connectivity to our satisfaction, we want to randomly generate a network with neuron-level connectivity that satisfies our specified population-level connectivity.10 We achieve this by generating an × matrix, where (unitless) is the number of neurons in the network. In this matrix, ABS A AAS B BAS B BBS A B ABS ab N N N 1, neuron i has an inhibitory synapse to neuron j 0, neuron i doesn't synapse to neuron j 1, neuron i has an excitatory synapse to neuron j ijx −  =    N A . For example, suppose we want a neural network with neurons and three populations, , B , and C . We want and to be excitatory, C inhibitory, and their sizes to be a , b andA B 9 Note that, because we assume that each neuron can synapse at most once to any other neuron, ab is the total number of possible synapses from A to B. 8
  • 9. Adam Cone Modeling Neural Networks c (unitless), respectively. How do we define population-level connectivity? We construct a 3 3× matrix M with ( )any given neuron in population i synapses to any given neuron in population jijm P= a a× [ . Now, we generate an random matrix (i.e. a matrix in which each entry is a uniformly distributed real number ). We ask the computer to perform a logical operation on each such that ]0,1r ∈ L ijm 0, 0, ( ) , 0 1, ,1 AA ij ij AA ij S m aa L m aa S m aa    ∈     = ≠   ∈    . We now perform similar operations for each 2-population combination (i.e. A : B , A :C , B : A , B : B , etc.), and concatenate the resulting matrices. Now we have our connectivity matrix – our representation of the neural network. Translating Neural Activity into Network Stimulus dt We are concerned primarily with network activity, specifically the rate at which each population in the network, as well as the network itself, is firing action potentials at any given time. To find this, we need to record how many neurons fired in each discretized time step of length , which necessitates updating the conductances and voltage of each neuron in each time step. In a network model, we need to perform all the operations of the population model, in addition to accounting for inter-neuron interaction. The following is a checklist of the basic steps: 1) classify synaptic input rate to each neuron from external source 2) translate input rate into a Poisson-distributed sequences of synaptic events 3) classify action potential-generated synaptic events from network neurons 4) sort the synaptic events from network and external stimulus into new sequence 5) integrate over time to update the neuron conductances 6) integrate over time to find the neuron voltages 7) decide whether each neuron has fired an action potential 8) record which neurons will experience network-generated synaptic events in next dt 9) store the three-dimensional state-space coordinates. When the simulation is running, every time a neuron fires, we must 1) reference the connectivity matrix 2) determine which neurons experience synaptic events 3) determine the synaptic event times 4) determine whether the pre-synaptic neuron is excitatory or inhibitory 5) generate the random strength of the synaptic events 9
  • 10. Adam Cone Modeling Neural Networks 6) store the post-synaptic neuron/time/strength data for future reference.11 In the Monte-Carlo regime, accounting for inter-network communication is essentially a bookkeeping problem. In computer science terminology, we need a data object in which we can easily store and modify data about inter-neuron communication. We first construct this object, then explain how it is used in the program. The synaptic input events for each neuron must be sorted by time of occurrence, because our second-order accurate integration scheme requires integrating voltages and conductances between these synaptic events. The essential problem is one of data storage and access, but because we ultimately want synaptic events lined up for use in future time-steps, we call our data object the queue matrix. The queue matrix dynamically stores future synaptic input data for each post-neuron. That is, for each neuron, the queue matrix stores the times and types of future synaptic events. Now, because the synaptic delay , the queue matrix must have a capacity of at least . Since no resultant synaptic event can occur more than T T after the end of the current time-step, further storage is extraneous,12 so we have that one of the matrix dimensions is , where is the MATLAB notation for the smallest integer function. [ ]mi( )sa apT T T− ∈ min max l 1randT T+  cei dt + n min max,( )randT T+   rand   min+ max cei min max l 1randT T dt +  +    ceil max_in Furthermore, since it is possible that a neuron experiences multiple synaptic events from other network neurons in one time-step, we need to know how much space to allocate for synaptic events. Let (unitless) denote the maximum number of incoming synapses held by any neuron in the network and (ms) denote the refractory period. Then, in general, the requisite number of events the queue matrix must store, and, therefore, one of the queue matrix’ dimensions, is given by . refτ max max_in*ceil randT dt τ  +   ref  N ( ) ( ) ( )neurons ijkq th events × time-steps × th Finally, since the queue matrix must store synaptic event data for each neuron, one of the matrix’ dimensions must be . By convention, our queue matrix is . In other words, in , we find the data about the time and type of the i synaptic event occurring during the j time-step at neuron . This leaves us with the problem of storing two pieces of information, time and type, in a single matrix cell. For computational reasons, we chose to represent the data as a complex number . Let T (ms) be the elapsed time between the beginning of the time- k a bi+ se 11 We assume that each pre-synaptic neuron is either excitatory or inhibitory, not both. 12 That the total simulation length is more than is irrelevant; we simply erase all events after the current time-step. min max ceil 1andT T dt +  +    10
  • 11. Adam Cone Modeling Neural Networks step and the synaptic event. Then the real part, a , is given by . The imaginary part is given by . 0, if no event o , >0se se a T T  =   1, synaptic event 0, no syna ic event t b −  =  ccured b an inhibitory pt 1, an excitatory synaptic even  ,ap ω The queue matrix is updated at the end of each time-step; a process which requires defining two additional objects. Let T (ms) denote the time at which a neuron ω fired, and let (ms) denote the beginning of the time-step. After we finish the voltage integration and determine if/when each neuron has fired, we create vector of length of neuron firing times in which is the time the neuron fired, given by . Furthermore, we sum down the first dimension of the queue matrix to obtain a ( event-counter matrix, in which x is the number of synaptic events the neuron is already determined to experience in the time-step. So, if further pre-synaptic neuron firing leads to additional synaptic events, we will reference the event counter matrix to decide at which positions in the queue matrix column we should put them. 0T N ix thi , 0ap iT T dt − 0≠ ) ( )times-steps × neurons ij thj thi jk fireN N× fireN dt fireN N× In each dt , after we have defined the neuron firing vector and we know which neurons fired, we want to ‘queue-up’ the resultant synaptic events at all the neurons post-synaptic to firing neurons. To do this, we use the neuron firing vector to find the appropriate rows of the neuron connectivity matrix and, since the connectivity firing matrix rows not corresponding to firing neurons are irrelevant, we condense the relevent rows into a new firing connectivity matrix, where denotes the number of neurons that fired in the current . Now, we generate an delay times matrix, with uniformly distributed random values . We create another × matrix, firing times matrix, by right- multiplying an ones matrix by the transpose of the firing times row vector, so that has the value of the element in the neuron firing times vector. Adding these two matrices, we obtain random, uniformly distributed synaptic event reception times that account for the both the different firing times of the pre-synaptic neurons, and the multiple post-synaptic targets. To rid ourselves of the unwanted data, we element-multiply the times matrix by the absolute value of the firing connectivity matrix. We call this the firing times matrix. [ ]min min,R N N∈ fireN NmaxNij + N× thi fireN ijx We need to associate with each future synaptic event generated in the current a type, excitatory or inhibitory, based on the designation of the pre-synaptic neuron that fired. We now construct a firing information matrix by adding the firing times matrix to the firing connectivity matrix, multiplied by , the imaginary number. So, in each firing information matrix cell, we have a complex number , with the synaptic event reception time, and the synaptic event type: excitatory or inhibitory. Sorting the columns by absolute value, we arrange the events chronologically from first (top) to last (bottom). dt i λ real( )λ sign(imag( ))λ 11
  • 12. Adam Cone Modeling Neural Networks Although we have organized our action potential firing data for the current time-step, it remains to enter this data into the queue matrix, where it can be efficiently accessed in future time-loops. We do this by looping over the maximum number of synaptic events any post- synaptic neuron will experience as a result of action potentials that occurred in the current time- step; a quantity obtained by summing down the absolute value of the firing connectivity matrix. In the loop, we increment the relevant cells of the event counter matrix, and define row, column, and depth indices for the queue matrix. Variable Physiological Characteristics Our goal is to model multiple population network activity in the visual cortex. Optimally, we want our simulation to handle a user-specified number of populations, each with different qualities (i.e. excitatory vs. inhibitory, different refractory periods, voltage thresholds, etc.), without having to write individual programs for each case. We construct a variable population network by the algorithm outlined in Network Construction, but how do we efficiently assign physiological constants to the different populations? Briefly, the user constructs the following column vectors, each of length (unitless), where is the number of populations in the network: P P N thi thk excitatory equilibrium voltage inhibitory equilibrium voltage resting voltage excitatory conductance decay constant inhibitory conductance decay constant membrane time constant refractory period reset voltage threshold voltage For example, the i element of the reset voltage vector contains the reset voltage constant for the neurons in population i . th Although we now have sufficient data to simulate interacting populations with different physiological parameters, the data is not in a convenient form for calculations. Since many of our Monte-Carlo computations are based on individual neurons, we would like, for each characteristic, to have a vector of dimension whose element is the characteristic value of the i neuron in the network. Let be the number of neurons in the population. Then we can construct a matrix of dimension , where th kP N P× 1 1 1 0, , j j n n n n ij i P P x − = =    ∉  1 1 1 1, , j j n n n n i P P − = =     =  ∑ ∑   ∈    ∑ ∑ N th th . Now, multiplication by, for instance, the refractory time column vector yields a column vector of length , where the i element is the refractory time value of the i network neuron. 12
  • 13. Adam Cone Modeling Neural Networks For example, suppose a given neural network has seven neurons and four distinct populations, , , , and , with sizes two, one, three, and one, respectively. Further, suppose that the population refractory time values are 3 , , , and , respectively. Then the computationally convenient vector of neuron refractory time values is given by: a b c d sµ 2 sµ 6 sµ 4 sµ 1 0 0 0 3 s 1 0 0 0 3 s 3 s 0 1 0 0 2 s 2 s 0 0 1 0 6 s 6 s 0 0 1 0 6 s 4 s 0 0 1 0 6 s 0 0 0 1 6 s µ µ µ µ µ µ µ µ µ µ µ                        =                           . Performing this operation for each population-variable characteristic, we obtain convenient vectors for each characteristic. The data is now in a computationally convenient form. Multi-Population Simulation Results To demonstrate the necessity (over the mean-field method) and versatility of our Monte- Carlo multi-population method, we present activity and conductance comparisons between the mean-field and Monte-Carlo simulation data (Figures 1-4); and the results of four simulations, each run with different physiological parameters and each with 1000 neurons, and two interacting populations: one excitatory (population 1) and one inhibitory (population 2) (Figures 5-8). The specific physiological parameters of each simulation are found in Appendix A, and the simulation code is found is Appendix B. Mean-Field and Monte-Carlo Comparisons While the Monte-Carlo method is the standard for accuracy in network modeling, it must ponderously track each individual neuron; a computationally costly process. The mean field method, by comparison, involves only numerically solving ordinary differential equations, and is far faster; ideally, we would use them exclusively. However, the mean field models neuron- interaction poorly, as demonstrated in Figures 1-4, hence the need for a Monte-Carlo model. Figures 1-4 each show relative agreement between the mean field and Monte-Carlo methods, but we see, in each figure, the mean field deviating from the Monte-Carlo. This motivates our need for a Monte-Carlo method. Physiological Parameter Variation: Testing One of our primary goals was to account for interactions between physiologically different neuron populations. After programming the simulations, we varied physiological parameters (e.g. threshold voltage, refractory period, etc.) and critically examined the results. We expect, for instance, that if an isolated neuron population X has a higher threshold voltage then another, otherwise identical, isolated neuron population Y , then, given similar synaptic input , will have a higher activity than . If a simulation did not show this, it is unlikely thatY X 13
  • 14. Adam Cone Modeling Neural Networks the simulation accurately modeled reality. The following are similar functionality tests, which compare population activity. We ran simulations with four different parameter sets, three of them varying by one or two physiological variables from the control set, parameter set 1, and compared the results. Briefly, parameter set 2 differs from parameter set 1 in its threshold voltage values; higher for the excitatory population and lower for the inhibitory population. In parameter set three, we made the following two changes: we modified the connectivity matrix so that there is substantial population self-synapsing; and we changed the refractory periods for both populations (halved for population 1, doubled for population 2). Finally, population set 4 has a higher resting voltage for population 1, and a lower resting voltage for population 2. Parameter Set 2: Threshold Voltage In comparison with Figure 5, Figure 6 has two salient features (increased population 2 activity; decreased population 1 activity), both of which are consistent with the physiological differences- a population’s activity is proportional to the ease with which its neurons can reach threshold voltage. By making the threshold harder to meet for population 1, and easier to meet for population 2, we decreased and increased their respective activities. This effect is enhanced by the populations’ connectivity. Since population 2 is inhibitory and synapses to many neurons in population 1, population 2’s increased activity means increased inhibition for population 1. So, while population 2’s increased activity results exclusively from the threshold voltage change, population 1’s decreased activity is the result of both the threshold voltage changes, and the indirect effect of the threshold voltage changes through the network architecture. Parameter Set 3: Network Connectivity and Refractory Period Relative to the simulation activity data from parameter set 1, parameter set 3 data (Figure 7) exhibits an increased positive disparity between the activities of population 1 and population 2. The result of altering the refractory period is that the population 1 neurons have less forced inactivity, whereas the population 2 neurons have more. When a neuron is in a refractory period its voltage cannot evolve, and, in some sense, the excitatory synaptic events are ‘wasted’ in that they cannot make the neuron more likely to fire. In parameter set 3, the refractory period changes minimize this ‘waste’ for population 1, and maximize it for population 2- hence, the greater disparity. The effect of allowing population self-synapsing is subtler, but important. Population 1 is excitatory, which means that its action potentials precipitate excitatory synaptic events at post- synaptic neurons, including those in inhibitory population 2. In the population 1 network, these excitatory events make the inhibitory population more likely to fire, which, in turn, decreases the activity of population 1. For most external synaptic stimulus time-courses, this effect is minor, but present; the network connectivity means that population 1’s activity is, through population 2, somewhat self-limiting. However, in parameter 3 network, the set of neurons post-synaptic to population 1 include neurons in population 1, which means that population 1’s activity has a strong self-actuating effect. 
Furthermore, population 2’s self-synapsing has precisely the opposite effect- it’s activity is now strongly self-limiting, and, therefore, so is its tempering effect on population 1. The net result is that population 1’s activity increases and population 2’s activity decreases. The combination of these two effects explains the large differences between Figures 5 and 7. 14
  • 15. Adam Cone Modeling Neural Networks Parameter Set 4: Neuron Resting Voltage Figure 8’s relationship to Figure 5 is similar to Figure 7’s, but less extreme; Population 1’s activity is slightly higher, and population 2’s activity is slightly lower. When a neuron experiences no synaptic events, its conductance will asymptotically approach the resting voltage. The higher the resting voltage, the less excitatory synaptic stimulus a neuron will need to reach threshold voltage, and vice-versa. Hence, when we increase the resting voltage of population 1 and decrease that of population 2, we see the corresponding differences. The net effect is weaker than that of the changes in parameter set 3 for two reasons. First, there is only one altered physiological variable. Second, the resting voltage is less influential when a neuron experiences a high frequency of synaptic events, and, in all our simulations, the rate of synaptic events is relatively high. However, the effect is still, not surprisingly, significant. 15
  • 16. Adam Cone Modeling Neural Networks 16
  • 17. Adam Cone Modeling Neural Networks 17
  • 18. Adam Cone Modeling Neural Networks Conclusion Although our multi-population simulations generated coherent results, there are several possible improvements that we plan to implement in future Monte-Carlo simulations. For example, synaptic depression becomes increasingly important in network-level, as opposed to population-level, analysis. The voltage-impact of a synaptic event, although partially random, is a function of the amount and type of neurotransmitter released by the pre-synaptic neuron into the synaptic cleft. When a synaptic event occurs, the pre-synaptic neuron’s immediately- available neurotransmitter is depleted by some number of neurotransmitter molecules χ . Given sufficient time, the pre-synaptic exponentially restores the amount of immediately-available neurotransmitter to its original value. However, if the pre-synaptic neuron fires action potentials at some critical frequency, the amount of immediately-available neurotransmitter can fall below χ , and the synaptic event’s efficacy is reduced or ‘depressed’. In population simulations this phenomenon is of little importance, since we are concerned only with the activity of the modeled population. However, in multi-population networks, the generated action potentials have consequences, and synaptic depression must be accounted for. Nevertheless, the two-population simulation results are consistent with theoretical predictions, which suggests that the multi-population Monte-Carlo network model is an appropriate foil for testing new population density methods. 18
  • 19. Adam Cone Modeling Neural Networks Literature Review Cone, Adam Richard. New York University; Courant Institute of Mathematical Sciences. 9/26/03 <http://www.cims.nyu.edu/vigrenew/ug_research/adam_cone.pdf> Maizels, Deborah Jane. Zoobotanica. Apple. 10-18-02 <http://www.zoobotanica.plus.com/portfolio%20medicine%20pages/synapse.htm>. Nykamp, Duane and Daniel Tranchina. “A Population Density Approach That Facilitates Large- Scale Modeling of Neural Networks: Analysis and an Application to Orientation Tuning.” Journal of Computational Neural Science 8 (2000): 19-50 Weisstein, Eric. Eric Weisstein's World of Mathematics (MathWorldTM ). Wolfram Research. 10- 18-02 <http://mathworld.wolfram.com/MonteCarloMethod.html>. 19
  • 20. Adam Cone Modeling Neural Networks 20
  • 21. Adam Cone Modeling Neural Networks Appendix B: Simulation Code Main Program function multi_pop4a(tau_E,tau_I,tau_M,tau_R,E_E,E_I,E_R,v_T,v_R,nue_0,nui_0,c_e,c_i,N,Tsim,randstate ,pop_con_mat,pop_type_vec,pop_frac_vec) %second-order accurate version in which events are generated in groups with ID and sorted %tau_E - Excitatory synaptic conductance time constants for each population (column vector n_pops long) %tau_I - Inhibitory synaptic time constants for each population %tau_M - Membrane time constant %tau_Ref - Refractory time for each population %nue_0 - time-average excitatory synaptic input rate %nu_0 - time-average inhibitory synaptic input rate %N - number of neurons in the simulation %c_e - maximum contrast for random excitory conductance for each population %c_i - maximum contrast for random inhibitory conductance for each population % synaptic input rates %tau_ref for each neuron %dispersion of uniform latencies handled correctly %calls new qcount: qcount_11 rand('state',randstate) dt=min(tau_E)/10; nt=ceil(Tsim/dt); %number of time points t=(0:(nt))*dt; %the time points nt=nt+1; t_max=max(t); %Generate connectivity matrix [con_mat, pop_id_vec]= conmat_5(N,pop_con_mat,pop_type_vec,pop_frac_vec); n_pops=size(pop_con_mat,1); Kpop=false(N,n_pops); %matrix where each column entry=1 for neurons in population with that column number for jp=1:n_pops Kpop(:,jp)=(pop_id_vec'==jp); end E_e=(Kpop*E_E)';%row vectors of length N v_th=(Kpop*v_T)'; E_i=(Kpop*E_I)'; E_r=(Kpop*E_R)'; v_reset=(Kpop*v_R)'; tau_m=(Kpop*tau_M)'; tau_e=(Kpop*tau_E)'; tau_i=(Kpop*tau_I)'; tau_ref=(Kpop*tau_R)'; n_ref=round(tau_ref/dt); %number of time bins in refractory state rate_mf=zeros(n_pops,nt); % mean-field rate nu_e=zeros(n_pops,nt); %external excitatory input rate for each pop nu_i=nu_e; % ditto for inhibitory %%%UPDATE FOR NPOPS (NOT JUST 2 POPS) for j=1:n_pops [nu_e(j,:) nu_i(j,:)]=ex_in_synaptic_rates(nue_0(j),nui_0(j),c_e(j),c_i(j),t,randstate); end %check to see whether these have to be defined here nue_vec=zeros(N,1); nui_vec=nue_vec; axon_delay_constant=dt; axon_delay_rand_range=2*dt; tmin=axon_delay_constant; tmax=tmin+axon_delay_rand_range; Td=tmax-tmin; 21
  • 22. Adam Cone Modeling Neural Networks kmax=ceil(tmax/dt); kmin=ceil(tmin/dt); npoints=kmax-kmin+2; e1=dt*kmax-tmax; z=min(dt,kmax*dt-tmin); weights=zeros(npoints,1); weights(1)=z-e1 - (1/2)*(z^2/dt -e1^2/dt); weights(2)=(1/2)*(z^2/dt -e1^2/dt); kw=2; while kw < npoints z=min(dt,(kmax-kw+1)*dt-tmin); weights(kw)=weights(kw)+z-(1/2)*z^2/dt; weights(kw+1)=weights(kw+1)+(1/2)*z^2/dt; kw=kw+1; end weights=weights/Td; q_mat = qmat_6(con_mat,axon_delay_constant,axon_delay_rand_range,dt,max(tau_ref)); [max_event_num,num_neurons,num_dt_slots]=size(q_mat); event_counter_matrix=zeros(num_dt_slots,num_neurons); mu_Gamma_E=tau_M./(E_E-E_R); % Expected area under unitary synaptic event for each pop. Gives average EPSP of ~0.5 mV mu_GE=nue_0.*mu_Gamma_E; %Average of G_e for steady synaptic input rate at nu_0 sigma_sq_Gamma_E=mu_Gamma_E.^2/5; %variance of Gamma_e for parabolic distribution mu_Gamma_E_sq=sigma_sq_Gamma_E+mu_Gamma_E.^2; %expected square of Gamma_e sigma_GE=sqrt(nue_0.*mu_Gamma_E_sq./(2*tau_E)); %variance of G_e for steady synaptic input at rate nu_0 mu_Gamma_I=10*mu_Gamma_E;%mean area under inhibitory cinductance for each pop mu_GI=nui_0.*mu_Gamma_I;%mean inhibitory conductance sigma_sq_Gamma_I=mu_Gamma_I.^2/5; mu_Gamma_I_sq=sigma_sq_Gamma_I+mu_Gamma_I.^2;%expected square of Gamma_I sigma_GI=sqrt(nui_0.*mu_Gamma_I_sq./(2*tau_I)); %variance of G_e for steady synaptic input at rate nu_0 mu_Gamma_e=(Kpop*mu_Gamma_E)'; mu_Gamma_i=(Kpop*mu_Gamma_I)'; g_e=rate_mf; %mean_field E conductance for each pop at each time g_i=g_e; %ditto for I g_i_mc=g_i; g_e_mc=g_i; g_e(:,1)=nue_0.*mu_Gamma_E; %populations-by-time steps g_i(:,1)=nui_0.*mu_Gamma_I; aptimes=zeros(n_pops,ceil(Tsim*200*N)); %action potential times matrix Jap=zeros(n_pops,1); %index for keeping track of location of ap's in the matrix above count_down=-ones(1,N); %counts number of time steps until exiting refractory period delt_fire=zeros(1,N); %time of action potential measured from beginning of time step G_ep1=zeros(1,N); %conductance values to be used in integration G_ep2=G_ep1; G_ip1=G_ep1; G_ip2=G_ip1; Dt=zeros(1,N); %time step, either dt, or less for neurons emerging from %refractory period t_remain=dt*ones(1,N); t_elapse=zeros(1,N); Tzero=zeros(1,N); ID_vec=Tzero; dt_vec=dt*ones(1,N); nET=20; ETzero=zeros(nET,N); count_zero=zeros(1,N); G_e=zeros(nt,N); %Matrix of Ge values (nt+1) time points by N neurons G_i=zeros(nt,N); %Matrix of Gi values (nt+1) time points by N neurons 22
V=G_e; %corresponding membrane voltages
G_e(1,:)=( Kpop*mu_GE+(Kpop*sigma_GE).*randn(N,1) )'; %initialize G_e at t=0 by choosing values from a Gaussian distribution
G_e(1,:)=max(G_e(1,:),zeros(size(G_e(1,:)))); %if negative, set to zero
G_i(1,:)=( Kpop*mu_GI+(Kpop*sigma_GI).*randn(N,1) )'; %initialize G_i at t=0 by choosing values from a Gaussian distribution
G_i(1,:)=max(G_i(1,:),zeros(size(G_i(1,:))));
g_i_mc(1)=mean(G_i(1,:));
g_e_mc(1)=mean(G_e(1,:));
V(1,:)=E_i + (v_th-E_i).*rand(1,N); %initial random V from uniform distribution
V1 =E_i + (v_th-E_i).*rand(1,N);
%Mean-field computation
rate_mf(:,1)=mean_field_rate_vectorized(E_R,E_E,E_I,v_T,v_R,tau_M,tau_R,g_e(:,1),g_i(:,1));
rand('state',randstate+1)
cpt=cputime; %for measuring cpu time for time loop
counter=0;
for k=1:(nt-1) %step through time
    firing_times_vector=Tzero;
    klower1=max(1,k-kmax);
    kupper1=max(1,k-kmin+1);
    nw1=kupper1-klower1+1;
    JR1=klower1:kupper1;
    JW1=(npoints-nw1+1):npoints;
    klower2=max(1,k+1-kmax);
    kupper2=max(1,k+1-kmin+1);
    nw2=kupper2-klower2+1;
    JR2=klower2:kupper2;
    JW2=(npoints-nw2+1):npoints;
    %take out loop-independent stuff, as Adam did
    nu_e1=nu_e(:,k)  +N*(pop_con_mat')*( (rate_mf(:,JR1)*weights(JW1)).*(pop_type_vec'==1).*(pop_frac_vec') );
    nu_e2=nu_e(:,k+1)+N*(pop_con_mat')*( (rate_mf(:,JR2)*weights(JW2)).*(pop_type_vec'==1).*(pop_frac_vec') );
    nu_i1=nu_i(:,k)  +N*(pop_con_mat')*( (rate_mf(:,JR1)*weights(JW1)).*(pop_type_vec'==-1).*(pop_frac_vec') );
    nu_i2=nu_i(:,k+1)+N*(pop_con_mat')*( (rate_mf(:,JR2)*weights(JW2)).*(pop_type_vec'==-1).*(pop_frac_vec') );
    g_e(:,k+1)=g_syn_updatea(g_e(:,k),nu_e1,nu_e2,tau_E,dt,mu_Gamma_E);
    g_i(:,k+1)=g_syn_updatea(g_i(:,k),nu_i1,nu_i2,tau_I,dt,mu_Gamma_I);
    %Mean-field computation
    rate_mf(:,k+1)=mean_field_rate_vectorized(E_R,E_E,E_I,v_T,v_R,tau_M, ...
        tau_R,g_e(:,k+1),g_i(:,k+1));
    %TAKE AVERAGE FOR TWO ADJACENT GRID POINTS INSTEAD?
    nue_vec=Kpop*nu_e(:,k);
    nui_vec=Kpop*nu_i(:,k);
    rate_tot_vec=nue_vec+nui_vec;
    Ns=poissrnd(dt*rate_tot_vec); %number of events for each of N neurons
    p_e_vec=Kpop*( nu_e(:,k)./(nu_e(:,k)+nu_i(:,k)) ); %probability that an event is of type E for each neuron
    G_ep1=G_e(k,:); %auxiliary conductance vector, initialized to that at beginning of time step
    G_ip1=G_i(k,:); %ditto
    G_ep2=G_ep1; %ditto
    G_ip2=G_ip1; %ditto
    V_ep1=V(k,:); %auxiliary voltage vector
    V_ep2=V_ep1; %ditto
    n_max=max(Ns); %initialization of maximum over neurons of number of remaining events to be integrated
    if n_max>nET
        n_max
        warning('Increase first dimension, nET, of ET matrix')
    end
    counter=counter+1; %pointer to present dt slot in queue matrix (q_mat)
    mod_counter = mod(counter,num_dt_slots) + num_dt_slots*(mod(counter,num_dt_slots)==0);
    %Generate all times and IDs for events coming from external input.
    %Do this by grouping neurons according to common number of events.
    %Start with all neurons that have the maximum number of events.
    %Generate all event times and IDs in vectorized manner for these.
    %Next, do the same for all neurons with one fewer number of events.
    %Repeat until encountering neurons with no events.
    ETIDtemp=ETzero;
    for K=n_max:-1:1 %while K > 0
        JK=(Ns==K);
        ns=sum(JK);
        if ns>0
            p_e_mat=ones(K,ns)*diag(p_e_vec(JK));
            ETIDtemp(1:K,JK)=dt*rand(K,ns)+i*(-1+2*(rand(K,ns)<=p_e_mat)); %generate times
            %and IDs (E=1 or I=-1) for all neurons with K synaptic events
        end
    end
    max_internal_events=max(event_counter_matrix(mod_counter,:));
    ETIDtot=sort([ETIDtemp(1:max(1,n_max),:);q_mat(1:max(1,max_internal_events),:,mod_counter)]);
    no_events=sum(n_max+max_internal_events);
    [nnn mmm]=size(ETIDtot);
    min_n_zeros=min(sum(ETIDtot==0,1)); %minimum number of leading zeros in columns of ETIDtot
    if min_n_zeros~=(nnn)
        row_start=min_n_zeros+1;
    else
        row_start=nnn;
    end
    ETIDtot=ETIDtot(row_start:nnn,:);
    t_elapse=Tzero; %initialize elapsed time at beginning of time step
    t_remain=dt_vec; %initialize remaining time at beginning of time step
    T=Tzero; %initialize time-since-last-event vector
    Jint=false(1,N); %initialize vector that points to neurons with events to be integrated
    count=count_zero;
    %%%%%%%%%%%%%%%%%%%%%%
    num_rows=nnn-row_start+1;
    num_steps=num_rows;
    if no_events>0
        num_steps=num_rows+1;
    end
    %%%%%%%%%%%%%%%%%%%%%%
    for nd=1:num_steps
        if nd <= num_rows
            Jint=ETIDtot(nd,:)~=0;
            n_int=sum(Jint);
            if n_int~=0
                T(Jint)=real(ETIDtot(nd,Jint))-t_elapse(Jint);
            else
                Jint=true(1,N);
                T=t_remain;
            end
        else
            Jint=true(1,N);
            T=t_remain;
        end
        G_ep2(Jint)=G_ep1(Jint).*exp(-T(Jint)./tau_e(Jint));
        G_ip2(Jint)=G_ip1(Jint).*exp(-T(Jint)./tau_i(Jint));
        %integrate only the subset of the neurons that are
        %out or coming out of refractory period
        J_out= Jint & (count_down<0); %index of neurons that are to be integrated that were
        %nonrefractory at beginning of time step
        J_coming=Jint & count_down==0 & delt_fire>t_elapse & delt_fire<= ...
            (t_elapse+T); %index of neurons to be integrated that are coming out during current
        %time step
        %count_down(J_coming)=count_down(J_coming)-1;
        Dt=T; %auxiliary time step, initialized to full time
        %to next event
        if sum(J_coming)>0 %if any neurons coming out
            G_ep1(J_coming)=G_ep1(J_coming).*exp(-(delt_fire(J_coming)-t_elapse(J_coming))./tau_e(J_coming)); %conductance upon emerging
            G_ip1(J_coming)=G_ip1(J_coming).*exp(-(delt_fire(J_coming)-t_elapse(J_coming))./tau_i(J_coming)); %ditto
            Dt(J_coming)=t_elapse(J_coming)+T(J_coming)-delt_fire(J_coming); %times between emerging and end of time step
        end
        J=J_out | J_coming; %integrate all non-refractory neurons
        V_ep2(J) = ( V_ep1(J)-(Dt(J)./(2*tau_m(J))).*( V_ep1(J) - 2*E_r(J) - G_ep2(J).*E_e(J) - G_ip2(J).*E_i(J) + G_ep1(J).* ...
            (V_ep1(J)-E_e(J)) + G_ip1(J).*(V_ep1(J)-E_i(J)) ) )./( 1 + (Dt(J)./(2*tau_m(J))).*(1+G_ep2(J)+G_ip2(J)) );
        %Find out who has crossed threshold; find times, and reset
        %(put into refractory pool)
        J=V_ep2>=v_th;
        nf=sum(J);
        if nf>0
            V_ep2(J)=v_reset(J);
            A=( (G_ep2(J)-G_ep1(J)).*(v_th(J)-E_e(J)) + (G_ip2(J)-G_ip1(J)).*(v_th(J)-E_i(J)) )./(2*tau_m(J).*Dt(J)); %solution from trapezoidal rule and
            %linear interp of G_e and G_i
            B=( V_ep1(J)+ v_th(J)- 2*E_r(J) + G_ep1(J).*(V_ep1(J)+v_th(J)-2*E_e(J)) + G_ip1(J).*(V_ep1(J)+v_th(J)-2*E_i(J)) )./(2*tau_m(J));
            C=v_th(J)-V_ep1(J);
            rp=(-B+sqrt(B.^2-4*A.*C))./(2*A); %possible firing times are roots of a quadratic
            rm=(-B-sqrt(B.^2-4*A.*C))./(2*A);
            r=[rp; rm];
            dtp=[Dt(J);Dt(J)];
            tm=sum(r.*((r>0)&(r<dtp))); %take the only sensible root
            delt_fire(J)=t_elapse(J)+tm; %record the time of firing within the
            %interval
            KF=( (ones(n_pops,1)*J) & Kpop' );
            Nap=sum(KF,2);
            firing_times_vector(J)=delt_fire(J)/dt;
            for kap=1:n_pops
                Ip=Jap(kap)+(1:Nap(kap));
                aptimes(kap,Ip)=t(k)+delt_fire(KF(kap,:));
                Jap(kap)=Jap(kap)+Nap(kap);
            end
            count_down(J)=n_ref(J); %reset the count-down vector
        end
        %update conductances of all neurons that had an event (those with
        %n_max>0)
        if no_events>0 & (nd<=num_rows)
            ID_vec=imag(ETIDtot(nd,:));
            Je=Jint & (ID_vec==1); %picks out subscripts of neurons that have excitatory events
            Ji=Jint & (ID_vec==-1); %picks out subscripts of neurons that have inhibitory events
            N_e=sum(Je);
            N_i=sum(Ji);
            z=rand(1,n_int); %first, generate uniformly distributed random number for each event
            theta=(atan2(2*sqrt(z-z.^2),(1-2*z))-2*pi)/3; %convert this into parabolically distributed number, by this
            %and the following two lines
            x=2*cos(theta)+1;
            if N_e>0
                G_ep2(Je)=G_ep2(Je)+mu_Gamma_e(Je).*x(1:N_e)./tau_e(Je);
            end
            if N_i>0
                G_ip2(Ji)=G_ip2(Ji)+mu_Gamma_i(Ji).*x(N_e+1:n_int)./tau_i(Ji);
            end
        end
        V_ep1=V_ep2;
        G_ep1=G_ep2;
        G_ip1=G_ip2;
        t_remain(Jint)=t_remain(Jint)-T(Jint);
        t_elapse(Jint)=t_elapse(Jint)+T(Jint);
    end
    V(k+1,:)=V_ep2;
    G_e(k+1,:)=G_ep2;
    G_i(k+1,:)=G_ip2;
    count_down=count_down-1; %decrement the count-down vector by 1
    Jhist=count_down<0;
    for kmc=1:n_pops
        g_e_mc(kmc,k+1)=mean(G_ep2(Kpop(:,kmc)'));
        g_i_mc(kmc,k+1)=mean(G_ip2(Kpop(:,kmc)'));
    end
    event_counter_matrix(mod_counter,:)=0; %initialize counter to zero in
    %current slot that has just been used
    q_mat(:,:,mod_counter)=zeros(size(q_mat(:,:,mod_counter))); %zero the dt slot of q_mat just used
    [q_mat event_counter_matrix] = qcount_11(con_mat,q_mat,mod_counter,num_dt_slots, ...
        firing_times_vector,axon_delay_constant,axon_delay_rand_range,dt,N, ...
        event_counter_matrix);
end
cpt=cputime-cpt
%---------------PLOTTING STUFF---------------
figure(1)
plot(t,[nu_e;nu_i])
xlabel('Time (s)'); ylabel('Synaptic Input Rate (Hz)');
set(gca,'XLim',[0 Tsim])
legend('E_1','E_2','I_1','I_2')
title('Excitatory and Inhibitory Synaptic Input Rates vs. Time')
dth=0.002;
th=0:dth:Tsim;
t_rate=(dth/2):dth:(t_max-dth/2);
aptimes1=aptimes(1,aptimes(1,:)~=0);
N1=N*pop_frac_vec(1);
figure(2)
rate_monte1=hist(aptimes1,t_rate)/(N1*dth);
plot(t,rate_mf(1,:),'r-')
hold on
bar(t_rate,rate_monte1)
xlabel('Time (s)'); ylabel('Population 1 Firing Rate (Hz)')
hold off
set(gca,'XLim',[0 Tsim])
title('Mean Field and Monte-Carlo Population 1 Firing Rate vs. Time')
legend('Mean Field','Monte Carlo')
aptimes2=aptimes(2,aptimes(2,:)~=0);
N2=N*pop_frac_vec(2);
figure(3)
plot(t,rate_mf(2,:),'r-')
hold on
rate_monte2=hist(aptimes2,t_rate)/(N2*dth);
bar(t_rate,rate_monte2)
xlabel('Time (s)'); ylabel('Population 2 Firing Rate (Hz)')
hold off
legend('Mean Field','Monte Carlo')
set(gca,'XLim',[0 Tsim])
title('Mean Field and Monte-Carlo Population 2 Firing Rate vs. Time')
figure(4)
plot(t_rate,rate_monte1,'c',t_rate,rate_monte2,'k')
xlabel('Time (s)'); ylabel('Firing Rate (Hz)');
legend('Population 1', 'Population 2')
title('Populations 1 and 2 Firing Rates vs. Time')
figure(5)
plot(t,[g_e;g_e_mc])
xlabel('Time (s)'); ylabel('Conductance (nS)');
legend('Mean Field 1','Mean Field 2','Monte-Carlo 1','Monte-Carlo 2')
title('Mean Field and Monte Carlo Excitatory Network Conductances vs. Time')
figure(6)
plot(t,[g_i;g_i_mc])
xlabel('Time (s)'); ylabel('Conductance (nS)')
legend('Mean Field 1','Mean Field 2','Monte-Carlo 1','Monte-Carlo 2')
title('Mean Field and Monte Carlo Inhibitory Network Conductances vs. Time')
save monte_carlo_results_multi_pops t_rate rate_monte1 rate_monte2 t nu_e nu_i ...
    g_e g_i g_e_mc g_i_mc rate_mf randstate

Connectivity Matrix Generator

% This m-file constructs a connectivity matrix from data about the number
% and type of populations and the user-specified output-connectivity of each
% population:
% 1) N = number of neurons
% 2) pop_connectivity_matrix = if the number of distinct
%    populations is pop_number, then pop_connectivity_matrix is
%    a pop_number*pop_number matrix, in which entry (i,j) is the
%    probability that a neuron in population i synapses to some
%    neuron in population j.
% 3) pop_type_vector = 1*pop_number matrix of the type of each population
%    (1 for excitatory, -1 for inhibitory).
% 4) pop_fraction_vector = 1*pop_number matrix of the proportion of the
%    total number of neurons in each population.
function [connectivity_matrix, pop_id_vector]=conmat_5(N,pop_connectivity_matrix,pop_type_vector,pop_fraction_vector)
cpt = cputime;
% multi_pop_con_mat_generator is short for multiple-population-connectivity-matrix generator
%Neurons are assigned populations based on their labels. For example,
%if population 1 comprises 20% of the network size, then the first 20%
%of the neurons, counting from 1, will be in population 1. For this reason,
%we construct the following vector, which will give us something like "population boundaries".
n_pops=length(pop_fraction_vector);
pop_count_vector=round(N*pop_fraction_vector); %the jth entry in pop_count_vector is how many neurons are in population j
pop_count_vector(n_pops)=N-sum(pop_count_vector(1:(n_pops-1)));
if sum(pop_count_vector==0)~=0
    error('ZERO NEURONS IN AT LEAST ONE POPULATION')
end
%Now declare that connectivity_matrix, the eventual output of this function m-file, is an N*N matrix, which we will fill with
%synapse type/presence values (i.e. 1 at (i,j): excitatory synapse from neuron i to neuron j, -1 at
%(i,j): inhibitory synapse from neuron i to neuron j, 0 at (i,j): no synapse from neuron i to neuron j)
connectivity_matrix = sparse(zeros(N,N));
%Suppose we want to know whether neuron A will synapse to neuron B. Although each synapse-presence
%value is decided randomly, the weighting is given by the connectivity of
%the population containing A, a, to the population containing B, b. This value is
%found in the user-specified pop_connectivity_matrix, namely, at (a,b).
%This value uniquely determines the probability that neuron A synapses to
%neuron B.
pop_partition_vector =[0,cumsum(pop_count_vector)];
%sub_ab_connectivity_matrix is the "sub-connectivity matrix" between
%pre-synaptic population a and post-synaptic population b. It will
%be assimilated at each step by connectivity_matrix.
pop_id_vector=sparse(zeros(1,N));
for a = 1:n_pops %step through pre-synaptic populations
    pop_id_vector((pop_partition_vector(a)+1):pop_partition_vector(a+1))=a;
    for b = 1:n_pops %step through post-synaptic populations
        sub_ab_connectivity_matrix = rand(pop_count_vector(a),pop_count_vector(b)) < pop_connectivity_matrix(a,b);
        connectivity_matrix((pop_partition_vector(a)+1):pop_partition_vector(a+1),(pop_partition_vector(b)+1): ...
            pop_partition_vector(b+1)) = sub_ab_connectivity_matrix*pop_type_vector(a);
    end
end
cpt = cputime-cpt;

Queue Matrix Updating

function [queue, event_counter_matrix] = qcount_11(connectivity_matrix,queue_matrix,mod_counter,n_dt_slots, ...
    firing_times_vector,axon_delay_constant,axon_delay_rand_range,dt,N,event_counter_matrix)
Jfire=(firing_times_vector~=0);
firing_neuron_number=sum(Jfire);
firing_connectivity_matrix = connectivity_matrix(Jfire,:);
random_part = axon_delay_rand_range/dt*rand(size(firing_connectivity_matrix));
constant_part = axon_delay_constant/dt*ones(size(firing_connectivity_matrix));
times_part = diag(firing_times_vector(find(firing_times_vector)))*ones(size(firing_connectivity_matrix));
firing_times_matrix = abs(firing_connectivity_matrix).*(times_part+constant_part+random_part);
firing_info_matrix = firing_times_matrix+i*firing_connectivity_matrix;
firing_total_vector = sum(abs(firing_connectivity_matrix),1);
firing_info_matrix = sort(firing_info_matrix,1);
lower_bound = firing_neuron_number-max(firing_total_vector)+1;
firing_info_matrix = firing_info_matrix(lower_bound:firing_neuron_number,:);
for b = 1:max(firing_total_vector)
    max_neurons_vector = (firing_total_vector >= (max(firing_total_vector)+1-b));
    I = find(max_neurons_vector);
    K = mod(mod_counter+floor(real(firing_info_matrix(b,I))),n_dt_slots);
    K=K+n_dt_slots*(K==0);
    counter_index = sub2ind(size(event_counter_matrix),K,I);
    event_counter_matrix(counter_index) = event_counter_matrix(counter_index) + 1;
    queue_index = sub2ind(size(queue_matrix),event_counter_matrix(counter_index),I,K);
    queue_matrix(queue_index) = i*imag(firing_info_matrix(b,max_neurons_vector))+ ...
        dt*( real(firing_info_matrix(b,max_neurons_vector))- ...
        floor(real(firing_info_matrix(b,max_neurons_vector))) );
end
queue = queue_matrix;
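For readers who wish to exercise the code above, the following sketch shows one way the main program might be invoked for a two-population (one excitatory, one inhibitory) network. Every numerical value is an illustrative placeholder chosen only to indicate the expected shapes and rough magnitudes of the arguments; it is not the parameter set used for the simulations reported in this paper, and the voltage normalization is assumed rather than taken from the text.

% Illustrative driver for multi_pop4a (placeholder values, not the paper's parameters)
tau_E = [0.003; 0.003];              % excitatory synaptic time constants (s), one entry per population
tau_I = [0.006; 0.006];              % inhibitory synaptic time constants (s)
tau_M = [0.020; 0.020];              % membrane time constants (s)
tau_R = [0.002; 0.002];              % refractory periods (s)
E_R = [0; 0];  v_T = [1; 1];  v_R = [0; 0];   % rest, threshold, reset (assumed normalized units)
E_E = [14/3; 14/3];  E_I = [-2/3; -2/3];      % excitatory/inhibitory reversal potentials (same assumed units)
nue_0 = [800; 800];  nui_0 = [400; 400];      % mean external excitatory/inhibitory input rates (Hz)
c_e = [0.5; 0.5];  c_i = [0.5; 0.5];          % contrasts of the random modulation of the external rates
N = 1000; Tsim = 0.5; randstate = 1;          % network size, simulated time (s), random-number seed
pop_con_mat  = [0.01 0.01; 0.01 0.01];        % entry (i,j): probability that a neuron in pop i synapses onto a neuron in pop j
pop_type_vec = [1 -1];                        % population 1 excitatory, population 2 inhibitory
pop_frac_vec = [0.8 0.2];                     % fractions of the N neurons belonging to each population
multi_pop4a(tau_E,tau_I,tau_M,tau_R,E_E,E_I,E_R,v_T,v_R,nue_0,nui_0, ...
    c_e,c_i,N,Tsim,randstate,pop_con_mat,pop_type_vec,pop_frac_vec)

Note that the plotting section of multi_pop4a (figures 1 through 6 and their legends) is written for exactly two populations; runs with more populations require adjusting those figure commands.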
Queue Matrix Generator

function queue_matrix = qmat_6(connectivity_matrix,axon_delay_constant,axon_delay_rand_range,dt,tau_ref)
%axon_delay_rand_range = length of the random part of the delay interval following the
%minimum delay, axon_delay_constant
C = ceil((axon_delay_constant+axon_delay_rand_range)/dt)+1; %number of dt slots needed
max_in = max(sum(abs(connectivity_matrix)));
%A = max_in*ceil((C*dt-axon_delay_constant)/tau_ref); %estimate of maximum number of events to be stored
A = max_in*ceil((axon_delay_rand_range+dt)/tau_ref); %estimate of maximum number of events to be stored
%find the number of neurons in the network
B = length(connectivity_matrix);
queue_matrix = zeros(A,B,C);

Mean Field Firing Rate Computation

function rate_mf=mean_field_rate_vectorized(E_r,E_e,E_i,v_th,v_reset,tau_m,tau_ref,g_e,g_i)
Eg=(E_r+g_e.*E_e+g_i.*E_i)./(1+g_e+g_i);
tau_1=tau_m./(1+g_e+g_i);
ts=tau_1.*log( (Eg-v_reset)./(Eg-v_th) );
rate_mf=(Eg>v_th)./(ts+tau_ref);

External Synaptic Input Generator

function [nu_e, nu_i]=ex_in_synaptic_rates(nue_0,nui_0,c_e,c_i,tg,randstate)
rand('state',randstate)
f=[1 3 7 15 31 63];
nf=length(f);
ce=rand(1,nf);
ci=rand(1,nf);
thetae=2*pi*rand(1,nf);
thetai=2*pi*rand(1,nf);
de=zeros(size(tg));
di=zeros(size(tg));
for j=1:nf
    de=de+ce(j)*sin(2*pi*f(j)*tg + thetae(j));
    di=di+ci(j)*sin(2*pi*f(j)*tg + thetai(j));
end
se=max(abs(de));
si=max(abs(di));
nu_e=nue_0*(1+c_e*(de/se));
nu_i=nui_0*(1+c_i*(di/si));

Mean Field Synaptic Conductance Updating

function g_s=g_syn_updatea(g_s,nu_s_1,nu_s_2,tau_s,dt,mu_Gamma_s)
g_s=( g_s + (dt./(2*tau_s)).*(mu_Gamma_s.*(nu_s_2 + nu_s_1) - g_s) )./ ...
    (1 + dt./(2*tau_s));

Conductance Computation Program

function [g, dg, sigma_G, mu_G, mu_Gamma_e]=get_gbins(ngbins,tau_m,tau_e,nu_0,nu_e,E_e,E_r)
mu_Gamma_e=tau_m/(E_e-E_r); %expected area under unitary synaptic event; gives average EPSP of ~0.5 mV
%mu_Gamma_e=mu_Gamma_e/2;
%Assume steady input until time zero, when sinusoidal modulation begins
mu_G=nu_0*mu_Gamma_e; %average of G_e for steady synaptic input at rate nu_0
sigma_sq_Gamma_e=mu_Gamma_e^2/5; %variance of Gamma_e for parabolic distribution
mu_Gamma_e_sq=sigma_sq_Gamma_e+mu_Gamma_e^2; %expected square of Gamma_e
sigma_G=sqrt(nu_0*mu_Gamma_e_sq/(2*tau_e)); %standard deviation of G_e for steady synaptic input at rate nu_0
%Set up bins for g and also for histogram
mu_G_max=max(nu_e)*mu_Gamma_e;
sigma_G_max=sqrt(max(nu_e)*mu_Gamma_e_sq/(2*tau_e));
gmax=mu_G_max+3*sigma_G_max;
dg=gmax/ngbins;
g=(0:(ngbins-1))*dg; %a row vector

Voltage Computation Program

function [v, dv, v_reset, E_r]=get_vbins(E_i,E_r,v_th,nvbins)
%dv=(v_th-E_r)/(nvbins-0.5); %E_r is a grid point
dv=(v_th-E_i)/nvbins;
v=E_i + ((1:nvbins)' - 0.5)*dv; %a column vector of voltages; E_i and v_reset are half-grid points
E_r=v( floor((E_r-E_i)/dv) + 1 ); %E_r is chosen to be a grid point
v_reset=E_r-dv/2; %v_reset is a half grid point
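The mean-field rate function listed above encodes the standard steady-state firing rate of a conductance-based integrate-and-fire neuron: the synaptic conductances pull the membrane toward the effective reversal potential Eg with effective time constant tau_m/(1+g_e+g_i), and the neuron fires at rate 1/(ts+tau_ref) whenever Eg exceeds threshold, where ts is the drift time from v_reset to v_th. A quick scalar sanity check, using the same assumed placeholder normalization as the driver sketch above, might read:

% Illustrative check of mean_field_rate_vectorized (placeholder values)
E_r = 0; E_e = 14/3; E_i = -2/3;   % rest and reversal potentials (assumed normalized units)
v_th = 1; v_reset = 0;             % threshold and reset
tau_m = 0.020; tau_ref = 0.002;    % membrane time constant and refractory period (s)
g_e = 0.5; g_i = 0.2;              % dimensionless synaptic conductances
r = mean_field_rate_vectorized(E_r,E_e,E_i,v_th,v_reset,tau_m,tau_ref,g_e,g_i)
% Here Eg = (0.5*14/3 - 0.2*2/3)/1.7 is about 1.29 > v_th, giving r of roughly 51 Hz;
% raising g_i until Eg drops below v_th drives the rate to zero.

Because the function is fully vectorized, the same call accepts column vectors of conductances, which is how the main program evaluates all populations at each time step.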