International Journal of Artificial Intelligence and Applications (IJAIA), Vol.12, No.3, May 2021
DOI: 10.5121/ijaia.2021.12303
A STDP RULE THAT FAVOURS CHAOTIC SPIKING
OVER REGULAR SPIKING OF NEURONS
Mario Antoine Aoun
MontrΓ©al, Quebec, Canada
ABSTRACT
We compare the number of states of a Spiking Neural Network (SNN) composed of chaotic spiking
neurons versus the number of states of an SNN composed of regular spiking neurons, while both SNNs
implement a Spike Timing Dependent Plasticity (STDP) rule that we created. We find that this
STDP rule favours chaotic spiking, since the number of states is larger in the chaotic SNN than in the
regular SNN. This chaotic favourability is not general; it is exclusive to this STDP rule only. This
research falls under our long-term investigation of STDP and chaos theory.
KEYWORDS
Competitive Hebbian Learning, Chaotic Spiking Neural Network, Adaptive Exponential Integrate and
Fire (AdEx) Neuron, Spike Timing Dependent Plasticity, STDP, Chaos Theory, Synchronization,
Coupling, Chaos Control, Time Delay, Regular Spiking Neuron, Nearest neighbour, Recurrent Neural
Network
1. INTRODUCTION
Spiking Neural Networks (SNNs) [1] are gaining much attention in the scientific literature,
specifically in neuromorphic engineering [2], due to their low energy consumption when
implemented on electronic circuits, compared to artificial neural networks using conventional
neuron models [2]. For instance, deep classical artificial neural networks, using gate activation
functions as neurons, consume a great amount of power during training when implemented on
Field Programmable Gate Arrays (FPGAs) [2]. In contrast, spiking neurons implemented
on Very Large Scale Integration (VLSI) chips consume far less energy [2]. The reason behind
this difference is that spiking neurons are event-driven systems [2]. Furthermore, SNNs are
considered the third generation of neural networks because, as their name implies, they use
spiking neuron models, where the first generation was based on perceptrons using threshold gates
and the second started with Hopfield nets using activation functions [3]. On the other hand,
Hebbian learning is the binding phenomenon that takes place between neurons after continuous
stimulation of one another [4]. Spike Timing Dependent Plasticity (STDP)
incorporates the exact timing of stimulations as a condition for this binding to occur [5]. In this work
we study STDP in the context of SNNs, where STDP governs the connection weights
inside the SNN. The SNN considered here (Fig. 1.a) is composed of spiking neurons that are
recurrently connected to each other via time-delayed connections. Neurons have no self-
feedback connections. The time delay is fixed and the same for all the connections in the network.
Connections have weights, too. When neurons connect and update their connections' weights
via STDP, they synchronize their activity to the same repetitive spiking pattern (Fig. 1.b).
The latter is considered the network's state. In simple terms, this state is a synchronized spiking
output pattern of all the neurons composing the network. In this paper, we study the
number of these states for a recurrent SNN composed of chaotic spiking neurons and a
recurrent SNN composed of regular spiking neurons, while both networks implement the
same STDP rule. We will give experimental evidence that a chaotic recurrent SNN indeed has
more states than a regular recurrent SNN while both incorporate the same STDP rule. However, we
should emphasize that this claim and the results are exclusive to this specific STDP rule only.
This research is part of our investigation of STDP and chaos theory, especially chaotic
spiking neurons, which started in 2006-2007 [6], spanned many years [7, 8, 9, 10] and
continues to date.
The paper is organized as follows: In Section 2 (Methods), we sketch the neural network
architecture and explain the neuron model used. We also introduce our synaptic plasticity
model based on STDP. In Section 3 (Results), we provide a comparative study of the number of
states that can be stabilized in a network of chaotic spiking neurons implementing STDP versus
the number of states that can be stabilized in a network of regular spiking neurons implementing
the same STDP. Section 4 discusses the results and Section 5 concludes the paper.
2. METHODS
The neural network architecture used in this paper is composed of n spiking neurons
that are recurrently connected to one another, hence a recurrent spiking neural network. We use
the Adaptive Exponential (AdEx) integrate-and-fire neuron model [11] as the elementary
component of the network, because it can simulate regular spiking or chaotic spiking by
changing its parameters [12]. Each connection between any two neurons in the network has a
fixed time delay and an adjustable weight. The time delays are kept fixed because the network
will synchronize its spiking activity within a window equal to the time delay. The weights are
adjusted using synaptic plasticity, as we will see next. The network architecture is illustrated in
Figure 1.a.
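To make the architecture concrete, here is a minimal Python sketch (our own illustration, not code from the paper) of the connectivity just described: n neurons, one shared fixed delay for every connection, per-connection weights, and no self-feedback. The function name and data layout are our assumptions.

```python
import numpy as np

def build_network(n, tau=300):
    """Connectivity of the recurrent SNN of Figure 1.a (hypothetical sketch).

    Every ordered pair (i, j) with i != j gets a connection with the same
    fixed delay tau and a weight starting at 0 (weights are later adjusted
    by STDP). The diagonal is unused: no self-feedback connections.
    """
    weights = np.zeros((n, n))     # weights[i, j]: weight of connection j -> i
    delays = np.full((n, n), tau)  # delays[i, j]: identical for all connections
    np.fill_diagonal(delays, 0)    # diagonal unused (no self-feedback)
    return weights, delays
```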
Figure 1. a. Neural network architecture: n AdEx neurons (A1, …, An) are recurrently
connected. Every connection has a varying weight β€œΟ‰β€ and a fixed time delay β€œΟ„β€; note that β€œΟ„β€
is identical for all connections. b. Synchronized spike output: when the network stabilizes,
all the neurons synchronize their output to the same repetitive spiking pattern of length β€œΟ„β€.
In this example, β€œΟ„β€ equals 300 and the repetitive pattern is composed of three spikes.
Time resolution is in ms; time steps run from 10000 to 14500.
Pattern blocks: (10000-10299, 10300-10599, 10600-10899…).
Spikes: (10002, 10074, 10298, 10302, 10374, 10598, 10602, 10674, 10898…).
This repetitive pattern is considered as the network state.
As we can see in Figure 1.a, we have a neural network of n AdEx neurons (e.g., A1 to An), where
the subscript β€œi” indexes an AdEx neuron β€œA”. Each neuron has n - 1 incoming connections
because there are no self-feedback connections. A connection from neuron Aj to neuron Ai has a
time delay β€œΟ„i,j” and a weight β€œΟ‰i,j”.
We rewrite the AdEx Neuron equations [12] in their Euler form:
$$V_i(t) = V_i(t-1) + \frac{dt}{C}\left[-g_L\left(V_i(t-1) - E_L\right) + g_L \Delta_T \exp\!\left(\frac{V_i(t-1) - V_T}{\Delta_T}\right) + I_c - \psi_i(t-1)\right] \tag{1}$$

$$\psi_i(t) = \psi_i(t-1) + dt\,\frac{a\left(V_i(t-1) - E_L\right) - \psi_i(t-1)}{\tau_w}$$
where $V_i(t)$ is the neuron's voltage at time $t$ and $\psi_i(t)$ is its adaptation variable. The
subscript $i$ indicates the neuron's index inside the network, as mentioned earlier. $dt$ is the Euler
time step, which is set to 0.1 ms for good precision. The parameters of the AdEx neuron for both
regular and chaotic firing are retrieved from [12] and are shown in Table 1, next.
Table 1. Configuration of the AdEx Neuron parameters for regular and chaotic firing modes [12].

| Mode    | C   | gL | EL  | VT  | Ξ”T | a   | Ο„w  | b  | Vr  | Ic  | ΞΈ |
|---------|-----|----|-----|-----|----|-----|-----|----|-----|-----|---|
| Regular | 200 | 10 | -70 | -50 | 2  | 2   | 30  | 0  | -58 | 500 | 0 |
| Chaotic | 100 | 12 | -60 | -50 | 2  | -11 | 130 | 30 | -48 | 160 | 0 |
C is membrane capacitance in pF, gL is leak conductance in nS, EL is leak reversal in mV, VT is
threshold potential in mV, Ξ”T is rise slope factor in mV, a is 'sub-threshold' adaptation
conductance in nS, Ο„w is adaptation time constant in ms, b is spike trigger adaptation current
increment in pA, Vr is reset potential in mV, Ic is input current in pA and ΞΈ is reset threshold in
mV [12].
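As a hedged illustration of equation (1) and Table 1, the following Python sketch packs the published parameter sets into a dictionary and performs one forward-Euler step; the names ADEX, adex_euler_step and DT are our own, not from the paper.

```python
import numpy as np

# Table 1 parameters (C in pF, gL in nS, EL/VT/DT/Vr/theta in mV,
# tau_w in ms, b in pA, Ic in pA); key names are our own.
ADEX = {
    "regular": dict(C=200.0, gL=10.0, EL=-70.0, VT=-50.0, DT=2.0,
                    a=2.0, tau_w=30.0, b=0.0, Vr=-58.0, Ic=500.0, theta=0.0),
    "chaotic": dict(C=100.0, gL=12.0, EL=-60.0, VT=-50.0, DT=2.0,
                    a=-11.0, tau_w=130.0, b=30.0, Vr=-48.0, Ic=160.0, theta=0.0),
}

def adex_euler_step(V, psi, Ic, p, dt=0.1):
    """One forward-Euler step of equation (1) for arrays V and psi."""
    dV = (-p["gL"] * (V - p["EL"])
          + p["gL"] * p["DT"] * np.exp((V - p["VT"]) / p["DT"])
          + Ic - psi) / p["C"]
    dpsi = (p["a"] * (V - p["EL"]) - psi) / p["tau_w"]
    return V + dt * dV, psi + dt * dpsi
```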
The neuron initial conditions are:
$$V_i(0) = V_r$$
$$\psi_i(0) = 0$$
$$Ic_i = L_{Ic} + (U_{Ic} - L_{Ic}) \cdot R_i \tag{2}$$
Where,
Ri is a random decimal between 0 and 1.
LIc is the lower bound of the default input current (Ic).
UIc is the upper bound of the default input current (Ic).
We note that LIc and UIc are set according to the requested firing mode (i.e. regular or chaotic).
For instance, if we want regular firing, we can set LIc to 400 and UIc to 600, which are 100 units
away from their mean Ic of 500. For chaotic firing, we can set LIc to 150 and UIc to 170, which
are 10 units away from their mean Ic of 160. Note that in the case of chaotic firing, it was
observed in [12] that the neuron's chaotic firing is sensitive to an input current of 160 (e.g. Ic =
160 pA), and values far from 160 would ruin the chaotic behaviour of the neuron. In fact, we
need the neurons of the network to have slight variations in their initial conditions (i.e. the initial
value of the input current Ic) so that their outputs differ from one another; this gives the synaptic
plasticity model, which depends on the firing times of the neurons, the opportunity to operate
properly, as we will see next.
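A small sketch of the initialization in equation (2), under the LIc/UIc choices given above; the function name and the NumPy random generator are our assumptions.

```python
import numpy as np

def init_neurons(n, p, LIc, UIc, seed=None):
    """Initial conditions of equation (2): V_i(0) = Vr, psi_i(0) = 0,
    Ic_i = LIc + (UIc - LIc) * R_i, with R_i uniform in [0, 1).
    Use LIc, UIc = 400, 600 for regular firing and 150, 170 for chaotic."""
    rng = np.random.default_rng(seed)
    V = np.full(n, p["Vr"])
    psi = np.zeros(n)
    Ic = LIc + (UIc - LIc) * rng.random(n)
    return V, psi, Ic
```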
When the neuron’s voltage passes its threshold (i.e. Vi > ΞΈ), then the neuron fires a spike that is
represented by Ξ³i:
$$\gamma_i(t) = \begin{cases} 1, & V_i(t) > \theta \\ 0, & V_i(t) \le \theta \end{cases} \tag{3}$$
The neuron receives time-delayed spikes from other neurons through its incoming connections;
these are scaled by each connection's weight $\omega_{i,j}$ and summed into a total input $S_i(t)$,
which is added to the neuron's voltage $V_i(t)$.
But before evaluating $S_i(t)$ and adding it to $V_i(t)$, the weights of the connections should be
updated: the weight change $\Delta\omega_{i,j}$ is calculated and added to $\omega_{i,j}$.
To update the weights of the connections of the network, we implement a competitive scenario
of STDP similar to the approach of Song et al. (2000) in their work on competitive Hebbian
learning based on STDP [13]. Our implementation of a competitive STDP protocol uses
nearest-neighbour spikes and is described as:
$$\text{If } (t - \tau_{i,j}) - t_{pre(j)} \ge t_{post(j)} - (t - \tau_{i,j}) \text{ and } V_i(t) > \theta, \text{ then } \Delta\omega_{i,j} = \Delta\omega^-$$

$$\text{If } (t - \tau_{i,j}) - t_{pre(j)} < t_{post(j)} - (t - \tau_{i,j}), \text{ then } \Delta\omega_{i,j} = \Delta\omega^+ \tag{4}$$
where $t_{pre(j)}$ is the time of the spike of input neuron $j$ that occurred before $t - \tau_{i,j}$,
and $t_{post(j)}$ is the time of the spike of input neuron $j$ that occurred after $t - \tau_{i,j}$.

$\Delta\omega^-$ is the negative weight change, equal to:

$$\Delta\omega^- = A^- \cdot e^{\frac{(t - \tau_{i,j}) - t_{post(j)}}{\tau_{i,j}}} \tag{5}$$

$\Delta\omega^+$ is the positive weight change, equal to:

$$\Delta\omega^+ = A^+ \cdot e^{-\frac{(t - \tau_{i,j}) - t_{pre(j)}}{\tau_{i,j}}} \tag{6}$$

$A^-$ and $A^+$ are the lower and upper boundaries of the negative and positive weight change,
respectively, defined as follows:

$$A^- = V_r \cdot \mu^- \tag{7}$$

And,

$$A^+ = \left[(\theta - V_r) - \omega_{i,j}\right] \cdot \mu^+ \tag{8}$$
where $\mu^-$ and $\mu^+$ are constants that are experimentally set to 0.01 and 0.1, respectively.
We noticed that setting $\mu^-$ to 0.01 keeps the neuron's voltage from decreasing to very low
values during the start of the STDP phase.
The reason behind this choice in defining $A^-$ and $A^+$ is that these parameters should be
voltage dependent, as suggested in [14]. Furthermore, by setting $A^- = V_r \cdot \mu^-$ we
ensure that this parameter is always negative, which is a must for Long Term Depression (LTD)
to take place in the STDP protocol. Second, by setting $A^+ = [(\theta - V_r) - \omega_{i,j}] \cdot \mu^+$,
we ensure that Long Term Potentiation (LTP) settles down once the weight of a
connection has reached its maximal value of $\theta - V_r$.
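The following sketch gathers equations (4)-(8) into one weight-change routine for a single connection j -> i; it reflects our reading of the nearest-neighbour protocol, and the function and argument names are our own.

```python
import numpy as np

def stdp_delta(t, tau, w_ij, V_i, t_pre_j, t_post_j, Vr, theta,
               mu_minus=0.01, mu_plus=0.1):
    """Weight change per equations (4)-(8) for connection j -> i.

    t_pre_j / t_post_j: nearest spikes of input neuron j before and after
    the delayed reference time t - tau.
    """
    ref = t - tau
    A_minus = Vr * mu_minus                    # eq. (7): always negative (LTD)
    A_plus = ((theta - Vr) - w_ij) * mu_plus   # eq. (8): vanishes at w = theta - Vr
    if (ref - t_pre_j) >= (t_post_j - ref) and V_i > theta:
        return A_minus * np.exp((ref - t_post_j) / tau)   # eq. (5)
    if (ref - t_pre_j) < (t_post_j - ref):
        return A_plus * np.exp(-(ref - t_pre_j) / tau)    # eq. (6)
    return 0.0                                 # the synchronized case (delta-omega-zero)
```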
Last but not least, the weight of every connection is updated according to the following:
$$\omega_{i,j}(t) = \omega_{i,j}(t-1) + \Delta\omega_{i,j} \tag{9}$$
Afterwards, the input to neuron $i$ is calculated according to the following:

$$S_i(t) = \sum_{j=1,\; j \ne i}^{n} \omega_{i,j}(t) \cdot \gamma_j\!\left(t_{(pre,post)(j)}\right) \tag{10}$$
where:

$$t_{(pre,post)(j)} = \begin{cases} t_{pre(j)} & \text{in case of } \Delta\omega^- \\ t_{post(j)} & \text{in case of } \Delta\omega^+ \\ t - \tau_{i,j} & \text{in case of } \Delta\omega^0 \end{cases} \tag{11}$$
$\Delta\omega^0$ denotes the case $\Delta\omega = 0$, which occurs once the neurons have
synchronized their neural states over a period $\tau$. This means no further change in the weights
is necessary.
After calculating the input $S_i(t)$, it is added to the neuron's voltage $V_i(t)$:

$$V_i(t) = V_i(t) + S_i(t) \tag{12}$$
When the neuron's voltage crosses its threshold $\theta$, it is reset to its reset value $V_r$ and its
adaptation variable is incremented by the adaptation reset parameter $b$ (the spike-triggered
adaptation current increment):

$$\text{If } V_i(t) > \theta, \text{ then } V_i(t) = V_r \text{ and } \psi_i(t) = \psi_i(t) + b \tag{13}$$
The weights of all the connections are initially set to 0, so the neurons inside the network run in
isolation and evolve their dynamics. Then, synaptic plasticity starts to operate by updating the
connection weights of every neuron. In the following experiments, the neurons run in isolation
for 5000 time steps before their connection weights start to get updated.
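Putting the pieces together, a per-time-step update in the spirit of equations (9)-(13) might look as follows. It reuses adex_euler_step and stdp_delta from the earlier sketches; nearest_spikes(times, ref) and spike_at(times, s), which look up the nearest spikes of a neuron around a reference time and test for a spike at time s, are assumed helpers we have not shown.

```python
import numpy as np

def network_step(t, V, psi, Ic, w, spike_times, p, tau=300,
                 isolation_steps=5000):
    """One network time step (a sketch of eqs. (1), (3), (9)-(13))."""
    V, psi = adex_euler_step(V, psi, Ic, p)       # eq. (1)
    n = len(V)
    if t > isolation_steps:                       # weights stay 0 during isolation
        for i in range(n):
            S_i = 0.0
            for j in range(n):
                if i == j:                        # no self-feedback
                    continue
                t_pre, t_post = nearest_spikes(spike_times[j], t - tau)
                dw = stdp_delta(t, tau, w[i, j], V[i], t_pre, t_post,
                                p["Vr"], p["theta"])
                w[i, j] += dw                     # eq. (9)
                # eqs. (10)-(11): the spike time used depends on the STDP case
                s = t_pre if dw < 0 else t_post if dw > 0 else t - tau
                S_i += w[i, j] * spike_at(spike_times[j], s)
            V[i] += S_i                           # eq. (12)
    fired = V > p["theta"]                        # eq. (3)
    for i in np.where(fired)[0]:
        spike_times[i].append(t)                  # record spikes for later lookup
    V[fired] = p["Vr"]                            # eq. (13): reset voltage
    psi[fired] += p["b"]                          #           and bump adaptation by b
    return V, psi, w
```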
3. RESULTS
We studied the number of different states, depicted by synchronous repetitive firing patterns
(e.g., Figure 1.b), that can be stabilized in a recurrent SNN (as in Figure 1.a) composed of P
spiking neurons, where P is an arbitrary positive integer that dictates the network's cardinality
(i.e., the size of the network), and where STDP manages the connection weights between the
neurons, as explained in the previous section. We compared the number of stabilized states of a
size-P recurrent SNN composed of P chaotic spiking neurons against another size-P recurrent
SNN composed of P regular spiking neurons, according to the settings of Table 1. For simplicity,
the recurrent SNN composed of chaotic spiking neurons is referred to as the chaotic SNN and
the recurrent SNN composed of regular spiking neurons is referred to as the regular SNN. The
number of neurons constituting each network will be between 10 and 50:
$$P \in \{10, 20, 30, 40, 50\} \tag{14}$$
We aim to show that the capacity of the chaotic SNN is larger than the capacity of the regular
SNN (i.e., the number of stabilized states inside a recurrent SNN of chaotic spiking neurons
exceeds the number of stabilized states inside a recurrent SNN of regular spiking neurons).
To do this, we run, twice, five sets of one hundred random experiments each (random in the
sense that an experiment starts with random initial conditions, as mentioned in the previous
section; see Equation 2). The total is 2 * 5 * 100 = 1000 experiments. Each experiment consists
of a recurrent SNN of P neurons, with P increasing by 10 for each set, starting from
P = 10 up to P = 50. The first five sets of experiments are executed on recurrent SNNs
composed of chaotic spiking neurons, while the second five sets are executed on recurrent SNNs
composed of regular spiking neurons. The results of the two β€˜five sets’ of experiments are
illustrated in a bar graph in Figure 4.c. We also show neuron activity (e.g., voltage outputs), the
evolution of connection weights and the STDP windows for two runs of both networks
(Figures 2.a, 2.b, 3.a, 3.b, 4.a and 4.b, respectively).
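As a rough sketch of this protocol, the loop below counts the distinct stabilized states over a set of random experiments. run_network, which would drive the simulation and return one neuron's spike times after synchronization, is an assumed helper, and the reduction of the final spike window to phases modulo tau is our own way of encoding the repetitive pattern of Figure 1.b.

```python
def count_states(mode, P, n_experiments=100, T=20000, tau=300):
    """Count distinct stabilized states (sketch of the Section 3 protocol)."""
    LIc, UIc = (400, 600) if mode == "regular" else (150, 170)
    states = set()
    for _ in range(n_experiments):
        spikes = run_network(P, ADEX[mode], LIc, UIc, T=T, tau=tau)  # assumed driver
        # encode the stabilized state: spike phases within one period tau
        pattern = tuple(sorted(s % tau for s in spikes if s >= T - tau))
        states.add(pattern)
    return len(states)

# Hypothetical usage, mirroring Figure 4.c:
# for P in (10, 20, 30, 40, 50):
#     print(P, count_states("chaotic", P), count_states("regular", P))
```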
Figure 2. (a) and (b) show the voltage activity of a chaotic and a regular Spiking Neural Network
(SNN), respectively, each composed of 10 neurons. Neurons start synchronizing after time step
5000. Time resolution is 1 ms.
Figure 3. (a) and (b) show the average of all connection weights inside a chaotic and a regular
Spiking Neural Network (SNN), respectively. Each network is composed of 10 neurons.
Neurons start synchronizing after time step 5000. Weight stabilization is fast and reaches a
steady value. Time resolution is 1 ms.
Figure 4. (a) and (b) show the STDP windows of a chaotic and a regular Spiking Neural Network
(SNN), respectively. (c) compares the number of different stabilized neural states for both types
of networks across 1000 total experiments (each network type runs five sets of 100 experiments,
with the number of neurons increasing for each set).
Analysing the graph of Figure 4.c, we observe that the number of different stabilized states of the
recurrent SNN composed of chaotic spiking neurons is always superior to that of the recurrent
SNN composed of regular spiking neurons, for every network size tested.
4. DISCUSSION
We should note that the number of states that can be reached within this type of network (Figure
1.a) is theoretically infinite, because it increases with the time delay of the neural network
connections. In our experiments, the time delay was fixed (Ο„i,j equal to 300). We notice
(Figure 4.c) that increasing the number of neurons decreases the number of states in both types
(i.e., chaotic and regular) of networks. This is expected because, for a fixed time delay,
increasing the number of neurons composing the network saturates the window with spikes,
which eventually reduces the number of unique patterns. We also noticed very high excitation in
both networks; this is because $\Delta\omega^+$ (Equation 4) is always active. Thus, the
synchronized output of the network can be fed to a β€œNOT” logic gate, so that the spiking pattern
generated by the network is inverted, as shown in Figure 1.b.
5. CONCLUSIONS
Experimental simulations were conducted with the aim of comparing the number of different
stabilized states (i.e., synchronized firing activity of repetitive spiking patterns) in a network of
chaotic spiking neurons vs. a network of regular spiking neurons, while the connections between
the neurons inside both networks followed the same competitive nearest-neighbour STDP rule
that we created. We tested networks composed of 10, 20, 30, 40 and 50 neurons. The results of
the experiments confirmed that this STDP rule favours chaotic spiking over regular spiking of
neurons, because the numbers of different stabilized states within networks of chaotic spiking
neurons were shown to be far larger than those within networks of regular spiking neurons. We
note that this behaviour is not general but restricted to this STDP rule only.
REFERENCES
[1] Maass, Wolfgang, and Christopher M. Bishop, eds. Pulsed neural networks. MIT Press, 2001.
[2] Greengard, Samuel. "Neuromorphic chips take shape." Communications of the ACM 63, no. 8
(2020): 9-11.
[3] Maass, Wolfgang. "Networks of spiking neurons: the third generation of neural network
models." Neural networks 10, no. 9 (1997): 1659-1671.
[4] Hebb, Donald Olding. The organization of behavior: A neuropsychological theory. Psychology
Press, 2005.
[5] Markram, Henry, Wulfram Gerstner, and Per Jesper SjΓΆstrΓΆm. "Spike-timing-dependent
plasticity: a comprehensive overview." Frontiers in synaptic neuroscience 4 (2012): 2.
[6] Aoun, Mario Antoine. Temporal difference method with hebbian plasticity rule as a learning
algorithm in networks of chaotic spiking neurons. [Master’s Thesis], Notre Dame University,
Louaize, Lebanon, 2007.
[7] Aoun, Mario Antoine. "STDP within NDS neurons." In International Symposium on Neural
Networks, pp. 33-43. Springer, Berlin, Heidelberg, 2010.
[8] Aoun, Mario Antoine, and Mounir Boukadoum. "Learning algorithm and neurocomputing
architecture for NDS Neurons." In 2014 IEEE 13th International Conference on Cognitive
Informatics and Cognitive Computing, pp. 126-132. IEEE, 2014.
[9] Aoun, Mario Antoine, and Mounir Boukadoum. "Chaotic liquid state machine." International
Journal of Cognitive Informatics and Natural Intelligence (IJCINI) 9, no. 4 (2015): 1-20.
[10] Aoun, Mario Antoine. Learning and memory within chaotic neurons. [PhD Thesis], University of
Quebec in Montreal, Canada, 2019.
[11] Brette, Romain, and Wulfram Gerstner. "Adaptive exponential integrate-and-fire model as an
effective description of neuronal activity." Journal of neurophysiology 94, no. 5 (2005): 3637-
3642.
[12] Naud, Richard, Nicolas Marcille, Claudia Clopath, and Wulfram Gerstner. "Firing patterns in the
adaptive exponential integrate-and-fire model." Biological cybernetics 99, no. 4 (2008): 335-347.
[13] Song, Sen, Kenneth D. Miller, and Larry F. Abbott. "Competitive Hebbian learning through
spike-timing-dependent synaptic plasticity." Nature neuroscience 3, no. 9 (2000): 919-926.
[14] Clopath, Claudia, and Wulfram Gerstner. "Voltage and spike timing interact in STDP–a unified
model." Frontiers in synaptic neuroscience 2 (2010): 25.
AUTHOR
Mario Antoine Aoun holds a PhD in Cognitive Informatics from UQAM. His research interests
are: Theoretical Computer Science, Cognitive Computing, Cognitive Informatics, Machine
Learning, Mathematical Modelling, Evolutionary Algorithms, Genetic Algorithms, Spiking
Neurons, Chaotic Spiking Neural Networks, Deep Learning, Deep Neural Networks, Recurrent
Neural Networks, Cognitive Neurodynamics, Neuro-economics, Synaptic Plasticity, Spike
Timing Dependent Plasticity, Nonlinear Dynamics, Chaotic Neurodynamics, Computational
Neuroscience, Neural Computing Architectures, Synchronization, Quantum Computing,
Relativistic Computing, Reservoir Computing, Chaos Theory, Chaos Control and
Hypercomputation. His email address is: mario@live.ca

More Related Content

What's hot

Boundness of a neural network weights using the notion of a limit of a sequence
Boundness of a neural network weights using the notion of a limit of a sequenceBoundness of a neural network weights using the notion of a limit of a sequence
Boundness of a neural network weights using the notion of a limit of a sequenceIJDKP
Β 
Neural Networks: Rosenblatt's Perceptron
Neural Networks: Rosenblatt's PerceptronNeural Networks: Rosenblatt's Perceptron
Neural Networks: Rosenblatt's PerceptronMostafa G. M. Mostafa
Β 
Quantum brain a recurrent quantum neural network model to describe eye trac...
Quantum brain   a recurrent quantum neural network model to describe eye trac...Quantum brain   a recurrent quantum neural network model to describe eye trac...
Quantum brain a recurrent quantum neural network model to describe eye trac...Elsa von Licy
Β 
Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Randa Elanwar
Β 
Introduction to Neural networks (under graduate course) Lecture 2 of 9
Introduction to Neural networks (under graduate course) Lecture 2 of 9Introduction to Neural networks (under graduate course) Lecture 2 of 9
Introduction to Neural networks (under graduate course) Lecture 2 of 9Randa Elanwar
Β 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks ShwethaShreeS
Β 
A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...
A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...
A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...konmpoz
Β 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSMohammed Bennamoun
Β 
Artificial Neural Network (draft)
Artificial Neural Network (draft)Artificial Neural Network (draft)
Artificial Neural Network (draft)James Boulie
Β 
Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Randa Elanwar
Β 
Comparative study of ANNs and BNNs and mathematical modeling of a neuron
Comparative study of ANNs and BNNs and mathematical modeling of a neuronComparative study of ANNs and BNNs and mathematical modeling of a neuron
Comparative study of ANNs and BNNs and mathematical modeling of a neuronSaransh Choudhary
Β 
Artificial Neural Network
Artificial Neural Network Artificial Neural Network
Artificial Neural Network Iman Ardekani
Β 
Neurogrid : A Mixed-Analog-Digital Multichip System for Large-Scale Neural Si...
Neurogrid : A Mixed-Analog-Digital Multichip System for Large-Scale Neural Si...Neurogrid : A Mixed-Analog-Digital Multichip System for Large-Scale Neural Si...
Neurogrid : A Mixed-Analog-Digital Multichip System for Large-Scale Neural Si...IIT Bombay
Β 
Modeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologyModeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologytheijes
Β 
Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...
Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...
Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...cscpconf
Β 
Introduction to Neural networks (under graduate course) Lecture 1 of 9
Introduction to Neural networks (under graduate course) Lecture 1 of 9Introduction to Neural networks (under graduate course) Lecture 1 of 9
Introduction to Neural networks (under graduate course) Lecture 1 of 9Randa Elanwar
Β 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural NetworkKnoldus Inc.
Β 

What's hot (20)

20120140503023
2012014050302320120140503023
20120140503023
Β 
Boundness of a neural network weights using the notion of a limit of a sequence
Boundness of a neural network weights using the notion of a limit of a sequenceBoundness of a neural network weights using the notion of a limit of a sequence
Boundness of a neural network weights using the notion of a limit of a sequence
Β 
Neural Networks: Rosenblatt's Perceptron
Neural Networks: Rosenblatt's PerceptronNeural Networks: Rosenblatt's Perceptron
Neural Networks: Rosenblatt's Perceptron
Β 
Quantum brain a recurrent quantum neural network model to describe eye trac...
Quantum brain   a recurrent quantum neural network model to describe eye trac...Quantum brain   a recurrent quantum neural network model to describe eye trac...
Quantum brain a recurrent quantum neural network model to describe eye trac...
Β 
Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9
Β 
Introduction to Neural networks (under graduate course) Lecture 2 of 9
Introduction to Neural networks (under graduate course) Lecture 2 of 9Introduction to Neural networks (under graduate course) Lecture 2 of 9
Introduction to Neural networks (under graduate course) Lecture 2 of 9
Β 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
Β 
A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...
A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...
A Novel Single-Trial Analysis Scheme for Characterizing the Presaccadic Brain...
Β 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Β 
Artificial Neural Network (draft)
Artificial Neural Network (draft)Artificial Neural Network (draft)
Artificial Neural Network (draft)
Β 
Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9
Β 
Comparative study of ANNs and BNNs and mathematical modeling of a neuron
Comparative study of ANNs and BNNs and mathematical modeling of a neuronComparative study of ANNs and BNNs and mathematical modeling of a neuron
Comparative study of ANNs and BNNs and mathematical modeling of a neuron
Β 
Artificial Neural Network
Artificial Neural Network Artificial Neural Network
Artificial Neural Network
Β 
Neural
NeuralNeural
Neural
Β 
Neurogrid : A Mixed-Analog-Digital Multichip System for Large-Scale Neural Si...
Neurogrid : A Mixed-Analog-Digital Multichip System for Large-Scale Neural Si...Neurogrid : A Mixed-Analog-Digital Multichip System for Large-Scale Neural Si...
Neurogrid : A Mixed-Analog-Digital Multichip System for Large-Scale Neural Si...
Β 
honn
honnhonn
honn
Β 
Modeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologyModeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technology
Β 
Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...
Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...
Economic Load Dispatch (ELD), Economic Emission Dispatch (EED), Combined Econ...
Β 
Introduction to Neural networks (under graduate course) Lecture 1 of 9
Introduction to Neural networks (under graduate course) Lecture 1 of 9Introduction to Neural networks (under graduate course) Lecture 1 of 9
Introduction to Neural networks (under graduate course) Lecture 1 of 9
Β 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
Β 

Similar to A STDP RULE THAT FAVOURS CHAOTIC SPIKING OVER REGULAR SPIKING OF NEURONS

ANN based STLF of Power System
ANN based STLF of Power SystemANN based STLF of Power System
ANN based STLF of Power SystemYousuf Khan
Β 
Echo state networks and locomotion patterns
Echo state networks and locomotion patternsEcho state networks and locomotion patterns
Echo state networks and locomotion patternsVito Strano
Β 
EM Term Paper
EM Term PaperEM Term Paper
EM Term PaperPreeti Sahu
Β 
Hybrid PSO-SA algorithm for training a Neural Network for Classification
Hybrid PSO-SA algorithm for training a Neural Network for ClassificationHybrid PSO-SA algorithm for training a Neural Network for Classification
Hybrid PSO-SA algorithm for training a Neural Network for ClassificationIJCSEA Journal
Β 
PR12-225 Discovering Physical Concepts With Neural Networks
PR12-225 Discovering Physical Concepts With Neural NetworksPR12-225 Discovering Physical Concepts With Neural Networks
PR12-225 Discovering Physical Concepts With Neural NetworksKyunghoon Jung
Β 
Hardware Implementation of Spiking Neural Network (SNN)
Hardware Implementation of Spiking Neural Network (SNN)Hardware Implementation of Spiking Neural Network (SNN)
Hardware Implementation of Spiking Neural Network (SNN)supratikmondal6
Β 
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...CSCJournals
Β 
neural pacemaker
neural pacemakerneural pacemaker
neural pacemakerSteven Yoon
Β 
Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02Deepu Gupta
Β 
Neural Networks Ver1
Neural  Networks  Ver1Neural  Networks  Ver1
Neural Networks Ver1ncct
Β 
Neural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics CourseNeural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics CourseMohaiminur Rahman
Β 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cseNaveenBhajantri1
Β 
Eliano Pessa
Eliano PessaEliano Pessa
Eliano Pessaagrilinea
Β 
Final cnn shruthi gali
Final cnn shruthi galiFinal cnn shruthi gali
Final cnn shruthi galiSam Ram
Β 
On The Application of Hyperbolic Activation Function in Computing the Acceler...
On The Application of Hyperbolic Activation Function in Computing the Acceler...On The Application of Hyperbolic Activation Function in Computing the Acceler...
On The Application of Hyperbolic Activation Function in Computing the Acceler...iosrjce
Β 
Optimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionOptimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionIAEME Publication
Β 

Similar to A STDP RULE THAT FAVOURS CHAOTIC SPIKING OVER REGULAR SPIKING OF NEURONS (20)

ANN based STLF of Power System
ANN based STLF of Power SystemANN based STLF of Power System
ANN based STLF of Power System
Β 
JACT 5-3_Christakis
JACT 5-3_ChristakisJACT 5-3_Christakis
JACT 5-3_Christakis
Β 
Echo state networks and locomotion patterns
Echo state networks and locomotion patternsEcho state networks and locomotion patterns
Echo state networks and locomotion patterns
Β 
EM Term Paper
EM Term PaperEM Term Paper
EM Term Paper
Β 
Hybrid PSO-SA algorithm for training a Neural Network for Classification
Hybrid PSO-SA algorithm for training a Neural Network for ClassificationHybrid PSO-SA algorithm for training a Neural Network for Classification
Hybrid PSO-SA algorithm for training a Neural Network for Classification
Β 
PR12-225 Discovering Physical Concepts With Neural Networks
PR12-225 Discovering Physical Concepts With Neural NetworksPR12-225 Discovering Physical Concepts With Neural Networks
PR12-225 Discovering Physical Concepts With Neural Networks
Β 
Hardware Implementation of Spiking Neural Network (SNN)
Hardware Implementation of Spiking Neural Network (SNN)Hardware Implementation of Spiking Neural Network (SNN)
Hardware Implementation of Spiking Neural Network (SNN)
Β 
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...
Word Recognition in Continuous Speech and Speaker Independent by Means of Rec...
Β 
neural pacemaker
neural pacemakerneural pacemaker
neural pacemaker
Β 
Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02Neuralnetwork 101222074552-phpapp02
Neuralnetwork 101222074552-phpapp02
Β 
Neural Networks Ver1
Neural  Networks  Ver1Neural  Networks  Ver1
Neural Networks Ver1
Β 
Neural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics CourseNeural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics Course
Β 
Lec10
Lec10Lec10
Lec10
Β 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cse
Β 
H44084348
H44084348H44084348
H44084348
Β 
Eliano Pessa
Eliano PessaEliano Pessa
Eliano Pessa
Β 
Final cnn shruthi gali
Final cnn shruthi galiFinal cnn shruthi gali
Final cnn shruthi gali
Β 
On The Application of Hyperbolic Activation Function in Computing the Acceler...
On The Application of Hyperbolic Activation Function in Computing the Acceler...On The Application of Hyperbolic Activation Function in Computing the Acceler...
On The Application of Hyperbolic Activation Function in Computing the Acceler...
Β 
Ak methodology nnloadmodeling
Ak methodology nnloadmodelingAk methodology nnloadmodeling
Ak methodology nnloadmodeling
Β 
Optimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionOptimal neural network models for wind speed prediction
Optimal neural network models for wind speed prediction
Β 

Recently uploaded

main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidNikhilNagaraju
Β 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
Β 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
Β 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
Β 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineeringmalavadedarshan25
Β 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
Β 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
Β 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
Β 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZTE
Β 
Study on Air-Water & Water-Water Heat Exchange in a Finned ο»ΏTube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned ο»ΏTube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned ο»ΏTube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned ο»ΏTube ExchangerAnamika Sarkar
Β 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.eptoze12
Β 
πŸ”9953056974πŸ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
πŸ”9953056974πŸ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...πŸ”9953056974πŸ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
πŸ”9953056974πŸ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...9953056974 Low Rate Call Girls In Saket, Delhi NCR
Β 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...srsj9000
Β 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ
Β 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
Β 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxvipinkmenon1
Β 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girlsssuser7cb4ff
Β 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
Β 

Recently uploaded (20)

9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
Β 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfid
Β 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
Β 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
Β 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
Β 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineering
Β 
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxExploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Β 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentation
Β 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Β 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
Β 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
Β 
Study on Air-Water & Water-Water Heat Exchange in a Finned ο»ΏTube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned ο»ΏTube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned ο»ΏTube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned ο»ΏTube Exchanger
Β 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.
Β 
πŸ”9953056974πŸ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
πŸ”9953056974πŸ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...πŸ”9953056974πŸ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
πŸ”9953056974πŸ”!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
Β 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Β 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
Β 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
Β 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptx
Β 
Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girls
Β 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
Β 

A STDP RULE THAT FAVOURS CHAOTIC SPIKING OVER REGULAR SPIKING OF NEURONS

  • 1. International Journal of Artificial Intelligence and Applications (IJAIA), Vol.12, No.3, May 2021 DOI: 10.5121/ijaia.2021.12303 25 A STDP RULE THAT FAVOURS CHAOTIC SPIKING OVER REGULAR SPIKING OF NEURONS Mario Antoine Aoun MontrΓ©al, Quebec, Canada ABSTRACT We compare the number of states of a Spiking Neural Network (SNN) composed from chaotic spiking neurons versus the number of states of a SNN composed from regular spiking neurons while both SNNs implementing a Spike Timing Dependent Plasticity (STDP) rule that we created. We find out that this STDP rule favors chaotic spiking since the number of states is larger in the chaotic SNN than the regular SNN. This chaotic favorability is not general; it is exclusive to this STDP rule only. This research falls under our long-term investigation of STDP and chaos theory. KEYWORDS Competitive Hebbian Learning, Chaotic Spiking Neural Network, Adaptive Exponential Integrate and Fire (AdEx) Neuron, Spike Timing Dependent Plasticity, STDP, Chaos Theory, Synchronization, Coupling, Chaos Control, Time Delay, Regular Spiking Neuron, Nearest neighbour, Recurrent Neural Network 1. INTRODUCTION Spiking Neural Networks (SNNs) [1] are gaining much attention in the scientific literature, specifically neuromorphic engineering [2], due to their low energy consumption when implemented on electronic circuits; compared to artificial neural networks using conventional neuron models [2]. For instance, deep classical artificial neural networks, using gate activation functions as neurons, consume great amount of power in their training when implemented on Field Programmable Gate Arrays (FPGAs) [2]. On the contrary, spiking neurons implemented on Very Large Scale Integration (VLSI) chips consume way less energy [2]. The reason behind this difference is that spiking neurons are event driven systems [2]. Furthermore, SNNs are considered the third generation of neural networks because, as their name imply, they use spiking neuron models, where the first generation is based on perceptrons using threshold gates, and the second started with Hopfield nets using activation functions [3]. On the other side, Hebbian learning is the binding phenomenon that takes place between neurons after continuous stimulation they make upon each other [4]. Spike Timing Dependent Plasticity (STDP) incorporates exact timing of stimulations as a condition for the binding to occur [5]. In this work we will study STDP in the context of SNNs where STDP governs the connections weights inside the SNN. The SNN considered here (Fig. 1.a) is composed from spiking neurons that are recurrently connected to each other via time-delayed connections. Neurons have no self- feedback connections. The time delay is fixed and same for all the connections in the network. Connections have weights, too. When neurons connect and update their connections’ weights via STDP, then they synchronize their activity to a same repetitive spiking pattern (Fig. 1.b). The latter is considered the network’s state. In simple terms, this state is a synchronized spiking output pattern of all the neurons composing the network. In this paper, we are going to study the number of these states for a recurrent SNN composed from chaotic spiking neurons and a recurrent SNN composed from regular spiking neurons while both networks are implementing
  • 2. International Journal of Artificial Intelligence and Applications (IJAIA), Vol.12, No.3, May 2021 26 same STDP rule. We will give experimental evidence that a chaotic recurrent SNN has indeed more states than a regular recurrent SNN while both are incorporating same STDP rule. But, we should emphasize that such claim and the results are exclusive to this specific STDP rule only. This research is part from our investigation of STDP and chaos theory, especially chaotic spiking neurons, which started in early 2006-2007 [6], spanned many years [7, 8, 9, 10] and continues to date. The paper is divided as follows: In section 2 (Methods), we sketch the neural network architecture and explain the neuron model used. Also, we introduce our synaptic plasticity model based on STDP. In section 3 (Results), we provide a comparable study of the number of states that can be stabilized in network of chaotic spiking neurons implementing STDP versus the number of states that can be stabilized in a network of regular spiking neurons implementing the same STDP. Section 4 is a discussion of the results and section 5 concludes the paper. 2. METHODS The neural network architecture that will be used in this paper is composed of n spiking neurons that are recurrently connected to one another, hence a recurrent spiking neural network. We use the Adaptive Exponential (AdEx) integrate and fire neuron model [11], as the elementary component of the network, because it is a neuron model that can simulate regular spiking or chaotic spiking by changing its parameters [12]. Each connection, between any two neurons in the network, has a fixed time delay and an adjustable weight. The time delays are kept fixed because the network will synchronize its spiking activity in a window period equal to the time delay. The weights are adjusted using synaptic plasticity, as we will see next. The network architecture is illustrated in Figure 1.a. (a) (b) Figure 1. a. Neural Network Architecture: n AdEx Neurons (A1, …, An) are recurrently connected. Every connection has a varying weight β€œΟ‰β€ and a fixed time delay β€œΟ„β€. Note that β€œΟ„β€ is identical for all connections. b. Synchronized spikes Output: When the network stabilizes, then all the neurons synchronize their output to a same repetitive spiking pattern of length β€œΟ„β€. In this example, β€œΟ„β€ is equal to 300 and the repetitive pattern is composed from three spikes. Time resolution is in ms. Time steps from 10000 to 14500. Pattern Blocks: (10000-10299, 10300-10599, 10600-10899…). Spikes: (10002, 10074, 10298, 10302, 10374, 10598, 10602, 10674, 10898…). This repetitive pattern is considered as the network state.
  • 3. International Journal of Artificial Intelligence and Applications (IJAIA), Vol.12, No.3, May 2021 27 As we can see in Figure 1.a, we have a neural network of n AdEx Neurons (e.g., A1 to An) where the subscript β€œi” is the index of an AdEx Neuron β€œA”. Each neuron has n-1 connections because there are no self-feedback connections. A connection from neuron Aj to neuron Ai has a time delay β€œΟ„i,j” and a weight β€œΟ‰i,j”. We rewrite the AdEx Neuron equations [12] in their Euler form: 𝑉! 𝑑 = 𝑉! 𝑑 βˆ’ 1 + 𝑑𝑑 βˆ’π‘”! 𝑉! 𝑑 βˆ’ 1 βˆ’ 𝐸! + 𝑔! βˆ†! exp 𝑉! 𝑑 βˆ’ 1 βˆ’ 𝑉! βˆ†! + 𝐼𝑐 βˆ’ πœ“!(𝑑 βˆ’ 1) 𝐢 (1) πœ“! 𝑑 = πœ“! 𝑑 βˆ’ 1 + 𝑑𝑑 π‘Ž 𝑉!(𝑑 βˆ’ 1) βˆ’ 𝐸! βˆ’ πœ“!(𝑑 βˆ’ 1) Ο„! Where Vi(t) is the neuron’s voltage at time t and Ξ¨i(t) is its adaptation variable. The subscript β€œi” indicates the neuron’s index inside the network as we mentioned earlier. dt is the Euler time step which is set to 0.1 ms for good precision. The parameters of the AdEx Neuron for both regular and chaotic firing are retrieved from [12] and are shown in Table 1, next. Table 1. Configuration of the AdEx Neuron parameters for regular and chaotic firing modes [12]. Mode Parameters C gL EL VT Ξ”T a Ο„w b Vr Ic ΞΈ Regular 200 10 -70 -50 2 2 30 0 -58 500 0 Chaotic 100 12 -60 -50 2 -11 130 30 -48 160 0 C is membrane capacitance in pF, gL is leak conductance in nS, EL is leak reversal in mV, VT is threshold potential in mV, Ξ”T is rise slope factor in mV, a is 'sub-threshold' adaptation conductance in nS, Ο„w is adaptation time constant in ms, b is spike trigger adaptation current increment in pA, Vr is reset potential in mV, Ic is input current in pA and ΞΈ is reset threshold in mV [12]. The neuron initial conditions are: Vi(0) = Vr ψi(0) = 0 Ici = LIc + (UIc - LIc) * Ri (2) Where, Ri is a random decimal between 0 and 1. LIc is the lower bound of the default input current (Ic). UIc is the upper bound of the default input current (Ic). We note that LIc and UIc are set according to the firing mode (i.e. regular or chaotic) requested. For instance, if we want regular firing, we can set LIc to 400 and UIc to 600, which are in fact 100 units away from their mean Ic which is 500. As for chaotic firing, we can set LIc to 150 and UIc to 170, which are 10 units away from their mean Ic that is 160. Note that in the case of chaotic firing, it was observed in [12] that the neuron’s chaotic firing is sensitive to the input current 160 (e.g. Ic = 160 pA) and values far away from 160 would ruin the chaotic behaviour of the neuron. In fact, we need the neurons of the network to have slight variations in their initial conditions (i.e. initial value of the Input current Ic) so their output will be different from
  • 4. International Journal of Artificial Intelligence and Applications (IJAIA), Vol.12, No.3, May 2021 28 one another; which will offer great opportunity to the synaptic plasticity model that depends on the firing time of the neurons, to operate properly as we will see next. When the neuron’s voltage passes its threshold (i.e. Vi > ΞΈ), then the neuron fires a spike that is represented by Ξ³i: Ξ³i (t) = 1,Vi (t) >ΞΈ 0,Vi (t) ≀θ ⎧ ⎨ βŽͺ ⎩ βŽͺ (3) The neuron receives time-delayed spikes from other neurons through its incoming connections, which are scaled by each connection’s weight Ο‰i,j and summed up as a total input Si(t) which would be added to the neuron’s voltage Vi(t). But, before evaluating Si(t) and adding it to Vi(t), the weights of the connections should be updated where the weight change Δωi,j is calculated and added to Ο‰i,j. So, in order to update the weights of the connections of the network then we implement a competitive scenario of STDP that is similar to the approach made by Song et al, in 2000 [13], in their work on Competitive Hebbian learning based on STDP [13]. Our implementation of a competitive STDP protocol uses nearest neighbour spikes and is described as: 𝐼𝑓 𝑑 βˆ’ 𝜏!,! βˆ’ 𝑑!"# ! β‰₯ 𝑑!"#$ ! βˆ’ (𝑑 βˆ’ 𝜏!,!) & 𝑉! 𝑑 > πœƒ π‘‘β„Žπ‘’π‘› Δ𝑀!,! = Δ𝑀! 𝐼𝑓 𝑑 βˆ’ 𝜏!,! βˆ’ 𝑑!"# ! < 𝑑!"#$ ! βˆ’ (𝑑 βˆ’ 𝜏!,!) π‘‘β„Žπ‘’π‘› Δ𝑀!,! = Δ𝑀! (4) Where, tpre(j) is the time of the spike, of input neuron j, that occurred before t – Ο„i,j. tpost(j) is the time of the spike, of input neuron j, that occurred after t – Ο„i,j. Δω– is the negative decrease of the weight change and equal to: Δ𝑀! = 𝐴! βˆ— 𝑒 !!!!,!!!!"#$(!) !!,! (5) Δω+ is the positive increase of the weight change, equal to: Δ𝑀! = 𝐴! βˆ— 𝑒 ! !!!!,!!!!"#(!) !!,! (6) A– , A+ are the lower and upper boundaries, of the negative and positive weight change, respectively, which are defined as the following: 𝐴! = 𝑉 ! βˆ— πœ‡! (7) And, 𝐴! = πœƒ βˆ’ 𝑉 ! βˆ’ 𝑀!,! βˆ— πœ‡! (8) Where, ¡– , Β΅+ constants that are experimentally set to 0.01 and 0.1, respectively. We noticed that setting ¡– to 0.01 then the neuron’s voltage wouldn’t decrease to very low values during start of STDP phase. The reason behind this choice in defining A– and A+ is supported by the fact that these parameters should be voltage dependent as suggested in [14]. Furthermore, by setting A– = Vr *
Furthermore, by setting A− = Vr · μ−, we make sure that this parameter is always negative, which is a must for Long Term Depression (LTD) to take place in the STDP protocol. Second, by setting A+ = [(θ − Vr) − ωi,j] · μ+, we make sure that Long Term Potentiation (LTP) settles down once the weight of a connection has reached its maximal value of θ − Vr. Last but not least, the weight of every connection is updated according to the following:

$$\omega_{i,j}(t) = \omega_{i,j}(t-1) + \Delta\omega_{i,j} \tag{9}$$

Afterwards, the input to neuron i is calculated according to the following:

$$S_i(t) = \sum_{j=1, j \neq i}^{n} \omega_{i,j}(t) \cdot \gamma_j\left(t_{(pre,post)}(j)\right) \tag{10}$$

where

$$t_{(pre,post)}(j) = \begin{cases} t_{pre}(j) & \text{in case of } \Delta\omega_+ \\ t_{post}(j) & \text{in case of } \Delta\omega_- \\ t - \tau_{i,j} & \text{in case of } \Delta\omega_0 \end{cases} \tag{11}$$

Δω0 denotes the case Δω = 0, which occurs when the neurons have synchronized their neural states over a period τ; no further change in the weights is then necessary. After calculating the input Si(t), it is added to the neuron's voltage Vi(t):

$$V_i(t) = V_i(t) + S_i(t) \tag{12}$$

When the neuron's voltage crosses its threshold θ, the voltage is reset to the reset value Vr and the adaptation variable is incremented by the adaptation reset parameter b (the spike-triggered adaptation current increment):

$$\text{If } V_i(t) > \theta \text{, then } V_i(t) = V_r \text{ and } \psi_i(t) = \psi_i(t) + b \tag{13}$$

The weights of all connections are initially set to 0, so the neurons inside the network first run in isolation and evolve their own dynamics. Then, synaptic plasticity starts to operate by updating the connection weights of every neuron. In the following experiments, the neurons run in isolation for 5000 time steps before their connection weights start to be updated.
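Putting the pieces together, below is a condensed sketch of one possible simulation loop, assuming the ordering described above (Euler step, weight update, input summation, spike detection, reset) and a warm-up of 5000 isolated steps. It builds on the two sketches above; the helper nearest_spikes and the array layout are our own assumptions, and the Equation 10 term is simplified because γ equals 1 at the selected spike times.

```python
import numpy as np

def nearest_spikes(train, t_ref):
    """Nearest spikes of one input neuron strictly before/after time t_ref."""
    before = np.flatnonzero(train[:t_ref])
    after = np.flatnonzero(train[t_ref + 1:])
    t_pre = int(before[-1]) if before.size else None
    t_post = int(after[0]) + t_ref + 1 if after.size else None
    return t_pre, t_post

def simulate(n=10, steps=20000, tau=300, warmup=5000, seed=0):
    """Sketch of the full network loop (Equations 1-13)."""
    rng = np.random.default_rng(seed)
    V = np.full(n, Vr)                 # V_i(0) = Vr
    psi = np.zeros(n)                  # psi_i(0) = 0
    Ic = initial_currents(n, rng=rng)  # Equation 2
    w = np.zeros((n, n))               # weights start at 0: isolated neurons
    spikes = np.zeros((steps, n), dtype=bool)
    for t in range(1, steps):
        for i in range(n):             # Equation 1 (Euler step)
            V[i], psi[i] = adex_step(V[i], psi[i], Ic[i])
        if t > warmup:                 # plasticity after the isolation phase
            for i in range(n):
                S = 0.0
                for j in range(n):
                    if j == i:
                        continue
                    t_pre, t_post = nearest_spikes(spikes[:, j], t - tau)
                    if t_pre is None or t_post is None:
                        continue
                    dw, _ = stdp_delta(t, tau, w[i, j], t_pre, t_post, V[i])
                    w[i, j] += dw      # Equation 9
                    S += w[i, j]       # Equation 10 (gamma = 1 at spike times)
                V[i] += S              # Equation 12
        fired = V > theta
        spikes[t] = fired              # Equation 3
        V[fired] = Vr                  # Equation 13 (reset)
        psi[fired] += b
    return spikes, w
```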
3. RESULTS

We studied the number of different states, depicted by synchronous repetitive firing patterns (e.g., Figure 1.b), that can be stabilized in a recurrent SNN (as in Figure 1.a) composed of P spiking neurons, where P is an arbitrary positive integer that dictates the network's cardinality (i.e., the size of the network), and where STDP manages the connection weights between the neurons, as explained in the previous section. We compared the number of stabilized states of a P-size recurrent SNN composed of P chaotic spiking neurons and another P-size recurrent SNN composed of P regular spiking neurons, according to the settings of Table 1. For simplicity, the recurrent SNN composed of chaotic spiking neurons is referred to as the chaotic SNN and the recurrent SNN composed of regular spiking neurons is referred to as the regular SNN. The number of neurons constituting each network is between 10 and 50:

$$P \in \{10, 20, 30, 40, 50\} \tag{14}$$

We aim to show that the capacity of the chaotic SNN is larger than the capacity of the regular SNN (i.e., that the number of stabilized states inside a recurrent SNN of chaotic spiking neurons is greater than the number of stabilized states inside a recurrent SNN of regular spiking neurons). To do this, we run five sets of one hundred random experiments twice (random in the sense that each experiment starts with random initial conditions, as described in the previous section, Equation 2). The total is 2 * 5 * 100 = 1000 experiments. Each experiment consists of a recurrent SNN of P neurons, with P increasing by 10 for each set, from P = 10 to P = 50. The first five sets of experiments are executed on recurrent SNNs composed of chaotic spiking neurons, while the second five sets are executed on recurrent SNNs composed of regular spiking neurons. The results of the two "five sets" of experiments are illustrated in a bar graph in Figure 4.c. We also show the neurons' activity (i.e., voltage outputs), the evolution of connection weights and the STDP windows for runs of both networks (Figures 2.a, 2.b, 3.a, 3.b, 4.a and 4.b, respectively).

Figure 2. (a) and (b) show voltage activity for a chaotic and a regular Spiking Neural Network (SNN), respectively, each composed of 10 neurons. Neurons start synchronizing after time step 5000. Time resolution is 1 ms.

Figure 3. (a) and (b) show the average of all connection weights inside a chaotic and a regular Spiking Neural Network (SNN), respectively. Each network is composed of 10 neurons. Neurons start synchronizing after time step 5000. Weight stabilization is fast and reaches a steady value. Time resolution is 1 ms.
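As a rough illustration of this counting protocol, the sketch below runs 100 randomly initialized simulations for a given P and counts the distinct synchronized patterns. The state read-out (taking the last τ steps of one neuron's spike train, since all neurons synchronize to the same pattern) is a hypothetical simplification of ours, not the paper's exact measurement.

```python
def extract_pattern(spikes, tau=300):
    """Hypothetical state read-out: the last tau steps of one neuron's spike
    train, used as the network's synchronized repetitive pattern."""
    return tuple(spikes[-tau:, 0])

def count_states(P, runs=100):
    """One experiment set: count distinct stabilized states over `runs`
    randomly initialized simulations of a P-neuron network."""
    patterns = {extract_pattern(simulate(n=P, seed=s)[0]) for s in range(runs)}
    return len(patterns)

# Usage: compare capacities across network sizes, as in Figure 4.c.
# for P in (10, 20, 30, 40, 50):
#     print(P, count_states(P))
```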
Figure 4. (a) and (b) show the STDP windows for a chaotic and a regular Spiking Neural Network (SNN), respectively. (c) compares the number of different stabilized neural states for both types of network over 1000 total experiments (each network type runs five sets of 100 experiments, with the number of neurons increasing per set).

By analysing the graph in Figure 4.c, we observe that the number of different stabilized states of a recurrent SNN composed of chaotic spiking neurons is always greater than the number of different stabilized states of a recurrent SNN composed of regular spiking neurons, for any number of neurons composing the network.

4. DISCUSSION

We should note that the number of states that can be reached within this type of network (Figure 1.a) is theoretically infinite, because it increases by increasing the time delay of the neural network connections. In our experiments, the time delay was fixed (τi,j = 300). We notice (Figure 4.c) that increasing the number of neurons decreases the number of states in both types of network (i.e., chaotic and regular). This is expected because, for a fixed time delay, increasing the number of neurons composing the network saturates the network with spikes, which eventually reduces the number of unique patterns. We also noticed very high excitation in both networks; this is because the Δω+ branch of Equation 4 is always active. Thus, the synchronized output of the network can be fed to a "NOT" logic gate, reversing the spike pattern generated by the network, as shown in Figure 1.b.
5. CONCLUSIONS

Experimental simulations were conducted with the aim of comparing the number of different stabilized states (i.e., synchronized firing activity of repetitive spiking patterns) in a network of chaotic spiking neurons vs. a network of regular spiking neurons, while the connections between the neurons inside both networks followed the same competitive nearest-neighbour STDP rule that we created. We tested networks composed of 10, 20, 30, 40 and 50 neurons. The results of the experiments confirm that this STDP rule favours chaotic spiking over regular spiking of neurons, because the number of different stabilized states within networks of chaotic spiking neurons was shown to be considerably larger than the number of different stabilized states within networks of regular spiking neurons. We note that this behaviour is not general but restricted to this STDP rule only.

REFERENCES

[1] Maass, Wolfgang, and Christopher M. Bishop, eds. Pulsed Neural Networks. MIT Press, 2001.
[2] Greengard, Samuel. "Neuromorphic chips take shape." Communications of the ACM 63, no. 8 (2020): 9-11.
[3] Maass, Wolfgang. "Networks of spiking neurons: the third generation of neural network models." Neural Networks 10, no. 9 (1997): 1659-1671.
[4] Hebb, Donald Olding. The Organization of Behavior: A Neuropsychological Theory. Psychology Press, 2005.
[5] Markram, Henry, Wulfram Gerstner, and Per Jesper Sjöström. "Spike-timing-dependent plasticity: a comprehensive overview." Frontiers in Synaptic Neuroscience 4 (2012): 2.
[6] Aoun, Mario Antoine. Temporal Difference Method with Hebbian Plasticity Rule as a Learning Algorithm in Networks of Chaotic Spiking Neurons. [Master's Thesis], Notre Dame University, Louaize, Lebanon, 2007.
[7] Aoun, Mario Antoine. "STDP within NDS neurons." In International Symposium on Neural Networks, pp. 33-43. Springer, Berlin, Heidelberg, 2010.
[8] Aoun, Mario Antoine, and Mounir Boukadoum. "Learning algorithm and neurocomputing architecture for NDS Neurons." In 2014 IEEE 13th International Conference on Cognitive Informatics and Cognitive Computing, pp. 126-132. IEEE, 2014.
[9] Aoun, Mario Antoine, and Mounir Boukadoum. "Chaotic liquid state machine." International Journal of Cognitive Informatics and Natural Intelligence (IJCINI) 9, no. 4 (2015): 1-20.
[10] Aoun, Mario Antoine. Learning and Memory within Chaotic Neurons. [PhD Thesis], University of Quebec in Montreal, Canada, 2019.
[11] Brette, Romain, and Wulfram Gerstner. "Adaptive exponential integrate-and-fire model as an effective description of neuronal activity." Journal of Neurophysiology 94, no. 5 (2005): 3637-3642.
[12] Naud, Richard, Nicolas Marcille, Claudia Clopath, and Wulfram Gerstner. "Firing patterns in the adaptive exponential integrate-and-fire model." Biological Cybernetics 99, no. 4 (2008): 335-347.
[13] Song, Sen, Kenneth D. Miller, and Larry F. Abbott. "Competitive Hebbian learning through spike-timing-dependent synaptic plasticity." Nature Neuroscience 3, no. 9 (2000): 919-926.
[14] Clopath, Claudia, and Wulfram Gerstner. "Voltage and spike timing interact in STDP – a unified model." Frontiers in Synaptic Neuroscience 2 (2010): 25.
AUTHOR

Mario Antoine Aoun holds a Ph.D. in Cognitive Informatics from UQAM. His research interests are: Theoretical Computer Science, Cognitive Computing, Cognitive Informatics, Machine Learning, Mathematical Modelling, Evolutionary Algorithms, Genetic Algorithms, Spiking Neurons, Chaotic Spiking Neural Networks, Deep Learning, Deep Neural Networks, Recurrent Neural Networks, Cognitive Neurodynamics, Neuro-economics, Synaptic Plasticity, Spike Timing Dependent Plasticity, Nonlinear Dynamics, Chaotic Neurodynamics, Computational Neuroscience, Neural Computing Architectures, Synchronization, Quantum Computing, Relativistic Computing, Reservoir Computing, Chaos Theory, Chaos Control and Hyper-computation. His email address is: mario@live.ca