INDIAN INSTITUTE OF TECHNOLOGY,
GUWAHATI
SUMMER INTERNSHIP 2022
Project Topic: - “Efficient Hardware implementation for
image classification with simplified Spiking Neural Network
(SNN)”.
Under the Supervision of
Dr. Gaurav Trivedi (IIT Guwahati, Electrical Engineering)
Under the mentorship of
Mr. Ashvinikumar Pruthviraj Dongre (IIT, Guwahati, PhD Scholar)
Applicant’s Details:
Name – SUPRATIK MONDAL
Institute Roll number – 510719009
Parent Institute Name – Indian Institute of Engineering Science
and Technology, Shibpur (IIEST, Shibpur)
Department – Electronics and Telecommunication Engineering.
Year of Study - 4th year
ACKNOWLEDGEMENT:
I would like to express my heartfelt gratitude and warmest thanks to my supervisor Dr. Gaurav Trivedi (Assistant Professor, IIT Guwahati, Electrical Engineering), who made this work possible by giving me this golden opportunity to work on this research project.
I would also like to thank my mentor Mr. Ashvinikumar Pruthviraj Dongre (IIT Guwahati, PhD scholar) for his valuable advice and suggestions, which carried me through all the stages of this research project. His wonderful mentorship and valuable help allowed me to gain a lot of knowledge and experience, with which I have successfully completed my research internship.
Last but not least, I would like to give my special thanks and gratitude to my father Mr. Sanat Kumar Mondal and my mother Mrs. Mithu Mondal for constantly supporting and encouraging me throughout this journey. Thanks also to all my team members for helping me in this project work.
------------x--------x------------------
INTRODUCTION: - Neuromorphic Computing (NC) has emerged as a key technology in many of today's applications. The main motive of Neuromorphic Computing is to mimic the human brain and its ability to recognize and classify patterns. In this new era of neuromorphic computing, the Spiking Neural Network (SNN) has emerged as the third generation of neural networks. An SNN is more analogous to our nervous system, so it is chosen as an efficient model to mimic the brain. Researchers all over the world are working on SNNs, and since an SNN should perform the same task as the brain, its architecture has been made analogous to the biological nervous system. It consists of the neuron cell body (soma), dendrites, axon and synapses. All these components are controlled, and their states updated, through some learning technique (supervised or unsupervised). Supervised learning, especially backpropagation, is very popular in Convolutional Neural Networks (CNNs), where derivatives are used to update the synaptic weights. In an SNN, however, backpropagation is not very useful because spikes are represented by unit impulses, so the exact derivative of a spike is not defined (though approximate differentiation is possible). As a result, an SNN usually gives somewhat lower accuracy than a CNN or ANN, but its implementation is much more efficient and hardware friendly, because the images are converted into spikes and the presence or absence of a spike is represented by bit-1 or bit-0.
SNNs adopt a popular unsupervised learning technique named Spike-Timing-Dependent Plasticity (STDP). This technique is unsupervised because the obtained output is not compared with a desired output; unlabelled data is given to the network, and learning depends only on the relative timing of pre- and post-synaptic spikes. In this report, the hardware implementation of image classification with an SNN, adopting a new variation of STDP learning, is discussed.
ACTION POTENTIAL OF SNN: - A Spiking Neural Network (SNN) works on the principle of firing once the membrane potential has been raised past a threshold. The leaky integrate-and-fire (LIF) model has been adopted here. The process of spiking according to the LIF model is described below, taking fig. 1 as reference.
• Part-5 represents that, without any stimulus or excitation, the membrane potential of the soma is ~ -70 mV, called the resting potential.
• When a stimulus (for example, input spikes from other neurons) arrives at point 1, the potential of the soma increases from the resting potential.
• If the stimulus continues to arrive, the membrane potential keeps increasing and tries to reach the threshold limit (still a negative value). Once the potential exceeds the threshold, the membrane potential rapidly rises to a high positive value (~ +40 mV), thus producing a spike. This process is called depolarization (part-2). If the stimulus stops arriving before the threshold is reached, the membrane potential starts to decay again and no spike is generated.
• After the depolarization stage comes the repolarization stage (part-3). In this stage, the membrane potential decreases from the positive value back to a negative value.
• The membrane potential crosses the resting potential and decreases further to a more negative value (~ -85 mV). This negative value is called the refractory or minimum potential, and the stage is called hyperpolarization (part-4).
• From the refractory potential, the membrane potential increases again and settles at the resting potential in the steady state. The time from when the membrane potential crosses the resting potential after repolarization to when it again reaches the resting potential is called the refractory period. During this refractory period the neuron does not respond to any incoming stimulus and its membrane potential is not updated.
Fig. 1
The equation for updating the potential according to the leaky integrate-and-fire model in the simplified SNN is as follows: -
V(t) = V(t-1) + Σ_{j=1..n} [Wj · Sj(t)] − D,   if Vmin ≤ V(t-1) ≤ Vth
V(t) = Vresting,                                if V(t-1) ≤ Vmin
V(t) = Vrefractory,                             if V(t-1) ≥ Vth        ----------------- (I)
In the above equation, V(t) is the membrane potential at the present time t, V(t-1) is the previous membrane potential, Wj is the weight of the jth synapse, Sj(t) is the incoming stimulus or spike from other neurons on the jth synapse at time t, D is the voltage leakage factor, Vmin is the minimum membrane potential (refractory potential), Vth is the threshold potential of the neuron, Vresting is the membrane resting potential and Vrefractory is the membrane refractory (minimum) potential.
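As a compact illustration of equation (I), a minimal Python sketch is given below. It is only a software model (the function name update_potential and the use of floating point are our own; the hardware performs the same arithmetic in 24-bit fixed point inside the potential adder), with the constant values taken from Table-1.

    # Minimal software model of the simplified LIF update of equation (I).
    # Constants follow Table-1; floating point is used here for clarity only.
    V_RESTING = 0.0      # resting potential
    V_MIN = -5.0         # minimum / refractory potential (V_refractory = V_min)
    D = 0.75             # voltage leakage factor

    def update_potential(v_prev, weights, spikes, v_th):
        """Return V(t) given V(t-1), the synaptic weights and the input spikes."""
        if v_prev >= v_th:       # the neuron fired: drop to the refractory potential
            return V_MIN
        if v_prev <= V_MIN:      # below the minimum: return to the resting potential
            return V_RESTING
        # otherwise integrate the weighted spikes and subtract the leakage D
        return v_prev + sum(w * s for w, s in zip(weights, spikes)) - D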
NEURAL NETWORKING MODEL AND ARCHITECTURE: - The network structure and its functionality have been illustrated in the flow diagrams below (fig. 2 and 3).
Fig. 2
Fig. 3
Fig. 2 describes the structure of the neural network. The whole project has been carried out using the MNIST dataset as training and testing data. In the MNIST dataset, each image (of a digit) is a 28-by-28 pixel matrix, and each pixel is represented by an 8-bit grayscale value.
This image is convolved with a 5-by-5 image filter or kernel (fig. 4). The convolved image is used to generate the firing rate of the spikes (according to equation 1a) by mapping the image potential values (after convolution). With the generated firing rate, the spike generator generates the spikes, and the input neuron block passes the spike trains to the output neuron block. As there are 784 pixels, there are 784 input neurons, one dedicated to each pixel, so these spikes encode the intensity of each image pixel. The spikes are generated according to a uniform Poisson distribution.
FR = [1 / Tref(min)] · [RF / Vmax],   if RF > 0
FR = 0,                               if RF ≤ 0        ---------------------------- (1a)
Where FR is the firing rate, Tref(min) is the minimum refractory period, RF is the pixel value after convolution with the image kernel, and Vmax is the maximum membrane potential.
-0.5 -0.125 0.125 -0.125 -0.5
-0.125 0.125 0.625 0.125 -0.125
0.125 0.625 1 0.625 0.125
-0.125 0.125 0.625 0.125 -0.125
-0.5 -0.125 0.125 -0.125 -0.5
Fig. 4 (image kernel used for blurring)
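As a rough software picture of this front end (a sketch only: the real logic lives in spike_train.py, which is not reproduced here, and the values assumed for Tref(min) and Vmax below are placeholders, since the report does not list them), the convolution with the fig. 4 kernel, the firing-rate mapping of equation (1a) and the Poisson spike generation could look like this:

    import numpy as np

    # 5-by-5 kernel of fig. 4
    KERNEL = np.array([[-0.5,  -0.125, 0.125, -0.125, -0.5 ],
                       [-0.125, 0.125, 0.625,  0.125, -0.125],
                       [ 0.125, 0.625, 1.0,    0.625,  0.125],
                       [-0.125, 0.125, 0.625,  0.125, -0.125],
                       [-0.5,  -0.125, 0.125, -0.125, -0.5 ]])

    def convolve(img, kernel=KERNEL):
        """Same-size convolution of a 28x28 grayscale image with the 5x5 kernel."""
        pad = kernel.shape[0] // 2
        padded = np.pad(img.astype(float), pad)
        out = np.zeros_like(img, dtype=float)
        for r in range(img.shape[0]):
            for c in range(img.shape[1]):
                out[r, c] = np.sum(padded[r:r + 5, c:c + 5] * kernel)
        return out

    def firing_rate(rf, t_ref_min=5.0, v_max=255.0):
        """Equation (1a); t_ref_min and v_max are placeholder values."""
        return (1.0 / t_ref_min) * (rf / v_max) if rf > 0 else 0.0

    def spike_train(rf, T=150, rng=np.random.default_rng(0)):
        """Poisson spike train of T time units for one convolved pixel value."""
        return (rng.random(T) < firing_rate(rf)).astype(np.uint8)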
Now, these spikes go to the output neurons through the synapses. Synapses are the connections between neurons that allow the electrical signal to travel from one neuron to another. The neuron after a synapse is called the post-synaptic neuron, and the neuron before it is called the pre-synaptic neuron.
These input spikes enter the output neurons. Our model uses 10 output neurons, one for each of the 10 digits in the MNIST dataset. The synaptic weights and incoming spikes are responsible for updating the potentials of the respective output neurons. If an output neuron's membrane potential is greater than the threshold potential, that output neuron spikes; otherwise it does not. A "Winner-Takes-All" (WTA) algorithm has been adopted to resolve competition and reduce noise inside the network. According to this algorithm, only one neuron spikes at a given time stamp and inhibits the other neurons from spiking. Two types of lateral inhibition are possible – Self Lateral Inhibition (SLI) and Mutual Lateral Inhibition (MLI). In SLI, within a single output neuron layer, a Lateral Inhibition (LI) block sorts the potentials of all the output neurons in that layer and compares the neuron holding the maximum potential with the threshold value. If it exceeds the threshold, that particular output neuron spikes. In MLI, there are two output neuron layers (an excitatory layer and an inhibitory layer) mapped in one-to-one fashion. When a single output neuron in the excitatory layer spikes (its potential being greater than the threshold), that spike enters the corresponding neuron in the inhibitory layer. The receiving neuron in the inhibitory layer then sends negative-weight feedback signals to the non-spiking neurons in the excitatory layer, so that their membrane potentials decrease below the threshold. This process repeats for all neurons in the excitatory layer. We have implemented the SLI algorithm, as it requires less hardware than MLI; a small sketch of it follows.
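A minimal Python sketch of the SLI winner-takes-all step (illustrative only; in hardware the LI block does this with a comparator-based sorter, and the function name lateral_inhibition is our own):

    def lateral_inhibition(potentials, v_th):
        """Self Lateral Inhibition: at most one output neuron spikes per time stamp.

        potentials -- membrane potentials of the 10 output neurons
        v_th       -- (variable) threshold potential
        Returns the list of spike bits and the index of the winner (or None).
        """
        spikes = [0] * len(potentials)
        winner = max(range(len(potentials)), key=lambda i: potentials[i])
        if potentials[winner] > v_th:    # only the maximum is compared with the threshold
            spikes[winner] = 1           # the winner spikes, all others are inhibited
            return spikes, winner
        return spikes, None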
Our model uses an unsupervised STDP learning technique, which depends on the relative timing of the pre- and post-synaptic spikes. According to STDP, if the pre-synaptic neuron fires before the post-synaptic neuron, the corresponding synaptic weight increases; if the reverse happens, the synaptic weight decreases. We have taken the STDP window as Δt ∈ [2, 20] time units in both directions. The equations for updating the weights according to STDP are as follows:
Wnew = Wold + σ·ΔW·(Wmax − Wold),   if ΔW ≥ 0
Wnew = Wold + σ·ΔW·(Wold − Wmin),   if ΔW ≤ 0        -------------------------- (II)
Where, ΔW is expressed as:
ΔW = A+ · exp(Δt / τ+),   if 2 ≤ Δt ≤ 20
ΔW = A− · exp(Δt / τ−),   if −20 ≤ Δt ≤ −2
ΔW = 0,                   otherwise        -------------------------------- (III)
In equations (II) and (III), Wnew represents the updated weight, Wold the previous weight, Wmax the maximum weight, Wmin the minimum weight, ΔW the change in weight, σ the learning rate, A+ and A− the coefficients of ΔW in the positive and negative direction respectively, Δt the time difference between the pre- and post-synaptic spikes, and τ+ and τ− the time constants in the positive and negative direction respectively. The STDP curve is shown in fig. 5.
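For reference, a short Python sketch of equations (II) and (III) is given below. It is illustrative only: the values of A+, A−, τ+ and τ− are placeholders (the report does not state them numerically), while σ, Wmax and Wmin are taken from Table-1.

    import math

    SIGMA = 0.0625            # learning rate σ (Table-1)
    W_MAX, W_MIN = 2.0, -1.2  # weight bounds (Table-1)

    def delta_w_continuous(dt, a_plus=0.035, a_minus=-0.025,
                           tau_plus=10.0, tau_minus=10.0):
        """Equation (III): ΔW as a function of Δt (placeholder constants)."""
        if 2 <= dt <= 20:
            return a_plus * math.exp(dt / tau_plus)
        if -20 <= dt <= -2:
            return a_minus * math.exp(dt / tau_minus)
        return 0.0

    def update_weight(w_old, dw):
        """Equation (II): soft-bounded weight update."""
        if dw >= 0:
            return w_old + SIGMA * dw * (W_MAX - w_old)
        return w_old + SIGMA * dw * (w_old - W_MIN)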
Fig. 5.1
Fig. 5.2
Fig. 5.2 depicts the continuous learning curve, i.e. the STDP curve, of the SNN. This curve can be modified into a new variation which is a discrete form of the continuous curve, and we have implemented this discrete STDP in our model. The discrete STDP is more hardware friendly than its continuous counterpart: in the continuous STDP curve, a fine interval of Δt must be used to maintain accuracy, which consumes much more memory and power, whereas in the discrete form coarser time stamps suffice. Here, the step size (δ) of the discrete STDP curve can be adjusted to trade accuracy against hardware cost. We have taken a step size of 1-bit in both directions. The discrete STDP curve (fig. 6) and its mathematical expression are given below:
Fig. 6
The mathematical modelling of ΔW is as follows:
ΔW = Σ_{k=1..19} Ak · [u(Δt − Δt_{k−1}) − u(Δt − Δt_{k})],     if 2 ≤ Δt ≤ 20
ΔW = Σ_{k=1..19} Bk · [u(−Δt − Δt_{k}) − u(−Δt − Δt_{k−1})],   if −20 ≤ Δt ≤ −2
ΔW = 0,                                                        otherwise        --------------- (IV)
In equation (IV), u(Δt) is the unit step function of Δt, Ak − Ak−1 = Bk − Bk−1 = 1-bit (the step size δ) for k = 1, 2, 3, ..., 19, A1 = 0.035 and B1 = −0.025.
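In software, the discrete rule of equation (IV) reduces to a small lookup table, which is what the Count Muxer block stores in hardware. The sketch below is illustrative only: the numerical value of the 1-bit step δ (assumed here to be one LSB of the 12.12 fixed-point fraction, i.e. 2^-12) and the mapping of Δt to the level index k are assumptions, since the report does not spell them out.

    STEP = 2 ** -12              # assumed value of the 1-bit step δ (one fractional LSB)
    A1, B1 = 0.035, -0.025       # first levels of the positive and negative staircases

    def delta_w_discrete(dt):
        """Staircase ΔW of equation (IV); one level per time unit is assumed."""
        if 2 <= dt <= 20:
            k = int(dt) - 1                  # k = 1 .. 19
            return A1 + (k - 1) * STEP       # A_k = A_(k-1) + δ
        if -20 <= dt <= -2:
            k = -int(dt) - 1                 # k = 1 .. 19
            return B1 + (k - 1) * STEP       # B_k = B_(k-1) + δ
        return 0.0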
HARDWARE ARCHITECTURE AND IMPLEMENTATION: - The SNN has been implemented on a Zynq UltraScale+ (xczu7cg-ffvf1517-2-e) FPGA board. The hardware-level hierarchy is given in fig. 7.
Fig. 7
• SYSTEM – It consists of a core, which is the brain of this network.
• CORE – The core is the module that connects the input neuron block, time unit, Δt calculator and output neuron block. It plays the main role in the training procedure. The core takes the training spikes and their corresponding threshold values from memory (in which data is stored from spike_train.py and var_th.py respectively).
• INPUT NEURON BLOCK – This block contains all the input neurons in a single module. Each input neuron is implemented with a buffer-with-counter: the buffer passes the input spike trains to subsequent blocks and the counter counts the ISI (Inter Spike Interval) of the spike trains.
• TIME UNIT – The time unit calculates the time units for the SNN operations and initiates the various blocks and sub-blocks. A counter is implemented here to count the time units for the various operations. This block takes a global input to start the "coring" (training and spike generation) of the image. After coring starts, it starts the input and output neuron blocks after receiving a valid acknowledgement from each of them. It also gives an acknowledgement signal to indicate that coring of the image is complete.
• Δt CALCULATOR – This block calculates the time difference between the post- and pre-synaptic spikes and sends this data to the output neuron block for updating the synaptic weights. There are 784 "Δt calculators" inside this module, each dedicated to evaluating Δt for one input neuron and its corresponding spiking output neuron.
• OUTPUT NEURON BLOCK – This is the main block in this architecture. It is responsible for generating output spikes (on the basis of the membrane potential) and trained weights for reconstruction and testing. Its sub-blocks are: -
a. Lateral Inhibitor (LI): - This block sorts the potentials of all the output neurons to find the maximum, and compares that maximum potential with the threshold. If it exceeds the threshold, that particular neuron spikes and the rest are inhibited. A counter (for state transitions) and a comparator are used to implement this module.
b. Count Muxer: - This block mainly consists of lookup tables for fetching the values of ΔW (in both directions) used to update the weights according to the value of Δt. It calculates the index of the input neuron and, in turn, its corresponding Δt, in order to fetch the value of ΔW in either direction. A memory architecture is designed to implement this logic.
c. STDP: - This is the brain of the output neuron block. It is responsible for training the whole network by updating the synaptic weights. The weight change is performed according to equations II and IV (as discrete STDP has been implemented). Adders, subtractors, multipliers, buffers and shift registers are the main components here.
d. Weight Memory: - This module consists of a RAM architecture for storing the synaptic weights. The initial weights (obtained from the weight_initialization.py python file) are stored here. After full training of the network, the updated weights in this module are taken for reconstruction and testing. There is a feedback connection between the weight memory and the STDP block, which is shown and explained in the next section.
e. Output Neurons: - There are 10 output neurons implemented in hardware. Each output neuron's main task is to generate a spike according to the membrane potential of the corresponding neuron (if the membrane potential is greater than the threshold). The membrane potential is calculated in another module named "potential adder", which is integrated inside each output neuron and controlled by several control signals. The potential adder is implemented with adders and subtractors.
SNN MODEL FLOW CHART IN HARDWARE IMPLEMENTATION: -
The flow chart of the SNN model according to the hardware implementation is shown in fig.
8. This flow chart shows the whole procedure of the SNN.
Fig. 8
The explanation of the flow is as follows: -
a. First, the control box initiates and starts the whole operation.
b. The control box generates two main signals, valid_ips and start_core_img. The former indicates whether the training spikes (coming from the spike_train.py python file) are arriving and valid; the latter initiates the coring (training and spike generation) of the image.
c. Once the core starts, the Time Unit block sends two main signals – start_ip_nub and start_op_nub – to start the input and output neuron blocks respectively, after receiving valid acknowledgements from them through two signals – valid_ip_nub and valid_op_nub.
d. As the input neuron block starts, it generates spikes which enter the output neuron block once that block has started.
e. After the output neuron block starts, the spikes are used to train the network, producing trained weights and output spikes. The learning process is as follows: -
❖ The weight updating (learning) procedure mainly occurs in the STDP block. That block takes the old (previous) weight from the weight memory and updates it. After updating, the new weights generated by the STDP block are stored back in the same locations in the weight memory (in order to update the synaptic weights).
❖ The Δt calculator is the block that takes the input spikes and output spikes to calculate Δt, which is used to compute ΔW. This ΔW is passed to the STDP block to update the weights (according to equations II and IV).
❖ After receiving the input spikes, the potential updater updates the potential of each neuron by reading the weights from the weight memory (according to equation I). The Lateral Inhibitor (LI) block sorts the potentials of the output neurons to find the maximum; when the maximum membrane potential is greater than the threshold, that neuron spikes.
❖ After full training on the images, the trained weights are read from the weight memory and used for reconstruction (through the reconstruct.py python file). After reconstruction, the images are stored in neuron1.png, neuron2.png, ..., neuron10.png.
The block diagram of the whole SNN system along with the core is shown in fig. 9. From this block diagram it is evident that the SNN system contains the core. The core receives its inputs – "input spikes" and "threshold" – from a memory: the two scripts spike_train.py and var_th.py are run to obtain the spikes and their thresholds, and these values are stored into a memory which is then fetched by the core for training. The inputs valid_ips and valid_maxing are given as two control signals to the active-high enable (EN) terminals of two data registers, which pass the spikes and the threshold values to the core respectively when EN is high. valid_ips and valid_maxing also go to the core for further processing, and the input start_core_img is given directly to the core to start it.
Fig. 9
PARAMETERS FOR HARDWARE SIMULATION: - The parameters used for our hardware simulation have been obtained from the parameters.py python file and are given in table-1.
Table-1
Sl. No.  Parameter                          Value
1.       Size of weight bits                24 (12 bits integer, 12 bits fraction)
2.       Size of threshold potential bits   24 (12 bits integer, 12 bits fraction)
3.       Voltage leakage factor (D)         0.75
4.       Refractory period                  30 time units
5.       Resting potential                  0
6.       Minimum or refractory potential    -5
7.       Maximum weight                     2.0
8.       Minimum weight                     -1.2
9.       Learning rate (σ)                  0.0625
10.      Step size (δ)                      1-bit
11.      Epochs                             20
12.      Positive STDP window               20 time units
13.      Negative STDP window               20 time units
14.      Spike train window (T)             150 time units
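As a quick illustration of the 24-bit (12 integer, 12 fraction) fixed-point format used for the weights and thresholds above (a sketch only; the helper names to_fixed and to_real are not part of the project code, and two's-complement storage of negative values is assumed):

    FRAC_BITS = 12                       # 12 bits are reserved for the fraction

    def to_fixed(x):
        """Encode a real value into the 12.12 fixed-point format (as an integer)."""
        return int(round(x * (1 << FRAC_BITS)))

    def to_real(q):
        """Decode a 12.12 fixed-point value back to a real number."""
        return q / (1 << FRAC_BITS)

    # Example: the leakage factor D = 0.75 is stored as 0.75 * 4096 = 3072,
    # and any value is represented to within 2^-12 ≈ 0.000244.
    assert to_fixed(0.75) == 3072
    assert abs(to_real(to_fixed(-1.2)) - (-1.2)) < 2 ** -FRAC_BITS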
The threshold potential has been implemented as a variable quantity. The variable threshold value is generated in the following way: since each spike train (for a particular image) lasts 150 time units, at each time stamp the number of spikes across all the input neurons is counted. This is repeated for all 150 time stamps, and the maximum of these 150 counts, divided by 3, gives the variable threshold. A software sketch of this computation follows.
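A minimal Python sketch of that computation (illustrative only; the actual logic lives in var_th.py, which is not reproduced here):

    import numpy as np

    def variable_threshold(spike_trains):
        """spike_trains: 0/1 array of shape (784, 150) for one image.

        The spikes of all 784 input neurons are counted at each of the 150
        time stamps; the maximum count divided by 3 is the variable threshold.
        """
        counts_per_stamp = spike_trains.sum(axis=0)   # one count per time stamp
        return counts_per_stamp.max() / 3.0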
RESOURCE UTILIZATION: - The resources utilized in the simulation are displayed in table-2.
Table-2
Sl. No.  Resource     Available   Utilized   Utilization (%)
1.       LUTs         230,400     7,794      3.38 %
2.       Flip-Flops   460,800     3,739      0.81 %
3.       Block RAMs   312         10         3.205 %
4.       DSPs         1,728       40         2.315 %
SUMMARY: - In summary, in this project we have made the previous code and architecture bug-free, separated the weight_change and ram_weight blocks from the output neurons, and implemented them outside the output neurons (but inside the output neuron block) as the STDP and WEIGHT MEMORY blocks respectively. The STDP block trains all the output neurons in parallel, and the WEIGHT MEMORY block stores all (784 in our case) synaptic weights connecting to each output neuron. We have also implemented a Δt CALCULATOR module, residing under the CORE module, for calculating the time difference between the pre-synaptic and post-synaptic spikes (for the ΔW calculation). Apart from this, we have also attempted to implement a discrete (step-wise) approximation of continuous STDP learning, with a step size (δ) of 1-bit, in our hardware architecture. After training on some images for a small number of epochs, we obtained output spikes and trained weights from the WEIGHT MEMORY, but we have not yet obtained satisfactory reconstructed images or accuracy. Further modifications and optimizations are required to resolve the remaining problem(s).
---------------------------x-------------------------x-------------------------