A Collective Study on Recent Implementations of
FPGA-based Artificial Neural Network
Abstract—This paper explores some recent applications of FPGA-based ANNs (Field Programmable Gate Array-based Artificial Neural Networks). The ANN is both a research field and a tool in many areas of data processing systems. In a scholarly manner, it describes the organism's brain (the biological nervous system) in terms of mathematical models and nonlinear processing units. Even though an ANN can be programmed in software, a hardware implementation can give more accurate modeling of the ANN, as it reveals the inherent parallelism embedded in ANN dynamics. The FPGA is one of the hardware implementations of the ANN; it provides a reconfigurable, programmable electronic platform and offers robust flexibility. Based on this study, this paper covers different applications (biomedicine, robotics, neuromorphic computation, analog circuit simulators…) and presents their merits and demerits.
Keywords—Artificial Neural Network, Computing, FPGA,
Neuromorphic, Parallelism, Resistive Switching.
I. INTRODUCTION
An Artificial Neural Network (ANN), a biologically inspired nonparametric and nonlinear artificial intelligence algorithm, performs processing and learning, training, classification, simulation, pattern recognition, prediction, recalling, and hybrid forecasting of information. Inspired by the biological nervous system, it models a complex system as a parallel and distributed network of simple nonlinear processing units [1]–[3].
Hardware (HW) implementation of ANNs is preferable to its software counterpart in execution time and execution pattern. The latter consumes more time to simulate an ANN (speed constraints) as its size becomes large, and it executes the ANN in a sequential pattern (in series). Implementing the ANN at the circuit level allows for higher-speed computation and a parallel execution pattern. An ANN consists of input/hidden/output layers (analogous to neurons, ~10^11) and weighted connections among them (analogous to synapses, ~10^15); the former denote the nodes, while the latter indicate the weights. Any node between the input and output layers belongs to a hidden layer (built of artificial neurons), where the nonlinear mapping between input nodes and output nodes is done. At the analog circuit level, sensors convert the inputs into a form suitable for processing; the converted inputs are then weighted and summed (a linear combination) and passed through a nonlinear activation function to produce the output. The weighted sum can be computed using either a multiplier and summer or a weighted-summer opamp. Another adder adds the output of the weighted summer to the bias, which increases or decreases the net input of the activation function [4]–[5]. The activation function is the most important, expensive, and difficult part of any hardware implementation of an ANN. It is popularly represented by the sigmoid function, which is applied to the output of the linear combination of input signals; it takes the weighted sum and decides whether or not to activate a specific neuron. Recently, memristive crossbar arrays have become a good model for the weights [6], and ANN analog computation can be achieved using the memristor as in [7].
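The neuron computation just described (weighted sum, bias, sigmoid) amounts to the following sketch (values are illustrative; the surveyed papers realize this with analog or FPGA hardware rather than software):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of the inputs plus the bias,
    passed through the sigmoid activation function."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias  # linear combination
    return 1.0 / (1.0 + math.exp(-net))                       # sigmoid activation

# zero weights and bias give a net input of 0, so the sigmoid returns 0.5
print(neuron([0.3, -1.2], [0.0, 0.0], 0.0))  # → 0.5
```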
VLSI CMOS technology can offer the nonlinearity characteristic, at the cost of inaccurate computation, thermal drift, lack of flexibility (non-reconfigurable and non-reprogrammable), and limited scaling. An ASIC can realize a compact parallel architecture for the ANN and provide high-speed performance; however, it is expensive, computes inaccurately, and lacks design flexibility. Microprocessors and DSPs do not support parallel designs. To support parallelism, the FPGA is used, being a good candidate for reconfiguration and flexibility in designing ANNs. It allows repeated iterative learning of the processed information with modified weights and a reconfigurable structure. In addition, it has a more compact density with lower cost and a shorter design cycle. An FPGA-based ANN can offer good modularity and dynamic adaptation for neural computation. The most important feature of the FPGA is parallel computing, which matches the architecture of the ANN well [8]–[9]. Generally, in any parallel computing, one challenge is finding the best way to map applications onto the HW. More specifically, in an FPGA the basic computing cells have rigid interconnection structures, and for a large ANN the FPGA needs more HW resources (an increased number of units such as multipliers and activation functions).
There are several types of ANN for various problems in the literature, and the operation of an ANN can be divided into two phases. The first phase is the learning process (training process, or off-line phase), in which the ANN learns the data set obtained from the system model; it includes diverse optimization algorithms to decide the values of the weights and the bias. The second phase is the recalling process (prediction process, or real-time phase), where the optimized values of the artificial neurons are verified using the test dataset, which differs from the training dataset, so the ANN may encounter data it has not seen during training.
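A minimal software sketch of the two phases, assuming the classic perceptron rule and the AND function as a stand-in system model (the surveyed works use richer algorithms such as back propagation):

```python
def train_perceptron(data, epochs=100, lr=0.1):
    """Learning phase (off-line): decide the weights and the bias from
    the training set with the perceptron update rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out                      # 0 when already correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def recall(w, b, x):
    """Recalling phase (real-time): apply the frozen weights to new data."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# learn the AND function off-line, then recall on the test inputs
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([recall(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```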
In this brief, modern miscellaneous implementations of ANNs on FPGAs are reviewed. It will be shown that different FPGA families allow prompt prototyping of several ANN architectures and implementation approaches. The design frameworks of these FPGAs can be compared with respect to total time response, precision, total area (CLBs), maximum clock frequency, weight updates per second, and HW resource utilization. The organization of this paper is as follows: Section II surveys relevant previous works; Section III discusses and reports the results of the different studies; finally, Section IV concludes the paper.
II. FPGA-BASED HW IMPLEMENTATION OF ANNS
In this section, case studies of FPGA-based ANN applications are presented, targeted at two categories: i. applications (biomedicine, sensor networks, magnetism, power systems, security …) and ii. emulated modeling (memristor modeling, power amplifiers, chaotic circuits …).
A. Applications based on ANN implemented on FPGA
The FPGA can be applied to a widespread range of applications and can be part of a data acquisition system. In [10], [12], [24]–[25], [30]–[31], and [40], the systems are modeled using ANNs and are suited to medicine/biomedicine applications. [10] deals with ECG anomalies and designs a system, implemented on an FPGA, to detect such arrhythmia. The signal records are taken from the MIT-BIH arrhythmia database, and a Multi-Layer Perceptron (MLP) ANN architecture is trained using the resilient back propagation (RPROP) algorithm. The activation functions used are a piecewise linearly approximated hyperbolic tangent function (for the hidden layer) and the hyperbolic tangent function (for the hidden layer and output node), which reduce computation time and complexity and have a simple form. Various data types and different numbers of input/hidden layers were tested to get suitable FPGA performance. It was shown that a 12-6-2 MLP with 24-bit data classifies the records with an accuracy of 99.82% (the best accuracy among the tested configurations).
In [12], identification of human blood based on image processing is modeled using a feedforward neural network. This is helpful when the blood sample count is large, since the conventional method may take a long time for large samples. The back propagation algorithm is deployed for the training process, and the sigmoid function is used as the activation function. The FPGA achieves 97.5% accuracy at 80×80 pixel resolution. Positron Emission Tomography (PET) is a scan imaging technique used to detect cancer and support its staging.
Neural inverse optimal control for glucose level estimation on an FPGA platform is carried out in [24], and an Extended Kalman Filter (EKF)-trained recurrent high-order neural network (RHONN) is chosen as the neural identifier architecture. The EKF is selected as the training algorithm because it yields the minimum error for the optimized weight values. A 16-bit fixed-point representation is used, and the total power consumption of the system is 142.18 mW.
In [25], such a system is modeled using an ANN operating in real time to process triple coincidences (triplets) in a PET scanner by identifying the true line of response (LOR). The operating frequency was 50 MHz, and the pipelined ANN model processes the triplets without exceeding 6000 FPGA slices. The neurons are activated using the hyperbolic tangent activation function in a 6-2-1 layer arrangement (10, 5, and 1 neurons, respectively). The LOR selection accuracy is 97.99%, indicating a precise selection.
Literature [30] presents a comparative study of two ANN models: a 32-bit floating-point FPGA-based model activated by the sigmoid activation function, and a 16-bit fixed-point FPGA-based model activated by a piecewise linear sigmoid (PLS) activation function. Each model has one input, one hidden, and one output layer, with eight input neurons, two hidden neurons, and one output neuron. The first model gives 97.66% accuracy and the second reaches 96.54%; nevertheless, the resource utilization of the second model is lower than that of the first. The classification times are 1.07 μs and 1.01 μs for the first and second models, respectively.
An embedded system-on-chip for Atrial Fibrillation (AFIB) detection in heart screening is presented in [31]. The detection flow starts with pre-processing units, including a bandpass filter cascaded with a stationary wavelet transform; then feature extraction is done, and it ends with an ANN to classify the AFIB pattern. Overall, this detection system achieves an accuracy of 95.3% while using 40,830 logic elements.
Biomedical imaging techniques need to be designed accurately to characterize tissue types (normal or abnormal). One such technique is intravascular optical coherence tomography (OCT), covered in [40] for heart diseases. A feed forward ANN is used after relevant information is collected from the image (via feature extraction): the ANN classifies these image features, having been trained to distinguish them, and is then implemented on an FPGA platform. Operating at 180 MHz, the ANN finishes within 0.7 s of real-time classification.
Literature [27] proposes a design of an FPGA-based multilayer feed forward ANN using a SOC (system-on-chip) design approach to take advantage of hardware reuse and sharing, which obviously reduces on-chip area. The FFNN architecture is based on the hyperbolic tangent activation function and the back propagation training algorithm with a 16-bit representation. The activation function is approximated by a piecewise linear function to allow a combinational-logic-only implementation.
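Piecewise linear approximation is a standard way to avoid the exponential in FPGA logic. As one concrete, widely cited scheme (the PLAN approximation of the logistic sigmoid; not necessarily the exact function used in [27], which approximates the hyperbolic tangent):

```python
def plan_sigmoid(x):
    """PLAN piecewise linear approximation of the logistic sigmoid.
    All slopes are powers of two, so in hardware the multiplications
    reduce to bit shifts; maximum absolute error is about 0.019."""
    a = abs(x)
    if a >= 5.0:
        y = 1.0
    elif a >= 2.375:
        y = 0.03125 * a + 0.84375   # slope 2^-5
    elif a >= 1.0:
        y = 0.125 * a + 0.625       # slope 2^-3
    else:
        y = 0.25 * a + 0.5          # slope 2^-2
    return y if x >= 0 else 1.0 - y  # symmetry: s(-x) = 1 - s(x)
```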
For detection and recognition purposes, the works in [13], [15], [22]–[23], [26], [32], [35], [37]–[39], and [42] are modeled using different ANN topologies, each targeting a specific application. An ANN on an FPGA for forest fire detection is presented in [15]. Five sensors serve as input neurons to detect the firestorm, with sigmoid activation in the eight feed forward hidden layers. A wireless sensor network (WSN) with an ANN gives better detection and reduces the delay compared to the WSN alone. The maximum operating frequency of the used FPGA (Virtex-5 Altera 6.4a starter) is 604.27 MHz.
Literature [22] shows detection of faults in automotive systems, i.e., fault diagnosis in a diesel engine, through an FPGA-accelerated MLP ANN. The activation function used is the sigmoid, since it needs fewer nodes to map the relationship. The hidden layer neurons are trained with the back propagation algorithm applied offline. The effective area on the Spartan-6 XCSLX150 is 60,183 for a 1/6 input layer (8 neurons) and six hidden layers, computed as the number of DSPs times 512 plus the number of look-up tables (LUTs).
An end-user programming platform [23] is used to facilitate interfacing with the FPGA for an ANN model of an e-Nose system that distinguishes the odor of four coffee types. The feed forward ANN is trained using the back propagation algorithm and implemented on a Virtex-4 FPGA operating at a 122.489 MHz frequency for a 1-1-1 FFANN layer arrangement. The end-user programming platform relieves the designer from knowing low-level hardware description languages (HDLs); this is achieved through a built-in, collaborative graphical user interface (GUI) that keeps prototyping rapid.
In [26], an ANN-based Trax solver with a 64×64 square playing board area was developed. The ANN was trained offline using the back propagation algorithm. The choice of weights, combined with a binary activation function, considerably reduced the digital logic circuit design. The design was implemented on a 40-nm process FPGA (Arria II GX; Altera Corp.) and was capable of operating at a clock frequency of 75 MHz.
An approach to autonomous robot navigation is presented in [32] as an efficient execution of a feed forward ANN, trained on 3000 samples using the back propagation algorithm. The goal of the design is to obtain the shortest and safest path for the robot, avoiding obstacles, with the ANN as the navigation technique. The ANN is implemented on a Xilinx Virtex-II Pro FPGA. The results were obtained at a 357.5 MHz clock frequency, which compares well with previous works.
Speech signal recognition depends on gender recognition and emotional recognition. The speech signal in [35] is recognized and classified using an FPGA-based ANN trained with the back propagation algorithm. This work shows that using an ANN as the classifier achieves better results than the existing classifier, Latent Dirichlet Allocation (LDA), in terms of the logic elements used and the processing time.

For hardware implementation of ANNs in recognition applications, the data type is critical: fixed-point representation can improve execution performance in the feed forward computation of the network. In [37], two different recognition tasks are used: the MNIST handwritten digit recognition benchmark and a phoneme recognition task on the TIMIT corpus. The new contribution of this work was mapping the ANN onto FPGA hardware using fixed point without the need for external DRAM memory. Back propagation is used as the training algorithm, while the unsupervised greedy RBM (Restricted Boltzmann Machine) algorithm is applied during learning for digit classification.
Literature [13] is also directed toward MNIST handwritten digit recognition (28×28 pixel images) using an FPGA-based multi-layer feed forward ANN. The Coordinate Rotation Digital Computer (CORDIC) algorithm approximates the activation function (the sigmoid function). The data type is the 32-bit single-precision floating-point standard, used for better hardware accuracy than fixed point, since fixed-point operation suffers from truncation error.
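As an illustration of the CORDIC idea (a floating-point sketch; word lengths, iteration counts, and the exact variant used in [13] are assumptions here): hyperbolic CORDIC in rotation mode needs only shifts, adds, and a small atanh table; the scale factor cancels when tanh is taken as the ratio y/x, and the sigmoid follows from sigmoid(v) = (1 + tanh(v/2)) / 2.

```python
import math

def cordic_tanh(z, n_iter=20):
    """Hyperbolic CORDIC in rotation mode (valid for |z| < ~1.1).
    Each step uses only shifts, adds, and a table of atanh(2^-i)."""
    x, y = 1.0, 0.0
    i, done = 1, 0
    while done < n_iter:
        # indices 4 and 13 must be executed twice for convergence
        for _ in range(2 if i in (4, 13) else 1):
            d = 1.0 if z >= 0 else -1.0           # rotate toward z = 0
            x, y = x + d * y * 2.0**-i, y + d * x * 2.0**-i
            z -= d * math.atanh(2.0**-i)
            done += 1
        i += 1
    return y / x  # sinh/cosh; the CORDIC gain cancels in the ratio

def cordic_sigmoid(v):
    # sigmoid via the tanh identity; valid for |v| < ~2.2
    return 0.5 * (1.0 + cordic_tanh(0.5 * v))
```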
MNIST handwritten digit recognition (28×28 pixel images = 784 pixels) is also visited in [42], in which two recognition systems are tested: the first applies feature extraction (Principal Component Analysis (PCA)) to map the complexity of the input image data to a lower-dimensional space before pattern recognition with the ANN, while the second straightforwardly uses the ANN to recognize the input image. The first system is preferred in an ANN environment with limited hardware computing and memory resources; with the development of artificial intelligence, the second system emerges as a good candidate for multiple hidden layers. Electing the second system, literature [42] works on two schemes. One is called the multi-hardware-layer ANN (MHL), which implements all computing layers (one input layer, one hidden layer, and one output layer) on multiple hardware units, achieving better performance (each layer on separate hardware). The second scheme is the single-hardware-layer ANN (SHL), which implements a single layer in hardware, allowing better hardware sharing and thus better area utilization (one input layer, multiple hidden layers, and one output layer run on the same hardware). A control unit is needed to steer the forward computation by multiplexing the proper weights and correct inputs. Each computing neuron in the hidden layer consists of an accumulating multiplier, a ROM storage unit, and the logistic sigmoid activation function. Setting up the experiment with the MNIST database, [42] shows that the customized SHL ANN and its scalability (different numbers of hidden layers) form an efficient hardware scheme with respect to storage resources, supporting 16-bit half-precision floating-point representation.
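The SHL scheme described above can be sketched in software (illustrative names only; the actual design is an FPGA datapath with a control unit and ROM-stored weights):

```python
import math

def layer_engine(x, W, b):
    """One 'hardware' layer: a multiply-accumulate per neuron
    followed by the logistic sigmoid activation."""
    return [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + bi)))
            for row, bi in zip(W, b)]

def shl_forward(x, weight_roms, bias_roms):
    """Single-hardware-layer (SHL) scheme: the control unit reuses the same
    engine, multiplexing in each layer's weights and biases from ROM in turn."""
    for W, b in zip(weight_roms, bias_roms):
        x = layer_engine(x, W, b)   # output of one pass feeds the next
    return x
```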
Currently, data mining with parallel computational intelligence provides better solutions for cloud databases. One emerging issue is choosing a model that matches the requirements (such as unsupervised learning and dimensionality compression). The model suggested in [38] is the self-organizing map (SOM) ANN, whose principle rests on data compression. To achieve accurate hardware realization on an FPGA with fixed-point representation, the conscience SOM (CSOM) is used to offer optimized entropy well suited to high-dimensionality data structures and implementable on different processing platforms.
Another SOM ANN is proposed in [39], supporting on-line training for video categorization in autonomous surveillance systems. The on-chip FPGA hardware is accomplished with high speed, compact parallelism, and low power consumption; it enables a flexible implementation and works well under continuous input data flow. Two SOMs are tested experimentally: one for a 2D data stream application (telecommunications: Quadrature Amplitude Modulation identification, operating at a maximum frequency of 2.38 MHz) and the other for a 3D data stream application (real-time video compression, operating at a maximum frequency of 1.51 MHz).
References [16] to [20] focus on the detection of air showers generated by ultra-high-energy cosmic rays (10^18–10^20 eV), which can be observed from the Pierre Auger Observatory, located in western Argentina and considered the world's largest cosmic ray observatory. Inclined air showers contain neutrinos whose charged and neutral currents build weak interactions with other particles. Neutrinos are a fingerprint of inclined air showers in the atmosphere, featuring a very small interaction cross section and sensitive detection at high zenith angles. Therefore, the surface detector for neutrino-induced air showers requires accurate pattern recognition, which can be achieved via an ANN as done in [16]–[17]. The FPGA triggers based on an 8-6-1 ANN in [16]–[17], a 12-8-1 ANN in [18], and a 12-10-1 ANN in [19]–[20] were trained using the Levenberg-Marquardt algorithm, and the tests reveal results adequate for the front-end boards in the Pierre Auger surface detector system.
Power systems are among the systems that need accurate modeling to achieve the desired optimized results efficiently. In [33], a compact virtual anemometer for MPPT (maximum power point tracking) of wind turbines (horizontal-axis and vertical-axis) is implemented on an FPGA. This system is based on a Growing Neural Gas (GNG) ANN, which provides a good trade-off among accuracy, resource utilization, computational speed, and implementation complexity. This ANN is a special type of self-supervised neural network that adds new neurons (units) progressively. The maximum number of units is selected considering the nature of the data and the size of the required area.
The literature [36] suggests a new smart sensor for on-line detection and classification of power quality (PQ) and power quality disturbances (PQD) using some basic power sensors, monitoring the electrical installations in a non-disruptive approach and using HOS (high-order statistics: first order: mean; second order: variance; third order: skewness; fourth order: kurtosis) processing. This FPGA-based smart sensor can act as a waveform analyzer (a device for power quality monitoring and measurement). In addition, it has a precise PQD classifier, able to classify different types of single PQD and some combinations of PQD. The HOS processing adds a distinguishing feature to the system, as it uses very few FPGA resources, compatible with the development of high-performance signal processing techniques for power system smart sensors. Simplicity, low cost, and on-line testing for three-phase power systems are the main features of the proposed sensor. The activation function for the hidden-layer FFNN is the log-sigmoid function, and the networks are trained with the Levenberg-Marquardt algorithm.
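As a plain sketch of the four HOS features listed above, computed per analysis window (the smart sensor computes these in fixed-point FPGA hardware; this software form is only illustrative):

```python
def hos_features(window):
    """First- to fourth-order statistics of one signal window:
    mean, variance, skewness, kurtosis."""
    n = len(window)
    mean = sum(window) / n                                  # 1st order
    var = sum((s - mean) ** 2 for s in window) / n          # 2nd order
    std = var ** 0.5
    skew = sum(((s - mean) / std) ** 3 for s in window) / n if std else 0.0
    kurt = sum(((s - mean) / std) ** 4 for s in window) / n if std else 0.0
    return mean, var, skew, kurt
```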
B. Analog circuits simulators using FPGA-based hardware
implementation of ANNs
Various techniques are utilized to design models of analog circuits such as memristors, power amplifiers, and chaotic circuits. The selected case studies are [11], [14], [21], [28]–[29], [34], and [41]. In [11], a memristor HW simulator is used along with weighted summation (KCL) electronic modules as basic blocks to implement ANN hardware on an FPGA for a Single-Layer Perceptron (SLP). The KCL module acts as the weighted summation function that updates the weighted values. Spike-timing-dependent plasticity (STDP) unsupervised spatiotemporal pattern learning is chosen because it facilitates correct extraction of the input pattern for the ANN. The FPGA implementation is done for several input-neuron counts; for eight input neurons the design takes 306 total logic elements and 3751 total registers.
A chaotic generator is a good candidate for capturing the chaotic behavior of brain activity, since EEG (electroencephalogram) signal processing is realized as a stochastic process. The chaotic system considered in [14] is the Hénon map, and the model chosen for biological brain activity is an artificial neural network with an MLP-BP topology. The 3-input, 4-hidden-neuron ANN models of the chaotic generator are mapped onto FPGA hardware with fixed-point representation. The FPGA hardware operates at a maximum frequency of 35.15 MHz while consuming 0.182 W of on-chip power.
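For reference, the Hénon map that the network in [14] learns to reproduce can be iterated directly (the classical parameters a = 1.4, b = 0.3 are assumed here; [14] may use different values):

```python
def henon(n, a=1.4, b=0.3, x=0.0, y=0.0):
    """Iterate the Henon map n times and return the trajectory
    that an ANN-based chaotic generator would be trained to mimic."""
    traj = []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x   # simultaneous update
        traj.append((x, y))
    return traj
```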
Also, in [21] the author designs a Lorenz chaotic generator using an ANN model implemented on an FPGA for secure communication. The ANN layers are trained using three different algorithms: Levenberg-Marquardt, Bayesian Regularization, and Scaled Conjugate Gradient. One hidden layer is used with 1 to 16 neurons, where the minimal number of hidden neurons must fit the precision requirement. The three training algorithms are therefore used to find the optimal number of hidden neurons for the FPGA hardware implementation, which turned out to be 8. The sigmoid activation function is used for the neurons in the hidden layer, while the ramp function is used as the activation function for the output layer.
The literature [28] presents the Pehlivan-Uyaroglu Chaotic System (PUCS), a novel approach modeled using a 3-8-3 Feed Forward Neural Network (FFNN) trained by the back propagation algorithm. The FFNN-based PUCS is trained offline in the Matlab environment using the Neural Network Toolbox. The hardware implementation is then done on an FPGA operating at a maximum frequency of 266.429 MHz, with the weights and biases written into VHDL code in 32-bit single-precision floating point.
In [29], another chaotic system, the MVPDOC (Modified Van der Pol-Duffing Oscillator Circuit), is designed and mapped to an FPGA. The modeled system is based on wavelet decomposition and an MLP ANN trained by the Levenberg-Marquardt (LM) back propagation algorithm. The implemented resources show good utilization in terms of logic elements, making the hardware MVPDOC system suitable for use in any nonlinear dynamic chaotic system.
The inverse characteristics of a power amplifier (a GaN class-F power amplifier driven by an LTE signal centered at 2 GHz) are modeled using a NARX neural network under the requirement of AM/AM and AM/PM characteristics [34]. NARX is a type of recurrent neural network and can linearize microwave power amplifiers using the digital pre-distortion (DPD) method, which is based on modifying the baseband signal according to the inverse function of the power amplifier. The FPGA implementation, written in Verilog, operates at a 95.511 MHz maximum frequency.
Similarly, [41] models the behavior of RF power amplifiers using an MLP ANN trained with the back propagation algorithm, considering two aspects: nonlinearity effects and memory effects. AM/AM and AM/PM characteristics demonstrate the model's accuracy. Implementing the model on a DSP-FPGA kit shows low complexity and high processing speed for the behavioral model of RF power amplifiers.
III. DISCUSSION OF THE ANN FEATURES AND FPGA
RESOURCES IN THE PRESENTED CASE STUDIES
FPGA-based reconfigurable computing architectures are well suited to implementing ANNs, as one can rapidly achieve the configurability needed to adjust the weights and topologies of an ANN, which helps fast prototyping [43]. The density of FPGA reconfiguration allows high numbers of elements to reach the proper functionality within the unit chip area. Further, an ANN needs to be trained using an off-line learning/training algorithm and then adapted through the recalling phase. These features work together to customize the topology of the ANN and its computational accuracy.
Table I and Table II summarize the main features of the ANNs and the FPGA hardware resources, respectively, for the 32 collected studies published from 2014 to 2018 (4 in 2014, 7 in 2015, 5 in 2016, 13 in 2017, and 3 in 2018). Of these, 23 were published as conference papers, 8 as journal articles, and one as a book (outstanding PhD research).
A. Results Analysis of ANN Features
ANNs have found many practical applications and implementations in medicine/biomedicine [10], [12], [24]–[25], [30]–[31], [40], analog circuit simulators [11], [14], [21], [28]–[29], [34], [41], pattern detection and recognition [13], [15], [22]–[23], [26]–[27], [32], [35], [37]–[39], air showers [16]–[20], and power systems [33], [36]. ANN structural design requires a massive amount of parallel computing and storage resources; thus it demands parallel computing devices such as FPGAs.
Data representation plays a significant role in implementing an ANN on FPGA hardware. The choice of data type format for the weights and the activation functions matters to the recognition rate and the performance, and numerous data type representations can be considered, such as fixed-point [10], [14], [21], [24]–[25], [29], [33], [37]–[39], floating-point [10], [13], [22], [28], [34], or integer/binary representations. Fixed-point and integer/binary representations can reach improved execution performance in the forward computation of the network, but it is very difficult to train a deep neural network (many hidden layers) for recognition applications and then map the optimized weights onto FPGA hardware. A multilayered ANN refers to three or more layers: one input layer, one or more hidden layer(s), and one output layer [25]. As for the accuracy reduction of the FPGA hardware, it is primarily caused by the truncation error of fixed-point operation. Some works compare fixed-point with floating-point operations to gauge the effect of round-off errors, and the CSOM can be used for fixed-point selection as in [38]. In contrast, the floating-point data type format can make the ANN training process in software easier, and it can possibly give suitable recognition accuracy and execution performance. It has been shown that reduced-precision floating-point representation is a candidate for hardware realization of the ANN on an FPGA [42].
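The truncation effect mentioned above can be made concrete with a small sketch (a 16-bit Q8.8 split is assumed purely for illustration; the surveyed works use various word lengths):

```python
import math

def quantize(value, frac_bits=8, total_bits=16):
    """Truncate a real value to signed fixed-point, as an FPGA datapath
    would store it, and return the value that representation denotes."""
    scale = 1 << frac_bits
    q = math.floor(value * scale)                     # truncation, not rounding
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    q = max(lo, min(hi, q))                           # saturate on overflow
    return q / scale

w = 0.7071
print(w - quantize(w))  # truncation error, bounded by 2^-8
```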
B. Results Analysis of FPGA HW Resources
Different types of FPGA technology platforms are used
in the selected literature. The two main companies are Xilinx
and Intel/Altera. Each one has several FPGA families
operating on various frequencies with miscellaneous
available resources.
IV. SUMMARY
Several FPGA-based ANNs targeted towards different
applications are discussed in this brief. FPGA hardware
implementations make ANN more convenient to be realized
and reconfigurable. Choosing the FPGA technology platform
depends on available resources, ANN topology, and data
type representation.
REFERENCES
[1] V. Ntinas, et al., IEEE Trans. Neural Networks Learning Systems,
2018.
[2] D. Korkmaz, et al., AWERProcedia Information Technology &
Computer Science, 2013, pp. 342–348.
[3] I. Li, et al., In IEEE Inter. SoC Design Conf. (ISOCC), 2016, pp. 297–
298.
[4] M. Alçın, et al., Optik, vol. 127, pp. 5500–5505, 2016.
[5] L. Gatet, et al, IEEE Sensors Journal, vol. 8, no. 8, pp. 1413–1421,
2008.
[6] O. Krestinskaya, et al., In IEEE Inter. Symp. Circ. Syst. (ISCAS), 2018,
pp. 1–5.
[7] L. Hu, et al., Adv. Mater., 1705914, 2018.
[8] Z. Hajduk, Neurocomputing, vol. 247, pp. 59–61, 2017.
[9] R. Tuntas, et al., Applied Soft Computing, vol. 35, pp. 237–246, 2015.
[10] M. Wess, et. al., in 2017 IEEE International Symposium on Circuits
and Systems (ISCAS), 2017, pp. 1–4.
[11] Ntinas et al., IEEE TNNLS, 2018.
[12] D. Darlis, et. al., In IEEE 2018 Inter. Conf. Signals and Systems
(ICSigSys), 2018, pp. 142–145.
[13] Z. F. Li, et al., In IEEE 2017 Inter. Conf. Electron Devices Solid-
State Circuits (EDSSC), 2017, pp. 1–2.
[14] L. Zhang, In 2017 IEEE XXIV Inter. Conf. Electron. Electrical
Engineering Computing (INTERCON), 2017, pp. 1–4.
[15] S. Anand, et. al., In 2017 2nd Inter. Conf. Comput. Commun.
Technologies (ICCCT), 2017, pp. 265–270.
[16] Z. Szadkowski, et. al., IEEE Trans. Nucl. Sci., vol. 64, no. 6, pp.
1271–1281, 2017.
[17] Z. Szadkowski, et. al., In IEEE Progress in Electromag. Research
Symp. (PIERS), 2016, pp. 1517–1521.
[18] Z. Szadkowski, et. al., In 2015 4th IEEE Inter. Conf. Advance.
Nuclear Instrum. Measurement Methods their Applic. (ANIMMA),
2015, pp. 1–8.
[19] Z. Szadkowski, et. al, In 2015 IEEE Federated Conf. Comput. Science
Inform. Systems (FedCSIS), 2015, pp. 693–700.
[20] Z. Szadkowski, et al., IEEE Trans. Nuclear Science, vol. 62, no. 3,
pp. 1002–1009, 2015.
[21] L. Zhang, 2017 IEEE 30th Canadian Conf. Electr. Comput. Eng.
(CCECE), 2017, pp. 1–4.
[22] S. Shreejith, et al. in IEEE Design, Automation & Test in Europe
Conference & Exhibition (DATE), 2016, pp. 37–42.
[23] A. Tisan, et. al., IEEE Trans. Indus. Informatics, vol. 12, no. 3, 2016.
[24] J. C. Romero-Aragon, et. al., In 2014 IEEE Symp. Computat. Intellig.
Control Automat. (CICA), 2014, pp. 1–7.
[25] C. Geoffroy, et. al., IEEE Trans. Nuclear Science, vol. 62, no. 3, 2015.
[26] T. Fujimori, et al., 2015 IEEE Inter. Conf. Field Programmable
Technology (FPT), 2015, pp. 260–263.
[27] R. Biradar, et. al., In IEEE Inter. Conf. Cogn.Comput. Inform. Process.
(CCIP), 2015, pp. 1–6.
[28] M. Alçın, et al., Optik, vol. 127, pp. 5500–5505, 2016.
[29] R. Tuntas, Applied Soft Computing, vol. 35, pp. 237–246, 2015.
[30] A. T. ÖZDEMİR, et. al., Turkish J. Electr. Eng. Comput. Sciences,
vol. 23, no. Sup. 1, 2089-2106, 2015.
[31] H. W. Lim, et al. , ISOCC 2017, 2017, pp. 90–91.
[32] N. Aamer, et. al., in Proceedings of the 2nd International Conference
on Communication and Electronics Systems (ICCES 2017), 2017, pp.
935–942.
[33] A. Accetta, In 2017 IEEE 26th Inter. Symp. Industrial Electron. (ISIE),
2017, pp. 926–933.
[34] J. A. Renteria-Cedano, et al, In IEEE 57th Inter. Midwest Sympo. in
Circuits and Systems (MWSCAS), 2014, pp. 209–212.
[35] B. Rajasekhar, et. al., In 2017 3rd International Conference on
Biosignals, images and instrumentation (ICBSII), 2017, pp. 1–6.
[36] G. D. J. Martinez-Figueroa, et. al., IEEE Access, vol. 5, pp. 14259–
14274, 2017.
[37] J. Park, et. al. in IEEE Inter Conf. Acoustics Speech Signal Process.
(ICASSP), 2016, pp. 1011–1015.
[38] J. Lachmair, et. al., In IEEE 2017 Inter. Joint Conf. Neural Networks
(IJCNN), 2017, pp. 4299–4308.
[39] M. A. A. de Sousa, et. al., In IEEE Inter. Joint Conf. Neural Networks
(IJCNN), 2017, pp. 3930–3937.
[40] P. Antonik, Springer Theses, Recognizing Outstanding Ph.D.
Research, 2018.
[41] J. C. Núñez-Perez, et. al., In Inter. Conf. Electron. Commun. Comput.
(CONIELECOMP), 2014, pp. 237–242.
[42] H. M. Vu, et. al., In: V. Bhateja , B. Nguyen, N. Nguyen, S. Satapathy,
DN. Le (eds), Information Systems Design and Intelligent
Applications. Advances in Intelligent Systems and Computing, vol
672. Springer, Singapore, 2018.
[43] J. Zhu, et. al., In Inter. Conf. Field Programmable Logic Applicat,
Springer, Berlin, Heidelberg, 2003, pp. 1062–1066.
TABLE I. ARTIFICIAL NEURAL NETWORK PROPERTIES FOR THE LITERATURE REVIEW

Work/Year | ANN type | Weighted summation (WS)/Activation function (AF) | # Input/hidden layers (neurons) | Data type | Test data set | Training algorithm (TA)/Learning algorithm (LA) | Application
[10]/2017 | Multi-Layer Perceptron (MLP) | AF: piecewise linearly approximated hyperbolic tangent (hidden layer) / hyperbolic tangent Tanh (hidden layer and output node) | 8/6 layers | a. 12/16/24-bit fixed point; b. floating point | 104 | TA: resilient backpropagation (RPROP) | ECG anomaly detection
[11]/2018 | Single-Layer Perceptron (SLP) | WS: KCL computation | 1/1 layer (8 neurons/–) | 14-bit | – | LA: spike-timing-dependent plasticity (STDP), unsupervised spatiotemporal pattern learning | Memristor simulator
[12]/2018 | Feed-forward back-propagation (FFBP) | AF: sigmoid function | 1/1 layer | – | 40 | TA: back propagation (BP) | Human blood identification device
[13]/2017 | Multilayer feed-forward | AF: logistic sigmoid function with COordinate Rotation DIgital Computer (CORDIC) algorithm | –/1 layer (–/300 neurons) | 32-bit single-precision floating point | 60000 | – | MNIST handwritten digit recognition
[14]/2017 | MLP-BP topology | AF: bipolar sigmoid (hidden neurons) and ramp (output neurons) | 1/1 layer (3 neurons/4 neurons) | fixed point | 6000 | TA: back propagation (BP) | Brain research
[15]/2017 | Multilayer feedforward neural network (FFNN) | AF: sigmoid function | 1/8 layers (5 neurons/–) | – | – | – | Forest fire detection in WSN
[16]/2017, [17]/2016, [18]/2015, [19]/2015, [20]/2014 | – | AF: tangent sigmoid function | @7 & 8: 8/6 layers; @9: 12/8 layers; @10: 12/10 layers | 14-bit | – | TA: Levenberg-Marquardt (LM) | Detection of neutrino-induced air showers
[21]/2017 | – | – | 1/1 layer (3 neurons/8 neurons) | 32-bit fixed point | 10000 | TA: Levenberg-Marquardt (LM), Bayesian Regularization (BR), and Scaled Conjugate Gradient (SCG) | Secure communication
[22]/2016 | MLP | AF: sigmoid function | 1/6 layers (8 neurons/–) | floating point | 1000 | TA: back propagation (BP) | Fault detection in automotive systems (fault diagnosis of a diesel engine)
[23]/2016 | Feedforward | AF: sigmoid function | 1/1 layer | – | – | TA: back propagation (BP) | Pattern-recognition module for an artificial olfactory system that recognizes different types of coffee (e-Nose)
[24]/2014 | Recurrent high-order neural network (RHONN) | AF: hyperbolic tangent function | – | 16-bit fixed point | 1400 | TA: Extended Kalman Filter (EKF) | Glucose-level regulation for type 1 diabetes mellitus patients
[25]/2015 | Pipelined architecture | AF: hyperbolic tangent function | 6/2 layers (10 neurons/5 neurons) | 18-bit fixed point | – | TA: back propagation (BP) | Positron emission tomography (PET)
[26]/2015 | MLP | – | 1/1 layer (1 neuron/7 neurons) | – | – | TA: back propagation (BP) | Trax solver (game solver)
[27]/2015 | Multilayer feedforward neural network | AF: hyperbolic tangent function | 1/2 layers (3 neurons/9 neurons) | 16-bit | – | TA: back propagation (BP) | Function approximation
[28]/2016 | FFNN | AF: log-sigmoid function | 1/1 layer (3 neurons/8 neurons) | 32-bit single-precision floating point | 200,000 | TA: back propagation (BP) | Pehlivan–Uyaroglu chaotic system
[29]/2015 | MLP | AF: tangent sigmoid (hidden neurons) and linear (output neurons) | 1/2 layers (4 neurons/19 neurons) | fixed point | – | TA: Levenberg–Marquardt (LM) back propagation | Modified Van der Pol–Duffing oscillator circuit
[30]/2015 | MLP-BP | AF: piecewise linear sigmoid (PLS) / sigmoid activation function | 1/1 layer (8 neurons/2 neurons) | 16-bit fixed point / 32-bit single-precision floating point | – | TA: back propagation (BP) | Mobile ANN-based automatic ECG arrhythmia classifier
[31]/2017 | – | – | – | – | 14 | – | Atrial fibrillation classifier
[32]/2017 | FFNN | – | – | – | 3000 | TA: back propagation (BP) | VLSI approach for autonomous robot navigation
[33]/2017 | Self-supervised neural network: Growing Neural Gas | – | – | fixed point | – | TA: Growing Neural Gas algorithm | Virtual anemometer for MPPT of wind energy conversion systems
[34]/2014 | Recurrent neural network: NARX | AF: hyperbolic tangent function Tanh | 1/1 layer | floating point | – | – | Modeling the inverse characteristics of power amplifiers (GaN class-F PA driven by an LTE signal centered at 2 GHz)
[35]/2017 | FFNN | AF: sigmoid function | 1/1 layer (–/20 neurons) | – | – | TA: back propagation (BP) | Emotion recognition from speech signals
[36]/2017 | FFNN | AF: log-sigmoid function | 1/1 layer (3 neurons/20 neurons) | – | 1000 | TA: Levenberg-Marquardt algorithm | Smart sensor for detection and classification of power quality disturbances
[37]/2016 | Feed-forward deep neural network | AF: logistic sigmoid function | 1/3 layers | 8-bit fixed point | – | TA: back propagation (BP); LA: unsupervised greedy Restricted Boltzmann Machine (RBM) learning algorithm | MNIST handwritten digit recognition benchmark and a phoneme recognition task on the TIMIT corpus
[38]/2017 | Self-Organizing Map (SOM) artificial neural network | – | – | 16-bit fixed point | – | LA: self-organizing map (SOM) | Data mining
[39]/2017 | Self-Organizing Map (SOM) artificial neural network | – | – | 16-bit fixed point | – | LA: self-organizing map, SOM1/SOM2 | Telecommunication/video categorization in autonomous surveillance systems
[40]/2018 | FFNN | AF: tangent sigmoid function | 1/2 layers | 16-bit | 600 | – | Intravascular OCT scans
[41]/2014 | MLP | AF: tangent sigmoid function | – | – | – | TA: Levenberg–Marquardt (LM) back propagation | RF power amplifier
[42]/2018 | MHL-ANN (multiple-hardware-layer) / SHL-ANN (single-hardware-layer) | AF: logistic sigmoid function | 1/1 layer (20 neurons/12 neurons) / 1/2 layers (784 neurons/80 neurons) | 16-bit half-precision floating point | 10000 | TA: back propagation (BP) with the stochastic gradient descent (SGD) algorithm | Handwritten digit recognition with the MNIST database (28×28-pixel images = 784 pixels)
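A recurring pattern across the entries in Table I is a fixed-point weighted summation (WS) followed by a piecewise-linear (PWL) approximation of the sigmoid or hyperbolic tangent activation function (AF), since exact transcendentals are expensive in FPGA fabric. The following Python sketch illustrates that pattern only; the breakpoints, slope, and 12-bit fractional word length are hypothetical choices for illustration, not taken from any of the cited designs.

```python
import numpy as np

def tanh_pwl(x):
    # Three-segment piecewise-linear tanh: slope 0.5 around zero,
    # saturating at +/-1 for |x| >= 2. Breakpoints are illustrative only.
    return np.clip(0.5 * np.asarray(x, dtype=float), -1.0, 1.0)

def to_fixed(x, frac_bits=12):
    # Quantize to a fixed-point grid with `frac_bits` fractional bits,
    # mimicking the fixed-point data types listed in Table I.
    scale = float(1 << frac_bits)
    return np.round(np.asarray(x, dtype=float) * scale) / scale

def neuron(inputs, weights, bias=0.0, frac_bits=12):
    # One neuron: fixed-point weighted summation (WS) followed by the PWL AF.
    ws = np.dot(to_fixed(inputs, frac_bits), to_fixed(weights, frac_bits)) + bias
    return tanh_pwl(to_fixed(ws, frac_bits))
```

In an actual FPGA design the multiply-accumulate would map onto DSP blocks and the PWL segments onto a small comparator/LUT structure, which is why the PWL form appears so often in the surveyed works.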
TABLE II. FPGA HW RESOURCES

Work/Year | FPGA family/type | Total elements/other features | Implementation tool on FPGA | Accuracy
[10]/2017 | ARM processor, Zynq | DSPs: a. 28 / b. 42; flip-flops: a. 1772 / b. 9295; LUTs: a. 1895 / b. 15163; latency: a. 87 / b. 1208 | Vivado HLS tool | a. 99.81% / b. 99.59%
[11]/2018 | Cyclone II EP2C70F672C6, Altera | total logic elements: 306; total registers: 3751 | Quartus II and ModelSim tools | –
[12]/2018 | Xilinx Spartan 3S1000 | – | Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) | 97.5% for 96×96-pixel resolution
[13]/2017 | Cyclone IV EP4CE115, Altera | total logic elements: 6618; total combinational functions: 5906; dedicated logic registers: 2772; embedded 9-bit multiplier elements: 21 | Quartus II (Altera); Verilog | –
[14]/2017 | Zynq 7020 | maximum frequency: 35.15 MHz; 4-input LUTs: 1954; registers: 364; slices: 605; DSPs: 24; BRAMs: 20; total on-chip power: 0.182 W | – | –
[15]/2017 | Virtex-5, Altera 6.4a starter | maximum frequency: 604.27 MHz; slice registers: 45; slice LUTs: 16; logic elements: 16; LUT-FF pairs: 53; bonded IOBs: 45 | Verilog HDL code utilizing ModelSim | –
[16]/2017, [17]/2016, [18]/2015, [19]/2015, [20]/2014 | Cyclone V E FPGA 5CEFA9F31I7 | @7: maximum frequency 172.98 MHz @ 100°C; registers: 3839; DSP (18×18): 92; adaptive logic modules (ALMs): 2189. @9: multipliers in ALMs: 1247; multipliers in ALMs and DSP: 107/8. @10: multipliers in ALMs: 41151 | CORSIKA and Offline simulation packages; AHDL code | –
[21]/2017 | – | – | – | –
[22]/2016 | Spartan-6 Xilinx XC6SLX45T | flip-flops: 11401; LUTs: 17175; BRAMs: 0; DSPs: 84; latency: 105 | STM32 platform (STM32F407FZ) | –
[23]/2016 | Virtex-4 SX 4VSX35 | LUTs: 332; RAMB16s: 4; DSP48s: 23; maximum frequency: 122.489 MHz | end-user programming platform; VHDL | –
[24]/2014 | Altera DE2-115, Cyclone IV EP4CE115F29C7 | logic elements: 14262; registers: 3059; embedded 9-bit multiplier elements: 40; power consumption: 142.18 mW | Verilog | –
[25]/2015 | Virtex-2 Pro XC2VP50 | maximum frequency: 50 MHz; slices: <6000 (5463 slices); memory blocks: 19; multipliers: 45 | – | 97.99%
[26]/2015 | 40-nm Arria II GX FPGA (Altera Corp.) | maximum frequency: 75 MHz; combinational ALUTs: 79015; memory ALUTs: 473; dedicated logic registers: 25620; total block memory bits: 4208006; total DSP blocks: 0; total PLLs: 1 | Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) and Quartus II ver. 14.1 | –
[27]/2015 | Virtex-5 XUPV5-LX110T | slice LUTs: 2875; slice registers: 2014; bonded IOBs: 4; block RAM/FIFO: 2; DSP48Es: 3; memory: 72 kB | – | –
[28]/2016 | Xilinx Virtex-6 XC6VCX240T | slice registers: 86329; slice LUTs: 87207; fully used LUT-FF pairs: 67624; bonded IOBs: 195; maximum frequency: 266.429 MHz | VHDL; Matlab | –
[29]/2015 | Xilinx Virtex-II Pro XC2V1000 | CLKs: 2; slices: 1236; slice flip-flops: 329; MULT18X18s: 13; 4-input LUTs: 2134; bonded IOBs: 82 | VHDL | –
[30]/2015 | Altera Cyclone III EP3C120F780 | frequency: 50 MHz; logic elements: 1814/23189; DSP elements: 40/220; logic registers: 784/10816 | – | 96.54% / 97.66%
[31]/2017 | Cyclone IV | logic elements: 40830 | Altera DE2-115 | 95.3%
[32]/2017 | Xilinx Virtex-II Pro | maximum frequency: 357.5 MHz; slices: 492; slice flip-flops: 371; 4-input LUTs: 942; bonded IOBs: 44 | Matlab | –
[33]/2017 | Altera Cyclone III EP3C25F324 | logic elements: 22148; registers: 2265; memory bits: 6528; embedded 9-bit multipliers: 32 | VHDL; Matlab | –
[34]/2014 | Virtex-6 FPGA ML605 evaluation kit | slice registers: 38572; slice LUTs: 29057; LUT-FF pairs: 21491; bonded IOBs: 3; block RAMs: 225; DSPs: 64; maximum frequency: 95.511 MHz | Verilog language; Matlab; SystemVue; Xilinx ISE tool | –
[35]/2017 | – | slices: 2817; FFs: 2916; LUTs: 2900; bonded IOBs: 16; GCLKs: 1 | – | –
[36]/2017 | Altera DE2-115, Cyclone IV E EP4CE115F297C | logic elements: 2000; registers: 580; 9-bit multipliers: 4; memory bits: 153396; maximum frequency: 61.75 MHz | VHDL; Matlab | –
[37]/2016 | Xilinx XC7Z045 | digit recognition/phoneme recognition: FFs: 136677/161923; LUTs: 213593/137300; BRAMs: 750.5/378; DSPs: 900/0 | – | –
[38]/2017 | Xilinx Virtex-5 V5FX100T / Xilinx Virtex-7 V7FX690T | slice registers: 45036/211793; slice LUTs: 57679/273055; BRAM/FIFOs: 226/1421; DSPs: 100/700; power consumption: 23.5 W/44 W | – | –
[39]/2017 | Xilinx ISE platform, Virtex-5 XC5VLX50T | LUTs: 8845/17945; chip utilization: 30%/62%; maximum frequency: 2.38 MHz/1.51 MHz | Xilinx ISim simulator tool | –
[40]/2018 | Xilinx VC707 evaluation board, Virtex-7 XC7VX485T-2FFG1761C | FFs: 29609; LUTs: 21065; block RAMs: 1028 (16 kb); DSP48Es: 201; maximum frequency: 180 MHz | VHDL | –
[41]/2014 | Cyclone III Edition, Altera | logic elements: 2720; memory bits: 1549824; logic registers: 1348; 9-bit multipliers: 170 | Matlab-Simulink software | –
[42]/2018 | Xilinx Virtex-5 XC5VLX-110T | FFs: 24025/44079; LUTs: 28340/63454; block RAMs: 22/40; DSPs: 22/40; maximum frequency: 205 MHz/197 MHz | VHDL | 90.88% / 97.20% recognition rate
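Several works in the tables report paired accuracy figures for fixed-point versus floating-point realizations (for example, 96.54%/97.66% in [30]), reflecting the accuracy cost of shrinking the word length to save FPGA resources. The toy Python experiment below illustrates that trade-off in principle; the weight matrix, input distribution, and word lengths are made up for illustration and do not reproduce any cited design.

```python
import numpy as np

def quantize(x, frac_bits):
    # Round to a fixed-point grid with the given number of fractional bits.
    scale = float(1 << frac_bits)
    return np.round(np.asarray(x, dtype=float) * scale) / scale

def max_output_error(frac_bits, seed=0):
    # Compare a quantized-weight tanh layer against its floating-point
    # reference on random inputs; toy stand-in weights, not a cited design.
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(8, 4))          # hypothetical trained weights
    x = rng.normal(size=(100, 8))        # hypothetical test inputs
    ref = np.tanh(x @ w)                 # floating-point reference output
    out = np.tanh(quantize(x, frac_bits) @ quantize(w, frac_bits))
    return float(np.max(np.abs(out - ref)))

for fb in (16, 12, 8, 4):
    print(fb, "fractional bits -> max output error", max_output_error(fb))
```

The error grows as the fractional word length shrinks, which is the same effect the paired accuracy columns in the tables quantify at the application level.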
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝soniya singh
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerAnamika Sarkar
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 

Recently uploaded (20)

Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)Software Development Life Cycle By  Team Orange (Dept. of Pharmacy)
Software Development Life Cycle By Team Orange (Dept. of Pharmacy)
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCL
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
Analog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAnalog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog Converter
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 

ANN consists of input/hidden/output layers (analogous to neurons, ~10^11) and weighted connections among them (analogous to synapses, ~10^15); the former denotes the nodes, while the latter indicates the weights. Any node between the input and output layers belongs to a hidden layer (built of artificial neurons), where the nonlinear mapping between input nodes and output nodes is done. At the analog circuit level, sensors convert the inputs into a form suitable for processing; the converted inputs are then weighted and summed (a linear combination) and passed through a nonlinear activation function to produce the output. The weighted sum can be computed using either a multiplier and summer or a weighted-summer op-amp. Another adder adds the output of the weighted summer to the bias, which increases or decreases the net input of the activation function [4]–[5]. The activation function is the most important, most expensive, and most difficult part of any hardware implementation of an ANN. It is popularly represented by the sigmoid function, which is applied to the output of the linear combination of input signals; it computes the weighted sum and then adds direction and decision on whether to activate a specific neuron or not. Recently, memristive crossbar arrays have proven a good model for the weights [6], and ANN analog computation can be achieved using a memristor as in [7]. VLSI CMOS technology can offer the nonlinearity characteristic at the cost of inaccurate computation, thermal drift, lack of flexibility (non-reconfigurable and non-reprogrammable), and limited scaling. An ASIC can realize a compact parallel architecture for an ANN and provide high-speed performance; however, it is expensive, computes inaccurately, and lacks design flexibility. Microprocessors and DSPs do not support parallel designs. To support parallelism, an FPGA is used, which is a good candidate for reconfigurability and flexibility in designing ANNs.
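The neuron computation described above (a weighted sum of inputs, shifted by a bias, then a sigmoid activation) can be sketched in a few lines; all names and values here are illustrative, not taken from any cited design:

```python
import math

def sigmoid(z):
    # logistic activation applied to the weighted sum
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # linear combination of inputs (the "weighted summer"),
    # shifted by the bias, then passed through the activation
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# one artificial neuron with three inputs
y = neuron([0.5, -1.0, 2.0], [0.1, 0.4, 0.3], bias=0.2)
```

In hardware, the `sum` becomes an adder tree (or a KCL current sum in analog form) and `sigmoid` is the costly block the surveyed papers approximate.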
It helps to have repeated iterative learning information processing with modified weights and a reconfigurable structure. In addition, its density is more compact, with lower cost and a shorter design cycle. An FPGA-based ANN can offer good modularity and dynamic adaptation for neural computation. The most important feature of FPGAs is parallel computing, which matches well with the architecture of ANNs [8]–[9]. Generally, in any parallel computing, one challenge is finding the best way to map applications onto HW. More specifically, in FPGAs the basic computing cells have rigid interconnection structures, and a large ANN needs more HW resources (an increased number of units such as multipliers and activation functions). There are several types of ANN for various problems in the literature, and the operation of an ANN can be divided into two phases. The first phase is the learning process (training process, or off-line phase), in which the ANN learns the dataset obtained from the system model; it includes diverse optimization algorithms to decide the values of the weights and the biases. The second phase is the recalling process (prediction process, or real-time phase), where the optimized values of the artificial neurons are verified using the test dataset (the test dataset differs from the training dataset, so the ANN may encounter data it did not see during training). In this brief, modern miscellaneous implementations of ANNs on FPGAs are reviewed. It will be shown that different FPGA families enable rapid prototyping of several ANN architectures and implementation approaches.
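The two phases just described (off-line training that fixes weights and biases, then real-time recall on unseen data) can be illustrated with a toy perceptron; the learning rule and dataset here are illustrative only:

```python
# Toy illustration of the two phases: off-line training on one dataset,
# then recall (prediction) on inputs the network is later presented with.
def train(samples, lr=0.1, epochs=50):
    w, b = [0.0, 0.0], 0.0           # weights and bias decided in phase 1
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - y          # perceptron learning rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def recall(w, b, x):                  # phase 2: real-time prediction
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

training_set = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND
w, b = train(training_set)
```

On an FPGA only the `recall` path is synthesized in most of the surveyed designs; training happens off-line on a host.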
The design framework of these FPGAs can be compared with respect to total response time, precision, total area (CLBs), maximum clock frequency, weight updates per second, and HW resource utilization. The organization of this paper is as follows. Section II surveys relevant previous works. Section III discusses and reports the results of the different studies. Finally, Section IV concludes the paper.

II. FPGA-BASED HW IMPLEMENTATION OF ANNS

In this section, ten case studies of FPGA-based ANN applications are presented. These cases fall into two categories: i. applications (biomedicine, sensor networks, magnetism, power systems, security …) and ii. emulated modeling (memristor modeling, power amplifiers, chaotic circuits …).

A. Applications based on ANN implemented on FPGA

FPGAs can be applied to a widespread range of applications and can be part of a data acquisition system. In [10], [12], [24]–[25], [30]–[31], and [40], the systems are modeled using ANNs suited for medicine/biomedicine applications. [10] deals with ECG anomalies and designs a system implemented on FPGA to detect such arrhythmias. The signal records are taken from the MIT-BIH arrhythmia database and trained using the resilient back propagation (RPROP) algorithm for a Multi-Layer Perceptron (MLP) ANN architecture. The activation functions used are a piecewise linearly approximated hyperbolic tangent (for the hidden layer) and the hyperbolic tangent (for the output node), which reduce computation time and complexity and have a simple form. Various data types and different numbers of input/hidden layers are tested to obtain suitable FPGA performance. It was shown that a 12-6-2 MLP with 24-bit data classifies the records with an accuracy of 99.82% (the best accuracy among the tested configurations). In [12], identification of human blood based on image processing is modeled using a feedforward neural network.
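The piecewise linear approximation of the hyperbolic tangent used in [10] can be sketched in software; the breakpoints below are chosen for illustration and are not the ones from [10]:

```python
import math

# Illustrative breakpoints; a hardware design would store these in a small LUT.
_BP = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]
_VAL = [math.tanh(b) for b in _BP]

def pwl_tanh(x):
    # odd symmetry: approximate for |x| and restore the sign
    s, a = (1.0, x) if x >= 0 else (-1.0, -x)
    if a >= _BP[-1]:
        return s * 1.0                       # saturation region
    for i in range(len(_BP) - 1):
        if a <= _BP[i + 1]:
            # linear interpolation between the two nearest breakpoints
            t = (a - _BP[i]) / (_BP[i + 1] - _BP[i])
            return s * (_VAL[i] + t * (_VAL[i + 1] - _VAL[i]))

# worst-case error of this sketch over [-4, 4]
max_err = max(abs(pwl_tanh(x / 100.0) - math.tanh(x / 100.0))
              for x in range(-400, 401))
```

Each segment costs only one multiply and one add in hardware, which is why such approximations reduce computation time relative to an exact tanh.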
This is helpful when the number of blood samples is large, where the conventional method may take a long time. The back propagation algorithm is deployed for the training process and the sigmoid function is used as the activation function. The FPGA implementation achieves 97.5% accuracy for 80×80 pixel resolution. Positron Emission Tomography (PET) is a scan imaging technique used to detect cancer and guide its staging and treatment. Neural inverse optimal control for glucose level estimation on an FPGA platform is carried out in [24], and an Extended Kalman Filter (EKF)-trained recurrent high-order neural network (RHONN) is chosen as the neural identifier architecture. Selecting EKF as the training algorithm stems from the minimum error of the optimized weight values. A 16-bit fixed-point representation is used, and the total power consumption of the system is 142.18 mW. In [25], the PET system is modeled using an ANN operating in real time to process triple coincidences (triplets) in a PET scanner by identifying the true line of response (LOR). The operating frequency was 50 MHz, and the pipelined ANN model processes the triplets without exceeding 6000 FPGA slices. The neurons are activated using the hyperbolic tangent activation function in a 6-2-1 arrangement of layers (10, 5, and 1 neurons, respectively). The result for the LOR selection is 97.99%, indicating a precise selection. Literature [30] presents a comparative work on two ANN models: one a 32-bit floating-point FPGA-based model activated by the sigmoid activation function, the other a 16-bit fixed-point FPGA-based model activated by a piecewise linear sigmoid (PLS) activation function. Each model has one input, one hidden, and one output layer, with eight input neurons, two hidden neurons, and one output neuron. The first model gives 97.66% accuracy and the second reaches 96.54% accuracy; nevertheless, the resource utilization of the second model is lower than that of the first.
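Several of the designs above trade floating point for 16-bit fixed point. A quick sketch of the quantization involved, assuming a Q4.12 format (1 sign bit, 3 integer bits, 12 fractional bits); the format choice is illustrative, not the one used in [24] or [30]:

```python
FRAC_BITS = 12                      # assumed Q4.12 layout
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    # round to the nearest representable value and clamp to the 16-bit range
    q = int(round(x * SCALE))
    return max(-(1 << 15), min((1 << 15) - 1, q))

def from_fixed(q):
    return q / SCALE

x = 0.7310585786                    # e.g. a sigmoid output
err = abs(from_fixed(to_fixed(x)) - x)
```

The round-trip error is bounded by half the quantization step (2^-13 here), which is the truncation effect the floating-point designs avoid at higher hardware cost.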
The classification times are 1.07 μs and 1.01 μs for the first and second models, respectively. An embedded system-on-chip for Atrial Fibrillation (AFIB) detection in heart screening is presented in [31]. The detection algorithm flow starts with pre-processing units, including a bandpass filter cascaded with a stationary wavelet transform; then feature extraction is done, and it ends with an ANN that classifies the AFIB pattern. Overall, this detection system achieves an accuracy of 95.3% while using 40,830 logic elements. Biomedical imaging techniques need to be designed accurately to characterize tissue types (normal or abnormal). One such technique is intravascular optical coherence tomography (OCT), covered in [40] for heart diseases. A feed forward ANN is used after collecting relevant information from the image (using feature extraction): the ANN processes these image features with a classifier trained to distinguish them, and the ANN is then implemented on an FPGA platform. Operating at 180 MHz, the ANN process finishes with 0.7 s real-time classification. Literature [27] proposes a design of an FPGA-based multilayer feed forward ANN using an SOC (system-on-chip) design approach to take advantage of hardware reuse and sharing, which reduces the on-chip area. The FFNN architecture is based on the hyperbolic tangent activation function and the back propagation training algorithm with a 16-bit representation. The activation function is approximated by a piecewise linear function to allow a combinational-logic-only implementation. For detection and recognition purposes, the works in [13], [15], [22]–[23], [26], [32], [35], [37]–[39], and [42] are modeled using different ANN topologies, each targeting a specific application. An ANN on FPGA for forest fire detection is presented in [15]. Five sensors serving as input neurons are used to detect the firestorm, activated by the sigmoid function in the eight feed forward hidden layers.
A wireless sensor network (WSN) with an ANN gives better detection and reduces the delay compared to a WSN alone. The maximum operating frequency for the used FPGA (Virtex-5, Altera 6.4a starter) is 604.27 MHz. Literature [22] shows detection of faults in automotive systems, i.e., fault diagnosis in a diesel engine, through an FPGA-based accelerated MLP ANN. The activation function used is the sigmoid function, owing to the fewer nodes needed during the mapping relationship. The training algorithm used to train the hidden layer neurons is back propagation, applied offline. The effective area on the Spartan-6 XC6SLX150 is 60,183 for a 1/6 input layer (8 neurons) and six hidden layers, obtained as the number of DSPs times 512 plus the number of look-up tables (LUTs).
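The area metric quoted from [22] (DSP count × 512 + LUT count) is simple to reproduce; the resource counts below are hypothetical, not the ones behind the 60,183 figure:

```python
def effective_area(num_dsps, num_luts):
    # weighted resource metric from [22]: each DSP is counted as 512 LUTs
    return num_dsps * 512 + num_luts

area = effective_area(4, 2048)   # hypothetical counts for illustration
```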
An End-User programming platform [23] is used to facilitate the interface with the FPGA for an ANN model of an e-Nose system that distinguishes the odors of four coffee types. The feed forward ANN is trained using the back propagation algorithm and implemented on a Virtex-4 FPGA operating at a 122.489 MHz frequency for a 1-1-1 FFANN layer arrangement. The End-User programming platform removes the need to know low-level hardware description languages (HDLs); it is applied through a built-in, collaborative graphical user interface (GUI) to maintain rapid prototyping. In [26], an ANN-based Trax solver with a 64×64 square playing board area was developed. The ANN was trained offline using the back propagation algorithm. The chosen weight representation, combined with a binary activation function, considerably reduced the digital logic circuit design. The design was implemented on a 40-nm process FPGA (Arria II GX; Altera Corp.) and was capable of operating at a clock frequency of 75 MHz. An approach to autonomous robot navigation has been presented in [32] as an efficient execution of a feed forward ANN, trained on 3000 patterns using the back propagation algorithm. The goal behind the design is to obtain the shortest and safest path for the robot, avoiding obstacles with the assistance of the ANN as the navigation technique. The ANN is implemented on a Xilinx Virtex-II Pro FPGA. The results were obtained at a 357.5 MHz clock frequency, which compares favorably with previous works. Speech signal recognition involves gender recognition and emotional recognition. The speech signal in [35] is recognized and classified using an FPGA-based ANN trained with the back propagation algorithm. This work shows that using an ANN as the classifier can achieve better results than the existing classifier, Latent Dirichlet Allocation (LDA), in terms of the logic elements used and the processing time.
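The binary activation used in the Trax solver [26] reduces a neuron to a threshold comparison, which in an FPGA becomes an adder tree followed by a comparator, i.e. pure combinational logic. A minimal sketch (weights and threshold are illustrative):

```python
def binary_neuron(inputs, weights, threshold):
    # hard-threshold activation: fire iff the weighted sum reaches the
    # threshold; no multiplier or sigmoid LUT is needed for the activation
    acc = sum(w * x for w, x in zip(weights, inputs))
    return 1 if acc >= threshold else 0

out = binary_neuron([1, 0, 1], [2, -1, 3], threshold=4)
```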
For hardware implementation of ANNs in recognition applications, the data type is critical; a fixed-point representation can improve execution performance in the feed forward computation of the network. In [37], two recognition tasks are used: the MNIST handwritten digit recognition benchmark and a phoneme recognition task on the TIMIT corpus. The new contribution of this work was mapping the ANN onto FPGA hardware using fixed point without the need for external DRAM memory. Back propagation is used as the training algorithm, while the unsupervised greedy RBM (Restricted Boltzmann Machine) algorithm is applied during learning for digit classification. Literature [13] is also directed toward MNIST handwritten digit recognition (28×28 pixel images) using an FPGA-based multi-layer feed forward ANN. The Coordinate Rotation Digital Computer (CORDIC) algorithm approximates the activation function (the sigmoid function). The data type is the 32-bit single-precision floating-point standard, used for the good accuracy of the hardware implementation compared to fixed point, whose accuracy suffers from the truncation error of fixed-point operations. MNIST handwritten digit recognition (28×28 pixel image = 784 pixels) is also visited in [42], in which two recognition systems are tested: the first practices feature extraction (Principal Component Analysis (PCA)) to map the complexity of the input image data to a lower-dimensional space before pattern recognition with the ANN, while the second directly uses the ANN to recognize the input image. The first system is preferred in a limited ANN environment of hardware computing and memory resources. With the development of artificial intelligence, the second system emerges as a good candidate for multiple hidden layers.
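One common way to approximate the sigmoid with CORDIC, as in [13], is to run the hyperbolic CORDIC in rotation mode, which yields sinh/cosh (and hence tanh) using only shift-and-add micro-rotations, and then apply the identity σ(x) = (1 + tanh(x/2))/2. The sketch below is a floating-point illustration of that scheme, not the exact design in [13]:

```python
import math

def cordic_tanh(z, n=20):
    # hyperbolic CORDIC, rotation mode: drive the residual angle z to zero;
    # converges for |z| up to about 1.12
    x, y = 1.0, 0.0
    i, k = 0, 1
    while i < n:
        # iterations k = 4, 13, 40, ... must be executed twice for convergence
        for _ in range(2 if k in (4, 13) else 1):
            if i >= n:
                break
            d = 1.0 if z >= 0 else -1.0
            e = 2.0 ** -k                      # a right-shift in hardware
            x, y, z = x + d * y * e, y + d * x * e, z - d * math.atanh(e)
            i += 1
        k += 1
    return y / x          # the CORDIC gain cancels in the ratio sinh/cosh

def cordic_sigmoid(t):
    # sigmoid(t) = (1 + tanh(t/2)) / 2, keeping t/2 inside the CORDIC range
    return 0.5 * (1.0 + cordic_tanh(t / 2.0))
```

In hardware the `atanh(2^-k)` constants sit in a small ROM and each micro-rotation is an add and a shift, which is why CORDIC suits FPGA activation functions.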
Electing the second system, literature [42] works on two schemes. One is called the multi-hardware-layer ANN (MHL), which implements all computing layers (one input layer, one hidden layer, and one output layer) on multiple hardware blocks, achieving better performance (each layer on separate hardware). The second scheme is the single-hardware-layer ANN (SHL), which implements a single layer in hardware, allowing better hardware sharing and thus better area utilization (one input layer, multiple hidden layers, and one output layer run on the same hardware). A control unit is needed to control the forward computation by multiplexing the proper weight and the correct input. Each computing neuron in the hidden layer consists of an accumulating multiplier, a ROM storage unit, and a logistic sigmoid activation function. Using the MNIST database to set up the experiment in [42], it was shown that the customized SHL ANN's performance and its scalability (different numbers of hidden layers) yield an efficient hardware scheme with respect to storage resources, supporting a 16-bit half-precision floating-point representation. Currently, data mining with parallel computational intelligence provides better solutions for cloud databases. One emerging issue is using a model that matches the requirements (such as unsupervised learning and compressed dimensionality). The model suggested in [38] is the self-organizing map (SOM) ANN, whose principle depends on data compression. To achieve accurate hardware realization on FPGA with a fixed-point representation, the conscience SOM (CSOM) is used to offer optimized entropy, well-suited to high-dimensionality data structures, and can be implemented on different processing platforms. Another SOM ANN is proposed in [39], supporting on-line training for video categorization in autonomous surveillance systems. The on-chip FPGA hardware achieves high speed, compact parallelism, and low power consumption.
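The SOM designs in [38]–[39] revolve around the same two steps: find the best-matching unit (BMU) for an input vector, then pull the BMU and its map neighbors toward that input. A minimal software sketch (map size, learning rate, and radius are illustrative):

```python
def som_step(weights, x, lr=0.5, radius=1):
    # weights: list of weight vectors of the nodes on a 1-D map; x: input vector
    dist = lambda w: sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    bmu = min(range(len(weights)), key=lambda i: dist(weights[i]))
    for i, w in enumerate(weights):
        if abs(i - bmu) <= radius:            # neighborhood on the map grid
            weights[i] = [wi + lr * (xi - wi) for wi, xi in zip(w, x)]
    return bmu

nodes = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
bmu = som_step(nodes, [0.9, 1.1])
```

The BMU search and the update loop are independent per node, which is the parallelism the FPGA implementations exploit.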
It provides a flexible implementation and works well under a continuous input data flow. Two SOMs are tested experimentally: one for a 2D data stream application (telecommunication: Quadrature Amplitude Modulation identification, operating at a maximum frequency of 2.38 MHz), and the other for a 3D data stream application (real-time video compression, operating at a maximum frequency of 1.51 MHz). References [16] to [20] focus on the detection of air showers, which are generated by ultra-high-energy cosmic rays (10^18–10^20 eV) and can be observed from the Pierre Auger Observatory, located in western Argentina and considered the world's largest cosmic ray observatory. Inclined air showers contain neutrinos whose charged and neutral currents build weak interactions with other particles. Neutrinos are a fingerprint of inclined air showers in the atmosphere, featuring a very small cross section for interactions and sensitive detection at high zenith angles. Therefore, the surface detector for neutrino-induced air showers requires accurate pattern recognition, which can be achieved via an ANN as done in [16]–[17]. The FPGA triggers based on an 8-6-1 ANN in [16]–[17], a 12-8-1 ANN in
[18], and a 12-10-1 ANN in [19]–[20] were trained using the Levenberg-Marquardt algorithm, and the tests reveal results adequate for the front-end boards in the Pierre Auger surface detector system. Power systems are among the systems that need accurate modeling to efficiently achieve the desired optimized results. In [33], a compact virtual anemometer for MPPT (maximum power point tracking) of wind turbines (horizontal-axis and vertical-axis) is implemented on an FPGA. This system is based on a Growing Neural Gas (GNG) ANN, which provides a good trade-off among accuracy, resource utilization, computational speed, and implementation complexity. This ANN is a special type of unsupervised neural network that adds new neurons (units) progressively. The maximum number of units is chosen considering the nature of the data and the size of the required area. The literature [35] suggests a new smart sensor for on-line detection and classification of power quality (PQ) and power quality disturbances (PQD) using basic power sensors, monitoring the electrical installation in a non-disrupting approach and using HOS (high-order statistics: first order, mean; second order, variance; third order, skewness; fourth order, kurtosis) processing. This FPGA-based smart sensor can act as a waveform analyzer (a device for power quality monitoring and measurement). In addition, it has a precise PQD classifier, able to classify different types of single PQD and some combinations of PQDs. The HOS processing adds a distinguishing feature to the system, as it uses very few FPGA resources, compatible with the development of high-performance signal processing techniques for power system smart sensors. Simplicity and low cost, with on-line testing for three-phase power systems, are the main features of the proposed sensor.
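The four HOS quantities the smart sensor computes (mean, variance, standardized skewness, standardized kurtosis) follow directly from their definitions; a plain-Python sketch:

```python
def hos(samples):
    # first four statistical moments as used in HOS processing
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n     # 2nd central moment
    sd = var ** 0.5
    skew = sum(((s - mean) / sd) ** 3 for s in samples) / n
    kurt = sum(((s - mean) / sd) ** 4 for s in samples) / n
    return mean, var, skew, kurt

mean, var, skew, kurt = hos([2, 4, 4, 4, 5, 5, 7, 9])
```

Each statistic is an accumulate-and-scale operation over a sample window, which maps to very few FPGA resources, as the text notes.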
The activation function for the hidden layer of the FFNN is the log-sigmoid function, and the networks are trained with the Levenberg-Marquardt algorithm.

B. Analog circuit simulators using FPGA-based hardware implementation of ANNs

Various techniques are utilized to design some analog circuits such as memristors, power amplifiers, and chaotic circuits. The selected case studies are [11], [14], [21], and [28]–[29]. In [11], a memristor HW simulator is used along with weighted-summation (KCL) electronic modules as basic blocks to implement ANN hardware on FPGA for a Single-Layer Perceptron (SLP). The KCL module is considered the weighted-summation function that updates the weighted values. Spike-timing-dependent plasticity (STDP) unsupervised spatiotemporal pattern learning is chosen because it facilitates correct extraction of the input pattern for the ANN. The FPGA implementation is done for several input neuron counts; for eight input neurons it requires 306 total logic elements and 3751 total registers. A chaotic generator is a good candidate to capture the chaotic behavior of brain activities, since EEG (electroencephalogram) signal processing is realized as a stochastic process. The chaotic system considered in [14] is the Hénon map, and the suitable model for biological brain activities is an artificial neural network with an MLP-BP topology. The 3-input, 4-hidden-neuron ANN models for the chaotic generator are mapped onto FPGA hardware with a fixed-point representation. The FPGA hardware operates at a maximum frequency of 35.15 MHz while consuming 0.182 W of on-chip power. Also, in [21], the authors design a Lorenz chaotic generator using an ANN model implemented on FPGA for secure communication. The ANN layers are trained using three different algorithms: Levenberg-Marquardt, Bayesian Regularization, and Scaled Conjugate Gradient. One hidden layer is used, with 1 to 16 neurons, where the minimal number of neurons in the hidden layer needs to meet the precision requirement.
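The Hénon map that the MLP in [14] learns to reproduce is a two-line recurrence; a direct iteration sketch with the classical parameters a = 1.4, b = 0.3:

```python
def henon(x, y, a=1.4, b=0.3):
    # one step of the Henon map: x' = 1 - a*x^2 + y, y' = b*x
    return 1.0 - a * x * x + y, b * x

def orbit(n, x=0.0, y=0.0):
    # iterate the map n times from the given initial point
    pts = []
    for _ in range(n):
        x, y = henon(x, y)
        pts.append((x, y))
    return pts

pts = orbit(3)
```

The ANN replaces `henon` with a trained 3-input MLP, so the generator becomes a fixed-point feed-forward network rather than explicit arithmetic on the map equations.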
Therefore, the three training algorithms are used to find the optimal number of hidden neurons for the FPGA hardware implementation, which turned out to be 8 neurons. The sigmoid activation function is used for the neurons in the hidden layer, while the ramp function is used as the activation function for the output layer.

The literature [28] presents the Pehlivan–Uyaroglu Chaotic System (PUCS), a novel approach modeled using a 3-8-3 Feed Forward Neural Network (FFNN) trained by the back propagation algorithm. The FFNN-based PUCS is trained offline in the Matlab environment using the Neural Network Toolbox. Then, the hardware implementation is done on FPGA, operating at a maximum frequency of 266.429 MHz, with the weights and biases written into VHDL code in 32-bit single precision floating point. In [29], another chaotic system, the MVPDOC (Modified Van der Pol–Duffing Oscillator Circuit), is designed and mapped to FPGA. The modeled system is based on wavelet decomposition and an MLP ANN trained by the Levenberg–Marquardt (LM) back propagation algorithm. The implemented resources show good utilization in terms of logic elements, making the hardware MVPDOC system suitable for use in any nonlinear dynamic chaotic system.

The inverse characteristics of power amplifiers (a GaN class-F power amplifier working with an LTE signal centered at 2 GHz) are modeled using a NARX neural network under the requirement of AM/AM and AM/PM characteristics [34]. NARX is a type of recurrent neural network and can linearize microwave power amplifiers using the digital pre-distortion (DPD) method, which modifies the baseband signal according to the inverse function of the power amplifier. The FPGA implementation, via the Verilog language, operates at a maximum frequency of 95.511 MHz. Similarly, [41] models the behavior of RF power amplifiers using an MLP ANN trained with the back propagation algorithm, considering two aspects: nonlinearity effects and memory effects.
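The hidden-sigmoid/output-ramp structure described for [21] amounts to a plain forward pass. A minimal sketch, with illustrative placeholder weights rather than the trained values:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ramp(x, lo=0.0, hi=1.0):
    # Ramp (saturating linear) activation, as used on the output layer.
    return max(lo, min(hi, x))

def forward(inputs, w_hidden, w_out):
    """One hidden layer of sigmoid units feeding one ramp output unit.
    w_hidden: list of (weights, bias) per hidden neuron;
    w_out: (weights, bias) for the single output neuron."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in w_hidden]
    ws, b = w_out
    return ramp(sum(w * h for w, h in zip(ws, hidden)) + b)
```

On FPGA, the sigmoid is what typically dominates resource cost, which is why several of the surveyed works replace it with piecewise-linear or CORDIC approximations.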
The AM/AM and AM/PM characteristics demonstrate the model's accuracy. Implementing the model on a DSP-FPGA kit yields a behavioral model for RF power amplifiers with low complexity and high processing performance.

III. DISCUSSION OF THE ANN FEATURES AND FPGA RESOURCES IN THE PRESENTED CASE STUDIES

FPGA-based reconfigurable computing hardware architectures are well suited for the implementation of ANNs, as one can rapidly reconfigure them to adjust the weights and topologies of an ANN. This enables fast prototyping [43]. The density of FPGA reconfigurable fabric allows high numbers of elements to reach the proper functionality within the unit chip area. Further, an ANN needs to be trained using an off-line learning/training algorithm, and it then needs to be adapted
through the recalling phase. These features work together to customize the topology of the ANN and the computational accuracy.

Table I and Table II summarize the main ANN features and the FPGA hardware resources, respectively, for the 32 collected studies published from 2014 to 2018 (4 in 2014, 7 in 2015, 5 in 2016, 13 in 2017, and 3 in 2018). There are 23 works published as conference papers, 8 published as journal articles, and one as a book (outstanding Ph.D. research).

A. Results Analysis of ANN Features

ANN has found many practical applications and implementations in medicine/biomedicine [10], [12], [24]–[25], [30]–[31], [40], analog circuit simulators [11], [14], [21], [28]–[29], [34], [41], pattern detection and recognition [13], [15], [22]–[23], [26]–[27], [32], [35], [37]–[39], air showers [16]–[20], and power systems [33], [36]. The structural design of an ANN requires a massive amount of parallel computing and storage resources; thus, it demands parallel computing devices such as FPGAs.

Data representation plays a significant role in implementing an ANN on FPGA hardware. The choice of the data type format for the weights and the activation functions is important to the recognition rate and the performance, for which numerous data type representations can be considered, such as fixed-point [10], [14], [21], [24]–[25], [29], [33], [37]–[39], floating-point [10], [13], [22], [28], [34], or integer/binary representations. While fixed-point and integer/binary representations can achieve improved execution performance in the forward computation of the networks, it is very difficult to train a deep neural network (many hidden layers) for recognition applications with them and then map the optimized weights onto FPGA hardware. A multilayered ANN consists of three or more layers: one input layer, one or more hidden layer(s), and one output layer [25].
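The fixed-point formats cited above trade precision for hardware cost. A small generic sketch of quantizing a real-valued weight to a signed fixed-point integer with a chosen number of fractional bits (an illustration of the idea, not any surveyed paper's exact scheme):

```python
def to_fixed(value, frac_bits, total_bits=16):
    """Quantize a real weight to a signed fixed-point integer with
    frac_bits fractional bits, saturating at the representable range."""
    scale = 1 << frac_bits
    q = round(value * scale)
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, q))

def from_fixed(q, frac_bits):
    """Recover the real value represented by a fixed-point integer."""
    return q / (1 << frac_bits)
```

The round-trip error is bounded by half a least significant bit (1/2^(frac_bits+1)), which is the truncation/round-off effect discussed next.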
As for the accuracy reduction of the FPGA hardware, it is primarily caused by the truncation error of fixed-point operations. Some works compare fixed-point with floating-point operations to gauge the effect of round-off errors, and the CSOM can be used for the fixed-point selection, as in [38]. In contrast, the floating-point data type format allows easier ANN training in software and can give suitable recognition accuracy and execution performance. It has been shown that reduced-precision floating-point representation is a candidate for the hardware realization of ANNs on FPGA [42].

B. Results Analysis of FPGA HW Resources

Different types of FPGA technology platforms are used in the selected literature. The two main vendors are Xilinx and Intel/Altera. Each offers several FPGA families operating at various frequencies with miscellaneous available resources.

IV. SUMMARY

Several FPGA-based ANNs targeted towards different applications are discussed in this brief. FPGA hardware implementations make ANNs more convenient to realize and reconfigure. Choosing the FPGA technology platform depends on the available resources, the ANN topology, and the data type representation.

REFERENCES
[1] V. Ntinas et al., IEEE Trans. Neural Networks and Learning Systems, 2018.
[2] D. Korkmaz et al., AWERProcedia Information Technology & Computer Science, 2013, pp. 342–348.
[3] I. Li et al., in IEEE Inter. SoC Design Conf. (ISOCC), 2016, pp. 297–298.
[4] M. Alçın et al., Optik, vol. 127, pp. 5500–5505, 2016.
[5] L. Gatet et al., IEEE Sensors Journal, vol. 8, no. 8, pp. 1413–1421, 2008.
[6] O. Krestinskaya et al., in IEEE Inter. Symp. Circ. Syst. (ISCAS), 2018, pp. 1–5.
[7] L. Hu et al., Adv. Mater., 1705914, 2018.
[8] Z. Hajduk, Neurocomputing, vol. 247, pp. 59–61, 2017.
[9] R. Tuntas et al., Applied Soft Computing, vol. 35, pp. 237–246, 2015.
[10] M. Wess et al., in 2017 IEEE Inter. Symp. Circuits and Systems (ISCAS), 2017, pp. 1–4.
[11] V. Ntinas et al., IEEE Trans. Neural Networks and Learning Systems, 2018.
[12] D. Darlis et al., in IEEE 2018 Inter. Conf. Signals and Systems (ICSigSys), 2018, pp. 142–145.
[13] Z. F. Li et al., in IEEE 2017 Inter. Conf. Electron Devices and Solid-State Circuits (EDSSC), 2017, pp. 1–2.
[14] L. Zhang, in 2017 IEEE XXIV Inter. Conf. Electron. Electrical Engineering and Computing (INTERCON), 2017, pp. 1–4.
[15] S. Anand et al., in 2017 2nd Inter. Conf. Comput. Commun. Technologies (ICCCT), 2017, pp. 265–270.
[16] Z. Szadkowski et al., IEEE Trans. Nucl. Sci., vol. 64, no. 6, pp. 1271–1281, 2017.
[17] Z. Szadkowski et al., in IEEE Progress in Electromag. Research Symp. (PIERS), 2016, pp. 1517–1521.
[18] Z. Szadkowski et al., in 2015 4th IEEE Inter. Conf. Advance. Nuclear Instrum. Measurement Methods and their Applic. (ANIMMA), 2015, pp. 1–8.
[19] Z. Szadkowski et al., in 2015 IEEE Federated Conf. Comput. Science and Inform. Systems (FedCSIS), 2015, pp. 693–700.
[20] Z. Szadkowski et al., IEEE Trans. Nuclear Science, vol. 62, no. 3, pp. 1002–1009, 2015.
[21] L. Zhang, in 2017 IEEE 30th Canadian Conf. Electr. Comput. Eng. (CCECE), 2017, pp. 1–4.
[22] S. Shreejith et al., in IEEE Design, Automation & Test in Europe Conference & Exhibition (DATE), 2016, pp. 37–42.
[23] A. Tisan et al., IEEE Trans. Indus. Informatics, vol. 12, no. 3, 2016.
[24] J. C. Romero-Aragon et al., in 2014 IEEE Symp. Computat. Intellig. Control Automat. (CICA), 2014, pp. 1–7.
[25] C. Geoffroy et al., IEEE Trans. Nuclear Science, vol. 62, no. 3, 2015.
[26] T. Fujimori et al., in 2015 IEEE Inter. Conf. Field Programmable Technology (FPT), 2015, pp. 260–263.
[27] R. Biradar et al., in IEEE Inter. Conf. Cogn. Comput. Inform. Process. (CCIP), 2015, pp. 1–6.
[28] M. Alçın et al., Optik, vol. 127, pp. 5500–5505, 2016.
[29] R. Tuntas, Applied Soft Computing, vol. 35, pp. 237–246, 2015.
[30] A. T. Özdemir et al., Turkish J. Electr. Eng. Comput. Sciences, vol. 23, Sup. 1, pp. 2089–2106, 2015.
[31] H. W. Lim et al., in ISOCC 2017, 2017, pp. 90–91.
[32] N. Aamer et al., in Proc. 2nd Inter. Conf. Communication and Electronics Systems (ICCES 2017), 2017, pp. 935–942.
[33] A. Accetta, in 2017 IEEE 26th Inter. Symp. Industrial Electron. (ISIE), 2017, pp. 926–933.
[34] J. A. Renteria-Cedano et al., in IEEE 57th Inter. Midwest Symp. Circuits and Systems (MWSCAS), 2014, pp. 209–212.
[35] B. Rajasekhar et al., in 2017 3rd Inter. Conf. Biosignals, Images and Instrumentation (ICBSII), 2017, pp. 1–6.
[36] G. D. J. Martinez-Figueroa et al., IEEE Access, vol. 5, pp. 14259–14274, 2017.
[37] J. Park et al., in IEEE Inter. Conf. Acoustics, Speech and Signal Process. (ICASSP), 2016, pp. 1011–1015.
[38] J. Lachmair et al., in IEEE 2017 Inter. Joint Conf. Neural Networks (IJCNN), 2017, pp. 4299–4308.
[39] M. A. A. de Sousa et al., in IEEE Inter. Joint Conf. Neural Networks (IJCNN), 2017, pp. 3930–3937.
[40] P. Antonik, Springer Theses, Recognizing Outstanding Ph.D. Research, 2018.
[41] J. C. Núñez-Perez et al., in Inter. Conf. Electron. Commun. Comput. (CONIELECOMP), 2014, pp. 237–242.
[42] H. M. Vu et al., in V. Bhateja, B. Nguyen, N. Nguyen, S. Satapathy, D. N. Le (eds.), Information Systems Design and Intelligent Applications, Advances in Intelligent Systems and Computing, vol. 672, Springer, Singapore, 2018.
[43] J. Zhu et al., in Inter. Conf. Field Programmable Logic Applicat., Springer, Berlin, Heidelberg, 2003, pp. 1062–1066.
TABLE I. ARTIFICIAL NEURAL NETWORK PROPERTIES FOR THE LITERATURE REVIEW
(Columns per row: work/year; ANN type; weighted summation (WS) / activation function (AF); input/hidden layers (neurons); data type; test data set size; training algorithm (TA) / learning algorithm (LA); application. "-" denotes not reported.)

[10]/2017 — MLP; AF: piecewise linearly approximated hyperbolic tangent (hidden layer) / tanh (hidden layer and output node); 8/6 layers; (a) 12/16/24-bit fixed point, (b) floating point; 104; TA: resilient backpropagation (RPROP); ECG anomaly detection.
[11]/2018 — Single-Layer Perceptron (SLP); WS: KCL computation; 1/1 layer (8 neurons/-); 14-bit; -; LA: spike-timing-dependent plasticity (STDP) unsupervised spatiotemporal pattern learning; memristor simulator.
[12]/2018 — Feed-forward back propagation (FFBP); AF: sigmoid; 1/1 layer; -; 40; TA: back propagation (BP); human blood identification device.
[13]/2017 — Multilayer feed-forward; AF: logistic sigmoid with CORDIC (Coordinate Rotation Digital Computer) algorithm; -/1 layer (-/300 neurons); 32-bit single precision floating point; 60000; -; MNIST handwritten digit recognition.
[14]/2017 — MLP-BP topology; AF: bipolar sigmoid (hidden neurons), ramp (output neurons); 1/1 layer (3 neurons/4 neurons); fixed point; 6000; TA: BP; brain research.
[15]/2017 — Multilayer feed-forward neural network (FFNN); AF: sigmoid; 1/8 layers (5 neurons/-); -; -; -; forest fire detection in WSN.
[16]–[20]/2014–2017 — -; AF: tangent sigmoid; [16]–[17]: 8/6 layers, [18]: 12/8 layers, [19]–[20]: 12/10 layers; 14-bit; -; TA: Levenberg-Marquardt (LM); detection of neutrino-induced air showers.
[21]/2017 — -; -; 1/1 layer (3 neurons/8 neurons); 32-bit fixed point; 10000; TA: Levenberg-Marquardt (LM), Bayesian Regularization (BR), and Scaled Conjugate Gradient (SCG); secure communication.
[22]/2016 — MLP; AF: sigmoid; 1/6 layers (8 neurons/-); floating point; 1000; TA: BP; fault detection in automotive systems (fault diagnosis of a diesel engine).
[23]/2016 — Feed-forward; AF: sigmoid; 1/1 layer; -; -; TA: BP; pattern recognition module for an artificial olfactory system to recognize different types of coffee (e-Nose).
[24]/2014 — Recurrent high-order neural network (RHONN); AF: hyperbolic tangent; -; 16-bit fixed point; 1400; TA: Extended Kalman Filter (EKF); glucose level regulation for diabetes mellitus type 1 patients.
[25]/2015 — Pipelined architecture; AF: hyperbolic tangent; 6/2 layers (10 neurons/5 neurons); 18-bit fixed point; -; TA: BP; positron emission tomography (PET).
[26]/2015 — MLP; -; 1/1 layer (1 neuron/7 neurons); -; -; TA: BP; Trax solver (game solver).
[27]/2015 — Multilayer FFNN; AF: hyperbolic tangent; 1/2 layers (3 neurons/9 neurons); 16-bit; -; TA: BP; function approximation.
[28]/2016 — FFNN; AF: log-sigmoid; 1/1 layer (3 neurons/8 neurons); 32-bit single precision floating point; 200000; TA: BP; Pehlivan–Uyaroglu chaotic system.
[29]/2015 — MLP; AF: tangent sigmoid (hidden neurons), linear (output neurons); 1/2 layers (4 neurons/19 neurons); fixed point; -; TA: Levenberg–Marquardt (LM) back propagation; modified Van der Pol–Duffing oscillator circuit.
[30]/2015 — MLP-BP; AF: piecewise linear sigmoid (PLS) / sigmoid; 1/1 layer (8 neurons/2 neurons); 16-bit fixed point / 32-bit single precision floating point; -; TA: BP; mobile ANN-based automatic ECG arrhythmia classifier.
[31]/2017 — -; -; -; -; 14; -; atrial fibrillation classifier.
[32]/2017 — FFNN; -; -; -; 3000; TA: BP; VLSI approach for autonomous robot navigation.
[33]/2017 — Self-supervised neural network: Growing Neural Gas (GNG); -; -; fixed point; -; TA: Growing Neural Gas algorithm; virtual anemometer for MPPT of wind energy conversion systems.
[34]/2014 — Recurrent neural network: NARX; AF: hyperbolic tangent; 1/1 layer; floating point; -; -; modeling the inverse characteristics of power amplifiers (GaN class-F PA working with an LTE signal centered at 2 GHz).
[35]/2017 — FFNN; AF: sigmoid; 1/1 layer (-/20 neurons); -; -; TA: BP; emotion recognition from speech signal.
[36]/2017 — FFNN; AF: log-sigmoid; 1/1 layer (3 neurons/20 neurons); -; 1000; TA: Levenberg-Marquardt; smart sensor for detection and classification of power quality disturbances.
[37]/2016 — Feed-forward deep neural network; AF: logistic sigmoid; 1/3 layers; 8-bit fixed point; -; TA: BP, LA: unsupervised greedy RBM (Restricted Boltzmann Machine) learning algorithm; MNIST handwritten digit recognition benchmark and a phoneme recognition task on the TIMIT corpus.
[38]/2017 — Self-Organizing Map (SOM) ANN; -; -; 16-bit fixed point; -; LA: self-organizing map (SOM); data mining.
[39]/2017 — Self-Organizing Map (SOM) ANN; -; -; 16-bit fixed point; -; LA: self-organizing map, SOM1/SOM2; telecommunication / video categorization in autonomous surveillance systems.
[40]/2018 — FFNN; AF: tangent sigmoid; 1/2 layers; 16-bit; 600; -; intravascular OCT scans.
[41]/2014 — MLP; AF: tangent sigmoid; -; -; -; TA: Levenberg–Marquardt (LM) back propagation; RF power amplifier.
[42]/2018 — MHL-ANN (multiple-hardware-layer) / SHL-ANN (single-hardware-layer); AF: logistic sigmoid; 1/1 layer (20 neurons/12 neurons) / 1/2 layers (784 neurons/80 neurons); 16-bit half-precision floating point; 10000; TA: BP with stochastic gradient descent (SGD); handwritten digit recognition with the MNIST database (28×28 pixel images = 784 pixels).
TABLE II. FPGA HW RESOURCES
(Columns per row: work/year; FPGA family/type; total elements/other features; implementation tool; accuracy. "-" denotes not reported.)

[10]/2017 — Zynq (ARM processor); DSPs: (a) 28 / (b) 42, flip-flops: (a) 1772 / (b) 9295, LUTs: (a) 1895 / (b) 15163, latency: (a) 87 / (b) 1208; Vivado HLS tool; accuracy: (a) 99.81% / (b) 99.59%.
[11]/2018 — Altera Cyclone II EP2C70F672C6; total logic elements: 306, total registers: 3751; Quartus II and ModelSim tools; -.
[12]/2018 — Xilinx Spartan 3S1000; -; Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL); accuracy: 97.5% for 96×96 pixel resolution.
[13]/2017 — Altera Cyclone IV EP4CE115; total logic elements: 6618, total combinational functions: 5906, dedicated logic registers: 2772, embedded 9-bit multiplier elements: 21; Quartus II with Altera Verilog; -.
[14]/2017 — Zynq 7020; maximum frequency: 35.15 MHz, 4-input LUTs: 1954, registers: 364, slices: 605, DSPs: 24, BRAMs: 20, total on-chip power: 0.182 W; -; -.
[15]/2017 — Virtex-5; maximum frequency: 604.27 MHz, slice registers: 45, slice LUTs: 16, logic elements: 16, LUT-FF: 53, bonded IOBs: 45; Verilog HDL code utilizing ModelSim (Altera 6.4a starter); -.
[16]–[20]/2014–2017 — Cyclone V E 5CEFA9F31I7; [16]–[17]: maximum frequency 172.98 MHz @ 100°C, registers: 3839, DSP (18×18): 92, adaptive logic modules (ALMs): 2189; [18]: multipliers in ALMs: 1247, multipliers in ALMs and DSPs: 107/8; [19]–[20]: multipliers in ALMs: 41151; CORSIKA and Offline simulation packages, AHDL code; -.
[21]/2017 — -; -; -; -.
[22]/2016 — Xilinx Spartan-6 XC6SLX45T; flip-flops: 11401, LUTs: 17175, BRAMs: 0, DSPs: 84, latency: 105; STM32 platform (STM32F407FZ); -.
[23]/2016 — Virtex-4 SX 4VSX35; LUTs: 332, RAMB16s: 4, DSP48s: 23, maximum frequency: 122.489 MHz; end-user programming platform, VHDL; -.
[24]/2014 — Altera DE2-115 Cyclone IV EP4CE115F29C7; logic elements: 14262, registers: 3059, embedded 9-bit multiplier elements: 40, power consumption: 142.18 mW; Verilog; -.
[25]/2015 — Virtex-2 Pro XC2VP50; maximum frequency: 50 MHz, slices: <6000 (5463 slices), memory blocks: 19, multipliers: 45; -; accuracy: 97.99%.
[26]/2015 — Altera Arria II GX (40-nm process); maximum frequency: 75 MHz, combinational ALUTs: 79015, memory ALUTs: 473, dedicated logic registers: 25620, total block memory bits: 4208006, total DSP blocks: 0, total PLLs: 1; VHDL and Quartus II ver. 14.1; -.
[27]/2015 — Virtex-5 XUPV5-LX110T; slice LUTs: 2875, slice registers: 2014, bonded IOBs: 4, block RAM/FIFO: 2, DSP48Es: 3, memory: 72 kB; -; -.
[28]/2016 — Xilinx Virtex-6 XC6VCX240T; slice registers: 86329, slice LUTs: 87207, fully used LUT-FF pairs: 67624, bonded IOBs: 195, maximum frequency: 266.429 MHz; VHDL/Matlab; -.
[29]/2015 — Xilinx Virtex-II Pro XC2V1000; CLKs: 2, slices: 1236, slice flip-flops: 329, MULT18X18s: 13, 4-input LUTs: 2134, bonded IOBs: 82; VHDL; -.
[30]/2015 — Altera Cyclone III EP3C120F780; frequency: 50 MHz, logic elements: 1814/23189, DSP elements: 40/220, logic registers: 784/10816; -; accuracy: 96.54% / 97.66%.
[31]/2017 — Cyclone IV; logic elements: 40830; Altera DE2-115; accuracy: 95.3%.
[32]/2017 — Xilinx Virtex-II Pro; maximum frequency: 357.5 MHz, slices: 492, slice flip-flops: 371, 4-input LUTs: 942, bonded IOBs: 44; Matlab; -.
[33]/2017 — Altera Cyclone III EP3C25F324; logic elements: 22148, registers: 2265, memory bits: 6528, embedded 9-bit multipliers: 32; VHDL, Matlab; -.
[34]/2014 — Virtex-6 FPGA ML605 evaluation kit; slice registers: 38572, slice LUTs: 29057, LUT-FF: 21491, bonded IOBs: 3, block RAMs: 225, DSPs: 64, maximum frequency: 95.511 MHz; Verilog language, Matlab, SystemVue, Xilinx ISE tool; -.
[35]/2017 — -; slices: 2817, FFs: 2916, LUTs: 2900, bonded IOBs: 16, GCLKs: 1; -; -.
[36]/2017 — Altera DE2-115 Cyclone IV E EP4CE115F297C; logic elements: 2000, registers: 580, 9-bit multipliers: 4, memory bits: 153396, maximum frequency: 61.75 MHz; VHDL, Matlab; -.
[37]/2016 — Xilinx XC7Z045; (digit recognition / phoneme recognition) FFs: 136677/161923, LUTs: 213593/137300, BRAMs: 750.5/378, DSPs: 900/0; -; -.
[38]/2017 — Xilinx Virtex-5 V5FX100T / Xilinx Virtex-7 V7FX690T; slice registers: 45036/211793, slice LUTs: 57679/273055, BRAM/FIFOs: 226/1421, DSPs: 100/700, power consumption: 23.5 W/44 W; -; -.
[39]/2017 — Virtex-5 XC5VLX50T (Xilinx ISE platform); LUTs: 8845/17945, chip utilization: 30%/62%, maximum frequency: 2.38 MHz/1.51 MHz; Xilinx ISim simulator tool; -.
[40]/2018 — Xilinx VC707 evaluation board, Virtex-7 XC7VX485T-2FFG1761C; FFs: 29609, LUTs: 21065, block RAMs: 1028 (16 kb), DSP48Es: 201, maximum frequency: 180 MHz; VHDL; -.
[41]/2014 — Altera Cyclone III Edition; logic elements: 2720, memory bits: 1549824, logic registers: 1348, 9-bit multipliers: 170; Matlab-Simulink software; -.
[42]/2018 — Xilinx Virtex-5 XC5VLX110T; FFs: 24025/44079, LUTs: 28340/63454, block RAMs: 22/40, DSPs: 22/40, maximum frequency: 205 MHz/197 MHz; VHDL; accuracy: 90.88% / 97.20% recognition rate.