International Journal of VLSI design & Communication Systems (VLSICS) Vol.3, No.2, April 2012
DOI : 10.5121/vlsic.2012.3220
Analog VLSI Implementation of Neural Network
Architecture for Signal Processing
Neeraj Chasta 1, Sarita Chouhan2 and Yogesh Kumar3
1Department of Electronics and Communication, Mewar University, Chittorgarh, Rajasthan,
India
neeraj.chasta@gmail.com
2Department of Electronics and Communication, MLVTEC, Bhilwara, Rajasthan, India
sarita.mlvtec@yahoo.co.in
3Electronics and Communication, MLVTEC Bhilwara, Rajasthan, India
yogeshkumar989@ymail.com
Abstract
With the advent of new technologies and advances in medical science, we are trying to process information artificially in the way our biological system does inside the body. Artificial intelligence, though a biological notion, is realized through mathematical equations and artificial neurons. Our main focus is the implementation of a Neural Network Architecture (NNA) with on-chip learning in analog VLSI for generic signal processing applications. In this paper, analog components such as the Gilbert Cell Multiplier (GCM) and the Neuron Activation Function (NAF) are used to implement an artificial NNA. The analog components comprise multipliers and adders along with a tan-sigmoid function circuit built from MOS transistors operating in the subthreshold region. The neural architecture is trained using the Back Propagation (BP) algorithm in the analog domain with new techniques of weight storage. Layout design and verification of the proposed design are carried out using the Tanner EDA 14.1 tool and Synopsys T-Spice. The technology used in designing the layouts is MOSIS/HP 0.5u SCN3M, Tight Metal.
Keywords
Neural Network Architecture, Back Propagation Algorithm.
1. Introduction
1.1 Artificial Intelligence
Intelligence is the computational part of the ability to achieve goals in the world. Intelligence is essentially a biological notion, acquired from past experience. The science that defines intelligence mathematically is known as Artificial Intelligence (AI). AI is implemented using artificial neurons, and these artificial neurons comprise several analog components. This paper is a step toward the implementation of a neural network architecture using the back propagation algorithm for data compression. The selected neuron comprises a multiplier and an adder along with the tan-sigmoid function. The training algorithm is performed in the analog domain, so the whole neural architecture is an analog structure.
Figure 1: Neural Network
The neuron of Figure 1 can be expressed mathematically as
a = f(p1w1 + p2w2 + p3w3 + bias)
where a is the output of the neuron, p is an input and w is the corresponding neuron weight. The bias is optional and user defined. A neuron in a network is itself a simple processing unit: it carries an associated weight for each input to strengthen it, sums all the weighted inputs, and produces an output to be passed on. The neural architecture is a feed-forward network trained using the back propagation algorithm. The designed neuron is suitable for both analog and digital applications. The proposed neural architecture is capable of operations such as sine wave learning, amplification and frequency multiplication, and can also be used for other analog signal processing tasks.
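As a reference for this behaviour, the following Python sketch models the neuron equation above, with tanh standing in for the tan-sigmoid activation; the input and weight values are illustrative, not taken from the paper.

```python
import numpy as np

def neuron(p, w, bias=0.0):
    """Single neuron: weighted sum of inputs passed through a
    tan-sigmoid (tanh) activation, a = f(sum_i p_i*w_i + bias)."""
    return np.tanh(np.dot(p, w) + bias)

# Example with three inputs, as in Figure 1 (values are illustrative).
p = np.array([0.5, -0.2, 0.8])   # inputs p1..p3
w = np.array([0.4, 0.9, -0.3])   # weights w1..w3
print(neuron(p, w, bias=0.1))
```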
1.2 Multiple Layers of Neurons
Connecting sets of single-layer neurons to one another forms a multiple-layer network, as shown in figure 2.
Figure 2: Layered structure of Neural Network
As the figure shows, weights w11 to w16 connect the inputs v1 and v2 to the neurons in the hidden layer. Weights w21 to w23 then transfer the output of the hidden layer to the output layer. The final output is a21.
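A short behavioural model of this layered structure follows. The sketch assumes the 2:3:1 topology of figure 2 with tanh activations; the weight values are placeholders, since the trained analog weights are not given in the text.

```python
import numpy as np

def layer(v, W, b):
    """One layer: multiply inputs by the weight matrix, sum, apply tanh."""
    return np.tanh(W @ v + b)

# 2:3:1 network of Figure 2: weights w11..w16 feed the hidden layer,
# weights w21..w23 feed the output neuron (all values illustrative).
v = np.array([0.3, -0.6])             # inputs v1, v2
W1 = np.array([[ 0.2, -0.5],          # w11..w16 arranged as a 3x2 matrix
               [ 0.7,  0.1],
               [-0.4,  0.9]])
W2 = np.array([[0.6, -0.8, 0.3]])     # w21..w23
b1, b2 = np.zeros(3), np.zeros(1)

a1 = layer(v, W1, b1)    # hidden-layer outputs
a21 = layer(a1, W2, b2)  # final output a21
print(a21)
```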
2. Architecture
2.1 Analog Components for Neural Architecture
The inputs to the neuron, v1 and v2 as shown in figure 2, are multiplied by the weight matrix; the resulting products are summed and passed through an NAF. The output of the activation function is then passed to the next layer for further processing. The blocks used are the multiplier block, the adders, and the NAF block with its derivative.
2.1.1 Multiplier Block
The Gilbert cell is used as the multiplier block. The schematic of the Gilbert cell is shown in figure 3.
Figure 3: Gilbert cell schematic with current mirror circuit
Since this paper focuses mainly on the layout, the layout of the Gilbert cell multiplier is shown in figure 4. Layout design and verification of the proposed design are carried out using the Tanner EDA 14.1 tool and Synopsys T-Spice; the technology used in designing the layouts is MOSIS/HP 0.5u SCN3M, Tight Metal.
Figure 4: Layout of Gilbert cell multiplier
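For intuition about what the cell computes, a simple large-signal behavioural model of a subthreshold Gilbert cell is sketched below. The tail current Ib and slope factor n are assumed values for illustration, not parameters extracted from the layout of figure 4.

```python
import numpy as np

# Idealized large-signal model of a subthreshold Gilbert cell:
#   Iout ~ Ib * tanh(v1/(2*n*Vt)) * tanh(v2/(2*n*Vt)),
# which approaches K*v1*v2 (four-quadrant multiplication) for small
# inputs. Ib and n are assumed, not extracted from the authors' layout.
Ib = 10e-6          # tail bias current (A), assumed
n, Vt = 1.5, 0.026  # subthreshold slope factor and thermal voltage (V)

def gilbert_iout(v1, v2):
    return Ib * np.tanh(v1 / (2 * n * Vt)) * np.tanh(v2 / (2 * n * Vt))

# Small-input check: the output current tracks the product v1*v2.
print(gilbert_iout(5e-3, -5e-3))
```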
2.1.2 Adders
The output of the Gilbert cell is a current (set by its transconductance). The node of the Gilbert cell connecting the respective outputs therefore acts as an adder itself.
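Behaviourally, this current-mode addition is just a sum of the currents meeting at the node, as the minimal sketch below illustrates; the current values are arbitrary.

```python
# Current-mode addition: tying the Gilbert-cell output currents to a
# common node sums them by Kirchhoff's current law, so no separate
# adder circuit is needed.
def current_node_sum(currents):
    return sum(currents)

print(current_node_sum([2.1e-6, -0.7e-6, 1.3e-6]))  # net node current (A)
```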
2.1.3 Neuron Activation Function (NAF)
The designed activation function is the tan-sigmoid. The proposed design is a differential amplifier modified for differential output. The same circuit can produce both the activation function and its derivative. Two designs are considered for the NAF:
1. Differential amplifier as NAF
2. Modified differential amplifier as NAF with differentiation output.
2.1.3.1 Differential Amplifier Design as a Neuron Activation Function (tan)
This block is named tan in the final schematics for the neural architecture. A differential amplifier designed to operate in the subthreshold region acts as a neuron activation function.
Figure 5: Schematic of NAF
2.1.3.2 Modified Differential Amplifier Design for Differentiation Output (fun)
The schematic of the design shown in figure 5 is used for the tan-sigmoid function generator, with a modification for the differential output. The derivative of the NAF can be derived from the same differential-pair configuration; only the output structure has to be modified.
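For a behavioural view of the two blocks, the sketch below models the tan block as tanh and the fun block as its analytic derivative; in hardware both come from the same differential pair, which this model does not attempt to capture.

```python
import numpy as np

def naf(v):
    """Tan-sigmoid neuron activation function (the 'tan' block)."""
    return np.tanh(v)

def naf_deriv(v):
    """Derivative of the tan-sigmoid, 1 - tanh(v)**2 (the 'fun' block),
    computed analytically here rather than from the modified pair."""
    return 1.0 - np.tanh(v) ** 2

v = np.linspace(-2, 2, 5)
print(naf(v), naf_deriv(v))
```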
2.1.3.3 Layout issues
As discussed above, the tan-sigmoid function is used to realize the NAF; a mathematical analysis showed that the tan-sigmoid is best suited for compression while producing a tangential output. The layout of the NAF is shown in figure 6.
2.2 Realization of Neural Architecture using Analog Components
The components designed in the previous section are used to implement the neural architecture. The tan block is the differential amplifier designed in section 2.1.3.1; it serves as the neuron activation function and is also used for multiplication. The mult block is the Gilbert cell multiplier designed in section 2.1.1. The fun block is the neuron activation function circuit with differential output designed in section 2.1.3.2.
Figure 7: Implementation of the Neural Architecture using Analog Blocks
Figure 7 shows exactly how the neural architecture of figure 2 is implemented using analog components. The input layer feeds the 2:3:1 network. The hidden layer is connected to the input layer by the first-layer weights, named w1i; the output layer is connected to the hidden layer through the weights w2j. The op signal is the output of the 2:3:1 network. The schematic of the circuit for image compression is shown in figure 8.
Figure 8: Schematic of Compression Block
The layout of the circuit for image compression is shown in figure 9.
Figure 9: Layout of compression block
2.3 Neuron Application - Image Compression and Decompression
The proposed neural architecture is used for image compression. The image's pixel intensities are fed to the network shown in figure 8 for compression, and the compressed image then acts as the input to the decompression block. The layout of the decompression block is shown in figure 11.
The proposed 2:3:1 network has an inherent capability to compress its inputs, as there are two inputs and one output. The compression achieved is 97.30%. Since the inputs are fed to the network in analog form, there is no need for analog-to-digital converters.
Figure 10 shows the schematic of the decompression block.
Figure 10: Schematic of Decompression Block
Figure 11: Layout of Decompression Block
This is one of the major advantages of this work. A 1:3:2 neural network is designed for decompression; it has three neurons in the hidden layer and two in the output layer.
Figure 12 shows the compression and decompression scheme. The network is trained with the back propagation algorithm; the error propagates from the decompression block back to the compression block.
Once the network is trained for different inputs, the two architectures are separated and can be used independently as compression and decompression blocks.
Figure 12: Image Compression and Decompression using proposed neural architecture
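A compact behavioural sketch of this scheme is given below: a 2:3:1 compressor feeding a 1:3:2 decompressor, with randomly initialized placeholder weights standing in for the trained analog weights. After joint back propagation training, the two halves can be run independently, as in figure 12.

```python
import numpy as np

rng = np.random.default_rng(0)
tanh = np.tanh

# 2:3:1 compressor followed by 1:3:2 decompressor (placeholder weights).
Wc1, Wc2 = rng.normal(size=(3, 2)), rng.normal(size=(1, 3))
Wd1, Wd2 = rng.normal(size=(3, 1)), rng.normal(size=(2, 3))

def compress(v):
    return tanh(Wc2 @ tanh(Wc1 @ v))   # two inputs -> one output

def decompress(c):
    return tanh(Wd2 @ tanh(Wd1 @ c))   # one input -> two outputs

v = np.array([0.4, -0.1])
recon = decompress(compress(v))
print(recon)  # after joint BP training, recon should approximate v
```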
The combined schematic of the compression and decompression blocks is shown in figure 13, and the corresponding layout in figure 14:
Figure 13: Combined Schematic of Compression and Decompression Block
Figure 14: Combined Layout of Compression and Decompression block
3. Back Propagation Algorithm
Backpropagation is the most common method of training artificial neural networks so as to minimize an objective function. It is a generalization of the delta rule and is mainly used for feed-forward networks. The algorithm is best understood by dividing it into two phases: propagation and weight update.
1. Propagation
a) Forward propagation of a training pattern's input through the neural network in order to generate the propagation's output activations.
b) Backward propagation of the output activations through the neural network, using the training pattern's target, in order to generate the deltas of all output and hidden neurons.
2. Weight Update
a) For each weight synapse, multiply its output delta and input activation to get the gradient of the weight.
b) Move the weight in the opposite direction of the gradient by subtracting a fraction of the gradient from the weight.
This fraction, called the learning rate, influences the speed and quality of learning. The sign of the gradient of a weight indicates where the error is increasing, which is why the weight must be updated in the opposite direction, as sketched below. These phases are repeated until the performance of the network is satisfactory.
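The weight-update phase amounts to Δw = −η · δ · a, where δ is the output delta, a the input activation and η the learning rate. A minimal sketch, with illustrative numbers:

```python
def update_weight(w, delta, activation, eta=0.1):
    """Backpropagation weight update: the gradient of the error with
    respect to a weight is (output delta) * (input activation); the
    weight moves a fraction eta (the learning rate) against it."""
    grad = delta * activation
    return w - eta * grad

print(update_weight(w=0.5, delta=0.12, activation=0.8))
```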
Figure 15: Notation for three-layered network
There are (n + 1) × k weights between the input sites and the hidden units, and (k + 1) × m between the hidden and output units. Let $\overline{\mathbf{W}}_1$ denote the (n + 1) × k matrix with component $w_{ij}^{(1)}$ at the i-th row and j-th column. Similarly, let $\overline{\mathbf{W}}_2$ denote the (k + 1) × m matrix with components $w_{ij}^{(2)}$. We use an overlined notation to emphasize that the last row of both matrices corresponds to the biases of the computing units. The matrix of weights without this last row will be needed in the backpropagation step. The n-dimensional input vector $\mathbf{o} = (o_1, \ldots, o_n)$ is extended, transforming it to $\hat{\mathbf{o}} = (o_1, \ldots, o_n, 1)$. The excitation $\mathrm{net}_j$ of the j-th hidden unit is given by
$$\mathrm{net}_j = \sum_{i=1}^{n+1} w_{ij}^{(1)} \hat{o}_i.$$
The activation function is a sigmoid, and the output $o_j^{(1)}$ of this unit is thus
$$o_j^{(1)} = s(\mathrm{net}_j) = \frac{1}{1 + e^{-\mathrm{net}_j}}.$$
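The sketch below transcribes this matrix notation directly, with assumed sizes n = 4 and k = 3 and random placeholder weights; the last row of the weight matrix holds the biases and the input vector is extended with a constant 1.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n, k = 4, 3                           # assumed sizes for illustration
rng = np.random.default_rng(1)
W1_bar = rng.normal(size=(n + 1, k))  # last row holds the biases

o = rng.normal(size=n)                # input vector (o1, ..., on)
o_hat = np.append(o, 1.0)             # extended input (o1, ..., on, 1)

net = o_hat @ W1_bar                  # net_j for each hidden unit j
o1 = sigmoid(net)                     # hidden-layer outputs o_j^(1)
print(o1)
```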
After the weights of the network have been chosen randomly, the backpropagation algorithm is used to compute the necessary corrections. The algorithm can be decomposed into the following four steps:
(i) Feed-forward computation
(ii) Backpropagation to the output layer
(iii) Backpropagation to the hidden layer
(iv) Weight updates
The algorithm is stopped when the value of the error function has become sufficiently small.
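Putting the four steps together, here is a minimal sketch of the training loop for a bias-free 2:3:1 tanh network; the learning rate, initialization and error threshold are assumptions for illustration, not the paper's analog implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(scale=0.5, size=(3, 2))  # hidden weights, 2:3:1 network
W2 = rng.normal(scale=0.5, size=(1, 3))  # output weights (biases omitted)
eta = 0.1                                # learning rate, assumed

def train_step(v, target):
    # (i) feed-forward computation
    h = np.tanh(W1 @ v)
    y = np.tanh(W2 @ h)
    err = y - target
    # (ii) backpropagation to the output layer
    d2 = err * (1 - y**2)
    # (iii) backpropagation to the hidden layer
    d1 = (W2.T @ d2) * (1 - h**2)
    # (iv) weight updates
    W2 -= eta * np.outer(d2, h)
    W1 -= eta * np.outer(d1, v)
    return 0.5 * float(err @ err)

v, target = np.array([0.2, -0.4]), np.array([0.3])
for _ in range(100_000):
    error = train_step(v, target)
    if error < 1e-6:    # stop when the error function is small enough
        break
print(error)
```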
4. Results and Discussions
4.1. Simulation Result for Gilbert cell Multiplier
The designed Gilbert cell is simulated using T-Spice. The simulation result shown in figure 16 is the multiplication of two voltages, v1 and v2: v1 is 4 Vpp at 100 MHz and v2 is 2 Vpp at 10 MHz. The output waveform is the product of v1 and v2 as computed by the circuit. The output amplitude is 5.6 Vpp, and the output matches the theoretical product waveform.
Figure 16: Multiplication operation of Gilbert cell multiplier
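As a sanity check on the waveform shape, an ideal multiplier driven by the same signals can be computed directly: the product of the 100 MHz and 10 MHz sinusoids contains 90 MHz and 110 MHz components and swings 4 Vpp for a unity-gain multiplier, so the reported 5.6 Vpp includes the cell's conversion gain. A sketch:

```python
import numpy as np

# Ideal-multiplier reference for the Figure 16 test: v1 is 4 Vpp at
# 100 MHz, v2 is 2 Vpp at 10 MHz. Their product contains sum and
# difference frequencies (110 MHz and 90 MHz components).
t = np.linspace(0, 200e-9, 2001)
v1 = 2.0 * np.sin(2 * np.pi * 100e6 * t)   # 4 Vpp
v2 = 1.0 * np.sin(2 * np.pi * 10e6 * t)    # 2 Vpp
product = v1 * v2
print(product.max() - product.min())       # ~4 V for unity-gain product
```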
4.2. Simulation Result for Neuron activation function
Figure 17: Circuit output for Neuron Activation function block (tan)
4.4 Image Compression using Neural Architecture
Image compression is performed separately using the neural architecture. The simulation results for image compression are shown in figure 18. The input v1 was a 2 Vpp sine wave at 10 MHz and the input v2 was a 4 Vpp sine wave at 100 MHz. The output of compression is a DC signal of 70 mV.
Figure 18: Simulation of Image compression
4.5 Image Decompression using Neural Architecture
Separately decompressing a 3 Vpp sine wave at 100 MHz gives a decompressed output of 7 Vpp. The simulation result for image decompression is shown in figure 19.
Figure 19: Simulation of Image decompression
4.6 Image Compression and Decompression using Neural Architecture
The neural architecture is extended to the application of image compression and decompression. The simulation results for image compression and decompression are shown in figure 20. Input 1 was a 16 Vpp sine wave at 10 MHz and input 2 was a 9 Vpp sine wave at 100 MHz. The compressed output was a DC signal of 4 Vpp.
Figure 20: Image compression and Decompression Simulation
The decompressed output was a 4.5 Vpp sine wave for input 1 and a 4.5 Vpp sine wave for input 2. The compression achieved is 97.22%, and the average decompression achieved is 60.94%.
5. Conclusion
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. Owing to their adaptive learning, self-organization, real-time operation and fault tolerance via redundant information coding, they can be used for modelling and diagnosing the cardiovascular system and in electronic noses, which have several potential applications in telemedicine. Another application developed was the "Instant Physician", which returns the "best" diagnosis and treatment. This work can be further extended to implement a high-speed, low-power neuro-fuzzy system in current-mode CMOS (with layout extraction), as well as nanoscale circuit simulation using Double Gate MOSFET (DG MOSFET) models.
Authors
1. Neeraj K. Chasta received his B.Eng. degree in Electronics & Communication from the University of Rajasthan, Jaipur, India in 2007, and the M.Tech degree in Information & Communication Technology with specialization in VLSI from the Dhirubhai Ambani Institute of Information & Communication Technology (DA-IICT), Gandhinagar, India in 2010. His current research interests are analog and mixed-signal design, with a low-power, high-speed emphasis, and VLSI testing.
2. Sarita Chouhan received her B.Eng. degree in Electronics & Communication from Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal, India in 2002 and the M.Tech degree in VLSI design from Mewar University, Chittorgarh, Rajasthan, India. She is currently an Assistant Professor in the Electronics and Communication department at Manikya Lal Verma Textile & Engineering College, Bhilwara, Rajasthan, India. Her areas of interest are applied electronics, microelectronics, VLSI, VHDL, Verilog, EDA, analog CMOS design and low-power optimization.
3. Yogesh Kumar is a final-year B.Tech student pursuing his degree in Electronics & Communication at Rajasthan Technical University, Kota, India. His areas of interest are VLSI design and MATLAB. He has presented papers at national and international conferences on analog CMOS design for signal processing and is currently working on image processing.

More Related Content

What's hot

Objective Evaluation of a Deep Neural Network Approach for Single-Channel Spe...
Objective Evaluation of a Deep Neural Network Approach for Single-Channel Spe...Objective Evaluation of a Deep Neural Network Approach for Single-Channel Spe...
Objective Evaluation of a Deep Neural Network Approach for Single-Channel Spe...csandit
 
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...cscpconf
 
Introduction to Genetic Algorithm
Introduction to Genetic AlgorithmIntroduction to Genetic Algorithm
Introduction to Genetic AlgorithmAdri Jovin
 
Introductory Session on Soft Computing
Introductory Session on Soft ComputingIntroductory Session on Soft Computing
Introductory Session on Soft ComputingAdri Jovin
 
International Refereed Journal of Engineering and Science (IRJES)
International Refereed Journal of Engineering and Science (IRJES)International Refereed Journal of Engineering and Science (IRJES)
International Refereed Journal of Engineering and Science (IRJES)irjes
 
Image watermarking based on integer wavelet transform-singular value decompos...
Image watermarking based on integer wavelet transform-singular value decompos...Image watermarking based on integer wavelet transform-singular value decompos...
Image watermarking based on integer wavelet transform-singular value decompos...IJECEIAES
 
ARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTOR
ARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTORARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTOR
ARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTORijac123
 
3.a heuristic based_multi-22-33
3.a heuristic based_multi-22-333.a heuristic based_multi-22-33
3.a heuristic based_multi-22-33Alexander Decker
 
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...IJECEIAES
 
Efficiency of Neural Networks Study in the Design of Trusses
Efficiency of Neural Networks Study in the Design of TrussesEfficiency of Neural Networks Study in the Design of Trusses
Efficiency of Neural Networks Study in the Design of TrussesIRJET Journal
 
A broad ranging open access journal Fast and efficient online submission Expe...
A broad ranging open access journal Fast and efficient online submission Expe...A broad ranging open access journal Fast and efficient online submission Expe...
A broad ranging open access journal Fast and efficient online submission Expe...ijceronline
 
NEURAL NETWORK FOR THE RELIABILITY ANALYSIS OF A SERIES - PARALLEL SYSTEM SUB...
NEURAL NETWORK FOR THE RELIABILITY ANALYSIS OF A SERIES - PARALLEL SYSTEM SUB...NEURAL NETWORK FOR THE RELIABILITY ANALYSIS OF A SERIES - PARALLEL SYSTEM SUB...
NEURAL NETWORK FOR THE RELIABILITY ANALYSIS OF A SERIES - PARALLEL SYSTEM SUB...IAEME Publication
 
1.meena tushir finalpaper-1-12
1.meena tushir finalpaper-1-121.meena tushir finalpaper-1-12
1.meena tushir finalpaper-1-12Alexander Decker
 

What's hot (19)

Ft3111351140
Ft3111351140Ft3111351140
Ft3111351140
 
Objective Evaluation of a Deep Neural Network Approach for Single-Channel Spe...
Objective Evaluation of a Deep Neural Network Approach for Single-Channel Spe...Objective Evaluation of a Deep Neural Network Approach for Single-Channel Spe...
Objective Evaluation of a Deep Neural Network Approach for Single-Channel Spe...
 
071bct537 lab4
071bct537 lab4071bct537 lab4
071bct537 lab4
 
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
 
Introduction to Genetic Algorithm
Introduction to Genetic AlgorithmIntroduction to Genetic Algorithm
Introduction to Genetic Algorithm
 
Introductory Session on Soft Computing
Introductory Session on Soft ComputingIntroductory Session on Soft Computing
Introductory Session on Soft Computing
 
International Refereed Journal of Engineering and Science (IRJES)
International Refereed Journal of Engineering and Science (IRJES)International Refereed Journal of Engineering and Science (IRJES)
International Refereed Journal of Engineering and Science (IRJES)
 
Image watermarking based on integer wavelet transform-singular value decompos...
Image watermarking based on integer wavelet transform-singular value decompos...Image watermarking based on integer wavelet transform-singular value decompos...
Image watermarking based on integer wavelet transform-singular value decompos...
 
ARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTOR
ARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTORARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTOR
ARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTOR
 
3.a heuristic based_multi-22-33
3.a heuristic based_multi-22-333.a heuristic based_multi-22-33
3.a heuristic based_multi-22-33
 
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
 
Efficiency of Neural Networks Study in the Design of Trusses
Efficiency of Neural Networks Study in the Design of TrussesEfficiency of Neural Networks Study in the Design of Trusses
Efficiency of Neural Networks Study in the Design of Trusses
 
A broad ranging open access journal Fast and efficient online submission Expe...
A broad ranging open access journal Fast and efficient online submission Expe...A broad ranging open access journal Fast and efficient online submission Expe...
A broad ranging open access journal Fast and efficient online submission Expe...
 
honn
honnhonn
honn
 
Capstone paper
Capstone paperCapstone paper
Capstone paper
 
Neural network
Neural networkNeural network
Neural network
 
NEURAL NETWORK FOR THE RELIABILITY ANALYSIS OF A SERIES - PARALLEL SYSTEM SUB...
NEURAL NETWORK FOR THE RELIABILITY ANALYSIS OF A SERIES - PARALLEL SYSTEM SUB...NEURAL NETWORK FOR THE RELIABILITY ANALYSIS OF A SERIES - PARALLEL SYSTEM SUB...
NEURAL NETWORK FOR THE RELIABILITY ANALYSIS OF A SERIES - PARALLEL SYSTEM SUB...
 
1.meena tushir finalpaper-1-12
1.meena tushir finalpaper-1-121.meena tushir finalpaper-1-12
1.meena tushir finalpaper-1-12
 
238 243
238 243238 243
238 243
 

Similar to Analog VLSI Implementation of Neural Network Architecture for Signal Processing

Implementation of Feed Forward Neural Network for Classification by Education...
Implementation of Feed Forward Neural Network for Classification by Education...Implementation of Feed Forward Neural Network for Classification by Education...
Implementation of Feed Forward Neural Network for Classification by Education...ijsrd.com
 
Artificial Neural Network Implementation On FPGA Chip
Artificial Neural Network Implementation On FPGA ChipArtificial Neural Network Implementation On FPGA Chip
Artificial Neural Network Implementation On FPGA ChipMaria Perkins
 
Analytical and Systematic Study of Artificial Neural Network
Analytical and Systematic Study of Artificial Neural NetworkAnalytical and Systematic Study of Artificial Neural Network
Analytical and Systematic Study of Artificial Neural NetworkIRJET Journal
 
Digital Implementation of Artificial Neural Network for Function Approximatio...
Digital Implementation of Artificial Neural Network for Function Approximatio...Digital Implementation of Artificial Neural Network for Function Approximatio...
Digital Implementation of Artificial Neural Network for Function Approximatio...IOSR Journals
 
A survey research summary on neural networks
A survey research summary on neural networksA survey research summary on neural networks
A survey research summary on neural networkseSAT Publishing House
 
A simplified design of multiplier for multi layer feed forward hardware neura...
A simplified design of multiplier for multi layer feed forward hardware neura...A simplified design of multiplier for multi layer feed forward hardware neura...
A simplified design of multiplier for multi layer feed forward hardware neura...eSAT Publishing House
 
modeling-a-perceptron-neuron-using-verilog-developed-floating-point-numbering...
modeling-a-perceptron-neuron-using-verilog-developed-floating-point-numbering...modeling-a-perceptron-neuron-using-verilog-developed-floating-point-numbering...
modeling-a-perceptron-neuron-using-verilog-developed-floating-point-numbering...RioCarthiis
 
DEEP LEARNING BASED BRAIN STROKE DETECTION
DEEP LEARNING BASED BRAIN STROKE DETECTIONDEEP LEARNING BASED BRAIN STROKE DETECTION
DEEP LEARNING BASED BRAIN STROKE DETECTIONIRJET Journal
 
Modeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologyModeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologytheijes
 
Implementing Neural Networks Using VLSI for Image Processing (compression)
Implementing Neural Networks Using VLSI for Image Processing (compression)Implementing Neural Networks Using VLSI for Image Processing (compression)
Implementing Neural Networks Using VLSI for Image Processing (compression)IJERA Editor
 
Optimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionOptimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionIAEME Publication
 
Optimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionOptimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionIAEME Publication
 
Optimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionOptimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionIAEME Publication
 
Neural Networks.pptx
Neural Networks.pptxNeural Networks.pptx
Neural Networks.pptxshahinbme
 
MODELLING AND SIMULATION OF 128-BIT CROSSBAR SWITCH FOR NETWORK -ONCHIP
MODELLING AND SIMULATION OF 128-BIT CROSSBAR SWITCH FOR NETWORK -ONCHIPMODELLING AND SIMULATION OF 128-BIT CROSSBAR SWITCH FOR NETWORK -ONCHIP
MODELLING AND SIMULATION OF 128-BIT CROSSBAR SWITCH FOR NETWORK -ONCHIPVLSICS Design
 
International Journal of Computational Engineering Research (IJCER)
International Journal of Computational Engineering Research (IJCER) International Journal of Computational Engineering Research (IJCER)
International Journal of Computational Engineering Research (IJCER) ijceronline
 
Efficient And Improved Video Steganography using DCT and Neural Network
Efficient And Improved Video Steganography using DCT and Neural NetworkEfficient And Improved Video Steganography using DCT and Neural Network
Efficient And Improved Video Steganography using DCT and Neural NetworkIJSRD
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)ijceronline
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)ijceronline
 
Implementation of an arithmetic logic using area efficient carry lookahead adder
Implementation of an arithmetic logic using area efficient carry lookahead adderImplementation of an arithmetic logic using area efficient carry lookahead adder
Implementation of an arithmetic logic using area efficient carry lookahead adderVLSICS Design
 

Similar to Analog VLSI Implementation of Neural Network Architecture for Signal Processing (20)

Implementation of Feed Forward Neural Network for Classification by Education...
Implementation of Feed Forward Neural Network for Classification by Education...Implementation of Feed Forward Neural Network for Classification by Education...
Implementation of Feed Forward Neural Network for Classification by Education...
 
Artificial Neural Network Implementation On FPGA Chip
Artificial Neural Network Implementation On FPGA ChipArtificial Neural Network Implementation On FPGA Chip
Artificial Neural Network Implementation On FPGA Chip
 
Analytical and Systematic Study of Artificial Neural Network
Analytical and Systematic Study of Artificial Neural NetworkAnalytical and Systematic Study of Artificial Neural Network
Analytical and Systematic Study of Artificial Neural Network
 
Digital Implementation of Artificial Neural Network for Function Approximatio...
Digital Implementation of Artificial Neural Network for Function Approximatio...Digital Implementation of Artificial Neural Network for Function Approximatio...
Digital Implementation of Artificial Neural Network for Function Approximatio...
 
A survey research summary on neural networks
A survey research summary on neural networksA survey research summary on neural networks
A survey research summary on neural networks
 
A simplified design of multiplier for multi layer feed forward hardware neura...
A simplified design of multiplier for multi layer feed forward hardware neura...A simplified design of multiplier for multi layer feed forward hardware neura...
A simplified design of multiplier for multi layer feed forward hardware neura...
 
modeling-a-perceptron-neuron-using-verilog-developed-floating-point-numbering...
modeling-a-perceptron-neuron-using-verilog-developed-floating-point-numbering...modeling-a-perceptron-neuron-using-verilog-developed-floating-point-numbering...
modeling-a-perceptron-neuron-using-verilog-developed-floating-point-numbering...
 
DEEP LEARNING BASED BRAIN STROKE DETECTION
DEEP LEARNING BASED BRAIN STROKE DETECTIONDEEP LEARNING BASED BRAIN STROKE DETECTION
DEEP LEARNING BASED BRAIN STROKE DETECTION
 
Modeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologyModeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technology
 
Implementing Neural Networks Using VLSI for Image Processing (compression)
Implementing Neural Networks Using VLSI for Image Processing (compression)Implementing Neural Networks Using VLSI for Image Processing (compression)
Implementing Neural Networks Using VLSI for Image Processing (compression)
 
Optimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionOptimal neural network models for wind speed prediction
Optimal neural network models for wind speed prediction
 
Optimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionOptimal neural network models for wind speed prediction
Optimal neural network models for wind speed prediction
 
Optimal neural network models for wind speed prediction
Optimal neural network models for wind speed predictionOptimal neural network models for wind speed prediction
Optimal neural network models for wind speed prediction
 
Neural Networks.pptx
Neural Networks.pptxNeural Networks.pptx
Neural Networks.pptx
 
MODELLING AND SIMULATION OF 128-BIT CROSSBAR SWITCH FOR NETWORK -ONCHIP
MODELLING AND SIMULATION OF 128-BIT CROSSBAR SWITCH FOR NETWORK -ONCHIPMODELLING AND SIMULATION OF 128-BIT CROSSBAR SWITCH FOR NETWORK -ONCHIP
MODELLING AND SIMULATION OF 128-BIT CROSSBAR SWITCH FOR NETWORK -ONCHIP
 
International Journal of Computational Engineering Research (IJCER)
International Journal of Computational Engineering Research (IJCER) International Journal of Computational Engineering Research (IJCER)
International Journal of Computational Engineering Research (IJCER)
 
Efficient And Improved Video Steganography using DCT and Neural Network
Efficient And Improved Video Steganography using DCT and Neural NetworkEfficient And Improved Video Steganography using DCT and Neural Network
Efficient And Improved Video Steganography using DCT and Neural Network
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)
 
Implementation of an arithmetic logic using area efficient carry lookahead adder
Implementation of an arithmetic logic using area efficient carry lookahead adderImplementation of an arithmetic logic using area efficient carry lookahead adder
Implementation of an arithmetic logic using area efficient carry lookahead adder
 

Recently uploaded

Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
MENTAL STATUS EXAMINATION format.docx
MENTAL     STATUS EXAMINATION format.docxMENTAL     STATUS EXAMINATION format.docx
MENTAL STATUS EXAMINATION format.docxPoojaSen20
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfSumit Tiwari
 
Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application ) Sakshi Ghasle
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformChameera Dedduwage
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsKarinaGenton
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfUmakantAnnand
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 

Recently uploaded (20)

TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
MENTAL STATUS EXAMINATION format.docx
MENTAL     STATUS EXAMINATION format.docxMENTAL     STATUS EXAMINATION format.docx
MENTAL STATUS EXAMINATION format.docx
 
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdfEnzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
Enzyme, Pharmaceutical Aids, Miscellaneous Last Part of Chapter no 5th.pdf
 
Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application )
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its Characteristics
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.Compdf
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 

Analog VLSI Implementation of Neural Network Architecture for Signal Processing

  • 1. International Journal of VLSI design & Communication Systems (VLSICS) Vol.3, No.2, April 2012 DOI : 10.5121/vlsic.2012.3220 243 Analog VLSI Implementation of Neural Network Architecture for Signal Processing Neeraj Chasta 1, Sarita Chouhan2 and Yogesh Kumar3 1Department of Electronics and Communication, Mewar University, Chittorgarh, Rajasthan, India neeraj.chasta@gmail.com 2Department of Electronics and Communication, MLVTEC, Bhilwara, Rajasthan, India sarita.mlvtec@yahoo.co.in 3Electronics and Communication, MLVTEC Bhilwara, Rajasthan, India yogeshkumar989@ymail.com Abstract With the advent of new technologies and advancement in medical science we are trying to process the information artificially as our biological system performs inside our body. Artificial intelligence through a biological word is realized based on mathematical equations and artificial neurons. Our main focus is on the implementation of Neural Network Architecture (NNA) with on a chip learning in analog VLSI for generic signal processing applications. In the proposed paper analog components like Gilbert Cell Multiplier (GCM), Neuron activation Function (NAF) are used to implement artificial NNA. The analog components used are comprises of multipliers and adders’ along with the tan-sigmoid function circuit using MOS transistor in subthreshold region. This neural architecture is trained using Back propagation (BP) algorithm in analog domain with new techniques of weight storage. Layout design and verification of the proposed design is carried out using Tanner EDA 14.1 tool and synopsys Tspice. The technology used in designing the layouts is MOSIS/HP 0.5u SCN3M, Tight Metal. Keyword Neural Network Architecture, Back Propagation Algorithm. 1. Introduction 1.1 Artificial Intelligence Intelligence is the computational part of the ability to achieve goals in the world. Actually intelligence is a biological word and is acquired from past experiences. The science which defines intelligence mathematically is known as Artificial Intelligence (AI). Artificial Intelligence is implemented by using artificial neurons and these artificial neurons comprised of several analog components. The proposed paper is a step in the implementation of neural network architecture using back propagation algorithm for data compression. The neuron selected is comprises of multiplier and adder along with the tan-sigmoid function. The training algorithm used is performed in analog domain thus the whole neural architecture is a analog structure.
  • 2. International Journal of VLSI design & Communication Systems (VLSICS) Vol.3, No.2, April 2012 244 Figure 1 can be expressed mathematically as Figure 1: Neural Network a = f (P1W1+P2W2+P3W3+Bias) where „a‟ is the output of the neuron & „p‟ is input and „w‟ is neuron weight . The bias is optional and user defined. A neuron in a network is itself a simple processing unit which has an associated weight for each input to strengthening it and produces an output. The working of neuron is to add together all the inputs and calculating an output to be passed on. The neural architecture is trained using back propagation algorithm and also it is a feed forward network. The designed neuron is suitable for both analog and digital applications. The proposed neural architecture is capable of performing operations like sine wave learning, amplification and frequency multiplication and can also be used for analog signal processing activities. 1.2 Multiple Layers of Neurons When a set of single layer neurons are connected with each other it forms a multiple layer neurons, as shown in the figure 2.
  • 3. International Journal of VLSI design & Communication Systems (VLSICS) Vol.3, No.2, April 2012 245 Figure 2: Layered structure of Neural Network As it is clear from the above figure that weights w11 to w16 are used to connect the inputs v1 and v2 to the neuron in the hidden layer. Then weights w21 to w23 transferred the output of hidden layer to the output layer. The final output is a21. 2. Architecture 2.1 Analog Components for Neural Architecture The inputs to the neuron v1 and v2 as shown in figure 2 are multiplied by the weight matrix, the resultant output is summed up and is passed through an NAF. The output of the activation function is then passes to the next layer for further processing. Blocks to be used are Multiplier block, Adders, NAF block with derivative. 2.1.1 Multiplier Block The Gilbert cell is used as the multiplier block. The schematic of the Gilbert cell is as shown in the figure 3.
  • 4. International Journal of VLSI design & Communication Systems (VLSICS) Vol.3, No.2, April 2012 246 Figure 3: Gilbert cell schematic with current mirror circuit As in this paper we are mainly focusing on layout part so the layout of the Gilbert cell multiplier is shown in figure 4. Layout design and verification of the proposed design is carried out using Tanner EDA 14.1 tool and synopsys Tspice. The technology used in designing the layouts is MOSIS/HP 0.5u SCN3M, Tight Metal. Figure 4: Layout of Gilbert cell multiplier
  • 5. International Journal of VLSI design & Communication Systems (VLSICS) Vol.3, No.2, April 2012 247 2.1.2 Adders The output of the Gilbert cell is in the form of current (trans conductance).The node of the Gilbert cell connecting the respective outputs act as adder itself. 2.1.3 Neuron Activation Function (NAF) The designed activation function is tan sigmoid. The proposed design is actually a differential amplifier modified for differential output. Same circuit is capable of producing output the activation function and the differentiation of the activation function. Here two designs are considered for NAF 1. Differential amplifier as NAF 2. Modified differential amplifier as NAF with differentiation output. 2.1.3.1 Differential Amplifier Design As A Neuron Activation Function (Tan) This block is named as tan in the final schematics for Neural Architecture. Differential amplifier when design to work in the sub-threshold region acts as a neuron activation function. Figure 5: Schematic of NAF 2.1.3.2 Modified Differential Amplifier Design for Differentiation Output (fun) Schematic of the design shown in the figure 5 is used for the tan sigmoid function generator with modification for the differential output. The NAF function can be derived from the same differential pair configuration. The structure has to be modified for the differential output. 2.1.3.3 Layout issues As we already discussed that we are using tan sigmoid function to realize NAF the reason behind using tan- sigmoid function is that after carried out a mathematical analysis we found that tan
  • 6. International Journal of VLSI design & Communication Systems (VLSICS) Vol.3, No.2, April 2012 248 sigmoid function is best suitable for compression and achieving a tangential output. The layout of NAF is shown in figure 6. 2.2 Realization of Neural Architecture using Analog Components The components designed in the previous section are used to implement the neural architecture. The tan block is the differential amplifier block designed in section 2.1.3.1. This block is used as the neuron activation function as well as for the multiplication purpose. The mult is the Gilbert cell multiplier designed in section 2.1.1 The fun is the Neuron activation function circuit with differential output designed in section 2.1.3.2 Figure 7: Implementation of the Neural Architecture using Analog Blocks Figure 7 shows exactly how the neural architecture of figure 2 is implemented using analog components. The input layer is the input to the 2:3:1 neuron. The hidden layer is connected to the input layer by weights in the first layer named as w1i. The output layer is connected to input layer through weights w2j. The op is the output of 2:3:1 neuron. The schematic of circuit for image compression is shown in figure 8.
  • 7. International Journal of VLSI design & Communication Systems (VLSICS) Vol.3, No.2, April 2012 249 Fig 8: Schematic of Compression Block The Layout of circuit for image compression is shown in figure 9. Figure 9: Layout of compression block 2.3 Neuron Application -Image Compression and Decompression The above proposed NA is used for image compression. Image consisting of pixel intensities are fed to the network shown in Figure 8 for compression and then this compressed image act as input for the decompression block. The layout circuit of decompression block is as shown in figure 11.
  • 8. International Journal of VLSI design & Communication Systems (VLSICS) Vol.3, No.2, April 2012 250 The 2:3:1 neuron proposed has an inherent capability of compressing the inputs, as there are two inputs and one output. The compression achieved is 97.30%. Since the inputs are fed in the analog form to the network there is no need for analog to digital converters. Figure 10 shows the schematic of decompression block Fig10: Schematic of Decompression Block Figure 11: Layout of Decompression Block
  • 9. International Journal of VLSI design & Communication Systems (VLSICS) Vol.3, No.2, April 2012 251 This is one of the major advantages of this work. A 1:3:2 neural networks is designed for the decompression purpose. The neural network has 3 neurons in the hidden layer and two in the output layer. Figure 12 shows the compression and decompression scheme. The training algorithm used in this network is Back Propagation algorithm. The error propagates from decompression block to the compression bock. Once the network is trained for different inputs the two architectures are separated and can be used as compression block and decompression block independently. Figure 12: Image Compression and Decompression using proposed neural architecture The layout implementation of block diagram of compression and decompression block is shown in figure 14 and figure 13 shows the combined schematic of compression and decompression block: Fig 13: Combined Schematic of Compression and Decompression Block
Figure 13 shows the combined schematic of the compression and decompression blocks, and figure 14 the corresponding layout implementation:

Fig 13: Combined Schematic of Compression and Decompression Block

Figure 14: Combined Layout of Compression and Decompression Block

3. Back Propagation Algorithm

Backpropagation is the most common method of training an artificial neural network so as to minimize an objective function. It is a generalization of the delta rule and is mainly used for feed-forward networks. The algorithm is best understood by dividing it into two phases: propagation and weight update.

1. Propagation
a) Forward propagation of a training pattern's input through the neural network in order to generate the propagation's output activations.
b) Backward propagation of the propagation's output activations through the neural network, using the training pattern's target, in order to generate the deltas of all output and hidden neurons.

2. Weight Update
a) For each weight synapse, multiply its output delta and input activation to obtain the gradient of the weight.
b) Move the weight in the direction opposite to the gradient by subtracting a fraction of the gradient from the weight. This fraction influences the speed and quality of learning and is called the learning rate. The sign of the gradient indicates where the error is increasing, which is why the weight must be updated in the opposite direction. A symbolic form of this update is given after the list.

These two phases are repeated until the performance of the network is satisfactory.
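In symbols, with learning rate $\gamma$: for a weight $w_{ij}$ from unit $i$ to unit $j$, phase 2 computes the gradient as the product of the input activation $o_i$ and the output delta $\delta_j$, and then updates

$$\Delta w_{ij} = -\gamma\,\delta_j\,o_i, \qquad w_{ij} \leftarrow w_{ij} + \Delta w_{ij}.$$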
Figure 15: Notation for the three-layered network

There are $(n+1) \times k$ weights between the input sites and the hidden units, and $(k+1) \times m$ between the hidden and output units. Let $\overline{W}_1$ denote the $(n+1) \times k$ matrix with component $w_{ij}^{(1)}$ in the $i$-th row and $j$-th column, and let $\overline{W}_2$ denote the $(k+1) \times m$ matrix with components $w_{ij}^{(2)}$. The overline emphasizes that the last row of both matrices corresponds to the biases of the computing units; the matrix of weights without this last row is needed in the backpropagation step. The $n$-dimensional input vector $o = (o_1, \ldots, o_n)$ is extended to $\hat{o} = (o_1, \ldots, o_n, 1)$. The excitation $\mathrm{net}_j$ of the $j$-th hidden unit is given by

$$\mathrm{net}_j^{(1)} = \sum_{i=1}^{n+1} w_{ij}^{(1)}\,\hat{o}_i.$$

The activation function is a sigmoid, so the output $o_j^{(1)}$ of this unit is

$$o_j^{(1)} = s\!\left(\mathrm{net}_j^{(1)}\right) = \frac{1}{1 + e^{-\mathrm{net}_j^{(1)}}}.$$

After choosing the weights of the network randomly, the backpropagation algorithm is used to compute the necessary corrections. The algorithm can be decomposed into the following four steps:

(i) Feed-forward computation
(ii) Backpropagation to the output layer
(iii) Backpropagation to the hidden layer
(iv) Weight updates

The algorithm is stopped when the value of the error function has become sufficiently small.
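A compact software rendering of these four steps, using the bias-extended matrices $\overline{W}_1$ and $\overline{W}_2$ introduced above, might look as follows (a sketch only; the layer sizes, learning rate and target vector are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m, gamma = 4, 3, 2, 0.1            # layer sizes and learning rate (illustrative)
W1 = rng.uniform(-1, 1, (n + 1, k))      # bias-extended (n+1) x k matrix
W2 = rng.uniform(-1, 1, (k + 1, m))      # bias-extended (k+1) x m matrix

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_step(o, t):
    # (i) feed-forward computation with extended vectors o_hat
    o_hat  = np.append(o, 1.0)           # (o1, ..., on, 1)
    o1     = sigmoid(o_hat @ W1)         # hidden outputs o_j^(1)
    o1_hat = np.append(o1, 1.0)
    o2     = sigmoid(o1_hat @ W2)        # network outputs o_j^(2)
    # (ii) backpropagation to the output layer (sigmoid derivative: o(1-o))
    d2 = (o2 - t) * o2 * (1.0 - o2)
    # (iii) backpropagation to the hidden layer -- W2 without its bias row
    d1 = (W2[:-1, :] @ d2) * o1 * (1.0 - o1)
    # (iv) weight updates
    W2[...] -= gamma * np.outer(o1_hat, d2)
    W1[...] -= gamma * np.outer(o_hat, d1)
    return 0.5 * np.sum((o2 - t) ** 2)   # error monitored for the stopping test

err = backprop_step(rng.uniform(size=n), np.array([0.0, 1.0]))
print(err)
```

Note that step (iii) uses the weight matrix without its bias row, matching the remark above that the unbarred matrix is the one needed in the backpropagation step.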
4. Results and Discussion

4.1 Simulation Result for the Gilbert Cell Multiplier

The designed Gilbert cell is simulated using TSPICE. The simulation result in figure 16 shows the multiplication of two voltages v1 and v2, where v1 is 4 Vpp at 100 MHz and v2 is 2 Vpp at 10 MHz. The output waveform is the product of v1 and v2 produced by the circuit, with an amplitude of 5.6 Vpp. The output matches the theoretical product.

Figure 16: Multiplication operation of the Gilbert cell multiplier
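For reference, an ideal four-quadrant multiplier driven by these two sinusoids produces sum- and difference-frequency components at 110 MHz and 90 MHz. The snippet below computes that ideal product waveform (the gain K is a hypothetical scale factor, not the measured conversion gain of the cell):

```python
import numpy as np

K  = 0.7                                  # multiplier gain, hypothetical
t  = np.linspace(0, 200e-9, 2000)         # 200 ns observation window
v1 = 2.0 * np.sin(2 * np.pi * 100e6 * t)  # 4 Vpp at 100 MHz
v2 = 1.0 * np.sin(2 * np.pi * 10e6 * t)   # 2 Vpp at 10 MHz

# Ideal product: 0.5*K*A1*A2 * [cos((w1-w2)t) - cos((w1+w2)t)],
# i.e. components at 90 MHz and 110 MHz.
v_out = K * v1 * v2
print(v_out.max() - v_out.min())          # peak-to-peak of the ideal product
```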
4.2 Simulation Result for the Neuron Activation Function

Figure 17: Circuit output for the Neuron Activation Function block (tan)

4.4 Image Compression using the Neural Architecture

Image compression is performed separately using the neural architecture. The simulation results for image compression are shown in figure 18. The input v1 was a sine wave of 2 Vpp at 10 MHz, and the input v2 was a sine wave of 4 Vpp at 100 MHz. The output of the compression is a DC signal of 70 mV.
Figure 18: Simulation of image compression

4.5 Image Decompression using the Neural Architecture

The separate decompression of a 3 Vpp sine wave at 100 MHz yields a decompressed output of 7 Vpp. The simulation result of the image decompression is shown in figure 19.

Figure 19: Simulation of image decompression
4.6 Image Compression and Decompression using the Neural Architecture

The neural architecture is extended to the combined application of image compression and decompression. The simulation results are shown in figure 20. Input 1 was a sine wave of 16 Vpp at 10 MHz and input 2 a sine wave of 9 Vpp at 100 MHz. The compressed output was a DC signal of 4 V.

Figure 20: Image compression and decompression simulation

The decompressed output was a 4.5 Vpp sine wave for input 1 and a 4.5 Vpp sine wave for input 2. The compression achieved is 97.22% and the average decompression achieved is 60.94%.

5. Conclusion

Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computing techniques. Owing to their adaptive learning, self-organization, real-time operation and fault tolerance via redundant information coding, they can be applied to modelling and diagnosing the cardiovascular system and to electronic noses, which have several potential applications in telemedicine. Another application developed was the "Instant Physician", which returns the "best" diagnosis and treatment. This work can be further extended to implement a neuro-fuzzy system with high-speed, low-power CMOS circuits (layout extraction) in current mode, as well as to nano-scale circuit simulation using Double Gate MOSFET (DG MOSFET) modelling.
Authors

1. Neeraj K. Chasta received his B.Eng. degree in Electronics & Communication from the University of Rajasthan, Jaipur, India in 2007, and the M.Tech degree in Information & Communication Technology with specialization in VLSI from the Dhirubhai Ambani Institute of Information & Communication Technology (DA-IICT), Gandhinagar, India in 2010. His current research interests are analog and mixed-signal design with an emphasis on low power and high speed, and VLSI testing.

2. Sarita Chouhan received her B.Eng. degree in Electronics & Communication from Rajeev Gandhi Proudyogiki Vishwavidyalaya, Bhopal, India in 2002 and the M.Tech degree in VLSI design from Mewar University, Chittorgarh, Rajasthan, India. She is currently an Assistant Professor in the Electronics and Communication department of Manikya Lal Verma Textile & Engg. College, Bhilwara, Rajasthan, India. Her areas of interest are applied electronics, microelectronics, VLSI, VHDL, Verilog, EDA, analog CMOS design and low-power optimization.

3. Yogesh Kumar is a final-year B.Tech student pursuing his degree in Electronics & Communication from Rajasthan Technical University, Kota, India. His areas of interest are VLSI design and MATLAB. He has presented papers at national and international conferences on analog CMOS design for signal processing and is currently working on image processing.