Intelligent Successive Approximation Technique for
High Resolution High Speed Analog to Digital
Converter Design
Sunny Gupta, Kunal Gupta
Department of Electronics Engineering
Delhi College of Engineering
Bawana Road, Delhi, India 110042
Abstract- This paper presents a modification of the well-established SAR technique for analog-to-digital conversion, using artificial neural networks for time-series prediction to greatly reduce the conversion time while maintaining high resolution. The technique pushes the conversion speed close to that of Flash ADCs while using fewer resources, as demonstrated by the MATLAB and PSPICE simulation results.
I. INTRODUCTION
Successive approximation has established itself as one of the most widely used techniques for analog-to-digital conversion, especially for high-resolution conversion with excellent accuracy; 18-24 bit SAR-based ADCs are not uncommon these days. Moreover, by employing pipelining, system throughput can be greatly improved while using simple, non-critical analog components. However, one problem remains: speed. The SAR architecture belongs to the class of conversion algorithms whose conversion time is proportional to the resolution. Flash ADCs are currently the only choice for very high-speed conversion, though such systems do not achieve resolutions beyond 8-10 bits. This is mainly because the chip area and power consumption of Flash ADCs increase exponentially with resolution, while component accuracy also approaches critical limits.
There is clearly a definite trade-off between resolution and speed. Many critical applications demand ADCs with resolutions as high as 24 bits together with short conversion times, pushing conversion rates toward 1 Gsps. This paper presents a modification of the conventional SAR technique, based on time-series prediction using artificial neural networks, that reduces the number of conversion cycles used. Functional simulations have been performed in MATLAB®, and the circuit implementations to be used are introduced in CMOS 0.18 µm technology, simulated in ORCAD®/PSPICE®.
II. INTELLIGENT SAR ALGORITHM
In the conventional SAR algorithm, the search starts by assuming that the signal lies at the middle of the input range, and the MSB is set. A comparison is then performed between the actual value and the digital approximation: if the actual value is greater than or equal to the approximation, the bit last set is retained; otherwise it is reset. This process continues bit by bit down to the LSB. Once the decision has been made for the LSB, the conversion is complete.
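For reference, a minimal MATLAB sketch of this conventional search (our illustration, not a circuit from the paper) is:

    % Conventional N-bit SAR conversion: one comparison per bit,
    % so an N-bit result always costs N cycles.
    function code = sar_convert(vin, nbits, vref)
        code = 0;
        for b = nbits-1:-1:0
            trial = code + 2^b;                  % tentatively set the next bit
            if vin >= trial * vref / 2^nbits     % compare input with DAC output
                code = trial;                    % keep the bit
            end                                  % otherwise leave it cleared
        end
    end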
The above method has the shortcoming that the number of steps the algorithm takes to converge is independent of the input signal value; specifically, the number of steps equals the number of bits of resolution. The algorithm takes no advantage of the signal statistics. Common signals possess a large amount of autocorrelation between successive samples, which means that the signal value will seldom change greatly from one sample to the next, provided it has been suitably oversampled, as is generally the case in analog-to-digital conversion. We exploit this feature by predicting the next signal value from a few past values and using the prediction as the starting point for the SAR algorithm. Provided the prediction stays within a specified error limit, the algorithm converges very quickly for most signal values, a marked improvement over the conventional SAR algorithm. However, this also means that the conversion time now depends on the signal statistics.
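One way to realize the predicted-start search, sketched below in MATLAB, is to run the SAR loop only over a small window of 2^(k+1) codes centred on the prediction, reverting to the full N-bit search when the prediction misses; the window size k and the fallback policy are our assumptions, as the paper does not fix them.

    % Predictive SAR (a sketch under the stated assumptions): search a
    % +/-2^k code window around the predicted code; on a miss, fall back
    % to the full search. Typical conversions then cost k+1 cycles, not nbits.
    function code = predictive_sar(vin, pred_code, k, nbits, vref)
        lsb = vref / 2^nbits;
        lo  = pred_code - 2^k;                        % bottom of the window
        if vin < lo*lsb || vin >= (pred_code + 2^k)*lsb
            code = sar_convert(vin, nbits, vref);     % prediction missed
        else
            code = lo + sar_convert(vin - lo*lsb, k+1, 2^(k+1)*lsb);
        end
    end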
Fig. 1. System Architecture
There are several methods of time-series prediction, one of the most efficient being the use of artificial neural networks [1]. Being a non-linear prediction method, it achieves small errors with a small network, as opposed to linear predictive coding, which may require a large number of coefficients to reach similar performance.
Fig. 1 shows the overall system architecture. The block diagram resembles that of a conventional SAR ADC, the only difference being the neural network loop, which provides the initial value for the search.
III. ARTIFICIAL NEURAL NETWORKS
Artificial neural networks are inspired from the
functioning of brain, and there has been tremendous
development in this field over the past 2 decades. In this
design, we adopted one of the simplest, yet most capable
forms of neural network called Multi Layer Perceptron. Back
propagation training algorithms has been used to train the
network to perform the desired prediction. Real audio data in
PCM format sampled at 44.1 kHz and 16 bit wide samples
have been used as training as well as validation datasets. All
the neural network development was done using Neural
Network Toolbox 4.0.1 and programming done using
MATLAB.
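A simplified single-output version of this training flow (toolbox calls as in Neural Network Toolbox 4.x; the data source and training parameters here are our placeholders) might look like:

    % Train a 4-input MLP to predict the next audio sample from the
    % previous four (single analog output used here for simplicity;
    % the actual design uses 16 binary outputs).
    load handel;                                  % built-in audio, stand-in for the PCM data
    x = y(:)';
    P = [x(1:end-4); x(2:end-3); x(3:end-2); x(4:end-1)];   % 4 past samples
    T = x(5:end);                                            % target: next sample
    net = newff(minmax(P), [3 1], {'tansig','purelin'}, 'trainlm');
    net.trainParam.epochs = 100;
    net = train(net, P, T);                       % back-propagation training
    e = T - sim(net, P);                          % prediction error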
The choice of network structure is one of the most crucial aspects of ANN-based design. The larger the number of layers, the more capable the network; however, since the network had to be realized in hardware, more layers would reduce the overall speed of the system. Since single-layer perceptrons cannot solve all types of problems, we settled on a two-layer perceptron, i.e., a hidden layer and an output layer of neurons in addition to the input layer (which contains no neurons).
Fig. 2. Performance Graph of Neural Networks: Note the red dot which marks
the selected network
Another consideration was the number of neurons in each layer. Too few neurons restrict the network's ability to generalize, while too many lead to memorization [2]. We therefore simulated a large number of networks with different numbers of neurons in each layer and characterized each generated network by a performance index, chosen to be the mean square error in prediction. A total of 128 networks were designed and simulated. We then selected the few networks that combined the least mean square error with relatively simple structures and validated them on a fresh dataset, recalculating the performance indices.
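The architecture sweep can be reproduced along these lines (the exact grid of candidate sizes is our assumption):

    % Rank candidate networks by mean square prediction error
    % (the performance index used above).
    hidden = 1:16;                                % candidate hidden-layer sizes
    mse_idx = zeros(size(hidden));
    for i = 1:length(hidden)
        net = newff(minmax(P), [hidden(i) 1], {'tansig','purelin'}, 'trainlm');
        net.trainParam.epochs = 50;
        net = train(net, P, T);
        mse_idx(i) = mean((T - sim(net, P)).^2);  % MSE performance index
    end
    [best_mse, best] = min(mse_idx);              % simplest low-MSE candidates kept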
Fig. 3. Neural Network Architecture
Fig. 4. Performance of the selected network on validation data.
Prediction error lies within a narrow band of 10% about zero.
Finally, a trade-off between network complexity and prediction accuracy had to be made in choosing the network, as the remaining small gains in performance required large increases in network size. We settled on a 4-3-16 network, which uses 4 previous signal values at the input layer, 3 hidden neurons, and 16 output neurons (the last figure was fixed, since a 16-bit digital equivalent of the predicted value is needed as the starting value for the SAR algorithm). Fig. 2 shows the MSE performance of the simulated networks, with the finally selected network highlighted. Fig. 3 shows the structure of the selected network, while Fig. 4 shows its performance. A large fraction of signal values are clearly estimated with very good accuracy, which translates directly into a large reduction in conversion time.
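As an illustration of the interface between the network and the SAR loop (the 0.5 threshold and MSB-first bit ordering are our assumptions), the 16 sigmoid outputs could be turned into the starting code as follows:

    % Convert the 16 output-neuron values (normalized to (0,1)) into
    % the 16-bit starting code for the SAR search.
    out  = sim(net16, past4(:));              % net16: the 4-3-16 network; past4: the
                                              % four stored samples (hypothetical names)
    bits = out(:)' > 0.5;                     % threshold each output neuron
    start_code = sum(bits .* 2.^(15:-1:0));   % MSB-first weighting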
IV. CIRCUIT IMPLEMENTATIONS
Recent progress in VLSI has made it possible to implement an entire parallel-processing neural network on a single chip, greatly enhancing the capabilities of SoCs. CMOS technology lends itself readily to hardware neural network implementations owing to its low power consumption and small chip area. Moreover, recent requirements for low-voltage design have pushed circuit operation from the saturation region into the sub-threshold region, which has been beneficially exploited by moving to current-mode translinear circuit design. As a neural network consists mostly of additions and multiplications, current-mode design provides very simple solutions. We introduce here the basic building blocks, in CMOS 0.18 µm TSMC device technology, that we plan to use in the circuit implementation of the proposed algorithm.
A. Synapse
The synapse performs the function of multiplying a signal by a weight; it acts as a unidirectional connection between two neurons. With currents as the signals, synapses can easily be realized with current mirrors. Simple current mirrors could not be used in our design, however, because the weights in the realized network span a large range, from 0.01 to 1000. We therefore need special current mirrors whose transfer gain can be adjusted over five decades. Such mirrors have been introduced in [3] and related work, and we plan to use them in our design.
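Behaviorally, each synapse is simply a programmable current gain; a minimal model of the intended mirror (idealized, ignoring mismatch and finite output resistance) is:

    % Ideal behavioral synapse: output current = weight x input current,
    % with the weight spanning the five decades required above.
    synapse = @(Iin, w) w * Iin;              % adjustable-gain current mirror
    Iout = synapse(10e-9, 3.5);               % e.g. 10 nA input, weight 3.5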
B. Neuron Transfer Function
MLPs generally use a sigmoid transfer function, and we have kept with that convention. Since our signals are in current form, we needed a current-mode circuit realizing the sigmoid transfer function. It is well known that a differential amplifier realizes the desired transfer function in voltage mode [4]; we simply added circuitry to make it work in current mode. Since our current signals were low (in the sub-threshold region), they had to be boosted before they could drive a differential pair. Fig. 5 shows the proposed circuit and Fig. 6 its performance.
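A behavioral model of the expected transfer (assuming the classic sub-threshold differential-pair tanh characteristic; the bias values are placeholders) is:

    % Sub-threshold differential pair: the output current follows a
    % tanh (sigmoid) of the differential input voltage.
    VT = 0.026; n = 1.3; Itail = 1e-9;            % thermal voltage, slope factor, tail current
    vd = linspace(-0.3, 0.3, 601);                % differential input sweep
    Iout = (Itail/2) * (1 + tanh(vd ./ (2*n*VT)));% sigmoid output current
    plot(vd, Iout), xlabel('v_d (V)'), ylabel('I_{out} (A)')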
Fig. 5. Sigmoid Transfer Function Circuit
Fig. 6. Neuron Transfer Function circuit performance
C. Analog Memory
The inputs to the neural network are the previous signal values, so we need a circuit that can hold up to 4 past values. Such a memory is easily built digitally as a shift register. However, not only would the effective number of inputs to the neural network then grow very large, the complexity of the network would grow exponentially. Moreover, the prediction for the current conversion would have to be computed within the same conversion, meaning the conversion could not begin until the neural network had finished. We therefore chose to store the previous signal values in analog form. In this way, while the SAR algorithm is active, the neural network can make its prediction in parallel, speeding up the process to a large extent.
This objective led us to develop a new kind of circuit for storing analog voltages. It is a simple circuit comprising CMOS transmission gates and a voltage buffer, as shown in Fig. 7. The two gates are clocked in opposite phases: the first gate samples the input signal and passes it to the output, while during the opposite clock phase the second gate maintains the output by forming a unity-feedback loop. In this way the stored value is refreshed automatically, and no loss occurs in the signal value. By cascading such cells in a master-slave configuration, an analog tapped delay line can be realized which operates in current mode, so it can be plugged directly into the system.
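Functionally, the cascade behaves as a 4-tap analog shift register; in MATLAB terms (a behavioral sketch only):

    % Each clock period the master-slave cells shift the stored values
    % by one tap, so the network always sees the four most recent samples.
    taps = zeros(1, 4);
    for nIdx = 1:length(x)
        taps = [x(nIdx), taps(1:3)];          % analog shift-register behavior
        past4 = taps;                         % inputs presented to the network
    end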
Fig. 7. Analog Memory Circuit
D. Current Comparator
The current level of the DAC has to be kept low to limit power consumption. This calls for a very accurate comparator that can work at very low current levels (in the nA range). Moreover, the response time of the circuit should be under 1 ns to achieve the desired conversion speed.
Fig. 8 shows the comparator circuit, a modification of the comparator presented in [5]. The circuit has been simulated in OrCAD, and its performance is shown in Fig. 9. The upper graph shows the output in response to a step input of 250 nA; the rise time is approximately 0.28 ns, while the fall time is 0.35 ns. Fig. 10 shows that the circuit is capable of detecting current differences as low as a fraction of a nA.
Fig. 8. Current Comparator
Fig. 9. Time Response of Comparator
Fig. 10. Transfer Characteristics of the Comparator: Note that the circuit is
capable of detecting very small differences in currents
V. CONCLUSIONS
In this paper a novel method of ADC design has been presented, modifying the conventional SAR technique with artificial neural networks. Functional simulations show that the technique would yield improved conversion speed along with high resolution. The various circuit blocks to be used have been presented, with emphasis on a current-mode approach for improved performance.
VI. REFERENCES
[1] W. C. Chu and N. K. Bose, "Speech signal prediction using feedforward neural network," Electronics Letters, vol. 34, May 1998.
[2] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice Hall, ISBN 0132733501.
[3] C. Toumazou, F. J. Lidgey and D. G. Haigh, Analogue IC Design: The Current-Mode Approach, Chapter 6.
[4] E. I. El-Masry, B. J. Maundy and H.-K. Yang, "Analog VLSI current mode implementation of artificial neural networks," IEEE, 1993.
[5] H. Träff, "Novel approach to high speed CMOS current comparators," Electronics Letters, vol. 28, no. 3, pp. 310-312, 1992.
VII. ACKNOWLEDGEMENT
The authors would like to thank Dr. M. Kulkarni,
Department of Electronics and Communication Engineering,
Dr. Pragati Kumar, Department of Information Technology &
Dr. S. Naagesh, Department of Mechanical Engineering, Delhi
College of Engineering for their valuable guidance and
support.
VIII. ABOUT THE AUTHORS
The authors are final-year undergraduate students in the Department of Electronics & Communication Engineering at Delhi College of Engineering, Delhi, India. Their interests lie in CMOS analog circuit design and embedded systems, especially communication and signal processing systems. They plan to pursue these interests through work in industry and higher studies.