ADCs transform an analog voltage to a binary number (1’s and 0’s), and then to a digital number (base 10), where the number of bits determines the ADC resolution. The digital number approximates the analog voltage because it is represented in discrete steps. The ADC resolution determines how closely the digital number approximates the analog value. An n-bit ADC has a resolution of one part in 2^n. A 12-bit ADC has a resolution of one part in 4,096, since 2^12 = 4,096, so a 12-bit ADC with a 10-Vdc input resolves the measurement into 10 Vdc/4,096 = 0.00244 Vdc = 2.44 mV. A 16-bit ADC’s resolution is 10 Vdc/2^16 = 10/65,536 = 0.153 mV. The resolution is usually specified with respect to the full-range reading of the ADC.
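The resolution arithmetic above can be checked with a short sketch (the function name and values are illustrative, not from any particular device):

```python
def lsb_size(bits: int, full_scale_v: float) -> float:
    """Smallest voltage step (1 LSB) an n-bit ADC can resolve."""
    return full_scale_v / (2 ** bits)

print(round(lsb_size(12, 10.0) * 1000, 2))  # 2.44 (mV)
print(round(lsb_size(16, 10.0) * 1000, 3))  # 0.153 (mV)
```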
How it works:
1. All bits start at zero.
2. The DAC’s MSB is set to 1, forcing the DAC output to ½ of full scale (5 V in a 10-V system).
3. The DAC output is compared to the input signal; if the DAC output is lower, the MSB remains set at 1. If the DAC output is higher, the MSB resets to zero.
4. The second MSB, with a weight of ¼ of full scale, is set to 1, forcing the DAC output to either ¾ or ¼ of full scale.
5. The DAC output is again compared to the input signal, and the second bit either remains set to 1 if the DAC output is lower, or resets to zero if the DAC output is higher.
6. Each remaining bit is compared the same way, in order of descending bit weight, until the LSB is compared.
7. At the end of the process, the output register contains the digital code representing the analog input signal.
Conversion rates exceed 200 kHz, and 12- to 16-bit ADCs are relatively inexpensive.
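The seven steps above can be sketched as a simplified software model (function name and the 12-bit/10-V example values are assumptions, not from a specific part):

```python
def sar_convert(v_in: float, v_full_scale: float, bits: int) -> int:
    """Successive-approximation conversion following steps 1-7 above."""
    code = 0
    for bit in range(bits - 1, -1, -1):               # MSB first, descending weight
        trial = code | (1 << bit)                     # tentatively set this bit to 1
        dac_out = trial * v_full_scale / (1 << bits)  # internal DAC output for the trial code
        if dac_out <= v_in:                           # DAC output not above the input:
            code = trial                              # the bit remains set
        # otherwise the bit resets to zero (the trial code is discarded)
    return code

# 6.0 V into a 12-bit, 10-V converter:
print(sar_convert(6.0, 10.0, 12))  # 2457
```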
Voltage-to-frequency ADCs convert the input voltage to a pulse train with a frequency proportional to the input’s amplitude. The pulses are counted over a fixed period to determine the frequency, and the pulse-counter output represents the digital value of the voltage. These ADCs have high noise rejection because the input signal is integrated over the counting interval. They are used for slow or noisy signals, and for remote sensing in noisy environments. The digital pulse generator is hard-wired to the counter.
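A sketch of the counting scheme (the converter gain and gate time are assumed values for illustration):

```python
def vf_adc_reading(v_in: float, k_hz_per_v: float, gate_time_s: float) -> float:
    """Count pulses from a V-to-F converter over a fixed gate time,
    then convert the count back to a voltage."""
    frequency = k_hz_per_v * v_in               # pulse-train frequency, Hz
    count = int(frequency * gate_time_s)        # pulses accumulated by the counter
    return count / (k_hz_per_v * gate_time_s)   # recovered voltage

# 2.5 V with an assumed 10,000 Hz/V converter gain and a 1-s gate:
print(vf_adc_reading(2.5, 10_000.0, 1.0))  # 2.5
```

A longer gate time accumulates more pulses and averages more noise, at the cost of a slower reading.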
Dual-slope integrating ADCs charge a capacitor over a fixed period with a current proportional to the input voltage. The capacitor’s discharge time under a constant current then determines the value of the input voltage. These ADCs are accurate and stable because the measurement depends on the ratio of rise time to fall time, not on the capacitor’s absolute value. They also reject ac line-frequency noise when the integration time matches a multiple of the ac period. 20-bit accuracy is common, but conversion is slow: 60 Hz maximum, and slower for ADCs that integrate over multiples of the line frequency.
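The ratio measurement can be illustrated numerically (reference voltage and slope times are assumed values):

```python
def dual_slope_reading(v_ref: float, t_charge: float, t_discharge: float) -> float:
    """Dual-slope result: the input voltage follows from the ratio of
    discharge time to charge time, independent of the capacitor value."""
    return v_ref * t_discharge / t_charge

# A 2.5-V input charged for one 60-Hz period (1/60 s) against a 10-V
# reference discharges in a quarter of that time:
print(dual_slope_reading(10.0, 1 / 60, 1 / 240))  # 2.5
```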
This is an inexpensive, integrating ADC with high noise rejection. It works best with low-bandwidth signals of a few kHz. Users set the integration time. These ADCs don’t require trimming or calibration, and because they contain a digital filter, they work without an anti-aliasing filter at the input. The principle of operation: Vin sums with the output of the DAC, and the integrator adds this sum to a previously stored value. When the integrator output is equal to or greater than zero, the comparator output switches to one; when the integrator output is less than zero, the comparator switches to zero. The DAC modulates the feedback loop, continually adjusting the average comparator output to track the analog input and maintain the integrator output near zero. The DAC keeps the integrator’s output near the reference voltage level. The output signal becomes a one-bit data stream that feeds a digital filter. The digital filter averages the ones and zeros, determines the bandwidth and settling time, and outputs multiple-bit data. The digital low-pass filter then feeds the decimation filter, which cuts the sample rate of the multi-bit data stream in half at each stage. For example, a seven-stage filter reduces the sample rate by a factor of 128 (2^7).
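The modulator loop and averaging filter described above can be sketched as a simplified first-order sigma-delta model (bipolar ±v_ref feedback, the input value, and the sample count are all assumptions):

```python
def sigma_delta_bits(v_in: float, v_ref: float, n: int) -> list:
    """First-order modulator sketch: integrator + comparator + 1-bit
    feedback DAC, producing a one-bit data stream."""
    integ, bits = 0.0, []
    for _ in range(n):
        bit = 1 if integ >= 0 else 0       # comparator: >= 0 -> one, < 0 -> zero
        dac = v_ref if bit else -v_ref     # 1-bit feedback DAC (assumed bipolar)
        integ += v_in - dac                # integrator accumulates the difference
        bits.append(bit)
    return bits

# The digital filter averages the ones and zeros to recover the input:
stream = sigma_delta_bits(2.5, 10.0, 100_000)
density = sum(stream) / len(stream)
print(round(density * 2 * 10.0 - 10.0, 2))  # 2.5
```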
Digital filters improve ADC accuracy for ac signals. The signal is sampled at a rate several times the Nyquist rate (oversampling). The integrator acts as a low-pass filter for the input signal and a high-pass filter for the quantization noise, which lowers the in-band noise floor further; combined with the decimation filter, the frequency of the output decreases. The loop frequency is in the MHz range, while the output data rate is in the kHz range. The digital filter can also be notched at 60 Hz to eliminate power-line frequency interference.
ADC Comparisons
Sigma-i (σi) represents each independent error source that influences an ADC’s accuracy. Errors include sensor anomalies, noise, amplifier gain and offset errors, ADC quantization (resolution) error, and other factors.
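Independent error sources of this kind are often combined by root-sum-square; a minimal sketch assuming that convention (the example error values are hypothetical):

```python
import math

def total_error_rss(sigmas: list) -> float:
    """Combine independent error sources (sigma_i) by root-sum-square,
    a common convention for independent errors (assumed here)."""
    return math.sqrt(sum(s * s for s in sigmas))

# e.g. 1 mV sensor noise, 0.5 mV amplifier error, 0.7 mV quantization:
print(round(total_error_rss([1.0, 0.5, 0.7]), 3))  # 1.319 (mV)
```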
Gain and offset errors can be reduced with trimpots. In hardware/software calibration, the software instructs a DAC to null offsets and set full-scale voltages. In software-only methods, correction factors stored in memory correct the digital value based on ADC readings.
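The software-only method can be sketched as a two-point correction (the calibration readings below are hypothetical):

```python
def calibrate(raw_zero: float, raw_full: float, full_scale: float):
    """Derive gain/offset correction factors from two stored calibration
    readings and return a function that corrects subsequent readings."""
    gain = full_scale / (raw_full - raw_zero)   # corrects the slope error
    offset = raw_zero                           # corrects the zero error
    return lambda raw: (raw - offset) * gain

# Suppose the ADC reads 0.02 V at zero input and 9.95 V at a calibrated
# 10-V source; the stored factors then correct later readings:
correct = calibrate(0.02, 9.95, 10.0)
print(round(correct(4.99), 3))  # 5.005
```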
Figure 2.07: Common ADC Errors. The straight line in each graph represents the analog input voltage and the perfect output reading from an ADC with infinite resolution. The step function in Graph A shows the ideal response of a 3-bit ADC. Graphs B, C, D, and E show the effect of the various identified errors on the ADC output. Quantization error: In a perfect ADC, a unique digital code is generated for each analog voltage measured. (See Figure 2.07A.) A real ADC has small gaps between consecutive digital numbers; the size of the gap depends on the smallest quantum value the ADC can resolve, such as 2.44 mV (one LSB) for a 12-bit converter on a 10-Vdc range. The quantization error is then 1.22 mV (half an LSB) or less, i.e., 0.0122%. ADC errors are typically specified by the error in LSBs, the voltage error for a specified range, and the percent-of-reading error. ADC accuracy should approach its specified resolution.
Gain error: The ADC output is unable to faithfully reproduce the input voltage value; the slope of the transfer function deviates from the ideal.
Linearity error: Nearly impossible to eliminate by calibration. Nonlinearity error should be one LSB or less.
Missing codes: Some ADCs are unable to produce an accurate digital output for a specific analog input. The 3-bit ADC shown cannot represent the code 4 for any input, which affects both accuracy and resolution.
Offset: The output is set to zero at zero input, and the gain is set to full scale with a calibrated voltage source.
The histogram test shows the distribution of codes and verifies accuracy. The histogram illustrates how the samples in a set from a 12-bit ADC were distributed among the various codes for a 2.5-V measurement in a 10-V FSR (full-scale range). Most samples intended for bin 1024, representing 2.5 V, actually ended up there, but others fell into a Gaussian distribution due to white-noise content. A perfect ADC would produce only one vertical bar in the histogram for the specified input frequency and amplitude, because it would measure exactly one value for each and every sample. But because of the ADC’s inherent nonlinearities, it produces a distribution of bars on either side, representing digital words sorted into different code bins.
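A simplified simulation of the histogram test (the noise level, sample count, and seed are assumed values):

```python
import random

def histogram_test(v_in, v_fs, bits, noise_rms, n, seed=1):
    """Tally the output codes for a fixed DC input plus Gaussian white
    noise, a simplified sketch of the histogram test described above."""
    rng = random.Random(seed)
    lsb = v_fs / (1 << bits)               # one code width
    counts = {}
    for _ in range(n):
        code = round((v_in + rng.gauss(0, noise_rms)) / lsb)
        counts[code] = counts.get(code, 0) + 1
    return counts

# 2.5 V on a 12-bit, 10-V FSR converter: bin 1024 dominates, with
# neighboring bins filled by the noise.
codes = histogram_test(2.5, 10.0, 12, 0.003, 10_000)
print(max(codes, key=codes.get))  # 1024
```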
The ENOB test:
1. Evaluates the total system of interconnected ADC/MUX/PGA/SSH as a unit.
2. Checks all channels; errors also come from crosstalk.
3. Captures 1,024 samples and runs them through an FFT to compute the ENOB.
4. Measures the effects of all items in the right column.
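The ENOB itself is conventionally derived from the SINAD measured by the FFT, using the standard relation ENOB = (SINAD - 1.76)/6.02 with SINAD in dB:

```python
def enob(sinad_db: float) -> float:
    """Effective number of bits from measured SINAD (standard formula:
    each ideal bit contributes 6.02 dB, plus a 1.76 dB quantization term)."""
    return (sinad_db - 1.76) / 6.02

# A measured SINAD of 74 dB corresponds to about 12 effective bits:
print(round(enob(74.0), 1))  # 12.0
```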
Accuracy: Averaging the output can yield higher accuracy for a signal embedded in noise than for one free of noise. A large number of samples yields a Gaussian distribution, which can be accurately defined with a more precise peak for the wave.
Stability: Some systems introduce dither (random noise) to take advantage of the accuracy and stability of signal averaging.
Digital audio recordings: Early recordings lacked output averaging. A musical note could decay into a buzz because not all the bits were enabled; the output was distorted, and the ear could not filter it out. Averaging fixed the problem.
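A quick sketch of why averaging sharpens the estimate (the noise level, sample count, and seed are assumed values):

```python
import random

def averaged_reading(v_true, noise_rms, n, seed=1):
    """Average n noisy samples; the Gaussian noise averages toward zero,
    so the estimate tightens as n grows."""
    rng = random.Random(seed)
    total = sum(v_true + rng.gauss(0, noise_rms) for _ in range(n))
    return total / n

# 2.5 V buried in 50 mV rms of noise, recovered by averaging 10,000 samples:
print(round(averaged_reading(2.5, 0.05, 10_000), 2))  # 2.5
```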