Implementation of Digital Filters


3F3 – Digital Signal Processing (DSP), January 2009, lecture slides 7, Dr Elena Punskaya, Cambridge University Engineering Department


  1. Implementation of Digital Filters. Elena Punskaya. Some material adapted from courses by Prof. Simon Godsill, Dr. Arnaud Doucet, Dr. Malcolm Macleod and Prof. Peter Rayner
  2. Filter Implementation • Discussed: how to design digital filters • How do you implement them in practice? • In a double-precision floating-point world there is no problem • With no constraints, a simple architecture is possible
  3. Direct Form I Implementation • Works for any FIR/IIR filter; if N ≠ M, simply set the extra coefficients to zero • The structure separates into a moving-average part and an autoregressive part
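As a concrete reference, the Direct Form I difference equation can be sketched in a few lines of Python (an unoptimised, illustrative implementation; the function name is my own, and a[0] = 1 is assumed):

```python
def direct_form_1(b, a, x):
    """Direct Form I: y[n] = sum_k b[k] x[n-k] - sum_k a[k] y[n-k], a[0] = 1.

    b: moving-average (feed-forward) coefficients; a: autoregressive (feedback)
    coefficients. If N != M, missing coefficients are simply treated as zero.
    """
    y = [0.0] * len(x)
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(b)):            # moving-average part
            if n - k >= 0:
                acc += b[k] * x[n - k]
        for k in range(1, len(a)):         # autoregressive (feedback) part
            if n - k >= 0:
                acc -= a[k] * y[n - k]
        y[n] = acc
    return y
```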
  4. Constraints • However, one usually has severe speed constraints and power constraints • A direct implementation is then not a good idea • What can we do?
  5. Addressing speed/power concerns • Reduce the total number of operations • In particular, multiplication can take longer than addition and consumes more power, so reduce the number of multiplications • Fixed-point arithmetic takes much less area, so it is cheaper and faster • The area of a fixed-point parallel multiplier is proportional to the product of the coefficient and data wordlengths, so reduce the data or coefficient wordlengths
  6. Advantages of the Alternative Structures • For each given transfer function there are many possible realisation structures; Direct Form I is one example • Alternative structures are useful in fixed-point implementations: they may decrease the number of multiplications or the overall computational load, may make the response much less sensitive to coefficient imprecision (coefficient quantisation), and may add less quantisation noise to the output signal
  7. Structures for FIR Filters • Implementation of FIR filters is far more straightforward than that of IIR filters • General FIR filter • The Direct Form I structure is also called a tapped delay line or transversal structure • Requirements: M memory locations for storing previous inputs • Complexity: M multiplications and M additions per output point
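A minimal sketch of the tapped delay line in Python (illustrative only; the function name is my own):

```python
def fir_transversal(h, x):
    """Tapped delay line: M memory locations hold the M most recent inputs."""
    M = len(h)
    delay = [0.0] * M                  # M memory locations for previous inputs
    y = []
    for sample in x:
        delay = [sample] + delay[:-1]  # shift the new input into the line
        y.append(sum(hk * dk for hk, dk in zip(h, delay)))  # M multiplications
    return y
```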
  8. Linear-Phase FIR with Symmetric Taps • FIR filters are often designed to have linear phase, which implies symmetry in the filter taps • One can rewrite the convolution so that inputs sharing a coefficient are added before multiplying • Symmetric FIR realisation: the number of multiplications is reduced to roughly half (the integer part of (M+1)/2)
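The tap-pairing trick can be sketched as follows (a hypothetical helper; it assumes h[k] == h[M-1-k]):

```python
def symmetric_fir(h, x):
    """Linear-phase FIR exploiting the tap symmetry h[k] == h[M-1-k].

    Inputs that share a coefficient are added first, so only ceil(M/2)
    multiplications per output are needed instead of M.
    """
    M = len(h)
    half = M // 2
    buf = [0.0] * (M - 1) + list(x)   # zero pre-history
    y = []
    for n in range(len(x)):
        w = buf[n:n + M][::-1]        # w[k] = x[n-k]
        acc = sum(h[k] * (w[k] + w[M - 1 - k]) for k in range(half))
        if M % 2:                     # middle tap of an odd-length filter
            acc += h[half] * w[half]
        y.append(acc)
    return y
```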
  9. Structures for IIR Filters* • We are interested in implementing the general IIR transfer function • Direct Form I implementation • Alternative structures: parallel, cascade, feedback • *Most of the material discussed of course applies to FIR filters as well
  10. Parallel Structure of an IIR Filter • The idea: rewrite H(z) as a sum of filters • Parallel structure for K = 2
  11. Example 1: Parallel Structure of an IIR Filter • Consider the following transfer function • The partial fraction expansion leads to complex poles, so one would not implement the filter in that form
  12. Example 1: Parallel Structure of an IIR Filter • Recombining the last two terms gives a parallel structure with 3 branches • Alternatively, a parallel structure with 2 branches
  13. Example 2: Parallel Structure of an IIR Filter • Consider the following transfer function • The denominator has 3 roots: 0.0655 ± 0.5755j and 0.0492 • Recombining the two conjugate roots gives the partial fraction expansion
  14. General Procedure to Obtain a Parallel Structure • Decompose H(z) using a partial fraction expansion • Combine any pair of complex conjugate poles to obtain real-valued elements Hi(z) • Optional: combine such elements further if beneficial • Remark: typically we limit ourselves to second-order sections
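Assuming SciPy is available, the partial fraction expansion behind the parallel structure can be obtained with `scipy.signal.residuez` (a sketch with a made-up second-order example, not one of the slides' transfer functions):

```python
import numpy as np
from scipy.signal import residuez

# H(z) = 1 / ((1 - 0.5 z^-1)(1 - 0.25 z^-1)), a hypothetical example
b = [1.0]
a = np.polymul([1.0, -0.5], [1.0, -0.25])   # denominator [1, -0.75, 0.125]
r, p, k = residuez(b, a)                    # residues, poles, direct terms
# H(z) = 2/(1 - 0.5 z^-1) - 1/(1 - 0.25 z^-1): two parallel first-order branches
```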
  15. Main Characteristics of the Parallel Structure • Simple to implement • Sometimes has an advantage over the cascade realisation in terms of internally generated quantisation noise, though not by much (no amplification of errors over various stages) • Errors from coefficient quantisation of the Hi(z) affect the zeros of H(z), so longer coefficient wordlengths are required to ensure stability • Zeros on the unit circle in the overall transfer function are not preserved, so no saving of multipliers can be obtained for filters having such zeros
  16. Cascade Structure of an IIR Filter • The idea: rewrite the transfer function as a product of filters • Cascade structure: the output of one filter is the input to the next, shown for K = 2 • If we ignore finite-precision effects, the order of filters in a cascade can be changed without altering the transfer function
  17. Back to Example 1: Cascade Structure • The following transfer function was considered • Decomposing it gives a cascade structure with 2 sections
  18. Back to Example 2: Cascade Structure • The following transfer function was considered • The denominator has 3 roots: 0.0655 ± 0.5755j and 0.0492 • Recombining the two conjugate roots and decomposing gives a cascade structure with 2 sections
  19. General Procedure to Obtain a Cascade Structure • Compute the poles and zeros of H(z) • Combine any pair of complex conjugate poles/zeros to obtain real-valued elements Hi(z) • Optional: combine such elements further if beneficial • Remark: typically first- and second-order sections are used
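In practice, SciPy's `tf2sos` performs exactly this pairing of conjugate poles and zeros into second-order sections (a sketch, assuming SciPy is available; the 3rd-order Butterworth filter is just an arbitrary example of mine):

```python
import numpy as np
from scipy.signal import butter, tf2sos, freqz, sosfreqz

b, a = butter(3, 0.2)          # an arbitrary 3rd-order lowpass as the example
sos = tf2sos(b, a)             # one second-order plus one first-order section

# the cascade of sections realises the same transfer function as the direct form
w, H_direct = freqz(b, a, worN=64)
_, H_cascade = sosfreqz(sos, worN=64)
```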
  20. Sensitivity to Coefficient Quantisation • The filter coefficients are quantised, so errors in coefficient values cause errors in pole and zero positions and, hence, in the filter response • Example: consider a filter with four poles at z = -0.9 (close to the unit circle but stable)
  21. Example: Sensitivity to Coefficient Quantisation • A direct form filter would have the following denominator polynomial in its transfer function • The alternative is a cascade of four first-order sections • Assume an error of -0.06 • Direct: applying the error -0.06 to the third coefficient, i.e. 4.86 → 4.8, the roots of the resulting polynomial become -1.5077, -0.7775 ± 0.4533j, -0.5372, so the filter is unstable! • Cascade: applying the error -0.06 to 0.9, i.e. 0.9 → 0.84, is not a big deal: a smaller change, and only one root is affected • The cascade has much lower sensitivity to coefficient quantisation
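The slide's numbers are easy to reproduce with NumPy:

```python
import numpy as np

a = np.poly([-0.9] * 4)      # (1 + 0.9 z^-1)^4 -> [1, 3.6, 4.86, 2.916, 0.6561]
a_quant = a.copy()
a_quant[2] = 4.8             # direct form: a -0.06 error on one coefficient
roots = np.roots(a_quant)
# one root moves to about -1.5077, outside the unit circle: the filter is unstable
unstable = bool(np.any(np.abs(roots) > 1.0))
# cascade form: the same error moves a single pole from -0.9 to -0.84, still stable
```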
  22. Useful Tips • Each complex root, with its inevitable conjugate, can be implemented by a single second-order section • A root at re^jω and its conjugate re^-jω together give the real-coefficient second-order polynomial 1 - 2r cos(ω) z^-1 + r² z^-2 • Use it in the numerator if interested in placing zeros, or in the denominator if interested in placing poles (r < 1 is usually assumed for poles)
  23. Filters with Zeros on the Unit Circle • Many filters (FIR and IIR) have zeros on the unit circle • For a biquadratic section with zeros at e^jω and its conjugate e^-jω, the z^-2 coefficient equals 1, so no multiplication is required for it; this is widely used in practice • Implementing a high-order filter with many zeros on the unit circle as a cascade of biquadratic sections requires fewer total multiplications than a direct form implementation
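For example, a biquadratic numerator with zeros at e^{±jω} has the form 1 - 2cos(ω)z^-1 + z^-2, and its roots sit exactly on the unit circle (ω = 0.3π here is an arbitrary choice of mine):

```python
import numpy as np

w = 0.3 * np.pi
b = [1.0, -2.0 * np.cos(w), 1.0]   # b0 = b2 = 1: those taps need no multiplier
zeros = np.roots(b)
# both zeros have magnitude 1, i.e. they lie on the unit circle
```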
  24. Feedback Structure of an IIR Filter • Two filters H1(z) and H2(z) in a feedback structure • Transfer function of a feedback network • Output and transfer function of the filter
  25. Example: Feedback Structure Transfer Function • A more complex structure • Similar technique: feedback network, output, transfer function
  26. Example: Feedback Structure Transfer Function • Quite complex • A far simpler way is to rearrange the system so that it looks like a feedback network in cascade with a parallel one
  27. IIR Direct Forms • Direct Form I, considered already • Direct Form II, the standard alternative
  28. Direct Form II • Implementing the transfer function • Set it as a cascade of two parts: the feedback (all-pole) part can be realised with a feedback structure, and the numerator part can be realised with a parallel structure
  29. Direct Form II • Putting it all together: the parallel part and the feedback part share a single delay line • Direct Form II is preferable to Direct Form I as it requires a smaller number of memory locations • Direct Form II is canonic (the number of delay elements is exactly N) while Direct Form I is not
  30. Example of a Direct Form II Realisation • Consider the third-order IIR transfer function and its feedback realisation
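A minimal Direct Form II sketch in Python, with the single shared delay line w (illustrative only; assumes a[0] = 1):

```python
def direct_form_2(b, a, x):
    """Direct Form II: feedback and feed-forward parts share one delay line."""
    N = max(len(a), len(b)) - 1
    b = list(b) + [0.0] * (N + 1 - len(b))
    a = list(a) + [0.0] * (N + 1 - len(a))
    w = [0.0] * N                 # only N delay elements: the canonic form
    y = []
    for xn in x:
        # feedback part computes the intermediate signal w[n]
        w0 = xn - sum(a[k] * w[k - 1] for k in range(1, N + 1))
        # feed-forward part taps the same delay line
        y.append(b[0] * w0 + sum(b[k] * w[k - 1] for k in range(1, N + 1)))
        w = [w0] + w[:-1]         # shift the shared delay line
    return y
```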
  31. Finite-Precision Number Representation • In a computer, numbers are represented as combinations of a finite number of binary digits, or bits, that take values 0 and 1 • Bits are usually organised into bytes containing 8 bits, or words (16 bits, 32 bits) • Two forms are used to represent numbers on a digital computer: fixed-point and floating-point
  32. Fixed-Point Representation • The magnitude of the number is expressed in powers of 2, with the binary point separating positive and negative exponents • A B-bit number with A bits before the binary point is denoted (B, A); the word includes a sign bit, and the smallest increment is the least significant bit (LSB) • Example: a B = 12 bit number with A = 2 bits before the binary point is in the range -2048/1024 to +2047/1024 inclusive • All values are quantised to integer multiples of the LSB
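The (B, A) format can be sketched as a quantiser (a hypothetical helper of mine, with the LSB taken as 2^(A-B) so that the example's range and step come out right):

```python
def quantise(value, B, A):
    """Round value onto the (B, A) fixed-point grid: B bits in total
    (including the sign bit), A bits before the binary point."""
    lsb = 2.0 ** (A - B)                    # B=12, A=2 -> LSB = 1/1024
    n = round(value / lsb)
    n = max(-(2 ** (B - 1)), min(2 ** (B - 1) - 1, n))  # clamp to number range
    return n * lsb
```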
  33. Overflow • If the result of any calculation in the filter exceeds its number range, overflow occurs • By default, a value slightly greater than the maximum representable positive number becomes a large negative number, and vice versa • This is called wraparound; the resulting error is huge • In IIR filters it can result in very large amplitude "overflow oscillations"
  34. Strategies to Avoid Overflow • Two strategies exist • Scaling: ensure that values can never (or hardly ever) overflow • Saturation arithmetic: ensure that if overflow occurs, its effects are greatly reduced
  35. Saturation Arithmetic • First, the results of all calculations are kept to full precision; for example, the addition of two (B, A) values results in a (B+1, A+1) value, and the multiplication of a (B, A) value by a (C, D) value results in a (B+C-1, A+D-1) value • Then, the higher-order bits of the true result are inspected to detect overflow • If overflow occurs, the maximum possible positive value or minimum possible negative value is returned
  36. Saturation Arithmetic • Instead of merely masking the true result to (B, A) bits, overflow is detected and the maximum possible positive value or minimum possible negative value is returned • Some DSP ICs incorporate saturation arithmetic in hardware
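The difference between default wraparound and saturation can be sketched on B-bit integers (helper names are my own):

```python
def wraparound(n, B):
    """Default two's-complement behaviour: excess high-order bits are discarded."""
    n &= (1 << B) - 1
    return n - (1 << B) if n >= (1 << (B - 1)) else n

def saturate(n, B):
    """Saturation arithmetic: clamp to the most positive/negative value."""
    return max(-(1 << (B - 1)), min((1 << (B - 1)) - 1, n))
```

With B = 8, an overflowing result of 130 wraps to -126 (a huge error) but saturates to 127 (a small one).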
  37. Scaling: l1 Scaling • Assume that the input to a filter is bounded in magnitude • Then its output is bounded by that input bound times the sum of |h[k]|, where h is the impulse response • This sum is known as the l1 norm of the filter impulse response, and is easy to compute numerically
  38. l1 Scaling • Bounded input gives bounded output • If the maximum permissible output magnitude is D, overflow cannot occur provided we scale the signal so that this l1 bound does not exceed D • However, if we reduce the magnitude of signals, the ratio of signal power to quantisation noise power becomes smaller • Scaling therefore worsens the noise performance of the filter
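The l1 norm is easy to estimate numerically by truncating the impulse response (a pure-Python sketch; the function name and truncation length are my own choices):

```python
def l1_norm(b, a, n_terms=10000):
    """Sum of |h[k]| over a truncated impulse response (a[0] assumed 1)."""
    x = [1.0] + [0.0] * (n_terms - 1)        # unit impulse
    y = [0.0] * n_terms
    for n in range(n_terms):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc
    return sum(abs(v) for v in y)
```

For h[k] = 0.5^k the norm is 2, so with a permissible output magnitude D the signal would be scaled by D/2.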
  39. Alternative Scaling • The input signal which gives rise to the largest possible output is unlikely to occur in practice, so a less conservative scaling approach is often used • l2 scaling • Frequency-response scaling • For both, saturation arithmetic is still needed as overflow is still possible
  40. l2 Scaling • Choose a less conservative scale factor based on the root-mean-square of the impulse response; this is known as l2 scaling
  41. Frequency-Response Scaling • Suppose a sine wave of frequency ω and peak amplitude C at the input gives a sine wave of peak amplitude C|H(e^jω)| at the output • To prevent overflow of a single sine wave, use a scaling factor based on the peak of the frequency response magnitude
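Both less conservative scale factors can be computed from the same (truncated) impulse response; a sketch for the first-order example h[k] = 0.5^k, where everything is my own illustrative choice:

```python
import math

h = [0.5 ** k for k in range(200)]      # impulse response of 1/(1 - 0.5 z^-1)

l2 = math.sqrt(sum(v * v for v in h))   # l2 norm (root-mean-square type bound)

def gain(h, w):
    """|H(e^jw)| evaluated directly from the impulse response."""
    return abs(sum(v * complex(math.cos(-w * k), math.sin(-w * k))
                   for k, v in enumerate(h)))

# peak gain over a grid on [0, pi]; for this lowpass it is H(1) = 1/(1-0.5) = 2
peak = max(gain(h, 2 * math.pi * i / 512) for i in range(257))
```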
  42. Application of Scaling to Cascade and Parallel Realisations • At each step, you must compute the impulse response or frequency response from the input of the overall filter to the point of interest, taking account of all scaling already included up to that point • The scaling at section inputs may be implemented using simple binary shifts (using the next smaller power of 2), or by incorporating it into the FIR coefficient scaling of the preceding section • For a parallel realisation, scaling is computed independently for each section, but all section outputs must be scaled by the same amount, so the overall scaling of each section must be made the same • Finally, scaling is applied to the final adder(s) which add together the outputs of the parallel sections
  43. Roundoff (Quantisation) Noise Generation • The output of a multiplier has more bits than its inputs (for example, a 16 by 16 two's-complement multiplier outputs a 31-bit two's-complement value) • To store the output it has to be (re)quantised: low-order bits have to be thrown away • An error called quantisation noise or roundoff noise is added at that point • The noise variance at the multiplier output, assuming rounding is used, is q²/12, where q is the LSB size after quantisation (the same as for quantisation of analogue signals)
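The q²/12 figure is easy to check by simulation (a sketch using NumPy; the LSB size and signal are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
q = 2.0 ** -15                        # LSB size after quantisation
x = rng.uniform(-1.0, 1.0, 200000)    # large-amplitude wideband signal
err = np.round(x / q) * q - x         # rounding error introduced at the quantiser
measured = np.var(err)
predicted = q ** 2 / 12               # the textbook roundoff noise variance
```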
  44. Roundoff (Quantisation) Noise Assumptions • It is often assumed that the quantisation noise at each multiplier output is white (independent from sample to sample) • It is also assumed to be independent between multipliers, so that the noise variances ("powers") add • (The assumption of whiteness is actually a very poor model if the signal is narrowband, but it is reasonable for large-amplitude wideband signals. The assumption of independence can also be a poor model.)
  45. Roundoff (Quantisation) Noise in FIR and IIR Filters • As a result, the quantisation noise from the multipliers of an FIR filter adds white noise directly to the output signal • In IIR filters, the white quantisation noise from the feedback multipliers is fed back to the input of the filter, so the resulting noise spectrum at the filter output is coloured: its spectrum is proportional to the square of the filter's frequency response magnitude
  46. Roundoff (Quantisation) Noise • The roundoff noise level is affected by data wordlengths, filter response, filter structure and (to an extent) by section ordering in cascade structures; further details are in specialist texts • Remark: DSP ICs, and some VLSI filters, provide an accumulator of longer wordlength than the data wordlength (e.g. a 32-bit accumulator for a 16-bit DSP); the multiplier outputs are accumulated at the longer wordlength, and the accumulator output is quantised only once, which significantly reduces roundoff noise
  47. Limit Cycles • Zero-input limit cycles are self-sustaining oscillations caused by the rounding of the results of computations • Example: consider a stable second-order IIR filter with complex poles at ±j0.9 • If rounding to the nearest LSB is used at the output of the multiplier, then when y(n-2) = ±1, ±2, ±3 or ±4 LSB, the computation 0.9y(n-2) gives the result ±1, ±2, ±3 or ±4 LSB respectively • Hence a limit cycle of the form y(n) = 4, 0, -4, 0, 4, 0, -4, 0 (or the same pattern with 3, 2 or 1) may occur • Effectively, the rounding non-linearity has increased the feedback gain to 1, turning the system into an oscillator
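The slide's limit cycle can be reproduced with integer arithmetic in LSBs; a sketch assuming the zero-input recursion is y[n] = -0.9 y[n-2] with rounding at the multiplier (with exact arithmetic the magnitude would decay by 0.9 every two samples):

```python
def zero_input_response(y_nm1, y_nm2, steps=8):
    """y[n] = round(-0.9 * y[n-2]) with zero input, all values in LSBs."""
    out = []
    for _ in range(steps):
        yn = -round(0.9 * y_nm2)   # rounding pushes the loop gain back up to 1
        out.append(yn)
        y_nm1, y_nm2 = yn, y_nm1
    return out
```

Starting from y(-1) = 0, y(-2) = 4 LSB, the output oscillates forever instead of decaying.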
  48. Limit Cycles • Limit cycles are troublesome in some applications, especially with short data wordlengths, where the limit cycle may be relatively large; with the longer wordlengths of DSP ICs, it is often possible to ignore them • Solutions: • One solution is to quantise toward 0 (truncation) instead of rounding the multiplier output; but the extra roundoff noise due to truncation may require the data wordlength to be increased by 1 or 2 bits • Another solution is to use certain forms of digital filters (such as Wave filters) which do not support limit cycles; however, these are computationally more expensive
  49. Deadbands • Consider a simple digital low-pass filter such as is commonly used for smoothing • The transfer function has unit gain at zero frequency (z = 1) and a pole at 1 - α • The time constant is approximately 1/α samples, for α << 1 • If the multiplier input is small enough that its scaled output rounds to zero, the filter output remains constant • Hence a constant offset, known as the deadband, arises; it can be up to (0.5/α) LSB • If, for example, 1/α = 10000, to give a time constant of 10000 samples, then the size of the LSB of the internal arithmetic must be 5000 times smaller than the permissible size of the deadband; this implies 13 extra bits (since 2^12 = 4096)
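The deadband is easy to see in simulation. A sketch with α = 1/128 (chosen so that the products are exact in binary); Python's round-half-to-even stands in for the hardware rounding rule, and the exact stall value depends on that rule:

```python
def smooth(alpha, x, y0=0):
    """y[n] = y[n-1] + round(alpha * (x[n] - y[n-1])), all values in LSBs."""
    y, out = y0, []
    for xn in x:
        y += round(alpha * (xn - y))   # multiplier output rounded to an LSB
        out.append(y)
    return out

# Deadband is 0.5/alpha = 64 LSB: a step input of 100 stalls 64 LSB short,
# because round(64/128) = round(0.5) = 0 under round-half-to-even.
trace = smooth(1.0 / 128.0, [100] * 200)
```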
  50. Coefficient Quantisation • In a previous section we showed that the cascade form is much less sensitive to coefficient quantisation than a high-order direct form filter (this is also true of the parallel form) • If the filter has zeros on the unit circle and is implemented using second-order sections, the cascade realisation has the advantage that these zeros stay on the unit circle (because a coefficient b2 = 1 is unaffected by quantisation), although their frequencies may be altered
  51. Coefficient Quantisation • A traditional way to study the relative merits of different filter structures is to analyse the sensitivity of the frequency response magnitude to random (often Gaussian) perturbations of the coefficients, and to use this as a measure of the likely sensitivity of a given structure to coefficient quantisation • Various structures, including Lattice and Wave filters, give even lower sensitivity to coefficient quantisation than the cascade realisation • However, they generally require a substantially increased number of multipliers • For a specific filter design, you should compute the actual filter response with quantised coefficients, and then modify the design if necessary
  52. Coefficient Quantisation • In dedicated hardware, such as custom ICs, where there are significant benefits from reducing coefficient wordlengths, discrete optimisation can be used to search for the finite-wordlength filter with the closest response to a given specification • Some discrete optimisation algorithms, including Genetic Algorithms and Simulated Annealing, are available in software libraries
  53. Thank you!