
Digital Filters Design for Signal and Image Processing

Edited by Mohamed Najim

First published in France in 2004 by Hermès Science/Lavoisier under the title "Synthèse de filtres numériques en traitement du signal et des images". First published in Great Britain and the United States in 2006 by ISTE Ltd.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 6 Fitzroy Square, London W1T 5DX, UK
ISTE USA, 4308 Patrice Road, Newport Beach, CA 92663, USA
www.iste.co.uk

© ISTE Ltd, 2006
© LAVOISIER, 2004

The rights of Mohamed Najim to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data

Synthèse de filtres numériques en traitement du signal et des images. English.
Digital filters design for signal and image processing / edited by Mohamed Najim.
p. cm. Includes index.
ISBN-13: 978-1-905209-45-3
ISBN-10: 1-905209-45-2
1. Electric filters, Digital. 2. Signal processing--Digital techniques. 3. Image processing--Digital techniques. I. Najim, Mohamed. II. Title.
TK7872.F5S915 2006
621.382'2--dc22
2006021429

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library.

Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire.
Table of Contents

Introduction

Chapter 1. Introduction to Signals and Systems
Yannick BERTHOUMIEU, Eric GRIVEL and Mohamed NAJIM
1.1. Introduction
1.2. Signals: categories, representations and characterizations
1.2.1. Definition of continuous-time and discrete-time signals
1.2.2. Deterministic and random signals
1.2.3. Periodic signals
1.2.4. Mean, energy and power
1.2.5. Autocorrelation function
1.3. Systems
1.4. Properties of discrete-time systems
1.4.1. Invariant linear systems
1.4.2. Impulse responses and convolution products
1.4.3. Causality
1.4.4. Interconnections of discrete-time systems
1.5. Bibliography

Chapter 2. Discrete System Analysis
Mohamed NAJIM and Eric GRIVEL
2.1. Introduction
2.2. The z-transform
2.2.1. Representations and summaries
2.2.2. Properties of the z-transform
2.2.2.1. Linearity
2.2.2.2. Advanced and delayed operators
2.2.2.3. Convolution
2.2.2.4. Changing the z-scale
2.2.2.5. Contrasted signal development
2.2.2.6. Derivation of the z-transform
2.2.2.7. The sum theorem
2.2.2.8. The final-value theorem
2.2.2.9. Complex conjugation
2.2.2.10. Parseval's theorem
2.2.3. Table of standard transforms
2.3. The inverse z-transform
2.3.1. Introduction
2.3.2. Methods of determining inverse z-transforms
2.3.2.1. Cauchy's theorem: a case of complex variables
2.3.2.2. Development in rational fractions
2.3.2.3. Development by algebraic division of polynomials
2.4. Transfer functions and difference equations
2.4.1. The transfer function of a continuous system
2.4.2. Transfer functions of discrete systems
2.5. Z-transforms of the autocorrelation and intercorrelation functions
2.6. Stability
2.6.1. Bounded input, bounded output (BIBO) stability
2.6.2. Regions of convergence
2.6.2.1. Routh's criterion
2.6.2.2. Jury's criterion

Chapter 3. Frequential Characterization of Signals and Filters
Eric GRIVEL and Yannick BERTHOUMIEU
3.1. Introduction
3.2. The Fourier transform of continuous signals
3.2.1. Summary of the Fourier series decomposition of continuous signals
3.2.1.1. Decomposition of finite energy signals using an orthonormal base
3.2.1.2. Fourier series development of periodic signals
3.2.2. Fourier transforms and continuous signals
3.2.2.1. Representations
3.2.2.2. Properties
3.2.2.3. The duality theorem
3.2.2.4. The quick method of calculating the Fourier transform
3.2.2.5. The Wiener-Khintchine theorem
3.2.2.6. The Fourier transform of a Dirac comb
3.2.2.7. Another method of calculating the Fourier series development of a periodic signal
3.2.2.8. The Fourier series development and the Fourier transform
3.2.2.9. Applying the Fourier transform: Shannon's sampling theorem
3.3. The discrete Fourier transform (DFT)
3.3.1. Expressing the Fourier transform of a discrete sequence
3.3.2. Relations between the Laplace and Fourier z-transforms
3.3.3. The inverse Fourier transform
3.3.4. The discrete Fourier transform
3.4. The fast Fourier transform (FFT)
3.5. The fast Fourier transform for a time/frequency/energy representation of a non-stationary signal
3.6. Frequential characterization of a continuous-time system
3.6.1. First and second order filters
3.6.1.1. 1st order system
3.6.1.2. 2nd order system
3.7. Frequential characterization of discrete-time systems
3.7.1. Amplitude and phase frequential diagrams
3.7.2. Application

Chapter 4. Continuous-Time and Analog Filters
Daniel BASTARD and Eric GRIVEL
4.1. Introduction
4.2. Different types of filters and filter specifications
4.3. Butterworth filters and the maximally flat approximation
4.3.1. Maximally flat functions (MFM)
4.3.2. A specific example of MFM functions: Butterworth polynomial filters
4.3.2.1. Amplitude-squared expression
4.3.2.2. Localization of poles
4.3.2.3. Determining the cut-off frequency at –3 dB and filter orders
4.3.2.4. Application
4.3.2.5. Realization of a Butterworth filter
4.4. Equiripple filters and the Chebyshev approximation
4.4.1. Characteristics of the Chebyshev approximation
4.4.2. Type I Chebyshev filters
4.4.2.1. The Chebyshev polynomial
4.4.2.2. Type I Chebyshev filters
4.4.2.3. Pole determination
4.4.2.4. Determining the cut-off frequency at –3 dB and the filter order
4.4.2.5. Application
4.4.2.6. Realization of a Chebyshev filter
4.4.2.7. Asymptotic behavior
4.4.3. Type II Chebyshev filters
4.4.3.1. Determining the filter order and the cut-off frequency
4.4.3.2. Application
4.5. Elliptic filters: the Cauer approximation
4.6. Summary of four types of low-pass filter: Butterworth, Chebyshev type I, Chebyshev type II and Cauer
4.7. Linear phase filters (maximally flat delay or MFD): Bessel and Thomson filters
4.7.1. Reminders on continuous linear phase filters
4.7.2. Properties of Bessel-Thomson filters
4.7.3. Bessel and Bessel-Thomson filters
4.8. Papoulis filters (optimum (On))
4.8.1. General characteristics
4.8.2. Determining the poles of the transfer function
4.9. Bibliography

Chapter 5. Finite Impulse Response Filters
Yannick BERTHOUMIEU, Eric GRIVEL and Mohamed NAJIM
5.1. Introduction to finite impulse response filters
5.1.1. Difference equations and FIR filters
5.1.2. Linear phase FIR filters
5.1.2.1. Representation
5.1.2.2. Different forms of FIR linear phase filters
5.1.2.3. Position of zeros in FIR filters
5.1.3. Summary of the properties of FIR filters
5.2. Synthesizing FIR filters using frequential specifications
5.2.1. Windows
5.2.2. Synthesizing FIR filters using the windowing method
5.2.2.1. Low-pass filters
5.2.2.2. High-pass filters
5.3. Optimal approach of equal ripple in the stop-band and passband
5.4. Bibliography

Chapter 6. Infinite Impulse Response Filters
Eric GRIVEL and Mohamed NAJIM
6.1. Introduction to infinite impulse response filters
6.1.1. Examples of IIR filters
6.1.2. Zero-loss and all-pass filters
6.1.3. Minimum-phase filters
6.1.3.1. Problem
6.1.3.2. Stabilizing inverse filters
6.2. Synthesizing IIR filters
6.2.1. Impulse invariance method for analog to digital filter conversion
6.2.2. The invariance method of the indicial response
6.2.3. Bilinear transformations
6.2.4. Frequency transformations for filter synthesis using low-pass filters
6.3. Bibliography

Chapter 7. Structures of FIR and IIR Filters
Mohamed NAJIM and Eric GRIVEL
7.1. Introduction
7.2. Structure of FIR filters
7.3. Structure of IIR filters
7.3.1. Direct structures
7.3.2. The cascade structure
7.3.3. Parallel structures
7.4. Realizing finite precision filters
7.4.1. Introduction
7.4.2. Examples of FIR filters
7.4.3. IIR filters
7.4.3.1. Introduction
7.4.3.2. The influence of quantification on filter stability
7.4.3.3. Introduction to scale factors
7.4.3.4. Decomposing the transfer function into first- and second-order cells
7.5. Bibliography

Chapter 8. Two-Dimensional Linear Filtering
Philippe BOLON
8.1. Introduction
8.2. Continuous models
8.2.1. Representation of 2-D signals
8.2.2. Analog filtering
8.3. Discrete models
8.3.1. 2-D sampling
8.3.2. The aliasing phenomenon and Shannon's theorem
8.3.2.1. Reconstruction by linear filtering (Shannon's theorem)
8.3.2.2. Aliasing effect
8.4. Filtering in the spatial domain
8.4.1. 2-D discrete convolution
8.4.2. Separable filters
8.4.3. Separable recursive filtering
8.4.4. Processing of side effects
8.4.4.1. Prolonging the image by pixels of null intensity
8.4.4.2. Prolonging by duplicating the border pixels
8.4.4.3. Other approaches
8.5. Filtering in the frequency domain
8.5.1. 2-D discrete Fourier transform (DFT)
8.5.2. The circular convolution effect
8.6. Bibliography

Chapter 9. Two-Dimensional Finite Impulse Response Filter Design
Yannick BERTHOUMIEU
9.1. Introduction
9.2. Introduction to 2-D FIR filters
9.3. Synthesizing with the two-dimensional windowing method
9.3.1. Principles of the method
9.3.2. Theoretical 2-D frequency shape
9.3.2.1. Rectangular frequency shape
9.3.2.2. Circular shape
9.3.3. Digital 2-D filter design by windowing
9.3.4. Applying filters based on rectangular and circular shapes
9.3.5. 2-D Gaussian filters
9.3.6. 1-D and 2-D representations in a continuous space
9.3.6.1. 2-D specifications
9.3.7. Approximation for FIR filters
9.3.7.1. Truncation of the Gaussian profile
9.3.7.2. Rectangular windows and convolution
9.3.8. An example based on exploiting a modulated Gaussian filter
9.4. Appendix: spatial window functions and their implementation
9.5. Bibliography

Chapter 10. Filter Stability
Michel BARRET
10.1. Introduction
10.2. The Schur-Cohn criterion
10.3. Appendix: resultant of two polynomials
10.4. Bibliography

Chapter 11. The Two-Dimensional Domain
Michel BARRET
11.1. Recursive filters
11.1.1. Transfer functions
11.1.2. The 2-D z-transform
11.1.3. Stability, causality and semi-causality
11.2. Stability criteria
11.2.1. Causal filters
11.2.2. Semi-causal filters
11.3. Algorithms used in stability tests
11.3.1. The Jury table
11.3.2. Algorithms based on calculating the Bezout resultant
11.3.2.1. First algorithm
11.3.2.2. Second algorithm
11.3.3. Algorithms and rounding-off errors
11.4. Linear predictive coding
11.5. Appendix A: demonstration of the Schur-Cohn criterion
11.6. Appendix B: optimum 2-D stability criteria
11.7. Bibliography

List of Authors

Index
Introduction

Over the last decade, digital signal processing has matured, and digital signal processing techniques have played a key role in the expansion of electronic products for everyday use, especially in the fields of audio, image and video processing. Nowadays, digital signal processing is used in MP3 and DVD players, digital cameras and mobile phones, as well as in radar processing, biomedical applications, seismic data processing, etc.

This book aims to be a textbook that presents a thorough introduction to digital signal processing featuring the design of digital filters. The purpose of the first part (Chapters 1 to 9) is to initiate the newcomer to digital signal and image processing, whereas the second part (Chapters 10 and 11) covers some advanced topics on stability for 2-D filter design. These chapters are written at a level that is suitable for students or for individual study by practicing engineers.

When talking about filtering methods, we refer to techniques to design and synthesize filters with constant filter coefficients. By way of contrast, when dealing with adaptive filters, the filter taps change with time to adjust to the underlying system. These types of filters are not addressed here, but are presented in various books such as [HAY 96], [SAY 03], [NAJ 06].

Chapter 1 provides an overview of various classes of signals and systems. It discusses the time-domain representations and characterizations of continuous-time and discrete-time signals.

Chapter 2 details the background for the analysis of discrete-time signals. It mainly deals with the z-transform, its properties and its use for the analysis of linear systems represented by difference equations.
Chapter 3 is dedicated to the analysis of the frequency properties of signals and systems. The Fourier transform, the discrete Fourier transform (DFT) and the fast Fourier transform (FFT) are introduced along with their properties. In addition, the well-known Shannon sampling theorem is recalled.

As we will see, some of the most popular techniques for digital infinite impulse response (IIR) filter design benefit from results initially developed for analog signals. In order to ease the reader's task, Chapter 4 is devoted to continuous-time filter design. More particularly, we recall several approximation techniques developed by mathematicians such as Chebyshev or Legendre, whose names have thus become associated with techniques of filter design.

The following chapters form the core of the book. Chapter 5 deals with techniques to synthesize finite impulse response (FIR) filters. Unlike IIR filters, these have no equivalent in the continuous-time domain. The so-called windowing method of FIR filter design is presented first; it also enables us to emphasize the key role played by windowing in digital signal processing, e.g., for frequency analysis. The Remez algorithm is then detailed.

Chapter 6 concerns IIR filters. The most popular techniques for analog-to-digital filter conversion, such as the bilinear transform and the impulse invariance method, are presented. As the frequency response of these filters is represented by rational functions, we must tackle the stability problems induced by the existence of poles of these rational functions.

In Chapter 7, we address the selection of the filter structure and point out its importance for filter implementation. Some problems due to finite-precision implementation are listed, and we provide rules for choosing an appropriate structure when implementing filters on fixed-point devices.
In comparison with many available books dedicated to digital filtering, this title features both 1-D and 2-D systems, and as such covers both signal and image processing. Thus, in Chapters 8 and 9, 2-D filtering is investigated. Moreover, it is not easy to establish necessary and sufficient conditions to test the stability of 2-D filters. Therefore, Chapters 10 and 11 are dedicated to the difficult problem of the stability of 2-D digital systems, a topic which is still the subject of many works, such as [ALA 2003] and [SER 06]. Even if these two chapters are not a prerequisite for filter design, they can provide the reader who would like to study the problems of stability in the multi-dimensional case with valuable clarifications. This contribution is another element that makes this book stand out.
The field of digital filtering is often perceived by students as a "patchwork" of formulae and recipes. Indeed, the methods and concepts are based on several specific optimization techniques and mathematical results which are difficult to grasp. For instance, we should remember that the so-called Parks-McClellan algorithm proposed in 1972 was first rejected by the reviewers [PAR 72]. This was probably due to the fact that the size of the submitted paper, i.e., 5 pages, did not enable the reviewers to understand every step of the approach [McC 05]. In this book we have tried, at every stage, to justify the necessity of these approaches without recalling all the steps of the derivation of the algorithms. These steps are described in many articles published during the 1970s in the IEEE periodicals, i.e., the Transactions on Acoustics, Speech and Signal Processing (which has since become the Transactions on Signal Processing) and the Transactions on Circuits and Systems.

Mohamed NAJIM
Bordeaux

[ALA 2003] ALATA O., NAJIM M., RAMANANJARASOA C. and TURCU F., "Extension of the Schur-Cohn Stability Test for 2-D AR Quarter-Plane Model", IEEE Transactions on Information Theory, vol. 49, no. 11, November 2003.
[HAY 96] HAYKIN S., Adaptive Filter Theory, 3rd edition, Prentice Hall, 1996.
[McC 05] McCLELLAN J.H. and PARKS T.W., "A Personal History of the Parks-McClellan Algorithm", IEEE Signal Processing Magazine, pp. 82-86, March 2005.
[NAJ 06] NAJIM M., Modélisation, estimation et filtrage optimal en traitement du signal, Hermès, Paris, forthcoming, 2006.
[PAR 72] PARKS T.W. and McCLELLAN J.H., "Chebyshev Approximation for Nonrecursive Digital Filters with Linear Phase", IEEE Transactions on Circuit Theory, vol. CT-19, no. 2, pp. 189-194, 1972.
[SAY 03] SAYED A., Fundamentals of Adaptive Filtering, Wiley-IEEE Press, 2003.
[SER 06] SERBAN I., TURCU F. and NAJIM M., "Schur Coefficients in Several Variables", Journal of Mathematical Analysis and Applications, vol. 320, no. 1, pp. 293-302, August 2006.
Chapter 1

Introduction to Signals and Systems

1.1. Introduction

Throughout a range of fields as varied as multimedia, telecommunications, geophysics, astrophysics, acoustics and biomedicine, signals and systems play a major role. Their frequential and temporal characteristics are used to extract and analyze the information they contain. However, what importance do signals and systems really hold for these disciplines? In this chapter we will look at some of the answers to this question.

First we will discuss the different types of continuous-time and discrete-time signals, which can be termed random or deterministic according to their nature. We will also introduce several mathematical tools that help characterize these signals. In addition, we will describe the signal acquisition and processing chain. Later we will define the concept of a system, emphasizing invariant discrete-time linear systems.

1.2. Signals: categories, representations and characterizations

1.2.1. Definition of continuous-time and discrete-time signals

The function of a signal is to serve as a medium for information: it is a representation of the variations of a physical variable.

Chapter written by Yannick BERTHOUMIEU, Eric GRIVEL and Mohamed NAJIM.
A signal can be measured by a sensor, then analyzed to describe a physical phenomenon. This is the case, for example, of the voltage measured across a resistor in order to verify the correct functioning of an electronic board, or of speech signals that describe the air pressure fluctuations perceived by the human ear.

Generally, a signal is a function of time. There are two kinds of signals: continuous-time and discrete-time. A continuous-time or analog signal can be measured at any instant. Physical phenomena create, for the most part, continuous-time signals.

Figure 1.1. Example of the sleep spindles of an electroencephalogram (EEG) signal (horizontal axis: time in seconds)

The advancement of computer-based techniques at the end of the 20th century led to the development of digital methods for information processing. The capacity to convert analog signals to digital signals has meant a continual improvement of processing devices in many application fields. The most significant example is the field of telecommunications, especially cell phones and digital television. The digital representation of signals has led to an explosion of new techniques in other fields as varied as speech processing, audio-frequency signal analysis, biomedical disciplines, seismic measurements, multimedia, radar and measurement instrumentation, among others.
A signal is said to be a discrete-time signal when it can only be measured at certain instants; it corresponds to a sequence of numerical values. Sampled signals are the result of sampling, uniform or not, of a continuous-time signal. In this work, we are especially interested in signals taken at regular intervals of time, called the sampling period, written

Ts = 1/fs

where fs is called the sampling rate or the sampling frequency. This is the situation for a temperature taken during an experiment, or for a speech signal (see Figure 1.2). This discrete signal can be written either as x(k) or x(kTs). Generally, we will use the first notation for its simplicity. In addition, a digital signal is a discrete-time, discrete-valued signal. In that case, each signal sample value belongs to a finite set of possible values.

Figure 1.2. Example of a digital voiced speech signal (the sampling frequency fs is 16 kHz)

The choice of a sampling frequency depends on the application and the frequency range of the signal to be sampled. Table 1.1 gives several examples of sampling frequencies, according to different applications.
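The relation Ts = 1/fs can be illustrated with a minimal sketch (the helper name `sample` and the chosen signal are ours, not from the book):

```python
import math

def sample(x, fs, duration):
    """Sample a continuous-time signal x(t) at rate fs (Hz) over [0, duration).

    Returns the discrete sequence x(k) = x(k*Ts) with Ts = 1/fs.
    """
    Ts = 1.0 / fs                       # sampling period Ts = 1/fs
    return [x(k * Ts) for k in range(round(duration * fs))]

# A 300 Hz sinusoid sampled at fs = 8 kHz for 10 ms gives 80 samples
xk = sample(lambda t: math.sin(2 * math.pi * 300 * t), fs=8000, duration=0.01)
print(len(xk))  # 80
```

Doubling fs to 16 kHz would halve Ts to 62.5 µs and double the number of samples over the same duration.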
Signal                                     fs                        Ts
Speech – telephone band (telephony)        8 kHz                     125 µs
Speech – broadband (audio-visual conf.)    16 kHz                    62.5 µs
Audio – broadband (stereo)                 32 / 44.1 / 48 kHz        31.25 / 22.7 / 20.8 µs
Video                                      10 MHz                    100 ns

Table 1.1. Sampling frequencies according to processed signals

In Figure 1.3, we show an acquisition chain, a processing chain and a signal restitution chain. The adaptation amplifier makes the input signal compatible with the measurement chain. A pre-filter, either pass-band or low-pass, is chosen to limit the width of the input signal spectrum; this avoids undesirable spectral overlap and hence the loss of spectral information (aliasing). We will return to this point when we discuss the sampling theorem in section 3.2.2.9. This anti-aliasing filter also makes it possible to reject out-of-band noise and, when it is a pass-band filter, it helps suppress the continuous (DC) component of the signal. The analog-to-digital (A/D) converter carries out sampling and then quantization at the sampling frequency fs; that is, it allocates a code on a certain number of bits to each sample. The digital input signal is then processed in order to give the digital output signal. The reconversion into an analog signal is made possible by using a D/A converter and a smoothing filter. Many parameters influence sampling, notably the quantization step and the response time of the digital system, both during acquisition and restitution. However, by improving the precision of the A/D converter and the speed of the processors, we can get around these problems. The choice of the sampling frequency also plays an important role.
Figure 1.3. Complete acquisition chain and digital processing of a signal (block diagram: sensor → adaptation amplifier → low-pass or pass-band filter → sampling blocker → A/D converter → digital system → D/A converter → smoothing filter → processed analog signal)

Different types of digital signal representation are possible, such as functional representations, tabulated representations, sequential representations, and graphic representations (as in bar diagrams). Looking at examples of basic digital signals, we return to the unit sample sequence represented by the Kronecker symbol δ(k), the unit step signal u(k), and the unit ramp signal r(k). This gives us:

Unit sample sequence:

δ(k) = 1 for k = 0
δ(k) = 0 for k ≠ 0
Unit step signal:

u(k) = 1 for k ≥ 0
u(k) = 0 for k < 0

Unit ramp signal:

r(k) = k for k ≥ 0
r(k) = 0 for k < 0

Figure 1.4. Unit sample sequence δ(k) and unit step signal u(k)

1.2.2. Deterministic and random signals

We class signals as being deterministic or random. Random signals can be defined according to the domain in which they are observed. Sometimes, having specified all the experimental conditions of obtaining the physical variable, we see that it fluctuates. Its values are not completely determined, but they can be evaluated in terms of probability. In this case, we are dealing with a random experiment and the signal is called random. In the opposite situation, the signal is called deterministic.
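The three basic sequences above can be sketched directly (a minimal illustration; the function names are ours):

```python
def delta(k):
    """Unit sample sequence (Kronecker symbol): 1 at k = 0, 0 elsewhere."""
    return 1 if k == 0 else 0

def u(k):
    """Unit step signal: 1 for k >= 0, 0 for k < 0."""
    return 1 if k >= 0 else 0

def r(k):
    """Unit ramp signal: k for k >= 0, 0 for k < 0."""
    return k if k >= 0 else 0

ks = range(-3, 4)
print([delta(k) for k in ks])  # [0, 0, 0, 1, 0, 0, 0]
print([u(k) for k in ks])      # [0, 0, 0, 1, 1, 1, 1]
print([r(k) for k in ks])      # [0, 0, 0, 0, 1, 2, 3]
```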
Figure 1.5. Several realizations of a 1-D random signal

EXAMPLE 1.1.– let us look at a continuous signal modeled by a sinusoidal function of the following type:

x(t) = a sin(2πft)

This kind of model is deterministic. However, in other situations, the signal amplitude and the signal frequency can be subject to variations. Moreover, the signal can be disturbed by an additive noise b(t); it is then written in the following form:

x(t) = a(t) sin(2π f(t) t) + b(t)

where a(t), f(t) and b(t) are random variables for each value of t. We then say that x(t) is a random signal. The properties of the received signal x(t) then depend on the statistical properties of these random variables.
Figure 1.6. Several examples of a discrete random 2-D process

1.2.3. Periodic signals

The class of signals termed periodic plays an important role in signal and image processing. In the case of a continuous-time signal, a signal is called periodic of period T0 if T0 is the smallest value verifying the relation:

x(t + T0) = x(t), ∀t.

And, for a discrete-time signal of period N0, we have:

x(k + N0) = x(k), ∀k.

EXAMPLE 1.2.– examples of periodic signals:

x(t) = sin(2πf0t),  x(k) = (−1)^k,  x(k) = cos(πk/8).
1.2.4. Mean, energy and power

We can characterize a signal by its mean value. This value represents the continuous component of the signal. When the signal is deterministic, it equals:

µ = lim_{T1→+∞} (1/T1) ∫_(T1) x(t) dt    (1.1)

where T1 designates the integration time.

When a continuous-time signal is periodic of period T0, the expression of the mean value becomes:

µ = (1/T0) ∫_(T0) x(t) dt    (1.2)

PROOF.– we can always express the integration time T1 according to the period of the signal in the following way: T1 = kT0 + ξ, where k is an integer and ξ is chosen so that 0 < ξ ≤ T0. From there,

µ = lim_{k→+∞} (1/T1) ∫_(T1) x(t) dt = lim_{k→+∞} (1/(kT0)) ∫_(kT0) x(t) dt,

since ξ becomes insignificant compared to kT0. By using the periodicity property of the continuous signal x(t), we deduce that

µ = (1/(kT0)) Σ_k ∫_(T0) x(t) dt = (1/T0) ∫_(T0) x(t) dt.

When the signal is random, the statistical mean is defined for a fixed value of t, as follows:

µ(t) = E[X(t)] = ∫_{−∞}^{+∞} x p(x, t) dx    (1.3)

where E[.] indicates the mathematical expectation and p(x, t) represents the probability density of the random signal at the instant t. We can obtain the mean value if we know p(x, t); in other situations, we can only obtain an estimated value.
For the class of signals called ergodic in the sense of the mean, the statistical mean coincides with the temporal mean, which brings us back to the expression we have seen previously:

µ = lim_{T1→+∞} (1/T1) ∫_(T1) x(t) dt.

Often, we are interested in the energy ε of the processed signal. For a continuous-time signal x(t), we have:

ε = ∫_{−∞}^{+∞} |x(t)|² dt.    (1.4)

In the case of a discrete-time signal, the energy is defined as the sum of the magnitude-squared values of the signal x(k):

ε = Σ_k |x(k)|²    (1.5)

For a continuous-time signal x(t), its mean power P is expressed as follows:

P = lim_{T→+∞} (1/T) ∫_(T) |x(t)|² dt.    (1.6)

For a discrete-time signal x(k), its mean power is represented as:

P = lim_{N→+∞} (1/N) Σ_{k=1}^{N} |x(k)|²    (1.7)

In signal processing, we often introduce the concept of signal-to-noise ratio (SNR) to characterize the noise that can affect signals. This variable, expressed in decibels (dB), corresponds to the ratio of the powers of the signal and the noise. It is represented as:

SNR = 10 log10 (P_signal / P_noise)    (1.8)

where P_signal and P_noise indicate, respectively, the powers of the signal and noise sequences.

EXAMPLE 1.3.– let us consider the example of a periodic 300 Hz signal that is perturbed by a zero-mean additive Gaussian noise, with a signal-to-noise ratio going from 20 dB down to 0 dB in 10 dB steps. Figures 1.7 and 1.8 show these different situations.
Figure 1.7. Temporal representation of the original signal and of the signal with additive noise, with a signal-to-noise ratio equal to 20 dB

Figure 1.8. Temporal representation of signals with additive noise, with signal-to-noise ratios equal to 10 dB and 0 dB
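The construction behind Example 1.3 — scaling a zero-mean Gaussian noise so that the mixture reaches a target SNR — can be sketched as follows (a minimal illustration; the helper names `power` and `snr_db` are ours):

```python
import math
import random

def power(x):
    """Mean power of a sequence: (1/N) * sum of |x(k)|^2, as in equation (1.7)."""
    return sum(v * v for v in x) / len(x)

def snr_db(signal, noise):
    """SNR in dB: 10 * log10(P_signal / P_noise), as in equation (1.8)."""
    return 10.0 * math.log10(power(signal) / power(noise))

fs = 8000
s = [math.sin(2 * math.pi * 300 * k / fs) for k in range(480)]  # 300 Hz sinusoid, 60 ms

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in s]
target = 20.0  # desired SNR in dB
# Scale the noise so that P_noise = P_signal / 10^(target/10)
gain = math.sqrt(power(s) / (power(noise) * 10 ** (target / 10)))
noise = [gain * v for v in noise]
print(round(snr_db(s, noise), 6))  # 20.0
```

Repeating the scaling with target = 10.0 and 0.0 reproduces the other two situations of Figures 1.7 and 1.8.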
1.2.5. Autocorrelation function

Let us take the example of a deterministic continuous signal x(t) of finite energy. We can carry out a signal analysis from its autocorrelation function, which is represented as:

R_xx(τ) = ∫_{−∞}^{+∞} x(t) x*(t−τ) dt    (1.9)

The autocorrelation function allows us to measure the degree of resemblance existing between x(t) and x(t−τ). Some of its properties can then be shown from results on scalar products. From the relations shown in equations (1.4) and (1.9), we see that R_xx(0) corresponds to the energy of the signal. We can easily demonstrate the following properties:

R_xx(τ) = R*_xx(−τ), ∀τ    (1.10)

|R_xx(τ)| ≤ R_xx(0), ∀τ    (1.11)

When the signal is periodic of period T0, the autocorrelation function is periodic of period T0. It can be obtained as follows:

R_xx(τ) = (1/T0) ∫_(T0) x(t) x*(t−τ) dt    (1.12)

We should remember that the autocorrelation function is a specific instance of the intercorrelation function of two deterministic signals x(t) and y(t), represented as:

R_xy(τ) = ∫_{−∞}^{+∞} x(t) y*(t−τ) dt    (1.13)

Now, let us look at a discrete-time random process {x(k)}. We can describe this process from its autocorrelation function, at the instants k1 and k2, written R_xx(k1, k2) and expressed as

R_xx(k1, k2) = E[x(k1) x*(k2)], ∀(k1, k2)    (1.14)

where x*(k2) denotes the conjugate of x(k2) in the case of complex processes.
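For a finite real sequence, a discrete counterpart of the autocorrelation can be estimated directly from the samples (this biased estimator is a standard choice, not a formula from the book; the name `autocorr` is ours):

```python
def autocorr(x, m):
    """Biased estimate of the autocorrelation R_xx(m) of a real sequence:
    (1/N) * sum over k of x(k) * x(k - m), for lag m >= 0."""
    N = len(x)
    return sum(x[k] * x[k - m] for k in range(m, N)) / N

x = [1.0, 2.0, 3.0, 4.0]
print(autocorr(x, 0))  # 7.5  -> mean energy per sample, the lag-0 value
print(autocorr(x, 1))  # 5.0  -> resemblance between x(k) and x(k-1)
```

As the lag m grows, fewer sample pairs overlap and the estimate decreases, consistent with property (1.11): the lag-0 value dominates.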
The covariance (or autocovariance) function C_xx taken at instants k1 and k2 of the process is given by:

C_xx(k1, k2) = E[(x(k1) − E[x(k1)]) (x(k2) − E[x(k2)])*]    (1.15)

where E[x(k1)] indicates the statistical mean of x(k1). We should keep in mind that, for zero-mean random processes, the autocovariance and autocorrelation functions are equal:

C_xx(k1, k2) = R_xx(k1, k2), ∀(k1, k2).    (1.16)

The correlation coefficient is as follows:

ρ_xx(k1, k2) = C_xx(k1, k2) / √(C_xx(k1, k1) C_xx(k2, k2)), ∀(k1, k2).    (1.17)

It verifies:

|ρ_xx(k1, k2)| ≤ 1, ∀(k1, k2).    (1.18)

When the correlation coefficient ρ_xx(k1, k2) takes a high and positive value, the values of the random process at instants k1 and k2 have similar behaviors: high values of x(k1) correspond to high values of x(k2), and low values of x(k1) correspond to low values of x(k2). The more ρ_xx(k1, k2) tends toward zero, the lower the correlation. When ρ_xx(k1, k2) equals zero for all distinct values of k1 and k2, the values of the process are termed decorrelated. If ρ_xx(k1, k2) becomes negative, x(k1) and x(k2) tend to have opposite signs.

In a more general situation, if we look at two random processes x(k) and y(k), their intercorrelation function is written as:

R_xy(k1, k2) = E[x(k1) y*(k2)]    (1.19)

As for the intercovariance function, it is given by:

C_xy(k1, k2) = E[(x(k1) − E[x(k1)]) (y(k2) − E[y(k2)])*]    (1.20)

C_xy(k1, k2) = R_xy(k1, k2) − E[x(k1)] (E[y(k2)])*    (1.21)

The two random processes are not correlated if:

C_xy(k1, k2) = 0, ∀(k1, k2)    (1.22)
A process is called stationary to the 2nd order, or in a broad sense, if its statistical mean µ = E[x(k)] is a constant and if its autocorrelation function only depends on the gap between k1 and k2; that is, if:

R_xx(k1, k2) = R_xx(k1 − k2).    (1.23)

From this, for stationary processes, the autocorrelation function verifies two conditions. The first condition relates to symmetry. Given that:

R_xx(m) = E[x(k+m) x*(k)]    (1.24)

we can easily show that:

R_xx(−m) = R*_xx(m), ∀m.    (1.25)

For the second condition, we introduce the random vector x consisting of M+1 samples of the process {x(k)}:

x = [x(0) ... x(M)]^T.    (1.26)

The autocorrelation matrix R_M is represented by E[x x^H], where x^H indicates the Hermitian transpose of x. This is a Toeplitz matrix that is expressed in the following form:

       [ R_xx(0)     R_xx(1)      ...  R_xx(M−1)   R_xx(M)   ]
       [ R_xx(−1)    R_xx(0)      ...              R_xx(M−1) ]
R_M =  [ ...                      ...              ...       ]    (1.27)
       [ R_xx(−M+1)               ...  R_xx(0)     R_xx(1)   ]
       [ R_xx(−M)    R_xx(−M+1)   ...  R_xx(−1)    R_xx(0)   ]
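For a real stationary process, property (1.25) reduces to R_xx(−m) = R_xx(m), so the matrix of equation (1.27) can be built from the lags 0..M alone. A minimal sketch, estimating the lags from a single realization (the estimator and helper names are ours):

```python
def autocorr(x, m):
    """Biased estimate of R_xx(m) for a real sequence, lag m >= 0."""
    N = len(x)
    return sum(x[k] * x[k - m] for k in range(m, N)) / N

def autocorr_matrix(x, M):
    """(M+1) x (M+1) Toeplitz autocorrelation matrix with entries
    R[i][j] = R_xx(j - i); for a real process R_xx(-m) = R_xx(m)."""
    r = [autocorr(x, m) for m in range(M + 1)]
    return [[r[abs(i - j)] for j in range(M + 1)] for i in range(M + 1)]

R = autocorr_matrix([1.0, 2.0, 3.0, 4.0], M=2)
for row in R:
    print(row)
# each diagonal of R is constant (Toeplitz), and R is symmetric
```

Such a matrix is exactly the quantity the Wiener and Kalman filter derivations mentioned in the note below operate on.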
NOTE.– vector and matrix approaches can often be employed in signal processing. In particular, using autocorrelation matrices and, more generally, intercorrelation matrices can be effective. This type of matrix plays a role in the development of optimal filters, notably those of Wiener and Kalman. It is also important in the decomposition techniques into signal and noise subspaces used for spectral analysis, speech enhancement, and determining the number of users in a telecommunication cell, to mention a few usages.

1.3. Systems

A system carries out an operation chain, which consists of processing applied to one or several input signals. It also provides one or several output signals. A system is therefore characterized by several types of variables, described below:

– inputs: depending on the situation, we differentiate between the commands (which are inputs that the user can change or manipulate) and the driving processes or excitations, which usually are not accessible;
– outputs;
– state variables that provide information on the “state” of the system. By the term “state” we mean the minimal number of parameters, stored usually in a vector, that can characterize the development of the system, where the inputs are supposed to be known;
– mathematical equations that link input and output variables.

In much the same way as we classify signals, we speak of digital (respectively analog) systems if the inputs and outputs are digital (respectively analog). When we consider continuous physical systems with two inputs and two outputs, the system is a quadrupole. We wish to impose a given variation law on the output according to the input. If the relation between input and output is given in the form of a linear differential equation with constant coefficients, we then have a continuous, time-invariant linear system.
Depending on the situation, we use physical laws to develop the equations; in electronics, for example, we employ Kirchhoff’s laws and Thévenin’s and Norton’s theorems, among others, to establish our equations. Later in this text, we will discuss discrete-time systems in more detail. These are systems that transform a discrete-time input signal x(k) into a discrete-time output signal y(k) in the following manner:

x(k) ⇒ y(k) = T[x(k)].    (1.28)
By way of example, y(k) = x(k), y(k) = x(k−1) and y(k) = x(k+1) respectively express the identity, the elementary delay and the elementary advance.

1.4. Properties of discrete-time systems

1.4.1. Invariant linear systems

The important features of a system are linearity, temporal shift invariance (or invariance in time) and stability. A system represented by the operator T is termed linear if, ∀x1, x2 and ∀a1, a2, we get:

T[a1 x1(k) + a2 x2(k)] = a1 T[x1(k)] + a2 T[x2(k)].    (1.29)

A system is called time-invariant if the response to an input delayed by l samples is the output delayed by l samples; that is, if:

x(k) ⇒ y(k) = T[x(k)], then T[x(k−l)] = y(k−l)    (1.30)

and this holds whatever the input signal x(k) and the temporal shift l. Moreover, a time-invariant linear system is also called a stationary (or homogeneous) linear filter.

1.4.2. Impulse responses and convolution products

If the input of a system is the unit impulse δ(k), the output is called the impulse response of the system h(k), or:

h(k) = T[δ(k)].    (1.31)

Figure 1.9. Impulse response: δ(k) → [linear filter] → h(k)

A usual property of the impulse δ(k) helps us describe any discrete-time signal as the weighted sum of delayed pulses:
x(k) = Σ_{l=−∞}^{+∞} x(l) δ(k−l)    (1.32)

The output of a time-invariant linear system can therefore be expressed in the following form:

y(k) = T[x(k)] = T[Σ_{l=−∞}^{+∞} x(l) δ(k−l)] = Σ_{l=−∞}^{+∞} x(l) T[δ(k−l)] = Σ_{l=−∞}^{+∞} x(l) h(k−l).    (1.33)

The output y(k) thus corresponds to the convolution product between the input x(k) and the impulse response h(k):

y(k) = x(k)*h(k) = h(k)*x(k) = Σ_{n=−∞}^{+∞} x(n) h(k−n).    (1.34)

We see that the convolution relation has its own legitimacy; that is, it is not obtained by a discretization of the convolution relation obtained for continuous systems. As in the continuous case, we need only two hypotheses to establish this relation: those of invariance and linearity.

1.4.3. Causality

A filter with impulse response h(k) is causal when the output y(k) remains null as long as the input x(k) is null. This corresponds to the principle of causality, which states that an effect cannot precede its cause. A time-invariant linear system is causal only if its output at every instant k (that is, y(k)) depends solely on the present and past inputs (x(k), x(k−1), and so on). Given the relation in equation (1.34), its impulse response satisfies the following condition:

h(k) = 0 for k < 0    (1.35)

A filter with impulse response h(k) is termed anti-causal when the filter with impulse response h(−k) is causal; that is, it becomes causal after time reversal. The output of rank k then depends only on the inputs of rank greater than or equal to k.
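The convolution sum of equation (1.34) can be sketched directly for finite sequences (a minimal illustration, not the book’s code; the helper name `convolve` is ours):

```python
def convolve(x, h):
    """Direct evaluation of y(k) = sum over n of x(n) * h(k - n)
    for finite causal sequences x and h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for m, hm in enumerate(h):
            y[n + m] += xn * hm  # each input sample adds a scaled, delayed copy of h
    return y

h = [1.0, 0.5, 0.25]              # impulse response of a hypothetical causal filter
y = convolve([1.0, 0.0, 0.0], h)  # a unit impulse at k = 0 as input
print(y)  # [1.0, 0.5, 0.25, 0.0, 0.0]
```

Feeding in the unit impulse δ(k) recovers h(k) itself, which is exactly the definition (1.31) of the impulse response.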
1.4.4. Interconnections of discrete-time systems

Discrete-time systems can be interconnected either in cascade (series) or in parallel to obtain new systems. These are represented, respectively, in Figures 1.10 and 1.11.

Figure 1.10. Interconnection in series: x(k) → [h1(k)] → s(k) → [h2(k)] → y(k)

For an interconnection in series, the impulse response of the resulting system is h(k) = h1(k)*h2(k). Thus, using the associativity of the law *, we have:

y(k) = h2(k)*s(k) = h2(k)*(h1(k)*x(k)) = (h2(k)*h1(k))*x(k) = (h1(k)*h2(k))*x(k).

Figure 1.11. Interconnection in parallel: x(k) → [h1(k)] → s1(k) and x(k) → [h2(k)] → s2(k), with y(k) = s1(k) + s2(k)

For an interconnection in parallel, the impulse response of the resulting system is h(k) = h1(k) + h2(k). So we have:

y(k) = s1(k) + s2(k) = h1(k)*x(k) + h2(k)*x(k) = [h1(k) + h2(k)]*x(k).
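Both equivalences can be checked numerically on short sequences (a minimal sketch with arbitrary example coefficients; the helper name `convolve` is ours):

```python
def convolve(x, h):
    """Direct convolution of two finite causal sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for m, hm in enumerate(h):
            y[n + m] += xn * hm
    return y

h1 = [1.0, -0.5]
h2 = [0.5, 0.25, 0.1]
x = [1.0, 2.0, 3.0, 4.0]

# Series (cascade): filtering by h1 then h2 equals filtering by h1 * h2
y_series = convolve(convolve(x, h1), h2)
y_equiv = convolve(x, convolve(h1, h2))
print(all(abs(a - b) < 1e-12 for a, b in zip(y_series, y_equiv)))  # True

# Parallel: summing the two branch outputs equals filtering by h1 + h2
h1p = h1 + [0.0] * (len(h2) - len(h1))  # zero-pad to equal length
ya, yb = convolve(x, h1p), convolve(x, h2)
y_par = convolve(x, [a + b for a, b in zip(h1p, h2)])
print(all(abs(a + b - c) < 1e-12 for a, b, c in zip(ya, yb, y_par)))  # True
```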
Chapter 2

Discrete System Analysis

2.1. Introduction

The study of discrete-time signals is based on the z-transform, which we will discuss in this chapter. Its properties make it very useful for studying linear, time-invariant systems. This chapter is organized as follows. First, we will study discrete, time-invariant linear systems based on the z-transform, which plays a role similar to that of the Laplace transform in continuous systems. We will present the representation of this transform, as well as its main properties; then we will discuss the inverse z-transform. From a given z-transform, we will present different methods of determining the corresponding discrete-time signal. Lastly, the concepts of transfer functions and difference equations will be covered. We also provide a table of z-transforms.

2.2. The z-transform

2.2.1. Representations and summaries

With analog systems, the Laplace transform X_s(s) related to a continuous function x(t) is a function of a complex variable s and is represented by:

X_s(s) = ∫_{−∞}^{+∞} x(t) e^{−st} dt.    (2.1)

Chapter written by Mohamed NAJIM and Eric GRIVEL.
This transform exists when the real part of the complex variable s satisfies the relation:

r < Re(s) < R,    (2.2)

where r and R (possibly r = −∞ and R = +∞) characterize the region of existence of X_s(s). The Laplace transform helps resolve linear differential equations with constant coefficients by transforming them into algebraic products. Similarly, we introduce the z-transform when studying discrete-time signals.

Let {x(k)} be a real sequence. The bilateral or two-sided z-transform X_z(z) of the sequence {x(k)} is represented as follows:

X_z(z) = Σ_{k=−∞}^{+∞} x(k) z^{−k},    (2.3)

where z is a complex variable. The relation (2.3) is sometimes called the direct z-transform since it makes it possible to transform the time-domain signal {x(k)} into a representation in the complex plane. The z-transform only exists for the values of z that enable the series to converge; that is, for the values of z for which X_z(z) has a finite value. The set of all values of z satisfying this property is called the region of convergence (ROC).

DEMONSTRATION 2.1.– we know that the absolute convergence of a series implies the convergence of the series. By applying the Cauchy criterion, the series Σ_{k=0}^{+∞} x(k) converges absolutely if:

lim_{k→+∞} |x(k)|^{1/k} < 1.

The series diverges if lim_{k→+∞} |x(k)|^{1/k} > 1. If lim_{k→+∞} |x(k)|^{1/k} = 1, we cannot be certain of the convergence.
From this, let us express X_z(z) as follows:

X_z(z) = Σ_{k=−∞}^{+∞} x(k) z^{−k} = Σ_{k=−∞}^{−1} x(k) z^{−k} + Σ_{k=0}^{+∞} x(k) z^{−k}.

The series Σ_{k=−∞}^{−1} x(k) z^{−k} converges absolutely if:

lim_{k→+∞} |x(−k) z^{k}|^{1/k} < 1, or if: |z| < 1 / lim_{k→+∞} |x(−k)|^{1/k}.

Likewise, the series Σ_{k=0}^{+∞} x(k) z^{−k} converges absolutely if:

lim_{k→+∞} |x(k) z^{−k}|^{1/k} < 1, or if: lim_{k→+∞} |x(k)|^{1/k} < |z|.

If we write λ_max = 1 / lim_{k→+∞} |x(−k)|^{1/k} and λ_min = lim_{k→+∞} |x(k)|^{1/k}, the z-transform X_z(z) converges if:

0 ≤ λ_min < |z| < λ_max.

The quantities λ_min and λ_max thus characterize the region of convergence (ROC) of the series X_z(z). The series Σ_{k=−∞}^{+∞} x(k) z^{−k} diverges strictly outside the ROC. We should remember that the region of convergence may be empty, as is sometimes the case, for instance for the two-sided sequence x(k) = (1+k)².
We can also define, especially for causal sequences, the monolateral (one-sided) z-transform X_z(z) of the sequence {x(k)}:

X_z(z) = Σ_{k=0}^{+∞} x(k) z^{−k} with λ_min < |z|.

DEMONSTRATION 2.2.– to establish the absolute convergence of the series, we can use another approach than the one previously shown for the bilateral transform. It is based on d’Alembert’s ratio test, which relates two consecutive samples of the analyzed discrete-time signal. We know that if the sequence |x(k+1)/x(k)| converges towards a limit L that is strictly less than 1, the absolute convergence of Σ_{k=0}^{+∞} x(k) is assured. If we apply this test to the z-transform, we get:

lim_{k→+∞} |x(k+1) z^{−k−1}| / |x(k) z^{−k}| = |z|^{−1} lim_{k→+∞} |x(k+1)/x(k)| < 1,

which gives us:

|z| > lim_{k→+∞} |x(k+1)/x(k)| = λ_min.

The ROC corresponds to all points in the complex plane outside the disk of radius λ_min. For discrete-time causal signals, such that x(k) = 0 for k < 0, the one-sided (or unilateral) and the bilateral z-transforms reduce to the same expression:

X_z(z) = Σ_{k=−∞}^{+∞} x(k) z^{−k} = Σ_{k=0}^{+∞} x(k) z^{−k} with λ_min < |z|.
Now let us look at two examples of z-transforms.

EXAMPLE 2.1.– the unit step signal u(k) can be represented as u(k) = 0 for k < 0 and u(k) = 1 for k ≥ 0. Its z-transform is written U_z(z) = Σ_{k=0}^{+∞} z^{−k}. The convergence is assured for |z| > 1, and we get the closed-form expression of the z-transform:

U_z(z) = 1/(1 − z^{−1}) = z/(z − 1) with |z| > 1.

EXAMPLE 2.2.– here we assume that the signal x(k) is represented by x(k) = α^{|k|} with |α| < 1. We then get:

X_z(z) = Σ_{k=−∞}^{+∞} α^{|k|} z^{−k} = Σ_{k=0}^{+∞} α^{k} z^{−k} + Σ_{k=−∞}^{−1} α^{−k} z^{−k}.

The absolute convergence of the two series is assured for |α| < |z| < 1/|α|. We then have:

X_z(z) = 1/(1 − αz^{−1}) + αz/(1 − αz) with |α| < |z| < 1/|α|.

When the signal is causal, we have x(k) = α^{k} for k ≥ 0 and x(k) = 0 for k < 0. Its z-transform then equals:

X_z(z) = 1/(1 − αz^{−1}) with |α| < |z|.
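The closed form of Example 2.2 (causal case) can be checked numerically by truncating the defining series at a large index, for a point z inside the ROC (a minimal sketch; the values α = 0.5 and z = 2 are our arbitrary choices):

```python
alpha = 0.5
z = 2.0  # a point inside the ROC |z| > |alpha|

# Truncated series: sum of alpha^k * z^(-k) for k = 0 .. 199
partial = sum(alpha ** k * z ** (-k) for k in range(200))

# Closed form: X_z(z) = 1 / (1 - alpha * z^-1)
closed = 1.0 / (1.0 - alpha / z)

print(abs(partial - closed) < 1e-12)  # True: the geometric series has converged
```

Outside the ROC (for instance |z| < |alpha|) the terms alpha^k * z^(-k) grow without bound and no such agreement is possible.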
Figure 2.1. Representation of x(k) = α^{|k|} and of the ROC of its z-transform X_z(z): the ring |α| < |z| < 1/|α|

Figure 2.2. Representation of the causal signal x(k) = α^{|k|} u(k) and of the ROC of its z-transform X_z(z): the exterior of the disk of radius |α|
2.2.2. Properties of the z-transform

2.2.2.1. Linearity

The z-transform is linear. Indeed, for two sequences {x1(k)} and {x2(k)} and ∀a1, a2, we have:

Z[a1 x1(k) + a2 x2(k)] = a1 Z[x1(k)] + a2 Z[x2(k)]    (2.4)

where Z[.] represents the z-transform operator. This result is valid, provided the intersection of the ROCs is not empty.

DEMONSTRATION 2.3.–

Z[a1 x1(k) + a2 x2(k)] = Σ_k [a1 x1(k) + a2 x2(k)] z^{−k} = a1 Σ_k x1(k) z^{−k} + a2 Σ_k x2(k) z^{−k} = a1 Z[x1(k)] + a2 Z[x2(k)]

The ROC of a sum of transforms then corresponds to the intersection of the ROCs.

EXAMPLE 2.3.– the linearity property can be exploited in the calculation of the z-transform of the discrete hyperbolic sine x(k) = sh(k) u(k):

Z[sh(k)] = Σ_{k=0}^{+∞} (1/2) (exp(k) − exp(−k)) z^{−k} = (1/2) [Σ_{k=0}^{+∞} exp(k) z^{−k} − Σ_{k=0}^{+∞} exp(−k) z^{−k}]

The ROC is given by |exp(1) z^{−1}| < 1 and |exp(−1) z^{−1}| < 1, so |z| > exp(1).

Z[sh(k)] = (1/2) [1/(1 − exp(1) z^{−1}) − 1/(1 − exp(−1) z^{−1})] = sh(1) z^{−1} / (1 − 2 ch(1) z^{−1} + z^{−2}) for |z| > exp(1).
2.2.2.2. Advance and delay operators

Let X_z(z) be the z-transform of the discrete-time signal {x(k)}. The z-transform of {x(k−m)} is:

Z[x(k−m)] = z^{−m} Z[x(k)] = z^{−m} X_z(z)    (2.5)

Delaying the signal by m steps thus brings about a multiplication by z^{−m} in the z-domain. The operator z^{−1} is called the basic delay operator, or simply the delay operator. With filters, we often see the following representation:

Figure 2.3. Unit delay operator: x(k) → [z^{−1}] → x(k−1), i.e. X_z(z) → z^{−1} X_z(z)

Usually, the ROC is not modified, except possibly at the origin and at infinity.

DEMONSTRATION 2.4.– by definition, Z[x(k−m)] = Σ_{k=−∞}^{+∞} x(k−m) z^{−k}. By the change of variable n = k−m, we get:

Z[x(k−m)] = Σ_{n=−∞}^{+∞} x(n) z^{−(n+m)} = z^{−m} Σ_{n=−∞}^{+∞} x(n) z^{−n} = z^{−m} Z[x(k)]

Advancing the signal by m steps leads to a multiplication by z^{m} of the transform in the z-domain. The operator z is called the unit advance operator or, more simply, the advance operator. The following representation shows this:

Figure 2.4. Unit advance operator: x(k) → [z] → x(k+1), i.e. X_z(z) → z X_z(z)

EXAMPLE 2.4.– now we look at the z-transform of the discrete-time exponential signal x(k) = e^{−αk} for k ≥ 0, x(k) = 0 for k < 0, and of y(k) = x(k−m) where m is a natural integer.
X_z(z) = Z[e^{−αk}] = 1/(1 − e^{−α} z^{−1}) for |z| > e^{−α}, and Y_z(z) = z^{−m} X_z(z) = z^{−m}/(1 − e^{−α} z^{−1}).

2.2.2.3. Convolution

We know that the convolution between two discrete causal sequences {x1(k)} and {x2(k)} verifies the following relation:

x1(k)*x2(k) = Σ_{n=0}^{+∞} x1(n) x2(k−n) = Σ_{n=0}^{k} x1(n) x2(k−n)    (2.6)

The z-transform of the convolution product of the two sequences is then the simple product of the z-transforms of the two sequences:

Z[x1(k)*x2(k)] = Z[x1(k)] Z[x2(k)]    (2.7)

The ROC of the convolution product is the intersection of the ROCs of the z-transforms of {x1(k)} and {x2(k)}. We see that this result is very often used in studying time-invariant linear systems, since the response of a system corresponds, as we saw in equation (1.34), to the convolution product of its impulse response with the input signal.

DEMONSTRATION 2.5.– since Z[x1(k)] = Σ_{k=0}^{+∞} x1(k) z^{−k} and Z[x2(k)] = Σ_{k=0}^{+∞} x2(k) z^{−k}, the product X_z1(z) X_z2(z) can be written as:

Z[x1(k)] Z[x2(k)] = x1(0)x2(0) + [x1(0)x2(1) + x1(1)x2(0)] z^{−1} + ... = Σ_{k=0}^{+∞} [Σ_{m=0}^{k} x1(m) x2(k−m)] z^{−k} = Z[x1(k)*x2(k)]

on the condition that the intersection of the ROCs of the two series is not empty.
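Property (2.7) can be verified numerically for finite sequences, whose z-transforms are just polynomials in z^{−1} (a minimal sketch; the sequences and the evaluation point z = 1.5 are our arbitrary choices):

```python
def convolve(x, h):
    """Direct convolution of two finite causal sequences, as in equation (2.6)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for m, hm in enumerate(h):
            y[n + m] += xn * hm
    return y

def Xz(seq, z):
    """z-transform of a finite causal sequence: sum of seq[k] * z^(-k)."""
    return sum(c * z ** (-k) for k, c in enumerate(seq))

x1 = [1.0, 2.0, 3.0]
x2 = [0.5, -1.0]
z = 1.5
lhs = Xz(convolve(x1, x2), z)       # Z[x1 * x2] evaluated at z
rhs = Xz(x1, z) * Xz(x2, z)         # Z[x1] * Z[x2] evaluated at z
print(abs(lhs - rhs) < 1e-9)  # True
```

This is the same observation as for polynomials: convolving coefficient sequences multiplies the corresponding polynomials.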
2.2.2.4. Changing the z-scale

Let us assume that X_z(z) is the z-transform of the discrete-time signal {x(k)}. For a given constant a, real or complex, the z-transform of {a^{k} x(k)} is:

Z[a^{k} x(k)] = X_z(a^{−1} z) with |a| λ_min < |z| < |a| λ_max    (2.8)

DEMONSTRATION 2.6.–

Z[a^{k} x(k)] = Σ_{k=−∞}^{+∞} x(k) a^{k} z^{−k} = Σ_{k=−∞}^{+∞} x(k) (a^{−1}z)^{−k} = X_z(a^{−1}z)

The ROC is then: |a| λ_min < |z| < |a| λ_max.

2.2.2.5. Time reversal

Let X_z(z) be the z-transform of the discrete-time signal {x(k)} with λ_min < |z| < λ_max. We then consider the sequence {y(k)} = {x(−k)}. The z-transform of {y(k)} then equals:

Y_z(z) = X_z(z^{−1}).    (2.9)

DEMONSTRATION 2.7.–

Y_z(z) = Σ_{k=−∞}^{+∞} x(−k) z^{−k} = Σ_{k=−∞}^{+∞} x(k) z^{k} = X_z(z^{−1})

The region of convergence is then given by: 1/λ_max < |z| < 1/λ_min.

2.2.2.6. Derivation of the z-transform

By differentiating the z-transform with respect to z^{−1} and then multiplying it by z^{−1}, we obtain the following characteristic result:

z^{−1} dX_z(z)/d(z^{−1}) = z^{−1} Σ_{k=−∞}^{+∞} k x(k) (z^{−1})^{k−1} = Σ_{k=−∞}^{+∞} k x(k) z^{−k} = Z[k x(k)]    (2.10)
EXAMPLE 2.5.– we now look at the z-transform of the following discrete-time causal signal:

$$x(k) = 5k\,\delta(k-3) + 3k\,\delta(k-4) = 15\,\delta(k-3) + 12\,\delta(k-4)$$

We can easily demonstrate that the z-transform of $\delta(k)$ equals 1 for all values of $z$. By using linearity and the delay property, we find that:

$$\mathcal{Z}\left[5\,\delta(k-3) + 3\,\delta(k-4)\right] = 5z^{-3} + 3z^{-4} \quad \text{for all values of } z.$$

From this, using the derivation property (2.10):

$$X_z(z) = z^{-1}\, \frac{d\!\left(5z^{-3} + 3z^{-4}\right)}{dz^{-1}} = 15 z^{-3} + 12 z^{-4}$$

2.2.2.7. The sum theorem

If 1 is inside the ROC, we easily find that:

$$\sum_{k=-\infty}^{+\infty} x(k) = \lim_{z \to 1} X_z(z) \qquad (2.11)$$

2.2.2.8. The final-value theorem

Here we look at two sequences $\{x(k)\}$ and $\{y(k)\}$ such that $y(k) = x(k+1) - x(k)$, supposing the absolute convergence of the series $\sum_{k=-\infty}^{+\infty} y(k)$. From the sum theorem, $\sum_{k=-\infty}^{+\infty} y(k) = \lim_{z \to 1} Y_z(z)$. Now, we know that $Y_z(z) = (z-1)\, X_z(z)$ and, by construction, $\sum_{k=-\infty}^{+\infty} y(k) = \lim_{k \to +\infty} x(k) - \lim_{k \to -\infty} x(k)$. From there, if $\lim_{k \to -\infty} x(k) = 0$, we have:

$$\lim_{k \to +\infty} x(k) = \lim_{z \to 1}\, (z-1)\, X_z(z).$$

2.2.2.9. Complex conjugation

Here we consider two sequences $\{x(k)\}$ and $\{y(k)\}$ such that $y(k) = x^*(k)$:

$$Y_z(z) = \sum_{k=-\infty}^{+\infty} x^*(k)\, z^{-k} = \left\{ \sum_{k=-\infty}^{+\infty} x(k) \left(z^*\right)^{-k} \right\}^* = \left[X_z\!\left(z^*\right)\right]^* \qquad (2.12)$$
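A quick numerical sanity check of the derivation property (2.10) on Example 2.5: writing $w = z^{-1}$, we have $X(w) = 5w^3 + 3w^4$, and the quantity $w\, dX/dw$ should equal $\mathcal{Z}[k\,x(k)] = 15w^3 + 12w^4$. The evaluation point is an arbitrary choice inside the ROC:

```python
# Central-difference check of (2.10) on Example 2.5.
w = 0.6                                    # z^{-1}, arbitrary point of the ROC
h = 1e-6                                   # step for the central difference
X = lambda w: 5 * w**3 + 3 * w**4          # Z{5 δ(k-3) + 3 δ(k-4)}
lhs = w * (X(w + h) - X(w - h)) / (2 * h)  # w dX/dw, computed numerically
rhs = 15 * w**3 + 12 * w**4                # claimed Z{k x(k)}
err = abs(lhs - rhs)
```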
2.2.2.10. Parseval's theorem

$$\frac{1}{2\pi j} \oint_C X_z(z)\, X_z\!\left(z^{-1}\right) z^{-1}\, dz = \sum_{k=0}^{+\infty} x^2(k) \qquad (2.13)$$

provided that $X_z(z)$ converges on an open ring containing the unit circle. The energy does not depend on the representation mode, whether it is temporal or in the z-domain.

2.2.3. Table of standard transforms

| $x(k)$ | $X_z(z)$ |
|---|---|
| $\delta(k)$ | $1$ |
| $\delta(k-m)$ | $z^{-m}$ |
| $u(k)$ (unit step: $x(k)=0$ for $k<0$, $x(k)=1$ for $k \ge 0$) | $\dfrac{z}{z-1}$ |
| $k\, u(k)$ | $\dfrac{z}{(z-1)^2}$ |
| $k^2\, u(k)$ | $\dfrac{z(z+1)}{(z-1)^3}$ |
| $k^3\, u(k)$ | $\dfrac{z\left(z^2 + 4z + 1\right)}{(z-1)^4}$ |
| $k^4\, u(k)$ | $\dfrac{z\left(z^3 + 11z^2 + 11z + 1\right)}{(z-1)^5}$ |
| $\alpha^k\, u(k)$ with $|\alpha| < 1$ | $\dfrac{z}{z-\alpha} = \dfrac{1}{1 - \alpha z^{-1}}$ |
| $k\, \alpha^k\, u(k)$ | $\dfrac{\alpha z}{(z-\alpha)^2}$ |
| $k^2\, \alpha^k\, u(k)$ | $\dfrac{\alpha z (z+\alpha)}{(z-\alpha)^3}$ |
| $k^3\, \alpha^k\, u(k)$ | $\dfrac{\alpha z \left(z^2 + 4\alpha z + \alpha^2\right)}{(z-\alpha)^4}$ |
| $x(k)$ | $X_z(z)$ |
|---|---|
| $k^4\, \alpha^k\, u(k)$ | $\dfrac{\alpha z \left(z^3 + 11\alpha z^2 + 11\alpha^2 z + \alpha^3\right)}{(z-\alpha)^5}$ |
| $\sin(\omega_0 k T_s)\, u(k)$ | $\dfrac{z \sin(\omega_0 T_s)}{z^2 - 2z\cos(\omega_0 T_s) + 1}$ |
| $\cos(\omega_0 k T_s)\, u(k)$ | $\dfrac{z\left[z - \cos(\omega_0 T_s)\right]}{z^2 - 2z\cos(\omega_0 T_s) + 1}$ |
| $\alpha^k \sin(\omega_0 k T_s)\, u(k)$ | $\dfrac{\alpha z \sin(\omega_0 T_s)}{z^2 - 2\alpha z\cos(\omega_0 T_s) + \alpha^2}$ |
| $\alpha^k \cos(\omega_0 k T_s)\, u(k)$ | $\dfrac{z\left[z - \alpha\cos(\omega_0 T_s)\right]}{z^2 - 2\alpha z\cos(\omega_0 T_s) + \alpha^2}$ |
| $k\, \alpha^k \sin(\omega_0 k T_s)\, u(k)$ | $\dfrac{\alpha z (z-\alpha)(z+\alpha) \sin(\omega_0 T_s)}{\left(z^2 - 2\alpha z\cos(\omega_0 T_s) + \alpha^2\right)^2}$ |
| $k\, \alpha^k \cos(\omega_0 k T_s)\, u(k)$ | $\dfrac{\alpha z \left[\left(z^2 + \alpha^2\right)\cos(\omega_0 T_s) - 2\alpha z\right]}{\left(z^2 - 2\alpha z\cos(\omega_0 T_s) + \alpha^2\right)^2}$ |
| $\left[1 - \cos(\omega_0 k T_s)\right] u(k)$ | $\dfrac{z}{z-1} - \dfrac{z\left[z - \cos(\omega_0 T_s)\right]}{z^2 - 2z\cos(\omega_0 T_s) + 1}$ |
| $\left[1 - (1 + a k T_s)\, e^{-a k T_s}\right] u(k)$ | $\dfrac{z}{z-1} - \dfrac{z}{z - e^{-aT_s}} - \dfrac{a T_s e^{-aT_s} z}{\left(z - e^{-aT_s}\right)^2}$ |
| $e^{-a k T_s} \sin(\omega_0 k T_s)\, u(k)$ | $\dfrac{z\, e^{-aT_s} \sin(\omega_0 T_s)}{z^2 - 2z\, e^{-aT_s} \cos(\omega_0 T_s) + e^{-2aT_s}}$ |
| $e^{-a k T_s} \cos(\omega_0 k T_s)\, u(k)$ | $\dfrac{z\left[z - e^{-aT_s} \cos(\omega_0 T_s)\right]}{z^2 - 2z\, e^{-aT_s} \cos(\omega_0 T_s) + e^{-2aT_s}}$ |

Table 2.1. z-transforms of specific signals

2.3. The inverse z-transform

2.3.1. Introduction

The purpose of this section is to present methods that help us recover the expression of a discrete-time signal from its z-transform. This often raises problems that can be difficult to resolve. Applying the residue theorem allows us to determine the sequence $\{x(k)\}$, but the calculation can be long and cumbersome.
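Any row of Table 2.1 can be spot-checked by summing the defining series directly. Here we verify $\mathcal{Z}\{k\,\alpha^k u(k)\} = \alpha z/(z-\alpha)^2$; the values of $\alpha$ and $z$ are arbitrary, with $|z| > |\alpha|$ so that the truncated series converges fast:

```python
import numpy as np

# Spot-check of a Table 2.1 row: Z{k α^k u(k)} = α z / (z - α)^2.
alpha, z = 0.5, 2.0
k = np.arange(200)
series = np.sum(k * alpha**k / z**k)      # sum_k k α^k z^{-k}, truncated
closed = alpha * z / (z - alpha) ** 2     # closed form from the table
err = abs(series - closed)
```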
So in practice we tend to use simpler methods, notably development by division according to increasing powers of $z^{-1}$, which amounts to a decomposition of the system into subsystems. Nearly all the z-transforms that we meet in filtering are, in effect, rational fractions.

2.3.2. Methods of determining inverse z-transforms

2.3.2.1. Cauchy's theorem: a case of complex variables

If we acknowledge that, in the ROC, the z-transform of $\{x(k)\}$, written $X_z(z)$, has a Laurent series development, we have:

$$X_z(z) = \sum_{k=0}^{+\infty} \tau_k\, z^{-k} + \sum_{k=-\infty}^{-1} \upsilon_k\, z^{-k}$$

The coefficients $\tau_k$ and $\upsilon_k$ are the values of the discrete sequence $\{x(k)\}$ that are to be determined. They can be obtained by calculating the integral

$$x(k) = \frac{1}{2\pi j} \oint_C X_z(z)\, z^{k-1}\, dz$$

(where $C$ is a closed contour in the interior of the ROC), using the residue method. Equivalently, writing $z = \rho e^{j\varphi}$:

$$x(k) = \frac{1}{2\pi} \int_0^{2\pi} X_z\!\left(\rho e^{j\varphi}\right) \rho^k\, e^{jk\varphi}\, d\varphi$$

where $\rho$ belongs to the ROC.

DEMONSTRATION 2.8.– let us look at a discrete-time causal signal $\{x(k)\}$ of z-transform $X_z(z)$. We have, by definition:

$$X_z(z) = \sum_{n=0}^{+\infty} x(n)\, z^{-n}, \qquad \text{hence} \qquad X_z(z)\, z^{k-1} = \sum_{n=0}^{+\infty} x(n)\, z^{-n+k-1}.$$

By integrating these equalities along a closed contour $C$ in the interior of the region of convergence of $X_z(z)$, turning around 0 once in the positive direction, we get:

$$\oint_C X_z(z)\, z^{k-1}\, dz = \sum_{n=0}^{+\infty} x(n) \oint_C z^{-n+k-1}\, dz = 2\pi j\, x(k)$$

since the integral $\oint_C z^{-n+k-1}\, dz$ is non-zero (and equal to $2\pi j$) only when $n = k$.
By writing $z$ in the form $z = \rho e^{j\varphi}$, we easily arrive at:

$$x(k) = \frac{1}{2\pi} \int_0^{2\pi} X_z\!\left(\rho e^{j\varphi}\right) \rho^k\, e^{jk\varphi}\, d\varphi$$

Now, using the residue theorem, this integral corresponds to the sum of the residues of $X_z(z)\, z^{k-1}$ at the poles surrounded by $C$:

$$\frac{1}{2\pi j} \oint_C X_z(z)\, z^{k-1}\, dz = \sum_{\text{poles surrounded by } C} \operatorname{Res}\left[X_z(z)\, z^{k-1}\right]$$

Reminders: when $p_n$ is an $r$th-order pole of the expression $X_z(z)\, z^{k-1}$, we can express $X_z(z)\, z^{k-1}$ in the form of a rational fraction of the type $\dfrac{N(z)}{(z - p_n)^r}$. The residue taken at $p_n$ is then equal to:

$$\operatorname{Res}\left[X_z(z)\, z^{k-1}\right]_{p_n} = \frac{1}{(r-1)!} \left[\frac{d^{r-1} N(z)}{dz^{r-1}}\right]_{z = p_n}$$

With a pole of order of multiplicity 1, the expression reduces to:

$$\operatorname{Res}\left[X_z(z)\, z^{k-1}\right]_{p_n} = N(p_n)$$

EXAMPLE 2.6.– we determine the discrete-time causal signal whose z-transform equals $X_z(z) = \dfrac{z}{z - e^{-2}}$:

$$x(k) = \frac{1}{2\pi j} \oint_C \frac{z^k}{z - e^{-2}}\, dz \quad \text{for } k \ge 0.$$

Calculating this integral involves the single pole $e^{-2}$, of order of multiplicity 1. From this we get:

$$x(k) = \operatorname{Res}\left[\frac{z^k}{z - e^{-2}}\right]_{z = e^{-2}} = e^{-2k} \quad \text{for } k \ge 0$$
2.3.2.2. Development in rational fractions

With linear systems, the expression of the z-transform takes the form of a rational fraction, so we can decompose $X_z(z)$ into basic elements. Let $X_z(z) = \dfrac{N(z)}{D(z)}$. The decomposition into basic elements helps us express $X_z(z)$ in the following form:

$$X_z(z) = \sum_{i=1}^{r} \sum_{j=1}^{\beta_i} \frac{\alpha_{i,j}}{\left(1 - a_i z^{-1}\right)^{\beta_i - j + 1}}$$

where $r$ is the number of distinct poles of $X_z(z)$ and $\beta_i$ the multiplicity order of the complex pole $a_i$. We then get:

$$\alpha_{i,j} = \frac{1}{(j-1)!} \left[ \frac{\partial^{j-1}}{\partial \left(z^{-1}\right)^{j-1}} \left( \left(1 - a_i z^{-1}\right)^{\beta_i} \frac{N(z)}{D(z)} \right) \right]_{z = a_i}$$

The z-transform is thus written as a linear combination of simple fractions of order 1 or 2, whose inverse transforms we can easily determine.

EXAMPLE 2.7.– Let

$$X_z(z) = \frac{3 - 12 z^{-1} + 11 z^{-2}}{\left(1 - z^{-1}\right)\left(1 - 2z^{-1}\right)\left(1 - 3z^{-1}\right)}.$$

We then write that:

$$X_z(z) = \frac{1}{1 - z^{-1}} + \frac{1}{1 - 2z^{-1}} + \frac{1}{1 - 3z^{-1}}$$

from which the inverse transform corresponds to:

$$x(k) = \left(1 + 2^k + 3^k\right) u(k)$$
[Figure 2.5. Decomposition into subsystems of the system represented by $X_z(z)$: the three first-order branches $\frac{z}{z-1}$, $\frac{z}{z-2}$ and $\frac{z}{z-3}$ are summed to produce $X_z(z)$]

EXAMPLE 2.8.– here, our purpose is to find the inverse z-transform of $X_z(z)$ given by:

$$X_z(z) = \frac{3}{1 - 3z^{-1} + 2z^{-2}} \quad \text{for } |z| > 2.$$

The decomposition into basic elements allows us to express $X_z(z)$ as follows:

$$X_z(z) = \frac{3}{1 - 3z^{-1} + 2z^{-2}} = \frac{3}{\left(1 - z^{-1}\right)\left(1 - 2z^{-1}\right)} = \frac{6}{1 - 2z^{-1}} - \frac{3}{1 - z^{-1}}$$

from which $x(k) = 3\left(2^{k+1} - 1\right) u(k)$.

2.3.2.3. Development by algebraic division of polynomials

When the expression of the z-transform appears in the form of a rational fraction, $X_z(z) = \dfrac{N(z)}{D(z)}$, we can also obtain an approximate development by carrying out the polynomial division of $N(z)$ by $D(z)$, on condition that the ROC contains 0 or infinity. The division is done according to the positive powers of $z$ if the convergence region contains 0, and according to the negative powers of $z$ if the convergence region contains infinity.
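The decomposition of Example 2.8 can be reproduced with scipy, whose `residuez` routine performs exactly this partial-fraction expansion in powers of $z^{-1}$; we expect the residues $-3$ and $6$ at the poles $1$ and $2$:

```python
from scipy import signal

# residuez expands X(z) = 3/(1 - 3 z^{-1} + 2 z^{-2}) into r_i/(1 - p_i z^{-1}).
r, p, k = signal.residuez([3.0], [1.0, -3.0, 2.0])
pairs = sorted(zip(p.real, r.real))   # (pole, residue) sorted by pole value
```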
EXAMPLE 2.9.– let

$$X_z(z) = \frac{1}{1 - 0.9 z^{-1}}$$

which corresponds to the expression of the transfer function of a filter used for voice signal analysis. Since the ROC contains infinity, we carry out the polynomial division according to the negative powers of $z$ (at each step, the remainder $0.9^n z^{-n}$ is divided again by $1 - 0.9z^{-1}$):

$$X_z(z) = \frac{1}{1 - 0.9 z^{-1}} \approx 1 + 0.9 z^{-1} + 0.81 z^{-2} + 0.729 z^{-3} + \cdots$$

The corresponding sequence is represented by:

$$x(0) = 1, \quad x(1) = 0.9, \quad x(2) = 0.81, \quad x(3) = 0.729, \quad \ldots, \quad x(k) = 0.9^k.$$

2.4. Transfer functions and difference equations

2.4.1. The transfer function of a continuous system

A continuous linear system whose input is $x(t)$ produces a response $y(t)$. This system is governed by a linear differential equation with constant coefficients that links $x(t)$ and $y(t)$. The most general expression of this differential equation has the form:

$$a_0 y(t) + a_1 \frac{dy(t)}{dt} + \cdots + a_p \frac{d^p y(t)}{dt^p} = b_0 x(t) + b_1 \frac{dx(t)}{dt} + \cdots + b_q \frac{d^q x(t)}{dt^q} \qquad (2.14)$$

By assuming that $x(t) = y(t) = 0$ for $t < 0$, we will show that applying the Laplace transform to the differential equation (2.14) yields an explicit relation between the Laplace transforms of $x(t)$ and $y(t)$.
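The long division of Example 2.9 is nothing but the impulse response of the filter $1/(1 - 0.9z^{-1})$: applying `lfilter` to a unit impulse should therefore reproduce $x(k) = 0.9^k$:

```python
import numpy as np
from scipy import signal

# Impulse response of 1/(1 - 0.9 z^{-1}) = successive quotients of the division.
impulse = np.zeros(10)
impulse[0] = 1.0
x = signal.lfilter([1.0], [1.0, -0.9], impulse)
expected = 0.9 ** np.arange(10)
err = np.max(np.abs(x - expected))
```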
Since:

$$L\left[\frac{d^n y(t)}{dt^n}\right] = s^n\, Y(s) \qquad (2.15)$$

and:

$$L\left[\frac{d^n x(t)}{dt^n}\right] = s^n\, X(s), \qquad (2.16)$$

we get:

$$\left(a_0 + a_1 s + \cdots + a_p s^p\right) Y(s) = \left(b_0 + b_1 s + \cdots + b_q s^q\right) X(s) \qquad (2.17)$$

The ratio of the Laplace transforms of the output and input of the system gives the system transmittance, or what we can term the transfer function. It equals:

$$H_s(s) = \frac{Y(s)}{X(s)} = \frac{b_0 + b_1 s + \cdots + b_q s^q}{a_0 + a_1 s + \cdots + a_p s^p} \qquad (2.18)$$

This means that whatever the nature of the input (unit sample sequence, unit step signal, unit ramp signal), we can easily obtain the Laplace transform of the output:

$$Y(s) = H_s(s)\, X(s) \qquad (2.19)$$

The frequency response of the system can then be analyzed using Bode's, Nyquist's or Black's diagrams. The Bode diagram consists of two plots, one for the amplitude (or gain) and one for the phase: they separately plot the modulus, in logarithmic scale, and the argument of the transfer function against frequency. The Nyquist diagram plots the set of points $H_s(j\omega)$ with $\operatorname{Re}\left[H_s(j\omega)\right]$ as abscissa and $\operatorname{Im}\left[H_s(j\omega)\right]$ as ordinate. Lastly, Black's diagram plots the set of points defined by $\operatorname{Arg}\left[H_s(j\omega)\right]$ as abscissa and the gain $\left|H_s(j\omega)\right|$ as ordinate. Except in certain limited cases, we can always approximate the transfer function by a product of rational fractions of orders 1 and 2; this amounts to putting several filters of orders 1 and 2 in cascade.
2.4.2. Transfer functions of discrete systems

We saw in section 1.4.2 that a linear time-invariant system of impulse response $h(k)$, whose input is $x(k)$ and output $y(k)$, verifies the following equation:

$$y(k) = \sum_{n=-\infty}^{+\infty} x(n)\, h(k-n) = x * h(k)$$

The z-transform of relation (1.34) gives a simple product between the z-transforms of the input and of the impulse response of the system, on the condition that the z-transforms converge on the same, non-empty ROC. We then have, on the intersection of the convergence domains:

$$Y_z(z) = H_z(z)\, X_z(z) \qquad (2.20)$$

or:

$$H_z(z) = \frac{Y_z(z)}{X_z(z)} = \sum_{k=-\infty}^{+\infty} h(k)\, z^{-k} \qquad (2.21)$$

The transfer function is the z-transform of the impulse response of the system. The filter is excited by an input of z-transform $X_z(z)$ and delivers the output whose z-transform is $Y_z(z)$.

With discrete systems, if at instant $k$ the filter output is characterized by the input states $\{x(k), x(k-1), \ldots, x(k-N+1)\}$ and the output states $\{y(k), y(k-1), \ldots, y(k-M+1)\}$, the most general relation between the samples is the following difference equation:

$$a_0 y(k) + a_1 y(k-1) + \cdots + a_{M-1}\, y(k-M+1) = b_0 x(k) + \cdots + b_{N-1}\, x(k-N+1). \qquad (2.22)$$

From there, by carrying out the z-transform of the input and output, the difference equation becomes:

$$a_0 Y_z(z) + a_1 z^{-1} Y_z(z) + \cdots + a_{M-1} z^{-(M-1)} Y_z(z) = b_0 X_z(z) + \cdots + b_{N-1} z^{-(N-1)} X_z(z),$$
or

$$\frac{Y_z(z)}{X_z(z)} = \frac{b_0 + b_1 z^{-1} + \cdots + b_{N-1} z^{-(N-1)}}{a_0 + a_1 z^{-1} + \cdots + a_{M-1} z^{-(M-1)}} = \frac{B(z)}{A(z)} = H_z(z). \qquad (2.23)$$

Thus, the transfer function is expressed from the polynomials $A(z)$ and $B(z)$, which are completely characterized by the position of their zeros in the complex plane.

[Figure 2.6. Representation of the discrete system with an input and an output: $X_z(z)$ enters the block $H_z(z) = \frac{b_0 + b_1 z^{-1} + \cdots + b_{N-1} z^{-(N-1)}}{a_0 + a_1 z^{-1} + \cdots + a_{M-1} z^{-(M-1)}}$, which delivers $Y_z(z)$]

COMMENT 2.1.– we also find this kind of representation in modeling signals with parametric models, the most widely used example being the auto-regressive moving average (ARMA) model. Let $y(t)$ be a signal represented by $M$ samples $\{y(k), y(k-1), \ldots, y(k-M+1)\}$, assumed to be generated by an excitation characterized by its $N$ samples $\{x(k), x(k-1), \ldots, x(k-N+1)\}$. A linear discrete model of the signal is a linear relation between the samples $\{x(k)\}$ and $\{y(k)\}$ that can be expressed as follows:

$$a_0 y(k) + a_1 y(k-1) + \cdots + a_{M-1}\, y(k-M+1) = b_0 x(k) + \cdots + b_{N-1}\, x(k-N+1). \qquad (2.24)$$

This kind of representation constitutes an ARMA model of order $(M-1, N-1)$. The coefficients $\{a_i\}_{i=0,\ldots,M-1}$ and $\{b_i\}_{i=0,\ldots,N-1}$ are termed transverse parameters. In general, we adopt the convention $a_0 = 1$. We then have:

$$y(k) = -\sum_{i=1}^{M-1} a_i\, y(k-i) + \sum_{i=0}^{N-1} b_i\, x(k-i) \qquad (2.25)$$

The ARMA model can be interpreted as a filtering by the transfer function $H_z(z)$.
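As an illustration of Comment 2.1, with the convention $a_0 = 1$ the ARMA recursion (2.25) and the filter $H_z(z) = B(z)/A(z)$ produce exactly the same output; the coefficients and input below are arbitrary choices:

```python
import numpy as np
from scipy import signal

b = [1.0, 0.4]               # b_0, b_1 (MA part)
a = [1.0, -0.5, 0.2]         # a_0, a_1, a_2 (AR part, a_0 = 1)
rng = np.random.default_rng(0)
x = rng.standard_normal(50)  # arbitrary excitation

# Reference output computed by the filter H(z) = B(z)/A(z).
y_filter = signal.lfilter(b, a, x)

# Direct implementation of the difference equation (2.25).
y = np.zeros_like(x)
for n in range(len(x)):
    acc = sum(b[i] * x[n - i] for i in range(len(b)) if n - i >= 0)
    acc -= sum(a[i] * y[n - i] for i in range(1, len(a)) if n - i >= 0)
    y[n] = acc

err = np.max(np.abs(y - y_filter))
```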
[Figure 2.7. Realization of a second-order autoregressive process: amplitude versus number of samples]

In the case of a model termed autoregressive (AR), the $\{b_i\}_{i=1,\ldots,N-1}$ are null, except $b_0$, and the model reduces to the following expression:

$$y(k) = -\sum_{i=1}^{M-1} a_i\, y(k-i) + b_0\, x(k) \qquad (2.26)$$

[Figure 2.8. Realization of a second-order autoregressive process: amplitude versus number of samples]
In this way, the polynomial $B(z)$ reduces to a constant $B(z) = b_0$ and the transfer function $H_z(z)$ now only has poles. For this reason, this model is called the all-pole model.

We can also use the moving average (MA) model, in which the $\{a_i\}_{i=1,\ldots,M-1}$ are null, which reduces the model to:

$$y(k) = b_0 x(k) + b_1 x(k-1) + \cdots + b_{N-1}\, x(k-N+1). \qquad (2.27)$$

Here, $A(z)$ equals 1. The model is then characterized by the position of its zeros in the complex plane, so it is also called the all-zero model:

$$H_z(z) = b_0 + b_1 z^{-1} + \cdots + b_{N-1} z^{-(N-1)} \qquad (2.28)$$

2.5. Z-transforms of the autocorrelation and intercorrelation functions

The spectral density in $z$ of the sequence $\{x(k)\}$ is defined as the z-transform of the autocorrelation function $R_{xx}(k)$ of $\{x(k)\}$, a quantity we saw in the previous chapter:

$$S_{xx}(z) = \sum_{k=-\infty}^{+\infty} R_{xx}(k)\, z^{-k} \qquad (2.29)$$

We can also introduce the concept of a discrete interspectrum of the sequences $\{x(k)\}$ and $\{y(k)\}$ as the z-transform of the intercorrelation function $R_{xy}(k)$:

$$S_{xy}(z) = \sum_{k=-\infty}^{+\infty} R_{xy}(k)\, z^{-k} \qquad (2.30)$$

When $x$ and $y$ are real, it can also be demonstrated that $S_{xy}(z) = S_{yx}\!\left(z^{-1}\right)$.

Inverse transforms allow us to recover the intercorrelation and autocorrelation functions from $S_{xy}(z)$ and $S_{xx}(z)$:

$$R_{xy}(m) = \frac{1}{2\pi j} \oint S_{xy}(z)\, z^{m-1}\, dz \qquad (2.31)$$

$$R_{xx}(m) = \frac{1}{2\pi j} \oint S_{xx}(z)\, z^{m-1}\, dz \qquad (2.32)$$

Specific case: $R_{xx}(0) = E\left[x^2\right] = \dfrac{1}{2\pi j} \oint S_{xx}(z)\, z^{-1}\, dz$
Now let us look at a system with a real input $\{x(k)\}$, an output $\{y(k)\}$ and an impulse response $h(k)$. We then calculate $S_{xy}(z)$, when it exists, with $R_{xy}(n) = E\left[x(k-n)\, y(k)\right]$:

$$S_{xy}(z) = \sum_{n=-\infty}^{+\infty} R_{xy}(n)\, z^{-n} = \sum_{n=-\infty}^{+\infty} E\left[x(k-n)\, y(k)\right] z^{-n} = \sum_{n=-\infty}^{+\infty} E\left[x(k-n) \sum_{m=0}^{+\infty} h(m)\, x(k-m)\right] z^{-n}.$$

If permutation between the mathematical expectation and the summation is possible:

$$S_{xy}(z) = \sum_{m=0}^{+\infty} h(m) \sum_{n=-\infty}^{+\infty} E\left[x(k-n)\, x(k-m)\right] z^{-n} = \sum_{m=0}^{+\infty} h(m) \sum_{n=-\infty}^{+\infty} R_{xx}(n-m)\, z^{-n}$$

$$= \sum_{m=0}^{+\infty} h(m)\, z^{-m} \sum_{n=-\infty}^{+\infty} R_{xx}(n-m)\, z^{-(n-m)} = \sum_{m=0}^{+\infty} h(m)\, z^{-m} \sum_{n=-\infty}^{+\infty} R_{xx}(n)\, z^{-n}$$

Now, as the signal $x$ is real, $R_{xx}(-n) = R_{xx}(n)$. Since $\sum_{m=0}^{+\infty} h(m)\, z^{-m} = H_z(z)$ and $\sum_{n=-\infty}^{+\infty} R_{xx}(n)\, z^{-n} = S_{xx}(z)$, we thus establish the following connection between the transfer function $H_z(z)$ of the system and its spectral functions $S_{xy}(z)$ and $S_{xx}(z)$:

$$S_{xy}(z) = H_z(z)\, S_{xx}(z) \qquad (2.33)$$

2.6. Stability

The fact that the transfer function is a rational fraction naturally leads us to the issue of stability, which can be studied by considering the z-transform of the impulse response.
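A Monte-Carlo sketch of relation (2.33): for unit-variance white noise, $S_{xx}(z) = 1$, so $S_{xy}(z) = H_z(z)$ and the cross-correlation $R_{xy}(n) = E[x(k-n)\,y(k)]$ should estimate the impulse response $h(n)$ itself. The FIR response below is an arbitrary choice; the tolerance accounts for estimation noise:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)              # white input, S_xx(z) = 1
h = np.array([1.0, 0.5, 0.25, 0.125])         # arbitrary FIR impulse response
y = signal.lfilter(h, [1.0], x)

# Sample estimate of R_xy(n) = E[x(k-n) y(k)] for n = 0..3.
est = np.array([np.mean(x[: len(x) - n] * y[n:]) for n in range(len(h))])
err = np.max(np.abs(est - h))
```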
2.6.1. Bounded input, bounded output (BIBO) stability

A linear time-invariant system is BIBO stable if its impulse response verifies the following relation (see also Chapter 10):

$$\sum_{k=-\infty}^{+\infty} |h(k)| < +\infty \qquad (2.34)$$

The transfer function is the z-transform of the impulse response; from there we have, for all $z$ belonging to the ROC:

$$\left|H_z(z)\right| = \left|\sum_{k=-\infty}^{+\infty} h(k)\, z^{-k}\right| \le \sum_{k=-\infty}^{+\infty} \left|h(k)\, z^{-k}\right| \qquad (2.35)$$

Now, on the unit circle in the complex plane $z$, we have:

$$\sum_{k=-\infty}^{+\infty} \left|h(k)\, z^{-k}\right| = \sum_{k=-\infty}^{+\infty} |h(k)| \qquad (2.36)$$

From this the following result is obtained:

$$\left|H_z(z)\right| < +\infty \quad \text{for } z = \exp\left(2\pi j \frac{f}{f_s}\right) \qquad (2.37)$$

Many stability criteria have been developed to study the stability of filters. Among these, we will first look at the test of the pole positions of the transfer function, then at Routh's and Jury's criteria.

2.6.2. Regions of convergence

For causal systems, a necessary and sufficient condition of stability is that all the poles of the transfer function lie inside the unit circle in the z-plane.

The decomposition into basic elements of the transfer function of a discrete causal system $H_z(z)$ introduces two types of terms: $\dfrac{1}{1 - a z^{-1}}$, which admits the pole $p_i = a$, and $\dfrac{d + e z^{-1}}{1 - 2b z^{-1} + c z^{-2}}$, which admits the complex conjugate poles $p_i = b \pm j\sqrt{c - b^2}$, of modulus $|p_i| = \sqrt{c}$.
Here we see that the z-transform of the sequence $x_1(k) = a^k u(k)$ converges for $|z| > |a|$ and equals $\dfrac{1}{1 - a z^{-1}}$; it therefore converges on the unit circle if, and only if, $|a| < 1$. In addition, according to Table 2.1, $x_2(k) = \alpha^k \sin(\omega_0 k T_s)\, u(k)$ and $x_3(k) = \alpha^k \cos(\omega_0 k T_s)\, u(k)$ admit, respectively, the z-transforms:

$$\frac{\alpha z \sin(\omega_0 T_s)}{z^2 - 2\alpha z \cos(\omega_0 T_s) + \alpha^2} \quad \text{and} \quad \frac{z\left[z - \alpha\cos(\omega_0 T_s)\right]}{z^2 - 2\alpha z \cos(\omega_0 T_s) + \alpha^2}$$

on condition that $|\alpha| < 1$. A suitable linear combination of $x_2(k)$ and $x_3(k)$ gives us a z-transform of the form $\dfrac{d + e z^{-1}}{1 - 2b z^{-1} + c z^{-2}}$ with $\sqrt{c} < 1$.

For an anti-causal system, a necessary and sufficient condition of stability is that all the poles of the transfer function lie strictly outside the unit circle.

EXAMPLE 2.10.– let the following be the transfer function of a discrete causal system:

$$H_z(z) = \frac{1 - 2z^{-1}}{4 - 2z^{-1} + z^{-2}}.$$

It admits the zero $z_1 = 2$ and the poles $p_1 = \dfrac{1 + j\sqrt{3}}{4}$ and $p_2 = \dfrac{1 - j\sqrt{3}}{4}$. Stability is verified because $|p_1| < 1$ and $|p_2| < 1$.

[Figure 2.9. Diagram of poles and zeros of $H_z(z) = \frac{1 - 2z^{-1}}{4 - 2z^{-1} + z^{-2}}$: the poles $p_{1,2} = \frac{1 \pm j\sqrt{3}}{4}$ lie inside the unit circle and the zero $z_1 = 2$ outside]
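The pole test of Example 2.10 is easy to reproduce: in positive powers of $z$, the denominator $4 - 2z^{-1} + z^{-2}$ becomes $4z^2 - 2z + 1$, whose roots are the poles $(1 \pm j\sqrt{3})/4$, of modulus $1/2 < 1$:

```python
import numpy as np

# Poles of H(z): roots of the denominator 4 z^2 - 2 z + 1.
poles = np.roots([4.0, -2.0, 1.0])
moduli = np.abs(poles)
stable = bool(np.all(moduli < 1.0))   # causal system: stable iff all |p| < 1
```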
2.6.2.1. Routh's criterion

The first approach we will consider for studying stability uses Routh's criterion. In general, Routh's criterion is used to study the stability of continuous systems, usually looped systems. It gives the number of zeros of a polynomial with positive real part by examining its coefficients. Routh's criterion can be adapted to discrete systems through the following change of variable:

$$z^{-1} \to \frac{1 - \lambda}{1 + \lambda} \qquad (2.38)$$

We then analyze the denominator of $H(\lambda)$, which is expressed as:

$$\sum_{k=0}^{n} \alpha_k\, \lambda^k. \qquad (2.39)$$

We formulate the following table:

Row 1: $\alpha_n$, $\alpha_{n-2}$, $\alpha_{n-4}$, …
Row 2: $\alpha_{n-1}$, $\alpha_{n-3}$, $\alpha_{n-5}$, …
Row 3: $\beta_n = \dfrac{\alpha_{n-1}\alpha_{n-2} - \alpha_n \alpha_{n-3}}{\alpha_{n-1}}$, $\beta_{n-1} = \dfrac{\alpha_{n-1}\alpha_{n-4} - \alpha_n \alpha_{n-5}}{\alpha_{n-1}}$, …
Row 4: $\chi_n = \dfrac{\beta_n \alpha_{n-3} - \alpha_{n-1}\beta_{n-1}}{\beta_n}$, $\chi_{n-1} = \dfrac{\beta_n \alpha_{n-5} - \alpha_{n-1}\beta_{n-2}}{\beta_n}$, …

and so on.

Table 2.2. Table for application of Routh's criterion

Routh's theorem states that the number of zeros of $H_z(\lambda)$ with strictly positive real part is equal to the number of sign changes observed when reading the first column of Table 2.2 from top to bottom.

EXAMPLE 2.11.– Let us look again at the example where:

$$H_z(z) = \frac{1 + 2z^{-1}}{4 - 2z^{-1} + z^{-2}}.$$
First, we carry out the change of variable indicated in equation (2.38). We get:

$$H(\lambda) = \frac{N(\lambda)}{D(\lambda)} = \frac{3 + 2\lambda - \lambda^2}{3 + 6\lambda + 7\lambda^2}.$$

From this, the following table is constructed from the coefficients of $D(\lambda)$:

Row 1: 7, 3
Row 2: 6, 0
Row 3: 3

Table 2.3. Application of Routh's criterion

There is no change of sign in the first column; this means that $D(\lambda)$ has no zeros with strictly positive real part. We conclude from this that the system is stable.

2.6.2.2. Jury's criterion

Let $H_z(z) = \dfrac{B(z)}{A(z)}$ be the transfer function. Jury's criterion is an algebraic criterion that allows us to determine whether the roots of the polynomial $A(z)$ are inside the circle of radius unity in the z-plane. Here:

$$A(z) = \sum_{k=0}^{M-1} a_k\, z^{-k}$$

where the coefficients $a_k$ are real and $a_0 > 0$. We construct a table of $2(M-1)-3$ rows. The first two rows of this table are filled, respectively, by the polynomial coefficients according to the increasing, then decreasing, powers of $z^{-1}$. The following rows are deduced by taking determinants of specific coefficients of the two preceding rows, as follows:

$$\beta_k = \begin{vmatrix} a_{M-1} & a_{M-2-k} \\ a_0 & a_{k+1} \end{vmatrix}, \qquad \gamma_k = \begin{vmatrix} \beta_{M-2} & \beta_{M-3-k} \\ \beta_0 & \beta_{k+1} \end{vmatrix}, \quad \text{etc.}$$
This gives us the following table:

Row 1: $a_{M-1}$, $a_{M-2}$, $a_{M-3}$, …, $a_{M-1-k}$, $a_{M-2-k}$, …, $a_1$, $a_0$
Row 2: $a_0$, $a_1$, $a_2$, …, $a_k$, $a_{k+1}$, …, $a_{M-2}$, $a_{M-1}$
Row 3: $\beta_{M-2}$, $\beta_{M-3}$, $\beta_{M-4}$, …, $\beta_{M-2-k}$, …, $\beta_0$
Row 4: $\beta_0$, $\beta_1$, $\beta_2$, …, $\beta_k$, …, $\beta_{M-2}$
Row 5: $\gamma_{M-3}$, $\gamma_{M-4}$, …, $\gamma_{M-3-k}$, …, $\gamma_0$
Row 6: $\gamma_0$, $\gamma_1$, …, $\gamma_k$, …, $\gamma_{M-3}$
…
Row 2M-7: $p_3$, $p_2$, $p_1$, $p_0$
Row 2M-6: $p_0$, $p_1$, $p_2$, $p_3$
Row 2M-5: $q_2$, $q_1$, $q_0$

Table 2.4. Table for establishing and verifying Jury's criterion

According to Jury's criterion, the polynomial roots are inside the circle of radius unity in the z-plane if the following $M$ conditions are met:
– $A(1) > 0$, and $A(-1) > 0$ if $M-1$ is even or $A(-1) < 0$ if $M-1$ is odd;
– $|a_{M-1}| < a_0$;
– $|\beta_{M-2}| > |\beta_0|$, $|\gamma_{M-3}| > |\gamma_0|$, …, and $|q_2| > |q_0|$.

EXAMPLE 2.12.– looking again at the example of $H_z(z) = \dfrac{1 + 2z^{-1}}{4 - 2z^{-1} + z^{-2}}$ with $A(z) = 4 - 2z^{-1} + z^{-2}$, the corresponding Jury table is as follows:

Row 1: 1, -2, 4
Row 2: 4, -2, 1
Row 3: -15, 6
Row 4: 6, -15
Row 5: 189

In addition, since $A(1) = 3 > 0$ and $A(-1) = 7 > 0$, the poles of the transfer function are inside the unit circle. In Chapter 10, we will discuss stability in more depth.
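A minimal sketch of Jury's conditions for the second-order case of Example 2.12, $A(z) = 4 - 2z^{-1} + z^{-2}$; for $M-1 = 2$ the criterion reduces to three tests on the coefficients $(a_0, a_1, a_2)$:

```python
# Jury's conditions for a second-order A(z) = a0 + a1 z^{-1} + a2 z^{-2}.
a0, a1, a2 = 4.0, -2.0, 1.0
A = lambda z: a0 + a1 / z + a2 / z**2
cond1 = A(1) > 0       # A(1) = 3 > 0
cond2 = A(-1) > 0      # M - 1 is even, so A(-1) = 7 must be positive
cond3 = abs(a2) < a0   # |a_{M-1}| < a0
stable = cond1 and cond2 and cond3
```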
Chapter 3

Frequential Characterization of Signals and Filters

3.1. Introduction

This chapter discusses frequential representations of signals and filters. We will introduce the Fourier transform of continuous-time signals, first presenting the Fourier series decomposition of periodic signals. Properties and basic calculation methods will be demonstrated. We will then present the frequential analysis of discrete-time signals based on the discrete Fourier transform, in its standard and fast versions. These concepts will then be illustrated on speech signals using a common time-frequency-energy representation: the spectrogram.

3.2. The Fourier transform of continuous signals

3.2.1. Summary of the Fourier series decomposition of continuous signals

3.2.1.1. Decomposition of finite energy signals using an orthonormal base

Let x(t) be a finite energy signal. We consider the scalar product $\langle \varphi_i(t), \varphi_k(t) \rangle$ of two functions $\varphi_i(t)$ and $\varphi_k(t)$ of finite energy, defined as follows:

Chapter written by Eric GRIVEL and Yannick BERTHOUMIEU.
$$\langle \varphi_i(t), \varphi_k(t) \rangle = \int_{-\infty}^{+\infty} \varphi_i(t)\, \varphi_k^*(t)\, dt \qquad (3.1)$$

where $\varphi_k^*(t)$ denotes the complex conjugate of $\varphi_k(t)$.

A family $\{\varphi_k(t)\}$ of finite energy functions is called orthonormal if it verifies the following relations:

$$\langle \varphi_i(t), \varphi_k(t) \rangle = \delta(i-k). \qquad (3.2)$$

A family $\{\varphi_k(t)\}$ is complete if any vector of the space can be approximated as closely as desired by a linear combination of the $\{\varphi_k(t)\}$. A family $\{\varphi_k(t)\}$ is termed maximal when the sole finite-energy function x(t) orthogonal to every $\varphi_k(t)$ is the null function.

We can then decompose the signal x(t) on an orthonormal base $\{\varphi_k(t)\}$ as follows:

$$x(t) = \sum_k \langle x(t), \varphi_k(t) \rangle\, \varphi_k(t) \qquad (3.3)$$

COMMENT 3.1.– when the family is not complete, $\sum_k \langle x(t), \varphi_k(t) \rangle\, \varphi_k(t)$ is an optimal approximation, in the least squares sense, of the signal x(t).

3.2.1.2. Fourier series development of periodic signals

The Fourier series development of a periodic signal x(t) of period $T_0$ follows from the decomposition of a signal on an orthonormal base. To observe this, we look at the family of periodic functions $\{\varphi_k(t)\}_k$ defined as follows:

$$\varphi_k(t) = \exp\left(2j\pi \frac{k}{T_0}\, t\right) \quad \text{with } k \in \mathbb{Z} \qquad (3.4)$$

Here, the scalar product is that of periodic signals of period $T_0$ and of finite power, i.e. such that $\int_{(T_0)} |\varphi(t)|^2\, dt < +\infty$, so:

$$\langle \varphi_i(t), \varphi_k(t) \rangle = \frac{1}{T_0} \int_{(T_0)} \varphi_i(t)\, \varphi_k^*(t)\, dt \qquad (3.5)$$
We then have:

$$\langle \varphi_i(t), \varphi_k(t) \rangle = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} \exp\left(2j\pi \frac{i-k}{T_0}\, t\right) dt = \frac{\sin\left[\pi(i-k)\right]}{\pi(i-k)}. \qquad (3.6)$$

If $i \neq k$, $\langle \varphi_i(t), \varphi_k(t) \rangle = 0$; otherwise, $\langle \varphi_k(t), \varphi_k(t) \rangle = 1$.

Any periodic signal x(t) of period $T_0$ can thus be decomposed in a Fourier series, i.e. as a linear combination of the functions $\varphi_k(t) = \exp\left(2j\pi \frac{k}{T_0}t\right)$. Given equation (3.3), we have:

$$x(t) = \sum_{k=-\infty}^{+\infty} c_k \exp\left(2j\pi \frac{k}{T_0}\, t\right) \qquad (3.7)$$

where $c_k$ measures the degree of resemblance between x(t) and $\exp\left(2j\pi \frac{k}{T_0}t\right)$:

$$c_k = \left\langle x(t), \exp\left(2j\pi \frac{k}{T_0}\, t\right) \right\rangle = \frac{1}{T_0} \int_{(T_0)} x(t) \exp\left(-2j\pi \frac{k}{T_0}\, t\right) dt \qquad (3.8)$$

When the signal x(t) is real, we can demonstrate that the Fourier series decomposition of x(t) is written as:

$$x(t) = \frac{a_0}{2} + \sum_{k=1}^{+\infty} \left[ a_k \cos\left(2\pi \frac{k}{T_0}\, t\right) + b_k \sin\left(2\pi \frac{k}{T_0}\, t\right) \right] \qquad (3.9)$$

where the real quantities $a_k$ and $b_k$ verify the following relations:

$$a_k = \frac{2}{T_0} \int_{(T_0)} x(t) \cos\left(2\pi \frac{k}{T_0}\, t\right) dt \quad \text{with } k \in \mathbb{N} \qquad (3.10)$$

and

$$b_k = \frac{2}{T_0} \int_{(T_0)} x(t) \sin\left(2\pi \frac{k}{T_0}\, t\right) dt \quad \text{with } k \in \mathbb{N} \qquad (3.11)$$
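The orthonormality relation (3.6) can be checked numerically; the scalar product (3.5) is approximated here by a mean over a uniform grid covering one period, which is exact (up to roundoff) for these exponentials:

```python
import numpy as np

# Check of (3.6): exp(2jπkt/T0) form an orthonormal family for the
# scalar product (1/T0) ∫_(T0) φ_i φ_k* dt.
T0 = 1.0
n = 1024
t = np.arange(n) * (T0 / n)                       # one period, endpoint excluded
phi = lambda k: np.exp(2j * np.pi * k * t / T0)
inner = lambda i, k: np.mean(phi(i) * np.conj(phi(k)))
same = inner(3, 3)   # i = k: should be 1
diff = inner(3, 5)   # i ≠ k: should be 0
```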
PROOF.– $c_k$ is a complex quantity; we can express it as:

$$c_k = |c_k| \exp(j\phi_k) \qquad (3.12)$$

When the signal x(t) is real, the coefficients $c_k$ and $c_{-k}$ are complex conjugates: $|c_{-k}| = |c_k|$ and $c_{-k} = c_k^* = |c_k| \exp(-j\phi_k)$. We then have:

$$x(t) = \sum_{k=-\infty}^{+\infty} c_k \exp\left(2j\pi \frac{k}{T_0} t\right) = c_0 + \sum_{k=1}^{+\infty} \left[ c_k \exp\left(2j\pi \frac{k}{T_0}t\right) + c_{-k} \exp\left(-2j\pi \frac{k}{T_0}t\right) \right] = c_0 + \sum_{k=1}^{+\infty} 2|c_k| \cos\left(2\pi \frac{k}{T_0} t + \phi_k\right) \qquad (3.13)$$

Expanding the cosine:

$$x(t) = c_0 + \sum_{k=1}^{+\infty} 2|c_k| \left[ \cos\phi_k \cos\left(2\pi \frac{k}{T_0}t\right) - \sin\phi_k \sin\left(2\pi \frac{k}{T_0}t\right) \right] \qquad (3.14)$$

Comparing relation (3.14) with (3.9) leads to the following identification:

$$a_k = 2|c_k| \cos\phi_k = 2\operatorname{Re}(c_k) \quad \text{and} \quad b_k = -2|c_k| \sin\phi_k = -2\operatorname{Im}(c_k) \qquad (3.15)$$

The coefficients $c_k$ and $c_{-k}$ are then linked to the quantities $a_k$ and $b_k$ as follows:

$$c_k = \frac{1}{2}\left(a_k - j b_k\right) \quad \text{and} \quad c_{-k} = \frac{1}{2}\left(a_k + j b_k\right) \qquad (3.16)$$

COMMENT 3.2.– periodic signals do not have finite energy on the interval $]-\infty; +\infty[$: the quantity $\int_{-\infty}^{+\infty} |x(t)|^2\, dt$ does not have a finite value. We can also say that x(t) is not square summable.
COMMENT 3.3.– we also see that, according to Parseval's equality,

$$\sum_{k=-\infty}^{+\infty} |c_k|^2 = \frac{1}{T_0} \int_{(T_0)} |x(t)|^2\, dt \qquad (3.17)$$

If x(t) is real, $\sum_{k=-\infty}^{+\infty} |c_k|^2 = \frac{1}{T_0} \int_{(T_0)} x^2(t)\, dt$. The signal's total average power is thus equal to the sum of the average powers of the different harmonics and of the continuous component.

COMMENT 3.4.– we recall that the mean value of a periodic signal is given by the relation:

$$\mu = \frac{1}{T_0} \int_{(T_0)} x(t)\, dt = c_0.$$

COMMENT 3.5.– if the analyzed signal is even, the complex coefficients $c_k$ constitute an even sequence. If the signal is odd, the complex coefficients $c_k$ of the Fourier series decomposition form an odd sequence:

$$\forall t,\; x(-t) = x(t) \iff \forall k \in \mathbb{Z},\; c_{-k} = c_k \qquad (3.18)$$

$$\forall t,\; x(-t) = -x(t) \iff \forall k \in \mathbb{Z},\; c_{-k} = -c_k \qquad (3.19)$$

From there, if the analyzed signal is real and even, the complex coefficients $c_k$ constitute a real even sequence. If the signal is real and odd, the coefficients $c_k$ of the Fourier series decomposition form a purely imaginary odd sequence.

COMMENT 3.6.– amplitude and phase spectra. The amplitude spectrum expresses the frequential distribution of the amplitude of the signal. It is given by the modulus of the complex coefficients $c_k$ plotted against the frequencies $\frac{k}{T_0}$ related to the functions $\varphi_k(t) = \exp\left(2j\pi \frac{k}{T_0}t\right)$.
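Parseval's relation (3.17) can be checked numerically on a simple signal; for $x(t) = \cos(2\pi t/T_0) + 0.5$ (an arbitrary choice) the average power is $1/2 + 1/4 = 3/4$, carried by $c_0$ and $c_{\pm 1}$:

```python
import numpy as np

# Numerical check of Parseval's equality (3.17) on one period.
T0 = 1.0
n = 2048
t = np.arange(n) * (T0 / n)
x = np.cos(2 * np.pi * t / T0) + 0.5

power_time = np.mean(x ** 2)                              # (1/T0) ∫ x² dt
c = lambda k: np.mean(x * np.exp(-2j * np.pi * k * t / T0))
power_freq = sum(abs(c(k)) ** 2 for k in range(-3, 4))    # Σ |c_k|²
err = abs(power_time - power_freq)
```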
[Figure 3.1. Amplitude spectrum of a periodic signal: spectral lines of height $|c_0|$, $|c_{\pm 1}|$, $|c_{\pm 2}|$, $|c_{\pm 3}|$, … placed at the frequencies $0$, $\pm\frac{1}{T_0}$, $\pm\frac{2}{T_0}$, $\pm\frac{3}{T_0}$, …]

According to Figure 3.1, the spectrum of the periodic signal x(t) has a discrete representation. It contains the average value, the fundamental component, and the harmonics of the signal, whose frequencies are multiples of the fundamental.

Introducing a delay in the signal x(t) does not modify the amplitude spectrum of the signal, but modifies the phase spectrum, which is given by the phase of the complex coefficients $c_k$ against the frequencies $\frac{k}{T_0}$ linked to the functions $\varphi_k(t) = \exp\left(2j\pi \frac{k}{T_0}t\right)$. This phase spectrum is also discrete. If we let $d_k$ be the complex coefficients of the Fourier series development of $x(t - \tau)$, we then have:

$$x(t - \tau) = \sum_{k=-\infty}^{+\infty} d_k \exp\left(2j\pi \frac{k}{T_0}\, t\right) \qquad (3.20)$$

Now, with equation (3.7), we also have:

$$x(t - \tau) = \sum_{k=-\infty}^{+\infty} c_k \exp\left(2j\pi \frac{k}{T_0}\, (t - \tau)\right) = \sum_{k=-\infty}^{+\infty} c_k \exp\left(-2j\pi \frac{k}{T_0}\, \tau\right) \exp\left(2j\pi \frac{k}{T_0}\, t\right) \qquad (3.21)$$
According to equations (3.20) and (3.21), we deduce that:

$$d_k = c_k \exp\left(-2j\pi \frac{k}{T_0}\, \tau\right) \qquad (3.22)$$

and

$$|c_k| = |d_k|. \qquad (3.23)$$

EXAMPLE.– let the signal be written as follows:

$$x(t) = \cos(2\pi f_0 t) = \frac{1}{2}\left[\exp(2j\pi f_0 t) + \exp(-2j\pi f_0 t)\right] = c_1 \exp(2j\pi f_0 t) + c_{-1} \exp(-2j\pi f_0 t)$$

The signal is periodic, of period $T_0 = \frac{1}{f_0}$. The corresponding amplitude and phase spectra are discrete: only certain frequencies are present in the signal. Here, this corresponds to two Diracs in the frequency domain placed at the frequencies $f_0$ and $-f_0$.

3.2.2. Fourier transforms and continuous signals

3.2.2.1. Representations

The Fourier transform of a signal x(t) of total finite energy, with values in the set of complex numbers, is defined as follows:

$$X(f) = TF\left(x(t)\right) = \int_{-\infty}^{+\infty} x(t)\, e^{-2j\pi f t}\, dt. \qquad (3.24)$$

The Fourier transform of a signal x(t) being a complex quantity, the amplitude and phase spectra respectively represent the modulus and the phase of X(f) according to the frequency f. The inverse Fourier transform is then written as:

$$x(t) = \int_{-\infty}^{+\infty} X(f)\, e^{2j\pi f t}\, df \qquad (3.25)$$
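Relations (3.22)–(3.23) can be illustrated numerically: delaying $x(t)$ by $\tau$ multiplies each $c_k$ by $\exp(-2j\pi k\tau/T_0)$ and leaves the amplitude spectrum unchanged. The signal and the delay below are arbitrary choices:

```python
import numpy as np

# Check of (3.22)-(3.23) on x(t) = cos(2πt/T0) and its delayed version.
T0, tau = 2.0, 0.3
n = 4096
t = np.arange(n) * (T0 / n)
x = np.cos(2 * np.pi * t / T0)
xd = np.cos(2 * np.pi * (t - tau) / T0)            # x(t - τ)

coef = lambda sig, k: np.mean(sig * np.exp(-2j * np.pi * k * t / T0))
c1, d1 = coef(x, 1), coef(xd, 1)
err_phase = abs(d1 - c1 * np.exp(-2j * np.pi * 1 * tau / T0))  # (3.22)
err_amp = abs(abs(d1) - abs(c1))                                # (3.23)
```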
3.2.2.2. Properties

The Fourier transform is a linear application that verifies certain properties, easily proven using equations (3.24) and (3.25). This transform goes from the temporal to the frequential domain, and its use facilitates the characterization of continuous signals; in particular, it turns convolutions into products and differential equations into algebraic ones:

– when $y(t) = x^*(t)$, we have $Y(f) = X^*(-f)$;
– when $y(t) = x(t - t_0)$, we have $Y(f) = e^{-2j\pi f t_0}\, X(f)$;
– when $y(t) = e^{2j\pi f_0 t}\, x(t)$, we have $Y(f) = X(f - f_0)$;
– when $y(t) = x(at)$, we have $Y(f) = \frac{1}{|a|}\, X\!\left(\frac{f}{a}\right)$;
– when $y(t) = x * z(t) = \int_{-\infty}^{+\infty} x(\tau)\, z(t - \tau)\, d\tau$, we have $Y(f) = X(f)\, Z(f)$. Indeed, with the change of variable $u = t - \tau$:

$$Y(f) = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} x(\tau)\, z(t-\tau)\, e^{-2j\pi f t}\, d\tau\, dt = \int_{-\infty}^{+\infty} x(\tau)\, e^{-2j\pi f \tau}\, d\tau \int_{-\infty}^{+\infty} z(u)\, e^{-2j\pi f u}\, du = X(f)\, Z(f);$$

– when $y(t) = x(t)\, z(t)$, we have $Y(f) = X * Z(f)$, where $*$ designates the convolution product;
– if $y(t)$ is real and even, its transform $Y(f)$ is real and even; indeed, since $y(t) = y(-t) = y^*(t)$, we have $Y(f) = Y(-f) = Y^*(-f)$;
– if $y(t)$ is real and odd, its transform $Y(f)$ is odd and purely imaginary; since $y(t) = -y(-t) = y^*(t)$, we have $Y(f) = -Y(-f) = Y^*(-f)$;
– when $y(t) = \frac{d^n x(t)}{dt^n}$, we have $Y(f) = (2j\pi f)^n\, X(f)$;
– when $y(t) = \int_0^t x(u)\, du$, we have $Y(f) = \frac{1}{2j\pi f}\, X(f) + c\,\delta(f)$, where the constant c is linked to the mean value of y.
3.2.2.3. The duality theorem

Given the expressions seen above for the Fourier transform and the inverse Fourier transform, we can now discuss its dual properties. We can easily demonstrate that if x(t) has Fourier transform X(f), then X(t) has Fourier transform x(−f). We then have:

$$TF(X(t)) = \int_{-\infty}^{+\infty} X(t)\, e^{-j2\pi f t}\, dt = x(-f). \quad (3.26)$$

As well, if x(t) is real and even, then X(f) is real and even, and X(t) has for Fourier transform x(f). We will discuss this property again in Chapter 9.

3.2.2.4. The quick method of calculating the Fourier transform

By proceeding with successive derivations, we can easily calculate the Fourier transform of a signal.

EXAMPLE 3.1.– we calculate the Fourier transform of the derivative x′(t) of the rectangular impulse signal of duration θ.

Figure 3.2. Temporal representation of the gate function x(t) and of its derivative $x'(t) = \frac{dx(t)}{dt}$

By deriving the rectangular impulse signal of duration θ, we can express its derivative in terms of two Dirac impulses:

$$x'(t) = \frac{dx(t)}{dt} = \delta\!\left(t + \frac{\theta}{2}\right) - \delta\!\left(t - \frac{\theta}{2}\right).$$

The Fourier transform of this signal is easily obtained:

$$TF\!\left[\frac{dx(t)}{dt}\right] = \exp(j\pi f\theta) - \exp(-j\pi f\theta) = j2\pi f\; TF[x(t)].$$

From there, by writing $\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$, we get:

$$X(f) = TF[x(t)] = \frac{\sin(\pi f\theta)}{\pi f} = \theta\,\mathrm{sinc}(f\theta).$$
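As a numerical sketch (not part of the original text), the closed form $X(f) = \theta\,\mathrm{sinc}(f\theta)$ can be checked against a direct numerical integration of equation (3.24); the value θ = 2 is an arbitrary choice:

```python
import numpy as np

# Numerical check of X(f) = theta * sinc(f * theta) for the centered
# rectangular pulse of width theta (theta = 2.0 is an arbitrary choice).
theta = 2.0
t = np.linspace(-theta / 2, theta / 2, 20001)   # support of the pulse

def gate_ft(f):
    """Fourier transform of the gate, integrated numerically."""
    return np.trapz(np.exp(-2j * np.pi * f * t), t)

for f in (0.1, 0.35, 0.8):
    analytic = theta * np.sinc(f * theta)        # np.sinc(x) = sin(pi x)/(pi x)
    assert abs(gate_ft(f) - analytic) < 1e-6
```

Note that NumPy's `np.sinc` already uses the normalized convention $\sin(\pi x)/(\pi x)$ adopted here.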
EXAMPLE 3.2.– here, we look at a signal represented as:

$$x(t) = \begin{cases} a\cos\!\left(\dfrac{2\pi}{T_0}t\right) & \text{if } t \in \left[-\dfrac{T_0}{2}, \dfrac{T_0}{2}\right] \\ 0 & \text{otherwise.} \end{cases}$$

Figure 3.3. Temporal representation of the signal x(t) for a = 3 and T0 = 10
By deriving the signal x(t), we bring out the discontinuities at $t = -\frac{T_0}{2}$ and $t = \frac{T_0}{2}$:

$$\frac{dx(t)}{dt} = x_1(t) + x_2(t),$$

where we write:

$$x_1(t) = \begin{cases} -\dfrac{2\pi a}{T_0}\sin\!\left(\dfrac{2\pi}{T_0}t\right) & \text{if } t \in \left[-\dfrac{T_0}{2}, \dfrac{T_0}{2}\right] \\ 0 & \text{otherwise,} \end{cases}$$

$$x_2(t) = -a\,\delta\!\left(t + \frac{T_0}{2}\right) + a\,\delta\!\left(t - \frac{T_0}{2}\right).$$

The derivative of $x_1(t)$ can in turn be expressed in terms of x(t), as follows:

$$\frac{dx_1(t)}{dt} = -\left(\frac{2\pi}{T_0}\right)^2 x(t).$$

We end up with the following system:

$$\begin{cases} x'(t) = x_1(t) + x_2(t) \\ x_1'(t) = -\left(\dfrac{2\pi}{T_0}\right)^2 x(t). \end{cases}$$
Using the Fourier transform simplifies the resolution of this system. We obtain:

$$\begin{cases} j2\pi f\, X(f) = X_1(f) + a\exp(-j\pi f T_0) - a\exp(j\pi f T_0) \\ X_1(f) = -\dfrac{1}{j2\pi f}\left(\dfrac{2\pi}{T_0}\right)^2 X(f). \end{cases}$$

Figure 3.4. Temporal representation of the signal x′(t)

From this:

$$\left[j2\pi f + \frac{1}{j2\pi f}\left(\frac{2\pi}{T_0}\right)^2\right] X(f) = -2ja\sin(\pi f T_0),$$

or:

$$X(f) = \frac{a f T_0^2}{\pi}\,\frac{\sin(\pi f T_0)}{1 - (f T_0)^2} = \frac{a T_0\,(f T_0)^2}{1 - (f T_0)^2}\,\mathrm{sinc}(f T_0).$$
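The closed form reconstructed above can be checked numerically against a direct integration of equation (3.24) over the support of the signal; this is a sketch using the values a = 3 and T0 = 10 of Figure 3.3, and frequencies away from $fT_0 = \pm 1$:

```python
import numpy as np

# Numerical check of the closed form for the windowed cosine
# x(t) = a*cos(2*pi*t/T0) on [-T0/2, T0/2] (a = 3, T0 = 10):
# X(f) = a*f*T0^2*sin(pi*f*T0) / (pi*(1 - (f*T0)^2)).
a, T0 = 3.0, 10.0
t = np.linspace(-T0 / 2, T0 / 2, 200001)
x = a * np.cos(2 * np.pi * t / T0)

def X_numeric(f):
    """Fourier transform by direct numerical integration."""
    return np.trapz(x * np.exp(-2j * np.pi * f * t), t)

def X_closed(f):
    """Closed form derived from the differentiation method."""
    return a * f * T0**2 * np.sin(np.pi * f * T0) / (np.pi * (1 - (f * T0) ** 2))

for f in (0.03, 0.17, 0.25):        # avoid the removable poles f*T0 = +/-1
    assert abs(X_numeric(f) - X_closed(f)) < 1e-4
```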
3.2.2.5. The Wiener-Khintchine theorem

In this section, we look at the Fourier transform of the autocorrelation function $R_{xx}(\tau)$ of a real continuous-time signal x(t):

$$TF(R_{xx}(\tau)) = \int_{-\infty}^{+\infty} R_{xx}(\tau)\, e^{-j2\pi f\tau}\, d\tau = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} x(t)\, x(t-\tau)\, e^{-j2\pi f\tau}\, dt\, d\tau. \quad (3.27)$$

We then change the variable, $u = t - \tau$:

$$TF(R_{xx}(\tau)) = \int_{-\infty}^{+\infty} x(t)\, e^{-j2\pi f t}\left(\int_{-\infty}^{+\infty} x(u)\, e^{j2\pi f u}\, du\right) dt = X(f)\, X(-f). \quad (3.28)$$

Now, since x(t) is real, $X(-f) = X^*(f)$. The Fourier transform of the autocorrelation function of the signal x(t) thus satisfies:

$$TF(R_{xx}(\tau)) = |X(f)|^2 = S_{xx}(f), \quad (3.29)$$

where $S_{xx}(f)$ designates the spectral density of the signal x(t). This relation expresses the Wiener-Khintchine theorem in the case of deterministic signals.

COMMENT 3.7.– another way to obtain this result consists of directly applying the properties of the Fourier transform presented in section 3.2.2.2 to the autocorrelation function, which can be seen as a convolution product:

$$R_{xx}(\tau) = \int_{-\infty}^{+\infty} x(t)\, x(t-\tau)\, dt = x(\tau) * x^*(-\tau).$$

3.2.2.6. The Fourier transform of a Dirac comb

The Dirac comb $\sum_{k=-\infty}^{+\infty}\delta(t - kT_0)$ is a periodic singular distribution of period T0. In order to determine the transform of this signal, we introduce the periodic square signal x(t) obtained by periodic reproduction, with period T0, of the rectangular impulse signal of duration θ and of amplitude $\frac{1}{\theta}$.
Figure 3.5. Periodic square signal x(t) of period T0, made of pulses of width θ and amplitude 1/θ

By making θ tend towards 0, the periodic square signal tends towards the Dirac comb. For a single pulse $g_\theta(t)$ we have:

$$\delta(t) = \lim_{\theta\to 0} g_\theta(t). \quad (3.30)$$

We then calculate the development coefficients by using the Fourier series of the periodic signal:

$$c_0 = \frac{1}{T_0}\int_{T_0}\delta(t)\, dt = \lim_{\theta\to 0}\frac{1}{T_0}\int_0^\theta \frac{1}{\theta}\, dt = \frac{1}{T_0}. \quad (3.31)$$

As well, we have, for k ≠ 0:

$$c_k = \lim_{\theta\to 0}\frac{1}{T_0}\int_0^\theta \frac{1}{\theta}\exp\!\left(-j\frac{2\pi k}{T_0}t\right)dt = \lim_{\theta\to 0}\frac{-1}{j2\pi k\,\theta}\left[\exp\!\left(-j\frac{2\pi k}{T_0}\theta\right) - 1\right]. \quad (3.32)$$

By then carrying out a limited development of $\exp\!\left(-j\frac{2\pi k}{T_0}\theta\right)$ when θ tends towards 0, we obtain:

$$c_k = \lim_{\theta\to 0}\frac{-1}{j2\pi k\,\theta}\left[-j\frac{2\pi k}{T_0}\theta + O(\theta^2)\right] = \frac{1}{T_0}. \quad (3.33)$$
Using equations (3.31) and (3.33), we get:

$$\sum_{k=-\infty}^{+\infty}\delta(t - kT_0) = \frac{1}{T_0}\sum_{k=-\infty}^{+\infty}\exp\!\left(j\frac{2\pi k}{T_0}t\right). \quad (3.34)$$

A Dirac comb thus has a discrete spectrum: the frequency components are situated at every multiple of $\frac{1}{T_0}$ and are of amplitude $\frac{1}{T_0}$. We say that the Fourier transform of a Dirac comb of period $T_0$ and of unity amplitude is a Dirac comb of period $\frac{1}{T_0}$ and of amplitude $\frac{1}{T_0}$:

$$TF\!\left[\sum_{k=-\infty}^{+\infty}\delta(t - kT_0)\right] = \frac{1}{T_0}\sum_{k=-\infty}^{+\infty}\delta\!\left(f - \frac{k}{T_0}\right). \quad (3.35)$$

COMMENT 3.8.– according to the properties given in section 3.2.2.2, we have:

$$TF\!\left[\sum_{k=-\infty}^{+\infty}\exp\!\left(j\frac{2\pi k}{T_0}t\right)\right] = \sum_{k=-\infty}^{+\infty}\delta\!\left(f - \frac{k}{T_0}\right). \quad (3.36)$$

With equations (3.35) and (3.36), we end up with Poisson's summation formula:

$$\sum_{k=-\infty}^{+\infty}\delta(t - kT_0) = \frac{1}{T_0}\sum_{k=-\infty}^{+\infty}\exp\!\left(j\frac{2\pi k}{T_0}t\right). \quad (3.37)$$

COMMENT 3.9.– by using the properties given in section 3.2.2.2, we can also demonstrate that:

$$TF\!\left[\sum_{k=-\infty}^{+\infty}\delta(t - kT_0)\right] = \sum_{k=-\infty}^{+\infty}\exp(-j2\pi k f T_0). \quad (3.38)$$
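As a numerical aside (not in the original text), the comb identity underlying Poisson's formula implies, for a well-behaved pattern g with transform G, that $\sum_k g(kT_0) = \frac{1}{T_0}\sum_k G\!\left(\frac{k}{T_0}\right)$. A sketch using the Gaussian $g(t) = e^{-\pi t^2}$, whose Fourier transform is $G(f) = e^{-\pi f^2}$, with the arbitrary choice T0 = 0.7:

```python
import numpy as np

# Poisson-summation sketch with g(t) = exp(-pi t^2), G(f) = exp(-pi f^2):
# sum_k g(k*T0) must equal (1/T0) * sum_k G(k/T0). T0 = 0.7 is arbitrary.
T0 = 0.7
k = np.arange(-50, 51)                  # both series decay extremely fast

lhs = np.sum(np.exp(-np.pi * (k * T0) ** 2))
rhs = np.sum(np.exp(-np.pi * (k / T0) ** 2)) / T0

assert abs(lhs - rhs) < 1e-12
```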
3.2.2.7. Another method of calculating the Fourier series development of a periodic signal

Let x(t) be a signal constructed from the periodization of a pattern m(t) at the period T0. This signal allows for a Fourier series development satisfying equations (3.7) and (3.8). The coefficients of this development can be calculated in another way when x(t) can be expressed from the pattern m(t) as follows:

$$x(t) = m(t) * \left[\sum_{k=-\infty}^{+\infty}\delta(t - kT_0)\right]. \quad (3.39)$$

We can then obtain X(f) from equations (3.7) or (3.39):

$$X(f) = M(f)\times\frac{1}{T_0}\sum_{k=-\infty}^{+\infty}\delta\!\left(f - \frac{k}{T_0}\right) = \sum_{k=-\infty}^{+\infty}\frac{1}{T_0}\, M\!\left(\frac{k}{T_0}\right)\delta\!\left(f - \frac{k}{T_0}\right) = \sum_{k=-\infty}^{+\infty} c_k\,\delta\!\left(f - \frac{k}{T_0}\right). \quad (3.40)$$

By identification, it is then possible to express the coefficients of the Fourier series development of the signal x(t) in terms of M(f), the Fourier transform of the pattern:

$$c_k = \frac{1}{T_0}\, M\!\left(\frac{k}{T_0}\right). \quad (3.41)$$

We use this result with the signal x(t) shown in Figure 3.6.

Figure 3.6. A periodic signal x(t): square wave of period T0, alternating between +1 and −1
Figure 3.7. Pattern m(t) linked to the periodic signal shown in Figure 3.6, equal to +1 on [0, T0/2[ and −1 on [T0/2, T0[

From one of the methods shown in section 3.2.2.4, we find the transform of the pattern described in Figure 3.7:

$$M(f) = \frac{2j}{\pi f}\sin^2\!\left(\frac{\pi f T_0}{2}\right)\exp(-j\pi f T_0). \quad (3.42)$$

The coefficients of the Fourier series development of the signal x(t) then equal:

$$c_k = \frac{1}{T_0}\, M\!\left(\frac{k}{T_0}\right) = \frac{2j}{\pi k}\sin^2\!\left(\frac{\pi k}{2}\right)\exp(-j\pi k) = \begin{cases} 0 & \text{if } k \text{ is even} \\ \dfrac{2}{j\pi k} & \text{if } k \text{ is odd.} \end{cases} \quad (3.43)$$

By using equation (3.8), we obtain the same result as in equation (3.43). We then have:

$$c_k = \frac{1}{T_0}\left[\int_0^{T_0/2}\exp\!\left(-j\frac{2\pi k}{T_0}t\right)dt - \int_{T_0/2}^{T_0}\exp\!\left(-j\frac{2\pi k}{T_0}t\right)dt\right] = \frac{j}{\pi k}\left[\cos(\pi k) - 1\right] = \begin{cases} 0 & \text{if } k \text{ is even} \\ \dfrac{2}{j\pi k} & \text{if } k \text{ is odd.} \end{cases} \quad (3.44)$$
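The coefficients of equation (3.44) can be checked by direct numerical integration; the sketch below assumes the pattern orientation of Figure 3.7 (+1 on the first half-period), with the arbitrary choice T0 = 1:

```python
import numpy as np

# Check of c_k = 2/(j*pi*k) for odd k and c_k = 0 for even k != 0, for the
# square pattern +1 on [0, T0/2), -1 on [T0/2, T0). T0 = 1 is arbitrary.
T0 = 1.0
t = np.linspace(0.0, T0, 200001)
m = np.where(t < T0 / 2, 1.0, -1.0)     # one period of the pattern

def ck(k):
    """Fourier-series coefficient by direct numerical integration."""
    return np.trapz(m * np.exp(-2j * np.pi * k * t / T0), t) / T0

for k in (1, 3, 5):
    assert abs(ck(k) - 2 / (1j * np.pi * k)) < 1e-4
assert abs(ck(2)) < 1e-4
```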
3.2.2.8. The Fourier series development and the Fourier transform

Here we look at the centered rectangular impulse signal of duration θ, written $x(t) = \Pi_\theta(t)$. This signal is called transitory, or square summable; that is, its total energy is finite:

$$\int_{-\infty}^{+\infty} x^2(t)\, dt < +\infty.$$

We reproduce this signal at regular intervals $T_0 > \theta$ in order to obtain a periodic signal written $x_p(t)$. We then develop in Fourier series the signal $x_p(t)$ of fundamental period T0.

Figure 3.8. Temporal representation of the centered rectangular impulse signal of duration θ

$$x_p(t) = \sum_{k=-\infty}^{+\infty} c_k\exp\!\left(j\frac{2\pi k}{T_0}t\right) = \sum_{k=-\infty}^{+\infty}\frac{\theta}{T_0}\,\frac{\sin\!\left(\frac{\pi k\theta}{T_0}\right)}{\frac{\pi k\theta}{T_0}}\exp\!\left(j\frac{2\pi k}{T_0}t\right).$$

The signal's spectrum is discrete and equals:

$$X_p(f) = \sum_{k=-\infty}^{+\infty} c_k\,\delta\!\left(f - \frac{k}{T_0}\right) = \sum_{k=-\infty}^{+\infty}\frac{\theta}{T_0}\,\frac{\sin\!\left(\frac{\pi k\theta}{T_0}\right)}{\frac{\pi k\theta}{T_0}}\,\delta\!\left(f - \frac{k}{T_0}\right).$$
The spectral density of the signal is termed discrete. It is represented by the squared modulus of the complex coefficients $c_k$ at the frequencies $\frac{k}{T_0}$ linked to the functions $\exp\!\left(j\frac{2\pi k}{T_0}t\right)$. As such it equals:

$$S_{x_p x_p}(f) = \sum_{k=-\infty}^{+\infty}|c_k|^2\,\delta\!\left(f - \frac{k}{T_0}\right) = \sum_{k=-\infty}^{+\infty}\frac{\theta^2}{T_0^2}\left[\frac{\sin\!\left(\frac{\pi k\theta}{T_0}\right)}{\frac{\pi k\theta}{T_0}}\right]^2\delta\!\left(f - \frac{k}{T_0}\right).$$

Now, the energy spectral density of the pattern equals:

$$S_{xx}(f) = |X(f)|^2 = \theta^2\,\mathrm{sinc}^2(f\theta).$$

From this, we deduce that reproducing the gate signal of support θ at the period T0 allows us to express the spectral density of the signal $x_p(t)$ from that of the pattern x(t):

$$S_{x_p x_p}(f) = \frac{1}{T_0^2}\sum_{k=-\infty}^{+\infty}\left|X\!\left(\frac{k}{T_0}\right)\right|^2\delta\!\left(f - \frac{k}{T_0}\right).$$

The spectral density of the pattern, sampled at the frequency $\frac{1}{T_0}$ and weighted by a factor of $\frac{1}{T_0^2}$, thus provides the expression of the spectral density of the periodized signal. We then observe the evolution of the frequential content of the signal when the period T0 tends towards infinity.
So we propose:

$$X_{T_0}\!\left(\frac{2\pi k}{T_0}\right) = T_0\, c_k = \int_{-T_0/2}^{T_0/2} x_p(t)\exp\!\left(-j\frac{2\pi k}{T_0}t\right)dt.$$

$X_{T_0}\!\left(\frac{2\pi k}{T_0}\right)$ is relative to the kth harmonic angular frequency $\omega_k = \frac{2\pi k}{T_0}$. We then have:

$$x_p(t) = \sum_{k=-\infty}^{+\infty} c_k\exp\!\left(j\frac{2\pi k}{T_0}t\right) = \frac{1}{T_0}\sum_{k=-\infty}^{+\infty} X_{T_0}\!\left(\frac{2\pi k}{T_0}\right)\exp\!\left(j\frac{2\pi k}{T_0}t\right). \quad (3.45)$$

The factor $\frac{1}{T_0}$ corresponds, up to a multiplicative constant, to the gap between two successive harmonic angular frequencies. We have:

$$\Delta\omega_k = \omega_{k+1} - \omega_k = \frac{2\pi(k+1)}{T_0} - \frac{2\pi k}{T_0} = \frac{2\pi}{T_0}. \quad (3.46)$$

From there, equation (3.45) becomes:

$$x_p(t) = \frac{1}{2\pi}\sum_{k=-\infty}^{+\infty} X_{T_0}(\omega_k)\exp(j\omega_k t)\,\Delta\omega_k.$$

Let us look at the limit of $x_p(t)$ when the period T0 tends towards infinity. If we assume that $X_{T_0}(\omega_k)$ has a limit, written $X(\omega)$, when T0 tends towards infinity, we get:
$$\lim_{T_0\to+\infty} x_p(t) = \lim_{T_0\to+\infty}\frac{1}{2\pi}\sum_{k=-\infty}^{+\infty} X_{T_0}(\omega_k)\exp(j\omega_k t)\,\Delta\omega_k = \frac{1}{2\pi}\int_{-\infty}^{+\infty} X(\omega)\exp(j\omega t)\, d\omega. \quad (3.47)$$

By making the period T0 tend towards infinity, we are led to study the frequential behavior of the pattern, which is assumed to be transitory; that is, to the representation by the Fourier transform of the finite energy signal.

APPLICATION.– here, we look again at the above example with θ = 0.02 s and T0 = 0.05 s. The signal is reconstructed by considering only a limited number of complex coefficients of the Fourier series (11, 31, then 61). This is shown in Figure 3.9.

Figure 3.9. Signal obtained by adding a given number of decomposition components using the Fourier series (11, 31 and 61 coefficients), illustrating the Gibbs phenomenon

Now we will consider the spectrum evolution of the periodized pattern according to the values of the period T0, equal to 0.05 s, 0.1 s, 0.5 s, 1 s and 5 s.
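The reconstruction of Figure 3.9 can be sketched numerically (this is an illustrative script, not the book's plotting code): the partial Fourier sums approach the gate on its flat parts, but the overshoot near the discontinuities does not vanish as more coefficients are added — the Gibbs phenomenon.

```python
import numpy as np

# Partial-sum reconstruction of the periodized gate (theta = 0.02 s,
# T0 = 0.05 s, as in the application above), showing the Gibbs overshoot.
theta, T0 = 0.02, 0.05
t = np.linspace(-0.1, 0.1, 4001)

def partial_sum(n_coeffs):
    """Sum the Fourier series over n_coeffs coefficients centered on k = 0."""
    K = (n_coeffs - 1) // 2
    xr = np.zeros_like(t, dtype=complex)
    for k in range(-K, K + 1):
        ck = (theta / T0) * np.sinc(k * theta / T0)   # c_k of the gate
        xr += ck * np.exp(2j * np.pi * k * t / T0)
    return xr.real

x61 = partial_sum(61)
# The sum is close to 1 at the center of the gate ...
assert abs(x61[t.size // 2] - 1.0) < 0.05
# ... but the overshoot near the jump persists (about 9% of the jump).
assert x61.max() > 1.04
```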
Figure 3.10. Amplitude spectrum of the pattern
Figure 3.11. Evolution of the temporal representation and of the spectrum of the periodized pattern, according to the period value
COMMENT.– it is important to bring together equations (3.7) and (3.8), which represent the Fourier series development, and equations (3.24) and (3.25), which represent the Fourier transform:

$$c_k = \frac{1}{T_0}\int_{T_0} x(t)\exp\!\left(-j\frac{2\pi k}{T_0}t\right)dt. \quad (3.8)$$

$$X(f) = \int_{-\infty}^{+\infty} x(t)\exp(-j2\pi f t)\, dt. \quad (3.24)$$

$$x(t) = \sum_{k=-\infty}^{+\infty} c_k\exp\!\left(j\frac{2\pi k}{T_0}t\right). \quad (3.7)$$

$$x(t) = \int_{-\infty}^{+\infty} X(f)\exp(j2\pi f t)\, df. \quad (3.25)$$

Equation (3.8) helps us evaluate the degree of resemblance existing between a periodic signal x(t) to be analyzed and $\exp\!\left(j\frac{2\pi k}{T_0}t\right)$. Since the signal is periodic and of non-finite energy, integration occurs over one period T0. Equation (3.7) provides the expression for x(t) in terms of the family of complex exponentials $\exp\!\left(j\frac{2\pi k}{T_0}t\right)$. Only frequencies that are multiples of the fundamental frequency are present in this signal; the spectrum is therefore discrete.

Equation (3.24) allows us to evaluate the degree of resemblance existing between a signal x(t) of finite energy and $\exp(j2\pi f t)$. The frequency f is here unconstrained, because all frequencies can be present in the signal; likewise, the integration domain can be the whole real line, since the signal is of finite energy. Following the example of equation (3.7), equation (3.25) gives the expression of x(t) in terms of the complex exponentials $\exp(j2\pi f t)$. The discrete sum present in (3.7) becomes an integral in equation (3.25), because all frequencies f can be taken into account.
3.2.2.9. Applying the Fourier transform: Shannon's sampling theorem

In this section, we will look at the sampling and reconstruction of signals, starting from an analog signal x(t) which we suppose has a bounded support spectrum; that is, the modulus of the Fourier transform of the signal x(t) is null for every frequency beyond $f_{max}$. We will later return to this last hypothesis.

$$X(f) = 0 \text{ for } |f| > f_{max}. \quad (3.48)$$

Figure 3.12. Bounded spectrum X(f) of the analyzed signal, null beyond fmax

The origin of this spectrum boundary is either a property of the analyzed signal, or is due to a low-pass pre-filtering, as we have seen in the acquisition chain and in the process shown in Figure 1.3.

To obtain the sampled signal $x_s(t)$, the continuous input signal x(t) is multiplied by a pulse train $\sum_{k=-\infty}^{+\infty}\delta(t - kT_s)$ of period $T_s$:

$$x_s(t) = x(t)\times\sum_{k=-\infty}^{+\infty}\delta(t - kT_s). \quad (3.49)$$

The resulting signal $x_s(t)$ is then filtered by an ideal low-pass filter to give the reconstructed signal $x_r(t)$. The goal of what follows is to determine whether every sampling period allows for a reconstruction of the signal after digitization and filtering.
Figure 3.13. Digitization and filtering: the signal x(t) is multiplied by the pulse train $\sum_k\delta(t - kT_s)$, then the sampled signal $x_s(t)$ is filtered by an ideal low-pass filter of cutoff frequency $1/(2T_s)$ and unity gain to give $x_r(t)$

The Fourier transform $X_s(f)$ is the Fourier transform of the product between the input signal x(t) and the impulse train; $X_s(f)$ thus corresponds to the Fourier transform of x(t) convolved with that of $\sum_{k=-\infty}^{+\infty}\delta(t - kT_s)$. It is therefore a reproduction of the spectrum X(f) at every multiple of the frequency $f_s = \frac{1}{T_s}$:

$$X_s(f) = X(f) * \frac{1}{T_s}\sum_{k=-\infty}^{+\infty}\delta\!\left(f - \frac{k}{T_s}\right) = \frac{1}{T_s}\sum_{k=-\infty}^{+\infty} X\!\left(f - \frac{k}{T_s}\right). \quad (3.50)$$

Not all sampling frequencies $f_s$ guarantee the correct reconstruction of the signal (as shown in Figure 3.13) by low-pass filtering: the supports of the spectra X(f) centered at the multiples of the sampling frequency must not be superimposed. Figures 3.14 and 3.15 allow us to visualize the different situations.
Figure 3.14. Spectrum $X_s(f)$ of the sampled signal when $f_s - f_{max} > f_{max}$: the repeated copies of X(f) do not overlap

Figure 3.15. Spectrum overlap when $f_s - f_{max} < f_{max}$: the copies of X(f) centered on multiples of $f_s$ are superimposed

In order to avoid distortions of the spectrum of the sampled signal due to spectrum overlap, we must take:

$$f_s \geq 2 f_{max}. \quad (3.51)$$

In this way, we demonstrate Shannon's sampling theorem, which fixes the choice of the sampling frequency $f_s$.
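The consequence of violating equation (3.51) can be sketched in a few lines (an illustrative script; the frequencies are arbitrary): sampled at $f_s$, a sinusoid at $f_0 > f_s/2$ produces exactly the same samples as its alias at $f_s - f_0$, so the two are indistinguishable after sampling.

```python
import numpy as np

# Aliasing sketch: at fs = 1000 Hz, a 900 Hz cosine (above fs/2 = 500 Hz)
# yields the same samples as a 100 Hz cosine, its alias at fs - 900 Hz.
fs = 1000.0
k = np.arange(64)
x_high = np.cos(2 * np.pi * 900.0 * k / fs)   # violates fs >= 2*fmax
x_low = np.cos(2 * np.pi * 100.0 * k / fs)    # the aliased frequency

assert np.allclose(x_high, x_low, atol=1e-9)
```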
If we do not retain the hypothesis of a bounded spectrum signal, folding can occur no matter which sampling frequency we use, and the perfect reconstruction of the signal may be impossible without additional information about it. In practice, there is no maximum frequency beyond which the spectrum can be considered as null. We get around this problem by using a low-pass pre-filtering of the continuous signal before the sampling stage. The frequency $\frac{f_s}{2}$ is called Shannon's frequency, Nyquist's frequency or the folding frequency.

3.3. The discrete Fourier transform (DFT)

3.3.1. Expressing the Fourier transform of a discrete sequence

Let us look at the signal $x_s(t)$ coming from the sampling of x(t) at the sampling frequency $f_s$:

$$x_s(t) = x(t)\times\sum_{k=-\infty}^{+\infty}\delta(t - kT_s) = \sum_{k=-\infty}^{+\infty} x(kT_s)\,\delta(t - kT_s). \quad (3.52)$$

According to equation (3.24), the Fourier transform of the signal $x_s(t)$ verifies the following relation:

$$X_s(f) = \int_{-\infty}^{+\infty} x_s(t)\exp(-j2\pi f t)\, dt = \int_{-\infty}^{+\infty}\left[x(t)\sum_{k=-\infty}^{+\infty}\delta(t - kT_s)\right]\exp(-j2\pi f t)\, dt = \sum_{k=-\infty}^{+\infty} x(kT_s)\exp(-j2\pi f k T_s) = \sum_{k=-\infty}^{+\infty} x(k)\exp\!\left(-j2\pi k\frac{f}{f_s}\right). \quad (3.53)$$

If we introduce $f_r$, the frequency reduced or normalized in relation to the sampling frequency $f_s$, i.e. $f_r = \frac{f}{f_s}$, we will have:

$$X_s(f_r) = \sum_{k=-\infty}^{+\infty} x(k)\exp(-j2\pi k f_r). \quad (3.54)$$
The Fourier transform of a discrete sequence is one of the most commonly used spectrum analysis tools. It consists in decomposing the discrete-time signal on an orthonormal base of complex exponential functions. $X_s(f_r)$ is generally a complex function of the reduced frequency $f_r$, as we see in the following expression:

$$X_s(f_r) = |X_s(f_r)|\exp(j\varphi(f_r)). \quad (3.55)$$

Among the properties of the Fourier transform, we can first of all consider that:

$$X_s(f_r) = X_s^*(-f_r) \text{ if } x_s \text{ is real.} \quad (3.56)$$

Indeed, using equation (3.54), we have:

$$X_s^*(-f_r) = \left[\sum_{k=-\infty}^{+\infty} x(k)\exp(j2\pi k f_r)\right]^* = \sum_{k=-\infty}^{+\infty} x(k)\exp(-j2\pi k f_r) = X_s(f_r). \quad (3.57)$$

Secondly, we can verify that the modulus of the Fourier transform is an even function in the case of a real signal; by taking the modulus in equation (3.57), we have:

$$|X_s(f_r)| = |X_s(-f_r)| \text{ for every normalized frequency.} \quad (3.58)$$

As for the phase of the Fourier transform of the discrete sequence, it is an odd function:

$$\varphi(f_r) = -\varphi(-f_r), \text{ for every normalized frequency } f_r. \quad (3.59)$$

Thirdly, the Fourier transform of a discrete sequence is a periodic function of period 1 (in normalized frequency). We can easily demonstrate that:

$$X_s(f_r + 1) = \sum_{k=-\infty}^{+\infty} x(k)\exp(-j2\pi k(f_r + 1)) = \sum_{k=-\infty}^{+\infty} x(k)\exp(-j2\pi k f_r)\exp(-j2\pi k) = X_s(f_r).$$
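These three properties can be verified numerically on any real sequence (a sketch; the sequence below is arbitrary):

```python
import numpy as np

# Numerical check of the properties of X_s(f_r) for a real sequence:
# Hermitian symmetry (3.56), even modulus (3.58), and period 1.
x = np.array([1.0, -2.0, 0.5, 3.0, -1.0])   # arbitrary real sequence

def dtft(fr):
    """Fourier transform of the finite sequence at normalized frequency fr."""
    k = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * k * fr))

for fr in (0.1, 0.27, 0.4):
    assert np.isclose(dtft(fr), np.conj(dtft(-fr)))   # X_s(fr) = X_s*(-fr)
    assert np.isclose(abs(dtft(fr)), abs(dtft(-fr)))  # even modulus
    assert np.isclose(dtft(fr), dtft(fr + 1.0))       # period 1
```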
COMMENT 3.10.– the frequency $f_r$ is a continuous variable that can in practice be varied between −1/2 and 1/2. This is because of the periodicity of the Fourier transform of a discrete signal, and because the original continuous signal has a bounded support spectrum whose maximum frequency satisfies Shannon's sampling theorem, seen in section 3.2.2.9. If we prefer to work with the effective frequency f, the observation interval becomes $\left[-\frac{f_s}{2}, \frac{f_s}{2}\right[$. Lastly, if we instead use the normalized angular frequency $\theta = 2\pi f_r$, the observation interval becomes $[-\pi, \pi[$.

3.3.2. Relations between the Laplace transform, the Fourier transform and the z-transform

Let us assume that the discrete sequence we process has been obtained by sampling a causal continuous-time signal at the sampling period $T_s$: the $\{x(k)\}$ are samples of the causal signal x(t). The z-transform is then:

$$X_z(z) = \sum_{k=0}^{+\infty} x(k)\, z^{-k}.$$

The sequence $\{x(k)\}$ can as such be interpreted as an impulse train $x_s(t)$, of amplitude equal to that of the signal x(t), which verifies the following expression:

$$x_s(t) = \sum_k x(k)\,\delta(t - kT_s). \quad (3.60)$$

Its Laplace transform is then expressed as follows:

$$X_e(s) = L[x_s(t)] = \sum_k x(k)\exp(-kT_s s). \quad (3.61)$$

If we compare the transforms $X_z(z)$ and $X_e(s)$ of this causal signal, they reduce to the same expression if $z = \exp(T_s s)$. Generally, if s is a complex frequency, $s = \sigma + j\omega$ with σ and ω real, we have $z = e^{\sigma T_s}\, e^{j\omega T_s}$. We can then show the link existing between the s- and z-planes.
In the complex s-plane, the left half plane, represented by σ < 0, corresponds to the interior of the unit disk in the complex z-plane. As well, the right half plane, represented by σ > 0, corresponds to the z-plane outside the unit disk. The imaginary axis, represented by σ = 0 in the s-plane, corresponds to the unit circle in the z-plane.

Figure 3.16. Link between the complex z-plane and the s-plane

If the sequence $\{x(k)\}$ corresponds to the continuous signal x(t) sampled at the period $T_s$, the linked Fourier transform is obtained, when it exists, from $X_e(s)$ by taking $s = j2\pi f$. We have:

$$F[x_s(t)] = \sum_k x(k)\exp\!\left(-j2\pi k\frac{f}{f_s}\right). \quad (3.62)$$

So we see that the Fourier transform and the z-transform of the causal sequence $\{x(k)\}$ taken at $z = \exp\!\left(j2\pi\frac{f}{f_s}\right)$ are identical.

EXAMPLE 3.3.– let the digital causal signal be $x(k) = \alpha^k$ for k ≥ 0 and x(k) = 0 for k < 0. Its Fourier transform will thus be:

$$X(f_r) = \frac{1}{1 - \alpha\, e^{-j2\pi f_r}} \text{ if } |\alpha| < 1.$$

3.3.3. The inverse Fourier transform

The inverse Fourier transform of X(f) is expressed with:
$$x(k) = \frac{1}{f_s}\int_{f_0}^{f_0 + f_s} X(f)\exp\!\left(j2\pi k\frac{f}{f_s}\right)df, \quad (3.63)$$

where $f_0$ can take any value. Since we generally take $f_0 = -f_s/2$, we get:

$$x(k) = \frac{1}{f_s}\int_{-f_s/2}^{f_s/2} X(f)\exp\!\left(j2\pi k\frac{f}{f_s}\right)df. \quad (3.64)$$

3.3.4. The discrete Fourier transform

Here we will look closely at the situation where we have a finite number of samples of a discrete-time causal signal. In practice, it is not realistic to carry out an infinite sum of terms, and so we evaluate the Fourier transform on a finite number N of samples of the discrete signal. Equation (3.54) reduces to:

$$X(f_r) = \sum_{k=0}^{N-1} x(k)\exp(-j2\pi k f_r). \quad (3.65)$$

For a given normalized frequency $f_r$, the Fourier transform $X(f_r)$ of a sequence x(k) is thus represented as the scalar product of the signal and the orthogonal base elements. However, the normalized frequency is a continuous variable, which presents some problems when we want to implement the transformation given in equation (3.65). For this reason, we must look for another transformation without these drawbacks. This is the discrete Fourier transform (DFT) on N points, computed from N samples of a discrete-time signal, and represented as follows:

$$X(n) = \sum_{k=0}^{N-1} x(k)\exp\!\left(-j\frac{2\pi nk}{N}\right). \quad (3.66)$$

It coincides with the Fourier transform of the discrete sequence at the following frequencies:

$$f_r = \frac{n}{N}, \text{ with } n \text{ varying from } -\frac{N}{2} \text{ to } \frac{N}{2} - 1, \quad (3.67)$$
and is only evaluated at these frequencies. The discrete Fourier transform is a function of the indices n and N. To simplify our presentation, we write X(n) to designate the value of the discrete Fourier transform at the normalized frequency $\frac{n}{N}$. Because of the discretization of the frequency, the inverse discrete Fourier transform is obtained as follows:

$$x(k) = \frac{1}{N}\sum_{n=-N/2}^{N/2-1} X(n)\exp\!\left(j\frac{2\pi nk}{N}\right). \quad (3.68)$$

COMMENT.– it is important to be vigilant when carrying out a frequential analysis based on the Fourier transform of a finite number of samples of a discrete sequence. Here we look at the Fourier transform of the following signal:

$$x(k) = \cos\!\left(2\pi\frac{f_0}{f_s}k\right) \text{ for } k = 0, \ldots, N-1.$$

The Fourier transform of this discrete signal is then expressed as:

$$X(f) = \sum_{k=0}^{N-1}\cos\!\left(2\pi\frac{f_0}{f_s}k\right)\exp\!\left(-j2\pi\frac{f}{f_s}k\right) = \frac{1}{2}\sum_{k=0}^{N-1}\left[\exp\!\left(-j2\pi\frac{f - f_0}{f_s}k\right) + \exp\!\left(-j2\pi\frac{f + f_0}{f_s}k\right)\right] = \frac{1}{2}\left[\frac{1 - \exp\!\left(-j2\pi N\frac{f - f_0}{f_s}\right)}{1 - \exp\!\left(-j2\pi\frac{f - f_0}{f_s}\right)} + \frac{1 - \exp\!\left(-j2\pi N\frac{f + f_0}{f_s}\right)}{1 - \exp\!\left(-j2\pi\frac{f + f_0}{f_s}\right)}\right].$$
The discrete Fourier transform thus satisfies the following relation:

$$X(n) = \frac{1}{2}\left[\frac{1 - \exp\!\left(-j2\pi N\!\left(\frac{n}{N} - \frac{f_0}{f_s}\right)\right)}{1 - \exp\!\left(-j2\pi\!\left(\frac{n}{N} - \frac{f_0}{f_s}\right)\right)} + \frac{1 - \exp\!\left(-j2\pi N\!\left(\frac{n}{N} + \frac{f_0}{f_s}\right)\right)}{1 - \exp\!\left(-j2\pi\!\left(\frac{n}{N} + \frac{f_0}{f_s}\right)\right)}\right].$$

According to the value of the frequency $f_0$, two situations can arise. If $f_0$ is a multiple of $\frac{f_s}{N}$, i.e. $f_0 = l\frac{f_s}{N}$, we get:

$$X(n) = \frac{1}{2}\left[\frac{1 - \exp(-j2\pi(n - l))}{1 - \exp\!\left(-j2\pi\frac{n - l}{N}\right)} + \frac{1 - \exp(-j2\pi(n + l))}{1 - \exp\!\left(-j2\pi\frac{n + l}{N}\right)}\right].$$

In this case, if $n \neq l$ and $n \neq -l$, then X(n) = 0. This means that the observed components of the amplitude spectrum will be null everywhere, except at the frequencies $\pm f_0 = \pm l\frac{f_s}{N}$.

Now, if $f_0$ is not a multiple of $\frac{f_s}{N}$, the amplitude spectrum does not present this specificity and brings out the influence of the short-term spectral analysis of the discrete-time signal; that is, of the influence of the window. We will return to this issue in section 5.2.1.

We illustrate this phenomenon by analyzing the following signal, with $f_s = 8{,}000$ Hz and N = 64:

$$x(k) = \cos\!\left(2\pi\frac{f_0}{f_s}k\right) + 2\cos\!\left(2\pi\frac{f_1}{f_s}k\right).$$

First case: $f_0 = 1{,}000$ Hz and $f_1 = 2{,}375$ Hz. These frequencies are multiples of the frequential resolution $f_s/N = 125$ Hz. The factor 2 existing between the amplitudes of the two sinusoidal components is recovered.
Second case: $f_0 = 440$ Hz and $f_1 = 3{,}000$ Hz. Here, only $f_1$ is a multiple of $f_s/N$. It is not possible to recover the factor 2 between the amplitudes of the two sinusoidal components using the discrete Fourier transform.

Third case: $f_0 = 440$ Hz and $f_1 = 500$ Hz. The gap between the two frequencies is here too weak to be able to distinguish the contributions of the two sinusoidal components.

Figure 3.17. First case: amplitude spectrum of the signal (continuous line) and the modulus of the discrete Fourier transform (stars)

Figure 3.18. Second case: amplitude spectrum of the signal (continuous line) and the modulus of the discrete Fourier transform (stars)
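The first two cases above can be reproduced in a few lines (an illustrative sketch using NumPy's FFT rather than the book's code): a component whose frequency is a multiple of $f_s/N$ produces two clean spectral lines, while an off-bin component leaks over the whole spectrum.

```python
import numpy as np

# fs = 8000 Hz, N = 64, so the DFT resolution is fs/N = 125 Hz.
fs, N = 8000.0, 64
k = np.arange(N)

def spectrum(f0, f1):
    """Modulus of the N-point DFT of cos(2 pi f0 k/fs) + 2 cos(2 pi f1 k/fs)."""
    x = np.cos(2 * np.pi * f0 * k / fs) + 2 * np.cos(2 * np.pi * f1 * k / fs)
    return np.abs(np.fft.fft(x))

# First case: 1000 Hz (bin 8) and 2375 Hz (bin 19) are both multiples of
# 125 Hz -- two clean pairs of lines, and the amplitude factor 2 is recovered.
X1 = spectrum(1000.0, 2375.0)
assert np.isclose(X1[8], N / 2) and np.isclose(X1[19], N)
assert np.isclose(X1[19] / X1[8], 2.0)

# Second case: 440 Hz is not a multiple of 125 Hz -- its energy leaks over
# many bins, so no clean 440 Hz line (and no clean factor 2) appears.
X2 = spectrum(440.0, 3000.0)
assert np.count_nonzero(X2 > 1.0) > 10
```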
Figure 3.19. Third case: amplitude spectrum of the signal (continuous line) and the modulus of the discrete Fourier transform (stars)

3.4. The fast Fourier transform (FFT)

In 1965, Cooley and Tukey proposed a fast algorithm for calculating the discrete Fourier transform. With real signals, the direct calculation of equation (3.66) requires $2N^2$ multiplications and $2N(N-1)$ additions. With complex signals, the computational cost reaches $4N^2$ multiplications and $2N(2N-1)$ additions. The fast method consists of operating by dichotomy, which reduces, as we will see later, the computational complexity to the order of $N\log_2 N$.

From now on, to simplify the presentation, we use the example of a fast Fourier transform on N = 8 points:

$$X(n) = \sum_{k=0}^{7} x(k)\exp\!\left(-j\frac{2\pi nk}{N}\right) \text{ for } n \text{ varying from } 0 \text{ to } 7. \quad (3.69)$$

We introduce the coefficient $W_N$, called the "twiddle factor", which corresponds to the complex root of unity represented by:

$$W_N = \exp\!\left(-j\frac{2\pi}{N}\right). \quad (3.70)$$

We can then rewrite equation (3.69) in the form:

$$X(n) = \sum_{k=0}^{N-1} x(k)\, W_N^{nk} \text{ for } n \text{ varying from } 0 \text{ to } N-1. \quad (3.71)$$
For N = 8, equation (3.71) leads to the following matricial equation:

$$\begin{bmatrix} X(0)\\X(1)\\X(2)\\X(3)\\X(4)\\X(5)\\X(6)\\X(7)\end{bmatrix} =
\begin{bmatrix}
W_8^{0}&W_8^{0}&W_8^{0}&W_8^{0}&W_8^{0}&W_8^{0}&W_8^{0}&W_8^{0}\\
W_8^{0}&W_8^{1}&W_8^{2}&W_8^{3}&W_8^{4}&W_8^{5}&W_8^{6}&W_8^{7}\\
W_8^{0}&W_8^{2}&W_8^{4}&W_8^{6}&W_8^{8}&W_8^{10}&W_8^{12}&W_8^{14}\\
W_8^{0}&W_8^{3}&W_8^{6}&W_8^{9}&W_8^{12}&W_8^{15}&W_8^{18}&W_8^{21}\\
W_8^{0}&W_8^{4}&W_8^{8}&W_8^{12}&W_8^{16}&W_8^{20}&W_8^{24}&W_8^{28}\\
W_8^{0}&W_8^{5}&W_8^{10}&W_8^{15}&W_8^{20}&W_8^{25}&W_8^{30}&W_8^{35}\\
W_8^{0}&W_8^{6}&W_8^{12}&W_8^{18}&W_8^{24}&W_8^{30}&W_8^{36}&W_8^{42}\\
W_8^{0}&W_8^{7}&W_8^{14}&W_8^{21}&W_8^{28}&W_8^{35}&W_8^{42}&W_8^{49}
\end{bmatrix}
\begin{bmatrix} x(0)\\x(1)\\x(2)\\x(3)\\x(4)\\x(5)\\x(6)\\x(7)\end{bmatrix}. \quad (3.72)$$

The complex roots of unity have specific properties that can be exploited to simplify equation (3.72). Actually, the "twiddle factors" satisfy $W_N^{nN} = 1$, $W_N^{N/2} = -1$ and $W_N^{n+N} = W_N^{n}$. This shows the redundancy of the $W_N$ coefficients, and it is the reduction of this redundancy which yields a calculation algorithm for the Fourier transform of reduced computational complexity. Reducing the exponents modulo 8, equation (3.72) becomes:

$$\begin{bmatrix} X(0)\\X(1)\\X(2)\\X(3)\\X(4)\\X(5)\\X(6)\\X(7)\end{bmatrix} =
\begin{bmatrix}
W_8^{0}&W_8^{0}&W_8^{0}&W_8^{0}&W_8^{0}&W_8^{0}&W_8^{0}&W_8^{0}\\
W_8^{0}&W_8^{1}&W_8^{2}&W_8^{3}&W_8^{4}&W_8^{5}&W_8^{6}&W_8^{7}\\
W_8^{0}&W_8^{2}&W_8^{4}&W_8^{6}&W_8^{0}&W_8^{2}&W_8^{4}&W_8^{6}\\
W_8^{0}&W_8^{3}&W_8^{6}&W_8^{1}&W_8^{4}&W_8^{7}&W_8^{2}&W_8^{5}\\
W_8^{0}&W_8^{4}&W_8^{0}&W_8^{4}&W_8^{0}&W_8^{4}&W_8^{0}&W_8^{4}\\
W_8^{0}&W_8^{5}&W_8^{2}&W_8^{7}&W_8^{4}&W_8^{1}&W_8^{6}&W_8^{3}\\
W_8^{0}&W_8^{6}&W_8^{4}&W_8^{2}&W_8^{0}&W_8^{6}&W_8^{4}&W_8^{2}\\
W_8^{0}&W_8^{7}&W_8^{6}&W_8^{5}&W_8^{4}&W_8^{3}&W_8^{2}&W_8^{1}
\end{bmatrix}
\begin{bmatrix} x(0)\\x(1)\\x(2)\\x(3)\\x(4)\\x(5)\\x(6)\\x(7)\end{bmatrix}. \quad (3.73)$$

Given equation (3.73), we now try to reduce the computational complexity of the discrete Fourier transform. For that, we assume that N is even, i.e. N = 2P.
We introduce the auxiliary sequences $\{u(k)\}_{k=0,\ldots,P-1}$ and $\{v(k)\}_{k=0,\ldots,P-1}$, which correspond respectively to the even- and odd-indexed terms of $\{x(k)\}_{k=0,\ldots,2P-1}$:

$$u(k) = x(2k), \quad k = 0, \ldots, P-1 \qquad (3.74)$$

and

$$v(k) = x(2k+1), \quad k = 0, \ldots, P-1. \qquad (3.75)$$
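This even/odd split underlies the relations derived next in equations (3.76) and (3.77); anticipating them slightly, they can be verified numerically with a short sketch (the signal values are arbitrary):

```python
import numpy as np

N = 8; P = N // 2
x = np.random.default_rng(0).standard_normal(N)
u, v = x[0::2], x[1::2]                # u(k) = x(2k), v(k) = x(2k+1)
U, V = np.fft.fft(u), np.fft.fft(v)    # P-point DFTs of the two subsequences
n = np.arange(P)
Wn = np.exp(-2j * np.pi * n / N)       # twiddle factors W_N^n for n = 0..P-1
X = np.fft.fft(x)
# Butterfly relations: X(n) = U(n) + W_N^n V(n), X(n+P) = U(n) - W_N^n V(n)
assert np.allclose(X[:P], U + Wn * V)
assert np.allclose(X[P:], U - Wn * V)
```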
We obtain:

$$X(n) = \sum_{k=0}^{N-1} x(k)\,W_N^{kn} = \sum_{i=0}^{P-1} u(i)\,W_N^{2in} + \sum_{i=0}^{P-1} v(i)\,W_N^{(2i+1)n} = \sum_{i=0}^{P-1} u(i)\,W_P^{in} + W_N^{n}\sum_{i=0}^{P-1} v(i)\,W_P^{in} = U(n) + W_N^{n}V(n), \qquad (3.76)$$

since $W_N^2 = W_P$. The two auxiliary series U(n) and V(n) which make up X(n) lead to calculations carried out on P points instead of N = 2P. We will develop this observation further later. In addition, using $W_N^{P} = -1$ and $W_P^{iP} = 1$:

$$X(n+P) = \sum_{k=0}^{N-1} x(k)\,W_N^{k(n+P)} = \sum_{i=0}^{P-1} u(i)\,W_P^{in} + W_N^{n+P}\sum_{i=0}^{P-1} v(i)\,W_P^{in} = U(n) - W_N^{n}V(n). \qquad (3.77)$$

The FFT thus only requires calculating U(n) and V(n) for n varying from 0 to P – 1, U(n) and V(n) being the discrete Fourier transforms on P points of the even- and odd-indexed subsequences. From these we easily deduce X(n) for n varying from 0 to N – 1. The same procedure can be applied again to calculate U(n) and V(n), on the condition that P is even. In the situation where N = 8, we are thus led to the following calculation scheme, which introduces "butterfly" patterns:
Figure 3.20. First step of implementation of the fast Fourier transform for N = 8

At this stage of the calculation, it remains to express U(n) and V(n), the discrete Fourier transforms on P points of the even- and odd-indexed subsequences.

Figure 3.21. Implementation of the fast Fourier transform for N = 8 (input sequence in bit-reversed order: x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7))

The algorithm is called a temporal interleaving (decimation-in-time) algorithm because the input sequence does not appear in chronological order: the indices have undergone a binary inversion. We can verify that the number of "stages" of the transformation is equal to log₂N. The computational complexity can thus be expressed as N log₂N.

In practice, it can happen that the number N of available samples is not a power of 2. To implement the FFT, we can then complete the sequence of N samples with null values so as to obtain a power of 2 as the number of samples to be analyzed. This procedure is called zero-padding.
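The bit-reversed input ordering and the zero-padding procedure can both be illustrated with a short sketch (`bit_reverse_order` is a helper introduced here for illustration, not part of the text):

```python
import numpy as np

def bit_reverse_order(N):
    """Input ordering of the decimation-in-time FFT: index k is mapped to
    the integer obtained by reversing its log2(N)-bit binary representation."""
    bits = int(np.log2(N))
    return [int(format(k, f'0{bits}b')[::-1], 2) for k in range(N)]

# For N = 8 this reproduces the bit-reversed input ordering
assert bit_reverse_order(8) == [0, 4, 2, 6, 1, 5, 3, 7]

# Zero-padding: complete a sequence of N samples with null values up to
# the next power of two before applying the FFT
x = np.arange(6, dtype=float)           # N = 6 samples
N2 = 1 << (len(x) - 1).bit_length()     # next power of two (here 8)
X = np.fft.fft(x, n=N2)                 # np.fft.fft zero-pads to length n
assert np.allclose(X, np.fft.fft(np.concatenate([x, np.zeros(N2 - len(x))])))
```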
3.5. The fast Fourier transform for a time/frequency/energy representation of a non-stationary signal

Here we can use the fast Fourier transform to analyze a quasi-stationary signal. Several techniques based on the Fourier transform of the autocorrelation function of the signal help us obtain a signal characterization, especially the so-called periodogram and correlogram methods. When the signal is no longer stationary, we can analyze the evolution of the frequential content of a signal from its spectrogram (see Figure 3.22).

Figure 3.22. Recording of a voiced speech signal "Waziwaza" (a) and corresponding spectrogram (b)
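As an illustration of this kind of analysis, a minimal short-time Fourier transform can be sketched as follows; it is a bare sketch without windowing or overlap, and the sampling frequency and tone frequencies are illustrative choices, not values from the text:

```python
import numpy as np

def stft_mag(x, frame_len, hop):
    """Minimal short-time Fourier transform magnitude: the signal is cut into
    successive frames and an FFT is taken on each one (no windowing)."""
    frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))   # one spectrum per time frame

fs = 128
t = np.arange(fs) / fs
# A 10 Hz tone followed by a 30 Hz tone: a non-stationary signal
x = np.concatenate([np.sin(2*np.pi*10*t), np.sin(2*np.pi*30*t)])
S = stft_mag(x, frame_len=128, hop=128)          # 1 Hz bin resolution
# The dominant frequency moves from 10 Hz to 30 Hz between the two frames
assert S[0].argmax() == 10 and S[1].argmax() == 30
```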
The spectrogram is obtained by calculating the power spectral density over successive segments of the signal. This tool thus gives a three-dimensional representation of the voice signal: time, frequency and energy. This last quantity is represented by a degree of blackening according to the amplitude values: the higher the amplitude, the higher the blackening intensity. In Figure 3.22, the formants, which are resonance frequencies of the vocal tract, appear on the spectrogram as frequency ranges whose energy is especially high, in the form of bands that are approximately parallel to the time axis.

3.6. Frequential characterization of a continuous-time system

3.6.1. First and second order filters

3.6.1.1. 1st order system

Let us look at a physical system governed by a linear differential equation of the 1st order, as is usually the case with RC and LR type filters:

$$\tau\frac{dy(t)}{dt} + y(t) = K\,x(t) \qquad (3.78)$$

Figure 3.23. RC filter

Figure 3.24. LR filter
The transmittance of the system, i.e. $H(s) = \frac{Y(s)}{X(s)}$, where Y(s) designates the Laplace transform¹ of y(t), is expressed by:

$$H(s) = \frac{K}{1 + \tau s}. \qquad (3.79)$$

Taking s = jω, where ω = 2πf designates the angular frequency, we obtain:

$$H(j\omega) = \frac{K}{1 + j\omega\tau} = \frac{K}{\sqrt{1 + \omega^2\tau^2}}\exp\left(-j\arctan(\omega\tau)\right) \qquad (3.80)$$

where K is called the static gain. For RC and LR filters, the time constant is worth, respectively, τ = RC and τ = L/R, with K = 1.

We characterize the system by its impulse response or its indicial (step) response. When x(t) = δ(t), X(s) = 1. From there:

$$Y(s) = \frac{K}{1 + \tau s} = \frac{K/\tau}{s + 1/\tau} \qquad (3.81)$$

and, referring to a Laplace transform table, we deduce the expression of the output as a function of time:

$$y(t) = \frac{K}{\tau}\exp\left(-\frac{t}{\tau}\right). \qquad (3.82)$$

1 See equation (2.1) in Chapter 2.
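Equations (3.79)–(3.82) can be checked numerically; a sketch with illustrative component values (the choices of K and τ are assumptions, not from the text):

```python
import numpy as np

# First-order low-pass (equation (3.79)) with K = 1 and tau = R*C
K, tau = 1.0, 2.0e-3          # e.g. R = 2 kOhm, C = 1 uF (illustrative values)

# Impulse response y(t) = (K/tau) * exp(-t/tau)  (equation (3.82))
t = np.linspace(0.0, 10 * tau, 1000)
impulse = (K / tau) * np.exp(-t / tau)
assert np.isclose(impulse[0], K / tau)       # initial value K/tau

# Magnitude of H(j*omega) = K / (1 + j*omega*tau)  (equation (3.80))
omega_c = 1.0 / tau                          # cut-off angular frequency
H = K / (1.0 + 1j * omega_c * tau)
assert np.isclose(abs(H), K / np.sqrt(2.0))  # -3 dB point at omega = 1/tau
```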
We can proceed in the same way to obtain the indicial response, that is, the response obtained when x(t) = u(t), whose Laplace transform is X(s) = 1/s. We then have:

$$Y(s) = K\left[\frac{1}{s} - \frac{1}{s + 1/\tau}\right] \qquad (3.83)$$

and

$$y(t) = K\left[1 - \exp\left(-\frac{t}{\tau}\right)\right]. \qquad (3.84)$$

This indicial response is characterized by a transient regime followed by a steady-state regime.

3.6.1.2. 2nd order system

Here we look at a physical system governed by a linear differential equation of the 2nd order.

Figure 3.25. 2nd order filters

The transmittance of the system is expressed by:

$$H(s) = \frac{K}{1 + \dfrac{2\xi}{\omega_n}s + \dfrac{s^2}{\omega_n^2}}. \qquad (3.85)$$
According to the values of ξ, the transmittance poles are real (ξ ≥ 1) or complex conjugates (ξ < 1), and we say that the system is overdamped or underdamped, respectively. ξ is termed the damping factor of the second order transfer function and ω_n is the natural angular frequency. We can also write equation (3.85) by using the quality factor $Q = \frac{1}{2\xi}$:

$$H(s) = K\frac{\omega_n^2}{s^2 + \dfrac{\omega_n}{Q}s + \omega_n^2}. \qquad (3.86)$$

The Bode diagram, expressed in phase and amplitude, is easily deduced from the transfer function by taking the modulus and the phase of $H(s)\big|_{s = j\omega}$.

EXAMPLE 3.4.– we consider a gain in the passband equal to 26 dB (i.e. K = 20), a cut-off angular frequency ω_n = 50 rad/s and a quality factor Q = 10. We then get:

$$H(s) = \frac{50{,}000}{s^2 + 5s + 2{,}500}. \qquad (3.87)$$

The corresponding Bode amplitude and phase diagrams (with a logarithmic scale for the abscissas) are shown in Figure 3.26.

Figure 3.26. Diagram of a 2nd order filter
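The transfer function of Example 3.4 can be evaluated numerically; a short sketch (the static gain K = 20 is inferred here from the 26 dB passband gain, since 20 log₁₀ 20 ≈ 26 dB):

```python
import numpy as np

# Second-order transfer function of Example 3.4:
# H(s) = K*wn^2 / (s^2 + (wn/Q)*s + wn^2), i.e. 50,000 / (s^2 + 5s + 2,500)
K, wn, Q = 20.0, 50.0, 10.0

def H(w):
    s = 1j * w
    return K * wn**2 / (s**2 + (wn / Q) * s + wn**2)

# Static gain: 20*log10(20) ~ 26 dB, the passband gain of the example
assert np.isclose(20 * np.log10(abs(H(0.0))), 26.02, atol=0.01)
# At w = wn the denominator reduces to j*wn^2/Q, so |H(j*wn)| = K*Q
assert np.isclose(abs(H(wn)), K * Q)
```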
We also see that the poles are situated in the left half of the complex Laplace plane; the system is therefore stable.

Figure 3.27. Position of the poles of a 2nd order continuous-time filter

3.7. Frequential characterization of discrete-time systems

3.7.1. Amplitude and phase frequential diagrams

The frequential characterization of a filter is obtained from the Fourier transform of its impulse response. According to section 3.3.2, the frequency response of the system can be obtained by calculating the transfer function H(z) of the system and then placing ourselves on the unit circle, i.e. by taking

$$z = \exp\left(j2\pi\frac{f}{f_s}\right) = \exp\left(j2\pi f_r\right)$$

in the expression of the transfer function, on the condition that |z| = 1 is in the convergence domain of H(z).
Thus, we write:

$$H\left(\exp(j2\pi f_r)\right) = \left|H\left(\exp(j2\pi f_r)\right)\right|\exp\left(j\psi(2\pi f_r)\right).$$

From here, we can trace the amplitude response, represented on the logarithmic scale by $20\times\log_{10}\left|H(z)\right|_{z=\exp(j2\pi f_r)}$, as well as the phase response $\psi(2\pi f_r)$, both as functions of the normalized frequency.

3.7.2. Application

Let us consider the system characterized by its impulse response:

$$h(k) = \begin{cases} \left(\dfrac{1}{3}\right)^{k+1} & \text{for } 0 \le k \le N \\ 0 & \text{otherwise.} \end{cases}$$

We first take N equal to 1. If the input x(k) is the impulse δ(k), we have y(k) = h(k) * δ(k) = h(k), that is: y(0) = 1/3, y(1) = 1/9 and y(k) = 0 otherwise. The transfer function of the system equals:

$$H(z) = \frac{1}{3} + \frac{1}{9}z^{-1}.$$

The system has a finite impulse response; it is stable since $\sum_k |h(k)| = \frac{1}{3} + \frac{1}{9} = \frac{4}{9} < +\infty$.

We now take N equal to 2.
If the input x(k) is the impulse δ(k), we have y(k) = h(k) * δ(k) = h(k), that is: y(0) = 1/3, y(1) = 1/9, y(2) = 1/27 and y(k) = 0 otherwise. The transfer function of the system equals:

$$H(z) = \frac{1}{3} + \frac{1}{9}z^{-1} + \frac{1}{27}z^{-2}.$$

The system has a finite impulse response; it is stable since $\sum_k |h(k)| = \frac{1}{3} + \frac{1}{9} + \frac{1}{27} = \frac{13}{27} < +\infty$.

Now let N tend towards infinity. We then have $h(k) = \left(\frac{1}{3}\right)^{k+1}u(k)$. The filter has an infinite impulse response. It is stable because:

$$\sum_k |h(k)| = \sum_{k=0}^{+\infty}\left(\frac{1}{3}\right)^{k+1} = \frac{1}{3}\times\frac{1}{1 - \frac{1}{3}} = \frac{1}{2} < +\infty.$$

We can also justify the stability of this system by analyzing the position of the pole of its transfer function $H(z) = \dfrac{1/3}{1 - \frac{1}{3}z^{-1}}$. This pole, located at $z = \frac{1}{3}$, is situated well inside the unit circle in the z-plane.
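The stability checks of this application can be reproduced numerically; a minimal sketch:

```python
import numpy as np

# FIR case (N = 1): H(z) = 1/3 + (1/9) z^-1
h_fir = np.array([1/3, 1/9])
assert np.isclose(h_fir.sum(), 4/9)   # sum of |h(k)| = 4/9 < infinity: stable

# IIR case (N -> infinity): h(k) = (1/3)^(k+1) u(k),
# H(z) = (1/3) / (1 - (1/3) z^-1), single pole at z = 1/3
pole = 1/3
assert abs(pole) < 1                  # pole inside the unit circle: stable
k = np.arange(200)                    # partial sum of the geometric series
assert np.isclose(((1/3)**(k + 1)).sum(), 0.5)

# Frequency response on the unit circle: H(exp(j*2*pi*fr))
fr = 0.1                              # normalized frequency f/fs
z = np.exp(2j * np.pi * fr)
H = (1/3) / (1 - (1/3) / z)
assert abs(H) < 0.5                   # below the DC gain |H(1)| = 1/2
```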
Chapter 4

Continuous-Time and Analog Filters

4.1. Introduction

The synthesis of digital filters has benefited from research done on continuous-time filters. So, to make this text comprehensive, we start this chapter with a brief summary of continuous-time filter synthesis, which is carried out using analog components such as resistors, inductors, capacitors and even active components.

In this chapter, the main methods used to design continuous-time filters are introduced and the different families of filters that have been developed are presented. We first discuss Butterworth, Cauer and Chebyshev filters (the latter of types I and II). The frequency responses of type I (resp. type II) Chebyshev low-pass filters exhibit ripple in the passband (resp. in the stopband). We also discuss Bessel-Thomson and Papoulis filters. The main points covered in this chapter will be taken up again in Chapter 6, which presents infinite impulse response digital filters.

4.2. Different types of filters and filter specifications

Let us consider the example of an ideal low-pass filter of normalized gain, whose frequency is normalized with respect to the cut-off frequency (see Figure 4.1).

Chapter written by Daniel BASTARD and Eric GRIVEL.
With an ideal filter, transmission is total in the passband and null in the stopband. We write x as the frequency normalized with respect to the cut-off frequency:

$$x = \frac{f}{f_c}. \qquad (4.1)$$

NOTE.– x is also the angular frequency normalized with respect to the cut-off angular frequency: $x = \frac{\omega}{\omega_c} = \frac{f}{f_c}$.

Figure 4.1. Ideal low-pass filter

Figure 4.2. Low-pass filter corresponding to Figure 4.1, normalized in frequency and amplitude

In general, we deduce normalized high-pass, band-pass and band-stop filters from normalized low-pass filters by applying frequency variable change formulae (see Figures 4.3, 4.4 and 4.5).
Obtaining the transfer function H(j2πf) of the filter from the transfer function H(jx) of the normalized low-pass filter follows the frequency transformations summarized in Table 4.1.

Obtaining a filter — Transformation carried out from the transfer function of the normalized low-pass filter:
– High-pass with cut-off frequency $f_c$: replace jx with $\dfrac{f_c}{jf}$.
– Band-pass characterized by low and high cut-off frequencies $f_{c1}$ and $f_{c2}$: replace jx with $j\dfrac{f^2 - f_{c1}f_{c2}}{f\left(f_{c2} - f_{c1}\right)}$.
– Band-stop characterized by low and high cut-off frequencies $f_{c1}$ and $f_{c2}$: replace jx with $\dfrac{f\left(f_{c2} - f_{c1}\right)}{j\left(f^2 - f_{c1}f_{c2}\right)}$.

Table 4.1. Frequency transformations giving the transfer function H(j2πf) of a filter from the normalized transfer function of a low-pass filter

Figure 4.3. Ideal high-pass filter
Figure 4.4. Ideal band-stop filter

Figure 4.5. Ideal band-pass filter

NOTE 4.1.– in practice, according to the Paley-Wiener theorem, it is impossible to obtain ideal filters that completely reject the frequential components of a signal over a finite band of frequencies. For this reason, we define a specification as a template on which we inscribe the filtering curve of the real filter. From here on, we no longer use the term stopband, but rather attenuated band. Moreover, unlike an ideal specification, a real filter contains a transition band (see Figure 4.6).
Figure 4.6. Low-pass filter specification

The response curve can then be approximated in several ways. In this chapter, we present different approximation approaches that lead to filters whose squared transfer function modulus is a rational fraction. Since the modulus |H(j2πf)| and the phase φ(f) of the transfer function are, respectively, even and odd functions of the frequency f, the squared modulus of the transfer function is expressed as:

$$\left|H(j\omega)\right|^2 = \left|H(j2\pi f)\right|^2 = H^2(0)\,\frac{\displaystyle\sum_{k=0}^{m}\beta_k\,\omega^{2k}}{\displaystyle\sum_{k=0}^{n}\alpha_k\,\omega^{2k}}. \qquad (4.2)$$

If we introduce x, the frequency normalized with respect to the cut-off frequency, we have:

$$\left|H(jx)\right|^2 = H^2(0)\,\frac{\displaystyle\sum_{k=0}^{m}b_k\,x^{2k}}{\displaystyle\sum_{k=0}^{n}a_k\,x^{2k}}, \qquad (4.3)$$

where b₀ = 1 and a₀ = 1.
If the degree of the denominator is greater than that of the numerator, we know that $\lim_{x\to\infty}\left|H(jx)\right|^2 = 0$ and that the filter is of the low-pass type. We therefore assume:

$$m < n. \qquad (4.4)$$

We then introduce the concept of attenuation A(jx) of a filter, satisfying the relation:

$$\left|A(jx)\right|^2 = \frac{1}{\left|H(jx)\right|^2}. \qquad (4.5)$$

Whatever the filter considered, we can show that the attenuation appears in the following form:

$$\left|A(jx)\right|^2 = 1 + \varepsilon^2\phi^2(jx). \qquad (4.6)$$

According to the nature of φ(jx), which is a polynomial or a rational fraction, we then speak of polynomial or elliptic filters.

4.3. Butterworth filters and the maximally flat approximation

4.3.1. Maximally flat functions (MFM)

Here we look at a low-pass normalized transfer function whose squared amplitude is given in equation (4.3). We try to find a filter with the flattest possible frequency response in the passband when x is close to 0. To come as close as possible to the specification, the synthesized filter must have an amplitude diagram as flat as possible at x = 0. For that, we find the conditions that cancel the successive derivatives of the function |H(jx)|². Since the squared amplitude function |H(jx)|² is analytic at x = 0, it can be developed as a Maclaurin series as follows:

$$\left|H(jx)\right|^2 = H(0) + H'(0)\,x + \frac{H''(0)}{2!}x^2 + \frac{H'''(0)}{3!}x^3 + \cdots \qquad (4.7)$$

where H(0), H'(0), … here denote the squared-amplitude function and its derivatives evaluated at x = 0.
This development introduces the successive derivatives of |H(jx)|² at x = 0:

$$H'(0) = \frac{d}{dx}\left|H(jx)\right|^2\Big|_{x=0}, \quad H''(0) = \frac{d^2}{dx^2}\left|H(jx)\right|^2\Big|_{x=0}, \text{ etc.} \qquad (4.8)$$

Moreover, |H(jx)|² being a rational fraction (see equation (4.3)), we can approximate it by a polynomial by carrying out the division following the increasing powers of the polynomials that constitute its numerator and denominator. In this way we obtain the following development:

$$\left|H(jx)\right|^2 = H^2(0)\left\{1 + (b_1 - a_1)x^2 + \left[(b_2 - a_2) - a_1(b_1 - a_1)\right]x^4 + \cdots\right\}. \qquad (4.9)$$

Since the Maclaurin development is unique in the convergence region, we can identify the coefficients of equations (4.7) and (4.9). The flatness of the filter response is assured if the successive derivatives of the function |H(jx)|² are null. The odd derivatives, linked to the odd powers of the development, are all equal to 0, since the square of the amplitude is even and only contains even powers of x. For a maximally flat function of order n (written MFMn), we get:

$$\begin{cases} b_i = a_i & \text{for } i = 1, 2, \ldots, m \\ a_i = 0 & \text{for } i = m+1, \ldots, n-1. \end{cases} \qquad (4.10)$$

Consequently, equation (4.3) can be rewritten so that the denominator is equal to the numerator plus the term $a_n x^{2n}$. The squared amplitude function then takes the following form:

$$\left|H(jx)\right|^2 = H^2(0)\,\frac{1 + b_1 x^2 + b_2 x^4 + \cdots + b_m x^{2m}}{1 + b_1 x^2 + b_2 x^4 + \cdots + b_m x^{2m} + a_n x^{2n}}. \qquad (4.11)$$

Equation (4.11) satisfies the hypothesis of null derivatives at x = 0 and thus the flatness of the response curve. Also, |H(jx)|² → 0 when x → ∞ since n > m. However, we have no guarantee that the decrease will be monotone: because of the numerator, which is a function of x, equation (4.11) can have zeros in the transition band of the filter, in which case |H(jx)|² is no longer a monotone function. In order to control the filter's behavior in this frequency band, we must ensure that the numerator does not cancel
itself; that is, we must ensure that |H(jx)|² has no zeros. So we impose the following supplementary condition:

$$b_i = 0 \quad \text{for } i = 1, \ldots, m. \qquad (4.12)$$

Equation (4.11) is then expressed as:

$$\left|H(jx)\right|^2 = \frac{H^2(0)}{1 + a_n x^{2n}}. \qquad (4.13)$$

This function then only has poles.

4.3.2. A specific example of MFM functions: Butterworth polynomial filters

4.3.2.1. Amplitude-squared expression

Butterworth functions are specific examples of MFM functions. They take the following form:

$$\left|H(jx)\right|^2 = \frac{H_0^2}{1 + x^{2n}}. \qquad (4.14)$$

By normalizing the amplitude, we get:

$$\left|H_n(jx)\right|^2 = \frac{1}{1 + x^{2n}}. \qquad (4.15)$$

We observe that, whatever the degree n, we have:

$$\left|H_n(j)\right|^2 = \frac{1}{2}. \qquad (4.16)$$

For this normalized angular frequency equal to 1, the amplitude drops 3 dB below the reference level (i.e. the level at x = 0). All the curves in the Butterworth approximation therefore pass through this point. In this example, according to the representation of the attenuation given in equation (4.5), we get:

$$\left|A(jx)\right|^2 = 1 + x^{2n}. \qquad (4.17)$$
4.3.2.2. Localization of poles

By replacing the normalized angular frequency x with s/j in equation (4.15), we obtain:

$$H_n(s)H_n(-s) = \frac{1}{1 + (-1)^n s^{2n}}. \qquad (4.18)$$

Figure 4.7. Squared amplitude of Butterworth filters for orders 1 to 7

The poles $p_k$ of $H_n(s)H_n(-s)$ are the roots of $1 + (-1)^n s^{2n} = 0$. They are situated on the unit circle and are written as:

$$p_k = \exp\left(j\,\frac{(n + 2k + 1)\pi}{2n}\right), \quad k = 0, 1, \ldots, 2n-1. \qquad (4.19)$$
By introducing the real and imaginary parts of the poles, we arrive at the following expression:

$$p_k = \sigma_k + jx_k = \cos\left(\frac{(n + 2k + 1)\pi}{2n}\right) + j\sin\left(\frac{(n + 2k + 1)\pi}{2n}\right). \qquad (4.20)$$

To obtain the expression of the transfer function $H_n(s)$, we keep only the poles situated in the left half of the complex plane, in order to satisfy the stability criterion of the system. Generally, the normalized transfer function of a Butterworth filter of order n is written as:

$$H_n(s) = \frac{1}{s^n + a_{n-1}s^{n-1} + \cdots + a_2 s^2 + a_1 s + 1}. \qquad (4.21)$$

Figure 4.8. Attenuation of Butterworth filters according to order
The polynomial in the denominator of equation (4.21) is the Butterworth polynomial of degree n. Butterworth polynomials are given by the following formulae, distinguishing between even and odd n:

– n even: $\displaystyle\prod_{k=1}^{n/2}\left(s^2 + 2\cos\theta_k\, s + 1\right)$ with $\theta_k = \dfrac{(2k-1)\pi}{2n}$;

– n odd: $(s+1)\displaystyle\prod_{k=1}^{(n-1)/2}\left(s^2 + 2\cos\theta_k\, s + 1\right)$ with $\theta_k = \dfrac{k\pi}{n}$.

Figure 4.9. Position of the poles of $H_n(s)H_n(-s)$ and $H_n(s)$ for n = 2 and n = 3
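The pole locations of equation (4.19) and the resulting Butterworth polynomial can be checked numerically; a sketch for n = 3, which yields s³ + 2s² + 2s + 1:

```python
import numpy as np

# Poles of H_n(s)H_n(-s) lie on the unit circle (equation (4.19)):
# p_k = exp(j*pi*(n + 2k + 1)/(2n)), k = 0, ..., 2n-1
n = 3
k = np.arange(2 * n)
poles = np.exp(1j * np.pi * (n + 2 * k + 1) / (2 * n))
assert np.allclose(np.abs(poles), 1.0)

# Keeping only the left half-plane poles gives the Butterworth polynomial
lhp = poles[poles.real < 0]
coeffs = np.poly(lhp).real        # monic polynomial built from these roots
# For n = 3 this is s^3 + 2s^2 + 2s + 1 = (s + 1)(s^2 + s + 1)
assert np.allclose(coeffs, [1, 2, 2, 1])
```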
Order — Butterworth polynomials:
n = 1: s + 1
n = 2: s² + √2·s + 1
n = 3: s³ + 2s² + 2s + 1 = (s + 1)(s² + s + 1)
n = 4: s⁴ + 2.613s³ + 3.414s² + 2.613s + 1 = (s² + 0.765s + 1)(s² + 1.848s + 1)
n = 5: s⁵ + 3.236s⁴ + 5.236s³ + 5.236s² + 3.236s + 1 = (s + 1)(s² + 0.618s + 1)(s² + 1.618s + 1)

Table 4.2. Expression of Butterworth polynomials

4.3.2.3. Determining the cut-off frequency at –3 dB and the filter order

Figure 4.10. Low-pass filter specification with cut-off frequency fc at –3 dB (as reference)

Let $A_p$ be the maximum attenuation that we accept at the frequency $f_p$ and $A_a$ the minimal attenuation that we require at the frequency $f_a$. The minimal order that satisfies these two conditions is determined as follows:

$$A_a \le 10\log_{10}\left(1 + \left(\frac{f_a}{f_c}\right)^{2n}\right) \qquad (4.22)$$
and:

$$A_p \ge 10\log_{10}\left(1 + \left(\frac{f_p}{f_c}\right)^{2n}\right). \qquad (4.23)$$

From there, taking equations (4.22) and (4.23) into account, we get:

$$\left(\frac{f_a}{f_p}\right)^{2n} = \left(\frac{f_a}{f_c}\right)^{2n}\left(\frac{f_p}{f_c}\right)^{-2n} \ge \frac{10^{A_a/10} - 1}{10^{A_p/10} - 1}. \qquad (4.24)$$

We can then verify that the value of n that satisfies the specification constraints is given by:

$$n \ge \frac{1}{2}\,\frac{\log_{10}\left(\dfrac{10^{A_a/10} - 1}{10^{A_p/10} - 1}\right)}{\log_{10}\left(\dfrac{f_a}{f_p}\right)}. \qquad (4.25)$$

In addition, the cut-off frequency $f_c$ at –3 dB is easily obtained from equation (4.22):

$$f_c = f_a\left(10^{A_a/10} - 1\right)^{-\frac{1}{2n}}. \qquad (4.26)$$

4.3.2.4. Application

Suppose we want to synthesize a Butterworth filter with an attenuation of 40 dB at 4,000 Hz and of 0.5 dB at 3,200 Hz. Using the formulae of section 4.3.2.3, we find that the minimal order equals 26 and the cut-off frequency equals 3,350 Hz.
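This application can be reproduced numerically from equations (4.25) and (4.26); a sketch:

```python
import numpy as np

# Specification of section 4.3.2.4: at least 40 dB of attenuation at
# fa = 4,000 Hz and at most 0.5 dB at fp = 3,200 Hz
Aa, Ap, fa, fp = 40.0, 0.5, 4000.0, 3200.0

# Minimal order, equation (4.25)
n_min = np.log10((10**(Aa/10) - 1) / (10**(Ap/10) - 1)) / (2 * np.log10(fa/fp))
n = int(np.ceil(n_min))
assert n == 26

# Cut-off frequency at -3 dB, equation (4.26): about 3,350 Hz
fc = fa * (10**(Aa/10) - 1) ** (-1 / (2 * n))
assert abs(fc - 3350) < 2
```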
Figure 4.11. Synthesis of a continuous Butterworth filter: frequency responses in dB and in linear scale (flat response in the passband)

4.3.2.5. Realization of a Butterworth filter

Up to now, we have represented a Butterworth filter so that at x = 1 the attenuation is 3 dB, i.e. $-10\times\log_{10}\left|H(j)\right|^2 = 3$ dB. However, when representing a specification, we may want a different attenuation at the end of the passband. To resolve this problem, we write the squared amplitude in the following way:

$$\left|H(jx)\right|^2 = \frac{1}{1 + \varepsilon^2 x^{2n}} \qquad (4.27)$$

where ε is a parameter chosen so that, at x = 1, the desired attenuation is obtained.
Figure 4.12. Example of a filter realization with ε = 1/2: curves of 1/(1 + ε²x²ⁿ) and 1/(1 + x²ⁿ)

4.4. Equiripple filters and the Chebyshev approximation

4.4.1. Characteristics of the Chebyshev approximation

Butterworth filters are widely used, but they have the drawback of requiring polynomials of an elevated degree for standard applications. To get around this problem, an alternative solution consists of using equiripple filters and, more specifically, Chebyshev filters. The type I (or type II, also called inverse) Chebyshev approximation distributes the approximation error throughout the entire passband (or throughout the attenuated band). Unlike Butterworth filters, the frequency response curve of the Chebyshev approximation presents an oscillation of equal amplitude in this frequency band. The maximum value of the admissible error with respect to the reference level is thereby minimized. Moreover, we can show that the amplitude in the attenuated band decreases monotonically and, for filters of order above 1, much more quickly than with Butterworth filters.
4.4.2. Type I Chebyshev filters

4.4.2.1. The Chebyshev polynomial

We define $C_n(x)$, the Chebyshev function, sometimes called the Chebyshev polynomial, of order n as follows:

$$C_n(x) = \begin{cases} \cos\left[n\arccos(x)\right] & 0 \le x \le 1 \\ \cosh\left[n\,\mathrm{arccosh}(x)\right] & x > 1. \end{cases} \qquad (4.28)$$

Table 4.3 gives the Chebyshev functions for n from 0 to 4. We see that these are even functions if n is even and odd functions if n is odd.

Figure 4.13. Variation curves of the first Chebyshev functions (orders 0 to 3)

Degree — Chebyshev polynomials:
0: C₀(x) = 1
1: C₁(x) = x
2: C₂(x) = 2x² – 1
3: C₃(x) = 4x³ – 3x
4: C₄(x) = 8x⁴ – 8x² + 1
n: C_n(x) = 2xC_{n–1}(x) – C_{n–2}(x), n ≥ 2

Table 4.3. Chebyshev polynomial values
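The recurrence of Table 4.3 can be checked against the trigonometric definition (4.28); a sketch:

```python
import numpy as np

def cheb(n, x):
    """Chebyshev polynomial C_n(x) via the recurrence of Table 4.3:
    C_0 = 1, C_1 = x, C_n = 2x*C_{n-1} - C_{n-2} for n >= 2."""
    c_prev, c = np.ones_like(x), x
    if n == 0:
        return c_prev
    for _ in range(n - 1):
        c_prev, c = c, 2 * x * c - c_prev
    return c

x = np.linspace(-1.0, 1.0, 101)
# On [-1, 1] the recurrence matches cos(n*arccos(x)) (equation (4.28))
for n in range(6):
    assert np.allclose(cheb(n, x), np.cos(n * np.arccos(x)))
# Explicit check of the degree-3 entry of Table 4.3
assert np.allclose(cheb(3, x), 4 * x**3 - 3 * x)
```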
The Chebyshev functions satisfy:

$$C_n(1) = 1 \quad \forall n \qquad (4.29)$$

and

$$C_n(0) = \begin{cases} \pm 1 & \text{if } n \text{ is even} \\ 0 & \text{if } n \text{ is odd.} \end{cases} \qquad (4.30)$$

4.4.2.2. Type I Chebyshev filters

For a type I Chebyshev filter, the square of the normalized amplitude is of the form:

$$\left|H(f)\right|^2 = \frac{1}{1 + \varepsilon^2 C_n^2\left(\dfrac{f}{f_p}\right)} \qquad (4.31)$$

where ε is a parameter that regulates the ripple value in the passband. We should note that here the frequency is normalized with respect to $f_p$, the frequency limit of the passband, and not with respect to the cut-off frequency $f_c$ at –3 dB.

Figure 4.14. Squared amplitude of type I Chebyshev filters for orders 0 to 4, ε = 1/3
From Figure 4.14, we see that the number of extrema present in the passband is equal to the filter order. Moreover, the attenuation satisfies the relation:

$$A^2(f) = 1 + \varepsilon^2 C_n^2\left(\frac{f}{f_p}\right). \qquad (4.32)$$

Figure 4.15. Example of the attenuation of type I Chebyshev filters for different orders, ε = 1/3

4.4.2.3. Pole determination

The poles $p_k$ are given by the roots of the denominator of $|H(s)|^2$:

$$\left|H(s)\right|^2 = \frac{1}{1 + \varepsilon^2 C_n^2\left(\dfrac{s}{j}\right)}. \qquad (4.33)$$
So:

$$C_n\left(\frac{p_k}{j}\right) = \pm\frac{j}{\varepsilon} = \cos\left(n\arccos\left(\frac{p_k}{j}\right)\right). \qquad (4.34)$$

We then introduce the quantities $s_k$, $u_k$ and $v_k$ such that:

$$s_k = u_k + jv_k = \arccos\left(\frac{p_k}{j}\right). \qquad (4.35)$$

From here, equation (4.34) becomes:

$$\cos(nu_k)\cosh(nv_k) - j\sin(nu_k)\sinh(nv_k) = \pm\frac{j}{\varepsilon}. \qquad (4.36)$$

By identifying the real and imaginary parts of the two sides of equation (4.36), the poles of the transfer function are determined from the following system of two equations with two unknowns:

$$\begin{cases} \cos(nu_k)\cosh(nv_k) = 0 \\ \sin(nu_k)\sinh(nv_k) = \pm\dfrac{1}{\varepsilon}. \end{cases} \qquad (4.37)$$

Since cosh(nv_k) ≠ 0, the system in equation (4.37) reduces to:

$$\begin{cases} \cos(nu_k) = 0 \\ \sin(nu_k)\sinh(nv_k) = \pm\dfrac{1}{\varepsilon}. \end{cases} \qquad (4.38)$$

So we have:

$$\begin{cases} u_k = (2k+1)\dfrac{\pi}{2n} \quad \text{with } k = 0, \ldots, 2n-1 \\ \sin(nu_k)\sinh(nv_k) = \pm\dfrac{1}{\varepsilon}. \end{cases} \qquad (4.39)$$

From there, since sin(nu_k) = ±1, equation (4.39) is equivalent to:

$$\begin{cases} u_k = (2k+1)\dfrac{\pi}{2n} \quad \text{with } k = 0, \ldots, 2n-1 \\ v_k = v = \dfrac{1}{n}\,\mathrm{arcsinh}\left(\dfrac{1}{\varepsilon}\right). \end{cases} \qquad (4.40)$$
We can easily show that the poles are situated on an ellipse. Equation (4.35) can be written as follows:

$$p_k = \sigma_k + j\omega_k = j\cos(u_k + jv) = \sin(u_k)\sinh(v) + j\cos(u_k)\cosh(v). \qquad (4.41)$$

By identifying the real and imaginary parts of equation (4.41), we get:

$$\begin{cases} \dfrac{\sigma_k}{\sinh(v)} = \sin(u_k) \\ \dfrac{\omega_k}{\cosh(v)} = \cos(u_k). \end{cases} \qquad (4.42)$$

We can then write:

$$\frac{\sigma_k^2}{\sinh^2(v)} + \frac{\omega_k^2}{\cosh^2(v)} = 1. \qquad (4.43)$$

4.4.2.4. Determining the cut-off frequency at –3 dB and the filter order

Figure 4.16. Low-pass filter specification characterized by the frequencies at the end of the passband and at the beginning of the attenuated band

Let $A_p$ be the maximum attenuation that we accept at the frequency $f_p$ and $A_a$ the minimal attenuation that we require at the frequency $f_a$. The minimal order satisfying these two conditions is determined as follows.
Given equation (4.31) and the constraints linked to the specifications, we have, on the one hand, since $\frac{f_a}{f_p} \ge 1$:

$$A_a \le 10\log_{10}\left(1 + \varepsilon^2 C_n^2\left(\frac{f_a}{f_p}\right)\right) = 10\log_{10}\left(1 + \varepsilon^2\cosh^2\left(n\,\mathrm{arccosh}\left(\frac{f_a}{f_p}\right)\right)\right) \qquad (4.44)$$

and, on the other hand:

$$A_p = 10\log_{10}\left(1 + \varepsilon^2 C_n^2(1)\right) = 10\log_{10}\left(1 + \varepsilon^2\right). \qquad (4.45)$$

From there, taking equations (4.44) and (4.45) into account, we obtain:

$$\cosh^2\left(n\,\mathrm{arccosh}\left(\frac{f_a}{f_p}\right)\right) \ge \frac{10^{A_a/10} - 1}{10^{A_p/10} - 1}, \qquad (4.46)$$

or:

$$n \ge \frac{\mathrm{arccosh}\left(\sqrt{\dfrac{10^{A_a/10} - 1}{10^{A_p/10} - 1}}\right)}{\mathrm{arccosh}\left(\dfrac{f_a}{f_p}\right)}. \qquad (4.47)$$

Instead of using the function arccosh directly, we can exploit an alternative relation. The hyperbolic cosine and the hyperbolic sine verify:

$$\cosh^2(x) - \sinh^2(x) = 1. \qquad (4.48)$$
Using equation (4.48), we can write cosh(x) + sinh(x) as follows:

$$\cosh(x) + \sinh(x) = \frac{\exp(x) + \exp(-x)}{2} + \frac{\exp(x) - \exp(-x)}{2} = \exp(x) = \cosh(x) + \sqrt{\cosh^2(x) - 1}. \qquad (4.49)$$

With the equality in equation (4.49), the function x = arccosh(y) can be written using the natural logarithm:

$$\mathrm{arccosh}(y) = \ln\left(y + \sqrt{y^2 - 1}\right). \qquad (4.50)$$

From equations (4.47) and (4.50), we then get:

$$n \ge \frac{\ln\left(\sqrt{\dfrac{10^{A_a/10} - 1}{10^{A_p/10} - 1}} + \sqrt{\dfrac{10^{A_a/10} - 1}{10^{A_p/10} - 1} - 1}\right)}{\ln\left(\dfrac{f_a}{f_p} + \sqrt{\left(\dfrac{f_a}{f_p}\right)^2 - 1}\right)}. \qquad (4.51)$$

Using equation (4.45), we easily obtain the value of the coefficient ε:

$$\varepsilon = \sqrt{10^{A_p/10} - 1}. \qquad (4.52)$$

We can also express the cut-off frequency at –3 dB; it satisfies the relation:

$$\left|H(f_c)\right|^2 = \frac{1}{1 + \varepsilon^2 C_n^2\left(\dfrac{f_c}{f_p}\right)} = \frac{1}{2}, \qquad (4.53)$$

or:

$$f_c = f_p\cosh\left(\frac{1}{n}\,\mathrm{arccosh}\left(\frac{1}{\varepsilon}\right)\right). \qquad (4.54)$$
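The order formula (4.47) and the cut-off expression (4.54) can be checked numerically on the specification already used for the Butterworth filter (40 dB at 4,000 Hz, 0.5 dB at 3,200 Hz); a sketch:

```python
import numpy as np

Aa, Ap, fa, fp = 40.0, 0.5, 4000.0, 3200.0

# Minimal order, equation (4.47): n >= arccosh(sqrt(D)) / arccosh(fa/fp)
D = (10**(Aa/10) - 1) / (10**(Ap/10) - 1)
n = int(np.ceil(np.arccosh(np.sqrt(D)) / np.arccosh(fa / fp)))
assert n == 10        # versus order 26 for the Butterworth approximation

# Ripple parameter (4.52) and -3 dB cut-off frequency (4.54)
eps = np.sqrt(10**(Ap/10) - 1)
fc = fp * np.cosh(np.arccosh(1.0 / eps) / n)
assert fp < fc < fa   # the cut-off lies inside the transition band
```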
4.4.2.5. Application

We want to synthesize a Chebyshev filter with an attenuation of 40 dB at 4,000 Hz and of 0.5 dB at 3,200 Hz.

Figure 4.17. Synthesis of a continuous, type I Chebyshev filter

By applying the above formulae, we find that the order of the filter is 10, instead of 26 for the Butterworth filter described earlier.

4.4.2.6. Realization of a Chebyshev filter

In order to realize this type of filter, the following two constraints must be satisfied:
– we must respect the ripple value in the reference band;
– we must obtain the minimum attenuation value in the attenuated band.

The transfer function is written in the following form:

$$H(s) = \frac{K}{s^n + a_{n-1}s^{n-1} + a_{n-2}s^{n-2} + \cdots + a_1 s + a_0} \qquad (4.55)$$
where n, the degree of the polynomial, is determined by the required attenuation in the attenuated band. Tables give the denominator of the transfer function of the filter for different values of n and of the ripple in the reference band, expressed in dB (see Tables 4.4 and 4.5).

Degree | Polynomial for 0.5 dB of ripple
1 | s + 2.863
2 | s² + 1.425s + 1.516
3 | s³ + 1.253s² + 1.535s + 0.716 = (s + 0.626)(s² + 0.626s + 1.142)
4 | s⁴ + 1.197s³ + 1.717s² + 1.025s + 0.379 = (s² + 0.351s + 1.064)(s² + 0.845s + 0.356)
5 | s⁵ + 1.172s⁴ + 1.937s³ + 1.309s² + 0.753s + 0.179 = (s + 0.362)(s² + 0.224s + 1.036)(s² + 0.586s + 0.477)

Table 4.4. Denominator of H(s) for 0.5 dB of ripple

Degree | Polynomial for 1 dB of ripple
1 | s + 1.965
2 | s² + 1.098s + 1.103
3 | s³ + 0.988s² + 1.238s + 0.491 = (s + 0.494)(s² + 0.494s + 0.994)
4 | s⁴ + 0.953s³ + 1.454s² + 0.743s + 0.276 = (s² + 0.279s + 0.987)(s² + 0.674s + 0.279)
5 | s⁵ + 0.937s⁴ + 1.689s³ + 0.974s² + 0.581s + 0.123 = (s + 0.289)(s² + 0.179s + 0.988)(s² + 0.468s + 0.429)

Table 4.5. Denominator of H(s) for 1 dB of ripple

4.4.2.7. Asymptotic behavior

Here we look at the asymptotic behavior of the gain curve of a Chebyshev filter of order n. We can show that:

\[ \lim_{x\to+\infty} 10\log_{10}|H(jx)|^2 = \lim_{x\to+\infty} 10\log_{10}\frac{1}{1+\varepsilon^2 C_n^2(x)} \approx -20\log_{10}\varepsilon - 20n\log_{10}x - 6(n-1). \quad (4.56) \]
So:

\[ \lim_{x\to+\infty} 10\log_{10}\left(\varepsilon^2 C_n^2(x)\right) = \lim_{x\to+\infty} 20\log_{10}\left(\varepsilon\,C_n(x)\right) \approx 20\log_{10}\left(\varepsilon\,2^{n-1}x^{n}\right) = 20\log_{10}\varepsilon + 20n\log_{10}x + 6(n-1), \quad (4.57) \]

since $C_n(x) \approx 2^{n-1}x^{n}$ for large x and $20\log_{10}2 \approx 6$ dB.

Now, for a Butterworth filter of order n, there is a drop of 20n dB per decade:

\[ \lim_{x\to+\infty} 10\log_{10}|H(jx)|^2 = \lim_{x\to+\infty} 10\log_{10}\frac{1}{1+\varepsilon^2 x^{2n}} \approx -20\log_{10}\varepsilon - 20n\log_{10}x. \quad (4.58) \]

For the same degree, a Chebyshev filter therefore presents about 6(n−1) dB more attenuation than a Butterworth filter.

4.4.3. Type II Chebyshev filter

For a type II Chebyshev filter, the square of the normalized amplitude possesses both poles and zeros:

\[ |H(f)|^2 = \frac{C_n^2(f_a/f)}{C_n^2(f_a/f) + \varepsilon^2 C_n^2(f_a/f_p)} = \frac{1}{1+\varepsilon^2\,\dfrac{C_n^2(f_a/f_p)}{C_n^2(f_a/f)}}. \quad (4.59) \]

4.4.3.1. Determining the filter order and the cut-off frequency

We determine the minimal order that satisfies the two specification conditions as described below. Using equation (4.59) and the constraints linked to the specifications, we have, on the one hand:

\[ |H(f_p)|^2 = \frac{1}{1+\varepsilon^2} \quad (4.60) \]
and on the other hand:

\[ |H(f_a)|^2 = \frac{1}{1+\varepsilon^2\,\dfrac{C_n^2(f_a/f_p)}{C_n^2(f_a/f_a)}} = \frac{1}{1+\varepsilon^2 C_n^2(f_a/f_p)}. \quad (4.61) \]

The constraints in equations (4.60) and (4.61) lead to the same condition on the order as for the type I Chebyshev filter, so we have:

\[ n \geq \frac{\operatorname{argch}\sqrt{\dfrac{10^{A_a/10}-1}{10^{A_p/10}-1}}}{\operatorname{argch}\left(\dfrac{f_a}{f_p}\right)} \quad (4.62) \]

or:

\[ n \geq \frac{\ln\left(\sqrt{\dfrac{10^{A_a/10}-1}{10^{A_p/10}-1}} + \sqrt{\dfrac{10^{A_a/10}-1}{10^{A_p/10}-1} - 1}\right)}{\ln\left(\dfrac{f_a}{f_p}+\sqrt{\left(\dfrac{f_a}{f_p}\right)^2-1}\right)}. \quad (4.63) \]

We also have:

\[ \varepsilon = \sqrt{10^{A_p/10}-1}. \quad (4.64) \]

4.4.3.2. Application

Suppose we wish to synthesize a type II Chebyshev filter with an attenuation of at least 40 dB at 4,000 Hz and of at most 0.5 dB at 3,200 Hz.
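A quick numerical sanity check of equations (4.60)–(4.64) (our own sketch, reusing the order n = 10 given by the shared order formula): evaluating the attenuation at the two band edges confirms that the specifications are met.

```python
import math

def cheb_poly(n, x):
    """Chebyshev polynomial C_n(x) for x >= 0, using the cos/cosh forms."""
    if x <= 1:
        return math.cos(n * math.acos(x))
    return math.cosh(n * math.acosh(x))

fp, fa, Ap, Aa, n = 3200.0, 4000.0, 0.5, 40.0, 10
eps2 = 10**(Ap / 10) - 1                       # epsilon squared, equation (4.64)

att_fp = 10 * math.log10(1 + eps2)             # attenuation at f_p, from (4.60)
att_fa = 10 * math.log10(1 + eps2 * cheb_poly(n, fa / fp)**2)   # from (4.61)
print(round(att_fp, 2), round(att_fa))         # → 0.5 45
```

The stop-band attenuation, about 45 dB, exceeds the 40 dB required, while the passband loss is exactly the specified 0.5 dB.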
[Figure 4.18. Synthesis of a continuous type II Chebyshev filter. Frequency representation (gain in dB versus frequency in Hz): ripple in the attenuated band]

4.5. Elliptic filters: the Cauer approximation

The Chebyshev approximation, which distributes the error in one of the bands, allows filtering performances equivalent to those of Butterworth filters at lower orders. We can also distribute the approximation error simultaneously in the passband and in the attenuated band. This approach, called the elliptic approximation, leads to the development of Cauer filters. In this text, we will not discuss this type of filter in detail.

4.6. Summary of four types of low-pass filter: Butterworth, Chebyshev type I, Chebyshev type II and Cauer

Let us look again at the attenuation formulae:

\[ A^2(jx) = \frac{1}{|H(jx)|^2} \quad (4.5) \]

and

\[ A^2(jx) = 1 + \varepsilon^2\phi^2(x). \quad (4.6) \]

We present four important types of filters in the following table:
Butterworth filter:
– maximally flat approximation in the passband;
– polynomial filter;
– $A^2(jx) = 1 + x^{2n}$.

Type I Chebyshev filter:
– equiripple approximation in the passband;
– polynomial filter;
– $A^2(jx) = 1 + \varepsilon^2 C_n^2(x)$, where $C_n(x)$ is the Chebyshev polynomial of order n;
– higher attenuation than that of a Butterworth filter of equivalent degree.

Type II Chebyshev filter (inverse):
– equiripple approximation in the attenuated band;
– $A^2(jx) = 1 + \varepsilon^2\phi^2(jx)$, where here $\phi^2(jx)$ is of rational fraction type.

Cauer filter (elliptic):
– ripple in both the passband and the attenuated band;
– $A^2(jx) = 1 + \varepsilon^2\phi^2(jx)$, with $\phi^2(jx)$ of rational fraction type.

[The original table also plots the amplitude response of each of the four filters.]

4.7. Linear phase filters (maximally flat delay or MFD): Bessel and Thomson filters

4.7.1. Reminders on continuous linear phase filters

In some applications, we need a filter whose transfer function has a phase that varies linearly with the frequency in a specific range.
Let us consider a continuous-time filter represented by its transfer function $H(j2\pi f) = |H(j2\pi f)|\exp(j\varphi(f))$. If we look at $|H(j2\pi f)|$, we must also take the characteristics of φ(f) into account. To do that, we consider an input signal made of two sinusoidal components:

\[ x(t) = A_1\sin(2\pi f t) + A_2\sin\!\left(2\pi(f+\Delta f)t\right). \quad (4.65) \]

The filtered signal is then written as:

\[ y(t) = A_1|H(f)|\sin\!\left(2\pi f t + \varphi(f)\right) + A_2|H(f+\Delta f)|\sin\!\left(2\pi(f+\Delta f)t + \varphi(f+\Delta f)\right) \quad (4.66) \]

or:

\[ y(t) = A_1|H(f)|\sin\!\left(2\pi f\left[t+\frac{\varphi(f)}{2\pi f}\right]\right) + A_2|H(f+\Delta f)|\sin\!\left(2\pi(f+\Delta f)\left[t+\frac{\varphi(f+\Delta f)}{2\pi(f+\Delta f)}\right]\right). \quad (4.67) \]

Now, if we carry out a limited expansion of $\dfrac{\varphi(f+\Delta f)}{f+\Delta f}$, we get:

\[ \frac{\varphi(f+\Delta f)}{f+\Delta f} = \left[\varphi(f) + \frac{d\varphi(f)}{df}\Delta f + O(\Delta f)\right]\times\frac{1}{f}\left(1-\frac{\Delta f}{f+\Delta f}\right) \approx \frac{\varphi(f)}{f} + \left[\frac{d\varphi(f)}{df}-\frac{\varphi(f)}{f}\right]\frac{\Delta f}{f}. \quad (4.68) \]

Taking equation (4.68) into account, equation (4.67) is written as:

\[ y(t) \approx A_1|H(f)|\sin\!\left(2\pi f\left[t+\frac{\varphi(f)}{2\pi f}\right]\right) + A_2|H(f+\Delta f)|\sin\!\left(2\pi(f+\Delta f)\left[t+\frac{\varphi(f)}{2\pi f}+\frac{1}{2\pi}\left(\frac{d\varphi(f)}{df}-\frac{\varphi(f)}{f}\right)\frac{\Delta f}{f}\right]\right). \quad (4.69) \]
In order to avoid frequency distortion, the term $\dfrac{1}{2\pi}\left[\dfrac{d\varphi(f)}{df}-\dfrac{\varphi(f)}{f}\right]\Delta f$ must cancel. Since $\Delta f \neq 0$, we come to the following condition:

\[ \frac{d\varphi(f)}{df} = \frac{\varphi(f)}{f}. \quad (4.70) \]

If the phase is of the form:

\[ \varphi(f) = \beta f, \quad (4.71) \]

then the condition in equation (4.70) is satisfied. We then say that the filter has linear phase. We introduce the quantities τφ and τg, which are, respectively, the phase delay and the group delay:

\[ \tau_\varphi = -\frac{\varphi(f)}{2\pi f} \quad (4.72) \]

\[ \tau_g = -\frac{1}{2\pi}\frac{d\varphi(f)}{df}. \quad (4.73) \]

With linear phase filters, these two quantities are equal.

NOTE 4.2.– we also say a filter has linear phase when $\varphi(f) = \beta f + \chi$. However, in this case the condition in equation (4.70) is not satisfied, and a phase distortion is observed.

4.7.2. Properties of Bessel-Thomson filters

Up to now, if we look at the phases associated with the transfer functions of Butterworth and Chebyshev filters, we see that they do not necessarily present a linear phase. Setting s = jx, the transfer function of a continuous linear system can be written in the following form:

\[ H(jx) = \frac{a(x^2)+jx\,b(x^2)}{c(x^2)+jx\,d(x^2)} = |H(jx)|\exp\!\left(j\varphi(x)\right) \quad (4.74) \]
where a, b, c and d are polynomials in x². Using equation (4.74), the phase of the transfer function verifies:

\[ \varphi(x) = \arg\!\left(H(jx)\right) = \arctan\frac{x\,b(x^2)}{a(x^2)} - \arctan\frac{x\,d(x^2)}{c(x^2)}. \quad (4.75) \]

The phase φ(x) given in equation (4.75) is an analytic function. Carrying out its series development, we obtain:

\[ \varphi(x) = \left(\frac{x\,b(x^2)}{a(x^2)} - \frac{x^3 b^3(x^2)}{3\,a^3(x^2)} + \frac{x^5 b^5(x^2)}{5\,a^5(x^2)} - \cdots\right) - \left(\frac{x\,d(x^2)}{c(x^2)} - \frac{x^3 d^3(x^2)}{3\,c^3(x^2)} + \frac{x^5 d^5(x^2)}{5\,c^5(x^2)} - \cdots\right). \quad (4.76) \]

By arranging the powers of x, we can therefore write the phase in the following form:

\[ \varphi(x) = \alpha_1 x + \alpha_3 x^3 + \alpha_5 x^5 + \cdots \quad (4.77) \]

To make the phase linear, we must, using equation (4.77), obtain:

\[ \alpha_1 \neq 0, \qquad \alpha_3 = \alpha_5 = \alpha_7 = \cdots = 0. \quad (4.78) \]

This kind of linear response corresponds to a flat response in propagation time (maximally flat delay or MFD), since:

\[ \tau = -\frac{d\varphi(x)}{dx} = -\alpha_1. \quad (4.79) \]

If we write the propagation time in the form (an even function):

\[ \tau = \tau_0\,\frac{1 + a_2 x^2 + a_4 x^4 + \cdots}{1 + b_2 x^2 + b_4 x^4 + \cdots}, \quad (4.80) \]

we can use the method employed to determine the maximally flat amplitude functions in order to determine the maximally flat functions for the transit time.
EXAMPLE 4.1.– let a transfer function be written as:

\[ H(jx) = K\,\frac{jx+\delta}{(jx)^2 + \beta_1\,jx + \beta_0}. \quad (4.81) \]

We seek the relation between the coefficients β₀, β₁ and δ for the response to have linear phase. For this purpose, we express the phase of the filter H(jx):

\[ \arg[H(jx)] = \arctan\frac{x}{\delta} - \arctan\frac{\beta_1 x}{\beta_0 - x^2} = \left(\frac{x}{\delta} - \frac{x^3}{3\delta^3} + \frac{x^5}{5\delta^5} - \cdots\right) - \left(\frac{\beta_1}{\beta_0}x + \left(\frac{\beta_1}{\beta_0^2} - \frac{\beta_1^3}{3\beta_0^3}\right)x^3 + \cdots\right) \quad (4.82) \]

from which we get:

\[ \arg[H(jx)] = \left(\frac{1}{\delta} - \frac{\beta_1}{\beta_0}\right)x - \left(\frac{1}{3\delta^3} + \frac{\beta_1}{\beta_0^2} - \frac{\beta_1^3}{3\beta_0^3}\right)x^3 + O(x^5). \quad (4.83) \]

To obtain a linear phase response, we must have:

\[ \frac{1}{\delta} - \frac{\beta_1}{\beta_0} \neq 0 \qquad\text{and}\qquad \beta_1^3 - 3\beta_1\beta_0 = \left(\frac{\beta_0}{\delta}\right)^3. \quad (4.84) \]

4.7.3. Bessel and Bessel-Thomson filters

There is a class of linear phase filters whose transmission zeros are all at infinity. These filters are called Bessel or Bessel-Thomson filters. Their transfer functions are of the type:

\[ H(s) = \frac{K}{s^n + a_{n-1}s^{n-1} + \cdots + a_1 s + a_0}. \quad (4.85) \]

We can use the above procedure to determine the coefficients of the polynomial; however, for high orders we must solve a non-linear system of equations, which can prove difficult. The coefficients of the denominator polynomial can be determined most easily using the Storch method, in which we approximate the
following function, which represents a pure delay (here normalized, τ = 1) and thus corresponds to a linear phase filter:

\[ \exp(-s\tau)\big|_{\tau=1} = \exp(-s) = \frac{1}{\exp(s)} = \frac{1}{\cosh s + \sinh s}. \quad (4.86) \]

Now, we can write coth s as follows:

\[ \coth s = \frac{\cosh s}{\sinh s} = \frac{1 + \dfrac{s^2}{2!} + \dfrac{s^4}{4!} + \dfrac{s^6}{6!} + \cdots}{s + \dfrac{s^3}{3!} + \dfrac{s^5}{5!} + \dfrac{s^7}{7!} + \cdots}. \quad (4.87) \]

Equation (4.87) can be written in another form, as a continued fraction:

\[ \coth s = \frac{1}{s} + \cfrac{1}{\cfrac{3}{s} + \cfrac{1}{\cfrac{5}{s} + \cfrac{1}{\cfrac{7}{s} + \cdots}}}. \quad (4.88) \]

The approximation of order n consists of keeping the first n terms of this development. In this way, the approximation of coth s to the 3rd degree gives:

\[ \coth s \approx \frac{1}{s} + \cfrac{1}{\cfrac{3}{s} + \cfrac{s}{5}} = \frac{15 + 6s^2}{15s + s^3} = \frac{N(s)}{D(s)}. \quad (4.89) \]

The approximation to the 3rd order will then be of the form:

\[ T(s) = \frac{K}{N(s)+D(s)} = \frac{K}{s^3 + 6s^2 + 15s + 15} = \frac{k}{B_3(s)}. \quad (4.90) \]

The $B_n$ polynomials are called Bessel polynomials.

Degree | Bessel polynomial
0 | B₀(s) = 1
1 | B₁(s) = s + 1
2 | B₂(s) = s² + 3s + 3
n | B_n(s) = (2n − 1)B_{n−1}(s) + s²B_{n−2}(s), ∀n ≥ 2

Table 4.6. Bessel polynomials (B_n)
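The recurrence of Table 4.6 is easy to implement with coefficient lists (a sketch of ours, not code from the text; coefficients are stored in ascending powers of s). It reproduces the denominator s³ + 6s² + 15s + 15 of equation (4.90):

```python
def bessel_poly(n):
    """Coefficients of B_n(s), ascending powers of s, via
    B_n = (2n-1) B_{n-1} + s^2 B_{n-2} (Table 4.6)."""
    b_prev, b = [1], [1, 1]                  # B_0 = 1, B_1 = s + 1
    if n == 0:
        return b_prev
    for k in range(2, n + 1):
        term1 = [(2 * k - 1) * c for c in b]     # (2k-1) * B_{k-1}
        term2 = [0, 0] + b_prev                  # s^2 * B_{k-2}
        size = max(len(term1), len(term2))
        term1 += [0] * (size - len(term1))
        term2 += [0] * (size - len(term2))
        b_prev, b = b, [x + y for x, y in zip(term1, term2)]
    return b

print(bessel_poly(3))   # → [15, 15, 6, 1], i.e. 15 + 15s + 6s² + s³
```

The closed-form expression given hereafter yields the same coefficients; for instance, B₄(s) = s⁴ + 10s³ + 45s² + 105s + 105.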
We can then express them as:

\[ B_n(s) = \sum_{k=0}^{n} \frac{(2n-k)!}{2^{\,n-k}\,k!\,(n-k)!}\,s^{k}. \quad (4.91) \]

[Figure 4.19. Squared amplitude of Bessel filters for different orders (dB scale; the attenuation grows as the order increases)]

4.8. Papoulis filters (optimum (On))

4.8.1. General characteristics

Like Butterworth filters, these filters do not present ripple in their passband, while offering a steeper cut-off than a Butterworth filter of equivalent order; Papoulis filters thus combine the advantages of Butterworth and Chebyshev filters. They are obtained using a method that imposes a maximum value on the integral, over $-1 \leq x \leq 1$, of the square of the attenuation $A^2(jx)$, or equivalently of $A^2(jx)-1$:

\[ \int_{-1}^{1}\left(A^2(jx)-1\right)dx \leq \xi^2. \quad (4.92) \]
More specifically, we set $A^2(jx) - 1 = \varepsilon^2 L_n(x^2)$. The square of the amplitude is then of the form:

\[ |H(jx)|^2 = \frac{1}{1+\varepsilon^2 L_n(x^2)} \quad (4.93) \]

where $L_n(x^2)$ is the generator polynomial of the optimum filters and ε is a parameter determined by the required attenuation at x = 1.

When ε = 1 and the specification is taken as symmetrical, determining $L_n$ amounts to determining the polynomials of norm 1 associated with the scalar product:

\[ \langle P(x), Q(x)\rangle = \int_{-1}^{1} P(x)\,Q(x)\,dx. \quad (4.94) \]

The polynomials $L_n(x^2)$ verify the relations:

\[ L_n(0) = 0, \qquad L_n(1) = 1, \qquad \frac{dL_n(x^2)}{dx} \geq 0 \ \text{for } 0 \leq x \leq 1. \quad (4.95) \]

The generator polynomial takes a different form according to the parity of the order, but the transfer function is always expressed using Legendre polynomials of the first kind, $P_i(x)$.

Degree | Legendre polynomial of the first kind
0 | P₀(x) = 1
1 | P₁(x) = x
2 | P₂(x) = (3x² − 1)/2
n | $P_n(x) = \dfrac{1}{2^n n!}\dfrac{d^n}{dx^n}\left(x^2-1\right)^n$

Table 4.7. Legendre polynomials
For n odd (n = 2k + 1):

\[ L_n(x^2) = \int_{-1}^{2x^2-1}\left[\sum_{i=0}^{k} a_i P_i(u)\right]^2 du \]

where the constants $a_i$ are given by:

\[ a_0 = \frac{a_1}{3} = \frac{a_2}{5} = \cdots = \frac{a_k}{2k+1} = \frac{1}{\sqrt{2}\,(k+1)}. \]

For n even (n = 2k + 2):

\[ L_n(x^2) = \int_{-1}^{2x^2-1}(u+1)\left[\sum_{i=0}^{k} a_i P_i(u)\right]^2 du \]

where the coefficients $a_i$ are given by:
– for k even: $a_0 = \dfrac{a_2}{5} = \cdots = \dfrac{a_k}{2k+1} = \dfrac{1}{\sqrt{(k+1)(k+2)}}$ and $a_1 = a_3 = \cdots = a_{k-1} = 0$;
– for k odd: $\dfrac{a_1}{3} = \dfrac{a_3}{7} = \cdots = \dfrac{a_k}{2k+1} = \dfrac{1}{\sqrt{(k+1)(k+2)}}$ and $a_0 = a_2 = \cdots = a_{k-1} = 0$.

Table 4.8. Construction of the polynomials $L_n(x^2)$ according to the order n

Table 4.9 gives the expression of $L_n(x^2)$ for values of n from 2 to 6.

Degree | $L_n(x^2)$
2 | x⁴
3 | x² − 3x⁴ + 3x⁶
4 | 3x⁴ − 8x⁶ + 6x⁸
5 | x² − 8x⁴ + 28x⁶ − 40x⁸ + 20x¹⁰
6 | 6x⁴ − 40x⁶ + 105x⁸ − 120x¹⁰ + 50x¹²

Table 4.9. Values of the polynomials $L_n(x^2)$
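For odd n, the construction of Table 4.8 reduces to integrating a squared polynomial, which can be sketched with plain coefficient lists (ascending powers of u; this code is ours, not the book's). It reproduces the odd-degree entries of Table 4.9, for instance L₃(x²) = x² − 3x⁴ + 3x⁶, which at x = 0.5 equals 0.109375.

```python
import math

def legendre(i):
    """Coefficients of P_i(u), ascending powers (Table 4.7), via
    (m+1) P_{m+1} = (2m+1) u P_m - m P_{m-1}."""
    p_prev, p = [1.0], [0.0, 1.0]
    if i == 0:
        return p_prev
    for m in range(1, i):
        up = [0.0] + p                                     # u * P_m
        pad = p_prev + [0.0] * (len(up) - len(p_prev))
        p_prev, p = p, [((2*m + 1)*a - m*b) / (m + 1) for a, b in zip(up, pad)]
    return p

def papoulis_L_odd(n, x):
    """L_n(x^2) for odd n = 2k+1 (Table 4.8): integral from -1 to 2x^2-1
    of (sum a_i P_i(u))^2 du, with a_i = (2i+1)/(sqrt(2)*(k+1))."""
    k = (n - 1) // 2
    q = [0.0] * (k + 1)                                    # q(u) = sum a_i P_i(u)
    for i in range(k + 1):
        a_i = (2*i + 1) / (math.sqrt(2) * (k + 1))
        for j, c in enumerate(legendre(i)):
            q[j] += a_i * c
    sq = [0.0] * (2 * len(q) - 1)                          # q(u)^2
    for i, ci in enumerate(q):
        for j, cj in enumerate(q):
            sq[i + j] += ci * cj
    v = 2*x*x - 1                                          # upper integration bound
    return sum(c / (m + 1) * (v**(m + 1) - (-1)**(m + 1)) for m, c in enumerate(sq))

print(round(papoulis_L_odd(3, 0.5), 6))   # → 0.109375
```

The normalization of the $a_i$ guarantees $L_n(1) = 1$ and $L_n(0) = 0$, in agreement with (4.95).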
4.8.2. Determining the poles of the transfer function

The transfer function is obtained by setting s = jx and retaining for H(s) only the poles situated in the left half-plane. When ε = 1, the poles, like those of Butterworth filters, are situated on the unit circle.

Degree | Polynomial for Papoulis (optimum) filters
1 | s + 1
2 | s² + 1.414s + 1
3 | s³ + 1.310s² + 1.359s + 0.577 = (s + 0.620)(s² + 0.690s + 0.929)

Table 4.10. Values of the polynomials for optimum filters

[Figure 4.20. Squared amplitude of Papoulis filters for different orders (dB scale)]

4.9. Bibliography

[WEI 75] WEINBERG L., Network Analysis and Synthesis, Krieger Publishing Co., ASIN 0882753215, 1975.
[ZVE 67] ZVEREV A., Handbook of Filter Synthesis, Wiley-Interscience, ISBN 0471986801, 1967.
Chapter 5

Finite Impulse Response Filters

5.1. Introduction to finite impulse response filters

Finite impulse response (FIR) filters are used in many applications, especially in image processing, as we will see in Chapters 8 and 9. Their popularity is due to their simplicity: they operate on a single finite sequence of input signal samples. This makes it easy for FIR filters to meet specifications that cannot be obtained with infinite impulse response (IIR) filters, in particular the realization of causal linear phase filters. Moreover, FIR filters have the advantage of always being stable, which makes them very convenient for hardware implementation. Depending on the application, the order of the model usually varies from 25 to 400.

5.1.1. Difference equations and FIR filters

In Chapter 3, difference equations were introduced to characterize linear time-invariant (LTI) digital systems with input x(k) and output y(k); difference equations deal with digital systems, whereas differential equations characterize analog systems:

\[ a_0\,y(k) + \cdots + a_{M-1}\,y(k-M+1) = b_0\,x(k) + \cdots + b_{N-1}\,x(k-N+1) \quad (5.1) \]

Chapter written by Yannick BERTHOUMIEU, Eric GRIVEL and Mohamed NAJIM.
For finite impulse response filters, equation (5.1) verifies:

\[ a_0 = 1 \quad\text{and}\quad a_i = 0 \ \text{for } 1 \leq i \leq M-1. \quad (5.2) \]

From here, for FIR filters, the difference equation also corresponds to the convolution between the impulse response and the input of the system. Indeed, given (5.2), equation (5.1) becomes:

\[ y(k) = \sum_{j=0}^{N-1} b_j\,x(k-j). \quad (5.3) \]

The output and the input satisfy the following relation:

\[ y(k) = h(k) * x(k) = \sum_{j=0}^{N-1} h(j)\,x(k-j), \quad (5.4) \]

from where, by identification:

\[ h(j) = b_j \quad \text{for } 0 \leq j \leq N-1. \quad (5.5) \]

Equation (5.3) shows that FIR filters present no recursion in their implementation. From the z-transform of equation (5.3), we easily deduce the transfer function of the system, i.e. the relation between the z-transform of the output y(k) and that of the input x(k):

\[ H_z(z) = \sum_{n=0}^{N-1} b_n\,z^{-n}. \quad (5.6) \]

Using the information provided in Chapter 3, we can characterize the frequency behavior of the system in the following way:

\[ H(f) = H_z(z)\Big|_{z=\exp(j2\pi f/f_s)} = \sum_{n=0}^{N-1} b_n \exp\!\left(-j2\pi n\frac{f}{f_s}\right). \quad (5.7) \]

EXAMPLE 5.1.– the first order FIR filter

We represent a first order FIR filter as follows:
\[ y(k) = \frac{1}{2}x(k) + \frac{1}{2}x(k-1). \quad (5.8) \]

This example corresponds to the sampling of the continuous-time relation $y(t) = \frac{1}{2}\left(x(t) + x(t-T_s)\right)$, where $T_s$ designates the sampling period. Carrying out the Fourier transform of equation (5.8), we obtain:

\[ Y(f) = \frac{1}{2}\left[1 + \exp\!\left(-j2\pi\frac{f}{f_s}\right)\right]X(f). \]

This expression can then be factorized as follows:

\[ Y(f) = \frac{1}{2}\exp\!\left(-j\pi\frac{f}{f_s}\right)\left[\exp\!\left(j\pi\frac{f}{f_s}\right) + \exp\!\left(-j\pi\frac{f}{f_s}\right)\right]X(f), \]

which leads us to the following relation between Y(f) and X(f):

\[ Y(f) = \exp\!\left(-j\pi\frac{f}{f_s}\right)\cos\!\left(\pi\frac{f}{f_s}\right)X(f). \]

The corresponding transfer function then equals:

\[ H(f) = \frac{Y(f)}{X(f)} = \exp\!\left(-j\pi\frac{f}{f_s}\right)\cos\!\left(\pi\frac{f}{f_s}\right). \quad (5.9) \]

This filter is called a cosinusoidal filter. It is a low-pass filter, which conserves the continuous component but eliminates the frequency components situated at $\pm f_s/2$ (see Figure 5.1). FIR filters with equal impulse response coefficients are called comb filters.
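Equations (5.3) and (5.7) translate directly into code. The sketch below is ours, not the book's (the sampling frequency fs = 8,000 Hz is an arbitrary choice for the check); it verifies the cosinusoidal magnitude response (5.9) of this first-order filter.

```python
import cmath, math

def fir_filter(b, x):
    """Direct-form FIR, equation (5.3): y(k) = sum_j b[j] x(k-j)."""
    return [sum(bj * x[k - j] for j, bj in enumerate(b) if k - j >= 0)
            for k in range(len(x))]

def freq_response(b, f, fs):
    """Equation (5.7): H(f) = sum_n b[n] exp(-j 2 pi n f / fs)."""
    return sum(bn * cmath.exp(-2j * math.pi * n * f / fs)
               for n, bn in enumerate(b))

b = [0.5, 0.5]                       # first-order filter of equation (5.8)
fs = 8000.0
for f in (0.0, 1000.0, 2000.0, 4000.0):
    # |H(f)| must equal |cos(pi f / fs)|, equation (5.9)
    assert abs(abs(freq_response(b, f, fs)) - abs(math.cos(math.pi*f/fs))) < 1e-12

print(fir_filter(b, [1, 0, 0, 0]))   # impulse input → [0.5, 0.5, 0.0, 0.0]
```

Feeding an impulse recovers the coefficients themselves, illustrating identification (5.5); at f = fs/2 the response vanishes, as stated above.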
[Figure 5.1. Frequency representation of the module of H(f) versus normalized frequency]

In equation (5.9), the term $\exp\!\left(-j\pi\frac{f}{f_s}\right)$ corresponds to a delay $\tau = \frac{1}{2f_s} = \frac{T_s}{2}$ in the time domain, which is the propagation time of the signal across the filter. The impulse response is thus written:

\[ h(k) = \frac{1}{2}\left[\delta(k) + \delta(k-1)\right]. \]

EXAMPLE 5.2.– second order FIR filter.

Compared with the above example, this kind of filter introduces a supplementary sample in the difference equation:

\[ y(k) = \frac{1}{4}\left[x(k) + 2x(k-1) + x(k-2)\right]. \]
The Fourier transform of the difference equation gives us:

\[ Y(f) = \frac{1}{4}\left[1 + 2\exp\!\left(-j2\pi\frac{f}{f_s}\right) + \exp\!\left(-j4\pi\frac{f}{f_s}\right)\right]X(f). \]

By factorizing $\exp\!\left(-j2\pi\frac{f}{f_s}\right)$, we have the following relation:

\[ Y(f) = \frac{1}{4}\exp\!\left(-j2\pi\frac{f}{f_s}\right)\left[\exp\!\left(j2\pi\frac{f}{f_s}\right) + 2 + \exp\!\left(-j2\pi\frac{f}{f_s}\right)\right]X(f), \]

or:

\[ Y(f) = \frac{1}{2}\exp\!\left(-j2\pi\frac{f}{f_s}\right)\left[\cos\!\left(2\pi\frac{f}{f_s}\right) + 1\right]X(f). \]

This filter is called a raised cosinusoidal filter; it is characterized by its transfer function:

\[ H(f) = \frac{Y(f)}{X(f)} = \frac{1}{2}\exp\!\left(-j2\pi\frac{f}{f_s}\right)\left[1 + \cos\!\left(2\pi\frac{f}{f_s}\right)\right] \]

and its impulse response equals:

\[ h(k) = \frac{1}{4}\delta(k) + \frac{1}{2}\delta(k-1) + \frac{1}{4}\delta(k-2). \]

The graphic representation in Figure 5.2 traces this frequency response together with that of the first order filter, shown as a dotted line.
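The second-order filter can be obtained by convolving the impulse response [1/2, 1/2] of example 5.1 with itself. Here is a short check (ours, not the book's; fs is an arbitrary choice) of both the convolution and the raised-cosine magnitude (1 + cos(2πf/fs))/2:

```python
import cmath, math

def convolve(h1, h2):
    """Discrete convolution of two finite impulse responses."""
    out = [0.0] * (len(h1) + len(h2) - 1)
    for i, a in enumerate(h1):
        for j, b in enumerate(h2):
            out[i + j] += a * b
    return out

h1 = [0.5, 0.5]
h2 = convolve(h1, h1)
print(h2)                            # → [0.25, 0.5, 0.25]

# raised-cosine magnitude check at f/fs = 0.25
fs, f = 8000.0, 2000.0
H = sum(c * cmath.exp(-2j * math.pi * n * f / fs) for n, c in enumerate(h2))
assert abs(abs(H) - 0.5 * (1 + math.cos(2 * math.pi * f / fs))) < 1e-12
```

Cascading the first-order filter twice and convolving its impulse response with itself are the same operation, which is why the magnitude of example 5.2 is the square of that of example 5.1.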
[Figure 5.2. Frequency representation of the module of H(f): example 5.2 compared with example 5.1 (dotted line)]

We observe that increasing the order of the filter increases its selectivity. This is foreseeable, since the impulse response of the filter of example 5.2, $\frac{1}{4}\delta(k) + \frac{1}{2}\delta(k-1) + \frac{1}{4}\delta(k-2)$, corresponds to the convolution of the impulse response of the filter of example 5.1, $\frac{1}{2}\left[\delta(k) + \delta(k-1)\right]$, with itself. The filter of example 5.2 thus amounts to applying the filter of example 5.1 twice in sequence.

5.1.2. Linear phase FIR filters

FIR filters make it possible to realize linear phase systems. This property is often very useful in certain applications, especially in telecommunications.

5.1.2.1. Representation

The transfer function H(f) is a complex quantity that can be written using its module and its phase:

\[ H(f) = |H(f)|\exp\!\left(j\varphi(f)\right). \quad (5.10) \]

Using the properties of the Fourier transform, if the impulse response {h(n)} is real, the module of the transfer function |H(f)| is an even function of the frequency f.
The linear phase constraint brings us to:

\[ \varphi(f) = \beta - 2\pi\alpha\frac{f}{f_s}, \quad \text{with } -\frac{f_s}{2} < f < \frac{f_s}{2}. \quad (5.11) \]

In Chapter 4, we introduced the concept of group delay, represented as follows:

\[ \tau_g = -\frac{d\varphi(\omega)}{d\omega} = -\frac{1}{2\pi}\frac{d\varphi(f)}{df}. \quad (5.12) \]

This group delay represents the time a frequency component of the signal needs to cross the system. When φ(f) is linear, $\tau_g$ is constant whatever the frequency component: the linear phase filter then introduces a simple delay. This property is noteworthy, especially for signals modulated onto carriers.

If we replace φ(f) in equation (5.10) by its expression (5.11), we get:

\[ H(f) = \sum_{n=0}^{N-1} h(n)\exp\!\left(-j2\pi n\frac{f}{f_s}\right) = |H(f)|\exp\!\left(j\left(\beta - 2\pi\alpha\frac{f}{f_s}\right)\right). \quad (5.13) \]

By identifying the real and imaginary parts in equation (5.13), we obtain the following relations:

\[ \sum_{n=0}^{N-1} h(n)\cos\!\left(2\pi n\frac{f}{f_s}\right) = |H(f)|\cos\!\left(\beta - 2\pi\alpha\frac{f}{f_s}\right) \quad (5.14) \]

and

\[ -\sum_{n=0}^{N-1} h(n)\sin\!\left(2\pi n\frac{f}{f_s}\right) = |H(f)|\sin\!\left(\beta - 2\pi\alpha\frac{f}{f_s}\right). \quad (5.15) \]

By combining equations (5.14) and (5.15), the linear phase condition is expressed as:

\[ \tan\!\left(\beta - 2\pi\alpha\frac{f}{f_s}\right) = \frac{-\displaystyle\sum_{n=0}^{N-1} h(n)\sin\!\left(2\pi n\frac{f}{f_s}\right)}{\displaystyle\sum_{n=0}^{N-1} h(n)\cos\!\left(2\pi n\frac{f}{f_s}\right)} \quad (5.16) \]
which leads us, by cross-multiplying equation (5.16), to the following condition:

\[ \sum_{n=0}^{N-1} h(n)\cos\!\left(2\pi n\frac{f}{f_s}\right)\sin\!\left(\beta - 2\pi\alpha\frac{f}{f_s}\right) + \sum_{n=0}^{N-1} h(n)\sin\!\left(2\pi n\frac{f}{f_s}\right)\cos\!\left(\beta - 2\pi\alpha\frac{f}{f_s}\right) = 0 \quad (5.17) \]

which can be rewritten as:

\[ \sum_{n=0}^{N-1} h(n)\sin\!\left(2\pi\frac{f}{f_s}(n-\alpha) + \beta\right) = 0. \quad (5.18) \]

Equation (5.18) can be interpreted as a Fourier series development whose development coefficients are the impulse response coefficients h(n); such a development is unique. From here, equation (5.18) can be exploited to establish a series of relations between the parameters α, β and N.

First, consider the situation where N is even. We can decompose the sum of equation (5.18) into two terms, as follows:

\[ \sum_{n=0}^{N/2-1} h(n)\sin\!\left(2\pi\frac{f}{f_s}(n-\alpha) + \beta\right) + \sum_{n=N/2}^{N-1} h(n)\sin\!\left(2\pi\frac{f}{f_s}(n-\alpha) + \beta\right) = 0. \quad (5.19) \]

By introducing the change of variable $m = N-1-n$ in the second term of (5.19), we obtain:

\[ \sum_{n=0}^{N/2-1} h(n)\sin\!\left(2\pi\frac{f}{f_s}(n-\alpha) + \beta\right) + \sum_{m=0}^{N/2-1} h(N-1-m)\sin\!\left(2\pi\frac{f}{f_s}(N-1-m-\alpha) + \beta\right) = 0. \quad (5.20) \]

The linear phase condition can then be satisfied by imposing specific constraints on the impulse response. First, let us assume that the impulse response satisfies:
\[ h(n) = h(N-1-n) \quad \forall n = 0,\ldots,N-1. \quad (5.21) \]

Under the hypothesis in equation (5.21), equation (5.20) becomes:

\[ \sum_{n=0}^{N/2-1} h(n)\left[\sin\!\left(2\pi\frac{f}{f_s}(n-\alpha) + \beta\right) + \sin\!\left(2\pi\frac{f}{f_s}(N-1-n-\alpha) + \beta\right)\right] = 0. \quad (5.22) \]

The condition in equation (5.22) is satisfied if, for every n, we have:

\[ \sin\!\left(2\pi\frac{f}{f_s}(n-\alpha) + \beta\right) = -\sin\!\left(2\pi\frac{f}{f_s}(N-1-n-\alpha) + \beta\right). \quad (5.23) \]

Equation (5.23) leads to two conditions on the angles. Introducing an integer γ, the first is represented by:

\[ 2\pi\frac{f}{f_s}(n-\alpha) + \beta = 2\pi\frac{f}{f_s}(N-1-n-\alpha) + \beta + (2\gamma+1)\pi. \quad (5.24) \]

However, this does not allow us to decide on the values of β and α. The second is represented by:

\[ 2\pi\frac{f}{f_s}(n-\alpha) + \beta = -2\pi\frac{f}{f_s}(N-1-n-\alpha) - \beta + 2\gamma\pi, \quad (5.25) \]

or:

\[ \pi\frac{f}{f_s}(N-1-2\alpha) + \beta = \gamma\pi. \quad (5.26) \]

Since equation (5.26) must be valid for all frequencies f, we obtain:

\[ \alpha = \frac{N-1}{2}, \qquad \beta = 0 \text{ or } \pi, \text{ respectively, for } \gamma = 0 \text{ and } 1. \quad (5.27) \]

COMMENT 5.1.– we can also take $h(n) = -h(N-1-n)$, and continue the same process when N is odd.
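Condition (5.18), with α = (N − 1)/2 and β = 0, can be verified numerically for any even-symmetric impulse response. In this sketch of ours, the coefficients and the sampling frequency are arbitrary example values:

```python
import math

def linear_phase_residual(h, f, fs):
    """Left-hand side of equation (5.18) with alpha = (N-1)/2, beta = 0."""
    N = len(h)
    alpha = (N - 1) / 2
    return sum(h[n] * math.sin(2 * math.pi * (f / fs) * (n - alpha))
               for n in range(N))

h = [1.0, 2.0, 3.0, 2.0, 1.0]        # h(n) = h(N-1-n), equation (5.21)
for f in (100.0, 1234.5, 3999.0):
    assert abs(linear_phase_residual(h, f, 8000.0)) < 1e-12
print("equation (5.18) satisfied")
```

The terms cancel pairwise: for n and N − 1 − n, the sine arguments are opposite, exactly the mechanism of equations (5.22)–(5.23).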
The possible situations are summarized in Table 5.1:

N even: α = (N − 1)/2, β = 0 or π
N odd: α = (N − 1)/2, β = π/2 or 3π/2

Table 5.1. Values of α and β according to the parity of N

These symmetries are illustrated in Figures 5.3 and 5.4.

[Figure 5.3. Even symmetry of the impulse response]
[Figure 5.4. Odd symmetry of the impulse response]

5.1.2.2. Different forms of FIR linear phase filters

The various combinations of parameters give four types of different filters:

          | h(n) = h(N−1−n) | h(n) = −h(N−1−n)
N even    | Type I          | Type III
N odd     | Type II         | Type IV

Table 5.2. Different types of linear phase filters

Each of these four types gives a different frequency response.

Type I:

\[ H(f) = H_r(f)\exp\!\left(-j\pi(N-1)\frac{f}{f_s}\right) \quad (5.28) \]
with:

\[ H_r(f) = 2\sum_{n=0}^{N/2-1} h(n)\cos\!\left(2\pi\frac{f}{f_s}\left(\frac{N-1}{2}-n\right)\right). \quad (5.29) \]

Type II:

\[ H(f) = H_r(f)\exp\!\left(-j\pi(N-1)\frac{f}{f_s}\right) \quad (5.30) \]

with:

\[ H_r(f) = h\!\left(\frac{N-1}{2}\right) + 2\sum_{n=0}^{(N-3)/2} h(n)\cos\!\left(2\pi\frac{f}{f_s}\left(\frac{N-1}{2}-n\right)\right). \quad (5.31) \]

Type III:

\[ H(f) = H_r(f)\exp\!\left(-j\pi(N-1)\frac{f}{f_s} + j\frac{\pi}{2}\right) \quad (5.32) \]

with:

\[ H_r(f) = 2\sum_{n=0}^{N/2-1} h(n)\sin\!\left(2\pi\frac{f}{f_s}\left(\frac{N-1}{2}-n\right)\right). \quad (5.33) \]

Type IV:

\[ H(f) = H_r(f)\exp\!\left(-j\pi(N-1)\frac{f}{f_s} + j\frac{\pi}{2}\right) \quad (5.34) \]

with:

\[ H_r(f) = 2\sum_{n=0}^{(N-3)/2} h(n)\sin\!\left(2\pi\frac{f}{f_s}\left(\frac{N-1}{2}-n\right)\right). \quad (5.35) \]

We see that for a Type IV filter, since the number of samples is odd, the condition $h(n) = -h(N-1-n)$ implies:

\[ h\!\left(\frac{N-1}{2}\right) = 0. \quad (5.36) \]
EXAMPLE 5.3.– finding the expression of H(f) for type II filters

The Fourier transform is expressed by the following relation:

\[ H(f) = \sum_{n=0}^{N-1} h(n)\exp\!\left(-j2\pi n\frac{f}{f_s}\right). \]

We develop this expression by separating the above sum into three terms and injecting the symmetry condition $h(n) = h(N-1-n)$:

\[ H(f) = \sum_{n=0}^{(N-3)/2} h(n)e^{-j2\pi n f/f_s} + \sum_{n=(N+1)/2}^{N-1} h(N-1-n)e^{-j2\pi n f/f_s} + h\!\left(\frac{N-1}{2}\right)e^{-j\pi(N-1)f/f_s}. \]

By introducing a change of variable ($m = N-1-n$) in the second term, we obtain:

\[ H(f) = \sum_{n=0}^{(N-3)/2} h(n)e^{-j2\pi n f/f_s} + e^{-j2\pi(N-1)f/f_s}\sum_{n=0}^{(N-3)/2} h(n)e^{+j2\pi n f/f_s} + h\!\left(\frac{N-1}{2}\right)e^{-j\pi(N-1)f/f_s}. \]

This expression leads us directly to the formula obtained above, on condition that we factorize the phase term:

\[ H(f) = e^{-j\pi(N-1)f/f_s}\left[h\!\left(\frac{N-1}{2}\right) + 2\sum_{n=0}^{(N-3)/2} h(n)\cos\!\left(2\pi\frac{f}{f_s}\left(\frac{N-1}{2}-n\right)\right)\right]. \]

EXAMPLE 5.4.– filter of size N = 11 with even symmetry

h(0) = h(10) = 0
h(1) = h(9) = 0.0402
h(2) = h(8) = 0.2008
h(3) = h(7) = 0.5098
h(4) = h(6) = 0.8492
h(5) = 1

Table 5.3. Values of the coefficients of the finite impulse response of example 5.4
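With the coefficients of Table 5.3 (N = 11, even symmetry, hence a type II filter), we can check numerically that H(f)exp(jπ(N−1)f/fs) is real, i.e. that the phase is linear with a group delay of (N−1)/2 = 5 samples. The sketch is ours; fs = 8,000 Hz is an arbitrary choice.

```python
import cmath, math

h = [0, 0.0402, 0.2008, 0.5098, 0.8492, 1, 0.8492, 0.5098, 0.2008, 0.0402, 0]
N, fs = len(h), 8000.0

for f in (250.0, 1000.0, 2750.0):
    H = sum(hn * cmath.exp(-2j * math.pi * n * f / fs) for n, hn in enumerate(h))
    # remove the linear phase term exp(-j pi f (N-1)/fs): what remains must be real
    Hr = H * cmath.exp(1j * math.pi * f * (N - 1) / fs)
    assert abs(Hr.imag) < 1e-12

print("linear phase verified: group delay = (N-1)/2 =", (N - 1) / 2, "samples")
```

The residual real quantity Hr is exactly the amplitude $H_r(f)$ of equation (5.31); its sign changes account for the β = 0 or π alternative of Table 5.1.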
[Figure 5.5. Impulse response of example 5.4]

5.1.2.3. Position of zeros in FIR filters

We recall that the z-transform linked to FIR filters is written as follows:

\[ H_z(z) = \sum_{n=0}^{N-1} h(n)z^{-n} = h(0) + h(1)z^{-1} + \cdots + h(N-1)z^{-(N-1)}. \quad (5.37) \]

Now, following section 5.1.2.2, to obtain a linear phase filter the impulse response must satisfy the relation:

\[ h(n) = \pm h(N-1-n). \quad (5.38) \]

From here, the z-transform of a FIR linear phase filter can be rewritten as follows:

\[ H_z(z) = h(0) + h(1)z^{-1} + \cdots \pm h(1)z^{-N+2} \pm h(0)z^{-N+1}. \quad (5.39) \]

With equation (5.39), we can evaluate $H_z(z^{-1})$; we get:

\[ H_z(z^{-1}) = h(0) + h(1)z + \cdots \pm h(1)z^{N-2} \pm h(0)z^{N-1} = z^{N-1}\left[z^{-(N-1)}h(0) + z^{-(N-2)}h(1) + \cdots \pm h(0)\right] = \pm z^{N-1}H_z(z). \quad (5.40) \]
Equation (5.40) thus introduces a constraint on the placement of the zeros of the transfer function of a linear phase finite impulse response filter:
– if $z_i$ is a zero of the transfer function, then its inverse $z_i^{-1}$ is also a zero;
– a linear phase filter cannot be a minimum-phase filter¹, because $z_i$ and $z_i^{-1}$ cannot be simultaneously situated inside the unit circle of the z-plane.

As well, insofar as the coefficients of the filter are real, the conjugates $z_i^*$ and $(z_i^{-1})^* = (z_i^*)^{-1}$ are also zeros. In conclusion, the z-transform of a linear phase FIR filter can be expressed as follows:

\[ H_z(z) = h(0)\prod_{i=1}^{p}\left(1 - r_i e^{j\varphi_i}z^{-1}\right)\left(1 - r_i e^{-j\varphi_i}z^{-1}\right)\left(1 - \frac{1}{r_i}e^{j\varphi_i}z^{-1}\right)\left(1 - \frac{1}{r_i}e^{-j\varphi_i}z^{-1}\right). \quad (5.41) \]

[Figure 5.6. Representation in the z-plane of the constraint on the placement of the zeros: $z_i$, $z_i^*$, $1/z_i$ and $1/z_i^*$]

1 We will return to this point in Chapter 6.
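Relation (5.40), and hence the reciprocal placement of the zeros, can be checked at arbitrary points of the z-plane. The sketch below is ours and uses an even-symmetric example filter (the + sign in (5.38)):

```python
import cmath  # complex arithmetic (cmath only needed if extending the sketch)

def Hz(h, z):
    """z-transform of a FIR filter, equation (5.37)."""
    return sum(hn * z**(-n) for n, hn in enumerate(h))

h = [1.0, 2.0, 3.0, 2.0, 1.0]        # h(n) = h(N-1-n): linear phase
N = len(h)
for z in (0.8 + 0.3j, 1.7 - 2.2j, -0.4 + 0.9j):
    # equation (5.40): H(z^{-1}) = z^{N-1} H(z) in the symmetric case
    assert abs(Hz(h, 1 / z) - z**(N - 1) * Hz(h, z)) < 1e-9
print("zeros come in reciprocal pairs")
```

In particular, if $H_z(z_i) = 0$, then $H_z(z_i^{-1}) = z_i^{N-1}H_z(z_i) = 0$ as well, which is the first property listed above.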
5.1.3. Summary of the properties of FIR filters

FIR filters offer the following features:
– they make it possible to realize linear phase filters, which helps control the phase distortion that filtering can introduce;
– the computation noise remains low and easy to control: since these filters present no loops in their structure, the noise arising from a finite-precision implementation is easily controlled. We will return to this point in Chapter 7.

However, compared with infinite impulse response filters, they present the drawback of requiring a higher order to satisfy the same frequency specifications, which can lead to a more complex operating mode.

5.2. Synthesizing FIR filters using frequential specifications

Synthesizing a finite impulse response filter mainly consists of fixing the values of the coefficients of the impulse response. These samples, called filter coefficients, are obtained by approaching as closely as possible an ideal frequency response. Many methods exist and it is difficult to present an exhaustive list; however, several classes are notable for their simplicity or for their performance in approximating an ideal filter.

The first method presented here is the best known for its properties and for its simplicity. Commonly known as the windowing method, it corresponds to a weighting of the truncated impulse response of the filter that follows directly from the ideal frequency specifications. In the sections that follow, we present several weightings that allow a compromise between the attenuation in the stop band and the rapidity of decrease in the transition band, and we also discuss the influence of this truncation on the impulse response of the ideal filter.
The second method, which entails more complex calculations, is an optimal approach in the sense of minimizing a "cost" function expressed by the gap between the frequency response of the ideal filter and that of the filter we are trying to synthesize. 5.2.1. Windows Usually, we cannot simultaneously process all the samples of a signal; we process them in reduced segments, chosen with an analysis window. By choosing a window of adapted size, we can generally assume that the signal is stationary over the duration of the analysis. Apodization windows are often used in signal processing.
- 169. Finite Impulse Response Filters 153 The windowed signal xw(k) is thus represented as the product of the signal and of the weighting window:

x_w(k) = x(k) \cdot w(k), (5.42)

where x(k) is the signal to be analyzed and w(k) the weighting or temporal window, of null value outside the observation interval. This temporal product becomes, in the frequential domain, a convolution product of the Fourier transforms of the sequence and of the window. Even if a rectangular window seems the most obvious choice for this operation, it is not necessarily the one most widely used. We now calculate the Fourier transform of this rectangular and causal window w(k) on N points; its z-transform is easily expressed because we recognize the partial sum of a geometric sequence with ratio z^{-1}. We obtain:

\sum_{k=0}^{N-1} z^{-k} = \frac{1 - z^{-N}}{1 - z^{-1}}.

By taking z = \exp(j 2\pi f_r), we deduce from it the module of the Fourier transform of the rectangular window:

\left| \frac{1 - \exp(-j 2\pi N f_r)}{1 - \exp(-j 2\pi f_r)} \right| = \left| \frac{\exp(-j\pi N f_r)}{\exp(-j\pi f_r)} \cdot \frac{\exp(j\pi N f_r) - \exp(-j\pi N f_r)}{\exp(j\pi f_r) - \exp(-j\pi f_r)} \right| = \left| \frac{\sin(\pi N f_r)}{\sin(\pi f_r)} \right|.

We see that this module cancels itself when the normalized frequency is a non-zero multiple of 1/N. For the continuous component, the module equals N. The width of the principal lobe is 2/N in a normalized module/frequency representation. Using the information in Figure 5.9, we see that the secondary lobes are attenuated by at least 13 dB. This means that the ratio between the amplitudes of the first secondary lobe and the principal lobe equals –13 dB:
- 170. 154 Digital Filters Design for Signal and Image Processing

\lambda = 20 \log_{10} \left( \frac{\left| \sin(\pi N f_r)/\sin(\pi f_r) \right|_{f_r = \text{central frequency of the first secondary lobe}}}{\left| \sin(\pi N f_r)/\sin(\pi f_r) \right|_{f_r \to 0}} \right) \approx -13 \text{ dB}.

This result is explained by the abrupt transition of the sequence of values w(k) from 1 to 0. To avoid this kind of variation, other windows have been proposed: triangular windows or, more often, the Blackman and Kaiser windows and the generalized Hanning polynomial windows, which include the Hanning and Hamming windows. These last two are the most widely used. Notably, these different classes of windows are characterized by a progressive passage from 1 to 0. The global form of the module of the Fourier transform of a temporal window is still composed of a central lobe and of secondary lobes; however, the value of the ratio λ between the amplitudes of the principal lobe and the first secondary lobe varies.

Category I: polynomial windows. The module of the Fourier transform is of the form \left( \frac{\sin(\pi N f_r / a)}{\sin(\pi f_r)} \right)^a:
– Rectangular (a = 1): the module of the Fourier transform equals \left| \frac{\sin(\pi N f_r)}{\sin(\pi f_r)} \right|, with w(k) = 1 for k = 0, …, N – 1 and w(k) = 0 otherwise.
– Triangular (Bartlett) (a = 2): the module of the Fourier transform equals \frac{2}{N} \left( \frac{\sin(\pi N f_r / 2)}{\sin(\pi f_r)} \right)^2, with L = (N – 1)/2 and
- 171. Finite Impulse Response Filters 155
w(k) = k/L for 0 ≤ k ≤ L, w(k) = 2 – k/L for L ≤ k ≤ N – 1, and w(k) = 0 otherwise.
– Parabolic (a = 3): the module of the Fourier transform is of the form \left( \frac{\sin(\pi N f_r / 3)}{\sin(\pi f_r)} \right)^3.

Category II: generalized Hanning windows, w(k) = a - (1 - a) \cos(2\pi k / N) for k = 0, …, N – 1:
– Rectangular (a = 1): w(k) = 1 for k = 0, …, N – 1.
– Hanning (a = 0.5): w(k) = 0.5 - 0.5 \cos(2\pi k / N) for k = 0, …, N – 1.
– Hamming (a = 0.54): w(k) = 0.54 - 0.46 \cos(2\pi k / N) for k = 0, …, N – 1.

Other categories:
– Blackman: w(k) = 0.42 - 0.5 \cos\left(2\pi \frac{k}{N-1}\right) + 0.08 \cos\left(4\pi \frac{k}{N-1}\right) for k = 0, …, N – 1.
– Kaiser: w(k) = \frac{I_0\left( \alpha \sqrt{1 - \left( \frac{k - L}{L} \right)^2} \right)}{I_0(\alpha)} for k = 0, …, N – 1, with L = (N – 1)/2, where I_0(.) is the modified Bessel function of the first kind and α is a parameter generally between 4 and 9, chosen by the user. The modified Bessel function of the first kind can be approximated by I_0(x) \approx 1 + \sum_{m=1}^{M} \left[ \frac{1}{m!} \left( \frac{x}{2} \right)^m \right]^2. The given approximation considers the sum of M terms; generally M is taken greater than 10 (often M = 14 is kept).
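The –13 dB figure for the rectangular window can be checked numerically. The following sketch (not from the book; NumPy assumed) evaluates the closed form \left|\sin(\pi N f_r)/\sin(\pi f_r)\right| and locates the first secondary lobe between f_r = 1/N and f_r = 2/N:

```python
import numpy as np

# Numerical illustration of the rectangular window spectrum: the first
# secondary lobe is about 13 dB below the principal lobe.
N = 64
f = np.linspace(1e-6, 0.5, 20000)                 # normalized frequency f_r
mag = np.abs(np.sin(np.pi * N * f) / np.sin(np.pi * f))
mag_db = 20 * np.log10(mag / N)                   # 0 dB at the principal lobe

side = (f > 1 / N) & (f < 2 / N)                  # first secondary lobe region
first_sidelobe_db = mag_db[side].max()
print(f"first secondary lobe: {first_sidelobe_db:.1f} dB")
```

The value printed is close to –13.3 dB, in agreement with the ratio λ above.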
- 172. 156 Digital Filters Design for Signal and Image Processing [Figure 5.7. Temporal representations of the rectangular, Hamming, Hanning, Bartlett and Kaiser windows] [Figure 5.8. Temporal representation of Kaiser windows for different values of α (α = 4 to 8)]
- 173. Finite Impulse Response Filters 157 Looking at Figure 5.9, we see that the module, expressed in dB, of the rectangular window presents a main lobe whose width is two times smaller than those of the modules of the Hamming, Hanning and Bartlett windows. However, the attenuation of its secondary lobes is clearly lower: 13 dB for the rectangular window, as against 25 dB for the triangular window, 41 dB for the Hamming window, 31 dB for the Hanning window and 59 dB for the Blackman window. We see in Figure 5.11 that the choice of the parameter α for the Kaiser window conditions its frequential behavior. A value of α equal to 4 gives an attenuation close to that of the Hamming window. To increase this attenuation still further, we can increase the value of α; in compensation, the width of the principal lobe becomes larger. The parameter α thus allows a compromise between the width of the principal lobe and the amplitude of the secondary lobes. [Figure 5.9. Modules of the Fourier transforms of the rectangular, Kaiser (with α = 4) and Hanning windows]
- 174. 158 Digital Filters Design for Signal and Image Processing [Figure 5.10. Modules of the Fourier transforms of the Bartlett, Hanning and Hamming windows]

Type of window | Attenuation of secondary lobes | Width of the principal lobe (relative to the rectangular window) | Spectrum decrease
Rectangular | 13 dB (poor) | reference value | 20 dB per decade (weak)
Triangular | 25 dB (weak) | double | 40 dB per decade (acceptable)
Hanning | 31 dB (acceptable) | double | 60 dB per decade (good)
Hamming | 41 dB (correct) | double | 20 dB per decade (weak)
Blackman | 59 dB (good) | triple | 60 dB per decade (good)
Kaiser | of the order of 30-40 dB | of the order of double | 20 dB per decade (weak)

Table 5.4. Summary of characteristics of secondary spectral lobes for different types of windows
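The attenuations in Table 5.4 can be reproduced numerically. The sketch below (not from the book; NumPy assumed, and `sidelobe_attenuation_db` is a hypothetical helper) builds the windows from the definitions of the previous table and measures the highest secondary lobe of each spectrum:

```python
import numpy as np

# Hypothetical helper: level of the highest secondary lobe of a window's
# spectrum relative to its principal lobe, in dB.
def sidelobe_attenuation_db(w, nfft=16384):
    W = np.abs(np.fft.rfft(w, nfft))
    W_db = 20 * np.log10(W / W.max() + 1e-300)
    i = 1
    while i < len(W_db) - 1 and W_db[i + 1] < W_db[i]:
        i += 1                      # walk down the principal lobe to its first null
    return W_db[i:].max()           # highest secondary lobe beyond that null

N = 101
k = np.arange(N)
windows = {
    "rectangular": np.ones(N),
    "hanning":     0.5  - 0.5  * np.cos(2 * np.pi * k / N),
    "hamming":     0.54 - 0.46 * np.cos(2 * np.pi * k / N),
    "blackman":    0.42 - 0.5 * np.cos(2 * np.pi * k / (N - 1))
                        + 0.08 * np.cos(4 * np.pi * k / (N - 1)),
}
for name, w in windows.items():
    print(f"{name:12s} {sidelobe_attenuation_db(w):6.1f} dB")
```

The printed values are close to the 13/31/41/59 dB figures quoted in the table.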
- 175. Finite Impulse Response Filters 159 [Figure 5.11. Frequential representations of Kaiser windows for α equal to 4, then 7] 5.2.2. Synthesizing FIR filters using the windowing method 5.2.2.1. Low-pass filters Let us assume we want to synthesize an ideal digital filter whose frequency response, shown in Figure 5.12, is defined between –fs/2 and fs/2. The filter is of the low-pass type, with cut-off frequency fc. [Figure 5.12. Frequency response Hideal(f) of an ideal low-pass filter]
- 176. 160 Digital Filters Design for Signal and Image Processing We see that this response is reproduced with period fs. We introduce H_{ideal,continuous}(f), represented as follows:

H_{ideal,continuous}(f) = \begin{cases} 1 & |f| \leq f_c \\ 0 & \text{elsewhere} \end{cases} (5.43)

The ideal digital filter then satisfies:

H_{ideal}(f) = H_{ideal,continuous}(f) * \sum_{k=-\infty}^{+\infty} \delta(f - k f_s). (5.44)

Using the inverse Fourier transform of equation (5.44), we can link to these specifications an impulse response that corresponds to the sampling, at the instants t = k/f_s, of the inverse Fourier transform of H_{ideal,continuous}(f) (5.45). Equation (5.45) leads to the following values of the discrete impulse response, for all k ≠ 0:

h_{ideal}(k) = \frac{2 f_c}{f_s} \, \frac{\sin(2\pi k f_c / f_s)}{2\pi k f_c / f_s}. (5.46)

The impulse response of the ideal low-pass filter is then equal, for k = 0, to:

h_{ideal}(0) = \frac{2 f_c}{f_s}. (5.47)

This kind of impulse response is, on the one hand, of infinite width and, on the other, non-causal. This means the filter cannot be realized. Taking into account the
- 177. Finite Impulse Response Filters 161 relatively rapid decrease of the ideal impulse response hideal(k), we can approximate the filter using the following steps: – we consider only a part of the impulse response; that is, we multiply the ideal impulse response hideal(k) by an apodization window w(k) centered on 0. The result is a truncated version of the ideal impulse response, written h_{win}(k):

h_{win}(k) = h_{ideal}(k) \, w(k) (5.48)

– we then carry out a temporal shift of the impulse response in order to make the filter causal. By introducing this shift, we do not change the amplitude of the filter specifications, but we modify the phase. If we consider an impulse response of odd length N = 2L + 1, the impulse response is then written as follows:

h_{lp}(k) = \begin{cases} w(k - L) \, \frac{2 f_c}{f_s} \, \frac{\sin(2\pi f_c (k - L)/f_s)}{2\pi f_c (k - L)/f_s} & 0 \leq k \leq 2L \text{ and } k \neq L \\ \frac{2 f_c}{f_s} & k = L \\ 0 & \text{elsewhere} \end{cases} (5.49)

The windowing method then consists of characterizing the effects of this truncation for several types of windows. Figures 5.13, 5.14, 5.15 and 5.16 present the impulse responses and the frequency responses of filters synthesized using the windowing method. Several windows are used, for orders of 20, 50 and 100. The normalized cut-off frequency f_c/f_s is here equal to 0.2.
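The two steps above (truncation by a window, then a shift of L taps) can be sketched as follows, assuming NumPy and a Hamming window; `fir_lowpass` is an illustrative helper, not a function from the book:

```python
import numpy as np

# Windowed-sinc low-pass design: truncate the ideal impulse response
# (5.46)-(5.47), weight it by a Hamming window and shift it by L samples
# to make it causal. np.sinc(x) = sin(pi x)/(pi x).
def fir_lowpass(num_taps, fc_over_fs):
    L = (num_taps - 1) // 2
    k = np.arange(num_taps)
    h_ideal = 2 * fc_over_fs * np.sinc(2 * fc_over_fs * (k - L))
    w = 0.54 - 0.46 * np.cos(2 * np.pi * k / (num_taps - 1))   # Hamming window
    return h_ideal * w

h = fir_lowpass(101, 0.2)            # normalized cut-off fc/fs = 0.2, as in the text
H = np.fft.rfft(h, 4096)
f = np.fft.rfftfreq(4096)
gain_db = 20 * np.log10(np.abs(H) + 1e-12)
print(f"gain at DC: {abs(H[0]):.4f}")
print(f"stop-band peak beyond f = 0.25: {gain_db[f > 0.25].max():.1f} dB")
```

The passband gain stays close to 1 and the stop-band attenuation is of the order of the Hamming window's secondary lobe level.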
- 178. 162 Digital Filters Design for Signal and Image Processing [Figure 5.13. Impulse response of a low-pass filter synthesized with the windowing method using a rectangular window and orders 20, 50, then 100] [Figure 5.14. Normalized frequency response of a low-pass filter synthesized with the windowing method using a rectangular window and orders 20, 50, then 100]
- 179. Finite Impulse Response Filters 163 [Figure 5.15. Normalized frequency response of a low-pass filter synthesized with the windowing method using a Hanning window and orders 20, 50, then 100] [Figure 5.16. Normalized frequency response of a low-pass filter synthesized with the windowing method using a Hamming window and orders 20, 50, then 100]
- 180. 164 Digital Filters Design for Signal and Image Processing 5.2.2.2. High-pass filters Here we assume an ideal filter to be synthesized with the frequency response shown in Figure 5.17. This is a high-pass filter with cut-off frequency fc. [Figure 5.17. Frequency response H_{ideal,hp}(f) of an ideal high-pass filter]

H_{ideal,hp}(f) = \begin{cases} 1 & |f| \geq f_c \\ 0 & \text{otherwise} \end{cases} (5.50)

The frequency response of this high-pass filter is the complement of that of the low-pass filter presented in section 5.2.2.1:

H_{ideal,hp}(f) + H_{ideal}(f) = 1. (5.51)

From here, in the temporal domain, the corresponding impulse response satisfies the equation:

h_{ideal,hp}(k) = \delta(k) - h_{ideal}(k) (5.52)

As in section 5.2.2.1, by considering a windowing operation and a shift making the impulse response of length N = 2L + 1 causal, we obtain:

h_{hp}(k) = \delta(k - L) - h_{lp}(k) (5.53)
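Equation (5.53) can be sketched directly in code (NumPy assumed; `fir_highpass` is an illustrative helper, not from the book), reusing the windowed-sinc low-pass prototype of the previous section:

```python
import numpy as np

# High-pass by spectral inversion of the low-pass prototype:
# h_hp(k) = delta(k - L) - h_lp(k), as in equation (5.53).
def fir_highpass(num_taps, fc_over_fs):
    L = (num_taps - 1) // 2
    k = np.arange(num_taps)
    h_lp = 2 * fc_over_fs * np.sinc(2 * fc_over_fs * (k - L))      # ideal low-pass
    h_lp *= 0.54 - 0.46 * np.cos(2 * np.pi * k / (num_taps - 1))   # Hamming window
    h_hp = -h_lp
    h_hp[L] += 1.0                                                 # + delta(k - L)
    return h_hp

h = fir_highpass(101, 0.2)
H = np.abs(np.fft.rfft(h, 4096))
f = np.fft.rfftfreq(4096)
print(f"|H| at DC: {H[0]:.4f}")                       # close to 0
print(f"|H| at f = 0.4: {H[np.argmin(np.abs(f - 0.4))]:.4f}")   # close to 1
```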
- 181. Finite Impulse Response Filters 165 By taking into account the expression of the impulse response of the low-pass filter obtained by the windowing method in equation (5.49), we get:

h_{hp}(k) = \begin{cases} -w(k - L) \, \frac{2 f_c}{f_s} \, \frac{\sin(2\pi f_c (k - L)/f_s)}{2\pi f_c (k - L)/f_s} & 0 \leq k \leq 2L \text{ and } k \neq L \\ 1 - \frac{2 f_c}{f_s} & k = L \\ 0 & \text{elsewhere} \end{cases} (5.54)

COMMENT 5.2.– even if the synthesized filter is obtained by truncation, we can demonstrate that the windowing method is optimal in the sense of the following error criterion:

\varepsilon^2 = \int_{-f_s/2}^{f_s/2} \left| H_{ideal}(f) - H(f) \right|^2 df (5.55)

Even though it is optimal, this method does not allow the approximation error to be distributed between the passband, the attenuated band and the transition band: the error is essentially concentrated in the transition band. In other words, this method does not help control the approximation errors in the different bands. In the next section, we will present other approximation techniques that allow for a better approximation control in all the frequency bands. This is especially the case with methods that operate by a frequential weighting of the error criterion. 5.3. Optimal approach of equal ripple in the stop-band and passband An optimal solution to the problem of approximating the amplitude specifications is obtained by minimizing a distance criterion between the theoretical frequency response and the synthesized one. To present this approach, we will consider a low-pass linear phase FIR filter of type II (see equations (5.30) and (5.31)); that is, with a symmetrical impulse response and an odd length N.
- 182. 166 Digital Filters Design for Signal and Image Processing We then introduce the quantities:

b(k) = 2 h(k) \text{ for } k \text{ between } 0 \text{ and } \frac{N-3}{2}, \qquad b\left(\frac{N-1}{2}\right) = h\left(\frac{N-1}{2}\right). (5.56)

We have seen in equations (5.30) and (5.31) that the related transfer function satisfies the formula:

H(f) = \left[ h\left(\frac{N-1}{2}\right) + 2 \sum_{n=0}^{(N-3)/2} h(n) \cos\left(2\pi \left(\frac{N-1}{2} - n\right) \frac{f}{f_s}\right) \right] \exp\left(-j\pi (N-1) \frac{f}{f_s}\right) (5.57)

Taking into account equation (5.56), equation (5.57) is written as follows:

H(f) = \exp\left(-j\pi (N-1) \frac{f}{f_s}\right) \sum_{n=0}^{(N-1)/2} b(n) \cos\left(2\pi \left(\frac{N-1}{2} - n\right) \frac{f}{f_s}\right) (5.58)

To simplify matters, we only consider the quantity Hr(f), which determines the amplitude:

H_r(f) = \sum_{n=0}^{(N-1)/2} b(n) \cos\left(2\pi \left(\frac{N-1}{2} - n\right) \frac{f}{f_s}\right). (5.59)

The problem consists of estimating the coefficients b(n) so that the frequency response is optimal, by distributing the approximation error over the passband and the attenuated band.
- 183. Finite Impulse Response Filters 167 [Figure 5.18. Frequency response of a normalized low-pass filter: ripples between 1 – δ1 and 1 + δ1 in the passband (f ≤ f_p), between –δ2 and δ2 in the attenuated band (f ≥ f_a), with a transition band in between] We write δ1 and δ2 respectively for the maximum ripple levels in the passband and the attenuated band. According to Chebyshev's alternation theorem of polynomial approximation theory, for Hr(f) to be the unique solution approximating the desired frequency response Hideal(f) on a sub-interval C of [0, f_s/2], a necessary and sufficient condition is that the error function:

\varepsilon(f) = H_{ideal}(f) - H_r(f) (5.60)

presents at least \frac{N-1}{2} + m + 1 extrema for a range of frequencies in C (with m integer).
- 184. 168 Digital Filters Design for Signal and Image Processing According to this principle, we obtain, for frequencies f_0 < f_1 < \cdots < f_{(N+1)/2} (i.e., m = 1) ranged in increasing order, an error function that alternately takes opposite values:

\varepsilon(f_i) = -\varepsilon(f_{i+1}). (5.61)

We then write δ for the maximum error value |\varepsilon(f_i)| for i = 0, …, (N+1)/2. This principle remains valid when we introduce a weighting function W(f) on the error, so that:

\varepsilon(f) = W(f) \left( H_{ideal}(f) - H_r(f) \right). (5.62)

This function W(f) allows us to condition the relative error in the passband and the stop-band (according to the specifications being used). With low-pass filters, we can introduce, for example, the following formula:

W(f) = \begin{cases} \delta_2 / \delta_1 & \text{in the passband} \\ 1 & \text{in the stop-band} \end{cases} (5.63)

If we introduce f_p as the maximum frequency of the passband and f_a as the minimum frequency of the attenuated band, we can represent the desired response as follows:

H_{ideal}(f) = \begin{cases} 1 & 0 \leq f \leq f_p \\ 0 & f_a \leq f \leq f_s/2 \end{cases} (5.64)

With equations (5.62), (5.63) and (5.64), we then have:

\delta = \delta_2. (5.65)
- 185. Finite Impulse Response Filters 169 The problem of estimating the coefficients b(n) is then reformulated as follows:

(-1)^i \delta = W(f_i) \left[ H_{ideal}(f_i) - H_r(f_i) \right] = W(f_i) \left[ H_{ideal}(f_i) - \sum_{n=0}^{(N-1)/2} b(n) \cos\left(2\pi \left(\frac{N-1}{2} - n\right) \frac{f_i}{f_s}\right) \right] \quad \text{for } i = 0, 1, \ldots, \frac{N+1}{2}. (5.66)

From the results obtained in equation (5.66) for i = 0, 1, …, (N+1)/2, we can derive a matricial relation to estimate the coefficients of the impulse response b(n):

\begin{bmatrix} 1 & \cos\left(2\pi \frac{f_0}{f_s}\right) & \cdots & \cos\left(2\pi \frac{N-1}{2} \frac{f_0}{f_s}\right) & \frac{1}{W(f_0)} \\ 1 & \cos\left(2\pi \frac{f_1}{f_s}\right) & \cdots & \cos\left(2\pi \frac{N-1}{2} \frac{f_1}{f_s}\right) & \frac{-1}{W(f_1)} \\ \vdots & & & & \vdots \\ 1 & \cos\left(2\pi \frac{f_{(N+1)/2}}{f_s}\right) & \cdots & \cos\left(2\pi \frac{N-1}{2} \frac{f_{(N+1)/2}}{f_s}\right) & \frac{(-1)^{(N+1)/2}}{W(f_{(N+1)/2})} \end{bmatrix} \begin{bmatrix} b(0) \\ b(1) \\ \vdots \\ b\left(\frac{N-1}{2}\right) \\ \delta \end{bmatrix} = \begin{bmatrix} H_{ideal}(f_0) \\ H_{ideal}(f_1) \\ \vdots \\ H_{ideal}\left(f_{(N+1)/2}\right) \end{bmatrix} (5.67)

The Remez algorithm is a procedure used to determine the range of frequencies f_i and the corresponding maximum error δ necessary for the resolution of the matricial system in equation (5.67). The steps of this approach are as follows: – step 1 is the initialization phase, which consists of selecting the order of the filter and an initial range of frequencies f_i;
- 186. 170 Digital Filters Design for Signal and Image Processing – step 2 is the calculation of the corresponding maximum error δ, based on equation (5.67) and leading to:

\delta = \frac{\sum_{i=0}^{(N+1)/2} \gamma_i \, H_{ideal}(f_i)}{\frac{\gamma_0}{W(f_0)} - \frac{\gamma_1}{W(f_1)} + \cdots + (-1)^{(N+1)/2} \frac{\gamma_{(N+1)/2}}{W(f_{(N+1)/2})}}, (5.68)

where:

\gamma_i = \prod_{\substack{n=0 \\ n \neq i}}^{(N+1)/2} \frac{1}{\cos\left(2\pi \frac{f_i}{f_s}\right) - \cos\left(2\pi \frac{f_n}{f_s}\right)}. (5.69)

– step 3 is the evaluation of H_r(f_i). Using the initial formula in equation (5.66), we can easily evaluate H_r(f_i) as follows:

H_r(f_i) = H_{ideal}(f_i) - (-1)^i \frac{\delta}{W(f_i)} \quad \text{for } i = 0, 1, \ldots, \frac{N+1}{2}. (5.70)

– step 4 is the evaluation of the error \varepsilon(f) over a dense grid of frequencies. To know whether δ is really the maximum, Lagrange interpolation makes it possible to estimate the values of H_r(f) on a selected dense grid of frequencies. The frequency response on this grid is expressed from the H_r(f_i) as follows:

H_r(f) = \frac{\sum_{i=0}^{(N-1)/2} \frac{B_i}{\cos(2\pi f/f_s) - \cos(2\pi f_i/f_s)} \, H_r(f_i)}{\sum_{i=0}^{(N-1)/2} \frac{B_i}{\cos(2\pi f/f_s) - \cos(2\pi f_i/f_s)}} (5.71)
- 187. Finite Impulse Response Filters 171 where B_i = \prod_{\substack{n=0 \\ n \neq i}}^{(N-1)/2} \frac{1}{\cos(2\pi f_i/f_s) - \cos(2\pi f_n/f_s)}. We can deduce from this the error between the desired filter and the obtained filter on the dense grid of frequencies:

\varepsilon(f) = W(f) \left[ H_{ideal}(f) - H_r(f) \right] (5.72)

If |\varepsilon(f)| < \delta for all the frequencies of the dense grid, the optimal solution has been found. Otherwise, another range of frequencies f_i must be chosen and the procedure resumed from step 2. COMMENT 5.3.– the order of the filter can be modified. – step 5 is the calculation of the coefficients of the impulse response of the filter. Once step 4 has been validated, the optimal values f_i as well as δ are used to calculate the coefficients b(n) from the matricial system in equation (5.67). COMMENT 5.4.– in order to avoid the matricial inversion, a technique based on the fast Fourier transform exists and can be used. The advantage of this technique over the windowing method lies in the control of the specification parameters (f_p, f_a, \delta_1, \delta_2), which are difficult to adjust with other methods. In addition, the approach used in the Kaiser technique allows the order to be estimated with the following formula:

M = \frac{-20 \log_{10}\left(\sqrt{\delta_1 \delta_2}\right) - 13}{14.6 \, \Delta f} + 1, (5.73)

where \Delta f is the normalized width of the transition band, \Delta f = f_a - f_p.
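The whole procedure above is available in standard libraries. The following sketch (assuming SciPy's Parks-McClellan implementation `scipy.signal.remez`; the band edges and ripples are illustrative values, not from the book) estimates the order with Kaiser's formula (5.73) and then runs the equiripple design:

```python
import numpy as np
from scipy.signal import remez, freqz

# Equiripple low-pass design, order estimated by Kaiser's formula (5.73).
fp, fa = 0.20, 0.25              # passband / stop-band edges, normalized (fs = 1)
d1, d2 = 0.01, 0.001             # ripples delta_1 (passband), delta_2 (stop-band)
df = fa - fp
M = int(np.ceil((-20 * np.log10(np.sqrt(d1 * d2)) - 13) / (14.6 * df) + 1))

# weight each band by 1/delta so the weighted error is equiripple, as in (5.63)
h = remez(M, [0, fp, fa, 0.5], [1, 0], weight=[1 / d1, 1 / d2], fs=1.0)
w, H = freqz(h, worN=4096, fs=1.0)
Hm = np.abs(H)
print(f"estimated order M = {M}")
print(f"stop-band peak: {20 * np.log10(Hm[w >= fa].max()):.1f} dB")
```

The achieved ripples are close to the target δ1 and δ2, illustrating the control over the specification parameters that this approach provides.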
- 188. 172 Digital Filters Design for Signal and Image Processing 5.4. Bibliography [JAC 86] JACKSON L.B., Digital Filters and Signal Processing, Kluwer Academic Publishers, Boston, 1986, ISBN 0-89838-174-6. [PRO 92] PROAKIS J. and MANOLAKIS D., Digital Signal Processing: Principles, Algorithms and Applications, 2nd edition, Macmillan, 1992, ISBN 0-02-396815-X.
- 189. Chapter 6 Infinite Impulse Response Filters 6.1. Introduction to infinite impulse response filters Infinite impulse response (IIR) filters are recursive filters characterized by the following difference equation:

a_0 y(k) + a_1 y(k-1) + \cdots + a_{M-1} y(k-M+1) = b_0 x(k) + b_1 x(k-1) + \cdots + b_{N-1} x(k-N+1), (6.1)

where at least one of the coefficients \{a_i\}_{1 \leq i \leq M-1} is non-null. We can easily reduce this to a relation where a_0 = 1; from here on, we will assume that this hypothesis is satisfied. Equation (6.1) is verified for all values of k. We thus have:

y(k-1) = -a_1 y(k-2) - \cdots - a_{M-1} y(k-M) + b_0 x(k-1) + \cdots + b_{N-1} x(k-N). (6.2)

By reinjecting the expression (6.2) of y(k-1) into the difference equation (6.1), we see that y(k) depends on the preceding output values y(k-2), …, y(k-M) and on N + 1 values of the input signal x(k), …, x(k-N). By repeating this step to infinity, we express the output y(k) as a linear combination of an infinity of terms of the input signal x(k). The filter is therefore an IIR filter. Chapter written by Eric GRIVEL and Mohamed NAJIM.
- 190. 174 Digital Filters Design for Signal and Image Processing The z-transform applied to equation (6.1) helps us obtain the transfer function of the filter, which is a rational fraction in z:

H(z) = \frac{Y(z)}{X(z)} = \frac{\sum_{i=0}^{N-1} b_i z^{-i}}{\sum_{i=0}^{M-1} a_i z^{-i}} = \frac{B(z)}{A(z)} (6.3)

The division of the numerator by the denominator following the increasing powers of z^{-1} then leads to an infinite sum of terms. Where the order of a finite impulse response (FIR) filter typically lies between 25 and 400, that of an equivalent IIR filter is generally lower and typically varies between 5 and 20 (or sometimes more). This chapter is organized as follows: after giving several examples of IIR filters, we will discuss their synthesis. Synthesis can be carried out in several ways. One of the most commonly used methods consists of using continuous-time synthesis techniques (as discussed in Chapter 4), then going from the continuous domain to the digital one. For that, it is necessary to follow the rules of changing from the continuous to the discrete domain. To this effect, we will use the following two approaches: the invariance of the impulse response and the bilinear transformation. 6.1.1. Examples of IIR filters Let us look at two discrete filters, which we assume are causal. The first, expressed as y = T_1[x], is governed by the following difference equation:

y(n) = \frac{1}{4} y(n-1) + \frac{1}{4} x(n) + \frac{1}{4} x(n-1).

From here, the transfer function of the system is written as:

H_1(z) = \frac{Y(z)}{X(z)} = \frac{\frac{1}{4}\left(1 + z^{-1}\right)}{1 - \frac{1}{4} z^{-1}} = \frac{N(z)}{D(z)}.

Now we look at a second transformation y = T_2[x] such that:

y(n) = \frac{1}{4} y(n-1) + \frac{3}{4} x(n) - \frac{1}{2} x(n-1).
- 191. Infinite Impulse Response Filters 175 We obtain the following transfer function H2(z):

H_2(z) = \frac{Y(z)}{X(z)} = \frac{\frac{3}{4} - \frac{1}{2} z^{-1}}{1 - \frac{1}{4} z^{-1}}.

Now we consider the stability of the two transformations. For that, we look at the position of the poles of the two transfer functions H1(z) and H2(z). In both cases, the pole satisfies |z| = 1/4 < 1. We can then obtain the impulse responses h1(n) and h2(n), respectively from H1(z) and H2(z). Since H_1(z) = \sum_{n=-\infty}^{+\infty} h_1(n) z^{-n}, i.e. the transfer function is the z-transform of the impulse response of the system, we can carry out the polynomial division of N(z) by D(z). Alternatively, we can directly calculate the impulse responses of the transformations T1 and T2 using the difference equations. Since the systems are causal, the response to a causal input is causal (except in the special case when we have information about the system's initialization). So, let x(n) = \delta(n). We can easily conclude that h_1(0) = \frac{1}{4} and h_1(n) = \left(\frac{1}{4}\right)^n \times \frac{5}{4} for n \geq 1, which we can rewrite in the following form:

h_1(n) = \frac{5}{4} \left(\frac{1}{4}\right)^n u(n) - \delta(n).

As a check:

H_1(z) = \sum_{n=-\infty}^{+\infty} h_1(n) z^{-n} = \frac{5}{4} \sum_{n=0}^{+\infty} \left(\frac{1}{4}\right)^n z^{-n} - 1 = \frac{5}{4} \, \frac{1}{1 - \frac{1}{4} z^{-1}} - 1 = \frac{\frac{1}{4} + \frac{1}{4} z^{-1}}{1 - \frac{1}{4} z^{-1}}.
- 192. 176 Digital Filters Design for Signal and Image Processing We proceed in the same way to calculate h2(n), the impulse response linked to the second transformation T2:

h_2(n) = -\frac{5}{4} \left(\frac{1}{4}\right)^n u(n) + 2\delta(n).

As a check:

H_2(z) = \sum_{n=-\infty}^{+\infty} h_2(n) z^{-n} = -\frac{5}{4} \, \frac{1}{1 - \frac{1}{4} z^{-1}} + 2 = \frac{\frac{3}{4} - \frac{1}{2} z^{-1}}{1 - \frac{1}{4} z^{-1}}.

The frequency responses linked to the transformations T1 and T2 are shown in Figures 6.1 and 6.2. [Figure 6.1. Amplitude diagram of the transformation T1 (low-pass)] [Figure 6.2. Amplitude diagram of the transformation T2 (high-pass)]
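The closed-form impulse responses of T1 and T2 can be checked numerically. This sketch (assuming SciPy's `lfilter`) feeds an impulse through each difference equation and compares with the formulas above; their sum also equals δ(n), consistent with the identity H1(z) + H2(z) = 1 discussed below:

```python
import numpy as np
from scipy.signal import lfilter

n = np.arange(20)
delta = np.zeros(20); delta[0] = 1.0

# T1: y(n) = (1/4) y(n-1) + (1/4) x(n) + (1/4) x(n-1)
h1 = lfilter([0.25, 0.25], [1.0, -0.25], delta)
h1_closed = (5/4) * 0.25**n - (n == 0)          # (5/4)(1/4)^n u(n) - delta(n)

# T2: y(n) = (1/4) y(n-1) + (3/4) x(n) - (1/2) x(n-1)
h2 = lfilter([0.75, -0.5], [1.0, -0.25], delta)
h2_closed = -(5/4) * 0.25**n + 2 * (n == 0)     # -(5/4)(1/4)^n u(n) + 2 delta(n)

print(np.allclose(h1, h1_closed), np.allclose(h2, h2_closed))
print(np.allclose(h1 + h2, delta))
```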
- 193. Infinite Impulse Response Filters 177 Now we will look at the following system combinations: T_1[T_2[.]], T_2[T_1[.]] and T_1[.] + T_2[.]. We calculate their respective impulse responses by using the laws of association of linear time-invariant systems. First, the impulse responses of the cascades T_1[T_2[.]] and T_2[T_1[.]] are identical. The transfer functions and the corresponding impulse responses are, respectively:

H_{1,2}(z) = H_1(z) H_2(z) = H_2(z) H_1(z) = H_{2,1}(z) \quad \text{and} \quad h_{1,2}(n) = h_1(n) * h_2(n) = h_2(n) * h_1(n).

More precisely, we obtain:

H_{1,2}(z) = H_1(z) H_2(z) = \frac{\frac{1}{4}\left(1 + z^{-1}\right)}{1 - \frac{1}{4} z^{-1}} \times \frac{\frac{3}{4} - \frac{1}{2} z^{-1}}{1 - \frac{1}{4} z^{-1}} = \frac{\frac{3}{16} + \frac{1}{16} z^{-1} - \frac{1}{8} z^{-2}}{1 - \frac{1}{2} z^{-1} + \frac{1}{16} z^{-2}}

h_{1,2}(n) = h_1(n) * h_2(n) = \left[ \frac{5}{4}\left(\frac{1}{4}\right)^n u(n) - \delta(n) \right] * \left[ -\frac{5}{4}\left(\frac{1}{4}\right)^n u(n) + 2\delta(n) \right].

The difference equation related to the cascades T_1[T_2[.]] and T_2[T_1[.]] then equals:

y(n) = \frac{1}{2} y(n-1) - \frac{1}{16} y(n-2) + \frac{3}{16} x(n) + \frac{1}{16} x(n-1) - \frac{1}{8} x(n-2)
- 194. 178 Digital Filters Design for Signal and Image Processing [Figure 6.3. Amplitude diagram of the cascade T_1[T_2[.]]] Now we consider the combination T_1[.] + T_2[.]. We see that the transfer function satisfies the identity:

H(z) = H_1(z) + H_2(z) = 1

The corresponding impulse response is h_1(n) + h_2(n) = \delta(n). 6.1.2. Zero-loss and all-pass filters A linear filter is a zero-loss filter if the energy of the signal is conserved during filtering, for all input signals. This means we have:

\sum_{k=-\infty}^{+\infty} |x(k)|^2 = \sum_{k=-\infty}^{+\infty} |y(k)|^2. (6.4)

The energies of the input signal x(k) and of the output signal y(k) can be calculated in the frequential domain using Parseval's theorem. We thus have:

\sum_{k=-\infty}^{+\infty} |x(k)|^2 = \int_{-f_s/2}^{f_s/2} |X(f)|^2 df (6.5)

and

\sum_{k=-\infty}^{+\infty} |y(k)|^2 = \int_{-f_s/2}^{f_s/2} |Y(f)|^2 df = \int_{-f_s/2}^{f_s/2} |H(f)|^2 |X(f)|^2 df. (6.6)
- 195. Infinite Impulse Response Filters 179 According to equations (6.5) and (6.6), a linear filter is zero-loss if we have:

\int_{-f_s/2}^{f_s/2} |X(f)|^2 df = \int_{-f_s/2}^{f_s/2} |H(f)|^2 |X(f)|^2 df. (6.7)

We assume the filter is stable. So that equation (6.7) is verified for all input signals x(k), we must have, with the possible exception of a finite number of frequencies, for all frequency values:

|H(f)| = 1 \quad \forall f. (6.8)

A zero-loss filter that is stable and of finite order is thus an all-pass filter of unity gain. The simplest all-pass filters are FIR filters such as H(z) = \pm 1 or H(z) = \pm z^{-m}. For IIR type filters, the most general expression of the transfer function of a causal all-pass filter is:

H(z) = \pm z^{-m} \, \frac{z^{-(N-1)} \sum_{k=0}^{N-1} a_k^* z^{k}}{\sum_{k=0}^{N-1} a_k z^{-k}}, (6.9)

where a_k^* designates the conjugate of a_k. EXAMPLE 6.1.– we assume that the coefficients a_k are real. With a conjugate pair of poles p_1 = \frac{1}{\sqrt{2}} \exp\left(j\frac{3\pi}{4}\right) and p_1^*, equation (6.9) gives:

H(z) = \frac{\left(z^{-1} - p_1\right)\left(z^{-1} - p_1^*\right)}{\left(1 - p_1 z^{-1}\right)\left(1 - p_1^* z^{-1}\right)}.

Since the filter is stable, the poles are inside the unity circle in the complex z-plane. The zeros, the inverses of the poles, are then necessarily outside the unity circle.
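With real coefficients, equation (6.9) says that the numerator is the denominator with its coefficients in reverse order. This sketch (assuming SciPy's `freqz`; the coefficients are illustrative, not from the book) verifies that such a filter has unit module at every frequency:

```python
import numpy as np
from scipy.signal import freqz

# All-pass construction: reverse the (stable) denominator coefficients
# to form the numerator, then check |H(f)| = 1 everywhere.
a = np.array([1.0, -0.9, 0.5])   # poles at |z| = sqrt(0.5) < 1, hence stable
b = a[::-1]                      # reversed coefficients -> all-pass numerator
w, H = freqz(b, a, worN=1024)
print(f"max deviation of |H| from 1: {np.abs(np.abs(H) - 1).max():.2e}")
```

The deviation is at the level of floating-point round-off, confirming the zero-loss property (6.8).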
- 196. 180 Digital Filters Design for Signal and Image Processing [Figure 6.4. Diagram of poles and zeros of an all-pass filter] 6.1.3. Minimum-phase filters 6.1.3.1. Problem Here we look at a filter represented by its transfer function H(z). We apply the input x(k) and try to characterize the output y(k). [Figure 6.5. Filtering of the input x(k): X(z) → H(z) → Y(z)] However, now we assume that we observe the output y(k) and that H(z) is known. We can then determine the z-transform of x(k) as follows:

X(z) = \frac{1}{H(z)} Y(z) \quad \text{if } H(z) \neq 0 \ \forall z (6.10)
- 197. Infinite Impulse Response Filters 181 [Figure 6.6. Inversion or deconvolution: Y(z) → 1/H(z) → X(z)] The operation of the change from H(z) to 1/H(z), called deconvolution, is only possible if 1/H(z) is stable; that is, if all the zeros of H(z), which become the poles of 1/H(z), are inside the unity disk. At this point, the following question arises: can we bring the zeros that are outside the unity circle to the inside of the circle? The answer is discussed in the following section. 6.1.3.2. Stabilizing inverse filters We call minimum-phase a causal, stable filter whose transfer function zeros are inside or on the unity circle. This kind of filter presents the minimum group delay among all the filters having the same transfer function module for every z in the z-plane. Let a filter be characterized by its transfer function H(z). Let H_int(z) be the transfer function constructed from the poles and the zeros of H(z) situated inside the unity disk, and H_ext(z) the transfer function constructed from the zeros of H(z) situated outside the unity disk. We then obtain the following decomposition of H(z):

H(z) = H_{int}(z) \, H_{ext}(z) \quad \text{with} \quad H_{ext}(z) = \prod_i \left(1 - z_i z^{-1}\right), \ |z_i| > 1 (6.11)

This decomposition can be developed further by introducing z_i^{*-1}, the inverse of the conjugate of z_i:

H(z) = \underbrace{H_{int}(z) \prod_i \left(1 - z_i^{*-1} z^{-1}\right)}_{H_1(z)} \times \underbrace{\prod_i \frac{1 - z_i z^{-1}}{1 - z_i^{*-1} z^{-1}}}_{H_2(z)} (6.12)
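The decomposition above amounts to reflecting every zero outside the unity circle to the inverse of its conjugate. The following sketch (NumPy assumed; the polynomial and the gain-rescaling step, valid here because the reflected zeros are real, are illustrative choices, not from the book) builds the minimum-phase part and checks that its module on the unity circle is unchanged:

```python
import numpy as np

# Reflect zeros with |z_i| > 1 to 1/z_i* to obtain the minimum-phase part.
b = np.array([1.0, -2.5, 1.0])                   # zeros at z = 2 and z = 0.5
zeros = np.roots(b)
reflected = np.where(np.abs(zeros) > 1, 1 / np.conj(zeros), zeros)
# rescale so that the module on the unity circle is preserved (real zeros here)
gain = b[0] * np.real(np.prod(zeros[np.abs(zeros) > 1]))
b_min = gain * np.poly(reflected)                # minimum-phase coefficients

w = np.linspace(0, np.pi, 512)
x = np.exp(-1j * w)                              # evaluate polynomials in z^-1
H = np.polyval(b[::-1], x)
H_min = np.polyval(b_min[::-1], x)
print(np.allclose(np.abs(H), np.abs(H_min)))     # same module on the unity circle
```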
- 198. 182 Digital Filters Design for Signal and Image Processing The zeros of H_1(z) are all inside the unity circle since |z_i^{*-1}| < 1, so H_1(z) is at minimum phase. In addition, a direct calculation, writing each zero as z_i = a_i + j b_i and grouping the factors associated with z_i and z_i^*, shows that the module of H_2(z) is constant for z = \exp(j 2\pi f / f_s) (6.13): H_2 is thus an all-pass filter. Consequently, we can always decompose a filter into the product of a minimum-phase filter and an all-pass filter. We thus have:

H(z) = H_{min phase}(z) \, H_{all-pass}(z). (6.14)

Since we have:

H(f) = |H(f)| e^{j\varphi(f)}, \quad H_{min phase}(f) = |H_{min phase}(f)| e^{j\varphi_{min phase}(f)}, \quad H_{all-pass}(f) = |H_{all-pass}(f)| e^{j\varphi_{all-pass}(f)}, (6.15)

then from equation (6.14), we get:

\varphi(f) = \varphi_{min phase}(f) + \varphi_{all-pass}(f). (6.16)
By differentiating equation (6.16) with respect to the frequency on the interval \left[ -\frac{f_s}{2}, \frac{f_s}{2} \right], we get:

\frac{d\varphi(f)}{df} = \frac{d\varphi_{min phase}(f)}{df} + \frac{d\varphi_{all-pass}(f)}{df}.    (6.17)

Now, we can demonstrate by using equation (6.12) that:

\frac{d\varphi_{all-pass}(f)}{df} \le 0.    (6.18)

We then have:

\frac{d\varphi(f)}{df} \le \frac{d\varphi_{min phase}(f)}{df}.    (6.19)

In Chapter 4, we introduced the concept of group delay, represented as follows:

\tau_g = -\frac{1}{2\pi} \frac{d\varphi(f)}{df}.    (4.73)

Using equations (4.73) and (6.19), we conclude that:

\tau_g = -\frac{1}{2\pi} \frac{d\varphi(f)}{df} \ge -\frac{1}{2\pi} \frac{d\varphi_{min phase}(f)}{df} = \tau_{g, min phase}.    (6.20)

Consequently, a minimum phase filter is a filter whose group delay is the lowest among causal and stable filters with the same transfer function modulus for every f.

6.2. Synthesizing IIR filters

6.2.1. Impulse invariance method for analog to digital filter conversion

In the 1970s, the development of digital filtering led to adapting the previously established methods for synthesizing continuous filters. In this section, we will discuss the impulse invariance method for analog to digital filter conversion. Then we will present the bilinear transformation, which enables us to use all available techniques for synthesizing continuous filters.
The impulse invariance method for analog to digital filter conversion is based on the fact that the impulse response of the digital filter must correspond to the sampling of the impulse response of a continuous filter. The method is described below:

– we write the transfer function of the continuous filter as a partial fraction expansion of basic elements:

H_c(s) = \sum_{i=1}^{N} \frac{r_i}{s - p_i};    (6.21)

– from this we deduce that:

h(t) = \sum_{i=1}^{N} r_i \exp(p_i t).    (6.22)

The sampling period T_s being fixed, we obtain:

h(n T_s) = \sum_{i=1}^{N} r_i \exp(p_i n T_s),    (6.23)

which produces, by using the z-transform:

H(z) = \sum_{i=1}^{N} \frac{r_i}{1 - \exp(p_i T_s) z^{-1}}.    (6.24)

We see that the poles of H(z) are of the form \exp(p_i T_s). If the original continuous-time filter is stable, the real parts of the poles p_i are negative and the poles of the transfer function H(z) are inside the unity circle in the complex z-plane; so we have:

\left| \exp(p_i T_s) \right| = \left| \exp\left( \mathrm{Re}(p_i) T_s + j \mathrm{Im}(p_i) T_s \right) \right| = \exp\left( \mathrm{Re}(p_i) T_s \right) \left| \exp\left( j \mathrm{Im}(p_i) T_s \right) \right| = \exp\left( \mathrm{Re}(p_i) T_s \right) < 1.    (6.25)

However, this method cannot always be applied. We must respect several conditions:

– on the one hand, the frequency response of the continuous filter must be null, or must be considered as null, beyond a certain frequency. Moreover, the sampling of
the impulse response (see equation (6.23)) leads to the following relation between the respective transfer functions of the digital and analog filters:

H(f) = \frac{1}{T_s} \sum_{k=-\infty}^{+\infty} H_c\left( f - \frac{k}{T_s} \right);

– on the other hand, sampling of the impulse response must be possible at every instant: the impulse response of the continuous filter must not present any discontinuity. This means that the degree of the denominator of the transfer function must exceed that of the numerator by at least 2.

6.2.2. The invariance method of the indicial response

In section 6.2.1, we discussed synthesizing an infinite impulse response digital filter whose impulse response corresponds to the sampled impulse response of a corresponding analog filter. An alternative approach is to conserve the indicial (step) response instead of the impulse response. So we have:

y_{ind}(t) = h(t) * u(t).

By using the Laplace transform of the above formula, we obtain:

Y_{ind}(s) = \frac{H(s)}{s}.

We then proceed as in section 6.2.1.

6.2.3. Bilinear transformations

Here we must find a transformation that maps the continuous-time domain to the discrete-time domain, and inversely; once this is available, we can use nearly all the methods established for synthesizing continuous filters to design discrete-time filters. Thus, from a formal point of view, with a given transfer function in s, the bilinear transformation consists of replacing s with \frac{2}{T_s} \frac{1 - z^{-1}}{1 + z^{-1}} in the transfer function of the continuous filter. This helps us obtain a digital filter with approximately the same frequency response as that of the analog filter.
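As a minimal sketch of this substitution, assume a one-pole analog low-pass H(s) = a_c/(s + a_c) (the example filter and the function names are mine, not the book's). Replacing s by (2/T_s)(1 - z^{-1})/(1 + z^{-1}) and rearranging gives a first-order digital filter:

```python
import cmath, math

def bilinear_first_order(a_c, Ts):
    # Apply s -> (2/Ts)(1 - z^-1)/(1 + z^-1) to H(s) = a_c/(s + a_c),
    # which rearranges to H(z) = (b0 + b1 z^-1)/(1 + a1 z^-1)
    k = a_c * Ts
    b0 = k / (2 + k)
    b1 = k / (2 + k)
    a1 = (k - 2) / (2 + k)
    return (b0, b1), (1.0, a1)

def response(b, a, f, fs):
    # Frequency response at frequency f (Hz), sampling frequency fs (Hz)
    zinv = cmath.exp(-2j * math.pi * f / fs)
    return (b[0] + b[1] * zinv) / (a[0] + a[1] * zinv)
```

The mapping sends s = 0 to z = 1 and s = infinity to z = -1, so the digital filter keeps unit gain at DC and a transmission zero at the Nyquist frequency.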
Figure 6.5. Continuous filter (block diagram: input x(t) → H → output y(t))

In the following sections, we will establish this transformation by using the invariance of an integral computed in the continuous and discrete cases. We recall that the impulse response of an integrator filter is:

h(t) = 1 \text{ for } t \ge 0, \qquad h(t) = 0 \text{ for } t < 0,    (6.26)

and:

H(s) = \frac{1}{s}.    (6.27)

From here, whatever the causal input e(t), the output of this system is equal to the convolution product of e(t) with the impulse response h(t), so y(t) = e(t) * h(t):

y(t) = \int_{-\infty}^{+\infty} h(\tau) e(t - \tau) \, d\tau = \int_{-\infty}^{+\infty} e(\tau) h(t - \tau) \, d\tau = \int_{-\infty}^{t} e(\tau) \, d\tau.    (6.28)

So, for two successive instants t_n and t_{n+1}, we have:

y(t_{n+1}) - y(t_n) = \int_{t_n}^{t_{n+1}} e(\tau) \, d\tau.    (6.29)

We can then approximate this integral by the trapezoid method, provided the increment t_{n+1} - t_n = (n+1) T_s - n T_s = T_s is sufficiently small. We then have:

y(t_{n+1}) - y(t_n) = \frac{T_s}{2} \left( e(t_{n+1}) + e(t_n) \right),    (6.30)
and:

y(n+1) - y(n) = \frac{T_s}{2} \left( e(n+1) + e(n) \right).    (6.31)

The z-transform of this last discrete equation gives the transfer function:

H(z) = \frac{Y(z)}{X(z)} = \frac{T_s}{2} \frac{1 + z^{-1}}{1 - z^{-1}}.    (6.32)

This step allows us to establish a correspondence that leads from the continuous domain to the discrete domain:

s \approx \frac{2}{T_s} \frac{1 - z^{-1}}{1 + z^{-1}}.    (6.33)

The correspondence on the unity circle between the frequencies in the continuous and discrete domains is:

j \omega_{continuous} = \frac{2}{T_s} \frac{1 - \exp(-j \omega_{discrete} T_s)}{1 + \exp(-j \omega_{discrete} T_s)},    (6.34)

or:

\omega_{continuous} = \frac{2}{T_s} \tan\left( \frac{\omega_{discrete} T_s}{2} \right).    (6.35)

According to equation (6.35), we see that the bilinear transformation brings about a distortion of the frequency axis. This distortion is called warping. For low values of the discrete angular frequency, we observe that:

\omega_{continuous} = \frac{2}{T_s} \tan\left( \frac{\omega_{discrete} T_s}{2} \right) \approx \frac{2}{T_s} \times \frac{\omega_{discrete} T_s}{2} \approx \omega_{discrete}.    (6.36)

However, for higher values, equation (6.35) is non-linear. The cut-off frequency of a low-pass filter then undergoes this modification, and we have:

(\omega_c)_{continuous} = \frac{2}{T_s} \tan\left( \frac{(\omega_c)_{discrete} T_s}{2} \right).    (6.37)

This means we must take this frequency distortion into account.
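Equation (6.35) can be evaluated directly to see the warping, and to prewarp a desired digital cut-off frequency before the analog design step. A small sketch (the function name is mine):

```python
import math

def prewarp(f_discrete, fs):
    # Analog frequency (Hz) that the bilinear transform maps onto the
    # desired digital frequency f_discrete (Hz), from equation (6.35):
    # w_continuous = (2/Ts) tan(w_discrete Ts / 2)
    Ts = 1.0 / fs
    w_d = 2 * math.pi * f_discrete
    return (2 / Ts) * math.tan(w_d * Ts / 2) / (2 * math.pi)
```

At low frequencies prewarp(f, fs) is close to f, in agreement with equation (6.36); near fs/2 the distortion becomes severe (for example, with fs = 8000 Hz, a digital frequency of 3000 Hz corresponds to an analog frequency above 6000 Hz).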
It is also important to remember that the bilinear transformation conserves the filter's stability. Indeed, the image of the left half of the Laplace s-plane under the bilinear transformation is the unity disk in the z-plane.

Figure 6.6. Transformation of the left half-plane of the Laplace plane (axes Re(s), Im(s)) into the unity disk in the z-plane

We can demonstrate that the image of the ordinate axis of the complex s-plane under the bilinear transformation in equation (6.33) is the unity circle in the complex z-plane. Using equation (6.33), we get:

z = \frac{1 + \frac{T_s}{2} s}{1 - \frac{T_s}{2} s}.    (6.38)

The ordinate axis of the complex s-plane corresponds to the pure imaginary values of s: s = j\alpha, with \alpha a real variable going from -\infty to +\infty. So using equation (6.38), we have:

z = \frac{1 + j \alpha \frac{T_s}{2}}{1 - j \alpha \frac{T_s}{2}} = \exp\left( 2 j \arctan\left( \alpha \frac{T_s}{2} \right) \right).    (6.39)

6.2.4. Frequency transformations for filter synthesis using low-pass filters

To determine another type of filter – that is, a high-pass, passband or cut-off band filter – we start from a low-pass filter characterized by its transfer function H_{lp}(z). Then we carry out a change of variable z^{-1} = f(Z^{-1}) to obtain the transfer function of the desired filter H_{desired}(Z):

H_{desired}(Z) = H_{lp}\left( f(Z^{-1}) \right)    (6.40)
Low-pass to high-pass

f(Z^{-1}) = -\frac{Z^{-1} + \alpha}{1 + \alpha Z^{-1}}  \quad with \quad \alpha = -\frac{\cos\left[ \frac{T_s}{2} (\omega_c + w_c) \right]}{\cos\left[ \frac{T_s}{2} (\omega_c - w_c) \right]},

where \omega_c and w_c respectively designate the cut-off angular frequency of the original low-pass filter and the cut-off angular frequency of the desired filter.

Low-pass to passband

f(Z^{-1}) = -\frac{Z^{-2} - \frac{2 \alpha t}{t+1} Z^{-1} + \frac{t-1}{t+1}}{\frac{t-1}{t+1} Z^{-2} - \frac{2 \alpha t}{t+1} Z^{-1} + 1}

where \alpha = -\frac{\cos\left[ \frac{T_s}{2} (w_{c2} + w_{c1}) \right]}{\cos\left[ \frac{T_s}{2} (w_{c2} - w_{c1}) \right]}, \quad x = \tan\left[ \frac{T_s}{2} (w_{c2} - w_{c1}) \right] \quad and \quad t = \frac{1}{x} \tan\left[ \frac{T_s}{2} \omega_c \right],

with w_{c1} and w_{c2} respectively designating the low and high limits of the desired passband.

Low-pass to cut-off band

f(Z^{-1}) = -\frac{Z^{-2} - \frac{2 \alpha}{t+1} Z^{-1} + \frac{1-t}{1+t}}{\frac{1-t}{1+t} Z^{-2} - \frac{2 \alpha}{t+1} Z^{-1} + 1}

where \alpha = \frac{\cos\left[ \frac{T_s}{2} (w_{c2} + w_{c1}) \right]}{\cos\left[ \frac{T_s}{2} (w_{c2} - w_{c1}) \right]}, \quad x = \tan\left[ \frac{T_s}{2} (w_{c2} - w_{c1}) \right] \quad and \quad t = \frac{1}{x} \tan\left[ \frac{T_s}{2} \omega_c \right],

with w_{c1} and w_{c2} respectively designating the low and high limits of the desired attenuated band.

6.3. Bibliography

[JAC 86] JACKSON L.B., Digital Filters and Signal Processing, Kluwer Academic Publishers, Boston, ISBN 0-89838-174-6, 1986.
[KAL 97] KALOUPTSIDIS N., Signal Processing Systems, Theory and Design, Wiley Interscience, ISBN 0-471-11220-8, 1997.
[ORF 96] ORFANIDIS S.J., Introduction to Signal Processing, Prentice Hall, ISBN 0-13-209172-0, 1996.
Chapter 7

Structures of FIR and IIR Filters

7.1. Introduction

Filter realization structures are synoptic diagrams that show how the different arithmetical operations – additions, multiplications and delays – are connected. When we operate in infinite precision – that is, when there are no quantification errors – all structures yield the same results. However, the quantification errors and the coding of the different parameters on processors operating in fixed-point arithmetic affect the structures, which then no longer yield the same filtering results. In this way, an IIR filter assumed to be stable can, during its implementation, lead to an unstable filter if the appropriate realization structure has not been chosen. This is due to the fact that the structures do not have the same sensitivity to quantification, truncation and round-off errors.

This chapter is organized as follows: first, we will present structures dedicated to FIR and IIR filters. We will look closely at direct and cascade structures, and we will discuss second-order cells. Then, we will present ways of choosing finite precision structures by lowering the sensitivity to quantification error.

Chapter written by Mohamed NAJIM and Eric GRIVEL.
7.2. Structure of FIR filters

With continuous filters built from discrete components, the filter structure is an electric schema that includes capacitors, inductors and resistors, among other elements. These filters are active if they contain amplifiers. In the case of digital filters, the structure of the filter corresponds to the synoptic schema that connects input and output, into which delay elements and weighting values are introduced. This is a way of visualizing the flow of the samples as they are delayed or weighted. This structure is directly deduced from the transfer function or the difference equation. It is very simple and for this reason is called a direct structure. For an FIR filter characterized by equation (5.3) in Chapter 5, with input x and output y, we obtain Figure 7.1.

Figure 7.1. Direct structure of an FIR filter (tapped delay line: x(k) feeds a chain of delays z^{-1}; the delayed samples are weighted by b_0, b_1, ..., b_{N-1} and summed to give y(k))

7.3. Structure of IIR filters

7.3.1. Direct structures

In this section, we will refer to the direct canonic structure that is characterized by the following transfer function:

H_z(z) = \frac{\sum_{i=0}^{N-1} b_i z^{-i}}{\sum_{i=0}^{M-1} a_i z^{-i}} = \frac{b_0 + \dots + b_{N-1} z^{-(N-1)}}{1 + a_1 z^{-1} + \dots + a_{M-1} z^{-(M-1)}}  \quad with \quad a_0 = 1.    (7.1)
First, we introduce an intermediary output x_1(k), so that:

x_1(k) = b_0 x(k) + \dots + b_{N-1} x(k - N + 1)    (7.2)

y(k) = x_1(k) - a_1 y(k-1) - \dots - a_{M-1} y(k - M + 1)    (7.3)

The transfer function in equation (7.1) can then be decomposed into a product of two transfer functions:

H_z(z) = \frac{Y(z)}{X(z)} = \frac{b_0 + \dots + b_{N-1} z^{-(N-1)}}{1 + a_1 z^{-1} + \dots + a_{M-1} z^{-(M-1)}} = \left( b_0 + \dots + b_{N-1} z^{-(N-1)} \right) \times \frac{1}{1 + a_1 z^{-1} + \dots + a_{M-1} z^{-(M-1)}} = \frac{X_1(z)}{X(z)} \times \frac{Y(z)}{X_1(z)} = H_1(z) H_2(z).    (7.4)

Everything occurs as if the filter H_z(z) had been obtained by cascading an FIR filter H_1(z) and an IIR filter H_2(z).

Figure 7.2. Putting filters H_1 and H_2 into cascade (x(k), X(z) → H_1(z) → x_1(k), X_1(z) → H_2(z) → y(k), Y(z))

We end up with the most spontaneous realization, called the direct form structure 1.
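The two difference equations above can be transcribed directly into code. The sketch below (a direct transcription; the function name and coefficient conventions are mine) computes the transversal part x_1(k), then the recursive part y(k):

```python
def direct_form_1(b, a, x):
    # Equations (7.2)-(7.3): first the transversal part
    # x1(k) = b0 x(k) + ... + b_{N-1} x(k-N+1), then the recursive part
    # y(k) = x1(k) - a1 y(k-1) - ... - a_{M-1} y(k-M+1), with a[0] = 1.
    y = []
    for k in range(len(x)):
        x1 = sum(b[i] * x[k - i] for i in range(len(b)) if k - i >= 0)
        yk = x1 - sum(a[i] * y[k - i] for i in range(1, len(a)) if k - i >= 0)
        y.append(yk)
    return y
```

For instance, b = [1], a = [1, -0.5] applied to an impulse yields the geometric sequence 1, 0.5, 0.25, ..., as expected for a single pole at z = 0.5.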
Figure 7.3. Direct form structure 1 of an IIR filter (a transversal part with coefficients b_0, ..., b_{N-1} computing x_1(k), followed by a recursive part with coefficients -a_1, ..., -a_{M-1} computing y(k))

We see that this structure requires N + M - 2 delay cells. We can then ask the following question: can we share some cells and use only M - 1 delay cells, knowing that M > N? The answer is yes, leading us to the direct form structure 2.
Figure 7.4. Direct form 2 of an IIR filter (the recursive part with coefficients -a_1, ..., -a_{M-1} comes first; the shared delay line then feeds the coefficients b_0, ..., b_{N-1})

An alternative approach consists of decomposing the transfer function shown in equation (7.1) into an IIR, then an FIR filter. The intermediate output x_1(k) then satisfies:

x_1(k) = x(k) - a_1 x_1(k-1) - \dots - a_{M-1} x_1(k - M + 1)    (7.5)
y(k) = b_0 x_1(k) + \dots + b_{N-1} x_1(k - N + 1)    (7.6)

Here we have:

X_1(z) = \frac{X(z)}{1 + a_1 z^{-1} + \dots + a_{M-1} z^{-(M-1)}}    (7.7)

Y(z) = X_1(z) \left( b_0 + \dots + b_{N-1} z^{-(N-1)} \right)    (7.8)

So by using equations (7.5) and (7.6), we can establish the structure shown in Figure 7.5.

Figure 7.5. Direct form structure before sharing delay elements (the recursive part with coefficients -a_1, ..., -a_{M-1} produces x_1(k), which feeds a separate delay line weighted by b_0, b_1, ..., b_{N-1} to give y(k))
We can nevertheless share some delay cells. We then obtain the structure shown in Figure 7.6.

Figure 7.6. Canonic structure of an IIR filter (a single delay line is shared between the recursive coefficients -a_1, ..., -a_{M-1} and the transversal coefficients b_0, ..., b_{N-1})
When M = N = 2, we end up with a second-order cell (see Figure 7.7).

Figure 7.7. Canonic structure of a second-order cell (two shared delays; recursive coefficients -a_1, -a_2 and transversal coefficients b_1, b_2)

Taking into account the role played by the second-order cell, we need to understand what happens when we consider a linear system governed by a pole near the unity circle. To generate a resonance at the frequency f_0, we consider a pole inside the unity disk:

p_i = R \exp\left( 2j\pi \frac{f_0}{f_s} \right)  \quad with \quad |R| < 1.

p_i^* = R \exp\left( -2j\pi \frac{f_0}{f_s} \right) is also a pole of the transfer function of the second-order cell. From here, the transfer function can be written in the two equivalent forms that follow:

H_z(z) = \frac{G}{\left( 1 - R \exp\left( 2j\pi \frac{f_0}{f_s} \right) z^{-1} \right) \left( 1 - R \exp\left( -2j\pi \frac{f_0}{f_s} \right) z^{-1} \right)}    (7.9)

and

H_z(z) = \frac{G}{1 + a_1 z^{-1} + a_2 z^{-2}} = \frac{G}{1 - 2 R \cos\left( 2\pi \frac{f_0}{f_s} \right) z^{-1} + R^2 z^{-2}}.    (7.10)
The frequency response of the filter corresponds to the Fourier transform of the impulse response, which can be obtained by evaluating the transfer function H_z(z) at z = \exp\left( 2j\pi \frac{f}{f_s} \right), so that:

H(f) = H_z(z)|_{z = \exp(2j\pi f/f_s)} = \frac{G}{1 + a_1 \exp\left( -2j\pi \frac{f}{f_s} \right) + a_2 \exp\left( -4j\pi \frac{f}{f_s} \right)} = \frac{G}{\left( 1 - R \exp\left( 2j\pi \frac{f_0 - f}{f_s} \right) \right) \left( 1 - R \exp\left( -2j\pi \frac{f_0 + f}{f_s} \right) \right)}.    (7.11)

According to the values of R, the resonance is more or less strong and tends towards infinity when R tends towards 1. Figure 7.8 shows the position of the poles and the corresponding frequency response in different situations. The normalized frequencies linked to the poles are equal to f_0/f_s = ±0.3 and the values of R are successively equal to 0.7, 0.9 and 0.99. We then normalize H(f) so that |H(f_0)| = 1, which conditions the gain value G as follows:

G = (1 - R) \sqrt{1 - 2 R \cos\left( 4\pi \frac{f_0}{f_s} \right) + R^2}.    (7.12)

By definition, the cut-off frequencies f_c of this filter at -3 dB are such that:

10 \log_{10} \frac{|H(f_c)|^2}{|H(f_0)|^2} = -3 \text{ dB}.    (7.13)
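Equations (7.9) and (7.12) can be checked numerically. The sketch below (function names are mine; frequencies are normalized by f_s) computes the gain G and the magnitude response, and confirms that the peak value at f_0 is exactly 1:

```python
import cmath, math

def resonator_gain(R, f0_over_fs):
    # Normalizing gain of equation (7.12), making |H(f0)| = 1
    return (1 - R) * math.sqrt(1 - 2 * R * math.cos(4 * math.pi * f0_over_fs) + R * R)

def resonator_magnitude(R, f0_over_fs, f_over_fs):
    # |H(f)| of the two-pole cell of equation (7.9)
    G = resonator_gain(R, f0_over_fs)
    z = cmath.exp(2j * math.pi * f_over_fs)
    p = R * cmath.exp(2j * math.pi * f0_over_fs)
    return abs(G / ((1 - p / z) * (1 - p.conjugate() / z)))
```

Off the resonance the magnitude falls below 1, and the closer R is to 1 the faster it falls, in agreement with Figure 7.8.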
Figure 7.8. Frequential representation of a second-order filter, according to the values of R (0.7, 0.9 and 0.99): pole positions in the z-plane (left) and amplitude versus normalized frequency (right)
Equation (7.13) reduces to:

\frac{|H(f_c)|^2}{|H(f_0)|^2} = \frac{1}{2},    (7.14)

which gives, by using equation (7.11) and the fact that |H(f_0)| = 1:

\left( 1 - 2 R \cos\left( 2\pi \frac{f_c - f_0}{f_s} \right) + R^2 \right) \left( 1 - 2 R \cos\left( 2\pi \frac{f_c + f_0}{f_s} \right) + R^2 \right) = 2 G^2.    (7.15)

This equation admits two solutions f_1 and f_2, which characterize the bandwidth \Delta f = f_2 - f_1. To avoid complex algebraic calculations, a geometric approach helps us to find the result easily. Indeed, let z_A, z_B and z_Q be the complex numbers associated with the points A, B and Q in the complex z-plane, at the frequencies f_1, f_2 and f_0 (as in Figure 7.9). Moreover, p_i is associated with the point P and p_i^* with P*. We thus have:

H_z(z_A) = \frac{G}{\left( 1 - p_i z_A^{-1} \right) \left( 1 - p_i^* z_A^{-1} \right)},    (7.16)

H_z(z_Q) = \frac{G}{\left( 1 - p_i z_Q^{-1} \right) \left( 1 - p_i^* z_Q^{-1} \right)}.    (7.17)

By combining equations (7.16) and (7.17), we obtain:

\frac{H_z(z_A)}{H_z(z_Q)} = \frac{\left( 1 - p_i z_Q^{-1} \right) \left( 1 - p_i^* z_Q^{-1} \right)}{\left( 1 - p_i z_A^{-1} \right) \left( 1 - p_i^* z_A^{-1} \right)} = \frac{z_A^2}{z_Q^2} \frac{\left( z_Q - p_i \right) \left( z_Q - p_i^* \right)}{\left( z_A - p_i \right) \left( z_A - p_i^* \right)}.    (7.18)

Now, when R tends towards 1, the poles approach the unity circle while the points P, Q and A become very close, and their distances to O, the origin of the complex plane, are approximately the same, i.e. z_A \approx z_Q. Therefore, equation (7.18) becomes:

\frac{H_z(z_A)}{H_z(z_Q)} \approx \frac{\left( z_Q - p_i \right) \left( z_Q - p_i^* \right)}{\left( z_A - p_i \right) \left( z_A - p_i^* \right)}.    (7.19)
Figure 7.9. Geometric resolution of the passband of a second-order filter (unity circle with points A and B around Q, the pole P = p_i inside the circle, a 45° angle at P, and the arc corresponding to \Delta f)

Since the distances from the points Q and A to P* are approximately the same, this means that:

\left| z_A - p_i^* \right| = \left| z_Q - p_i^* \right|.    (7.20)

Taking into account the equalities in equations (7.14) and (7.20), equation (7.19) becomes:

\frac{|H_z(z_A)|^2}{|H_z(z_Q)|^2} \approx \frac{\left| z_Q - p_i \right|^2}{\left| z_A - p_i \right|^2} \approx \frac{1}{2}.    (7.21)

The angle between the segments AP and PQ is therefore approximately equal to \pi/4, and so is the angle between the segments PA and AQ. The triangle PAQ is almost isosceles. This means that:

1 - R = PQ = AQ.    (7.22)
When R tends towards 1, we can assimilate the arc AB to the tangent vector to the unity circle at Q. We deduce that:

\Delta\theta = 2\pi \frac{\Delta f}{f_s} \approx AB \approx 2 \, AQ \approx 2 \, PQ \approx 2 (1 - R),    (7.23)

from which we get:

\Delta f \approx \frac{f_s}{\pi} (1 - R).    (7.24)

We can demonstrate in this way that the bandwidth is proportional to (1 - R). The closer the poles are to the unity circle in the z-plane, the more selective the filter is around the frequencies associated with the poles.

Now we place a pair of zeros z_i = r \exp\left( 2j\pi \frac{f_0}{f_s} \right) and z_i^* = r \exp\left( -2j\pi \frac{f_0}{f_s} \right) close to the poles, inside the disk and associated with the same frequencies. Equation (7.9) becomes:

H_z(z) = G \frac{\left( 1 - r \exp\left( 2j\pi \frac{f_0}{f_s} \right) z^{-1} \right) \left( 1 - r \exp\left( -2j\pi \frac{f_0}{f_s} \right) z^{-1} \right)}{\left( 1 - R \exp\left( 2j\pi \frac{f_0}{f_s} \right) z^{-1} \right) \left( 1 - R \exp\left( -2j\pi \frac{f_0}{f_s} \right) z^{-1} \right)} = G \frac{1 - 2 r \cos\left( 2\pi \frac{f_0}{f_s} \right) z^{-1} + r^2 z^{-2}}{1 - 2 R \cos\left( 2\pi \frac{f_0}{f_s} \right) z^{-1} + R^2 z^{-2}}    (7.25)

or

H_z(z) = G \frac{1 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}.    (7.26)
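The proportionality of the bandwidth to (1 - R) can be verified numerically. The sketch below (function names are mine; fs is normalized to 1) scans the magnitude-squared response of the normalized two-pole resonator around f_0 and measures the band where it stays above 1/2:

```python
import cmath, math

def resonator_mag2(R, f0, f):
    # |H(f)|^2 of the normalized two-pole resonator, fs = 1
    # (gain G from equation (7.12), poles from equation (7.9))
    G = (1 - R) * math.sqrt(1 - 2 * R * math.cos(4 * math.pi * f0) + R * R)
    z = cmath.exp(2j * math.pi * f)
    p = R * cmath.exp(2j * math.pi * f0)
    return abs(G / ((1 - p / z) * (1 - p.conjugate() / z))) ** 2

def bandwidth_3db(R, f0, half_window=0.05, n=20001):
    # Scan f0 +/- half_window and measure the band where |H(f)|^2 >= 1/2
    step = 2 * half_window / n
    freqs = [f0 + (i - n // 2) * step for i in range(n)]
    inside = [f for f in freqs if resonator_mag2(R, f0, f) >= 0.5]
    return max(inside) - min(inside)
```

For R = 0.95 and f_0 = 0.3, the measured -3 dB bandwidth is within a few percent of the geometric approximation (1 - R)/pi.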
The frequency response of the filter then satisfies:

H(f) = G \frac{1 + b_1 \exp\left( -2j\pi \frac{f}{f_s} \right) + b_2 \exp\left( -4j\pi \frac{f}{f_s} \right)}{1 + a_1 \exp\left( -2j\pi \frac{f}{f_s} \right) + a_2 \exp\left( -4j\pi \frac{f}{f_s} \right)} = G \frac{\left( 1 - r \exp\left( 2j\pi \frac{f_0 - f}{f_s} \right) \right) \left( 1 - r \exp\left( -2j\pi \frac{f_0 + f}{f_s} \right) \right)}{\left( 1 - R \exp\left( 2j\pi \frac{f_0 - f}{f_s} \right) \right) \left( 1 - R \exp\left( -2j\pi \frac{f_0 + f}{f_s} \right) \right)}.    (7.27)

According to the values of r, we observe an accentuation of the resonance at the normalized frequencies f_0/f_s = ±0.3 when r < R and a weakening when r > R. The accentuation or weakening level is controlled by the proximity of r to R. The spike width is always related to the proximity of the poles to the unity circle (see Figures 7.10 to 7.13).

Figure 7.10. Representation of the frequency response of a second-order filter without zeros, R = 0.7 (pole positions in the z-plane and amplitude versus normalized frequency)
Figure 7.11. Representation of the frequency response of a second-order filter with zeros close to the poles, R = 0.7, r taking respectively the values 0.65 and 0.75 (pole-zero plots and amplitude versus normalized frequency)

Figure 7.12. Representation of the frequency response of a second-order filter without zeros, R = 0.9 (pole positions and amplitude versus normalized frequency)
Figure 7.13. Representation of the frequency response of a second-order filter with zeros close to the poles, R = 0.9, r taking respectively the values 0.85 and 0.9 (pole-zero plots and amplitude versus normalized frequency)

If we choose the value of r equal to 1, the expression of the transfer function given in equations (7.25) and (7.26) becomes:

H_z(z) = G \frac{\left( 1 - \exp\left( 2j\pi \frac{f_0}{f_s} \right) z^{-1} \right) \left( 1 - \exp\left( -2j\pi \frac{f_0}{f_s} \right) z^{-1} \right)}{\left( 1 - R \exp\left( 2j\pi \frac{f_0}{f_s} \right) z^{-1} \right) \left( 1 - R \exp\left( -2j\pi \frac{f_0}{f_s} \right) z^{-1} \right)}    (7.28)
H_z(z) = G \frac{1 - 2 \cos\left( 2\pi \frac{f_0}{f_s} \right) z^{-1} + z^{-2}}{1 - 2 R \cos\left( 2\pi \frac{f_0}{f_s} \right) z^{-1} + R^2 z^{-2}},    (7.29)

and also:

H_z(z) = G \frac{1 + b_1 z^{-1} + b_2 z^{-2}}{1 + b_1 R z^{-1} + b_2 R^2 z^{-2}} = G \frac{N(z)}{N(R^{-1} z)}.    (7.30)

Figure 7.14. Frequential representation of a second-order filter, R equal to 0.9 and r equal to 0, then 1 (pole-zero plots and amplitude versus normalized frequency)

Whether it has two poles, or two poles and two zeros, we learn several facts from studying this kind of filter:
– the resonance acuity is regulated by the degree of proximity of the pole to the unity circle;
– associating zeros with the transfer function allows us to modify the response curve of the filter. Some positions of the zeros lessen the curve, while others instead favor the resonance, as shown in the diagrams in Figure 7.15.
Figure 7.15. Frequential representation of a second-order filter with zeros more or less close to the poles; R equals 0.9 and r varies from 0 to 1 by steps of 0.05 (amplitude versus normalized frequency; r increases from 0 to 1, the flattest curve being obtained at r = R = 0.9)

This technique can be generalized to synthesize a filter having a finite number of slots. It is enough to generate a numerator N(z) whose zeros are situated on the unity circle and correspond to the desired slots. We then construct the denominator, which must equal N(\rho^{-1} z), to deduce the transfer function. We thus obtain a generalized expression of the following form:

H_z(z) = G \frac{1 + b_1 z^{-1} + b_2 z^{-2} + \dots + b_M z^{-M}}{1 + b_1 \rho z^{-1} + b_2 \rho^2 z^{-2} + \dots + b_M \rho^M z^{-M}}    (7.31)

APPLICATION.– With a signal sampled at 500 Hz, we want to eliminate a periodic signal whose fundamental frequency equals 50 Hz. The normalized frequencies to be eliminated are:

\frac{f_n}{f_s} = \pm n \frac{50}{500} = \pm \frac{n}{10}.    (7.32)
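A quick numerical check of this construction for the 50 Hz application: assuming the numerator N(z) = 1 - z^{-10}, whose zeros are the tenth roots of unity, and the denominator N(rho^{-1} z) = 1 - rho^{10} z^{-10}, the response must vanish at 50 Hz and all its harmonics while staying close to 1 elsewhere (the function name and parameter defaults are mine):

```python
import cmath

def comb_notch_magnitude(f, fs=500.0, rho=0.98):
    # |H(f)| for H(z) = (1 - z^-10) / (1 - rho^10 z^-10),
    # the slot filter built as N(z)/N(rho^-1 z) with N(z) = 1 - z^-10
    z10 = cmath.exp(-2j * cmath.pi * f / fs) ** 10
    return abs((1 - z10) / (1 - rho ** 10 * z10))
```

The magnitude is numerically zero at 50 Hz and 100 Hz, and remains of order 1 at 25 Hz, halfway between two slots.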
The zeros of H_z(z) thus correspond to:

z_n = \exp\left( \pm 2j\pi \frac{n}{10} \right),    (7.33)

that is, to the tenth roots of unity. Thus, we have:

N(z) = 1 - z^{-10}    (7.34)

and we take, for example:

H_z(z) = \frac{1 - z^{-10}}{1 - \rho^{10} z^{-10}}.    (7.35)

Figure 7.16. Frequential representation of a tenth-order filter, \rho equal to 0.98 (pole-zero plot and amplitude versus normalized frequency)

Slot (notch) filters allow us to remove harmonics, while comb filters enhance periodicities; that is, they reinforce signals containing harmonic frequencies (multiples of the fundamental). These filters are used in audio applications for devices that create reverberation: the filters act as reflectors of sound waves and favor certain periodic signals.

7.3.2. The cascade structure

The cascade structure decomposes the filter of the transfer function H_z(z) into a succession of first and second-order cells H_i(z). However, we can see that a first
order cell is a specific example of a second-order cell obtained by taking the coefficients associated with z^{-2} equal to zero:

H_z(z) = \frac{\sum_{i=0}^{N-1} b_i z^{-i}}{\sum_{i=0}^{M-1} a_i z^{-i}} = \frac{b_0 + \dots + b_{N-1} z^{-(N-1)}}{1 + a_1 z^{-1} + \dots + a_{M-1} z^{-(M-1)}} = K_c \prod_{i=1}^{Q} H_i(z)  \quad with \quad a_0 = 1,    (7.36)

which can be written:

H_z(z) = K_c \prod_{i=1}^{Q} \frac{1 + b_{1ci} z^{-1} + b_{2ci} z^{-2}}{1 + a_{1ci} z^{-1} + a_{2ci} z^{-2}},    (7.37)

or:

H_z(z) = \prod_{i=1}^{Q} K_{c,i} \frac{1 + b_{1ci} z^{-1} + b_{2ci} z^{-2}}{1 + a_{1ci} z^{-1} + a_{2ci} z^{-2}}  \quad with \quad K_c = \prod_{i=1}^{Q} K_{c,i}.    (7.38)

Figure 7.17. Cascade structure (x(k), X(z) → H_1(z) → H_2(z) → ... → H_Q(z) → y(k), Y(z), with overall gain K_c)

Figure 7.18. Cascade structure with distributed gain (x(k), X(z) → K_{c,1} H_1(z) → K_{c,2} H_2(z) → ... → K_{c,Q} H_Q(z) → y(k), Y(z))
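The product form of equation (7.37) evaluates naturally as a loop over second-order cells. The sketch below (function name and section tuple layout are mine; fs normalized to 1) multiplies the responses of the cells:

```python
import cmath

def cascade_response(sections, Kc, f):
    # Equation (7.37): H = Kc * prod over second-order cells of
    # (1 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2)
    zinv = cmath.exp(-2j * cmath.pi * f)
    h = complex(Kc)
    for (b1, b2, a1, a2) in sections:
        h *= (1 + b1 * zinv + b2 * zinv ** 2) / (1 + a1 * zinv + a2 * zinv ** 2)
    return h
```

Cascading the same cell twice squares its response, and an empty cascade reduces to the gain K_c alone.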
7.3.3. Parallel structures

A parallel structure decomposes the filter of the transfer function H_z(z) into a parallel interconnection of filters whose transfer functions are the H_i(z). As a result, the transfer function is given by the following formula:

H_z(z) = \sum_{i=1}^{Q} H_i(z).    (7.39)

Figure 7.19. Parallel structure (x(k), X(z) feeds H_1(z), H_2(z), ..., H_Q(z) in parallel; their outputs are summed to give y(k), Y(z))

In the following section, we will look more closely at direct and cascade structures.

7.4. Realizing finite precision filters

7.4.1. Introduction

In Chapters 5 and 6, we discussed FIR and IIR filter synthesis. The methods presented there allow us to obtain the transfer function of the filter, generally in the direct canonic form. However, in practice, and more especially when a filter is implemented on a processor dedicated to digital signal processing (DSP), the coefficients and the values of the samples are coded on a finite number of bits. Quantifying these values imposes several constraints that must be taken into account.
With IIR filters, we will show the influence that quantification errors on the filter coefficients can have on the frequency response of the filter, in the case of a direct canonic, then a cascade structure. This observation allows us to deduce the most suitable choice for implementing digital filters. Then, we will look at other problems related to implementing filters, such as saturation. We should note that this section does not consider parallel structures; however, a similar study can be made to compare the influence of coefficient quantification on the frequency response of the filter.

7.4.2. Examples of FIR filters

Let the following formula be the transfer function of an FIR filter:

H_0(z) = \sum_{n=0}^{N-1} h(n) z^{-n}.    (7.40)

During implementation, the coefficients of the impulse response h(n) will never have the exact theoretical values predicted; they will be rounded off to [h(n)]_r, the closest multiple of the quantification step q. The change to finite precision thus introduces a quantification error e(n) written as follows:

e(n) = h(n) - [h(n)]_r.    (7.41)

This error is bounded and, for rounding, verifies the following inequality:

|e(n)| \le \frac{q}{2}.    (7.42)

So we have:

H(z) = \sum_{n=0}^{N-1} [h(n)]_r z^{-n} = \sum_{n=0}^{N-1} h(n) z^{-n} - \sum_{n=0}^{N-1} e(n) z^{-n} = H_0(z) - \sum_{n=0}^{N-1} e(n) z^{-n}.    (7.43)
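The rounding model above is easy to simulate. The sketch below (function names are mine) rounds coefficients to the nearest multiple of q and compares the accumulated error with the worst-case bound Nq/2 that the next page derives from the triangle inequality:

```python
def quantize(coeffs, q):
    # Round each coefficient to the nearest multiple of the step q
    return [q * round(c / q) for c in coeffs]

def total_error_and_bound(coeffs, q):
    # sum |e(n)| over the coefficients, and the worst-case value N q / 2;
    # the sum bounds |E(f)| at every frequency
    quantized = quantize(coeffs, q)
    total = sum(abs(c - cq) for c, cq in zip(coeffs, quantized))
    return total, len(coeffs) * q / 2
```

For coefficients [0.123, 0.456, -0.789] and q = 0.01, the individual errors are 0.003, 0.004 and 0.001, for a total of 0.008, comfortably below the bound of 0.015.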
The error occurring in the transfer function due to the quantification of the coefficients of the impulse response is:

E(z) = H_0(z) - H(z) = \sum_{n=0}^{N-1} e(n) z^{-n}.    (7.44)

The frequency response error, evaluated on the unity circle, then satisfies:

|E(f)| = |E(z)|_{z = \exp(2j\pi f/f_s)} = \left| \sum_{n=0}^{N-1} e(n) z^{-n} \right| \le \frac{N q}{2}.    (7.45)

Nq/2 constitutes the upper limit of the error. It is easy to evaluate because the coefficients appear in a linear way in the expression of the frequency response.

7.4.3. IIR filters

7.4.3.1. Introduction

First, let us look again at the transfer function expressions for the direct and cascade structures:

H_z(z) = \frac{\sum_{i=0}^{N-1} b_i z^{-i}}{\sum_{i=0}^{M-1} a_i z^{-i}} = \frac{b_0 + \dots + b_{N-1} z^{-(N-1)}}{1 + a_1 z^{-1} + \dots + a_{M-1} z^{-(M-1)}}  \quad with \quad a_0 = 1    (7.1)

and

H_z(z) = K_c \prod_{i=1}^{Q} \frac{1 + b_{1ci} z^{-1} + b_{2ci} z^{-2}}{1 + a_{1ci} z^{-1} + a_{2ci} z^{-2}}.    (7.37)

In this section, we will look at the influence of the quantification of the filter coefficients on the frequential behavior of the filter. To do this, we observe the filter attenuation A_f(f), represented as:

A_f(f) = -20 \log_{10} |H_z(z)|_{z = \exp(2j\pi f/f_s)}.    (7.46)
The expression of the filter attenuation A_f(f) shown in equation (7.46) is difficult to use in our study. So we will instead use another expression that does not directly involve the quantity \log_{10} |H_z(z)|_{z = \exp(2j\pi f/f_s)}. Since the transfer function H_z(z) is a complex quantity, we can express it as a function of its modulus and phase, as follows:

H(f) = H_z(z)|_{z = \exp(2j\pi f/f_s)} = |H(f)| \exp(j\varphi(f)).    (7.47)

From here, by taking the complex logarithm of equation (7.47), we obtain:

\ln H(f) = \ln |H(f)| + j \varphi(f).    (7.48)

Consequently,

\ln |H(f)| = \mathrm{Re}(\ln H(f)) = \mathrm{Re}(\ln H_z(z))|_{z = \exp(2j\pi f/f_s)},    (7.49)

and:

\log_{10} |H(f)| = \frac{1}{\ln 10} \mathrm{Re}(\ln H(f)) = \frac{1}{\ln 10} \mathrm{Re}(\ln H_z(z))|_{z = \exp(2j\pi f/f_s)}.    (7.50)

Using equations (7.46) and (7.50), the filter attenuation A_f(f) can be written as:

A_f(f) = -\frac{20}{\ln 10} \mathrm{Re}(\ln H_z(z))|_{z = \exp(2j\pi f/f_s)}.    (7.51)

If we consider a direct structure, the change to finite precision introduces an error on each coefficient a_p of the denominator and b_p of the numerator of the transfer function H_z(z), written respectively \Delta a_p and \Delta b_p. This quantification generates an error on the attenuation of the filter. If we carry out a first-order approximation, the global error \Delta A_f(f) equals:

\Delta A_f(f) = \sum_{p=0}^{M-1} \frac{\partial A_f(f)}{\partial a_p} \Delta a_p + \sum_{p=0}^{N-1} \frac{\partial A_f(f)}{\partial b_p} \Delta b_p = \sum_{p=0}^{M-1} S_{a_p}(f) \Delta a_p + \sum_{p=0}^{N-1} S_{b_p}(f) \Delta b_p.    (7.52)
In equation (7.52), S_{a_p}(f) denotes the attenuation sensitivity of the filter with respect to the coefficient a_p. Using equation (7.51), it can be expressed as:

S_{a_p}(f) = \frac{\partial A_f(f)}{\partial a_p} = -\frac{20}{\ln 10} \frac{\partial}{\partial a_p} \mathrm{Re}(\ln H_z(z))|_{z = \exp(j 2\pi f / f_s)}   (7.53)

or:

S_{a_p}(f) = \frac{\partial A_f(f)}{\partial a_p} = -\frac{20}{\ln 10} \mathrm{Re}\left[ \frac{1}{H_z(z)} \frac{\partial H_z(z)}{\partial a_p} \right]_{z = \exp(j 2\pi f / f_s)}.   (7.54)

With a cascade structure, the error on the filter attenuation equals, to first order:

\Delta A_f(f) = \sum_{p=1}^{Q} \sum_{j=1}^{2} \frac{\partial A_f(f)}{\partial a_{cpj}} \Delta a_{cpj} + \sum_{p=1}^{Q} \sum_{j=1}^{2} \frac{\partial A_f(f)}{\partial b_{cpj}} \Delta b_{cpj} + \frac{\partial A_f(f)}{\partial K_c} \Delta K_c
             = \sum_{p=1}^{Q} \sum_{j=1}^{2} S_{a_{cpj}}(f) \Delta a_{cpj} + \sum_{p=1}^{Q} \sum_{j=1}^{2} S_{b_{cpj}}(f) \Delta b_{cpj} + S_{K_c}(f) \Delta K_c.   (7.55)

In this section, we compare the attenuation variations \Delta A_f(f) due to quantification for the direct canonic structure and for the cascade structure. We begin with the direct structure and, more particularly, with the calculation of S_{a_p}(f). Using the transfer function of equation (7.1), we have:

\mathrm{Re}\left[ \frac{1}{H_z(z)} \frac{\partial H_z(z)}{\partial a_p} \right]_{z = \exp(j 2\pi f / f_s)} = \mathrm{Re}\left[ \frac{\sum_{i=0}^{M-1} a_i z^{-i}}{\sum_{i=0}^{N-1} b_i z^{-i}} \times \left( -\frac{z^{-p} \sum_{i=0}^{N-1} b_i z^{-i}}{\left( \sum_{i=0}^{M-1} a_i z^{-i} \right)^2} \right) \right]_{z = \exp(j 2\pi f / f_s)}.   (7.56)

For p between 1 and M-1, we obtain:
S_{a_p}(f) = \frac{\partial A_f(f)}{\partial a_p} = -\frac{20}{\ln 10} \mathrm{Re}\left[ \frac{1}{H_z(z)} \frac{\partial H_z(z)}{\partial a_p} \right]_{z = \exp(j 2\pi f / f_s)} = \frac{20}{\ln 10} \mathrm{Re}\left[ \frac{z^{-p}}{\sum_{i=0}^{M-1} a_i z^{-i}} \right]_{z = \exp(j 2\pi f / f_s)}.   (7.57)

In addition, given equation (7.1), since:

\mathrm{Re}\left[ \frac{1}{H_z(z)} \frac{\partial H_z(z)}{\partial b_p} \right]_{z = \exp(j 2\pi f / f_s)} = \mathrm{Re}\left[ \frac{\sum_{i=0}^{M-1} a_i z^{-i}}{\sum_{i=0}^{N-1} b_i z^{-i}} \times \frac{z^{-p}}{\sum_{i=0}^{M-1} a_i z^{-i}} \right]_{z = \exp(j 2\pi f / f_s)},   (7.58)

for p between 0 and N-1 we have:

S_{b_p}(f) = \frac{\partial A_f(f)}{\partial b_p} = -\frac{20}{\ln 10} \mathrm{Re}\left[ \frac{1}{H_z(z)} \frac{\partial H_z(z)}{\partial b_p} \right]_{z = \exp(j 2\pi f / f_s)} = -\frac{20}{\ln 10} \mathrm{Re}\left[ \frac{z^{-p}}{\sum_{i=0}^{N-1} b_i z^{-i}} \right]_{z = \exp(j 2\pi f / f_s)}.   (7.59)

From here, the error on the filter attenuation equals, to first order:

\Delta A_f(f) = -\frac{20}{\ln 10} \mathrm{Re}\left[ \frac{\sum_{p=0}^{N-1} \Delta b_p z^{-p}}{\sum_{i=0}^{N-1} b_i z^{-i}} \right]_{z = \exp(j 2\pi f / f_s)} + \frac{20}{\ln 10} \mathrm{Re}\left[ \frac{\sum_{p=1}^{M-1} \Delta a_p z^{-p}}{\sum_{i=0}^{M-1} a_i z^{-i}} \right]_{z = \exp(j 2\pi f / f_s)}.   (7.60)
In equation (7.60), the error is expressed as a sum of rational functions whose denominators are of degree N-1 or M-1. We will now see that this becomes more harmful the higher the filter's order. To see this, we introduce the poles \{p_i\}_{i=1,...,M-1} of the transfer function:

\mathrm{Re}\left[ \frac{z^{-p}}{\sum_{i=0}^{M-1} a_i z^{-i}} \right]_{z = \exp(j 2\pi f / f_s)} = \mathrm{Re}\left[ \frac{z^{M-1-p}}{\sum_{i=0}^{M-1} a_i z^{M-1-i}} \right]_{z = \exp(j 2\pi f / f_s)} = \mathrm{Re}\left[ \frac{z^{M-1-p}}{\prod_{i=1}^{M-1} (z - p_i)} \right]_{z = \exp(j 2\pi f / f_s)}.   (7.61)

We can express z - p_i as follows:

z - p_i = |z - p_i| \exp(j \psi_i),   (7.62)

where \psi_i is represented in Figure 7.20.

Figure 7.20. Geometric representation of the position of the poles
From here, given equation (7.62), equation (7.61) can be written as:

\mathrm{Re}\left[ \frac{z^{M-1-p}}{\prod_{i=1}^{M-1} (z - p_i)} \right]_{z = \exp(j 2\pi f / f_s)} = \mathrm{Re}\left[ \frac{\exp(j 2\pi (M-1-p) f / f_s)}{\prod_{i=1}^{M-1} |z - p_i| \exp(j \psi_i)} \right]
   = \frac{1}{\prod_{i=1}^{M-1} |z - p_i|} \mathrm{Re}\left[ \exp\left( j \left( 2\pi (M-1-p) \frac{f}{f_s} - \sum_{i=1}^{M-1} \psi_i \right) \right) \right]
   = \frac{1}{\prod_{i=1}^{M-1} |z - p_i|} \cos\left( 2\pi (M-1-p) \frac{f}{f_s} - \sum_{i=1}^{M-1} \psi_i \right).   (7.63)

The higher the number of poles, the greater the chance that z, which describes the unit circle in the complex plane, comes close to one or several of them, and the more likely \prod_i |z - p_i| is to be small. So the higher the filter's order, the larger the first-order error on the filter attenuation. For a second-order direct structure, the attenuation sensitivity of the filter with respect to the denominator coefficients a_p is the lowest. Lastly, the closer the poles are to the unit circle, the larger the errors at certain frequencies, since |z - p_i| becomes smaller.

Now let us consider the cascade structure of the same filter, characterized by equation (7.37). As before, we calculate the sensitivity with respect to the coefficients of the denominators and numerators of the rational fractions of the transfer function, and with respect to the gain K_c:

S_{a_{ck1}}(f) = \frac{\partial A_f(f)}{\partial a_{ck1}} = -\frac{20}{\ln 10} \mathrm{Re}\left[ \frac{1}{H_z(z)} \frac{\partial H_z(z)}{\partial a_{ck1}} \right]_{z = \exp(j 2\pi f / f_s)}.   (7.64)
We now have:

\frac{1}{H_z(z)} \frac{\partial H_z(z)}{\partial a_{ck1}} = \frac{1}{H_z(z)} \times K_c \prod_{i \ne k} \frac{1 + b_{ci1} z^{-1} + b_{ci2} z^{-2}}{1 + a_{ci1} z^{-1} + a_{ci2} z^{-2}} \times \left( -\frac{z^{-1} \left( 1 + b_{ck1} z^{-1} + b_{ck2} z^{-2} \right)}{\left( 1 + a_{ck1} z^{-1} + a_{ck2} z^{-2} \right)^2} \right),   (7.65)

from which we obtain:

S_{a_{ck1}}(f) = \frac{\partial A_f(f)}{\partial a_{ck1}} = -\frac{20}{\ln 10} \mathrm{Re}\left[ \frac{1}{H_z(z)} \frac{\partial H_z(z)}{\partial a_{ck1}} \right]_{z = \exp(j 2\pi f / f_s)} = \frac{20}{\ln 10} \mathrm{Re}\left[ \frac{z^{-1}}{1 + a_{ck1} z^{-1} + a_{ck2} z^{-2}} \right]_{z = \exp(j 2\pi f / f_s)}.   (7.66)

In the same way, we can show that:

S_{K_c}(f) = \frac{\partial A_f(f)}{\partial K_c} = -\frac{20}{\ln 10} \mathrm{Re}\left[ \frac{1}{H_z(z)} \frac{\partial H_z(z)}{\partial K_c} \right]_{z = \exp(j 2\pi f / f_s)} = -\frac{20}{\ln 10} \times \frac{1}{K_c},   (7.67)

S_{a_{ck2}}(f) = \frac{\partial A_f(f)}{\partial a_{ck2}} = \frac{20}{\ln 10} \mathrm{Re}\left[ \frac{z^{-2}}{1 + a_{ck1} z^{-1} + a_{ck2} z^{-2}} \right]_{z = \exp(j 2\pi f / f_s)},   (7.68)

S_{b_{ck1}}(f) = \frac{\partial A_f(f)}{\partial b_{ck1}} = -\frac{20}{\ln 10} \mathrm{Re}\left[ \frac{z^{-1}}{1 + b_{ck1} z^{-1} + b_{ck2} z^{-2}} \right]_{z = \exp(j 2\pi f / f_s)},   (7.69)

S_{b_{ck2}}(f) = \frac{\partial A_f(f)}{\partial b_{ck2}} = -\frac{20}{\ln 10} \mathrm{Re}\left[ \frac{z^{-2}}{1 + b_{ck1} z^{-1} + b_{ck2} z^{-2}} \right]_{z = \exp(j 2\pi f / f_s)}.   (7.70)
Unlike the case of direct form structures, the sensitivities of the cascade structure only involve second-order rational functions. Following the same reasoning as in equations (7.62) and (7.63), we see that the cascade structure is the most appropriate: it minimizes the first-order error on the filter attenuation. So we choose a structure of second-order cells in cascade to realize a filter of even order. For a filter of odd order, a first-order cell must also be used.

To illustrate the difference in sensitivity between the two structures, we present the reduced-precision implementation (with quantification steps of 1/256, then about 1/64) of an IIR type II Chebyshev filter (expected attenuation of 30 dB, normalized cut-off frequency of 0.15) with a direct and with a cascade structure. The obtained result is close to the theoretical model.

[Plots: squared amplitude responses over normalized frequencies 0 to 0.5, in dB from -50 to 0; curves: direct structure, cascade structure, expected filter.]
Figure 7.21. Squared amplitude of a type II Chebyshev filter (expected attenuation of 30 dB, normalized cut-off frequency of 0.15) in finite precision with two types of structure: cascade and direct

7.4.3.2. The influence of quantification on filter stability

Quantification errors influence not only the frequency response of a filter; they also affect the position of the transfer function poles and, consequently, the stability of the filter. Let us look again at the expression of the transfer function in the case of a direct structure:

H_z(z) = \frac{b_0 + b_1 z^{-1} + \cdots + b_{N-1} z^{-(N-1)}}{1 + a_1 z^{-1} + \cdots + a_{M-1} z^{-(M-1)}} = \frac{B(z)}{A(z)} = \frac{\sum_{i=0}^{N-1} b_i z^{-i}}{\sum_{i=0}^{M-1} a_i z^{-i}}, \quad \text{with } a_0 = 1.   (7.1)
We then make the poles \{p_i\}_{i=1,...,M-1} of the transfer function of equation (7.1) appear:

H_z(z) = \frac{B(z)}{\prod_{i=1}^{M-1} \left( 1 - p_i z^{-1} \right)}.   (7.71)

When the filter is realized with one of the direct forms, the coefficients \{a_i\}_{i=1,...,M-1} appear directly in the difference equation. During the implementation of this type of filter, the change to finite precision thus affects the position of the poles \{p_i\} and, consequently, the stability of the filter. In this section, we obtain an estimate of the precision with which the coefficients \{a_i\} must be represented to guarantee the filter's stability.

We make the hypothesis that H_z(z) is the transfer function of a low-pass filter. The poles \{p_i\} of H_z(z) are then all inside the unit disk and close to the point z = 1. We can write:

p_i = 1 + e_i,   (7.72)

where the error e_i is complex and |e_i| << 1 for i = 1, ..., M-1.

Let us assume that a single coefficient a_r is modified and takes the new value a_r^q, so that:

a_r^q = a_r + \delta,   (7.73)

where \delta represents the deviation from the exact value. The denominator of the transfer function H_z(z) is then expressed in the following form:

A^q(z) = \sum_{i=0}^{M-1} a_i z^{-i} + \delta z^{-r} = A(z) + \delta z^{-r}.   (7.74)

So for z = 1, equation (7.74) becomes:

A^q(1) = A(1) + \delta.   (7.75)
The transfer function H_z^q(z) = B^q(z)/A^q(z) then admits a pole at z = 1 if A^q(1) = 0, that is, taking into account equation (7.75):

A(1) + \delta = \sum_{i=0}^{M-1} a_i + \delta = 0.   (7.76)

By introducing the poles \{p_i\}_{i=1,...,M-1} of H_z(z), from equation (7.76) we obtain the following condition:

\delta = -A(1) = -\prod_{i=1}^{M-1} \left( 1 - p_i \right).   (7.77)

Taking into account equation (7.72), the condition of equation (7.77) becomes:

|\delta| = \prod_{i=1}^{M-1} |e_i|.   (7.78)

The filter with transfer function H_z^q(z) = B^q(z)/A^q(z) is then unstable, because the stability criterion (all poles strictly inside the unit disk of the complex plane) is no longer respected. We thus obtain the pole z = 1 for a relatively weak perturbation, since:

|\delta| = \prod_{i=1}^{M-1} |e_i| << 1.   (7.79)

DIGITAL EXAMPLE.– Let us consider a digital filter with the following transfer function:

H_z(z) = \frac{1}{\left( 1 - 0.99 z^{-1} \right)^3} = \frac{1}{1 - 2.97 z^{-1} + 2.9403 z^{-2} - 0.970299 z^{-3}}.   (7.80)
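This example is easy to reproduce numerically. The following sketch (pure Python) checks the value of A(1) and the orders of magnitude of the perturbations that destabilize each structure:

```python
import math

# Direct-form denominator of H(z) = 1/(1 - 0.99 z^-1)^3, equation (7.80).
a = [1.0, -2.97, 2.9403, -0.970299]

# A(1) = (1 - 0.99)^3 = 1e-6: by equation (7.77), perturbing a single
# coefficient by delta = -A(1) places a pole exactly at z = 1.
A1 = sum(a)
print(abs(A1 - 1e-6) < 1e-9)           # True

# Keeping the quantification step below |A(1)| requires roughly 20 bits,
# since 2^-20 < 1e-6 < 2^-19:
print(19 < -math.log2(abs(A1)) < 20)   # True

# A cascade of three cells 1/(1 - 0.99 z^-1) only becomes unstable when the
# coefficient 0.99 reaches 1, i.e. for a perturbation of 1e-2 (a few bits):
print(6 < -math.log2(1e-2) < 7)        # True
```

The three-orders-of-magnitude gap between 1e-6 and 1e-2 is exactly why the cascade form tolerates such coarse coefficient coding.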
Using equation (7.77), if we perturb one of the coefficients of the denominator by \delta = -A(1) = -10^{-6}, the filter implemented in the direct form structure becomes unstable. Since 2^{-20} < 10^{-6} < 2^{-19}, avoiding a perturbation of this size requires coding the coefficients on at least 19 bits. However, if we choose to cascade three first-order filters with identical transfer functions, equal to:

\frac{1}{1 - 0.99 z^{-1}},

the filter is much less sensitive to errors. In this situation, instability only occurs if we increase one of the coefficients by 10^{-2}, which corresponds to the quantification error brought about by a coding on 6 bits.

Sections 7.4.3.1 and 7.4.3.2 have shown the importance of choosing the correct structure for IIR filters during their implementation. If the direct form structure seems a priori the simplest and most natural, it is also highly sensitive to quantification errors: it can lead to a frequency response that does not correspond to the specifications, and the filter can even become unstable. For this reason we choose a cascade structure. However, several questions arise at this point:
– Must we introduce scale factors to avoid saturations in the second-order cells?
– How can we obtain, then sequence, the second-order cells that make up this structure?
– What criterion should we use to pair the poles and zeros of these cells?
The answers to these questions are the topic of sections 7.4.3.3 and 7.4.3.4.

7.4.3.3. Introduction to scale factors

Before discussing cell sequencing, it is important to take into account the saturation problems that can occur at each point of a second-order cell. Because of recursivity, saturation can spread and make the filtering algorithm unusable. To alleviate this problem, we introduce scale factors inside each second-order cell.
For the second-order cell with transfer function:

\frac{1 + b_{ci1} z^{-1} + b_{ci2} z^{-2}}{1 + a_{ci1} z^{-1} + a_{ci2} z^{-2}},   (7.81)

the first step consists of avoiding the saturation of the intermediary output x_1(k) (see Figure 7.22).
To do that, we determine the impulse response f_i(k) of the system whose input is x(k) and whose output is x_1(k). It corresponds to the inverse transform of the transfer function:

F_z(z) = \frac{1}{1 + a_{ci1} z^{-1} + a_{ci2} z^{-2}}.

By introducing at the input of the second-order cell a normalization factor 1/\alpha_i, with:

\alpha_i = \sum_{k=0}^{+\infty} |f_i(k)|,   (7.82)

the intermediary output no longer undergoes saturation, since its magnitude is then bounded by the maximum magnitude of the input x(k). The resulting transfer function equals:

F_{\alpha_i z}(z) = \frac{1}{\alpha_i} \times \frac{1}{1 + a_{ci1} z^{-1} + a_{ci2} z^{-2}}.

Figure 7.22. First part of the structure of a second-order cell with input normalization

We must then compensate for this normalization by the factor \alpha_i in the second part of the cell, in order to recover the transfer function of the i-th second-order cell; that is:

\frac{1 + b_{ci1} z^{-1} + b_{ci2} z^{-2}}{1 + a_{ci1} z^{-1} + a_{ci2} z^{-2}}.
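The factor \alpha_i of equation (7.82) can be evaluated by exciting the recursive part of the cell with a unit impulse and accumulating the absolute values of the response. A minimal sketch (pure Python; the coefficients a_{ci1} = -1.2, a_{ci2} = 0.72 are our own illustrative choices for a stable cell):

```python
def l1_scale_factor(a1, a2, n_terms=2000):
    """alpha = sum_k |f(k)| for F(z) = 1/(1 + a1 z^-1 + a2 z^-2), eq. (7.82).

    f(k) is obtained by running the difference equation of the denominator
    on a unit impulse; for a stable cell the truncated sum converges.
    """
    y1 = y2 = 0.0
    total = 0.0
    for k in range(n_terms):
        x = 1.0 if k == 0 else 0.0
        y = x - a1 * y1 - a2 * y2
        total += abs(y)
        y2, y1 = y1, y
    return total

alpha = l1_scale_factor(-1.2, 0.72)
# Scaling the cell input by 1/alpha guarantees |x1(k)| <= max|x(k)|.
print(alpha > 1.0)  # True: f(0) = 1 alone already contributes 1 to the sum
```

This L1 scaling is the most conservative of the three choices discussed here; the alternatives of equations (7.84) and (7.85) trade off guaranteed headroom against signal-to-noise ratio.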
This operation multiplies by \alpha_i the transfer function G_z(z) = 1 + b_{ci1} z^{-1} + b_{ci2} z^{-2} of the system whose input is x_1(k) and whose output is y(k). So we have:

G_{\alpha_i z}(z) = \alpha_i + \alpha_i b_{ci1} z^{-1} + \alpha_i b_{ci2} z^{-2}.   (7.83)

Figure 7.23. Structure of a second-order cell resulting from input normalization

Two alternative approaches consist of taking the following normalization factors:

\alpha_i = \left( \sum_{k=0}^{+\infty} f_i^2(k) \right)^{1/2}   (7.84)

or

\alpha_i = \max_f |F_z(z)|_{z = \exp(j 2\pi f / f_s)}.   (7.85)

7.4.3.4. Decomposing the transfer function into first- and second-order cells

As we saw in section 7.4.3.1, we cascade first- and second-order cells instead of using an order M direct structure.
When M is even, we recall that we have:

H_z(z) = K_c \prod_{i=1}^{Q} \frac{1 + b_{ci1} z^{-1} + b_{ci2} z^{-2}}{1 + a_{ci1} z^{-1} + a_{ci2} z^{-2}} = K_c \prod_{i=1}^{Q} \frac{N_i(z)}{D_i(z)} = K_c \prod_{i=1}^{Q} H_i(z).   (7.86)

However, depending on the association made between the \{N_i(z)\}_{i=1,...,Q} and the \{D_i(z)\}_{i=1,...,Q}, the resulting filters do not behave the same way with respect to the quantification of the filter coefficients and of the input signal samples. The first question to resolve is how best to pair the poles and zeros of the transfer function. Then we must define the order in which these cells are arranged.

Quantifying the filter coefficients on a finite number of bits modifies the position of the poles and zeros of the filter. The poles and zeros of the second-order cell with transfer function:

H_i(z) = \frac{1 + b_{ci1} z^{-1} + b_{ci2} z^{-2}}{1 + a_{ci1} z^{-1} + a_{ci2} z^{-2}}   (7.87)

respectively equal:

p_{i1} = \frac{1}{2} \left[ -a_{ci1} + \sqrt{a_{ci1}^2 - 4 a_{ci2}} \right] \quad \text{and} \quad p_{i2} = \frac{1}{2} \left[ -a_{ci1} - \sqrt{a_{ci1}^2 - 4 a_{ci2}} \right]   (7.88)

and

z_{i1} = \frac{1}{2} \left[ -b_{ci1} + \sqrt{b_{ci1}^2 - 4 b_{ci2}} \right] \quad \text{and} \quad z_{i2} = \frac{1}{2} \left[ -b_{ci1} - \sqrt{b_{ci1}^2 - 4 b_{ci2}} \right].   (7.89)

Let us now look more closely at the poles; depending on the values of the coefficients a_{ci1} and a_{ci2}, the nature of the poles differs.
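Equation (7.88) lends itself to a direct numerical cross-check. The sketch below (pure Python; the coefficient pairs are arbitrary test values) compares the classical second-order coefficient conditions |a_{ci2}| < 1 and |a_{ci1}| < 1 + a_{ci2} with an explicit computation of the pole magnitudes:

```python
import cmath

def in_stability_triangle(a1, a2):
    """Classical coefficient test for z^2 + a1 z + a2: |a2| < 1 and |a1| < 1 + a2."""
    return abs(a2) < 1 and abs(a1) < 1 + a2

def poles(a1, a2):
    """p_{i1}, p_{i2} = (-a1 +/- sqrt(a1^2 - 4 a2)) / 2, equation (7.88)."""
    d = cmath.sqrt(a1 * a1 - 4 * a2)
    return (-a1 + d) / 2, (-a1 - d) / 2

# Arbitrary (a_{ci1}, a_{ci2}) test pairs: stable and unstable, complex and real.
cases = [(0.5, 0.3), (-1.2, 0.5), (1.0, -0.2), (1.5, 0.4), (0.0, 1.1)]
for a1, a2 in cases:
    p1, p2 = poles(a1, a2)
    assert in_stability_triangle(a1, a2) == (abs(p1) < 1 and abs(p2) < 1)
print("coefficient test agrees with pole magnitudes")
```

Note that for complex-conjugate poles |p_{i1}|^2 = |p_{i2}|^2 = a_{ci2}, which is why the coefficient a_{ci2} alone controls the distance of such poles to the unit circle.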
Figure 7.24. Stability triangle in the (a_{ci1}, a_{ci2}) plane: poles are complex above the parabola a_{ci1}^2 = 4 a_{ci2} and real below it; quantification of the filter's coefficients can move poles outside the stability zone

So when a_{ci1}^2 < 4 a_{ci2}, the poles are complex conjugates; otherwise they are real. To ensure stability, we must have:

|p_{i1}| < 1 \quad \text{and} \quad |p_{i2}| < 1.   (7.90)

This constraint implies that the two following inequalities are satisfied(1):

|a_{ci2}| < 1   (7.91)

and

|a_{ci1}| \le 1 + a_{ci2}.   (7.92)

From there, when working in infinite precision, the positions that the poles can occupy in the (a_{ci1}, a_{ci2}) plane lie within the triangle shown in Figure 7.24. When working in finite precision, a_{ci1} and a_{ci2} take discrete values, respectively

(1) The Jury criterion presented in Chapter 2 is an alternative approach that leads to these two inequalities.
between -1 and 1 and between -2 and 2. This means we must take care that the corresponding poles stay within the stability triangle.

The pairing of the transfer function poles and zeros is chosen so as to minimize the power of the output noise arising from quantification errors on the samples and on the results of the operations. We write this quantity P_{N total output}. The quantification of the input signal introduces a noise e_x at the input, whose power equals q^2/12, where q designates the quantification step. At the output of the first cell, it gives rise to a noise whose power can be calculated in the frequency domain, and equals:

P_{N output 1} = \frac{q^2}{12} \times \int_{-1/2}^{1/2} \left| \frac{N_1(f)}{D_1(f)} \right|^2 df.   (7.92)

At the output of the cascaded structure of N/2 cells (we assume N is even), it thus generates an error whose power equals:

P_{N output} = \frac{q^2}{12} \times \int_{-1/2}^{1/2} \prod_{i=1}^{N/2} \left| \frac{N_i(f)}{D_i(f)} \right|^2 df.   (7.93)

However, this quantification error of the input signal is not the only element to consider. Inside each cell, the result of each multiplication of two operands of M bits is normally coded on 2M bits. If several multiplications are chained, the number of bits on which each result is coded keeps growing, which is not feasible in practice. For this reason, we truncate or round off the result of each multiplication to M bits. In Figure 7.25, this truncation from 2M to M bits is modeled by the addition of an error written e_{ji}(n). This operation introduces a global error e_j whose power is q^2/12.
Figure 7.25. Modeling of truncation errors for a second-order structure: a single global error e_j(k) added inside the cell, or individual errors e_{j0}(k), ..., e_{j5}(k) added after each multiplier of the normalized cell
From there, if we consider the errors generated in cell j, the resulting noise at the filter's output equals:

P_{N j output} = \frac{q^2}{12} \times \int_{-1/2}^{1/2} \prod_{i=j+1}^{N/2} \left| \frac{N_i(f)}{D_i(f)} \right|^2 df.   (7.94)

Using equations (7.93) and (7.94), the total noise at the filter's output thus equals:

P_{N total output} = \frac{q^2}{12} \times \int_{-1/2}^{1/2} \sum_{j=1}^{N/2} \left( \prod_{i=j}^{N/2} \left| \frac{N_i(f)}{D_i(f)} \right|^2 \right) df.   (7.95)

In order to minimize P_{N total output}, we must minimize each factor |N_i(f)/D_i(f)|^2. The most delicate problem is the value taken by |N_i(f)/D_i(f)|^2 at the frequency associated with a pole of H_i(z) = N_i(z)/D_i(z). To avoid an overly high amplitude response at this frequency, we must choose a zero of H_i(z) close to the pole, so as to best neutralize the pole's influence. Once the pairing of the poles and zeros of the transfer function has been done, the ordering of the cells is determined by minimizing the power of the output noise. We should keep in mind that much work is being done to improve this procedure so as to further reduce noise levels.
Chapter 8

Two-Dimensional Linear Filtering

Chapter written by Philippe BOLON.

8.1. Introduction

This chapter presents several digital filtering techniques applied to two-dimensional data. The most common applications concern the processing of images, but other kinds of data can be processed using similar techniques, such as time-frequency and time-scale representations of one-dimensional signals.

The fundamental principles of this kind of filtering are based on the 2-D sampling theorem and on the Fourier transform. This chapter includes a brief reminder of continuous models and stationary 2-D linear filtering, since most of the later explanations make use of these. We then introduce two-dimensional sampling techniques. Filtering operations are then discussed in both the spatial and the frequency domains.

8.2. Continuous models

8.2.1. Representation of 2-D signals

In a natural way, and as with temporal signals, the usual model for representing two-dimensional signals is the functional model, which can possibly be extended to
distributions. Since we are most often dealing with images, the temporal coordinate is replaced by spatial coordinates, written x and y:

s : \mathbb{R}^2 \to \mathbb{R}, \quad (x, y) \mapsto s(x, y).   (8.1)

Under normal conditions, that is, for finite energy functions, signals can be described in the Fourier domain by means of the spatial frequencies u and v, using the bidimensional Fourier transform (FT):

S(u, v) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} s(x, y) \exp(-j 2\pi u x) \exp(-j 2\pi v y) \, dx \, dy.   (8.2)

It should be noticed that the 2-D transform is separable. The 2-D calculation is obtained by linking two calculations of the one-dimensional (1-D) transform, integrating successively with respect to each of the two variables:

S(u, v) = \int_{-\infty}^{+\infty} \left( \int_{-\infty}^{+\infty} s(x, y) \exp(-j 2\pi u x) \, dx \right) \exp(-j 2\pi v y) \, dy.   (8.3)

A linear filter transforms the 2-D signal s(x,y) into another 2-D signal, written here w(x,y).

Figure 8.1. 2-D linear filtering: s(x,y) enters the filter (h, H) and w(x,y) is output

The linear filtering operation is represented, in the spatial domain, by the following convolution equation:

w(x, y) = h(x, y) * s(x, y) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} h(\alpha, \beta) \, s(x - \alpha, y - \beta) \, d\alpha \, d\beta.   (8.4)

It can be described and interpreted in the frequency domain by the product of the Fourier transforms:

W(u, v) = H(u, v) S(u, v),   (8.5)
where W(u,v), H(u,v) and S(u,v) denote the two-dimensional transforms of w(x,y), h(x,y) and s(x,y) respectively.

8.2.2. Analog filtering

When working in the optical domain, we frequently encounter stationary linear filtering operations. This is especially true of the blurring effects introduced by diffraction phenomena, which may be due to limited lens apertures, poor focusing, or movement during exposure. Figure 8.2 gives an example of the effect of moving the camera during exposure.

Figure 8.2. Sharp image and image blurred by the effect of movement

When the displacement speed is constant during the image acquisition time, the impulse response of the filter corresponds to a 1-D rectangular function (see Figure 8.3) whose width is proportional to the amplitude of the displacement occurring during the acquisition.

In general, the effects introduced by optical devices are of the low-pass type. To compensate for these effects, digital techniques must be used. These will be discussed in the following sections.
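The separability noted in equation (8.3) carries over directly to the discrete Fourier transform used for digital images: a 2-D transform can be obtained as 1-D transforms over rows followed by 1-D transforms over columns. A minimal check (pure Python, brute-force DFTs on a small array):

```python
import cmath

def dft1(v):
    """1-D DFT of a list of samples."""
    N = len(v)
    return [sum(v[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def dft2_separable(img):
    """2-D DFT computed as 1-D DFTs over rows, then over columns."""
    rows = [dft1(r) for r in img]
    cols = [dft1(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def dft2_direct(img):
    """2-D DFT computed directly from the double sum."""
    M, N = len(img), len(img[0])
    return [[sum(img[x][y] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N))
                 for x in range(M) for y in range(N))
             for v in range(N)]
            for u in range(M)]

img = [[float((x * y + x) % 5) for y in range(4)] for x in range(4)]
match = all(abs(p - q) < 1e-8
            for ra, rb in zip(dft2_separable(img), dft2_direct(img))
            for p, q in zip(ra, rb))
print(match)  # True
```

For an M x N image, the separable evaluation replaces one double sum per output point with two cascaded single sums, which is the same complexity argument made later for separable spatial filters.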
Figure 8.3. Impulse response h(x) corresponding to the movement effect

8.3. Discrete models

8.3.1. 2-D sampling

For digital images, data are stored in the form of 2-D tables. The usual representation is that of real functions defined on Z^2:

s : \mathbb{Z}^2 \to \mathbb{R}, \quad (k, l) \mapsto s(k, l),   (8.6)

where k represents the column index and l the row index.

The link between the continuous model and the discrete model is established by the sampling operation, represented mathematically by the product of the continuous image with a bidimensional Dirac comb. Here we assume that the multiplication conditions of the distributions are satisfied, that is, that the function representing the analog image is sufficiently regular. For the sake of simplicity and without loss of generality, we first assume that the sampling steps in the horizontal and vertical directions are identical and equal to unity. The bidimensional Dirac comb is written:

\Xi(x, y) = \sum_{k,l = -\infty}^{+\infty} \delta(x - k, y - l),   (8.7)

where \delta(x,y) is the bidimensional Dirac distribution centered on the origin:

\langle \delta(x, y), \varphi(x, y) \rangle = \varphi(0, 0),   (8.8)

where \varphi(x,y) is a test function.
The sampled signal associated with the analog signal s(x,y) is then given by the following formula:

s_e(x, y) = s(x, y) \, \Xi(x, y) = \sum_{k,l = -\infty}^{+\infty} s(k, l) \, \delta(x - k, y - l).   (8.9)

As with temporal signals, it is possible to characterize 2-D sampled signals in the frequency domain. Since the 2-D Dirac comb is invariant under the 2-D Fourier transform, taking the Fourier transform of both sides of equation (8.9) yields:

S_e(u, v) = S(u, v) * \Xi(u, v) = \sum_{m,n \in \mathbb{Z}} S(u - m, v - n).   (8.10)

As in the 1-D case, a periodization effect is observed in the Fourier domain.

In the general case, the sampling steps can differ between rows and columns. Let u_e and v_e be the sampling frequencies in x and y. The sampling steps are then:

\Delta x = \frac{1}{u_e} \quad \text{and} \quad \Delta y = \frac{1}{v_e}.   (8.11)

The sampled signal obtained from equation (8.9) becomes:

s_e(x, y) = s(x, y) \, \Xi_{\Delta x, \Delta y}(x, y) = \sum_{k,l = -\infty}^{+\infty} s(k, l) \, \delta(x - k \Delta x, y - l \Delta y),   (8.12)

and its frequency representation is equal to:

S_e(u, v) = u_e v_e \, S(u, v) * \Xi_{u_e, v_e}(u, v) = u_e v_e \sum_{m,n \in \mathbb{Z}} S(u - m u_e, v - n v_e).   (8.13)

Figures 8.4 and 8.5 give two examples of Dirac combs and their Fourier transforms. We see that the narrower the comb is in the spatial domain, the wider its transform is in the Fourier domain, and vice versa.
Figure 8.4. First example of the Fourier transform of a Dirac comb

Figure 8.5. Second example. The Fourier transform of a Dirac comb is a Dirac comb. The support of the Dirac distributions is represented in white on the black background

As indicated by equation (8.13), the spectrum of the sampled signal is periodized with periods u_e and v_e. Figure 8.6 illustrates this periodization effect.
Figure 8.6. Spectrum of a sampled image: a/ initial image; b/ spectrum of the initial image; c/ spectrum of the initial image undersampled by a factor of 3; d/ image reconstructed by low-pass filtering; e/ spectrum of the initial image undersampled by a factor of 4; f/ image reconstructed by low-pass filtering
8.3.2. The aliasing phenomenon and Shannon's theorem

8.3.2.1. Reconstruction by linear filtering (Shannon's theorem)

As discussed above, sampling produces periodized patterns in the frequency domain. The separation conditions for these patterns in the Fourier domain are similar to those for 1-D signals: the sampling frequency must be higher than twice the bandwidth of the signal. It is then possible to reconstruct the continuous 2-D signal exactly from its samples by means of a linear filter (Shannon's theorem). The frequency response of the reconstruction filter is the function of amplitude 1/(u_e v_e) on the rectangle [-u_e/2, u_e/2] \times [-v_e/2, v_e/2] and zero elsewhere:

H(u, v) = \frac{1}{u_e v_e} \, \mathbf{1}_{[-u_e/2, u_e/2] \times [-v_e/2, v_e/2]}(u, v).   (8.14)

It corresponds to the impulse response h(x,y) given by:

h(x, y) = \mathrm{sinc}(u_e x) \, \mathrm{sinc}(v_e y).   (8.15)

8.3.2.2. Aliasing effect

This linear filter extracts the low frequencies of the sampled signal. When the signal carries energy at a frequency that is too high in relation to the cut-off frequency, it is no longer possible to separate the initial pattern from the patterns obtained by periodization, and the filtering produces artifacts. This is what is meant by aliasing.

Exercise
An image is described by the function s(x,y) = 1 + cos[2\pi(u_0 x + v_0 y)], with u_0 = 0.2 and v_0 = 0.6. This image is sampled with a unit sampling step, then filtered by a low-pass filter of separable cardinal sine type (see equations (8.14) and (8.15)) with cut-off frequencies u_c = v_c = 1/2.
What is the orientation of the structures that can be observed? Calculate the horizontal (row) and vertical (column) frequencies within the low-pass filtered image. What is the new orientation of the structures within the filtered image?
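A numerical sketch of this exercise (pure Python; the frequency-folding helper is our own):

```python
import math

def fold(f):
    """Fold a normalized frequency to its baseband alias (nearest-integer fold)."""
    return f - round(f)

u0, v0 = 0.2, 0.6
ua, va = fold(u0), fold(v0)   # 0.2 and -0.4: frequencies of the filtered image

# On the integer sampling grid, the two components are indistinguishable:
agree = all(abs(math.cos(2 * math.pi * (u0 * k + v0 * l)) -
                math.cos(2 * math.pi * (ua * k + va * l))) < 1e-9
            for k in range(8) for l in range(8))
print(ua, va, agree)
```

Since the structures of a cosine image are perpendicular to its frequency vector, the visible orientation changes from that of (0.2, 0.6) in the original image to that of (0.2, -0.4) in the low-pass filtered image.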
Figure 8.7. Aliasing phenomenon: a/ initial image (Brodatz texture d6); b/ spectrum of the initial image; c/ low-pass version (fc = 0.20); d/ low-pass version (fc = 0.12)

Figure 8.7 shows the phenomenon with a textured image. The original image includes quasiperiodic patterns of square shape, forming alignments that are mainly horizontal and vertical. This is reflected in the Fourier transform of the image. A first low-pass filtering, with a fairly high cut-off frequency, yields a blurring effect related to the elimination of the high-frequency spikes (see Figure 8.7c). A second low-pass filtering, with a lower cut-off frequency, produces a more blurred image. Although this result is expected, it is accompanied by another phenomenon: the patterns now form alignments along the image's diagonals. This orientation change is an illustration of the aliasing phenomenon (see Figure 8.7d).
8.4. Filtering in the spatial domain

A filtering operation can be carried out by discrete convolution, which is the digital version of equation (8.4). When 2-D signals represent images of the real world, they are modeled by non-stationary random fields, which can however be considered locally stationary. Optimal operators then turn out to be non-stationary as well, and implementation techniques consist of locally adapting the coefficients of a linear filter. This is why the characteristics of linear filters are introduced here.

8.4.1. 2-D discrete convolution

Let s be the input signal and w the output signal. The input/output relationship is given by a discrete convolution equation:

w(k, l) = \sum_{i \in \mathbb{Z}} \sum_{j \in \mathbb{Z}} h(i, j) \, s(k - i, l - j),   (8.16)

where h(k,l) denotes the impulse response of the 2-D filter.

Figure 8.8. 2-D discrete linear filtering

The filter can be implemented using equation (8.16) when the number of non-null coefficients of the impulse response is finite. At each pixel, the inner product of the "coefficients" vector with the "data" vector has to be calculated. For an impulse response of M × N coefficients, the number of multiplications per pixel is MN. When the 2-D signal represents an image, odd dimensions are generally chosen in order to symmetrize the processing around the current pixel.
Figure 8.9. Implementation of a linear filter by calculating an inner product: the analysis window of the impulse response support, spanning rows k-m to k+m and columns l-n to l+n, slides over the image

By taking M = 2m + 1 and N = 2n + 1, the output signal at pixel (k,l) is given by:

w(k, l) = \sum_{i=-m}^{+m} \sum_{j=-n}^{+n} h(i, j) \, s(k - i, l - j) = \mathbf{h}^t \mathbf{s}   (8.17)

with:

\mathbf{h} = \left[ h(-m, -n), h(-m+1, -n), \ldots, h(m, n) \right]^t \quad \text{and} \quad \mathbf{s} = \left[ s(k+m, l+n), s(k+m-1, l+n), \ldots, s(k-m, l-n) \right]^t.

The question of setting the filter coefficients (filter synthesis) is similar to that of 1-D linear filters (see Chapter 5). It leads to an optimization calculation according to statistical or frequency criteria. The main applications of linear filtering in image processing are noise reduction (optimization of a low-pass filter) and edge detection (band-pass or high-pass filtering).
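The inner-product implementation of equation (8.17) can be sketched in a few lines (pure Python on nested lists; restricting the output to pixels whose window fits inside the image is our own boundary choice, one the text leaves open):

```python
def conv2d(s, h):
    """w(k,l) = sum_{i,j} h(i,j) s(k-i, l-j) for an odd-sized kernel h,
    computed only where the (2m+1) x (2n+1) window fits inside the image."""
    m, n = len(h) // 2, len(h[0]) // 2
    rows, cols = len(s), len(s[0])
    out = []
    for k in range(m, rows - m):
        row = []
        for l in range(n, cols - n):
            acc = 0.0
            for i in range(-m, m + 1):
                for j in range(-n, n + 1):
                    acc += h[i + m][j + n] * s[k - i][l - j]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 averaging mask leaves a constant image unchanged (MN = 9 products/pixel).
s = [[5.0] * 6 for _ in range(5)]
h = [[1.0 / 9] * 3 for _ in range(3)]
w = conv2d(s, h)
print(all(abs(v - 5.0) < 1e-9 for r in w for v in r))  # True
```

The four nested loops make the MN multiplications per output pixel explicit, which is the cost that the separable implementation of the next section reduces.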
Figure 8.10. Linear filtering carried out by discrete convolution: (a) original image; (b) image filtered with the horizontal gradient; (c) image filtered with the vertical gradient. The impulse responses associated with (b) and (c) are, respectively:

[1 0 −1; 1 0 −1; 1 0 −1]  and  [1 1 1; 0 0 0; −1 −1 −1]

8.4.2. Separable filters

Whenever possible, separable filters are used in order to reduce computational complexity. The condition that has to be satisfied by the impulse response is:

h(i,j) = h1(i) h2(j).    (8.18)
Hence, the output value at pixel (k,l) (see equation (8.16)) is given by:

w(k,l) = Σ_j h2(j) ( Σ_i h1(i) s(k−i, l−j) )    (8.19)

which we rewrite by introducing the intermediary signal φ at the pixel (k,l):

φ(k,l) = Σ_i h1(i) s(k−i, l).    (8.20)

Equation (8.19) can then be rewritten as:

w(k,l) = Σ_j h2(j) φ(k, l−j).    (8.21)

The processing is thus equivalent to cascading two 1-D linear filters. The first one, whose impulse response is h1, is applied to the rows of image s; the second one, whose impulse response is h2, is applied to the columns of image φ.

Figure 8.11. Separable linear filtering: image s is filtered row by row with impulse response h1, producing image φ, which is then filtered column by column with impulse response h2 to give image w

Both 1-D filters are implemented according to the inner product scheme (see equation (8.17)). The number of multiplications per pixel is equal to M + N instead of MN for the non-separable implementation. Reducing the filter complexity comes at the expense of additional memory space to store the intermediate image, and of the delay introduced in real-time processing, since the "column" processing can only begin after the whole image has been digitized.
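The equivalence between the separable cascade of equations (8.20)–(8.21) and the non-separable convolution of equation (8.16) can be checked numerically. The sketch below (our own code, assuming NumPy and zero prolongation outside the image) compares both implementations on a random image.

```python
import numpy as np

def conv1d_zero(x, g):
    """1-D convolution with zero prolongation outside the support (odd-length g)."""
    m = len(g) // 2
    y = np.zeros(len(x))
    for k in range(len(x)):
        for i in range(-m, m + 1):
            if 0 <= k - i < len(x):
                y[k] += g[i + m] * x[k - i]
    return y

def conv2d_separable(s, h1, h2):
    """Eq. (8.20): filter along the first index with h1, giving phi;
    eq. (8.21): filter phi along the second index with h2."""
    phi = np.apply_along_axis(conv1d_zero, 0, s, h1)
    return np.apply_along_axis(conv1d_zero, 1, phi, h2)

def conv2d_full(s, h):
    """Reference non-separable implementation of eq. (8.16)."""
    m, n = h.shape[0] // 2, h.shape[1] // 2
    K, L = s.shape
    w = np.zeros((K, L))
    for k in range(K):
        for l in range(L):
            for i in range(-m, m + 1):
                for j in range(-n, n + 1):
                    if 0 <= k - i < K and 0 <= l - j < L:
                        w[k, l] += h[i + m, j + n] * s[k - i, l - j]
    return w

h1 = np.array([1.0, 2.0, 1.0]) / 4.0
h2 = np.array([1.0, 2.0, 1.0]) / 4.0
rng = np.random.default_rng(0)
s = rng.random((6, 7))
w_sep = conv2d_separable(s, h1, h2)          # M + N multiplications per pixel
w_ref = conv2d_full(s, np.outer(h1, h2))     # MN multiplications per pixel
```

The outer product np.outer(h1, h2) builds the 2-D mask h(i,j) = h1(i) h2(j) of equation (8.18), so the two outputs coincide.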
Exercise

Is the filter with impulse response h separable? If it is, determine the 1-D impulse responses h1 and h2.

h = [−1/48 −1/12 −1/8 −1/12 −1/48;
      1/24   1/6   1/4   1/6   1/24;
      0      0     0     0     0;
      1/24   1/6   1/4   1/6   1/24;
     −1/48 −1/12 −1/8 −1/12 −1/48]

8.4.3. Separable recursive filtering

When the impulse response of the filter to be realized is large, the number of multiplications becomes significant and the calculation time increases. For 1-D filters, the solution often involves using recursive implementations. With bidimensional signals, recursive filtering leads to complications, because the representation of causality is not unique. However, it is possible to take advantage of recursive implementations when the filter is separable: the problem is then reduced to that of realizing a non-causal 1-D impulse response.

This technique consists of splitting the non-causal impulse response h into a causal part, written hc, and an anti-causal part, written hnc. Using the synthesis techniques developed for 1-D filters, a first filter, denoted Ψ1, is realized by means of a causal difference equation and applied to the initial signal. The second filter, denoted Ψ2, is applied to the initial signal reversed. The global output is obtained by adding the outputs of the filters Ψ1 and Ψ2:

z_k = Ψ1(s_k, s_{k−1}, …, z_{k−1}, z_{k−2}, …)  for k increasing from k_min to k_max
w_k = Ψ2(s_k, s_{k+1}, …, w_{k+1}, w_{k+2}, …)  for k decreasing from k_max to k_min
y_k = z_k + w_k    (8.22)
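A mask h(i,j) = h1(i) h2(j) is a rank-1 matrix, so this kind of exercise can be answered numerically with an SVD: if only the first singular value is non-zero, the first left and right singular vectors give the two 1-D profiles. This is a hedged sketch (our own code, not the book's method), shown here on a hypothetical mask built separable on purpose.

```python
import numpy as np

def separate(h, tol=1e-10):
    """If h(i,j) = h1(i) h2(j), the mask matrix has rank 1; the SVD then
    recovers the two 1-D profiles (up to a scale factor shared between them)."""
    u, sv, vt = np.linalg.svd(h)
    if sv[1] > tol * sv[0]:
        return None                       # rank > 1: not separable
    return u[:, 0] * sv[0], vt[0, :]

# hypothetical test mask, built separable on purpose
h1 = np.array([-0.5, 1.0, 0.0, 1.0, -0.5])          # vertical profile
h2 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 24.0     # horizontal profile
h = np.outer(h1, h2)
res = separate(h)
```

The decomposition is only unique up to a scale factor exchanged between h1 and h2; what matters is that their outer product reproduces h.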
Figure 8.12. Decomposition of the impulse response h into causal (hc) and anti-causal (hnc) parts

Figure 8.13. Realization of a non-causal filter by decomposition into causal and anti-causal parts: the "forward" filter Ψ1 and the "backward" filter Ψ2 are both applied to the signal s, and their outputs z and w are added to give the global output y
Let us consider, for example, the realization of a separable smoothing filter whose impulse response h(i,j) = h(i) h(j) is represented by:

h(k) = a^{|k|},  k ∈ Z,  0 < a < 1.    (8.23)

This response can be decomposed into the sum of a causal response and of an anti-causal response:

h_c(k) = a^k u(k)  and  h_{nc}(k) = a^{−k} u(−k−1).    (8.24)

Figure 8.14. Representation of the causal (hc) and anti-causal (hnc) components for a = 0.8

The realization of the causal response is usually carried out by using the difference equation:

z_k = s_k + a z_{k−1}  with k increasing.    (8.25)

To realize the anti-causal part, we must take into account the shift of the impulse response. The difference equation is:

w_k = a s_{k+1} + a w_{k+1}  with k decreasing.    (8.26)
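The pair of recursions (8.25)–(8.26) can be checked on a unit impulse, for which the global output of equation (8.22) must reproduce the desired response a^|k|. A minimal sketch (our own code, NumPy assumed):

```python
import numpy as np

a = 0.8
s = np.zeros(21)
c = 10
s[c] = 1.0                     # unit impulse: the output is the impulse response

# causal part (eq. 8.25): z_k = s_k + a z_{k-1}, k increasing
z = np.zeros_like(s)
acc = 0.0
for k in range(len(s)):
    acc = s[k] + a * acc
    z[k] = acc

# anti-causal part (eq. 8.26): w_k = a s_{k+1} + a w_{k+1}, k decreasing
w = np.zeros_like(s)
acc = 0.0
for k in range(len(s) - 2, -1, -1):
    acc = a * s[k + 1] + a * acc
    w[k] = acc

y = z + w                      # global output, eq. (8.22)
expected = a ** np.abs(np.arange(len(s)) - c)    # h(k) = a^|k| around the impulse
```

The causal recursion produces a^k for k ≥ 0 and the anti-causal one a^{-k} for k ≤ -1, so their sum is exactly a^|k|.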
This implementation technique requires three multiplications per pixel for the 1-D processing, and six multiplications for a complete filtering. A memory plane is needed to store the intermediary image, plus two rows (or columns) to store the 1-D signals z_k and w_k.

For images that are sufficiently large in relation to the "time constant" of the filters, it is possible to simplify the 1-D filtering device by carrying out the forward and backward processes in serial mode rather than in parallel mode. Solving the following difference equations:

z_k = β s_k + a z_{k−1},  k increasing    (8.27)
y_k = γ z_k + a y_{k+1},  k decreasing

yields the impulse response given by:

h(k) = (β γ / ((1 − a)(1 + a))) a^{|k|}.    (8.28)

We find the desired impulse response with the correct choice of the coefficients β and γ.

8.4.4. Processing of side effects

In the previous sections, we have considered the finite dimension of image supports in an abstract way. In practice, the problem arises for pixels on the border of the image, when the support of the impulse response is not totally included in that of the image.

Figure 8.15. Pixels situated at the image border: the support of the impulse response extends beyond the image support
Applying the inner product relation of equation (8.17) means we must prolong the image outside its natural support. There are two main techniques for processing side effects, according to the hypothesis made for this extension.

8.4.4.1. Prolonging the image by pixels of null intensity

In the absence of information about the image outside its support, we can assume that the unobserved pixels have a null intensity (black pixels). For a non-recursive filter, it is enough to limit the inner product of equation (8.17) to the pixels situated in the image. With an impulse response of size N × M, processing the first pixel (in the upper left corner of the image) involves only (n + 1) × (m + 1) coefficients of the impulse response (see Figure 8.16). For the following pixel along the row, (n + 2) × (m + 1) coefficients are involved, and so on. There follows a transition zone that corresponds to the rising edge of the filter, whose extension is the half-diameter of the support of the impulse response.

Figure 8.16. Processing of an edge pixel: with N = 2n + 1 and M = 2m + 1, only the coefficients of the impulse response support that overlap the image are taken into account around the current pixel

This phenomenon exists with both separable and non-separable filters. For a separable recursive filter, it is enough to set the initial conditions of the difference equations in equation (8.22) to:

z_{k_min−1} = z_{k_min−2} = … = 0
w_{k_max+1} = w_{k_max+2} = … = 0    (8.29)
8.4.4.2. Prolonging by duplicating the border pixels

The second technique consists of artificially extending the image support by attaching supplementary rows and columns that are identical to the outermost rows and columns of the image, as shown in Figure 8.17.

Figure 8.17. Duplication of border pixels: a 3 × 3 image (lower right block) extended by two auxiliary rows and two auxiliary columns that duplicate its first row and first column:

10 10 10 12 15
10 10 10 12 15
10 10 10 12 15
11 11 11 13 10
 6  6  6 13 10

With a non-recursive filter of finite impulse response, we directly apply equation (8.16) (or (8.19) for a separable filter) to the data of the extended image. For a stationary, separable filter with infinite impulse response realized recursively (see section 8.4.3), the functions Ψ1 and Ψ2 of equation (8.22) are linear combinations of the following types:

z_k = Σ_{p=0}^{P−1} a_p s_{k−p} + Σ_{p=1}^{P−1} b_p z_{k−p}  for the "forward" equation    (8.30)

w_k = Σ_{p=0}^{P−1} a′_p s_{k+p} + Σ_{p=1}^{P−1} b′_p w_{k+p}  for the "backward" equation    (8.31)

For a signal of length L, whose index values vary between 0 and L−1, the initial conditions must then verify, for the "forward" equation:

z_{−1} = z_{−2} = … = α  with  α (1 − Σ_{p=1}^{P−1} b_p) = s_0 Σ_{p=0}^{P−1} a_p    (8.32)

and, for the "backward" equation:

w_L = w_{L+1} = … = β  with  β (1 − Σ_{p=1}^{P−1} b′_p) = s_{L−1} Σ_{p=0}^{P−1} a′_p.    (8.33)
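The duplication of border pixels corresponds directly to NumPy's "edge" padding mode; a minimal sketch, reproducing the layout of Figure 8.17 (here padding on all four sides):

```python
import numpy as np

img = np.array([[10., 12., 15.],
                [11., 13., 10.],
                [ 6., 13., 10.]])

# attach two auxiliary rows and columns identical to the outermost ones
ext = np.pad(img, pad_width=2, mode="edge")
```

The extended image keeps the original block unchanged in its interior, while the auxiliary rows and columns replicate the border intensities.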
8.4.4.3. Other approaches

There are other techniques for processing border effects with finite impulse response filters. We can, for example, decide not to calculate the output of the filter for the pixels situated on the border of the image.

Figure 8.18. Unprocessed pixels: the border pixels of the image are left unprocessed

This makes it possible to avoid an arbitrary choice of the prolongation mode of the image support, but it also reduces the usable size of the filtered image. When several filters are cascaded, the usable image rapidly becomes smaller. However, this effect is lessened if we use separable filters.

Another approach, usable if the filter's coefficients are all of the same sign, consists of renormalizing the coefficients and, in calculating the output value, only using the pixels situated inside the image support. Let (k,l) be a pixel situated at the border of the image, and assume that all the coefficients of the filter are positive. Let A be the sum of the coefficients. Using the expression in equation (8.17) again, we write g for the subvector of the coefficients corresponding to the pixels situated inside the image support, and B for the sum of the coefficients of g. Writing σ for the vector of the inputs (situated inside the image support) and f for the vector

f = (A/B) g    (8.34)

the output w(k,l) is then calculated as follows:

w(k,l) = f^t σ    (8.35)
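A minimal sketch of the renormalization of equations (8.34)–(8.35) (our own code, NumPy assumed): at each pixel, only the coefficients falling inside the image are used, rescaled by A/B so that their sum stays equal to the total sum A of the mask.

```python
import numpy as np

def filter_renormalized(s, h):
    """Border processing by renormalization (eqs. 8.34-8.35): at each pixel,
    only in-image coefficients are used, rescaled by A/B so that their
    sum stays equal to the total sum A of the mask coefficients."""
    m, n = h.shape[0] // 2, h.shape[1] // 2
    A = h.sum()
    K, L = s.shape
    w = np.zeros((K, L))
    for k in range(K):
        for l in range(L):
            acc, B = 0.0, 0.0
            for i in range(-m, m + 1):
                for j in range(-n, n + 1):
                    if 0 <= k - i < K and 0 <= l - j < L:
                        acc += h[i + m, j + n] * s[k - i, l - j]
                        B += h[i + m, j + n]
            w[k, l] = (A / B) * acc
    return w

h = np.full((3, 3), 1.0 / 9.0)      # positive coefficients, A = 1
s = 5.0 * np.ones((6, 6))
w = filter_renormalized(s, h)
```

Unlike zero prolongation, this scheme preserves the mean level of the image at the borders: a constant image stays constant everywhere, corners included.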
8.5. Filtering in the frequency domain

When the filter properties are specified in the frequency domain, or when the impulse response has a very large support, it is advantageous to carry out the filtering operation in the Fourier domain using equation (8.5). The change of representation domain is done using the 2-D discrete Fourier transform. Consequently, the input/output relation in the spatial domain is no longer described by a simple discrete convolution equation, but by a circular convolution equation. We will look at the effects of this phenomenon at the end of this section.

8.5.1. 2-D discrete Fourier transform (DFT)

Let us consider a sequence with two indices, {x_{k,l}}. To simplify the presentation, we assume that the two indices have the same variation domain, from 0 to N−1. The transformed sequence {x̃_{m,n}} is given by:

x̃_{m,n} = Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} x_{k,l} e^{−j2πmk/N} e^{−j2πnl/N}    (8.36)

It is clear that the transformed sequence is periodic, of period N in each index. Equation (8.36) also shows that the transformation is separable, consisting of a 1-D discrete Fourier transform (DFT) operating row by row, followed by a DFT column by column. As in the 1-D context, the 2-D discrete Fourier transform is invertible:

x_{k,l} = (1/N²) Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} x̃_{m,n} e^{j2πmk/N} e^{j2πnl/N}    (8.37)

As in the 1-D context, we associate horizontal and vertical spatial frequencies with the indices m and n. These frequencies are given by:

u_m = m / (N Δk)  and  v_n = n / (N Δl),    (8.38)

where Δk and Δl are the horizontal and vertical sampling steps. The application to linear filtering is done according to the diagram shown in Figure 8.19.
Figure 8.19. Frequency domain filtering, block diagram: the image x is transformed by a 2-D DFT into x̃_{m,n}, weighted frequency by frequency by h̃_{m,n}, and the product ỹ_{m,n} is brought back to the spatial domain by an inverse 2-D DFT to give the image y

The frequency response of the filter is represented, frequency by frequency, by means of complex coefficients:

h̃_{m,n} = H(u_m, v_n)    (8.39)

The filtered image is represented, in the Fourier domain, by:

ỹ_{m,n} = h̃_{m,n} x̃_{m,n}    (8.40)

and in the spatial domain by:

y_{k,l} = (1/N²) Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} ỹ_{m,n} e^{j2πmk/N} e^{j2πnl/N}.    (8.41)

The main advantage of this approach is that the filtering operation can be precisely specified in the frequency domain. Moreover, if the image dimensions are powers of 2, the Fourier transforms can be implemented by means of fast 1-D algorithms (FFT).

Complexity

Let us consider an image of dimension N × N with N = 2^q. The number of multiplications for a one-dimensional DFT is of the order of N log2(N). This DFT is applied to N rows and N columns to calculate the two-dimensional DFT; the number of complex multiplications is thus of the order of 2N² log2(N). The second stage carries out the weighting in the Fourier domain and requires N² multiplications. The third stage returns to the spatial domain by a succession of 1-D inverse DFT operations, also requiring 2N² log2(N) multiplications. The total number of multiplications is thus of the order of:

K = N² (1 + 4 log2(N)) = N² (1 + 4q)    (8.42)
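The pipeline of Figure 8.19 maps directly onto NumPy's FFT routines; a hedged sketch (our own code), shown with a unit weighting so that the output can be checked against the input:

```python
import numpy as np

def filter_freq(x, H):
    """Figure 8.19 pipeline: 2-D DFT of the image, frequency-by-frequency
    weighting by the coefficients H (eq. 8.40), then inverse 2-D DFT."""
    return np.fft.ifft2(H * np.fft.fft2(x)).real

rng = np.random.default_rng(1)
x = rng.random((8, 8))
H_allpass = np.ones((8, 8))      # unit weighting: the filter changes nothing
y = filter_freq(x, H_allpass)
```

Note that np.fft places the 1/N² normalization on the inverse transform, the convention adopted in equations (8.37) and (8.41).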
We should remember that the normalization term 1/N² is sometimes placed on the direct transformation rather than on the inverse transformation. The most consistent procedure would be to take a normalization coefficient equal to 1/N on each transformation, direct and inverse, but this complicates the calculations when the dimensions of the image are not even powers of 2.

8.5.2. The circular convolution effect

By realizing the filtering as a multiplication of discrete Fourier transforms, we produce, in the spatial domain, an output image resulting from the circular convolution of the input image with the inverse DFT of the frequency response. For 1-D signals, the circular convolution of the sequence {x_k} by the sequence {h_k} is the sequence {y_k} defined by the following relation:

y_k = Σ_{p=0}^{N−1} h_p x̄_{k−p}    (8.43)

with x̄_k = x_k for k ≥ 0 and x̄_k = x_{k+N} for k < 0. This prolongs the input sequence by periodization.

Figure 8.20. Circular convolution of the input sequence: the impulse response h(p) is applied to the input x̄_p prolonged by periodization over [0, N−1]

Let us return to the processing of bidimensional sequences. Let {h_{k,l}} be the sequence obtained by inverse DFT of the filter frequency response. Using equations (8.36) and (8.40), the frequency representation is given by:

ỹ_{m,n} = Σ_{p=0}^{N−1} Σ_{q=0}^{N−1} Σ_{r=0}^{N−1} Σ_{s=0}^{N−1} h_{p,q} x_{r,s} exp(−j2πm(p+r)/N) exp(−j2πn(q+s)/N)    (8.44)
By setting:

p + r = k ;  q + s = l    (8.45)

and by taking into account the fact that, for k greater than or equal to N,

exp(−j2πmk/N) = exp(−j2πm(k−N)/N)    (8.46)

the frequency representation of the filtered image can be put in the form:

ỹ_{m,n} = Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} ( Σ_{p=0}^{N−1} Σ_{q=0}^{N−1} h_{p,q} x̄_{k−p, l−q} ) exp(−j2πmk/N) exp(−j2πnl/N)    (8.47)

The sequence {x̄_{k,l}} is the periodized sequence: x̄_{k,l} = x_{k,l} for k and l positive, x_{k+N,l} for k negative and l positive, x_{k,l+N} for k positive and l negative, and x_{k+N,l+N} for k and l negative.

The term between parentheses in equation (8.47) results from the circular convolution of the sequence {h} with the sequence {x}. Using equations (8.36), (8.37) and (8.47), it can be shown that it is the value of the pixel (k,l) of the filtered image.

This calculation corresponds to another technique for controlling border effects. For a pixel situated at the upper left corner of the image, the value of the filtered image is obtained by a linear combination of the intensities of the pixels not only in close proximity, but also of those near the right and lower borders of the image. Everything occurs as if the image, as well as the impulse response, were periodic.

Figure 8.21. Circular convolution: periodization effect of the initial image (the support of the impulse response wraps around to the opposite borders)
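The identity between DFT-product filtering and the circular convolution of equation (8.47) can be verified directly: taking the indices of x modulo N periodizes the image exactly as described above. A minimal sketch (our own code, NumPy assumed):

```python
import numpy as np

def circ_conv2d(h, x):
    """Circular convolution: the indices of x are taken modulo N, which
    periodizes the input image exactly as in eq. (8.47)."""
    N = x.shape[0]
    y = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            for p in range(N):
                for q in range(N):
                    y[k, l] += h[p, q] * x[(k - p) % N, (l - q) % N]
    return y

rng = np.random.default_rng(2)
N = 6
x = rng.random((N, N))
h = rng.random((N, N))
y_direct = circ_conv2d(h, x)
y_fft = np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(x)).real
```

For a pixel in the upper left corner, the modulo wraps the convolution onto the right and lower borders of the image, which is the border effect discussed in the text.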
The main drawback of this calculation mode is that it can produce artifacts near one border of the image, due to the presence of a strong transition on the opposite border. One way of getting around this problem is to double the size of the image by adding blocks of zeros, a technique called zero-padding. With the above formulae, the size of the image becomes (2N) × (2N). In processing the border pixels, the circularity effect now involves pixels of amplitude zero. Nevertheless, there is still a transient effect on the block corresponding to the initial image, as well as on the auxiliary blocks which, a priori, will not be retained.

Figure 8.22. Extension of the image by adding pixels of null intensity: the original image is extended by three blocks of zeros into an image of size (2N) × (2N)

Figure 8.23. Compensation for the circularity effect
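With enough zero-padding, the circular convolution coincides with the ordinary (linear) convolution on the useful block; the sketch below (our own code, NumPy assumed) checks this against a direct linear convolution for a small FIR mask.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
x = rng.random((N, N))
h = rng.random((3, 3))           # small FIR mask

# zero-padding to (2N) x (2N): the circular convolution then coincides
# with the linear convolution on the useful block
P = 2 * N
y_pad = np.fft.ifft2(np.fft.fft2(x, (P, P)) * np.fft.fft2(h, (P, P))).real
y_pad = y_pad[:N + 2, :N + 2]    # the full linear convolution has size N + 3 - 1

# direct full linear convolution, for reference
y_ref = np.zeros((N + 2, N + 2))
for k in range(N):
    for l in range(N):
        y_ref[k:k + 3, l:l + 3] += x[k, l] * h
```

The padded size only needs to reach N + (mask size) − 1; padding to 2N, as in the text, is a convenient choice when the dimensions must remain powers of 2.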
Writing g for the impulse response extended by zero-padding, and applying equation (8.36) to the sequence g, whose size is (2N) × (2N), we see that, for the DFT of the sequence g, we have:

g̃_{2m,2n} = h̃_{m,n}.    (8.48)

Writing G for the frequency response of the filter with impulse response g, and assuming that the row and column sampling steps are equal, equations (8.39) and (8.40) give:

G(2m/(2NΔ), 2n/(2NΔ)) = G(m/(NΔ), n/(NΔ)) = H(m/(NΔ), n/(NΔ))    (8.49)

Hence, the filter specifications are not modified. From equation (8.42), the number of multiplications is now of the order of:

K′ = 4N² (1 + 4 log2(2N)) ≈ 16 N² log2(2N)    (8.50)

Filtering large images

When the images to be filtered are of large dimensions, it is possible to split them into blocks of size N × N and to filter each block in the frequency domain. To avoid problems of circularity, each block is extended to a size (2N) × (2N) by zero-padding. As with pixels, the blocks can be marked by a column index (p) and a row index (q), as shown in Figure 8.24.

Figure 8.24. Decomposition of an image into blocks B_{p,q}

We saw in the previous section that processing the block B_{p,q} affects the neighboring blocks B_{p+1,q}, B_{p,q+1} and B_{p+1,q+1}. The filtered image is thus obtained by adding, in each pixel of B_{p,q}, the results of the filtering of the blocks B_{p,q}, B_{p−1,q}, B_{p,q−1} and B_{p−1,q−1}.
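This block scheme is an overlap-add strategy: each padded block is filtered in the Fourier domain, and the overlapping block outputs are summed. A hedged sketch (our own code, padding each block to B + N − 1 rather than 2B, which is sufficient), checked against a direct linear convolution:

```python
import numpy as np

def block_filter(x, h, B):
    """Block-by-block frequency-domain filtering (overlap-add sketch):
    each B x B block is zero-padded, filtered through the FFT, and the
    overlapping block outputs are added."""
    L, N = x.shape[0], h.shape[0]
    F = B + N - 1                     # padded block size (at most 2B here)
    Hf = np.fft.fft2(h, (F, F))
    out = np.zeros((L + N - 1, L + N - 1))
    for p in range(0, L, B):
        for q in range(0, L, B):
            blk = x[p:p + B, q:q + B]
            yb = np.fft.ifft2(np.fft.fft2(blk, (F, F)) * Hf).real
            out[p:p + F, q:q + F] += yb
    return out

rng = np.random.default_rng(4)
L, N, B = 12, 3, 4
x = rng.random((L, L))
h = rng.random((N, N))
y_blocks = block_filter(x, h, B)

# direct full linear convolution, for reference
y_ref = np.zeros((L + N - 1, L + N - 1))
for k in range(L):
    for l in range(L):
        y_ref[k:k + N, l:l + N] += x[k, l] * h
```

The addition of neighboring block outputs in the overlap regions plays exactly the role of the reconstruction described for the blocks B_{p,q}, B_{p−1,q}, B_{p,q−1} and B_{p−1,q−1}.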
The advantage of this approach is that it enables us to consider the specifications directly in the frequency domain and to limit the amplitude of the transient phenomenon at the edges of the image, while reducing the number of operations. Thus, for an image of size L × L, to be processed by a filter whose impulse response has a support N × N, with L = λN, the number of multiplications in the decomposition by blocks is:

K_B ≈ 16 λ² N² log2(2N)    (8.51)

Figure 8.25. Reconstruction of block (p,q) from the filtered blocks B_{p,q}, B_{p−1,q}, B_{p,q−1} and B_{p−1,q−1}

By applying the finite impulse response filtering technique, the number of multiplications would be:

K_FIR = L² N² = λ² N⁴    (8.52)

For an image of size 2,048 × 2,048 decomposed into blocks of 256 × 256, the gain in complexity is of the order of 16,000. This number naturally constitutes an upper limit, insofar as the word length used to represent the data has not been taken into account.

8.6. Bibliography

[BRE 95] BRES S., JOLION J.M., LEBOURGEOIS F., Traitement et Analyse des Images Numériques, Hermès Science Publications, 1995, ISBN 2-7462-0741-9.

[COQ 95] COQUEREZ J.P., PHILIPP S., Analyse d'images: filtrage et segmentation, Masson, 1995, ISBN 2-225-84923-4.

[GON 92] GONZALEZ R.C., WOODS R.E., Digital Image Processing, Prentice Hall, 1992, ISBN 0-13-094650-8.
[JAI 88] JAIN A.K., Fundamentals of Digital Image Processing, Prentice Hall, Englewood Cliffs, N.J., 1988, ISBN 0-13-336165-9.

[KUN 93] KUNT M., Traitement numérique des images, Presses Polytechniques et Universitaires Romandes, Collection Electricité, 1993, ISBN 2-880-74238-2.

[LIM 90] LIM J.S., Two-Dimensional Signal and Image Processing, Prentice Hall, 1990, ISBN 0-13-934563-9.

[SON 99] SONKA M., HLAVAC V., BOYLE R., Image Processing, Analysis, and Machine Vision, PWS Publishing, 1999, ISBN 0-534-95393-X.
Chapter 9

Two-Dimensional Finite Impulse Response Filter Design

9.1. Introduction

Finite impulse response (FIR) filters are commonly used in image processing. As well as improving the visual quality of images, these digital filters help with contour detection and with motion estimation between two consecutive images. Quite often they are dedicated to estimating the components of the gradient or of the Laplacian.

In order not to return to topics we have already presented, and in keeping with the rest of this text, in this chapter we will discuss the frequency response of 2-D digital filters used in applications such as signal and noise separation. As we saw in Chapter 5, there are many design techniques dedicated to FIR filters, and each of them can be applied to the domain of images. To limit our discussion to the two-dimensional context, we have focused on the isotropy of the designed filter. Isotropy means that the effect of the filtering is identical in all orientations. We will discuss two filters that have this characteristic: circular and Gaussian filters. For the design, we have chosen the windowing method because, as we show in section 9.4, it is based on concepts used in two-dimensional spectrum analysis. To keep our presentation as clear as possible, we provide examples of how the techniques are used whenever possible.

Chapter written by Yannick BERTHOUMIEU.
9.2. Introduction to 2-D FIR filters

The most basic digital forms of the impulse responses of 2-D FIR filters are given by square matrices of small size, [3×3] or [5×5]. The coefficients of these matrices represent the spatial form of the digital filter, which is applied according to the implementation mode shown in equation (8.17). Many currently available software packages dedicated to image retouching make use of these basic forms. For example, let us look at the following low-pass filters:

M1 = (1/9) [1 1 1; 1 1 1; 1 1 1]    (9.1)

M2 = (1/16) [1 2 1; 2 4 2; 1 2 1].    (9.2)

As shown in equation (8.19) in Chapter 8, because of their separability, these masks decompose into two 1-D filters respectively applied in the horizontal and vertical directions. Thus, we have:

M1 = (1/3)[1 1 1] ∗ (1/3)[1; 1; 1]    (9.3)

M2 = (1/4)[1 2 1] ∗ (1/4)[1; 2; 1].    (9.4)

The convolution masks M1 and M2 thus have coefficients characterized by 1-D amplitude profiles that are respectively rectangular and triangular. As shown in Figure 9.1, these low-pass filters do not have tunable cut-off frequencies. The only adjustable feature, from the frequency domain point of view, is the position of the first passage through zero of the response, which can be modified by varying the matrix size: it is inversely proportional to the width of the mask in each of its dimensions. Moreover, because of their separability, the Fourier transform of their impulse response presents a non-isotropic frequency feature. Isotropy is an important property of a filter; it ensures that all the oriented patterns contained in the image are filtered in the same way.
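The masks and their decompositions (9.1)–(9.4) can be checked numerically: the outer product of each 1-D profile with itself reproduces the corresponding 2-D mask, and both masks have unit DC gain.

```python
import numpy as np

M1 = np.ones((3, 3)) / 9.0
M2 = np.array([[1., 2., 1.],
               [2., 4., 2.],
               [1., 2., 1.]]) / 16.0

# 1-D profiles of eqs. (9.3) and (9.4)
r1 = np.ones(3) / 3.0               # rectangular profile
r2 = np.array([1., 2., 1.]) / 4.0   # triangular profile
```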
Figure 9.1. Frequency responses of the rectangular and triangular windows of size [3×3]

From the above comments, we can see that filters M1 and M2 do not possess the characteristics expected of useful shapes, as presented in Chapter 4. We cannot, for example, modify the passband or regulate the transition band of the filter. If we want better control over the desired frequency features, we must implement another filter design technique. A possible solution that preserves the FIR approach involves using the windowing method, which we present in the next section.

9.3. Synthesizing with the two-dimensional windowing method

9.3.1. Principles of the method

As we have seen in Chapter 8, the transformation that characterizes an invariant linear system is represented by the impulse response, written h_d(x,y), which satisfies the following formula:

w(x,y) = ∫∫ h_d(α,β) s(x−α, y−β) dα dβ,    (9.5)

where s(x,y) and w(x,y) respectively designate the two-dimensional input and output signals. Depending on the application requirements, we synthesize an impulse response that yields a desired frequency response following typical shapes such as low-pass, high-pass, band-stop or band-pass filters. The correspondence between the frequency domain representation and the impulse response is assured by the 2-D Fourier transform.
Let us consider the truncated impulse response of the digital 2-D FIR filter defined by:

w(k,l) = Σ_{i=−m}^{m} Σ_{j=−n}^{n} h(i,j) s(k−i, l−j).    (9.6)

The quantities m and n represent the size of the impulse response in the horizontal and vertical directions. To establish the link between the discrete and continuous forms, equation (9.6) can be derived in the continuous spatial domain. We then have:

w(x,y) = ∫_{−t1}^{+t1} ∫_{−t2}^{+t2} h(α,β) s(x−α, y−β) dα dβ.    (9.7)

The quantities t1 and t2 represent the spatial boundaries associated with the non-null values of the impulse response h(x,y), and are directly proportional to m and n. By directly comparing equations (9.5) and (9.7), and by introducing the function p(x,y), we have:

h(x,y) = h_d(x,y) p(x,y)    (9.8)

The function p(x,y) appears as a weighting function that conserves only a certain part of the impulse response h_d(x,y). This weighting function is often called an apodization window. The characteristics of the apodization window p(x,y) are similar to those described in Chapter 5 in the 1-D domain; a more detailed discussion of the 2-D context is given in section 9.4.

9.3.2. Theoretical 2-D frequency shape

As we saw in section 9.3.1, the windowing method is based on a weighted version of h_d(x,y), whose Fourier transform constitutes the theoretical frequency shape. In the 2-D case, the shape is two-dimensional, so it can take various geometric forms. In this section, we present two shapes that represent rectangular and circular frequency structures; other templates can be constructed.

9.3.2.1. Rectangular frequency shape

The rectangular shape used in low-pass filtering is shown in Figure 9.2.
Figure 9.2. Rectangular shape of a low-pass 2-D filter: H(u,v) = 1 inside the rectangle |u| ≤ u_cn, |v| ≤ v_cn

The shape is represented in normalized frequency, that is, in the space (u,v). The normalized cut-off frequencies are written (u_cn, v_cn). The related impulse response h_d(k,l) is then given by the inverse discrete Fourier transform (IDFT) of the shape H(u,v):

h_d(k,l) = IDFT[H(u,v)] = 2u_cn sinc(2u_cn k) · 2v_cn sinc(2v_cn l)    (9.9)

DEMONSTRATION 9.1.– To demonstrate the result of equation (9.9), we use the duality theorem seen in Chapter 3. Knowing that the Fourier transform is the dual form of its inverse transform, we derive the continuous form from the 2-D function written g(x,y). Since the form is separable, we have:

g(x,y) = g1(x) g2(y)    (9.10)

with:

g_i(η) = 1 if |η| ≤ α_i, and 0 otherwise, for i = 1 and 2.    (9.11)

Its Fourier transform can then be expressed as follows:

G(u,v) = ∫∫ g(x,y) exp(−j2π(ux + vy)) dx dy.

With the separable form represented by equation (9.10), G(u,v) becomes:

G(u,v) = ∫∫ g1(x) g2(y) exp(−j2πux) exp(−j2πvy) dx dy.
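Equation (9.9) translates directly into code; a minimal sketch (our own function name, NumPy assumed), using the fact that np.sinc(x) = sin(πx)/(πx), the same sinc convention as in this demonstration:

```python
import numpy as np

def ideal_lowpass_rect(u_cn, v_cn, m, n):
    """Truncated ideal impulse response of eq. (9.9) on a
    (2m+1) x (2n+1) support; np.sinc(x) = sin(pi x)/(pi x)."""
    k = np.arange(-m, m + 1)
    l = np.arange(-n, n + 1)
    hk = 2.0 * u_cn * np.sinc(2.0 * u_cn * k)
    hl = 2.0 * v_cn * np.sinc(2.0 * v_cn * l)
    return np.outer(hk, hl)

h = ideal_lowpass_rect(0.25, 0.25, 10, 10)
```

The response is separable and symmetric around the center sample, whose value is 4 u_cn v_cn.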
from which we have:

G(u,v) = ∫ g1(x) exp(−j2πux) dx · ∫ g2(y) exp(−j2πvy) dy.

Given equation (9.11) for the functions g1(x) and g2(y), the Fourier transform of g(x,y) becomes:

G(u,v) = ∫_{−α1}^{α1} exp(−j2πux) dx · ∫_{−α2}^{α2} exp(−j2πvy) dy
       = [exp(−j2πux)/(−j2πu)]_{−α1}^{α1} · [exp(−j2πvy)/(−j2πv)]_{−α2}^{α2}

This equality reduces to:

G(u,v) = (sin(2πα1 u)/(πu)) · (sin(2πα2 v)/(πv)).

We then come back to equation (9.9) by setting sinc(x) = sin(πx)/(πx).

9.3.2.2. Circular shape

The circular shape representing a low-pass filter is characterized by a circular opening, shown in Figure 9.3.

Figure 9.3. Circular shape of a low-pass filter: H(u,v) = 1 inside the disk of radius f_cn
The associated impulse response is expressed as:

h_d(k,l) = IDFT[H(u,v)] = (f_cn / √(k² + l²)) I_1(2π f_cn √(k² + l²))    (9.12)

where I_1 designates the Bessel function of the first kind of order 1.

DEMONSTRATION 9.2.– As in the previous demonstration, we here use the dual formulation represented by the reference image. For a > 0, let the signal g(x,y) represent a circular opening written as:

g(x,y) = 1 if x² + y² < a², and 0 otherwise.

Because of the circular symmetry of the shape, we use the following change of variables:

x = r cos θ, y = r sin θ  ⇒  g(x,y) = g(r) = 1 for 0 ≤ r < a    (9.13)

The Fourier transform of the signal g(x,y) is then expressed by the following formula¹:

G(u,v) = ∫∫ g(x,y) exp(−j2π(ux + vy)) dx dy
       = ∫_0^{+∞} ∫_{−π}^{π} g(r) r exp(−j2πr(u cos θ + v sin θ)) dθ dr.

1. We recall that a change of variables introduces the Jacobian. For any f:

∫∫ f(x,y) dx dy = ∫_0^{+∞} ∫_{−π}^{π} f(r,θ) |∂(x,y)/∂(r,θ)| dθ dr = ∫_0^{+∞} ∫_{−π}^{π} f(r,θ) r dθ dr
In the frequency domain, after a change of variables to polar coordinates, (u,v) → (ρ cos φ, ρ sin φ), we obtain the following formula:

G(ρ cos φ, ρ sin φ) = ∫_0^{+∞} ∫_{−π}^{π} g(r) r exp(−j2πρr (cos θ cos φ + sin θ sin φ)) dθ dr

or, taking equation (9.13) into account:

G(ρ cos φ, ρ sin φ) = ∫_0^{+∞} g(r) r [ ∫_{−π}^{π} exp(−j2πρr cos(θ − φ)) dθ ] dr.    (9.14)

At this stage of the development, we focus on the inner term of equation (9.14), ∫_{−π}^{π} exp(−j2πρr cos(θ − φ)) dθ. Setting η = 2πρr, we obtain:

∫_{−π}^{π} exp(−jη cos(θ − φ)) dθ = ∫_{−π}^{π} cos(η cos(θ − φ)) dθ − j ∫_{−π}^{π} sin(η cos(θ − φ)) dθ.    (9.15)

Now, using the results on the Bessel function of the first kind of order 0, I_0(η), we know that:

2π I_0(η) = ∫_{−π}^{π} cos(η cos(w)) dw  and  ∫_{−π}^{π} sin(η cos(w)) dw = 0.    (9.16)

Given the equalities in equations (9.15) and (9.16), equation (9.14) then becomes:

G(ρ cos φ, ρ sin φ) = ∫_0^a 2π I_0(2πρr) r dr = (1/(2πρ²)) ∫_0^{2πρa} I_0(η) η dη,  with η = 2πρr.
To calculate this last integral, we introduce the Bessel function of the first kind of order 1, characterized by the following identity:

η I_0(η) = d[η I_1(η)]/dη.

We then have:

G(ρ cos φ, ρ sin φ) = (a/ρ) I_1(2πaρ).

Since lim_{η→0} I_1(η)/η = 1/2, the Fourier transform of the signal g(x,y), in the frequency space in polar coordinates, is expressed as:

G(ρ cos φ, ρ sin φ) = (a/ρ) I_1(2πaρ),  with G(0,0) = πa².    (9.17)

Figure 9.4. Spectrum representation associated with G(u,v)

The rectangular and circular shapes presented here both correspond to low-pass filters. We should remember that only the circular shape has an isotropic frequency response. The rectangular one does not process patterns oriented at 45° and at 90° in the same way, since the gain of the filter is not the same for these two orientations.
To extend the windowing approach to the other types of filters, we here provide the transformation formulae leading from the low-pass filter to the other filter classes. The table below presents these formulae for the circular filter; it can easily be transposed to the rectangular form or to other filter forms.

– High-pass, of normalized cut-off frequency f_cn ∈ [0, 0.5]:

h_hp(k,l) = δ(k,l) − h_lp(k,l)

– Band-pass, of normalized cut-off frequencies f_cn1 and f_cn2 ∈ [0, 0.5]:

h_bp(k,l) = (f_cn2 / R) I_1(2π f_cn2 R) − (f_cn1 / R) I_1(2π f_cn1 R)  with R = √(k² + l²)

– Band-stop, of normalized cut-off frequencies f_cn1 and f_cn2 ∈ [0, 0.5]:

h_bs(k,l) = δ(k,l) − h_bp(k,l)
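The low-pass to high-pass transformation h_hp = δ − h_lp can be checked numerically on any low-pass prototype with unit DC gain: the high-pass gain vanishes at (u,v) = (0,0) and is large at the corner frequency (0.5, 0.5). A minimal sketch (our own code, using a hypothetical averaging mask as the low-pass prototype):

```python
import numpy as np

def to_highpass(h_lp):
    """h_hp(k,l) = delta(k,l) - h_lp(k,l): the low-pass spectrum is complemented."""
    h_hp = -h_lp.copy()
    c = tuple(d // 2 for d in h_lp.shape)   # delta at the center of the odd support
    h_hp[c] += 1.0
    return h_hp

# hypothetical low-pass prototype: normalized averaging mask (DC gain 1)
h_lp = np.ones((5, 5)) / 25.0
h_hp = to_highpass(h_lp)

# gain at (u,v) = (0,0) and at the corner frequency (0.5, 0.5)
alt = (-1.0) ** np.add.outer(np.arange(5), np.arange(5))
dc_gain = h_hp.sum()
corner_gain = (h_hp * alt).sum()
```

The same complementation scheme gives the band-stop response from the band-pass one.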
- 287. Now that we have demonstrated how to construct ideal impulse responses, we will discuss carrying out the design using the windowing method. This step consists of choosing a "finite spatial" 2-D function.

9.3.3. Digital 2-D filter design by windowing

As with filters dedicated to one-dimensional signals, we can implement operational filters by applying spatial windows, either separable or non-separable, to the infinite impulse responses associated with the frequency shapes discussed above. The implementation is represented by the equation:

$h(m,n) = h_d(m,n)\,p(m,n)$, (9.18)

with $p(m,n) = p_x(m)\,p_y(n)$ in the separable case.

The choice of a window corresponds to a compromise between the selectivity of the transition band and the rejection performance in the filter's stop-band. These results have already been discussed in the chapter dealing with 1-D FIR filters and are summarized in section 9.4. At this stage, it is important to note that only non-separable windows allow us to preserve the isotropic feature of the filter. Separable windows favor the vertical and horizontal axes.

9.3.4. Applying filters based on rectangular and circular shapes

To illustrate the use of and interest in the windowing method, we will process the image shown in Figure 9.5. This image is characterized by a very high colored noise level. Thanks to the spectrum calculated with the help of the Fourier transform, we can see that the frequency components associated with the noise are distributed outside a circular opening of the order of 0.3 in normalized frequency, as we demonstrated in equation (9.6).
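The windowing design of equation (9.18) can be sketched as follows: an ideal (infinite) impulse response is truncated and multiplied, point by point, by a separable 2-D window. The choice of a separable Hamming window and of a truncated-sinc ideal response here is illustrative only; numpy is assumed.

```python
import numpy as np

def windowed_lowpass(fc, N=10):
    """2-D FIR low-pass via the windowing method: the ideal impulse response,
    truncated to [-N, N]^2, is weighted by a separable Hamming window p(m, n)."""
    n = np.arange(-N, N + 1)
    g = 2 * fc * np.sinc(2 * fc * n)   # truncated ideal 1-D response
    hd = np.outer(g, g)                # ideal separable 2-D response h_d(m, n)
    p1 = np.hamming(2 * N + 1)         # p_x = p_y: 1-D Hamming window
    p = np.outer(p1, p1)               # separable window p(m, n)
    return hd * p                      # h(m, n) = h_d(m, n) p(m, n)
```

A non-separable (e.g. circularly symmetric) window would be needed to preserve isotropy, as the text points out; the separable version above favors the two axes.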
- 288. Figure 9.5. Image to be filtered

Figure 9.6. Frequency spectrum associated with the noisy image

Looking at Figure 9.5, we can see that the manufactured parts in the image are difficult to make out clearly. To improve their visual clarity, we will filter the image. After examining its spectrum (see the lower part of Figure 9.6), we deduce that the filter must be of the low-pass type in order to eliminate the high-frequency components of the noise that are found beyond a circular shape. Several types of filters can be used. In what follows, we present the results obtained with the finite impulse response filters discussed above. The results presented here correspond to the three filter types we have considered. The first is a rectangular filter of type M1, represented by equation (9.1).
- 289. The obtained image is then the outcome of a filtering whose frequency response is a cardinal sine. The restoration is of lower quality than with the rectangular and circular shapes (see Figure 9.7).

Figure 9.7. Image resulting from filtering (left) and the filter used (right)
- 290. 9.3.5. 2-D Gaussian filters

A typical filter that is widely used in image processing is based on the Gaussian form of the impulse response. These filters are separable and preserve their Gaussian form in the frequency plane. They are the basis of the Gaussian pyramid, whose advantage is its capacity for multi-scale image processing. This kind of processing makes many applications, such as shape recognition or the estimation of high-amplitude motion, more robust to deviations from their working hypotheses. Since Gaussian filters are differentiable, they can also be used for approximating spatial gradients; in this application they are called Canny gradients. For second derivatives, they appear in specific usages, because differences of Gaussians help us approximate the Laplacian of an image.

9.3.6. 1-D and 2-D representations in a continuous space

The analytic form of a Gaussian is characterized by a parameter $\sigma_x$, which is proportional to the size of the neighborhood on which the filter operates. This parameter is the so-called "standard deviation" of the Gaussian filter. The centered form is as follows:

$g(x) = \frac{1}{\sigma_x\sqrt{2\pi}}\exp\left(-\frac{x^2}{2\sigma_x^2}\right)$. (9.19)

The associated Fourier transform then equals:

$G(u) = \exp\left(-\frac{(2\pi\sigma_x u)^2}{2}\right)$. (9.20)

As shown in Figure 9.8, and according to equations (9.19) and (9.20), we can observe that the more open the form in the initial space, the more closed the form in the frequency space.
- 291. Figure 9.8. Gaussians of variance σx² = 0.5, then σx² = 2, and their Fourier transforms
- 292. This principle is called the uncertainty principle. It stipulates that a function cannot have limited supports in both time and frequency. The cut-off frequency of the Gaussian is given by the following approximation:

$u_{cn} \approx \frac{0.19}{\sigma_x}$. (9.21)

9.3.6.1. 2-D specifications

The representation for a two-dimensional space is directly calculated from equation (9.19) by a separable association. We have:

$g(x,y) = \frac{1}{2\pi\sigma_x\sigma_y}\exp\left(-\frac{1}{2}\left[\frac{x^2}{\sigma_x^2}+\frac{y^2}{\sigma_y^2}\right]\right)$, (9.22)

where $\sigma_x$ and $\sigma_y$ respectively designate the standard deviations along the spatial axes x and y. Its 2-D Fourier transform is expressed as:

$G(u,v) = \exp\left(-\frac{1}{2}\left[(2\pi\sigma_x)^2 u^2 + (2\pi\sigma_y)^2 v^2\right]\right)$. (9.23)

Figure 9.9. Spatial representation of the Gaussian function according to (x,y)
- 293. Figure 9.10. Frequency representation of the Gaussian profile

To reduce calculation time during implementation, we can exploit the filter's separability. Thus, the 2-D filter decomposes into two convolutions, applied first along the lines and then along the columns of the discrete image. The associated 1-D filter can be implemented with a finite or infinite digital support. In the next section, we will present two kinds of implementation for 1-D digital FIR filters.

9.3.7. Approximation for FIR filters

The methods we will present are based, respectively, on a truncated version of the continuous Gaussian profile, and on the central limit theorem.

9.3.7.1. Truncation of the Gaussian profile

This approach is the simplest. It truncates the Gaussian onto a temporal support [-N, N]:

$g_x(n) = \begin{cases} \dfrac{1}{\sigma_x\sqrt{2\pi}}\,e^{-\frac{n^2}{2\sigma_x^2}} & |n| \le N \\ 0 & |n| > N \end{cases}$ (9.24)
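The separability mentioned above can be sketched directly: a 2-D Gaussian smoothing reduces to two 1-D convolutions, one along the rows and one along the columns. The snippet below uses a truncated, renormalized 1-D Gaussian as in equation-style truncation; the helper names and the support rule N = 4·sigma are illustrative choices, and numpy is assumed.

```python
import numpy as np

def gaussian_1d(sigma, k=4):
    """Truncated, normalized 1-D Gaussian on the support [-N, N], N = k*sigma."""
    N = int(k * sigma)
    n = np.arange(-N, N + 1)
    g = np.exp(-n**2 / (2.0 * sigma**2))
    return g / g.sum()

def gaussian_filter_separable(img, sigma):
    """2-D Gaussian smoothing as two 1-D convolutions (rows, then columns)."""
    g = gaussian_1d(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img, g, mode='same')
    return np.apply_along_axis(np.convolve, 0, rows, g, mode='same')
```

For a kernel of half-width N on an image of size P×P, the separable form costs O(P²·N) operations instead of O(P²·N²) for a direct 2-D convolution.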
- 294. However, there is a constraint for obtaining a "correct" approximation in the sense of Shannon's theorem, whether in time or in frequency. This constraint is expressed by a coupling between the standard deviation of the Gaussian and the number of samples 2N+1 being considered. So, as shown in Figure 9.11, when the number of samples is too low in relation to the opening of the Gaussian profile, the Gibbs phenomenon generates a spectrum distortion.

Figure 9.11. Generation for N = 2 and σx = 2

Nevertheless, if the number of points is sufficient, the Gaussian shape of the frequency spectrum is respected.
- 295. Figure 9.12. Generation for N = 6 and σx = 2

In practice, to couple the standard deviation of the Gaussian form and the necessary temporal support, we state:

$N = k\sigma_x$, where k equals 3, 4 or 5. (9.25)

9.3.7.2. Rectangular windows and convolution

The solution presented in this section is based on the central limit theorem, which states that if a signal is processed by a cascade of an infinite number of identical low-pass systems, the resulting impulse response tends towards a Gaussian profile. In practice, we can limit the cascade to four passes. This assures, as we saw in the above section, a stable approximation of the Gaussian form. The 1-D gate (rectangular) function is used here as the elementary low-pass filter; its frequency response is a cardinal sine. The approximation is expressed by:

$g_x(n) \approx (u * u * u * u)(n)$, with $u(n) = \begin{cases} \dfrac{1}{2N+1} & |n| \le N \\ 0 & |n| > N \end{cases}$ and $N = \sigma_x$. (9.26)
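This central-limit construction is easy to sketch: convolve a normalized gate with itself four times. Note as a caveat that the variance of a cascade of four gates of half-width N is exactly 4N(N+1)/3, so the text's rule N = σx is only a coarse prescription for matching a target spread; the function name is illustrative and numpy is assumed.

```python
import numpy as np

def gaussian_approx_box_cascade(sigma, passes=4):
    """Approximate a 1-D Gaussian by convolving a gate (rectangular) filter
    with itself `passes` times, following the central limit theorem.
    The support rule N = sigma follows equation (9.26) of the text."""
    N = int(round(sigma))
    u = np.ones(2 * N + 1) / (2 * N + 1)   # normalized gate function u(n)
    g = u.copy()
    for _ in range(passes - 1):
        g = np.convolve(g, u)              # full convolution: support grows by 2N
    return g
```

For sigma = 3 this yields a symmetric, unit-sum, bell-shaped kernel of length 25 whose variance is 4·3·4/3 = 16, i.e. an effective standard deviation of 4 rather than 3.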
- 296. Figure 9.13. Generation for N = 3

The advantage of this solution is that it does not require evaluating a Gaussian function, which would require a longer calculation. Moreover, although each elementary gate filter has an anisotropic frequency response, the serial association of these low-pass filters tends to generate an isotropic response.

9.3.8. An example based on exploiting a modulated Gaussian filter

To illustrate the use of a modulated Gaussian filter, let us consider filtering an image that is distorted by periodic noise. The degraded image and its spectrum are shown, respectively, in Figures 9.14 and 9.15.
- 297. Figure 9.14. Image degraded by the presence of a periodic frame

Figure 9.15. Spectral representation of the degraded image

We can see that the image in Figure 9.14 shows a periodic frame characterized, in its spectral representation in Figure 9.15, by two frequency peaks at the pairs (u,v) equal to (0.4; 0.1) and (–0.4; –0.1).
- 298. To suppress this periodic pattern, we center, by frequency transformation, a Gaussian form on each of these peaks. To carry out this operation, we will discuss each of the steps necessary for this type of filtering.

The first step involves generating a selective Gaussian filter by using the second approach, based on a cascade of rectangular filters with N = 7.

Figure 9.16. Step 1: representation of the Gaussian 2-D filter

After evaluating the frequency positions of the periodic frame, we apply the following amplitude modulation:

$h_s(m,n) = h_e(m,n)\,s(m,n)$, with $s(m,n) = \cos(2\pi vn + 2\pi um)$. (9.27)

Equation (9.27) allows us to translate the spectral Gaussian pattern, as shown in Figure 9.17.
- 299. Figure 9.17. Step 2: representation of a modulated 2-D Gaussian filter

The last step in this kind of filtering involves constructing, from the frequency components associated with the periodic frame, a rejection filter whose impulse response is written $h_r(m,n)$. This operation is carried out using the following equation:

$h_r(m,n) = \delta(m,n) - 2\,h_s(m,n)$ (9.28)

The factor 2 is introduced to compensate for the factor 0.5 that appears when the cosine of equation (9.27) is decomposed into two complex exponentials. The frequency response associated with this new filter is shown in Figure 9.18. We observe a rejection of the spectral components characterizing the periodic frame.
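The three steps of this rejection filter (Gaussian envelope, cosine modulation to the peak pair, then δ − 2·h_s) can be sketched as below. For simplicity the envelope here is a truncated sampled Gaussian rather than the gate-cascade of the text, and the sizes and σ are illustrative; numpy is assumed.

```python
import numpy as np

def modulated_gaussian_notch(u0, v0, sigma=3.0, N=10):
    """Rejection (notch) filter h_r = delta - 2*h_s, where h_s is a Gaussian
    envelope modulated to the frequency pair +/-(u0, v0) -- cf. (9.27)-(9.28)."""
    n = np.arange(-N, N + 1)
    g1 = np.exp(-n**2 / (2 * sigma**2))
    g1 /= g1.sum()
    he = np.outer(g1, g1)                                # envelope h_e(m, n), unit DC gain
    m, nn = np.meshgrid(n, n, indexing='ij')
    hs = he * np.cos(2 * np.pi * (u0 * m + v0 * nn))     # modulation s(m, n)
    delta = np.zeros_like(hs)
    delta[N, N] = 1.0
    return delta - 2.0 * hs                              # h_r(m, n)

def freq_response(h, u, v):
    """Real part of the 2-D DTFT of h at the normalized frequency (u, v)."""
    N = (h.shape[0] - 1) // 2
    n = np.arange(-N, N + 1)
    m, nn = np.meshgrid(n, n, indexing='ij')
    return np.real(np.sum(h * np.exp(-2j * np.pi * (u * m + v * nn))))
```

For the peak pair (0.4, 0.1) of the example image, the gain of the resulting filter is close to 0 at the rejected frequencies and close to 1 at DC.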
- 300. Figure 9.18. Step 3: spectral representation of the rejection filter

Figure 9.19. Frequency spectrum of the image obtained after filtering
- 301. The image that results from the complete filtering is shown in Figure 9.20. We note that the periodic frame has been deleted, as expected. This deletion is nevertheless not complete: the frequency spectrum shown in Figure 9.19 still exhibits lines around the positions of the rejected components. These lines are due to the amplitudes of the secondary lobes associated with the cardinal sines of each of the components.

Since the filter used is a FIR filter, we end the analysis of this example by visualizing the border effects. Figure 9.21 shows a "zoom" on the lower left edge of the image shown in Figure 9.20. The ensemble of unprocessed pixels covers a band equal to half the width of the mask of the impulse response. This is because we have used an implementation method without extension of the processed image.

Figure 9.20. Image obtained as convolution output with the final filter

Figure 9.21. Visualization of border effects
- 302. 9.4. Appendix: spatial window functions and their implementation

The goal of this appendix is to explain the action, in the frequency space, of the apodization window. To illustrate the transformation resulting from the weighting, we will consider a signal in a general way. Generally, apodization is represented by:

$y_0(x,y) = y(x,y)\,w(x,y)$, (9.29)

where $y_0(x,y)$ is the finite observation of a signal $y(x,y)$ represented on an infinite 2-D horizon. In the frequency domain, a convolution product governs the transformation:

$Y_0(u,v) = Y(u,v) * W(u,v)$. (9.30)

Because of the convolution operator, we observe a frequency distortion of the spectrum $Y(u,v)$. The window chosen by default is a rectangular window.

Figure 9.22. Spatial profile of a rectangular window

Figure 9.23. Associated 2-D frequency spectrum

As shown in Figure 9.23, in the frequency plane the rectangular window is characterized by a fairly narrow principal lobe, whose width is inversely proportional to the length of the window. Its second characteristic is the small difference between the amplitude maxima of its secondary lobes and that of the principal lobe. As we saw with 1-D signals, many windows have been described in the technical literature, most notably the Hamming, Bartlett, Blackman and Kaiser windows.
- 303. Figure 9.24. Spatial profile of a Kaiser window, α = 1

Figure 9.25. Associated 2-D spectrum

Figure 9.26. Spatial profile of a Kaiser window, α = 5

Figure 9.27. Associated 2-D spectrum

From this point on, we will limit our discussion to the Kaiser window; it has the advantage of being parametrizable:

$h(l) = \frac{I_0\!\left[\alpha\sqrt{1-\left(\frac{l}{N}\right)^2}\right]}{I_0[\alpha]}$ for $|l| \le N$, (9.31)

where $I_0[\,.\,]$ designates the zero-order modified Bessel function of the first kind and α is the regulating parameter that controls the spectral leakage. In two-dimensional contexts, we can generate two types of windows from a monodimensional representation:
- 304. $w(m,n) = \begin{cases} w(m)\,w(n) & \text{which designates a separable model} \\ w(m,n) & \text{which designates a non-separable model} \end{cases}$ (9.32)

In the non-separable case, to reuse the results we obtained from 1-D signals, we can exploit the following formula:

$w(m,n) = w(r)\big|_{r=\sqrt{m^2+n^2}}$. (9.33)

EXAMPLE 9.1.– in order to show the influence of a 2-D window, we carry out the spectral analysis of a given signal. The study of temporal windows leads to the following principles:

– using a rectangular window helps us observe components that have similar frequencies. This kind of window, compared to all other known windows, has the best frequency resolution. This resolution is inversely proportional to the size of the window: the first zero of the frequency response is at 1/N, where N designates the number of points of the window;

– since the secondary lobes are of relatively high level in modulus compared to that of the main peak, they will mask low-amplitude components that may be contained in the signal. To bring out components of this type, we use other windows, such as Blackman windows or Kaiser windows with an elevated α parameter.

Let us consider the following 2-D signal, which is the sum of pure frequency components:

$I(m,n) = \sum_{i=1}^{3} a_i \cos(2\pi u_i m + 2\pi v_i n)$. (9.34)

The associated frequency form is then represented by:

$I(u,v) = \sum_{i=1}^{3} \frac{a_i}{2}\big[\delta(u-u_i, v-v_i) + \delta(u+u_i, v+v_i)\big]$ (9.35)
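Equation (9.33) can be sketched directly: a circularly symmetric 2-D window is obtained by evaluating a 1-D profile at the radius r = √(m²+n²). Below, the Kaiser profile of equation (9.31) is used, via numpy's modified Bessel function `numpy.i0`; the function name and parameter values are illustrative.

```python
import numpy as np

def kaiser_radial(N, alpha):
    """Non-separable 2-D window with circular symmetry, built from the 1-D
    Kaiser profile w(l) = I0(alpha*sqrt(1 - (l/N)^2)) / I0(alpha)
    evaluated at r = sqrt(m^2 + n^2), and set to 0 for r > N."""
    n = np.arange(-N, N + 1)
    m, nn = np.meshgrid(n, n, indexing='ij')
    r = np.sqrt(m**2 + nn**2).astype(float)
    inside = r <= N
    w = np.zeros_like(r)
    w[inside] = np.i0(alpha * np.sqrt(1 - (r[inside] / N)**2)) / np.i0(alpha)
    return w
```

Unlike the separable product w(m)w(n), this construction treats all orientations identically, which is what preserves the isotropy of a windowed circular filter.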
- 305. Figure 9.28. Frequency components of the distribution

By using the values contained in the following table, we obtain the image shown in Figure 9.29.

Amplitude | Normalized column frequency | Normalized line frequency
50 | 0.04 | 0.1
50 | 0.06 | 0.1
0.01 | 0.32 | 0.2

Figure 9.29. Image obtained from combining three components

This example looks closely at a signal whose spectral content has two components with very similar frequencies, as well as a component of very low amplitude (0.01) compared to that of the other components (50).
- 306. Figure 9.30. Spectrum calculated with a rectangular window

Figure 9.31. Spectrum calculated with a separable Kaiser window

Figure 9.32. Spectrum calculated with a Kaiser window with circular symmetry

Figure 9.30 shows the spectrum estimated with a rectangular window. In this instance, the two components with the highest amplitude are perfectly visible. However, the third component has not been reproduced. In Figures 9.31 and 9.32, the third component is clear because Kaiser windows have been used. Nevertheless, in these last examples, we see a fusion of the traces representing the two components in very close proximity. This phenomenon occurs because of the enlargement of the principal lobe. The spectral resolution is thus greatly reduced.
- 309. Chapter 10

Filter Stability

10.1. Introduction

A filter of impulse response h(k) is bounded-input, bounded-output (BIBO) stable when any bounded input signal x(k) produces a bounded output y(k). Let us consider the bounded input signal:

$x(k) = \begin{cases} \dfrac{h^*(-k)}{|h(-k)|} & \text{if } h(-k) \ne 0 \\ 0 & \text{otherwise} \end{cases}$ (10.1)

where $h^*(-k)$ and $|h(-k)|$ respectively designate the conjugate and the complex modulus of h(-k). With equation (10.1), the output y(k) is written as:

$y(k) = h(k) * x(k) = \sum_{n=-\infty}^{+\infty} h(k-n)\,x(n) = \sum_{n=-\infty}^{+\infty} h(k-n)\,\frac{h^*(-n)}{|h(-n)|}$ (10.2)

More specifically, equation (10.2) gives for k = 0:

$y(0) = \sum_{n=-\infty}^{+\infty} \frac{|h(-n)|^2}{|h(-n)|} = \sum_{n=-\infty}^{+\infty} |h(-n)|$. (10.3)

Chapter written by Michel BARRET.
- 310. If the filter is BIBO stable, then it verifies the following condition:

$\sum_{n=-\infty}^{+\infty} |h(n)| < +\infty$. (10.4)

Conversely, if the impulse response h(k) of the filter satisfies equation (10.4) then, for any input signal x(k) bounded by M, the output y(k) verifies:

$|y(k)| \le \sum_{n=-\infty}^{+\infty} |h(n)|\,|x(k-n)| \le M \sum_{n=-\infty}^{+\infty} |h(n)| < +\infty$. (10.5)

Let us assume now that the filter admits a transfer function¹:

$H_z(z) = \sum_{k=-\infty}^{+\infty} h(k)\,z^{-k}$ for $r < |z| < R$ (10.6)

We write C for the open convergence ring (i.e. circular band), which takes one of the forms:

C = {z ∈ ℂ | r < |z| < R}, (10.7)

or:

C = {z ∈ ℂ | |z| < R}, (10.8)

or:

C = {z ∈ ℂ | r < |z|}, (10.9)

or:

C = ℂ (10.10)

1. The Laurent series $\sum_{k=-\infty}^{+\infty} h(k) z^{-k} = \sum_{k=1}^{+\infty} h(-k) z^{k} + \sum_{k=0}^{+\infty} h(k) z^{-k}$ decomposes as the sum of a power series in z and a power series in z⁻¹. We recall that the convergence domain of a power series in z is a disk centered at the origin, of possibly infinite radius, of the form {z ∈ ℂ | |z| < R}. If |z| < R, then the series converges absolutely, and if |z| > R, then the series diverges.
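Condition (10.4), absolute summability of the impulse response, can be probed numerically with partial sums (this is only an illustration: a finite partial sum never proves convergence by itself, which is why the examples below are ones whose tails are known to be bounded; the helper name is hypothetical).

```python
def abs_sum(h, terms):
    """Partial sum of |h(k)| over k = 0..terms-1 for a causal impulse response."""
    return sum(abs(h(k)) for k in range(terms))

# Geometric response h(k) = a^k: absolutely summable iff |a| < 1,
# with total sum 1 / (1 - |a|) = 10 for a = 0.9.
s_geo = abs_sum(lambda k: 0.9**k, 1000)

# Response h(k) = 1/(1 + k^2) (as in the examples that follow in the text):
# summable, hence BIBO stable, even though the decay is not geometric.
s_sq = abs_sum(lambda k: 1.0 / (1 + k * k), 100000)
```

For h(k) = a^k with |a| ≥ 1 the partial sums grow without bound, which is the numerical signature of BIBO instability.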
- 311. Filter Stability 295

and C̄ is the closure of the convergence ring, obtained by replacing the strict inequalities by loose inequalities. If the filter is BIBO stable, then the series of general term |h(k)| converges and the unit circle:

T = {z ∈ ℂ | |z| = 1} (10.11)

is included in C̄. Conversely, if T ⊂ C, then the filter is BIBO stable, since equation (10.4) is satisfied.

EXAMPLE 10.1.– the filter of impulse response:

$h(k) = \begin{cases} 1/(1+k^2) & \text{if } k \ge 0 \\ 0 & \text{else} \end{cases}$ (10.12)

admits a transfer function $H_z(z)$ with convergence ring C = {z ∈ ℂ | 1 < |z|}. We can see that the filter is BIBO stable.

EXAMPLE 10.2.– the filter of impulse response:

$h(k) = 1/(1+k^2)$, ∀k ∈ ℤ (10.13)

admits no transfer function $H_z(z)$, because the power series $\sum_{k=0}^{+\infty} h(k) z^{-k}$ converges for |z| > 1, while the power series $\sum_{k=1}^{+\infty} h(-k) z^{k}$ converges for |z| < 1. We can, however, observe that the filter is BIBO stable.

For causal filters, BIBO stability is also called asymptotic stability, by extension of a concept associated with dynamic systems. In the study of dynamics, a position of equilibrium is asymptotically stable if, after having undergone an infinitesimal perturbation, the system tends to return to the same position. For a filter, this means the filter is asymptotically stable if the output tends towards 0 when the input signal is set to 0, and this for any bounded input. Generally, with non-causal filters, a filter is asymptotically stable if this condition is equally satisfied when the sense of time is inverted. Given the linearity and time invariance of a filter, we can, without loss of generality, assume that the
- 312. input signal is set at 0 from the instant 0 and that it is bounded by 1. For an input signal x(k), the output y(k) of the filter of impulse response h(k) then equals:

$y(k) = \sum_{n=-\infty}^{+\infty} h(k-n)\,x(n) = \sum_{n=-\infty}^{-1} h(k-n)\,x(n) = \sum_{n > k} h(n)\,x(k-n)$ (10.14)

Whatever the signal x(k), y(k) tends towards 0 when k tends towards infinity if and only if the following condition is satisfied:

$\sum_{n=0}^{+\infty} |h(n)| < +\infty$ (10.15)

In the case of a non-causal filter, the conjunction of equation (10.15) with the similar condition obtained by inverting the sense of time is equivalent to the BIBO stability condition of equation (10.4).

Let us consider a causal filter and assume that it admits a transfer function $H_z(z)$. Its convergence ring C is then the complement of a disk and satisfies:

C = {z ∈ ℂ | r < |z|}. (10.16)

In other words, its convergence ring C is not bounded. This is a characteristic property of causal filters: if its convergence ring, which we assume to be non-empty, is not bounded, then the filter is causal.

EXAMPLE 10.3.– here we consider a real or complex number a and the rational fraction:

$H(z) = \frac{z}{z-a}$

Note that:

$H(z) = \sum_{k \ge 0} a^k z^{-k}$ for $|a z^{-1}| < 1$, and $H(z) = -\sum_{k \ge 1} a^{-k} z^{k}$ for $|a^{-1} z| < 1$. (10.17)
- 313. The rational fraction H(z) coincides with the transfer function $H_z(z)$ of a causal filter on the ring |z| > |a|. In this case, the impulse response equals:

h(k) = aᵏ for k ≥ 0. (10.18)

It also coincides with the transfer function $H_z(z)$ of an anti-causal filter on the ring |z| < |a|. In that case, the impulse response equals:

$h(k) = -a^{k}$ for k < 0, and h(0) = 0. (10.19)

With the above information, we can see that a filter that is BIBO stable and causal admits a transfer function $H_z(z)$ on a convergence ring C that is not bounded and verifies T ⊂ C̄. Conversely, if a filter admits a transfer function on the domain {z ∈ ℂ | r < |z|} with r < 1, then the filter is BIBO stable and causal.

Let us now look at a causal filter whose transfer function is expressed in the form of a rational fraction:

$H_z(z) = \frac{Q(z)}{P(z)}$ (10.20)

where Q(z) and P(z) are complex polynomials in z. Without loss of generality, let us assume that the rational fraction $H_z(z)$ is irreducible, that is, that the polynomials P(z) and Q(z) have no zeros in common. Since the filter is causal, the convergence domain of the transfer function is not bounded; it is the subset:

{z ∈ ℂ | r < |z|}, (10.21)

where r is the highest modulus of the roots of P(z). We recall that the poles of the fraction $H_z(z)$ are the zeros² of its denominator P(z). These are the points at which $H_z(z)$ takes an infinite value.

2. A complex number ζ is a zero, or root, of a polynomial P(z) when P(ζ) = 0. The multiplicity order of ζ is then the integer k > 0 such that P(ζ) = 0, P'(ζ) = 0, …, P⁽ᵏ⁻¹⁾(ζ) = 0 and P⁽ᵏ⁾(ζ) ≠ 0, where P'(z) (resp. P⁽ᵏ⁾(z)) designates the derivative (resp. derivative of order k) of P(z).
- 314. So, following the above information, the filter is BIBO stable and causal if the poles of $H_z(z)$ each have a modulus strictly below 1; that is, if they all lie inside the open unit disk U = {z ∈ ℂ | |z| < 1}. In section 10.2, we will see how to characterize the polynomials whose zeros are all in U.

10.2. The Schur-Cohn criterion

In this section, we will present an algorithm that lets us determine, in a finite number of steps, whether the complex polynomial:

$P(z) = a_0 z^N + a_1 z^{N-1} + \cdots + a_N$, with $a_0 \ne 0$, (10.22)

has all its zeros inside the unit disk, that is, in the subset U = {z ∈ ℂ | |z| < 1}. For that, we introduce the polynomials $\bar{P}(z)$ and $P^*(z)$ represented as follows:

$\bar{P}(z) = a_0^* z^N + a_1^* z^{N-1} + \cdots + a_N^*$ (10.23)

obtained by conjugating³ the coefficients of P(z) without conjugating the variable, and

$P^*(z) = a_N^* z^N + \cdots + a_1^* z + a_0^*$ (10.24)

obtained by conjugating and inverting the order of the coefficients of P(z). The polynomial $P^*(z)$ is called the reciprocal polynomial of P(z).

EXAMPLE 10.4.– the polynomial P(z) = (2 + j)z³ + 3z admits as reciprocal polynomial P*(z) = 3z² + (2 – j).

3. For a complex number a, its conjugate is written a* or conj(a).
- 315. COMMENT 10.1.– the polynomials $P^*(z)$ and $\bar{P}(z)$ satisfy the following equality:

$P^*(z) = z^N \bar{P}(z^{-1})$. (10.25)

Now, from the polynomial $P_N(z) = P(z)$, we construct a family of N polynomials $P_j$, with 1 ≤ j ≤ N, by means of the recurrence relation:

$P_{j-1}(z) = \frac{P_j(z) + k_j P_j^*(z)}{z}$, (10.26)

where:

$k_j = -\frac{P_j(0)}{P_j^*(0)}$ (10.27)

and where we assume that $|k_j| \ne 1$ for 1 ≤ j ≤ N.

THEOREM 10.1.– the polynomial P(z) has all its zeros in U if and only if the coefficients $k_j$, with 1 ≤ j ≤ N, thus obtained each have a modulus strictly below 1.

COMMENT 10.2.– the proof of Theorem 10.1, and the statement of another, more general theorem linking the number of zeros of P(z) located inside the unit disk to the signature of a Hermitian quadratic form, are discussed in section 10.3.

EXAMPLE 10.5.– let us consider the following polynomial:

$P(z) = (1+2z)^2(z+1-j)(z+1+j) = 4z^4 + 12z^3 + 17z^2 + 10z + 2 = P_4(z)$.

P(z) admits two zeros inside the unit circle and two outside it. By applying equation (10.27), we obtain:

$k_4 = -\frac{P_4(0)}{P_4^*(0)} = -\frac{1}{2}$

Since $|k_4| < 1$, we determine $P_3(z)$ from equation (10.26), and we have:

$P_3(z) = 3z^3 + 7z^2 + \frac{17}{2}z + 4$ and $k_3 = -\frac{4}{3}$

Since $|k_3| > 1$, the polynomial P(z) does not have all its zeros in U.
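The recurrence (10.26)-(10.27) translates directly into a short program. In the sketch below a polynomial is stored as its coefficient list in descending powers; the reciprocal polynomial is the conjugated, reversed list, and dividing by z amounts to dropping the (cancelled) constant term. The function name is illustrative; only standard Python complex arithmetic is used.

```python
def schur_cohn(coeffs):
    """Schur-Cohn test. `coeffs` = [a0, ..., aN] of P(z) = a0 z^N + ... + aN.
    Returns [k_N, ..., k_1]; by Theorem 10.1, all zeros of P lie in the open
    unit disk iff every |k_j| < 1 (assuming no |k_j| equals 1)."""
    p = [complex(c) for c in coeffs]
    ks = []
    while len(p) >= 2:
        pstar = [c.conjugate() for c in reversed(p)]   # reciprocal polynomial P*
        k = -p[-1] / pstar[-1]                         # k_j = -P_j(0) / P_j*(0)
        ks.append(k)
        # P_{j-1}(z) = (P_j(z) + k_j P_j*(z)) / z : the constant term cancels,
        # so dropping the last coefficient performs the division by z.
        p = [a + k * b for a, b in zip(p, pstar)][:-1]
    return ks

ks = schur_cohn([4, 12, 17, 10, 2])        # Example 10.5: k_4 = -1/2, k_3 = -4/3, ...
stable = all(abs(k) < 1 for k in ks)       # False: not all zeros are in U
```

Running it on Example 10.5 reproduces k₄ = −1/2 and k₃ = −4/3, hence the "not all zeros in U" verdict.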
- 316. EXAMPLE 10.6.– now we consider the following polynomial:

$P(z) = \left(z + \frac{1+j}{2}\right)\left(z + \frac{1-j}{2}\right)\left(z + \frac{1}{2}\right)^2 = z^4 + 2z^3 + \frac{7}{4}z^2 + \frac{3}{4}z + \frac{1}{8} = \frac{1}{4}P_4(z)$,

with $P_4(z) = 4z^4 + 8z^3 + 7z^2 + 3z + \frac{1}{2}$, whose zeros are all inside the unit circle. By applying equations (10.26) and (10.27), we obtain:

$k_4 = -\frac{1}{8}$ and $P_3(z) = \frac{63}{16}z^3 + \frac{61}{8}z^2 + \frac{49}{8}z + 2$.

$k_3 = -\frac{32}{63}$ and $P_2(z) = \frac{2945}{1008}z^2 + \frac{325}{72}z + \frac{1135}{504}$.

$k_2 = -\frac{454}{589}$ and $P_1(z) = \frac{11175}{9424}z + \frac{4875}{4712}$.

$k_1 = -\frac{390}{447}$.

All the zeros of P(z) are thus in U.

EXAMPLE 10.7.– now let us look at the following polynomial:

$P(z) = \left(z + \frac{1+j}{\sqrt{2}}\right)\left(z + \frac{1-j}{\sqrt{2}}\right)\left(z + \frac{1}{2}\right)^2 = \frac{1}{4}P_4(z)$,

with $P_4(z) = 4z^4 + 4(1+\sqrt{2})z^3 + (5+4\sqrt{2})z^2 + (4+\sqrt{2})z + 1$,

which has two zeros inside the unit circle (the other two lie on the unit circle). By applying equations (10.26) and (10.27), we obtain:

$k_4 = -\frac{1}{4}$ and $P_3(z) = \frac{15}{4}z^3 + \left(3 + \frac{15\sqrt{2}}{4}\right)z^2 + \left(\frac{15}{4} + 3\sqrt{2}\right)z + 3$.

$k_3 = -\frac{4}{5}$ and $P_2(z) = \frac{27}{20}z^2 + \frac{27\sqrt{2}}{20}z + \frac{27}{20}$.
- 317. $k_2 = -1$.

Since the coefficient k₂ is of modulus 1, the polynomial P(z) does not have all its zeros in U.

The stability domain

In this section, we will look at causal filters whose transfer function is expressed in the form of a rational fraction $H_z(z) = Q(z)/P(z)$ that we assume to be irreducible. We have seen that the BIBO stability of a filter of this kind is assured if and only if all the poles of H(z) have a modulus strictly inferior to 1. It is useful to characterize the polynomials P(z) having this property; we call these Schur polynomials, after the German mathematician who wrote about them in the 1920s.

We will consider the polynomial of the variable z, $P(z) = a_0 z^N + \cdots + a_N$, which has indeterminate complex numbers as coefficients and whose degree does not exceed N. We associate to the polynomial P(z) the polynomial $P^*(z)$ expressed in equation (10.24) and the point P = (a₀, …, a_N) of ℂᴺ⁺¹; the polynomial $P^*(z)$ coincides with the reciprocal polynomial of P(z) when P(z) is of degree N, that is, when a₀ is non-null. Via this correspondence, which is a bijection between the space ℂ_N[z] of complex polynomials in z whose degree does not exceed N and the affine space ℂᴺ⁺¹, we will identify the polynomial P(z) with the point P = (a₀, …, a_N) and the polynomial $P^*(z)$ with the point $P^* = (a_N^*, \ldots, a_0^*)$. By construction, the polynomial $P^*(z)$ verifies the equality:

$P^*(u) = u^N\,\mathrm{conj}\!\left(P\!\left(\frac{1}{u^*}\right)\right)$. (10.28)

Consequently, a complex number u ≠ 0 is a root of $P^*(z)$ with multiplicity order α > 0 if and only if $\frac{1}{u^*}$ is a zero of P(z) with the same multiplicity order. In addition, the complex number u = 0 is a root of $P^*(z)$ with multiplicity order α > 0 if and only if the polynomial P(z) is of degree N – α. Conversely, the polynomial $P^*(z)$ is of degree N – α < N only if the complex number u = 0 is a root of P(z) with multiplicity order α. In other words, when the
- 318. polynomial P(z) is of degree N, the zeros of the reciprocal polynomial $P^*(z)$ are deduced from those of P(z) via the transformation:

$u \mapsto \frac{1}{u^*}$ (10.29)

This transformation carries the non-null complex number u = r exp(jθ) to the number $\frac{1}{u^*} = \frac{1}{r}\exp(j\theta)$, which has the same phase angle as u and the inverse modulus. Furthermore, it is involutive, meaning it is equal to its own inverse. The transformation in equation (10.29) thus maps the interior of the unit disk, with the origin 0 removed, onto the set of complex numbers of modulus strictly superior to 1, and conversely. It also carries the unit circle onto itself.

In addition, we write #P for the number of complex zeros of the polynomial P(z) whose modulus is inferior or equal to 1, the zeros being counted following their multiplicity order; that is, the double roots counted twice, the triple roots three times, and so on.

EXAMPLE 10.8.– the following polynomial:

$P(z) = (4z^2+1)(z-1)^2 = 4z^4 - 8z^3 + 5z^2 - 2z + 1$ (10.30)

which admits j/2 and –j/2 as simple roots and 1 as a double root, verifies #P = 4.

Every non-null polynomial P(z), written in the form presented in the section entitled "The stability domain" above, admits at most N roots, counted following their multiplicity order, in the complex plane. For P ∈ ℂᴺ⁺¹\{0}, we thus have:

$0 \le \#P \le N$ (10.31)

We regroup the polynomials of ℂ_N[z] following the number of their roots of modulus inferior or equal to 1; in this way we obtain a partition of the space ℂᴺ⁺¹\{0} into N + 1 domains:

$D_p = \{P \in \mathbb{C}^{N+1}\setminus\{0\} \mid \#P = p\}$, with 0 ≤ p ≤ N. (10.32)

A polynomial P(z) of degree N has all its roots of modulus strictly inferior to 1 if and only if its reciprocal polynomial $P^*(z)$ does not vanish in the closed unit disk.
- 319. In other words, a polynomial P(z) of degree N is a Schur polynomial if and only if $P^* \in D_0$. The domain D₀ is thus the domain of the reciprocals of the Schur polynomials of degree N, and it characterizes the set of complex polynomials of degree N appearing as denominators of transfer functions of ARMA filters that are asymptotically stable and causal. We call D₀ the stability domain of the space ℂᴺ⁺¹. Now we will develop the following expression:

$P(z) = a_0 \prod_{i=1}^{N}(z - \xi_i) = a_0\left[z^N - \sigma_1(\xi_1,\ldots,\xi_N)\,z^{N-1} + \cdots + (-1)^N \sigma_N(\xi_1,\ldots,\xi_N)\right]$

By introducing the elementary symmetric polynomials $\sigma_1,\ldots,\sigma_N$ of the roots $\xi_1,\ldots,\xi_N$ of P(z), we establish the well-known relations between the coefficients of P(z) and its roots [MIG 99]. It is then clear that the coefficients $a_0, a_1, \ldots, a_N$ of P(z) are continuous functions of the roots ξ₁, …, ξ_N and of the coefficient of the highest degree a₀. However, as soon as N ≥ 5, it is no longer possible to explicitly link the roots of the
