
Digital Filters Design for Signal and Image Processing
Edited by Mohamed Najim
First published in France in 2004 by Hermès Science/Lavoisier under the title "Synthèse de filtres numériques en traitement du signal et des images". First published in Great Britain and the United States in 2006 by ISTE Ltd.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned addresses:

ISTE Ltd, 6 Fitzroy Square, London W1T 5DX, UK
ISTE USA, 4308 Patrice Road, Newport Beach, CA 92663, USA
www.iste.co.uk

© ISTE Ltd, 2006
© LAVOISIER, 2004

The rights of Mohamed Najim to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data
Synthèse de filtres numériques en traitement du signal et des images. English
Digital filters design for signal and image processing / edited by Mohamed Najim.
p. cm. Includes index.
ISBN-13: 978-1-905209-45-3
ISBN-10: 1-905209-45-2
1. Electric filters, Digital. 2. Signal processing--Digital techniques. 3. Image processing--Digital techniques. I. Najim, Mohamed. II. Title.
TK7872.F5S915 2006
621.382'2--dc22
2006021429

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library.
ISBN 10: 1-905209-45-2
ISBN 13: 978-1-905209-45-3

Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire.
Table of Contents

Introduction

Chapter 1. Introduction to Signals and Systems
Yannick BERTHOUMIEU, Eric GRIVEL and Mohamed NAJIM
1.1. Introduction
1.2. Signals: categories, representations and characterizations
1.2.1. Definition of continuous-time and discrete-time signals
1.2.2. Deterministic and random signals
1.2.3. Periodic signals
1.2.4. Mean, energy and power
1.2.5. Autocorrelation function
1.3. Systems
1.4. Properties of discrete-time systems
1.4.1. Invariant linear systems
1.4.2. Impulse responses and convolution products
1.4.3. Causality
1.4.4. Interconnections of discrete-time systems
1.5. Bibliography

Chapter 2. Discrete System Analysis
Mohamed NAJIM and Eric GRIVEL
2.1. Introduction
2.2. The z-transform
2.2.1. Representations and summaries
2.2.2. Properties of the z-transform
2.2.2.1. Linearity
2.2.2.2. Advanced and delayed operators
2.2.2.3. Convolution
2.2.2.4. Changing the z-scale
2.2.2.5. Contrasted signal development
2.2.2.6. Derivation of the z-transform
2.2.2.7. The sum theorem
2.2.2.8. The final-value theorem
2.2.2.9. Complex conjugation
2.2.2.10. Parseval's theorem
2.2.3. Table of standard transforms
2.3. The inverse z-transform
2.3.1. Introduction
2.3.2. Methods of determining inverse z-transforms
2.3.2.1. Cauchy's theorem: a case of complex variables
2.3.2.2. Development in rational fractions
2.3.2.3. Development by algebraic division of polynomials
2.4. Transfer functions and difference equations
2.4.1. The transfer function of a continuous system
2.4.2. Transfer functions of discrete systems
2.5. z-transforms of the autocorrelation and intercorrelation functions
2.6. Stability
2.6.1. Bounded input, bounded output (BIBO) stability
2.6.2. Regions of convergence
2.6.2.1. Routh's criterion
2.6.2.2. Jury's criterion

Chapter 3. Frequential Characterization of Signals and Filters
Eric GRIVEL and Yannick BERTHOUMIEU
3.1. Introduction
3.2. The Fourier transform of continuous signals
3.2.1. Summary of the Fourier series decomposition of continuous signals
3.2.1.1. Decomposition of finite energy signals using an orthonormal base
3.2.1.2. Fourier series development of periodic signals
3.2.2. Fourier transforms and continuous signals
3.2.2.1. Representations
3.2.2.2. Properties
3.2.2.3. The duality theorem
3.2.2.4. The quick method of calculating the Fourier transform
3.2.2.5. The Wiener-Khintchine theorem
3.2.2.6. The Fourier transform of a Dirac comb
3.2.2.7. Another method of calculating the Fourier series development of a periodic signal
3.2.2.8. The Fourier series development and the Fourier transform
3.2.2.9. Applying the Fourier transform: Shannon's sampling theorem
3.3. The discrete Fourier transform (DFT)
3.3.1. Expressing the Fourier transform of a discrete sequence
3.3.2. Relations between the Laplace and Fourier z-transforms
3.3.3. The inverse Fourier transform
3.3.4. The discrete Fourier transform
3.4. The fast Fourier transform (FFT)
3.5. The fast Fourier transform for a time/frequency/energy representation of a non-stationary signal
3.6. Frequential characterization of a continuous-time system
3.6.1. First and second order filters
3.6.1.1. 1st order system
3.6.1.2. 2nd order system
3.7. Frequential characterization of a discrete-time system
3.7.1. Amplitude and phase frequential diagrams
3.7.2. Application

Chapter 4. Continuous-Time and Analog Filters
Daniel BASTARD and Eric GRIVEL
4.1. Introduction
4.2. Different types of filters and filter specifications
4.3. Butterworth filters and the maximally flat approximation
4.3.1. Maximally flat functions (MFM)
4.3.2. A specific example of MFM functions: Butterworth polynomial filters
4.3.2.1. Amplitude-squared expression
4.3.2.2. Localization of poles
4.3.2.3. Determining the cut-off frequency at -3 dB and filter orders
4.3.2.4. Application
4.3.2.5. Realization of a Butterworth filter
4.4. Equiripple filters and the Chebyshev approximation
4.4.1. Characteristics of the Chebyshev approximation
4.4.2. Type I Chebyshev filters
4.4.2.1. The Chebyshev polynomial
4.4.2.2. Type I Chebyshev filters
4.4.2.3. Pole determination
4.4.2.4. Determining the cut-off frequency at -3 dB and the filter order
4.4.2.5. Application
4.4.2.6. Realization of a Chebyshev filter
4.4.2.7. Asymptotic behavior
4.4.3. Type II Chebyshev filters
4.4.3.1. Determining the filter order and the cut-off frequency
4.4.3.2. Application
4.5. Elliptic filters: the Cauer approximation
4.6. Summary of four types of low-pass filter: Butterworth, Chebyshev type I, Chebyshev type II and Cauer
4.7. Linear phase filters (maximally flat delay or MFD): Bessel and Thomson filters
4.7.1. Reminders on continuous linear phase filters
4.7.2. Properties of Bessel-Thomson filters
4.7.3. Bessel and Bessel-Thomson filters
4.8. Papoulis filters (optimum (On))
4.8.1. General characteristics
4.8.2. Determining the poles of the transfer function
4.9. Bibliography

Chapter 5. Finite Impulse Response Filters
Yannick BERTHOUMIEU, Eric GRIVEL and Mohamed NAJIM
5.1. Introduction to finite impulse response filters
5.1.1. Difference equations and FIR filters
5.1.2. Linear phase FIR filters
5.1.2.1. Representation
5.1.2.2. Different forms of FIR linear phase filters
5.1.2.3. Position of zeros in FIR filters
5.1.3. Summary of the properties of FIR filters
5.2. Synthesizing FIR filters using frequential specifications
5.2.1. Windows
5.2.2. Synthesizing FIR filters using the windowing method
5.2.2.1. Low-pass filters
5.2.2.2. High-pass filters
5.3. Optimal approach of equal ripple in the stop-band and passband
5.4. Bibliography

Chapter 6. Infinite Impulse Response Filters
Eric GRIVEL and Mohamed NAJIM
6.1. Introduction to infinite impulse response filters
6.1.1. Examples of IIR filters
6.1.2. Zero-loss and all-pass filters
6.1.3. Minimum-phase filters
6.1.3.1. Problem
6.1.3.2. Stabilizing inverse filters
6.2. Synthesizing IIR filters
6.2.1. Impulse invariance method for analog to digital filter conversion
6.2.2. The invariance method of the indicial response
6.2.3. Bilinear transformations
6.2.4. Frequency transformations for filter synthesis using low-pass filters
6.3. Bibliography

Chapter 7. Structures of FIR and IIR Filters
Mohamed NAJIM and Eric GRIVEL
7.1. Introduction
7.2. Structure of FIR filters
7.3. Structure of IIR filters
7.3.1. Direct structures
7.3.2. The cascade structure
7.3.3. Parallel structures
7.4. Realizing finite precision filters
7.4.1. Introduction
7.4.2. Examples of FIR filters
7.4.3. IIR filters
7.4.3.1. Introduction
7.4.3.2. The influence of quantification on filter stability
7.4.3.3. Introduction to scale factors
7.4.3.4. Decomposing the transfer function into first- and second-order cells
7.5. Bibliography

Chapter 8. Two-Dimensional Linear Filtering
Philippe BOLON
8.1. Introduction
8.2. Continuous models
8.2.1. Representation of 2-D signals
8.2.2. Analog filtering
8.3. Discrete models
8.3.1. 2-D sampling
8.3.2. The aliasing phenomenon and Shannon's theorem
8.3.2.1. Reconstruction by linear filtering (Shannon's theorem)
8.3.2.2. Aliasing effect
8.4. Filtering in the spatial domain
8.4.1. 2-D discrete convolution
8.4.2. Separable filters
8.4.3. Separable recursive filtering
8.4.4. Processing of side effects
8.4.4.1. Prolonging the image by pixels of null intensity
8.4.4.2. Prolonging by duplicating the border pixels
8.4.4.3. Other approaches
8.5. Filtering in the frequency domain
8.5.1. 2-D discrete Fourier transform (DFT)
8.5.2. The circular convolution effect
8.6. Bibliography

Chapter 9. Two-Dimensional Finite Impulse Response Filter Design
Yannick BERTHOUMIEU
9.1. Introduction
9.2. Introduction to 2-D FIR filters
9.3. Synthesizing with the two-dimensional windowing method
9.3.1. Principles of the method
9.3.2. Theoretical 2-D frequency shape
9.3.2.1. Rectangular frequency shape
9.3.2.2. Circular shape
9.3.3. Digital 2-D filter design by windowing
9.3.4. Applying filters based on rectangular and circular shapes
9.3.5. 2-D Gaussian filters
9.3.6. 1-D and 2-D representations in a continuous space
9.3.6.1. 2-D specifications
9.3.7. Approximation for FIR filters
9.3.7.1. Truncation of the Gaussian profile
9.3.7.2. Rectangular windows and convolution
9.3.8. An example based on exploiting a modulated Gaussian filter
9.4. Appendix: spatial window functions and their implementation
9.5. Bibliography

Chapter 10. Filter Stability
Michel BARRET
10.1. Introduction
10.2. The Schur-Cohn criterion
10.3. Appendix: resultant of two polynomials
10.4. Bibliography

Chapter 11. The Two-Dimensional Domain
Michel BARRET
11.1. Recursive filters
11.1.1. Transfer functions
11.1.2. The 2-D z-transform
11.1.3. Stability, causality and semi-causality
11.2. Stability criteria
11.2.1. Causal filters
11.2.2. Semi-causal filters
11.3. Algorithms used in stability tests
11.3.1. The Jury table
11.3.2. Algorithms based on calculating the Bezout resultant
11.3.2.1. First algorithm
11.3.2.2. Second algorithm
11.3.3. Algorithms and rounding-off errors
11.4. Linear predictive coding
11.5. Appendix A: demonstration of the Schur-Cohn criterion
11.6. Appendix B: optimum 2-D stability criteria
11.7. Bibliography

List of Authors

Index
Introduction

Over the last decade, digital signal processing has matured, and digital signal processing techniques have played a key role in the expansion of electronic products for everyday use, especially in the fields of audio, image and video processing. Nowadays, digital signal processing is used in MP3 and DVD players, digital cameras and mobile phones, as well as in radar processing, biomedical applications, seismic data processing, etc.

This book aims to be a textbook presenting a thorough introduction to digital signal processing, featuring the design of digital filters. The purpose of the first part (Chapters 1 to 9) is to initiate the newcomer to digital signal and image processing, whereas the second part (Chapters 10 and 11) covers some advanced topics on stability for 2-D filter design. These chapters are written at a level suitable for students or for individual study by practicing engineers.

When talking about filtering methods, we refer to techniques to design and synthesize filters with constant filter coefficients. By way of contrast, in adaptive filters the filter taps change with time to adjust to the underlying system. These types of filters are not addressed here, but are presented in various books such as [HAY 96], [SAY 03] and [NAJ 06].

Chapter 1 provides an overview of various classes of signals and systems. It discusses the time-domain representations and characterizations of continuous-time and discrete-time signals.

Chapter 2 details the background for the analysis of discrete-time signals. It mainly deals with the z-transform, its properties and its use for the analysis of linear systems represented by difference equations.
Chapter 3 is dedicated to the analysis of the frequency properties of signals and systems. The Fourier transform, the discrete Fourier transform (DFT) and the fast Fourier transform (FFT) are introduced along with their properties. In addition, the well-known Shannon sampling theorem is recalled.

As we will see, some of the most popular techniques for digital infinite impulse response (IIR) filter design benefit from results initially developed for analog signals. To make the reader's task easier, Chapter 4 is devoted to continuous-time filter design. More particularly, we recall several approximation techniques developed by mathematicians such as Chebyshev or Legendre, whose names have thus become associated with techniques of filter design.

The following chapters form the core of the book. Chapter 5 deals with the techniques to synthesize finite impulse response (FIR) filters. Unlike IIR filters, these have no equivalent in the continuous-time domain. The so-called windowing method is presented first as a FIR filter design method; this also enables us to emphasize the key role played by windowing in digital signal processing, e.g., for frequency analysis. The Remez algorithm is then detailed.

Chapter 6 concerns IIR filters. The most popular techniques for analog-to-digital filter conversion, such as the bilinear transform and the impulse invariance method, are presented. As the frequency response of these filters is represented by rational functions, we must tackle the stability problems induced by the poles of these rational functions.

In Chapter 7, we address the selection of the filter structure and point out its importance for filter implementation. Some problems due to finite-precision implementation are listed, and we provide rules to choose an appropriate structure when implementing filters on fixed-point devices.
In comparison with many available books dedicated to digital filtering, this title features both 1-D and 2-D systems, and as such covers both signal and image processing. Thus, in Chapters 8 and 9, 2-D filtering is investigated. Moreover, it is not easy to establish necessary and sufficient conditions to test the stability of 2-D systems. Therefore, Chapters 10 and 11 are dedicated to the difficult problem of the stability of 2-D digital systems, a topic which is still the subject of many works such as [ALA 2003] and [SER 06]. Even if these two chapters are not a prerequisite for filter design, they can provide the reader who would like to study the problems of stability in the multi-dimensional case with valuable clarifications. This contribution is another element that makes this book stand out.
The field of digital filtering is often perceived by students as a "patchwork" of formulae and recipes. Indeed, the methods and concepts are based on several specific optimization techniques and mathematical results which are difficult to grasp. For instance, we should remember that the so-called Parks-McClellan algorithm, proposed in 1972, was first rejected by the reviewers [PAR 72]. This was probably because the size of the submitted paper, i.e., 5 pages, did not enable the reviewers to understand every step of the approach [McC 05]. In this book we have tried, at every stage, to justify the necessity of these approaches without recalling all the steps of the derivation of the algorithms. These are described in many articles published during the 1970s in the IEEE periodicals, i.e., the Transactions on Acoustics, Speech and Signal Processing, which has since become the Transactions on Signal Processing, and the Transactions on Circuits and Systems.

Mohamed NAJIM
Bordeaux

[ALA 2003] ALATA O., NAJIM M., RAMANANJARASOA C. and TURCU F., "Extension of the Schur-Cohn Stability Test for 2-D AR Quarter-Plane Model", IEEE Transactions on Information Theory, vol. 49, no. 11, November 2003.
[HAY 96] HAYKIN S., Adaptive Filter Theory, 3rd edition, Prentice Hall, 1996.
[McC 05] McCLELLAN J.H. and PARKS T.W., "A Personal History of the Parks-McClellan Algorithm", IEEE Signal Processing Magazine, pp. 82-86, March 2005.
[NAJ 06] NAJIM M., Modélisation, estimation et filtrage optimal en traitement du signal, forthcoming, Hermès, Paris, 2006.
[PAR 72] PARKS T.W. and McCLELLAN J.H., "Chebyshev Approximation for Nonrecursive Digital Filters with Linear Phase", IEEE Transactions on Circuit Theory, vol. CT-19, no. 2, pp. 189-194, 1972.
[SAY 03] SAYED A., Fundamentals of Adaptive Filtering, Wiley-IEEE Press, 2003.
[SER 06] SERBAN I., TURCU F. and NAJIM M., "Schur Coefficients in Several Variables", Journal of Mathematical Analysis and Applications, vol. 320, no. 1, pp. 293-302, August 2006.
Chapter 1

Introduction to Signals and Systems

Chapter written by Yannick BERTHOUMIEU, Eric GRIVEL and Mohamed NAJIM.

1.1. Introduction

Throughout a range of fields as varied as multimedia, telecommunications, geophysics, astrophysics, acoustics and biomedicine, signals and systems play a major role. Their frequential and temporal characteristics are used to extract and analyze the information they contain. However, what importance do signals and systems really hold for these disciplines? In this chapter we will look at some of the answers to this question.

First we will discuss different types of continuous-time and discrete-time signals, which can be termed random or deterministic according to their nature. We will also introduce several mathematical tools to help characterize these signals. In addition, we will describe the acquisition chain and processing of signals. Later we will define the concept of a system, emphasizing invariant discrete-time linear systems.

1.2. Signals: categories, representations and characterizations

1.2.1. Definition of continuous-time and discrete-time signals

The function of a signal is to serve as a medium for information. It is a representation of the variations of a physical variable.
A signal can be measured by a sensor, then analyzed to describe a physical phenomenon. This is the case of the voltage measured across a resistor in order to verify the correct functioning of an electronic board, as well as, to cite one example, of speech signals that describe the air pressure fluctuations perceived by the human ear.

Generally, a signal is a function of time. There are two kinds of signals: continuous-time and discrete-time. A continuous-time or analog signal can be measured at any instant. Physical phenomena create, for the most part, continuous-time signals.

Figure 1.1. Example of the sleep spindles of an electroencephalogram (EEG) signal

The advancement of computer-based techniques at the end of the 20th century led to the development of digital methods for information processing. The capacity to change analog signals into digital signals has meant a continual improvement in processing devices in many application fields. The most significant example of this is in the field of telecommunications, especially in cell phones and digital television. The digital representation of signals has led to an explosion of new techniques in other fields as varied as speech processing, audiofrequency signal analysis, biomedical disciplines, seismic measurements, multimedia, radar and measurement instrumentation, among others.
A signal is said to be a discrete-time signal when it can be measured only at certain instants; it corresponds to a sequence of numerical values. Sampled signals are the result of sampling, uniform or not, of a continuous-time signal. In this work, we are especially interested in signals taken at regular intervals of time, called sampling periods, written Ts = 1/fs, where fs is called the sampling rate or the sampling frequency. This is the situation for a temperature taken during an experiment, or for a speech signal (see Figure 1.2). Such a discrete signal can be written either as x(k) or x(kTs). Generally, we will use the first notation for its simplicity.

In addition, a digital signal is a discrete-time, discrete-valued signal. In that case, each signal sample value belongs to a finite set of possible values.

Figure 1.2. Example of a digital voiced speech signal (the sampling frequency fs is 16 kHz)

The choice of a sampling frequency depends on the application being considered and the frequency range of the signal to be sampled. Table 1.1 gives several examples of sampling frequencies, according to different applications.
Signal | f_s | T_s
Speech, telephone band (telephony) | 8 kHz | 125 µs
Speech, broadband (audio-visual conferencing) | 16 kHz | 62.5 µs
Audio, broadband (stereo) | 32 kHz | 31.25 µs
Audio, broadband (stereo) | 44.1 kHz | 22.7 µs
Audio, broadband (stereo) | 48 kHz | 20.8 µs
Video | 10 MHz | 100 ns

Table 1.1. Sampling frequencies according to processed signals

In Figure 1.3, we show an acquisition chain, a processing chain and a signal restitution chain. The adaptation amplifier makes the input signal compatible with the measurement chain. A pre-filter, either band-pass or low-pass, is chosen to limit the width of the input-signal spectrum; this avoids the undesirable spectral overlap and hence the loss of spectral information (aliasing). We will return to this point when we discuss the sampling theorem in section 3.2.2.9. This kind of anti-aliasing filter also makes it possible to reject out-of-band noise and, when it is a band-pass filter, it helps suppress the continuous (DC) component of the signal. The analog-to-digital converter (A/D) first carries out sampling at the sampling frequency f_s, then quantization, that is, it encodes each sample on a certain number of bits. The digital input signal is then processed in order to give the digital output signal. The reconversion into an analog signal is made possible by using a D/A converter and a smoothing filter. Many parameters influence the processing, notably the quantization step and the response time of the digital system, both during acquisition and restitution. However, by improving the precision of the A/D converter and the speed of the processors, we can mitigate these problems. The choice of the sampling frequency also plays an important role.
Figure 1.3. Complete acquisition chain and digital processing of a signal (sensor, adaptation amplifier, low-pass or band-pass filter, sampler-blocker, A/D converter, digital processing system, D/A converter, smoothing filter)

Different types of digital signal representation are possible, such as functional representations, tabulated representations, sequential representations, and graphic representations (as in bar diagrams). As examples of basic digital signals, we recall the unit sample sequence, represented by the Kronecker symbol δ(k), the unit step signal u(k), and the unit ramp signal r(k). This gives us:

Unit sample sequence:

δ(0) = 1
δ(k) = 0 for k ≠ 0
Unit step signal:

u(k) = 1 for k ≥ 0
u(k) = 0 for k < 0

Unit ramp signal:

r(k) = k for k ≥ 0
r(k) = 0 for k < 0.

Figure 1.4. Unit sample sequence δ(k) and unit step signal u(k) (amplitude versus indices)

1.2.2. Deterministic and random signals

We class signals as being deterministic or random. Random signals can be defined according to the domain in which they are observed. Sometimes, even when all the experimental conditions governing the physical variable have been fixed, we see that it still fluctuates. Its values are not completely determined, but they can be evaluated in terms of probability. In this case, we are dealing with a random experiment and the signal is called random. In the opposite situation, the signal is called deterministic.
Figure 1.5. Several realizations of a 1-D random signal (five realizations plotted against sample index)

EXAMPLE 1.1.– let us look at a continuous signal modeled by a sinusoidal function of the following type:

x(t) = a sin(2π f t)

This kind of model is deterministic. However, in other situations, the signal amplitude and the signal frequency can be subject to variations. Moreover, the signal can be disturbed by an additive noise b(t); it is then written in the following form:

x(t) = a(t) sin(2π f(t) t) + b(t)

where a(t), f(t) and b(t) are random variables for each value of t. We then say that x(t) is a random signal. The properties of the received signal x(t) then depend on the statistical properties of these random variables.
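The model of Example 1.1 can be simulated to produce several realizations of the random signal, as suggested by Figure 1.5. The sketch below is illustrative only: the distributions of a, f and b(t), and all numerical values, are our own assumptions, not the book's.

```python
import math
import random

def realization(rng, n=64, fs=1000.0):
    """One sampled realization of x(t) = a sin(2 pi f t) + b(t), where the
    amplitude a, the frequency f and the noise b are random (illustrative
    distributions, chosen arbitrarily)."""
    a = 1.0 + 0.2 * rng.gauss(0.0, 1.0)    # random amplitude around 1
    f = 50.0 + 5.0 * rng.random()          # random frequency in [50, 55) Hz
    return [a * math.sin(2.0 * math.pi * f * k / fs) + 0.1 * rng.gauss(0.0, 1.0)
            for k in range(n)]

rng = random.Random(42)
realizations = [realization(rng) for _ in range(5)]

# Each realization of the same random experiment differs sample by sample:
print(len(realizations), len(realizations[0]))     # 5 64
print(realizations[0] != realizations[1])          # True
```

Averaging over many such realizations is what the statistical quantities of section 1.2.4 (mean, correlation) formalize.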
Figure 1.6. Several examples of a discrete random 2-D process

1.2.3. Periodic signals

The class of signals termed periodic plays an important role in signal and image processing. A continuous-time signal is called periodic with period T_0 if T_0 is the smallest value verifying the relation:

x(t + T_0) = x(t), ∀t.

And, for a discrete-time signal of period N_0, we have:

x(k + N_0) = x(k), ∀k.

EXAMPLE 1.2.– examples of periodic signals:

x(t) = sin(2π f_0 t),  x(k) = (−1)^k,  x(k) = cos(πk / 8).
1.2.4. Mean, energy and power

We can characterize a signal by its mean value. This value represents the continuous (DC) component of the signal. When the signal is deterministic, it equals:

µ = lim_{T_1→+∞} (1/T_1) ∫_(T_1) x(t) dt    (1.1)

where T_1 designates the integration time. When a continuous-time signal is periodic with period T_0, the expression of the mean value reduces to:

µ = (1/T_0) ∫_(T_0) x(t) dt    (1.2)

PROOF.– we can always express the integration time T_1 according to the period of the signal in the following way: T_1 = kT_0 + ξ, where k is an integer and ξ is chosen so that 0 < ξ ≤ T_0. From there,

µ = lim_{T_1→+∞} (1/T_1) ∫_(T_1) x(t) dt = lim_{k→+∞} (1/kT_0) ∫_(kT_0) x(t) dt,

since ξ becomes insignificant compared to kT_0. By using the periodicity property of the continuous signal x(t), we deduce that

µ = (1/kT_0) Σ_k ∫_(T_0) x(t) dt = (1/T_0) ∫_(T_0) x(t) dt.

When the signal is random, the statistical mean is defined, for a fixed value of t, as follows:

µ(t) = E[X(t)] = ∫_{−∞}^{+∞} x p(x, t) dx    (1.3)

where E[.] indicates the mathematical expectation and p(x, t) represents the probability density of the random signal at the instant t. We can obtain the mean value if we know p(x, t); in other situations, we can only obtain an estimated value.
For the class of signals called ergodic in the sense of the mean, the statistical mean coincides with the temporal mean, which brings us back to the expression we have seen previously:

µ = lim_{T_1→+∞} (1/T_1) ∫_(T_1) x(t) dt.

Often, we are interested in the energy ε of the processed signal. For a continuous-time signal x(t), we have:

ε = ∫_{−∞}^{+∞} |x(t)|² dt.    (1.4)

In the case of a discrete-time signal, the energy is defined as the sum of the magnitude-squared values of the signal x(k):

ε = Σ_k |x(k)|²    (1.5)

For a continuous-time signal x(t), its mean power P is expressed as follows:

P = lim_{T→+∞} (1/T) ∫_(T) |x(t)|² dt.    (1.6)

For a discrete-time signal x(k), its mean power is represented as:

P = lim_{N→+∞} (1/N) Σ_{k=1}^{N} |x(k)|²    (1.7)

In signal processing, we often introduce the concept of signal-to-noise ratio (SNR) to characterize the noise that can affect signals. This quantity, expressed in decibels (dB), corresponds to the ratio of the powers of the signal and the noise. It is represented as:

SNR = 10 log_10 [ P_signal / P_noise ]    (1.8)

where P_signal and P_noise indicate, respectively, the powers of the signal and noise sequences.

EXAMPLE 1.3.– let us consider a periodic signal of frequency 300 Hz, perturbed by a zero-mean additive Gaussian noise, with a signal-to-noise ratio varying from 20 dB to 0 dB in 10 dB steps. Figures 1.7 and 1.8 show these different situations.
Figure 1.7. Temporal representation of the original signal and of the signal with additive noise, with a signal-to-noise ratio equal to 20 dB

Figure 1.8. Temporal representation of signals with additive noise, with signal-to-noise ratios equal to 10 dB and 0 dB
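The situation of Example 1.3 is easy to reproduce numerically: given a target SNR, equation (1.8) fixes the noise power, and hence the standard deviation of the Gaussian noise to add. The sketch below uses our own function names and arbitrary parameter values.

```python
import math
import random

def add_noise(signal, snr_db, rng):
    """Add zero-mean Gaussian noise scaled so that
    10*log10(P_signal / P_noise) equals snr_db, as in equation (1.8)."""
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    sigma = math.sqrt(p_noise)
    return [s + rng.gauss(0.0, sigma) for s in signal]

rng = random.Random(0)
fs, f0 = 8000.0, 300.0
x = [math.sin(2.0 * math.pi * f0 * k / fs) for k in range(4000)]

measured = []
for target in (20, 10, 0):
    y = add_noise(x, target, rng)
    p_x = sum(s * s for s in x) / len(x)
    p_n = sum((yi - xi) ** 2 for yi, xi in zip(y, x)) / len(x)
    measured.append(10.0 * math.log10(p_x / p_n))

print([round(m, 1) for m in measured])  # close to [20, 10, 0]
```

The measured SNR fluctuates slightly around the target because the noise power is estimated over a finite number of samples.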
1.2.5. Autocorrelation function

Let us take the example of a deterministic continuous signal x(t) of finite energy. We can carry out a signal analysis from its autocorrelation function, which is defined as:

R_xx(τ) = ∫_{−∞}^{+∞} x(t) x*(t − τ) dt    (1.9)

The autocorrelation function allows us to measure the degree of resemblance existing between x(t) and x(t − τ). Several of its properties follow from those of scalar products. From the relations shown in equations (1.4) and (1.9), we see that R_xx(0) corresponds to the energy of the signal. We can easily demonstrate the following properties:

R_xx(−τ) = R*_xx(τ), ∀τ ∈ ℝ    (1.10)

|R_xx(τ)| ≤ R_xx(0), ∀τ ∈ ℝ    (1.11)

When the signal is periodic with period T_0, the autocorrelation function is periodic with period T_0. It can be obtained as follows:

R_xx(τ) = (1/T_0) ∫_(T_0) x(t) x*(t − τ) dt    (1.12)

We should remember that the autocorrelation function is a specific instance of the intercorrelation function of two deterministic signals x(t) and y(t), defined as:

R_xy(τ) = ∫_{−∞}^{+∞} x(t) y*(t − τ) dt    (1.13)

Now, let us look at a discrete-time random process {x(k)}. We can describe this process from its autocorrelation function, at the instants k_1 and k_2, written R_xx(k_1, k_2) and expressed as

R_xx(k_1, k_2) = E[x(k_1) x*(k_2)], ∀(k_1, k_2) ∈ ℤ × ℤ    (1.14)

where x*(k_2) denotes the conjugate of x(k_2) in the case of complex processes.
The covariance (or autocovariance) function C_xx of the process, taken at instants k_1 and k_2, is given by:

C_xx(k_1, k_2) = E[ (x(k_1) − E[x(k_1)]) (x(k_2) − E[x(k_2)])* ],    (1.15)

where E[x(k_1)] indicates the statistical mean of x(k_1). We should keep in mind that, for zero-mean random processes, the autocovariance and autocorrelation functions are equal:

C_xx(k_1, k_2) = R_xx(k_1, k_2), ∀(k_1, k_2).    (1.16)

The correlation coefficient is as follows:

ρ_xx(k_1, k_2) = C_xx(k_1, k_2) / sqrt( C_xx(k_1, k_1) C_xx(k_2, k_2) ), ∀(k_1, k_2) ∈ ℤ × ℤ.    (1.17)

It verifies:

|ρ_xx(k_1, k_2)| ≤ 1, ∀(k_1, k_2) ∈ ℤ × ℤ.    (1.18)

When the correlation coefficient ρ_xx(k_1, k_2) takes a high, positive value, the values of the random process at instants k_1 and k_2 have similar behaviors: large values of x(k_1) correspond to large values of x(k_2), and small values of x(k_1) correspond to small values of x(k_2). The closer ρ_xx(k_1, k_2) is to zero, the lower the correlation. When ρ_xx(k_1, k_2) equals zero for all distinct values of k_1 and k_2, the values of the process are termed decorrelated. If ρ_xx(k_1, k_2) is negative, x(k_1) and x(k_2) tend to have opposite signs.

In a more general situation, if we look at two random processes x(k) and y(k), their intercorrelation function is written as:

R_xy(k_1, k_2) = E[x(k_1) y*(k_2)]    (1.19)

As for the intercovariance function, it is given by:

C_xy(k_1, k_2) = E[ (x(k_1) − E[x(k_1)]) (y(k_2) − E[y(k_2)])* ]    (1.20)
C_xy(k_1, k_2) = R_xy(k_1, k_2) − E[x(k_1)] (E[y(k_2)])*    (1.21)

The two random processes are not correlated if

C_xy(k_1, k_2) = 0, ∀(k_1, k_2).    (1.22)

A process is called stationary to the 2nd order, or stationary in a broad (wide) sense, if its statistical mean µ = E[x(k)] is a constant and if its autocorrelation function depends only on the gap between k_1 and k_2; that is, if:

R_xx(k_1, k_2) = R_xx(k_1 − k_2).    (1.23)

From this, for stationary processes, the autocorrelation function verifies two conditions. The first condition relates to symmetry. Given that:

R_xx(m) = E[x(k + m) x*(k)]    (1.24)

we can easily show that:

R_xx(−m) = R*_xx(m), ∀m ∈ ℤ.    (1.25)

For the second condition, we introduce the random vector x consisting of M+1 samples of the process {x(k)}:

x = [x(0) ... x(M)]^T.    (1.26)

The autocorrelation matrix R_M is given by E[x x^H], where x^H indicates the Hermitian transpose of x. This is a Toeplitz matrix that is expressed in the following form:

        ⎡ R_xx(0)      R_xx(1)      ...  R_xx(M−1)  R_xx(M)   ⎤
        ⎢ R_xx(−1)     R_xx(0)      ...             R_xx(M−1) ⎥
R_M  =  ⎢   ...                     ...               ...     ⎥    (1.27)
        ⎢ R_xx(−M+1)                ...  R_xx(0)    R_xx(1)   ⎥
        ⎣ R_xx(−M)     R_xx(−M+1)   ...  R_xx(−1)   R_xx(0)   ⎦
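For a real, zero-mean, wide-sense stationary sequence, the matrix of equation (1.27) can be estimated from data: each entry (i, j) depends only on the lag i − j, which is what makes the matrix Toeplitz. A minimal sketch (our own function names; the biased estimator divides by n):

```python
import random

def autocorr(x, m):
    """Biased estimate of R_xx(m) = E[x(k+m) x(k)] for a real, zero-mean,
    wide-sense stationary sequence; in the real case R_xx(-m) = R_xx(m)."""
    n, m = len(x), abs(m)
    return sum(x[k + m] * x[k] for k in range(n - m)) / n

def autocorr_matrix(x, M):
    """(M+1) x (M+1) autocorrelation matrix of equation (1.27):
    entry (i, j) is R_xx(i - j), hence the Toeplitz structure."""
    return [[autocorr(x, i - j) for j in range(M + 1)] for i in range(M + 1)]

rng = random.Random(1)
x = [rng.gauss(0.0, 1.0) for _ in range(20000)]  # white noise, unit variance
R = autocorr_matrix(x, 3)

print(R[0][0] == R[1][1] == R[2][2] == R[3][3])  # True: constant main diagonal
print(abs(R[0][0] - 1.0) < 0.05)                 # R_xx(0) close to the variance
```

For white noise the off-diagonal entries are close to zero, so R_M is close to the identity scaled by the variance; for correlated processes these entries carry the structure exploited by the Wiener and Kalman filters mentioned below.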
NOTE.– vector and matrix approaches can often be employed in signal processing. In particular, using autocorrelation matrices and, more generally, intercorrelation matrices, can be effective. This type of matrix plays a role in the development of optimal filters, notably those of Wiener and Kalman. It is also important for implementing the decomposition techniques into signal and noise subspaces used in spectral analysis, speech enhancement, and determining the number of users in a telecommunication cell, to mention a few applications.

1.3. Systems

A system carries out a chain of operations, which consists of processing applied to one or several input signals; it also provides one or several output signals. A system is therefore characterized by several types of variables, described below:
– inputs: depending on the situation, we differentiate between the commands (inputs that the user can change or manipulate) and the driving processes or excitations, which usually are not accessible;
– outputs;
– state variables that provide information on the "state" of the system. By the term "state" we mean the minimal number of parameters, usually stored in a vector, that characterize the evolution of the system when the inputs are known;
– mathematical equations that link input and output variables.
In much the same way as we classify signals, we speak of digital (respectively analog) systems if the inputs and outputs are digital (respectively analog). When we consider continuous physical systems with two inputs and two outputs, the system is a quadrupole. We wish to impose a given variation law on the output according to the input. If the relation between input and output is given in the form of a linear differential equation with constant coefficients, we then have a linear, time-invariant, continuous system.
Depending on the situation, we use physical laws to develop these equations; in electronics, for example, we employ Kirchhoff's laws and Thévenin's and Norton's theorems, among others, to establish them. Later in this text, we will discuss discrete-time systems in more detail. These are systems that transform a discrete-time input signal x(k) into a discrete-time output signal y(k) in the following manner:

x(k) ⇒ y(k) = T[x(k)].    (1.28)
By way of example, y(k) = x(k), y(k) = x(k − 1) and y(k) = x(k + 1) respectively express the identity, the elementary delay and the elementary advance.

1.4. Properties of discrete-time systems

1.4.1. Invariant linear systems

The important features of a system are linearity, temporal shift invariance (or invariance in time) and stability. A system represented by the operator T is termed linear if, ∀x_1, x_2 and ∀a_1, a_2, we have:

T[a_1 x_1(k) + a_2 x_2(k)] = a_1 T[x_1(k)] + a_2 T[x_2(k)].    (1.29)

A system is called time-invariant if the response to an input delayed by l samples is the output delayed by l samples; that is, if

x(k) ⇒ y(k) = T[x(k)], then T[x(k − l)] = y(k − l)    (1.30)

and this holds whatever the input signal x(k) and the temporal shift l. A time-invariant linear system is also called a stationary (or homogeneous) linear filter.

1.4.2. Impulse responses and convolution products

If the input of a system is the unit impulse δ(k), the output is called the impulse response h(k) of the system:

h(k) = T[δ(k)].    (1.31)

Figure 1.9. Impulse response (a linear filter maps δ(k) to h(k))

A useful property of the impulse δ(k) helps us describe any discrete-time signal as the weighted sum of delayed impulses:
x(k) = Σ_{l=−∞}^{+∞} x(l) δ(k − l)    (1.32)

The output of a time-invariant linear system can therefore be expressed in the following form:

y(k) = T[x(k)] = T[ Σ_{l=−∞}^{+∞} x(l) δ(k − l) ] = Σ_{l=−∞}^{+∞} x(l) T[δ(k − l)] = Σ_{l=−∞}^{+∞} x(l) h(k − l).    (1.33)

The output y(k) thus corresponds to the convolution product between the input x(k) and the impulse response h(k):

y(k) = x(k) * h(k) = h(k) * x(k) = Σ_{n=−∞}^{+∞} x(n) h(k − n).    (1.34)

We see that this convolution relation has its own legitimacy; that is, it is not obtained by discretizing the convolution relation of continuous systems. As in the continuous case, we need only two hypotheses to establish this relation: invariance and linearity.

1.4.3. Causality

The filter with impulse response h(k) is causal when the output y(k) remains null as long as the input x(k) is null. This corresponds to the philosophical principle of causality, which states that an effect cannot precede its cause. An invariant linear system is causal only if its output at every instant k, that is y(k), depends solely on the present and past inputs (x(k), x(k − 1), and so on). Given the relation in equation (1.34), its impulse response satisfies the following condition:

h(k) = 0 for k < 0    (1.35)

A filter with impulse response h(k) is termed anti-causal when the filter with impulse response h(−k) is causal; that is, it becomes causal after time reversal. The output of rank k then depends only on the inputs of rank greater than or equal to k.
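The convolution sum (1.34) can be implemented directly for finite causal sequences; a minimal sketch (our own function name), checked against the identity filter h(k) = δ(k) and the elementary delay h(k) = δ(k − 1):

```python
def convolve(x, h):
    """Convolution sum y(k) = sum_n x(n) h(k - n) of equation (1.34),
    for finite causal sequences given as lists starting at k = 0."""
    y = [0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for m, hm in enumerate(h):
            y[n + m] += xn * hm
    return y

x = [1, 2, 3]
# The unit impulse is the identity element for convolution: x * delta = x.
print(convolve(x, [1]))     # [1, 2, 3]
# h(k) = delta(k - 1) delays the input by one sample.
print(convolve(x, [0, 1]))  # [0, 1, 2, 3]
```

Both test filters are causal, in agreement with condition (1.35).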
1.4.4. Interconnections of discrete-time systems

Discrete-time systems can be interconnected either in cascade (series) or in parallel to obtain new systems. These are represented, respectively, in Figures 1.10 and 1.11.

Figure 1.10. Interconnection in series (x(k) → h_1(k) → s(k) → h_2(k) → y(k))

For the interconnection in series, the impulse response of the resulting system is h(k) = h_1(k) * h_2(k). Thus, thanks to the associativity of the convolution *, we have:

y(k) = h_2(k) * s(k)
     = h_2(k) * (h_1(k) * x(k))
     = (h_2(k) * h_1(k)) * x(k) = (h_1(k) * h_2(k)) * x(k) = h(k) * x(k).

Figure 1.11. Interconnection in parallel (x(k) feeds h_1(k) and h_2(k); their outputs s_1(k) and s_2(k) are summed to give y(k))

For the interconnection in parallel, the impulse response of the resulting system is h(k) = h_1(k) + h_2(k). So we have:

y(k) = s_1(k) + s_2(k)
     = h_1(k) * x(k) + h_2(k) * x(k)
     = [h_1(k) + h_2(k)] * x(k) = h(k) * x(k).
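Both interconnection rules are easy to verify numerically on finite sequences; a sketch (our own helper names, arbitrary coefficient values):

```python
def convolve(x, h):
    """Convolution sum for finite causal sequences (lists starting at k = 0)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for m, hm in enumerate(h):
            y[n + m] += xn * hm
    return y

def add_seq(a, b):
    """Term-by-term sum of two sequences, zero-padding the shorter one."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(n)]

def close(a, b):
    return len(a) == len(b) and all(abs(p - q) < 1e-12 for p, q in zip(a, b))

x = [1.0, -2.0, 3.0, 0.0, 1.0]
h1 = [1.0, 0.5]
h2 = [2.0, -1.0, 0.25]

# Series: the equivalent impulse response is h1 * h2.
series = convolve(convolve(h1, h2), x)
print(close(series, convolve(h2, convolve(h1, x))))  # True

# Parallel: the equivalent impulse response is h1 + h2.
parallel = convolve(add_seq(h1, h2), x)
print(close(parallel, add_seq(convolve(h1, x), convolve(h2, x))))  # True
```

The series check also illustrates that the order of the two filters in the cascade does not matter, since convolution is commutative.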
Chapter 2

Discrete System Analysis

2.1. Introduction

The study of discrete-time signals is based on the z-transform, which we will discuss in this chapter. Its properties make it very useful for studying linear, time-invariant systems. This chapter is organized as follows. First, we will study discrete, time-invariant linear systems based on the z-transform, which plays a role similar to that of the Laplace transform in continuous systems. We will present the definition of this transform, as well as its main properties; then we will discuss the inverse z-transform. From a given z-transform, we will present different methods of determining the corresponding discrete-time signal. Lastly, the concepts of transfer functions and difference equations will be covered. We also provide a table of z-transforms.

2.2. The z-transform

2.2.1. Representations and summaries

With analog systems, the Laplace transform X_s(s) related to a continuous function x(t) is a function of a complex variable s and is defined by:

X_s(s) = ∫_{−∞}^{+∞} x(t) e^{−st} dt.    (2.1)

Chapter written by Mohamed NAJIM and Eric GRIVEL.
This transform exists when the real part of the complex variable s satisfies the relation:

r < Re(s) < R,    (2.2)

where r and R (possibly r = −∞ and R = +∞) characterize the region of existence of X_s(s). The Laplace transform helps resolve linear differential equations with constant coefficients by transforming them into algebraic products. Similarly, we introduce the z-transform when studying discrete-time signals.

Let {x(k)} be a real sequence. The bilateral or two-sided z-transform X_z(z) of the sequence {x(k)} is defined as follows:

X_z(z) = Σ_{k=−∞}^{+∞} x(k) z^{−k},    (2.3)

where z is a complex variable. The relation (2.3) is sometimes called the direct z-transform, since it makes it possible to transform the time-domain signal {x(k)} into its representation in the complex plane. The z-transform only exists for the values of z that enable the series to converge, that is, for the values of z such that X_z(z) has a finite value. The set of all values of z satisfying this property is called the region of convergence (ROC).

DEMONSTRATION 2.1.– we know that the absolute convergence of a series implies the convergence of the series. By applying the Cauchy root criterion, the series Σ_{k=0}^{+∞} x(k) converges absolutely if:

lim_{k→+∞} |x(k)|^{1/k} < 1.

The series diverges if lim_{k→+∞} |x(k)|^{1/k} > 1. If lim_{k→+∞} |x(k)|^{1/k} = 1, we cannot conclude on the convergence.
From this, let us express X_z(z) as follows:

X_z(z) = Σ_{k=−∞}^{+∞} x(k) z^{−k} = Σ_{k=−∞}^{−1} x(k) z^{−k} + Σ_{k=0}^{+∞} x(k) z^{−k}.

The series Σ_{k=−∞}^{−1} x(k) z^{−k} converges absolutely if:

lim_{k→+∞} |x(−k) z^{k}|^{1/k} < 1,

or if:

|z| < 1 / lim_{k→+∞} |x(−k)|^{1/k}.

As well, the series Σ_{k=0}^{+∞} x(k) z^{−k} converges absolutely if:

lim_{k→+∞} |x(k) z^{−k}|^{1/k} < 1,

or if:

lim_{k→+∞} |x(k)|^{1/k} < |z|.

If we write λ_max = 1 / lim_{k→+∞} |x(−k)|^{1/k} and λ_min = lim_{k→+∞} |x(k)|^{1/k}, the z-transform X_z(z) converges if:

0 ≤ λ_min < |z| < λ_max.

The quantities λ_min and λ_max now characterize the region of convergence (ROC) of the series X_z(z). The series Σ_{k=−∞}^{+∞} x(k) z^{−k} diverges strictly outside the ROC. We should remember that the region of convergence may be empty, as is the case, for instance, for the two-sided sequence x(k) = (k + 1)², since the causal part requires |z| > 1 while the anti-causal part requires |z| < 1.
We can also define, especially for causal sequences, the one-sided (monolateral) z-transform of the sequence {x(k)}:

X_z(z) = Σ_{k=0}^{+∞} x(k) z^{−k} with λ_min < |z|.

DEMONSTRATION 2.2.– to establish the absolute convergence of the series, we can use an approach different from the one previously shown for the bilateral transform. It is based on d'Alembert's ratio test, which relates two consecutive samples of the analyzed discrete-time signal. We know that if the sequence |x(k+1) / x(k)| converges towards a limit L that is strictly inferior to 1, the absolute convergence of Σ_{k=0}^{+∞} x(k) is assured. If we apply this test to the z-transform, we get:

lim_{k→+∞} |x(k+1) z^{−k−1}| / |x(k) z^{−k}| = |z|^{−1} lim_{k→+∞} |x(k+1) / x(k)| < 1,

which gives us:

|z| > lim_{k→+∞} |x(k+1) / x(k)| = λ_min.

The ROC corresponds to all points in the complex plane outside the central disk of radius λ_min. For discrete-time causal signals, such that x(k) = 0 for k < 0, the one-sided (or unilateral) and the bilateral z-transforms reduce to the same expression:

X_z(z) = Σ_{k=−∞}^{+∞} x(k) z^{−k} = Σ_{k=0}^{+∞} x(k) z^{−k} with λ_min < |z|.
Now let us look at two examples of z-transforms.

EXAMPLE 2.1.– the unit step signal u(k) is defined by u(k) = 0 for k < 0 and u(k) = 1 for k ≥ 0. Its z-transform is written U_z(z) = Σ_{k=0}^{+∞} z^{−k}. The convergence is assured for |z| > 1, and we get the closed-form expression of the z-transform:

U_z(z) = 1 / (1 − z^{−1}) = z / (z − 1) with |z| > 1.

EXAMPLE 2.2.– here we assume that the signal x(k) is given by:

x(k) = α^{|k|} with |α| < 1.

We then get:

X_z(z) = Σ_{k=−∞}^{+∞} α^{|k|} z^{−k} = Σ_{k=0}^{+∞} α^{k} z^{−k} + Σ_{k=−∞}^{−1} α^{−k} z^{−k}.

The absolute convergence of the two series is assured for |α| < |z| < 1/|α|. We then have:

X_z(z) = 1 / (1 − α z^{−1}) + α z / (1 − α z), with |α| < |z| < 1/|α|.

When the signal is causal, we have x(k) = α^{k} for k ≥ 0 and x(k) = 0 for k < 0. Its z-transform then equals:

X_z(z) = 1 / (1 − α z^{−1}) with |α| < |z|.
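The closed form of the causal case of Example 2.2 can be checked numerically by truncating the defining series; a small sketch (our own function name, arbitrary parameter values):

```python
def Xz_causal_partial(alpha, z, N):
    """Truncated sum of the series sum_{k>=0} alpha^k z^{-k}
    (Example 2.2, causal case)."""
    return sum((alpha / z) ** k for k in range(N))

alpha = 0.5
z = 1.2          # inside the ROC, since |z| > |alpha|
closed_form = 1.0 / (1.0 - alpha / z)   # 1 / (1 - alpha z^{-1})
print(abs(Xz_causal_partial(alpha, z, 200) - closed_form) < 1e-12)  # True

# Outside the ROC (|z| < |alpha|) the terms grow and the partial sums blow up:
print(Xz_causal_partial(alpha, 0.4, 100) > 1e8)  # True
```

The second call illustrates why the ROC matters: the same formal series simply does not converge there.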
Figure 2.1. Representation of x(k) = α^|k| and of the ROC of its z-transform X_z(z) (the ring |α| < |z| < 1/|α|)
Figure 2.2. Representation of the causal signal x(k) = α^|k| u(k) and of the ROC of its z-transform X_z(z) (the exterior of the disk |z| ≤ |α|)
2.2.2. Properties of the z-transform

2.2.2.1. Linearity

The z-transform is linear. Indeed, for two sequences {x_1(k)} and {x_2(k)}, and ∀a_1, a_2, we have:

Z[a_1 x_1(k) + a_2 x_2(k)] = a_1 Z[x_1(k)] + a_2 Z[x_2(k)]    (2.4)

where Z[.] represents the operator "z-transform". This result is valid provided the intersection of the ROCs is not empty.

DEMONSTRATION 2.3.–

Z[a_1 x_1(k) + a_2 x_2(k)] = Σ_k (a_1 x_1(k) + a_2 x_2(k)) z^{−k}
 = a_1 Σ_k x_1(k) z^{−k} + a_2 Σ_k x_2(k) z^{−k}
 = a_1 Z[x_1(k)] + a_2 Z[x_2(k)].

The ROC of a sum of transforms then corresponds to the intersection of the individual ROCs.

EXAMPLE 2.3.– the linearity property can be exploited in the calculation of the z-transform of the discrete hyperbolic sine x(k) = sh(k) u(k):

Z[sh(k) u(k)] = Σ_{k=0}^{+∞} (exp(k) − exp(−k))/2 · z^{−k}
 = (1/2) [ Σ_{k=0}^{+∞} exp(k) z^{−k} − Σ_{k=0}^{+∞} exp(−k) z^{−k} ].

The ROC is given by |exp(1) z^{−1}| < 1 and |exp(−1) z^{−1}| < 1, so |z| > exp(1). Hence:

Z[sh(k) u(k)] = (1/2) [ 1 / (1 − exp(1) z^{−1}) − 1 / (1 − exp(−1) z^{−1}) ]
 = sh(1) z^{−1} / (1 − 2 ch(1) z^{−1} + z^{−2}) for |z| > exp(1).
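The closed form of Example 2.3 can be verified numerically by truncating the one-sided series at a point inside the ROC; a sketch with our own function name:

```python
import math

def Z_partial(x, z, N):
    """Truncated one-sided z-transform: sum_{k=0}^{N-1} x(k) z^{-k}."""
    return sum(x(k) * z ** (-k) for k in range(N))

z = 3.0  # inside the ROC, since 3 > exp(1)
closed = math.sinh(1.0) * z**-1 / (1.0 - 2.0 * math.cosh(1.0) * z**-1 + z**-2)
numeric = Z_partial(math.sinh, z, 400)
print(abs(numeric - closed) < 1e-9)  # True
```

Convergence is slow here because the ratio exp(1)/z is close to 1, which is why a fairly large truncation order is used.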
2.2.2.2. Advance and delay operators

Let X_z(z) be the z-transform of the discrete-time signal {x(k)}. The z-transform of {x(k − m)} is:

Z[x(k − m)] = z^{−m} Z[x(k)] = z^{−m} X_z(z)    (2.5)

Delaying the signal by m steps thus brings about a multiplication by z^{−m} in the z-domain. The operator z^{−1} is called the elementary delay operator, or simply the delay operator. With filters, we often see the following representation:

Figure 2.3. Unit delay operator (x(k), with transform X_z(z), passes through z^{−1} to give x(k − 1), with transform z^{−1} X_z(z))

Usually, the ROC is not modified, except possibly at the origin and at infinity.

DEMONSTRATION 2.4.– by definition, Z[x(k − m)] = Σ_{k=−∞}^{+∞} x(k − m) z^{−k}. By the change of variable n = k − m, we get:

Z[x(k − m)] = Σ_{n=−∞}^{+∞} x(n) z^{−(n+m)} = z^{−m} Σ_{n=−∞}^{+∞} x(n) z^{−n} = z^{−m} Z[x(k)].

Advancing the signal by m steps leads to a multiplication of the transform by z^{m}. The operator z is called the unit advance operator or, more simply, the advance operator. The following representation shows this.

Figure 2.4. Unit advance operator (x(k), with transform X_z(z), passes through z to give x(k + 1), with transform z X_z(z))

EXAMPLE 2.4.– now let us look at the z-transform of the discrete-time exponential signal x(k) = e^{−αk} for k ≥ 0, x(k) = 0 for k < 0, and of y(k) = x(k − m), where m is a natural integer.
X_z(z) = Z[e^{−αk}] = 1 / (1 − e^{−α} z^{−1}) for |z| > e^{−α}, and

Y_z(z) = z^{−m} X_z(z) = z^{−m} / (1 − e^{−α} z^{−1}).

2.2.2.3. Convolution

We know that the convolution between two discrete causal sequences {x_1(k)} and {x_2(k)} verifies the following relation:

x_1(k) * x_2(k) = Σ_{n=0}^{+∞} x_1(n) x_2(k − n) = Σ_{n=0}^{k} x_1(n) x_2(k − n)    (2.6)

The z-transform of the convolution product of the two sequences is then the simple product of the z-transforms of the two sequences:

Z[x_1(k) * x_2(k)] = Z[x_1(k)] · Z[x_2(k)]    (2.7)

The ROC of the convolution product is the intersection of the ROCs of the z-transforms of {x_1(k)} and {x_2(k)}. We see that this result is very often used in studying time-invariant linear systems since, as we saw in equation (1.34), the response of a system corresponds to the convolution product of its impulse response with the input signal.

DEMONSTRATION 2.5.– since Z[x_1(k)] = Σ_{k=0}^{+∞} x_1(k) z^{−k} and Z[x_2(k)] = Σ_{k=0}^{+∞} x_2(k) z^{−k}, the product Z[x_1(k)] Z[x_2(k)] can be written as:

Z[x_1(k)] Z[x_2(k)] = x_1(0)x_2(0) + [x_1(0)x_2(1) + x_1(1)x_2(0)] z^{−1} + ...
 = Σ_{k=0}^{+∞} [ Σ_{m=0}^{k} x_1(m) x_2(k − m) ] z^{−k}
 = Σ_{k=0}^{+∞} (x_1 * x_2)(k) z^{−k} = Z[x_1(k) * x_2(k)],

on the condition that the intersection of the ROCs of the two series is not empty.
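For finite sequences, equation (2.7) can be checked at any nonzero value of z, since a finite sum in z^{−1} converges everywhere except at z = 0; a sketch (our own function names, arbitrary sequences and evaluation point):

```python
def convolve(x1, x2):
    """Convolution sum of equation (2.6) for finite causal sequences."""
    y = [0.0] * (len(x1) + len(x2) - 1)
    for n, a in enumerate(x1):
        for m, b in enumerate(x2):
            y[n + m] += a * b
    return y

def Z(x, z):
    """z-transform of a finite causal sequence: a polynomial in z^{-1}."""
    return sum(xk * z ** (-k) for k, xk in enumerate(x))

x1 = [1.0, 2.0, 0.0, -1.0]
x2 = [3.0, 1.0, 4.0]
z = 1.5 + 0.5j

lhs = Z(convolve(x1, x2), z)       # Z[x1 * x2]
rhs = Z(x1, z) * Z(x2, z)          # Z[x1] . Z[x2]
print(abs(lhs - rhs) < 1e-12)      # True: equation (2.7)
```

This is the numerical counterpart of the fact that multiplying two polynomials in z^{−1} is exactly the convolution of their coefficient sequences.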
2.2.2.4. Change of scale in z

Let X_z(z) be the z-transform of the discrete-time signal {x(k)}. For a given constant a, real or complex, the z-transform of {a^{k} x(k)} is:

Z[a^{k} x(k)] = X_z(a^{−1} z) with |a| λ_min ≤ |z| ≤ |a| λ_max    (2.8)

DEMONSTRATION 2.6.–

Z[a^{k} x(k)] = Σ_{k=−∞}^{+∞} a^{k} x(k) z^{−k} = Σ_{k=−∞}^{+∞} x(k) (a^{−1} z)^{−k} = X_z(a^{−1} z).

The ROC is then |a| λ_min ≤ |z| ≤ |a| λ_max.

2.2.2.5. Time reversal of the signal

Let X_z(z) be the z-transform of the discrete-time signal {x(k)}, with λ_min < |z| < λ_max. We now consider the sequence {y(k)} = {x(−k)}. The z-transform of {y(k)} then equals:

Y_z(z) = X_z(z^{−1}).    (2.9)

DEMONSTRATION 2.7.–

Y_z(z) = Σ_{k=−∞}^{+∞} x(−k) z^{−k} = Σ_{k=−∞}^{+∞} x(k) z^{k} = X_z(z^{−1}).

The region of convergence is then written as 1/λ_max < |z| < 1/λ_min.

2.2.2.6. Derivation of the z-transform

By differentiating the z-transform with respect to z^{−1} and then multiplying by z^{−1}, we obtain the following characteristic result:

z^{−1} dX_z(z)/dz^{−1} = z^{−1} Σ_{k=−∞}^{+∞} k x(k) (z^{−1})^{k−1} = Σ_{k=−∞}^{+∞} k x(k) z^{−k} = Z[k x(k)]    (2.10)
EXAMPLE 2.5.– now we look at the z-transform of the following discrete-time causal signal:

x(k) = 5k δ(k − 3) + 3k δ(k − 4) = 15 δ(k − 3) + 12 δ(k − 4).

We can easily show that the z-transform of δ(k) equals 1 for all values of z. By using the linearity, advance and delay properties, we find that:

Z[5 δ(k − 3) + 3 δ(k − 4)] = 5 z^{−3} + 3 z^{−4} for all values of z.

From this, using the derivation property (2.10),

X_z(z) = z^{−1} d(5 z^{−3} + 3 z^{−4}) / dz^{−1} = 15 z^{−3} + 12 z^{−4}.

2.2.2.7. The sum theorem

If 1 lies inside the ROC, we easily find that:

Σ_{k=−∞}^{+∞} x(k) = lim_{z→1} X_z(z)    (2.11)

2.2.2.8. The final-value theorem

Here we look at two sequences {x(k)} and {y(k)} such that y(k) = x(k + 1) − x(k), supposing the absolute convergence of the series Σ_{k=−∞}^{+∞} y(k). From the sum theorem we get Σ_{k=−∞}^{+∞} y(k) = lim_{z→1} Y_z(z). Now, we know that Y_z(z) = (z − 1) X_z(z), and, by construction, Σ_{k=−∞}^{+∞} y(k) = lim_{k→+∞} x(k) − lim_{k→−∞} x(k). From there, if lim_{k→−∞} x(k) = 0, we have:

lim_{k→+∞} x(k) = lim_{z→1} (z − 1) X_z(z).

2.2.2.9. Complex conjugation

Here we consider two sequences {x(k)} and {y(k)} such that y(k) = x*(k). Then:

Y_z(z) = Σ_{k=−∞}^{+∞} x*(k) z^{−k} = ( Σ_{k=−∞}^{+∞} x(k) (z*)^{−k} )* = [X_z(z*)]*    (2.12)
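Returning to Example 2.5, the derivation property (2.10) can be checked numerically by differentiating with respect to w = z^{−1} via a central difference; a sketch with our own names:

```python
# w stands for z^{-1}.  From Example 2.5, Z[5 delta(k-3) + 3 delta(k-4)] = 5 w^3 + 3 w^4,
# and equation (2.10) predicts Z[k x(k)] = w * dX/dw = 15 w^3 + 12 w^4.
def X(w):
    return 5.0 * w**3 + 3.0 * w**4

def deriv_property(w, h=1e-6):
    """Left-hand side of equation (2.10): z^{-1} times dX/dz^{-1},
    with the derivative approximated by a central difference."""
    return w * (X(w + h) - X(w - h)) / (2.0 * h)

w = 0.7
expected = 15.0 * w**3 + 12.0 * w**4   # closed form from the example
print(abs(deriv_property(w) - expected) < 1e-6)  # True
```

The same check works at any w, since here X is a polynomial converging for every z except z = 0.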
- 49. 33

2.2.2.10. Parseval’s theorem

    (1/2πj) ∮_C X_z(z) X_z(z^{−1}) z^{−1} dz = Σ_{k=0}^{+∞} x²(k)    (2.13)

provided that X_z(z) converges on an open ring containing the unit circle. The energy does not depend on the representation mode, whether it is temporal or in the z-domain.

2.2.3. Table of standard transforms

    x(k)                                   X_z(z)
    δ(k)                                   1
    δ(k − m)                               z^{−m}
    u(k)  (u(k) = 0 for k < 0,
           u(k) = 1 for k ≥ 0)             z/(z − 1)
    k u(k)                                 z/(z − 1)²
    k² u(k)                                z(z + 1)/(z − 1)³
    k³ u(k)                                z(z² + 4z + 1)/(z − 1)⁴
    k⁴ u(k)                                z(z³ + 11z² + 11z + 1)/(z − 1)⁵
    α^{|k|}, with |α| < 1                  1/(1 − αz^{−1}) + 1/(1 − αz) − 1
    k α^k u(k)                             αz/(z − α)²
    k² α^k u(k)                            αz(z + α)/(z − α)³
    k³ α^k u(k)                            αz(z² + 4αz + α²)/(z − α)⁴
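Entries of Table 2.1 can be verified by truncated summation of the defining series, provided the test point lies in the ROC. The sketch below checks the entries for k u(k) and k α^k u(k); the truncation length N, the test point z0 and the value of α are illustration choices:

```python
# Partial-sum check of two table entries:
#   Z{k u(k)}        = z/(z-1)^2        for |z| > 1
#   Z{k alpha^k u(k)} = alpha*z/(z-alpha)^2  for |z| > |alpha|
# The geometric decay of the summand makes the truncation error negligible.

N, z0, alpha = 2000, 2.0 + 0.5j, 0.6

s1 = sum(k * z0 ** (-k) for k in range(N))
s2 = sum(k * alpha ** k * z0 ** (-k) for k in range(N))

print(abs(s1 - z0 / (z0 - 1) ** 2) < 1e-9)               # True
print(abs(s2 - alpha * z0 / (z0 - alpha) ** 2) < 1e-9)   # True
```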
- 50. 34

    k⁴ α^k u(k)                            αz(z³ + 11αz² + 11α²z + α³)/(z − α)⁵
    sin(ω₀kT_s) u(k)                       z sin(ω₀T_s)/(z² − 2z cos(ω₀T_s) + 1)
    cos(ω₀kT_s) u(k)                       z[z − cos(ω₀T_s)]/(z² − 2z cos(ω₀T_s) + 1)
    α^k sin(ω₀kT_s) u(k)                   αz sin(ω₀T_s)/(z² − 2αz cos(ω₀T_s) + α²)
    α^k cos(ω₀kT_s) u(k)                   z[z − α cos(ω₀T_s)]/(z² − 2αz cos(ω₀T_s) + α²)
    k α^k sin(ω₀kT_s) u(k)                 αz(z − α)(z + α) sin(ω₀T_s)/(z² − 2αz cos(ω₀T_s) + α²)²
    k α^k cos(ω₀kT_s) u(k)                 αz[(z² + α²) cos(ω₀T_s) − 2αz]/(z² − 2αz cos(ω₀T_s) + α²)²
    [1 − cos(ω₀kT_s)] u(k)                 z/(z − 1) − z[z − cos(ω₀T_s)]/(z² − 2z cos(ω₀T_s) + 1)
    [1 − (1 + akT_s) e^{−akT_s}] u(k)      z/(z − 1) − z/(z − e^{−aT_s}) − aT_s e^{−aT_s} z/(z − e^{−aT_s})²
    e^{−akT_s} sin(ω₀kT_s) u(k)            z e^{−aT_s} sin(ω₀T_s)/(z² − 2z e^{−aT_s} cos(ω₀T_s) + e^{−2aT_s})
    e^{−akT_s} cos(ω₀kT_s) u(k)            z[z − e^{−aT_s} cos(ω₀T_s)]/(z² − 2z e^{−aT_s} cos(ω₀T_s) + e^{−2aT_s})

Table 2.1. z-transforms of specific signals

2.3. The inverse z-transform

2.3.1. Introduction

The purpose of this section is to present the methods that help us find the expression of a discrete-time signal from its z-transform. This often presents problems that can be difficult to resolve. Applying the residue theorem helps to determine the sequence {x(k)}, but the calculation can be long and cumbersome.
- 51. 35

So in practice we tend to use simpler methods, notably those based on expansion by division according to increasing powers of z^{−1}, which amounts to a decomposition of the system into subsystems. Nearly all the z-transforms that we encounter in filtering are, in effect, rational fractions.

2.3.2. Methods of determining inverse z-transforms

2.3.2.1. Cauchy’s theorem: a case of complex variables

If we acknowledge that, in the ROC, the z-transform of {x(k)}, written X_z(z), has a Laurent series expansion, we have:

    X_z(z) = Σ_{k=0}^{+∞} τ_k z^{−k} + Σ_{k=−∞}^{−1} υ_k z^{−k}

The coefficients τ_k and υ_k are the values of the discrete sequence {x(k)} that are to be determined. They can be obtained by calculating the integral

    x(k) = (1/2πj) ∮_C X_z(z) z^{k−1} dz

(where C is a closed contour in the interior of the ROC) by the residue method, or equivalently:

    x(k) = (1/2π) ∫₀^{2π} X_z(ρe^{jφ}) ρ^{k−1} e^{j(k−1)φ} ρe^{jφ} dφ = (1/2π) ∫₀^{2π} X_z(ρe^{jφ}) ρ^k e^{jkφ} dφ

where ρ belongs to the ROC.

DEMONSTRATION 2.8.– let us look at a discrete-time causal signal {x(k)} of z-transform X_z(z). We have, by definition:

    X_z(z) = Σ_{n=0}^{+∞} x(n) z^{−n},  so that  X_z(z) z^{k−1} = Σ_{n=0}^{+∞} x(n) z^{−n+k−1}.

By integrating these equalities along a closed contour C in the interior of the region of convergence of the transform X_z(z), turning around 0 once in the positive direction, and noting that ∮_C z^{−n+k−1} dz equals 2πj when n = k and 0 otherwise, we get:

    ∮_C X_z(z) z^{k−1} dz = Σ_{n=0}^{+∞} x(n) ∮_C z^{−n+k−1} dz = 2πj x(k)
- 52. 36

By writing z in the form z = ρe^{jφ}, we easily arrive at:

    x(k) = (1/2π) ∫₀^{2π} X_z(ρe^{jφ}) ρ^k e^{jkφ} dφ

Now, by the residue theorem, the contour integral corresponds to the sum of the residues of X_z(z) z^{k−1} at the poles surrounded by C:

    (1/2πj) ∮_C X_z(z) z^{k−1} dz = Σ_{poles surrounded by C} Res[X_z(z) z^{k−1}]

Reminders: when p_n is an r-th order pole of the expression X_z(z) z^{k−1}, we can express X_z(z) z^{k−1} in the form of a rational fraction of the type N(z)/(z − p_n)^r. The residue taken at p_n is then equal to:

    Res[X_z(z) z^{k−1}]_{p_n} = 1/(r−1)! × [d^{r−1}N(z)/dz^{r−1}]_{z=p_n}

With a pole of order of multiplicity 1, the expression reduces to:

    Res[X_z(z) z^{k−1}]_{p_n} = N(p_n)

EXAMPLE 2.6.– we determine the discrete-time causal signal whose z-transform equals X_z(z) = z/(z − e^{−2}):

    x(k) = (1/2πj) ∮_C z^k/(z − e^{−2}) dz  for k ≥ 0.

Calculating this integral involves the single pole e^{−2}, of order of multiplicity 1. From this we get:

    x(k) = Res[z^k/(z − e^{−2})]_{z=e^{−2}} = e^{−2k}  for k ≥ 0
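The contour-integral formula used in Example 2.6 can be evaluated numerically: sampling the circle z = ρe^{jφ} turns the integral into a simple average. The radius ρ = 1 (which lies in the ROC |z| > e^{−2}) and the number of quadrature points M are illustration choices:

```python
import cmath
import math

# Numerical version of x(k) = (1/2*pi) * integral of X(rho*e^{j*phi}) rho^k e^{j*k*phi} d(phi)
# for X(z) = z/(z - e^{-2}) from Example 2.6; the rectangle rule on a
# periodic integrand is extremely accurate here.

def X(z):
    return z / (z - math.exp(-2))

def inverse_zt(k, rho=1.0, M=512):
    acc = 0.0 + 0.0j
    for m in range(M):
        phi = 2 * math.pi * m / M
        z = rho * cmath.exp(1j * phi)
        acc += X(z) * z ** k      # X(z) z^k, since dz = j z dphi
    return (acc / M).real

print(round(inverse_zt(1), 6))    # 0.135335, i.e. e^{-2}
```

The values recovered for k = 0, 1, 2, … match the closed form x(k) = e^{−2k} of the example.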
- 53. 37

2.3.2.2. Development in rational fractions

With linear systems, the expression of the z-transform is presented in the form of a rational fraction, so we can decompose X_z(z) into basic elements. Let X_z(z) = N(z)/D(z). The decomposition into basic elements expresses X_z(z) in the following form:

    X_z(z) = Σ_{i=1}^{r} Σ_{j=1}^{β_i} α_{i,j}/(1 − a_i z^{−1})^j

where r is the number of distinct poles of X_z(z) and β_i the multiplicity order of the complex pole a_i. We then get:

    α_{i,j} = 1/(β_i − j)! × [∂^{β_i−j}/∂z^{β_i−j} ((z − a_i)^{β_i} N(z)/D(z))]_{z=a_i}

The z-transform is written as a linear combination of simple fractions of order 1 or 2, whose inverse transforms are easily determined.

EXAMPLE 2.7.– Let

    X_z(z) = (3z³ − 12z² + 11z)/[(z − 1)(z − 2)(z − 3)]

We then write that:

    X_z(z) = 1/(1 − z^{−1}) + 1/(1 − 2z^{−1}) + 1/(1 − 3z^{−1})

from which the inverse transform corresponds to:

    x(k) = (1 + 2^k + 3^k) u(k)
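The partial-fraction decomposition of Example 2.7 can be checked numerically: each term 1/(1 − a z^{−1}) is the z-transform of a^k u(k), so summing the series of the claimed x(k) at a point well inside the ROC (|z| > 3) must reproduce X_z(z). The test point z0 and truncation length are illustration choices:

```python
# Check of Example 2.7: X_z(z) = sum of 1/(1 - a z^{-1}) for a = 1, 2, 3
# must equal Z{(1 + 2^k + 3^k) u(k)}, evaluated at z0 with |z0| > 3.

z0 = 5.0   # inside the ROC of the causal expansion

lhs = sum(1.0 / (1.0 - a / z0) for a in (1, 2, 3))             # partial fractions
rhs = sum((1 + 2 ** k + 3 ** k) * z0 ** (-k) for k in range(200))  # series of x(k)

print(abs(lhs - rhs) < 1e-9)  # True
```

The dominant ratio of the truncated series is 3/5, so 200 terms are far more than enough.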
- 54. 38

[Figure 2.5 shows the system of Example 2.7 decomposed into three parallel subsystems z/(z − 1), z/(z − 2) and z/(z − 3), whose outputs are summed to give X_z(z).]

Figure 2.5. Decomposition into subsystems of the system represented by X_z(z)

EXAMPLE 2.8.– here, our purpose is to find the inverse z-transform of X_z(z) given by X_z(z) = 3/(1 − 3z^{−1} + 2z^{−2}) for |z| > 2. The decomposition into basic elements allows us to express X_z(z) as follows:

    X_z(z) = 3/(1 − 3z^{−1} + 2z^{−2}) = 3/[(1 − z^{−1})(1 − 2z^{−1})] = 6/(1 − 2z^{−1}) − 3/(1 − z^{−1})

from which x(k) = 3 × (2^{k+1} − 1) u(k).

2.3.2.3. Development by algebraic division of polynomials

When the expression of the z-transform appears in the form of a rational fraction, X_z(z) = N(z)/D(z), we can also obtain an approximate development by carrying out the polynomial division of N(z) by D(z), on condition that the ROC contains 0 or infinity. The division is done according to the positive powers of z if the convergence region contains 0, and according to the negative powers of z if the convergence region contains infinity.
- 55. 39

EXAMPLE 2.9.– let X_z(z) = 1/(1 − 0.9z^{−1}), which corresponds to the expression of a transfer function used for voice signal analysis. Since the ROC contains infinity, we carry out the polynomial division according to the negative powers of z:

    1           = (1 − 0.9z^{−1}) × 1          + 0.9z^{−1}
    0.9z^{−1}   = (1 − 0.9z^{−1}) × 0.9z^{−1}  + 0.81z^{−2}
    0.81z^{−2}  = (1 − 0.9z^{−1}) × 0.81z^{−2} + 0.729z^{−3}
    …

We obtain:

    X_z(z) ≈ 1 + 0.9z^{−1} + 0.81z^{−2} + 0.729z^{−3} + …

The corresponding sequence is given by x(0) = 1, x(1) = 0.9, x(2) = 0.81, x(3) = 0.729, and in general x(k) = 0.9^k.

2.4. Transfer functions and difference equations

2.4.1. The transfer function of a continuous system

A continuous linear system whose input is x(t) produces a response y(t). This system is governed by a linear differential equation with constant coefficients that links x(t) and y(t). The most general expression of this differential equation is of the form:

    a₀ y(t) + a₁ dy(t)/dt + … + a_p d^p y(t)/dt^p = b₀ x(t) + b₁ dx(t)/dt + … + b_q d^q x(t)/dt^q    (2.14)

By assuming that x(t) = y(t) = 0 for t < 0, we will show that if we apply the Laplace transform to the differential equation (2.14), we obtain an explicit relation between the Laplace transforms of x(t) and y(t).
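The hand division of Example 2.9 can be automated. The sketch below performs long division in increasing powers of z^{−1}; coefficient lists are ordered [c₀, c₁, …] for c₀ + c₁z^{−1} + …, and the number of terms requested is an illustration choice:

```python
# Long division of N by D in increasing powers of z^{-1}, reproducing
# the expansion of 1/(1 - 0.9 z^{-1}) from Example 2.9.

def long_division(num, den, n_terms):
    num = list(num) + [0.0] * n_terms   # working remainder, padded
    out = []
    for _ in range(n_terms):
        q = num[0] / den[0]             # next quotient coefficient
        out.append(q)
        for i, d in enumerate(den):
            num[i] -= q * d             # subtract q * den from remainder
        num.pop(0)                      # shift to the next power of z^{-1}
    return out

coeffs = long_division([1.0], [1.0, -0.9], 5)
print([round(c, 4) for c in coeffs])    # [1.0, 0.9, 0.81, 0.729, 0.6561]
```

The successive quotient coefficients are exactly the samples x(k) = 0.9^k read off in the example.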
- 56. 40

Since:

    L[d^n y(t)/dt^n] = s^n Y(s)    (2.15)

and:

    L[d^n x(t)/dt^n] = s^n X(s),    (2.16)

we get:

    (a₀ + a₁s + … + a_p s^p) Y(s) = (b₀ + b₁s + … + b_q s^q) X(s)    (2.17)

The ratio of the Laplace transforms of the output and input of the system gives the system transmittance, or what we can term the transfer function. It equals:

    H_s(s) = Y(s)/X(s) = (b₀ + b₁s + … + b_q s^q)/(a₀ + a₁s + … + a_p s^p)    (2.18)

This means that whatever the nature of the input (unit sample sequence, unit step signal, unit ramp signal), we can easily obtain the Laplace transform of the output:

    Y(s) = H_s(s) X(s)    (2.19)

The frequency response of the system can then be analyzed by using Bode’s, Nyquist’s or Black’s diagrams. Bode’s representation consists of two plots, one for the amplitude (or gain) and one for the phase: they respectively plot the modulus, in logarithmic scale, and the argument of the transfer function as functions of frequency. The Nyquist diagram plots the ensemble of points of H_s(jω), with Re[H_s(jω)] as abscissa and Im[H_s(jω)] as ordinate. Lastly, Black’s diagram plots the ensemble of points defined by Arg[H_s(jω)] as abscissa and |H_s(jω)|, in logarithmic scale, as ordinate. Except in certain limited cases, we can always decompose the transfer function into a product of rational fractions of orders 1 and 2; this amounts to cascading several filters of orders 1 and 2.
- 57. 41

2.4.2. Transfer functions of discrete systems

We saw in section 1.4.2 that an invariant linear system of impulse response h(k), whose input is x(k) and output is y(k), verifies the following equation:

    y(k) = Σ_{n=−∞}^{+∞} x(n) h(k − n) = x(k) * h(k)

The z-transform of relation (1.34) turns this into a simple product between the z-transforms of the input and of the impulse response of the system, on the condition that these z-transforms converge on the same, non-empty ROC. We then have, on the intersection of the convergence domains:

    Y_z(z) = H_z(z) X_z(z)    (2.20)

or:

    H_z(z) = Y_z(z)/X_z(z) = Σ_{k=−∞}^{+∞} h(k) z^{−k}    (2.21)

The transfer function is the z-transform of the impulse response of the system. This filter is excited by an input of z-transform X_z(z) and delivers the output whose z-transform is Y_z(z). With discrete systems, if at the instant k the filter output is characterized by the input states {x(k), x(k−1), …, x(k−N+1)} and the output states {y(k), y(k−1), …, y(k−M+1)}, the most general relation between the samples is the following difference equation:

    a₀ y(k) + a₁ y(k−1) + … + a_{M−1} y(k−M+1) = b₀ x(k) + … + b_{N−1} x(k−N+1).    (2.22)

From there, by carrying out the z-transform of the input and output, the difference equation becomes:

    (a₀ + a₁ z^{−1} + … + a_{M−1} z^{−(M−1)}) Y_z(z) = (b₀ + b₁ z^{−1} + … + b_{N−1} z^{−(N−1)}) X_z(z),
- 58. 42

or:

    Y_z(z)/X_z(z) = (b₀ + b₁ z^{−1} + … + b_{N−1} z^{−(N−1)})/(a₀ + a₁ z^{−1} + … + a_{M−1} z^{−(M−1)}) = B(z)/A(z) = H_z(z)    (2.23)

Thus, the transfer function is expressed from the polynomials A(z) and B(z), which are completely characterized by the position of their zeros in the complex plane.

[Figure 2.6 shows the discrete system as a block of transfer function H_z(z) = B(z)/A(z), with input X_z(z) and output Y_z(z).]

Figure 2.6. Representation of the discrete system with an input and an output

COMMENT 2.1.– we also find this kind of representation in the modeling of signals with parametric models, the most widely used example being the auto-regressive moving average (ARMA) model. Let y(k) be a signal represented by M samples {y(k), y(k−1), …, y(k−M+1)}, assumed to be generated by an excitation characterized by its N samples {x(k), x(k−1), …, x(k−N+1)}. A linear discrete model of the signal is a linear relation between the samples {x(k)} and {y(k)} that can be expressed as follows:

    a₀ y(k) + a₁ y(k−1) + … + a_{M−1} y(k−M+1) = b₀ x(k) + … + b_{N−1} x(k−N+1).    (2.24)

This kind of representation constitutes an ARMA model of order (M−1, N−1). The coefficients {a_i}_{i=0,…,M−1} and {b_i}_{i=0,…,N−1} are termed transverse parameters. In general, we adopt the convention a₀ = 1. We then have:

    y(k) = −Σ_{i=1}^{M−1} a_i y(k−i) + Σ_{i=0}^{N−1} b_i x(k−i)    (2.25)

The ARMA model can be interpreted as a filtering operation with transfer function H_z(z).
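The recursion (2.25) translates directly into code. The sketch below filters a causal input with a₀ = 1 and then recovers, as a sanity check, the impulse response 0.9^k of the first-order filter of Example 2.9:

```python
# Direct implementation of equation (2.25) with the convention a0 = 1:
# y(k) = -sum_{i=1}^{M-1} a_i y(k-i) + sum_{i=0}^{N-1} b_i x(k-i).

def arma_filter(b, a, x):
    """a[0] is assumed to be 1; x is a causal input sequence."""
    y = []
    for k in range(len(x)):
        acc = sum(b[i] * x[k - i] for i in range(len(b)) if k - i >= 0)
        acc -= sum(a[i] * y[k - i] for i in range(1, len(a)) if k - i >= 0)
        y.append(acc)
    return y

# Impulse response of H_z(z) = 1/(1 - 0.9 z^{-1}): h(k) = 0.9^k
impulse = [1.0] + [0.0] * 5
h_est = arma_filter([1.0], [1.0, -0.9], impulse)
print([round(v, 5) for v in h_est])   # [1.0, 0.9, 0.81, 0.729, 0.6561, 0.59049]
```

Setting all {a_i}, i ≥ 1, to zero reduces this to the MA (all-zero) case, and a single b₀ gives the AR (all-pole) case discussed next.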
- 59. 43

[Figure 2.7 shows 250 samples of a realization of the process, with amplitudes between −3 and 2.]

Figure 2.7. Realization of a second-order autoregressive process

In the case of a model termed autoregressive (AR), all the coefficients {b_i} are null except b₀, and the model is reduced to the following expression:

    y(k) = −Σ_{i=1}^{M−1} a_i y(k−i) + b₀ x(k)    (2.26)

[Figure 2.8 shows 250 samples of another realization, with amplitudes between −3 and 2.]

Figure 2.8. Realization of a second-order autoregressive process
- 60. 44

In this way, the polynomial B(z) is reduced to a constant B(z) = b₀ and the transfer function H_z(z) now only has poles. For this reason, this model is called the all-pole model. We can also use the moving average (MA) model, in which the {a_i}_{i=1,…,M−1} are null, which reduces the model to:

    y(k) = b₀ x(k) + b₁ x(k−1) + … + b_{N−1} x(k−N+1).    (2.27)

Here, A(z) equals 1. The model is then characterized by the position of its zeros in the complex plane, so it is also called the all-zero model:

    H_z(z) = b₀ + b₁ z^{−1} + … + b_{N−1} z^{−(N−1)}    (2.28)

2.5. z-transforms of the autocorrelation and intercorrelation functions

The spectral density in z of the sequence {x(k)} is defined as the z-transform of the autocorrelation function R_xx(k) of {x(k)}, a quantity we saw in the previous chapter:

    S_xx(z) = Σ_{k=−∞}^{+∞} R_xx(k) z^{−k}    (2.29)

We can also introduce the concept of a discrete interspectrum of the sequences {x(k)} and {y(k)} as the z-transform of the intercorrelation function R_xy(k):

    S_xy(z) = Σ_{k=−∞}^{+∞} R_xy(k) z^{−k}    (2.30)

When x and y are real, it can also be demonstrated that S_xy(z) = S_yx(z^{−1}). Inverse transforms allow us to find the intercorrelation and autocorrelation functions from S_xy(z) and S_xx(z):

    R_xy(m) = (1/2πj) ∮ S_xy(z) z^{m−1} dz    (2.31)

    R_xx(m) = (1/2πj) ∮ S_xx(z) z^{m−1} dz    (2.32)

Specific case:  R_xx(0) = E[x²(k)] = (1/2πj) ∮ S_xx(z) z^{−1} dz
- 61. 45

Now let us look at a system with a real input {x(k)}, an output {y(k)}, and an impulse response h(k). We then calculate S_xy(z) when it exists, with R_xy(n) = E[x(k) y(k+n)]:

    S_xy(z) = Σ_{n=−∞}^{+∞} R_xy(n) z^{−n} = Σ_{n=−∞}^{+∞} E[x(k) y(k+n)] z^{−n} = Σ_{n=−∞}^{+∞} E[x(k) Σ_{m=0}^{+∞} h(m) x(k+n−m)] z^{−n}

If permutation between the mathematical expectation and the summation is possible:

    S_xy(z) = Σ_{m=0}^{+∞} h(m) Σ_{n=−∞}^{+∞} E[x(k) x(k+n−m)] z^{−n}
            = Σ_{m=0}^{+∞} h(m) Σ_{n=−∞}^{+∞} R_xx(n−m) z^{−n}
            = Σ_{m=0}^{+∞} h(m) z^{−m} Σ_{n=−∞}^{+∞} R_xx(n−m) z^{−(n−m)}
            = Σ_{m=0}^{+∞} h(m) z^{−m} Σ_{n=−∞}^{+∞} R_xx(n) z^{−n}

Note that, as the signal x is real and stationary, R_xx(−n) = R_xx(n). Since Σ_{m=0}^{+∞} h(m) z^{−m} = H_z(z) and Σ_{n=−∞}^{+∞} R_xx(n) z^{−n} = S_xx(z), we thus establish the following connection between the transfer function H_z(z) of the system and the spectral densities S_xy(z) and S_xx(z):

    S_xy(z) = H_z(z) S_xx(z)    (2.33)

2.6. Stability

The fact that the transfer function is a rational fraction naturally leads us to the issue of stability, which can be studied by considering the z-transform of the impulse response.
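Relation (2.33) also holds for deterministic finite sequences if the expectations are replaced by direct sums, which makes it easy to check numerically. In this sketch the sequences x, h and the test point z0 are illustration choices, and the correlation is R_xy(n) = Σ_k x(k) y(k+n):

```python
# Deterministic check of S_xy(z) = H_z(z) S_xx(z) on finite sequences.

def correlate(x, y, n):
    """R_xy(n) = sum over k of x(k) * y(k+n), causal lists."""
    return sum(xk * y[k + n] for k, xk in enumerate(x)
               if 0 <= k + n < len(y))

def convolve(h, x):
    out = [0.0] * (len(h) + len(x) - 1)
    for i, hi in enumerate(h):
        for j, xj in enumerate(x):
            out[i + j] += hi * xj
    return out

x = [1.0, -2.0, 0.5, 3.0]
h = [1.0, 0.4, -0.1]
y = convolve(h, x)             # output of the filter h driven by x
z0 = 1.3 + 0.2j                # arbitrary evaluation point

lags = range(-len(y), len(y) + 1)   # covers the full correlation support
Sxx = sum(correlate(x, x, n) * z0 ** (-n) for n in lags)
Sxy = sum(correlate(x, y, n) * z0 ** (-n) for n in lags)
H = sum(hk * z0 ** (-k) for k, hk in enumerate(h))

print(abs(Sxy - H * Sxx) < 1e-9)   # True
```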
- 62. 46

2.6.1. Bounded input, bounded output (BIBO) stability

A linear time-invariant system is BIBO stable if its impulse response verifies the following relation (see also Chapter 10):

    Σ_{k=−∞}^{+∞} |h(k)| < +∞    (2.34)

The transfer function is the z-transform of the impulse response; from there, we have, for all z belonging to the ROC:

    |H_z(z)| = |Σ_{k=−∞}^{+∞} h(k) z^{−k}| ≤ Σ_{k=−∞}^{+∞} |h(k) z^{−k}|    (2.35)

Now, on the unit circle of the complex z-plane, we have:

    Σ_{k=−∞}^{+∞} |h(k) z^{−k}| = Σ_{k=−∞}^{+∞} |h(k)|    (2.36)

From this the following result is obtained:

    |H_z(z)|_{z=exp(j2πf/f_s)} < +∞    (2.37)

Many criteria have been developed to study the stability of filters. Among these, we will first look at the test of the pole positions of the transfer function, then at Routh’s and Jury’s criteria.

2.6.2. Regions of convergence

In causal systems, a necessary and sufficient condition of stability is that all the poles of the transfer function must be inside the unit circle in the z-plane. The decomposition into basic elements of the transfer function of a discrete causal system H_z(z) introduces two types of terms: 1/(1 − az^{−1}), which admits the pole p_i = a, and (d + ez^{−1})/(1 − 2bz^{−1} + cz^{−2}), which admits the complex conjugate poles p_i = b ± j√(c − b²), of modulus |p_i| = √c.
- 63. 47

Here we see that the z-transform of the sequence x₁(k) = a^k u(k) converges for |z| > |a| and equals 1/(1 − az^{−1}); the corresponding term is bounded-input bounded-output only when |a| < 1. In addition, according to Table 2.1, x₂(k) = α^k sin(ω₀kT_s) u(k) and x₃(k) = α^k cos(ω₀kT_s) u(k) admit, respectively, the following z-transforms:

    αz sin(ω₀T_s)/(z² − 2αz cos(ω₀T_s) + α²)  and  z[z − α cos(ω₀T_s)]/(z² − 2αz cos(ω₀T_s) + α²)

on condition that |α| < 1. A suitable linear combination of x₂(k) and x₃(k) gives us a z-transform of the form (d + ez^{−1})/(1 − 2bz^{−1} + cz^{−2}) with c < 1. For an anti-causal system, a necessary and sufficient condition of stability is that all the poles of the transfer function must be strictly outside the unit circle.

EXAMPLE 2.10.– let the following be the transfer function of a discrete causal system:

    H_z(z) = (1 − 2z^{−1})/(4 − 2z^{−1} + z^{−2})

It admits the zero z₁ = 2 and the poles p₁ = (1 + j√3)/4 and p₂ = (1 − j√3)/4. Stability is verified because |p₁| < 1 and |p₂| < 1.

[Figure 2.9 shows the pole-zero diagram in the complex plane: the zero z₁ = 2 on the real axis and the conjugate poles p₁,₂ = (1 ± j√3)/4 inside the unit circle.]

Figure 2.9. Diagram of poles and zeros of H_z(z) = (1 − 2z^{−1})/(4 − 2z^{−1} + z^{−2})
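The pole test of Example 2.10 is straightforward to carry out numerically: multiplying the denominator by z² gives the polynomial 4z² − 2z + 1, whose roots are the poles.

```python
import cmath

# Pole-magnitude stability test for the causal system of Example 2.10,
# H_z(z) = (1 - 2 z^{-1}) / (4 - 2 z^{-1} + z^{-2}).
# Denominator times z^2: 4 z^2 - 2 z + 1 = 0, solved by the quadratic formula.

a, b, c = 4.0, -2.0, 1.0
disc = cmath.sqrt(b * b - 4 * a * c)
poles = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

print(all(abs(p) < 1 for p in poles))   # True -> causal system is BIBO stable
```

Both moduli equal 1/2, in agreement with |p₁,₂| = |1 ± j√3|/4 = 1/2.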
- 64. 48

2.6.2.1. Routh’s criterion

The first approach we will consider uses Routh’s criterion. In general, Routh’s criterion is used to study the stability of continuous systems, usually looped systems. It gives the number of zeros of a polynomial with strictly positive real part by examining its coefficients. Routh’s criterion is adapted to discrete systems by the following change of variable:

    z^{−1} → (1 − λ)/(1 + λ)    (2.38)

which maps the interior of the unit circle in z onto the left half of the λ-plane. We then analyze the denominator of H(λ), expressed as:

    Σ_{k=0}^{n} α_k λ^k    (2.39)

We formulate the following table:

    α_n       α_{n−2}    α_{n−4}   …
    α_{n−1}   α_{n−3}    α_{n−5}   …
    β₁ = (α_{n−1}α_{n−2} − α_n α_{n−3})/α_{n−1}    β₂ = (α_{n−1}α_{n−4} − α_n α_{n−5})/α_{n−1}   …
    χ₁ = (β₁α_{n−3} − α_{n−1}β₂)/β₁               χ₂ = (β₁α_{n−5} − α_{n−1}β₃)/β₁               …
    and so on

Table 2.2. Table for application of Routh’s criterion

Routh’s theorem states that the number of zeros of the denominator of H(λ) with strictly positive real part is equal to the number of sign changes observed by reading the first column of Table 2.2 from top to bottom.

EXAMPLE 2.11.– Let us look again at the example where H_z(z) = (1 − 2z^{−1})/(4 − 2z^{−1} + z^{−2}).
- 65. 49

First, we carry out the change of variable indicated in equation (2.38). We get:

    H(λ) = N(λ)/D(λ) = (3λ² + 2λ − 1)/(7λ² + 6λ + 3)

From this, the following table is constructed from the coefficients of D(λ):

    7    3
    6    0
    3

Table 2.3. Application of Routh’s criterion

There is no change of sign in the first column; this means that D(λ) has no zeros with strictly positive real part. We conclude from this that the system is stable.

2.6.2.2. Jury’s criterion

Let H_z(z) = B(z)/A(z) be the transfer function. Jury’s criterion is an algebraic criterion that allows us to determine whether the roots of the polynomial A(z) are inside the circle of unit radius in the z-plane. We write:

    A(z) = Σ_{k=0}^{M−1} a_k z^{−k}

where the coefficients a_k are real and a₀ > 0. We construct a table of 2(M−1) − 3 rows. The first two rows of this table are filled, respectively, by the polynomial coefficients according to the increasing, then decreasing, powers of z^{−1}. The following rows are deduced from the two preceding ones by means of 2×2 determinants of specific coefficients, as follows:

    β_k = | a_{M−1}  a_{M−2−k} |        γ_k = | β_{M−2}  β_{M−3−k} |
          | a₀       a_{k+1}   |  ,           | β₀       β_{k+1}   |  ,  etc.
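The sign-change count of Example 2.11 can be automated. The sketch below builds only the first column of Routh's array; it is a minimal version that does not handle the degenerate case of a zero pivot, and the coefficients are listed from the highest power of λ downwards:

```python
# First column of Routh's array for D(lambda) = 7*lambda^2 + 6*lambda + 3
# (Example 2.11), followed by the sign-change count of Routh's theorem.

def routh_first_column(c):
    rows = [list(c[0::2]), list(c[1::2])]     # the two seed rows
    while len(rows) < len(c):
        prev2, prev = rows[-2], rows[-1]
        new = []
        for i in range(len(prev2) - 1):
            p = prev[i + 1] if i + 1 < len(prev) else 0.0
            new.append((prev[0] * prev2[i + 1] - prev2[0] * p) / prev[0])
        rows.append(new)
    return [r[0] for r in rows]

col = routh_first_column([7.0, 6.0, 3.0])
signs = sum(1 for u, v in zip(col, col[1:]) if u * v < 0)
print(col, signs)   # [7.0, 6.0, 3.0] 0  -> no sign change: stable
```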
- 66. 50

This gives us the following table:

    1/     a_{M−1}   a_{M−2}   a_{M−3}  …  a₁        a₀
    2/     a₀        a₁        a₂       …  a_{M−2}   a_{M−1}
    3/     β_{M−2}   β_{M−3}   …  β₀
    4/     β₀        β₁        …  β_{M−2}
    5/     γ_{M−3}   γ_{M−4}   …  γ₀
    6/     γ₀        γ₁        …  γ_{M−3}
    …
    2M−7/  p₃   p₂   p₁   p₀
    2M−6/  p₀   p₁   p₂   p₃
    2M−5/  q₂   q₁   q₀

Table 2.4. Table for establishing and verifying Jury’s criterion

According to Jury’s criterion, the roots of the polynomial are inside the circle of unit radius in the z-plane if the following M conditions are met:
– A(1) > 0, and A(−1) > 0 if M−1 is even or A(−1) < 0 if M−1 is odd;
– |a_{M−1}| < a₀;
– |β_{M−2}| > |β₀|, |γ_{M−3}| > |γ₀|, …, and |q₂| > |q₀|.

EXAMPLE 2.12.– looking again at the example of H_z(z) = (1 − 2z^{−1})/(4 − 2z^{−1} + z^{−2}), with A(z) = 4 − 2z^{−1} + z^{−2}. The corresponding Jury table is as follows:

    1/    1   −2    4
    2/    4   −2    1
    3/  −15    6
    4/    6  −15
    5/  189

Since A(1) = 3 > 0 and A(−1) = 7 > 0 (with M−1 = 2 even), |a_{M−1}| = 1 < a₀ = 4 and |β_{M−2}| = 15 > |β₀| = 6, the poles of the transfer function are inside the unit circle. In Chapter 10, we will discuss stability in more depth.
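A compact way to program this test is the Schur-Cohn recursion, which generates the successive reduced rows of a Jury-type table (for Example 2.12 it produces the values 15, 6 and then 189, matching Table 2.4 up to sign conventions). This is a sketch only: zero leading coefficients and roots exactly on the unit circle are not handled.

```python
# Schur-Cohn / Jury-type recursion: all roots of the polynomial lie
# strictly inside the unit circle iff, at every stage, the constant
# term is smaller in modulus than the leading term.

def jury_stable(a):
    """a = coefficients with the highest power of z first, a[0] > 0."""
    a = list(a)
    while len(a) > 1:
        if abs(a[-1]) >= abs(a[0]):
            return False          # a root lies on or outside the unit circle
        n = len(a) - 1
        a = [a[0] * a[i] - a[-1] * a[n - i] for i in range(n)]  # reduced row
    return True

print(jury_stable([4.0, -2.0, 1.0]))   # True  (4z^2 - 2z + 1: Example 2.12)
print(jury_stable([1.0, -2.0]))        # False (root at z = 2)
```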
- 67. Chapter 3 Frequential Characterization of Signals and Filters

3.1. Introduction

This chapter discusses frequential representations of signals and filters. We will introduce the Fourier transform of continuous-time signals by first presenting the Fourier series decomposition of periodic signals. Properties and basic calculation methods will be demonstrated. We will then present the frequential analysis of discrete-time signals through the discrete Fourier transform, in its standard and fast versions. These concepts will then be illustrated using the example of speech signals, with a common time-frequency-energy representation – the spectrogram.

3.2. The Fourier transform of continuous signals

3.2.1. Summary of the Fourier series decomposition of continuous signals

3.2.1.1. Decomposition of finite energy signals using an orthonormal base

Let x(t) be a finite energy signal. We consider the scalar product ⟨φ_i(t), φ_k(t)⟩ of two functions φ_i(t) and φ_k(t) of finite energy, defined as follows:

Chapter written by Eric GRIVEL and Yannick BERTHOUMIEU.
- 68. 52

    ⟨φ_i(t), φ_k(t)⟩ = ∫_{−∞}^{+∞} φ_i(t) φ_k*(t) dt    (3.1)

where φ_k*(t) denotes the complex conjugate of φ_k(t). A family {φ_k(t)} of finite energy functions is called orthonormal if it verifies the following relations:

    ⟨φ_i(t), φ_k(t)⟩ = δ(i − k).    (3.2)

A family {φ_k(t)} is complete if any vector of the space can be approximated as closely as desired by a linear combination of the {φ_k(t)}. A family {φ_k(t)} is termed maximal when the sole finite energy function x(t) orthogonal to every φ_k(t) is the null function. We can then decompose the signal x(t) on an orthonormal base {φ_k(t)} as follows:

    x(t) = Σ_k ⟨x(t), φ_k(t)⟩ φ_k(t)    (3.3)

COMMENT 3.1.– when the family is not complete, Σ_k ⟨x(t), φ_k(t)⟩ φ_k(t) is an optimum approximation of the signal x(t) in the least squares sense.

3.2.1.2. Fourier series development of periodic signals

The Fourier series development of a periodic signal x(t) of period T₀ follows from the decomposition of a signal on an orthonormal base. To observe this, we look at the family of periodic functions {φ_k(t)}_k defined as follows:

    φ_k(t) = exp(j2πkt/T₀)  with k ∈ ℤ    (3.4)

Here, the scalar product is that of periodic signals of period T₀ and of finite power, that is, such that ∫_{T₀} |φ(t)|² dt < +∞, so:

    ⟨φ_i(t), φ_k(t)⟩ = (1/T₀) ∫_{T₀} φ_i(t) φ_k*(t) dt    (3.5)
- 69. 53

We then have:

    ⟨φ_i(t), φ_k(t)⟩ = (1/T₀) ∫_{−T₀/2}^{T₀/2} exp(j2π(i−k)t/T₀) dt = sin[π(i−k)]/[π(i−k)].    (3.6)

If i ≠ k, ⟨φ_i(t), φ_k(t)⟩ = 0; otherwise, ⟨φ_k(t), φ_k(t)⟩ = 1. Every periodic signal x(t) of period T₀ can thus be decomposed in a Fourier series as a linear combination of the functions φ_k(t) = exp(j2πkt/T₀). Given equation (3.3), we have:

    x(t) = Σ_{k=−∞}^{+∞} c_k exp(j2πkt/T₀)    (3.7)

where c_k measures the degree of resemblance between x(t) and exp(j2πkt/T₀):

    c_k = ⟨x(t), exp(j2πkt/T₀)⟩ = (1/T₀) ∫_{T₀} x(t) exp(−j2πkt/T₀) dt    (3.8)

When the signal x(t) is real, we can demonstrate that the Fourier series decomposition of x(t) is written as:

    x(t) = a₀/2 + Σ_{k=1}^{+∞} [a_k cos(2πkt/T₀) + b_k sin(2πkt/T₀)]    (3.9)

where the real quantities a_k and b_k verify the following relations:

    a_k = (2/T₀) ∫_{T₀} x(t) cos(2πkt/T₀) dt  with k ∈ ℕ    (3.10)

and

    b_k = (2/T₀) ∫_{T₀} x(t) sin(2πkt/T₀) dt  with k ∈ ℕ    (3.11)
- 70. 54

PROOF.– c_k is a complex quantity; we can express it as:

    c_k = |c_k| exp(jφ_k)    (3.12)

When the signal x(t) is real, the coefficients c_k and c_{−k} are complex conjugates: |c_{−k}| = |c_k| and c_{−k} = c_k* = |c_k| exp(−jφ_k). We then have:

    x(t) = Σ_{k=−∞}^{+∞} c_k exp(j2πkt/T₀)
         = c₀ + Σ_{k=1}^{+∞} [c_k exp(j2πkt/T₀) + c_{−k} exp(−j2πkt/T₀)]
         = c₀ + Σ_{k=1}^{+∞} 2|c_k| cos(2πkt/T₀ + φ_k)    (3.13)

Expanding the cosine:

    x(t) = c₀ + Σ_{k=1}^{+∞} [2|c_k| cos(φ_k) cos(2πkt/T₀) − 2|c_k| sin(φ_k) sin(2πkt/T₀)]    (3.14)

Comparing relations (3.9) and (3.14) leads to the following identification:

    a_k = 2|c_k| cos(φ_k) = 2 Re(c_k)  and  b_k = −2|c_k| sin(φ_k) = −2 Im(c_k)    (3.15)

The coefficients c_k and c_{−k} are then linked to the quantities a_k and b_k as follows:

    c_k = (a_k − jb_k)/2  and  c_{−k} = (a_k + jb_k)/2    (3.16)

COMMENT 3.2.– periodic signals do not have finite energy on the interval ]−∞; +∞[. That means that the quantity ∫_{−∞}^{+∞} |x(t)|² dt does not have a finite value; x(t) is not square-summable.
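Formula (3.8) can be checked numerically on a signal whose coefficients are known in closed form, such as a pure cosine. In this sketch the integral over one period is approximated by a Riemann sum; the frequency f₀ and the number of sample points M are illustration choices:

```python
import cmath
import math

# Numerical Fourier-series coefficients c_k (equation (3.8)) for
# x(t) = cos(2*pi*f0*t). For this signal c_{+1} = c_{-1} = 1/2 and all
# other coefficients vanish. The rectangle rule over one period is exact
# (up to rounding) for trigonometric polynomials of degree < M.

f0 = 2.0
T0 = 1.0 / f0
M = 1000

def c(k):
    acc = 0j
    for m in range(M):
        t = m * T0 / M
        acc += math.cos(2 * math.pi * f0 * t) * cmath.exp(-2j * math.pi * k * t / T0)
    return acc / M

print(round(abs(c(1)), 6), round(abs(c(-1)), 6), round(abs(c(2)), 6))
# 0.5 0.5 0.0
```

This reproduces the spectrum of the cosine example given at the end of this section: two lines of height 1/2 at k = ±1 and nothing elsewhere.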
- 71. 55

COMMENT 3.3.– we also see that, according to Parseval’s equality,

    Σ_{k=−∞}^{+∞} |c_k|² = (1/T₀) ∫_{T₀} |x(t)|² dt    (3.17)

If x(t) is real, Σ_{k=−∞}^{+∞} |c_k|² = (1/T₀) ∫_{T₀} x²(t) dt. The signal’s total average power is thus equal to the sum of the average powers of the different harmonics and of the continuous component.

COMMENT 3.4.– we recall that the average value of a periodic signal is given by the relation:

    µ = (1/T₀) ∫_{T₀} x(t) dt = c₀.

COMMENT 3.5.– if the analyzed signal is even, the complex coefficients c_k constitute an even sequence. If the signal is odd, the complex coefficients c_k of the Fourier series decomposition form an odd sequence:

    ∀t, x(−t) = x(t)  if, and only if,  ∀k ∈ ℤ, c_{−k} = c_k    (3.18)

    ∀t, x(−t) = −x(t)  if, and only if,  ∀k ∈ ℤ, c_{−k} = −c_k    (3.19)

From there, if the analyzed signal is real and even, the complex coefficients c_k constitute a real even sequence. If the signal is real and odd, the complex coefficients c_k of the Fourier series decomposition form a purely imaginary odd sequence.

COMMENT 3.6.– amplitude and phase spectra. The amplitude spectrum expresses the frequential distribution of the amplitude of the signal. It is given by the modulus of the complex coefficients c_k plotted against the frequencies k/T₀ related to the functions φ_k(t) = exp(j2πkt/T₀).
- 72. 56

[Figure 3.1 shows the amplitude spectrum: lines |c₀|, |c±1|, |c±2|, |c±3| plotted against the frequency f.]

Figure 3.1. Amplitude spectrum of a periodic signal

According to Figure 3.1, the spectrum of the periodic signal x(t) has a discrete representation. It contains the average value, the fundamental component, and the harmonics of the signal, whose frequencies are multiples of the fundamental. Introducing a delay in the signal x(t) does not modify the amplitude spectrum of the signal, but modifies the phase spectrum, which is given by the phase of the complex coefficients c_k plotted against the frequencies k/T₀ linked to the functions φ_k(t) = exp(j2πkt/T₀). This phase spectrum is also discrete. If we let d_k be the complex coefficients of the Fourier series development of x(t − τ), we then have:

    x(t − τ) = Σ_{k=−∞}^{+∞} d_k exp(j2πkt/T₀)    (3.20)

Now, with equation (3.7), we also have:

    x(t − τ) = Σ_{k=−∞}^{+∞} c_k exp(j2πk(t − τ)/T₀) = Σ_{k=−∞}^{+∞} c_k exp(−j2πkτ/T₀) exp(j2πkt/T₀).    (3.21)
- 73. 57

According to equations (3.20) and (3.21), we deduce that:

    d_k = c_k exp(−j2πkτ/T₀)    (3.22)

and

    |c_k| = |d_k|.    (3.23)

EXAMPLE.– let the signal be written as follows:

    x(t) = cos(2πf₀t) = ½[exp(j2πf₀t) + exp(−j2πf₀t)] = c₁ exp(j2πf₀t) + c_{−1} exp(−j2πf₀t)

The signal is periodic, of period T₀ = 1/f₀. The corresponding amplitude and phase spectra are discrete: only certain frequencies are present in the signal. Here, this corresponds to two Diracs in the frequency domain, placed at the frequencies f₀ and −f₀.

3.2.2. Fourier transforms and continuous signals

3.2.2.1. Representations

The Fourier transform of a signal x(t) of total finite energy, with values in the set of complex numbers, is defined as follows:

    X(f) = TF(x(t)) = ∫_{−∞}^{+∞} x(t) e^{−j2πft} dt.    (3.24)

The Fourier transform X(f) of a signal x(t) being a complex quantity, the amplitude and phase spectra respectively represent the modulus and the phase of X(f) as functions of the frequency f. The inverse Fourier transform is then written as:

    x(t) = ∫_{−∞}^{+∞} X(f) e^{j2πft} df    (3.25)
