Ch13
Transcript

  • 1. Chapter 13 Discrete Image Transforms. 13.1 Introduction. 13.2 Linear transformations: One-dimensional discrete linear transformations. Definition: if x is an N-by-1 vector and T is an N-by-N matrix, then y = Tx is a linear transformation of the vector x, i.e., $y_i = \sum_{j=0}^{N-1} t_{ij} x_j$. T is called the kernel matrix of the transformation.
  • 2. Linear Transformations. Rotation and scaling are examples of linear transformations: $\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$. If T is a nonsingular matrix, the inverse linear transformation is $x = T^{-1}y$.
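This forward/inverse pair can be sketched in a few lines of NumPy (a minimal illustration, not from the original slides; the rotation angle is arbitrary):

```python
import numpy as np

# Linear transformation y = T x with a rotation kernel T.
theta = np.pi / 6
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])
y = T @ x                        # forward transformation

# T is nonsingular, so x = T^{-1} y recovers the input exactly.
x_back = np.linalg.inv(T) @ y
assert np.allclose(x, x_back)
```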
  • 3. Linear Transformations: Unitary Transforms. If the kernel matrix of a linear system is a unitary matrix, the linear system is called a unitary transformation. A matrix T is unitary if $T^{-1} = (T^*)^t$, i.e., $T(T^*)^t = (T^*)^t T = I$. If T is unitary and real, then T is an orthogonal matrix: $T^{-1} = T^t$, i.e., $TT^t = T^t T = I$.
  • 4. Unitary Transforms. The rows (also the columns) of an orthogonal matrix are a set of orthogonal vectors. The one-dimensional DFT is an example of a unitary transform: $F_k = \frac{1}{\sqrt{N}} \sum_{i=0}^{N-1} f_i \exp\left(-j2\pi \frac{ik}{N}\right)$, or $F = Wf$, where the matrix W is a unitary matrix.
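As a sketch (not from the slides), the unitary kernel W can be built explicitly and checked against NumPy's FFT; note that np.fft.fft applies no 1/sqrt(N) factor, so the two results differ by exactly that normalization:

```python
import numpy as np

N = 8
i, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
W = np.exp(-2j * np.pi * i * k / N) / np.sqrt(N)   # w[i, k]

# Unitarity: W (W*)^t = I
assert np.allclose(W @ W.conj().T, np.eye(N))

f = np.random.rand(N)
F = W @ f                                 # forward transform F = W f
assert np.allclose(F, np.fft.fft(f) / np.sqrt(N))
```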
  • 5. Unitary Transforms: Interpretation. The rows (or columns) of a unitary matrix form an orthonormal basis for the N-dimensional vector space. A unitary transform can be viewed as a coordinate transformation, rotating the vector in N-space without changing its length. A unitary linear transformation converts an N-dimensional vector to an N-dimensional vector of transform coefficients, each of which is computed as the inner product of the input vector x with one of the rows of the transform matrix T. The forward transformation is referred to as analysis, and the backward transformation as synthesis.
  • 6. Two-dimensional discrete linear transformations. A two-dimensional linear transformation is $G_{mn} = \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} F_{ik}\, t(i,k;m,n)$, where $t(i,k;m,n)$ forms an N²-by-N² block matrix having N-by-N blocks, each of which is an N-by-N matrix. If $t(i,k;m,n) = t_r(i,m)\,t_c(k,n)$, the transformation is called separable: $G_{mn} = \sum_{i=0}^{N-1} t_r(i,m) \left[ \sum_{k=0}^{N-1} F_{ik}\, t_c(k,n) \right]$.
  • 7. 2-D separable symmetric unitary transforms. If $t(i,k;m,n) = t(i,m)\,t(k,n)$, the transformation is symmetric: $G_{mn} = \sum_{i=0}^{N-1} t(i,m) \left[ \sum_{k=0}^{N-1} F_{ik}\, t(k,n) \right]$, or $G = TFT$. The inverse transform is $F = T^{-1}GT^{-1} = (T^*)^t G (T^*)^t$. Example: the two-dimensional DFT, $G = WFW$ and $F = (W^*)^t G (W^*)^t$.
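A quick numerical check of G = WFW (an illustrative sketch; the 1/N scale relative to np.fft.fft2 again comes from the unitary normalization of W):

```python
import numpy as np

N = 8
i, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
W = np.exp(-2j * np.pi * i * k / N) / np.sqrt(N)

F = np.random.rand(N, N)
G = W @ F @ W                             # separable 2-D transform
assert np.allclose(G, np.fft.fft2(F) / N)

# Inverse: F = (W*)^t G (W*)^t
F_back = W.conj().T @ G @ W.conj().T
assert np.allclose(F, F_back)
```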
  • 8. Orthogonal transformations. If the matrix T is real, the linear transformation is called an orthogonal transformation. Its inverse transformation is $F = T^t G T^t$. If T is a symmetric matrix, the forward and inverse transformations are identical: $G = TFT$ and $F = TGT$.
  • 9. Basis functions and basis images. Basis functions: the rows of a unitary matrix form an orthonormal basis for an N-dimensional vector space. Different unitary transformations correspond to different choices of basis vectors. A unitary transformation corresponds to a rotation of a vector in an N-dimensional (N² for the two-dimensional case) vector space.
  • 10. Basis Images. The inverse unitary transform can be viewed as the weighted sum of N² basis images, each of which is the inverse transformation of a matrix $G = \{\delta_{i-p,\,k-q}\}$: $F_{mn} = \sum_{i=0}^{N-1} \left[ \sum_{k=0}^{N-1} t(i,m)\, \delta_{i-p,\,k-q}\, t(k,n) \right] = t(p,m)\,t(q,n)$. This means that each basis image is the outer product of two rows of the transform matrix. Any image can be decomposed into a set of basis images by the forward transformation, and the inverse transformation reconstitutes the image by summing the basis images.
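A sketch of this decomposition using the orthonormal DCT kernel as T (the choice of kernel and the 4-by-4 size are illustrative):

```python
import numpy as np

N = 4
m = np.arange(N)
alpha = np.full(N, np.sqrt(2.0 / N)); alpha[0] = np.sqrt(1.0 / N)
T = alpha[:, None] * np.cos(np.pi * (2 * m + 1) * m[:, None] / (2 * N))

# Basis image (p, q): outer product of rows p and q of T.
B = np.outer(T[1], T[2])

# Any image is the sum of all N^2 basis images weighted by its
# transform coefficients G[p, q].
F = np.random.rand(N, N)
G = T @ F @ T.T
F_back = sum(G[p, q] * np.outer(T[p], T[q])
             for p in range(N) for q in range(N))
assert np.allclose(F, F_back)
```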
  • 11. 13.4 Sinusoidal Transforms. 13.4.1 The discrete Fourier transform. The forward and inverse DFTs are $F = Wf$ and $f = (W^*)^t F$, where $W = \begin{bmatrix} w_{0,0} & \cdots & w_{0,N-1} \\ \vdots & \ddots & \vdots \\ w_{N-1,0} & \cdots & w_{N-1,N-1} \end{bmatrix}$ and $w_{i,k} = \frac{1}{\sqrt{N}} e^{-j2\pi \frac{ik}{N}}$.
  • 12. 13.4 Sinusoidal Transforms. The spectrum vector: the frequency corresponding to the ith element of F is $s_i = \begin{cases} \frac{2i}{N} f_N & 0 \le i \le N/2 \\ \frac{2(N-i)}{N} f_N & N/2+1 \le i \le N-1 \end{cases}$ where $f_N$ is the Nyquist frequency. The frequency components thus rise from zero at i = 0 to $f_N$ at i = N/2 and fall back toward zero as i approaches N − 1.
  • 13. 13.4 Sinusoidal Transforms. The frequencies are symmetric about the highest-frequency component. Using a circular right shift by the amount N/2, we can place the zero frequency at N/2, with frequency increasing in both directions from there and the Nyquist frequency (the highest frequency) at $F_0$. This can be done by changing the signs of the odd-numbered elements of f(x) prior to computing the DFT, because of the shift property shown on the next slide.
  • 14. 13.4 Sinusoidal Transforms. $F(u) \Leftrightarrow f(x) \;\Rightarrow\; F(u - N/2) \Leftrightarrow \exp\left(j2\pi x \frac{N/2}{N}\right) f(x) = \exp(j\pi x) f(x) = (-1)^x f(x)$. The two-dimensional DFT: for the two-dimensional DFT, changing the sign of half the elements of the image matrix shifts its zero-frequency component to the center of the spectrum: $F(u,v) \Leftrightarrow f(x,y) \;\Rightarrow\; F(u - N/2, v - N/2) \Leftrightarrow (-1)^{x+y} f(x,y)$. [Figure: the four quadrants of the spectrum are swapped diagonally by this shift.]
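A numerical check of this centering trick (a sketch; np.fft.fftshift performs the same quadrant rearrangement directly):

```python
import numpy as np

N = 8
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
f = np.random.rand(N, N)

# Sign-flip in a checkerboard pattern, then transform.
F_centered = np.fft.fft2(f * (-1.0) ** (x + y))

# Equivalent to transforming first and swapping quadrants after.
assert np.allclose(F_centered, np.fft.fftshift(np.fft.fft2(f)))
```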
  • 15. 13.4.2 Discrete Cosine Transform. The two-dimensional discrete cosine transform (DCT) is defined as $G_c(m,n) = \alpha(m)\alpha(n) \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} g(i,k) \cos\left[\frac{\pi(2i+1)m}{2N}\right] \cos\left[\frac{\pi(2k+1)n}{2N}\right]$ and its inverse as $g(i,k) = \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} \alpha(m)\alpha(n)\, G_c(m,n) \cos\left[\frac{\pi(2i+1)m}{2N}\right] \cos\left[\frac{\pi(2k+1)n}{2N}\right]$, where $\alpha(0) = \sqrt{1/N}$ and $\alpha(m) = \sqrt{2/N}$ for $1 \le m \le N-1$.
  • 16. The discrete cosine transform. The DCT can be expressed in unitary matrix form as $G_c = C g C^t$, where the kernel matrix has elements $C_{m,i} = \alpha(m) \cos\left[\frac{\pi(2i+1)m}{2N}\right]$. The DCT is useful in image compression.
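A sketch of the matrix form, cross-checked against SciPy's orthonormal DCT-II (assuming scipy.fft.dctn with norm="ortho", which uses the same normalization as the definition above):

```python
import numpy as np
from scipy.fft import dctn

N = 8
m = np.arange(N)
alpha = np.full(N, np.sqrt(2.0 / N)); alpha[0] = np.sqrt(1.0 / N)
C = alpha[:, None] * np.cos(np.pi * (2 * m + 1) * m[:, None] / (2 * N))

g = np.random.rand(N, N)
Gc = C @ g @ C.T                          # forward 2-D DCT
assert np.allclose(Gc, dctn(g, norm="ortho"))

g_back = C.T @ Gc @ C                     # inverse: C is orthogonal
assert np.allclose(g, g_back)
```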
  • 17. The sine transform. The discrete sine transform (DST) is defined as $G_s(m,n) = \frac{2}{N+1} \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} g(i,k) \sin\left[\frac{\pi(i+1)(m+1)}{N+1}\right] \sin\left[\frac{\pi(k+1)(n+1)}{N+1}\right]$ and $g(i,k) = \frac{2}{N+1} \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} G_s(m,n) \sin\left[\frac{\pi(i+1)(m+1)}{N+1}\right] \sin\left[\frac{\pi(k+1)(n+1)}{N+1}\right]$. The DST has the unitary kernel $t_{i,k} = \sqrt{\frac{2}{N+1}} \sin\left[\frac{\pi(i+1)(k+1)}{N+1}\right]$.
  • 18. The Hartley transform. The forward two-dimensional discrete Hartley transform is $G_{m,n} = \frac{1}{N} \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} g_{i,k}\, \mathrm{cas}\left[\frac{2\pi}{N}(im + kn)\right]$ and the inverse DHT is $g_{i,k} = \frac{1}{N} \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} G_{m,n}\, \mathrm{cas}\left[\frac{2\pi}{N}(im + kn)\right]$, where the basis function is $\mathrm{cas}(\theta) = \cos(\theta) + \sin(\theta) = \sqrt{2}\cos(\theta - \pi/4)$.
  • 19. The Hartley transform. The unitary kernel matrix of the Hartley transform has elements $t_{i,k} = \frac{1}{\sqrt{N}}\, \mathrm{cas}\left(2\pi \frac{ik}{N}\right)$. The Hartley transform is the real part minus the imaginary part of the corresponding Fourier transform, and the Fourier transform is the even part minus j times the odd part of the Hartley transform.
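The real-minus-imaginary relationship makes the DHT easy to compute from an FFT, as in this sketch (the kernel is left unnormalized to match the unscaled FFT):

```python
import numpy as np

N = 8
i, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
theta = 2 * np.pi * i * k / N
H = np.cos(theta) + np.sin(theta)         # cas kernel, no 1/sqrt(N)

f = np.random.rand(N)
dht = H @ f

F = np.fft.fft(f)
assert np.allclose(dht, F.real - F.imag)  # DHT = Re(DFT) - Im(DFT)
```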
  • 20. 13.5 Rectangular wave transforms. 13.5.1 The Hadamard Transform, also called the Walsh transform. The Hadamard transform is a symmetric, separable orthogonal transformation that has only +1 and −1 as elements in its kernel matrix. It exists only for $N = 2^n$. For the two-by-two case, $H_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$, and in general $H_N = \frac{1}{\sqrt{2}} \begin{bmatrix} H_{N/2} & H_{N/2} \\ H_{N/2} & -H_{N/2} \end{bmatrix}$.
  • 21. The Hadamard Transform. $H_8 = \frac{1}{2\sqrt{2}} \begin{bmatrix} 1&1&1&1&1&1&1&1 \\ 1&-1&1&-1&1&-1&1&-1 \\ 1&1&-1&-1&1&1&-1&-1 \\ 1&-1&-1&1&1&-1&-1&1 \\ 1&1&1&1&-1&-1&-1&-1 \\ 1&-1&1&-1&-1&1&-1&1 \\ 1&1&-1&-1&-1&-1&1&1 \\ 1&-1&-1&1&-1&1&1&-1 \end{bmatrix}$, whose rows have sequencies 0, 7, 3, 4, 1, 6, 2, 5. Rearranging the rows in order of increasing sequency (0 through 7) gives the ordered Hadamard transform.
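A sketch of generating and sequency-ordering the kernel, using SciPy's Sylvester-construction hadamard (the sequency count and the reordering step are the point here):

```python
import numpy as np
from scipy.linalg import hadamard

N = 8
H = hadamard(N) / np.sqrt(N)              # natural (Hadamard) order
assert np.allclose(H @ H.T, np.eye(N))    # orthogonal

# Sequency = number of sign changes along each row.
sequency = (np.diff(H, axis=1) != 0).sum(axis=1)
print(sequency)                           # [0 7 3 4 1 6 2 5]

H_ordered = H[np.argsort(sequency)]       # ordered Hadamard kernel
```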
  • 22. 13.5.3 The Slant Transform. The orthogonal kernel matrix for the slant transform is obtained iteratively, starting from $S_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$ and giving, for N = 4, $S_4 = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 & 1 \\ \frac{3}{\sqrt{5}} & \frac{1}{\sqrt{5}} & -\frac{1}{\sqrt{5}} & -\frac{3}{\sqrt{5}} \\ 1 & -1 & -1 & 1 \\ \frac{1}{\sqrt{5}} & -\frac{3}{\sqrt{5}} & \frac{3}{\sqrt{5}} & -\frac{1}{\sqrt{5}} \end{bmatrix}$.
  • 23. 13.5.3 The Slant Transform. In general, $S_N = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & \mathbf{0} & 1 & \mathbf{0} \\ a_N & b_N & -a_N & b_N \\ \mathbf{0} & I & \mathbf{0} & I \\ 0 & 1 & 0 & -1 \\ -b_N & a_N & b_N & a_N \\ \mathbf{0} & I & \mathbf{0} & -I \end{bmatrix} \begin{bmatrix} S_{N/2} & 0 \\ 0 & S_{N/2} \end{bmatrix}$, where I is the identity matrix of order N/2 − 2 and $a_{2N}^2 = \frac{3N^2}{4N^2 - 1}, \quad b_{2N}^2 = \frac{N^2 - 1}{4N^2 - 1}$.
  • 24. 13.5.3 The Slant Transform. [Figure: slant transform basis functions for N = 8, indexed 0 through 7.]
  • 25. 13.5.4 The Haar Transform. The basis functions of the Haar transform: for any integer $0 \le k \le N-1$, let $k = 2^p + q - 1$, where $p \ge 0$ and $2^p$ is the largest power of 2 such that $2^p \le k$, and $q - 1$ is the remainder. The Haar function for k = 0 is $h_0(x) = \frac{1}{\sqrt{N}}$.
  • 26. 13.5.4 The Haar Transform. For k > 0, $h_k(x) = \frac{1}{\sqrt{N}} \begin{cases} 2^{p/2} & \frac{q-1}{2^p} \le x < \frac{q-1/2}{2^p} \\ -2^{p/2} & \frac{q-1/2}{2^p} \le x < \frac{q}{2^p} \\ 0 & \text{otherwise} \end{cases}$. The 8-by-8 Haar orthogonal kernel matrix is $\mathbf{Hr} = \frac{1}{2\sqrt{2}} \begin{bmatrix} 1&1&1&1&1&1&1&1 \\ 1&1&1&1&-1&-1&-1&-1 \\ \sqrt{2}&\sqrt{2}&-\sqrt{2}&-\sqrt{2}&0&0&0&0 \\ 0&0&0&0&\sqrt{2}&\sqrt{2}&-\sqrt{2}&-\sqrt{2} \\ 2&-2&0&0&0&0&0&0 \\ 0&0&2&-2&0&0&0&0 \\ 0&0&0&0&2&-2&0&0 \\ 0&0&0&0&0&0&2&-2 \end{bmatrix}$.
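A sketch that builds the N-by-N Haar kernel by sampling h_k(x) at x = i/N and checks orthonormality (the function name is illustrative):

```python
import numpy as np

def haar_matrix(N):
    H = np.zeros((N, N))
    H[0, :] = 1.0 / np.sqrt(N)                 # h_0(x) = 1/sqrt(N)
    x = np.arange(N) / N
    for k in range(1, N):
        p = int(np.log2(k))                    # largest power of 2 <= k
        q = k - 2 ** p + 1                     # k = 2^p + q - 1
        lo, mid, hi = (q - 1) / 2 ** p, (q - 0.5) / 2 ** p, q / 2 ** p
        H[k, (x >= lo) & (x < mid)] = 2 ** (p / 2) / np.sqrt(N)
        H[k, (x >= mid) & (x < hi)] = -(2 ** (p / 2)) / np.sqrt(N)
    return H

Hr = haar_matrix(8)
assert np.allclose(Hr @ Hr.T, np.eye(8))       # orthonormal rows
```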
  • 27. 13.5.4 The Haar Transform. [Figure: Haar basis functions for N = 8.]
  • 28. [Figure: basis images of the Haar transform.]
  • 29. [Figure: basis images of the Hadamard transform.]
  • 30. [Figure: basis images of the DCT.]
  • 31. 13.6 Eigenvector-based transforms. Eigenvalues and eigenvectors: for an N-by-N matrix A, a scalar λ is called an eigenvalue of A if $|A - \lambda I| = 0$. A vector v that satisfies $Av = \lambda v$ is called an eigenvector of A.
  • 32. 13.6 Eigenvector-based transforms. 13.6.2 Principal-Component Analysis. Suppose x is an N-by-1 random vector. The mean vector can be estimated from its L samples as $m_x \approx \frac{1}{L} \sum_{l=1}^{L} x_l$ and its covariance matrix can be estimated by $C_x = \mathcal{E}\{(x - m_x)(x - m_x)^t\} \approx \frac{1}{L} \sum_{l=1}^{L} x_l x_l^t - m_x m_x^t$. The matrix $C_x$ is real and symmetric.
  • 33. 13.6 Eigenvector-based transforms. Let A be a matrix whose rows are the eigenvectors of $C_x$. Then $C_y = A C_x A^t$ is a diagonal matrix having the eigenvalues of $C_x$ along its diagonal, i.e., $C_y = \begin{bmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_N \end{bmatrix}$. Let the matrix A define a linear transformation by $y = A(x - m_x)$.
  • 34. 13.6 Eigenvector-based transforms. It can be shown that the covariance matrix of the vector y is $C_y = A C_x A^t$. Since $C_y$ is a diagonal matrix, its off-diagonal elements are zero, so the elements of y are uncorrelated. Thus the linear transformation removes the correlation among the variables. The reverse transform $x = A^t y + m_x = A^{-1} y + m_x$ can reconstruct x from y.
  • 35. 13.6 Dimension Reduction. We can reduce the dimensionality of the y vector by ignoring one or more of the eigenvectors that have small eigenvalues. Let B be the M-by-N matrix (M < N) formed by discarding the lower N − M rows of A, and let $m_x = 0$ for simplicity; then the transformed vector $\hat{y} = Bx$ has smaller dimension.
  • 36. 13.6 Dimension Reduction. The vector x can be reconstructed (approximately) by $\hat{x} = B^t \hat{y}$. The mean square error is $\text{MSE} = \sum_{k=M+1}^{N} \lambda_k$. The vector $\hat{y}$ is called the principal component of the vector x.
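A sketch of the whole pipeline on synthetic data (variable names are illustrative, and the mean term is kept rather than assuming $m_x = 0$): estimate the covariance, keep the M strongest eigenvectors, and compare the measured reconstruction error with the sum of the discarded eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, M = 8, 100_000, 3
X = rng.normal(size=(L, N)) @ rng.normal(size=(N, N))  # correlated samples

m_x = X.mean(axis=0)
C_x = (X - m_x).T @ (X - m_x) / L             # covariance estimate

eigvals, eigvecs = np.linalg.eigh(C_x)        # ascending eigenvalues
A = eigvecs[:, ::-1].T                        # rows = eigenvectors, largest first
B = A[:M]                                     # keep the top M rows

Y = (X - m_x) @ B.T                           # y = B(x - m_x), reduced to M dims
X_hat = Y @ B + m_x                           # x_hat = B^t y + m_x

mse = ((X - X_hat) ** 2).sum(axis=1).mean()
print(mse, eigvals[::-1][M:].sum())           # the two values match
```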
  • 37. 13.6.3 The Karhunen-Loeve Transform. The K-L transform is defined as $y = A(x - m_x)$. The dimension-reducing capability of the K-L transform makes it quite useful for image compression. When the image is a first-order Markov process, where the correlation between pixels decreases with their separation distance, the basis images for the K-L transform can be written explicitly.
  • 38. 13.6.3 The Karhunen-Loeve Transform. When the correlation between adjacent pixels approaches unity, the K-L basis functions approach those of the discrete cosine transform. Thus, the DCT is a good approximation to the K-L transform.
  • 39. 13.6.4 The SVD Transform. Singular value decomposition: any N-by-N matrix A can be decomposed as $A = U \Lambda V^t$, where the columns of U and V are the eigenvectors of $AA^t$ and $A^t A$, respectively, and $\Lambda$ is an N-by-N diagonal matrix containing the singular values of A. The forward singular value decomposition (SVD) transform is $\Lambda = U^t A V$, and the inverse SVD transform is $A = U \Lambda V^t$.
  • 40. 13.6.4 The SVD transform. For the SVD transform, the kernel matrices are image-dependent. The SVD has very high image-compression power: lossless compression by at least a factor of N can be obtained, and even higher lossy compression ratios may be achieved by ignoring some of the small singular values.
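A sketch of rank-r SVD approximation of an 8-by-8 block, showing that the discarded singular values account exactly for the Frobenius error:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((8, 8))

U, s, Vt = np.linalg.svd(A)                   # A = U diag(s) V^t
assert np.allclose(A, U @ np.diag(s) @ Vt)

r = 3                                         # keep the 3 largest singular values
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

err = np.linalg.norm(A - A_r)                 # Frobenius norm
assert np.allclose(err, np.sqrt((s[r:] ** 2).sum()))
```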
  • 41. The SVD transform. [Figure 13-7.]
  • 42. 13.7 Transform domain filtering. As in the Fourier transform domain, filters can be designed in other transform domains. Transform-domain filtering involves modifying the transform coefficients prior to reconstructing the image via the inverse transform. If either the desired components or the undesired components of the image resemble one or a few of the basis images of a particular transform, then that transform will be useful in separating the two.
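A sketch of the coefficient-modification idea using the DCT as the transform (one concrete choice; the mask that keeps only low-frequency basis images is illustrative):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(2)
image = rng.random((8, 8))

G = dctn(image, norm="ortho")                 # forward transform

mask = np.zeros_like(G)
mask[:4, :4] = 1.0                            # keep low-frequency coefficients
filtered = idctn(G * mask, norm="ortho")      # inverse transform
```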
  • 43. 13.7 Transform domain filtering. The Haar transform is a good candidate for detecting vertical and horizontal lines and edges.
  • 44. 13.7 Transform domain filtering. [Figure 13-8.]
  • 45. 13.7 Transform domain filtering. [Figure 13-9.]
