
04 image enhancement edge detection

Digital image processing - Download the file at http://rumah-belajar.org


  1. Image Enhancement & Edge Detection. Tati R Mengko
  2. Introduction. Image enhancement: accentuation and sharpening of image features (edges, boundaries, contrast) to improve the visual appearance of an image and ease its analysis. Enhancement does not increase information content, but improves the dynamic range of features so that they are easier to detect. Major challenge in image enhancement: quantifying the criterion by which a feature counts as enhanced.
  3. Image Enhancement Techniques. Point operation: contrast stretching, noise clipping, window slicing, histogram modeling. Spatial operation: noise smoothing, median filtering, unsharp masking, low-, high-, and band-pass filtering, zooming. Transform operation: linear filtering, root filtering, homomorphic filtering. Pseudocoloring: false coloring, pseudocoloring.
  4. Point Operation. A point operation is a zero-memory operation that maps gray level u ∈ [0, L] to gray level v ∈ [0, L] through a transfer relation v = f(u). Contrast stretching, clipping, thresholding: v = α·u for 0 ≤ u < a; v = β·(u − a) + v_a for a ≤ u < b; v = γ·(u − b) + v_b for b ≤ u ≤ L. The slopes α, β, γ and the breakpoints a, b are determined from the image histogram.
  5. Point Operation: Contrast Stretching. Poor contrast arises from inadequate illumination or sensor non-linearity. Within the contrast-stretched range, the slope of the transfer function is greater than 1. Special cases of contrast stretching: clipping (α = γ = 0) and thresholding (α = γ = 0, a = b). [Plots: transfer functions for stretching, clipping, and thresholding.]
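A minimal MATLAB sketch of the piecewise-linear contrast stretch above; the breakpoints a, b and slopes alpha, beta, gamma below are illustrative values, not taken from the slides, and 'cameraman.tif' is a toolbox demo image used as a stand-in:
%MATLAB (illustrative parameter values)
u = double(imread('cameraman.tif'));       % gray levels in [0, 255]
a = 80; b = 180;                           % breakpoints (assumed)
alpha = 0.5; beta = 2.0; gamma = 0.5;      % slopes (assumed)
va = alpha*a; vb = va + beta*(b - a);      % keep the transfer function continuous
v = zeros(size(u));
v(u < a)          = alpha * u(u < a);
v(u >= a & u < b) = beta  * (u(u >= a & u < b) - a) + va;
v(u >= b)         = gamma * (u(u >= b) - b) + vb;
figure; imagesc(v); colormap(gray);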
  6. Thresholding Example: Binarization. [Figures: red blood cells, grayscale image; red blood cells, binary image.]
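A short MATLAB sketch of binarization by thresholding, assuming the Image Processing Toolbox; Otsu's method (graythresh) stands in for the threshold, which the slide does not specify:
%MATLAB
I = imread('cameraman.tif');       % stand-in for the blood-cell image
level = graythresh(I);             % Otsu threshold in [0, 1]
BW = im2bw(I, level);              % binary image (imbinarize in newer releases)
figure; imshow(BW);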
  7. Thresholding Example: Gamma Correction. [Figures: original image; gamma-corrected image.]
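A brief MATLAB sketch of gamma correction using imadjust; the gamma value 0.5 is an assumed example, not from the slides:
%MATLAB
I = imread('cameraman.tif');
J = imadjust(I, [], [], 0.5);      % gamma < 1 brightens mid-tones, gamma > 1 darkens them
figure(1); imshow(I); figure(2); imshow(J);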
  8. Digital Negative. Reversed gray-level scaling: v = L − u. [Figures: original image and its negative.]
%MATLAB
I = imread('cameraman.tif');
figure(1); imagesc(I); colormap(gray);
figure(2); imagesc(255 - double(I)); colormap(gray);
  9. Intensity Level Slicing. Segment a certain intensity range from the rest of the image. Without background: v = L if a ≤ u ≤ b, 0 otherwise. With background: v = L if a ≤ u ≤ b, u otherwise. [Plots: slicing transfer functions without and with background.]
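A minimal MATLAB sketch of both slicing variants; the range [a, b] is an assumed example:
%MATLAB
u = double(imread('cameraman.tif'));
a = 100; b = 160; L = 255;           % assumed slicing range and maximum gray level
mask = (u >= a) & (u <= b);
v_nobg = L * mask;                   % without background: everything else set to 0
v_bg = u; v_bg(mask) = L;            % with background: everything else kept as-is
figure(1); imagesc(v_nobg); colormap(gray);
figure(2); imagesc(v_bg);   colormap(gray);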
  10. Intensity Level Slicing. [Figures: original image; sliced without background; sliced with background.]
  11. Range Compression and Digital Subtraction. Compress the image intensity range: v = c·log10(1 + |u|), where c is a scaling constant. Digital subtraction: detect differences or gradual intensity changes between images. Example: Digital Subtraction Angiography (DSA).
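A brief MATLAB sketch of logarithmic range compression; the choice of c below simply rescales the output to [0, 255] and is an assumption:
%MATLAB
u = double(imread('cameraman.tif'));
c = 255 / log10(1 + max(abs(u(:))));   % assumed scaling so v stays in [0, 255]
v = c * log10(1 + abs(u));
figure; imagesc(v); colormap(gray);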
  12. Example of Range Compression
  13. Histogram Modeling. An image histogram represents the distribution of gray-level occurrences in the image. Histogram modeling: modification of an image histogram into a desired shape. Histogram equalization: makes the gray-level occurrence frequencies uniformly distributed. [Histograms: before and after equalization.]
I = imread('tire.tif'); figure; imshow(I); figure; imhist(I, 64);
J = histeq(I); figure; imshow(J); figure; imhist(J, 64);
  14. Histogram Examples
  15. Histogram Equalization. Histogram equalization is not appropriate for images with a narrow intensity distribution. [Figures: original image; histogram-equalization result.]
  16. Spatial Operation: Spatial Averaging. Each pixel intensity is replaced by a weighted average of the intensities in its neighborhood: v(m, n) = Σ_(k,l)∈W a(k, l)·y(m − k, n − l). Spatial averaging uses a constant weight a(k, l) = 1/N_W, where N_W is the number of pixels in the filtering window W: v(m, n) = (1/N_W) Σ_(k,l)∈W y(m − k, n − l).
  17. Spatial Operation. Alternative method: every pixel is replaced by a weighted average of itself and its 4 closest neighbors: v(m, n) = 0.5·[y(m, n) + 0.25·{y(m−1, n) + y(m+1, n) + y(m, n−1) + y(m, n+1)}]. Averaging masks: 2×2 window: [1/4 1/4; 1/4 1/4]. 3×3 window: [1/9 1/9 1/9; 1/9 1/9 1/9; 1/9 1/9 1/9]. 5-point weighted averaging: [0 1/8 0; 1/8 1/2 1/8; 0 1/8 0].
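A minimal MATLAB sketch of spatial averaging with the 3×3 mask above; conv2 with the 'same' option keeps the image size:
%MATLAB
y = double(imread('cameraman.tif'));
h = ones(3) / 9;                    % 3x3 averaging mask, a(k,l) = 1/N_W
v = conv2(y, h, 'same');            % spatially averaged (smoothed) image
figure; imagesc(v); colormap(gray);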
  18. Spatial Operation. Spatial averaging is used for smoothing, low-pass filtering, and before subsampling. Noisy image: y(m, n) = u(m, n) + η(m, n), where η(m, n) is white noise with variance σ_η². Output image: v(m, n) = (1/N_W) Σ_(k,l)∈W [u(m − k, n − l) + η(m − k, n − l)], i.e. the spatial average of the clean image plus the spatial average of the noise. If the noise has zero mean, its power is suppressed in proportion to the number of pixels in the filtering window: the variance of the averaged noise is σ_η²/N_W.
  19. Example of Spatial Operation. [Figures: original image; original image + noise.]
  20. Example of Spatial Operation
  21. Blurring Effect due to Averaging
  22. Blurring Effect due to Averaging
  23. Effects of Various Noise Energy
  24. Effects of Various Noise Energy
  25. Median Filter. The last examples show the weakness of the averaging method; the median filter overcomes this problem.
  26. Median Filter
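A short MATLAB sketch comparing averaging and median filtering on salt-and-pepper noise (medfilt2 and imnoise are from the Image Processing Toolbox; the noise density 0.05 is an assumed example):
%MATLAB
I = imread('cameraman.tif');
In = imnoise(I, 'salt & pepper', 0.05);        % impulsive noise
Iavg = conv2(double(In), ones(3)/9, 'same');   % 3x3 averaging: impulses are smeared
Imed = medfilt2(In, [3 3]);                    % 3x3 median: impulses are removed
figure(1); imagesc(Iavg); colormap(gray);
figure(2); imagesc(Imed); colormap(gray);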
  27. Image Sharpening. Enhance delicate structures lost to the blurring effect. Sharpen gray-level differences between neighbouring pixels in an image. High-pass filtering: a shift-invariant operator whose convolution mask has a positive center surrounded by negative numbers, e.g. (1/9)·[−1 −1 −1; −1 8 −1; −1 −1 −1].
  28. High-pass Filtering. Relation between the high-pass filtered image g, the original image f, and its low-pass filtered version: g(m, n) = f(m, n) − lowpass(f(m, n))
  29. High-boost Filtering (Unsharp Masking). Subtraction of the low-pass filtered image from an amplified original image: g(m, n) = A·f(m, n) − lowpass(f(m, n)) = (A − 1)·f(m, n) + [f(m, n) − lowpass(f(m, n))] = (A − 1)·f(m, n) + highpass(f(m, n)). Produces an edge-enhanced version of the original image. [Figures: original image; high-pass result; high-boost result.]
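A minimal MATLAB sketch of high-boost filtering built from a 3×3 averaging mask as the low-pass step; the boost factor A is an assumed example:
%MATLAB
f = double(imread('cameraman.tif'));
A = 1.5;                                       % boost factor (assumed)
flp = conv2(f, ones(3)/9, 'same');             % low-pass (blurred) version of f
g = A*f - flp;                                 % high-boost: (A-1)*f + highpass(f)
figure; imagesc(g); colormap(gray);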
  30. Derivative Filter. Sharpens image boundaries/edges based on a discrete spatial gradient operator, implemented as a 2-D convolution with h_n, the derivative-filter (edge detector) convolution mask.
  31. Derivative Filter: Roberts Operator. [Figure: original image.]
  32. Derivative Filter: Prewitt & Sobel Operators. [Figures: Sobel result; Prewitt result.]
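A short MATLAB sketch of Sobel edge detection, both via explicit convolution masks and via the toolbox edge function:
%MATLAB
I = double(imread('cameraman.tif'));
hx = [-1 0 1; -2 0 2; -1 0 1];          % Sobel horizontal-gradient mask
hy = hx';                               % Sobel vertical-gradient mask
G = sqrt(conv2(I, hx, 'same').^2 + conv2(I, hy, 'same').^2);   % gradient magnitude
figure(1); imagesc(G); colormap(gray);
figure(2); imshow(edge(uint8(I), 'sobel'));   % Image Processing Toolbox equivalent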
  33. Frequency-domain Method. Enhancement is conducted in the transform domain, followed by an inverse transform to obtain the spatial-domain enhanced image. For an original image U = {u(m, n)} transformed into V = {v(k, l)}: V = A U Aᵀ. The enhancement operation produces V' = {v'(k, l)}, and the spatial-domain enhanced image is U' = A⁻¹ V' (Aᵀ)⁻¹. Generalized filtering: a zero-memory, pixel-to-pixel multiplication v'(k, l) = g(k, l)·v(k, l), where g(k, l) is the zonal mask. Processing chain: u(m, n) → unitary transform A U Aᵀ → v(k, l) → point operation f(·) → v'(k, l) → inverse transform A⁻¹ V' (Aᵀ)⁻¹ → u'(m, n).
  34. Frequency-domain Method. Frequency-domain processing is used to accelerate spatial image filtering: spatial-domain convolution is equivalent to pixel-to-pixel multiplication in the Fourier (FFT) domain, g(m, n) = h(m, n) * f(m, n) ⇔ G(u, v) = H(u, v)·F(u, v).
  35. Ideal Low-pass Filter. H(u, v) = 1 if √(u² + v²) ≤ r0, and 0 if √(u² + v²) > r0. [Figures: original image; LPF with r0 = 57, 36, 26.] Ringing effect: a property of the ideal filter.
  36. Butterworth Low-pass Filter. H(u, v) = 1 / (1 + [√(u² + v²)/r0]^(2n)), where n is the filter order and r0 is the cutoff frequency. [Figures: original image; r0 = 18, 13, 10.] Removes ringing effects.
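A minimal MATLAB sketch of frequency-domain filtering with a Butterworth low-pass filter; the cutoff r0 and order n below are illustrative values:
%MATLAB
f = double(imread('cameraman.tif'));
[M, N] = size(f);
[u, v] = meshgrid(-floor(N/2):ceil(N/2)-1, -floor(M/2):ceil(M/2)-1);
D = sqrt(u.^2 + v.^2);                     % distance from the frequency origin
r0 = 30; n = 2;                            % assumed cutoff frequency and order
H = 1 ./ (1 + (D / r0).^(2*n));            % Butterworth low-pass response
F = fftshift(fft2(f));                     % centered spectrum
g = real(ifft2(ifftshift(H .* F)));        % filtered image back in the spatial domain
figure; imagesc(g); colormap(gray);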
  37. Butterworth Low-pass Filter. False-contour removal and noise suppression. [Figures: false contours due to inadequate quantization; false-contour removal with the Butterworth LPF; original image, noisy image, filtered image.]
  38. Ideal High-pass Filter. H(u, v) = 0 if √(u² + v²) ≤ r0, and 1 if √(u² + v²) > r0. [Figures: original image; HPF with r0 = 18, 36, 26.] Ringing effect: a property of the ideal filter.
  39. Butterworth High-pass Filter. H(u, v) = 1 / (1 + [r0/√(u² + v²)]^(2n)), where n is the filter order and r0 is the cutoff frequency (shown: r0 = 47, n = 2). [Figures: original image; r0 = 47, 36, 81.] Removes ringing effects.
  40. Iterative-based Edge Enhancement. T.L.R. Mengko
  41. Pyramid Edge Detection. An image may contain many strong edges that are not significant because they are short or unconnected. Pyramid edge detection enhances substantial (strong and long) edges while ignoring weak or short ones. Procedure (repetitive shrinkage followed by edge tracking): 1. Shrink the image to quarter size by averaging each group of 4 corresponding pixels. 2. Repeat x times, keeping each generated image. 3. On the smallest image, perform edge detection (e.g. Sobel). 4. Edges found? (a threshold is needed). 5. If yes, perform edge detection on the group of 4 corresponding pixels in the next larger image. 6. Continue through each larger image until the largest image is reached.
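A compact MATLAB sketch of the pyramid idea; the number of levels and the edge-strength threshold are assumptions, and for brevity it masks the full-resolution Sobel result with the upsampled coarse edges rather than tracking each group of 4 pixels individually:
%MATLAB
I = double(imread('cameraman.tif'));
levels = 2; thr = 0.1;                        % assumed pyramid depth and edge threshold
small = I;
for k = 1:levels
    small = conv2(small, ones(2)/4, 'same');  % average 2x2 neighbourhoods
    small = small(1:2:end, 1:2:end);          % then shrink to quarter size
end
coarseEdges = edge(uint8(small), 'sobel', thr);      % detect edges at the coarsest level
mask = imresize(coarseEdges, size(I), 'nearest');    % map coarse edges back to full size
fineEdges = edge(uint8(I), 'sobel') & mask;          % keep only edges supported at the coarse level
figure; imshow(fineEdges);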
