Notes on Image Processing
EDGE DETECTION OF IMAGES
BY: Mohammed Kamel
Supervisor: Prof. R. M. Farouk
DEPARTMENT OF MATHEMATICS & COMPUTER SCIENCE
What is edge detection?
Edges are places in the image with strong intensity contrast, i.e. a sharp difference in intensity between neighboring regions; equivalently, edges are pixels where the image brightness changes abruptly.
Example:
(a) (b)
Figure 1.1
Edge detection: (a) the original image whose edges are to be found, (b) the edges of that image detected with the Canny edge detection method.
Why is Edge Detection Useful?
- Important features can be extracted from the edges of an image (e.g., corners, lines).
- These features are used by higher-level computer vision algorithms (e.g., recognition).
- Lane and direction detection.
Figure 1.2
Figure 1.2: lane and direction detection for navigation.
Effect of Illumination (error)
Reflection:
(a) (b) (c) (d)
Figure 1.3
Figure 1.3: illumination error: (a) original image, (b) the actual edge detection of (a), (c) the same image but with a light source, (d) how the light source affects the detected edges.
Edge Descriptors
Edge direction: perpendicular to the direction of maximum intensity change (i.e., the edge normal).
Edge strength: related to the local image contrast along the normal.
Edge position: the image position at which the edge is located.
Figure 1.4
Figure 1.4: how the edge direction and edge normal are represented in an image.
Modeling Intensity Changes
Step edge: the image intensity abruptly changes from one value on one side of the discontinuity to a different value on the opposite side. Also called a step function or hard edge.
Ramp edge: a step edge where the intensity change is not instantaneous but occurs over a finite distance.
Modeling Intensity Changes (cont'd)
Ridge edge: the image intensity abruptly changes value but then returns to the starting value within some short distance (usually generated by lines).
Roof edge: a ridge edge where the intensity change is not instantaneous but occurs over a finite distance (usually generated by the intersection of two surfaces).
Main Steps in Edge Detection
(1) Smoothing: suppress as much noise as possible without destroying true edges; it brings neighboring pixel values closer together and is also called blurring.
(2) Enhancement: apply differentiation to enhance the quality of edges (sharpening), i.e., apply a filter that responds strongly at edges.
(3) Thresholding: determine which edge pixels should be discarded as noise and which should be retained (i.e., threshold the edge magnitude).
(4) Localization: determine the exact edge location.
** Sub-pixel resolution may be required for some applications, to estimate the location of an edge to better than the spacing between pixels.
Edge Detection Using Derivative
Often, points that lie on an edge are detected by:
(1) Detecting the local maxima or minima of the first derivative.
(2) Detecting the zero-crossings of the second derivative.
Edge Detection Using First Derivative
1D function (not centered at x):
f'(x) = lim_{h→0} [f(x + h) − f(x)] / h ≈ f(x + 1) − f(x)   (h = 1)   Mask: [-1 1]
Centered at x: Mask M = [-1 0 1]
Edge Detection Using Second Derivative
Approximate finding the maxima/minima of the gradient magnitude by finding places where
d²f(x)/dx² = 0
We cannot always find discrete pixels where the second derivative is exactly zero, so we look for zero-crossings instead (see below).
f''(x) = lim_{h→0} [f'(x + h) − f'(x)] / h ≈ f'(x + 1) − f'(x) = f(x + 2) − 2f(x + 1) + f(x)   (h = 1)
Replace x + 1 with x (i.e., centered at x):
f''(x) ≈ f(x + 1) − 2f(x) + f(x − 1)   Mask: [1 -2 1]
- Four cases of zero-crossing: {+, −}, {+, 0, −}, {−, +}, {−, 0, +}
- Slope of zero-crossing {a, −b} is |a + b|.
- To detect "strong" zero-crossings, threshold the slope.
[Figure: a 1D intensity profile f(x) together with f'(x) ≈ f(x + 1) − f(x) (approximating f'(x) at x + 1/2) and f''(x) ≈ f(x + 1) − 2f(x) + f(x − 1) (approximating f''(x) at x).]
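As a concrete illustration, here is a minimal 1D sketch in Python/NumPy (the signal values are made up for the example): it applies the first-derivative mask [-1 1] and the second-derivative mask [1 -2 1] to a step-like profile and reports the zero-crossing of the second derivative.

import numpy as np

# A small step-like 1D intensity profile (hypothetical example data).
f = np.array([0, 0, 0, 0, 20, 20, 20, 20], dtype=float)

# First derivative: f'(x) ~ f(x+1) - f(x); np.convolve flips the kernel, so pass [1, -1].
f1 = np.convolve(f, [1, -1], mode="valid")

# Second derivative: f''(x) ~ f(x+1) - 2 f(x) + f(x-1); the mask [1, -2, 1] is symmetric.
f2 = np.convolve(f, [1, -2, 1], mode="valid")

# Zero-crossings of the second derivative mark candidate edge locations.
zc = np.where(np.sign(f2[:-1]) * np.sign(f2[1:]) < 0)[0]
print("f' :", f1)           # peak of |f'| at the step
print("f'':", f2)
print("zero-crossing of f'' between positions", zc, "and", zc + 1)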
Smoothing
We try to suppress as much noise as possible without smoothing away the meaningful edges. Smoothing brings neighboring pixel values closer together, which softens the image; it is also known as blurring.
Blurring is different from zooming. When you zoom an image by pixel replication with an increasing zoom factor, you also see a blurred image with fewer details, but this is not true blurring: zooming adds new pixels and increases the total number of pixels in the image, whereas blurring keeps the number of pixels unchanged.
Smoothing can be achieved in many ways. The common types of filters used for blurring are:
[1] Mean filter (box filter).
[2] Weighted average filter.
[3] Gaussian filter.
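A minimal sketch of these three blur filters using OpenCV (assuming a grayscale image read from a hypothetical file 'input.png'; kernel sizes and sigma are illustrative):

import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file

# [1] Mean (box) filter: each pixel becomes the average of its 3x3 neighborhood.
mean_blur = cv2.blur(img, (3, 3))

# [2] Weighted average filter: a hand-made 3x3 kernel that favors the center pixel.
w = np.array([[1, 1, 1],
              [1, 2, 1],
              [1, 1, 1]], dtype=np.float32)
weighted_blur = cv2.filter2D(img, -1, w / w.sum())

# [3] Gaussian filter: weights follow a 2D Gaussian with the given sigma.
gauss_blur = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)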
Smoothing (cont'd)
Noise (unwanted signal):
Noise is a random variation of brightness or color information in an image; it is an undesirable by-product of image capture that adds spurious and extraneous information.
Effects of noise
Consider a single row or column of the image; plotting intensity as a function of position gives a 1D signal.
Smoothing (cont'd)
[Figure: a noisy 1D signal f, a smoothing kernel h, the smoothed signal h * f, and its derivative ∂/∂x (h * f).]
Differentiating the noisy signal f directly amplifies the noise, so the edge is hard to find. Solution: smooth first, then differentiate, and look for peaks in ∂/∂x (h * f).
Derivative theorem of convolution
∂/∂x (h * f) = (∂h/∂x) * f   (i.e., saves one operation)
So we can convolve the signal once with the derivative of the smoothing kernel, ∂h/∂x, and look for peaks in the result.
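A small 1D sketch of this idea in Python/NumPy (synthetic noisy step signal; sigma and the noise level are made up): smoothing with a Gaussian h and then differentiating, versus convolving once with the derivative of h. In both cases the edge shows up as a clear peak.

import numpy as np

x = np.arange(-20, 21)
sigma = 2.0
h = np.exp(-x**2 / (2 * sigma**2))
h /= h.sum()                        # Gaussian smoothing kernel
dh = np.diff(h)                     # derivative of the kernel, dh/dx

rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)

two_step = np.diff(np.convolve(f, h, mode="same"))   # smooth, then differentiate
one_step = np.convolve(f, dh, mode="same")           # one convolution with dh/dx

print("two-step peak near index", np.argmax(np.abs(two_step)))
print("one-step peak near index", np.argmax(np.abs(one_step)))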
Thresholding
Determine which edge pixels should be discarded as noise and which should be retained (i.e., threshold the edge magnitude); for example, pick a threshold value and set the larger responses to 1 and the smaller ones to 0.
- Reduce the number of false edges by applying a threshold T: all values below T are changed to 0.
- Selecting a good value for T is difficult:
- Some false edges will remain if T is too low.
- Some edges will disappear if T is too high.
- Some edges will disappear due to softening of the edge contrast by shadows.
Using two thresholds (hysteresis) helps:
* If the gradient at a pixel is above ‘High’, declare it an ‘edge pixel’.
* If the gradient at a pixel is below ‘Low’, declare it a ‘non-edge pixel’.
* If the gradient at a pixel is between ‘Low’ and ‘High’, declare it an ‘edge pixel’ if and only if it is connected to an ‘edge pixel’ directly or via pixels between ‘Low’ and ‘High’.
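A minimal sketch of the single-threshold case in Python/NumPy (mag stands for a gradient-magnitude array computed earlier; the values and T are made up):

import numpy as np

def threshold_edges(mag, T):
    # Keep pixels whose gradient magnitude is at least T (1 = edge, 0 = non-edge).
    return (mag >= T).astype(np.uint8)

mag = np.array([[ 5, 40,  8,  2],
                [ 6, 55, 60,  4],
                [ 3, 12, 45,  1]], dtype=float)
print(threshold_edges(mag, T=30))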
Thresholding (cont'd)
(a) (b)
(c) Figure 1.5 (d)
Figure 1.5: (a) original image, (b) fine scale, high threshold, (c) coarse scale, high threshold, (d) coarse scale, low threshold.
Edge Detection Using First Derivative (Gradient)
The first derivative of an image can be computed using the gradient:
grad(f) = [∂f/∂x, ∂f/∂y]ᵀ
Gradient Representation
The gradient is a vector which has magnitude and direction:
Magnitude(grad(f)) = √((∂f/∂x)² + (∂f/∂y)²)
Direction(grad(f)) = tan⁻¹((∂f/∂y) / (∂f/∂x))
* Magnitude: indicates edge strength.
* Direction: the gradient direction is perpendicular to the edge direction (i.e., along the edge normal).
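These two quantities are computed per pixel from the partial derivatives; a short NumPy sketch (assuming img is a grayscale array, with a made-up vertical step edge as the example):

import numpy as np

def gradient_magnitude_direction(img):
    # np.gradient returns derivatives along axis 0 (rows, y) and axis 1 (columns, x).
    fy, fx = np.gradient(img.astype(float))
    magnitude = np.hypot(fx, fy)      # sqrt(fx^2 + fy^2): edge strength
    direction = np.arctan2(fy, fx)    # gradient angle (normal to the edge)
    return magnitude, direction

img = np.zeros((5, 6)); img[:, 3:] = 100.0   # a vertical step edge
mag, ang = gradient_magnitude_direction(img)
print(mag.round(1))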
Approximate Gradient
Approximate the gradient using finite differences:
∂f/∂x = lim_{h→0} [f(x + h, y) − f(x, y)] / h
∂f/∂y = lim_{h→0} [f(x, y + h) − f(x, y)] / h
With hx = 1:  fx ≈ f(x + 1, y) − f(x, y)
With hy = 1:  fy ≈ f(x, y + 1) − f(x, y)
Cartesian vs pixel coordinates:
- j corresponds to the x-direction: ∂f/∂x ≈ f(i, j + 1) − f(i, j)
- i corresponds to the y-direction: ∂f/∂y ≈ f(i, j) − f(i + 1, j)
Gradient in the y-direction (sensitive to horizontal edges):
∂f/∂y ≈ [f(x, y + Dy) − f(x, y)] / Dy = f(x, y + 1) − f(x, y)   (Dy = 1)
Gradient in the x-direction (sensitive to vertical edges):
∂f/∂x ≈ [f(x + Dx, y) − f(x, y)] / Dx = f(x + 1, y) − f(x, y)   (Dx = 1)
[Figure: an edge in the x-direction and an edge in the y-direction.]
We can implement ∂f/∂x and ∂f/∂y using the following masks:
∂f/∂x: [-1 1]   (a good approximation at (x + 1/2, y))
∂f/∂y: [-1 1]ᵀ  (a good approximation at (x, y + 1/2))
A different approximation of the gradient:
∂f/∂x (x, y) ≈ f(x, y) − f(x + 1, y + 1)
∂f/∂y (x, y) ≈ f(x + 1, y) − f(x, y + 1)
Another Approximation
Consider the arrangement of pixels about the pixel [i, j] in a 3 x 3 neighborhood:
a0 a1 a2
a7 [i,j] a3
a6 a5 a4
The partial derivatives can be computed by:
Mx = (a2 + c·a3 + a4) − (a0 + c·a7 + a6)
My = (a6 + c·a5 + a4) − (a0 + c·a1 + a2)
The constant c controls the emphasis given to pixels closer to the center of the mask.
Roberts Operator
∂f/∂x = f(i, j) − f(i + 1, j + 1)
∂f/∂y = f(i + 1, j) − f(i, j + 1)
This approximation can be implemented by the following masks:
Mx = |  1  0 |     My = |  0  1 |
     |  0 -1 |          | -1  0 |
(Note: Mx and My are approximations at (i + 1/2, j + 1/2).)
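A minimal NumPy sketch of the Roberts cross differences (assuming img is a grayscale array; the diagonal test image is made up):

import numpy as np

def roberts_edges(img):
    f = img.astype(float)
    gx = f[:-1, :-1] - f[1:, 1:]    # f(i, j) - f(i+1, j+1)
    gy = f[1:, :-1] - f[:-1, 1:]    # f(i+1, j) - f(i, j+1)
    return np.hypot(gx, gy)         # edge strength at (i + 1/2, j + 1/2)

img = np.tri(6) * 100.0             # a diagonal step edge
print(roberts_edges(img).round(1))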
Prewitt Operator
Setting c = 1, we get the Prewitt operator (Mx and My are approximations at (i, j)):
Mx = | -1  0  1 |     My = | -1 -1 -1 |
     | -1  0  1 |          |  0  0  0 |
     | -1  0  1 |          |  1  1  1 |
Flow chart
Both Prewitt masks are separable into an averaging step and a derivative step:
Image → average smoothing along y (column kernel [1 1 1]ᵀ) → derivative filtering along x (row kernel [-1 0 1]) → edges in x, i.e. Mx.
Image → average smoothing along x (row kernel [1 1 1]) → derivative filtering along y (column kernel [-1 0 1]ᵀ) → edges in y, i.e. My.
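A minimal sketch of Prewitt filtering with SciPy (correlate applies the mask without flipping it; the test image is made up):

import numpy as np
from scipy.ndimage import correlate

PREWITT_MX = np.array([[-1, 0, 1],
                       [-1, 0, 1],
                       [-1, 0, 1]], dtype=float)   # derivative in x, averaging in y
PREWITT_MY = PREWITT_MX.T                           # derivative in y, averaging in x

def prewitt_edges(img):
    f = img.astype(float)
    gx = correlate(f, PREWITT_MX)
    gy = correlate(f, PREWITT_MY)
    return np.hypot(gx, gy)          # per-pixel gradient magnitude (c = 1)

img = np.zeros((6, 6)); img[:, 3:] = 100.0   # a vertical step edge
print(prewitt_edges(img).round(0))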
Sobel Operator
Setting c = 2, we get the Sobel operator:
Mx = | -1  0  1 |     My = | -1 -2 -1 |
     | -2  0  2 |          |  0  0  0 |
     | -1  0  1 |          |  1  2  1 |
Flow chart
The Sobel masks are likewise separable:
Image → average smoothing along y (column kernel [1 2 1]ᵀ) → derivative filtering along x (row kernel [-1 0 1]) → edges in x, i.e. Mx.
Image → average smoothing along x (row kernel [1 2 1]) → derivative filtering along y (column kernel [-1 0 1]ᵀ) → edges in y, i.e. My.
[Diagram: the input image I is convolved with the Sobel masks to give dI/dx = I * Mx and dI/dy = I * My; the two responses are squared, summed, and square-rooted to give the gradient magnitude, which is then thresholded to give the edges.]
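OpenCV provides the Sobel masks directly; a short usage sketch (the file name, sigma, and threshold are illustrative):

import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
blurred = cv2.GaussianBlur(img, (5, 5), 1.0)           # smooth before differentiating

gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)     # derivative in x (vertical edges)
gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)     # derivative in y (horizontal edges)

magnitude = np.hypot(gx, gy)
edges = (magnitude > 100).astype(np.uint8) * 255        # hand-picked threshold T
cv2.imwrite("sobel_edges.png", edges)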
Edge Detection Steps Using Gradient
A) Smooth the input image: F(x, y) = f(x, y) * G(x, y)
B) Fx = F(x, y) * Mx(x, y)
C) Fy = F(x, y) * My(x, y)
D) Magnitude(x, y) = √(Fx² + Fy²)
E) Direction(x, y) = tan⁻¹(Fy / Fx)
F) If Magnitude(x, y) > T, then (x, y) is a possible edge point.
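A compact sketch of steps A-F in Python/NumPy/SciPy (the Sobel masks, sigma, and threshold are illustrative choices, not values from the slides):

import numpy as np
from scipy.ndimage import gaussian_filter, correlate

def gradient_edge_detection(img, sigma=1.0, T=50.0):
    mx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)   # Sobel Mx
    my = mx.T                                                          # Sobel My
    F = gaussian_filter(img.astype(float), sigma)    # A) smooth with a Gaussian
    Fx = correlate(F, mx)                             # B) response to Mx
    Fy = correlate(F, my)                             # C) response to My
    magnitude = np.hypot(Fx, Fy)                      # D) edge strength
    direction = np.arctan2(Fy, Fx)                    # E) gradient direction
    return magnitude > T, magnitude, direction        # F) possible edge points

img = np.zeros((40, 40)); img[10:30, 10:30] = 200.0   # a bright square on a dark background
edges, mag, ang = gradient_edge_detection(img)
print(edges.sum(), "possible edge points")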
(a) (b) (c) (d) (e)
Figure 1.6
Figure 1.6: (a) original image, (b) Sobel 3x3 result, (c) Sobel 3x3 grayscale, (d) Prewitt 3x3 result, (e) Prewitt 3x3 grayscale.
Marr-Hildreth Edge Detector
The derivative operators presented so far are very sensitive to noise. To filter the noise before enhancement, Marr and Hildreth proposed combining a Gaussian filter with the Laplacian: the Laplacian of Gaussian (LoG).
The fundamental steps of the LoG edge detector are:
[1] Smooth the image with a Gaussian filter:
S = G * I
where S is the smoothed image, I is the original image, and G is the Gaussian filter
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
[2] Apply the Laplacian:
∇²S = ∂²S/∂x² + ∂²S/∂y²
Note: ∇ (del) is used for the gradient or derivative; ∇² is used for the Laplacian.
Marr-Hildreth Edge Detector (cont'd)
[3] Use the Laplacian of Gaussian (LoG) directly:
∇²S = ∇²(G * I) = (∇²G) * I
where
∇²G(x, y) = ((x² + y² − 2σ²) / σ⁴) · G(x, y)
(The Laplacian also appears in mechanics, electromagnetics, wave theory, quantum mechanics, and the Laplace equation.)
[4] Find zero crossings:
(a) Scan along each row and record an edge point at the location of each zero-crossing.
(b) Repeat the above step along each column.
- Four cases of zero-crossing: {+, −}, {+, 0, −}, {−, +}, {−, 0, +}
- Slope of zero-crossing {a, −b} is |a + b|.
- To detect "strong" zero-crossings, threshold the slope.
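A minimal sketch of steps [1]-[4] using SciPy's Laplacian-of-Gaussian filter and a simple sign-change test for zero-crossings (sigma, the slope threshold, and the test image are illustrative):

import numpy as np
from scipy.ndimage import gaussian_laplace

def marr_hildreth(img, sigma=2.0, slope_thresh=1.0):
    log = gaussian_laplace(img.astype(float), sigma)   # steps [1]-[3]: (LoG) * I
    edges = np.zeros_like(log, dtype=bool)
    # Step [4a]: sign changes along each row; keep "strong" crossings via the slope |a + b|.
    row_zc = (np.sign(log[:, :-1]) * np.sign(log[:, 1:]) < 0) & \
             (np.abs(log[:, :-1] - log[:, 1:]) > slope_thresh)
    edges[:, :-1] |= row_zc
    # Step [4b]: repeat along each column.
    col_zc = (np.sign(log[:-1, :]) * np.sign(log[1:, :]) < 0) & \
             (np.abs(log[:-1, :] - log[1:, :]) > slope_thresh)
    edges[:-1, :] |= col_zc
    return edges

img = np.zeros((60, 60)); img[20:40, 20:40] = 200.0    # a bright square on a dark background
print(marr_hildreth(img).sum(), "edge pixels")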
The Separability of LoG
Similar to the separability of the Gaussian filter, the two-dimensional Gaussian can be separated into two one-dimensional Gaussians.
[Diagram: the image is filtered with G(x) and then G(y) to give the smoothed image.]
Laplacian of Gaussian Filtering
[Diagram: the image is filtered with Gxx(x)·G(y) and with G(x)·Gyy(y); the two responses are summed to give ∇²S.]
Canny Edge Detector
Canny has shown that the first derivative of the Gaussian closely approximates the operator that optimizes the product of signal-to-noise ratio and localization (i.e., the analysis is based on "step edges" corrupted by "Gaussian noise").
J. Canny, "A Computational Approach to Edge Detection," IEEE Trans. Pattern Analysis and Machine Intelligence, 8(6):679-698, 1986.
Steps of the Canny edge detector:
[1] Compute fx and fy:
fx = ∂/∂x (G * f) = (∂G/∂x) * f = Gx * f
fy = ∂/∂y (G * f) = (∂G/∂y) * f = Gy * f
where G(x, y) is the Gaussian function,
Gx(x, y) = ∂G(x, y)/∂x is the derivative of G(x, y) with respect to x, and
Gy(x, y) = ∂G(x, y)/∂y is the derivative of G(x, y) with respect to y.
[2] Compute the gradient magnitude and direction: Magnitude(i, j) = √(fx² + fy²)
[3] Apply non-maxima suppression.
[4] Apply hysteresis thresholding / edge linking.
Canny Edge Detector Steps
1. Noise Reduction: since edge detection is susceptible to noise in the image, the first step is to remove the noise with a 5x5 Gaussian filter.
2. Finding the Intensity Gradient of the Image: the smoothed image is then filtered with a Sobel kernel in both the horizontal and vertical directions to get the first derivatives Gx and Gy. From these two images, we can find the edge gradient and direction for each pixel as follows:
Edge Gradient G = √(Gx² + Gy²)
Angle θ = tan⁻¹(Gy / Gx)
3. Non-maxima suppression: after getting the gradient magnitude and direction, a full scan of the image is done to remove any unwanted pixels that may not constitute an edge. For this, every pixel is checked to see whether it is a local maximum in its neighborhood in the direction of the gradient; a sketch follows below.
Consider a point A on an edge (in the vertical direction). The gradient direction is normal to the edge, and points B and C lie along the gradient direction. Point A is therefore compared with points B and C to see if it forms a local maximum. If so, it is kept for the next stage; otherwise, it is suppressed (set to zero). In short, the result is a binary image with "thin edges".
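A simplified NumPy sketch of non-maxima suppression: the gradient direction at each pixel is quantized to one of four orientations, and the pixel is kept only if its magnitude is at least that of its two neighbors along the gradient (this is one common way to implement the step; details vary between implementations).

import numpy as np

def non_maxima_suppression(mag, ang):
    # mag: gradient magnitude, ang: gradient direction in radians.
    H, W = mag.shape
    out = np.zeros_like(mag)
    angle = np.rad2deg(ang) % 180
    q = np.zeros_like(angle)                       # quantized direction: 0, 45, 90 or 135 degrees
    q[(angle >= 22.5) & (angle < 67.5)] = 45
    q[(angle >= 67.5) & (angle < 112.5)] = 90
    q[(angle >= 112.5) & (angle < 157.5)] = 135
    offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            di, dj = offsets[int(q[i, j])]
            b, c = mag[i + di, j + dj], mag[i - di, j - dj]   # neighbors B and C along the gradient
            if mag[i, j] >= b and mag[i, j] >= c:             # point A is a local maximum
                out[i, j] = mag[i, j]
    return out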
4. Hysteresis Thresholding (hysteresis: a lag or momentum factor).
A single threshold T gives
E(x, y) = 1 if |∇f(x, y)| ≥ T, and E(x, y) = 0 otherwise.
- This can only select "strong" edges and does not guarantee "continuity".
- For "maybe" edges, decide on the edge if a neighboring pixel is a strong edge.
Hysteresis thresholding uses two thresholds: a low threshold tl and a high threshold th (usually th = 2·tl):
|∇f(x, y)| ≥ th           definitely an edge
tl ≤ |∇f(x, y)| < th      maybe an edge, depends on context
|∇f(x, y)| < tl           definitely not an edge
(a) (b) (c) (d) (e)
Figure 1.7
Figure 1.7: (a) original image, (b) grayscale image, (c) gradient magnitude, (d) thresholded gradient magnitude, (e) result after non-maxima suppression (thinning).
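In practice the whole Canny pipeline (smoothing, Sobel gradients, non-maxima suppression, and hysteresis) is available as a single OpenCV call; a short usage sketch with illustrative thresholds (here th = 2·tl):

import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)           # step 1: noise reduction

# Steps 2-4: gradients, non-maxima suppression, hysteresis with low/high thresholds.
edges = cv2.Canny(blurred, threshold1=100, threshold2=200)
cv2.imwrite("canny_edges.png", edges)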
Criteria for Optimal Edge Detection
(A) Good detection: Minimize the probability of false positives (i.e., spurious edges).
Minimize the probability of false negatives (i.e., missing real edges).
(B) Good localization: Detected edges must be as close as possible to the true edges.
(C) Single response: Minimize the number of local maxima around the true edge.
Appendix
Sobel edge detection
Gaussian filters: Gaussian blur (smoothing), Gaussian with convolution
Smoothing filter tool
Marr-Hildreth edge detector
References
[1] J. F. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Machine Intell., vol. 8, no. 6, pp. 679-698, Nov. 1986.
[2] Classical feature detection:
URL: http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/OWENS/LECT6/node2.html
[3] Edge detection tutorial URL: http://www.pages.drexel.edu/~weg22/edge.html
[4] Canny edge detection tutorial URL: http://www.pages.drexel.edu/~weg22/can_tut.html
[5] Sobel edge detection algorithm URL: http://www.dai.ed.ac.uk/HIPR2/sobel.htm
[6] Gaussian filtering URL: http://robotics.eecs.berkeley.edu/~mayi/imgproc/gademo.html
[7] URL: http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/canny_detector/canny_detector.h
[19] URL: http://www.mathworks.com/help/coder/examples/edge-detection-on-images.html
[20] URL: https://www.youtube.com/watch?v=q6fn-i16h20
[21] URL: https://www.youtube.com/watch?v=q6fn-i16h20
[22] URL: https://www.youtube.com/watch?v=CuOoz0eLmG8
[23] URL: http://northstar-www.dartmouth.edu/doc/idl/html_6.2/Smoothing_an_Image.html
[24] URL: http://en.wikipedia.org/wiki/Canny_edge_detector
[25] URL: http://en.wikipedia.org/wiki/Edge_detection
[26] URL: http://www.mathworks.com/discovery/edge-detection.html
The End