Michal Erel's SIFT presentation

Michal's presentation on David Lowe's Scale Invariant Feature Transform (IJCV 2004)

Transcript:
1. SIFT: Scale Invariant Feature Transform
Presenter: Michal Erel
- David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, 60, 2 (2004), pp. 91-110
2. Object Recognition
- Find a particular object we've encountered before.
- Search for local features based on the appearance of the object at particular interest points.
3. Why do we care about matching features?
- Object recognition
- Location recognition
- Image alignment & matching
- Stereo matching
- Robot self-localization
- Image retrieval by similarity (from a large database)
4. Location Recognition
5. Panoramic Image Matching
6. We want invariance!!!
- Good features should be robust to all sorts of nastiness that can occur between images.
7-11. Types of invariance
- Illumination
- Scale
- Rotation
- Affine
- Perspective
12. SIFT: Scale Invariant Feature Transform
The features are:
- Invariant to image scaling
- Invariant to rotation
- Partially invariant to:
  - change in illumination
  - change in 3D camera viewpoint
  - occlusion, clutter, or noise
13. Step I: Detection of Scale-Space Extrema
- Identify locations and scales that can be repeatably assigned under different views of the same object.
- Scale-space function!
14. Scale-Space
15. Scale-Space
To downsample: take every second pixel in each row and column (another approach: average each 2x2 block of pixels).
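As a concrete illustration of slides 14-15, here is a minimal sketch of building a Gaussian scale-space pyramid, assuming a grayscale numpy image. The octave count, scales per octave, and base sigma are illustrative choices (Lowe uses sigma = 1.6 with 3 scales per octave), and the sigma bookkeeping across octaves is simplified relative to the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_gaussian_pyramid(image, num_octaves=4, scales_per_octave=3, base_sigma=1.6):
    """Build a Gaussian scale-space pyramid: each octave is a list of
    progressively more blurred images; between octaves the image is
    downsampled by taking every second pixel in each row and column."""
    k = 2 ** (1.0 / scales_per_octave)   # multiplicative step between scales
    pyramid = []
    current = image.astype(np.float64)
    for _ in range(num_octaves):
        octave = [gaussian_filter(current, base_sigma * k ** i)
                  for i in range(scales_per_octave + 3)]
        pyramid.append(octave)
        # The level with twice the base sigma seeds the next octave.
        current = octave[scales_per_octave][::2, ::2]
    return pyramid
```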
16. Difference of Gaussians (DoG)
(Figure: the image blurred with sigma 2, the image blurred with sigma 4, and their difference.)
17-18. Scale-Space with DoG
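Continuing the sketch above (build_gaussian_pyramid is the illustrative helper defined earlier), the DoG images are simply differences of adjacent pyramid levels:

```python
def build_dog_pyramid(gaussian_pyramid):
    """Subtract adjacent Gaussian levels to obtain Difference-of-Gaussians
    images, one fewer per octave than the number of Gaussian levels."""
    return [[octave[i + 1] - octave[i] for i in range(len(octave) - 1)]
            for octave in gaussian_pyramid]
```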
19. Local Extrema Detection
Compare each pixel to:
- 8 neighbours in the current image
- 9 neighbours in the scale above
- 9 neighbours in the scale below
Take the pixel if it is larger or smaller than all of them. Such a point is called a keypoint.
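A sketch of the 26-neighbour comparison described on this slide, assuming three adjacent DoG images from the same octave as numpy arrays (ties with neighbours are accepted here for simplicity):

```python
import numpy as np

def is_local_extremum(dog_below, dog_current, dog_above, r, c):
    """True if pixel (r, c) of dog_current is >= or <= all 26 neighbours:
    8 in the current image plus 9 each in the scales above and below."""
    value = dog_current[r, c]
    cube = np.stack([dog_below[r-1:r+2, c-1:c+2],
                     dog_current[r-1:r+2, c-1:c+2],
                     dog_above[r-1:r+2, c-1:c+2]])
    return value >= cube.max() or value <= cube.min()
```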
20. Keypoints
Too many keypoints; some are unstable.
21. Step II: Keypoint Localization
Reject points with low contrast. Reject points that are localized along an edge.
22. Step II: Keypoint Localization
- Fit each keypoint to nearby data for location, scale, and ratio of principal curvatures.
- Reject points with low contrast and points that are localized along an edge.
23. Keypoint Localization
- Initial approach: locate keypoints at the location and scale of the central sample point.
- New approach: calculate the interpolated location of the extremum. This improves matching and stability.
24. Keypoint Localization
Use a quadratic Taylor expansion of the scale-space function D, with the origin at the sample point (x is the offset from this point):
D(x) = D + (dD/dx)^T x + (1/2) x^T (d^2 D/dx^2) x
Setting the derivative to zero gives the extremum offset x_hat = -(d^2 D/dx^2)^(-1) (dD/dx).
If x_hat is larger than 0.5 in any dimension, the extremum lies closer to a different sample point (need to recalculate there); otherwise, add the offset to the sample point location to get the interpolated extremum.
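A minimal 2D sketch of this fit, using central finite differences for the gradient and Hessian of D; real SIFT fits in 3D over x, y, and scale, so this is illustrative only:

```python
import numpy as np

def interpolate_extremum_2d(dog, r, c):
    """Fit a quadratic to the DoG values around (r, c) and solve
    x_hat = -H^{-1} g for the sub-pixel offset (assumes H is invertible)."""
    g = np.array([(dog[r, c+1] - dog[r, c-1]) / 2.0,
                  (dog[r+1, c] - dog[r-1, c]) / 2.0])
    dxx = dog[r, c+1] - 2*dog[r, c] + dog[r, c-1]
    dyy = dog[r+1, c] - 2*dog[r, c] + dog[r-1, c]
    dxy = (dog[r+1, c+1] - dog[r+1, c-1] - dog[r-1, c+1] + dog[r-1, c-1]) / 4.0
    H = np.array([[dxx, dxy], [dxy, dyy]])
    offset = -np.linalg.solve(H, g)
    # Interpolated extremum value, used by the contrast test on the next slide.
    value = dog[r, c] + 0.5 * g.dot(offset)
    return offset, value
```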
25. Reject Low Contrast Keypoints
Calculate the value of D at the interpolated extremum, D(x_hat) = D + (1/2) (dD/dx)^T x_hat.
If |D(x_hat)| < 0.03 (with image values in [0,1]), discard the keypoint for having low contrast.
26. Reject Low Contrast Keypoints
27. Eliminate Edge Responses
The DoG function can have a strong response along edges, even when the location along the edge is unstable to small amounts of noise.
Edge signature: a large principal curvature across the edge, but a small one in the perpendicular direction.
Note: it is easy to show that the two principal curvatures (i.e., the min and max curvatures) always lie along directions perpendicular to each other. In general, finding the principal directions amounts to solving an n x n eigenvalue problem.
28. Eliminate Edge Responses
No need to explicitly calculate the eigenvalues of the Hessian H: we only need their ratio!
Let a be the small eigenvalue, b the large eigenvalue, and r = b/a the ratio of large to small. Then
Tr(H)^2 / Det(H) = (a + b)^2 / (a b) = (r + 1)^2 / r,
which is at a minimum when a = b (r = 1) and increases as the ratio increases.
29. Eliminate Edge Responses
To check that the ratio of the principal curvatures is below a threshold r, we only need to check:
Tr(H)^2 / Det(H) < (r + 1)^2 / r
Use r = 10 to reject keypoints that lie along an edge.
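A sketch of this test: compute the 2x2 Hessian of D at the keypoint by finite differences and compare Tr(H)^2 / Det(H) against (r+1)^2 / r:

```python
def passes_edge_test(dog, r, c, edge_ratio=10.0):
    """Reject edge keypoints: keep (r, c) only if the ratio of principal
    curvatures is below edge_ratio (r = 10 on the slide)."""
    dxx = dog[r, c+1] - 2*dog[r, c] + dog[r, c-1]
    dyy = dog[r+1, c] - 2*dog[r, c] + dog[r-1, c]
    dxy = (dog[r+1, c+1] - dog[r+1, c-1] - dog[r-1, c+1] + dog[r-1, c-1]) / 4.0
    trace, det = dxx + dyy, dxx * dyy - dxy * dxy
    if det <= 0:  # curvatures of opposite sign: not a well-formed extremum
        return False
    return trace**2 / det < (edge_ratio + 1)**2 / edge_ratio
```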
30. Reject Near-Edge Keypoints
31. Example: 832 initial keypoints; 729 keypoints after eliminating low contrast; 536 keypoints after eliminating edge keypoints.
32. Step III: Orientation Assignment
Each keypoint is assigned one or more orientations, based on local image gradient directions. The data is transformed relative to the assigned orientation, scale, and location, hence providing invariance to these transformations.
33. Gradient Calculation
The scale of the keypoint is used to select the Gaussian image L we'll work on (the image with the closest scale), so all computations are performed in a scale-invariant manner. We calculate gradient magnitude and orientation using pixel differences:
m(x, y) = sqrt((L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2)
theta(x, y) = atan2(L(x, y+1) - L(x, y-1), L(x+1, y) - L(x-1, y))
34-35. Gradient Calculation
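A vectorized sketch of the pixel-difference gradient from slide 33, assuming a Gaussian-smoothed numpy image L with rows indexed by y and columns by x:

```python
import numpy as np

def gradient_magnitude_orientation(L):
    """Per-pixel gradient magnitude and orientation from pixel differences."""
    dx = np.zeros_like(L)
    dy = np.zeros_like(L)
    dx[:, 1:-1] = L[:, 2:] - L[:, :-2]   # L(x+1, y) - L(x-1, y)
    dy[1:-1, :] = L[2:, :] - L[:-2, :]   # L(x, y+1) - L(x, y-1)
    magnitude = np.sqrt(dx**2 + dy**2)
    orientation = np.arctan2(dy, dx)     # radians in (-pi, pi]
    return magnitude, orientation
```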
36. Orientation Histogram
Build an orientation histogram with 36 bins (each bin covers 10 degrees). Each sample added to the histogram is weighted by its gradient magnitude and by a Gaussian-weighted circular window with sigma equal to 1.5 times the keypoint scale.
37. Orientation Histogram
Detect the highest peak and any local peaks that are within 80% of the highest peak. Use these to assign one or more orientations to the keypoint.
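A sketch of the orientation assignment on slides 36-37, given the magnitude and orientation arrays from the previous sketch; the 8-pixel window radius is an illustrative choice, not a value fixed by the slides:

```python
import numpy as np

def dominant_orientations(magnitude, orientation, r, c, scale, radius=8):
    """36-bin orientation histogram around (r, c), weighted by gradient
    magnitude and a Gaussian window with sigma = 1.5 * scale; returns the
    centres (in degrees) of all local peaks within 80% of the highest."""
    hist = np.zeros(36)
    sigma = 1.5 * scale
    rows, cols = magnitude.shape
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            rr, cc = r + dr, c + dc
            if not (0 <= rr < rows and 0 <= cc < cols):
                continue
            weight = np.exp(-(dr*dr + dc*dc) / (2 * sigma * sigma))
            bin_idx = int(np.degrees(orientation[rr, cc]) % 360) // 10
            hist[bin_idx] += weight * magnitude[rr, cc]
    peaks = []
    for i in range(36):
        left, right = hist[(i - 1) % 36], hist[(i + 1) % 36]
        if hist[i] > left and hist[i] > right and hist[i] >= 0.8 * hist.max():
            peaks.append(i * 10 + 5)   # bin centre in degrees
    return peaks
```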
38. Step IV: Local Image Descriptor
The previous operations imposed a local 2D coordinate system, which provides invariance to image location, scale, and orientation. We now wish to compute descriptors for the local image regions that are:
1. Highly distinctive
2. As invariant as possible to the remaining variations (illumination, 3D viewpoint, ...)
39. Descriptor Representation
- Use the scale of the keypoint to select the level of Gaussian blur.
- Sample the gradient magnitudes and orientations around the keypoint.
- Weight each magnitude with a Gaussian function with sigma equal to half the descriptor window width (this provides gradual change and gives less emphasis to gradients far from the keypoint).
- Collect the samples into a descriptor array of orientation histograms (4x4 histograms of 8 bins each, giving a 128-element vector).
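A heavily simplified descriptor sketch: it keeps the 16x16 sample window, the Gaussian magnitude weighting, and the 4x4 grid of 8-bin orientation histograms, but omits the rotation of the sample grid and the trilinear interpolation between bins that the full method uses:

```python
import numpy as np

def sift_like_descriptor(magnitude, orientation, r, c, keypoint_angle_deg):
    """128-element descriptor: a 16x16 window around (r, c) is split into
    a 4x4 grid of cells, each holding an 8-bin histogram of gradient
    orientations measured relative to the keypoint's assigned angle."""
    desc = np.zeros((4, 4, 8))
    sigma = 8.0   # half the 16-pixel descriptor window width
    rows, cols = magnitude.shape
    for dr in range(-8, 8):
        for dc in range(-8, 8):
            rr, cc = r + dr, c + dc
            if not (0 <= rr < rows and 0 <= cc < cols):
                continue
            weight = np.exp(-(dr*dr + dc*dc) / (2 * sigma * sigma))
            angle = (np.degrees(orientation[rr, cc]) - keypoint_angle_deg) % 360
            desc[(dr + 8) // 4, (dc + 8) // 4, int(angle) // 45] += weight * magnitude[rr, cc]
    return desc.ravel()
```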
40-41. Descriptor Representation
42. Invariance to Affine Illumination Changes
- Multiplication by a constant: normalize the descriptor vector to unit length. A change pixel -> a * pixel (each pixel multiplied by a constant a) scales every gradient to a * gradient, which is cancelled by the normalization.
- Addition of a constant: pixel -> pixel + a has no effect on the gradient.
43. Partial Invariance to Non-Affine Illumination Changes
Such changes can cause a large change in relative magnitudes, but are unlikely to affect gradient orientations. Solution: reduce the influence of large gradient magnitudes by thresholding the values in the unit vector to be no larger than 0.2, then renormalize to unit length.
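Both illumination steps (slides 42-43) reduce to a few lines of vector arithmetic; the 0.2 clamp is the value given on the slide:

```python
import numpy as np

def normalize_descriptor(desc, clamp=0.2):
    """Normalize to unit length (cancels multiplicative illumination
    changes), clamp entries at 0.2 to damp large magnitudes caused by
    non-affine illumination effects, then renormalize."""
    desc = desc / (np.linalg.norm(desc) + 1e-12)
    desc = np.minimum(desc, clamp)
    return desc / (np.linalg.norm(desc) + 1e-12)
```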
44. Partial Invariance to Affine Change in Viewpoint Angle
45. Object Recognition
- The best candidate match for each keypoint is its nearest neighbour in the database.
- Problem: many background features will not have a matching pair in the database, resulting in false matches.
- A global threshold on descriptor distance does not perform well, since some descriptors are more discriminating than others.
- Solution: compare the distance to the closest neighbour with the distance to the second-closest neighbour (one that comes from a different object), and keep only matches where the first is significantly smaller.
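A sketch of the nearest-neighbour ratio test; the 0.8 threshold is the value reported in Lowe's paper, not something stated on this slide:

```python
import numpy as np

def ratio_test_matches(descs_a, descs_b, ratio=0.8):
    """Match each descriptor in descs_a to its nearest neighbour in descs_b,
    keeping the match only when the nearest distance is below `ratio`
    times the second-nearest distance (assumes len(descs_b) >= 2)."""
    matches = []
    for i, d in enumerate(descs_a):
        dists = np.linalg.norm(descs_b - d, axis=1)
        nn1, nn2 = np.argsort(dists)[:2]
        if dists[nn1] < ratio * dists[nn2]:
            matches.append((i, nn1))
    return matches
```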
46. Results
47. More Results
48. More Results (not as successful…)
49. Image Matching
50. Sources / Web Sources
Article:
- David G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, 60, 2 (2004), pp. 91-110. http://citeseer.ist.psu.edu/654168.html
Some slides were adapted from:
- Matching with Invariant Features: Darya Frolova, Denis Simakov. www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt
51. Slide / Web Sources Continued
- Matching Features: Prof. Bill Freeman. courses.csail.mit.edu/6.869/lectnotes/lect8/lect8-slides-6up.pdf
- Object Recognition Using Local Descriptors: Javier Ruiz-del-Solar. www.ciw.cl/material/compression2005/ruiz.pdf
- Scale Invariant Feature Transform: Tom Duerig. www-cse.ucsd.edu/classes/fa06/cse252c/tduerig1.ppt
52. Slide / Web Sources Continued
- Object Recognition with Invariant Features: David Lowe. www.cs.ubc.ca/~lowe/425/slides/10-sift-6up.pdf
- Local Feature Tutorial: F. Estrada et al. courses.csail.mit.edu/6.869/handouts/tutSIFT04.pdf
- Introduction to SIFT features: www.danet.dk/sensor_fusion/SIFT features.ppt
- More on Features: Yung-Yu Chuang. www.csie.ntu.edu.tw/~cyy/courses/vfx/06spring/lectures/handouts/lec05_feature_4up.pdf
53. The End…
