Scale Invariant Feature Transform

Published in: Education


  2. Scale Invariant Feature Transform. Team Members: Chinmay Samant, Rajdeep Mandrekar, Shanker Naik, Laxman Pednekar. Guide: Prof. Rachael Dhanraj
  3. Sub-Image Matching
     • Sub-image matching is the main part of our project.
     • The chain code algorithm was rejected.
     • The Scale Invariant Feature Transform (SIFT) algorithm is used instead.
  4. Sub-Image Matching: the Scale-Invariant Feature Transform algorithm
     • Creating the scale space and Difference of Gaussian pyramid
     • Extrema detection
     • Noise elimination
     • Orientation assignment
     • Descriptor computation
     • Keypoint matching
  5. Creating the scale space and Difference of Gaussian pyramid
     • To build the scale space, we take the image and generate progressively blurred versions of it, then resize the original image to half its size and generate blurred versions again.
     • A set of images of the same size but increasing blur (scale) is called an octave.
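The octave construction above can be sketched in pure Python. The function names `build_scale_space` and `halve` are illustrative, the blur function is passed in as a parameter, and the nearest-neighbour downsampling and the σ schedule (σ₀ = 1.6, k = √2, the values from Lowe's paper) are assumptions, not details from these slides:

```python
def halve(image):
    """Downsample by keeping every other pixel (nearest-neighbour; a
    simplification of proper resampling)."""
    return [row[::2] for row in image[::2]]

def build_scale_space(image, blur, n_octaves=3, scales_per_octave=4,
                      sigma0=1.6, k=2 ** 0.5):
    """Each octave holds images of the same size at progressively larger
    blur sigmas; the image is halved between octaves."""
    octaves = []
    for _ in range(n_octaves):
        octaves.append([blur(image, sigma0 * k ** i)
                        for i in range(scales_per_octave)])
        image = halve(image)
    return octaves
```

Any blur function with the signature `blur(image, sigma)` can be plugged in, which keeps the pyramid logic separate from the convolution itself.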
  6. How is blurring performed?
     • Mathematically, blurring is defined as the convolution of the Gaussian operator with the image:
       L(x, y, σ) = G(x, y, σ) * I(x, y)
       where G is the Gaussian blur operator, I is the input image, and σ controls the amount of blur.
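A minimal sketch of Gaussian blurring as convolution, assuming a small sampled kernel and clamped image borders (the function names are illustrative, not from the slides):

```python
import math

def gaussian_kernel(sigma, radius):
    """Sampled 2-D Gaussian G(x, y, sigma), normalised to sum to 1."""
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-radius, radius + 1)]
         for y in range(-radius, radius + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]

def blur(image, sigma, radius=2):
    """L = G * I: convolve the image with the Gaussian kernel,
    clamping coordinates at the image borders."""
    h, w = len(image), len(image[0])
    kernel = gaussian_kernel(sigma, radius)
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += kernel[dy + radius][dx + radius] * image[yy][xx]
            out[y][x] = acc
    return out
```

Because the kernel is normalised, blurring a single bright pixel spreads its intensity over the neighbourhood without changing the total.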
  7. Difference of Gaussian (DoG)
     • The DoG pyramid is built by subtracting each blurred image from the next one in the same octave: D(x, y, σ) = L(x, y, kσ) − L(x, y, σ).
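Given one octave of progressively blurred images, the DoG pyramid is just a pairwise subtraction; a minimal sketch (`difference_of_gaussians` is an illustrative name):

```python
def difference_of_gaussians(blurred):
    """Subtract each blurred image from the next one in the octave:
    n blurred images yield n - 1 DoG images."""
    dogs = []
    for a, b in zip(blurred, blurred[1:]):
        dogs.append([[bv - av for av, bv in zip(row_a, row_b)]
                     for row_a, row_b in zip(a, b)])
    return dogs
```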
  8. Extrema detection
     • In the figure, X is the current pixel and the green circles are its neighbors. X is marked as a keypoint if it is the greatest or the least of all 26 neighboring pixels (8 in its own scale and 9 in each of the scales above and below).
     • The first and last scales are not checked for keypoints, as there are not enough neighbors to compare against.
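The 26-neighbor test can be sketched directly. Here `dog` is a list of same-sized DoG scale images and `is_extremum` is an illustrative name; the caller is assumed to only pass interior scales and pixels, matching the slide's note that the first and last scales are skipped:

```python
def is_extremum(dog, s, y, x):
    """True if dog[s][y][x] is strictly greater or strictly smaller than
    all 26 neighbors: 8 in its own scale plus 9 in each adjacent scale."""
    v = dog[s][y][x]
    neighbors = [dog[s + ds][y + dy][x + dx]
                 for ds in (-1, 0, 1)
                 for dy in (-1, 0, 1)
                 for dx in (-1, 0, 1)
                 if not (ds == 0 and dy == 0 and dx == 0)]
    return v > max(neighbors) or v < min(neighbors)
```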
  9. Noise elimination
     1. Removing low-contrast features: if the magnitude of the intensity at the current pixel is below a threshold, the keypoint is rejected.
     2. Removing edges: for poorly defined peaks in the DoG function, the principal curvature across the edge is much larger than the principal curvature along it. The Hessian matrix is used to detect such edges.
  10. Tr(H) = Dxx + Dyy
      Det(H) = Dxx Dyy − (Dxy)²
      R = Tr(H)² / Det(H)
      If the value of R for a candidate keypoint is greater than a threshold, that keypoint is poorly localized and hence rejected.
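Both noise-elimination tests can be sketched together. The slides leave the thresholds unspecified; the contrast threshold 0.03 and the edge threshold (r + 1)²/r with r = 10 are the values from Lowe's SIFT paper, assumed here:

```python
def keep_keypoint(value, dxx, dyy, dxy, contrast_thresh=0.03, r=10.0):
    """Noise elimination on a candidate keypoint:
    1. reject low-contrast points (|DoG value| below a threshold);
    2. reject edge responses using R = Tr(H)^2 / Det(H), compared to
       the threshold (r + 1)^2 / r (r = 10 as in Lowe's paper)."""
    if abs(value) < contrast_thresh:          # 1. low contrast
        return False
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:                              # curvatures of opposite sign
        return False
    return tr * tr / det <= (r + 1) ** 2 / r  # 2. edge test
```

A large R means one principal curvature dominates the other, exactly the edge-like case the slide describes, so such keypoints are dropped.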
  11. Orientation assignment
      • The gradient magnitude m(x, y) and orientation θ(x, y) are precomputed using pixel differences:
        m(x, y) = √((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)
        θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))
  12. Orientation assignment (continued)
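The pixel-difference gradients, and the 36-bin orientation histogram that standard SIFT uses to pick a keypoint's dominant orientation, can be sketched as follows. The histogram details are assumptions based on the standard algorithm, since the slide itself only shows a figure:

```python
import math

def gradient(L, y, x):
    """m(x, y) and theta(x, y) from centred pixel differences."""
    dx = L[y][x + 1] - L[y][x - 1]
    dy = L[y + 1][x] - L[y - 1][x]
    return math.hypot(dx, dy), math.atan2(dy, dx)

def dominant_orientation(L, points, bins=36):
    """Accumulate a 36-bin orientation histogram over the given pixels,
    weighted by gradient magnitude; the peak bin gives the keypoint's
    dominant orientation (returned as the bin-centre angle)."""
    hist = [0.0] * bins
    for y, x in points:
        m, theta = gradient(L, y, x)
        b = int((theta % (2 * math.pi)) / (2 * math.pi) * bins) % bins
        hist[b] += m
    peak = max(range(bins), key=lambda b: hist[b])
    return (peak + 0.5) * 2 * math.pi / bins
```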
  13. Descriptor computation
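The slide gives no details, so the following is a heavily simplified sketch of the standard SIFT descriptor: a 4 × 4 grid of cells around the keypoint, each contributing an 8-bin orientation histogram weighted by gradient magnitude, giving 4 × 4 × 8 = 128 values. Full SIFT adds Gaussian weighting, rotation to the dominant orientation, and trilinear interpolation, all omitted here:

```python
import math

def descriptor(L, cy, cx, cell=4, grid=4, bins=8):
    """Simplified 128-d SIFT-style descriptor around (cy, cx):
    each pixel's gradient magnitude is added to one orientation bin of
    the grid cell it falls in; the result is normalised to unit length
    for illumination invariance."""
    half = grid * cell // 2
    hist = [0.0] * (grid * grid * bins)
    for dy in range(-half, half):
        for dx in range(-half, half):
            y, x = cy + dy, cx + dx
            gx = L[y][x + 1] - L[y][x - 1]
            gy = L[y + 1][x] - L[y - 1][x]
            m = math.hypot(gx, gy)
            theta = math.atan2(gy, gx) % (2 * math.pi)
            b = int(theta / (2 * math.pi) * bins) % bins
            cell_idx = ((dy + half) // cell) * grid + (dx + half) // cell
            hist[cell_idx * bins + b] += m
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]
```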
  14. Keypoint matching
      • Each keypoint in the original image is compared to every keypoint in the transformed image using the descriptors.
      • A match is found when the descriptors of the two keypoints are the closest to each other.
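A sketch of descriptor matching. The slides only require the descriptors to be closest; the ratio test against the second-closest candidate is Lowe's standard refinement, added here as an assumption (it assumes at least two candidate descriptors in the second image):

```python
def match_keypoints(desc_a, desc_b, ratio=0.8):
    """For each descriptor in desc_a, find the closest descriptor in
    desc_b by squared Euclidean distance, and accept the match only if
    it is clearly nearer than the second-closest (Lowe's ratio test)."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist2(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if dist2(d, desc_b[best]) < (ratio ** 2) * dist2(d, desc_b[second]):
            matches.append((i, best))
    return matches
```

Comparing every keypoint to every other is the brute-force version of the slide's description; practical systems speed this up with approximate nearest-neighbour search.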
  15. Thank You