4. Definition
• Image matching can be defined as "the process of bringing two images geometrically into agreement so that corresponding pixels in the two images correspond to the same physical region of the scene being imaged"
• The image matching problem is solved by transforming (e.g., translating, rotating, scaling) one of the images in such a way that its similarity with the other image is maximized in some sense
• The 3D nature of real-world scenes makes this hard to achieve, especially because images can be taken from arbitrary viewpoints and under different illumination conditions
5. Image Matching
• Image matching is a fundamental aspect of many problems in computer vision, including object and scene recognition, content-based image retrieval, stereo correspondence, motion tracking, texture classification, and video data mining
• It is a complex problem that remains challenging due to partial occlusions, image deformations, and viewpoint or lighting changes that may occur across different images
7. Feature Matching
• What stuff in the left image matches with stuff in the right?
• This information is important for automatic panorama stitching
9. Even Harder Case
NASA Mars Rover images with SIFT feature matches
Figure by Noah Snavely
10. Image Matching Procedure
1. Detection of distinguished regions
2. Local invariant description of these regions
3. Definition of the correspondence space
4. Search for a globally consistent subset of correspondences
12. Invariant Local Features
1. At an interesting point, let's define a coordinate system (x, y axes)
2. Use the coordinate system to pull out a patch at that point
13. Invariant Local Features
• The algorithm for finding points and representing their patches should produce similar results even when conditions vary
• Buzzword is "invariance"
– Geometric invariance: translation, rotation, scale
– Photometric invariance: brightness, exposure, etc.
Feature Descriptors
14. What makes a good feature?
• We would like to align these 3 images by matching local features
• What would be good local features (ones easy to match)?
15. Local Measure of Uniqueness
Suppose we only consider a small window of pixels
• What defines whether a feature is a good or bad candidate?
• How does the window change when we shift it?
• A good feature: shifting the window in any direction causes a big change
"flat" region: no change in all directions
"edge": no change along the edge direction
"corner": significant change in all directions
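The flat/edge/corner distinction above can be made precise with the eigenvalues of the structure tensor, the 2x2 matrix H of summed gradient products over the window. A minimal NumPy sketch; the thresholds and test patches are arbitrary choices for illustration:

```python
import numpy as np

def window_response(patch):
    """Classify a patch as flat / edge / corner from the eigenvalues
    of its structure tensor (sums of products of image gradients)."""
    gy, gx = np.gradient(patch.astype(float))
    H = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    lam_min, lam_max = np.linalg.eigvalsh(H)  # ascending order
    if lam_max < 1e-3:
        return "flat"    # no change in any direction
    if lam_min < 0.1 * lam_max:
        return "edge"    # no change along the edge direction
    return "corner"      # significant change in all directions

flat   = np.zeros((8, 8))                              # constant region
edge   = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))     # horizontal ramp
corner = np.zeros((8, 8)); corner[4:, 4:] = 1.0        # step in x and y
```

Both eigenvalues small means flat; one large, one small means edge; both large means corner, matching the three cases on the slide.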
17. Invariance
Suppose we are comparing two images I1 and I2
– I2 may be a transformed version of I1
– What kinds of transformations are we likely to encounter in practice?
We'd like to find the same features regardless of the transformation
– This is called transformational invariance
– Most feature methods are designed to be invariant to
• Translation, 2D rotation, scale
– They can usually also handle
• Limited 3D rotations (SIFT works up to about 60 degrees)
• Limited affine transformations (2D rotation, scale, shear)
• Limited illumination/contrast changes
18. How to achieve invariance?
1. Make sure our detector is invariant
– Harris is invariant to translation and rotation
– Scale is trickier
• A common approach is to detect features at many scales using a Gaussian pyramid (e.g., MOPS)
• More sophisticated methods find "the best scale" to represent each feature (e.g., SIFT)
2. Design an invariant feature descriptor
– A descriptor captures the information in a region around the detected feature point
– The simplest descriptor: a square window of pixels
• What's this invariant to?
– Let's look at some better approaches…
19. Scale invariant detection
• Suppose we are looking for corners
• Key idea: find the scale that gives a local maximum of f
– f is a local maximum in both position and scale
– Common definition of f: the Laplacian (or the difference between two Gaussian-filtered images with different sigmas)
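A rough 1-D sketch of this idea in NumPy: blur a signal at several sigmas, take differences of Gaussians, and report the scale whose response at the feature's position is largest. The 1.6 ratio between the two sigmas follows common DoG practice; the helper names and test signals are assumptions for illustration:

```python
import numpy as np

def gauss_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 4 sigma."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def best_scale(signal, sigmas):
    """Sigma whose difference-of-Gaussians response at the signal's
    centre is largest: f is maximized over scale."""
    centre = len(signal) // 2
    responses = []
    for s in sigmas:
        g1 = np.convolve(signal, gauss_kernel(s), mode="same")
        g2 = np.convolve(signal, gauss_kernel(1.6 * s), mode="same")
        responses.append(abs((g1 - g2)[centre]))
    return sigmas[int(np.argmax(responses))]

def blob(width, n=201):
    """Gaussian bump of the given width, centred in the signal."""
    x = np.arange(n) - n // 2
    return np.exp(-x**2 / (2 * width**2))

sigmas = [1, 2, 3, 4, 6, 8, 12]
```

The selected scale tracks the size of the structure: a wider blob peaks at a larger sigma, which is what makes the detector scale invariant.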
20. Slide from Tinne Tuytelaars
Lindeberg et al., 1996
22. Rotation invariance for feature descriptors
Find the dominant orientation of the image patch
• This is given by x+, the eigenvector of H corresponding to λ+
– λ+ is the larger eigenvalue
• Rotate the patch according to this angle
Figure by Matthew Brown
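This can be sketched in NumPy: build the structure tensor H from the patch gradients, take the eigenvector for the larger eigenvalue, and read off its angle. The function name and test patch are assumptions for illustration:

```python
import numpy as np

def dominant_orientation(patch):
    """Angle (radians) of x+, the eigenvector of the structure tensor H
    belonging to the larger eigenvalue lambda+."""
    gy, gx = np.gradient(patch.astype(float))
    H = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    vals, vecs = np.linalg.eigh(H)   # eigenvalues in ascending order
    x_plus = vecs[:, -1]             # eigenvector of the larger eigenvalue
    return np.arctan2(x_plus[1], x_plus[0])

# patch whose intensity ramps along x: dominant orientation is horizontal
ramp = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
theta = dominant_orientation(ramp)
```

Note the eigenvector's sign is arbitrary, so the angle is only defined modulo 180 degrees; rotating the patch by it still canonicalizes the orientation.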
23. Multiscale Oriented PatcheS descriptor
Take a 40x40 square window around the detected feature
• Scale to 1/5 size (using prefiltering)
• Rotate to horizontal
• Sample an 8x8 square window centered at the feature
• Intensity-normalize the window by subtracting the mean and dividing by the standard deviation in the window (both window I and aI+b will match)
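The normalization step can be checked directly: subtracting the mean and dividing by the standard deviation yields the same descriptor for window I and for aI + b (with a > 0). A small NumPy sketch with an assumed function name:

```python
import numpy as np

def mops_normalize(window):
    """Zero-mean, unit-variance normalization of a descriptor window,
    so I and a*I + b produce identical descriptors (a > 0)."""
    w = window.astype(float)
    return (w - w.mean()) / w.std()

rng = np.random.default_rng(0)
patch = rng.random((8, 8))                 # stand-in 8x8 sampled window
d1 = mops_normalize(patch)
d2 = mops_normalize(3.0 * patch + 10.0)    # same patch, different gain/bias
```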
26. Scale Invariant Feature Transform
Basic idea:
• Take a 16x16 square window around the detected feature
• Compute the edge orientation (angle of the gradient − 90°) for each pixel
• Throw out weak edges (threshold on gradient magnitude)
• Create a histogram of the surviving edge orientations (0 to 2π)
27. SIFT Descriptor
Full version
• Divide the 16x16 window into a 4x4 grid of cells (the 2x2 case is shown below)
• Compute an orientation histogram for each cell
• 16 cells × 8 orientations = 128-dimensional descriptor
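A toy version of the full descriptor in NumPy: a 4x4 grid of cells over a 16x16 window, an 8-bin orientation histogram per cell with weak gradients thresholded away, concatenated into 128 numbers. The threshold value is an arbitrary assumption, and this omits real SIFT details such as Gaussian weighting and trilinear interpolation:

```python
import numpy as np

def sift_like_descriptor(window, grid=4, bins=8, mag_thresh=0.05):
    """Toy SIFT-style descriptor: per-cell orientation histograms over
    a 16x16 window -> grid*grid*bins = 128 values."""
    assert window.shape == (16, 16)
    gy, gx = np.gradient(window.astype(float))
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # orientation in [0, 2pi)
    cell = 16 // grid
    desc = []
    for i in range(grid):
        for j in range(grid):
            sl = (slice(i * cell, (i + 1) * cell),
                  slice(j * cell, (j + 1) * cell))
            keep = mag[sl] > mag_thresh          # throw out weak edges
            hist, _ = np.histogram(ang[sl][keep], bins=bins,
                                   range=(0, 2 * np.pi),
                                   weights=mag[sl][keep])
            desc.append(hist)
    return np.concatenate(desc)

rng = np.random.default_rng(1)
d = sift_like_descriptor(rng.random((16, 16)))
```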
28. SIFT Properties
An extraordinarily robust matching technique
• Can handle changes in viewpoint
• Up to about 60 degrees of out-of-plane rotation
• Can handle significant changes in illumination, sometimes even day vs. night (next example)
• Fast and efficient (can run in real time)
• Lots of code available:
http://people.csail.mit.edu/albert/ladypack/wiki/index.php?title=Known_implementations_of_SIFT
31. Feature Matching
Given a feature in I1, how do we find the best match in I2?
1. Define a distance function that compares two descriptors
2. Test all the features in I2 and find the one with minimum distance
How to define the difference between two features f1, f2?
• Simple approach: SSD(f1, f2)
– Sum of squared differences between the entries of the two descriptors
– Can give good scores to very ambiguous (bad) matches
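The SSD-based matching step above can be sketched as follows; the function names and example descriptors are assumptions for illustration:

```python
import numpy as np

def ssd(f1, f2):
    """Sum of squared differences between two descriptor vectors."""
    return float(np.sum((f1 - f2) ** 2))

def best_match(f1, features2):
    """Index of the feature in features2 with minimum SSD to f1."""
    return int(np.argmin([ssd(f1, f2) for f2 in features2]))

f1 = np.array([1.0, 0.0, 2.0])
feats2 = [np.array([5.0, 5.0, 5.0]),
          np.array([1.0, 0.1, 2.0]),    # near-duplicate of f1
          np.array([0.0, 0.0, 0.0])]
```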
32. Feature Distance
How to define the difference between two features f1, f2?
• Better approach: ratio distance = SSD(f1, f2) / SSD(f1, f2′)
– f2 is the best SSD match to f1 in I2
– f2′ is the 2nd-best SSD match to f1 in I2
– gives values close to 1 for ambiguous matches, and small values for distinctive ones
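A minimal sketch of the ratio distance (function name and example descriptors assumed): it comes out close to 1 when the best and second-best matches are nearly tied, and small when the best match clearly stands out:

```python
import numpy as np

def ratio_distance(f1, features2):
    """SSD to the best match divided by SSD to the second-best match."""
    d = np.sort([np.sum((f1 - f2) ** 2) for f2 in features2])
    return d[0] / d[1]

f1 = np.array([1.0, 2.0])
unambiguous = [np.array([1.0, 2.1]), np.array([9.0, 9.0])]   # clear winner
ambiguous   = [np.array([4.0, 5.0]), np.array([4.1, 5.0])]   # near tie
```

Thresholding this ratio (rather than raw SSD) is what lets a matcher reject ambiguous candidates.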
33. Evaluating the Results
ā¢ How can we measure the performance of a feature matcher?
(figure: candidate matches with feature distances 50, 75, and 200)
34. True/False Positives
The distance threshold affects performance
• True positives = # of detected matches that are correct
– Suppose we want to maximize these; how do we choose the threshold?
• False positives = # of detected matches that are incorrect
– Suppose we want to minimize these; how do we choose the threshold?
(figure: matches with feature distances 50, 75, and 200, including one false match and one true match)
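Counting true and false positives at a given threshold can be sketched as follows; the distances and ground-truth labels below are hypothetical, loosely echoing the 50/75/200 values in the figure:

```python
import numpy as np

def count_matches(distances, is_correct, threshold):
    """True/false positives among matches accepted below the threshold."""
    accepted = distances < threshold
    tp = int(np.sum(accepted & is_correct))    # accepted and correct
    fp = int(np.sum(accepted & ~is_correct))   # accepted but incorrect
    return tp, fp

# hypothetical match distances and whether each match is truly correct
dists  = np.array([50.0, 75.0, 200.0, 120.0])
labels = np.array([True, True, False, False])
```

Lowering the threshold trims false positives but eventually discards true matches too, which is exactly the trade-off the slide asks about.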
35. Applications
Features are used for:
• Image alignment (e.g., mosaics)
• 3D reconstruction
• Motion tracking
• Object recognition
• Indexing and database retrieval
• Robot navigation
• … etc.
36. Acknowledgment
Some of the slides in this presentation are adapted from various sources; many thanks to:
1. Deva Ramanan, Department of Computer Science, University of California at Irvine (http://www.ics.uci.edu/~dramanan/)