Computer Vision
Chap.3 : Feature Detection
SUB CODE: 3171614
SEMESTER: 7TH IT
PREPARED BY:
PROF. KHUSHALI B. KATHIRIYA
Outline
• Edge Detection
• Corner Detection
• Line and Curve Detection
• Active Contours
• SIFT and HOG Descriptors
• Shape Context Descriptors
• Morphological Operations
Prepared by: Prof. Khushali B Kathiriya
Morphological Operation
Key Stages in Digital Image Processing
[Diagram: the key stages, starting from the problem domain: Image Acquisition, Image Enhancement, Image Restoration, Colour Image Processing, Image Compression, Morphological Processing, Segmentation, Representation & Description, Object Recognition]
Key Stages in Digital Image Processing:
Morphological Processing
[The same key-stages diagram, with the Morphological Processing stage highlighted]
Contents
• Once segmentation is complete, morphological operations can be used
to remove imperfections in the segmented image and provide
information on the form and structure of the image
• In this lecture we will consider
• What is morphology?
• Simple morphological operations
• Compound operations
• Morphological algorithms
1, 0, Black, White?
• Throughout the following slides, whether 0 and 1 refer to white or black is
somewhat interchangeable
• All of the discussion that follows assumes segmentation has already taken place
and that images are made up of 0s for background pixels and 1s for object pixels
• After this it doesn't matter if 0 is black, white, yellow, green…
What Is Morphology?
• Morphological image processing (or morphology) describes a range of image
processing techniques that deal with the shape (or morphology) of features in an
image
• Morphological operations are typically applied to remove imperfections
introduced during segmentation, and so typically operate on bi-level images
Quick Example
Image after segmentation Image after segmentation and
morphological processing
Structuring Elements, Hits & Fits
Fit: all 'on' pixels in the structuring element cover 'on' pixels in the image
Hit: any 'on' pixel in the structuring element covers an 'on' pixel in the image
All morphological processing operations are based on these simple ideas
Structuring Elements
• Structuring elements can be of any size and make any shape
• However, for simplicity we will use rectangular structuring elements with their
origin at the middle pixel
1 1 1
1 1 1
1 1 1
0 0 1 0 0
0 1 1 1 0
1 1 1 1 1
0 1 1 1 0
0 0 1 0 0
0 1 0
1 1 1
0 1 0
What is use of Hit, Miss and Fit?
• The hit-and-miss transform can be used to thin and skeletonise a shape in a binary
image
Fitting & Hitting
[Figure: a binary image of an object, probed at three positions A, B and C]
Structuring Element 1: a 3×3 square of 1s
Structuring Element 2: a 3×3 cross of 1s
Fitting & Hitting
[Figure: the same binary image, shown with its zero-valued border, probed with the 3×3 cross structuring element]
Fundamental Operations
• Fundamentally morphological image processing is very much like spatial
filtering
• The structuring element is moved across every pixel in the original image to
give a pixel in a new processed image
• The value of this new pixel depends on the operation performed
• There are two basic morphological operations: erosion and dilation
Erosion
Erosion
• Erosion of image f by structuring element s is given by f ⊖ s
• The structuring element s is positioned with its origin at (x, y) and the new pixel
value is determined using the rule:

g(x, y) = 1 if s fits f, 0 otherwise
Erosion Example
Structuring Element
Original Image Processed Image With Eroded Pixels
Erosion Example
Structuring Element
Original Image Processed Image
Erosion Example 1
Watch out: In these examples a 1 refers to a black pixel!
Original image Erosion by 3*3
square structuring
element
Erosion by 5*5
square structuring
element
Erosion Example 2
• Erosion acts as a morphological filtering operation in which image details smaller than
the structuring element are filtered from the image
Original
image
After erosion
with a disc of
radius 10
After erosion
with a disc of
radius 20
After erosion
with a disc of
radius 5
What Is Erosion For?
Erosion can split apart joined objects
Erosion can strip away extrusions
Watch out: Erosion shrinks objects
Apply Erosion on image
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 0 0 0
0 0 0 1 1 1 1 0 0 0
0 0 0 1 1 1 1 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
Structuring Element: a 3×3 square of 1s
Eroded Image
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
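The worked example above can be reproduced in a few lines of NumPy. This is a minimal sketch of binary erosion under the fit rule; the function name and the explicit loops are my own choices (real code would typically call a library routine):

```python
import numpy as np

def erode(img, se):
    """Binary erosion: the output pixel is 1 only where the structuring
    element, centred on that pixel, fits entirely inside the object."""
    sh, sw = se.shape
    ph, pw = sh // 2, sw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))  # background (0) padding
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + sh, x:x + sw]
            out[y, x] = int(np.all(window[se == 1]))  # 1 only if the SE fits
    return out

# The 9x10 image from the slide: a 3x4 block of object pixels
img = np.zeros((9, 10), dtype=np.uint8)
img[3:6, 3:7] = 1

eroded = erode(img, np.ones((3, 3), dtype=np.uint8))
# Only the two pixels whose full 3x3 neighbourhood lies inside the
# block survive: row 4, columns 4 and 5
```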
Dilation
Dilation
• Dilation of image f by structuring element s is given by f ⊕ s
• The structuring element s is positioned with its origin at (x, y) and the new
pixel value is determined using the rule:

g(x, y) = 1 if s hits f, 0 otherwise
Dilation Example
Structuring Element
Original Image Processed Image
Dilation Example
Structuring Element
Original Image Processed Image With Dilated Pixels
Dilation Example 1
Original image Dilation by 3*3
square structuring
element
Dilation by 5*5
square structuring
element
Watch out: In these examples a 1 refers to a black pixel!
Dilation Example 2
Structuring element
Original image After dilation
What Is Dilation For?
Dilation can repair breaks by bridging gaps
Dilation can repair intrusions
Watch out: Dilation enlarges objects
Dilation
• The advantage of dilation over a low-pass filter used to bridge gaps is that
• dilation results directly in a binary image
• A low-pass filter starts with a binary image and produces a grey-scale image, which
would require a pass with a thresholding function to convert it back to binary form
Apply Dilation on image
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 0 0 0
0 0 0 1 1 1 1 0 0 0
0 0 0 1 1 1 1 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
Dilated Image (using a 3×3 square structuring element)
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
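The dilation example can be reproduced the same way as the erosion one, swapping the fit test for a hit test (again a hedged sketch; the function name is my own):

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: the output pixel is 1 wherever the structuring
    element, centred on that pixel, hits at least one object pixel."""
    sh, sw = se.shape
    ph, pw = sh // 2, sw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))  # background (0) padding
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + sh, x:x + sw]
            out[y, x] = int(np.any(window[se == 1]))  # 1 if the SE hits
    return out

# The same 9x10 image: a 3x4 block of object pixels
img = np.zeros((9, 10), dtype=np.uint8)
img[3:6, 3:7] = 1

dilated = dilate(img, np.ones((3, 3), dtype=np.uint8))
# The block grows by one pixel in every direction: a 5x6 block
```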
Compound Operations
Compound Operations
• More interesting morphological operations can be performed by performing
combinations of erosions and dilations
• The most widely used of these compound operations are:
• Opening
• Closing
Opening
Opening
• The opening of image f by structuring element s, denoted f ○ s is simply an
erosion followed by a dilation
f ○ s = (f ⊖ s) ⊕ s
Original shape After erosion After dilation
(opening)
Note a disc shaped structuring element is used
Opening Example
Original Image    Image After Opening
Opening Example
Structuring Element
Original Image Processed Image
Opening Example
Structuring Element
Original Image Processed Image
Advantages of opening
• Opening generally smoothes the contour of an object
• Breaks narrow isthmuses,
• And eliminates thin protrusions
Closing
Closing
• The closing of image f by structuring element s, denoted f • s is simply a
dilation followed by an erosion
f • s = (f ⊕ s) ⊖ s
Original shape After dilation After erosion
(closing)
Note a disc shaped structuring element is used
Closing Example
Original Image    Image After Closing
Closing Example
Structuring Element
Original Image Processed Image
Closing Example
Structuring Element
Original Image Processed Image
Advantages of closing
• Closing also tends to smooth sections of contours
• It generally fuses narrow breaks and long thin gulfs
• It eliminates small holes and fills gaps in the contour
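Both compound operations can be sketched with SciPy's binary morphology routines. This is a hedged example; the toy image, with a one-pixel noise speck and a one-pixel hole, is my own construction:

```python
import numpy as np
from scipy import ndimage

se = np.ones((3, 3), dtype=bool)  # 3x3 square structuring element

# Opening removes details smaller than the structuring element
noisy = np.zeros((12, 12), dtype=bool)
noisy[3:9, 3:9] = True   # a 6x6 object
noisy[1, 1] = True       # a one-pixel speck of noise
opened = ndimage.binary_opening(noisy, structure=se)
# The speck is gone; the 6x6 object survives unchanged

# Closing fills details smaller than the structuring element
holed = np.zeros((12, 12), dtype=bool)
holed[3:9, 3:9] = True
holed[5, 6] = False      # a one-pixel hole inside the object
closed = ndimage.binary_closing(holed, structure=se)
# The hole is filled; the object is otherwise unchanged
```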
Morphological Processing Example
Morphological Algorithms
• Using the simple techniques we have looked at so far we can begin to
consider some more interesting morphological algorithms
• We will look at:
• Boundary extraction
• Region filling
• There are lots of others as well though:
• Extraction of connected components
• Thinning/thickening
• Skeletonisation
Boundary Extraction
Boundary Extraction
• Extracting the boundary (or outline) of an object is often extremely useful
• The boundary can be given simply as
• β(A) = A − (A ⊖ B)
Boundary Extraction Example
• A simple image and the result of performing boundary extraction using a square
3*3 structuring element
Original Image Extracted Boundary
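The boundary formula can be sketched directly in NumPy/SciPy (the 7×7 test image with a solid 5×5 square is my own):

```python
import numpy as np
from scipy import ndimage

# A simple binary image: a solid 5x5 square
A = np.zeros((7, 7), dtype=bool)
A[1:6, 1:6] = True

B = np.ones((3, 3), dtype=bool)  # 3x3 square structuring element

# beta(A) = A minus (A eroded by B): the one-pixel-thick outline
boundary = A & ~ndimage.binary_erosion(A, structure=B)
# 25 object pixels minus the 9 interior pixels leave a 16-pixel ring
```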
Image Segmentation
(Thresholding)
Contents
• Today we will continue to look at the problem of segmentation, this
time though in terms of thresholding
• In particular we will look at:
• What is thresholding?
• Simple thresholding
• Adaptive thresholding
Thresholding
• Thresholding is usually the first step in any segmentation approach
• We have talked about simple single value thresholding already
• Thresholding is used to produce regions of uniformity within the given image
based on some threshold criteria T
Thresholding
• Single value thresholding can be given mathematically as follows:





=
T
y
x
f
if
T
y
x
f
if
y
x
g
)
,
(
0
)
,
(
1
)
,
(
Thresholding Example
• Imagine a poker playing robot that needs to visually interpret the cards in its
hand
Original Image Thresholded Image
But Be Careful
• If you get the threshold wrong, the results can be disastrous
Threshold Too Low Threshold Too High
Basic Global Thresholding
• Based on the histogram of an image
• Partition the image histogram using a single global threshold
• The success of this technique very strongly depends on how well the histogram
can be partitioned
• Types of thresholding:
1) Global thresholding
2) Local thresholding
Basic Global Thresholding Algorithm
The basic global threshold, T, is calculated
as follows:
1. Select an initial estimate for T (typically the average grey level in the image)
2. Segment the image using T to produce two groups of pixels:
1. G1 consisting of pixels with grey levels >T
2. G2 consisting of pixels with grey levels ≤ T
3. Compute the average grey levels of pixels in G1 to give μ1 and G2 to give μ2
Basic Global Thresholding Algorithm
4. Compute a new threshold value:
T = (μ1 + μ2) / 2
5. Repeat steps 2 – 4 until the difference in T between successive iterations is smaller
than a predefined limit ΔT
This algorithm works very well for finding thresholds when the histogram is
suitable
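The five steps can be sketched in NumPy as follows (the function name and the synthetic bimodal image are my own):

```python
import numpy as np

def basic_global_threshold(img, dT=0.5):
    """Iterative basic global threshold, following steps 1-5 above."""
    T = img.mean()                         # 1. initial estimate: average grey level
    while True:
        G1 = img[img > T]                  # 2. pixels with grey level > T
        G2 = img[img <= T]                 #    pixels with grey level <= T
        mu1 = G1.mean() if G1.size else T  # 3. average grey level of each group
        mu2 = G2.mean() if G2.size else T
        T_new = 0.5 * (mu1 + mu2)          # 4. new threshold
        if abs(T_new - T) < dT:            # 5. stop once T settles
            return T_new
        T = T_new

# A synthetic bimodal "image": dark background, bright object
img = np.concatenate([np.full(300, 20.0), np.full(100, 200.0)])
T = basic_global_threshold(img)
# T converges midway between the two modes: (20 + 200) / 2 = 110
```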
Thresholding Example 1
Thresholding Example 2
Problems With Single Value Thresholding
• Single value thresholding only works for bimodal histograms
• Images with other kinds of histograms need more than a single threshold
Problems With Single Value Thresholding (cont…)
• Let’s say we want to isolate the contents
of the bottles
• Think about what the histogram for this
image would look like
• What would happen if we used a single
threshold value?
Single Value Thresholding and Illumination
Uneven illumination can really upset a single valued thresholding
scheme
Basic Adaptive Thresholding
• An approach to handle situations in which single value thresholding will not
work is to divide an image into sub images and threshold these individually
• Since the threshold for each pixel depends on its location within an image, this
technique is said to be adaptive
Basic Adaptive Thresholding Example
• The image below shows an example of using adaptive thresholding with
the image shown previously
• As can be seen success is mixed
• But, we can further subdivide the troublesome sub images for more
success
Basic Adaptive Thresholding Example (cont…)
• These images show the
troublesome parts of the previous
problem further subdivided
• After this sub division successful
thresholding can be achieved
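A minimal tile-based sketch of the idea, thresholding each sub-image with its own mean grey level (the 4×4 tiling and the synthetic unevenly lit scene are my own choices):

```python
import numpy as np

def adaptive_threshold(img, n=4):
    """Divide the image into n x n sub-images and threshold each
    sub-image with its own mean grey level."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=np.uint8)
    th, tw = h // n, w // n
    for i in range(n):
        for j in range(n):
            sub = img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            out[i * th:(i + 1) * th, j * tw:(j + 1) * tw] = sub > sub.mean()
    return out

# Synthetic scene: an illumination ramp with a brighter square on it
x = np.linspace(0, 100, 64)
img = np.tile(x, (64, 1))      # uneven illumination, dark left to bright right
img[20:40, 20:40] += 60        # the object of interest

binary = adaptive_threshold(img, n=4)
# The object is picked out even though some background pixels on the
# bright side are brighter than the object on the dark side
```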
Summary
• In this lecture we have begun looking at segmentation, and in particular
thresholding
• We saw the basic global thresholding algorithm and its shortcomings
• We also saw a simple way to overcome some of these limitations using
adaptive thresholding
Contents
• So far we have been considering image processing techniques used to
transform images for human interpretation
• Today we will begin looking at automated image analysis by
examining the thorny issue of image segmentation:
• The segmentation problem
• Finding points, lines and edges
Image Segmentation
The Segmentation Problem
• Segmentation attempts to partition the pixels of an image into groups that
strongly correlate with the objects in an image
• Typically the first step in any automated computer vision application
The Segmentation Problem
• Image segmentation can be achieved in one of two ways:
1) Segmentation based on discontinuities in intensity – it partitions the image based
on abrupt changes in intensity, such as edges
2) Segmentation based on similarities in intensity – it divides the image into regions that
are similar according to a set of predefined criteria
Segmentation Examples
Detection Of Discontinuities
• There are three basic types of grey level discontinuities that we
tend to look for in digital images:
1. Points
2. Lines
3. Edges
• We typically find discontinuities using masks and correlation
• Points, lines and edges are all high-frequency components, so the masks we need
are basically high-pass filters
Point Detection
Point Detection
• Point detection can be achieved simply using the mask
below:
• Points are detected at those pixels in the subsequent
filtered image that are above a set threshold
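A sketch of point detection. The mask figure is missing from the slide, so the Laplacian-style mask below is the usual textbook choice, an assumption rather than something taken from the slide:

```python
import numpy as np

# Classic point-detection mask: a Laplacian-type high-pass filter
mask = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]])

img = np.full((9, 9), 10.0)
img[4, 4] = 100.0          # one isolated bright point

# Correlate the mask with the image (interior pixels only)
resp = np.zeros_like(img)
for y in range(1, 8):
    for x in range(1, 8):
        resp[y, x] = np.sum(mask * img[y - 1:y + 2, x - 1:x + 2])

# Points are where the filtered response exceeds a set threshold
T = 0.9 * np.abs(resp).max()
points = np.abs(resp) > T
# Only the isolated point at (4, 4) survives the threshold
```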
Point Detection
Point Detection (Cont…)
X-ray image of
a turbine blade
Result of point
detection
Result of
thresholding
Line Detection
Line Detection
• The next level of complexity is to try to detect lines
• The masks below will extract lines that are one pixel thick and running in a
particular direction
Line Detection
Line Detection (cont…)
Binary image of a wire
bond mask
After
processing
with -45° line
detector
Result of thresholding the filtered result
Line Detection Example
Apply horizontal mask on the image
0 0 0 10 0 0 0 0
0 0 0 10 0 0 0 0
0 0 0 10 0 0 0 0
0 0 0 10 0 0 0 0
0 10 10 10 10 10 10 0
0 0 0 10 0 0 0 0
0 0 0 10 0 0 0 0
0 0 0 10 0 0 0 0
Line Detection Example
Make negative values zero, as this is a high-pass mask
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 -20 -20 -20 -20 -30 -20 0
0 40 40 40 40 60 40 0
0 -20 -20 -20 -20 -30 -20 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
After zeroing the negative values, only the horizontal line's row survives: 0 40 40 40 40 60 40 0
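The worked example can be checked in NumPy. The horizontal-line mask below is inferred from the responses on the slide (it is the standard −1/2/−1 horizontal mask):

```python
import numpy as np

# Horizontal line-detection mask: responds to 1-pixel-thick horizontal lines
mask = np.array([[-1, -1, -1],
                 [ 2,  2,  2],
                 [-1, -1, -1]])

# The 8x8 image from the slide: a vertical line crossing a horizontal line
img = np.zeros((8, 8))
img[:, 3] = 10          # vertical line of 10s in column 3
img[4, 1:7] = 10        # horizontal line of 10s spanning columns 1-6

# Correlate over the interior; borders are left at 0
resp = np.zeros_like(img)
for y in range(1, 7):
    for x in range(1, 7):
        resp[y, x] = np.sum(mask * img[y - 1:y + 2, x - 1:x + 2])

resp[resp < 0] = 0      # make negative values zero (high-pass mask)
# Only the horizontal line survives: row 4 reads 0 40 40 40 40 60 40 0
```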
Edge Detection
Edge Detection
An edge is a set of connected pixels that lie on the boundary between two regions
Edges & Derivatives
• We have already spoken
about how derivatives
are used to find
discontinuities
• 1st derivative tells us
where an edge is
• 2nd derivative can
be used to show
edge direction
Derivatives & Noise
• Derivative based edge detectors are extremely sensitive to noise
• We need to keep this in mind
Common Edge Detectors
• Given a 3*3 region of an image the following edge detection filters can be used
Edge Detection Example
Original Image Horizontal Gradient Component
Vertical Gradient Component Combined Edge Image
Edge Detection Problems
• Often, problems arise in edge detection when there is too much detail
• For example, the brickwork in the previous example
• One way to overcome this is to smooth images prior to edge detection
Edge Detection Example With Smoothing
Original Image Horizontal Gradient Component
Vertical Gradient Component Combined Edge Image
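A minimal sketch of gradient-based edge detection, assuming the common Sobel masks for the horizontal and vertical components (the masks themselves are not shown on the slides, so this is the usual choice):

```python
import numpy as np

# Sobel masks for the two gradient components
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
sobel_y = sobel_x.T

# A vertical step edge: dark on the left, bright on the right
img = np.zeros((8, 8))
img[:, 4:] = 100.0

gx = np.zeros_like(img)
gy = np.zeros_like(img)
for y in range(1, 7):
    for x in range(1, 7):
        win = img[y - 1:y + 2, x - 1:x + 2]
        gx[y, x] = np.sum(sobel_x * win)
        gy[y, x] = np.sum(sobel_y * win)

# Combine the two components into a single edge-strength image
edges = np.hypot(gx, gy)
# The response is concentrated on the columns either side of the step
```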
Laplacian Edge Detection
Laplacian Edge Detection
• We encountered the 2nd-order derivative based Laplacian filter already
• The Laplacian is typically not used by itself as it is too sensitive to noise
• Usually when used for edge detection the Laplacian is combined with a smoothing
Gaussian filter
Laplacian Edge Detection
Laplacian Of Gaussian
• The Laplacian of Gaussian (or Mexican hat) filter uses
the Gaussian for noise removal and the Laplacian for
edge detection
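A hedged sketch using SciPy's ready-made Laplacian-of-Gaussian filter; the synthetic noisy step edge and the choice of sigma are my own:

```python
import numpy as np
from scipy import ndimage

# A noisy vertical step edge
rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = 100.0
img += rng.normal(0, 2.0, img.shape)

# Laplacian of Gaussian: the Gaussian removes noise,
# the Laplacian responds to the edge
log = ndimage.gaussian_laplace(img, sigma=2.0)

# The edge sits at the zero crossing: the response is positive on the
# dark side of the step and negative on the bright side
```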
Laplacian Of Gaussian Example
Summary
• In this lecture we have begun looking at segmentation, and in particular
edge detection
• Edge detection is massively important as it is in many cases the first step
to object recognition
Active Contours
Active Contour (Boundary Detection)
• Segmentation is the part of image processing concerned with separating out
information from a required target region of an image. Different techniques are
used to segment the pixels of interest from the image.
• Active contour is one of the active models in segmentation techniques: it makes
use of energy constraints and forces in the image to separate out a region of
interest. An active contour defines a separate boundary or curvature for the
regions of the target object being segmented.
Active Contour(Cont.)
• Application of Active Contour
• Medical Imaging
• Brain CT images
• MRI images
• Cardiac images
• Motion Tracking
• Stereo Tracking
What is Active Contour?
Example of Active Contour
Example of Active Contour (Cont.)
Medical Imaging
Example of Active Contour (Cont.)
Motion Tracking
SIFT Features
SIFT Features
• SIFT stands for Scale-Invariant Feature Transform and was first presented in 2004
by D. Lowe, University of British Columbia. SIFT is invariant to image scale and rotation.
SIFT Features (Cont.)
• Major advantages of SIFT are
• Locality: features are local, so robust to occlusion and clutter (no prior segmentation)
• Distinctiveness: individual features can be matched to a large database of objects
• Quantity: many features can be generated for even small objects
• Efficiency: close to real-time performance
• Extensibility: can easily be extended to a wide range of different feature types, with
each adding robustness
SIFT Features (Cont.)
Comparing SIFT Descriptors
Comparing SIFT Descriptors (Cont.)
Comparing SIFT Descriptors (Cont.)
SIFT Results: Scale Invariance
SIFT Results: Rotation Invariance
SIFT Results: Robustness to Clutter
Panorama Stitching using SIFT
Auto Collage using SIFT
SIFT for 3D objects?
• It does not really match well in all cases…
The Algorithm
• SIFT is quite an involved algorithm. There are four main steps in the
SIFT algorithm, plus a final matching stage. We will see them one by one.
• Scale-space peak selection: Potential location for finding features.
• Keypoint Localization: Accurately locating the feature keypoints.
• Orientation Assignment: Assigning orientation to keypoints.
• Keypoint descriptor: Describing the keypoints as a high dimensional vector.
• Keypoint Matching
The SIFT algorithm
• The resplendent Eiffel Tower, of course! The keen-eyed among you will also have noticed that each
image has a different background, is captured from different angles, and also has different objects in
the foreground (in some cases).
• I’m sure all of this took you a fraction of a second to figure out. It doesn’t matter if the image is
rotated at a weird angle or zoomed in to show only half of the Tower. This is primarily because you
have seen the images of the Eiffel Tower multiple times and your memory easily recalls its features.
We naturally understand that the scale or angle of the image may change but the object remains the
same.
• But machines have an almighty struggle with the same idea. It’s a challenge for them to identify the
object in an image if we change certain things (like the angle or the scale). Here’s the good news –
machines are super flexible and we can teach them to identify images at an almost human-level.
We will see…
1. Introduction to SIFT
2. Constructing a Scale Space
1. Gaussian Blur
2. Difference of Gaussian
3. Keypoint Localization
1. Local Maxima/Minima
2. Keypoint Selection
4. Orientation Assignment
1. Calculate Magnitude & Orientation
2. Create Histogram of Magnitude & Orientation
5. Keypoint Descriptor
6. Feature Matching
Introduction to SIFT
• SIFT, or Scale Invariant Feature Transform, is a feature detection
algorithm in Computer Vision.
• SIFT helps locate the local features in an image, commonly known
as the ‘keypoints‘ of the image. These keypoints are scale & rotation
invariant that can be used for various computer vision
applications, like image matching, object detection, scene
detection, etc.
• For example, here is another image of the Eiffel Tower along with
its smaller version. The keypoints of the object in the first image
are matched with the keypoints found in the second image. The
same goes for two images when the object in the other image is
slightly rotated.
Introduction to SIFT (Cont.)
• Constructing a Scale Space: To make sure that features are scale-independent
• Keypoint Localisation: Identifying the suitable features or keypoints
• Orientation Assignment: Ensure the keypoints are rotation invariant
• Keypoint Descriptor: Assign a unique fingerprint to each keypoint
Constructing the scale space
• We use the Gaussian Blurring technique to reduce the noise in an image.
• So, for every pixel in an image, the Gaussian Blur calculates a value based on its
neighboring pixels. Below is an example of image before and after applying the
Gaussian Blur. As you can see, the texture and minor details are removed from
the image and only the relevant information like the shape and edges remain:
Constructing the scale space (Cont.)
• Hence, these blurred images are created for multiple scales. To create a new set of
images of different scales, we will take the original image and reduce the scale
by half. For each new image, we will create blurred versions as we saw above.
Difference of Gaussian
• Difference of Gaussian is a feature
enhancement algorithm that involves
the subtraction of one blurred version of
an original image from another, less
blurred version of the original.
Difference of Gaussian
• Let us create the DoG for the
images in scale space. Take a look
at the below diagram. On the left,
we have 5 images, all from the
first octave (thus having the same
scale). Each subsequent image is
created by applying the Gaussian
blur over the previous image.
• On the right, we have four images
generated by subtracting the
consecutive Gaussians.
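The octave construction above can be sketched as follows. The blur schedule (base σ = 1.6, scale factor 2^(1/3)) follows Lowe's paper; the toy blob image is my own:

```python
import numpy as np
from scipy import ndimage

# A toy image with a bright blob
img = np.zeros((64, 64))
img[28:36, 28:36] = 100.0

# One octave: five progressively stronger Gaussian blurs of the same image
sigmas = [1.6 * (2 ** (i / 3)) for i in range(5)]
blurred = [ndimage.gaussian_filter(img, s) for s in sigmas]

# DoG: subtract each more-blurred image from the next less-blurred one
dog = [blurred[i] - blurred[i + 1] for i in range(4)]
# 5 blurred images give 4 DoG images, as in the diagram
```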
Keypoint Localization
• Once the images have been created, the next step is to find the important
keypoints from the image that can be used for feature matching. The idea is to
find the local maxima and minima for the images. This part is divided into two
steps:
1. Find the local maxima and minima
2. Remove low contrast keypoints (keypoint selection)
Keypoint Localization
• This means that every pixel value is compared with 26
other pixel values to find whether it is the local
maxima/minima. For example, in the below diagram, we
have three images from the first octave. The pixel
marked x is compared with the neighboring pixels (in
green) and is selected as a keypoint if it is the highest or
lowest among the neighbors:
• We now have potential keypoints that represent the
images and are scale-invariant. We will apply the last
check over the selected keypoints to ensure that these
are the most accurate keypoints to represent the image.
Orientation Assignment
• At this stage, we have a set of stable keypoints for the images. We will now
assign an orientation to each of these keypoints so that they are invariant to
rotation. We can again divide this step into two smaller steps:
1. Calculate the magnitude and orientation
2. Create a histogram for magnitude and orientation
Orientation Assignment
• Let’s say we want to find the magnitude and orientation for
the pixel value in red. For this, we will calculate the
gradients in x and y directions by taking the difference
between 55 & 46 and 56 & 42. This comes out to be Gx = 9
and Gy = 14 respectively.
• Once we have the gradients, we can find the magnitude and
orientation using the following formulas:
• Magnitude = √[(Gx)² + (Gy)²] = √(81 + 196) ≈ 16.64
• Φ = atan(Gy / Gx) = atan(14/9) ≈ 57.3°
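Checking the arithmetic in NumPy (note that atan(14/9) evaluates to roughly 57.3°):

```python
import numpy as np

Gx, Gy = 9.0, 14.0   # the differences 55 - 46 and 56 - 42 from the slide

magnitude = np.hypot(Gx, Gy)                   # sqrt(9^2 + 14^2)
orientation = np.degrees(np.arctan2(Gy, Gx))   # atan(14/9) in degrees
```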
Keypoint Descriptor
• This is the final step for SIFT. So far, we have stable keypoints that are scale-
invariant and rotation invariant. In this section, we will use the neighboring
pixels, their orientations, and magnitude, to generate a unique fingerprint for
this keypoint called a ‘descriptor’.
• Additionally, since we use the surrounding pixels, the descriptors will be
partially invariant to illumination or brightness of the images.
• We will first take a 16×16 neighborhood around the keypoint. This 16×16 block is
further divided into 4×4 sub-blocks and for each of these sub-blocks, we generate
the histogram using magnitude and orientation.
HOG Features
PREPARED BY:
PROF. KHUSHALI B. KATHIRIYA
HOG (Histogram of Oriented Gradients)
Gradient Magnitude and Angle
HOG (Histogram of Oriented Gradients)
• Consider the below image of size (180 x 280). Let us take a detailed look at how
the HOG features will be created for this image:
HOG (Histogram of Oriented Gradients) (Cont.)
• Step:1 Preprocessed Data (64 x128)
• We need to preprocess the image and bring down the width to height ratio to 1:2. The
image size should preferably be 64 x 128. This is because we will be dividing the
image into 8*8 and 16*16 patches to extract the features. Having the specified size (64
x 128) will make all our calculations pretty simple. In fact, this is the exact value used
in the original paper.
HOG (Histogram of Oriented Gradients) (Cont.)
• Step:2 Calculating Gradients (direction x and y)
• The next step is to calculate the gradient for every pixel in the image. Gradients are
the small change in the x and y directions. Here, I am going to take a small patch
from the image and calculate the gradients on that:
HOG (Histogram of Oriented Gradients) (Cont.)
• I have highlighted the pixel value 85. Now, to determine the gradient (or change)
in the x-direction, we need to subtract the value on the left from the pixel value
on the right. Similarly, to calculate the gradient in the y-direction, we will
subtract the pixel value below from the pixel value above the selected pixel.
• Hence the resultant gradients in the x and y direction for this pixel are:
• Change in X direction(Gx) = 89 – 78 = 11
• Change in Y direction(Gy) = 68 – 56 = 8
HOG (Histogram of Oriented Gradients) (Cont.)
• Step 3: Calculate the Magnitude and Orientation
• The gradients are basically the base and perpendicular here. So, for the previous
example, we had Gx and Gy as 11 and 8. Let’s apply the Pythagoras theorem to
calculate the total gradient magnitude:
• Total Gradient Magnitude = √[(Gx)² + (Gy)²]
• Total Gradient Magnitude = √[(11)² + (8)²] ≈ 13.6
• Next, calculate the orientation (or direction) for the same pixel. We know that we can
write the tan for the angles:
• tan(Φ) = Gy / Gx
• Hence, the value of the angle would be:
• Φ = atan(Gy / Gx) = atan(8/11) ≈ 36°
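Checking this pixel's numbers in NumPy:

```python
import numpy as np

Gx, Gy = 11.0, 8.0   # the pixel's gradients from step 2

magnitude = np.hypot(Gx, Gy)                   # sqrt(11^2 + 8^2)
orientation = np.degrees(np.arctan2(Gy, Gx))   # atan(8/11) in degrees
```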
HOG (Histogram of Oriented Gradients) (Cont.)
• Step 4: Create Histograms using Gradients and Orientation
• A histogram is a plot that shows the frequency distribution of a set of continuous
data. We have the variable (in the form of bins) on the x-axis and the frequency on
the y-axis. Here, we are going to take the angle or orientation on the x-axis and the
frequency on the y-axis.
• we can use the gradient magnitude to fill the values in the matrix.
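A sketch of the magnitude-weighted voting for one cell, assuming unsigned orientations in 0–180° and nine 20°-wide bins (the random toy data is my own):

```python
import numpy as np

# Toy 8x8 cell: per-pixel orientations (0-180 deg, unsigned) and magnitudes
rng = np.random.default_rng(1)
orientation = rng.uniform(0, 180, size=(8, 8))
magnitude = rng.uniform(0, 20, size=(8, 8))

# 9 bins of 20 degrees each (0-20, 20-40, ..., 160-180); every pixel
# votes into the bin containing its orientation, weighted by magnitude
hist = np.zeros(9)
for o, m in zip(orientation.ravel(), magnitude.ravel()):
    hist[int(o // 20) % 9] += m
# hist is the 9x1 feature vector for this cell
```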
HOG (Histogram of Oriented Gradients) (Cont.)
• Step: 5 Calculate Histogram of Gradients in 8×8 cells (9×1)
• The histograms created in the HOG feature descriptor are not
generated for the whole image. Instead, the image is divided into
8×8 cells, and the histogram of oriented gradients is computed for
each cell. Why do you think this happens?
• By doing so, we get the features (or histogram) for the smaller
patches which in turn represent the whole image. We can certainly
change this value here from 8 x 8 to 16 x 16 or 32 x 32.
• If we divide the image into 8×8 cells and generate the histograms,
we will get a 9 x 1 matrix for each cell. This matrix is generated
using the method from step 4 that we discussed in the previous section.
HOG (Histogram of Oriented Gradients) (Cont.)
• Step: 6 Normalize gradients in 16×16 cell (36×1)
• Before we understand how this is done, it’s important to understand why this is done
in the first place.
• Although we already have the HOG features created for the 8×8 cells of the image,
the gradients of the image are sensitive to the overall lighting. This means that for a
particular picture, some portion of the image would be very bright as compared to
the other portions.
• We cannot completely eliminate this from the image. But we can reduce this lighting
variation by normalizing the gradients by taking 16×16 blocks. Here is an example
that can explain how 16×16 blocks are created:
HOG (Histogram of Oriented Gradients) (Cont.)
HOG (Histogram of Oriented Gradients) (Cont.)
• Here, we combine four 8×8 cells to create one 16×16 block. We already
know that each 8×8 cell has a 9×1 histogram, so a block holds four 9×1
matrices, i.e. a single 36×1 vector. To normalize this vector, we divide each of
its values by the square root of the sum of squares of all the values. Mathematically,
for a given vector V:
• V = [a1, a2, a3, …, a36]
• we calculate the root of the sum of squares:
• k = √(a1² + a2² + a3² + … + a36²)
• and divide every value in the vector V by this value k:
Prepared by: Prof. Khushali B Kathiriya
157
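The block normalization above can be sketched as follows. This is a minimal illustration; `normalize_block` is a hypothetical helper, and the small `eps` term is an assumption added to avoid division by zero for empty blocks:

```python
import numpy as np

def normalize_block(cell_hists, eps=1e-6):
    """L2-normalize one 16x16 block built from four 8x8 cell histograms.
    cell_hists: list of four 9x1 histograms -> one 36x1 block vector."""
    v = np.concatenate(cell_hists)     # 4 * 9 = 36 values
    k = np.sqrt(np.sum(v ** 2)) + eps  # root of the sum of squares
    return v / k

# four identical toy 9-bin histograms -> one normalized 36x1 vector
four_cells = [np.arange(9, dtype=float) for _ in range(4)]
block = normalize_block(four_cells)
# the result has (almost exactly) unit L2 norm
```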
HOG (Histogram of Oriented Gradients) (Cont.)
• Step 7: Features for the complete image
• We are now at the final step of generating HOG features for the
image. So far, we have created features for the 16×16 blocks of the
image. Now we combine all of them to get the features for the
final image.
• Can you guess the total number of features we will have for the
given image? First we need to find out how many such 16×16
blocks we get for a single 64×128 image:
Prepared by: Prof. Khushali B Kathiriya
158
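Assuming the standard choice that the 16×16 blocks slide with a stride of one 8×8 cell (8 pixels), the block count and feature total for a 64×128 window work out as:

```python
# For a 64x128 detection window with 8x8 cells and 16x16 blocks
# that slide with a stride of 8 pixels (one cell):
cells_x, cells_y = 64 // 8, 128 // 8       # 8 x 16 cells
blocks_x = cells_x - 1                     # 7 block positions across
blocks_y = cells_y - 1                     # 15 block positions down
total_features = blocks_x * blocks_y * 36  # 105 blocks * 36 values = 3780
```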
HOG (Histogram of Oriented Gradients) (Cont.)
Prepared by: Prof. Khushali B Kathiriya
159
Shape Context Descriptor
PREPARED BY:
PROF. KHUSHALI B. KATHIRIYA
Shape Context Descriptor
• Shape Context is a scale- and rotation-invariant descriptor of shapes (not of whole images).
Prepared by: Prof. Khushali B Kathiriya
165
Shape Context Descriptor (Cont.)
• Matching shapes can be a much harder task than simply matching images; consider,
for example, the recognition of handwritten text or fingerprints, where most of
the shapes we try to match are heavily distorted. You will probably never write
two identical letters in your whole life, so a person-identification algorithm
based on handwriting matching faces exactly this difficulty.
• Of course, in the age of neural networks and RNNs this can also be solved in
ways other than straight mathematics, but you cannot always use heavy,
memory-hungry models like NNs. Even today, most hardware such as phones, cameras,
and audio processors runs good old mathematical algorithms to keep things going.
And it is always cool to do some mathematical magic.
Prepared by: Prof. Khushali B Kathiriya
166
Shape Context Descriptor (Cont.)
Prepared by: Prof. Khushali B Kathiriya
167
Shape Context Descriptor (Cont.)
• The main idea is quite simple: build a histogram of the points in a polar
coordinate system. For each point, compute the Euclidean distance and angle
to every other point, normalize them, and, using a log-space map, count the
number of points that fall in each map region. And that is all.
• To make this rotation invariant, one option is to rotate the map by minus the
angle between the two most distant points, but it can be done in other ways too.
Prepared by: Prof. Khushali B Kathiriya
168
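The idea above can be sketched as follows. This is a minimal illustration: `shape_context` is a hypothetical helper, and the bin counts and the normalization by the mean pairwise distance are assumptions, not a fixed specification:

```python
import numpy as np

def shape_context(points, n_r=5, n_theta=12):
    """Log-polar shape-context histogram for each point of a 2-D shape.
    points: (N, 2) array of contour points. Returns (N, n_r * n_theta)."""
    n = len(points)
    d = points[:, None, :] - points[None, :, :]  # pairwise offsets
    dist = np.hypot(d[..., 0], d[..., 1])
    # normalizing distances by the mean pairwise distance gives scale invariance
    mean_d = dist[dist > 0].mean()
    with np.errstate(divide="ignore"):
        log_r = np.log(dist / mean_d)
    theta = np.arctan2(d[..., 1], d[..., 0])     # angles in (-pi, pi]
    # bin edges: n_r log-radius rings, n_theta angular sectors
    finite = log_r[dist > 0]
    r_edges = np.linspace(finite.min(), finite.max(), n_r + 1)
    hists = np.zeros((n, n_r * n_theta))
    for i in range(n):
        mask = np.arange(n) != i                 # skip the point itself
        r_bin = np.clip(np.searchsorted(r_edges, log_r[i, mask]) - 1, 0, n_r - 1)
        t_bin = ((theta[i, mask] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
        np.add.at(hists[i], r_bin * n_theta + t_bin, 1)
    return hists

# toy shape: the four corners of a unit square
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
h = shape_context(pts)  # each row sums to N - 1 = 3 counted neighbours
```

The rotation-invariance step mentioned above (rotating the map by the angle between the two most distant points) would be applied to `theta` before binning.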
Shape Context Descriptor (Cont.)
• Shape information is an important visual cue for object recognition. Broadly,
shape descriptors (SDs) are categorized as contour-based and region-based.
Contour-based shape descriptors make use of only the boundary
information of a shape, extracted by a conventional edge or contour detection
approach.
Prepared by: Prof. Khushali B Kathiriya
169
Shape Context Descriptor (Cont.)
• A logical grid of constant size is applied to the boundary image, and we extract
the points where the grid intersects the boundary, as shown in
Figure 10. This set of descriptor points is used to compute a feature
descriptor representing the shape information of the image.
• The relative arrangement of these descriptor points is represented using a log-
polar histogram. A log-polar histogram is calculated for each descriptor point,
and the final feature descriptor is the summation of the log-polar
histograms over all descriptor points [7].
Prepared by: Prof. Khushali B Kathiriya
170
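The grid-intersection step can be sketched as follows. This is a toy illustration: `grid_descriptor_points` is a hypothetical helper, and treating a pixel as lying on the grid when its row or column index is a multiple of the grid step is an assumption about how the logical grid is realized:

```python
import numpy as np

def grid_descriptor_points(boundary, step=3):
    """Extract descriptor points where a constant-size logical grid
    intersects a binary boundary image (nonzero = boundary pixel)."""
    ys, xs = np.nonzero(boundary)
    on_grid = (ys % step == 0) | (xs % step == 0)  # pixel lies on a grid line
    return np.column_stack([ys[on_grid], xs[on_grid]])

# toy boundary: a 9x9 square outline, grid lines every 3 pixels
img = np.zeros((9, 9), dtype=np.uint8)
img[0, :] = img[-1, :] = img[:, 0] = img[:, -1] = 1
pts = grid_descriptor_points(img, step=3)
```

The resulting `pts` array would then be fed to the log-polar histogram computation described above.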

CV_Chap 3 Features Detection

  • 1. Computer Vision Chap.3 : Feature Detection SUB CODE: 3171614 SEMESTER: 7TH IT PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
  • 2. Outline • Edge Detection • Corner Detection • Line and Curve Detection • Active Contours • SIFT and HOG Descriptors • Shape Context Descriptors • Morphological Operations Prepared by: Prof. Khushali B Kathiriya 2
  • 4. Key Stages in Digital Image Processing Image Acquisition Image Restoration Morphological Processing Segmentation Representation & Description Image Enhancement Object Recognition Problem Domain Colour Image Processing Image Compression Prepared by: Prof. Khushali B Kathiriya 4
  • 5. Key Stages in Digital Image Processing: Morphological Processing Image Acquisition Image Restoration Morphological Processing Segmentation Representation & Description Image Enhancement Object Recognition Problem Domain Colour Image Processing Image Compression Prepared by: Prof. Khushali B Kathiriya 5
  • 6. Contents • Once segmentation is complete, morphological operations can be used to remove imperfections in the segmented image and provide information on the form and structure of the image • In this lecture we will consider • What is morphology? • Simple morphological operations • Compound operations • Morphological algorithms
  • 7. 1, 0, Black, White? • Throughout all of the following slides whether 0 and 1 refer to white or black is a little interchangeable • All of the discussion that follows assumes segmentation has already taken place and that images are made up of 0s for background pixels and 1s for object pixels • After this it doesn’t matter if 0 is black, white, yellow, green…….
  • 8. What Is Morphology? • Morphological image processing (or morphology) describes a range of image processing techniques that deal with the shape (or morphology) of features in an image • Morphological operations are typically applied to remove imperfections introduced during segmentation, and so typically operate on bi-level images
  • 9. Quick Example Image after segmentation Image after segmentation and morphological processing
  • 10. Structuring Elements, Hits & Fits B A C Structuring Element Fit: All on pixels in the structuring element cover on pixels in the image Hit: Any on pixel in the structuring element covers an on pixel in the image All morphological processing operations are based on these simple ideas
  • 11. Structuring Elements • Structuring elements can be of any size and take any shape • However, for simplicity we will use rectangular structuring elements with their origin at the middle pixel. Examples (1 = element pixel):
3×3 square:
1 1 1
1 1 1
1 1 1
5×5 diamond:
0 0 1 0 0
0 1 1 1 0
1 1 1 1 1
0 1 1 1 0
0 0 1 0 0
3×3 cross:
0 1 0
1 1 1
0 1 0
  • 12. What is the use of Hit, Miss and Fit? • The hit-and-miss approach can be used to thin and skeletonize a shape in a binary image.
  • 13. Fitting & Hitting (figure: a binary image with probe positions A, B and C, tested against Structuring Element 1, a 3×3 square of ones, and Structuring Element 2, a smaller rectangle of ones)
  • 14. Fitting & Hitting (figure: the same binary image with the fit/hit outcome at each probe position)
  • 15. Fundamental Operations • Fundamentally morphological image processing is very much like spatial filtering • The structuring element is moved across every pixel in the original image to give a pixel in a new processed image • The value of this new pixel depends on the operation performed • There are two basic morphological operations: erosion and dilation
  • 17. Erosion • Erosion of image f by structuring element s is given by f ⊖ s • The structuring element s is positioned with its origin at (x, y) and the new pixel value is determined using the rule: g(x, y) = 1 if s fits f, and 0 otherwise
  • 18. Erosion Example Structuring Element Original Image Processed Image With Eroded Pixels
  • 19. Erosion Example • Structuring Element, Original Image, Processed Image (figure)
  • 20. Erosion Example 1 Watch out: In these examples a 1 refers to a black pixel! Original image Erosion by 3*3 square structuring element Erosion by 5*5 square structuring element
  • 21. Erosion Example 2 • It acts as a morphological filtering operation in which image details smaller than the structuring elements are filtered from the image Original image After erosion with a disc of radius 10 After erosion with a disc of radius 20 After erosion with a disc of radius 5
  • 22. What Is Erosion For? Erosion can split apart joined objects Erosion can strip away extrusions Watch out: Erosion shrinks objects
  • 23. Apply Erosion on image (3×3 square structuring element of ones):
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 0 0 0
0 0 0 1 1 1 1 0 0 0
0 0 0 1 1 1 1 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
  • 24. Eroded Image:
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
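The fit rule above can be sketched in a few lines of pure Python on the slide's 9×10 example (the helper name and the 3×3 all-ones structuring element are illustrative, not from the slides):

```python
# Binary erosion sketch: output pixel is 1 only where the 3x3 square
# structuring element FITS the image (border pixels are left at 0).

def erode(img, k=3):
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            # 1 only if every pixel under the element is 1 (a "fit")
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in range(-r, r + 1)
                                for dx in range(-r, r + 1)))
    return out

# The 9x10 example from the slides: a 3x4 block of ones
img = [[0] * 10 for _ in range(9)]
for y in range(3, 6):
    for x in range(3, 7):
        img[y][x] = 1

eroded = erode(img)
print(sum(map(sum, eroded)))  # -> 2 : only the two centre pixels survive
```

Running this reproduces the eroded image shown on the slide: the 3×4 block shrinks to the two pixels at its centre.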
  • 26. Dilation • Dilation of image f by structuring element s is given by f ⊕ s • The structuring element s is positioned with its origin at (x, y) and the new pixel value is determined using the rule: g(x, y) = 1 if s hits f, and 0 otherwise
  • 28. Dilation Example Structuring Element Original Image Processed Image With Dilated Pixels
  • 29. Dilation Example 1 Original image Dilation by 3*3 square structuring element Dilation by 5*5 square structuring element Watch out: In these examples a 1 refers to a black pixel!
  • 30. Dilation Example 2 Structuring element Original image After dilation
  • 31. What Is Dilation For? Dilation can repair breaks or bridge gaps Dilation can repair intrusions Watch out: Dilation enlarges objects
  • 32. Dilation • The advantage of dilation over a low-pass filter used to bridge gaps is that • dilation results directly in a binary image • a low-pass filter starts with a binary image and produces a grey-scale image, which would require a pass with a thresholding function to convert it back to binary form
  • 33. Apply Dilation on image (3×3 square structuring element of ones):
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 0 0 0
0 0 0 1 1 1 1 0 0 0
0 0 0 1 1 1 1 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
  • 34. Dilated Image:
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
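The hit rule can be sketched the same way (again a pure-Python illustration with a 3×3 all-ones element, not the slides' exact code):

```python
# Binary dilation sketch: output pixel is 1 wherever the structuring
# element HITS (overlaps) at least one object pixel.

def dilate(img, k=3):
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(
                img[y + dy][x + dx]
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)
                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

# Same 9x10 example: a 3x4 block of ones
img = [[0] * 10 for _ in range(9)]
for y in range(3, 6):
    for x in range(3, 7):
        img[y][x] = 1

dilated = dilate(img)
print(sum(map(sum, dilated)))  # -> 30 : the 3x4 block grows to 5x6
```

The object grows by one pixel in every direction, matching the dilated image above.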
  • 35. Compound Operations PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
  • 36. Compound Operations • More interesting morphological operations can be performed by performing combinations of erosions and dilations • The most widely used of these compound operations are: • Opening • Closing
  • 38. Opening • The opening of image f by structuring element s, denoted f ○ s, is simply an erosion followed by a dilation: f ○ s = (f ⊖ s) ⊕ s • Original shape, After erosion, After dilation (opening) • Note a disc shaped structuring element is used
  • 41. Opening Example • Structuring Element, Original Image, Processed Image: f ⊖ s, then (f ⊖ s) ⊕ s = f ○ s
  • 42. Advantages of opening • Opening generally smoothes the contour of an object • Breaks narrow isthmuses, • And eliminates thin protrusions
  • 44. Closing • The closing of image f by structuring element s, denoted f • s, is simply a dilation followed by an erosion: f • s = (f ⊕ s) ⊖ s • Original shape, After dilation, After erosion (closing) • Note a disc shaped structuring element is used
  • 48. Advantages of closing • Closing also tends to smooth sections of contours • It generally fuses narrow breaks and long thin gulfs • It eliminates small holes and fills gaps in the contour
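Both compound operations can be built directly from the two basic ones. A minimal self-contained sketch (3×3 square element, illustrative helper names) showing closing filling a one-pixel hole:

```python
# Opening = erosion then dilation; closing = dilation then erosion.
# Border pixels are treated as background in this sketch.

def erode(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def opening(img):   # f o s = (f erode s) dilate s
    return dilate(erode(img))

def closing(img):   # f . s = (f dilate s) erode s
    return erode(dilate(img))

# A 5x5 object with a one-pixel hole: closing fills the hole
img = [[0] * 9 for _ in range(9)]
for y in range(2, 7):
    for x in range(2, 7):
        img[y][x] = 1
img[4][4] = 0  # the hole

closed = closing(img)
print(closed[4][4])           # -> 1 : the hole has been filled
print(sum(map(sum, closed)))  # -> 25 : the solid 5x5 square
```

Note the smoothing asymmetry: opening removes small foreground detail, closing removes small background detail (holes and gaps), exactly as the slides describe.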
  • 50. Morphological Algorithms • Using the simple technique we have looked at so far we can begin to consider some more interesting morphological algorithms • We will look at: • Boundary extraction • Region filling • There are lots of others as well though: • Extraction of connected components • Thinning/thickening • Skeletonisation
  • 51. Boundary Extraction PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
  • 52. Boundary Extraction • Extracting the boundary (or outline) of an object is often extremely useful • The boundary can be given simply as • β(A) = A − (A ⊖ B), where B is a suitable structuring element
  • 53. Boundary Extraction Example • A simple image and the result of performing boundary extraction using a square 3*3 structuring element Original Image Extracted Boundary
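The β(A) = A − (A ⊖ B) formula can be sketched directly (pure Python, 3×3 square element; names are illustrative):

```python
# Boundary extraction: erode the object, then subtract the eroded
# image from the original, leaving only the boundary pixels.

def erode(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def boundary(img):
    er = erode(img)
    return [[img[y][x] - er[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]

# Solid 5x5 square: its boundary is the 16-pixel perimeter
img = [[0] * 9 for _ in range(9)]
for y in range(2, 7):
    for x in range(2, 7):
        img[y][x] = 1

b = boundary(img)
print(sum(map(sum, b)))   # -> 16 : perimeter of a 5x5 square
print(b[2][2], b[4][4])   # -> 1 0 : corner is boundary, centre is not
```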
  • 55. Contents • Today we will continue to look at the problem of segmentation, this time though in terms of thresholding • In particular we will look at: • What is thresholding? • Simple thresholding • Adaptive thresholding
  • 56. Thresholding • Thresholding is usually the first step in any segmentation approach • We have talked about simple single value thresholding already • Thresholding is used to produce regions of uniformity within the given image based on some threshold criteria T
  • 57. Thresholding • Single value thresholding can be given mathematically as follows: g(x, y) = 1 if f(x, y) > T, and 0 if f(x, y) ≤ T
  • 58. Thresholding Example • Imagine a poker playing robot that needs to visually interpret the cards in its hand Original Image Thresholded Image
  • 59. But Be Careful • If you get the threshold wrong, the results can be disastrous Threshold Too Low Threshold Too High
  • 60. Basic Global Thresholding • Based on the histogram of an image • Partition the image histogram using a single global threshold • The success of this technique very strongly depends on how well the histogram can be partitioned • Types of thresholding :- 1) Global thresholding 2) Local thresholding
  • 61. Basic Global Thresholding Algorithm The basic global threshold, T, is calculated as follows: 1. Select an initial estimate for T (typically the average grey level in the image) 2. Segment the image using T to produce two groups of pixels: 1. G1 consisting of pixels with grey levels > T 2. G2 consisting of pixels with grey levels ≤ T 3. Compute the average grey levels of pixels in G1 to give μ1 and G2 to give μ2
  • 62. Basic Global Thresholding Algorithm 4. Compute a new threshold value: T = (μ1 + μ2) / 2 5. Repeat steps 2 – 4 until the difference in T in successive iterations is less than a predefined limit ΔT This algorithm works very well for finding thresholds when the histogram is suitable
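Steps 1 to 5 translate almost line-for-line into code. A sketch on a synthetic bimodal set of grey levels (the data and function name are illustrative):

```python
# Iterative basic global thresholding, following the five steps above.

def basic_global_threshold(pixels, eps=0.5):
    T = sum(pixels) / len(pixels)           # step 1: mean grey level
    while True:
        g1 = [p for p in pixels if p > T]   # step 2: split at T
        g2 = [p for p in pixels if p <= T]
        m1 = sum(g1) / len(g1) if g1 else T # step 3: group means
        m2 = sum(g2) / len(g2) if g2 else T
        new_T = (m1 + m2) / 2               # step 4: new threshold
        if abs(new_T - T) < eps:            # step 5: converged?
            return new_T
        T = new_T

# Bimodal data: dark background around 20, bright object around 200
pixels = [18, 20, 22, 19, 21] * 10 + [198, 200, 202, 199, 201] * 5
T = basic_global_threshold(pixels)
print(90 < T < 130)  # -> True : T settles between the two modes
```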
  • 65. Problems With Single Value Thresholding • Single value thresholding only works for bimodal histograms • Images with other kinds of histograms need more than a single threshold
  • 66. Problems With Single Value Thresholding (cont…) • Let’s say we want to isolate the contents of the bottles • Think about what the histogram for this image would look like • What would happen if we used a single threshold value?
  • 67. Single Value Thresholding and Illumination Uneven illumination can really upset a single valued thresholding scheme
  • 68. Basic Adaptive Thresholding • An approach to handle situations in which single value thresholding will not work is to divide an image into sub-images and threshold these individually • Since the threshold for each pixel depends on its location within an image this technique is said to be adaptive
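A 1-D sketch of the idea (each tile thresholded by its own mean; tile size and data are illustrative assumptions):

```python
# Adaptive thresholding sketch: split a scan line into tiles and
# threshold each tile by its own mean grey level.

def adaptive_threshold(row, tile):
    out = []
    for i in range(0, len(row), tile):
        part = row[i:i + tile]
        T = sum(part) / len(part)           # local threshold per tile
        out += [1 if p > T else 0 for p in part]
    return out

# Uneven illumination: same object/background contrast, shifted brightness
row = [10, 10, 40, 40, 100, 100, 130, 130]
print(adaptive_threshold(row, 4))  # -> [0, 0, 1, 1, 0, 0, 1, 1]

# A single global threshold (mean = 70) lumps the bright half together
T = sum(row) / len(row)
print([1 if p > T else 0 for p in row])  # -> [0, 0, 0, 0, 1, 1, 1, 1]
```

The adaptive version recovers the object in both halves; the global one fails on the brighter half, which is exactly the failure mode the slides describe.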
  • 69. Basic Adaptive Thresholding Example • The image below shows an example of using adaptive thresholding with the image shown previously • As can be seen success is mixed • But, we can further subdivide the troublesome sub images for more success
  • 70. Basic Adaptive Thresholding Example (cont…) • These images show the troublesome parts of the previous problem further subdivided • After this sub division successful thresholding can be achieved
  • 71. Summary • In this lecture we have begun looking at segmentation, and in particular thresholding • We saw the basic global thresholding algorithm and its shortcomings • We also saw a simple way to overcome some of these limitations using adaptive thresholding
  • 72. Contents • So far we have been considering image processing techniques used to transform images for human interpretation • Today we will begin looking at automated image analysis by examining the thorny issue of image segmentation: • The segmentation problem • Finding points, lines and edges
  • 73. Image Segmentation PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
  • 74. The Segmentation Problem • Segmentation attempts to partition the pixels of an image into groups that strongly correlate with the objects in an image • Typically the first step in any automated computer vision application
  • 75. The Segmentation Problem • Image Segmentation can be achieved in any one of two ways:- 1) Segmentation based on discontinuities in intensity – it partitions the image based on abrupt changes in intensity, such as edges 2) Segmentation based on similarities in intensity – it divides the image into regions that are similar according to a set of predefined criteria
  • 77. Detection Of Discontinuities • There are three basic types of grey level discontinuities that we tend to look for in digital images: 1. Points 2. Lines 3. Edges • We typically find discontinuities using masks and correlation • These are all high-frequency components, hence we need masks that are basically high-pass
  • 78. Point Detection PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
  • 79. Point Detection • Point detection can be achieved simply using the mask below: • Points are detected at those pixels in the subsequent filtered image that are above a set threshold
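The slide's mask image has not survived extraction; the standard point-detection mask is the Laplacian-like 8/−1 mask, which is assumed below. A sketch of correlating it with a flat image containing one bright point:

```python
# Point detection: correlate with a high-pass mask; an isolated point
# produces a large response that can then be thresholded.

MASK = [[-1, -1, -1],
        [-1,  8, -1],
        [-1, -1, -1]]

def correlate(img, mask):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(mask[j + 1][i + 1] * img[y + j][x + i]
                            for j in (-1, 0, 1) for i in (-1, 0, 1))
    return out

# Flat grey image with one isolated bright point
img = [[50] * 7 for _ in range(7)]
img[3][3] = 250

resp = correlate(img, MASK)
print(resp[3][3])  # -> 1600 : 8*250 - 8*50, far above any flat-region response
```

Thresholding |response| then keeps only the isolated point, as on the turbine-blade slide that follows.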
  • 80. Point Detection
  • 81. Point Detection (Cont…) X-ray image of a turbine blade Result of point detection Result of thresholding
  • 82. Line Detection PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
  • 83. Line Detection • The next level of complexity is to try to detect lines • The masks below will extract lines that are one pixel thick and running in a particular direction
  • 84. Line Detection
  • 85. Line Detection (cont…) Binary image of a wire bond mask After processing with -45° line detector Result of thresholding the filtering result
  • 86. Line Detection Example Apply horizontal mask on the image:
0 0 0 10 0 0 0 0
0 0 0 10 0 0 0 0
0 0 0 10 0 0 0 0
0 0 0 10 0 0 0 0
0 10 10 10 10 10 10 0
0 0 0 10 0 0 0 0
0 0 0 10 0 0 0 0
0 0 0 10 0 0 0 0
  • 87. Line Detection Example Mask response (border pixels left at 0); negative values are then set to zero, as this is a high-pass mask:
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 -20 -20 -20 -20 -30 -20 0
0 40 40 40 40 60 40 0
0 -20 -20 -20 -20 -30 -20 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
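The worked example can be reproduced by correlating the image with the standard horizontal line mask [-1 -1 -1; 2 2 2; -1 -1 -1] (a sketch; helper names are illustrative):

```python
# Horizontal line detection: the horizontal line row responds strongly,
# rows adjacent to it respond negatively, flat regions give 0.

H_MASK = [[-1, -1, -1],
          [ 2,  2,  2],
          [-1, -1, -1]]

def correlate(img, mask):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(mask[j + 1][i + 1] * img[y + j][x + i]
                            for j in (-1, 0, 1) for i in (-1, 0, 1))
    return out

V = [0, 0, 0, 10, 0, 0, 0, 0]           # rows containing only the vertical line
img = [V, V, V, V,
       [0, 10, 10, 10, 10, 10, 10, 0],  # the horizontal line
       V, V, V]

resp = correlate(img, H_MASK)
print(resp[4])  # -> [0, 40, 40, 40, 40, 60, 40, 0]
print(resp[3])  # -> [0, -20, -20, -20, -20, -30, -20, 0]
```

After clamping negatives to zero and thresholding, only the horizontal line survives; the vertical line is suppressed.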
  • 88. Edge Detection PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
  • 89. Edge Detection An edge is a set of connected pixels that lie on the boundary between two regions
  • 90. Edges & Derivatives • We have already spoken about how derivatives are used to find discontinuities • 1st derivative tells us where an edge is • 2nd derivative can be used to show edge direction
  • 91. Derivatives & Noise • Derivative based edge detectors are extremely sensitive to noise • We need to keep this in mind
  • 92. Common Edge Detectors • Given a 3*3 region of an image the following edge detection filters can be used
  • 94. Edge Detection Example Original Image Horizontal Gradient Component Vertical Gradient Component Combined Edge Image
  • 99. Edge Detection Problems • Often, problems arise in edge detection in that there is too much detail • For example, the brickwork in the previous example • One way to overcome this is to smooth images prior to edge detection
  • 100. Edge Detection Example With Smoothing Original Image Horizontal Gradient Component Vertical Gradient Component Combined Edge Image
  • 101. Laplacian Edge Detection PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
  • 102. Laplacian Edge Detection • We encountered the 2nd-order derivative based Laplacian filter already • The Laplacian is typically not used by itself as it is too sensitive to noise • Usually when used for edge detection the Laplacian is combined with a smoothing Gaussian filter
  • 103. Laplacian Edge Detection
  • 104. Laplacian Of Gaussian • The Laplacian of Gaussian (or Mexican hat) filter uses the Gaussian for noise removal and the Laplacian for edge detection
  • 106. Summary • In this lecture we have begun looking at segmentation, and in particular edge detection • Edge detection is massively important as it is in many cases the first step to object recognition
  • 107. Active Contours PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
  • 108. Active Contour (Boundary Detection) • Segmentation is the branch of image processing concerned with separating the required target region from the rest of the image. There are different techniques for segmenting the pixels of interest from an image. • Active contour is one of the active models among segmentation techniques; it makes use of the energy constraints and forces in the image to separate a region of interest. An active contour defines a separate boundary or curvature for the regions of the target object for segmentation.
  • 109. Active Contour(Cont.) • Application of Active Contour • Medical Imaging • Brain CT images • MRI images • Cardiac images • Motion Tracking • Stereo Tracking Prepared by: Prof. Khushali B Kathiriya 109
  • 110. What is Active Contour?
  • 111. What is Active Contour?
  • 112. Example of Active Contour
  • 113. Example of Active Contour (Cont.) Medical Imaging
  • 114. Example of Active Contour (Cont.) Motion Tracking
  • 115. SIFT Features PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
  • 116. SIFT Features • SIFT stands for Scale-Invariant Feature Transform and was first presented in 2004 by D. Lowe, University of British Columbia. SIFT is invariant to image scale and rotation.
  • 117. SIFT Features (Cont.) • Major advantages of SIFT are • Locality: features are local, so robust to occlusion and clutter (no prior segmentation) • Distinctiveness: individual features can be matched to a large database of objects • Quantity: many features can be generated for even small objects • Efficiency: close to real-time performance • Extensibility: can easily be extended to a wide range of different feature types, with each adding robustness Prepared by: Prof. Khushali B Kathiriya 117
  • 118. SIFT Features (Cont.)
  • 119. Comparing SIFT Descriptors
  • 120. Comparing SIFT Descriptors (Cont.)
  • 121. Comparing SIFT Descriptors (Cont.)
  • 122. SIFT Results: Scale Invariance
  • 123. SIFT Results: Rotation Invariance
  • 124. SIFT Results: Robustness to Clutter
  • 125. Panorama Stitching using SIFT
  • 126. Auto Collage using SIFT
  • 127. SIFT for 3D objects? • It does not really match well in all cases.
  • 128. The Algorithm • SIFT is quite an involved algorithm. There are mainly four steps involved in the SIFT algorithm, followed by matching. We will see them one-by-one. • Scale-space peak selection: Potential locations for finding features. • Keypoint Localization: Accurately locating the feature keypoints. • Orientation Assignment: Assigning orientation to keypoints. • Keypoint descriptor: Describing the keypoints as a high dimensional vector. • Keypoint Matching
  • 130. The SIFT algorithm • The resplendent Eiffel Tower, of course! The keen-eyed among you will also have noticed that each image has a different background, is captured from different angles, and also has different objects in the foreground (in some cases). • I’m sure all of this took you a fraction of a second to figure out. It doesn’t matter if the image is rotated at a weird angle or zoomed in to show only half of the Tower. This is primarily because you have seen the images of the Eiffel Tower multiple times and your memory easily recalls its features. We naturally understand that the scale or angle of the image may change but the object remains the same. • But machines have an almighty struggle with the same idea. It’s a challenge for them to identify the object in an image if we change certain things (like the angle or the scale). Here’s the good news – machines are super flexible and we can teach them to identify images at an almost human-level. Prepared by: Prof. Khushali B Kathiriya 130
  • 131. We will see: 1. Introduction to SIFT 2. Constructing a Scale Space 1. Gaussian Blur 2. Difference of Gaussian 3. Keypoint Localization 1. Local Maxima/Minima 2. Keypoint Selection 4. Orientation Assignment 1. Calculate Magnitude & Orientation 2. Create Histogram of Magnitude & Orientation 5. Keypoint Descriptor 6. Feature Matching
  • 132. Introduction to SIFT • SIFT, or Scale Invariant Feature Transform, is a feature detection algorithm in Computer Vision. • SIFT helps locate the local features in an image, commonly known as the 'keypoints' of the image. These keypoints are scale and rotation invariant and can be used for various computer vision applications, like image matching, object detection, scene detection, etc. • For example, here is another image of the Eiffel Tower along with its smaller version. The keypoints of the object in the first image are matched with the keypoints found in the second image. The same goes for two images when the object in the other image is slightly rotated.
  • 133. Introduction to SIFT (Cont.) • Constructing a Scale Space: To make sure that features are scale-independent • Keypoint Localisation: Identifying the suitable features or keypoints • Orientation Assignment: Ensure the keypoints are rotation invariant • Keypoint Descriptor: Assign a unique fingerprint to each keypoint Prepared by: Prof. Khushali B Kathiriya 133
  • 134. Constructing the scale space • We use the Gaussian Blurring technique to reduce the noise in an image. • So, for every pixel in an image, the Gaussian Blur calculates a value based on its neighboring pixels. Below is an example of an image before and after applying the Gaussian Blur. As you can see, the texture and minor details are removed from the image and only the relevant information like the shape and edges remains:
  • 135. Constructing the scale space (Cont.) • Hence, these blurred images are created for multiple scales. To create a new set of images at different scales, we take the original image and reduce its size by half. For each new image, we create blurred versions as we saw above.
  • 137. Difference of Gaussian • Difference of Gaussian is a feature enhancement algorithm that involves the subtraction of one blurred version of an original image from another, less blurred version of the original. Prepared by: Prof. Khushali B Kathiriya 137
  • 138. Difference of Gaussian • Let us create the DoG for the images in scale space. Take a look at the below diagram. On the left, we have 5 images, all from the first octave (thus having the same scale). Each subsequent image is created by applying the Gaussian blur over the previous image. • On the right, we have four images generated by subtracting the consecutive Gaussians. Prepared by: Prof. Khushali B Kathiriya 138
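The subtraction of two blurred versions can be sketched in 1-D (pure Python; the kernel radius, sigmas, and step signal are illustrative assumptions):

```python
import math

# Difference of Gaussians sketch: blur a step signal with two sigmas
# and subtract. Far from the edge both blurs agree and the DoG is ~0;
# the band-pass response is concentrated around the edge.

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(signal, sigma, radius=6):
    k = gaussian_kernel(sigma, radius)
    n = len(signal)
    return [sum(k[j + radius] * signal[min(max(i + j, 0), n - 1)]
                for j in range(-radius, radius + 1))
            for i in range(n)]            # replicate-pad the borders

signal = [0.0] * 10 + [1.0] * 10          # step edge between index 9 and 10
dog = [a - b for a, b in zip(blur(signal, 1.0), blur(signal, 2.0))]

print(abs(dog[0]) < 1e-6)    # -> True : flat region, no response
print(abs(dog[8]) > 0.1)     # -> True : strong response beside the edge
print(dog[9] < 0 < dog[10])  # -> True : response changes sign at the edge
```

In SIFT the same subtraction is done in 2-D between consecutive blur levels of each octave, producing the DoG stack used for keypoint detection.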
  • 139. Keypoint Localization • Once the images have been created, the next step is to find the important keypoints from the image that can be used for feature matching. The idea is to find the local maxima and minima for the images. This part is divided into two steps: 1. Find the local maxima and minima 2. Remove low contrast keypoints (keypoint selection) Prepared by: Prof. Khushali B Kathiriya 139
  • 140. Keypoint Localization • This means that every pixel value is compared with 26 other pixel values to find whether it is the local maxima/minima. For example, in the below diagram, we have three images from the first octave. The pixel marked x is compared with the neighboring pixels (in green) and is selected as a keypoint if it is the highest or lowest among the neighbors: • We now have potential keypoints that represent the images and are scale-invariant. We will apply the last check over the selected keypoints to ensure that these are the most accurate keypoints to represent the image. Prepared by: Prof. Khushali B Kathiriya 140
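The 26-neighbour comparison can be sketched directly on three stacked DoG layers (a minimal illustration; names and toy data are assumptions):

```python
# A pixel is a scale-space extremum if it beats all 8 neighbours in its
# own layer plus all 9 pixels in the layers above and below (26 total).

def is_extremum(below, mid, above, y, x):
    c = mid[y][x]
    neigh = [layer[y + dy][x + dx]
             for layer in (below, mid, above)
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if not (layer is mid and dy == 0 and dx == 0)]
    return c > max(neigh) or c < min(neigh)

below = [[1, 1, 1], [1, 2, 1], [1, 1, 1]]
mid   = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]
above = [[1, 1, 1], [1, 3, 1], [1, 1, 1]]
print(is_extremum(below, mid, above, 1, 1))  # -> True : 9 beats all 26

mid2 = [[1, 1, 1], [1, 2, 1], [1, 1, 1]]
print(is_extremum(below, mid2, above, 1, 1))  # -> False : 3 above is larger
```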
  • 141. Orientation Assignment • At this stage, we have a set of stable keypoints for the images. We will now assign an orientation to each of these keypoints so that they are invariant to rotation. We can again divide this step into two smaller steps: 1. Calculate the magnitude and orientation 2. Create a histogram for magnitude and orientation Prepared by: Prof. Khushali B Kathiriya 141
  • 142. Orientation Assignment • Let’s say we want to find the magnitude and orientation for the pixel value in red. For this, we will calculate the gradients in x and y directions by taking the difference between 55 & 46 and 56 & 42. This comes out to be Gx = 9 and Gy = 14 respectively. • Once we have the gradients, we can find the magnitude and orientation using the following formulas: • Magnitude = √[(Gx)2+(Gy)2] = 16.64 • Φ = atan(Gy / Gx) = atan(1.55) = 57.17 Prepared by: Prof. Khushali B Kathiriya 142
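The two formulas, evaluated without intermediate rounding (the slide's 57.17° comes from first rounding 14/9 to 1.55):

```python
import math

# Gradient magnitude and orientation for the slide's example pixel.
gx, gy = 9, 14
magnitude = math.hypot(gx, gy)                  # sqrt(Gx^2 + Gy^2)
orientation = math.degrees(math.atan2(gy, gx))  # atan(Gy/Gx) in degrees

print(round(magnitude, 2))    # -> 16.64
print(round(orientation, 1))  # -> 57.3
```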
  • 143. Keypoint Descriptor • This is the final step for SIFT. So far, we have stable keypoints that are scale- invariant and rotation invariant. In this section, we will use the neighboring pixels, their orientations, and magnitude, to generate a unique fingerprint for this keypoint called a ‘descriptor’. • Additionally, since we use the surrounding pixels, the descriptors will be partially invariant to illumination or brightness of the images. • We will first take a 16×16 neighborhood around the keypoint. This 16×16 block is further divided into 4×4 sub-blocks and for each of these sub-blocks, we generate the histogram using magnitude and orientation. Prepared by: Prof. Khushali B Kathiriya 143
  • 144. Keypoint Descriptor Prepared by: Prof. Khushali B Kathiriya 144
  • 145. HOG Features PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
  • 146. HOG (Histogram of Oriented Gradients) Prepared by: Prof. Khushali B Kathiriya 146
  • 147. Gradient Magnitude and Angle Prepared by: Prof. Khushali B Kathiriya 147
  • 148. HOG (Histogram of Oriented Gradients) • Consider the below image of size (180 x 280). Let us take a detailed look at how the HOG features will be created for this image: Prepared by: Prof. Khushali B Kathiriya 148
  • 149. HOG (Histogram of Oriented Gradients) (Cont.) • Step:1 Preprocessed Data (64 x128) • We need to preprocess the image and bring down the width to height ratio to 1:2. The image size should preferably be 64 x 128. This is because we will be dividing the image into 8*8 and 16*16 patches to extract the features. Having the specified size (64 x 128) will make all our calculations pretty simple. In fact, this is the exact value used in the original paper. Prepared by: Prof. Khushali B Kathiriya 149
  • 150. HOG (Histogram of Oriented Gradients) (Cont.) • Step:2 Calculating Gradients (direction x and y) • The next step is to calculate the gradient for every pixel in the image. Gradients are the small change in the x and y directions. Here, I am going to take a small patch from the image and calculate the gradients on that: Prepared by: Prof. Khushali B Kathiriya 150
  • 151. HOG (Histogram of Oriented Gradients) (Cont.) • I have highlighted the pixel value 85. Now, to determine the gradient (or change) in the x-direction, we need to subtract the value on the left from the pixel value on the right. Similarly, to calculate the gradient in the y-direction, we will subtract the pixel value below from the pixel value above the selected pixel. • Hence the resultant gradients in the x and y direction for this pixel are: • Change in X direction (Gx) = 89 − 78 = 11 • Change in Y direction (Gy) = 68 − 56 = 12
  • 152. HOG (Histogram of Oriented Gradients) (Cont.) • Step 3: Calculate the Magnitude and Orientation • The gradients are basically the base and perpendicular of a right triangle. So, for the previous example, we had Gx and Gy as 11 and 12. Let's apply the Pythagoras theorem to calculate the total gradient magnitude: • Total Gradient Magnitude = √[(Gx)² + (Gy)²] • Total Gradient Magnitude = √[(11)² + (12)²] ≈ 16.28 • Next, calculate the orientation (or direction) for the same pixel. We know that we can write the tan for the angles: • tan(Φ) = Gy / Gx • Hence, the value of the angle would be: • Φ = atan(Gy / Gx) = atan(12/11) ≈ 47.5°
  • 153. HOG (Histogram of Oriented Gradients) (Cont.) • Step 4: Create Histograms using Gradients and Orientation • A histogram is a plot that shows the frequency distribution of a set of continuous data. We have the variable (in the form of bins) on the x-axis and the frequency on the y-axis. Here, we are going to take the angle or orientation on the x-axis and the frequency on the y-axis. • we can use the gradient magnitude to fill the values in the matrix. Prepared by: Prof. Khushali B Kathiriya 153
  • 154. HOG (Histogram of Oriented Gradients) (Cont.) • Step: 5 Calculate Histogram of Gradients in 8×8 cells (9×1) • The histograms created in the HOG feature descriptor are not generated for the whole image. Instead, the image is divided into 8×8 cells, and the histogram of oriented gradients is computed for each cell. Why do you think this happens? • By doing so, we get the features (or histogram) for the smaller patches which in turn represent the whole image. We can certainly change this value here from 8 x 8 to 16 x 16 or 32 x 32. • If we divide the image into 8×8 cells and generate the histograms, we will get a 9 x 1 matrix for each cell. This matrix is generated using method 4 that we discussed in the previous section. Prepared by: Prof. Khushali B Kathiriya 154
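The per-cell histogram can be sketched as follows. The split of each magnitude vote between the two nearest bin centres (0, 20, ..., 160 degrees) is one common HOG binning scheme, assumed here; the angle 36° and magnitude 13.6 are purely illustrative values:

```python
# 9-bin orientation histogram for one 8x8 cell: each pixel's magnitude
# vote is shared between the two nearest 20-degree bins.

def cell_histogram(angles, magnitudes, nbins=9, width=20):
    hist = [0.0] * nbins
    for a, m in zip(angles, magnitudes):
        a %= 180                             # unsigned orientation
        i = int(a // width)
        frac = (a - i * width) / width       # distance towards next bin
        hist[i % nbins] += m * (1 - frac)
        hist[(i + 1) % nbins] += m * frac
    return hist

# One pixel with orientation 36 degrees and magnitude 13.6
h = cell_histogram([36], [13.6])
print(round(h[1], 2), round(h[2], 2))  # -> 2.72 10.88 (split between bins 20 and 40)
print(round(sum(h), 2))                # -> 13.6 (total vote is preserved)
```

In practice all 64 pixels of the cell vote into the same 9 bins, giving the 9×1 matrix per cell mentioned above.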
  • 155. HOG (Histogram of Oriented Gradients) (Cont.) • Step: 6 Normalize gradients in 16×16 cell (36×1) • Before we understand how this is done, it’s important to understand why this is done in the first place. • Although we already have the HOG features created for the 8×8 cells of the image, the gradients of the image are sensitive to the overall lighting. This means that for a particular picture, some portion of the image would be very bright as compared to the other portions. • We cannot completely eliminate this from the image. But we can reduce this lighting variation by normalizing the gradients by taking 16×16 blocks. Here is an example that can explain how 16×16 blocks are created: Prepared by: Prof. Khushali B Kathiriya 155
  • 156. HOG (Histogram of Oriented Gradients) (Cont.) Prepared by: Prof. Khushali B Kathiriya 156
  • 157. HOG (Histogram of Oriented Gradients) (Cont.) • Here, we will be combining four 8×8 cells to create a 16×16 block. And we already know that each 8×8 cell has a 9×1 matrix for a histogram. So, we would have four 9×1 matrices or a single 36×1 matrix. To normalize this matrix, we will divide each of these values by the square root of the sum of squares of the values. Mathematically, for a given vector V: • V = [a1, a2, a3, …, a36] • We calculate the root of the sum of squares: • k = √[(a1)² + (a2)² + (a3)² + … + (a36)²] • And divide all the values in the vector V by this value k
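This L2 normalisation is a one-liner (the dummy 36-vector is illustrative):

```python
import math

# L2 block normalisation: divide the 36-d block vector by
# k = sqrt(a1^2 + a2^2 + ... + a36^2), giving a unit-length vector.

def l2_normalise(v):
    k = math.sqrt(sum(a * a for a in v))
    return [a / k for a in v]

v = [3.0] * 36                 # a dummy 36-d block vector
n = l2_normalise(v)
print(round(math.sqrt(sum(a * a for a in n)), 6))  # -> 1.0
print(round(n[0], 6))                              # -> 0.166667 (3 / 18)
```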
  • 158. HOG (Histogram of Oriented Gradients) (Cont.) • Step: 7 Features for the complete image • We are now at the final step of generating HOG features for the image. So far, we have created features for 16×16 blocks of the image. Now, we will combine all these to get the features for the final image. • Can you guess what would be the total number of features that we will have for the given image? We would first need to find out how many such 16×16 blocks would we get for a single 64×128 image: Prepared by: Prof. Khushali B Kathiriya 158
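The count works out as follows, assuming the 16×16 blocks overlap with a stride of 8 pixels (one cell), as in the original HOG paper:

```python
# How many 16x16 blocks fit in a 64x128 window with an 8-pixel stride,
# and how many features that yields at 36 values per block.
w_blocks = (64 - 16) // 8 + 1    # positions across
h_blocks = (128 - 16) // 8 + 1   # positions down
features = w_blocks * h_blocks * 36
print(w_blocks, h_blocks, features)  # -> 7 15 3780
```

So a single 64×128 image yields 7 × 15 = 105 blocks and a 3780-dimensional HOG feature vector.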
  • 164. Shape Context Descriptor PREPARED BY: PROF. KHUSHALI B. KATHIRIYA
  • 165. Shape Context Descriptor • Shape Context — is a scale & rotation invariant shape descriptor (not image). Prepared by: Prof. Khushali B Kathiriya 165
  • 166. Shape Context Descriptor (Cont.) • Matching shapes can be a much more difficult task than just matching images, for example recognising hand-written text or fingerprints, because most of the shapes we try to match are heavily distorted. You will never write two identical letters in your whole life; from the point of view of a person-identification algorithm based on handwriting matching, that is a nightmare. • Of course, in the age of neural networks and RNNs this can also be solved in ways other than straight mathematics, but you cannot always use heavy, memory-hungry things like NNs. Even today most hardware, like phones, cameras and audio processors, uses good old mathematical algorithms to keep things running. And it is always cool to do some mathematics magic.
  • 167. Shape Context Descriptor (Cont.) Prepared by: Prof. Khushali B Kathiriya 167
  • 168. Shape Context Descriptor (Cont.) • The main idea is quite simple: build a histogram of points in a polar coordinate system. For each point you take the Euclidean distance and angle to every other point, normalise them, and, based on a log-space map, count the number of points that fall in each map region. And that's all. • To make things rotation invariant, one option is to rotate this map by minus the angle between the two furthest points, but it can be done in other ways too.
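A minimal sketch of that log-polar counting, following the description above; the number of bins (5 radial × 12 angular) and the radial bin edges are illustrative assumptions, not values from the text:

```python
import math

# Shape context sketch: log-polar histogram of all other points
# relative to one reference point of the shape.

def shape_context(points, idx, n_r=5, n_theta=12):
    cx, cy = points[idx]
    others = [(x - cx, y - cy) for i, (x, y) in enumerate(points) if i != idx]
    dists = [math.hypot(dx, dy) for dx, dy in others]
    mean_d = sum(dists) / len(dists)     # normalise distances for scale invariance
    # logarithmically spaced radial edges from mean_d/8 up to 2*mean_d (assumed)
    edges = [mean_d / 8 * (16 ** (r / n_r)) for r in range(1, n_r + 1)]
    hist = [[0] * n_theta for _ in range(n_r)]
    for (dx, dy), d in zip(others, dists):
        r_bin = next((r for r, e in enumerate(edges) if d <= e), n_r - 1)
        ang = (math.atan2(dy, dx) + 2 * math.pi) % (2 * math.pi)
        t_bin = int(ang / (2 * math.pi) * n_theta) % n_theta
        hist[r_bin][t_bin] += 1
    return hist

# Eight points on the outline of a small square
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
h = shape_context(square, 0)
print(sum(map(sum, h)))  # -> 7 : every other point lands in exactly one bin
```

Matching two shapes then reduces to comparing these per-point histograms, typically with a chi-squared distance.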
  • 169. Shape Context Descriptor (Cont.) • Shape information is an important visual cue for object recognition. Broadly, Shape Descriptors (SD) are categorized as contour-based and region-based. Contour-based shape descriptors make use of only the boundary information of the shape, extracted by a conventional edge or contour detection approach
  • 170. Shape Context Descriptor (Cont.) • A logical grid of constant size is applied to the boundary image, and we extract the points where the logical grid intersects the boundary image, as shown in Figure 10. The set of such descriptor points is used for computing a feature descriptor representing the shape information of an image. • The relative arrangement of these descriptor points is represented using the log-polar histogram. For each descriptor point the log-polar histogram is calculated, and the final feature descriptor is the summation of the log-polar histograms over all descriptor points [7].