2. Need for image data management
For efficient storage and retrieval of images
in large databases.
While it is perfectly feasible to identify a
desired image from a small collection simply
by browsing, more effective techniques are
needed for collections containing
thousands of items, which require some form
of access by image content.
3. What is CBIR?
Process of retrieving desired images from a
large collection on the basis of features (such
as colour, texture and shape) that can be
automatically extracted from the images
themselves.
Also known as query by image content (QBIC)
and content-based visual information retrieval
(CBVIR).
4. Contd…..
“Content-based” means that the
search will analyze the actual contents
of the image.
Indexing here often means identifying
features within an image.
Indexing data structures: structures to
speed up the retrieval of features within
image collections.
6. Practical applications of CBIR
Crime prevention
The military
Architectural and engineering design
Fashion and interior design
Journalism and advertising
Medical diagnosis
Geographical information and remote sensing systems
Cultural heritage
Education and training
Home entertainment
Web searching.
7. Content comparisons
Colour: the size of the colour feature vector
depends on the size of the image.
Texture: texture-based features capture little
about variance and rotation.
So we have considered shape features.
8. Feature extraction using Exact Legendre
moment computation
Image moments: particular weighted
averages of the image pixels'
intensities,
or
functions of those moments chosen
to have some attractive property or
interpretation.
Main advantage: the ability to provide
invariant measures of shape.
9. Image moments are basically classified
into
a) non-orthogonal moments and
b) orthogonal moments.
Orthogonal moments: represent the image
with a minimum amount of information
redundancy.
(Figure: classification of image moments)
10. Legendre moments
Belong to the class of orthogonal
moments
used to attain a near-zero value of the
redundancy measure in a set of
moment functions, so that the moments
correspond to independent
characteristics of the image.
11. The definition of Legendre moments has the form
of a projection of the image intensity function onto
the Legendre polynomials.
Legendre moments of order (p + q) for an image
with intensity function f(x, y) are defined as
λpq = ((2p + 1)(2q + 1)/4) ∫₋₁¹ ∫₋₁¹ Pp(x) Pq(y) f(x, y) dx dy
Contd….
12. Contd…….
where Pp(x) is the pth-order Legendre polynomial
defined as
Pp(x) = Σk=0..p apk x^k,
where x ∈ [−1, 1], and the Legendre polynomial Pp(x)
obeys the following recursive relation:
Pp(x) = ((2p − 1) x Pp−1(x) − (p − 1) Pp−2(x)) / p,
with P0(x) = 1, P1(x) = x and p > 1.
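The recursive relation can be sketched in Python (a minimal illustrative implementation, not the code used in this work):

```python
import numpy as np

def legendre(p, x):
    """P_p(x) via the recursion p*P_p = (2p-1)*x*P_{p-1} - (p-1)*P_{p-2}."""
    P_prev = np.ones_like(x, dtype=float)   # P_0(x) = 1
    P = np.asarray(x, dtype=float)          # P_1(x) = x
    if p == 0:
        return P_prev
    for k in range(2, p + 1):
        P_prev, P = P, ((2 * k - 1) * x * P - (k - 1) * P_prev) / k
    return P

x = np.linspace(-1, 1, 5)
print(legendre(2, x))   # matches (3*x**2 - 1) / 2
```

For example, the recursion reproduces P2(x) = (3x² − 1)/2 and P3(x) = (5x³ − 3x)/2 exactly.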
13. A digital image of size M × N is an array of
pixels. The centres of these pixels are the
points (xi, yj), and the image intensity
function is defined only on this discrete set
of points; the sampling intervals are fixed at
Δxi = 2/M and Δyj = 2/N
in the x and y directions respectively.
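The sampling grid above can be sketched as follows (M and N are arbitrary example values here):

```python
import numpy as np

# Pixel centres of an M x N image mapped onto [-1, 1] x [-1, 1].
M, N = 4, 2
dx, dy = 2.0 / M, 2.0 / N                      # the constant spacings above
xi = -1.0 + (np.arange(M) + 0.5) * dx          # centre of pixel column i
yj = -1.0 + (np.arange(N) + 0.5) * dy          # centre of pixel row j
print(xi)   # [-0.75 -0.25  0.25  0.75]
print(yj)   # [-0.5  0.5]
```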
14. Exact Legendre moments
The integrals in Legendre moments are
evaluated exactly using summations to
reduce the approximation error.
The computation time and
computational complexity are reduced
by applying a fast algorithm.
16. Contd…
Exact Legendre moments are computed using the fast
algorithm as follows:
Lpq = ((2p + 1)(2q + 1)/4) Σi Ip(xi) Yiq, with Yiq = Σj Iq(yj) f(xi, yj),
Where,
Yiq is the qth-order moment of row i and Ip(xi) is the
exact integral of Pp(x) over pixel column i.
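A numpy sketch of the exact computation (our own illustrative reconstruction from the definitions above, not the authors' code; the helper names are ours). The per-pixel integrals Ip use the antiderivative identity ∫Pp = (Pp+1 − Pp−1)/(2p + 1):

```python
import numpy as np

def legendre_table(x, pmax):
    """Values of P_0..P_pmax at the points x, via the three-term recursion."""
    P = np.empty((pmax + 1, len(x)))
    P[0] = 1.0
    if pmax >= 1:
        P[1] = x
    for p in range(2, pmax + 1):
        P[p] = ((2 * p - 1) * x * P[p - 1] - (p - 1) * P[p - 2]) / p
    return P

def exact_legendre_moments(f, pmax):
    """Exact Legendre moments lambda_pq (0 <= p, q <= pmax) of image f (M x N)."""
    M, N = f.shape
    ux = -1.0 + 2.0 * np.arange(M + 1) / M      # pixel edges in x
    uy = -1.0 + 2.0 * np.arange(N + 1) / N      # pixel edges in y
    Px = legendre_table(ux, pmax + 1)
    Py = legendre_table(uy, pmax + 1)

    def pixel_integrals(P, pmax):
        # I_p over each pixel: exact integral of P_p between adjacent edges,
        # using the antiderivative (P_{p+1} - P_{p-1}) / (2p + 1).
        I = np.empty((pmax + 1, P.shape[1] - 1))
        I[0] = np.diff(P[1])                     # integral of P_0 is P_1
        for p in range(1, pmax + 1):
            I[p] = np.diff((P[p + 1] - P[p - 1]) / (2 * p + 1))
        return I

    Ix, Iy = pixel_integrals(Px, pmax), pixel_integrals(Py, pmax)
    lam = np.empty((pmax + 1, pmax + 1))
    for p in range(pmax + 1):
        for q in range(pmax + 1):
            Y = f @ Iy[q]                        # Y_iq: q-th order moment of row i
            lam[p, q] = (2 * p + 1) * (2 * q + 1) / 4.0 * (Ix[p] @ Y)
    return lam
```

For a constant image f ≡ 1 this gives λ00 = 1 and every higher-order moment exactly 0, as orthogonality requires.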
17. Classification of data classes using
support vector machine (SVM)
SVMs are a set of related supervised
learning methods used for classification.
Viewing input data as two sets of
vectors in an n-dimensional space, an
SVM will construct a separating hyperplane
in that space, one which maximizes the
margin between the two data sets.
18. Contd…..
Margin: two parallel hyperplanes are
constructed, one on each side of the separating
hyperplane, which are "pushed up against" the
two data sets.
The larger the margin, the lower the
generalization error of the classifier.
19. Objectives
The objectives of SVM are:
To define an optimal hyperplane with
maximum margin.
To map data into a high-dimensional space to
make linear classification easier.
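The second objective, mapping into a high-dimensional space, is usually achieved implicitly through a kernel function; a small illustrative sketch (the numbers are our own example):

```python
import numpy as np

x, z = np.array([1.0, 2.0]), np.array([3.0, 4.0])

def phi(v):
    # explicit degree-2 polynomial feature map to 3-D space
    return np.array([v[0] ** 2, v[1] ** 2, np.sqrt(2) * v[0] * v[1]])

k_explicit = phi(x) @ phi(z)     # inner product in the mapped space
k_kernel = (x @ z) ** 2          # polynomial kernel K(x, z) = (x . z)^2
print(k_explicit, k_kernel)      # both equal 121.0
```

The kernel gives the same inner product without ever forming the high-dimensional vectors explicitly.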
20. A (p − 1)-dimensional hyperplane
separates p-dimensional data points.
The points of one class are divided from the
other class using this hyperplane.
Linear classifiers
24. Setting Up the Optimization Problem
The separating hyperplane is
w · x + b = 0,
with the two margin hyperplanes
w · x + b = k and w · x + b = −k.
The width of the
margin is:
2k / ‖w‖.
Now we have to
maximize the
margin.
Setting k = 1, the problem becomes:
max 2 / ‖w‖
s.t. w · x + b ≥ 1 for all x of class 1,
w · x + b ≤ −1 for all x of class 2.
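A quick numeric check of the margin formulas above, with a hypothetical w and b chosen by hand (not trained values):

```python
import numpy as np

# Hypothetical separating hyperplane w.x + b = 0 with canonical scaling k = 1.
w = np.array([2.0, 0.0])
b = 0.0
margin_width = 2.0 / np.linalg.norm(w)     # width 2k/||w|| with k = 1

# Class-1 points must satisfy w.x + b >= 1, class-2 points w.x + b <= -1.
class1 = np.array([[1.0, 0.5], [2.0, -1.0]])
class2 = np.array([[-1.0, 0.3], [-0.5, 1.0]])
print(np.all(class1 @ w + b >= 1))   # True
print(np.all(class2 @ w + b <= -1))  # True
print(margin_width)                  # 1.0
```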
25. This is a quadratic programming (QP) optimization
problem.
We have to minimize ‖w‖²/2 subject to the
constraints above.
This is the primal form.
It is expressed in dual form to make it easier to
optimize.
In the dual we obtain nonzero Lagrange multipliers;
the training points with nonzero multipliers are
called support vectors.
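A tiny worked instance of the dual (our own two-point example; real problems hand this to a QP solver):

```python
import numpy as np

# Two points: x1 = (1, 0) with y1 = +1 and x2 = (-1, 0) with y2 = -1.
X = np.array([[1.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0])

# Dual: maximize sum(a) - 0.5 * a^T Q a with Q_ij = y_i y_j (x_i . x_j),
# subject to a_i >= 0 and sum(a_i y_i) = 0.
Q = (y[:, None] * y[None, :]) * (X @ X.T)

# The equality constraint forces a1 = a2 = a; the objective 2a - 2a^2
# is maximized at a = 1/2, so both points get nonzero multipliers
# and are support vectors.
a = np.array([0.5, 0.5])
w = (a * y) @ X                 # w = sum_i a_i y_i x_i
b = 1.0 - w @ X[0]              # from w.x1 + b = 1 on a support vector
print(w, b)                     # [1. 0.] 0.0
```

The recovered hyperplane x = 0 sits midway between the two points, with margin width 2/‖w‖ = 2.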
27. Algorithm
1. Read all the images from the database.
2. The Exact Legendre moments of each
image are calculated.
3. Each class is trained against every other class
independently using SVM.
4. The first class of images is trained against all
the other 19 classes using SVM and 19
different hyperplanes are constructed.
28. 5. The first step in training process involves
labeling of the training images. The class that is
considered positive for training is labeled Y= +1
and all other images are labeled Y=-1.
6. An optimized hyperplane is constructed that
divides the positive images from the other classes
using SVM.
29. 7. The Hessian matrix is calculated for the set of
training vectors:
H = Σ xi · xj ci cj, where the xi are the feature
vectors and the ci the class labels.
8. The dual optimization form of the equation is
calculated.
9. Using the 'quadprog' function in Matlab, the
optimization of the equation is done.
30. 10. There is one Lagrange multiplier for every
training point; the points with 0 < αi < C are called
support vectors. Using these support vectors the
value of w is calculated.
11. The value of the bias is obtained from the
equation
b = w · x − 1, where x is a training image.
31. ……….so on
Each class is
trained with every
other class and a
hyperplane is
constructed.
32. 12. The feature vectors of a query image are taken
and substituted into all the hyperplanes.
13. The values of the planes are observed.
The image is classified into the class for which
the maximum number of planes is satisfied.
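The voting step in points 12–13 can be sketched as follows (the hyperplane parameters here are hypothetical stand-ins, not trained values):

```python
import numpy as np

# Each pairwise plane (w, b) puts class `pos` on the positive side
# and class `neg` on the negative side.
planes = [
    (0, 1, np.array([1.0, 0.0]), 0.0),
    (0, 2, np.array([0.0, 1.0]), 0.0),
    (1, 2, np.array([-1.0, 1.0]), 0.0),
]

def classify(x, n_classes=3):
    """Substitute x into every plane and return the class with the most votes."""
    votes = np.zeros(n_classes, dtype=int)
    for pos, neg, w, b in planes:
        votes[pos if w @ x + b >= 0 else neg] += 1
    return int(np.argmax(votes))

print(classify(np.array([2.0, 1.0])))   # 0
```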
34. Experimental work and results
We have taken the COIL-20 database, consisting of 20
different classes of images, each class containing
72 images.
The different classes of images that were taken in
the database are as shown below:
36. The results show that the classification
percentage grows roughly linearly as the
number of training images is increased.
The feature vectors of the images are enlarged
by taking higher orders of Legendre moments.
The retrieval rate is found to be 96.592% with
18 images taken for training and Legendre
moments up to order 5.
40. Future scope
Exact Legendre moments of higher order can
be considered.
Future work may focus on CBIR systems that
make use of relevance feedback, where the user
progressively refines the search results by
marking images in the results as "relevant",
"not relevant", or "neutral" to the search
query, and the search is then repeated with the
new information.