Why SIFT?
SIFT (Scale-Invariant Feature Transform) is an algorithm to detect and describe local features in images.
Image content is transformed into local feature coordinates that are
invariant to translation, rotation, scale, and other imaging parameters.
Advantages of SIFT
Locality: features are local, so robust to occlusion and clutter (no prior
segmentation needed)
Distinctiveness: individual features can be matched against a large database of objects
Quantity: many features can be generated even for small objects
Efficiency: close to real-time performance
Extensibility: can easily be extended to a wide range of differing feature types, with
each addition increasing robustness
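The scale invariance above comes from SIFT's first stage, a difference-of-Gaussians (DoG) scale space in which extrema are later located. A minimal NumPy/SciPy sketch of that stage is below; the function name and parameter defaults (sigma0 = 1.6, scale factor k = sqrt(2)) are illustrative, not a full SIFT implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(image, num_scales=4, sigma0=1.6, k=2 ** 0.5):
    """Build a difference-of-Gaussians stack: blur the image at a
    geometric sequence of sigmas and subtract adjacent blur levels.
    Extrema of this stack (across space and scale) are SIFT's
    candidate keypoints."""
    blurred = [gaussian_filter(image, sigma0 * k ** i)
               for i in range(num_scales + 1)]
    return [blurred[i + 1] - blurred[i] for i in range(num_scales)]

img = np.random.default_rng(0).random((64, 64))
dogs = dog_stack(img)
print(len(dogs))  # 4 DoG levels, one per pair of adjacent blur scales
```

A full detector would repeat this per octave (downsampling by 2 each time) and then scan each DoG level for local extrema against its 26 space-scale neighbors.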
Other Descriptors
Patch based:-
Disadv:-
Classifying individual patches is very hard, because patches from different
classes may look alike due to the effects of illumination, pose, noise, or
genuine similarity.
A patch may contain some foreground pixels as well as background pixels,
but our main focus is on local features.
Generative Model Based:-
Disadv:-
A generative model applies only to probabilistic methods; it is a model for
randomly generating observable data values.
Since most statistical models are only approximations to
the true distribution, if the model is used to infer about a subset of
variables conditional on known values of others, it can be argued that
the approximation makes more assumptions than are necessary to solve
the problem at hand.
Local and Global Features
Global:- global features represent details about the whole image,
e.g. color distribution, brightness, and sharpness.
Faster to process.
They describe the overall statistics of the image while ignoring the
regions of interest.
Local:- local features represent finer details, such as the relationships between
pixels (similarities and differences between neighboring pixels).
Much more costly to process.
Local features are usually extracted from local regions surrounding the
interesting salient points.
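The contrast above can be made concrete with a small NumPy sketch: a global feature summarizes every pixel with one vector, while a local feature is computed only from a small patch around a salient point. The histogram bin count and the patch location below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((100, 100))

# Global feature: one intensity histogram summarizing the whole image
# (analogous to the color/brightness statistics described above).
global_hist, _ = np.histogram(image, bins=16, range=(0.0, 1.0))

# Local feature: a descriptor computed only from a small patch around
# an "interesting" point (here a fixed location stands in for a
# detected salient point).
y, x, r = 50, 50, 4
patch = image[y - r:y + r + 1, x - r:x + r + 1]
local_hist, _ = np.histogram(patch, bins=16, range=(0.0, 1.0))

print(global_hist.sum())  # 10000: every pixel contributes once
print(patch.shape)        # (9, 9): only the local region is used
```

The global histogram is cheap and compact but blind to where things are; the local descriptor is more expensive per image (one per salient point) but captures spatial structure, matching the trade-off stated above.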
K-partitioning Algorithm
What is Clustering?
Clustering is "the process of organizing objects into groups whose members are similar in some
way".
A cluster is therefore a collection of objects which are "similar" to each other
and "dissimilar" to the objects belonging to other clusters.
k-clustering
k-means is one of the simplest unsupervised learning algorithms that solve
the well-known clustering problem. The procedure follows a simple and easy
way to classify a given data set into a certain number of clusters (assume k
clusters) fixed a priori.
The main idea is to define k centers, one for each cluster. These centers should
be placed carefully, because different locations lead to different
results; the better choice is to place them as far away
from each other as possible. The next step is to take each point belonging to the data
set and associate it with the nearest center. When no point is pending, the first
step is complete and an early grouping is done. At this point we need to re-
calculate k new centroids as the barycenters of the clusters resulting from the
previous step. After we have these k new centroids, a new binding is
done between the same data set points and the nearest new center. A loop has
been generated; as a result of this loop, the k centers change
their location step by step until no more changes occur, or in other words the
centers no longer move. Finally, the algorithm aims at minimizing an
objective function known as the squared-error function,
J = sum_{j=1..k} sum_{x_i in cluster j} ||x_i - c_j||^2,
where c_j is the centroid of cluster j.
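The loop described above can be sketched directly in NumPy. This is a minimal version in which the caller supplies the initial centers (reflecting the point above that placement matters); the data set and the far-apart initial centers are made up for the demo.

```python
import numpy as np

def kmeans(points, centers, iters=100):
    """The k-means loop as described above: assign each point to its
    nearest center, recompute each center as the barycenter (mean) of
    its cluster, and stop when the centers no longer move."""
    centers = np.asarray(centers, dtype=float)
    for _ in range(iters):
        # Assignment step: distance from every point to every center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its cluster.
        new_centers = np.array([points[labels == j].mean(axis=0)
                                for j in range(len(centers))])
        if np.allclose(new_centers, centers):  # centers stopped moving
            break
        centers = new_centers
    return centers, labels

# Two well-separated 2-D blobs; initial centers placed far apart,
# as the text recommends.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(5.0, 0.1, (50, 2))])
centers, labels = kmeans(data, centers=[[0.0, 0.0], [5.0, 5.0]])
```

Each iteration can only decrease the squared-error objective J, which is why the loop terminates; with a poor initialization it can still stop at a local minimum, which motivates the placement advice above.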
Advantages
1) Fast, robust, and easy to understand.
2) Gives best results when the data sets are distinct or well separated from each other.
Disadvantages
1) Applicable only when the mean is defined, i.e. it fails for categorical data.
2) Unable to handle noisy data and outliers.
3) Fails for non-linear data sets.