International Journal Of Mathematics And Statistics Invention (IJMSI)
E-ISSN: 2321 – 4767 P-ISSN: 2321 - 4759
www.Ijmsi.org || Volume 3 Issue 2 || February. 2015 || PP-17-22
www.ijmsi.org 17 | P a g e
A feature selection method for automatic image annotation
Shunle Zhu¹
¹School of Zhejiang Seas and Oceans, Institute of Mathematics and Communication, Zhoushan, Zhejiang 316004
ABSTRACT: Automatic image annotation (AIA) bridges high-level semantic information and low-level features, and is an effective way to address the problem of the "semantic gap". Exploiting the intrinsic character of AIA, common features are selected from labeled images by a multiple instance learning method, and this feature selection method is applied to the task of automatic image annotation. Under the feature selection framework, each keyword is analyzed hierarchically at low granularity. By mining common representative instances, the semantic similarity of images can be expressed effectively and better annotation results can be obtained, which testifies to the effectiveness of the proposed annotation method. A Gaussian mixture model is built from the selected features to characterize each labeled keyword. The experimental results illustrate the good performance of the proposed AIA method.
KEYWORDS: Automatic image annotation, feature selection, multiple instance learning, representative instances, semantic similarity
I. INTRODUCTION
With the development of multimedia and network technology, image data is growing rapidly. Faced with this mass of image resources, content-based image retrieval (CBIR), a technology to organize, manage and analyze these resources efficiently, has become a research hot spot. However, CBIR is limited by the "semantic gap": low-level vision features such as color, texture and shape cannot completely reflect and match the user's query intention, so CBIR faces an unprecedented challenge.
In recent years, automatic image annotation (AIA) has aimed to erect a bridge between high-level semantics and low-level features, which is an effective approach to the above-mentioned semantic gap. The co-occurrence model proposed by Mori et al. [1] in 1999 initiated research on automatic image annotation. In [2] a translation model was developed to annotate images automatically, based on the assumption that keywords and vision features are different languages describing the same image. Similar to [2], literature [3] proposed the Cross Media Relevance Model (CMRM), in which the vision information of each image was denoted as a blob set that manifests the semantic information of the image. However, the blob set in CMRM was erected on discrete region clustering, which lost vision feature information, so the annotation results were unsatisfactory. To compensate for this problem, a Continuous-space Relevance Model (CRM) was proposed in [4]. Furthermore, in [5] the Multiple-Bernoulli Relevance Model was proposed to improve on CMRM and CRM. Recently, further studies on AIA have obtained many good results [6-9].
Despite the variety of the above-mentioned methods, their core idea is identical: annotated images are used to erect a model that describes the potential relationship, or mapping, between keywords and image features, and this model is then used to predict annotations for unknown images. Even though previous literature achieved results from various angles, the semantic description of each keyword has not been defined explicitly. To this end, based on investigating the characteristics of automatic image annotation, i.e. that images annotated by keywords comprise multiple regions, automatic image annotation is regarded as a multi-instance learning problem. The proposed method analyzes each keyword in a multi-granularity hierarchy to reflect semantic similarity, so that it not only characterizes the semantic implication accurately but also improves the performance of image annotation, which verifies the effectiveness of our proposed method.
This article is organized as follows: section 1 introduces automatic image annotation briefly; automatic image annotation based on the multi-instance learning framework is discussed in detail in section 2; the experimental process and results are described in section 3; and section 4 summarizes the work and discusses future research briefly.
II. AUTOMATIC IMAGE ANNOTATION IN THE FRAMEWORK OF MULTI-INSTANCE
LEARNING
In the traditional learning framework, a sample is viewed as one instance, i.e. the relationship between samples and instances is one-to-one. In multi-instance learning, a sample may contain multiple instances, that is, the relationship between samples and instances is one-to-many. The ambiguity in the training samples of multi-instance learning differs completely from that of supervised, unsupervised and reinforcement learning, so previous methods can hardly solve such problems. Owing to its characteristic features and wide prospects, multi-instance learning is attracting more and more attention in the machine learning domain and is referred to as a new learning framework. The core idea of multi-instance learning is that the training set consists of concept-annotated bags, each containing unannotated instances. The purpose of multi-instance learning is to assign a conceptual annotation to bags beyond the training set by learning from the training bags. In general, a bag is annotated as Positive if and only if at least one of its instances is positive; otherwise the bag is annotated as Negative.
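The bag-labeling rule above can be stated in a couple of lines of code; the helper name below is illustrative, not from the paper:

```python
def label_bag(instance_labels):
    """Standard multi-instance rule: a bag is Positive iff at least one
    of its instances is positive; otherwise it is Negative."""
    return "Positive" if any(instance_labels) else "Negative"

print(label_bag([False, True, False]))  # Positive
print(label_bag([False, False]))        # Negative
```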
2.1 Framework of Image Annotation of Multi-instance Learning
According to the above definition of multi-instance learning, namely that a Positive bag contains at least one positive instance, we can conclude that positive instances should be distributed much more densely than negative instances in Positive bags. This conclusion shares common properties with the DD algorithm [12] in the multi-instance learning domain. If some point in the feature space represents the semantics of a specified keyword better than any other point, at least one instance in each positive bag should be close to this point, while all instances in negative bags will be far away from it. In the proposed method, we consider each semantic keyword independently. Even though some useful information is lost by neglecting the relationships between keywords, the various keywords of each image are used to compute the similarities between images, so the proposed method can represent the semantic similarity of images effectively at low granularity. In the following sections, each keyword is analyzed and applied at the local level, so that information irrelevant to the keyword is eliminated to improve the precision of the representation of the keyword's semantics. Firstly, for a keyword w, the Positive and Negative bags are collected, and the areas covered by Positive bags are obtained by adaptive clustering. Secondly, the cluster that contains the most items and is farthest from the Negative bags is viewed as the Positive set of w. Thirdly, a Gaussian Mixture Model (GMM) is used to learn the semantics of w. Finally, images can be annotated automatically based on the posterior probability of each keyword given the image, computed from the probability of the image under the GMM using Bayesian estimation. Figure 1 illustrates this process.
[Figure 1 shows two pipelines. Feature selection process: images labeled with keyword W, viewed as bags (images) of instances (regions), undergo adaptive clustering of positive bags far from negative bags, yielding a representative feature set and a GMM for each keyword. Annotation process: annotation via the Bayesian rule.]
Figure 1 The framework of automatic image annotation based on multi-instance learning
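The selection step of the pipeline can be sketched roughly as follows. Here k-means stands in for the paper's adaptive clustering, and the scoring rule (cluster size times distance to the nearest negative instance) is an assumption standing in for the paper's unstated selection criterion; all names are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_representative_instances(pos_instances, neg_instances, k=3):
    """Cluster the instances drawn from Positive bags, then keep the cluster
    that is both large and far from all Negative-bag instances."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pos_instances)
    scores = []
    for c in range(k):
        size = np.sum(km.labels_ == c)
        # distance from this cluster's center to the nearest negative instance
        dist = np.min(np.linalg.norm(neg_instances - km.cluster_centers_[c], axis=1))
        scores.append(size * dist)  # favor large, well-separated clusters
    best = int(np.argmax(scores))
    return pos_instances[km.labels_ == best]
```

The returned instances would then serve as the representative feature set from which the keyword's GMM is learned.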
2.2 Automatic Image Annotation
For convenience, we first introduce some notation: w denotes a semantic keyword; {X_k}, k = 1, ..., N, denotes the set of training samples, where N is the number of training samples; and S = {x_1, ..., x_n} denotes the set of representative instances obtained after adaptive clustering, where x_n is the nth item in a cluster. A GMM is then constructed to describe the semantic concept of w, i.e. the GMM estimates the distribution of each keyword over the feature space to erect the map from keywords to vision features. Note that the superiority of the GMM lies in producing a smooth estimate for any density distribution, which reflects the feature distribution of semantic keywords effectively through non-parametric density estimation.
For a specified keyword w, the GMM represents its vision feature distribution p(x|w), defined as follows:

p(x|w) = Σ_{i=1}^{M} π_i N(x; μ_i, Σ_i)    (1)
where N(x; μ_i, Σ_i) represents the Gaussian distribution of the ith component, μ_i and Σ_i are the corresponding mean and covariance respectively, π_i is the weight of the ith component, reflecting its significance, with Σ_{i=1}^{M} π_i = 1, and M is the number of components. Each component represents a cluster in feature space, reflecting one vision feature of w. In each component, the conditional probability density of the low-level vision feature vector x is computed as follows:

N(x; μ_i, Σ_i) = 1 / ((2π)^{d/2} |Σ_i|^{1/2}) · exp( -(1/2) (x - μ_i)^T Σ_i^{-1} (x - μ_i) )    (2)
where d is the dimension of the feature vector x. The parameters of the GMM are estimated by the EM method, a maximum likelihood estimator for distribution parameters from incomplete data. EM consists of two steps, the expectation step (E-step) and the maximization step (M-step), which are executed alternately until convergence after multiple iterations. Assuming that the keyword w produces N_w representative instances, (μ_i, Σ_i) represents the mean and covariance of the ith Gaussian component. Intuitively, different semantic keywords represent different vision features and the numbers of components are in general not identical, so an adaptive value of M is obtained based on the Minimum Description Length (MDL) criterion [13].
The proposed method extracts semantic clustering sets from the training images, which are used to construct a GMM in which each component represents some vision feature of a specified keyword. From the perspective of semantic mapping, the proposed model describes the one-to-many relationship between a keyword and its corresponding vision features. The extracted semantic clustering set reflects the semantic similarity between instances and keywords. According to the above method, a GMM is constructed for each keyword to describe its semantics. Then, for a specified image to be annotated, X = {x_1, ..., x_m}, where x_m denotes the mth segmented region, the probability of each keyword w is computed according to formula (3). Finally, the image X is annotated with the 5 keywords of greatest posterior probability.

p(w|X) ∝ ∏_{i=1}^{m} p(x_i|w)    (3)
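The annotation step of formula (3) can be sketched by scoring each keyword's fitted model on the image's region features in log space and keeping the top 5 keywords (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def annotate(region_features, keyword_gmms, top_k=5):
    """Rank keywords for an image X = {x_1, ..., x_m} by the product of
    per-region likelihoods in formula (3), computed in log space.
    keyword_gmms maps each keyword to a fitted density model exposing
    score_samples() (e.g. scikit-learn's GaussianMixture)."""
    log_likelihoods = {
        w: float(np.sum(g.score_samples(region_features)))  # sum_i log p(x_i | w)
        for w, g in keyword_gmms.items()
    }
    ranked = sorted(log_likelihoods, key=log_likelihoods.get, reverse=True)
    return ranked[:top_k]
```

Working in log space avoids numerical underflow when multiplying many small per-region probabilities.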
III. EXPERIMENTAL RESULTS AND ANALYSIS
For a fair comparison with other image annotation algorithms, COREL [2], a widely used image data set, is selected for our experiments. This image set consists of 5000 images, of which 4500 are used as training samples and the remaining 500 as test samples. Each image is annotated with 1 to 5 keywords, and 371 keywords exist in the dataset in all. In our experiments, each image is divided into 10 regions using the Normalized Cut segmentation technique [10]; 42379 regions are produced in all for the whole image data set, and these regions are then clustered into 500 groups, each of which is called a blob. For each region, 36-dimensional features covering color, shape, location, etc. are considered, as in literature [2].
To measure the performance of the various image annotation methods, we adopt the same evaluation metrics as literature [5], which are popular indicators in automatic image annotation and image retrieval. Precision is the ratio of the number of correct annotations to the total number of annotations, while recall is the ratio of the number of correct annotations to the number of positive samples. The detailed definitions are as follows:
precision = B / A    (4)

recall = B / C    (5)
where A is the number of images the system annotates with a given keyword; B is the number of images annotated correctly; and C is the number of images annotated with that keyword in the whole data set. As a tradeoff between these two indicators, their harmonic mean, the F-measure, is widely adopted, namely:

1/F = (1/2) (1/precision + 1/recall)    (6)
Moreover, we also count the number of keywords that correctly annotate at least one image. This statistic, denoted NumWords, reflects the coverage of keywords achieved by the proposed method.
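The per-keyword metrics of formulas (4)-(6) can be computed with a small helper (an illustrative sketch, not code from the paper):

```python
def evaluate_keyword(A, B, C):
    """Per-keyword metrics from formulas (4)-(6).
    A: images the system annotated with the keyword,
    B: of those, the correctly annotated images,
    C: images carrying the keyword in the whole data set."""
    precision = B / A if A else 0.0
    recall = B / C if C else 0.0
    # formula (6): F is the harmonic mean of precision and recall
    f = (2 * precision * recall / (precision + recall)
         if precision + recall > 0 else 0.0)
    return precision, recall, f

print(evaluate_keyword(10, 6, 12))  # (0.6, 0.5, 0.5454545454545454)
```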
3.1 Experimental Results
Figure 2 shows that the annotation results of the proposed method, MIL Annotation, are highly consistent with the ground truth, which verifies the effectiveness of the proposed method.
Test image 1. Ground Truth: horse grass tree animal. MIL Annotation: horse animal tree field grass.
Test image 2. Ground Truth: elephant animal tree sky ground. MIL Annotation: elephant animal ground tree sky.
Test image 3. Ground Truth: bus vehicle ground building. MIL Annotation: bus vehicle tree building track.
Test image 4. Ground Truth: building tree grass sky. MIL Annotation: building tree horse grass sky.
Figure 2 Illustrations of annotation results of MIL Annotation
3.2 Annotation Results of MIL Annotation
Table 1 and Table 2 compare the average performance of our proposed method with some traditional annotation models, namely COM [1], TM [2], CMRM [3], CRM [4] and MBRM [5], on the COREL image data set. In the experiments, 263 keywords are considered.
Table 1 The performance of various annotation models on COREL (results on 263 keywords)

Models             COM    TM     CMRM   CRM    MBRM   MIL
NumWords           19     49     66     107    122    124
Average Precision  0.03   0.06   0.10   0.16   0.24   0.20
Average Recall     0.02   0.04   0.09   0.19   0.25   0.22
Table 2 The comparison of F-measure between various models
From Table 1 and Table 2, we can see that the annotation performance of the proposed method outperforms the other models on the two keyword sets, and the proposed method achieves a significant improvement relative to existing algorithms in average precision, average recall, F-measure and NumWords. Specifically, MIL Annotation obtains a significant improvement over COM, TM, CMRM and CRM; among existing probability-based image annotation models, MBRM gets the best annotation performance, which is comparable to that of MIL Annotation.
Models         COM     TM      CMRM    CRM     MBRM    MIL
F-measure      0.024   0.048   0.095   0.174   0.245   0.211
IV. SUMMARY
A deep analysis of the properties of automatic image annotation shows that it can be viewed as a multi-instance learning problem, so we proposed a method to annotate images automatically based on multi-instance learning. Each keyword is analyzed independently to guarantee more effective semantic similarity at low granularity. Then, under the framework of multi-instance learning, each keyword is further analyzed at various hierarchy levels. Information irrelevant to a keyword is eliminated, by mapping keywords to their corresponding regions, to improve the precision of the representation of the keyword's semantics. Experimental results demonstrate the effectiveness of the proposed MIL Annotation method.
REFERENCES
[1] Mori Y, Takahashi H, Oka R. Image-to-word transformation based on dividing and vector quantizing images with words. In: Proc. of Intl. Workshop on Multimedia Intelligent Storage and Retrieval Management (MISRM'99), Orlando, Oct. 1999.
[2] Duygulu P, Barnard K, Freitas N, Forsyth D. Object recognition as machine translation: learning a lexicon for a fixed image
vocabulary. In: Proc. of European Conf. on Computer Vision (ECCV’02), Copenhagen, Denmark, May. 2002: 97-112.
[3] Jeon J, Lavrenko V, Manmatha R. Automatic image annotation and retrieval using cross-media relevance models. In: Proc. of Int.
ACM SIGIR Conf. on Research and Development in Information Retrieval (ACM SIGIR’03), Toronto, Canada, Jul. 2003: 119-126.
[4] Lavrenko V, Manmatha R, Jeon J. A model for learning the semantics of pictures. In: Proc. of Advances in Neural Information Processing Systems (NIPS'03), 2003.
[5] Feng S, Manmatha R, Lavrenko V. Multiple bernoulli relevance models for image and video annotation. In: Proc. of IEEE Int. Conf.
on Computer Vision and Pattern Recognition (CVPR’04), Washington DC, USA, Jun. 2004: 1002-1009.
[6] Zhang D. S., Islam M., Lu G. J., A review on automatic image annotation techniques, Journal of Pattern Recognition, 2012, 45(1):
346-362.
[7] Shi F., Wang J. J., Wang Z. Y., Region based supervised annotation for semantic image retrieval, International Journal of Electronics
and Communications, 2011, 65(11): 929 - 936.
[8] Huang J. Z., Huang Y. C., Yu Y., Li H. S., Metaxas D. N., Automatic image annotation using group sparsity, IEEE Conference on Computer Vision and Pattern Recognition, Jun., 2010, pp. 3312 - 3319.
[9] Laic A. A., Iftene A., Automatic image annotation, International Conference on Linguistic Resources and Tools for Processing the Romanian Language, Sep., 2014, pp. 143 - 152.
[10] Shi J, Malik J. Normalized cuts and image Segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2000, 22(8):
888-905.
[11] Maron O. Learning from ambiguity. Department of Electrical Engineering and Computer Science, MIT, PhD dissertation. 1998.
[12] Maron O, Lozano P T. A framework for multiple-instance learning. In: Proc. of Advances in Neural Information Processing
Systems (NIPS’98), Pittsburgh, USA, Oct. 1998: 570-576.
[13] Li J, Wang J. Automatic linguistic indexing of pictures by a statistical modeling approach. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2003, 25(9): 1075 - 1088.
A Multi Criteria Decision Making Based Approach for Semantic Image Annotation
 
A Multi Criteria Decision Making Based Approach for Semantic Image Annotation
A Multi Criteria Decision Making Based Approach for Semantic Image Annotation A Multi Criteria Decision Making Based Approach for Semantic Image Annotation
A Multi Criteria Decision Making Based Approach for Semantic Image Annotation
 
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGESNEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
 
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGESNEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
 
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGESNEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
 
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGESNEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
 
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGESNEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
 
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGESNEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
 
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGESNEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
NEW ONTOLOGY RETRIEVAL IMAGE METHOD IN 5K COREL IMAGES
 
Learning to rank image tags with limited
Learning to rank image tags with limitedLearning to rank image tags with limited
Learning to rank image tags with limited
 
LEARNING TO RANK IMAGE TAGS WITH LIMITED TRAINING EXAMPLES
LEARNING TO RANK IMAGE TAGS WITH LIMITED TRAINING EXAMPLESLEARNING TO RANK IMAGE TAGS WITH LIMITED TRAINING EXAMPLES
LEARNING TO RANK IMAGE TAGS WITH LIMITED TRAINING EXAMPLES
 
D05222528
D05222528D05222528
D05222528
 
Semi-Supervised Method of Multiple Object Segmentation with a Region Labeling...
Semi-Supervised Method of Multiple Object Segmentation with a Region Labeling...Semi-Supervised Method of Multiple Object Segmentation with a Region Labeling...
Semi-Supervised Method of Multiple Object Segmentation with a Region Labeling...
 
Probabilistic model based image segmentation
Probabilistic model based image segmentationProbabilistic model based image segmentation
Probabilistic model based image segmentation
 
Face Recognition Using Gabor features And PCA
Face Recognition Using Gabor features And PCAFace Recognition Using Gabor features And PCA
Face Recognition Using Gabor features And PCA
 
A Novel Approach of Fuzzy Based Semi-Automatic Annotation for Similar Facial ...
A Novel Approach of Fuzzy Based Semi-Automatic Annotation for Similar Facial ...A Novel Approach of Fuzzy Based Semi-Automatic Annotation for Similar Facial ...
A Novel Approach of Fuzzy Based Semi-Automatic Annotation for Similar Facial ...
 
National Flags Recognition Based on Principal Component Analysis
National Flags Recognition Based on Principal Component AnalysisNational Flags Recognition Based on Principal Component Analysis
National Flags Recognition Based on Principal Component Analysis
 

Recently uploaded

chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learningmisbanausheenparvam
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝soniya singh
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx959SahilShah
 
microprocessor 8085 and its interfacing
microprocessor 8085  and its interfacingmicroprocessor 8085  and its interfacing
microprocessor 8085 and its interfacingjaychoudhary37
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfAsst.prof M.Gokilavani
 
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2RajaP95
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...VICTOR MAESTRE RAMIREZ
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130Suhani Kapoor
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerAnamika Sarkar
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 

Recently uploaded (20)

chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learning
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Serviceyoung call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
young call girls in Rajiv Chowk🔝 9953056974 🔝 Delhi escort Service
 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx
 
microprocessor 8085 and its interfacing
microprocessor 8085  and its interfacingmicroprocessor 8085  and its interfacing
microprocessor 8085 and its interfacing
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
 
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 

A feature selection method for automatic image annotation

In recent years, automatic image annotation (AIA) has focused on building a bridge between high-level semantics and low-level features, which is an effective approach to the semantic gap mentioned above. The co-occurrence model proposed by Mori et al. in 1999 [1] initiated research on automatic image annotation. In [2], a translation model was developed to annotate images automatically, based on the assumption that keywords and visual features are two different languages describing the same image. Similar to [2], [3] proposed the Cross Media Relevance Model (CMRM), in which the visual information of each image is denoted as a blob set that manifests the semantic information of the image. However, the blob set in CMRM is built by clustering discrete regions, which loses visual feature information, so the annotation results are far from satisfactory. To compensate for this loss, a Continuous-space Relevance Model (CRM) was proposed in [4]. Furthermore, the Multiple-Bernoulli Relevance Model [5] was proposed to improve on CMRM and CRM. Recently, further research on AIA has obtained many good results [6-9].

Despite the differences among the methods above, their core idea is identical: annotated images are used to build a model that describes the potential relationship, or mapping, between keywords and image features, and that model is then used to predict annotations for unknown images. Although previous work achieved results from various perspectives, none of it defines the semantic description of each keyword explicitly. To this end, we investigate the characteristics of automatic image annotation: images annotated by keywords comprise multiple regions, so automatic image annotation can be regarded as a multi-instance learning problem. The proposed method analyzes each keyword hierarchically at multiple granularities to reflect semantic similarity, so that it not only characterizes semantic meaning accurately but also improves annotation performance, which verifies its effectiveness.

This article is organized as follows: Section 1 introduces automatic image annotation briefly; Section 2 discusses automatic image annotation under the multi-instance learning framework in detail; Section 3 describes the experimental process and results; Section 4 summarizes the paper and briefly discusses future research.

II. AUTOMATIC IMAGE ANNOTATION IN THE FRAMEWORK OF MULTI-INSTANCE LEARNING

In the traditional learning framework, a sample is viewed as a single instance, i.e. the relationship between samples and instances is one-to-one. In multi-instance learning, a sample may contain several instances, that is, the relationship between samples and instances is one-to-many. The ambiguity in the training samples of multi-instance learning differs completely from that of supervised, unsupervised and reinforcement learning, so previous methods can hardly solve such problems. Owing to its characteristic features and wide applicability, multi-instance learning is attracting more and more attention in the machine learning community and is referred to as a new learning framework. The core idea of multi-instance learning is that the training set consists of concept-annotated bags, each of which contains unannotated instances.
The purpose of multi-instance learning is to assign a conceptual annotation to bags beyond the training set by learning from the training bags. In general, a bag is annotated as Positive if and only if at least one of its instances is positive; otherwise the bag is annotated as Negative.

2.1 Framework of Image Annotation with Multi-instance Learning

According to the above definition of multi-instance learning, namely that a Positive bag contains at least one positive instance, we can conclude that positive instances are distributed much more densely than negative instances in Positive bags. This conclusion shares common ground with the Diverse Density (DD) algorithm [12] in the multi-instance learning domain: if some point in feature space represents the semantics of a specified keyword better than any other point, then at least one instance in each positive bag should be close to this point, while all instances in negative bags will be far from it. The proposed method considers each semantic keyword independently. Although some useful information is lost by neglecting the relationships between keywords, the various keywords of each image are used to compute inter-image similarities, so the proposed method can represent the semantic similarity of images effectively at low granularity. In the following sections, each keyword is analyzed and applied at the local level, so that information irrelevant to the keyword is eliminated and the precision of the representation of the keyword's semantics improves. Firstly, the images labeled with keyword w, forming Positive and Negative bags, are collected, and the regions covered by the Positive bags are clustered adaptively. Secondly, the cluster that contains more items than any other cluster and is farthest from the Negative bags is taken as the representative (positive) set of w. Thirdly, a Gaussian Mixture Model (GMM) is used to learn the semantics of w.
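The cluster-selection step described above (keep the cluster that is both largest and farthest from the negative bags) can be sketched as follows. This is a minimal illustration with assumed names (select_representative_cluster is ours, not from the paper), and it takes the adaptive clustering as already done:

```python
import numpy as np

def select_representative_cluster(clusters, negative_instances):
    """Score each candidate cluster of positive-bag instances by
    (cluster size) x (mean distance of its centre to negative-bag
    instances) and return the index of the best one."""
    neg = np.asarray(negative_instances, dtype=float)
    best_idx, best_score = -1, -np.inf
    for idx, cluster in enumerate(clusters):
        pts = np.asarray(cluster, dtype=float)
        centre = pts.mean(axis=0)
        # large clusters of positive instances, far from negatives, win
        score = len(pts) * np.linalg.norm(neg - centre, axis=1).mean()
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx
```

The multiplicative score is one simple way to trade off the two criteria; the paper does not commit to a particular scoring rule.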
Finally, images can be annotated automatically based on the posterior probability of each keyword given the image, computed from the probability of the image under each keyword's GMM via Bayesian estimation. Figure 1 illustrates this process.
[Figure 1: the framework of automatic image annotation based on multi-instance learning. Feature selection process: images labeled with keyword w are treated as bags of region instances; the positive bags are clustered adaptively, and the cluster far from the negative bags is kept as the representative feature set. Annotation process: a GMM is learned for each keyword, and new images are annotated by the Bayesian rule.]

2.2 Automatic Image Annotation

For convenience, we first introduce some notation. Let $w$ denote a semantic keyword, $\{X_k\}, k = 1, \ldots, N$ a set of training samples, where $N$ is the number of training samples, and $S = \{x_1, \ldots, x_n\}$ the set of representative instances after adaptive clustering, where $x_n$ is the $n$th item of the cluster. A GMM is then constructed to describe the semantic concept of $w$; i.e., the GMM estimates the distribution of each keyword in feature space, establishing the map from keywords to visual features. Note that the advantage of the GMM lies in producing a smooth estimate of any density distribution, which reflects the feature distribution of a semantic keyword effectively through non-parametric density estimation. For a specified keyword $w$, the GMM represents its visual feature distribution $p(x \mid w)$ as follows:

$$p(x \mid w) = \sum_{i=1}^{M} \pi_i \, N(x; \mu_i, \Sigma_i) \quad (1)$$

where $N(x; \mu_i, \Sigma_i)$ is the Gaussian distribution of the $i$th component, $\mu_i$ and $\Sigma_i$ are the corresponding mean and covariance, $\pi_i$ is the weight of the $i$th component, reflecting its significance, with $\sum_{i=1}^{M} \pi_i = 1$, and $M$ is the number of components. Each component represents a cluster in feature space, reflecting one visual aspect of $w$. For each component, the conditional probability density of a low-level visual feature vector $x$ is computed as follows:

$$N(x; \mu_i, \Sigma_i) = \frac{1}{(2\pi)^{d/2} \, |\Sigma_i|^{1/2}} \exp\!\left( -\frac{1}{2} (x - \mu_i)^T \Sigma_i^{-1} (x - \mu_i) \right) \quad (2)$$

where $d$ is the dimension of the feature vector $x$.
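Equations (1) and (2) can be evaluated directly. The following sketch (our illustrative code, not the paper's implementation) computes the mixture density $p(x \mid w)$ for given component weights, means and covariances:

```python
import numpy as np

def gaussian_pdf(x, mu, cov):
    """Multivariate normal density N(x; mu, cov), as in Eq. (2)."""
    d = len(mu)
    diff = x - mu
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(cov))
    return float(norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff))

def gmm_density(x, weights, means, covs):
    """Mixture density p(x | w) = sum_i pi_i N(x; mu_i, cov_i), Eq. (1)."""
    return sum(w * gaussian_pdf(x, mu, cov)
               for w, mu, cov in zip(weights, means, covs))
```

For example, a single standard-normal component in one dimension gives gmm_density([0], ...) = 1/sqrt(2*pi).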
The parameters of the GMM are estimated by the EM method, a maximum-likelihood estimator for distribution parameters from incomplete data. EM consists of two steps, the expectation step (E-step) and the maximization step (M-step), which are executed alternately until convergence after a number of iterations. Assume that the keyword $w$ produces $N_w$ representative instances, and let $\theta_i = (\mu_i, \Sigma_i)$ denote the mean and covariance of the $i$th Gaussian component. Intuitively, different semantic keywords represent different visual features, and in general the numbers of components differ between keywords, so an adaptive value of $M$ is obtained using the Minimum Description Length (MDL) criterion [13].
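As a concrete illustration of the E-step/M-step alternation, here is a simplified one-dimensional sketch with a fixed number of components M (the paper additionally chooses M adaptively by MDL, which is omitted here; the function name and initialisation strategy are our assumptions):

```python
import numpy as np

def fit_gmm_1d(x, m, n_iter=200):
    """Fit a 1-D Gaussian mixture with m components by EM.
    Returns (weights, means, variances)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # deterministic initialisation: spread the means over the data range
    mu = np.linspace(x.min(), x.max(), m)
    var = np.full(m, x.var())
    pi = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        var = np.maximum(var, 1e-6)   # guard against variance collapse
    return pi, mu, var
```

On well-separated data this recovers the component means; a production implementation would also monitor the log-likelihood for convergence.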
The proposed method extracts semantic cluster sets from the training images, which are used to construct a GMM in which each component represents some visual feature of a specified keyword. From the perspective of semantic mapping, the proposed model describes the one-to-many relationship between a keyword and its corresponding visual features, and the extracted semantic cluster set reflects the semantic similarity between instances and keywords. Following the above method, a GMM is constructed for each keyword to describe its semantics. Then, for an image to be annotated, $X = \{x_1, \ldots, x_m\}$, where $x_m$ denotes the $m$th segmented region, the probability of keyword $w$ is computed according to formula (3), and the image $X$ is finally annotated with the 5 keywords of greatest posterior probability:

$$p(w \mid X) = \prod_{i=1}^{m} p(x_i \mid w) \quad (3)$$

III. EXPERIMENTAL RESULTS AND ANALYSIS

For a fair comparison with other image annotation algorithms, COREL [2], a widely used image data set, is selected for our experiments. This set consists of 5000 images, of which 4500 are used as training samples and the remaining 500 as test samples. Each image is annotated with 1 to 5 keywords, giving 371 keywords in the data set in all. In our experiments, each image is divided into 10 regions using the Normalized Cut segmentation technique [10]; 42379 regions are produced for the whole image data set, and these regions are clustered into 500 groups, each of which is called a blob. For each region, 36-dimensional features covering color, shape, location, etc. are used, as in [2]. To measure the performance of the various image annotation methods, we adopt the same evaluation metrics as [5], which are popular indicators in automatic image annotation and image retrieval.
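Looking back at the annotation rule, formula (3) scores a keyword by the product of the per-region likelihoods. A minimal sketch of that rule (our illustrative names, not the paper's code; each keyword model is assumed to be a callable returning p(x | w), and the product is taken in log space for numerical stability):

```python
import math

def annotate(regions, keyword_models, top_k=5):
    """Rank keywords by sum_i log p(x_i | w) -- the log of formula (3) --
    and return the top_k keywords for the image."""
    scores = {}
    for word, density in keyword_models.items():
        # tiny epsilon avoids log(0) when a region has zero likelihood
        scores[word] = sum(math.log(density(x) + 1e-300) for x in regions)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

In the paper's setting, density would be the fitted GMM of each keyword evaluated on the region's 36-dimensional feature vector.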
Precision is the ratio of the number of correct annotations to the total number of annotations, while recall is the ratio of the number of correct annotations to the number of positive samples. The detailed definitions are as follows:

$$precision = \frac{B}{A} \quad (4)$$

$$recall = \frac{B}{C} \quad (5)$$

where $A$ is the number of images the system annotates with a given keyword, $B$ is the number of images annotated correctly, and $C$ is the number of images carrying that keyword in the whole data set. As a trade-off between the two indicators, their harmonic mean is widely adopted, namely:

$$\frac{1}{F} = \frac{1}{2}\left( \frac{1}{precision} + \frac{1}{recall} \right) \quad (6)$$

Moreover, we count the number of keywords that correctly annotate at least one image. This statistic reflects the keyword coverage of the proposed method and is denoted NumWords.

3.1 Experimental Results

Figure 2 shows that the annotation results of the proposed method, MIL Annotation, are highly consistent with the ground truth, which verifies the effectiveness of the proposed method.
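Equations (4)-(6) reduce to a few lines of code; the following sketch (function name is ours) computes all three metrics from the counts A, B and C defined above:

```python
def precision_recall_f(a, b, c):
    """precision = B/A (Eq. 4), recall = B/C (Eq. 5),
    F = harmonic mean of the two (Eq. 6)."""
    precision = b / a if a else 0.0
    recall = b / c if c else 0.0
    f = (2.0 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```

For instance, with A = 10 annotations, B = 5 correct, and C = 20 ground-truth images, precision is 0.5, recall is 0.25, and F is 1/3.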
[Figure 2: example annotation results of MIL Annotation on test images. For instance, an image with ground truth "horse grass tree animal" is annotated "horse animal tree field grass"; "elephant animal tree sky ground" is annotated "elephant animal ground tree sky"; "bus vehicle ground building" is annotated "bus vehicle tree building track"; and "building tree grass sky" is annotated "building tree horse grass sky".]

3.2 Annotation Results of MIL Annotation

Table 1 and Table 2 compare the average performance of our proposed method with traditional annotation models, namely COM [1], TM [2], CMRM [3], CRM [4] and MBRM [5], on the COREL image data set. The experiments concern 263 keywords.

Table 1: performance of the various annotation models on COREL (results on 263 keywords)

Models             COM    TM     CMRM   CRM    MBRM   MIL
NumWords           19     49     66     107    122    124
Average Precision  0.03   0.06   0.10   0.16   0.24   0.20
Average Recall     0.02   0.04   0.09   0.19   0.25   0.22

Table 2: comparison of F-measure between the various models

Models             COM    TM     CMRM   CRM    MBRM   MIL
F-measure          0.024  0.048  0.095  0.174  0.245  0.211

Tables 1 and 2 show that the proposed method achieves a significant improvement over COM, TM, CMRM and CRM in average precision, average recall, F-measure and NumWords. Among the existing probability-based annotation models, MBRM obtains the best annotation performance, which is comparable to that of MIL Annotation, while MIL Annotation covers the largest number of keywords.
IV. SUMMARY

A careful analysis of the properties of automatic image annotation shows that it can be viewed as a multi-instance learning problem, so we proposed a method to annotate images automatically based on multi-instance learning. Each keyword is analyzed independently to guarantee more effective semantic similarity at low granularity, and then, within the multi-instance learning framework, each keyword is further analyzed at various levels of the hierarchy. Information irrelevant to a keyword is eliminated by mapping the keyword to its corresponding regions, improving the precision of the representation of the keyword's semantics. Experimental results demonstrated the effectiveness of the proposed method.

REFERENCES
[1] Mori Y, Takahashi H, Oka R. Image-to-word transformation based on dividing and vector quantizing images with words. In: Proc. of Intl. Workshop on Multimedia Intelligent Storage and Retrieval Management (MISRM'99), Orlando, Oct. 1999.
[2] Duygulu P, Barnard K, de Freitas N, Forsyth D. Object recognition as machine translation: learning a lexicon for a fixed image vocabulary. In: Proc. of European Conf. on Computer Vision (ECCV'02), Copenhagen, Denmark, May 2002: 97-112.
[3] Jeon J, Lavrenko V, Manmatha R. Automatic image annotation and retrieval using cross-media relevance models. In: Proc. of Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval (ACM SIGIR'03), Toronto, Canada, Jul. 2003: 119-126.
[4] Lavrenko V, Manmatha R, Jeon J. A model for learning the semantics of pictures. In: Proc. of Advances in Neural Information Processing Systems (NIPS'03), 2003.
[5] Feng S, Manmatha R, Lavrenko V. Multiple Bernoulli relevance models for image and video annotation. In: Proc. of IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR'04), Washington DC, USA, Jun. 2004: 1002-1009.
[6] Zhang D S, Islam M, Lu G J. A review on automatic image annotation techniques. Pattern Recognition, 2012, 45(1): 346-362.
[7] Shi F, Wang J J, Wang Z Y. Region-based supervised annotation for semantic image retrieval. International Journal of Electronics and Communications, 2011, 65(11): 929-936.
[8] Huang J Z, Huang Y C, Yu Y, Li H S, Metaxas D N. Automatic image annotation using group sparsity. In: Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'10), Jun. 2010: 3312-3319.
[9] Laic A A, Iftene A. Automatic image annotation. In: Intl. Conference on Linguistic Resources and Tools for Processing the Romanian Language, Sep. 2014: 143-152.
[10] Shi J, Malik J. Normalized cuts and image segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2000, 22(8): 888-905.
[11] Maron O. Learning from ambiguity. PhD dissertation, Department of Electrical Engineering and Computer Science, MIT, 1998.
[12] Maron O, Lozano-Pérez T. A framework for multiple-instance learning. In: Proc. of Advances in Neural Information Processing Systems (NIPS'98), Pittsburgh, USA, Oct. 1998: 570-576.
[13] Li J, Wang J. Automatic linguistic indexing of pictures by a statistical modeling approach. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2003, 25(9): 1075-1088.