Active learning methods for interactive image retrieval


IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY 2008

Active Learning Methods for Interactive Image Retrieval
Philippe Henri Gosselin and Matthieu Cord

Abstract—Active learning methods have been considered with increased interest in the statistical learning community. Initially developed within a classification framework, a lot of extensions are now being proposed to handle multimedia applications. This paper provides algorithms within a statistical framework to extend active learning for online content-based image retrieval (CBIR). The classification framework is presented, with experiments to compare several powerful classification techniques in this information retrieval context. Focusing on interactive methods, the active learning strategy is then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection process, RETIN. First, as any active method is sensitive to the boundary estimation between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Second, the criterion of generalization error used to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. Third, a batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme to retrieve sets of images online (query concept). Experiments on large databases show that the RETIN method performs well in comparison to several other active strategies.

I. INTRODUCTION

HUMAN interactive systems have attracted a lot of research interest in recent years, especially for content-based image retrieval systems. Contrary to the early systems, which focused on fully automatic strategies, recent approaches have introduced human-computer interaction [1], [2]. In this paper, we focus on the retrieval of concepts within a large image collection. We assume that a user is looking for a set of images, the query concept, within a database. The aim is to build a fast and efficient strategy to retrieve the query concept.

In content-based image retrieval (CBIR), the search may be initiated using a query as an example. The top-ranked similar images are then presented to the user. The interactive process then allows the user to refine his request as much as necessary in a relevance feedback loop. Many kinds of interaction between the user and the system have been proposed [3], but most of the time, user information consists of binary labels indicating whether or not the image belongs to the desired concept. The positive labels indicate relevant images for the current concept, and the negative labels irrelevant images.

To achieve the relevance feedback process, a first strategy focuses on updating the query concept. The aim of this strategy is to refine the query according to the user labeling. A simple approach, called query modification, computes a new query by averaging the feature vectors of relevant images [2]. Another approach, query reweighting, consists of computing a new similarity function between the query and any picture in the database. A usual heuristic is to weigh the axes of the feature space [4]. In order to perform a better refinement of the similarity function, optimization-based techniques can be used. They are based on a mathematical criterion for computing the reweighting, for instance Bayes error [5] or average quadratic error [6], [7]. Although these techniques are efficient for target search and monomodal concept retrieval, they hardly track complex image concepts.

Performing an estimation of the query concept can be seen as a statistical learning problem, and more precisely as a binary classification task between the relevant and irrelevant classes [8]. In image retrieval, many techniques based on statistical learning have been proposed, for instance Bayes classification [9], k-Nearest Neighbors [10], Gaussian mixtures [11], Gaussian random fields [12], or support vector machines [3], [8]. In order to deal with complex and multimodal concepts, we have adopted a statistical learning approach. Additionally, the possibility to work with kernel functions is decisive.

However, a lot of these learning strategies consider the CBIR process as a classical classification problem, without any adaptation to the characteristics of this context. For instance, some discriminative classifiers exclusively return binary labels, whereas real values are necessary for CBIR ranking purposes. Furthermore, the system has to handle classification with few training data, especially at the beginning of the search, where the query concept has to be estimated in a database of thousands of images with only a few examples. Active learning strategies have been proposed to handle this type of problem. Another point concerns the class sizes, since the query concept is often a small subset of the database. In contrast to more classical classification problems, the relevant and irrelevant classes are highly imbalanced (up to a factor of 100). Depending on the application context, computational time also has to be carefully considered when an online retrieval algorithm is designed. To address this problem, we assume that any learning task must be at most O(N), where N is the size of the database.

In this paper, we focus on statistical learning techniques for interactive image retrieval. We propose a scheme to embed different active learning strategies into a general formulation.

Manuscript received October 9, 2006; revised February 22, 2008. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Eli Saber. P. H. Gosselin is with the ETIS/CNRS, 95014 Cergy-Pontoise, France (e-mail: gosselin@ensea.fr). M. Cord is with the LIP6/UPMC/CNRS, 75016 Paris, France (e-mail: matthieu.cord@lip6.fr). Digital Object Identifier 10.1109/TIP.2008.924286
The originality of our approach is based on the association of three components:
  • boundary correction, which corrects the noisy classification boundary in the first iterations;
  • average precision maximization, which selects the images so that classification and mean average precision are enhanced;
  • batch selection, which addresses the problem of the selection of multiple images in the same feedback iteration.

We also propose a preselection technique to speed up the selection process, which leads to a computational complexity negligible compared to the size of the database for the whole active learning process. All these components are integrated in our retrieval system, called RETIN.

In this scope, we first present the binary classification and kernel framework used to represent complex class distributions (Section II). Powerful classification methods and well-defined kernels for global image signatures are evaluated in experiments on real databases. Secondly, we present active learning to interact with the user (Section III), then introduce the RETIN active learning scheme in Section IV, whose components are described in Sections V, VI and VII. Finally, we compare our method to existing ones using real scenarios on large databases (Section VIII).

II. CLASSIFICATION FRAMEWORK

In the classification framework for CBIR, retrieving image concepts is modeled as a two-class problem: the relevant class, the set of images in the searched concept, and the irrelevant class, composed of the remaining database.

Let {x_i, i = 1..N} be the image indexes of the database. A training set is expressed from any user labeling during a retrieval session as A = {(x_i, y_i), y_i ≠ 0}, where y_i = 1 if the image x_i is labeled as relevant and y_i = -1 if it is labeled as irrelevant (otherwise y_i = 0). The classifier is then trained using these labels, and a relevance function f(x_i) is determined in order to be able to rank the whole database. Image indexes contain a summary of visual features, in this paper histograms of colors and textures.

A. Classification Methods for CBIR

Bayes and probabilistic classifiers are the most used classification methods for CBIR [13]–[15]. They are interesting since they are directly "relevance oriented," i.e., no modifications are required to get the probability of an image being in the concept. However, since Gaussian-based models are generally used, they focus on estimating the center of each class, and are therefore less accurate near the boundary than discriminative methods, which is an important aspect for active learning. Furthermore, because of the imbalance of the training data, the tuning of the irrelevant class is not trivial.

k-Nearest Neighbors determines the class of an image by considering its nearest neighbors. Despite its simplicity, it still has real interest in terms of efficiency, especially for the processing of huge databases [16].

Support vector machines are classifiers which have known much success in many learning tasks during the past years. They were introduced with kernel functions in the statistical learning community [17] and were quickly adopted by researchers of the CBIR community [18]. SVMs select the optimal separating hyperplane, which has the largest margin and, hence, the lowest vicinal risk. Their high ability to be used with a kernel function provides many advantages that we describe in Section III.

Fisher discriminant analysis also uses hyperplane classifiers, but aims at minimizing both the size of the classes and the inverse distance between the classes [19]. Let us note that in experiments, it gives results similar to SVMs [20].

B. Feature Distribution and Kernel Framework

In order to compute the relevance function f(x_i) for any image x_i, a classification method has to estimate the density of each class and/or the boundary. This task mainly depends on the shape of the data distribution in the feature space. In the CBIR context, relevant images may be distributed in a single mode for one concept, and in a large number of modes for another concept, thereby inducing nonlinear classification problems.

Gaussian mixtures are widely used in the CBIR context, since they have the ability to represent these complex distributions [21]. However, in order to get an optimal estimation of the density of a concept, the data have to be Gaussian-distributed (a limitation for real data processing). Furthermore, the large number of parameters required for Gaussian mixtures leads to high computational complexity. Another approach consists of using the kernel function framework [22]. These functions were introduced in the statistical learning community with support vector machines. The first objective of this framework is to map image indexes to vectors in a Hilbert space and, hence, turn the nonlinear problem into a linear one. It also differs from other linearization techniques by its ability to work implicitly on the vectors of the Hilbert space. Kernel methods never explicitly compute the mapped vectors, but only work on their dot products and, hence, allow working in very large or even infinite-dimensional Hilbert spaces. Furthermore, learning tasks and the distribution of data can be separated. Under the assumption that there is a kernel function which maps our data into a linear space, all linear learning techniques can be used on any image database. We denote by k(x_i, x_j) the value of the kernel function between images x_i and x_j. This function will be considered as the default similarity function in the following.

C. Comparison of Classifiers

We have experimented with several classification methods for CBIR: Bayes classification [9] with a Parzen density estimation, k-Nearest Neighbors [10], support vector machines [8], kernel Fisher discriminant [19], and also a query-reweighting strategy [7]. Databases, scenario, evaluation protocol and quality measurement are detailed in the appendix. The results in terms of mean average precision are shown in Fig. 1(a) according to the training set size (we omit the KFD, which gives results very close to inductive SVMs) for both the ANN and Corel databases. One can see that the classification-based methods give the best results, showing the power of statistical methods over geometrical approaches, like the one reported here (similarity refinement method). The SVM technique performs slightly better than the others in this context.
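To make the classification framework above concrete, the following sketch builds a relevance score as a kernel-weighted vote over the labeled examples. This is a deliberately simplified stand-in for the trained classifiers compared in the paper (the SVM used by RETIN is not reimplemented here); the function names, the crude Parzen-style vote, and the kernel parameter `gamma` are our own illustrative choices.

```python
import numpy as np

def gaussian_chi2_kernel(H1, H2, gamma=1.0, eps=1e-12):
    """Gaussian kernel on the chi-squared distance between histograms:
    k(h, h') = exp(-gamma * sum_b (h_b - h'_b)^2 / (h_b + h'_b)).
    Rows of H1 and H2 are non-negative (typically L1-normalized) histograms."""
    D = np.empty((H1.shape[0], H2.shape[0]))
    for i, h in enumerate(H1):
        # chi-squared distance from histogram h to every row of H2
        D[i] = ((h - H2) ** 2 / (h + H2 + eps)).sum(axis=1)
    return np.exp(-gamma * D)

def relevance(H_db, H_labeled, y):
    """Toy relevance function: kernel-weighted average of the labels
    y_i in {-1, +1} of the training images. Positive output suggests the
    image is closer to the relevant examples than to the irrelevant ones."""
    K = gaussian_chi2_kernel(H_db, H_labeled)
    return K @ y / len(y)
```

With two labeled histograms (one relevant, one irrelevant), an unlabeled histogram midway between them receives a relevance of zero, which is exactly the kind of "uncertain" sample the active learning strategies of Section III target.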
As they have a strong mathematical framework and an efficient algorithmic implementation, SVMs are used as the default method in the RETIN system.

Fig. 1. Mean average precision (%) for a classification from randomly selected examples on the (left) Corel and the (right) ANN photo database.

We also made some comparisons between different kernels (linear, polynomial, triangle, and Gaussian kernels with L1, L2 and χ² distances). In our experiments, the Gaussian χ² kernel gives the best performance, which is not a surprising result since histograms are used as image signatures, and the χ² distance is dedicated to comparing distributions. In the following experiments, we always use a Gaussian kernel with a χ² distance.

Remark: As the whole data set is available during training, it could be interesting to consider a semi-supervised or transductive framework. For instance, there are extended versions of Gaussian mixtures [11], [13], and the transductive SVM [23]. We have experimented with these methods. The computational time is very high and no improvement has been observed [24], [25]. Transduction does not seem to be an interesting approach for CBIR, as already observed in [3].

In any case, the global performances remain very low for all of the proposed methods in the Corel experiments. The MAP is under 20%, even when the number of training data is up to 100 for Corel. The Corel database is much bigger than ANN and the simulated concepts are more difficult. The training set remains too small to allow classifiers to efficiently learn the query concept. Active learning is now considered to overcome this problem.

III. ACTIVE LEARNING

Active learning is close to supervised learning, except that the training data are not independent and identically distributed variables. Some of them are added to the training set thanks to a dedicated process. In this paper, we only consider selective sampling approaches from the general active learning framework. In this context, the main challenge is to find the data that, once added to the training set, will allow us to achieve the best classification function. These methods have been introduced to perform good classifications with few training data in comparison to the standard supervised scheme.

When the learner can only choose new data in a pool of unlabeled data, the framework is called pool-based active learning [26]. In CBIR, the whole set of images is available at any time. These data will be considered as the pool of unlabeled data during the selective sampling process.

A. Example of Active Strategies

Fig. 2 shows the interest of a selection step. In this example, the images are represented by 2-D feature vectors, the white circles are images the user is looking for, and the black circles are the images the user is not interested in. At the beginning, the user provided two labels, represented in the figures by larger circles [cf. Fig. 2(a)]. These two labels allow the system to compute a first classification. In classical relevance feedback systems, a common way of selection was to label the most relevant pictures returned by the system. As one can see in Fig. 2(b), this choice is not effective, since in that case the classification is not improved. Other types of selection of new examples may be considered. For instance, in Fig. 2(c), an active learning selection working on uncertainty is proposed: the user labels the pictures closest to the boundary, resulting in an enhanced classification in that case [Fig. 2(c)].

B. Optimization Scheme

Starting from the image signatures, the training set A and the relevance function f defined in Section II, new notations are introduced for the teacher s that labels images as -1 or 1, the set I of indexes of the labeled images, and the set J of indexes of the unlabeled ones. Active learning aims at selecting the unlabeled data that will enhance the most the relevance function f trained with the new label added to A. To formalize this selection process as a minimization problem, a cost function g is introduced. For any active learning method, the selected image minimizes g over the pool of unlabeled images:

    i* = argmin_{i in J} g(x_i)                                  (1)

C. Active Learning Methods

Two different strategies are usually considered for active learning: uncertainty-based sampling, which selects the images for which the relevance function is the most uncertain (Fig. 2), and the error reduction strategy, which aims at minimizing the generalization error of the classifier. According to this distinction, we have selected two popular active learning strategies for presentation and comparison.
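The first of these strategies instantiates the selection rule (1) with the cost g(x_i) = |f(x_i)|: the chosen images are simply those whose relevance is closest to zero. A minimal sketch, assuming `relevance` holds the classifier output f(x_i) for the whole database:

```python
import numpy as np

def uncertainty_selection(relevance, unlabeled_idx, q=1):
    """Return the q unlabeled images whose relevance f(x_i) is closest to 0,
    i.e. the minimizers of |f(x_i)| over the unlabeled pool."""
    unlabeled_idx = np.asarray(unlabeled_idx)
    # rank unlabeled images by distance of their relevance to the boundary
    order = np.argsort(np.abs(relevance[unlabeled_idx]))
    return unlabeled_idx[order[:q]].tolist()
```

The q parameter anticipates the batch selection discussed later; with q = 1 this is exactly single-image uncertainty sampling.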
Fig. 2. Active learning illustration. A linear classifier is computed for white (relevant) and black (irrelevant) data classification. Only the large circles are used for training; the small ones represent unlabeled data. The line is the boundary between classes after training. (a) Initial training set and boundary. (b) Two new data (the closest to the relevant one) have been added to the training set; the boundary is unchanged. (c) The most uncertain data (closest to the boundary) are added to the training data; the boundary moved significantly and provides a better separation between black and white data.

1) Uncertainty-Based Sampling: In our context of binary classification, the learner of the relevance function has to classify data as relevant or irrelevant. Any data in the pool of unlabeled samples may be evaluated by the learner. Some are definitively relevant, others irrelevant, but some may be more difficult to classify. The uncertainty-based sampling strategy aims at selecting the unlabeled samples that the learner is the most uncertain about.

To achieve this strategy, a first approach proposed by Cohn [27] uses several classifiers with the same training set, and selects the samples whose classifications are the most contradictory. Another solution consists of computing a probabilistic output for each sample, and selecting the unlabeled samples with probabilities closest to 0.5 [28]. Similar strategies have also been proposed with the SVM classifier [29], with a theoretical justification [18], and with the nearest neighbor classifier [30].

In any case, a relevance function f is trained. This function may be adapted from a distribution, a membership to a class (the distance to the hyperplane for an SVM), or a utility function. Using this relevance function, uncertain data have a relevance close to 0. The solution to the minimization problem in (1) is

    i* = argmin_{i in J} |f(x_i)|                                (2)

The efficiency of these methods depends on the accuracy of the relevance function estimation close to the boundary between the relevant and irrelevant classes.

2) Error Reduction-Based Strategy: Active learning strategies based on error reduction [26] aim at selecting the sample that, once added to the training set, minimizes the error of generalization of the new classifier.

Let P(c|x) denote the (unknown) probability of a sample x to be in class c (relevant or irrelevant), and P(x) the (also unknown) distribution of the images. Training on A provides the estimation f of the relevance, and the expected error of generalization is

    E(f) = ∫ L(f(x), P(.|x)) dP(x)                               (3)

with L a loss function which evaluates the difference between the estimation and the true distribution. The optimal selection minimizes this expectation over the pool of unlabeled images.

As the expected error is not accessible, the integral over P(x) is usually approximated using the unlabeled set. Roy and McCallum [26] also propose to estimate the probability P(c|x) with the relevance function provided by the current classifier. Furthermore, as the labels on J are unknown, they are estimated by computing the expectation for each possible label. Hence, the cost function becomes

    g(x) = sum_{y in {-1,1}} E(f_{A ∪ (x,y)}) P̂(y|x)            (4)

where f_{A ∪ (x,y)} is the classifier trained with (x, y) added to the training set, and P̂(y|x) is estimated from the current relevance function f.

D. Active Learning in CBIR Context

Active learning methods are generally used for classification problems where the training set is large and the classes are well balanced. However, our context is somewhat different.
  • Imbalance of classes: The class of relevant images (the searched concept) is generally 20 to 100 times smaller than the class of irrelevant images. As a result, the boundary is very inaccurate, especially in the first iterations of relevance feedback, where the size of the training set is dramatically small. Many methods become inefficient in this context, and the selection is then close to random.
  • Selection criterion: While minimizing the classification error is interesting for CBIR, this criterion does not completely reflect the user satisfaction. Other utility criteria closer to it, such as precision, should provide more efficient selections.
  • Batch selection: We have to propose more than one image to label between two feedback steps, contrary to many active learning techniques, which are only able to select a single image.

Computation time is also an important criterion for CBIR in large-scale applications, since people will not wait several minutes between two feedback steps. Furthermore, a fast selection allows the user to provide more labels in the same amount of time. Thus, it is more interesting to use a less efficient but fast method than a more efficient but computationally heavy one.

To take these characteristics into account, we propose in Section IV an active learning strategy dedicated to the CBIR context, based on the optimization of the function g, as proposed in (2) or (4). Aside from the active learning context, some other learning approaches have been proposed in a Bayesian framework. Dedicated to target search (where the user is looking for a single image), a probability for each image to be the target is updated thanks to the user interaction [31], [32].

IV. RETIN ACTIVE LEARNING SCHEME

We propose an active learning scheme based on binary classification in order to interact with a user looking for image concepts in databases. The scheme is summarized in Fig. 3.

Fig. 3. RETIN active learning scheme.

1) Initialization: A retrieval session is initialized from one image brought by the user. The features are computed on that new image and added to the database. This image is then labeled as relevant, and the closest pictures are shown to the user. Note that other initializations could be used, for instance with keywords.
2) Classification: A binary classifier is trained with the labels the user gives. We use an SVM with a Gaussian χ² kernel, since it has revealed itself to be the most efficient [33], [34]. The result is a function f which returns the relevance of each image, according to the examples.
3) Boundary Correction: We add an active correction to the boundary in order to deal with the few training data and the imbalance of the classes. We present this technique in Section V.
4) Selection: In the case where the user is not satisfied with the current classification, the system selects a set of images the user should label. The selection must be such that the labeling of those images provides the best performance. We divide the selection into three steps. The first step aims at reducing the computational time by preselecting some hundreds of pictures which may be in the optimal selection set. We propose to preselect the pictures closest to the (corrected) boundary. This process is computed very fast, and the uncertainty-based selection method has proved its interest in the CBIR context. The second step is the computation of the selection criterion for each preselected image. We use an uncertainty-based selection criterion which aims at maximizing the average precision, whose details are presented in Section VI. The third step computes the batch selection. The method presented in Section VII selects the images to label, using the previously computed criteria.
5) Feedback: The user labels the selected images, and a new classification and correction can be computed. The process is repeated as many times as necessary.

V. ACTIVE BOUNDARY CORRECTION

During the first steps of relevance feedback, classifiers are trained with very few data, about 0.1% of the database size. At this stage, classifiers are not even able to perform a good estimation of the size of the concept. Their natural behavior in this case is to divide the database into two parts of almost the same size. Each new sample changes the estimated class of hundreds, sometimes thousands, of images. Selection is then close to a random selection.

A solution is to ensure that the retrieval session is initialized with a minimum number of examples. For instance, Tong proposes to initialize with 20 examples [33]. Beyond the question of the minimum number of examples required for a good starting boundary, initial examples are not always available without some third-party knowledge (for instance, keywords). Thus, we propose a method to correct the boundary in order to reduce this problem. A first version of this method was proposed in [35], but we introduce here a generalization of it. The correction is a shift of the boundary:

    f_t(x_i) = f(x_i) + b_t                                      (5)

with f the relevance function and b_t the correction at feedback step t.

We compute the correction to move the boundary towards the most uncertain area of the database. In other words, we want a positive value of f_t for any image in the concept, and a negative value for the others.
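The shift of eq. (5) can be sketched in a few lines. The helper below is our own illustration, assuming the correction is chosen so that the image at a given rank of the current ranking lands exactly on the corrected boundary:

```python
import numpy as np

def corrected_relevance(f_values, r_t):
    """Boundary shift f_t(x) = f(x) + b_t: b_t is chosen so that the image
    at rank r_t of the current ranking (most relevant first) gets a
    corrected relevance of exactly zero, i.e. sits on the boundary."""
    ranking = np.argsort(-f_values)     # database indexes sorted by decreasing f
    b_t = -f_values[ranking[r_t]]       # shift placing rank r_t on the boundary
    return f_values + b_t
```

Images ranked above r_t then get positive corrected relevance and images below get negative corrected relevance, which matches the behavior asked of f_t above.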
In order to get this behavior, a ranking of the database is computed with a sorting function σ_t returning the indexes of the sorted values of f. Let r_t be the rank of the image whose relevance is the most uncertain at step t. Our correction is then expressed by

    b_t = -f(x_{σ_t(r_t)})                                       (7)

In order to compute the correction, we propose an algorithm based on an adaptive tuning of r_t during the feedback steps. The value at iteration t+1 is computed considering the set of labels provided by the user at the current iteration t. Actually, we suppose that the best threshold corresponds to the searched boundary. Such a threshold allows the system to present as many relevant images as irrelevant ones. Thus, if and only if the set of selected images is well balanced between relevant and irrelevant images, the threshold is good. We exploit this property to tune r_t.

At the tth feedback step, the system is able to classify images using the current training set. The user gives new labels for the selected images, and they are compared to the current classification. If the user mostly gives relevant labels, the system should propose new images for labeling around a higher rank, to get more irrelevant labels. On the contrary, if the user mostly gives irrelevant labels, the classification does not seem to rank the database well, and new images for labeling should be selected around a lower rank (to get more relevant labels). In order to get this behavior, we introduce the following update rule:

    r_{t+1} = r_t + h(I_t)                                       (6)

with I_t the set of indexes of the images selected at step t, and h a function with the following behavior.
  • If we have as many positive labels as negative ones in I_t, no change is needed, since the boundary is certainly well placed. Thus, h should be close to zero.
  • If we have many more positive than negative labels, the boundary is certainly close to the center of the relevant class. In order to find more negative labels, a better boundary should correspond to a further rank, which means that h should be positive.
  • On the contrary, if we have many negative labels, the boundary is certainly too far from the center of the relevant class. In this case, h should be negative.

The easiest way to obtain a function with such a behavior is to compute the difference between the number of positive and negative labels. However, it is also interesting to take into account the error between the current relevance f_t(x_i) of a newly labeled image and the label provided by the user. Indeed, if an image is labeled as positive and its current relevance is already close to 1, this label should not be taken into account. On the contrary, if an image is labeled as negative and its current relevance is close to 1, it means that the current boundary is too far from the center of the relevant class; in this case, a large change of r_t is required. This behavior for h can be obtained by computing the average error between the current relevance of each newly labeled image and the label provided by the user:

    h(I_t) = (1/|I_t|) sum_{i in I_t} (y_i - f_t(x_i))

Remark: When using SVMs to compute the classifier, this correction process is close to the computation of the parameter b in the SVM decision function

    f(x) = sum_i α_i y_i k(x, x_i) + b                           (8)

The parameter b can be computed using the KKT conditions [20]; for instance, if x_i is a support vector, b = y_i - sum_j α_j y_j k(x_j, x_i). Hence, one can see that our boundary correction is close to the computation of the parameter b of the SVM decision function (8), except that we are considering vectors outside the training set.

VI. AVERAGE PRECISION MAXIMIZATION

Any active learning method aims at selecting the samples which decrease the classification error. In CBIR, however, users are interested in the ranking of the database. A usual metric to evaluate this ranking is the average precision (see the appendix for its definition).

A. Average Precision Versus Classification Error

We made experiments to evaluate the difference between the direct optimization of the average precision and of the classification error. For the minimization of the classification error, the optimal selection is the one which minimizes the expected error of generalization of the classifier over the unlabeled data pool [cf. (3)]. In the case of the maximization of the average precision, the expression is close to the previous one, except that we consider the average precision of the ranking of the database computed from the classifier output.

We compared these two criteria on a reference database where all the labels are known. One can notice that these criteria cannot be used in real applications, where most of the labels are unknown; they are only used here for evaluation purposes. First of all, one can see that for both criteria, the MAP always increases, which means that an iterative selection of examples will always increase the performance of the system. Furthermore, while the classification error method increases the MAP, the technique maximizing the average precision performs significantly better, with a gain of around 20%.
  7. 7. 1206 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY 2008difference with the error minimization is a motivation to develop boundary. This method works a little better than the two otheraverage precision-based strategies even if the true MAP is not ones (in terms of MAP), but the difference is not as large asavailable and has to be approximated. The aim here is to select when combining with a precision-oriented factor.the images that will maximize the average precision and, hence,estimate . VII. BATCH SELECTION However, the estimation of is particularly difficult The aim of batch selection is to select the set that minimizeswith the kind of samples we have chosen. Indeed, we opted the selection criterion over all the sets of unlabeled images.for binary labels, which are simple enough for any nonexpert As for the selection of one image, this minimization has to beuser. Those labels are well adapted for classification, but they estimated. Because of the constraint in terms of complexity ofare less suitable with the estimation of average precision: this the CBIR context, a direct estimation cannot be performed. Acriterion is based on a ranking of the database, but binary labels naive extension of the selection for more than one sample is todo not give any information in such a way. select the samples that minimize function . However, a better batch selection can be achieved by selecting samples in an iter-B. Precision-Oriented Selection ative way [36]. Although this strategy is sub-optimal, it requires As explained before, the straight estimation of the average little computation in comparison to an exhaustive search of theprecision is not efficient in our context. We opted for a different best subset.strategy but have kept in mind the objective of optimization of Actually, our algorithm selects pictures one by one, andthe ranking. 
We experimentally noticed that selecting images prevents from the selection of close samples:close to the boundary between relevant and irrelevant classes isdefinitively efficient, but when considering a subset of imagesclose to the boundary, the active sampling strategy of selectingthe closest image inside this subset is not necessarily the best forstrategy. We propose to consider the subest of images close tothe boundary, and then to use a criterion related to the averageprecision in order to select the winner image of the selectivesampling strategy. endfor In order to compute a score related to average precision, wepropose to consider the sub-database of the labeled pictures. We with kernel function between sample and samplehave the ground truth for this sub-database, since we have the .labels of all its images. Thus, it becomes feasible to compute the The efficiency of this batch selection is the result of the ef-average precision on this sub-database, without any estimation. ficiency of current classification techniques, where the labelingWe still need a ranking of this sub-database in order to compute of one or more close samples provides almost the same classifi-the average precision. We compute the similarity of an cation output. Hence, this batch selection process provides moreunlabeled image to any labeled images (using the kernel diversity in the training set.function as the similarity function), and then rank the labeled Our batch selection does not depend on a particular techniqueimages according to these similarity values. The resulting factor of selection of only one image; hence, it may be used with all is the average precision on the sub-database with this the previously described active learning processes.ranking The aim of this factor is to support the picture that, once labeled, will have the most chance to increase the VIII. ACTIVE LEARNING EXPERIMENTSaverage precision. 
Experiments have been carried out on the databases described Our final cost function achieves a tradeoff between getting in the appendix. In the following, we use a SVM classifier withthe closest image and optimizing the average precision a Gaussian kernel and a distance. (9) A. Comparison of Active Learners Then, using this factor, in the case where the user labels the We compare the method proposed in this paper to an uncer-selected image as positive, the image and its neightbors will be tainty-based method [33], and a method which aimswell ranked. Since we selected the image which, at the same at minimizing the error of generalization [26]. We also add atime, is the closest to positive labels and furthest from negative nonactive method, which randomly selects the images.labels, the new labeled image (and its neightbors) will be well Results per concept are shown in Table I for the Corel photoranked, and will probably not bring irrelevant images in the top database for 10 to 50 labels. First of all, one can see that we haveof the ranking. selected concepts of different levels of complexities. The perfor- Let us note that the idea of combining “pessimist” (like uncer- mances go from few percentages of Mean average precision totainty-based) and “optimist” (like our precision oriented factor) 89%. The concepts that are the most difficult to retrieve are verystrategies seems to be an interesting way of investigation. We small and/or have a very diversified visual content. For instance,have also tested a combination of uncertainty-based and error the “Ontario” concept has 42 images, and is compound of pic-reduction by selecting images with the algorithm of Roy and tures of buildings and landscapes with no common visual con-McCallum [26] but only testing on the 100 closest images to the tent. However, the system is able to make some improvements.
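The precision-oriented factor and the iterative diversity-based batch selection described in this section can be sketched as follows. This is only an illustration under assumptions, not the authors' implementation: the tradeoff weight `alpha`, the kernel parameter `gamma`, and all function names are hypothetical, and the exact combination of equation (9) is not reproduced here.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Gaussian (RBF) kernel between two feature vectors.
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(np.exp(-gamma * np.dot(d, d)))

def ap_factor(x, labeled_X, labeled_y, gamma=1.0):
    # Rank the labeled sub-database by kernel similarity to candidate x,
    # then compute the exact average precision of that ranking
    # (all labels in the sub-database are known, so no estimation is needed).
    sims = np.array([rbf_kernel(x, xi, gamma) for xi in labeled_X])
    order = np.argsort(-sims)
    rel = (np.asarray(labeled_y)[order] == 1).astype(float)
    if rel.sum() == 0:
        return 0.0
    prec = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return float((prec * rel).sum() / rel.sum())

def batch_select(candidates_X, scores, labeled_X, labeled_y, q,
                 alpha=0.5, gamma=1.0):
    # Greedy batch selection of q images: trade off closeness to the
    # boundary |f(x)| against (1 - AP factor), and add a diversity
    # penalty equal to the max kernel similarity to already picked images.
    n = len(candidates_X)
    cost = np.array([
        (1.0 - alpha) * abs(s)
        + alpha * (1.0 - ap_factor(x, labeled_X, labeled_y, gamma))
        for x, s in zip(candidates_X, scores)
    ])
    picked = []
    for _ in range(q):
        total = cost.copy()
        for i in range(n):
            if i in picked:
                total[i] = np.inf  # never pick the same image twice
            elif picked:
                total[i] += max(
                    rbf_kernel(candidates_X[i], candidates_X[j], gamma)
                    for j in picked)
        picked.append(int(np.argmin(total)))
    return picked
```

The diversity penalty is what makes the selection iterative rather than a simple top-q cut: after each pick, candidates similar to an already selected image become more expensive, so the batch spreads over the feature space.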
TABLE I. Mean average precision (%) per concept for each active learner, on the Corel photo database. (a) SVM active learning; (b) Roy & McCallum minimization of the error of generalization; (c) RETIN active learning.

Next, the RETIN active learning method has the best performances for most concepts. Global results are shown in Fig. 4(a). First, one can see the benefit of active learning in our context, which increases MAP from 11% to 15% for the Corel database. The method which aims at minimizing the error of generalization is the least efficient active learning method. The most efficient method is the precision-oriented method we introduced in this paper, especially in the first iterations, where the number of samples is smallest. Regarding computational time per feedback, the uncertainty-based method needs about 20 ms, the method of [26] several minutes, and the one proposed in this paper 45 ms on the Corel photo database.

One can also notice that the gap between active and nonactive learning is larger on the Corel photo database, which has ten times more images than the ANN database. This shows that active learning becomes more and more efficient as the size of the database grows.

Fig. 4. Mean average precision (%) for different active learners on the (left) Corel and the (right) ANN photo database.

Fig. 5. Results on the Corel photo database. (Left) Mean average precision (%) using one to three components of the RETIN selection process. (Right) Precision/recall curves for the concept "savanna."

B. RETIN Components

The RETIN selection process is composed of three components: active boundary correction (Section V), precision-oriented selection (Section VI), and diversification (Section VII). Experiments have been carried out on the Corel database in order to show the level of improvement brought by each of these components. The results are shown in Fig. 5(a). The top curve shows the performances using all the components, the next one using diversification and active boundary correction, the next one using only active boundary correction, and the last one the selection of the images closest to the boundary, with no components.

The first improvements come from the active boundary correction and the diversification, but in different ways. The boundary correction increases the performances the most in the first feedback steps, while the diversification increases the performances the most after several feedback steps. This behavior is as expected, since the boundary correction aims at handling the lack of training samples during the first feedback steps, and the diversification is more interesting when the category becomes large. Finally, the top curve shows that the enhancement of the criterion proposed in Section VI increases the performances when used with correction and diversification.

C. Labeling System Parametrization

We ran simulations with the same protocol as in the previous section, but with various numbers of labels per feedback. In order to get comparable results, we ensure that the size of the training set at the end of a retrieval session is always the same. We compute the precision/recall curves for all the concepts of the database. Results for the "savanna" concept are shown in Fig. 5(b); let us note that all concepts gave similar results modulo a scaling factor. As one can see on this figure,
the more feedback steps, the more the performances increase. Increasing the number of feedback steps leads to more classification updates, which allows a better correction and selection.

IX. CONCLUSION

In this paper, the RETIN active learning strategy for interactive learning in a content-based image retrieval context is presented. The classification framework for CBIR is studied, and powerful classification techniques for the information retrieval context are selected. After analyzing the limitations of active learning strategies in the CBIR context, we introduce the general RETIN active learning scheme and the different components to deal with this particular context. The main contributions concern, first, the boundary correction, which makes the retrieval process more robust, and, secondly, the introduction of a new criterion for image selection that better represents the CBIR objective of database ranking. Other improvements, such as batch processing and a speed-up process, are proposed and discussed. Our strategy leads to a fast and efficient active learning scheme for the online retrieval of query concepts from a database. Experiments on large databases show that the RETIN method gives very good results in comparison to several other active strategies.

The framework introduced in this article may be extended. We are currently working on kernel functions for object class retrieval, based on bags of features: each image is no longer represented by a single global vector, but by a set of vectors. The implementation of such a kernel function is fully compatible with the RETIN active learning scheme described in this article, and the initial results are really encouraging.

Fig. 6. RETIN user interface.

APPENDIX
DATABASES AND FEATURES FOR EXPERIMENTS

Databases. We considered two databases. The first one is the COREL photo database. To get tractable computation for the statistical evaluation, we randomly selected 77 of the COREL folders, to obtain a database of 6,000 images. The second one is the ANN database from the University of Washington, with 500 images.

Features. We use a histogram of 25 colors and 25 textures for each image, computed from a vector quantization [35].

Concepts. For the ANN database, we used the 11 already existing concepts. For the Corel database, in order to perform an interesting evaluation, we built 50 concepts of various complexity from the database. Each concept is built from 2 or 3 of the COREL folders. The concept sizes range from 50 to 300. The set of all concepts covers the whole database, and many of them share common images.

Quality assessment. The CBIR system performance measurement is based on precision and recall. Precision and recall are considered for one category, and have as many values as images in the database (from one image retrieved to the maximum number of images the system can return). Precision and recall are metrics to evaluate the ranking of the images returned by the system for one category. The precision curve is always decreasing (or stationary), and the best precision curve is the one which decreases the least, which means that, whatever the number of images retrieved by the system, a lot of them are relevant. The recall curve is always increasing, and the best recall curve is the one which increases the fastest, which means that the system has retrieved most of the relevant images and few of them are lost.

Precision and recall are interesting for a final evaluation of one category; however, for larger evaluation purposes, we consider the precision/recall curve. This curve is the set of all the couples (precision, recall) for each number of images returned by the system. The curve always starts at the top left (1,0) and ends at the bottom right (0,1). Between these two points, the curve decreases regularly. A good precision/recall curve is a curve which decreases slowly, which means that, at the same time, the system returns a lot of relevant images and few of them are lost. This property is interesting since the size of the category does not play an important role, which allows the comparison of precision/recall curves for different categories.

The precision/recall curve can also be summarized by a single real value called average precision, which corresponds to the area under an ideal (noninterpolated) recall/precision curve. To evaluate a system over all the categories, the average precisions for each category are combined (averaged) across all categories to create the noninterpolated mean average precision (MAP) for that set. Let us note that this criterion is the one used by the TRECVID evaluation campaign [37].

The precision and recall values are measured by simulating retrieval scenarios. For each simulation, an image category is randomly chosen.
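The noninterpolated average precision and MAP defined above can be sketched as follows. This is a minimal illustration, not the TRECVID evaluation code; `ranked_relevance` is a hypothetical binary vector, with a 1 wherever the image at that rank belongs to the category.

```python
import numpy as np

def average_precision(ranked_relevance):
    # Noninterpolated average precision: the mean, over the relevant
    # images, of the precision measured at each relevant image's rank
    # (the area under the noninterpolated precision/recall curve).
    rel = np.asarray(ranked_relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    prec_at_k = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return float((prec_at_k * rel).sum() / rel.sum())

def mean_average_precision(rankings):
    # MAP: the average precision averaged across categories (simulations).
    return float(np.mean([average_precision(r) for r in rankings]))
```

For example, a ranking with relevant images at ranks 1 and 3 out of 4 gives an average precision of (1/1 + 2/3) / 2 = 5/6.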
Next, 100 images are selected using active learning and labeled according to the chosen category. These labeled images are used to train a classifier, which returns a ranking of the database. The average precision is then computed using this ranking. These simulations are repeated 1000 times, and all the average precision values are averaged to get the mean average precision. Next, we repeat these simulations ten times to get the mean and the standard deviation of the MAP.

RETIN interface. The RETIN user interface (cf. Fig. 6) is composed of three sub-parts. The main one, at the top left, displays the current ranking of the database. The second, at the bottom, displays the current selection of the active learner. The user gives new labels by clicking the left or right mouse button. Once new labels are given, the retrieval is updated, and a new ranking is displayed in the main part. The last part, at the top right, displays information about the current image. We show in Fig. 7 the 75 most relevant pictures after three and five iterations of five
labels for the concept "mountain." One can see that the system is able to retrieve many images of the concept, while discriminating irrelevant pictures with close visual characteristics, such as pictures of dolphins. This concept is very complex and overlaps with several other concepts, and it will necessitate more iterations to be completely learned. An online demonstration is also available on our website (http://retin.ensea.fr).

Fig. 7. Results: 75 most relevant pictures for the concept "mountain" after three or five feedback iterations of our active learning-based strategy. (a) Top rank after three iterations of five labels. (b) Top rank after five iterations of five labels.

REFERENCES

[1] R. Veltkamp, "Content-based image retrieval system: A survey," Tech. Rep., Univ. Utrecht, Utrecht, The Netherlands, 2002.
[2] Y. Rui, T. Huang, S. Mehrotra, and M. Ortega, "A relevance feedback architecture for content-based multimedia information retrieval systems," in Proc. IEEE Workshop Content-Based Access of Image and Video Libraries, 1997, pp. 82–89.
[3] E. Chang, B. T. Li, G. Wu, and K. Goh, "Statistical learning for effective visual information retrieval," in Proc. IEEE Int. Conf. Image Processing, Barcelona, Spain, Sep. 2003, pp. 609–612.
[4] S. Aksoy, R. Haralick, F. Cheikh, and M. Gabbouj, "A weighted distance approach to relevance feedback," in Proc. IAPR Int. Conf. Pattern Recognition, Barcelona, Spain, Sep. 3–8, 2000, vol. IV, pp. 812–815.
[5] J. Peng, B. Bhanu, and S. Qing, "Probabilistic feature relevance learning for content-based image retrieval," Comput. Vis. Image Understand., vol. 75, no. 1–2, pp. 150–164, Jul.–Aug. 1999.
[6] N. Doulamis and A. Doulamis, "A recursive optimal relevance feedback scheme for CBIR," presented at the Int. Conf. Image Processing, Thessaloniki, Greece, Oct. 2001.
[7] J. Fournier, M. Cord, and S. Philipp-Foliguet, "Back-propagation algorithm for relevance feedback in image retrieval," in Proc. Int. Conf. Image Processing, Thessaloniki, Greece, Oct. 2001, vol. 1, pp. 686–689.
[8] O. Chapelle, P. Haffner, and V. Vapnik, "SVMs for histogram based image classification," IEEE Trans. Neural Netw., vol. 10, pp. 1055–1064, 1999.
[9] N. Vasconcelos, "Bayesian models for visual information retrieval," Ph.D. dissertation, Mass. Inst. Technol., Cambridge, 2000.
[10] S.-A. Berrani, L. Amsaleg, and P. Gros, "Recherche approximative de plus proches voisins: Application à la reconnaissance d'images par descripteurs locaux," Tech. Sci. Inf., vol. 22, no. 9, pp. 1201–1230, 2003.
[11] N. Najjar, J. Cocquerez, and C. Ambroise, "Feature selection for semi-supervised learning applied to image retrieval," in Proc. IEEE Int. Conf. Image Processing, Barcelona, Spain, Sep. 2003, vol. 2, pp. 559–562.
[12] X. Zhu, Z. Ghahramani, and J. Lafferty, "Semi-supervised learning using Gaussian fields and harmonic functions," presented at the Int. Conf. Machine Learning, 2003.
[13] A. Dong and B. Bhanu, "Active concept learning in image databases," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 35, pp. 450–466, 2005.
[14] N. Vasconcelos and M. Kunt, "Content-based retrieval from image databases: Current solutions and future directions," in Proc. Int. Conf. Image Processing, Thessaloniki, Greece, Oct. 2001, vol. 3, pp. 6–9.
[15] Y. Chen, X. Zhou, and T. Huang, "One-class SVM for learning in image retrieval," in Proc. Int. Conf. Image Processing, Thessaloniki, Greece, Oct. 2001, vol. 1, pp. 34–37.
[16] S.-A. Berrani, L. Amsaleg, and P. Gros, "Approximate searches: k-neighbors + precision," in Proc. Int. Conf. Information and Knowledge Management, 2003, pp. 24–31.
[17] V. Vapnik, Statistical Learning Theory. New York: Wiley-Interscience, 1998.
[18] S. Tong and D. Koller, "Support vector machine active learning with application to text classification," J. Mach. Learn. Res., vol. 2, pp. 45–66, Nov. 2001.
[19] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller, "Fisher discriminant analysis with kernels," in Neural Networks for Signal Processing IX, Y. Hu, J. Larsen, E. Wilson, and S. Douglas, Eds. Piscataway, NJ: IEEE, 1999, pp. 41–48.
[20] B. Schölkopf and A. Smola, Learning With Kernels. Cambridge, MA: MIT Press, 2002.
[21] P. Yin, B. Bhanu, K. Chang, and A. Dong, "Integrating relevance feedback techniques for image retrieval using reinforcement learning," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1536–1551, Oct. 2005.
[22] J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[23] T. Joachims, "Transductive inference for text classification using support vector machines," in Proc. 16th Int. Conf. Machine Learning, San Francisco, CA, 1999, pp. 200–209.
[24] P. Gosselin, M. Najjar, M. Cord, and C. Ambroise, "Discriminative classification vs. modeling methods in CBIR," presented at the IEEE Advanced Concepts for Intelligent Vision Systems (ACIVS), Sep. 2004.
[25] M. Cord, "RETIN AL: An active learning strategy for image category retrieval," in Proc. IEEE Int. Conf. Image Processing, Singapore, Oct. 2004, vol. 4, pp. 2219–2222.
[26] N. Roy and A. McCallum, "Toward optimal active learning through sampling estimation of error reduction," presented at the Int. Conf. Machine Learning, 2001.
[27] D. Cohn, "Active learning with statistical models," J. Artif. Intell. Res., vol. 4, pp. 129–145, 1996.
[28] D. D. Lewis and J. Catlett, "Heterogeneous uncertainty sampling for supervised learning," in Proc. Int. Conf. Machine Learning, Jul. 1994, pp. 148–156.
[29] J. M. Park, "Online learning by active sampling using orthogonal decision support vectors," in Proc. IEEE Workshop Neural Networks for Signal Processing, Dec. 2000, vol. 77, pp. 263–283.
[30] M. Lindenbaum, S. Markovitch, and D. Rusakov, "Selective sampling for nearest neighbor classifiers," Mach. Learn., vol. 54, no. 2, pp. 125–152, Feb. 2004.
[31] I. Cox, M. Miller, T. Minka, T. Papathomas, and P. Yianilos, "The Bayesian image retrieval system, PicHunter: Theory, implementation and psychophysical experiments," IEEE Trans. Image Process., vol. 9, no. 1, pp. 20–37, Jan. 2000.
[32] D. Geman and R. Moquet, "A stochastic feedback model for image retrieval," in Proc. RFIA, Paris, France, Feb. 2000, vol. III, pp. 173–180.
[33] S. Tong and E. Chang, "Support vector machine active learning for image retrieval," ACM Multimedia, pp. 107–118, 2001.
[34] P. Gosselin and M. Cord, "A comparison of active classification methods for content-based image retrieval," presented at the Int. Workshop Computer Vision Meets Databases (CVDB), ACM SIGMOD, Paris, France, Jun. 2004.
[35] M. Cord, P. Gosselin, and S. Philipp-Foliguet, "Stochastic exploration and active learning for image retrieval," Image Vis. Comput., vol. 25, pp. 14–23, 2006.
[36] K. Brinker, "Incorporating diversity in active learning with support vector machines," in Proc. Int. Conf. Machine Learning, Feb. 2003, pp. 59–66.
[37] TREC Video Retrieval Evaluation Campaign. [Online]. Available: http://www-nlpir.nist.gov/projects/trecvid/

Philippe Henri Gosselin received the Ph.D. degree in image and signal processing from Cergy University, France, in 2005. After two years of postdoctoral positions at the LIP6 Lab, Paris, France, and at the ETIS Lab, Cergy, he joined the MIDI Team in the ETIS Lab as an Assistant Professor. His research focuses on machine learning for online multimedia retrieval. He developed several statistical tools to deal with the special characteristics of content-based multimedia retrieval. This includes studies on kernel functions on histograms, bags, and graphs of features, but also weakly supervised semantic learning methods. He is involved in several international research projects, with applications to image, video, and 3-D object databases. Dr. Gosselin is a member of the European network of excellence Multimedia Understanding through Semantics, Computation and Learning (MUSCLE).

Matthieu Cord received the Ph.D. degree in image processing from Cergy University, France, in 1998. After a one-year postdoctoral position at the ESAT Lab, KUL, Belgium, he joined the ETIS Lab, France. He received the HDR degree and obtained a Full Professor position at the LIP6 Lab, UPMC, Paris, in 2006. His research interests include content-based image retrieval (CBIR) and computer vision. In CBIR, he focuses on learning-based approaches to visual information retrieval. He developed several interactive systems to implement, compare, and evaluate online relevance feedback strategies for web-based applications and artwork-dedicated contexts. He is also working on multimedia content analysis. He is involved in several international research projects. He is a member of the European network of excellence Multimedia Understanding through Semantics, Computation, and Learning (MUSCLE).
