GOSSELIN AND CORD: ACTIVE LEARNING METHODS FOR INTERACTIVE IMAGE RETRIEVAL 1201

originality of our approach is based on the association of three components:
• boundary correction, which corrects the noisy classification boundary in the first iterations;
• average precision maximization, which selects the images so that classification and mean average precision are enhanced;
• batch selection, which addresses the problem of the selection of multiple images in the same feedback iteration.

We also propose a preselection technique to speed up the selection process, which leads to a computational complexity negligible compared to the size of the database for the whole active learning process. All these components are integrated in our retrieval system, called RETIN.

In this scope, we first present the binary classification and kernel framework used to represent complex class distributions (Section II). Powerful classification methods and well-defined kernels for global image signatures are evaluated in experiments on real databases. Secondly, we present active learning to interact with the user (Section III), then introduce the RETIN active learning scheme in Section IV, whose components are described in Sections V, VI, and VII. Finally, we compare our method to existing ones using realistic scenarios on large databases (Section VIII).

II. CLASSIFICATION FRAMEWORK

In the classification framework for CBIR, retrieving image concepts is modeled as a two-class problem: the relevant class, the set of images in the searched concept, and the irrelevant class, composed of the remaining database.

Let {x_i} be the image indexes of the database. A training set is expressed from any user label retrieval session as A = {(x_i, y_i)}, where y_i = 1 if the image x_i is labeled as relevant, y_i = -1 if it is labeled as irrelevant, and y_i = 0 otherwise. The classifier is then trained using these labels, and a relevance function f is determined in order to be able to rank the whole database. Image indexes contain a summary of visual features, in this paper histograms of colors and textures.

A. Classification Methods for CBIR

Bayes and probabilistic classifiers are the most used classification methods for CBIR. They are interesting since they are directly "relevance oriented," i.e., no modifications are required to get the probability of an image to be in the concept. However, since Gaussian-based models are generally used, they focus on estimating the center of each class, and are therefore less accurate near the boundary than discriminative methods, which is an important aspect for active learning. Furthermore, because of the unbalance of training data, the tuning of the irrelevant class is not trivial.

k-Nearest Neighbors determines the class of an image considering its nearest neighbors. Despite its simplicity, it still has real interest in terms of efficiency, especially for the processing of huge databases.

Support vector machines are classifiers which have met with much success in many learning tasks during the past years. They were introduced with kernel functions in the statistical learning community and were quickly adopted by researchers of the CBIR community. SVMs select the optimal separating hyperplane which has the largest margin and, hence, the lowest vicinal risk. Their high ability to be used with a kernel function provides many advantages that we describe in Section III.

Fisher discriminant analysis also uses hyperplane classifiers, but aims at minimizing both the size of the classes and the inverse distance between the classes. Let us note that, in experiments, they give results similar to SVMs.

B. Feature Distribution and Kernel Framework

In order to compute the relevance function f(x_i) for any image x_i, a classification method has to estimate the density of each class and/or the boundary. This task is mainly dependent on the shape of the data distribution in the feature space. In the CBIR context, relevant images may be distributed in a single mode for one concept, and in a large number of modes for another concept, thereby inducing nonlinear classification problems.

Gaussian mixtures are widely used in the CBIR context, since they have the ability to represent these complex distributions. However, in order to get an optimal estimation of the density of a concept, data have to be Gaussian distributed (a limitation for real data processing). Furthermore, the large number of parameters required for Gaussian mixtures leads to high computational complexity. Another approach consists of using the kernel function framework. These functions were introduced in the statistical learning community with the support vector machines. The first objective of this framework is to map image indexes x_i to vectors Φ(x_i) in a Hilbert space, and, hence, to turn the nonlinear problem into a linear one. It also differs from other linearization techniques by its ability to implicitly work on the vectors of the Hilbert space. Kernel methods never explicitly compute the vectors Φ(x_i), but only work on their dot products ⟨Φ(x_i), Φ(x_j)⟩, and, hence, allow working with very large or infinite Hilbert spaces. Furthermore, the learning task and the distribution of the data can be separated: under the assumption that there is a kernel function which maps the data into a linear space, all linear learning techniques can be used on any image database.

We denote by k(x_i, x_j) the value of the kernel function between images x_i and x_j. This function will be considered as the default similarity function in the following.

C. Comparison of Classifiers

We have experimented with several classification methods for CBIR: Bayes classification with a Parzen density estimation, k-Nearest Neighbors, support vector machines, kernel Fisher discriminant, and also a query-reweighting strategy. Databases, scenario, evaluation protocol, and quality measurement are detailed in the appendix. The results in terms of mean average precision are shown in Fig. 1(a) according to the training set size (we omit the KFD, which gives results very close to inductive SVMs) for both the ANN and Corel databases.

One can see that the classification-based methods give the best results, showing the power of statistical methods over geometrical approaches, like the one reported here (similarity refinement method). The SVM technique performs slightly better than the others in this context. As they have a strong mathematical
1202 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY 2008

Fig. 1. Mean average precision (%) for a classification from randomly selected examples on the (left) Corel and the (right) ANN photo database.

framework and efficient algorithmic implementations, SVMs are used as the default method in the RETIN system.

We also made some comparisons between different kernels (linear, polynomial, triangle, and Gaussian kernels based on the L1, L2, and χ² distances). In our experiments, the Gaussian kernel with the χ² distance gives the best performance, which is not a surprising result since histograms are used as image signatures, and the χ² distance is dedicated to comparing distributions. In the following experiments, we always use a Gaussian kernel with a χ² distance.

Remark: As the whole data set is available during training, it could be interesting to consider a semi-supervised or transductive framework. For instance, there are extended versions of Gaussian mixtures, and transductive SVMs. We have experimented with these methods. The computational time is very high and no improvement has been observed. Transduction does not seem to be an interesting approach for CBIR, as already observed elsewhere.

Anyway, the global performances remain very low for all the proposed methods in the Corel experiments. The MAP is under 20%, even when the number of training data is up to 100 for Corel. The Corel database is much bigger than ANN and the simulated concepts are more difficult. The training set remains too small to allow classifiers to efficiently learn the query concept. Active learning is now considered to overcome this problem.

III. ACTIVE LEARNING

Active learning is close to supervised learning, except that training data are not independent and identically distributed variables. Some of them are added to the training set thanks to a dedicated process. In this paper, we only consider selective sampling approaches from the general active learning framework. In this context, the main challenge is to find data that, once added to the training set, will allow us to achieve the best classification function. These methods have been introduced to perform good classifications with few training data, in comparison to the standard supervised scheme.

When the learner can only choose new data in a pool of unlabeled data, it is called pool-based active learning. In CBIR, the whole set of images is available at any time. These data will be considered as the pool of unlabeled data during the selective sampling process.

A. Example of Active Strategies

Fig. 2 shows the interest of a selection step. In this example, the images are represented by 2-D feature vectors, the white circles are images the user is looking for, and the black circles are the images the user is not interested in. At the beginning, the user provided two labels, represented in the figures by larger circles [cf. Fig. 2(a)]. These two labels allow the system to compute a first classification. In classical relevance feedback systems, a common way of selection was to label the most relevant pictures returned by the system. As one can see in Fig. 2(b), this choice is not effective, since in that case the classification is not improved. Other types of selection of new examples may be considered. For instance, in Fig. 2(c), an active learning selection working on uncertainty is proposed: the user labels the pictures closest to the boundary, resulting in an enhanced classification in that case [Fig. 2(c)].

B. Optimization Scheme

Starting from the image signatures {x_i}, the training set A and the relevance function f defined in Section II, new notations are introduced for the teacher s that labels images as -1 or 1, the indexes I of the labeled images, and the indexes J of the unlabeled ones.

Active learning aims at selecting the unlabeled data that will enhance the most the relevance function f trained with the new label added to A. To formalize this selection process as a minimization problem, a cost function g is introduced. For any active learning method, the selected image is the one minimizing g over the pool of unlabeled images:

    i* = argmin_{i ∈ J} g(x_i)    (1)

C. Active Learning Methods

Two different strategies are usually considered for active learning: uncertainty-based sampling, which selects the images for which the relevance function is the most uncertain (Fig. 2), and the error reduction strategy, which aims at minimizing the generalization error of the classifier. According to this distinction, we have selected two popular active learning strategies for presentation and comparison.
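Two of the ingredients discussed so far — a Gaussian kernel built on the χ² distance for histogram signatures, and selection of the unlabeled images whose relevance is closest to zero — can be sketched as follows. This is a minimal illustration under stated assumptions, not the RETIN implementation: the kernel normalisation, the function names, and the toy relevance scores are illustrative choices.

```python
import numpy as np

def chi2_gaussian_kernel(X, Y, sigma=1.0, eps=1e-12):
    """Gaussian kernel on the chi-squared distance between histograms.

    d(x, y) = sum_i (x_i - y_i)^2 / (x_i + y_i);  k = exp(-d / (2 sigma^2)).
    The exact normalisation used in the paper is not given here; this is one
    common convention.
    """
    d = np.zeros((len(X), len(Y)))
    for i, x in enumerate(X):
        num = (x - Y) ** 2          # broadcast against every row of Y
        den = x + Y + eps           # eps avoids division by zero on empty bins
        d[i] = (num / den).sum(axis=1)
    return np.exp(-d / (2 * sigma ** 2))

def uncertainty_selection(f_scores, unlabeled_idx, n=1):
    """Uncertainty-based sampling, cf. Eq. (2): among the unlabeled pool,
    pick the n images whose relevance f(x_i) is closest to 0, i.e. closest
    to the decision boundary."""
    order = np.argsort(np.abs(f_scores[unlabeled_idx]))
    return [unlabeled_idx[j] for j in order[:n]]

# Toy usage: three 2-bin color histograms and made-up relevance scores.
H = np.array([[0.5, 0.5], [0.2, 0.8], [0.9, 0.1]])
K = chi2_gaussian_kernel(H, H)            # similarity matrix, diagonal = 1
f = np.array([0.9, -0.1, 0.5, -0.8, 0.05])
picked = uncertainty_selection(f, [1, 2, 4], n=2)
```

Such a precomputed kernel matrix can then be handed to any kernel classifier (e.g., an SVM with a precomputed kernel), while the selection step only needs the resulting relevance scores.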
GOSSELIN AND CORD: ACTIVE LEARNING METHODS FOR INTERACTIVE IMAGE RETRIEVAL 1203

Fig. 2. Active learning illustration. A linear classifier is computed for white (relevant) and black (irrelevant) data classification. Only the large circles are used for training, the small ones represent unlabeled data. The line is the boundary between classes after training. (a) Initial boundary. (b) Two new data (the closest to the relevant one) have been added to the training set; the boundary is unchanged. (c) Most uncertain data (closest to the boundary) are added to the training data; the boundary significantly moved, and provides a better separation between black and white data. (a) Initial training set. (b) Label the most relevant data. (c) Label the most uncertain data.

1) Uncertainty-Based Sampling: In our context of binary classification, the learner of the relevance function has to classify data as relevant or irrelevant. Any data in the pool of unlabeled samples may be evaluated by the learner. Some are definitively relevant, others irrelevant, but some may be more difficult to classify. The uncertainty-based sampling strategy aims at selecting the unlabeled samples that the learner is the most uncertain about.

To achieve this strategy, a first approach proposed by Cohn uses several classifiers with the same training set, and selects the samples whose classifications are the most contradictory. Another solution consists of computing a probabilistic output for each sample, and selecting the unlabeled samples with probabilities closest to 0.5. Similar strategies have also been proposed with SVM classifiers, with a theoretical justification, and with nearest neighbor classifiers.

In any case, a relevance function f is trained. This function may be adapted from a distribution, a membership to a class (distance to the hyperplane for SVM), or a utility function. Using this relevance function, uncertain data will be close to 0: f(x) ≈ 0. The solution to the minimization problem in (1) is then

    i* = argmin_{i ∈ J} |f(x_i)|    (2)

The efficiency of these methods depends on the accuracy of the relevance function estimation close to the boundary between the relevant and irrelevant classes.

2) Error Reduction-Based Strategy: Active learning strategies based on error reduction aim at selecting the sample that, once added to the training set, minimizes the error of generalization of the new classifier.

Let P(y|x) denote the (unknown) probability of sample x to be in class y (relevant or irrelevant), and P(x) the (also unknown) distribution of the images. With the training set A, the training provides an estimation P̂_A(y|x) of P(y|x), and the expected error of generalization is

    E(P̂_A) = ∫ L(P(y|x), P̂_A(y|x)) dP(x)

with a loss function L which evaluates the loss between the estimation P̂_A(y|x) and the true distribution P(y|x). The optimal pair minimizes this expectation over the pool of unlabeled images:

    (i*, y*) = argmin_{i ∈ J, y ∈ {-1,1}} E(P̂_{A ∪ {(x_i, y)}})    (3)

As the expected error is not accessible, the integral over P(x) is usually approximated using the unlabeled set. Roy and McCallum also propose to estimate the probability P(y|x) with the relevance function provided by the current classifier. With a loss function L, the estimation of the expectation is expressed for any training set A as

    Ê(P̂_A) = (1/|J|) Σ_{j ∈ J} L(P̂_A(y|x_j))

Furthermore, as the labels on J are unknown, they are estimated by computing the expectation for each possible label. Hence, the cost function is given by

    g(x_i) = Σ_{y ∈ {-1,1}} P̂_A(y|x_i) Ê(P̂_{A ∪ {(x_i, y)}})    (4)

The following relation between f and P is used: P̂(y = 1 | x) = (f(x) + 1)/2.

D. Active Learning in CBIR Context

Active learning methods are generally used for classification problems where the training set is large and the classes are well balanced. However, our context is somewhat different.
• Unbalance of classes: The class of relevant images (the searched concept) is generally 20 to 100 times smaller than the class of irrelevant images. As a result, the boundary is very inaccurate, especially in the first iterations of relevance feedback, where the size of the training set is dramatically small. Many methods become inefficient in this context, and the selection is then close to random.
1204 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY 2008

Fig. 3. RETIN active learning scheme.

• Selection criterion: While minimizing the error of classification is interesting for CBIR, this criterion does not completely reflect the user satisfaction. Other utility criteria closer to it, such as precision, should provide more efficient selections.
• Batch selection: We have to propose more than one image to label between two feedback steps, contrary to many active learning techniques which are only able to select a single image.

The computation time is also an important criterion for CBIR in general-purpose applications, since people will not wait several minutes between two feedback steps. Furthermore, a fast selection allows the user to provide more labels in the same time. Thus, it is more interesting to use a less efficient but fast method than a more efficient but highly computational one.

To take these characteristics into account, we propose in Section IV an active learning strategy dedicated to the CBIR context, based on the optimization of a function g, as proposed in (2) or (4). Aside from the active learning context, some other learning approaches have been proposed in a Bayesian framework: dedicated to target search (where the user is looking for a single image), a probability for each image to be the target is updated thanks to user interaction.

IV. RETIN ACTIVE LEARNING SCHEME

We propose an active learning scheme based on binary classification in order to interact with a user looking for image concepts in databases. The scheme is summarized in Fig. 3.

1) Initialization: A retrieval session is initialized from one image brought by the user. The features are computed on that new image and added to the database. This image is then labeled as relevant, and the closest pictures are shown to the user. Note that other initializations could be used, for instance with keywords.
2) Classification: A binary classifier is trained with the labels the user gives. We use an SVM with a Gaussian kernel, since it has proved to be the most efficient. The result is a function f which returns the relevance of each image x_i, according to the examples.
3) Boundary Correction: We add an active correction to the boundary in order to deal with the few training data and the imbalance of the classes. We present this technique in Section V.
4) Selection: In the case where the user is not satisfied with the current classification, the system selects a set of images the user should label. The selection must be such that the labeling of those images provides the best performances. We divide the selection into three steps. The first step aims at reducing the computational time by preselecting some hundreds of pictures which may be in the optimal selection set. We propose to preselect the pictures closest to the (corrected) boundary. This process is computed very fast, and the uncertainty-based selection method has proved its interest in the CBIR context. The second step is the computation of the selection criterion for each preselected image. We use an uncertainty-based selection criterion which aims at maximizing the average precision, whose details are presented in Section VI. The third step computes the batch selection. The method presented in Section VII selects the images, using the previously computed criteria.
5) Feedback: The user labels the selected images, and a new classification and correction can be computed. The process is repeated as many times as necessary.

V. ACTIVE BOUNDARY CORRECTION

During the first steps of relevance feedback, classifiers are trained with very few data, about 0.1% of the database size. At this stage, classifiers are not even able to perform a good estimation of the size of the concept. Their natural behavior in this case is to divide the database into two parts of almost the same size. Each new sample then changes the estimated class of hundreds, sometimes thousands, of images. Selection is then close to a random selection.

A solution is to ensure that the retrieval session is initialized with a minimum number of examples. For instance, Tong proposes to initialize with 20 examples. Beyond the question of the minimum number of examples required for a good starting boundary, initial examples are not always available without some third-party knowledge (for instance, keywords).

Thus, we propose a method to correct the boundary in order to reduce this problem. A first version of this method was proposed in earlier work, but we introduce here a generalization of it. The correction is a shift of the boundary:

    f̂_t(x_i) = f_t(x_i) - b_t    (5)

with f_t the relevance function, and b_t the correction at feedback step t.

We compute the correction to move the boundary towards the most uncertain area of the database. In other words, we want a positive value of f̂_t(x_i) for any image in the concept, and a negative value for the others. In order to get this behavior, a ranking of the database is computed: with
GOSSELIN AND CORD: ACTIVE LEARNING METHODS FOR INTERACTIVE IMAGE RETRIEVAL 1205

σ_t a function returning the indexes of the sorted values of f_t (so that f_t(x_{σ_t(1)}) ≥ f_t(x_{σ_t(2)}) ≥ …). Let k_t be the rank of the image whose relevance is the most uncertain. We have b_t = f_t(x_{σ_t(k_t)}), and our correction is expressed by f̂_t(x_i) = f_t(x_i) - f_t(x_{σ_t(k_t)}).

In order to compute the correction, we propose an algorithm based on an adaptive tuning of k_t during the feedback steps. The value k_{t+1} at the (t+1)th iteration is computed considering the set of labels provided by the user at the current iteration t. Actually, we suppose that the best threshold corresponds to the searched boundary. Such a threshold allows the system to present as many relevant images as irrelevant ones. Thus, if and only if the set of selected images is well balanced (between relevant and irrelevant images), the threshold is good. We exploit this property to tune k_t.

At the tth feedback step, the system is able to classify images using the current training set. The user gives new labels for the selected images, and they are compared to the current classification. If the user mostly gives relevant labels, the system should propose new images for labeling around a higher rank, to get more irrelevant labels. On the contrary, if the user mostly gives irrelevant labels, the classification does not seem to be good around rank k_t, and new images for labeling should be selected around a lower rank (to get more relevant labels). In order to get this behavior, we introduce the following update rule:

    k_{t+1} = k_t + h(I_t, A_t)    (6)

with I_t the set of indexes of the selected images at step t, and a function h such that the following holds.
• If we have as many positive labels as negative ones in I_t, no change is needed since the boundary is certainly well placed. Thus, h should be close to zero.
• If we have many more positive than negative labels, the boundary is certainly close to the center of the relevant class. In order to find more negative labels, a better boundary should correspond to a further rank, which means that h should be positive.
• On the contrary, if we have many negative labels, the boundary is certainly too far from the center of the relevant class. In this case, h should be negative.

The easiest way to build a function with such a behavior is to compute the difference between the number of positive and negative labels. However, it is interesting to take into account the error between the current relevance f_t(x_i) of a newly labeled image and the label y_i provided by the user. Indeed, if an image is labeled as positive and its current relevance is already close to 1, this label should not be taken into account. On the contrary, if an image is labeled as negative and its current relevance is close to 1, it means that the current boundary is far too close to the center of the relevant class; in this case, a large change of k_t is required. This behavior for h can be obtained by computing the average error between the current relevance of a newly labeled image and the label provided by the user:

    h(I_t, A_t) = (1/|I_t|) Σ_{i ∈ I_t} (y_i - f_t(x_i))    (7)

Remark: When using SVMs to compute the classifier, this correction process is close to the computation of parameter b in the SVM decision function

    f(x) = Σ_j α_j y_j k(x_j, x) + b    (8)

Parameter b can be computed using the KKT conditions; for instance, if x_j is a support vector, b = y_j - Σ_i α_i y_i k(x_i, x_j). Hence, one can see that our boundary correction is close to the computation of parameter b of the SVM decision function (8), except that we are considering vectors out of the training set.

VI. AVERAGE PRECISION MAXIMIZATION

Any active learning method aims at selecting the samples which decrease the error of classification. In CBIR, users are interested in the database ranking. A usual metric to evaluate this ranking is the average precision (see the appendix for a definition).

A. Average Precision Versus Classification Error

We made experiments to evaluate the difference between the direct optimization of the average precision and of the classification error. For the minimization of the classification error, the optimal pair is the one which satisfies the following equation over the unlabeled data pool [cf. (3)]:

    (i*, y*) = argmin_{i ∈ J, y} E(P̂_{A ∪ {(x_i, y)}})

where E(P̂_A) is the expected error of generalization of the classifier trained with set A. In the case of the maximization of the average precision, the expression is close to the previous one, except that we consider the average precision M:

    (i*, y*) = argmax_{i ∈ J, y} M(r_{A ∪ {(x_i, y)}})

where r_A is the ranking of the database computed from the classifier output trained with the set A.

We compared these two criteria on a reference database where all the labels are known. One can notice that these criteria cannot be used in real applications, where most of the labels are unknown; they are only used here for evaluation purposes.

First of all, one can see that for both criteria the MAP always increases, which means that an iterative selection of examples will always increase the performance of the system. Furthermore, whenever the classification error method increases the MAP, the technique maximizing the average precision performs significantly better, with a gain around 20%. This large
1206 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY 2008

difference with the error minimization is a motivation to develop average precision-based strategies, even if the true MAP is not available and has to be approximated. The aim here is to select the images that will maximize the average precision and, hence, to estimate it.

However, the estimation of the average precision is particularly difficult with the kind of samples we have chosen. Indeed, we opted for binary labels, which are simple enough for any nonexpert user. Those labels are well adapted to classification, but they are less suitable for the estimation of average precision: this criterion is based on a ranking of the database, and binary labels do not carry any such ranking information.

B. Precision-Oriented Selection

As explained before, the straight estimation of the average precision is not efficient in our context. We opted for a different strategy, but have kept in mind the objective of optimizing the ranking. We experimentally noticed that selecting images close to the boundary between relevant and irrelevant classes is definitively efficient, but when considering a subset of images close to the boundary, the active sampling strategy of selecting the closest image inside this subset is not necessarily the best strategy. We propose to consider the subset of images close to the boundary, and then to use a criterion related to the average precision in order to select the winner image of the selective sampling strategy.

In order to compute a score related to average precision, we propose to consider the sub-database of the labeled pictures. We have the ground truth for this sub-database, since we have the labels of all its images. Thus, it becomes feasible to compute the average precision on this sub-database, without any estimation. We still need a ranking of this sub-database in order to compute the average precision. We compute the similarity of an unlabeled image x_i to each labeled image (using the kernel function as the similarity function), and then rank the labeled images according to these similarity values. The resulting factor m(x_i) is the average precision on the sub-database with this ranking. The aim of this factor is to favor the picture that, once labeled, will have the best chance to increase the average precision.

Our final cost function (9) achieves a tradeoff between getting the closest image and optimizing the average precision. Then, using this factor, in the case where the user labels the selected image as positive, the image and its neighbors will be well ranked. Since we selected the image which, at the same time, is the closest to positive labels and furthest from negative labels, the newly labeled image (and its neighbors) will be well ranked, and will probably not bring irrelevant images into the top of the ranking.

Let us note that the idea of combining "pessimist" (like uncertainty-based) and "optimist" (like our precision-oriented factor) strategies seems to be an interesting direction of investigation. We have also tested a combination of uncertainty-based and error reduction strategies, by selecting images with the algorithm of Roy and McCallum but only testing the 100 images closest to the boundary. This method works a little better than the two other ones (in terms of MAP), but the difference is not as large as when combining with a precision-oriented factor.

VII. BATCH SELECTION

The aim of batch selection is to select the set that minimizes the selection criterion over all the candidate sets of unlabeled images. As for the selection of one image, this minimization has to be estimated. Because of the complexity constraints of the CBIR context, a direct estimation cannot be performed. A naive extension of the selection to more than one sample is to select the samples that minimize function g. However, a better batch selection can be achieved by selecting the samples in an iterative way. Although this strategy is sub-optimal, it requires little computation in comparison to an exhaustive search for the best subset.

Actually, our algorithm selects the pictures one by one, and prevents the selection of close samples:

    I ← ∅
    for l = 1 to q
        i* = argmin_{i ∈ J \ I} ( g(x_i) + max_{j ∈ I} k(x_i, x_j) )
        I ← I ∪ {i*}
    endfor

with k(x_i, x_j) the kernel function between sample x_i and sample x_j.

The efficiency of this batch selection follows from the behavior of current classification techniques, for which labeling one or several close samples provides almost the same classification output. Hence, this batch selection process provides more diversity in the training set. Our batch selection does not depend on a particular technique for the selection of a single image; hence, it may be used with all the previously described active learning processes.

VIII. ACTIVE LEARNING EXPERIMENTS

Experiments have been carried out on the databases described in the appendix. In the following, we use an SVM classifier with a Gaussian kernel and a χ² distance.

A. Comparison of Active Learners

We compare the method proposed in this paper to an uncertainty-based method, and to a method which aims at minimizing the error of generalization. We also add a nonactive method, which randomly selects the images.

Results per concept are shown in Table I for the Corel photo database for 10 to 50 labels. First of all, one can see that we have selected concepts of different levels of complexity. The performances go from a few percent of mean average precision to 89%. The concepts that are the most difficult to retrieve are very small and/or have a very diversified visual content. For instance, the "Ontario" concept has 42 images, and is composed of pictures of buildings and landscapes with no common visual content. However, the system is able to make some improvements.
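The mean average precision reported throughout these experiments can be computed directly from a ranked list of ground-truth relevance judgments. The sketch below assumes the whole database is ranked, so that averaging precision over the relevant positions matches the usual AP definition; the appendix of the paper may normalise slightly differently.

```python
import numpy as np

def average_precision(ranked_labels):
    """Average precision of a ranked list.

    ranked_labels[i] is True if the i-th returned image is relevant.
    AP = mean, over the positions of relevant images, of the precision
    at that position. Returns 0.0 if no relevant image is present.
    """
    ranked_labels = np.asarray(ranked_labels, dtype=bool)
    if not ranked_labels.any():
        return 0.0
    cum_rel = np.cumsum(ranked_labels)              # relevant hits so far
    ranks = np.arange(1, len(ranked_labels) + 1)    # 1-based rank positions
    precisions = cum_rel / ranks                    # precision at each rank
    return float(precisions[ranked_labels].mean())

def mean_average_precision(rankings):
    """MAP: mean of AP over several concepts (one ranked list per concept)."""
    return float(np.mean([average_precision(r) for r in rankings]))

ap = average_precision([True, False, True])   # (1/1 + 2/3) / 2 = 5/6
```

In a simulated retrieval session, one would re-rank the database with the current relevance function after each feedback step and recompute this measure, which is how curves such as Fig. 4 are typically produced.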
GOSSELIN AND CORD: ACTIVE LEARNING METHODS FOR INTERACTIVE IMAGE RETRIEVAL 1207

TABLE I. MEAN AVERAGE PRECISION (%) PER CONCEPT FOR EACH ACTIVE LEARNER, ON THE COREL PHOTO DATABASE: SVM UNCERTAINTY-BASED SELECTION, MINIMIZATION OF THE ERROR OF GENERALIZATION (ROY & McCALLUM), AND RETIN ACTIVE LEARNING.

Next, the RETIN active learning method has the best performances for most concepts. Global results are shown in Fig. 4(a). First, one can see the benefit of active learning in our context, which increases MAP
1208 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY 2008 Fig. 4. Mean average precision (%) for different active learners on the (left) Corel and the (right) ANN photo database.Fig. 5. Results on the Corel photo database. (Left) Mean average precision (%) using one to three components of the RETIN selection process. (Right) Precision/recall curves for the concept “savanna.”from 11% to 15% for the Corel database. The method which one only using active boundary correction, and the last one isaims at minimizing the error of generalization is the less efﬁcient the selection of the images the closest to the boundary, with noactive learning method. The most efﬁcient method is the pre- components.cision-oriented method we introduced in this paper, especially The ﬁrst improvements come from the active boundary cor-in the ﬁrst iterations, where the number of samples is smallest. rection and diversiﬁcation, but in different ways. The boundaryAbout computational time per feedback, the method correction increases the performances the most in the ﬁrstneeds about 20 ms, the method of  several minutes, and the feedback steps, while the diversiﬁcation increases the perfor-one proposed in this paper 45 ms on the Corel photo database. mances the most after several feedback steps. This behavior is One can also notice that the gap between active and nonac- as expected, since the boundary correction aims at handling thetive learning is larger on the Corel photo database, which has lack of training samples during the ﬁrst feedback steps, and theten times more images than the ANN database. This shows that diversiﬁcation is more interesting when the category becomesactive learning becomes more and more efﬁcient when the size large. Finally, the top curve shows that the enhancement ofof the database grows. the criterion proposed in Section VI increases the performances when used with correction and diversiﬁcation.B. Retin Components C. 
Labeling System Parametrization The RETIN selection process is composed of three compo- We ran simulations with the same protocol that in the pre-nents: active boundary correction (Section V), precision-ori- vious section, but with various numbers of labels per feedback.ented selection (Section VI), and diversiﬁcation (Section VII). In order to get comparable results, we ensure that the size ofExperiments have been carried out on the Corel database in the training set at the end of a retrieval session is always theorder to show the level of improvement brought by each of these same. We compute the precision/recall curves for all the con-components. The results are shown in Fig. 5(a). The top curve cepts of the database. Results for the “savanna” concept areshows the performances using all the components, the next one shown in Fig. 5(b); let us note that all concepts gave similarusing diversiﬁcation and active boundary correction, the next results moduling a scaling factor. As one can see on this ﬁgure,
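To make the compared selection strategies concrete, the sketch below illustrates a closest-to-the-boundary selection combined with a simple diversity penalty, in the spirit of the angle-diversity criterion of Brinker cited in the references. This is a generic illustration under assumed inputs (SVM decision values for the unlabeled pool and a normalized kernel matrix), not the actual RETIN implementation; the function name `select_diverse_batch` and the equal trade-off weight between uncertainty and redundancy are our own simplifications.

```python
import numpy as np

def select_diverse_batch(decision_values, similarity, batch_size):
    """Greedy batch selection: uncertain samples, penalized for redundancy.

    decision_values: SVM outputs f(x_i) for the unlabeled pool (sign = class).
    similarity: normalized kernel matrix (cosine of angles in feature space).
    """
    n = len(decision_values)
    uncertainty = -np.abs(decision_values)   # highest for samples near the boundary
    selected = []
    for _ in range(batch_size):
        best, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # redundancy = maximal similarity to the samples already selected
            red = max((similarity[i, j] for j in selected), default=0.0)
            score = uncertainty[i] - red     # equal-weight trade-off (assumption)
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

decision_values = np.array([0.9, 0.05, -0.1, 0.06])
similarity = np.array([[1.0, 0.0, 0.0, 0.0],
                       [0.0, 1.0, 0.1, 0.95],
                       [0.0, 0.1, 1.0, 0.2],
                       [0.0, 0.95, 0.2, 1.0]])
# Samples 1 and 3 are both near the boundary but nearly identical (0.95);
# the diversity penalty keeps one of them and picks sample 2 instead.
print(select_diverse_batch(decision_values, similarity, 2))  # -> [1, 2]
```

Without the penalty, a pure closest-to-the-boundary rule would select the near-duplicate pair (1, 3), which is exactly the redundancy that diversification is meant to avoid.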
Increasing the number of feedback steps leads to more classification updates, which allows a better correction and selection.

IX. CONCLUSION

In this paper, the RETIN active learning strategy for interactive learning in a content-based image retrieval context is presented. The classification framework for CBIR is studied, and powerful classification techniques for the information retrieval context are selected. After analyzing the limitations of active learning strategies in the CBIR context, we introduce the general RETIN active learning scheme and its components for dealing with this particular context. The main contributions concern, first, the boundary correction, which makes the retrieval process more robust, and, secondly, the introduction of a new criterion for image selection that better represents the CBIR objective of database ranking. Other improvements, such as batch processing and the speed-up process, are proposed and discussed. Our strategy leads to a fast and efficient active learning scheme for retrieving query concepts online from a database. Experiments on large databases show that the RETIN method gives very good results in comparison to several other active strategies.

The framework introduced in this article may be extended. We are currently working on kernel functions for object class retrieval, based on bags of features: each image is no longer represented by a single global vector, but by a set of vectors. The implementation of such a kernel function is fully compatible with the RETIN active learning scheme described in this article, and the initial results are really encouraging.

Fig. 6. RETIN user interface.

APPENDIX
DATABASES AND FEATURES FOR EXPERIMENTS

Databases. We considered two databases. The first one is the COREL photo database. To get tractable computation for the statistical evaluation, we randomly selected 77 of the COREL folders, to obtain a database of 6,000 images. The second one is the ANN database from the University of Washington, with 500 images.

Features. We use a histogram of 25 colors and 25 textures for each image, computed from a vector quantization.

Concepts. For the ANN database, we used the 11 already existing concepts. For the Corel database, in order to perform an interesting evaluation, we built from the database 50 concepts of various complexity. Each concept is built from two or three of the COREL folders. The concept sizes range from 50 to 300. The set of all concepts covers the whole database, and many of them share common images.

Quality assessment. The CBIR system performance measurement is based on precision and recall. Precision and recall are considered for one category, and have as many values as images in the database (from one image retrieved to the maximum number of images the system can return). They are metrics evaluating the ranking of the images returned by the system for one category. The precision curve is always decreasing (or stationary), and the best precision curve is the one which decreases the least, meaning that, whatever the number of images retrieved by the system, a lot of them are relevant. The recall curve is always increasing, and the best recall curve is the one which increases the fastest, meaning that the system has retrieved most of the relevant images and few of them are lost.

Precision and recall are interesting for a final evaluation of one category; however, for larger evaluation purposes, we consider the precision/recall curve. This curve is the set of all the couples (precision, recall) for each number of images returned by the system. The curve always starts from the top left (1, 0) and ends in the bottom right (0, 1). Between these two points, the curve decreases regularly. A good precision/recall curve is one which decreases slowly, meaning that, at the same time, the system returns a lot of relevant images and loses few of them. This property is interesting since the size of the category does not play an important role, which allows the comparison of precision/recall curves across different categories.

The precision/recall curve can also be summarized by a single real value called average precision, which corresponds to the area under an ideal (noninterpolated) recall/precision curve. To evaluate a system over all the categories, the average precisions for each category are combined (averaged) across all categories to create the noninterpolated mean average precision (MAP) for that set. Let us note that this criterion is the one used by the TRECVID evaluation campaign.
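The noninterpolated average precision and MAP described above can be computed directly from a ranked list. The sketch below is a minimal illustration under our own naming and data layout (index lists and sets of relevant indexes), not code from the RETIN system.

```python
def average_precision(ranking, relevant):
    """Noninterpolated average precision of one ranked list.

    ranking: image indexes sorted from most to least relevant (system output).
    relevant: set of indexes belonging to the searched concept.
    Averages the precision measured at the rank of each relevant image,
    which equals the area under the noninterpolated precision/recall curve.
    """
    hits, precisions = 0, []
    for rank, idx in enumerate(ranking, start=1):
        if idx in relevant:
            hits += 1
            precisions.append(hits / rank)   # precision at this recall point
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(rankings, relevant_sets):
    """MAP: average precision combined (averaged) across all categories."""
    aps = [average_precision(r, s) for r, s in zip(rankings, relevant_sets)]
    return sum(aps) / len(aps)

# Relevant images 3 and 2 are retrieved at ranks 1 and 3:
# AP = (1/1 + 2/3) / 2
print(average_precision([3, 1, 2, 4], {2, 3}))  # -> 0.8333333333333333
```

Note that a perfect ranking (all relevant images first) yields AP = 1 whatever the category size, which is why MAP allows comparison across categories.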
The precision and recall values are measured by simulating retrieval scenarios. For each simulation, an image category is randomly chosen. Next, 100 images are selected using active learning and labeled according to the chosen category. These labeled images are used to train a classifier, which returns a ranking of the database. The average precision is then computed using this ranking. These simulations are repeated 1,000 times, and all the values are averaged to get the mean average precision. Next, we repeat these simulations ten times to get the mean and the standard deviation of the MAP.

RETIN interface. The RETIN user interface (cf. Fig. 6) is composed of three sub-parts. The main one, at the top left, displays the current ranking of the database. The second one, at the bottom, displays the current selection of the active learner. The user gives new labels by clicking the left or right mouse button. Once new labels are given, the retrieval is updated, and a new ranking is displayed in the main part. The last part, at the top right, displays information about the current image.
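The simulation protocol above (randomly chosen category, one actively labeled retrieval session, averaging over many simulations, then repeating to get mean and deviation) can be sketched as follows. `run_session` is a hypothetical callback standing in for a full retrieval session that returns the average precision of its final ranking; the rest is generic scaffolding, not the authors' code.

```python
import random
import statistics

def evaluate_map(categories, run_session, n_sims=1000, n_repeats=10):
    """Simulated MAP evaluation, following the protocol described above.

    categories: the concepts of the database.
    run_session(category): hypothetical callback playing one retrieval
    session (e.g., 100 actively selected labels) and returning the
    average precision of the resulting ranking.
    Returns the mean and standard deviation of the MAP over n_repeats runs.
    """
    maps = []
    for _ in range(n_repeats):
        # one MAP estimate = average precision averaged over n_sims sessions
        aps = [run_session(random.choice(categories)) for _ in range(n_sims)]
        maps.append(sum(aps) / len(aps))
    return statistics.mean(maps), statistics.pstdev(maps)
```

Reporting a standard deviation alongside the MAP, as the protocol does, distinguishes genuine differences between active learners from variance caused by the random choice of categories.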
Fig. 7. Results: 75 most relevant pictures for the concept "mountain" after three or five feedback iterations of our active learning-based strategy. (a) Top rank after three iterations of five labels. (b) Top rank after five iterations of five labels.

We show in Fig. 7 the 75 most relevant pictures after three and five iterations of five labels for the concept "mountain." One can see that the system is able to retrieve many images of the concept, while discriminating irrelevant pictures with close visual characteristics, such as pictures of dolphins. This concept is very complex, overlaps with several other concepts, and will necessitate more iterations to be completely learned. An online demonstration is also available on our website (http://retin.ensea.fr).

REFERENCES

R. Veltkamp, "Content-based image retrieval system: A survey," Tech. Rep., Univ. Utrecht, Utrecht, The Netherlands, 2002.
Y. Rui, T. Huang, S. Mehrotra, and M. Ortega, "A relevance feedback architecture for content-based multimedia information retrieval systems," in Proc. IEEE Workshop Content-Based Access of Image and Video Libraries, 1997, pp. 82–89.
J. Peng, B. Bhanu, and S. Qing, "Probabilistic feature relevance learning for content-based image retrieval," Comput. Vis. Image Understand., vol. 75, no. 1–2, pp. 150–164, Jul.–Aug. 1999.
N. Doulamis and A. Doulamis, "A recursive optimal relevance feedback scheme for CBIR," presented at the Int. Conf. Image Processing, Thessaloniki, Greece, Oct. 2001.
J. Fournier, M. Cord, and S. Philipp-Foliguet, "Back-propagation algorithm for relevance feedback in image retrieval," in Proc. Int. Conf. Image Processing, Thessaloniki, Greece, Oct. 2001, vol. 1, pp. 686–689.
O. Chapelle, P. Haffner, and V. Vapnik, "SVMs for histogram based image classification," IEEE Trans. Neural Netw., vol. 10, pp. 1055–1064, 1999.
N. Vasconcelos, "Bayesian models for visual information retrieval," Ph.D. dissertation, Mass. Inst. Technol., Cambridge, 2000.
S.-A. Berrani, L. Amsaleg, and P. Gros, "Recherche approximative de plus proches voisins: Application à la reconnaissance d'images par descripteurs locaux," Tech. Sci. Inf., vol. 22, no. 9, pp. 1201–1230, 2003.
E. Chang, B. T. Li, G. Wu, and K. Goh, "Statistical learning for effective visual information retrieval," in Proc. IEEE Int. Conf. Image Processing, Barcelona, Spain, Sep. 2003, pp. 609–612.
N. Najjar, J. Cocquerez, and C. Ambroise, "Feature selection for semi-supervised learning applied to image retrieval," in Proc. IEEE Int. Conf. Image Processing, Barcelona, Spain, Sep. 2003, vol. 2, pp. 559–562.
S. Aksoy, R. Haralick, F. Cheikh, and M. Gabbouj, "A weighted distance approach to relevance feedback," in Proc. IAPR Int. Conf. Pattern Recognition, Barcelona, Spain, Sep. 3–8, 2000, vol. IV, pp. 812–815.
X. Zhu, Z. Ghahramani, and J. Lafferty, "Semi-supervised learning using Gaussian fields and harmonic functions," presented at the Int. Conf. Machine Learning, 2003.
V. Vapnik, Statistical Learning Theory. New York: Wiley-Interscience, 1998.
B. Schölkopf and A. Smola, Learning With Kernels. Cambridge, MA: MIT Press, 2002.
J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis. Cambridge, U.K.: Cambridge Univ. Press, 2004.
S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller, "Fisher discriminant analysis with kernels," in Neural Networks for Signal Processing IX, Y. Hu, J. Larsen, E. Wilson, and S. Douglas, Eds. Piscataway, NJ: IEEE, 1999, pp. 41–48.
T. Joachims, "Transductive inference for text classification using support vector machines," in Proc. 16th Int. Conf. Machine Learning, San Francisco, CA, 1999, pp. 200–209.
A. Dong and B. Bhanu, "Active concept learning in image databases," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 35, pp. 450–466, 2005.
D. Geman and R. Moquet, "A stochastic feedback model for image retrieval," in Proc. RFIA, Paris, France, Feb. 2000, vol. III, pp. 173–180.
S. Tong and E. Chang, "Support vector machine active learning for image retrieval," ACM Multimedia, pp. 107–118, 2001.
S. Tong and D. Koller, "Support vector machine active learning with application to text classification," J. Mach. Learn. Res., vol. 2, pp. 45–66, Nov. 2001.
N. Vasconcelos and M. Kunt, "Content-based retrieval from image databases: Current solutions and future directions," in Proc. Int. Conf. Image Processing, Thessaloniki, Greece, Oct. 2001, vol. 3, pp. 6–9.
Y. Chen, X. Zhou, and T. Huang, "One-class SVM for learning in image retrieval," in Proc. Int. Conf. Image Processing, Thessaloniki, Greece, Oct. 2001, vol. 1, pp. 34–37.
P. Gosselin and M. Cord, "A comparison of active classification methods for content-based image retrieval," presented at the Int. Workshop Computer Vision Meets Databases (CVDB), ACM SIGMOD, Paris, France, Jun. 2004.
P. Gosselin, M. Najjar, M. Cord, and C. Ambroise, "Discriminative classification vs. modeling methods in CBIR," presented at the IEEE Advanced Concepts for Intelligent Vision Systems (ACIVS), Sep. 2004.
M. Cord, P. Gosselin, and S. Philipp-Foliguet, "Stochastic exploration and active learning for image retrieval," Image Vis. Comput., vol. 25, pp. 14–23, 2006.
M. Cord, "RETIN AL: An active learning strategy for image category retrieval," in Proc. IEEE Int. Conf. Image Processing, Singapore, Oct. 2004, vol. 4, pp. 2219–2222.
S.-A. Berrani, L. Amsaleg, and P. Gros, "Approximate searches: k-neighbors + precision," in Proc. Int. Conf. Information and Knowledge Management, 2003, pp. 24–31.
K. Brinker, "Incorporating diversity in active learning with support vector machines," in Proc. Int. Conf. Machine Learning, Feb. 2003, pp. 59–66.
N. Roy and A. McCallum, "Toward optimal active learning through sampling estimation of error reduction," presented at the Int. Conf. Machine Learning, 2001.
D. Cohn, "Active learning with statistical models," J. Artif. Intell. Res., vol. 4, pp. 129–145, 1996.
D. D. Lewis and J. Catlett, "Heterogeneous uncertainty sampling for supervised learning," in Proc. Int. Conf. Machine Learning, Jul. 1994, pp. 148–156.
J. M. Park, "Online learning by active sampling using orthogonal decision support vectors," in Proc. IEEE Workshop Neural Networks for Signal Processing, Dec. 2000, vol. 77, pp. 263–283.
M. Lindenbaum, S. Markovitch, and D. Rusakov, "Selective sampling for nearest neighbor classifiers," Mach. Learn., vol. 54, no. 2, pp. 125–152, Feb. 2004.
I. Cox, M. Miller, T. Minka, T. Papathomas, and P. Yianilos, "The Bayesian image retrieval system, PicHunter: Theory, implementation and psychophysical experiments," IEEE Trans. Image Process., vol. 9, no. 1, pp. 20–37, Jan. 2000.
P. Yin, B. Bhanu, K. Chang, and A. Dong, "Integrating relevance feedback techniques for image retrieval using reinforcement learning," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 10, pp. 1536–1551, Oct. 2005.
TREC Video Retrieval Evaluation Campaign. [Online]. Available: http://www-nlpir.nist.gov/projects/trecvid/

Philippe Henri Gosselin received the Ph.D. degree in image and signal processing from Cergy University, France, in 2005. After two years of postdoctoral positions at the LIP6 Lab, Paris, France, and at the ETIS Lab, Cergy, he joined the MIDI Team in the ETIS Lab as an Assistant Professor. His research focuses on machine learning for online multimedia retrieval. He developed several statistical tools to deal with the special characteristics of content-based multimedia retrieval. This includes studies on kernel functions on histograms, bags, and graphs of features, but also weakly supervised semantic learning methods. He is involved in several international research projects, with applications to image, video, and 3-D object databases. Dr. Gosselin is a member of the European network of excellence Multimedia Understanding through Semantics, Computation and Learning (MUSCLE).

Matthieu Cord received the Ph.D. degree in image processing from Cergy University, France, in 1998. After a one-year postdoctoral position at the ESAT Lab, KUL, Belgium, he joined the ETIS Lab, France. He received the HDR degree and obtained a Full Professor position at the LIP6 Lab, UPMC, Paris, in 2006. His research interests include content-based image retrieval (CBIR) and computer vision. In CBIR, he focuses on learning-based approaches to visual information retrieval. He developed several interactive systems to implement, compare, and evaluate online relevance feedback strategies for web-based applications and artwork-dedicated contexts. He is also working on multimedia content analysis. He is involved in several international research projects. He is a member of the European network of excellence Multimedia Understanding through Semantics, Computation, and Learning (MUSCLE).