Accelerated Hypothesis Generation for Multistructure Data via Preference Analysis

Tat-Jun Chin, Member, IEEE, Jin Yu, and David Suter, Senior Member, IEEE

Abstract—Random hypothesis generation is integral to many robust geometric model fitting techniques. Unfortunately, it is also computationally expensive, especially for higher order geometric models and heavily contaminated data. We propose a fundamentally new approach to accelerate hypothesis sampling by guiding it with information derived from residual sorting. We show that residual sorting innately encodes the probability of two points having arisen from the same model, and is obtained without recourse to domain knowledge (e.g., keypoint matching scores) typically used in previous sampling enhancement methods. More crucially, our approach encourages sampling within coherent structures and thus can very rapidly generate all-inlier minimal subsets that maximize the robust criterion. Sampling within coherent structures also affords a natural ability to handle multistructure data, a condition that is usually detrimental to other methods. The result is a sampling scheme that offers substantial speed-ups on common computer vision tasks such as homography and fundamental matrix estimation. We show on many computer vision data, especially those with multiple structures, that ours is the only method capable of retrieving satisfactory results within realistic time budgets.

Index Terms—Geometric model fitting, robust estimation, hypothesis generation, residual sorting, multiple structures.

1 INTRODUCTION

Random hypothesis sampling is central to many state-of-the-art robust estimation techniques. The procedure is often embedded in the "hypothesize-and-verify" framework commonly found in methods such as Random Sample Consensus (RANSAC) [8] and Least Median Squares (LMedS) [20].
The goal of sampling is to generate many putative hypotheses of a given geometric model (e.g., the fundamental matrix), where each hypothesis is fitted on a randomly chosen minimal subset of the input data. The hypotheses are then scored in the verification stage according to a robust criterion (e.g., number of inliers, median of squared residuals).

The success of hypothesize-and-verify relies on "hitting" at least one minimal subset containing only inliers of a particular genuine instance of the geometric model. Let $X = \{x_i\}_{i=1}^{N}$ be a set of N input data. A minimal subset is a subset S of X of size p, where p is the order of the geometric model, i.e., the number of parameters in the model. A minimal subset S which contains only inliers of a particular instance of the model allows the determination of the model parameters without being affected by outliers.

Under random hypothesis sampling [8], each member of S is chosen randomly without replacement. Assuming $N \gg p$, the probability that S contains all inliers is approximately $(1 - \varepsilon)^p$, where $\varepsilon \in [0, 1]$ is the outlier contamination rate, i.e., the proportion of outliers in X. Therefore, on average, $1/(1 - \varepsilon)^p$ samples are required before hitting an all-inlier minimal subset. Note that the probability $(1 - \varepsilon)^p$ decreases exponentially with p. In other words, for large p, consecutively drawing p inliers via pure random selection is nontrivial.

This analysis does not augur well for computer vision applications. First, computer vision data are usually heavily contaminated with outliers due to imperfections in acquisition and preprocessing. Moreover, in practical settings the data usually contain multiple instances of the geometric model, or structures [24]. In such cases, the effective outlier rate faced by a structure is contributed to by both the gross outliers and the pseudo-outliers (i.e., inliers of other valid structures). Second, many useful geometric models are of significant order p.
This is certainly true in multiview geometry, where robust estimators have seen widespread usage (e.g., p = 4 for homographies, p = 7 or 8 for fundamental matrices).

Fig. 1 shows an example of multistructure data pertaining to the two-view motion segmentation problem. Point matches are established on the image pair using SIFT keypoint matching [14], and the matches on a particular motion give rise to a fundamental matrix relation. By careful manual filtering, the matches are determined to consist of 113 outliers (40.50 percent), while the inliers are separated into three groups whose sizes are 69 (24.73 percent), 68 (24.37 percent), and 29 (10.39 percent), respectively. Referring to the first structure and using the 8-point algorithm for fundamental matrix estimation [10], one has to sample, on average, $1/0.2473^8 \approx 70{,}000$ minimal subsets to hit a single promising hypothesis from that structure!

Due to the widespread usage of robust estimators in computer vision, many innovations [5], [12], [25], [3], [17], [21] have been proposed to speed up random hypothesis generation.

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 34, NO. 4, APRIL 2012. The authors are with the School of Computer Science, The University of Adelaide, Australian Center for Visual Technologies (ACVT), North Terrace 5005, SA, Australia. E-mail: {tjchin, jin.yu, dsuter}. Manuscript received 23 Nov. 2010; revised 31 Mar. 2011; accepted 13 July 2011; published online 4 Aug. 2011. Recommended for acceptance by C. Stewart. IEEECS Log Number TPAMI-2010-11-0893. Digital Object Identifier no. 10.1109/TPAMI.2011.169.

These methods aim to guide the sampling
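The sample-count arithmetic above can be checked directly. The sketch below (the function name is ours, purely illustrative) evaluates $1/(1-\varepsilon)^p$ for the Board Game figures:

```python
# Expected number of random minimal subsets before hitting an all-inlier
# sample is 1 / (1 - eps)^p, where (1 - eps) is the inlier rate and p the
# model order.

def expected_samples(inlier_rate: float, p: int) -> float:
    """Mean number of size-p random subsets until one is all-inlier."""
    return 1.0 / inlier_rate ** p

# Structure 1 of the Board Game pair: 24.73% inliers, 8-point algorithm.
n_fundamental = expected_samples(0.2473, 8)   # on the order of 70,000 draws
# A homography (p = 4) at the same inlier rate needs far fewer draws,
# illustrating the exponential dependence on p.
n_homography = expected_samples(0.2473, 4)
assert n_homography < n_fundamental
```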
process such that the probability of hitting an all-inlier subset is improved. The trick is to endow each input datum with a prior probability of being an inlier and to sample such that data with high probabilities are more likely to be simultaneously selected. Such prior probabilities are often derived from application-specific knowledge. For example, Guided-MLESAC [25] and PROSAC [3] concentrate the sampling effort on correspondences with higher keypoint matching scores, the rationale being that inlier correspondences originate from more confident keypoint matches (recall that in geometry estimation a datum consists of a pair of matching points in two views). Proximity sampling assumes that inliers form dense clusters [12] or lie in meaningful image segments [17]. SCRAMSAC [21] imposes a spatial consistency filter so that only correspondences which respect local geometry get sampled.

However, a crucial deficiency of previous methods lies in treating the inlier probability of a datum as independent of the other data. This is untrue when there are multiple structures. Given that a datum $x_i$ is chosen (and thus regarded as an inlier), the probability that $x_j$ is also an inlier (and thus should be chosen into the same minimal subset) depends on whether $x_j$ arose from the same structure as $x_i$. In other words, it is very possible that two correspondences with high keypoint matching scores are inliers from different valid structures. Methods that ignore this point are bound to wastefully generate many invalid cross-structure hypotheses. Our results on real and synthetic data (see Fig. 2) show that this indeed occurs in many previous methods.

Fig. 1. The "Board Game" image pair with three genuine instances of fundamental matrices. Correspondences are established using SIFT [14] matching. False matches are marked with green crosses, while the other markers delineate inlier membership to distinct structures.

Fig. 2. Given the input data in (a), where there are four lines with 100 points per line and 100 gross outliers, we sample 100 line hypotheses (each estimated from a minimal subset of size 2) using the proposed method, uniform sampling (à la the original RANSAC [8]), proximity sampling [12], Guided-MLESAC [25], and PROSAC [3], yielding the parameter space densities plotted, respectively, in (b), (d), (f), (h), and (j). Notice that the hypotheses of our method are concentrated on the correct models, yielding four distinct peaks. Results from the other methods, however, contain many false peaks and irrelevant hypotheses. As Section 5 shows, on multistructure data our method can successfully "hit" all true models using considerably less time than the other methods.

Assuming independence in minimal subset selection also precludes the use of conditional sampling strategies in hypothesis generation. More specifically, given that $\tilde{p}$ data
with $\tilde{p} < p$ have been chosen into a minimal subset, we should exploit the information available in these $\tilde{p}$ data (and not merely the prior inlier probabilities) to guide the selection of the next datum. The key is to derive and update conditional inlier probabilities (or proxies thereof) that allow us to "home in on" the more promising data. Such a strategy can greatly improve the chances of retrieving all-inlier minimal subsets from each genuine structure, and is essential for higher order models, where one cannot depend on pure random selection or rough prior inlier probabilities. Moreover, conditional sampling also encourages data selection within coherent structures. However, most previous methods do not sample conditionally; thus they are largely dependent on brute speed to retrieve all-inlier minimal subsets.

In addition, we argue that the domain knowledge used in previous guided sampling techniques may not translate into convincing prior inlier probabilities. For example, false correspondences can have high matching scores: on scenes with repetitive textures, many salient patches look similar, but these can result in matches that are false according to the epipolar geometry [21], [23]. The idea that inliers form dense clusters also becomes ineffective under high outlier rates, as outliers start to encroach on the neighborhoods of inliers. In the general case, it is questionable whether usable and reliable domain knowledge is always available.

Our contributions. We propose a fundamentally novel technique to accelerate hypothesis generation for robust model fitting. Our guided sampling scheme is driven only by residual sorting information and does not require domain- or application-specific knowledge. The scheme is encoded in a series of inlier probability estimates which are updated on the fly. Most importantly, our inlier probabilities are conditioned on the selected data, and this enables accurate hypothesis sampling.
Under multistructure data, this also encourages sampling within coherent structures. Our technique dramatically reduces the number of samples required, and hence is vastly superior to other sampling methods. In fact, we show many cases where other methods simply break down while our technique still produces satisfactory results. This work is an extension of our prior work [2].

The rest of the paper is organized as follows: Section 2 surveys related work to put this paper in context. Section 3 describes the basic principles leading to our novel hypothesis generation scheme in Section 4. Section 5 provides extensive experimental results, and Section 6 draws conclusions and discusses future work.

2 PREVIOUS WORK

Many previous enhancements to the hypothesize-and-verify framework occur in the context of RANSAC. A recent survey [19] categorizes them into three groups.

Group 1: Sampling enhancement. The first group of methods aims to improve the random hypothesis sampling routine such that the chance of hitting an all-inlier sample is increased. In [5], the LO-RANSAC method introduces an inner RANSAC loop into the main RANSAC body such that hypotheses may be generated from the set of inliers found so far, thus improving the consensus score more rapidly. A crucial ingredient for the method, however, is the inlier threshold for distinguishing inliers from outliers. In cases with complex geometric models, high-dimensional data, and intricate residual functions, considerable tuning effort is required to obtain an appropriate inlier threshold.

Guided-MLESAC [25] and PROSAC [3] focus the sampling on more confident keypoint matches. These methods, however, are essentially guided only by the prior inlier probabilities and do not conduct conditional sampling to further improve efficiency. The reliance on prior inlier probabilities also renders them prone to sampling invalid cross-structure hypotheses given multistructure data.
Moreover, scenes with repetitive textures may give rise to correspondences that are well matched (i.e., high scores) in terms of local appearance, but incorrect according to the epipolar geometry [21], [23].

In [12], sampling is concentrated on neighboring correspondences, and in a similar spirit GroupSAC [17] focuses sampling on groups of data obtained using image segmentation. SCRAMSAC [21] conducts a spatial filtering step such that only matches with similar local geometry are considered. These methods thus assume that inliers from the same structure cluster together in the image (spatial) domain. We show later that this assumption is violated, and the methods are easily confused, in richly textured scenes where outliers coexist densely with inliers in local neighborhoods. Moreover, the notion of proximity usually requires a scale or bandwidth parameter (e.g., for a Gaussian neighborhood) that can be difficult to tune for some input spaces.

We emphasize that our work belongs to this group. However, in contrast to the other methods in this group, we construct and update on the fly conditional inlier probabilities for minimal subset selection. This supports highly efficient hypothesis sampling, especially for higher order models and multistructure data. Further, our method is domain independent and does not require potentially confusing prior inlier probabilities or calculations of data proximity measures.

Group 2: Efficient verification. This group of methods speeds up hypothesis verification by minimizing the time expended on evaluating unpromising hypotheses. The $T_{d,d}$ test [15] evaluates a hypothesis on a small random subset of the input data. This may mistakenly reject good hypotheses; thus a much larger number of samples is required. However, the overall time can potentially be reduced since each verification now consumes less time.
The Bail-Out test [1] and WaldSAC [4], [16], respectively, apply catch-and-release statistics and Wald's theory of sequential decision making to allow early termination of the verification of an unfavorable hypothesis.

Group 3: Optimizing the order of verification. The third group aims for real-time RANSAC. The task is to find the best model from the fixed number of hypotheses afforded by the time interval. Given a set of hypotheses, Preemptive RANSAC [18] scores them in a breadth-first manner such that unpromising hypotheses can be quickly filtered out in subsequent passes. ARRSAC [19] performs a partially breadth-first verification such that the number of
hypotheses may be modified according to the inlier ratio estimate while still bounding the runtime.

We are also aware of recent work [7], [13] that sidesteps the hypothesize-and-verify framework and solves robust estimation directly as a global optimization problem. While providing globally optimal solutions, these methods require considerably more time than RANSAC. Here, our aim is to efficiently fit a geometric model to noisy data with minimal loss of accuracy, and therefore our goals differ from those of [7], [13]. We also note that these methods [7], [13] currently cannot handle multistructure data, which are prevalent in practical applications.

3 INLIER PROBABILITIES FROM SORTING

We first describe how inlier probabilities can be estimated from residual sorting information. Let $X := \{x_i\}_{i=1}^{N}$ be a set of N input data. Under the hypothesize-and-verify framework, a series of tentative models (or hypotheses) $\{\theta_1, \ldots, \theta_M\}$ is fitted on randomly selected minimal subsets of the input data, where M is the number of hypotheses generated. For each datum $x_i$, we compute its absolute residuals as measured to the M hypotheses to form the residual vector

$$r^{(i)} := \left[\, r^{(i)}_1 \;\; r^{(i)}_2 \;\; \cdots \;\; r^{(i)}_M \,\right]. \quad (1)$$

Note that the hypotheses do not lie in any particular order except the order in which they were generated. We then find the permutation

$$a^{(i)} := \left[\, a^{(i)}_1 \;\; a^{(i)}_2 \;\; \cdots \;\; a^{(i)}_M \,\right], \quad (2)$$

such that the elements in $r^{(i)}$ are sorted in nondescending order, i.e.,

$$u \le v \implies r^{(i)}_{a^{(i)}_u} \le r^{(i)}_{a^{(i)}_v}. \quad (3)$$

The sorting $a^{(i)}$ essentially ranks the M hypotheses according to the preference of $x_i$; the higher a hypothesis is ranked, the more likely it is that $x_i$ is an inlier to it. Intuitively, two data $x_i$ and $x_j$ will share many common hypotheses at the top of their preference lists $a^{(i)}$ and $a^{(j)}$ if they are inliers from the same structure. This is independent of whether $x_i$ and $x_j$ are, for example, correspondences with high keypoint matching scores.
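The derivation in (1)-(3) amounts to an argsort of each datum's residual vector. A minimal sketch with synthetic residuals (array names are ours, not from the paper):

```python
import numpy as np

# Synthetic stand-in for the N x M matrix of absolute residuals: row i
# holds datum i's residuals r^(i) to the M hypotheses.
rng = np.random.default_rng(0)
residuals = rng.random((5, 10))

# prefs[i] is the permutation a^(i) of eq. (2): hypothesis indices ranked
# by nondescending residual, so prefs[i, 0] is datum i's top preference.
prefs = np.argsort(residuals, axis=1)

# Verify the sorting condition (3): residuals are nondescending along a^(i).
sorted_r = np.take_along_axis(residuals, prefs, axis=1)
assert np.all(np.diff(sorted_r, axis=1) >= 0)
```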
To elicit this phenomenon, first let $a^{(i)}_{1:h}$ be the vector holding the first h elements of $a^{(i)}$. We define the following "intersection" function between $x_i$ and $x_j$:

$$f(x_i, x_j) := \frac{1}{h} \left| a^{(i)}_{1:h} \cap a^{(j)}_{1:h} \right|, \quad (4)$$

where $|a^{(i)}_{1:h} \cap a^{(j)}_{1:h}|$ is the number of common elements shared by $a^{(i)}_{1:h}$ and $a^{(j)}_{1:h}$. The window size h, with $1 \le h \le M$, specifies the number of leading hypotheses to take into account. Note that $f(x_i, x_j)$ ranges between 0 and 1 and is symmetric. Also, $f(x_i, x_i) = 1$ for all i.

We demonstrate this effect using the data in Fig. 1. Global coordinate normalization is first performed on the correspondences following [10]. We then generate M = 1,000 fundamental matrix hypotheses, each fitted using the Direct Linear Transformation (DLT) [11] on a minimal subset of size p = 8 chosen via pure random selection. The residual $r^{(i)}_m$ is taken as the Sampson distance of datum $x_i$ to the mth hypothesis. We then obtain the responses of $f(x_i, x_j)$ for all unique pairs of the input data while h is varied from 1 to M. Fig. 3a plots the average of the responses, separated into two sets: set "SS" contains pairs of inliers from the same structure, while set "DS" contains the rest. The result clearly shows that inliers from the same structure have higher intersection values relative to other possible pairs of inputs. Also, the gap between SS and DS rapidly grows as h is increased (under pure random hypothesis sampling, the expected response from DS can be shown to be linear in h). It quickly attains its maximum value before decreasing slowly to 0 at h = M. This indicates that h controls the discriminative power of $f(x_i, x_j)$.

We then obtain the N × N matrix K, where the element at the ith row and jth column is $f(x_i, x_j)$. Fig. 3b displays K for the data in Fig. 1 when h = 100 (or h = 0.1M).
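A minimal sketch of the intersection function (4), using Python sets (the function name is ours):

```python
# f(x_i, x_j) = |a^(i)_{1:h} ∩ a^(j)_{1:h}| / h: the fraction of hypotheses
# shared by the top-h preference windows of two data.

def intersection(pref_i, pref_j, h):
    return len(set(pref_i[:h]) & set(pref_j[:h])) / h

# Identical preference lists intersect fully; disjoint top-h windows give 0.
assert intersection([3, 1, 2, 0], [3, 1, 2, 0], 2) == 1.0
assert intersection([3, 1, 2, 0], [0, 2, 1, 3], 2) == 0.0
# The value is symmetric, as noted in the text.
assert intersection([3, 1, 2, 0], [1, 0, 3, 2], 3) == intersection([1, 0, 3, 2], [3, 1, 2, 0], 3)
```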
The points are arranged according to their structure membership, i.e., $x_1$ to $x_{69}$ are inliers from Structure 1, $x_{70}$ to $x_{137}$ are inliers from Structure 2, and so on. The gross outliers are $x_{167}$ to $x_{279}$. This makes visible a block diagonal pattern, which confirms that strong mutual support occurs among inliers of the same structure.

Fig. 3. (a) Average intersection values for the Board Game image pair (Fig. 1) for varying h. SS: same structure; DS: otherwise. (b) The corresponding matrix K of size 279 × 279, with the parameter h in $f(x_i, x_j)$ set to 100 (or h = 0.1M). (c) Average values of rows of matrix K grouped according to structure membership. Red vertical lines delineate structure/group boundaries.

We emphasize that such a data arrangement is purely to aid
presentation and is unnecessary for $f(x_i, x_j)$ or for our subsequent steps to work.

We further analyze the results by averaging the rows of K according to the above data arrangement. Fig. 3c shows the result. Unsurprisingly, on average, the row values for an inlier concentrate mostly on other inliers from the same structure, while for a gross outlier the row values are generally low and appear to be randomly distributed.

Our idea is to use these intersection values to drive conditional sampling of minimal subsets. Given that a datum is selected, the intersection values of that datum with the rest of the data yield sampling weights (i.e., a useful proxy for conditional inlier probabilities) with which to seek the second datum. The intersection values of the first and second data then provide guidance to pick the third datum, and so on (Section 4 explains how weights are combined). This results in a sampling scheme that encourages minimal subset selection within consistent structures. Our derivations are general and equally applicable to a wide range of computer vision problems.

Interestingly, this phenomenon of coherence in preference among inliers from the same structure is realized without obtaining a single all-inlier minimal subset from any of the true model instances in the data. Indeed, for the Board Game image pair, it is statistically unlikely that even one all-inlier minimal subset exists among the 1,000 randomly chosen samples used in Fig. 3 (recall that we would need, on average, about 70,000 samples before retrieving one all-inlier minimal subset for Structure 1).

4 ACCELERATED HYPOTHESIS GENERATION

In this section, we describe how residual sorting can be exploited to drive a very efficient hypothesis generation scheme for robust model fitting.

4.1 Calculating Conditional Inlier Probabilities

Assume M model hypotheses $\{\theta_1, \ldots, \theta_M\}$ have been generated so far and we wish to sample the next hypothesis in a guided fashion.
Let the model to be fitted be determined by a minimal subset

$$S = \{x_{s_1}, \ldots, x_{s_p}\} \subseteq X \quad (5)$$

of size p, where

$$\{s_1, \ldots, s_p\} \subset \{1, \ldots, N\} \quad (6)$$

is a set of indices identifying the members of X present in S. Sampling a hypothesis is thus equivalent to determining the values of $\{s_j\}_{j=1}^{p}$. Henceforth, where no confusion arises, $x_{s_j}$ and $s_j$ are used interchangeably.

The first datum $s_1$ is selected purely randomly, i.e., $s_1$ is sampled from the discrete uniform distribution

$$s_1 \sim U(1, N). \quad (7)$$

To select the second datum $s_2$, we construct the conditional inlier probability distribution

$$P_1(i) := \begin{cases} z_1 \, f(x_i, x_{s_1}), & \text{if } i \ne s_1, \\ 0, & \text{otherwise}, \end{cases} \quad (8)$$

where f is the intersection function (4) based on the M hypotheses sampled thus far. The normalization constant $z_1$,

$$z_1 = \frac{1}{\sum_{i \ne s_1} f(x_i, x_{s_1})}, \quad (9)$$

ensures that $P_1(i)$ is a valid discrete probability distribution, while imposing $P_1(i) = 0$ for $i = s_1$ guarantees sampling without replacement. The second datum $s_2$ is then selected according to $P_1$, i.e.,

$$s_2 \sim P_1. \quad (10)$$

This can be accomplished by using the values of $P_1$ as a set of sampling weights, i.e., if

$$P_1(u) \ge P_1(v), \quad (11)$$

then $x_u$ is at least as likely to be selected as $x_v$.

We now wish to combine the information provided by $s_1$ and $s_2$ to sample $s_3$, and in general to use the information provided by the k data chosen thus far to sample the (k+1)th datum. To this end, we construct the kth conditional inlier probability distribution as

$$P_k(i) := \begin{cases} z_k \prod_{j=1}^{k} f(x_i, x_{s_j}), & \text{if } i \notin \{s_1, \ldots, s_k\}, \\ 0, & \text{otherwise}, \end{cases} \quad (12)$$

and sample

$$s_{k+1} \sim P_k. \quad (13)$$

Again, analogously to (9), $z_k$ normalizes $P_k$ to maintain it as a valid discrete probability distribution.

Intuitively, (12) is equivalent to the element-wise multiplication of rows $s_1, \ldots, s_k$ of matrix K, with the $s_1$th, ..., $s_k$th elements set to zero. The guidance provided by $P_k$, however, is contingent on the window size h in $f(x_i, x_j)$. As discussed in Section 3, h controls the discriminative power of $f(x_i, x_j)$. Fig. 3a, as well as further experimentation in Section 5.2, leads us to conclude that a large range of h, i.e., $\lceil 0.05M \rceil \le h \le \lceil 0.4M \rceil$, is effective for the task. In all our results in Section 5, h is set to $\lceil 0.1M \rceil$. Note also that if h = M, then $f(x_i, x_j) = 1$ for all i, j; thus the $P_k$'s reduce to the discrete uniform distribution.

4.2 Bootstrap Effect on Hypothesis Sampling

We show how the sampling weights in (12) achieve a "bootstrap" effect in hypothesis sampling. A small initial set of hypotheses, sampled practically randomly, provides rough guidance for selecting the next minimal subsets. The newly fitted hypotheses then improve the accuracy of the conditional inlier probabilities for retrieving other useful minimal subsets. The result is that the desired block diagonal pattern in K emerges at a much lower M.

Fig. 4 depicts an actual run of weighted sampling on the Board Game image pair. By the 25th iteration, sampling weights good enough for differentiating inliers from most of the outliers, as well as a rough block diagonal pattern, have emerged. By only the 50th iteration, sampling weights for dichotomizing the different structures have materialized. The result at this stage is already visibly better than the result of random sampling at 1,000 iterations (cf. Fig. 3). Fig. 5 shows the outcome after 400 iterations of conditional sampling, where the existence of three structures in K is even more
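Equations (7)-(13) can be sketched as follows, assuming the full N × N intersection matrix K (with K[i, j] = $f(x_i, x_j)$) has been computed; function and variable names are ours, and the uniform fallback for degenerate weights is our addition:

```python
import numpy as np

def sample_minimal_subset(K, p, rng):
    """Draw indices {s_1, ..., s_p} per eqs (7)-(13), given K[i,j] = f(x_i, x_j)."""
    N = K.shape[0]
    chosen = [int(rng.integers(N))]             # s_1 ~ U(1, N), eq (7)
    for _ in range(p - 1):
        # P_k(i) is proportional to prod_j f(x_i, x_{s_j}), eq (12);
        # already-chosen indices get zero weight (sampling w/o replacement).
        w = np.prod(K[chosen, :], axis=0)
        w[chosen] = 0.0
        if w.sum() == 0.0:                      # degenerate weights: fall
            w = np.ones(N)                      # back to uniform over the
            w[chosen] = 0.0                     # remaining data
        w /= w.sum()                            # normalization z_k, cf. (9)
        chosen.append(int(rng.choice(N, p=w)))  # s_{k+1} ~ P_k, eq (13)
    return chosen

rng = np.random.default_rng(0)
K = np.full((6, 6), 0.5)                        # toy intersection matrix
np.fill_diagonal(K, 1.0)
subset = sample_minimal_subset(K, 3, rng)
assert len(set(subset)) == 3                    # p distinct data selected
```

In a real pipeline, K would come from the preference intersections of (4), and data from the same structure would dominate the product weights, concentrating the subset within one structure.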
obvious. Moreover, by the 400th iteration at least one all-inlier minimal subset has been retrieved for each structure; this represents a massive improvement in efficiency. Using pure random selection and referring to the smallest structure in the data (10.39 percent inliers), on average $1/0.1039^8 \approx 73{,}000{,}000$ iterations are required before hitting a single all-inlier minimal subset from each structure!

As alluded to before, this bootstrap effect is achieved without having sampled a single all-inlier minimal subset in the initial stages (the first four histograms in the third row of Fig. 4 are thus empty). This departs from the intuition that at least one true hypothesis from each structure is required. Our result is clear empirical evidence that seemingly "worthless" (according to the RANSAC criterion) hypotheses do provide useful information for structure dichotomy, since all that is required is that inliers from the same structure mutually prefer the same hypotheses.

Fig. 4. Bootstrapping hypothesis sampling for the Board Game image pair. Results after 1, 5, 10, 25, and 50 iterations (columns 1-5, respectively) of sampling using the proposed conditional inlier probabilities (12) are shown. Top row: resulting matrix K of intersection values. Middle row: average row values of K separated according to structure membership. Bottom row: count of all-inlier minimal subsets retrieved for each structure. Zero all-inlier minimal subsets are sampled in the initial stages; hence the first four histograms are empty.

4.3 Efficient Implementation

After a new hypothesis is sampled, the conditional inlier probabilities (12) can be updated by first appending the absolute residual as measured to the new hypothesis to the residual vector of each datum (1), i.e.,

$$r^{(i)}_{\text{new}} = \left[\, r^{(i)}_{\text{old}} \;\; r^{(i)}_{M+1} \,\right], \quad \text{where } r^{(i)}_{\text{old}} \in \mathbb{R}^{1 \times M}, \quad (14)$$

and then resorting $r^{(i)}_{\text{new}}$ to obtain the new permutation $a^{(i)}_{\text{new}}$. The window size h is updated accordingly as $h_{\text{new}} = \lceil 0.1(M+1) \rceil$. The efficiency of the update can be improved by maintaining the already sorted $r^{(i)}_{\text{old}}$ in a data structure that facilitates efficient search and insertion of $r^{(i)}_{M+1}$, e.g., binary trees with O(log M) complexity for these operations. More sophisticated implementations, however, are only worthwhile for very large M (on the order of $10^6$). In most of our cases, $M \le 10^4$, and $r^{(i)}_{\text{new}}$ can simply be wholly resorted with quicksort without noticeable increase in computational overhead.

Furthermore, updating the residual sorting as soon as a new hypothesis arrives is unnecessarily conservative, because a single new hypothesis does not add much information about inlier probabilities. Our proposed algorithm thus updates the sorting permutations $a^{(i)}$ and advances h only after a block (of size b) of new hypotheses has been generated. Algorithm 1 provides the pseudocode for our method, which we call "Multi-GS." Section 5.2 explores how h and b affect sampling efficiency.

Algorithm 1. Accelerated Multistructure Hypothesis Generation by Preference Analysis (Multi-GS)
1: input: input data X, total required number of hypotheses T, size of a minimal subset p > 0, and block size b > 0
2: output: a set Θ of T model hypotheses
3: for M := 1, 2, ..., T do
4:   if M ≤ b then
5:     randomly sample p data and store as S
6:   else
7:     select at random s_1 and initialize S := {x_{s_1}}
8:     for k := 1, 2, ..., (p − 1) do
9:       construct P_k (12) using a^{(i)}, h, and the data in S
10:      sample s_{k+1} according to P_k
11:      append S := S ∪ {x_{s_{k+1}}}
12:    end for
13:  end if
14:  Θ := Θ ∪ {new hypothesis instantiated from S}
15:  append to all r^{(i)} the abs. residual to the new hypothesis
16:  if M ≥ b and mod(M, b) = 0 then
17:    sort all r^{(i)} to obtain permutations a^{(i)}
18:    h := ⌈0.1M⌉
19:  end if
20: end for
21: return Θ

More careful thought, however, should be put into implementing the intersection required by $f(x_i, x_j)$ (4). A straightforward algorithm with a nested loop scales as O(h²). Using a symbol table [22] reduces this to a more efficient O(h), as described in Algorithm 2.

Algorithm 2. Computing $f(x_i, x_j)$ using symbol tables
1: input: residual sorting indices $a^{(i)} = [a^{(i)}_1 \; a^{(i)}_2 \; \cdots \; a^{(i)}_M]$ and $a^{(j)} = [a^{(j)}_1 \; a^{(j)}_2 \; \cdots \; a^{(j)}_M]$, desired window size h
2: output: response of $f(x_i, x_j)$
3: initialize to zero a symbol table y of size M
4: for s := 1, 2, ..., h do
5:   y(a^{(i)}_s) := 1  (set the a^{(i)}_s-th element of y to 1)
6: end for
7: acc := 0
8: for s := 1, 2, ..., h do
9:   if y(a^{(j)}_s) = 1 then
10:    acc := acc + 1
11:  end if
12: end for
13: return acc / h

On the surface, the somewhat involved computation may seem a weakness. However, our algorithm conducts a more informed sampling per unit of computation time than the other techniques. The result is that we require less effective CPU time to hit all-inlier minimal subsets, especially for models of higher order and heavily contaminated data.

5 EXPERIMENTS

We evaluate the performance of the proposed method (Multi-GS, Algorithm 1) on common computer vision tasks. We compare against the following state-of-the-art sampling methods:
1. LO-RANSAC [5].
2. Proximity sampling [12].
3. Guided-MLESAC [25].
4. PROSAC [3].
We consider pure random sampling as in the original RANSAC [8] as the baseline. We implemented all algorithms in MATLAB.
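Algorithm 1 can be sketched end-to-end as below. This is our illustrative reading of the pseudocode, not the authors' MATLAB implementation; `fit_model` and `residual` are user-supplied stand-ins for the estimator (e.g., DLT) and residual function (e.g., Sampson distance).

```python
import numpy as np

def multigs(X, fit_model, residual, T, p, b=10, seed=0):
    """Sketch of Multi-GS (Algorithm 1): T hypotheses, minimal subsets of size p."""
    rng = np.random.default_rng(seed)
    N = len(X)
    res = np.empty((N, 0))                      # residual vectors r^(i), eq (1)
    prefs, h, hyps = None, 1, []
    for M in range(1, T + 1):
        if M <= b or prefs is None:             # initial block: pure random
            S = list(rng.choice(N, size=p, replace=False))
        else:
            S = [int(rng.integers(N))]          # s_1 ~ U(1, N)
            top = prefs[:, :h]                  # leading-h preferences a^(i)_{1:h}
            for _ in range(p - 1):
                w = np.ones(N)                  # P_k(i) ∝ prod_s f(x_i, x_s), (12)
                for s in S:
                    shared = (top[:, :, None] == top[s][None, None, :]).any(-1)
                    w *= shared.sum(1) / h      # f(x_i, x_s) for every i at once
                w[S] = 0.0
                if w.sum() == 0.0:              # degenerate weights: uniform
                    w = np.ones(N)
                    w[S] = 0.0
                w /= w.sum()
                S.append(int(rng.choice(N, p=w)))
        theta = fit_model([X[i] for i in S])    # line 14: new hypothesis
        hyps.append(theta)
        res = np.hstack([res, residual(X, theta)[:, None]])  # line 15, eq (14)
        if M >= b and M % b == 0:               # lines 16-19: block-wise resort
            prefs = np.argsort(res, axis=1)
            h = max(1, int(np.ceil(0.1 * M)))
    return hyps
```

For 1-D location fitting, for instance, `fit_model` could return the mean of the subset and `residual` the absolute deviation; any minimal-subset estimator exposing a per-datum residual fits the same interface.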
All experiments were run on a Linux machine with a 3.0 GHz Intel Core 2 Duo processor and 4 GB of main memory.

Parameter settings for all methods are detailed in Table 1. Note that the inlier threshold for LO-RANSAC is obtained using ground truth knowledge of the inlier-outlier identity of each datum. This is an unfair advantage, since the method is allowed to access the ground truth. We also stress that, unless stated otherwise, for Multi-GS we consistently set h = ⌈0.1M⌉ and b = 10.

TABLE 1: Parameter Settings for All Methods

Fig. 5. Results after 400 iterations of sampling from the conditional inlier probabilities (12) for the Board Game image pair. (a) Average intersection values. (b) K matrix. (c) Average of rows of K according to structure membership.

5.1 Performance under Different Outlier Rates

We first examine the performance of Multi-GS under different outlier rates. We focus on the fundamental matrix estimation problem for single- and multistructure data.

Single-structure data. We use the Barr-Smith Library image pair shown in Fig. 8c. The images are resized to 240 × 320 pixels, and SIFT keypoints are detected and matched
across the two views. True and false matches are manually identified. This yields a data set with 70 inliers (20.71 percent) and 268 outliers (79.29 percent). Global coordinate normalization [10] is performed on the data for numerical stability. The 8-point method [11] is applied to fit fundamental matrices on minimal subsets of size 8. The residual is taken as the symmetric transfer error.

Each run of this experiment involves a data instance which contains all 70 inliers and L outliers, where L = 26, 52, ..., 260. For each L, 50 data instances are produced, where for each instance the L outliers are randomly selected from the full set of 268 outliers. On each data instance, we run all compared methods for a maximum of 10 seconds. We then record
1. the total number of minimal subsets sampled,
2. among the sampled minimal subsets, the number of all-inlier minimal subsets produced,
3. the number of steps (i.e., number of minimal subsets sampled) before hitting the first all-inlier minimal subset (i.e., the solution), and
4. the elapsed time when the first all-inlier minimal subset is obtained.
For each value of L, the median results over the 50 repetitions are obtained and plotted in Fig. 6 (top row).

Fig. 6a shows that within the 10-second limit, Multi-GS samples fewer than one third of the hypotheses generated by the other methods. This stems from the higher computational demands of updating the conditional weights. However, despite the smaller number of sampling steps per unit time, Multi-GS produces many more all-inlier minimal subsets than the other methods, especially for outlier rates of 20 percent and above (see Fig. 6b).
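The residual above is the symmetric transfer error; for a fundamental matrix this is commonly realized as the symmetric point-to-epipolar-line distance. A minimal numpy sketch of that interpretation (the function name and array layout are ours, not the paper's):

```python
import numpy as np

def symmetric_epipolar_distance(F, x1, x2):
    """Symmetric point-to-epipolar-line distance for a fundamental matrix.

    F  : 3x3 fundamental matrix
    x1 : Nx2 points in image 1
    x2 : Nx2 corresponding points in image 2
    Returns one residual per correspondence.
    """
    n = x1.shape[0]
    p1 = np.hstack([x1, np.ones((n, 1))])   # homogeneous, N x 3
    p2 = np.hstack([x2, np.ones((n, 1))])
    l2 = p1 @ F.T                           # epipolar lines in image 2: F p1
    l1 = p2 @ F                             # epipolar lines in image 1: F^T p2
    # distance from point (u, v) to line (a, b, c): |au + bv + c| / sqrt(a^2 + b^2)
    d2 = np.abs(np.sum(l2 * p2, axis=1)) / np.linalg.norm(l2[:, :2], axis=1)
    d1 = np.abs(np.sum(l1 * p1, axis=1)) / np.linalg.norm(l1[:, :2], axis=1)
    return d1 + d2
```

For a correspondence that satisfies the epipolar constraint exactly, both distances vanish and the residual is zero; off-constraint matches accumulate the distances in both images.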
In fact, at the highest outlier rate (≈80 percent), only Multi-GS and PROSAC can still produce all-inlier minimal subsets within 10 seconds (with Multi-GS returning a slightly higher number than PROSAC; see Table 2), while the other methods fail to find a single solution.

In terms of the number of iterations and the time before hitting the first solution, PROSAC, which benefits from accurate keypoint matching scores, has an overwhelming advantage. The first all-inlier minimal subset is almost always retrieved by PROSAC in the very first iteration (see Figs. 6c and 6d). Multi-GS follows relatively close behind. At the highest outlier rate, Multi-GS retrieves the first all-inlier minimal subset in ≈300 iterations or 0.89 s (see Table 2). Only PROSAC and Multi-GS are capable of sub-1 s performance on the Barr-Smith Library image pair. Note that this experiment is conducted using single-structure data. We compare the methods on multistructure data next.

Fig. 6. Performance under different outlier rates (best viewed in color). Top row: Barr-Smith Library (single structure). Bottom row: Board Game (multistructure). (a) and (e): number of minimal subsets sampled in 10 seconds. (b) and (f): number of all-inlier minimal subsets sampled in 10 seconds. (c) and (g): steps to hit the first solution (not plotted if unable to find solutions within 10 seconds). (d) and (h): time to hit the first solution (not plotted if unable to find solutions within 10 seconds).

Multistructure data. We repeat the previous experiment, this time using the Board Game image pair, which consists of three fundamental matrix instances (see Fig. 1). The data contain 113 outliers (40.5 percent) and 166 inliers (59.5 percent), with the inliers separated into three groups
of size 69 (24.73 percent), 68 (24.37 percent), and 29 (10.39 percent). As before, multiple data instances are generated, each containing all the inliers and L randomly selected outliers, with L = 11, 22, ..., 110. For each L value, 50 repetitions are generated. Since the data are multistructure, we regard a solution as having arrived only when at least one all-inlier minimal subset has been retrieved from each structure. Fig. 6 (bottom row) illustrates the median results over the 50 repetitions. Note that the outlier rate in Fig. 6 (bottom row) is simply taken as 100% × L/(L + 166); the actual per-structure outlier rate is much higher.

The trends in Fig. 6e are similar to those in Fig. 6a, i.e., Multi-GS samples far fewer minimal subsets than the other methods. However, despite performing fewer iterations, as shown in Fig. 6f, Multi-GS yields overwhelmingly more all-inlier minimal subsets. Since the inliers are clustered, Proximity Sampling yields a decent number of all-inlier minimal subsets, but this performance deteriorates rapidly as the outlier rate increases. As expected, PROSAC and Guided-MLESAC are unable to retrieve all-inlier minimal subsets due to their tendency to include inliers from different structures in the same minimal subset; keypoint matching scores alone cannot distinguish inliers from different structures. Finally, as can be seen in Figs. 6g and 6h, only Multi-GS can obtain solutions within the 10-second limit. In fact, Multi-GS is able to retrieve all-inlier minimal subsets from each structure within 1 s.

5.2 Effects of Parameters h and b

We examine empirically the effects of varying the parameters h (window size) and b (block size) (see Algorithm 1). We again use the Barr-Smith Library (Fig. 8c) and Board Game (Fig. 1) image pairs, with 50 percent of the outliers included. In this experiment, we parameterize the window size h as a fraction of the number of hypotheses M sampled so far.
The two parameters are set in the ranges h = [0.05, 0.1, ..., 1.0] and b = [10, 20, ..., 100]. For each combination of h and b, we run Multi-GS 10 times. In each run, we record the time elapsed to hit the first solution (again, for the multistructure case this is regarded as the retrieval of at least one all-inlier minimal subset from each structure). Fig. 7 shows the median results.

TABLE 2: Performance of Various Sampling Methods for Fundamental Matrix Estimation on Image Pairs in Fig. 8. Each experiment was given 50 random runs, each for 10 CPU seconds. We report the total number of samples found within the given time budget and the median of CPU seconds (respectively, sampling steps) required to find at least one all-inlier minimal subset for each structure. Rows and columns where this is not achieved within the time limit are marked accordingly. The median number of all-inlier samples found is listed separately for each structure (Structure-i, i = 1, 2, ...). The number of inliers and the inlier ratio for each structure are given in parentheses. The top result with respect to each performance measure is boldfaced.

It is clear that the performance of Multi-GS is tolerant to the block size b: within the range 10 ≤ b ≤ 100, similar results are obtained. The effect of h, however, is more
prominent. As expected, the performance of Multi-GS decays as h increases, since the intersection function (4) loses discriminative power. The deterioration, however, is faster on Board Game (multistructure), since the per-structure outlier rate is much higher, especially for the smallest structure, whose inliers make up only 10.39 percent of the data. Thus, the optimal h should be proportional to the inlier rate of a genuine structure. Nonetheless, even on Board Game, Multi-GS demonstrates a large degree of tolerance to h, as it is capable of retrieving solutions within 10 seconds for h ∈ [0.05M, 0.4M].

5.3 Further Results on Fundamental Matrix Fitting

Other image pairs used for fundamental matrix estimation are shown in Fig. 8. Their respective inlier and outlier proportions are listed in Table 2. The experimental setup and performance metrics are the same as in Section 5.1, except that we do not vary the outlier rates. Table 2 summarizes the results obtained. A conclusion similar to that of Section 5.1 is apparent here. On single-structure data, PROSAC rapidly retrieves the first solution due to the availability of accurate keypoint matching scores, while Multi-GS is at least on par with the other methods. On multistructure data, however, all methods except Multi-GS break down. Multi-GS is also capable of consistently retrieving the first solution in under 1 s, as well as the largest number of all-inlier minimal subsets.

5.3.1 Performance under Degeneracies

Under the 8-point method, a degenerate fundamental matrix is obtained when more than six of the eight correspondences in a minimal subset lie on the same plane [9]. This occurs frequently when there is a dominant plane in the scene [6]. Degenerate models may have high consensus despite being far from the optimal model.

The Dino-Books data in Fig. 8f are modified by removing two of the three genuine structures (see Fig. 9a). The inliers of the remaining structure lie on two distinct planes.
We indicate these as Set A (magenta triangles, 29 matches) and Set B (blue squares, 49 matches). We create different instances of the data by using all of Set B (regarded as the "dominant plane") while controlling the number of Set A inliers (the "off-plane" inliers) added. This yields a series of ratios of on-plane inliers to total inliers,

|Set B| / (|Set A| + |Set B|). (15)

We are interested in the number of nondegenerate hypotheses produced as this ratio is increased. To this end, we vary the ratio in the range [0.75, 0.90]. For each ratio, 50 data instances are generated, where each instance contains the same quantity of outliers as inliers; this maintains the outlier rate at 50 percent. On each data instance, each method is run for 10 seconds. The median results across the 50 repetitions are presented in Fig. 9b.

Fig. 7. Effects of varying the settings of h and b on Multi-GS (see Algorithm 1). Each point on the surface indicates the time required to hit the first solution for the given combination of h and b. Note that (a) is on single-structure data, while (b) is on multistructure data.

Fig. 8. Image pairs for fundamental matrix estimation. The keypoints are detected and matched using SIFT. The colors indicate group labels, while gross outliers are marked as "+". (a)-(c) are single structure, while (d)-(f) are multistructure. (d) and (e) are from an external source, while the others are our own data.

Unsurprisingly, since Multi-GS does not explicitly account for degeneracies, the number of nondegenerate
hypotheses sampled decreases approximately linearly with the ratio (15). Note that in our case, since the number of off-plane inliers added, |Set A|, can exceed 7, a minimal subset with more than six members from Set A is also counted as degenerate. Relative to the expected number of nondegenerate hypotheses among all-inlier minimal subsets (see footnote 1), Multi-GS appears below the norm, indicating a tendency to sample from data on the dominant plane.

However, due to the much larger number of all-inlier minimal subsets it retrieves, Multi-GS still generates significantly more nondegenerate hypotheses. Contrast this with random sampling, which produced a relatively insignificant number of nondegenerate all-inlier minimal subsets. Thus, in a practical sense, Multi-GS is still a good choice in the presence of degeneracies.

Moreover, it is worth stressing that in prior work [6], [9], degeneracies are not handled in the hypothesis sampling process, i.e., the sampling does not explicitly search for nondegenerate models. Instead, a separate mechanism is used to detect degenerate hypotheses (e.g., robust rank detection [9]) before invoking a correction routine (e.g., the plane-and-parallax algorithm [6]) to recover the fundamental matrix from the degenerate hypotheses. Such routines can easily be plugged into Multi-GS to recover nondegenerate hypotheses from the degenerate all-inlier minimal subsets.

5.4 Single- and Multistructure Homography Fitting

The image pairs used in this experiment are shown in Fig. 10. Each structure in this problem corresponds to a planar surface, which can be modeled by a 2D homography. Global coordinate normalization [10] is conducted on the data before putative homographies are estimated from minimal subsets of four matches (i.e., p = 4) using DLT [11]. The residual is computed as the symmetric transfer error. On each image pair, each method was run 50 times, each run limited to 10 seconds. Within the given time budget, the same performance metrics from Section 5.1 are recorded.
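Instantiating a homography from a minimal subset of four matches via the standard DLT [11] can be sketched as follows. The sketch omits the coordinate normalization performed in the paper's pipeline, and the function name is ours:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a 3x3 homography H (dst ~ H @ src, in homogeneous
    coordinates) from 4 or more point pairs via the Direct Linear
    Transform: stack two linear equations per match and take the
    null-space vector of the design matrix with SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)            # last row of Vt spans the null space
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                     # fix the arbitrary scale
```

With exactly four matches in general position, the design matrix has a one-dimensional null space and the homography is recovered exactly (up to scale), which is why p = 4 is minimal for this model.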
Table 3 summarizes the median results. On almost all image pairs, Multi-GS retrieved the highest number of all-inlier minimal subsets. However, in this experiment the gap in performance between Multi-GS and the others is not as obvious. This is a direct consequence of the lower order of the geometric model (p = 4); the other methods can compensate for their lower accuracy with speed. Nonetheless, the results still demonstrate that Multi-GS obtains a much higher number of all-inlier minimal subsets with far fewer iterations.

The results also reveal the limits of Multi-GS. The highest per-structure outlier rate (pseudo- and gross outliers) it can tolerate before failing to retrieve all-inlier minimal subsets from a particular structure is ≈95 percent.

5.5 Motion Subspace Fitting

We also consider the affine camera motion segmentation problem. Point trajectories are first tracked across a video sequence and stored in a data matrix

D = [d_1 ... d_N] ∈ R^{2F×N}, (16)

where F is the number of frames and N is the number of trajectories. Each column d_i corresponds to a particular trajectory. Under the affine camera model, columns belonging to the same rigid motion lie on the same 4D subspace. A popular method for motion segmentation is to robustly fit multiple subspaces to the trajectories using RANSAC [26]. Following that work, we first project the data matrix onto the first five principal subspaces, yielding D̃ ∈ R^{5×N}. Minimal subsets containing four randomly chosen columns of D̃ (i.e., p = 4) are then produced, and each subset is used to estimate a 4D subspace hypothesis using SVD. The residual r^(i)_m is computed as the orthogonal distance of the ith column to the mth subspace hypothesis.

We use sequences from the Hopkins 155 benchmark data set [26], which consists of 155 sequences of tracked feature points. Each sequence contains either two or three distinct motions (e.g., moving cars, checkerboards) established from 150-300 individual trajectories.
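The subspace hypothesis and residual described above can be sketched directly in numpy: an orthonormal basis for the 4D subspace spanned by the four chosen columns is obtained from their SVD, and each column's residual is its orthogonal distance to that subspace (function and variable names are ours):

```python
import numpy as np

def subspace_residuals(D_tilde, subset, dim=4):
    """Fit a dim-dimensional linear subspace to the chosen columns of
    D_tilde (5 x N) via SVD, then return every column's orthogonal
    distance to that subspace."""
    B = D_tilde[:, subset]                  # 5 x p sample of trajectories
    U, _, _ = np.linalg.svd(B, full_matrices=False)
    basis = U[:, :dim]                      # orthonormal basis, 5 x dim
    proj = basis @ (basis.T @ D_tilde)      # orthogonal projection of all columns
    return np.linalg.norm(D_tilde - proj, axis=0)
```

Columns lying on the hypothesized subspace receive a zero residual, while columns from other motions keep their full orthogonal component.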
See [26] for details of the collection methodology. Note that in these sequences, missing or bad tracks were simply discarded; hence, from the point of view of robust estimation, there are no gross outliers. Only pseudo-outliers exist, due to the multistructure data. This data set is therefore significantly easier than the data considered previously.

We compare Multi-GS against the methods listed in Table 1. The parameters which differ from Table 1 are:
1. For Guided-MLESAC and PROSAC, since there are no gross outliers, the prior inlier probabilities (i.e., data quality scores) are set to 1/N for every datum.
2. For proximity sampling, the standard deviation is taken as twice the average interdatum distance in R^5.

Fig. 9. Performance under degeneracies.

Footnote 1: First, compute the probability in [6, Equation 7], replacing H there with the ratio (15) and adding a second analogous probability term for the case |Set A| ≥ 7. Then, multiply the resulting probability by the number of all-inlier minimal subsets produced by Multi-GS.

On each of the 155 sequences, each method is given 20 runs, each run limited to 10 seconds. The same performance
metrics as for the previous multistructure data experiments are used here. The median of each metric is plotted in Fig. 11. Note that for this data set we are unable to vary the outlier rate smoothly as in Fig. 6. To facilitate comparisons, we reorder all results according to the values of the third metric of Multi-GS (the number of minimal subsets sampled before hitting at least one all-inlier minimal subset from each structure) in ascending order. This yields an approximate left-to-right ordering based on the difficulty of the different sequences.

Fig. 10. Image pairs for the homography estimation experiment with marked keypoints. The keypoints are detected and matched using SIFT. The colors indicate group labels, while gross outliers are marked as "+". All data are multistructure. (a) and (d)-(f) are from the Oxford Visual Geometry Group, while the others were captured on our campus.

TABLE 3: Performance of Various Sampling Methods for Multihomography Estimation on Image Pairs in Fig. 10. Each data set was given 50 random runs of 10 CPU seconds each. The same performance metrics as in Table 2 are used.

While Fig. 11b shows that Multi-GS sampled fewer all-inlier minimal subsets, it should be noted that here we are dealing with multistructure data. The metric used in Fig. 11b does not consider from which structure the
all-inlier minimal subsets were sampled. Due to the lower order of the geometric model (p = 4), with brute speed the other methods simply sampled almost exclusively from the dominant structures (i.e., motions with a large number of trajectories). Hence, the other methods actually require more steps/time to hit at least one all-inlier minimal subset from each structure. This is clear from Figs. 11c and 11d. Further, only Multi-GS successfully hits all structures within the time limit on all 155 sequences; the broken lines in the figures indicate failures of the other methods on some of the sequences. Finally, note that the divergence in performance between Multi-GS and the others occurs close to the 120th sequence. This is because most of the sequences before number 120 are easy, and we were unable to test at a finer range of difficulty without artificially modifying the data.

Finally, since there are no gross outliers in any sequence, all data are a priori equally valid inliers. Thus, in this experiment, for Guided-MLESAC and PROSAC we set equal prior inlier probabilities for all data; in any case, it is unknown how to compile KLT tracker results into quality scores analogous to two-view point matching scores. This also means, however, that the two methods are effectively conducting pure random sampling.

6 CONCLUSIONS AND EXTENSIONS

We propose a fundamentally new accelerated hypothesis sampling scheme which uses information derived from residual sorting. In contrast to existing sampling techniques, our method does not require the prior inlier scores from domain knowledge that are typical of other techniques. The proposed method performs a conditional sampling strategy that encourages the selection of minimal subsets within coherent structures.
Experiments on various geometric fitting tasks show that the proposed method significantly outperforms other methods in terms of sampling efficiency and the speed with which it recovers all-inlier minimal subsets, especially on multistructure data.

It is evident that Multi-GS excels on both multi- and single-structure data; according to the top row of Fig. 6, Multi-GS is surpassed only by PROSAC on single-structure data. Thus, even in cases where either a single or multiple structures may be present, using Multi-GS alone will provide competitive results. However, in some applications the practitioner might wish to apply the optimal algorithm in either case. We recommend a solution which fuses Multi-GS with PROSAC/Guided-MLESAC. The idea is to sample the first datum s_1 according to prior inlier probabilities such as keypoint matching scores (instead of purely randomly, as currently done in Multi-GS), while the other members s_2, ..., s_p are sampled according to the Multi-GS conditional weights. This provides the advantage of the best single-structure algorithms while retaining the ability of Multi-GS to deal with multiple structures.

ACKNOWLEDGMENTS

The authors would like to thank the reviewers for their insightful comments. This work was partly supported by the Australian Research Council grant DP0878801.

REFERENCES

[1] D. Capel, "An Effective Bail-Out Test for RANSAC Consensus Scoring," Proc. British Machine Vision Conf., 2005.
[2] T.-J. Chin, J. Yu, and D. Suter, "Accelerated Hypothesis Generation for Multi-Structure Robust Fitting," Proc. 11th European Conf. Computer Vision, 2010.
[3] O. Chum and J. Matas, "Matching with PROSAC - Progressive Sample Consensus," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2005.
[4] O. Chum and J. Matas, "Optimal Randomized RANSAC," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 30, no. 8, pp. 1472-1482, Aug. 2008.
[5] O. Chum, J. Matas, and J. Kittler, "Locally Optimized RANSAC," Proc.
Deutsche Arbeitsgemeinschaft für Mustererkennung, 2003.
[6] O. Chum, T. Werner, and J. Matas, "Two-View Geometry Estimation Unaffected by a Dominant Plane," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2005.
[7] O. Enqvist and F. Kahl, "Two View Geometry Estimation with Outliers," Proc. British Machine Vision Conf., 2009.
[8] M.A. Fischler and R.C. Bolles, "RANSAC: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography," Comm. ACM, vol. 24, pp. 381-395, 1981.
[9] J.-M. Frahm and M. Pollefeys, "RANSAC for (Quasi-)Degenerate Data (QDEGSAC)," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2006.
[10] R. Hartley, "In Defense of the Eight-Point Algorithm," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 6, pp. 580-593, June 1997.
[11] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, second ed. Cambridge Univ. Press, 2004.

Fig. 11. Results on the Hopkins 155 data set (motion subspace estimation) (best viewed in color).
[12] Y. Kanazawa and H. Kawakami, "Detection of Planar Regions with Uncalibrated Stereo Using Distributions of Feature Points," Proc. British Machine Vision Conf., 2004.
[13] H. Li, "Consensus Set Maximization with Guaranteed Global Optimality for Robust Geometry Estimation," Proc. 12th IEEE Int'l Conf. Computer Vision, 2009.
[14] D. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," Int'l J. Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[15] J. Matas and O. Chum, "Randomized RANSAC with t_{d,d} Test," Image and Vision Computing, vol. 22, pp. 837-842, 2004.
[16] J. Matas and O. Chum, "Randomized RANSAC with Sequential Probability Ratio Test," Proc. 10th IEEE Int'l Conf. Computer Vision, 2005.
[17] K. Ni, H. Jin, and F. Dellaert, "GroupSAC: Efficient Consensus in the Presence of Groupings," Proc. 12th IEEE Int'l Conf. Computer Vision, 2009.
[18] D. Nister, "Preemptive RANSAC for Live Structure and Motion Estimation," Proc. Ninth IEEE Int'l Conf. Computer Vision, 2003.
[19] R. Raguram, J.-M. Frahm, and M. Pollefeys, "A Comparative Analysis of RANSAC Techniques Leading to Adaptive Real-Time Random Sample Consensus," Proc. 10th European Conf. Computer Vision, 2008.
[20] P.J. Rousseeuw and A.M. Leroy, Robust Regression and Outlier Detection. Wiley, 1987.
[21] T. Sattler, B. Leibe, and L. Kobbelt, "SCRAMSAC: Improving RANSAC's Efficiency with a Spatial Consistency Filter," Proc. 12th IEEE Int'l Conf. Computer Vision, 2009.
[22] R. Sedgewick and K. Wayne, Algorithms, fourth ed. Addison-Wesley, 2010.
[23] E. Serradell, M. Ozuysal, V. Lepetit, P. Fua, and F. Moreno-Noguer, "Combining Geometric and Appearance Priors for Robust Homography Estimation," Proc. 11th European Conf. Computer Vision, 2010.
[24] C.V. Stewart, "Robust Parameter Estimation in Computer Vision," SIAM Rev., vol. 41, no. 3, pp. 513-537, 1999.
[25] B.J. Tordoff and D.W. Murray, "Guided-MLESAC: Faster Image Transform Estimation by Using Matching Priors," IEEE Trans. Pattern Analysis and Machine Intelligence, vol.
27, no. 10, pp. 1523-1535, Oct. 2005.
[26] R. Tron and R. Vidal, "A Benchmark for the Comparison of 3D Motion Segmentation Algorithms," Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2007.

Tat-Jun Chin received the BEng degree in mechatronics engineering from the Universiti Teknologi Malaysia (UTM) in 2003 and the PhD degree in computer systems engineering from Monash University, Victoria, Australia, in 2007. He was a research fellow at the Institute for Infocomm Research (I2R) in Singapore from 2007-2008. Since 2008, he has been a lecturer at The University of Adelaide, South Australia. His research interests include robust estimation and statistical learning methods in computer vision. He is a member of the IEEE.

Jin Yu received the master's degree in artificial intelligence from the Katholieke Universiteit Leuven, Belgium, in 2004 and the PhD degree in information sciences and engineering from the Australian National University, Australia, in 2010. Currently, she is working as a postdoctoral researcher with the School of Computer Science at The University of Adelaide, Australia. Her main research interests include stochastic learning, nonsmooth optimization, and robust model fitting for machine learning and computer vision problems.

David Suter received the BSc degree in applied mathematics and physics from The Flinders University of South Australia in 1977, the Graduate Diploma in Computing from the Royal Melbourne Institute of Technology in 1984, and the PhD degree in computer science from La Trobe University in 1991. He was a lecturer at La Trobe from 1988 to 1991 and a senior lecturer (1992), associate professor (2001), and professor (2006-2008) at Monash University, Melbourne, Australia. Since 2008, he has been a professor in the School of Computer Science, The University of Adelaide, where he is head of school. He served on the Australian Research Council (ARC) College of Experts from 2008-2010.
He is on the editorial boards of the International Journal of Computer Vision and the Journal of Mathematical Imaging and Vision. He has previously served on the editorial boards of Machine Vision and Applications and the International Journal of Image and Graphics. He was general cochair of the Asian Conference on Computer Vision (Melbourne, 2002) and is currently cochair of the IEEE International Conference on Image Processing (ICIP 2013). He is a senior member of the IEEE.