
AN IMPROVED ALGORITHM FOR ALGEBRAIC CURVE AND SURFACE FITTING

Gabriel Taubin
IBM T.J. Watson Research Center
P.O. Box 704, Yorktown Heights, NY 10598

Abstract

Recent years have seen an increasing interest in algebraic curves and surfaces of high degree as geometric models or shape descriptors for model-based computer vision tasks such as object recognition and position estimation. Although their invariant-theoretic properties make them a natural choice for these tasks, fitting algebraic curves and surfaces to data sets is difficult, and fitting algorithms often suffer from instability and numerical problems. One source of problems seems to be the performance function being minimized. Since minimizing the sum of the squares of the Euclidean distances from the data points to the curve or surface with respect to the coefficients of the defining polynomials is computationally impractical (measuring the Euclidean distances requires iterative processes), approximations are used. In the past we have used a simple first order approximation of the Euclidean distance from a point to an implicit curve or surface, which yielded good results in the case of unconstrained algebraic curves or surfaces, and reasonable results in the case of bounded algebraic curves and surfaces. However, experiments with the exact Euclidean distance have shown the limitations of this simple approximation. In this paper we introduce a more complex, and better, approximation to the Euclidean distance from a point to an algebraic curve or surface. Evaluating this new approximate distance does not require iterative procedures either, and the fitting algorithm based on it produces results of the same quality as those based on the exact Euclidean distance.

Key words: Algebraic surfaces. Fitting. Surface reconstruction. Distance to algebraic surfaces.
1 Introduction

In the last few years several researchers have started using algebraic curves and surfaces of high degree as geometric models or shape descriptors in different model-based computer vision tasks. Typically, the input for these tasks is either an intensity image or dense range data [19, 22, 27, 28].

One of the fundamental problems in building a recognition and positioning system based on implicit curves and surfaces is how to fit these curves and surfaces to data. This process is necessary for automatically constructing object models from range or intensity data, and for building intermediate representations from observations during recognition. Several methods are available for extracting straight line segments [13], planar patches [14], quadratic arcs [1, 2, 5, 9, 10, 15, 17, 20, 21, 25, 32], and quadric surface patches [3, 4, 7, 14, 16, 18] from 2D edge maps and 3D range images. Recently, methods have also been developed for fitting algebraic curve and surface patches of arbitrary degree [8, 19, 22, 24, 26, 27, 28, 30].

Although the properties of implicit algebraic curves and surfaces make them a natural choice for object recognition and positioning applications, least squares algebraic curve and surface fitting algorithms often suffer from instability problems. One of the sources of problems seems to be the performance function that is being minimized [23]. Since minimizing the sum of the squares of the Euclidean distances from the data points to the implicit curve or surface with respect to the continuous parameters of the implicit functions is computationally impractical (measuring the Euclidean distances already requires iterative processes), approximations are used. In the past we have used a simple approximation of the Euclidean distance from a point to an implicit curve or surface, which yielded good results in the case of unconstrained algebraic curves or surfaces of a given degree, and reasonable results in the case of bounded algebraic curves and surfaces.
However, experiments with the exact Euclidean distance have shown that the simple approximation has limitations, in particular for this family [23].
Taubin [29], in defining simple algorithms for rendering planar algebraic curves on raster displays, introduced a more complex, but more accurate, two dimensional higher order approximate distance, which can still be evaluated directly from the coefficients of the defining polynomials and the coordinates of the point. In this paper we extend the definitions to dimension three, within the context of implicit curve and surface fitting. We show that this new approximate distance produces results of the same quality as those based on the exact Euclidean distance, and much better than those obtained using other available methods.

2 Algebraic curves and surfaces

In this section we review some previous results on fitting algebraic curves and surfaces to measured data points. An implicit surface is the set of zeros

    Z(f) = { x = (x1, x2, x3)^t : f(x1, x2, x3) = 0 }

of a smooth function f : R^3 -> R of three variables. Similarly, an implicit 2D curve is the set of zeros Z(f) = { (x1, x2)^t : f(x1, x2) = 0 } of a smooth function f : R^2 -> R of two variables. The curves or surfaces are algebraic if the functions are polynomials. Although the methods to be discussed are valid for dimension 2, 3, and above, we will concentrate on the case of surfaces, i.e., dimension 3.
In what follows, a polynomial of degree d in three variables x1, x2, x3 will be denoted

    f(x) = Σ_{0 ≤ |α| ≤ d} f_α x^α    (1)

where x = (x1, x2, x3)^t, α = (α1, α2, α3) is a multiindex, a triple of nonnegative integers, |α| = α1 + α2 + α3 is the size of the multiindex, x^α = x1^{α1} x2^{α2} x3^{α3} is a monomial, and F = { f_α : 0 ≤ |α| ≤ d } is the set of coefficients of f.

3 Fitting

Given a finite set of three dimensional data points D = {p_1, ..., p_q}, the problem of fitting an algebraic surface Z(f) to the data set is usually cast as minimizing the mean square distance

    Δ_D(f) = (1/q) Σ_{i=1}^q dist(p_i, Z(f))²    (2)

from the data points to the curve or surface Z(f), a function of the set of coefficients F of the polynomial f. Unfortunately, the minimization of (2) is computationally impractical, because there is no closed form expression for the distance from a point to an algebraic surface, and iterative methods are required to compute it.

Thus, we seek approximations to the distance function. Often, the distance from p to Z(f) is simply taken to be the result of evaluating the polynomial at p, since without noise f(p_i) vanishes for every data point p_i on Z(f); the corresponding performance function is

    (1/q) Σ_{i=1}^q f(p_i)²

While computationally attractive, fitting based on this distance is biased [5].

An alternative way to approximately solve the original computational problem, i.e., the minimization of (2), is to replace the real distance from a point to an implicit curve or surface by the first order approximation [26, 27]

    dist(x, Z(f))² ≈ f(x)² / ‖∇f(x)‖²    (3)

The mean value of this function on a fixed set of data points

    Δ_D(f) = (1/q) Σ_{i=1}^q f(p_i)² / ‖∇f(p_i)‖²    (4)

is a smooth nonlinear function of the parameters, and can be locally minimized using well established nonlinear least squares techniques. However, one is interested in the global minimization of (4), and wants to avoid a global search. A method to choose a good initial estimate was introduced in [26, 27] as well. By replacing the performance function once more, the difficult multimodal optimization problem turns into a generalized eigenproblem. There exist certain families of implicit curves or surfaces, such as those defining straight lines, circles, planes, spheres and cylinders, which have ‖∇f(x)‖² constant on Z(f). In those cases we have

    (1/q) Σ_{i=1}^q f(p_i)² / ‖∇f(p_i)‖²  ≈  ( (1/q) Σ_{i=1}^q f(p_i)² ) / ( (1/q) Σ_{i=1}^q ‖∇f(p_i)‖² )

In the linear model, where the polynomial is written f(x) = F^t X(x) for a vector X(x) of basis monomials, the right hand side of the previous expression reduces to the quotient of two quadratic functions of the vector of parameters F

    ( (1/q) Σ_{i=1}^q f(p_i)² ) / ( (1/q) Σ_{i=1}^q ‖∇f(p_i)‖² ) = (F^t M F) / (F^t N F)    (5)
where the matrices M and N are nonnegative definite, symmetric, and functions of the data points only:

    M = (1/q) Σ_{i=1}^q X(p_i) X(p_i)^t
    N = (1/q) Σ_{i=1}^q DX(p_i) DX(p_i)^t

where DX is the Jacobian matrix of the vector X of basis monomials. In the algebraic case, the entries of M and N are linear combinations of moments of the data points. The new problem, the minimization of (5), reduces to a generalized eigenvalue problem, with the minimizer being the eigenvector corresponding to the minimum eigenvalue of the pencil det(M − λN) = 0. Note that the complexity of this generalized eigenvalue fit method is polynomial in the number of variables, while the cost of the local minimization of (4) grows with the number of data points, which is usually much larger than the number of variables. This method directly applies to fitting unconstrained algebraic curves and surfaces, and there is an extension to the case of bounded surfaces as well [30].
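As a concrete illustration of the generalized eigenvalue fit of (5), here is a minimal numpy sketch for a degree-2 curve in the plane (the paper works in three dimensions, but the construction is identical). The function names and the small ridge `eps` added to the singular matrix N are my own devices, not from the paper:

```python
import numpy as np

def X(p):
    # monomial basis X(x1, x2) = (1, x1, x2, x1^2, x1*x2, x2^2)^t,
    # so that f(x) = F^t X(x) in the linear model
    x1, x2 = p
    return np.array([1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2])

def DX(p):
    # Jacobian of X: row j holds the partial derivatives of the j-th monomial
    x1, x2 = p
    return np.array([[0.0, 0.0],
                     [1.0, 0.0],
                     [0.0, 1.0],
                     [2 * x1, 0.0],
                     [x2, x1],
                     [0.0, 2 * x2]])

def generalized_eigenvalue_fit(points, eps=1e-6):
    # M and N as in the paper: averages of X X^t and DX DX^t over the data
    q = len(points)
    M = sum(np.outer(X(p), X(p)) for p in points) / q
    N = sum(DX(p) @ DX(p).T for p in points) / q
    # the minimizer of F^t M F / F^t N F is the eigenvector of the smallest
    # eigenvalue of the pencil (M, N); N is singular (the constant monomial
    # has zero gradient), so a tiny ridge makes the solve well defined
    vals, vecs = np.linalg.eig(np.linalg.solve(N + eps * np.eye(6), M))
    F = np.real(vecs[:, np.argmin(np.abs(vals))])
    return F / np.linalg.norm(F)
```

Fed points sampled on the unit circle, the smallest eigenvector recovers a multiple of x1² + x2² − 1, i.e., F proportional to (−1, 0, 0, 1, 0, 1).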
It is important to note that the approximate distance (3) is also biased in some sense. If one of the data points p_i is close to a critical point of the polynomial, i.e., ∇f(p_i) ≈ 0 but f(p_i) ≠ 0, the ratio f(p_i)² / ‖∇f(p_i)‖² becomes large. This implies that the minimizer of (4) must be a polynomial with no critical points close to the data set, which is certainly a limitation. Besides, recent experiments with the exact Euclidean distance [23] have shown much better results, at the expense of a very long computation. We will now introduce a better approximation.

4 A better approximation to the Euclidean distance

The next problem that we have to deal with is how to correctly approximate the distance dist(p, Z(f)) from a point p ∈ R^n to the set of zeros of a polynomial f. In general, the Euclidean distance from p to the set of zeros Z(f) of a function f : R^n -> R is defined as the minimum of the distances from p to points in the zero set

    dist(p, Z(f)) = min { ‖x − p‖ : f(x) = 0 }    (6)

In the general case the distance must be computed explicitly with numerical methods, while in the algebraic case (when f(x) is a polynomial of low degree) different combinations of symbolic and numerical methods can be used to solve the problem. Since even in the algebraic case computing the Euclidean distance is computationally expensive, and sometimes numerically unstable, in the following sections we introduce approximations which can be evaluated less expensively.

5 Simple approximate distance

The simple approximate distance of equation (3) is a first order approximation of the Euclidean distance from a point to the zero set of a smooth function. It is a generalization of the update step of the Newton method for root-finding [11]. In this section we review the derivation of this approximate distance, which from now on will be denoted δ_1(p, Z(f)), because we will need it to derive the higher order approximate distances.

Let p ∈ R^n be a point such that ∇f(p) ≠ 0, and let us expand f(x) in Taylor series up to first order in a neighborhood of p

    f(x) = f(p) + ∇f(p)^t (x − p) + O(‖x − p‖²)

Now, truncate the series after the linear terms, and apply the triangle, and then the Cauchy-Schwarz, inequality to obtain

    |f(x)| ≈ |f(p) + ∇f(p)^t (x − p)| ≥ |f(p)| − |∇f(p)^t (x − p)| ≥ |f(p)| − ‖∇f(p)‖ ‖x − p‖

The simple approximate distance from p to Z(f) is defined as the value of ‖x − p‖ that annihilates the expression on the right side

    δ_1(p, Z(f)) = |f(p)| / ‖∇f(p)‖    (7)

Besides the fact that it can be computed very fast, in constant time, requiring only the evaluation of the function and its first order partial derivatives at the point, the fundamental property of the simple approximate distance is that it is a first order approximation to the exact distance in a neighborhood of every regular point (a point p ∈ R^n such that f(p) = 0 but ∇f(p) ≠ 0). In our context, this is a very desirable property.

Lemma 1. Let f : R^n -> R be a function with continuous derivatives up to second order in a neighborhood of a regular point p_0 of Z(f). Let w be a unit length normal vector to Z(f) at p_0, and let p_t = p_0 + t w, for t ∈ R. Then
    lim_{t→0} δ_1(p_t) / δ(p_t) = 1

where δ(p) denotes the Euclidean distance from the point p to the set Z(f).

Proof: In the appendix.
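Equation (7) and the behavior described by Lemma 1 are easy to check numerically. The sketch below (function names are mine) evaluates δ_1 for the unit circle f(x1, x2) = x1² + x2² − 1:

```python
import numpy as np

def delta1(f, grad_f, p):
    # simple approximate distance (7): |f(p)| / ||grad f(p)||
    return abs(f(p)) / np.linalg.norm(grad_f(p))

# unit circle
f = lambda p: p[0] ** 2 + p[1] ** 2 - 1.0
grad_f = lambda p: np.array([2.0 * p[0], 2.0 * p[1]])

# far from the curve the estimate is off: at p = (2, 0) the exact
# distance is 1, while delta1 = |3| / ||(4, 0)|| = 0.75 (an underestimate);
# near the curve the ratio tends to 1, as Lemma 1 states
```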
Figure 1: The simple approximate distance either overestimates or underestimates the exact distance.

As can already be seen in the one-dimensional case of figure 1, the approximate distance either underestimates or overestimates the exact distance. Underestimation is not a problem for the algorithms described above, but overestimation is a real problem. It is not difficult to find examples where the simple approximate distance approaches infinity. That happens when ∇f(p) ≈ 0 but f(p) ≠ 0. For example, the zero set of the polynomial f(x) = (x1² + x2² − 1)² is a circumference of radius 1, but the gradient of f is zero at the origin, and so the simple approximate distance from the origin to Z(f) is infinite. For algebraic curves and surfaces, the higher order approximate distances introduced in the next section solve this problem.

6 Higher order approximate distance

In this section we derive a new approximate distance. The simple approximate distance involves the value of the function and the first order partial derivatives at the point, and it coincides with the new approximate distance for polynomials of degree one. In general, we will define the approximate distance of order d as a function δ_d(p, Z(f)) of the coordinates of the point and all the partial derivatives of the polynomial of degree up to d at the point p, satisfying the inequality

    0 ≤ δ_d(p, Z(f)) ≤ δ(p, Z(f))

when f is a polynomial of degree d. That is, we will show that δ_d(p, Z(f)) is a lower bound for the Euclidean distance from the point to the algebraic curve or surface. Since we will also show that all the approximate distances are asymptotically equivalent to the Euclidean distance near regular points, we will be able to replace the Euclidean distance with the approximate distance of order d in our performance function.
Our approach is to construct a univariate polynomial φ_p(δ) of the same degree d, such that φ_p(0) = |f(p)| and |f(x)| ≥ φ_p(‖x − p‖) for all x. The approximate distance δ_d(p, Z(f)) will be defined as the smallest nonnegative root of φ_p(δ). This is exactly what we did to define the first order approximate distance (7). We first rewrite the polynomial (1) in Taylor series around p

    f(x) = Σ_α f_α(p) (x − p)^α    (8)

where

    f_α(p) = (1 / (α1! α2! α3!)) (∂^{|α|} f / ∂x1^{α1} ∂x2^{α2} ∂x3^{α3})(p)

and α = (α1, α2, α3). Horner's algorithm can be used to compute these coefficients stably and efficiently [6] from the coefficients of the polynomial expanded at the origin, { f_α = f_α(0) }, and parallel algorithms exist to do so as well [12]. We now define the coefficients f_0(p), f_1(p), ..., f_d(p) of the univariate polynomial φ_p(δ). We define the first coefficient as f_0(p) = |f(p)|, and for k = 1, 2, ..., d we set

    f_k(p) = { Σ_{|α|=k} (|α|; α)^{-1} f_α(p)² }^{1/2}

where (|α|; α) = |α|! / (α1! α2! α3!) denotes a multinomial coefficient. We can now state the first fundamental property of the bounding polynomial

    φ_p(δ) = f_0(p) − Σ_{k=1}^d f_k(p) δ^k

Lemma 2. If f(x) is a polynomial of degree d, and φ_p(δ) is the univariate polynomial defined above, then

    |f(x)| ≥ φ_p(‖x − p‖)

for every x ∈ R^3.

Proof: In the appendix.

Note that, since φ_p attains the nonnegative value f_0(p) = |f(p)| at δ = 0, and is monotonically decreasing for δ ≥ 0, it has a unique nonnegative root δ_d(p, Z(f)). This number is a lower bound for the Euclidean distance from p to Z(f), because for ‖x − p‖ < δ_d(p, Z(f)) we have

    |f(x)| ≥ φ_p(‖x − p‖) > 0

Also note that if φ_p(δ) > 0, then the distance from p to Z(f) is larger than δ; in the case of polynomials of degree d, this means that the set Z(f) does not cut the sphere of radius δ centered at p.
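The bounding polynomial and its unique nonnegative root can be sketched in a few lines. The helper below takes the Taylor coefficients f_α(p) as a dict keyed by multiindex (in 2D for brevity; the names are mine, and `np.roots` stands in for the Newton iteration discussed next):

```python
import numpy as np
from math import factorial

def multinomial(alpha):
    # (|alpha|; alpha) = |alpha|! / (alpha_1! ... alpha_n!)
    c = factorial(sum(alpha))
    for a in alpha:
        c //= factorial(a)
    return c

def order_d_distance(taylor, d):
    # taylor maps a multiindex alpha to the Taylor coefficient f_alpha(p);
    # build f_k(p) = { sum_{|alpha|=k} (|alpha|; alpha)^(-1) f_alpha(p)^2 }^(1/2)
    n = len(next(iter(taylor)))
    fk = np.zeros(d + 1)
    fk[0] = abs(taylor.get((0,) * n, 0.0))
    for alpha, c in taylor.items():
        if sum(alpha) > 0:
            fk[sum(alpha)] += c * c / multinomial(alpha)
    fk[1:] = np.sqrt(fk[1:])
    # phi_p(delta) = f_0(p) - sum_k f_k(p) delta^k, highest degree first
    phi = [-fk[k] for k in range(d, 0, -1)] + [fk[0]]
    roots = np.roots(phi)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return min(r for r in real if r >= 0.0)

# pathological example from section 5: f(x) = (x1^2 + x2^2 - 1)^2 at p = (0, 0);
# the gradient vanishes, so delta_1 is infinite, yet delta_4 is a finite
# lower bound for the true distance 1
taylor = {(0, 0): 1.0, (2, 0): -2.0, (1, 1): 0.0, (0, 2): -2.0,
          (4, 0): 1.0, (2, 2): 2.0, (0, 4): 1.0}
```

For this example the root is δ_4 ≈ 0.549, comfortably below the exact distance 1, at a point where the simple approximate distance (7) is infinite.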
To compute the value of δ_d(p, Z(f)), due to the monotonicity of φ_p(δ) and the uniqueness of the positive root, any univariate root finding algorithm will converge very quickly to δ_d(p, Z(f)); even Newton's algorithm converges in a couple of iterations. But in order to make such an algorithm work, we need a practical method to compute an initial point, or to bracket the root. Since we already have a lower bound (δ = 0), we only need an upper bound for δ_d(p, Z(f)). For this, note that

    φ_p(δ) ≤ f_0(p) − f_1(p) δ

for every δ ≥ 0, and so δ_d(p, Z(f)) ≤ δ_1(p, Z(f)): the simple approximate distance of section 5 is an upper bound. But, as we have observed above, this upper bound could be infinite. Since the degree of f is d, at least one of the coefficients of degree d must be nonzero, and so f_d(p) ≠ 0. More generally, if f_k(p) ≠ 0, the inequality

    φ_p(δ) ≤ f_0(p) − f_k(p) δ^k    (δ ≥ 0)

can be used to obtain the following bound

    δ_d(p, Z(f)) ≤ | f_0(p) / f_k(p) |^{1/k}

Finally, the asymptotic behavior of the new approximate distance near regular points is determined by the behavior of the simple approximate distance, i.e., the first order approximate distance.

Lemma 3. Let f : R^n -> R be a function with continuous partial derivatives up to order d + 1 in a neighborhood of a regular point p_0 of Z(f). Let w be a unit length normal vector to Z(f) at p_0, and let p_t = p_0 + t w, for t ∈ R. Then

    lim_{t→0} δ_d(p_t) / δ(p_t) = 1

Proof: In the appendix.

7 Implementation details

For the minimization of the approximate mean square distance of order d

    Δ_D(f) = (1/q) Σ_{i=1}^q δ_d(p_i, Z(f))²    (9)

using the Levenberg-Marquardt algorithm, we need to provide a routine for the evaluation of δ_d(p, Z(f)) and all its partial derivatives with respect to the parameters F = { f_α : 0 ≤ |α| ≤ d }. We now show how to compute these partial derivatives. By differentiating the equation

    φ_p(δ_d(p, Z(f))) = 0

with respect to a coefficient f_β of the polynomial, we obtain

    0 = ∂f_0(p)/∂f_β − Σ_{k=1}^d (∂f_k(p)/∂f_β) δ_d(p, Z(f))^k − { Σ_{k=1}^d k f_k(p) δ_d(p, Z(f))^{k−1} } ∂δ_d(p, Z(f))/∂f_β

where β = (β1, β2, β3) is another multiindex, from where we get

    ∂δ_d(p, Z(f))/∂f_β = ( ∂f_0(p)/∂f_β − Σ_{k=1}^d (∂f_k(p)/∂f_β) δ_d(p, Z(f))^k ) / ( Σ_{k=1}^d k f_k(p) δ_d(p, Z(f))^{k−1} )

Note that the denominator of the expression on the right side is minus the derivative φ_p'(δ) of the univariate polynomial φ_p(δ) with respect to δ, evaluated at δ = δ_d(p, Z(f)). To compute these derivatives we also need the partial derivatives of f_0(p), f_1(p), ..., f_d(p) with respect to the coefficients of f. For k = 0, 1, ..., d we have

    ∂f_k(p)/∂f_β = (1 / f_k(p)) Σ_{|α|=k} (|α|; α)^{-1} f_α(p) (∂f_α(p)/∂f_β)

where

    ∂f_α(p)/∂f_β = C(β1, α1) C(β2, α2) C(β3, α3) p^{β−α}   if α_j ≤ β_j for j = 1, 2, 3
                 = 0                                        otherwise

where C(m, n) denotes a binomial coefficient; subtraction of multiindices is defined coordinatewise, so that β − α is again a triple of nonnegative integers.
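A hedged sketch of the nonlinear stage just described: a small Levenberg-Marquardt loop minimizing a mean square approximate distance, here for a circle with parameters (a, b, r). For brevity it uses the first order distance δ_1 as the residual and a finite difference Jacobian in place of the analytic derivatives of δ_d derived above; all names are mine:

```python
import numpy as np

def residuals(theta, pts):
    # delta_1 residuals for the circle f(x) = (x1 - a)^2 + (x2 - b)^2 - r^2
    a, b, r = theta
    dx, dy = pts[:, 0] - a, pts[:, 1] - b
    return (dx * dx + dy * dy - r * r) / (2.0 * np.sqrt(dx * dx + dy * dy))

def lm_fit(theta, pts, iters=50, mu=1e-3, h=1e-6):
    # Levenberg-Marquardt with a forward difference Jacobian
    for _ in range(iters):
        r0 = residuals(theta, pts)
        J = np.stack([(residuals(theta + h * e, pts) - r0) / h
                      for e in np.eye(3)], axis=1)
        step = np.linalg.solve(J.T @ J + mu * np.eye(3), -J.T @ r0)
        if np.sum(residuals(theta + step, pts) ** 2) < np.sum(r0 ** 2):
            theta, mu = theta + step, mu * 0.5   # accept, reduce damping
        else:
            mu *= 10.0                           # reject, increase damping
    return theta
```

Starting from a deliberately wrong circle, the loop recovers center (0, 0) and radius 1 from points sampled on the unit circle.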
8 Experimental results

We have implemented the methods described above for fitting unconstrained polynomials of arbitrary degree to range data. The Levenberg-Marquardt algorithm requires an initial estimate, and we have found the results quite dependent on this initial choice. In the past we have used a method based on generalized eigendecomposition [26, 27] to get initial estimates, with good results. In this case we have observed better results when starting the algorithm from the coefficients of a sphere, a surface defined by a polynomial

    x1² + x2² + x3² + f_100 x1 + f_010 x2 + f_001 x3 + f_000

with the coefficients f_100, f_010, f_001, f_000 computed using the generalized eigenvalue fit method.
For higher degree surfaces, we also tried surfaces defined by polynomials

    (x1² + x2² + x3²)^{d/2} + Σ_{0 ≤ |α| < d} f_α x^α

with the coefficients f_α initialized with the generalized eigenvalue fit method as well. We call these surfaces hyperspheres. In both cases the initial surface is bounded, and although the results are not constrained to be bounded surfaces, they turned out to be bounded in all the examples that we have tried. Starting from the results of the unconstrained generalized eigenvalue fit method usually produced unbounded surfaces, of much worse quality than those initialized from the bounded surfaces defined above.

The main problem is whether or not the data set can be well approximated with an algebraic surface of the given degree. This is a problem that cannot be solved beforehand. In those cases with a positive answer, the fitted surfaces turn out to be good approximations of the original surfaces. In the cases with a negative answer, the quality of the fitted surface varies; some results are good and some are bad.

Figures 2, 3, and 4 show some examples with synthetic data, while figure 5 shows a couple of examples with range and CT data.

Figure 2: Bean. (a): Large data set. (b): Hypersphere fit to (a). (c): General fit to (a). (d): Small data set. (e): Hypersphere fit to (d). (f): General fit to (d).

Figure 3: Cup. (a): Large data set. (b): Hypersphere fit to (a). (c): General fit to (a). (d): Small data set. (e): Hypersphere fit to (d). (f): General fit to (d).

Figure 4: Torus. (a): Large data set. (b): Hypersphere fit to (a). (c): General fit to (a). (d): Small data set. (e): Hypersphere fit to (d). (f): General fit to (d).
In the synthetic examples the data was generated from algebraic surfaces, but the data points are not on the surfaces; they are close by. In fact, the data points are some vertices of a regular rectangular mesh constructed by recursive subdivision of a cube containing the original surface. The nonlinear minimization process was initialized with the hypersphere fit described above. The result of this initial fitting step is shown in the second column of the pictures, along with the data points. The third column shows the results of the nonlinear minimization, the fit based on the approximate distance introduced in this paper, along with the data set.

In all the cases the resulting surfaces are not constrained to be bounded, but in general they turned out to be bounded. An exception can be observed in figure 4, where the fit to the low resolution data set is unbounded. It is important to note that this new approximate distance can be used to fit bounded surfaces as well, using the parameterized families introduced in [30].
Finally, figure 5 shows some examples of fitting surfaces to real data. The first example is range data from a pepper. The second example is CT data. It is important to note that in the case of CT data the result is bounded, and it was impossible to get a stable result with the old method. This is one example where the data is so badly conditioned (very close to an ellipsoid) for a fourth degree surface fit that almost all the methods either fail to converge or produce bad fits.

Figure 5: Range and CT data. (a): Range data. (b): Hypersphere fit to (a). (c): General fit to (a). (d): CT data. (e): Hypersphere fit to (d). (f): General fit to (d).

9 Conclusions

We have described a new method to improve the algebraic surface fitting process by better approximating the Euclidean distance from a point to the surface. This method is more stable than previous algorithms, and produces much better results.

A Proofs

Proof [Lemma 1]: In the first place, under the current hypotheses, there exists a neighborhood of p_0 within which the Euclidean distance from the point p_t to the curve Z(f) is exactly equal to |t|,

    δ(p_t) = ‖p_t − p_0‖ = |t|

(see for example [31, Chapter 16]). Also, since p_0 is a regular point of Z(f), we have ∇f(p_0) ≠ 0, and by continuity of the partial derivatives, ‖∇f(p_t)‖^{-1} is bounded for small t; thus, we can divide by ‖∇f(p_t)‖ without remorse. And since w is a unit length normal vector, ∇f(p_0) = ±‖∇f(p_0)‖ w. Finally, by continuity of the second order partial derivatives, and by the previous facts, we obtain

    ∇f(p_t) = ∇f(p_0) + O(‖p_t − p_0‖) = ±‖∇f(p_0)‖ w + O(|t|)

and

    0 = f(p_0) = f(p_t) + ∇f(p_t)^t (p_0 − p_t) + O(‖p_0 − p_t‖²) = f(p_t) ∓ ‖∇f(p_t)‖ t + O(|t|²)

Moving f(p_t) to the other side, dividing by ‖∇f(p_t)‖ |t|, and taking absolute values, we obtain

    δ_1(p_t) / δ(p_t) = |f(p_t)| / (‖∇f(p_t)‖ |t|) = 1 + O(|t|)

which finishes the proof.
Proof [Lemma 2]: We first rewrite the polynomial (8) as a sum of forms

    f(x) = Σ_{k=0}^d { Σ_{|α|=k} f_α(p) (x − p)^α }

apply the triangle inequality

    |f(x)| ≥ |f(p)| − Σ_{k=1}^d | Σ_{|α|=k} f_α(p) (x − p)^α |

and the Cauchy-Schwarz inequality to the terms in the sum

    | Σ_{|α|=k} f_α(p) (x − p)^α | = | Σ_{|α|=k} ( (|α|; α)^{-1/2} f_α(p) ) ( (|α|; α)^{1/2} (x − p)^α ) |
        ≤ { Σ_{|α|=k} (|α|; α)^{-1} f_α(p)² }^{1/2} { Σ_{|α|=k} (|α|; α) ((x − p)^α)² }^{1/2}

where, for |α| = k, we denote (|α|; α) = k! / (α1! α2! α3!). On the right hand side of the previous inequality we identify the definition of f_k(p), followed by a function of ‖x − p‖, because from the multinomial formula we get

    { Σ_{|α|=k} (|α|; α) ((x − p)^α)² }^{1/2} = ‖x − p‖^k

It follows that

    |f(x)| ≥ f_0(p) − Σ_{k=1}^d f_k(p) ‖x − p‖^k = φ_p(‖x − p‖)

which finishes the proof.
Proof [Lemma 3]: Since the case d = 1 was the subject of Lemma 1, let us assume that d > 1. For small t ≠ 0 we have f_0(p_t) = |f(p_t)| ≠ 0 and f_1(p_t) = ‖∇f(p_t)‖ ≠ 0, so we can divide by f_0(p_t) and by f_1(p_t). Remembering that the simple approximate distance is δ_1(p_t) = f_0(p_t) / f_1(p_t), we can rewrite the equation φ_{p_t}(δ_d(p_t)) = 0 as follows

    1 − δ_d(p_t)/δ_1(p_t) = ( δ_d(p_t)/δ_1(p_t) ) { Σ_{k=2}^d (f_k(p_t)/f_1(p_t)) δ_d(p_t)^{k−2} } δ_d(p_t)

Now we observe that the three factors on the right side are nonnegative: the first factor is bounded by 1 because 0 ≤ δ_d(p_t) ≤ δ_1(p_t); the second factor is bounded by Σ_{k=2}^d f_k(p_t)/f_1(p_t) for δ_d(p_t) ≤ 1, which is continuous at t = 0 and so bounded for small t; and the last factor is bounded by δ_1(p_t). That is, we have shown that

    δ_d(p_t) = δ_1(p_t) ( 1 + O(δ_1(p_t)) )

which, based on Lemma 1, finishes the proof.
References

[1] A. Albano. Representation of digitized contours in terms of conic arcs and straight-line segments. Computer Graphics and Image Processing, 3:23-33, 1974.
[2] R.H. Biggerstaff. Three variations in dental arch form estimated by a quadratic equation. Journal of Dental Research, 51:1509, 1972.
[3] R.M. Bolle and D.B. Cooper. Bayesian recognition of local 3D shape by approximating image intensity functions with quadric polynomials. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(4), July 1984.
[4] R.M. Bolle and D.B. Cooper. On optimally combining pieces of information, with applications to estimating 3D complex-object position from range data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(5), September 1986.
[5] F.L. Bookstein. Fitting conic sections to scattered data. Computer Vision, Graphics, and Image Processing, 9:56-71, 1979.
[6] A. Borodin and I. Munro. The Computational Complexity of Algebraic and Numeric Problems. American Elsevier, New York, 1975.
[7] B. Cernuschi-Frias. Orientation and location parameter estimation of quadric surfaces in 3D from a sequence of images. PhD thesis, Brown University, May 1984.
[8] D.S. Chen. A data-driven intermediate level feature extraction algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7), July 1989.
[9] D.B. Cooper and N. Yalabik. On the cost of approximating and recognizing noise-perturbed straight lines and quadratic curve segments in the plane. NASA-NSG-5036/1, Brown University, March 1975.
[10] D.B. Cooper and N. Yalabik. On the computational cost of approximating and recognizing noise-perturbed straight lines and quadratic arcs in the plane. IEEE Transactions on Computers, 25(10):1020-1032, October 1976.
[11] J.E. Dennis and R.B. Schnabel. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, 1983.
[12] M.L. Dowling. A fast parallel Horner algorithm. SIAM Journal on Computing, 19(1):133-142, February 1990.
[13] R.O. Duda and P.E. Hart. Pattern Classification and Scene Analysis. John Wiley and Sons, 1973.
[14] O.D. Faugeras, M. Hebert, and E. Pauchon. Segmentation of range data into planar and quadric patches. In Proceedings, IEEE Conference on Computer Vision and Pattern Recognition, 1983.
[15] D. Forsyth, J.L. Mundy, A. Zisserman, and C.M. Brown. Projectively invariant representation using implicit algebraic curves. In Proceedings, First European Conference on Computer Vision, 1990.
[16] D.B. Gennery. Object detection and measurement using stereo vision. In Proceedings, DARPA Image Understanding Workshop, pages 161-167, April 1980.
[17] R. Gnanadesikan. Methods for Statistical Data Analysis of Multivariate Observations. Wiley, 1977.
[18] E.L. Hall, J.B.K. Tio, C.A. McPherson, and F.A. Sadjadi. Measuring curved surfaces for robot vision. IEEE Computer, pages 42-54, December 1982.
[19] D.J. Kriegman and J. Ponce. On recognizing and positioning curved 3D objects from image contours. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:1127-1137, 1990.
[20] K.A. Paton. Conic sections in automatic chromosome analysis. Machine Intelligence, 5:411, 1970.
[21] K.A. Paton. Conic sections in chromosome analysis. Pattern Recognition, 2:39, 1970.
[22] J. Ponce, A. Hoogs, and D.J. Kriegman. On using CAD models to compute the pose of curved 3D objects. In IEEE Workshop on Directions in Automated "CAD-Based" Vision, pages 136-145, Maui, Hawaii, June 1991. To appear in CVGIP: Image Understanding.
[23] J. Ponce, D.J. Kriegman, S. Petitjean, S. Sullivan, G. Taubin, and B. Vijayakumar. Techniques for 3D Object Recognition, chapter Representations and Algorithms for 3D Curved Object Recognition. Elsevier Press, 1993 (to appear).
[24] V. Pratt. Direct least squares fitting of algebraic surfaces. Computer Graphics, 21(4):145-152, July 1987.
[25] P.D. Sampson. Fitting conic sections to very scattered data: an iterative refinement of the Bookstein algorithm. Computer Vision, Graphics, and Image Processing, 18:97-108, 1982.
[26] G. Taubin. Nonplanar curve and surface estimation in 3-space. In Proceedings, IEEE Conference on Robotics and Automation, May 1988.
[27] G. Taubin. Estimation of planar curves, surfaces and nonplanar space curves defined by implicit equations, with applications to edge and range image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(11):1115-1138, November 1991.
[28] G. Taubin. Recognition and Positioning of Rigid Objects Using Algebraic and Moment Invariants. PhD thesis, Brown University, May 1991.
[29] G. Taubin. Rasterizing implicit curves by space subdivision. Technical Report RC17913, IBM, April 1992. Also, ACM Transactions on Graphics (to appear).
[30] G. Taubin, F. Cukierman, S. Sullivan, J. Ponce, and D.J. Kriegman. Parameterizing and fitting bounded algebraic curves and surfaces. In Proceedings, IEEE Conference on Computer Vision and Pattern Recognition, pages 103-108, Champaign, Illinois, June 1992.
[31] J.A. Thorpe. Elementary Topics in Differential Geometry. Springer-Verlag, 1979.
[32] K. Turner. Computer Perception of Curved Objects using a Television Camera. PhD thesis, University of Edinburgh, November 1974.
