
Taubin cvpr89



Representing and Comparing Shapes Using Shape Polynomials

Gabriel Taubin†, Ruud M. Bolle‡, and David B. Cooper†
† Laboratory for Engineering Man/Machine Systems, Brown University, Division of Engineering, Providence, RI 02912
‡ Exploratory Computer Vision Group, IBM T. J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY 10598

Abstract

We address the problem of multiresolution 2D and 3D shape representation. Shape is defined as a probability measure with compact support. Both object representations, typically sets of curves and/or surface patches, and observations, sets of scattered data, can be represented in this way. Global properties of shapes are defined as expectations (statistical averages) of certain functions. In particular, the moments of the shapes are global properties. For any shape S and every integer d ≥ 0 we associate a shape polynomial of degree 2d whose coefficients are functions of the moments of S. These polynomials are related to the shape S in an affine invariant way. They yield small values near S and large values far away, and their level sets approximate S. With the shape polynomials we define two distances between shapes. An asymmetric distance measures how well one shape fits as a subset of another one; a symmetric version indicates how equal two shapes are. The evaluation of these distance measures is determined via a sequence of computationally very fast matrix operations. The distance measures are used for recognition and positioning of objects in occluded environments.

1 Introduction

Much computer vision research has been focused on how to represent two-dimensional and three-dimensional objects. Since the central problems in model-based computer vision are object modeling, sensor data (images, depth maps, etc.) generation modeling, and model inferencing from sensor data, the representation of 2D and 3D shapes is a basic problem. A useful theory of shape will also have applications in other areas. Such applications include computer graphics, design automation, manufacturing automation, terrain mapping, vehicle guidance, surveillance, and intelligent robots.

One can distinguish boundary and region/volumetric representations. For the 3D case, the distinction between boundary and volumetric representations is somewhat artificial when dealing with visual data. What can be measured as input for a visual system are points on the object surface, obtained either by passive or active means [3]. If a shape-from-X technique [1] is used, the input may be surface normals. Information about the surface of the scene, and not about volume, is the input. Hence, the accessibility criterion of Marr and Nishihara [16] suggests a boundary representation for 3D objects. Also, Brady's criterion [8], that shape should be locally reconstructible, suggests a boundary representation for 2D shapes. In this paper, we will concentrate on 2D and 3D boundary representations; the theory can be extended to any dimension.

1.1 Some existing boundary representations

Let us review some 2D/3D shape representations. A 2D object can be represented by a 1D curve that bounds a "well-behaved" region [3]. Examples are the chain code [13], or implicit functions describing the 2D boundary. In the 3D case, implicit functions, for example, quadric surfaces popularized by Faugeras et al. [12], Bolle and Cooper [5, 6], and others [9, 14, 10], can be used to describe the surface of an object. Here, a complex surface is represented by patches of primitive surfaces. Superquadrics [4, 2, 7, 17], an extension of quadric surfaces, fall in the same category. An example of a superquadric is an ellipsoid that can smoothly deform into various exotic shapes. More recently, Taubin [18] has been using high-degree polynomials which permit the representation of complex collections of 2D curves and 3D surfaces with a single analytic function.
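As a concrete instance of the implicit-function representations reviewed above, here is a minimal sketch (our own illustration, not code from the paper; numpy and all names are our choices): a quadric surface is the zero set of a second-degree polynomial f(x) = xᵀAx + bᵀx + c, and a measured point lies on the surface exactly when f evaluates to (approximately) zero.

```python
import numpy as np

# Implicit quadric f(x) = x^T A x + b^T x + c; its zero set is the surface.
# Example (our choice): the ellipsoid x^2/4 + y^2/9 + z^2 - 1 = 0.
A = np.diag([1.0 / 4.0, 1.0 / 9.0, 1.0])
b = np.zeros(3)
c = -1.0

def f(x):
    """Evaluate the implicit polynomial at a 3D point."""
    x = np.asarray(x, dtype=float)
    return float(x @ A @ x + b @ x + c)

on_surface = [2.0, 0.0, 0.0]    # 2^2/4 - 1 = 0, so this point is on the surface
off_surface = [4.0, 0.0, 0.0]   # f is large away from the zero set
```

Note the difficulty discussed later in the paper: the zero set of such a polynomial can extend to infinity, so f being small does not by itself mean a point is near the measured object.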
1.2 A new shape representation

A representation of an object is typically a vector of features or a specification of the boundary of the object. Usually, a boundary representation is given by a finite set of curve segments and/or surface patches. Since 3D sensors provide scattered sets of points in three-space, which can be sparse for passive stereo (using isolated features) or tactile data, or dense for active stereo, shape reconstruction has to be achieved from sets of points, and the reconstruction has to be compared to the representation of the object. In this paper, we present a new view on boundary-based representations of shape and observations of such shapes. This view allows for a unified treatment of both data sets corresponding to observations of objects and representations of objects. In Section 2, we define shape as a probability measure with compact support, i.e., a distribution of mass. We define properties of shapes as expectations, or averages, of functions in Section 3. In particular, all the moments of a shape can be expressed in this way. In Section 4, we construct the shape polynomials from the higher-order moments of a shape. These polynomials yield low values near the original shape and large values elsewhere. Section 5 introduces distances between shapes by computing the expectation of a shape polynomial, corresponding to one shape, with respect to another shape. In Section 6, we discuss some applications and we describe a small part of an experimental 2D object recognition system.

2 Definition of shape

Let n be the dimension of the space. Although the theory is independent of the dimension, in this paper n = 2 or n = 3. Hence, a point x = (x, y)ᵀ or x = (x, y, z)ᵀ refers to a point in two-space and three-space, respectively.

2.1 Motivation

Let {x_1, ..., x_q} be a set of q ≥ 1 scattered data points in n-space. The mean value x̄ = (1/q) Σ_i x_i and the covariance matrix Σ = (1/q) Σ_i (x_i − x̄)(x_i − x̄)ᵀ provide information about the position, orientation, and spread of the data. In particular, the polynomial

    (x − x̄)ᵀ Σ⁻¹ (x − x̄) + 1        (1)

summarizes all this information. Now, by a sublevel set we will mean the set of points where this polynomial is less than or equal to a constant. These sublevel sets are a family of nested hyperellipsoidal regions centered at x̄ and with principal axes in the direction of the eigenvectors of Σ. In almost all the cases, the sublevel sets do not provide a good description of the shape of the original data set, because the polynomial (1) is a function of only low-order moments of the data. We will extend (1) by including higher order moments of the data, thereby capturing the information required to better approximate the data set. We will call these polynomials shape polynomials.

2.2 Shape as a probability measure

In Mechanics one usually deals with point masses, and with curve, surface, and volume mass densities. In Electrostatics, the objects of study are point charges and curve, surface, and volume charge densities. A unified foundation for both cases is provided by Measure and Probability Theory.

Before introducing our definition of shape, let us recall some definitions. A probability measure P is concentrated in a set S, or, alternatively, the set S supports P, if P(S) = 1. The support of P is the smallest closed set which supports P. The probability measure P has compact support if its support is bounded. And finally, if the support of P does not contain any open set in the usual Euclidean norm topology, then P is a singular probability measure.

Although our typical data sets can be represented by singular probability measures, because the data lies on points, curves, and surfaces, we define a shape as a probability measure with compact support. With this definition, a single point is a shape, the well-known Dirac δ. A finite set of q points, as the data set from above, can be represented in a similar fashion, as a sum of δ's with uniform weight 1/q.

In Mechanics and Electrostatics, we can measure how much mass or charge lies in a certain region of space by computing the curvilinear or surface integral of the mass or charge density along the part of the curve or surface present inside the given region. In our case, the amount of shape present in a specific region of space is just the fraction of length or area of the curve or surface included in the given region. The shape of a set of curve segments or surface patches is the weighted linear combination of the corresponding parts.

3 Properties of shapes

A property of a single point shape is just the value that a certain function attains at the given point. For a finite set of points, a property is the average value of the same function over all the points in the set. In general, we define a property of a shape as the expected value of a certain function. For example, for a finite set of points {x_1, ..., x_q}, the mean is the expectation of the vector function x, i.e., x̄ = (1/q) Σ_i x_i. The covariance is the expectation of the matrix function (x − x̄)(x − x̄)ᵀ, i.e., Σ = (1/q) Σ_i (x_i − x̄)(x_i − x̄)ᵀ.
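The construction in (1) is easy to state concretely. The sketch below is our own illustration rather than code from the paper (numpy and the helper names are our choices): it computes the mean and covariance of a 2D point set and evaluates the polynomial (x − x̄)ᵀ Σ⁻¹ (x − x̄) + 1, whose sublevel sets are the nested ellipses described above.

```python
import numpy as np

def degree_one_polynomial(points):
    """Given a (q, n) array of scattered points, return the function
    x -> (x - mean)^T Sigma^{-1} (x - mean) + 1, i.e. polynomial (1)."""
    mean = points.mean(axis=0)
    centered = points - mean
    cov = centered.T @ centered / len(points)   # covariance matrix Sigma
    cov_inv = np.linalg.inv(cov)
    def phi(x):
        d = np.asarray(x, dtype=float) - mean
        return float(d @ cov_inv @ d) + 1.0
    return phi

# Samples around an ellipse: the polynomial equals 1 at the mean and
# grows quadratically far away from the data.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
pts = np.c_[3.0 * np.cos(t), 1.0 * np.sin(t)]
phi = degree_one_polynomial(pts)
```

As the paper notes, this degree-2 polynomial only sees the first and second moments: for the ellipse above it cannot distinguish the ring of points from a filled elliptical blob with the same mean and covariance.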
3.1 General properties

For any measurable function f(x), the property it defines for a single point shape x₀ is

    f(x₀) = ∫ f(x) dP_{x₀}(x)

For a finite set of data points, the expressions of the previous paragraph can be generalized to the function f. And, for example, the property defined by the function f for a curve segment L yields

    E_L[f] = ∫ f(x) dP_L = (1/|L|) ∫_L f(x) dl(x)

where |L| is the length of the curve segment, |L| = ∫_L dl(x). Similarly, for a surface patch, properties are surface integrals. This can be extended to a set of curve segments or surface patches. Hence, it is possible to represent both local and global properties of shape.

3.2 Transformations of shapes and equivalence

In general, a surface or object can be sensed from any position. The points on a surface that are sensed from two sensing positions are related by an affine transformation. We wish to study how our shapes and their properties change with respect to these transformations. Given a shape S and some affine transformation T(x) = Ax + b, a new shape TS is induced by the transformation. That is, TS is the probability measure defined by the equation

    TS(B) = S(T⁻¹(B))

where B is a measurable set. Equivalently, since a probability measure is uniquely defined by the expectations of all possible measurable functions on the space, TS is also defined by the following equation

    E_{TS}[f] = E_S[f ∘ T]

where f is a measurable function. Then, two shapes S and Q are equivalent if there exists some affine transformation T such that Q = TS. Since the family of affine transformations is a group, this is a true relation of equivalence on the shapes.

4 Shape polynomials

In this section, a systematic generalization of the polynomial (1) is introduced, which will enable us to compare shapes.

4.1 Higher order shape polynomials

Let S be a shape. We rewrite (1) in the following way

    (x − x̄)ᵀ Σ⁻¹ (x − x̄) + 1 = Xᵀ [ 1, x̄ᵀ ; x̄, Σ + x̄x̄ᵀ ]⁻¹ X        (2)

where X = (1, xᵀ)ᵀ. Also note that the matrix [ 1, x̄ᵀ ; x̄, Σ + x̄x̄ᵀ ] = E_S[XXᵀ] is the matrix of the moments of S of order up to two.

[Figure 1: Ellipse — data set, and shape polynomial for d = 2 and d = 4.]

The polynomial of (2) can be generalized to higher order. Given a positive integer d, let X be the vector of all the monomials of degree ≤ d. For example, with d = 2 and n = 3 we have

    X = (1, x, y, z, x², xy, xz, y², yz, z²)ᵀ

In general, X is a column vector of m = (n+d choose n) elements. Now, let M_S be the nonnegative definite and symmetric matrix M_S = E_S[XXᵀ]. Unless S is perfectly represented by a polynomial in x of degree ≤ d, this matrix is nonsingular. Hence, with certain minor restrictions we can assume that this is the case for the classes of shapes involved [20]. The shape polynomial of order d associated with the shape S is

    Φ_S^d(x) = (1/m) Xᵀ M_S⁻¹ X

where m is the dimension of the vector X. We will write Φ_S(x) when the order is clearly understood. Note that Φ_S¹(x) is the polynomial (2). Since the matrix M_S is positive definite, so is M_S⁻¹, and Φ_S is a positive polynomial of degree 2d; in fact Φ_S(x) ≥ 1/m for all x.

To motivate the term shape polynomials, we show some examples. These are the graphs of shape polynomials represented as gray-level images, for different values of d. Darker regions correspond to low values and brighter regions to higher values. These are 2D examples, because they are easier to visualize, but the same conclusions hold in the 3D case. Figure 1 shows a set of noisy points located around an ellipse. Figure 2 shows a more complex shape, the boundary of an artificially generated wrench. Figure 3 shows another complex shape, produced by thresholding a real image. Note that as the degree d of the shape polynomial increases, Φ_S(x) approximates the original data set better. However, even for d = 2 the polynomial captures the essence of the data set well.

4.2 Properties of the shape polynomials

We state some properties of the shape polynomials. A detailed analysis, along with the proofs, will be found
in [20]. The most important property is the invariance. For any shape S and affine transformation T, the expression

    Φ_{TS}(x) = Φ_S(T⁻¹(x))

is a polynomial identity. Hence, the value of the shape polynomial at a point depends only on the relative position of the point with respect to the original shape.

Let S be a shape and Φ_S its corresponding shape polynomial. Then there exists a positive number c such that for every x, Φ_S(x) ≥ c‖x‖². Hence, Φ_S(x) grows to ∞ as ‖x‖ grows to ∞. An implication is that at least all of the sublevel sets are bounded. This is a very weak way to say that they approximate in some sense the shape of the original data set.

In the same way as we have done here, we can define trigonometric shape polynomials, replacing the vector of monomials by a vector of complex exponentials. For a large family of shapes we can prove that the sequence of trigonometric shape polynomials converges to the original shape [20]. The present case of regular shape polynomials is very closely related to the trigonometric case.

[Figure 2: Wrench — data set, and shape polynomial for d = 6 and d = 12.]

5 Comparing shapes

In this section, we will use the shape polynomials Φ_S(x) to define distance measures between shapes.

5.1 The Δ-distance between shapes

Let S and Q be two arbitrary shapes; we define the Δ-distance from S to Q as

    Δ_S(Q) = | E_Q[Φ_S] − 1 | = | ∫ Φ_S(x) dP_Q − 1 |

This distance measure is not a real distance in the sense of metric spaces. For example, in general Δ_S(Q) ≠ Δ_Q(S). The asymmetry is, in fact, required, since we want to measure the distance between an object S and a subobject Q. A symmetric version is useful for other applications:

    Δ(S, Q) = E_Q[Φ_S] + E_S[Φ_Q] − 2

5.2 Properties of the Δ-distances

This section is a consequence of the properties of the shape polynomials. The Δ-distance from a shape S to itself is zero, i.e.,

    Δ_S(S) = 0

The Δ-distance Δ_S(Q) is invariant with respect to simultaneous affine transformations of the shapes S and Q. If T(x) = Ax + b is an affine transformation, then

    Δ_{TS}(TQ) = Δ_S(Q)

[Figure 3: Plier — data set, and shape polynomial for d = 4 and d = 8.]

Finally, if we move shape Q to infinity keeping S fixed, the Δ-distance also goes to infinity. Similar statements hold for the symmetric version, along with the property Δ(S, Q) ≥ 0.

5.3 Computation of the Δ-distance

We give a fast algorithm for the evaluation of E_Q[Φ_S], which is the essential part of both distance measures. We are interested in two cases: the distance between the shapes S and Q, Δ(S, Q), and the distance between TS and Q,

    Δ_{TS}(Q) = Δ_S(T⁻¹Q)

The latter matches two shapes by transforming one with an affine transformation T. It can be shown [20] that

    E_Q[Φ_S] = ∫ Φ_S(x) dP_Q = (1/m) trace(M_Q M_S⁻¹) = (1/m) ‖L_S⁻¹ L_Q‖²

where ‖A‖² is the sum of the squares of the entries of the matrix A, and L_S and L_Q are the Cholesky factors of M_S and M_Q, respectively, with M_S = L_S L_Sᵀ, L_S a lower triangular matrix with positive diagonal elements, and similarly for M_Q and L_Q. The use of the Cholesky decomposition improves the numerical accuracy of the computation.

Given an affine transformation T, there exists a unique matrix A_T, which is a function of T only, such that X(T(x)) = A_T X(x) is a polynomial identity. The entries of the matrix A_T are polynomials in the parameters of T, for which analytic expressions are known [18]. To compute the distance to the transformed shape TQ, we then have

    E_{TQ}[Φ_S] = (1/m) ‖L_S⁻¹ A_T L_Q‖²
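Sections 4 and 5.3 combine into a short numerical pipeline: build the moment matrix M_S = E_S[XXᵀ] from data, then evaluate E_Q[Φ_S] through the Cholesky factors as (1/m)‖L_S⁻¹L_Q‖². The sketch below is our own numpy illustration of that pipeline, not the authors' implementation; it is restricted to discrete shapes with uniform weights, and the monomial ordering and helper names are our choices.

```python
import itertools
import numpy as np

def monomial_vector(x, d):
    """X(x): all monomials of the components of x with total degree <= d."""
    n = len(x)
    exps = [e for e in itertools.product(range(d + 1), repeat=n) if sum(e) <= d]
    return np.array([np.prod([x[i] ** k for i, k in enumerate(e)]) for e in exps])

def moment_matrix(points, d):
    """M_S = E_S[X X^T], averaging over the points of a discrete shape."""
    X = np.array([monomial_vector(p, d) for p in points])
    return X.T @ X / len(points)

def expected_phi(M_S, M_Q):
    """E_Q[Phi_S] = (1/m) trace(M_Q M_S^{-1}) = (1/m) ||L_S^{-1} L_Q||^2."""
    m = M_S.shape[0]
    L_S = np.linalg.cholesky(M_S)   # lower triangular, M_S = L_S L_S^T
    L_Q = np.linalg.cholesky(M_Q)
    A = np.linalg.solve(L_S, L_Q)   # L_S^{-1} L_Q without forming an inverse
    return float(np.sum(A * A)) / m

def asym_delta(M_S, M_Q):
    """Asymmetric distance Delta_S(Q) = |E_Q[Phi_S] - 1|."""
    return abs(expected_phi(M_S, M_Q) - 1.0)

def sym_delta(M_S, M_Q):
    """Symmetric distance Delta(S, Q) = E_Q[Phi_S] + E_S[Phi_Q] - 2."""
    return expected_phi(M_S, M_Q) + expected_phi(M_Q, M_S) - 2.0

# Example: two discrete shapes, the second an affine image of the first.
rng = np.random.default_rng(0)
pts_S = rng.standard_normal((80, 2))
pts_Q = 2.0 * pts_S + 1.0
M_S = moment_matrix(pts_S, 2)
M_Q = moment_matrix(pts_Q, 2)
```

For identical shapes, L_S⁻¹L_S = I and ‖I‖² = m, so expected_phi(M, M) = 1 exactly and both distances vanish, matching the property Δ_S(S) = 0 stated above.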
6 Applications

We start this section using the symmetric Δ-distance to estimate the position and orientation of one known and unoccluded object. Then we extend the procedure to the classification of one unoccluded object among a finite set of known models. Finally, we describe how to use both distance measures for recognizing objects in a cluttered environment. For the first two problems, the model of an object is a matrix of moments. For the latter, the object model is a collection of such matrices, corresponding to significant parts of the object.

6.1 One unoccluded object and one model

Let the data be represented by the shape S, the model by the shape Q, and let T be the linear transformation that matches the model with the data. For a fixed order d, the best match is produced by the minimizer of the expression Δ(S, TQ), a multimodal polynomial function of the parameters of T. To compute this minimizer we need a good initial approximation. We begin by using second degree polynomials, d = 1, for the approximation. Then the matrix M_S is just the usual scatter matrix of the data in the plane or 3-space. For d = 1 and T a rigid body transformation, determination of the estimate has an analytic solution, with two solutions in 2-space and four in 3-space in most of the cases. One can first rotate the model until the eigenvectors of its scatter matrix align with the corresponding ones of the data, thus estimating the orientation of the object being measured, and then translate the model until its mean coincides with the mean of the data, thus estimating the location of the object. Since there is more than one rotation possible, the ambiguity is then resolved by computing the symmetric distance for a higher degree approximation, i.e., some d > 1. Then, this initial transformation can be improved by a gradient descent technique. However, the initial transformation is a very good approximation to the global minimum.

6.2 One unoccluded object and several models

Here, we have to determine which model, Q_1, ..., Q_r, is the one which best matches the observed data S. For each model Q_i we compute an initial transformation T_i as described in the previous section. Then we classify S as belonging to the class of Q_j if

    Δ(S, T_j Q_j) = min_{1 ≤ i ≤ r} Δ(S, T_i Q_i)

Only for the model that minimizes this expression is the Δ-distance for some d > 1 iteratively minimized. Although the computation of the Δ-distances is not very expensive, the number of evaluations can be reduced. If the Δ-distance for d = 1 from the data to an object is large, there is no good match. For d = 1 the Δ-distance between two shapes is an analytic expression, a sum of quotients of corresponding eigenvalues [20]. A low value of the order one Δ-distance corresponds to almost equal eigenvalues of the data and model scatter matrices. Therefore, the first step of the classification can be implemented with a hash table [15] using a properly quantized function of the 2D or 3D eigenvalues. That is, the set of eigenvalues is used as the hash index and allows for quickly pruning the search space.

6.3 Modeling in terms of important parts

Our goal is to use the above procedures to develop techniques for recognizing objects in such hostile environments as, for example, a bin of parts or a cluttered environment. Recognition is based on matching small regions of observed data with parts of known models. The small regions will be assumed to constitute unoccluded observations of the corresponding parts of the models. If an object is modeled as a collection of geometrically related subobjects, the problem is reduced to finding the best collection of pairings between model subobjects and regions of observed data that satisfies the same set of global constraints.

We need a procedure to choose important parts of a model or data set. This procedure is described below. To speed up the recognition steps, the models are preprocessed. For each model, a standard position is computed, using the mean vector and eigenvectors of its scatter matrix. Every important part of a model is transformed to its own standard position relative to the model coordinate system. This transformation is stored, along with the Cholesky decomposition of the matrix of moments and the inverse of this matrix. This significantly reduces the number of operations in the recognition phase. All of the important parts are organized in the hash table described in the previous subsection. That is, the information about all the important parts of all the models is stored in a hash table indexed by eigenvalues.

The important parts were chosen as the data within a circular (rotation invariant) window of fixed radius. The center of each window was chosen at random and then adjusted, iteratively, until it was close to the mean of the data inside the window. This procedure tends to find windows containing high-curvature points and, more generally, regions with much structure, i.e., many data points. Other criteria for locating special interest regions are under study. Figure 4 shows all the windows computed for the data set and the two models used in the experiments.

Both the observation and model data of Figure 4 were acquired from images of tools taken against a white background. These images were thresholded to extract object boundaries. These boundaries, including deformations because of specular reflections and shadows, were used as data.

[Figure 4: Important Parts — Model 1, Model 2, and Data.]

[Figure 5: Hypothesis generated for model 1.]

[Figure 6: Hypothesis generated for model 2.]

6.4 Occluded objects and several models

In the recognition phase, the important parts of the data are extracted following the same procedure as for the models (Figure 4). A standard position is computed for each part, relative to a global coordinate system associated with the data. Hence, windows of data are chosen by a criterion that should result in windows corresponding to those chosen in the model. The data in a window is translated and rotated to put the mean and scatter matrix eigenvalues in standard position. Initial local matches are made of the eigenvalues for a window with those for model subparts; the eigenvalues of the data within the windows are used as an index in the hash table that contains the models.

The next stage in the recognition process is the combinatorial correspondence problem. Suppose that at a certain point in the recognition process, several important parts of the data are assigned to certain parts of the same model, and these assignments are spatially consistent. The union of these data points constitutes a subobject of the corresponding model, and the asymmetric Δ-distance is used to measure how well this data fits as a subset of the model. Simultaneously, other groups of important data parts will be assigned to other models, or the same model in a different position. The sum of these asymmetric Δ-distances, from groups of consistent data assignments to models, is a global consistency measure to be minimized.

One of the possibilities is a sequential algorithm based on the classification procedure described above. The best match is computed for every important part of the data; then consistent pairs are grouped together following a verification procedure. Another possibility is to use some kind of stochastic minimization technique, for example, simulated annealing, for optimizing the global performance function. Yet another possibility is explored in the following; it can be seen as a way to compute an initial configuration for stochastic relaxation.

Global hypotheses are generated by a weighted voting scheme based on purely local matching. Every possible pair of data and model regions, determined by entries in the hash table, generated a hypothesis for a model along with a transformation. The votes for model and transformation pairs were accumulated in a parameter space corresponding to the group of transformations. The weight of each vote was computed as 1/(1 + Δ)^K, with Δ the symmetric distance between model and data part and K a positive constant. Finally, peaks were extracted as global hypotheses about objects at certain locations.

This experiment was designed to understand and show the usefulness of the Δ-distance. Figures 5 and 6 show the hypotheses generated for the models and data set shown in Figure 4. These hypotheses were generated by matching procedures based on the eigenvectors and eigenvalues of the local scatter matrices. That is, the initial transformations estimated from the Δ-distance measure of order d = 1 were used. The hypotheses have to be verified, using the global consistency measure defined by the asymmetric Δ-distance. The location estimates of objects can be improved by a gradient descent technique applied to the global asymmetric Δ-distance. These techniques have been applied in closely related problems [18].

7 Discussion

By defining shape as a probability measure with compact support, object representations (curves, surfaces) and observations of such objects (scattered data points) are treated in a unified fashion. The concept of shape polynomials captures the properties of these fundamentally different shape types at multiple resolutions. Using the shape polynomials, two meaningful and computationally efficient measures of shape similarity are defined, a symmetric and an asymmetric version. It is shown that these tools can be used to recognize, classify, and estimate the positions of unoccluded objects, which is a part of the general problem of recognizing occluded objects in a complicated scene. Finally, it is
demonstrated that the asymmetric version of the distance measure can be used to implement a global consistency measure for the correspondence problem.

Some of the concepts introduced in this paper are generalizations of well-established concepts. The Δ-distances provide meaningful measures to fit data points or surface specifications of one object to those of other objects. Special cases of this are well-known, for example, methods to match points to planar patches. Though representing models and data by one or more polynomial surface patches [18] is an attractive approach, there are two difficulties that are eliminated with the approach in this paper. The first is that the zeros of a polynomial generally extend out to infinity. Hence, data points far from and not associated with an object can still be very close to its polynomial (e.g., a plane extends to infinity). The other problem is that polynomial surface fitting requires on-line matrix eigenvalue finding. Shape polynomials in this paper take low values only in the vicinity of the model, and the computation of the distance measure is less than for polynomial surface fitting.

The representation of one object as a collection of subobjects has been used before; however, the subobjects have been simpler, such as quadric patches. For 2D edge images, our representation allows us to use the information provided not only by the silhouette, but by the internal boundary structure too. This representation is a generalization of the concept of interest points, such as intersections of lines and high curvature points. Our interest points are subregions with a relatively complex structure, i.e., regions that can only be represented with higher d.

Finally, we are in the process of implementing a general vision system. The results are very promising, and we believe that the methods for shape representation and comparison are extremely consequential.

8 Acknowledgement

This work was partially supported by an IBM Manufacturing Research Fellowship and National Science Foundation Grant IRI-8715774.

References

[1] J. Aloimonos, "Visual shape computation," Proc. of the IEEE, Vol. 76, No. 8, Aug. 1988, pp. 899-916.
[2] R. Bajcsy & F. Solina, "Three dimensional object representation revisited," Proc. First Int'l Conf. on Comp. Vision, London, June 1987, pp. 231-240.
[3] D.H. Ballard & C.M. Brown, Computer Vision, New Jersey, Prentice-Hall, Inc., 1982.
[4] A.H. Barr, "Superquadrics and angle-preserving transformations," IEEE Comp. Graphics and Applications, Jan. 1981, pp. 11-23.
[5] R.M. Bolle & D.B. Cooper, "Bayesian recognition of local 3-D shape by approximating image intensity functions with quadric polynomials," IEEE Trans. Pattern Anal. Machine Intell., Vol. 6, July 1984, pp. 418-429.
[6] R.M. Bolle & D.B. Cooper, "On optimally combining pieces of information, with application to estimating 3-D complex-object position from range data," IEEE Trans. Pattern Anal. Machine Intell., Vol. PAMI-8, Sept. 1986, pp. 619-638.
[7] T.E. Boult & A.D. Gross, "Recovery of superquadrics from depth information," Proc. AAAI Workshop on Spatial Reasoning and Multisensor Integration, Oct. 1987.
[8] M. Brady, "Criteria for representing shape," Human and Machine Vision, A. Rosenfeld and J. Beck (eds.), New York, Academic Press, 1983.
[9] B. Cernuschi-Frias, "Orientation and location parameter estimation of quadric surfaces in 3-D from a sequence of images," Brown University Ph.D. Dissertation, May 1984; also Brown Univ. TR LEMS-5, Feb.
[10] R.D. Rimey & F.S. Cohen, "Maximum Likelihood Segmentation of Range Data," IEEE Trans. on Robotics & Automation, April 1988.
[11] D.B. Cooper, Y.P. Hung & G. Taubin, "A new model-based stereo approach for 3D surface reconstruction using contours on the surface pattern," Proc. Second Int'l Conf. on Comp. Vision, December 1988.
[12] O.D. Faugeras & M. Hebert, "A 3-D recognition and positioning algorithm using geometric matching between primitive surfaces," Proc. 8th Int. Joint Conf. on Artificial Intell., Aug. 1983, pp. 996-1002.
[13] H. Freeman, "Computer processing of line drawing images," Computer Surveys, Vol. 6, No. 1, March 1974, pp. 57-98.
[14] E.L. Hall, J.B.K. Tio, C.A. McPherson & F.A. Sadjadi, "Measuring curved surfaces for robot vision," IEEE Computer, Dec. 1982, pp. 42-54.
[15] Y. Lamdan, J.T. Schwartz & H.J. Wolfson, "Object Recognition by Affine Invariant Matching," Proc. IEEE Conf. on Computer Vision and Pattern Recognition, June 1988.
[16] D. Marr & H.K. Nishihara, "Representation and recognition of the spatial organization of three dimensional structure," Proc. R. Soc. London B, Vol. 200, 1978, pp. 269-294.
[17] A.P. Pentland, "Recognition by parts," Proc. First Int'l Conf. on Comp. Vision, London, June 1987, pp. 612-620.
[18] G. Taubin, "Algebraic Nonplanar Curve and Surface Estimation in 3-Space with Application to Position Estimation," Brown Univ. TR LEMS-43; also IBM Technical Report RC 13873, February 1988.
[19] G. Taubin, R.M. Bolle and D.B. Cooper, "Representing and Comparing Shapes Using Shape Polynomials," IBM Technical Report RC 14292, December 1988.
[20] G. Taubin, "About Shape Descriptors and Shape Matching," Brown Univ. TR LEMS (in preparation).