Computer-Aided Design 32 (2000) 825–846    www.elsevier.com/locate/cad

3D mesh geometry filtering algorithms for progressive transmission schemes

R. Balan a,*, G. Taubin b

a Institute for Mathematics and Its Applications, 207 Church St. S.E., Minneapolis, MN 55455, USA
b IBM T.J. Watson Research Center, P.O. Box 704, Yorktown Heights, NY 10598, USA

Accepted 14 March 2000

* Corresponding author at: Siemens Corporate Research, 755 College Road East, Princeton, NJ 08540, USA. E-mail addresses: balan@ima.umn.edu (R. Balan), taubin@watson.ibm.com (G. Taubin).

Abstract

A number of static and multi-resolution methods have been introduced in recent years to compress 3D meshes. In most of these methods, the connectivity information is encoded without loss of information, but user-controllable loss of information is tolerated while compressing the geometry and property data. All these methods are very efficient at compressing the connectivity information, in some cases to a fraction of a bit per vertex, but the geometry and property data typically occupy much more room in the compressed bitstream than the compressed connectivity data. In this paper, we investigate the use of polynomial linear filtering, as studied in Refs. [Taubin G. A signal processing approach to fair surface design. Computer Graphics Proc., Annual Conference Series 1995;351–358; Taubin G, Zhang T, Golub G. Optimal surface smoothing as filter design. IBM Research Report RC-20404, 1996], as a global predictor for the geometry data of a 3D mesh in multi-resolution 3D geometry compression schemes. Rather than introducing a new method to encode the multi-resolution connectivity information, we choose one of the efficient existing schemes depending on the structure of the multi-resolution data. After encoding the geometry of the lowest level of detail with an existing scheme, the geometry of each subsequent level of detail is predicted by applying a polynomial filter to the geometry of its predecessor lifted to the connectivity of the current level. The polynomial filter is designed to minimize the l2-norm of the approximation error, but other norms can be used as well. Three properties of the filtered mesh are studied next: accuracy, robustness and compression ratio. The Zeroth Order Filter (unit polynomial) is found to have the best compression ratio, but higher order filters achieve better accuracy and robustness at the price of a slight decrease of the compression ratio. © 2000 Elsevier Science Ltd. All rights reserved.

Keywords: Compression; Geometry; Multi-resolution; Filter

1. Introduction

Polygonal models are the primary 3D representations for the manufacturing, architectural, and entertainment industries. They are also central to multimedia standards such as VRML and MPEG-4. In these standards, a polygonal model is defined by the positions of its vertices (geometry), by the association between each face and its sustaining vertices (connectivity), and by optional colors, normals and texture coordinates (properties).

Several single-resolution [9,20] and multi-resolution [8,14,15,18] methods have been introduced in recent years to represent 3D meshes in compressed form for compact storage and transmission over networks and other communication channels. In most of these methods, the connectivity information is encoded without loss of information, and user-controllable loss is tolerated while compressing the geometry and property data. In fact, some of these methods only addressed the encoding of the connectivity data [6]. Multi-resolution schemes reduce the burden of generating hierarchies of levels on the fly, which may be computationally expensive and time consuming. In some of the multi-resolution schemes the levels of detail are organized in the compressed data in progressive fashion, from low to high resolution. This is a desirable property for applications which require transmission of large 3D data sets over low-bandwidth communication channels. Progressive schemes are more complex and typically not as efficient as single-resolution methods, but they reduce quite significantly the latency in the decoder process.
In this paper, we investigate the use of polynomial linear filtering [16,17] as a global predictor for the geometry data of a 3D mesh in multi-resolution 3D geometry compression schemes. As in other multi-resolution geometry compression schemes, the geometry of a certain level of detail is predicted as a function of the geometry of the next coarser level of detail. However, other 3D geometry compression schemes use simpler and more localized prediction schemes. Although we concentrate on the compression of geometry data, property data may be treated similarly. The methods introduced in this paper apply to the large family of multi-resolution connectivity encoding schemes. The linear filters are defined by the connectivity of each level of detail and a few parameters, which in this paper are obtained by minimizing a global criterion related to certain desirable properties. Our simulations compare the least-squares filters with some other standard filters.

In Ref. [12], the Butterfly subdivision scheme is used for the predictor. In Ref. [7], a special second-order local criterion is minimized to refine the coarser resolution mesh. The latter class of algorithms has concentrated on mesh simplification procedures and efficient connectivity encoding schemes. For instance, in the Progressive Forest Split scheme [18], the authors have used a technique where the sequence of splits is determined based on a local volume conservation criterion. The connectivity can then be efficiently compressed as presented in the aforementioned paper or as in Ref. [15]. Mesh simplification has also been studied in a different context. Several works address the remeshing problem, usually for editing purposes. For instance, in Ref. [4] a harmonic mapping is used to resample the mesh; the remeshing is obtained by minimizing a global curvature-based energy criterion. A conformal map is used in Ref. [10] for similar purposes, whereas in Ref. [11] again a global length-based energy criterion is used to remesh.

The organization of the paper is as follows. In Section 2, we review mesh topology based filtering and introduce the basic notions. In Section 3, we present two geometry-encoding algorithms. In Section 4, we analyze three desirable properties: accuracy, robustness and compression ratio. In Section 5, we present numerical and graphical results; finally, the conclusions are contained in Section 6 and are followed by the bibliography.

2. Mesh topology based filtering

Consider a mesh (V, F) given by a list of vertex coordinates V (the mesh geometry) of the nV vertices, and a list of polygonal faces F (the mesh connectivity). The mesh geometry can be thought of as a collection of three vectors (x, y, z) of length nV containing, respectively, the three coordinates of each vertex; alternatively, we can see V as representing a collection of nV vectors (r_0, r_1, ..., r_{nV-1}) of length 3, each of them being the position vector of some mesh vertex. To the list F we associate the symmetric nV × nV vertex-to-vertex incidence matrix M, and the nV × nV matrix K defined by:

    K = I − DM                                               (1)

where D is the nV × nV diagonal matrix whose (i, i) element is the inverse of the number of first-order neighbors of vertex i. As shown in Ref. [16], K has nV real eigenvalues, all in the interval [0, 2].

Consider now a collection P = (P_x(X), P_y(X), P_z(X)) of three polynomials, each of degree d, for some positive integer d.

Definition. We call P a polynomial filter of length d + 1 (and degree, or order, d); its action on the mesh (V, F) is defined by a new mesh (V', F) of identical connectivity but with geometry V' = (x', y', z') given by:

    x' = P_x(K) x;   y' = P_y(K) y;   z' = P_z(K) z          (2)

A rational filter (Q, P) of orders (m, n) is defined by two collections of polynomials (Q_x, Q_y, Q_z) and (P_x, P_y, P_z) of degrees m, respectively n, whose action on the mesh (V, F) is defined by the new mesh (V', F) through:

    Q_x(K) x' = P_x(K) x;   Q_y(K) y' = P_y(K) y;   Q_z(K) z' = P_z(K) z     (3)

To avoid possible confusion, we assume Q_x(K), Q_y(K) and Q_z(K) invertible. We point out that the filtered mesh has the same connectivity as the original mesh; only the geometry changes. Note also that the filter works for non-manifold connectivity as well.

In this report we consider only polynomial filters, i.e. rational filters of the form (1, P). In Ref. [3], the authors considered the case (Q, 1). Note that the distinction between polynomial and rational filters is artificial. Indeed, any rational filter is equivalent to a polynomial filter of length nV, in general, and in fact any polynomial filter of degree larger than nV is equivalent to a polynomial filter of degree at most nV − 1. These facts are consequences of the Cayley–Hamilton theorem (see Ref. [5], for instance), which says that the characteristic polynomial of K vanishes when applied to K. Therefore:

    Q(K)^{-1} P(K) = P_0(K)                                  (4)

for some polynomial P_0 of degree at most nV − 1. Hence, the notion of an Infinite Impulse Response (IIR) filter has no correspondence in mesh topology based filtering, because any higher order polynomial or rational filter is equivalent to a polynomial filter of degree at most nV − 1, thus a Finite Impulse Response (FIR) filter.
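To make Eqs. (1) and (2) concrete, the sketch below builds K = I − DM from a face list and applies a polynomial filter in the power basis. It is only an illustration: the function names are ours, a dense matrix is used purely for brevity (a sparse representation would be needed for meshes of the sizes reported later), and the example filter is the first-order smoother P(X) = 1 − X discussed in the next section.

```python
import numpy as np

def build_k_matrix(n_vertices, faces):
    """K = I - D M, where M is the vertex-to-vertex incidence matrix of the
    face list and D is diagonal with 1/(number of first-order neighbors)."""
    M = np.zeros((n_vertices, n_vertices))
    for face in faces:                      # each face is a list of vertex indices
        k = len(face)
        for a in range(k):                  # mark every edge of the face
            i, j = face[a], face[(a + 1) % k]
            M[i, j] = M[j, i] = 1.0
    degrees = M.sum(axis=1)
    degrees[degrees == 0] = 1.0             # guard isolated vertices
    D = np.diag(1.0 / degrees)
    return np.eye(n_vertices) - D @ M

def apply_polynomial_filter(K, coords, coeffs):
    """Apply P(K) x = sum_k c_k K^k x to one coordinate vector (Eq. (2))."""
    result = np.zeros_like(coords, dtype=float)
    power = coords.astype(float)            # K^0 x
    for c in coeffs:
        result += c * power
        power = K @ power                   # next power of K applied to x
    return result

# Example: the smoother P(X) = 1 - X on a toy tetrahedron.
faces = [[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]]
V = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
K = build_k_matrix(4, faces)
V_smooth = np.column_stack([apply_polynomial_filter(K, V[:, c], [1.0, -1.0])
                            for c in range(3)])
```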
However, the difference between polynomial and rational filters lies in their implementation. The polynomial filter is easily implemented by a forward iteration scheme. The rational filter can be implemented by a forward–backward iteration scheme:

    w = P_x(K) x;   Q_x(K) x' = w                            (5)

involving the solution of a linear system of size nV. For small degrees m, n compared to nV (when the rational form has an advantage), the backward iteration turns into a sparse linear system, and thus efficient methods can be applied to implement it.

Two particular filtering schemes are of special importance to us and are studied next. The first scheme is called the Zeroth Order Filter and is simply defined by the constant polynomial 1:

    P_Z(X) = (1, 1, 1)                                       (6)

Technically speaking, with the order definition given before, this is a zero order filter, but the most general form of zero order filters would be arbitrary constant polynomials, not necessarily 1. However, throughout this paper we keep the convention of calling the constant polynomial 1 the Zeroth Order Filter. Note that its action is trivial: it does not change anything.

The second distinguished filtering scheme is called Gaussian Smoothing and is a first-order filter defined by:

    P_G(X) = (1 − X, 1 − X, 1 − X)                           (7)

Using the definition of K and the filter action on the mesh geometry, the geometry of the Gaussian-filtered mesh is given by:

    x' = DMx;   y' = DMy;   z' = DMz                         (8)

which turns into the following explicit form (using the position vectors r_i and the first-order neighborhood i* of vertex i):

    r'_i = (1/|i*|) Σ_{v ∈ i*} r_v                           (9)

In other words, the new mesh geometry is obtained by taking the average of the first-order neighbors' positions on the original mesh.

3. The progressive approximation algorithms

In progressive transmission schemes, the original mesh is represented as a sequence of successively simplified meshes obtained by edge collapsing and vertex removal. Many simplification techniques have been proposed in the literature. For instance, in Ref. [18] the progressive forest split method is used: it consists of partitioning the mesh into disjoint patches and, in each patch, performing a connected sequence of edge collapses.

The meshes we are using here have been simplified by a clustering procedure. First, all the coordinates are normalized so that the mesh is included in a 3D unit cube. Next, the cube is divided along each coordinate axis into 2^B segments (B is the quantizing rate, representing the number of bits per vertex and coordinate needed to encode the geometry), thus obtaining 2^{3B} smaller cubes. In each smaller cube, all the edges are collapsed to one vertex placed in the center of the corresponding cube. The mesh thus obtained represents the quantized mesh at the finest resolution level. The coarsening process then proceeds as follows: 2^{3K} smaller cubes are replaced by one of edge size 2^K times bigger, and all the vertices inside are removed and replaced by one placed in the middle of the bigger cube. The procedure is repeated until we obtain a sufficiently small number of vertices (i.e. a sufficiently coarse resolution).

At each level of resolution, the collapsing ratio (i.e. the number of vertices of the finer resolution divided by the number of vertices of the coarser resolution) is not bigger than 2^{3K}. In practice, however, this number can be much smaller than this bound, in which case some levels may be skipped.
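The clustering procedure above can be sketched as follows. This is a minimal illustration under our own naming, not the authors' implementation: it quantizes the normalized coordinates at a given bit budget and collapses the vertices of each occupied cell to the cell center, so running it with B, B − K, B − 2K, … bits reproduces the hierarchy of levels together with the collapsing maps used below.

```python
import numpy as np

def cluster_simplify(vertices, bits):
    """Collapse all vertices falling in the same 2^-bits grid cell to the cell
    center; returns the coarse vertex array and the collapsing map (fine
    vertex index -> coarse vertex index, as used in Eq. (11))."""
    vmin = vertices.min(axis=0)
    scale = max(float((vertices.max(axis=0) - vmin).max()), 1e-12)
    unit = (vertices - vmin) / scale                     # normalize to unit cube
    cells = np.minimum((unit * 2**bits).astype(int), 2**bits - 1)
    cell_to_coarse = {}
    collapse_map = np.empty(len(vertices), dtype=int)
    coarse_vertices = []
    for i, cell in enumerate(map(tuple, cells)):
        if cell not in cell_to_coarse:
            cell_to_coarse[cell] = len(coarse_vertices)
            center = (np.array(cell) + 0.5) / 2**bits    # cell center in unit cube
            coarse_vertices.append(center * scale + vmin)
        collapse_map[i] = cell_to_coarse[cell]
    return np.array(coarse_vertices), collapse_map

# Repeating with bits = B, B-K, B-2K, ... yields the hierarchy of levels of detail.
```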
After l coarsening steps, the number of bits needed to encode one coordinate of any such vertex is B − lK. Thus, if we consider all the levels of detail and a constant collapsing ratio R, the total number of bits per coordinate needed to encode the geometry becomes:

    M_b = NB + (N/R)(B − K) + (N/R²)(B − 2K) + … + (N/R^L)(B − LK)

where N is the initial number of vertices and L the number of levels. Assuming 1/R^L ≪ 1, we obtain

    M_b ≈ NB·R/(R − 1) − NK·R/(R − 1)²

Thus, the number of bits per vertex (of the initial mesh) and coordinate turns into:

    N_bits = B·R/(R − 1) − K·R/(R − 1)²   [bits/vertex·coordinate]      (10)

Thus, if we quantize the unit cube using B = 10 bits and we resample at each level with a coarsening factor of 2^K = 2 and a collapsing ratio R = 2, we obtain a sequence of B/K = 10 levels of detail encoded using an average of 22 bits/vertex·coordinate, or 66 bits/vertex (including all three coordinates). A single resolution encoding would require only B (10, in this example) bits per vertex and coordinate in an uncompressed encoding. Using the clustering decomposition algorithm, the encoding of all levels of detail would require N_bits (given by Eq. (10)), about 22 in this example, which is more than twice the single resolution rate.

In this scenario, no information about the coarser resolution mesh has been used to encode the finer resolution mesh. In a progressive transmission, the coarser approximation may be used to predict the finer approximation mesh, and thus only the differences need to be encoded and transmitted. Moreover, the previous computations did not take into account the internal redundancy of the bitstream: an entropy encoder would perform much better than Eq. (10). In this paper, we do not discuss the connectivity encoding problem, since we are interested in the geometry encoding only; we assume that at each level of detail the decoder knows the connectivity of that level's mesh.

Suppose (Mesh_{nL-1}, map_{nL-2,nL-1}, Mesh_{nL-2}, map_{nL-3,nL-2}, …, map_{1,2}, Mesh_1, map_{0,1}, Mesh_0) is the sequence of meshes obtained by the coarsening algorithm, where Mesh_{nL-1} is the coarsest resolution mesh, Mesh_0 the finest resolution mesh, and map_{l-1,l}: {0, 1, …, nV_{l-1} − 1} → {0, 1, …, nV_l − 1} is the collapsing map that associates to the nV_{l-1} vertices of the finer resolution mesh the nV_l vertices of the coarser resolution mesh onto which they collapse. Each mesh Mesh_l has two components (Geom_l, Conn_l), the geometry and the connectivity, respectively, as explained earlier. We are concerned with the encoding of the sequence of geometries (Geom_{nL-1}, Geom_{nL-2}, …, Geom_1, Geom_0). Our basic encoding algorithm is the following.

3.1. The basic encoding algorithm

Step 1. Encode Geom_{nL-1} using an entropy or arithmetic encoder;
Step 2. For l = nL − 1 down to 1 repeat:
  Step 2.1. Based on mesh Mesh_l and connectivity Conn_{l-1}, find a set of parameters Param_{l-1} and construct a predictor of the geometry Geom_{l-1}:
      Ĝeom_{l-1} = Predictor(Mesh_l, Conn_{l-1}, map_{l-1,l}, Param_{l-1})
  Step 2.2. Encode the parameters Param_{l-1};
  Step 2.3. Compute the approximation error Diff_{l-1} = Geom_{l-1} − Ĝeom_{l-1} and encode the differences. ∎

The decoder reconstructs the geometry at each level by simply adding the difference to its prediction:

    Geom_{l-1} = Predictor(Mesh_l, Conn_{l-1}, map_{l-1,l}, Param_{l-1}) + Diff_{l-1}

It is clear that different predictors yield different performance results. In the next section, we present several desirable properties of the encoding scheme. The data packet structure is represented in Table 1.

Table 1. The data packet structure for the basic encoding algorithm

  Mesh_{nL-1} | Map_{nL-2,nL-1} & Conn_{nL-2} | Param_{nL-2} | Diff_{nL-2} | ··· | Map_{0,1} & Conn_0 | Param_0 | Diff_0

The Predictor consists of applying the sequence of operators extension, where the geometry of level l is extended to level l − 1, and update, where the geometry is updated using a polynomial filter whose coefficients are called the parameters of the predictor and whose matrix is the finer resolution matrix K_{l-1}. The extension step is straightforwardly realized using the collapsing maps:

    r_i^{l-1,ext} = r^l_{map_{l-1,l}(i)}                     (11)
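A schematic rendering of the predictor (extension as in Eq. (11), followed by the polynomial update) and of the encoding loop of Section 3.1. The helper names, the placeholder fit and encode callables, and the level ordering (index 0 is the coarsest level here, unlike the paper's convention) are ours; the entropy coder itself is not shown.

```python
import numpy as np

def predict_geometry(coarse_geom, collapse_map, K_fine, coeffs_xyz):
    """Predictor of Section 3: extension (Eq. (11)) followed by the polynomial
    update of Eq. (12), with one filter per coordinate."""
    extended = coarse_geom[collapse_map]            # lift coarse positions to fine vertices
    predicted = np.empty_like(extended)
    for c in range(3):
        x = extended[:, c]
        acc, power = np.zeros_like(x), x.copy()
        for ck in coeffs_xyz[c]:                    # P(K) x in the power basis
            acc += ck * power
            power = K_fine @ power
        predicted[:, c] = acc
    return predicted

def encode_levels(geoms, collapse_maps, K_mats, fit, encode):
    """Basic encoding loop (Section 3.1). Here geoms[0] is the coarsest level;
    `fit` returns per-coordinate filter coefficients and `encode` stands in for
    any entropy/arithmetic coder -- both are placeholders, not the paper's code."""
    stream = [encode(geoms[0])]
    for lvl in range(1, len(geoms)):
        coeffs = fit(geoms[lvl - 1], geoms[lvl], collapse_maps[lvl], K_mats[lvl])
        prediction = predict_geometry(geoms[lvl - 1], collapse_maps[lvl],
                                      K_mats[lvl], coeffs)
        stream.append((encode(coeffs), encode(geoms[lvl] - prediction)))
    return stream
```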
Fig. 1. The car mesh at the finest resolution (left) and the coarsest resolution (right) after seven levels of reduction.

Thus, the first "prediction" of the new vertex i is the point onto which it collapses, i.e. the position of the vertex map_{l-1,l}(i) in mesh l. Next, the updating step is performed by polynomial filtering as in Eq. (2). The filter coefficients are the predictor parameters and have to be found and encoded. On each coordinate, we use a separate filter. In the next section, we introduce different criteria to measure the prediction error associated to a specific property. In Ref. [18], Taubin filters (i.e. of the form P(X) = (1 − λX)(1 − μX)) have been used as predictors, but no optimization of the parameters was done. Here, we use more general linear filters and take several performance criteria into account as well.

More specifically, let x^{l-1,ext} denote the nV_{l-1}-vector of x-coordinates obtained by the extension (11), and x^{l-1,updt} the vector filtered with the polynomial P_x(X) of degree d:

    x^{l-1,updt} = P_x(K_{l-1}) x^{l-1,ext}                  (12)

Let r^{l-1} denote the nV_{l-1} × 3 matrix containing all the coordinates in the natural order, r^{l-1} = [x^{l-1} y^{l-1} z^{l-1}]; similarly for r^{l-1,updt}. The update is our prediction for the geometry Geom_{l-1}. The coefficients are then chosen to minimize some l_p-norm of the prediction error:

    min_{filter coefficients} J^{l-1} = ‖r^{l-1} − r^{l-1,updt}‖_{l_p}      (13)

Note that the optimization problem decouples into three independent optimization problems, because we allow different filters on each coordinate. The polynomial P_x(X) can be represented either in the power basis, i.e. P_x(X) = Σ_{k=0}^{d} c_k X^k, or in another basis. We tried the Chebyshev basis as well, in which case P_x(X) = Σ_{k=0}^{d} c_k T_k(X), with T_k the kth Chebyshev polynomial.

Fig. 2. The car mesh at the finest resolution level when no difference is used and the filtering is performed by: the Zeroth Order Filter (top left); the Gaussian Smoother (top right); the least-squares filter of order 1 (bottom left) and the least-squares filter of order 3 (bottom right).
On each coordinate, the criterion J^{l-1} decouples as follows:

    J^{l-1} = ‖(J_x^{l-1}, J_y^{l-1}, J_z^{l-1})‖_{l_p},
    J_x^{l-1} = ‖A_x c_x^{l-1} − x^{l-1}‖_{l_p},
    J_y^{l-1} = ‖A_y c_y^{l-1} − y^{l-1}‖_{l_p},
    J_z^{l-1} = ‖A_z c_z^{l-1} − z^{l-1}‖_{l_p}              (14)

where the nV_{l-1} × (d + 1) matrix A_x is either

    A_x = [x^{l-1,ext}  K_{l-1} x^{l-1,ext}  …  K_{l-1}^d x^{l-1,ext}]      (15)

in the power basis case, or

    A_x = [x^{l-1,ext}  T_1(K_{l-1}) x^{l-1,ext}  …  T_d(K_{l-1}) x^{l-1,ext}]      (16)

in the Chebyshev basis case. Here c_x^{l-1} is the (d + 1)-vector of the x-coordinate filter coefficients and x^{l-1} the nV_{l-1}-vector of the actual x-coordinates, all at level l − 1. Similarly for A_y, A_z, c_y, c_z, and y^{l-1}, z^{l-1}.

The basic encoding algorithm can be modified to a more general setting. The user may select the levels for which the differences are sent. For the levels whose differences are not sent, the extension step to the next level has to use the predicted values instead of the actual values of the current level. In particular, we may want to send the differences starting with level nL − 1 and going down to some level S + 1; then, from level S down to level 0, we do not send any differences but just the parameters, except for level 0, for which we send the differences as well. The resulting algorithm is presented next.

3.2. The variable length encoding algorithm

Step 1. Encode Mesh_{nL-1};
Step 2. For l = nL − 1 down to S repeat:
  Step 2.1. Estimate the parameters Param_{l-1} by minimizing J^{l-1}, where the predictor uses the true geometry of level l, Geom_l:
      Ĝeom_{l-1} = f(Conn_l, Conn_{l-1}, map_{l-1,l}, Geom_l, Param_{l-1})
  Step 2.2. Encode the parameters Param_{l-1};
  Step 2.3. If l > S, encode the differences Diff_{l-1} = Geom_{l-1} − Ĝeom_{l-1};
Step 3. For l = S − 1 down to 1 repeat:
  Step 3.1. Estimate the parameters Param_{l-1} by minimizing J^{l-1}, where the predictor uses the estimated geometry of level l:
      Ĝeom_{l-1} = f(Conn_l, Conn_{l-1}, map_{l-1,l}, Ĝeom_l, Param_{l-1})
  Step 3.2. Encode the parameters Param_{l-1};
Step 4. Using the last prediction of level 0, encode the differences Diff_0 = Geom_0 − Ĝeom_0. ∎

In this case, the data packet structure is the one represented in Table 2. In particular, for S = nL only the last set of differences is encoded; this represents an alternative to the single-resolution encoding scheme.

Table 2. The data packet structure for the variable length encoding algorithm

  Mesh_{nL-1} | Map_{nL-2,nL-1} Conn_{nL-2} Param_{nL-2} Diff_{nL-2} | … | Map_{S,S+1} Conn_S Param_S Diff_S | Map_{S-1,S} Conn_{S-1} Param_{S-1} | … | Map_{1,2} Conn_1 Param_1 | Map_{0,1} Conn_0 Param_0 Diff_0

Table 3. Compression ratio results for several filtering schemes applied to the car mesh rendered in Fig. 1

  Filter               Zeroth Order   Gaussian Smoothing   LS of degree 1   LS of degree 3
  Coarsest mesh        795            795                  795              795
  Coefficients         0              0                    168              336
  Differences          21,840         64,611               22,520           22,574
  Total (bytes)        22,654         65,425               23,503           23,725
  Rate (bits/vertex)   14.17          40.94                14.71            14.82

Table 4. Compression ratios for different filter lengths in the power basis (car mesh)

  Filter's degree   1      2      3      4      5      6      7
  bits/vertex       14.71  14.82  14.85  14.89  14.96  15.04  15.11

Table 5. Compression ratios in the single resolution implementation of the variable length encoding algorithm applied to the car mesh rendered in Fig. 1

  Filter                      Zeroth Order   Gaussian Smoothing   LS of degree 1   LS of degree 3
  Coarsest mesh               795            795                  795              795
  Coefficients                0              0                    168              336
  Differences                 27,489         27,339               25,867           25,532
  Total (bytes)               28,306         28,156               26,859           26,690
  Rate (bits/vertex)          17.71          17.62                16.81            16.68
  l2 error/vertex (×10⁻⁴)     11.96          18.64                9.33             8.44
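For p = 2, the per-coordinate problems in Eq. (14) reduce to ordinary least squares. The sketch below builds the power-basis design matrix of Eq. (15) from the extended coordinates (our reading of that equation) and solves for the coefficients with a numerically stable least-squares routine rather than the explicit pseudoinverse of Eq. (19); the names are ours.

```python
import numpy as np

def design_matrix(K, x_ext, degree):
    """Power-basis design matrix of Eq. (15): columns K^k x_ext, k = 0..degree."""
    cols = [x_ext]
    for _ in range(degree):
        cols.append(K @ cols[-1])
    return np.column_stack(cols)

def fit_ls_filter(K, x_ext, x_true, degree):
    """Least-squares filter coefficients minimizing ||A c - x_true||_2 (Eqs. (14), (19))."""
    A = design_matrix(K, x_ext, degree)
    coeffs, *_ = np.linalg.lstsq(A, x_true, rcond=None)
    return coeffs

# One filter per coordinate; e.g. for the x coordinate of level l-1:
# cx = fit_ls_filter(K_fine, x_extended, x_actual, degree=3)
```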
Fig. 3. Semilog histograms of the prediction errors associated to the four filters for the Single Resolution scheme applied to the car mesh rendered in Fig. 1.

Fig. 4. The color plot of the single resolution approximation errors for the four different filters: Zeroth Order Filter (upper left); Gaussian Smoothing (upper right); LS first order (lower left) and LS third order (lower right).
4. Desired properties

In this section, we discuss three properties we may want the encoding scheme to possess. The three properties — accuracy, robustness, and compression ratio — yield different optimization problems, all of the type mentioned before. The l_p-norm to be minimized is different in each case: for accuracy the predictor has to minimize the l_∞ norm, for robustness the l_2 norm should be used, whereas the compression ratio is optimized for p ∈ [1, 2] in general. Thus, a sensible criterion should be a trade-off between these various norms. Taking the computational complexity into account, we have chosen the l_2-norm as our criterion, and in the examples section we show several results obtained with it.

4.1. Accuracy

Consider the following scenario: suppose we choose S = nL, the number of levels, in the variable length encoding algorithm. Suppose also that the data block containing the level zero differences is lost (note this is the only data block containing differences, because S = nL). In this case, we would like to predict the finest resolution mesh as accurately as possible based on the available information. Equivalently, we would like to minimize the distance between Mesh_0 and the prediction M̂esh_0, under the previous hypothesis.

Table 6. Compression ratio results for several filtering schemes applied to the round table mesh rendered in Fig. 5

  Filter               Zeroth Order   Gaussian Smoothing   LS of degree 1   LS of degree 3
  Coarsest mesh        438            438                  438              438
  Coefficients         0              0                    168              336
  Differences          22,739         51,575               23,645           23,497
  Total (bytes)        23,196         52,032               24,274           24,295
  Rate (bits/vertex)   15.63          35.07                16.36            16.37

Table 7. Compression ratios for different filter lengths, in the power basis, for the round table

  Filter's degree   1      2      3      4      5      6      7
  bits/vertex       16.36  16.34  16.37  16.50  16.62  16.73  16.88

Fig. 5. The round table mesh at the finest resolution (left) and the coarsest resolution (right) after seven levels of reduction.
Fig. 6. The round table mesh at the finest resolution level when no difference is used and the filtering is performed by: the Zeroth Order Filter (top left); the Gaussian Smoother (top right); the least squares filter of order 1 (bottom left) and the least squares filter of order 3 (bottom right).

There are many ways of measuring mesh distances. One such measure is the Hausdorff distance. Although it describes very well the closeness of two meshes, the Hausdorff distance yields a computationally expensive optimization problem. Instead of the Hausdorff distance one can consider the maximum distance between corresponding vertices (i.e. the l_∞-norm, see Ref. [1]):

    ε_a = max_{0 ≤ i ≤ nV_0 − 1} ‖r_i^0 − r̂_i^0‖ = ‖r^0 − r̂^0‖_{l_∞}

Note that the l_∞-norm is an upper bound for the Hausdorff distance; consequently, ε_a controls the closeness of the meshes as well.
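Under our reading of the accuracy measure, ε_a is simply the worst-case vertex deviation between the true and predicted finest meshes. The sketch below also computes an average l2 error per vertex, which is our assumption for the quantity reported in the "l2 error/vertex" rows of the result tables; both names are ours.

```python
import numpy as np

def accuracy_errors(r_true, r_pred):
    """r_true, r_pred: nV x 3 arrays of vertex positions on the same connectivity.
    Returns the l-infinity accuracy measure (maximum coordinate deviation over
    all vertices, an upper bound for the Hausdorff distance) and, for comparison,
    the average l2 error per vertex."""
    diff = r_true - r_pred
    eps_inf = float(np.abs(diff).max())
    mean_l2 = float(np.linalg.norm(diff, axis=1).mean())
    return eps_inf, mean_l2
```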
Fig. 7. Semilog histograms of the prediction errors associated to the four filters for the Single Resolution scheme applied to the round table rendered in Fig. 5.

As mentioned in the previous section, the optimization problem (13) decouples into three independent optimization problems. For p = ∞, these have the following form:

    inf_c ‖Ac − b‖_{l_∞}                                     (17)

where A was introduced in Eqs. (15) and (16), depending on the basis choice, c is the vector of unknown filter coefficients, and b is one of the three vectors x, y or z. For 0 ≤ i ≤ nV − 1 and 0 ≤ j ≤ d, write A = [a_ij], b = (b_i), and c_j = f_j − g_j, with f_j ≥ 0 the positive part and g_j ≥ 0 the negative part of c_j (so that at least one of them is always zero). The optimization problem (17) then turns into the following linear programming problem:

    max_{w, f_j, g_j, u_i, v_i}  [ −w − ε Σ_{j=0}^{d} (f_j + g_j) ]
    subject to:  w, f_j, g_j, u_i, v_i ≥ 0
                 b_i = u_i + Σ_{j=0}^{d} a_ij (f_j − g_j) − w
                 b_i = −v_i + Σ_{j=0}^{d} a_ij (f_j − g_j) + w          (18)

with ε a small number that enforces at least one of f_j or g_j to be zero (for instance ε = 10⁻⁶). With the standard simplex algorithm, this problem requires the storage of a (2nV + 2) × (2nV + 2d + 2) matrix (the so-called tableau), which is prohibitive for a large number of vertices (nV of the order of 10⁵, for instance). In any case, the moral of this subsection is that the more accurate predictor is the one that achieves a lower l_∞-norm error.
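The l_∞ fit of Eq. (17) can also be solved with a general-purpose LP solver instead of the hand-built simplex tableau of Eq. (18). A minimal sketch using SciPy, with the standard reformulation "minimize t subject to −t ≤ (Ac − b)_i ≤ t" (the positive/negative splitting of the coefficients is unnecessary here because the solver accepts free variables); the function name is ours.

```python
import numpy as np
from scipy.optimize import linprog

def fit_linf_filter(A, b):
    """Chebyshev (l-infinity) fit min_c ||A c - b||_inf, the criterion behind
    Eq. (17), posed as a small LP over z = [c, t]."""
    n, m = A.shape
    cost = np.zeros(m + 1)
    cost[-1] = 1.0                                   # minimize t
    ones = np.ones((n, 1))
    A_ub = np.vstack([np.hstack([A, -ones]),         #  (A c - b)_i <= t
                      np.hstack([-A, -ones])])       # -(A c - b)_i <= t
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * m + [(0, None)]        # free coefficients, t >= 0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:m], res.x[-1]                      # coefficients, achieved error
```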
Fig. 8. The color plot of the single resolution approximation errors for the four different filters: Zeroth Order Filter (upper left); Gaussian Smoothing (upper right); LS first order (lower left) and LS third order (lower right).

Fig. 9. The skateboard mesh at the finest resolution (left) and the coarsest resolution (right) after eight levels of reduction.
Fig. 10. The skateboard mesh at the finest resolution level when no difference is used and the filtering is performed by: the Zeroth Order Filter (top left); the Gaussian Smoother (top right); the least squares filter of order 1 (bottom left) and the least squares filter of order 3 (bottom right).

Fig. 11. Semilog histograms of the prediction errors associated to the four filters for the Single Resolution scheme applied to the mesh rendered in Fig. 9.
Fig. 12. The color plot of the single resolution approximation errors for the four different filters: Zeroth Order Filter (upper left); Gaussian Smoothing (upper right); LS first order (lower left) and LS third order (lower right).

4.2. Robustness

Consider now the following scenario: the differences associated to the prediction algorithm are not set to zero but are perturbed by some random quantities. This may be due to several causes: irretrievable transmission errors, or a resampling process at the transmitter intended to reduce the code length of the entire object. In any case, we assume the true difference diff_i is perturbed by some stochastic process n_i. Thus, the reconstructed geometry has the form

    x_i^{l-1,reconstr} = x_i^{l-1,updt} + diff_i + n_i

We assume the perturbations are of about the same size as the prediction differences. Suppose we want to minimize the effect of these perturbations on average. One such criterion is the noise variance E[n_i²]. Assuming the stochastic process is ergodic, the noise variance can be estimated by the average of all the coordinate perturbations:

    E[n_i²] ≈ (1/N) Σ_{i=0}^{N−1} n_i²

Since the perturbation is of the same order as the prediction error, the latter term can be replaced by the average of the differences. Hence, we want to minimize

    E[n_i²] ≈ (1/N) Σ_{i=0}^{N−1} diff_i²

This shows that a criterion of robustness is the total energy of the differences. In this case, our goal of increasing the robustness of the algorithm is achieved by decreasing the l_2-norm of the prediction errors. Thus, the filters are the solvers of the optimization problem (13) for p = 2. The solution in terms of filter coefficients is easily obtained by using the pseudoinverse matrix: the solution of

    inf_c ‖Ac − b‖_{l_2}

is given by:

    c = (Aᵀ A)⁻¹ Aᵀ b                                        (19)

4.3. Compression ratio

The third property we discuss is the compression ratio the algorithm achieves. In fact, if no error or further quantization is assumed, the compression ratio is perhaps the most important criterion for judging and selecting an algorithm. In general, estimating compression ratios is a tough problem for several reasons. First of all, one should assume a stochastic model of the data to be encoded; in our case, we encode the vectors of prediction errors, which in turn depend on the mesh geometry and on the way we choose the filters' coefficients. Next, one should have an exact characterization of the encoder's compression ratio. The best compression ratio, assuming purely stochastic data, is given by Shannon's entropy formula and, consequently, by the entropy encoder which strives to achieve this bound (Shannon–Fano and Huffman codings — see Ref. [21] or [2]). However, the entropy encoder requires some a priori information about the data to be sent, as well as overhead information that may affect the global compression ratio. Alternatively, one can use adaptive encoders such as the adaptive arithmetic encoder of the JPEG/MPEG standards (see Ref. [13]). This encoder may perform better in practice than blind entropy or arithmetic encoders; however, it has the important shortcoming that its compression ratio is not characterized by a closed formula.

Fig. 13. The piping mesh at the finest resolution (left) and the coarsest resolution (right) after seven levels of reduction.
Fig. 14. The piping mesh at the finest resolution level when no difference is used and the filtering is performed by: the Zeroth Order Filter (top left); the Gaussian Smoother (top right); the least squares filter of order 1 (bottom left) and the least squares filter of order 3 (bottom right).

In any case, for purely stochastic data the best compression ratio is bounded by Shannon's formula, which we discuss next; we assume our bit sequence encoding scheme achieves this optimal bound. Suppose the quantized differences x_i, 0 ≤ i ≤ N − 1, are independently distributed and have a known probability distribution, say p(n), −2^{B−1} ≤ n ≤ 2^{B−1}; thus p(n) is the probability that a difference equals n. In this case, the average (i.e. expected) number of bits needed to encode one such difference is not less than:

    R_Shannon = − Σ_{n=−2^{B−1}}^{2^{B−1}} p(n) log₂ p(n)

where 2^B is the number of quantization levels (see Ref. [19]). Assuming now that the ergodic hypothesis holds, p(n) can be replaced by the repetition frequency p(n) = f(n)/N, where f(n) is the number of occurrences of the value n and N the total number of values (presumably N = 3nV). Thus, if we replace the first p(n) in the above formula by this frequency, the sum turns into

    R = −(1/N) Σ_{i=0}^{N−1} log₂ p(n = x_i)

Note that the summation index has changed. At this point we have to assume a stochastic model for the prediction errors. We consider the power-type distribution that generalizes both the Gaussian and Laplace distributions, which are frequently used in computer graphics models (see Ref. [12], for instance):

    p(x) = (α a^{1/α} / (2Γ(1/α))) exp(−a|x|^α)              (20)

where Γ(x) is Euler's Gamma function (normalizing the expression) and a is a parameter. For α = 1 it becomes the Laplace distribution, whereas for α = 2 it turns into the Gauss distribution.

Table 8. Compression ratios in the Single Resolution implementation of the variable length encoding algorithm applied to the round table mesh rendered in Fig. 5

  Filter                      Zeroth Order   Gaussian Smoothing   LS of degree 1   LS of degree 3
  Coarsest mesh               438            438                  438              438
  Coefficients                0              0                    168              336
  Differences                 30,386         29,138               27,972           27,377
  Total (bytes)               30,847         29,599               28,609           28,146
  Rate (bits/vertex)          20.79          19.95                19.28            18.97
  l2 error/vertex (×10⁻²)     15.85          17.38                11.17            9.97
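Before specializing to the power-law model, the empirical form of the Shannon bound above can be computed directly from the histogram of the quantized differences; a minimal sketch (our naming), useful for comparing predictors before running an actual entropy or arithmetic coder:

```python
import numpy as np

def empirical_rate(differences):
    """Empirical Shannon bound: replace p(n) by the repetition frequency
    f(n)/N of each quantized difference value and return the entropy in
    bits per encoded value."""
    values, counts = np.unique(np.asarray(differences, dtype=int), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# e.g. empirical_rate(quantized_prediction_errors) estimates bits per coordinate.
```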
Then, the previous rate formula turns into:

    R = R_0 + (a log₂ e / N) Σ_{i=0}^{N−1} |x_i|^α,   R_0 = 1 + log₂ Γ(1/α) − log₂ α − (1/α) log₂ a

Now we replace the parameter a by an estimate. An easy computation shows that the expected value of |x|^α for the α-power p.d.f. (Eq. (20)) is E[|x|^α] = 1/(αa). Thus, we get the following estimator for the parameter a:

    â = N / (α Σ_{i=0}^{N−1} |x_i|^α)

and the above formula of the rate becomes:

    R = r_0(α) + (1/α) log₂( (1/N) Σ_{i=0}^{N−1} |x_i|^α ),
    r_0(α) = 1 + log₂(Γ(1/α)/α) + (1/α) log₂(eα)             (21)

Consider now two linear predictors associated to two different linear filters. Each of them will have different prediction errors. If we assume the prediction errors are independent in each case and distributed by the same power law with exponent α but possibly different parameters a₁, respectively a₂, then the prediction scheme that yields the sequence of differences with smaller l_α-norm has a better (entropic) compression bound and, therefore, is more likely to achieve a better compression ratio. Equivalently, the p.d.f. that has a larger parameter a, i.e. is narrower, would be encoded using fewer bits.

Table 9. Compression ratio results for several filtering schemes applied to the skateboard rendered in Fig. 9

  Filter               Zeroth Order   Gaussian Smoothing   LS of degree 1   LS of degree 3
  Coarsest mesh        444            444                  444              444
  Coefficients         0              0                    168              336
  Differences          22,735         46,444               22,627           22,443
  Total (bytes)        23,199         46,908               23,259           23,247
  Rate (bits/vertex)   14.33          28.98                14.37            14.36

Table 10. Compression ratios for different filter lengths, in the power basis, for the skateboard

  Filter's degree   1      2      3      4      5      6      7
  bits/vertex       14.37  14.32  14.36  14.45  14.48  14.54  16.01

Table 11. Compression ratios in the Single Resolution implementation of the variable length encoding algorithm applied to the skateboard rendered in Fig. 9

  Filter                      Zeroth Order   Gaussian Smoothing   LS of degree 1   LS of degree 3
  Coarsest mesh               444            444                  444              444
  Coefficients                0              0                    168              336
  Differences                 28,931         27,082               26,542           26,436
  Total (bytes)               29,397         27,549               27,184           27,250
  Rate (bits/vertex)          18.16          17.02                16.80            16.84
  l2 error/vertex (×10⁻⁴)     43.87          51.83                38.82            36.12

Table 12. Compression ratio results for several filtering schemes applied to the piping construction mesh rendered in Fig. 13

  Filter               Zeroth Order   Gaussian Smoothing   LS of degree 1   LS of degree 3
  Coarsest mesh        522            522                  522              522
  Coefficients         0              0                    144              288
  Differences          2605           31,595               2625             2673
  Total (bytes)        3146           32,136               3311             3503
  Rate (bits/vertex)   1.38           14.17                1.46             1.54
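A sketch of the model-based rate estimate, following our reconstruction of Eq. (21): the parameter a is replaced by its moment estimator and the rate is evaluated for a chosen exponent α (α = 1 Laplace, α = 2 Gaussian). The function name and the small guard against an all-zero error vector are ours.

```python
import math
import numpy as np

def power_law_rate(differences, alpha):
    """Rate estimate of Eq. (21) under the power-type model of Eq. (20), with
    the parameter a replaced by its moment estimator."""
    x = np.abs(np.asarray(differences, dtype=float))
    moment = max(float((x ** alpha).mean()), 1e-12)  # (1/N) sum |x_i|^alpha
    r0 = (1.0 + math.log2(math.gamma(1.0 / alpha) / alpha)
          + math.log2(math.e * alpha) / alpha)
    return r0 + math.log2(moment) / alpha

# Comparing power_law_rate(err, alpha) for two predictors ranks them the same
# way as their l_alpha-norms, as argued in the text.
```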
The argument presented here suggests that a better compression ratio is achieved by the prediction scheme that minimizes the l_α-norm of the prediction error, where α is the p.d.f.'s characteristic exponent (when it is a power-type law), usually between 1 (the Laplace case) and 2 (the Gaussian case). For p = 2, the optimizing filter is found by using the pseudoinverse of A as in Eq. (19). For p = 1, the optimizer solves the linear programming problem:

    max_{f_j, g_j, u_i, v_i}  [ − Σ_{i=0}^{N−1} (u_i + v_i) − ε Σ_{j=0}^{d} (f_j + g_j) ]
    subject to:  f_j, g_j, u_i, v_i ≥ 0
                 b_i = u_i − v_i + Σ_{j=0}^{d} a_ij (f_j − g_j)         (22)

with ε as in Eq. (18), which involves (in the simplex algorithm) an (N + 2) × (2N + 2d + 1) matrix and raises the same computational problems as Eq. (18).

Table 13. Compression ratios for different filter lengths, in the power basis, for the piping construction

  Filter's degree   1      2      3      4      5      6      7
  bits/vertex       1.46   1.50   1.54   1.58   1.61   1.65   1.69

Table 14. Compression ratios in the Single Resolution implementation of the variable length encoding algorithm applied to the piping construction rendered in Fig. 13

  Filter                      Zeroth Order   Gaussian Smoothing   LS of degree 1   LS of degree 3
  Coarsest mesh               522            522                  522              522
  Coefficients                0              0                    168              336
  Differences                 19,160         41,278               21,009           21,573
  Total (bytes)               19,704         41,823               21,701           22,458
  Rate (bits/vertex)          8.69           18.44                9.57             9.90
  l2 error/vertex (×10⁻²)     1.21           22.96                1.20             1.20

Table 15. Compression ratio results for several filtering schemes applied to the sphere rendered in Fig. 17

  Filter               Zeroth Order   Gaussian Smoothing   LS of degree 1   LS of degree 3
  Coarsest mesh        881            881                  881              881
  Coefficients         0              0                    72               144
  Differences          29,770         22,614               19,762           13,395
  Total (bytes)        30,673         23,518               20,738           14,440
  Rate (bits/vertex)   23.96          18.37                16.20            11.28

Table 16. Compression ratios for different filter lengths, in the power basis, for the sphere

  Filter's degree   1      2      3      4      5      6      7
  bits/vertex       16.20  12.90  11.28  10.59  10.36  10.42  10.69

5. Examples

In this section, we present a number of examples of our filtering algorithm. For several meshes, we study the accuracy with which the fine resolution mesh is approximated, and also the compression ratio obtained for different filter lengths. First, we analyze the basic encoding algorithm presented in Section 3. The filters' coefficients are obtained by solving the optimization problem (13) for p = 2, i.e. we use the least-squares solution.

The car mesh represented in Fig. 1 (left), having nV_0 = 12,784 vertices and 24,863 faces, is decomposed into a sequence of eight levels of detail. The coarsest resolution mesh of nV_7 = 219 vertices is rendered in Fig. 1 (right). We used several filter lengths to compress the meshes. In particular, we study four types of filters, namely the Zeroth Order Filter, the Gaussian Smoother, and filters of order d = 1 and d = 3 (decomposed in the power basis). The last two filters will be termed "higher order filters", although their order is relatively low. To check the accuracy of the approximation, we used the prediction algorithm assuming the differences are zero at all levels. The four meshes corresponding to the four filters are represented in Fig. 2.

Note that the Zeroth Order Filter does not change the geometry at all (because of its pure extension nature). It gives the worst approximation of the mesh, yet it has the best compression ratio (see below).
Fig. 15. Semilog histograms of the prediction errors associated to the four filters for the Single Resolution scheme applied to the mesh rendered in Fig. 13.

Fig. 16. The color plot of the single resolution approximation errors for the four different filters: Zeroth Order Filter (upper left); Gaussian Smoothing (upper right); LS first order (lower left) and LS third order (lower right).
Fig. 17. The sphere at the finest resolution (left) and the coarsest resolution (right) after four levels of reduction.

The Gaussian filter smoothes out all the edges, a natural consequence since it corresponds to a discrete diffusion process. The higher order filters (i.e. first order and third order) trade off between smoothing and compression.

In terms of the compression ratio, the four filters performed as shown in Table 3. Varying the filter length, we found the compression ratios indicated in Table 4; all the results apply to the geometry component only, the connectivity is not presented here.

Next we study the variable length encoding algorithm for the four particular filters mentioned before, with the parameter S = nL (i.e. in the Single Resolution case). Thus, the mesh geometry is obtained by successively filtering the extensions of the coarser mesh and, at the last level, the true differences are encoded. In terms of accuracy, we obtained very similar meshes. More significant are the compression ratios, shown in Table 5. To analyze the compression ratios of these four filters, we have also plotted the histograms of the errors on a semilogarithmic scale in Fig. 3. Note that the power-type p.d.f. hypothesis is well satisfied by the Zeroth, LS first and LS third order filters, and less so by the Gaussian smoother. Also, the smaller the l_2-norm error gets, the narrower the p.d.f. and the smaller the rate becomes, in accordance with the conclusions of Section 4.3.

Equally important is how these errors are distributed on the mesh. In Fig. 4, we convert the actual differences into a scale of colors and set this color as an attribute for each vertex. Darker colors (blue, green) represent a smaller error, whereas lighter colors (yellow, red) represent a larger prediction error. The darker the color, the better the prediction and also the accuracy. All the errors are normalized with respect to the average l_2-norm error per vertex for that particular filter. The average l_2-norm error is given in the last row of Table 5.

Note that in the Single Resolution case there is not much difference among the filtering schemes considered. In particular, the higher order filters perform better than the Zeroth Order Filter, and the Gaussian filter behaves similarly to the other filters. This is different from the Multi-Resolution case in Table 3. There, the Gaussian filter behaves very poorly, and the Zeroth Order Filter gives the best compression ratio; in fact it is better than in the Single Resolution case (Tables 6 and 7). On the other hand, with respect to accuracy, the higher order filters give a more accurate approximation than the Zeroth Order Filter.

About the same conclusions hold for three other meshes we used: the round table, the skateboard and the piping construction. The round table rendered in Fig. 5 (left) has nV_0 = 11,868 vertices and 20,594 faces. The coarsest resolution mesh (at level eight, pictured on the right side) has nV_7 = 112 vertices. The predicted mesh after eight levels of decomposition, when no difference is used, is rendered in Figs. 6–8. The skateboard mesh at the finest resolution (left, in Fig. 9) has nV_0 = 12,947 vertices and 16,290 faces; at the coarsest resolution (right, in the same figure) it has nV_7 = 125 vertices (see also Fig. 10). The piping construction has nV_0 = 18,138 vertices and, after seven levels of detail, it is reduced to 147 vertices. The first simplification step achieves almost the theoretical bound: from 18,138 vertices, the mesh is simplified to 2520 vertices (Figs. 11 and 12). The original mesh and its approximations are rendered in Figs. 13 and 14.

The last mesh we discuss is somewhat different from the others.
Fig. 18. The sphere at the finest resolution level when no difference is used and the filtering is performed by: the Zeroth Order Filter (top left); the Gaussian Smoother (top right); the least squares filter of order 1 (bottom left) and the least squares filter of order 3 (bottom right).

It is a sphere of nV_0 = 10,242 vertices and 20,480 faces that reduces after four levels to nV_3 = 224 vertices. The striking difference is the compression ratio of the Zeroth Order Filter: it is the worst of all the filters we checked (Tables 8–16). Even the Gaussian filter fares better than this filter (Figs. 15 and 16). Snapshots of the approximated meshes are pictured in Figs. 17 and 18. The mesh used is non-manifold, but this is not a problem for the geometry encoder. The histograms shown in Fig. 19 are in accordance with the rate results presented in Table 17: the narrower the distribution, the better the rate. Note also how well a power-type law fits the third order filtered distribution (Fig. 20).

These examples show that, in terms of compression ratio, the Zeroth Order Filter compresses best the irregular and less smooth meshes, whereas higher order filters are better for smoother and more regular meshes. However, in terms of accuracy and robustness, the higher order filters perform much better than their main "competitor", the Zeroth Order Filter. Note that, except for highly regular meshes (like the sphere, for instance), relatively low-order filters are optimal: the range [1…5] seems enough for most of the encoding schemes.

6. Conclusions

In this paper, we study the 3D geometry filtering using the