Anchors-based lossless compression of progressive triangle meshes
ZhiQuan Cheng, ShiYao Jin, HuaFeng Liu
PDL Laboratory, National University of Defense Technology, China
[email protected]

Abstract
In this paper, a lossless triangle mesh compression approach, improved from the progressive valence-driven compression of [1], is proposed. First, to best approximate the original geometry, anchors located at salient feature positions are selected from the input mesh vertices in a feature-sensitive fashion and preserved in the base mesh. Second, a multi-granularity quantization method for geometry encoding is presented, which makes better use of the dynamic range (a different number of bits for the normal and tangential components) than a fixed-bit representation. Consequently, with the aid of the anchors and the variable-rate geometry coding, the rate-distortion performance is much better than that of the original and of recent comparable lossless algorithms, and is even competitive with spectral compression techniques at low bit rates. In addition, experiments show that the proposed coder outperforms the original in geometry coding efficiency, with improvements typically in the range of 4%~20%.

1. Introduction
3D mesh compression [2] can be stated as follows: given a 3D mesh, extract a series of meshes from the original and transmit them in a single-rate or progressive way, such that the transmitted elements satisfy some quality requirements while approximating the input well. The elements can include connectivity, geometry, parametric information, and other properties such as color and texture. "Quality" has several meanings: it can refer to the compression ratio (separated into connectivity encoding and geometry encoding) measured in bits per triangle (bpt) or bits per vertex (bpv), to the rate-distortion trade-off between size and accuracy, to the entropy of the coding method, and/or to the type of 3D mesh (manifold or not). In contrast to single-rate coders, progressive coders [3] are typically based on mesh simplification techniques that encode a range of intermediate shapes, and are measured by the rate-distortion curve. However, optimizing the rate-distortion trade-off remains a challenge, especially at low bit rates. The most intuitive remedy is to impose a restriction on the base mesh: the base mesh should include the salient vertices (anchors) of the surface and represent the structure of the model as closely as possible, just as in the lossy semi-regular remeshing compression algorithms [4-6]. Thus, one of our main contributions is an improvement of rate-distortion performance with the aid of anchors.
For geometry encoding and decoding, Alliez et al. [1] used barycentric prediction and a local approximate Frenet coordinate frame to separate normal and tangential components and further reduce the bit rate, inspired by [4]. The normal and tangential components are quantized with the same number of bits, even though their geometry compression is consistently better than previous approaches. Our other main contribution is therefore a notable improvement in geometry coding, achieved by quantizing different components with different precision.

2. Related work
Research on single-rate 3D mesh compression has matured. For progressive mesh compression, the valence-driven approach is still among the best, and the geometric-center methods driven by kd-tree [7] or octree [8] spatial decomposition achieve excellent performance, providing compression rates close to the state-of-the-art single-rate valence-based coder [9]. However, most lossless progressive mesh coding algorithms apply uniform quantization in vertex coordinate space together with various prediction schemes, and typically treat triangle meshes as consisting of two distinct components, connectivity and geometry, rather than three: geometry, connectivity, and parameterization. Our improved valence-driven algorithm tries to reduce the parametric component to further decrease the coding rate and achieve a better rate-distortion curve.
Spectral coding [10], geometry-image coding [11,12], bandelet coding [13], and wavelet coding [4-6] further improve the compression ratio; it is worth pointing out that these coding algorithms are all lossy [14]. In fact, optimizing the rate-distortion trade-off involves many challenging issues linked to sampling and approximation theory, differential geometry, and information theory. Sorkine et al. [15,16] transform the Cartesian coordinates of the original mesh to δ-coordinates by the Laplacian matrix, concentrate the error at the low-frequency end of the spectrum, and reconstruct an approximation of the geometry from the quantized δ-coordinates and the spatial locations of the anchors. Although the rate-distortion curves are clearly improved, the mesh reconstructed with the k-anchor invertible Laplacian is not smooth at the anchor vertices [15], and [16] is not actually fully progressive, in the sense that the mesh always carries the full connectivity. These deficiencies are rectified in our anchors-based lossless progressive compression algorithm.

3. Anchor detection
Anchors, detected from the vertex principal curvatures, are mesh points whose original coordinates are included in the encoded base mesh. Let S be an oriented surface, and denote by kmax and kmin its maximal and minimal principal curvatures. Using the finite-difference local-neighborhood estimator of [17], we define a feature value k(v) at each vertex v as the absolute principal curvature sum |kmax| + |kmin|. To obtain comparable values independent of the specific mesh, we normalize the feature values as cf(v) = (k(v) − μ)/σ, where μ is the mean and σ the standard deviation of k(v) over all vertices of the mesh. To clearly distinguish valley from ridge features, the sign of the normalized curvature value is set to that of the minimal principal curvature (ridges positive, valleys negative). A hysteresis thresholding function [18] on cf(v) then filters out the feature regions, with an upper bound of typically 1.7 and a lower bound of 1.2 (Figure 1). As observed by Kobbelt et al. [19], the associated approximation problem cannot be solved by oversampling, since the surface normals of the reconstructed model do not converge to the normal field of the original object. Therefore, for each feature region we take only a few points as anchors: a few center points, plus the end points if the feature region is long enough (see the anchors, marked as black points, in Figure 1).
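The selection step can be summarized with the following sketch (ours, under the assumptions stated in the docstring); the hysteresis growth follows a seed-and-grow reading of [18], and the per-region sparsification is simplified to one representative per region.

```python
import numpy as np
from collections import deque

def select_anchors(k_max, k_min, neighbors, upper=1.7, lower=1.2):
    """Sketch of the anchor selection of Section 3. k_max/k_min are per-vertex
    principal curvatures (e.g. estimated as in [17]); neighbors[i] lists the
    one-ring vertex indices of vertex i. Names and data layout are ours."""
    k = np.abs(k_max) + np.abs(k_min)              # feature value k(v)
    cf = (k - k.mean()) / k.std()                  # cf(v) = (k(v) - mu) / sigma
    cf = np.copysign(cf, k_min)                    # ridge positive, valley negative

    # Hysteresis thresholding [18]: seed where |cf| >= upper,
    # then grow through neighbors while |cf| >= lower.
    in_region = np.abs(cf) >= upper
    queue = deque(np.flatnonzero(in_region))
    while queue:
        v = queue.popleft()
        for w in neighbors[v]:
            if not in_region[w] and abs(cf[w]) >= lower:
                in_region[w] = True
                queue.append(w)

    # Keep only a few representatives per connected feature region; here one
    # anchor per region (the vertex of largest |cf|). The paper also keeps
    # the end points of regions that are long enough.
    anchors, seen = [], np.zeros(len(cf), dtype=bool)
    for s in np.flatnonzero(in_region):
        if seen[s]:
            continue
        seen[s] = True
        region, queue = [s], deque([s])
        while queue:
            v = queue.popleft()
            for w in neighbors[v]:
                if in_region[w] and not seen[w]:
                    seen[w] = True
                    region.append(w)
                    queue.append(w)
        anchors.append(max(region, key=lambda i: abs(cf[i])))
    return anchors
```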

Figure 1. Anchor detection: (a) tooth, (b) horse, (c) rabbit.

4. Anchors-based progressive compression
4.1. Connectivity coding
Based on the observation that the entropy of mesh connectivity depends on the distribution of vertex valences, Alliez and Desbrun [1] iteratively apply the valence-driven decimating conquest and the cleaning conquest in pairs to obtain level-of-detail meshes (see Figure 2). The even (decimating) passes remove vertices of valence ≤ 6, while the odd (cleaning) passes remove only valence-3 vertices. This selection of valences reduces the dispersion of valences during decimation, and the dispersion is further reduced by a deterministic patch re-triangulation designed to generate valence-3 vertices, which are later removed by the cleaning passes. The conquest pair therefore behaves like an inverse √3 subdivision, and coding one valence per vertex is sufficient to rebuild the connectivity.
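To make the entropy observation concrete, the following toy computation (ours, with made-up valence streams) shows why a tight valence distribution after decimation yields a cheap per-vertex connectivity code:

```python
import math
from collections import Counter

def entropy_bits_per_symbol(symbols):
    """Empirical zeroth-order entropy in bits/symbol: the cost an ideal
    arithmetic coder approaches for an i.i.d. symbol stream."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# The tighter the valence distribution, the cheaper the valence code.
dispersed = [3, 4, 5, 6, 7, 8, 9, 10] * 100        # uniform over 8 valences
regular   = [6] * 700 + [5, 7] * 50                 # strongly peaked at 6
print(entropy_bits_per_symbol(dispersed))           # 3.0 bits/vertex
print(entropy_bits_per_symbol(regular))             # ~0.67 bits/vertex
```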


Figure 2. An example of the progressive valence-driven simplification process: (a) the decimating conquest, (b) the cleaning conquest, and (c) the resulting mesh after both conquests. Shaded areas represent conquered patches and thick lines represent gates; gates still to be processed are drawn in black, already-processed gates in gray. Each arrow represents the direction of entrance into a patch.

Our approach leaves the connectivity encoding step unchanged, except that we respect anchors and send additional null_patch codes to encode them. Once the anchors have been sampled, they must not be removed during simplification and must be preserved in the base mesh: preserving the anchors appropriately improves the rate-distortion performance, especially at low rates. As mentioned in [1], important vertices can be skipped by simply sending null_patch codes whenever they are better kept at the current stage. In the same way, we encode a null_patch whenever an anchor v is the vertex currently being processed. An example is shown in Figure 3, which illustrates that the anchor is kept undecimated. Consequently, the number of null_patch codes inevitably increases. Fortunately, at the end of the encoding process, a simulated decoding stage similar to that of [1] can be adopted to remove unnecessary null_patch codes, eliminating at least 10% of these extra codes on average. Thereby, our connectivity coding cost is only slightly higher than that of the original, as discussed in Section 5.
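A toy sketch of the rule this adds to the symbol stream; the real coder of [1] interleaves these symbols with the patch traversal and gate ordering, which we deliberately omit here:

```python
def connectivity_symbols(valences, anchor_ids):
    """Toy symbol stream for one decimation pass: each visited vertex
    contributes its valence code, except anchors (and high-valence
    vertices), which are skipped via null_patch so the decoder keeps them."""
    stream = []
    for vid, valence in enumerate(valences):
        if vid in anchor_ids or valence > 6:
            stream.append("null_patch")   # vertex survives to the next level
        else:
            stream.append(valence)        # vertex is decimated
    return stream

print(connectivity_symbols([4, 6, 5, 9, 6], anchor_ids={2}))
# [4, 6, 'null_patch', 'null_patch', 6]
```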



Figure 3. The effect of anchor v on the decimation process.


Figure 4. The prediction method for geometry encoding in [1]. The current input gate is colored red. Residuals are expressed in terms of tangential and normal components deduced from the current patch's frontier, which is known to both the coder and the decoder.

4.2. Geometry coding
Right after each vertex's valence code, so as to exploit the implicit order defined by the connectivity codes, Alliez and Desbrun encode the geometry by barycentric prediction in an approximate Frenet coordinate frame, with a global uniform quantization of 8 to 12 bits (see Figure 4). The normal n and barycenter b of a patch approximate the tangent plane of the surface, and the position of the inserted vertex vr is encoded as an offset from this plane:

vr = b + α·t1 + β·t2 + γ·n,

where t1 and t2 span the tangent plane, so that (α, β) are the tangential components and γ is the normal component of the residual.
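The sketch below illustrates this prediction under our own assumptions about the frame construction (a Newell-style average normal, with t1 fixed by the first frontier vertex); [1] may build the frame differently in detail. Function and variable names are ours.

```python
import numpy as np

def frenet_residual(patch_ring, v_r):
    """Express the removed vertex v_r in the approximate Frenet frame of its
    patch: b is the ring barycenter, n an average normal, and t1, t2 span the
    tangent plane, so that v_r = b + alpha*t1 + beta*t2 + gamma*n.
    patch_ring is an (m, 3) array of the patch frontier vertices."""
    b = patch_ring.mean(axis=0)                    # barycentric prediction
    n = np.zeros(3)                                # Newell-style average normal
    m = len(patch_ring)
    for i in range(m):
        p, q = patch_ring[i] - b, patch_ring[(i + 1) % m] - b
        n += np.cross(p, q)
    n /= np.linalg.norm(n)
    t1 = patch_ring[0] - b                         # first frontier vertex fixes t1
    t1 -= n * np.dot(t1, n)                        # project into the tangent plane
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    d = v_r - b
    return np.dot(d, t1), np.dot(d, t2), np.dot(d, n)   # (alpha, beta, gamma)
```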



After the inserted vertex vr has been expressed in the approximate Frenet frame, we apply a second coordinate transformation to the two components other than n (see Figure 5), from the approximate Frenet coordinate frame to a 3D polar coordinate frame: θ is the angle from the normal n to the vector from b to vr, and φ is the phase angle from t1 to the projection of that vector onto the tangent plane. The polar coordinates are defined in terms of the approximate Frenet coordinates by

φ = 180·arctan(β/α)/π, or 360 − 180·arctan(β/α)/π, depending on the quadrant of (α, β), and
θ = 180·arctan(√(α² + β²)/γ)/π.

When decoding, the approximate Frenet coordinates α and β can be recovered from θ and φ. The benefit of this equivalent transformation rests on two observations from [4]: in a smooth semi-regular mesh, the geometric information (normal component) is much more important than the parametric information (tangential components), and the distribution of the polar angle (the angle from the normal axis) becomes very nonuniform in the local frame, with peaks around 0° and 180°. This indicates that the entropy of our encoding decreases further, since the θ distribution is concentrated.
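A sketch of the forward and inverse transform, assuming, as the text suggests, that the decoder recovers α and β from the transmitted γ together with θ and φ; we use atan2 to fold the two-case φ formula into one expression.

```python
import math

def frenet_to_polar(alpha, beta, gamma):
    """Frenet residual -> (gamma, theta, phi): gamma keeps the normal offset,
    while theta/phi carry the tangential direction. Angles in degrees."""
    phi = math.degrees(math.atan2(beta, alpha)) % 360.0       # angle from t1
    theta = math.degrees(math.atan2(math.hypot(alpha, beta), gamma))  # from n
    return gamma, theta, phi

def polar_to_frenet(gamma, theta, phi):
    """Decoder-side inverse: alpha and beta recovered from gamma, theta, phi.
    tan(theta) is singular at 90 degrees, which is why the coder clamps
    theta to 89 degrees (see below)."""
    t = gamma * math.tan(math.radians(theta))     # = sqrt(alpha^2 + beta^2)
    ph = math.radians(phi)
    return t * math.cos(ph), t * math.sin(ph), gamma

g, th, ph = frenet_to_polar(0.2, -0.1, 0.9)
print(polar_to_frenet(g, th, ph))                 # ~(0.2, -0.1, 0.9)
```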


Figure 5. The approximate Frenet coordinate frame (left) is further transformed into the polar coordinate frame (right).

Although the parametric, i.e. tangential, information does not contribute most to the error metric, we cannot simply discard the tangential components, since our algorithm is lossless; especially at coarser levels, the tangential coefficients can still carry some geometric information. We can, however, further improve the error curves by quantizing the tangential components with fewer bits. The range of θ is [0°, 180°] and that of φ is [0°, 360°], and since the effective spread of their values is small, we quantize θ and φ with three bits each. Note that θ has a singular point at θ = 90° (where tan θ diverges); to handle this exception, θ is set to 89° whenever it occurs. As a result, a different quantization for different components is realized directly: the tangential parametric information is encoded approximately with only 3 bits, far fewer than the 8 to 12 bits of the uniform quantization in [1]. The primary effect of the 3-bit quantization of θ and φ is a significant improvement in geometry coding efficiency; Section 5 shows that the improvement is typically in the range of 4%~20%.
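A minimal sketch of this multi-granularity quantization, with a uniform mid-bin reconstruction of our own choosing; the resulting integer symbols would be fed to the entropy coder.

```python
def quantize_angle(value, lo, hi, bits=3):
    """Uniform scalar quantizer: `bits` bits give 2**bits bins over [lo, hi).
    Returns the integer symbol to encode and its dequantized (mid-bin) value."""
    bins = 1 << bits
    step = (hi - lo) / bins
    q = min(int((value - lo) / step), bins - 1)
    return q, lo + (q + 0.5) * step

def encode_tangential(theta, phi, bits=3):
    """Clamp the theta = 90 degree singularity to 89 degrees, as described
    above, then quantize both polar angles in `bits` bits each."""
    if abs(theta - 90.0) < 1e-9:
        theta = 89.0
    q_theta, _ = quantize_angle(theta, 0.0, 180.0, bits)
    q_phi, _ = quantize_angle(phi, 0.0, 360.0, bits)
    return q_theta, q_phi
```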

5. Experimental results and discussion
For comparison purposes, we have reproduced the valence-driven compression of [1] with the help of Alliez. In this section we focus primarily on the comparison of compression results and of rate-distortion curves obtained with Metro [20]. Our current implementation encodes and decodes 8,000 faces/s on a common Pentium IV PC, and can handle arbitrary genus and arbitrary numbers of holes. Typical experimental results are listed in Table 1, where the mesh name, the number of vertices, and the quantization bits are listed in the first three columns. We then compare our coding bit rates (in bpv) with the original [1] (denoted AD) and with the recent octree mesh coder [8] (denoted OT). For each algorithm, the connectivity and geometry costs are reported separately: bit rates for connectivity coding are listed in columns 4, 6, and 8, and those for geometry coding in columns 5, 7, and 9. The last column reports the geometry gain (GG) of our coder over AD, since geometry data dominates the compressed file size in most cases. The gain is computed as GG = (GAD − GOUR)/GAD × 100%, and is typically in the range of 4%~20%; for example, for the fandisk mesh, GG = (12.3 − 10.5)/12.3 × 100% ≈ 14.6%. Note that regular geometry leads to a regular vertex distribution and good prediction accuracy, which both contribute to high entropy-coding efficiency; the experiments show that highly regular meshes can be coded much more compactly, since our method exploits regularity in both valence and geometry. For the fandisk mesh, Figure 6 displays some selected layers and contrasts the simplified appearance: the bottom row is ours with two anchors, while the top row is produced by our reproduction of the original algorithm [1]. To compare rate-distortion performance, we plot the curves for two meshes (venus body and venus head) in Figure 7. Compared with AD [1] and with the spectral method [10] (which gives the best rate-distortion curves to date), Figures 7.a and 7.b show that our improved coder produces significantly less distortion than [1] at all bit rates, especially at low bit rates.

Table 1: Compression rates for typical meshes, measured for connectivity (C) and geometry (G) coding in bpv.

Model        #V       Quant. (bits)  OT/C  OT/G  AD/C  AD/G  OUR/C  OUR/G  GG (%)
fandisk       6,475   10             2.6   10.7  5.0   12.3  5.2    10.5   14.6
horse        19,851   12             2.9   13.7  4.6   16.2  4.9    12.9   20.4
venus body   11,217   10             -     -     3.6   10.2  3.8     9.8    3.9
venus head   13,407   10             -     -     5.2   14.4  5.6    12.4   19.2
torus        36,450   12             2.9    8.9  0.47   4.3  0.51    4.1    4.7
rabbit       67,039   12             3.4   11.4  5.4   17.6  6.0    14.2   19.3

Figure 6. Perceptual rate-distortion comparison between [1] (top row: 15v, 34v, 67v, 106v, 139v) and our decoding (bottom row: 12v, 26v, 68v, 112v, 139v) on the fandisk, at different vertex granularities.


Figure 7.a. Rate-distortion curve of the venus body: Metro mean square error (in percent of the bounding box) versus rate (in bits, ×10^5), for AD [1], the spectral method [10], and our method.

Figure 7.b. Rate-distortion curve of the venus head: Metro mean square error (×10^-3) versus rate (in bits, ×10^5), for AD [1], the spectral method [10], and our method.

6. Conclusion
We have proposed an anchors-based valence-driven progressive 3D mesh encoder, improved from the well-known lossless compression algorithm of [1]. Our improvement is based on the observation that the error metric is much less sensitive to quantization error in the tangential components than in the normal component, so the two are compressed with different numbers of bits. To maintain the salient geometric features, anchors are preserved during connectivity coding, while in geometry coding a further coordinate transformation is employed to optimize the distribution of the tangential components, together with a different quantization of the normal and tangential components to further reduce the bit rate; the tangential parametric information can be quantized with three bits at similar precision. Our mesh coder is demonstrated to achieve state-of-the-art coding efficiency and better rate-distortion performance than [1]. In addition, a new anchor selection algorithm is presented that locates anchors in the salient feature regions of the input mesh.

Acknowledgements
We would like to thank Pierre Alliez for his generous help during the implementation. The rabbit, horse, and other models were provided by the Stanford Graphics Laboratory, Cyberware Inc., and the Caltech Multi-Res Modeling Group.

References
[1] P. Alliez and M. Desbrun, "Progressive encoding for lossless transmission of triangle meshes", in Proc. SIGGRAPH 2001, ACM Press, Los Angeles, CA, USA, 2001, pp. 198–205.
[2] M. Deering, "Geometry compression", in Proc. SIGGRAPH 1995, ACM Press, Los Angeles, CA, USA, 1995, pp. 13–20.
[3] H. Hoppe, "Progressive meshes", in Proc. SIGGRAPH 1996, ACM Press, New Orleans, LA, USA, 1996, pp. 99–108.
[4] A. Khodakovsky, P. Schröder, and W. Sweldens, "Progressive geometry compression", in Proc. SIGGRAPH 2000, ACM Press, New Orleans, LA, USA, 2000, pp. 271–278.
[5] I. Guskov, K. Vidimče, W. Sweldens, and P. Schröder, "Normal meshes", in Proc. SIGGRAPH 2000, ACM Press, New Orleans, LA, USA, 2000, pp. 95–102.
[6] I. Friedel, P. Schröder, and A. Khodakovsky, "Variational normal meshes", ACM Transactions on Graphics, vol. 23, no. 4, 2004, pp. 1061–1073.
[7] P.-M. Gandoin and O. Devillers, "Progressive lossless compression of arbitrary simplicial complexes", in Proc. SIGGRAPH 2002, ACM Press, San Antonio, TX, USA, 2002, pp. 372–379.
[8] J. Peng and C.-C. J. Kuo, "Geometry-guided progressive lossless 3D mesh coding with octree (OT) decomposition", in Proc. SIGGRAPH 2005, ACM Press, Los Angeles, CA, USA, 2005, pp. 609–616.
[9] P. Alliez and M. Desbrun, "Valence-driven connectivity encoding of 3D meshes", in Proc. EUROGRAPHICS 2001, Computer Graphics Forum, Manchester, UK, 2001, pp. 480–489.
[10] Z. Karni and C. Gotsman, "Spectral compression of mesh geometry", in Proc. SIGGRAPH 2000, ACM Press, New Orleans, LA, USA, 2000, pp. 279–286.
[11] X. Gu, S. J. Gortler, and H. Hoppe, "Geometry images", in Proc. SIGGRAPH 2002, ACM Press, San Antonio, TX, USA, 2002, pp. 355–361.
[12] E. Praun and H. Hoppe, "Spherical parametrization and remeshing", in Proc. SIGGRAPH 2003, ACM Press, San Diego, CA, USA, 2003, pp. 340–349.
[13] G. Peyré and S. Mallat, "Surface compression with geometric bandelets", in Proc. SIGGRAPH 2005, ACM Press, Los Angeles, CA, USA, 2005, pp. 601–608.
[14] P. Alliez and C. Gotsman, "Recent advances in compression of 3D meshes", in Advances in Multiresolution for Geometric Modelling, Springer, 2005, pp. 3–26.
[15] O. Sorkine, D. Cohen-Or, and S. Toledo, "High-pass quantization for mesh encoding", in Proc. Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, ACM Press, Aachen, Germany, 2003, pp. 42–51.
[16] O. Sorkine, D. Irony, and S. Toledo, "Geometry-aware bases for shape approximation", IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 2, 2005, pp. 171–180.
[17] S. Rusinkiewicz, "Estimating curvatures and their derivatives on triangle meshes", in Proc. Symposium on 3D Data Processing, Visualization, and Transmission, IEEE Press, Thessaloniki, Greece, 2004, pp. 486–495.
[18] A. Hubeli and M. Gross, "Multiresolution feature extraction for unstructured meshes", in Proc. IEEE Visualization 2001, IEEE Press, San Diego, CA, USA, 2001, pp. 287–294.
[19] L. Kobbelt, M. Botsch, U. Schwanecke, and H.-P. Seidel, "Feature sensitive surface extraction from volume data", in Proc. SIGGRAPH 2001, ACM Press, Los Angeles, CA, USA, 2001, pp. 57–66.
[20] P. Cignoni, C. Rocchini, and R. Scopigno, "Metro: measuring error on simplified surfaces", Computer Graphics Forum, vol. 17, no. 2, 1998, pp. 167–174.
