Visual Comput (2007) 23: 651–660 DOI 10.1007/s00371-007-0128-5

Zhi-Quan Cheng Hua-Feng Liu Shi-Yao Jin

Published online: 14 July 2007 © Springer-Verlag 2007

Z.-Q. Cheng (u) · H.-F. Liu · S.-Y. Jin PDL Laboratory in the University of Defense Technology, Changsha City, Hunan province, 410073, P.R. China [email protected], [email protected], [email protected]

ORIGINAL ARTICLE

The progressive mesh compression based on meaningful segmentation

Abstract Meaningful mesh segmentation (also called shape decomposition) and progressive compression are both fundamentally important problems, and several compression algorithms have been developed with the help of patch-type segmentation. However, little attention has been paid to effectively combining mesh compression with meaningful segmentation. In this paper, to achieve both adaptive selective accessibility and a reasonable compression ratio, we break the original mesh down into meaningful parts and encode each part with an efficient compression algorithm. In our method, the segmentation of a model is obtained by a new feature-based decomposition algorithm, which makes use of salient feature contours to parse the object. The progressive compression is an improved degree-driven method, which adopts a multi-granularity quantization scheme in geometry encoding to obtain a higher compression ratio. We provide evidence that the proposed combination is beneficial in many applications, such as view-dependent rendering and streaming of large meshes in compressed form.

Keywords Mesh compression · Progressive · Meaningful segmentation · View-dependent

1 Introduction

Mesh segmentation refers to partitioning a mesh into connected regions; the process that decomposes a model into visually meaningful components is usually called part-type mesh segmentation [32] (also meaningful segmentation or shape decomposition). A detailed survey of mesh segmentation algorithms can be found in [4, 32]. Researchers usually seek efficient procedures that produce natural results in close agreement with human shape perception. In particular, meaningful segmentation has become a key ingredient in many mesh processing methods, including texture mapping [31], shape manipulation [16, 22], parameterization [39], mesh editing [33], mesh deformation [13], shape matching [7, 24, 38], and collision detection [21].

At the same time, 3D mesh compression [5] can be described as follows: "Given a 3D mesh, a series of approximate meshes is compactly represented by entropy coding of its elements in a single-rate or progressive way." The elements can include connectivity, geometry, parametric information, and other properties such as color and texture. As recent survey papers [3, 26] have observed, research on single-rate 3D mesh compression has matured, and more attention should be paid to progressive techniques [12]. For progressive mesh compression, researchers have achieved excellent results, in the sense that they provide bit rates close to the state-of-the-art single-rate valence-based coder [2]; examples include the progressive degree-driven method [1], spatial tree decomposition [8, 27], texture mapping [30], geometry image coding [9, 31], bandelet coding [28], semi-regular remeshing based on wavelet coding [6, 10, 17], and spectral coding [15].


Some earlier compression algorithms have used mesh segmentation for various purposes. For example, the construction of a texture mapping [9, 30, 31] involves dividing a given surface into sub-mesh patches, which should be topologically equivalent to a disk and must not introduce large distortion after parametrization onto 2D. Spectral compression [15] breaks the mesh into smaller sub-patches to reduce the size of the Laplacian matrix of each patch for eigenvector computation. In addition, by partitioning a 3D scene [34–37] into small planar-like pieces, the progressive transmission of the mesh can be made view-dependent [34, 35, 37] to improve visual quality, and more robust so as to provide error resilience [36]. However, little attention has been paid to the effective combination of mesh compression and shape decomposition, although the combination can be beneficial in many application fields. In mesh compression based on meaningful segmentation, the full details of a specific part of a compressed mesh can be made available without decoding other parts. Such a property enables various applications of mesh compression, such as selective decoding for view-dependent refinement and partial editing of large meshes [14, 25]. Furthermore, if meaningful-part random accessibility [18] is combined with progressiveness, mesh compression can be used for more efficient visualization and manipulation of large-scale meshes with limited main memory and network bandwidth, and for effective protection of interactive 3D objects [19].

In this paper, we propose a novel paradigm for mesh compression, which provides progressiveness based on meaningful segmentation. Unlike most previous mesh compression work, where decoding is a deterministic process and merely the reverse of encoding, our decoding of compressed mesh data is independent of the encoding process, and random-order decoding of different parts is provided. For example, Fig. 1 illustrates part-based mesh decompression of the David model, which decompresses the statue's head and neck with full detail while the other parts carry only basic information.

1.1 Overview

Our approach is carried out in three key steps. First, the mesh is decomposed into multiple meaningful parts, whose cutting boundaries are determined by salient candidate contours; these are automatically completed to form short loops around the mesh based on principal component analysis (PCA) of each contour's vertices. Second, a progressive compression algorithm for efficient transmission of triangle meshes is proposed, which modifies the existing progressive degree-driven compression method [1] by using a different quantization of the geometry and parametric information to further improve the compression ratio.

Fig. 1. Part-based mesh decompression of the David model with focus on head and neck components

Third, a tight integration of progressive compression and meaningful segmentation is obtained through a hierarchical element tree. Our technique can then locally decode specific parts of the compressed mesh in a coarse-to-fine manner, and can even resume decompression at interruption points. Consequently, the paper makes the following contributions:
• A novel automatic meaningful mesh segmentation algorithm, which partitions a given mesh into multiple meaningful parts, is presented in Sect. 2. Based on the minima rule [11], our approach carries out a PCA computation on salient candidate contours to form short loops around the parts of a mesh.
• A multi-granularity quantization method for geometry encoding, which exploits the dynamic range by spending different numbers of bits on the normal and tangential components instead of the original fixed-bit representation [1], is presented in Sect. 3. Our improved coder outperforms the original algorithm [1] in geometry coding efficiency, with improvements typically around 4% ∼ 20%.


• The approach detailed in Sect. 4 progressively transmits selected parts of a triangle mesh, based on the result of the preceding shape decomposition, by using the improved mesh compression.


The rest of the paper is structured as follows. Sections 2–4 discuss the various steps of our approach and their application. Finally, we draw some conclusions and discuss future work in Sect. 5.

2 Meaningful mesh segmentation

The minima rule [11] from cognitive theory states that human perception usually divides an object into parts along the concave discontinuities of the tangent plane. Inspired by the idea of Lee et al. [20], we directly define the cutting boundary between different parts as a concave feature contour, guided by the minima rule.

2.1 Feature contour extraction and priority

Following the minima rule, we define a feature value on each vertex by the minimum curvature value (Fig. 2a), which is calculated by the finite-difference local-neighborhood computation presented by Rusinkiewicz [29]. Then, to overcome the irregular distribution of the curvature values, the normalizing and hysteresis-thresholding process of [20] is adapted to construct regions (Fig. 2b) by connecting the vertices that pass the thresholding. The upper bound for the thresholding is set to −1.3 and the lower bound to −0.9. Next, we use a thinning method to obtain graph structures of a feature skeleton, by peeling vertices from the boundary of each region towards the inside, and then extract feature contours (Fig. 2c) from the graphs.

Fig. 2. Feature contour extraction

To cut the mesh, a single specific feature contour must be chosen from the candidates obtained in the extraction stage described above. It is important to determine the order of the contours, because our approach prevents the partitioning loops from crossing each other. Therefore, in our implementation, the priority of a feature contour γ, defined by Eq. 1, is calculated from three factors: length, centricity, and perpendicular quality (a code sketch follows the definitions below),

priority(γ) = length(γ) · centricity(γ) · η(γ).  (1)

• length(γ) = Σ_{e∈γ} length(e), where length(e) is the length of an edge e.
• centricity(γ) = (1 − center(γ).x/halfX) · (1 − center(γ).y/halfY) · (1 − center(γ).z/halfZ), where center(γ) is the barycenter of γ, and halfX, halfY, and halfZ stand for the lengths of the three half axes of the model's oriented bounding box.
• η(γ) = 1 − min(angle(xAxis, mainDir), angle(yAxis, mainDir), angle(zAxis, mainDir))/(π/2), where the perpendicular quality η(γ) is computed from the minimum angle between the cutting plane's normal mainDir and the most parallel of the model's axes.
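A minimal C++ sketch of Eq. 1 follows; the types and helper names are illustrative (not taken from our implementation), and the contour vertices are assumed to be expressed in the OBB's local frame, so the centricity factor uses coordinate magnitudes to stay in [0, 1].

// A minimal sketch of the priority score of Eq. 1; all names are illustrative.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    double dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    double norm() const { return std::sqrt(dot(*this)); }
};

// Unsigned angle between two directions, in [0, pi/2].
static double axisAngle(const Vec3& a, const Vec3& b) {
    const double c = std::fabs(a.dot(b)) / (a.norm() * b.norm());
    return std::acos(std::min(1.0, c));
}

// length(gamma): the summed edge lengths of the contour.
static double contourLength(const std::vector<Vec3>& c) {
    double len = 0.0;
    for (std::size_t i = 1; i < c.size(); ++i) len += (c[i] - c[i - 1]).norm();
    return len;
}

// priority(gamma) = length(gamma) * centricity(gamma) * eta(gamma)   (Eq. 1)
double contourPriority(const std::vector<Vec3>& contour,
                       const Vec3& half,       // halfX, halfY, halfZ of the OBB
                       const Vec3 axes[3],     // the model's three OBB axes
                       const Vec3& mainDir) {  // normal of the cutting plane
    const double PI = std::acos(-1.0);

    // Barycenter of the contour, relative to the OBB center.
    Vec3 bc{0.0, 0.0, 0.0};
    for (const Vec3& v : contour) bc = bc + v;
    const double n = static_cast<double>(contour.size());
    bc = {bc.x / n, bc.y / n, bc.z / n};

    // Centricity: contours near the OBB center score higher.
    const double centricity = (1.0 - std::fabs(bc.x) / half.x) *
                              (1.0 - std::fabs(bc.y) / half.y) *
                              (1.0 - std::fabs(bc.z) / half.z);

    // Perpendicular quality eta: a smaller angle between the cutting plane's
    // normal and the most parallel model axis gives a higher score.
    const double minAng = std::min({axisAngle(axes[0], mainDir),
                                    axisAngle(axes[1], mainDir),
                                    axisAngle(axes[2], mainDir)});
    const double eta = 1.0 - minAng / (PI / 2.0);

    return contourLength(contour) * centricity * eta;
}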

2.2 Closing of PCA-based feature contours

A selected feature contour γ is almost always open, and we need to complete it to form a closed loop around the mesh. The most crucial issue in closing the feature contour γ is how to constrain the shortest path between its endpoints so that it runs from one endpoint to the other over the far side of the mesh, instead of along the natural shortest route. Lee et al. [20] did this by using four complex functions (distance, normal, centricity, and feature) with four empirical parameters determined by tedious experiments. In contrast, we solve the problem in a simple and intuitive way, restricting the contour completion by two parallel cutting planes and an isolating zone. The structural diagram of our approach can be seen in Fig. 3. The parallel cutting planes generate a restricted region and strongly shape the contour-closing path by confining it to that region. However, if the shortest path between the contour's endpoints, which can be found by the common Dijkstra algorithm, were not further constrained, it would still retrace the original feature contour rather than pass over the other side of the mesh. To avoid this problem, an isolating zone, an approximate oriented bounding box (OBB), is added to keep the shortest path from going the wrong way. The OBB (Fig. 3c) is created as follows: its center is located at distance d outward from the middle vertex of the feature contour, its three normalized axes are initialized to the three eigenvectors described below, and the lengths of its three half axes are set to 3d in the oriented direction and 2d in the others. In particular, the distance d is defined as

d = 3 · GAP_point if GAP_point > LEN_edge, and d = 3 · LEN_edge otherwise,  (2)

where GAP_point is the average distance between adjacent points of the contour, and LEN_edge is the average edge length of the mesh (a sketch of the zone construction follows Fig. 3).

Fig. 3a–c. Structural diagram of feature contour completion. a One contour. b Sandwich components (front view). c Isolating zone (side view)
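The isolating-zone parameters reduce to a few arithmetic rules; a minimal sketch follows, assuming the contour's adjacent-point gaps and the mesh's average edge length are precomputed. The struct and function names are illustrative.

// A minimal sketch of the isolating-zone parameters of Eq. 2.
#include <vector>

struct IsolatingZone {
    double d;          // base distance of Eq. 2
    double halfLen[3]; // half axes: 3d along the oriented direction, 2d otherwise
};

IsolatingZone buildZone(const std::vector<double>& gaps, // adjacent-point distances
                        double avgEdgeLen) {             // LEN_edge of the mesh
    double gapPoint = 0.0;                               // GAP_point: mean gap
    for (double g : gaps) gapPoint += g;
    gapPoint /= static_cast<double>(gaps.size());

    IsolatingZone z;
    z.d = 3.0 * (gapPoint > avgEdgeLen ? gapPoint : avgEdgeLen); // Eq. 2
    z.halfLen[0] = 3.0 * z.d; // along the zone's oriented direction
    z.halfLen[1] = 2.0 * z.d;
    z.halfLen[2] = 2.0 * z.d;
    // The zone's center is placed at distance d outward from the middle
    // vertex of the contour, and its axes follow the PCA eigenvectors.
    return z;
}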

Now, for each candidate contour, the key question is how to define the orientation and location of the two planes.

1) For the orientation, a good strategy is to find the best-fit plane, i.e., the plane whose summed distance to the contour's vertices is least, since every feature contour has an associated set of vertices. This kind of least-distance planarity computation is quite common. The standard technique, based on principal component analysis, is to construct the sample covariance matrix

Z = 1/(k−1) · Σ_{i=1}^{k} (V_i − V̄)(V_i − V̄)^T,  (3)

where V̄ = (Σ_i V_i)/k is the mean of the vertices; the three eigenvectors of the matrix Z determine a local frame with V̄ as the origin (a code sketch appears at the end of this subsection). Of the three eigenvectors, the one corresponding to the smallest eigenvalue is likely to correspond to the contour's general direction (see the red vector in Fig. 4c and d), so the plane orientation must be one of the other two eigenvectors. The choice between them is made using the feature contour completion tactic described above: we compare the two closed loops formed from the two eigenvectors and select the one whose zone does not cross any existing segmentation loop and whose length is shorter (the loop flagged 2 in Fig. 4e); the other loop (flagged 1 in Fig. 4e) is excluded. For the instance shown in Fig. 4, the green eigenvector is the appropriate one.

2) For the location of the parallel planes, both planes are positioned at the same distance from the vertex mean V̄, with a threshold value d.

During the process of feature contour completion, the feature function also considers other contours located in the same restricted area. Therefore, we enable two feature contours that are far from each other to be connected in the looping. Figure 5 shows the effects of the feature function, which demonstrate that the loop path becomes more plausible.

To accept only accurate segmenting feature loops, we use the same part-salience criterion as [20] to check whether they are significant enough. The criterion combines three factors of a part: area, protrusion, and feature. Since our approach is heuristic, a rare manual rejection may be necessary for some models.

Fig. 4a–e. Finding the orientation of the two parallel planes. a,b The examined neck part and its magnification. c,d The local frame computed from matrix Z, seen from different views. e Closed loops


Fig. 5a,b. The effect of the feature function. a Output without feature effect. b Output with feature effect

2.3 Segmentation results

We implemented our approach using Visual C++ with OpenGL, on a PC with an Intel Pentium IV at 2.8 GHz and 512 MB of memory. To demonstrate the algorithm, we present some segmentation instances of triangle meshes in Fig. 6. The models in Fig. 6 have been accurately segmented into parts by the PCA computation on the feature contours, and the resulting parts agree well with human visual perception.

3 Improved progressive mesh compression

Based on the observation that the entropy of mesh connectivity depends on the distribution of vertex valences, Alliez and Desbrun [1] iteratively apply a valence-driven decimating conquest and a cleaning conquest in pairs to obtain progressive compact meshes. Each independent set corresponds to one decimation pass. The even decimation passes remove valence-6 vertices, while the odd ones remove only valence-3 vertices. Such a selection of valences reduces the dispersion of valences during decimation, and this dispersion is further reduced by a deterministic patch re-triangulation designed to generate valence-3 vertices, which are then removed by the odd decimation passes. The decimation is coordinated with the coding, and for progressively regular meshes it generates a regular inverse √3 subdivision, so coding one valence per vertex is sufficient to rebuild the connectivity; a small sketch of the underlying entropy argument follows.
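As a small illustration of why concentrating the valence distribution lowers the connectivity cost, the sketch below computes the empirical entropy H = −Σ_i p_i log₂ p_i of a valence list, which (under an i.i.d. model) bounds from below the bits per vertex any valence-based entropy coder can achieve; the function name is ours.

// Empirical entropy of a valence distribution, in bits per vertex.
#include <cmath>
#include <map>
#include <vector>

double valenceEntropyBpv(const std::vector<int>& valences) {
    std::map<int, int> hist;
    for (int v : valences) ++hist[v];

    const double n = static_cast<double>(valences.size());
    double h = 0.0;
    for (const auto& bin : hist) {
        const double p = bin.second / n;
        h -= p * std::log2(p);
    }
    return h; // 0 bpv for a perfectly regular (all valence-6) mesh
}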

Fig. 6. Instances of segmentation

Since the geometry components dominate the compressed file size in most cases, a better geometry coder is essential for high overall coding efficiency. In the geometry encoder and decoder, Alliez and Desbrun [1] use barycentric prediction together with a local approximate Frenet coordinate frame to separate normal and tangential components and thus further reduce the bit rate, as shown in Fig. 7. The normal n and barycenter b of a patch approximate the tangent plane of the surface, and the position of the inserted vertex v_r is encoded as an offset from this plane:

v_r = b + α · t1 + β · t2 + γ · n.  (4)

However, the combination is incomplete: both the normal geometry component and the tangential parameter components are quantized with a global uniform quantization step of 8 to 12 bits, although the influence of the parameterization can be reduced further.


Fig. 7. Prediction method for geometry encoding in [1]. The current input gate is shown in red. Residuals are expressed in terms of both tangential and normal components deduced from the current patch’s frontier, known for both the coder and the decoder

Therefore, we add this improvement by applying a different quantization to different components, and the experimental results confirm the claim in practice. After the original approximate Frenet coordinates of the vertex v_r have been projected onto the three basis vectors, we apply a second coordinate transformation to the two tangential components, leaving the n component unchanged: from the approximate Frenet frame to a 3D polar frame (see Fig. 8), where θ is the angle from the normal n to the offset vector, and ϕ is the phase angle from t1 to the projection of the offset vector onto the tangent plane. The polar coordinates θ and ϕ are defined in terms of the approximate Frenet coordinates as follows (the factor 180/π converts radians to degrees):

ϕ = 180 · arctan(β/α)/π if β ≥ 0, and ϕ = 360 − 180 · arctan(β/α)/π otherwise,  (5)

θ = 180 · arctan(√(α² + β²)/γ)/π.  (6)

When decoding, the approximate Frenet coordinates α and β can be recovered from θ and ϕ. The benefit of this equivalent transformation rests on two observations from [17]: in a smooth semi-regular mesh, the geometry information (normal component) is more important than the parameter information (tangential components), and the distribution of the polar angle (the angle from the normal axis) becomes very nonuniform in the local frame, with peaks around 0° and 180°. As noted in [17], these facts indicate that the entropy of our encoding decreases further, since the θ distribution is concentrated. Although the parameter (i.e., tangential) information does not contribute most to the error metric, we cannot simply discard the tangential components: especially at coarser levels, tangential coefficients can still carry geometric information. Thus, we can further improve the compression ratio and the rate-distortion performance by quantizing the tangential components with fewer bits. The range of θ is [0°, 180°] and the range of ϕ is [0°, 360°]; we quantize θ and ϕ in three bits each, since their distributions are tightly concentrated.

Fig. 8. The approximate Frenet coordinate frame (left) is further transformed to the polar coordinate frame (right)

It must be noted that θ has a discontinuity at 90°; to handle this exception, a θ value of 90° is assigned to 89°. As a result, a different quantization for different components is achieved: the tangential parameter information is encoded by approximate predictive coding in only 3 bits, far fewer than the 8 to 12 bits of the uniform quantization [1]. A sketch of this encoding step appears below.
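The following is a minimal sketch of the tangential quantization under our reading of Eqs. 5–6: atan2 replaces the branched arctan form, angles are in degrees, and the uniform 8-bin layout is an assumption, since the text fixes the bit budget at 3 bits but not the bin boundaries.

// Tangential residual (alpha, beta) mapped to polar angles and 3-bit symbols.
#include <algorithm>
#include <cmath>

struct PolarCode { int phi3; int theta3; }; // two 3-bit symbols

PolarCode encodeTangential(double alpha, double beta, double gamma) {
    const double RAD2DEG = 180.0 / std::acos(-1.0);

    // Eq. 5: phase angle phi in [0, 360) from t1 to the projected offset.
    double phi = std::atan2(beta, alpha) * RAD2DEG;
    if (phi < 0.0) phi += 360.0;

    // Eq. 6: angle theta in [0, 180] from the normal n to the offset vector.
    double theta = std::atan2(std::sqrt(alpha * alpha + beta * beta), gamma) * RAD2DEG;
    if (theta == 90.0) theta = 89.0; // mirrors the 90-degree rule above

    // Uniform 3-bit quantization: 8 bins over each angle's range.
    PolarCode c;
    c.phi3   = std::min(7, static_cast<int>(phi / (360.0 / 8.0)));
    c.theta3 = std::min(7, static_cast<int>(theta / (180.0 / 8.0)));
    return c;
}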

Table 1. Compression rates for typical meshes, measured for connectivity (C) and geometry (G) coding in bpv. OT is the octree coder [27], AD the original degree-driven coder [1], and GG our geometry gain over AD

Mesh         #v        Q bit   OT/C   OT/G   AD/C   AD/G   Our/C   Our/G   GG (%)
Fandisk      6475      10      2.6    10.7   5.0    12.3   5.1     11.7    4.9
Horse        19 851    12      2.9    13.7   4.6    16.2   4.6     12.9    20.4
Venus body   11 217    10      –      –      4.1    11.9   4.1     10.8    9.2
Venus head   13 407    10      –      –      3.6    10.2   3.7     8.4     17.6
Torus        36 450    12      2.9    8.9    0.4    3.9    0.4     3.6     7.6
Rabbit       67 039    12      3.4    11.4   5.4    17.6   5.4     14.2    19.3
Buddha       37 893    12      –      –      –      –      6.2     17.7    –
David        257 778   12      –      –      –      –      5.9     15.6    –


The primary effect of the 3-bit quantization of θ and ϕ is that the geometry coding efficiency improves significantly. Typical experimental results, listed in Table 1, show improvements typically around 4% ∼ 20%. In the table, the mesh name, the number of vertices, and the number of quantization bits for each mesh are listed in the first three columns. We then compare our coding bit rates (in bpv) with those of the original coder [1] (AD) and the recent octree mesh coder [27] (OT). For each algorithm, we report the connectivity and geometry costs separately.

Fig. 9. Perceptual comparison between the original [1] (top) and our improved (bottom) decodings of the Fandisk model at similar bit budgets


In other words, bit rates for connectivity coding are listed in columns 4, 6, and 8, while those for geometry coding are listed in columns 5, 7, and 9. The last column shows our geometry gain (GG) over AD, since the geometry data dominate the compressed file size in most cases. The gain is computed by Eq. 7 and is typically around 4% ∼ 20%; for the Horse mesh, for instance, |12.9 − 16.2|/16.2 ≈ 20.4%. Note that a regular geometry leads to a regular vertex distribution and good prediction accuracy, both of which contribute to high entropy coding efficiency,

GG = (G_Our − G_AD)/G_AD · 100%.  (7)

For the Fandisk mesh, we display selected layers at a series of similar bit budgets and contrast the reconstructed appearance in Fig. 9; our instances are shown at the bottom, while the top instances were created by reproducing the original algorithm [1]. To compare rate-distortion performance, we plot the curves for two meshes (the Venus body and the Venus head) in Fig. 10. Compared with AD [1] and the spectral method [15] (to date, [15] gives the best rate-distortion curve), Fig. 10a and b show that our improved coder produces significantly less distortion than [1] at all bit rates, especially at low bit rates.

4 Part-based adaptive progressive transmission

4.1 Data structure

Unlike most previous mesh compression work, where decoding is a deterministic process and merely the reverse of encoding, in our approach the decoding order of the compressed mesh data is independent of the encoding process, and random-order decoding of different parts is provided.

Fig. 10a,b. Rate-distortion curves for the Venus body and head models

Fig. 11. Data organization of a compressed model


To accomplish both adaptive selective accessibility and a reasonable compression ratio, we break the original mesh down into parts and encode each part with the improved degree-driven compression algorithm described in Sect. 3. A part is therefore the atomic unit of adaptive accessibility in our case. Our approach is similar to [34], but in our case the data in a sub-tree are highly compressed with arithmetic coding.

Based on the meaningful parts of a model, we represent the encoded model by a hierarchical element tree and a pointer table, illustrated in Fig. 11. As shown in Fig. 11a, the hierarchical element tree is created from the information of the multiple parts, comprising a base model and several layered compressed levels. Figure 11b gives an example of the pointer table, which provides the adaptive selection parameters by keeping track of the positions where the compressed data of the parts are stored. In the decoding phase, the pointer table resides in main memory. Consequently, with the aid of the pointer table, the breakpoint position is recorded whenever a part's transmission is interrupted, and decompression can be resumed after the interruption.

4.2 View-dependent progressive decompression

Possible applications of our mesh compression technique include view-dependent rendering and streaming of meshes in compressed form. We confirm that the heuristics proposed in [23] for view-dependent simplification are also suitable for predicting importance in the context of adaptive transmission. However, good importance heuristics are difficult to design in a way that closely mimics how a human eye separates what is important from what is not. We use a simple approach that estimates perceptual importance and visibility from the bounding-box information of each part. Initially, a collection of bounding boxes is used to display an outline of the meaningful parts. Next, to determine the order in which the compressed components of the parts should be considered for selection and delivery, the visibility and perceptual importance of each component are estimated. We use the distance error threshold, detailed by Luebke et al. [23], to estimate visibility and perceptual importance in one pass. For a given viewing position, we sort the collection of bounding boxes representing the 3D model parts by the distance of their centers to the viewpoint, and then assign selective levels to these parts in reverse order. Parts untouched by the current view frustum are treated as context data, and only their corresponding base meshes are preserved. At run time, the decompressed levels in the hierarchical element tree increase and decrease as the viewpoint moves; a sketch of the pointer table and level assignment follows.
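The following is a minimal sketch of the pointer table of Fig. 11 and the distance-driven level assignment of Sect. 4.2, assuming one compressed stream per model; all structure layouts and names (PartEntry, PointerTable, assignLevels) are illustrative assumptions, not our actual file format.

// Pointer table over meaningful parts, plus view-dependent level assignment.
#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

struct PartEntry {
    std::vector<std::size_t> levelOffsets; // byte offset of each compressed level
    std::size_t resumeOffset = 0;          // breakpoint recorded on interruption
    double center[3];                      // center of the part's bounding box
    bool inFrustum = true;                 // untouched parts keep the base mesh only
};

// The pointer table resides in main memory during decoding and maps each
// meaningful part to the positions of its compressed levels in the stream.
struct PointerTable { std::vector<PartEntry> parts; };

// Assign a decoding level to each part for the current viewpoint: visible
// parts are sorted by distance, nearer parts receive more refinement levels,
// and parts outside the view frustum stay at level 0 (base mesh only).
void assignLevels(const PointerTable& tab, const double eye[3],
                  std::vector<int>& levelOut) {
    std::vector<std::pair<double, int>> order; // (squared distance, part id)
    for (int i = 0; i < static_cast<int>(tab.parts.size()); ++i) {
        const PartEntry& p = tab.parts[i];
        const double dx = p.center[0] - eye[0];
        const double dy = p.center[1] - eye[1];
        const double dz = p.center[2] - eye[2];
        order.emplace_back(dx * dx + dy * dy + dz * dz, i);
    }
    std::sort(order.begin(), order.end()); // nearest parts first

    levelOut.assign(tab.parts.size(), 0);
    int rank = 0;
    for (const std::pair<double, int>& pr : order) {
        const PartEntry& p = tab.parts[pr.second];
        const int maxLevel = static_cast<int>(p.levelOffsets.size());
        levelOut[pr.second] = p.inFrustum ? std::max(1, maxLevel - rank) : 0;
        ++rank;
    }
}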

Fig. 12. Examples of view-dependent decompression

Figure 12 shows some examples of view-dependent decompression results produced by our approach. The algorithm decodes each part at a resolution appropriate to the viewing parameters: it typically decodes visible partitions at a higher resolution, while decompressing less data for invisible partitions.


5 Conclusion

A part-based progressive mesh coding and streaming algorithm, which is the first attempt to effectively integrate mesh compression with meaningful segmentation, was proposed in this paper. First, to enable part-dependent progressive transmission, the algorithm divides a mesh model into several meaningful components based on the minima rule [11] from cognitive theory, by completing salient feature contours via PCA analysis of their vertices. Second, our approach encodes each partition independently by a progressive degree-driven compression algorithm that improves the original algorithm [1] through a multi-granularity quantization method in geometry encoding, with improvements typically around 4% ∼ 20%. Third, we introduced a data structure, the hierarchical element tree, to progressively transmit each meaningful part in a randomly accessible way. As the simulation results show, the proposed algorithm can reduce the required bandwidth by


transmitting only high-resolution visible parts while omitting unneeded details of invisible parts, and it outperforms non-view-dependent methods when the user does not need to view the whole model within a short period of time. We believe that much room remains for improvement, such as investigating a robust mesh segmentation algorithm that can partition multi-resolution meshes into similar meaningful parts, developing a mesh compression method with a better compression ratio, and applying the combination of mesh segmentation with progressive compression to new applications, such as virtual reality navigation.

Acknowledgement We would like to thank Pierre Alliez for his help with the implementation. The David, Bunny, and other models were provided by the Stanford Graphics Laboratory, the 3D Meshes Research Database of the INRIA GAMMA Group, and the AIM@SHAPE Shape Repository.

References

1. Alliez, P., Desbrun, M.: Progressive encoding for lossless transmission of triangle meshes. In: SIGGRAPH, Los Angeles, pp. 198–205 (2001)
2. Alliez, P., Desbrun, M.: Valence-driven connectivity encoding of 3D meshes. In: Eurographics, CGF, Manchester, UK, pp. 480–489 (2001)
3. Alliez, P., Gotsman, C.: Recent advances in compression of 3D meshes. In: Advances in Multiresolution for Geometric Modelling, pp. 3–26. Springer (2005)
4. Attene, M., Katz, S., Mortara, M., Patane, G., Spagnuolo, M., Tal, A.: Mesh segmentation – a comparative study. In: SMI, Japan, pp. 14–25. IEEE Press (2006)
5. Deering, M.: Geometry compression. In: SIGGRAPH, pp. 13–20. ACM Press, Los Angeles, CA, USA (1995)
6. Friedel, I., Schröder, P., Khodakovsky, A.: Variational normal meshes. ACM Trans. Graph. 23(4), 1061–1073 (2004)
7. Funkhouser, T., Kazhdan, M., Shilane, P., Min, P., Kiefer, W., Tal, A., Rusinkiewicz, S., Dobkin, D.: Modeling by example. ACM Trans. Graph. 23(3), 652–663 (2004)
8. Gandoin, P.-M., Devillers, O.: Progressive lossless compression of arbitrary simplicial complexes. In: SIGGRAPH, pp. 372–379. ACM Press (2002)
9. Gu, X.F., Gortler, S.J., Hoppe, H.: Geometry images. In: SIGGRAPH, pp. 355–361. ACM Press (2002)
10. Guskov, I., Vidimce, K., Sweldens, W., Schröder, P.: Normal meshes. In: SIGGRAPH, pp. 95–102. ACM Press, New Orleans (2000)
11. Hoffman, D.D., Richards, W.A.: Parts of recognition. Cognition 18, 65–96 (1984)
12. Hoppe, H.: Progressive meshes. In: SIGGRAPH, pp. 99–108. ACM Press, New Orleans (1996)
13. Huang, J., Shi, X., Liu, X., Zhou, K., Wei, L.-Y., Teng, S.-H., Bao, H., Guo, B., Shum, H.-Y.: Subspace gradient domain mesh deformation. ACM Trans. Graph. 25(3), 1126–1134 (2006)
14. Kalaiah, A., Varshney, A.: Statistical geometry representation for efficient transmission and rendering. ACM Trans. Graph. 24(2), 348–373 (2005)
15. Karni, Z., Gotsman, C.: Spectral compression of mesh geometry. In: SIGGRAPH, pp. 279–286. ACM Press/Addison-Wesley (2000)
16. Katz, S., Tal, A.: Hierarchical mesh decomposition using fuzzy clustering and cuts. ACM Trans. Graph. 22(3), 954–961 (2003)
17. Khodakovsky, A., Schröder, P., Sweldens, W.: Progressive geometry compression. In: SIGGRAPH, pp. 271–278. ACM Press, New Orleans (2000)
18. Kim, J., Choe, S., Lee, S.: Multiresolution random accessible mesh compression. Comput. Graph. Forum 25(3), 323–332 (2006)
19. Koller, D., Turitzin, M., Tarini, M., Croccia, G., Cignoni, P., Scopigno, R.: Protected interactive 3D graphics via remote rendering. ACM Trans. Graph. 23(3), 695–703 (2004)
20. Lee, Y., Lee, S., Shamir, A., Cohen-Or, D., Seidel, H.-P.: Mesh scissoring with minima rule and part salience. Comput. Aided Geom. Design 22, 444–465 (2005)
21. Li, X., Toon, T.W., Huang, Z.: Decomposing polygon meshes for interactive applications. In: SI3D, pp. 35–42. ACM Press (2001)
22. Lien, J.M., Keyser, J., Amato, N.M.: Simultaneous shape decomposition and skeletonization. In: SPM '06: Proceedings of the 2006 ACM Symposium on Solid and Physical Modeling, pp. 219–228. ACM Press (2006)
23. Luebke, D., Reddy, M., Cohen, J.D., Varshney, A., Watson, B., Huebner, R.: Level of Detail for 3D Graphics. Morgan Kaufmann (2002)
24. Mangan, A.P., Whitaker, R.T.: Partitioning 3D surface meshes using watershed segmentation. IEEE Trans. Vis. Comput. Graph. 5(4), 308–321 (1999)
25. Martin, I.B.: Adaptive graphics. IEEE Comput. Graph. Appl. 23(1), 6–10 (2003)
26. Peng, J., Kim, C.-S., Kuo, C.-C.J.: Technologies for 3D mesh compression: A survey. J. Vis. Commun. Image Represent. 16(6), 688–733 (2005)
27. Peng, J., Kuo, C.-C.J.: Geometry-guided progressive lossless 3D mesh coding with octree (OT) decomposition. ACM Trans. Graph. 24(3), 609–616 (2005)
28. Peyré, G., Mallat, S.: Surface compression with geometric bandelets. ACM Trans. Graph. 24(3), 601–609 (2005)
29. Rusinkiewicz, S.: Estimating curvatures and their derivatives on triangle meshes. In: Symposium on 3D Data Processing, Visualization, and Transmission, pp. 486–495. IEEE Press (2004)
30. Sander, P.V., Snyder, J., Gortler, S.J., Hoppe, H.: Texture mapping progressive meshes. In: SIGGRAPH, pp. 409–416. ACM Press (2001)
31. Sander, P.V., Wood, Z.J., Gortler, S.J., Snyder, J., Hoppe, H.: Multi-chart geometry images. In: Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, pp. 146–155. ACM Press (2003)
32. Shamir, A.: A formulation of boundary mesh segmentation. In: 3DPVT, pp. 82–89. IEEE Press (2004)
33. Shatz, I., Tal, A., Leifman, G.: Paper craft models from meshes. Visual Comput. 22(9), 825–834 (2006)
34. Yang, S., Kim, C.-S., Kuo, C.-C.J.: A progressive view-dependent technique for interactive 3-D mesh transmission. IEEE Trans. Circ. Syst. Video Technol. 14(11), 1249–1264 (2004)
35. Yan, Z., Kumar, S., Kuo, C.-C.J.: Error resilient coding of 3-D graphic models via adaptive mesh segmentation. IEEE Trans. Circ. Syst. Video Technol. 11(7), 860–873 (2001)
36. Yan, Z., Kumar, S., Kuo, C.-C.J.: Mesh segmentation schemes for error resilient coding of 3-D graphic models. IEEE Trans. Circ. Syst. Video Technol. 15(1), 138–144 (2005)
37. Yang, S., Kim, C.-S., Kuo, C.-C.J.: View-dependent progressive mesh coding based on partitioning. In: VCIP, pp. 268–279. IEEE Press (2002)
38. Zelinka, S., Garland, M.: Surfacing by numbers. In: GI '06: Proceedings of the 2006 Conference on Graphics Interface, pp. 107–113. Canadian Information Processing Society (2006)
39. Zhang, E., Mischaikow, K., Turk, G.: Feature-based surface parameterization and texture mapping. ACM Trans. Graph. 24(1), 1–27 (2005)

Zhi-Quan Cheng is a PhD student at the PDL Laboratory at the University of Defense Technology, Changsha City, P.R. China. His research interests include geometry processing and 3D streaming.

Hua-Feng Liu is a PhD candidate in the PDL Laboratory at the University of Defense Technology, Changsha City, P.R. China. His research interests include geometry processing and wireless networks.

Shi-Yao Jin is a professor at the University of Defense Technology, Changsha City, P.R. China. Professor Jin graduated from Harbin Military Engineering College in 1961. His research interests include distributed interactive simulation and real-time techniques.
