View-based 3D Object Retrieval Using Tangent Subspace Analysis

Fan Wang, Fei Li, Qionghai Dai and Guihua Er

Department of Automation, Tsinghua University, Beijing, 100084, P.R. China

Further author information: Send correspondence to Fan Wang, E-mail: [email protected], Telephone: 86 10 62788613

ABSTRACT

The light field is a novel image-based representation of 3D objects, in which each object is described by a group of images captured from many viewpoints. This representation is independent of the complexity of the 3D scene or objects. Owing to this advantage, we propose a 3D object retrieval framework based on the light field. An effective distance measure is defined through subspace analysis of the light field data, and our method makes use of the structural information hidden in the images of a light field. To obtain a more reasonable distance measure, a distance computed in a low-dimensional space is added as a supplement. Additionally, our algorithm can handle arbitrary numbers and positions of cameras when capturing the light field. In our experiments, a standard 3D object database is adopted, and our proposed distance measure shows better performance than the "LFD" in 3D object retrieval and recognition.

Keywords: 3D object, light field, tangent subspace

1. INTRODUCTION

Recent advances in 3D techniques have made 3D modeling and visualization possible, and large-scale 3D databases have become available on the Internet. This has led to the need for 3D object retrieval. The fundamental techniques involved in the retrieval process are 3D object representation and matching. Most current 3D representations are parameterized and based on 3D models, and the matching is also based on model parameters, for example color, shape, or texture information.

Shape is the most commonly used descriptor for 3D objects, and various methods exist to extract efficient 3D shape features. A 3D shape can be represented in terms of its outer surfaces, its boundaries at the 2D level, or at the volume level. For example, the voxel, a volumetric representation, has been proposed.1 Additionally, shape features can be classified into global features2,3 and local features,4,5 which characterize the global and local shape of a 3D model respectively.

Besides 3D geometrical or structural properties, some view-based representations of 3D models have been adopted. Based on the idea that two similar models should look similar from all viewing angles, researchers began to study 3D model retrieval and recognition using the visual similarity of the models. There is ample literature on recognition or retrieval from 2D views; in such methods, each 3D model is represented by a group of images as seen from different views. Nayar et al.6 use principal component analysis (PCA) on a set of object views to generate a distance between an unknown view and prototypical views. Cootes and Taylor7 proposed an active appearance model (AAM) which learns a statistical model by training on a series of 2D images. Ullman and Basri8 represent each view as a linear combination of prototypical views. The spin image has also been proposed,9 a data-level shape representation that places few restrictions on object shape. In another paper,10 each 3D model is described by a number of binary images and the query is a 2D sketch.

In recent years, a new concept called the "light field" has been used to represent 3D objects instead of model-based methods.11,12 Intuitively, in a light field each 3D object or scene is represented by images as seen from different views or directions, with little or no geometry information. In this case, the cost of processing or interactively viewing the scene is independent of scene complexity, which avoids complex model parameters, especially when the scene contains abundant detail.

One of the state-of-the-art algorithms in view-based 3D object retrieval uses a representation called the Light Field Descriptor (LFD).13 An LFD is a set of 10 projections of a 3D model as seen from half the vertices of a dodecahedron; that is, an LFD consists of ten silhouettes of the 3D object obtained from ten viewing angles distributed evenly on the viewing sphere. A similarity score is computed as the sum of the scores of 10 pairwise image comparisons (using an image matching method). The similarity score of two LFDs is then the minimum obtained by rotating the viewing sphere of one LFD relative to the other, over 60 correspondences in total. A set of ten light fields resulting from ten different rotations of the sphere is stored for each 3D model; that is, each 3D model is represented by 10 LFDs evenly distributed across the viewing sphere. The dissimilarity of two models is then the minimal dissimilarity obtained by comparing each of one model's ten light field descriptors against the other's. Using a test database of 1,833 models classified into 47 classes and one "miscellaneous" class of 1,284 models, the authors show that their method has better classification performance than other current 3D shape matching methods.

It should be pointed out that LFD assumes the two light fields contain the same number of images, captured from fixed viewpoints, i.e. ten vertices of a dodecahedron. As a result, it cannot handle the case where the two light fields contain different numbers of images. In most cases, we cannot expect that all the 3D objects gathered from different sources are represented by images captured with the same settings, i.e. the same number of viewpoints or the same fixed camera positions. Different researchers may collect the light fields of 3D objects as they wish or need; how, then, can such light fields be compared?

Our approach to 3D object retrieval and recognition also relies on a sampling of the viewing sphere to generate 2D views from different viewpoints. However, our method avoids matching two 3D models by matching the views one by one. Considering the spatial structure of the light field, we treat the images in a light field as a whole structured object and directly match the whole light fields without finding or rendering matching views. Therefore, we construct a light field retrieval scheme and focus on how to measure the distance between two light fields for effective retrieval.

The rest of the paper is organized as follows. Section 2 reviews the properties of the light field and summarizes the difficulties and particularities of light field retrieval. The distance measure between two light fields is proposed and justified in Section 3, followed by experimental results and analysis in Section 4. Finally, conclusions are drawn in Section 5.

2. LIGHT FIELD CHARACTERISTICS IN RETRIEVAL

A light field is a collection of light rays flowing through space in all directions, captured by a multi-camera array and recorded as multi-view images. With a light field, new images viewed from any position and in any direction can be rendered with little or even no geometry information involved. The main characteristics of the light field in representing 3D objects are as follows:

(1) The images in a light field are not independent and uncorrelated: they are spatially related, and there is an intrinsic structure according to their spatial locations.

(2) In current research on view-based 3D models, features are often extracted from each image independently, which cannot reflect this structural information. Though difficult, it is possible to extract features containing structural information and treat a light field as a whole; here, however, we represent the images by their pixels for simplicity.

(3) Different light fields may have different structures, because the distribution of viewpoints can be set arbitrarily, and the equipment and environment for light field collection can differ widely. Two light fields may contain different numbers of viewpoints at different positions. Therefore, we cannot match them view by view, since a view in one light field may have no corresponding view in the other. For example, this problem cannot be handled by the Light Field Descriptor.13

(4) Additionally, to render images of new views precisely and conveniently, the sampling density must be high enough to guarantee the speed of rendering and interactive viewing. As a result, light field data is of very high dimension, which poses a challenging problem for storage and computational load.

Due to the difficulties listed above, we need an effective method to analyze light field data, exploit the structure of the light field, and avoid the curse of dimensionality.

Let us focus on the definition of the light field. A light field is a 5-dimensional function representing the radiance at a given 3D point in a given direction; this 5-dimensional representation may be reduced to 4 dimensions in free space, as a consequence of the fact that the radiance does not change along a line unless blocked. The light field is then defined as the radiance at a point $p$ in direction $\vec{d}$. Each direction $\vec{d}$ can be parameterized by its intersections with two arbitrarily oriented planes $(u, v)$ and $(s, t)$, with the parameters $(u, v, s, t)$ restricted to lie in the range $[0, 1]$.11 Therefore, the plenoptic function $L(u, v, s, t)$ has only a few underlying degrees of freedom. That is to say, the light field data in the observation space is controlled by a few intrinsic variables and is generated from these evenly spaced low-dimensional latent variables. In other words, the images of a light field undergo certain distortions, such as shift, rotation, or a change of viewpoint, and are therefore highly related in the original high-dimensional space. Consequently, the data in a light field can be thought of as constituting highly nonlinear manifolds in the high-dimensional observation space.
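To make the two-plane parameterization above concrete, the following minimal sketch maps a 4-tuple $(u, v, s, t)$ to the ray it indexes. Placing the two planes parallel at $z = 0$ and $z = 1$ is an illustrative assumption; the definition allows arbitrarily oriented planes.

```python
import numpy as np

def ray_from_two_planes(u, v, s, t, z_uv=0.0, z_st=1.0):
    """Two-plane parameterization: a ray is indexed by its intersections
    (u, v) and (s, t) with two planes (here parallel, at z_uv and z_st,
    an illustrative assumption). L(u, v, s, t) is the radiance along it."""
    p0 = np.array([u, v, z_uv])        # intersection with the (u, v) plane
    p1 = np.array([s, t, z_st])        # intersection with the (s, t) plane
    d = p1 - p0
    return p0, d / np.linalg.norm(d)   # ray origin and unit direction
```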

3. DISTANCE MEASUREMENT BETWEEN LIGHT FIELDS

3.1 Linear Subspace Analysis

Consider an $M \times N \times 3$-dimensional point representing an RGB image of $M \times N$ pixels in a light field $L$; we seek to explore the interrelationship between these points instead of treating them independently. The Euclidean distance or Mahalanobis distance is usually used to measure distances between points in the observation space. However, in a light field, when an object is viewed from different positions and represented in images, it can be regarded as having undergone transformations such as shift, rotation, or scaling, which cannot be well accommodated by the Euclidean distance.

Since, as analyzed in Section 2, the images of a light field lie on a manifold, we turn to the tangent distance as a proper distance measure, because it assumes there is a sample manifold generated by certain transformations which can be approximated by a first-order Taylor expansion. That is, the tangent distance makes a linear approximation to the arbitrary transforms on the manifold. Here we need to construct a tangent vector $T_i$ for each transformation.

To approximate the local structure of the manifold at a particular point and find these tangent vectors, principal component analysis (PCA) is adopted. In essence, PCA reduces the dimension of the data by finding a few orthogonal linear combinations of the original points with the largest variance, and it is optimal in the mean-square error sense. Let $\{x_i\}, 1 \le i \le p$ represent the points belonging to a light field. First each point is centered with the mean vector $x_0$, giving $X = [x^{(1)}, \cdots, x^{(p)}]$, whose mean is zero. The covariance matrix is calculated as $\Sigma = XX^T$, which after spectral decomposition is written as
$$\Sigma = U \Lambda U^T$$
where $\Lambda = \mathrm{diag}(\lambda_1, \cdots, \lambda_p)$ is the diagonal matrix of the ordered eigenvalues $\lambda_1 \ge \cdots \ge \lambda_p$, and $U$ is a $p \times p$ orthogonal matrix containing the corresponding eigenvectors as columns. The matrix $T$ consisting of the $r$ tangent vectors is then formed by the $r$ eigenvectors corresponding to the $r$ largest eigenvalues. These are the directions along which the data varies with the largest variance, and they are regarded as the most representative directions of $x$. Each point of the light field can then be regarded as a linear combination of these $r$ vectors; that is, the light field can be represented as the subspace spanned by these $r$ tangent vectors. Measuring the distance between light fields is thus transformed into measuring the distance between two subspaces, or between two groups of basis vectors, which in substance is a matching of bases.

Denote $L^A$ and $L^B$ as two light fields, and let $x_i^A$ be a test point on $L^A$. For $L^B$, $T$ is the matrix consisting of its $r$ tangent vectors at $x_0^B$. Each point in the subspace spanned by the $r$ tangent vectors passing through $x_0^B$ represents the linear approximation to the full combination of transforms.
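A minimal sketch of this construction follows, assuming each image is flattened to a row vector; the SVD of the centered data matrix yields the same leading eigenvectors as the spectral decomposition of $\Sigma$ without forming the covariance explicitly.

```python
import numpy as np

def tangent_basis(L, r):
    """Tangent subspace of a light field. L: (p, d) array with one
    flattened M*N*3 image per row. Returns the mean x0 and T, a (d, r)
    matrix whose columns are the r leading principal directions
    (the tangent vectors)."""
    x0 = L.mean(axis=0)
    X = L - x0                               # centered points
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return x0, Vt[:r].T                      # eigenvectors of the r largest eigenvalues
```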

Figure 1. Example of Tangent Space.

When measuring the distance, we search for the point in this tangent space that is closest to a test point $x_i^A$: the linear approximation to our ideal. Formally, referring to Fig. 1, given the matrix $T$ consisting of the $r$ tangent vectors at $x_0^B$, the tangent distance from a point $x_i^A$ in $L^A$ to the point $x_0^B$ is
$$D_{\tan}\left(x_i^A, x_0^B\right) = \min_b \left\| \left(x_0^B + Tb\right) - x_i^A \right\|,$$
that is, the Euclidean distance from $x_i^A$ to the tangent space of $x_0^B$.
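Because the tangent vectors in $T$ are orthonormal eigenvectors, the minimizing $b$ is simply the projection coefficient $T^T(x_i^A - x_0^B)$, so the distance reduces to the norm of the residual orthogonal to the subspace; a minimal sketch:

```python
import numpy as np

def one_sided_tangent_distance(x, x0, T):
    """D_tan(x, x0): Euclidean distance from test point x to the tangent
    space {x0 + T b}. With orthonormal columns in T, the optimal b is
    T^T (x - x0), leaving the residual orthogonal to the subspace."""
    d = x - x0
    return np.linalg.norm(d - T @ (T.T @ d))
```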

However, if the manifold is considered as a whole instead of simply a set of points, the point-to-manifold or manifold-to-manifold distance can be defined differently. When constructing the tangent vectors, the PCA approximation of the local structure mentioned above can be extended to the global structure of the whole manifold: we perform PCA on all the points on the manifold, that is, all the images in the whole light field are taken into account. If the manifold is flat enough and can be represented well by the plane spanned by the tangent vectors, we can directly measure the distance $D(x_i^A, L^B)$ as
$$D_{\tan}\left(x_i^A, L^B\right) = \min_b \left\| \left(x_0^B + Tb\right) - x_i^A \right\|,$$
in which $x_0^B$ denotes the mean vector of all the points in light field $L^B$.

The distance discussed above is the so-called "one-sided" tangent distance. Furthermore, if the two manifolds are both flat enough and considered as wholes rather than separately, the two-sided tangent distance can be defined, which allows both manifolds to be approximated by the linear spaces spanned by their tangent vectors. Thus, if $T$ is the matrix of the $r$ tangent vectors at $x_0^B$ for light field $L^B$, and $S$ likewise at $x_0^A$, the mean vector of light field $L^A$, then the two-sided tangent distance is
$$TSD\left(L^A, L^B\right) = \min_{a,b} \left\| \left(x_0^B + Tb\right) - \left(x_0^A + Sa\right) \right\|.$$
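The joint minimum over $a$ and $b$ is a linear least squares problem in the stacked coefficients; a minimal sketch:

```python
import numpy as np

def two_sided_tangent_distance(x0A, S, x0B, T):
    """TSD(L^A, L^B): closest approach of the affine spaces x0A + S a
    and x0B + T b, solved as linear least squares in [b; a]."""
    M = np.hstack([T, -S])                   # (x0B + T b) - (x0A + S a) = M [b; a] - (x0A - x0B)
    coef, *_ = np.linalg.lstsq(M, x0A - x0B, rcond=None)
    return np.linalg.norm(M @ coef - (x0A - x0B))
```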

As the light field data has been represented by several bases, this distance $TSD$ can be regarded as the distance between the two tangent subspaces spanned by $S$ and $T$.

3.2 Nonlinear Subspace Analysis

As analyzed in Section 2, light field data in the observation space is controlled by several intrinsic variables and is generated by these latent variables. This justifies using the bases of the intrinsic low-dimensional subspace to represent the light field. Manifold learning algorithms have been used to automatically recover low-dimensional representations of such sets of images; among the many methods for discovering the intrinsic structure of a manifold, we mainly consider those based on spectral decomposition.

Kernel Principal Component Analysis (KPCA)14 is a nonlinear generalization of PCA. Its basic idea is to map the original data into a high-dimensional space in which PCA is performed. As with SVMs, the explicit nonlinear mapping can be circumvented by using a kernel function. The main advantage of kernel PCA is that it only requires solving an eigenvalue problem rather than a nonlinear optimization, and, owing to the diversity of available kernel functions, it comprises a fairly general class of nonlinearities. When applied to different light fields, as long as the same kernel is chosen, the resulting bases are comparable, and the similarity between two sets of bases defined above is reasonable. Additionally, nonlinear kernel dimensionality reduction methods15 including ISOMAP, Locally Linear Embedding (LLE), and Laplacian Eigenmaps (LE) can also be adopted here.

KPCA can thus be regarded as a method of nonlinear dimensionality reduction whose main purpose here is to discover the intrinsic structure of the light field data and construct a low-dimensional subspace which best approximates the essential space of the light field itself. In the reduction process, we could use the spatial structure as prior knowledge when defining the neighborhood, so a more accurate structure may be learnt.

After finding the intrinsic subspace of light field $L^A$, the points represented in low dimension are denoted as $\{\hat{x}_i^A, i = 1, \cdots, m\}$. Then the points of another light field $L^B$ are mapped into this subspace as test points, denoted as $\{\hat{x}_i^B, i = 1, \cdots, n\}$. Performing PCA on these points and finding their tangent vectors in the way described in Section 3.1, the tangent distance between the two low-dimensional subspaces can be calculated as the two-sided tangent distance, denoted $TSD'$.
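A concrete sketch of this procedure follows. It assumes scikit-learn's KernelPCA; the RBF kernel, the embedding dimension, and the subspace dimension r are illustrative choices rather than values specified here. It reuses the tangent_basis and two_sided_tangent_distance helpers sketched in Section 3.1.

```python
from sklearn.decomposition import KernelPCA

def low_dim_tangent_distance(LA, LB, n_components=10, r=3):
    """TSD': learn the intrinsic subspace of light field LA with kernel
    PCA, map the views of LB into the same embedding as test points,
    then compute the two-sided tangent distance there."""
    kpca = KernelPCA(n_components=n_components, kernel='rbf')
    xa_hat = kpca.fit_transform(LA)    # {x-hat^A_i} in low dimension
    xb_hat = kpca.transform(LB)        # {x-hat^B_i} mapped as test points
    x0A, S = tangent_basis(xa_hat, r)  # PCA tangent vectors, as in Sec. 3.1
    x0B, T = tangent_basis(xb_hat, r)
    return two_sided_tangent_distance(x0A, S, x0B, T)
```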

3.3 Final Defined Distance Measure

It may seem that the tangent distance in the high-dimensional space is enough, since it measures in the original observation space. In practice, however, this is not true. Consider the case illustrated in Fig. 2. It is possible that points $x_r^A$ and $x_t^B$ lie on a common manifold $M_{rt}$ in nature, while they seemingly belong to different light fields and lie on different sub-manifolds. But a point $x^C$ that does not lie on the manifold $M_{rt}$ may still have the same tangent distance to $x_r^A$, $D_{\tan}(x^C, x_r^A)$, as the tangent distance between $x_t^B$ and $x_r^A$, $D_{\tan}(x_t^B, x_r^A)$. Therefore, to avoid such unreasonable results, we should judge whether the point really lies on $M_{rt}$ or not. This can be realized by computing the distance in the low-dimensional subspace obtained through some nonlinear dimensionality reduction technique. Denote the mapped points as $\hat{x}_r^A$, $\hat{x}_t^B$, and $\hat{x}^C$, and the tangent distances in the low-dimensional space as $\hat{D}_{\tan}(\hat{x}_t^B, \hat{x}_r^A)$ and $\hat{D}_{\tan}(\hat{x}^C, \hat{x}_r^A)$. In the low-dimensional space, which is the intrinsic subspace of manifold $M_{rt}$, $\hat{D}_{\tan}(\hat{x}_t^B, \hat{x}_r^A)$ will be shorter than $\hat{D}_{\tan}(\hat{x}^C, \hat{x}_r^A)$, precisely because $x^C$ does not lie on the manifold $M_{rt}$.

Combining the distances in the high-dimensional and low-dimensional spaces yields the final distance $D_2$:
$$D_2\left(x_t^B, x_r^A\right) = D_{\tan}\left(x_t^B, x_r^A\right)^2 + \gamma\, \hat{D}_{\tan}\left(\hat{x}_t^B, \hat{x}_r^A\right)^2.$$
A detailed theoretical analysis can be found in the unified analysis of the probabilistic tangent subspace.16

Furthermore, this point-to-point distance in the high- and low-dimensional spaces can be generalized to the manifold-to-manifold distance of Section 3.1, i.e. the subspace-to-subspace distance. The manifold-to-manifold distance is likewise defined as the combination of the high-dimensional and low-dimensional distances; that is, combining $TSD$ and $TSD'$ as a weighted square summation yields the final distance:
$$D\left(L^A, L^B\right)^2 = TSD\left(L^A, L^B\right)^2 + \gamma\, TSD'\left(L^A, L^B\right)^2.$$
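Putting the pieces together, a minimal sketch of the combined measure, reusing the helpers sketched above; the weight gamma and subspace dimension r are illustrative assumptions, not values fixed here.

```python
import numpy as np

def final_distance(LA, LB, gamma=1.0, r=3):
    """Final dissimilarity D with D^2 = TSD^2 + gamma * TSD'^2."""
    x0A, S = tangent_basis(LA, r)
    x0B, T = tangent_basis(LB, r)
    tsd = two_sided_tangent_distance(x0A, S, x0B, T)   # high-dimensional TSD
    tsd_low = low_dim_tangent_distance(LA, LB, r=r)    # low-dimensional TSD'
    return np.sqrt(tsd**2 + gamma * tsd_low**2)
```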

Figure 2. Example of Disadvantage of Tangent Distance.

4. EXPERIMENTS AND ANALYSIS

The database for our experiments is available for practical trial use at http://3d.csie.ntu.edu.tw. It contains 10,911 3D models, all freely downloaded from the Internet, and is denoted "NTU 3D Model Database ver.1" on the web site. Several models from this database are illustrated in Fig. 3.

4.1 Verifying the Manifold Assumption

A group of view images of a 3D object is generated by capturing the object with cameras at 72 different positions. The cameras are placed at 72 points on a circle surrounding the object, with a rotation angle of 5 degrees between neighboring cameras, so the 72 points cover the whole 360 degrees of the circle. Each 3D object is thus represented by a set of 72 images. The placement of cameras is illustrated in Fig. 4.

According to the analysis in Section 2, when the viewpoints are sampled on a circle of constant radius under unchanged illumination, the scale and brightness are invariant, so the generated views are controlled by the single factor of rotation.

To verify the manifold assumption of Section 2, a preliminary experiment is implemented. KPCA is performed on a set of views of a 3D model, which are originally high-dimensional in their observation space; for example, an RGB image of $M \times N$ pixels is represented as an $M \times N \times 3$-dimensional point, and these points are highly nonlinear. The points are projected into a 2-dimensional space by the obtained projection matrix. The resulting 2-dimensional points are presented in Fig. 5, and they form a nearly perfect circle, as expected. If the axes are transformed to polar coordinates, the only variable that intrinsically controls these points is the rotation angle.
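A sketch of this check (scikit-learn's KernelPCA assumed; the RBF kernel is an illustrative choice): embed the 72 views in 2-D and read off each point's polar angle, which should track the 5-degree camera increments if rotation is indeed the single latent variable.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def embedding_angles(views):
    """views: (72, M*N*3) array, one flattened image per row.
    Returns the 2-D KPCA embedding and each point's polar angle."""
    pts = KernelPCA(n_components=2, kernel='rbf').fit_transform(views)
    angles = np.degrees(np.arctan2(pts[:, 1], pts[:, 0]))  # polar coordinate
    return pts, angles
```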

4.2 Experiments on 3D Object Retrieval

Traditionally, the precision vs. recall diagram is a common way of evaluating performance in document and visual information retrieval. Recall measures the ability of the system to retrieve all relevant models: it denotes the proportion of the relevant models that appear in the returned results. Precision measures the ability of the system to retrieve only relevant models: it denotes the proportion of relevant models among the returned results. They are defined as:
$$\text{Recall} = \frac{\text{relevant correctly retrieved}}{\text{all relevant}}, \qquad \text{Precision} = \frac{\text{relevant correctly retrieved}}{\text{all retrieved}}$$
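For a single query, these definitions reduce to the following small helper (a sketch; `retrieved` is the ranked result list and `relevant` the set of ground-truth class members, both hypothetical names):

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query, per the definitions above."""
    rel = set(relevant)
    hits = sum(1 for m in retrieved if m in rel)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(rel) if rel else 0.0
    return precision, recall
```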

In general, to measure recall and precision, a ground-truth database is needed to assess the relevance of models to the queries. Therefore, a test database is first formed by randomly selecting 3D models from the "NTU 3D Model Database".

Figure 3. Several 3D models from the database used to evaluate the performance of the retrieval and recognition algorithms presented below.

200 models are selected, covering 20 different classes with 10 objects in each class. The selection and classification were done by a student independent of this research, acting as a human evaluator, so the ground truth of our database is objective. In our experiments, each object is represented by a group of 72 images captured as described in Section 4.1. When a user queries with a 3D object, that is, chooses a group of images as the input, our system searches for similar objects using the distance measure defined in Section 3.

To demonstrate the effectiveness of our proposed algorithm, the Light Field Descriptor13 is chosen for comparison, since it provides good retrieval performance and is regarded as the state-of-the-art algorithm in several survey papers.17 Another traditional and simple method is also implemented, which finds the nearest pair of images in two light fields and uses their distance as the distance between the two light fields. This method performs a partial match, since it considers only the nearest view instead of all the views: if one view of a model is similar to a view of the other model, the two models are regarded as similar.
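A sketch of this nearest-pair baseline; `image_distance` is a hypothetical placeholder for whichever 2D image dissimilarity is used, since any per-image metric fits here.

```python
def nearest_pair_distance(LA, LB, image_distance):
    """Baseline light-field distance: the smallest distance over all
    cross pairs of views, i.e. a partial match on the single most
    similar pair. LA and LB are sequences of images."""
    return min(image_distance(a, b) for a in LA for b in LB)
```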

Figure 4. The viewpoints are sampled on a circle surrounding the object. Several representative view images are shown.

In the original work,13 a 3D model was represented by 10 LFDs, each with 10 silhouettes. However, for a fair comparison the algorithms should use the same form of light field; therefore, the light field with 72 views distributed on a circle is adopted here for all methods. The three experiments are performed on the light fields generated from the 200-model database mentioned above. The precision-recall curves are illustrated in Fig. 6, from which it can be concluded that our algorithm outperforms both LFD and the nearest-neighbor method.

4.3 Experiments on 3D Object Recognition

In the 3D object recognition experiments, the database is almost the same as that used in Section 4.2, and the algorithms for comparison are again the LFD13 and the nearest neighbor. In 3D object recognition, the goal is to identify which class a model belongs to. The distances between a test object and all the training objects are computed and sorted in ascending order. To evaluate the recognition performance, the following rules are defined (a sketch follows this list):

(a) The training object with the smallest distance to the test object is called the "best match"; the test object is classified according to the best match.

(b) The three training objects with the 3 smallest distances to the test object are selected; if two of them belong to the same class, the test object is assigned to this class too, otherwise rule (a) applies. This is called "3-NN voting".

(c) The five training objects with the 5 smallest distances to the test object are selected; if three of them belong to the same class, the test object is assigned to this class too, otherwise rule (a) applies. This is called "5-NN voting".

In the experiment, one object is selected as test data each time, and the other 199 serve as training data. This is repeated 200 times, so every model is selected as test data exactly once.
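The three rules reduce to one k-nearest-neighbor vote with a best-match fallback; a minimal sketch (k = 1, 3, 5 corresponds to rules (a), (b), (c)):

```python
from collections import Counter

def classify(distances, labels, k=1):
    """distances[i]: distance from the test object to training object i;
    labels[i]: its class. Majority vote among the k nearest (2 of 3 for
    3-NN, 3 of 5 for 5-NN); otherwise fall back to the best match."""
    order = sorted(range(len(distances)), key=distances.__getitem__)
    cls, count = Counter(labels[i] for i in order[:k]).most_common(1)[0]
    if count > k // 2:
        return cls                 # majority among the k nearest
    return labels[order[0]]        # rule (a): best match
```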

Figure 5. Example of the manifold tangent space. The dimensionality of the original images is reduced to 2, and each point corresponds to one original image. In this 2-dimensional space, the points are distributed on a circle, just as the original images were captured in Fig. 4.


Figure 6. Comparison of the nearest-neighbor method, LFD,13 and our proposal in 3D object retrieval.

Each time, whether the test object is assigned to the correct class is judged according to the ground truth. The final average accuracies of "best match", "3-NN voting", and "5-NN voting" over the 200 runs are listed in Table 1.

From the experiments above, it can be concluded that our proposal outperforms the state-of-the-art algorithm in both 3D object retrieval and recognition. Additionally, as analyzed in Section 3, our method is robust to varying positions and numbers of cameras, since it does not need to match the images of two groups one by one; the views can therefore be set arbitrarily.

Table 1. Comparison of 3D object recognition accuracy; our proposal performs best.

Method          best match    3-NN voting    5-NN voting
NN              48%           37%            28.5%
LFD             58%           39.5%          34%
Our Proposal    61.5%         45%            39%


5. CONCLUSION

As an effective representation of 3D objects, the light field is becoming widely used because its cost is independent of the complexity of the scene or model. For light field retrieval, a distance measure between light fields is defined based on tangent subspace analysis. In the original space, a tangent distance is calculated based on PCA to approximate the local structure of the data. Through a nonlinear dimensionality reduction method, a low-dimensional subspace is obtained; all the observation points are mapped into this subspace, and another distance is calculated there as a supplement to that in the high-dimensional space. We have argued that the combination of the two distances is reasonable, and our experiments also demonstrate this. Our proposed distance measure shows better performance than the well-known state-of-the-art algorithm "LFD" in both retrieval and recognition. Last but not least, our method is not limited by the number and positions of the viewpoints and can therefore be widely used in real applications, whereas "LFD" cannot handle this situation.

ACKNOWLEDGMENTS

This work is supported by the NSFC project No. 60772048 and the NSFC key project No. 60432030.

REFERENCES

1. M. Kazhdan, T. Funkhouser, and S. Rusinkiewicz, "Rotation invariant spherical harmonic representation of 3D shape descriptors," in Proceedings of the Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, 2003.
2. C. Zhang and T. Chen, "Efficient feature extraction for 2D/3D objects in mesh representation," in ICIP 2001.
3. R. Ohbuchi, T. Otagiri, M. Ibato, and T. Takei, "Shape-similarity search of three-dimensional models using parameterized statistics," in Pacific Graphics 2002.
4. H.-Y. Shum, M. Hebert, and K. Ikeuchi, "On 3D shape similarity," in IEEE Conference on Computer Vision and Pattern Recognition, 1996.
5. Y. Liu, H. Zha, and H. Qin, "Shape topics: A compact representation and new algorithms for 3D partial shape retrieval," in IEEE Conference on Computer Vision and Pattern Recognition, 2006.
6. S. Nayar, S. Nene, and H. Murase, "Real-time 100 object recognition system," in IEEE International Conference on Robotics and Automation, 1996.
7. T. Cootes, C. Taylor, D. Cooper, and J. Graham, "Active shape models: Their training and application," Computer Vision and Image Understanding 61(1), pp. 38-59, 1995.
8. S. Ullman and R. Basri, "Recognition by linear combinations of models," IEEE Transactions on Pattern Analysis and Machine Intelligence 13(10), pp. 992-1006, 1991.
9. A. Johnson and M. Hebert, "Using spin images for efficient object recognition in cluttered 3D scenes," IEEE Transactions on Pattern Analysis and Machine Intelligence 21(5), pp. 433-449, 1999.
10. J. Löffler, "Content-based retrieval of 3D models in distributed web databases by visual shape information," in IV2000.
11. M. Levoy and P. Hanrahan, "Light field rendering," in Computer Graphics (SIGGRAPH 96 Proceedings), 1996.
12. S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, "The lumigraph," in Computer Graphics (SIGGRAPH 96 Proceedings), 1996.
13. D.-Y. Chen, X.-P. Tian, Y.-T. Shen, and M. Ouhyoung, "On visual similarity based 3D model retrieval," in EUROGRAPHICS 2003.
14. B. Schölkopf, A. Smola, and K.-R. Müller, "Nonlinear component analysis as a kernel eigenvalue problem," Neural Computation, 1998.
15. J. Ham, D. Lee, S. Mika, and B. Schölkopf, "A kernel view of the dimensionality reduction of manifolds," in Proceedings of the 21st International Conference on Machine Learning, 2004.
16. J. Lee, J. Wang, C. Zhang, and Z. Bian, "Probabilistic tangent subspace: A unified view," in Proceedings of the 21st International Conference on Machine Learning, 2004.
17. J. Tangelder and R. Veltkamp, "A survey of content based 3D shape retrieval methods," in Shape Modeling Applications 2004 Proceedings.
