A Lightweight 3D Visualization and Navigation System on Handheld Devices

Wendel B. Silva
Mestrado em Informática Aplicada
Universidade de Fortaleza - UNIFOR
Av. Washington Soares, 1321 J(30)
60811-905 Fortaleza–CE, Brazil
[email protected]

Maria Andréia Formico Rodrigues
Mestrado em Informática Aplicada
Universidade de Fortaleza - UNIFOR
Av. Washington Soares, 1321
60811-905 Fortaleza–CE, Brazil
[email protected]

ABSTRACT

This work presents a lightweight 3D visualization and navigation system we have proposed and implemented on handheld devices, using the Open Graphics Library for Embedded Systems (OpenGL ES) API. The visibility algorithms view-frustum culling, backface culling (the latter available in the OpenGL ES API), and a combination of view-frustum culling and backface culling, associated with different depth levels of Octrees (used to partition the 3D scene), were implemented and used to optimize the processing time required to render 3D graphics. The system was then tested using these combinations of algorithms, and performance analyses were conducted for situations where the camera walks through an environment containing 6199 polygons. The results show that navigation at interactive rates of 10.07 and 30.61 frames per second can be obtained using the PocketPC iPaq hx2490b and the mobile phone Nokia n82, respectively.

Categories and Subject Descriptors

I.3.7 and I.3.8 [Computer Graphics]: Hidden line/surface removal — Applications and Graphics Systems, respectively

Keywords

3D Visualization, Visibility Algorithms, Octrees, Handheld Devices, Graphical Application

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SAC'09 March 8-12, 2009, Honolulu, Hawaii, U.S.A. Copyright 2009 ACM 978-1-60558-166-8/09/03 ...$5.00.

1. INTRODUCTION

Normally, in a 3D environment it is not possible to visualize all the surfaces of all the objects simultaneously from a single observer viewpoint. Consequently, objects, or parts of objects, that are not visible to the observer must be removed from the rendering pipeline. This procedure is important since it diminishes the quantity of polygons to be rendered in a scene. Visibility is, in fact, a fundamental and complex problem for which no optimum solution exists. Several factors influence this problem, such as: the number of pixels to be painted, the number of objects in a scene, the geometric complexity of the objects, the object distribution, the object dynamics (that is, whether they are moving or static), the level of realism of the objects (presence of textures, transparencies), etc.

Data structures that represent recursive partitions of space are often used in graphical applications and, in particular, can be associated with visibility algorithms to obtain better performance in 3D rendering and navigation. The challenge of optimizing visibility algorithms for interactive rendering of polygons becomes even more complex when the execution platform is a mobile device. The progressive increase in the graphics processing and storage capacity of these devices points to a scenario of interaction where users will make use of more realistic graphical applications, for example, 3D visualization of data and navigation through virtual worlds in areas such as medicine, engineering, and entertainment. However, despite the significant technological advances witnessed in the past years, the majority of mobile devices still present important limitations when compared to traditional personal computers: low processing power, little storage memory, restricted screen size and low resolution, limited forms of interaction with the user, etc.

The development of 3D graphical applications for mobile devices that take these restrictions into account is a recent area of research, still little explored by software developers and researchers. In this sense, there is an evident demand for optimization proposals that promote the efficient use of the different technological resources available in each type of mobile device, in such a way as to ensure implementations that are compact and, simultaneously, realistic.
This work presents a lightweight 3D visualization and navigation system we have proposed and implemented on handheld devices, using the Open Graphics Library for Embedded Systems (OpenGL ES) API. The visibility algorithms view-frustum culling, backface culling (the latter available in the OpenGL ES API), and a combination of view-frustum culling and OpenGL ES backface culling, associated with different depth levels of Octrees (used to partition the 3D scene), were implemented and used to optimize the processing time required to render 3D graphics. The system was then tested using these combinations of algorithms, and performance analyses were conducted for situations where the camera walks through an environment containing 6199 polygons. The results show that navigation at interactive rates of 10.07 and 30.61 frames per second can be obtained using the PocketPC iPaq hx2490b (without GPU) and the mobile phone Nokia n82 (with GPU), respectively, with the application occupying 120 KB of memory.

2. RELATED WORK

Many acceleration techniques have been developed to increase visualization speed in complex graphical environments composed of a large quantity of polygons [1]. Visibility algorithms are among these techniques. In this context, Cohen-Or et al. conducted a detailed comparative study of different existing visibility methods [6]. Visibility algorithms seek the efficient removal of the non-visible parts that compose a scene, so that they are not processed by the rendering pipeline. Among the best-known algorithms are methods executed in a preprocessing (offline) phase and those executed at application run time (online). Visibility algorithms can also be classified by their working space: there are algorithms that work in object space, using 3D information from the environment, and those that operate in image space, using a 2D representation [7] of the 3D space [8, 3]. In this work we use 3D information from the environment, and the visibility algorithms implemented are executed at the software level.

Lluch et al. proposed a client/server application where the hidden part of the environment is removed on the server [13], and the scene is shown on the handheld device using the Klimt API. In our work, removal of the hidden geometry occurs on the client itself, not requiring access to a server. Chang and Ger developed an application for handheld devices for viewing 3D graphical environments [5]; however, contrary to our work, their application uses image-based rendering. Garcia et al. developed a framework and a virtual rescue training application for firefighters, aimed at simulating an indoor space, which allows users to navigate through 3D environments, find an optimal and collision-free path from one place to another, and obtain additional information on objects [2]. Hudson et al. consider that objects positioned in the shadow generated by obstacles are not visible to the observer [11]. Starting from this idea, they described a visibility method based on the dynamic choice of a set of obstacles and on the calculation of their respective shadows. Differently, in this work we use spatial partitioning structures, in this case Octrees, to exploit spatial coherence. Duguet and Drettakis proposed an efficient way to display complex geometries [9]. However, differently from this work, they did not use the OpenGL ES API, their tested geometric model uses point-based rendering, and their work focuses on a spatial data structure, namely P-grids, and on the memory needed to store massive models. Nurminen developed a 3D virtual map application for small devices, in which only one visibility algorithm, occlusion culling, is implemented [14]. In this work, apart from implementing different visibility algorithms and analyzing system performance, we combined these algorithms with each other, as well as with Octrees of different depth levels, with the objective of obtaining interactive rendering rates.

Finally, some authors have carried out comparative studies of 3D applications on mobile devices. Pulli [15] elaborated quite a detailed study of some of the APIs most widely used on handheld devices. Hwang et al. [12] explored camera control parameters to generate different possibilities of scene views. Hachet et al. [10] developed interaction methods for navigating virtual environments on mobile devices. However, in none of these works were visibility algorithms combined with each other, or with any spatial data structure, to optimize and demonstrate the performance gains of the application.

3. SYSTEM OVERVIEW

The system we developed consists of a set of class packets (which include the view-frustum culling and backface culling visibility algorithms, combined with Octrees of different depth levels), with the main objective of ensuring locomotion of the camera at interactive rates through the 3D environment visualized on handheld devices. The view-frustum culling algorithm can discard many polygons in the scene, depending on its geometric complexity, removing those that are outside the viewing volume, and can be associated with other visibility algorithms. In our system, among the intersection methods used in the view-frustum culling algorithm, we implemented the intersection tests between the viewing volume and a point, between the viewing volume and a sphere, and between the viewing volume and a cube. The system also presents two implementations of the backface culling algorithm: one based on the traditional algorithm [16], and another available in the OpenGL ES API. The depth limit of the Octree can be specified by the developer. Object-oriented programming is used, which makes possible easy reuse and extension of the class packets available in the system for the development of other 3D graphical applications. In particular, two programming languages were used: Java 1.6 for the implementation of the module executed in a pre-processing phase (which uses the Java3D graphics API), and C++ for the application module (the latter together with the OpenGL ES graphics API). Besides the desktop computer, the present version of the implementation is also available for mobile devices, such as Smartphones and Pocket PCs with Windows Mobile, and mobile phones with Symbian OS.
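As an illustration of the intersection tests just described, the sketch below implements the point and sphere tests against a viewing volume represented as six inward-facing planes. This is a minimal reconstruction, not the system's actual code: the `Plane`, `Vec3` and `Frustum` names and the Hessian plane convention are our assumptions (the cube test would proceed analogously, checking the box vertex farthest along each plane normal).

```cpp
#include <array>

// A plane in Hessian normal form: nx*x + ny*y + nz*z + d = 0, with the
// normal pointing toward the inside of the view frustum.
struct Plane { float nx, ny, nz, d; };

struct Vec3 { float x, y, z; };

// Signed distance from a point to a plane (positive = inner half-space).
static float signedDistance(const Plane& pl, const Vec3& p) {
    return pl.nx * p.x + pl.ny * p.y + pl.nz * p.z + pl.d;
}

// Viewing volume = six inward-facing planes (left, right, top, bottom,
// near, far).
struct Frustum {
    std::array<Plane, 6> planes;

    // Point test: inside only if on the inner side of all six planes.
    bool containsPoint(const Vec3& p) const {
        for (const Plane& pl : planes)
            if (signedDistance(pl, p) < 0.0f) return false;
        return true;
    }

    // Sphere test: conservatively visible unless the sphere lies
    // entirely behind some plane.
    bool intersectsSphere(const Vec3& center, float radius) const {
        for (const Plane& pl : planes)
            if (signedDistance(pl, center) < -radius) return false;
        return true;
    }
};
```

Note that the sphere test is conservative: a sphere near a frustum corner may be reported visible even when it is slightly outside, which only costs a few extra triangles in the pipeline, never a missing one.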

3.1 Architecture

Our system architecture is fundamentally composed of two basic modules: 1) one that partitions the 3D environment into spatial data structures; and 2) one that contains the packets of the visibility algorithms and of the spatial partitioning structure to be used in the graphical applications developed. In our system, the combined use of visibility algorithms and Octrees of different depth levels is an important strategy to diminish the number of polygons to be rendered in a scene and, therefore, to ensure the execution of graphical applications at interactive rates on handheld devices. Both modules can be executed on personal computers and on mobile devices (Figure 1). However, the first module, which operates in a pre-processing phase, depends on the processing capacity of the execution platform (whether a traditional personal computer or a mobile device). The second module, in turn, can be executed on different platforms, as long as they provide an implementation of the OpenGL ES API.

Figure 1: In (a), due to processing power restrictions of the mobile device, the pre-processing module executes on the personal computer (in this case the partitioned environment can be accessed locally, for example via USB, or remotely via network), and the visibility algorithms are executed at the software level on the mobile device. In (b), the two modules execute on the device.

Our proposed architecture is fundamentally based on the principle of separation of data, presentation, and interaction mechanisms, using the Model-View-Controller (MVC) architectural pattern [4], which partitions the application into three parts: Model, View and Controller (in this work denominated Device Controller). The Model administrates current system data and the behavior of 3D objects, makes the required data available to the View, and executes the instructions interpreted by the Device Controller. Additionally, the visibility algorithms are implemented in the Model, as is the Octree spatial partitioning structure (Figure 2). The View contains basic information for the rendering of 3D scenes on the device screen, as well as the camera information (angle of view, aspect ratio, and near and far planes). At each frame generated during the locomotion of the observer (camera), the View requests from the Model the data to be rendered. Other view profiles can be created and/or attached, for example, to generate images with differing levels of detail depending on the computing resources available on the mobile device. Furthermore, the OpenGL ES API can be substituted by other graphics APIs available for mobile devices, as long as they present characteristics and functionalities similar to those of OpenGL ES, such as types of geometric primitives and ways of grouping, mapping and normalization between coordinate systems. The Device Controller interprets the input operations of a device, such as keyboard, pen and joystick commands, passing the translated instructions to the Model. In reality, the Device Controller corresponds to the interaction interface between the user and the application.
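The recursive partitioning performed by the first module, and later held by the Model, can be sketched as follows: a node is split into eight equal octants until the developer-specified depth limit is reached, and each triangle (reduced here to its centroid) descends into the child cell that contains it. This is an illustrative sketch under our own naming, not the system's actual classes; a production version would also pick a convention for triangles straddling a splitting plane (here a centroid lying exactly on one would land in more than one child).

```cpp
#include <array>
#include <memory>
#include <vector>

// Axis-aligned bounding box of an octree cell.
struct AABB {
    float min[3], max[3];
    bool contains(const float p[3]) const {
        for (int i = 0; i < 3; ++i)
            if (p[i] < min[i] || p[i] > max[i]) return false;
        return true;
    }
};

struct OctreeNode {
    AABB bounds;
    std::vector<int> triangles;  // indices into the mesh
    std::array<std::unique_ptr<OctreeNode>, 8> children;
};

// Recursively split a node into 8 equal octants until maxDepth is
// reached; each triangle (represented by its centroid) descends into
// the child cell that contains it.
void subdivide(OctreeNode& node,
               const std::vector<std::array<float, 3>>& centroids,
               int depth, int maxDepth) {
    if (depth == maxDepth || node.triangles.size() <= 1) return;
    float mid[3];
    for (int i = 0; i < 3; ++i)
        mid[i] = 0.5f * (node.bounds.min[i] + node.bounds.max[i]);
    for (int oct = 0; oct < 8; ++oct) {
        auto child = std::make_unique<OctreeNode>();
        for (int i = 0; i < 3; ++i) {
            bool high = (oct >> i) & 1;  // bit i selects the high half on axis i
            child->bounds.min[i] = high ? mid[i] : node.bounds.min[i];
            child->bounds.max[i] = high ? node.bounds.max[i] : mid[i];
        }
        for (int t : node.triangles)
            if (child->bounds.contains(centroids[t].data()))
                child->triangles.push_back(t);
        subdivide(*child, centroids, depth + 1, maxDepth);
        node.children[oct] = std::move(child);
    }
    node.triangles.clear();  // interior nodes hold no geometry
}
```

At render time, culling then proceeds cell by cell: a whole subtree whose bounding box fails the view-frustum test is skipped without touching its triangles.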

3.2 Interface

During execution of the system, the user can move around the environment using the device joystick: the up and down buttons translate the position of the observer forwards and backwards, respectively, and the left and right buttons rotate the observer around the y axis. The user can also opt to enable or disable a given visibility algorithm, and can additionally use different combinations of visibility algorithms. Besides user-guided navigation, the system also offers the possibility of recording a specific locomotion trajectory through the environment, defined by the user and reproduced automatically.
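The joystick mapping above boils down to updating a camera position and a yaw angle each frame. A minimal sketch, with names and step sizes of our own choosing rather than the system's actual classes:

```cpp
#include <cmath>

// Minimal camera matching the controls described above: up/down move
// along the viewing direction on the ground plane, left/right rotate
// about the y axis.
struct Camera {
    float x = 0, z = 0;  // position on the ground plane
    float yawRad = 0;    // rotation about the y axis

    void move(float step) {  // step > 0 forward, step < 0 backward
        x += step * std::sin(yawRad);
        z -= step * std::cos(yawRad);  // -z is "forward", as in OpenGL
    }
    void turn(float rad) { yawRad += rad; }
};
```

The Device Controller would call `move(+step)`/`move(-step)` for the up/down buttons and `turn(+step)`/`turn(-step)` for left/right, then hand the updated pose to the Model.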

Figure 2: The system architecture, based on the MVC software pattern.

In the following section we specify in detail the methodology defined and used in the tests conducted, as well as the results obtained.

4. TESTS AND RESULTS

In this work, the preprocessing phase (which partitions the 3D environment into spatial data structures) was executed on a personal computer, following the execution platform shown in Figure 1(a). To carry out the system performance analysis tests we used two different mobile platforms: the PocketPC iPaq hx2490b, with 128 MB of internal memory, 64 MB of heap memory, and a 520 MHz Intel PXA270 processor, without a GPU; and the mobile phone Nokia n82, with 100 MB of internal memory, 128 MB of heap memory, and a 332 MHz ARM 11 processor, with a GPU.

The modeled 3D world corresponds to an indoor environment composed of two floors, containing a set of 40 rooms aligned with the coordinate axes, and 6199 triangles distributed uniformly around the scene (Figure 3 shows one of the floors of the environment). Initially, we specified and recorded a trajectory of camera locomotion (containing 1476 frames) through the environment. This trajectory is formed by reference points A, B, C, D, E, F and G, as shown in Figure 4, and was pre-recorded to ensure greater control of the experiments conducted, as well as to facilitate the reproduction of the tests.

Figure 3: At the top, a view of the indoor modeled 3D environment containing 6199 triangles. On the left, its visualization on the PocketPC iPaq hx2490b and, on the right, on the mobile phone Nokia n82.

Next, using the PocketPC iPaq, four tests were carried out with different combinations of visibility algorithms: one without any visibility algorithm, and three using visibility algorithms (view-frustum culling, OpenGL ES backface culling, and view-frustum culling combined with OpenGL ES backface culling), as shown in Figure 5. As for the traditional backface culling algorithm we implemented, it should be emphasized that it did not achieve competitive advantages in relation to the OpenGL ES backface culling algorithm, and thus it was excluded from our graphical results and performance analysis.

Figure 5: Processing time expended for the rendering of the different visibility algorithms along the trajectory.

The results show that the combination of the view-frustum culling and OpenGL ES backface culling algorithms presents the best performance for the indoor modeled environment (solid light grey curve, filled without contour, in Figure 5), expending, on average, 134.10 ms to render each frame. More specifically, the two combinations that contain the view-frustum culling algorithm (solid light grey curves, filled with and without contours) obtained close results, with a maximum difference of 2 ms in the rendering of each frame. Among the other curves, the OpenGL ES backface culling algorithm (solid medium grey curve) needed 192.9 ms, on average, to render each frame.

The processing time necessary to render each frame is related to the number of triangles sent to the rendering pipeline along the trajectory. We can observe that the curve that represents the time expended on the rendering process has a shape similar to the curve that represents the number of triangles sent to the rendering pipeline (black and grey curves in Figure 6, respectively). More objectively, the resulting correlation value computed between the numerical values that compose these two curves is +0.87. Since the maximum positive correlation is 1, a correlation value of +0.87 means that the corresponding variables closely vary together in the same direction, indicating a strong relationship.
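The per-frame averages quoted above and the frame rates reported in the Octree tests are related by a simple conversion (1000 ms divided by the average frame time). A small helper of our own, for illustration:

```cpp
#include <vector>

// Average frame time (ms) over a recorded trajectory.
double averageFrameTimeMs(const std::vector<double>& frameTimesMs) {
    double sum = 0;
    for (double t : frameTimesMs) sum += t;
    return sum / frameTimesMs.size();
}

// Corresponding frame rate: 1000 ms per second / avg frame time in ms.
double framesPerSecond(double avgFrameTimeMs) {
    return 1000.0 / avgFrameTimeMs;
}
```

For example, the 134.10 ms average quoted above corresponds to roughly 7.5 frames/s on the iPaq before the Octree partitioning is brought in.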

Figure 6: Along the trajectory, on the right ordinate axis, the processing time for rendering the scenes that compose the environment and, on the left ordinate axis, the number of triangles sent to the rendering pipeline.

We also conducted tests with different depth levels for the Octree, with the purpose of identifying the level with the best performance for this type of data structure, used for the spatial partitioning of the indoor modeled environment. In particular, we tested depth levels 3, 4, 5 and 6 (Figure 7). We then identified that the Octree structure with 4 depth levels (solid black curve in Figure 7), in conjunction with the best combination of visibility algorithms (view-frustum culling and OpenGL ES backface culling), reached rates of approximately 4 to 17 frames/s in the worst and best cases, respectively, on the PocketPC iPaq hx2490b, with an average rate of 10.07 frames/s. On the mobile phone Nokia n82, the Octree with 4 depth levels, in conjunction with the view-frustum culling and OpenGL ES backface culling algorithms, reached rates of approximately 11.03 and 64.93 frames/s in the worst and best cases, respectively, with an average rate of 30.61 frames/s. Also, on the Nokia n82, the resulting correlation between the time expended on the rendering process and the number of triangles sent to the rendering pipeline was +0.97, indicating a strong relationship.
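The +0.87 and +0.97 values above are consistent with a Pearson correlation coefficient computed between the two per-frame series (render time and triangle count); the sketch below shows that computation under this assumption.

```cpp
#include <cmath>
#include <vector>

// Pearson correlation coefficient between two equally long series,
// e.g. per-frame rendering time vs. triangles sent to the pipeline.
double pearson(const std::vector<double>& a, const std::vector<double>& b) {
    const std::size_t n = a.size();
    double ma = 0, mb = 0;
    for (std::size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;
    double cov = 0, va = 0, vb = 0;  // covariance and variances (unnormalized)
    for (std::size_t i = 0; i < n; ++i) {
        cov += (a[i] - ma) * (b[i] - mb);
        va  += (a[i] - ma) * (a[i] - ma);
        vb  += (b[i] - mb) * (b[i] - mb);
    }
    return cov / std::sqrt(va * vb);
}
```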

Figure 4: Trajectory formed by reference points A, B, C, D, E, F and G, and the respective fields of view of the camera used for the tests.

5. CONCLUSIONS AND FUTURE WORK

Figure 7: Processing time for the rendering of the Octree with different depth levels along the trajectory.

This work presented a lightweight 3D visualization and navigation system we implemented on handheld devices, using the OpenGL ES API. Different combinations of visibility algorithms and different depth levels of Octrees were implemented to optimize the processing time required to render an indoor 3D environment containing 6199 polygons. We showed that 3D visualization and navigation at interactive rates can be carried out on mobile devices, with or without a GPU (the system was tested on the mobile phone Nokia n82 and on the PocketPC iPaq hx2490b), by means of the combined use of the view-frustum culling and OpenGL ES backface culling algorithms, associated with the Octree structure with 4 depth levels. Additionally, along the trajectory around the environment, it was demonstrated that both the processing time necessary for the execution of the view-frustum culling algorithm and the number of triangles sent to the rendering pipeline influenced the performance of the Octree structures; in the best case (Octree with 4 levels), average rendering rates of 10.07 and 30.61 frames/s were attained on the PocketPC iPaq and the mobile phone Nokia n82, respectively. Moreover, on the PocketPC iPaq, we showed subjectively that the curve representing the processing time expended on rendering has a shape similar to the curve representing the number of triangles sent to the pipeline and, objectively, that they have a correlation of +0.87 (on the mobile phone Nokia n82, +0.97). We also conclude that, depending on the type of modeled 3D environment and its geometric complexity, different combinations of visibility algorithms and spatial partitioning structures should be explored to obtain the best rendering performance.

As future work, we foresee extending the system by implementing other visibility algorithms, such as occlusion culling, as well as other spatial partitioning structures, such as Grids and Kd-Trees. Finally, different 3D environments, with different levels of complexity, whether indoor, outdoor, or both, can still be explored to verify system performance.

6. ACKNOWLEDGMENTS

Maria Andréia F. Rodrigues is supported by the Brazilian Agency CNPq under grant No. 303046/2006-6 and would like to thank it for its financial support.

7. REFERENCES

[1] T. Akenine-Möller and E. Haines. Real-Time Rendering. A. K. Peters, 2nd edition, 2002.
[2] R. G. Barbosa and M. A. F. Rodrigues. Supporting Guided Navigation in Mobile Virtual Environments. In Proceedings of the 13th Symposium on Virtual Reality Software and Technology (VRST'06), pages 220–226. ACM Press, 2006.
[3] J. Bittner, V. Havran, and P. Slavík. Hierarchical Visibility Culling with Occlusion Trees. In Proceedings of Computer Graphics International 1998 (CGI'98), pages 207–219, 1998.
[4] F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad, and M. Stal. Pattern-Oriented Software Architecture, Volume 1: A System of Patterns. John Wiley & Sons, August 1996.
[5] C.-F. Chang and S.-H. Ger. Enhancing 3D Graphics on Mobile Devices by Image-based Rendering. In Proceedings of the 3rd IEEE Pacific Rim Conference on Multimedia (PCM'02), pages 1105–1111, London, UK, 2002. Springer-Verlag.
[6] D. Cohen-Or, Y. Chrysanthou, C. T. Silva, and F. Durand. A Survey of Visibility for Walkthrough Applications. IEEE Transactions on Visualization and Computer Graphics, 9(3):412–431, 2002.
[7] D. Cohen-Or, E. Rich, U. Lerner, and V. Shenkar. A Real-time Photo-realistic Visual Flythrough. IEEE Transactions on Visualization and Computer Graphics, 2:255–264, 1996.
[8] S. R. Coorg and S. J. Teller. Temporally Coherent Conservative Visibility. In Symposium on Computational Geometry, pages 78–87, 1996.
[9] F. Duguet and G. Drettakis. Flexible Point-based Rendering on Mobile Devices. IEEE Computer Graphics and Applications, 24(4), July–August 2004.
[10] M. Hachet, F. Decle, and P. Guitton. Z-Goto for Efficient Navigation in 3D Environments from Discrete Inputs. In Proceedings of the 13th Symposium on Virtual Reality Software and Technology (VRST'06), pages 236–239, New York, NY, USA, 2006. ACM Press.
[11] T. Hudson, D. Manocha, J. Cohen, M. C. Lin, K. E. Hoff III, and H. Zhang. Accelerated Occlusion Culling Using Shadow Frusta. In Proceedings of the ACM Symposium on Computational Geometry, pages 1–10, 1997.
[12] J. Hwang, J. Jung, and G. J. Kim. Hand-held Virtual Reality: a Feasibility Study. In Proceedings of the 13th Symposium on Virtual Reality Software and Technology (VRST'06), pages 356–363. ACM Press, 2006.
[13] J. Lluch, R. Gaitán, E. Camahort, and R. Vivó. Interactive Three-dimensional Rendering on Mobile Computer Devices. In Proceedings of the ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (ACE'05), pages 254–257. ACM Press, 2005.
[14] A. Nurminen. m-LOMA - A Mobile 3D City Map. In Proceedings of the 11th International Conference on 3D Web Technology (Web3D'06), pages 7–18. ACM Press, 2006.
[15] K. Pulli. APIs for Mobile Graphics. In SPIE Electronic Imaging 2006: Multimedia on Mobile Devices II, pages 1–13, 2006.
[16] J. Weeks. GameDev.net - 3D Backface Culling. Available at http://www.gamedev.net/reference/articles/article1088.asp.
