Supporting Guided Navigation in Mobile Virtual Environments
Rafael Garcia Barbosa, Maria Andréia Formico Rodrigues
Mestrado em Informática Aplicada, Universidade de Fortaleza - UNIFOR
Av. Washington Soares 1321, J(30), 60811-905, Fortaleza-CE, Brazil
Tel.: +55 85 3477-3268

[email protected], [email protected]

ABSTRACT Developing interactive 3D graphics for mobile Java applications is now a reality. Recently, the Mobile 3D Graphics (M3G) API was proposed to provide an efficient 3D graphics environment suitable for the J2ME platform. However, new services and applications using interactive 3D graphics, which have already achieved reasonable standards on the desktop, do not yet exist for resource-constrained handheld devices. In this work, we developed a framework for supporting guided navigation in mobile virtual environments. To illustrate its main functionalities, a virtual rescue training application was designed, implemented and tested on mobile phones. Users can load virtual environments from a remote PC server, navigate through them, find an optimal and collision-free path from one place to another, and obtain additional information on the objects.

Categories and Subject Descriptors I.3.7 [Computer Graphics]: Three-dimensional Graphics and Realism---Virtual Reality; C.2.1 [Computer-Communication Networks]: Network Architecture and Design---Wireless Communication.

General Terms Algorithms, Design, Experimentation.

Keywords Guided navigation, virtual environment, mobile device.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. VRST'06, November 1–3, 2006, Limassol, Cyprus. Copyright 2006 ACM 1-59593-321-2/06/0011...$5.00.

1. INTRODUCTION Given current technology trends, mobile communication devices and 3D applications are playing an increasingly important role in the development of services for handheld devices, particularly because interactive virtual environments and participative services are essential to satisfy the needs of today's users, anywhere and at any time [3]. Consequently, it is expected that handheld devices will soon replace desktop computers as the predominant platform for users to visualize and walk through virtual environments. Despite innovative advances in computing capability and wireless communication services, mobile systems have several limitations when compared to classical personal computers. These limitations pose challenges for the development of interactive 3D applications. For example, a significant problem in mobile graphical applications is to strive for an implementation as compact as possible, constrained by the limited amount of memory available and the low processing capabilities [10]. Further, new services and applications using 3D graphics and interactions, which have already achieved reasonable standards on the desktop [6, 22], do not yet exist for resource-constrained handheld computing devices. In this respect, because mobile phones enable local computation and wireless communication makes it possible to use client/server infrastructures, it is also naturally expected that handheld device users will require increasingly sophisticated interactive 3D services, allowing them, for instance, to use virtual applications to visualize and navigate through complex scenarios (loaded from a high-capacity server) while they are on the go. Recently, the Mobile 3D Graphics (M3G) API (also known as JSR-184) was proposed to provide an efficient 3D graphics environment suitable for the J2ME platform [17, 18]. This means that developing interactive 3D graphics for mobile Java applications is now a reality. Examples of such handheld computing applications include virtual study classes and training operations for emergency situations, where users can take advantage of their mobile devices to guide them through those environments, and also to provide them with detailed information about the places, items and services available at each visited location. Several other

types of mobile applications can also benefit from M3G, including games, terrain visualization, user interfaces, 3D product visualization, etc. In this work, we developed a framework for supporting guided navigation in mobile virtual environments. To illustrate its main functionalities, a virtual rescue training application was designed, implemented and tested on an emulator as well as on a mobile phone. Users can load virtual environments from an up-to-date database on a remote PC server, navigate through them, find an optimal and collision-free path from one place to another, and obtain additional information on objects of interest by directly pointing to their representation in the virtual world. To provide more realism, we have extended the M3G API to support 3D collision handling among objects using bounding boxes. The rest of the paper is organized as follows. The next section presents an overview of related work. In Section 3, we describe our implemented extensions to the M3G API that support the framework functionalities we have designed. To guarantee smooth cell phone 3D navigation, a simple memory management scheme was implemented for our framework (Section 4). Section 5 specifies the communication mechanism implemented between the mobile device and a world server. To illustrate the framework functionalities, a virtual fire fighting and rescue operation application is presented in Section 6. Finally, Section 7 concludes the paper with a summary of our research and suggestions for future work.

2. RELATED WORK Virtual tours through museums, shopping centers, universities or even entire cities have required the development of sophisticated applications that allow users to interactively navigate and explore their surrounding environment, for instance, to acquire detailed information about selected items of interest. In terms of positioning, we can define two types of virtual environments: indoor [1] and outdoor [1, 7, 14]. In this work, we have developed a handheld computing application aimed at simulating an indoor space, whose main focus is allowing users to navigate and interact with the environment without noticing that it may not be entirely loaded in the memory of the device, due to space constraints imposed by the cell phone hardware. Several approaches have been proposed in the literature for the navigation and transmission of 3D scenes using client-server architectures [7, 14]. Some authors recognize that 3D information should be provided on demand, relative to the position/orientation of the user, although many of them do not address the problem of client-server communication, with the 3D application being executed exclusively on the handheld device [1, 8]. In our work, we have successfully designed and implemented client-server communication between mobile devices (cell phones) and a server (a personal computer). Users can then request a specific 3D scene to be explored by the local application, thus providing the mobile devices with greater autonomy from the server. Instead of developing applications aimed at the visualization of simple 2D maps, text or HTML pages [1], our work offers support for realistic 3D scenes, similar to those typically found in virtual environments available for desktop personal computers [6, 22]. Most related work does not support the creation of sophisticated

3D environments, nor their visualization and exploration [7, 14]. In other words, the level of interactivity in those systems is still too restricted (an exception is the work described in [14], where parts of a scene can be selected). The work proposed by Laakso et al. is the closest to ours, since in that work the users (in the role of virtual tourists) can also use a 3D map to navigate, explore and obtain tourist information from the environment (a virtual city) they are visiting [8]. However, in examining the few existing computer graphics applications similar to ours, we have reached the conclusion that none is fully functional yet [7]. Besides, they do not use recent 3D technologies (e.g. M3G) for the generation of 3D environments. Apart from navigating manually through the environment (also known as local navigation mode), some authors offer a path planning function (known as global navigation mode) [13, 16]. In general, most techniques used to implement path planning require the extraction of the geometric structure of the scene (to obtain a graph of interconnected cells [2, 9] or a skeleton of the scene geometry represented by a discrete Voronoi diagram [12]). However, this graph is conservative and generated during a preprocessing phase, which is unsuitable for our framework. For example, dynamic situations (where the graph needs updating to reflect the current context of the avatar or of other elements of the scenario) would be impracticable. Besides, this approach would take practically the same amount of time to discover a path from one point to another, whether or not it is collision-free. Thus, we propose creating the graph in an alternative manner that allows updating in dynamic scenes, as well as demanding less time to process scenes with fewer obstacles.

3. EXTENDING THE M3G API The M3G API versions presently available either do not offer access to the object vertices (hampering the use of basic computer graphics operations) or are not yet available in existing devices and emulators [18]. In addition, the m3g file format has some limitations that do not allow easy manipulation of the geometric structure or scene partitioning. Therefore, to achieve realistic interactive visualization on the mobile phone, we have extended the M3G API, adding functions that allow the creation of bounding boxes around mesh objects, and the extraction of geometric information from the environment by generating a graph of interconnected cells that can be used, among other things, for path planning. In our work, in the local view mode the user can wander freely through the virtual world. So, to increase realism, the possibility of collisions between the avatar and objects in the scene should be dealt with. Supposing a scene where dynamic objects collide with other elements of the scene, detecting collisions using the current implementation of the M3G API becomes unfeasible, because each dynamic object would need to fire not one, but several intersection rays, in several directions, since collisions between objects could happen on any of the coordinate axes. The use of intersection rays poses yet another problem: they can fail to detect a possible point of collision due to the limited number of rays that can be fired from each object. To solve these problems we implemented an approach for dealing with collisions, using bounding boxes, where one can verify whether the maximum and minimum points of an object trespass on those of

other elements in the scene. Besides being a less costly process (if the scene is not too crowded), the detection approach implemented is flexible enough to work independently of whether the dynamic object moves forwards, laterally, or even backwards, covering collisions along any of the coordinate axes.
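This extremes test amounts to a standard axis-aligned bounding box (AABB) overlap check. The sketch below is a minimal illustration of the idea, assuming each bounded object stores its extremes as two three-component arrays; the class and field names are ours, not part of the M3G API or of the framework.

    class Aabb {
        // Bounding-box extremes: {minX, minY, minZ} and {maxX, maxY, maxZ}.
        float[] min = new float[3];
        float[] max = new float[3];

        // Two boxes overlap only if their extents overlap on all three axes,
        // so the same test covers forward, lateral, and backward motion.
        boolean intersects(Aabb other) {
            for (int k = 0; k < 3; k++)
                if (max[k] < other.min[k] || min[k] > other.max[k])
                    return false;
            return true;
        }
    }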

3.1 Generating object bounding boxes In the M3G API, the vertices of the objects composing a scene are stored as integer values, although their coordinates are originally specified in floating point; a scale and a bias are used to recover the floating-point positions. Algorithm 1 details the approach implemented in this work to obtain the mesh vertices of a given object. First, we obtain the object that encapsulates the vertex positions (Line 2) and the scale and bias values used originally to determine the object coordinates (Line 3). Second, we create an empty vector which will be supplied with the vertex positions in floating point (Line 4). Finally, we apply the scale and bias obtained in Line 3 to these coordinates to obtain the floating-point position of the object (Line 5), and return the position vector of the object (Line 6). To obtain the object bounding boxes it is necessary to inspect all the vertices obtained by Algorithm 1 and discover their maximum and minimum values on each of the coordinate axes. The rigid scene graph structure imposed by the retained mode of the M3G API [18] creates a problem in calculating the bounding boxes: the values of the object vertices may not match their real positions in the virtual world, since these positions may still be subject to geometric transformations stored in the parent nodes of the object that contains these vertices. To solve this problem, the composition of the geometric transformations of the parent nodes should be applied to the extremes of the generated bounding boxes. This is represented in Algorithms 2 and 3, where we demonstrate how to move up the hierarchy of the scene graph and apply the parent transformations to their child nodes.

Initially, the composition of the parent transformations is obtained through recursion (Algorithm 3, invoked at Line 3 of Algorithm 2), which starts at the original object and moves up its hierarchy, updating a set of initially empty geometric transformations (Line 4 of Algorithm 3) provided as parameters (Line 3 of Algorithm 2). Besides this initial positioning of the vertices, each new geometric transformation applied to the object (or to its parents) at execution time alters the positions of the object vertices again, requiring an update of the bounding box. To avoid recomputing the box from the vertices, the geometric alterations applied to the objects (or to their parents) are also applied to the extremes of the bounding box, thus keeping its values updated (Line 5 of Algorithm 2).

Algorithm 1 – Getting the Vertices: receives a Mesh as input and outputs a list of vertices
1: float[] getVertices ( Mesh mesh ) {
     // Getting the object that encapsulates the vertex positions
2:   VertexArray pos = getPositions( mesh );
     // Getting the scale and bias values used by the mesh
3:   Transformations t = getScaleAndBias( mesh );
     // Generating the vector that will hold the updated positions
4:   float[] out = new float[ getVertexCount( mesh ) * 4 ];
     // Applying the scale and bias to the vector of positions
     // and loading the results into the out vector
5:   applyScaleAndBias( t, pos, out );
6:   return out;
7: }

Algorithm 2 – Applying Transformations to the Mesh Extremes: receives a BoundedMesh as input and updates its extremes in-place
1: void updateExtremes ( BoundedMesh mesh ) {
     // Loading the geometric transformations of the mesh parent nodes
2:   Transform traT, rotT, scaT;
3:   getParentTransf ( mesh, scaT, rotT, traT );
     // Getting the mesh bounding box extremes
4:   float[] lowerUpper = getLowerUpper( mesh );
     // Applying the obtained parent node transformations
     // to the mesh bounding box extremes
5:   applyTransf ( scaT, rotT, traT, lowerUpper );
6: }

Algorithm 3 – Getting Parent Transformations: receives a Node and three Transforms as input and updates the Transforms in-place
1: void getParentTransf ( Node src, Transform scaT, Transform rotT, Transform traT ) {
     // Calculating the geometric transformations (scale, rotation,
     // and translation) of the current node
2:   float[] sca, rot, tra;
3:   getTransformations( src, sca, rot, tra );
     // Applying these values to the accumulated transformation parameters
4:   applyTransformations( sca, rot, tra, scaT, rotT, traT );
     // Moving up in the hierarchy through a recursive call
5:   if ( isNotRoot( src ) )
6:     getParentTransf ( src.getParent(), scaT, rotT, traT );
7: }
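On platforms where the M3G 1.1 getters are available, the three algorithms can be combined into a single routine. The sketch below is our illustrative reconstruction, not the paper's code: it assumes M3G 1.1 (VertexBuffer.getPositions and VertexArray.get did not exist in 1.0, which is precisely why the API had to be extended), assumes 16-bit vertex positions, and uses the built-in Node.getTransformTo in place of the explicit recursion of Algorithm 3. It also transforms every vertex rather than only the two extremes, which yields a tighter box at a higher cost.

    import javax.microedition.m3g.*;

    class BoundsUtil {
        // Returns {minX, minY, minZ, maxX, maxY, maxZ} of mesh
        // expressed in the coordinate space of root.
        static float[] worldBounds(Mesh mesh, Node root) {
            // Algorithm 1: recover floating-point positions from the
            // quantized vertex array; [0] = scale, [1..3] = bias.
            float[] scaleBias = new float[4];
            VertexArray pos = mesh.getVertexBuffer().getPositions(scaleBias);
            int n = pos.getVertexCount();
            short[] raw = new short[n * 3];       // assuming 16-bit positions
            pos.get(0, n, raw);

            // Algorithms 2 and 3: the composite parent transformation,
            // obtained here with getTransformTo instead of a recursion.
            Transform toRoot = new Transform();
            mesh.getTransformTo(root, toRoot);

            float[] bounds = { Float.MAX_VALUE, Float.MAX_VALUE, Float.MAX_VALUE,
                               -Float.MAX_VALUE, -Float.MAX_VALUE, -Float.MAX_VALUE };
            float[] v = new float[4];
            for (int i = 0; i < n; i++) {
                for (int k = 0; k < 3; k++)       // undo scale/bias quantization
                    v[k] = raw[i * 3 + k] * scaleBias[0] + scaleBias[k + 1];
                v[3] = 1.0f;
                toRoot.transform(v);              // move into world space
                for (int k = 0; k < 3; k++) {
                    if (v[k] < bounds[k])     bounds[k] = v[k];
                    if (v[k] > bounds[k + 3]) bounds[k + 3] = v[k];
                }
            }
            return bounds;
        }
    }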

3.2 Creating the graph of interconnected cells In Figure 1, the implemented graph of interconnected cells represents the optimal, collision-free path in a virtual environment: the nodes (vertices) represent the portals (which join the rooms), and the edges, with nonnegative weights, represent the regions of the rooms that interconnect any two portals. The process of generating this graph is made easier by classes available in the M3G API, which allow element picking, i.e., firing an intersection ray from a point of origin in any given direction.

Figure 1: The graph of interconnected cells, showing the entry, the exit, the portals, the optimal path, the remaining edges, the victim, and the avatar's initial position.

In general, for realistically navigating through traditional virtual environments (such as those available for desktop personal computers), the obstacles in the environment need to be identified beforehand [2, 9, 13]. Instead, in our approach we need to identify the portals that will compose our cell graph. The creation of this graph (Algorithm 4) starts with this identification (Line 3 of Algorithm 4). For this, in our implementation the portals were associated with identifiers (integer values) during the process of generating the M3G file. Once the portals have been obtained, it becomes necessary to calculate their bounding boxes (Line 4 of Algorithm 4), as they will be used to obtain the centers of mass of the portals. A ray is then fired from the center of each portal in the direction of the other portals present in the same scene (Lines 5-8 of Algorithm 4). The weight of the edge joining two portals is the Euclidean distance between them. If a ray hits an element other than the portal of destination (Line 11 of Algorithm 4), a search is conducted to find a path joining the two portals (Algorithm 5).

Algorithm 4 – Creating a Graph: generation of a graph that contains the portals (vertices)
1:  Graph createGraph ( Group scene ) {
2:    Graph graph = new Graph();
      // Get the scene portals
3:    PortalsList scenePortals = loadPortals( scene );
      // Calculating the portals' bounding boxes
4:    PortalsList bound = setBounds( scenePortals );
5:    for ( int i = 0; i < bound.size(); i++ )
6:      for ( int j = 0; j < bound.size(); j++ )
          // Checking the existence of a connection between the portals
7:        if ( bound.get(i).isNotLinked( bound.get(j) ) ) {
            // Firing an intersection ray from one portal to another
8:          BoundedElement hit = cast( bound.get(i), bound.get(j) );
            // Testing if the ray hits the target portal
9:          if ( hit == bound.get(j) )
10:           graph.add( bound.get(i), bound.get(j) );
11:         else
12:           findPath( graph, bound.get(i), bound.get(j), hit );
          }
      // Keeping the shortest path
13:   setShortPath( graph );
14:   return graph;
15: }

Figure 2: (a), (b), (c), (d), and (e) show all possible paths between the portals of origin and destination avoiding obstacles, and (f) is a view of the optimum (shortest) path.

When a ray is fired from one portal to another, a collision with an object in the scene can occur. In this case, we must find a way of avoiding the obstacle without the camera losing its center of interest. We use a simple approach in which small cubes are inserted into the graph of interconnected cells to assist with this problem, as shown in (a), (b), (c), (d), and (e) of Figure 2. Initially, the cubes are positioned next to the extremes of the element hit by the ray fired from the portal of origin (Line 2 of Algorithm 5). For each cube two new rays are fired: one from the portal of origin in the direction of the cube, and another from this cube in the direction of the portal of destination (Lines 3 and 4 of Algorithm 5, respectively). If both rays hit their targets, two new edges are added to the graph (Lines 6-7 and 11-12 of Algorithm 5). However, if either ray fails to hit its target, the process repeats itself (through recursion) until all paths between the portals of origin and destination are found (Lines 9 and 14 of Algorithm 5).
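The cast operation used in Algorithms 4 and 5 can be built directly on the standard M3G picking facility mentioned in Section 3.2. The sketch below is an illustrative reading of how such a helper might look, with the portal centers of mass passed in as coordinate arrays; the class and parameter names are ours, not the framework's.

    import javax.microedition.m3g.*;

    class RayCaster {
        // Fires a ray from one center of mass towards another and returns
        // the first scene node it hits, or null if nothing lies between them.
        static Node cast(Group scene, float[] from, float[] to) {
            float dx = to[0] - from[0];
            float dy = to[1] - from[1];
            float dz = to[2] - from[2];
            RayIntersection ri = new RayIntersection();
            // Scope -1 picks against every node in the group; the direction
            // vector does not need to be normalized.
            if (scene.pick(-1, from[0], from[1], from[2], dx, dy, dz, ri))
                return ri.getIntersected();
            return null;
        }
    }

In Algorithm 4, the node returned by such a helper is compared against the destination portal to decide whether a direct edge can be added to the graph.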

Algorithm 5 – Finding a Path: generation of a collision-free path between two portals
1:  void findPath ( Graph graph, BoundedP src, BoundedP dest, Bounded hit ) {
      // Creating small cubes in the neighbourhood of the hit
      // geometric element (the obstacle)
2:    Cube[] cubes = createNeighborCubes( hit );
      // Firing an intersection ray from the source to the created cubes
3:    BoundedElement[] hitFromSrc = cast( src, cubes );
      // Firing an intersection ray from the created cubes to the target object
4:    BoundedElement[] hitToDest = cast( cubes, dest );
      // Testing if the first rays hit the cubes
5:    for ( int i = 0; i < hitFromSrc.length; i++ )
6:      if ( hitFromSrc[i] == cubes[i] )
7:        graph.add( src, cubes[i] );
8:      else
9:        findPath( graph, src, cubes[i], hitFromSrc[i] );
      // Testing if the last rays hit their target
10:   for ( int i = 0; i < hitToDest.length; i++ )
11:     if ( hitToDest[i] == dest )
12:       graph.add( cubes[i], dest );
13:     else
14:       findPath( graph, cubes[i], dest, hitToDest[i] );
15: }

Once all the possible paths have been generated, the Dijkstra algorithm [4] can be used to obtain the optimum (shortest) path (Line 13 of Algorithm 4). The shortest path problem is that of finding a path between two vertices such that the sum of the weights of its constituent edges is minimized. At the end of the process, the implemented algorithm generates the graph containing the optimum, collision-free path (Line 14 of Algorithm 4), as shown in Figure 2.f. Besides the portals, geometrical elements of interest to the user (for example, any object that belongs to the scene) should be inserted as nodes of the cell graph to provide the application with detailed information about the places, items and services available at each visited location.
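For reference, a compact array-based formulation of Dijkstra's algorithm, suitable for CLDC-class devices with no collections framework, might look as follows. This is a textbook sketch [4], not the paper's implementation; the graph is assumed to be given as a matrix of edge weights (the Euclidean distances computed in Algorithm 4), with Float.MAX_VALUE marking unlinked pairs.

    class ShortestPath {
        static final float INF = Float.MAX_VALUE;

        // Returns prev[], where prev[v] is the predecessor of vertex v on the
        // shortest path from src; follow it back from the exit portal to
        // read off the optimum path.
        static int[] dijkstra(float[][] w, int src) {
            int n = w.length;
            float[] dist = new float[n];
            int[] prev = new int[n];
            boolean[] done = new boolean[n];
            for (int i = 0; i < n; i++) { dist[i] = INF; prev[i] = -1; }
            dist[src] = 0;
            for (int iter = 0; iter < n; iter++) {
                int u = -1;                        // closest unsettled vertex
                for (int i = 0; i < n; i++)
                    if (!done[i] && (u < 0 || dist[i] < dist[u])) u = i;
                if (u < 0 || dist[u] == INF) break;
                done[u] = true;
                for (int v = 0; v < n; v++)        // relax edges leaving u
                    if (w[u][v] < INF && dist[u] + w[u][v] < dist[v]) {
                        dist[v] = dist[u] + w[u][v];
                        prev[v] = u;
                    }
            }
            return prev;
        }
    }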

4. SMOOTH MOBILE PHONE 3D NAVIGATION Client devices designed for mobility, such as cell phones, have less computing and storage capacity than PC servers. Therefore, the main requirements for smooth mobile phone 3D navigation are a low processing load and fast 3D data delivery. To satisfy these requirements, we implemented server-side virtual world storage and a divided information transfer technique (either the whole virtual world or only a part of it is loaded on the mobile device, depending on the memory available on the phone). In this work, in order to avoid the application having to wait for the download of a scene from the server, we implemented threads. These are responsible for analyzing the scenes that the mobile device user will access next, requesting them in advance from the server during possible pauses in the processing of the mobile device. This prediction model, based on [5], uses information about the position of the avatar and the direction and speed of the camera. Even considering the possibility of loading only parts (rooms or scenes) of a complete world at a given moment, there may not be sufficient memory left to load new scenes, requiring an algorithm to administer the available memory. Since the Java virtual machine imposes severe restrictions on memory management (for example, not allowing explicit de-allocation) [20], control can be maintained by releasing references to data that are no longer in use (for example, parts of the world that are not being viewed at a given moment). To achieve this, we use a simple policy of freeing the scenes viewed least recently.
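A minimal sketch of such a least-recently-viewed policy is given below. It is illustrative only, assuming scenes are kept as M3G Group subtrees keyed by room identifiers; the class and method names are ours. Dropping the reference is the only "de-allocation" available on the KVM [20], so the cache simply forgets the oldest scene and lets the garbage collector reclaim it.

    import java.util.Hashtable;
    import java.util.Vector;
    import javax.microedition.m3g.Group;

    class SceneCache {
        private final Hashtable scenes = new Hashtable(); // room id -> Group
        private final Vector lru = new Vector();          // ids, most recent last
        private final int capacity;                       // max scenes in memory

        SceneCache(int capacity) { this.capacity = capacity; }

        synchronized Group get(String id) {
            Group g = (Group) scenes.get(id);
            if (g != null) {                              // mark as recently viewed
                lru.removeElement(id);
                lru.addElement(id);
            }
            return g;
        }

        synchronized void put(String id, Group scene) {
            while (scenes.size() >= capacity && !lru.isEmpty()) {
                Object oldest = lru.elementAt(0);         // least recently viewed
                lru.removeElementAt(0);
                scenes.remove(oldest);                    // GC reclaims the subtree
            }
            scenes.put(id, scene);
            lru.addElement(id);
        }
    }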

5. THE COMMUNICATION MECHANISM BETWEEN MOBILE DEVICES AND SERVERS Currently, the network technologies used in our implementation are Bluetooth [19], for data transmission among mobile devices [15], and Wi-Fi (or wireless TCP/IP) or Bluetooth, for establishing a network connection between a mobile device and the world server. Therefore, the context available to the users of handheld devices depends on their proximity (Bluetooth). However, it is expected that new data transmission technologies will emerge in the near future, providing users with a broader context. Our implementation takes this possibility into account by offering a certain flexibility and independence from the network technology used, hiding from the M3G model the communication details specific to a given networking technology [15].
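One way to hide the communication details from the M3G model, sketched below under our own naming, is to place a small transport interface between the scene loader and the network code. A Wi-Fi (wireless TCP/IP) implementation uses the Generic Connection Framework socket scheme, while a Bluetooth implementation would differ only in the connection URL (the btspp scheme of JSR-82 [19]) and in how the server is discovered. The server address, port, and one-request protocol shown are illustrative assumptions.

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import javax.microedition.io.Connector;
    import javax.microedition.io.StreamConnection;

    interface WorldTransport {
        // Returns a stream from which the requested scene's m3g bytes are read.
        InputStream openScene(String sceneId) throws IOException;
    }

    class SocketTransport implements WorldTransport {  // Wi-Fi / wireless TCP/IP
        public InputStream openScene(String sceneId) throws IOException {
            StreamConnection c = (StreamConnection)
                Connector.open("socket://worldserver:5000"); // illustrative address
            DataOutputStream out = c.openDataOutputStream();
            out.writeUTF(sceneId);                     // ask the server for a scene
            out.flush();
            return c.openInputStream();
        }
    }
    // A Bluetooth transport would call, e.g.,
    // Connector.open("btspp://001122334455:1"), with the address obtained
    // through JSR-82 device and service discovery.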

6. A VIRTUAL FIRE FIGHTING AND RESCUE OPERATION Although we hope never to face an emergency situation, our main motivation is that we should be prepared to respond to the unexpected, for example, to aid in the prompt and efficient rescue of a person or the evacuation of a building. Therefore, in order to illustrate the main functionalities of our framework, in

this section we describe a virtual fire fighting and rescue operation application that we have designed, implemented and tested, where users can perform automatic guided navigation for training purposes.

In particular, cell phone users (firefighters) can use their handheld devices as an alternative means of fighting a fire or rescuing an occupant of a building in immediate danger (anyone trapped, missing or handicapped) and relocating them to a safe area of the building, by following 3D spatial instructions from a remote server during an emergency situation. This guided navigation also prevents people from getting lost or trapped by obstacles, while at the same time making them aware that they have some direct and immediate control over how they move through the virtual environment.

For example, firefighters can control the position and orientation of the virtual camera, through which they see the mobile virtual world. Further, firefighters can automatically discover the exit routes for a specific floor (exit signs are provided throughout the virtual building indicating the direct exit paths), or even an unobstructed escape route, and find the locations of extinguishers. Moreover, they can find the closest fire extinguisher and then determine whether it is the proper type by reading the class code and comparing this information with the type of fire. Firefighters can then find an optimal path from one place (the entrance of the building) to another (the room where the victim is trapped), as shown in (a), (b), (c), and (d) of Figure 3. Also, they have access to additional information through the visualization of the virtual scenes they are interactively exploring, and may request information on objects of interest (for example, fire extinguishers) by directly pointing to their representation in the virtual world. Additionally, they can change position and zoom into a particular viewpoint, for example, to inspect the mobile virtual rescue scenario more closely.

Finally, the firefighters can also use their mobile devices to send warnings, updating messages, or images to the world server, pointing out specific and important changes in the building, for example, when parts of a room collapse or catch fire (totally or partially obstructing the way to rescue a victim), or when they want to request an alternative path to rescue a person in danger, considering the obstacles found along the way as well as the victim's current position.

Figure 3: An example of optimal path planning from the fireman's initial position (the entry door in (a)) to a specific target (the victim in (d)), avoiding obstacles and passing through points of interest (fire extinguishers).

The application was successfully tested on a J2ME emulator and on a Sony Ericsson W600i mobile phone. The results are shown in Figure 4. This cell phone is a standard model with up to 1500 Kbytes of dynamic heap (used to run the application) and 256 Mbytes of storage heap (used to store files).

Figure 4: The virtual fire fighting and rescue operation implemented on a Sony Ericsson W600i mobile phone.

7. CONCLUSIONS AND FUTURE WORK In this work, we successfully developed a framework for supporting guided navigation in mobile virtual environments. To illustrate its main functionalities, a virtual rescue training application was designed, implemented and tested on an emulator and on a mobile phone.

The tests we carried out demonstrate that the user of a mobile phone can successfully navigate locally through a virtual world. In particular, due to the limited memory usually available on these devices, only parts of the 3D world may be loaded, which led to the creation of a memory management model that ensures these parts are always loaded when the avatar reaches them. Furthermore, even with the constraints imposed by the hardware of the device and by the M3G API, we have successfully implemented a compact path planning approach. Extensions of the original API were also necessary, since it presented some deficiencies with respect to basic computer graphics calculations, such as dealing with collisions and bounding boxes. For future work, we highlight the implementation of a model for automatic partitioning of the 3D world, as well as the application of techniques that allow rendering more realistic 3D scenes (with special effects). Alternative solutions for finding an optimal navigation path may also be implemented by using artificial potential fields [11, 21], where the trapped victim and the exit of the building generate attractive potentials, while each obstacle generates a repulsive potential. Further, we aim to test the system systematically on more complex mobile rescue operations. Finally, with mobile devices, where technological immersion is very limited, the efficiency of the interaction techniques is also of absolute importance for the success of interactive applications. Therefore, it would be interesting to use usability metrics to quantify the degree of interactivity achieved by the virtual rescue operation on different mobile phone models.

8. ACKNOWLEDGMENTS The authors are grateful to the Brazilian supporting agencies. In particular, Rafael Garcia Barbosa benefits from a CAPES MSc studentship, under grant No. 22002014. We also thank Alexandre Gomes de Paula for the code development using Bluetooth, which was useful for establishing a network connection between a mobile device and the world server.

9. REFERENCES
[1] Abowd, D. A., Atkeson, C. G., Hong, J., Long, S., Kooper, R. and Pinkerton, M. Cyberguide: a mobile context-aware tour guide. Wireless Networks, 3, 5 (Oct. 1997), 421-433.
[2] Andújar, C., Vázquez, P. and Fairén, M. Way-Finder: guided tours through complex walkthrough models. Computer Graphics Forum, 23, 3 (Sep. 2004), 499-508.
[3] Chittaro, L. Visualizing information on mobile devices. IEEE Computer, 39, 3 (Mar. 2006), 40-45.
[4] Cormen, T. H., Leiserson, C. E., Rivest, R. L. and Stein, C. Introduction to Algorithms. MIT Press, Cambridge, MA, USA, 1990.
[5] Correa, W. T., Klosowski, J. T. and Silva, C. T. Visibility-based prefetching for interactive out-of-core rendering. In Proceedings of the 2003 IEEE Symposium on Parallel and Large-Data Visualization and Graphics (Seattle, USA, Oct. 20-21, 2003). IEEE Computer Society, 2003, 2.
[6] Di Blas, N., Hazan, S. and Paolini, P. The SEE experience: edutainment in 3D virtual worlds. In Museums and the Web 2003 (Charlotte, NC, Mar. 19-22, 2003). Archives & Museum Informatics, 2003.
[7] Krum, D. M., Ribarsky, W. and Hodges, L. Collaboration infrastructure for a mobile situational visualization system. Available at http://www-static.cc.gatech.edu/grads/k/David.Krum/papers/krum.collab.pdf. Visited on August 25, 2006.
[8] Laakso, K., Gjesdal, O. and Sulebak, J. R. Tourist information and navigation support by using 3D maps displayed on mobile devices. In Proceedings of the HCI in Mobile Guides Workshop at Mobile HCI (Udine, Italy, Sep. 8-11, 2003).
[9] Lamarche, F. and Donikian, S. Crowd of virtual humans: a new approach for real time navigation in complex and structured environments. Computer Graphics Forum, 23, 3 (Sep. 2004), 509-518.
[10] Lee, S., Ko, S. and Fox, G. Adapting content for mobile devices in heterogeneous collaboration environments. In Proceedings of the International Conference on Wireless Networks (Las Vegas, USA, Jun. 23-26, 2003). CSREA Press, 2003, 211-217.
[11] Li, Q., DeRosa, M. and Rus, D. Distributed algorithm for guiding navigation across a sensor network. In Proceedings of the 9th Annual International Conference on Mobile Computing and Networking (San Diego, USA). ACM Press, 2003, 313-325.
[12] Li, T.-Y., Lien, J.-M., Chiu, S.-Y. and Yu, T.-H. Automatically generating virtual guided tours. In Proceedings of Computer Animation (May 26-28). IEEE Computer Society, 1999, 99-106.
[13] Paris, S., Bonvalet, N. and Donikian, S. Environmental abstraction and path planning techniques for realistic crowd simulation. Computer Animation and Virtual Worlds, 17, 3-4 (July 2006), 325-335.
[14] Raposo, A. B., Neumann, L., Magalhaes, L. P. and Ricarte, I. L. M. Visualization in a mobile WWW environment. In WebNet'97 - World Conference of the WWW, Internet, and Intranet (Toronto, Canada, 1997).
[15] Rodrigues, M. A. F., Barbosa, R. G. and Mendonça, N. C. Interactive mobile 3D graphics for on-the-go visualization and walkthroughs. In Proceedings of the 21st Annual ACM Symposium on Applied Computing, Special Track on Handheld Computing (Dijon, France, Apr. 23-27, 2006). ACM Press, 2006, 2, 1002-1007.
[16] Salomon, B., Garber, M., Lin, M. and Manocha, D. Interactive navigation in complex environments using path planning. In Proceedings of the 2003 Symposium on Interactive 3D Graphics (Monterey, California, Apr. 27-30, 2003). ACM Press, 2003, 41-50.
[17] Sun Microsystems. Java 2, Micro Edition (J2ME) Wireless Toolkit 2.2. Available at http://java.sun.com/products/sjwtoolkit/download-2_2.html. Visited on August 25, 2006.
[18] Sun Microsystems. JSR-184: Mobile 3D Graphics API for J2ME. Available at http://jcp.org/aboutJava/communityprocess/final/jsr184/index.html. Visited on August 25, 2006.
[19] Sun Microsystems. JSR-82: Java APIs for Bluetooth. December 2003. Available at http://jcp.org/en/jsr/detail?id=82. Visited on August 25, 2006.
[20] Sun Microsystems. The K Virtual Machine (KVM). Available at http://java.sun.com/products/cldc/wp/. Visited on August 25, 2006.
[21] Tseng, Y.-C., Pan, M.-S. and Tsai, Y.-Y. Wireless sensor networks for emergency navigation. IEEE Computer, 39, 7 (Jul. 2006), 55-62.
[22] Wojciechowski, R., Walczak, K., White, M. and Cellary, W. Building virtual and augmented reality museum exhibitions. In Proceedings of the 9th International Conference on 3D Web Technology (Monterey, California, Apr. 2004). ACM Press, 2004, 135-144.
