Virtual Reality (2006) 10: 31–40 DOI 10.1007/s10055-006-0028-4

ORIGINAL ARTICLE

Iñaki Díaz · Josune Hernantes · Ignacio Mansa · Alberto Lozano · Diego Borro · Jorge Juan Gil · Emilio Sánchez

Influence of multisensory feedback on haptic accessibility tasks

Received: 23 December 2005 / Accepted: 3 April 2006 / Published online: 27 April 2006
© Springer-Verlag London Limited 2006

Abstract Environments of a certain nature, such as those related to maintenance tasks, can benefit from haptic stimuli by performing accessibility simulations in a realistic manner. Accessibility is defined as the physical feasibility of accessing an element of a 3D model while avoiding undesirable collisions. This paper studies the benefits that multisensory systems can provide in performing this kind of task. The research focuses especially on the improvements that auditory feedback provides to the user's performance. We have carried out a user study in which participants had to perform an accessibility task with the aid of different combinations of sensory stimuli. A large haptic interface for aeronautic maintainability has been extended with real-time sound generation capabilities to study this issue. The results of these experiments show that auditory stimuli provide users with useful cues, helping them to correct trajectories and hence improve their performance.

Keywords Haptics · Accessibility · Force feedback · Auditory · Virtual environments · Synchronization · Multisensory interaction · Multimodal

I. Díaz · J. Hernantes · I. Mansa · A. Lozano · D. Borro · J. J. Gil · E. Sánchez
CEIT, Paseo Manuel Lardizábal, 15, 20018 San Sebastián, Spain

I. Díaz · J. Hernantes · I. Mansa · A. Lozano · D. Borro · J. J. Gil (✉) · E. Sánchez
TECNUN, University of Navarra, Paseo Manuel Lardizábal, 13, 20018 San Sebastián, Spain
E-mail: [email protected]
Tel.: +34-943-212800
Fax: +34-943-213076

1 Introduction

Humans are able to perceive the environment using all their senses. Usually sight is the predominant sense, although some of the other senses are also needed to perform most tasks. Sometimes, it is necessary to

perceive the environment in more detail, and all our senses are unconsciously used to obtain the information we need. For instance, maintainability studies require accessibility testing to verify whether each part of the model is accessible or not. Obviously, a visual test is not enough to detect possible inaccessible parts. In addition, the worker has to explore and manipulate the model, or different parts of it, to complete the assessment, so more than one sense is necessary. Providing users with the natural ability to use all their senses in a simulation environment is an important goal in the Virtual Reality research area. Within this context, haptic devices are used to provide us with force feedback in domains where it is needed, such as accessibility studies. Using these devices, interactivity is enhanced. Furthermore, the simulation efficiency can be improved by making users aware of other physical characteristics of the objects, such as their weight or surface smoothness. However, haptic technology is quite recent, and as a result the tactile realism obtained is not as good as would be desirable. In addition, rigid bodies cannot be haptically simulated as rigid as they appear in reality, due to mechanical and control constraints (Gil et al. 2004; Colgate and Schenkel 1994), and rendering the sensation of real surface textures in virtual environments still remains very complex. In an attempt to improve the overall perception, we can help users by adding sound to haptic systems. In reality, if we hear a person hitting an object with a hammer, we can deduce how much strength was used without actually having to experience the impact force. Thus, adding sound to haptic applications is a plausible way to improve haptic perception. The research in this paper is intended to validate this hypothesis for a specific problem in haptic interaction: accessibility. This problem relates to the accuracy with which the virtual haptic representation is guided through a three-dimensional (3D) model. Users must avoid colliding with the virtual model. However, when a collision is unavoidable, users receive adequate feedback (visual, auditory and haptic) to be able to correct the


trajectory. This paper presents the experiments performed with a haptic system in order to demonstrate the enhancements that multisensory feedback brings to accessibility tasks. More precisely, it focuses on the advantages that auditory feedback can offer in helping users recover from collisions. The remainder of this article is organized as follows. Section 2 presents related work on the influence of multisensory interaction in haptic systems. Later on, in Sect. 3, we address the problems related to accessibility using haptic interfaces. Section 4 describes the multisensory (visual, haptic and auditory) system implemented for our experiments; the different parts of our haptic system are explained, and some points on haptic, visual and audio rendering are made and reviewed. In Sect. 5, we present the experiments performed and the methodology used, while results are discussed in Sect. 6. Finally, in Sect. 7 some conclusions are drawn.

2 Related work

The first Virtual Reality systems were mainly based on visual stimuli. In order to immerse the user, stereoscopic techniques were developed. Several studies prove the benefits of using these techniques to enhance visual perception. Stereoscopic visualization contributes to understanding spatial relationships and discriminating objects from the background (Yeh and Silverstein 1990). For instance, in the design phase of sheet metal parts, stereo visualization helped engineers to correct 75% of their mistakes (Veron et al. 1990). In many cases, the interaction with the virtual environment helps users to accomplish more accurate movements. The use of stereo improves both accuracy and reaction time in tasks such as navigation, manipulation or even educational training. Through the years, new stimuli have been introduced in virtual environments in an attempt to improve the overall perception. Several studies have dealt with multisensory interaction in haptic systems. Pai (2005) gives a good introduction to multisensory processing, and Stein and Meredith (1993) study multisensory perception more widely. A full example of a haptic system with audio interaction is presented in DiFilippo and Pai (2000). Some studies have analysed, in particular, the benefits of adding auditory feedback to haptic systems. DiFranco et al. (1997) showed how auditory stimuli of pre-recorded tapping sounds influenced the perceived stiffness of virtual objects. In McGee et al. (2001), a combination of haptic and auditory feedback was proposed as a solution to increase the quantity and quality of available textural information. In the same field, Guest et al. (2002) suggested that users could be influenced by auditory input in their perception of surface texture. Apart from auditory stimuli, visual cues have also been used to alter haptic perception. For instance, Srinivasan et al. (1996) showed how visual stimuli could modify the perceived stiffness of a virtual spring.

However, coupling three sensory modalities in a haptic application does not guarantee a better performance on its own. On the one hand, synchronization and latency problems are commonly faced, since each modality is processed individually and then coupled with the others. In this area, several studies have tried to establish a valid threshold for the asynchrony between audio and haptic stimuli. Adelstein et al. (2003) established a threshold between 18 and 25 ms as the just noticeable difference (JND) for the asynchrony between haptic and audio feedback, whereas for Levitin et al. (2000) it was about 42 ms. In DiFilippo and Pai (2000), 2 ms was established as a valid lower latency threshold for the synchronization of haptic and audio rendering in collision events. On the other hand, cross-modal effects can also appear and reduce performance when several sensory modalities are coupled together. Numerous studies have analysed whether multimodal presentation of stimuli improves or degrades performance. McGee et al. (2000) described the different ways (conflicting, redundant and complementary) in which sound and haptic information may be integrated to influence the user's perception. While some authors (Lederman 1979; Wu et al. 1999) suggest that one stimulus becomes dominant over the others depending on the task, others (Heller 1982; Poling et al. 2003) suggest that multimodal inputs improve perception. As can be noticed, there is still much research to be done in this field. Therefore, it is very important to take all these considerations into account in the development of multisensory systems in order to guarantee the desired performance.

3 Accessibility problems in haptic systems

In this paper, a study of the influence of multisensory processing is carried out for the specific case of accessibility tasks. These tasks can be found in different fields, such as surgery training or maintainability. For instance, in a simulation of a surgical procedure on a kidney, it is essential to find the most suitable path to reach the damaged area without harming other organs. Another well-known field where accessibility must be analysed in detail is maintainability tasks on mechanical systems. In this context, the models used are complex engines with a limited workspace. Hence, it is usually difficult to reach a specific component due to the high number of elements (wires, pipes and harnesses) that can be found in the way (Fig. 1). Therefore, it is necessary to find a suitable path in order to access the part to assemble or disassemble. All this leads to what is known as accessibility. As can be deduced from the previous examples, there are many problems in performing these accessibility tasks accurately. Using merely visual feedback, users do not receive enough information for a correct fulfilment of these tasks, since they can penetrate inside rigid objects of the environment. As a result, the


Fig. 1 Sequence of a virtual disassembly operation

trajectories and guidance are not realistic enough. The need to increase realism and interaction motivates the integration of an additional sense; therefore, the sense of touch is added by means of haptic devices. Thanks to force feedback, the user can correct the trajectory when accessing parts of the virtual environment. However, the haptic performance obtained with current haptic devices is not yet as realistic as would be desirable. Hence, in the present work, the aim is to analyse whether auditory feedback enhances accuracy in accessibility tasks and helps to correct the trajectory, as happens when force feedback is used. In the real world, when a collision between two bodies happens, the sound heard by humans carries implicit information such as the location of the impact, the material of the object or its size. Sometimes, it is even possible to describe the scene relying only on sound. Therefore, the addition of the auditory sense is well motivated: it helps users obtain more information about the virtual haptic environment, and it should be used in applications where the user's reaction time is critical. Furthermore, it gives instant awareness of collision events and helps users to correct the trajectory, hence improving the overall performance.

4 System overview

Our experiments have been carried out using a large haptic device called LHIfAM (Borro et al. 2004a). It allows users to perform training and aircraft maintainability tasks in a virtual environment. The user interacts with and manipulates mechanical assemblies through a virtual tool handled by the haptic device (Fig. 2). In this way, users can validate assembly–disassembly sequences in the product design phase. This evaluation is achieved by studying the accessibility of the objects, i.e. an object is not accessible if it cannot be reached by means of the haptic device. Recently, auditory feedback has been added to the system in order to study multisensory interaction in accessibility simulations. In the following subsections,

Fig. 2 User interaction with our multisensory system

the different system components are briefly described, focusing on those modules involved in the generation of sensorial feedback. The method used to create the sound feedback, together with alternative approaches, is also presented. Finally, the importance of a suitable synchronization of audio, visual and tactile stimuli is discussed, as well as the architecture developed to accomplish it.

4.1 Haptic device

The LHIfAM is a floor-grounded haptic interface with a large workspace, especially designed for aeronautic maintainability applications (Fig. 3). It provides force feedback in three translational degrees of freedom, while three additional orientations are measured, but not actuated, by a compact wrist.

An interesting design feature of this haptic system is that its workspace can be relocated. Therefore, it is possible to reproduce different maintainability operations and check different ergonomic situations. Thanks to its workspace relocation and mechanical design, it can be used not only in the aeronautical field but also with any kind of mechanical system.

Fig. 3 Workspace of the LHIfAM

4.2 Graphic rendering module

A key issue in performing accessibility tasks is to avoid unnecessary collisions with the 3D model while the user is accessing an element of it. Therefore, a proper visualization of both the 3D environment and the virtual representation of the worker's tools is essential. As the initial system was designed for aircraft maintainability tasks, the visual module was conceived to render large and compact models. These CAD models can easily reach several million polygons. Deficient or non-real-time visualization of such large models would negatively affect the user's access tasks. Hence, several optimization techniques were integrated. In addition to well-known optimizations such as culling and level-of-detail methods, a specific occlusion culling algorithm was developed (Mansa et al. 2006).

Performing accessibility tasks requires an accurate and natural representation of spatial information, so real-time visualization alone is not enough for this purpose. Stereoscopic visualization is available to provide users with a more realistic experience and a better understanding of the 3D world. Several factors contribute to the quality of the stereo visualization. One of them is binocular disparity (Hodges 1992): stereo is generated by creating two views of the 3D scene, which are quite similar and only differ in their horizontal position. Another important factor is the screen parallax (Hodges and Davis 1993). Objects in stereo can appear in front of the display screen (negative parallax) or beyond it (positive parallax). Objects visualized with positive parallax are seen as if users were looking through a window, whereas with negative parallax objects emerge from the screen. In our case, the visualization of compact models is more suitable using a salient perception (negative parallax) that immerses users and abstracts them from the real world.
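The sign convention for screen parallax can be made concrete with a short sketch. The following Python fragment is illustrative only; the helper name and the eye-separation value are assumptions, not part of the actual rendering module:

```python
# Screen parallax of a 3D point for a stereo pair generated by two
# horizontally offset cameras. Negative parallax means the point appears
# in front of the screen (the salient perception used in our setup).

def screen_parallax(point_depth, focal_depth, eye_separation=0.065):
    """Horizontal parallax (m) on the zero-parallax plane at focal_depth.

    point_depth, focal_depth: distances from the viewer (m).
    eye_separation: distance between the two virtual cameras (m).
    """
    # By similar triangles, the parallax grows with the depth offset of
    # the point from the zero-parallax (focal) plane.
    return eye_separation * (point_depth - focal_depth) / point_depth

# A point halfway between viewer and focal plane pops out of the screen:
print(screen_parallax(point_depth=0.5, focal_depth=1.0))  # negative
# A point beyond the focal plane recedes behind it:
print(screen_parallax(point_depth=2.0, focal_depth=1.0))  # positive
```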

4.3 Collision detection module

The collision detection module is another main module responsible for providing appropriate feedback to the user. It detects all the collisions among the objects of the virtual scene using a uniform spatial grid decomposition based on voxels. The collision method developed (Borro et al. 2004b) is essentially focused on huge and compact models which contain millions of polygons within a few cubic metres. After computing collisions among objects in the virtual environment, this module computes the contact response. The force response consists of computing the interaction force between the avatar and the virtual objects when a collision is detected. This force must approximate as closely as possible the contact forces that users would normally feel during contact among real objects. In order to compute this force, the collision module collects the information necessary to determine the correct interaction, i.e., penetrations and normal vectors, taking into account the user's movement. This information is sent to the haptic rendering module, which computes the final force and applies it to the user.
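As a rough illustration of how a uniform voxel grid narrows down collision queries, consider the sketch below. The class and its methods are hypothetical simplifications; the actual algorithm and the choice of optimal voxel size are discussed in Borro et al. (2004b):

```python
# Uniform spatial grid: triangles are registered in the voxels overlapped
# by their bounding boxes, so only the triangles sharing the avatar's
# voxel need an exact narrow-phase intersection test.
from collections import defaultdict

class VoxelGrid:
    def __init__(self, voxel_size):
        self.size = voxel_size
        self.cells = defaultdict(list)  # (i, j, k) -> triangle ids

    def _cell(self, point):
        return tuple(int(c // self.size) for c in point)

    def insert(self, tri_id, vertices):
        # Conservative broad phase: use the triangle's axis-aligned box.
        lo = self._cell([min(v[a] for v in vertices) for a in range(3)])
        hi = self._cell([max(v[a] for v in vertices) for a in range(3)])
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    self.cells[(i, j, k)].append(tri_id)

    def candidates(self, point):
        # Triangles worth testing exactly against the avatar position.
        return self.cells.get(self._cell(point), [])
```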

4.4 Haptic rendering module

The second type of sensory feedback provided to the users is haptic feedback. The haptic loop has two main tasks: firstly, it acquires the position and orientation of the tracking device; secondly, it calculates the force that is fed back to the user, taking into account the information sent by the collision module. As the collision module and the haptic rendering module run at different frequencies, interpolation and extrapolation techniques are used to couple both modules (a simple sketch of one such technique is given below). More technical details can be found in García-Alonso et al. (2005).

The sampling rate of the haptic module is an important factor in achieving a realistic contact sensation. Since the mechanoreceptors in our fingertips detect signals up to 1 kHz, with the highest sensitivity around 300 Hz (Boff and Lincoln 1988), the sampling rate is set to 2 kHz in the LHIfAM.

Usability has been a main issue in the design of the haptic device. As explained before, the LHIfAM is the device handled by the user. Its dimensions can make its manipulation uncomfortable, which would be a serious problem when performing accessibility tasks. In order to avoid this problem, and to achieve higher transparency and therefore better usability, an impedance force law was implemented in the LHIfAM. A force feedforward loop was added in order to decrease the inertia of the device along its translational degrees of freedom.
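A minimal sketch of such a coupling technique, linear extrapolation of the penetration depth between slow collision updates, is shown here. This is an illustrative stand-in, not the exact scheme of García-Alonso et al. (2005):

```python
# The 2 kHz haptic loop needs a penetration estimate every 0.5 ms, while
# the collision loop delivers samples at a slower, variable rate. Between
# updates, the last two samples are extrapolated linearly.
class PenetrationExtrapolator:
    def __init__(self):
        self.t_prev = self.t_last = 0.0
        self.d_prev = self.d_last = 0.0

    def update(self, t, depth):
        """Called by the (slow, variable-rate) collision loop."""
        self.t_prev, self.d_prev = self.t_last, self.d_last
        self.t_last, self.d_last = t, depth

    def estimate(self, t):
        """Called by the 2 kHz haptic loop between collision updates."""
        dt = self.t_last - self.t_prev
        if dt <= 0.0:
            return self.d_last  # not enough history yet
        slope = (self.d_last - self.d_prev) / dt
        # Clamp at zero: a negative depth would mean no contact at all.
        return max(0.0, self.d_last + slope * (t - self.t_last))
```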

4.5 Audio rendering module

This module has been developed to analyse the benefits that auditory feedback provides to the haptic interaction. Two approaches were initially considered to generate this feedback: pre-recorded sounds and real-time generation of sounds. The creation of a sound effect is the first step in providing audio to an application. Sound effects can be recorded or created by a programmer. Recorded sounds are real sounds that will be played back later in an


application when a similar event takes place. To achieve realistic contact sounds, the sound emitted should provide the user with different information, such as the location of the contact, the material of the colliding objects or the impact force applied. This implies that for every combination of objects, contact points and applied forces, there should be a pre-recorded sound to be played back at the right time. Obviously, it is not worth building such a sound library for a simple tap between two objects. Thus, the second approach becomes necessary to generate auditory feedback in real time: a highly flexible sound model that can generate or manipulate the sound in real time, taking into account the actions performed. Recent studies have addressed this problem and have focused on formulating a physical model of contact sounds (van den Doel and Pai 2001; van den Doel et al. 2001). The aim has been to model them by a parameterized mathematical expression that can generate, as well as manipulate, the emitted sound. To address this question, the natural production of sound has been taken into account. The collision between two objects leads to the vibration of their outer surfaces. These vibrations create pressure waves in the air, which are sensed and perceived by the human ear. The main idea is to simulate these waves through a physical sound model. In recent years, there have been two main approaches to building such a physical model of sounds. The first one is based on FEM simulation (Yano and Iwata 2001; O'Brien et al. 2001), whilst the other relies on modal synthesis (van den Doel and Pai 1998). However, it is quite difficult to run an FEM simulation at audio rates (44.1 kHz) with current hardware, due to the high computational requirements. Although FEM could model vibration and sound propagation in detail, this approach is computationally too slow for real-time haptic simulations. The approach implemented in our system is the modal synthesis technique, since it seems to be the most appropriate method so far to generate collision sounds in real time. The modal audio synthesis algorithm used in our system is described in detail in van den Doel and Pai (1998). It is based on vibration dynamics, and it takes into account properties of the sounding object such as the material, the size and the contact location. The impulse response y(t) of an arbitrary solid object to an external force at a particular point can be described as a sum of damped sinusoids:

y(t) = \sum_{n=1}^{N} a_{nk} \, e^{-d_n t} \sin(2\pi f_n t).    (1)

In this expression, f_n are the modal frequencies, d_n are the decay rates, N is the number of modes, a_{nk} are the gains of each mode at different locations, and y(t) denotes the audio signal as a function of time. The frequencies and dampings are determined by material properties, while the coupling gains of the modes depend on the contact location. There are many methods to find the optimal values of these parameters for each type of sound (van den Doel 1998; Richmond and Pai 2000), or they can also be estimated empirically. One advantage of this synthesis algorithm is that it is linear: there is a direct relationship between the output signal and the input excitation. Therefore, the haptic contact forces calculated can also serve as the audio force inputs. The computation time of the modal synthesis technique depends on the total number of modes processed, N, and on the audio sampling rate used. To represent sounds of frequencies up to f Hz, the Shannon sampling theorem (Shannon 1949) states that the sampling rate must be higher than twice the highest frequency f; otherwise, frequencies above f would produce distortions. Humans can perceive frequencies between 20 and 20,000 Hz, and a typical audio sampling rate is 44,100 Hz (CD quality). However, it is often difficult to achieve this sampling rate with current hardware. The number of modes necessary to generate a sound of good quality depends on the object collided. For example, if the object is a simple metallic rigid bar, between five and ten modes are usually enough for an accurate sound. In addition, if the bar is made of wood, the audio sampling rate does not need to be as high as for metallic objects. In general, the programmer has to decide the trade-off between the audio sampling rate and the number of modes computed, because the quality of the sound depends on both factors: the higher the sampling rate, the fewer modes the computer will be able to process. The modal synthesis technique generally fulfills the requirements to model sounds for tapping, sliding, rolling or scraping events, and for objects made of metal, plastic or wood. Our system uses this technique to render audio and provide realistic auditory feedback to the haptic application, as sketched below. Its parameters are determined depending on the application, so as to provide the virtual objects with adequate sounds.

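The following Python fragment evaluates Eq. (1) directly. The modal parameter values listed are illustrative placeholders, not values used in our system:

```python
# Modal synthesis of an impact sound as a sum of damped sinusoids, Eq. (1).
import math

def modal_impulse_response(t, modes):
    """modes: list of (a_nk, d_n, f_n) = (gain, decay rate, frequency Hz)."""
    return sum(a * math.exp(-d * t) * math.sin(2.0 * math.pi * f * t)
               for a, d, f in modes)

# Because the model is linear, the haptic contact force can scale the
# excitation. Synthesize 0.5 s of a hypothetical three-mode metallic tap
# at a 20 kHz sampling rate.
FS = 20000
contact_force = 2.5  # N, as computed by the haptic loop (example value)
modes = [(0.8, 6.0, 440.0), (0.5, 9.0, 1130.0), (0.3, 14.0, 2010.0)]
samples = [contact_force * modal_impulse_response(n / FS, modes)
           for n in range(FS // 2)]
```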
4.6 Synchronization of the feedback modules

The synchronization of multisensory data is a key issue in achieving a proper behaviour of the whole system. If data are not perfectly synchronized, the perception of certain events can be very poor, and it can also reduce user concentration, which results in errors when performing accessibility tasks. For instance, if the manipulation of the haptic device is not correctly updated on the display screen, the overall action appears unrealistic. A similar phenomenon happens when haptic forces are applied and the sound corresponding to the event is emitted some time later; it can be compared to the natural effect of watching fireworks far away from the source. The main problem in the synchronization of different sensory modalities is the different sampling rates needed. As mentioned before, haptic feedback needs a sampling rate of at least 1 kHz for a good tactile sensation. The collision detection loop, however, does not have a fixed

sampling rate: its frequency depends on the geometrical complexity of the virtual hand tool rather than on the complexity of the whole model (Borro et al. 2004a). In each haptic loop, intermediate values have to be estimated until new information from the collision detection loop arrives. Otherwise, the haptic sampling rate would be imposed by the collision detection module, which is much slower, and the resulting haptic feeling would be very poor. There are many techniques (Boff and Lincoln 1988; Adachi et al. 1995) to couple the fast haptic loop with the slow collision loop. On the other hand, the visual refresh rate cannot be fixed to a certain frequency without limiting the quality of the final image, since it depends on the complexity of the model. Indeed, the human eye requires a visualization frequency of at least 25 Hz to perceive the simulation as a real-time response. Finally, for audio synthesis, 20 kHz is considered a good sampling rate. Apart from the sampling rates, latencies must also be taken into account. The latencies produced when working with complex industrial models and with stereoscopic visualization would cause an asynchronous response between haptic and visual stimuli. So far, with more affordable models of two million polygons (Fig. 1), the synchronization is fully guaranteed. Other system latencies can appear when coupling audio and haptic feedback. Audio latencies must be considered when sound is rendered on a computer and then sent to the loudspeakers through a sound card. These latencies depend on the OS, and were measured in DiFilippo (2000); in that case, it seems that the best option would be to use OpenAL and EAX on a Creative Sound Blaster Audigy or X-Fi for hardware acceleration. On the other hand, latencies are usually unnoticeable if sound is rendered on a DSP and the output signal is conducted directly to the loudspeakers from an analog output. In our system (Fig. 4), the LHIfAM haptic interface is controlled by a dSPACE DS1104 board that reads position and joint information, processes the control loop and outputs the force commands to the motors. Graphic rendering and collision detection are performed on a separate PC—a Pentium IV 3 GHz running Windows XP—using multithreaded programming. Information between graphics and control is sent via Ethernet. Audio synthesis and rendering are also performed on the dSPACE board in order to guarantee real-time synchronization between the output force commands to the motors and the output audio signal. This audio signal is sent directly from an analog output channel of the board to the loudspeakers. This two-processor architecture has several advantages. One of the benefits is the independence of the control and graphic computers, which allows the connection of different haptic devices, such as a PHANToM. This property could be useful to carry out a benchmark testing the behaviour of our system with different haptic devices. Another important advantage is that processes are better managed using different CPUs for control and simulation, i.e. the system overload is avoided and applications can be run in real time.

Fig. 4 System modules and loops


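As a back-of-the-envelope illustration, the overall latency budget can be checked against the audio–haptic asynchrony thresholds cited in Sect. 2. The individual figures below are illustrative assumptions, not measurements of our system:

```python
# Rough audio-haptic latency budget versus the 18-25 ms JND reported by
# Adelstein et al. (2003). Rendering audio on the same board that closes
# the force loop keeps the budget far below the perceptual threshold.
latencies_ms = {
    "collision_update": 5.0,   # one slow-loop period (varies with the tool)
    "haptic_sample": 0.5,      # one period of the 2 kHz control loop
    "analog_audio_out": 1.0,   # board output stage (no OS sound stack)
}

total = sum(latencies_ms.values())
print(f"worst-case audio-haptic asynchrony ~ {total:.1f} ms")
assert total < 18.0, "asynchrony may become perceptible"
```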
5 Multisensory experiments

A set of experiments has been carried out with the collaboration of several participants. The aim of these tests is to check the influence of multisensory feedback on the performance of accessibility tasks, such as those related to maintenance. We show that user performance improves when a combination of auditory, visual and haptic stimuli is used. The scenario used and the methodology followed in the experiments are described in detail in the following subsections.

5.1 Participants

Twelve subjects took part in the experiments, nine men and three women, aged from 25 to 32 years. All of them had normal or corrected-to-normal vision, reported normal tactile function and were free of auditory impairments. Most of the subjects had no prior experience with haptic interfaces. All participants were naive to the details and hypotheses of the experiments.

5.2 Procedure

Firstly, a suitable scenario had to be chosen for the experiments. This was not an easy task, since we faced several problems. Accessibility tasks, in general, imply accurate displacements and a good perception of the environment. However, almost none of the participants in the experiments were familiar with these types of virtual environments and accessibility tasks. Another problem we found was that the realistic models usually manipulated in these training tasks allow multiple access paths, due to their inherent complexity and their high number of components. Therefore, it is difficult to find a common, single movement pattern for all users in order to analyse and make a reliable comparison among all the results obtained. Taking into


account all these constraints, and the fact that the quality of the sound does not depend on the complexity of the model, we concluded that it was necessary to create an intuitive and straightforward scenario where users could accomplish the experiments without any prior knowledge of this type of environment. In addition, it was easier to fix a path that all users could repeat through the different trials, something really difficult with large and complex models. All these factors led us to model a maze (Fig. 5), where participants were forced to move through the whole haptic workspace and in very different ergonomic positions, as happens in accessibility operations. The procedure of the experiments was the following. The labyrinth was presented graphically to the subjects on a screen from a top view. The image was projected with two projectors mounted on the ceiling that displayed the application stereoscopically, and users were provided with special glasses to perceive the 3D visual effect. A solid sphere was located at the entrance of the labyrinth, and its position was controlled with the end effector of the LHIfAM. The size of the outer rectangle of the labyrinth was set to 1 m × 1 m in order to cover the whole workspace of the haptic device; in this way, participants were forced to interact with it in different ergonomic conditions. The diameter of the sphere (80 mm) was nearly the same as the distance from wall to wall of the maze (82 mm), which made accessibility more difficult. The walls were high enough to guarantee that subjects could not escape over the top of the model, whereas at the bottom a rigid floor was modelled. The contact model used for the walls of the labyrinth was based on a spring-damper system, and the contact force was computed by means of this model. The stiffness and damping coefficients of the labyrinth walls in the normal direction were set to 600 N/m and 1 N s/m, respectively, in order to achieve a proper behaviour of

the LHIfAM. For the tangential movement along the virtual walls, a friction force model was implemented following the one proposed by Salisbury et al. (1995). The static coefficient of friction was set to 0.4, the dynamic coefficient of friction to 0.2, and the stiffness in the tangential direction to 100 N/m. The sound generated when the ball collided with the walls of the maze was a metallic sound rendered at a sampling rate of 20 kHz with ten modes computed. It was rendered as a tapping sound proportional to the haptic force sent to the user, and if the contact persisted, a "stick-slip" sound was rendered proportional to the haptic friction force. A sketch of this contact model is given below. Initially, participants were allowed to interact with other haptic applications for a short period of time, so that they could familiarize themselves with haptic devices and get used to the visual accommodation. Then, they were requested to cover the whole path from the entrance of the labyrinth to the centre and back. In each experiment, information about collisions of the sphere with the maze was provided through different sensory channels. Visual feedback (stereoscopic visualization) was always displayed, while sound and force feedback were alternately provided in four possible combinations: only visual display (V), visual with sound feedback (VS), visual with force feedback (VF) and visual, force and sound feedback together (VSF). Each participant did the experiment four times, each time with one of the four possible combinations. However, the order of the combinations changed from one subject to another, and there was also an interval of more than 1 h between the experiments performed by each subject. The aim was to prevent participants from acquiring training effects and getting used to the path they had to follow. Subjects were told to complete the task at the speed they felt most comfortable with, and that the time they spent completing the task would not be measured. The only instructions given to them were that the main target was to avoid collisions between the ball and the walls of the maze, and that they could not go backwards. They were also informed that sound and force feedback would be provided, depending on the experiment, whenever they collided with any of the walls. Visual, haptic and auditory stimuli should prevent the user from penetrating further inside the labyrinth walls. Therefore, accessibility was analysed by evaluating the collisions between the sphere and the labyrinth walls. For the analysis of the results, we considered lower penetration values in the maze as an indicator of a better perception of the environment and a higher accuracy in accessibility tasks. In each experiment and for each subject, the penetration of the sphere into the walls was measured.
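The sketch below illustrates this wall contact model. The parameter values are those reported above; the one-dimensional decomposition into normal and tangential components is an illustrative simplification of the actual 3D computation:

```python
# Spring-damper normal force plus stick-slip tangential friction, in the
# spirit of Salisbury et al. (1995).
import math

K_N, B_N = 600.0, 1.0   # normal stiffness (N/m) and damping (N s/m)
K_T = 100.0             # tangential stiffness (N/m)
MU_S, MU_D = 0.4, 0.2   # static and dynamic friction coefficients

def normal_force(penetration, penetration_rate):
    """Penalty force pushing the sphere out of the wall (N)."""
    if penetration <= 0.0:
        return 0.0
    return K_N * penetration + B_N * penetration_rate

def friction_force(tangential_offset, f_normal, sticking):
    """Tangential spring until breakaway, then sliding at the dynamic limit.

    Returns (force in N, still_sticking flag).
    """
    f_spring = K_T * tangential_offset
    limit = (MU_S if sticking else MU_D) * f_normal
    if abs(f_spring) <= limit:
        return f_spring, True
    return math.copysign(limit, f_spring), False
```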

Fig. 5 Virtual environment for the experiments

6 Results and discussion

The results obtained in the experimental phase are analysed in this section. The aim is to determine whether


additional information obtained from another sensory channel—audio—could improve or degrade the user's performance in accessibility tasks. As mentioned in the previous section, the criterion adopted is that smaller penetrations indicate that subjects achieved an overall better performance in the task. The average penetration values measured for each experiment and each subject are shown in Table 1. Analysing each subject individually, all participants had better results with force feedback than without it. This could be expected, since force feedback physically prevents users from entering the walls. If we divide the experiments into two groups, those with and those without force feedback, the main interest of the tests lies in comparing the results of these two groups with and without sound. In the group of experiments without force feedback, all the subjects except one (subject 7) obtained better results when sound feedback was added. In the experiments with force feedback, all participants obtained better results when sound feedback was provided. Figure 6 shows the means and the variances of all penetration values for each type of experiment. It can be noticed that force feedback prevented participants from penetrating inside the walls more than sound feedback did. This suggests that, for this particular type of application, force feedback has a greater influence on the participants than sound. Taking as reference the experiments performed only with the aid of visual stimuli (V), the mean values obtained in the VS experiments showed an improvement of 20% compared to the V test, whilst in VF and VSF the average enhancement was 80 and 85%, respectively. It is also interesting to analyse how the different combinations of force, vision and sound feedback affected the general penetration values when considering all the experiments together. Variance analysis of the mean values between the V and VS trials confirmed that sound improved results (P = 0.019): the mean penetration value was reduced by 20% with sound. The same analysis between VF and VSF led to the same conclusion

Table 1 Experimental results: mean penetration values (mm)

Subject     V      VS     VF     VSF
1           5.18   4.80   1.52   1.25
2           4.48   3.87   0.92   0.68
3           4.03   3.08   0.70   0.69
4           5.31   3.55   0.78   0.72
5           4.28   3.23   1.01   0.72
6           7.46   5.65   1.11   0.89
7           5.66   5.98   0.97   0.87
8           7.31   4.32   1.06   0.65
9           5.13   3.77   1.14   1.41
10          4.61   4.18   1.11   0.69
11          4.65   4.13   0.94   0.69
12          4.94   4.13   0.81   0.49

Fig. 6 Box and whisker diagram indicating penetration values measured for each trial. Line inside the box represents median values and star point represents mean values. Cross points represent outliers

(P = 0.018): in this case, the mean penetration was reduced by 23%. One-way analysis of variance (ANOVA) between all trials showed no evidence of interaction between stimuli, except for the interaction between VF and VSF. The explanation for this result is that some participants penetrated more with both sound and force feedback than others did with only force feedback, depending on their ability to handle the haptic device. However, it is interesting to notice that every participant individually improved their results when sound feedback was provided. The analysis of the results shows that the addition of sound stimuli reduces penetration values. Therefore, it can be concluded that the results confirm the hypothesis that participants improve their general performance in these experiments when sound feedback is added. Apart from the measurement of penetration values, the comments made by the participants and their behaviour while performing the experiments were also taken into account for further analysis. Although subjects were told that performance time would not be measured, it was in fact measured. Time results were very different from one subject to another, since they performed the experiments at very different speeds. However, overall, subjects commonly needed more time to perform the experiments with sound feedback than without it (Table 2). Comparing the mean values obtained in the V and VS trials, participants spent 13.4 s more with sound feedback, whilst in the VSF trial participants needed 11.5 s more on average than in VF to complete the task (Fig. 7). A paired samples t test was performed to compare the means of the trials. The results of this analysis show that the addition of audio significantly increases

Table 2 Mean values and standard deviations for time (s) measured for each trial

                          V       VF      VS      VSF
Mean (s)                  88.16   82.00   101.58  93.58
Standard deviation (s)    26.41   23.01   25.25   29.95

the time required to complete the experiment: V–VS (P = 0.024) and VF–VSF (P = 0.005). However, since the number of subjects is rather small, a further analysis on a larger sample is needed to validate this conclusion.
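For reference, the paired comparisons of penetration reported above can be reproduced from the data in Table 1. The sketch below uses SciPy's paired t test, so the P values may differ slightly from those obtained with the variance analysis used in our study:

```python
# Paired comparisons of mean penetration (mm) from Table 1, with and
# without sound feedback.
from scipy import stats

V   = [5.18, 4.48, 4.03, 5.31, 4.28, 7.46, 5.66, 7.31, 5.13, 4.61, 4.65, 4.94]
VS  = [4.80, 3.87, 3.08, 3.55, 3.23, 5.65, 5.98, 4.32, 3.77, 4.18, 4.13, 4.13]
VF  = [1.52, 0.92, 0.70, 0.78, 1.01, 1.11, 0.97, 1.06, 1.14, 1.11, 0.94, 0.81]
VSF = [1.25, 0.68, 0.69, 0.72, 0.72, 0.89, 0.87, 0.65, 1.41, 0.69, 0.69, 0.49]

for label, a, b in [("V vs VS", V, VS), ("VF vs VSF", VF, VSF)]:
    t, p = stats.ttest_rel(a, b)
    print(f"{label}: t = {t:.2f}, P = {p:.3f}")
```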

7 Conclusions and current line of research

Improving the interaction and perception between humans and computers is a continuous challenge. There is a need to increase the user's immersion and interaction in Virtual Reality systems, and this motivates bringing additional senses into the process. This paper presents a user study showing the influence of multisensory processing in reinforcing the user's sensations in virtual environments with haptic interfaces. The results demonstrate that the synergy of visual, haptic and auditory stimuli in haptic systems leads to greater improvements in accessibility tasks than systems with fewer stimuli achieve. The maze used in the experiments reproduces the general accessibility problems found in applications such as training for maintenance tasks. Therefore, the results suggest that multisensory feedback can improve accessibility applications in general: perception of the environment, as well as of collision events, becomes more realistic, and users can perform tasks with higher accuracy. The whole multisensory system presented has been improved for performing accessibility tasks by adding

Fig. 7 Mean time (s) and standard deviation for each trial

auditory feedback. The key to building systems that provide successful multisensory experiences is to achieve a proper synchronization of all the sensory modalities involved. Currently, we are considering several ways to improve the quality of the multisensory system. One of them is the enhancement of the sound generated by the auditory feedback module; the aim is to develop more complex sounds that can represent complex textures and interactions. Another area of improvement is researching the benefits that accurate sound directionality can provide to users: accurate generation of sound direction could provide important cues for the perception of collision locations.

Acknowledgments The research work presented in this paper is supported by the European Commission, under the FP6 IST-2002-002114 Enactive Network of Excellence (http://www.enactivenetwork.org/).

References

Adachi Y, Kumano T, Ogino K (1995) Intermediate representation for stiff virtual objects. In: Proceedings of the IEEE virtual reality annual international symposium, pp 203–210
Adelstein BD, Begault DR, Anderson MR, Wenzel EM (2003) Sensitivity to haptic-audio asynchrony. In: Proceedings of the 5th international conference on multimodal interfaces, Vancouver, Canada, pp 73–76. DOI 10.1145/958432.958448
Boff KR, Lincoln JE (1988) Engineering data compendium: human perception and performance. Harry G. Armstrong Aerospace Medical Research Laboratory, Wright-Patterson Air Force Base, Ohio
Borro D, Savall J, Amundarain A, Gil JJ, García-Alonso A, Matey L (2004a) A large haptic device for aircraft engine maintainability. IEEE Comput Graph Appl 24(6):70–74. DOI 10.1109/MCG.2004.45
Borro D, García-Alonso A, Matey L (2004b) Approximation of optimal voxel size for collision detection in maintainability simulations within massive virtual environments. Comput Graph Forum 23(1):13–23. DOI 10.1111/j.1467-8659.2004.00002.x
Colgate JE, Schenkel G (1994) Passivity of a class of sampled-data systems: application to haptic interfaces. In: Proceedings of the American control conference, Baltimore, MD, pp 3236–3240
DiFilippo D (2000) The AHI: an audio and haptic interface for simulating contact interactions. Master's thesis, University of British Columbia
DiFilippo D, Pai DK (2000) The AHI: an audio and haptic interface for contact interactions. In: Proceedings of the 13th annual ACM symposium on user interface software and technology, San Diego, CA, USA, pp 149–158. DOI 10.1145/354401.354437
DiFranco DE, Beauregard GL, Srinivasan MA (1997) The effect of auditory cues on the haptic perception of stiffness in virtual environments. In: Proceedings of the 1997 ASME international mechanical engineering congress and exposition, Dallas, TX, USA, pp 17–22
van den Doel K (1998) Sound synthesis for virtual reality and computer games. Ph.D. thesis, University of British Columbia
van den Doel K, Pai DK (1998) The sounds of physical shapes. Presence: Teleoperators Virtual Environ 7(4):382–395
van den Doel K, Pai DK (2001) JASS: a Java audio synthesis system for programmers. In: Proceedings of the 2001 international conference on auditory display, Espoo, Finland, pp 150–154
van den Doel K, Kry PG, Pai DK (2001) FoleyAutomatic: physically-based sound effects for interactive simulation and animation. In: Proceedings of the 28th annual conference on computer graphics and interactive techniques, Los Angeles, USA, pp 537–544. DOI 10.1145/383259.383322
García-Alonso A, Gil JJ, Borro D (2005) Interfaces for VR applications development in design. In: Proceedings of Virtual Concept 2005, Biarritz, France, p 109
Gil JJ, Avello A, Rubio Á, Flórez J (2004) Stability analysis of a 1 DOF haptic interface using the Routh–Hurwitz criterion. IEEE Trans Control Syst Technol 12(4):583–588. DOI 10.1109/TCST.2004.825134
Guest S, Catmur C, Lloyd D, Spence C (2002) Audiotactile interactions in roughness perception. Exp Brain Res 146:161–171. DOI 10.1007/s00221-002-1164-z
Heller MA (1982) Visual and tactual texture perception: intersensory cooperation. Percept Psychophys 31(4):339–344
Hodges LF (1992) Tutorial: time-multiplexed stereoscopic computer graphics. IEEE Comput Graph Appl 12(2):20–30. DOI 10.1109/38.124285
Hodges LF, Davis ET (1993) Geometric considerations for stereoscopic virtual environments. Presence: Teleoperators Virtual Environ 2(1):34–43
Lederman SJ (1979) Auditory texture perception. Perception 8:93–103
Levitin DJ, MacLean K, Mathews M, Chu LY, Jensen ER (2000) The perception of cross-modal simultaneity. Int J Comput Anticip Syst 517(1):323–329. DOI 10.1063/1.1291270
Mansa I, Amundarain A, Hernantes J, García-Alonso A, Borro D (2006) Occlusion culling for dense geometric scenarios. In: Proceedings of Laval Virtual 2006, Laval, France (in press)
McGee MR, Gray PD, Brewster SA (2000) The effective combination of haptic and auditory textural information. In: Proceedings of the first international workshop on haptic human–computer interaction. Lecture Notes in Computer Science, vol 2058. Springer, pp 118–126
McGee MR, Gray PD, Brewster SA (2001) Feeling rough: multimodal perception of virtual roughness. In: Proceedings of the 1st Eurohaptics conference, Birmingham, UK, pp 29–33
O'Brien JF, Cook PR, Essl G (2001) Synthesizing sounds from physically based motion. In: Proceedings of the 28th annual conference on computer graphics and interactive techniques, Los Angeles, USA, pp 529–536. DOI 10.1145/383259.383321
Pai DK (2005) Multisensory interaction: real and virtual. In: Dario P, Chatila R (eds) Robotics research: the eleventh international symposium. Springer tracts in advanced robotics, vol 15. Springer, Berlin Heidelberg New York, pp 489–500. DOI 10.1007/11008941_52
Poling GL, Weisenberger JM, Kerwin K (2003) The role of multisensory feedback in haptic surface perception. In: Proceedings of the 11th annual symposium on haptic interfaces for virtual environments and teleoperator systems, Los Angeles, CA, pp 187–194. DOI 10.1109/HAPTIC.2003.1191271
Richmond JL, Pai DK (2000) Active measurement of contact sounds. In: Proceedings of the 2000 IEEE international conference on robotics and automation, San Francisco, CA, USA, pp 2146–2152. DOI 10.1109/ROBOT.2000.846346
Salisbury JK, Brock DL, Massie TH, Swarup N, Zilles C (1995) Haptic rendering: programming touch interaction with virtual objects. In: Proceedings of the 1995 symposium on interactive 3D graphics, Monterey, CA, USA, pp 123–130. DOI 10.1145/199404.199426
Shannon CE (1949) Communication in the presence of noise. Proc Inst Radio Eng 37(1):10–21
Srinivasan MA, Beauregard GL, Brock DL (1996) The impact of visual information on the haptic perception of stiffness in virtual environments. In: Proceedings of the 1996 ASME international mechanical engineering congress and exposition, Atlanta, GA, USA, pp 555–559
Stein BE, Meredith MA (1993) The merging of the senses. MIT Press, Cambridge
Veron H, Southard DA, Leger JR, Conway JL (1990) Stereoscopic displays for terrain database visualization. In: Proceedings of stereoscopic displays and applications, pp 124–135
Wu W, Basdogan C, Srinivasan MA (1999) Visual, haptic, and bimodal perception of size and stiffness in virtual environments. ASME Dyn Syst Control Div 67:19–26
Yano H, Iwata H (2001) Software architecture for audio and haptic rendering based on a physical model. In: Proceedings of the 8th IFIP TC13 conference on human–computer interaction, Tokyo, Japan, pp 19–26
Yeh YY, Silverstein LD (1990) Limits of fusion and depth judgment in stereoscopic color displays. Hum Factors 32(1):45–60
