
Proceedings of the 2007 ACM Symposium on Virtual Reality Software and Technology

Fullname: VRST'07 ACM Symposium on Virtual Reality Software and Technology
Editors: Aditi Majumder; Larry Hodges; Daniel Cohen-Or; Ming Lin; Werner Purgathofer; Gerard Kim; Ronan Boulic; Miguel Otaduy
Location: Newport Beach, California
Dates: 2007-Nov-05 to 2007-Nov-07
Standard No: ISBN 1-59593-863-X, 978-1-59593-863-3; ACM Order Number: 609070
  1. Tracking, calibration & VR support
  2. Simulating human and nature in motion
  3. Avatars, crowds & perceptions
  4. Rendering
  5. 3D interaction & multi-sensory rendering
  6. Display & navigations
  7. Posters

Tracking, calibration & VR support

Implicit 3D modeling and tracking for anywhere augmentation, pp. 19-28
  Sehwan Kim; Stephen DiVerdi; Jae Sik Chang; Taehyuk Kang; Ronald Iltis; Tobias Höllerer
This paper presents an online 3D modeling and tracking methodology that uses aerial photographs for mobile augmented reality. Instead of relying on models which are created in advance, the system generates a 3D model for a real building on the fly by combining frontal and aerial views with the help of an optical sensor, an inertial sensor, a GPS unit and a few mouse clicks. A user's initial pose is estimated using an aerial photograph, which is retrieved from a database according to the user's GPS coordinates, and an inertial sensor which measures pitch. To track the user's position and orientation in real-time, feature-based tracking is carried out based on salient points on the edges and the sides of a building the user is keeping in view. We implemented camera pose estimators using both a least squares and an unscented Kalman filter (UKF) approach. The UKF approach results in more stable and reliable vision-based tracking. We evaluate the speed and accuracy of both approaches, and we demonstrate the usefulness of our computations as important building blocks for an Anywhere Augmentation scenario.
Keywords: UKF, camera pose estimation, feature-based tracking, online modeling, outdoor augmented reality
Real-time tracking of visually attended objects in interactive virtual environments, pp. 29-38
  Sungkil Lee; Gerard Jounghyun Kim; Seungmoon Choi
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) features, the framework also uses top-down (goal-directed) contexts to predict the human gaze. The framework first builds feature maps using preattentive features such as luminance, hue, depth, size, and motion. The feature maps are then integrated into a single saliency map using the center-surround difference operation. This pixel-level bottom-up saliency map is converted to an object-level saliency map using the item buffer. Finally, the top-down contexts are inferred from the user's spatial and temporal behaviors during interactive navigation and used to select the most plausibly attended object among candidates produced in the object saliency map. The computational framework was implemented on the GPU and exhibited extremely fast performance (5.68 msec for a 256×256 saliency map), substantiating its adequacy for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the visual attention tracking framework with respect to actual human gaze data. The attained accuracy was well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. The framework can be effectively used for perceptually based rendering without an expensive eye tracker, for example to provide depth-of-field effects and manage level-of-detail in virtual environments.
Keywords: attention tracking, bottom-up feature, saliency map, top-down context, virtual environment, visual attention
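The center-surround integration of preattentive feature maps described in the abstract above can be sketched as follows. This is an illustrative reconstruction, not the authors' GPU implementation: the box blur standing in for pyramid levels and all function names are assumptions.

```python
import numpy as np

def box_blur(img, radius):
    """Crude separable box blur standing in for a Gaussian pyramid level."""
    size = 2 * radius + 1
    kernel = np.ones(size) / size
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def center_surround(feature, center_r=1, surround_r=4):
    """Center-surround difference: fine scale minus coarse scale, rectified."""
    center = box_blur(feature, center_r)
    surround = box_blur(feature, surround_r)
    return np.abs(center - surround)

def saliency_map(features):
    """Normalize each feature's conspicuity map and average into one saliency map."""
    total = np.zeros_like(features[0])
    for f in features:
        cs = center_surround(f)
        rng = cs.max() - cs.min()
        if rng > 0:
            cs = (cs - cs.min()) / rng
        total += cs
    return total / len(features)
```

A GPU version, as the paper's timings suggest, would compute the blurs as pyramid levels in shaders and handle luminance, hue, depth, size, and motion as separate feature channels.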
Simultaneous 4 gestures 6 DOF real-time two-hand tracking without any markers, pp. 39-42
  Markus Schlattmann; Reinhard Klein
In this paper we present a novel computer-vision-based hand-tracking method capable of simultaneously tracking 6+4 degrees of freedom (DOFs) of each human hand in real time (25 frames per second) with the help of 3 (or more) off-the-shelf consumer cameras. '6+4 DOF' means that the system can track the global pose (6 continuous parameters for translation and rotation) of 4 different gestures. Several studies have identified the need for two-handed interaction to enable intuitive 3D human-computer interaction. Previously, using both hands as (at least) 6 DOF input devices required either datagloves or markers. Applying our two-hand tracking, we evaluated the use of both hands as input devices for two applications: fly-through exploration of a virtual world and a mesh editing application.
Keywords: hand tracking, interaction techniques, virtual reality
A semi-automatic realtime calibration technique for a handheld projector, pp. 43-46
  Vinh Ninh Dao; Kazuhiro Hosoi; Masanori Sugimoto
In this paper, a semi-automatic real-time calibration technique for a handheld projector is described. The proposed technique keeps the shape of the projected screen rectangular on a specified projection surface while the user changes position or the pose of the projector. The technique is especially useful for future mobile phones with built-in projectors, allowing a user to project the screen onto any surface in any location and share it with multiple people. Informal evaluations were conducted to gauge user acceptance of the technique and to identify problems for improvement. An example entertainment application was developed to explore the possibilities of the proposed technique.
Keywords: calibration, distortion correction, handheld projector, interaction technique
Handheld AR indoor guidance system using vision technique, pp. 47-50
  Eunsoo Jung; Sujin Oh; Yanghee Nam
We present a mobile augmented reality (AR) system for indoor guidance that applies vision techniques without markers. Two main problems arise. First, to augment information suited to the user's situation, the system must identify which place the user is in. Second, to insert information into the scene and align it with the target scene elements, the basic structure of the augmented space must be recovered. For real-time place recognition on a mobile system, we employ a simple feature detection method combined with a graph-based spatial connectivity representation. An image-based analysis method is applied to interpret the basic scene structure from the video.
Keywords: augmented reality, computer vision, mobile guidance
Supporting the creation of dynamic, interactive virtual environments, pp. 51-54
  Kristopher J. Blom; Steffi Beckhaus
Virtual Reality's expanding adoption makes the creation of more interesting dynamic, interactive environments necessary in order to meet the expectations of users accustomed to modern computer games. In this paper, we present initial explorations of using the recently developed Functional Reactive Programming paradigm to support the creation of such environments. Functional Reactive Programming supports this task by providing tools that match both the user's perception of the dynamics of the world and the underlying hybrid nature of such environments: continuous functions with explicit time dependencies describe the dynamic behaviors of the environment, and discrete event mechanisms provide for modifying the active behaviors of the environment. Initial examples show how this paradigm can be used to control dynamic, interactive Virtual Environments.
Keywords: dynamic virtual environments, functional reactive programming, interactive, virtual reality

Simulating human and nature in motion

Stable and efficient miscible liquid-liquid interactions, pp. 55-64
  Hongbin Zhu; Kai Bao; Enhua Wu; Xuehui Liu
In our surrounding environment we often see miscible liquid-liquid mixing phenomena, such as pouring honey or ink into water, or Coca-Cola into strong wine, yet few papers have been devoted to simulating them. In this paper, we use a two-fluid lattice Boltzmann method (TFLBM) to simulate the underlying dynamics of miscible mixtures. A subgrid model is applied to improve numerical stability, so that the free surface of the mixture can be simulated at higher Reynolds numbers. We also apply control forces to the mixture to create interesting animations. By optimizing the memory layout and taking advantage of dual-core or multi-core systems, we achieve real-time computation for a domain of 64³ cells full of fluid mixtures.
Keywords: control, free surface, lattice Boltzmann method, memory optimization, miscible mixture, multicore system, stability, subgrid model
Simulating competitive interactions using singly captured motions, pp. 65-72
  Hubert P. H. Shum; Taku Komura; Shuntaro Yamazaki
It is difficult to create scenes in which multiple avatars fight or compete with each other. Manually creating the avatars' motions is time consuming because the movements of the avatars are correlated, and capturing the motions of multiple avatars requires a huge amount of post-processing. In this paper, we propose a new method to generate realistic scenes of avatars densely interacting in a competitive environment. The motions of the avatars are captured individually, which makes the data much easier to obtain. We propose a new algorithm, the temporal expansion approach, which maps the continuous-time action plan to a discrete space so that turn-based evaluation methods can be used. As a result, mature game algorithms such as min-max search and α-β pruning can be applied. Using our method, avatars plan their strategies taking the reactions of their opponents into account. Fighting scenes with multiple avatars are generated to demonstrate the effectiveness of our algorithm. The proposed method can also be applied to other continuous activities that require strategy planning, such as sports.
Keywords: human simulation, motion capture, motion planning
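Once the temporal expansion approach has discretized the continuous action plans into turns, standard game-tree search applies. The following is a generic α-β pruning sketch over caller-supplied state, action and scoring functions; it is illustrative only, and all names are assumptions rather than the authors' code.

```python
def alphabeta(state, depth, alpha, beta, maximizing, actions, score, apply_action):
    """Plain alpha-beta search over a discretized, turn-based action space."""
    if depth == 0:
        return score(state)
    if maximizing:
        value = float("-inf")
        for a in actions(state):
            value = max(value, alphabeta(apply_action(state, a), depth - 1,
                                         alpha, beta, False, actions, score, apply_action))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the opponent will never allow this line
        return value
    else:
        value = float("inf")
        for a in actions(state):
            value = min(value, alphabeta(apply_action(state, a), depth - 1,
                                         alpha, beta, True, actions, score, apply_action))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value
```

For the fighting scenario, `state` would encode both avatars' poses, `actions` the individually captured motion segments reachable from the current pose, and `score` an evaluation of damage dealt and received.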
State-annotated motion graphs, pp. 73-76
  Bill Chiu; Victor Zordan; Chun-Chih Wu
Motion graphs have gained popularity in recent years as a means for re-using motion capture data by connecting previously unrelated segments of a recorded library. Current techniques for controlling movement of a character via motion graphs have largely focused on path planning which is difficult due to the density of connections found on the graph. We introduce "state-annotated motion graphs," a novel technique which allows high-level control of character behavior by using a dual representation consisting of both a motion graph and a behavior state machine. This special motion graph is generated from labeled data and then bound to a finite state machine with similar labels. At run-time, character behavior is simply controlled by switching states. We show that it is possible to generate rich, controllable motion without the need for deep planning. We demonstrate that, when applied to an interactive fighting testbed, simple state-switching controllers may be coded intuitively to create various effects.
Keywords: behavior control, human animation, motion capture
Interactive control of physically-valid aerial motion: application to VR training system for gymnasts, pp. 77-80
  Franck Multon; Ludovic Hoyet; Taku Komura; Richard Kulpa
This paper proposes a new method for animating aerial motions in interactive environments while taking dynamics into account. Classical approaches are based on spacetime constraints and require complete knowledge of the motion; in Virtual Reality, however, the user's actions are unpredictable, so such techniques cannot be used. In this paper, we deal with the simulation of gymnastic aerial motions in virtual reality. A user can interact directly with the virtual gymnast through a real-time motion capture system: the user's arm motions are blended into the original aerial motion to verify their consequences on the virtual gymnast's performance. The user can select an initial motion, an initial velocity vector, an initial angular momentum, and a virtual character, each of which directly influences mechanical quantities such as the linear and angular momentum. We have therefore developed an original method that adapts the character's pose at each time step to keep these quantities compatible with mechanical laws: the angular momentum is constant during the aerial phase, and the linear momentum is determined at take-off. Our method can animate up to 16 characters at 30 Hz on a common PC. In summary, it solves kinematic constraints, retargets motion, and corrects it to satisfy mechanical laws. The virtual gymnast application described in this paper is promising for helping athletes understand which postures during the aerial phase lead to better performance.
Keywords: dynamics, interactivity, motion control, sports application, virtual human
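The mechanical constraint the paper enforces, a ballistic center of mass together with a constant angular momentum during the aerial phase, can be illustrated with a minimal sketch. The scalar-inertia simplification and all names are assumptions, not the authors' method.

```python
def simulate_aerial(com0, v0, L, inertia_of_pose, poses, dt, g=9.81):
    """Ballistic CoM trajectory plus a rotation rate from conserved angular momentum.

    com0, v0: take-off position and velocity (y up); L: scalar angular momentum
    about the rotation axis; inertia_of_pose: maps a pose label to a moment of
    inertia; poses: the pose at each time step.
    """
    com = list(com0)
    vy = v0[1]
    angle = 0.0
    trajectory = []
    for pose in poses:
        omega = L / inertia_of_pose(pose)  # tighter tuck -> smaller I -> faster spin
        angle += omega * dt
        com[0] += v0[0] * dt               # horizontal velocity is fixed at take-off
        com[1] += vy * dt
        vy -= g * dt                       # only gravity acts during flight
        trajectory.append((tuple(com), angle))
    return trajectory
```

This captures why posture choice matters: with L fixed at take-off, the gymnast controls rotation speed only through the inertia of the adopted pose.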
Anticipation from example, pp. 81-84
  Victor Zordan; Adriano Macchietto; Jose Medin; Marc Soriano; Chun-Chih Wu; Ronald Metoyer; Robert Rose
Automatically generated anticipation is a largely overlooked component of response in character motion for computer animation. We present an approach for generating anticipation to unexpected interactions with examples taken from human motion capture data. Our system generates animation by quickly selecting an anticipatory action using a Support Vector Machine (SVM) which is trained offline to distinguish the characteristics of a given scenario according to a metric that assesses predicted damage and energy expenditure for the character. We show our results for a character that can anticipate by blocking or dodging a threat coming from a variety of locations and targeting any part of the body, from head to toe.
Keywords: behavior control, human animation, motion capture
A robust method for real-time thread simulation, pp. 85-88
  Blazej Kubiak; Nico Pietroni; Fabio Ganovelli; Marco Fratarcangeli
In this paper, we present a physically based model for real-time simulation of thread dynamics. Our model captures all the relevant aspects of the physics of the thread, including quasi-zero elasticity, bending, torsion and self-collision, and it provides output forces for the haptic feedback. The physical properties are modeled in terms of constraints that are iteratively satisfied while the numerical integration is carried out through a Verlet scheme. This approach leads to an unconditionally stable, controllable and computationally light simulation [Müller et al. 2007]. Our results demonstrate the effectiveness of our model, showing the interaction of the thread with other objects in real time and the creation of complex knots.
Keywords: physically based animation, surgical simulation
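The combination of Verlet integration with iteratively satisfied constraints, in the spirit of the position-based approach of [Müller et al. 2007] cited above, can be sketched for a 2D particle thread as follows. This is a minimal illustration, not the authors' implementation, which also handles bending, torsion, self-collision and haptic force output.

```python
import numpy as np

def step_thread(pos, prev, rest_len, dt, gravity=(0.0, -9.81), iters=20):
    """One step of a position-based thread: Verlet integration, then iterative
    projection of inter-particle distance constraints (near-inextensible)."""
    g = np.array(gravity)
    new = pos + (pos - prev) + g * dt * dt   # Verlet: velocity is implicit in pos - prev
    new[0] = pos[0]                          # pin the first particle
    for _ in range(iters):                   # Gauss-Seidel constraint projection
        for i in range(len(new) - 1):
            d = new[i + 1] - new[i]
            dist = np.linalg.norm(d)
            if dist < 1e-12:
                continue
            corr = 0.5 * (dist - rest_len) * d / dist
            if i == 0:
                new[i + 1] -= 2.0 * corr     # pinned end takes no correction
            else:
                new[i] += corr
                new[i + 1] -= corr
    return new, pos
```

Repeated projection keeps the segments at their rest length regardless of how the thread moves, which is how quasi-zero elasticity is obtained without stiff springs.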

Avatars, crowds & perceptions

Accurate on-line avatar control with collision anticipation, pp. 89-97
  Manuel Peinado; Daniel Meziat; Damien Maupu; Daniel Raunhardt; Daniel Thalmann; Ronan Boulic
Interactive control of a virtual character through full body movement has a wide range of applications. However, there is a need for systems that accurately reproduce the motion of a performer while accounting for surrounding obstacles. We propose an approach based on a Prioritized Inverse Kinematics constraint solver. Several markers are placed on the user's body, and a set of kinematic constraints makes the virtual character track these markers. At the same time, we monitor the instantaneous displacements of a set of geometric primitives, called observers, attached to different parts of the virtual character. When an observer enters the influence area of an obstacle, its motion is damped by means of automatically created preventive constraints. The IK solver satisfies both marker and preventive constraints simultaneously, yielding postures of the virtual character that remain close to those of the user while avoiding collisions with the virtual environment. Our performance measurements show the maturity of IK technology for real-time full-body interactions.
Keywords: character animation, collision avoidance, inverse kinematics, motion capture, virtual reality
Real-time navigation of independent agents using adaptive roadmaps, pp. 99-106
  Avneesh Sud; Russell Gayle; Erik Andersen; Stephen Guy; Ming Lin; Dinesh Manocha
We present a novel algorithm for navigating a large number of independent agents in complex and dynamic environments. We compute adaptive roadmaps to perform global path planning for all agents simultaneously, taking dynamic obstacles and inter-agent interaction forces into account to continuously update the roadmap using a physically-based agent dynamics simulator. We also introduce the notion of 'link bands' for resolving collisions among multiple agents, and present efficient techniques to compute the guiding path forces and perform lazy updates to the roadmap. In practice, our algorithm can perform real-time navigation of hundreds or thousands of human agents in indoor and outdoor scenes.
Interactive control of real-time crowd navigation in virtual environment, pp. 109-112
  Xiaogang Jin; Charlie C. L. Wang; Shengsheng Huang; Jiayi Xu
Interactive control is one of the key issues in simulating crowd navigation in virtual environments. In this paper, we propose a simple but practical method for authoring crowd scenes in an effective and intuitive way. A vector field based on Radial Basis Functions (RBF) is employed as the governing tool to drive the motion flow. With this basic mathematical tool, users can easily control the motion of a crowd by simply sketching velocities at a few points in the scene. Our approach is fast enough to allow on-the-fly modification of the vector field. In addition, the behavior of an individual in the crowd can be interactively adjusted by changing the ratio between its autonomous and governed movements.
Keywords: crowd simulation, navigation control, radial basis functions, vector field
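A minimal sketch of the RBF-interpolated vector field: velocities sketched at a few points are interpolated with Gaussian radial basis functions to yield a velocity everywhere in the scene. The Gaussian kernel choice and all names are assumptions, not the authors' exact formulation.

```python
import numpy as np

def rbf_field(points, velocities, sigma=1.0):
    """Build a velocity field from sketched (point, velocity) samples using
    Gaussian radial basis functions; returns a callable field."""
    P = np.asarray(points, float)
    V = np.asarray(velocities, float)

    def phi(r2):
        return np.exp(-r2 / (2.0 * sigma ** 2))

    # Solve for RBF weights so the field reproduces the sketched velocities.
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    W = np.linalg.solve(phi(d2), V)

    def field(x):
        x = np.asarray(x, float)
        r2 = ((P - x) ** 2).sum(-1)
        return phi(r2) @ W

    return field
```

Agents would then be advected by `x += field(x) * dt`, blended with their autonomous velocity according to the paper's governed/autonomous ratio; editing a sketched velocity only requires re-solving the small linear system, which is why on-the-fly modification stays cheap.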
CrowdViewer: from simple script to large-scale virtual crowds, pp. 113-116
  Tianlu Mao; Bo Shu; Wenbin Xu; Shihong Xia; Zhaoqi Wang
Visualization of large-scale virtual crowds is ubiquitous in computer graphics applications. For reasons of efficiency in modeling, animating and rendering, it is difficult to populate scenes with a large number of individually animated virtual characters in real-time applications. In this paper, we present an effective and readily usable solution to this problem. It accepts a simple script that includes the motion state and position of each individual at each time step. Supported by a material database and a motion database, various human models are generated from model templates and then driven by an agile on-line animation approach. A point-based rendering approach is presented to accelerate rendering. We tested our system with a script describing 30,000 people evacuating a sports arena. The results demonstrate that our approach provides a very effective way to visualize large-scale crowds with high visual realism in real time.
Keywords: animating, modeling, rendering, virtual crowds
Officer Garcia: a virtual human for mediating eyewitness identification, pp. 117-120
  Brent Daugherty; Sabarish Babu; Brian Cutler; Larry Hodges
An analysis of court cases has revealed that the mistaken identification of the wrong person by victims and witnesses of a crime is the single most common error leading to the arrest and conviction of innocent people [Wells et al. 2006]. Recognizing the role of mistaken identification in erroneous conviction, a growing number of states and police departments have reformed their eyewitness identification procedures. In this paper, we investigate a new procedural reform: the use of a virtual officer who does not know the identity of the suspect in the lineup and therefore cannot bias the witness toward false identification.
Keywords: Officer Garcia, embodied conversational agents, human-computer interaction, virtual humans, virtual officer
The benefits of immersion for spatial understanding of complex underground cave systems, pp. 121-124
  Philip Schuchardt; Doug A. Bowman
A common reason for using immersive virtual environments (IVEs) in visualization is the hypothesis that IVEs should provide a higher level of spatial understanding for complex 3D structures, such as those found in underground cave systems. Therefore, we aimed to explore the use of IVEs for visualization of underground caves, and to determine the benefits of immersion for viewing such models. We ran an experiment in which domain experts answered questions with two different levels of immersion. The results show that for certain tasks the more immersive system significantly improved accuracy, speed, and comprehension over the non-immersive environment, and that 3D visualization overall is a good match for the underground cave data.
Keywords: cave, immersion, spatial understanding, visualization

Rendering

Consistent interactive augmentation of live camera images with correct near-field illumination, pp. 125-132
  Thorsten Grosch; Tobias Eble; Stefan Mueller
Inserting virtual objects in real camera images with correct lighting is an active area of research. Current methods use a high dynamic range camera with a fish-eye lens to capture the incoming illumination. The main problem with this approach is the limitation to distant illumination. Therefore, the focus of our work is a real-time description of both near- and far-field illumination for interactive movement of virtual objects in the camera image of a real room. The daylight, which is coming in through the windows, produces a spatially varying distribution of indirect light in the room; therefore a near-field description of incoming light is necessary. Our approach is to measure the daylight from outside and to simulate the resulting indirect light in the room. To accomplish this, we develop a special dynamic form of the irradiance volume for real-time updates of indirect light in the room and combine this with importance sampling and shadow maps for light from outside. This separation allows object movements with interactive frame rates (10-17 fps). To verify the correctness of our approach, we compare images of synthetic objects with real objects.
Keywords: augmented image synthesis, global illumination
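An irradiance volume stores indirect light at the nodes of a regular grid and answers queries by trilinear interpolation. A minimal sketch of such a lookup follows; the layout and names are assumptions, not the paper's dynamic variant, which additionally updates the samples in real time as the daylight changes.

```python
import numpy as np

def sample_irradiance(grid, p):
    """Trilinear lookup in a regular irradiance volume: grid is an (nx, ny, nz)
    array of irradiance samples at unit spacing, p a point in grid coordinates."""
    p = np.asarray(p, float)
    i0 = np.clip(np.floor(p).astype(int), 0, np.array(grid.shape) - 2)
    f = p - i0                      # fractional position inside the cell
    x0, y0, z0 = i0
    c = 0.0
    for dx in (0, 1):               # blend the 8 surrounding grid samples
        for dy in (0, 1):
            for dz in (0, 1):
                w = (((1 - f[0]) if dx == 0 else f[0])
                     * ((1 - f[1]) if dy == 0 else f[1])
                     * ((1 - f[2]) if dz == 0 else f[2]))
                c += w * grid[x0 + dx, y0 + dy, z0 + dz]
    return c
```

A virtual object moving through the room would query this volume at its position each frame to pick up the spatially varying indirect light.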
Interactive rendering of optical effects in wet hair, pp. 133-140
  Rajeev Gupta; Nadia Magnenat-Thalmann
Visually, wet hair is easily distinguishable from dry hair because of its increased highlights and intense darkening. Capturing these characteristics under real-world conditions is therefore essential for realism. We propose a model for rendering wet hair at interactive rates. We start by analyzing the physics behind these effects and then present a model that incorporates the variations in the visual appearance of hair due to the presence of water. To simulate the increased specularity caused by the water layer on hair, we present a parameter-controlled Gaussian-based model. To simulate darkening, for outer hair we treat total internal reflection at the water-hair interface as dominant and propose a probabilistic approach to determine the amount of light absorbed; for inner hair, we consider that the increase in opacity due to water results in stronger self-shadowing and propose a model that updates the opacities based on water content and accumulates them to calculate the self-shadow term. By preprocessing and optimizing our algorithm both for self-shadowing in dry hair and for the water effects, we obtain visually pleasing results at interactive rates. Furthermore, the model is highly versatile and can easily be adapted to other liquids and hair-styling products.
Keywords: hair simulation, interactive rendering, self-shadow, wet hair rendering
Building high performance DVR via HLA, scene graph and parallel rendering, pp. 141-144
  Hua Xiong; Zonghui Wang; Xiaohong Jiang; Jiaoying Shi
Distributed simulation and parallel rendering based on PC clusters have seen great success in recent years. To improve overall performance, there is a trend toward integrating modeling, simulation and visualization into a common distributed environment. In this paper, we propose a unified framework for building high-performance distributed virtual reality (DVR) applications. The core components of this framework are the High Level Architecture (HLA), scene graphs and parallel rendering: HLA supports interactive distributed simulation, scene graphs efficiently organize and manipulate scene data, and parallel rendering provides powerful rendering capability. This paper presents an in-depth architectural analysis of each component and derives a design that integrates them into a unified framework. Two DVR applications, a remote navigation of massive virtual scenes and a multi-player video game, have been developed to evaluate the framework's performance.
Keywords: collaborative environments, distributed simulation, distributed virtual reality, graphics cluster, high level architecture, parallel rendering, scene graph
Real-time global illumination in the CAVE, pp. 145-148
  Jesper Mortensen; Pankaj Khanna; Insu Yu; Mel Slater
Global illumination in VR applications remains an elusive goal. While it potentially has a positive impact on presence, the significant real-time computation and integration complexities involved have been stumbling blocks. In this paper we present recent and ongoing work in the Virtual Light Field paradigm for global illumination as a solution to this problem. We discuss its suitability for real-time VR applications and detail recent work in integrating it with the XVR system for real-time GPU-based rendering in a CAVE™. This rendering method achieves real-time rendering of L(S|D)*E solutions in time independent of illumination complexity and largely independent of geometric complexity.
Keywords: global illumination, light fields, virtual reality
Incremental wavelet importance sampling for direct illumination, pp. 149-152
  Hao-da Huang; Yanyun Chen; Xing Tong; Wen-cheng Wang
Most existing importance sampling methods for direct illumination exploit the importance of the illumination and the surface BRDF. Without taking visibility into consideration, they cannot adaptively adjust the number of samples for each pixel during the sampling process; as a result, these methods tend to produce noisy images in partially occluded regions. In this paper, we introduce an incremental wavelet importance sampling approach in which visibility information is used to determine the number of samples at run time. For this purpose, we present a perceptually based variance that is computed from the visibility of the samples. In the sampling process, Halton sample points are incrementally warped for each pixel until the variance of the warped samples converges. We demonstrate that our method is more efficient than existing importance sampling approaches.
Keywords: importance sampling, rendering, wavelets
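The incremental stop-when-converged sampling loop can be sketched with a Halton sequence and Welford's running variance. The standard-error test below is a simple stand-in for the paper's perceptual, visibility-driven variance, and all names are assumptions.

```python
def halton(index, base):
    """Radical-inverse Halton sequence value for a 1-based index."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def estimate_until_converged(shade, max_samples=256, tol=1e-3, min_samples=8):
    """Incrementally draw 2D Halton samples and stop once the standard error
    of the running mean falls below tol."""
    mean, m2 = 0.0, 0.0
    for n in range(1, max_samples + 1):
        u, v = halton(n, 2), halton(n, 3)
        x = shade(u, v)
        delta = x - mean
        mean += delta / n              # Welford's running mean/variance update
        m2 += delta * (x - mean)
        if n >= min_samples:
            stderr = (m2 / (n - 1) / n) ** 0.5
            if stderr < tol:
                break                  # estimate has converged for this pixel
    return mean, n
```

Smooth, fully visible pixels stop early while partially occluded pixels keep drawing samples, which is the adaptivity the abstract describes; the paper additionally warps the Halton points by the wavelet importance of illumination and BRDF.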
The design and implementation of a VR-architecture for smooth motion, pp. 153-156
  F. A. Smit; R. van Liere; B. Fröhlich
We introduce an architecture for smooth motion in virtual environments. The system performs forward depth image warping to produce images at video refresh rates. In addition to color and depth, our 3D warping approach records per-pixel motion information during rendering of the three-dimensional scene. These enhanced depth images are used to perform per-pixel advection, which considers object motion and view changes. Our dual graphics card architecture is able to render the 3D scene at the highest possible frame rate on one graphics card, while doing the depth image warping on a second graphics engine at video refresh rate.
   This architecture allows us to compensate for visual artifacts, also called motion judder, arising when the rendering frame rate is lower than the video refresh rate. The evaluation of our method shows motion judder can be effectively removed.
Keywords: VR, dual-GPU, judder, motion estimation, smooth motion, video refresh rate, warping
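Per-pixel advection with recorded motion vectors can be illustrated by a toy forward warp: each pixel of the last rendered frame is splatted to its predicted position at an intermediate time t between rendered frames. Hole filling, depth-based occlusion resolution and view-change compensation, which a real warper needs, are omitted, and all names are assumptions.

```python
def forward_warp(pixels, width, height, t):
    """Toy forward warp. pixels: list of (x, y, color, (mx, my)) with per-pixel
    motion in pixels per frame; t in [0, 1] is the sub-frame time. Returns a
    height x width grid with each source pixel splatted to its advected spot."""
    out = [[None] * width for _ in range(height)]
    for x, y, color, (mx, my) in pixels:
        nx, ny = round(x + mx * t), round(y + my * t)  # advect along motion vector
        if 0 <= nx < width and 0 <= ny < height:
            out[ny][nx] = color
    return out
```

Running this at the video refresh rate with fractional t values between two rendered frames is what lets the architecture present smoothly moving objects even when the 3D renderer cannot keep up.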

3D interaction & multi-sensory rendering

Gesture-based interaction for a magic crystal ball, pp. 157-164
  Li-Wei Chan; Yi-Fan Chuang; Meng-Chieh Yu; Yi-Liu Chao; Ming-Sui Lee; Yi-Ping Hung; Jane Hsu
Crystal balls are popularly imagined as media for divination or fortune-telling, an impression drawn mainly from fantasy films and fiction in which an augur sees into the past, the present, or the future through a crystal ball. These strong associations make the crystal ball a natural interface for accessing and manipulating visual media in an intuitive, imaginative and playful manner. We developed an interactive visual display system named Magic Crystal Ball (MaC Ball). MaC Ball is a spherical display system that allows users to see a virtual object or scene appearing inside a transparent sphere and to manipulate the displayed content with bare-handed interactions, giving users the feeling of wielding magic. With MaC Ball, users can manipulate the display through touch and hover interactions: for instance, waving the hands above the ball causes clouds to blow up from the bottom of the ball, sliding fingers on the ball rotates the displayed object, and pressing with a single finger selects an object or presses a button. MaC Ball builds on the popular impressions of crystal balls, allowing users to act on visual media following their imagination. MaC Ball has high potential for advertising and demonstration in museums, product launches, and other venues.
Keywords: 3D interaction, entertainment, haptics
Using an event-based approach to improve the multimodal rendering of 6DOF virtual contact, pp. 165-173
  Jean Sreng; Florian Bergez; Jérémie Legarrec; Anatole Lécuyer; Claude Andriot
This paper describes a general event-based approach to improve multimodal rendering of 6DOF (six degree-of-freedom) contact between objects in interactive virtual object simulations. The contact events represent the different steps of two objects colliding with each other: (1) the state of free motion, (2) the impact event at the moment of collision, (3) the friction state during the contact, and (4) the detachment event at the end of the contact. These events are used to improve classical feedback by superimposing specific rendering techniques based on them. First, we propose a general method to generate these events based only on the objects' positions given by the simulation. Second, we describe a set of different types of multimodal feedback associated with the different events, implemented in a complex virtual simulation dedicated to virtual assembly. For instance, we propose a visual rendering of impact, friction and detachment based on particle effects. We used the impact event to improve the 6DOF haptic rendering by superimposing a high-frequency force pattern on the classical force feedback. We also implemented realistic audio rendering using impact and friction sounds triggered by the corresponding events. All these first implementations can easily be extended with other event-based effects in various rigid-body simulations thanks to our modular approach.
Keywords: 6DOF, audio, contact, event-based, haptic, multimodal, rendering, visual
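The four contact events above can be illustrated as a small state machine driven only by the separation distance between two objects — a hypothetical one-dimensional simplification of the paper's position-based event generation (the threshold and class names are illustrative, not from the paper):

```python
# Sketch of the four-state contact event model (free motion, impact,
# friction/contact, detachment), inferred from per-step object distances.

CONTACT_EPS = 1e-3  # distance below which objects are considered in contact

class ContactEventDetector:
    def __init__(self):
        self.in_contact = False

    def update(self, distance):
        """Return the contact event/state for this simulation step."""
        touching = distance < CONTACT_EPS
        if touching and not self.in_contact:
            self.in_contact = True
            return "impact"        # transient event at the moment of collision
        if touching:
            return "friction"      # persistent state while surfaces stay in contact
        if self.in_contact:
            self.in_contact = False
            return "detachment"    # transient event when contact ends
        return "free_motion"

detector = ContactEventDetector()
states = [detector.update(d) for d in [0.5, 0.1, 0.0005, 0.0002, 0.2, 0.6]]
# states == ["free_motion", "free_motion", "impact", "friction", "detachment", "free_motion"]
```

Each returned event would then trigger the corresponding visual, haptic or audio effect (particles, a high-frequency force pattern, an impact sound).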
A classification scheme for multi-sensory augmented reality BIBAKFull-Text 175-178
  Robert W. Lindeman; Haruo Noma
We present a new classification framework for describing augmented reality (AR) applications based on where the mixing of real and computer-generated stimuli takes place. In addition to "classical" visual AR techniques, such as optical-see-through and video-see-through AR, our framework encompasses AR directed at the other senses as well. This "axis of mixing location" is a continuum ranging from the physical environment to the human brain. There are advantages and disadvantages of mixing at different points along the continuum, and while there is no "best" location, we present sample usage scenarios that illustrate the expressiveness of this classification approach.
Keywords: audio, augmented reality, gustatory, haptics, olfactory, video
Real-time auditory-visual distance rendering for a virtual reaching task BIBAKFull-Text 179-182
  Luca Mion; Federico Avanzini; Bruno Mantel; Benoit Bardy; Thomas A. Stoffregen
This paper reports on a study of the perception and rendering of distance in multimodal virtual environments. A model for binaural sound synthesis is discussed, and its integration into a real-time system with motion tracking and visual rendering is presented. Results from a validation experiment show that the model effectively simulates relevant auditory cues for distance perception in dynamic conditions. The model is then used in a subsequent experiment on the perception of egocentric distance. The design and preliminary results from this experiment are discussed.
Keywords: 3-D sound, egocentric distance, multimodal interaction, virtual auditory space
Enhancing VR-based visualization with a 2D vibrotactile array BIBAKFull-Text 183-186
  Christoph W. Borst; Vijay B. Baiyya
We discuss methods to enable haptic visualization on vibrotactile arrays. Our work is motivated by the potential for a tactile array to provide an additional useful channel for information such as location cues related to dataset features or remote user behaviors. We present a framework for array rendering and several specific techniques. Novel aspects of our work include the example application of a palm-sized tactile array to visualize dataset features or remote user state in a VR system, a generalized haptic glyph mechanism for 2D tactile arrays, and the extension of graphical visualization techniques to haptics (glyphs, fisheye distortion, spatial anti-aliasing, gamma correction).
Keywords: haptic exploration, haptic glyphs, haptics, haptization, tactile map, vibrotactile array
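One graphical technique the abstract carries over to haptics is gamma correction. A minimal sketch of the idea, assuming perceived vibration intensity grows nonlinearly with drive amplitude (the gamma value and array shape here are assumptions, not from the paper):

```python
# Gamma correction for a 2D vibrotactile array: remap normalized data
# values through a gamma curve before driving the tactors, so perceived
# intensity differences better match differences in the data.

def gamma_correct(value, gamma=0.6):
    """Map a normalized data value in [0, 1] to an actuator drive level in [0, 1]."""
    if not 0.0 <= value <= 1.0:
        raise ValueError("value must be normalized to [0, 1]")
    return value ** gamma

# A 2x3 tactor array encoding a dataset feature toward the upper-right.
data = [[0.0, 0.25, 1.0],
        [0.0, 0.10, 0.5]]
drive = [[gamma_correct(v) for v in row] for row in data]
```

The same per-tactor remapping slot could host the other techniques mentioned (fisheye distortion, spatial anti-aliasing) as alternative transfer functions.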
Towards a system for reusable 3D interaction techniques BIBAKFull-Text 187-190
  Andrew Ray; Doug A. Bowman
Although 3D interaction techniques (3DITs) such as the Go-Go technique for object manipulation can be conceptually very simple, implementing these techniques can be a difficult task. Hidden complexities are often revealed at the low-level implementation stage. VR toolkits, which are commonly used to implement 3DITs, solve the problem of allowing applications to run in any hardware environment, but rarely provide support for the technique development process or technique reuse. Because VR toolkits are not interoperable, 3DIT developers cannot share their working techniques with others. In this paper, we describe IFFI, a toolkit that has been designed specifically to support 3DIT development and to allow for reuse of techniques in different VR toolkits. We are using IFFI to move towards the goal of implementing a library of reusable 3DITs to help increase their usage, increase consistency, and provide a foundation for future technique development.
Keywords: 3D interaction techniques, VR toolkits, virtual environments, virtual reality

Display & navigations

An immaterial depth-fused 3D display BIBAKFull-Text 191-198
  Cha Lee; Stephen DiVerdi; Tobias Höllerer
We present an immaterial display that uses a generalized form of depth-fused 3D (DFD) rendering to create unencumbered 3D visuals. To accomplish this result, we demonstrate a DFD display simulator that extends the established depth-fused 3D principle by using screens in arbitrary configurations and from arbitrary viewpoints. The performance of the generalized DFD effect is established with a user study using the simulator. Based on these results, we developed a prototype display using two immaterial screens to create an unencumbered 3D visual that users can penetrate, enabling the potential for direct walk-through and reach-through manipulation of the 3D scene.
Keywords: 3D displays, depth-fused 3D, immaterial displays, user study
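The classical two-screen depth-fused 3D principle the paper generalizes can be sketched as a luminance split: a point drawn on both screens is perceived at an intermediate depth proportional to how its brightness is divided. This linear weighting is the textbook formulation; the paper's arbitrary-configuration, arbitrary-viewpoint version is more involved:

```python
# Minimal sketch of two-screen depth-fused 3D (DFD) luminance weighting.

def dfd_split(luminance, depth, near, far):
    """Split luminance between the front (near) and rear (far) screen planes
    so the fused point is perceived at `depth` between them."""
    w = (depth - near) / (far - near)   # 0 at the front screen, 1 at the rear
    w = min(max(w, 0.0), 1.0)           # clamp points outside the screen gap
    return luminance * (1.0 - w), luminance * w   # (front, rear)

front, rear = dfd_split(1.0, depth=2.5, near=2.0, far=4.0)
# front == 0.75, rear == 0.25: the point appears a quarter of the way back
```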
Active guideline: spatiotemporal history as a motion technique and navigation aid for virtual environments BIBAKFull-Text 199-202
  Andreas Simon; Christian Stern
Users of virtual environments regularly have problems using 3D motion interfaces and exhibit a disturbing tendency to become disoriented and get lost quickly in large virtual worlds. We introduce the active guideline, a novel auxiliary motion technique and navigation aid for virtual environments. Similar to the ubiquitous Back-button interface for web navigation, the active guideline implicitly records a history of the user's motion and allows immediate and convenient travel back (and forth) along a trace of the previously traveled path. This lets users revisit previous viewpoints, allowing them to recover from being "lost in space" by enabling easy, continuous backtracking all the way up to the "home" position. In contrast to bookmarking of viewpoints, the active guideline is "always-on" and requires no active user intervention or strategic planning to successfully function as a navigation aid. This paper discusses the behavior and implementation of the active guideline and presents results of an initial study exploring the usability of the technique.
Keywords: backtracking, navigation, virtual environments
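The back-and-forth traversal of an implicitly recorded motion trace can be sketched as follows (the class and method names are illustrative assumptions; the paper's sampling policy for recording viewpoints is not reproduced here):

```python
# Sketch of the "active guideline": implicitly log viewpoints, then travel
# back (and forth) along the recorded trace, all the way to "home".

class ActiveGuideline:
    def __init__(self, home):
        self.trace = [home]   # recorded viewpoints; trace[0] is the "home" position
        self.cursor = 0       # current position along the trace

    def record(self, viewpoint):
        """Implicitly log the user's motion (no user intervention needed)."""
        # New motion after backtracking branches off the old tail of the trace.
        del self.trace[self.cursor + 1:]
        self.trace.append(viewpoint)
        self.cursor = len(self.trace) - 1

    def back(self):
        self.cursor = max(self.cursor - 1, 0)
        return self.trace[self.cursor]

    def forward(self):
        self.cursor = min(self.cursor + 1, len(self.trace) - 1)
        return self.trace[self.cursor]

g = ActiveGuideline(home=(0, 0))
g.record((1, 0))
g.record((2, 1))
assert g.back() == (1, 0)      # one step back along the traveled path
assert g.back() == (0, 0)      # continuous backtracking up to "home"
assert g.forward() == (1, 0)   # and forth again
```

Unlike viewpoint bookmarking, nothing here requires strategic planning by the user: `record` runs continuously, like a browser history behind a Back button.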
Depth-of-field blur effects for first-person navigation in virtual environments BIBAKFull-Text 203-206
  Sébastien Hillaire; Anatole Lécuyer; Rémi Cozot; Géry Casiez
This paper studies the use of visual blur effects, i.e., blurring of parts of the image fed back to the user, for first-person navigation in virtual environments (VEs). First, we introduce a model of dynamic visual blur for VEs which is based on two types of blur effect: (1) a depth-of-field blur (DoF blur), which simulates the blurring of objects located in front of or behind the focus point of the eyes, and (2) a peripheral blur, which simulates the blurring of objects located at the periphery of the field of vision. We introduce two new techniques to improve real-time DoF: (1) a paradigm to automatically compute the focal distance, and (2) a temporal filtering that simulates the accommodation phenomenon. Second, we describe the results of a pilot experiment conducted to study the influence of blur effects on the performance and preference of video gamers during multiplayer sessions. Interestingly, visual blur effects did not seem to degrade gamers' performance, and nearly half of the participants preferred and selected them to improve fun and game-play. Taken together, our results suggest that visual blur effects could be suitable for videogames and other virtual environments.
Keywords: accommodation, depth-of-field blur, first-person-navigation, focalization, peripheral blur, videogames, visual blur
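The second technique above, temporal filtering of the focal distance to mimic the eye's accommodation, can be sketched with an exponential smoothing filter. The filter form and time constant are assumptions for illustration; the paper's exact filter may differ:

```python
# Smoothly shift the rendered focal distance toward a new focus target,
# so depth-of-field refocusing is gradual, like ocular accommodation.

import math

def filter_focal_distance(current, target, dt, time_constant=0.3):
    """Advance the focal distance one frame toward the target."""
    alpha = 1.0 - math.exp(-dt / time_constant)   # frame-rate-independent blend
    return current + alpha * (target - current)

# Simulate a sudden refocus from a near object (1 m) to a far one (10 m).
focus = 1.0
for _ in range(60):                               # one second at 60 fps
    focus = filter_focal_distance(focus, 10.0, dt=1.0 / 60.0)
# after one second, focus has converged close to (but not exactly at) 10 m
```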
Tour generation for exploration of 3D virtual environments BIBAKFull-Text 207-210
  Niklas Elmqvist; M. Eduard Tudoreanu; Philippas Tsigas
Navigation in complex and large-scale 3D virtual environments has been shown to be a difficult task, imposing a high cognitive load on the user. In this paper, we present a comprehensive method for assisting users in exploring and understanding such 3D worlds. The method consists of two distinct phases: an off-line computation step deriving a grand tour using the world geometry and any semantic target information as input, and an on-line interactive navigation step providing guided exploration and improved spatial perception for the user. The former phase is based on a voxelized version of the geometrical dataset that is used to compute a connectivity graph for use in a TSP-like formulation of the problem. The latter phase takes the output tour from the off-line step as input for guiding 3D navigation through the environment.
Keywords: navigation aids, navigation assistance, tour generation
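The off-line phase above ends in a TSP-like tour over target viewpoints. As a hedged sketch, the following uses straight-line distance (standing in for path lengths from the voxel connectivity graph) and a simple nearest-neighbor heuristic, which is an assumption; the paper's exact TSP formulation may differ:

```python
# Greedy nearest-neighbor tour over target viewpoints: repeatedly visit
# the closest not-yet-visited target, producing a "grand tour" ordering.

import math

def nearest_neighbor_tour(waypoints, start=0):
    """Order waypoint indices into a tour starting at `start`."""
    unvisited = set(range(len(waypoints))) - {start}
    tour = [start]
    while unvisited:
        here = waypoints[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, waypoints[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

points = [(0, 0), (5, 5), (1, 0), (6, 5), (0, 2)]
tour = nearest_neighbor_tour(points)
# tour == [0, 2, 4, 1, 3]: nearby targets are visited before the far cluster
```

The on-line phase would then interpolate the camera along this ordering while leaving the user free to deviate.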
Dynamic landmark placement as a navigation aid in virtual worlds BIBAKFull-Text 211-214
  Daniel Cliburn; Tess Winlock; Stacy Rilea; Matt Van Donsel
In this paper, we explore the use of dynamically placed landmarks as navigation aids when users search a virtual world for target objects. Subjects were asked to search a virtual world four times for six red spheres. Eighty-six subjects participated in one of four conditions: no landmarks, statically placed landmarks, landmarks dynamically placed into the world at the subject's discretion that disappeared from trial to trial, and landmarks dynamically placed into the world at the subject's discretion that remained from trial to trial. An analysis of the experimental results revealed that dynamic landmarks which disappeared between trials had little impact on a subject's performance. However, when landmarks remained in the world from one trial to the next, subjects covered significantly less distance than those in the no landmark condition, and obtained similar performance to those in the static landmark condition. Results indicate that dynamically placed landmarks, which remain between visits, can serve as effective navigation aids in virtual worlds lacking obvious physical landmarks.
Keywords: landmarks, navigation, virtual environment

Posters

AR façade: an augmented reality interactive drama BIBAFull-Text 215-216
  Steven Dow; Manish Mehta; Blair MacIntyre; Michael Mateas
Our demonstration presents AR Façade, a physically embodied version of the interactive drama Façade, at the Beall Center in Irvine, CA. In this drama, players are situated in a married couple's apartment, and interact primarily through conversation with the characters and manipulation of objects in the space. Our demonstration will include two versions of the experience -- an immersive augmented reality (AR) version and a desktop computing-based implementation, where players communicate using typed keyboard input. Our recent cross-media study revealed empirical differences between the versions [Dow et al. 2007]. Through interviews and observations of players, we found that immersive AR can create an increased sense of presence, confirming generally held expectations. However, we learned that increased presence does not necessarily lead to more engagement. Rather, mediation may be necessary for some players to fully engage with certain immersive media experiences.
Bimanual task division preferences for volume selection BIBAFull-Text 217-218
  Amy Ulinski; Zachary Wartell; Larry F. Hodges
Using both hands for 3D interaction allows users to transfer ingrained interaction skills, significantly increase performance on certain tasks, and reduce training [Bowman et al. 2005]. Guiard's framework of bimanual manipulation [1997] states that different classes of bimanual actions exist. The bimanual asymmetric classification consists of both hands performing different actions, coordinated to accomplish the same task. The bimanual symmetric classification involves each hand performing identical actions, either synchronously or asynchronously. Latulipe et al. [2006] compared a symmetric dual-mouse technique for manipulating spline curves to two asymmetric dual-mouse techniques and a standard single-mouse technique. The symmetric technique performed best and was most preferred by participants.
Design flexibility in seamless coded pattern for localization BIBAFull-Text 219-220
  Atsushi Hiyama; Shigeru Saito; Tomohiro Tanikawa; Michitaka Hirose
Recent developments in mobile and ubiquitous computing technology, together with broadband wireless networks, have made location-based applications for mobile and wearable computers a reality. With the widespread adoption of cellular phones with GPS receivers, it has become easy to provide location-based applications outdoors. In contrast, several positioning technologies exist for indoor scenes, such as those using wireless sensors or image processing. However, these technologies are still not utilized in daily environments due to their cost inefficiency and the complexity of system setup. Considering the problems of previous positioning systems, a system must provide the following four features to be usable in daily scenes.
  • The system is reasonable in cost and easy to set up.
  • The visual features of the system must not ruin the design of daily environments.
  • The system is capable of locating all the users inside the covered area.
  • The system is able to extend to wide areas such as public spaces in buildings.
Development of MR application families: an InTml-based approach BIBAKFull-Text 221-222
  Pablo Figueroa; José Ferreira; Camilo Castro
We show an approach for the development of families of MR applications. A family of MR applications is a set of applications that share some common tasks but differ mainly in their user interfaces. The development of a family of applications allows reusability of design and code, adaptability to new hardware and context, and distributed and heterogeneous deployments. Some examples are presented and future trends are discussed.
Keywords: InTml, MR development, product lines
Evaluating a haptic-based virtual environment for venepuncture training BIBAKFull-Text 223-224
  Shamus P. Smith; Susan Todd
Simulated medical environments allow clinical staff to practice medical procedures for longer than traditional training methods without endangering patients. Haptic devices are a key technology in such systems but it is unclear how their usability can be evaluated. A user study has been performed with a commercial haptic-based medical virtual environment using data logging, user questionnaires and a modified think-aloud verbal protocol to support usability analysis. However, the evaluator does not share the haptic feedback with the participant and it was found that this can be problematic in shaping questions during an evaluation session and interpreting the collected data.
Keywords: haptic evaluation, medical training, usability, venepuncture, virtual environments
Eye tracking and gaze vector calculation within immersive virtual environments BIBAKFull-Text 225-226
  Adrian Haffegee; Vassil Alexandrov; Russell Barrow
Human vision is arguably our most powerful sense, with our eyes constantly darting around in an almost subconscious manner to create a complete picture of the visual scene around us. These movements can unveil information about the way the brain is processing the incoming visual data into its mental image of our surroundings.
   In this paper we discuss a method of obtaining and preprocessing the eye movements of a user immersed within a controllable synthetic environment. We investigate how their gaze patterns can be captured and used to identify viewed virtual objects, and how this can be used as a natural method of interacting with the scene.
Keywords: eye tracking, gaze interaction, immersive VR
Chloe@University: an indoor, mobile mixed reality guidance system BIBAKFull-Text 227-228
  Achille Peternier; Xavier Righetti; Mathieu Hopmann; Daniel Thalmann; Matteo Repettoy; George Papagiannakis; Pierre Davy; Mingyu Lim; Nadia Magnenat-Thalmann; Paolo Barsocchi; Tasos Fragopoulos; Dimitrios Serpanos; Yiannis Gialelis; Anna Kirykou
With the advent of ubiquitous and pervasive computing environments, one promising application is guidance systems. In this paper, we propose Chloe@University, a mobile mixed reality guidance system for indoor environments. A mobile computing device (Sony's Ultra Mobile PC) is hidden inside a jacket, and a user selects a destination inside a building through voice commands. A 3D virtual assistant then appears in the see-through HMD and guides him/her to the destination; the user simply follows the virtual guide. Chloe@University also suggests the most suitable virtual character (e.g. human guide, dog, cat, etc.) based on user preferences and profiles. Depending on user profiles, different security levels and authorizations for content are previewed. For indoor location tracking, WiFi, RFID, and sensor-based methods are integrated in the system for maximum flexibility. Moreover, smart and transparent wireless connectivity provides the user terminal with fast and seamless transitions among Access Points (APs). Different AR navigation approaches have been studied: [Olwal 2006], [Elmqvist et al.] and [Newman et al.] work indoors, while [Bell et al. 2002] and [Reitmayr and Drummond 2006] are employed outdoors. Accurate tracking and registration is still an open issue; recently it has been tackled not by any single method but through the aggregation of tracking and localization methods, mostly based on handheld AR. A truly wearable, HMD-based mobile AR navigation aid for both indoors and outdoors with rich 3D content remains an open issue and a very active field of multi-disciplinary research.
Keywords: localization, mixed reality, real-time systems, sensor networks, virtual human
Hybrid traveling in fully-immersive large-scale geographic environments BIBAKFull-Text 229-230
  Frank Steinicke; Gerd Bruder; Klaus Hinrichs
In this paper we present hybrid traveling concepts that enable users to navigate immersively through 3D geospatial environments displayed by arbitrary applications such as Google Earth or Microsoft Virtual Earth. We propose a framework which allows the integration of virtual reality (VR) based interaction devices and concepts into applications that do not support VR technologies natively.
   In our proposed setup the content displayed by a geospatial application is visualized stereoscopically on a head-mounted display (HMD) for immersive exploration. The user's body is tracked in order to support natural traveling through the VE via a walking metaphor. Since the VE usually exceeds the dimensions of the area in which the user can be tracked, we propose different strategies to map the user's movement into the virtual world intuitively. Moreover, commonly available devices and interaction techniques are presented for two-handed interaction to enrich the navigation process.
Keywords: hybrid traveling, navigation, virtual reality
Interactive smart character in a shooting game BIBAFull-Text 231-232
  Chun-Chieh Chen; Tsai-Yen Li
Interactivity is a critical issue in designing a good game or similar virtual environment. As computer hardware continuously improves, more computing power can be invested in interactivity in addition to graphics rendering. For games with virtual characters, how a character interacts with a user is usually specified at design time according to a given scene. Consequently, the characters in a game usually can only display canned motions at designated locations of a scene. After several runs of practice, the user may easily get bored because these actions become predictable. Therefore, it is highly desirable to have a smarter character that can plan its motions according to inputs from the user as well as other constraints from the environment.
Mixed reality for enhancing business communications using virtual worlds BIBAKFull-Text 233-234
  Muthukkumar S. Kadavasal; Krishna K. Dhara; Xiaotao Wu; Venkatesh Krishnaswamy
Online virtual worlds are attracting businesses that intend to offer new enterprise class services. Often, these services in virtual worlds are closely linked with the real enterprise resources. For a successful deployment of these services, a mixed reality model with communications that extend the virtual worlds to enterprise resources is required. In this paper, we take a look at this new class of collaborative applications by using a customer service application as an example. We discuss various issues in offering such a service, lay out the requirements, propose an architecture for mixed reality communications, and present our prototype implementation. We believe that this example service can introduce a mixed reality based communication paradigm that is applicable to a wide range of other business or enterprise services.
Keywords: collaboration, communication, mixed reality, virtual reality
Motion picture production facility with liquid cooled 512 processor mobile super computing vehicle and virtual reality environment BIBAFull-Text 235-236
  Mark J. Prusten
The motion picture production pipeline has evolved to the point where supercomputers are needed at the filming location. This allows content from digital cinema cameras recorded on external RAID hard drives, shooting 4K-8K content, to be processed immediately. The Mobile Super Computing Vehicle (MSCV) will provide a mobile nerve center for a 6-8 person production crew. This announcement describes a Linux-based, liquid-cooled, 512-processor, 30-teraflop supercomputer with 5 racks that will be integrated into a tractor trailer vehicle. The electrical, thermal, and mechanical engineering design issues of this undertaking will be analyzed and presented. The thermal issues are resolved by a closed chilled-water system of unique design that carries away up to 95 percent of the heat generated by the system. A complete Linux production pipeline tool set for digital intermediate processing and visual effects will be presented.
Multi-agent systems applied to virtual environments: a case study BIBAFull-Text 237-238
  A. Barella; C. Carrascosa; V. Botti; M. Martí
Computer game development covers many different areas, such as graphics, artificial intelligence (AI) and network communications. Nowadays, players demand more sophisticated and credible computer-controlled opponents, but results are sometimes unsatisfactory. This is due to the real-time processing constraints that must be met to reach an acceptable feeling of immersion for the user. The development of systems, tools and development environments can play an important role in the progress of this field. The combination of AI techniques and virtual reality (or virtual environments) has given birth to the field of intelligent virtual environments (IVEs). An IVE is a virtual environment simulating a physical (or real) world, inhabited by autonomous intelligent entities [Luck and Aylett 2000]. These entities have to interact in and with the virtual environment as if they were real entities in the real world. There are other typical problems of an IVE that we have to solve, as explained in [Bierbaum et al. 2001]: independence from the underlying technologies, user interactions (not only displaying images, but using specific devices such as trackers and gloves), and synchronization of both data and image (even in the case of multiple displays).
Shakespearean karaoke BIBAFull-Text 239-240
  Lauren Cairco; Sabarish Babu; Amy Ulinski; Catherine Zanbaka; Larry F. Hodges
Traditionally, students study plays by reading from a book. However, reading dialogue on paper does not always communicate the various emotions and actions that help people understand the significance of the person-to-person interactions that are represented.
Stride scheduling for time-critical collision detection BIBAKFull-Text 241-242
  Daniel S. Coming; Oliver G. Staadt
We present an event-based scheduling method for time-critical collision detection that meets time constraints by balancing and prioritizing computation spent on intersection tests without starvation. Our approach tests each potentially colliding pair of objects at a different frequency, with unbounded temporal resolution. We preserve believability by adaptively prioritizing intersection tests to reduce errors in collision detection, using information about the objects and scene. By combining kinetic sweep and prune with stride scheduling we interleave rendering, broad phase collision pruning, narrow phase intersection testing, and collision response. This approach accrues no per-frame overhead and allows interruption at any point in collision detection, including the broad phase.
Keywords: collision detection, dynamic scenes, many-body collision detection, time-critical computing
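Stride scheduling itself can be sketched in a few lines: each collision pair receives tickets proportional to its priority, and pairs with more tickets are tested proportionally more often, without starving low-priority pairs. The names and ticket assignment below are illustrative assumptions; the paper additionally combines this with kinetic sweep and prune and adaptive priorities:

```python
# Stride scheduling over collision pairs: each pair's stride is inversely
# proportional to its tickets, and the pair with the smallest "pass" value
# is tested next, giving proportional-share, starvation-free scheduling.

import heapq

def schedule_tests(pairs_with_tickets, n_slots):
    """Return the order in which pairs are tested over n_slots time slots."""
    BIG = 10_000
    # Heap entries: (next_pass, name, stride); fewer tickets -> larger stride.
    queue = [(BIG // t, name, BIG // t) for name, t in pairs_with_tickets]
    heapq.heapify(queue)
    order = []
    for _ in range(n_slots):
        next_pass, name, stride = heapq.heappop(queue)
        order.append(name)
        heapq.heappush(queue, (next_pass + stride, name, stride))
    return order

# Pair A is high priority (close, fast objects), B medium, C low.
order = schedule_tests([("A", 4), ("B", 2), ("C", 1)], n_slots=7)
# order == ["A", "A", "B", "A", "A", "B", "C"]: 4:2:1, matching the tickets
```

Because the next pair is always just a heap pop away, the loop can be interrupted after any slot once the frame's time budget runs out, which matches the interruptibility property claimed above.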
Usability of multiple degree-of-freedom input devices and virtual reality displays for interactive visual data analysis BIBAKFull-Text 243-244
  Elke Moritz; Hans Hagen; Thomas Wischgoll; Joerg Meyer
Interactive virtual reality applications commonly require two key technologies: multiple degree-of-freedom input devices, and 2D or 3D displays. The industry has developed a vast variety of devices for a growing consumer market. Consumer magazines regularly publish test reports for new devices. These reports are often focused on the gaming community, which is typically the driving force behind new product development. Although many lessons can be learned from the gaming industry, the scientific community is generally focused on other criteria, such as precision, degrees of freedom, and user tracking. It is expected that some of these criteria, which are currently in the state of research, will eventually be incorporated into products for a mass market, just like consumer graphics cards and certain input devices did in the past.
   This study is an attempt to provide an overview of existing 2D and 3D input device and display technologies for interactive scientific visualization applications. Different types of input devices and displays were tested in combination with each other. The article explains why certain combinations of input devices and displays work together better than others.
Keywords: displays, input devices, interactive rendering, navigation, virtual reality
View-dependent mesh streaming using multi-chart geometry images BIBAKFull-Text 245-246
  Bin Sheng; Enhua Wu
Many mesh streaming algorithms have focused on the transmission order of the polygon data with respect to the current viewpoint. In contrast to conventional progressive streaming, where the resolution of a model changes in the geometry space, we present a new approach which first partitions a mesh into several patches, then converts these patches into multi-chart geometry images (MCGIM). After the MCGIM and normal map atlas are obtained by regular re-sampling, we construct a regular quadtree-based hierarchical representation of the MCGIM. Experimental results have shown the effectiveness of our approach, in which one server streams the MCGIM texture atlas to the clients.
Keywords: clustering, geometry image, mesh streaming, multiresolution rendering
Virtual vision: visual sensor networks in virtual reality BIBAKFull-Text 247-248
  Faisal Z. Qureshi; Demetri Terzopoulos
The virtual vision paradigm features a unique synergy of computer graphics, artificial life, and computer vision technologies. Virtual vision prescribes visually and behaviorally realistic virtual environments as a simulation tool in support of research on large-scale visual sensor networks. Virtual vision has facilitated our research into developing multi-camera control and scheduling algorithms for next-generation smart video surveillance systems.
Keywords: reality emulator, smart cameras, virtual vision
VR-based visual analytics of LIDAR data for cliff erosion assessment BIBAFull-Text 249-250
  Tung-Ju Hsieh; Michael J. Olsen; Elizabeth Johnstone; Adam P. Young; Neal Driscoll; Scott A. Ashford; Falko Kuester
The ability to explore, conceptualize and correlate spatial and temporal changes in topographical records is needed for the development of new analytical models that capture the mechanisms contributing to sea cliff erosion. This paper presents a VR-centric approach for cliff erosion assessment from light detection and ranging (LIDAR) data, including visualization techniques for the delineation, segmentation, and classification of features, change detection and annotation. Research findings are described in the context of a sea cliff failure observed in Solana Beach in San Diego County.