
Proceedings of the 2004 ACM Symposium on Virtual Reality Software and Technology

Fullname: VRST'04 ACM Symposium on Virtual Reality Software and Technology
Editors: Rynson W. H. Lau; George Baciu
Location: Hong Kong
Dates: 2004-Nov-10 to 2004-Nov-12
Standard No: ISBN 1-58113-907-1; Order Number: 069040
  1. Keynote speaker
  2. Session 1A: object interactions and collisions
  3. Session 1B: animation and simulations
  4. Session 1C: human interactions and perceptions
  5. Keynote speaker
  6. Session 2A: applications and systems
  7. Session 2B: real-time interactions and rendering (short papers)
  8. Session 2C: techniques and applications (short papers)
  9. Keynote speaker
  10. Session 3A: devices and haptics

Keynote speaker

Recent advances in visibility determination for large and complex virtual environments, p. 1
  Daniel Cohen-Or
Rendering large and complex virtual environments in real time remains a challenge as the complexity of models keeps growing. Visibility techniques such as occlusion culling can effectively reduce the rendering depth complexity.
   In my talk I'll survey recent advances in visibility determination. I'll focus on conservative and aggressive techniques and show how these techniques can be combined effectively with levels of detail and image-based techniques. I'll show how these new techniques can be accelerated by modern graphics cards to achieve interactive rendering of extremely complex scenes.

Session 1A: object interactions and collisions

Fast and reliable collision culling using graphics hardware, pp. 2-9
  Naga K. Govindaraju; Ming C. Lin; Dinesh Manocha
We present a reliable culling algorithm that enables fast and accurate collision detection between triangulated models in a complex environment. Our algorithm performs fast visibility queries on the GPUs for eliminating a subset of primitives that are not in close proximity. To overcome the accuracy problems caused by the limited viewport resolution, we compute the Minkowski sum of each primitive with a sphere and perform reliable 2.5D overlap tests between the primitives. We are able to achieve more effective collision culling as compared to prior object-space culling algorithms. We integrate our culling algorithm with CULLIDE [8] and use it to perform reliable GPU-based collision queries at interactive rates on all types of models, including non-manifold geometry, deformable models, and breaking objects.
Keywords: collision detection, deformable models, graphics hardware, minkowski sums
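The reliability idea in the abstract above, inflating each primitive by a sphere before a 2.5D (screen-plane plus depth-interval) overlap test, can be sketched in a few lines. This is an illustrative CPU-side approximation using bounding boxes, not the authors' GPU implementation, and the function names are hypothetical:

```python
def aabb(tri):
    """Axis-aligned bounding box of a triangle: (min, max) corner tuples."""
    xs, ys, zs = zip(*tri)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def inflate(box, r):
    """Bounding box of the Minkowski sum of a box with a sphere of radius r."""
    (x0, y0, z0), (x1, y1, z1) = box
    return (x0 - r, y0 - r, z0 - r), (x1 + r, y1 + r, z1 + r)

def overlap_25d(tri_a, tri_b, r):
    """Conservative 2.5D overlap: screen-plane (x, y) overlap plus depth (z)
    interval overlap, after inflating each primitive's bounds by r."""
    a0, a1 = inflate(aabb(tri_a), r)
    b0, b1 = inflate(aabb(tri_b), r)
    return all(a0[i] <= b1[i] and b0[i] <= a1[i] for i in range(3))
```

With r chosen from the viewport's worst-case rasterization error, a negative result is reliable: primitives reported disjoint really are disjoint, so only overlapping pairs survive culling.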
Interactive collision detection for complex and deformable models using programmable graphics hardware, pp. 10-15
  Wei Chen; Huagen Wan; Hongxin Zhang; Hujun Bao; Qunsheng Peng
In this paper we present an interactive collision detection algorithm for complex and deformable objects. For two target models, our approach rapidly calculates their region of interest (ROI), which is the overlap of their axis-aligned bounding boxes (AABBs), on the CPU. The surfaces of both models inside the ROI are then voxelized using a novel GPU-based real-time voxelization method. The resultant volumes are represented by two 2D textures in video memory. The collision query is efficiently accomplished by comparing these 2D textures on the GPU. The algorithm robustly handles arbitrary shapes, whether the geometric models are convex or concave, closed or open, rigid or deformable. Our preliminary implementation achieves interactive frame rates for complex models with up to one million triangles on commodity desktop PCs.
Keywords: collision detection, deformation, graphics hardware, voxelization
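The pipeline described above (AABB overlap to find the ROI, voxelization of both surfaces inside it, then a volume comparison) can be sketched on the CPU with point-sampled surfaces. The real method rasterizes into 2D textures on the GPU; all names here are hypothetical:

```python
def aabb(points):
    """Axis-aligned bounding box of a point-sampled surface."""
    return [min(c) for c in zip(*points)], [max(c) for c in zip(*points)]

def roi(box_a, box_b):
    """Region of interest: intersection of two AABBs, or None if disjoint."""
    lo = [max(a, b) for a, b in zip(box_a[0], box_b[0])]
    hi = [min(a, b) for a, b in zip(box_a[1], box_b[1])]
    return (lo, hi) if all(l <= h for l, h in zip(lo, hi)) else None

def voxelize(points, region, res):
    """Set of voxel indices covered by surface samples inside the ROI."""
    lo, hi = region
    size = [max(h - l, 1e-9) for l, h in zip(lo, hi)]
    cells = set()
    for p in points:
        if all(l <= x <= h for x, l, h in zip(p, lo, hi)):
            cells.add(tuple(min(int((x - l) / s * res), res - 1)
                            for x, l, s in zip(p, lo, size)))
    return cells

def collide(pts_a, pts_b, res=16):
    """Collision query: do the two surfaces occupy a common voxel in the ROI?"""
    region = roi(aabb(pts_a), aabb(pts_b))
    return region is not None and \
        bool(voxelize(pts_a, region, res) & voxelize(pts_b, region, res))
```

Restricting voxelization to the ROI is what keeps the volume resolution high where it matters, regardless of overall model size.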
The Grappl 3D interaction technique library, pp. 16-23
  Mark Green; Joe Lo
One of the obstacles to the widespread use of interactive 3D applications is the lack of good tools for developing them. The development of these tools has been complicated by the wide range of hardware configurations used in 3D applications. Also, there is a lack of common software platforms for developing the tools required for 3D user interfaces. As a result, many groups develop their own set of interaction techniques without taking advantage of the work of others, wasting a considerable amount of development time. The Grappl project aims to solve these problems by providing software tools that adapt to the hardware configuration and automatically design most of the user interface. One of the main components of this project is an interaction technique library that supports a wide range of input and output devices. This library provides an open platform for the development of 3D interaction techniques that encourages further development in this area. Interaction techniques developed using this toolkit can be used in our user interface design system, so application developers can easily take advantage of new interaction techniques. The design and implementation of this library is described in this paper.
Keywords: 3D user interfaces, interaction techniques
Multi-layered deformable surfaces for virtual clothing, pp. 24-31
  Wingo Sai-Keung Wong; George Baciu; Jinlian Hu
We propose a positional constraint method to solve the multi-layered deformable surface problem based on a master-slave scheme. This allows two or more deformable surfaces to be attached together in any orientation relative to each other for the purpose of modeling cloth attachments and multi-layered clothing. The method does not require the mesh resolution of the deformable surfaces to be the same or the matching of anchor points between layers. After the attachment process, the surfaces are treated as a multi-layered surface. However, this surface contains non-manifold features. We introduce a technique for preventing self-intersection of the non-manifold features. We demonstrate the stability of this method by performing several experiments with high surface complexity and a large number of colliding feature pairs. Interactive rates can easily be achieved for multi-layered surfaces with an appropriate discretization level of triangles.
Keywords: collision detection, deformable surfaces, master-slave, multi-layer, non-manifold geometry, virtual clothing
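The master-slave positional constraint can be illustrated as follows: at attachment time a slave vertex stores barycentric coordinates on a master triangle, and after each simulation step it is snapped back to that location on the deformed master. This is a minimal sketch under simplifying assumptions (the normal offset and differing mesh resolutions are omitted), with hypothetical names:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def barycentric(p, tri):
    """Barycentric coordinates of p with respect to triangle (a, b, c),
    computed from dot products; p is assumed on or near the triangle plane."""
    a, b, c = tri
    v0, v1, v2 = sub(b, a), sub(c, a), sub(p, a)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return (1 - v - w, v, w)  # binding weights, stored once at attachment

def enforce(weights, master_tri):
    """Positional constraint: the slave vertex follows the deformed master
    triangle at its stored barycentric location."""
    return tuple(sum(wt * vert[i] for wt, vert in zip(weights, master_tri))
                 for i in range(3))
```

Because the binding is barycentric, the layers need not share mesh resolution or anchor points, matching the property claimed in the abstract.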
Animating reactive motions for biped locomotion, pp. 32-40
  Taku Komura; Howard Leung; James Kuffner
In this paper, we propose a new method for simulating reactive motions for running or walking human figures. The goal is to generate realistic animations of how humans compensate for large external forces and maintain balance while running or walking. We simulate the reactive motions of adjusting the body configuration and altering footfall locations in response to sudden external disturbance forces on the body. With our proposed method, the user first imports captured motion data of a run or walk cycle to use as the primary motion. While executing the primary motion, an external force is applied to the body. The system automatically calculates a reactive motion for the center of mass and angular momentum around the center of mass using an enhanced version of the linear inverted pendulum model. Finally, the trajectories of the generalized coordinates that realize the precalculated trajectories of the center of mass, zero moment point, and angular momentum are obtained using constrained inverse kinematics. The advantage of our method is that it is possible to calculate reactive motions for bipeds that preserve dynamic balance during locomotion, which was difficult using previous techniques. We demonstrate our results on an application that allows a user to interactively apply external perturbations to a running or walking virtual human model. We expect this technique to be useful for human animations in interactive 3D systems such as games, virtual reality, and potentially even the control of actual biped robots.
Keywords: interactive 3D graphics, inverse kinematics, motion control
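The linear inverted pendulum model underlying the method above relates the center of mass x to the support (zero moment) point p by xdd = (g / z_c)(x - p), with z_c the constant pendulum height. A minimal 1D Euler-integration sketch (not the paper's enhanced version, which also tracks angular momentum):

```python
G = 9.81  # gravitational acceleration (m/s^2)

def lipm_step(x, v, p, z_c, dt):
    """One Euler step of the linear inverted pendulum model:
    xdd = (g / z_c) * (x - p), x the CoM position, p the support point."""
    a = (G / z_c) * (x - p)
    return x + v * dt, v + a * dt

def simulate(x, v, p, z_c=1.0, dt=0.01, steps=100):
    """Integrate the CoM trajectory for a fixed support point p."""
    for _ in range(steps):
        x, v = lipm_step(x, v, p, z_c, dt)
    return x, v
```

The instability visible here (the CoM accelerates away from the support point) is exactly why the method must alter footfall locations after a disturbance.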

Session 1B: animation and simulations

Animating complex hairstyles in real-time, pp. 41-48
  Pascal Volino; Nadia Magnenat-Thalmann
True real-time animation of complex hairstyles on animated characters is the goal of this work, and the challenge is to build a mechanical model of the hairstyle which is sufficiently fast for real-time performance while preserving the particular behavior of the hair medium and maintaining sufficient versatility for simulating any kind of complex hairstyles.
   Rather than building a complex mechanical model directly related to the structure of the hair strands, we take advantage of a volume free-form deformation scheme. We detail the construction of an efficient lattice mechanical deformation model which represents the volume behavior of the hair strands. The lattice is deformed as a particle system using state-of-the-art numerical methods, and animates the hairs using quadratic B-Spline interpolation. The hairstyle reacts to the body skin through collisions with a metaball-based approximation. The model is highly scalable and allows hairstyles of any complexity to be simulated in any rendering context with the appropriate tradeoff between accuracy and computation speed, fitting the needs of Level-of-Detail optimization schemes.
Keywords: hair modeling, mechanical simulation, real-time animation, virtual characters
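Quadratic B-spline interpolation of lattice values, as used above to animate the hairs from the deformed lattice, can be sketched in 1D; the 3D case takes tensor products of the same basis per axis. Names are hypothetical:

```python
def bspline2_basis(t):
    """Uniform quadratic B-spline basis weights for local parameter t in [0, 1];
    the three weights sum to one (partition of unity)."""
    return (0.5 * (1.0 - t) ** 2,
            0.5 + t * (1.0 - t),
            0.5 * t ** 2)

def deform_1d(x, lattice, spacing=1.0):
    """Interpolate a deformed position from a 1D lattice of control values.
    A hair vertex at parameter x blends the three nearest lattice nodes."""
    cell = int(x / spacing)
    t = x / spacing - cell
    return sum(w * lattice[cell + i]
               for i, w in enumerate(bspline2_basis(t)))
```

Partition of unity guarantees that a rigidly translated lattice translates the hairs without distortion, which keeps the interpolation stable under large lattice motion.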
A lightweight algorithm for real-time motion synthesis, pp. 49-56
  Katsuaki Kawachi; Takeo Kanade; Hiromasa Suzuki
This paper presents an algorithm for interactive character animation with kinematic constraints under limited computational time. In order to reduce the necessary computation, the animation is not created by a procedural algorithm but synthesized by deforming and concatenating short motion examples, each consisting of a sequence of postures. A keyframe placed between two consecutive motion examples is deformed using inverse kinematics so that it satisfies the given constraints. The motion examples between the keyframes are deformed to ensure continuity of postures in position and velocity. We parameterize each posture as a set of particles in an orthogonal coordinate system. The inverse kinematics method with the particle representation realizes fast and stable deformation of keyframe postures, and the deformations of motion examples are calculated on a frame-by-frame basis by decomposing a whole-body deformation into per-particle deformations. We present some examples of character animations synthesized at interactive rates by this algorithm.
Keywords: character animation, inverse kinematics
Marker-free kinematic skeleton estimation from sequences of volume data, pp. 57-64
  Christian Theobalt; Edilson de Aguiar; Marcus A. Magnor; Holger Theisel; Hans-Peter Seidel
For realistic animation of an artificial character a body model that represents the character's kinematic structure is required. Hierarchical skeleton models are widely used which represent bodies as chains of bones with interconnecting joints. In video motion capture, animation parameters are derived from the performance of a subject in the real world. For this acquisition procedure too, a kinematic body model is required. Typically, the generation of such a model for tracking and animation is, at best, a semi-automatic process. We present a novel approach that estimates a hierarchical skeleton model of an arbitrary moving subject from sequences of voxel data that were reconstructed from multi-view video footage. Our method does not require a-priori information about the body structure. We demonstrate its performance using synthetic and real data.
Keywords: kinematic skeleton, learning, model reconstruction, motion capture, tracking, volume processing
Scalable pedestrian simulation for virtual cities, pp. 65-72
  Soteris Stylianou; Marios M. Fyrillas; Yiorgos Chrysanthou
Most common approaches to pedestrian simulation used in the graphics/VR community are bottom-up: the avatars are individually simulated in the space and the overall behavior emerges from their interactions. This can lead to interesting results, but it does not scale and cannot be applied to populating a whole city. In this paper we present a novel method that can scale to a scene of almost any size. We use a top-down approach where the movement of the pedestrians is computed at a higher level, taking a global view of the model, allowing the flux and densities to be maintained at very little cost at the city level. This information is used to stochastically guide a more detailed and realistic low-level simulation when the user zooms in to a specific region, thus maintaining consistency.
   At the heart of the system is an iterative method that models the flow of avatars as a random walk. People are moved around a graph of nodes until the model reaches a steady state which provides feedback for the avatar low level navigation at run time. The Negative Binomial distribution function is used to model the number of people leaving each node while the selected direction is based on the popularity of the nodes through their preference-factor. The preference-factor is a function of a number of parameters including the visibility of a node, the events taking place in it and so on.
   An important feature of the low-level dynamics is that a user can interactively specify a number of intuitive variables that can predictably modify the collective behavior of the avatars in a region; the density, the flux and the number of people can be selectively modified.
Keywords: animation, avatars, pedestrian simulation
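One iteration of the node-graph random walk described above might look like the following sketch. Note that a plain per-person Bernoulli draw (i.e., a binomial count) stands in for the paper's Negative Binomial, since the Python standard library has no negative-binomial sampler, and all names are hypothetical:

```python
import random

def step(populations, neighbors, preference, leave_prob=0.3):
    """One iteration of the crowd random walk: a sampled number of people
    leaves each node and is distributed among neighboring nodes with
    probability proportional to each neighbor's preference-factor."""
    moved = {n: 0 for n in populations}
    for node, count in populations.items():
        # Sampled head-count leaving this node (binomial stand-in).
        leaving = sum(random.random() < leave_prob for _ in range(count))
        targets = neighbors[node]
        weights = [preference[t] for t in targets]
        for _ in range(leaving):
            dest = random.choices(targets, weights=weights)[0]
            moved[dest] += 1
        moved[node] += count - leaving
    return moved
```

Iterating until the populations stop changing yields the steady-state densities that guide the detailed low-level simulation at run time.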

Session 1C: human interactions and perceptions

Observing effects of attention on presence with fMRI, pp. 73-80
  Sungkil Lee; Gerard J. Kim; Janghan Lee
Presence is one of the goals of many virtual reality systems. Historically, in the context of virtual reality, the concept of presence has been associated largely with spatial perception (a bottom-up process), as its informal definition of "feeling of being there" suggests. However, recent studies in presence have challenged this view and attempted to widen the concept to include psychological immersion, thus linking higher-level elements (processed in a top-down fashion) to presence, such as story and plot, flow, attention and focus, identification with the characters, emotion, etc. In this paper, we experimentally studied the relationship between two content elements, each representing one axis of the presence dichotomy: perceptual cues for spatial presence and sustained attention for (psychological) immersion. Our belief was that spatial perception or presence and a top-down processed concept such as voluntary attention have only a very weak relationship; thus our experimental hypothesis was that sustained attention would positively affect spatial presence in a virtual environment with impoverished perceptual cues, but have no effect in an environment rich in them. In order to confirm the existence of sustained attention in the experiment, fMRI scans of the subjects were taken and analyzed as well. The experimental results showed that attention had no effect on spatial presence, even in the environment with impoverished spatial cues.
Keywords: attention, fMRI, presence, virtual reality
Supporting social human communication between distributed walk-in displays, pp. 81-88
  David Roberts; Robin Wolff; Oliver Otto; Dieter Kranzlmueller; Christoph Anthes; Anthony Steed
Future teleconferencing may enhance communication between remote people by supporting non-verbal communication within an unconstrained space where people can move around and share the manipulation of artefacts. By linking walk-in displays with a Collaborative Virtual Environment (CVE) platform we are able to physically situate a distributed team in a spatially organised social and information context. We have found this to demonstrate unprecedented naturalness in the use of space and body during non-verbal communication and interaction with objects.
   However, relatively little is known about how people interact through this technology, especially while sharing the manipulation of objects. We observed people engaged in such a task while geographically separated across national boundaries. Our analysis is organised into collaborative scenarios, each of which requires a distinct balance of social human communication with consistent shared manipulation of objects.
   Observational results suggest that walk-in displays do not suffer from some of the important drawbacks of other displays. Previous trials have shown that supporting natural non-verbal communication, along with responsive and consistent shared object manipulation, is hard to achieve. To better understand this problem, we take a close look at how the scenario impacts on the characteristics of event traffic. We conclude by suggesting how various strategies might reduce the consistency problem for particular scenarios.
Keywords: CVE, consistency control, event traffic, human interaction
Using a vibro-tactile display for enhanced collision perception and presence, pp. 89-96
  Jonghyun Ryu; Gerard Jounghyun Kim
One of the goals and means of realizing virtual reality is multimodal interfaces, leveraging the many sensory organs that humans possess. Among them, the tactile sense is important and useful for close-range interaction and manipulation tasks. In this paper, we explore this possibility using a vibro-tactile device on the whole body for simulating collision between the user and the virtual environment. We first experimentally verify the enhancement of user-felt presence from localized vibration feedback alone on collision, and further investigate how to effectively provide the sense of collision using the vibro-tactile display in different ways. In particular, we test the effects of using a vibration feedback model (for simulating collision with different object materials), saltation, and simultaneous use of 3D sound on spatial presence and perceptual realism. The results show that employing the proposed vibro-tactile interface did enhance the sense of presence, especially when combined with 3D sound. Furthermore, the use of saltation also helped the user detect and localize the point of contact more correctly. The use of the vibration feedback model was not found to be significantly effective, and sometimes even hindered the correct sense of collision, primarily due to the limitations of the vibro-tactile display device.
Keywords: multimodality, presence, sensory saltation, tactile interface, vibration feedback model, vibrator, virtual environments
FreeWalk/Q: social interaction platform in virtual space, pp. 97-104
  Hideyuki Nakanishi; Toru Ishida
We have integrated technologies related to virtual social interaction, e.g. virtual environments, visual simulations, and lifelike characters. In our previous efforts to integrate them, the asymmetry between agents and avatars made the systems too complex to be used widely. Another crucial problem we faced is that it took a long time to construct agents that play various roles, since each role needs its own specific behavioral repertory. To eliminate these problems, we developed a general-use platform, FreeWalk/Q, in which agents and avatars can share the same interaction model and scenario. We created a control mechanism to reduce the behavioral differences between agents and avatars, and a description method to design the external role rather than the internal mechanism. In the development, we found it necessary to prepare several topologies of the control mechanism and several granularity levels of the description method.
Keywords: agent, avatar, interaction platform, scenario description, social interaction, virtual city, virtual community, virtual space, virtual training

Keynote speaker

Turning VR inside out: thoughts about where we are heading, p. 105
  Steven Feiner
Our field and the world have changed greatly in the ten years since the first VRST was held in Singapore in 1994. Computers have grown smaller, faster, and cheaper, while polygon counts, frame rates, and display resolutions have increased impressively, true to the promise of Moore's Law. But, what comes next?
   This talk will sketch some of the directions in which I feel virtual reality is (or should be) heading. I will discuss the potential for taking virtual reality outside, through wearable and mobile computing; for bringing the outside in, by capturing the real world; and for accommodating large numbers of displays, users, and tasks, by embedding them in a fluid and collaborative augmented environment.

Session 2A: applications and systems

Scanning and rendering scene tunnels for virtual city traversing, pp. 106-113
  Jiang Yu Zheng; Yu Zhou; Min Shi
This paper proposes a visual representation named the scene tunnel to capture and visualize urban scenes for Internet-based virtual city traversing. We scan cityscapes by using multiple cameras on a vehicle that moves along a street, and generate a real scene archive more complete than a route panorama. The scene tunnel can cover tall architecture and various object aspects, and its data size is much smaller than video. It is suitable for image transmission and rendering over the Internet. The scene tunnel has a uniform resolution along the camera path and can provide continuous views for visual navigation in a virtual or real city. This paper explores the image acquisition methods, from slit calibration and view scanning to image integration. A plane of scanning is determined for flexible camera setting and image integration. The paper further addresses city visualization on the Internet, including view transformation, data streaming, and interactive functions. The high-resolution scenes are mapped onto a wide window dynamically. The compact and continuous scene tunnel facilitates data streaming and allows virtual traversing to be extended to a large area.
Keywords: internet media, navigation, route panorama, scene representation, scene tunnel, visualization
Modeling and rendering of walkthrough environments with panoramic images, pp. 114-121
  Angus M. K. Siu; Ada S. K. Wan; Rynson W. H. Lau
An important, potential application of image-based techniques is to create photo-realistic image-based environments for interactive walkthrough. However, existing image-based studies are based on different assumptions with different focuses. There is a lack of a general framework or architecture for evaluation and development of a practical image-based system. In this paper, we propose an architecture to unify different image-based methods. Based on the architecture, we propose an image-based system to support interactive walkthrough of scalable environments. In particular, we introduce the concept of angular range, which is useful for designing a scalable configuration, recovering geometric proxy as well as rendering. We also propose a new method to recover geometry information even from outdoor scenes and a new rendering method to address the problem of abrupt visual changes in a scalable environment.
Keywords: 3D reconstruction, geometric proxy, image-based methods, image-based modeling, image-based rendering
Design and evaluation of a wind display for virtual reality, pp. 122-128
  Taeyong Moon; Gerard J. Kim
One of the goals in the design of virtual environments (VE) is to give the user the feeling of existence within the VE, known as presence. Employing multimodality is one way to increase presence, and as such, numerous multimodal input and output devices have been used in the context of virtual reality (VR). However, the simulation and investigation of the effects of wind (or air flow) has not received much treatment in the VR research community. In this paper, we introduce a wind display system, called the "WindCube," designed for virtual reality applications. The WindCube consists of a number of small fans attached to a cubical structure in which a VR system user interacts with the VE. We first discuss the design parameters of the proposed display device, such as the type of fan used and the appropriate number, locations, and directions of the fans, in relation to providing the minimum level of wind effects and enhanced presence. In order to simulate the effects of wind, a wind field is first specified within the virtual environment. We describe how the specified wind field is rendered to the user through the proposed device. Finally, we investigate the effect of the proposed wind display on user-felt presence through an experiment. It is our belief that the wind display is a very important and cost-effective modality to consider and employ, because it involves "air," a medium that makes the VE feel more "livable," in contrast to the many VEs that feel like a vacuum.
Keywords: air flow, interface, presence, simulation, virtual environments, wind
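Rendering a wind field through a set of fixed fans can be sketched as follows, assuming each fan blows along a unit direction and its drive intensity is the positive projection of the local wind vector onto that direction. This is an illustrative actuation model, not necessarily the paper's:

```python
def fan_intensities(wind, fans):
    """Per-fan drive intensity: positive projection of the wind vector at the
    user's position onto each fan's unit blowing direction. Fans that would
    have to blow against the wind direction are simply turned off."""
    return [max(0.0, sum(w * d for w, d in zip(wind, direction)))
            for direction in fans]
```

Sampling the wind field at the tracked user position each frame and feeding the result to the fan controllers would close the rendering loop.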
GameOD: an internet based game-on-demand framework, pp. 129-136
  Frederick W. B. Li; Rynson W. H. Lau; Danny Kilis
Multiplayer online 3D games have become very popular in recent years. However, existing games require the complete game content to be installed prior to game playing. Since the content is usually large in size, it may be difficult to run these games on a PDA or other handheld devices. It also pushes game companies to distribute their games as CD-ROMs/DVD-ROMs rather than by online downloading. On the other hand, due to network latency, players may perceive discrepant status of some dynamic game objects. In this paper, we present a game-on-demand (GameOD) framework to distribute game content progressively in an on-demand manner. It allows critical content to be available at the players' machines in a timely fashion. We present a simple distributed synchronization method to allow concurrent players to synchronize their perceived game status. Finally, we show some performance results of the proposed framework.
Keywords: distributed synchronization, distributed virtual environments, multiplayer online games, on-demand replication
A CAVE system for interactive modeling of global illumination in car interior, pp. 137-145
  Kirill Dmitriev; Thomas Annen; Grzegorz Krawczyk; Karol Myszkowski; Hans-Peter Seidel
Global illumination dramatically improves the realistic appearance of rendered scenes, but it is usually neglected in VR systems due to its high cost. In this work we present an efficient global illumination solution specifically tailored for those CAVE applications which require an immediate response to dynamic light changes and allow for free motion of the observer, but involve scenes with static geometry. As an application example we choose car interior modeling under free driving conditions. We illuminate the car using dynamically changing High Dynamic Range (HDR) environment maps and use the Precomputed Radiance Transfer (PRT) method for the global illumination computation. We leverage the PRT method to handle scenes with non-trivial topology represented by complex meshes. Also, we propose a hybrid of PRT and a final gathering approach for high-quality rendering of objects with a complex Bi-directional Reflectance Distribution Function (BRDF). We use this method for predictive rendering of the navigation LCD panel based on its measured BRDF. Since the global illumination computation leads to HDR images, we propose a tone mapping algorithm tailored specifically for the CAVE. We employ head tracking to identify the observed screen region and derive for it proper luminance adaptation conditions, which are then used for tone mapping on all walls in the CAVE. We distribute our global illumination and tone mapping computation over all CPUs and GPUs available in the CAVE, which enables us to achieve interactive performance even for the costly final gathering approach.
Keywords: BRDF, CAVE, LCD panel, virtual reality
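What makes Precomputed Radiance Transfer respond immediately to dynamic light changes is that relighting reduces to an inner product: each vertex stores a precomputed transfer vector (geometry, visibility, and BRDF baked in), and shading under a new environment map only requires its spherical-harmonic coefficients. A minimal sketch, with hypothetical names:

```python
def prt_shade(transfer, env_coeffs):
    """PRT relighting at one vertex: outgoing radiance is the inner product
    of the precomputed transfer vector with the spherical harmonic (SH)
    coefficients of the current HDR environment map."""
    return sum(t * e for t, e in zip(transfer, env_coeffs))

def relight(mesh_transfers, env_coeffs):
    """Relight a whole mesh under a new environment: one dot product per
    vertex, cheap enough to run every frame as the environment changes."""
    return [prt_shade(t, env_coeffs) for t in mesh_transfers]
```

Swapping in a new set of environment coefficients each frame is all the dynamic-lighting path needs; the expensive transport precomputation happens once, offline.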

Session 2B: real-time interactions and rendering (short papers)

Towards full-body haptic feedback: the design and deployment of a spatialized vibrotactile feedback system, pp. 146-149
  Robert W. Lindeman; Robert Page; Yasuyuki Yanagida; John L. Sibert
This paper presents work we have done on the design and implementation of an untethered system to deliver haptic cues for use in immersive virtual environments through a body-worn garment. Our system can control a large number of body-worn vibration units, each with individually controllable vibration intensity. Several design iterations have helped us to refine the system and improve such aspects as robustness, ease of donning and doffing, weight, power consumption, cable management, and support for many different types of feedback units, such as pager motors, solenoids, and muffin fans. In addition, experience integrating the system into an advanced virtual reality system has helped define some of the design constraints for creating wearable solutions, and to further refine our implementation.
Keywords: CQB, full-body, haptic feedback, virtual reality
An efficient representation of complex materials for real-time rendering, pp. 150-153
  Wan-Chun Ma; Sung-Hsiang Chao; Bing-Yu Chen; Chun-Fa Chang; Ming Ouhyoung; Tomoyuki Nishita
In this paper, we propose an appearance representation for general complex materials which can be applied in a real-time rendering framework. By combining a single parametric shading function (such as the Phong model) and the proposed spatially-varying residual function (SRF), this representation can recover the appearance of complex materials with little loss of visual fidelity. The difference between the real data and the parametric shading is directly fitted by a specific function for easy reconstruction. It is simple, flexible, and easy to implement on programmable graphics hardware. Experiments show that the mean square error (MSE) between the reconstructed appearance and real photographs is less than 5%.
Keywords: bi-directional texture function, parametric shading function, reflectance
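The decomposition described above can be sketched as a Phong base term plus a looked-up residual. In practice the residual would come from a fitted, spatially-varying table indexed by surface position and direction; it is abstracted here as a single number, and the function names are hypothetical:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return tuple(x / n for x in v)

def phong(n, l, v, kd, ks, shininess):
    """Parametric base term: diffuse plus specular Phong lobes."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    diff = max(0.0, dot(n, l))
    r = tuple(2 * diff * ni - li for ni, li in zip(n, l))  # reflection of l
    spec = max(0.0, dot(r, v)) ** shininess
    return kd * diff + ks * spec

def srf_shade(n, l, v, kd, ks, shininess, residual):
    """Reconstructed appearance = fitted Phong term + spatially-varying
    residual (SRF) looked up for this surface point and direction pair."""
    return phong(n, l, v, kd, ks, shininess) + residual
```

Because the Phong term absorbs the bulk of the signal, the residual stays small and compresses well, which is what makes the representation cheap to evaluate on graphics hardware.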
Huge texture mapping for real-time visualization of large-scale terrain, pp. 154-157
  Wei Hua; Huaisheng Zhang; Yanqing Lu; Hujun Bao; Qunsheng Peng
Texture mapping greatly influences the performance of visualization in many 3D applications. Sometimes the texture data is so large that it has to be stored in slower external storage, rather than in fast texture memory or host memory. In these circumstances, texture mapping becomes the performance bottleneck. In this paper, we present a compact multiresolution model, the Texture Mipmap Quadtree (TMQ), to represent large-scale textures. It facilitates fast loading and pre-filtering of textures from slower external storage. Integrating a continuous LOD model of terrain geometry, we present a criterion to select proper textures from the TMQ according to viewing parameters during the rendering stage. By exploiting temporal coherence, a dynamic texture management scheme is devised based on a two-level cache hierarchy to further increase the performance of texture mapping.
Keywords: mipmap, out-of-core, real-time rendering, terrain, texture compression, texture mapping
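View-dependent selection from a mipmap quadtree commonly follows the rule that texture resolution can halve each time the viewing distance doubles, keeping the texel footprint near one screen pixel. A sketch of such a criterion (an assumed simplification of the paper's view-parameter test) is:

```python
import math

def select_level(distance, base_distance, max_level):
    """View-dependent quadtree level selection: level 0 is the finest tile
    set, used up to base_distance; each doubling of distance coarsens the
    selection by one level, clamped at the root (max_level)."""
    if distance <= base_distance:
        return 0
    return min(max_level, int(math.log2(distance / base_distance)))
```

Only the tiles at the selected level that intersect the view frustum need to be resident, which is what bounds the working set pulled from external storage.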
A voxel based multiresolution technique for soft tissue deformation, pp. 158-161
  Lenka Jerábková; Torsten Kuhlen; Timm P. Wolter; Norbert Pallua
Real time tissue deformation is an important aspect of interactive virtual reality (VR) environments such as medical trainers. Most approaches in deformable modelling use a fixed space discretization. A surgical trainer requires high plausibility of the deformations especially in the area close to the instrument. As the area of intervention is not known a priori, adaptive techniques have to be applied.
   We present an approach for real-time deformation of soft tissue based on a regular FEM mesh of cube elements, as opposed to the mesh of tetrahedral elements used by the majority of soft tissue simulators. A regular mesh structure simplifies the local refinement operation, as the elements' topology and stiffness are known implicitly. We propose an octree-based adaptive multiresolution extension of our basic approach.
   The volumetric representation of the deformed object is created automatically from medical images or by voxelization of a surface model. The resolution of the volumetric representation is independent of the surface geometry resolution. The surface is deformed according to the simulation performed on the underlying volumetric mesh.
Keywords: finite elements method, multiresolution, soft tissue deformation, virtual reality
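The octree-based refinement the abstract describes can be sketched as recursive subdivision of the regular cube mesh around the instrument. The criterion below (split every cell containing the tool position, down to a minimum size) is a simplified assumption; the paper's refinement rule is more involved.

```python
def refine_octree(cell, tool_pos, min_size):
    """Recursively split cube cells containing the tool position down
    to min_size. A cell is ((x, y, z), size); returns the leaf cells.
    This is a sketch of adaptive refinement near the instrument, not
    the paper's exact criterion."""
    (x, y, z), size = cell
    inside = all(c <= p < c + size for c, p in zip((x, y, z), tool_pos))
    if not inside or size <= min_size:
        return [cell]  # keep coarse cells away from the instrument
    half = size / 2.0
    children = [((x + dx * half, y + dy * half, z + dz * half), half)
                for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
    return [leaf for child in children
            for leaf in refine_octree(child, tool_pos, min_size)]
```

Because the mesh is regular, each split produces eight identical cube elements whose stiffness is known implicitly, which is the simplification the abstract highlights over tetrahedral remeshing.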
A framework for 3D visualisation and manipulation in an immersive space using an untethered bimanual gestural interface BIBAKFull-Text 162-165
  Yves Boussemart; François Rioux; Frank Rudzicz; Michael Wozniewski; Jeremy R. Cooperstock
Immersive environments offer users the experience of being submerged in a virtual space, effectively transcending the boundary between the real and virtual worlds. We present a framework for visualization and manipulation of 3D virtual environments in which users need not resort to the awkward command vocabulary of traditional keyboard-and-mouse interaction. We have adapted the transparent toolglass paradigm as a gestural interface widget for a spatially immersive environment. To serve that purpose, we have implemented a bimanual gesture interpreter to recognize and translate a user's actions into commands for control of these widgets. In order to satisfy a primary design goal of keeping the user completely untethered, we use purely video-based tracking techniques.
Keywords: bimanual interaction, gesture recognition, immersive environment, scene modelling, telepresence, toolglass

Session 2C: techniques and applications (short papers)

The MORGAN framework: enabling dynamic multi-user AR and VR projects BIBAKFull-Text 166-169
  Jan Ohlenburg; Iris Herbst; Irma Lindt; Thorsten Fröhlich; Wolfgang Broll
The availability of a suitable framework is of vital importance for the development of Augmented Reality (AR) and Virtual Reality (VR) projects. While features such as scalability, platform independence, support for multiple users, distribution of components, and efficient, sophisticated rendering are the key requirements of current and future applications, existing frameworks often address these issues only partially. In our paper we present MORGAN -- an extensible component-based AR/VR framework enabling sophisticated dynamic multi-user AR and VR projects. Core components include the MORGAN API, providing developers access to various input devices, including common tracking devices, as well as a modular render engine concept, allowing us to provide native support for individual scene graph concepts. The MORGAN framework has already been successfully deployed in several national and international research and development projects.
Keywords: augmented reality, distributed system design, framework, render engine, tracking, virtual reality
Fast model tracking with multiple cameras for augmented reality BIBAKFull-Text 170-173
  Alberto Sanson; Umberto Castellani; Andrea Fusiello
In this paper we present a technique for tracking complex models in video sequences from multiple cameras. Our method uses information derived from the image gradient, comparing it with the edges of the tracked object, whose 3D model is known. A score function is defined that depends on the amount of image gradient "seen" by the model edges. The sought pose parameters are obtained by maximizing this function using a non-deterministic algorithm, which proved to be optimal for this problem. Preliminary experiments with both synthetic and real sequences have shown small pose-estimation errors and good behavior in augmented reality applications.
Keywords: exterior orientation, pose estimation, registration
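The score function the abstract describes, the amount of image gradient "seen" by the projected model edges, can be sketched as a sum of gradient magnitudes sampled along those edges. The names, the brute-force maximization, and the pixel-dictionary representation below are illustrative assumptions; the paper uses a non-deterministic optimizer over continuous pose parameters.

```python
def edge_gradient_score(edge_pixels, grad_mag):
    """Score a candidate pose by the image-gradient magnitude sampled
    along the projected model edges (simplified stand-in for the
    paper's score function; projection and edge sampling assumed done)."""
    return sum(grad_mag.get(p, 0.0) for p in edge_pixels)

def best_pose(candidate_poses, project_edges, grad_mag):
    """Pick the pose whose projected edges best align with the image
    gradients. Exhaustive search here; the paper maximizes the score
    with a non-deterministic algorithm instead."""
    return max(candidate_poses,
               key=lambda pose: edge_gradient_score(project_edges(pose), grad_mag))
```

A pose whose projected silhouette lands on strong image edges scores high; a misaligned pose samples mostly flat image regions and scores low, which is what drives the optimization.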
Occlusion handling for medical augmented reality using a volumetric phantom model BIBAKFull-Text 174-177
  Jan Fischer; Dirk Bartz; Wolfgang Straßer
The support of surgical interventions has long been a focus of application-oriented augmented reality research. Modern methods of surgery, like minimally-invasive procedures, can benefit from the additional information visualization provided by augmented reality. The usability of medical augmented reality depends on a rendering scheme for virtual objects designed to generate easily and quickly understandable augmented views. One important factor in providing such an accessible reality augmentation is the correct handling of the occlusion of virtual objects by real scene elements. The usually large volumetric datasets used in medicine are ill-suited for use as phantom models for static occlusion handling. We present a simple and fast preprocessing pipeline for medical volume datasets which extracts their visual hull volume. The resulting, significantly simplified visual hull iso-surface is used for real-time static occlusion handling in our AR system, which is based on off-the-shelf medical equipment.
Keywords: augmented reality, occlusion handling, visual hull, volume data
NOYO: 6DOF elastic rate control for virtual environments BIBAKFull-Text 178-181
  Andreas Simon; Mario Doulis
It is an interesting challenge to design input devices that are easy to learn and use and that allow a wide range of differentiated input. We have developed a novel joystick-like handheld input device as a 6DOF elastic rate controller for travel and rate-controlled object manipulation in virtual environments. The NOYO combines a 6DOF elastic force sensor with a 3DOF source-less isotonic orientation tracker. This combination allows consistent mapping of input forces from local device coordinates to absolute world coordinates, effectively making the NOYO a "SpaceMouse to go". The device is designed to allow one-handed as well as two-handed operation, depending on the task and the skill level of the user. A quantitative usability study shows the handheld NOYO to be up to 21% faster, easier to learn, and significantly more efficient than the SpaceMouse desktop device.
Keywords: elastic force input, input device, travel, virtual environment
Tailor tools for interactive design of clothing in virtual environments BIBAKFull-Text 182-185
  Michael Keckeisen; Matthias Feurer; Markus Wacker
In this work, we present virtual tailor tools which allow the interactive design and modification of clothing in a 3D Virtual Environment. In particular, we propose algorithms and interaction techniques for sewing and cutting garments during a physical cloth simulation, including the automatic modification of the underlying planar cloth patterns.
Keywords: cloth modelling and simulation, interaction techniques, interactive design, virtual prototyping

Keynote speaker

Reality-augmented virtuality: modeling dynamic events from nature BIBAFull-Text 186
  Marcus Magnor
Virtual Reality thrives on interactivity, realism, and increasingly also animation. The virtual world is not a static place anymore: dynamic entities mimicking natural phenomena are finding their way into computer games and special effects. Typically, physics-based models or ad-hoc behavioral descriptions are drawn on to emulate water waves, flames, smoke, cloth motion, and so on. For interactive VR applications, unfortunately, simulating complex physical processes is often too time-consuming, while, on the other hand, simplified model descriptions yield unnatural, artificial animation results.
   Alternatively, natural events may be acquired from the "real thing". Given a handful of synchronized video recordings, this talk presents examples of how complex, time-varying natural phenomena may be modeled from reality and incorporated into time-critical 3D graphics applications. The reward is photo-realistic rendering results and truly authentic animations.

Session 3A: devices and haptics

Transpost: all-around display system for 3D solid image BIBAKFull-Text 187-194
  Rieko Otsuka; Takeshi Hoshino; Youichi Horry
A novel method for an all-around display system that shows three-dimensional stereo images without the need for special goggles has been developed. This system simply needs a directional-reflection screen, mirrors, and a standard projector. The basic concept behind this system is to make use of the phenomenon called "afterimage" that occurs when the screen is spinning. The key to this approach is to make a directional-reflection screen with a limited viewing angle and project images onto it. The projected image is made up of 24 images of an object, taken from 24 different angles. By reconstructing this image, a three-dimensional object can be displayed on the screen. The display system can present computer-graphics images, photographs, full-length movies, and so on.
   Our aim is to make a system for not only displaying images but also for interacting with them. Several display examples demonstrated that the system will be useful in applications such as guide displays in public places and facilities.
Keywords: all-around display, stereo vision, telepresence
Telerehabilitation: controlling haptic virtual environments through handheld interfaces BIBAKFull-Text 195-200
  Mario Gutiérrez; Patrick Lemoine; Daniel Thalmann; Frédéric Vexo
This paper presents a telerehabilitation system for kinesthetic therapy (treatment of patients with arm motion coordination disorders). Patients can receive therapy while being immersed in a virtual environment (VE) with haptic feedback. Our system is based on a Haptic Workstation that provides force-feedback on the upper limbs. One of our main contributions is the use of a handheld device as the main interface for the therapist. The handheld allows for monitoring, adapting and designing exercises in real-time (dynamic VE). Visual contact with the patient is kept by means of a webcam.
Keywords: handheld devices, haptic interfaces, kinesthetic therapy, telerehabilitation, virtual environments
Multi-resolution haptic interaction of hybrid virtual environments BIBAKFull-Text 201-208
  Hui Chen; Hanqiu Sun
Our sense of touch is spatially focused and has a far lower bandwidth than our visual sense, which has the largest bandwidth. Most haptic studies utilize point interactions, resulting in a conflict between this low information bandwidth and the relative complexity of the virtual scene. In this paper, we investigate a novel multi-resolution force evaluation scheme for hybrid virtual models in a unified haptic interaction framework. A force contact model of tool-object interaction based on Hertz's contact theory is integrated into our work. Physical properties of different object materials, expressed by Poisson's ratio and Young's modulus, are incorporated to investigate realistic perception during multi-resolution haptic interaction. Hierarchical impostor representations of surface and volumetric models are constructed, and a run-time cost-benefit evaluation is employed to achieve optimal performance while meeting both the visual and haptic perceptual qualities. During this multi-resolution haptic interaction, our scheme adaptively determines the rendering mode using the graphics and haptics impostors represented in the virtual environments. Our experimental results have demonstrated the satisfactory performance of the proposed multi-resolution haptic interaction framework for hybrid virtual models.
Keywords: haptics, multi-resolution representation, virtual environments
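The tool-object force model the abstract cites builds on Hertz's contact theory; for the standard case of a sphere pressed into an elastic half-space, the normal force is F = (4/3) E* sqrt(R) d^{3/2}, with the effective modulus E* combining both materials' Young's moduli and Poisson's ratios. The sketch below computes that textbook formula; how the paper maps it onto arbitrary tool-object contacts and the parameter names here are assumptions.

```python
def hertz_contact_force(depth, radius, E1, nu1, E2, nu2):
    """Hertzian normal force for a sphere of the given radius pressed
    into an elastic half-space by `depth`:
        F = (4/3) * E_eff * sqrt(radius) * depth**1.5
    with 1/E_eff = (1 - nu1**2)/E1 + (1 - nu2**2)/E2.
    Textbook sphere-on-flat case, not the paper's full contact model."""
    if depth <= 0.0:
        return 0.0  # no penetration, no contact force
    E_eff = 1.0 / ((1.0 - nu1 ** 2) / E1 + (1.0 - nu2 ** 2) / E2)
    return (4.0 / 3.0) * E_eff * radius ** 0.5 * depth ** 1.5
```

Because E_eff depends on both Poisson's ratio and Young's modulus, swapping the object material (say, rigid metal for soft rubber) changes the rendered stiffness, which is how material properties reach the user's hand in such a scheme.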
Electrostatic tactile display with thin film slider and its application to tactile tele-presentation systems BIBAKFull-Text 209-216
  Akio Yamamoto; Shuichi Nagasawa; Hiroaki Yamamoto; Toshiro Higuchi
A new electrostatic tactile display is proposed to realize compact tactile display devices that can be incorporated into virtual reality systems. The tactile display of this study consists of a thin conductive film slider with stator electrodes that excite electrostatic forces. Users of the device experience tactile texture sensations by moving the slider with their fingers. The display operates by applying two-phase cyclic voltage patterns to the electrodes. This paper reports on the application of the new tactile display in a tactile tele-presentation system. In the system, a PVDF tactile sensor and a DSP controller automatically generate voltage patterns to present surface texture sensations through the tactile display. The sensor, in synchronization with finger motion on the tactile display, scans a texture sample and outputs information about the sample surface. The information is processed by the DSP and fed back to the tactile display in real time. The tactile tele-presentation system was evaluated in texture discrimination tests and achieved a 79% correct-answer rate. A transparent electrostatic tactile display is also reported, in which the tactile display is combined with an LCD to realize a visual-tactile integrated display system.
Keywords: cutaneous sensation, tactile display, tactile sensing, tele-presentation, user interface, virtual reality