
Proceedings of the 2012 ACM Symposium on Virtual Reality Software and Technology

Fullname: Proceedings of the 18th ACM symposium on Virtual reality software and technology
Editors: Mark Green; Wolfgang Stuerzlinger; Marc Erich Latoschik; Bill Kapralos
Location: Toronto, Ontario, Canada
Dates: 2012-Dec-10 to 2012-Dec-12
Publisher: ACM
Standard No: ISBN: 978-1-4503-1469-5; ACM DL: Table of Contents; hcibib: VRST12
Papers: 40
Pages: 214
Links: Conference Website
Summary:It is our great pleasure to welcome you to the 2012 ACM Symposium on Virtual Reality Software and Technology -- VRST'12. VRST has become one of the major scientific events in the area of virtual reality since its debut in 1994 in Singapore. The symposium continues its tradition as an international forum for the presentation of research results and experience reports on leading edge issues of software, hardware and systems for Virtual Reality. The mission of the symposium is to share novel technologies that fulfill the needs of Virtual Reality applications and environments and to identify new directions for future research and development. VRST gives researchers and practitioners a unique opportunity to share their perspectives with others interested in the various aspects of Virtual Reality and owes its existence to a vibrant and productive research community. This year, VRST was held December 10-12, 2012 in Toronto, Ontario, Canada.
    The call for papers attracted 88 submissions from Asia, Europe, Australia, and North and South America in all areas of Virtual Reality research. Particular attention was given to work on systems, with a special track focusing on architectures, frameworks, reusability, adaptivity, and performance testing and evaluation. An international program committee consisting of 16 experts in the topic areas and the three program chairs handled the highly competitive and selective review process. Almost every submission received four or more reviews, two from members of the international program committee and two from external reviewers. Reviewing was double-blind: only the program chairs and the program committee member assigned to identify external reviewers knew the identity of the authors.
    In the end, the program committee was able to accept 25 out of 88 submissions, which corresponds to an acceptance rate of 28%. For posters, 15 out of 32 submissions will appear in the proceedings. The topics range from tracking, augmented and mixed reality, interaction, navigation and locomotion, collaboration, haptics, simulation, agents and behaviors to two sessions for a systems track. We hope that these proceedings will serve as a valuable reference for Virtual Reality researchers and developers.
  1. Oh my God, they are alive (agents and behavior)
  2. Apply yourself (systems track: applications)
  3. Escaping reality (augmented and mixed reality)
  4. With feeling (haptics and physics)
  5. Move it (interaction)
  6. Where and how (locomotion and collaboration)
  7. Dreaming big (systems track: system engineering)
  8. The ins and outs (tracking)
  9. Posters

Oh my God, they are alive (agents and behavior)

Generating diverse ethnic groups with genetic algorithms BIBAFull-Text 1-8
  Tomas Trescak; Anton Bogdanovych; Simeon Simoff; Inmaculada Rodriguez
Simulating large crowds of virtual agents has become an important problem in virtual reality applications, video games, cinematography and training simulators. In this paper, we show how to achieve a high degree of appearance variation among individual 3D avatars in generated crowds through the use of genetic algorithms, while also manifesting unique characteristic features of a given population group. We show how virtual cities can be populated with diverse crowds of virtual agents that preserve their ethnic features, illustrate how our approach can be used to simulate full body avatar appearance, present a case study and analyze our results.
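A minimal sketch of the core idea, assuming genomes are simple appearance-parameter vectors with per-gene bounds derived from the target population group; all names, values, and the selection policy are illustrative, not the authors' implementation:

```python
# Illustrative GA over avatar-appearance genomes (hypothetical parameters,
# not the authors' implementation).
import random

GENOME_LEN = 8        # number of appearance parameters (assumed)
MUTATION_RATE = 0.1   # per-gene mutation probability (assumed)

def crossover(a, b):
    """Single-point crossover between two parent genomes."""
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, bounds):
    """Perturb genes, but only within the per-gene bounds of the target
    population group, so characteristic ethnic features are preserved."""
    return [random.uniform(*bounds[i]) if random.random() < MUTATION_RATE else g
            for i, g in enumerate(genome)]

def evolve(population, bounds, generations=50):
    """Breed a varied population from seed genomes.
    Fitness/diversity selection is omitted for brevity."""
    for _ in range(generations):
        population = [mutate(crossover(*random.sample(population, 2)), bounds)
                      for _ in range(len(population))]
    return population
```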
Bridging the gap between visual exploration and agent-based pedestrian simulation in a virtual environment BIBAFull-Text 9-16
  Martin Brunnhuber; Helmut Schrom-Feiertag; Christian Luksch; Thomas Matyus; Gerd Hesina
We present a system to evaluate and improve visual guidance systems and signage for pedestrians inside large buildings. Given a 3D model of an actual building we perform agent-based simulations mimicking the decision making process and navigation patterns of pedestrians trying to find their way to predefined locations. Our main contribution is to enable agents to base their decisions on realistic three-dimensional visibility and occlusion cues computed from the actual building geometry with added semantic annotations (e.g. meaning of signs, or purpose of inventory), as well as an interactive visualization of simulated movement trajectories and accompanying visibility data tied to the underlying 3D model. This enables users of the system to quickly pinpoint and solve problems within the simulation by watching, exploring and understanding emergent behavior inside the building. This insight gained from introspection can in turn inform planning and thus improve the effectiveness of guidance systems.
Real-time physical modelling of character movements with Microsoft Kinect BIBAFull-Text 17-24
  Hubert Shum; Edmond S. L. Ho
With the advancement of motion tracking hardware such as the Microsoft Kinect, synthesizing human-like characters with real-time captured movements becomes increasingly important. Traditional kinematics and dynamics approaches perform sub-optimally when the captured motion is noisy or even incomplete. In this paper, we propose a unified framework to control physically simulated characters with live motion captured from the Kinect. Our framework can synthesize any posture in a physical environment using external forces and torques computed by a PD controller. The major problem with the Kinect is the incompleteness of the captured posture, with some degrees of freedom (DOFs) missing due to occlusion and noise. We propose to search for the best-matching posture in a motion database constructed in a dimensionality-reduced space, and to substitute the missing DOFs in the live captured data. Experimental results show that our method can synthesize realistic character movements from noisy captured motion. The proposed algorithm is computationally efficient and can be applied to a wide variety of interactive virtual reality applications such as motion-based gaming, rehabilitation and sport training.
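A minimal sketch of the two steps just described, assuming joint values stacked in NumPy arrays and a boolean mask of reliably tracked DOFs; the gains, database layout, and matching criterion are illustrative, not the paper's:

```python
import numpy as np

def fill_missing_dofs(captured, mask, database):
    """Replace occluded DOFs (mask == False) with the best-matching database
    posture. 'database' is an (N, D) array of example postures; matching uses
    only the DOFs that were reliably captured."""
    dists = np.linalg.norm((database - captured)[:, mask], axis=1)
    best = database[np.argmin(dists)]
    out = captured.copy()
    out[~mask] = best[~mask]
    return out

def pd_torque(theta, theta_dot, theta_target, kp=300.0, kd=30.0):
    """PD control per joint: tau = kp*(theta_target - theta) - kd*theta_dot."""
    return kp * (theta_target - theta) - kd * theta_dot
```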

Apply yourself (systems track: applications)

OmniKinect: real-time dense volumetric data acquisition and applications BIBAFull-Text 25-32
  Bernhard Kainz; Stefan Hauswiesner; Gerhard Reitmayr; Markus Steinberger; Raphael Grasset; Lukas Gruber; Eduardo Veas; Denis Kalkofen; Hartmut Seichter; Dieter Schmalstieg
Real-time three-dimensional acquisition of real-world scenes has many important applications in computer graphics, computer vision and human-computer interaction. Inexpensive depth sensors such as the Microsoft Kinect make it possible to develop such applications. However, this technology is still relatively recent, and no detailed studies on its scalability to dense and view-independent acquisition have been reported. This paper addresses the question of what can be done with a larger number of Kinects used simultaneously. We describe an interference-reducing physical setup, a calibration procedure and an extension to the KinectFusion algorithm, which allow us to produce high-quality volumetric reconstructions from multiple Kinects whilst overcoming systematic errors in the depth measurements. We also report on enhancing image-based visual hull rendering with depth measurements, and compare the results to KinectFusion. Our system provides practical insight into achievable spatial and radial range and into bandwidth requirements for depth data acquisition. Finally, we present a number of practical applications of our system.
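As a hedged illustration of one KinectFusion-style ingredient, here is how the truncated signed distance (TSDF) samples that several Kinects contribute to a single voxel might be merged by weighted averaging; the weighting scheme is generic, not the paper's error model:

```python
# Weighted TSDF fusion for one voxel; each observing Kinect contributes a
# (signed_distance, weight) pair. Weights are assumed, e.g., lower for
# oblique views or noisy depth measurements.
def fuse_tsdf(samples):
    total_weight = sum(w for _, w in samples)
    if total_weight == 0.0:
        return None                      # voxel unobserved by any sensor
    distance = sum(d * w for d, w in samples) / total_weight
    return distance, total_weight
```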
iAR: an exploratory augmented reality system for mobile devices BIBAFull-Text 33-40
  Nils Karlsson; Gang Li; Yakup Genc; Angela Huenerfauth; Elizabeth Bononno
In this paper we propose an exploratory augmented reality (AR) system for mobile devices. A hybrid system of 3D object detection and 3D tracking is used to rapidly localize the object in the scene. Object detection, based on a randomized tree classifier, supports large viewpoint changes, while edge-based 3D tracking provides efficient computation and better accuracy for pose estimation of incremental motions. The result is an augmented reality system that works well under large viewpoint variance and has superior accuracy. On a mobile device such as an iPad or iPhone, our system further provides exploratory capability using the touch screen and wireless connectivity. The user is able to interact with and explore the 3D object on the image or video, and to collaborate with a remote user. Extensive experiments with different subjects demonstrate that the proposed system advances the state of the art in augmented reality with novel and intuitive applications.
FlyVIZ: a novel display device to provide humans with 360° vision by coupling catadioptric camera with HMD BIBAFull-Text 41-44
  Jérôme Ardouin; Anatole Lécuyer; Maud Marchal; Clément Riant; Eric Marchand
Have you ever dreamed of having eyes in the back of your head? In this paper we present a novel display device called FlyVIZ which enables humans to experience real-time 360° vision of their surroundings for the first time. To do so, we combine a panoramic image acquisition system (positioned on top of the user's head) with a Head-Mounted Display (HMD). The omnidirectional images are transformed to fit the characteristics of HMD screens. As a result, the user can see his/her surroundings, in real-time, with 360° images mapped into the HMD field-of-view. We foresee potential applications in fields where augmented human capacity (an extended field-of-view) could be beneficial, such as surveillance, security, or entertainment. FlyVIZ could also be used in novel perception and neuroscience studies.
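A minimal sketch of the kind of image transformation involved, assuming the panorama is stored as an equirectangular image; the projection model and all parameters are assumptions, not FlyVIZ's actual pipeline:

```python
# Resample an equirectangular panorama (H, W, 3) into a pinhole view suited
# to an HMD eye buffer. Roll is ignored for brevity.
import numpy as np

def sample_panorama(pano, yaw, pitch, fov_deg, out_w, out_h):
    H, W, _ = pano.shape
    f = out_w / (2.0 * np.tan(np.radians(fov_deg) / 2.0))  # pinhole focal length
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                         np.arange(out_h) - out_h / 2.0)
    theta = yaw + np.arctan2(xs, f)                 # longitude per pixel
    phi = pitch + np.arctan2(ys, np.hypot(xs, f))   # latitude per pixel
    u = ((theta / (2 * np.pi)) % 1.0 * W).astype(int) % W
    v = np.clip(((phi / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return pano[v, u]                               # nearest-neighbor lookup
```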

Escaping reality (augmented and mixed reality)

Online real-time presentation of virtual experiences for external viewers BIBAFull-Text 45-52
  Kevin Ponto; Hyun Joon Shin; Joe Kohlmann; Michael Gleicher
Externally observing the experience of a participant in a virtual environment is generally accomplished by viewing an egocentric perspective. This view can often be difficult for others to watch due to unwanted camera motions that appear unnatural and unmotivated. We present a novel method for reducing the unnaturalness of these camera motions by minimizing camera movement while maintaining the context of the participant's observations. For each time-step, we compare the parts of the scene viewed by the virtual participant to the parts of the scene viewed by the camera. Based on the similarity of these two viewpoints, we then determine how the camera should be adjusted. We present two means of adjustment, one which continuously adjusts the camera and a second which attempts to stop camera movement when possible. Empirical evaluation shows that our method can produce paths that have substantially shorter travel distances, are easier to watch, and maintain the original observations of the participant's virtual experience.
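One plausible form such a per-time-step viewpoint comparison could take (an assumed metric, not the paper's): compare the sets of scene objects visible from the participant and from the external camera, and adjust the camera only when overlap drops too low.

```python
# Hypothetical visibility-overlap test; both arguments are sets of visible
# object ids. A Jaccard overlap below 'threshold' triggers camera adjustment.
def camera_needs_update(seen_by_user, seen_by_camera, threshold=0.7):
    if not seen_by_user:
        return False
    overlap = len(seen_by_user & seen_by_camera) / len(seen_by_user | seen_by_camera)
    return overlap < threshold
```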
Dense depth maps from sparse models and image coherence for augmented reality BIBAFull-Text 53-60
  Stefanie Zollmann; Gerhard Reitmayr
A convincing combination of virtual and real data in an Augmented Reality (AR) application requires detailed 3D information about the real world scene. In many situations extensive model data is not available, while sparse representations such as outlines on a map exist. In this paper, we present a novel approach using such sparse 3D model data to seed automatic image segmentation and infer a dense depth map of an environment. Sparse 3D models of known landmarks, such as points and lines from GIS databases, are projected into a registered image and initialize 2D image segmentation at the projected locations in the image. For the segmentation we propose different techniques, which combine shape information, semantics given by the database, and the visual appearance in the referenced image. The resulting depth information of objects in the scene can be used in many applications, including occlusion handling, label placement, and 3D modeling.
Achieving robust alignment for outdoor mixed reality using 3D range data BIBAFull-Text 61-68
  Masaki Inaba; Atsuhiko Banno; Takeshi Oishi; Katsushi Ikeuchi
Mixed reality (MR) technology can be applied to applications such as architecture, advertising, and navigation systems, so the desire to utilize MR in outdoor environments has been increasing. In order to utilize MR, it is necessary to achieve alignment when superimposing virtual content in the desired position. However, because light changes continually in outdoor environments and the appearance of real objects changes with it, previous image-based alignment methods do not work well in some cases. In this paper, a robust image-based alignment method for outdoor environments is proposed. In the proposed method, the albedo of real objects is estimated in advance using their 3D shapes, and the appearance is reproduced from the albedo and the current light environment. The reproduced image closely matches the appearance of the real objects, so a robust image-based alignment is achieved.
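A minimal Lambertian sketch of the albedo/appearance idea, assuming known per-pixel normals from the 3D range data and a single directional light; this simplified shading model is an assumption, not the paper's method:

```python
import numpy as np

def estimate_albedo(intensity, normals, light_dir):
    """intensity: (H, W) observed image; normals: (H, W, 3) unit normals;
    light_dir: (3,) unit vector. Albedo = intensity / shading."""
    shading = np.clip(normals @ light_dir, 1e-3, None)  # avoid divide-by-zero
    return intensity / shading

def reproduce_appearance(albedo, normals, current_light_dir):
    """Re-render under the current light for image-based alignment."""
    return albedo * np.clip(normals @ current_light_dir, 0.0, None)
```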

With feeling (haptics and physics)

HapSeat: producing motion sensation with multiple force-feedback devices embedded in a seat BIBAFull-Text 69-76
  Fabien Danieau; Julien Fleureau; Philippe Guillotel; Nicolas Mollet; Anatole Lécuyer; Marc Christie
We introduce a novel way of simulating sensations of motion which does not require an expensive and cumbersome motion platform. Multiple force-feedback devices apply forces to the seated user's body to generate a sensation of motion during passive navigation. A set of force-feedback devices such as mobile armrests or headrests are arranged around a seat so that they can apply forces to the user. We have dubbed this new approach HapSeat. A proof of concept has been designed which uses three low-cost force-feedback devices, and two control models have been implemented. Results from the first user study suggest that subjective sensations of motion are reliably generated using either model. Our results pave the way for a novel consumer device for motion effects based on our prototype.
Simulation of deformable solids in interactive virtual reality applications BIBAFull-Text 77-84
  Wen Tang; Tao Ruan Wan
Simulation of deformable objects has become indispensable in many virtual reality applications. Linear finite element algorithms are frequently applied in interactive physics simulation in order to ensure computational efficiency. However, there exists a variety of situations in which higher-order simulation accuracy is expected to improve the physical behaviors of deformable objects to match their real-world counterparts. For example, in the context of virtual surgery, interactive surgical manipulations impose algorithmic requirements to maintain both interactive frame rates and simulation accuracy, presenting major challenges for simulation methods. In this paper, we present an interactive system for efficient finite element based simulation of hyperelastic solids with more accurate physical behaviors than standard corotational methods. Our approach begins with a physics model that mitigates the drawbacks of corotational linear elasticity in preserving energy and momenta. A new damping model is presented which takes into account the differential of rotation to compensate for the loss of momenta due to rotations. More accurate simulations can thus be achieved with this new model, whereas standard corotational methods, which use rotated damping to handle energy dissipation, do not preserve momenta. We then present a real-time simulation framework for computing finite element based deformable solids with full capability for complex objects to collide and interact with each other. A constraint system is also provided for robust control and ease of use of the simulation system. We demonstrate a parallel implementation that enables realistic and stable physical behaviors of large deformations while handling unpredictable user inputs in interactive virtual environments. The implementation details and insights on practical considerations, such as our experience with parallel computation of the physics for mesh-based finite element objects, should be useful for those who wish to develop real-time applications in this area.
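For reference, the standard corotational element force that the paper improves upon has the textbook form f = -R K (R^T x - x0); a minimal NumPy sketch under an assumed element layout (this is the baseline model, not the authors' new damping formulation):

```python
import numpy as np

def corotational_force(R, K, x, x_rest):
    """Standard corotational element force f = -R_b K (R_b^T x - x_rest).
    R: (3,3) element rotation from polar decomposition; K: (n,n) rest-frame
    stiffness; x, x_rest: (n,) stacked vertex coordinates, n = 3 * verts."""
    blocks = len(x) // 3
    Rb = np.kron(np.eye(blocks), R)      # block-diagonal rotation
    return -Rb @ (K @ (Rb.T @ x - x_rest))
```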
Real-time simulation of long thin flexible objects in interactive virtual environments BIBAFull-Text 85-92
  Tao Ruan Wan; Wen Tang; Dongjin Huang
Many virtual reality-based applications involve simulations of micro-structures such as hair, fibers and textile yarns, as well as ropes, flexible wires and tubes. In virtual surgery, for example, flexible wires and tubes are common medical instruments and devices. Core to these simulations is the robust physics-based computation of elastic rods. In this paper, we present a volumetric finite element based approach to simulating rod-like objects with real-time performance suitable for interactive virtual environments. A sequence of Cosserat joints (tiny volumetric elastic joints) linked by rigid bar segments is used to compute the elastic rod objects. By construction, each of the joints is equipped with its own mass and degrees of freedom (DOFs), with a small volumetric deformation field to measure deformation energies due to stretching, shearing, bending, and twisting about the centerline curve of the long flexible object. A generalized continuum formulation is therefore derived to compute both bending and twisting deformations of elastic rods, resulting in a simple and general simulation model that facilitates efficient physics computations, whereas conventional simulation methods for elastic rods require explicit decoupling of bending and twisting deformations. In this paper, we show simulations of a wide range of object behaviors for interactive virtual reality applications.
Modifying an identified position of edged shapes using pseudo-haptic effects BIBAFull-Text 93-96
  Yuki Ban; Takuji Narumi; Tomohiro Tanikawa; Michitaka Hirose
In our research, we aim to construct a visuo-haptic system which gives users the sense of touching virtual objects with a variety of shapes, using pseudo-haptic effects. In this paper, we focus on modifying the identified position of edges on an object touched with a pointing finger, by displacing the visual representation of the user's hand, in order to construct a novel visuo-haptic system. We compose a video see-through system which enables us to change the perceived shape of the object a user is visually touching, by displacing the visual representation of the user's hand as if s/he were touching the visual shape, when in actuality s/he is touching another shape. Our experiments showed that participants perceived the positions of edges to be the same as the ones they were visually touching, even though the positions of the edges they were actually touching were different. These results prove that the perceived positions of edges can be modified, and that even if the ratio of successive distances between edges is 1 : 1, we can modify the perception of this ratio from 3 : 2 to 2 : 3.

Move it (interaction)

CrOS: a touch screen interaction technique for cursor manipulation on 2-manifolds BIBAFull-Text 97-100
  Manuel Veit; Antonio Capobianco; Dominique Bechmann
We present a new Virtual Reality (VR) interaction technique, called Cursor On Surface (CrOS), for cursor manipulation on 2-manifolds using touch screen input. The objective is to provide a technique to easily perform complex modelling operations in VR. CrOS is based on an algorithm which maps 2-D inputs into cursor displacements on 3-D surfaces. The technique relies on two principles. Firstly, it restricts the manipulation space to a 2-D space. Secondly, it reduces the complexity of the task through an automatic orientation algorithm that prevents the user from switching between editing and object repositioning tasks.
Starfish: a selection technique for dense virtual environments BIBAFull-Text 101-104
  Jonathan Wonner; Jérôme Grosjean; Antonio Capobianco; Dominique Bechmann
We present Starfish -- a new target selection technique for virtual reality (VR) environments. This technique provides a solution to accurately select targets in high-density 3D scenes. The user controls a 3D pointer surrounded by a starfish-shaped closed surface. The extremity of each branch ends exactly on a preselected nearby target. The shape is an implicit surface built on the segments going from the pointer to each of these targets. As the pointer moves across the scene, the starfish shape is dynamically rebuilt. When the shape is locked, the pointer is allowed to move inside the volume, slide down the desired branch, and reach and select the corresponding target. Since the pointer stays within the shape, targets are easy to reach and select.
Efficient selection of multiple objects on a large scale BIBAFull-Text 105-112
  Rasmus Stenholt
The task of multiple object selection (MOS) in immersive virtual environments is important and still largely unexplored. The difficulty of efficient MOS increases with the number of objects to be selected. For example, in small-scale MOS only a few objects need to be simultaneously selected, which may be accomplished by serializing existing single-object selection techniques. In this paper, we explore various tools for large-scale MOS, that is, when the number of objects to be selected is counted in hundreds or even thousands. This makes serialization of single-object techniques prohibitively time consuming. Instead, we have implemented and tested two existing approaches to 3-D MOS, a brush and a lasso, as well as a new technique, a magic wand, which automatically selects objects based on local proximity to other objects. In a formal user evaluation, we have studied how the performance of the MOS tools is affected by the geometric configuration of the objects to be selected. Our studies demonstrate that the performance of MOS techniques is very significantly affected by the geometric scenario facing the user. Furthermore, we demonstrate that a good match between MOS tool shape and the geometric configuration is not always preferable if the applied tool is complex to use.
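A minimal sketch of a proximity-grown selection in the spirit of the magic wand, assuming objects are points with ids; the growth rule and radius are assumptions, not the paper's exact criterion:

```python
# Grow a selection outward from a picked seed object: repeatedly add any
# object closer than 'radius' to something already selected (breadth-first).
# A spatial index would replace the linear scan in a real implementation.
from collections import deque

def magic_wand(seed, objects, positions, radius):
    """objects: list of ids; positions: dict id -> (x, y, z)."""
    def close(a, b):
        return sum((pa - pb) ** 2
                   for pa, pb in zip(positions[a], positions[b])) <= radius ** 2
    selected, frontier = {seed}, deque([seed])
    while frontier:
        current = frontier.popleft()
        for obj in objects:
            if obj not in selected and close(obj, current):
                selected.add(obj)
                frontier.append(obj)
    return selected
```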

Where and how (locomotion and collaboration)

Brain computer interface vs walking interface in VR: the impact of motor activity on spatial transfer BIBAFull-Text 113-120
  Florian Larrue; Hélène Sauzéon; Lioubov Aguilova; Fabien Lotte; Martin Hachet; Bernard N'Kaoua
The goal of this study is to explore new navigation methods in Virtual Reality (VR) and to understand the impact of motor activity on spatial cognition, and more precisely on the transfer of spatial learning. We present a user study comparing two interfaces with different motor activities. The first, a walking interface (a treadmill with rotation), gives the user a high level of sensorimotor activity (especially body-based and vestibular information). The second, a brain-computer interface (BCI), enables the user to navigate in a virtual environment (VE) without any motor activity, using brain activity only. The task consisted of learning a path in a virtual city built from a 3D model of a real city with either of these two interfaces (the treadmill and BCI conditions), or in the real city directly (the real condition). Participants then had to recall spatial knowledge in six different tasks assessing spatial memory and transfer. We also evaluated the ergonomics of the two interfaces and the presence felt by participants. Surprisingly, our results showed similar performances whatever the spatial restitution tasks or the interfaces used, very close to those of the real condition, which tends to indicate that motor activity is not essential to learn and transfer spatial knowledge. Even if the BCI seems less natural to use than the treadmill, our study suggests that BCI is a promising interface for studying spatial cognition.
Leaning-based travel interfaces revisited: frontal versus sidewise stances for flying in 3D virtual spaces BIBAFull-Text 121-128
  Jia Wang; Rob Lindeman
In this paper we revisit the design of leaning-based travel interfaces and propose a design space to categorize existing implementations. Within the design space, frontal and sidewise stances when using a flying surfboard interface were compared through a user study. The interfaces were adapted and improved from our previous designs using a body-mounted, multi-touch touchpad. Two different experiments were designed and conducted that focus on user performance and virtual world cognition, respectively. The results suggest better user performance and user experience when using the frontal stance, although no better spatial orientation or virtual world cognition was identified. Further, user interviews revealed that despite the realistic simulation of skateboarding/snowboarding, the sidewise stance suffers from poor usability due to inefficient and inaccurate turning control and confusion between the viewing and movement directions. Based on these results, several guidelines are proposed to aid the design of leaning-based travel interfaces for immersive virtual reality applications.
Evaluation of remote collaborative manipulation for scientific data analysis BIBAFull-Text 129-136
  Cédric Fleury; Thierry Duval; Valérie Gouranton; Anthony Steed
In the context of scientific data analysis, we compare a remote collaborative manipulation technique with a single-user manipulation technique. The manipulation task consists of positioning a clipping plane in order to perform cross-sections of scientific data that show several points of interest located inside these data. For the remote collaborative manipulation, we have chosen to use the 3-hand manipulation technique proposed by Aguerreche et al., which is well suited to remote manipulation of a plane. We ran two experiments to compare the two manipulation techniques with participants located in two different countries. These experiments have shown that the remote collaborative manipulation technique was significantly more efficient than single-user manipulation when the 3 points of interest were far apart inside the scientific data and, consequently, when the manipulation task was more difficult and required more precision. When the 3 points of interest were close together, there was no significant difference between the two manipulation techniques.

Dreaming big (systems track: system engineering)

CaveUDK: a VR game engine middleware BIBAFull-Text 137-144
  Jean-Luc Lugrin; Fred Charles; Marc Cavazza; Marc Le Renard; Jonathan Freeman; Jane Lessiter
Previous attempts at developing immersive versions of game engines have faced difficulties in achieving both overall high performance and preserving the reusability of software developments. In this paper, we present a high-level VR middleware based on one of the most successful commercial game engines: the Unreal® Engine 3.0 (UE3). We describe a VR framework implemented as an extension to the Unreal® Development Kit (UDK) supporting CAVE-like installations. Our approach relies on a distributed architecture reinforced by specific replication patterns to synchronize the user's point of view and interactions within a multi-screen installation. Our performance benchmarks indicated that our immersive port does not affect the game engine performance, even with complex real-time applications, such as fast-paced multiplayer First Person Shooter (FPS) games or high-resolution graphical environments with 2M+ polygons. A user study also demonstrated the capacity of our VR middleware to elicit high spatial presence while maintaining low cybersickness effects. With free distribution, we believe such a platform can support future entertainment and VR research.
VINS: shared memory space for definition of interactive techniques BIBAFull-Text 145-152
  Dimitar Valkov; Alexander Giesler; Klaus Hinrichs
Traditionally, interaction techniques for virtual reality applications are implemented in a proprietary way on specific target platforms, e.g., requiring specific hardware, physics or rendering libraries, which hinders reusability and portability. Even though abstraction layers for hardware devices are provided by numerous virtual reality libraries, they are usually tightly bound to a particular rendering environment and hardware configuration. In this paper we introduce VINS (Virtual Interactive Namespace), a seamless distributed memory space which provides a hierarchical structure to support the reusable design of interaction techniques. With VINS, an interaction metaphor can be transferred from one application to another without modifications, whether it is implemented as a function or class in the main application thread, uses its own thread, or runs as its own process on another computer. We describe the underlying concepts and present examples of how to integrate VINS with different frameworks or with already implemented interaction techniques.
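A hypothetical sketch of a hierarchical shared namespace in the spirit of VINS; the API names and path convention are invented for illustration, not taken from the paper. The point is that an interaction technique reads and writes keys instead of talking to devices or renderers directly:

```python
# Minimal thread-safe hierarchical key/value store. Keys are slash-separated
# paths, e.g. "/input/wand/position". Distribution across processes/hosts
# (which VINS provides) is out of scope for this sketch.
import threading

class Namespace:
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def write(self, path, value):
        with self._lock:
            self._data[tuple(path.strip("/").split("/"))] = value

    def read(self, path, default=None):
        with self._lock:
            return self._data.get(tuple(path.strip("/").split("/")), default)

# An interaction technique then depends only on keys:
# ns = Namespace()
# ns.write("/input/wand/position", (0.1, 1.3, -2.0))
# pos = ns.read("/input/wand/position")
```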
Evaluating Scala, actors, & ontologies for intelligent realtime interactive systems BIBAFull-Text 153-160
  Dennis Wiebusch; Martin Fischbach; Marc Erich Latoschik; Henrik Tramberend
This article evaluates the utility of three technical design approaches implemented during the development of a Realtime Interactive Systems (RIS) architecture focusing on the areas of Virtual and Augmented Reality (VR and AR), Robotics, and Human-Computer Interaction (HCI). The design decisions are (1) the choice of the Scala programming language, (2) the implementation of the actor computational model, and (3) the central incorporation of ontologies as a base for semantic modeling, required for several Artificial Intelligence (AI) methods. A white-box expert review is applied to a detailed use case illustrating an interactive and multimodal game scenario, which requires a number of complex functional features like speech and gesture processing and instruction mapping. The review matches the three design decisions against three comprehensive non-functional requirements from software engineering: Reusability, scalability, and extensibility. The qualitative evaluation is condensed to a semi-quantitative summary, pointing out the benefits of the chosen technical design.

The ins and outs (tracking)

Tracking of manufacturing tools with cylindrical markers BIBAFull-Text 161-168
  Jan-Patrick Hülß; Bastian Müller; Daniel Pustka; Jochen Willneff; Konrad Zürl
In industrial manufacturing processes, targets for infrared marker-based tracking have to be robust and must integrate into the tools without disturbing the work flow. In this paper, we propose cylindrical markers attached directly to the tool. We show that targets equipped with cylindrical markers can be tracked with about the same precision as targets equipped with spherical markers, if a correction for the reduced symmetry is applied. Additionally, the markers can be placed on different parts of a tool with a flexible connection, which is also considered in this paper.
Random model variation for universal feature tracking BIBAFull-Text 169-176
  Jan Herling; Wolfgang Broll
Feature-based tracking approaches are becoming more and more common in Augmented Reality (AR). However, most upcoming AR solutions are designed for mobile devices, in particular smartphones and tablet computers, which lack sufficient performance to execute state-of-the-art feature-based approaches at interactive frame rates. In this paper we present our approach, which significantly increases the speed of feature-based tracking, thus allowing for real-time applications even on mobile devices. Our approach applies a randomized pose initialization, is applicable to any feature detector, and does not require any feature appearance attributes, such as descriptors or ferns.
Static pose reconstruction with an instrumented bouldering wall BIBAFull-Text 177-184
  Rami Aladdin; Paul Kry
This paper describes the design and construction of an instrumented bouldering wall, and a technique for estimating poses by optimizing an objective function involving contact forces. We describe the design and calibration of the wall, which can capture the contact forces and torques during climbing while motion capture (MoCap) records the climber's pose, and present a solution for identifying static poses for a given set of holds and forces. We show results of our calibration process and static poses estimated for different measured forces. To estimate poses from forces, we use optimization, starting with an inexpensive objective to guide the solver toward the optimal solution. When good candidates are encountered, the full objective function is evaluated with a physics-based simulation to determine physical plausibility while meeting additional constraints. Comparison between our reconstructed poses and MoCap shows that our objective function is a good model of human posture.
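A minimal sketch of the two-stage evaluation strategy described above, with the structure assumed: a cheap surrogate objective screens candidates, and only promising ones pay for the full physics-based check. Function names and the cutoff are illustrative:

```python
# Coarse-to-fine candidate evaluation: skip the expensive physics-based
# objective for poses whose cheap surrogate cost is already too high.
def optimize_pose(candidates, cheap_cost, full_cost, cutoff):
    """candidates: iterable of poses; cheap_cost/full_cost: pose -> float."""
    best_pose, best_score = None, float("inf")
    for pose in candidates:
        if cheap_cost(pose) > cutoff:
            continue                    # not worth a physics simulation
        score = full_cost(pose)         # physics-based plausibility check
        if score < best_score:
            best_pose, best_score = pose, score
    return best_pose
```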

Posters

Pinch-n-paste: direct texture transfer interaction in augmented reality BIBAFull-Text 185-186
  Atsushi Umakatsu; Tomohiro Mashita; Kiyoshi Kiyokawa; Haruo Takemura
Our Pinch-n-Paste allows a user to touch or pinch one part of an object, copy and move its texture, and paste it onto another object, directly with his or her hand, in an augmented reality environment. To transfer texture appropriately from one part of an object to another, two texture images are generated using the Least Squares Conformal Map (LSCM) technique. Two regions in the texture images corresponding to the source and target areas of interest are then obtained using cross-boundary brushes. Target texel values are sampled from corresponding source texels by Moving Least Squares (MLS), and are finally mapped onto the target object.
Underwater augmented reality game using the DOLPHYN BIBAFull-Text 187-188
  Abdelkader Bellarbi; Christophe Domingues; Samir Otmane; Samir Benbelkacem; Alain Dinis
The introduction of virtual and mixed realities into aquatic leisure activities constitutes a technological rupture compared with the status of related technologies. With the extension of the Internet to underwater applications, the innovative character of the project becomes evident, and the impact of this development on littoral and beach tourism may be considerable. In fact, there are recent developments to extend the use of computers and computer components, such as the mouse, to underwater environments. The Dolphyn is an underwater computerized display system with various sensors and devices conceived for existing swimming pools and for beach shores, combining computer functions, video gaming and multisensory simulations.
Multimodal virtual environment subserving the design of electronic orientation aids for the blind BIBAFull-Text 189-190
  Slim Kammoun; Marc J-M Macé; Christophe Jouffrais
In the last few decades, a growing number of Electronic Orientation Aids (EOAs) have been developed with the purpose of improving the autonomy of visually impaired people. However, the majority of those systems are not used by the blind due to limited usability. The main challenges to be addressed concern interaction and guidance. To address these issues, we designed a multimodal (input and output) Virtual Environment (VE) that simulates different interactions that could be used for space perception and guidance in an EOA. This platform subserves two goals: helping designers systematically test guidance strategies (i.e., for the development of new EOAs) and training blind people to use interactive EOAs, with an emphasis on enhancing cognitive mapping. In a multimodal VE, both objectives are pursued in a controlled, cost-effective, safe and flexible environment.
Elastic connections: separating and observation methods for complex virtual objects BIBAFull-Text 191-192
  Mai Otsuki; Tsutomu Oshita; Asako Kimura; Fumihisa Shibata; Hideyuki Tamura
Today's technology enables users to manipulate complex, multi-part 3D virtual objects, such as industrial products, structures designed in CAD, and models of the human body, in large 3D spaces. In general modeling software, the parts of such complex 3D virtual objects are grouped and manipulated together, not individually, for efficient operation. Therefore, a separating operation is necessary when the user wants to observe or manipulate only a part of a complex object. We have developed a system that realizes gesture-based separation and observation of a group of parts within complex virtual objects in 3D space. One practical application of our system is in training, such as learning the structure of the human body or of industrial products. In our system, users can freely separate a group of parts by pulling and cutting a virtual rubber band. The widths of these virtual rubber bands reflect the connection strength between parts, which allows the user to easily understand the relationships between parts. Additionally, this feedback provides a comfortable operational familiarity.
Supporting data collection in virtual worlds BIBAFull-Text 193-194
  Hiep Luong; Dipesh Gautam; John Gauch; Susan Gauch
This paper presents a new services paradigm for virtual world crawler interaction that is co-operative and exploits information about 3D objects in the virtual world. Our approach supports analyzing redundant information crawled from virtual worlds in order to decrease the amount of data collected by crawlers, keep search engine collections up to date, and provide an efficient mechanism for collecting and searching information from multiple virtual worlds.
Optical illusion in augmented reality BIBAFull-Text 195-196
  Maryam Khademi; Hossein Mousavi Hondori; Cristina Videira Lopes
While developers are mainly tackling primary problems in developing augmented reality (AR) systems [1-2], perceptually correct augmentation remains a critical challenge. In this paper, we focus on how to correctly display and accurately convey an augmented virtual object's size with respect to real-world objects. We conducted a user study to examine how subjects would verify the relative size of virtual objects augmented in a real scene. The results confirmed that optical illusions occur in AR applications if the size of virtual objects relative to real-world ones is not considered.
Falling water with key particle and envelope surface for virtual liquid manipulation model BIBAFull-Text 197-198
  Shunsuke Miyashita; Kenji Funahashi
One of our main goals is to provide a VR chemical laboratory as a VR learning system for people who, for example, have to stay in the hospital. We have therefore already proposed an interactive model of virtual liquid, like water, which focuses on user impression and real-time processing rather than exact behavior simulation. However, free-falling water was previously simulated and rendered simply with particles. In this paper, we propose an efficient and effective method for free-falling water using key particles instead of conventional particles. The envelope surface is rendered around the key particles as the surface of the water.
Are immersive FPS games enjoyable? BIBAFull-Text 199-200
  Jean-Luc Lugrin; Fred Charles; Marc Cavazza; Marc Le Renard; Jonathan Freeman; Jane Lessiter
This paper describes an experiment comparing immersive and non-immersive gaming using a state-of-the-art first person shooter game (FPS) in which we analyse user experience and performance through a combination of in-game metrics, questionnaires and subjective reports. Our results show an overwhelming subjective preference for the immersive version despite a decrease in performance attributed to a more realistic aiming mechanism. Interaction metrics suggest that users took full advantage of the immersive context rather than simply transposing their desktop gaming skills.
A semantic reasoning engine for context-awareness: detection and enhancement of 3D interaction interests BIBAFull-Text 201-202
  Yannick Dennemont; Guillaume Bouyer; Samir Otmane; Malik Mallem
We propose a semantic reasoning engine for context-awareness in classic VR environments. It is currently used to automatically detect the user's interests and to manage visual enhancements depending on the user's movement.
Gaze and gesture based object manipulation in virtual worlds BIBAFull-Text 203-204
  Dana Slambekova; Reynold Bailey; Joe Geigel
In this work we present a framework for enabling the use of both eye gaze and hand gestures for interaction within a 3D virtual world. We define a set of natural interaction mechanisms for manipulation of objects within the 3D space and describe a prototype implementation based on Second Life that allows these mechanisms to be used in that world. We also explore how these mechanisms can be extended to other spatial tasks such as camera positioning and motion.
Collaborative approach for dynamic adjustment of selection areas in polygonal modelling BIBAFull-Text 205-206
  Adrien Girard; Yacine Bellik; Malika Auvray; Mehdi Ammi
Mutual awareness between users working in collaborative virtual environments is an important factor for efficient collaboration. This article presents a collaborative metaphor dedicated to the dynamic adjustment of selection areas in polygonal modelling.
Spatial augmented reality based tangible CAD system BIBAFull-Text 207-208
  Hyeon Joon Joo; Ross Smith; Bruce Thomas; Jun Park
In current Computer Aided Design (CAD) systems, designers are commonly restricted to a traditional workstation environment with mouse and keyboard. This environment is detached from the physical object they are designing, and as such they may lose the one-to-one correspondence between the virtual and physical magnification of the design. To address this, we propose a Spatial Augmented Reality (SAR) based CAD system which consists of a fixed camera-projector pair, a Light Emitting Diode (LED) pen with two buttons, a wireless communication module, and a physical drawing board.
A system for evaluating 3D pointing techniques BIBAFull-Text 209-210
  Robert J. Teather; Wolfgang Stuerzlinger
This demo presents a desktop VR system for evaluating human performance in 3D pointing tasks. The system supports different input devices (e.g., mouse and 6DOF remote pointer), pointing techniques (e.g., screen-plane and depth cursors), and cursor visualization styles (e.g., one-eyed and stereo 3D cursors). The objective is to comprehensively compare all combinations of these conditions. We especially focus on fair and direct comparisons between 2D and 3D pointing tasks. Finally, our system includes a new pointing technique that outperforms standard ray pointing.
Exploration of fused multi-volume images using user-defined binary masks BIBAFull-Text 211-212
  Ryan Armstrong; Roy Eagleson; Sandrine de Ribaupierre
Acquisition and fusion of multiple imaging modalities is becoming an increasingly desired clinical practice. This is particularly the case in radiation therapy, where dosage must be determined using electron density calculations from CT images, which lack the contrast to resolve soft-tissue structures. This often necessitates the fusion of corresponding MRI images in order to plot radiation trajectories safely around critical tissues. Simultaneous visualization of multiple volumes using direct volume rendering (DVR) techniques offers a number of advantages over traditional visualization methods. Specifically, fused visualization enhances the relational aspects of volumes, providing improved context [cite the first one]. However, there are many challenges involved in implementing DVR using fused data sets. The primary challenge is determining how images overlap to provide meaningful information. Additionally, there is increased computational complexity beyond standard DVR techniques, threatening real-time applications of fused DVR. These difficulties are evident to users of such systems, as they must manage complex user interfaces and poor application performance. In this work, we introduce a user-centric multi-volume DVR technique which addresses issues of performance and ease of use. Through an intuitive interface, users are able to spatially define regions of interest, determining the relative contributions of each modality in the output rendering.
Gesture-based interaction with 3D visualizations of document collections for exploration and search BIBAFull-Text 213-214
  Angélica de Antonio; Cristian Moral; Daniel Klepel; Martín J. Abente
Despite the powerful tools available nowadays to ease access to the information contained in huge document collections such as the WWW, satisfactory solutions have not yet been found that not only allow users to easily locate potentially interesting documents, but also help them build a mental model of a set of documents, explore and interact intuitively with a corpus, and reorganize it according to their interests and preferences. Three-dimensional representations of document collections have shown their usefulness in helping users visualize the thematic structure of a collection. However, a 3D visualization is not enough. New interaction paradigms and techniques need to be investigated.