
VMR 2011: 4th International Conference on Virtual and Mixed Reality, Part I: New Trends

Fullname: VMR 2011: 4th International Conference on Virtual and Mixed Reality, Part I: New Trends
Note: Volume 13 of HCI International 2011
Editors: Randall Shumaker
Location: Orlando, Florida
Dates: 2011-Jul-09 to 2011-Jul-14
Volume: 1
Publisher: Springer-Verlag
Series: Lecture Notes in Computer Science 6773
Standard No: ISBN 978-3-642-22020-3 (print), 978-3-642-22021-0 (online); hcibib: ICVR11-1
Papers: 43
Pages: 399
Links: Online Proceedings | Publisher Book Page
  1. ICVR 2011-07-09 Volume 1
    1. Augmented Reality Applications
    2. Virtual and Immersive Environments
    3. Novel Interaction Devices and Techniques in VR
    4. Human Physiology and Behaviour in VR Environments

ICVR 2011-07-09 Volume 1

Augmented Reality Applications

AR Based Environment for Exposure Therapy to Mottephobia, pp. 3-11
  Andrea F. Abate; Michele Nappi; Stefano Ricciardi
Mottephobia is an anxiety disorder revolving around an extreme, persistent and irrational fear of moths and butterflies, leading sufferers to panic attacks. This study presents an ARET (Augmented Reality Exposure Therapy) environment aimed at reducing mottephobia symptoms through progressive desensitization. The architecture described is designed to provide a greater and deeper level of interaction between sufferers and the objects of their fears. To this aim, the system exploits an inertial/ultrasonic tracking system to capture the user's head and wrist positions/orientations within the virtual therapy room, while a pair of instrumented gloves captures finger motion. A parametric moth behavioral engine allows the expert monitoring the therapy session to control many aspects of the virtual insects augmenting the real scene, as well as their interaction with the sufferer.
Keywords: Augmented reality; exposure therapy; mottephobia
Designing Augmented Reality Tangible Interfaces for Kindergarten Children, pp. 12-19
  Pedro Campos; Sofia Pessanha
Using games based on novel interaction paradigms for teaching children is becoming increasingly popular, because children are moving towards a new level of interaction with technology and there is a need to bring children to educational contents through the use of novel, attractive technologies. Instead of developing a computer program using traditional input techniques (mouse and keyboard), this research presents a novel user interface for learning kindergarten subjects. The motivation is essentially to bring something from the real world and couple it with virtual reality elements, accomplishing the interaction using our own hands: a symbiosis of traditional cardboard games with digital technology. The rationale for our approach is simple. Papert (1996) notes that "learning is more effective when the apprentice voluntarily engages in the process". Motivating the learners is therefore a crucial factor to increase the possibility of action and discovery, which in turn increases the capacity of what some researchers call learning to learn. In this sense, the novel constructionist-learning paradigm aims to adapt and prepare tomorrow's schools for the constant challenges faced by a society which is currently embracing an accelerating pace of profound changes. Augmented reality (Shelton and Hedley, 2002) and tangible user interfaces (Sharlin et al., 2004) fit nicely as a support method for this kind of learning paradigm.
Keywords: Augmented reality; Interactive learning systems; Tangible Interfaces
lMAR: Highly Parallel Architecture for Markerless Augmented Reality in Aircraft Maintenance, pp. 20-29
  Andrea Caponio; Mauricio Hincapié; Eduardo González Mendívil
A novel architecture for real-time markerless augmented reality is introduced. The proposed framework consists of several steps: first, each image taken from a video feed is analyzed and corner points are extracted, labeled, filtered and tracked along subsequent pictures. Then an object recognition algorithm is executed and objects in the scene are recognized. Finally, the position and pose of the objects are given. The processing steps rely only on state-of-the-art image processing algorithms and on smart analysis of their output. To guarantee real-time performance, use of modern, highly parallel graphics processing units is anticipated, and the architecture is designed to exploit heavy parallelization.
Keywords: Augmented Reality; Parallel Computing; CUDA; Image Processing; Object Recognition; Machine Vision
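The pipeline sketched in the abstract above (corner extraction and tracking, then recognition and pose) can be illustrated with standard primitives. A minimal sketch in Python, assuming OpenCV's Shi-Tomasi detector and pyramidal Lucas-Kanade tracker as stand-ins for the unspecified algorithms (the paper's own GPU/CUDA implementation is not shown):

    import cv2

    def track_corners(prev_gray, gray, prev_pts):
        # Track corner points into the next frame (pyramidal Lucas-Kanade).
        pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
        good = status.ravel() == 1
        return pts[good]

    cap = cv2.VideoCapture("maintenance_feed.avi")   # hypothetical video feed
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Extract corner points (Shi-Tomasi); the paper's detector is unspecified.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        tracked = track_corners(prev_gray, gray, prev_pts)
        # ... object recognition and pose estimation would consume `tracked` here ...
        prev_gray, prev_pts = gray, tracked.reshape(-1, 1, 2)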
5-Finger Exoskeleton for Assembly Training in Augmented Reality, pp. 30-39
  Siam Charoenseang; Sarut Panjan
This paper proposes an augmented reality based exoskeleton for virtual object assembly training. The proposed hand exoskeleton has 9 DOF and can provide force feedback to all five fingers at the same time. The device is able to simulate the shape, size, and weight of virtual objects. In this augmented reality system, users can assemble virtual objects in a real workspace which is superimposed with computer graphics information. During virtual object assembly training, users receive force feedback which is synchronized with a physics simulation. Since the proposed system provides both visual and kinesthetic senses, it helps users to improve their assembly skills effectively.
Keywords: Exoskeleton Device; Augmented Reality; Force Feedback
Remote Context Monitoring of Actions and Behaviors in a Location through 3D Visualization in Real-Time, pp. 40-44
  John Conomikes; Zachary Pacheco; Salvador Barrera; Juan Antonio Cantu; Lucy Beatriz Gomez; Christian de los Reyes; Juan Manuel Mendez Villarreal; Takeo Shime; Yuki Kamiya; Hideki Kawai; Kazuo Kunieda; Keiji Yamada
The goal of this project is to take huge amounts of data, not parseable by a single person, and present them in an interactive 3D recreation of the events that the sensors detected, using a 3D rendering engine known as Panda3D. "Remote Context Monitoring of Actions and Behavior in a Location Through the Usage of 3D Visualization in Real-time" is a software application designed to read large amounts of data from a database and use those data to recreate the context in which the events occurred, to improve understanding of the data.
Keywords: 3D; Visualization; Remote; Monitoring; Panda3D; Real-Time
Spatial Clearance Verification Using 3D Laser Range Scanner and Augmented Reality, pp. 45-54
  Hirotake Ishii; Shuhei Aoyama; Yoshihito Ono; Weida Yan; Hiroshi Shimoda; Masanori Izumi
A spatial clearance verification system for supporting nuclear power plant dismantling work was developed and evaluated subjectively. The system employs a three-dimensional laser range scanner to obtain three-dimensional surface models of the work environment and dismantling targets. The system also employs Augmented Reality to allow field workers to simulate the transportation and temporary placement of dismantling targets using the obtained models, to verify spatial clearance in actual work environments. The developed system was evaluated by field workers. The results show that the system is acceptable and useful for confirming that dismantling targets can be transported through narrow passages and placed in limited temporary workspaces. It was also found that extending the system so that multiple workers can use it simultaneously, sharing the image of the dismantling work, is desirable.
Keywords: Augmented Reality; Laser Range Scanner; Nuclear Power Plants; Decommissioning; Spatial Clearance Verification
Development of Mobile AR Tour Application for the National Palace Museum of Korea, pp. 55-60
  Jae-Beom Kim; Changhoon Park
We present a mobile augmented reality tour application (MART) that provides an intuitive interface for tourists, using context-awareness for smart guidance. In this paper, we discuss practical ways of recognizing the context correctly while overcoming the limitations of the sensors. First, semi-automatic context recognition is proposed to explore a context ontology based on user experience. Second, multi-sensor context-awareness constructs the context ontology using multiple sensors. Finally, we introduce the iPhone tour application for the National Palace Museum of Korea.
Keywords: Mobile; Augmented Reality; Tour; Semi-automatic context recognition; Multi-sensor context-awareness
A Vision-Based Mobile Augmented Reality System for Baseball Games, pp. 61-68
  Seong-Oh Lee; Sang Chul Ahn; Jae-In Hwang; Hyoung-Gon Kim
In this paper we propose a new mobile augmented-reality system that addresses the needs of users viewing baseball games with enhanced contents. The overall goal of the system is to augment meaningful information on each player position on a mobile device display. To this end, the system takes two main steps: homography estimation and automatic player detection. The system is based on still images taken by a mobile phone. It can handle various images taken from different angles, with large variations in the size and pose of players and the playground, and under different lighting conditions. We have implemented the system on a mobile platform; all steps are processed within two seconds.
Keywords: Mobile augmented-reality; baseball game; still image; homography; human detection; computer vision
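The homography step can be sketched as follows: given four or more correspondences between known field landmarks and their pixel positions in the still image, a 3x3 homography maps any field coordinate onto the display, where detected players can then be annotated. A minimal sketch in Python/OpenCV; the coordinates are hypothetical, not from the paper:

    import cv2
    import numpy as np

    # Field landmarks in field coordinates (metres) -- hypothetical values.
    field_pts = np.array([[0, 0], [19.4, 19.4], [0, 38.8], [-19.4, 19.4]],
                         dtype=np.float32)
    # The same landmarks located in the phone image (pixels).
    image_pts = np.array([[320, 460], [540, 300], [330, 210], [120, 310]],
                         dtype=np.float32)

    H, _mask = cv2.findHomography(field_pts, image_pts, cv2.RANSAC)

    def field_to_image(x, y):
        # Project a field coordinate into the image to place an annotation.
        q = H @ np.array([x, y, 1.0])
        return q[:2] / q[2]

    print(field_to_image(9.7, 9.7))   # e.g., a position near the pitcher's mound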
Social Augmented Reality for Sensor Visualization in Ubiquitous Virtual Reality, pp. 69-75
  Youngho Lee; Jongmyung Choi; Sehwan Kim; Seunghun Lee; Say Jang
There have been several research activities on data visualization exploiting augmented reality technologies. However, most research has focused on tracking and visualization itself, with little discussion of social communities in augmented reality. In this paper, we propose a social augmented reality architecture that selectively visualizes sensor information based on the user's social network community. We show three scenarios: information from sensors embedded in mobile devices, from sensors in the environment, and from the social community. We expect that the proposed architecture will have a crucial role in selectively visualizing thousands of sensor readings according to the user's social network community.
Keywords: Ubiquitous virtual reality; context-awareness; augmented reality; social community
Digital Diorama: AR Exhibition System to Convey Background Information for Museums, pp. 76-86
  Takuji Narumi; Oribe Hayashi; Kazuhiro Kasada; Mitsuhiko Yamazaki; Tomohiro Tanikawa; Michitaka Hirose
In this paper, we propose an MR museum exhibition system, the "Digital Diorama" system, to convey background information intuitively. The system aims to offer more than the function of existing dioramas in museum exhibitions by using mixed reality technology: it superimposes a computer-generated diorama scene, reconstructed from related image/video materials, onto real exhibits. First, we implement and evaluate methods for estimating the locations at which photos and movies were taken in the past. Then, we implement and install two types of prototype system at the estimated positions to superimpose virtual scenes onto a real exhibit in the Railway Museum. By looking into the eyehole-type device of the proposed system, visitors can feel as if they time-travel around the exhibited steam locomotive and understand the historical differences between its current and previous appearance.
Keywords: Mixed Reality; Museum Exhibition; Digital Museum
Augmented Reality: An Advantageous Option for Complex Training and Maintenance Operations in Aeronautic Related Processes, pp. 87-96
  Horacio Rios; Mauricio Hincapié; Andrea Caponio; Emilio Mercado; Eduardo González Mendívil
The purpose of this article is to present a comparison of three different methodologies for the transfer of knowledge of complex operations in aeronautical processes related to maintenance and training. The first is the use of traditional teaching techniques, with manuals and printed instructions, to perform an assembly task; the second is the use of audiovisual tools to give more information to operators; and the third is the use of an Augmented Reality (AR) application to achieve the same goal by enhancing the real environment with virtual content. We developed an AR application that runs on a regular laptop with stable results and provides useful information to the user during the four hours of training; a basic statistical analysis was also performed to compare the results of our AR application.
Keywords: Augmented Reality; Maintenance; Training; Aeronautic Field
Enhancing Marker-Based AR Technology, pp. 97-104
  Jonghoon Seo; Jinwook Shim; Ji-Hye Choi; James Park; Tack-Don Han
In this paper, we propose a method that solves both the jittering and occlusion problems, which are the biggest issues in marker-based augmented reality technology. Because pose estimation is adjusted using multiple keypoints that exist in the marker's cells, the estimated pose is robust against jittering. Additionally, we solve the occlusion problem by applying tracking technology.
Keywords: Marker-based AR; Augmented Reality; Tracking
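Pose estimation from many cell keypoints, rather than from the four marker corners alone, can be sketched as an ordinary PnP solve; averaging the error over many correspondences is what damps frame-to-frame jitter. A hedged sketch in Python/OpenCV with a hypothetical 4x4 cell layout and camera intrinsics:

    import cv2
    import numpy as np

    # 3D cell centres on the marker plane (marker-local mm, z = 0) -- hypothetical.
    grid = np.linspace(-30.0, 30.0, 4)
    object_pts = np.array([[x, y, 0.0] for y in grid for x in grid],
                          dtype=np.float32)

    # Matching 2D detections (pixels); detect_cell_keypoints is a placeholder
    # for the cell detector, which the abstract does not specify.
    image_pts = detect_cell_keypoints(frame)

    K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])  # intrinsics
    dist = np.zeros(5)

    # A least-squares pose over 16 points is far less sensitive to per-point
    # detection noise than a 4-corner solution, suppressing jitter.
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)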
MSL_AR Toolkit: AR Authoring Tool with Interactive Features, pp. 105-112
  Jinwook Shim; Jonghoon Seo; Tack-Don Han
We describe an authoring tool for Augmented Reality (AR) content. In recent years a number of frameworks have been proposed for developing AR applications. This paper describes an AR authoring tool with interactive features, developed so that users can build educational services and actively participate in them. We describe the MSL_AR authoring tool's process and two kinds of interactive features.
Keywords: Augmented Reality; Authoring; interaction
Camera-Based In-situ 3D Modeling Techniques for AR Diorama in Ubiquitous Virtual Reality, pp. 113-122
  Atsushi Umakatsu; Hiroyuki Yasuhara; Tomohiro Mashita; Kiyoshi Kiyokawa; Haruo Takemura
We have been studying an in-situ 3D modeling and authoring system, AR Diorama. In the AR Diorama system, a user is able to reconstruct a 3D model of a real object of concern and describe behaviors of the model by stroke input. In this article, we introduce two ongoing studies on interactive 3D reconstruction techniques. The first technique is feature-based: natural feature points are extracted and tracked, a convex hull is obtained from the feature points based on Delaunay tetrahedralisation, and the polygon mesh is carved to approximate the target object based on a feature-point visibility test. The second technique is region-based: foreground and background color distribution models are first estimated to extract an object region, and then a 3D model of the target object is reconstructed by silhouette carving. Experimental results show that the two techniques can interactively reconstruct a better 3D model than our previous system.
Keywords: AR authoring; AR Diorama; 3D reconstruction
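Silhouette carving, the core of the region-based technique, admits a compact sketch: keep each candidate voxel only if its projection falls inside the extracted object silhouette in every calibrated view. A minimal numpy sketch, with projection matrices and silhouette masks assumed given:

    import numpy as np

    def silhouette_carve(voxels, projections, masks):
        # voxels:      (N, 3) candidate voxel centres (world coordinates)
        # projections: list of 3x4 camera projection matrices, one per view
        # masks:       list of binary silhouette images (H, W), one per view
        keep = np.ones(len(voxels), dtype=bool)
        homog = np.hstack([voxels, np.ones((len(voxels), 1))])   # (N, 4)
        for P, mask in zip(projections, masks):
            uvw = homog @ P.T
            u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
            v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
            h, w = mask.shape
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            in_sil = np.zeros(len(voxels), dtype=bool)
            in_sil[inside] = mask[v[inside], u[inside]] > 0
            keep &= in_sil            # carve away anything outside a silhouette
        return voxels[keep]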
Design Criteria for AR-Based Training of Maintenance and Assembly Tasks, pp. 123-132
  Sabine Webel; Ulrich Bockholt; Jens Keil
As the complexity of maintenance tasks can be enormous, the efficient training of technicians in performing those tasks becomes increasingly important. Maintenance training is a classical application field of Augmented Reality explored by different research groups. Mostly technical aspects (e.g., tracking, 3D augmentations) have been the focus of this research field. In our paper we present results of interdisciplinary research based on the fusion of cognitive science, psychology and computer science. We focus on analyzing how AR-based training of maintenance skills can be improved by also addressing the necessary cognitive skills. Our aim is to find criteria for the design of AR-based maintenance training systems. A preliminary evaluation of the proposed design strategies has been conducted by expert trainers from industry.
Keywords: Augmented Reality; training; skill acquisition; training system; industrial applications

Virtual and Immersive Environments

Object Selection in Virtual Environments: Performance, Usability and Interaction with Spatial Abilities, pp. 135-143
  Andreas Baier; David Wittmann; Martin Ende
We investigate the influence of users' spatial orientation and space relations abilities on performance with six different interaction methods for object selection in virtual environments. Three interaction methods are operated with a mouse, three with a data glove. Results show that the mouse based interaction methods perform better than the data glove based methods. Usability ratings reinforce these findings. However, performance with the mouse based methods appears to be independent of users' spatial abilities, whereas performance with the data glove based methods is not.
Keywords: Object selection; interaction method; virtual environment; input device; performance; usability; spatial ability
Effects of Menu Orientation on Pointing Behavior in Virtual Environments, pp. 144-153
  Nguyen-Thong Dang; Daniel Mestre
The present study investigated the effect of menu orientation on user performance in a menu-item selection task in virtual environments. An ISO 9241-9-based multi-tapping task was used to evaluate subjects' performance. We focused on a local interaction task in a mixed reality context, where the subject's hand directly interacted with 3D graphical menu items. We evaluated the pointing performance of subjects across three levels of inclination: a vertical menu, a 45°-tilted menu and a horizontal menu. Both quantitative data (movement time, errors) and qualitative data were collected in the evaluation. The results showed that a horizontal orientation of the menu resulted in decreased performance (in terms of movement time and error rate) compared to the two other conditions. Post-hoc feedback from participants, collected using a questionnaire, confirmed this difference. This research might contribute to guidelines for the design of 3D menus in virtual environments.
Keywords: floating menu; menu orientation; local interaction; pointing; evaluation; virtual environments
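ISO 9241-9 multi-tapping results are conventionally summarized as Fitts' throughput, combining speed with the observed endpoint spread. A short sketch of that standard computation (not code from the paper):

    import numpy as np

    def throughput(distance, endpoints, movement_times):
        # distance:       nominal distance between targets
        # endpoints:      selection coordinates along the task axis (same unit)
        # movement_times: movement times in seconds
        w_e = 4.133 * np.std(endpoints, ddof=1)    # effective target width
        id_e = np.log2(distance / w_e + 1.0)       # effective index of difficulty (bits)
        return id_e / np.mean(movement_times)      # throughput in bits/s

Comparing this single figure across the vertical, 45°-tilted and horizontal conditions is the usual way such menu-orientation differences are reported.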
Some Evidences of the Impact of Environment's Design Features in Routes Selection in Virtual Environments, pp. 154-163
  M. Emília C. Duarte; Elisângela Vilar; Francisco Rebelo; Júlia Teles; Ana Almeida
This paper reports results from a research project investigating users' navigation in a Virtual Environment (VE), using immersive Virtual Reality. The experiment was conducted to study the extent to which certain features of the environment (i.e., colors, windows, furniture, signage, corridors' width) may affect the way users select paths within a VE. Thirty university students participated in this study. They were requested to traverse a VE, as fast as possible and without pausing, until they reached the end. During the travel they had to make choices regarding the paths. The results confirmed that the window, corridor width, and exit sign factors are route predictors in that they influence path selection. The remaining factors did not significantly influence the decisions. These findings may have implications for the design of environments that enhance wayfinding.
Keywords: Virtual Reality; Wayfinding; paths selection; environmental features
Evaluating Human-Robot Interaction during a Manipulation Experiment Conducted in Immersive Virtual Reality, pp. 164-173
  Mihai Duguleana; Florin Grigorie Barbuceanu; Gheorghe Mogan
This paper presents the main highlights of a Human-Robot Interaction (HRI) study conducted during a manipulation experiment performed in a Cave Automatic Virtual Environment (CAVE). Our aim is to assess whether using immersive Virtual Reality (VR) for testing material handling scenarios that assume collaboration between robots and humans is a practical alternative to similar real-life applications. We focus on measuring variables identified as conclusive for the purpose of this study (such as the percentage of tasks successfully completed, the average time to complete a task, the relative distance and motion estimate, presence, and relative contact errors) during different manipulation scenarios. We present the experimental setup, the HRI questionnaire and the results analysis. We conclude by listing further research issues.
Keywords: human-robot interaction; immersive virtual reality; CAVE; presence; manipulation
3-D Sound Reproduction System for Immersive Environments Based on the Boundary Surface Control Principle, pp. 174-184
  Seigo Enomoto; Yusuke Ikeda; Shiro Ise; Satoshi Nakamura
We constructed a 3-D sound reproduction system containing a 62-channel loudspeaker array and a 70-channel microphone array based on the boundary surface control principle (BoSC). The microphone array can record a volume of the 3-D sound field, and the loudspeaker array can accurately recreate it in other locations. Using these systems, we realized immersive acoustic environments similar to cinema or television sound spaces. We also recorded real 3-D acoustic environments, such as an orchestra performance and forest sounds, using the microphone array. The recreated sound fields were evaluated in demonstration experiments. Subjective assessments by 390 subjects confirm that these systems achieve high presence in 3-D sound reproduction and provide the listener with deep immersion.
Keywords: Boundary surface control principle; Immersive environments; Virtual reality; Stereophony; Surround sound
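The boundary surface control principle rests on the Kirchhoff-Helmholtz integral: if the sound pressure and its normal gradient are reproduced on a closed boundary S (here sampled by the 70 microphones and driven by the 62 loudspeakers), the field inside S is determined. In assumed standard notation, not taken from the paper:

    % Kirchhoff-Helmholtz integral: pressure at a point r inside the volume bounded by S
    p(\mathbf{r}) = \int_{S} \left[
        G(\mathbf{r}|\mathbf{r}_s)\,\frac{\partial p(\mathbf{r}_s)}{\partial n}
      - p(\mathbf{r}_s)\,\frac{\partial G(\mathbf{r}|\mathbf{r}_s)}{\partial n}
    \right] \mathrm{d}S,
    \qquad
    G(\mathbf{r}|\mathbf{r}_s) = \frac{e^{-jk|\mathbf{r}-\mathbf{r}_s|}}{4\pi\,|\mathbf{r}-\mathbf{r}_s|}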
Workspace-Driven, Blended Orbital Viewing in Immersive Environments, pp. 185-193
  Scott Frees; David Lancellotti
We present several additions to orbital viewing in immersive virtual environments, including a method of blending standard and orbital viewing to allow smoother transitions between modes and more flexibility when working in larger workspaces. Based on pilot studies, we present methods of allowing users to manipulate objects while using orbital viewing in a more natural way. Also presented is an implementation of workspace recognition, where the application automatically detects areas of interest and offers to invoke orbital viewing as the user approaches.
Keywords: Immersive Virtual Environments; Context-Sensitive Interaction; 3DUI; interaction techniques
Irradiating Heat in Virtual Environments: Algorithm and Implementation, pp. 194-203
  Marco Gaudina; Andrea Brogni; Darwin G. Caldwell
Human-computer interactive systems have focused mostly on graphical rendering, implementation of haptic feedback, or delivery of auditory information. Human senses are not limited to this information, and other physical characteristics, like thermal sensation, are under research and development. In Virtual Reality, few algorithms and implementations have been developed to simulate the thermal characteristics of the environment, yet this physical characteristic can be used to dramatically improve overall realism. Our approach is to establish a preliminary way of modelling an irradiating thermal environment that takes into account the physical characteristics of the heat source. We defined an algorithm in which the irradiating heat surface is analysed for its physical characteristics, material and orientation with respect to a point of interest. To test the algorithm's consistency, some experiments were carried out and the results analysed. We implemented the algorithm in a basic virtual reality application, using a simple, low-cost thermo-feedback device to allow the user to perceive temperature in the 3D space of the environment.
Keywords: Virtual Reality; Thermal Characteristic; Haptic; Physiology
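One plausible reading of such an irradiating-heat model, consistent with the quantities the abstract names (source material, area, orientation, and distance to a point of interest), is the Stefan-Boltzmann law for a Lambertian emitter attenuated by angle and distance. A sketch under those assumptions, not the paper's exact algorithm:

    import numpy as np

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant (W / m^2 K^4)

    def irradiance_at_point(src_pos, src_normal, src_area, src_temp, emissivity, point):
        # Radiant flux density (W/m^2) reaching `point` from a small hot patch.
        to_point = point - src_pos
        d2 = float(np.dot(to_point, to_point))
        direction = to_point / np.sqrt(d2)
        cos_theta = max(float(np.dot(src_normal, direction)), 0.0)  # orientation term
        emitted = emissivity * SIGMA * src_temp**4   # exitance at the source surface
        return emitted * src_area * cos_theta / (np.pi * d2)

    # e.g., a 0.1 m^2 radiator at 600 K, 0.5 m in front of the user's hand:
    print(irradiance_at_point(np.array([0., 0., 0.]), np.array([0., 0., 1.]),
                              0.1, 600.0, 0.9, np.array([0., 0., 0.5])))

The returned value would drive a thermo-feedback device as the tracked hand moves through the 3D space.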
Providing Immersive Virtual Experience with First-Person Perspective Omnidirectional Movies and Three Dimensional Sound Field, pp. 204-213
  Kazuaki Kondo; Yasuhiro Mukaigawa; Yusuke Ikeda; Seigo Enomoto; Shiro Ise; Satoshi Nakamura; Yasushi Yagi
Providing a highly immersive feeling to audiences has advanced with the growth of video and acoustic media techniques. In our proposal, we record and reproduce omnidirectional movies captured from the perspective of an actor, together with the three-dimensional sound field around him, to provide a more impressive experience. We propose a sequence of techniques to achieve this, including recording equipment, video and acoustic processing, and a presentation system. The effectiveness of, and demand for, our system has been demonstrated through evaluation experiments with ordinary people.
Keywords: First-person Perspective; Omnidirectional Vision; Three Dimensional Sound Reproduction; Boundary Surface Control Principle
Intercepting Virtual Ball in Immersive Virtual Environment, pp. 214-222
  Massimiliano Valente; Davide Sobrero; Andrea Brogni; Darwin G. Caldwell
Catching a flying ball is a difficult task that requires the sensory systems to calculate the precise trajectory of the ball to predict its movement, and the motor systems to drive the hand to the right place at the right time.
   In this paper we analyze human performance in an interception task performed in an immersive virtual environment, and the possible improvement of performance by adding feedback.
   Virtual balls were launched from a distance of 11 m with 12 trajectories. The volunteers were equipped only with shutter glasses and one marker on the back of the hand, to avoid any constriction of natural movements. We ran the experiment in a natural scene, either without feedback or with acoustic feedback reporting a correct intercept. Analysis of performance shows a significant increase in successful trials in the feedback condition. The experimental results are better than those of similar experiments described in the literature, but performance is still lower than in the real world.
Keywords: Virtual Reality; Ecological Validity; Interceptive Action

Novel Interaction Devices and Techniques in VR

Concave-Convex Surface Perception by Visuo-vestibular Stimuli for Five-Senses Theater, pp. 225-233
  Tomohiro Amemiya; Koichi Hirota; Yasushi Ikei
The paper describes a pilot study of perceptual interactions among visual, vestibular, and tactile stimulations for enhancing the sense of presence and naturalness for ultra-realistic sensations. In this study, we focused on understanding the temporally and spatially optimized combination of visuo-tactile-vestibular stimuli that would create concave-convex surface sensations. We developed an experimental system to present synchronized visuo-vestibular stimulation and evaluated the influence of various combinations of visual and vestibular stimuli on the shape perception by body motion. The experimental results urge us to add a tactile sensation to facilitate ultra-realistic communication by changing the contact area between the human body and motion chair.
Keywords: vestibular stimulation; ultra realistic; multimodal; tactile
Touching Sharp Virtual Objects Produces a Haptic Illusion, pp. 234-242
  Andrea Brogni; Darwin G. Caldwell; Mel Slater
Top-down perceptual processing implies that much of what we perceive is based on prior knowledge and expectation. It has been argued that such processing is why Virtual Reality works at all -- the brain filling in missing information based on expectation. We investigated this with respect to touch. Seventeen participants were asked to touch different objects seen in a Virtual Reality system. Although no haptic feedback was provided, questionnaire results show that sharpness was experienced when touching a virtual cone and scissors, but not when touching a virtual sphere. Skin conductance responses separate out the sphere as different from the remaining objects. Such exploitation of expectation-based illusory sensory feedback could be useful in the design of plausible virtual environments.
Keywords: Virtual Reality; Human Reaction; Physiology; Haptic Illusion
Whole Body Interaction Using the Grounded Bar Interface, pp. 243-249
  Bong-gyu Jang; Hyunseok Yang; Gerard J. Kim
Whole body interaction is an important element in promoting the level of presence and immersion in virtual reality systems. In this paper, we investigate the effect of "grounding" the interaction device to take advantage of the significant passive reaction force feedback sensed throughout the body, in effect realizing whole body interaction without complicated sensing and feedback apparatus. An experiment was conducted to assess task performance and the level of presence/immersion, as compared to a keyboard input method, using a maze navigation task. The results showed that the G-Bar induced significantly higher presence, while task performance (maze completion time and number of wall collisions) was on par with the already familiar keyboard interface. The keyboard users instead had to adjust and learn how to navigate faster and avoid colliding with the wall over time, indicating that the whole body interaction contributed to a better perception of the immediate space. Thus, considering the learning rate and the relative unfamiliarity of the G-Bar, with sufficient training the G-Bar could accomplish both high presence/immersion and high task performance.
Keywords: Whole-body interaction; Presence; Immersion; Task performance; Isometric interaction
Digital Display Case Using Non-contact Head Tracking, pp. 250-259
  Takashi Kajinami; Takuji Narumi; Tomohiro Tanikawa; Michitaka Hirose
In our research, we aim to construct the Digital Display Case system, which enables museum exhibitions to use virtual exhibits rendered with computer graphics technology, to convey background information about exhibits effectively. In this paper, we consider more practical use in museums and construct the system using non-contact head tracking, which does not require users to wear any special devices. We use a camera and a range camera to detect and track the user's face, and we compute the images on the displays so that users can appreciate virtual exhibits as if the exhibits were really inside the case.
Keywords: Digital Display Case; Digital Museum; Computer Graphics; Virtual Reality
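Head-coupled rendering of this kind is usually implemented with an off-axis (asymmetric) view frustum computed from the tracked head position relative to the display; whether the authors use exactly this formulation is not stated. A sketch of the standard computation in Python:

    def off_axis_frustum(head, screen_w, screen_h, near, far):
        # Frustum bounds for a screen centred at the origin in the z = 0 plane,
        # with the tracked head at head = (x, y, z), z > 0 (metres).
        x, y, z = head
        scale = near / z                  # project screen edges onto the near plane
        left   = (-screen_w / 2 - x) * scale
        right  = ( screen_w / 2 - x) * scale
        bottom = (-screen_h / 2 - y) * scale
        top    = ( screen_h / 2 - y) * scale
        return left, right, bottom, top, near, far   # e.g., arguments to glFrustum

    # Head 0.6 m in front of a 0.5 m x 0.3 m display, slightly right of centre:
    print(off_axis_frustum((0.1, 0.0, 0.6), 0.5, 0.3, 0.05, 10.0))

Recomputing this frustum every frame from the face-tracking output is what makes the virtual exhibit appear fixed inside the case as the visitor moves.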
Meta Cookie+: An Illusion-Based Gustatory Display, pp. 260-269
  Takuji Narumi; Shinya Nishizaka; Takashi Kajinami; Tomohiro Tanikawa; Michitaka Hirose
In this paper, we propose an illusion-based "pseudo-gustation" method to change the perceived taste of a food as people eat it, by changing its appearance and scent with augmented reality technology. We aim to utilize influences between modalities to realize a "pseudo-gustatory" system that enables the user to experience various tastes without changing the chemical composition of foods. Based on this concept, we built the "Meta Cookie+" system, which changes the perceived taste of a cookie by overlaying visual and olfactory information onto a real cookie. We performed an experiment investigating how people experience the flavor of a plain cookie when using our system. The results suggest that our system can change perceived taste through the cross-modal interaction of vision, olfaction and gustation.
Keywords: Illusion-based Virtual Reality; Gustatory Display; Pseudo-gustation; Cross-modal Integration; Augmented Reality
LIS3D: Low-cost 6DOF Laser Interaction for Outdoor Mixed Reality, pp. 270-279
  Pedro Santos; Hendrik Schmedt; Bernd Amend; Philip Hammer; Ronny Giera; Elke Hergenröther; André Stork
This paper introduces a new low-cost, laser-based 6DOF interaction technology for outdoor mixed reality applications. It can be used in a variety of outdoor mixed reality scenarios for making 3D annotations or correctly placing 3D virtual content anywhere in the real world. In addition, it can also be used with virtual back-projection displays for scene navigation purposes. Applications can range from design review in the architecture domain to cultural heritage experiences on location. Previous laser-based interaction techniques only yielded 2D or 3D intersection coordinates of the laser beam with a real world object. The main contribution of our solution is that we are able to reconstruct the full pose of an area targeted by our laser device in relation to the user. In practice, this means that our device can be used to navigate any scene in 6DOF. Moreover, we can place any virtual object or any 3D annotation anywhere in a scene, so it correctly matches the user's perspective.
Olfactory Display Using Visual Feedback Based on Olfactory Sensory Map, pp. 280-289
  Tomohiro Tanikawa; Aiko Nambu; Takuji Narumi; Kunihiro Nishimura; Michitaka Hirose
Olfactory sensation is based on chemical signals, whereas visual and auditory sensations are based on physical signals. Existing olfactory displays can therefore only present a set of scents prepared beforehand, because a set of "primary odors" has not been found. In our study, we focus on the development of an olfactory display using cross-modality, which can represent more scent patterns than the number of scents prepared. We construct an olfactory sensory map by asking subjects to smell various aroma chemicals and evaluate their similarity. Based on the map, we selected a few aroma chemicals and implemented a combined visual and olfactory display. We succeeded in generating various smell sensations from only a few aromas; an aroma can be substituted by pictures, and the nearer two aromas are on the map, the more strongly one can be substituted by a picture of the other. Thus, using the olfactory map, we can reduce the number of aromas needed in olfactory displays.
Keywords: Olfactory display; Multimodal interface; Cross modality; Virtual Reality
Towards Noninvasive Brain-Computer Interfaces during Standing for VR Interactions, pp. 290-294
  Hideaki Touyama
In this study, we propose a portable Brain-Computer Interface (BCI) aiming to realize a novel interaction with VR objects while standing. The ElectroEncephaloGram (EEG) was recorded under two experimental conditions: I) the subject sitting at rest and II) the subject under simulated walking conditions in an indoor environment. In both conditions, the Steady-State Visual Evoked Potential (SSVEP) was successfully detected using computer-generated visual stimuli. This result suggests that EEG signals from portable BCI systems could provide a useful interface for performing VR interactions while standing in indoor environments such as immersive virtual spaces.
Keywords: Brain-Computer Interface (BCI); Electroencephalogram (EEG); Steady-State Visual Evoked Potential (SSVEP); standing; immersive virtual environment
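SSVEP detection of the kind described reduces to comparing narrow-band EEG power at each stimulus flicker frequency. A hedged sketch in Python; the channel, window, and frequencies are hypothetical, not from the paper:

    import numpy as np

    def ssvep_classify(eeg, fs, stim_freqs, harmonics=2):
        # eeg:        1-D occipital signal (e.g., Oz)
        # fs:         sampling rate in Hz
        # stim_freqs: candidate flicker frequencies, one per VR target
        spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        scores = []
        for f in stim_freqs:
            score = 0.0
            for h in range(1, harmonics + 1):
                band = (freqs > h * f - 0.25) & (freqs < h * f + 0.25)
                score += spectrum[band].sum()    # power at the h-th harmonic
            scores.append(score)
        return stim_freqs[int(np.argmax(scores))]

    # e.g., three VR targets flickering at hypothetical rates:
    # selected = ssvep_classify(oz_signal, fs=256, stim_freqs=[10.0, 12.0, 15.0])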

Human Physiology and Behaviour in VR Environments

Stereoscopic Vision Induced by Parallax Images on HMD and Its Influence on Visual Functions, pp. 297-305
  Satoshi Hasegawa; Akira Hasegawa; Masako Omori; Hiromu Ishio; Hiroki Takada; Masaru Miyao
Visual function of lens accommodation was measured while subjects used stereoscopic vision in a head mounted display (HMD). Eyesight with stereoscopic Landolt ring images displayed on the HMD was also studied. In addition, the recognized size of virtual stereoscopic images was estimated using the HMD. Accommodation to virtual objects was seen when subjects viewed stereoscopic images of 3D computer graphics, but not when the images were displayed without appropriate binocular parallax. This suggests that stereoscopic moving images on the HMD induced visual accommodation. Accommodation should be adjusted to the position of virtual stereoscopic images induced by parallax; the difference between the distances of the focused display and the stereoscopic image may cause visual load. However, an experiment showed that Landolt rings of almost the same size were distinguished regardless of the virtual distance of the 3D images, provided the parallax was not larger than the fusional upper limit. In contrast, congruent figures that were simply shifted to cause parallax were seen as larger as the distance to the virtual image became longer. The results of this study suggest that stereoscopic moving images on the HMD induced visual accommodation through expansion and contraction of the ciliary muscle, synchronized with convergence. Appropriate parallax in stereoscopic vision should not reduce the visibility of stereoscopic virtual objects. The recognized size of the stereoscopic images was influenced by the distance of the virtual image from the display.
Keywords: 3-D Vision; Lens Accommodation; Eyesight; Landolt ring; Size Constancy
Comparison of Accommodation and Convergence by Simultaneous Measurements during 2D and 3D Vision Gaze, pp. 306-314
  Hiroki Hori; Tomoki Shiomi; Tetsuya Kanda; Akira Hasegawa; Hiromu Ishio; Yasuyuki Matsuura; Masako Omori; Hiroki Takada; Satoshi Hasegawa; Masaru Miyao
Accommodation and convergence were measured simultaneously while subjects viewed 2D and 3D images. The aim was to compare fixation distances between accommodation and convergence in young subjects while they viewed 2D and 3D images. Measurements were made using an original machine that combined the WAM-5500 and EMR-9, and 2D and 3D images were presented using a liquid crystal shutter system. The results suggested that subjects' accommodation and convergence changed periodically in diopter value when viewing 3D images. The mean values of accommodation and convergence among the 6 subjects were almost equal when viewing 2D and 3D images, respectively. These findings suggest that ocular functions when viewing 3D images are very similar to those during natural viewing. When subjects are young, accommodative power while viewing 3D images is similar to the convergence distance, and the two focusing distances are synchronized with each other.
Keywords: Stereoscopic Vision; Simultaneous Measurement; Accommodation and Convergence; Visual Fatigue
Tracking the UFO's Paths: Using Eye-Tracking for the Evaluation of Serious Games, pp. 315-324
  Michael D. Kickmeier-Rust; Eva-Catherine Hillemann; Dietrich Albert
Computer games are undoubtedly an enormously successful genre. Over the past years, a continuously growing community of researchers and practitioners has made the idea of using the potential of computer games for serious, primarily educational purposes equally popular. However, the present hype over serious games is not reflected in sound evidence for the effectiveness and efficiency of such games, and indicators for the quality of learner-game interaction are also lacking. In this paper we look into these questions by investigating a geography learning game prototype. A strong focus of the investigation was on relating the assessed variables to gaze data, in particular gaze paths and interaction strategies in specific game situations. The results show that there are distinct gender differences in the style of interaction with different game elements, depending on the demands on spatial abilities (navigating in three-dimensional spaces versus controlling the rather two-dimensional features of the game), as well as distinct differences between high and low performers.
Keywords: Game-based learning; serious games; learning performance; eye tracking
The Online Gait Measurement for Characteristic Gait Animation Synthesis, pp. 325-334
  Yasushi Makihara; Mayu Okumura; Yasushi Yagi; Shigeo Morishima
This paper presents a method to measure gait features online from gait silhouette images and to synthesize characteristic gait animations for audience-participation digital entertainment. First, both static and dynamic gait features are extracted from the silhouette images captured by an online gait measurement system. Then, key motion data for various gaits are captured, and new motion data are synthesized by blending the key motion data. Finally, the blend ratios of the key motion data are estimated so as to minimize the gait feature errors between the blended model and the online measurement. In experiments, the effectiveness of the gait feature extraction was confirmed using 100 subjects from the OU-ISIR Gait Database, and characteristic gait animations were created based on the measured gait features.
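The blend-ratio estimation can be read as a constrained least-squares problem: find weights over the key motions whose blended features best match the measured gait features. A sketch in Python under that reading (the feature encoding and blending operator are assumed, and non-negativity of the weights is not enforced):

    import numpy as np

    def estimate_blend_ratios(key_features, measured):
        # key_features: (n_features, n_keys) matrix, one column per key motion
        # measured:     (n_features,) gait features from the online measurement
        # Minimise ||K w - f|| subject to sum(w) = 1, imposed here as a
        # heavily weighted extra equation.
        n_keys = key_features.shape[1]
        big = 1e6
        A = np.vstack([key_features, big * np.ones((1, n_keys))])
        b = np.concatenate([measured, [big]])
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return w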
Measuring and Modeling of Multi-layered Subsurface Scattering for Human Skin, pp. 335-344
  Tomohiro Mashita; Yasuhiro Mukaigawa; Yasushi Yagi
This paper introduces a Multi-Layered Subsurface Scattering (MLSSS) model to reproduce an existing human's skin in a virtual space. The MLSSS model consists of a three-dimensional layer structure, with each layer an aggregation of simple scattering particles. The model expresses directionally dependent and inhomogeneous radiance distributions. We constructed a measurement system consisting of four projectors and one camera. The parameters of the MLSSS model were estimated using the measurement system together with geometric and photometric analysis. Finally, we evaluated our method by comparing rendered images with real images.
An Indirect Measure of the Implicit Level of Presence in Virtual Environments, pp. 345-353
  Steven Nunnally; Durell Bouchard
Virtual Environments (VEs) are a common occurrence for many computer users. Considering their spreading usage and speedy development, it is ever more important to develop methods that capture and measure key aspects of a VE, like presence. One of the main problems with measuring the level of presence in VEs is that users may not be consciously aware of its effect. This is a problem especially for direct measures that rely on questionnaires and only measure the perceived level of presence explicitly. In this paper we develop and validate an indirect measure of users' implicit level of presence, based on the physical reactions of users to events in the VE. The addition of an implicit measure will enable us to evaluate and compare VEs more effectively, especially with regard to their main function as immersive environments. Our approach is practical, cost-effective and delivers reliable results.
Keywords: Virtual Environments; Presence; Indirect Implicit Measure
Effect of Weak Hyperopia on Stereoscopic Vision, pp. 354-362
  Masako Omori; Asei Sugiyama; Hiroki Hori; Tomoki Shiomi; Tetsuya Kanda; Akira Hasegawa; Hiromu Ishio; Hiroki Takada; Satoshi Hasegawa; Masaru Miyao
Convergence, accommodation and pupil diameter were measured simultaneously while subjects were watching 3D images. The subjects were middle-aged and had weak hyperopia. WAM-5500 and EMR-9 were combined to make an original apparatus for the measurements. It was confirmed that accommodation and pupil diameter changed synchronously with convergence. These findings suggest that with naked vision the pupil is constricted and the depth of field deepened, acting like a compensation system for weak accommodation power. This suggests that people in middle age can view 3D images more easily if positive (convex lens) correction is made.
Keywords: convergence; accommodation; pupil diameter; middle age; 3D image
Simultaneous Measurement of Lens Accommodation and Convergence to Real Objects, pp. 363-370
  Tomoki Shiomi; Hiromu Ishio; Hiroki Hori; Hiroki Takada; Masako Omori; Satoshi Hasegawa; Shohei Matsunuma; Akira Hasegawa; Tetsuya Kanda; Masaru Miyao
Human beings perceive objects as three-dimensional (3D) as a result of simultaneous lens accommodation and convergence on objects, which is possible because the right and left eyes see with parallax. Virtual images are perceived via the same mechanism, but the influence of binocular vision on human visual function is insufficiently understood. In this study, we developed a method to simultaneously measure accommodation and convergence in order to provide further support for our previous research findings. We also measured accommodation and convergence in natural vision to confirm that these measurements are correct. As a result, we found that both accommodation and convergence were consistent with the distance from the subject to the object. Therefore, the present measurement method is an effective technique for the measurement of visual function, and correct values can be obtained even during stereoscopic vision.
Keywords: simultaneous measurement; eye movement; accommodation and convergence; natural vision
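As a cross-check against natural vision, both quantities have simple closed forms in viewing distance: accommodative demand in diopters is the reciprocal of distance in metres, and the convergence angle follows from the interpupillary distance. A small sketch of these standard relations (the 0.063 m IPD is a typical adult value, not a figure from the paper):

    import numpy as np

    def accommodation_diopters(distance_m):
        # Accommodative demand for an object at distance_m (D = 1/m).
        return 1.0 / distance_m

    def convergence_angle_deg(distance_m, ipd_m=0.063):
        # Convergence angle for an object straight ahead at distance_m.
        return np.degrees(2.0 * np.arctan(ipd_m / (2.0 * distance_m)))

    # At 0.5 m: 2.0 D of accommodation and roughly 7.2 degrees of convergence.
    print(accommodation_diopters(0.5), convergence_angle_deg(0.5))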
Comparison in Degree of the Motion Sickness Induced by a 3-D Movie on an LCD and an HMD, pp. 371-379
  Hiroki Takada; Yasuyuki Matsuura; Masumi Takada; Masaru Miyao
Three-dimensional (3D) television sets are already on the market and are becoming increasingly popular among consumers. Watching stereoscopic 3D movies, though, can produce certain adverse effects such as asthenopia and motion sickness. Visually induced motion sickness (VIMS) is considered to be caused by an increase in visual-vestibular sensory conflict while viewing stereoscopic images. VIMS can be analyzed both psychologically and physiologically. According to our findings reported at the last HCI International conference, VIMS can be detected with the total locus length and sparse density, which are used as analytical indices of stabilograms. In the present study, we analyze the severity of motion sickness induced by viewing conventional 3D movies on a liquid crystal display (LCD) compared to that induced by viewing these movies on a head-mounted display (HMD). We quantitatively measured body sway in a resting state and during exposure to a conventional 3D movie on an LCD and an HMD. Subjects maintained the Romberg posture during the recording of stabilograms at a sampling frequency of 20 Hz. The simulator sickness questionnaire (SSQ) was completed before and immediately after exposure. Statistical analyses were applied to the SSQ subscores and to the abovementioned indices (total locus length and sparse density) of the stabilograms. Friedman tests showed main effects in the stabilogram indices. Multiple comparisons revealed that viewing the 3D movie on the HMD significantly affected body sway, despite a large visual distance.
Keywords: visually induced motion sickness; stabilometry; sparse density; liquid crystal displays (LCDs); head-mounted displays (HMDs)
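Of the two stabilogram indices, total locus length has a simple closed form: the summed Euclidean distance between consecutive centre-of-pressure samples. A sketch in Python (sparse density, the second index, is omitted because its exact definition is not given here):

    import numpy as np

    def total_locus_length(cop_xy):
        # cop_xy: (n_samples, 2) centre-of-pressure coordinates,
        # recorded at 20 Hz as in the study.
        steps = np.diff(cop_xy, axis=0)              # displacement per sample
        return np.sum(np.linalg.norm(steps, axis=1)) # summed segment lengths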
Evaluation of Human Performance Using Two Types of Navigation Interfaces in Virtual Reality, pp. 380-386
  Luís Teixeira; Emília Duarte; Júlia Teles; Francisco Rebelo
Most Virtual Reality studies use a hand-centric device as a navigation interface. Since this can be a problem when object manipulation is required, and it can even distract participants from other tasks if they have to "think" about how to move, a more natural, leg-centric interface seems more appropriate. This study compares human performance variables (distance travelled, time spent and task success) when using a hand-centric device (joystick) and a leg-centric interface (Nintendo Wii Balance Board) during a search task in a Virtual Environment. Forty university students (equally distributed across experimental conditions in gender and number) participated in this study. Results show that participants were more efficient when performing navigation tasks with the joystick than with the Balance Board. However, there were no significant differences in task success.
Keywords: Virtual Reality; Navigation interfaces; Human performance
Use of Neurophysiological Metrics within a Real and Virtual Perceptual Skills Task to Determine Optimal Simulation Fidelity Requirements, pp. 387-399
  Jack Maxwell Vice; Anna Skinner; Chris Berka; Lauren Reinerman-Jones; Daniel Barber; Nicholas Pojman; Veasna Tan; Marc M. Sebrechts; Corinna E. Lathan
The military is increasingly looking to virtual environment (VE) developers and cognitive scientists to provide virtual training platforms to support optimal training effectiveness within significant time and cost constraints. However, current methods for determining the most effective levels of fidelity in these environments are limited. Neurophysiological metrics may provide a means for objectively assessing the impact of fidelity variations on training. The current experiment compared neurophysiological and performance data for a real-world perceptual discrimination task as well as a similarly-structured VE training task under systematically varied fidelity conditions. Visual discrimination and classification was required between two militarily-relevant (M-16 and AK-47 rifle), and one neutral (umbrella) stimuli, viewed through a real and virtual Night Vision Device. Significant differences were found for task condition (real world versus virtual, as well as visual stimulus parameters within each condition), within both the performance and physiological data.