Classification of Interaction Techniques in the 3D Virtual Environment on Mobile Devices | | BIBAK | Full-Text | 3-13 | |
Eliane Balaa; Mathieu Raynal; Youssef Bou Issa; Emmanuel Dubois | |||
3D Virtual Environments (3DVE) are increasingly used in applications such as CAD, games, and teleoperation. Owing to improvements in smartphone hardware performance, 3D applications have also been introduced to mobile devices. In addition, smartphones provide new computing capabilities far beyond traditional voice communication, enabled by a variety of built-in sensors and by internet connectivity. Consequently, interesting 3D applications can be designed by exploiting these device capabilities for interaction in a 3DVE. Because smartphones have small, flat screens while a 3DVE is wide and dense, mobile devices face three constraints: environment density, target depth, and occlusion. The pointing task must overcome these three problems to select a target. We propose a new classification of the existing interaction techniques along three axes: a) the three discussed problems (density, depth and occlusion); b) the first two subtasks of the pointing task (navigation, selection); and c) the number of targets selected by the pointing technique (1 or N). In this paper we begin by presenting a state of the art of the pointing techniques in existing 3DVE, structured around three selection techniques: a) Ray casting, b) Curve and c) Point cursor. We then present our classification and illustrate it on the main pointing techniques for 3DVE. From this classification, we discuss the type of interaction that seems the most appropriate to perform this subtask optimally. Keywords: Interaction techniques; 3D Virtual environment; mobile devices; environment density; depth of targets; occlusion; Augmented Reality |
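The proposed three-axis classification lends itself to a compact record type. Below is a minimal Python sketch of such a record; the axis values follow the abstract, but the concrete assignment shown for ray casting is illustrative, not taken from the paper.

```python
from dataclasses import dataclass

# Minimal sketch of the three-axis classification as a record type.
@dataclass
class TechniqueClassification:
    technique: str            # e.g., "Ray casting", "Curve", "Point cursor"
    problems: frozenset       # subset of {"density", "depth", "occlusion"} addressed
    subtask: str              # "navigation" or "selection"
    targets_selected: str     # "1" or "N"

raycast = TechniqueClassification(
    technique="Ray casting",
    problems=frozenset({"depth", "occlusion"}),   # hypothetical assignment
    subtask="selection",
    targets_selected="1",
)
```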
Multimodal Interfaces and Sensory Fusion in VR for Social Interactions | | BIBAK | Full-Text | 14-24 | |
Esubalew Bekele; Joshua W. Wade; Dayi Bian; Lian Zhang; Zhi Zheng; Amy Swanson; Medha Sarkar; Zachary Warren; Nilanjan Sarkar | |||
Difficulties in social interaction and in verbal and non-verbal communication, as well as repetitive and atypical patterns of behavior, are typical characteristics of autism spectrum disorder (ASD). Advances in computer and robotic technology are enabling assistive technologies for intervention in psychiatric disorders such as ASD and schizophrenia (SZ). A number of research studies indicate that many children with ASD prefer technology, and this preference can be exploited to develop systems that may alleviate several challenges of traditional treatment and intervention. The current work presents the development of an adaptive virtual reality-based social interaction platform for children with ASD. It is hypothesized that a technological system that can detect the feelings and mental state of the child and adapt its interaction accordingly would be of great importance in assisting and individualizing traditional intervention approaches. The proposed system employs sensors such as eye trackers and physiological signal monitors, and models the context-relevant psychological state of the child from a combination of these sensors. Preliminary affect recognition results indicate that psychological states can be determined from peripheral physiological signals and that, together with other modalities including the participant's gaze and performance, it is viable to adapt and individualize VR-based intervention paradigms. Keywords: Social interaction; virtual reality; autism intervention; multimodal system; adaptive interaction; eye tracking; physiological processing; sensor fusion |
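A minimal sketch of the closed adaptation loop the abstract describes: multimodal features feed an affect classifier whose output adjusts task difficulty. The feature set, labels, classifier choice and the `adapt_difficulty` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy training data standing in for labeled multimodal features:
# [gaze fixation duration (s), heart rate (bpm), skin conductance (uS)]
X = np.array([[0.8, 72, 2.1], [0.2, 95, 6.3], [1.5, 60, 1.2], [0.3, 98, 7.0]])
y = np.array(["engaged", "anxious", "bored", "anxious"])

clf = RandomForestClassifier(random_state=0).fit(X, y)

def adapt_difficulty(features, level):
    """Map the inferred affective state to a task-difficulty adjustment."""
    state = clf.predict([features])[0]
    if state == "anxious":
        return level - 1      # ease off when the child appears anxious
    if state == "bored":
        return level + 1      # raise the challenge when disengaged
    return level

print(adapt_difficulty([0.25, 96, 6.8], level=3))   # anxious -> level 2
```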
Multi-modal Interaction System to Tactile Perception | | BIBAK | Full-Text | 25-34 | |
Lorenzo Cavalieri; Michele Germani; Maura Mengoni | |||
Haptic simulation of materials is one of the most important challenges in human-computer interaction. A fundamental step toward achieving it concerns defining how human beings encode the information acquired through the stimulation of different sensory channels. In this context, this paper presents the study, implementation and evaluation of a multi-modal cutaneous feedback device (CFD) for the simulation of material textures. In addition to tactile stimulation, two further sensory components (sight and hearing) are integrated to help the user better recognize and discriminate different classes of materials and thus overcome previously identified drawbacks. An experimental protocol is tuned to assess the relevance of each stimulated channel in material texture recognition. Tests are carried out with real and virtual materials, and a comparison of the results is used to validate the proposed approach and verify the realism of the simulation. Keywords: Electrocutaneous feedback; haptic; multi-modal stimulation |
Principles of Dynamic Display Aiding Presence in Mixed Reality Space Design | | BIBAK | Full-Text | 35-43 | |
Inkyung Choi; Jihyun Lee | |||
In this study, presence principles were developed for the design and evaluation of dynamic displays in mixed reality space design. Indicators collected from existing research on the measurement and evaluation of the sense of presence, and on information presentation methods in mixed reality, were classified into evaluation principles for the displays and multimodal interfaces that constitute a mixed reality. Additionally, a QFD evaluation frame was constructed based on these presence principles and used to evaluate the interfaces composing the mixed reality, so that the research results can be reflected in future work. Keywords: Spatial Presence; Dynamic Display; Mixed Reality; Presence Principles |
Combining Multi-Sensory Stimuli in Virtual Worlds -- A Progress Report | | BIBAK | Full-Text | 44-54 | |
Julia Fröhlich; Ipke Wachsmuth | |||
In order to make a significant step towards more realistic virtual
experiences, we created a multi-sensory stimuli display for a CAVE-like
environment. It comprises graphics, sound, tactile feedback, wind and warmth.
In the present report we discuss the possibilities and constraints tied to such
an enhancement. Using a multi-modal display properly requires many considerations, including safety requirements, hardware devices and software integration. For each stimulus, different options are reviewed with regard to their assets and drawbacks. Finally, the resulting setup realized in our lab is described -- to our knowledge one of the most comprehensive such systems. Technical evaluations as well as user studies accompanied the development and provided guidance on necessities and opportunities. Keywords: Multi-Sensory Stimuli; Wind; Warmth; Presence; Virtual Reality |
R-V Dynamics Illusion: Psychophysical Influence on Sense of Weight by Mixed-Reality Visual Stimulation of Moving Objects | | BIBAK | Full-Text | 55-64 | |
Satoshi Hashiguchi; Yohei Sano; Fumihisa Shibata; Asako Kimura | |||
When humans sense the weight of real objects, their perception is known to
be influenced by not only tactile information but also visual information. In a
Mixed-Reality (MR) environment, the appearance of touchable objects can be
changed by superimposing a computer-generated image (CGI) onto them (MR visual
stimulation). In this paper, we studied the psychophysical influence on the
sense of weight by using a real object that has a CGI superimposed on it. In
the experiments, we showed CGI representing the inertial force caused by movable objects inside, while the subject swung the real object. The results of the experiments show that the subjects sensed weight differently when shown the CGI animation. Keywords: Mixed Reality; Sense of Weight; Visual Stimulation; Psychophysical Influence |
Expansion of the Free Form Projection Display Using a Hand-Held Projector | | BIBA | Full-Text | 65-74 | |
Kaoru Kenjo; Ryugo Kijima | |||
We developed a multi-projection system that supplements the free-form projection display (FFPD), in which a virtual object image is projected onto a real object, with the projection of a hand-held projector. This system enables users to expand the projection area and to inspect an area of interest by converting it to a high-definition display. Furthermore, we investigated the effects on the user's stereoscopy of the visual gap between the images projected by each projector. |
Study of an Interactive and Total Immersive Device with a Personal 3D Viewer and Its Effects on the Explicit Long-Term Memories of the Subjects | | BIBA | Full-Text | 75-84 | |
Evelyne Lombardo | |||
We studied an interactive (functional and intentional interactivity) and totally immersive (technical and psychological immersion) device with a personal 3D viewer (360° vision, environmentally ego-centered) and its effects on the explicit long-term memories of the subjects (4 groups of 30 students, for a total of 120 subjects; 2007 and 2012). We tested memory, communication and the feeling of presence in our virtual environment with a canonical presence test (Witmer and Singer, 1998). This article is a reflection on these 3D devices and their impact on the students' long-term memory and sense of presence. |
Research and Simulation on Virtual Movement Based on Kinect | | BIBA | Full-Text | 85-92 | |
Qi Luo; Guohui Yang | |||
Kinect is a line of motion sensing input devices by Microsoft for Xbox 360 and Xbox One video game consoles and Windows PCs. Based around a webcam-style add-on peripheral, it enables users to control and interact with their console/computer without the need for a game controller, through a natural user interface using gestures and spoken commands. A virtual movement simulation system is designed in this paper. Key technologies of the simulation system are introduced, such as skinned character binding, Kinect data capture, movement data extraction and processing, mapping depth images to a skeleton, motion retargeting, and binding motion-data nodes to the skeleton model. |
A Natural User Interface for Navigating in Organized 3D Virtual Contents | | BIBAK | Full-Text | 93-104 | |
Guido Maria Re; Monica Bordegoni | |||
The research activity presented in this paper aims at extending traditional planar navigation, as adopted by many desktop applications for searching information, to an experience in a Virtual Reality (VR) environment. In particular, the work proposes a system that allows the user to navigate virtual environments in which the objects are spatially organized and sorted. The visualization of virtual objects has been designed, and a gesture-based interaction method has been proposed to trigger navigation in the environment. The article describes the design and development of the system, starting from some considerations about the intuitiveness and naturalness required for three-dimensional navigation. In addition, an initial case study has been carried out, consisting of using the system with a virtual 3D catalogue of furniture. Keywords: Virtual Reality; Natural User Interfaces; Navigation; Gestures; Virtual Catalogue |
Requirements for Virtualization of AR Displays within VR Environments | | BIBAK | Full-Text | 105-116 | |
Erik Steindecker; Ralph Stelzer; Bernhard Saske | |||
Everybody has been talking about new emerging products in augmented reality (AR) and their potential to enhance our daily life and work. The AR technology has been around for quite a while, and various use cases have been conceived and tested. Clearly, the new AR systems (e.g. Vuzix M100, Google Glass) will bring its use to a new level. For planning, designing and reviewing innovative AR systems and their applications, virtual reality (VR) technology can be supportive. Virtual prototypes of AR systems can be experienced and evaluated within a VR environment (e.g. CAVE).
This paper proposes the virtualization of AR displays within VR environments and discusses its requirements. A user study investigates the pixel density necessary for the legibility of a virtual display in order to verify the significance of guidelines given by ISO 9241-300. Furthermore, equations are used to examine the suitability of various VR systems for display virtualization within VR environments. This will enable reviews of various display systems in a virtual manner. Keywords: Virtual Reality; Augmented Reality; Virtualization; Display; User Study |
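The legibility question reduces to comparing the angular pixel density a VR system delivers against the density a readable character requires. The sketch below illustrates that comparison; the character-size and pixel-budget constants are assumptions for illustration, not values from ISO 9241-300 or the paper's equations.

```python
def pixels_per_degree(h_resolution_px: float, h_fov_deg: float) -> float:
    """Angular pixel density of a display, assuming pixels spread evenly over the FOV."""
    return h_resolution_px / h_fov_deg

def min_ppd_for_legibility(char_height_arcmin: float = 16.0,
                           px_per_char_height: float = 8.0) -> float:
    """Pixels/degree needed so a character of the given angular size spans enough pixels.
    The 16-arcmin character height and 8-pixel budget are illustrative assumptions."""
    return px_per_char_height / (char_height_arcmin / 60.0)

# Example: a projection wall with 1920 px across a 90-degree field of view.
ppd = pixels_per_degree(1920, 90.0)       # ~21.3 px/deg
print(ppd >= min_ppd_for_legibility())    # False -> virtualized AR display text likely illegible
```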
Robot Behavior for Enhanced Human Performance and Workload | | BIBAK | Full-Text | 117-128 | |
Grace Teo; Lauren Reinerman-Jones | |||
Advancements in robotics technology have made it necessary to determine how robots should be integrated and used in civilian tasks and military missions. Currently, the literature on robot employment in tasks and missions is limited, and few taxonomies exist that guide understanding of robot functionality. As robots acquire more capabilities and functions, they will likely be working more closely with humans in human-robot teams. In order to better utilize and design robots that enhance performance in such teams, a better understanding is needed of what robots can do and of the impact of these behaviors on the human operator/teammate. Keywords: Human-robot teaming; Robot behavior; Performance; Workload |
Subjective-Situational Study of Presence | | BIBAK | Full-Text | 131-138 | |
Nataly Averbukh | |||
The paper is devoted to the description of an interview approach for revealing the state of presence and its types, such as environmental, social and personal presence. The interview questions are described and analyzed in detail. The questions were formulated in view of the subjects' behavior and reactions during tests. The test subjects' answers are also analyzed from the perspective of revealing the sense of presence. The interview method proved its efficiency: it allowed the types of presence under study to be identified in practice. In addition, it enabled a better understanding of the dynamics of perceptual change in the case of presence. The flexibility of this method allows it to be adjusted to a specific virtual environment, and to clarify all the key aspects needed to understand presence. Keywords: sense of presence; interview approach; types of presence |
Development of a Squad Level Vocabulary for Human-Robot Interaction | | BIBAK | Full-Text | 139-148 | |
Daniel Barber; Ryan W. Wohleber; Avonie Parchment; Florian Jentsch; Linda Elliott | |||
Interaction with robots in military applications is trending away from
teleoperation and towards collaboration. Enabling this transition requires
technologies for natural and intuitive communication between Soldiers and
robots. Automated Speech Recognition (ASR) systems designed using a
well-defined lexicon are likely to be more robust to the challenges of dynamic
and noisy environments inherent to military operations. To successfully apply
this approach to ASR development, lexicons should involve an early focus on the
target audience. To facilitate development of a vocabulary focused at the squad level for Human-Robot Interaction (HRI), 31 Soldiers from Officer Candidate
School at Ft. Benning, GA provided hypothetical commands for directing an
autonomous robot to perform a variety of spatial navigation and reconnaissance
tasks. These commands were analyzed, using word frequency counts and
heuristics, to determine the structure and word choice of commands. Results
presented provide a baseline Squad Level Vocabulary (SLV) and a foundation for
development of HRI technologies enabling multi-modal communications within
mixed-initiative teams. Keywords: Human-robot interaction; human-robot teaming; mixed-initiative teams; speech
recognition |
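Word frequency counting over elicited commands, as the abstract describes, is straightforward; the sketch below shows the idea on invented utterances (the real study analyzed commands from 31 Soldiers).

```python
from collections import Counter
import re

# Hypothetical command transcripts standing in for the elicited corpus.
commands = [
    "robot, move to the building on the left",
    "advance to the tree line and report",
    "move forward fifty meters",
]

def word_frequencies(utterances):
    """Count word frequencies across commands to surface common structure and word choice."""
    counts = Counter()
    for u in utterances:
        counts.update(re.findall(r"[a-z']+", u.lower()))
    return counts

print(word_frequencies(commands).most_common(5))
```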
Towards an Interaction Concept for Efficient Control of Cyber-Physical Systems | | BIBAK | Full-Text | 149-158 | |
Ingo Keller; Anke Lehmann; Martin Franke; Thomas Schlegel | |||
In this work, we introduce our interaction concept for efficient control of
cyber-physical systems (CPS). The proposed concept addresses the challenges of
the increasing number of smart/electronic devices along with increasingly complex user interfaces. With a dual reality approach, the user is able to perform the same action in the physical world as well as in the virtual world, as both are kept synchronized. We thereby address the most compelling issues: ease of use, flexibility, and bridging the gap between the two worlds. Our approach is substantiated by two test scenarios in a characteristic CPS setting.
environments |
3D Design for Augmented Reality | | BIBAK | Full-Text | 159-169 | |
Ivar Kjellmo | |||
How do you define a good concept when designing augmented reality apps for
mobiles? This paper focuses on the technical, graphical and conceptual design processes in the development of 3D content for Augmented Reality on mobile devices. Based on experiences from the development and implementation of a course
in 3D design for Augmented Reality at NITH (The Norwegian School of IT),
challenges and methods in creating concepts, optimized graphics and visually
coherent content for AR will be discussed. Keywords: Augmented Reality; Virtual Reality; Mixed reality; Education; 3D design;
Concepts; Presence in augmented and virtual reality |
Don't Walk into Walls: Creating and Visualizing Consensus Realities for Next Generation Videoconferencing | | BIBA | Full-Text | 170-180 | |
Nicolas H. Lehment; Philipp Tiefenbacher; Gerhard Rigoll | |||
This contribution examines the problem of linking two remote rooms into one shared teleconference space using augmented reality (AR). Previous work in remote collaboration focuses either on the display of data and participants or on the interactions required to complete a given task. The surroundings are usually either disregarded entirely, or one room is chosen as the "hosting" room that serves as the reference space. In this paper, we aim to integrate the two physical spaces surrounding the users into the virtual conference space. We approach this problem using techniques borrowed from computational geometric analysis, computer graphics and 2D image processing. Our goal is to provide a thorough discussion of the problem and to describe an approach to creating consensus realities for use in AR videoconferencing. |
Transparency in a Human-Machine Context: Approaches for Fostering Shared Awareness/Intent | | BIBAK | Full-Text | 181-190 | |
Joseph B. Lyons; Paul R. Havig | |||
Advances in autonomy have the potential to reshape the landscape of the
modern world. Yet, research on human-machine interaction is needed to better
understand the dynamic exchanges required between humans and machines in order
to optimize human reliance on novel technologies. A key aspect of that exchange
involves the notion of transparency as humans and machines require shared
awareness and shared intent for optimal teamwork. Questions remain, however, regarding how to represent information in order to generate shared awareness
and intent in a human-machine context. The current paper will review a recent
model of human-robot transparency and will propose a number of methods to
foster transparency between humans and machines. Keywords: transparency; human-machine interaction; trust in automation; trust |
Delegation and Transparency: Coordinating Interactions So Information Exchange Is No Surprise | | BIBAK | Full-Text | 191-202 | |
Christopher A. Miller | |||
We argue that the concept and goal of "transparency" in human-automation
interactions does not make sense as naively formulated; humans cannot be aware
of everything automation is doing and why in most circumstances if there is to
be any cognitive workload savings. Instead, we argue, a concept of transparency
based on and shaped by delegation interactions provides a framework for what
should be communicated in "transparent" interactions and facilitates that
communication and comprehension. Some examples are provided from recent work in
developing delegation systems. Keywords: flexible automation; adaptive/adaptable automation; Playbook®;
delegation; Uninhabited Aerial Systems; trust; transparency; supervisory
control |
Trust and Consequences: A Visual Perspective | | BIBAK | Full-Text | 203-214 | |
Emrah Onal; John O'Donovan; Laura Marusich; Michael S. Yu; James Schaffer; Cleotilde Gonzalez; Tobias Höllerer | |||
User interface (UI) composition and information presentation can impact
human trust behavior. Trust is a complex concept studied by disciplines like
psychology, sociology, economics, and computer science. Definitions of trust
vary depending on the context, but are typically based on the core concept of
"reliance on another person or entity". Trust is a critical concept since the
presence or absence of the right level of trust can affect user behavior, and
ultimately, the overall system performance. In this paper, we look across four
studies to explore the relationship between UI elements and human trust
behavior. Results indicate that UI composition and information presentation can
impact human trust behavior. While further research is required to corroborate
and generalize these results, we hope that this paper will provide a reference
point for future studies by identifying UI elements that are likely to
influence human trust. Keywords: Trust; cooperation; user interface; visualization; design; typology; model |
Choosing a Selection Technique for a Virtual Environment | | BIBAK | Full-Text | 215-225 | |
Danilo Souza; Paulo Dias; Beatriz Sousa Santos | |||
Bearing in mind the difficulty of creating virtual environments, a
platform for Setting-up Interactive Virtual Environments (pSIVE) was created to
help non-specialists benefit from virtual applications involving virtual tours
where users may interact with elements of the environment to extract contextual
information. The platform allows creating virtual environments and setting up
their aspects, interaction methods and hardware to be used. The construction of
the world is done by loading 3D models and associating multimedia information
(videos, texts or PDF documents) to them.
A central interaction task in the envisioned applications of pSIVE is the selection of objects that have associated multimedia information. Thus, a comparative study between two variants of the ray-tracing selection technique was performed. The study also demonstrates the flexibility of the platform, since it was easily adapted to serve as a test environment. Keywords: Virtual Reality; Virtual Environments; Selection |
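For illustration, the following sketch shows a basic ray-based selection test against bounding spheres; the abstract does not detail pSIVE's two ray-tracing variants, so this stands in for the general technique only.

```python
import numpy as np

def ray_cast_select(ray_origin, ray_dir, objects):
    """Return the selectable object whose bounding sphere is hit closest to the ray origin."""
    best, best_t = None, np.inf
    d = ray_dir / np.linalg.norm(ray_dir)
    for obj in objects:                      # obj: dict with 'center' (3,) and 'radius'
        oc = obj["center"] - ray_origin
        t = float(np.dot(oc, d))             # closest approach along the ray
        if t < 0:
            continue                         # object is behind the pointer
        miss2 = np.dot(oc, oc) - t * t       # squared distance from ray to sphere center
        if miss2 <= obj["radius"] ** 2 and t < best_t:
            best, best_t = obj, t
    return best

objects = [{"center": np.array([0.0, 0.0, 5.0]), "radius": 0.5},
           {"center": np.array([1.0, 0.0, 3.0]), "radius": 0.5}]
hit = ray_cast_select(np.zeros(3), np.array([0.0, 0.0, 1.0]), objects)  # first object
```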
Augmented Reality Evaluation: A Concept Utilizing Virtual Reality | | BIBA | Full-Text | 226-236 | |
Philipp Tiefenbacher; Nicolas H. Lehment; Gerhard Rigoll | |||
In recent years the field of augmented reality (AR) has seen great advances in interaction, tracking and rendering. New input devices and mobile hardware have enabled entirely new interaction concepts for AR content. However, the high complexity of AR applications results in a lack of usability evaluation practices on the part of the developer. In this paper, we present a thorough classification of factors influencing user experience, split into the broad categories of rendering, tracking and interaction. Based on these factors, we propose an architecture for evaluating AR experiences prior to deployment in an adapted virtual reality (VR) environment. Thus we enable rapid prototyping and evaluation of AR applications, especially suited to challenging industrial AR projects. |
Good Enough Yet? A Preliminary Evaluation of Human-Surrogate Interaction | | BIBAK | Full-Text | 239-250 | |
Julian Abich IV; Lauren E. Reinerman-Jones; Gerald Matthews; Gregory F. Welch; Stephanie J. Lackey; Charles E. Hughes; Arjun Nagendran | |||
Research exploring the implementation of surrogates has included areas such
as training (Chuah et al., 2013), education (Yamashita, Kuzuoka, Fujimon, &
Hirose, 2007), and entertainment (Boberg, Piippo, & Ollila, 2008).
Determining the characteristics of the surrogate that could potentially
influence the human's behavioral responses during human-surrogate interactions
is of importance. The present work will draw on the literature about
human-robot interaction (HRI), social psychology literature regarding the
impact that the presence of a surrogate has on another human, and
communications literature about human-human interpersonal interaction. The
review will result in an experimental design to evaluate how various dimensions of the space of human-surrogate characteristics influence interaction. Keywords: human-robot interaction; human-surrogate interaction; communications; social
psychology; avatar; physical-virtual avatar |
A Design Methodology for Trust Cue Calibration in Cognitive Agents | | BIBAK | Full-Text | 251-262 | |
Ewart J. de Visser; Marvin Cohen; Amos Freedy; Raja Parasuraman | |||
As decision support systems have developed more advanced algorithms to
support the human user, it is increasingly difficult for operators to verify
and understand how the automation arrives at its decisions. This paper describes a
design methodology to enhance operators' decision making by providing trust
cues so that their perceived trustworthiness of a system matches its actual
trustworthiness, thus yielding calibrated trust. These trust cues consist of
visualizations to diagnose the actual trustworthiness of the system by showing
the risk and uncertainty of the associated information. We present a trust cue
design taxonomy that lists all possible information that can influence a trust
judgment. We apply this methodology to a scenario with advanced automation that
manages missions for multiple unmanned vehicles, and show specific trust cues for five levels of trust evidence. By focusing on both individual operator trust
and the transparency of the system, our design approach allows for calibrated
trust for optimal decision-making to support operators during all phases of
mission execution. Keywords: Trust; Trust Calibration; Trust Cues; Cognitive Agents; Uncertainty
Visualization; Bayesian Modeling; Computational Trust Modeling; Automation;
Unmanned Systems; Cyber Operations; Trustworthiness |
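As one hedged illustration of computational trust modeling, a Beta-Bernoulli update can track estimated system reliability from observed outcomes, with the spread of the belief driving an uncertainty visualization; the paper's Bayesian models and five-level trust evidence are richer than this sketch.

```python
# One Bayesian update of a Beta(alpha, beta) belief over system reliability.
def update_trust(alpha: float, beta: float, success: bool):
    return (alpha + 1, beta) if success else (alpha, beta + 1)

alpha, beta = 1.0, 1.0                     # uninformative prior
for outcome in [True, True, False, True]:  # observed automation successes/failures
    alpha, beta = update_trust(alpha, beta, outcome)

mean = alpha / (alpha + beta)              # point estimate of trustworthiness (~0.67)
print(f"estimated reliability: {mean:.2f}")
```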
Effects of Gender Mapping on the Perception of Emotion from Upper Body Movement in Virtual Characters | | BIBA | Full-Text | 263-273 | |
Maurizio Mancini; Andrei Ermilov; Ginevra Castellano; Fotis Liarokapis; Giovanna Varni; Christopher Peters | |||
Despite recent advancements in our understanding of the human perception of the emotional behaviour of embodied artificial entities in virtual reality environments, little remains known about various specifics relating to the effect of gender mapping on the perception of emotion from body movement. In this paper, a pilot experiment is presented investigating the effects of gender congruency on the perception of emotion from upper body movements. Male and female actors were enrolled to conduct a number of gestures within six general categories of emotion. These motions were mapped onto virtual characters with male and female embodiments. According to the gender congruency condition, the motions of male actors were mapped onto male characters (congruent) or onto female characters (incongruent) and vice-versa. A significant effect of gender mapping was found in the ratings of perception of three emotions (anger, fear and happiness), suggesting that gender may be an important aspect to be considered in the perception, and hence generation, of some emotional behaviours. |
AR Navigation System Using Interaction with a CG Avatar | | BIBA | Full-Text | 274-281 | |
Hirosuke Murata; Maiya Hori; Hiroki Yoshimura; Yoshio Iwai | |||
This paper describes a navigation system that is guided by a CG avatar using augmented reality (AR) technology. Some existing conventional AR navigation systems use arrows for route guidance. However, the positions to which the arrows point can be unclear because the actual scale of the arrow is unknown. In contrast, a navigation process conducted by a person indicates the routes clearly. In addition, this process offers a sense of safety with its expectation of arrival at the required destination, because the user can reach the destination as long as he/she follows the navigator. Moreover, the user can communicate easily with the navigator. In this research, we construct an AR navigation system using a CG avatar to perform interactively in place of a real person. |
Virtual Humans for Interpersonal and Communication Skills' Training in Crime Investigations | | BIBAK | Full-Text | 282-292 | |
Konstantinos Mykoniatis; Anastasia Angelopoulou; Michael D. Proctor; Waldemar Karwowski | |||
Virtual Humans (VHs) have been employed in multidisciplinary fields to
advance interpersonal skills critical to many professionals, including law enforcement agents, military personnel, managers, doctors, lawyers and others. Law enforcement agencies in particular have faced a growing need to develop human-to-human interpersonal training to improve interviewing and
interrogation skills. In this paper, we present a prototype VE that has been
developed to provide law enforcement agents with effective interview and
interrogation training and experiential learning. The virtual training
environment will need to be tested and formally evaluated to verify the
benefits compared to live exercises and traditional training techniques. Keywords: Virtual Human; Training; Law enforcement agents; Interpersonal Skills;
Virtual Environment |
The Avatar Written upon My Body: Embodied Interfaces and User Experience | | BIBAK | Full-Text | 293-304 | |
Mark Palmer | |||
There is a growing consensus that the perception of our body is emergent and
has a plasticity that can be affected through techniques such as the Rubber
Hand Illusion (RHI). Alongside this we are seeing increased capabilities in
technologies that track and represent our movements on screen. This paper will
examine these issues through the RHI and conditions such as Complex Regional
Pain Syndrome (CRPS) and consider the possibilities offered by these
technologies for therapeutic use. In addition, it will examine the issues raised for all users, asserting that we have reached a point where we can no longer afford to assume that these are merely tools of representation. Keywords: Avatar; Body Image; Complex Regional Pain Syndrome; Motion Sickness;
Emergent |
How Does Varying Gaze Direction Affect Interaction between a Virtual Agent and Participant in an On-Line Communication Scenario? | | BIBAK | Full-Text | 305-316 | |
Adam Qureshi; Christopher Peters; Ian Apperly | |||
Computer based perspective taking tasks in cognitive psychology often
utilise static images and auditory instructions to assess online communication.
Results are then explained in terms of theory of mind (the ability to
understand that other agents have different beliefs, desires and knowledge to
oneself). The current study utilises a scenario in which participants were
required to select objects in a grid after listening to instructions from an
on-screen director. The director was positioned behind the grid from the
participants' view. As objects in some slots were concealed from the view of
the director, participants needed to take the perspective of the director into
account in order to respond accurately. Results showed that participants
reliably made errors, attributable to not using the information from the
director's perspective efficiently, rather than not being able to take the
director's perspective. However, the fact that the director was represented by
a static sprite meant that even for a laboratory based experiment, the level of
realism was low. This could have affected the level of participant engagement
with the director and the task. This study, a collaboration between computer
science and psychology, advances the static sprite model by incorporating head
movement into a more realistic on-screen director with the aim of a) improving engagement and b) investigating whether gaze direction affects accuracy and
response times of object selection. Results suggest that gaze direction can
influence the speed of accurate object selection, but only slightly and in
certain situations; specifically those complex enough to warrant the
participant paying additional attention to gaze direction and those that
highlight perspective differences between themselves and the director. This in
turn suggests that engagement with a virtual agent could be improved by taking
these factors into account. Keywords: Theory of mind; on-line communication; gaze direction; engagement |
An Image Based Approach to Hand Occlusions in Mixed Reality Environments | | BIBAK | Full-Text | 319-328 | |
Andrea F. Abate; Fabio Narducci; Stefano Ricciardi | |||
The illusion of the co-existence of virtual objects in the physical world,
which is the essence of MR paradigm, is typically made possible by
superimposing virtual contents onto the surrounding environment captured
through a camera. This works well as long as the order of the planes to be composited is consistent with their distance from the observer. But whenever an object of the real world is expected to occlude the virtual contents, the illusion vanishes. What should be seen behind a real object could be visualized
over it instead, generating a "cognitive dissonance" that may compromise scene
comprehension and, ultimately, the interaction capabilities during the MR
experience. This paper describes an approach to handle hand occlusions in MR/AR
interaction contexts by means of an optimized stereo matching technique based
on the belief propagation algorithm. Keywords: mixed reality; hand occlusion; disparity map |
Assembly of the Virtual Model with Real Hands Using Augmented Reality Technology | | BIBAK | Full-Text | 329-338 | |
Poonpong Boonbrahm; Charlee Kaewrat | |||
In the past few years, research in the field of Augmented Reality (AR) has expanded from technical aspects, such as tracking systems and authoring tools, to applications ranging from education and entertainment to medicine and manufacturing. In manufacturing, which relies on assembly processes, AR is used to assist staff in maintenance and assembly, usually as a guidance system, for example by providing graphical instructions that advise users on the steps of a maintenance or assembly operation. In assembly training, especially for small, expensive or hazardous devices, an interactive technique using the real hands may be more suitable than a guiding technique. By using a tracking algorithm to track both hands in real time, interaction can occur through the execution of grasp and release gestures. Bare-hand tracking, which uses gesture recognition to enable interaction with augmented objects, is also possible. In this paper, we attempt to use a marker-based AR technique to assemble 3D virtual objects through natural hand interaction. By fitting markers on the fingertips and assigning to each a corresponding virtual 3D finger with physical properties such as surface, volume, density, friction and collision detection, interaction between fingers and objects can be executed. This setup was designed on a PC-based system but could be ported to iOS or Android, so that it would also work on tablets or mobile phones. The Unity 3D game engine was used with the Vuforia AR platform. In order to grab and move a virtual object by hand, the shape of the virtual finger (Vuforia's target) was investigated. An appropriate friction coefficient was applied to both the virtual fingers and the object, and at least two virtual fingers were forced to press on the 3D virtual object in opposite directions so that the frictional force exceeds the gravitational force. To test this method, a virtual model of a LEGO mini-figure composed of five pieces was used, and the assembly could be done in a short time. Compared with other popular techniques such as gesture recognition, we found that our technique provides better results in terms of cost and natural feeling. Keywords: Augmented Reality; Manufacturing; Assembly Process; Virtual Object Assembly |
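The grasp condition described in the abstract, friction from opposing virtual fingers exceeding gravity, is a one-line statics check. The sketch below illustrates it with invented values, not figures from the paper.

```python
def grasp_is_stable(mu: float, normal_force_n: float, mass_kg: float,
                    n_fingers: int = 2, g: float = 9.81) -> bool:
    """The combined friction (mu * N per finger contact) must exceed the weight m * g."""
    return n_fingers * mu * normal_force_n >= mass_kg * g

# Two fingers pressing with 0.05 N each on a 5 g part: 0.06 N of friction vs 0.049 N weight.
print(grasp_is_stable(mu=0.6, normal_force_n=0.05, mass_kg=0.005))  # True
```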
Future Media Internet Technologies for Digital Domes | | BIBAK | Full-Text | 339-350 | |
Dimitrios Christopoulos; Efstathia Hatzi; Anargyros Chatzitofis; Nicholas Vretos; Petros Daras | |||
The paper outlines the primary challenges and principles for museums and
venues that wish to accommodate social and Future Media Internet (FMI)
technologies, incorporating the experiences gathered through the EXPERIMEDIA
project experiments. Keywords: Future Museums; New Media; Infrastructure; Smart Devices; EXPERIMEDIA
Project |
Fast and Accurate 3D Reproduction of a Remote Collaboration Environment | | BIBAK | Full-Text | 351-362 | |
ABM Tariqul Islam; Christian Scheel; Ali Shariq Imran; Oliver Staadt | |||
We present an approach for high-quality rendering of the 3D representation of a remote collaboration scene at real-time rendering speed, by extending the unstructured lumigraph rendering (ULR) method. ULR uses a 3D proxy, which in the simplest case is a 2D plane. We develop a dynamic proxy for ULR to obtain a better and more detailed 3D proxy in real time, which leads to the rendering of high-quality, accurate 3D scenes with motion parallax support. The novel contribution of this work is the development of a dynamic proxy in real time. The dynamic proxy is generated from depth images instead of the color images used in the Lumigraph approach. Keywords: 3D reproduction; remote collaboration; telepresence; unstructured lumigraph rendering; motion parallax |
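A dynamic proxy built from depth images amounts to unprojecting each depth sample into a 3D point that the renderer can triangulate per frame. The sketch below shows the unprojection under a pinhole camera model; the intrinsics and the meshing step are assumptions, not the authors' pipeline.

```python
import numpy as np

def depth_to_proxy_points(depth, fx, fy, cx, cy):
    """Unproject a depth image into a 3D point grid usable as a per-frame ULR proxy.
    fx, fy, cx, cy are pinhole camera intrinsics; depth holds metric distances."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)   # (h, w, 3); triangulate grid cells for a mesh
```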
From Image Inpainting to Diminished Reality | | BIBAK | Full-Text | 363-374 | |
Norihiko Kawai; Tomokazu Sato; Naokazu Yokoya | |||
Image inpainting, which removes undesired objects from a static image and fills in the missing regions with plausible textures, has been developed in the research field of image processing. Diminished Reality (DR), on the other hand, visually removes real objects from video images by filling in the missing regions with background textures in real time; it is one of the growing topics in Virtual/Mixed Reality and is considered the opposite of Augmented Reality. In this paper, we introduce the state of the art of image inpainting methods and show how to apply image inpainting to diminished reality. Keywords: image inpainting; diminished reality; augmented reality |
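For the static-image building block, OpenCV ships two classical inpainting methods; a minimal sketch follows. File names are placeholders, and real-time DR additionally requires tracking and temporal texture propagation, which this does not address.

```python
import cv2

img = cv2.imread("scene.png")
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)  # white = region to remove

# Telea's fast-marching method; cv2.INPAINT_NS (Navier-Stokes) is the other built-in choice.
result = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("diminished.png", result)
```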
A Semantically Enriched Augmented Reality Browser | | BIBAK | Full-Text | 375-384 | |
Tamás Matuszka; Sándor Kámán; Attila Kiss | |||
Owing to the remarkable advancement of smartphones, Augmented Reality applications have become part of everyday life, and Augmented Reality browsers are the most commonly used among these applications. By means of these browsers, users can search for and display interesting places in the physical environment surrounding them. Some of the most popular AR browsers use only one data source, leaving openly available datasets unused. In contrast, the main objective of the Linked Open Data community project is to link knowledge from different data sources, a pursuit that, among other benefits, makes information retrieval easier. In this paper, an Augmented Reality browser is presented that uses information derived from Linked Open Data as its data source. Owing to this, the system is able to handle multiple data sources. Keywords: Augmented Reality; Semantic Web; Location-based Services; Linked Data |
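A hedged sketch of the Linked Open Data side: querying a SPARQL endpoint (DBpedia here) for geo-tagged places that an AR browser could overlay. The endpoint, vocabulary and query are illustrative choices; the paper's browser may differ.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
    SELECT ?place ?lat ?long WHERE {
        ?place a dbo:Museum ;
               geo:lat ?lat ;
               geo:long ?long .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Print each point of interest with its WGS84 coordinates.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["place"]["value"], row["lat"]["value"], row["long"]["value"])
```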
Mobile Augmentation Based on Switching Multiple Tracking Method | | BIBAK | Full-Text | 385-395 | |
Ayaka Miyagi; Daiki Yoshihara; Kei Kusui; Asako Kimura; Fumihisa Shibata | |||
This paper presents a localization mechanism for mobile augmented reality systems in various places. Recently, a variety of image-based tracking methods has been proposed: artificial-marker-based methods and natural-feature-based methods. However, no single tracking method can localize reliably in all situations. Therefore, we propose a system that enables users to track continually in various situations by dynamically switching among multiple localization methods. Our proposed mechanism consists of clients, a switcher, and servers. The servers estimate the camera pose of the client, and the switcher selects the best-performing localization method. Furthermore, we employ real-time mapping to continually estimate the position and orientation even when the camera moves away from the prior knowledge of the environment. After localization, the newly updated mapping result is stored on the server; thus, tracking can continue even if the environment has changed. Keywords: mixed reality; localization; tracking |
Hand Tracking with a Near-Range Depth Camera for Virtual Object Manipulation in a Wearable Augmented Reality | | BIBAK | Full-Text | 396-405 | |
Gabyong Park; Taejin Ha; Woontack Woo | |||
This paper proposes methods for tracking a bare hand with a near-range depth
camera attached to a video see-through Head-mounted Display (HMD) for virtual
object manipulation in an Augmented Reality (AR) environment. The particular
focus herein is upon using hand gestures that are frequently used in daily
life. First, we use a near-range depth camera attached to the HMD to segment the hand easily, considering both skin color and depth information within arm's reach. Then, fingertip and base positions are extracted through primitive models of the finger and palm. From these positions, the rotation parameters of the finger joints are estimated through an inverse-kinematics algorithm. Finally, the user's hands are localized in physical space by camera tracking and then used for 3D virtual object
manipulation. Our method is applicable to various AR interaction scenarios such
as digital information access/control, creative CG modeling,
virtual-hand-guiding, or game UIs. Keywords: Hand Tracking; HMD; Augmented Reality |
Matching Levels of Task Difficulty for Different Modes of Presentation in a VR Table Tennis Simulation by Using Assistance Functions and Regression Analysis | | BIBAK | Full-Text | 406-417 | |
Daniel Pietschmann; Stephan Rusdorf | |||
UX is often compared between different systems or iterations of the same
system. Especially when investigating human perception processes in virtual
tasks and associated effects, experimental manipulation allows for better
control of confounders. When manipulating modes of presentation, such as
stereoscopy or visual perspective, the quality and quantity of available
sensory cues is manipulated as well, resulting not only in different user
experiences, but also in modified task difficulty. Increased difficulty and
lower user task performance may lead to negative attributions that spill over
to the evaluation of the system as a whole (halo effect). To avoid this, the
task difficulty should remain unaltered. In highly dynamic virtual
environments, the modification of difficulty with Fitts' law may prove
problematic, so an alternative is presented using curve fitting regression
analyses of empirical data from a within-subjects experiment in a virtual table
tennis simulation to calculate equal difficulty levels. Keywords: Virtual Reality; Performance; User Experience; Spatial Presence; Table
Tennis Simulation |
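The curve-fitting alternative to Fitts' law can be illustrated as fitting a performance curve per presentation mode and inverting it at a common target performance level; the data and exponential model below are invented for illustration, not the paper's empirical results.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    """Assumed exponential decay of success rate with a difficulty parameter."""
    return a * np.exp(-b * x) + c

x = np.array([0.5, 1.0, 1.5, 2.0, 2.5])          # e.g., ball speed in one mode
hits = np.array([0.95, 0.80, 0.62, 0.45, 0.33])  # observed success rate

params, _ = curve_fit(model, x, hits, p0=(1.0, 1.0, 0.0))

# Invert the fitted curve for the speed that yields a target 60% success rate;
# repeating this per presentation mode gives settings of equal difficulty.
a, b, c = params
speed_at_target = -np.log((0.60 - c) / a) / b
print(f"speed for 60% hits: {speed_at_target:.2f}")
```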
A Pen Based Tool for Annotating Planar Objects | | BIBAK | Full-Text | 418-427 | |
Satoshi Yonemoto | |||
In recent augmented reality (AR) applications, marker-less tracking approaches are often used. Most marker-less tracking approaches force the user to capture the front view of a target object during the initial setup. We have
recently proposed two image rectification methods for non-frontal view of a
planar object. These methods can be applied to reference image generation in
marker-less AR. This paper describes a pen based tool for annotating planar
objects. Our tool builds upon several interactive image rectification methods,
and supports registration of AR Annotations, marker-less tracking and
annotation overlay. Keywords: image rectification; marker-less tracking; AR annotation |
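Interactive rectification of a planar object from a non-frontal view is typically a four-point homography warp. The sketch below illustrates that operation; the corner coordinates stand in for pen input, and the authors' two rectification methods are not reproduced here.

```python
import cv2
import numpy as np

img = cv2.imread("board_oblique.png")          # non-frontal view of a planar object
src = np.float32([[210, 120], [850, 95], [905, 610], [160, 580]])  # pen-marked corners
w, h = 800, 600
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Homography mapping the marked quadrilateral to a fronto-parallel rectangle,
# yielding a reference image suitable for marker-less tracking.
H = cv2.getPerspectiveTransform(src, dst)
frontal = cv2.warpPerspective(img, H, (w, h))
cv2.imwrite("board_rectified.png", frontal)
```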