
Proceedings of the 2013 ACM Symposium on Spatial User Interaction

Fullname: SUI'13: Proceedings of the 1st ACM Symposium on Spatial User Interaction
Editors: Evan Suma; Wolfgang Stuerzlinger; Frank Steinicke
Location: Los Angeles, California
Dates: 2013-Jul-20 to 2013-Jul-21
Publisher: ACM
Standard No: ISBN 978-1-4503-2141-9
Papers: 22
Pages: 97
  1. Full papers
  2. Demos & posters

Full papers

Visualization of off-surface 3D viewpoint locations in spatial augmented reality, pp. 1-8
  Matt Adcock; David Feng; Bruce Thomas
Spatial Augmented Reality (SAR) systems can be used to convey guidance in a physical task from a remote expert. Sometimes that remote expert is provided with only a single camera view of the workspace; but if they are given a live-captured 3D model and can freely control their point of view, the local worker needs to know what the remote expert can see. We present three new SAR techniques, Composite Wedge, Vector Boxes, and Eyelight, for visualizing off-surface 3D viewpoints and supporting the required workspace awareness. Our study showed that the Composite Wedge cue was best for providing location awareness, and the Eyelight cue was best for providing visibility map awareness.
To touch or not to touch?: comparing 2D touch and 3D mid-air interaction on stereoscopic tabletop surfaces, pp. 9-16
  Gerd Bruder; Frank Steinicke; Wolfgang Stuerzlinger
Recent developments in touch and display technologies have laid the groundwork to combine touch-sensitive display systems with stereoscopic three-dimensional (3D) display. Although this combination provides a compelling user experience, interaction with objects stereoscopically displayed in front of the screen poses some fundamental challenges: traditionally, touch-sensitive surfaces capture only direct contacts, so the user has to penetrate the visually perceived object to touch the 2D surface behind it. Conversely, recent technologies support capturing finger positions in front of the display, enabling users to interact with intangible objects in mid-air 3D space. In this paper we compare such 2D touch and 3D mid-air interactions in a Fitts' law experiment for objects with varying stereoscopic parallax. The results show that the 2D touch technique is more efficient close to the screen, whereas for targets further away from the screen, 3D selection outperforms 2D touch. Based on the results, we present implications for the design and development of future touch-sensitive interfaces for stereoscopic displays.
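As background for this and the other pointing studies in these proceedings, the throughput analysis that Fitts' law experiments typically rely on (following ISO 9241-9) can be sketched in a few lines of Python. This is an editorial illustration of the standard methodology, not the authors' analysis code:

    import math
    import statistics

    def fitts_throughput(distance_mm, endpoint_devs_mm, movement_times_s):
        # Effective width: 4.133 x the SD of the selection endpoints,
        # normalising the condition to a nominal 4% miss rate.
        w_e = 4.133 * statistics.stdev(endpoint_devs_mm)
        # Effective index of difficulty (Shannon formulation), in bits.
        id_e = math.log2(distance_mm / w_e + 1)
        # Throughput (bits/s): effective difficulty over mean movement time.
        return id_e / statistics.mean(movement_times_s)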
Novel metrics for 3D remote pointing, pp. 17-20
  Steven J. Castellucci; Robert J. Teather; Andriy Pavlovych
We introduce new metrics to help explain 3D pointing device movement characteristics, and present a study that assesses them by comparing two cursor control modes using a Sony PS Move. "Laser" mode used ray casting, while "position" mode mapped absolute device movement to cursor motion. Mouse pointing was also included, and all techniques were additionally analyzed with existing 2D accuracy measures. Results suggest that position mode shows promise due to its accurate and smooth pointer movements. Our 3D movement metrics do not correlate well with performance, but may be beneficial in understanding how devices are used.
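The "existing 2D accuracy measures" referenced here are presumably the path-accuracy measures of MacKenzie and colleagues (movement offset, movement error, movement variability). A minimal sketch of those computations, assuming sampled cursor positions and a straight task axis between start and target, might look like:

    import math

    def path_accuracy(points, start, target):
        (sx, sy), (tx, ty) = start, target
        axis_len = math.hypot(tx - sx, ty - sy)
        # Signed perpendicular deviation of each sample from the task axis.
        devs = [((tx - sx) * (y - sy) - (ty - sy) * (x - sx)) / axis_len
                for (x, y) in points]
        n = len(devs)
        mo = sum(devs) / n                        # movement offset: mean bias
        me = sum(abs(d) for d in devs) / n        # movement error: mean |dev|
        mv = math.sqrt(sum((d - mo) ** 2 for d in devs) / (n - 1))  # variability
        return mo, me, mv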
Spatial user interface for experiencing Mogao caves, pp. 21-24
  Leith Kin Yip Chan; Sarah Kenderdine; Jeffrey Shaw
In this paper, we describe the design and implementation of Pure Land AR, an installation that employs a spatial user interface and allows users to virtually visit the UNESCO World Heritage site of the Mogao Caves using handheld devices. The installation has been shown to the public at different museums and galleries. The results of the work and the user responses are discussed.
Seamless interaction using a portable projector in perspective-corrected multi-display environments, pp. 25-32
  Jorge H. dos S. Chernicharo; Kazuki Takashima; Yoshifumi Kitamura
In this work, we study ways to use a portable projector to extend the workspace in a perspective-corrected multi-display environment (MDE). The system uses the relative position between the user and the displays to show content perpendicular to the user's point of view, in a deformation-free fashion. We introduce the image created by the portable projector as a new, temporary, and movable image in the perspective-corrected MDE, creating a more flexible workspace for the user. We combined two ways of using the projector (handheld or head-mounted) with two ways of moving the cursor across the screens (a mouse or a laser-pointing strategy), yielding four techniques for users to try. We then performed two exploratory experiments to evaluate the system. The first experiment (5 participants) evaluated how using a movable screen to fill the gaps between displays affects user performance in a cross-display pointing task; the second (6 participants) evaluated how using the projector to extend the workspace affects completion time in an off-screen content recognition task. While we found no significant improvement in the pointing task, users were significantly faster at recognizing off-screen content, and the portable projector reduced the overall task load in both tasks.
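At its simplest, the perspective correction described here amounts to orienting each piece of content so that it faces the tracked user. The following head-coupled billboarding sketch illustrates that general idea only; it is an editorial simplification, not the authors' implementation:

    import numpy as np

    def face_user(content_pos, eye_pos, world_up=(0.0, 1.0, 0.0)):
        # Rotation matrix that orients a content plane perpendicular to the
        # line of sight from the tracked user's eye position.
        fwd = np.asarray(eye_pos, float) - np.asarray(content_pos, float)
        fwd /= np.linalg.norm(fwd)               # plane normal points at the eye
        right = np.cross(world_up, fwd)
        right /= np.linalg.norm(right)
        up = np.cross(fwd, right)                # re-orthogonalised up vector
        return np.column_stack((right, up, fwd)) # columns: x, y, z axes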
Free-hands interaction in augmented reality, pp. 33-40
  Dragos Datcu; Stephan Lukosch
The ability to use free-hand gestures is extremely important for mobile augmented reality applications. This paper proposes a computer-vision-driven model for natural free-hands interaction in augmented reality. The novelty of the research is the use of robust hand modeling that combines Viola-Jones detection and Active Appearance Models. A usability study evaluates the free-hands interaction model with a focus on the accuracy of hand-based pointing for menu navigation and menu item selection. The results indicate high pointing accuracy and high usability of free-hands interaction in augmented reality. The research is part of a joint project of TU Delft and the Netherlands Forensic Institute in The Hague, aiming at the development of novel technologies for crime scene investigation.
Performance effects of multi-sensory displays in virtual teleoperation environments, pp. 41-48
  Paulo G. de Barros; Robert W. Lindeman
Multi-sensory displays provide information to users through multiple senses, not only through visuals. They can be designed to create a more natural interface for users or to reduce the cognitive load of a visual-only display. However, because multi-sensory displays are often application-specific, their general advantages over visual-only displays are not yet well understood. Moreover, the optimal amount of information that can be perceived through multi-sensory displays without making them more cognitively demanding than a visual-only display is also not yet clear. Finally, the effects of redundant feedback across senses in multi-sensory displays have not been fully explored. To shed some light on these issues, this study evaluates the effects of increasing the amount of multi-sensory feedback on an interface, specifically in a virtual teleoperation context. While objective data showed that increasing the number of senses in the interface from two to three improved performance, subjective feedback indicated that multi-sensory interfaces with redundant feedback may impose an extra cognitive burden on users.
Evaluating performance benefits of head tracking in modern video games, pp. 53-60
  Arun Kulshreshth; Joseph J. LaViola, Jr.
We present a study that investigates the user performance benefits of head tracking in modern video games. We explored four carefully chosen commercial games with tasks that can potentially benefit from head tracking. For each game, quantitative and qualitative measures were taken to determine whether users performed better and learned faster in the experimental group (with head tracking) than in the control group (without head tracking). A game-expertise pre-questionnaire was used to classify participants as casual or expert players, to analyze a possible impact on performance differences. Our results indicate that head tracking provided a significant performance benefit for experts in two of the games tested. They also indicate that head tracking is more enjoyable in slow-paced video games and can hurt performance in fast-paced ones. We discuss the reasoning behind our results, which forms the basis for our recommendations to game developers who want to use head tracking to enhance game experiences.
Volume cracker: a bimanual 3D interaction technique for analysis of raw volumetric data, pp. 61-68
  Bireswar Laha; Doug A. Bowman
Analysis of volume datasets often involves peering inside the volume to understand internal structures. Traditional approaches involve removing part of the volume through slicing, but this can result in the loss of context. Focus+context visualization techniques can distort part of the volume, or can assume prior definition of a region of interest or segmentation of layers of the volume. We propose a new bimanual 3D interaction technique, called Volume Cracker (VC), which allows the user to crack open a raw volume like a book to analyze the internal structures. VC preserves context by always displaying all the voxels, and by connecting the sub-volumes with curves joining the cracked faces. We discuss the design choices that we made, based on observations from prior user studies, input from domain scientists, and design studios. We also report the results of a user study comparing VC with a standard desktop interaction technique and a standard 3D bimanual interaction technique. The study used tasks from two categories of a generic volume analysis task taxonomy. We found VC had significant advantages over the other two techniques for search and pattern recognition tasks.
Direct 3D object manipulation on a collaborative stereoscopic display, pp. 69-72
  Kasim Özacar; Kazuki Takashima; Yoshifumi Kitamura
IllusionHole (IH) is an interactive stereoscopic tabletop display that allows multiple users to interactively observe and directly point at a particular position on a stereoscopic object in a shared workspace. We explored a mid-air, direct multi-finger interaction technique for efficiently performing fundamental single-user object manipulations (e.g., selection, rotation, translation, and scaling) on the IH. Performance of the proposed technique was compared with a cursor-based single-point technique in a 3D docking task. The results showed that direct object manipulation with the proposed technique provides greater benefits for the user experience in a collaborative environment.
FocalSpace: multimodal activity tracking, synthetic blur and adaptive presentation for video conferencing, pp. 73-76
  Lining Yao; Anthony DeVincenzi; Anna Pereira; Hiroshi Ishii
We introduce FocalSpace, a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this information to enhance users' focus by diminishing the background through synthetic blur effects. We present scenarios that support the suppression of visual distraction, provide contextual augmentation, and enable privacy in dynamic mobile environments. Our user evaluation indicates increased memory accuracy and user preference for FocalSpace techniques compared to traditional video conferencing.
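The synthetic blur step can be illustrated with a simple depth-gated filter. This is a sketch under the assumption of a per-pixel depth map and an illustrative focus band, not the FocalSpace implementation itself:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def diminish_background(frame, depth_mm, focus_mm, band_mm=400, sigma=6):
        # Blur a copy of the frame channel by channel.
        blurred = np.stack([gaussian_filter(frame[..., c].astype(float), sigma)
                            for c in range(frame.shape[-1])], axis=-1)
        # Keep sharp only the pixels whose depth lies near the focus of
        # attention (e.g., the active speaker); blur everything else.
        in_focus = np.abs(depth_mm.astype(float) - focus_mm) < band_mm
        out = np.where(in_focus[..., None], frame.astype(float), blurred)
        return out.astype(frame.dtype)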

Demos & posters

Effects of stereo and head tracking in 3D selection tasks, p. 77
  Bartosz Bajer; Robert J. Teather; Wolfgang Stuerzlinger
We report a 3D selection study comparing stereo and head tracking with both mouse and pen pointing. Results indicate stereo was primarily beneficial to the pen mode, but slightly hindered mouse speed. Head tracking had fewer noticeable effects.
Towards bi-manual 3D painting: generating virtual shapes with hands, p. 79
  Alexis Clay; Jean-Christophe Lombardo; Julien Conan; Nadine Couture
We aim at combining surface generation by hands with 3D painting in a large space, from 10 to 200 m² (for a stage setup). Our long-term goal is to bring 3D surface generation into choreography, in order to produce augmented dance shows where the dancer can draw elements (characters, sets) in 3D while dancing. We present two systems: the first in a CAVE environment, and the second better adapted to a stage setup. We compare the two systems and report an exploratory user experiment conducted with both laypersons and dancers.
User-defined SUIs: an exploratory study, p. 81
  Alexis Clay; Anissa Samar; Maroua Ben Younes; Régis Mollard; Marion Wolff
In this poster we present an exploratory, bottom-up experiment to assess users' choices of bodily interactions when facing a set of tasks. Twenty-nine subjects were asked to perform basic tasks on a large-screen TV in three positions (standing, sitting, and lying on a couch) without any guidance on how to perform them. We thus obtained spontaneous interaction propositions for each task. Subjects were then interviewed about their choices and their internal representation of the information and its dynamics. A statistical analysis highlighted the preferred interactions in each position.
Bimanual spatial haptic interface for assembly tasks, p. 83
  Jonas Forsslund; Sara C. Schvartzman; Sabine Girod; Rebeka Silva; Kenneth Salisbury; Sonny Chan; Brian Jo
Fusing depth, color, and skeleton data for enhanced real-time hand segmentation, p. 85
  Yu-Jen Huang; Mark Bolas; Evan A. Suma
As sensing technology has evolved, spatial user interfaces have become increasingly popular platforms for interacting with video games and virtual environments. In particular, recent advances in consumer-level motion tracking devices such as the Microsoft Kinect have sparked a dramatic increase in user interfaces controlled directly by the user's hands and body. However, existing skeleton tracking middleware created for these sensors, such as that developed by Microsoft and OpenNI, tends to focus on coarse full-body motions and suffers from several well-documented limitations when attempting to track the positions of the user's hands and segment them from the background. In this paper, we present an approach that handles these failure cases more robustly by combining the original skeleton tracking positions with the color and depth information returned by the sensor.
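A minimal sketch of such a cue fusion, with illustrative thresholds and function names of our choosing rather than the authors' actual pipeline, might be:

    import numpy as np

    def segment_hand(depth_mm, skin_mask, joint_px, joint_depth_mm,
                     band_mm=120, window_px=80):
        # Depth gate: keep pixels within a band around the tracked hand depth,
        # separating the hand from the torso and background.
        depth_gate = np.abs(depth_mm.astype(float) - joint_depth_mm) < band_mm
        # Spatial gate: a window around the (possibly noisy) skeleton joint.
        r, c = joint_px
        spatial_gate = np.zeros_like(depth_gate)
        spatial_gate[max(r - window_px, 0):r + window_px,
                     max(c - window_px, 0):c + window_px] = True
        # A pixel is labelled "hand" only where all three cues agree:
        # skeleton (spatial), depth, and skin colour from the RGB image.
        return depth_gate & spatial_gate & skin_mask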
A virtually tangible 3D interaction system using an autostereoscopic display, p. 87
  Takumi Kusano; Takehiro Niikura; Takashi Komuro
We propose a virtually tangible 3D interaction system that enables direct interaction with three-dimensional virtual objects presented on an autostereoscopic display.
Up- and downwards motions in 3D pointing, p. 89
  Sidrah Laldin; Robert J. Teather; Wolfgang Stuerzlinger
We present an experiment that examines 3D pointing in fish-tank VR using the ISO 9241-9 standard. The experiment used three pointing techniques: mouse, ray, and touch with a stylus. It evaluated pointing performance with stereoscopically displayed targets at varying heights above an upward-facing display. Results show differences between upward and downward motions for the 3D touch technique.
Autonomous control of human-robot spacing: a socially situated approach, p. 91
  Ross Mead; Maja J. Mataric
To enable socially situated human-robot interaction, a robot must both understand and control proxemics, the social use of space, to employ communication mechanisms analogous to those used by humans. In this work, we model speech and gesture production and recognition as a function of social agent spacing during both human-human and human-robot interactions. The resulting models were used to implement an autonomous proxemic robot controller. The controller uses a sampling-based method wherein each sample represents an inter-agent pose together with estimates of agent speech and gesture production and recognition; a particle filter uses these estimates to maximize the performance of both the robot and the human during the interaction. This functional approach yields pose, speech, and gesture estimates consistent with the related literature. The work contributes to the understanding of the underlying pre-cultural processes that govern proxemic behavior, and has implications for robust proxemic controllers for robots in complex interactions and environments.
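The sampling-based controller can be sketched as a standard particle-filter resample-and-jitter update over candidate robot poses. This is an editorial illustration with a stand-in weight function, not the authors' controller:

    import random

    def proxemic_step(particles, weight_fn):
        # particles: candidate robot poses [(x, y, theta), ...]
        # weight_fn: maps a pose to an estimate of joint human-robot
        #            speech/gesture production and recognition performance.
        weights = [weight_fn(p) for p in particles]
        # Resample poses in proportion to estimated interaction performance.
        chosen = random.choices(particles, weights=weights, k=len(particles))
        # Gaussian jitter keeps the filter exploring nearby poses.
        return [(x + random.gauss(0, 0.05),
                 y + random.gauss(0, 0.05),
                 t + random.gauss(0, 0.02)) for (x, y, t) in chosen]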
Real-time image-based animation using morphing with human skeletal tracking, p. 93
  Wataru Naya; Kazuya Fukumoto; Tsuyoshi Yamamoto; Yoshinori Dobashi
We propose a real-time image-based animation technique for virtual fitting applications. Our method finds key images in a database, using skeletal data as the search key, and then creates in-between images by image morphing. Compared to a conventional method using 3DCG rendering, our method achieves a higher frame rate and more realistic textile representation. Unlike [1], data size and search time are reduced through database optimization.
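The pipeline can be sketched as a nearest-neighbour key-image lookup in pose space followed by a morph. The in-between step below is a plain cross-dissolve standing in for the paper's image morphing, so this is only an illustration of the data flow:

    import numpy as np

    def nearest_key(pose, db_poses):
        # Index of the stored skeleton pose closest to the tracked one;
        # poses are flattened joint-coordinate vectors.
        return int(np.argmin(np.linalg.norm(db_poses - pose, axis=1)))

    def in_between(img_a, img_b, t):
        # Cross-dissolve between two key images: t=0 -> img_a, t=1 -> img_b.
        return ((1.0 - t) * img_a + t * img_b).astype(img_a.dtype)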
Augmenting multi-touch with commodity devices, p. 95
  Francisco R. Ortega; Armando Barreto; Naphtali Rishe
We describe two approaches to augmenting multi-touch user input with commodity devices (the Kinect and the Wiimote).
Effectiveness of commodity BCI devices as means to control an immersive virtual environment, p. 97
  Jerald Thomas; Steve Jungst; Pete Willemsen
This poster focuses on research investigating the control of an immersive virtual environment using the Emotiv EPOC, a consumer-grade brain-computer interface. The primary emphasis of the work is to determine the feasibility of the Emotiv EPOC for manipulating elements of an interactive virtual environment. We have developed a system using the Emotiv EPOC as the main interface to a custom testing environment built from the Blender Game Engine, Python, and a VRPN system. A series of experiments measuring response time, reliability, and accuracy has been developed, and the current results are described.
   Our poster presents the current state of the project, including preliminary efforts in piloting the experiments. These findings provide insight into potential results from experimentation with active subjects and are promising.