NGCA Tables of Contents: 11

Proceedings of the 2011 Conference on Novel Gaze-Controlled Applications

Fullname: Proceedings of the 1st Conference on Novel Gaze-Controlled Applications
Editors: Veronica Sundstedt; Charlotte Sennersten
Location: Karlskrona, Sweden
Dates: 2011-May-26 to 2011-May-26
Publisher: ACM
Standard No: ISBN: 1-4503-0680-2, 978-1-4503-0680-5; ACM DL: Table of Contents; hcibib: NGCA11
Papers: 11
Pages: 71
Links: Conference Home Page
Designing gaze-supported multimodal interactions for the exploration of large image collections BIBAFull-Text 1
  Sophie Stellmach; Sebastian Stober; Andreas Nürnberger; Raimund Dachselt
While eye tracking is becoming more and more relevant as a promising input channel, diverse applications using gaze control in a more natural way are still rather limited. Though several researchers have indicated the particularly high potential of gaze-based interaction for pointing tasks, often gaze-only approaches are investigated. However, time-consuming dwell-time activations limit this potential. To overcome this, we present a gaze-supported fisheye lens in combination with (1) a keyboard and (2) a tilt-sensitive mobile multi-touch device. In a user-centered design approach, we elicited how users would use the aforementioned input combinations. Based on the received feedback we designed a prototype system for the interaction with a remote display using gaze and a touch-and-tilt device. This eliminates gaze dwell-time activations and the well-known Midas Touch problem (unintentionally issuing an action via gaze). A formative user study testing our prototype provided further insights into how well the elaborated gaze-supported interaction techniques were experienced by users.
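The dwell-time activation that this abstract argues against can be sketched in a few lines. The following is a minimal illustrative Python sketch (not the authors' implementation; the `DwellSelector` class and its parameters are assumptions for illustration): a selection fires only after gaze has rested on the same target for a threshold duration, which is exactly the delay the paper's touch-and-tilt trigger removes.

```python
from dataclasses import dataclass, field

@dataclass
class DwellSelector:
    """Fires a selection once gaze has rested on one target for dwell_ms.

    Illustrative sketch only; names and defaults are assumptions.
    """
    dwell_ms: float = 500.0
    _target: object = field(default=None, repr=False)
    _elapsed: float = field(default=0.0, repr=False)

    def update(self, target, dt_ms):
        """Feed one gaze sample; return the target if a selection fires."""
        if target is not self._target:
            # Gaze moved to a different target: restart the dwell timer.
            self._target, self._elapsed = target, 0.0
            return None
        self._elapsed += dt_ms
        if self._elapsed >= self.dwell_ms:
            self._elapsed = 0.0  # fire once, then re-arm
            return target
        return None

# Six 100 ms samples on the same target: the fifth accumulated interval
# reaches the 500 ms threshold and the selection fires.
sel = DwellSelector(dwell_ms=500)
hits = [sel.update("button", 100) for _ in range(6)]
```

The Midas Touch problem follows directly from this loop: any sufficiently long fixation triggers an action, whether intended or not, which is why the paper moves the confirmation step to a separate modality.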
Mobile gaze-based screen interaction in 3D environments BIBAFull-Text 2
  Diako Mardanbegi; Dan Witzner Hansen
Head-mounted eye trackers can be used for mobile interaction as well as gaze estimation purposes. This paper presents a method that enables the user to interact with any planar digital display in a 3D environment using a head-mounted eye tracker. An effective method for identifying the screens in the field of view of the user is also presented, which can be applied in a general scenario in which multiple users interact with multiple screens. A particular application of this technique is implemented in a home environment with two large screens and a mobile phone. In this application a user was able to interact with these screens using a wireless head-mounted eye tracker.
Eye tracking within the packaging design workflow: interaction with physical and virtual shelves BIBAFull-Text 3
  Chip Tonkin; Andrew D. Ouzts; Andrew T. Duchowski
Measuring consumers' overt visual attention through eye tracking is a useful method of assessing a package design's impact on likely buyer purchase patterns. To preserve ecological validity, subjects should remain immersed in a shopping context throughout the entire study. Immersion can be achieved through proper priming, environmental cues, and visual stimuli. While a complete physical store offers the most realistic environment, the use of projectors in creating a virtual environment is desirable for efficiency, cost, and flexibility reasons. Results are presented from a study comparing consumers' visual behavior in the presence of either virtual or physical shelving through eye movement performance and process metrics and their subjective impressions. Analysis suggests a difference in visual search performance between environments even though the perceived difference is negligible.
Towards intelligent user interfaces: anticipating actions in computer games BIBAFull-Text 4
  Hendrik Koesling; Alan Kenny; Andrea Finke; Helge Ritter; Seamus McLoone; Tomas Ward
The study demonstrates how the on-line processing of eye movements in First Person Shooter (FPS) games helps to predict player decisions regarding subsequent actions. Based on action-control theory, we identify distinct cognitive orientations in pre- and post-decisional phases. Cognitive orientations differ with regard to the width of attention or "receptiveness": in the pre-decisional phase players process as much information as possible and then focus on implementing intended actions in the post-decisional phase. Participants viewed animated sequences of FPS games and decided which game character to rescue and how to implement their action. Oculomotor data show a clear distinction between the width of attention in pre- and post-decisional phases, supporting the Rubicon model of action phases. Attention rapidly narrows when the goal intention is formed. We identify a lag of 800-900 ms between goal formation ("cognitive Rubicon") and motor response. Game engines may use this lag to anticipatively respond to actions that players have not yet executed. User interfaces with a gaze-dependent, gaze-controlled anticipation module should thus enhance game character behaviours and make them much "smarter".
Hyakunin-Eyesshu: a tabletop Hyakunin-Isshu game with computer opponent by the action prediction based on gaze detection BIBAFull-Text 5
  Michiya Yamamoto; Munehiro Komeda; Takashi Nagamatsu; Tomio Watanabe
A tabletop interface can enable interactions between images and real objects using various sensors; therefore, it can be used to create many works in the media arts field. By focusing on gaze-and-touch interaction, we proposed the concept of an eye-tracking tabletop interface (ETTI) as a new type of interaction interface for the creation of media artworks. In this study, we developed "Hyakunin-Eyesshu," a prototype for ETTI content that enables users to play the traditional Japanese card game "Hyakunin-Isshu" with a computer character. In addition, we demonstrated this system at an academic meeting and obtained user feedback. We expect that our work will lead to advancements in interfaces for various interactions and to various new media artworks with precise gaze estimation.
Comparison of gaze-to-objects mapping algorithms BIBAFull-Text 6
  Oleg Spakov
Gaze data processing is an important and necessary step in gaze-based applications. This study focuses on the comparison of several gaze-to-object mapping algorithms using various dwell times for selection, presenting targets of several types and sizes. Seven algorithms found in the literature were compared against two newly designed algorithms. The study revealed that a fractional mapping algorithm (from the literature) produced the highest rate of correct selections and the fastest selection times, but also the highest rate of incorrect selections. The dynamic competing algorithm (newly designed) showed the next best result, but also a high rate of incorrect selections. The type of target had only a small impact on the calculated statistics. Strictly centered gazing helped to increase the rate of correct selections for all algorithms and types of targets. Directions for further improvement of mapping algorithms and future investigation are discussed.
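One plausible reading of "fractional mapping" is that each candidate object is scored by the fraction of recent gaze samples that fall inside it, and the highest-scoring object wins. The sketch below is an assumption-laden Python illustration of that idea, not the algorithm evaluated in the paper; the function name, rectangle-based targets, and tie-breaking are all invented for the example.

```python
def fractional_map(samples, targets):
    """Map a window of gaze samples to the target containing the largest
    fraction of them (one assumed reading of 'fractional mapping').

    samples: list of (x, y) gaze points
    targets: dict mapping target name -> (x0, y0, x1, y1) bounds
    Returns (best_target_or_None, fraction_of_samples_inside_it).
    """
    if not samples:
        return None, 0.0
    counts = {name: 0 for name in targets}
    for x, y in samples:
        for name, (x0, y0, x1, y1) in targets.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    best = max(counts, key=counts.get)
    if counts[best] == 0:
        return None, 0.0  # no sample hit any target
    return best, counts[best] / len(samples)

# Three of four samples land inside target "A", one inside "B",
# so "A" is selected with a fraction of 0.75.
targets = {"A": (0, 0, 10, 10), "B": (20, 0, 30, 10)}
result = fractional_map([(1, 1), (2, 2), (3, 3), (25, 5)], targets)
```

Scoring by fraction rather than by the last raw sample makes the mapping robust to jitter near target borders, which is consistent with the abstract's finding that such algorithms select quickly but can also mis-select when noisy samples spill into a neighbouring target.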
Evaluation of a remote webcam-based eye tracker BIBAFull-Text 7
  Henrik Skovsgaard; Javier San Agustin; Sune Alstrup Johansen; John Paulin Hansen; Martin Tall
In this paper we assess the performance of an open-source gaze tracker in a remote (i.e. table-mounted) setup, and compare it with two other commercial eye trackers. An experiment with 5 subjects showed the open-source eye tracker to have a significantly higher level of accuracy than one of the commercial systems, Mirametrix S1, but also a higher error rate than the other commercial system, a Tobii T60. We conclude that the web-camera solution may be viable for people who need a substitute for the mouse input but cannot afford a commercial system.
An open-source low-cost eye-tracking system for portable real-time and offline tracking BIBAFull-Text 8
  Nicolas Schneider; Peter Bex; Erhardt Barth; Michael Dorr
Open-source eye trackers have the potential to bring gaze-controlled applications to a wider audience or even the mass market due to their low cost, and their flexibility and tracking quality are continuously improving. We here present a new portable low-cost head-mounted eye-tracking system based on the open-source ITU Gaze Tracker software. The setup consists of a pair of self-built tracking glasses with attached cameras for eye and scene recording. The software was significantly extended and functionality was added for calibration in space, scene recording, synchronization for eye and scene videos, and offline tracking. Results of indoor and outdoor evaluations show that our system provides a useful tool for low-cost portable eye tracking; the software is publicly available.
Gaze and voice controlled drawing BIBAFull-Text 9
  Jan van der Kamp; Veronica Sundstedt
Eye tracking is a process that allows an observer's gaze to be determined in real time by measuring their eye movements. Recent work has examined the possibility of using gaze control as an alternative input modality in interactive applications. Alternative means of interaction are especially important for disabled users for whom traditional techniques, such as mouse and keyboard, may not be feasible. This paper proposes a novel combination of gaze and voice commands as a means of hands-free interaction in a paint-style program. A drawing application is implemented which is controllable by input from gaze and voice. Voice commands are used to activate drawing, which allows gaze to be used only for positioning the cursor. In previous work gaze has also been used to activate drawing using dwell time. The drawing application is evaluated using subjective responses from participant user trials. The main result indicates that although gaze and voice offered less control than traditional input devices, the participants reported that it was more enjoyable.
Exploring interaction modes for image retrieval BIBAFull-Text 10
  Corey Engelman; Rui Li; Jeff Pelz; Pengcheng Shi; Anne Haake
The number of digital images in use is growing at an increasing rate across a wide array of application domains. Consequently, there is an ever-growing need for innovative ways to help end users gain access to these images quickly and effectively. Moreover, it is becoming increasingly difficult to manually annotate these images, for example with text labels, to generate useful metadata. One such method for helping users gain access to digital images is content-based image retrieval (CBIR). Practical use of CBIR systems has been limited by several "gaps", including the well-known semantic gap and usability gaps [1]. Innovative designs are needed to bring end users into the loop to bridge these gaps. Our human-centered approaches integrate human perception and multimodal interaction to facilitate more usable and effective image retrieval. Here we show that multi-touch interaction is more usable than gaze-based interaction for explicit image region selection.
Gaze interaction from bed BIBAFull-Text 11
  John Paulin Hansen; Javier San Augustin; Henrik Skovsgaard
This paper presents a low-cost gaze tracking solution for bedbound people composed of freeware tracking software and commodity hardware. Gaze interaction is done on a large wall-projected image, visible to all people present in the room. The hardware equipment leaves physical space free to assist the person. Accuracy and precision of the tracking system were tested in an experiment with 12 subjects. We obtained a tracking quality that is sufficiently good to control applications designed for gaze interaction. The best tracking conditions were achieved when people were sitting up rather than lying down. Also, gaze tracking in the bottom part of the image was found to be more precise than in the top part.