
Proceedings of the 2013 Conference on Eye Tracking South Africa (ETSA 2013), 2013-08-29

Fullname: Proceedings of the 2013 Conference on Eye Tracking South Africa
Editors: Pieter Blignaut
Location: Cape Town, South Africa
Dates: 2013-Aug-29 to 2013-Aug-31
Standard No: ISBN: 978-1-4503-2110-5; ACM DL: Table of Contents; hcibib: ETSA13
Links: Conference Website
  1. Full Papers
  2. Short Papers

Full Papers

Shedding light on retail environments, pp. 2-7
  Tracy Harwood; Martin Jones; Ashley Carreras
This paper presents an overview of research into consumer responses to lighting within retail stores using mobile eye-tracking. It begins with a brief review of pertinent literature on lighting and visual attention. The study is small-scale and experimental, using three scenarios with different lighting patterns on a visual merchandising unit. Tobii Mobile™ glasses were used to collect naturalistic visual attention data on consumer responses to the unit. The eye-tracking data was content-analysed in time intervals by lighting scenario and position of focal attention on the unit, and subsequently analysed using repeated measures ANOVA. Findings highlight methodological implications as well as the roles of lighting and product position. Future research directions are discussed.
Visual perception of international traffic signs: influence of e-learning and culture on eye movements, pp. 8-16
  Gergely Rakoczi; Andrew Duchowski; Helena Casas-Tost; Margit Pohl
Various eye movement metrics were recorded during the visual perception of international traffic signs embedded within an e-learning course designed to familiarize participants with foreign signage. The goals of the study were to gauge differences across task types, sign origin, and ethnicity (American, Chinese, and Austrian), as well as the effectiveness of the e-learning teaching materials as prior preparation. In contrast to other studies, results suggest that the teaching materials had no overall effect on either eye movement metrics or task success rates. Instead, sign origin had the strongest effect on gaze: foreign signs in mixed presentation with domestic signs elicited more fixations, longer mean fixation durations, the highest regression rates, and lower performance scores. Possible effects of ethnicity were also noted: Americans showed lower mean fixation durations over the entire experiment, independent of test conditions, and Chinese participants fixated on (correct) road signs faster than the other ethnic groups.
Appearance-based gaze tracking with spectral clustering and semi-supervised Gaussian process regression, pp. 17-23
  Ke Liang; Youssef Chahir; Michèle Molina; Charles Tijus; François Jouen
Two of the challenges in appearance-based gaze tracking are 1) prediction accuracy and 2) the efficiency of the calibration process, which can be considered the collection and analysis phase for labelled and unlabelled eye data. In this paper, we introduce an appearance-based gaze tracking model with rapid calibration. First, we propose concatenating a Center-Symmetric Local Binary Pattern (CS-LBP) descriptor for each subregion of the eye image to form an eye appearance feature vector. Spectral clustering is then introduced to obtain supervision information about the eye manifolds on-line. Finally, taking advantage of the eye manifold structure, a sparse semi-supervised Gaussian Process Regression (GPR) method is applied to estimate the subject's gaze coordinates. Experimental results demonstrate that our system, with an efficient and accurate 5-point calibration, not only reduces run-time cost but also achieves a better accuracy of 0.9°.
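An eye-appearance feature of this general kind can be sketched in NumPy. This is an illustrative reconstruction, not the authors' code; the subregion grid (4×8), the 16-bin histograms, and the comparison threshold T are assumptions:

```python
import numpy as np

def cs_lbp(img, T=0.01):
    """Center-Symmetric LBP code per interior pixel: compare the four
    centre-symmetric neighbour pairs -> a 4-bit code in 0..15."""
    g = np.asarray(img, dtype=float)
    # the 8 neighbours of each interior pixel, clockwise from top
    n = [g[0:-2, 1:-1], g[0:-2, 2:], g[1:-1, 2:], g[2:, 2:],
         g[2:, 1:-1], g[2:, 0:-2], g[1:-1, 0:-2], g[0:-2, 0:-2]]
    code = np.zeros(g[1:-1, 1:-1].shape, dtype=int)
    for i in range(4):                      # opposite pairs (i, i+4)
        code += ((n[i] - n[i + 4]) > T).astype(int) << i
    return code

def eye_feature_vector(eye_img, grid=(4, 8)):
    """Concatenate 16-bin CS-LBP histograms over a grid of subregions
    of the eye image to form the appearance feature vector."""
    codes = cs_lbp(eye_img)
    feats = []
    for band in np.array_split(codes, grid[0], axis=0):
        for cell in np.array_split(band, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=16, range=(0, 16))
            feats.append(hist / cell.size)  # normalize per subregion
    return np.concatenate(feats)
```

The resulting vector would then feed the clustering and regression stages described in the abstract.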
Reading on-screen text with gaze-based auto-scrolling, pp. 24-31
  Selina Sharmin; Oleg Špakov; Kari-Jouko Räihä
Visual information on eye movements can be used to facilitate scrolling while one is reading on-screen text. We carried out an experiment to find preferred reading regions on the screen and implemented an automatic scrolling technique based on the preferred regions of each individual reader. We then examined whether manual and automatic scrolling have an effect on reading behaviour on the basis of eye movement metrics, such as fixation duration and fixation count. We also studied how different font sizes affect the eye movement metrics. Results of analysis of data collected from 24 participants indicated no significant difference between manual and automatic scrolling in reading behaviour. Preferred reading regions on the screen varied among the participants. Most of them preferred relatively short regions. A significant effect of font size on fixation count was found. Subjective opinions indicated that participants found automatic scrolling convenient to use.
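A minimal sketch of the idea behind such gaze-contingent scrolling (the region bounds and line height are hypothetical; the paper's implementation details are not reproduced here): when the reader's gaze dwells below their preferred reading region, the text is scrolled up by whole lines to bring the gaze point back.

```python
def auto_scroll(gaze_y, region_top, region_bottom, line_height=20):
    """Return how many pixels to scroll the text up so that the gaze
    point returns to the reader's preferred reading region.
    gaze_y: current vertical gaze coordinate (pixels from screen top).
    region_top/region_bottom: the individual's preferred region, e.g.
    estimated from a manual-scrolling phase (assumed values)."""
    if gaze_y > region_bottom:
        # scroll in whole lines, enough to re-centre the gaze
        overshoot = gaze_y - (region_top + region_bottom) / 2
        lines = int(round(overshoot / line_height))
        return lines * line_height
    return 0
```

Called once per gaze sample, this keeps reading within the preferred band without explicit user input.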
A regression-based method for the prediction of the indecisiveness degree through eye movement patterns, pp. 32-38
  Yannick Lufimpu-Luviya; Djamel Merad; Sebastien Paris; Véronique Drai-Zerbib; Thierry Baccino; Bernard Fertil
The development of eye-tracking-based methods to describe a person's indecisiveness has not been widely explored, even though research has shown that indecisiveness is involved in many unwanted cognitive states, such as reduced self-confidence during decision making, doubts about past decisions, reconsideration, trepidation, distractibility, procrastination, neuroticism and even revenge. The purpose of our work is to propose a predictive model of a subject's degree of indecisiveness. To reach this goal, we first need to extract statistically relevant patterns. Using eye-tracking methodology, we build a list of patterns that best distinguish decisive people from indecisive people; this segmentation is made according to the state of the art, and the final list of eye-tracking patterns is also coherent with it. A comparison between Multiple Linear Regression (MLR) and Support Vector Regression (SVR) is made to select the best predictive model.
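The MLR half of such a comparison can be sketched as a least-squares fit; the feature names and synthetic data below are hypothetical, and an SVR baseline would typically be fitted with a library such as scikit-learn:

```python
import numpy as np

# hypothetical eye-movement features per participant (columns):
# mean fixation duration, revisit rate, saccade count
rng = np.random.default_rng(1)
X = rng.random((40, 3))
true_w = np.array([0.8, -0.5, 0.3])
# synthetic "indecisiveness score" with a small noise term
y = X @ true_w + 0.2 + 0.01 * rng.standard_normal(40)

# Multiple Linear Regression via least squares
A = np.hstack([X, np.ones((40, 1))])        # add intercept column
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Model selection would then compare this RMSE (or cross-validated error) against the SVR fit on the same features.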
The effect of mapping function on the accuracy of a video-based eye tracker, pp. 39-46
  Pieter Blignaut; Daniël Wium
In a video-based eye tracker the pupil-glint vector changes as the eyes move. Using an appropriate model, the pupil-glint vector can be mapped to the coordinates of the point of regard (PoR). Using a simple hardware configuration with one camera and one infrared source, the accuracies achievable with various mapping models are compared. No single model proved best for all participants. It was also found that the arrangement and number of calibration targets have a significant effect on the accuracy that can be achieved with the said hardware configuration. A mapping model is proposed that provides reasonably good results for all participants, provided that a calibration set with at least 8 targets is used. Although a large number of calibration targets (18) provides slightly better accuracy than a smaller number (8), the improvement might not be worth the extra effort during a calibration session.
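A common form of such a mapping model is a second-order polynomial in the pupil-glint vector, fitted per axis by least squares over the calibration targets. The sketch below illustrates this generic approach, not necessarily the exact model the paper proposes:

```python
import numpy as np

def design(v):
    """Second-order polynomial terms of pupil-glint vectors (vx, vy)."""
    vx, vy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

def calibrate(vectors, targets):
    """Least-squares fit of the mapping for x and y separately, using
    the calibration targets (6 unknowns per axis; 8+ targets give an
    overdetermined, more robust fit)."""
    A = design(vectors)
    coeff_x, *_ = np.linalg.lstsq(A, targets[:, 0], rcond=None)
    coeff_y, *_ = np.linalg.lstsq(A, targets[:, 1], rcond=None)
    return coeff_x, coeff_y

def point_of_regard(v, coeff_x, coeff_y):
    """Map one pupil-glint vector to screen coordinates (PoR)."""
    A = design(np.atleast_2d(v))
    return float(A @ coeff_x), float(A @ coeff_y)
```

Comparing mapping models then amounts to swapping the `design` terms and measuring residual error on validation targets.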
Saccade deviation indicators for automated eye tracking analysis, pp. 47-54
  J. A. de Bruin; K. M. Malan; J. H. P. Eloff
Eye tracking has been around for more than 100 years, and the technology has improved at an incredible rate. With this advancement, eye tracking can even be done from a mobile phone, which allows large-scale eye tracking studies to be performed. Unfortunately, eye tracking analysis is still a time-consuming activity, especially at large scale, due to the high dependence on human expertise. This paper introduces saccade deviation indices (SDI) and saccade length indices (SLI), metrics to assist in faster analysis of eye tracking data. In addition, benchmark deviation vectors (BDV) are introduced to highlight repetitive path deviation in eye tracking data. To obtain these metrics, a benchmark user is used to determine where, and by how much, participants deviated from the expected scan path. A study was performed recording the eye movements of participants while using a mobile procurement application, and the results were compared with those of an expert usability study to establish the feasibility of the approach. Preliminary results indicate that the SDI and SLI can reduce the time an expert spends analysing eye tracking data. Additional time is saved by mapping BDVs back onto the user interfaces, highlighting possible usability issues and indicating where the user deviated from the expected scan path.
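Illustratively, deviation metrics of this general kind can be computed by comparing a participant's saccade vectors against the benchmark user's; the paper's exact definitions of SDI and SLI may differ from this sketch:

```python
import numpy as np

def saccade_vectors(scanpath):
    """Saccades as displacement vectors between consecutive fixations
    (scanpath: sequence of (x, y) fixation coordinates)."""
    p = np.asarray(scanpath, dtype=float)
    return p[1:] - p[:-1]

def deviation_indices(participant, benchmark):
    """Per-saccade deviation from a benchmark user: Euclidean distance
    between corresponding saccade vectors (an SDI-like quantity) and
    the absolute difference in saccade length (SLI-like). Both paths
    are assumed to have the same number of fixations."""
    sp, sb = saccade_vectors(participant), saccade_vectors(benchmark)
    sdi = np.linalg.norm(sp - sb, axis=1)
    sli = np.abs(np.linalg.norm(sp, axis=1) - np.linalg.norm(sb, axis=1))
    return sdi, sli
```

Large values flag where a participant's path diverged from the expected scan path, which is what the BDV overlay visualizes on the interface.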

Short Papers

Dealing with head-mounted eye-tracking data: comparison of a frame-by-frame and a fixation-based analysis, pp. 55-57
  Pieter Vansteenkiste; Greet Cardon; Matthieu Lenoir
Although software for analysing eye-tracking data has improved considerably over the last decades, analysis of gaze behaviour recorded with a head-mounted device is still challenging. In this paper, the gaze behaviour of six participants, cycling on four different roads, was analysed both frame by frame and in a fixation-based way. A Pearson correlation of 0.930 was found between the two methods, indicating good validity of the fixation-based method. For the analysis of gaze behaviour over an extended period of time, the fixation-based approach can save a lot of processing time.
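The reported agreement between the two methods is an ordinary Pearson correlation over paired measurements; interpreting the pairs as per-AOI dwell-time percentages from both analyses is an assumption:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two sets of paired measurements,
    e.g. per-AOI dwell-time percentages from the frame-by-frame and
    the fixation-based analysis of the same recording."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))
```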
Circular heat map transition diagram, pp. 58-61
  Tanja Blascheck; Michael Raschke; Thomas Ertl
Eye tracking experiments are a state-of-the-art technique for studying usability questions about graphical interfaces. Visualizations help to analyse eye tracking data by presenting it graphically. In this paper we contribute a new visualization technique that combines features of state-of-the-art visualizations for eye tracking data, such as heat maps and transition matrices, with a circular layout. The circular heat map transition diagram uses areas of interest (AOIs), ordered alphabetically on a circular layout, to show transitions between AOIs visually. The AOIs are colour-coded segments on a circle, where the colour is mapped to the fixation count in each AOI and the segment size corresponds to the fixation duration within the AOI. Furthermore, the transitions between and within a participant's AOIs are drawn as arrows. Key features of the circular heat map transition diagram are the extraction of similar eye movement patterns across participants, the graphical representation of transitions between AOIs, finding an appropriate AOI sequence, and investigating inefficient search behaviour. To make the technique usable in practice, we have implemented three variants: the AOI transition diagram, the AOI transition and completion time diagram, and the fixation transition diagram. We show their application in an exemplary analysis of an eye tracking experiment.
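The quantities the diagram encodes (fixation count per AOI for colour, total fixation duration for segment size, and transition counts for the arrows) can be derived from a fixation sequence, as in this illustrative sketch:

```python
from collections import Counter
import numpy as np

def aoi_statistics(fixations):
    """From a fixation sequence [(aoi_label, duration_ms), ...] derive
    the inputs of a circular heat-map transition diagram: fixation
    count per AOI, total fixation duration per AOI, and a matrix of
    transition counts between consecutive AOIs (alphabetical order)."""
    labels = sorted({a for a, _ in fixations})
    idx = {a: i for i, a in enumerate(labels)}
    counts = Counter(a for a, _ in fixations)
    durations = Counter()
    for a, d in fixations:
        durations[a] += d
    trans = np.zeros((len(labels), len(labels)), dtype=int)
    for (a, _), (b, _) in zip(fixations, fixations[1:]):
        trans[idx[a], idx[b]] += 1          # includes within-AOI moves
    return labels, counts, durations, trans
```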
Measuring the impact of subtitles on cognitive load: eye tracking and dynamic audiovisual texts, pp. 62-66
  Jan-Louis Kruger; Esté Hefer; Gordon Matthew
In educational design literature, it is often taken as fact that subtitles increase cognitive load (CL). This paper investigates this assumption experimentally by comparing various measures of CL when students watch a recorded academic lecture with or without subtitles. Since the measurement of cognitive load is by no means a simple matter, we first provide an overview of the different measurement techniques based on causality and objectivity. We measure CL by means of eye tracking (pupil dilation), electroencephalography (EEG), self-reported ratings of mental effort, frustration, comprehension effort and engagement, as well as performance measures (comprehension test).
   Our findings seem to indicate that the subtitled condition in fact created lower CL in terms of percentage change in pupil diameter (PCPD) for the stimulus, approaching significance. In the subtitled condition, PCPD also correlates significantly with participants' self-reported comprehension effort (their perception of how easy or difficult it was to understand the lecture). The EEG data, in turn, show a significantly higher level of frustration for the unsubtitled condition. Negative emotional states can be caused by situations of higher CL (or cognitive overload), leading to learner frustration and dissatisfaction with learning activities and one's own performance [16]. It could therefore be reasoned that participants had a higher CL in the absence of subtitles. The self-reported frustration levels correlate with the frustration measured by the EEG, as well as with the self-reported engagement levels for the subtitled group. We also found a significant correlation between self-reported engagement and both short- and long-term comprehension for the unsubtitled condition, but not for the subtitled condition. There was no significant difference in either short-term or long-term performance between the two groups, which suggests that subtitles, at the very least, do not result in cognitive overload.
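PCPD is simply pupil diameter expressed as a percentage change relative to a pre-stimulus baseline; a minimal sketch (the baseline-handling convention is an assumption):

```python
import numpy as np

def pcpd(pupil_diameter, baseline):
    """Percentage change in pupil diameter relative to a baseline
    measured before the stimulus; a common eye-tracking index of
    cognitive load. pupil_diameter: per-sample diameters (mm)."""
    d = np.asarray(pupil_diameter, dtype=float)
    return 100.0 * (d - baseline) / baseline
```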
A new interaction technique involving eye gaze tracker and scanning system, pp. 67-70
  Pradipta Biswas; Pat Langdon
This paper presents a new input interaction system for people with severe disabilities that combines eye gaze tracking and single-switch scanning interaction techniques. The system is faster than scanning-only systems and more comfortable to use than existing eye gaze tracking systems. We report results from a couple of user studies showing that the new system is as fast as existing eye tracking systems that do not involve scanning, and that participants with no prior experience of eye tracking could learn to use it within 10 minutes, although it demands higher mental effort than another new interaction modality, a gesture-based system.
EyeSketch: a drawing application for gaze control, pp. 71-74
  Henna Heikkilä
We present a gaze-driven drawing application called EyeSketch. Unlike earlier gaze-controlled drawing applications, ours uses drawing objects that can be moved and resized, and whose colour attributes can be changed after drawing. Tool and object selections are implemented with dwell buttons. Tools for moving and resizing are controlled with gaze gestures and by closing the eyes. Our gaze gestures are simple one-segment gestures that end outside the screen area; they give the direction for moving and the command to make an object smaller or larger. Closing the eyes signals the application to stop a moving object. In our evaluations, these gaze gestures were judged a usable interaction style for moving and resizing.
Real-time 3D gaze analysis in mobile applications, pp. 75-78
  Jan Hendrik Hammer; Michael Maurus; Jürgen Beyerer
This paper presents a system for real-time analysis of 3D gaze data arising in mobile applications. Our system allows users to move freely in a known 3D environment while their gaze is computed on arbitrarily shaped objects. The scanpath is analysed fully automatically using fixations and areas of interest -- all in 3D and in real time. Furthermore, the scanpath can be visualized in parallel in a 3D model of the environment, which makes it possible to observe a subject's scanning behaviour. We describe how this has been realized for a commercial off-the-shelf mobile eye tracker, using an inside-out tracking mechanism for head pose estimation. Moreover, we show examples of real gaze data collected in a museum.
Cycling around bends: the effect of cycling speed on steering and gaze behavior, p. 79
  Pieter Vansteenkiste; David Van Hamme; Greet Cardon; Matthieu Lenoir
Although it is generally accepted that visual information guides steering, there is no consensus on whether the tangent point strategy (fixating the point of the inner lane boundary with the highest curvature in the 2D retinal image) or the gaze sampling strategy (looking at points on the future path) is best suited to guide steering around bends. Unfortunately, visual behavior while negotiating curves has almost exclusively been tested in car driving situations, and no effect of driving speed has been described yet. The current research therefore investigates the effect of cycling speed on visual behavior while cycling around curves.
A hazard perception test for cycling children: an exploratory study, p. 80
  Pieter Vansteenkiste; Linus Zeuwts; Greet Cardon; Matthieu Lenoir
Traffic-related cognitive skills have been tested in young car drivers with a hazard perception (HP) test, but not in children, although children might benefit from such a test even more than young drivers. This paper therefore presents an exploratory study on the use of an HP test for assessing the cognitive traffic skills of young cyclists.
Is there a difference in visual search patterns between watching video clips of fencers on a computer screen and reacting on them on a life-sized screen? p. 81
  Linus Zeuwts; Gijs Debuyck; Pieter Vansteenkiste; Matthieu Lenoir
To compare the visual attention of multiple subjects in a sports situation, an identical stimulus has to be presented, which is often only possible using video images. Reacting to video clips projected on a large screen therefore seems to come closest to a real-life experiment. However, being able to move while watching a video screen implies that a head-mounted eye tracker has to be used, with time-consuming data analysis as a result. When participants only have to watch the video, remote eye-tracking devices can be used; with these devices, data analysis can be automated and is therefore much less time consuming. However, gaze behavior when watching videos on a computer screen might differ from gaze behavior when reacting to them on a life-sized projection screen. In the current experiment, this difference between the two experimental set-ups was tested for the gaze behavior of elite fencers.
When the screen is not enough: differences of art exploration in the museum and in the lab, p. 82
  Carlos Pedreira; Joaquin Navajas; Rodrigo Quian Quiroga
The study of visual perception and higher-order cognitive processes, such as memory, has mainly been performed in the controlled experimental environments of researchers' laboratories. However, laboratory experiments cannot capture naturalistic behaviours and are therefore a limited approximation of behaviour in real conditions. To study how the environment affects behavioural studies, we performed an exploratory art experiment both in the controlled environment of our laboratory and in the British Museum in London.