
Proceedings of AUIC'08, Australasian User Interface Conference

Fullname: Proceedings of the ninth conference on Australasian user interface -- Volume 76
Editors: Beryl Plimmer; Gerald Weber
Location: Wollongong, Australia
Dates: 2008-Jan-22 to 2008-Jan-25
Standard No: ISBN 978-1-920682-57-6; hcibib: AUIC08
  1. Keynote
  2. Contributed papers: novel interaction
  3. Contributed papers: collaborative interaction
  4. Contributed papers: user interface analysis
  5. Contributed papers: usability evaluation

Keynote

Simple, social, ethical and beautiful: requirements for UIs in the home (pp. 3-9)
  Andrew F. Monk
The paper sets out some requirements for the user interfaces of technologies that are to be used in the home. Each theme is illustrated with some of the current research being carried out at CUHTec. Interfaces for domestic technologies should be: Simple -- we are willing to expend only slight cognitive resources in many domestic contexts (this theme is illustrated by the extreme example of assistive technology for people with dementia); Social -- much of what we do in the home is purely for social enjoyment (illustrated by our work to develop quantitative measures of fun based on the behaviour of groups sharing photographs); Ethical -- domestic caring technologies pose serious issues of privacy and informed consent (illustrated by our work on social dependability in the design of telecare), and Beautiful -- the objects in our homes speak to our tastes and values (illustrated by our work with Jayne Wallace, a digital jeweller).

Contributed papers: novel interaction

Multi-sensory game interface improves player satisfaction but not performance (pp. 13-18)
  Keith V. Nesbitt; Ian Hoskens
Players of computer games tend to be discerning about game quality. So, to be successful, game designers need to ensure that players receive the best possible experience. A growing trend in the design of game interfaces is the use of multi-sensory (visual, auditory and haptic) interfaces to broaden the experience for players. The assumption is that, by displaying different information to different senses, it is possible to increase the amount of information available to players and so assist their performance. To test this assumption, the first-person shooter game, "Quake 3: Arena", was evaluated in four modes: with only visual cues; with both visual and auditory cues; with both visual and haptic cues; and with visual, auditory and haptic cues. Players reported improved 'immersion', 'confidence' and 'satisfaction' when additional sensory cues were included; the multi-sensory game interface thus seemed to improve the player's experience, but there was no statistically significant improvement in their performance. We suspect that a better design of the information being displayed for each sense may be required if multi-sensory displays are to significantly improve the player's performance on specific game tasks.
User evaluation of god-like interaction techniques (pp. 19-27)
  Aaron Stafford; Wayne Piekarski
God-like interaction is a metaphor for improved communication of situational and navigational information between outdoor users, equipped with mobile augmented reality systems, and indoor users, equipped with tabletop projector display systems. This paper presents the results of a user evaluation that explores users' experience with a subset of the god-like interaction metaphor. The tasks performed by the participants were designed around known problems in the AR community. The results of the evaluation are intended to help define the boundaries within which the god-like interaction metaphor is practical for communication of navigational and situational information. This paper reports the findings of the evaluation as well as recommendations for further development of the interaction metaphor.
AreWeThereYet?: a temporally aware media player (pp. 29-32)
  Matt Adcock; Jaewoo Chung; Chris Schmandt
In this paper we describe the design and implementation of the AreWeThereYet? (AWTY) Player -- a (digital) audio player that composes a program of audio media that is extremely likely to fit within the user's available listening time. AWTY uses time compression and track selection techniques to help the listener make more efficient use of their time. More importantly, it possesses an awareness of the listener's temporal context. It forms an estimate of the available listening time and uses this prediction to compose a playlist of a suitable length. We hope that this research prototype will inspire others to further investigate the ways in which temporally aware computing might be employed.
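The selection-and-compression strategy described in the abstract can be sketched as follows; this is a minimal illustration under assumed behaviour (greedy track selection plus a capped playback speedup), not the AWTY implementation, and all names are invented:

```python
# Illustrative sketch (not the authors' code): compose a playlist that
# fits an estimated listening time, using track selection first and a
# bounded time-compression factor as a final adjustment.

def compose_playlist(tracks, available_s, max_speedup=1.15):
    """tracks: list of (title, duration_s). Returns (playlist, speedup)."""
    playlist, total = [], 0.0
    # Greedy selection: add tracks while they still fit, allowing for
    # the maximum tolerable playback speedup.
    for title, dur in sorted(tracks, key=lambda t: t[1]):
        if total + dur <= available_s * max_speedup:
            playlist.append((title, dur))
            total += dur
    # Time compression: speed up playback just enough to fit, capped.
    speedup = min(max(total / available_s, 1.0), max_speedup)
    return playlist, round(speedup, 3)
```

A real system would also weight track selection by listener preference rather than duration alone.
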

Contributed papers: collaborative interaction

A collaborative guidance case study (pp. 33-42)
  Duncan Stevenson; Jane Li; Jocelyn Smith; Matthew Hutchins
This paper reports on a collaborative guidance case study, which investigated the use of remote pointing and drawing technologies in a system designed for spatially focussed collaborative tasks. Four guidance technologies were available to the participants -- pointing and drawing over video of the remote site and pointing and drawing into the remote workplace itself. The experimental task was designed to mimic the actions observed in an actual application setting. The purpose of the study was to see how the participants would use the technology and how they would collaborate with each other during the performance of the task.
   Specifically, the experiment looked at how the participants selected from the choice of guidance technology, and changed their selection, as the task progressed. It looked at how they used the technology and how they created working, 3-dimensional, shared frames of reference for the task. Finally it explored the way the system supported emerging collaborative behaviour between each pair of participants.
   The paper concludes that the participants were able to make reasoned choices about their selection of guidance technology, and that they evolved effective guidance strategies as the task progressed. They adapted their understanding of each other's frame of reference with respect to the task by focusing on reference objects created during the task. Finally, the paper concludes that the experimental system did indeed foster emerging collaborative behaviour between the participants.
Enabling co-located ad-hoc collaboration on shared displays (pp. 43-50)
  Peter Hutterer; Bruce H. Thomas
All major desktop environments are designed around the assumption of having a single system cursor and a single keyboard. Co-located multi-user interaction on a standard desktop requires users to physically hand over the devices. Existing collaboration applications require complicated and limiting setups, and no collaboration application or toolkit supports ad-hoc transition from a traditional single-user desktop to a multi-user collaboration environment without restarting applications.
   Our Multi-Pointer X server (MPX) allows easy transition between a single-user desktop and a multi-user collaboration environment. Pointer devices and keyboards can be added and removed at any time. Independent cursors and keyboard foci for these devices allow users to interact with and type into multiple applications simultaneously. MPX is compatible with any legacy X application and resolves ambiguity in legacy APIs using the novel "ClientPointer" principle. MPX also provides new APIs for multi-user applications and thus enables fluid integration of single-user and multi-user environments.
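The "ClientPointer" idea -- answering an ambiguous legacy query with the state of the one pointer assigned to the requesting client -- can be modelled in a few lines; this is a toy sketch with invented names, not the actual X server API:

```python
# Toy model of the "ClientPointer" principle (names are illustrative,
# not the real X11 API): each connected client is assigned one of the
# pointers, and ambiguous legacy queries are answered for that pointer.

class MultiPointerServer:
    def __init__(self):
        self.pointers = {}        # pointer_id -> (x, y)
        self.client_pointer = {}  # client_id  -> pointer_id

    def add_pointer(self, pid, pos=(0, 0)):
        self.pointers[pid] = pos

    def set_client_pointer(self, client, pid):
        self.client_pointer[client] = pid

    def query_pointer(self, client):
        """Legacy single-pointer call: resolve via the ClientPointer,
        falling back to the first pointer if none was assigned."""
        pid = self.client_pointer.get(client, next(iter(self.pointers)))
        return self.pointers[pid]
```

The point of the fallback is that an unmodified single-user application keeps working without ever knowing other pointers exist.
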
Public and private workspaces on tabletop displays (pp. 51-54)
  Ross T. Smith; Wayne Piekarski
As co-operative work environments are becoming more popular, new tools and techniques have been emerging that allow users to perform collaborative tasks more efficiently. We have been exploring new interaction techniques made possible by using a multi-view display as a tabletop surface. This paper presents the concept of public and private working areas for multi-view display environments, and presents a taxonomy that allows us to better understand how they can be applied in computer supported collaborative work environments. We have formally defined and categorized various multi-view characteristics, along with possible uses and applications. We also created a display mask that allows an LCD monitor to be used as a multi-view display from four viewing directions. Furthermore, we discuss our initial implementation of a window manager that utilizes the taxonomy, demonstrating some of the interaction techniques that are possible using a multi-view tabletop display.

Contributed papers: user interface analysis

Automated usability testing framework (pp. 55-64)
  Fiora T. W. Au; Simon Baker; Ian Warren; Gillian Dobbie
Handheld device applications with poor usability can reduce the productivity of users and incur costs for businesses; thus, usability testing should play a vital role in application development. Conventional usability testing methodologies, such as formal user testing, can be expensive, time consuming and labour intensive; less resource-demanding alternatives can yield unreliable results. Automating aspects of usability testing would improve its efficiency and make it more practical to perform throughout development.
   An automated usability testing tool should capture as input the properties of an application's graphical user interface, the sequence of user actions as they use the application to achieve particular tasks, their behaviour and comments, as well as a description of these tasks. The tool should evaluate both the static and dynamic properties of the interface, examine navigational burden and suggest modifications or templates that would improve usability. Results should be quick and easy to interpret, and be understandable by personnel other than specialised testers.
   Several existing tools that are typical of the tools available today meet some but not all of these requirements. In this paper we describe the design of the HUIA testing framework, which aims to meet as many of these requirements as possible.
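One way such a framework could score a recorded action sequence against a task description is an edit-distance measure of navigational burden; the sketch below is an assumed illustration, not necessarily the metric HUIA uses, and the action names are invented:

```python
# Illustrative metric (not necessarily HUIA's): score navigational
# burden as the edit distance between the optimal action sequence for
# a task and the sequence a user actually performed.

def edit_distance(expected, actual):
    m, n = len(expected), len(actual)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if expected[i - 1] == actual[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # missed expected step
                          d[i][j - 1] + 1,       # superfluous user action
                          d[i - 1][j - 1] + cost)  # substituted action
    return d[m][n]

optimal = ["open_menu", "tap_contacts", "tap_add", "save"]
observed = ["open_menu", "tap_settings", "back", "tap_contacts", "tap_add", "save"]
```

Here the two detour actions give a distance of 2; a score of 0 would mean the user followed the optimal path exactly.
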
Automated reverse engineering of hard-coded GUI layouts (pp. 65-73)
  Christof Lutteroth
Most GUIs are specified in the form of source code, which hard-codes information relating to the layout of graphical controls. This representation is very low-level, and makes GUIs hard to maintain. We suggest a reverse engineering approach that is able to recover a higher-level layout representation of a hardcoded GUI using the Auckland Layout Model, which is based on the mathematical notion of linear programming. This approach allows developers to use existing code and existing tools, as well as specifications on a higher level of abstraction. We show how existing hard-coded GUIs can be extended to support dynamic layout adjustment with very little effort, and how GUIs can be beautified automatically during reverse engineering.
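The core move -- recovering higher-level constraints from hard-coded pixel positions so the layout can adapt -- can be illustrated with a toy example; the real approach solves a linear program over the Auckland Layout Model, whereas this sketch (all names invented) keeps only proportional constraints:

```python
# Toy reconstruction sketch: hard-coded pixel edges become proportional
# constraints, so the layout can be re-solved when the window resizes.
# The actual Auckland Layout Model solves a linear program instead.

def recover_tabstops(controls, window_w):
    """controls: {name: (x, width)} hard-coded in source. Returns each
    control's left/right edges as fractions of the original width."""
    return {name: (x / window_w, (x + w) / window_w)
            for name, (x, w) in controls.items()}

def relayout(tabstops, new_w):
    """Re-solve the recovered constraints for a new window width."""
    return {name: (round(l * new_w), round(r * new_w) - round(l * new_w))
            for name, (l, r) in tabstops.items()}

hardcoded = {"label": (10, 90), "field": (100, 300)}  # mined from GUI source
stops = recover_tabstops(hardcoded, window_w=400)
```

Re-solving with the original width reproduces the hard-coded layout exactly, while any other width yields a dynamically adjusted one.
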
Annotating UI architecture with actual use (pp. 75-78)
  Neil Ramsay; Stuart Marshall; Alex Potanin
Developing an appropriate user interface architecture for supporting a system's tasks is critical to the system's overall usability. While there are principles to guide architectural design, confirming that the correct decisions are made can involve the collection and analysis of lots of test data. We are developing a testing environment that will automatically compare and contrast the actual user interaction data against the existing user interface architectural models. This can help a designer more clearly understand how the actual tasks performed relate to the proposed architecture, and enhances feedback between different design artifacts.

Contributed papers: usability evaluation

The effects of menu parallelism on visual search and selection (pp. 79-84)
  Philip Quinn; Andy Cockburn
Menus and toolbars are the primary controls for issuing commands in modern interfaces. As software systems continue to support increasingly large command sets, the user's task of locating the desired command control becomes increasingly time-consuming. Many factors influence a user's ability to visually search for and select a target in a set of menus or toolbars, one of which is the degree of parallelism in the display arrangement. A fully parallel layout will show all commands at once, allowing the user to visually scan all items without needing to manipulate the interface, but there is a risk that this will harm performance due to excessive visual clutter. At the other extreme, a fully serial display minimises visual clutter by displaying only one item at a time, but separate interface manipulations are necessary to display each item. This paper examines the effects of increasing the number of items displayed to users in menus through parallelism -- displaying multiple menus simultaneously, spanning both horizontally and vertically -- and compares it to traditional menus and pure serial display menus. We found that moving from serial to a partially parallel (traditional) menu significantly improved user performance, but moving from a partially parallel to a fully parallel menu design had more ambiguous results. The results have direct design implications for the layout of command interfaces.
The "mental map" versus "static aesthetic" compromise in dynamic graphs: a user study (pp. 85-93)
  Peter Saffrey; Helen Purchase
The design of automatic layout algorithms for single graphs is a well established field, and some recent studies show how these algorithms affect human understanding. By contrast, layout algorithms for graphs that change over time are relatively immature, and few studies exist to evaluate their effectiveness empirically. This paper presents two new dynamic graph layout algorithms and empirical investigations of how effective these algorithms are with respect to human understanding. Central to each algorithm is the "mental map": the degree to which the layout supports continuous understanding. This work aims to evaluate the importance of the mental map, alongside traditional static graph aesthetics, in answering questions about dynamic graphs. We discover that a simple concept of the mental map is not sufficient for increasing understanding of the graph.
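One common operationalisation of the mental map -- though not necessarily the measure used in this paper -- is layout stability between consecutive frames, e.g. the average displacement of shared nodes:

```python
# One way to quantify "mental map" preservation (an illustration, not
# this paper's exact measure): average Euclidean displacement of nodes
# between consecutive layouts of a dynamic graph; smaller is more stable.

import math

def mental_map_cost(layout_a, layout_b):
    """layout_*: {node: (x, y)}. Averages displacement over shared nodes."""
    shared = layout_a.keys() & layout_b.keys()
    if not shared:
        return 0.0
    total = sum(math.dist(layout_a[n], layout_b[n]) for n in shared)
    return total / len(shared)
```

A dynamic layout algorithm can then trade this cost off against static aesthetics such as edge crossings, which is exactly the compromise the study investigates.
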