
ACM SIGCHI 2013 Symposium on Engineering Interactive Computing Systems

Fullname: Proceedings of the 5th ACM SIGCHI Symposium on Engineering Interactive Computing Systems
Editors: Peter Forbrig; Prasun Dewan; Michael Harrison; Kris Luyten
Location: London, England
Dates: 2013-Jun-24 to 2013-Jun-27
Standard No: ISBN 978-1-4503-2138-9; hcibib: EICS13
  1. Keynote address
  2. Adaptation
  3. Design and implementation process
  4. Analysis
  5. Keynote address
  6. Posters and demonstrations
  7. Doctoral consortium
  8. Gesture, multi-touch, tangibles, and speech
  9. Collaboration
  10. Empirical techniques
  11. Keynote address
  12. Measures and metrics
  13. Design and implementation experience
  14. Tutorial
  15. Workshops

Keynote address

Using the crowd to understand and adapt user interfaces, pp. 1-2
  Jeffrey Nichols
Engineering user interfaces has long implied careful design carried out using formal methods applied by human experts and automated systems. While these methods have advantages, especially for creating interfaces that have the flexibility to adapt to users and situations, they can also be time-consuming and expensive, and relatively few experts are able to apply them effectively. In particular, many engineering methods require the construction of one or more models, each of which can only be created through many hours of work by an expert. In this keynote, I will explore how social and human computation methods can be applied to reduce the barriers to achieving user interface flexibility and ultimately to using engineering methods. In a first example, I will illustrate how groups of users can work together to modify and improve user interfaces through end-user programming examples from the CoScripter and Highlight projects. I will then discuss some initial work on using a crowd of novice workers to create models of existing user interfaces. I hope this keynote will inspire the engineering community to consider alternate approaches that creatively combine formal methods with the power of crowds.


RBUIS: simplifying enterprise application user interfaces through engineering role-based adaptive behavior, pp. 3-12
  Pierre A. Akiki; Arosha K. Bandara; Yijun Yu
Enterprise applications such as customer relationship management (CRM) and enterprise resource planning (ERP) are very large scale, encompassing millions of lines of code and thousands of user interfaces (UIs). These applications have to be sold as feature-bloated off-the-shelf products to be used by people with diverse needs in terms of required feature-set and layout preferences, based on aspects such as skills, culture, etc. Although several approaches have been proposed for adapting UIs to various contexts-of-use, little work has focused on simplifying enterprise application UIs through engineering adaptive behavior. We define UI simplification as a mechanism for increasing usability through adaptive behavior by providing users with a minimal feature-set and an optimal layout based on the context-of-use. In this paper we present Role-Based UI Simplification (RBUIS), a tool-supported approach based on our CEDAR architecture for simplifying enterprise application UIs through engineering role-based adaptive behavior. RBUIS is integrated into our general-purpose platform for developing adaptive model-driven enterprise UIs. Our approach is validated from the technical and end-user perspectives by applying it to the development of a prototype enterprise application and user-testing the outcome.
Model-driven development and evolution of customized user interfaces, pp. 13-22
  Andreas Pleuss; Stefan Wollny; Goetz Botterweck
One of the main benefits of model-driven development of User Interfaces (UIs) is the increase in efficiency and consistency when developing multiple variants of a UI. For instance, multiple UIs for different target users, platforms, devices, or for whole product families can be generated from the same abstract models. However, purely generated UIs are not always sufficient, as there is often a need to customize the individual UI variants, e.g., due to usability issues or specific customer requirements.
   In this paper we present a model-driven approach for the development of UI families with systematic support for customizations. The approach supports customizing all aspects of a UI (UI elements, screens, navigation, etc.) and storing the customizations in specific models. As a result, a UI family can be evolved more efficiently because individual UI variants can be re-generated (after some changes have been applied to the family) without losing any previously made customizations. We demonstrate this with thirty highly customized real-world products from a commercial family of web information systems called HIS-GX/QIS.
CrowdAdapt: enabling crowdsourced web page adaptation for individual viewing conditions and preferences, pp. 23-32
  Michael Nebeling; Maximilian Speicher; Moira C. Norrie
The range and growing diversity of new devices makes it increasingly difficult to design suitable web interfaces for every browsing client. We present CrowdAdapt -- a context-aware web design tool that supports developers in the creation of adaptive layout solutions for a wide variety of use contexts by crowdsourcing web site adaptations designed for individual viewing conditions and preferences. We focus on one experiment we conducted for an existing news web site using CrowdAdapt (i) to explore the design space in terms of layout alternatives created by the crowd, (ii) to identify adaptation preferences with respect to different viewing situations, and (iii) to assess the perceived quality of crowd-generated layouts in terms of reading comfort and efficiency. The results suggest that crowdsourced adaptation could lead to very flexible web interfaces informed by individual end-user requirements. In particular, scenarios such as the adaptation to large-screen contexts that the majority of web sites fail to address could be supported with relatively little effort.
ACCESS: a technical framework for adaptive accessibility support, pp. 33-42
  Michael Heron; Vicki L. Hanson; Ian W. Ricketts
In this paper we outline ACCESS -- an open source, cross-platform, plug-in enabled software framework designed to provide a mapping between user needs and system configuration. The framework inverts the responsibility for making system configuration changes so that it lies with the computer rather than the user. In turn, the responsibility for identifying when changes should be made is delegated to the plug-ins that have been incorporated into the framework. User feedback is solicited by a simple reinforcement mechanism through which individuals can like or dislike the adaptations that are made. User interaction adjusts the probabilities that plug-ins will be selected in the future, and also allows plug-ins to adjust their own algorithms in line with user preferences. Results of experimental testing are encouraging, and show strong support for the perceived benefit, tractability and appropriateness of the framework.
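The like/dislike reinforcement loop described in the abstract can be sketched as weighted plug-in selection. This is an illustrative reconstruction only; the class and function names are hypothetical and not part of the ACCESS framework's actual API.

```python
import random

class AdaptationPlugin:
    """A hypothetical accessibility plug-in with a selection weight."""
    def __init__(self, name, weight=1.0):
        self.name = name
        self.weight = weight

def pick_plugin(plugins, rng=random.random):
    """Select a plug-in with probability proportional to its weight."""
    total = sum(p.weight for p in plugins)
    r = rng() * total
    for p in plugins:
        r -= p.weight
        if r <= 0:
            return p
    return plugins[-1]

def feedback(plugin, liked, factor=1.25):
    """A 'like' raises a plug-in's future selection probability;
    a 'dislike' lowers it."""
    plugin.weight *= factor if liked else 1.0 / factor
```

Under this sketch, a plug-in whose adaptations are repeatedly disliked is proposed less and less often, which matches the probability-adjustment behaviour the abstract describes.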
An environment for designing and sharing adaptation rules for accessible applications, pp. 43-48
  Raúl Miñón; Fabio Paternò; Myriam Arrue
In this work we present a design space for adaptation rules for applications accessible to people with special needs, and an environment supporting the sharing of such rules across various applications. The adaptation rules are classified according to the target user disabilities, as well as other relevant criteria useful to ease their integration in other design tools.

Design and implementation process

A constructive approach for design space exploration, pp. 49-58
  Anke Dittmar; Stefan Piehler
The co-evolution of different kinds of external representations is essential in Human-Centered Design. It helps design teams to interleave different design activities and to view a design problem from different perspectives. The paper investigates a coupling of representations for Design Rationale, formal HCI models, and prototypical implementations for a more effective co-exploration of problem and design spaces with both analytical and empirical means. Deliberated underdesign and parallel, model-guided prototyping are proposed as techniques to systematically integrate exploratory design steps into evolutionary prototyping. The general approach is instantiated with QOC diagrams, HOPS models, and Java implementations. HOPS models are used for two purposes: to create 'throw-away extensions' of an existing prototype and to specify design goals and constraints. The animation tool allows designers to explore and reflect on the model-guided prototypes. A case study demonstrates the applicability of the approach. Implications for related design practices are discussed.
IOWAState: implementation models and design patterns for identity-aware user interfaces based on state machines, pp. 59-68
  Yann Laurillau
The emergence of interactive surfaces and technologies able to differentiate users allows the design and development of Identity-Aware (IA) interfaces, a new and richer set of user interfaces (UIs). Such user interfaces are able to adapt their behavior depending on who is interacting. However, existing implementations, mostly software toolkits, are still ad hoc and mostly based on existing GUI toolkits that are not designed to support user differentiation. The problem is that the development of IA interfaces is more complex than the development of traditional UIs and still requires extra programming effort. To address these issues, we present a set of implementation models, named IOWAState models, to specify the behavior (as state machines), the architecture, and the components of IA interfaces. In addition, based on our IOWAState models and a classification of IA user interfaces, we detail a set of design patterns for implementing the behavior of IA user interfaces.
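The core idea of an identity-aware state machine can be sketched as transitions keyed by who performs an event, not just by the event itself. This is a minimal illustration under assumed names, not the IOWAState models themselves.

```python
class IdentityAwareWidget:
    """Minimal sketch of a state machine whose transitions are keyed by
    (state, event, user role) -- illustrative only, not the IOWAState API."""
    def __init__(self, initial="idle"):
        self.state = initial
        self.transitions = {}  # (state, event, role) -> next state

    def add_transition(self, state, event, role, next_state):
        self.transitions[(state, event, role)] = next_state

    def dispatch(self, event, user_role):
        """Fire an event on behalf of an identified user; combinations
        with no registered transition leave the state unchanged."""
        key = (self.state, event, user_role)
        self.state = self.transitions.get(key, self.state)
        return self.state
```

For instance, on a shared tabletop the same "delete" touch might move a widget into a confirmation state for a teacher while being ignored for a student, because the identity is part of the transition key.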
Guidelines for integrating personas into software engineering tools, pp. 69-74
  Shamal Faily; John Lyle
Personas have attracted the interest of many in the usability and software engineering communities. To date, however, there has been little work illustrating how personas can be integrated into software tools to support these engineering activities. This paper presents four guidelines that software engineering tools should incorporate to support the design and evolution of personas. These guidelines are grounded in our experiences modifying the open-source CAIRIS Requirements Management tool to support design and development activities for the EU FP7 webinos project.
A methodology for generating an assistive system for smart environments based on contextual activity patterns, pp. 75-80
  Michael Zaki; Peter Forbrig
Despite the existence of various approaches addressing the development of today's interactive systems, smart environments impose an additional set of challenges for the designer. The main tenet of those environments is to deliver proper assistance to resident users who are performing their daily life tasks. However, an assistive system to be deployed in a smart environment has to meet some crucial requirements in order to successfully accomplish its mission. Thus, a well-defined methodology for developing such a system for a given smart environment is highly beneficial. In this paper, we present a development methodology enabling the generation of tailored (user-specific) assistive user interfaces based on contextual activity patterns. We illustrate, step by step, the various stages through which the development of a supportive system for smart environments has to pass.


Verification of interactive software for medical devices: PCA infusion pumps and FDA regulation as an example, pp. 81-90
  Paolo Masci; Anaheed Ayoub; Paul Curzon; Michael D. Harrison; Insup Lee; Harold Thimbleby
Medical device regulators such as the US Food and Drug Administration (FDA) aim to make sure that medical devices are reasonably safe before entering the market. To expedite the approval process and make it more uniform and rigorous, regulators are considering the development of reference models that encapsulate safety requirements against which software incorporated into medical devices must be verified. Safety, insofar as it relates to interactive systems and their regulation, is generally a neglected topic, particularly in the context of medical systems. An example is presented here that illustrates how the interactive behaviour of a commercial Patient Controlled Analgesia (PCA) infusion pump can be verified against a reference model. Infusion pumps are medical devices used in healthcare to deliver drugs to patients, and PCA pumps are particular infusion pump devices that are often used to provide pain relief to patients on demand. The reference model encapsulates the Generic PCA safety requirements provided by the FDA, and the verification is performed using a refinement approach. The contribution of this work is that it demonstrates a concise and semantically unambiguous approach to representing what a regulator's requirements for a particular interactive device might be, in this case focusing on user-interface requirements. It provides an inspectable and repeatable process for demonstrating that the requirements are satisfied. It has the potential to replace the considerable documentation produced at the moment by a succinct document that can be subjected to careful and systematic analysis.
Modelling safety properties of interactive medical systems, pp. 91-100
  Judy Bowen; Steve Reeves
Formally modelling the software functionality and interactivity of safety-critical devices allows us to prove properties about their behaviours and be certain that they will respond to user interaction correctly. In domains such as medical environments, where many different devices may be used, it is equally important to ensure that all devices used adhere to a set of safety, and other, principles designed for that environment. In this paper we look at modelling important properties of interactive medical devices including safety considerations mandated by their users. We use ProZ for model checking to ensure that properties stated in temporal logic hold, and also to check invariants. In this way we gain confidence that important properties do hold of the device, and that models of particular devices adhere to the properties described.
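The kind of safety-invariant checking these two papers describe can be illustrated by exhaustively exploring the reachable states of a toy device model. The pump model and limit below are invented for illustration; the papers use real device models with ProZ/temporal logic and FDA requirements, not this sketch.

```python
from collections import deque

MAX_RATE = 1200  # hypothetical hard limit in ml/h, not a real device constant

def step(rate, action):
    """Toy infusion-pump model: 'up'/'down' change the rate in 100 ml/h
    steps; the model clamps at the limits (illustrative only)."""
    if action == "up":
        return min(rate + 100, MAX_RATE)
    if action == "down":
        return max(rate - 100, 0)
    return rate

def check_invariant(initial=0, actions=("up", "down")):
    """Exhaustively explore all reachable states and check the safety
    invariant 0 <= rate <= MAX_RATE, in the spirit of model checking."""
    seen, queue = {initial}, deque([initial])
    while queue:
        rate = queue.popleft()
        if not (0 <= rate <= MAX_RATE):
            return False  # invariant violated in a reachable state
        for a in actions:
            nxt = step(rate, a)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True
```

A model checker such as ProZ does essentially this state-space exploration, but over full formal models and richer temporal-logic properties rather than a single numeric invariant.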
Applying theorem discovery to automatically find and check usability heuristics, pp. 101-106
  Andy Gimblett; Harold Thimbleby
Theorem discovery is a novel technique for the automatic analysis of state-space-based models of user interfaces, in which possible sequences of user actions are systematically computed and compared for equivalence, or close equivalence, of effect. Using this technique, we noticed a previously undetected problem with the behaviour of many widely-used inexpensive off-the-shelf interactive devices. Specifically, on many calculators, pressing the decimal point key has no effect on the display, thus unnecessarily breaking the well known usability heuristic that an interactive system should provide appropriate feedback to the user, and potentially causing unnecessary confusion that may lead to error. While this insight is interesting in itself, it is also of significance as a simple but nonetheless non-trivial example of the power and potential of theorem discovery as an analytical technique, not least because the problem -- obvious once pointed out -- has apparently remained undetected and unremarked upon for many years.
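The systematic comparison of action sequences can be sketched on a toy calculator entry model: enumerate key sequences, compare each sequence with the same sequence plus one extra press, and report presses that had no visible effect. This is a hypothetical miniature, not the authors' tool.

```python
from itertools import product

def entry(keys):
    """Toy calculator number-entry model: digits append to the display;
    a second decimal point is silently ignored (the no-feedback
    behaviour the abstract discusses)."""
    display = ""
    for k in keys:
        if k == "." and "." in display:
            continue  # ignored press: no display change, no feedback
        display += k
    return display

def discover_no_ops(alphabet=("1", "2", "."), length=3):
    """Systematically compare sequences that differ by one extra key
    press and collect presses that had no effect on the display."""
    no_ops = set()
    for seq in product(alphabet, repeat=length):
        for extra in alphabet:
            if entry(seq + (extra,)) == entry(seq):
                no_ops.add((seq, extra))
    return no_ops
```

Running the sketch flags exactly the sequences where a decimal-point press changes nothing on screen, i.e. the feedback-heuristic violation the paper found automatically at much larger scale.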
Combining static and dynamic analysis for the reverse engineering of web applications, pp. 107-112
  Carlos E. Silva; José C. Campos
Software has become so complex that it is increasingly hard to have a complete understanding of how a particular system will behave. Web applications, their user interfaces in particular, are built with a wide variety of technologies making them particularly hard to debug and maintain. Reverse engineering techniques, either through static analysis of the code or dynamic analysis of the running application, can be used to help gain this understanding. Each type of technique has its limitations. With static analysis it is difficult to have good coverage of highly dynamic applications, while dynamic analysis faces problems with guaranteeing that generated models fully capture the behavior of the system. This paper proposes a new hybrid approach for the reverse engineering of web applications' user interfaces. The approach combines dynamic analysis of the application at runtime with static analysis of the source code of the event handlers found during interaction. Information derived from the source code is both directly added to the generated models and used to guide the dynamic analysis.
Timisto: a technique to extract usage sequences from storyboards, pp. 113-118
  Joël Vogt; Mieke Haesen; Kris Luyten; Karin Coninx; Andreas Meier
Storyboarding is a technique that is often used for the conception of new interactive systems. A storyboard illustrates graphically how a system is used by its users and what a typical context of usage is. Although the informal notation of a storyboard stimulates creativity and makes storyboards easy for everyone to understand, it is more difficult to integrate them into later steps of the engineering process. We present an approach, "Time In Storyboards" (Timisto), to extract valuable information on how various interactions with the system are positioned in time with respect to each other. Timisto does not interfere with the creative process of storyboarding, but maximizes the structured information about time that can be deduced from a storyboard.

Keynote address

Design for human interaction: communication as a special case of misunderstanding, pp. 119-120
  Patrick G. T. Healey
In order to engineer effective and usable interactive computing systems we need to consider not just the human-system interface but the human-human interface. The success of many technologies depends not just on how easy they are to understand and operate but also on how effectively they integrate with the wider ecology of our interactions with others. This point has been made especially clearly by ethnomethodological studies of the use of technology in workplace contexts (e.g. Heath and Luff, 2000). It also helps to explain why, for example, the evolution of video and music technology has been driven as much by ease of sharing as by image or audio quality, and why some technologies, such as SMS messaging, succeed despite having a poor human-system interface. As Kang (2000) succinctly put it, "The killer application of the internet is other people" (p. 1150, cited in Bargh and McKenna 2004).
   If technology acts, by accident or by design, as an interface between people then we might try to generalise human-system approaches to design by treating them as the basic building blocks of the larger human-system-human interface. This talk will argue, however, that this kind of 'scaling-up' approach is insufficient. In particular, the generalization of human-system models to contexts which involve multiple participants leads us to ignore some critical processes that underpin the effectiveness of human-human interaction. More specifically, a focus on the cognitive, behavioural or communicative capabilities of individual human beings does not provide an adequate understanding of how different people co-ordinate their understanding of what they are doing through communication.
   This line of argument suggests that in addition to understanding the broader social context of interactive systems we can also benefit from focusing on the specific low level mechanisms that underpin human interaction. The recurrent need to co-ordinate understanding amongst multiple participants, across a variety of contexts, highlights the importance of the processes by which people collaborate to detect and recover from misunderstandings using whatever resources are to hand (Sacks, Schegloff, and Jefferson, 1974; Clark 1996, Healey, 2008).
   This approach can feed into the design of interactive systems in a number of ways. It moves our understanding of human interaction beyond 'informational bandwidth' and 'psychological bandwidth' approaches. It brings into focus co-ordination processes that are often impeded even by tools that are specifically designed to support human communication. This can provide new ideas for design, a diagnostic process for requirements gathering and formative analysis and comparative metrics for assessing how a technology impacts on the success of communication (Healey, Colman and Thirlwell, 2005).

Posters and demonstrations

Crowdsourcing user interface adaptations for minimizing the bloat in enterprise applications, pp. 121-126
  Pierre Akiki; Arosha Bandara; Yijun Yu
Bloated software systems encompass a large number of features, resulting in an increase in visual complexity. Enterprise applications are a common example of such systems. Since many users only use a distinct subset of the available features, providing a mechanism to tailor user interfaces according to each user's needs helps decrease the bloat, thereby reducing the visual complexity. Crowdsourcing can be a means of speeding up the adaptation process by engaging and leveraging the enterprise application communities. This paper presents a tool-supported model-driven mechanism for crowdsourcing user interface adaptations. We evaluate our proposed mechanism and tool through a basic preliminary user study.
Supporting elastic collaboration: integration of collaboration components in dynamic contexts, pp. 127-132
  Jordan Janeiro; Stephan Lukosch; Stefan Radomski; Mathias Johanson; Massimo Mecella; Jonas Larsson
In dynamic problem-solving situations, groups and organizations have to become more flexible and adapt collaborative workspaces according to their needs. New paradigms propose bridging the two opposing perspectives, process-driven and ad hoc, to achieve such flexibility. However, a key challenge lies in the dynamic integration of groupware tools in the same collaborative workspace. This paper proposes a collaborative workspace (Elgar) that supports the Elastic Collaboration concept, and a standard interface, named Elastic Collaboration Components, to realize the integration of groupware tools. The paper illustrates the use of such a flexible collaborative workspace and of groupware tools in a machine diagnosis scenario that requires collaboration.
Visualization of physical library shelves to facilitate collection management and retrieval, pp. 133-138
  Matthew Jervis; Masood Masoodian
Electronic cataloguing systems are used by libraries to provide search mechanisms for finding books in their collections. These systems provide limited, if any, tools for browsing content electronically in a manner similar to browsing books on physical library shelves. Furthermore, library patrons often struggle to physically locate and retrieve books, even after they have found what they are looking for using library catalogue systems. A number of prototype technologies have been developed in recent years to assist library users with the task of locating books. These systems are, however, rather limited in their functionality, and generally do not provide tools for remote browsing of library shelves. In this paper we introduce Metis, a system designed to allow virtual viewing of collections, and to assist with physical retrieval of books using a range of desktop and mobile computing devices.
Cedar studio: an IDE supporting adaptive model-driven user interfaces for enterprise applications, pp. 139-144
  Pierre A. Akiki; Arosha K. Bandara; Yijun Yu
Support tools are necessary for the adoption of model-driven engineering of adaptive user interfaces (UIs). Enterprise applications in particular require a tool that can be used by developers as well as IT personnel during all the development and post-development phases. An IDE that supports adaptive model-driven enterprise UIs could further promote the adoption of this approach. This paper describes Cedar Studio, our IDE for building adaptive model-driven UIs based on the CEDAR reference architecture for adaptive UIs. This IDE provides visual design and code editing tools for UI models and adaptive behavior. It is evaluated conceptually using a set of criteria from the literature and applied practically by devising example adaptive enterprise user interfaces.
Tool support for automated multi-device GUI generation from discourse-based communication models, pp. 145-150
  Roman Popp; David Raneburger; Hermann Kaindl
Automated generation of graphical user interfaces (GUIs) from models is possible, but their usability is often not good enough for real-world use, in particular not for small devices. Automated tailoring of GUIs for different devices also remains an issue. Our tools provide such tailoring for different devices through automatic optimization of corresponding optimization objectives under given constraints. Currently, two different optimization strategies are implemented, focusing on tapping and on vertical scrolling on touchscreens, respectively. The constraints (relevant properties such as screen size and resolution) are to be provided by the users of our tools in device specifications. Through our tool support, WIMP (window, icon, menu, pointer) GUIs can be generated at a decent level of usability nearly automatically, in particular for small devices. This is important given the increasingly widespread use of smartphones.

Doctoral consortium

Engineering adaptive user interfaces for enterprise applications, pp. 151-154
  Pierre A. Akiki
The user interface (UI) layer is considered an important component of software applications since it links users to the software's functionality. Enterprise applications such as enterprise resource planning and customer relationship management systems have very complex UIs that are used by people with diverse needs in terms of the required features and layout preferences. The inability to cater for this variety of user needs diminishes the usability of these applications. One way to cater for those needs is through adaptive UIs. Some enterprise software providers offer mechanisms for tailoring UIs to variable user needs, yet those mechanisms are not generic enough to be used with other applications and require maintaining multiple UI copies manually. A generic platform based on a model-driven approach could be more reusable, since operating on the model level makes it technology-independent. The main objective of this research is devising a generic, scalable, and extensible platform for building adaptive enterprise application UIs based on a runtime model-driven approach. This platform primarily targets UI simplification, which we define as a mechanism for increasing usability through adaptive behavior by providing users with a minimal feature-set and an optimal layout based on the context-of-use. This paper provides an overview of the research questions and methodology, the results achieved so far, and the remaining work.
Using differential formal analysis for dependable number entry, pp. 155-158
  Abigail Cauchi
User interfaces that employ the same display and buttons may look the same but can work very differently depending on how they are implemented. In healthcare, it is critical that interfaces that look the same are the same. Hospitals typically have many types of similar infusion pump, with different software versions, and variation between pump behavior may lead to unexpected adverse events. For example, when entering drug doses into infusion pumps that use the same display and button designs, different results may arise when pushing identical sequences of buttons. These differences arise as a result of subtle implementation differences and may lead to under-dose or over-dose errors.
   This work explores different implementations of a 5-key interface for entering numbers using a new user interface analysis technique, Differential Formal Analysis.
   Using Differential Formal Analysis, different 5-key interfaces are analysed based on log data collected from 19 infusion pumps over a 3-year period at a UK hospital. The results from this analysis are specific to the infusion-pump domain. A comparison is made between these domain-specific results and generic results from Differential Formal Analysis performed using random data.
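The hazard the abstract describes, look-alike interfaces that diverge on identical button sequences, can be sketched with two variants of a toy 5-key number-entry model that differ only in how "up" behaves at digit 9. The key set and behaviour here are invented for illustration and are not any real pump's firmware.

```python
def five_key(keys, wrap=True):
    """Toy 5-key number entry: 'up'/'down' edit the current digit,
    'left'/'right' move the cursor over three digits. The two look-alike
    variants differ only in whether 'up' on 9 wraps around to 0
    (illustrative of subtle implementation differences)."""
    digits, pos = [0, 0, 0], 0
    for k in keys:
        if k == "up":
            digits[pos] = (digits[pos] + 1) % 10 if wrap else min(digits[pos] + 1, 9)
        elif k == "down":
            digits[pos] = (digits[pos] - 1) % 10 if wrap else max(digits[pos] - 1, 0)
        elif k == "right":
            pos = min(pos + 1, 2)
        elif k == "left":
            pos = max(pos - 1, 0)
    return int("".join(map(str, digits)))

def differential_check(sequences):
    """Replay the same key logs through both variants and collect the
    sequences on which the two look-alike interfaces disagree."""
    return [s for s in sequences if five_key(s, wrap=True) != five_key(s, wrap=False)]
```

Replaying logged key sequences through both variants and diffing the resulting values is the essence of a differential analysis: ten presses of "up" yield 0 on the wrapping variant but 900 on the saturating one, a divergence that would be an under-dose or over-dose on a real pump.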
The CoGenIVE concept revisited: a toolkit for prototyping multimodal systems, pp. 159-162
  Fredy Cuenca
Many specialized toolkits have been developed with the purpose of facilitating the creation of multimodal systems. They allow their users to specify certain tasks of their intended systems by means of a visual language instead of programming code. One of these toolkits, CoGenIVE, was developed in our research lab, and despite its successful application in many internal projects, it gradually fell into disuse. Rethinking CoGenIVE unveiled important gaps hindering a fuller understanding of these toolkits for rapid prototyping of multimodal systems. This paper aims to remedy some of these gaps by proposing: (a) the architecture of a toolkit for rapid prototyping of multimodal systems, (b) a scale for measuring the support for implementation provided by a toolkit, and (c) a classification of a representative set of existing toolkits.
Addressing dependability for interactive systems: application to interactive cockpits, pp. 163-166
  Camille Fayollas
Most of the work done to improve interactive systems' reliability is based on methods and techniques to avoid the occurrence of faults. The goal of most such techniques is to remove software defects prior to deployment. However, it has been proved that regardless of the approaches that are set up, system crashes may still occur at runtime. One potential source of such crashes is natural faults triggered by alpha particles from radioactive contaminants in the chips or neutrons from cosmic radiation. This phenomenon appears with a higher probability while flying in the high atmosphere, which is the case for aircraft. Safety-critical systems need to cope with this type of fault to be dependable.
   The main goal of this PhD is to provide means and a methodology to build dependable interactive systems, using interactive cockpits as a case study. The work presented in this doctoral consortium paper gives an excerpt of the solution proposed to build dependable interactive systems. This approach is a two-fold solution dealing with (i) software faults prior to operation, by using zero-defect development dedicated to interactive systems, and (ii) natural faults, by embedding fault-tolerance mechanisms in the interactive system.
A context-aware dialog model for multi-device web development, pp. 167-170
  Javier Rodríguez Escolar
Model-Based User Interface Design (MBUID) is a step-wise method that structures the development of User Interfaces (UIs) based on models. According to this method, developers focus on creating a UI model, i.e., an abstract representation of the UI, and delegate the UI code generation process to automatic tools that take platform peculiarities into account. This paper explores the applicability of MBUID techniques to context-aware Service Front Ends (SFEs), i.e., UIs of web services that react to context changes. For this purpose, it introduces a context-aware dialog model that captures the adaptable behavior of a UI depending on variations in the context of use, a standards-based notation to represent it, and an open-source development environment that supports this development method.
UISKEI++: multi-device wizard of oz prototyping BIBAFull-Text 171-174
  Vinícius C. V. B. Segura; Simone D. J. Barbosa
Low-fidelity prototyping is an inexpensive and quick alternative for exploring different design solutions, and with Wizard of Oz experiments one can present an interactive -- yet unfinished -- prototype to the end user, who can see how the system is planned to work. Combining low-fidelity prototyping with Wizard of Oz can be a low-cost and time-efficient way to prototype both the user interface and the interaction. This would be particularly useful when prototyping for multiple devices, since different solutions need to be developed and tailored to suit each device's characteristics. This proposal discusses plans for developing a tool that provides multi-device prototyping support through the incorporation of different abstraction levels and support for Wizard of Oz experiments.
Audiovisual perception in a virtual world: an application of human-computer interaction evaluation to the development of immersive environments BIBAFull-Text 175-178
  Carlos C. L. Silva
Understanding the mechanisms underlying audiovisual perception is crucial for the development of interactive audiovisual immersive environments. Some human perceptual mechanisms pose challenging problems that can now be better explored with the latest technology in computer-generated environments. Our main goal is to develop an interactive audiovisual immersive system that provides its users with a highly immersive and perceptually coherent interactive environment. To this end, we will perform user studies to gain a better understanding of the rules guiding audiovisual perception. This will allow improvements in the simulation of realistic virtual environments through the use of predictive models of human cognition as guides for the development of an audiovisual interactive immersive system. This system will encompass the integration of two Virtual Reality systems: a Cave Automatic Virtual Environment-like (CAVE-like) system and a room-acoustic modeling and auralization system. Interactivity between the user and the audiovisual virtual world will be enabled by the use of a motion-capture system as a user-position tracker.
Autonomous adaptation of user interfaces to support mobility in ambient intelligence systems BIBAFull-Text 179-182
  Gervasio Varela
The work presented in this paper is focused on building Ambient Intelligence (AmI) applications capable of moving from one environment to another, while their user interface keeps adapting itself, autonomously, to the variable environment conditions and the available interaction resources.
   AmI applications are expected to interact with users naturally and transparently; therefore, most of their interaction relies on embedded devices that obtain information from the user and the environment. This work implements a framework for AmI systems that elevates those embedded devices to the status of interaction resources. It does so by providing a new level of abstraction that decouples applications, conceptually and physically, from the specific interaction resources available and their underlying heterogeneous technologies.
   In order to drive the adaptation process in response to environment changes, the system makes use of a set of models that describe the user, the environment conditions, and the devices, together with algorithms for context-aware selection of interaction devices.
Metric-based evaluation of graphical user interfaces: model, method, and software support BIBAFull-Text 183-186
  Mathieu Zen
Many factors contribute to ensuring the User eXperience (UX) of Graphical User Interfaces, including, but not limited to, usability, fun, engagement, and subjective satisfaction. Aesthetics is another element that could significantly contribute to this user experience. Although aesthetics has been extensively discussed, there is a need for a sound, empirically validated methodology in order to properly evaluate how aesthetics can be measured, namely through metrics. Two main issues need to be addressed: the representativeness and the relevance of aesthetics metrics. To address these challenges, this paper introduces a methodology for metric-based evaluation of graphical user interfaces of any type. This methodology is based on an underlying model that captures aesthetics aspects and related metrics, a method for computing them based on that model, and software that supports enacting this method on any type of graphical user interface.

Gesture, multi-touch, tangibles, and speech

GestIT: a declarative and compositional framework for multiplatform gesture definition BIBAFull-Text 187-196
  Lucio Davide Spano; Antonio Cisternino; Fabio Paternò; Gianni Fenu
Gestural interfaces allow complex manipulative interactions that are hardly manageable with traditional event handlers. Indeed, such interactions last longer than those carried out in form-based user interfaces, and it is often important to provide users with intermediate feedback during the gesture performance. As a consequence, gesture specification code tends to be a mixture of recognition logic and feedback definition, which makes it difficult 1) to write maintainable code and 2) to reuse gesture definitions in different applications. To overcome these limitations, the research community has considered declarative approaches for the specification of gesture temporal evolution. In this paper, we discuss the creation of gestural interfaces using GestIT, a framework that allows a declarative and compositional definition of gestures for different recognition platforms (e.g. multitouch and full-body), through a set of examples and a comparison with existing approaches.
Designing disambiguation techniques for pointing in the physical world BIBAFull-Text 197-206
  William Delamare; Céline Coutrix; Laurence Nigay
Several ways of selecting physical objects exist, including touching them and pointing at them. Allowing the user to interact at a distance by pointing at physical objects can be challenging when the environment contains a large number of interactive physical objects, possibly occluded by other everyday items. Previous pointing techniques have highlighted the need for disambiguation techniques. Addressing this challenge, this paper contributes a design space that organizes, along groups and axes, a set of options for designers to (1) describe, (2) classify, and (3) design disambiguation techniques. First, we have not yet found any technique in the literature that our design space cannot describe. Second, every technique traces a different path along the axes of our design space. Third, the design space allows the definition of several new paths/solutions that have not yet been explored. We illustrate this generative power with the example of one such technique, Physical Pointing Roll (P2Roll).
Formal description of multi-touch interactions BIBAFull-Text 207-216
  Arnaud Hamon; Philippe Palanque; José Luís Silva; Yannick Deleris; Eric Barboni
The widespread use of multi-touch devices, and the large amount of research that has been carried out around them, has made this technology mature in a very short time. This makes it possible to consider multi-touch interactions in the context of safety-critical systems. Indeed, beyond this technical aspect, multi-touch interactions present significant benefits such as input-output integration, reduction of physical space, and sophisticated multi-modal interaction. However, for interactive cockpits, which belong to the class of safety-critical systems, the development processes and methods used in the mass-market industry are not suitable, as they usually focus on usability and user-experience factors, upstaging dependability. This paper presents a tool-supported, model-based approach suitable for the development of interactive systems featuring multi-touch interaction techniques. We demonstrate that touch interaction techniques can be described in a complete and unambiguous way, and that the formal description technique is amenable to verification. The capabilities of the notation are demonstrated on two different interaction techniques (namely Pinch and Tap-and-Hold), together with a software architecture explaining how these interaction techniques can be embedded in an interactive application.
What if everyone could do it?: a framework for easier spoken dialog system design BIBAFull-Text 217-222
  Pierrick Milhorat; Stephan Schlögl; Gérard Chollet; Jerome Boudy
While Graphical User Interfaces (GUI) still represent the most common way of operating modern computing technology, Spoken Dialog Systems (SDS) have the potential to offer a more natural and intuitive mode of interaction. Even though some may say that existing speech recognition is neither reliable nor practical, the success of recent product releases such as Apple's Siri or Nuance's Dragon Drive suggests that language-based interaction is increasingly gaining acceptance. Yet, unlike applications for building GUIs, tools and frameworks that support the design, construction and maintenance of dialog systems are rare. A particular challenge of SDS design is the often complex integration of technologies. Systems usually consist of several components (e.g. speech recognition, language understanding, output generation, etc.), all of which require expertise to deploy them in a given application domain. This paper presents work in progress that aims at supporting this integration process. We propose a framework of components and describe how it may be used to prototype and gradually implement a spoken dialog system without requiring extensive domain expertise.
RefactorPad: editing source code on touchscreens BIBAFull-Text 223-228
  Felix Raab; Christian Wolff; Florian Echtler
Despite the widespread use of touch-enabled devices, the field of software development has been slow to adopt new interaction methods for its tools. In this paper, we present our research on RefactorPad, a code editor for editing and restructuring source code on touchscreens. Since entering and modifying code with on-screen keyboards is time-consuming, we have developed a set of gestures that take program syntax into account and support common maintenance tasks on devices such as tablets. This work presents three main contributions: 1) a test setup that enables researchers and participants to collaboratively walk through code examples in real time; 2) the results of a user study on editing source code with both finger and pen gestures; 3) a list of operations and some design guidelines for creators of code editors or software development environments who wish to optimize their tools for touchscreens.

Collaboration

Interactive prototyping of tabletop and surface applications BIBAFull-Text 229-238
  Tulio de Souza Alcantara; Jennifer Ferreira; Frank Maurer
Physically large touch-based devices, such as tabletops, afford numerous innovative interaction possibilities; however, for application development on these devices to be successful, users must be presented with interactions they find natural and easy to learn. User-centered design advocates the use of prototyping to help designers create software that is a better fit with user needs and yet, due to time pressures or inappropriate tool support, prototyping may be considered too costly to do. To address these concerns, we designed ProtoActive, a tool for designing and evaluating multi-touch applications on large surfaces via sketch-based prototypes. Our tool allows designers to define custom gestures and evaluate them without requiring any programming knowledge. The paper presents the results of pilot studies as well as in-the-wild usage of the tool.
Toward rapid and iterative development of tangible, collaborative, distributed user interfaces BIBAFull-Text 239-248
  Chris Branton; Brygg Ullmer; Andre Wiggins; Landon Rogge; Narendra Setty; Stephen David Beck; Alex Reeser
Distributed, tangible, collaborative applications involve potentially complex interactions of users, computing platforms, and physical artifacts. Realizing the necessary connections for these interactions can create hardware and software dependencies early in development, resulting in a system that is difficult to adapt to design changes. The Ensemble architecture is designed to encourage exploratory development of these systems by limiting the impact of changing components. Ensemble is a product of the exploratory design process it supports, evolving through use in two distinct application domains. The experience gained from these implementations has shaped Ensemble's structure and design priorities, resulting in a component-based architecture that includes: (i) an application framework and graphical user interface support; (ii) a service framework, including service publication and discovery; (iii) local and remote event handling; (iv) distributed user and resource coordination; and (v) a structured configuration language shared by all Ensemble components.
A framework for the development of distributed interactive applications BIBAFull-Text 249-254
  Luca Frosini; Marco Manca; Fabio Paternò
In this paper we present a framework and the associated software architecture to manage user interfaces that can be distributed and/or migrated in multi-device and multi-user environments. It supports distribution across dynamic sets of devices, and does not require the use of a fixed server. We also report on its current implementation, and an example application.

Empirical techniques

CrowdStudy: general toolkit for crowdsourced evaluation of web interfaces BIBAFull-Text 255-264
  Michael Nebeling; Maximilian Speicher; Moira C. Norrie
While traditional usability testing methods can be both time consuming and expensive, tools for automated usability evaluation tend to oversimplify the problem by limiting themselves to supporting only certain evaluation criteria, settings, tasks and scenarios. We present CrowdStudy, a general web toolkit that combines support for automated usability testing with crowdsourcing to facilitate large-scale online user testing. CrowdStudy is based on existing crowdsourcing techniques for recruiting workers and guiding them through complex tasks, but implements mechanisms specifically designed for usability studies, allowing testers to control user sampling and conduct evaluations for particular contexts of use. Our toolkit provides support for context-aware data collection and analysis based on an extensible set of metrics, as well as tools for managing, reviewing and analysing any collected data. The paper demonstrates several useful features of CrowdStudy for two different scenarios, and discusses the benefits and tradeoffs of using crowdsourced evaluation.
Complex activities in an operations center: a case study and model for engineering interaction BIBAFull-Text 265-274
  Judith M. Brown; Steven L. Greenspan; Robert L. Biddle
Data operations and command centers are crucial for managing today's Internet-based economy. Despite advances in automation, the challenges placed on operations professionals continue to increase as they work individually or in teams to repair or proactively avoid service disruptions. Although there have been a few studies of collaborative work in military supervisory control centers, there have been few studies of the activities that take place in commercial data centers, due to the sensitive nature of work in operations centers. In this case study of a large, complex data operations and control center, activity theory is used to guide and interpret observations of individual and collaborative work. This resulted in a model of data operations activities, and the identification of tensions that arise within and between these activities. This model is of value to interaction engineers in the first phase of a user-centered engineering methodology. Using this model, we provide some recommendations for reducing some of the tensions we found, and discuss significant opportunities and challenges in this new domain for the HCI community.
Insights into layout patterns of mobile user interfaces by an automatic analysis of android apps BIBAFull-Text 275-284
  Alireza Sahami Shirazi; Niels Henze; Albrecht Schmidt; Robin Goldberg; Benjamin Schmidt; Hansjörg Schmauder
Mobile phones have recently evolved into smartphones that provide a wide range of services. One aspect that differentiates smartphones from their predecessors is the app model: users can easily install third-party applications from central mobile application stores. In this paper we present a process for gaining insights into mobile user interfaces on a large scale. Using the developed process, we automatically disassemble and analyze the 400 most popular free Android applications. The results suggest that the complexity of the user interface differs between application categories. Further, we analyze interface layouts to determine the most frequent interface elements and to identify combinations of interface widgets. The most common combination, which consists of three nested elements, covers 5.43% of all interface elements; it is more frequent than progress bars and checkboxes. The ten most frequent patterns together cover 21.13% of all interface elements, and each is more frequent than common widgets such as radio buttons and spinners. We argue that the identified combinations not only provide insights about current mobile interfaces, but also enable the development of new optimized widgets.

Keynote address

Engineering works: what is (and is not) engineering for interactive computer systems? BIBAFull-Text 285-286
  Ann Blandford
What does it mean to "engineer" an interactive computer system? Is it about the team doing the work (that they are engineers), about the process being followed, about the application domain, or what? Is engineering about managing complexity, safety or reliability? For physical artifacts, it may be possible to achieve consensus on how well engineered a product is, but this is more difficult for digital artifacts. In this talk, I will offer some perspectives, both positive and negative, on the nature of engineering for interactive computer systems and, at least implicitly, the nature and future of the EICS conference series.

Measures and metrics

Improving software effort estimation with human-centric models: a comparison of UCP and iUCP accuracy BIBAFull-Text 287-296
  Rui Alves; Pedro Valente; Nuno Jardim Nunes
Bringing human-centric models into the software development lifecycle provides unique opportunities to enhance development practice. Modeling the interactive aspects of a software system ensures a better understanding of user requirements, leading to an improved user interface and to greater usage and acceptance of the system. It also provides a unique opportunity to enhance conventional software development practices, such as effort estimation, which is known to suffer major deviations. In this paper we illustrate this mutual benefit by presenting a statistical analysis of the effort estimation for seven real-world software development projects. We contrast a conventional use-case points (UCP) method with iUCP, an HCI-enhanced method. Here we propose an enhancement of the original iUCP effort estimation formula. This results in an improved mean deviation of iUCP over UCP, supporting the claim that reflecting HCI concerns in internal SE artifacts generates more accurate estimations of software development effort. Our results provide additional evidence of the benefits of using human-centric models to enhance software development practice, in particular for long-standing challenges like generating accurate project estimates early in the development lifecycle.
Validating an episodic UX model on online shopping decision making: a survey study with B2C e-commerce BIBAFull-Text 297-306
  Abdullah Al Sokkar; Effie Law
Existing online shopping decision-making models (OSDMs) do not adequately address the role of experiential qualities in customer satisfaction. Awareness of this scoping issue has grown stronger due to recent User Experience (UX) research. We have developed an OSDM called 'Episodic UX Model on Decision-Making' (EUX-DM) by integrating the established technology acceptance model, emerging UX models, and expectation-confirmation theory. EUX-DM covers three phases: before interaction, after interaction, and confirmation. To validate the model, we designed and conducted a web-based survey, which comprises eight main constructs. Five (i.e. usefulness, ease-of-use, aesthetic quality, trust, and experiential quality) were measured in all three phases, two (i.e. usage attitude and intention to purchase) were measured in the 'after interaction' phase, and one (i.e. overall satisfaction) was measured only in the 'confirmation' phase. Results from analysing 278 responses suggest the validity of our model. Implications for augmenting EUX-DM are discussed.
Assessing the support provided by a toolkit for rapid prototyping of multimodal systems BIBAFull-Text 307-312
  Fredy Cuenca; Davy Vanacken; Karin Coninx; Kris Luyten
Choosing an appropriate toolkit for creating a multimodal interface is a cumbersome task. Several specialized toolkits include fusion and fission engines that allow developers to combine and decompose modalities in order to capture multimodal input and provide multimodal output. Unfortunately, the extent to which these toolkits facilitate the creation of a multimodal interface is hard or impossible to estimate, due to the absence of a scale on which a toolkit's capabilities can be measured. In this paper, we propose a measurement scale that allows the assessment of specialized toolkits without the need for time-consuming testing or source-code analysis. This scale is used to measure and compare the capabilities of three toolkits: CoGenIVE, HephaisTK and ICon.

Design and implementation experience

Echo: the editor's wisdom with the elegance of a magazine BIBAFull-Text 313-322
  Joshua Hailpern; Bernardo A. Huberman
The explosive growth of user-generated content, along with the continuous increase in the amount of traditional sources of content, has made it extremely hard for users to digest the relevant pieces of information they need to pay attention to in order to meet their needs. Thus, solutions are needed to help both professionals (e.g. lawyers, analysts, economists) and ordinary users navigate this flood of information. We present a novel interaction model and system called Echo, which uses machine learning techniques to traverse a corpus of documents and distill crucial opinions from the collective intelligence of the crowd. Based on this analysis, Echo creates an intuitive and elegant interface, as though constructed by an editor, that allows users to quickly find salient documents and opinions, all powered by the wisdom of the crowd. The Echo UI directs the user's attention to critical opinions using a natural magazine-style metaphor, with visual callouts and other typographic changes. This paper therefore presents two key contributions (an algorithm and an interaction model) that allow a user to "read as normal" while focusing her attention on the important opinions within documents, and that show how these opinions relate to those of the crowd.
Hardware-in-the-loop-based evaluation platform for automotive instrument cluster development (EPIC) BIBAFull-Text 323-332
  Sebastian Osswald; Pratik Sheth; Manfred Tscheligi
This paper contributes to platform-based evaluation techniques by proposing a hardware-in-the-loop-based approach for automotive instrument cluster (IC) development. An automotive IC interface requires special attention, as it provides the driver with safety-relevant information, like speed or state of charge, that is critical for the driving situation. As state-of-the-art in-vehicle Human-Machine Interfaces (HMIs) are mostly embedded systems that make research and development processes time-consuming, we propose a development platform that allows for more rapid interface implementation and analysis. The evaluation platform for ICs (EPIC) is targeted at supporting engineers and researchers during the development of novel interface solutions that are reliable regarding their hardware connectivity and signal communication. It consists of a model-based vehicle simulation combined with automotive hardware to enable a real-time vehicle structure through a controller area network (CAN). Interchangeable, Android-based interfaces illustrate the flexibility of the approach and show the operability of the evaluation platform. To illustrate the applicability of the approach, the platform was embedded in an engineering process for an electric vehicle to address the challenge of user interface development.

Tutorial

Creativity on a shoestring: concept generating in agile development BIBAFull-Text 333-334
  Neil Maiden; Bianca Hollis
This tutorial presents creativity techniques that can be applied with limited resources, including time, in agile development projects.

Workshops

3rd workshop on distributed user interfaces: models, methods and tools BIBAFull-Text 335-336
  María D. Lozano; Jose A. Gallud; Ricardo Tesoriero; Víctor M. R. Penichet; Jean Vanderdonckt; Habib Fardoun
This document describes the most relevant issues regarding development approaches for computer systems based on distributed user interfaces (DUIs). DUIs have brought about drastic changes affecting the way interactive systems are conceived, and this in turn affects the way these novel systems are designed and developed. New features need to be taken into account from the very beginning of the development process, and new models, methods, and tools need to be considered for the correct development of interactive systems based on Distributed User Interfaces. The goal of this workshop is to promote discussion about the development of DUIs by answering a set of key questions: How can current UI models be used or extended to cover the new features of DUIs? What new features should be considered, and how should they be included within the development process? What new methods and tools do we need to develop DUIs correctly, following the quality standards for interactive systems?
Formal methods for interactive systems (FMIS 2013) BIBAFull-Text 337-338
  Judy Bowen; Steve Reeves
The workshop focuses on the use of formal methods in the development and analysis of interactive systems. It is particularly concerned with issues relating to Human-Computer Interaction and with the analysis of interaction in a variety of computing environments (safety-critical, ubiquitous, etc.). In the latter case, the complexities of dynamic context, including location and large numbers of interacting entities, pose particular challenges to formal modelling.
Context-aware service front-ends BIBAFull-Text 339-340
  Francisco Javier Caminero Gil; Fabio Paternò; Vivian Genaro Motti
Context-aware adaptation of user interfaces has been investigated since the early 1980s to provide mechanisms for stakeholders to propose, implement and execute adaptation, enabling users to interact efficiently with adaptive and adaptable applications. Today, adapting UIs according to the context of use has become inevitable: not only because users interact with applications in many distinct environments (platforms, devices and user profiles vary significantly), but also because such applications must provide a high level of usability regardless of the context of use, efficiently adapting themselves to it. In this spirit, the Serenoa project proposes its 2nd workshop, bringing together experts in the domain of context-aware adaptation to exchange experiences, discuss current trends, promote approaches, and raise awareness of this field.