EICS Tables of Contents: 09 10 11 12 13 14 15

ACM SIGCHI 2015 Symposium on Engineering Interactive Computing Systems

Fullname: EICS'15: ACM SIGCHI Symposium on Engineering Interactive Computing Systems
Editors: Michael Nebeling; Jürgen Ziegler; Laurence Nigay
Location: Duisburg, Germany
Dates: 2015-Jun-23 to 2015-Jun-26
Standard No: ISBN: 978-1-4503-3646-8; ACM DL: Table of Contents; hcibib: EICS15
Links: Conference Website
  1. Keynote I
  2. UI tooling and testing
  3. Gesture and touch
  4. Playful interaction
  5. Model-based engineering
  6. Around the body -- movement and physiology
  7. Ubiquitous and context-aware systems
  8. Sketch it, print it, touch it
  9. Demo session
  10. Keynote II
  11. Model-driven development
  12. Testing and validation
  13. Workshop summaries

Keynote I

The breadth-depth dichotomy: a force for mediocrity (p. 1)
  Daniel Wigdor
In the early days of a technology's penetration into the marketplace, it is common to witness an explosion of variations of that technology, each laying claim to some unique property which makes it superior to the alternatives. While diversity is essential in that it breeds new and different experiences, it can also give rise to a problem: the Breadth-Depth Dichotomy. This dichotomy is a phenomenon which can be observed in the years in which a technology is making the transition from research lab to consumer device. It lives at the edge between business and design decisions, and creates pressures on all members of a product design team, pushing product designs towards mediocrity. It is the simultaneous, contradictory pull towards both abstraction for broader impact, targeting the maximum number of potential customers, and deeper specificity for a superior user experience which takes full advantage of a given platform.

UI tooling and testing

HiReD: a high-resolution multi-window visualisation environment for cluster-driven displays (pp. 2-11)
  Chris Rooney; Roy A. Ruddle
High-resolution, wall-size displays often rely on bespoke software for performing interactive data visualisation, leading to interface designs with little or no consistency between displays. This makes adoption difficult for novice users migrating from desktop environments. However, desktop interface techniques (such as task- and menu-bars) do not scale well and so cannot be relied on to drive the design of large display interfaces. In this paper we present HiReD, a multi-window environment for cluster-driven displays. As well as describing the technical details of the system, we also describe a suite of low-precision interface techniques that aim to provide a familiar desktop environment to the user while overcoming the scalability issues of high-resolution displays. We hope that these techniques, as well as the implementation of HiReD itself, can encourage good practice in the design and development of future interfaces for high-resolution, wall-size displays.
Yeti: yet another automatic interface composer (pp. 12-21)
  Effie Karuzaki; Anthony Savidis
As applications grow larger, building their UIs becomes harder. While a lot of research focuses on new ways of building UIs, little work focuses on reusing existing UI components to automatically compose large-scale interfaces. This paper introduces Yeti, an automatic UI composer for desktop and Android applications written in Java, that adopts a task-driven discipline where task hierarchy denotes component containment and control. We propose the notion of globally unique task identifiers to avoid task naming confusion across components and repositories. To enable applications to set mandatory control aspects for the retrieved UI components, we introduce required APIs as part of task definitions. Yeti emphasizes the composition of reusable coarse-grained UI components rather than automatic UI creation from scratch, so no lower-level specifications are deployed. Retrieved and composed components can be directly handled inside application logic via the application-defined APIs and the generic component interface required for all components. This programming-oriented approach allows UI programmers to deploy Yeti as a software library, while it enables the mix of composed UI parts with manually coded ones. To validate our system and demonstrate its deployment, we present an example application created from existing components.
XDSession: integrated development and testing of cross-device applications (pp. 22-27)
  Michael Nebeling; Maria Husmann; Christoph Zimmerli; Giulio Valente; Moira C. Norrie
Despite the recent proliferation of new cross-device application frameworks, there is still a lack of sophisticated tools for testing new applications during their development. This paper presents XDSession, a framework for cross-device application development based on a concept of cross-device sessions, which are useful not only for managing distribution and synchronisation, but also for logging and debugging. Integrated with XDSession are two new tools specifically designed for cross-device testing. First, the session controller supports management and testing of cross-device sessions with connected or simulated devices at run-time. Second, the session inspector enables inspection and analysis of multi-device/multi-user sessions with support for deterministic record/replay of cross-device sessions. We show the utility of XDSession based on a case study of a semester-long course project in which our tools were used by students to reimplement an existing application and extend it with cross-device capabilities.
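The deterministic record/replay idea behind such session tools can be illustrated with a minimal sketch (all names here are hypothetical, not XDSession's actual API): each device event is timestamped into a log, and replaying the log in timestamp order reproduces the same event sequence every time.

```python
import time

class SessionLog:
    """Minimal cross-device session logger with deterministic replay.
    Illustrative only; names and structure are hypothetical."""

    def __init__(self):
        self.events = []  # list of (timestamp, device_id, event) tuples

    def record(self, device_id, event, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        self.events.append((ts, device_id, event))

    def replay(self, handler):
        # Replay events in timestamp order, regardless of arrival order,
        # so the same log always yields the same event sequence.
        for ts, device_id, event in sorted(self.events):
            handler(device_id, event)

log = SessionLog()
log.record("tablet", "tap:button1", timestamp=2.0)
log.record("phone", "swipe:left", timestamp=1.0)

replayed = []
log.replay(lambda dev, ev: replayed.append((dev, ev)))
print(replayed)  # events come back in timestamp order
```

The same log could equally feed an inspector view or an automated test, which is what makes session-based logging useful for debugging.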
Plasticity for 3D user interfaces: new models for devices and interaction techniques (pp. 28-33)
  Jérémy Lacoche; Thierry Duval; Bruno Arnaldi; Eric Maisel; Jérôme Royan
This paper introduces a new device model and a new interaction technique model to deal with plasticity issues for Virtual Reality (VR) and Augmented Reality (AR). We aim to provide developers with solutions to use and create interaction techniques that fit the tasks of a 3D application and the input and output devices available. The device model introduces a new description of input and output devices that includes capabilities, limitations and representations in the real world. We also propose a new way to develop interaction techniques with an approach based on PAC and ARCH models. These techniques are implemented independently of the concrete devices used, thanks to the proposed device model. Moreover, our approach aims to facilitate the portability of interaction techniques across different target operating systems and 3D frameworks.

Gesture and touch

GISMO: a domain-specific modelling language for executable prototyping of gestural interaction (pp. 34-43)
  Romuald Deshayes; Tom Mens
This paper presents Gismo, an extensible domain-specific modelling language for prototyping executable models of gestural interaction. Relying on an underlying customisable framework, domain-specific models can specify, simulate and execute the behaviour of how users interact with a software application through the use of different interaction controllers and gesture types (e.g., specific hand movements or other body gestures). Model transformation technology is used to define the domain-specific operational semantics of Gismo, as well as to verify domain-specific properties. ICO models are automatically generated from Gismo models, and are executed by an underlying framework that can communicate with the target software application. We illustrate the use of Gismo through a running example that models the gestural interaction of a graphical application using dynamic hand gestures to control an animated 3D character. We report on the usability of Gismo based on an evaluation with 12 participants.
Designing guiding systems for gesture-based interaction (pp. 44-53)
  William Delamare; Céline Coutrix; Laurence Nigay
2D and 3D gesture commands are still not routinely adopted, despite technological advances in gesture tracking. The fact that gesture commands are not self-revealing is a bottleneck for this adoption. Guiding novice users is therefore crucial in order to reveal what commands are available and how to trigger them. However, guiding systems are mainly designed in an ad hoc manner. Even if isolated design characteristics exist, they concentrate on a limited number of guidance aspects. We hence present a design space that unifies and completes these studies by providing a coherent set of issues for designing the behavior of a guiding system. We distinguish Feedback and Feedforward and consider four questions: When, What, How and Where. In order to leverage efficient use of our design space, we provide an online tool and illustrate with scenarios how practitioners can use it.
A toolkit for analysis and prediction of touch targeting behaviour on mobile websites (pp. 54-63)
  Daniel Buschek; Alexander Auch; Florian Alt
Touch interaction on mobile devices suffers from several problems, such as the thumb's limited reach or the occlusion of targets by the finger. This leads to offsets between the user's intended touch location and the actual location sensed by the device. Recent research has modelled such offset patterns to analyse and predict touch targeting behaviour. However, these models have only been applied in lab experiments for specific tasks (typing, pointing, targeting games). In contrast, their application to websites remains unexplored. To close this gap, this paper explores the potential of touch modelling for the mobile web: We present a toolkit which allows web developers to collect and analyse touch interactions with their websites. Our system can learn about users' targeting patterns to simulate expected touch interactions and help identify potential usability issues for future versions of the website prior to deployment. We train models on data collected in a field experiment with 50 participants in a shopping scenario. Our analyses show that the resulting models capture interesting behavioural patterns, reveal insights into user-specific behaviour, and enable predictions of expected error rates for individual interface elements.
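The core idea of touch-offset modelling can be sketched in its simplest form: fit a constant 2D offset from the mean difference between intended targets and sensed touches, then apply it as a correction. This is a deliberate simplification; published offset models (including those the toolkit builds on) are richer, for example position-dependent or Gaussian-process based.

```python
def fit_mean_offset(sensed, intended):
    """Fit the simplest touch-offset model: a constant 2D offset equal to
    the mean difference between intended target and sensed touch point."""
    n = len(sensed)
    dx = sum(ix - sx for (sx, _), (ix, _) in zip(sensed, intended)) / n
    dy = sum(iy - sy for (_, sy), (_, iy) in zip(sensed, intended)) / n
    return dx, dy

def correct(touch, offset):
    """Apply the fitted offset to a new sensed touch location."""
    return touch[0] + offset[0], touch[1] + offset[1]

# Toy data: this user tends to touch 5 px right and 10 px below
# the intended target (a common pattern for thumb input).
intended = [(100, 200), (150, 250), (300, 120)]
sensed   = [(105, 210), (155, 260), (305, 130)]

offset = fit_mean_offset(sensed, intended)
print(offset)                       # (-5.0, -10.0)
print(correct((205, 310), offset))  # corrected estimate of intent
```

Once such a model is fitted per user, expected error rates for a given button can be estimated by checking how many corrected (or simulated) touches still fall outside its bounds.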

Playful interaction

A novel approach to sports oriented video games with real-time motion streaming (pp. 64-73)
  Anton Bogdanovych; Christopher Stanton
We are currently observing a paradigm shift in virtual reality and simulation technologies. From being predominantly entertainment focused, these technologies are now finding a much wider use in the so-called "serious games" space, where playing a game teaches the player some useful skills applicable in the physical world. While this trend is booming in military simulations, training and even education, surprisingly enough, there is hardly any work available in the domain of sports. Most sports-oriented video games do not teach skills that can be later reused in the physical game environment and thus there are minimal benefits to the player's "real-world" performance. Performing key sports actions such as shooting a basketball or hitting a tennis ball is normally done via actions (like pressing keyboard buttons or swinging a game controller) that have little correspondence to the movement that needs to be performed in the game's physical environment. In this paper we advocate a new era where it is possible to play simulated sports games that are not only enjoyable and fun, but can also improve the athletic skills required for the real-world performance of that sport. We illustrate the possibility of this idea via state-of-the-art inertial motion capture equipment. To highlight the key aspects of our approach we have developed a basketball video game where a player pantomimes dribbling and shooting in a virtual world with a virtual ball. Importantly, the virtual world of the game responds to a player's motions by simulating the complex physical interactions that occur during a physical game of basketball. For example, if a player attempts to score a basket, the simulated ball will leave the player's hand at the appropriate time with realistic force and velocity, as determined by motion capture and the physics system of the selected game platform. We explain how this game was developed and discuss technical details and obtained results.
A case study into the accessibility of text-parser based interaction (pp. 74-83)
  Michael James Heron
The academic issues surrounding the accessibility of video games are reasonably well understood, although compensations and inclusive design have not yet been comprehensively adopted by professional game developers. Several sets of guidelines have been produced to support developers wishing to ensure a greater degree of accessibility in their titles, and while the recommendations are broadly harmonious they only address the issues in isolation, without being mindful of context or the subtle relationships between interaction choices and verisimilitude within game interfaces. That is not to denigrate the value of these resources, which is considerable -- instead it is to highlight a deficiency in the literature which can be addressed with reflective case studies.
   This paper represents one such case study, aimed at addressing accessibility concerns within interactive text interfaces. While the specifics of this paper are aimed at multiplayer text game accessibility improvements, it is anticipated that many of the lessons learned would be appropriate for any environment, such as command line interfaces, where the accessibility of written and read text is currently suboptimal.
Towards a gamification of industrial production: a comparative study in sheltered work environments (pp. 84-93)
  Oliver Korn; Markus Funk; Albrecht Schmidt
Using video game elements to improve user experience and user engagement in non-game applications is called "gamification". This method of enriching human-computer interaction has been applied successfully in education, health and general business processes. However, it has not been established in industrial production so far.
   After discussing the requirements specific for the production domain we present two workplaces augmented with gamification. Both implementations are based on a common framework for context-aware assistive systems but exemplify different approaches: the visualization of work performance is complex in System 1 and simple in System 2.
   Based on two studies in sheltered work environments with impaired workers, we analyze and compare the systems' effects on work and on workers. We show that gamification leads to a speed-accuracy tradeoff if no quality-related feedback is provided. Another finding is that there is a highly significant rise in acceptance if a straightforward visualization approach for gamification is used.
LIBRARINTH: interactive game to explore the library of the future (pp. 94-99)
  Florian Vandecasteele; Esmee Vanbeselaere; Lore Vandemaele; Jelle Saldien; Steven Verstockt
This paper describes the design process of the LIBRARINTH interactive game, a demand-driven project based on input from the library sector. The main goal of the Marble Maze-like game is to introduce new library services and events in a more fun, entertaining and explorative way. The game should help the library inform its public that it is in transition from a traditional content warehouse to an interactive knowledge center and content provider. LIBRARINTH will attract users to explore the library of the future and, in doing so, keep them informed of what is going on in the library environment. On the one hand, LIBRARINTH is linked to a mobile application so that participants can consult the information that they collected while playing the game. On the other hand, the library can use the game statistics to discover which information their visitors find interesting. Based on this information they can adapt their services and dynamically adjust the information that is provided in the game. In this paper, we mainly focus on the construction, mechanics and electronics of the maze and on the design of the mobile app, which is the game controller.

Model-based engineering

Model transformation rules for customization of multi-device graphical user interfaces (pp. 100-109)
  David Raneburger; Hermann Kaindl; Roman Popp
In the context of model-driven generation of (graphical) user interfaces, systematic customization of automatically determined results is a challenge. While it may seem obvious that certain customizations can be expressed as specific transformation rules, adding such rules usually requires adaptation of existing rules and may lead to an unmanageable rule set.
   In this paper, we propose a new approach for managing model transformation rules for customization of graphical user interfaces in the context of their automated generation. Our approach facilitates managing transformation rules, because already existing rules do not have to be changed or replaced. A trial application provided some empirical evidence of the feasibility of our new approach. It even showed that some customization rules can be reused for generating graphical user interfaces for other devices with different properties, in effect for multi-device generation.
Creating models of interactive systems with the support of lightweight reverse-engineering tools (pp. 110-119)
  Judy Bowen
Creating formal models of interactive systems is an important step in the development process, particularly for safety-critical interactive systems. Such models can be used for a variety of software engineering purposes such as model-checking, verification, testing, refinement etc. While the use of such models at the beginning of the implementation cycle is important, it is often the case that we are interested in performing many of the same activities (particularly safety verification and testing) on systems which have been built without the use of formal methods or modelling. In order to do this we need to somehow reverse-engineer the system to generate the models post-implementation. In this paper we describe an approach we have developed to support the reverse-engineering of interactive systems and tools which support this. The tools perform partial reverse-engineering and act as a guide to the developer of the models rather than providing a fully automated solution. We motivate this by discussing the importance of what we describe as 'light-weight' tools in the engineering process.
Model-based development of accessible, personalized web forms for ICF-based assessment (pp. 120-125)
  Dominik Rupprecht; Jonas Etzold; Birgit Bomsdorf
Tools for International Classification of Functioning, Disability and Health (ICF) assessment, such as interactive forms, are expensive to develop because they are used in different domains and have to be fitted to the special needs of the user groups. This paper focuses on ICF assessment for and by people with dyslexia caused by cognitive impairments. It presents a cost-effective approach to semi-automatically generated, accessible web forms. Based on a meta-model enriched with so-called clarifying communication elements and state information, a system was developed that transforms the model at runtime into an interactive web application. Every text element of the form can be personalized on the one hand and, on the other, enhanced with available content such as icons, images, or videos showing explanations.
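The model-to-form transformation described here can be sketched in miniature: a declarative form model (a stand-in for the paper's actual meta-model, with hypothetical field names) is rendered at runtime into HTML, with optional clarifying elements attached per field.

```python
from html import escape

def render_form(model):
    """Transform a miniature form model into an HTML form.
    Illustrative only; not the paper's real meta-model."""
    rows = []
    for field in model["fields"]:
        fid = field["id"]
        rows.append(f'<label for="{fid}">{escape(field["label"])}</label>')
        if "explanation" in field:
            # A "clarifying communication element": extra plain-language help.
            rows.append(f'<p class="hint">{escape(field["explanation"])}</p>')
        ftype = field.get("type", "text")
        rows.append(f'<input id="{fid}" type="{ftype}">')
    return "<form>\n" + "\n".join(rows) + "\n</form>"

model = {"fields": [
    {"id": "walking", "label": "Walking distance", "type": "number",
     "explanation": "How far can you walk without rest?"},
]}
html = render_form(model)
print(html)
```

Personalization then amounts to rewriting the model (swapping labels for icons or videos for a given user) before rendering, rather than maintaining multiple hand-built forms.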
Responsive task modelling (pp. 126-131)
  Davide Anzalone; Marco Manca; Fabio Paternò; Carmen Santoro
In this paper we present a new tool for specifying task models (Responsive CTT), which can also be accessed through touch-based mobile devices such as smartphones and tablets. The tool is Web-based and responsive in order to provide adapted user interfaces that better support the most common activities in task modelling across various types of devices. We describe the relevant aspects to take into account for this purpose and how we have addressed them in designing the tool. We also report on initial user tests.

Around the body -- movement and physiology

Capturing and analysing movement using depth sensors and Labanotation (pp. 132-141)
  Börge Kordts; Bashar Altakrouri; Andreas Schrader
Full body interactions are becoming increasingly important for Human-Computer Interaction (HCI) and essential in thriving areas such as mobile applications, games and Ambient Assisted Living (AAL) solutions. While this enriches the design space of interactive applications in ubiquitous and pervasive environments, it dramatically increases the complexity of programming and customising such systems for end-users and non-professional interaction developers. This work addresses the growing need for simple ways to define, customise and handle user interactions by manageable means of demonstration and declaration. Our novel approach fosters the use of Labanotation (one of the most popular visual notations for describing movement) and off-the-shelf motion capture technologies for interaction recording, generation and analysis. This paper presents a novel reference implementation, called Ambient Movement Analysis Engine, to allow for recording movement scores and subscribing to events in Labanotation format from live motion data streams.
Kinect Analysis: a system for recording, analysing and sharing multimodal interaction elicitation studies (pp. 142-151)
  Michael Nebeling; David Ott; Moira C. Norrie
Recently, guessability studies have become a popular means among researchers to elicit user-defined interaction sets involving gesture, speech and multimodal input. However, tool support for capturing and analysing interaction proposals is lacking and the method itself is still evolving. This paper presents Kinect Analysis--a system designed for interaction elicitation studies with support for record-and-replay, visualisation and analysis based on Kinect's depth, audio and video streams. Kinect Analysis enables post-hoc analysis during playback and live analysis with real-time feedback while recording. In particular, new visualisations such as skeletal joint traces and heatmaps can be superimposed for analysis and comparison of multiple recordings. It also introduces KinectScript--a simple scripting language to query recordings and automate analysis tasks based on skeleton, distance, audio and gesture scripts. The paper discusses Kinect Analysis both as a tool and a method that could enable researchers to more easily collect, study and share interaction proposals. Using data from a previous guessability study with 25 users, we show that Kinect Analysis in combination with KinectScript is useful and effective for a range of analysis tasks.
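A distance script of the kind KinectScript supports can be reduced to plain Python to show the idea: find the frames in a recording where two skeletal joints come within a threshold of each other. The joint names and data layout below are assumptions for illustration, not KinectScript's actual syntax.

```python
import math

def joint_distance(frame, a, b):
    """Euclidean distance between two named joints in one skeleton frame."""
    (x1, y1, z1), (x2, y2, z2) = frame[a], frame[b]
    return math.sqrt((x1 - x2)**2 + (y1 - y2)**2 + (z1 - z2)**2)

def frames_where_close(recording, a, b, threshold):
    """Indices of frames where joints a and b are closer than threshold."""
    return [i for i, frame in enumerate(recording)
            if joint_distance(frame, a, b) < threshold]

# Two toy frames: in the second, the right hand is raised near the head.
recording = [
    {"hand_right": (0.9, 1.2, 2.0), "head": (0.0, 1.6, 2.0)},
    {"hand_right": (0.1, 1.5, 2.0), "head": (0.0, 1.6, 2.0)},
]
hits = frames_where_close(recording, "hand_right", "head", 0.3)
print(hits)  # -> [1]
```

Running such queries over many recordings is what turns raw elicitation sessions into countable gesture proposals, which is the analysis step the tool automates.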
FlyLoop: a micro framework for rapid development of physiological computing systems (pp. 152-157)
  Evan M. Peck; Eleanor Easse; Nick Marshall; William Stratton; L. Felipe Perrone
With the advent of wearable computing, cheap, commercial-grade sensors have broadened access to real-time physiological sensing. While there is considerable research that explores leveraging this information to drive intelligent interfaces, the construction of such systems has largely been limited to those with significant technical expertise. Even seasoned programmers are forced to tackle serious engineering challenges such as merging data from multiple sensors, applying signal processing algorithms, and modeling user state in real-time. These hurdles limit the accessibility and replicability of physiological computing systems, and more broadly intelligent interfaces. In this paper, we present FlyLoop -- a small, lightweight Java framework that enables programmers to rapidly develop and experiment with intelligent systems. By focusing on simplicity and modularity rather than device compatibility or software dependencies, we believe that FlyLoop can broaden participation in next-generation user interfaces, and encourage systems that can be communicated and reproduced.
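The pipeline such a framework modularises (sensor source, signal filter, user-state classifier) can be sketched briefly. FlyLoop itself is Java; this is an illustrative Python analogue, and none of the names below are its real API.

```python
def moving_average(samples, window=3):
    """Simple smoothing filter over a raw physiological signal."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

def classify(value, threshold=0.5):
    """Toy classifier: map a processed sample to a coarse user state."""
    return "high_load" if value > threshold else "low_load"

# A hypothetical normalized signal (e.g. from a heart-rate sensor):
raw = [0.2, 0.9, 0.8, 0.1, 0.2, 0.1]
smoothed = moving_average(raw)
states = [classify(v) for v in smoothed]
print(states)
```

The point of keeping source, filter and classifier as separate stages is that each can be swapped (a different sensor, a different model of user state) without touching the rest of the loop.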
Dynamic user interface adaptation driven by physiological parameters to support learning (pp. 158-163)
  Giuseppe Ghiani; Marco Manca; Fabio Paternò
Technology to make physiological measurements related to attention and cognitive load is becoming more affordable. We propose a solution based on combining the exploitation of dynamic user information gathered through such technology with a rule-based strategy for adaptation of e-learning Web applications. We focus on users' physiological data and aspects relevant for the task being carried out. A flexible rule-based approach allows designers and developers to define a wide range of rule compositions to express changes in the user interface based on how the user feels and behaves. The overall goal of the framework is to serve as a tool for content developers of Web applications, such as operators of online Learning Management Systems, and for their end-users. In this domain, through our approach teachers can create their educational contents, and specify how they should dynamically adapt to students' behaviour in order to improve the learning process.

Ubiquitous and context-aware systems

fabryq: using phones as gateways to prototype internet of things applications using web scripting (pp. 164-173)
  Will McGrath; Mozziyar Etemadi; Shuvo Roy; Bjoern Hartmann
Ubiquitous computing devices are often size- and power-constrained, which prevents them from directly connecting to the Internet. An increasingly common pattern is therefore to interpose a smart phone as a network gateway, and to deliver GUIs for such devices. Implementing the pipeline from embedded device through a phone application to the Internet requires a complex and disjoint set of languages and APIs. We present fabryq, a platform that simplifies the prototyping and deployment of such applications. fabryq uses smartphones as bridges that connect devices using the short-range wireless technology Bluetooth Low Energy (BLE) to the Internet. Developers only write code in one language (JavaScript) and one location (a server) to communicate with their device. We introduce a protocol proxy programming model to control remote devices; and a capability-based hardware abstraction approach that supports scaling from a single prototype device to a deployment of multiple devices. To illustrate the utility of our platform, we show example applications implemented by authors and users, and describe μfabryq, a BLE prototyping API similar to Arduino, built with fabryq.
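The protocol-proxy programming model can be sketched generically: application code reads and writes named device "characteristics" through a proxy object, while a transport (normally a phone relaying BLE traffic) carries the actual bytes. All names below are hypothetical illustrations of the pattern, not fabryq's real API, and a plain dict stands in for the transport.

```python
class DeviceProxy:
    """Proxy that lets application code address a remote device's
    characteristics as if they were local attributes."""

    def __init__(self, transport):
        self._transport = transport  # would bridge to the device via a phone

    def read(self, characteristic):
        return self._transport.get(characteristic)

    def write(self, characteristic, value):
        self._transport[characteristic] = value

# A dict is enough to demonstrate the programming model.
fake_device = {"led": 0, "temperature": 21.5}
proxy = DeviceProxy(fake_device)

proxy.write("led", 1)            # turn on the remote LED
temp = proxy.read("temperature")
print(proxy.read("led"), temp)
```

Because only the transport knows about BLE, the same application code can later run against a real gateway, which is what makes the prototype-to-deployment transition cheap.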
PhoneEar: interactions for mobile devices that hear high-frequency sound-encoded data (pp. 174-179)
  Aditya Shekhar Nittala; Xing-Dong Yang; Scott Bateman; Ehud Sharlin; Saul Greenberg
We present PhoneEar, a new approach that enables mobile devices to understand the broadcast audio and sounds that we hear every day using existing infrastructure. PhoneEar audio streams are embedded with sound-encoded data using nearly inaudible high frequencies. Mobile devices then listen for messages in the sounds around us, taking actions to ensure we don't miss any important information. In this paper, we detail our implementation of PhoneEar, describe a study demonstrating that mobile devices can effectively receive sound-based data, and describe the results of a user study that shows that embedding data in sounds is not detrimental to sound quality. We also exemplify the space of new interactions, through four PhoneEar-enabled applications. Finally, we discuss the challenges to deploying apps that can hear and react to data in the sounds around us.
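Encoding bits in nearly inaudible tones can be sketched as simple frequency-shift keying: each bit maps to a short burst of an ultrasonic-ish carrier. The specific frequencies, symbol length and amplitude below are assumptions for illustration, not PhoneEar's actual encoding scheme.

```python
import math

SAMPLE_RATE = 44100
FREQ = {0: 18000, 1: 19000}  # assumed near-inaudible carrier frequencies

def encode_bits(bits, symbol_seconds=0.05, amplitude=0.1):
    """Return raw audio samples encoding a bit sequence: one fixed-length
    sine-tone burst per bit (frequency-shift keying)."""
    samples = []
    n = int(SAMPLE_RATE * symbol_seconds)  # samples per symbol
    for bit in bits:
        f = FREQ[bit]
        samples.extend(amplitude * math.sin(2 * math.pi * f * i / SAMPLE_RATE)
                       for i in range(n))
    return samples

audio = encode_bits([1, 0, 1])
print(len(audio))  # 3 symbols of 2205 samples each
```

A receiver would run a short-time FFT over the microphone stream and pick, per symbol window, whichever carrier frequency holds more energy; the low amplitude keeps the data layer from audibly degrading the host sound.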
Sensor-based and tangible interaction with a TV community platform for seniors (pp. 180-189)
  Katja Herrmanny; Levent Gözüyasli; Daniel Deja; Jürgen Ziegler
This paper introduces a set of sensor-based, tangible techniques for interacting with a TV-based online social community platform (SCP) for seniors. The goal of these techniques is to allow seniors to use familiar objects and places in their living environment for communicating with other SCP members in a natural, intuitive manner. A prototype sensor-assisted living environment was created, consisting of a Smart TV, illuminated proximity switches (so-called "Activity Lights"), RFID readers embedded in furniture and RFID-tagged objects. A middleware broker component analyses the received sensor events and sends corresponding events to the TV, the environment or the SCP. The TV set serves as the main output device for showing SCP content and messages, while inputs may be delivered through physical objects, furniture-embedded sensors, tablets used as second screens, or the conventional remote control.
   To investigate users' experience when interacting with the system, and their attitude towards the sensor-based solution, an empirical evaluation with 15 seniors was conducted.
The SHARC framework: utilizing personal Dropbox accounts to provide a scalable solution to the storage and sharing of community generated locative media (pp. 190-199)
  Trien V. Do; Keith Cheverst
The emergence of personal cloud storage services provides a new paradigm for storing and sharing data. In this paper we present the design of the SHARC framework and in particular focus on the utilization of personal Dropbox accounts to provide a scalable solution to the storage and sharing of community generated locative media relating to a community's Cultural Heritage. In addition to scalability issues, the utilization of personal Dropbox storage also supports a 'sense of ownership' (relating to community media), which has arisen as an important requirement during our ongoing 'research-in-the-wild' work with the rural village community of Wray, involving public display deployments to support the display and sharing of community photos and stories. While the framework presented here is currently being tested with a particular place-based community (Wray), it has been designed to provide a general solution that should support other place-based communities.

Sketch it, print it, touch it

Connecting UI and business processes in a collaborative sketching environment (pp. 200-205)
  Markus Kleffmann; Marc Hesenius; Volker Gruhn
Sketching is an important activity in software development projects and has many advantages over strict formal languages, especially in cross-functional teams with different technical backgrounds. Sketches are used to develop all kinds of diagrams, providing a different view on the application and the underlying business logic. Several tools have been developed in recent years to support sketching activities, especially in the area of UI prototype sketching. Sketches are used to quickly develop a first impression of future UIs, helping designers, engineers, and domain experts to gain a common understanding of layout and dialog flows. Unfortunately, UI sketching tools focus on a single application aspect -- the UI -- and do not take business processes, technical data structures, and their relationships into account. We demonstrate how UI, business process, and technical diagram sketches can be interconnected in an augmented team room.
To print or not to print: hybrid learning with METIS learning platform (pp. 206-215)
  Joshua Hailpern; Rares Vernica; Molly Bullock; Udi Chatow; Jian Fan; Georgia Koutrika; Jerry Liu; Lei Liu; Steven Simske; Shanchan Wu
As part of the explosion in educational software, online tools, and open educational resources, there has been a rapid devaluation of printed textbooks. While digital texts have advantages, printed textbooks still provide irreplaceable value over online media. Therefore technology should enhance, rather than eliminate, printed text. To this end, this paper presents METIS, a hybrid learning software/service platform that is designed to support active reading. METIS provides easy digital-to-print-to-digital usage, simple creation of Cheat Sheets & FlexNotes for personal note taking and organization, and a custom flexible rendering & publishing engine for education called Aero. METIS was designed based on lessons learned from a formative study of 523 students at SJSU, and validated through focus groups involving 32 educators and students at both high school and college levels.
TULIP: a widget-based software framework for tangible tabletop interfaces BIBAFull-Text 216-221
  Eric Tobias; Valérie Maquil; Thibaud Latour
In this paper, we describe a new software framework for tangible tabletop interfaces: TULIP. The framework uses an abstraction layer to receive information from computer vision frameworks, such as reacTIVision, and a widget model based on MCRit to enable rapid application development. TULIP applies Software Engineering principles such as Separation of Concerns to remain extensible and simple while providing support for tangible interaction. TULIP implements a widget model and defines a program flow. This paper presents the considerations that shaped the conception and design of TULIP before illustrating its use with a tangible application developed for a national project.

Demo session

A test-bed for Facebook friend-list recommendations BIBAFull-Text 222-225
  Ziyou Wu; Isabella Huang; Xubin Zheng; Jacob Bartel; Andrew Vitkus; Prasun Dewan
We have engineered an interactive Facebook-based test-bed for experimenting with friend-list recommendations. Its user-interface has two components: one allows the end-user to use a recommendation algorithm to create usable friend-lists, and the other allows the researcher to determine the quality of the recommendations. It supports multi-stage experiments to compare the efforts required to create friend-lists manually and using the recommender. Multiple visualizations are provided to help understand and evaluate the underlying algorithm. Several preliminary experiments have provided encouraging results. The architecture allows the recommendation algorithms, end-user interfaces, and visualizations to be changed independently. A video demonstration of this work is available at http://youtu.be/FOMSVALrdGs.
Hasselt UIMS: a tool for describing multimodal interactions with composite events BIBAFull-Text 226-229
  Fredy Cuenca; Jan Van den Bergh; Kris Luyten; Karin Coninx
Implementing multimodal interactive systems with event-driven tools requires splitting the code across multiple event handlers, which greatly complicates programmers' work. To alleviate this complexity, we propose to extend the capabilities of existing event-driven tools so that, instead of detecting user events as if they were independent, they are able to detect sequences of semantically related events defined by programmers. Such an extended capability requires notations for (1) defining event patterns, (2) binding these patterns to one or more event handlers, and (3) specifying human-machine dialogs. This paper presents Hasselt UIMS, a tool that supports the creation of multimodal interfaces with these notations.
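To make the idea concrete, here is a minimal Python sketch of binding a composite event pattern (a sequence of semantically related events) to a single handler, instead of scattering the logic across per-event handlers. The class name, pattern syntax, and event names are invented for illustration; Hasselt UIMS defines its own notations for this.

```python
# Sketch only: a composite-event binding that fires its handler once the
# events in `pattern` arrive in order. Names and events are hypothetical.

class SequenceBinding:
    """Fires `handler` when the events in `pattern` are observed in order."""

    def __init__(self, pattern, handler):
        self.pattern = pattern
        self.handler = handler
        self.progress = 0  # number of pattern events matched so far

    def feed(self, event):
        if event == self.pattern[self.progress]:
            self.progress += 1
            if self.progress == len(self.pattern):
                self.progress = 0
                self.handler()
        else:
            # Restart; a non-matching event may itself begin a new match.
            self.progress = 1 if event == self.pattern[0] else 0

# Example: a multimodal "put-that-there" style composition (invented events).
fired = []
binding = SequenceBinding(["pen_down", "speech:put_here", "pen_up"],
                          lambda: fired.append("place_object"))
for ev in ["click", "pen_down", "speech:put_here", "pen_up"]:
    binding.feed(ev)
```

The point of the sketch is that the programmer writes one binding for the whole pattern, rather than one handler per low-level event plus hand-rolled state tracking.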
Using the djnn framework to create and validate interactive components iteratively BIBAFull-Text 230-233
  Stéphanie Rey; Stéphane Conversy; Mathieu Magnaudet; Mathieu Poirier; Daniel Prun; Jean-Luc Vinot; Stéphane Chatty
Using a real-life scenario of aircraft cockpit design, we illustrate how the model-based architecture of the djnn programming framework makes it possible to combine the multidisciplinary and iterative processes of user interface design with the requirements of industrial system development. Treating software programs as hierarchies of interactive components makes it possible to delegate the production of components to multiple actors, each using the tools of their trade. Components can be exchanged in various formats, refined without modifying their surroundings, and subjected to automated property verification before being integrated.
Innovative key features for mastering model complexity: flexilab, a multimodel editor illustrated on task modeling BIBAFull-Text 234-237
  Nicolas Hili; Yann Laurillau; Sophie Dupuy-Chessa; Gaëlle Calvary
Modeling Human-Computer Interaction (HCI) is nowadays practiced by IT companies. However, it remains a demanding task that requires advanced User Interface (UI) modeling tools to ease the design of large-scale models. This includes tackling massive UI models, the multiplicity of models, the multiplicity of stakeholders, and collaborative editing.
   This paper presents a UI multimodel editor for HCI, illustrated on task modeling. We present innovative key features (genericity, creativity, model conformity, reusability, etc.) to facilitate UI model design and to ease interaction.
A tool for optimizing the use of a large design space for gesture guiding systems BIBAFull-Text 238-241
  William Delamare; Céline Coutrix; Laurence Nigay
We present a tool to help practitioners characterize, compare, and design gesture guiding systems. The tool can be used to find an example system meeting specific requirements or to start exploring an original research area based on unexplored design options. The motivation for the online tool is the large underlying design space, which includes 35 design axes: the tool therefore helps explore and combine the various design options. Moreover, the tool currently includes descriptions of 46 gesture guiding systems and is thus also a repository of existing gesture guiding systems.

Keynote II

The semantic web: interacting with the unknown BIBAFull-Text 242-243
  Steffen Staab
When developing user interfaces for interacting with data and content, one typically assumes that one knows the type of data and how to interact with data of that type. The core idea of the Semantic Web is that data is self-describing, which implies, from a data consumer's point of view, that its semantics is designed and described not according to its use but according to the possibly orthogonal concerns of a data publisher, and that its usage semantics emerges over time. The ensuing flexibility is one of the greatest assets of the Semantic Web, but it also severely handicaps intelligent interaction with its data.

Model-driven development

A generic tool-supported framework for coupling task models and interactive applications BIBAFull-Text 244-253
  Célia Martinie; David Navarre; Philippe Palanque; Camille Fayollas
Task models are a very powerful artefact describing users' goals and activities, and they contain a wealth of information that is extremely useful for designing usable interactive applications. Indeed, task models are one of the very few means of ensuring the effectiveness of an application, i.e. that the application allows users to reach their goals and perform their tasks. Despite these advantages, task models are usually perceived as a very expensive artefact to build that has to be thrown away as soon as the interactive application has been designed, i.e. right after the early stages of the design process. However, task models can also be of great help, for instance when used to support the production of training material, the training of operators, and the provision of task- and goal-oriented contextual help while the interactive application is being used. This paper proposes a tool-supported framework for exploiting task models throughout the development process and even once the interactive application is deployed and used. To this end, we introduce a framework for connecting task models to an existing, executable, interactive application. The main contribution of the paper lies in the definition of a systematic correspondence between the user interface elements of the interactive application and the low-level tasks in the task model. Depending on whether the code of the application is available and whether the application has been prepared at programming time for such integration, we propose different alternatives for performing this correspondence in a tool-supported way. This task-application integration allows the exploitation of task models at run time, bringing the benefits listed above to any interactive application. The approach, the tools, and the integration are presented on a case study of a Flight Control Unit (FCU) used in aircraft cockpits.
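To give a rough flavor of such a correspondence, the following hypothetical Python sketch maps UI element identifiers to low-level task names so that UI events can be matched against task-model leaves at run time (e.g. to drive contextual help or training). All widget identifiers and task names are invented; the paper's framework is far richer than this mapping.

```python
# Hypothetical sketch: a widget-to-task correspondence table and a monitor
# that records which task-model leaves were performed through the UI.
# Identifiers and task names are invented for illustration.

correspondence = {
    # UI element identifier    low-level task in the task model
    "fcu.speed_knob":          "SetTargetSpeed",
    "fcu.heading_knob":        "SetTargetHeading",
    "fcu.altitude_button":     "EngageManagedAltitude",
}

class TaskModelMonitor:
    """Records which task-model leaves have been performed via the UI."""

    def __init__(self, mapping):
        self.mapping = mapping
        self.performed = []

    def on_ui_event(self, widget_id):
        task = self.mapping.get(widget_id)  # None for unmapped widgets
        if task is not None:
            self.performed.append(task)
        return task

monitor = TaskModelMonitor(correspondence)
monitor.on_ui_event("fcu.speed_knob")
monitor.on_ui_event("fcu.altitude_button")
```

Once such a table exists, the run-time environment can replay UI events against the task model, which is what enables the training and contextual-help uses mentioned above.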
Using profiled ontologies to leverage model driven user interface generation BIBAFull-Text 254-259
  Werner Gaulke; Jürgen Ziegler
Mobile computing and new input methods have increased the need to create multiple interfaces for one functional core. Automatic generation of user interfaces offers a possible solution to this problem. Existing approaches either generate interfaces on the basis of a detailed task model or use domain models in conjunction with interface-specific annotations and transformation rules. While task models are very time-consuming to create and cannot easily be reused, domain models lack the flexibility for use cases that are not covered by, or are in conflict with, the transformation rules used. Based on an overview of existing approaches, this paper sets out a conceptual framework that combines task-model-based and ontology-based concepts. It is shown that the proposed combination leads to more abstract and reusable task models.

Testing and validation

Plasticity of user interfaces: formal verification of consistency BIBAFull-Text 260-265
  Raquel Oliveira; Sophie Dupuy-Chessa; Gaëlle Calvary
Plastic user interfaces have the capacity to adapt to their context of use while preserving usability. This property gives rise to several versions of the same UI. This paper addresses the problem of verifying UI adaptation by means of formal methods. It proposes three approaches, all supported by the CADP toolbox and the LNT formal language. The first approach permits reasoning about the adaptation output, i.e. the UI versions: properties are verified over the UI models by model checking. The second approach verifies the plasticity engine itself. The last approach compares UI versions by equivalence checking. These approaches are discussed and compared on an example system from the nuclear power plant domain.
Equivalence checking for comparing user interfaces BIBAFull-Text 266-275
  Raquel Oliveira; Sophie Dupuy-Chessa; Gaëlle Calvary
Plastic User Interfaces (UIs) have the capacity to adapt to changes in their context of use while preserving usability. This exposes users to different versions of UIs that can diverge from each other at several levels, which may cause a loss of consistency. This raises the question of similarity between UIs. This paper proposes an approach to comparing UIs by measuring the extent to which they have the same interaction capabilities and appearance, using the formal method of equivalence checking. The approach verifies whether two UI models are equivalent or not. When they are not equivalent, the divergences between the UIs are listed, providing the possibility of leaving them out of the analysis; in this case, the two UIs are said to be equivalent modulo such divergences. Furthermore, the approach can show that one UI contains at least all the interaction capabilities of another. We apply the approach to a case study in the nuclear power plant domain in which several UI versions are analyzed and the equivalence and inclusion relations are demonstrated.
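As a generic illustration of equivalence checking (independent of the paper's CADP/LNT tooling), the sketch below checks strong bisimilarity between two small labelled transition systems, where states are UI screens and labels are user actions. It naively refines the full state relation to a fixed point; the two example "UIs" and their labels are invented.

```python
# Sketch only: naive strong-bisimilarity check between two labelled
# transition systems (LTSs). Each LTS maps a state to its (label, successor)
# pairs. This is an illustration of the general technique, not the paper's
# algorithm or tool.

def bisimilar(trans_a, trans_b, s0, t0):
    """Return True iff states s0 (in trans_a) and t0 (in trans_b) are
    strongly bisimilar."""
    rel = {(s, t) for s in trans_a for t in trans_b}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            # Transfer condition: every move of one side must be matched by
            # an equally labelled move of the other into a related state.
            ok = all(any(l2 == l and (s2, t2) in rel
                         for (l2, t2) in trans_b[t])
                     for (l, s2) in trans_a[s]) and \
                 all(any(l2 == l and (s2, t2) in rel
                         for (l2, s2) in trans_a[s])
                     for (l, t2) in trans_b[t])
            if not ok:
                rel.discard((s, t))
                changed = True
    return (s0, t0) in rel

# Two UI versions with the same interaction capabilities...
ui_v1 = {"start": [("login", "home")], "home": [("logout", "start")]}
ui_v2 = {"s0": [("login", "s1")], "s1": [("logout", "s0")]}
# ...and one that lost the "logout" capability after adaptation.
ui_v3 = {"s0": [("login", "s1")], "s1": []}
```

When the check fails, the pairs removed from the relation point to the diverging behaviors, which is the kind of divergence listing the approach exploits.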
Verification of properties of interactive components from their executable code BIBAFull-Text 276-285
  Stéphane Chatty; Mathieu Magnaudet; Daniel Prun
In this paper we describe how an executable model of interactive software can be exploited to allow programmers or specifiers to express properties that will be automatically checked on the components they create or reuse. The djnn framework relies on a theoretical model of interactive software in which applications are described in their totality as hierarchies of interactive components, with no additional code. This includes high-level components, but also the graphics, behaviors, computations, and data manipulations that constitute them. Because of this, the structure of the application tree provides significant insights into the nature and behavior of components. Pattern recognition systems can then be used to express and check simple properties, such as the external signature of a component, its internal flows of control, or even the continued visibility of a component on a display. This provides programmers with solutions for checking their components, ensuring non-regression, or working in a contract-oriented fashion with other UI development stakeholders.
Towards a privacy threat model for public displays BIBAFull-Text 286-291
  Morin Ostkamp; Christian Kray; Gernot Bauer
While public displays have proliferated in many areas, passersby frequently do not perceive the contents shown as useful. One way to provide relevant content is to show personalized information or to let users personalize displays interactively. However, this approach raises privacy concerns. To gain a better understanding of these concerns, we carried out a study assessing the applicability of an existing threat model to public displays. We report on key outcomes, propose an extended privacy threat model tailored to interactive public displays, and provide an assessment of the importance of different privacy threats in various application scenarios. Engineers of interactive displays may use our findings to systematically analyze privacy requirements and threats, which in turn may lead to privacy-aware designs that can positively affect user attitude and display usage.

Workshop summaries

Large-scale interaction deployment: approaches and challenges BIBAFull-Text 292-293
  Bashar Altakrouri; Andreas Schrader; Simo Hosio; Martin Christof Kindsmüller; Beat Signer
The increasing acceptance and innovation in Natural User Interfaces (NUIs) promise a widespread adoption of interactive systems following this paradigm. Although dozens of novel interaction techniques are being proposed every year, the currently applied approaches for designing and implementing NUI-based systems are greatly challenged. This workshop aims at outlining and discussing some of those emerging challenges based on four general research perspectives, namely large-scale and dynamic runtime deployment of interaction techniques; adequate long-term dissemination of interaction techniques; in-situ adaptation of interaction techniques; and dynamic interaction ensembles.
Workshop on formal methods in human computer interaction BIBAFull-Text 294-295
  Benjamin Weyers; Judy Bowen; Alan Dix; Philippe Palanque
This workshop aims to gather active researchers and practitioners in the field of formal methods for interactive systems. The main objective is twofold: on the one hand, to look at how the definition and use of formal methods for interactive systems have evolved since the last book in the field nearly 20 years ago [1], following the seminal work reported in [2]; on the other hand, to identify important themes for the next decade of research. Formal methods aid the design, development, and evaluation of interactive systems, providing a unique opportunity for complete and unambiguous descriptions amenable to formal verification. The HCI community has demonstrated that the next generation of user interfaces is moving off the desktop: these emerging interfaces exploit novel input techniques such as tangible, haptic, camera-based, and brain-computer interaction, and present large quantities of information, possibly distributed across a wide range of devices. In this workshop, we will discuss common themes, conflicting approaches and techniques, and future directions for the next generation of formal methods that will support the development of large-scale, dependable, and usable interactive systems.
Model-based interactive ubiquitous systems (MODIQUITOUS) BIBAFull-Text 296-297
  Thomas Schlegel; Ronny Seiger; Christine Keller; Romina Kühn
Ubiquitous systems introduce a new quality of interaction both into our lives and into software engineering. Software becomes increasingly dynamic, requiring frequent changes to system structures, distribution and behaviour. The constant adaptation to new user needs and contexts as well as new modalities, components, and communication channels make these systems differ strongly from what has been standard over the last decades. Model-based interaction at runtime forms a promising approach for coping with dynamics and uncertainties inherent to interactive ubiquitous systems (IUS). This workshop discusses how model-driven development can be used to handle these challenges. In this third edition of MODIQUITOUS we put special focus on using models at runtime to support flexible, context-aware and interactive ubiquitous computing. Our goal is to bring together researchers and practitioners focused on different areas of IUS and to discuss various aspects of model-based interaction.
   The workshop will be held as a full day workshop and aims to provide a forum for discussing new ideas, issues and solutions for model-based IUS. It will include the presentation of participants' contributions and various forms of interactive discussions concerning the presented topics.
Engineering interactive systems with SCXML BIBAFull-Text 298-299
  Dirk Schnelle-Walka; Stefan Radomski; Jim Barnett; Max Mühlhäuser
The W3C SCXML standard for Harel state-charts, the W3C MMI architecture specification, and related work from the W3C MMI working group form a promising suite of recommendations that could become the "HTML of multimodal applications". This 2nd installment of the workshop will provide a forum for academia and industry alike to discuss recent developments in dialog modeling using state-charts and to identify remaining shortcomings in the operationalization and application of the related approaches.
Systems and tools for cross-device user interfaces BIBAFull-Text 300-301
  Michael Nebeling; Fabio Paternò; Frank Maurer; Jeffrey Nichols
The goal of the XDUI 2015 workshop is to bring together leading and upcoming systems researchers in the area of cross-device interfaces and define a research agenda together. The workshop aims to be useful, not only for the EICS research community, but for the wider HCI community, where many recent cross-device systems and tools have been developed and investigated almost in parallel without learning from and building on each other. It targets both new and established researchers in the area -- new researchers will quickly get an overview of the state of the art, while established researchers can draw more detailed comparisons between their solutions and discuss benefits and limitations. This workshop at EICS provides a unique opportunity to sketch the design space of possible cross-device user interfaces and discuss technical concerns of existing solutions as well as open issues and future research directions.