
ACM SIGCHI 2012 Symposium on Engineering Interactive Computing Systems

Fullname: Proceedings of the 4th ACM SIGCHI Symposium on Engineering Interactive Computing Systems
Editors: Simone D. J. Barbosa; José Creissac Campos; Rick Kazman; Philippe Palanque; Michael Harrison; Steve Reeves
Location: Copenhagen, Denmark
Dates: 2012-Jun-25 to 2012-Jun-28
Publisher: ACM
Standard No: ISBN 978-1-4503-1168-7; hcibib: EICS12
Papers: 47
Pages: 336
Summary:It is our great pleasure to welcome you to the 4th ACM SIGCHI Symposium on Engineering Interactive Computing Systems -- EICS'12 held in Copenhagen (25--28 June 2012). EICS is an international conference devoted to all aspects of engineering usable and effective interactive computing systems. Topics of interest include multi-device interactive systems, new and emerging modalities (e.g., gesture), entertaining applications (e.g., mobile and ubiquitous games), safety critical systems (e.g. medical devices), and design and development methods (e.g., extreme programming).
    EICS focuses on tools, techniques and methods for designing and developing interactive systems. The conference brings together people who study or practice the engineering of interactive systems, drawing from the Human-Computer Interaction (HCI), Software Engineering, Requirements Engineering, Computer-Supported Cooperative Work (CSCW), Ubiquitous & Pervasive Systems, Game Development, and Cognitive Engineering communities. The conference is the successor to a number of conference and workshop series: EHCI (Engineering Human Computer Interaction), DSV-IS (International Workshop on the Design, Specification and Verification of Interactive Systems), CADUI (International Conference on Computer-Aided Design of User Interfaces) and TAMODIA (International Workshop on Task Models and Diagrams).
    Since its beginning, EICS has witnessed a growing number of submissions. This year the program contains 21 full papers carefully chosen from a total of 95 submissions (22% acceptance rate). There are also 12 late-breaking papers (five of which are presented as posters) as well as a number of doctoral reports, workshop reports, tutorial abstracts and demonstration descriptions. The published material originates from 15 countries, spanning New Zealand, North and South America, and Europe. In addition, keynote addresses will be offered by Robert Jacob (Tufts University, USA) and Jakob Bardram (IT University of Copenhagen, Denmark).
    We believe that for this fourth edition of EICS we have assembled an exciting and interactive program that will stimulate fruitful discussion in the relevant research fields. Topics range from model-based approaches to the design, analysis and generation of user interfaces, to toolkits supporting their development. A diversity of interaction styles and application areas is covered, including multimodal, multi-device and multi-touch interfaces, ubiquitous computing, and health.
    We hope that you will find this year's program interesting and thought-provoking. The symposium aims to provide you with a valuable opportunity to share ideas with other researchers and practitioners from institutions around the world. We also wish the best to the next edition, EICS 2013, to be held in London, UK, in June 2013.
  1. Keynote addresses
  2. Engineering 1
  3. UbiComp
  4. Toolkits
  5. UI generation
  6. Formal methods
  7. WWW & visualization
  8. Models
  9. Task models
  10. Engineering 2
  11. Health
  12. Demonstrations
  13. Poster Session
  14. Doctoral consortium
  15. Tutorial
  16. Workshop

Keynote addresses

Engineering next generation interfaces: past and future BIBAFull-Text 1-2
  Robert J. K. Jacob
Tools, abstractions, models, and specification techniques for engineering new generations of interactive systems have tended to follow the development of such systems by about half a generation. In each case, hackers first start experimenting with new types of systems. Then the model developers and tool builders enter as requirements and paradigms solidify. And ultimately the tools and abstractions become so widely accepted and commonplace that they are no longer an open research area. This has happened with conventional graphical user interfaces, and it continues through new generations of interaction styles. It poses a continuing challenge to our community to focus ahead on the tools and techniques needed for each new emerging future interaction style.
   I will discuss research projects on specifying previous and current genres of "next generation" user interfaces and how each has been matched to its target domain and has followed this pattern. I will also describe a new genre of adaptive, lightweight brain-computer interfaces as an example of the kinds of next generation interfaces that I see emerging. I offer it as a challenge to our community -- to think about tools and techniques for engineering a new generation of interfaces of this sort.
Distributed interaction BIBAFull-Text 3-4
  Jakob E. Bardram
The personal computer as used by most people still to a large degree follows an interaction and technological design dating back to Alan Kay's Dynabook and the Xerox Star. This implies that interaction is confined to a single device with a single keyboard/mouse/display hardware configuration sitting on a desk, and that personal rather than collaborative work is in focus.
   The challenges of "moving the computer beyond the desktop" are being addressed within different research fields. For example, Ubiquitous Computing (Ubicomp) investigates how computing can be embedded in everyday life; Computer Supported Cooperative Work (CSCW) researches collaborative interaction; and many researchers in the CHI and EICS communities explore basic infrastructure and technologies for handling multiple devices and displays in, e.g., smart room setups.
   In this talk, I will present our approach to these challenges. Specifically, I will introduce the term "distributed interaction," a research agenda focusing on the theory, conceptual frameworks, interaction design, user interfaces, and infrastructure that allow interaction with computers to be distributed along three dimensions. Devices -- computers should not be viewed as single devices but as (inter)networked devices; hence, interaction is not confined to one device, but should encompass multiple devices. Space -- computers are distributed in space and time, and are not confined to one setting. This includes mobility, but more importantly that devices are to be found in all sorts of odd settings where they need to adapt to, and collaborate with, their surroundings, including other devices, people, interaction devices, etc. People -- computers are to a large degree the primary means of collaboration in distributed organizations. A lot has changed since the personal computer was designed for small-office collaboration, and there is a need to incorporate support for global interaction as a fundamental mechanism in computing platforms.
   I will present our current approach for supporting distributed interaction, called "activity-based computing" (ABC). Grounded in Activity Theory, ABC provides a conceptual framework, interaction design, user interface, and a distributed programming and runtime infrastructure for distributed interaction. I will present ABC and show how it has been applied in building support for clinical work in hospitals and for smart space technology.

Engineering 1

Increasing Kinect application development productivity by an enhanced hardware abstraction BIBAFull-Text 5-14
  Bernardo Reis; João Marcelo Teixeira; Felipe Breyer; Luis Arthur Vasconcelos; Aline Cavalcanti; André Ferreira; Judith Kelner
Designing and implementing the interaction behavior of body-tracking-capable systems requires complex modeling of actions and extensive calibration. As the most recent and successful device for robust interactive body tracking, Microsoft's Kinect has enabled natural interaction with consumer hardware, providing detailed and powerful information to designers and developers, but little tooling. To address this lack of adequate tools for prototyping and implementing such interfaces, we present Kina, a toolkit that frees development from depending entirely on the presence of a physical sensor. By providing playback capabilities together with an online movement database, it reduces the physical effort involved in testing activities.
Fusion in multimodal interactive systems: an HMM-based algorithm for user-induced adaptation BIBAFull-Text 15-24
  Bruno Dumas; Beat Signer; Denis Lalanne
Multimodal interfaces have been shown to be ideal candidates for interactive systems that adapt to a user either automatically or based on user-defined rules. However, user-based adaptation demands correspondingly advanced software architectures and algorithms. We present a novel multimodal fusion algorithm for the development of adaptive interactive systems which is based on hidden Markov models (HMM). In order to select relevant modalities at the semantic level, the algorithm is linked to temporal relationship properties. The presented algorithm has been evaluated in three use cases, from which we were able to identify the main challenges involved in developing adaptive multimodal interfaces.
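The abstract does not detail the fusion algorithm itself; as general background, the HMM machinery it builds on can be illustrated with the standard forward algorithm, which scores how well an observation sequence (for instance, a sequence of modality events) fits a given model. The states, symbols, and probabilities below are invented for illustration and are not taken from the paper.

```python
# Toy forward-algorithm sketch for scoring an observation sequence under an
# HMM. Generic textbook machinery, not the paper's fusion algorithm; all
# numbers here are made up for the example.

start = [0.6, 0.4]        # initial distribution over two hidden states
trans = [[0.7, 0.3],      # transition probabilities between states
         [0.4, 0.6]]
emit = [[0.5, 0.4, 0.1],  # emission probabilities for three observation symbols
        [0.1, 0.3, 0.6]]

def forward(obs):
    """Return P(obs) by summing over all hidden-state paths."""
    # Initialize with the first observation.
    alpha = [start[s] * emit[s][obs[0]] for s in range(2)]
    # Recursively fold in each subsequent observation.
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(2)) * emit[s][o]
                 for s in range(2)]
    return sum(alpha)

# A sequence that fits the model well scores higher than one that does not.
print(forward([0, 0, 1]) > forward([2, 2, 2]))  # True
```

In a fusion setting, one such model per candidate interpretation could be scored against the incoming event stream and the highest-likelihood interpretation selected; that selection step is sketched here only conceptually.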
User interface engineering for software product lines: the dilemma between automation and usability BIBAFull-Text 25-34
  Andreas Pleuss; Benedikt Hauptmann; Deepak Dhungana; Goetz Botterweck
Software Product Lines (SPL) are a systematic approach to developing families of similar software products by explicating their commonalities and variability, e.g., in a feature model. Using techniques from model-driven development, it is then possible to automatically derive a concrete product from a given configuration (i.e., a selection of features). However, this is problematic for interactive applications with complex user interfaces (UIs), as automatically derived UIs often provide limited usability. Thus, in practice, the UI is mostly created manually for each product, which results in major drawbacks concerning efficiency and maintenance, e.g., when applying changes that affect the whole product family. This paper investigates these problems based on real-world examples and analyses the development of product families from a UI perspective. To address the underlying challenges, we propose the use of abstract UI models, as used in HCI, to bridge the gap between automated, traceable product derivation and customized, high-quality user interfaces. We demonstrate the feasibility of the approach with a concrete example implementation for the suggested model-driven development process.

UbiComp

Autonomic management of multimodal interaction: DynaMo in action BIBAFull-Text 35-44
  Pierre-Alain Avouac; Philippe Lalanda; Laurence Nigay
Multimodal interaction can play a dual key role in pervasive environments: it provides naturalness for interacting with distributed, dynamic and heterogeneous digitally controlled equipment, and flexibility for letting users select the interaction modalities depending on the context. The DynaMo (Dynamic multiModality) framework is dedicated to the development and runtime management of multimodal interaction in pervasive environments. This paper focuses on the autonomic approach of DynaMo, whose originality lies in its use of partial interaction models. The autonomic manager combines and completes the partial models available at runtime in order to build multimodal interaction adapted to the current execution conditions and in conformance with the predicted models. We illustrate the autonomic solution with several running examples and different partial interaction models.
A logical framework for multi-device user interfaces BIBAFull-Text 45-50
  Fabio Paternò; Carmen Santoro
In this paper, we present a framework for describing various design dimensions that can help in better understanding the features provided by tools and applications for multi-device environments. We indicate the possible options for each dimension, and also discuss how various research proposals in the area are located in our framework. The final discussion also points out important areas for future research.

Toolkits

PuReWidgets: a programming toolkit for interactive public display applications BIBAFull-Text 51-60
  Jorge Cardoso; Rui José
Interaction is repeatedly pointed out as a key enabling element towards more engaging and valuable public displays. Still, most digital public displays today do not support any interactive features. We argue that this is mainly due to the lack of efficient and clear abstractions that developers can use to incorporate interactivity into their applications. As a consequence, interaction represents a major overhead for developers, and users are faced with inconsistent interaction models across different displays. This paper describes the results of a study on interaction widgets for generalized interaction with public displays. We present PuReWidgets, a toolkit that supports multiple interaction mechanisms, automatically generated graphical interfaces, asynchronous events and concurrent interaction. This is an early effort towards the creation of a programming toolkit that developers can incorporate into their public display applications to support the interaction process across multiple display systems without considering the specifics of what interaction modality will be used on each particular display.
jQMultiTouch: lightweight toolkit and development framework for multi-touch/multi-device web interfaces BIBAFull-Text 61-70
  Michael Nebeling; Moira Norrie
Application developers currently have to deal with the proliferation of new touch devices and with diversity in both native platform support for common gesture-based interactions and touch input sensing and processing techniques, in particular for custom multi-touch behaviours. This paper presents jQMultiTouch -- a lightweight web toolkit and development framework for multi-touch interfaces that can run on many different devices and platforms. jQMultiTouch is inspired by the popular jQuery toolkit for implementing interfaces in a device-independent way based on client-side web technologies. Similar to jQuery, the framework resolves cross-browser compatibility issues and implementation differences between device platforms by providing a uniform method for the specification of multi-touch interface elements and associated behaviours that seamlessly translate to browser-specific code. At the core of jQMultiTouch is a novel input stream query language for filtering and processing touch event data based on an extensible set of match predicates and aggregate functions. We demonstrate design simplicity for developers through several example applications and discuss the performance, scalability and portability of the framework.
ToyVision: a toolkit for prototyping tabletop tangible games BIBAFull-Text 71-80
  Javier Marco; Eva Cerezo; Sandra Baldassarri
This paper presents "ToyVision", a software toolkit aimed at easing the prototyping of tangible games on vision-based tabletop devices. Compared to other software toolkits, which offer very limited, tag-centered tangible possibilities, ToyVision provides designers and developers with intuitive tools for modeling innovative tangible controls and with higher-level data on users' manipulations. ToyVision is based on the reacTIVision open-source toolkit, which has been extended with new functionalities in its Hardware layer. The main design decision taken has been to split the Widget layer from the lower abstraction layers. This new abstraction layer (the Widget layer) is the distinguishing feature of ToyVision and provides the developer with access to a set of encapsulated classes that give the status of any playing piece handled on the tabletop while the game is running. The toolkit is complemented by a Graphic Assistant that gathers from the designer all the information needed by the toolkit to model the tangible playing pieces. As a practical example, the process of prototyping a tangible game is described.

UI generation

MyUI: generating accessible user interfaces from multimodal design patterns BIBAFull-Text 81-90
  Matthias Peissner; Dagmar Häbe; Doris Janssen; Thomas Sellner
Adaptive user interfaces can make technology more accessible. Quite a number of conceptual and technical approaches have been proposed for adaptations to diverse user needs, multiple devices or multiple environments. Little work, however, has been directed at integrating all the essential aspects of adaptive user interfaces for accessibility in one system. In this paper, we present our generic MyUI infrastructure for increased accessibility through automatically generated adaptive user interfaces. The multimodal design patterns repository serves as the basis for a modular approach to individualized user interfaces. This open and extensible pattern repository makes the adaptation rules transparent for designers and developers who can contribute to the repository by sharing their knowledge about accessible design. The adaptation architecture and procedures enable user interface generation and dynamic adaptations during run-time. For the specification of an abstract user interface model, a novel statecharts-based notation has been developed. A development tool supports the interactive creation of the graphical user interface model.
An automated layout approach for model-driven WIMP-UI generation BIBAFull-Text 91-100
  David Raneburger; Roman Popp; Jean Vanderdonckt
Automated Window/Icon/Menu/Pointing Device User Interface (WIMP-UI) generation has been considered a promising technology for at least two decades. One of the major reasons why it has not become mainstream so far is that the usability of automatically generated UIs is rather low. This is mainly because non-functional requirements like layout or style issues are not considered adequately during the generation process. This paper proposes an automated layout approach that supports the explicit specification of layout parameters in device-independent, and thus reusable, transformation rules. Missing layout parameters are completed automatically, based on 'Layout Hints' and taking scrolling preferences into account. We are aware that human intervention in UI development will always be required to create high-quality UIs. Therefore, we aim to improve the generated UI by considering hints and applying heuristics, rather than solving a problem for which we believe there is no generic solution.
Systematic generation of abstract user interfaces BIBAFull-Text 101-110
  Vi Tran; Jean Vanderdonckt; Ricardo Tesoriero; François Beuvens
An abstract user interface is defined, according to the Cameleon Reference Framework, as a user interface supporting an interactive task abstracted from its implementation, independently of any target computing platform and interaction modality. While an abstract user interface could be specified in isolation, it could also be produced from various models such as a task model, a domain model, or a combination of both, possibly based on information describing the context of use (i.e., the user, the platform, and the environment). This paper presents a general-purpose algorithm that systematically generates all potential abstract user interfaces from a task model as candidates that could then be refined in two ways: removing irrelevant candidates based on constraints imposed by the temporal operators, and grouping or ungrouping candidates according to constraints imposed by the context of use. A model-driven engineering environment has been developed that applies this general-purpose algorithm with multiple levels of refinement, ranging from no contextual consideration to full-context consideration. The algorithm is exemplified on a sample interactive application to be executed in various contexts of use, such as different categories of users using different platforms for the same task.

Formal methods

Engineering animations in user interfaces BIBAFull-Text 111-120
  Thomas Mirlacher; Philippe Palanque; Regina Bernhaupt
Graphical User Interfaces used to be static, graphically representing one software state after the other. However, animated transitions between these static states are an integral part of modern user interfaces, and processes for both their design and implementation remain a challenge for designers and developers.
   This paper proposes a Petri net model-based approach to support the design, implementation and validation of animated user interfaces by providing a complete and unambiguous description of the entire user interface, including animations. A process for designing interactive systems focusing on animations is presented, along with a framework for the definition and implementation of animation in user interfaces. The framework proposes a two-level approach: a high-level view of an animation (focusing on animated objects, the properties to be animated, and the composition of animations) and a low-level one dealing with detailed aspects of animations such as timing and optimization. A case study (in the domain of interactive television) elaborating the application of the presented process and framework exemplifies the contribution.
Modelling user manuals of modal medical devices and learning from the experience BIBAFull-Text 121-130
  Judy Bowen; Steve Reeves
Ensuring that users can successfully interact with software and hardware devices is a critical part of software engineering. There are many approaches taken to ensure successful interaction, e.g. the use of user-centred design, usability studies, training and education. In this paper we consider how the users of modal medical devices, such as syringe pumps, are supported (or not) post-training by documentation such as user manuals. Our intention is to show that modelling such documents is a useful component of the software engineering process, allowing us to discover inconsistencies between devices and manuals as well as uncovering potentially undesirable properties of the devices being modelled.
Formal analysis of ubiquitous computing environments through the APEX framework BIBAFull-Text 131-140
  José Luís Silva; José Campos; Michael Harrison
Ubiquitous computing (ubicomp) systems involve complex interactions between multiple devices and users. This complexity makes it difficult to establish whether: (1) observations made about use are truly representative of all possible interactions; (2) desirable characteristics of the system hold in all possible scenarios. To address these issues, techniques are needed that support an exhaustive analysis of a system's design. This paper demonstrates one such exhaustive analysis technique that supports the early evaluation of alternative designs for ubiquitous computing environments. The technique combines models of behavior within the environment with a virtual world that allows its simulation. The models support the checking of properties based on patterns. These patterns help the analyst to generate and verify relevant properties. Where these properties fail, the scenarios suggested by the failure provide an important aid to redesign. The proposed technique uses APEX, a framework for rapid prototyping of ubiquitous environments based on Petri nets. The approach is illustrated through a smart library example. Its benefits and limitations are discussed.

WWW & visualization

Collaborative web browsing: multiple users, multiple pages, concurrent access, one display BIBAFull-Text 141-150
  Oliver Schmid; Agnes Lisowska Masson; Béat Hirsbrunner
Situations where users want to engage in collaborative web browsing are becoming increasingly common. However, current web technologies are not designed to allow multiple users to browse the web simultaneously within a single common browser or application, since they are unable to handle issues such as simultaneous access by multiple pointers and multiple simultaneous points of focus within an application. The web-based solution that we propose is an implementation of a forward proxy that injects third-party web pages with specialized JavaScript providing the aforementioned functionality transparently, without affecting the original third-party web pages, thus effectively extending them for collaborative web browsing scenarios. Moreover, our solution provides functionality for cloning web screens to encourage awareness of collaborators' browsing activities, and does not require any configuration or software installation on web-enabled client devices, allowing for easy walk-up-and-use interaction.
Weighted faceted browsing for characteristics-based visualization selection through end users BIBAFull-Text 151-156
  Martin Voigt; Artur Werstler; Jan Polowinski; Klaus Meißner
Faceted browsing is a widespread, intuitive, and interactive search paradigm for information collections based on the metadata of their items. However, it has the problem that every selected criterion is mandatory, so that less important ones may reduce the result set and interesting items may be removed unintentionally. On the other hand, choosing only very few facets yields an unmanageable set of items within which the best ones do not become obvious. In this paper, we propose weighted faceted browsing, which seamlessly extends the existing faceted browsing paradigm. Besides basic filtering capabilities, it provides a sophisticated relevance ranking of the result set based on the distinction between mandatory and weighted optional search criteria. Further, we show its practicability within an information visualization workbench to facilitate the end user's search for visualization components based on their characteristics.
Interactive construction of semantic widgets for visualizing semantic web data BIBAFull-Text 157-162
  Timo Stegemann; Jürgen Ziegler; Tim Hussein; Werner Gaulke
The rapidly growing amount of semantically represented data on the Web creates the need for more intuitive methods and tools to interact with these data and to use them in standard Web applications. We present a method by which users can interactively define personalized views of large semantic data spaces. Specifically, we propose X3S as a technique and format for specifying 'semantic widgets' that integrate querying and filtering of semantic data with the definition of their layout and presentation style. In addition, an editor has been developed that allows X3S templates to be created in a direct-manipulation style. The editor and the underlying format are evaluated against existing approaches by comparing their functional capabilities as well as in an initial user study.
Extraction and interactive exploration of knowledge from aggregated news and social media content BIBAFull-Text 163-168
  Arno Scharl; Alexander Hubmann-Haidvogel; Albert Weichselbraun; Gerhard Wohlgenannt; Heinz-Peter Lang; Marta Sabou
The webLyzard media monitoring and Web intelligence platform (www.webLyzard.com) presented in this paper is a generic tool for assessing the strategic positioning of an organization and the effectiveness of its communication strategies. The platform captures and aggregates large archives of digital content from multiple stakeholder groups. Each week it processes millions of documents and user comments from news media, blogs, Web 2.0 platforms such as Facebook, Twitter and YouTube, the Web sites of companies and NGOs, and other sources. An interactive dashboard with trend charts and complex map projections shows how often and where information is published. It also provides a real-time account of topics that stakeholders associate with an organization. Positive or negative sentiment is computed automatically, which reflects the impact of public relations and marketing campaigns.

Models

Specifying and running rich graphical components with Loa BIBAFull-Text 169-178
  Olivier Beaudoux; Mickael Clavreul; Arnaud Blouin; Mengqiang Yang; Olivier Barais; Jean-Marc Jezequel
Interactive system designs often require the use of rich graphical components whose capabilities go beyond the set of widgets provided by GUI toolkits. The implementation of such rich graphical components requires a high programming effort that GUI toolkits do not alleviate. In this paper, we propose the Loa framework, which allows both the specification of rich graphical components and their integration within running interactive applications. We illustrate specification and integration with the Loa framework as part of a global process for the design of interactive systems.
Unify localization using user interface description languages and a navigation context-aware translation tool BIBAFull-Text 179-188
  Michael Tschernuth; Michael Lettner; Rene Mayrhofer
The past few years have shown a shift from desktop software development towards mobile application development, due to the increasing number of smartphone users and available devices. Compared to traditional desktop applications, requirements are different in the mobile world. Due to the massive number of mobile applications, it is important to bring a new idea to market quickly while concurrently targeting a large number of users all over the world. The aspect of localization is crucial if the product is to be usable in different countries. The term localization in this context refers to the process of adapting software to different regions by changing the language, image resources, reading direction or other regional requirements. The proposed solution covers the aspect of string translation, with a focus on devices where the screen area is limited. Translating software poses a challenge, since the text can have several meanings on the one hand and has to match the available screen space on the other.
   Knowing the context and area where a string appears in the user interface can improve the quality and accuracy of the translation. It also reduces the effort for layout implementation and testing. This paper refers to that feature as navigation context-awareness. A Context-Aware Translation Tool (CATT) including this feature is presented. As input, the tool uses a user interface description language (UIDL), which makes it platform-independent. To increase the applicability of the tool to a number of description languages, a meta-model was created which specifies crucial compatibility requirements. An evaluation of existing languages regarding their compatibility with the proposed model and a discussion of limitations are included.
What can model-based UI design offer to end-user software engineering? BIBAFull-Text 189-194
  Anke Dittmar; Alfonso García Frey; Sophie Dupuy-Chessa
End-User Programming enables end users to create their own programs. This can be accomplished in different ways, one of which is the appropriation or reconfiguration of existing software. However, there is a trade-off between end users' 'situated design' and quality design, which is addressed in End-User Software Engineering. This paper investigates how methods and techniques from Model-Based UI Design can contribute to End-User Software Engineering. Applying the concept of Extra-UI, the paper describes a Model-Based approach that allows core applications to be extended in such a way that some of the underlying models and assumptions become manipulable by end users. The approach is discussed through a running example.

Task models

Modeling task transitions to help designing for better situation awareness BIBAFull-Text 195-204
  Thomas Villaren; Gilles Coppin; Angélica Leal
In complex systems such as cockpits or unmanned systems, operators manage a set of tasks with high temporal dynamics. Frequent changes of situation within the same mission can sometimes induce a loss of operators' Situation Awareness.
   In this paper, we introduce a methodology for the design of Human-Computer Interfaces in dynamic systems, taking into account the situation elements that constitute operators' activity. We follow a user-centered approach: end-users and domain experts are involved throughout the different steps of this model-based design process.
   The complete methodology is presented here, from initial task & situation modeling, through transition analysis, to the final recommendations on interface design, applied to an illustrative example.
Exploring design principles of task elicitation systems for unrestricted natural language documents BIBAFull-Text 205-210
  Hendrik Meth; Alexander Maedche; Maximilian Einoeder
During the design of interactive systems, user tasks need to be identified within natural language documents (such as interview transcripts, support messages or workshop memos) and transformed into task models. This time-consuming and error-prone analysis process calls for automation, yet corresponding software support is still sparse. This paper describes a Design Science Research project which explores design principles for a system aiming to close this gap. To evaluate the principles, they are instantiated in an innovative artifact called REMINER, which combines Information Retrieval, Natural Language Processing and Annotation technology. The artifact can be used to semi-automatically identify user tasks in unrestricted natural language documents and to organize them into task models. The results of two extensive evaluations show that the artifact addresses the underlying problem areas of this process to a considerable extent.

Engineering 2

Reusable decision space for mashup tool design BIBAFull-Text 211-220
  Saeed Aghaee; Marcin Nowak; Cesare Pautasso
Mashup tools are a class of integrated development environments that enable rapid, on-the-fly development of mashups -- a type of lightweight Web application mixing content and services provided through the Web. In the past few years there has been a growing number of projects, both from academia and industry, aimed at the development of innovative mashup tools. From a software architecture perspective, the massive effort behind the development of these tools creates a large pool of reusable architectural decisions from which the design of future mashup tools can derive considerable benefit. In this paper, focusing on the design of mashup tools, we explore a design space of decisions comprising design issues and alternatives. The design space knowledge is not only broad enough to explain the variability of existing tools, but also provides a road-map towards the design of next-generation mashup tools.
The design of a hardware-software platform for long-term energy eco-feedback research BIBAFull-Text 221-230
  Lucas Pereira; Filipe Quintal; Nuno Nunes; Mario Bergés
Researchers often face engineering problems, such as optimizing prototype costs and ensuring easy access to the collected data, which are not directly related to the research problems being studied. This is especially true for long-term studies in real-world scenarios. This paper describes the engineering perspective of the design, development and deployment of a long-term real-world study on energy eco-feedback, in which a non-intrusive home energy monitor was deployed in 30 houses for 18 months. Here we report on the effort required to implement a cost-effective non-intrusive energy monitor and, in particular, the construction of a local network allowing remote access to multiple monitors and the creation of a RESTful web service enabling the integration of these monitors with social media and mobile software applications. We conclude with initial results from a few eco-feedback studies performed using this platform.
Considerations for computerized in situ data collection platforms BIBAFull-Text 231-236
  Nikolaos Batalas; Panos Markopoulos
Computerized tools for in-situ data collection from study participants have proven invaluable in many diverse fields. However, platforms developed within academic settings eventually tend to become abandoned and obsolete, and newer tools are susceptible to a similar fate. We believe this is because, although most of the tools aim to satisfy the same functional requirements, little attention has been paid to keeping their development models aligned as well. In this paper we propose an architectural model which satisfies established requirements and also promotes extensibility, interoperability and cross-platform functionality between tools. In doing so, we aim to introduce development considerations into the larger discussion on the design of such platforms.

Health

Fear therapy for children: a mobile approach BIBAFull-Text 237-246
  Marco de Sá; Luís Carriço
Mobile devices have proven to be useful tools for supporting various procedures and therapy approaches for different purposes. However, when applied to children, particular care has to be taken, considering both their abilities and their acceptance of the approaches used. In this paper we present mobile applications, designed specifically for children and young patients, that aim at supporting fear therapy procedures. The software was developed following a user-centered design approach and offers users an intuitive, metaphor-based interaction paradigm that overcomes the limitations of its paper-based counterpart. We describe the design process, the software and the results obtained during an exploratory trial study.
Using ontologies to reason about the usability of interactive medical devices in multiple situations of use BIBAFull-Text 247-256
  Judy Bowen; Annika Hinze
Formally modelling interactive software systems and devices allows us to prove correctness properties of such devices, and thus ensure the effectiveness of their use. It also enables us to consider interaction properties such as usability and consistency between the interface and system functionality. Interactive modal devices, which have a fixed interface but whose behaviour depends on the mode of the device, can be modelled similarly. Such devices always behave in the same way (i.e. have the same functionality and interaction possibilities) irrespective of how, or where, they are used. However, a user's interaction with such devices may vary according to the physical location or environment in which they are situated (we refer to this as a system's context and usage situation). In this paper we look at a particular example of a safety-critical system, a modal interactive medical syringe pump, which is used in multiple situations. We consider how ontologies can be used to reason about the effects of different situations on the use of such devices.

Demonstrations

GAMBIT: Addressing multi-platform collaborative sketching with html5 BIBAFull-Text 257-262
  Ugo Sangiorgi; Jean Vanderdonckt
Prototypes are essential tools for design activities, for they allow designers to realize and evaluate ideas in the early stages of development. Sketching is a primary tool for constructing prototypes of interactive systems and has long been used for developing low-fidelity prototypes. Computational support for sketching has seen recurring interest over the last 45 years, and again nowadays within the mobile web context, where diverse devices have to be considered.
   The research reported in this paper aims at addressing issues in multi-platform collaborative sketching using a prototyping tool for user interfaces. The tool was built to aid the investigation of how designers sketch using many different devices and collaborate using their sketches during design sessions.
UsiComp: an extensible model-driven composer BIBAFull-Text 263-268
  Alfonso García Frey; Eric Céret; Sophie Dupuy-Chessa; Gaëlle Calvary; Yoann Gabillon
Modern User Interfaces need to dynamically adapt to their context of use, i.e. mainly to changes that occur in the environment or in the platform. Model-Driven Engineering offers powerful solutions for handling the design and implementation of such UIs. However, this approach requires the creation of a substantial number of models and transformations, each of them in turn requiring specific knowledge and competencies. This leads to the need for a suitable tool sustaining the designers' work. This paper introduces UsiComp, an integrated and open framework that allows designers to create models and modify them at design time as well as at runtime. UsiComp relies on a service-based architecture. It offers two modules, for design and execution. The implementation is based on OSGi services, offering dynamic possibilities for using and extending the tool. This paper describes the architecture and shows the extension capabilities of the framework through two running examples.
The design and architecture of ReticularSpaces: an activity-based computing framework for distributed and collaborative smartspaces BIBAFull-Text 269-274
  Jakob Bardram; Steven Houben; Søren Nielsen; Sofiane Gueddana
Interactive workspaces are increasingly physically distributed, highlighting the challenge of building interfaces that support group interaction with digital documents through multiple locations and devices. This paper presents the technical implementation and user interface of ReticularSpaces. Based on the concepts and principles of Activity-Based Computing (ABC), ReticularSpaces implements a novel approach to smart space user interfaces, and supports task-based information management, mobility, and collaboration.

Poster Session

Software architecture for interactive robot teleoperation BIBAFull-Text 275-280
  Nader Cheaib; Mouna Essabbah; Christophe Domingues; Samir Otmane
In this paper, we present a software architecture for interactive and collaborative underwater robot teleoperation. This work is in the context of the Digital Ocean Europe project, which aims at digitizing seafloor sites in 3D imagery using underwater robots (ROVs) and uses this information to create interactive, virtually animated environments distributed online. The work presented in this paper concerns the software architecture of the interactive system for collaboratively teleoperating the robot, using two types of interfaces: 1) an intuitive web interface and 2) a Virtual Reality (VR) platform. The particularity of our system is the separation of the system's functional core from its interfaces, which enables greater flexibility in teleoperating the robot. We discuss the conceptual software architecture as well as the implementation of the system's interfaces.
A transformation engine for model-driven UI generation BIBAFull-Text 281-286
  Roman Popp; Jürgen Falb; David Raneburger; Hermann Kaindl
Current engines for model-driven transformations do not sufficiently support the specifics of automated user interface (UI) generation. To achieve better (graphical) UIs in more specific situations (e.g., a specific small device, or a specific button), more specific rules should be supported, without having to discard the more general ones. To enable optimization (e.g., for small devices), comparing alternatives is mandatory. We therefore present a new, implemented transformation engine for declarative rules specifically designed for model-driven UI generation and optimization, as well as its application to various devices and applications. This engine is the basis for the advanced UI generation presented previously, including its results for automatically optimized UIs for smartphones.
A formal specification for Casanova, a language for computer games BIBAFull-Text 287-292
  Giuseppe Maggiore; Alvise Spanò; Renzo Orsini; Michele Bugliesi; Mohamed Abbadi; Enrico Steffinlongo
In this paper we present the specification and a preliminary assessment of Casanova, a newly designed computer language that integrates knowledge about many areas of game development with the aim of simplifying the process of engineering a game. Casanova is designed as a fully-fledged language and as an extension language to F#, but also as a pervasive design pattern for game development.
Tag-exercise creator: towards end-user development for tangible interaction in rehabilitation training BIBAFull-Text 293-298
  Ananda Hochstenbach-Waelen; Annick Timmermans; Henk Seelen; Daniel Tetteroo; Panos Markopoulos
Tangible and embodied interactive technology (TEIT) consists of tightly coupled physical devices and software, which is less the case with mainstream platforms such as personal computers and smartphones. Currently, TEIT is manufactured by small and medium-sized niche technology providers, for whom application-domain-specific development can represent an excessive threshold. End-user development (EUD) by domain specialists emerges as an avenue to mitigate this issue. This research has set out to enable therapists to create solutions for rehabilitation training through the development of the Tag-Exercise Creator (TEC). This paper motivates the use of tangible interactive systems for this problem domain, and describes the design, implementation and initial evaluation of TEC. Our study indicates that tools like TEC can enable domain experts to perform EUD tasks and create training content. Improvements and extensions to TEC are under way to enable a field trial of the system in which the feasibility of EUD as a professional practice will be evaluated.
User interface master detail pattern on Android BIBAFull-Text 299-304
  Thanh-Diane Nguyen; Jean Vanderdonckt
The purpose of this work is to understand some existing user interface patterns and to adapt them to the constraints of mobile devices running the Android system. We focus mainly on the Master/Detail pattern and its surrounding patterns. The contributions are multiple: our background study briefly recalls the principles of some existing user interface patterns. Based on this, we provide an adapted version of each pattern targeted at mobile phones through a framework called MandroiD. We also present a basic case study application that takes advantage of the framework. This application is developed with the Android guidelines in mind; indeed, one of our goals is to provide the reader with some knowledge about Android application development. The limitations of mobile devices in general (e.g., the small screen) require "reducing" homogeneous elements; MandroiD overcomes these constraints. A statistical analysis is conducted on the developed mini-application, and its evaluation shows general satisfaction with the ergonomics of the application among various users.

Doctoral consortium

A pattern-based approach to support the design of multi-platform user interfaces of information systems BIBAFull-Text 305-308
  Thanh-Diane Nguyen
This PhD thesis focuses on a pattern-based approach for designing multi-platform user interfaces. The pattern approach is applied to the complete user interface (UI) development process. UI patterns can be used to improve usability and the development life cycle. To achieve good software development quality, UI patterns related to the ergonomic context can be used to unify the models supporting the UI development process.
   UI patterns of the OO-Method are introduced into the whole model-driven UI process in order to obtain different UIs at the Final User Interface (FUI) level, including specific platforms. By using different patterns on other devices, the thesis analyses the derivation of up-to-date UIs through the application of the ergonomic guide that was built and of the extended patterns. A comparative study of these different FUIs, built in different contexts, is necessary to show how difficult it is to adapt the different patterns to a variety of platforms.
Addressing multi-platform collaborative sketching BIBAFull-Text 309-312
  Ugo Sangiorgi
Prototypes are essential tools for design activities, for they allow designers to realize and evaluate ideas in the early stages of development. Sketching is a primary tool for constructing prototypes of interactive systems and has long been used for developing low-fidelity prototypes. Computational support for sketching has seen recurring interest over the last 45 years, and again nowadays within the mobile web context, where diverse devices have to be considered.
   The research reported in this paper aims at addressing issues in multi-platform collaborative sketching using a prototyping tool for user interfaces. The tool was built to aid the investigation of how designers sketch using many different devices and collaborate using their sketches during design sessions.
Industrial playgrounds: how gamification helps to enrich work for elderly or impaired persons in production BIBAFull-Text 313-316
  Oliver Korn
This paper introduces an approach for transferring motivating mechanics from game design to production environments by integrating them into a new kind of computer-based assistive system. This process can be called "gamification". By using motion recognition, the work processes become transparent and can be visualized in real time. This makes it possible to represent them as bricks in a "production game" resembling the classic game Tetris. The aim is to achieve and sustain a mental state called "flow", resulting in increased motivation and better performance. Although the approach presented here primarily focuses on elderly and impaired workers, the enhanced assistive system or "wizard" can in principle enrich work in any production environment.
Differential formal analysis: evaluating safer 5-key number entry user interface designs BIBAFull-Text 317-320
  Abigail Cauchi
Differential Formal Analysis (DFA) is an evaluation method based on stochastic simulation for evaluating safety-critical user interfaces with subtle programming differences. The method enforces rigorous science by requiring two or more researchers to perform the analysis, which in itself raises important issues for discussion. The method is demonstrated through a case study on 5-key number entry systems, a safety-critical interface found in various popular commercial medical infusion pumps. The results of the case study are an important contribution of this paper, since they provide device manufacturers with guidelines for updating their device firmware to make their 5-key number entry UIs safer, as well as a method that could be applied to other designs.
Integrating usability engineering in the software development lifecycle based on international standards BIBAFull-Text 321-324
  Holger Fischer
The integration of usability activities into software development lifecycles still remains a challenge. Most existing integration approaches operate on a purely operational level and cannot be transferred to other processes. Furthermore, usability engineering (UE) standards and methods are hardly applied. How can organizations be supported in understanding and using this existing knowledge? The approach in this paper focuses on the constellation of standards needed to integrate UE and software engineering (SE). To this end, current development processes and standards are analyzed and discussed in order to formulate recommendations for activities. In this manner, a toolset will be established to support the selection of suitable methods, the documentation and communication of intermediary results, as well as the definition of competencies.
Reverse engineering of GWT applications BIBAFull-Text 325-328
  Carlos Eduardo Silva
Web applications have gained significant popularity. Relevant technologies, however, are to a great extent still immature and in constant evolution. This means many current applications are subject to constant change to keep up with the technology, leading to a degradation of application quality, both from an implementation and a usage perspective.
   In this context, tools that enable reasoning about the quality of the application from its source code can have a significant role. This paper reports on our preliminary work on reverse engineering the user interface layer of web applications directly from source code. Its applicability to GWT is described through two examples.
Towards safer number entry in interactive medical systems BIBAFull-Text 329-332
  Patrick Oladimeji
Number entry is prevalent in the use of many interactive medical systems and number entry interfaces vary in complexity. Currently, research on number entry is focused on the numeric keypad and its different layouts. There are alternatives to the numeric keypad in use in safety critical contexts such as hospitals. I have surveyed several number entry systems and propose properties that would help compare them. My research on this topic aims to understand the characteristics of the styles of these interfaces, focusing on their effects on number entry error, the severity of such errors and exploring possible design choices that can reduce or manage the errors. This research will uncover number entry interface design trade-offs that will help designers make informed decisions about the safety and dependability of number entry systems.

Tutorial

Creative and open software engineering practices and tools in maker community projects BIBAFull-Text 333-334
  Konstantinos Chorianopoulos; Letizia Jaccheri; Alexander Salveson Nossum
Processing, Arduino, and the growth of the associated communities of practice, also called maker communities, have motivated a broader participation of non-technical users in the engineering of interactive systems. Besides sharing online, maker communities meet regularly and share knowledge for various purposes (e.g., creative hacking, social networking, lifelong learning). In the context of maker communities, the understanding of engineering interactive systems (e.g., motivations, objectives, collaboration, process, reports) and the design of the respective tools (e.g., end-user programming for artists or children) are not well documented. As a remedy, we present a coherent overview of related work, as well as our own experiences in organizing and running maker workshops. The tutorial format (lecture and hands-on workshop) provides both practitioners and researchers with an understanding of creative software tools and practices. Moreover, participants become familiar with the organization of maker workshops as 1) a research method for understanding users, 2) an engineering process for interactive computer systems, and 3) a practice for teaching and learning.

Workshop

Model-based interactive ubiquitous systems BIBAFull-Text 335-336
  Thomas Schlegel; Stefan Pietschmann; Romina Kühn
Ubiquitous systems today are introducing a new quality of interaction both into our lives and into software engineering. Systems are becoming increasingly dynamic, making frequent changes to system structures, distribution, and behavior necessary. In addition, adaptation to new user needs and contexts, as well as new modalities and communication channels, makes these systems differ strongly from what has been standard in recent decades.
   Models and model-based interaction at runtime and design time form a promising approach for coping with the dynamics and uncertainties inherent to interactive ubiquitous systems (IUS). Hence, this workshop discusses how model-based approaches can be used to cope with the challenges of IUS. It covers the range from design-time to runtime models and from interaction to software engineering, addressing the challenges of interacting with and engineering interactive ubiquitous systems.
   Building on the results of MODIQUITOUS 2011 at EICS 2011, MODIQUITOUS 2012 aims at strengthening the community and allowing for deeper discussions, demonstrations, and the inclusion of new developments in ubiquitous systems research.