
Engineering for Human-Computer Interaction 2001 (2001-05-11)

  1. Keynote Speakers
  2. Software Engineering Methods
  3. Formal Methods
  4. Toolkits
  5. User Interface Evaluation
  6. User Interface Plasticity
  7. 3D User Interfaces
  8. Input and Output Devices
  9. Mobile Interaction
  10. Context Sensitive Interaction

Keynote Speakers

Aura: Distraction-Free Ubiquitous Computing (p. 1)
  David Garlan
Technological trends are leading to a world in which computing is all around us -- in our cars, our kitchens, our offices, our phones, and even our clothes. In this world we can expect to see an explosion of computational devices, services, and information at our disposal. While this is an undeniable opportunity, currently we are ill-prepared to deal with its implications.
Supporting Casual Interaction Between Intimate Collaborators (p. 3)
  Saul Greenberg
Over the last decade, we have seen mounting interest in how groupware technology can support electronic interaction between intimate collaborators who are separated by time and distance. By intimate collaborators I mean small communities of friends, family or colleagues who have a real need or desire to stay in touch with one another. While there are many ways to provide electronic interaction, perhaps the most promising approach relies on casual interaction. The general idea is that members of a distributed community track when others are available, and use that awareness to move into conversation, social interaction and work. On the popular front, we see support for casual interaction manifested through the explosion of instant messaging services: a person sees friends and their on-line status in a buddy list, and selectively enters into a chat dialog with one or more of them. On the research front, my group members and I are exploring the subtler nuances of casual interaction. We design, build and evaluate various groupware prototypes [1],[2],[3],[4] and use them as case studies to investigate:
  • how we can enrich on-line opportunities for casual interaction by providing
       people with a rich sense of awareness of their intimate collaborators;
  • how we can supply awareness of people's artifacts so that these can also
       become entry points into interaction;
  • how we can present awareness information at the periphery, where it becomes
       part of the background hum of activity that people can then selectively
       attend to;
  • how we can create fluid interfaces where people can seamlessly and quickly
       act on this awareness and move into conversation and actual work;
  • how we can have others overhear and join ongoing conversations; and
  • how we can make these same opportunities work for a mix of co-located and
       distributed collaborators; and
  • how we balance distraction and privacy concerns while still achieving the
       goals above.

Turning the Art of Interface Design into Engineering (p. 5)
  Jef Raskin
The name of this conference begins with the word "engineering," a skill that I've seen little of in the world of commercial interface design. Here's what I mean by "engineering":
   I don't know if my background is typical, but I've enjoyed designing aircraft for some years now. As a child, I built model airplanes, some of which flew. Of necessity, they had many adjustable parts. I could move the wing forward and aft in a slot or with rubber bands, add bits of clay to the nose to adjust the balance, and I'd glue small aluminum tabs to wings and tail surfaces and bend them to correct the flight path.
   Once I had gotten past calculus and some college physics, I began to study aerodynamic and mechanical engineering more seriously. I remember with considerable pleasure the first time I was able to design a model (radio-controlled by this time) based on knowledge sound and deep enough so that I knew that -- barring accidents -- the aircraft would fly, and even how it would fly. I opened the throttle, the plane chugged along the runway and rose into the air, exactly as predicted. It flew and maneuvered, and I brought it back to a gentle landing.
   That's engineering: the ability to design from rational foundations, numerically predict performance, and have the result work much as expected without all kinds of ad hoc adjustments.
   This is not what I see in interface design. Working in the practical world of the interface designers who produce the commercial products used by millions, even hundreds of millions of people, I find that most of the practitioners have no knowledge of existing engineering methods for designing human-computer interfaces (for example, Fitts' and Hick's laws, GOMS analyses, and measures of interface efficiency). The multiple stupidities of even the latest designs, such as Microsoft's Windows 2000 or Apple's OS X, show either an unjustifiable ignorance of or a near-criminal avoidance of what we do know.
   My talk will look at some of the true engineering techniques available to us, and where HCI can go if we allow these techniques -- rather than inertia, custom, guesswork, and fear -- to guide us.
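The predictive laws named above can be applied quantitatively. A minimal sketch of Fitts' and Hick's laws, where the intercept and slope constants are illustrative placeholders (real values must be fitted to a particular device and user population, and are not taken from the talk):

```python
import math

def fitts_mt(d, w, a=0.05, b=0.15):
    """Predicted movement time (s) to a target of width w at distance d,
    using the Shannon formulation of Fitts' law: MT = a + b * log2(d/w + 1).
    The constants a and b here are illustrative, not measured."""
    return a + b * math.log2(d / w + 1)

def hick_rt(n, b=0.2):
    """Predicted choice-reaction time (s) among n equally likely options,
    per Hick's law: RT = b * log2(n + 1)."""
    return b * math.log2(n + 1)

# A wider target at the same distance is predicted to be faster to acquire,
# and a menu with fewer choices is predicted to be faster to decide among:
assert fitts_mt(d=256, w=64) < fitts_mt(d=256, w=16)
assert hick_rt(4) < hick_rt(16)
```

Even this crude model supports the kind of numerical prediction Raskin describes: two candidate layouts can be compared before either is built.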

Software Engineering Methods

Towards a UML for Interactive Systems (pp. 7-18)
  Fabio Paternò
Nowadays, UML is the most successful model-based approach to supporting software development. However, during the evolution of UML little attention has been paid to supporting user interface design and development. In the meantime, the user interface has become a crucial part of most software projects, and the use of models to capture requirements and express solutions for its design, a true necessity. Within the community of researchers investigating model-based approaches for interactive applications, particular attention has been paid to task models. ConcurTaskTrees is one of the most widely used notations for task modelling. This paper discusses a solution for obtaining a UML for interactive systems based on the integration of the two approaches and why this is a desirable goal.
An Interdisciplinary Approach for Successfully Integrating Human-Centered Design Methods into Development Processes Practiced by Industrial Software Development Organizations (pp. 19-33)
  Eduard Metzker; Michael Offergeld
In a world where competitors are just a mouse-click away, human-centered design (HCD) methods change from a last-minute add-on to a vital part of the software development lifecycle. However, case studies indicate that existing process models for HCD are not prepared to cope with the organizational obstacles typically encountered during the introduction and establishment of HCD methods in industrial software development organizations. Knowledge about exactly how to most efficiently and smoothly integrate HCD methods into development processes practiced by software development organizations is still not available. To bridge this gap, we present the experience-based human-centered design lifecycle, an interdisciplinary effort of experts in the fields of software engineering, human-computer interaction, and process improvement. Our approach aims at supporting the introduction, establishment and continuous improvement of HCD processes in software development organizations. The approach comprises a process model, tools, and organizational measures that promote the utilization of HCD methods in otherwise technology-centered development processes and facilitate organizational learning in HCD. We present results of a case study in which our approach was successfully applied in a major industrial software development project.
From Usage Scenarios to Widget Classes (pp. 35-36)
  Hermann Kaindl; Rudolf Jezek
In practice, designers often select user interface elements like widgets intuitively. So, important design decisions may never become conscious or explicit, and therefore also not traceable. We addressed the problem of systematically selecting widgets for a GUI that will be built from those building blocks.
   Our approach is based upon task analysis and scenario-based design, assuming that envisaged usage scenarios of reasonable quality are already available. Starting from them, we propose a systematic process for selecting user interface elements (in the form of widgets) in a few explicitly defined steps. This process provides a seamless way of going from scenarios through (attached) subtask definitions and various task classifications and (de)compositions to widget classes. In this way, it makes an important part of user interface design more systematic and conscious.
   More precisely, we propose to explicitly assign subtask descriptions to the interactions documented in such a scenario, i.e., to the steps of both users and the proposed system to be built. By combining those subtasks that together make up an interaction, interaction tasks are identified. For these, the right granularity needs to be found, which may require task composition or decomposition. The resulting interaction tasks can be classified according to the kind of interaction they require. From this classification, it is possible to map the interaction tasks to a class hierarchy of widgets. Up to this point, our process description is seamless, while the subsequent selection of a concrete widget is not within the focus of this work.
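The final mapping step described above can be pictured as a lookup from classified interaction tasks to widget classes. The task categories and widget class names below are hypothetical illustrations of the idea, not the paper's actual taxonomy:

```python
# Hypothetical task categories mapped to hypothetical widget classes;
# this only illustrates the idea of a systematic, explicit mapping from
# classified interaction tasks to a widget class hierarchy.
WIDGET_FOR_TASK = {
    "select-one-of-few":  "RadioButtonGroup",
    "select-one-of-many": "DropDownList",
    "select-many":        "CheckBoxGroup",
    "enter-short-text":   "TextField",
    "enter-long-text":    "TextArea",
    "trigger-action":     "Button",
}

def widget_class(task_kind):
    """Look up the widget class for a classified interaction task."""
    return WIDGET_FOR_TASK[task_kind]

assert widget_class("trigger-action") == "Button"
```

The point of such a table is that the design decision is recorded explicitly and is therefore traceable, rather than made intuitively per widget.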
Evaluating Software Architectures for Usability (pp. 37-38)
  Len Bass; Bonnie E. John
For the last twenty years, techniques to design software architectures for interactive systems that support usability have been a concern of both researchers and practitioners. Recently, in the context of performing architecture evaluations, we were reminded that the techniques developed thus far are of limited utility when evaluating the usability of a system based on its architecture. Techniques for supporting usability have historically focused on selecting the correct overall system structure. Proponents of these techniques argue that their structure retains the modifiability needed during an iterative design process while still providing the required support for performance and other functionality.
   We are taking a different approach. We are preparing a collection of connections between specific aspects of usability (such as the ability for a user to "undo" or "cancel") and their implications for software architecture. Our vision sees designers using this collection both to generate solutions to those aspects of usability they have chosen to include and to evaluate their system designs for specific aspects of usability. Our contribution is a specific coupling between aspects of usability and their corresponding architecture. We do not attempt to designate one software architecture to satisfy all aspects of usability. Details can be found in [1].
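One such usability-architecture connection can be sketched concretely: supporting "undo" implies an architecture that reifies user actions as reversible commands with a history. The following is a generic command-history sketch of that implication, not the authors' catalogue from [1]:

```python
# Generic command-history structure: each user action is an object that
# knows how to apply and reverse itself, so "undo" becomes an architectural
# capability rather than a per-feature afterthought.
class AppendText:
    def __init__(self, text):
        self.text = text
    def do(self, doc):
        doc.append(self.text)
    def undo(self, doc):
        doc.pop()

doc, history = [], []

def execute(cmd):
    cmd.do(doc)          # perform the action on the document state
    history.append(cmd)  # record it so it can later be reversed

def undo_last():
    history.pop().undo(doc)

execute(AppendText("hello"))
execute(AppendText("world"))
undo_last()
assert doc == ["hello"]
```

Retrofitting this structure into a system that mutates state directly is costly, which is precisely why such usability aspects need to be evaluated at the architecture stage.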

Formal Methods

Interactive System Safety and Usability Enforced with the Development Process (pp. 39-55)
  Francis Jambon; Patrick Girard; Yamine Aït-Ameur
This paper introduces a new technique for the verification of both safety and usability requirements for safety-critical interactive systems. This technique uses the model-oriented formal method B and makes use of a hybrid version of the MVC and PAC software architecture models. Our claim is that this technique -- which uses proof obligations -- can ensure both usability and safety requirements, from the specification step of the development process to the implementation. This technique is illustrated by a case study: a simplified user interface for a Full Authority Digital Engine Control (FADEC) of a single turbojet engine aircraft.
Detecting Multiple Classes of User Errors (pp. 57-71)
  Paul Curzon; Ann Blandford
Systematic user errors commonly occur in the use of interactive systems. We describe a formal reusable user model implemented in higher-order logic that can be used for machine-assisted reasoning about user errors. The core of this model is a series of non-deterministic guarded temporal rules. We consider how this approach allows errors of various specific kinds to be detected and so avoided by proving a single theorem about an interactive system. We illustrate the approach using a simple case study.

Toolkits

Exploring New Uses of Video with VideoSpace (pp. 73-90)
  Nicolas Roussel
This paper describes videoSpace, a software toolkit designed to facilitate the integration of image streams into existing or new documents and applications to support new forms of human-computer interaction and collaborative activities. In this perspective, videoSpace is not focused on performance or reliability issues, but rather on the ability to support rapid prototyping and incremental development of video applications. The toolkit is described in extensive detail, showing the architecture and functionalities of its class library and basic tools. Several projects developed with videoSpace are also presented, illustrating its potential and the new uses of video it will allow in the future.
Prototyping Pre-implementation Designs of Virtual Environment Behaviour (pp. 91-108)
  James S. Willans; Michael D. Harrison
Virtual environments lack a standardised interface between the user and the application; this makes it possible for the interface to be highly customised for the demands of individual applications. However, it requires a development process in which the interface can be carefully designed to meet the requirements of an application. In practice, an ad-hoc development process is used which is heavily reliant on a developer's craft skills. A number of formalisms have been developed to address the problem of establishing the behavioural requirements by supporting their design prior to implementation. We have developed the Marigold toolset, which provides a transition from one such formalism, Flownets, to a prototype implementation. In this paper we demonstrate the use of the Marigold toolset for prototyping a small environment.
QTk -- A Mixed Declarative/Procedural Approach for Designing Executable User Interfaces (pp. 109-110)
  Donatien Grolaux; Peter Van Roy; Jean Vanderdonckt
When designing executable user interfaces, it is often advantageous to use declarative and procedural approaches together, each when most appropriate:
  • A declarative approach can be used to define widget types, their initial states, their resize behavior, and how they are nested to form each window. All this information can be represented as a data structure. For example, widgets can be records and the window structure is then simply a nested record.
  • A procedural approach can be used when its expressive power is needed, i.e., to define most of the UI's dynamic behavior. For example, UI events trigger calls to action procedures and the application can change widget state by invoking handler objects. Both action procedures and handler objects can be embedded in the data structures used by the declarative approach.
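QTk itself is built on the Oz/Mozart platform; purely as a language-neutral illustration, the mixed style can be mimicked with a nested data structure that embeds callables (every name below is invented, not QTk's API):

```python
pressed = []

def on_ok():  # procedural part: behavior as an ordinary procedure
    pressed.append("OK")

# Declarative part: the whole window is a plain nested data structure,
# with the action procedure embedded where the widget is declared.
window = {
    "type": "window", "title": "demo",
    "children": [
        {"type": "label", "text": "Hello"},
        {"type": "button", "text": "OK", "action": on_ok},
    ],
}

def fire(widget):
    """Dispatch a UI event to the handler embedded in the declarative spec."""
    widget["action"]()

fire(window["children"][1])   # simulate the user pressing the OK button
assert pressed == ["OK"]
```

The window structure can be inspected, stored or generated like any other data, while the embedded procedures carry the dynamic behavior.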

User Interface Evaluation

Consistency in Augmented Reality Systems (pp. 111-122)
  Emmanuel Dubois; Laurence Nigay; Jocelyne Troccaz
Systems combining the real and the virtual are becoming more and more prevalent. The Augmented Reality (AR) paradigm illustrates this trend. In comparison with traditional interactive systems, such AR systems involve both real entities and virtual ones, and the duality of the two types of entities involved in the interaction has to be studied during design. We therefore present the ASUR notation: the ASUR description of a system adopts a task-centered point of view and highlights the links between the real world and the virtual world. Based on the characteristics of the ASUR components and relations, predictive usability analysis can be performed by considering the ergonomic property of consistency. We illustrate this analysis on the redesign of a computer-assisted surgical application, CASPER.
Heuristic Evaluation of Groupware Based on the Mechanics of Collaboration (pp. 123-139)
  Kevin Baker; Saul Greenberg; Carl Gutwin
Despite the increasing availability of groupware, most systems are awkward and not widely used. While there are many reasons for this, a significant problem is that groupware is difficult to evaluate. In particular, there are no discount usability evaluation methodologies that can discover problems specific to teamwork. In this paper, we describe how we adapted Nielsen's heuristic evaluation methodology, designed originally for single-user applications, to help inspectors rapidly, cheaply, and effectively identify usability problems within groupware systems. Specifically, we take the 'mechanics of collaboration' framework and restate it as heuristics for the purposes of discovering problems in shared visual work surfaces for distance-separated groups.
An Organizational Learning Method for Applying Usability Guidelines and Patterns (pp. 141-155)
  Scott Henninger
As usability knowledge and techniques continue to grow, there is an increasing need for tools that disseminate the accumulated wisdom of the field. Usability guidelines are one technique used to convey usability knowledge. Another is the emerging discipline of usability patterns. This paper presents an approach that combines these techniques in a case-based architecture and utilizes a process to help an organization capture, adapt, and refine usability resources from project experiences. The approach utilizes a rule-based tool to represent the circumstances under which a given usability resource is applicable. Characteristics of the application under development are captured and used to match usability resources to the project, where they can be used to drive the design process. Design reviews are used to capture feedback and ensure that the repository remains a vital knowledge source for producing useful and usable software systems.

User Interface Plasticity

Pervasive Application Development and the WYSIWYG Pitfall (pp. 157-172)
  Lawrence D. Bergman; Tatiana Kichkaylo; Guruduth Banavar; Jeremy Sussman
Development of application front-ends that are designed for deployment on multiple devices requires facilities for specifying device-independent semantics. This paper focuses on the user-interface requirements for specifying device-independent layout constraints. We describe a device-independent application model and detail a set of high-level constraints that support automated layout on a wide variety of target platforms. We then focus on the problems that are inherent in any single-view direct-manipulation WYSIWYG interface for specifying such constraints. We propose a two-view interface designed to address those problems, and discuss how this interface effectively meets the requirements of abstract specification for pervasive applications.
A Unifying Reference Framework for the Development of Plastic User Interfaces (pp. 173-192)
  Gaëlle Calvary; Joëlle Coutaz; David Thevenin
The increasing proliferation of computational devices has introduced the need for applications to run on multiple platforms in different physical environments. Providing a user interface specially crafted for each context of use is extremely costly and may result in inconsistent behavior. User interfaces must now be capable of adapting to multiple sources of variation. This paper presents a unifying framework that structures the development process of plastic user interfaces. A plastic user interface is capable of adapting to variations of the context of use while preserving usability. The reference framework has guided the design of ARTStudio, a model-based tool that supports the plastic development of user interfaces. Both the framework and ARTStudio are illustrated with a common running example: a home heating control system.

3D User Interfaces

Building User-Controlled 3D Models and Animations for Inherently-3D Construction Tasks: Which Tool, Which Representation? (pp. 193-206)
  Guy Zimmerman; Julie Barnes; Laura Leventhal
In this paper, we first define a class of problems that we have dubbed inherently-3D, which we believe should lend themselves to solutions that include user-controlled 3D models and animations. We next give a comparative discussion of two tools that we used to create presentations: Cosmo Worlds and Flash. The presentations included text, pictures, and user-controlled 3D models or animations. We evaluated the two tools along two dimensions: 1) how well the tools support presentation development and 2) the effectiveness of the resultant presentations. From the first evaluation, we concluded that Flash in its current form was the more complete development environment; for a developer to integrate VRML into cohesive presentations required a more comprehensive development environment than is currently available with Cosmo Worlds. From our second evaluation, based on our usability study, we drew two conclusions. First, our users were quite successful in completing the inherently-3D construction task, regardless of which presentation (Shockwave or VRML) they saw. Second, we found that enhancing the VRML models and including multiple perspectives in Shockwave animations were equally effective at reducing errors as compared to a more primitive VRML presentation. Based on our results, we believe that for tasks of the 3D complexity that we used, Flash is the clear choice: Flash was easier to use to develop the presentations, and the presentation was as effective as the model that we built with Cosmo Worlds and Java. Finally, we postulate a relationship between inherently-3D task complexity and the relative effectiveness of the VRML presentation.
Unconstrained vs. Constrained 3D Scene Manipulation (pp. 207-219)
  Tim Salzman; Szymon Stachniak; Wolfgang Stürzlinger
Content creation for computer graphics applications is a very time-consuming process that requires skilled personnel. Many people find the manipulation of 3D objects with 2D input devices non-intuitive and difficult. We present a system that restricts the motion of objects in a 3D scene with constraints. In this publication we discuss an experiment that compares two different 3D manipulation interfaces via 2D input devices. The results show clearly that the new constraint-based interface performs significantly better than previous work.

Input and Output Devices

Toward Natural Gesture/Speech Control of a Large Display (pp. 221-234)
  Sanshzar Kettebekov; Rajeev Sharma
In recent years, because of advances in computer vision research, free-hand gestures have been explored as a means of human-computer interaction (HCI). Together with improved speech processing technology, this is an important step toward natural multimodal HCI. However, the inclusion of non-predefined continuous gestures in a multimodal framework is a challenging problem. In this paper, we propose a structured approach for studying patterns of multimodal language in the context of 2D display control. We consider systematic analysis of gestures from observable kinematical primitives to their semantics as pertinent to a linguistic structure. The proposed semantic classification of co-verbal gestures distinguishes six categories based on their spatio-temporal deixis. We discuss the evolution of a computational framework for gesture and speech integration which was used to develop an interactive testbed (iMAP). The testbed enabled elicitation of adequate, non-sequential, multimodal patterns in a narrative mode of HCI. The user studies conducted illustrate the significance of accounting for the temporal alignment of gesture and speech parts in semantic mapping. Furthermore, co-occurrence analysis of gesture/speech production suggests syntactic organization of gestures at the lexical level.
An Evaluation of Two Input Devices for Remote Pointing (pp. 235-250)
  I. Scott MacKenzie; Shaidah Jusoh
Remote pointing is an interaction style for presentation systems, interactive TV, and other systems where the user is positioned an appreciable distance from the display. A variety of technologies and interaction techniques exist for remote pointing. This paper presents an empirical evaluation and comparison of two remote pointing devices, with a standard mouse used as a baseline condition. Using the ISO metric throughput (calculated from users' speed and accuracy in completing tasks) as the criterion, the two remote pointing devices performed poorly, demonstrating 32% and 65% worse performance than the mouse. Qualitatively, users indicated a strong preference for the mouse over the remote pointing devices. Implications for the design of present and future systems for remote pointing are discussed.
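The throughput criterion used in the study folds speed and accuracy into a single figure. A sketch of the usual ISO 9241-9 style calculation, with invented trial data for illustration:

```python
import math
import statistics

def throughput(distance, movement_times, endpoints):
    """Throughput (bits/s) in the ISO 9241-9 style: the effective index of
    difficulty divided by the mean movement time.  The effective target
    width W_e = 4.133 * SD of the observed selection endpoints, so that
    endpoint scatter (accuracy) feeds into the measure."""
    w_e = 4.133 * statistics.stdev(endpoints)
    id_e = math.log2(distance / w_e + 1)           # effective ID, in bits
    return id_e / statistics.mean(movement_times)  # bits per second

# Invented trial data: four selections toward a target 200 px away.
mts = [0.61, 0.55, 0.58, 0.64]          # movement times in seconds
ends = [198.0, 203.5, 201.0, 197.5]     # selection endpoint coordinates
tp = throughput(200, mts, ends)         # roughly 7 bits/s for these numbers
```

Because both slower movement and wider endpoint scatter lower the figure, a device cannot score well by trading one off against the other.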
Does Multi-modal Feedback Help in Everyday Computing Tasks? (pp. 251-262)
  Carolyn MacGregor; Alice Thomas
A study was conducted to investigate the effects of auditory and haptic feedback for a "point and select" computing task at two levels of cognitive workload. Participants were assigned to one of three computer-mouse haptic feedback groups (regular non-haptic mouse, haptic mouse with kinesthetic feedback, and haptic mouse with kinesthetic and force feedback). Each group received two auditory feedback conditions (sound on, sound off) for each of the cognitive workload conditions (single task or dual task). Even though auditory feedback did not significantly improve task performance, all groups rated the sound-on conditions as requiring less work than the sound-off conditions. Similarly, participants believed that kinesthetic feedback improved their detection of errors, even though mouse feedback did not produce significant differences in performance. Implications for adding multi-modal feedback to computer-based tasks are discussed.

Mobile Interaction

Information Sharing with Handheld Appliances (pp. 263-279)
  Jörg Roth
Handheld appliances such as PDAs, organisers or electronic pens are currently very popular; they are used to enter and retrieve useful information, e.g., dates, to-do lists, memos and addresses. They are viewed as stand-alone devices and are usually not connected to other handhelds, so sharing data between two handhelds is very difficult. Rudimentary infrastructures exist for exchanging data between handhelds, but they have not been designed for seamless integration into handheld applications. Handheld devices are fundamentally different from desktop computers, a fact that leads to a number of issues. In this paper, we first analyse the specific characteristics of handheld devices, the corresponding applications and how users interact with handhelds. We identify three basic requirements: privacy, awareness and usability. Based on these considerations, we present our own approach.
Dynamic Links for Mobile Connected Context-Sensitive Systems (pp. 281-297)
  Philip Gray; Meurig Sage
The current generation of mobile context-aware applications must respond to a complex collection of changes in the state of the system and in its usage environment. We argue that dynamic links, as used in user interface software for many years, can be extended to support the change-sensitivity necessary for such systems. We describe an implementation of dynamic links in the Paraglide Anaesthetist's Clinical Assistant, a mobile context-aware system that helps anaesthetists perform pre- and post-operative patient assessment. In particular, our implementation treats dynamic links as first-class objects: they can be stored in XML documents and transmitted around a network. This allows our system to find and understand new sources of data at run-time.
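The first-class-link idea can be sketched with a link object that round-trips through XML. The element and attribute names below are invented for illustration; they are not the Paraglide schema:

```python
import xml.etree.ElementTree as ET

class DynamicLink:
    """A link from a data source to a UI element, serializable to XML so
    it can be stored in a document or sent over a network.  All names
    here are hypothetical illustrations of the idea."""
    def __init__(self, source, target, transform):
        self.source, self.target, self.transform = source, target, transform

    def to_xml(self):
        el = ET.Element("link", source=self.source, target=self.target,
                        transform=self.transform)
        return ET.tostring(el, encoding="unicode")

    @classmethod
    def from_xml(cls, text):
        el = ET.fromstring(text)
        return cls(el.get("source"), el.get("target"), el.get("transform"))

wire = DynamicLink("sensor/heart-rate", "display/hr-graph", "smooth").to_xml()
link = DynamicLink.from_xml(wire)   # reconstructed at the receiving end
assert link.target == "display/hr-graph"
```

Because the link itself travels as data, a receiver can discover and wire up new data sources at run-time instead of having every connection compiled in.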
Mobile Collaborative Augmented Reality: The Augmented Stroll (pp. 299-316)
  Philippe Renevier; Laurence Nigay
The paper focuses on Augmented Reality (AR) systems in which interaction with the real world is augmented by the computer, the task being performed in the real world. We first define what mobile AR systems, collaborative AR systems and, finally, mobile and collaborative AR systems are. We then present the augmented stroll and its software design as one example of a mobile and collaborative AR system. The augmented stroll is applied to archaeology in the MAGIC (Mobile Augmented Group Interaction in Context) project.

Context Sensitive Interaction

Modelling and Using Sensed Context Information in the Design of Interactive Applications (pp. 317-335)
  Philip Gray; Daniel Salber
We present a way of analyzing sensed context information formulated to help in the generation, documentation and assessment of the designs of context-aware applications. Starting with a model of sensed context that accounts for the particular characteristics of sensing, we develop a method for expressing requirements for sensed context information in terms of relevant quality attributes plus properties of the sensors that supply the information. We demonstrate with an example how this approach permits the systematic exploration of the design space of context sensing along dimensions pertinent to software development. Returning to our model of sensed context, we examine how it can be supported by a modular software architecture for context sensing that promotes separation between context sensing, user interaction, and application concerns.
Delivering Adaptive Web Content Based on Client Computing Resources (pp. 337-355)
  Andrew Choi; Hanan Lutfiyya
This paper describes an approach to adapting Web content based on both static (e.g., connection speed) and dynamic (e.g., CPU load) information about a user's computing resources. This information can be transmitted to a Web server in two different ways. XML is used so that there is one copy of the content but multiple possible presentations. The paper describes an architecture, a prototype and initial results.
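The one-copy/many-presentations idea can be sketched as a transformation chosen per client. The document, element names and bandwidth threshold below are invented for illustration and are not the paper's actual design:

```python
import xml.etree.ElementTree as ET

# A single XML copy of the content; the presentation is derived from it
# per client, based on reported computing resources.
CONTENT = """<page>
  <headline>Forecast</headline>
  <body>Sunny, 22 degrees.</body>
  <image>map.png</image>
</page>"""

def render(xml_text, connection_kbps):
    """Produce a presentation adapted to the client's connection speed:
    slow clients get text only, fast clients also get the image."""
    root = ET.fromstring(xml_text)
    parts = [root.findtext("headline"), root.findtext("body")]
    if connection_kbps >= 512:
        parts.append("[image: %s]" % root.findtext("image"))
    return "\n".join(parts)

fast = render(CONTENT, 2000)   # includes the image reference
slow = render(CONTENT, 56)     # text-only presentation
assert "map.png" in fast and "map.png" not in slow
```

Dynamic resource information (such as CPU load) could feed the same decision point, so the server maintains one content source regardless of how many client classes exist.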
How Cultural Needs Affect User Interface Design? (pp. 357-358)
  Minna Mäkäräinen; Johanna Tiitola; Katja Konkka
This paper discusses how cultural aspects should be addressed in user interface design. It presents a summary of two case studies, one performed in India and the other in South Africa, in order to identify the needs and requirements for cultural adaptation. The case studies were performed in three phases. First, a pre-study was conducted in Finland. The pre-study included a literature study of the target culture; explored issues included facts about the state, religions practiced in the area, demographics, languages spoken, economics, conflicts between groups, the legal system, telecommunication infrastructure and the education system. Second, a field study was done in the target culture. The field study methods used were observations in context, semi-structured interviews in context, and expert interviews. A local subcontractor was used for practical arrangements, such as selecting subjects for the study. The subcontractors also had experience in user interface design, so they could act as experts giving insight into the local culture. Third, the findings were analyzed with the local experts, and the results were compiled into presentations and design guidelines for user interface designers. The results of the case studies indicate that there is a clear need for cultural adaptation of products. Cultural adaptation should cover much more than only the language of the dialog between the device and the end user. For example, the South Africa study revealed a strong need for a user interface that could be used by non-educated people who are not familiar with technical devices. Mobile phone users are no longer only well-educated, technologically oriented people. Translating the language of the dialog to the local language is not enough if the user cannot read.
   Another design issue discovered in the study was that people were afraid of using data-intensive applications (such as the phonebook or calendar), because crime rates in South Africa are very high, and the risk of the mobile phone being stolen and the data being lost is high. In India, examples of the findings are the long expected lifetimes of the products and the importance of religion. India is not a throwaway culture: when a device gets broken, it is not replaced with a new one but repaired. The expected lifetime of a product is long. The importance of religion, and especially religious icons and rituals, is much more visible in everyday life than in Europe. For example, people carry pictures of gods instead of pictures of family with them. Addressing this in the user interface would give the product added emotional value.