
Proceedings of ACM CHI'92 Conference on Human Factors in Computing Systems

Fullname: Proceedings of ACM CHI'92 Conference on Human Factors in Computing Systems
Note: Conference theme "Striking a Balance"
Editors: Penny Bauersfeld; John Bennett; Gene Lynch; Dennis Wixon; Betsy Comstock
Location: Monterey, California
Dates: 1992-May-03 to 1992-May-07
Publisher: ACM
Standard No: ACM ISBN 0-89791-513-5; ACM Order Number 608921; Addison-Wesley ISBN 0-201-53344-X; hcibib: CHI92, CHI92X, CHI92Y
Papers: 116; 62; 26
Pages: 736; 1-70; 71-130
  1. Text and Hypertext
  2. Studies of Media Supported Collaboration
  3. Laboratory Overviews: Graphics
  4. Panel
  5. Demonstration: User Interface Management Systems I
  6. Visualizing Objects, Graphs, and Video
  7. Perspectives on the Design of Collaborative Systems
  8. Direct Manipulation Theory, 3D Manipulation, and Design for Handicapped Users
  9. Panel
  10. Demonstration: Instructible Interfaces
  11. Models of the User I
  12. Tools and Techniques
  13. Perception/Performance Theory for HCI
  14. Panel
  15. Demonstration: User Interface Management Systems II
  16. Modeling the Expert User
  17. Beyond Widgets: Tools for Semantically Driven UI Design
  18. Laboratory Overviews: Usability Engineering
  19. Panel
  20. Demonstration: Information Visualization I
  21. Models of the User II
  22. Tools & Architectures for Virtual Reality and Multi-User, Shared Data
  23. Use and Evaluation of Learning Environments
  24. Panel

Text and Hypertext

Edit Wear and Read Wear BIBAKPDF 3-9
  William C. Hill; James D. Hollan; Dave Wroblewski; Tim McCandless
We describe two applications that illustrate the idea of computational wear in the domain of document processing. By graphically depicting the history of author and reader interactions with documents, these applications offer otherwise unavailable information to guide work. We discuss how their design accords with a theory of professional work and an informational physics perspective on interface design.
Keywords: Graphical user interfaces, Informational physics, Interface mechanisms, Professional work, Reflective practitioner
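To make the notion of computational wear concrete, the following is a minimal sketch under assumed names (ReadWearDocument, wear_bar); it illustrates attribute-mapped wear displays in general and is not the authors' implementation. Per-line read counts are accumulated as the document is used and then rendered as marks whose length is proportional to how often each line has been read.

    # Minimal read-wear sketch (hypothetical names, not the authors' system):
    # interaction counts are kept per line and mapped onto a wear display.
    from collections import Counter

    class ReadWearDocument:
        def __init__(self, lines):
            self.lines = lines
            self.read_counts = Counter()          # line index -> number of views
        def view(self, line_index):
            self.read_counts[line_index] += 1
            return self.lines[line_index]
        def wear_bar(self, width=10):
            """One row of 'wear' per line, proportional to how often it was read."""
            most = max(self.read_counts.values(), default=1)
            return ["#" * round(width * self.read_counts[i] / most)
                    for i in range(len(self.lines))]

    doc = ReadWearDocument(["intro", "methods", "results", "discussion"])
    for _ in range(5):
        doc.view(2)                               # "results" is read repeatedly
    doc.view(0)
    print(doc.wear_bar())                         # ['##', '', '##########', '']

An edit-wear display would work the same way, counting edits rather than views and mapping the counts onto the document's scroll bar.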
The Computer Sciences Electronic Magazine: Translating from Paper to Multimedia BIBAKPDF 11-18
  W. Randall Koons; Anne M. O'Dell; Nancy J. Frishberg; Mark R. Laff
In this paper, we discuss issues in design and usability of the IBM Computer Sciences Electronic Magazine (CSEM). The CSEM is an interactive multimedia translation of a paper magazine. It contains articles describing Computer Sciences projects at the four IBM Research Labs. Combining aspects from print, television, and computers, it is a useful vehicle for studying what we see as a completely new communication medium. We report both our design rationale in creating the magazine and the results of several user studies which helped us understand our successes and failures. These studies are a part of an iterative process through which we have redesigned and improved the CSEM.
Keywords: Electronic magazine, Interactive design, Multimedia design, Navigation, Indexing, Usability, Hypermedia, Metaphor
Note: Color plates are on pages 707-708
Hypertext or Book: Which is Better for Answering Questions? BIBAKPDF 19-25
  Barbee T. Mynatt; Laura Marie Leventhal; Keith Instone; John Farhat; Diane S. Rohlman
An important issue in the evolution of hypertext is the design of such systems to optimally support user tasks such as asking questions. Few studies have systematically compared the use of hypertext to books in seeking information, and those that have been done have not found a consistent superiority for hypertext. In addition, designers developing hypertext books have few guidelines. In the present study, users performed information-seeking tasks and answered a variety of types of questions about Sherlock Holmes stories using either a conventional paper encyclopedia or a hypertext encyclopedia. The questions varied on the amount of information needed to derive an answer (fact or inference), the location of the question's key phrase in the hypertext (entry title or entry content), and the format of the information (text or map). Accuracy and time were recorded. The hypertext group excelled in answering fact questions where the information was embedded in a text entry. The book group excelled only in answering fact questions based on maps. In spite of having far more experience using books, the book group was not significantly faster overall and did not perform as well on an incidental learning task. Our results suggest that a hypertext book with a nonlinear structure and including a variety of navigational tools can equal or surpass conventional books as an information-seeking medium, even with minimal training.
Keywords: Experimental research, Question answering, Usability of hypertext, Hypertext

Studies of Media Supported Collaboration

Realizing a Video Environment: EuroPARC's RAVE System BIBAKPDF 27-35
  William Gaver; Thomas Moran; Allan MacLean; Lennart Lovstrand; Paul Dourish; Kathleen Carter; William Buxton
At EuroPARC, we have been exploring ways to allow physically separated colleagues to work together effectively and naturally. In this paper, we briefly discuss several examples of our work in the context of three themes that have emerged: the need to support the full range of shared work; the desire to ensure privacy without giving up unobtrusive awareness; and the possibility of creating systems which blur the boundaries between people, technologies and the everyday world.
Keywords: Group work, Collaboration, Media spaces, Multi-Media, Video
Evaluating Video as a Technology for Informal Communication BIBAKPDF 37-48
  Robert S. Fish; Robert E. Kraut; Robert W. Root; Ronald E. Rice
Collaborations in organizations thrive on communication that is informal because informal communication is frequent, interactive, and expressive. Informal communication is crucial for the coordination of work, learning an organization's culture, the perpetuation of the social relations that underlie collaboration, and, in general, any situation that requires communication to resolve ambiguity. Informal communication is traditionally mediated by physical proximity, but physical proximity cannot mediate in geographically distributed organizations. The research described here evaluates the adequacy of a version of a desktop video/audio conferencing system for supporting informal communication in a research and development laboratory. The evaluation took place during a trial in which the system was used by summer employees and their supervisor-mentors. While the system was used frequently, the most common uses and users' assessments suggest that it was used more like a telephone or electronic mail than like physically mediated face-to-face communication. However, some features of its use transcended traditional media and allowed users to gain awareness of their work environment. The paper concludes with a discussion of requirements for successful technology to support informal communication.
Keywords: Informal meetings, Evaluation, Video, Desktop videoconferencing, Group work, Collaboration
Speech Patterns in Video-Mediated Conversations BIBAKPDF 49-59
  Abigail J. Sellen
This paper reports on the first of a series of analyses aimed at comparing same-room and video-mediated conversations in multiparty meetings. The study compared patterns of spontaneous speech in same-room conversations with those in two video-mediated conditions. One video system used a single camera, monitor and speaker, and a picture-in-a-picture device to display multiple people on one screen. The other system used multiple cameras, monitors, and speakers in order to support directional gaze cues and selective listening. Differences were found between same-room and video-mediated conversations in terms of floor control and amount of simultaneous speech. While no differences were found between the two video systems in terms of objective speech measures, other important differences are suggested and discussed.
Keywords: CSCW, Videoconferencing, Conversation patterns

Laboratory Overviews: Graphics

Human-Computer Interaction Research at Georgia Institute of Technology BIBPDF 61-62
  James D. Foley; Christine M. Mitchell; Neff Walker
The Virginia User Interface Laboratory BIBPDF 63-64
  Randy Pausch
System Ergonomics and Human-Computer Interaction at SIEMENS Corporate Research and Development BIBPDF 65-66
  H. Raffler; M. Schneider-Hufschmidt; T. Kuhme

Panel

Anthropomorphism: From Eliza to Terminator 2 BIBPDF 67-70
  Abbe Don; Susan Brennan; Brenda Laurel; Ben Shneiderman

Demonstration: User Interface Management Systems I

Action Assignable Graphics: A Flexible Human-Computer Interface Design Process BIBPDF 71-72
  Matthew D. Russell; Howard Xu; Lingtao Wang
The AT&T Display Construction Set User Interface Management System (UIMS) BIBPDF 73-74
  Joseph P. Rotella; Amy L. Bowman; Catherine A. Wittman

Visualizing Objects, Graphs, and Video

An Interface for Interactive Spatial Reasoning and Visualization BIBAKPDF 75-82
  James R. Osborn; Alice M. Agogino
An interface for software that creates a natural environment for engineering graphics students to improve their spatial reasoning and 3D visualization skills is described. The skills of interest involve spatial transformations and rotations, specifically those skills that engineers use to reason about 3D objects based on 2D representations. The software uses an intuitive and interactive interface allowing direct manipulation of objects. Animation capability is provided to demonstrate the relationship between arbitrary positions of an object and standard orthographic views. A second skill of interest requires visualization of a cutting-plane intersection of an object. An interface is developed which allows intuitive positioning of the cutting-plane utilizing the metaphor of a "pool of water" in which the object is partially submerged. The surface of the water represents the cutting plane. Adjustment of the pool depth combined with direct manipulation of the object provides for arbitrary positioning of the cutting-plane. Subjective evaluation of the software thus far indicates that students enjoy using it and find it helpful. A formal testing plan to objectively evaluate the software and interface design is underway.
Keywords: Spatial reasoning, Three dimensional visualization, Direct manipulation, Engineering graphics
Graphical Fisheye Views of Graphs BIBAKPDF 83-91
  Manojit Sarkar; Marc H. Brown
A fisheye lens is a very wide angle lens that shows places nearby in detail while also showing remote regions in successively less detail. This paper describes a system for viewing and browsing planar graphs using a software analog of a fisheye lens. We first show how to implement such a view using solely geometric transformations. We then describe a more general transformation that allows hierarchical, structured information about the graph to modify the views. Our general transformation is a fundamental extension to the previous research in fisheye views.
Keywords: Fisheye views, Information visualization
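As a point of reference for the purely geometric case, a common fisheye mapping (an illustrative form, not necessarily the exact equations of the paper) displaces a point at normalized distance x in [0, 1] from the focus to

    g(x) = \frac{(d + 1)\,x}{d\,x + 1}, \qquad d \ge 0

where d is a distortion factor: d = 0 leaves the layout unchanged, and larger d magnifies the neighborhood of the focus while compressing the periphery. The paper's more general transformation additionally lets hierarchical, structured information about the graph modify the view.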
A Magnifier Tool for Video Data BIBAKPDF 93-98
  Michael Mills; Jonathan Cohen; Yin Yin Wong
We describe an interface prototype, the Hierarchical Video Magnifier, which allows users to work with a video source at fine-levels of detail while maintaining an awareness of temporal context. The technique allows the user to recursively magnify the temporal resolution of a video source while preserving the levels of magnification in a spatial hierarchy. We discuss how the ability to inspect and manipulate hierarchical views of temporal magnification affords a powerful tool for navigating, analyzing and editing video streams.
Keywords: Interface metaphors, Time-Varying data, Hierarchical representation, Multimedia authoring, Information-Retrieval, Video editing, Granularity of information

Perspectives on the Design of Collaborative Systems

A Research Program to Assess User Perceptions of Group Work Support BIBAKPDF 99-106
  John Satzinger; Lorne Olfman
Computer support for group work is a technological innovation receiving considerable attention from developmental researchers. This paper reports the preliminary results of two surveys which assessed users' perceived needs for various types of group work support. The instruments, distributed to managers and professionals in a variety of organizations, described group support scenarios and associated functions/tools and asked for an assessment of their usefulness to one of the respondent's organizational work groups. Support for between-meeting group work was perceived to be more useful than support for either face-to-face or electronic meetings. Common single-user tools were generally perceived to be more useful than multi-user group tools. Individual differences and implications are addressed.
Keywords: Computer supported cooperative work, CSCW, Groupware, Technology acceptance model
Gardeners and Gurus: Patterns of Cooperation among CAD Users BIBAKPDF 107-117
  Michelle Gantt; Bonnie A. Nardi
We studied CAD system users to find out how they use the sophisticated customization and extension facilities offered by many CAD products. We found that users of varying levels of expertise collaborate to customize their CAD environments and to create programmatic extensions to their applications. Within a group of users, there is at least one local expert who provides support for other users. We call this person a local developer. The local developer is a fellow domain expert, not a professional programmer, outside technical consultant or MIS staff member. We found that in some CAD environments the support role has been formalized so that local developers are given official recognition, and time and resources to pursue local developer activities. In general, this formalization of the local developer role appears successful. We discuss the implications of our findings for work practices and for software design.
Keywords: Cooperative work, CAD, End user programming
Beyond Being There BIBAKPDF 119-125
  Jim Hollan; Scott Stornetta
A belief in the efficacy of imitating face-to-face communication is an unquestioned presupposition of most current work on supporting communications in electronic media. In this paper we highlight problems with this presupposition and present an alternative proposal for grounding and motivating research and development that frames the issue in terms of needs, media, and mechanisms. To help elaborate the proposal we sketch a series of example projects and respond to potential criticisms.
Keywords: Telecommunications, CSCW

Direct Manipulation Theory, 3D Manipulation, and Design for Handicapped Users

Evaluating Two Aspects of Direct Manipulation in Advanced Cockpits BIBAKPDF 127-134
  James A. Ballas; Constance L. Heitmeyer; Manuel A. Perez
Increasing use of automation in computer systems, such as advanced cockpits, presents special challenges in the design of user interfaces. The challenge is particularly difficult when automation is intermittent because the interface must support smooth transitions from automated to manual mode. A theory of direct manipulation predicts that this interface style will smooth the transition. Interfaces were designed to test the prediction and to evaluate two aspects of direct manipulation, semantic distance and engagement. Empirical results supported the theoretical prediction and also showed that direct engagement can have some adverse effects on another concurrent manual task. Generalizations of our results to other complex systems are presented.
Keywords: Direct manipulation, Interface styles, Interface design, Adaptive automation, Intermittent automation, Aircraft interfaces, Intelligent cockpit
Iterative Design of an Interface for Easy 3-D Direct Manipulation BIBAKPDF 135-142
  Stephanie Houde
Although computer tools for 3-D design applications are now widely available for use on personal computers, they are unnecessarily difficult to use. Conventions for establishing and manipulating views of 3-D objects require engineering-oriented dialogues that are foreign to most users. This paper describes the iterative design and testing of a new mechanism for moving 3-D objects with a mouse-controlled cursor in a space planning application prototype. Emphasis was placed on developing a design which would make 3-D interaction more intuitive by preserving users' experiences with moving objects in the real, physical world. Results of an informal user test of the current interface prototype are presented and implications for the development of a more general direct manipulation mechanism are discussed.
Keywords: 3-D manipulation, Direct manipulation, Iterative design, Space planning, Hand gestures, Narrative handles, Bounding box, Handle box
Computing for Users with Special Needs and Models of Computer-Human Interaction BIBAKPDF 143-148
  William W. McMillan
Models of human-computer interaction (HCI) can provide a degree of theoretical unity for diverse work in computing for users with special needs. Example adaptations for special users are described in the context of both implementation-oriented and linguistic models of HCI. It is suggested that the language of HCI be used to define standards for special adaptations. This would enhance reusability, modifiability, and compatibility of adaptations, inspire new innovations, and make it easier for developers of standard interfaces to incorporate adaptations. The creation of user models for subgroups of users with special needs would support semantic and conceptual adaptations.
Keywords: Human-computer interaction, Models, Handicapped, Special education, Rehabilitation, Accessibility

Panel

Designing Usable Systems Under Real-World Constraints: A Practitioners Forum BIBKPDF 149-152
  Robert M. Mulligan; Mary Dieli; Jakob Nielsen; Steven Poltrock; Daniel Rosenberg; Susan Ehrlich Rudman
Keywords: Design process, Organizational issues, Usability, User interface

Demonstration: Instructible Interfaces

Prototyping an Instructible Interface: Moctec BIBAKPDF 153-154
  David L. Maulsby
Moctec is a set of interactive mockups of an interface for programming search and replace tasks by example. The user guides inference by pointing at relevant features of data.
Keywords: Demonstrational interface, Prototyping
Interface Support for Comet: A Knowledge-Based Software Reuse Environment BIBPDF 155-156
  Sherman Tyler; Jon Schlossberg

Models of the User I

The Art of Search: A Study of Art Directors BIBAKPDF 157-163
  Sharon R. Garber; Mitch B. Grunes
We formulated a model of visual search by conducting a work flow study and task analysis of art directors as they searched for images to use in an advertisement. The analysis revealed the presence of artistic and image concepts, flexible structures which guide the search and are in turn molded by it. The analysis results were used to build a model-based interface for visual search. Results from presenting the interface to users indicate that the interface has the potential to make significant contributions to the visual search task, both in time savings and as an aid to the creative process.
Keywords: User models, Cognitive models, User interface design, Task analysis, Navigation, Searching, Visual problem solving
Note: Color plate is on page 703
Browser-Soar: A Computational Model of a Highly Interactive Task BIBAKPDF 165-172
  Virginia A. Peck; Bonnie E. John
Browser-Soar models the perceptual, cognitive, and motor operators of a user searching for information in an on-line help browser. The model accounts for 90% of the browsing behavior observed in ten episodes. This result suggests that much of browsing behavior is a routine cognitive task, describable by GOMS, and extends the boundary of tasks to which GOMS applies to include highly interactive tasks. Further, it also suggests that GOMS analyses can be used to evaluate browser interfaces, as they have been used to evaluate text-editors and other computer applications, and to help focus design effort.
Keywords: Browsing, Cognitive models, GOMS, Soar
Towards Task Models for Embedded Information Retrieval BIBAKPDF 173-180
  H. Ulrich Hoppe; Franz Schiele
This paper investigates to what extent task-oriented user support based on plan recognition is feasible in a highly situation-driven domain like information retrieval (IR) and discusses requirements for appropriate task models. It argues that information seeking tasks which are embedded in some higher-level external task context (e.g. travel planning) often exhibit procedural dependences; that these dependences are mainly due to the external task; and that they can be exploited for inferring the users' goals and plans. While there is a clear need for task models in IR to account for situational determinants of user behaviour, what is required are hybrid models that take account of both its "planned" and "situated" aspects. Empirical evidence for the points made is reported from a probabilistic analysis of retrieval sessions with a fact database and from experience with plan-based and state-based methods for user support in an experimental travel planning system.
Keywords: Task models, Information retrieval, Plan recognition, Planned vs. situated action

Tools and Techniques

Knowledge-Based Evaluation as Design Support for Graphical User Interfaces BIBAKPDF 181-188
  Jonas Lowgren; Tommy Nordqvist
The motivation for our work is that even though user interface guidelines and style guides contain much useful knowledge, they are hard for user interface designers to use. We want to investigate ways of bringing the human factors knowledge closer to the design process, thus making it more accessible to designers. To this end, we present a knowledge-based tool, containing design knowledge drawn from general guideline documents and toolkit-specific style guides, capable of evaluating a user interface design produced in a UIMS. Our assessment shows that part of what the designers consider relevant design knowledge is related to the user's tasks and thus cannot be applied to the static design representation of the UIMS. The final section of the paper discusses ways of using this task-related knowledge.
Keywords: User interface evaluation, Design support, Guidelines, Style guides
Controlling User Interface Objects Through Pre- and Postconditions BIBAKPDF 189-194
  Daniel F. Gieskens; James D. Foley
We have augmented user interface objects (i.e. windows, menus, buttons, sliders, etc.) with preconditions that determine their visibility and their enabled/disabled status and postconditions that are asserted when certain actions are performed on the object. Postconditions are associated with each functionally different action on the object. Attaching pre- and postconditions to interface objects provides several useful features, such as selective enabling of controls, rapid prototyping, and automatic generation of explanations and help text.
Keywords: User interface tools, Prototyping, Predicates
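As an illustration of the mechanism described above, the following is a minimal sketch assuming a hypothetical fact base and button class; it shows preconditions gating an object's enabled state and postconditions being asserted when the object's action fires, and is not the authors' actual code.

    # Hedged sketch: preconditions enable/disable a control, postconditions are
    # asserted into a shared fact base when its action is performed.
    class FactBase:
        def __init__(self):
            self.facts = set()
        def assert_fact(self, fact):
            self.facts.add(fact)
        def holds(self, fact):
            return fact in self.facts

    class Button:
        def __init__(self, label, preconditions, postconditions, action):
            self.label = label
            self.preconditions = preconditions    # facts that must hold to enable
            self.postconditions = postconditions  # facts asserted after the action
            self.action = action
        def enabled(self, facts):
            return all(facts.holds(p) for p in self.preconditions)
        def press(self, facts):
            if self.enabled(facts):
                self.action()
                for fact in self.postconditions:
                    facts.assert_fact(fact)

    facts = FactBase()
    save = Button("Save", ["document-open"], ["document-saved"],
                  lambda: print("saving"))
    print(save.enabled(facts))    # False: "document-open" not yet asserted
    facts.assert_fact("document-open")
    save.press(facts)             # runs the action and asserts "document-saved"

Selective enabling of controls then amounts to re-evaluating each object's preconditions whenever the fact base changes.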
Survey on User Interface Programming BIBAKPDF 195-202
  Brad A. Myers; Mary Beth Rosson
This paper reports on the results of a survey of user interface programming. The survey was widely distributed, and we received 74 responses. The results show that in today's applications, an average of 48% of the code is devoted to the user interface portion. The average time spent on the user interface portion is 45% during the design phase, 50% during the implementation phase, and 37% during the maintenance phase. 34% of the systems were implemented using a toolkit, 27% used a UIMS, 14% used an interface builder, and 26% used no tools. The projects using only toolkits spent the largest percentage of the time and code on the user interface (around 60%) compared to around 45% for those with no tools. This appears to be because the toolkit systems had more sophisticated user interfaces. The projects using UIMSs or interface builders spent the least percent of time and code on the user interface (around 41%) suggesting that these tools are effective. In general, people were happy with the tools they used, especially the graphical interface builders. The most common problems people reported when developing a user interface included getting users' requirements, writing help text, achieving consistency, learning how to use the tools, getting acceptable performance, and communicating among various parts of the program.
Keywords: Information interfaces and presentation, User interfaces, Evaluation, Methodology, User interface management systems, Windowing systems, Software engineering, Tools and techniques, User interfaces, Design, Human factors, User interface software, Surveys, User interface tools

Perception/Performance Theory for HCI

Orderable Dimensions of Visual Texture Useful for Data Display: Orientation, Size, and Contrast BIBAKPDF 203-209
  Colin Ware; William Knight
Vision research relating to the human perception of texture is briefly reviewed with a view to arriving at the principal dimensions of visual texture useful for data display. The conclusion is that orientation, size (1/spatial frequency), and contrast (amplitude) are the primary orderable dimensions of texture. Data displayed using these texture parameters will be subject to distortions similar to those found when color is used. Textures synthesized using Gabor function primitives can be modulated along the three primary dimensions. Some preliminary results from a study using Gabor functions to modulate luminance are presented which suggest that perceived texture size differences are approximately logarithmic, that a 5% change in texton size is detectable 50% of the time, and that large perceived size differences do not predict small (just noticeable) size differences.
Keywords: Scientific visualization, Visual texture, Cartography
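For reference, a standard Gabor primitive (a generic form, not necessarily the exact parameterization used in the study) is a sinusoid under a Gaussian envelope,

    G(x, y) = a \, \exp\!\left(-\frac{x'^2 + y'^2}{2\sigma^2}\right) \cos(2\pi f x'),
    \quad x' = x\cos\theta + y\sin\theta, \; y' = -x\sin\theta + y\cos\theta,

so the three orderable dimensions named above map directly onto its parameters: orientation to \theta, size to 1/f (with the envelope width \sigma scaled accordingly), and contrast to the amplitude a.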
The Perceptual Structure of Multidimensional Input Device Selection BIBAKPDF 211-218
  Robert J. K. Jacob; Linda E. Sibert
Concepts such as the logical device, taxonomies, and other descriptive frameworks have improved understanding of input devices but ignored or else treated informally their pragmatic qualities, which are fundamental to selection of input devices for tasks. We seek the greater leverage of a predictive theoretical framework by basing our investigation of three-dimensional vs. two-dimensional input devices on Garner's theory of processing of perceptual structure in multidimensional space. We hypothesize that perceptual structure provides a key to understanding performance of multidimensional input devices on multidimensional tasks. Two three-dimensional tasks may seem equivalent, but if they involve different types of perceptual spaces, they should be assigned correspondingly different input devices. Our experiment supports this hypothesis and thus both indicates when to use three-dimensional input devices and gives credence to our theoretical basis for this indication.
Keywords: Input devices, Interaction techniques, Gesture input, Polhemus tracker, Perceptual space, Integrality, Separability
Extending Fitts' Law to Two-Dimensional Tasks BIBAKPDF 219-226
  I. Scott MacKenzie; William Buxton
Fitts' law, a one-dimensional model of human movement, is commonly applied to two-dimensional target acquisition tasks on interactive computing systems. For rectangular targets, such as words, it is demonstrated that the model can break down and yield unrealistically low (even negative!) ratings for a task's index of difficulty (ID). The Shannon formulation is shown to partially correct this problem, since ID is always >= 0 bits. As well, two alternative interpretations of "target width" are introduced that accommodate the two-dimensional nature of tasks. Results of an experiment are presented that show a significant improvement in the model's performance using the suggested changes.
Keywords: Human performance modeling, Fitts' Law, Input devices, Input tasks
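For reference, the two formulations contrasted in the abstract are

    ID_{Fitts} = \log_2\!\left(\frac{2A}{W}\right), \qquad
    ID_{Shannon} = \log_2\!\left(\frac{A}{W} + 1\right) \ge 0,

where A is the movement amplitude and W the target width; the traditional form goes negative whenever W > 2A, which is exactly the breakdown noted for wide, short targets such as words. The alternative interpretations of W for two-dimensional targets amount, roughly, to substituting either the smaller of the target's width and height or the target extent measured along the approach direction (a paraphrase; the paper gives the precise definitions).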

Panel

When TVs are Computers are TVs BIBAKPDF 227-230
  S. Joy Mountford; Peter Mitchell; Pat O'Hara; Joe Sparks; Max Whitby
This panel brings together experts from TV production with those in the computer multimedia business. They will discuss what is likely to happen when the two media coexist. An exciting opportunity exists in merging the strengths of both media together synergistically to create pervasive and powerful Interactive Television.
Keywords: Interface design, Multimedia design

Demonstration: User Interface Management Systems II

Transportable Applications Environment (TAE) Plus User Interface Designer WorkBench BIBAKPDF 231-232
  Martha R. Szczur
TAE Plus was built at NASA's Goddard Space Flight Center to support the construction of graphical user interfaces (GUIs) for highly interactive applications, such as real-time processing systems and scientific analysis systems. TAE Plus is designed as a productivity tool for the user interface designer. Human factors experts and user interface designers frequently do not want to learn the programming details of the windowing environment before they use a GUI development tool to prototype and/or develop an application's user interface. TAE Plus has been developed with this user in mind. TAE Plus is a user interface management system that supports (1) interactively constructing the visual layout of an application screen, (2) rehearsing the UI, (3) generating the application source code to manage the UI, and (4) providing run-time services to manage the UI during application execution.
Keywords: Design tools, User interface, Development tools, Productivity, User interface management system
CHIRP: The Computer-Human Interface Rapid Prototyping Toolkit BIBPDF 233-234
  Bob Remington

Modeling the Expert User

The Art of the Obvious BIBAKPDF 235-239
  E. Nygren; M. Lind; M. Johnson; B. Sandblad
In addition to normal reading, knowledge can be gained from a paper document by pattern recognition and encoding of characteristics of the information medium. There are reasons to believe that this can be done automatically with very little attentional demand. The knowledge gained is accessible to consciousness and can be used for task components like orientation, navigation, and detection of changes, and as a complement to normal reading. When information is computerized and is read from a screen instead of from paper, the conditions for automaticity are often radically changed. In most cases the reader has to gain the corresponding knowledge by effortful cognitive processes. This adds to the cognitive load, leaving less attentional capacity for the main task at hand. The problem can be avoided by a careful analysis of a reading task into its automatic and non-automatic components, followed by a dedicated user interface design in which information relevant for orientation, navigation, etc. is presented in a way that the reader can perceive rather than read.
Keywords: User interface design, Task analysis, User models, Reading, Tacit knowledge
Note: Color plates are on pages 709-710
A Computational Model of Skilled Use of a Graphical User Interface BIBAKPDF 241-249
  Muneo Kitajima; Peter G. Polson
This paper describes a computational model of skilled use of a graphical user interface based on Kintsch's construction-integration theory [4, 8]. The model uses knowledge of a detailed representation of information on the display, a user's goals and expectations, knowledge about the interface, and knowledge about the application domain to compute actions necessary to accomplish the user's current goal. The model provides a well-motivated account of one kind of error, action slips [14], made by skilled users. We show how information about the intermediate state of a task on the display plays a critical role in skilled performance, i.e., display-based problem solving [10].
Keywords: User models, Graphical user interfaces, Display-based problem solving, Action slips
A GOMS Analysis of a Graphic, Machine-Paced, Highly Interactive Task BIBAKPDF 251-258
  Bonnie E. John; Alonso H. Vera
A GOMS analysis was used to predict the behavior of an expert in a graphic, machine-paced, highly interactive task. The analysis was implemented in a computational model using the Soar cognitive architecture. Using only the information available in an instruction booklet and some simple heuristics for selecting between operators, the functional-level behavior of the expert proved to be virtually dictated by the objects visible on the display. At the keystroke-level, the analysis predicted about 60% of the behavior, in keeping with similar results in previous GOMS research. We conclude that GOMS is capable of predicting expert behavior in a broader range of tasks than previously demonstrated.
Keywords: User models, Cognitive models, GOMS, Soar, Video games

Beyond Widgets: Tools for Semantically Driven UI Design

Coupling Application Design and User Interface Design BIBAKPDF 259-266
  Dennis J. M. J. de Baar; James D. Foley; Kevin E. Mullet
Building an interactive application involves the design of both a data model and a graphical user interface (GUI) to present that model to the user. These two design activities are typically approached as separate tasks and are frequently undertaken by different individuals or groups. Our approach eliminates redundant specification work by generating an interface directly from the data model itself. An inference engine using style rules for selecting and placing GUI controls (i.e., widgets) is integrated with an interface design tool to generate a user interface definition. This approach allows a single data model to be mapped onto multiple GUI's by substituting the appropriate rule set and thus represents a step toward a GUI-independent run-time layout facility.
Keywords: User interface software, Automatic user interface design, Data models
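To make the rule-driven step concrete, the following is a small sketch under assumed names (DATA_MODEL, choose_widget) and an invented rule set; it is not the authors' rule base, but it shows how widget selection can be derived from attribute types in a data model and retargeted by swapping the rules.

    # Hedged sketch: style rules map data-model attribute types onto GUI controls.
    DATA_MODEL = {
        "volume":   {"type": "integer", "range": (0, 10)},
        "muted":    {"type": "boolean"},
        "language": {"type": "enum", "values": ["en", "de", "fr"]},
    }

    def choose_widget(attr, style="style-a"):
        """Select a control for one attribute using simple, swappable style rules."""
        t = attr["type"]
        if t == "boolean":
            return "toggle-button" if style == "style-a" else "check-box"
        if t == "integer" and "range" in attr:
            return "slider"
        if t == "enum" and len(attr["values"]) <= 5:
            return "radio-group"
        return "text-field"

    ui = {name: choose_widget(spec) for name, spec in DATA_MODEL.items()}
    print(ui)   # {'volume': 'slider', 'muted': 'toggle-button', 'language': 'radio-group'}

Substituting a different rule set maps the same data model onto a different GUI, which is the GUI independence the abstract describes.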
Workspaces: An Architecture for Editing Collections of Objects BIBAKPDF 267-272
  Dan R., Jr. Olsen; Thomas G. McNeill; David C. Mitchell
Many tools create new user interfaces by compositing them out of smaller pieces. This usually leads to variations on the dialog box to edit a single composite object. Workspaces are a model for compositing together various editors to manipulate sets of objects and their attributes. The workspace components communicate in terms of a selected set and the attributes possessed by objects in that set. This model has been implemented as part of the Sushi UIMS.
Keywords: Collection editing, User interface management systems, Editors, Interactive software
Selectors: Going Beyond User-Interface Widgets BIBAKPDF 273-279
  Jeff Johnson
Most UI toolkits and UIMSs make use of widgets, e.g., buttons, text fields, sliders, menus. Designers construct user interfaces by choosing and laying out widgets, then connecting them to application semantics. This approach has four problems. First, most widgets are too low-level: constructing interfaces from them takes too much work. Second, working with widgets focuses attention on appearance and layout issues, rather than on more important semantic design issues. Third, designers can easily make poor widget choices, yielding poor interfaces. Fourth, widgets do not mesh well with application semantics; they know nothing about the variables they control. We are developing an application construction environment in which designers and implementers work with semantic-based controls called Selectors rather than with widgets. Selectors are classified according to their interface semantics (e.g., mutually-exclusive choice), rather than their appearance. Each type of Selector can be presented in a variety of ways, and the presentation may be chosen semi-automatically. Selectors mesh well with application semantics: their values are application data types, and their views determine how to present valid values automatically.
Keywords: User-interface toolkit, UIMS, Widgets
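As an illustration of the distinction the abstract draws, the following is a minimal sketch with a hypothetical OneOfNSelector class (not the authors' system): the control is defined by its interface semantics, mutually-exclusive choice over application data values, while the concrete presentation is a separate, late decision.

    # Hedged sketch: a semantic control whose values are application data,
    # with presentation chosen independently of the semantics.
    class OneOfNSelector:
        """Mutually-exclusive choice over the valid values of an application variable."""
        def __init__(self, label, choices, on_change):
            self.label = label
            self.choices = list(choices)      # application values, not widget strings
            self.on_change = on_change
            self.value = self.choices[0]
        def select(self, value):
            if value not in self.choices:
                raise ValueError(f"{value!r} is not a valid choice for {self.label}")
            self.value = value
            self.on_change(value)
        def present_as(self, style):
            # Any of several presentations (radio group, option menu, ...) can
            # render the same selector.
            return f"{self.label}: render {len(self.choices)} choices as a {style}"

    units = OneOfNSelector("Units", ["inches", "centimeters"],
                           lambda v: print("units are now", v))
    print(units.present_as("radio group"))
    units.select("centimeters")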

Laboratory Overviews: Usability Engineering

HUSAT - 21 Years of HCI: The Human Sciences & Advanced Technology Research Institute BIBPDF 281-282
  Brian Shackel
The Human-Computer Technology Group at Bellcore BIBKPDF 283-284
  Rita M. Bush
Keywords: Technology transfer, User-centered design, Graphical user interfaces, User modeling
The Human Factors Group at Compaq Computer Corporation BIBPDF 285-286
 

Panel

Interfaces for Consumer Products: "How to Camouflage the Computer?" BIBPDF 287-290
  Maddy D. Brouwer-Janse; Raymond W. Bennett; Takaya Endo; Floris L. van Nes; Hugo J. Strubbe; Donald R. Gentner

Demonstration: Information Visualization I

A Window System with Leafing Through Mode: BookWindow BIBAPDF 291-292
  Kyoichi Arai; Teruo Yokoyama; Yutaka Matsushita
This paper describes "BookWindow" that we implemented, a window system based on the "book" metaphor, that displays information not by scrolling but by using the animation of paging through. The BookWindow system equips some bookmarks, tabs, etc, by which we can access to an expected page through our requirements. BookWindow can support our work environment which navigates us through information space flexibly, because human beings are quite familiar with "books".
Value Bars: An Information Visualization and Navigation Tool for Multi-Attribute Listings BIBPDF 293-294
  Richard Chimera

Models of the User II

A Performance Model of System Delay and User Strategy Selection BIBAKPDF 295-305
  Steven L. Teal; Alexander I. Rudnicky
This study lays the groundwork for a predictive, zero-parameter engineering model that characterizes the relationship between system delay and user performance. It specifically investigates how system delay affects a user's selection of task strategy. Strategy selection is hypothesized to be based on a cost function combining two factors: (1) the effort required to synchronize input with system availability and (2) the accuracy level afforded. Results indicate that users, seeking to minimize effort and maximize accuracy, choose among three strategies -- automatic performance, pacing, and monitoring. These findings provide a systematic account of the influence of system delay on user performance, based on adaptive strategy choice driven by cost.
Keywords: System response time, Strategy selection, Interface design, Human factors
The Precis of Project Ernestine, or, An Overview of a Validation of GOMS BIBKPDF 307-312
  Wayne D. Gray; Bonnie E. John; Michael E. Atwood
Keywords: GOMS, Analysis methods, Empirical studies, User models, Cognitive models, Methods for analysis/assessment, Prototyping, Protocol analysis, Theory in HCI
Method Engineering: From Data to Model to Practice BIBAKPDF 313-320
  Erik Nilsen; HeeSen Jong; Judith S. Olson; Peter G. Polson
This paper explores the behavior of experts choosing among various methods to accomplish tasks. Given the results showing that methods are not chosen solely on the basis of keystroke efficiency, we recommend a technique to help designers assess whether they should offer multiple methods for some tasks, and if they should, how to make them so that they are chosen appropriately.
Keywords: User-interface design issues, Design techniques, Models of the user

Tools & Architectures for Virtual Reality and Multi-User, Shared Data

The Decoupled Simulation Model for Virtual Reality Systems BIBAKPDF 321-328
  Chris Shaw; Jiandong Liang; Mark Green; Yunqi Sun
The Virtual Reality user interface style allows the user to manipulate virtual objects in a 3D environment using 3D input devices. This style is best suited to application areas where traditional two-dimensional styles fall short, but the programming effort currently required to produce a VR application is substantial. We have built a toolkit called MR, which facilitates the development of VR applications. The toolkit provides support for distributed computing, head-mounted displays, room geometry, performance monitoring, hand input devices, and sound feedback. In this paper, the architecture of the toolkit is outlined, the programmer's view is described, and two simple applications are presented.
Keywords: User interface software, Virtual reality, Interactive 3D graphics
Interactive Simulation in a Multi-Person Virtual World BIBAKPDF 329-334
  Christopher Codella; Reza Jalili; Lawrence Koved; J. Bryan Lewis; Daniel T. Ling; James S. Lipscomb; David A. Rabenhorst; Chu P. Wang; Alan Norton; Paula Sweeney; Greg Turk
A multi-user Virtual World has been implemented combining a flexible-object simulator with a multisensory user interface, including hand motion and gestures, speech input and output, sound output, and 3-D stereoscopic graphics with head-motion parallax. The implementation is based on a distributed client/server architecture with a centralized Dialogue Manager. The simulator is inserted into the Virtual World as a server. A discipline for writing interaction dialogues provides a clear conceptual hierarchy and the encapsulation of state. This hierarchy facilitates the creation of alternative interaction scenarios and shared multiuser environments.
Keywords: User interface management system, Dialog manager, Virtual worlds, Virtual reality, Interactive simulation
The Abstraction-Link-View Paradigm: Using Constraints to Connect User Interfaces to Applications BIBAKPDF 335-342
  Ralph D. Hill
The goal of the RENDEZVOUS project is to build interactive systems that are used by multiple users from multiple workstations, simultaneously. This goal caused us to choose an architecture that requires a clean run-time separation of user interfaces from applications. Such a separation has long been a stated goal of UIMS researchers, but it is difficult to achieve. A key technical reason for the difficulty is that modern direct manipulation interfaces require extensive communication between the user interface and the application to provide semantic feedback. We discuss several communications mechanisms that have been used in the past, and present our approach -- the Abstraction-Link-View paradigm. Links are objects whose sole responsibility is to facilitate communication between the abstraction objects (application) and the view objects (user interfaces). The Abstraction-Link-View paradigm relies on concurrency and a fast but powerful constraint system.
Keywords: Information interfaces and presentation, User interfaces, User interface management systems, Information interfaces and presentation, Group and organization interfaces, Synchronous interaction, Dialog independence, Constraints

Use and Evaluation of Learning Environments

Grace Meets the "Real World": Tutoring COBOL as a Second Language BIBAKPDF 343-350
  Bob Radlinski; Jean McKendree
Grace is an intelligent tutoring system for COBOL which has been used to teach both novice and experienced programmers. While the tutor was quite effective in several classes and was designed with cognitive and interface principles in mind, we discuss a number of interesting issues that we have discovered when novice and experienced programmers used the tutor. Most of these problems are related to incompatibilities between the tutor interactions and the students' expectations in two areas: (1) the interactions with the tutor versus the interactions in their usual work environment and (2) the way in which experienced programmers solve problems. We describe these issues along with our solutions in the revised version of the tutor.
Keywords: Intelligent tutoring systems, Expert/novice differences, Skill acquisition, Task analysis, User-centered design, Situated learning
Evocative Agents and Multi-Media Interface Design BIBAKPDF 351-356
  Beth Adelson
This paper describes research which focuses on the issue of possible roles for computerized agents within multi-media educational software.
Keywords: Computerized agents, Multi-media software, Educational software, Foreign language learning
Note: Color plates are on pages 699-701
Graphic StoryWriter: An Interactive Environment for Emergent Storytelling BIBAKPDF 357-364
  Karl E. Steiner; Thomas G. Moher
The Graphic StoryWriter (GSW) is an interactive system that enables its users to create structurally complete stories through the manipulation of graphic objects in a simulated storybook. A rule-based story engine manages character and prop interaction, guides story development, and generates text. Through the simple interface and story-writing engine, the Graphic StoryWriter provides an environment for early readers to learn about story structures, to experience the relationship between pictures and text, and to experiment with causal effects. This paper describes the motivation for and design of the Graphic StoryWriter, and reports on an empirical comparison of children's stories generated orally and with the GSW.
Keywords: User interaction, Story grammars, Educational software

Panel

Toward a More Humane Keyboard BIBPDF 365-368
  William Hargreaves; David Rempel; Nachman (Manny) Halpern; Robert Markison; Karl Kroemer; Jack Litewka