
ACM Transactions on Computer-Human Interaction, Volume 5

Editor: Jonathan Grudin
Standard No.: ISSN 1073-0516
Table of Contents:
  1. TOCHI 1998 Volume 5 Issue 1
  2. TOCHI 1998 Volume 5 Issue 2
  3. TOCHI 1998 Volume 5 Issue 3
  4. TOCHI 1998 Volume 5 Issue 4

TOCHI 1998 Volume 5 Issue 1

Graphical Definitions: Expanding Spreadsheet Languages through Direct Manipulation and Gestures (pp. 1-33)
  Margaret M. Burnett; Herkimer J. Gottfried
In the past, attempts to extend the spreadsheet paradigm to support graphical objects, such as colored circles or user-defined graphical types, have led to approaches featuring either a direct way of creating objects graphically or strong compatibility with the spreadsheet paradigm, but not both. This inability to conveniently go beyond numbers and strings without straying outside the spreadsheet paradigm has been a limiting factor in the applicability of spreadsheet languages. In this article we present graphical definitions, an approach that removes this limitation, allowing both simple and complex graphical objects to be programmed directly using direct manipulation and gestures, in a manner that fits seamlessly within the spreadsheet paradigm. We also describe an empirical study, in which subjects programmed such objects faster and with fewer errors using this approach than when using a traditional approach to formula specification. Because the approach is expressive enough to be used with both built-in and user-defined types, it allows the directness of demonstrational and spreadsheet techniques to be used in programming a wider range of applications than has been possible before.
Keywords: D.1.1 [Programming Techniques]: Applicative (Functional) Programming; D.1.7 [Programming Techniques]: Visual Programming; D.3.3 [Programming Languages]: Language Constructs and Features, Abstract data types; Data types and structures; H.4.1 [Information Systems Applications]: Office Automation, Spreadsheets, Design, Human Factors, Languages, Direct manipulation, Forms/3, Gestures, Programming by demonstration
Controlling Access in Multiuser Interfaces (pp. 34-62)
  Prasun Dewan; Honghai Shen
Traditionally, access control has been studied in the areas of operating systems and database management systems. With the advent of multiuser interfaces, there is a need to provide access control in the user interface. We have developed a general framework for supporting access control in multiuser interfaces. It is based on the classical notion of an access matrix, a generalized editing-based model of user-application interaction, and a flexible model of user-user coupling. It has been designed to support flexible control of all significant shared operations, high-level specification of access control policies, and automatic and efficient implementation of access control in a multiuser interface. It supports several new kinds of protected objects including sessions, windows, and hierarchical active variables; a large set of rights including not only the traditional semantic rights but also interaction and coupling rights; a set of inference rules for deriving default permissions; and a programming interface for implementing access control in multiuser interfaces. We have implemented the framework as part of a system called Suite. This article describes and motivates the framework using the concrete example of Suite, identifies some of the difficult issues we faced in its design, describes our preliminary experience with it, and suggests directions for future work.
Keywords: C.2.4 [Computer-Communication Networks]: Distributed Systems, distributed applications; distributed databases; D.2.2 [Software Engineering]: Tools and Techniques, user interfaces; D.2.6 [Software Engineering]: Programming Environments, interactive; D.3.3 [Programming Languages]: Language Constructs, input/output; H.1.2 [Models and Principles]: User/Machine Systems, human factors; H.4.1 [Information Systems Applications]: Office Automation; I.7.2 [Text Processing]: Text Editing, Design, Human Factors, Languages, Access control, collaboration, computer-supported cooperative work, groupware, privacy, security, structure editors, user interface management systems
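The access-matrix framework with default-permission inference that the abstract above describes can be sketched in miniature as follows. This is a hypothetical Python illustration, not Suite's actual programming interface; all names (subjects, objects, rights, and the inheritance rule) are invented for the example.

```python
# Minimal access-matrix sketch: explicit (subject, object) entries,
# plus a simple inference rule that derives default permissions from
# an object hierarchy (e.g., a window inherits from its session).

class AccessMatrix:
    def __init__(self):
        self.entries = {}   # (subject, obj) -> set of rights
        self.parents = {}   # obj -> parent obj

    def grant(self, subject, obj, *rights):
        self.entries.setdefault((subject, obj), set()).update(rights)

    def set_parent(self, obj, parent):
        self.parents[obj] = parent

    def check(self, subject, obj, right):
        """An explicit permission wins; otherwise walk up the object
        hierarchy and infer the default from the nearest ancestor."""
        while obj is not None:
            if right in self.entries.get((subject, obj), set()):
                return True
            obj = self.parents.get(obj)
        return False

acl = AccessMatrix()
acl.set_parent("window:w1", "session:s1")
acl.grant("alice", "session:s1", "read", "write")
acl.grant("bob", "window:w1", "read")

assert acl.check("alice", "window:w1", "write")      # inherited from session
assert not acl.check("bob", "window:w1", "write")    # read-only on the window
```

The hierarchy walk stands in for the article's inference rules; a fuller treatment would also distinguish semantic, interaction, and coupling rights.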
Achieving Convergence, Causality Preservation, and Intention Preservation in Real-Time Cooperative Editing Systems (pp. 63-108)
  Chengzheng Sun; Xiaohua Jia; Yanchun Zhang; Yun Yang; David Chen
Real-time cooperative editing systems allow multiple users to view and edit the same text/graphic/image/multimedia document at the same time from multiple sites connected by communication networks. Consistency maintenance is one of the most significant challenges in designing and implementing real-time cooperative editing systems. In this article, a consistency model, with properties of convergence, causality preservation, and intention preservation, is proposed as a framework for consistency maintenance in real-time cooperative editing systems. Moreover, an integrated set of schemes and algorithms, which support the proposed consistency model, are devised and discussed in detail. In particular, we have contributed (1) a novel generic operation transformation control algorithm for achieving intention preservation in combination with schemes for achieving convergence and causality preservation and (2) a pair of reversible inclusion and exclusion transformation algorithms for stringwise operations for text editing. An Internet-based prototype system has been built to test the feasibility of the proposed schemes and algorithms.
Keywords: C.2.4 [Computer-Communication Networks]: Distributed Systems, distributed applications; D.2.2 [Software Engineering]: Tools and Techniques, User interfaces; H.1.2 [Models and Principles]: User/Machine Systems, Human factors; H.5.3 [Information Interfaces and Presentation]: Group and Organization Interfaces, Synchronous interaction; Theory and models, Algorithms, Design, Human Factors, Causality preservation, computer-supported cooperative work, consistency maintenance, convergence, cooperative editing, groupware systems, intention preservation, operational transformation, REDUCE
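The operational-transformation idea at the heart of this article can be illustrated with a minimal character-wise inclusion transformation for concurrent inserts. This is a textbook-style sketch, not the stringwise algorithms from the paper; positions, site identifiers, and the tie-breaking rule are illustrative assumptions.

```python
# Inclusion transformation for two concurrent insert operations.
# Each op is (pos, text, site_id); site_id breaks ties when two
# sites insert at the same position, so both sites converge.

def it_insert_insert(op, other):
    """Transform `op` so it can be applied after `other` while
    preserving its intention (insert at the same logical place)."""
    pos, text, site = op
    o_pos, o_text, o_site = other
    if pos < o_pos or (pos == o_pos and site < o_site):
        return (pos, text, site)               # unaffected by `other`
    return (pos + len(o_text), text, site)     # shift right past `other`

def apply_insert(doc, op):
    pos, text, _ = op
    return doc[:pos] + text + doc[pos:]

# Two sites start from "abc" and insert concurrently.
doc = "abc"
op1 = (1, "X", 1)   # site 1 inserts "X" at position 1
op2 = (2, "Y", 2)   # site 2 inserts "Y" at position 2

# Each site applies its own op first, then the transformed remote op.
site1 = apply_insert(apply_insert(doc, op1), it_insert_insert(op2, op1))
site2 = apply_insert(apply_insert(doc, op2), it_insert_insert(op1, op2))
assert site1 == site2 == "aXbYc"   # convergence
```

The article's contribution goes well beyond this: reversible inclusion and exclusion transformations for stringwise operations, and a generic control algorithm that also preserves causality and intention.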

TOCHI 1998 Volume 5 Issue 2

Using Metalevel Techniques in a Flexible Toolkit for CSCW Applications (pp. 109-155)
  Paul Dourish
Ideally, software toolkits for collaborative applications should provide generic, reusable components, applicable in a wide range of circumstances, which software developers can assemble to produce new applications. However, the nature of CSCW applications and the mechanics of group interaction present a problem. Group interactions are significantly constrained by the structure of the underlying infrastructure, below the level at which toolkits typically offer control. This article describes the design features of Prospero, a prototype CSCW toolkit designed to be much more flexible than traditional toolkit techniques allow. Prospero uses a metalevel architecture so that application programmers can have control over not only how toolkit components are combined and used, but also over aspects of how they are internally structured and defined. This approach allows programmers to gain access to "internal" aspects of the toolkit's operation that affect how interaction and collaboration proceed. This article explains the metalevel approach and its application to CSCW, introduces two particular metalevel techniques for distributed data management and consistency control, shows how they are realized in Prospero, and illustrates how Prospero can be used to create a range of collaborative applications.
Keywords: C.2.4 [Computer-Communication Networks]: Distributed Systems, Distributed applications; distributed databases; D.2.2 [Software Engineering]: Tools and Techniques, User interfaces; H.1.2 [Models and Principles]: User/Machine Systems, Human factors; H.5.3 [Information Interfaces and Presentation]: Group and Organization Interfaces, Theory and models, Design, Human Factors, Languages, Consistency control, consistency guarantees, data distribution, divergency, metalevel programming, open implementation, software architecture
Hypertext versus Boolean Access to Biomedical Information: A Comparison of Effectiveness, Efficiency, and User Preferences (pp. 156-183)
  Barbara M. Wildemuth; Charles P. Friedman; Stephen M. Downs
This study compared two modes of access to a biomedical database, in terms of their effectiveness and efficiency in supporting clinical problem solving and in terms of user preferences. Boolean access, which allowed subjects to frame their queries as combinations of keywords, was compared to hypertext access, which allowed subjects to navigate from one database node to another. The accessible biomedical data were identical across system versions. Performance data were collected from two cohorts of first-year medical students, each student randomly assigned to either the Boolean or the hypertext system. Additional attitudinal data were collected from the second cohort. At each of two research sessions (one just before and one just after their bacteriology course), subjects worked eight clinical case problems, first using only their personal knowledge and, subsequently, with aid from the database. Database retrievals enabled students to answer questions they could not answer based on personal knowledge alone. This effect was greater when personal knowledge of bacteriology was lower. There were no statistically significant differences between the two forms of access, in terms of problem-solving effectiveness or efficiency. Students preferred Boolean access over hypertext access.
Keywords: H.3.2 [Information Storage and Retrieval]: Information Storage; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; H.5.2 [Information Interfaces and Presentation]: User Interfaces, Human Factors, Performance, Domain knowledge, intellectual access, medical education, problem solving, usage effectiveness, usage efficiency, user preferences

TOCHI 1998 Volume 5 Issue 3

Understanding and Constructing Shared Spaces with Mixed-Reality Boundaries (pp. 185-223)
  Steve Benford; Chris Greenhalgh; Gail Reynard; Chris Brown; Boriana Koleva
We propose an approach to creating shared mixed realities based on the construction of transparent boundaries between real and virtual spaces. First, we introduce a taxonomy that classifies current approaches to shared spaces according to the three dimensions of transportation, artificiality, and spatiality. Second, we discuss our experience of staging a poetry performance simultaneously within real and virtual theaters. This demonstrates the complexities involved in establishing social interaction between real and virtual spaces and motivates the development of a systematic approach to mixing realities. Third, we introduce and demonstrate the technique of mixed-reality boundaries as a way of joining real and virtual spaces together in order to address some of these problems.
Keywords: H.4.3 [Information Systems Applications]: Communications Applications, Computer conferencing and teleconferencing; H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems, Artificial realities; H.5.3 [Information Interfaces and Presentation]: Group and Organization Interfaces, Theory and models, Human Factors, Theory, Augmented reality, collaborative virtual environments, CSCW, media-spaces, mixed reality, shared spaces, telepresence, video, virtual reality
Using Nonspeech Sounds to Provide Navigation Cues (pp. 224-259)
  Stephen A. Brewster
This article describes three experiments that investigate the possibility of using structured nonspeech audio messages called earcons to provide navigational cues in a menu hierarchy. A hierarchy of 27 nodes and four levels was created with an earcon for each node. Rules were defined for the creation of hierarchical earcons at each node. Participants had to identify their location in the hierarchy by listening to an earcon. Results of the first experiment showed that participants could identify their location with 81.5% accuracy, indicating that earcons were a powerful method of communicating hierarchy information. One proposed use for such navigation cues is in telephone-based interfaces (TBIs) where navigation is a problem. The first experiment did not address the particular problems of earcons in TBIs such as "does the lower quality of sound over the telephone lower recall rates," "can users remember earcons over a period of time," and "what effect does training type have on recall?" An experiment was conducted and results showed that sound quality did lower the recall of earcons. However, redesign of the earcons overcame this problem with 73% recalled correctly. Participants could still recall earcons at this level after a week had passed. Training type also affected recall. With "personal training" participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide good navigation cues for TBIs. The final experiment used compound, rather than hierarchical, earcons to represent the hierarchy from the first experiment. Results showed that with sounds constructed in this way participants could recall 97% of the earcons. These experiments have developed our general understanding of earcons. A hierarchy three times larger than any previously created was tested, and this was also the first test of the recall of earcons over time.
Keywords: H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems, Audio input/output; hypertext navigation and maps; H.5.2 [Information Interfaces and Presentation]: User Interfaces, Evaluation/methodology; interaction styles; J.7 [Computer Applications]: Computers in Other Systems, Consumer products, Human Factors, Auditory interfaces, earcons, navigation, nonspeech audio, telephone-based interfaces
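The hierarchical-earcon construction that the abstract above evaluates can be sketched as a data structure: each node's earcon extends its parent's motif sequence with one new motif, so hearing an earcon reveals the path from the root. This is a hypothetical illustration of the general idea; the motif names and menu contents are invented, not the article's actual 27-node hierarchy or sound-design rules.

```python
# Sketch of hierarchical earcons: each node's earcon is its parent's
# motif sequence plus one motif of its own, so deeper nodes get
# longer, more specific sounds that encode their menu location.

def build_earcons(tree, prefix=()):
    """`tree` maps a node name to (motif, subtree). Returns a dict
    mapping every node name to its full motif sequence."""
    earcons = {}
    for node, (motif, children) in tree.items():
        seq = prefix + (motif,)
        earcons[node] = seq
        earcons.update(build_earcons(children, seq))
    return earcons

menu = {
    "root": ("sine-C4", {
        "mail": ("organ-E4", {"inbox": ("organ-G4", {})}),
        "news": ("brass-E4", {}),
    }),
}
earcons = build_earcons(menu)
assert earcons["inbox"] == ("sine-C4", "organ-E4", "organ-G4")
```

A listener who learns the per-level rules can decode an unfamiliar earcon's position, which is what made the 81.5% location-identification accuracy plausible.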
Two-Handed Virtual Manipulation (pp. 260-302)
  Ken Hinckley; Randy Pausch; Dennis Proffitt; Neal F. Kassell
We discuss a two-handed user interface designed to support three-dimensional neurosurgical visualization. By itself, this system is a "point design," an example of an advanced user interface technique. In this work, we argue that in order to understand why interaction techniques do or do not work, and to suggest possibilities for new techniques, it is important to move beyond point design and to introduce careful scientific measurement of human behavioral principles. In particular, we argue that the common-sense viewpoint that "two hands save time by working in parallel" may not always be an effective way to think about two-handed interface design because the hands do not necessarily work in parallel (there is a structure to two-handed manipulation) and because two hands do more than just save time over one hand (two hands provide the user with more information and can structure how the user thinks about a task). To support these claims, we present an interface design developed in collaboration with neurosurgeons which has undergone extensive informal usability testing, as well as a pair of formal experimental studies which investigate behavioral aspects of two-handed virtual object manipulation. Our hope is that this discussion will help others to apply the lessons learned in our neurosurgery application to future two-handed user interface designs.
Keywords: I.3.6 [Computer Graphics]: Methodology and Techniques, interaction techniques; H.5.2 [Information Interfaces and Presentation]: User Interfaces, Input devices and strategies, Design, Experimentation, Human Factors, Measurement, Bimanual asymmetry, haptic input, input devices, three-dimensional interaction, two-handed interaction, virtual manipulation

TOCHI 1998 Volume 5 Issue 4

The Integrality of Speech in Multimodal Interfaces (pp. 303-325)
  Michael A. Grasso; David S. Ebert; Timothy W. Finin
A framework of complementary behavior has been proposed which maintains that direct-manipulation and speech interfaces have reciprocal strengths and weaknesses. This suggests that user interface performance and acceptance may increase by adopting a multimodal approach that combines speech and direct manipulation. This effort examined the hypothesis that the speed, accuracy, and acceptance of multimodal speech and direct-manipulation interfaces will increase when the modalities match the perceptual structure of the input attributes. A software prototype that supported a typical biomedical data collection task was developed to test this hypothesis. A group of 20 clinical and veterinary pathologists evaluated the prototype in an experimental setting using repeated measures. The results of this experiment supported the hypothesis that the perceptual structure of an input task is an important consideration when designing a multimodal computer interface. Task completion time, the number of speech errors, and user acceptance improved when the interface best matched the perceptual structure of the input attributes.
Keywords: H.1.2 [Models and Principles]: User/Machine Systems, Human factors; H.5.2 [Information Interfaces and Presentation]: User Interfaces -- evaluation/methodology; input devices and strategies; interaction styles; H.5.3 [Information Interfaces and Presentation]: Group and Organization Interfaces -- theory and models; J.3 [Computer Applications]: Life and Medical Sciences, Design, Experimentation, Human Factors, Measurement, Performance, Theory, Direct manipulation, Input devices, Integrality, Medical informatics, Multimodal, Natural-language processing, Pathology, Perceptual structure, Separability, Speech recognition
Manual and Cognitive Benefits of Two-Handed Input: An Experimental Study (pp. 326-359)
  Andrea Leganchuk; Shumin Zhai; William Buxton
One of the recent trends in computer input is to utilize users' natural bimanual motor skills. This article further explores the potential benefits of such two-handed input. We have observed that bimanual manipulation may bring two types of advantages to human-computer interaction: manual and cognitive. Manual benefits come from increased time-motion efficiency, due to twice as many degrees of freedom being simultaneously available to the user. Cognitive benefits arise as a result of reducing the load of mentally composing and visualizing the task at an unnaturally low level which is imposed by traditional unimanual techniques. Area sweeping was selected as our experimental task. It is representative of what one encounters, for example, when sweeping out the bounding box surrounding a set of objects in a graphics program. Such tasks cannot be modeled by Fitts' Law alone and have not been previously studied in the literature. In our experiments, two bimanual techniques were compared with the conventional one-handed GUI approach. Both bimanual techniques employed the two-handed "stretchy" technique first demonstrated by Krueger in 1983. We also incorporated the "Toolglass" technique introduced by Bier et al. in 1993. Overall, the bimanual techniques resulted in significantly faster performance than the status quo one-handed technique, and these benefits increased with the difficulty of mentally visualizing the task, supporting our bimanual cognitive advantage hypothesis. There was no significant difference between the two bimanual techniques. This study makes two types of contributions to the literature. First, practically, we studied yet another class of transaction where significant benefits can be realized by applying bimanual techniques, and we have done so using easily available commercial hardware. Second, on a more theoretical level, the study adds to our understanding of why bimanual interaction techniques have an advantage over unimanual techniques.
A literature review on two-handed computer input and some of the most relevant bimanual human motor control studies is also included.
Keywords: H.1.2 [Models and Principles]: User/Machine Systems, Human factors; H.5.2 [Information Interfaces and Presentation]: User Interfaces, input devices and strategies; interaction styles; I.3.6 [Computer Graphics]: Methodology and Techniques, interaction techniques, Design, Experimentation, Human Factors, Measurement, Bimanual input, Input devices, Two-handed input