
Proceedings of the 1994 ACM Symposium on User Interface Software and Technology

Fullname: Proceedings of the 1994 ACM Symposium on User Interface Software and Technology
Location: Marina del Rey, California
Dates: 1994-Nov-02 to 1994-Nov-04
Standard No: ISBN 0-89791-657-3; ACM Order Number 429946
  1. Opening Plenary
  2. Visualization I
  3. Speech and Sound
  4. Groupware and 3D Tools
  5. Demos
  6. Demonstrational User Interfaces
  7. Visualization II
  8. Panel
  9. Constraints
  10. Drawing and Sketching
  11. Closing Plenary
  12. Two Hands and Three Dimensions

Opening Plenary

Creating the Invisible Interface BIBAPDF 1
  Mark Weiser
For thirty years, most interface design, and most computer design, has been headed down the path of the "dramatic" machine. Its highest ideal is to make a computer so exciting, so wonderful, so interesting, that we never want to be without it. A less-traveled path I call the "invisible"; its highest ideal is to make a computer so imbedded, so fitting, so natural, that we use it without even thinking about it. (I have also called this notion "Ubiquitous Computing.") I believe that in the next twenty years the second path will come to dominate. But this will not be easy; very little of our current systems infrastructure will survive. We have been building versions of the infrastructure-to-come at PARC for the past four years, in the form of inch-, foot-, and yard-sized computers we call Tabs, Pads, and Boards. In this talk I will describe the humanistic origins of the "invisible" ideal in post-modernist thought. I will then describe some of our prototypes, how they succeed and fail to be invisible, and what we have learned. I will illustrate new systems issues that user interface designers will face when creating invisibility. And I will indicate some new directions we are now exploring, including the famous "dangling string" display.

Visualization I

Galaxy of News: An Approach to Visualizing and Understanding Expansive News Landscapes BIBAKPDF 3-12
  Earl Rennison
The Galaxy of News system embodies an approach to visualizing large quantities of independently authored pieces of information, in this case news stories. At the heart of this system is a powerful relationship construction engine that constructs an associative relation network to automatically build implicit links between related articles. To visualize these relationships, and hence the news information space, the Galaxy of News uses pyramidal structuring and visual presentation, semantic zooming and panning, animated visual cues that are dynamically constructed to illustrate relationships between articles, and fluid interaction in a three dimensional information space to browse and search through large databases of news articles. The result is a tool that allows people to quickly gain a broad understanding of a news base by providing an abstracted presentation that covers the entire information base, and through interaction, progressively refines the details of the information space. This research has been generalized into a model for news access and visualization to provide automatic construction of news information spaces and derivation of an interactive news experience.
Keywords: Information visualization, Abstracted information spaces, Pyramidal information structures, 3D interactive graphics, Information space design, Information interaction design
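The "relationship construction engine" above builds implicit links between related articles automatically. As a rough illustration of that idea (not the paper's algorithm, which is unspecified here), the sketch below links articles whose tf-idf term vectors are similar; the scoring scheme and threshold are assumptions.

```python
# Toy sketch: implicit links between related articles via tf-idf cosine
# similarity. Illustrative only; Galaxy of News' actual engine may differ.
import math
from collections import Counter

def tfidf_vectors(docs):
    """One term-weight dict per document."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def implicit_links(docs, threshold=0.1):
    """Link article pairs whose similarity clears the threshold."""
    vecs = tfidf_vectors(docs)
    links = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            score = cosine(vecs[i], vecs[j])
            if score >= threshold:
                links.append((i, j, score))
    return links

docs = ["stocks fall on wall street",
        "stocks rise on wall street",
        "local team wins a game"]
print(implicit_links(docs))   # links the two market stories, not the third
```

A real news base would use richer features, but the shape of the computation (pairwise association scores feeding a link network) is the same.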
Laying Out and Visualizing Large Trees Using a Hyperbolic Space BIBAKPDF 13-14
  John Lamping; Ramana Rao
We present a new focus+context (fisheye) scheme for visualizing and manipulating large hierarchies. The essence of our approach is to lay out the hierarchy uniformly on the hyperbolic plane and map this plane onto a circular display region. The projection onto the disk provides a natural mechanism for assigning more space to a portion of the hierarchy while still embedding it in a much larger context. Change of focus is accomplished by translating the structure on the hyperbolic plane, which allows a smooth transition without compromising the presentation of the context.
Keywords: Hierarchy display, Information visualization, Fisheye display, Focus+context technique
Note: TechNote
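The "translation on the hyperbolic plane" that drives the change of focus is, in the Poincaré disk model, a Möbius transformation. The sketch below shows just that underlying math (standard hyperbolic geometry, not Lamping and Rao's layout code): a node near the rim moves to the center when the focus translates toward it, while everything else stays inside the disk.

```python
# Focus+context on the Poincare disk: a Mobius translation re-centers the
# focus while keeping all other points inside the unit disk (the context).

def mobius_translate(z, c):
    """Hyperbolic translation of the unit disk moving the origin to c."""
    return (z + c) / (1 + c.conjugate() * z)

node = 0.9 + 0j                  # a node near the rim: tiny on screen
print(abs(node))                 # 0.9

focused = mobius_translate(node, -0.9 + 0j)
print(abs(focused))              # 0.0 -- the node is now at the focus

other = 0.5j                     # any other node stays inside the disk
print(abs(mobius_translate(other, -0.9 + 0j)) < 1.0)   # True
```

Animating `c` along a path gives the smooth transition the abstract describes.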
Powers of Ten Thousand: Navigating in Large Information Spaces BIBAPDF 15-16
  Henry Lieberman
How would you interactively browse a very large display space, for example, a street map of the entire United States? The traditional solution is zoom and pan. But each time a zoom-in operation takes place, the context from which it came is visually lost. Sequential applications of the zoom-in and zoom-out operations may become tedious. This paper proposes an alternative technique, the microscope, based on zooming and panning in multiple translucent layers. A microscope display should comfortably permit browsing continuously on a single image, or set of images in multiple resolutions, on a scale of at least 1 to 10,000.
Note: TechNote
Pad++: A Zooming Graphical Interface for Exploring Alternate Interface Physics BIBAKPDF 17-26
  Benjamin B. Bederson; James D. Hollan
We describe the current status of Pad++, a zooming graphical interface that we are exploring as an alternative to traditional window and icon-based approaches to interface design. We discuss the motivation for Pad++, describe the implementation, and present prototype applications. In addition, we introduce an informational physics strategy for interface design and briefly compare it with metaphor-based design strategies.
Keywords: Interactive user interfaces, Multiscale interfaces, Zooming interfaces, Authoring, Information navigation, Hypertext, Information visualization, Information physics
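At the core of any zooming interface like Pad++ is a pan/zoom camera over an effectively infinite surface. The sketch below shows the essential coordinate transform (class and method names are illustrative assumptions, not Pad++'s API): zooming about the cursor keeps the point under the cursor fixed on screen.

```python
# Minimal pan/zoom camera for a zooming interface. Zooming about a screen
# point keeps the world point under it stationary, which is what makes
# zoom-in/zoom-out feel like diving into the surface.

class ZoomView:
    def __init__(self):
        self.scale = 1.0           # pixels per world unit
        self.pan = (0.0, 0.0)      # world coordinate at screen origin

    def to_screen(self, wx, wy):
        return ((wx - self.pan[0]) * self.scale,
                (wy - self.pan[1]) * self.scale)

    def zoom(self, factor, sx, sy):
        """Zoom by `factor`, keeping screen point (sx, sy) stationary."""
        # world point currently under (sx, sy)
        wx = self.pan[0] + sx / self.scale
        wy = self.pan[1] + sy / self.scale
        self.scale *= factor
        # re-pan so that same world point maps back to (sx, sy)
        self.pan = (wx - sx / self.scale, wy - sy / self.scale)

view = ZoomView()
before = view.to_screen(10, 10)
view.zoom(2.0, 10.0, 10.0)        # zoom in about screen point (10, 10)
after = view.to_screen(10, 10)
print(before, after)              # (10.0, 10.0) both times
```

Semantic zooming then adds a second ingredient: objects choose their representation based on `view.scale`.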
Reconnaissance Support for Juggling Multiple Processing Options BIBAKPDF 27-28
  Aran Lunzer
A large proportion of computer-supported tasks -- such as design exploration, decision analysis, data presentation, and many kinds of retrieval -- can be characterised as user-driven processing of a body of data in search of an outcome that satisfies the user. Clearly such tasks can never be automated fully, but few existing tools offer support for mechanising more than the simplest repetitive aspects of the search. Reconnaissance facilities, in which the computer produces summary reports from exploration in directions suggested by the user, can save the user time and effort by revealing which areas are the most deserving of detailed investigation. The time users are prepared to spend on searching will be more effectively used, improving the likelihood of finding solutions that really meet their needs rather than merely being the first to appear satisfactory. This note describes an implemented example of reconnaissance, based on the parallel coordinates presentation technique.
Keywords: Interaction techniques, Direct manipulation, Dynamic query, Graphical user interfaces, Visual programming, Data visualisation
Note: TechNote

Speech and Sound

Putting People First: Specifying Proper Names in Speech Interfaces BIBAKPDF 29-37
  Matt Marx; Chris Schmandt
Communication is about people, not machines. But as firms and families alike spread out geographically, we rely increasingly on telecommunications tools to keep us "connected." The challenge of such systems is to enable conversation between individuals without computational infrastructure getting in the way. This paper compares two speech-based communication systems, Phoneshell and Chatter, in how they deal with the keys to communication: proper names. Chatter, a conversational system using speech recognition, improves upon the hierarchical nature of the touch-tone based Phoneshell by maintaining context and enabling the use of anaphora. Proper names can present particular problems for speech recognizers, so an interface algorithm for reliable name specification by spelling is offered. Since individual letter recognition is non-robust, Chatter implicitly disambiguates strings of letters based on context. We hypothesize that the right interface can make faulty speech recognition as usable as TouchTones -- even more so.
Keywords: Speech recognition, Error-repair, User interface, Conversational systems
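The disambiguation idea in the abstract can be sketched concretely: even when the recognizer confuses acoustically similar letters, matching the whole heard string against a known directory of names usually recovers the intended one. The confusion sets and scoring below are illustrative assumptions, not Chatter's actual model.

```python
# Toy context-based spelling disambiguation: score each directory name
# against the heard letter sequence, allowing for plausible per-letter
# misrecognitions (e.g. the acoustically confusable "E-set" letters).

CONFUSABLE = [set("bcdegptvz"), set("afhjk"), set("ilry"),
              set("mn"), set("qu"), set("sx")]

def letter_score(heard, actual):
    if heard == actual:
        return 1.0
    if any(heard in s and actual in s for s in CONFUSABLE):
        return 0.5            # plausible misrecognition
    return 0.0

def best_name(heard_letters, directory):
    """Pick the directory name most consistent with the heard spelling."""
    candidates = [n for n in directory if len(n) == len(heard_letters)]
    return max(candidates,
               key=lambda n: sum(letter_score(h, a)
                                 for h, a in zip(heard_letters, n)),
               default=None)

directory = ["marx", "mack", "gaver", "weiser"]
# the recognizer heard "narx" (an M/N confusion); context picks the name
print(best_name("narx", directory))   # marx
```

The key point matches the paper's claim: per-letter recognition can be faulty as long as the string as a whole is disambiguated against context.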
An Architecture for Transforming Graphical Interfaces BIBAKPDF 39-47
  W. Keith Edwards; Elizabeth D. Mynatt
While graphical user interfaces have gained much popularity in recent years, there are situations when the need to use existing applications in a nonvisual modality is clear. Examples of such situations include the use of applications on hand-held devices with limited screen space (or even no screen space, as in the case of telephones), or users with visual impairments.
   We have developed an architecture capable of transforming the graphical interfaces of existing applications into powerful and intuitive nonvisual interfaces. Our system, called Mercator, provides new input and output techniques for working in the nonvisual domain. Navigation is accomplished by traversing a hierarchical tree representation of the interface structure. Output is primarily auditory, although other output modalities (such as tactile) can be used as well. The mouse, an inherently visually-oriented device, is replaced by keyboard and voice interaction.
   Our system is currently in its third major revision. We have gained insight into both the nonvisual interfaces presented by our system and the architecture necessary to construct such interfaces. This architecture uses several novel techniques to efficiently and flexibly map graphical interfaces into new modalities.
Keywords: Auditory interfaces, GUIs, X, Visual impairment, Multimodal interfaces
ENO: Synthesizing Structured Sound Spaces BIBAKPDF 49-57
  Michel Beaudouin-Lafon; William W. Gaver
ENO is an audio server designed to make it easy for applications in the Unix environment to incorporate non-speech audio cues. At the physical level, ENO manages a shared resource, namely the audio hardware. At the logical level, it manages a sound space that is shared by various client applications. Instead of dealing with sound in terms of its physical description (i.e., sampled sounds), ENO allows sounds to be represented and controlled in terms of higher-level descriptions of sources, interactions, attributes, and sound space. Using this structure, ENO can facilitate the creation of consistent, rich systems of audio cues. In this paper, we discuss the justification, design, and implementation of ENO.
Keywords: Auditory interfaces, Sound, Non-speech audio, Multimodal interfaces, Client-server architecture

Groupware and 3D Tools

An Architecture for an Extensible 3D Interface Toolkit BIBAKPDF 59-67
  Marc P. Stevens; Robert C. Zeleznik; John F. Hughes
This paper presents the architecture for an extensible toolkit used in construction and rapid prototyping of three dimensional interfaces, interactive illustrations, and three dimensional widgets. The toolkit provides methods for the direct manipulation of 3D primitives which can be linked together through a visual programming language to create complex constrained behavior. Features of the toolkit include the ability to visually build, encapsulate, and parametrize complex models, and impose limits on the models. The toolkit's constraint resolution technique is based on a dynamic object model similar to those in prototype delegation object systems. The toolkit has been used to rapidly prototype tools for mechanical modelling and scientific visualization, to construct 3D widgets, and to build mathematical illustrations.
Keywords: User interface toolkits, Visual programming, Interaction techniques, Constraints, Direct manipulation, Delegation
3D Widgets for Exploratory Scientific Visualization BIBAKPDF 69-70
  Kenneth P. Herndon; Tom Meyer
Scientists use a variety of visualization techniques to help understand computational fluid dynamics (CFD) datasets, but the interfaces to these techniques are generally two-dimensional and therefore separated from the 3D view. Both rapid interactive exploration of datasets and precise control over the parameters and placement of visualization techniques are required to understand complex phenomena contained in these datasets. In this paper, we present work in progress on a 3D user interface for exploratory visualization of these datasets.
Keywords: 3D user interface, Scientific visualization
Note: TechNote
Building Distributed, Multi-User Applications by Direct Manipulation BIBAKPDF 71-81
  Krishna Bharat; Marc H. Brown
This paper describes Visual Obliq, a user interface development environment for constructing distributed, multi-user applications. Applications are created by designing the interface with a GUI-builder and embedding callback code in an interpreted language, in much the same way as one would build a traditional (non-distributed, single-user) application with a modern user interface development environment. The resulting application can be run from within the GUI-builder for rapid turnaround or as a stand-alone executable. The Visual Obliq runtime provides abstractions and support for issues specific to distributed computing, such as replication, sharing, communication, and session management. We believe that the abstractions provided, the simplicity of the programming model, the rapid turnaround time, and the applicability to heterogeneous environments, make Visual Obliq a viable tool for authoring distributed applications and groupware.
Keywords: UIMS, GUI-builders, Application builders, Distributed applications, CSCW, Groupware
Ramonamap -- An Example of Graphical Groupware BIBAKPDF 83-84
  Joel F. Bartlett
Ramonamap is an interactive map for database and communication services within our workgroup. Resources are represented as icons on the map, which preserves their actual (or implied) physical location and capitalizes on a user's understanding of maps. The map is interactive, giving the user control over the level of detail visible, allowing more information and services to appear than could be placed on a static map. The interactivity also allows users to change the map and add icon annotations. Since the map is continuously derived from an on-line database, changes and annotations are immediately shared by all users. As the database contains a wealth of information about the group, it also serves as a source for static maps for other purposes.
Keywords: Groupware, Maps, Simulated annealing
Note: TechNote


Demos

Pad++: Advances in Multiscale Interfaces BIB --
  Benjamin B. Bederson; Larry Stead; James D. Hollan
Abstract Data Visualization at AT&T: Software and Beyond BIB --
  Brian S. Johnson
TacTool: A Tactile Interface Development Tool BIB --
  David Keyson
Powers of Ten Thousand: A Translucent Zooming Technique BIB --
  Henry Lieberman
SpeechActs BIB --
  Nicole Yankelovich

Demonstrational User Interfaces

Interactive Generation of Graphical User Interfaces by Multiple Visual Examples BIBAKPDF 85-94
  Ken Miyashita; Satoshi Matsuoka; Shin Takahashi; Akinori Yonezawa
The construction of application-specific Graphical User Interfaces (GUI) still needs considerable programming partly because the mapping between application data and its visual representation is complicated. This study proposes a system which generates GUIs by generalizing multiple sets of application data and its visualization examples. The most notable characteristic of the system is that programmers can interactively modify the mapping by "correcting" the system-generated visualization examples that represent the system's current notion of programmer's intentions. Conflicting mappings are automatically resolved via the use of constraint hierarchies.
Keywords: Graphical user interfaces, Programming by example, Visual parsing, Visualization, Constraint hierarchies
A Pure Reasoning Engine for Programming by Demonstration BIBAKPDF 95-101
  Martin R. Frank; James D. Foley
We present an inference engine that can be used for creating Programming By Demonstration systems. The class of systems addressed are those which infer a state change description from examples of state [9,11].
   The engine can easily be incorporated into an existing design environment that provides an interactive object editor.
   The main design goals of the inference engine are responsiveness and generality. All demonstrational systems must respond quickly because of their interactive use. They should also be general -- they should be able to make inferences for any attribute that the user may want to define by demonstration, and they should be able to treat any other attributes as parameters of this definition.
   The first goal, responsiveness, is best accommodated by limiting the number of attributes that the inference engine takes into consideration. This, however, is in obvious conflict with the second goal, generality.
   This conflict is intrinsic to the class of demonstrational system described above. The challenge is to find an algorithm which responds quickly but does not heuristically limit the number of attributes it looks at. We present such an algorithm in this paper.
   A companion paper describes Inference Bear [4], an actual demonstrational system that we have built using this inference engine and an existing user interface builder [5].
Keywords: Programming by demonstration
Evolutionary Learning of Graph Layout Constraints from Examples BIBAKPDF 103-108
  Toshiyuki Masui
We propose a new evolutionary method of extracting user preferences from examples shown to an automatic graph layout system. Using stochastic methods such as simulated annealing and genetic algorithms, automatic layout systems can find a good layout using an evaluation function which can calculate how good a given layout is. However, the evaluation function is usually not known beforehand, and it might vary from user to user. In our system, users show the system several pairs of good and bad layout examples, and the system infers the evaluation function from the examples using a genetic programming technique. After the evaluation function evolves to reflect the preferences of the user, it is used as a general evaluation function for laying out graphs. The same technique can be used for a wide range of adaptive user interface systems.
Keywords: Graphic object layout, Graph layout, Genetic algorithms, Genetic programming, Programming by example, Adaptive user interface
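The inference step above can be illustrated in miniature. Below, the evaluation function is just a weighted sum of two layout features (edge crossings, total edge length), and a toy random-mutation search stands in for genetic programming over whole expression trees; the features and search strategy are assumptions for illustration, not Masui's system.

```python
# Infer a layout evaluation function from good/bad example pairs by
# evolving feature weights; a weighting is fitter when it ranks more
# good layouts above their paired bad layouts.
import random

def evaluate(features, weights):
    """Layout badness: a weighted sum of features (lower is better)."""
    return sum(w * f for w, f in zip(weights, features))

def fitness(weights, pairs):
    """How many example pairs rank the good layout above the bad one."""
    return sum(evaluate(good, weights) < evaluate(bad, weights)
               for good, bad in pairs)

def evolve(pairs, iters=300, seed=1):
    """Mutate the best weights so far, with occasional random immigrants."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    for _ in range(iters):
        if rng.random() < 0.7:
            cand = [w + rng.gauss(0, 0.3) for w in best]
        else:
            cand = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
        if fitness(cand, pairs) >= fitness(best, pairs):
            best = cand
    return best

# each example pair: (features of good layout, features of bad layout),
# with features = (edge crossings, total edge length)
pairs = [((0, 12.0), (3, 10.0)),
         ((1, 15.0), (4, 14.0)),
         ((2, 8.0), (5, 9.0))]
weights = evolve(pairs)
print(fitness(weights, pairs))   # 3 -- every example pair ranked correctly
```

Once evolved, the same `evaluate` function can score candidate layouts inside a simulated-annealing or genetic layout loop, as the abstract describes.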

Visualization II

Developing Calendar Visualizers for the Information Visualizer BIBAKPDF 109-118
  Jock D. Mackinlay; George G. Robertson; Robert DeLine
The increasing mass of information confronting a business or an individual has created a demand for information management applications. Time-based information, in particular, is an important part of many information access tasks. This paper explores how to use 3D graphics and interactive animation to design and implement visualizers that improve access to large masses of time-based information. Two new visualizers have been developed for the Information Visualizer: 1) the Spiral Calendar was designed for rapid access to an individual's daily schedule, and 2) the Time Lattice was designed for analyzing the time relationships among the schedules of groups of people. The Spiral Calendar embodies a new 3D graphics technique for integrating detail and context by placing objects in a 3D spiral. It demonstrates that advanced graphics techniques can enhance routine office information tasks. The Time Lattice is formed by aligning a collection of 2D calendars. 2D translucent shadows provide views and interactive access to the resulting complex 3D object. The paper focuses on how these visualizations were developed. The Spiral Calendar, in particular, has gone through an entire cycle of development, including design, implementation, evaluation, revision and reuse. Our experience should prove useful to others developing user interfaces based on advanced graphics.
Keywords: Information visualization graphical representations, Information retrieval, Detail+context technique, Interactive animation, 3D graphics, Calendars, Translucent shadows
Data Visualization Sliders BIBAKPDF 119-120
  Stephen G. Eick
Computer sliders are a generic user input mechanism for specifying a numeric value from a range. For data visualization, the effectiveness of sliders may be increased by using the space inside the slider as
  • an interactive color scale,
  • a barplot for discrete data, and
  • a density plot for continuous data.
   The idea is to show the selected values in relation to the data and its distribution. Furthermore, the selection mechanism may be generalized using a painting metaphor to specify arbitrary, disconnected intervals while maintaining an intuitive user-interface.
Keywords: High interaction, Thresholding, Information visualization, Selection, Dynamic graphics
Note: TechNote
Translucent Patches -- Dissolving Windows BIBAKPDF 121-130
  Axel Kramer
This paper presents motivation, design, and algorithms for using and implementing translucent, non-rectangular patches as a substitute for rectangular opaque windows. The underlying metaphor is closer to a mix between the architect's yellow paper and the usage of whiteboards than to rectangular opaque paper in piles and folders on a desktop.
   Translucent patches lead to a unified view of windows, sub-windows, and selections, and provide a base from which the tight connection between windows, their content, and applications can be dissolved. This work forms one aspect of ongoing research to support design activities that involve "marking" media, like paper and whiteboards, with computers. The central idea of that research is to allow the user to associate structure and meaning dynamically and smoothly with marks on a display surface.
Keywords: Interface metaphors, Interaction techniques, Irregular shapes, Translucency, Pen based interfaces
Nova: Low-Cost Data Animation Using a Radar-Sweep Metaphor BIBAKPDF 131-132
  Ralph E. Griswold; Clinton L. Jeffery
Nova is a simple technique for animating a data sequence whose elements include a primary numeric component and possibly one or more secondary dimensions. We use Nova to visualize program behavior such as individual memory allocations, where the number of bytes in each allocation is a natural primary numeric dimension.
Keywords: Software visualization, Radial plots
Note: TechNote


Panel

Model-Based User Interfaces: What are They and Why Should We Care? BIBKPDF 133-135
  Noi Sukaviriya; Srdjan Kovacevic; James D. Foley; Brad A. Myers; Dan R. Olsen, Jr.; Matthias Schneider-Hufschmidt
Keywords: Model-based user interface, User interface, Application modeling, Design specification, Design representation


Constraints

SkyBlue: A Multi-Way Local Propagation Constraint Solver for User Interface Construction BIBAKPDF 137-146
  Michael Sannella
Many user interface toolkits use constraint solvers to maintain geometric relationships between graphic objects, or to connect the graphics to the application data structures. One efficient and flexible technique for maintaining constraints is multi-way local propagation, where constraints are represented by sets of method procedures. To satisfy a set of constraints, a local propagation solver executes one method from each constraint.
   SkyBlue is an incremental constraint solver that uses local propagation to maintain a set of constraints as individual constraints are added and removed. If all of the constraints cannot be satisfied, SkyBlue leaves weaker constraints unsatisfied in order to satisfy stronger constraints (maintaining a constraint hierarchy). SkyBlue is a more general successor to the DeltaBlue algorithm that satisfies cycles of methods by calling external cycle solvers and supports multi-output methods. These features make SkyBlue more useful for constructing user interfaces, since cycles of constraints can occur frequently in user interface applications and multi-output methods are necessary to represent some useful constraints. This paper discusses some of the applications that use SkyBlue, presents times for some user interface benchmarks, and describes the SkyBlue algorithm in detail.
Keywords: SkyBlue, Constraints, Local propagation, Constraint hierarchies, User interface implementation
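The core mechanism the abstract describes, multi-way local propagation, is easy to show in miniature: each constraint carries one method per variable it can compute, and the solver plans one method per constraint so that each variable is written at most once. The sketch below is far simpler than SkyBlue (no constraint hierarchies, no cycle solvers, no multi-output methods) and the data structures are illustrative assumptions.

```python
# Toy multi-way local propagation: plan one method per constraint whose
# inputs are already determined and whose output is still free, then
# execute the plan in order.

class Constraint:
    def __init__(self, name, methods):
        self.name = name
        # methods: {output_var: (input_vars, function)}
        self.methods = methods

def propagate(constraints, known):
    values = dict(known)
    determined = set(values)
    pending = list(constraints)
    plan = []
    progress = True
    while pending and progress:
        progress = False
        for c in list(pending):
            for out, (ins, fn) in c.methods.items():
                if out not in determined and all(i in determined for i in ins):
                    plan.append((out, ins, fn))
                    determined.add(out)
                    pending.remove(c)
                    progress = True
                    break
    if pending:
        raise ValueError("cycle or under-determined constraint system")
    for out, ins, fn in plan:
        values[out] = fn(*(values[i] for i in ins))
    return values

# a classic multi-way constraint: celsius <-> fahrenheit
temp = Constraint("temp", {
    "f": (["c"], lambda c: c * 9 / 5 + 32),
    "c": (["f"], lambda f: (f - 32) * 5 / 9),
})
print(propagate([temp], {"c": 100.0}))   # {'c': 100.0, 'f': 212.0}
print(propagate([temp], {"f": 32.0}))    # {'f': 32.0, 'c': 0.0}
```

Note how the same constraint is satisfied by a different method depending on which variable the user (or another constraint) has already set; that direction-independence is what "multi-way" means.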
Dialing for Documents: An Experiment in Information Theory BIBAKPDF 147-155
  Harald Rau; Steven S. Skiena
Standard telephone keypads are labeled with letters of the alphabet, enabling users to enter textual data for a variety of possible applications. However, the overloading of three letters on a single key creates a potential ambiguity as to which character was intended, which must be resolved for unambiguous text entry. Existing systems all use pairs of keypresses to spell out single letters, but are extremely cumbersome and frustrating to use.
   Instead, we propose single-stroke text entry on telephone keypads, with the ambiguity resolved by exploiting information-theoretic constraints. We develop algorithms capable of correctly identifying up to 99% of the characters in typical English text, sufficient for such applications as telephones for the hearing-impaired, E-mail without a terminal, and advanced voice-response systems.
Keywords: Telephone keypads, Information theory, Telephones for the hearing-impaired, Viterbi algorithm
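Single-stroke entry is easy to demonstrate: each key press narrows the word to one of three letters per position, and a language model picks the most likely candidate. In the sketch below a tiny word-frequency dictionary stands in for the paper's character-level information-theoretic model (which uses Viterbi decoding); the keypad mapping reflects the classic 1994-era layout, which omits Q and Z.

```python
# Toy single-stroke keypad disambiguation: find all dictionary words
# matching the key sequence, then pick the most frequent one.

KEYS = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "prs", "8": "tuv", "9": "wxy"}

def key_for(letter):
    return next(k for k, letters in KEYS.items() if letter in letters)

def decode(digits, freq):
    """Most frequent dictionary word matching the key sequence."""
    matches = [w for w in freq
               if len(w) == len(digits)
               and all(key_for(ch) == d for ch, d in zip(w, digits))]
    return max(matches, key=freq.get, default=None)

freq = {"good": 100, "home": 80, "gone": 60, "hood": 20}
# "4663" is ambiguous among all four words; frequency resolves it
print(decode("4663", freq))   # good
```

The paper's character-level approach generalizes this to arbitrary text rather than a fixed word list, which is what pushes accuracy toward 99%.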
Optimizing Toolkit-Generated Graphical Interfaces BIBAKPDF 157-166
  Bradley T. Vander Zanden
Researchers have developed a variety of toolkits that support the development of highly interactive, graphical, direct manipulation applications such as animations, process monitoring tools, drawing packages, visual programming languages, games, and data and program visualization systems. These toolkits contain many useful features, such as 1) structured graphics, 2) automatic display management, 3) constraints, and 4) high-level input-handling models. Despite a number of optimizations that have been described in the literature, most toolkit-generated applications run in a predominantly interpreted mode at runtime: they dynamically determine the set of constraints and objects that must be redisplayed, which requires the use of time-consuming algorithms and data structures. The optimizations that do exist rely on semantic information that applies globally to all operations in an application. In this paper we identify a number of optimizations that require local, operation-specific semantic information about an application. For each operation, these optimizations pre-compute update plans that minimize the number of objects that are examined for redisplay, and pre-compute constraint plans that minimize the amount of dynamic scheduling and method dispatching that is performed for constraint satisfaction. We present performance measurements that suggest that these optimizations can significantly improve the performance of an application. We also discuss how a compiler might obtain from a programmer the information required to implement these optimizations.
Keywords: Structured graphics, Automatic redisplay, Constraints, Development tools, Optimization

Drawing and Sketching

Blending Structured Graphics and Layout BIBAKPDF 167-174
  Steven H. Tang; Mark A. Linton
Conventional windowing environments provide separate classes of objects for user interface components, or "widgets," and graphical objects. Widgets negotiate layout and can be resized as rectangles, while graphics may be shared, transformed, transparent, and overlaid. This presents a major obstacle to applications like user interface builders and compound document editors where the manipulated objects need to behave both like graphics and widgets.
   Fresco[1] blends graphics and widgets into a single class of objects. We have an implementation of Fresco and an editor called Fdraw that allows graphical objects to be composed like widgets, and widgets to be transformed and shared like graphics. Performance measurements of Fdraw show that sharing reduces memory usage without slowing down redisplay.
Keywords: User interface toolkit, Object-oriented graphics, Structured graphics, User interface builder
A Perceptually-Supported Sketch Editor BIBAKPDF 175-184
  Eric Saund; Thomas P. Moran
The human visual system makes a great deal more of images than the elemental marks on a surface. In the course of viewing, creating, or editing a picture, we actively construct a host of visual structures and relationships as components of sensible interpretations. This paper shows how some of these computational processes can be incorporated into perceptually-supported image editing tools, enabling machines to better engage users at the level of their own percepts. We focus on the domain of freehand sketch editors, such as an electronic whiteboard application for a pen-based computer. By using computer vision techniques to perform covert recognition of visual structure as it emerges during the course of a drawing/editing session, a perceptually supported image editor gives users access to visual objects as they are perceived by the human visual system. We present a flexible image interpretation architecture based on token grouping in a multiscale blackboard data structure. This organization supports multiple perceptual interpretations of line drawing data, domain-specific knowledge bases for interpretable visual structures, and gesture-based selection of visual objects. A system implementing these ideas, called PerSketch, begins to explore a new space of WYPIWYG (What You Perceive Is What You Get) image editing tools.
Keywords: Image editing, Graphics editing, Drawing tools, Sketch tools, Interactive graphics, Pen computing, Gestures, Machine vision, Computer vision, Perceptual grouping, Perceptual organization, Token grouping, Scale space blackboard, WYPIWYG, PerSketch
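The token-grouping idea can be sketched with one simple perceptual rule: strokes whose endpoints (nearly) coincide are covertly grouped into a single selectable object. The proximity rule and threshold below are illustrative assumptions; PerSketch's grouping knowledge is much richer.

```python
# Toy perceptual grouping: union strokes into visual objects whenever any
# pair of their endpoints lies within a tolerance of each other.
import math

def endpoints(stroke):
    return stroke[0], stroke[-1]

def near(p, q, tol=5.0):
    return math.dist(p, q) <= tol

def group_strokes(strokes, tol=5.0):
    """Union-find over strokes linked by endpoint proximity."""
    parent = list(range(len(strokes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(len(strokes)):
        for j in range(i + 1, len(strokes)):
            if any(near(p, q, tol)
                   for p in endpoints(strokes[i])
                   for q in endpoints(strokes[j])):
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(strokes)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# three strokes forming a triangle, plus one distant stray mark
strokes = [[(0, 0), (50, 0)], [(50, 0), (25, 40)], [(25, 40), (0, 0)],
           [(200, 200), (240, 200)]]
print(group_strokes(strokes))   # [[0, 1, 2], [3]]
```

A gesture that lassos any one stroke of the triangle can then select the whole perceived object, which is the interaction the abstract describes.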
A Mark-Based Interaction Paradigm for Free-Hand Drawing BIBAKPDF 185-192
  Thomas Baudel
We propose an interaction technique for editing splines that is aimed at professional graphic designers. These users do not take full advantage of existing spline editing software because their mental representations of drawings do not match the underlying conceptual model of the software. Although editing splines by specifying control points and tangents may be appropriate for engineers, graphic designers think more in terms of strokes, shapes, and gestures appropriate for editing drawings. Our interaction technique matches the latter model: curves can be edited by means of marks, similar to the way strokes are naturally overloaded when drawing on paper. We describe this interaction technique and the algorithms used for its implementation.
Keywords: Mark-based interaction, Gestures, Spline editing, Interaction models, Graphic design, CAD

Closing Plenary

Trends in the Computer Industry: Life-Long Subscriptions, Magical Cures, and Profits Along the Information Highway BIBAPDF 193
  Don Norman
It doesn't work the way you think it works. Technical, business, and social factors affect the way that new technologies are deployed. Once ideas are let out of the laboratory, common sense disappears, especially in the rush to show that one company's products are superior to another's almost equal, very similar ones. The easy part of interface design is the technology and the science. The hard parts are the social aspects: negotiating the multiple constraints on products, including cost, business models, the sales story, time to market, and those well known impediments to progress: the installed base and industry standards.
   The race is to the swift and the clever, not to the best. Customers purchase what they are told they want. Wants are not the same things as needs; customers are not the same people as users. Don't believe everything you read. In fact, don't believe anything. How much science and research actually impacts products? Less than you might think, less than you might hope, but often for good reasons.

    Two Hands and Three Dimensions

    Extending a Graphical Toolkit for Two-Handed Interaction BIBAKPDF 195-204
      Stephane Chatty
Multimodal interaction combines input from multiple sensors, such as pointing devices or speech recognition systems, in order to achieve more fluid and natural interaction. Two-handed interaction has recently been used to enrich graphical interaction. Building applications that use such combined interaction requires new software techniques and frameworks. Using additional devices means that user interface toolkits must be more flexible with regard to input devices and event types. The possibility of parallel interactions must also be taken into account, with consequences for the structure of toolkits. Finally, frameworks must be provided for combining the events and status of several devices. This paper reports on the extensions we made to the direct manipulation interface toolkit Whizz in order to experiment with two-handed interaction. These extensions range from structural adaptations of the toolkit to new techniques for specifying the time-dependent fusion of events.
    Keywords: Interaction styles, Multimodal interaction, Two-handed interaction, Graphical toolkit, Direct manipulation
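The "time-dependent fusion of events" mentioned above can be illustrated with a minimal sketch (hypothetical names, not the Whizz API): events from two device streams are merged into one logical event when their timestamps fall within a fusion window, as a toolkit might do to pair the two hands' actions.

```python
FUSION_WINDOW = 0.1  # seconds; an assumed threshold for this sketch

def fuse(left_events, right_events, window=FUSION_WINDOW):
    """Pair (timestamp, data) events from two time-ordered streams whose
    timestamps differ by at most `window`; unpaired events pass through."""
    fused, j = [], 0
    right = list(right_events)
    for t, data in left_events:
        while j < len(right) and right[j][0] < t - window:
            fused.append(('right', right[j]))    # too old to pair
            j += 1
        if j < len(right) and abs(right[j][0] - t) <= window:
            fused.append(('both', (t, data), right[j]))  # fused event
            j += 1
        else:
            fused.append(('left', (t, data)))
    fused.extend(('right', e) for e in right[j:])
    return fused

# Two near-simultaneous presses fuse; the later events stay separate.
left = [(0.00, 'press'), (0.50, 'drag')]
right = [(0.05, 'press'), (0.90, 'release')]
result = fuse(left, right)
```

A real toolkit would do this incrementally as events arrive rather than over whole lists, but the windowing decision is the same.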
    Two-Handed Polygonal Surface Design BIBAKPDF 205-212
      Chris Shaw; Mark Green
This paper describes a Computer Aided Design system for sketching free-form polygonal surfaces such as terrains and other natural objects. The user manipulates two 3D position and orientation trackers with three buttons, one for each hand. Each hand has a distinct role to play, with the dominant hand being responsible for picking and manipulation, and the less-dominant hand being responsible for context setting of various kinds. The less-dominant hand holds the workpiece, sets which refinement level can be picked by the dominant hand, and generally acts as a counterpoint to the dominant hand. In this paper, the architecture of the system is outlined, and a simple surface is shown.
    Keywords: User interface software, Virtual reality, Interactive 3D graphics, Two handed interfaces, Free-form surfaces, Geometric modeling
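The asymmetric division of labour described above can be sketched minimally (hypothetical names, 2D rather than the system's 6-DOF trackers): the less-dominant hand positions and orients the workpiece frame, and the dominant hand's pick is interpreted in that frame.

```python
import math

class Workpiece:
    """Toy workpiece frame: the less-dominant hand moves it, the
    dominant hand picks relative to it."""

    def __init__(self):
        self.x, self.y, self.angle = 0.0, 0.0, 0.0

    def grab(self, x, y, angle):
        """Less-dominant hand: move and orient the whole workpiece."""
        self.x, self.y, self.angle = x, y, angle

    def to_local(self, px, py):
        """Dominant hand: express a pick point in workpiece coordinates
        by undoing the workpiece's translation and rotation."""
        dx, dy = px - self.x, py - self.y
        c, s = math.cos(-self.angle), math.sin(-self.angle)
        return (c * dx - s * dy, s * dx + c * dy)

w = Workpiece()
w.grab(1.0, 1.0, math.pi / 2)   # rotate the workpiece a quarter turn
local = w.to_local(1.0, 2.0)    # a pick one unit "above" the frame origin
```

Because picking is resolved in workpiece coordinates, the dominant hand's gestures stay meaningful however the other hand turns the object, which is the essence of the two-handed design.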
    A Survey of Design Issues in Spatial Input BIBAKPDF 213-222
      Ken Hinckley; Randy Pausch; John C. Goble; Neal F. Kassell
    We present a survey of design issues for developing effective free-space three-dimensional (3D) user interfaces. Our survey is based upon previous work in 3D interaction, our experience in developing free-space interfaces, and our informal observations of test users. We illustrate our design issues using examples drawn from instances of 3D interfaces.
       For example, our first issue suggests that users have difficulty understanding three-dimensional space. We offer a set of strategies which may help users to better perceive a 3D virtual environment, including the use of spatial references, relative gesture, two-handed interaction, multisensory feedback, physical constraints, and head tracking. We describe interfaces which employ these strategies.
       Our major contribution is the synthesis of many scattered results, observations, and examples into a common framework. This framework should serve as a guide to researchers or systems builders who may not be familiar with design issues in spatial input. Where appropriate, we also try to identify areas in free-space 3D interaction which we see as likely candidates for additional research.
   An extended and annotated version of this paper's reference list is available on-line through Mosaic at http://uvacs.cs.virginia.edu/~kph2q/.
    Keywords: Spatial input, Virtual reality, 3D interaction, Two-handed input, Ergonomics of virtual manipulation, Haptic input