
Proceedings of the 2003 Conference on Graphics Interface

Fullname: Proceedings of the 2003 Conference on Graphics Interface
Editors: Torsten Moeller; Colin Ware
Location: Halifax, Nova Scotia, Canada
Dates: 2003-Jun-11 to 2003-Jun-13
Publisher: Canadian Information Processing Society
Standard No: ISBN 1-56881-207-8; hcibib: GI03
Links: Conference Series Home Page | Online Proceedings
  1. Modeling
  2. Detail and Context
  3. Hardware Methods
  4. Input
  5. Rendering
  6. Mixing Reality
  7. Meshes and Surfaces
  8. Multimedia
  9. Deformable Models

Modeling

Fast Extraction of BRDFs and Material Maps from Images BIBAPDFGI Online Paper 1-10
  Rafal Jaroszkiewicz; Michael McCool
The high dimensionality of the BRDF makes it difficult to use measured data for hardware rendering. Common solutions to overcome this problem include expressing a BRDF as a sum of basis functions or factorizing it into several functions of smaller dimensions which can be sampled into a texture.
   In this paper we will focus on homomorphic factorization, which can be accelerated by preinverting the constraint matrix if the sampling pattern and the layout of the samples in the representation are fixed. Applying the preinverted constraint matrix is very fast and can be used to calculate factorization and material maps at interactive rates. We use this to derive shaders from painted examples. The technique presented in this paper allows interactive definition of materials, and, although based on physical parameters, this method can realize a variety of non-photorealistic effects.
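The speed advantage of preinverting a fixed constraint matrix can be illustrated with a generic least-squares sketch. The matrix shapes and random data below are hypothetical stand-ins, not the paper's actual homomorphic factorization system:

```python
import numpy as np

# If the sampling pattern and sample layout are fixed, the constraint
# matrix A of the least-squares problem is fixed too, so its pseudoinverse
# can be computed once, offline. Each new set of measurements b (e.g. a
# painted example) is then solved with a single matrix product.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 30))   # fixed: stands in for the constraint matrix
A_pinv = np.linalg.pinv(A)       # preinverted once

b = rng.normal(size=200)         # new measurements, arriving interactively
x = A_pinv @ b                   # fast per-example solve

# The preinverted solve matches solving the least-squares system directly.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref)
```

For a full-column-rank system, the pseudoinverse solution is exactly the least-squares solution, which is why the precomputation loses nothing in accuracy.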
Interactive Point-based Modeling of Complex Objects from Images BIBAPDFGI Online Paper 11-20
  Pierre Poulin; Marc Stamminger; François Duranleau; Marie-Claude Frasson; George Drettakis
Modeling complex realistic objects is a difficult and time-consuming process. Nevertheless, with improvements in rendering speed and quality, more and more applications require such realistic complex 3D objects. We present an interactive modeling system that extracts 3D objects from photographs. Our key contribution lies in the tight integration of a point-based representation and user interactivity, achieved by introducing a set of interactive tools to guide reconstruction. 3D color points are a flexible and effective representation for very complex objects; adding, moving, or removing points is fast and simple, facilitating easy improvement of object quality. Because images and depth maps can be generated very rapidly from points, testing the validity of point projections in several images is efficient and simple. These properties allow our system to rapidly generate a first approximate model, and allow the user to continuously and interactively guide the generation of points, both locally and globally. A set of interactive tools and optimizations are introduced to help the user improve the extracted objects.
Silhouette-Based 3D Face Shape Recovery BIBAPDFGI Online Paper 21-30
  Jinho Lee; Baback Moghaddam; Hanspeter Pfister; Raghu Machiraju
The creation of realistic 3D face models is still a fundamental problem in computer graphics. In this paper we present a novel method to obtain the 3D shape of an arbitrary human face using a sequence of silhouette images as input. Our face model is a linear combination of eigenheads, which are obtained by a Principal Component Analysis (PCA) of laser-scanned 3D human faces. The coefficients of this linear decomposition are used as our model parameters. We introduce a near-automatic method for reconstructing a 3D face model whose silhouette images match closest to the set of input silhouettes.
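The eigenhead representation described above is a standard PCA decomposition; a minimal sketch with small synthetic stand-in data (real models use dense laser-scanned meshes, not 12-dimensional vectors):

```python
import numpy as np

# Hypothetical data: 5 scanned faces, each flattened to 12 coordinates.
rng = np.random.default_rng(0)
scans = rng.normal(size=(5, 12))

# PCA: a mean face plus orthonormal "eigenheads" from the centered scans.
mean_face = scans.mean(axis=0)
_, _, vt = np.linalg.svd(scans - mean_face, full_matrices=False)
eigenheads = vt  # each row is one eigenhead

# A face model is the mean plus a linear combination of eigenheads;
# the combination coefficients are the model parameters fitted to the
# input silhouettes.
coeffs = np.array([0.5, -1.2, 0.0, 0.3, 0.0])
face = mean_face + coeffs @ eigenheads

# Projecting a face back onto the (orthonormal) basis recovers its
# coefficients, which is what makes the parameterization convenient.
recovered = (face - mean_face) @ eigenheads.T
assert np.allclose(recovered, coeffs)
```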
Simulating Fluid-Solid Interaction BIBAPDFZIPGI Online Paper 31-38
  Olivier Génevaux; Arash Habibi; Jean-Michel Dischler
Though realistic Eulerian fluid simulation systems now provide believable movements, straightforward renderable surface representations, and affordable computation costs, they are still unable to deal with non-static objects in a realistic manner. Namely, objects cannot have an influence on the fluid and be simultaneously affected by the fluid's motion. In this paper, a simulation scheme for fluids is proposed that allows automatic generation of physically plausible motions alongside realistic interactions with solids. The method relies mainly on the definition of a coupling force between the solids and the fluid, thus bridging the gap between commonly used Eulerian fluid animation models and Lagrangian solid ones. This new method thus improves existing fluid simulations, making them capable of generating new kinds of motions, such as a floating ball displaced by the wave created by its own splash into the water.

Detail and Context

A Comparison of Traditional and Fisheye Radar View Techniques for Spatial Collaboration BIBAPDFGI Online Paper 39-46
  Wendy A. Schafer; Doug A. Bowman
The activity of spatial collaboration involves solving spatial problems related to a large, physical area. Representing this area in collaboration software is not trivial. Radar views are a popular technique for providing awareness information in shared representations. They indicate where each user is working and any overlaps in users' viewports. However, spatial collaboration requires more features than those provided by radar views. An enhanced design that uses fisheye techniques is offered and compared in an empirical study with a traditional approach to radar views. Results indicate that the enhanced design has the potential to better support spatial collaboration activities and that users are divided on which technique they prefer. A discussion of the results and suggestions for redesign are also presented.
Finding Things In Fisheyes: Memorability in Distorted Spaces BIBAPDFGI Online Paper 47-56
  Amy Skopik; Carl Gutwin
Interactive fisheye views use distortion to show both local detail and global context in the same display space. Although fisheyes allow the presentation and inspection of large data sets, the distortion effects can cause problems for users. One such problem is memorability - the ability to find and go back to objects and features in the data. In this paper we investigate the issue of how people remember object locations in distorted spaces, using a Sarkar-Brown fisheye lens that drastically affects the space. We carried out two studies. The first gathered information about what memory strategies people choose at increasing levels of distortion, without presupposing any particular strategy. The second looked more closely at how two particular strategies (maintaining a mental map, and using landmarks in the data) affected memory performance. We found that as distortion increases, people do use different memory strategies and that at higher levels of distortion, landmarks become increasingly important as memory aids.
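For reference, the Sarkar-Brown lens used in this study distorts normalized distance from the focus with a simple rational magnification function, commonly written as g(x) = (d+1)x / (dx+1); a one-axis sketch (parameter values are illustrative):

```python
def fisheye(x, d):
    """Sarkar-Brown magnification of a normalized distance x in [0, 1]
    from the focus; d >= 0 is the distortion factor (d = 0 is identity)."""
    return (d + 1) * x / (d * x + 1)

# The focus and the boundary stay fixed...
assert fisheye(0.0, 5) == 0.0
assert fisheye(1.0, 5) == 1.0
# ...while spacing near the focus expands (local detail) and spacing
# near the edge shrinks (compressed context) -- the distortion that
# makes object locations hard to remember at high d.
assert fisheye(0.1, 5) - fisheye(0.0, 5) > 0.1
assert fisheye(0.95, 5) - fisheye(0.9, 5) < 0.05
```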
Comparing ExoVis, Orientation Icon, and In-Place 3D Visualization Techniques BIBAPDFZIPGI Online Paper 57-64
  Melanie Tory; Colin Swindells
With large volume data sets, it can be difficult to visualize the data all at once. Multiple views can address this problem by displaying details in areas of interest while still keeping track of the global overview. Many "detail and context" techniques exist for volume data, but it is unclear when to use each one. We introduce a new class of methods called ExoVis, an alternative that balances trade-offs of existing techniques. We then heuristically compare ExoVis to existing methods to provide insight into when each technique is appropriate.

Hardware Methods

Hardware-Accelerated Visual Hull Reconstruction and Rendering BIBAPDFMPGMPGMPGGI Online Paper 65-72
  Ming Li; Marcus Magnor; Hans-Peter Seidel
We present a novel algorithm for simultaneous visual hull reconstruction and rendering by exploiting off-the-shelf graphics hardware. The reconstruction is accomplished by projective texture mapping in conjunction with the alpha test. Parallel to the reconstruction, rendering is also carried out in the graphics pipeline. We texture the visual hull view-dependently with the aid of fragment shaders, such as nVIDIA's register combiners. Both reconstruction and rendering are done in a single rendering pass. We achieve frame rates of more than 80 fps on a standard PC equipped with a commodity graphics card. The performance is significantly faster than that of previously reported similar systems.
CInDeR: Collision and Interference Detection in Real-time Using graphics hardware BIBAPDFGI Online Paper 73-80
  Dave Knott; Dinesh K. Pai
Collision detection is a vital task in almost all forms of computer animation and physical simulation. It is also one of the most computationally expensive, and therefore a frequent impediment to efficient implementation of real-time graphics applications. We describe how graphics hardware can be used as a geometric co-processor to carry out the bulk of the computation involved with collision detection. Hardware frame buffer operations are used to implement a ray-casting algorithm which detects static interference between solid polyhedral objects. The algorithm is linear in both the number of objects and number of polygons in the models. It also requires no preprocessing or special data structures.
Texture Partitioning and Packing for Accelerating Texture-Based Volume Rendering BIBAPDFPDFGI Online Paper 81-88
  Wei Li; Arie Kaufman
To apply empty-space skipping in texture-based volume rendering, we partition the texture space with a box-growing algorithm. Each sub-texture comprises neighboring voxels with similar densities and gradient magnitudes. Sub-textures with similar ranges of density and gradient magnitude are then packed into larger ones to reduce the number of textures. The partitioning and packing are independent of the transfer function. During rendering, the visibility of a box is determined by whether any of its enclosed voxels is assigned a non-zero opacity by the current transfer function. Only the sub-textures from the visible boxes are blended, and only the packed textures containing visible sub-textures reside in the texture memory. We arrange the densities and the gradients into separate textures to avoid storing the empty regions in the gradient texture, which is transfer-function independent. The partitioning and packing can be considered a lossless texture compression with an average compression rate of 3.1:1 for the gradient textures. Running on the same hardware and generating exactly the same images, the proposed method renders 3 to 6 times faster on average than traditional approaches for various datasets in different rendering modes.

Input

Input-Based Language Modelling in the Design of High Performance Text Input Techniques BIBAPDFGI Online Paper 89-96
  R. William Soukoreff; I. Scott MacKenzie
We present a critique of language-based modelling for text input research, and propose an alternative input-based approach. Current language-based statistical models are derived from large samples of text (corpora). However, this text reflects only the output, or final result, of the text input task. We argue that this weakens the utility of the model, because, (1) users' language is typically quite different from that in any corpus; punctuation symbols, acronyms, slang, etc. are frequently used. (2) A corpus does not reflect the editing process used in its creation. (3) No existing corpus captures the input modalities of text input devices. Actions associated with keys such as Shift, Alt, and Ctrl are missing. We present a study to validate our arguments. Keystroke data from four subjects were collected over a one-month period. Results are presented that support the need for input-based language modelling for text input.
Less-Tap: A Fast and Easy-to-Learn Text Input Technique for Phones BIBAPDFGI Online Paper 97-104
  Andriy Pavlovych; Wolfgang Stuerzlinger
A new technique to enter text using a mobile phone keypad, Less-Tap, is described. The traditional touch-tone phone keypad is ambiguous for text input because each button encodes 3 or 4 letters. As in Multitap, our method requires the user to press buttons repeatedly to get a required letter. However, in Less-Tap, letters are rearranged within each button according to their frequency. This way, the most common letters require only one key press.
   Unlike dictionary-based methods, Less-Tap facilitates the entry of arbitrary words. Unlike LetterWise and T9, Less-Tap allows entering text without having to visually verify the result, after some initial training. For English, Less-Tap requires an average of 1.5266 keystrokes per character (vs. 2.0342 in Multitap).
   We conducted a user study to compare Less-Tap against Multitap. Each participant had three 20-minute sessions with each technique. The mean entry speed was 9.5% higher with the new technique.
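The keystrokes-per-character gain can be sketched by scoring a multitap cycle against letter frequencies. The frequency table below is approximate and illustrative, and it covers letters only, so the resulting figures will differ from the paper's (which also account for space and punctuation):

```python
# Standard phone keypad letter groups.
KEYS = ["abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"]

# Approximate English letter frequencies (percent); illustrative values.
FREQ = {'a': 8.2, 'b': 1.5, 'c': 2.8, 'd': 4.3, 'e': 12.7, 'f': 2.2,
        'g': 2.0, 'h': 6.1, 'i': 7.0, 'j': 0.15, 'k': 0.77, 'l': 4.0,
        'm': 2.4, 'n': 6.7, 'o': 7.5, 'p': 1.9, 'q': 0.095, 'r': 6.0,
        's': 6.3, 't': 9.1, 'u': 2.8, 'v': 0.98, 'w': 2.4, 'x': 0.15,
        'y': 2.0, 'z': 0.074}

def kspc(keys):
    """Expected key presses per letter: a letter in position i of its
    key's tap cycle costs i + 1 presses."""
    total = sum(FREQ.values())
    return sum(FREQ[ch] * (i + 1)
               for key in keys for i, ch in enumerate(key)) / total

multitap = kspc(KEYS)  # alphabetical order within each key

# Less-Tap's idea: reorder letters within each key by frequency, so the
# most common letters cost a single press.
lesstap = kspc(["".join(sorted(key, key=lambda c: -FREQ[c]))
                for key in KEYS])
assert lesstap < multitap
```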
The Effects of Dynamic Transparency on Targeting Performance BIBAPDFGI Online Paper 105-112
  Carl Gutwin; Jeff Dyck; Chris Fedak
Transparency can be used to increase the visibility of a user's workspace in situations where the space is obscured by floating windows and tool palettes. Dynamic transparency takes this approach further by making components more transparent when the user's cursor is far away. However, dynamic transparency may make palettes and floating windows more difficult to target. We carried out a study to test the effects of different types of dynamic transparency on targeting performance. We found that although targeting time does increase as targets become more transparent, the increases are small - often less than ten percent. Our study suggests reasonable maximum, minimum, and default transparency levels for designers of dynamic transparency schemes.
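A dynamic-transparency scheme of the kind studied above maps cursor distance to component opacity; a sketch with made-up distance and alpha constants (not the levels the paper recommends):

```python
def palette_opacity(cursor_dist, near=50.0, far=300.0,
                    min_alpha=0.25, max_alpha=0.9):
    """Opacity of a floating palette as a function of cursor distance in
    pixels: fully visible (max_alpha) within `near`, fading linearly to
    min_alpha at `far` and beyond. All constants are illustrative."""
    if cursor_dist <= near:
        return max_alpha
    if cursor_dist >= far:
        return min_alpha
    t = (cursor_dist - near) / (far - near)
    return max_alpha + t * (min_alpha - max_alpha)

# The palette never becomes fully invisible (min_alpha > 0), which keeps
# it targetable while revealing the workspace underneath.
assert palette_opacity(0) == 0.9
assert palette_opacity(1000) == 0.25
assert abs(palette_opacity(175) - 0.575) < 1e-9  # linear midpoint
```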
A Gestural Interface to Free-Form Deformation BIBAPDFMOVGI Online Paper 113-120
  Geoffrey M. Draper; Parris K. Egbert
We present a gesture-based user interface to Free-Form Deformation (FFD). Traditional interfaces for FFD require the manipulation of individual points in a lattice of control vertices, a process which is both time-consuming and error-prone. In our system, the user can bend, twist, and stretch/squash the model as if it were a solid piece of clay without being unduly burdened by the mathematical details of FFD. We provide the user with a small but powerful set of gesture-based "ink stroke" commands that are invoked simply by drawing them on the screen. The system automatically infers the user's intention from the stroke and deforms the model without any vertex-specific input from the user. Both the stroke recognition and FFD algorithms are executed in real-time on a standard PC.

Rendering

Dynamic Canvas for Non-Photorealistic Walkthroughs BIBAPDFPNGPNGHTMLGI Online Paper 121-130
  Matthieu Cunzi; Joëlle Thollot; Sylvain Paris; Gilles Debunne; Jean-Dominique Gascuel; Frédo Durand
The static background paper or canvas texture usually used for non-photorealistic animation greatly impedes the sensation of motion and results in a disturbing "shower door" effect. We present a method to animate the background canvas for non-photorealistic rendering animations and walkthroughs, which greatly improves the sensation of motion and 3D "immersion". The complex motion field induced by the 3D displacement is matched using purely 2D transformations. The motion field of forward translations is approximated using a 2D zoom in the texture, and camera rotation is approximated using 2D translation and rotation. A rolling-ball metaphor is introduced to match the instantaneous 3D motion with a 2D transformation. An infinite zoom in the texture is made possible by using a paper model based on multifrequency solid turbulence. Our results indicate a dramatic improvement over a static background.
Pen-and-Ink Textures for Real-Time Rendering BIBAPDFGI Online Paper 131-138
  Jennifer Fung; Oleg Veryovka
Simulation of a pen-and-ink illustration style in a real-time rendering system is a challenging computer graphics problem. Tonal art maps (TAMs) were recently suggested as a solution to this problem. Unfortunately, only the hatching aspect of pen-and-ink media was addressed thus far. We extend the TAM approach and enable representation of arbitrary textures. We generate TAM images by distributing stroke primitives according to a probability density function. This function is derived from the input image and varies depending on the TAM's scale and tone levels. The resulting depiction of textures approximates various styles of pen-and-ink illustrations such as outlining, stippling, and hatching.
Multi-Resolution Point-Sample Raytracing BIBAPDFZIPGI Online Paper 139-148
  Michael Wand; Wolfgang Strasser
We propose a new strategy for raytracing complex scenes without aliasing artifacts. The algorithm intersects anisotropic ray cones with prefiltered surface sample points from a multi-resolution point hierarchy. The algorithm can be extended to capture effects of distributed raytracing such as blurry reflections, depth of field, or soft shadows. In contrast to former anti-aliasing techniques based on cone tracing, the multi-resolution algorithm can be applied efficiently to scenes of high complexity. The running time does not depend on the variance in the image, as is the case for the prevalent stochastic raytracing techniques. Thus, the new technique is faster than stochastic raytracing for images with many high-frequency details.
Entropy-Based Adaptive Sampling BIBAPDFGI Online Paper 149-158
  Jaume Rigau; Miquel Feixas; Mateu Sbert
Ray tracing techniques need supersampling to reduce aliasing and/or noise in the final image. Since not all the pixels in the image require the same number of rays, supersampling can be implemented by adaptive subdivision of the sampling region, resulting in a refinement tree. In this paper we present a theoretically sound adaptive sampling method based on entropy, the classical measure of information. Our algorithm is orthogonal to the method used for sampling the pixel or for obtaining the radiance of the hitpoint in the scene. Results are shown for our implementation within the context of stochastic ray tracing and path tracing. We demonstrate that our approach compares well with classic strategies based on contrast and variance.
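The general idea of an entropy-driven refinement criterion can be sketched as follows; this illustrates Shannon entropy of normalized radiance samples deciding subdivision, not the paper's exact estimator:

```python
import math

def entropy(samples):
    """Shannon entropy of a set of non-negative radiance samples,
    treating the normalized values as a probability distribution."""
    total = sum(samples)
    if total == 0:
        return 0.0
    probs = [s / total for s in samples if s > 0]
    return -sum(p * math.log2(p) for p in probs)

def needs_refinement(samples, threshold=0.9):
    """Subdivide a pixel region when its samples' entropy, relative to
    the maximum possible for that many samples, falls below a threshold,
    i.e. when radiance is unevenly distributed across the region.
    (A sketch of the criterion's shape, with an arbitrary threshold.)"""
    n = len(samples)
    if n < 2:
        return True
    return entropy(samples) / math.log2(n) < threshold

# A uniform region has maximal relative entropy: no more rays needed.
assert not needs_refinement([2.0, 2.0, 2.0, 2.0])
# A region with one bright outlier is information-poor in this sense
# and gets subdivided, analogous to a high-contrast pixel.
assert needs_refinement([1.0, 0.01, 0.01, 0.01])
```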

Mixing Reality

Digital Decor: Augmented Everyday Things BIBAPDFFull-TextGI Online Paper 159-166
  Itiro Siio; Jim Rowan; Noyuri Mima; Elizabeth Mynatt
Digital Decor is furniture, appliances, and other small objects commonly found in homes and offices that have been augmented with computational power to extend their usefulness. As such, Digital Decor is a physical manifestation of the ubiquitous, pervasive, and invisible computer, in which a familiar, everyday object is imbued with additional capabilities through a single, simple application. Thus far we have investigated two possible functionalities for Digital Decor: everyday objects that keep track of their own contents (this can be called "smart storage"), and everyday objects that support informal, lightweight communication. For this paper we developed four prototypes: Timestamp Drawers and Strata Drawer are Digital Decor prototypes augmented to keep track of their contents, while Peek-A-Drawer and Meeting Pot are prototypes augmented to support communication.
A Tangible Interface for High-Level Direction of Multiple Animated Characters BIBAPDFMPGGI Online Paper 167-176
  Ronald Metoyer; Lanyue Xu; Madhusudhanan Srinivasan
Many training, education, and visualization environments would benefit from realistic animated characters. Unfortunately, interfaces for character motion specification are often complex and ill-suited to non-experts. We present a tangible interface for basic character manipulation on planar surfaces. In particular, we focus on interface aspects specific to 2D gross character animation, such as path and timing specification. Our approach allows for character manipulation and high-level motion specification through a natural metaphor - the figurine. We present an example interface for designing and visualizing strategy in the sport of American football and discuss usability studies of this interface.
Mixed Initiative Interactive Edge Detection BIBAPDFGI Online Paper 177-184
  Eric Neufeld; Haruna Popoola; David Callele; David Mould
Interactive edge detection is used in both graphics art tools and in tools for building anatomical models from serially sectioned images. To build models, contours are traced and later triangulated. Contour tracing is time-consuming because of the fidelity and quantity of points needed, and expensive because of the background training required of individuals who do the tracing. Here we report extensions to interactive edge detection that reduce errors and effort. Our key contribution is a simple feedback interface called the leash, currently implemented as an extension to Intelligent Scissors, that lets the human user 'lead' the edge detection algorithm along a contour, but also helps the user to anticipate errors and provide immediate corrective feedback.

Meshes and Surfaces

A Stream Algorithm for the Decimation of Massive Meshes BIBAPDFGI Online Paper 185-192
  Jianhua Wu; Leif Kobbelt
We present an out-of-core mesh decimation algorithm that is able to handle input and output meshes of arbitrary size. The algorithm reads the input from a data stream in a single pass and writes the output to another stream while using only a fixed-sized in-core buffer. By applying randomized multiple choice optimization, we are able to use incremental mesh decimation based on edge collapses and the quadric error metric. The quality of our results is comparable to state-of-the-art high-quality mesh decimation schemes (which are slower than our algorithm) and the decimation performance matches the performance of the most efficient out-of-core techniques (which generate meshes of inferior quality).
Distortion Minimization and Continuity Preservation in Surface Pasting BIBAPDFGI Online Paper 193-200
  Rick Leung; Stephen Mann
Surface pasting is a hierarchical modeling technique capable of adding local details to tensor product B-spline surfaces without incurring significant computational costs. In this paper, we describe how the continuity conditions of this technique can be improved through the use of least squares fitting and the application of some general B-spline continuity properties. More importantly, we address distortion issues inherent to the standard pasting technique by using an alternative mapping of the interior control points.
Multiple Camera Considerations in a View-Dependent Continuous Level of Detail Algorithm BIBAPDFGI Online Paper 201-208
  Bradley P. Kram; Christopher D. Shaw
We introduce the Camera Aware View-dEpendent Continuous Level Of Detail (CAVECLOD) polygon mesh representation. Several techniques have recently been developed that use a hierarchy of vertex split and merge operations to achieve continuous LOD. These techniques exploit temporal coherence. However, when multiple cameras simultaneously view a polygon mesh at continuous LOD, exploiting temporal coherence is difficult. The CAVECLOD mesh representation enables multiple cameras to simultaneously exploit temporal coherence. Texture coordinates and normal vectors are preserved, and the algorithm uses Microsoft DirectX Vertex Buffers and Index Buffers for efficient rendering. Interactive frame rates are achieved on large models on commercially available hardware.

Multimedia

Portrait: Generating Personal Presentations BIBAPDFGI Online Paper 209-216
  James Fogarty; Jodi Forlizzi; Scott E. Hudson
The rise of email and instant messaging as important tools in the professional workplace has created changes in how we communicate. One such change is that these media tend to reduce the presentation of an individual to a username, impacting the quality of communication. With current technology, including rich personal presentations in messages is still cumbersome. This problem is compounded by the fact that many of the potential benefits are realized by the recipient, though the sender incurs the costs.
   This paper discusses the Portrait system, which demonstrates an automated approach to generating personal presentations for use in computer-mediated communication and other systems, such as awareness and ambient information displays. The Portrait system works by searching the web for photos or logos that represent individuals and organizations. It then combines these images to create personal presentations. By using the existing web presences of individuals and organizations, Portrait reduces the human costs of using pictures of people in communication and in information displays. In a small evaluation of this system, we found that it performed nearly as well as human searchers at the task of finding images for personal presentations.
Modularity and Hierarchical Structure in the Digital Video Lifecycle BIBAPDFGI Online Paper 217-224
  Ronald Baecker; Eric Smith
Despite the multiplicity of data types and rich linking and nesting available in general multimedia systems, most digital video systems have represented video only as linear sequences of frames and shots. We extend previous work that proposed representing digital video as hierarchically structured documents composed of modular building blocks including outlines, scripts, audio sequences, still images, titles, and motion sequences. We review how such a representation can aid video authoring. We then show how such structure can aid video editing, localizing, browsing, updating, publishing, navigating, and searching. Applications are illustrated with examples from real projects.
A Taxonomy of Tasks and Visualizations for Casual Interaction of Multimedia Histories BIBAPDFGI Online Paper 225-236
  Charlotte Tang; Gregor McEwan; Saul Greenberg
Many groupware systems now allow people to converse and casually interact through their computers in quite rich ways -- through text, images, video, artifact sharing and so on. If these interactions are logged, we can offer these multimedia histories to a person in a manner that makes them easy to review. This is potentially beneficial for group members wishing to find and reflect on their past interactions, and for researchers investigating the nuances of online communities. Yet because we have little knowledge of what people would actually do with these histories, designing an effective history review system is difficult. Consequently, we conducted a user study, where people explored real data from an online community. Our study identified a set of tasks that people would do if they could review these histories of casual interaction. It also produced a list of parameters pertinent to how we could visualize these historical records in a tool. With the increasing popularity of computer-mediated casual interaction tools, this study provides an important guide for developing tools to visualize and analyze past multimedia conversations.
Learning from Games: HCI Design Innovations in Entertainment Software BIBAPDFGI Online Paper 237-246
  Jeff Dyck; David Pinelle; Barry Brown; Carl Gutwin
Computer games are one of the most successful application domains in the history of interactive systems. This success has come despite the fact that games were 'separated at birth' from most of the accepted paradigms for designing usable interactive software. It is now apparent that this separate and less-constrained environment has allowed for much design creativity and many innovations that make game interfaces highly usable. We analyzed several current game interfaces looking for ideas that could be applied more widely to general UIs. In this paper we present four of these: effortless community, learning by watching, deep customizability, and fluid system-human interaction. These ideas have arisen in games because of their focus on user performance and user satisfaction, and we believe that they can help to improve the usability of other types of applications.

Deformable Models

Interactive Deformation Using Modal Analysis with Constraints BIBAPDFPDFGZGI Online Paper 247-256
  Kris K. Hauser; Chen Shen; James F. O'Brien
Modal analysis provides a powerful tool for efficiently simulating the behavior of deformable objects. This paper shows how manipulation, collision, and other constraints may be implemented easily within a modal framework. Results are presented for several example simulations. These results demonstrate that for many applications the errors introduced by linearization are acceptable, and that the resulting simulations are fast and stable even for complex objects and stiff materials.
Easy Realignment of k-DOP Bounding Volumes BIBAPDFGI Online Paper 257-264
  Christoph Fünfzig; Dieter W. Fellner
In this paper we reconsider pairwise collision detection for rigid motions using a k-DOP bounding volume hierarchy. This data structure is particularly attractive because it is equally efficient for rigid motions as for arbitrary point motions (deformations).
   We propose a new efficient realignment algorithm, which produces tighter results than all known algorithms. It can be implemented easily in software and in hardware. Using this approach we try to show that k-DOP bounding volumes can keep up with the theoretically more efficient oriented bounding boxes (OBBs) in parallel-close-proximity situations.
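Realignment is easiest to see in the 6-DOP special case, where the DOP is an axis-aligned box and the naive approach simply bounds the rotated corners; the paper's contribution is a tighter and more efficient realignment than this baseline, which generalizes to k/2 fixed slab directions:

```python
import numpy as np

def realign_box(lo, hi, rotation):
    """Naive realignment of a 6-DOP (axis-aligned box) after a rigid
    rotation: the new slab intervals along the fixed world axes must
    enclose all eight rotated corners of the original box."""
    corners = np.array([[x, y, z] for x in (lo[0], hi[0])
                                  for y in (lo[1], hi[1])
                                  for z in (lo[2], hi[2])])
    rotated = corners @ rotation.T
    return rotated.min(axis=0), rotated.max(axis=0)

# A 90-degree rotation about z swaps the box's x and y extents.
rot_z = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
lo, hi = realign_box(np.array([0.0, 0.0, 0.0]),
                     np.array([2.0, 1.0, 3.0]), rot_z)
assert np.allclose(hi - lo, [1.0, 2.0, 3.0])
```

Because only the corners of the previous bound (not the enclosed geometry) are projected, realigned volumes grow looser as rotations accumulate, which is exactly the slack a tighter realignment algorithm targets.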
Scanning Large-Scale Articulated Deformations BIBAPDFGI Online Paper 265-272
  Jochen Lang; Dinesh K. Pai; Hans-Peter Seidel
Scanning the deformation behavior of real objects is a useful technique for acquiring physically realistic behavior. It has been shown previously how to acquire physically realistic object behavior for small deformations and how to render the behavior at interactive rates. This paper introduces a novel method that extends previous work to handle large-scale deformation. We model large-scale deformation as articulation in combination with local linear deformation. The articulation may either reflect an underlying physical structure or may be purely a modeling technique. In this paper, we show examples of both applications. Our acquisition method is applicable to deformable modeling, but it also has implications for motion capture.
Toward Modeling of a Suturing Task BIBAPDFGI Online Paper 273-279
  Matt LeDuc; Shahram Payandeh; John Dill
In this paper we present our initial work on simulating suturing using mass-spring models. Various models for simulating a suture were studied, and a simple linear mass-spring model was found to give good performance. A novel model for pulling a suture through a deformable tissue model is presented. By connecting two separate tissues together by way of the suture, our model can simulate a suturing task. The results are shown using software we developed that runs on a standard PC and models the action of two suturing devices commonly used in minimally invasive laparoscopic surgery.