
Proceedings of the 2012 Conference on Graphics Interface

Fullname: Proceedings of the 2012 Graphics Interface Conference
Editors: Stephen Brooks; Kirstie Hawkey
Location: Toronto, Ontario, Canada
Dates: 2012-May-28 to 2012-May-30
Publisher: ACM
Standard No: ISBN 978-1-4503-1420-6; ACM DL: Table of Contents; hcibib: GI12
Papers: 26
Pages: 208
Links: Conference Website
Summary: The Canadian Human-Computer Communications Society (CHCCS) / Société canadienne du dialogue humain-machine (SCDHM) is a Special Interest Group within the Canadian Information Processing Society. It is a non-profit organization formed to advance education and research in computer graphics, visualization, and human-computer interaction.
    Each year CHCCS/SCDHM sponsors Graphics Interface, the longest-running regularly scheduled conference in interactive computer graphics. Most years it is co-located and co-organized with two other conferences, Artificial Intelligence (AI) and Computer and Robot Vision (CRV). This year the AI/CRV/GI 2012 conference is located at York University in Toronto. Graphics Interface promises to be an exciting event, with a selection of high-quality papers in computer graphics, visualization, and human-computer interaction.
  1. Invited paper
  2. 3D geometry
  3. Tasks, emotions, and feelings
  4. Image manipulation
  5. Enhancing performance
  6. Real world modeling
  7. 3D manipulation
  8. Motion and rendering

Invited paper

Virtual humans: back to the future, pp. 1-8
  Nadia Magnenat Thalmann; Daniel Thalmann
This paper examines the roles that Virtual Humans can play in empowering human expression, and the research challenges we must face to make this possible. It starts with a short history of Virtual Humans and how we contributed to the foundations of this field. We then define six typical Virtual Humans: the Performing Virtual Human, the Physiological Virtual Human, the Learning Virtual Human, the Connected Virtual Human, the Secure Virtual Human, and the Anthropometric Virtual Human. For each category, we provide a definition and a few possible scenarios, then try to identify the research challenges, past experiences, and some unsolved core issues.

3D geometry

Point-tessellated voxelization, pp. 9-18
  Yun Fei; Bin Wang; Jiating Chen
Applications such as shape matching, visibility processing, rapid manufacturing, and 360-degree displays usually require the generation of a voxel representation from a triangle mesh interactively or in real time. In this paper, we describe a novel framework that uses the hardware tessellation support on the graphics processing unit (GPU) for surface voxelization. To generate gap-free voxelization results with superior performance, our framework uses three stages: triangle subdivision, point generation, and point injection. For even higher temporal efficiency, we introduce PN-triangles and displacement mapping to voxelize meshes with rugged surfaces at high resolution.
   Our framework can be implemented with simple shader programming, making it readily applicable to a number of real-time applications where both development and runtime efficiencies are of concern.
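For illustration only (not the paper's GPU shader pipeline), the final point-injection stage can be sketched on the CPU as splatting surface sample points into a regular voxel grid:
```python
import numpy as np

def inject_points(points, grid_res, bounds_min, bounds_max):
    """Mark every voxel that contains at least one surface sample point.

    points      -- (N, 3) array of surface samples (e.g. from tessellation)
    grid_res    -- number of voxels per axis
    bounds_min, bounds_max -- axis-aligned bounding box of the mesh
    """
    points = np.asarray(points, dtype=float)
    mins = np.asarray(bounds_min, dtype=float)
    extent = np.asarray(bounds_max, dtype=float) - mins
    grid = np.zeros((grid_res,) * 3, dtype=bool)
    # Map each point to integer voxel coordinates and clamp to the grid.
    idx = ((points - mins) / extent * grid_res).astype(int)
    idx = np.clip(idx, 0, grid_res - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```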
Isoparametric finite element analysis for Doo-Sabin subdivision models, pp. 19-26
  Engin Dikici; Sten Roar Snare; Fredrik Orderud
We introduce an isoparametric finite element analysis method for models generated using Doo-Sabin subdivision surfaces. Our approach aims to narrow the gap between geometric modeling and physical simulation, which have traditionally been treated as separate modules. This separation is due to the substantial differences in geometric representation between the two processes. Accordingly, a unified representation is investigated in this study. Our proposed method performs the geometric modeling via Doo-Sabin subdivision surfaces, which are defined as the limit surface of a recursive Doo-Sabin refinement process. The same basis functions are later utilized to define isoparametric shell elements for physical simulation. Furthermore, the accuracy of the simulation can be adjusted by refining the basis, without changing the geometry or its parametrization. The unified representation allows rapid data transfer between geometric design and finite element analysis, eliminating the need for the inconvenient remodeling/meshing procedures that are commonly required. Experiments show that the physical simulation accuracy of the introduced models quickly converges to that of high-resolution finite element models using classical hexahedron and triangular prism elements.
5-6-7 meshes, pp. 27-34
  Nima Aghdaii; Hamid Younesy; Hao Zhang
We introduce a new type of mesh called a 5-6-7 mesh. For many mesh processing tasks, low- or high-valence vertices are undesirable. At the same time, it is not always possible to achieve complete vertex valence regularity, i.e., to have only valence-6 vertices. A 5-6-7 mesh is a closed triangle mesh where each vertex has valence 5, 6, or 7. An intriguing question is whether it is always possible to convert an arbitrary mesh into a 5-6-7 mesh. In this paper, we answer the question in the affirmative. We present a 5-6-7 remeshing algorithm which converts a closed triangle mesh with arbitrary genus into a 5-6-7 mesh which a) closely approximates the original mesh geometrically, e.g., in terms of feature preservation, and b) has a vertex count comparable to that of the original mesh. We demonstrate the results of our remeshing algorithm on meshes with sharp features and varying topology and complexity.
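As a minimal sketch of the valence condition itself (not the remeshing algorithm), the 5-6-7 property of a closed triangle mesh can be checked directly:
```python
from collections import defaultdict

def is_5_6_7_mesh(triangles):
    """Check the 5-6-7 valence condition on a closed triangle mesh.

    triangles -- iterable of (i, j, k) vertex-index triples.
    Returns True if every vertex has valence 5, 6, or 7.
    """
    neighbours = defaultdict(set)
    for i, j, k in triangles:
        neighbours[i].update((j, k))
        neighbours[j].update((i, k))
        neighbours[k].update((i, j))
    # Valence of a vertex = number of distinct neighbouring vertices.
    return all(len(n) in (5, 6, 7) for n in neighbours.values())
```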

Tasks, emotions, and feelings

Individual differences in personal task management: a field study in an academic setting, pp. 35-44
  Mona Haraty; Diane Tam; Shathel Haddad; Joanna McGrenere; Charlotte Tang
A plethora of electronic personal task management (e-PTM) tools have been designed to help individuals manage their tasks. There is a lack of evidence, however, on the extent to which these tools actually help. In addition, previous research has reported that e-PTM tools have low adoption rates. To understand the reasons for such poor adoption and to gain insight into individual differences in PTM, we conducted a focus group with 7 participants followed by a field study with 12 participants, both in an academic setting. This paper describes different behaviors involved in managing everyday tasks. Based on the similarities and differences in individuals' PTM behaviors, we identify three types of users: adopters, make-doers, and do-it-yourselfers. Grounded in our findings, we offer design guidelines for personalized PTM tools, which can serve the different types of users and their behaviors.
The effects of mindfulness meditation training on multitasking in a high-stress information environment, pp. 45-52
  David M. Levy; Jacob O. Wobbrock; Alfred W. Kaszniak; Marilyn Ostergren
We describe an experiment to determine the effects of meditation training on the multitasking behavior of knowledge workers. Three groups of 12-15 human resources personnel each were tested: (1) those who underwent an 8-week training course in mindfulness-based meditation, (2) those who endured a wait period, were tested, and then underwent the same 8-week training, and (3) those who had 8 weeks of training in body relaxation. We found that only those trained in meditation stayed on tasks longer, made fewer task switches, and reported less negative emotion after task performance, as compared with the other two groups. In addition, both the meditation and the relaxation groups showed improved memory for the tasks they performed.
Triangulating the personal creative experience: self-report, external judgments, and physiology, pp. 53-60
  Erin A. Carroll; Celine Latulipe
We investigate the measurement of 'in-the-moment creativity' (ITMC) as a step towards developing new evaluation methods for improving creativity support tools (CSTs). We consider ITMC to be the periods of intense personal creative experience within a temporal, creative work process. Our approach involves triangulating several temporal metrics, including self-report ratings, external judgments, and physiological measurements. The experiment described in this paper involves participants sketching for 30 minutes while wearing an EEG headset and being screen-recorded. Participants and external judges used a special video application to watch, identify, and rate periods of personal creative experience during the sketching activity. Our results indicate that people are comfortable self-reporting ITMC, and our work sets the stage for more extensive research that makes use of temporal, granular measures of the personal creative experience.
Creating and interpreting abstract visualizations of emotion, pp. 61-68
  Brett Taylor; Regan L. Mandryk
People use non-verbal cues, such as facial expressions, body language, and tonal variations in speech, to help communicate emotion; however, these cues are not always available in computer-supported environments. Without emotional cues, we can have difficulty communicating and relating to others. In this paper, we develop and evaluate a system for creating abstract visualizations of emotion using arousal and valence. Through two user studies, we show that without prior training, people can naturally understand the represented emotion conveyed by the visualization.
Cybersickness induced by desktop virtual reality, pp. 69-75
  Norman G. Vinson; Jean-François Lapointe; Avi Parush; Shelley Roberts
Cybersickness, a syndrome resulting from exposure to virtual reality displays, raises ethical and liability issues. We have found that, contrary to the majority of previous reports in the literature, cybersickness can be induced by desktop virtual reality. Moreover, our findings suggest that some individuals susceptible to cybersickness can be screened out on the basis of their self-reported susceptibility to motion sickness.

Image manipulation

Fast adaptive edge-aware mask generation, pp. 77-83
  Michael W. Tao; Aravind Krishnaswamy
Selective editing, also known as masking, is a common technique for creating localized effects on images, such as color (hue, saturation) and tonal adjustments. Many current techniques require parameter tuning or many strokes to achieve suitable results. We propose a fast, novel algorithm that requires minimal strokes and parameter tuning from users, segments the desired selection, and produces an adaptive feathered matte. Our approach consists of two steps: first, the algorithm extracts color similarities using radial basis functions; second, the algorithm segments the region the user selects so as to respect locality. Because of its linear complexity and the simplicity of the required user input, the approach is suitable for many applications, including mobile devices.
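A hedged sketch of the first step only, assuming Gaussian radial basis functions over colors sampled under the user's strokes (the paper's exact kernel and the locality-respecting segmentation step are not reproduced here):
```python
import numpy as np

def rbf_color_similarity(image, stroke_colors, sigma=0.1):
    """Per-pixel similarity to the colors under the user's strokes,
    using Gaussian radial basis functions (values in [0, 1]).

    image         -- (H, W, 3) float array, colors in [0, 1]
    stroke_colors -- (K, 3) colors sampled under the selection strokes
    sigma         -- RBF width; smaller values give tighter selections
    """
    diff = image[:, :, None, :] - stroke_colors[None, None, :, :]   # (H, W, K, 3)
    dist2 = np.sum(diff ** 2, axis=-1)                              # (H, W, K)
    # A pixel counts as similar if it is close to *any* stroked color.
    return np.exp(-dist2 / (2.0 * sigma ** 2)).max(axis=-1)
```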
Fast high dynamic range image deghosting for arbitrary scene motion, pp. 85-92
  Simon Silk; Jochen Lang
High Dynamic Range (HDR) images of real world scenes often suffer from ghosting artifacts caused by motion in the scene. Existing solutions to this problem typically either only address specific types of ghosting, or are very computationally expensive.
   We address ghosting by performing change detection on exposure-normalized images, then reducing the contribution of moving objects to the final composite on a frame-by-frame basis. Change detection is computationally advantageous and it can be applied to images exhibiting varied ghosting artifacts. We demonstrate our method both for Low Dynamic Range (LDR) and HDR images. Additional constraints based on a priori knowledge of the changing exposures apply to HDR images. We increase the stability of our approach by using recent superpixel segmentation techniques to enhance the change detection. Our solution includes a novel approach for areas that see motion throughout the capture, e.g., foliage blowing in the wind.
   We demonstrate the success of our approach on challenging ghosting scenarios, and that our results are comparable to existing state-of-the-art methods, while providing computational savings over these methods.
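An illustrative sketch of change detection on exposure-normalized frames, assuming already-linearized images and a simple global threshold (the paper adds exposure-derived constraints and superpixel-based stabilization):
```python
import numpy as np

def detect_motion(exposures, times, threshold=0.05):
    """Flag pixels that change between differently exposed frames.

    exposures -- list of (H, W) linear-radiance images (already linearized)
    times     -- matching list of exposure times in seconds
    Returns a boolean (H, W) mask of pixels that moved in any frame pair.
    """
    # Normalize each frame by its exposure time so static pixels agree.
    normalized = [img / t for img, t in zip(exposures, times)]
    mask = np.zeros_like(normalized[0], dtype=bool)
    for prev, cur in zip(normalized, normalized[1:]):
        mask |= np.abs(cur - prev) > threshold
    return mask
```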
Face morphing using 3D-aware appearance optimization, pp. 93-99
  Fei Yang; Eli Shechtman; Jue Wang; Lubomir Bourdev; Dimitris Metaxas
Traditional automatic face morphing techniques tend to generate blurry intermediate frames when the two input faces differ significantly. We propose a new face morphing approach that deals explicitly with large pose and expression variations. We recover the 3D face geometry of the input images using a projection on a pre-learned 3D face subspace. The geometry is interpolated by factoring the expression and pose and varying them smoothly across the sequence. Finally we pose the morphing problem as an iterative optimization with an objective that combines similarity of each frame to the geometry-induced warped sources, with a similarity between neighboring frames for temporal coherence. Experimental results show that our method can generate higher quality face morphing results for more extreme pose, expression and appearance changes than previous methods.

Enhancing performance

Dragimation: direct manipulation keyframe timing for performance-based animation, pp. 101-108
  Benjamin Walther-Franks; Marc Herrlich; Thorsten Karrer; Moritz Wittenhagen; Roland Schröder-Kroll; Rainer Malaka; Jan Borchers
Getting the timing and dynamics right is key to creating believable and interesting animations. However, with traditional keyframe animation techniques, timing is a tedious and abstract process. In this paper we present Dragimation, a novel technique for interactive performative timing of keyframe animations. It is inspired by direct manipulation techniques for video navigation that leverage the natural sense of timing all of us possess. We conducted a user study with 27 participants, including professional animators as well as novices, in which we compared our approach to two other interactive timing techniques, timeline scrubbing and sketch-based timing. In terms of objective error measurements, Dragimation is comparable to the sketch-based approach and significantly better than scrubbing, and it was the technique preferred overall by our test users.
Assessing target acquisition and tracking performance for complex moving targets in the presence of latency and jitter, pp. 109-116
  Andriy Pavlovych; Carl Gutwin
Many modern games and game systems allow for networked remote participation. In such networks latency variability is a commonly encountered factor, but there is still little information available to designers about how human performance changes in the presence of delay. To add to our understanding of performance thresholds for mouse-based tasks that are common in real-time games, we carried out a study of human target acquisition and target tracking in the presence of latency and jitter (variance in latency), for various target velocities and trajectories. Our study indicates critical thresholds at which human performance decreases in the presence of delay. Target acquisition accuracy drops very quickly for latencies over 50 ms and for high velocities. Tracking error, however, is only slightly affected by latency, with deterioration starting at around 110 ms. The effects of latency and target velocity on errors are close to linear, and transverse error is usually smaller than longitudinal error. These results help to quantify the effects of delay on closely-coupled interactive tasks in networked games and real-time groupware systems. They also aid designers in determining when it is critical to improve system parameters and when to apply prediction and delay-compensation algorithms to improve quality of interaction.
$N-protractor: a fast and accurate multistroke recognizer, pp. 117-120
  Lisa Anthony; Jacob O. Wobbrock
Prior work introduced $N, a simple multistroke gesture recognizer based on template matching, intended to be easy to port to new platforms for rapid prototyping, and derived from the unistroke $1 recognizer. $N uses an iterative search method to find the optimal angular alignment between two gesture templates, like $1 before it. Since then, Protractor has been introduced, a unistroke pen and finger gesture recognition algorithm also based on template-matching and $1, but using a closed-form template-matching method instead of an iterative search method, considerably improving recognition speed over $1. This paper presents work to streamline $N with Protractor by using Protractor's closed-form matching approach, and demonstrates that similar speed benefits occur for multistroke gestures from datasets from multiple domains. We find that the Protractor enhancements are over 91% faster than the original $N, and negligibly less accurate (<0.2%). We also discuss the impact that the number of templates, the input speed, and input method (e.g., pen vs. finger) have on recognition accuracy, and examine the most confusable gestures.
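A sketch of the closed-form angular alignment that Protractor-style matching relies on, assuming gestures have already been resampled to equal length and translated so their centroids are at the origin; the full $N-Protractor additionally handles multistroke orderings, stroke directions, and normalization:
```python
import math

def optimal_angular_match(template, candidate):
    """Closed-form best rotational alignment of two preprocessed gestures.

    template, candidate -- equal-length lists of (x, y) points, centered
    at the origin. Returns (best_angle, similarity); the similarity is the
    dot product of the template with the optimally rotated candidate.
    """
    a = sum(xt * xc + yt * yc for (xt, yt), (xc, yc) in zip(template, candidate))
    b = sum(xt * yc - yt * xc for (xt, yt), (xc, yc) in zip(template, candidate))
    best_angle = math.atan2(b, a)   # maximizes a*cos(theta) + b*sin(theta)
    similarity = math.hypot(a, b)   # value of that dot product at best_angle
    return best_angle, similarity
```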
Input finger detection for nonvisual touch screen text entry in Perkinput, pp. 121-129
  Shiri Azenkot; Jacob O. Wobbrock; Sanjana Prasain; Richard E. Ladner
We present Input Finger Detection (IFD), a novel technique for nonvisual touch screen input, and its application, the Perkinput text entry method. With IFD, signals are input into a device with multi-point touches, where each finger represents one bit, either touching the screen or not. Maximum likelihood and tracking algorithms are used to detect which fingers touch the screen based on user-set reference points. The Perkinput text entry method uses the 6-bit Braille encoding with audio feedback, enabling one- and two-handed input. A longitudinal evaluation with 8 blind participants who are proficient in Braille showed that one-handed Perkinput was significantly faster and more accurate than the iPhone's VoiceOver. Furthermore, in a case study to evaluate expert performance, one user reached an average session speed of 17.56 words per minute (WPM) with an average uncorrected error rate of just 0.14% using one hand for input. The same participant reached an average session speed of 38.0 WPM with two-handed input and an error rate of just 0.26%. Her fastest phrase was entered at 52.4 WPM with no errors.
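For illustration, a toy decoding of six-dot Braille chords where each finger acts as one bit (this is not Perkinput's maximum-likelihood finger detection, and only a few letters are mapped):
```python
# Dot numbering follows standard six-dot Braille:
#   1 4
#   2 5
#   3 6
BRAILLE_DOTS = {                 # raised dots -> letter (illustrative subset)
    frozenset({1}): 'a',
    frozenset({1, 2}): 'b',
    frozenset({1, 4}): 'c',
    frozenset({1, 4, 5}): 'd',
    frozenset({1, 5}): 'e',
}

def decode_chord(fingers_down):
    """Map a set of 'finger down' dot indices (1-6) to a character.

    fingers_down -- e.g. {1, 4} if the fingers assigned to dots 1 and 4
                    touched the screen during the chord.
    """
    return BRAILLE_DOTS.get(frozenset(fingers_down), '?')
```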

Real world modeling

Embroidery modeling and rendering, pp. 131-139
  Xinling Chen; Michael McCool; Asanobu Kitamoto; Stephen Mann
Embroidery is a traditional non-photorealistic art form in which threads of different colours stitched into a base material are used to create an image. We explore techniques for automatically producing embroidery layouts from line drawings and for rendering those layouts in real time on potentially deformable 3D objects with hardware acceleration. Layout of stitches is based on automatic extraction of contours from line drawings followed by a set of stitch-placement procedures based on traditional embroidery techniques. Rendering first captures the lighting environment on the surface of the target object and renders the embroidery as an image in texture space. Stitches are rendered in texture space using a lighting model suitable for threads at a resolution that avoids geometric and highlight aliasing, and with alpha-mapped per-stitch boundary antialiasing. Stitches are also rendered in layers to capture the 2.5D nature of embroidery. A filtered texture pyramid is constructed from the resulting texture and applied to the 3D object, using hardware accelerated scale-dependent antialiasing. Aliasing of fine stitch structure and highlights is avoided by this process. The result is a realistic embroidered image that properly responds to lighting in real time.
Interactive cloud rendering using temporally-coherent photon mapping, pp. 141-148
  Oskar Elek; Tobias Ritschel; Alexander Wilkie; Hans-Peter Seidel
This paper presents an interactive algorithm for simulating light transport in clouds. Exploiting the high temporal coherence of the typical illumination and morphology of clouds, we build on volumetric photon mapping, which we modify to allow interactive rendering speeds -- instead of building a fresh irregular photon map for every scene state change, we accumulate photon contributions in a regular grid structure. This grid is then continuously refreshed by re-shooting only a fraction of the total number of photons in each frame. To maintain its temporal coherence and low variance, a low-resolution grid is used, which is then upsampled to the density field resolution in each frame. We also present a technique to store and reconstruct the angular illumination information by exploiting properties of the standard Henyey-Greenstein phase function, namely its ability to express anisotropic angular distributions with a single dominating direction. The presented method is physically plausible, conceptually simple, and comparatively easy to implement. Moreover, it operates only on the cloud density field, thus requiring no precomputation, and handles all light sources typical for the given environment, i.e., where one of the light sources dominates.
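A rough sketch of the two ingredients named in the abstract, under simplifying assumptions: an exponentially weighted refresh of a persistent photon grid when only a fraction of the photon budget is re-shot per frame, and evaluation of the Henyey-Greenstein phase function:
```python
import numpy as np

def refresh_photon_grid(grid, new_splat, fraction):
    """Blend a partial photon pass into a persistent regular grid.

    grid      -- (X, Y, Z) accumulated photon energy from previous frames
    new_splat -- energy deposited by this frame's photons, where only
                 `fraction` of the full photon budget was shot this frame
    fraction  -- e.g. 0.1 to re-shoot 10% of the photons per frame
    """
    # Exponential moving average: old contributions decay as new ones
    # arrive, so lighting or morphology changes are picked up over
    # roughly 1/fraction frames.
    return (1.0 - fraction) * grid + new_splat

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function for scattering angle theta,
    with anisotropy parameter g in (-1, 1)."""
    return (1.0 - g * g) / (4.0 * np.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)
```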
Synthetic tree models from iterated discrete graphs, pp. 149-156
  Ling Xu; David Mould
We present a method to generate tree models in which we first create a weighted graph, then place endpoints and a root point, and plan least-cost paths from the endpoints to the root. The collection of resulting paths forms a branching structure. We create a hierarchical tree structure by placing subgraphs around each endpoint and beginning again, for some number of iterations. Powerful control over the global shape of the resulting tree is exerted by the shape of the initial graph, allowing users to create desired variations; more subtle variations can be accomplished by modifying parameters of the graph and subgraph creation processes and by changing the endpoint distribution mechanisms. The method is capable of matching a desired target structure with a little manual effort, and can easily generate a large group of slightly different models under the same parameter settings. The final trees are both intricate and convincingly realistic in appearance.
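A minimal sketch of the core path-planning step, assuming a generic Dijkstra search and that every endpoint is reachable from the root (graph construction, subgraph iteration, and endpoint placement are omitted):
```python
import heapq
from collections import defaultdict

def least_cost_branches(edges, root, endpoints):
    """Union of least-cost paths from each endpoint to the root.

    edges     -- iterable of (u, v, weight) for an undirected weighted graph
    root      -- root vertex of the tree
    endpoints -- branch tip vertices, all assumed reachable from the root
    Returns the set of graph edges used by any path (the branching structure).
    """
    graph = defaultdict(list)
    for u, v, w in edges:
        graph[u].append((v, w))
        graph[v].append((u, w))

    # Dijkstra from the root; 'parent' records the cheapest way back to it.
    dist, parent = {root: 0.0}, {}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))

    branches = set()
    for tip in endpoints:
        node = tip
        while node != root:            # walk the least-cost path to the root
            branches.add((node, parent[node]))
            node = parent[node]
    return branches
```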

3D manipulation

Understanding user gestures for manipulating 3D objects from touchscreen inputs, pp. 157-164
  Aurélie Cohé; Martin Hachet
Multi-touch interfaces have emerged with the widespread use of smartphones. Although many people interact with 2D applications through touchscreens, interaction with 3D applications remains little explored. Most 3D object manipulation techniques have been created by designers, while users are generally left out of the design process. We conducted a user study to better understand how non-technical users interact with a 3D object from touchscreen inputs. The experiment was conducted while users manipulated a 3D cube, presented from three points of view, for rotations, scaling, and translations (RST). Sixteen users participated and 432 gestures were analyzed. To classify the data, we introduce a taxonomy for 3D manipulation gestures with touchscreens. We then identify a set of strategies employed by users to perform the proposed cube transformations. Our findings suggest that each participant uses several strategies, with one predominant strategy. Finally, we propose some guidelines to help designers create more user-friendly tools.
The effect of perspective projection in multi-touch 3D interaction, pp. 165-172
  Björn Bollensdorff; Uwe Hahne; Marc Alexa
In this paper we describe the development and comparison of interaction techniques for 3D direct manipulation on multi-touch enabled devices. The literature on this topic currently shows diverging arguments for what enables effective and/or intuitive interaction. We argue that the limiting problem is the projection from 3D to 2D in input and output; and in particular how transformations in 3D are mapped to the interaction surface. Not only does this argument explain the divergence in the literature -- it also leads to improved interaction metaphors, similar but not identical to widgets in other 3D interaction domains. We show in a controlled experiment that adapted interaction widgets are significantly superior to other approaches in the context of multi-touch interaction.
Mockup builder: direct 3D modeling on and above the surface in a continuous interaction space, pp. 173-180
  Bruno R. De Araújo; Géry Casiez; Joaquim A. Jorge
Our work introduces a semi-immersive environment for conceptual design where virtual mockups are obtained from gestures; we aim to get closer to the way people conceive, create and manipulate three-dimensional shapes. We present on-and-above-the-surface interaction techniques following Guiard's asymmetric bimanual model to take advantage of the continuous interaction space for creating and editing 3D models in a stereoscopic environment. To allow for more expressive interactions, our approach continuously combines hand and finger tracking in the space above the table with multi-touch on its surface. This combination brings forth an alternative design environment where users can seamlessly switch between interacting on the surface or in the space above it depending on the task. Our approach integrates continuous space usage with bimanual interaction to provide an expressive set of 3D modeling operations. Preliminary trials with our experimental setup show this to be a very promising avenue for further work.
Nailing down multi-touch: anchored above the surface interaction for 3D modeling and navigation, pp. 181-184
  Bret Jackson; David Schroeder; Daniel F. Keefe
We present anchored multi-touch, a technique for extending multi-touch interfaces by using gestures based on both multi-touch surface input and 3D movement of the hand(s) above the surface. These interactions have nearly the same potential for rich, expressive input as do freehand 3D interactions while also having an advantage that the passive haptic feedback provided by the surface makes them easier to control. In addition, anchored multi-touch is particularly well suited for working with 3D content on stereoscopic displays. This paper contributes two example applications: (1) an interface for navigating 3D datasets, and (2) a surface bending interface for freeform 3D modeling. Two methods for sensing the gestures are introduced, one employing a depth camera.

Motion and rendering

Inverse kinodynamics: editing and constraining kinematic approximations of dynamic motion, pp. 185-192
  Cyrus Rahgoshay; Amir Rabbani; Karan Singh; Paul G. Kry
We present inverse kinodynamics (IKD), an animator-friendly kinematic workflow that both encapsulates short-lived dynamics and allows precise space-time constraints. Kinodynamics (KD) defines the system state at any given time as the result of a kinematic state in the recent past, physically simulated over a short temporal window to the present. KD is a well-suited kinematic approximation for animated characters and other dynamic systems with dominant kinematic motion and short-lived dynamics. Given a dynamic system, we first choose an appropriate kinodynamic window size based on accelerations in the kinematic trajectory and the physical properties of the system. We then present an inverse kinodynamics (IKD) algorithm, with which a kinodynamic system can precisely attain a set of animator constraints at specified times. Our approach solves the IKD problem iteratively, and is able to handle full-pose or end-effector constraints at both the position and velocity level, as well as multiple constraints in close temporal proximity. Our approach can also be used to solve position and velocity constraints on passive systems attached to kinematically driven bodies. We show IKD to be a compelling approach to the direct kinematic control of characters, with secondary dynamics, via examples of skeletal dynamics and facial animation.
Physical material editing with structure embedding for animated solid, pp. 193-200
  Ning Liu; Xiaowei He; Yi Ren; Sheng Li; Guoping Wang
Physically-based soft bodies with anisotropic and heterogeneous materials are difficult to animate. Explicitly modeling a desired material behavior by tuning the constitutive parameters is a difficult, tedious and sometimes impractical task for an animator. Even with linear constitutive materials, there are still dozens of independent parameters to tune. In this paper, we propose a new technique to ease the animator's effort when modeling complex material behaviors. Our key idea is to treat the original complex material as the composite of a matrix and structure elements. These two constituent materials are both easy to simulate because the matrix is homogeneous and isotropic, while the structure elements have simple and intuitive deformation modes. In this way, the complexity of tuning parameters is greatly reduced. By properly embedding structure elements into the matrix, an animator can design diverse and creative material behavior without tedious parameter tuning. Our results illustrate that our approach is intuitive, easy to use, and has potential for further extensions in physically driven solid animation.
3D rasterization: a bridge between rasterization and ray casting, pp. 201-208
  Tomáš Davidovič; Thomas Engelhardt; Iliyan Georgiev; Philipp Slusallek; Carsten Dachsbacher
Ray tracing and rasterization have long been considered as two fundamentally different approaches to rendering images of 3D scenes, although they compute the same results for primary rays. Rasterization projects every triangle onto the image plane and enumerates all covered pixels in 2D, while ray tracing operates in 3D by generating rays through every pixel and then finding the closest intersection with a triangle. In this paper we introduce a new view on the two approaches: based on the Plücker ray-triangle intersection test, we define 3D triangle edge functions, resembling (homogeneous) 2D edge functions. Then both approaches become identical with respect to coverage computation for image samples (or primary rays). This generalized "3D rasterization" perspective enables us to exchange concepts between both approaches: we can avoid applying any model or view transformation by instead transforming the sample generator, and we can also eliminate the need for perspective division and render directly to non-planar viewports. While ray tracing typically uses floating point with its intrinsic numerical issues, we show that it can be implemented with the same consistency rules as 2D rasterization. With 3D rasterization the only remaining differences between the two approaches are the scene traversal and the enumeration of potentially covered samples on the image plane (binning). 3D rasterization allows us to explore the design space between traditional rasterization and ray casting in a formalized manner. We discuss performance/cost trade-offs and evaluate different implementations and compare 3D rasterization to traditional ray tracing and 2D rasterization.
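A sketch of the Plücker-based "3D edge function" coverage test described above, under the assumptions that the ray is not coplanar with the triangle and that hit distance and consistent fill rules are handled elsewhere:
```python
import numpy as np

def plucker(p, q):
    """Plücker coordinates (direction, moment) of the directed line p -> q."""
    return q - p, np.cross(p, q)

def side(l1, l2):
    """Permuted inner product of two Plücker lines; its sign gives
    the relative orientation of the lines."""
    d1, m1 = l1
    d2, m2 = l2
    return np.dot(d1, m2) + np.dot(d2, m1)

def ray_covers_triangle(origin, direction, a, b, c):
    """3D edge-function style coverage test: the ray covers the triangle
    iff it passes on the same side of all three directed edges.
    All arguments are 3-vectors (np.ndarray)."""
    ray = plucker(origin, origin + direction)
    signs = [side(ray, plucker(a, b)),
             side(ray, plucker(b, c)),
             side(ray, plucker(c, a))]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)
```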