HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,284,113
director@hcibib.org
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server 2015-05-12 and again 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: hilliges_o* Results: 28 Sorted by: Date
Records: 1 to 25 of 28
[1] Airways: Optimization-Based Planning of Quadrotor Trajectories according to High-Level User Goals Enabling End-Users and Designers / Gebhardt, Christoph / Hepp, Benjamin / Nägeli, Tobias / Stevšic, Stefan / Hilliges, Otmar Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.2508-2519
ACM Digital Library Link
Summary: In this paper we propose a computational design tool that allows end-users to create advanced quadrotor trajectories with a variety of application scenarios in mind. Our algorithm allows novice users to create quadrotor-based use cases without requiring deep knowledge of either quadrotor control or the underlying constraints of the target domain. To achieve this goal we propose an optimization-based method that generates feasible trajectories which can be flown in the real world. Furthermore, the method incorporates high-level human objectives into the planning of flight trajectories. An easy-to-use 3D design tool allows for quick specification and editing of trajectories as well as for intuitive exploration of the resulting solution space. We demonstrate the utility of our approach in several real-world application scenarios, including aerial videography, robotic light-painting and drone racing.

[2] DefSense: Computational Design of Customized Deformable Input Devices Shape Changing Displays / Bächer, Moritz / Hepp, Benjamin / Pece, Fabrizio / Kry, Paul G. / Bickel, Bernd / Thomaszewski, Bernhard / Hilliges, Otmar Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.3806-3816
ACM Digital Library Link
Summary: We present a novel optimization-based algorithm for the design and fabrication of customized, deformable input devices, capable of continuously sensing their deformation. We propose to embed piezoresistive sensing elements into flexible 3D printed objects. These sensing elements are then utilized to recover rich and natural user interactions at runtime. Designing such objects is a challenging and hard problem if attempted manually for all but the simplest geometries and deformations. Our method simultaneously optimizes the internal routing of the sensing elements and computes a mapping from low-level sensor readings to user-specified outputs in order to minimize reconstruction error. We demonstrate the power and flexibility of the approach by designing and fabricating a set of flexible input devices. Our results indicate that the optimization-based design greatly outperforms manual routings in terms of reconstruction accuracy and thus interaction fidelity.

[3] The Effect of Richer Visualizations on Code Comprehension Visualization Methods and Evaluation / Asenov, Dimitar / Hilliges, Otmar / Müller, Peter Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.5040-5045
ACM Digital Library Link
Summary: Researchers often introduce visual tools to programming environments in order to facilitate program comprehension, reduce navigation times, and help developers answer difficult questions. Syntax highlighting is the main visual lens through which developers perceive their code, and yet its effects and the effects of richer code presentations on code comprehension have not been evaluated systematically. We present a rigorous user study comparing mainstream syntax highlighting to two visually-enhanced presentations of code. Our results show that: (1) richer code visualizations reduce the time necessary to answer questions about code features, and (2) contrary to the subjective perception of developers, richer code visualizations do not lead to visual overload. Based on our results we outline practical recommendations for tool designers.

[4] Fast blur removal for wearable QR code scanners Towards new wearable applications / Sörös, Gábor / Semmler, Stephan / Humair, Luc / Hilliges, Otmar Proceedings of the 2015 International Symposium on Wearable Computers 2015-09-07 p.117-124
ACM Digital Library Link
Summary: We present a fast restoration-recognition algorithm for scanning motion-blurred QR codes on handheld and wearable devices. We blindly estimate the blur from the salient edges of the code in an iterative optimization scheme, alternating between image sharpening, blur estimation, and decoding. The restored image is constrained to exploit the properties of QR codes which ensures fast convergence. The checksum of the code allows early termination when the code is first readable and precludes false positive detections. General blur removal algorithms perform poorly in restoring visual codes and are slow even on high-performance PCs. The proposed algorithm achieves good reconstruction quality on QR codes and outperforms existing methods in terms of speed. We present PC and Android implementations of a complete QR scanner and evaluate the algorithm on synthetic and real test images. Our work indicates a promising step towards enterprise-grade scan performance with wearable devices.

[5] An Interactive System for Data Structure Development Software Engineering Tools / Ou, Jibin / Vechev, Martin / Hilliges, Otmar Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.1 p.3053-3062
ACM Digital Library Link
Summary: Data structure algorithms are of fundamental importance in teaching and software development, yet are difficult to understand. We propose a new approach for understanding, debugging and developing heap manipulating data structures. The key technical idea of our work is to combine deep parametric abstraction techniques emerging from the area of static analysis with interactive abstraction manipulation. Our approach bridges program analysis with HCI and enables new capabilities not possible before: i) online automatic visualization of the data structure in a way which captures its essential operation, thus enabling powerful local reasoning, and ii) fine-grained pen and touch gestures allowing for interactive control of the abstraction -- at any point the developer can pause the program, graphically interact with the data, and continue program execution. These features address some of the most pressing challenges in developing data structures. We implemented our approach in a Java-based system called FluiEdt and evaluated it with 27 developers. The results indicate that FluiEdt is more effective in helping developers find data structure errors than existing state-of-the-art IDEs (e.g. Eclipse) or pure visualization based approaches.

[6] Joint Estimation of 3D Hand Position and Gestures from Monocular Video for Mobile Interaction Mid-Air Gestures and Interaction / Song, Jie / Pece, Fabrizio / Sörös, Gábor / Koelle, Marion / Hilliges, Otmar Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.1 p.3657-3660
ACM Digital Library Link
Summary: We present a machine learning technique to recognize gestures and estimate metric depth of hands for 3D interaction, relying only on monocular RGB video input. We aim to enable spatial interaction with small, body-worn devices where rich 3D input is desired but the usage of conventional depth sensors is prohibitive due to their power consumption and size. We propose a hybrid classification-regression approach to learn and predict a mapping of RGB colors to absolute, metric depth in real time. We also classify distinct hand gestures, allowing for a variety of 3D interactions. We demonstrate our technique with three mobile interaction scenarios and evaluate the method quantitatively and qualitatively.

[7] In-air gestures around unmodified mobile devices Augmented reality I / Song, Jie / Sörös, Gábor / Pece, Fabrizio / Fanello, Sean Ryan / Izadi, Shahram / Keskin, Cem / Hilliges, Otmar Proceedings of the 2014 ACM Symposium on User Interface Software and Technology 2014-10-05 v.1 p.319-329
ACM Digital Library Link
Summary: We present a novel machine learning based algorithm extending the interaction space around mobile devices. The technique uses only the RGB camera now commonplace on off-the-shelf mobile devices. Our algorithm robustly recognizes a wide range of in-air gestures, supporting user variation and varying lighting conditions. We demonstrate that our algorithm runs in real-time on unmodified mobile devices, including resource-constrained smartphones and smartwatches. Our goal is not to replace the touchscreen as primary input device, but rather to augment and enrich the existing interaction vocabulary using gestures. While touch input works well for many scenarios, we demonstrate numerous interaction tasks such as mode switches, application and task management, menu selection and certain types of navigation, where such input can be either complemented or better served by in-air gestures. This removes screen real-estate issues on small touchscreens, and allows input to be expanded to the 3D space around the device. We present results for recognition accuracy (93% test and 98% train), impact of memory footprint and other model parameters. Finally, we report results from preliminary user evaluations, discuss advantages and limitations and conclude with directions for future work.

[8] Tangible and modular input device for character articulation Demonstrations / Jacobson, Alec / Panozzo, Daniele / Glauser, Oliver / Pradalier, Cedric / Hilliges, Otmar / Sorkine-Hornung, Olga Adjunct Proceedings of the 2014 ACM Symposium on User Interface Software and Technology 2014-10-05 v.2 p.45-46
ACM Digital Library Link
Summary: We present a modular, novel mechanical device for animation authoring. The pose of the device is sensed at interactive rates, enabling quick posing of characters rigged with a skeleton of arbitrary topology. The mapping between the physical device and virtual skeleton is computed semi-automatically guided by sparse user correspondences. Our demonstration allows visitors to experiment with our device and software, choosing from a variety of characters to control.

[9] Type-hover-swipe in 96 bytes: a motion sensing mechanical keyboard Novel keyboards / Taylor, Stuart / Keskin, Cem / Hilliges, Otmar / Izadi, Shahram / Helmes, John Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.1 p.1695-1704
ACM Digital Library Link
Summary: We present a new type of augmented mechanical keyboard, capable of sensing rich and expressive motion gestures performed both on and directly above the device. Our hardware comprises a low-resolution matrix of infrared (IR) proximity sensors interspersed between the keys of a regular mechanical keyboard. This results in coarse but high frame-rate motion data. We extend a machine learning algorithm, traditionally used for static classification only, to robustly support dynamic, temporal gestures. We propose the use of motion signatures, a technique that utilizes pairs of motion history images and a random forest based classifier to robustly recognize a large set of motion gestures on and directly above the keyboard. Our technique achieves a mean per-frame classification accuracy of 75.6% in leave-one-subject-out and 89.9% in half-test/half-training cross-validation. We detail our hardware and gesture recognition algorithm, provide performance and accuracy numbers, and demonstrate a large set of gestures designed to be performed with our device. We conclude with qualitative feedback from users, discussion of limitations and areas for future work.
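The motion signatures above build on motion history images (MHIs), a standard temporal-template technique in which pixels with recent motion are set to a maximum value and older motion fades out frame by frame. A minimal sketch of that update rule (the frame sizes, tau, and decay values here are illustrative assumptions, not the paper's parameters):

```python
def update_mhi(mhi, motion_mask, tau=255, decay=32):
    """Update a motion history image: pixels where motion was detected are
    set to tau; all other pixels fade toward zero by `decay` per frame."""
    return [
        [tau if m else max(0, h - decay) for h, m in zip(row_h, row_m)]
        for row_h, row_m in zip(mhi, motion_mask)
    ]

# Toy 2x3 sensor grid: motion sweeps left to right over two frames.
mhi = [[0, 0, 0], [0, 0, 0]]
mhi = update_mhi(mhi, [[1, 0, 0], [1, 0, 0]])   # motion in column 0
mhi = update_mhi(mhi, [[0, 1, 0], [0, 1, 0]])   # motion moves to column 1
# The newest motion is brightest (255); the older column has faded (223),
# encoding the direction of the sweep in a single image.
```

The resulting fading-intensity image is the kind of per-frame feature a random forest can classify, which is presumably why the paper pairs MHIs with that classifier.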

[10] Digits: freehand 3D interactions anywhere using a wrist-worn gloveless sensor Hands & fingers / Kim, David / Hilliges, Otmar / Izadi, Shahram / Butler, Alex D. / Chen, Jiawen / Oikonomidis, Iason / Olivier, Patrick Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012-10-07 v.1 p.167-176
ACM Digital Library Link
Summary: Digits is a wrist-worn sensor that recovers the full 3D pose of the user's hand. This enables a variety of freehand interactions on the move. The system targets mobile settings, and is specifically designed to be low-power and easily reproducible using only off-the-shelf hardware. The electronics are self-contained on the user's wrist, but optically image the entirety of the user's hand. This data is processed using a new pipeline that robustly samples key parts of the hand, such as the tips and lower regions of each finger. These sparse samples are fed into new kinematic models that leverage the biomechanical constraints of the hand to recover the 3D pose of the user's hand. The proposed system works without the need for full instrumentation of the hand (for example using data gloves), additional sensors in the environment, or depth cameras which are currently prohibitive for mobile scenarios due to power and form-factor considerations. We demonstrate the utility of Digits for a variety of application scenarios, including 3D spatial interaction with mobile devices, eyes-free interaction on-the-move, and gaming. We conclude with a quantitative and qualitative evaluation of our system, and discussion of strengths, limitations and future work.

[11] Steerable augmented reality with the beamatron Augmented reality / Wilson, Andrew / Benko, Hrvoje / Izadi, Shahram / Hilliges, Otmar Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012-10-07 v.1 p.413-422
ACM Digital Library Link
Summary: Steerable displays use a motorized platform to orient a projector to display graphics at any point in the room. Often a camera is included to recognize markers and other objects, as well as user gestures in the display volume. Such systems can be used to superimpose graphics onto the real world, and so are useful in a number of augmented reality and ubiquitous computing scenarios. We contribute the Beamatron, which advances steerable displays by drawing on recent progress in depth camera-based interactions. The Beamatron consists of a computer-controlled pan and tilt platform on which is mounted a projector and Microsoft Kinect sensor. While much previous work with steerable displays deals primarily with projecting corrected graphics onto a discrete set of static planes, we describe computational techniques that enable reasoning in 3D using live depth data. We show two example applications that are enabled by the unique capabilities of the Beamatron: an augmented reality game in which a player can drive a virtual toy car around a room, and a ubiquitous computing demo that uses speech and gesture to move projected graphics throughout the room.

[12] Interactive Environment-Aware Handheld Projectors for Pervasive Computing Spaces HCI / Molyneaux, David / Izadi, Shahram / Kim, David / Hilliges, Otmar / Hodges, Steve / Cao, Xiang / Butler, Alex / Gellersen, Hans Proceedings of Pervasive 2012: International Conference on Pervasive Computing 2012-06-18 p.197-215
Keywords: Handheld projection; geometry and spatial awareness; interaction
Link to Digital Content at Springer
Summary: This paper presents two novel handheld projector systems for indoor pervasive computing spaces. These projection-based devices are "aware" of their environment in ways not demonstrated previously. They offer both spatial awareness, where the system infers location and orientation of the device in 3D space, and geometry awareness, where the system constructs the 3D structure of the world around it, which can encompass the user as well as other physical objects, such as furniture and walls. Previous work in this area has predominantly focused on infrastructure-based spatial-aware handheld projection and interaction. Our prototypes offer greater levels of environment awareness, but achieve this using two opposing approaches; the first infrastructure-based and the other infrastructure-less sensing. We highlight a series of interactions including direct touch, as well as in-air gestures, which leverage the shadow of the user for interaction. We describe the technical challenges in realizing these novel systems; and compare them directly by quantifying their location tracking and input sensing capabilities.

[13] The role of physical controllers in motion video gaming Game design / Freeman, Dustin / Hilliges, Otmar / Sellen, Abigail / O'Hara, Kenton / Izadi, Shahram / Wood, Kenneth Proceedings of DIS'12: Designing Interactive Systems 2012-06-11 p.701-710
ACM Digital Library Link
Summary: Systems that detect the unaugmented human body allow players to interact without using a physical controller. But how is interaction altered by the absence of a physical input device? What is the impact on game performance, on a player's expectation of their ability to control the game, and on their game experience? In this study, we investigate these issues in the context of a table tennis video game. The results show that the impact of holding a physical controller, or indeed of the fidelity of that controller, does not appear in simple measures of performance. Rather, the difference between controllers is a function of the responsiveness of the game being controlled, as well as other factors to do with expectations, real world game experience and social context.

[14] At home with surface computing? Touch in context / Kirk, David / Izadi, Shahram / Hilliges, Otmar / Banks, Richard / Taylor, Stuart / Sellen, Abigail Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems 2012-05-05 v.1 p.159-168
ACM Digital Library Link
Summary: This paper describes a field study of an interactive surface deployed in three family homes. The tabletop technology provides a central place where digital content, such as photos, can be easily archived, managed and viewed. The tabletop affords multi-touch input, allowing digital content to be sorted, triaged and interacted with using one or two-handed interactions. A physics-based simulation adds dynamics to digital content, providing users with rich ways of interacting that borrows from the real-world. The field study is one of the first of a surface computer within a domestic environment. Our goal is to uncover people's interactions, appropriations, perceptions and experiences with such technologies, exploring the potential barriers to use. Given that these devices provide such a revolutionary shift in interaction, will people be able to engage with them in everyday life in the ways we intend? In answering this question, we hope to deepen our understanding of the design of such systems for home and consumer domains.

[15] Shake'n'sense: reducing interference for overlapping structured light depth cameras Sensory interaction modalities / Butler, D. Alex / Izadi, Shahram / Hilliges, Otmar / Molyneaux, David / Hodges, Steve / Kim, David Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems 2012-05-05 v.1 p.1933-1936
ACM Digital Library Link
Summary: We present a novel yet simple technique that mitigates the interference caused when multiple structured light depth cameras point at the same part of a scene. The technique is particularly useful for Kinect, where the structured light source is not modulated. Our technique requires only mechanical augmentation of the Kinect, without any need to modify the internal electronics, firmware or associated host software. It is therefore simple to replicate. We show qualitative and quantitative results highlighting the improvements made to interfering Kinect depth signals. The camera frame rate is not compromised, which is a problem in approaches that modulate the structured light source. Our technique is non-destructive and does not impact depth values or geometry. We discuss uses for our technique, in particular within instrumented rooms that require simultaneous use of multiple overlapping fixed Kinect cameras to support whole room interactions.

[16] HoloDesk: direct 3d interactions with a situated see-through display Morphing & tracking & stacking: 3D interaction / Hilliges, Otmar / Kim, David / Izadi, Shahram / Weiss, Malte / Wilson, Andrew Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems 2012-05-05 v.1 p.2421-2430
ACM Digital Library Link
Summary: HoloDesk is an interactive system combining an optical see-through display and Kinect camera to create the illusion that users are directly interacting with 3D graphics. A virtual image of a 3D scene is rendered through a half silvered mirror and spatially aligned with the real-world for the viewer. Users easily reach into an interaction volume displaying the virtual image. This allows the user to literally get their hands into the virtual display and to directly interact with a spatially aligned 3D virtual world, without the need for any specialized head-worn hardware or input device. We introduce a new technique for interpreting raw Kinect data to approximate and track rigid (e.g., books, cups) and non-rigid (e.g., hands, paper) physical objects and support a variety of physics-inspired interactions between virtual and real. In particular the algorithm models natural human grasping of virtual objects with more fidelity than previously demonstrated. A qualitative study highlights rich emergent 3D interactions, using hands and real-world objects. The implementation of HoloDesk is described in full, and example application scenarios explored. Finally, HoloDesk is quantitatively evaluated in a 3D target acquisition task, comparing the system with indirect and glasses-based variants.

[17] KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera 3D / Izadi, Shahram / Kim, David / Hilliges, Otmar / Molyneaux, David / Newcombe, Richard / Kohli, Pushmeet / Shotton, Jamie / Hodges, Steve / Freeman, Dustin / Davison, Andrew / Fitzgibbon, Andrew Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011-10-16 v.1 p.559-568
ACM Digital Library Link
Summary: KinectFusion enables a user holding and moving a standard Kinect camera to rapidly create detailed 3D reconstructions of an indoor scene. Only the depth data from Kinect is used to track the 3D pose of the sensor and reconstruct geometrically precise 3D models of the physical scene in real-time. The capabilities of KinectFusion, as well as the novel GPU-based pipeline are described in full. Uses of the core system for low-cost handheld scanning, and geometry-aware augmented reality and physics-based interactions are shown. Novel extensions to the core GPU pipeline demonstrate object segmentation and user interaction directly in front of the sensor, without degrading camera tracking or reconstruction. These extensions are used to enable real-time multi-touch interactions anywhere, allowing any planar or non-planar reconstructed physical surface to be appropriated for touch.
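KinectFusion's depth integration is widely known to rest on a weighted running average of truncated signed distance function (TSDF) values per voxel. A single-voxel toy of that update rule (the observation values, weights, and cap here are illustrative assumptions, not the system's actual pipeline or parameters):

```python
def fuse_tsdf(tsdf, weight, new_sdf, new_weight=1.0, max_weight=64.0):
    """Fuse one new truncated signed-distance observation into a voxel
    using a weighted running average; capping the weight keeps the model
    responsive to scene changes."""
    fused = (tsdf * weight + new_sdf * new_weight) / (weight + new_weight)
    return fused, min(weight + new_weight, max_weight)

# One voxel observed three times near a surface: noisy per-frame SDF
# samples average out toward the true surface distance.
t, w = 0.0, 0.0
for obs in (0.10, 0.06, 0.08):
    t, w = fuse_tsdf(t, w, obs)
# t converges to the mean of the observations (0.08); w accumulates to 3.0.
```

The zero-crossing of the fused TSDF along a camera ray is then the reconstructed surface, which is what the system's ray-casting step extracts for tracking and rendering.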

[18] Vermeer: direct interaction with a 360° viewable 3D display 3D / Butler, Alex / Hilliges, Otmar / Izadi, Shahram / Hodges, Steve / Molyneaux, David / Kim, David / Kong, Danny Proceedings of the 2011 ACM Symposium on User Interface Software and Technology 2011-10-16 v.1 p.569-576
ACM Digital Library Link
Summary: We present Vermeer, a novel interactive 360° viewable 3D display. Like prior systems in this area, Vermeer provides viewpoint-corrected, stereoscopic 3D graphics to simultaneous users, 360° around the display, without the need for eyewear or other user instrumentation. Our goal is to overcome an issue inherent in these prior systems, which -- typically due to moving parts -- restrict interactions to outside the display volume. Our system leverages a known optical illusion to demonstrate, for the first time, how users can reach into and directly touch 3D objects inside the display volume. Vermeer is intended to be a new enabling technology for interaction, and we therefore describe our hardware implementation in full, focusing on the challenges of combining this optical configuration with an existing approach for creating a 360° viewable 3D display. Initially we demonstrate direct in-volume interaction by sensing user input with a Kinect camera placed above the display. However, by exploiting the properties of the optical configuration, we also demonstrate novel prototypes for fully integrated input sensing alongside simultaneous display. We conclude by discussing limitations, implications for interaction, and ideas for future work.

[19] Opening up the family archive Collaboration in place / Kirk, David S. / Izadi, Shahram / Sellen, Abigail / Taylor, Stuart / Banks, Richard / Hilliges, Otmar Proceedings of ACM CSCW'10 Conference on Computer-Supported Cooperative Work 2010-02-06 p.261-270
Keywords: archiving, collaboration, domestic life, field study, home, interactive tabletops
ACM Digital Library Link
Summary: The Family Archive device is an interactive multi-touch tabletop technology with integrated capture facility for the archiving of sentimental artefacts and memorabilia. It was developed as a technology probe to help us open up current family archiving practices and to explore family archiving in situ. We detail the deployment and study of three of these devices in family homes and discuss how deploying a new, potentially disruptive, technology can foreground the social relations and organizing systems in domestic life. This in turn facilitates critical reflection on technology design.

[20] EDITED BOOK Tabletops -- Horizontal Interactive Displays Human-Computer Interaction Series / Müller-Tomfelde, Christian 2010 n.18 p.456 Springer London
DOI: 10.1007/978-1-84996-113-4
ISBN: 978-1-84996-112-7 (print), 978-1-84996-113-4 (online)
Link to Digital Content at Springer
== Under Tabletops ==
Building Interactive Multi-touch Surfaces (27-49)
	+ Schöning, Johannes
	+ Hook, Jonathan
	+ Bartindale, Tom
	+ Schmidt, Dominik
	+ Olivier, Patrick
	+ et al.
From Table-System to Tabletop: Integrating Technology into Interactive Surfaces (51-69)
	+ Kunz, Andreas
	+ Fjeld, Morten
High-Resolution Interactive Displays (71-100)
	+ Ashdown, Mark
	+ Tuddenham, Philip
	+ Robinson, Peter
Optical Design of Tabletop Displays and Interactive Applications (101-129)
	+ Kakehi, Yasuaki
	+ Naemura, Takeshi
Hand and Object Recognition on Liquid Crystal Displays (131-146)
	+ Koike, Hideki
	+ Sato, Toshiki
	+ Nishikawa, Wataru
	+ Fukuchi, Kentaro
== On and Above Tabletops ==
Augmenting Interactive Tabletops with Translucent Tangible Controls (149-170)
	+ Weiss, Malte
	+ Hollan, James D.
	+ Borchers, Jan
Active Tangible Interactions (171-187)
	+ Inami, Masahiko
	+ Sugimoto, Maki
	+ Thomas, Bruce H.
	+ Richter, Jan
Interaction on the Tabletop: Bringing the Physical to the Digital (189-221)
	+ Hilliges, Otmar
	+ Butz, Andreas
	+ Izadi, Shahram
	+ Wilson, Andrew D.
Supporting Atomic User Actions on the Table (223-247)
	+ Aliakseyeu, Dzmitry
	+ Subramanian, Sriram
	+ Alexander, Jason
Imprecision, Inaccuracy, and Frustration: The Tale of Touch Input (249-275)
	+ Benko, Hrvoje
	+ Wigdor, Daniel
On, Above, and Beyond: Taking Tabletops to the Third Dimension (277-299)
	+ Grossman, Tovi
	+ Wigdor, Daniel
== Around and Beyond Tabletops ==
Individual and Group Support in Tabletop Interaction Techniques (303-333)
	+ Nacenta, Miguel A.
	+ Pinelle, David
	+ Gutwin, Carl
	+ Mandryk, Regan
File System Access for Tabletop Interaction (335-355)
	+ Collins, Anthony
	+ Kay, Judy
Theory of Tabletop Territoriality (357-385)
	+ Scott, Stacey D.
	+ Carpendale, Sheelagh
Digital Tables for Collaborative Information Exploration (387-405)
	+ Isenberg, Petra
	+ Hinrichs, Uta
	+ Hancock, Mark
	+ Carpendale, Sheelagh
Coordination and Awareness in Remote Tabletop Collaboration (407-434)
	+ Tuddenham, Philip
	+ Robinson, Peter
Horizontal Interactive Surfaces in Distributed Assemblies (435-456)
	+ Müller-Tomfelde, Christian
	+ O'Hara, Kenton

[21] Exploring tangible and direct touch interfaces for manipulating 2D and 3D information on a digital table Tangible interfaces / Hancock, Mark / Hilliges, Otmar / Collins, Christopher / Baur, Dominikus / Carpendale, Sheelagh Proceedings of the 2009 ACM International Conference on Interactive Tabletops and Surfaces 2009-11-23 p.77-84
ACM Digital Library Link
Summary: On traditional tables, people often manipulate a variety of physical objects, both 2D in nature (e.g., paper) and 3D in nature (e.g., books, pens, models, etc.). Current advances in hardware technology for tabletop displays introduce the possibility of mimicking these physical interactions through direct-touch or tangible user interfaces. While both promise intuitive physical interaction, they are rarely discussed in combination in the literature. In this paper, we present a study that explores the advantages and disadvantages of tangible and touch interfaces, specifically in relation to one another. We discuss our results in terms of how effective each technique was for accomplishing both a 3D object manipulation task and a 2D information visualization exploration task. Results suggest that people can more quickly move and rotate objects in 2D with our touch interaction, but more effectively navigate the visualization using tangible interaction. We discuss how our results can be used to inform future designs of tangible and touch interaction.

[22] Interactions in the air: adding further depth to interactive tabletops Waiter, can you please bring me a fork? / Hilliges, Otmar / Izadi, Shahram / Wilson, Andrew D. / Hodges, Steve / Garcia-Mendoza, Armando / Butz, Andreas Proceedings of the 2009 ACM Symposium on User Interface Software and Technology 2009-10-04 p.139-148
Keywords: 3D, 3D graphics, computer vision, depth-sensing cameras, holoscreen, interactive surfaces, surfaces, switchable diffusers, tabletop
ACM Digital Library Link
Summary: Although interactive surfaces have many unique and compelling qualities, the interactions they support are by their very nature bound to the display surface. In this paper we present a technique for users to seamlessly switch between interacting on the tabletop surface and above it. Our aim is to leverage the space above the surface in combination with the regular tabletop display to allow more intuitive manipulation of digital content in three-dimensions. Our goal is to design a technique that closely resembles the ways we manipulate physical objects in the real-world; conceptually, allowing virtual objects to be 'picked up' off the tabletop surface in order to manipulate their three dimensional position or orientation. We chart the evolution of this technique, implemented on two rear projection-vision tabletops. Both use special projection screen materials to allow sensing at significant depths beyond the display. Existing and new computer vision techniques are used to sense hand gestures and postures above the tabletop, which can be used alongside more familiar multi-touch interactions. Interacting above the surface in this way opens up many interesting challenges. In particular it breaks the direct interaction metaphor that most tabletops afford. We present a novel shadow-based technique to help alleviate this issue. We discuss the strengths and limitations of our technique based on our own observations and initial user feedback, and provide various insights from comparing, and contrasting, our tabletop implementations.

[23] Getting sidetracked: display design and occasioning photo-talk with the photohelix Photos and life logging / Hilliges, Otmar / Kirk, David Shelby Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009-04-04 v.1 p.1733-1736
Keywords: photo-talk, photoware, randomness, sidetracking, tabletop
ACM Digital Library Link
Summary: In this paper we discuss some of our recent research work designing tabletop interfaces for co-located photo sharing. We draw particular attention to a specific feature of an interface design, which we have observed over an extensive number of uses, as facilitating an under-reported but nonetheless intriguing aspect of the photo-sharing experience -- namely the process of 'getting sidetracked'. Through a series of vignettes of interaction during photo-sharing sessions we demonstrate how users of our tabletop photoware system used peripheral presentation of topically incoherent photos to artfully initiate new photo-talk sequences in on-going discourse. From this we draw implications for the design of tabletop photo applications, and for the experiential analysis of such devices.

[24] Bringing physics to the surface Touch and pressure / Wilson, Andrew D. / Izadi, Shahram / Hilliges, Otmar / Garcia-Mendoza, Armando / Kirk, David Proceedings of the 2008 ACM Symposium on User Interface Software and Technology 2008-10-19 p.67-76
ACM Digital Library Link
Summary: This paper explores the intersection of emerging surface technologies, capable of sensing multiple contacts and often shape information, and advanced games physics engines. We define a technique for modeling the data sensed from such surfaces as input within a physics simulation. This affords the user the ability to interact with digital objects in ways analogous to manipulation of real objects. Our technique is capable of modeling both multiple contact points and more sophisticated shape information, such as the entire hand or other physical objects, and of mapping this user input to contact forces due to friction and collisions within the physics simulation. This enables a variety of fine-grained and casual interactions, supporting finger-based, whole-hand, and tangible input. We demonstrate how our technique can be used to add real-world dynamics to interactive surfaces such as a vision-based tabletop, creating a fluid and natural experience. Our approach hides from application developers many of the complexities inherent in using physics engines, allowing the creation of applications without preprogrammed interaction behavior or gesture recognition.
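The core idea here, feeding sensed contacts into a physics simulation as forces rather than as discrete events, can be illustrated with a toy 1D spring-damper that pulls a simulated object toward a tracked contact point. The constants and the semi-implicit Euler step are illustrative assumptions, not the paper's implementation:

```python
def contact_force(obj_pos, obj_vel, contact_pos, k=50.0, c=8.0):
    """Spring-damper force pulling a simulated object toward the sensed
    contact: a stiff spring tracks the finger, damping suppresses jitter."""
    return k * (contact_pos - obj_pos) - c * obj_vel

def step(pos, vel, contact_pos, dt=0.01, mass=1.0):
    """One semi-implicit Euler integration step under the contact force."""
    f = contact_force(pos, vel, contact_pos)
    vel += (f / mass) * dt
    pos += vel * dt
    return pos, vel

# A finger contact at x = 1.0 drags an object that starts at rest at x = 0.0.
pos, vel = 0.0, 0.0
for _ in range(500):
    pos, vel = step(pos, vel, 1.0)
# After 5 simulated seconds the object has settled at the contact point.
```

Because the coupling is a force inside the simulation, friction and collisions with other bodies emerge from the physics engine for free, which is the behavior the abstract describes.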

[25] Physical handles at the interactive surface: exploring tangibility and its benefits Surface-oriented interaction / Terrenghi, Lucia / Kirk, David / Richter, Hendrik / Krämer, Sebastian / Hilliges, Otmar / Butz, Andreas Proceedings of the 2008 International Conference on Advanced Visual Interfaces 2008-05-28 p.138-145
Keywords: GUI, design, hybrid, interfaces, tangible
ACM Digital Library Link
Summary: In this paper we investigate tangible interaction on interactive tabletops. These afford the support and integration of physical artefacts for the manipulation of digital media. To inform the design of interfaces for interactive surfaces we think it is necessary to deeply understand the benefits of employing such physical handles, i.e., the benefits of employing a third spatial dimension at the point of interaction.
    To this end we conducted an experimental study by designing and comparing two versions of an interactive tool on a tabletop display, one with a physical 3D handle, and one purely graphical (but direct touch enabled). Whilst hypothesizing that the 3D version would provide a number of benefits, our observations revealed that users developed diverse interaction approaches and attitudes about hybrid and direct touch interaction.