HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,284,118
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server 2015-05-12 and again 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: benko_h* Results: 54 Sorted by: Date
Records: 1 to 25 of 54
[1] Augmenting the Field-of-View of Head-Mounted Displays with Sparse Peripheral Displays Augmented AR and VR Experiences / Xiao, Robert / Benko, Hrvoje Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.1221-1232
ACM Digital Library Link
Summary: In this paper, we explore the concept of a sparse peripheral display, which augments the field-of-view of a head-mounted display with a lightweight, low-resolution, inexpensively produced array of LEDs surrounding the central high-resolution display. We show that sparse peripheral displays expand the available field-of-view up to 190° horizontal, nearly filling the human field-of-view. We prototyped two proof-of-concept implementations of sparse peripheral displays: a virtual reality headset, dubbed SparseLightVR, and an augmented reality headset, called SparseLightAR. Using SparseLightVR, we conducted a user study to evaluate the utility of our implementation, and a second user study to assess different visualization schemes in the periphery and their effect on simulator sickness. Our findings show that sparse peripheral displays are useful in conveying peripheral information and improving situational awareness, are generally preferred, and can help reduce motion sickness in nausea-susceptible people.
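The mapping from rendered content to the peripheral LED array can be sketched roughly as follows. This is a hypothetical sampling scheme, not the authors' implementation: it assumes the renderer produces a frame wider than the central display's field of view, and averages each off-screen band into one RGB value per LED (the actual SparseLight hardware arranges LEDs in a ring around the central display).

```python
import numpy as np

def peripheral_led_colors(wide_frame, n_leds_per_side=10):
    """Downsample the off-screen margins of a wide-FOV render into
    per-LED RGB values. Hypothetical layout: the outer sixth of the
    frame on each side is treated as peripheral, split into one
    horizontal band per LED."""
    h, w, _ = wide_frame.shape
    margin = w // 6  # assumed peripheral margin width
    leds = {"left": [], "right": []}
    for side, strip in (("left", wide_frame[:, :margin]),
                        ("right", wide_frame[:, w - margin:])):
        # one vertical band per LED; its mean color drives that LED
        for band in np.array_split(strip, n_leds_per_side, axis=0):
            leds[side].append(band.reshape(-1, 3).mean(axis=0))
    return leds
```

Averaging (rather than point-sampling) keeps the low-resolution periphery stable under small head motions, which matters for the simulator-sickness findings the paper reports.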

[2] SnapToReality: Aligning Augmented Reality to the Real World Augmented AR and VR Experiences / Nuernberger, Benjamin / Ofek, Eyal / Benko, Hrvoje / Wilson, Andrew D. Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.1233-1244
ACM Digital Library Link
Summary: Augmented Reality (AR) applications may require the precise alignment of virtual objects to the real world. We propose automatic alignment of virtual objects to physical constraints calculated from the real world in real time ("snapping to reality"). We demonstrate SnapToReality alignment techniques that allow users to position, rotate, and scale virtual content to dynamic, real world scenes. Our proof-of-concept prototype extracts 3D edge and planar surface constraints. We furthermore discuss the unique design challenges of snapping in AR, including the user's limited field of view, noise in constraint extraction, issues with changing the view in AR, visualizing constraints, and more. We also report the results of a user study evaluating SnapToReality, confirming that aligning objects to the real world is significantly faster when assisted by snapping to dynamically extracted constraints. Perhaps more importantly, we also found that snapping in AR enables a fresh and expressive form of AR content creation.

[3] Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experiences VR & Feedback / Azmandian, Mahdi / Hancock, Mark / Benko, Hrvoje / Ofek, Eyal / Wilson, Andrew D. Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.1968-1979
ACM Digital Library Link
Summary: Manipulating a virtual object with appropriate passive haptic cues provides a satisfying sense of presence in virtual reality. However, scaling such experiences to support multiple virtual objects is a challenge as each one needs to be accompanied with a precisely-located haptic proxy object. We propose a solution that overcomes this limitation by hacking human perception. We have created a framework for repurposing passive haptics, called haptic retargeting, that leverages the dominance of vision when our senses conflict. With haptic retargeting, a single physical prop can provide passive haptics for multiple virtual objects. We introduce three approaches for dynamically aligning physical and virtual objects: world manipulation, body manipulation and a hybrid technique which combines both world and body manipulation. Our study results indicate that all our haptic retargeting techniques improve the sense of presence when compared to typical wand-based 3D control of virtual objects. Furthermore, our hybrid haptic retargeting achieved the highest satisfaction and presence scores while limiting the visible side-effects during interaction.
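The body-manipulation variant described above can be sketched as a progress-weighted offset on the rendered hand. This is an assumed simplification of the paper's technique: the virtual hand is shifted by a fraction of the target/prop offset that grows as the real hand approaches, so the virtual hand lands on the virtual object exactly when the real hand lands on the physical prop.

```python
import numpy as np

def warped_hand_position(real_hand, start, virtual_target, physical_prop):
    """Body-warping sketch (assumed from the paper's description).
    Returns where to render the hand: identical to the real hand at
    the start of the reach, drifting toward the virtual target so
    that full progress toward the prop maps onto the target."""
    real_hand = np.asarray(real_hand, float)
    start = np.asarray(start, float)
    virtual_target = np.asarray(virtual_target, float)
    physical_prop = np.asarray(physical_prop, float)
    total = np.linalg.norm(physical_prop - start)  # assumed nonzero
    progress = np.clip(np.linalg.norm(real_hand - start) / total, 0.0, 1.0)
    offset = virtual_target - physical_prop  # where vision must diverge
    return real_hand + progress * offset     # rendered (virtual) hand
```

Ramping the offset with progress keeps the discrepancy below what vision-dominated proprioception notices, which is the perceptual trick the abstract calls "hacking human perception".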

[4] Pre-Touch Sensing for Mobile Interaction Touch Interaction / Hinckley, Ken / Heo, Seongkook / Pahud, Michel / Holz, Christian / Benko, Hrvoje / Sellen, Abigail / Banks, Richard / O'Hara, Kenton / Smyth, Gavin / Buxton, William Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.2869-2881

ACM Digital Library Link
Summary: Touchscreens continue to advance including progress towards sensing fingers proximal to the display. We explore this emerging pre-touch modality via a self-capacitance touchscreen that can sense multiple fingers above a mobile device, as well as grip around the screen's edges. This capability opens up many possibilities for mobile interaction. For example, using pre-touch in an anticipatory role affords an "ad-lib interface" that fades in a different UI -- appropriate to the context -- as the user approaches one-handed with a thumb, two-handed with an index finger, or even with a pinch or two thumbs. Or we can interpret pre-touch in a retroactive manner that leverages the approach trajectory to discern whether the user made contact with a ballistic vs. a finely-targeted motion. Pre-touch also enables hybrid touch + hover gestures, such as selecting an icon with the thumb while bringing a second finger into range to invoke a context menu at a convenient location. Collectively these techniques illustrate how pre-touch sensing offers an intriguing new back-channel for mobile interaction.

[5] Haptic Retargeting Video Showcase: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experience Video Showcase Presentations / Azmandian, Mahdi / Hancock, Mark / Benko, Hrvoje / Ofek, Eyal / Wilson, Andrew D. Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.3
ACM Digital Library Link
Summary: Manipulating a virtual object with appropriate passive haptic cues provides a satisfying sense of presence in virtual reality. However, scaling such experiences to support multiple virtual objects is a challenge as each one needs to be accompanied with a precisely-located haptic proxy object. We showcase a solution that overcomes this limitation by hacking human perception. Our framework for repurposing passive haptics, called haptic retargeting, leverages the dominance of vision when our senses conflict. With haptic retargeting, a single physical prop can provide passive haptics for multiple virtual objects. We introduce three approaches for dynamically aligning physical and virtual objects: body manipulation, world manipulation and a hybrid technique which combines both world and body warping. This video accompanies our CHI paper.

[6] A Demonstration of Haptic Retargeting: Dynamic Repurposing of Passive Haptics for Enhanced Virtual Reality Experience Interactivity Demos / Azmandian, Mahdi / Hancock, Mark / Benko, Hrvoje / Ofek, Eyal / Wilson, Andrew D. Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.3647-3650
ACM Digital Library Link
Summary: Manipulating a virtual object with appropriate passive haptic cues provides a satisfying sense of presence in virtual reality. However, scaling such experiences to support multiple virtual objects is a challenge as each one needs to be accompanied with a precisely-located haptic proxy object. We showcase a solution that overcomes this limitation by hacking human perception. Our framework for repurposing passive haptics, called haptic retargeting, leverages the dominance of vision when our senses conflict. With haptic retargeting, a single physical prop can provide passive haptics for multiple virtual objects. We introduce three approaches for dynamically aligning physical and virtual objects: body manipulation, world manipulation and a hybrid technique which combines both world and body manipulation. This demonstration accompanies our CHI 2016 paper.

[7] Room2Room: Enabling Life-Size Telepresence in a Projected Augmented Reality Environment Rich Telepresence / Pejsa, Tomislav / Kantor, Julian / Benko, Hrvoje / Ofek, Eyal / Wilson, Andrew Proceedings of ACM CSCW 2016 Conference on Computer-Supported Cooperative Work and Social Computing 2016-02-27 v.1 p.1716-1725
ACM Digital Library Link
Summary: Room2Room is a telepresence system that leverages projected augmented reality to enable life-size, co-present interaction between two remote participants. Our solution recreates the experience of a face-to-face conversation by performing 3D capture of the local user with color + depth cameras and projecting their life-size virtual copy into the remote space. This creates an illusion of the remote person's physical presence in the local space, as well as a shared understanding of verbal and non-verbal cues (e.g., gaze, pointing). In addition to the technical details of two prototype implementations, we contribute strategies for projecting remote participants onto physically plausible locations, such that they form a natural and consistent conversational formation with the local participant. We also present observations and feedback from an evaluation with 7 pairs of participants on the usability of our solution for solving a collaborative, physical task.

[8] FoveAR: Combining an Optically See-Through Near-Eye Display with Projector-Based Spatial Augmented Reality Session 2B: 3D & Augmented Reality / Benko, Hrvoje / Ofek, Eyal / Zheng, Feng / Wilson, Andrew D. Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-05 v.1 p.129-135
ACM Digital Library Link
Summary: Optically see-through (OST) augmented reality glasses can overlay spatially-registered computer-generated content onto the real world. However, current optical designs and weight considerations limit their diagonal field of view to less than 40 degrees, making it difficult to create a sense of immersion or give the viewer an overview of the augmented reality space. We combine OST glasses with a projection-based spatial augmented reality display to achieve a novel display hybrid, called FoveAR, capable of greater than 100 degrees field of view, view dependent graphics, extended brightness and color, as well as interesting combinations of public and personal data display. We contribute details of our prototype implementation and an analysis of the interactive design space that our system enables. We also contribute four prototype experiences showcasing the capabilities of FoveAR as well as preliminary user feedback providing insights for enhancing future FoveAR experiences.

[9] Sensing Tablet Grasp + Micro-mobility for Active Reading Session 7A: Wearable and Mobile Interactions / Yoon, Dongwook / Hinckley, Ken / Benko, Hrvoje / Guimbretière, François / Irani, Pourang / Pahud, Michel / Gavriliu, Marcel Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-05 v.1 p.477-487
ACM Digital Library Link
Summary: The orientation and repositioning of physical artefacts (such as paper documents) to afford shared viewing of content, or to steer the attention of others to specific details, is known as micro-mobility. But the role of grasp in micro-mobility has rarely been considered, much less sensed by devices. We therefore employ capacitive grip sensing and inertial motion to explore the design space of combined grasp + micro-mobility by considering three classes of technique in the context of active reading. Single user, single device techniques support grip-influenced behaviors such as bookmarking a page with a finger, but combine this with physical embodiment to allow flipping back to a previous location. Multiple user, single device techniques, such as passing a tablet to another user or working side-by-side on a single device, add fresh nuances of expression to co-located collaboration. And single user, multiple device techniques afford facile cross-referencing of content across devices. Founded on observations of grasp and micro-mobility, these techniques open up new possibilities for both individual and collaborative interaction with electronic documents.

[10] CrossMotion: Fusing Device and Image Motion for User Identification, Tracking and Device Association Poster Session 1 / Wilson, Andrew D. / Benko, Hrvoje Proceedings of the 2014 International Conference on Multimodal Interaction 2014-11-12 p.216-223
ACM Digital Library Link
Summary: Identifying and tracking people and mobile devices indoors has many applications, but is still a challenging problem. We introduce a cross-modal sensor fusion approach to track mobile devices and the users carrying them. The CrossMotion technique matches the acceleration of a mobile device, as measured by an onboard inertial measurement unit, to similar acceleration observed in the infrared and depth images of a Microsoft Kinect v2 camera. This matching process is conceptually simple and avoids many of the difficulties typical of more common appearance-based approaches. In particular, CrossMotion does not require a model of the appearance of either the user or the device, nor in many cases a direct line of sight to the device. We demonstrate a real-time implementation that can be applied to many ubiquitous computing scenarios. In our experiments, CrossMotion found the person's body 99% of the time, on average within 7cm of a reference device position.
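The acceleration-matching idea can be sketched as follows. This is a simplified assumption about the method: camera-side acceleration is recovered by twice differencing each tracked body's position, and the device is associated with the body whose acceleration magnitude correlates best with the IMU signal (the actual system matches acceleration vectors in a shared world frame, not just magnitudes).

```python
import numpy as np

def match_device_to_body(device_accel, body_tracks, fps=30.0):
    """CrossMotion-style matching sketch.
    device_accel: (T,) acceleration magnitudes from the phone's IMU.
    body_tracks: dict of body_id -> (T, 3) camera-space positions.
    Returns the body whose motion best correlates with the device."""
    dev = device_accel - device_accel.mean()
    best_id, best_score = None, -np.inf
    for body_id, pos in body_tracks.items():
        # second finite difference of position -> acceleration magnitude
        acc = np.linalg.norm(np.diff(pos, n=2, axis=0), axis=1) * fps * fps
        acc = acc - acc.mean()
        denom = np.linalg.norm(dev[:len(acc)]) * np.linalg.norm(acc)
        score = float(dev[:len(acc)] @ acc) / denom if denom else -np.inf
        if score > best_score:
            best_id, best_score = body_id, score
    return best_id, best_score
```

Because only motion is compared, a stationary or constant-velocity body (zero acceleration) can never win the match, which is why no appearance model of the user or device is needed.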

[11] Sensing techniques for tablet+stylus interaction Input techniques / Hinckley, Ken / Pahud, Michel / Benko, Hrvoje / Irani, Pourang / Guimbretière, François / Gavriliu, Marcel / Chen, Xiang 'Anthony' / Matulic, Fabrice / Buxton, William / Wilson, Andrew Proceedings of the 2014 ACM Symposium on User Interface Software and Technology 2014-10-05 v.1 p.605-614
ACM Digital Library Link
Summary: We explore grip and motion sensing to afford new techniques that leverage how users naturally manipulate tablet and stylus devices during pen + touch interaction. We can detect whether the user holds the pen in a writing grip or tucked between his fingers. We can distinguish bare-handed inputs, such as drag and pinch gestures produced by the nonpreferred hand, from touch gestures produced by the hand holding the pen, which necessarily impart a detectable motion signal to the stylus. We can sense which hand grips the tablet, and determine the screen's relative orientation to the pen. By selectively combining these signals and using them to complement one another, we can tailor interaction to the context, such as by ignoring unintentional touch inputs while writing, or supporting contextually-appropriate tools such as a magnifier for detailed stroke work that appears when the user pinches with the pen tucked between his fingers. These and other techniques can be used to impart new, previously unanticipated subtleties to pen + touch interaction on tablets.

[12] RoomAlive: magical experiences enabled by scalable, adaptive projector-camera units Augmented reality II / Jones, Brett / Sodhi, Rajinder / Murdock, Michael / Mehra, Ravish / Benko, Hrvoje / Wilson, Andrew / Ofek, Eyal / MacIntyre, Blair / Raghuvanshi, Nikunj / Shapira, Lior Proceedings of the 2014 ACM Symposium on User Interface Software and Technology 2014-10-05 v.1 p.637-644
ACM Digital Library Link
Summary: RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented entertainment experience. Our system enables new interactive projection mapping experiences that dynamically adapt content to any room. Users can touch, shoot, stomp, dodge and steer projected content that seamlessly co-exists with their existing physical environment. The basic building blocks of RoomAlive are projector-depth camera units, which can be combined through a scalable, distributed framework. The projector-depth camera units are individually auto-calibrating, self-localizing, and create a unified model of the room with no user intervention. We investigate the design space of gaming experiences that are possible with RoomAlive and explore methods for dynamically mapping content based on room layout and user position. Finally, we showcase four experience prototypes that demonstrate the novel interactive experiences that are possible with RoomAlive and discuss the design challenges of adapting any game to any room.

[13] Dyadic projected spatial augmented reality Augmented reality II / Benko, Hrvoje / Wilson, Andrew D. / Zannier, Federico Proceedings of the 2014 ACM Symposium on User Interface Software and Technology 2014-10-05 v.1 p.645-655
ACM Digital Library Link
Summary: Mano-a-Mano is a unique spatial augmented reality system that combines dynamic projection mapping, multiple perspective views and device-less interaction to support face to face, or dyadic, interaction with 3D virtual objects. Its main advantage over more traditional AR approaches, such as handheld devices with composited graphics or see-through head worn displays, is that users are able to interact with 3D virtual objects and each other without cumbersome devices that obstruct face to face interaction. We detail our prototype system and a number of interactive experiences. We present an initial user experiment that shows that participants are able to deduce the size and distance of a virtual projected object. A second experiment shows that participants are able to infer which of a number of targets the other user indicates by pointing.

[14] TouchMover: actuated 3D touchscreen with haptic feedback ITS'13 best paper & ITS'13 best note / Sinclair, Mike / Pahud, Michel / Benko, Hrvoje Proceedings of the 2013 ACM International Conference on Interactive Tabletops and Surfaces 2013-10-06 p.287-296
ACM Digital Library Link
Summary: This paper presents the design and development of a novel visual+haptic device that co-locates 3D stereo visualization, direct touch and touch force sensing with a robotically actuated display. Our actuated immersive 3D display, called TouchMover, is capable of providing 1D movement (up to 36cm) and force feedback (up to 230N) in a single dimension, perpendicular to the screen plane. In addition to describing the details of our design, we showcase how TouchMover allows the user to: 1) interact with 3D objects by pushing them on the screen with realistic force feedback, 2) touch and feel the contour of a 3D object, 3) explore and annotate volumetric medical images (e.g., MRI brain scans) and 4) experience different activation forces and stiffness when interacting with common 2D on-screen elements (e.g., buttons). We also contribute the results of an experiment which demonstrates the effectiveness of the haptic output of our device. Our results show that people are capable of disambiguating between 10 different 3D shapes with the same 2D footprint by touching alone and without any visual feedback (85% recognition rate, 12 participants).

[15] Motion and context sensing techniques for pen computing Input 1: pens and consistency / Hinckley, Ken / Chen, Xiang 'Anthony' / Benko, Hrvoje Proceedings of the 2013 Conference on Graphics Interface 2013-05-29 p.71-78
ACM Digital Library Link
Summary: We explore techniques for a slender and untethered stylus prototype enhanced with a full suite of inertial sensors (three-axis accelerometer, gyroscope, and magnetometer). We present a taxonomy of enhanced stylus input techniques and consider a number of novel possibilities that combine motion sensors with pen stroke and touchscreen inputs on a pen + touch slate. These inertial sensors enable motion-gesture inputs, as well as sensing the context of how the user is holding or using the stylus, even when the pen is not in contact with the tablet screen. Our initial results suggest that sensor-enhanced stylus input offers a potentially rich modality to augment interaction with slate computers.

[16] Understanding touch selection accuracy on flat and hemispherical deformable surfaces Input 2: haptic and gestures / Bacim, Felipe / Sinclair, Mike / Benko, Hrvoje Proceedings of the 2013 Conference on Graphics Interface 2013-05-29 p.197-204
ACM Digital Library Link
Summary: Touch technology is rapidly evolving, and soon deformable, movable and malleable touch interfaces may be part of everyday computing. While there has been a lot of work on understanding touch interactions on flat surfaces, as well as recent work about pointing on curved surfaces, little is known about how surface deformation affects touch interactions. This paper presents a study of how different features of deformable surfaces affect touch selection accuracy, both in terms of position and control of the deformation distance, which refers to the distance traveled by the finger when deforming the surface. We conducted three separate user studies, investigating how touch interactions on a deformable surface are affected not only by the compliant force feedback generated by the elastic surface, but also by the use of visual feedback, the use of a tactile delimiter to indicate the maximum deformation distance, and the use of hemispherical surface shape. The results indicate that, when provided with visual feedback, users can achieve sub-millimeter precision for deformation distance. In addition, without visual feedback, users tend to overestimate deformation distance, especially in conditions that require less deformation and therefore provide less surface tension. While the use of a tactile delimiter to indicate maximum deformation improves the distance estimation accuracy, it does not eliminate overestimation. Finally, the shape of the surface also affects touch selection accuracy for both touch position and deformation distance.

[17] IllumiRoom: peripheral projected illusions for interactive experiences Video showcase presentations / Jones, Brett R. / Benko, Hrvoje / Ofek, Eyal / Wilson, Andrew D. Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing Systems 2013-04-27 v.2 p.2825-2826
ACM Digital Library Link
Summary: IllumiRoom is a proof-of-concept system that augments the area surrounding a television with projected visualizations to enhance traditional gaming experiences. Our system demonstrates how projected visualizations in the periphery can negate, include, or augment the existing physical environment and complement the content displayed on the television screen. We can change the appearance of the room, induce apparent motion, extend the field of view, and enable entirely new physical gaming experiences. Our system is entirely self-calibrating and is designed to work in any room.

[18] Displays take new shape: an agenda for future interactive surfaces Workshop summaries / Steimle, Jürgen / Benko, Hrvoje / Cassinelli, Alvaro / Ishii, Hiroshi / Leithinger, Daniel / Maes, Pattie / Poupyrev, Ivan Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing Systems 2013-04-27 v.2 p.3283-3286
ACM Digital Library Link
Summary: This workshop provides a forum for discussing emerging trends in interactive surfaces that leverage alternative display types and form factors to enable more expressive interaction with information. The goal of the workshop is to push the current discussion forward towards a synthesis of emerging visualization and interaction concepts in the area of improvised, minimal, curved and malleable interactive surfaces. By doing so, we aim to generate an agenda for future research and development in interactive surfaces.

[19] IllumiRoom: peripheral projected illusions for interactive experiences Papers: interacting around devices / Jones, Brett R. / Benko, Hrvoje / Ofek, Eyal / Wilson, Andrew D. Proceedings of ACM CHI 2013 Conference on Human Factors in Computing Systems 2013-04-27 v.1 p.869-878
ACM Digital Library Link
Summary: IllumiRoom is a proof-of-concept system that augments the area surrounding a television with projected visualizations to enhance traditional gaming experiences. We investigate how projected visualizations in the periphery can negate, include, or augment the existing physical environment and complement the content displayed on the television screen. Peripheral projected illusions can change the appearance of the room, induce apparent motion, extend the field of view, and enable entirely new physical gaming experiences. Our system is entirely self-calibrating and is designed to work in any room. We present a detailed exploration of the design space of peripheral projected illusions and we demonstrate ways to trigger and drive such illusions from gaming content. We also contribute specific feedback from two groups of target users (10 gamers and 15 game designers); providing insights for enhancing game experiences through peripheral projected illusions.

[20] The Design of Organic User Interfaces: Shape, Sketching and Hypercontext Organic User Interfaces / Holman, David / Girouard, Audrey / Benko, Hrvoje / Vertegaal, Roel Interacting with Computers 2013-03 v.25 n.2 p.133-142
iwc.oxfordjournals.org/content/25/2/133
Summary: With the emergence of flexible display technologies, it will be necessary for interface designers to move beyond flat interfaces and to contextualize interaction in an object's physical shape. Grounded in early explorations of organic user interfaces (OUIs), this paper examines the evolving relationship between industrial and interaction designs and examines how not only what we design is changing, but how we design too. First, we discuss how (and why) to better support the design of OUIs: how supporting sketching, a fundamental activity of many design fields, is increasingly critical and why a 'hypercontextualized' approach to their design can reduce the drawbacks met when everyday objects become interactive. Finally, underlying both these points is the maturation of technology to that of a computational material; when interactive hardware is seamlessly melded into an object's shape, the 'computer' disappears and is better seen as a basic design material that, incidentally, happens to have interactive behavior.

[21] Steerable augmented reality with the beamatron Augmented reality / Wilson, Andrew / Benko, Hrvoje / Izadi, Shahram / Hilliges, Otmar Proceedings of the 2012 ACM Symposium on User Interface Software and Technology 2012-10-07 v.1 p.413-422
ACM Digital Library Link
Summary: Steerable displays use a motorized platform to orient a projector to display graphics at any point in the room. Often a camera is included to recognize markers and other objects, as well as user gestures in the display volume. Such systems can be used to superimpose graphics onto the real world, and so are useful in a number of augmented reality and ubiquitous computing scenarios. We contribute the Beamatron, which advances steerable displays by drawing on recent progress in depth camera-based interactions. The Beamatron consists of a computer-controlled pan and tilt platform on which is mounted a projector and Microsoft Kinect sensor. While much previous work with steerable displays deals primarily with projecting corrected graphics onto a discrete set of static planes, we describe computational techniques that enable reasoning in 3D using live depth data. We show two example applications that are enabled by the unique capabilities of the Beamatron: an augmented reality game in which a player can drive a virtual toy car around a room, and a ubiquitous computing demo that uses speech and gesture to move projected graphics throughout the room.

[22] LightGuide: projected visualizations for hand movement guidance Curves & mirages: gestures & interaction with nonplanar surfaces / Sodhi, Rajinder / Benko, Hrvoje / Wilson, Andrew Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems 2012-05-05 v.1 p.179-188
ACM Digital Library Link
Summary: LightGuide is a system that explores a new approach to gesture guidance where we project guidance hints directly on a user's body. These projected hints guide the user in completing the desired motion with their body part which is particularly useful for performing movements that require accuracy and proper technique, such as during exercise or physical therapy. Our proof-of-concept implementation consists of a single low-cost depth camera and projector and we present four novel interaction techniques that are focused on guiding a user's hand in mid-air. Our visualizations are designed to incorporate both feedback and feedforward cues to help guide users through a range of movements. We quantify the performance of LightGuide in a user study comparing each of our on-body visualizations to hand animation videos on a computer display in both time and accuracy. Exceeding our expectations, participants performed movements with an average error of 21.6mm, nearly 85% more accurately than when guided by video.

[23] MirageTable: freehand interaction on a projected augmented reality tabletop Curves & mirages: gestures & interaction with nonplanar surfaces / Benko, Hrvoje / Jota, Ricardo / Wilson, Andrew Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems 2012-05-05 v.1 p.199-208
ACM Digital Library Link
Summary: Instrumented with a single depth camera, a stereoscopic projector, and a curved screen, MirageTable is an interactive system designed to merge real and virtual worlds into a single spatially registered experience on top of a table. Our depth camera tracks the user's eyes and performs a real-time capture of both the shape and the appearance of any object placed in front of the camera (including user's body and hands). This real-time capture enables perspective stereoscopic 3D visualizations to a single user that account for deformations caused by physical objects on the table. In addition, the user can interact with virtual objects through physically-realistic freehand actions without any gloves, trackers, or instruments. We illustrate these unique capabilities through three application examples: virtual 3D model creation, interactive gaming with real and virtual objects, and a 3D teleconferencing experience that not only presents a 3D view of a remote person, but also a seamless 3D shared task space. We also evaluated the user's perception of projected 3D objects in our system, which confirmed that the users can correctly perceive such objects even when they are projected over different background colors and geometries (e.g., gaps, drops).

[24] The 3rd dimension of CHI (3DCHI): touching and designing 3D user interfaces Workshop summaries / Steinicke, Frank / Benko, Hrvoje / Krüger, Antonio / Keefe, Daniel / de la Riviére, Jean-Baptiste / Anderson, Ken / Häkkilä, Jonna / Arhippainen, Leena / Pakanen, Minna Extended Abstracts of ACM CHI'12 Conference on Human Factors in Computing Systems 2012-05-05 v.2 p.2695-2698
ACM Digital Library Citation
Summary: In recent years, 3D has gained an increasing amount of attention -- interactive visualization of 3D data has become increasingly important and widespread due to the requirements of several application areas, and the entertainment industry has brought the 3D experience within the reach of wide audiences through games, 3D movies and stereoscopic displays. However, current user interfaces (UIs) often lack adequate support for 3D interactions: 2D metaphors still dominate in GUI design, 2D desktop systems are often limited in cases where natural interaction with 3D content is required, and sophisticated 3D user interfaces consisting of stereoscopic projections and tracked input devices are rarely adopted by ordinary users. In the future, novel interaction design solutions are needed to better support natural interaction and utilize the special features of 3D technologies.
    In this workshop we address the research and industrial challenges involved in exploring the space where the flat digital world of surface computing meets the physical, spatially complex, 3D space in which we live. The workshop will provide a common forum for researchers to share their visions of the future and recent results in the area of improving 3D interaction and UI design.

[25] Enhancing naturalness of pen-and-tablet drawing through context sensing Graspable interfaces / Sun, Minghui / Cao, Xiang / Song, Hyunyoung / Izadi, Shahram / Benko, Hrvoje / Guimbretiere, Francois / Ren, Xiangshi / Hinckley, Ken Proceedings of the 2011 ACM International Conference on Interactive Tabletops and Surfaces 2011-11-13 p.83-86
ACM Digital Library Link
Summary: Among artists and designers, the pen-and-tablet combination is widely used for creating digital drawings, as digital pens outperform other input devices in replicating the experience of physical drawing tools. In this paper, we explore how contextual information such as the relationship between the hand, the pen, and the tablet can be leveraged in the digital drawing experience to further enhance its naturalness. By embedding sensors in the pen and the tablet to sense and interpret these contexts, we demonstrate how several physical drawing practices can be reflected and assisted in digital interaction scenarios.