HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,284,117
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server 2015-05-12 and again 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: alexander_j* Results: 46 Sorted by: Date
Records: 1 to 25 of 46
[1] ShapeCanvas: An Exploration of Shape-Changing Content Generation by Members of the Public Embodied Interaction / Everitt, Aluna / Taher, Faisal / Alexander, Jason Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.2778-2782
ACM Digital Library Link
Summary: Shape-changing displays -- visual output surfaces with physically-reconfigurable geometry -- provide new challenges for content generation. Content design must incorporate visual elements and physical surface shape, react to user input, and adapt these parameters over time. The addition of the 'shape channel' significantly increases the complexity of content design, but provides a powerful platform for novel physical design, animations, and physicalizations. In this work we use ShapeCanvas, a 4×4 grid of large actuated pixels, combined with simple interactions, to explore novice user behavior and interactions in shape-changing content design. We deployed ShapeCanvas in a café for two and a half days and observed users generate 21 physical animations. These were grouped into seven categories, with eight animations directly derived from people's personal interests. This paper describes these experiences and the generated animations, and provides initial insights into shape-changing content design.

[2] Partially-indirect Bimanual Input with Gaze, Pen, and Touch for Pan, Zoom, and Ink Interaction Touch Interaction / Pfeuffer, Ken / Alexander, Jason / Gellersen, Hans Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.2845-2856
ACM Digital Library Link
Summary: Bimanual pen and touch UIs are mainly based on the direct manipulation paradigm. We propose an alternative: partially-indirect bimanual input, where direct pen input is used with the dominant hand and indirect-touch input with the non-dominant hand. As direct and indirect inputs do not overlap, users can interact in the same space without interference. We investigate two indirect-touch techniques combined with direct pen input: the first redirects touches to the user's gaze position, and the second redirects touches to the pen position. In this paper, we present an empirical user study comparing both partially-indirect techniques to direct pen and touch input in bimanual pan, zoom, and ink tasks. Our experimental results show that users are comparably fast with the indirect techniques, but more accurate, as they can dynamically change the zoom target during indirect zoom gestures. Further, our studies reveal that direct and indirect zoom gestures have distinct characteristics regarding spatial use, gestural use, and bimanual parallelism.

[3] Sharing Perspectives on the Design of Shape-Changing Interfaces Workshop Summaries / Strohmeier, Paul / Gomes, Antonio / Troiano, Giovanni Maria / Mottelson, Aske / Merritt, Timothy / Alexander, Jason Extended Abstracts of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.2 p.3492-3499
ACM Digital Library Link
Summary: In recent years, several workshops and an increasing number of scientific publications have focused on shape-changing interfaces. This work has explored prototypes, theory, and evaluations across a variety of domains, including aesthetic experience, affective computing, adaptive affordances, data visualisation, and remote communication support, to name only a few. The aim of this workshop is to bring to light and discuss the different underlying perspectives and visions on shape-changing interfaces within the community, arriving at a shared, cross-discipline vocabulary for discussing the design space. Participants will share their personal perspectives and explore others' perspectives through hands-on prototyping and facilitated sketching tasks. Leaving this workshop, participants will be equipped with a clearer understanding of the different concepts being explored within the community and with a vocabulary through which to describe the intricacies and considerations of their work in the future.

[4] Tangible Data, explorations in data physicalization Studio-Workshops / Hogan, Trevor / Hornecker, Eva / Stusak, Simon / Jansen, Yvonne / Alexander, Jason / Moere, Andrew Vande / Hinrichs, Uta / Nolan, Kieran Proceedings of the 2016 International Conference on Tangible and Embedded Interaction 2016-02-14 p.753-756
ACM Digital Library Link
Summary: Humans have represented data in many forms for thousands of years, yet the main sensory channel we use to perceive these representations remains largely exclusive to sight. Recent developments, such as advances in digital fabrication, microcontrollers, actuated tangibles, and shape-changing interfaces, offer new opportunities to encode data in physical forms and have stimulated the emergence of 'Data Physicalization' as a research area.
    The aim of this workshop is (1) to create an awareness of the potential of Data Physicalization by providing an overview of state-of-the-art research, practice, and tools and (2) to build a community around this emerging field and start to discuss a shared research agenda. This workshop therefore addresses both experienced researchers and practitioners as well as those who are new to the field but interested in applying Data Physicalization to their own (research) practice. The workshop will provide opportunities for participants to explore Data Physicalization hands-on, by creating their own prototypes. These practical explorations will lead into reflective discussions on the role tangibles and embodiment play in Data Physicalization and the future research challenges for this area.

[5] A Public Ideation of Shape-Changing Applications Session 9: Latency and Shape Change / Sturdee, Miriam / Hardy, John / Dunn, Nick / Alexander, Jason Proceedings of the 2015 ACM International Conference on Interactive Tabletops and Surfaces 2015-11-15 p.219-228
ACM Digital Library Link
Summary: The shape-changing concept, where objects reconfigure their physical geometry, has the potential to transform our interactions with computing devices, displays, and everyday artifacts. The dynamic physicality of such objects capitalizes on our inherent tactile sense and facilitates object re-appropriation. Research both within and outside HCI continues to develop a diverse range of technological solutions and materials to enable shape-change. However, as an early-stage enabling technology, the community has yet to identify important applications and use-cases that fully exploit its value. To expose and document a range of applications for shape-change, we employed unstructured brainstorming within a public engagement study. A 74-participant brainstorming exercise with members of the public produced 336 individual ideas that were coded into 11 major themes: entertainment, augmented living, medical, tools & utensils, research, architecture, infrastructure, industry, wearables, and education & training. This work documents the methodology and the resultant application ideas, along with reflections on this approach to gathering application ideas for shape-changing interactive surfaces and objects.

[6] ReForm: Integrating Physical and Digital Design through Bidirectional Fabrication Session 2A: Fabrication 1 -- Augmentation / Weichel, Christian / Hardy, John / Alexander, Jason / Gellersen, Hans Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-05 v.1 p.93-102
ACM Digital Library Link
Summary: Digital fabrication machines such as 3D printers and laser-cutters allow users to produce physical objects based on virtual models. The creation process is currently unidirectional: once an object is fabricated it is separated from its originating virtual model. Consequently, users are tied into digital modeling tools, the virtual design must be completed before fabrication, and once fabricated, re-shaping the physical object no longer influences the digital model. To provide a more flexible design process that allows objects to iteratively evolve through both digital and physical input, we introduce bidirectional fabrication. To demonstrate the concept, we built ReForm, a system that integrates digital modeling with shape input, shape output, annotation for machine commands, and visual output. By continually synchronizing the physical object and digital model it supports object versioning to allow physical changes to be undone. Through application examples, we demonstrate the benefits of ReForm to the digital fabrication process.

[7] Gaze-Shifting: Direct-Indirect Input with Pen and Touch Modulated by Gaze Session 6A: Gaze / Pfeuffer, Ken / Alexander, Jason / Chong, Ming Ki / Zhang, Yanxia / Gellersen, Hans Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-05 v.1 p.373-383
ACM Digital Library Link
Summary: Modalities such as pen and touch are associated with direct input but can also be used for indirect input. We propose to combine the two modes for direct-indirect input modulated by gaze. We introduce gaze-shifting as a novel mechanism for switching the input mode based on the alignment of manual input and the user's visual attention. Input in the user's area of attention results in direct manipulation whereas input offset from the user's gaze is redirected to the visual target. The technique is generic and can be used in the same manner with different input modalities. We show how gaze-shifting enables novel direct-indirect techniques with pen, touch, and combinations of pen and touch input.
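To make the switching principle concrete, here is a minimal sketch of the gaze-shifting dispatch logic as described in the summary above; the function names and the alignment threshold are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of gaze-shifting (assumed names and threshold, not the
# authors' code): manual input near the gaze point acts directly, while input
# offset from gaze is redirected to the gazed-at visual target.
import math

GAZE_ALIGNMENT_RADIUS = 80.0  # px; assumed threshold for "aligned" input


def dispatch_input(touch_x, touch_y, gaze_x, gaze_y):
    """Return the input mode and effective position for a pen/touch event."""
    offset = math.hypot(touch_x - gaze_x, touch_y - gaze_y)
    if offset <= GAZE_ALIGNMENT_RADIUS:
        # Manual input aligned with visual attention: direct manipulation.
        return ("direct", touch_x, touch_y)
    # Manual input offset from gaze: redirect to the visual target.
    return ("indirect", gaze_x, gaze_y)
```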

[8] Gaze-Supported Gaming: MAGIC Techniques for First Person Shooters Analytics and Questionnaires / Velloso, Eduardo / Fleming, Amy / Alexander, Jason / Gellersen, Hans Proceedings of the 2015 ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play 2015-10-05 p.343-347
ACM Digital Library Link
Summary: MAGIC (Manual And Gaze Input Cascaded) pointing techniques have been proposed as an efficient way in which the eyes can support mouse input in pointing tasks. MAGIC Sense is one such technique, in which the cursor speed is modulated by how far the cursor is from the gaze point. In this work, we implemented a continuous and a discrete adaptation of MAGIC Sense for first-person shooter input. We evaluated the performance of these techniques in an experiment with 15 participants and found no significant gain in performance, but a moderate user preference for the discrete technique.
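As a rough illustration of the cursor-speed modulation described above, the sketch below implements both a continuous and a discrete gain function; all constants and names are assumptions for illustration, not the parameters used in the paper.

```python
# Illustrative sketch of MAGIC Sense-style gain modulation (assumed constants
# and names, not the paper's parameters): cursor gain increases with the
# cursor's distance from the gaze point, continuously or discretely.
import math


def continuous_gain(cursor, gaze, base=1.0, cap=4.0, falloff=400.0):
    """Continuous variant: gain grows with distance from gaze, capped at `cap`."""
    d = math.hypot(cursor[0] - gaze[0], cursor[1] - gaze[1])
    return min(cap, base + (cap - base) * d / falloff)


def discrete_gain(cursor, gaze, near=1.0, far=3.0, radius=200.0):
    """Discrete variant: one gain inside a radius around gaze, another outside."""
    d = math.hypot(cursor[0] - gaze[0], cursor[1] - gaze[1])
    return near if d <= radius else far


def move_cursor(cursor, delta, gaze, gain_fn=continuous_gain):
    """Apply gaze-modulated gain to a raw mouse displacement."""
    g = gain_fn(cursor, gaze)
    return (cursor[0] + delta[0] * g, cursor[1] + delta[1] * g)
```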

[9] Interactions Under the Desk: A Characterisation of Foot Movements for Input in a Seated Position Alternative Input Devices for People with Disabilities / Velloso, Eduardo / Alexander, Jason / Bulling, Andreas / Gellersen, Hans Proceedings of IFIP INTERACT'15: Human-Computer Interaction, Part I 2015-09-14 v.1 p.384-401
Keywords: Foot-based interfaces; Fitts' law; Interaction techniques
Link to Digital Content at Springer
Summary: We characterise foot movements as input for seated users. First, we built unconstrained foot-pointing performance models in a seated desktop setting using ISO 9241-9-compliant Fitts's law tasks. Second, we evaluated the effect of the foot and of direction in one-dimensional tasks, finding no effect of the foot used, but a significant effect of the direction in which targets are distributed. Third, we compared one foot against two feet for controlling two variables, finding that while one foot is better suited for tasks with a spatial representation that matches its movement, there is little difference between the techniques when it does not. Fourth, we analysed the overhead caused by introducing a feet-controlled variable into a mouse task, finding the feet to be comparable to the scroll wheel. Our results show that the feet are an effective method of enhancing our interaction with desktop systems, and we derive a series of design guidelines.
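For background, ISO 9241-9-style pointing models like those mentioned above are typically built by regressing movement time against Fitts's index of difficulty; a minimal sketch of that standard (Shannon) formulation follows, with placeholder coefficients rather than the paper's fitted values.

```python
# Background sketch: the Shannon formulation of Fitts's law commonly used in
# ISO 9241-9-style studies. Coefficients a and b are placeholders; the paper
# fits its own values per condition.
import math


def index_of_difficulty(distance, width):
    """ID in bits for a target at `distance` with effective `width`."""
    return math.log2(distance / width + 1)


def predicted_movement_time(distance, width, a=0.2, b=0.15):
    """Predicted movement time (seconds): MT = a + b * ID."""
    return a + b * index_of_difficulty(distance, width)
```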

[10] An Empirical Investigation of Gaze Selection in Mid-Air Gestural 3D Manipulation Eye Tracking / Velloso, Eduardo / Turner, Jayson / Alexander, Jason / Bulling, Andreas / Gellersen, Hans Proceedings of IFIP INTERACT'15: Human-Computer Interaction, Part II 2015-09-14 v.2 p.315-330
Keywords: 3D user interfaces; Eye tracking; Mid-air gestures
Link to Digital Content at Springer
Summary: In this work, we investigate gaze selection in the context of mid-air hand gestural manipulation of 3D rigid bodies on monoscopic displays. We present the results of a user study with 12 participants in which we compared the performance of Gaze, a Raycasting technique (2D Cursor), and a Virtual Hand technique (3D Cursor) for selecting objects in two 3D mid-air interaction tasks. We also compared selection confirmation times for Gaze selection when selection is followed by manipulation to when it is not. Our results show that gaze selection is faster than, and preferred over, the 2D and 3D mid-air-controlled cursors, and is particularly well suited to tasks in which users constantly switch between several objects during manipulation. Further, selection confirmation times are longer when selection is followed by manipulation than when it is not.

[11] Gaze+touch vs. Touch: What's the Trade-off When Using Gaze to Extend Touch to Remote Displays? Eye Tracking / Pfeuffer, Ken / Alexander, Jason / Gellersen, Hans Proceedings of IFIP INTERACT'15: Human-Computer Interaction, Part II 2015-09-14 v.2 p.349-367
Keywords: Gaze interaction; Eye-tracking; Multitouch; Multimodal UI
Link to Digital Content at Springer
Summary: Direct touch input is employed on many devices, but it is inherently restricted to displays that are reachable by the user. Gaze input as a mediator can extend touch to remote displays -- using gaze for remote selection and touch for local manipulation -- but at what cost and benefit? In this paper, we investigate the potential trade-off with four experiments that empirically compare remote Gaze+touch to standard touch in dragging, rotation, and scaling tasks. Results indicate that Gaze+touch is, compared to touch, (1) equally fast and more accurate for rotation and scaling, (2) slower and less accurate for dragging, and (3) able to select smaller targets. Our participants confirm this trend and are positive about the relaxed finger placement of Gaze+touch. Our experiments provide detailed performance characteristics to consider in the design of Gaze+touch interaction with remote displays. We further discuss its strengths and drawbacks in contrast to direct touch.

[12] ShapeClip: Towards Rapid Prototyping with Shape-Changing Displays for Designers Non-Rigid Interaction Surfaces / Hardy, John / Weichel, Christian / Taher, Faisal / Vidler, John / Alexander, Jason Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.1 p.19-28
ACM Digital Library Link
Summary: This paper presents ShapeClip: a modular tool capable of transforming any computer screen into a z-actuating shape-changing display. This enables designers to produce dynamic physical forms by "clipping" actuators onto screens. ShapeClip displays are portable, scalable, fault-tolerant, and support runtime re-arrangement. Users are not required to have knowledge of electronics or programming, and can develop motion designs with presentation software, image editors, or web-technologies. To evaluate ShapeClip we carried out a full-day workshop with expert designers. Participants were asked to generate shape-changing designs and then construct them using ShapeClip. ShapeClip enabled participants to rapidly and successfully transform their ideas into functional systems.

[13] Opportunities and Challenges for Data Physicalization Natural User Interfaces for InfoVis / Jansen, Yvonne / Dragicevic, Pierre / Isenberg, Petra / Alexander, Jason / Karnik, Abhijit / Kildal, Johan / Subramanian, Sriram / Hornbæk, Kasper Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.1 p.3227-3236
ACM Digital Library Link
Summary: Physical representations of data have existed for thousands of years. Yet it is now that advances in digital fabrication, actuated tangible interfaces, and shape-changing displays are spurring an emerging area of research that we call Data Physicalization. It aims to help people explore, understand, and communicate data using computer-supported physical data representations. We call these representations physicalizations, analogously to visualizations -- their purely visual counterpart. In this article, we go beyond the focused research questions addressed so far by delineating the research area, synthesizing its open challenges and laying out a research agenda.

[14] Exploring Interactions with Physically Dynamic Bar Charts Natural User Interfaces for InfoVis / Taher, Faisal / Hardy, John / Karnik, Abhijit / Weichel, Christian / Jansen, Yvonne / Hornbæk, Kasper / Alexander, Jason Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.1 p.3237-3246
ACM Digital Library Link
Summary: Visualizations such as bar charts help users reason about data, but are mostly screen-based, rarely physical, and almost never physical and dynamic. This paper investigates the role of physically dynamic bar charts and evaluates new interactions for exploring and working with datasets rendered in dynamic physical form. To facilitate our exploration we constructed a 10×10 interactive bar chart and designed interactions that supported fundamental visualisation tasks, specifically annotation, filtering, organization, and navigation. The interactions were evaluated in a user study with 17 participants. Our findings identify the preferred methods of working with the data for each task (e.g., directly tapping rows to hide bars), highlight the strengths and limitations of working with physical data, and discuss the challenges of integrating the proposed interactions into a larger data exploration system. In general, physical interactions were intuitive, informative, and enjoyable, paving the way for new explorations in physical data visualization.

[15] Gaze+RST: Integrating Gaze and Multitouch for Remote Rotate-Scale-Translate Tasks Interaction Techniques for Tables & Walls / Turner, Jayson / Alexander, Jason / Bulling, Andreas / Gellersen, Hans Proceedings of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.1 p.4179-4188
ACM Digital Library Link
Summary: Our work investigates the use of gaze and multitouch to fluidly perform rotate-scale-translate (RST) tasks on large displays. The work specifically aims to understand whether gaze can provide benefit in such a task, how task complexity affects performance, and how gaze and multitouch can be combined to create an integral input structure suited to the task of RST. We present four techniques that individually strike a different balance between gaze-based and touch-based translation while maintaining concurrent rotation and scaling operations. A 16-participant empirical evaluation revealed that three of our four techniques present viable options for this scenario, and that larger distances and rotation/scaling operations can significantly affect a gaze-based translation configuration. Furthermore, we uncover new insights regarding multimodal integrality, finding that gaze and touch can be combined into configurations that pertain to integral or separable input structures.

[16] Shape Display Shader Language (SDSL): A New Programming Model for Shape Changing Displays WIP Theme: Displays / Weichel, Christian / Alexander, Jason / Hardy, John Extended Abstracts of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.2 p.1121-1126
ACM Digital Library Link
Summary: Shape-changing displays' dynamic physical affordances have inspired a range of novel hardware designs to support new types of interaction. Despite rapid technological progress, the community lacks a common programming model for developing applications for these visually and physically dynamic display surfaces. This results in complex, hardware-specific custom code that requires significant development effort and prevents researchers from easily building on and sharing their applications across hardware platforms. As a first attempt to address these issues, we introduce SDSL, a Shape-Display Shader Language for easily programming shape-changing displays in a hardware-independent manner. We introduce the (graphics-derived) pipeline model of SDSL and an open-source implementation that includes a compiler, runtime, IDE, debugger, and simulator, and we show demonstrator applications running on two shape-changing hardware setups.
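To convey the flavour of a shader-style, hardware-independent programming model, here is a purely illustrative sketch; this is not actual SDSL syntax, and every name in it is an assumption. A per-pixel "shader" computes height and colour from coordinates and time, and a runtime evaluates it over whatever actuated-pixel grid is attached.

```python
# Purely illustrative sketch of a shader-style model for a shape-changing
# display (NOT actual SDSL syntax; all names are assumptions): a per-pixel
# function computes height and colour; the runtime maps it onto the grid.
import math


def wave_shader(x, y, t, width, height):
    """Per-pixel shader: return (height_mm, rgb) for pixel (x, y) at time t."""
    u, v = x / width, y / height
    z = 10.0 + 8.0 * math.sin(2.0 * math.pi * (u + t))  # travelling wave
    return z, (int(255 * u), int(255 * v), 128)


def render_frame(shader, width, height, t):
    """Runtime step: evaluate the shader once per actuated pixel."""
    return [[shader(x, y, t, width, height) for x in range(width)]
            for y in range(height)]
```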

[17] Exploring the Challenges of Making Data Physical Workshop Summaries / Alexander, Jason / Jansen, Yvonne / Hornbæk, Kasper / Kildal, Johan / Karnik, Abhijit Extended Abstracts of the ACM CHI'15 Conference on Human Factors in Computing Systems 2015-04-18 v.2 p.2417-2420
ACM Digital Library Link
Summary: Physical representations of data have existed for thousands of years. However, it is only now that advances in digital fabrication, actuated tangible interfaces, and shape-changing displays can support the emerging area of 'Data Physicalization' [6]: the study of computer-supported, physical representations of data and their support for cognition, communication, learning, problem solving and decision making. As physical artifacts, data physicalizations can tap more deeply into our perceptual exploration skills than classical computer setups, while their dynamic physicality alleviates some of the main drawbacks of static artifacts by facilitating their crafting, supporting adaptation to different data, and encouraging sharing between different users.

[18] SPATA: Spatio-Tangible Tools for Fabrication-Aware Design Paper Session 7: Supporting Designers / Weichel, Christian / Alexander, Jason / Karnik, Abhijit / Gellersen, Hans Proceedings of the 2015 International Conference on Tangible and Embedded Interaction 2015-01-15 p.189-196
ACM Digital Library Link
Summary: The physical tools used when designing new objects for digital fabrication are mature, yet disconnected from their virtual accompaniments. SPATA is a digital adaptation of two traditional spatial measurement tools, calipers and protractors, that explores their closer integration into virtual design environments. Both tools can measure, transfer, and present size and angle. Their close integration into different design environments makes tasks more fluid and convenient. We describe the tools' design, a prototype implementation, integration into different environments, and application scenarios validating the concept.

[19] An Empirical Characterization of Touch-Gesture Input-Force on Mobile Devices Session 7: Touch, Pressure and Reality / Taher, Faisal / Alexander, Jason / Hardy, John / Velloso, Eduardo Proceedings of the 2014 ACM International Conference on Interactive Tabletops and Surfaces 2014-11-16 p.195-204
ACM Digital Library Link
Summary: Designers of force-sensitive user interfaces lack a ground-truth characterization of input force during common touch gestures (zooming, panning, tapping, and rotating). This paper provides such a characterization, first by deriving baseline force profiles in a tightly-controlled user study, then by examining how these profiles vary under different conditions such as form factor (mobile phone and tablet), interaction position (walking and sitting), and urgency (timed and untimed tasks). We conducted two user studies with 14 and 24 participants respectively and report: (1) force-profile graphs that depict the force variations of common touch gestures, (2) the effect of the different conditions on exerted force and gesture completion time, and (3) the most common forces that users apply and the time taken to complete the gestures. This characterization is intended to aid the design of interactive devices that integrate force input with common touch gestures in different conditions.

[20] Characterising the Physicality of Everyday Buttons Session 7: Touch, Pressure and Reality / Alexander, Jason / Hardy, John / Wattam, Stephen Proceedings of the 2014 ACM International Conference on Interactive Tabletops and Surfaces 2014-11-16 p.205-208
ACM Digital Library Link
Summary: A significant milestone in the development of physically-dynamic surfaces is the ability for buttons to protrude outwards from any location on a touch-screen. As a first step toward developing interaction requirements for this technology we conducted a survey of 1515 electronic push buttons in everyday home environments. We report a characterisation that describes the features of the data set and discusses important button properties that we expect will inform the design of future physically-dynamic devices and surfaces.

[21] Gaze-touch: combining gaze with multi-touch for interaction on the same surface Touch & gesture / Pfeuffer, Ken / Alexander, Jason / Chong, Ming Ki / Gellersen, Hans Proceedings of the 2014 ACM Symposium on User Interface Software and Technology 2014-10-05 v.1 p.509-518
ACM Digital Library Link
Summary: Gaze has the potential to complement multi-touch for interaction on the same surface. We present gaze-touch, a technique that combines the two modalities based on the principle of 'gaze selects, touch manipulates'. Gaze is used to select a target, and coupled with multi-touch gestures that the user can perform anywhere on the surface. Gaze-touch enables users to manipulate any target from the same touch position, for whole-surface reachability and rapid context switching. Conversely, gaze-touch enables manipulation of the same target from any touch position on the surface, for example to avoid occlusion. Gaze-touch is designed to complement direct-touch as the default interaction on multi-touch surfaces. We provide a design space analysis of the properties of gaze-touch versus direct-touch, and present four applications that explore how gaze-touch can be used alongside direct-touch. The applications demonstrate use cases for interchangeable, complementary and alternative use of the two modes of interaction, and introduce novel techniques arising from the combination of gaze-touch and conventional multi-touch.
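A minimal sketch of the 'gaze selects, touch manipulates' principle follows, assuming a simple scene of rectangular targets; all names here are illustrative, not the authors' code.

```python
# Illustrative sketch of 'gaze selects, touch manipulates' (assumed names,
# not the authors' implementation): the gazed-at target is acquired on
# touch-down, and subsequent touch drags manipulate it from anywhere on
# the surface, e.g. to avoid occluding the target with the hand.
class GazeTouchSurface:
    def __init__(self, targets):
        self.targets = targets      # list of dicts: {"x", "y", "w", "h"}
        self.active_target = None   # target acquired at touch-down

    def _target_under(self, x, y):
        for t in self.targets:
            if t["x"] <= x <= t["x"] + t["w"] and t["y"] <= y <= t["y"] + t["h"]:
                return t
        return None

    def on_touch_down(self, gaze_x, gaze_y):
        # Gaze selects: acquire whatever target the user is looking at.
        self.active_target = self._target_under(gaze_x, gaze_y)

    def on_touch_drag(self, dx, dy):
        # Touch manipulates: drags anywhere on the surface move the target.
        if self.active_target is not None:
            self.active_target["x"] += dx
            self.active_target["y"] += dy

    def on_touch_up(self):
        self.active_target = None
```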

[22] The use of surrounding visual context in handheld AR: device vs. user perspective rendering The third dimension / Pucihar, Klen Copic / Coulton, Paul / Alexander, Jason Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.1 p.197-206
ACM Digital Library Link
Summary: The magic lens paradigm, a commonly used descriptor for handheld Augmented Reality (AR), presents the user with dual views: the augmented view (the magic lens) that appears on the device, and the real view of the surroundings (what the user can see around the perimeter of the device). The augmented view is typically implemented by rendering the video captured by the rear-facing camera directly onto the device's screen. This results in dual perspectives: the real world is captured from the device's perspective rather than the user's perspective (what an observer would see looking through a transparent glass pane). These differences manifest themselves in misaligned and/or incorrectly scaled transparency, resulting in the dual-view problem.
    This paper presents two user studies comparing (a) device-perspective and (b) fixed point-of-view (POV) user-perspective magic lenses to analyze the effect of the dual-view problem on the use of the surrounding visual context. The results confirm that the dual-view problem, a result of the dual perspectives, has a significant effect on the use of information from the surrounding visual context. The study also highlights that magnification, and not the dual-view problem, is the key factor explaining the correlation between magic lens size and the increased intensity of the magic-lens-type effect. From the results, we derive design guidelines for future magic lenses.

[23] ThumbReels: query sensitive web video previews based on temporal, crowdsourced, semantic tagging Navigating video / Craggs, Barnaby / Scott, Myles Kilgallon / Alexander, Jason Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.1 p.1217-1220
ACM Digital Library Link
Summary: During online search, the user's expectations often differ from those of the author. This is known as the "intention gap" and is particularly problematic when searching for and discriminating between online video content. An author uses descriptions and meta-data tags to label their content, but often cannot predict alternate interpretations or appropriations of their work. To address this intention gap, we present ThumbReels, a concept for query-sensitive video previews generated from crowdsourced, temporally defined semantic tagging. Further, we supply an open-source tool that supports on-the-fly temporal tagging of videos and whose output can be used in later search queries. A first user study validates the tool and concept. We then present a second study showing that participants found ThumbReels to better represent search terms than contemporary preview techniques.

[24] Evaluating the effectiveness of physical shape-change for in-pocket mobile device notifications Shape-changing interfaces / Dimitriadis, Panteleimon / Alexander, Jason Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.1 p.2589-2592
ACM Digital Library Link
Summary: Audio and vibrotactile output are the standard mechanisms mobile devices use to attract their owner's attention. Yet in busy and noisy environments, or when the user is physically active, these channels sometimes fail. Recent work has explored the use of physical shape-change as an additional method for conveying notifications when the device is in-hand or viewable. However, we do not yet understand the effectiveness of physical shape-change as a method for communicating in-pocket notifications. This paper presents three robustly implemented, mobile-device-sized shape-changing devices, and two user studies evaluating their effectiveness at conveying notifications. The studies reveal that (1) different types and configurations of shape-change convey different levels of urgency; and (2) fast-pulsing shape-changing notifications are missed less often and recognised more quickly than the standard, slower vibration pulse rates of a mobile device.

[25] Cross-device gaze-supported point-to-point content transfer Gaze-mediated input / Turner, Jayson / Bulling, Andreas / Alexander, Jason / Gellersen, Hans Proceedings of the 2014 Symposium on Eye Tracking Research & Applications 2014-03-26 p.19-26
ACM Digital Library Link
Summary: Within a pervasive computing environment, we see content on shared displays that we wish to acquire and use in a specific way, i.e., with an application on a personal device, transferring it from point to point. The eyes as input can indicate intention to interact with a service, providing implicit pointing as a result. In this paper we investigate the use of gaze and manual input for positioning gaze-acquired content on personal devices. We evaluate two main techniques: (1) Gaze Positioning, where content is transferred using gaze, with manual input to confirm actions; and (2) Manual Positioning, where content is selected with gaze but final positioning is performed by manual input, involving a switch of modalities from gaze to manual input. A first user study compares these techniques applied to direct and indirect manual input configurations: a tablet with touch input and a laptop with mouse input. A second study evaluated our techniques in an application scenario involving distractor targets. Our overall results showed general acceptance and understanding of all conditions, although there were clear individual user preferences depending on familiarity with and preference toward gaze, touch, or mouse input.