HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,227,733
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server 2015-05-12 and again 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: C.UIST.15.2* Limit: papers Results: 44 Sorted by: Date
Records: 1 to 25 of 44
Responsive Facilitation of Experiential Learning Through Access to Attentional State Doctoral Symposium / Greenwald, Scott W. Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.1-4
ACM Digital Library Link
Summary: The planned thesis presents a vision of the future of learning, where learners explore environments, physical and virtual, in a curiosity-driven or intrinsically motivated way, and receive contextual information from a companion facilitator or teacher. Learners are instrumented with sensors that convey their cognitive and attentional state to the companion, who can then accurately judge what is interesting or relevant, and when is a good moment to jump in. I provide a broad definition of the possible types of sensor input as well as the modalities of intervention, and then present a specific proof-of-concept system that uses gaze behavior as a means of communication between the learner and a human companion.

Reconfiguring and Fabricating Special-Purpose Tangible Controls Doctoral Symposium / Ramakers, Raf Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.5-8
ACM Digital Library Link
Summary: Unlike regular interfaces on touch screens or desktop computers, tangible user interfaces allow for more physically rich interactions that better use the capacity of our motor system. On the flip side, the physicality of tangibles comes with rigidity. This makes it hard to (1) use tangibles on systems that require a variety of controls and interaction styles, and (2) make changes to physical interfaces once manufactured. In my research, I explore techniques that allow users to reconfigure and fabricate tangible interfaces in order to mitigate these issues.

Supporting Collaborative Innovation at Scale Doctoral Symposium / Siangliulue, Pao Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.9-12
ACM Digital Library Link
Summary: Emerging online innovation platforms have enabled large groups of people to collaborate and generate ideas together in ways that were not possible before. However, these platforms also introduce new challenges in finding inspiration from a large number of ideas, and coordinating the collective effort. In my dissertation, I address the challenges of large scale idea generation platforms by developing methods and systems for helping people make effective use of each other's ideas, and for orchestrating collective effort to reduce redundancy and increase the quality and breadth of generated ideas.

Wait-Learning: Leveraging Wait Time for Education Doctoral Symposium / Cai, Carrie J. Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.13-16
ACM Digital Library Link
Summary: Competing priorities in daily life make it difficult for those with a casual interest in learning to set aside time for regular practice. Yet, learning often requires significant time and effort, with repeated exposures to learning material on a recurring basis. Despite the struggle to find time for learning, there are numerous times in a day that are wasted due to micro-waiting. In my research, I develop systems for wait-learning, leveraging wait time for education. Combining wait time with productive work opens up a new class of software systems that overcomes the problem of limited time while addressing the frustration often associated with waiting. My research tackles several challenges in learning and task management, such as identifying which waiting moments to leverage; how to encourage learning unobtrusively; how to integrate learning across a diversity of waiting moments; and how to extend wait-learning to more complex domains. In the development process, I hope to understand how to manage these waiting moments, and describe essential design principles for wait-learning systems.

From Papercraft to Paper Mechatronics: Exploring a New Medium and Developing a Computational Design Tool Doctoral Symposium / Oh, Hyunjoo Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.17-20
ACM Digital Library Link
Summary: Paper Mechatronics is a novel interdisciplinary design medium, enabled by recent advances in craft technologies: the term refers to a reappraisal of traditional papercraft in combination with accessible mechanical, electronic, and computational elements. I am investigating the design space of paper mechatronics as a new hands-on medium by developing a series of examples and building a computational tool, FoldMecha, to support non-experts in designing and constructing their own paper mechatronics models. This paper describes how I used the tool to create two kinds of paper mechatronics models, walkers and flowers, and discusses next steps.

Enriching Online Classroom Communication with Collaborative Multi-Modal Annotations Doctoral Symposium / Yoon, Dongwook Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.21-24
ACM Digital Library Link
Summary: In massive open online courses, peer discussion is a scalable solution for offering interactive and engaging learning experiences to a large number of students. On the other hand, the quality of communication mediated through online discussion tools, such as discussion forums, is far less expressive than that of face-to-face communication. As a solution, I present RichReview, a multi-modal annotation system through which distant students can exchange ideas using versatile combinations of voice, text, and pointing gestures. A series of lab and deployment studies of RichReview showed that the expressive multimedia mixture and lightweight audio browsing feature help students better understand commentators' intentions. For the large-scale deployment, I redesigned RichReview as a web applet in edX's courseware framework. By deploying the system at scale, I will investigate (1) the optimal group assignment scheme that maximizes the overall diversity of group members, (2) educational data mining applications based on user-generated rich discussion data, and (3) the impact of the rich discussion on students' retention of knowledge. Throughout these studies, I will argue that a multi-modal anchored digital document annotation system enables rich online peer discussion at scale.

Using Personal Devices to Facilitate Multi-user Interaction with Large Display Walls Doctoral Symposium / von Zadow, Ulrich Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.25-28
ACM Digital Library Link
Summary: Large display walls and personal devices such as smartphones have complementary characteristics. While large displays are well-suited to multi-user interaction (potentially with complex data), they are inherently public and generally cannot present an interface adapted to the individual user. However, effective multi-user interaction in many cases depends on the ability to tailor the interface, to interact without interfering with others, and to access and possibly share private data. The combination with personal devices facilitates exactly this. Multi-device interaction concepts enable data transfer and include moving parts of UIs to the personal device. In addition, hand-held devices can be used to present personal views to the user. Our work will focus on using personal devices for true multi-user interaction with interactive display walls. It will cover appropriate interaction techniques as well as the technical foundation and will be validated with corresponding application cases.

Graphical Passwords for Older Computer Users Doctoral Symposium / Carter, Nancy J. Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.29-32
ACM Digital Library Link
Summary: Computers and the internet have been challenging for many computer users over the age of 60. We conducted a survey of older users which revealed that the creation, management and recall of strong text passwords were among the most challenging aspects of modern technology. In practice, this user group based passwords on familiar facts such as family member names, pets, phone numbers and important personal dates. Graphical passwords formed from abstract graphical symbols or anonymous facial images are feasible, but harder for older computer users to grasp and recall. In this paper we describe initial results for our graphical password system based on recognition of culturally familiar facial images that are age-relevant to the life experiences of older users. Our goals are to design an easy-to-memorize graphical password system intended specifically for older users, and to achieve a level of password entropy comparable to traditional PINs and text passwords. We are also conducting a user study to demonstrate our technique and capture performance and recall metrics for comparison with traditional password systems.

Scope+: A Stereoscopic Video See-Through Augmented Reality Microscope Demonstrations / Huang, Yu-Hsuan / Yu, Tzu-Chieh / Tsai, Pei-Hsuan / Wang, Yu-Xiang / Yang, Wan-ling / Ouhyoung, Ming Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.33-34
ACM Digital Library Link
Summary: When using a conventional stereo microscope, users need to move their head away from the eyepieces repeatedly to access more information, such as anatomical structures from an atlas. This happens during microsurgery when surgeons want to check a patient's data again, and they might lose their target and concentration after this kind of disruption. To solve this critical problem and to improve the user experience of the stereo microscope, we present Scope+, a stereoscopic video see-through augmented reality system. Scope+ is designed for biological procedures, education and surgical training. While performing biological procedures, for example the dissection of a frog, an anatomical atlas will show up inside the head mounted display (HMD) overlaid onto the magnified images. For education purposes, specimens will no longer be silent under Scope+: when their body parts are pointed at with a marked stick, related animation or transparent-background video will merge with the real object and interact with observers. If surgeons want to improve their microsurgery techniques, they can practice with Scope+, which provides complete foot pedal control functions identical to a standard surgical microscope. Moreover, in combination with specially designed phantom models, this augmented reality system guides users through key steps of an operation, such as Continuous Curvilinear Capsulorhexis in cataract surgery. Video see-through rather than optical see-through technology is adopted by the Scope+ system, so remote observation via another Scope+ or web applications can be achieved. This feature can not only assist teachers during experiment classes, but also help researchers keep their eyes on their samples after work. Array mode is powered by a motor-driven stage plate which allows users to load multiple samples at the same time; quick comparison between samples is possible by switching them with the foot pedal.

Creating a Mobile Head-mounted Display with Proprietary Controllers for Interactive Virtual Reality Content Demonstrations / Kato, Kunihiro / Miyashita, Homei Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.35-36
ACM Digital Library Link
Summary: A method to create a mobile head-mounted display (HMD) with a proprietary controller for interactive virtual reality (VR) content is proposed. The proposed method uses an interface cartridge printed with a conductive pattern. This allows the user to operate a smartphone by touching the face of the mobile HMD. In addition, the user can easily create a mobile HMD and interface cartridge using a laser cutter and inkjet printer. Changing the form of the conductive pattern allows the user to create a variety of controllers. The proposed method can realize an environment that can deliver a variety of interactions with VR content.

Spotlights: Facilitating Skim Reading with Attention-Optimized Highlights Demonstrations / Lee, Byungjoo / Oulasvirta, Antti Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.37-38
ACM Digital Library Link
Summary: This demo presents Spotlights, a technique to facilitate skim reading, the activity of rapidly comprehending long documents such as webpages or PDFs. Users mainly use continuous rate-based scrolling to skim. However, visual attention fails when scrolling rapidly due to the excessive number of objects and the brief exposure per object. Spotlights supports continuous scrolling at high speeds. It selects a small number of objects and raises them to transparent overlays (spotlights) in the viewer. Spotlights stay static for a prolonged time and then fade away. The technical contribution is a novel method for "brokering" the user's attentional resources in a way that guarantees sufficient attentional resources for some objects, even at very high scrolling rates. It facilitates visual attention by (1) decreasing the number of objects competing for divided attention and (2) ensuring sufficient processing time per object.

WearWrite: Orchestrating the Crowd to Complete Complex Tasks from Wearables Demonstrations / Nebeling, Michael / Guo, Anhong / To, Alexandra / Dow, Steven / Teevan, Jaime / Bigham, Jeffrey Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.39-40
ACM Digital Library Link
Summary: Smartwatches are becoming increasingly powerful, but limited input makes completing complex tasks impractical. Our WearWrite system introduces a new paradigm for enabling a watch user to contribute to complex tasks, not through new hardware or input methods, but by directing a crowd to work on their behalf from their wearable device. WearWrite lets authors give writing instructions and provide bits of expertise and big picture directions from their smartwatch, while crowd workers actually write the document on more powerful devices. We used this approach to write three academic papers, and found it was effective at producing reasonable drafts.

Zensei: Augmenting Objects with Effortless User Recognition Capabilities through Bioimpedance Sensing Demonstrations / Sato, Munehiko / Puri, Rohan S. / Olwal, Alex / Chandra, Deepak / Poupyrev, Ivan / Raskar, Ramesh Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.41-42
ACM Digital Library Link
Summary: As interactions with everyday handheld devices and objects become increasingly common, a more seamless and effortless identification and personalization technique will be essential to an uninterrupted user experience. In this paper, we present Zensei, a user identification and customization system using human body bioimpedance sensing through multiple electrodes embedded into everyday objects. Zensei provides for an uninterrupted user-device personalization experience that is difficult to forge because it uses both the unique physiological and behavioral characteristics of the user. We demonstrate our measurement system in three exemplary device configurations that showcase different levels of constraint via environment-based, whole-body-based, and handheld-based identification scenarios. We evaluated Zensei's classification accuracy among 12 subjects on each configuration over 22 days of collected data and report our promising results.

Form Follows Function(): An IDE to Create Laser-cut Interfaces and Microcontroller Programs from Single Code Base Demonstrations / Kato, Jun / Goto, Masataka Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.43-44
ACM Digital Library Link
Summary: During the development of physical computing devices, physical object models and programs for microcontrollers are usually created with separate tools and distinct files. As a result, it is difficult to track changes in hardware and software without discrepancies. Moreover, the software cannot directly access hardware metrics, and hardware interface design cannot benefit from source code information either. This demonstration proposes a browser-based IDE named f3.js that enables development of both from a single JavaScript code base. The demonstration allows attendees to play with the f3.js IDE and showcases example applications such as laser-cut interfaces generated from the same code but with different parameters. Programmers can experience the full feature set, and designers can interact with preset projects with a mouse or touch to customize laser-cut interfaces. More information is available at f3js.org.

RFlow: User Interaction Beyond Walls Demonstrations / Bedri, Hisham / Gupta, Otkrist / Temme, Andrew / Feigin, Micha / Charvat, Gregory / Raskar, Ramesh Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.45-46
ACM Digital Library Link
Summary: Current user interaction with optical gesture tracking technologies suffers from occlusion, limiting functionality to direct line-of-sight. We introduce RFlow, a compact, medium-range interface based on Radio Frequency (RF) that enables camera-free tracking of the position of a moving hand through drywall and other occluders. Our system uses Time of Flight (TOF) RF sensors and speed-based segmentation to localize the hand of a single user with 5cm accuracy (as measured to the closest ground-truth point), enabling an interface which is not restricted to a training set.

MetaSpace: Full-body Tracking for Immersive Multiperson Virtual Reality Demonstrations / Sra, Misha / Schmandt, Chris Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.47-48
ACM Digital Library Link
Summary: Most current virtual reality (VR) interactions are mediated by hand-held input devices or hand gestures, and they usually display only a partial representation of the user in the synthetic environment. We believe that representing the user as a full avatar controlled by natural movements of the person in the real world will lead to a greater sense of presence in VR. Possible applications exist in various domains such as entertainment, therapy, travel, real estate, education, social interaction and professional assistance. In this demo, we present MetaSpace, a virtual reality system that allows co-located users to explore a VR world together by walking around in physical space. Each user's body is represented by an avatar that is dynamically controlled by their body movements. We achieve this by tracking each user's body with a Kinect device such that their physical movements are mirrored in the virtual world. Users can see their own avatar and the other person's avatar, allowing them to perceive and act intuitively in the virtual environment.

GaussStarter: Prototyping Analog Hall-Sensor Grids with Breadboards Demonstrations / Liang, Rong-Hao / Kuo, Han-Chih / Chen, Bing-Yu Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.49-50
ACM Digital Library Link
Summary: This work presents GaussStarter, a pluggable and tileable analog Hall-sensor grid module for easy and scalable breadboard prototyping. In terms of ease of use, the graspable units allow users to easily plug them into or remove them from a breadboard. In terms of scalability, tiling the units on the breadboard easily expands the sensing area. A software development kit is also provided for designing applications based on this hardware module.

Enhanced Motion Robustness from ToF-based Depth Sensing Cameras Demonstrations / Yamada, Wataru / Manabe, Hiroyuki / Inamura, Hiroshi Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.51-52
ACM Digital Library Link
Summary: Depth sensing cameras that can acquire RGB and depth information are being widely used. They can expand and enhance various camera-based applications and are inexpensive but powerful tools for human-computer interaction. RGB and depth sensing cameras have quite different key parameters, such as exposure time. We focus on the differences in their motion robustness: the RGB camera has relatively long exposure times, while those of ToF (time-of-flight) based depth sensing cameras are relatively short. An experiment on visual tag reading, one typical application, shows that depth sensing cameras can robustly decode moving tags. The proposed technique will yield robust tag reading, indoor localization, and color image stabilization while walking and jogging, or even glancing momentarily, without requiring any special additional devices.

Workload Assessment with Eye Movement Monitoring Aided by Non-invasive and Unobtrusive Micro-fabricated Optical Sensors Demonstrations / Torres, Carlos C. Cortes / Sampei, Kota / Sato, Munehiko / Raskar, Ramesh / Miki, Norihisa Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.53-54
ACM Digital Library Link
Summary: A person's mental state or workload is very relevant when the person is executing delicate tasks, such as piloting an aircraft or operating a crane, because a high level of workload could prevent accomplishing the task and lead to disastrous results. Some frameworks have been developed to assess the workload and determine whether the person is capable of executing a new task. However, such methodologies are applied only after the operator has finished the task, and they are based on paper-and-pencil tests. Therefore, human-friendly devices that could assess the workload in real time are in high demand. In this paper, we report a wearable device that can correlate physical eye behavior with mental state for workload assessment.

Multi-Modal Peer Discussion with RichReview on edX Demonstrations / Yoon, Dongwook / Mitros, Piotr Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.55-56
ACM Digital Library Link
Summary: In this demo, we present RichReview, a multi-modal peer discussion system, implemented as an XBlock in the edX courseware platform. The system brings richness similar to face-to-face communication into online learning at scale. With this demonstration, we discuss the system's scalable back-end architecture, semantic voice editing user interface, and a future research plan for the profile-based group-assignment scheme.

BitDrones: Towards Levitating Programmable Matter Using Interactive 3D Quadcopter Displays Demonstrations / Rubens, Calvin / Braley, Sean / Gomes, Antonio / Goc, Daniel / Zhang, Xujing / Carrascal, Juan Pablo / Vertegaal, Roel Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.57-58
ACM Digital Library Link
Summary: In this paper, we present BitDrones, a platform for the construction of interactive 3D displays that utilize nano quadcopters as self-levitating tangible building blocks. Our prototype is a first step towards supporting interactive mid-air, tangible experiences with physical interaction techniques through multiple building blocks capable of physically representing interactive 3D data.

Methods of 3D Printing Micro-pillar Structures on Surfaces Demonstrations / Ou, Jifei / Cheng, Chin-Yi / Zhou, Liang / Dublon, Gershon / Ishii, Hiroshi Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.59-60
ACM Digital Library Link
Summary: This work presents a method of 3D printing hair-like structures on both flat and curved surfaces. It allows a user to design and fabricate hair geometry that is smaller than 100 microns. We built a software platform to let one quickly define a hair's angle, thickness, density, and height. The ability to fabricate customized hair-like structures expands the library of 3D-printable shapes. We then present several applications to show how the 3D-printed hair can be used for designing toy objects.

Dranimate: Rapid Real-time Gestural Rigging and Control of Animation Demonstrations / Momeni, Ali / Rispoli, Zachary Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.61-62
ACM Digital Library Link
Summary: Dranimate is an interactive animation system that allows users to rapidly and intuitively rig and control animations based on a still image or drawing, using hand gestures. Dranimate combines two complementary methods of shape manipulation: bone-joint-based physics simulation, and the as-rigid-as-possible deformation algorithm. Dranimate also introduces a number of designed interactions that focus the user's attention on the animated content, as opposed to the computer keyboard or mouse.

Elastic Cursor and Elastic Edge: Applying Simulated Resistance to Interface Elements for Seamless Edge-scroll Demonstrations / Lee, Jinha / Baek, Seungcheon Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.63-64
ACM Digital Library Link
Summary: We present elastic cursor and elastic edge, new interaction techniques for seamless edge-scroll. Through the use of light-weight physical simulations of elastic behavior on interface elements, we can improve precision, usability, and cueing on the use of edge-scroll in scrollable windows or screens, and make experiences more playful and easier to learn.

Hand Biometrics Using Capacitive Touchscreens Posters / Tartz, Robert / Gooding, Ted Adjunct Proceedings of the 2015 ACM Symposium on User Interface Software and Technology 2015-11-08 v.2 p.67-68
ACM Digital Library Link
Summary: Biometric methods for authentication on mobile devices are becoming popular. Some methods such as face and voice biometrics are problematic in noisy mobile environments, while others such as fingerprint require specialized hardware to operate. We present a novel biometric authentication method that uses raw touch capacitance data captured from the hand touching a display. Performance results using a moderate sample size (N = 40) yielded an equal error rate (EER) of 2.5%, while a 1-month longitudinal study using a smaller sample (N = 10) yielded an EER = 2.3%. Overall, our results provide evidence for biometric uniqueness, permanence and user acceptance.