HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,292,597
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: landay_j* Results: 109 Sorted by: Date
Records: 1 to 25 of 109
[1] ActiVibe: Design and Evaluation of Vibrations for Progress Monitoring Did you feel the vibration? (Haptic Feedback Everywhere) / Cauchard, Jessica R. / Cheng, Janette L. / Pietrzak, Thomas / Landay, James A. Proceedings of the ACM CHI'16 Conference on Human Factors in Computing Systems 2016-05-07 v.1 p.3261-3271
ACM Digital Library Link
Summary: Smartwatches and activity trackers are becoming prevalent, providing information about health and fitness, and offering personalized progress monitoring. These wearable devices often offer multimodal feedback with embedded visual, audio, and vibrotactile displays. Vibrations are particularly useful when providing discreet feedback, without users having to look at a display or anyone else noticing, thus preserving the flow of the primary activity. Yet, current use of vibrations is limited to basic patterns, since representing more complex information with a single actuator is challenging. Moreover, it is unclear how much the user's current physical activity may interfere with their understanding of the vibrations. We address both issues through the design and evaluation of ActiVibe, a set of vibrotactile icons designed to represent progress through the values 1 to 10. We demonstrate a recognition rate of over 96% in a laboratory setting using a commercial smartwatch. ActiVibe was also evaluated in situ with 22 participants for a 28-day period. We show that the recognition rate is 88.7% in the wild and give a list of factors that affect the recognition, as well as provide design guidelines for communicating progress via vibrations.

[2] Drone & me: an exploration into natural human-drone interaction Interacting with animals and flying robots / Cauchard, Jessica R. / Jane, L. E. / Zhai, Kevin Y. / Landay, James A. Proceedings of the 2015 International Conference on Ubiquitous Computing 2015-09-07 p.361-365
ACM Digital Library Link
Summary: Personal drones are becoming popular. It is challenging to design how to interact with these flying robots. We present a Wizard-of-Oz (WoZ) elicitation study that informs how to naturally interact with drones. Results show strong agreement between participants for many interaction techniques, as when gesturing for the drone to stop. We discovered that people interact with drones as with a person or a pet, using interpersonal gestures, such as beckoning the drone closer. We detail the interaction metaphors observed and offer design insights for human-drone interactions.

[3] Frenzy: collaborative data organization for creating conference sessions Crowds and creativity / Chilton, Lydia B. / Kim, Juho / André, Paul / Cordeiro, Felicia / Landay, James A. / Weld, Daniel S. / Dow, Steven P. / Miller, Robert C. / Zhang, Haoqi Proceedings of ACM CHI 2014 Conference on Human Factors in Computing Systems 2014-04-26 v.1 p.1255-1264
ACM Digital Library Link
Summary: Organizing conference sessions around themes improves the experience for attendees. However, the session creation process can be difficult and time-consuming due to the amount of expertise and effort required to consider alternative paper groupings. We present a collaborative web application called Frenzy to draw on the efforts and knowledge of an entire program committee. Frenzy comprises (a) interfaces to support large numbers of experts working collectively to create sessions, and (b) a two-stage process that decomposes the session-creation problem into meta-data elicitation and global constraint satisfaction. Meta-data elicitation involves a large group of experts working simultaneously, while global constraint satisfaction involves a smaller group that uses the meta-data to form sessions.
    We evaluated Frenzy with 48 people during a deployment at the CSCW 2014 program committee meeting. The session making process was much faster than the traditional process, taking 88 minutes instead of a full day. We found that meta-data elicitation was useful for session creation. Moreover, the sessions created by Frenzy were the basis of the CSCW 2014 schedule.

[4] Designing for Healthy Lifestyles: Design Considerations for Mobile Technologies to Encourage Consumer Health and Wellness / Consolvo, Sunny / Klasnja, Predrag / McDonald, David W. / Landay, James A. Foundations and Trends in Human-Computer Interaction 2014-04-04 v.6 n.3/4 p.167-315
Keywords: Design and Evaluation; Technology; Ubiquitous computing; Wearable computing; Mobile/Pervasive; User Interfaces; Health Care
www.nowpublishers.com/articles/foundations-and-trends-in-humancomputer-interaction/HCI-040
Link to now publishers Digital Content
1 Introduction
2 Collecting Behavioral Data
3 Providing Self-Monitoring Feedback
4 Supporting Goal-Setting
5 Moving Forward
Summary: As the rates of lifestyle diseases such as obesity, diabetes, and heart disease continue to rise, the development of effective tools that can help people adopt and sustain healthier habits is becoming ever more important. Mobile computing holds great promise for providing effective support for helping people manage their health in everyday life. Yet, for this promise to be realized, mobile wellness systems need to be well designed, not only in terms of how they implement specific behavior-change techniques but also, among other factors, in terms of how much burden they put on the user, how well they integrate into the user's daily life, and how they address the user's privacy concerns. Designing for all of these constraints is difficult, and it is often not clear what tradeoffs particular design decisions have on how a wellness application is experienced and used. In this monograph, we provide an account of different design approaches to common features of mobile wellness applications and we discuss the tradeoffs inherent in those approaches. We also outline the key challenges that HCI researchers and designers will need to address to move the state of the art for mobile wellness technologies forward.

[5] Hero: designing learning tools to increase parental involvement in elementary education in China Learning / Zhao, Yuhang / Hope, Alexis / Huang, Jin / Sumitro, Yoel / Landay, James A. / Shi, Yuanchun Extended Abstracts of ACM CHI'13 Conference on Human Factors in Computing Systems 2013-04-27 v.2 p.637-642
ACM Digital Library Link
Summary: In this paper, we present the design of Hero, a suite of learning tools that combine teacher-created extracurricular challenges with in-class motivational tools to help parents become more involved in their child's education, while also engaging students in their own learning. To inform the design, we conducted field studies and interviews involving 7 primary teachers and 15 different families. We analyzed Chinese parenting styles and problems related to parental involvement, and developed three major themes from the data. We then proposed three design goals and created a high-fidelity prototype after several iterations of user testing. A preliminary evaluation showed that teachers, parents, and students could all benefit from the design.

[6] Cascade: crowdsourcing taxonomy creation Papers: collaborative creation / Chilton, Lydia B. / Little, Greg / Edge, Darren / Weld, Daniel S. / Landay, James A. Proceedings of ACM CHI 2013 Conference on Human Factors in Computing Systems 2013-04-27 v.1 p.1999-2008
ACM Digital Library Link
Summary: Taxonomies are a useful and ubiquitous way of organizing information. However, creating organizational hierarchies is difficult because the process requires a global understanding of the objects to be categorized. Usually one is created by an individual or a small group of people working together for hours or even days. Unfortunately, this centralized approach does not work well for the large, quickly changing datasets found on the web. Cascade is an automated workflow that allows crowd workers to spend as little as 20 seconds each while collectively making a taxonomy. We evaluate Cascade and show that on three datasets its quality is 80-90% of that of experts. Cascade has a cost competitive with that of expert information architects, despite taking six times more human labor. Fortunately, this labor can be parallelized such that Cascade can run in as little as four minutes instead of hours or days.

[7] MemReflex: adaptive flashcards for mobile microlearning Learning and training / Edge, Darren / Fitchett, Stephen / Whitney, Michael / Landay, James Proceedings of the 14th Conference on Human-computer interaction with mobile devices and services 2012-09-21 p.431-440
ACM Digital Library Link
Summary: Flashcard systems typically help students learn facts (e.g., definitions, names, and dates), relying on intense initial memorization with subsequent tests delayed up to days later. This approach does not exploit the short, sparse, and mobile opportunities for microlearning throughout the day, nor does it support learners who need the motivation that comes from successful study sessions. In contrast, our MemReflex system of adaptive flashcards gives fast feedback by retesting new items in quick succession, dynamically scheduling future tests according to a model of the learner's memory. We evaluate MemReflex across three user studies. In the first two studies, we demonstrate its effectiveness for both audio and text modalities, even while walking and distracted. In the third study of second-language vocabulary learning, we show how MemReflex enhanced learner accuracy, confidence, and perceptions of control and success. Overall, the work suggests new directions for mobile microlearning and "micro activities" in general.

[8] The design and evaluation of prototype eco-feedback displays for fixture-level water usage data Defying environmental behavior changes / Froehlich, Jon / Findlater, Leah / Ostergren, Marilyn / Ramanathan, Solai / Peterson, Josh / Wragg, Inness / Larson, Eric / Fu, Fabia / Bai, Mazhengmin / Patel, Shwetak / Landay, James A. Proceedings of ACM CHI 2012 Conference on Human Factors in Computing Systems 2012-05-05 v.1 p.2367-2376
ACM Digital Library Link
Summary: Few means currently exist for home occupants to learn about their water consumption: e.g., where water use occurs, whether such use is excessive and what steps can be taken to conserve. Emerging water sensing systems, however, can provide detailed usage data at the level of individual water fixtures (i.e., disaggregated usage data). In this paper, we perform formative evaluations of two sets of novel eco-feedback displays that take advantage of this disaggregated data. The first display set isolates and examines specific elements of an eco-feedback design space such as data and time granularity. Displays in the second set act as design probes to elicit reactions about competition, privacy, and integration into domestic space. The displays were evaluated via an online survey of 651 North American respondents and in-home, semi-structured interviews with 10 families (20 adults). Our findings are relevant not only to the design of future water eco-feedback systems but also for other types of consumption (e.g., electricity and gas).

[9] Voice Games: Investigation Into the Use of Non-speech Voice Input for Making Computer Games More Accessible Accessibility I / Harada, Susumu / Wobbrock, Jacob O. / Landay, James A. Proceedings of IFIP INTERACT'11: Human-Computer Interaction 2011-09-05 v.1 p.11-29
Keywords: Computer games; accessible games; speech recognition; non-speech vocalization
Link to Digital Content at Springer
Summary: We conducted a quantitative experiment to determine the performance characteristics of non-speech vocalization for discrete input generation in comparison to existing speech and keyboard input methods. The results from the study validated our hypothesis that non-speech voice input can offer significantly faster discrete input compared to a speech-based input method by as much as 50%. Based on this and other promising results from the study, we built a prototype system called the Voice Game Controller that augments traditional speech-based input methods with non-speech voice input methods to make computer games originally designed for the keyboard and mouse playable using voice only. Our preliminary evaluation of the prototype indicates that the Voice Game Controller greatly expands the scope of computer games that can be played hands-free using just voice, to include games that were difficult or impractical to play using previous speech-based methods.

[10] Utility of human-computer interactions: toward a science of preference measurement Decision making & the web / Toomim, Michael / Kriplean, Travis / Pörtner, Claus / Landay, James Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011-05-07 v.1 p.2275-2284
ACM Digital Library Link
Summary: The success of a computer system depends upon a user choosing it, but the field of Human-Computer Interaction has little ability to predict this user choice. We present a new method that measures user choice, and quantifies it as a measure of utility. Our method has two core features. First, it introduces an economic definition of utility, one that we can operationalize through economic experiments. Second, we employ a novel method of crowdsourcing that enables the collection of thousands of economic judgments from real users.

[11] MicroMandarin: mobile language learning in context Books & language / Edge, Darren / Searle, Elly / Chiu, Kevin / Zhao, Jing / Landay, James A. Proceedings of ACM CHI 2011 Conference on Human Factors in Computing Systems 2011-05-07 v.1 p.3169-3178
ACM Digital Library Link
Summary: Learning a new language is hard, but learning to use it confidently in conversations with native speakers is even harder. From our field research with language learners, with support from Cognitive Psychology and Second Language Acquisition, we argue for the value of contextual microlearning in the many breaks spread across different places and throughout the day. We present a mobile application that supports such microlearning by leveraging the location-based service Foursquare to automatically provide contextually relevant content in the world's major cities. In an evaluation of Mandarin Chinese learning, a four-week, 23-user study spanning Beijing and Shanghai compared this contextual system to a system based on word frequency. Study sessions with the contextual version lasted half as long but occurred in twice as many places as sessions with the frequency version, suggesting a complementary relationship between the two approaches.

[12] Activity-based Ubicomp: a new research basis for the future of human-computer interaction Invited talk / Landay, James Proceedings of the 2010 International Conference on Multimodal Interfaces 2010-11-08 p.28
ACM Digital Library Link
Summary: Ubiquitous computing (Ubicomp) is bringing computing off the desktop and into our everyday lives. For example, an interactive display can be used by the family of an elder to stay in constant touch with the elder's everyday wellbeing, or by a group to visualize and share information about exercise and fitness. Mobile sensors, networks, and displays are proliferating worldwide in mobile phones, enabling this new wave of applications that are intimate with the user's physical world. In addition to being ubiquitous, these applications share a focus on high-level activities, which are long-term social processes that take place in multiple environments and are supported by complex computation and inference of sensor data. However, the promise of this Activity-based Ubicomp is unfulfilled, primarily due to methodological, design, and tool limitations in how we understand the dynamics of activities. The traditional cognitive psychology basis for human-computer interaction, which focuses on our short-term interactions with technological artifacts, is insufficient for achieving the promise of Activity-based Ubicomp. We are developing design methodologies and tools, as well as activity recognition technologies, both to demonstrate the potential of Activity-based Ubicomp and to support designers in fruitfully creating these types of applications.

[13] Gestalt: integrated support for implementation and analysis in machine learning AI and toolkits / Patel, Kayur / Bancroft, Naomi / Drucker, Steven M. / Fogarty, James / Ko, Andrew J. / Landay, James Proceedings of the 2010 ACM Symposium on User Interface Software and Technology 2010-10-03 p.37-46
Keywords: gestalt, machine learning, software development
ACM Digital Library Link
Summary: We present Gestalt, a development environment designed to support the process of applying machine learning. While traditional programming environments focus on source code, we explicitly support both code and data. Gestalt allows developers to implement a classification pipeline, analyze data as it moves through that pipeline, and easily transition between implementation and analysis. An experiment shows this significantly improves the ability of developers to find and fix bugs in machine learning systems. Our discussion of Gestalt and our experimental observations provide new insight into general-purpose support for the machine learning process.

[14] FrameWire: a tool for automatically extracting interaction logic from paper prototyping tests End-user programming I / Li, Yang / Cao, Xiang / Everitt, Katherine / Dixon, Morgan / Landay, James A. Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010-04-10 v.1 p.503-512
Keywords: paper prototyping, programming by demonstration
ACM Digital Library Link
Summary: Paper prototyping offers unique affordances for interface design. However, due to its spontaneous nature and the limitations of paper, it is difficult to distill and communicate a paper prototype design and its user test findings to a wide audience. To address these issues, we created FrameWire, a computer vision-based system that automatically extracts interaction flows from the video recording of paper prototype user tests. Based on the extracted logic, FrameWire offers two distinct benefits for designers: a structural view of the video recording that allows a designer or a stakeholder to easily distill and understand the design concept and user interaction behaviors, and automatic generation of interactive HTML-based prototypes that can be easily tested with a larger group of users as well as "walked through" by other stakeholders. The extraction is achieved by automatically aggregating video frame sequences into an interaction flow graph based on frame similarities and a designer-guided clustering process. The results of evaluating FrameWire with realistic paper prototyping tests show that our extraction approach is feasible and FrameWire is a promising tool for enhancing existing prototyping practice.

[15] Making muscle-computer interfaces more practical Brains and brawn / Saponas, T. Scott / Tan, Desney S. / Morris, Dan / Turner, Jim / Landay, James A. Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010-04-10 v.1 p.851-854
Keywords: electromyography (emg), muscle-computer interface
ACM Digital Library Link
Summary: Recent work in muscle sensing has demonstrated the potential of human-computer interfaces based on finger gestures sensed from electrodes on the upper forearm. While this approach holds much potential, previous work has given little attention to sensing finger gestures in the context of three important real-world requirements: sensing hardware suitable for mobile and off-desktop environments, electrodes that can be put on quickly without adhesives or gel, and gesture recognition techniques that require no new training or calibration after re-donning a muscle-sensing armband. In this note, we describe our approach to overcoming these challenges, and we demonstrate average classification accuracies as high as 86% for pinching with one of three fingers in a two-session, eight-person experiment.

[16] The design of eco-feedback technology Home eco behavior / Froehlich, Jon / Findlater, Leah / Landay, James Proceedings of ACM CHI 2010 Conference on Human Factors in Computing Systems 2010-04-10 v.1 p.1999-2008
Keywords: eco-feedback, environmental hci, reflective hci, survey
ACM Digital Library Link
Summary: Eco-feedback technology provides feedback on individual or group behaviors with a goal of reducing environmental impact. The history of eco-feedback extends back more than 40 years to the origins of environmental psychology. Despite its stated purpose, few HCI eco-feedback studies have attempted to measure behavior change. This leads to two overarching questions: (1) what can HCI learn from environmental psychology and (2) what role should HCI have in designing and evaluating eco-feedback technology? To help answer these questions, this paper conducts a comparative survey of eco-feedback technology, including 89 papers from environmental psychology and 44 papers from the HCI and UbiComp literature. We also provide an overview of predominant models of proenvironmental behaviors and a summary of key motivation techniques to promote this behavior.

[17] INTERNET DUB Group - Design : Use : Build / Wobbrock, Jacob O. / Anderson, Richard / Aragon, Cecilia R. / Borning, Alan / Borriello, Gaetano / Cheng, Karen / Demiris, George / Efthimiadis, Efthimis N. / Farkas, David K. / Feil, Magnus / Fogarty, James / Friedman, Batya / Gould, Annabelle / Hendry, David G / Johnson, Brian R. / Johnson, Kurt L. / Jones, William / Kientz, Julie A. / Ko, Andrew J. / Kolko, Beth / Kriz, Sarah / Ladner, Richard E. / Landay, James A. / Lee, Charlotte P. / McDonald, David W. / Muren, Dominic L / Patel, Shwetak N. / Pratt, Wanda / Ramey, Judith / Roesler, Axel / Spyridakis, Jan / Tanimoto, Steve L. / Turns, Jennifer / Weld, Daniel S. / Zachry, Mark / Baudisch, Patrick / Davidson, Andrew / Drucker, Steven M. / Morris, Meredith Ringel / Parikh, Tapan / Tan, Desney / Wixon, Dennis R. 2010-01-17 United States, Washington, Seattle University of Washington
Keywords: hci-sites:laboratories | education:programs | education:1st_choice
Languages: English
dub.washington.edu/
Faculty and Programs in HCI at UW
E-mail: wobbrock@uw.edu
Summary: The multi-departmental DUB (design:use:build) group at the University of Washington.
Summary: The DUB Group comprises faculty and students interested in HCI and Design research at the University of Washington. It is a cross-campus multi-departmental group with numerous faculty and students working on countless projects in HCI.

[18] Enabling always-available input with muscle-computer interfaces Meat-space / Saponas, T. Scott / Tan, Desney S. / Morris, Dan / Balakrishnan, Ravin / Turner, Jim / Landay, James A. Proceedings of the 2009 ACM Symposium on User Interface Software and Technology 2009-10-04 p.167-176
Keywords: electromyography (EMG), input, interaction, muscle-computer interface
ACM Digital Library Link
Summary: Previous work has demonstrated the viability of applying offline analysis to interpret forearm electromyography (EMG) and classify finger gestures on a physical surface. We extend those results to bring us closer to using muscle-computer interfaces for always-available input in real-world applications. We leverage existing taxonomies of natural human grips to develop a gesture set covering interaction in free space even when hands are busy with other objects. We present a system that classifies these gestures in real-time and we introduce a bi-manual paradigm that enables use in interactive systems. We report experimental results demonstrating four-finger classification accuracies averaging 79% for pinching, 85% while holding a travel mug, and 88% when carrying a weighted bag. We further show generalizability across different arm postures and explore the tradeoffs of providing real-time visual feedback.

[19] Goal-setting considerations for persuasive technologies that encourage physical activity Persuading for healthy lifestyle / Consolvo, Sunny / Klasnja, Predrag / McDonald, David W. / Landay, James A. Proceedings of the 2009 International Conference on Persuasive Technology 2009-04-26 p.8
ACM Digital Library Link
Summary: Goal-setting has been shown to be an effective strategy for changing behavior; therefore employing goal-setting in persuasive technologies could be an effective way to encourage behavior change. In our work, we are developing persuasive technologies to encourage individuals to live healthy lifestyles with a focus on being physically active. As part of our investigations, we have explored individuals' reactions to goal-setting, specifically goal sources (i.e., who should set the individual's goal) and goal timeframes (i.e., over what time period should an individual have to achieve the goal). In this paper, we present our findings related to various approaches for implementing goal-setting in a persuasive technology to encourage physical activity.

[20] Longitudinal study of people learning to use continuous voice-based cursor control Accessibility/special needs / Harada, Susumu / Wobbrock, Jacob O. / Malkin, Jonathan / Bilmes, Jeff A. / Landay, James A. Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009-04-04 v.1 p.347-356
Keywords: longitudinal study, motor impairment, pointer control, speech recognition, voice-based interface
ACM Digital Library Link
Summary: We conducted a 2.5 week longitudinal study with five motor impaired (MI) and four non-impaired (NMI) participants, in which they learned to use the Vocal Joystick, a voice-based user interface control system. We found that the participants were able to learn the mapping between the vowel sounds and directions used by the Vocal Joystick, and showed marked improvement in their target acquisition performance. At the end of the ten session period, the NMI group reached the same level of performance as the previously measured "expert" Vocal Joystick performance, and the MI group was able to reach 70% of that level. Two of the MI participants were also able to approach the performance of their preferred device, a touchpad. We report on a number of issues that can inform the development of further enhancements in the realm of voice-driven computer control.

[21] Theory-driven design strategies for technologies that support behavior change in everyday life Creating thought and self-improvement / Consolvo, Sunny / McDonald, David W. / Landay, James A. Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009-04-04 v.1 p.405-414
Keywords: behavior change, design strategies, everyday life, lifestyle, mobile phone, persuasive technology, physical activity
ACM Digital Library Link
Summary: In this paper, we propose design strategies for persuasive technologies that help people who want to change their everyday behaviors. Our strategies use theory and prior work to substantially extend a set of existing design goals. Our extensions specifically account for social characteristics and other tactics that should be supported by persuasive technologies that target long-term discretionary use throughout everyday life. We used these strategies to design and build a system that encourages people to lead a physically active lifestyle. Results from two field studies of the system -- a three-week trial and a three-month experiment -- showed that the system was successful at helping people maintain a more physically active lifestyle and validated the usefulness of the strategies.

[22] UbiGreen: investigating a mobile tool for tracking and supporting green transportation habits Sustainability 2 / Froehlich, Jon / Dillahunt, Tawanna / Klasnja, Predrag / Mankoff, Jennifer / Consolvo, Sunny / Harrison, Beverly / Landay, James A. Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009-04-04 v.1 p.1043-1052
Keywords: ambient displays, mobile phones, sensing, sustainability, transportation, ubicomp
ACM Digital Library Link
Summary: The greatest contributor of CO2 emissions in the average American household is personal transportation. Because transportation is inherently a mobile activity, mobile devices are well suited to sense and provide feedback about these activities. In this paper, we explore the use of personal ambient displays on mobile phones to give users feedback about sensed and self-reported transportation behaviors. We first present results from a set of formative studies exploring our respondents' existing transportation routines, willingness to engage in and maintain green transportation behavior, and reactions to early mobile phone "green" application design concepts. We then describe the results of a 3-week field study (N=13) of the UbiGreen Transportation Display prototype, a mobile phone application that semi-automatically senses and reveals information about transportation behavior. Our contributions include a working system for semi-automatically tracking transit activity, a visual design capable of engaging users in the goal of increasing green transportation, and the results of our studies, which have implications for the design of future green applications.

[23] Attaching UI enhancements to websites with end users Advanced web scenarios / Toomim, Michael / Drucker, Steven M. / Dontcheva, Mira / Rahimi, Ali / Thomson, Blake / Landay, James A. Proceedings of ACM CHI 2009 Conference on Human Factors in Computing Systems 2009-04-04 v.1 p.1859-1868
Keywords: end-user programming, mashups, programming by example, web data extraction
ACM Digital Library Link
Summary: We present reform, a step toward write-once apply-anywhere user interface enhancements. The reform system envisions roles for both programmers and end users in enhancing existing websites to support new goals. First, a programmer authors a traditional mashup or browser extension, but they do not write a web scraper. Instead they use reform, which allows novice end users to attach the enhancement to their favorite sites with a scraping by-example interface. reform makes enhancements easier to program while also carrying the benefit that end users can apply the enhancements to any number of new websites. We present reform's architecture, user interface, interactive by-example extraction algorithm for novices, and evaluation, along with five example reform-enabled enhancements.

[24] Toolkit Support for Integrating Physical and Digital Interactions / Klemmer, Scott R. / Landay, James A. Human-Computer Interaction 2009 v.24 n.3 p.315-366
Link to Article at informaworld
Summary: There is great potential in enabling users to interact with digital information by integrating it with everyday physical objects. However, developing these interfaces requires programmers to acquire and abstract physical input. This is difficult, is time-consuming, and requires a high level of technical expertise in fields very different from user interface development -- especially in the case of computer vision. Based on structured interviews with researchers, a literature review, and our own experience building physical interfaces, we created Papier-Mâché, a toolkit for integrating physical and digital interactions. Its library supports computer vision, electronic tags, and barcodes. Papier-Mâché introduces high-level abstractions for working with these input technologies that facilitate technology portability. We evaluated this toolkit through a laboratory study and longitudinal use in course and research projects, finding the input abstractions, technology portability, and monitoring facilities to be highly effective.

[25] VoiceLabel: using speech to label mobile sensor data Multimodal systems I (poster session) / Harada, Susumu / Lester, Jonathan / Patel, Kayur / Saponas, T. Scott / Fogarty, James / Landay, James A. / Wobbrock, Jacob O. Proceedings of the 2008 International Conference on Multimodal Interfaces 2008-10-20 p.69-76
Keywords: data collection, machine learning, mobile devices, sensors, speech recognition
ACM Digital Library Link
Summary: Many mobile machine learning applications require collecting and labeling data, and a traditional GUI on a mobile device may not be an appropriate or viable method for this task. This paper presents an alternative approach to mobile labeling of sensor data called VoiceLabel. VoiceLabel consists of two components: (1) a speech-based data collection tool for mobile devices, and (2) a desktop tool for offline segmentation of recorded data and recognition of spoken labels. The desktop tool automatically analyzes the audio stream to find and recognize spoken labels, and then presents a multimodal interface for reviewing and correcting data labels using a combination of the audio stream, the system's analysis of that audio, and the corresponding mobile sensor data. A study with ten participants showed that VoiceLabel is a viable method for labeling mobile sensor data. VoiceLabel also illustrates several key features that inform the design of other data labeling tools.