
Proceedings of the 2011 International Conference on Intelligent User Interfaces

Fullname: International Conference on Intelligent User Interfaces
Editors: Pearl Pu; Michael Pazzani; Elisabeth André; Doug Riecken
Location: Palo Alto, California
Dates: 2011-Feb-13 to 2011-Feb-16
Standard No: ISBN 1-4503-0419-2, 978-1-4503-0419-1
  1. Keynote address
  2. Handheld devices
  3. Multimodal interfaces
  4. Social computing and navigation
  5. Keynote address
  6. Intelligent help agents
  7. Input technologies
  8. User modeling and personalization
  9. Keynote address
  10. Intelligent authoring and information presentation
  11. Pen-based interfaces
  12. Posters
  13. Demos
  14. Tutorials
  15. Workshops

Keynote address

Bricks, arches, and cathedrals: reflections on paths to deeper human-computer symbioses (p. 1)
  Eric Horvitz
I will share thoughts about achievements to date and opportunities moving forward on harnessing advances in machine intelligence to enable new forms of competent and fluid human-computer collaboration. I will discuss the promise of assembling key building blocks of methods in machine perception, learning, and inference into larger integrative solutions that draw upon a symphony of skills and that operate over extended periods of time. Explorations of such integrative machine intelligence frame research on the coordination of multiple components for sensing and reasoning to create higher-level functionalities and abstractions. I will discuss the promise of these efforts to advance us toward realizing dreams of deeper human-computer symbioses as imagined by such visionaries as Licklider and Engelbart.

Handheld devices

Find this for me: mobile information retrieval on the open web (pp. 3-12)
  Ifeyinwa Okoye; Jalal Mahmud; Tessa Lau; Julian Cerruti
With all the information available on the web, there is a need to provide mobile access to this information for the large and growing population of mobile internet users. In this paper, we propose a solution to the problem of open web mobile information retrieval by conducting a dialogue with the user over a simple text-based interface. Using techniques from NLP, web page analysis, and information extraction, our approach automatically navigates web sites on the user's behalf and extracts specific information from those sites to present to the user textually. Empirical evaluation shows that our approach to open web information retrieval is feasible, and a qualitative evaluation validates that such a system meets user needs for mobile information access.
The social camera: a case-study in contextual image recommendation (pp. 13-22)
  Steven Bourke; Kevin McCarthy; Barry Smyth
The digital camera revolution has changed the world of photography, and now most people have access to, and even regularly carry, a digital camera. Often these cameras have been designed with simplicity in mind: they harness a variety of sophisticated technologies in order to automatically take care of all manner of complex settings (aperture, shutter speed, flash etc.) for point-and-shoot ease, and these assistive features are usually incorporated directly into the camera's interface. However, there is little or no support for the end-user when it comes to helping them to compose or frame a scene. To this end we describe a novel recommendation process which uses a variety of intelligent and assistive interfaces to guide the user in taking relevant compositions given their current location and scene context. This application has been implemented on the Android platform; we describe its core user interaction and recommendation technologies, and demonstrate its effectiveness in a number of real-world scenarios. Specifically we report on the results of a live-user trial of the technology in a real-world tourist setting.
picoTrans: an icon-driven user interface for machine translation on mobile devices (pp. 23-32)
  Wei Song; Andrew Michael Finch; Kumiko Tanaka-Ishii; Eiichiro Sumita
In this paper we present a novel user interface that integrates two popular approaches to language translation for travelers, allowing multimodal communication between the parties involved. In our approach we integrate the popular picture book, in which the user simply points to multiple picture icons representing what they want to say, with a statistical machine translation system that can translate arbitrary word sequences. The simple pointing-at-pictures paradigm is used as the primary method of user input, and users can use the device as if it were a picture book. The application is then able to generate a complete sentence in the user's native language for what they wish to say from the sequence of picture icons chosen by the user. Once the user is satisfied that the sentence provided by the system adequately represents what they wish to convey, the application can automatically translate the sentence into the language of the other party, who can interpret the intended meaning of the first party by combining evidence from both modes of communication: the picture sequence and the machine translation. The prototype system we have developed inherits many of the positive features of both approaches, while at the same time mitigating their main weaknesses. The user may combine the pictures in considerably more ways than is possible with a picture book, which is designed only for combinations within the same page spread, making the application more expressive than a book. The machine translation system can contribute a detailed and precise translation, while the picture-based mode not only provides a rapid method to communicate basic concepts but also gives a 'second opinion' on the machine translation output that catches translation errors and allows the users to retry the sentence, avoiding misunderstandings.
Influence of landmark-based navigation instructions on user attention in indoor smart spaces (pp. 33-42)
  Petteri Nurmi; Antti Salovaara; Sourav Bhattacharya; Teemu Pulkkinen; Gerrit Kahl
Using landmark-based navigation instructions is widely considered to be the most effective strategy for presenting navigation instructions. Among other things, landmark-based instructions can reduce the user's cognitive load, increase confidence in navigation decisions and reduce the number of navigational errors. Their main disadvantage is that the user typically focuses a considerable amount of attention on searching for landmark points, which easily results in poor awareness of the user's surroundings. In indoor spaces, this implies that landmark-based instructions can reduce the attention the user pays to advertisements and commercial displays, thus rendering the assistance commercially inviable. To better understand how landmark-based instructions influence the user's awareness of her surroundings, we conducted a user study with 20 participants in a large national supermarket that investigated how the attention the user pays to her surroundings varies across two types of landmark-based instructions that differ in their visual demand. The results indicate that an increase in the visual demand of landmark-based instructions does not necessarily improve the participant's recall of their surrounding environment and that this increase can cause a decrease in navigation efficiency. The results also indicate that participants generally pay little attention to their surroundings and are more likely to rationalize than to actually remember much from their surroundings. Implications of the findings for navigation assistants are discussed.

Multimodal interfaces

Multimodal summarization of complex sentences (pp. 43-52)
  Naushad UzZaman; Jeffrey P. Bigham; James F. Allen
In this paper, we introduce the idea of automatically illustrating complex sentences as multimodal summaries that combine pictures, structure and simplified compressed text. By including text and structure in addition to pictures, multimodal summaries provide additional clues of what happened, who did it, to whom and how, to people who may have difficulty reading or who are looking to skim quickly. We present ROC-MMS, a system for automatically creating multimodal summaries (MMS) of complex sentences by generating pictures, textual summaries and structure. We show that pictures alone are insufficient to help people understand most sentences, especially for readers who are unfamiliar with the domain. An evaluation of ROC-MMS in the Wikipedia domain illustrates both the promise and challenge of automatically creating multimodal summaries.
Evaluating multimodal affective fusion using physiological signals (pp. 53-62)
  Stephen W. Gilroy; Marc O. Cavazza; Valentin Vervondel
In this paper we present an evaluation of an affective multimodal fusion approach utilizing dimensional representations of emotion. The evaluation uses physiological signals as a reference measure of users' emotional states. Surface electromyography (EMG) and galvanic skin response (GSR) signals are known to be correlated with specific dimensions of emotion (Pleasure and Arousal) and are compared here to real time continuous values of these dimensions obtained from affective multimodal fusion. The results (both qualitative and quantitative) suggest that the particular multimodal fusion approach described is consistent with physiological indicators of emotion, constituting a first positive evaluation of the approach.
A novel taxonomy for gestural interaction techniques based on accelerometers (pp. 63-72)
  Adriano Scoditti; Renaud Blanch; Joëlle Coutaz
A large variety of gestural interaction techniques based on accelerometers is now available. In this article, we propose a new taxonomic space as a systematic structure for supporting the comparative analysis of these techniques as well as for designing new ones. An interaction technique is plotted as a point in a space where the vertical axis denotes the semantic coverage of the techniques, and the horizontal axis expresses the physical actions users are engaged in, i.e. the lexicon. In addition, syntactic modifiers are used to express the interpretation process of input tokens into semantics, as well as pragmatic modifiers to make explicit the level of indirection between users' actions and system responses. To demonstrate the coverage of the taxonomy, we have classified 25 interaction techniques based on accelerometers. The analysis of the design space per se reveals directions for future research.

Social computing and navigation

Mobile drama in an instrumented museum: inducing group conversation via coordinated narratives (pp. 73-82)
  Charles Callaway; Oliviero Stock; Elyon Dekoven; Kinneret Noy; Yael Citron; Yael Dobrin
Museum visits can be more enjoyable for small groups if they can be both social and educational experiences. One very rewarding aspect of a visit, especially those involving small groups such as families, is the unmediated group discussion that can ensue during a shared cultural experience. We present a museum mobile system that perceives and analyzes group behavior and uses the result to adaptively deliver coordinated dramatic narrative presentations, resulting in the stimulation of group conversation. In particular, our drama-based presentations contain slight differences in content between the two visitors, leveraging the narrative tension/release cycle to naturally lead visitors to fill in missing pieces by interacting with friends and initiate a conversation. As a first step toward evaluation, we present a study in a neutral environment centered around the effects of those differences in stories between pairs of participants, showing that listening to narratives with slight differences between them can significantly increase subsequent conversation.
Groups without tears: mining social topologies from email (pp. 83-92)
  Diana MacLean; Sudheendra Hangal; Seng Keat Teh; Monica S. Lam; Jeffrey Heer
As people accumulate hundreds of "friends" in social media, a flat list of connections becomes unmanageable. Interfaces agnostic to social structure hinder the nuanced sharing of personal data such as photos, status updates, news feeds, and comments. To address this problem, we propose social topologies, a set of potentially overlapping and nested social groups, that represent the structure and content of a person's social network as a first-class object. We contribute an algorithm for creating social topologies by mining communication history and identifying likely groups based on co-occurrence patterns. We use our algorithm to populate a browser interface that supports creation and editing of social groups via direct manipulation. A user study confirms that our approach models subjects' social topologies well, and that our interface enables intuitive browsing and management of a personal social landscape.
Navigating the tag genome (pp. 93-102)
  Jesse Vig; Shilad Sen; John Riedl
Tags help users understand a rich information space by showing them specific text annotations for each item in the space and enabling them to search by these annotations. Often, however, users may wish to move from one item to other items that are similar overall, but that differ in key characteristics. For example, a user who loves Pulp Fiction might want to see a similar movie, but might be in the mood for a less "dark" movie. This paper introduces Movie Tuner, a novel interface that supports navigation from one item to nearby items along dimensions represented by tags. Movie Tuner is based on a data structure called the tag genome, which is described in separate work. The tag genome encodes each item's relationship to a common set of tags by applying machine learning algorithms to user-contributed content. The present paper discusses our design of Movie Tuner, including algorithms for navigating to new items and for suggesting tags for navigation. We present the results of a 7-week field trial with 2,531 users of Movie Tuner, and of a survey evaluating users' subjective experience.
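The navigation idea in this abstract can be sketched concretely. In the hypothetical example below, each movie carries a vector of tag-relevance scores (a toy stand-in for the tag genome), and moving to a "less dark" movie trades off overall similarity against movement along the chosen tag. All titles' scores and the weighting are invented for illustration; the paper's actual navigation and tag-suggestion algorithms are more elaborate.

```python
import math

# Toy stand-in for the tag genome: tag-relevance scores in [0, 1].
# All values are invented for illustration.
genome = {
    "Pulp Fiction": {"dark": 0.9, "violent": 0.8, "nonlinear": 0.9, "comedy": 0.4},
    "Fargo":        {"dark": 0.7, "violent": 0.6, "nonlinear": 0.2, "comedy": 0.6},
    "Snatch":       {"dark": 0.5, "violent": 0.7, "nonlinear": 0.8, "comedy": 0.7},
    "The Notebook": {"dark": 0.1, "violent": 0.0, "nonlinear": 0.3, "comedy": 0.2},
}

def similarity(a, b):
    """Cosine similarity between two items' tag vectors."""
    tags = genome[a].keys()
    dot = sum(genome[a][t] * genome[b][t] for t in tags)
    na = math.sqrt(sum(v * v for v in genome[a].values()))
    nb = math.sqrt(sum(v * v for v in genome[b].values()))
    return dot / (na * nb)

def navigate(source, tag, direction=-1, tradeoff=0.3):
    """Pick the item most similar to `source` after rewarding movement
    along one tag dimension (direction=-1 means "less of this tag")."""
    def score(candidate):
        delta = genome[candidate][tag] - genome[source][tag]
        return similarity(source, candidate) + tradeoff * direction * delta
    return max((m for m in genome if m != source), key=score)

print(navigate("Pulp Fiction", "dark"))  # similar overall, but less "dark"
```

The `tradeoff` parameter balances staying close to the source item against moving along the selected tag: set it to 0 and the result is simply the nearest neighbor; set it high and the tag dimension dominates.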

Keynote address

An introduction to online targeted advertising: principles, implementation, controversies (p. 103)
  Andrei Z. Broder
Online user interaction is becoming increasingly personalized, both via explicit means (customizations, options, add-ons, skins, apps, etc.) and via implicit means, that is, deep data mining of user activities that allows automated selection of content and experiences, e.g. individualized top news stories, personalized ranking of search results, personal "radio stations" that capture idiosyncratic tastes from past choices, individually recommended purchases, and so on. On the other hand, the vast majority of providers of content and services (e.g. portals, search engines, social sites) are supported by advertising, which, at its core, is just a different type of information. Thus, not surprisingly, online advertising is becoming increasingly personalized as well, supported by an emerging new scientific sub-discipline, Computational Advertising.
   The central problem of Computational Advertising is to find the "best match" between a given user in a given context and a suitable advertisement. The context could be a user entering a query in a search engine ("sponsored search"), a user reading a web page ("content match" and "display ads"), a user communicating via instant messaging or via e-mail, a user interacting with a portable device, and many more. The information about the user can vary from scarily detailed to practically nil. The number of potential advertisements might be in the billions. Thus, depending on the definition of "best match", this problem leads to a variety of massive optimization and search problems, with complicated constraints. The solution to these problems provides the scientific and technical foundations of the online advertising industry, which according to eMarketer is estimated to achieve $25.8 billion in revenue in 2010 in the US alone, for the first time exceeding print advertising revenue at "only" $22.8 billion.
   The focus of this talk is targeted advertising, a form of personalized advertising whereby advertisers specify the features of their desired audience, either explicitly, by specifying characteristics such as demographics, location, and context, or implicitly by providing examples of their ideal audience. A particular form of targeted advertising is behavioral targeting, where the desired audience is characterized by its past behavior. We will discuss how targeted advertising fits the optimization framework above, present some of the mechanisms by which targeted and behavioral advertising are implemented, and briefly survey the controversies surrounding behavioral advertising as a potential infringement on user privacy. We will conclude with some speculations about the future of personalized advertising and interesting areas of research.

Intelligent help agents

Deriving a recipe similarity measure for recommending healthful meals (pp. 105-114)
  Youri van Pinxteren; Gijs Geleijnse; Paul Kamsteeg
A recipe recommender system may stimulate healthful and varied eating when the presented recipes fit the lifestyle of the user. Because consumers face barriers to changing their eating and cooking behavior, we aim for a strategy that provides more healthful variations on routine recipes. In this paper, a similarity measure for recipes is derived by taking a user-centered approach. Such a measure can be used to recommend healthier alternatives that are perceived to be similar to commonly selected meals. Recipes presented using this strategy may fit the demand for health and variation within the boundaries of a busy lifestyle. Having derived and evaluated a recipe similarity measure, we explore its use through an at-home trial.
End-user feature labeling: a locally-weighted regression approach (pp. 115-124)
  Weng-Keen Wong; Ian Oberst; Shubhomoy Das; Travis Moore; Simone Stumpf; Kevin McIntosh; Margaret Burnett
When intelligent interfaces, such as intelligent desktop assistants, email classifiers, and recommender systems, customize themselves to a particular end user, such customizations can decrease productivity and increase frustration due to inaccurate predictions -- especially in early stages, when training data is limited. The end user can improve the learning algorithm by tediously labeling a substantial amount of additional training data, but this takes time and is too ad hoc to target a particular area of inaccuracy. To solve this problem, we propose a new learning algorithm based on locally weighted regression for feature labeling by end users, enabling them to point out which features are important for a class, rather than provide new training instances. In our user study, the first allowing ordinary end users to freely choose features to label directly from text documents, our algorithm was both more effective than others at leveraging end users' feature labels to improve the learning algorithm, and more robust to real users' noisy feature labels. These results strongly suggest that allowing users to freely choose features to label is a promising method for allowing end users to improve learning algorithms effectively.
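The building block named in this abstract, locally weighted regression, can be illustrated in isolation. The sketch below is a generic one-dimensional LWR on synthetic data, not the paper's feature-labeling formulation: for each query point, training examples are weighted by a Gaussian kernel on their distance to the query, and a weighted least-squares line is fitted locally.

```python
import math

def lwr_predict(x_query, xs, ys, bandwidth=1.0):
    """Locally weighted linear regression at a single query point."""
    # Kernel weights: nearby training points dominate the local fit.
    w = [math.exp(-((x - x_query) ** 2) / (2 * bandwidth ** 2)) for x in xs]
    # Closed-form weighted least squares for y = a + b*x in one dimension.
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    my = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, xs, ys))
    var = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, xs))
    b = cov / var if var > 1e-12 else 0.0
    a = my - b * mx
    return a + b * x_query

xs = [0, 1, 2, 3, 4, 5]
ys = [0, 1, 4, 9, 16, 25]        # y = x^2: nonlinear, but locally near-linear
print(lwr_predict(2.5, xs, ys))  # local fit lands between y=4 and y=9
```

Because each prediction is a separate local fit, the model can follow a curve that no single global line could; the paper adapts this locality so that user-supplied feature labels reweight the fit rather than distances alone.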
Taking advice from intelligent systems: the double-edged sword of explanations (pp. 125-134)
  Kate Ehrlich; Susanna E. Kirk; John Patterson; Jamie C. Rasmussen; Steven I. Ross; Daniel M. Gruen
Research on intelligent systems has emphasized the benefits of providing explanations along with recommendations. But can explanations lead users to make incorrect decisions? We explored this question in a controlled experimental study with 18 professional network security analysts doing an incident classification task using a prototype cybersecurity system. The system provided three recommendations on each trial. The recommendations were displayed with explanations (called "justifications") or without. On half the trials, one of the recommendations was correct; in the other half none of the recommendations was correct. Users were more accurate with correct recommendations. Although there was no benefit overall of explanation, we found that a segment of the analysts were more accurate with explanations when a correct choice was available but were less accurate with explanations in the absence of a correct choice. We discuss implications of these results for the design of intelligent systems.
Learning to ask the right questions to help a learner learn (pp. 135-144)
  Melinda Gervasio; Eric Yeh; Karen Myers
Intelligent systems require substantial bodies of problem-solving knowledge. Machine learning techniques hold much appeal for acquiring such knowledge but typically require extensive amounts of user-supplied training data. Alternatively, informed question asking can supplement machine learning by directly eliciting critical knowledge from a user. Question asking can reduce the amount of training data required, and hence the burden on the user; furthermore, focused question asking holds significant promise for faster and more accurate acquisition of knowledge. In previous work, we developed static strategies for question asking that provide background knowledge for a base learner, enabling the learner to make useful generalizations even with few training examples. Here, we extend that work with a learning approach for automatically acquiring question-asking strategies that better accommodate the interdependent nature of questions. We present experiments validating the approach and showing its usefulness for acquiring efficient, context-dependent question-asking strategies.

Input technologies

Design and validation of two-handed multi-touch tabletop controllers for robot teleoperation (pp. 145-154)
  Mark Micire; Munjal Desai; Jill L. Drury; Eric McCann; Adam Norton; Katherine M. Tsui; Holly A. Yanco
Controlling the movements of mobile robots, including driving the robot through the world and panning the robot's cameras, typically requires many physical joysticks, buttons, and switches. Operators will often employ a technique called "chording" to cope with this situation. Much like a piano player, the operator will simultaneously actuate multiple joysticks and switches with his or her hands to create a combination of complementary movements. However, these controls are in fixed locations and cannot easily be reprogrammed. Using a Microsoft Surface multi-touch table, we have designed an interface that allows chording and simultaneous multi-handed interaction anywhere that the user wishes to place his or her hands. Taking inspiration from the biomechanics of the human hand, we have created a dynamically resizing, ergonomic, and multi-touch controller (the DREAM Controller). This paper presents the design and testing of this controller with an iRobot ATRV-JR robot.
PaperSketch: a paper-digital collaborative remote sketching tool (pp. 155-164)
  Nadir Weibel; Beat Signer; Moira C. Norrie; Hermann Hofstetter; Hans-Christian Jetter; Harald Reiterer
Pen and paper support the rapid production of sketches. However, the paper interface is not always optimal for collaborative sketching, as seen in brainstorming sessions where multiple parties would often like to communicate and participate in the sketching synchronously. Novel interactive paper solutions may provide the answer by bridging the paper-digital divide and allowing users to sketch on paper simultaneously while capturing the actions digitally. We present an analysis of collaborative sketching activities in working environments with remote participation. After highlighting the importance of paper for natural interaction in these settings, we introduce PaperSketch, an interactive paper-digital tool for collaborative remote sketching. We discuss the collaborative development of ideas based on the prototype and outline how important feedback issues have been addressed by utilising spatial constraints and multimodal features.
Wish I hadn't clicked that: context based icons for mobile web navigation and directed search tasks (pp. 165-174)
  Vidya Setlur; Samuel Rossoff; Bruce Gooch
Typical web navigation techniques tend to support undirected web browsing, a depth-first search of information pages. This search strategy often results in the unintentional behavior of 'web surfing', where a user starts in search of information but is sidetracked by tangential links. A mobile user, in particular, would prefer to extract the desired information quickly and with minimal mental effort. In this paper, we introduce 'SemantiLynx', which visually augments hyperlinks on web pages to better support directed search tasks on small-screen ubiquitous platforms. Our algorithm comprises four parts: establishing the context of information related to a hyperlink, retrieving relevant imagery based on this context, applying image simplification, and finally compositing a visual icon for the given hyperlink. We evaluated our system by conducting user studies for directed web search tasks and comparing the results to using textual snippets and webpage thumbnails.

User modeling and personalization

Inferring word relevance from eye-movements of readers (pp. 175-184)
  Tomasz D. Loboda; Peter Brusilovsky; Jörg Brunstein
Reading is one of the most important skills in today's society. The ubiquity of this activity has naturally affected many information systems; some exist solely to present textual information. One concrete task often performed on a computer and involving reading is finding relevant parts of text. In the current study, we investigated whether word-level relevance, defined as a binary measure of an individual word being congruent with the reader's current informational needs, could be inferred given only the text and eye movements of readers. We found that the number of fixations, first-pass fixations, and the total viewing time can be used to predict the relevance of sentence-terminal words. In light of what is known about eye movements of readers, knowing which sentence-terminal words are relevant can help in an unobtrusive identification of relevant sentences.
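The kind of predictor this abstract motivates can be sketched as a simple linear score over the three eye-movement features the authors found informative. The weights and threshold below are invented for illustration and are not the study's fitted model:

```python
# Hypothetical relevance predictor for a sentence-terminal word, scored from
# fixation count, first-pass fixation count, and total viewing time (ms).
# All coefficients are invented for illustration.
def predict_relevance(fixations, first_pass_fixations, viewing_time_ms):
    score = (0.4 * fixations
             + 0.3 * first_pass_fixations
             + 0.002 * viewing_time_ms)
    return score > 2.0  # higher score = more likely relevant

# A word refixated several times and dwelled on is flagged as relevant,
# while one skimmed in a single short pass is not.
print(predict_relevance(4, 2, 600))  # True
print(predict_relevance(1, 1, 150))  # False
```

In practice such coefficients would be fitted per reader or per corpus; the sketch only shows how the three features combine into a binary word-level relevance decision.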
Predicting and compensating for lexicon access errors (pp. 185-194)
  Lars Yencken; Timothy Baldwin
Learning a foreign language is a long, error-prone process, and much of a learner's time is effectively spent studying vocabulary. Many errors occur because words are only partly known, and this makes their mental storage and retrieval problematic. This paper describes how an intelligent interface may take advantage of the access structure of the mental lexicon to help predict the types of mistakes that learners make, and thus compensate for them. We give two examples, firstly a dictionary interface which circumvents the tip-of-the-tongue problem through search-by-similarity, and secondly an adaptive test generator which leverages user errors to generate plausible multiple-choice distractors.
Learning usability assessment models for web sites (pp. 195-204)
  Paul A. Davis; Frank M. Shipman
Our work explores an approach to learning types of usability concerns considered useful for the management of Web sites and to identifying usability concerns based on these learned models. By having one or more Web site managers rate a subset of pages in a site based on a number of usability criteria, we build a model that determines what automatically measurable characteristics are correlated to issues identified. To test this approach, we collected usability assessments from twelve students pursuing advanced degrees in the area of computer-human interaction. These students were divided into two groups and given different scenarios of use of a Web site. They assessed the usability of Web pages from the site, and their data was divided into a training set, used to find models, and a prediction set, used to evaluate the relative quality of models. Results show that the learned models predicted remaining data for one scenario in more categories of usability than did the single model found under the alternate scenario. Results also show how systems may prioritize usability problems for Web site managers by probability of occurrence rather than by merely listing pages that break specific rules, as provided by some current tools.
Information at your fingertips: contextual IR in enterprise email (pp. 205-214)
  Jie Lu; Shimei Pan; Jennifer C. Lai; Zhen Wen
We present ICARUS, a contextual information retrieval system, which uses the current email message and a multi-tiered user model to retrieve relevant content and make it available in a sidebar widget embedded in the email client. The system employs a dynamic retrieval strategy to conduct automated contextual search across multiple information sources including the user's hard drive, online documents (wikis, blogs and files) and other email messages. It also presents the user with information about the sender of the current message, which varies in detail and degree based on how often the user interacts with this sender. We conducted a formative evaluation which compared three retrieval methods that used different context information: current message plus a multi-tiered user model; current message plus a single-tiered, aggregate user model; and lastly, current message only. Results indicate that the multi-tiered user modeling approach yields better retrieval performance than the other two. In addition, the study suggests that dynamically determining which sources to search, what query parameters to use, and how to filter/re-rank results can further improve the effectiveness of contextual IR.

Keynote address

The future of human/computer interfaces (p. 215)
  Ken Perlin
What will the interface between people and computers look like in five years? In ten years? In twenty five years? Will we still have screens? Keyboards? Will we all be seeing Princess Leia in a beam of light? Based on current trends and inspired guesswork, we will go together on a tour of the future.

Intelligent authoring and information presentation

Intelligent assistance for conversational storytelling using story patterns (pp. 217-226)
  Pei-Yu Chi; Henry Lieberman
People who are not professional storytellers usually have difficulty composing travel photos and videos from a mundane slideshow into a coherent and engaging story, even when it is about their own experiences. However, consider putting the same person in a conversation with a friend -- suddenly the story comes alive.
   We present Raconteur 2, a system for conversational storytelling that encourages people to make coherent points, by instantiating large-scale story patterns and suggesting illustrative media. It performs natural language processing in real-time on a text chat between a storyteller and a viewer and recommends appropriate media items from a library. Each item is annotated with one or a few sentences in unrestricted English. A large commonsense knowledge base and a novel commonsense inference technique are used to identify story patterns such as problem and resolution or expectation violation. It uses a concept vector representation that goes beyond keyword matching or word co-occurrence based techniques. A small experiment shows that people find Raconteur's interaction design engaging and its suggestions helpful for real-time storytelling.
TellMe: learning procedures from tutorial instruction (pp. 227-236)
  Yolanda Gil; Varun Ratnakar; Christian Fritz
This paper describes an approach to allow end users to define new procedures through tutorial instruction. Our approach allows users to specify procedures in natural language in the same way that they would instruct another person, while the system handles incompleteness and ambiguity inherent in natural human instruction and formulates follow up questions. We describe the key features of our approach, which include exposing prior knowledge, deductive and heuristic reasoning, shared learning state, and selectively asking questions to the user. We also describe how those key features are realized in our implemented TellMe system, and present preliminary user studies where non-programmers were able to easily specify complex multi-step procedures.
A formal framework for combining natural instruction and demonstration for end-user programming BIBAFull-Text 237-246
  Christian Fritz; Yolanda Gil
We contribute to the difficult problem of programming via natural language instruction. We introduce a formal framework that allows for the use of program demonstrations to resolve several types of ambiguities and omissions that are common in such instructions. The framework effectively combines some of the benefits of programming by demonstration and programming by natural instruction. The key idea of our approach is to use non-deterministic programs to compactly represent the (possibly infinite) set of candidate programs for given instructions, and to filter from this set by means of simulating the execution of these programs following the steps of a given demonstration. Due to the rigorous semantics of our framework we can prove that this leads to a sound algorithm for identifying the intended program, making assumptions only about the types of ambiguities and omissions occurring in the instruction. We have implemented our approach and demonstrate its ability to resolve ambiguities and omissions by considering a list of classes of such issues and how our approach resolves them in a concrete example domain. Our empirical results show that our approach can effectively and efficiently identify programs that are consistent with both the natural instruction and the given demonstrations.
METEOR: medical tutor employing ontology for robustness BIBAFull-Text 247-256
  Hameedullah Kazi; Peter Haddawy; Siriwan Suebnukarn
Problem-based learning (PBL) is becoming widely popular as an effective teaching method in medical education. Paying individual attention to a small group of students in medical PBL can place a burden on the workload of medical faculty, whose time is very costly. Intelligent tutoring systems offer a cost-effective alternative in helping to train students, but they are typically prone to brittleness and the knowledge acquisition bottleneck. Existing tutoring systems accept a small set of approved solutions for each problem scenario stored in the system. Plausible student solutions that lie outside the scope of the explicitly encoded ones receive little acknowledgment from the system. Tutoring hints are also confined to the knowledge space of the approved solutions, leading to brittleness in the tutoring approach. We report a tutoring system for medical PBL that employs the widely available medical knowledge source UMLS as the domain ontology. We exploit the structure of the ontology to expand the plausible solution space and generate hints based on the problem-solving context. Evaluation of student learning outcomes showed highly significant learning gains (Mann-Whitney, p<0.001).

Pen-based interfaces

Supporting an integrated paper-digital workflow for observational research BIBAFull-Text 257-266
  Nadir Weibel; Adam Fouse; Edwin Hutchins; James D. Hollan
The intertwining of everyday life and computation, along with a new generation of inexpensive digital recording devices and storage facilities, is revolutionizing our ability to collect and analyze human activity data. Such ubiquitous data collection has an exciting potential to augment human cognition and radically improve information-intensive work. In this paper we introduce a system to aid the process of data collection and analysis during observational research by providing non-intrusive automatic capture of paper-based annotations. The system exploits current note-taking practices and incorporates digital pen technology. We describe the development, deployment and use of the system for interactive visualization and annotation of multiple streams of video and other types of time-based data.
ChemInk: a natural real-time recognition system for chemical drawings BIBAFull-Text 267-276
  Tom Y. Ouyang; Randall Davis
We describe a new sketch recognition framework for chemical structure drawings that combines multiple levels of visual features using a jointly trained conditional random field. This joint model of appearance at different levels of detail makes our framework less sensitive to noise and drawing variations, improving accuracy and robustness. In addition, we present a novel learning-based approach to corner detection that achieves nearly perfect accuracy in our domain. The result is a recognizer that is better able to handle the wide range of drawing styles found in messy freehand sketches. Our system handles both graphics and text, producing a complete molecular structure as output. It works in real time, providing visual feedback about the recognition progress. On a dataset of chemical drawings our system achieved an accuracy rate of 97.4%, an improvement over the best reported results in literature. A preliminary user study also showed that participants were on average over twice as fast using our sketch-based system compared to ChemDraw, a popular CAD-based tool for authoring chemical diagrams. This was the case even though most of the users had years of experience using ChemDraw and little or no experience using Tablet PCs.
Text entry with KeyScretch BIBAFull-Text 277-286
  Gennaro Costagliola; Vittorio Fuccella; Michele Di Capua
KeyScretch is a recently proposed text entry method that uses gestures to input frequent word chunks on a menu-augmented soft keyboard. Each gesture is initiated on a key and is driven by the menu surrounding that key. In this paper we present the performance of an instance of the method with a 4-item menu, specifically designed for the Italian language. The study shows that the method is easy to learn and significantly outperforms the traditional tapping-based method on the QWERTY layout.

Posters

Context relevance assessment for recommender systems BIBAFull-Text 287-290
  Linas Baltrunas; Bernd Ludwig; Francesco Ricci
Research on context-aware recommender systems takes for granted that context matters, but attempts to show the influence of context have often failed. In this paper we consider the problem of quantitatively assessing context relevance. For this purpose we assume that users can imagine a situation described by a contextual feature and judge whether this feature is relevant for their decision-making task. We have designed a UI suited for acquiring such information in a travel planning scenario. In fact, this interface is generic and can also be used for other domains (e.g., music). The experimental results show that it is possible to identify the contextual factors that are relevant for the given task and that the relevance depends on the type of the place of interest to be included in the plan.
Who needs interaction anyway: exploring mobile playlist creation from manual to automatic BIBAFull-Text 291-294
  Dominikus Baur; Bernhard Hering; Sebastian Boring; Andreas Butz
Currently available user interfaces for playlist generation allow creating playlists in various ways, within a spectrum from fully automatic to fully manual. However, it is not entirely clear how users interact with such systems in the field and whether different situations actually demand different interfaces. In this paper we describe Rush 2, a music interface for mobile touch-screen devices that incorporates three interaction modes with varying degrees of automation: adding songs manually, adding them in quick succession using the rush interaction technique, or filling the playlist automatically. For all techniques, various filters can be set. In a two-week diary study (with in-depth interaction logging) we gained insight into how people interact with music in their everyday lives and how much automation and interactivity are really necessary.
Targeted risk communication for computer security BIBAFull-Text 295-298
  Jim Blythe; Jean Camp; Vaibhav Garg
Attacks on computer systems are rapidly becoming more numerous and more sophisticated, and current preventive techniques do not seem able to keep pace. Many successful attacks can be attributed to user errors: for example, while focused on other tasks, users may succumb to 'social engineering' attacks such as phishing or Trojan horses. Warnings about the danger of these attacks are often vaguely worded and given long before the dangers are realized, and are therefore too easy to ignore. However, we hypothesize that users are more likely to be persuaded by messages that (1) leverage mental models to describe the dangers, (2) describe particular vulnerabilities that the user may be exposed to and (3) are delivered close in time before the danger may actually be realized. We discuss the design and initial implementation of a system to achieve this. It first shows a video about a potential danger, then creates warnings tailored to the user's environment and given at the time they may be most useful, displaying a still frame or snippet from the video to remind the user of the potential danger. The system uses templates of user activities as input to a Markov logic network to recognize potentially risky behaviors. This approach can identify likely next steps that can be used to predict immediate danger and customize warnings.
Deducing answers to english questions from structured data BIBAFull-Text 299-302
  Daniel G. Bobrow; Cleo Condoravdi; Kyle Richardson; Richard Waldinger; Amar Das
We describe ongoing research using natural English text queries as an intelligent interface for inferring answers from structured data in a specific domain. Users can express queries whose answers need to be deduced from data in different databases, without knowing the structures of those databases or even the existence of the sources used. Users can pose queries incrementally, elaborating on an initial query, and ask follow-up questions based on answers to earlier queries.
   Inference in an axiomatic theory of the subject domain links the natural form in which the question is posed to the way relevant data is represented in a database, and composes information obtained from several databases into an answer to a complex question.
   We describe the status of a prototype system, called Quadri, for answering questions about HIV treatment, using the Stanford HIV Drug Resistance Database [8] and European resources. We discuss some of the problems that need to be solved to make this approach work, and some of our solutions.
DiG: a task-based approach to product search BIBAFull-Text 303-306
  Scott Carter; Francine Chen; Aditi S. Muralidharan; Jeremy Pickens
While there are many commercial systems to help people browse and compare products, these interfaces are typically product centric. To help users identify products that match their needs more efficiently, we instead focus on building a task centric interface and system. Based on answers to initial questions about the situations in which they expect to use the product, the interface identifies products that match their needs, and exposes high-level product features related to their tasks, as well as low-level information including customer reviews and product specifications. We developed semi-automatic methods to extract the high-level features used by the system from online product data. These methods identify and group product features, mine and summarize opinions about those features, and identify product uses. User studies verified our focus on high-level features for browsing products and low-level features and specifications for comparing products.
A reinforcement learning framework for answering complex questions BIBAFull-Text 307-310
  Yllias Chali; Sadid A. Hasan; Kaisar Imam
Scoring sentences in documents given abstract summaries created by humans is important in extractive multi-document summarization. In this paper, we use extractive multi-document summarization techniques to perform complex question answering and formulate it as a reinforcement learning problem. We use a reward function that measures the relatedness of the candidate (machine-generated) summary sentences with abstract summaries. In the training stage, the learner iteratively selects original document sentences to be included in the candidate summary, analyzes the reward function and updates the related feature weights accordingly. The final weights found in this phase are used to generate summaries as answers to complex questions given unseen test data. We use a modified linear, gradient-descent version of Watkins' Q(λ) algorithm with an ε-greedy policy to determine the best possible action, i.e., selecting the most important sentences. We compare the performance of this system with a Support Vector Machine (SVM) based system. Evaluation results show that the reinforcement method advances the SVM system, improving the ROUGE scores by about 28%.
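The training loop described above can be sketched in outline: pick a sentence ε-greedily under a linear value estimate, observe a reward, and apply a gradient-descent update to the feature weights. This is a generic linear TD sketch, not the authors' exact Q(λ) implementation (eligibility traces and the ROUGE-based reward are omitted); all names and parameters are illustrative.

```python
import random

def epsilon_greedy_select(candidates, weights, features, epsilon=0.1):
    """Pick the next summary sentence: explore with probability epsilon,
    otherwise exploit the current linear estimate w . phi(s)."""
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates,
               key=lambda s: sum(w * f for w, f in zip(weights, features[s])))

def q_update(weights, features_sa, reward, q_sa, q_next, alpha=0.01, gamma=1.0):
    """One linear gradient-descent TD update: w += alpha * delta * phi(s,a),
    where delta is the temporal-difference error."""
    delta = reward + gamma * q_next - q_sa
    return [w + alpha * delta * f for w, f in zip(weights, features_sa)]
```

In the paper's setting, the reward would come from the relatedness of the growing candidate summary to the human abstracts, and the learned weights are then reused on unseen questions.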
Users' eye gaze pattern in organization-based recommender interfaces BIBAFull-Text 311-314
  Li Chen; Pearl Pu
In this paper, we report the hotspots and gaze paths of users' eye movements on three different layouts for recommender interfaces. One is the standard list layout, as appears in most current recommender systems. The other two are variations of organization interfaces where recommended items are organized into categories and each category is annotated with a title. Gaze plots suggest that the organization interfaces, especially the quadrant layout, are likely to draw users' attention to more recommendations. In addition, more users chose products from the organization layouts. Combining these results with our prior work, we suggest a set of design guidelines and practical implications for our future work.
Eye activity as a measure of human mental effort in HCI BIBAFull-Text 315-318
  Siyuan Chen; Julien Epps; Natalie Ruiz; Fang Chen
The measurement of a user's mental effort is a problem whose solutions may have important applications to adaptive interfaces and interface evaluation. Previous studies have empirically shown links between eye activity and mental effort; however, these have usually investigated only one class of eye activity on tasks atypical of HCI. This paper reports on research into eight eye activity based features, spanning eye blink, pupillary response and eye movement information, for real-time mental effort measurement. Results from an experiment conducted using a computer-based training system show that the three classes of eye features are capable of discriminating different cognitive load levels. Correlation analysis between various pairs of features suggests that significant improvements in discriminating different effort levels can be made by combining multiple features. This is an initial step towards a real-time cognitive load measurement system in human-computer interaction.
Continuous marking menus for learning cursive pen-based gestures BIBAFull-Text 319-322
  Adrien Delaye; Rafik Sekkal; Eric Anquetil
In this paper, we present a new type of marking menu. Continuous Marking Menus are specifically dedicated to pen-based interfaces and are designed to define a set of cursive, realistic handwritten gestures. In menu mode, they offer continuous visual feedback and fluent exploration of the menu hierarchy, inviting the user to execute cursive gestures for invoking the desired commands. In marking mode, a specific gesture recognition method is proposed and shown to be very efficient for recognizing cursive gestures.
Want world domination? win at risk!: matching to-do items with how-tos from the web BIBAFull-Text 323-326
  Denny Vrandečić; Yolanda Gil; Varun Ratnakar
To-Do lists are widely used for personal task management. We propose a novel approach to assist users in managing their To-Dos by matching them to How-To knowledge from the Web. We have implemented a system that, given a To-Do item, provides a number of possibly matching How-Tos, broken down into steps that can be used as new To-Do entries. Our implementation is in the form of a web service that can be easily integrated into existing To-Do applications. This can help users by providing them with an approach to tackle the To-Do by listing smaller, more actionable To-Dos. In this paper we present our implementation, an evaluation of the matching component over two sets of To-Do corpora with very different characteristics, and a discussion of the results.
"Geremin": 2D microgestures for drivers based on electric field sensing BIBAFull-Text 327-330
  Christoph Endres; Tim Schwartz; Christian A. Müller
We introduce the "Geremin" approach to in-car 2D microgesture recognition, which belongs to the category of electric field sensing techniques that detect the presence of a human hand near a conductive object (unaffected by light and dynamic backgrounds, with fast response times). The core component is essentially a modified "Theremin", an early electronic musical instrument named after its Russian inventor, Professor Leon Theremin. Gesture recognition is done using a Dynamic Time Warping (DTW) algorithm. With respect to the application domain, we follow the direction of "selective mapping to theme or function" suggested in the literature. For gesture location, we propose the immediate proximity of the steering wheel, which has the advantage of providing gesture-based interaction without requiring the driver to take his or her hand(s) off the wheel. The major motivating factor for the proposed approach is reducing installation costs. Although we use a single-antenna setup for this study, our results indicate that the gain in recognition accuracy justifies the use of two or more.
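The DTW-based recognition step can be illustrated with the standard dynamic time warping distance over 1-D signal traces and nearest-template classification. This is a textbook sketch, not the Geremin implementation; the representation of the antenna signal as a 1-D sequence is an assumption.

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D traces:
    fill an (n+1) x (m+1) cost table where each cell extends the
    cheapest of the three predecessor alignments."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(trace, templates):
    """Nearest-template classification: return the label of the stored
    template with the smallest DTW distance to the input trace."""
    return min(templates, key=lambda label: dtw_distance(trace, templates[label]))
```

DTW's warping makes the match robust to gestures performed at different speeds, which matters for freehand microgestures near the wheel.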
How to serve soup: interleaving demonstration and assisted editing to support non-programmers BIBAFull-Text 331-334
  Melinda Gervasio; Will Haines; David Morley; Thomas J. Lee; C. Adam Overholtzer; Shahin Saadati; Aaron Spaulding
The Adept Task Learning system is an end-user programming environment that combines programming by demonstration and direct manipulation to support customization by non-programmers. Previously, Adept enforced a rigid procedure-authoring workflow consisting of demonstration followed by editing. However, a series of system evaluations with end users revealed a desire for more feedback during learning and more flexibility in authoring. We present a new approach that interleaves incremental learning from demonstration and assisted editing to provide users with a more flexible procedure-authoring experience. The approach relies on maintaining a "soup" of alternative hypotheses during learning, propagating user edits through the soup, and suggesting repairs as needed. We discuss the learning and reasoning techniques that support the new approach and identify the unique interaction design challenges they raise, concluding with an evaluation plan to resolve the design challenges and complete the improved system.
Personalized and automatic social summarization of events in video BIBAFull-Text 335-338
  John Hannon; Kevin McCarthy; James Lynch; Barry Smyth
Social services like Twitter are increasingly used to provide a conversational backdrop to real-world events in real time. Sporting events are a good example of this: this year, millions of users tweeted their comments as they watched the World Cup matches from around the world. In this paper, we look at using these time-stamped opinions as the basis for generating video highlights for soccer matches. We introduce the PASSEV system and describe and evaluate two basic summarization approaches.
Vision-based distance and position estimation of nearby objects for mobile spatial interaction BIBAFull-Text 339-342
  Clemens Holzmann; Matthias Hochgatterer
New mobile phone technologies are enablers for the emerging field of mobile spatial interaction, which refers to the direct access and manipulation of spatially-related information and services. Typical applications include the visualization of information about historical buildings or the discovery and selection of surrounding devices, by simply pointing to the real-world objects of interest. However, a major drawback is the required augmentation of the objects or knowledge about the environment, in order to be able to distinguish which object the user is actually aiming at. We address this issue by estimating the distance and position of arbitrary objects within a mobile phone's line of sight, solely based on the information provided by its on-board sensors. This new approach uses stereo vision to estimate the distance to nearby objects, inertial sensors to measure the displacement of the camera between successive images, as well as GPS and a digital compass to get its absolute position and orientation. In this paper, we focus on the vision-based estimation of distances, and present the results of an experiment which demonstrates its accuracy and performance.
A rule engine for relevance assessment in a contextualized information delivery system BIBAFull-Text 343-346
  Beibei Hu; Jan Hidders; Philipp Cimiano
In order to support police officers in their daily activities, we have designed a rule-based system which can deliver contextualized information to police officers, thus supporting decision making. In particular, we present a framework that has been designed on the basis of requirements elicited in a previous study, focusing on the rule language and the engine that essentially defines and allows one to configure the behaviour of the system. The rules consist of a body which specifies conditions that need to be fulfilled in a certain context. The head of the rules specifies how the relevance ratings of certain information items for specific users need to be updated given that the conditions in the body are met. On the basis of accumulated ratings, the system generates a user- and context-specific ranking of information items. Quantitative evaluations in terms of precision and recall with respect to a gold standard determined in cooperation with police officers show that the system can cater for the requirements of our end users and yields reasonable precision and recall values.
Enhancing recommendation diversity with organization interfaces BIBAFull-Text 347-350
  Rong Hu; Pearl Pu
Research increasingly indicates that accuracy cannot be the sole criterion in creating a satisfying recommender from the users' point of view. Other criteria, such as diversity, are emerging as important characteristics for consideration as well. In this paper, we address the problem of augmenting users' perception of recommendation diversity by applying an organization interface design method to the commonly used list interface. An in-depth user study was conducted to compare an organization interface with a standard list interface. Our results show that the organization interface indeed effectively increased users' perceived diversity of recommendations, especially perceived categorical diversity. Furthermore, 65% of users preferred the organization interface versus 20% for the list interface, and 70% of users thought the organization interface was better at helping them perceive recommendation diversity versus only 15% for the list interface.
Gesture variants and cognitive constraints for interactive virtual reality training systems BIBAFull-Text 351-354
  Stephanie Huette; Yazhou Huang; Marcelo Kallmann; Teenie Matlock; Justin L. Matthews
Two studies investigated the effect of environmental context on various parameters of pointing. The results revealed the need for extreme temporal precision and for efficient algorithms to parse out different styles of pointing. Most variability in pointing came from individual differences, and a method to classify the kind of point and derive its temporal parameters is discussed. These results and methods improve the practicality of virtual reality, making events appear more realistic by emphasizing temporal precision.
Capturing user reading behaviors for personalized document summarization BIBAFull-Text 355-358
  Hao Jiang; Songhua Xu; Francis Chi-Moon Lau
We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations that were captured during the user's past reading activities. We compare the performance of our algorithm with that of a few peer algorithms and software packages. The results of our comparative study show that our algorithm produces personalized document summaries superior to those of all the other methods, in that the summaries generated by our algorithm better satisfy a user's personal preferences.
IRL SmartCart -- a user-adaptive context-aware interface for shopping assistance BIBAFull-Text 359-362
  Gerrit Kahl; Lübomira Spassova; Johannes Schöning; Sven Gehring; Antonio Krüger
The electronic market has rapidly grown in the last few years. However, despite this success, consumers still enjoy visiting a "real store with real products". Therefore, various common technologies have been installed in supermarkets to support the customer's shopping process and experience. In this paper, we introduce the IRL SmartCart -- an instrumented shopping cart that acts as a user interface to support the shopping process. We show how RFID technology enables recognizing products that are put in the cart's basket. We are also able to determine the cart's position in an instrumented shopping environment. User input and visual output are possible by means of a touch screen, which is fitted in the IRL SmartCart's handle to support different tasks involved in the shopping process. We present and discuss different location- and context-based services that run on the cart interface system, e.g. a personalized shopping list sorted according to the products in the user's vicinity or a navigation service to different products that the customer is searching for.
DrawerFinder: finding items in storage boxes using pictures and visual markers BIBAFull-Text 363-366
  Mizuho Komatsuzaki; Koji Tsukada; Itiro Siio
We propose a novel search technique called DrawerFinder that helps users find commodities stored in two types of storage boxes (with drawers or lids) on a shelf. First, we attach visual markers inside these storage boxes. When a user opens a box, a camera attached above the shelf detects the visual marker and the system automatically takes a picture. Next, the picture is automatically uploaded to an online database tagged with the box ID. Users can then browse these pictures using a PC or cellular phone equipped with a common web browser. We believe our system helps users find items in boxes efficiently as they can browse both pictures of boxes and the surrounding environment (e.g., who last opened the storage box) quickly.
VisualWikiCurator: human and machine intelligence for organizing wiki content BIBAFull-Text 367-370
  Nicholas Kong; Ben Hanrahan; Thiébaud Weksteen; Gregorio Convertino; Ed H. Chi
Corporate wikis are affected by poor adoption rates. The high interaction costs required to organize and maintain information in these wikis are a key factor that limits broader adoption. We present VisualWikiCurator, a wiki extension designed to lower such costs by (a) recommending new content to easily update a wiki page, and (b) extracting structured data from the wiki page while providing new alternative visualizations of the data. The visualizations of extracted semantic data act both as alternative views and as tools to organize the page content. Since no information extraction algorithm is perfect with generic unstructured data, we use a mixed-initiative approach to allow users to refine machine-extracted metadata and easily re-organize the content in wiki pages.
Protractor3D: a closed-form solution to rotation-invariant 3D gestures BIBAFull-Text 371-374
  Sven Kratz; Michael Rohs
Protractor 3D is a gesture recognizer that extends the 2D touch screen gesture recognizer Protractor to 3D gestures. It inherits many of Protractor's desirable properties, such as high recognition rate, low computational and low memory requirements, ease of implementation, ease of customization, and low number of required training samples. Protractor 3D is based on a closed-form solution to finding the optimal rotation angle between two gesture traces involving quaternions. It uses a nearest neighbor approach to classify input gestures. It is thus well-suited for application in resource-constrained mobile devices. We present the design of the algorithm and a study that evaluated its performance.
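For intuition, the closed-form optimal-rotation idea is easiest to see in the original 2-D Protractor, which Protractor 3D generalizes to 3-D using quaternions. The sketch below computes the 2-D closed-form rotation angle and performs the nearest-neighbor step; it assumes gestures are already resampled, centered, and normalized, and it is an illustration rather than the Protractor 3D code.

```python
import math

def optimal_rotation_similarity(u, v):
    """Best-rotation similarity between two preprocessed 2-D gesture
    point lists, following the 2-D Protractor closed form: the angle
    maximizing the dot product is theta = atan2(b, a)."""
    a = sum(ux * vx + uy * vy for (ux, uy), (vx, vy) in zip(u, v))
    b = sum(ux * vy - uy * vx for (ux, uy), (vx, vy) in zip(u, v))
    theta = math.atan2(b, a)  # closed-form optimal rotation angle
    return a * math.cos(theta) + b * math.sin(theta)

def recognize(gesture, templates):
    """Nearest-neighbor classification over stored gesture templates."""
    return max(templates,
               key=lambda lbl: optimal_rotation_similarity(gesture, templates[lbl]))
```

Because the optimal angle has a closed form, no iterative search over rotations is needed, which is what keeps the recognizer cheap enough for resource-constrained mobile devices.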
Indoor positioning: challenges and solutions for indoor cultural heritage sites BIBAFull-Text 375-378
  Tsvi Kuflik; Joel Lanir; Eyal Dim; Alan Wecker; Michele Corrà; Massimo Zancanaro; Oliviero Stock
Museums are both appealing and challenging as an environment for indoor positioning research. By nature, they are dense and rich in objects and information, and as a result they contain more information than a visitor can absorb in a time-limited visit. Many research projects have explored the potential of novel technologies to support information delivery to museum visitors. Having an accurate visitor position is a key factor in the success of such projects. In spite of the numerous technologies that have been experimented with, there is no prevailing indoor positioning technology. Each technology has its benefits as well as its limitations. In addition, museums have their own constraints when it comes to installing sensors in their space. In the framework of the PIL project, a flexible "lightweight" proximity-based positioning system was developed and deployed at the Hecht Museum, and a general framework for indoor positioning is proposed. The inherent limitations of the basic technology are addressed by an abstract reasoning layer and by a dialog with the user.
Toward localizing audiences' gaze using a multi-touch electronic whiteboard with sPieMenu BIBAFull-Text 379-382
  Kazutaka Kurihara; Naoshi Nagano; Yuta Watanabe; Yuichi Fujimura; Akinori Minaduki; Hidehiko Hayashi; Yohei Tutiya
Direct-touch presentation devices such as touch-sensitive electronic whiteboards have two serious problems. First, the presenter's hand movements tend to distract the audience's attention from content. Second, the presenter's manipulation tends to obscure content. In this paper we propose "audience gaze localization" as an interface design paradigm to cope with these problems. The idea is to maximize the usability of the application for the presenter while minimizing the negative effects of the presenter's manipulations from the perspective of the audience. Based on this paradigm, we develop a new electronic whiteboard system that supports multi-touch gestures and employs a special pie menu interface named "sPieMenu". This pie menu is displayed under the presenter's palm and is thus invisible to the audience, while remaining visible to the presenter.
Augmenting the sound experience at music festivals using mobile phones BIBAFull-Text 383-386
  Jakob Eg Larsen; Arkadiusz Stopczynski; Jan Larsen; Claus Vesterskov; Peter Krogsgaard; Thomas Sondrup
In this paper we describe experiments carried out at the Nibe music festival in Denmark involving the use of mobile phones to augment the participants' sound experience at the concerts. The experiments involved N=19 test participants who used a mobile phone with a headset playing back sound received over FM from the PA audio mixer system. Based on the location of the participant (distance to the stage), a delay was estimated and introduced to the playback on the mobile phone in order to align the sound in the headset with that from the on-stage speakers. We report our findings from our initial "in-the-wild" experiments augmenting the sound experience at two concerts at this music festival.
Social and collaborative web search: an evaluation study BIBAFull-Text 387-390
  Kevin McNally; Michael P. O'Mahony; Barry Smyth; Maurice Coyle; Peter Briggs
In this paper we describe the results of a live-user study to demonstrate the benefits of using the social search utility HeyStaks, a novel approach to Web search that combines ideas from personalization and social networking to provide a more collaborative search experience.
Towards learned feedback for enhancing trust in information seeking dialogue for radiologists BIBAFull-Text 391-394
  Daniel Sonntag
Dialogue-based Question Answering (QA) in the context of information seeking applications is a highly complex user interaction task. QA systems normally include various natural language processing components (e.g., components for question classification and information extraction) and information retrieval components. This paper presents a new approach to equip a multimodal QA system for radiologists with some form of self-knowledge about the expected dialogue processing behaviour and the results themselves. The learned models are used to provide feedback on the QA process, i.e., on what the system is doing and what it delivers as results. The resulting automatic feedback behaviour should enhance the user's trust in the system. To this end, examples of the learned feedback are provided in the context of generating system-initiative dialogue feedback to a radiologist's questions.
A socially aware persuasive system for supporting conversations at the museum café BIBAFull-Text 395-398
  Massimo Zancanaro; Oliviero Stock; Daniel Tomasini; Fabio Pianesi
In this paper we propose a new type of system explicitly aimed at influencing immediate behavior in an informal, non-goal-oriented, co-located small group. The state of the group dynamics is automatically assessed so that the system can continuously plan and deploy minimalist strategies that use evocative means, rather than explicit recommendations, to influence behavior. A key aspect of our approach is that the main "interaction channel" is left for direct human-to-human interaction: the user is not expected to direct conscious elaboration effort or explicit actions toward the interface. We present the concept and a working implementation of a prototype system targeted at a museum scenario. The system takes the form of a table in the museum café and is aimed at inducing a group of friends to talk about the content of their museum visit.
A location-aware virtual character in a smart room: effects on performance, presence and adaptivity BIBAFull-Text 399-402
  Ning Tan; Gaëtan Pruvost; Matthieu Courgeon; Céline Clavel; Yacine Bellik; Jean-Claude Martin
Location-aware ambient environments are relevant to designing interactive virtual characters. However, they raise several questions about the spatial behaviors virtual characters should display, and about how users perceive virtual characters. We designed a location-aware virtual agent that adapts its spatial behavior to users' and objects' locations during a search task in a smart room. We conducted an experimental evaluation comparing this adaptive agent with an agent that neither perceives nor uses the locations of users and objects. The location-aware adaptive agent elicited higher levels of perceived presence and perceived adaptivity. Furthermore, performance was less influenced by task difficulty when users interacted with the adaptive agent. Such results can be useful for the design of future location-aware virtual agents.
Analyzing sketch content using in-air packet information BIBAFull-Text 403-406
  David Tausky; Edward Lank; Richard Mann; George Labahn
Recognizing hand drawn mathematical matrices on tablet computers has proven to be a particularly challenging task. While individual expression recognition can be simplified by assuming the entire content is a single semantic construct, a single math expression, a matrix is composed of multiple expressions arranged in rows and columns. These expressions must first be segmented into matrix elements, and then each individual matrix element expression must be recognized. In this work, we show how a simple algorithm on in-air (i.e. non-inking) strokes can be used to analyze the drawing order of a matrix. Once the drawing order is recognized, we show how outlier analysis on in-air packets gives rapid, reliable segmentation of matrix elements.
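The outlier analysis on in-air movements can be sketched as follows; this is a hypothetical illustration (the two-standard-deviation threshold and the function names are our assumptions, not the paper's actual algorithm): unusually long in-air pen travels are flagged as likely boundaries between matrix elements.

```python
import statistics

def element_boundaries(in_air_gaps):
    """Return indices of in-air movements whose extent is an outlier
    (here: more than two standard deviations above the mean); such
    unusually large pen travels are taken as matrix-element boundaries."""
    mean = statistics.mean(in_air_gaps)
    sd = statistics.pstdev(in_air_gaps)
    return [i for i, gap in enumerate(in_air_gaps) if gap > mean + 2 * sd]
```

For example, ten short gaps followed by one ten-times-larger gap yields a single boundary at the last index, while uniform gaps yield none.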
Automatically generating stories from sensor data BIBAFull-Text 407-410
  Joseph Reddington; Nava Tintarev
Recent research in Augmented and Alternative Communication (AAC) has begun to make use of Natural Language Generation (NLG) techniques. This creates an opportunity for constructing stories from sensor data, akin to existing work in life-logging. This paper examines the potential of using NLG to merge the AAC and life-logging domains. It proposes a four stage hierarchy that categorises levels of complexity of output text. It formulates a key subproblem of clustering sensor data into narrative events and describes three potential approaches for resolving this subproblem.
A knowledge base driven user interface for collaborative ontology development BIBAFull-Text 411-414
  Tania Tudorache; Natalya F. Noy; Sean M. Falconer; Mark A. Musen
Scientists and researchers often use ontologies to describe their data and to share and integrate data from heterogeneous sources. Ontologies are formal computer models that describe the main concepts and their relationships in a particular domain. Ontologies are usually authored by a community of users with different roles and levels of expertise. To support collaboration among distributed teams and to accommodate the distinct authoring requirements of each user role and of individual users, we designed a configurable Web-based ontology editor, WebProtege. WebProtege extends Protege, a widely used ontology editor with more than 150,000 registered users. The user interface layout and configuration of WebProtege is model-based and declarative: we represent it in a knowledge base, with an ontology defining its structure and linking the interface configuration to users, their roles, and access policies. We discuss how the knowledge-base-driven configuration of the user interface supports the reuse and modularization of layout configurations. Such configuration is also highly flexible and extensible, and is easier to manage than many traditional approaches.
Designing socially acceptable multimodal interaction in cooking assistants BIBAFull-Text 415-418
  Elena Vildjiounaite; Julia Kantorovitch; Vesa Kyllönen; Ilkka Niskanen; Mika Hillukkala; Kimmo Virtanen; Olli Vuorinen; Satu-Marja Mäkelä; Tommi Keränen; Johannes Peltola; Jani Mäntyjärvi; Andrew Tokmakoff
A cooking assistant is an application that must find a trade-off between providing efficient help to its users (e.g., reminding them to stir a meal if it is about to burn) and avoiding annoying them. This trade-off may vary across contexts, such as cooking alone or in a group, or cooking a new versus a familiar recipe. The results of the user study presented in this paper show which features of a multimodal interface users perceive as socially acceptable or unacceptable in different situations, and how this perception depends on the user's age.
Using linguistic and vocal expressiveness in social role recognition BIBAFull-Text 419-422
  Theresa Wilson; Gregor Hofer
In this paper, we investigate two types of expressiveness, linguistic and vocal, and whether they are useful for recognising the social roles of participants in meetings. Our experiments show that combining expressiveness features with speech activity does improve social role recognition over speech activity features alone.
Cognitive load evaluation of handwriting using stroke-level features BIBAFull-Text 423-426
  Kun Yu; Julien Epps; Fang Chen
This paper examines several writing features for the evaluation of cognitive load. Our analysis focuses on writing features within and between written strokes, including writing pressure, writing velocity, stroke length and inter-stroke movements. Based on a study of 20 subjects performing a sentence composition task, the reported findings reveal that writing pressure and writing velocity are very good indicators of cognitive load. A stroke selection threshold was investigated for constraining the feature extraction to long strokes, which resulted in a small further improvement. Differing from most previous research, which investigates cognitive load during writing based on task performance criteria, this work proposes a new approach to cognitive load measurement using writing dynamics, with the potential to enable new handwriting interfaces or improve existing ones.
ARCrowd -- a tangible interface for interactive crowd simulation BIBAFull-Text 427-430
  Feng Zheng; Hongsong Li
Manipulating a large virtual crowd in an interactive virtual reality environment is a challenging task due to the limitations of traditional user interfaces. To address this problem, a tangible interface based on augmented reality (AR) technology is introduced. With a novel interaction framework, users can manipulate the virtual characters directly, or control crowd behaviors with markers and gestures. Marker-gesture pairs are used to adjust environment factors, the decision-making processes of virtual crowds, and their reactions. The AR interface provides more intuitive means of control for the users, improving the efficiency of the user interface. Several simulation examples are provided to illustrate the various crowd control methods.

Demos

Spoken Web: creation, navigation and searching of VoiceSites BIBAFull-Text 431-432
  Sheetal K. Agarwal; Anupam Jain; Arun Kumar; Priyanka Manwani; Nitendra Rajput
The Spoken Web is a voice-based equivalent of the World Wide Web (WWW), developed by IBM Research Laboratory, India, primarily designed for rural and semi-urban people to provide information of value to them through their mobile or landline phones.
   This demonstration aims to present the use of VoiceSites for information creation and access for the rural population. VoiceSites can be accessed by calling a phone number; end-users can listen to the content over the phone and navigate through voice or the phone keypad. While one demonstration covers creating a VoiceSite, the other covers browsing a multitude of VoiceSites and demonstrates conducting transactions across VoiceSites. We will also present the mechanism for searching content on VoiceSites. These demonstrations will help present the concept of the Spoken Web, which we foresee can have the same effect on the developing-regions community that the WWW has had on the developed world over the last decade.
Multimodal conspicuity-enhancement for e-bikes: the hybrid reality model of environment transformation BIBAFull-Text 433-434
  Sandro Castronovo; Christoph Endres; Tobias Del Fabro; Nils Schnabel; Christian A. Müller
A prototypical conspicuity enhancement (CE) system for vulnerable road users (here, e-bikes) is described. We stress that CE is a form of multimodal output. We argue that previous CE approaches have the drawback of affecting uninvolved road users, and further that augmented reality as an alternative is error-prone because objects need to be tracked. Our system implements the hybrid reality modality model, where directed information emanates from the objects themselves, so no object recognition or tracking is needed. We describe the components of a functional demonstrator based on standard-compliant car-to-car communication components.
Multimodal local search in Speak4it BIBAFull-Text 435-436
  Patrick Ehlen; Michael Johnston
Speak4it is a consumer-oriented mobile search application that leverages multimodal input and output to allow users to search for and act on local business information. It supports true multimodal integration, where user inputs can be distributed over multiple input modes. In addition to specifying queries by voice, e.g., "bike repair shops near the golden gate bridge", users can combine speech and gesture; for example, "gas stations" + <route drawn on display> will return the gas stations along the route traced on the display. We describe the underlying multimodal architecture and some challenges of supporting multimodal interaction as a deployed mobile service.
A personalized recipe advice system to promote healthful choices BIBAFull-Text 437-438
  Gijs Geleijnse; Peggy Nachtigall; Pim van Kaam; Luciënne Wijgergangs
We present a prototype of a personalized recipe advice system, which helps its users make health-aware meal choices based on past selections. To stimulate the adoption of a healthier lifestyle, a goal-setting mechanism is applied in combination with personalized recipe suggestions.
Slanting existing text with Valentino BIBAFull-Text 439-440
  Marco Guerini; Carlo Strapparava; Oliviero Stock
In this paper we present a tool for valence shifting of natural language texts, named Valentino (VALENced Text INOculator). Valentino can modify existing textual expressions towards more positively or negatively valenced versions. To this end we built specific resources, gathering valenced terms that are semantically or contextually connected to the original one, and implemented strategies that use these resources in the substitution process. Valentino is meant to be a modular component. It is non-domain specific and it requires as its input a coefficient that represents the desired valence for the final expression.
Mail2Wiki: posting and curating Wiki content from email BIBAFull-Text 441-442
  Benjamin V. Hanrahan; Thiebaud Weksteen; Nicholas Kong; Gregorio Convertino; Guillaume Bouchard; Cedric Archambeau; Ed H. Chi
Enterprise wikis commonly see low adoption rates, preventing them from reaching the critical mass that is needed to make them valuable. The high interaction cost of contributing content to these wikis is a key factor impeding adoption. Much of the collaboration among knowledge workers continues to occur in email, which causes useful information to stay siloed in personal inboxes. In this demo we present Mail2Wiki, a system that enables easy contribution and initial curation of content from the personal space of email to the shared repository of a wiki.
Photo search in a personal photo diary by drawing face position with people tagging BIBAFull-Text 443-444
  Heung-Nam Kim; Abdulmotaleb El Saddik; Kee-Sung Lee; Yeon-Ho Lee; Geun-Sik Jo
In recent years, people tend to maintain personal photos in digital spaces, not only to share their experiences with friends but also to jog their own memory. Effective tools are therefore crucial to meet the growing need for recording one's daily life. In this study, we have developed a complete personal photo diary system, namely MePTory. With a friendly user interface, users can easily maintain personal episodes and memories with photos. We also support a flexible method for photo search based on the position of facial appearances, which enables users to access episodes quickly. By integrating face detection and recognition technologies, as well as a friendly UI, MePTory offers diverse functionalities for annotating and searching photos.
TactileFace: a system for enabling access to face photos by visually-impaired people BIBAFull-Text 445-446
  Nan Li; Zheshen Wang; Jesus Yuriar; Baoxin Li
Face photos (portraits) play an important role in people's social and emotional life. Unfortunately, this type of media is inaccessible to people with visual impairment. We propose a novel prototypical system designed to demonstrate the feasibility of bridging this accessibility gap through automatic, real-time conversion of face images into their tactile counterparts. Unlike conventional tactile conversion efforts, which are largely done by human specialists, the proposed system serves as an intelligent interface between blind computer users and this important type of media, human face images.
Modeling information fit: a tool for interface design BIBAFull-Text 447-448
  Christopher A. Miller; Jeffrey M. Rye; Peggy Wu
This demonstration illustrates a computational analysis method using information theoretic attributes to quantitatively characterize information need for task performance, as well as information conveyed by a candidate display. Once represented in this way, various computational algorithms can provide a "mismatch" analysis of the "fit" between the two. This approach has been implemented in a prototype user interface analysis tool: MAID (Multi-modal Aid for Interface Design). MAID supports user interface design and redesign in response to procedure revisions for NASA applications. MAID has been demonstrated on examples of designs for the International Space Station, Space Shuttle (both current interfaces and proposed upgrades) and for hypothetical designs for the Crew Exploration Vehicle Orion. At least one of these examples will be presented.
MEANS: moving effective assonances for novice students BIBAFull-Text 449-450
  Gözde Özbal; Carlo Strapparava
Vocabulary acquisition constitutes a crucial but difficult and time-consuming step in learning a foreign language. Several teaching methods aim to facilitate this step by providing learners with various verbal and visual tips. However, building systems based on these methods is generally very costly, since it requires substantial resources such as time, money and human labor. In this paper, we introduce a fully automated vocabulary teaching system that uses state-of-the-art natural language processing (NLP) and information retrieval (IR) techniques. For each foreign word the user wants to learn, the system automatically produces memorization tips including keywords, sentences, colors, textual animations and images.
Ontology-based information visualization in integrated UIs BIBAFull-Text 451-452
  Heiko Paulheim; Lars Meyer
This demo presents the Semantic Data Explorer, which visualizes linked data contained in different integrated applications. It presents a conceptual graphical view of data, which can be combined with user interfaces of legacy applications to facilitate a hybrid view.
MOCCA -- a system that learns and recommends visual preferences based on cultural similarity BIBAFull-Text 453-454
  Katharina Reinecke; Patrick Minder; Abraham Bernstein
We demonstrate our culturally adaptive system MOCCA, which is able to automatically adapt its visual appearance to the user's national culture. Rather than only adapting to one nationality, MOCCA takes into account a person's current and previous countries of residences, and uses this information to calculate user-specific preferences. In addition, the system is able to learn new, and refine existing adaptation rules from users' manual modifications of the user interface based on a collaborative filtering mechanism, and from observing the user's interaction with the interface.
A relevant image search engine with late fusion: mixing the roles of textual and visual descriptors BIBAFull-Text 455-456
  Franco M. Segarra; Luis A. Leiva; Roberto Paredes
A fundamental problem in image retrieval is how to improve text-based retrieval systems, a problem known as "bridging the semantic gap": relying on visual similarity to judge semantic similarity may be problematic because of the gap between low-level content and higher-level concepts. One way to overcome this problem, and thus increase retrieval performance, is to consider user feedback in an interactive scenario. In our approach, a user starts a query and is then presented with a set of (hopefully) relevant images, from which she selects those most relevant. The system then refines its results after each iteration, using late fusion methods, and allows the user to dynamically tune the amount of textual and visual information used to retrieve similar images. We describe how our approach fits in a real-world setting and also discuss an evaluation of results.
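A user-tunable mix of textual and visual relevance can be sketched as a simple weighted combination; this is a hypothetical illustration (the linear combination and all names are our assumptions — the paper's actual late fusion methods may differ):

```python
def late_fusion_score(text_score: float, visual_score: float,
                      alpha: float = 0.5) -> float:
    """Fuse per-image textual and visual relevance scores.
    alpha = 1.0 relies only on text, alpha = 0.0 only on visual content."""
    return alpha * text_score + (1.0 - alpha) * visual_score

def rank_images(scores: dict, alpha: float = 0.5) -> list:
    """Rank image ids by fused score, best first.
    scores maps image_id -> (text_score, visual_score)."""
    return sorted(scores,
                  key=lambda i: late_fusion_score(scores[i][0], scores[i][1], alpha),
                  reverse=True)
```

Sliding alpha toward 1.0 favors images that match the textual query; sliding it toward 0.0 favors images visually similar to those the user marked as relevant.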
Macademia: semantic visualization of research interests BIBAFull-Text 457-458
  Shilad Sen; Henry Charlton; Ryan Kerwin; Jeremy Lim; Brandon Maus; Nathaniel Miller; Megan R. Naminski; Alex Schneeman; Anthony Tran; Ernesto Nunes; E. Isaac Sparling
The Macademia website promotes faculty collaboration and research by visualizing faculty research interests as a dynamic "constellation." Semantic similarity inference algorithms power the site's visualization, allowing users to spatially browse related research interests and the researchers who have those interests.
Interactive paper for radiology findings BIBAFull-Text 459-460
  Daniel Sonntag; Marcus Liwicki; Markus Weber
This paper presents a pen-based interface for clinical radiologists. It is of utmost importance in future radiology practices that radiology reports be uniform, comprehensive, and easily managed: reports must be "readable" to humans and machines alike. To improve reporting practices, we allow the radiologist to write structured reports with a special pen on normal paper. Handwriting recognition and interpretation software interprets the written report, which is transferred into an ontological representation. The resulting report is then stored in a semantic backend system for further use. We focus on the pen-based interface and on new interaction possibilities with gestures in this scenario.

Tutorials

Tutorial / What every IUI researcher should know about human choice and decision making BIBAFull-Text 461
  Anthony Jameson
There are many reasons why a researcher in the area of intelligent user interfaces (IUI) might decide not to attend this tutorial: 1. I didn't take a tutorial at my last conference. 2. I'm not the kind of person who takes tutorials at conferences. 3. Hardly any of my friends take tutorials. 4. It's not the policy of my organization for researchers to take tutorials. 5. The topic does not seem at first glance to be one that I need to pay attention to. 6. The cost of taking this tutorial is immediate and clear, but the benefits would be spread out over years, and I'm not sure what they would be, or how large. 7. I'm feeling impatient, and I want to move on to the next page.
   But here's the catch: Every one of the reasons listed above can also cause users to disregard the recommendations of the researcher's novel recommender system, reject the adaptations of their adaptive interface, or generally make choices concerning the use of IUI innovations that seem wrong to the researchers who developed them and maybe to the users, if they took the trouble to reconsider them.
   In other words: IUI researchers do need to know something about human choice and decision making. Just consulting common knowledge or reading a textbook isn't enough, since the most IUI-relevant concepts and results are scattered across a number of subareas of research on judgment and decision making, learning and habitual behavior, and strategies for influencing choices; and their relevance to IUI issues is hardly ever made explicit.
   This new tutorial aims to fill this gap: It presents relevant concepts and insights from psychological research with explicit reference to issues that IUI researchers typically encounter. With the help of the take-home supplementary material, participants will be able to understand and predict more realistically (though less confidently) the choices that users make about or with the help of their systems. And they can try in more imaginative ways to influence these choices. They might also think of a fresh research issue for their next project proposal.
Tutorial / HCI for recommender systems: a tutorial BIBAFull-Text 463
  Joseph A. Konstan
This tutorial is an introduction to the concepts and techniques from human-computer interaction, focused on designing usable interfaces. Topics covered include user and task analysis, prototyping, design techniques, interface evaluation, and various processes for designing interfaces. While the overall content mirrors general user interface design courses, a section will focus specifically on challenges in designing intelligent systems (including adaptive systems, recommender systems, agent-based interfaces, etc.), including an exploration of the metaphors and paradigms of interaction with intelligent systems.
   The tutorial is intended for those who do not have an HCI background (it would be redundant with a typical undergraduate course on the topic), and it is focused on practical techniques that could be applied when designing recommender systems for end users.

Workshops

IUI 2011 workshop: sketch recognition BIBAFull-Text 465-466
  Tracy Anne Hammond; Aaron Adler
The 2011 IUI workshop on Sketch Recognition was a daylong event held on February 13th, 2011 in Palo Alto, California. The workshop consisted of an invited talk by Dr. Sharon Oviatt, several short and long talks, a student research poster session, and a series of discussions about pertinent topics in and about the field of sketch recognition.
2nd international workshop on semantic models for adaptive interactive systems (SEMAIS 2011) BIBAFull-Text 467-468
  Tim Hussein; Stephan Lukosch; Juergen Ziegler; Heiko Paulheim; Gaelle Calvary
The International Workshop on Semantic Models for Adaptive Interactive Systems (SEMAIS 2011) aims to identify emerging trends in interactive system design and execution using semantic models.
2nd international workshop on intelligent user interfaces for developing regions: IUI4DR BIBAFull-Text 469-470
  Sheetal K. Agarwal; Tim Paek; Nitendra Rajput; Bill Thies
Information Technology (IT) has had a significant impact on society and has touched all aspects of our lives. Until now, computers and expensive devices have fueled this growth, resulting in several benefits to society. The challenge now is to take this success of IT to its next level, where IT services can be accessed by users in developing regions. The focus of the 2011 workshop is to identify alternative sources of intelligence and use them to ease the process of interacting with information technology. We would like to explore the different modalities, their usage by the community, the intelligence that can be derived from that usage, and finally the design implications for the user interface. We would also like to explore how people in developing regions react to collaborative technologies and/or use collaborative interfaces that require community support to build knowledge bases (for example, Wikipedia) or to enable effective navigation of content and access to services.
Workshop on context-awareness in retrieval and recommendation BIBAFull-Text 471-472
  Ernesto William De Luca; Alan Said; Matthias Böhmer; Florian Michahelles
Context-aware information is widely available in various forms, such as interaction patterns, location, devices, annotations, query suggestions and user profiles, and is becoming more and more important for enhancing retrieval performance and recommendation results. The main issue to cope with is not only recommending or retrieving the most relevant items and content, but defining them ad hoc. Other relevant issues are personalizing and adapting the information, and the way it is displayed, to the user's current situation (device, location) and interests.
MIAA 2011: multimodal interaction for the intelligent environment car BIBAFull-Text 473-474
  Christoph Endres; Gerrit Meixner; Christian A. Müller
Automotive development has been dominated by the constraints of driving. However, natural relations exist to the more general area of Intelligent User Interfaces, and previous research in related fields should therefore be adopted and included. The aim of the 2011 MIAA workshop is to foster discussion between experts in otherwise unrelated fields of research. For example, public interfaces use crossmodal referencing to circumvent restrictions imposed by the user's current focus on a limited communication channel. Our aim is to raise awareness of this approach, concluding that crossmodal references in the car are helpful to bridge the gap between information inside the car and the environment. Another focus topic of the workshop is eco-friendly driving. Although universally regarded as a necessity, it remains an open question how to encourage drivers to drive ecologically. We discuss, for example, rewarding eco-friendly driving by making it competitive and game-like.
Visual interfaces to the social and semantic web (VISSW 2011) BIBAFull-Text 475-476
  Siegfried Handschuh; Lora Aroyo; VinhTuan Thai
The large amount of data created, published and consumed by users on the Social and Semantic Web raises significant and exciting research challenges such as data integration as well as effective access to and navigation across heterogeneous data sources on different platforms. Building on the success of the VISSW 2009 and 2010 workshops, the IUI2011 workshop on Visual Interfaces to the Social and Semantic Web aims to bring together researchers and practitioners from different fields to discuss the latest research results and challenges in designing, implementing, and evaluating intelligent interfaces supporting access, navigation and publishing of different types of contents on the Social and Semantic Web. This paper outlines the context of the workshop.
IUI 2011 workshop on location awareness for mixed and dual reality (LAMDa) BIBAFull-Text 477-478
  Gerrit Kahl; Tim Schwartz; Petteri Nurmi; Boris Brandherm; Eyal Dim; Andreas Forsblom
The LAMDa workshop aims at discussing the impact of Dual Reality (DR) and Mixed Reality (MR) on location awareness and other applications in smart environments. Virtual environments -- which are an essential part of dual and mixed realities -- can be used to create new applications and to enhance already existing applications in the real world. On the other hand, existing sensors in the real world can be used to enhance the virtual world as well. The combination of both worlds can be well illustrated by location-based services, such as location-based advertising.
2nd workshop on eye gaze in intelligent human machine interaction BIBAFull-Text 479-480
  Yukiko Nakano; Cristina Conati; Thomas Bader
This workshop addresses a wide range of issues concerning eye gaze: recognizing the user's gaze, generating gaze behaviors in conversational humanoids, analyzing human attentional behaviors while interacting with IUIs, and evaluating gaze-based IUIs. Through a comprehensive discussion, the workshop aims at bringing together researchers with different backgrounds and establishing an interdisciplinary research community in "attention aware interactive systems".
Workshop on interacting with smart objects BIBAFull-Text 481-482
  Melanie Hartmann; Daniel Schreiber; Kris Luyten; Oliver Brdiczka; Max Mühlhäuser
The number of smart objects in our everyday life is steadily increasing. In this workshop we discuss how the interaction with these smart objects should be designed from various perspectives.
Personalized access to cultural heritage (PATCH 2011) BIBAFull-Text 483-484
  Lora Aroyo; Fabian Bohnert; Tsvi Kuflik; Johan Oomen
This workshop focuses on the specific challenges for personalization in the cultural heritage setting from the point of view of user interaction and visitor experience. It investigates how the user interface -- the contact point of visitors and systems -- can become more intelligent by means of personalization. Overall, the workshop aims at attracting presentations of novel ideas for addressing these challenges and the current state of the art in this field.