
Proceedings of the 2006 International Conference on Intelligent User Interfaces

Fullname: International Conference on Intelligent User Interfaces
Editors: Cecile L. Paris; Candace L. Sidner
Location: Sydney, Australia
Dates: 2006-Jan-29 to 2006-Feb-01
Standard No: ISBN 1-59593-287-9; ACM Order Number: 608060; ACM DL: Table of Contents; hcibib: IUI06
Links: Conference Home Page
  1. Invited talks
  2. Workshops
  3. Tutorials
  4. Gestural input
  5. Natural language in the interface
  6. Personal assistants I
  7. Recommendations I
  8. Multimedia and multimodality
  9. Ubiquitous computing
  10. Question answering
  11. Personal assistants II
  12. Adaptation to users
  13. Recommendations II
  14. Short papers

Invited talks

Interactive humanoids and androids as ideal interfaces for humans BIBAFull-Text 2-9
  Hiroshi Ishiguro
We, humans, anthropomorphize targets of communication. In this sense, humanoids and androids can serve as ideal interfaces for humans. This paper focuses on two new fundamental issues in human interface studies. There are two relationships between robots and humans: one is inter-personal and the other is social. In inter-personal relationships, the appearance of the robot is a new and important research issue. In social relationships, a function to recognize human relationships through interaction is needed for robots of the next generation. These two issues explore new possibilities of androids and humanoids. In particular, the appearance problem bridges science and engineering. The approach from robotics tries to build very humanlike robots based on knowledge from cognitive science. The approach from cognitive science uses the robot for verifying hypotheses for understanding humans. We call this cross-interdisciplinary framework android science.
Meaningful interfaces in immersive environments BIBAFull-Text 10-11
  Jeffrey Shaw
While generic user interfaces are ubiquitous and customarily bland, the idiosyncratic interfaces developed in art practice over the last decades are significant because of their ability to embody meaning.

Workshops

Cognitive prostheses and assisted communication BIBAFull-Text 14
  Norman Alm; Shinji Abe; Noriaki Kuwahara
This workshop offers the opportunity for researchers in the fields of assistive technology, cognitive psychology, user interface design and context-awareness to present the state of the art in each field and to discuss an approach and a research agenda for realizing effective cognitive prostheses.
Workshop W2: multi-user and ubiquitous user interfaces (MU3I 2006) BIBAFull-Text 15
  Andreas Butz; Christian Kray; Antonio Kruger; Carsten Schwesig
The main objective of the third workshop on Multi-User and Ubiquitous User Interfaces (MU3I 2006) is to bring together people with relevant backgrounds (e.g. interface design, CSCW, ubiquitous computing) to discuss two key questions in this field: How can we build interfaces that span multiple devices so that the user knows they can be used to control a specific application? How can we build interfaces for public displays? The main outcome of the workshop is expected to consist of further insights into these problems, potential solutions, and a research agenda for investigating them further.
Intelligent user interfaces for intelligence analysis BIBFull-Text 16
  Michelle X. Zhou; Mark Maybury
Workshop on effective multimodal dialogue interfaces BIBAFull-Text 17
  Lawrence Cavedon; Robert Dale; Fang Chen; David Traum
This workshop addresses the issue of evaluating multimodal dialogue systems, and in particular the characteristics and interaction styles that are particularly effective for human-machine collaborative task performance.

Tutorials

Introduction to human-robot interaction BIBAFull-Text 20
  Jean Scholtz; Holly A. Yanco; Jill L. Drury
This tutorial presents the current status of research in interactions with robots, including adaptive robots/interfaces, speech, gestures, virtual reality, and social interactions. Different user interface designs will be shown and discussed during the tutorial. Human-robot interaction (HRI) guidelines, evaluation methodologies and metrics currently used by the community will be presented. Research needs will also be discussed. Participants will work in small groups to design a robotic application as well as an evaluation plan.
Interfaces everywhere: interacting with the pervasive computer BIBAFull-Text 21
  Alois Ferscha; Clemens Holzmann; Michael Leitner
Due to recent technological advances, it has become possible to integrate sensor and actuator technologies as well as wireless communication into everyday objects and environments. These developments open up a wide range of innovative interaction scenarios involving new forms of user interfaces. This half-day tutorial gives an overview of the emerging field of everywhere interfaces, referring to computing devices that disappear within objects of everyday life and thus enable omnipresent physical interfaces to the digital world; describes the state of the art of sensor and actuator technologies; and demonstrates the development of a smart artefact for controlling everyday environments.
Constructive dialogue management for speech-based interaction systems BIBAFull-Text 22
  Kristiina Jokinen
The tutorial will introduce the major topics, established practices and methodologies in dialogue management research. Evaluation criteria and usability aspects for useful and enjoyable interactive systems will also be discussed. The tutorial is based on the framework of Constructive Dialogue Management, and focuses especially on the technological and theoretical challenges in designing adaptive and intelligent conversational systems.

Gestural input

Posture and activity silhouettes for self-reporting, interruption management, and attentive interfaces BIBAFull-Text 24-31
  Alejandro Jaimes
In this paper we present a novel system for monitoring a computer user's posture and activities in front of the computer (e.g., reading, speaking on the phone, etc.) for self-reporting. In our system, a camera and a microphone are placed in front of a computer work area (e.g., on top of the computer screen). The system can be used as a component in an attentive interface, or for giving the user real-time feedback on the quality of his or her current posture and generating summaries of postures and activities over a specified period of time (e.g., hours, days, months, etc.). All elements of the system are highly customizable: the user decides what "good" postures are, what alarms and interruptions are triggered, if any, and what activity and posture summaries are generated. We present novel algorithms for posture measurement (using geometric features of the user's silhouette) and activity classification (using machine learning). Finally, we present experiments that show the feasibility of our approach.
Head gesture recognition in intelligent interfaces: the role of context in improving recognition BIBAFull-Text 32-38
  Louis-Philippe Morency; Trevor Darrell
Acknowledging an interruption with a nod of the head is a natural and intuitive communication gesture which can be performed without significantly disturbing a primary interface activity. In this paper we describe vision-based head gesture recognition techniques and their use for common user interface commands. We explore two prototype perceptual interface components which use detected head gestures for dialog box confirmation and document browsing, respectively. Tracking is performed using stereo-based alignment, and recognition proceeds using a trained discriminative classifier. An additional context learning component is described, which exploits interface context to obtain robust performance. User studies with prototype recognition components indicate quantitative and qualitative benefits of gesture-based confirmation over conventional alternatives.
Eye-tracking to model and adapt to user meta-cognition in intelligent learning environments BIBAFull-Text 39-46
  Christina Merten; Cristina Conati
In this paper we describe research on using eye-tracking data for on-line assessment of user meta-cognitive behavior during the interaction with an intelligent learning environment. We describe the probabilistic user model that processes this information, and its formal evaluation. We show that adding eye-tracker information significantly improves the model accuracy on assessing user exploration and self-explanation behaviors.

Natural language in the interface

Taking advantage of the situation: non-linguistic context for natural language interfaces to interactive virtual environments BIBAFull-Text 47-54
  Michael Fleischman; Eduard Hovy
We introduce a framework for learning situated Natural Language Interfaces (NLIs) to interactive virtual environments. The framework exploits the non-linguistic context, or situation, explicitly modeled in such interactive applications. This situation model is integrated with a model of word meaning in a principled manner using a noisy channel approach to language understanding. Preliminary experimentation in an independently designed interactive application, i.e. the Mission Rehearsal Exercise (MRE), shows that this situated NLI outperforms a state of the art NLI on both whole frame accuracy and F-Score metrics. Further, use of the situation model in the situated NLI is shown to increase robustness to the noise introduced by the use of automatic speech recognition.
Three phase verification for spoken dialog clarification BIBAFull-Text 55-61
  Sangkeun Jung; Cheongjae Lee; Gary Geunbae Lee
Spoken dialog tasks incur many errors, including speech recognition errors, understanding errors, and even dialog management errors. These errors create a large gap between the user's intent and the system's understanding, and eventually result in misinterpretation. To close this gap, participants in human-to-human dialog try to clarify the major causes of the misunderstanding and selectively correct them. This paper presents a method for applying these human clarification techniques to human-machine spoken dialog systems. To increase error detection precision and error recovery efficiency for the clarification dialogs, error detection is organized into three systematic phases, and a clarification expert is devised to recover the errors using this three-phase verification. The experimental results demonstrate that the three-phase verification effectively catches word- and utterance-level errors, increasing SLU (spoken language understanding) performance, and that the clarification experts can substantially increase the dialog success rate and dialog efficiency.
Automatic prediction of misconceptions in multilingual computer-mediated communication BIBAFull-Text 62-69
  Naomi Yamashita; Toru Ishida
Multilingual communities using machine translation to overcome language barriers are appearing with increasing frequency. However, when a large number of translation errors get mixed into conversations, users have difficulty fully understanding each other. In this paper, we focus on misconceptions found in high volume in actual online conversations conducted via machine translation. We first examine the response patterns in machine translation-mediated communication and associate them with misconceptions. Analysis results indicate that response messages that include misconceptions tend to be incoherent, often focusing on short phrases of the original message. Next, based on these results, we propose a method that automatically predicts the occurrence of misconceptions in each dialogue. The proposed method assesses the likelihood that each dialogue includes misconceptions by calculating the gaps between the regular discussion thread (syntactic thread) and the discussion thread based on lexical cohesion (semantic thread). Verification results show a significant positive correlation between actual misconception frequency and the gaps between syntactic and semantic threads, which indicates the validity of the method.

Personal assistants I

Automatically classifying emails into activities BIBAFull-Text 70-77
  Mark Dredze; Tessa Lau; Nicholas Kushmerick
Email-based activity management systems promise to give users better tools for managing increasing volumes of email by organizing email according to a user's activities. Current activity management systems do not automatically classify incoming messages by the activity to which they belong, instead relying on simple heuristics (such as message threads) or asking the user to manually classify incoming messages as belonging to an activity. This paper presents several algorithms for automatically recognizing emails as part of an ongoing activity. Our baseline methods are the use of message reply-to threads to determine activity membership and a naive Bayes classifier. Our SimSubset and SimOverlap algorithms compare the people involved in an activity against the recipients of each incoming message. Our SimContent algorithm uses IRR (a variant of latent semantic indexing) to classify emails into activities using similarity based on message contents. An empirical evaluation shows that each of these methods provides a significant improvement over the baseline methods. In addition, we show that a combined approach that votes over the predictions of the individual methods performs better than any individual method alone.
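The people-comparison idea behind algorithms like SimOverlap can be sketched simply: score each activity by how much its participant set overlaps the recipients of an incoming message. The sketch below is a hypothetical simplification, not the paper's exact algorithm; all names and data are invented for illustration.

```python
# Toy recipient-overlap activity matching: assign a message to the
# activity whose known participants best overlap its recipients.

def overlap_score(activity_people, message_recipients):
    """Jaccard overlap between an activity's participants and a message's recipients."""
    a, m = set(activity_people), set(message_recipients)
    if not a or not m:
        return 0.0
    return len(a & m) / len(a | m)

def classify_message(activities, recipients):
    """Return the activity name whose participant set best overlaps the recipients."""
    return max(activities, key=lambda name: overlap_score(activities[name], recipients))

activities = {
    "budget-review": ["alice", "bob", "carol"],
    "conf-planning": ["dave", "erin"],
}
print(classify_message(activities, ["alice", "carol", "frank"]))  # budget-review
```

A real system would combine such a score with thread and content evidence, as the combined voting approach in the paper suggests.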
Linking messages and form requests BIBAFull-Text 78-85
  Anthony Tomasic; John Zimmerman; Isaac Simmons
Large organizations with sophisticated infrastructures have large form-based systems that manage the interaction between the user community and the infrastructure. In many cases, when a user needs to complete a form to accomplish a task, the user e-mails a description of the task to the appropriate form expert. In many cases this description is incomplete and the expert engages in a clarification dialog to determine the details of the task. Since many tasks and descriptions are routine, this e-mail dialog can be replaced with an intelligent user interface. The interface proactively reads e-mail (or IM) messages and assists the user in completing the associated task without involving the expert. To ground our vision in a specific application, we have built an agent that functions as a webmaster assistant. For example, a user emails the request: "Change John Doe's home phone number to 800-555-1212" to the agent. The webmaster agent then replies with the biographical data form displaying information about John Doe with the new phone number pre-filled in the form. The user then simply approves the change.
   In this paper we describe a prototype website maintenance agent that (i) allows users to express the updates they want to make in human terms (free-text expression of intent), and (ii) allows users to quickly repair any inference errors the agent makes. In addition, we present the results of a proof-of-concept study showing that interacting with a webmaster agent that makes inference errors is both more efficient (faster) and more effective (fewer errors introduced to the site) than sending a request to a human webmaster. We conclude the paper with a discussion of the application of our work to any form-based system.
A hybrid learning system for recognizing user tasks from desktop activities and email messages BIBAFull-Text 86-92
  Jianqiang Shen; Lida Li; Thomas G. Dietterich; Jonathan L. Herlocker
The TaskTracer system seeks to help multi-tasking users manage the resources that they create and access while carrying out their work activities. It does this by associating with each user-defined activity the set of files, folders, email messages, contacts, and web pages that the user accesses when performing that activity. The initial TaskTracer system relies on the user to notify the system each time the user changes activities. However, this is burdensome, and users often forget to tell TaskTracer what activity they are working on. This paper introduces TaskPredictor, a machine learning system that attempts to predict the user's current activity. TaskPredictor has two components: one for general desktop activity and another specifically for email. TaskPredictor achieves high prediction precision by combining three techniques: (a) feature selection via mutual information, (b) classification based on a confidence threshold, and (c) a hybrid design in which a Naive Bayes classifier estimates the classification confidence but where the actual classification decision is made by a support vector machine. This paper provides experimental results on data collected from TaskTracer users.
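The hybrid design described above (a confidence estimator gating a separate decision model) can be illustrated with a toy sketch. The stub "models" below are invented stand-ins, not TaskPredictor's trained Naive Bayes and SVM components:

```python
# Toy confidence-gated hybrid classifier: one model supplies a
# confidence score; a second model makes the actual decision only
# when that confidence clears a threshold, otherwise we abstain.

def hybrid_predict(x, nb_model, svm_model, threshold=0.8):
    """Predict a task label for feature vector x, or None (abstain)
    when the confidence estimate falls below the threshold."""
    conf, _ = nb_model(x)          # NB-style model estimates confidence only
    if conf < threshold:
        return None                # abstain: not confident enough
    return svm_model(x)            # SVM-style model makes the decision

# Hypothetical stubs standing in for trained classifiers:
nb = lambda x: (0.9, "email-triage") if sum(x) > 1 else (0.4, "unknown")
svm = lambda x: "email-triage" if x[0] else "paper-writing"

print(hybrid_predict([1, 1], nb, svm))  # email-triage
print(hybrid_predict([0, 1], nb, svm))  # None (abstains)
```

Abstaining on low-confidence inputs is what lets such a predictor trade coverage for the high precision the paper reports.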

Recommendations I

Trust building with explanation interfaces BIBAFull-Text 93-100
  Pearl Pu; Li Chen
Based on our recent work on the development of a trust model for recommender agents and a qualitative survey, we explore the potential of building users' trust with explanation interfaces. We present the major results from the survey, which provide a roadmap identifying the most promising areas for investigating design issues for trust-inducing interfaces. We then describe a set of general principles, derived from an in-depth examination of various design dimensions for constructing explanation interfaces, that contribute most to trust formation. Finally, we present the results of a sizable user study, which indicate that the organization-based explanation is highly effective in building users' trust in the recommendation interface, with the added benefits of increasing users' intention to return to the agent and saving cognitive effort.
Is trust robust?: an analysis of trust-based recommendation BIBAFull-Text 101-108
  John O'Donovan; Barry Smyth
Systems that adapt to input from users are susceptible to attacks from those same users. Recommender systems are common targets for such attacks since there are financial, political and many other motivations for influencing the promotion or demotion of recommendable items [2].
   Recent research has shown that incorporating trust and reputation models into the recommendation process can have a positive impact on the accuracy and robustness of recommendations. In this paper we examine the effect of using five different trust models in the recommendation process on the robustness of collaborative filtering in an attack situation. In our analysis we also consider the quality and accuracy of recommendations. Our results caution that including trust models in recommendation can either reduce or increase prediction shift for an attacked item depending on the model-building process used, while highlighting approaches that appear to be more robust.
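Prediction shift, mentioned above, is a standard robustness measure for recommender-system attacks: the average change in an item's predicted rating before and after attack profiles are injected. A minimal sketch (the data below is invented):

```python
# Prediction shift for an attacked item: mean per-user change in the
# item's predicted rating after attack profiles are injected.

def prediction_shift(pre_attack, post_attack):
    """Mean per-user change in predicted rating for one item.
    Both arguments map user -> predicted rating."""
    assert pre_attack.keys() == post_attack.keys()
    n = len(pre_attack)
    return sum(post_attack[u] - pre_attack[u] for u in pre_attack) / n

pre  = {"u1": 3.0, "u2": 2.5, "u3": 4.0}
post = {"u1": 4.0, "u2": 4.5, "u3": 4.5}   # after a hypothetical push attack
print(prediction_shift(pre, post))  # ~1.17
```

A large positive shift on a pushed item indicates a successful attack; the paper's finding is that trust models can move this quantity in either direction.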
Detecting noise in recommender system databases BIBAFull-Text 109-115
  Michael P. O'Mahony; Neil J. Hurley; Guenole C. M. Silvestre
In this paper, we propose a framework that enables the detection of noise in recommender system databases. We consider two classes of noise: natural and malicious noise. The issue of natural noise arises from imperfect user behaviour (e.g. erroneous/careless preference selection) and the various rating collection processes that are employed. Malicious noise concerns the deliberate attempt to bias system output in some particular manner. We argue that both classes of noise are important and can adversely affect recommendation performance. Our objective is to devise techniques that enable system administrators to identify and remove from the recommendation process any such noise that is present in the data. We provide an empirical evaluation of our approach and demonstrate that it is successful with respect to key performance indicators.
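One common approach to natural-noise detection (a generic sketch, not necessarily this paper's exact framework) is to flag ratings that deviate too far from what the user's other ratings predict. Here the "prediction" is simply the user's mean rating, for illustration:

```python
# Toy natural-noise detector: flag a rating as suspect when it
# deviates from the user's mean rating by more than a threshold.

def flag_noisy(ratings, threshold=1.5):
    """Return (user, item) pairs whose rating deviates more than
    `threshold` from that user's mean rating.
    `ratings` maps user -> {item: rating}."""
    flagged = []
    for user, items in ratings.items():
        mean = sum(items.values()) / len(items)
        for item, r in items.items():
            if abs(r - mean) > threshold:
                flagged.append((user, item))
    return flagged

ratings = {"u1": {"a": 4, "b": 4, "c": 1}, "u2": {"a": 3, "b": 3}}
print(flag_noisy(ratings))  # [('u1', 'c')]
```

A real detector would use a proper rating predictor rather than the user mean, but the deviate-and-threshold structure is the same.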

Multimedia and multimodality

Enabling context-sensitive information seeking BIBAFull-Text 116-123
  Michelle X. Zhou; Keith Houck; Shimei Pan; James Shaw; Vikram Aggarwal; Zhen Wen
Information seeking is an important but often difficult task, especially when it involves large and complex data sets. We hypothesize that a context-sensitive interaction paradigm would greatly assist users in their information seeking. Such a paradigm would allow users to both express their requests and receive requested information in context. Driven by this hypothesis, we have taken rigorous steps to design, develop, and evaluate a full-fledged, context-sensitive information system. We started with a Wizard-of-Oz (WOZ) study to verify the effectiveness of our envisioned system. We then built a fully automated system based on the findings from our WOZ study. We targeted the development and integration of two sets of technologies: context-sensitive multimodal input interpretation and multimedia output generation. Finally, we formally evaluated the usability of our system in real-world conditions. The results show that our system greatly improves the users' ability to perform practical information-seeking tasks. These results not only confirm our initial hypothesis, but they also indicate the practicality of our approaches.
Interactive multimedia summaries of evaluative text BIBAFull-Text 124-131
  Giuseppe Carenini; Raymond T. Ng; Adam Pauls
We present an interactive multimedia interface for automatically summarizing large corpora of evaluative text (e.g. online product reviews). We rely on existing techniques for extracting knowledge from the corpora but present a novel approach for conveying that knowledge to the user. Our system presents the extracted knowledge in a hierarchical visualization mode as well as in a natural language summary. We propose a method for reasoning about the extracted knowledge so that the natural language summary can include only the most important information from the corpus. Our approach is interactive in that it allows the user to explore the original dataset through intuitive visual and textual methods. Results of a formative evaluation of our interface show general satisfaction among users with our approach.
A conceptual framework for developing adaptive multimodal applications BIBAFull-Text 132-139
  Carlos Duarte; Luis Carrico
This article presents FAME, a model-based Framework for Adaptive Multimodal Environments. FAME proposes an architecture for adaptive multimodal applications, a new way to represent adaptation rules -- the behavioral matrix -- and a set of guidelines to assist the design process of adaptive multimodal applications. To demonstrate FAME's validity, the development process of an adaptive Digital Talking Book player is summarized.

Ubiquitous computing

Direct manipulation of user interfaces for migration BIBAFull-Text 140-147
  Jose Pascual Molina Masso; Jean Vanderdonckt; Pascual Gonzalez Lopez
From a topological model of a working environment, MigriXML automatically generates a virtual reality environment for controlling the run-time migration of a graphical user interface from one computing platform to another (e.g., from a desktop to a pocket computer) and from one interaction surface to another (e.g., from a laptop to a wall screen). For this purpose, any user interface subject to migration is described in the USer Interface eXtensible Markup Language with respect to its look & feel as well as the platforms and surfaces involved in the migration. Each interface, in part or in whole, can be attached to a platform or a surface, detached from it, and migrated across platforms or interaction surfaces. Instead of communicating data and code during the migration, the description of the user interface of concern is wirelessly passed from one platform to another and regenerated on the target platform. To ensure continuous control of the run-time migration, MigriXML automatically generates a world model representing the context of use, in which the source/target platforms and interaction surfaces are represented. Finally, migrating a user interface becomes as natural as direct manipulation from one platform to another, exactly as it is done on a single platform.
Structuralizing digital ink for efficient selection BIBAFull-Text 148-154
  Xiang Ao; Junfeng Li; Xugang Wang; Guozhong Dai
Raw digital ink is informal and unstructured. Its editing, especially its selection, is often inefficient. In this paper, we present approaches to structuralize raw digital ink as multiple hierarchies to facilitate its selection. First a link model is built to organize ink as a mesh-like structure. Based on the link model, the isolated stroke groups form patches. In each patch, textual and graphical areas are separated. Then, each textual area is segmented into text lines, and each text line is partitioned to words. We also design gestures for selecting structured ink. Experiments showed that our ink-structuralizing approaches are effective and selecting structured ink by our gestures considerably outperforms selecting raw ink.

Question answering

Deriving quantitative overviews of free text assessments on the web BIBAFull-Text 155-162
  Timothy Chklovski
Many research efforts are addressing the problem of enabling automatic summarization of opinions and assessments stated on the web in product reviews, discussion forums, and blogs. One key difficulty is that relevant assessments scattered throughout web pages are obscured by variations in natural language. In this paper, we focus on a novel aspect: enabling aggregation of assessments of the degree to which a given property holds for a given entity (for instance, how touristy Boston is). We present GrainPile, a user interface for extracting from the web, aggregating, and quantifying degree assessments of unconstrained topics. The interface provides a variety of functions: a) identification of dimensions of comparison (properties) relevant to a particular entity or set of entities, b) comparison of like entities on user-specified properties (for example, which university is more prestigious, Yale or Cornell), and c) tracing the derived opinions back to their sources (so that the reasons for the opinions can be found). A central contribution in GrainPile is the evaluated demonstration of the feasibility of mapping recognized expressions (such as fairly, very, extremely, and so on) to a common scale of numerical values and aggregating across all the extracted assessments to derive an overall assessment of degree. GrainPile's novel assessment and aggregation of degree expressions is shown to strongly outperform an interpretation-free, co-occurrence based method.
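The core mapping-and-aggregation idea can be sketched in a few lines. The scale values below are invented for illustration; GrainPile derives its mapping empirically rather than by fiat:

```python
# Toy degree aggregation: map intensity modifiers to a common numeric
# scale, then average across all extracted assessments of an
# entity/property pair (e.g. how "touristy" a city is).

DEGREE_SCALE = {"slightly": 0.2, "fairly": 0.4, "quite": 0.6,
                "very": 0.8, "extremely": 1.0}

def aggregate_degree(assessments):
    """Average the numeric degrees of the recognized modifier words;
    return None when no modifier is recognized."""
    values = [DEGREE_SCALE[w] for w in assessments if w in DEGREE_SCALE]
    return sum(values) / len(values) if values else None

# e.g. web snippets describing Boston as "fairly", "very", "very" touristy:
print(aggregate_degree(["fairly", "very", "very"]))  # ~0.67
```

Aggregating over many noisy extractions is what makes the overall degree estimate robust to any single misleading snippet.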
Towards intelligent QA interfaces: discourse processing for context questions BIBAFull-Text 163-170
  Mingyu Sun; Joyce Y. Chai
Question answering (QA) systems take users' natural language questions and retrieve relevant answers from large repositories of free texts. Despite recent progress in QA research, most work on question answering is still focused on isolated questions. In a real-world information seeking scenario, questions are not asked in isolation, but rather in a coherent manner that involves a sequence of related questions to meet users' information needs. Therefore, to support coherent information seeking, intelligent QA interfaces will inevitably require techniques to support context question answering. To address this problem, this paper investigates approaches to discourse processing of a sequence of coherent questions and their implications on query expansion. In particular, we examine three models for query expansion that are motivated by Centering Theory. Our empirical results indicate that more sophisticated processing based on discourse transitions and centers can significantly improve the performance of document retrieval compared to models that only resolve references.
An intelligent discussion-bot for answering student queries in threaded discussions BIBAFull-Text 171-177
  Donghui Feng; Erin Shaw; Jihie Kim; Eduard Hovy
This paper describes a discussion-bot that provides answers to students' discussion board questions in an unobtrusive and human-like way. Using information retrieval and natural language processing techniques, the discussion-bot identifies the questioner's interest, mines suitable answers from an annotated corpus of 1236 archived threaded discussions and 279 course documents and chooses an appropriate response. A novel modeling approach was designed for the analysis of archived threaded discussions to facilitate answer extraction. We compare a self-out and an all-in evaluation of the mined answers. The results show that the discussion-bot can begin to meet students' learning requests. We discuss directions that might be taken to increase the effectiveness of the question matching and answer extraction algorithms. The research takes place in the context of an undergraduate computer science course.

Personal assistants II

Fewer clicks and less frustration: reducing the cost of reaching the right folder BIBAFull-Text 178-185
  Xinlong Bao; Jonathan L. Herlocker; Thomas G. Dietterich
Helping computer users rapidly locate files in their folder hierarchies has become an important research topic in today's intelligent user interface design. This paper reports on FolderPredictor, a software system that can reduce the cost of locating files in hierarchical folders. FolderPredictor applies a cost-sensitive prediction algorithm to the user's previous file access information to predict the next folder that will be accessed. Experimental results show that, on average, FolderPredictor reduces the cost of locating a file by 50%. Another advantage of FolderPredictor is that it does not require users to adapt to a new interface, but rather meshes with the existing interface for opening files on the Windows platform.
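The cost-sensitive idea above can be made concrete with a toy sketch. Here the "cost" of a prediction is taken to be the number of clicks (tree distance) from the predicted folder to the one the user actually wants, and the predictor minimizes expected cost over recent accesses. This is an invented simplification of FolderPredictor's algorithm:

```python
# Toy cost-sensitive folder prediction: folders are path tuples,
# cost = clicks between two folders (up to the common prefix, then
# down), and we pick the folder minimizing expected cost over the
# empirical distribution of recent folder accesses.

def tree_distance(a, b):
    """Clicks to navigate from folder a to folder b."""
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return (len(a) - common) + (len(b) - common)

def predict_folder(recent_accesses):
    """Choose the candidate folder with minimum total click cost."""
    candidates = set(recent_accesses)
    return min(candidates,
               key=lambda c: sum(tree_distance(c, f) for f in recent_accesses))

recent = [("home", "papers"), ("home", "papers"), ("home", "code")]
print(predict_folder(recent))  # ('home', 'papers')
```

Minimizing expected navigation cost, rather than just predicting the most frequent folder, is what distinguishes a cost-sensitive predictor: a folder near many recent targets can win even if it was never the single most common one.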
Who's asking for help?: a Bayesian approach to intelligent assistance BIBAFull-Text 186-193
  Bowen Hui; Craig Boutilier
Automated software customization is drawing increasing attention as a means to help users deal with the scope, complexity, potential intrusiveness, and ever-changing nature of modern software. The ability to automatically customize functionality, interfaces, and advice to specific users is made more difficult by the uncertainty about the needs of specific individuals and their preferences for interaction. Following recent probabilistic techniques in user modeling, we model our user with a dynamic Bayesian network (DBN) and propose to explicitly infer the "user's type" -- a composite of personality and affect variables -- in real time. We design the system to reason about the impact of its actions given the user's current attitudes. To illustrate the benefits of this approach, we describe a DBN model for a text-editing help task. We show, through simulations, that user types can be inferred quickly, and that a myopic policy offers considerable benefit by adapting to both different types and changing attitudes. We then develop a more realistic user model, using behavioural data from 45 users to learn model parameters and the topology of our proposed user types. With the new model, we conduct a usability experiment with 4 users and 4 different policies. These experiments, while preliminary, show encouraging results for our adaptive policy.
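Inferring a discrete "user type" from observed behaviour can be illustrated with a static Bayesian update, a toy simplification of the paper's dynamic Bayesian network. The types and likelihood numbers below are invented:

```python
# Toy Bayesian inference of a user type from help-seeking behaviour:
# posterior ∝ likelihood × prior, renormalized after each observation.

# P(observation | type): how often each hypothetical type asks for help
LIKELIHOOD = {"novice": {"help": 0.7, "no_help": 0.3},
              "expert": {"help": 0.1, "no_help": 0.9}}

def update(prior, observation):
    """One Bayes step over the discrete type variable."""
    post = {t: LIKELIHOOD[t][observation] * p for t, p in prior.items()}
    z = sum(post.values())
    return {t: v / z for t, v in post.items()}

belief = {"novice": 0.5, "expert": 0.5}
for obs in ["help", "help", "no_help"]:
    belief = update(belief, obs)
print(max(belief, key=belief.get))  # novice
```

A DBN additionally lets the type and affect variables drift over time, which is how the paper's system adapts to changing attitudes rather than converging once and for all.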
SWISH: semantic analysis of window titles and switching history BIBAFull-Text 194-201
  Nuria Oliver; Greg Smith; Chintan Thakkar; Arun C. Surendran
Information workers are often involved in multiple tasks and activities that they must perform in parallel or in rapid succession. In consequence, task management itself becomes yet another task that information workers need to perform in order to get the rest of their work done. Recognition of this problem has led to research on task management systems, which can help by allowing fast task switching, fast task resumption, and automatic task identification. In this paper we focus on the latter: we tackle the problem of automatically detecting the tasks that the user is involved in, by identifying which of the windows on the user's desktop are related to each other. The underlying assumption is that windows that belong to the same task share some common properties with one another that we can detect from data. We will refer to this problem as the task assignment problem.
   To address this problem, we have built a prototype named Swish that: (1) constantly monitors users' desktop activities using a stream of window events; (2) logs and processes this raw event stream; and (3) implements two criteria of window "relatedness", namely the semantic similarity of their titles, and the temporal closeness in their access patterns.
   In addition to describing the Swish prototype in detail, we validate it with 4 hours of user data, obtaining task classification accuracies of about 70%. We also discuss our plans on including Swish in a number of intelligent user interfaces and future lines of research.
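To make the two relatedness criteria concrete, a minimal sketch of how such cues might be combined is shown below. This is illustrative only, not the Swish implementation: the bag-of-words cosine for title similarity, the exponential decay for temporal closeness, and the equal weighting are all assumptions.

```python
from collections import Counter
from math import sqrt, exp

def title_similarity(a, b):
    """Cosine similarity over bag-of-words title tokens (illustrative stand-in
    for the paper's semantic analysis of window titles)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def temporal_closeness(t_a, t_b, tau=60.0):
    """Decays with the gap (in seconds) between the access times of two
    windows; tau is a hypothetical time scale."""
    return exp(-abs(t_a - t_b) / tau)

def relatedness(title_a, title_b, t_a, t_b, w=0.5):
    """Weighted combination of the two cues (hypothetical weighting)."""
    return (w * title_similarity(title_a, title_b)
            + (1 - w) * temporal_closeness(t_a, t_b))

print(relatedness("report draft - Word", "report figures - Excel", 10.0, 25.0))
```

A real system would use richer semantic analysis of titles than token overlap, but the weighted combination mirrors the two cues the abstract names.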

Adaptation to users

Augmentation-based learning: combining observations and user edits for programming-by-demonstration BIBAFull-Text 202-209
  Daniel Oblinger; Vittorio Castelli; Lawrence Bergman
In this paper we introduce a new approach to Programming-by-Demonstration in which the user is allowed to explicitly edit the procedure model produced by the learning algorithm while demonstrating the task. We describe a new algorithm, Augmentation-Based Learning, that supports this approach by considering both demonstrations and edits as constraints on the hypothesis space, and resolving conflicts in favor of edits.
Interactive learning of structural shape descriptions from automatically generated near-miss examples BIBAFull-Text 210-217
  Tracy Hammond; Randall Davis
Sketch interfaces provide more natural interaction than the traditional mouse and palette tool, but can be time consuming to build if they have to be built anew for each new domain. A shape description language, such as the LADDER language we created, can significantly reduce the time necessary to create a sketch interface by enabling automatic generation of the interface from a domain description. However, structural shape descriptions, whether written by users or created automatically by the computer, are frequently over- or under-constrained. We present a technique to debug over- and under-constrained shapes using a novel form of active learning that generates its own suspected near-miss examples. Using this technique we implemented a graphical debugging tool for use by sketch interface developers.
Recognizing user interest and document value from reading and organizing activities in document triage BIBAFull-Text 218-225
  Rajiv Badi; Soonil Bae; J. Michael Moore; Konstantinos Meintanis; Anna Zacchi; Haowei Hsieh; Frank Shipman; Catherine C. Marshall
People frequently must sort through large sets of documents to identify useful materials, for example, when they look through web search results. This document triage process may involve both reading and organizing, possibly using different applications for each activity. Users' interests may be inferred from what they read and how they interact with individual documents; these interests may in turn be used as a basis for identifying other documents or document elements of potential interest within the set. To most effectively identify related documents of interest, activity data must be collected from all applications used in document triage. In this paper we present a common framework (the Interest Profile Manager) for collecting and analyzing user interest. We also present models for detecting user interest based on reading activity alone, on organizing activity alone, and on combined reading and organizing activity. A study comparing document value calculated using the different models shows that incorporating interest information from both reading and organizing activity better predicted users' valuation of documents. This difference was statistically significant when compared to using reading activity alone.
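The combined model described above amounts to fusing evidence from two activity streams into one document value. The following sketch is hypothetical — the scoring functions, caps, and weights are assumptions for illustration, not the Interest Profile Manager's actual models:

```python
def reading_interest(dwell_seconds, max_dwell=300.0):
    """Normalize reading dwell time into [0, 1] (hypothetical 5-minute cap)."""
    return min(dwell_seconds, max_dwell) / max_dwell

def organizing_interest(moves, annotations):
    """More drag/annotate events on a document suggest higher interest
    (hypothetical per-event weights)."""
    return min(1.0, 0.2 * moves + 0.3 * annotations)

def document_value(dwell_seconds, moves, annotations, w_read=0.5):
    """Combined model: weighted sum of reading and organizing evidence."""
    return (w_read * reading_interest(dwell_seconds)
            + (1 - w_read) * organizing_interest(moves, annotations))
```

The point of the structure, matching the study's finding, is that a document touched by both activities scores higher than either stream alone would predict.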
A goal-oriented interface to consumer electronics using planning and commonsense reasoning BIBAFull-Text 226-233
  Henry Lieberman; Jose Espinosa
We are reaching a crisis in the design of user interfaces for consumer electronics. Flashing 12:00 time indicators, push-and-hold buttons, and interminable modes and menus are all symptoms of trying to maintain a one-to-one correspondence between functions and physical controls, which becomes hopeless as the number of capabilities of devices grows. We propose instead to orient interfaces around the goals that users have for the use of devices.
   We present Roadie, a user interface agent that provides intelligent context-sensitive help and assistance for a network of consumer devices. Roadie uses a Commonsense knowledge base to map between user goals and functions of the devices, and an AI partial-order planner to provide mixed-initiative assistance with executing multi-step procedures and debugging help when things go wrong.

Recommendation 2

Debugging user interface descriptions of knowledge-based recommender applications BIBAFull-Text 234-241
  Alexander Felfernig; Kostyantyn Shchekotykhin
The complexity of product assortments offered by e-Commerce platforms requires intelligent sales assistance systems that ease the retrieval of solutions fitting the wishes and needs of a customer. Knowledge-based recommender applications meet these requirements by allowing the calculation of personalized solutions based on an explicit representation of product, marketing and sales knowledge stored in an underlying recommender knowledge base. Unfortunately, in many cases faulty models of recommender user interfaces are defined by knowledge engineers, and no automated support for debugging such process designs is available. This paper presents an approach to the automated debugging of faulty process designs of knowledge-based recommenders, which increases the productivity of user interface development and maintenance. The approach has been implemented for a knowledge-based recommender environment within the scope of the Koba4MS project.
Social summarization of text feedback for online auctions and interactive presentation of the summary BIBAFull-Text 242-249
  Yoshinori Hijikata; Hanako Ohno; Yukitaka Kusumura; Shogo Nishida
Buyers in online auctions write feedback comments about the sellers from whom they have bought items. Other bidders read them to determine which item to bid for. In this research, we aim to help bidders by summarizing the feedback comments. First, we examine feedback comments in online auctions. From the results of this examination, we propose a method called social summarization, which uses social relationships in online auctions to summarize feedback comments. We implement a system based on our method and evaluate its effectiveness. Finally, we propose an interactive presentation method for the summaries based on the results of the evaluation.
Automatic construction of personalized customer interfaces BIBAFull-Text 250-257
  Bob Price; Russ Greiner; Gerald Haubl; Alden Flatt
Interface personalization can improve a user's performance and subjective impression of interface quality and responsiveness. Personalization is difficult to implement as it requires an accurate model of a user's intentions and a formal model of how an interface meets a user's need. We present a novel model for tractable inference of consumer intentions in the context of grocery shopping. The model makes unique use of a priori temporal relations to simplify inference. We then present a simple interface generation framework that was inspired by viewing user interface interaction as a channel coding problem. The resulting model defines a simplified but clear notion of a user's utility for an interface. We demonstrate the effectiveness of the research prototype on some simple data, and explain how the model can be augmented with richer user modeling to create a deployable application.

Short papers

What's on tonight: user-centered and situation-aware proposals for TV programmes BIBAFull-Text 258-260
  Bernd Ludwig; Stefan Mandl; Sebastian von Mammen
This paper presents an approach to exploit free text descriptions of TV programmes as available from EPG data sets for a TV recommender system that takes the content of programmes into account. The paper focuses on the natural language understanding problem underlying the analysis of free text descriptions and on methods of classifying free text descriptions with respect to a natural language user query. We close with an evaluation of user acceptance and a discussion of future work.
Mixing robotic realities BIBAFull-Text 261-263
  Mauro Dragone; Thomas Holz; Gregory M. P. O'Hare
This paper contends that Mixed Reality (MR) offers a potential solution for achieving transferability between Human Computer Interaction (HCI) and Human Robot Interaction (HRI). Virtual characters (possibly of a robotic genre) can offer highly expressive interfaces that are as convincing as a human, are comparably cheap and can be easily adapted and personalized. We introduce the notion of a mixed reality agent, i.e. an agent consisting of a physical robotic body and a virtual avatar displayed upon it. We realized an augmented reality interface with a Head-Mounted Display (HMD) in order to interact with such systems and conducted a pilot study to demonstrate the usefulness of mixed reality agents in human-robot collaborative tasks.
Splitting rules for graceful degradation of user interfaces BIBAFull-Text 264-266
  Murielle Florins; Francisco Montero Simarro; Jean Vanderdonckt; Benjamin Michotte
This paper addresses the problem of the graceful degradation of user interfaces where an initial interface is transferred to a smaller platform. It presents a technique for pagination of interaction spaces (e.g., windows, dialog boxes, web pages) based on a multi-layer specification in the user interface description language UsiXML. We first describe how an interaction space can be split using information from the presentation layer (Concrete User Interface). We then show how information from higher abstraction levels (Abstract User Interface, Task model) can be used to refine the process. This technique belongs to a collection of transformation rules that have been developed to adapt a user interface to smaller, more constrained displays.
Group recommender systems: a critiquing based approach BIBAFull-Text 267-269
  Kevin McCarthy; Maria Salamo; Lorcan Coyle; Lorraine McGinty; Barry Smyth; Paddy Nixon
Group recommender systems introduce a whole set of new challenges for recommender systems research. The notion of generating a set of recommendations that will satisfy a group of users, with potentially competing interests, is challenging in itself. In addition to this we must consider how to record and combine the preferences of many different users as they engage in simultaneous recommendation dialogs. In this paper we introduce a group recommender system that is designed to provide assistance to a group of friends trying to plan a skiing vacation.
Creating multiplatform user interfaces by annotation and adaptation BIBAFull-Text 270-272
  Yun Ding; Heiner Litz
This paper presents our novel framework, which creates user interfaces (UIs) for a variety of devices by annotating and reusing an existing UI originally designed for large devices. It distinguishes itself from previous work by its unique combination of reusing existing UIs, intuitive graphical support and an adaptation-based approach. It is extensible, allowing UI developers to build and integrate their customized transformation strategies into our framework.
Evaluating stories in narrative-based interfaces BIBAFull-Text 273-275
  Daniel Goncalves; Joaquim A. Jorge
Traditional ways to help users organize and retrieve their documents don't scale well, nor do they properly handle non-textual documents. This paper evaluates narrative-based interfaces as a natural and effective alternative for document retrieval. We have identified what shape document-describing stories take, and what contents to expect. This led to an interface that is able to capture stories, and a knowledge-based infrastructure to understand them. A prototype of the interface was used to validate narrative-based interfaces, with emphasis on story accuracy. To this end, we collected thirty stories whose contents were then compared to the documents they portrayed. Results allow us to conclude that, for the most part, such stories are trustworthy enough to allow humans to retrieve documents reliably (81%-91% of all information is correct). We also confirmed that stories told to a computer are similar to those told to human interviewers.
Topic modeling in fringe word prediction for AAC BIBAFull-Text 276-278
  Keith Trnka; Debra Yarrington; Kathleen McCoy; Christopher Pennington
Word prediction can be used for enhancing the communication ability of persons with speech and language impairments. In this work, we explore two methods of adapting a language model to the topic of conversation, and apply these methods to the prediction of fringe words.
The delivery of multimedia presentations in a graphical user interface environment BIBAFull-Text 279-281
  Nathalie Colineau; Julien Phalip; Andrew Lampert
A major issue in many domains is presenting people with information tailored to their needs, in such a way that it supports them in their tasks. In this paper, we present the Virtual Document Planner (VDP), a platform we developed for generating tailored interactive multimedia presentations in the surveillance domain. Integrated with the surveillance operators' graphical interface, the VDP provides tailored information delivery mechanisms that adapt the operators' information-rich environment to their tasks and information needs.
iCARE: intelligent customer assistance for recommending eyewear BIBAFull-Text 282-284
  Edwin Costello; John Doody; Lorraine McGinty; Barry Smyth
Consumers are often overwhelmed by the range of product choices available, especially online, and recommender systems have emerged as an important tool for helping users to navigate through complex product spaces based on their preferences. In this paper we describe work that concentrates on how research ideas from two complementary research communities (recommender systems and intelligent user interfaces) can be married to improve online recommender systems. In particular, we are interested in content-based recommendation domains that rely heavily on explicit feature-level feedback from users. Oftentimes this type of feedback is difficult for users to provide, and in this paper we look at how this might be addressed through product visualization techniques, focusing on the iCARE system for recommending suitable eyeglasses to individual users.
Interactive prototyping for ubiquitous augmented reality user interfaces BIBAFull-Text 285-287
  Otmar Hilliges; Christian Sandor; Gudrun Klinker
User interfaces for ubiquitous augmented reality incorporate a wide variety of concepts such as multi-modal, multi-user, multi-device aspects and new input/output devices. In this paper we present a twofold approach that consists of an execution engine for ubiquitous augmented reality user interfaces and a runtime development environment that enables rapid prototyping and live system adaption for such advanced user interfaces.
PastMaster@storytelling: a controlled interface for interactive drama BIBAFull-Text 288-290
  Nicolas Szilas; Manolya Kavakli
In this paper, we describe a controlled interface for Interactive Drama, PastMaster@Storytelling. PastMaster is used for interacting with an Interactive Drama engine. The paper discusses the test results regarding the usability of the interface.
When Media Gets Wise: collaborative filtering with mobile media agents BIBAFull-Text 291-293
  Mattias Jacobsson; Mattias Rost; Lars Erik Holmquist
We present a model where media (e.g. music files) are autonomous entities that carry their own individual information. Our goal is to turn such files into autonomous, rule-following agents capable of building their own identities from interactions with other agents and users. We are exploring how collaborative filtering-like behaviour could emerge out of large ensembles of interacting agents, which are distributed over mobile devices in social networks. We have implemented a first version of the model in the form of a music player application for mobile devices, called Push!Music. This system takes advantage of active recommendations as well as implicit user activity to build a profile for each media file.
MapTable: a tactical command and control interface BIBAFull-Text 294-296
  Fan Yang; Christopher Baber
This paper describes a novel tabletop interface, MapTable, which can be used as a tactical command and control interface. It is designed and implemented to explore more intelligent and intuitive interaction in a distributed environment that can support remote collaboration. MapTable offers a common space for planners to work, which retains the intuitive feel of a "sandbox" around which discussion can take place and plans can easily be displayed, whilst the automated navigation command system means that both planning and the issuing of directions can effectively be merged into a single activity using a single user interface that embeds the tasks into the interaction. Empirical studies were conducted to test this tactical interface in a remote searching task environment. Compared to the traditional desktop command and control interface, MapTable can lead to significant differences in performance. This fusion of planning, command and control means that planners can be expected to have a high level of situational awareness with regard to where those they are directing are, where they will be, and what others in the team are doing.
Presence based collaborative recommender for networked audiovisual displays BIBAFull-Text 297-299
  James H. Errico; Ibrahim Sezan
In this paper, we describe a presence-based collaborative recommender (PBCR) system for networked audiovisual (AV) displays, such as Internet-connected TV sets with access to broadcast TV programs over traditional channels, video on demand, and Internet Protocol (IP) AV programs. The proposed PBCR system is based on presence technology and provides viewers with collaborative recommendations on AV programs based on the presence or ratings of users within the viewer's community.
A TV agent system that integrates knowledge and answers users' questions BIBAFull-Text 300-302
  Jun Goto; Masaru Miyazaki; Takeshi Kobayakawa; Nobuyuki Hiruma; Noriyoshi Uratani
Aiming to close the digital divide in the television viewing environment, we are developing a TV system with an agent that controls the TV and peripherals on behalf of the user and provides information to the user. We propose a TV system function that answers viewers' questions about TV programs by calling upon multiple question-answering agents that search for relevant information.
A cognitively based approach to affect sensing from text BIBAFull-Text 303-305
  Mostafa Al Masum Shaikh; Helmut Prendinger; Mitsuru Ishizuka
Studying the relationship between natural language and affective information, as well as assessing the affective qualities underpinning natural language, is becoming crucial for improving human computer interaction. Different approaches have already been employed to "sense" affective information from text, but none of them considered the cognitive structure of individual emotions and the appraisal structure of those emotions adopted by emotion sensing programs. It has also been observed that previous attempts at textual affect sensing have categorized texts into a small number of emotion groups, e.g. the six so-called "basic" emotions proposed by Paul Ekman, which we believe is insufficient to classify textual emotions. Hence we propose a different approach to sensing affective information from texts by applying the cognitive theory of emotions known as the OCC model [1], which distinguishes several emotion types that can be identified by assessing valenced reactions to events, agents or objects described in the texts. In particular, we want to create a formal model that can not only "understand" what emotions people wrap in their textual messages, but can also make automatic empathic responses with respect to the emotional state detected in the text (e.g. in a chat system). We first briefly describe relevant work, and then explain our proposal with examples. Finally, we conclude with plans for future work.
Audio subtle expressions affecting user's perceptions BIBAFull-Text 306-308
  Takanori Komatsu
Can we assign attitudes to an artifact based on its expressed beep sounds as audio subtle expressions? If so, which kinds of beep sounds are perceived as specific attitudes, such as "disagreement" as a negative attitude, "hesitation" as neutral, or "agreement" as positive? To examine this issue, I carried out an experiment to observe and clarify how participants assign an attitude to an artifact according to beeps of different durations and F0 values. The results revealed that 1) sounds with rising tones, regardless of duration, were perceived by participants as "disagreement," 2) flat sounds with longer durations were interpreted as "hesitation," and 3) falling tones with shorter durations were taken as "agreement."
A task-driven user interface architecture for ambient intelligent environments BIBAFull-Text 309-311
  Tim Clerckx; Chris Vandervelpen; Kris Luyten; Karin Coninx
This paper presents a modular runtime architecture supporting our model-based user interface design approach for designing context-aware, distributable user interfaces for ambient intelligent environments.
An approach to adaptive user interfaces using interactive media systems BIBAFull-Text 312-314
  Mithilesh Kumar; Akhilesh Gupta; Sharad Saha
Adaptive interfaces are a promising attempt to overcome contemporary problems caused by the increasing complexity of human-computer interaction. They are designed to tailor a system's interactive behavior to the individual needs of human users and to altering conditions within an application environment. To build adaptive user interfaces, we developed a system that interacts with users on a variety of terminals. The system has three components. The first is MPEG-4 Binary Format for Scenes (BIFS) [5,6] for creating interactive media. The second is the adaptor chain, which generates a user interface depending upon user preferences, terminal capabilities and network constraints. The third is the iPlayer, or Interactive Player, which plays the transferred media data and interacts with the user. The final implementation of the player operates on Win32, WinCE and Linux and plays MPEG-4 video and MP3 audio.
Intelligent fridge poetry magnets BIBAFull-Text 315-317
  Kavita Thomas; Pierre Proske; Mattias Rickardsson
This paper presents a community of communicating embodied agents which learn an adjacency-based grammar from user interactions. The agents act as intelligent fridge magnets, each printing a word on their respective displays. The user places agents next to other agents on the fridge, removing and replacing them if the word they display is ungrammatical given the current context, thereby indicating grammatical acceptability. We present these agents both as a test bed for exploring research into embodied communicating agents and as a means of investigating how users respond to expressive devices like fridge poetry magnets which learn from user interaction.
Designing an intelligent user interface for instructional video indexing and browsing BIBAFull-Text 318-320
  Lijun Tang; John R. Kender
Instructional videos are used intensively in universities for remote education and e-learning, and a typical university course consists of videos totalling more than two thousand minutes in length. This paper presents a novel graphical user interface for indexing and browsing such extensive but thematically related content. We show how the interface automatically extracts semantic indices from the visual content, and then presents both high- and low-level cues from five different conceptual viewpoints. We detail each of these novel UI units, show how they are integrated into a user-adjustable main framework, and describe how they are interconnected and navigated through user mouse events.
Training a training system BIBAFull-Text 321-323
  Debbie Richards; Nicolas Szilas
We are interested in using game technology to provide an engaging and immersive environment for experiential learning of workplace situations. Narrative intelligence will be used to provide the adventure. For authoring we provide an adaptive interface that allows the direct capture of the workplace situations and the knowledge driving the interaction. We include an initial study comparing the learning outcomes for an animated demonstration with video footage of a similar scenario.
Multimodal error correction for continuous handwriting recognition in pen-based user interfaces BIBAFull-Text 324-326
  Xugang Wang; Junfeng Li; Xiang Ao; Gang Wang; Guozhong Dai
In this paper, we describe a multimodal error correction mechanism. It allows the user to correct errors in continuous handwriting recognition naturally by simultaneously using pen gesture and speech. A multimodal fusion algorithm is designed to enhance recognition accuracies of handwriting and speech through cross-modal influence. We have performed preliminary evaluation experiments and the results show that this multimodal mechanism can efficiently correct the errors in continuous handwriting recognition.
Inducing shortcuts on a mobile phone interface BIBAFull-Text 327-329
  Robert Bridle; Eric McCreath
Due to size restrictions, mobile phone user interfaces are often difficult to use [8]. In this short paper, we investigated inducing shortcuts to replace the sequences of actions required to complete common tasks on a mobile phone. In particular, we used mobile phone interaction data to evaluate several methods for inducing shortcuts, considering the balance between maximising interface efficiency and keeping shortcuts stable and hence predictable.
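A simple frequency-based induction of this kind can be sketched as follows — illustrative only, not one of the methods the paper evaluates; the n-gram counting and the frequency-times-savings ranking are assumptions:

```python
from collections import Counter

def induce_shortcuts(sessions, min_len=2, max_len=4, top_k=3):
    """Count contiguous action n-grams across interaction sessions and
    propose the most frequent ones as shortcut candidates (a hypothetical
    frequency-based sketch)."""
    counts = Counter()
    for actions in sessions:
        for n in range(min_len, max_len + 1):
            for i in range(len(actions) - n + 1):
                counts[tuple(actions[i:i + n])] += 1
    # Rank by frequency times keypresses saved, so long common sequences win.
    ranked = sorted(counts.items(),
                    key=lambda kv: kv[1] * (len(kv[0]) - 1), reverse=True)
    return [seq for seq, _ in ranked[:top_k]]

sessions = [
    ["menu", "messages", "new", "contacts"],
    ["menu", "messages", "new", "contacts"],
    ["menu", "settings", "tones"],
]
print(induce_shortcuts(sessions))
```

The stability-versus-efficiency balance the abstract mentions would show up here as how often the top-ranked candidates change as new sessions arrive.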
A multi modal supporting tool for multi lingual communication by inducing partner's reply BIBAFull-Text 330-332
  Kazunori Imoto; Munehiko Sasajima; Taishii Shimomori; Noriko Yamanaka; Makoto Yajima; Yasuyuki Masai
This paper introduces a new tool for supporting multilingual communication between speakers of different languages. Conventional tools such as electronic dictionaries enable users to communicate basic intentions to others, but are often insufficient to help understand replies. The input of a Japanese sentence into the proposed tool not only produces a translation of the sentence but also displays a window featuring possible answers. The authors have evaluated a prototype system, which resulted in a thorough understanding of the merits and shortcomings of the proposed tool.
Modeling gaze behavior for a 3D ECA in a dialogue situation BIBAFull-Text 333-335
  Gaspard Breton; Danielle Pele; Christophe Garcia
This paper presents an approach to modeling the gaze behavior of an Embodied Conversational Agent in a real-time multimodal dialogue interaction with users. The ECA's gaze control results from the fusion of a rational dialogue engine based on natural language interaction and a multi-user face tracker.
Modality preferences in mobile and instrumented environments BIBAFull-Text 336-338
  Rainer Wasinger; Antonio Kruger
In this paper, we describe the results of a usability study on user preferences for multimodal interaction in an instrumented environment. The study was conducted in a public setting, and provides insight into modality preferences among users in general, and between men and women in particular. The results are also contrasted with those of an earlier study based on the same evaluation procedures but conducted in a laboratory setting.
Investigating the relation between robot bodily expressions and their impression on the user BIBAFull-Text 339-341
  Abdelaziz Khiat; Masataka Toyota; Yoshio Matsumoto; Tsukasa Ogasawara
During an interaction process, people usually adapt their behavior according to their interpretation of their partner's bodily expressions. It is not known to what extent similar expressions performed by robots affect a human observer. This paper explores this issue. The study shows a correlation between the nature of the bodily expressions, through the results of questionnaires, and the effect on brain activity. It demonstrates that unpleasant bodily expressions of the robot elicit unpleasant impressions and vice versa. This was observed through brain activity in one specific area when the expression is pleasant, and in another area when it is unpleasant.
Recovering semantic relations from web pages based on visual cues BIBAFull-Text 342-344
  Peifeng Xiang; Yuanchun Shi
Recovering semantic relations between different parts of web pages is of great importance for multi-platform web interface development, as it makes it possible to re-distribute interaction objects and change the structure of interfaces while preserving the semantics of the UI. Important semantic relations include topic, order, hierarchy, etc. This paper presents a visual-cues-based approach, independent of the tag-tree structure, to automatically detect such semantic relations in web pages. Compared with other existing techniques, such as DOM-based methods, this approach depends mostly on an interface's perceptible visual information, which is more reliable. The preliminary evaluation on complex web sites shows promising results. We believe further exploration is worthwhile.
Geometric anticipation: assisting users in 2D layout tasks BIBAFull-Text 345-347
  Jessi Stumpfel; James Arvo; Kevin Novins
We describe an experimental interface that anticipates a user's intentions and accommodates predicted changes in advance. Our canonical example is an interactive version of "magnetic poetry" in which rectangular blocks containing single words can be juxtaposed to form arbitrary sentences or "poetry." The user can rearrange the blocks at will, forming and dissociating word sequences. A crucial attribute of the blocks in our system is that they anticipate insertions and gracefully rearrange themselves in time to make space for a new word or phrase. The challenges in creating such an interface are threefold: 1) the user's intentions must be inferred from noisy input, 2) arrangements must be altered smoothly and intuitively in response to anticipated changes, and 3) new and changing goals must be handled gracefully at any time, even in mid-animation. We describe a general approach for handling the dynamic creation and deletion of organizational goals. Fluid motion is achieved by continually applying and correcting goal-directed forces to the objects. Future applications of this idea include the manipulation of text and graphical elements within documents and the manipulation of symbolic information such as equations.
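Goal-directed forces of this kind can be sketched as a damped-spring integration step that pulls each block toward its anticipated position — a hypothetical minimal version, not the authors' animation system; the gains and time step are assumptions:

```python
def step_toward(pos, goal, dt=0.016, k=8.0, damping=4.0, vel=(0.0, 0.0)):
    """One semi-implicit Euler step of a goal-directed spring with damping,
    so a block glides smoothly toward its anticipated position. Changing
    `goal` mid-animation just redirects the force, which is how new or
    changed goals can be absorbed gracefully at any time."""
    vx, vy = vel
    fx = k * (goal[0] - pos[0]) - damping * vx   # spring pull minus drag, x
    fy = k * (goal[1] - pos[1]) - damping * vy   # spring pull minus drag, y
    vx, vy = vx + fx * dt, vy + fy * dt          # update velocity first
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)
```

Calling this each frame with the current goal yields the "continually applying and correcting" behavior the abstract describes.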
Augmenting kitchen appliances with a shared context using knowledge about daily events BIBAFull-Text 348-350
  Chia-Hsun Jackie Lee; Leonardo Bonanni; Jose H. Espinosa; Henry Lieberman; Ted Selker
Networking appliances might make them aware of each other, but interacting with a complex network can be difficult in itself. KitchenSense is a sensor-rich networked kitchen research platform that uses Common Sense reasoning to simplify control interfaces and augment interaction. The system's sensor net attempts to interpret people's intentions to create fail-soft support for safe, efficient and aesthetic activity. By considering embedded sensor data together with daily-event knowledge, a centrally-controlled system can develop a shared context across various appliances. The system is a research platform used to evaluate augmented intelligent support of work scenarios in physical spaces.
Multimodal interaction styles for hypermedia adaptation BIBAFull-Text 351-353
  Ronnie Taib; Natalie Ruiz
We explore the concept of interaction styles used to navigate through hypermedia systems. A demonstrator was built to conduct a user study with the objective of detecting whether any interaction pattern exists in relation to input modality choices. Our lightweight server-side web demonstrator is able to adapt output modalities as a function of input received from the user. The interface and content displayed are built from predefined presentation schemes that attempt to optimize the user's experience and the website's functionality. The results suggest that some level of entrenchment does occur with respect to modality choices, with 45% of participants deviating from their preferred pattern in one or fewer interaction turns.
Activity-oriented context-aware adaptation assisting mobile geo-spatial activities BIBAFull-Text 354-356
  Guoray Cai; Yinkun Xue
Human geospatial activities often involve the use of geographic information in mobile environments where the context of technology use is dynamic, complex, and unstable, creating unique challenges in designing effective mobile mapping applications. Enhancing the context awareness of the computing device can improve the usability of mobile map applications, but the potentially large number of contexts (physical context, computing context, human factors, and time) is not easily managed without a workable organizing structure. This paper proposes an activity-oriented context model that establishes late (run-time) binding of contexts to the ongoing activity according to how they contribute to the success of the activity. Using this context model, adaptation of the mobile map display to changes in other contexts is based on knowledge of the ongoing task (within an activity) rather than anticipated tasks. We discuss the advantages of such an approach over traditional template-based context models in mobile computing applications.
Constraint-based livespaces configuration management BIBAFull-Text 357-359
  Markus Stumptner; Bruce Thomas
In this paper, we describe the use of constraint-based methods for configuring ubiquitous workspaces. A declarative representation allows succinct, easily maintainable definitions of the dependencies inherent in setting up a meeting, and permits the use of general constraint reasoners for various standard tasks such as setting up meeting interfaces, switching between settings for different meetings, and saving and restoring settings. Personalisation techniques can be used for intelligently adapting the workspace to individual user needs.
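As a hedged illustration of the declarative style this abstract describes, a meeting setup can be expressed as variable domains plus constraints and handed to a solver; here a brute-force search stands in for the general constraint reasoners the paper uses. The device names, settings, and constraints below are invented for illustration and are not from the Livespaces system.

```python
from itertools import product

# Illustrative device domains for one meeting room.
DOMAINS = {
    "projector": ["off", "laptop", "room-pc"],
    "lights": ["full", "dim", "off"],
    "screen": ["up", "down"],
}

# Declarative dependencies: each constraint inspects a candidate configuration.
CONSTRAINTS = [
    # If the projector is in use, the screen must be down...
    lambda c: c["projector"] == "off" or c["screen"] == "down",
    # ...and the lights dimmed.
    lambda c: c["projector"] == "off" or c["lights"] == "dim",
]

def configure(required):
    """Find a configuration satisfying all constraints and user requirements."""
    keys = list(DOMAINS)
    for values in product(*(DOMAINS[k] for k in keys)):
        cfg = dict(zip(keys, values))
        if all(c(cfg) for c in CONSTRAINTS) and all(
            cfg[k] == v for k, v in required.items()
        ):
            return cfg
    return None

# Requesting a laptop presentation yields a fully consistent room setup.
setup = configure({"projector": "laptop"})
```

Switching between meetings then reduces to re-running `configure` with a different requirement set, and maintenance means editing declarative constraints rather than procedural setup code.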
How to talk to a hologram BIBAFull-Text 360-362
  Anton Leuski; Jarrell Pair; David Traum; Peter J. McNerney; Panayiotis Georgiou; Ronakkumar Patel
There is a growing need for creating life-like virtual human simulations that can conduct a natural spoken dialog with a human student on a predefined subject. We present an overview of a spoken-dialog system that supports a person interacting with a full-size hologram-like virtual human character in an exhibition kiosk setting. We also give a brief summary of the natural language classification component of the system and describe the experiments we conducted with the system.
Intelligent drawing correction using place vocabulary constraints BIBAFull-Text 363-365
  Ronald W. Ferguson; Neil Cutshaw; Huzaifa Zafar
Diagrams used in many domains often require continual redrawing. Diagram drawing programs often aid redrawing by applying secondary corrections that change visual elements to maintain preexisting relationships. These corrections, though useful, can operate in unintuitive ways and cause disfluencies. We describe an implemented prototype system that improves corrections based on place vocabularies (domain-specific spatial relation sets). Place vocabulary constraints (PVCs) translate high-level place vocabularies into low-level geometric constraints by reversing pre-existing recognition rules. By making corrections congruent with a domain vocabulary, PVCs may provide more intuitive drawing corrections.
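A minimal sketch of the reversal the abstract describes: where a recognizer might infer a high-level relation such as connects(arrow, src, dst) from arrow endpoints lying on box edges, the corresponding place vocabulary constraint re-derives those endpoints after a box moves, so the relation keeps holding. The box representation and function names below are illustrative assumptions, not the prototype's.

```python
def edge_anchor(box, toward):
    """Project the box centre toward another point, clipped to the box edge.

    box is (x, y, w, h); returns the point where the centre-to-centre ray
    crosses the rectangle boundary.
    """
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    dx, dy = toward[0] - cx, toward[1] - cy
    if dx == 0 and dy == 0:
        return (cx, cy)
    sx = (w / 2) / abs(dx) if dx else float("inf")
    sy = (h / 2) / abs(dy) if dy else float("inf")
    s = min(sx, sy)
    return (cx + dx * s, cy + dy * s)

def correct_arrow(src_box, dst_box):
    """Low-level geometric correction derived from connects(arrow, src, dst)."""
    src_c = (src_box[0] + src_box[2] / 2, src_box[1] + src_box[3] / 2)
    dst_c = (dst_box[0] + dst_box[2] / 2, dst_box[1] + dst_box[3] / 2)
    return edge_anchor(src_box, dst_c), edge_anchor(dst_box, src_c)
```

After either box is dragged, re-running `correct_arrow` keeps the connection visually intact, which is the kind of domain-congruent correction the abstract argues is more intuitive than generic geometric snapping.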
Are two talking heads better than one?: when should we use more than one agent in e-learning? BIBAFull-Text 366-368
  Hua Wang; Mark Chignell; Mitsuru Ishizuka
Recent interest in the use of software character agents raises the issue of how many agents should be used in online learning. In this paper we review evidence concerning the relative effectiveness of multi-agent systems and introduce a multiple-agent system that we have developed for online instruction. A user test was carried out comparing one- and two-agent versions of the learning system. The results are interpreted in terms of their implications for selecting when and how more than one agent should be used in online learning. We conclude with some recommendations on when multiple agents may help online learners to interact with the learning environment more easily and efficiently.
Improving question-answering with linking dialogues BIBAFull-Text 369-371
  Sudeep Gandhe; Andrew S. Gordon; David Traum
Question-answering dialogue systems have found many applications in interactive learning environments. This paper is concerned with one such application for Army leadership training, where trainees input free-text questions that elicit pre-recorded video responses. Since these responses are crafted before the question is asked, a certain degree of incoherence exists between the question that is asked and the answer that is given. This paper explores the use of short linking dialogues that stand between the question and its video response to alleviate the problem of incoherence. We describe a set of experiments with human-generated linking dialogues that demonstrate their added value. We then describe our implementation of an automated method for utilizing linking dialogues and show that these have better coherence properties than the original system without linking dialogues.
Ambient Display using Musical Effects BIBAFull-Text 372-374
  Luke Barrington; Michael J. Lyons; Dominique Diegmann; Shinji Abe
The paper presents a novel approach to the peripheral display of information by applying audio effects to an arbitrary selection of music. We examine a specific instance: the communication of information about human affect, and construct a functioning prototype which captures behavioral activity level from the face and maps it to musical effects. Several audio effects are empirically evaluated as to their suitability for ambient display. We report measurements of the ambience, perceived affect, and pleasure of these effects. The findings support the hypothesis that musical effects are a promising method for ambient informational display.
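As a hedged sketch of mapping a sensed activity level onto a musical effect, the snippet below scales the depth of a tremolo (periodic amplitude modulation) by activity, so the music passes through unchanged when nothing is happening and pulses audibly as activity rises. The choice of effect, rate, and depth range are illustrative assumptions, not the paper's calibration.

```python
import math

def tremolo_gain(t, activity, base_rate=2.0, max_depth=0.6):
    """Amplitude gain at time t (seconds) for a given activity level in [0, 1].

    depth == 0 leaves the signal untouched; higher activity deepens the
    periodic dip in gain, making the modulation more salient.
    """
    depth = max_depth * max(0.0, min(1.0, activity))
    return 1.0 - depth * 0.5 * (1.0 + math.sin(2 * math.pi * base_rate * t))

# Multiplying each audio sample by tremolo_gain(t, activity) applies the
# effect; at zero activity the gain is identically 1.0 (no audible change).
```

Keeping the mapping continuous in `activity` matters for ambience: the display fades in and out with behaviour rather than switching abruptly.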