
Proceedings of the 2003 International Conference on Intelligent User Interfaces

Fullname: International Conference on Intelligent User Interfaces
Editors: Lewis Johnson; Elisabeth Andre
Location: Miami, Florida, USA
Dates: 2003-Jan-12 to 2003-Jan-15
Standard No: ACM ISBN 1-58113-586-1; ACM Order Number 608030; ACM DL: Table of Contents; hcibib: IUI03
Links: Conference Home Page
  1. Invited Papers
  2. Full Technical Papers
  3. Accepted Posters
  4. Accepted Demo Papers

Invited Papers

Semantic information processing of spoken language: how may I help you? BIBAFull-Text 2
  Allen Gorin
The next generation of voice-based user interface technology will enable easy-to-use automation of new and existing communication services, achieving a more natural human-machine interaction. By natural, we mean that the machine understands what people actually say, in contrast to what a system designer expects them to say. This approach is in contrast with menu-driven or strongly-prompted systems, where many users are unable or unwilling to navigate such highly structured interactions. AT&T's How May I Help You? (HMIHY)(sm) technology shifts the burden from human to machine wherein the system adapts to people's language, as contrasted with forcing users to learn the machine's jargon. We have developed algorithms which learn to extract meaning from fluent speech via automatic acquisition and exploitation of salient words, phrases and grammar fragments from a corpus. In this talk I will describe the speech, language and dialog technology underlying HMIHY, plus experimental evaluation on live customer traffic from AT&T's national deployment for customer care. Allen Gorin is the Head of the Speech Interface Research Department at AT&T Laboratories, with long-term research interests focusing on machine learning methods for spoken language understanding. In recent years, he has led a research team in applying speech, language and dialog technology to AT&T's "How May I Help You?" (HMIHY) (sm) service, which has been deployed nationally for long distance customer care. He was awarded the 2002 AT&T Science and Technology Medal for his research contributions to spoken language understanding for HMIHY. He received the B.S. and M.A. degrees in Mathematics from SUNY at Stony Brook, and the Ph.D. in Mathematics from the CUNY Graduate Center in 1980. From 1980-83 he worked at Lockheed investigating algorithms for target recognition from time-varying imagery.
In 1983 he joined AT&T Bell Labs where he was the Principal Investigator for AT&T's ASPEN project within the DARPA Strategic Computing Program, investigating parallel architectures and algorithms for pattern recognition. In 1987, he was appointed a Distinguished Member of the Technical Staff. In 1988, he joined the Speech Research Department at Bell Labs. He has served as a guest editor for the IEEE Transactions on Speech and Audio, and was a visiting researcher at the ATR Interpreting Telecommunications Research Laboratory in Japan. He is a member of the Acoustical Society of America and the Association for Computational Linguistics, and an IEEE Senior Member. Home page for Allen Gorin: http://www.research.att.com/info/algor.
Tangible bits: designing the seamless interface between people, bits, and atoms BIBAFull-Text 3
  Hiroshi Ishii
Where the sea meets the land, life has blossomed into a myriad of unique forms in the turbulence of water, sand, and wind. At another seashore between the land of atoms and the sea of bits, we are now facing the challenge of reconciling our dual citizenships in the physical and digital worlds. Windows to the digital world are confined to flat square screens and pixels, or "painted bits." Unfortunately, one cannot feel and confirm the virtual existence of this digital information through one's body. Tangible Bits, our vision of Human Computer Interaction (HCI), seeks to realize seamless interfaces between humans, digital information, and the physical environment by giving physical form to digital information, making bits directly manipulable and perceptible. The goal is to blur the boundary between our bodies and cyberspace and to turn the architectural space into an interface between people, bits, and atoms. In this talk, I will present a variety of tangible user interfaces the Tangible Media Group has designed and presented within the CHI, SIGGRAPH, UIST, CSCW, IDSA, ICSID, ICC, and Ars Electronica communities. Hiroshi Ishii is a tenured Associate Professor of Media Arts and Sciences at the MIT Media Lab. His research focuses upon the design of seamless interfaces between humans, digital information, and the physical environment. At the MIT Media Lab, he founded and directs the Tangible Media Group pursuing a new vision of Human Computer Interaction (HCI): "Tangible Bits." His team seeks to change the "painted bits" of GUIs to "tangible bits" by giving physical form to digital information. He also co-directs the Things That Think (TTT) Consortium at the MIT Media Lab.
Ishii and his students have presented their vision of "Tangible Bits" at a variety of academic, industrial design, and artistic venues (including ACM SIGCHI, ACM SIGGRAPH, Industrial Design Society of America, and Ars Electronica), emphasizing that the development of tangible interfaces requires the rigor of both scientific and artistic review. A display of many of the group's projects took place at the NTT InterCommunication Center (ICC) in Tokyo in summer 2000. A new, two-year-long exhibition "Get in Touch" that features the Tangible Media Group's work opened at the Ars Electronica Center (Linz, Austria) in September 2001. Prior to MIT, from 1988-1994, he led a CSCW research group at the NTT Human Interface Laboratories, where his team invented TeamWorkStation and ClearBoard. In 1993 and 1994, he was a visiting assistant professor at the University of Toronto, Canada. He received the B.E. degree in electronic engineering and the M.E. and Ph.D. degrees in computer engineering from Hokkaido University, Japan, in 1978, 1980 and 1992, respectively. Home page for Hiroshi Ishii: .
What users want BIBAFull-Text 4
  Daniel Weld
Today's computer interfaces are one-size-fits-all. Users with little programming experience have only limited opportunities to customize their interface to their task and work habits (e.g., adding buttons to a toolbar). Furthermore, the overhead induced by generic interfaces will be proportionately greater on small form-factor PDAs, embedded applications and wearable devices. Searching for a solution, researchers argue that productivity can be greatly enhanced if interfaces anticipated their users, adapted to their preferences, and reacted to high-level customization requests. But realizing these benefits is tricky, because there is an inherent tension between the dynamism implied by automatic interface adaptation and the stability required in order for the user to maintain an accurate mental model, predict the computer's behavior, and feel in control. In this talk, I discuss several principles governing effective adaptation, describe algorithms for data mining user action traces, and suggest mechanisms for dynamically transforming interfaces.

Full Technical Papers

Self-adaptive multimodal-interruption interfaces BIBAFull-Text 6-11
  Ernesto Arroyo; Ted Selker
This work explores the use of ambient displays in the context of interruption. A multimodal interface was created to communicate with users by using two ambient channels for interruption: heat and light. These ambient displays acted as external interruption generators designed to draw users' attention away from their current task: playing a game on a desktop computer. It was verified that the disruptiveness and effectiveness of interruptions vary with the modality used to interrupt. The thermal modality produced a larger decrease in performance on the interrupted task, and was more disruptive, than the visual modality. Our results provide a starting point for the theory behind future self-adaptive multimodal-interruption interfaces that will employ users' individual physiological responses to each interruption modality and dynamically select the modality based on effectiveness and performance metrics.
Towards more conversational and collaborative recommender systems BIBAFull-Text 12-18
  Giuseppe Carenini; Jocelyin Smith; David Poole
Current recommender systems, based on collaborative filtering, implement a rather limited model of interaction. These systems intelligently elicit information from a user only during the initial registration phase. Furthermore, users tend to collaborate only indirectly. We believe there are several unexplored opportunities in which information can be effectively elicited from users by making the underlying interaction model more conversational and collaborative. In this paper, we propose a set of techniques to intelligently select what information to elicit from the user in situations in which the user may be particularly motivated to provide such information. We argue that the resulting interaction improves the user experience. We conclude by reporting results of an offline experiment in which we compare the influence of different elicitation techniques on both the accuracy of the system's predictions and the user's effort.
A virtual patient based on qualitative simulation BIBAFull-Text 19-25
  Marc Cavazza; Altion Simo
In this paper, we describe the development of a virtual human to be used for training applications in the field of cardiac emergencies. The system integrates AI techniques for simulating medical conditions (shock states) with a realistic visual simulation of the patient in a 3D environment representing an ER room. It uses qualitative simulation of the cardio-vascular system to generate clinical syndromes and simulate the consequences of the trainee's therapeutic interventions. The use of knowledge-based simulation provides a strong basis to integrate the behavioural aspects with the graphical appearance of the patient in the virtual ER. This also supports the creation of an emotional atmosphere increasing the realism of the training system.
Intelligent user interface design for teachable agent systems BIBAFull-Text 26-33
  J. Davis; K. Leelawong; K. Belynne; B. Bodenheimer; G. Biswas; N. Vye; J. Bransford
This paper describes the interface components for a system called Betty's Brain, an intelligent agent we have developed for studying the learning-by-teaching paradigm. Our previous studies have shown that students gain a better understanding of domain knowledge when they prepare to teach others versus when they prepare to take an exam. This finding has motivated us to develop computer agents that students teach using concept map representations with a visual interface. Betty is intelligent not because she learns on her own, but because she can apply qualitative-reasoning techniques to answer questions that are directly related to what she has been taught through the concept map. We evaluate the agent's interfaces in terms of how well they support learning activities, using examples of their use by fifth-grade students in an extensive study that we performed in a Nashville public school. A critical analysis of the outcome of our studies has led us to propose next-generation interfaces in a multi-agent paradigm that should be more effective in promoting constructivist learning and self-regulation in the learning-by-teaching framework.
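The kind of qualitative reasoning such an agent performs over a concept map can be sketched in a few lines. The map and its causal links below are invented for illustration (they are not from Betty's Brain); the idea is simply to chase causal links and multiply their signs to answer "if X increases, what happens to Y?":

```python
# Hypothetical concept map: source -> [(target, +1 increases / -1 decreases)]
LINKS = {
    "algae": [("oxygen", +1)],
    "fish": [("algae", -1)],
    "oxygen": [("fish", +1)],
}

def effect(source, target, seen=()):
    """Return +1/-1 if increasing `source` raises/lowers `target`,
    or None if the map contains no causal path between them."""
    if source == target:
        return +1
    for nxt, sign in LINKS.get(source, []):
        if nxt in seen:          # avoid looping around causal cycles
            continue
        downstream = effect(nxt, target, seen + (source,))
        if downstream is not None:
            return sign * downstream
    return None
```

With this toy map, `effect("fish", "oxygen")` follows fish -| algae -> oxygen and returns -1: more fish means less oxygen.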
Buddies in a box: animated characters in consumer electronics BIBAFull-Text 34-38
  Elmo M. A. Diederiks
In this paper it is argued that animated characters in the interaction with consumer electronics products can have four kinds of benefits. They can add fun to the interaction and realise a more enjoyable experience. Animated characters can deploy social behaviour and social rules known from daily life, and thus make it more natural and easier to interact with consumer electronics products. Furthermore, an animated character can set the right level of expectation, and finally it can make system errors and interaction obstacles more acceptable. Two examples are described to illustrate this argumentation. The L-icons are virtual personal friends that live inside the television and represent a so-called recommendation system. Bello is a virtual pet dog that facilitates voice-controlled interaction for a television set. The evaluation results of the two example applications confirm the four arguments, but they also show that the form of the animated character must be application-specific in order to achieve an optimal match between the characteristics of the character and those of the system it represents.
Interactive machine learning BIBAFull-Text 39-45
  Jerry Alan Fails; Dan R., Jr. Olsen
Perceptual user interfaces (PUIs) are an important part of ubiquitous computing. Creating such interfaces is difficult because of the image and signal processing knowledge required for creating classifiers. We propose an interactive machine-learning (IML) model that allows users to train, classify/view and correct the classifications. The concept and implementation details of IML are discussed and contrasted with classical machine learning models. Evaluations of two algorithms are also presented. We also briefly describe Image Processing with Crayons (Crayons), which is a tool for creating new camera-based interfaces using a simple painting metaphor. The Crayons tool embodies our notions of interactive machine learning.
Personal choice point: helping users visualize what it means to buy a BMW BIBAFull-Text 46-52
  Andrew Fano; Scott W. Kurth
How do we know if we can afford a particular purchase? We can find out what the payments might be and check our balances on various accounts, but does this answer the question? What we really need to know is how this purchase would affect our other goals. What do I have to give up to afford this purchase? Personal Choice Point is a financial planning tool that addresses these questions by enabling a user to explore the repercussions of her decisions at the level of her lifestyle goals, not just her accounts. The user is presented with a graphical representation of primary lifestyle goals such as home, car, vacation, education, etc. As the user selects goals and modifies them, the tool graphically depicts the impact of each decision on her other goals. In effect, Personal Choice Point is a planner that helps restrict the user's search for a suitable allocation of resources among goals to the likely set of allocations, from the much larger space of possible ones. The result is a system that shifts the focus of the user's task from managing the mechanics of resource allocation to the evaluation and selection of likely allocations.
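The core arithmetic behind "what does this purchase do to my other goals?" can be illustrated with a toy model. The goal names, costs, and the fixed-monthly-savings assumption are all hypothetical, not Personal Choice Point's actual planner:

```python
# Toy model: goals funded in order from a fixed monthly savings stream.
def months_to_goals(goals, monthly_savings):
    """Goals are (name, cost) pairs funded in order; return the month
    at which each goal is fully funded."""
    reached, total = {}, 0.0
    for name, cost in goals:
        total += cost
        reached[name] = total / monthly_savings
    return reached

goals = [("vacation", 3000.0), ("education", 12000.0)]
before = months_to_goals(goals, monthly_savings=1000.0)
# Adding a car purchase ahead of the other goals pushes their dates out.
after = months_to_goals([("car", 24000.0)] + goals, monthly_savings=1000.0)
delay = after["vacation"] - before["vacation"]  # months the car delays the vacation
```

A real planner would of course search over many orderings and partial allocations; the point here is only that a purchase's cost translates directly into delays on downstream goals.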
Multimodal event parsing for intelligent user interfaces BIBAFull-Text 53-60
  Will Fitzgerald; R. James Firby; Michael Hannemann
Many intelligent interfaces must recognize patterns of user activity that cross a variety of different input channels. These multimodal interfaces offer significant challenges to both the designer and the software engineer. The designer needs a method of expressing interaction patterns that has the power to capture real use cases and a clear semantics. The software engineer needs a processing model that can identify the described interaction patterns efficiently while maintaining meaningful intermediate state to aid in debugging and system maintenance. In this paper, we describe an input model, a general recognition model, and a series of important classes of recognition parsers with useful computational characteristics; that is, we can say with some certainty how efficient the recognizers will be, and the kind of patterns the recognizers will accept. Examples illustrate the ability of these recognizers to integrate information from multiple channels across varying time intervals.
Sketching for military courses of action diagrams BIBAFull-Text 61-68
  Kenneth D. Forbus; Jeffrey Usher; Vernell Chapman
A serious barrier to the digitalization of the US military is that commanders find traditional mouse/menu, CAD-style interfaces unnatural. Military commanders develop and communicate battle plans by sketching courses of action (COAs). This paper describes nuSketch Battlespace, the latest version in an evolving line of sketching interfaces that commanders find natural, yet which supports significantly increased automation. We describe techniques that should be applicable to any specialized sketching domain: glyph bars and compositional symbols to tractably handle the large number of entities that military domains use, specialized glyph types and gestures to keep drawing tractable and natural, qualitative spatial reasoning to provide sketch-based visual reasoning, and comic graphs to describe multiple states and plans. Experiments, both completed and in progress, are described to provide evidence as to the utility of the system.
MORE for less: model recovery from visual interfaces for multi-device application design BIBAFull-Text 69-76
  Yves Gaeremynck; Lawrence D. Bergman; Tessa Lau
An emerging approach to multi-device application development requires developers to build an abstract semantic model that is translated into specific implementations for web browsers, PDAs, voice systems and other user interfaces. Specifying abstract semantics can be difficult for designers accustomed to working with concrete screen-oriented layout. We present an approach to model recovery: inferring semantic models from existing applications, enabling developers to use familiar tools but still reap the benefits of multi-device deployment. We describe MORE, a system that converts the visual layout of HTML forms into a semantic model with explicit captions and logical grouping. We evaluate MORE's performance on forms from existing Web applications, and demonstrate that in most cases the difference between the recovered model and a hand-authored model is under 5%.
On-line personalization of a touch screen based keyboard BIBAFull-Text 77-84
  Johan Himberg; Jonna Hakkila; Petri Kangas; Jani Mantyjarvi
User expectations for usability and personalization, along with the decreasing size of handheld devices, challenge traditional keypad layout design. We have developed a method for on-line adaptation of a touch-pad keyboard layout. The method starts from an original layout and monitors the usage of the keyboard by recording and analyzing the keystrokes. An on-line learning algorithm subtly moves the keys according to the spatial distribution of keystrokes. In consequence, the keyboard better matches the user's physical extensions and grasp of the device, and makes the physical trajectories during typing more comfortable. We present two implementations that apply different vector quantization algorithms to produce an adaptive keyboard with visual on-line feedback. Both qualitative and quantitative results show that the changes in the keyboard are consistent and related to the user's handedness and hand extensions. Test users found the on-line personalization positive. The method can be applied either for on-line personalization of keyboards or for ergonomics research.
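One way to realize this kind of adaptation, sketched here with invented names and parameters (the paper's own implementations use other vector quantization algorithms), is an online update that nudges each key's center toward the keystrokes assigned to it:

```python
# Minimal sketch of online keyboard adaptation: assign each keystroke to
# the nearest key, then move that key's center a small step toward it.
def nearest_key(keys, stroke):
    """Label of the key whose center is closest to the stroke position."""
    return min(keys, key=lambda k: (keys[k][0] - stroke[0]) ** 2 +
                                   (keys[k][1] - stroke[1]) ** 2)

def adapt_layout(keys, strokes, rate=0.1):
    """Return a new layout where each key center has drifted a fraction
    `rate` toward every keystroke assigned to it; `keys` maps label -> (x, y)."""
    keys = dict(keys)  # leave the caller's layout untouched
    for s in strokes:
        k = nearest_key(keys, s)
        x, y = keys[k]
        keys[k] = (x + rate * (s[0] - x), y + rate * (s[1] - y))
    return keys

# Example: a user consistently hits slightly right of the 'a' key.
layout = {"a": (0.0, 0.0), "b": (10.0, 0.0)}
adapted = adapt_layout(layout, [(1.0, 0.0)] * 20)
```

After twenty such keystrokes the 'a' key has drifted most of the way toward the user's actual hit point, while untouched keys stay put.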
Lessons learned in modeling schizophrenic and depressed responsive virtual humans for training BIBAFull-Text 85-92
  Robert C. Hubal; Geoffrey A. Frank; Curry I. Guinn
This paper describes lessons learned in developing the linguistic, cognitive, emotional, and gestural models underlying virtual human behavior in a training application designed to train civilian police officers how to recognize gestures and verbal cues indicating different forms of mental illness and how to verbally interact with the mentally ill. Schizophrenia, paranoia, and depression were all modeled for the application. For linguistics, the application has quite complex language grammars that captured a range of syntactic structures and semantic categories. For cognition, there is a great deal of augmentation to a plan-based transition network needed to model the virtual humans knowledge. For emotions and gestures, virtual human behavior is based on expert-validated mapping tables specific to each mental illness. The paper presents five areas demanding continued research to improve virtual human behavior for use in training applications.
Evolution of user interaction: the case of agent adele BIBAFull-Text 93-100
  W. Lewis Johnson; Erin Shaw; Andrew Marshall; Catherine LaBore
Animated pedagogical agents offer promise as a means of making computer-aided learning more engaging and effective. To achieve this, an agent must be able to interact with the learner in a manner that appears believable, and that furthers the pedagogical goals of the learning environment. In this paper we describe how the user interaction model of one pedagogical agent evolved through an iterative process of design and user testing. The pedagogical agent Adele assists students as they assess and diagnose medical and dental patients in clinical settings. We describe the results of, and our responses to, three studies of Adele, involving over two hundred and fifty medical and dental students over five years, that have led to an improved tutoring strategy, and discuss the interaction possibilities of two different reasoning engines. With the benefit of hindsight, the paper articulates the principles that govern effective user-agent interaction in educational contexts, and describes how the agent's interaction design in its current form embodies those principles.
Learning implicit user interest hierarchy for context in personalization BIBAFull-Text 101-108
  Hyoung R. Kim; Philip K. Chan
To provide a more robust context for personalization, we desire to extract a continuum of general (long-term) to specific (short-term) interests of a user. Our proposed approach is to learn a user interest hierarchy (UIH) from a set of web pages visited by a user. We devise a divisive hierarchical clustering (DHC) algorithm to group words (topics) into a hierarchy where more general interests are represented by a larger set of words. Each web page can then be assigned to nodes in the hierarchy for further processing in learning and predicting interests. This approach is analogous to building a subject taxonomy for a library catalog system and assigning books to the taxonomy. Our approach does not need user involvement and learns the UIH "implicitly." Furthermore, it allows the original objects, web pages, to be assigned to multiple topics (nodes in the hierarchy). In this paper, we focus on learning the UIH from a set of visited pages. We propose a few similarity functions and dynamic threshold-finding methods, and evaluate the resulting hierarchies according to their meaningfulness and shape.
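The divisive clustering idea can be sketched as follows. This is an illustration only, not the paper's DHC algorithm: it assumes pairwise similarities in [0, 1] and a fixed threshold step rather than the paper's dynamic threshold-finding methods:

```python
# Illustrative divisive clustering: link words whose similarity clears a
# threshold, split the set into connected components, then recurse with a
# higher threshold so deeper nodes hold more specific interests.
def components(words, sim, threshold):
    """Connected components of the graph linking words with sim >= threshold."""
    remaining, comps = set(words), []
    while remaining:
        stack, comp = [remaining.pop()], set()
        while stack:
            w = stack.pop()
            comp.add(w)
            linked = {v for v in remaining if sim(w, v) >= threshold}
            remaining -= linked
            stack.extend(linked)
        comps.append(comp)
    return comps

def build_uih(words, sim, threshold=0.2, step=0.3):
    """Return a nested tree: (word_set, [child_trees])."""
    comps = components(words, sim, threshold)
    if len(comps) == 1 and threshold <= 1.0:
        return build_uih(words, sim, threshold + step, step)  # no split yet
    children = [build_uih(c, sim, threshold + step, step)
                for c in comps if len(c) > 1]
    return (set(words), children)

# Toy similarities (invented): sports terms cluster, "opera" stands alone.
PAIR_SIMS = {
    frozenset({"soccer", "football"}): 0.9,
    frozenset({"soccer", "sports"}): 0.5,
    frozenset({"football", "sports"}): 0.5,
}
def sim(a, b):
    return PAIR_SIMS.get(frozenset({a, b}), 0.0)

root = build_uih({"soccer", "football", "sports", "opera"}, sim)
```

The root holds every word (the most general interest); each level down, a higher similarity bar carves out tighter, more specific word clusters.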
Supporting plan authoring and analysis BIBAFull-Text 109-116
  Jihie Kim; Jim Blythe
Interactive tools to help users author plans or processes are essential in a variety of domains. KANAL helps users author sound plans by simulating them, checking for a variety of errors and presenting the results in an accessible format that allows the user to see an overview of the plan steps or timelines of objects in the plan. From our experience in two domains, users tend to interleave plan authoring and plan checking while extending background knowledge of actions. This has led us to refine KANAL to provide a high-level overview of plans and integrate a tool for refining the background knowledge about actions used to check plans. We report on these lessons learned and new directions in KANAL.
Presenting route instructions on mobile devices BIBAFull-Text 117-124
  Christian Kray; Christian Elting; Katri Laakso; Volker Coors
In this paper, we evaluate several means of presenting route instructions to a mobile user. Starting from an abstract language-independent description of a route segment, we show how to generate various presentations for a mobile device ranging from spoken instructions to 3D visualizations. We then examine the relationship between the quality of positional information, available resources and the different types of presentations. The paper concludes with guidelines that help to determine which presentation to choose for a given situation.
A model of textual affect sensing using real-world knowledge BIBAFull-Text 125-132
  Hugo Liu; Henry Lieberman; Ted Selker
This paper presents a novel way for assessing the affective qualities of natural language and a scenario for its use. Previous approaches to textual affect sensing have employed keyword spotting, lexical affinity, statistical methods, and hand-crafted models. This paper demonstrates a new approach, using large-scale real-world knowledge about the inherent affective nature of everyday situations (such as "getting into a car accident") to classify sentences into "basic" emotion categories. This commonsense approach has new robustness implications. Open Mind Commonsense was used as a real world corpus of 400,000 facts about the everyday world. Four linguistic models are combined for robustness as a society of commonsense-based affect recognition. These models cooperate and compete to classify the affect of text. Such a system that analyzes affective qualities sentence by sentence is of practical value when people want to evaluate the text they are writing. As such, the system is tested in an email writing application. The results suggest that the approach is robust enough to enable plausible affective text user interfaces.
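For contrast, the keyword-spotting baseline that this commonsense approach improves on fits in a few lines (the lexicon below is a hypothetical toy, not Open Mind data):

```python
# Toy keyword-spotting affect classifier: the weakest of the prior
# approaches the paper discusses, shown only to fix ideas.
AFFECT_LEXICON = {
    "happy": "joy", "great": "joy",
    "accident": "fear", "crash": "fear",
    "lost": "sadness", "alone": "sadness",
}

def classify_sentence(sentence):
    """Return the basic-emotion category with the most lexicon hits,
    or 'neutral' when no affective keyword appears."""
    votes = {}
    for word in sentence.lower().split():
        emotion = AFFECT_LEXICON.get(word.strip(".,!?"))
        if emotion:
            votes[emotion] = votes.get(emotion, 0) + 1
    return max(votes, key=votes.get) if votes else "neutral"
```

A sentence like "I got into a car accident." is caught only because "accident" happens to be in the lexicon; the paper's point is that commonsense knowledge about the *situation* removes that brittleness.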
Dynamic web page authoring by example using ontology-based domain knowledge BIBAFull-Text 133-140
  Jose A. Macias; Pablo Castells
Authoring dynamic web pages is an inherently difficult task. We present DESK, an interactive authoring tool that allows the customization of dynamic page generation procedures with no a priori tool-specific skill requirements from authors. Our approach consists of combining Programming By Example (PBE) techniques with an ontology-based representation of knowledge displayed in web pages. DESK acts as a client-side complement of a dynamic web page generation system, PEGASUS, which generates HTML pages from a formally structured domain model and an abstract presentation model. Authorized users can modify the internal presentation model by editing the generated HTML pages with DESK in a WYSIWYG environment. DESK keeps track of all the user's actions and exploits the explicitly represented domain semantics to enhance the power of PBE techniques.
Tool support for designing nomadic applications BIBAFull-Text 141-148
  Giulio Mori; Fabio Paterno; Carmen Santoro
Model-based approaches can be useful when designing nomadic applications, which can be accessed through multiple interaction platforms. Various models and levels of abstraction can be considered in such approaches. The lack of automatic tool support has been the main limitation to their use. We present a tool, TERESA, supporting top-down transformations from task models to abstract user interfaces and then to user interfaces for different types of interaction platforms (such as mobile phones or desktop systems). It allows designers to keep a unitary view of the design of a given nomadic application. Moreover, the tool provides support for obtaining effective interfaces for each type of platform available, taking into account the consequent differences in terms of tasks and their performance.
Towards a theory of natural language interfaces to databases BIBAFull-Text 149-157
  Ana-Maria Popescu; Oren Etzioni; Henry Kautz
The need for Natural Language Interfaces to databases (NLIs) has become increasingly acute as more and more people access information through their web browsers, PDAs, and cell phones. Yet NLIs are only usable if they map natural language questions to SQL queries correctly. As Shneiderman and Norman have argued, people are unwilling to trade reliable and predictable user interfaces for intelligent but unreliable ones. In this paper, we introduce a theoretical framework for reliable NLIs, which is the foundation for the fully implemented Precise NLI. We prove that, for a broad class of semantically tractable natural language questions, Precise is guaranteed to map each question to the corresponding SQL query. We report on experiments testing Precise on several hundred questions drawn from user studies over three benchmark databases. We find that over 80% of the questions are semantically tractable questions, which Precise answers correctly. Precise automatically recognizes the 20% of questions that it cannot handle, and requests a paraphrase. Finally, we show that Precise compares favorably with Mooney's learning NLI and with Microsoft's English Query product.
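The flavor of lexicon-driven question-to-SQL mapping can be suggested with a deliberately naive sketch. It omits Precise's matching algorithm and semantic tractability checks entirely, and the schema and lexicon here are invented:

```python
# Naive illustration of mapping question tokens to SQL fragments via a
# lexicon (hypothetical schema; nothing like Precise's actual matcher).
LEXICON = {
    "restaurants": ("table", "restaurant"),
    "chinese":     ("value", ("cuisine", "Chinese")),
    "seattle":     ("value", ("city", "Seattle")),
}

def question_to_sql(question):
    """Build a SELECT from recognized tokens; None means the question is
    out of scope and the system should ask for a paraphrase."""
    table, where = None, []
    for token in question.lower().strip("?").split():
        kind, item = LEXICON.get(token, (None, None))
        if kind == "table":
            table = item
        elif kind == "value":
            where.append("%s = '%s'" % item)
    if table is None:
        return None
    sql = "SELECT * FROM " + table
    if where:
        sql += " WHERE " + " AND ".join(sorted(where))
    return sql

sql = question_to_sql("What Chinese restaurants are in Seattle?")
```

The interesting part of the paper is exactly what this sketch lacks: a provable guarantee that, for semantically tractable questions, the mapping is correct, and a way to detect the questions it cannot handle.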
A flexible platform for building applications with life-like characters BIBAFull-Text 158-165
  Thomas Rist; Elisabeth Andre; Stephan Baldes
In recent years, an increasing number of R&D projects have started to deploy life-like characters for presentation tasks in a diverse range of application areas, including, for example, e-commerce, e-learning, and help systems. Depending on factors such as the degree of interactivity and the number of deployed characters, different architectures have been proposed for system implementation. In this contribution, we first analyse a number of existing user interfaces with presentation characters from an architectural point of view. We then introduce the MIAU platform and illustrate, by means of generation examples, how MIAU can be used for the realization of character applications with different conversational settings. Finally, we sketch a number of potential application fields for the MIAU platform.
Illustrative shadows: integrating 3D and 2D information displays BIBAFull-Text 166-173
  Felix Ritter; Henry Sonnet; Knut Hartmann; Thomas Strothotte
Many exploration and manipulation tasks benefit from a coherent integration of multiple views onto complex information spaces. This paper proposes the concept of Illustrative Shadows for a tight integration of interactive 3D graphics and schematic depictions using the shadow metaphor. The shadow metaphor provides an intuitive visual link between 3D and 2D visualizations integrating the different displays into one combined information display. Users interactively explore spatial relations in realistic shaded virtual models while functional correlations and additional textual information are presented on additional projection layers using a semantic network approach. Manipulations of one visualization immediately influence the others, resulting in an informationally and perceptibly coherent presentation.
Environment modification in a simulated human-robot interaction task: experimentation and analysis BIBAFull-Text 174-180
  Robert St. Amant; David B. Christian
This paper describes a novel approach to human-robot interaction, in which a user modifies a robot's environment to constrain its actions, rather than programming its controller. An HRI simulation of a maze navigation task is presented. An empirical evaluation shows that for this task, users prefer an environment modification strategy rather than a programming strategy as the difficulty of the task increases. Further, user alternation between the two types of strategy follows a clear pattern. A preliminary model extending the HRI simulation, one which allows the specification of more general navigation environments, is also presented.
Balancing efficiency and interpretability in an interactive statistical assistant BIBAFull-Text 181-188
  Robert St. Amant; Michael D. Dinardo; Nickie Buckner
Making an interface more efficient, in a task analysis sense, can make it more difficult for an automated reasoning system to infer user goals, by eliminating some user actions, by presenting information without requiring overt user selection, and so forth. We call the extent to which a system can make such inferences interpretability. In this paper we describe the tradeoff between interpretability and efficiency. We give some general heuristics for improving interpretability in a system and explain how they apply in an implemented system, an assistant for exploratory statistical analysis. Increased interpretability in the system is provided by navigation techniques for data exploration and a data mountain for organizing results; a formative evaluation illustrates some of the potential benefits of applying interpretability heuristics to an intelligent user interface.
A reliable natural language interface to household appliances BIBAFull-Text 189-196
  Alexander Yates; Oren Etzioni; Daniel Weld
As household appliances grow in complexity and sophistication, they become harder and harder to use, particularly because of their tiny display screens and limited keyboards. This paper describes a strategy for building natural language interfaces to appliances that circumvents these problems. Our approach leverages decades of research on planning and natural language interfaces to databases by reducing the appliance problem to the database problem; the reduction provably maintains desirable properties of the database interface. The paper goes on to describe the implementation and evaluation of the EXACT interface to appliances, which is based on this reduction. EXACT maps each English user request to an SQL query, which is transformed to create a PDDL goal, and uses the Blackbox planner [13] to map the planning problem to a sequence of appliance commands that satisfy the original request. Both theoretical arguments and experimental evaluation show that EXACT is highly reliable.
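The reduction the abstract describes can be sketched in miniature. All names below (the canned request, the table and column names, the goal syntax) are invented for illustration; the actual EXACT system uses a full natural language interface to databases and the Blackbox planner rather than the toy pattern match shown here.

```python
# Toy sketch of an EXACT-style reduction: English request -> SQL query ->
# PDDL goal. Hypothetical names throughout; not the paper's implementation.

def request_to_sql(request: str) -> str:
    # In EXACT a full NL-to-database interface produces this query; here we
    # simply pattern-match one example request.
    if request == "turn on the kitchen light":
        return "SELECT * FROM state WHERE device='kitchen_light' AND power='on'"
    raise ValueError("unparseable request")

def sql_to_pddl_goal(sql: str) -> str:
    # The WHERE clause describes the desired world state, which becomes a
    # conjunctive PDDL goal for the planner to achieve.
    where = sql.split("WHERE ")[1]
    conjuncts = [c.strip() for c in where.split("AND")]
    atoms = []
    for c in conjuncts:
        attr, val = c.split("=")
        atoms.append(f"({attr.strip()} {val.strip().strip(chr(39))})")
    return "(:goal (and " + " ".join(atoms) + "))"

sql = request_to_sql("turn on the kitchen light")
goal = sql_to_pddl_goal(sql)
print(goal)  # (:goal (and (device kitchen_light) (power on)))
```

The point of the reduction is that the database query, not the English sentence, is the interface between language understanding and planning.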
An adaptive stock tracker for personalized trading advice BIBAFull-Text 197-203
  Jungsoon Yoo; Melinda Gervasio; Pat Langley
The Stock Tracker is an adaptive recommendation system for trading stocks that automatically acquires content-based models of user preferences to tailor its buy and sell advice. The system incorporates an efficient algorithm that exploits the fixed structure of user models and relies on unobtrusive data-gathering techniques. In this paper, we describe our approach to personalized recommendation and its implementation in this domain. We also discuss experiments that evaluate the system's behavior on both human subjects and synthetic users. The results suggest that the Stock Tracker can rapidly adapt its advice to different types of users.
Recognition of freehand sketches using mean shift BIBAFull-Text 204-210
  Bo Yu
Freehand sketching is a natural and powerful means of interpersonal communication. But to date, it still cannot be supported effectively by human-computer interfaces. In this paper, we propose a robust method for sketch recognition. It uses mean shift, a nonparametric technique which can delineate arbitrarily shaped clusters, as a pre-process to analyze the direction-curvature joint space and suppress the severe noise of sketched strokes. Furthermore, it combines vertex detection and primitive shape approximation into a unified and incremental procedure which, by fully utilizing the visual features, can handle hybrid and smooth curves gracefully. Our method does not rely on any domain-specific knowledge, and therefore it can be easily integrated with other high-level applications.
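The mean-shift step the abstract relies on can be illustrated in one dimension. The sample values, the flat kernel, and the bandwidth below are invented; the paper works in the joint direction-curvature space, not on these toy numbers.

```python
# Minimal 1-D mean shift with a flat kernel: each point is repeatedly shifted
# to the mean of its neighbours, so noisy samples collapse onto cluster modes.

def mean_shift_1d(points, bandwidth, iters=50):
    modes = list(points)
    for _ in range(iters):
        new_modes = []
        for m in modes:
            neighbours = [p for p in points if abs(p - m) <= bandwidth]
            new_modes.append(sum(neighbours) / len(neighbours))
        if all(abs(a - b) < 1e-9 for a, b in zip(modes, new_modes)):
            break
        modes = new_modes
    # Merge near-identical converged modes into distinct cluster centres.
    centres = []
    for m in sorted(modes):
        if not centres or abs(m - centres[-1]) > bandwidth / 2:
            centres.append(m)
    return centres

# Noisy direction samples (degrees) from two straight-ish stroke segments.
samples = [1.0, 2.0, 0.5, 1.5, 89.0, 90.5, 91.0, 90.0]
print(mean_shift_1d(samples, bandwidth=5.0))  # two modes, near 1 and 90 degrees
```

Because the number of clusters is never specified in advance, the same procedure handles strokes with arbitrarily many direction changes.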
Inferring user goals from personality and behavior in a causal model of user affect BIBAFull-Text 211-218
  Xiaoming Zhou; Cristina Conati
We present a probabilistic model, based on Dynamic Decision Networks, to assess user affect from possible causes of emotional arousal. The model relies on the OCC cognitive theory of emotions and is designed to assess student affect during the interaction with an educational game. A key element of applying the OCC theory to assess user affect is knowledge of user goals. Thus, in this paper we focus on describing how our model infers these goals from user personality traits and interaction behavior. In particular, we illustrate how we iteratively defined the structure and parameters for this part of the model by using both empirical data collected through Wizard of Oz experiments and relevant psychological findings.

Accepted Posters

Navigating by knowledge BIBAFull-Text 221-223
  I. Alfaro; M. Zancanaro; M. Nardon; A. Guerzoni
In this paper, we introduce a framework to automatically build associations between pieces of information in different media. The key idea is to use a semantic model to co-index the entire information space and exploit the reasoning capabilities of the knowledge base in defining strategies of navigation. The main advantage over traditional hypermedia lies primarily in the ease with which the system can be updated since the new data is automatically connected to the rest of the information. Examples are given from a prototype hypermedia to navigate documentation about a fresco in the Buonconsiglio Castle in Trento, Italy.
Affective multi-modal interfaces: the case of McGurk effect BIBAFull-Text 224-226
  Azra N. Ali; Philip H. Marsden
This study is motivated by the increased need to understand human response to video links, 3G telephony and avatars. We focus on participants' responses to audiovisual presentations of talking heads, and examine the effect of noise and temporal misalignment of channels. We show that misalignment of the audio and visual channels not only causes a strange perceptual phenomenon -- the McGurk effect -- but also causes participants to apply extra mental effort, which is detectable from the physiological data collected. These data allow inferences to be drawn about the impact of the McGurk effect, thus providing an indication of stress levels. This illuminates both the mental and physical aspects of users interacting with multi-modal interfaces.
Safety and operating issues for mobile human-machine interfaces BIBAFull-Text 227-229
  Dirk Buhler; Sebastien Vignier; Paul Heisterkamp; Wolfgang Minker
In this paper we present recent research and development efforts carried out at DaimlerChrysler to integrate speech technology for use in mobile environments, notably in cars. Speech undeniably has the potential to considerably improve the safety and user-friendliness of human-machine interfaces, especially when complex technical functionalities and devices need to be accessed. As an example, we describe Linguatronic, a commercially available in-vehicle Command&Control dialog system. In addition, the SmartKom project demonstrates advanced concepts for intuitive multimodal computer interfaces in three different application scenarios.
Intelligent user interfaces in the living room: usability design for personalized television applications BIBAFull-Text 230-232
  Konstantinos Chorianopoulos; George Lekakos; Diomidis Spinellis
The purpose of this paper is to present our experience from the design of a personalized television application, and the implications for the design of interactive television applications in general. Personalized advertising is a gentle introduction to interactive television applications through a push paradigm that is closer to the established patterns of television use. While personalization is a practice widely used on the Internet, applying personalization techniques over digital television infrastructures presents significant obstacles, which we address with explicit design moves.
Power tools and composite tools: integrating automation with direct manipulation BIBAFull-Text 233-235
  John M. Daughtry; Robert St. Amant
This paper describes a drawing system that incorporates two novel interaction techniques based on analogies to physical tools. Power tools add limited autonomy in the form of rotators and movers for automated circular, linear, and Bezier-curve movement. Composite tools are user-constructed combinations of existing tools to create new functionality.
Designing intelligent and dynamic interfaces for communicating mathematics BIBAFull-Text 236-238
  Anton N. Dragunov; Jonathan L. Herlocker
Current approaches to presentation of mathematics (on paper or in electronic format) have usability drawbacks that make learning and appreciation of mathematics challenging and often frustrating. We propose the development of a software user toolkit aimed at facilitating the creation of highly usable and effective presentations of mathematical ideas. In this paper we identify three problems which readers of math documents usually experience, and give a vision of how we might address those problems by designing a more interactive and intelligent interface.
Towards individual service provisioning BIBAFull-Text 239-241
  Fredrik Espinoza
With the emergence of modularized component-based electronic services, such as Web Services and semantically tagged services, Individual Service Provisioning, wherein any user can be a service provider, can become a reality. We argue that there are three basic requirements for such an architecture: a personal service platform for using services, tools for creating services, and a network for sharing services, and we present our motivation, design, and implementation of these parts. With our enabling architecture we hope to demonstrate a feasible prototype system that stimulates the emergence of more specialized services for all users.
Recommendations without user preferences: a natural language processing approach BIBAFull-Text 242-244
  Michael Fleischman; Eduard Hovy
We examine the problems with automated recommendation systems when information about user preferences is limited. We equate the problem to one of content similarity measurement and apply techniques from Natural Language Processing to the domain of movie recommendation. We describe two algorithms, a naive word-space approach and a more sophisticated approach using topic signatures, and evaluate their performance compared to baseline, gold standard, and commercial systems.
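The naive word-space approach mentioned above can be sketched as bag-of-words cosine similarity. The titles, descriptions, and vocabulary below are invented for illustration; the paper's topic-signature variant is more sophisticated than this.

```python
# Word-space content similarity for recommendation without user preferences:
# represent each movie by a bag of description words, recommend the movie
# whose vector is closest (by cosine) to one the user already knows.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

movies = {  # invented mini-corpus
    "Space Saga": "space war rebels empire space battle",
    "Galaxy War": "space battle empire fleet war",
    "Rom-Com": "love city wedding comedy",
}
vectors = {t: Counter(d.split()) for t, d in movies.items()}

def recommend(seen: str) -> str:
    return max((t for t in movies if t != seen),
               key=lambda t: cosine(vectors[seen], vectors[t]))

print(recommend("Space Saga"))  # "Galaxy War": closest in word space
```

Nothing here depends on ratings or preference profiles, which is the point: similarity is computed from content alone.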
Social cues and awareness for recommendation systems BIBFull-Text 245-247
  Punit Gupta; Pearl Pu
DJ-boids: emergent collective behavior as multichannel radio station programming BIBAFull-Text 248-250
  Jesus Ibanez; Antonio F. Gomez-Skarmeta; Josep Blat
In this paper we propose to apply emergent collective behavior ideas to automatically program Internet multichannel radio stations. The proposed model simulates n virtual DJs (one per channel) playing songs at the same time. Every virtual DJ takes into account the songs played by the other ones, programming a sequence of songs whose order is also coherent. That is, every song played in a channel takes into account both the song previously played in the same channel and the songs being played in the other channels at the same time.
Interaction tactics for socially intelligent pedagogical agents BIBAFull-Text 251-253
  W. Lewis Johnson
Guidebots, or animated pedagogical agents, can enhance interactive learning environments by promoting deeper learning and improving the learner's subjective experience. Guidebots exploit a person's natural tendency to interact socially with computers, as documented by Reeves, Nass, and their colleagues. However, they also raise expectations of social abilities, and failure to meet those expectations can have unintended negative effects. The Social Intelligence Project is developing improved social interaction skills for guidebots. This paper describes efforts to model and implement interaction tactics for guidebots, i.e., dialog exchanges that are intended to achieve particular communicative and motivational effects. These are based on analyses of student-tutor interaction during computer-based learning.
Sticky notes for the semantic web BIBAFull-Text 254-256
  David R. Karger; Boris Katz; Jimmy Lin; Dennis Quan
Computer-based annotation is increasing in popularity as a mechanism for revising documents and sharing comments over the Internet. One reason behind this surge is that viewpoints, summaries, and notes written by others are often helpful to readers. In particular, these types of annotations can help users locate or recall relevant documents. We believe that this model can be applied to the problem of retrieval on the Semantic Web. In this paper, we propose a generalized annotation environment that supports richer forms of description such as natural language. We discuss how RDF can be used to model annotations and the connections between annotations and the documents they describe. Furthermore, we explore the idea of a question answering interface that allows retrieval based both on the text of the annotations and the annotations' associated metadata. Finally, we speculate on how these features could be pervasively integrated into an information management environment, making Semantic Web annotation a first-class player in terms of document management and retrieval.
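The RDF modelling the abstract mentions amounts to linking a note to its target document with triples. The URIs, predicate names, and note text below are invented; a real system would use a proper RDF store and vocabulary rather than this list-of-tuples sketch.

```python
# Modelling an annotation as subject-predicate-object triples that connect a
# note to the document it describes (hypothetical URIs and predicates).
triples = [
    ("urn:note:1", "rdf:type", "ann:Annotation"),
    ("urn:note:1", "ann:annotates", "urn:doc:42"),
    ("urn:note:1", "ann:body", "This section summarizes the 2002 results."),
]

def annotations_for(doc, store):
    # Find notes attached to the document, then return their bodies.
    notes = [s for s, p, o in store if p == "ann:annotates" and o == doc]
    return [o for s, p, o in store if p == "ann:body" and s in notes]

print(annotations_for("urn:doc:42", triples))
```

Because the annotation body is ordinary text held in the graph, a retrieval interface can index it exactly like document text.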
End-user debugging for e-commerce BIBAFull-Text 257-259
  Henry Lieberman; Earl Wagner
One of the biggest unaddressed challenges for the digital economy is what to do when electronic transactions go wrong. Consumers are frustrated by interminable phone menus, and long delays to problem resolution. Businesses are frustrated by the high cost of providing quality customer service. We believe that many simple problems, such as mistyped numbers or lost orders, could be easily diagnosed if users were supplied with end-user debugging tools, analogous to tools for software debugging. These tools can show the history of actions and data, and provide assistance for keeping track of and testing hypotheses. These tools would benefit not only users, but businesses as well by decreasing the need for customer service.
Beyond broadcast BIBAFull-Text 260-262
  Kevin Livingston; Mark Dredze; Kristian Hammond; Larry Birnbaum
The work presented in this paper takes a novel approach to the task of providing information to viewers of broadcast news. Instead of considering the broadcast news as the end product, this work uses it as a starting point to dynamically build an information space for the user to explore. This information space is designed to satisfy the user's information needs, by containing more breadth, depth, and points of view than the original broadcast story. The architecture and current implementation are discussed, and preliminary results from the analysis of some of its components are presented.
MovieLens unplugged: experiences with an occasionally connected recommender system BIBAFull-Text 263-266
  Bradley N. Miller; Istvan Albert; Shyong K. Lam; Joseph A. Konstan; John Riedl
Recommender systems have changed the way people shop online. Recommender systems on wireless mobile devices may have the same impact on the way people shop in stores. We present our experience with implementing a recommender system on a PDA that is occasionally connected to the network. This interface helps users of the MovieLens movie recommendation service select movies to rent, buy, or see while away from their computer. The results of a nine-month field study show that although there are several challenges to overcome, mobile recommender systems have the potential to provide value to their users today.
Intelligent dialog overcomes speech technology limitations: the SENECa example BIBAFull-Text 267-269
  Wolfgang Minker; Udo Haiber; Paul Heisterkamp; Sven Scheible
We present a primarily speech-based user interface to a wide range of entertainment, navigation and communication applications for use in vehicles. The multimodal dialog enables the system to uniquely identify one of 79,000 place name variants using an active vocabulary of only 3,000 words at any given time. Low confidence in speech recognition and word-level ambiguities are compensated for in flexible clarification dialogs with the user. The underlying dialog concept was developed in the framework of the EU-project SENECa. Some recent evaluation results of the SENECa system demonstrator are discussed in the paper.
Enhancing conversational flexibility in multimodal interactions with embodied lifelike agent BIBAFull-Text 270-272
  Kyoshi Mori; Adam Jatowt; Mitsuru Ishizuka
Research carried out in authoring systems for embodied-agent-based presentations has traditionally been confined to scripted interactive presentations. In recent years, however, there has been a gradual shift to adopting a more dynamic approach that supports a higher degree of flexibility in user-agent interactivity, where the user is allowed to engage in more natural conversations with the agent. In this paper, we will describe a conversational module based on techniques used in chat-bots that we have implemented as an extension to our previously developed agent authoring system.
Summarizing archived discussions: a beginning BIBAFull-Text 273-276
  Paula S. Newman; John C. Blitzer
This paper describes an approach to digesting threads of archived discussion lists by clustering messages into approximate topical groups, and then extracting shorter overviews and longer summaries for each group.
On-demand geo-referenced terrafly data miner BIBAFull-Text 277-279
  Naphtali Rishe; Maxim Chekmasov; Marina Chekmasova; Scott Graham; Ian De Felipe
We present a comprehensive Internet data extraction tool adopted for the TerraFly Geographic Information System (GIS). TerraFly is a web-enabled system that allows users to virtually fly over remotely sensed data, including satellite imagery and aerial photography, using a standard Internet browser. The data extraction tool presented here is designed to augment the user's virtual flight experience with extensive data relevant to any given geographical point along the virtual flight path. The data presented to the user is retrieved from several server-side databases and is collected from the Internet data providers using our patented data extraction technology. Some data elements are presented to the user as overlays, some in popup windows, and some via hyper-linking to third-party web sites.
Information filtering using bayesian networks: effective user interfaces for aviation weather data BIBAFull-Text 280-283
  Corinne Clinton Ruokangas; Ole J. Mengshoel
Weather is a complex, dynamic process with tremendous impact on aviation. While pilots often have access to large amounts of aviation weather data, they find it difficult and time-consuming to identify weather hazards, due to the sheer amount and cryptic formatting of the data. To address this challenge, we have developed information filtering concepts based on a unified Bayesian network model, integrating text and graphical weather data in the context of specific mission, equipment and personal profiles. Based on these concepts, we have implemented three applications, all of which were compared to existing technology. Using one of the applications, the AWARE Preflight system, pilots found significantly more hazards in about half the time compared to using the current technology.
Adapting to the user's internet search strategy on small devices BIBAFull-Text 284-286
  Jean-David Ruvini
World Wide Web search engines typically return thousands of results to users. To avoid users browsing through the whole list of results, search engines use ranking algorithms to order the list according to predefined criteria. In this paper, we present Toogle, a front-end to the Google search engine for mobile phones offering web browsing. For a given search query, Toogle first ranks results using Google's algorithm and, as the user browses through the result list, uses machine learning techniques to infer a model of her search goal and to adapt the order in which yet-unseen results are presented accordingly. We report preliminary experimental results that show the effectiveness of this approach.
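The adaptive step can be sketched very simply: build a profile from the results the user has opened and re-score unseen results against it. The word-overlap scorer and example queries below are invented; Toogle's actual features and learner are not specified in this abstract.

```python
# Hedged sketch of search-result re-ranking from implicit feedback: results
# the user has already opened define a term profile, and yet-unseen results
# are re-ordered by how well they match it.
from collections import Counter

def rerank(unseen, clicked_titles):
    profile = Counter(w for t in clicked_titles for w in t.lower().split())
    def score(title):
        return sum(profile[w] for w in title.lower().split())
    return sorted(unseen, key=score, reverse=True)

clicked = ["Python tutorial for beginners"]
unseen = ["Snake care guide", "Python tutorial advanced", "Monty Python sketches"]
print(rerank(unseen, clicked))  # programming results rise to the top
```

On a phone-sized screen, where only a handful of results are visible at once, even a crude re-ordering like this can save substantial scrolling.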
Towards intuitive interaction for end-user programming BIBFull-Text 287-289
  Eric Schwarzkopf; Mathias Bauer; Dietmar Dengler
A zero-input interface for leveraging group experience in web browsing BIBAFull-Text 290-292
  Taly Sharon; Henry Lieberman; Ted Selker
The experience of a trusted group of colleagues can help users improve the quality and focus of their browsing and searching activities. How could a system provide such help, when and where the users need it, without disrupting their normal work activities? This paper describes the Context-Aware Proxy based System (CAPS), an agent that recommends pages and annotates links to reveal their relative popularity among the user's colleagues, matched with their automatically computed interest profiles. A Web proxy tracks browsing habits, so CAPS requires no explicit input from the user. We review here the CAPS design principles and implementation. We tested user satisfaction with the interface and the accuracy of the ranking algorithm. These experiments indicate that CAPS has high potential to support effective ranking for quality judgment by users.
Abbreviated text input BIBAFull-Text 293-296
  Stuart M. Shieber; Ellie Baker
We address the problem of improving the efficiency of natural language text input under degraded conditions (for instance, on PDAs or cell phones or by disabled users) by taking advantage of the informational redundancy in natural language. Previous approaches to this problem have been based on the idea of prediction of the text, but these require the user to take overt action to verify or select the system's predictions. We propose taking advantage of the duality between prediction and compression. We allow the user to enter text in compressed form, in particular, using a simple stipulated abbreviation method that reduces characters by about 30% yet is simple enough that it can be learned easily and generated relatively fluently. Using statistical language processing techniques, we can decode the abbreviated text with a residual word error rate of about 3%, and we expect that simple adaptive methods can improve this to about 1.5%. Because the system's operation is completely independent from the user's, the overhead from cognitive task switching and attending to the system's actions online is eliminated, opening up the possibility that the compression-based method can achieve text input efficiency improvements where the prediction-based methods have not.
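The prediction/compression duality can be illustrated with a toy noisy-channel decoder. The vowel-dropping rule and the miniature unigram counts below are invented stand-ins; the paper's stipulated abbreviation scheme and statistical decoder are richer than this.

```python
# Toy decoder for abbreviated text input: the user types a compressed form,
# and a unigram "language model" picks the most likely expansion among the
# words consistent with the abbreviation rule.

def abbreviate(word: str) -> str:
    # Illustrative stipulated rule: keep the first letter, drop later vowels.
    return word[0] + "".join(c for c in word[1:] if c not in "aeiou")

# Invented corpus frequencies standing in for a statistical language model.
unigrams = {"the": 100, "that": 40, "this": 35, "then": 10, "than": 8}

def decode(abbrev: str) -> str:
    candidates = [w for w in unigrams if abbreviate(w) == abbrev]
    return max(candidates, key=unigrams.get) if candidates else abbrev

print(decode("th"))   # "the": the most frequent word abbreviating to "th"
print(decode("thn"))  # "then" beats "than" on corpus frequency
```

Because decoding happens after entry, the user never has to inspect or confirm predictions while typing, which is exactly the cognitive overhead the abstract argues against.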
Search for efficient device-dependent action sequences in the user interface BIBAFull-Text 297-299
  A. Simpson; Robert St. Amant
This paper describes a design tool, under development, that identifies efficient low-level action sequences in the user interface, accounting for the relationships between the physical properties of an input device and the requirements of primitive tasks. A device representation and a task taxonomy are presented. These, along with a library of efficiency measures for specific devices, provide information to steer search through the space of action sequences.
An experiment in automated humorous output production BIBAFull-Text 300-302
  Oliviero Stock; Carlo Strapparava
Computational humor will be needed in interfaces, no less than other cognitive capabilities. There are many practical settings where computational humor will add value. Among them are: business world applications (such as advertisement, e-commerce, etc.), general computer-mediated communication and human-computer interaction, increasing the friendliness of natural language interfaces, and educational and edutainment systems. In particular, in the educational field it is an important resource for getting selective attention, helping in memorizing names and situations, etc. And we all know how well it works with children. Automated humor production in general is a very difficult task, but we wanted to prove that some results can be achieved even in a short time. We have worked on a concrete limited problem, as the core of the European Project HAHAcronym. The main goal of HAHAcronym has been the realization of an acronym ironic re-analyzer and generator as a proof of concept in a focalized but non-restricted context. In order to implement this system some general tools have been adapted, or developed, for the humorous context. The system's output has been submitted to evaluation by human subjects, with very positive results.
EduNuggets: an intelligent environment for managing and delivering multimedia education content BIBAFull-Text 303-306
  Eleni Stroulia; Kavita Jari
Today's teaching and learning practices are evolving to leverage the continuously increasing information available on the web, on all conceivable subject matters. This wealth of information presents a great challenge: how to provide an integrated, authoritative, extendible and shareable collection of related multimedia education materials. In this paper, we describe EduNuggets (http://www.cs.ualberta.ca/~stroulia/EduNuggets), our intelligent repository for multimedia educational materials.
An emotional interface for a music gathering application BIBAFull-Text 307-309
  Albert van Breemen; Christoph Bartneck
Listening to music while travelling is a pleasant activity. The latest MP3 players demonstrate that storage and management of music will not be a problem in the near future. Besides listening to music, the user might also want to gather new music from the Internet. We propose a music gathering application that helps the user to collect music and that is able to proactively search and download music based on the user's music preferences. Furthermore, we developed an emotional interface character that provides instant and natural feedback on the status of the application.
Towards an architecture for intelligent control of narrative in interactive virtual worlds BIBAFull-Text 310-312
  R. Michael Young; Mark Riedl
The creation of novel, engaging and dynamic interactive stories presents a unique challenge to the designers of systems for interactive entertainment, education and training. Unlike conventional narrative media, an interactive narrative-based system may be required to generate its own story structure, determine the appropriate interface elements to use to convey the story's action and manage the effective interaction of a user within the story as it plays out. Here we describe the architecture of the Mimesis system, which integrates a 3D graphical gaming environment with intelligent techniques for generating and controlling interaction with and within a narrative in order to create an engaging and coherent user experience.
Scripting embodied agents behaviour with CML: character markup language BIBAFull-Text 313-316
  Yasmine Arafa; Abe Mamdani
Embodied agents present an ongoing and challenging agenda for research in multi-modal user interfaces and human-computer interaction. Such agent metaphors will only be widely applicable to online applications when there is a standardised way to map underlying engines to the visual presentation of the agents. This paper delineates the functions and specifications of a mark-up language for scripting the animation of virtual characters. The language is called Character Mark-up Language (CML) and is an XML-based character attribute definition and animation scripting language designed to aid in the rapid incorporation of lifelike characters/agents into online applications or virtual reality worlds. This multi-modal scripting language is designed to be easily understandable by human animators and easily generated by a software process such as software agents. CML is constructed based jointly on the motion and multi-modal capabilities of virtual life-like figures. The paper further illustrates the constructs of the language and describes a real-time execution architecture that demonstrates the use of such a language as a 4G language to easily utilise and integrate MPEG-4 media objects in online interfaces and virtual environments.

Accepted Demo Papers

MORE: model recovery from visual interfaces for multi-device application design BIBFull-Text 318
  Lawrence D. Bergman; Yves Gaeremynck; Tessa Lau
Interactive problem solving in an intelligent virtual environment BIBFull-Text 319
  Carlos Calderon; Marc Cavazza; Daniel Diaz
Intelligent user interface design for teachable agent systems BIBAFull-Text 320
  J. Davis; K. Leelawong; K. Belynne; G. Biswas; N. Vye; R. Bodenheimer; J. Bransford
Betty's Brain [1] is a learning-by-teaching environment where students "teach" Betty by constructing a concept map that models relations between domain concepts. The relations can be causal, hierarchical, and property links.
Demonstration of the complex event recognition architecture for multimodal event parsing BIBAFull-Text 321
  Will Fitzgerald; R. James Firby; Michael Hannemann
An important criterion for many intelligent user interfaces is that the interface be multimodal, that is, allow the user to interact with the system using a variety of different input channels. In addition to user interface interactions per se, the system may need to process input from multiple channels to make decisions in response to interface interactions or for other related purposes. The multimodal event parsing system described in our paper has been implemented in a working system called CERA, the Complex Event Recognition Architecture. CERA, developed under contract with NASA, has been used to identify complex events across multiple sensor channels in an advanced life support system demonstration project. We will demonstrate:
  • The CERA event recognition language,
  • The CERA event recognition engine at work,
  • A custom development environment for writing and debugging CERA event recognizers,
  • Visualization tools for complex event display,
  • Integrating CERA with various toolkits and projects.
The CERA event recognition engine is written in Common Lisp [1] and has a custom development environment with visualization tools based within the Eclipse extensible IDE [2]. This combination provides an easy-to-use development environment that can be used remotely, while maintaining the interactive flexibility of Lisp. As well as being a stand-alone event recognition system, CERA has also been tightly integrated with the RAP execution system [3] and the I/NET Conversational Interface system for dialogue management [4]. This combination allows the creation of human/computer interfaces for dynamic systems that make use of natural language, multi-channel controls and sensors, and other available physical context. Our demonstration will consist of a number of different components designed to illustrate the various aspects of CERA and our approach to building multimodal interfaces. The first demonstration will show CERA processing and combining events from multiple input streams, including examples from the NASA advanced life support system domain. The emphasis of this demonstration will be to show how event recognizers are built and how they work in practice. Our second demonstration will illustrate the CERA IDE and visualization tools. These tools allow the programming of a remote CERA system and the monitoring and debugging of its operation. Techniques for monitoring recognition progress and examining partial recognition state will be examined. Finally, we will demonstrate a more complex interface that combines natural language input with various non-linguistic input streams. An automotive telematics application will form the basis of this demonstration. The audience will be encouraged to participate in this demonstration.
nuSketch battlespace: a demonstration BIBAFull-Text 322
  Kenneth D. Forbus; Jeffrey Usher; Vernell Chapman
Sketching provides a natural means of interaction for many spatially-oriented tasks. One task where sketching is used extensively is when military planners are formulating battle plans, called Courses of Action (COAs). This paper describes a system we have built, nuSketch Battlespace (nSB), which provides a sketching interface for creating COAs. The system is described in the paper "Sketching for Military Courses of Action" in these proceedings. The demonstration will highlight:
  • How we engineer around the need for recognition in sketching systems (a key feature of the nuSketch approach to multimodal interfaces), so that we can focus instead on understanding.
  • The use of comic graphs to manipulate multiple states and the relationships between them, for developing and visualizing complex plans.
  • The spatial reasoning carried out by nuSketch Battlespace, including the use of qualitative topology and Voronoi diagrams in computing spatial relationships, and our methods for path-finding and position-finding.
  • The use of analogy to generate enemy intent hypotheses based on previously drawn sketches.
  • Haystack: a platform for creating, organizing and visualizing semistructured information BIBFull-Text 323
      David Huynh; David R. Karger; Dennis Quan; Vineet Sinha
    TellMaris and deep map: two navigational assistants BIBAFull-Text 324
      Katri Laakso; Christian Kray
    This demo will present TellMaris and Deep Map, two systems offering navigational assistance and other services to an untrained user. We will put an emphasis on different ways to present route instructions on mobile devices.
    Beyond broadcast: a demo BIBAFull-Text 325
      Kevin Livingston; Mark Dredze; Kristian Hammond; Larry Birnbaum
    This research discusses a method for delivering just-in-time information to television viewers to provide more depth and more breadth to television broadcasts. A novel aspect of this research is that it uses broadcast news as a starting point for gathering information regarding specific stories, as opposed to considering the broadcast version to be the end of the viewer's exploration. This work is implemented in Cronkite, a system that provides viewers with expanded coverage of broadcast news stories.
    AttrActive windows: active windows for pervasive computing applications BIBAFull-Text 326
      Les Nelson; Laurent Denoue; Elizabeth Churchill
    We introduce the AttrActive Windows user interface, a novel approach for presenting interactive content on large-screen, interactive, digital bulletin boards. Moving away from the desktop metaphor, AttrActive Windows are dynamic, non-uniform windows that can appear in different orientations and have autonomous behaviours to attract passers-by and invite interactions.
    Towards a theory of natural language interfaces to databases BIBAFull-Text 327
      Ana-Maria Popescu; Oren Etzioni; Henry Kautz
    The need for Natural Language Interfaces (NLIs) to databases has become increasingly acute as more nontechnical people access information through their web browsers, PDAs and cell phones. Yet NLIs are only usable if they map natural language questions to SQL queries correctly. We introduce the Precise NLI [2], which reduces the semantic interpretation challenge in NLIs to a graph matching problem. Precise uses the max-flow algorithm to efficiently solve this problem. Each max-flow solution corresponds to a possible semantic interpretation of the sentence. Precise collects max-flow solutions, discards the solutions that do not obey syntactic constraints and retains the rest as the basis for generating SQL queries corresponding to the question. The syntactic information is extracted from the parse tree corresponding to the given question, which is computed by a statistical parser [1]. For a broad, well-defined class of semantically tractable natural language questions, Precise is guaranteed to map each question to the corresponding SQL query. Semantically tractable questions correspond to a natural, domain-independent subset of English that can be efficiently and accurately interpreted as nonrecursive Datalog clauses. Precise is transportable to arbitrary databases, such as the Restaurants, Jobs and Geography databases used in our implementation. Examples of semantically tractable questions include: "What Chinese restaurants with a 3.5 rating are in Seattle?", "What are the areas of US states with large populations?", "What jobs require 4 years of experience and desire a B.S.CS degree?". Given a question which is not semantically tractable, Precise recognizes it as such and informs the user that it cannot answer it. Given a semantically tractable question, Precise computes the set of non-equivalent SQL interpretations corresponding to the question. 
If a unique such SQL interpretation exists, Precise outputs it together with the corresponding result set obtained by querying the current database. If the set contains more than one SQL interpretation, the natural language question is ambiguous in the context of the current database. In this case, Precise asks for the user's help in determining which interpretation is the correct one. Our experiments have shown that Precise has high coverage and accuracy over common English questions. In future work, we plan to explore increasingly broad classes of questions and include Precise as a module in a full-fledged dialog system. An important direction for future work is helping users understand the types of questions Precise cannot handle via dialog, enabling them to build an accurate mental model of the system and its capabilities. Also, our own group's work on the EXACT natural language interface [3] builds on Precise and on the underlying theoretical framework. EXACT composes an extended version of Precise with a sound and complete planner to develop a powerful and provably reliable interface to household appliances.
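The core reduction, each question token must be assigned to a unique compatible database element, can be illustrated with a toy sketch. This is not the Precise implementation (which uses a max-flow formulation and syntactic filtering); it uses the equivalent bipartite-matching view, and the compatibility table for the restaurants question is invented for the example.

```python
# Toy illustration of reducing semantic interpretation to graph
# matching: each question token is matched to one compatible database
# element, and a complete matching is one candidate interpretation
# (which would then be filtered by syntactic constraints).

# Hypothetical token-to-element compatibility for the question
# "What Chinese restaurants are in Seattle?"
compat = {
    "Chinese":     ["cuisine=Chinese"],
    "restaurants": ["table=Restaurants"],
    "Seattle":     ["city=Seattle"],
}

def find_matching(compat):
    """Kuhn's augmenting-path algorithm for maximum bipartite matching."""
    match = {}  # database element -> question token

    def try_assign(token, seen):
        for elem in compat[token]:
            if elem in seen:
                continue
            seen.add(elem)
            # Take a free element, or re-route its current token.
            if elem not in match or try_assign(match[elem], seen):
                match[elem] = token
                return True
        return False

    for token in compat:
        try_assign(token, set())
    return match

matching = find_matching(compat)
# Every token is covered, so this matching is one candidate
# interpretation, ready to be turned into a SQL query.
print(matching)
```

Multiple complete matchings would correspond to the ambiguous case described above, where Precise asks the user which interpretation is intended.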
    Building applications with life-like characters: the MIAU platform BIBFull-Text 328
      Thomas Rist; Elisabeth Andre; Stephan Baldes
    Towards a non-linear narrative construction BIBAFull-Text 329
      Vidya Setlur; David A. Shamma; Kristian Hammond; Sanjay Sood
    This article describes the implementation of a system that 'imagines' while a movie is being played by finding associations in the movie's content and presenting them to the viewer. This related information, in the form of images and movie clips, helps enhance the viewer's experience in a new immersive environment.
    EROS: explorer for RDFS-based ontologies BIBFull-Text 330
      Richard Vdovjak; Peter Barna; Geert-Jan Houben
    An end-user tool for e-commerce debugging BIBAFull-Text 331
      Earl Wagner; Henry Lieberman
    We demonstrate Woodstein, a software agent that tracks user interaction with e-commerce Web sites through a browser, and relates the browsing events to high-level models of complex, multi-step processes such as purchases or account transfers. Woodstein explains action steps and data in an understandable form, visualizes action history, and aids the user in exploring the causes of errors.
    Personalized trading recommendation system BIBAFull-Text 332
      Jungsoon Yoo; Melinda Gervasio; Pat Langley
    The Stock Tracker is a personalized recommendation system for trading stocks. The system tailors its buy, sell, and hold recommendations to individual users through automatically acquired content-based models of user preferences. It relies on data gathered unobtrusively during the natural course of interacting with a user.