
ACM Transactions on Interactive Intelligent Systems 4

Editors: Anthony Jameson; Krzysztof Gajos
Standard No: ISSN 2160-6455, EISSN 2160-6463
Links: Journal Home Page | ACM Digital Library | Table of Contents
  1. TIIS 2014-04 Volume 4 Issue 1
  2. TIIS 2014-07 Volume 4 Issue 2
  3. TIIS 2014-10 Volume 4 Issue 3
  4. TIIS 2015-01 Volume 4 Issue 4

TIIS 2014-04 Volume 4 Issue 1

PromotionRank: Ranking and Recommending Grocery Product Promotions Using Personal Shopping Lists (Article 1)
  Petteri Nurmi; Antti Salovaara; Andreas Forsblom; Fabian Bohnert; Patrik Floréen
We present PromotionRank, a technique for generating a personalized ranking of grocery product promotions based on the contents of the customer's personal shopping list. PromotionRank consists of four phases. First, information retrieval techniques are used to map shopping list items onto potentially relevant product categories. Second, since customers typically buy more items than what appear on their shopping lists, the set of potentially relevant categories is expanded using collaborative filtering. Third, we calculate a rank score for each category using a statistical interest criterion. Finally, the available promotions are ranked using the newly computed rank scores. To validate the different phases, we consider 12 months of anonymized shopping basket data from a large national supermarket. To demonstrate the effectiveness of PromotionRank, we also present results from two user studies. The first user study was conducted in a controlled setting using shopping lists of different lengths, whereas the second study was conducted within a large national supermarket using real customers and their personal shopping lists. The results of the two studies demonstrate that PromotionRank is able to identify promotions that are considered both relevant and interesting. As part of the second study, we used PromotionRank to identify relevant promotions to advertise and measure the influence of the advertisements on purchases. The results of this evaluation indicate that PromotionRank is also capable of targeting advertisements, improving sales compared to a baseline that selects random advertisements.
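To make the four-phase pipeline concrete, here is a toy sketch in Python. All data, names, and the scoring rule below are illustrative assumptions, not the authors' actual implementation: simple dictionary lookups stand in for the information-retrieval matching, collaborative filtering, and statistical interest criterion described in the abstract.

```python
# Phase 1 stand-in: map shopping-list items onto product categories.
ITEM_TO_CATEGORY = {"milk": "dairy", "yogurt": "dairy", "bread": "bakery"}

# Phase 2 stand-in: expand with frequently co-purchased categories.
CO_PURCHASED = {"dairy": ["cereal"], "bakery": ["spreads"]}

# Phase 3 stand-in: per-category interest scores from basket data.
CATEGORY_SCORE = {"dairy": 0.9, "bakery": 0.7, "cereal": 0.5, "spreads": 0.3}

def rank_promotions(shopping_list, promotions):
    """promotions: list of (promotion_name, category) pairs."""
    # Phase 1: find potentially relevant categories.
    relevant = {ITEM_TO_CATEGORY[i] for i in shopping_list if i in ITEM_TO_CATEGORY}
    # Phase 2: expand the relevant set.
    for cat in list(relevant):
        relevant.update(CO_PURCHASED.get(cat, []))
    # Phase 4: rank the available promotions by their category's score.
    scored = [(p, CATEGORY_SCORE.get(c, 0.0)) for p, c in promotions if c in relevant]
    return [p for p, _ in sorted(scored, key=lambda x: -x[1])]

promos = [("2-for-1 muesli", "cereal"), ("cheese discount", "dairy"),
          ("soda sale", "beverages")]
print(rank_promotions(["milk", "bread"], promos))
# the soda promotion falls outside the relevant categories and is filtered out
```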
Experiments with Mobile Drama in an Instrumented Museum for Inducing Conversation in Small Groups (Article 2)
  Charles Callaway; Oliviero Stock; Elyon Dekoven
Small groups can have a better museum visit when that visit is both a social and an educational occasion. The unmediated discussion that often ensues during a shared cultural experience, especially when it is with a small group whose members already know each other, has been shown by ethnographers to be important for a more enriching experience. We present DRAMATRIC, a mobile presentation system that delivers hour-long dramas to small groups of museum visitors. DRAMATRIC continuously receives sensor data from the museum environment during a museum visit and analyzes group behavior from that data. On the basis of that analysis, DRAMATRIC delivers a series of dynamically coordinated dramatic scenes about exhibits that the group walks near, each designed to stimulate group discussion. Each drama presentation contains small, complementary differences in the narrative content heard by the different members of the group, leveraging the tension/release cycle of narrative to naturally lead visitors to fill in missing pieces in their own drama by interacting with their fellow group members. Using four specific techniques to produce these coordinated narrative variations, we describe two experiments: one in a neutral, nonmobile environment, and the other a controlled experiment with a full-scale drama in an actual museum. The first experiment tests the hypothesis that narrative differences will lead to increased conversation compared to hearing identical narratives, whereas the second experiment tests whether switching from presenting a drama using one technique to using another technique for the subsequent drama will result in increased conversation. The first experiment shows that hearing coordinated narrative variations can in fact lead to significantly increased conversation. The second experiment also serves as a framework for future studies that evaluate strategies for similar adaptive systems.
Introduction to the Special Issue on Interactive Computational Visual Analytics (Article 3)
  Remco Chang; David S. Ebert; Daniel Keim
This editorial introduction describes the aims and scope of the ACM Transactions on Interactive Intelligent Systems special issue on interactive computational visual analytics. It explains why visual analytics is crucial to the growing needs surrounding data analysis, and it shows how the four articles selected for this issue reflect this theme.
Interactive Statistics with Illmo (Article 4)
  Jean-Bernard Martens
Progress in empirical research relies on adequate statistical analysis and reporting. This article proposes an alternative approach to statistical modeling that is based on an old but mostly forgotten idea, namely Thurstone modeling. Traditional statistical methods assume that either the measured data, in the case of parametric statistics, or the rank-order transformed data, in the case of nonparametric statistics, are samples from a specific (usually Gaussian) distribution with unknown parameters. Consequently, such methods should not be applied when this assumption is not valid. Thurstone modeling similarly assumes the existence of an underlying process that obeys an a priori assumed distribution with unknown parameters, but combines this underlying process with a flexible response mechanism that can be either continuous or discrete and either linear or nonlinear. One important advantage of Thurstone modeling is that traditional statistical methods can still be applied on the underlying process, irrespective of the nature of the measured data itself. Another advantage is that Thurstone models can be graphically represented, which helps to communicate them to a broad audience. A new interactive statistical package, Interactive Log Likelihood MOdeling (Illmo), was specifically designed for estimating and rendering Thurstone models and is intended to bring Thurstone modeling within the reach of persons who are not experts in statistics. Illmo is unique in the sense that it provides not only extensive graphical renderings of the data analysis results but also an interface for navigating between different model options. In this way, users can interactively explore different models and decide on an adequate balance between model complexity and agreement with the experimental data. Hypothesis testing on model parameters is also made intuitive and is supported by both textual and graphical feedback. The flexibility and ease of use of Illmo mean that it is also potentially useful as a didactic tool for teaching statistics.
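A miniature example of the Thurstone idea described above: a latent Gaussian "sensation" scale combined with a discrete response mechanism, fitted by maximizing the log likelihood. The data (14 of 20 pairwise preferences for stimulus A over B) and the grid search are toy assumptions, not Illmo's estimator.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def log_likelihood(delta, wins, losses):
    """delta: latent mean difference between two stimuli;
    wins/losses: how often A was preferred / not preferred over B."""
    p = phi(delta)
    return wins * math.log(p) + losses * math.log(1 - p)

wins, losses = 14, 6            # A preferred in 14 of 20 comparisons
grid = [i / 100 for i in range(-300, 301)]
delta_hat = max(grid, key=lambda d: log_likelihood(d, wins, losses))
print(round(delta_hat, 2))      # MLE satisfies phi(delta) = 14/20
```

The estimate lands near the inverse-CDF value of 0.7 (about 0.52), illustrating how a traditional statistical method (maximum likelihood) operates on the latent process rather than on the discrete responses directly.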
Evaluation of Normal Model Visualization for Anomaly Detection in Maritime Traffic (Article 5)
  Maria Riveiro
Monitoring dynamic objects in surveillance applications is normally a demanding activity for operators, not only because of the complexity and high dimensionality of the data but also because of other factors like time constraints and uncertainty. Timely detection of anomalous objects or situations that need further investigation may reduce operators' cognitive load. Surveillance applications may include anomaly detection capabilities, but their use is not widespread, as they usually generate a high number of false alarms, they do not provide appropriate cognitive support for operators, and their outcomes can be difficult to comprehend and trust. Visual analytics can bridge the gap between computational and human approaches to detecting anomalous behavior in traffic data, making this process more transparent. As a step toward this goal of transparency, this article presents an evaluation that assesses whether visualizations of normal behavioral models of vessel traffic support two of the main analytical tasks specified during our field work in maritime control centers. The evaluation combines quantitative and qualitative usability assessments. The quantitative evaluation, which was carried out with a proof-of-concept prototype, reveals that participants who used the visualization of normal behavioral models outperformed the group that did not do so. The qualitative assessment shows that domain experts have a positive attitude toward the provision of automatic support and the visualization of normal behavioral models, as these aids may reduce reaction time and increase trust in and comprehensibility of the system.
Employing a Parametric Model for Analytic Provenance (Article 6)
  Yingjie Victor Chen; Zhenyu Cheryl Qian; Robert Woodbury; John Dill; Chris D. Shaw
We introduce a propagation-based parametric symbolic model approach to supporting analytic provenance. This approach combines a script language to capture and encode the analytic process and a parametrically controlled symbolic model to represent and reuse the logic of the analysis process. Our approach first appeared in a visual analytics system called CZSaw. Using a script to capture the analyst's interactions at a meaningful system action level allows the creation of a parametrically controlled symbolic model in the form of a Directed Acyclic Graph (DAG). Using the DAG allows propagating changes. Graph nodes correspond to variables in CZSaw scripts, which are results (data and data visualizations) generated from user interactions. The user interacts with variables representing entities or relations to create the next step's results. Graph edges represent dependency relationships among nodes. Any change to a variable triggers the propagation mechanism to update downstream dependent variables and in turn updates data views to reflect the change. The analyst can reuse parts of the analysis process by assigning new values to a node in the graph. We evaluated this symbolic model approach by solving three IEEE VAST Challenge contest problems (from IEEE VAST 2008, 2009, and 2010). In each of these challenges, the analyst first created a symbolic model to explore, understand, analyze, and solve a particular subproblem and then reused the model via its dependency graph propagation mechanism to solve similar subproblems. With the script and model, CZSaw supports the analytic provenance by capturing, encoding, and reusing the analysis process. The analyst can recall the chronological states of the analysis process with the CZSaw script and may interpret the underlying rationale of the analysis with the symbolic model.
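The dependency-graph propagation mechanism described above can be sketched in a few lines of Python. The node functions and graph below are illustrative assumptions, not CZSaw's actual model: nodes hold values computed from their parents, and assigning a new value to one node recomputes all downstream dependents.

```python
class Node:
    def __init__(self, graph, name, func=None, deps=()):
        self.graph, self.name, self.func, self.deps = graph, name, func, list(deps)
        self.value = None
        graph.nodes[name] = self

class Graph:
    def __init__(self):
        self.nodes = {}

    def set(self, name, value):
        """Assign a new value to a source node and propagate downstream."""
        self.nodes[name].value = value
        self._propagate(name)

    def _propagate(self, changed):
        # Recompute every node that depends on the changed node;
        # recursion follows the DAG edges to downstream dependents.
        for node in self.nodes.values():
            if changed in node.deps:
                node.value = node.func(*(self.nodes[d].value for d in node.deps))
                self._propagate(node.name)

g = Graph()
Node(g, "query")                                    # user-set entity selection
Node(g, "results", lambda q: [q + "_doc1", q + "_doc2"], deps=["query"])
Node(g, "view", lambda r: f"showing {len(r)} items", deps=["results"])
g.set("query", "alice")
print(g.nodes["view"].value)    # the view updates transitively from the query
```

Reuse of an analysis then amounts to calling `set` with a new value: every dependent result and view refreshes without replaying the interaction history by hand.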
Regression Cube: A Technique for Multidimensional Visual Exploration and Interactive Pattern Finding (Article 7)
  Yu-Hsuan Chan; Carlos D. Correa; Kwan-Liu Ma
Scatterplots are commonly used to visualize multidimensional data; however, 2D projections of data offer limited understanding of the high-dimensional interactions between data points. We introduce an interactive 3D extension of scatterplots called the Regression Cube (RC), which augments a 3D scatterplot with three facets on which the correlations between the two variables are revealed by sensitivity lines and sensitivity streamlines. The sensitivity visualization of local regression on the 2D projections provides insights about the shape of the data through its orientation and continuity cues. We also introduce a series of visual operations such as clustering, brushing, and selection supported in RC. By iteratively refining the selection of data points of interest, RC is able to reveal salient local correlation patterns that may otherwise remain hidden with a global analysis. We have demonstrated our system with two examples and a user-oriented evaluation, and we show how RCs enable interactive visual exploration of multidimensional datasets via a variety of classification and information retrieval tasks. A video demo of RC is available.

TIIS 2014-07 Volume 4 Issue 2

Modeling User Preferences in Recommender Systems: A Classification Framework for Explicit and Implicit User Feedback (Article 8)
  Gawesh Jawaheer; Peter Weller; Patty Kostkova
Recommender systems are firmly established as a standard technology for assisting users with their choices; however, little attention has been paid to the application of the user model in recommender systems, particularly the variability and noise that are an intrinsic part of human behavior and activity. To enable recommender systems to suggest items that are useful to a particular user, it can be essential to understand the user and his or her interactions with the system. These interactions typically manifest themselves as explicit and implicit user feedback that provides the key indicators for modeling users' preferences for items and essential information for personalizing recommendations. In this article, we propose a classification framework for the use of explicit and implicit user feedback in recommender systems based on a set of distinct properties that include Cognitive Effort, User Model, Scale of Measurement, and Domain Relevance. We develop a set of comparison criteria for explicit and implicit user feedback to emphasize the key properties. Using our framework, we provide a classification of recommender systems that have addressed questions about user feedback, and we review state-of-the-art techniques to improve such user feedback and thereby improve the performance of the recommender system. Finally, we formulate challenges for future research on improvement of user feedback.
Collaborative Language Models for Localized Query Prediction (Article 9)
  Yi Fang; Ziad Al Bawab; Jean-Francois Crespo
Localized query prediction (LQP) is the task of estimating web query trends for a specific location. This problem subsumes many interesting personalized web applications such as personalization for buzz query detection, for query expansion, and for query recommendation. These personalized applications can greatly enhance user interaction with web search engines by providing more customized information discovered from user input (i.e., queries), but the LQP task has rarely been investigated in the literature. Although abundant work exists on estimating global web search trends, it often encounters the big challenge of data sparsity when personalization comes into play.
   In this article, we tackle the LQP task by proposing a series of collaborative language models (CLMs). CLMs alleviate the data sparsity issue by collaboratively collecting queries and trend information from the other locations. The traditional statistical language models assume a fixed background language model, which loses the taste of personalization. In contrast, CLMs are personalized language models with flexible background language models customized to various locations. The most sophisticated CLM enables the collaboration to adapt to specific query topics, which further advances the personalization level. An extensive set of experiments have been conducted on a large-scale web query log to demonstrate the effectiveness of the proposed models.
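The core smoothing idea behind CLMs can be illustrated with a toy sketch: a query's probability for a location interpolates its local estimate with a background model pooled from the other locations. The interpolation weight and the counts below are illustrative assumptions, not the paper's estimator.

```python
def lm(counts):
    """Maximum-likelihood unigram language model from query counts."""
    total = sum(counts.values())
    return {q: c / total for q, c in counts.items()}

def collaborative_prob(query, loc, counts_by_loc, lam=0.7):
    local = lm(counts_by_loc[loc])
    # Background pooled from all *other* locations (the "collaboration").
    pooled = {}
    for other, counts in counts_by_loc.items():
        if other != loc:
            for q, c in counts.items():
                pooled[q] = pooled.get(q, 0) + c
    background = lm(pooled)
    return lam * local.get(query, 0.0) + (1 - lam) * background.get(query, 0.0)

counts = {"paris": {"louvre": 8, "weather": 2},
          "lyon": {"weather": 5, "louvre": 1},
          "nice": {"beach": 6, "weather": 4}}
# "beach" never occurs in the Paris log, yet it gets nonzero probability
# from the pooled background -- this is how collaboration fights sparsity.
print(collaborative_prob("beach", "paris", counts) > 0)
```

A location-customized background, rather than a single fixed one, is what distinguishes this scheme from traditional interpolated language models.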
Context-Sensitive Affect Recognition for a Robotic Game Companion (Article 10)
  Ginevra Castellano; Iolanda Leite; André Pereira; Carlos Martinho; Ana Paiva; Peter W. Mcowan
Social perception abilities are among the most important skills necessary for robots to engage humans in natural forms of interaction. Affect-sensitive robots are more likely to be able to establish and maintain believable interactions over extended periods of time. Nevertheless, the integration of affect recognition frameworks in real-time human-robot interaction scenarios is still underexplored. In this article, we propose and evaluate a context-sensitive affect recognition framework for a robotic game companion for children. The robot can automatically detect affective states experienced by children in an interactive chess game scenario. The affect recognition framework is based on the automatic extraction of task features and social interaction-based features. Vision-based indicators of the children's nonverbal behaviour are merged with contextual features related to the game and the interaction and given as input to support vector machines to create a context-sensitive multimodal system for affect recognition. The affect recognition framework is fully integrated in an architecture for adaptive human-robot interaction. Experimental evaluation showed that children's affect can be successfully predicted using a combination of behavioural and contextual data related to the game and the interaction with the robot. It was found that contextual data alone can be used to successfully predict a subset of affective dimensions, such as interest toward the robot. Experiments also showed that engagement with the robot can be predicted using information about the user's valence, interest and anticipatory behaviour. These results provide evidence that social engagement can be modelled as a state consisting of affect and attention components in the context of the interaction.
Inferring Visualization Task Properties, User Performance, and User Cognitive Abilities from Eye Gaze Data (Article 11)
  Ben Steichen; Cristina Conati; Giuseppe Carenini
Information visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities, and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to the individual user. To this end, this article presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using information on user eye gaze patterns while interacting with a given visualization to predict properties of the user's visualization task; the user's performance (in terms of predicted task completion time); and the user's individual cognitive abilities, such as perceptual speed, visual working memory, and verbal working memory. We provide a detailed analysis of different eye gaze feature sets, as well as over-time accuracies. We show that these predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are then discussed with a view to designing visualization systems that can adapt to the individual user in real time.

TIIS 2014-10 Volume 4 Issue 3

Special Issue on Multiple Modalities in Interactive Systems and Robots

Introduction to the Special Issue on Machine Learning for Multiple Modalities in Interactive Systems and Robots (Article 12e)
  Heriberto Cuayáhuitl; Lutz Frommberger; Nina Dethlefs; Antoine Raux; Mathew Marge; Hendrik Zender
This special issue highlights research articles that apply machine learning to robots and other systems that interact with users through more than one modality, such as speech, gestures, and vision. For example, a robot may coordinate its speech with its actions, taking into account (audio-)visual feedback during their execution. Machine learning provides interactive systems with opportunities to improve performance not only of individual components but also of the system as a whole. However, machine learning methods that encompass multiple modalities of an interactive system are still relatively hard to find. The articles in this special issue represent examples that contribute to filling this gap.
Efficient Interactive Multiclass Learning from Binary Feedback (Article 12)
  Hung Ngo; Matthew Luciw; Jawad Nagi; Alexander Förster; Jürgen Schmidhuber; Ngo Anh Vien
We introduce a novel algorithm called upper confidence-weighted learning (UCWL) for online multiclass learning from binary feedback (e.g., feedback that indicates whether the prediction was right or wrong). UCWL combines the upper confidence bound (UCB) framework with the soft confidence-weighted (SCW) online learning scheme. In UCB, each instance is classified using both score and uncertainty. For a given instance in the sequence, the algorithm might guess its class label primarily to reduce the class uncertainty. This is a form of informed exploration, which enables the performance to improve with lower sample complexity compared to the case without exploration. Combining UCB with SCW leads to the ability to deal well with noisy and nonseparable data, and state-of-the-art performance is achieved without increasing the computational cost. A potential application setting is human-robot interaction (HRI), where the robot is learning to classify some set of inputs while the human teaches it by providing only binary feedback -- or sometimes even the wrong answer entirely. Experimental results in the HRI setting and with two benchmark datasets from other settings show that UCWL outperforms other state-of-the-art algorithms in the online binary feedback setting -- and surprisingly even sometimes outperforms state-of-the-art algorithms that get full feedback (e.g., the true class label), whereas UCWL gets only binary feedback on the same data sequence.
Interpreting Natural Language Instructions Using Language, Vision, and Behavior (Article 13)
  Luciana Benotti; Tessa Lau; Martín Villalba
We define the problem of automatic instruction interpretation as follows. Given a natural language instruction, can we automatically predict what an instruction follower, such as a robot, should do in the environment to follow that instruction? Previous approaches to automatic instruction interpretation have required either extensive domain-dependent rule writing or extensive manually annotated corpora. This article presents a novel approach that leverages a large amount of unannotated, easy-to-collect data from humans interacting in a game-like environment. Our approach uses an automatic annotation phase based on artificial intelligence planning, for which two different annotation strategies are compared: one based on behavioral information and the other based on visibility information. The resulting annotations are used as training data for different automatic classifiers. This algorithm is based on the intuition that the problem of interpreting a situated instruction can be cast as a classification problem of choosing among the actions that are possible in the situation. Classification is done by combining language, vision, and behavior information. Our empirical analysis shows that machine learning classifiers achieve 77% accuracy on this task on available English corpora and 74% on similar German corpora. Finally, the inclusion of human feedback in the interpretation process is shown to boost performance to 92% for the English corpus and 90% for the German corpus.
Machine Learning for Social Multiparty Human-Robot Interaction (Article 14)
  Simon Keizer; Mary Ellen Foster; Zhuoran Wang; Oliver Lemon
We describe a variety of machine-learning techniques that are being applied to social multiuser human-robot interaction, using a robot bartender in our scenario. We first present a data-driven approach to social state recognition based on supervised learning. We then describe an approach to social skills execution -- that is, action selection for generating socially appropriate robot behavior -- which is based on reinforcement learning, using a data-driven simulation of multiple users to train execution policies for social skills. Next, we describe how these components for social state recognition and skills execution have been integrated into an end-to-end robot bartender system, and we discuss the results of a user evaluation. Finally, we present an alternative unsupervised learning framework that combines social state recognition and social skills execution based on hierarchical Dirichlet processes and an infinite POMDP interaction manager. The models make use of data from both human-human interactions collected in a number of German bars and human-robot interactions recorded in the evaluation of an initial version of the system.
Nonstrict Hierarchical Reinforcement Learning for Interactive Systems and Robots (Article 15)
  Heriberto Cuayáhuitl; Ivana Kruijff-Korbayová; Nina Dethlefs
Conversational systems and robots that use reinforcement learning for policy optimization in large domains often face the problem of limited scalability. This problem has been addressed either by using function approximation techniques that estimate the approximate true value function of a policy or by using a hierarchical decomposition of a learning task into subtasks. We present a novel approach for dialogue policy optimization that combines the benefits of both hierarchical control and function approximation and that allows flexible transitions between dialogue subtasks to give human users more control over the dialogue. To this end, each reinforcement learning agent in the hierarchy is extended with a subtask transition function and a dynamic state space to allow flexible switching between subdialogues. In addition, the subtask policies are represented with linear function approximation in order to generalize the decision making to situations unseen in training. Our proposed approach is evaluated in an interactive conversational robot that learns to play quiz games. Experimental results, using simulation and real users, provide evidence that our proposed approach can lead to more flexible (natural) interactions than strict hierarchical control and that it is preferred by human users.

TIIS 2015-01 Volume 4 Issue 4

Special Issue on Activity Recognition for Interaction

Introduction to the Special Issue on Activity Recognition for Interaction (Article 16e)
  Andreas Bulling; Ulf Blanke; Desney Tan; Jun Rekimoto; Gregory Abowd
This editorial introduction describes the aims and scope of the ACM Transactions on Interactive Intelligent Systems special issue on Activity Recognition for Interaction. It explains why activity recognition is becoming crucial as part of the cycle of interaction between users and computing systems, and it shows how the five articles selected for this special issue reflect this theme.
USMART: An Unsupervised Semantic Mining Activity Recognition Technique (Article 16)
  Juan Ye; Graeme Stevenson; Simon Dobson
Recognising high-level human activities from low-level sensor data is a crucial driver for pervasive systems that wish to provide seamless and distraction-free support for users engaged in normal activities. Research in this area has grown alongside advances in sensing and communications, and experiments have yielded sensor traces coupled with ground truth annotations about the underlying environmental conditions and user actions. Traditional machine learning has had some success in recognising human activities; but the need for large volumes of annotated data and the danger of overfitting to specific conditions represent challenges in connection with the building of models applicable to a wide range of users, activities, and environments. We present USMART, a novel unsupervised technique that combines data- and knowledge-driven techniques. USMART uses a general ontology model to represent domain knowledge that can be reused across different environments and users, and we augment a range of learning techniques with ontological semantics to facilitate the unsupervised discovery of patterns in how each user performs daily activities. We evaluate our approach against four real-world third-party datasets featuring different user populations and sensor configurations, and we find that USMART achieves up to 97.5% accuracy in recognising daily activities.
Automatic Detection of Social Behavior of Museum Visitor Pairs (Article 17)
  Eyal Dim; Tsvi Kuflik
In many cases, visitors come to a museum in small groups. In such cases, the visitors' social context has an impact on their museum visit experience. Knowing the social context may allow a system to provide socially aware services to the visitors. Evidence of the social context can be gained from observing/monitoring the visitors' social behavior. However, automatic identification of a social context requires, on the one hand, identifying typical social behavior patterns and, on the other, using relevant sensors that measure various signals and reasoning about those signals to detect the visitors' social behavior. We present such typical social behavior patterns of visitor pairs, identified by observations, and then the instrumentation, detection process, reasoning, and analysis of measured signals that enable us to detect the visitors' social behavior. Simple sensor data, such as proximity to other visitors, proximity to museum points of interest, and visitor orientation, are used to detect social synchronization, attention to the social companion, and interest in museum exhibits. The presented approach may allow future research to offer adaptive services to museum visitors based on their social context to better support their group visit experience.
Adaptive Gesture Recognition with Variation Estimation for Interactive Systems (Article 18)
  Baptiste Caramiaux; Nicola Montecchio; Atau Tanaka; Frédéric Bevilacqua
This article presents a gesture recognition/adaptation system for human-computer interaction applications that goes beyond activity classification and that, as a complement to gesture labeling, characterizes the movement execution. We describe a template-based recognition method that simultaneously aligns the input gesture to the templates using a Sequential Monte Carlo inference technique. Contrary to standard template-based methods based on dynamic programming, such as Dynamic Time Warping, the algorithm has an adaptation process that tracks gesture variation in real time. The method continuously updates, during execution of the gesture, the estimated parameters and recognition results, which offers key advantages for continuous human-machine interaction. The technique is evaluated in several different ways: Recognition and early recognition are evaluated on 2D onscreen pen gestures; adaptation is assessed on synthetic data; and both early recognition and adaptation are evaluated in a user study involving 3D free-space gestures. The method is robust to noise and successfully adapts to parameter variation. Moreover, it performs recognition as well as or better than nonadapting offline template-based methods.
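For readers unfamiliar with the dynamic-programming baseline the article contrasts itself with, here is a minimal classical Dynamic Time Warping matcher. The gesture data are toy 1-D traces; this is the nonadapting template method, not the article's Sequential Monte Carlo approach.

```python
def dtw(a, b):
    """Classic O(len(a)*len(b)) DTW distance with absolute-difference cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, or match from the predecessors.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(gesture, templates):
    """Label of the nearest template under DTW distance."""
    return min(templates, key=lambda name: dtw(gesture, templates[name]))

templates = {"swipe": [0, 1, 2, 3], "tap": [0, 0, 1, 0]}
print(classify([0, 1, 1, 2, 3], templates))   # warps onto the rising template
```

Because DTW aligns a whole completed gesture to each template offline, it cannot by itself track within-gesture parameter variation in real time, which is the gap the article's adaptive method addresses.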
Affectionate Interaction with a Small Humanoid Robot Capable of Recognizing Social Touch Behavior (Article 19)
  Martin Cooney; Shuichi Nishio; Hiroshi Ishiguro
Activity recognition, involving a capability to recognize people's behavior and its underlying significance, will play a crucial role in facilitating the integration of interactive robotic artifacts into everyday human environments. In particular, social intelligence in recognizing affectionate behavior will offer value by allowing companion robots to bond meaningfully with interacting persons. The current article addresses the issue of designing an affectionate haptic interaction between a person and a companion robot by exploring how a small humanoid robot can behave to elicit affection while recognizing touches. We report on an experiment conducted to gain insight into how people perceive three fundamental interactive strategies in which a robot is either always highly affectionate, appropriately affectionate, or superficially unaffectionate (emphasizing positivity, contingency, and challenge, respectively). Results provide insight into the structure of affectionate interaction between humans and humanoid robots -- underlining the importance of an interaction design expressing sincere liking, stability and variation -- and suggest the usefulness of novel modalities such as warmth and cold.
Incremental Learning of Daily Routines as Workflows in a Smart Home Environment BIBAFull-Text 20
  Berardina De Carolis; Stefano Ferilli; Domenico Redavid
Smart home environments should proactively support users in their activities, anticipating their needs according to their preferences. Understanding what the user is doing in the environment is important for adapting the environment's behavior, as well as for identifying situations that could be problematic for the user. Enabling the environment to exploit models of the user's most common behaviors is an important step toward this objective. In particular, models of a user's daily routines can be exploited not only for predicting his/her needs, but also for comparing the actual situation at a given moment with the expected one, in order to detect anomalies in his/her behavior. While manually setting up process models may be cost-effective in business and factory environments, building models of the processes involved in people's everyday lives is infeasible. This fact fully justifies the interest of the Ambient Intelligence community in automatically learning such models from examples of actual behavior. Incremental adaptation of the models and the ability to express and learn complex conditions on the tasks involved are also desirable. This article describes how process mining can be used to learn users' daily routines from a dataset of annotated sensor data. The solution that we propose relies on a First-Order Logic learning approach. Indeed, First-Order Logic provides a single, comprehensive, and powerful framework for supporting all of the previously mentioned features. Our experiments, performed both on a proprietary toy dataset and on publicly available real-world ones, indicate that this approach is efficient and effective for learning and modeling daily routines in smart home environments.
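The routine-learning and anomaly-detection loop described above can be sketched in a much simpler form than the article's First-Order Logic approach, namely as a frequency-based directly-follows model mined from traces of annotated activities. The function names, activity labels, and threshold are illustrative assumptions only:

```python
from collections import Counter, defaultdict

def mine_routine(traces):
    """Learn P(next activity | current activity) from observed daily traces."""
    counts = defaultdict(Counter)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1
    # normalize transition counts into conditional probabilities
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def anomalies(model, trace, threshold=0.1):
    """Flag transitions the learned routine considers unlikely."""
    return [(a, b) for a, b in zip(trace, trace[1:])
            if model.get(a, {}).get(b, 0.0) < threshold]
```

A real process-mining approach (and in particular the article's logic-based one) also captures concurrency, loops, and conditions on tasks, which this pairwise sketch deliberately ignores; incremental adaptation here would amount to updating the counts as new traces arrive.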

Regular Article

A Stimulus-Response Framework for Robot Control BIBAFull-Text 21
  Mario Gianni; Geert-Jan M. Kruijff; Fiora Pirri
In this article we propose a new approach to robot cognitive control based on a stimulus-response framework that models both a robot's stimuli and the robot's decision to switch tasks in response to stimuli or to inhibit them. In an autonomous system, we expect a robot to be able to deal with the whole system of stimuli and to use them to regulate its behavior in real-world applications. The proposed framework contributes to the state of the art of robot planning and high-level control in that it provides a novel perspective on the interaction between robot and environment. Our approach is inspired by Gibson's constructive view of the concept of a stimulus and by the cognitive control paradigm of task switching. We model the robot's response to a stimulus in three stages. We start by defining the stimuli as perceptual functions yielded by the active robot processes and learned via an informed logistic regression. Then we model the stimulus-response relationship by estimating a score matrix that leads to the selection of a single response task for each stimulus, basing the estimation on low-rank matrix factorization. The decision about switching takes into account both an interference cost and a reconfiguration cost: the interference cost weighs the effort of discontinuing the robot's current mental state to switch to a new one, whereas the reconfiguration cost weighs the effort of activating the response task. A choice is finally made based on the payoff of switching. Because processes play such a crucial role in both the stimulus model and the stimulus-response model, and because processes are activated by actions, we also address the process model, which is built on a theory of action. The framework is validated by several experiments that exploit a full implementation on an advanced robotic platform and is compared with two known approaches to replanning. The results demonstrate the practical value of the system in terms of robot autonomy, flexibility, and usability.
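The switching decision described above, weighing a stimulus-response score against interference and reconfiguration costs, can be sketched as follows. The function name, the cost shapes, and the numbers are illustrative assumptions; in the article the score row would come from a low-rank factorized matrix (roughly a row of U @ V.T), and the costs from the robot's current mental state:

```python
import numpy as np

def switch_decision(score_row, current_task, interference, reconfig):
    """Pick a response task for one stimulus based on the payoff of switching.

    score_row[j]  : stimulus-response score for candidate task j
    interference  : cost of discontinuing the current mental state
    reconfig[j]   : cost of activating task j
    """
    # net payoff of switching to each candidate task
    payoff = score_row - (interference + reconfig)
    # staying with the current task incurs no switching cost
    payoff[current_task] = score_row[current_task]
    best = int(np.argmax(payoff))
    return best, best != current_task
```

With a score row of [0.2, 0.9, 0.5], the robot switches away from task 0 only because task 1's score outweighs its combined interference and reconfiguration costs; raising those costs enough would make it inhibit the stimulus and stay put.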