
Companion Proceedings of the 2016 International Conference on Intelligent User Interfaces

Fullname: Companion Proceedings of the 21st International Conference on Intelligent User Interfaces
Editors: Jeffrey Nichols; Jalal Mahmud; John O'Donovan; Cristina Conati; Massimo Zancanaro
Location: Sonoma, California
Dates: 2016-Mar-07 to 2016-Mar-10
Standard No: ISBN: 978-1-4503-4140-0; ACM DL: Table of Contents; hcibib: IUI16-2
Links: Conference Website
  1. IUI 2016-03-07 Volume 2
    1. Workshops
    2. Tutorials
    3. Posters
    4. Demos
    5. Student Consortium

IUI 2016-03-07 Volume 2

Workshops

Workshop on Emotion and Visualization: EmoVis 2016 BIBFull-Text 1-2
  Andreas Kerren; Daniel Cernea; Margit Pohl
SCWT: A Joint Workshop on Smart Connected and Wearable Things BIBAFull-Text 3-5
  Dirk Schnelle-Walka; Lior Limonad; Tobias Grosse-Puppendahl; Joel Lanir; Florian Müller; Massimo Mecella; Kris Luyten; Tsvi Kuflik; Oliver Brdiczka; Max Mühlhäuser
The increasing number of smart objects in our everyday life shapes how we interact beyond the desktop. In this workshop we discuss how advanced interactions with smart objects in the context of the Internet-of-Things should be designed from various perspectives, such as HCI and AI as well as industry and academia.

Tutorials

Evaluating Intelligent User Interfaces with User Experiments BIBAFull-Text 6-8
  Bart P. Knijnenburg
User experiments are an essential tool to evaluate the user experience of intelligent user interfaces. This tutorial teaches the practical aspects of designing and setting up user experiments, as well as state-of-the-art methods to statistically evaluate the outcomes of such experiments.

Posters

Tracing Temporal Changes of Selection Criteria from Gaze Information BIBAFull-Text 9-12
  Kei Shimonishi; Hiroaki Kawashima; Erina Schaffer; Takashi Matsuyama
To design interactive systems that proactively assist users' decision making, users' gaze information is an important cue for estimating their selection criteria. Users sometimes change selection criteria while browsing content; therefore, temporal changes of those criteria need to be traced from gaze data on short time scales. In this paper, we propose an approach to detecting users' distinctive browsing periods, together with their appropriate time scales, by leveraging multiscale exact tests so that the system can trace temporal changes of selection criteria. We demonstrate the applicability of the proposed method through a toy example and experiments.
Projecting Recorded Expert Hands at Real Size, at Real Speed, and onto Real Objects for Manual Work BIBAFull-Text 13-17
  Genta Suzuki; Taichi Murase; Yusaku Fujii
Expert manual workers in factories assemble more efficiently than novices because their movements are optimized for the tasks. In this paper, we present an approach to projecting the hand movements of experts at real size, at real speed, and onto real objects in order to match the manual work movements of novices to those of experts. We prototyped a projector-camera system that projects the virtual hands of experts. We conducted a user study in which users worked after watching experts work under two conditions: using a display and using our prototype system. The results show that our prototype users worked more precisely and felt the tasks were easier. User ratings also show that, compared with display users, our prototype users watched the videos of experts more attentively, memorized them more clearly, and deliberately tried to work in the same way shown in the videos.
Environment Specific Content Rendering & Transformation BIBAFull-Text 18-22
  Balaji Vasan Srinivasan; Tanya Goyal; Varun Syal; Shubhankar Suman Singh; Vineet Sharma
The evolution of digital technology has resulted in the consumption of content on a multitude of environments (desktop, mobile, etc.). Content now needs to be appropriately delivered to all these environments. This calls for a mechanism to automate the process of rendering the content in its appropriate form on a targeted environment. In this paper, we propose an algorithm that takes the content along with a set of environment-specific layouts in which it has to be rendered, and automatically decides the mapping and transformation of the content for the right rendition. Metrics to measure the 'goodness' of the resulting rendition are also proposed to choose the right layout for the given content.
The Lifeboard: Improving Outcomes via Scarcity Priming BIBAFull-Text 23-27
  Ajay Chander; Sanam Mirzazad Barijough
We introduce the Lifeboard: a dynamic information interface designed to render personal data so as to positively influence wellness outcomes. We report on the results of an experiment that compares the effect on subjects' activity levels of presenting clinically significant data with the effect of presenting the same data using the Lifeboard. The statistically significant increase in this wellness outcome in the Lifeboard group vs. the Data-only group suggests that the Lifeboard effectively leverages the scarcity response [4] in the service of improved wellness outcomes. Moreover, the significant week-on-week decrease in this wellness outcome in the Data-only group points to the need for care when exposing clinical data to users.
Human-Autonomy Teaming and Agent Transparency BIBAFull-Text 28-31
  Jessie Y. C. Chen; Michael J. Barnes; Anthony R. Selkowitz; Kimberly Stowers; Shan G. Lakhmani; Nicholas Kasdaglis
We developed the user interfaces for two Human-Robot Interaction (HRI) tasking environments: dismounted infantry interacting with a ground robot (Autonomous Squad Member) and human interaction with an intelligent agent to manage a team of heterogeneous robotic vehicles (IMPACT). These user interfaces were developed based on the Situation awareness-based Agent Transparency (SAT) model. User testing showed that as agent transparency increased, so did overall human-agent team performance. Participants were able to calibrate their trust in the agent more appropriately as agent transparency increased.
STEPS: A Spatio-temporal Electric Power Systems Visualization BIBAFull-Text 32-35
  Robert Pienta; Leilei Xiong; Santiago Grijalva; Duen Horng (Polo) Chau; Minsuk Kahng
As the bulk electric grid becomes more complex, power system operators and engineers have more information to process and interpret than ever before. The information overload they experience can be mitigated by effective visualizations that facilitate rapid and intuitive assessment of the system state. With the introduction of non-dispatchable renewable energy, flexible loads, and energy storage, the ability to temporally explore system states becomes critical. This paper introduces STEPS, a new 3D Spatio-temporal Electric Power Systems visualization tool suitable for steady-state operational applications.
Fixation-to-Word Mapping with Classification of Saccades BIBAFull-Text 36-40
  Akito Yamaya; Goran Topic; Akiko Aizawa
Eye movement is expected to provide important clues for analyzing the human reading process. However, the noisy tracking environment makes it difficult to map the gaze data captured by eye-trackers to the user's intended word. In this paper, we propose an effective approach for accurately mapping a fixation to a word in the text. Our method regards consecutive horizontally progressive fixations as a sequential reading segment. We first classify transitions between segments into six classes, and then identify the set of segments associated with each line of the document. Our experiments demonstrate that the proposed method achieves 87% mapping accuracy (15% higher than our previous work) with a classification performance of 84%. We also confirmed that manual annotation time can be reduced by using our approach as a reference. We believe that our method provides sufficiently good accuracy to warrant future analysis.
Enhancing Interactivity with Transcranial Direct Current Stimulation BIBAFull-Text 41-44
  Bo Wan; Chi Vi; Sriram Subramanian; Diego Martinez Plasencia
Transcranial Direct Current Stimulation (tDCS) is a non-invasive type of neural stimulation known to modulate cortical excitability, with positive effects on working memory and attention. The availability of low-cost, consumer-grade tDCS devices has democratized access to this technology, allowing us to explore its applicability to HCI. We review the relevant literature and identify potential avenues for enhancing interactivity with tDCS in the context of HCI.
Designing SmartSignPlay: An Interactive and Intelligent American Sign Language App for Children who are Deaf or Hard of Hearing and their Families BIBAFull-Text 45-48
  Ching-Hua Chuan; Caroline Anne Guardino
This paper describes an interactive mobile application that aims to assist children who are deaf or hard of hearing (D/HH) and their families to learn and practice American Sign Language (ASL). Approximately 95% of D/HH children are born to hearing parents. Research indicates that the lack of common communication tools between parent and child often results in delayed development of the child's language and social skills. Benefiting from the interactive advantages and popularity of touchscreen mobile devices, we created SmartSignPlay, an app to teach D/HH children and their families everyday ASL vocabulary and phrases. Vocabulary is arranged into lessons based on the contexts in which it is frequently used. After watching a sign demonstrated by an animated avatar, the user performs the sign by drawing the trajectory of the hand movement and selecting the correct handshape. While the app is still under iterative development, preliminary results on its usability are provided.
Learning Objects Authoring Supported by Ubiquitous Learning Environments BIBAFull-Text 49-53
  Rafael D. Araújo; Hiran N. M. Ferreira; Fabiano A. Dorça; Renan G. Cattelan
Learning object authoring is still a complex and time-consuming task for instructors, requiring attention to both technical and pedagogical aspects. However, one can take advantage of the characteristics of Ubiquitous Learning Environments to ease this task through automatic or semi-automatic processes. Accordingly, this paper presents an approach for creating learning objects and their metadata in such environments, taking into account collaborative interactions among users. The proposed approach is being integrated into a real multimedia capture system used as a complementary tool at a university.
Computational Methods for the Natural and Intuitive Visualization of Volumetric Medical Data BIBAFull-Text 54-57
  Vladimir Ocegueda-Hernández; Gerardo Mendizabal-Ruiz
Modern medical imaging technologies are capable of providing meaningful structural and functional information in the form of volumetric digital data. However, current standard systems for visualizing and interacting with such data fail to provide a natural and intuitive way to do so. In this paper, we present our advances toward the development of computational methods for the natural and intuitive visualization of volumetric medical data.
Spatio-temporal Event Visualization from a Geo-parsed Microblog Stream BIBAFull-Text 58-61
  Masahiko Itoh; Naoki Yoshinaga; Masashi Toyoda
We devised a method of visualizing spatio-temporal events extracted from a geo-parsed microblog stream using a multi-layered geo-locational word-cloud representation. In our method, real-time geo-parsing geo-locates posts in the stream in order to recognize words appearing on a user-specified location and time grid as temporal local events. The recognized temporal local events (e.g., sports games) are then displayed on a map as multi-layered word-clouds and used for finding global events (e.g., earthquakes); the layered representation avoids occlusions between local and global events. We showed the effectiveness of our method by testing it on real events extracted from our archive of five years' worth of Twitter posts.
Dealing with Concept Drift in Exploratory Search: An Interactive Bayesian Approach BIBAFull-Text 62-66
  Antti Kangasrääsiö; Yi Chen; Dorota Glowacka; Samuel Kaski
In exploratory search, when the user formulates a query iteratively through relevance feedback, it is likely that the feedback given earlier requires adjustment later on. The main reason for this is that the user learns while searching, which causes changes in the relevance of items and features as estimated by the user -- a phenomenon known as concept drift. It might be helpful for the user to see the recent history of her feedback and get suggestions from the system about the accuracy of that feedback. In this paper we present a timeline interface that visualizes the feedback history, and a Bayesian regression model that can estimate jointly the user's current interests and the accuracy of each user feedback. We demonstrate that the user model can improve retrieval performance over a baseline model that does not estimate accuracy of user feedback. Furthermore, we show that the new interface provides usability improvements, which leads to the users interacting more with it.
From Textual Instructions to Sensor-based Recognition of User Behaviour BIBAFull-Text 67-73
  Kristina Yordanova
There are various activity recognition approaches that rely on manual definition of precondition-effect rules to describe user behaviour. These rules are later used to generate computational models of human behaviour that are able to reason about the user behaviour based on sensor observations. One problem with these approaches is that manual rule definition is a time-consuming and error-prone process. To address this problem, in this paper we outline an approach that extracts the rules from textual instructions. It then learns the optimal model structure based on observations in the form of manually created plans and sensor data. The learned model can then be used to recognise the behaviour of users during their daily activities.
Sleeve Sensing Technologies and Haptic Feedback Patterns for Posture Sensing and Correction BIBAFull-Text 74-78
  Luis Miguel Salvado; Artur Arsenio
The world population is aging rapidly, and developed countries face an increasing need for health assistance personnel such as nurses and physiotherapy experts. At the same time, there is a need to improve health care assistance to the population, especially to elderly people. This work will mostly benefit specific user groups, such as the elderly, patients recovering from physical injury, or athletes. This paper describes a wearable sleeve being developed under the scope of the Augmented Human Assistance (AHA) project for assisting people. It proposes a new architecture for providing haptic feedback through patterns created by multiple actuators. Different sensing technologies are analyzed and discussed.

Demos

Heady-Lines: A Creative Generator Of Newspaper Headlines BIBAFull-Text 79-83
  Lorenzo Gatti; Gozde Ozbal; Marco Guerini; Oliviero Stock; Carlo Strapparava
In this paper we present Heady-Lines, a creative system that produces news headlines based on well-known expressions. The algorithm is composed of several steps that identify keywords from a news article, select an appropriate well-known expression and modify it to produce a novel one, using state-of-the-art natural language processing and linguistic creativity techniques. The system has a simple web-interface that abstracts the technical details from users and lets them concentrate on the task of producing creative headlines.
PASSAGE: A Travel Safety Assistant with Safe Path Recommendations for Pedestrians BIBAFull-Text 84-87
  Matthew Garvey; Nilaksh Das; Jiaxing Su; Meghna Natraj; Bhanu Verma
Atlanta has consistently ranked as one of the most dangerous cities in America with over 2.5 million crime events recorded within the past six years. People who commute by walking are highly susceptible to crime here. To address this problem, our group has developed a mobile application, PASSAGE, that integrates Atlanta-based crime data to find "safe paths" between any given start and end locations in Atlanta. It also provides security features in a convenient user interface to further enhance safety while walking.
An Intelligent Musical Rhythm Variation Interface BIBAFull-Text 88-91
  Richard Vogl; Peter Knees
The drum tracks of electronic dance music are a central and style-defining element. Yet, creating them can be a cumbersome task, mostly due to a lack of appropriate tools and input devices. In this work we present an artificial-intelligence-powered software prototype that supports musicians in composing the rhythmic patterns for drum tracks. Starting with a basic pattern (seed pattern) provided by the user, a list of variations with varying degrees of similarity to the seed pattern is generated. The variations are created using a generative stochastic neural network. The interface visualizes the patterns and provides an intuitive way to browse through them. A user study with ten experts in electronic music production was conducted to evaluate five aspects of the presented prototype. For four of these aspects the feedback was generally positive. Only for the use case in live environments did some participants voice concerns and request safety features.
Easy Navigation through Instructional Videos using Automatically Generated Table of Content BIBAFull-Text 92-96
  Ankit Gandhi; Arijit Biswas; Kundan Shrivastava; Ranjeet Kumar; Sahil Loomba; Om Deshmukh
The amount of instructional videos available online, already in tens of thousands of hours, is growing steadily. A major bottleneck in their widespread usage is the lack of tools for easy consumption of these videos. In this demonstration, we present MMToC: Multimodal Method for Table of Content, a technique that automatically generates a table of contents for a given instructional video and enables textbook-like efficient navigation through the video. MMToC quantifies word saliency for visual words extracted from the slides and spoken words obtained from the lecture transcript. These saliency scores are combined using a dynamic-programming-based segmentation algorithm to identify likely points in the video where the topic has changed. MMToC is a web-based modular solution that can be used as a stand-alone video navigation solution or can be integrated with any e-platform for multimedia content management. MMToC can be seen in action on a sample video at
Semantic Sketch-Based Video Retrieval with Autocompletion BIBAFull-Text 97-101
  Claudiu Tanase; Ivan Giangreco; Luca Rossetto; Heiko Schuldt; Omar Seddati; Stephane Dupont; Ozan Can Altiok; Metin Sezgin
The IMOTION system is a content-based video search engine that provides fast and intuitive known item search in large video collections. User interaction consists mainly of sketching, which the system recognizes in real-time and makes suggestions based on both visual appearance of the sketch (what does the sketch look like in terms of colors, edge distribution, etc.) and semantic content (what object is the user sketching). The latter is enabled by a predictive sketch-based UI that identifies likely candidates for the sketched object via state-of-the-art sketch recognition techniques and offers on-screen completion suggestions. In this demo, we show how the sketch-based video retrieval of the IMOTION system is used in a collection of roughly 30,000 video shots. The system indexes collection data with over 30 visual features describing color, edge, motion, and semantic information. Resulting feature data is stored in ADAM, an efficient database system optimized for fast retrieval.
ScopeG: A Mobile Application for Exploration and Comparison of Personality Traits BIBAFull-Text 102-105
  Robert Deloatch; Liang Gou; Chris Kau; Jalal Mahmud; Michelle Zhou
The language people use on social media has been shown to provide insight into their personality characteristics. We developed a mobile system that aids the exploration of one's personality profile and its comparison with those of others. We conducted a user study to evaluate the system's usability, gauge user interactions of interest, and assess the system's performance in completing exploration and comparison tasks. Our study shows that the system is easy to use and enables users to effectively explore and compare personality profiles; users were interested in comparing their personality traits with those of friends, role models, and celebrities.

Student Consortium

Facilitating Safe Adaptation of Interactive Agents using Interactive Reinforcement Learning BIBAFull-Text 106-109
  Konstantinos Tsiakas
In this paper, we propose a learning framework for the adaptation of an interactive agent to a new user. We focus on applications where safety and personalization are essential, such as rehabilitation systems and robot-assisted therapy. We argue that interactive learning methods can be utilised and combined within the Reinforcement Learning framework, aiming at a safe and tailored interaction.
Usable Privacy in Location-Sharing Services BIBAFull-Text 110-113
  Yuchen Zhao
Location-sharing services such as Facebook and Foursquare have become increasingly popular. These services can be helpful but can also pose threats to people's privacy. Usability issues in existing location-privacy protection mechanisms are one of the main reasons why people fail to protect their location privacy properly: most people are unable, or find it cumbersome, to configure location-privacy preferences by themselves. My PhD research aims to address these usability issues by using recommenders, to understand people's acceptance of, and concerns about, such recommenders, and to alleviate those concerns.
Improving Interactions with Spatial Context-aware Services BIBAFull-Text 114-117
  Pavel Andreevich Samsonov
We have seen a recent rise of context-based as well as location-based mobile services. These services are now entering applications and adding features to mobile operating systems that make everyday user interactions more convenient. Nevertheless, they still have certain limitations, such as a lack of certain data types, that prevent them from exploiting their full potential. My research is situated in the area of human-computer interaction, with strong links to the field of intelligent user interfaces, and aims to improve interactions with spatial context-aware services by combining methods from computer vision and artificial intelligence.
Dynamic Online Computerized Neuropsychological Testing System BIBAFull-Text 118-121
  Sean-Ryan Smith
Traditional cognitive testing for detecting cognitive impairment (CI) can be inaccessible, expensive, and time consuming. This dissertation aims to develop an automated online computerized neuropsychological testing system for rapidly tracking an individual's cognitive performance throughout the user's daily or weekly schedule in an unobtrusive way. By utilizing embedded microsensors within tablet devices, the proposed context-aware system will capture ambient and behavioral data pertinent to the real-world contexts and times of testing to complement psychometric results, providing insight into the contextual factors relevant to the user's testing efficacy and performance.
Visual Text Analytics for Online Conversations: Design, Evaluation, and Applications BIBAFull-Text 122-125
  Enamul Hoque
Analyzing and gaining insights from a large amount of textual conversations can be quite challenging for a user, especially when the discussions become very long. During my doctoral research, I have focused on integrating Information Visualization (InfoVis) with Natural Language Processing (NLP) techniques to better support the user's task of exploring and analyzing conversations. For this purpose, I have designed a visual text analytics system that supports user exploration, starting from a possibly large set of conversations, then narrowing down to a subset of conversations, and eventually drilling down to the comments of one conversation. While our approach has so far been evaluated mainly through lab studies, in my ongoing and future work I plan to evaluate it via online longitudinal studies.
Gaze and Foot Input: Toward a Rich and Assistive Interaction Modality BIBAFull-Text 126-129
  Vijay Dandur Rajanna
Transforming gaze input into a rich and assistive interaction modality is one of the primary interests in eye tracking research. Gaze input in conjunction with traditional solutions to the "Midas Touch" problem, dwell time or a blink, is not mature enough to be widely adopted. In this regard, we present our preliminary work: a framework that achieves precise "point and click" interactions in a desktop environment by combining the gaze and foot interaction modalities. The framework comprises an eye tracker and a wearable, foot-operated quasi-mouse. The system evaluation shows that our gaze and foot interaction framework performs as well as a mouse (in time and precision) in the majority of tasks. Furthermore, this dissertation work focuses on the goal of realizing gaze-assisted interaction as a primary interaction modality to substitute conventional mouse- and keyboard-based interaction methods. In addition, we consider some of the challenges that need to be addressed and present possible solutions toward achieving our goal.
Adaptive User and Haptic Interfaces for Smart Assessment and Training BIBAFull-Text 130-133
  Alexandros Lioulemes
My research focuses on developing smart robotic rehabilitation interfaces that use machine intelligence to adjust the level of difficulty, assess physical and mental obstacles on the part of the user, and provide analysis of the multi-sensing data collected in real time as the user exercises. The main goal of the interfaces is to engage the patient in repetitive exercise sessions and to provide better visualization of the patient's recovery progress to the therapist. In this doctoral consortium, I will present three prototype user interfaces that can be applied in assistive environments and enhance the productivity of, and interaction between, therapist and patient. The data processing and decision-making algorithms compose the core components of this study.
Intelligent Interface for Organizing Online Social Opinions on Reddit BIBAFull-Text 134-137
  Mingkun Gao
Many posts containing social opinions are published on Reddit in a messy, staggered format, with sub-Reddit labels summarizing their contents only. It is hard for users to gain, in a short time, a global insight into the different positions and opinions on a specific topic, especially a controversial one. We propose an intelligent mechanism that combines social opinion clustering and information visualization. First, we cluster Reddit posts into different categories based on crowd positions and opinions, and generate informative clustering labels using human computation techniques. Second, we create an intelligent user interface that visualizes the post categories. This exposes categorized posts of different positions and opinions to users, and motivates users to hunt for posts supported by unlike-minded people.
ChordRipple: Adaptively Recommending and Propagating Chord Changes for Songwriters BIBAFull-Text 138-141
  Cheng-Zhi Anna Huang
Songwriting is the interplay of a composer's creative intent and an idiom's language. This language both facilitates and poses stylistic constraints on a composer's expressivity. Novice composers often find it difficult to go beyond common chord progressions, to find the chords that realize their intentions. To make it easier for composers to experiment with radical chord choices and to prototype "what-if" ideas, we are building a creativity support tool, ChordRipple, which (1) makes chord recommendations that aim to be both diverse and appropriate to the current context, and (2) infers a composer's intention to help her more quickly prototype ideas. Composers can use it to help select the next chord, to replace sequences of chords in an internally consistent manner, or to edit one part of a sequence and see the whole sequence change in that direction. To make such recommendations, we adapt neural-network models such as Word2Vec to the music domain as Chord2Vec. This model learns chord embeddings from a corpus of chord sequences, placing chords nearby when they are used in similar contexts. The learned embeddings support creative substitutions between chords, and also exhibit topological properties that correspond to musical structure. For example, the major and minor chords are both arranged in the latent space in shapes corresponding to the circle-of-fifths. To support the dynamic nature of the creative process, we propose to infer a composer's intentions for adaptive recommendation. As a composer makes chord changes, she is moving in the embedding space. We can infer a composer's intention from the gradient of her edits' trace and use this gradient to help her fine-tune her current changes or to project the sequence into the future to give recommendations on what the sequence could look like if more edits in that direction were performed.
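The Chord2Vec idea described in this abstract, treating chords as words and progressions as sentences, can be sketched in a few lines. The sketch below only prepares the (target, context) training pairs that a skip-gram model such as Word2Vec would consume; the chord corpus and window size are illustrative assumptions, not taken from the paper.

```python
# Sketch (assumed, not the authors' code): building skip-gram training
# pairs from chord progressions. Chords that occur in similar contexts
# yield similar pairs, so a Word2Vec-style trainer places them nearby
# in the embedding space.

def skipgram_pairs(sequences, window=2):
    """Yield (target, context) chord pairs, as a skip-gram model sees them."""
    for seq in sequences:
        for i, target in enumerate(seq):
            lo, hi = max(0, i - window), min(len(seq), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    yield target, seq[j]

# Illustrative toy corpus of chord progressions.
corpus = [
    ["C", "Am", "F", "G"],   # a common I-vi-IV-V progression
    ["C", "F", "G", "C"],
    ["Am", "F", "C", "G"],
]

pairs = list(skipgram_pairs(corpus))
# The raw sequences could equally be passed to an off-the-shelf trainer,
# e.g. gensim's Word2Vec(corpus, window=2, vector_size=...).
```

This mirrors the standard Word2Vec data preparation; the paper's actual model, training corpus, and hyperparameters may differ.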
Assessing Empathy through Mixed Reality BIBAFull-Text 142-145
  Cassandra Oduola
This research seeks to produce a new way of assessing empathy in individuals. The current widely used diagnostic tools are questionnaires, which are easy to "pass" if the individual simply lies and chooses the answers that would be most beneficial to them. Furthermore, assessing empathy has been shown to be harder in a clinical setting: because it is not the natural world, a person may purposely inhibit their behavior to seem more "normal". Finding methods that assess affect while the person interacts with a computer could yield higher accuracy in diagnosis.
Exploring the Development of Spatial Skills in a Video Game BIBAFull-Text 146-149
  Helen Wauck
This document gives an overview of my current research project investigating how children develop spatial reasoning skills through video game training. I describe the motivation and goals of the project and the progress made so far.
Understanding and Intervening Communicational Behavior using Artificial Intelligence BIBAFull-Text 150-153
  M. Iftekhar Tanveer
Portable and inexpensive technologies have the potential to capture a huge variety of signals about human beings. Systematic analysis of these signals can provide a deep understanding of the basic nature of interpersonal communication. I am interested in taking a machine learning approach to analyzing human behaviors, at least in formal, well-established settings (e.g., public speaking or job interviews). Understanding human behavior will enable us to design systems capable of making people self-aware. In many cases these might be useful for behavior modification as well.