
Proceedings of the 2016 International Conference on Intelligent User Interfaces

Fullname: Proceedings of the 21st International Conference on Intelligent User Interfaces
Editors: Jeffrey Nichols; Jalal Mahmud; John O'Donovan; Cristina Conati; Massimo Zancanaro
Location: Sonoma, California
Dates: 2016-Mar-07 to 2016-Mar-10
Volume: 1
Publisher: ACM
Standard No: ISBN: 978-1-4503-4137-0; ACM DL: Table of Contents; hcibib: IUI16-1
Papers: 51
Pages: 430
Links: Conference Website

Companion Proceedings of the 2016 International Conference on Intelligent User Interfaces

Fullname: Companion Proceedings of the 21st International Conference on Intelligent User Interfaces
Editors: Jeffrey Nichols; Jalal Mahmud; John O'Donovan; Cristina Conati; Massimo Zancanaro
Location: Sonoma, California
Dates: 2016-Mar-07 to 2016-Mar-10
Volume: 2
Publisher: ACM
Standard No: ISBN: 978-1-4503-4140-0; ACM DL: Table of Contents; hcibib: IUI16-2
Papers: 36
Pages: 153
Links: Conference Website
  1. IUI 2016-03-07 Volume 1
    1. Invited Speaker 1
    2. Social Media
    3. User Modelling
    4. Intelligent Visualizations
    5. Personalization
    6. IUI for Entertainment and Health
    7. Recommender Systems
    8. Wearable and Mobile IUI 1
    9. Invited Speaker 2
    10. Wearable and Mobile IUI 2
    11. Information Retrieval and Search
    12. IUI for Education and Training
  2. IUI 2016-03-07 Volume 2
    1. Workshops
    2. Tutorials
    3. Posters
    4. Demos
    5. Student Consortium

IUI 2016-03-07 Volume 1

Invited Speaker 1

Past, Present, and Future of Recommender Systems: An Industry Perspective BIBAFull-Text 1
  Xavier Amatriain
In 2006, Netflix announced a $1M prize competition to advance recommendation algorithms. The recommendation problem was simplified to the accuracy of predicting a user rating, measured by the Root Mean Squared Error. While that formulation helped get the attention of the research community, it put the focus on the wrong approach and metric while leaving many important factors out, in particular the UI. In this talk I will discuss the Netflix Prize as the past of recommendations. I will then describe the present from an industry perspective, based on my personal experience first at Netflix and now at Quora. I will describe the different components of modern recommender systems, such as personalized ranking, similarity, explanations, context-awareness, or search as recommendation. I will also review the usage of novel algorithmic approaches such as Factorization Machines, Restricted Boltzmann Machines, SimRank, Deep Neural Networks, or Listwise Learning-to-rank. We will see how those components and algorithmic approaches can be used to recommend not only movies, but also questions, answers, topics, or users.
   But, most importantly, I will give many examples of prototypical industrial-scale recommender systems with special focus on the user interface and its interaction with the algorithms. It is clearly at the intersection of the UI and the novel algorithms where the future of recommender systems lies.
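For readers unfamiliar with the Root Mean Squared Error objective criticized in this talk, here is a minimal illustrative sketch (not code from the talk or the Netflix Prize itself):

```python
import math

def rmse(predicted, actual):
    """Root Mean Squared Error between predicted and actual ratings:
    the metric the Netflix Prize used to score recommendation accuracy."""
    errors = [(p - a) ** 2 for p, a in zip(predicted, actual)]
    return math.sqrt(sum(errors) / len(errors))
```

For example, predictions of 3.5, 4.0 and 2.0 stars against true ratings of 4.0, 4.0 and 1.0 give an RMSE of about 0.65; a perfect predictor scores 0.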

Social Media

Predicting Attitude and Actions of Twitter Users BIBAFull-Text 2-6
  Jalal Mahmud; Geli Fei; Anbang Xu; Aditya Pal; Michelle Zhou
In this paper, we present computational models to predict Twitter users' attitude towards a specific brand through their personal and social characteristics. We also predict their likelihood of taking different actions based on their attitudes. In order to operationalize our research on users' attitude and actions, we collected ground-truth data through surveys of Twitter users. We have conducted experiments using two real world datasets to validate the effectiveness of our attitude and action prediction framework. Finally, we show how our models can be integrated with a visual analytics system for customer intervention.
Encouraging Diversity- and Representation-Awareness in Geographically Centralized Content BIBAFull-Text 7-18
  Eduardo Graells-Garrido; Mounia Lalmas; Ricardo Baeza-Yates
In centralized countries, not only are population, media and economic power concentrated, but people also give more attention to central locations. While this is not inherently bad, this behavior extends to micro-blogging platforms: central locations get more attention in terms of information flow. In this paper we study the effects of an information filtering algorithm that decentralizes content in such platforms. In particular, we found that users from non-central locations were not able to identify the geographical diversity of timelines generated by the algorithm, which were diverse by construction. To help users see this inherent diversity, we define a design rationale for approaching the problem, focused on the treemap visualization technique. We then deployed an "in the wild" implementation of our proposed system. On one hand, we found effects of centralization in exploratory user behavior. On the other hand, we found that the treemap was able to make users see the inherent geographical diversity of timelines. We measured these effects based on how users engaged with content filtered by the algorithm. With these results in mind, we propose practical actions for micro-blogging platforms to account for the differences and biased behavior induced by centralization.
TagFlip: Active Mobile Music Discovery with Social Tags BIBAFull-Text 19-30
  Mohsen Kamalzadeh; Christoph Kralj; Torsten Möller; Michael Sedlmair
We report on the design and evaluation of TagFlip, a novel interface for active music discovery based on social tags of music. The tool, which was built for phone-sized screens, couples high user control on the recommended music with minimal interaction effort. Contrary to conventional recommenders, which only allow the specification of seed attributes and the subsequent like/dislike of songs, we put the users in the centre of the recommendation process. With a library of 100,000 songs, TagFlip describes each played song to the user through its most popular tags on Last.fm and allows the user to easily specify which of the tags should be considered for the next song, or the next stream of songs. In a lab user study where we compared it to Spotify's mobile application, TagFlip came out on top in both subjective user experience (control, transparency, and trust) and our objective measure of number of interactions per liked song. Our users found TagFlip to be an important complementary experience to that of Spotify, enabling more active and directed discovery sessions as opposed to the mostly passive experience that traditional recommenders offer.
Expense Control: A Gamified, Semi-Automated, Crowd-Based Approach For Receipt Capturing BIBAFull-Text 31-42
  Maximilian Altmeyer; Pascal Lessel; Antonio Krüger
We investigate a crowd-based approach to enhance the outcome of optical character recognition in the domain of receipt capturing to keep track of expenses. In contrast to existing work, our approach is capable of extracting single products and provides categorizations for both articles and expenses, through the use of microtasks which are delegated to an unpaid crowd. To evaluate our approach, we developed a smartphone application, informed by a receipt analysis and an online questionnaire, in which users are able to track expenses by taking photos of receipts and solve microtasks to enhance the recognition. To provide additional motivation to solve these tasks, we make use of gamification. In a three-week-long user study (N=12), we found that our system is appreciated, that our approach reduces the error rate of captured receipts significantly, and that the gamification provided additional motivation to contribute more and thereby enrich the database.
Exploring User Attitudes Towards Different Approaches to Command Recommendation in Feature-Rich Software BIBAFull-Text 43-47
  Michelle Wiebe; Denise Y. Geiskkovitch; Andrea Bunt
Feature-rich software applications offer users hundreds of commands, yet most people use only a very small fraction of the available command set. Command recommenders aim to increase awareness of an application's capabilities by generating personalized recommendations for new commands. A primary distinguishing characteristic of these recommenders is the manner in which they determine command relevance. Social approaches do so by analyzing community usage logs, whereas task-based approaches mine web documentation for logical command clusters. Through a qualitative study with sixteen participants, we explored user attitudes towards these different approaches and the supplemental information they enable.

User Modelling

Task Load Estimation and Mediation Using Psycho-physiological Measures BIBAFull-Text 48-59
  Rahul Rajan; Ted Selker; Ian Lane
Human performance falls off predictably with excessive task difficulty. This paper reports on a search for a task load estimation metric. Of the five physiological signals analyzed from a multitasking study, only pupil dilation measures correlated well with real-time task load. The paper introduces a novel task load estimation model based on pupil dilation measures. We demonstrate its effectiveness in a multitasking driving scenario. Autonomous mediation of notifications using this model significantly improved user task performance compared to no mediation. The model showed promise even when used outdoors in a car. Results were achieved using low-cost cameras and open-source measurement tools, supporting the approach's potential for broad use.
What Belongs Together Comes Together: Activity-centric Document Clustering for Information Work BIBAFull-Text 60-70
  Alexander Seeliger; Benedikt Schmidt; Immanuel Schweizer; Max Mühlhäuser
Multitasking and interruptions in information work make frequent activity switches necessary. Individuals need to recall and restore earlier states of work, which generally involves retrieval of information objects. To avoid the resulting tooling time, an activity-centric organization of information objects has been proposed. For each activity, a collection of related information objects (like documents, websites, etc.) is created to improve information access and serve as a memory aid. While the manual maintenance of such information collections is a tedious task and becomes an interruption on its own, the automatic maintenance of such collections using activity mining is promising. Activity mining utilizes interaction histories to extract unique activities from the stream of interaction with information objects. For activity mining, existing work shows varying success in limited study setups. In this paper, we present a method for activity mining to generate activity-centric information object collections automatically from interaction histories. The technique is a hybrid approach considering all information types used in previous work -- activity-stream and accessed-content related information. Method performance is evaluated on interaction histories collected from eight information workers during real work over several weeks. On this dataset our hybrid approach achieves an average performance of 0.53 ARI, reaching up to 0.77 ARI, outperforming single metric-based approaches.
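The ARI scores reported here refer to the Adjusted Rand Index, a standard measure of agreement between two clusterings (1.0 means identical groupings, values near 0 mean chance agreement). A minimal illustrative implementation, not the authors' evaluation code:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand Index between a ground-truth clustering and a
    predicted clustering, using the standard pair-counting formula."""
    n = len(labels_true)
    # Contingency counts: how many items share each (true, pred) label pair
    contingency = Counter(zip(labels_true, labels_pred))
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_true).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_pred).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

A clustering identical to the ground truth (up to relabeling) scores 1.0; an orthogonal split of the same items scores below 0.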
Interactive Intent Modeling from Multiple Feedback Domains BIBAFull-Text 71-75
  Pedram Daee; Joel Pyykkö; Dorota Glowacka; Samuel Kaski
In exploratory search, the user starts with an uncertain information need and provides relevance feedback to the system's suggestions to direct the search. The search system learns the user intent based on this feedback and employs it to recommend novel results. However, the amount of user feedback is very limited compared to the size of the information space to be explored. To tackle this problem, we take into account user feedback on both the retrieved items (documents) and their features (keywords). In order to combine feedback from multiple domains, we introduce a coupled multi-armed bandits algorithm, which employs a probabilistic model of the relationship between the domains. Simulation results show that with multi-domain feedback, the search system can find the relevant items in fewer iterations than with only one domain. A preliminary user study indicates improvement in user satisfaction and quality of retrieved information.
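The paper's coupled multi-armed bandit combines feedback from documents and keywords; as a much simpler illustration of the underlying bandit idea (single domain, Bernoulli feedback, Thompson sampling -- an assumption for this sketch, not the authors' model):

```python
import random

class BetaBandit:
    """Thompson sampling over candidate items with Beta posteriors.
    A simplified single-domain stand-in for the paper's coupled bandit."""

    def __init__(self, n_arms):
        # Beta(1, 1) prior per arm: alpha counts positive, beta negative feedback
        self.alpha = [1.0] * n_arms
        self.beta = [1.0] * n_arms

    def select(self):
        # Sample a relevance estimate per arm and suggest the highest
        samples = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        return samples.index(max(samples))

    def update(self, arm, relevant):
        # Incorporate one round of user relevance feedback
        if relevant:
            self.alpha[arm] += 1
        else:
            self.beta[arm] += 1
```

Each round, the system suggests an item via select(), observes the user's relevance feedback, and calls update(), so arms with positive feedback are sampled more often over time.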
Inferring A Player's Need For Cognition From Hints BIBAFull-Text 76-79
  Carlos Pereira Santos; Vassilis-Javed Khan; Panos Markopoulos
Player behavior during game play can be used to construct player models that help adapt the game and make it more fun for the player involved. Similarly, in-game behavior could help model personality traits that describe people's attitudes in a fashion that can be stable over time and across different domains, e.g., to support health coaching or other behavior change approaches. This paper demonstrates the feasibility of this approach by relating Need for Cognition (NfC), a personality trait that can predict the effectiveness of different persuasion strategies upon users, to a commonly used game mechanic -- hints. An experiment with N=188 participants confirmed our hypothesis that NfC has a negative correlation with the number of hints players follow during the game. Future work should confirm whether adherence to hints can be used as a predictor of behavior in different games, and identify game mechanics other than hints that help predict user traits.
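The negative correlation reported here is the standard Pearson product-moment correlation; a minimal sketch with illustrative data only (the study's data and exact statistic are not reproduced here):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. NfC scores vs. number of hints followed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```

A value near -1 (as when hints followed fall strictly as NfC rises) indicates the negative relationship the hypothesis predicts.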
Driver Classification Based on Driving Behaviors BIBAFull-Text 80-84
  Cheng Zhang; Mitesh Patel; Senaka Buthpitiya; Kent Lyons; Beverly Harrison; Gregory D. Abowd
In this paper we develop a model capable of classifying drivers from their driving behaviors sensed by only low-level sensors. The sensing platform consists of data available from the car's on-board diagnostics (OBD) outlet and smartphone sensors. We develop a window-based support vector machine model to classify drivers. We test our model with two datasets collected under both controlled and naturalistic conditions. Furthermore, we evaluate the model using each sensor source (car and phone) independently and combining both sensors. The average classification accuracies attained with data collected from three different cars, each shared between a couple in a naturalistic environment, were 75.83%, 85.83% and 86.67% using only phone sensors, only car sensors, and combined car and phone sensors, respectively.
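The "window-based" part of the pipeline means the continuous sensor stream is cut into fixed-size windows and each window is summarized into features before classification. A minimal illustrative sketch, with a tiny nearest-centroid classifier standing in for the paper's SVM and simple mean/range features standing in for the paper's unspecified feature set:

```python
from collections import defaultdict

def window_features(signal, size):
    """Split a 1-D sensor stream into non-overlapping windows and
    compute per-window (mean, range) as simple illustrative features."""
    feats = []
    for i in range(0, len(signal) - size + 1, size):
        w = signal[i:i + size]
        feats.append((sum(w) / size, max(w) - min(w)))
    return feats

def nearest_centroid(train, labels, x):
    """Tiny stand-in classifier: assign x the label of the closest
    training centroid (the paper uses an SVM instead)."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for (f1, f2), y in zip(train, labels):
        s = sums[y]
        s[0] += f1; s[1] += f2; s[2] += 1
    best, best_d = None, float("inf")
    for y, (s1, s2, n) in sums.items():
        d = (x[0] - s1 / n) ** 2 + (x[1] - s2 / n) ** 2
        if d < best_d:
            best, best_d = y, d
    return best
```

In the real system each window would carry many features from OBD and phone sensors, and the per-window predictions would be aggregated to identify the driver.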

Intelligent Visualizations

Adaptive Contextualization: Combating Bias During High-Dimensional Visualization and Data Selection BIBAFull-Text 85-95
  David Gotz; Shun Sun; Nan Cao
Large and high-dimensional real-world datasets are being gathered across a wide range of application disciplines to enable data-driven decision making. Interactive data visualization can play a critical role in allowing domain experts to select and analyze data from these large collections. However, there is a critical mismatch between the very large number of dimensions in complex real-world datasets and the much smaller number of dimensions that can be concurrently visualized using modern techniques. This gap in dimensionality can result in high levels of selection bias that go unnoticed by users. The bias can in turn threaten the very validity of any subsequent insights. In this paper, we present Adaptive Contextualization (AC), a novel approach to interactive visual data selection that is specifically designed to combat the invisible introduction of selection bias. Our approach (1) monitors and models a user's visual data selection activity, (2) computes metrics over that model to quantify the amount of selection bias after each step, (3) visualizes the metric results, and (4) provides interactive tools that help users assess and avoid bias-related problems. We also share results from a user study which demonstrate the effectiveness of our technique.
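One plausible way to quantify selection bias of the kind AC monitors is to compare an attribute's distribution in the user's selection against its distribution in the full dataset; the sketch below uses total-variation distance as an illustrative metric (the paper's actual metrics are not reproduced here):

```python
from collections import Counter

def selection_bias(full, selected):
    """Total-variation distance between an attribute's distribution in
    the full dataset and in the user's current selection.
    0.0 = selection mirrors the full data; 1.0 = maximally skewed."""
    def dist(values):
        total = len(values)
        return {k: v / total for k, v in Counter(values).items()}
    p, q = dist(full), dist(selected)
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)
```

For example, selecting only "a" records from a dataset that is half "a" and half "b" yields a bias of 0.5, which a tool like AC could surface to the user after each selection step.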
MultiConVis: A Visual Text Analytics System for Exploring a Collection of Online Conversations BIBAFull-Text 96-107
  Enamul Hoque; Giuseppe Carenini
Online conversations, such as blogs, provide a rich amount of information and opinions about popular queries. Given a query, traditional blog sites return a set of conversations, often consisting of thousands of comments with complex thread structure. Since the interfaces of these blog sites do not provide any overview of the data, it becomes very difficult for the user to explore and analyze such a large amount of conversational data. In this paper, we present MultiConVis, a visual text analytics system designed to support the exploration of a collection of online conversations. Our system tightly integrates NLP techniques for topic modeling and sentiment analysis with information visualizations, by considering the unique characteristics of online conversations. The resulting interface supports user exploration, starting from a possibly large set of conversations, then narrowing down to a subset of conversations, and eventually drilling down to the comments of one conversation. Our evaluations through case studies with domain experts and a formal user study with regular blog readers illustrate the potential benefits of our approach when compared to a traditional blog reading interface.
Topic Modeling of Document Metadata for Visualizing Collaborations over Time BIBAFull-Text 108-117
  Francine Chen; Patrick Chiu; Seongtaek Lim
We describe methods for analyzing and visualizing document metadata to provide insights about collaborations over time. We investigate the use of Latent Dirichlet Allocation (LDA) based topic modeling to compute areas of interest on which people collaborate. The topics are represented in a node-link force directed graph by persistent fixed nodes laid out with multidimensional scaling (MDS), and the people by transient movable nodes. The topics are also analyzed to detect bursts to highlight "hot" topics during a time interval. As the user manipulates a time interval slider, the people nodes and links are dynamically updated. We evaluate the results of LDA topic modeling for the visualization by comparing topic keywords against the submitted keywords from the InfoVis 2004 Contest, and we found that the additional terms provided by LDA-based keyword sets result in improved similarity between a topic keyword set and the documents in a corpus. We extended the InfoVis dataset from 8 to 20 years and collected publication metadata from our lab over a period of 21 years, and created interactive visualizations for exploring these larger datasets.
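The evaluation above compares a topic's keyword set against document keywords; a simple set-overlap (Jaccard) score is one plausible way to compute such a similarity, sketched below for illustration (the paper's exact similarity measure is not given here):

```python
def keyword_similarity(topic_keywords, doc_keywords):
    """Jaccard similarity between a topic's keyword set and a
    document's keyword set: |intersection| / |union|."""
    a, b = set(topic_keywords), set(doc_keywords)
    return len(a & b) / len(a | b) if a | b else 0.0
```

Adding extra LDA-derived terms to a topic's keyword set can only help this kind of score when the added terms also appear in the documents, which matches the improvement the authors report.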
Rank As You Go: User-Driven Exploration of Search Results BIBAFull-Text 118-129
  Cecilia di Sciascio; Vedran Sabol; Eduardo E. Veas
Whenever users engage in gathering and organizing new information, searching and browsing activities emerge at the core of the exploration process. As the process unfolds and new knowledge is acquired, interest drifts occur inevitably and need to be accounted for. Despite the advances in retrieval and recommender algorithms, real-world interfaces have remained largely unchanged: results are delivered in a relevance-ranked list. However, it quickly becomes cumbersome to reorganize resources along new interests, as any new search brings new results. We introduce uRank and investigate interactive methods for understanding, refining and reorganizing documents on-the-fly as information needs evolve. uRank includes views summarizing the contents of a recommendation set and interactive methods conveying the role of users' interests through a recommendation ranking. A formal evaluation showed that gathering items relevant to a particular topic of interest with uRank incurs lower cognitive load compared to a traditional ranked list. A second study, consisting of an ecological validation, reports on usage patterns and usability of the various interaction techniques within a free, more natural setting.

Personalization

Supporting Multitasking in Video Conferencing using Gaze Tracking and On-Screen Activity Detection BIBAFull-Text 130-134
  Daniel Avrahami; Eveline van Everdingen; Jennifer Marlow
The use of videoconferencing in the workplace has been steadily growing. While multitasking during video conferencing is often necessary, it is also viewed as impolite and sometimes unacceptable. One potential contributor to negative attitudes towards such multitasking is the disrupted sense of eye contact that occurs when an individual shifts their gaze away to another screen, for example, in a dual-monitor setup, common in office settings. We present an approach to improve a sense of eye contact over videoconferencing in dual-monitor setups. Our approach uses computer vision and desktop activity detection to dynamically choose the camera with the best view of a user's face. We describe two alternative implementations of our solution (RGB-only, and a combination of RGB and RGB-D cameras). We then describe results from an online experiment that shows the potential of our approach to significantly improve perceptions of a person's politeness and engagement in the meeting.
Closing the Cognitive Gap between Humans and Interactive Narrative Agents Using Shared Mental Models BIBAFull-Text 135-146
  Rania Hodhod; Brian Magerko
This paper proposes a new formal approach for negotiating shared mental models between humans and computational improvisational agents (improv agents) based on our sociocognitive studies of human improvisers. Negotiation of shared mental models serves as a core mechanism for improv agents to co-create stories with each other and with human interactors. The model aims to narrow the gap between human and machine intelligence by providing AI agents that, in the presence of incomplete knowledge about an improv scene, can use procedural representations not only to understand human parties but also to negotiate their mental models with them. The described approach allows flexible modeling of ambiguous, non-Boolean knowledge through the use of fuzzy logic and situation calculus that allows reasoning under uncertainty in a dynamic improvisational setting.
No more Autobahn!: Scenic Route Generation Using Google's Street View BIBAFull-Text 147-151
  Nina Runge; Pavel Samsonov; Donald Degraen; Johannes Schöning
Navigation systems allow drivers to find the shortest or fastest path between two or multiple locations, mostly using time or distance as input parameters. Various researchers have extended traditional route planning approaches by taking into account the user's preferences, such as enjoying a coastal view or alpine landscapes during a drive. Current approaches mainly rely on volunteered geographic information (VGI), such as point of interest (POI) data from OpenStreetMap, or social media data, such as geotagged photos from Flickr, to generate scenic routes. While these approaches use proximity, distribution or other spatial relationships of the data sets, they do not take into account the actual view on specific route segments. In this paper, we propose Autobahn: a system for generating scenic routes using Google Street View images to classify route segments based on their visual characteristics, enhancing the driving experience. We show that this vision-based approach can complement other approaches for scenic route planning, and introduce a personalized scenic route by aligning the characteristics of the route to the preferences of the user.
An Intelligent Interface for Learning Content: Combining an Open Learner Model and Social Comparison to Support Self-Regulated Learning and Engagement BIBAFull-Text 152-163
  Julio Guerra; Roya Hosseini; Sibel Somyurek; Peter Brusilovsky
We present the Mastery Grids system, an intelligent interface for online learning content that combines open learner modeling (OLM) and social comparison features. We grounded the design of Mastery Grids in self-regulated learning and learning motivation theories, as well as in our past work in social comparison, OLM, and adaptive navigation support. The force behind the interface is the combination of adaptive navigation functionality with the mastery-oriented aspects of OLM and the performance-oriented aspects of social comparison. We examined different configurations of Mastery Grids in two classroom studies and report the results of analysis of log data and survey responses. The results show how Mastery Grids interacts with different factors, like gender and achievement-goal orientation, and ultimately, its impact on student engagement, performance, and motivation.
User Trust in Intelligent Systems: A Journey Over Time BIBAFull-Text 164-168
  Daniel Holliday; Stephanie Wilson; Simone Stumpf
Trust is a significant factor in user adoption of new systems. However, although trust is a dynamic attitude of the user towards the system and changes over time, trust in intelligent systems is typically captured as a single quantitative measure at the conclusion of a task. This paper challenges this approach. We report a case study that employed a combination of repeated quantitative and qualitative measures to examine how trust in an intelligent system evolved over time and whether this varied depending on whether the system offered explanations. We discovered different patterns in participants' trust journeys. When provided with explanations, participants' trust levels initially increased before returning to their original level. Without explanations, participants' trust reduced over time. The qualitative data showed that perceived system ability was more important in determining trust among participants who received explanations, while perceived transparency was a greater influence on the trust of participants who did not receive explanations. The findings provide a deeper understanding of the development of user trust in intelligent systems and indicate the value of the approach adopted.
An Intelligent Assistant for High-Level Task Understanding BIBAFull-Text 169-174
  Ming Sun; Yun-Nung Chen; Alexander I. Rudnicky
People are able to interact with domain-specific intelligent assistants (IAs) and get help with tasks. But sometimes user goals are complex and may require interactions with multiple applications. Current IAs, however, are limited to specific applications, and users have to directly manage execution spanning multiple applications in order to engage in more complex activities. An ideal personal agent would be able to learn, over time, about tasks that span different resources. This paper addresses the problem of cross-domain task assistance in the context of spoken dialogue systems. We propose approaches to discover users' high-level intentions and to use this information to assist users in their tasks. We collected real-life smartphone usage data from 14 participants and investigated how to extract high-level intents from users' descriptions of their activities. Our experiments show that understanding high-level tasks allows the agent to actively suggest apps relevant to pursuing particular user goals and reduces the cost of users' self-management.

IUI for Entertainment and Health

SleeveAR: Augmented Reality for Rehabilitation using Realtime Feedback BIBAFull-Text 175-185
  Maurício Sousa; João Vieira; Daniel Medeiros; Artur Arsenio; Joaquim Jorge
We present an intelligent user interface that allows people to perform rehabilitation exercises by themselves under the offline supervision of a therapist. Every year, many people suffer injuries that require rehabilitation. This entails considerable time overheads, since it requires people to perform specified exercises under the direct supervision of a therapist. It is therefore desirable that patients continue performing exercises outside the clinic (for instance at home, without direct supervision) to complement in-clinic physical therapy. However, to perform rehabilitation tasks accurately, patients need appropriate feedback, as otherwise provided by a physical therapist, to ensure that these unsupervised exercises are correctly executed. Different approaches address this problem by providing feedback mechanisms to aid rehabilitation. Unfortunately, test subjects frequently report having trouble completely understanding the feedback thus provided, which makes it hard to correctly execute the prescribed movements. Worse, injuries may occur due to incorrect performance of the prescribed exercises, which severely hinders recovery. SleeveAR is a novel approach to provide real-time, active feedback, using multiple projection surfaces to provide effective visualizations. Empirical evaluation shows the effectiveness of our approach compared to traditional video-based feedback. Our experimental results show that our intelligent UI can successfully guide subjects through an exercise prescribed (and demonstrated) by a physical therapist, with performance improvements between consecutive executions, a desirable goal for successful rehabilitation.
PlaylistPlayer: An Interface Using Multiple Criteria to Change the Playback Order of a Music Playlist BIBAFull-Text 186-190
  Tomoyasu Nakano; Jun Kato; Masahiro Hamasaki; Masataka Goto
We propose a novel interface that allows the user to interactively change the playback order of multiple songs by choosing one or more criteria. The criteria include not only the song's title and artist name but also its content, automatically estimated by music/singing signal processing and artist-level social analysis. The artist-level social information is discovered from Wikipedia and DBpedia. With regard to manipulating playback order, existing interfaces typically allow the user to change it manually, or automatically by choosing one of a few types of criteria. The proposed interface, on the other hand, deals with nine properties and multiple combinations of them (e.g., vocal gender and beats per minute). To realize ordering by multiple criteria, a distance matrix is computed from the criteria vectors and is then used either to estimate paths for ascending, descending, and random orders by applying principal component analysis, or to estimate a path for a smooth order by solving the travelling salesman problem.
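The "smooth order" step amounts to finding a short path through the song distance matrix; a greedy nearest-neighbour heuristic is a minimal illustrative stand-in for the travelling-salesman solution the abstract mentions (the paper's actual solver is not specified here):

```python
def smooth_order(dist, start=0):
    """Greedy nearest-neighbour path through a symmetric song distance
    matrix: from the current song, always play the closest unvisited
    song next. A simple heuristic for a 'smooth' playback order."""
    n = len(dist)
    order, visited = [start], {start}
    while len(order) < n:
        cur = order[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: dist[cur][j])
        order.append(nxt)
        visited.add(nxt)
    return order
```

Here dist[i][j] would come from the criteria vectors (e.g. vocal gender, BPM); the returned index list is the playback order.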
Remind Me: An Adaptive Recommendation-Based Simulation of Biographic Associations BIBAFull-Text 191-195
  Dominik Gall; Jean-Luc Lugrin; Dennis Wiebusch; Marc Erich Latoschik
Classical reminiscence therapy has been shown to effectively enhance the stability of memory and identity in people with dementia. Typically, reminiscence therapy uses biographic artifacts like photos and personal items and objects. Today, many of these artifacts are from the digital realm, providing new options to adapt or even improve the purely analog therapy. In this work we propose a method to enhance reminiscence therapy with computer-simulated biographic associations. Our approach provides assistance for associative reasoning on affective stimuli and thus enables access to biographic content so that no deliberate search is required. We develop a recommender model for mapping mental states to biographic content based on similarity. The system dynamically adapts its state and the depicted digital artifacts to the responses of the user. It is a first step towards an immersive reminiscence therapy which will incorporate associated stimuli on multiple channels to increase effectiveness. A preliminary study showed encouraging results concerning the usability of the system.
Empirically Studying Participatory Sense-Making in Abstract Drawing with a Co-Creative Cognitive Agent BIBAFull-Text 196-207
  Nicholas Davis; Chih-Pin Hsiao; Kunwar Yashraj Singh; Lisa Li; Brian Magerko
This paper reports on the design and evaluation of a co-creative drawing partner called the Drawing Apprentice, which was designed to improvise and collaborate on abstract sketches with users in real time. The system qualifies as a new genre of creative technologies termed "casual creators" that are meant to creatively engage users and provide enjoyable creative experiences rather than necessarily helping users make a higher quality creative product. We introduce the conceptual framework of participatory sense-making and describe how it can help model and understand open-ended collaboration. We report the results of a user study comparing human-human collaboration to human-computer collaboration using the Drawing Apprentice system. Based on insights from the user study, we present a set of design recommendations for co-creative agents.
RelaWorld: Neuroadaptive and Immersive Virtual Reality Meditation System BIBAFull-Text 208-217
  Ilkka Kosunen; Mikko Salminen; Simo Järvelä; Antti Ruonala; Niklas Ravaja; Giulio Jacucci
Meditation in general and mindfulness in particular have been shown to be useful techniques in the treatment of a plethora of ailments, yet they can be challenging for novices. We present RelaWorld: a neuroadaptive virtual reality meditation system that combines virtual reality with neurofeedback to provide a tool that is easy for novices to use yet provides added value even for experienced meditators. Using a head-mounted display, users can levitate in a virtual world by doing meditation exercises. The system measures users' brain activity in real time via EEG and calculates estimates of the level of concentration and relaxation. These values are then mapped into the virtual reality. In a user study with 43 subjects, we were able to show that the RelaWorld system elicits deeper relaxation, a stronger feeling of presence, and a deeper level of meditation when compared to a similar setup without a head-mounted display or neurofeedback.

Recommender Systems

The Effect of Privacy Concerns on Privacy Recommenders BIBAFull-Text 218-227
  Yuchen Zhao; Juan Ye; Tristan Henderson
Location-sharing services such as Facebook and Foursquare/Swarm have become increasingly popular, due to the ease with which users can share their locations and participate in services, games and other applications that leverage these locations. But it is important for people who use these services to configure appropriate location-privacy preferences so that they can control with whom they share their location information. Manually configuring these preferences may be burdensome and confusing, and so location-privacy preference recommenders based on crowdsourcing preferences from other users have been proposed. Whether people will accept the recommended preferences acquired from other users, whom they may not know or trust, has not, however, been investigated. In this paper, we present a user experiment (n=99) to explore what factors influence people's acceptance of location-privacy preference recommenders. We find that 44% of our participants have privacy concerns about such recommenders. These concerns are shown to have a negative effect (p<0.001) on their acceptance of the recommendations and their satisfaction with their choices. Furthermore, users' acceptance of recommenders varies with both the context and the recommendations being made. Our findings are potentially useful to designers of location-sharing services and privacy recommenders.
Data Portraits and Intermediary Topics: Encouraging Exploration of Politically Diverse Profiles BIBAFull-Text 228-240
  Eduardo Graells-Garrido; Mounia Lalmas; Ricardo Baeza-Yates
In micro-blogging platforms, people connect and interact with others. However, due to cognitive biases, they tend to interact with like-minded people and read only agreeable information. Many efforts to make people connect with those who think differently have not worked well. In this paper, we hypothesize, first, that previous approaches have not worked because they have been direct -- they have tried to explicitly connect people with those having opposing views on sensitive issues. Second, that neither recommendation nor presentation of information by itself is enough to encourage behavioral change. We propose a platform that mixes a recommender algorithm and a visualization-based user interface to explore recommendations. It recommends politically diverse profiles in terms of distance between latent topics, and displays those recommendations in a visual representation of each user's personal content. We performed an "in-the-wild" evaluation of this platform, and found that people explored more recommendations when using a biased algorithm instead of ours. In line with our hypothesis, we also found that the mixture of our recommender algorithm and our user interface allowed politically interested users to exhibit an unbiased exploration of the recommended profiles. Finally, our results contribute insights on two aspects: first, which individual differences are important when designing platforms aimed at behavioral change; and second, which algorithms and user interfaces should be mixed to help users avoid the cognitive mechanisms that lead to biased behavior.
ChordRipple: Recommending Chords to Help Novice Composers Go Beyond the Ordinary BIBAFull-Text 241-250
  Cheng-Zhi Anna Huang; David Duvenaud; Krzysztof Z. Gajos
Novice composers often find it difficult to go beyond common chord progressions. To make it easier for composers to experiment with radical chord choices, we built a creativity support tool, ChordRipple, which makes chord recommendations that aim to be both diverse and appropriate to the current context. Composers can use it to help select the next chord, or to replace sequences of chords in an internally consistent manner. To make such recommendations, we adapt a neural network model from natural language processing known as Word2Vec to the music domain. This model learns chord embeddings from a corpus of chord sequences, placing chords nearby when they are used in similar contexts. The learned embeddings support creative substitutions between chords, and also exhibit topological properties that correspond to musical structure. For example, the major and minor chords are both arranged in the latent space in shapes corresponding to the circle of fifths. Our structured observations with 14 music students show that the tool helped them explore a wider palette of chords and make "big jumps in just a few chords". It gave them "new ideas of ways to move forward in the piece", not just on a chord-to-chord level but also between phrases. Our controlled studies with 9 more music students show that more adventurous chords are adopted when composing with ChordRipple.
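The distributional intuition behind the abstract's chord embeddings can be illustrated with a toy sketch. This is not the authors' Word2Vec implementation: it uses simple context-count vectors and cosine similarity, and the corpus, window size, and chord names are invented for illustration. Chords that occur in similar contexts end up with similar vectors, so similarity scores surface plausible substitutions.

```python
from collections import Counter
from math import sqrt

def context_vectors(sequences, window=1):
    """Count-based stand-in for Word2Vec: represent each chord by the
    multiset of chords seen within `window` positions of it."""
    vecs = {}
    for seq in sequences:
        for i, chord in enumerate(seq):
            ctx = vecs.setdefault(chord, Counter())
            for j in range(max(0, i - window), min(len(seq), i + window + 1)):
                if j != i:
                    ctx[seq[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented toy corpus of chord progressions in C major.
corpus = [
    ["C", "F", "G", "C"],
    ["C", "Am", "F", "G"],
    ["Am", "F", "C", "G"],
    ["C", "F", "G", "Am"],
]
vecs = context_vectors(corpus)
# C and Am share contexts (both precede F and G), so they score as similar.
print(cosine(vecs["C"], vecs["Am"]))  # ≈ 0.873
```

In the real system the learned vectors are dense embeddings trained by prediction rather than counting, which is what gives rise to the geometric structure (e.g. the circle of fifths) described in the abstract.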
Learning Item Temporal Dynamics for Predicting Buying Sessions BIBAFull-Text 251-255
  Veronika Bogina; Tsvi Kuflik; Osnat Mokryn
Predicting whether a session is a buying session (i.e., will end with the purchase of an item) is an ongoing research task. Drawing on recent experience in Web search and movie recommenders, we explore the effect of temporal trends and characteristics on the ability to predict buying sessions. We suggest a new approach, based on items' temporal dynamics together with sessions' temporal aspects, for predicting whether a session will end with a purchase. We suggest a model for estimating the probability that a session ends with a purchase, according to the purchase history over the past few days of the items clicked on during the session. The predictions can be used by recommender systems, enabling them to take relevant actions, thus improving shoppers' experience as well as increasing sales for e-commerce companies. Our findings shed light on the importance of considering temporal dynamics in item recommendations on e-commerce sites. Empirical results on an imbalanced e-commerce dataset with more than nine million sessions demonstrate high precision, recall and ROC performance in predicting whether a session ends with a purchase.
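The scoring idea the abstract describes can be sketched minimally. This is not the authors' model: the naive mean of per-item recent purchase rates, the 7-day window, and the item names are all invented for illustration; the point is only that recent item-level purchase dynamics can feed a session-level purchase score.

```python
from datetime import date, timedelta

def item_purchase_rate(history, item, today, days=7):
    """Fraction of an item's recent clicks that ended in a purchase.
    `history` maps item -> list of (date, was_purchased) click records."""
    cutoff = today - timedelta(days=days)
    recent = [bought for d, bought in history.get(item, []) if d >= cutoff]
    return sum(recent) / len(recent) if recent else 0.0

def session_buy_score(history, clicked_items, today):
    """Naive session score: mean recent purchase rate of the clicked items."""
    if not clicked_items:
        return 0.0
    rates = [item_purchase_rate(history, it, today) for it in clicked_items]
    return sum(rates) / len(rates)

# Invented toy click history for two items.
today = date(2016, 3, 7)
history = {
    "itemA": [(today - timedelta(days=1), True), (today - timedelta(days=2), False)],
    "itemB": [(today - timedelta(days=3), False)],
}
print(session_buy_score(history, ["itemA", "itemB"], today))  # 0.25
```

A production system would calibrate such a score into a probability and combine it with session-level temporal features (e.g. time of day, dwell time), which is closer to what the paper evaluates.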
A Live-User Study of Opinionated Explanations for Recommender Systems BIBAFull-Text 256-260
  Khalil Ibrahim Muhammad; Aonghus Lawlor; Barry Smyth
This paper describes an approach for generating rich and compelling explanations in recommender systems, based on opinions mined from user-generated reviews. The explanations highlight the features of a recommended item that matter most to the user and also relate them to other recommendation alternatives and the user's past activities to provide a context.

Wearable and Mobile IUI 1

PanoSwarm: Collaborative and Synchronized Multi-Device Panoramic Photography BIBAFull-Text 261-270
  Yan Wang; Sunghyun Cho; Jue Wang; Shih-Fu Chang
Taking a picture has traditionally been a one-person task. In this paper we present a novel system that allows multiple mobile devices to work collaboratively in a synchronized fashion to capture a panorama of a highly dynamic scene, creating an entirely new photography experience that encourages social interactions and teamwork. Our system contains two components: a client app that runs on all participating devices, and a server program that monitors and communicates with each device. In a capturing session, the server collects in real time the viewfinder images of all devices and stitches them on the fly to create a panorama preview, which is then streamed to all devices as visual guidance. The system also allows one camera to be the host and send direct visual instructions to others to guide camera adjustment. When ready, all devices take pictures at the same time for panorama stitching. Our preliminary study suggests that the proposed system can help users capture high-quality panoramas with an enjoyable teamwork experience.
Recognizing Human Actions in the Motion Trajectories of Shapes BIBAFull-Text 271-281
  Melissa Roemmele; Soja-Marie Morgens; Andrew S. Gordon; Louis-Philippe Morency
People naturally anthropomorphize the movement of nonliving objects, as social psychologists Fritz Heider and Marianne Simmel demonstrated in their influential 1944 research study. When they asked participants to narrate an animated film of two triangles and a circle moving in and around a box, participants described the shapes' movement in terms of human actions. Using a framework for authoring and annotating animations in the style of Heider and Simmel, we established new crowdsourced datasets where the motion trajectories of animated shapes are labeled according to the actions they depict. We applied two machine learning approaches, a spatial-temporal bag-of-words model and a recurrent neural network, to the task of automatically recognizing actions in these datasets. Our best results outperformed a majority baseline and showed similarity to human performance, which encourages further use of these datasets for modeling perception from motion trajectories. Future progress on simulating human-like motion perception will require models that integrate motion information with top-down contextual knowledge.
SCEPTRE: A Pervasive, Non-Invasive, and Programmable Gesture Recognition Technology BIBAFull-Text 282-293
  Prajwal Paudyal; Ayan Banerjee; Sandeep K. S. Gupta
Communication and collaboration between deaf people and hearing people is hindered by the lack of a common language. Although there has been a lot of research in this domain, there is room for work towards a system that is ubiquitous, non-invasive, works in real time and can be trained interactively by the user. Such a system would be powerful enough to translate gestures performed in real time, while also being flexible enough to be fully personalized for use as a platform for gesture-based HCI. We propose SCEPTRE, which utilizes two non-invasive wrist-worn devices to decipher gesture-based communication. The system uses a multi-tiered, template-based comparison approach to classify input data from accelerometer, gyroscope and electromyography (EMG) sensors. This work demonstrates that the system is easily trained using just one to three training instances each for twenty randomly chosen signs from the American Sign Language (ASL) dictionary and also for user-generated custom gestures. The system is able to achieve an accuracy of 97.72% for ASL gestures.
Look at Me: Augmented Reality Pedestrian Warning System Using an In-Vehicle Volumetric Head Up Display BIBAFull-Text 294-298
  Hyungil Kim; Alexandre Miranda Anon; Teruhisa Misu; Nanxiang Li; Ashish Tawari; Kikuo Fujimura
Current pedestrian collision warning systems use either auditory alarms or visual symbols to inform drivers. These traditional approaches cannot tell the driver where the detected pedestrians are located, which is critical for the driver to respond appropriately. To address this problem, we introduce a new driver interface taking advantage of a volumetric head-up display (HUD). In our experimental user study, sixteen participants drove a test vehicle in a parking lot while braking for crossing pedestrians using different interface designs on the HUD. Our results showed that spatial information provided by conformal graphics on the HUD resulted in not only better driver performance but also smoother braking behavior as compared to the baseline.
Context Matters?: How Adding the Obfuscation Option Affects End Users' Data Disclosure Decisions BIBAFull-Text 299-304
  Jing Wang; Na Wang; Hongxia Jin
Recent advancements in smart devices and wearable technologies greatly enlarge the variety of personal data people can track. Applications and services can leverage such data to provide better life support, but they also pose privacy and security threats. Obfuscation schemes, consequently, have been developed to retain data access while mitigating risks. Compared to offering only the choices of releasing raw data or not releasing at all, we examine the effect of adding a data obfuscation option on users' disclosure decisions when configuring applications' access, and how that effect varies with data types and application contexts. Our online user experiment shows that users are less likely to block data access when the obfuscation option is available, except for locations. This effect significantly differs between applications for domain-specific dynamic tracking data, but not for generic personal traits. We further unpack the role of context and discuss the design opportunities.

Invited Speaker 2

Socially-Sensitive Interfaces: From Offline Studies to Interactive Experiences BIBAFull-Text 305
  Elisabeth André
Recent years have seen a paradigm shift from purely task-based human-machine interfaces towards socially-sensitive interaction. In addition to what users explicitly say or gesture at, socially-sensitive interfaces are able to sense more subtle human cues, such as head postures and movements, to infer psychological user states, such as attention and affect, and also to enrich system responses with social signals. However, most approaches focus on offline analysis of previously recorded data, limiting the investigation to prototypical behaviors in laboratory-like settings. In my presentation, I will focus on challenges that arise when integrating social signal processing techniques into interactive systems designed for real-world applications. From a technical perspective, this requires effective tools able to synchronize, process, and analyze relevant signals in online mode. From a user perspective, appropriate strategies need to be defined to respond to social signals at the right moment in time without disturbing the flow of interaction. I will discuss two interaction styles for socially-sensitive interfaces. In the area of information retrieval, the concept of empathic stimulation has been used to optimize the selection and presentation of data. The basic idea is to exploit sensory data on the users' emotional state to provide them with cues that inspire their curiosity during the data exploration task. In the domain of social coaching, the concept of social augmentation has been employed to give people ambient feedback on their behavior while being engaged in a social interaction. The presentation will be illustrated by examples from various national and international projects following these two interaction styles.

Wearable and Mobile IUI 2

Enhancing Audience Engagement in Performing Arts Through an Adaptive Virtual Environment with a Brain-Computer Interface BIBAFull-Text 306-316
  Shuo Yan; GangYi Ding; Hongsong Li; Ningxiao Sun; Yufeng Wu; Zheng Guan; Longfei Zhang; Tianyu Huang
Audience engagement is an important indicator of the quality of the performing arts but hard to measure. Psychophysiological measurements are promising research methods for perceiving and understanding audience responses in real time. Currently, such research is conducted by collecting biometric data from the audience while they are watching a performance. In this paper, we draw on techniques from brain-computer interfaces (BCI) and knowledge about the quality of performing arts to develop a system that monitors audience engagement in real time using electroencephalography (EEG) measurement and seeks to improve it by triggering adaptive performing cues when the engagement level decreases. We simulated immersive theatre performances to provide the audience with a high-fidelity visual-audio experience. An experimental evaluation was conducted with 48 participants during two different performance studies. The results showed that our system could successfully detect decreases in audience engagement and that the performing cues had positive effects on regaining audience engagement. Our research offers guidelines for designing theatre performances informed by the audience's perception.
Facial Expression Recognition in Daily Life by Embedded Photo Reflective Sensors on Smart Eyewear BIBAFull-Text 317-326
  Katsutoshi Masai; Yuta Sugiura; Masa Ogata; Kai Kunze; Masahiko Inami; Maki Sugimoto
This paper presents a novel smart eyewear that uses embedded photo reflective sensors and machine learning to recognize a wearer's facial expressions in daily life. We leverage the skin deformation that occurs when wearers change their facial expressions. With small photo reflective sensors, we measure the proximity between the skin surface on the face and the eyewear frame, into which 17 sensors are integrated. A Support Vector Machine (SVM) algorithm was applied to the sensor data. The sensors can cover various facial muscle movements and can be integrated into everyday glasses. The main contributions of our work are as follows. (1) The eyewear recognizes eight facial expressions (92.8% accuracy for one-time use and 78.1% for use on 3 different days). (2) It is designed and implemented considering social acceptability. The device looks like normal eyewear, so users can wear it anytime, anywhere. (3) Initial field trials in daily life were undertaken. Our work is one of the first attempts to recognize and evaluate a variety of facial expressions in the form of an unobtrusive wearable device.
Towards Using Mobile, Head-Worn Displays in Cultural Heritage: User Requirements and a Research Agenda BIBAFull-Text 327-331
  Natalia Vainstein; Tsvi Kuflik; Joel Lanir
Augmented reality (AR) technology has the potential to enrich our daily lives in many aspects. One of them is the museum visit experience. Nowadays, state-of-the-art mobile museum visitor guides provide us with rich personalized, context-aware information. However, these systems have one major drawback -- they force the visitor to hold the guide and to look at its screen. Smart-glasses technology makes it possible to provide a wearable AR display without the need to hold a guide or look at its screen, and without distracting the user from the real object. This paper describes initial steps towards the implementation of a head-worn display (HWD) museum visitor guide -- the results of a user-requirements elicitation process, the implementation of a first research prototype and initial insights gathered during the process.
SweepSense: Ad Hoc Configuration Sensing Using Reflected Swept-Frequency Ultrasonics BIBAFull-Text 332-335
  Gierad Laput; Xiang 'Anthony' Chen; Chris Harrison
Devices can be made more intelligent if they have the ability to sense their surroundings and physical configuration. However, adding extra, special purpose sensors increases size, price and build complexity. Instead, we use speakers and microphones already present in a wide variety of devices to open new sensing opportunities. Our technique sweeps through a range of inaudible frequencies and measures the intensity of reflected sound to deduce information about the immediate environment, chiefly the materials and geometry of proximate surfaces. We offer several example uses, two of which we implemented as self-contained demos, and conclude with an evaluation that quantifies their performance and demonstrates high accuracy.

Information Retrieval and Search

Chalkboarding: A New Spatiotemporal Query Paradigm for Sports Play Retrieval BIBAFull-Text 336-347
  Long Sha; Patrick Lucey; Yisong Yue; Peter Carr; Charlie Rohlf; Iain Matthews
The recent explosion of sports tracking data has dramatically increased the interest in effective data processing and access of sports plays (i.e., short trajectory sequences of players and the ball). And while there exist systems that offer improved categorizations of sports plays (e.g., into relatively coarse clusters), to the best of our knowledge there does not exist any retrieval system that can effectively search for the most relevant plays given a specific input query. One significant design challenge is how best to phrase queries for multi-agent spatiotemporal trajectories such as sports plays. We have developed a novel query paradigm and retrieval system, which we call Chalkboarding, that allows the user to issue queries by drawing a play of interest (similar to how coaches draw up plays). Our system utilizes effective alignment, templating, and hashing techniques tailored to multi-agent trajectories, and achieves accurate play retrieval at interactive speeds. We showcase the efficacy of our approach in a user study, where we demonstrate orders-of-magnitude improvements in search quality compared to baseline systems.
AppGrouper: Knowledge-based Interactive Clustering Tool for App Search Results BIBAFull-Text 348-358
  Shuo Chang; Peng Dai; Lichan Hong; Cheng Sheng; Tianjiao Zhang; Ed H. Chi
A relatively new feature in the Google Play Store presents mobile app search results grouped by topic, helping users to quickly navigate and explore. The underlying Search Results Clustering (SRC) system faces several challenges, including grouping search results into topically coherent clusters as well as finding the appropriate level of granularity for clustering. We present AppGrouper, an alternative to algorithmic-only solutions that incorporates human input in a knowledge-graph-based clustering process. AppGrouper provides an interactive interface that lets domain experts steer the clustering process in early, mid, and late stages. We deployed and evaluated AppGrouper with internal experts. We found that AppGrouper improved the quality of algorithm-generated app clusters on 56 out of 82 search queries. We also found that the internal experts made more changes in early and mid stages for lower-quality algorithmic results, focusing more on narrow queries. Our results suggest that, in some contexts, machine learning systems can greatly benefit from steering by human experts, creating a symbiotic working relationship.
Beyond Relevance: Adapting Exploration/Exploitation in Information Retrieval BIBAFull-Text 359-369
  Kumaripaba Athukorala; Alan Medlar; Antti Oulasvirta; Giulio Jacucci; Dorota Glowacka
We present a novel adaptation technique for search engines to better support information-seeking activities that include both lookup and exploratory tasks. Building on previous findings, we describe (1) a classifier that recognizes task type (lookup vs. exploratory) as a user is searching and (2) a reinforcement-learning-based search engine that accordingly adapts the balance of exploration/exploitation in ranking the documents. This allows supporting both task types surreptitiously without changing the familiar list-based interface. Search results are more diverse when users are exploring and more precise for lookup tasks. Users found more useful results in exploratory tasks when compared to a baseline system specifically tuned for lookup tasks.
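The abstract's exploration/exploitation trade-off can be illustrated with a deliberately simple sketch. This is not the paper's reinforcement-learning ranker: the `topic` field and the deterministic round-robin diversification are invented for illustration. The point is only that the same result list can be re-ranked more diversely for exploratory tasks while keeping pure relevance order for lookup tasks.

```python
def rerank(results, exploring):
    """Toy adaptive ranker: relevance order for lookup tasks;
    round-robin across topics (diversification) for exploratory tasks."""
    by_rel = sorted(results, key=lambda r: -r["rel"])
    if not exploring:
        return by_rel
    # Group by topic, keeping relevance order within each topic.
    topics = {}
    for r in by_rel:
        topics.setdefault(r["topic"], []).append(r)
    ranked = []
    queues = list(topics.values())
    while any(queues):
        for q in queues:
            if q:
                ranked.append(q.pop(0))
    return ranked

# Invented toy results for an ambiguous query.
results = [
    {"id": 1, "rel": 0.9, "topic": "jaguar-car"},
    {"id": 2, "rel": 0.8, "topic": "jaguar-car"},
    {"id": 3, "rel": 0.7, "topic": "jaguar-animal"},
]
print([r["id"] for r in rerank(results, exploring=False)])  # [1, 2, 3]
print([r["id"] for r in rerank(results, exploring=True)])   # [1, 3, 2]
```

In the paper, the degree of diversification is learned and adjusted continuously rather than toggled; this sketch only shows the two extremes the classifier would interpolate between.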
A $-Family Friendly Approach to Prototype Selection BIBAFull-Text 370-374
  Corey Pittman; Eugene M., II Taranta; Joseph J., Jr. LaViola
We explore the benefits of intelligent prototype selection for $-family recognizers. Currently, the state of the art is to randomly select a subset of prototypes from a dataset without any processing. This results in reduced computation time for the recognizer, but also increases error rates. We propose applying optimization algorithms, specifically random mutation hill climb and a genetic algorithm, to search for reduced sets of prototypes that minimize recognition error. After an evaluation, we found that error rates could be reduced compared to random selection and rapidly approached the baseline accuracies for a number of different $-family recognizers.
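The optimization the abstract names can be sketched concretely. This is a minimal random mutation hill climb over prototype subsets, using a toy 1-NN "recognizer" on labeled 2-D points rather than an actual $-family recognizer; all data and parameters are invented for illustration.

```python
import random

def one_nn_accuracy(prototypes, data):
    """Toy 1-NN 'recognizer': classify each sample by its nearest prototype.
    Each prototype/sample is ((x, y), label)."""
    correct = 0
    for x, label in data:
        nearest = min(prototypes,
                      key=lambda p: (p[0][0] - x[0]) ** 2 + (p[0][1] - x[1]) ** 2)
        correct += nearest[1] == label
    return correct / len(data)

def rmhc_select(pool, data, k, iters=200, seed=0):
    """Random mutation hill climb: start from a random k-subset of the
    prototype pool; each step swaps one prototype for a random pool member
    and keeps the swap only if recognition accuracy does not drop."""
    rng = random.Random(seed)
    subset = rng.sample(pool, k)
    best = one_nn_accuracy(subset, data)
    for _ in range(iters):
        candidate = list(subset)
        candidate[rng.randrange(k)] = rng.choice(pool)
        acc = one_nn_accuracy(candidate, data)
        if acc >= best:
            subset, best = candidate, acc
    return subset, best

# Invented toy data: two well-separated classes of 2-D points.
pool = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0),
        ((10, 10), 1), ((11, 10), 1), ((10, 11), 1)]
data = [((0.5, 0.5), 0), ((1, 1), 0), ((10.5, 10.5), 1), ((9, 10), 1)]
subset, acc = rmhc_select(pool, data, k=2)
print(acc)
```

The paper applies the same search idea to real gesture templates, where the fitness function is the recognizer's error on a training set and a genetic algorithm is evaluated as an alternative search strategy.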
Maximizing Correctness with Minimal User Effort to Learn Data Transformations BIBAFull-Text 375-384
  Bo Wu; Craig A. Knoblock
Data transformation often requires users to write many trivial and task-dependent programs to transform thousands of records. Recently, programming-by-example (PBE) approaches have enabled users to transform data without coding. A key challenge of these PBE approaches is to deliver correctly transformed results on large datasets, since the transformation programs are likely to be generated by non-expert users. To address this challenge, existing approaches aim to identify a small set of potentially incorrect records and ask users to examine these records instead of the entire dataset. However, because transformation scenarios are highly task-dependent, existing approaches cannot capture the incorrect records across various scenarios. We present an approach that learns from past transformation scenarios to generate a meta-classifier that identifies the incorrect records. Our approach color-codes these transformed records and then presents them for users to examine. The method allows users either to enter an example for a record transformed incorrectly or to confirm the correctness of a transformed record. Our approach then learns from the users' labels to refine the meta-classifier to accurately identify the incorrect records. Simulation results and a user study show that our method can identify the incorrectly transformed records and reduce user effort in examining the results.

IUI for Education and Training

AutoManner: An Automated Interface for Making Public Speakers Aware of Their Mannerisms BIBAFull-Text 385-396
  M. Iftekhar Tanveer; Ru Zhao; Kezhen Chen; Zoe Tiet; Mohammed Ehsan Hoque
Many individuals exhibit unconscious body movements called mannerisms while speaking. These repeated movements often distract the audience when not relevant to the verbal context. We present an intelligent interface that can automatically extract human gestures using Microsoft Kinect to make speakers aware of their mannerisms. We use a sparsity-based algorithm, Shift Invariant Sparse Coding, to automatically extract the patterns of body movements. These patterns are displayed in an interface with a subtle question-and-answer-based feedback scheme that draws attention to the speaker's body language. Our formal evaluation with 27 participants shows that the users became aware of their body language after using the system. In addition, when independent observers annotated the accuracy of the algorithm for every extracted pattern, we found that the patterns extracted by our algorithm are significantly (p<0.001) more accurate than random selection. This is strong evidence that the algorithm is able to extract human-interpretable body movement patterns. An interactive demo of AutoManner is available at http://tinyurl.com/AutoManner.
Desitra: A Simulator for Teaching Situated Decision Making in Dental Surgery BIBAFull-Text 397-401
  Narumol Vannaprathip; Peter Haddawy; Siriwan Suebnukarn; Patcharapon Sangsartra; Nunnapin Sasikhant; Sornram Sangutai
Use of simulation to teach decision making in surgery is challenging partly due to the situated nature of the decisions, with situation awareness playing a critical role in making high-quality decisions. Thus, simulation systems need to provide, with high fidelity, the key cues used in making decisions. In this paper we present the first version of Desitra, a simulation environment for teaching decision making in dental surgery. System design was driven by an observational study of teaching sessions for endodontic surgery in the operating room, which identified perceptual cues used in decision making as well as tutorial intervention strategies used by surgeons. Desitra provides an open environment for learning decision making -- students carry out dental procedures and are free to make mistakes. The pedagogical module monitors student actions and intervenes when students make mistakes, providing as little guidance as necessary to keep students on a productive learning path. The system runs on Android tablets to be maximally accessible. Preliminary evaluation of the system shows that Desitra effectively captures the key perceptual cues.
Toward Intelligent Tutorial Feedback in Surgical Simulation: Robust Outcome Scoring for Endodontic Surgery BIBAFull-Text 402-406
  Myat Su Yin; Peter Haddawy; Siriwan Suebnukarn; Phattanapon Rhienmora
Numerous VR simulators have been developed as a means of addressing limitations of the traditional apprenticeship approach to dental surgical skill training. Most existing simulators support intra- and extra-coronal procedures such as caries removal. In this paper we address the problem of automated outcome assessment for endodontic surgery. Outcome assessment is an essential component of any system that provides formative feedback, which requires assessing the outcome, relating it to the procedure, and communicating in a language natural to dental students. This paper takes a first step toward automated generation of such comprehensive feedback. Our system automatically computes reference templates based on tooth anatomy, which provides the flexibility to adjust parameters such as tolerance and to create new templates on demand. Detailed scores are translated into the standard scoring language used by dental schools. Preliminary evaluation of our system on fifteen outcome samples with three expert endodontists shows a high degree of agreement with expert scores.
ViZig: Anchor Points based Non-Linear Navigation and Summarization in Educational Videos BIBAFull-Text 407-418
  Kuldeep Yadav; Ankit Gandhi; Arijit Biswas; Kundan Shrivastava; Saurabh Srivastava; Om Deshmukh
Instructional videos are one of the most popular ways of teaching and learning in an online setting. However, navigation in videos is linear compared to other instructional resources such as textbooks, where a table of topics and a multi-faceted index of different anchor points (e.g., lists of figures and tables) aid in efficiently navigating to a desired point of interest. There is a lack of appropriate techniques and interfaces that can support such textbook-style navigation in instructional videos. This paper presents a novel approach to automatically localize and classify different anchor points in a video, including figures, tables, equations, flowcharts, code snippets and charts. Our approach uses a deep convolutional neural network in a semi-supervised fashion where the training data is obtained from unconstrained Internet images. On an anchor-point dataset of about 10K images, the proposed algorithm achieves a classification accuracy of 86%. Further, we designed a system, ViZig, that uses these localized anchor points along with an automatically generated list of topics for non-linear video navigation, and studied its effectiveness in the real world. Our user studies with 18 participants establish that the proposed video navigation mechanism provides statistically significant time savings compared to the popularly used time-synched transcript combined with YouTube-style timeline scrubbing.
AnalyticalInk: An Interactive Learning Environment for Math Word Problem Solving BIBAFull-Text 419-430
  Bo Kang; Arun Kulshreshth; Joseph J., Jr. LaViola
We present AnalyticalInk, a novel math learning environment prototype that uses a semantic graph as the knowledge representation of algebraic and geometric word problems. The system solves math problems by reasoning over the semantic graph and automatically generates conceptual and procedural scaffolding in sequence. We further introduce a step-wise tutoring framework, which can check students' input steps and provide adaptive scaffolding feedback. Based on the knowledge representation, AnalyticalInk highlights keywords that users can drag onto the workspace to gather insight into the problem's initial conditions. The system simulates a pen-and-paper environment, letting users provide input in both algebraic and geometric workspaces. We conducted a usability evaluation to measure the effectiveness of AnalyticalInk. We found that keyword highlighting and dragging are useful and effective for math problem solving, and that answer checking in the tutoring component is useful. In general, our prototype shows promise in helping users understand geometrical concepts and master algebraic procedures during problem solving.

IUI 2016-03-07 Volume 2

Workshops

Workshop on Emotion and Visualization: EmoVis 2016 BIBFull-Text 1-2
  Andreas Kerren; Daniel Cernea; Margit Pohl
SCWT: A Joint Workshop on Smart Connected and Wearable Things BIBAFull-Text 3-5
  Dirk Schnelle-Walka; Lior Limonad; Tobias Grosse-Puppendahl; Joel Lanir; Florian Müller; Massimo Mecella; Kris Luyten; Tsvi Kuflik; Oliver Brdiczka; Max Mühlhäuser
The increasing number of smart objects in our everyday life shapes how we interact beyond the desktop. In this workshop we discuss how advanced interactions with smart objects in the context of the Internet-of-Things should be designed from various perspectives, such as HCI and AI as well as industry and academia.

Tutorials

Evaluating Intelligent User Interfaces with User Experiments BIBAFull-Text 6-8
  Bart P. Knijnenburg
User experiments are an essential tool to evaluate the user experience of intelligent user interfaces. This tutorial teaches the practical aspects of designing and setting up user experiments, as well as state-of-the-art methods to statistically evaluate the outcomes of such experiments.

Posters

Tracing Temporal Changes of Selection Criteria from Gaze Information BIBAFull-Text 9-12
  Kei Shimonishi; Hiroaki Kawashima; Erina Schaffer; Takashi Matsuyama
To design interactive systems that proactively assist users' decision making, users' gaze information is an important cue for the system to estimate their selection criteria. Users sometimes change selection criteria while browsing content; therefore, temporal changes of those criteria need to be traced from gaze data over short time scales. In this paper, we propose an approach to detecting users' distinctive browsing periods, at appropriate time scales, by leveraging multiscale exact tests, so that the system can trace temporal changes of selection criteria. We demonstrate the applicability of the proposed method through a toy example and experiments.
Projecting Recorded Expert Hands at Real Size, at Real Speed, and onto Real Objects for Manual Work BIBAFull-Text 13-17
  Genta Suzuki; Taichi Murase; Yusaku Fujii
Expert manual workers in factories assemble more efficiently than novices because their movements are optimized for the tasks. In this paper, we present an approach to projecting the hand movements of experts at real size, at real speed, and onto real objects in order to match the manual work movements of novices to those of experts. We prototyped a projector-camera system that projects the virtual hands of experts. We conducted a user study in which users worked after watching experts work under two conditions: using a display and using our prototype system. The results show that our prototype users worked more precisely and felt the tasks were easier. User ratings also show that, compared with display users, our prototype users watched the videos of experts more attentively, memorized them more clearly, and deliberately tried to work in the same way shown in the videos.
Environment Specific Content Rendering & Transformation BIBAFull-Text 18-22
  Balaji Vasan Srinivasan; Tanya Goyal; Varun Syal; Shubhankar Suman Singh; Vineet Sharma
The evolution of digital technology has resulted in the consumption of content in a multitude of environments (desktop, mobile, etc.). Content now needs to be appropriately delivered to all these environments. This calls for a mechanism to automate the process of rendering the content in its appropriate form in a targeted environment. In this paper, we propose an algorithm that takes the content along with a set of environment-specific layouts where it is to be rendered and automatically decides the mapping and transformation of the content for the right rendition. Metrics to measure the 'goodness' of the resulting rendition are also proposed to choose the right layout for the given content.
The Lifeboard: Improving Outcomes via Scarcity Priming BIBAFull-Text 23-27
  Ajay Chander; Sanam Mirzazad Barijough
We introduce the Lifeboard: a dynamic information interface designed to render personal data so as to positively influence wellness outcomes. We report on the results of an experiment that compares the effect of presenting clinically significant data to subjects on their activity levels, with the effect of presenting the same data using the Lifeboard. The statistically significant increase in this wellness outcome in the Lifeboard group vs. the Data-only group suggests that the Lifeboard effectively leverages the scarcity response [4] in the service of improved wellness outcomes. Moreover, the significant week-on-week decrease in this wellness outcome in the Data-only group points to the need for care when exposing clinical data to users.
Human-Autonomy Teaming and Agent Transparency BIBAFull-Text 28-31
  Jessie Y. C. Chen; Michael J. Barnes; Anthony R. Selkowitz; Kimberly Stowers; Shan G. Lakhmani; Nicholas Kasdaglis
We developed the user interfaces for two Human-Robot Interaction (HRI) tasking environments: dismounted infantry interacting with a ground robot (Autonomous Squad Member) and human interaction with an intelligent agent to manage a team of heterogeneous robotic vehicles (IMPACT). These user interfaces were developed based on the Situation awareness-based Agent Transparency (SAT) model. User testing showed that as agent transparency increased, so did overall human-agent team performance. Participants were able to calibrate their trust in the agent more appropriately as agent transparency increased.
STEPS: A Spatio-temporal Electric Power Systems Visualization BIBAFull-Text 32-35
  Robert Pienta; Leilei Xiong; Santiago Grijalva; Duen Horng (Polo) Chau; Minsuk Kahng
As the bulk electric grid becomes more complex, power system operators and engineers have more information to process and interpret than ever before. The information overload they experience can be mitigated by effective visualizations that facilitate rapid and intuitive assessment of the system state. With the introduction of non-dispatchable renewable energy, flexible loads, and energy storage, the ability to temporally explore system states becomes critical. This paper introduces STEPS, a new 3D Spatio-temporal Electric Power Systems visualization tool suitable for steady-state operational applications.
Fixation-to-Word Mapping with Classification of Saccades BIBAFull-Text 36-40
  Akito Yamaya; Goran Topic; Akiko Aizawa
Eye movement is expected to provide important clues for analyzing the human reading process. However, the noisy tracking environment makes it difficult to map the gaze data captured by eye-trackers to the user's intended word. In this paper, we propose an effective approach for accurately mapping a fixation to a word in the text. Our method regards consecutive horizontally progressive fixations as a sequential reading segment. We first classify transitions between segments according to six classes, and then identify the set of segments associated with each line of the document. Our experiments demonstrate that the proposed method achieves 87% mapping accuracy (15% higher than our previous work) with a classification performance of 84%. We also confirmed that manual annotation time can be reduced by using our approach as a reference. We believe that our method provides sufficiently good accuracy to warrant future analysis.
Enhancing Interactivity with Transcranial Direct Current Stimulation BIBAFull-Text 41-44
  Bo Wan; Chi Vi; Sriram Subramanian; Diego Martinez Plasencia
Transcranial Direct Current Stimulation (tDCS) is a non-invasive type of neural stimulation known to modulate cortical excitability, leading to positive effects on working memory and attention. The availability of low-cost, consumer-grade tDCS devices has democratized access to this technology, allowing us to explore its applicability to HCI. We review the relevant literature and identify potential avenues for exploration within the context of enhancing interactivity and the use of tDCS in HCI.
Designing SmartSignPlay: An Interactive and Intelligent American Sign Language App for Children who are Deaf or Hard of Hearing and their Families BIBAFull-Text 45-48
  Ching-Hua Chuan; Caroline Anne Guardino
This paper describes an interactive mobile application that aims to assist children who are deaf or hard of hearing (D/HH) and their families in learning and practicing American Sign Language (ASL). Approximately 95% of D/HH children are born to hearing parents. Research indicates that the lack of a common communication tool between parent and child often results in delayed development of the child's language and social skills. Benefiting from the interactive advantages and popularity of touchscreen mobile devices, we created SmartSignPlay, an app to teach D/HH children and their families everyday ASL vocabulary and phrases. Vocabulary is arranged into lessons based on the contexts in which it is frequently used. After watching a sign demonstrated by an animated avatar, the user performs the sign by drawing the trajectory of the hand movement and selecting the correct handshape. While the app is still under iterative development, preliminary results on its usability are provided.
Learning Objects Authoring Supported by Ubiquitous Learning Environments BIBAFull-Text 49-53
  Rafael D. Araújo; Hiran N. M. Ferreira; Fabiano A. Dorça; Renan G. Cattelan
Learning object authoring is still a complex and time-consuming task for instructors, requiring attention to both technical and pedagogical aspects. However, one can take advantage of the characteristics of ubiquitous learning environments to ease this process by means of automatic or semi-automatic techniques. This paper presents an approach for creating learning objects and their metadata in such environments, taking into account collaborative interactions among users. The proposed approach is being integrated into a real multimedia capture system used as a complementary tool at a university.
Computational Methods for the Natural and Intuitive Visualization of Volumetric Medical Data BIBAFull-Text 54-57
  Vladimir Ocegueda-Hernández; Gerardo Mendizabal-Ruiz
Modern medical imaging technologies are capable of providing meaningful structural and functional information in the form of volumetric digital data. However, current standard systems for visualizing and interacting with such data fail to provide a natural, intuitive way to do so. In this paper, we present our advances toward the development of computational methods for the natural and intuitive visualization of volumetric medical data.
Spatio-temporal Event Visualization from a Geo-parsed Microblog Stream BIBAFull-Text 58-61
  Masahiko Itoh; Naoki Yoshinaga; Masashi Toyoda
We devised a method of visualizing spatio-temporal events extracted from a geo-parsed microblog stream by using a multi-layered, geo-locational word-cloud representation. In our method, real-time geo-parsing geo-locates posts in the stream in order to recognize words appearing in a user-specified location and time grid as temporal local events. The recognized temporal local events (e.g., sports games) are then displayed on a map as multi-layered word clouds and used for finding global events (e.g., earthquakes), while avoiding occlusions among the local and global events. We showed the effectiveness of our method by testing it on real events extracted from our archive of five years' worth of Twitter posts.
Dealing with Concept Drift in Exploratory Search: An Interactive Bayesian Approach BIBAFull-Text 62-66
  Antti Kangasrääsiö; Yi Chen; Dorota Glowacka; Samuel Kaski
In exploratory search, when the user formulates a query iteratively through relevance feedback, it is likely that the feedback given earlier requires adjustment later on. The main reason for this is that the user learns while searching, which causes changes in the relevance of items and features as estimated by the user -- a phenomenon known as concept drift. It might be helpful for the user to see the recent history of her feedback and get suggestions from the system about the accuracy of that feedback. In this paper we present a timeline interface that visualizes the feedback history, and a Bayesian regression model that can jointly estimate the user's current interests and the accuracy of each user feedback. We demonstrate that the user model can improve retrieval performance over a baseline model that does not estimate the accuracy of user feedback. Furthermore, we show that the new interface provides usability improvements, leading users to interact more with it.
From Textual Instructions to Sensor-based Recognition of User Behaviour BIBAFull-Text 67-73
  Kristina Yordanova
There are various activity recognition approaches that rely on the manual definition of precondition-effect rules to describe user behaviour. These rules are later used to generate computational models of human behaviour that can reason about the user's behaviour based on sensor observations. One problem with these approaches is that manual rule definition is a time-consuming and error-prone process. To address this problem, in this paper we outline an approach that extracts the rules from textual instructions. It then learns the optimal model structure based on observations in the form of manually created plans and sensor data. The learned model can then be used to recognise the behaviour of users during their daily activities.
Sleeve Sensing Technologies and Haptic Feedback Patterns for Posture Sensing and Correction BIBAFull-Text 74-78
  Luis Miguel Salvado; Artur Arsenio
The world population is aging rapidly, and there is an increasing need for health assistance personnel, such as nurses and physiotherapists, in developed countries. At the same time, there is a need to improve health care assistance for the population, especially for elderly people. This work will mostly benefit specific user groups, such as the elderly, patients recovering from physical injury, and athletes. This paper describes a wearable sleeve being developed within the scope of the Augmented Human Assistance (AHA) project for assisting people. It proposes a new architecture for providing haptic feedback through patterns created by multiple actuators. Different sensing technologies are analyzed and discussed.

Demos

Heady-Lines: A Creative Generator Of Newspaper Headlines BIBAFull-Text 79-83
  Lorenzo Gatti; Gozde Ozbal; Marco Guerini; Oliviero Stock; Carlo Strapparava
In this paper we present Heady-Lines, a creative system that produces news headlines based on well-known expressions. The algorithm is composed of several steps that identify keywords from a news article, select an appropriate well-known expression, and modify it to produce a novel one, using state-of-the-art natural language processing and linguistic creativity techniques. The system has a simple web interface that abstracts the technical details from users and lets them concentrate on the task of producing creative headlines.
PASSAGE: A Travel Safety Assistant with Safe Path Recommendations for Pedestrians BIBAFull-Text 84-87
  Matthew Garvey; Nilaksh Das; Jiaxing Su; Meghna Natraj; Bhanu Verma
Atlanta has consistently ranked as one of the most dangerous cities in America with over 2.5 million crime events recorded within the past six years. People who commute by walking are highly susceptible to crime here. To address this problem, our group has developed a mobile application, PASSAGE, that integrates Atlanta-based crime data to find "safe paths" between any given start and end locations in Atlanta. It also provides security features in a convenient user interface to further enhance safety while walking.
An Intelligent Musical Rhythm Variation Interface BIBAFull-Text 88-91
  Richard Vogl; Peter Knees
The drum tracks of electronic dance music are a central and style-defining element. Yet, creating them can be a cumbersome task, mostly due to a lack of appropriate tools and input devices. In this work we present an artificial-intelligence-powered software prototype that supports musicians in composing the rhythmic patterns for drum tracks. Starting with a basic pattern (seed pattern) provided by the user, a list of variations with varying degrees of similarity to the seed pattern is generated. The variations are created using a generative stochastic neural network. The interface visualizes the patterns and provides an intuitive way to browse through them. A user study with ten experts in electronic music production was conducted to evaluate five aspects of the presented prototype. For four of these aspects the feedback was generally positive; only regarding use in live environments did some participants express concerns and request safety features.
Easy Navigation through Instructional Videos using Automatically Generated Table of Content BIBAFull-Text 92-96
  Ankit Gandhi; Arijit Biswas; Kundan Shrivastava; Ranjeet Kumar; Sahil Loomba; Om Deshmukh
The amount of instructional videos available online, already in the tens of thousands of hours, is growing steadily. A major bottleneck in their widespread usage is the lack of tools for easy consumption of these videos. In this demonstration, we present MMToC: Multimodal Method for Table of Content, a technique that automatically generates a table of content for a given instructional video and enables textbook-like efficient navigation through the video. MMToC quantifies word saliency for visual words extracted from the slides and spoken words obtained from the lecture transcript. These saliency scores are combined using a dynamic-programming-based segmentation algorithm to identify likely points in the video where the topic has changed. MMToC is a web-based modular solution that can be used as a stand-alone video navigation tool or integrated with any e-platform for multimedia content management. MMToC can be seen in action on a sample video at http://104.130.241.45:8080/TopicTransitionV2/index.html.
Semantic Sketch-Based Video Retrieval with Autocompletion BIBAFull-Text 97-101
  Claudiu Tanase; Ivan Giangreco; Luca Rossetto; Heiko Schuldt; Omar Seddati; Stephane Dupont; Ozan Can Altiok; Metin Sezgin
The IMOTION system is a content-based video search engine that provides fast and intuitive known item search in large video collections. User interaction consists mainly of sketching, which the system recognizes in real-time and makes suggestions based on both visual appearance of the sketch (what does the sketch look like in terms of colors, edge distribution, etc.) and semantic content (what object is the user sketching). The latter is enabled by a predictive sketch-based UI that identifies likely candidates for the sketched object via state-of-the-art sketch recognition techniques and offers on-screen completion suggestions. In this demo, we show how the sketch-based video retrieval of the IMOTION system is used in a collection of roughly 30,000 video shots. The system indexes collection data with over 30 visual features describing color, edge, motion, and semantic information. Resulting feature data is stored in ADAM, an efficient database system optimized for fast retrieval.
ScopeG: A Mobile Application for Exploration and Comparison of Personality Traits BIBAFull-Text 102-105
  Robert Deloatch; Liang Gou; Chris Kau; Jalal Mahmud; Michelle Zhou
The language people use on social media has been shown to provide insight into their personality characteristics. We developed a mobile system that aids the exploration of one's own personality profile and its comparison with those of others. We conducted a user study to evaluate the system's usability, gauge which interactions interest users, and assess the system's performance in completing exploration and comparison tasks. Our study shows that the system is easy to use and enables users to effectively explore and compare personality profiles, and that users were interested in comparing their personality traits with those of friends, role models, and celebrities.

Student Consortium

Facilitating Safe Adaptation of Interactive Agents using Interactive Reinforcement Learning BIBAFull-Text 106-109
  Konstantinos Tsiakas
In this paper, we propose a learning framework for the adaptation of an interactive agent to a new user. We focus on applications where safety and personalization are essential, such as rehabilitation systems and robot-assisted therapy. We argue that interactive learning methods can be utilised and combined within the Reinforcement Learning framework, aiming at a safe and tailored interaction.
Usable Privacy in Location-Sharing Services BIBAFull-Text 110-113
  Yuchen Zhao
Location-sharing services such as Facebook and Foursquare have become increasingly popular. These services can be helpful, but they can also pose threats to people's privacy. Usability issues in existing location-privacy protection mechanisms are one of the main reasons why people fail to protect their location privacy properly: most people are unable to configure location-privacy preferences by themselves, and find doing so cumbersome. My PhD research aims to address these usability issues by using recommenders, to understand people's acceptance of, and concerns about, such recommenders, and to alleviate those concerns.
Improving Interactions with Spatial Context-aware Services BIBAFull-Text 114-117
  Pavel Andreevich Samsonov
We have seen a recent rise of context-aware and location-based mobile services. These services are now entering applications and adding features to mobile operating systems that make everyday user interactions more convenient. Nevertheless, they still have certain limitations, such as a lack of certain data types, that prevent them from reaching their full potential. My research is situated in the area of human-computer interaction, with strong links to the field of intelligent user interfaces, and aims to improve interactions with spatial context-aware services by combining methods from computer vision and artificial intelligence.
Dynamic Online Computerized Neuropsychological Testing System BIBAFull-Text 118-121
  Sean-Ryan Smith
Traditional cognitive testing for detecting cognitive impairment (CI) can be inaccessible, expensive, and time-consuming. This dissertation aims to develop an automated online computerized neuropsychological testing system for unobtrusively tracking an individual's cognitive performance throughout the user's daily or weekly schedule. By utilizing microsensors embedded within tablet devices, the proposed context-aware system will capture ambient and behavioral data pertinent to the real-world contexts and times of testing to complement psychometric results, providing insight into the contextual factors relevant to the user's testing efficacy and performance.
Visual Text Analytics for Online Conversations: Design, Evaluation, and Applications BIBAFull-Text 122-125
  Enamul Hoque
Analyzing and gaining insights from a large amount of textual conversations can be quite challenging for a user, especially when the discussions become very long. During my doctoral research, I have focused on integrating Information Visualization (InfoVis) with Natural Language Processing (NLP) techniques to better support the user's task of exploring and analyzing conversations. For this purpose, I have designed a visual text analytics system that supports user exploration, starting from a possibly large set of conversations, then narrowing down to a subset of conversations, and eventually drilling down to a set of comments within one conversation. While our approach has so far been evaluated mainly through lab studies, in my ongoing and future work I plan to evaluate it via online longitudinal studies.
Gaze and Foot Input: Toward a Rich and Assistive Interaction Modality BIBAFull-Text 126-129
  Vijay Dandur Rajanna
Transforming gaze input into a rich and assistive interaction modality is one of the primary interests in eye tracking research. Gaze input in conjunction with traditional solutions to the "Midas Touch" problem, dwell time or a blink, is not mature enough to be widely adopted. In this regard, we present our preliminary work: a framework that achieves precise "point and click" interactions in a desktop environment by combining the gaze and foot interaction modalities. The framework comprises an eye tracker and a wearable, foot-operated quasi-mouse. The system evaluation shows that our gaze and foot interaction framework performs as well as a mouse (in time and precision) in the majority of tasks. Furthermore, this dissertation work focuses on the goal of realizing gaze-assisted interaction as a primary interaction modality to substitute for conventional mouse- and keyboard-based interaction methods. In addition, we consider some of the challenges that need to be addressed and present possible solutions toward achieving our goal.
Adaptive User and Haptic Interfaces for Smart Assessment and Training BIBAFull-Text 130-133
  Alexandros Lioulemes
My research focuses on developing smart robotic rehabilitation interfaces that use machine intelligence to adjust the level of difficulty, assess physical and mental obstacles on the part of the user, and provide analysis of the multi-sensor data collected in real time as the user exercises. The main goal of the interfaces is to engage the patient in repetitive exercise sessions and to provide the therapist with better visualization of the patient's recovery progress. In this doctoral consortium, I will present three prototype user interfaces that can be applied in assistive environments and enhance the productivity of, and interaction between, therapist and patient. The data processing and decision-making algorithms compose the core components of this study.
Intelligent Interface for Organizing Online Social Opinions on Reddit BIBAFull-Text 134-137
  Mingkun Gao
Many posts containing social opinions are published on Reddit in a messy, scattered format, with only subreddit labels summarizing their contents. It is hard for users to gain, in a short time, a global view of the different positions and opinions on a specific topic, especially a controversial one. We propose an intelligent mechanism that combines social opinion clustering and information visualization. First, we cluster Reddit posts into different categories based on crowd positions and opinions, and generate informative cluster labels using human computation techniques. Second, we create an intelligent user interface that visualizes the Reddit post categories. This exposes categorized posts of different positions and opinions to users, and motivates users to seek out posts supported by unlike-minded people.
ChordRipple: Adaptively Recommending and Propagating Chord Changes for Songwriters BIBAFull-Text 138-141
  Cheng-Zhi Anna Huang
Songwriting is the interplay of a composer's creative intent and an idiom's language. This language both facilitates and poses stylistic constraints on a composer's expressivity. Novice composers often find it difficult to go beyond common chord progressions, to find the chords that realize their intentions. To make it easier for composers to experiment with radical chord choices and to prototype "what-if" ideas, we are building a creativity support tool, ChordRipple, which (1) makes chord recommendations that aim to be both diverse and appropriate to the current context, and (2) infers a composer's intention to help her more quickly prototype ideas. Composers can use it to help select the next chord, to replace sequences of chords in an internally consistent manner, or to edit one part of a sequence and see the whole sequence change in that direction. To make such recommendations, we adapt neural-network models such as Word2Vec to the music domain as Chord2Vec. This model learns chord embeddings from a corpus of chord sequences, placing chords nearby when they are used in similar contexts. The learned embeddings support creative substitutions between chords, and also exhibit topological properties that correspond to musical structure. For example, the major and minor chords are both arranged in the latent space in shapes corresponding to the circle of fifths. To support the dynamic nature of the creative process, we propose to infer a composer's intentions for adaptive recommendation. As a composer makes chord changes, she is moving in the embedding space. We can infer a composer's intention from the gradient of her edits' trace and use this gradient to help her fine-tune her current changes, or to project the sequence into the future to give recommendations on what the sequence would look like if more edits in that direction were performed.
Assessing Empathy through Mixed Reality BIBAFull-Text 142-145
  Cassandra Oduola
This research seeks to produce a new way of assessing empathy in individuals. The most widely used diagnostic tools today are questionnaires. These questionnaires are easy to "pass" if the individual simply lies and chooses the answers that would be most beneficial to them. Furthermore, it has been shown that assessing empathy is harder in a clinical setting because it is not the natural world; a person may purposely inhibit their behavior to seem more "normal". Methods that assess affect while a person interacts with a computer could yield higher diagnostic accuracy.
Exploring the Development of Spatial Skills in a Video Game BIBAFull-Text 146-149
  Helen Wauck
This document gives an overview of my current research project investigating how children develop spatial reasoning skills through video game training. I describe the motivation and goals of the project and the progress made so far.
Understanding and Intervening Communicational Behavior using Artificial Intelligence BIBAFull-Text 150-153
  M. Iftekhar Tanveer
Portable and inexpensive technologies have the potential to capture a huge variety of signals about human beings. Systematic analysis of these signals can provide a deep understanding of the basic nature of interpersonal communication. I am interested in taking a machine learning approach to analyzing human behaviors, at least in formal, well-established settings (e.g., public speaking, job interviews, etc.). Understanding human behavior will enable us to design systems capable of making people self-aware; in many cases, such systems might be useful for behavior modification as well.