
Proceedings of the 2015 ACM/IEEE International Conference on Human-Robot Interaction

Fullname: Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction
Editors: Julie A. Adams; William Smart; Bilge Mutlu; Leila Takayama
Location: Portland, Oregon
Dates: 2015-Mar-02 to 2015-Mar-05
Volume: 1
Publisher: ACM
Standard No: ISBN 978-1-4503-2883-8; ACM DL: Table of Contents; hcibib: HRI15-1
Papers: 46
Pages: 350
Links: Conference Website

Extended Abstracts of the 2015 ACM/IEEE International Conference on Human-Robot Interaction

Fullname: Extended Abstracts of the 10th ACM/IEEE International Conference on Human-Robot Interaction
Editors: Julie A. Adams; William Smart; Bilge Mutlu; Leila Takayama
Location: Portland, Oregon
Dates: 2015-Mar-02 to 2015-Mar-05
Volume: 2
Publisher: ACM
Standard No: ISBN 978-1-4503-3318-4; ACM DL: Table of Contents; hcibib: HRI15-2
Papers: 155
Pages: 311
Links: Conference Website
  1. HRI 2015-03-02 Volume 1
    1. Keynote Address
    2. Session A: Designing Robots for Everyday Interaction
    3. Session B: Robot Motion
    4. Session C: Robots & Children
    5. Keynote Address
    6. Session D: Perceptions of Robots
    7. Session E: Robots as Social Agents
    8. Session F: Human-Robot Teams
    9. Keynote Address
    10. Session G: Multi-modal Capabilities
    11. Session H: Human Behaviors, Activities, and Environments, Part 1
    12. Session I: Human Behaviors, Activities, and Environments, Part 2
  2. HRI 2015-03-02 Volume 2
    1. Late-Breaking Reports -- Session 1
    2. Late-Breaking Reports -- Session 2
    3. Late-Breaking Reports -- Session 3
    4. HRI Pioneers -- Poster Session 1
    5. HRI Pioneers -- Poster Session 2
    6. HRI Pioneers -- Poster Session 3
    7. Workshops
    8. Videos
    9. Demonstrations -- Session 1
    10. Demonstrations -- Session 2
    11. Demonstrations -- Session 3

HRI 2015-03-02 Volume 1

Keynote Address

Design Everything by Yourself, p. 1
  Takeo Igarashi
I will introduce our research project (the Design Interface Project), which aims to develop various design tools for end users. We live in a mass-production society today, and everyone buys and uses the same things all over the world. This is cheap, but not necessarily ideal for individuals. We envision that computer tools that help people design things by themselves can enrich their lives. To that end, we develop innovative interaction techniques for end users to (1) create rich graphics such as three-dimensional models and animations by simple sketching, (2) design their own real-world, everyday objects such as clothing and furniture with real-time physical simulation integrated into a simple geometry editor, and (3) design the behavior of their personal robots and give instructions to them to satisfy their particular needs.

Session A: Designing Robots for Everyday Interaction

Design and Evaluation of a Peripheral Robotic Conversation Companion, pp. 3-10
  Guy Hoffman; Oren Zuckerman; Gilad Hirschberger; Michal Luria; Tal Shani Sherman
We present the design, implementation, and evaluation of a peripheral empathy-evoking robotic conversation companion, Kip1. The robot's function is to increase people's awareness of the effect of their behavior on others, potentially leading to behavior change. Specifically, Kip1 is designed to promote non-aggressive conversation between people. It monitors the conversation's nonverbal aspects and maintains an emotional model of its reaction to the conversation. If the conversation seems calm, Kip1 responds with a gesture designed to communicate curious interest. If the conversation seems aggressive, Kip1 responds with a gesture designed to communicate fear. We describe the design process of Kip1, guided by the principles of being peripheral and evocative. We detail its hardware and software systems, and a study evaluating the effects of the robot's autonomous behavior on couples' conversations. We find support for our design goals. A conversation companion reacting to the conversation led to more gaze attention, but not more verbal distraction, compared to a robot that moves but does not react to the conversation. This suggests that robotic devices could be designed as companions to human-human interaction without compromising the natural communication flow between people. Participants also rated the reacting robot as having significantly more social human character traits and as being significantly more similar to them. This points to the robot's potential to elicit people's empathy.
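The reactive behavior described in this abstract lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation: a decaying running estimate of conversational arousal computed from nonverbal features, mapped to one of the two gestures. The feature names, weights, and threshold are invented for illustration.

```python
# Hypothetical sketch of a Kip1-style reactive loop; the features
# (loudness, speech overlap) and all constants are illustrative.

class ConversationCompanion:
    def __init__(self, decay=0.5, threshold=0.6):
        self.arousal = 0.0          # running estimate of conversational tension
        self.decay = decay          # how quickly past observations fade
        self.threshold = threshold  # tension level that triggers the fear gesture

    def update(self, loudness, overlap):
        """Fold one analysis window of nonverbal features (each in [0, 1])."""
        sample = 0.5 * (loudness + overlap)
        self.arousal = self.decay * self.arousal + (1 - self.decay) * sample
        # Calm conversation -> curious interest; aggressive -> fear.
        return "fear" if self.arousal > self.threshold else "curious_interest"

kip = ConversationCompanion()
for loudness, overlap in [(0.2, 0.1), (0.9, 0.8), (0.95, 0.9)]:
    print(kip.update(loudness, overlap))
```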
Mechanical Ottoman: How Robotic Furniture Offers and Withdraws Support, pp. 11-18
  David Sirkin; Brian Mok; Stephen Yang; Wendy Ju
This paper describes our approach to designing, developing behaviors for, and exploring the use of, a robotic footstool, which we named the mechanical ottoman. By approaching unsuspecting participants and attempting to get them to place their feet on the footstool, and then later attempting to break the engagement and get people to take their feet down, we sought to understand whether and how motion can be used by non-anthropomorphic robots to engage people in joint action. In several embodied design improvisation sessions, we observed a tension between people perceiving the ottoman as a living being, such as a pet, and simultaneously as a functional object that requests they place their feet on it, something they would not ordinarily do with a pet. In a follow-up lab study (N=20), we found that most participants did make use of the footstool, although several chose not to place their feet on it for this reason. We also found that participants who rested their feet understood a brief lift-and-drop movement as a request to withdraw, and formed detailed notions about the footstool's agenda, ascribing intentions based on its movement alone.
Communicating Directionality in Flying Robots, pp. 19-26
  Daniel Szafir; Bilge Mutlu; Terry Fong
Small flying robots represent a rapidly emerging family of robotic technologies with aerial capabilities that enable unique forms of assistance in a variety of collaborative tasks. Such tasks will necessitate interaction with humans in close proximity, requiring that designers consider human perceptions regarding robots flying and acting within human environments. We explore the design space regarding explicit robot communication of flight intentions to nearby viewers. We apply design constraints to robot flight behaviors, using biological and airplane flight as inspiration, and develop a set of signaling mechanisms for visually communicating directionality while operating under such constraints. We implement our designs on two commercial flyers, requiring little modification to the base platforms, and evaluate each signaling mechanism, as well as a no-signaling baseline, in a user study in which participants were asked to predict robot intent. We found that three of our designs significantly improved viewer response time and accuracy over the baseline and that the form of the signal offered tradeoffs in precision, generalizability, and perceived robot usability.
The Privacy-Utility Tradeoff for Remotely Teleoperated Robots, pp. 27-34
  Daniel J. Butler; Justin Huang; Franziska Roesner; Maya Cakmak
Though teleoperated robots have become common for more extreme tasks such as bomb disposal, search-and-rescue, and space exploration, they are not commonly used in human-populated environments for more ordinary tasks such as house cleaning or cooking. This presents near-term opportunities for teleoperated robots in the home. However, a teleoperator's remote presence in a consumer's home presents serious security and privacy risks, and the concerns of end-users about these risks may hinder the adoption of such in-home robots. In this paper, we define and explore the privacy-utility tradeoff for remotely teleoperated robots: as we reduce the quantity or fidelity of visual information received by the teleoperator to preserve the end-user's privacy, we must balance this against the teleoperator's need for sufficient information to successfully carry out tasks. We explore this tradeoff with two surveys that provide a framework for understanding the privacy attitudes of end-users, and with a user study that empirically examines the effect of different filters of visual information on the ability of a teleoperator to carry out a task. Our findings include that respondents do desire privacy protective measures from teleoperators, that respondents prefer certain visual filters from a privacy perspective, and that, for the studied task, we can identify a filter that balances privacy with utility. We make recommendations for in-home teleoperation based on these findings.

Session B: Robot Motion

May I help you?: Design of Human-like Polite Approaching Behavior, pp. 35-42
  Yusuke Kato; Takayuki Kanda; Hiroshi Ishiguro
When should service staff initiate interaction with a visitor? Neither a simply proactive strategy (e.g., talking to everyone in sight) nor a passive one (e.g., waiting to be addressed) is desirable. This paper reports our modeling of polite approaching behavior. In a shopping mall, there are service staff members who politely approach visitors who need help. Our analysis revealed that staff members are sensitive to the "intentions" of nearby visitors. That is, when a visitor intends to talk to a staff member and starts to approach, the staff member also walks a few steps toward the visitor in advance of being addressed. Further, even when not being approached, staff members exhibit "availability" behavior when a visitor's intention seems uncertain. We modeled these behaviors, which adapt to pedestrians' intentions and occur prior to the initiation of conversation. The model was implemented in a robot and tested in a real shopping mall. The experiment confirmed that the proposed method is less intrusive to pedestrians, and that our robot successfully initiated interaction with pedestrians.
Robot Form and Motion Influences Social Attention, pp. 43-50
  Alvin X. Li; Maria Florendo; Luke E. Miller; Hiroshi Ishiguro; Ayse P. Saygin
For social robots to be successful, they need to be accepted by humans. Human-robot interaction (HRI) researchers are aware of the need to develop the right kinds of robots with appropriate, natural ways for them to interact with humans. However, much of human perception and cognition occurs outside of conscious awareness, and how robotic agents engage these processes is currently unknown. Here, we explored automatic, reflexive social attention, which operates outside of conscious control within a fraction of a second, to discover whether and how these processes generalize to agents with varying humanlikeness in their form and motion. Using a social variant of a well-established spatial attention paradigm, we tested whether robotic or human appearance and/or motion influenced an agent's ability to capture or direct implicit social attention. In each trial, either images or videos of agents looking to one side of space (a head turn) were presented to human observers. We measured reaction time to a peripheral target as an index of attentional capture and direction. We found that all agents, regardless of humanlike form or motion, were able to direct spatial attention in the cued direction. However, differences in the form of the agent affected attentional capture, i.e., how quickly observers could disengage attention from the agent and respond to the target. This effect further interacted with whether the spatial cue (head turn) was presented through static images or videos. Overall, whereas reflexive social attention operated in the same manner for human and robot agents in spatial attentional cueing, robotic appearance, as well as whether the agent was static or moving, significantly influenced unconscious attentional capture. These studies reveal how unconscious social attentional processes operate when the agent is a human versus a robot, add novel manipulations to the literature such as the role of visual motion, and provide a link between attention studies in HRI and decades of research on unconscious social attention in experimental psychology and vision science.
Effects of Robot Motion on Human-Robot Collaboration, pp. 51-58
  Anca D. Dragan; Shira Bauman; Jodi Forlizzi; Siddhartha S. Srinivasa
Most motion in robotics is purely functional, planned to achieve the goal and avoid collisions. Such motion is adequate in isolation, but collaboration involves a human who watches the motion and makes inferences about it, trying to coordinate with the robot to achieve the task. This paper analyzes how planning motion that explicitly supports the collaborator's inferences affects the success of physical collaboration, as measured by both objective and subjective metrics. Results suggest that legible motion, planned to clearly express the robot's intent, leads to more fluent collaborations than predictable motion, planned to match the collaborator's expectations. Furthermore, purely functional motion can harm coordination, which negatively affects both task efficiency and the participants' perception of the collaboration.

Session C: Robots & Children

Escaping from Children's Abuse of Social Robots, pp. 59-66
  Drazen Brscic; Hiroyuki Kidokoro; Yoshitaka Suehiro; Takayuki Kanda
Social robots working in public spaces often stimulate children's curiosity. However, children sometimes also show abusive behavior toward robots. In our case studies, we frequently observed children persistently obstructing the robot's activity; some verbally abused the robot, and at times even kicked or punched it. We developed a statistical model of the occurrence of children's abuse. Using this model together with a simulator of pedestrian behavior, we enabled the robot to predict the possibility of an abuse situation and escape before it happens. We demonstrated that with the model the robot successfully lowered the occurrence of abuse in a real shopping mall.
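The escape mechanism sketched in this abstract can be pictured as a risk estimate gating a pre-emptive maneuver. Below is a toy stand-in, not the authors' statistical model: a logistic estimate of abuse probability over invented context features, with escape triggered above a threshold.

```python
import math

# Hypothetical abuse-risk model; the features and weights are invented
# placeholders for what the paper learns from field observations.

def abuse_probability(n_children, adult_present, seconds_interacting):
    z = -3.0 + 1.2 * n_children - 2.0 * adult_present + 0.02 * seconds_interacting
    return 1.0 / (1.0 + math.exp(-z))   # logistic link

def should_escape(context, threshold=0.5):
    # Escape pre-emptively once the predicted abuse risk exceeds the threshold.
    return abuse_probability(**context) > threshold

context = {"n_children": 4, "adult_present": 0, "seconds_interacting": 30}
print(f"p(abuse) = {abuse_probability(**context):.2f}",
      "-> escape" if should_escape(context) else "-> stay")
```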
The Robot Who Tried Too Hard: Social Behaviour of a Robot Tutor Can Negatively Affect Child Learning, pp. 67-74
  James Kennedy; Paul Baxter; Tony Belpaeme
Social robots are finding increasing application in the domain of education, particularly for children, to support and augment learning opportunities. With an implicit assumption that social and adaptive behaviour is desirable, it is therefore of interest to determine precisely how these aspects of behaviour may be exploited in robots to support children in their learning. In this paper, we explore this issue by evaluating the effect of a social robot tutoring strategy with children learning about prime numbers. It is shown that the tutoring strategy itself leads to improvement, but that the presence of a robot employing this strategy amplifies this effect, resulting in significant learning. However, it was also found that children interacting with a robot using social and adaptive behaviours in addition to the teaching strategy did not learn a significant amount. These results indicate that while the presence of a physical robot leads to improved learning, caution is required when applying social behaviour to a robot in a tutoring context.
Emotional Storytelling in the Classroom: Individual versus Group Interaction between Children and Robots, pp. 75-82
  Iolanda Leite; Marissa McCoy; Monika Lohani; Daniel Ullman; Nicole Salomons; Charlene Stokes; Susan Rivers; Brian Scassellati
Robot assistive technology is becoming increasingly prevalent. Despite the growing body of research in this area, the effect of the type of interaction (i.e., small groups versus individual interactions) on the effectiveness of interventions is still unclear. In this paper, we explore a new direction for socially assistive robotics, where multiple robotic characters interact with children in an interactive storytelling scenario. We conducted a between-subjects repeated interaction study where a single child or a group of three children interacted with the robots in an interactive narrative scenario. Results show that although the individual condition improved participants' story recall compared to the group condition, the emotional interpretation of the story content seemed to depend more on the difficulty level than on the study condition. Our findings suggest that, regardless of the type of interaction, interactive narratives with multiple robots are a promising approach to foster children's development of social skills.
When Children Teach a Robot to Write: An Autonomous Teachable Humanoid Which Uses Simulated Handwriting, pp. 83-90
  Deanna Hood; Séverin Lemaignan; Pierre Dillenbourg
This article presents a novel robotic partner to which children can teach handwriting. The system relies on the learning-by-teaching paradigm to build an interaction, so as to stimulate meta-cognition, empathy and increased self-esteem in the child user. We hypothesise that a humanoid robot in such a system could not only engage an unmotivated student, but could also give children the physically induced benefits encountered during human-led handwriting interventions, such as motor mimicry. By leveraging simulated handwriting on a synchronised tablet display, a NAO humanoid robot with limited fine motor capabilities has been configured as a suitably embodied handwriting partner. Statistical shape models derived from principal component analysis of a dataset of adult-written letter trajectories allow the robot to draw purposefully deformed letters. By incorporating feedback from user demonstrations, the system is then able to learn the optimal parameters for the appropriate shape models. Preliminary in situ studies have been conducted with primary school classes to obtain insight into children's use of the novel system. Children aged 6-8 successfully engaged with the robot and improved its writing to a level with which they were satisfied. The validation of the interaction represents a significant step towards an innovative use of robotics that addresses a widespread and socially meaningful challenge in education.
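The shape-model machinery the abstract mentions is standard PCA over aligned trajectories. The sketch below, on synthetic data rather than the adult-written letter dataset, shows the two operations such a system needs: building the model and synthesizing deliberately deformed letters by perturbing component weights.

```python
import numpy as np

# Stand-in for a dataset of adult-written letter trajectories:
# 100 examples, each a flattened (x, y) trajectory of T points.
rng = np.random.default_rng(0)
T = 50
t = np.linspace(0, 2 * np.pi, T)
base = np.concatenate([np.cos(t), np.sin(t)])           # a placeholder "letter"
data = base + 0.05 * rng.standard_normal((100, 2 * T))

# PCA via SVD: mean shape plus principal modes of variation.
mean = data.mean(axis=0)
_, _, Vt = np.linalg.svd(data - mean, full_matrices=False)
components = Vt[:3]                                      # top 3 shape modes

def draw_letter(weights):
    """Synthesize a letter from shape-model weights; nonzero weights deform it."""
    flat = mean + weights @ components
    return flat.reshape(2, T).T                          # (T, 2) x-y trajectory

deformed = draw_letter(np.array([2.0, -1.5, 0.5]))       # purposefully "sloppy"
mean_letter = draw_letter(np.zeros(3))
print(deformed.shape, mean_letter.shape)                 # (50, 2) (50, 2)
```

Learning from a child's demonstration then amounts to moving the weights toward the projection of the demonstrated trajectory onto the same components.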
Can Children Catch Curiosity from a Social Robot?, pp. 91-98
  Goren Gordon; Cynthia Breazeal; Susan Engel
Curiosity is key to learning, yet school children show wide variability in their eagerness to acquire information. Recent research suggests that other people have a strong influence on children's exploratory behavior. Would a curious robot elicit children's exploration and the desire to find out new things? To answer this question, we designed a novel experimental paradigm in which a child plays an educational tablet app with an autonomous social robot that is portrayed as a younger peer. We manipulated the robot's behavior to be either curiosity-driven or not, and measured the child's curiosity after the interaction. We show that some of the children's curiosity measures are significantly higher after interacting with a curious robot than with a non-curious one, while others are not. These results suggest that interacting with an autonomous, curious social robot can selectively guide and promote children's curiosity.
Comparing Models of Disengagement in Individual and Group Interactions, pp. 99-105
  Iolanda Leite; Marissa McCoy; Daniel Ullman; Nicole Salomons; Brian Scassellati
Changes in the type of interaction (e.g., individual vs. group interactions) can potentially impact data-driven models developed for social robots. In this paper, we provide a first investigation of the effects of changing group size on data-driven models for HRI, by analyzing how a model trained on data collected from participants interacting individually performs on test data collected from group interactions, and vice versa. A model combining data from both individual and group interactions is also investigated. We perform these experiments in the context of predicting disengagement behaviors in children interacting with two social robots. Our results show that a model trained with group data generalizes better to individual participants than the other way around. The mixed model appears to be a good compromise, but it does not achieve the performance levels of the models trained for a specific type of interaction.
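The evaluation protocol is easy to mirror in code. The sketch below uses fabricated stand-in features purely to show the train-on-one-condition, test-on-the-other comparison plus the mixed model; it is not the paper's data or classifier choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def fake_sessions(n, shift):
    """Placeholder for engagement features; `shift` mimics a condition effect."""
    X = rng.standard_normal((n, 5)) + shift
    y = (X[:, 0] + 0.5 * rng.standard_normal(n) > shift).astype(int)
    return X, y

X_ind, y_ind = fake_sessions(200, 0.0)    # individual interactions
X_grp, y_grp = fake_sessions(200, 0.3)    # group interactions
X_mix, y_mix = np.vstack([X_ind, X_grp]), np.hstack([y_ind, y_grp])

for name, (Xtr, ytr), (Xte, yte) in [
    ("individual -> group", (X_ind, y_ind), (X_grp, y_grp)),
    ("group -> individual", (X_grp, y_grp), (X_ind, y_ind)),
    ("mixed -> group",      (X_mix, y_mix), (X_grp, y_grp)),
]:
    clf = LogisticRegression().fit(Xtr, ytr)
    auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```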

Keynote Address

Of Robots, Humans, Bodies and Intelligence: Body Languages for Human Robot Interaction, p. 107
  Antonio Bicchi
Modern approaches to the design of robots with increasing amounts of embodied intelligence affect human-robot interaction paradigms. The physical structure of robots is evolving from traditional rigid, heavy industrial machines into soft bodies exhibiting new levels of versatility, adaptability, safety, elasticity, dynamism and energy efficiency. New challenges and opportunities arise for the control of soft robots: for instance, carefully planning for collision avoidance may no longer be a dominating concern; on the contrary, physical interaction with the environment becomes not only allowed but even desirable for solving complex tasks. To address these challenges, it is often useful to look at how humans use their own bodies in similar tasks, and in some cases even to establish a direct dialogue between the natural and artificial counterparts.

Session D: Perceptions of Robots

Rabble of Robots Effects: Number and Form of Robots Modulates Attitudes, Emotions, and Stereotypes, pp. 109-116
  Marlena R. Fraune; Steven Sherrin; Selma Sabanovic; Eliot R. Smith
Robots are expected to become present in society in increasing numbers, yet few studies in human-robot interaction (HRI) go beyond one-to-one interaction to examine how emotions, attitudes, and stereotypes expressed toward groups of robots differ from those expressed toward individuals. Research from social psychology indicates that people interact differently with individuals than with groups. We therefore hypothesize that group effects might similarly occur when people face multiple robots. Further, group effects might vary for robots of different types. In this exploratory study, we used videos to expose participants in a between-subjects experiment to robots varying in Number (Single or Group) and Type (anthropomorphic, zoomorphic, or mechanomorphic). We then measured participants' general attitudes, emotions, and stereotypes toward robots with a combination of measures from HRI (e.g., Godspeed Questionnaire, NARS) and social psychology (e.g., Big Five, Social Threat, Emotions). Results suggest that Number and Type of observed robots had an interaction effect on responses toward robots in general, leading to more positive responses for groups for some robot types, but more negative responses for others.
Sacrifice One For the Good of Many?: People Apply Different Moral Norms to Human and Robot Agents, pp. 117-124
  Bertram F. Malle; Matthias Scheutz; Thomas Arnold; John Voiklis; Corey Cusimano
Moral norms play an essential role in regulating human interaction. With the growing sophistication and proliferation of robots, it is important to understand how ordinary people apply moral norms to robot agents and make moral judgments about their behavior. We report the first comparison of people's moral judgments (of permissibility, wrongness, and blame) about human and robot agents. Two online experiments (total N = 316) found that robots, compared with human agents, were more strongly expected to take an action that sacrifices one person for the good of many (a "utilitarian" choice), and they were blamed more than their human counterparts when they did not make that choice. Though the utilitarian sacrifice was generally seen as permissible for human agents, they were blamed more for choosing this option than for doing nothing. These results provide a first step toward a new field of Moral HRI, which is well placed to help guide the design of social robots.
Poor Thing! Would You Feel Sorry for a Simulated Robot?: A comparison of empathy toward a physical and a simulated robot, pp. 125-132
  Stela H. Seo; Denise Geiskkovitch; Masayuki Nakane; Corey King; James E. Young
In designing and evaluating human-robot interactions and interfaces, researchers often use a simulated robot due to the high cost of robots and the time required to program them. However, it is important to consider how interaction with a simulated robot differs from a real robot; that is, do simulated robots provide authentic interaction? We contribute to a growing body of work that explores this question and maps out simulated-versus-real differences, by explicitly investigating empathy: how people empathize with a physical or simulated robot when something bad happens to it. Our results suggest that people may empathize more with a physical robot than a simulated one, a finding that has important implications for the generalizability and applicability of simulated HRI work. Empathy is particularly relevant to social HRI and is integral to, for example, companion and care robots. Our contribution additionally includes an original and reproducible HRI experimental design to induce empathy toward robots in laboratory settings, and an experimentally validated empathy-measuring instrument from psychology for use with HRI.
Observer Perception of Dominance and Mirroring Behavior in Human-Robot Relationships, pp. 133-140
  Jamy Li; Wendy Ju; Cliff Nass
How people view relationships between humans and robots is an important consideration for the design and acceptance of social robots. Two studies investigated the effect of relational behavior in a human-robot dyad. In Study 1, participants watched videos of a human confederate discussing the Desert Survival Task with either another human confederate or a humanoid robot. Participants were less trusting of both the robot and the person in a human-robot relationship where the robot was dominant toward the person than when the person was dominant toward the robot; these differences were not found for a human pair. In Study 2, participants watched videos of a human confederate having an everyday conversation with either another human confederate or a humanoid robot. Participants who saw a confederate mirror the gestures of a robot found the robot less attractive than when the robot mirrored the confederate; the opposite effect was found for a human pair. Exploratory findings suggest that human-robot relationships are viewed differently than human dyads.

Session E: Robots as Social Agents

Would You Trust a (Faulty) Robot?: Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust, pp. 141-148
  Maha Salem; Gabriella Lakatos; Farshid Amirabdollahian; Kerstin Dautenhahn
How do mistakes made by a robot affect its trustworthiness and acceptance in human-robot collaboration? We investigate how the perception of erroneous robot behavior may influence human interaction choices and the willingness to cooperate with the robot by following a number of its unusual requests. For this purpose, we conducted an experiment in which participants interacted with a home companion robot in one of two experimental conditions: (1) the correct mode or (2) the faulty mode. Our findings reveal that, while significantly affecting subjective perceptions of the robot and assessments of its reliability and trustworthiness, the robot's performance does not seem to substantially influence participants' decisions to (not) comply with its requests. However, our results further suggest that the nature of the task requested by the robot, e.g. whether its effects are revocable as opposed to irrevocable, has a significant impact on participants' willingness to follow its instructions.
Moderating a Robot's Ability to Influence People Through its Level of Sociocontextual Interactivity, pp. 149-156
  Sonja Caraian; Nathan Kirchner; Peter Colborne-Veel
A range of situations exist in which it would be useful to influence people's behavior in public spaces, for example to improve the efficiency of passenger flow in congested train stations. We have identified our previously developed Robot Centric paradigm of Human-Robot Interaction (HRI), which positions robots as Interaction Peers, as a potentially suitable model to achieve more effective influence through defining and exploiting the interactivity of robots (that is, their ability to moderate their issued sociocontextual cues based on the behavioral information read from humans). In this paper, we investigate whether increasing a robot's interactivity will increase the effectiveness of its influence on people in public spaces. A two-part study (total n = 273) was conducted in both a major Australian public train station (n = 84 + 105) and a university (n = 84) where passersby encountered a robot, designed with various levels of interactivity, which attempted to influence their passage. The findings suggest that the Robot Centric HRI paradigm generalizes to other robots and application spaces, and enables deliberate moderation of a robot's interactivity, facilitating more nuanced, predictable and systematic influence, and thus yielding greater effectiveness.
Effects of Culture on the Credibility of Robot Speech: A Comparison between English and Arabic, pp. 157-164
  Sean Andrist; Micheline Ziadee; Halim Boukaram; Bilge Mutlu; Majd Sakr
As social robots begin to enter our lives as providers of information, assistance, companionship, and motivation, it becomes increasingly important that these robots are capable of interacting effectively with human users across different cultural settings worldwide. A key capability in establishing acceptance and usability is the way in which robots structure their speech to build credibility and express information in a meaningful and persuasive way. Previous work has established that robots can use speech to improve credibility in two ways: expressing practical knowledge and using rhetorical linguistic cues. In this paper, we present two studies that build on prior work to explore the effects of language and cultural context on the credibility of robot speech. In the first study (n=96), we compared the relative effectiveness of knowledge and rhetoric on the credibility of robot speech between Arabic-speaking robots in Lebanon and English-speaking robots in the USA, finding the rhetorical linguistic cues to be more important in Arabic than in English. In the second study (n=32), we compared the effectiveness of credible robot speech between robots speaking either Modern Standard Arabic or the local Arabic dialect, finding the expression of both practical knowledge and rhetorical ability to be most important when using the local dialect. These results reveal nuanced cultural differences in perceptions of robots as credible agents and have important implications for the design of human-robot interactions across Arabic and Western cultures.
Evidence that Robots Trigger a Cheating Detector in Humans, pp. 165-172
  Alexandru Litoiu; Daniel Ullman; Jason Kim; Brian Scassellati
Short et al. found that in a game between a human participant and a humanoid robot, the participant will perceive the robot as being more agentic and as having more intentionality if it cheats than if it plays without cheating. However, in that design, the robot that actively cheated also generated more motion than the other conditions. In this paper, we investigate whether the additional movement of the cheating gesture is responsible for the increased agency and intentionality or whether the act of cheating itself triggers this response. In a between-participant design with 83 participants, we disambiguate between these causes by testing (1) the cases of the robot cheating to win, (2) cheating to lose, (3) cheating to tie from a winning position, and (4) cheating to tie from a losing position. Despite the fact that the robot changes its gesture to cheat in all four conditions, we find that participants are more likely to report the gesture change when the robot cheated to win from a losing position, compared with the other conditions. Participants in that same condition are also far more likely to protest in the form of an utterance following the cheat and report that the robot is less fair and honest. It is therefore the adversarial cheat itself that causes the effect and not the change in gesture, providing evidence for a cheating detector that can be triggered by robots.
Will People Keep the Secret of a Humanoid Robot?: Psychological Intimacy in HRI, pp. 173-180
  Peter H. Kahn, Jr.; Takayuki Kanda; Hiroshi Ishiguro; Brian T. Gill; Solace Shen; Heather E. Gary; Jolina H. Ruckert
Will people keep the secret of a socially compelling robot who shares, in confidence, a "personal" (robot) failing? Toward answering this question, 81 adults participated in a 20-minute interaction with (a) a humanoid robot (Robovie) interacting in a highly social way as a lab tour guide, and (b) a human being interacting in the same highly social way. As a baseline comparison, participants also interacted with (c) a humanoid robot (Robovie) interacting in a more rudimentary social way. In each condition, the tour guide asked the participant to keep a secret. Results showed that the majority of participants (59%) kept the secret of the highly social robot and did not tell the experimenter when asked directly, with the robot present. This percentage did not differ statistically from the percentage who kept the human's secret (67%). It did differ statistically from the condition in which the robot engaged in the more rudimentary social interaction (11%). These results suggest that as humanoid robots become increasingly social in their interaction, people will form increasingly intimate and trusting psychological relationships with them. Discussion focuses on design principles (how to engender psychological intimacy in human-robot interaction) and norms (whether it is even desirable to do so, and if so, in what contexts).
Robot Presence and Human Honesty: Experimental Evidence, pp. 181-188
  Guy Hoffman; Jodi Forlizzi; Shahar Ayal; Aaron Steinfeld; John Antanitis; Guy Hochman; Eric Hochendoner; Justin Finkenaur
Robots are predicted to serve in environments in which human honesty is important, such as the workplace, schools, and public institutions. Can the presence of a robot facilitate honest behavior? In this paper, we describe an experimental study evaluating the effects of robot social presence on people's honesty. Participants completed a perceptual task that is structured so as to allow them to earn more money by not complying with the experiment instructions. We compare three conditions between subjects: completing the task alone in a room; completing it with a non-monitoring human present; and completing it with a non-monitoring robot present. The robot is a new expressive social head capable of 4-DoF head movement and screen-based eye animation, specifically designed and built for this research. It was designed to convey social presence, but not monitoring. We find that people cheat in all three conditions, but cheat less, and to a similar extent, when there is a human or a robot in the room than when they are alone. We did not find differences in the perceived authority of the human and the robot, but did find that people felt significantly less guilty after cheating in the presence of a robot as compared to a human. This has implications for the use of robots in monitoring and supervising tasks in environments in which honesty is key.

Session F: Human-Robot Teams

Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks, pp. 189-196
  Stefanos Nikolaidis; Ramya Ramakrishnan; Keren Gu; Julie Shah
We present a framework for automatically learning human user models from joint-action demonstrations that enables a robot to compute a robust policy for a collaborative task with a human. First, the demonstrated action sequences are clustered into different human types using an unsupervised learning algorithm. A reward function is then learned for each type through the employment of an inverse reinforcement learning algorithm. The learned model is then incorporated into a mixed-observability Markov decision process (MOMDP) formulation, wherein the human type is a partially observable variable. With this framework, we can infer online the human type of a new user that was not included in the training set, and can compute a policy for the robot that will be aligned to the preference of this user. In a human subject experiment (n=30), participants agreed more strongly that the robot anticipated their actions when working with a robot incorporating the proposed framework (p<0.01), compared to manually annotating robot actions. In trials where participants faced difficulty annotating the robot actions to complete the task, the proposed framework significantly improved team efficiency (p<0.01). The robot incorporating the framework was also found to be more responsive to human actions compared to policies computed using a hand-coded reward function by a domain expert (p<0.01). These results indicate that learning human user models from joint-action demonstrations and encoding them in a MOMDP formalism can support effective teaming in human-robot collaborative tasks.
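The online half of this framework can be pictured as a Bayes filter over the latent human type. The sketch below is illustrative only: the per-type action model and the type-conditioned policies are invented placeholders for what the clustering and inverse reinforcement learning stages would produce.

```python
import numpy as np

N_TYPES = 2

# p(human action | human type): in the paper this would come from the
# clustered demonstrations; here it is a made-up placeholder.
action_model = np.array([[0.7, 0.2, 0.1],
                         [0.1, 0.3, 0.6]])

# Type-conditioned robot policies (placeholders for MOMDP solutions).
policies = {0: "support_left_side", 1: "support_right_side"}

belief = np.full(N_TYPES, 1.0 / N_TYPES)   # uniform prior over human types
for human_action in [0, 0, 1, 0]:          # observed human action indices
    belief *= action_model[:, human_action]
    belief /= belief.sum()                 # Bayesian update of the type belief
    robot_action = policies[int(np.argmax(belief))]
    print(np.round(belief, 2), "->", robot_action)
```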
Bounds of Neglect Benevolence in Input Timing for Human Interaction with Robotic Swarms, pp. 197-204
  Sasanka Nagavalli; Shih-Yi Chien; Michael Lewis; Nilanjan Chakraborty; Katia Sycara
Robotic swarms are distributed systems whose members interact via local control laws to achieve a variety of behaviors, such as flocking. In many practical applications, human operators may need to redirect a swarm from its current goal to a new goal due to dynamic changes in mission objectives. Two related but distinct capabilities are needed to supervise a robotic swarm: comprehension of the swarm's state, and prediction of the effects of human inputs on the swarm's behavior. Both are very challenging. Prior work has shown that inserting the human input as soon as possible to divert the swarm from its original goal toward the new goal does not always yield optimal performance (measured by some criterion such as the total time required by the swarm to reach the second goal). This phenomenon has been called Neglect Benevolence, conveying the idea that in many cases it is preferable to neglect the swarm for some time before inserting human input. In this paper, we study how humans can develop an understanding of swarm dynamics so they can predict the effects of the timing of their input on the state and performance of the swarm. We developed the shape-changing Neglect Benevolence Task as a Human-Swarm Interaction (HSI) reference task that allows comparison between human and optimal input-timing performance in the control of swarms. Our results show that humans can learn to approximate optimal timing, and that displays which make consensus variables perceptually accessible can enhance performance.
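The notion of an optimal input time can be made concrete with a sweep: simulate the swarm, insert the new-goal input at different times, and measure time-to-goal. The toy consensus dynamics below are a simplified stand-in for the paper's configuration-changing task; in this linear system early input happens to win, whereas the richer dynamics studied in the paper are exactly those where a delayed input can be optimal.

```python
import numpy as np

def settle_time(switch_step, steps=400, n=8, tol=0.05):
    """Steps until all agents are within tol of the second goal."""
    rng = np.random.default_rng(2)
    pos = 2.0 * rng.standard_normal((n, 2))
    goal_a, goal_b = np.array([5.0, 0.0]), np.array([0.0, 5.0])
    for k in range(steps):
        goal = goal_a if k < switch_step else goal_b  # operator input at switch_step
        center = pos.mean(axis=0)
        # Local control law: cohesion toward the swarm center plus goal seeking.
        pos += 0.05 * (center - pos) + 0.05 * (goal - pos)
        if k >= switch_step and np.abs(pos - goal_b).max() < tol:
            return k
    return steps

times = {s: settle_time(s) for s in range(0, 120, 20)}
print(times, "-> best input time:", min(times, key=times.get))
```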
Interactive Hierarchical Task Learning from a Single Demonstration, pp. 205-212
  Anahita Mohseni-Kabir; Charles Rich; Sonia Chernova; Candace L. Sidner; Daniel Miller
We have developed learning and interaction algorithms to support a human teaching hierarchical task models to a robot using a single demonstration in the context of a mixed-initiative interaction with bi-directional communication. In particular, we have identified and implemented two important heuristics for suggesting task groupings based on the physical structure of the manipulated artifact and on the data flow between tasks. We have evaluated our algorithms with users in a simulated environment and shown both that the overall approach is usable and that the grouping suggestions significantly improve the learning and interaction.
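One of the two grouping heuristics, data flow between tasks, reduces to checking whether one task's output feeds the next. A toy version, with a hypothetical assembly task list rather than anything from the paper, might look like this:

```python
# Suggest grouping consecutive primitive tasks when the output of one is
# consumed by the next (the data-flow heuristic). The task list is invented.

tasks = [
    {"name": "fetch_wheel",  "produces": {"wheel"},         "consumes": set()},
    {"name": "attach_wheel", "produces": {"wheel_on_axle"}, "consumes": {"wheel"}},
    {"name": "fetch_nut",    "produces": {"nut"},           "consumes": set()},
    {"name": "tighten_nut",  "produces": {"secured_wheel"}, "consumes": {"nut", "wheel_on_axle"}},
]

def suggest_groups(tasks):
    groups, current = [], [tasks[0]]
    for prev, nxt in zip(tasks, tasks[1:]):
        if prev["produces"] & nxt["consumes"]:   # data flows prev -> nxt
            current.append(nxt)
        else:
            groups.append(current)
            current = [nxt]
    groups.append(current)
    return [[t["name"] for t in group] for group in groups]

print(suggest_groups(tasks))
# [['fetch_wheel', 'attach_wheel'], ['fetch_nut', 'tighten_nut']]
```

In the interactive setting, such suggestions would be offered to the human teacher for confirmation rather than applied automatically.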
How Robot Verbal Feedback Can Improve Team Performance in Human-Robot Task Collaborations, pp. 213-220
  Aaron St. Clair; Maja Mataric
We detail an approach to planning effective verbal feedback during pairwise human-robot task collaboration. The approach is motivated by social science literature as well as existing work in robotics, and is applicable to a variety of task scenarios. We evaluate it on a dynamic, synthetic task implemented in an augmented reality environment, combining robot task control and speech production so that the robot can actively participate in the task and communicate with its teammate. A user study was conducted to experimentally validate the efficacy of the approach on a task in which a single user collaborates with an autonomous robot. The results demonstrate that the approach improves both objective measures of team performance and the user's subjective evaluation of both the task and the robot as a teammate.
OPTIMo: Online Probabilistic Trust Inference Model for Asymmetric Human-Robot Collaborations, pp. 221-228
  Anqi Xu; Gregory Dudek
We present OPTIMo: an Online Probabilistic Trust Inference Model for quantifying the degree of trust that a human supervisor has in an autonomous robot "worker". Represented as a Dynamic Bayesian Network, OPTIMo infers beliefs over the human's moment-to-moment latent trust states, based on the history of observed interaction experiences. A separate model instance is trained on each user's experiences, leading to an interpretable and personalized characterization of that operator's behaviors and attitudes. Using datasets collected from an interaction study with a large group of roboticists, we empirically assess OPTIMo's performance under a broad range of configurations. These evaluation results highlight OPTIMo's advances in both prediction accuracy and responsiveness over several existing trust models. This accurate and near real-time human-robot trust measure makes possible the development of autonomous robots that can adapt their behaviors dynamically, to actively seek greater trust and greater efficiency within future human-robot collaborations.
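A heavily simplified filter in the spirit of OPTIMo, though not the published model, helps make the idea concrete: latent trust is discretized, drifts with robot task outcomes, and operator interventions act as evidence of low trust. All probabilities below are invented placeholders.

```python
import numpy as np

levels = np.linspace(0.0, 1.0, 11)              # discretized latent trust levels
belief = np.full(len(levels), 1.0 / len(levels))

def predict(belief, success):
    """Trust drifts one bin up after a robot success, one bin down after failure."""
    drift = 1 if success else -1
    new = np.zeros_like(belief)
    for i, p in enumerate(belief):
        new[min(max(i + drift, 0), len(belief) - 1)] += p
    return new

def update(belief, intervened):
    """Operators are assumed to intervene more often when trust is low."""
    p_int = 1.0 - 0.8 * levels                  # invented p(intervention | trust)
    likelihood = p_int if intervened else 1.0 - p_int
    post = belief * likelihood
    return post / post.sum()

for success, intervened in [(True, False), (True, False), (False, True)]:
    belief = update(predict(belief, success), intervened)
    print("E[trust] =", round(float(levels @ belief), 3))
```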
Using Robots to Moderate Team Conflict: The Case of Repairing Violations, pp. 229-236
  Malte F. Jung; Nikolas Martelaro; Pamela J. Hinds
We explore whether robots can positively influence conflict dynamics by repairing interpersonal violations that occur during a team-based problem-solving task. In a 2 (negative trigger: task-directed vs. personal attack) x 2 (repair: yes vs. no) between-subjects experiment (N = 57 teams, 114 participants), we studied the effect of a robot intervention on affect, perceptions of conflict, perceptions of team members' contributions, and team performance during a problem-solving task. Specifically, the robot either intervened by repairing a task-directed or personal attack by a confederate or did not intervene. Contrary to our expectations, we found that the robot's repair interventions increased the groups' awareness of conflict after the occurrence of a personal attack thereby acting against the groups' tendency to suppress the conflict. These findings suggest that repair heightened awareness of a normative violation. Overall, our results provide support for the idea that robots can aid team functioning by regulating core team processes such as conflict.
Face the Music and Glance: How Nonverbal Behaviour Aids Human Robot Relationships Based in Music, pp. 237-244
  Louis McCallum; Peter W. McOwan
Our hypothesis is that improvised musical interaction can provide the extended engagement that often eludes long-term Human-Robot Interaction (HRI) trials. Our previous work found that simply framing sessions with our drumming robot Mortimer as social interactions increased both social presence and engagement, two factors we believe are crucial to developing and maintaining a positive and meaningful relationship between human and robot. In this study we investigate the inclusion of additional social modalities, namely head pose and facial expression, as nonverbal behaviour has been shown to be an important conveyor of information in both social and musical contexts. In a 6-week experimental study using automatic behavioural metrics, participants exposed to the nonverbal behaviours not only spent more time voluntarily with the robot, but increased the time they spent as the trial progressed. Further, they interrupted the robot less during social interactions and played for longer uninterrupted. Conversely, they also looked at the robot less in both musical and social contexts. We take these results as support for open-ended musical activity providing a solid grounding for human-robot relationships, and for its improvement through the inclusion of appropriate nonverbal behaviours.

Keynote Address

Chasing Our Science Fiction Future, p. 245
  Daniel H. Wilson
Engineers and researchers, particularly in the field of robotics and human-computer interaction, are often inspired by science fiction futures depicted in novels, on television, and in the movies. For example, Honda's Asimo humanoid robot is said to have been directly inspired by the Astroboy manga series.
   In turn, public perception of science is also shaped by science fiction. For better or worse, broad technological expectations of the future (aesthetic and otherwise) are largely set by exposure to science fiction in popular culture. These depictions have a direct impact on attitudes toward new technology.
   We review some common tropes of science fiction (including the idea of the "singularity" and killer robots) and examine why certain archetypes might persist while others fall by the wayside. From the perspective of a scientist-turned-sci-fi-author, we discuss factors that go into the creation of science fiction and how these factors may or may not correspond to the needs and wants of the actual science community.
   Exposure to science fiction influences scientists and the general public, both to build and adopt new technologies. The inextricable link between science and science fiction helps to determine how and when those futures arrive.

Session G: Multi-modal Capabilities

Shaking Hands and Cooperation in Tele-present Human-Robot Negotiation, pp. 247-254
  Chris Bevan; Danaë Stanton Fraser
A 3 x 2 between-subjects design examined the effect of shaking hands prior to engaging in a single-issue distributive negotiation, where one negotiator performed their role tele-presently through a 'Nao' humanoid robot. A third condition, handshaking with feedback, examined the effect of augmenting the tele-present handshake with haptic and tactile feedback for the non-tele-present and tele-present negotiators respectively.
   Results showed that the shaking of hands prior to negotiating resulted in increased cooperation between negotiators, reflected by economic outcomes that were more mutually beneficial. Despite the fact that the non tele-present negotiator could not see the real face of their counterpart, tele-presence did not affect the degree to which negotiators considered one another to be trustworthy, nor did it affect the degree to which negotiators self-reported as intentionally misleading one another. Negotiators in the more powerful role of buyer rated their impressions of their counterpart more positively, but only if they themselves conducted their negotiations tele-presently.
   Results are discussed in terms of their design implications for social tele-presence robotics.
Speech and Gesture Emphasis Effects for Robotic and Human Communicators: A Direct Comparison, pp. 255-262
  Paul Bremner; Ute Leonards
Emphasis, by means of either pitch accents or beat gestures (rhythmic co-verbal gestures with no semantic meaning), has been shown to serve two main purposes in human communication: syntactic disambiguation and salience. To use beat gestures in this role, interlocutors must be able to integrate them with the speech they accompany. Whether such integration is possible when the multi-modal communication information is produced by a humanoid robot, and whether it is as efficient as for human communicators, are questions that need to be answered to further understanding of the efficacy of humanoid robots for naturalistic human-like communication.
   Here, we present an experiment which, using a fully within-subjects design, shows a marked difference in speech and gesture integration between human and robot communicators, with integration being significantly less effective for the robot. In contrast to beat gestures, the effects of speech emphasis are the same whether the speech is played through a robot or as part of a video of a human. Thus, while integration of speech emphasis and verbal information does occur for robot communicators, integration of non-informative beat gestures and verbal information does not, despite timing and motion profiles comparable to human gestures.
Haptic Human-Robot Affective Interaction in a Handshaking Social Protocol, pp. 263-270
  Mehdi Ammi; Virginie Demulier; Sylvain Caillou; Yoren Gaffary; Yacine Tsalamlal; Jean-Claude Martin; Adriana Tapus
This paper deals with haptic affective social interaction during a greeting handshake between a human and a humanoid robot. The goal of this work is to study how the haptic interaction conveys emotions and, more precisely, how it influences the perception of the dimensions of emotions expressed through the facial expressions of the robot. Moreover, we examine the benefits of multimodality (i.e., visuo-haptic) over monomodality (i.e., visual-only and haptic-only). The experimental results with the Meka robot show that the multimodal conditions presenting high values for grasping force and joint stiffness were evaluated with higher values for the arousal and dominance dimensions than the visual-only condition. Furthermore, the results for the monomodal haptic condition showed that participants discriminated well between the dominance and arousal dimensions of the haptic behaviours presenting low and high values for grasping force and joint stiffness.
Embodied Collaborative Referring Expression Generation in Situated Human-Robot Interaction, pp. 271-278
  Rui Fang; Malcolm Doering; Joyce Y. Chai
To facilitate referential communication between humans and robots and mediate their differences in representing the shared environment, we are exploring embodied collaborative models for referring expression generation (REG). Instead of a single minimum description to describe a target object, episodes of expressions are generated based on human feedback during human-robot interaction. We particularly investigate the role of embodiment such as robot gesture behaviors (i.e., pointing to an object) and human's gaze feedback (i.e., looking at a particular object) in the collaborative process. This paper examines different strategies of incorporating embodiment and collaboration in REG and discusses their possibilities and challenges in enabling human-robot referential communication.
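An episodic, feedback-driven REG loop can be caricatured in a few lines. The scene, attribute ordering, and simulated gaze below are all invented; the point is only the structure the abstract describes: describe, observe gaze feedback, refine verbally, and fall back to a pointing gesture.

```python
# Toy collaborative REG loop; scene and strategies are hypothetical.

scene = [
    {"id": 0, "type": "mug",  "color": "red",  "size": "small"},
    {"id": 1, "type": "mug",  "color": "red",  "size": "large"},
    {"id": 2, "type": "bowl", "color": "blue", "size": "small"},
]
target = scene[1]

def gaze(attrs):
    """Simulated listener: fixates the first object matching the description."""
    return next(o["id"] for o in scene if all(o[a] == target[a] for a in attrs))

attrs, backlog = ["type"], ["color", "size"]
print("episode:", "the " + target["type"])
while gaze(attrs) != target["id"]:
    if backlog:
        attrs.insert(0, backlog.pop(0))           # refine the verbal description
        print("episode:", "the " + " ".join(target[a] for a in attrs))
    else:
        print("episode: <points at the object>")  # embodied fallback gesture
        break
```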
Bringing the Scene Back to the Tele-operator: Auditory Scene Manipulation for Tele-presence Systems, pp. 279-286
  Chaoran Liu; Carlos T. Ishi; Hiroshi Ishiguro
In a tele-operated robot system, the reproduction of auditory scenes, conveying 3D spatial information of sound sources in the remote robot environment, is important for transmitting remote presence to the tele-operator. We propose a tele-presence system that can reproduce and manipulate the auditory scene of a remote robot environment, based on the spatial information of human voices around the robot, matched with the operator's head orientation. On the robot side, voice sources are localized and separated by using multiple microphone arrays and human tracking technologies, while on the operator side, the operator's head movement is tracked and used to relocate the spatial positions of the separated sources. Interaction experiments with humans in the robot environment indicated that the proposed system achieved significantly higher accuracy rates for the perceived direction of sounds, and higher subjective scores for sense of presence and listenability, compared to a baseline system using stereo binaural sounds obtained from two microphones located at the humanoid robot's ears. We also propose three different user interfaces for augmented auditory scene control. Evaluation results indicated higher subjective scores for sense of presence and usability for two of the interfaces (control of voice amplitudes based on virtual robot positioning, and amplification of voices in the frontal direction).
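The relocation step, re-rendering each separated voice at its robot-frame direction offset by the operator's head yaw, can be sketched with simple amplitude panning. This is a bare-bones illustration, not the authors' spatialization pipeline (which works from microphone-array localization), and the signals are synthetic stand-ins.

```python
import numpy as np

def render(sources, head_yaw_deg):
    """sources: list of (signal, azimuth_deg in the robot frame) -> stereo mix."""
    left = right = 0.0
    for signal, azimuth in sources:
        rel = np.deg2rad(azimuth - head_yaw_deg)  # direction relative to the head
        pan = 0.5 * (1.0 + np.sin(rel))           # 1.0 = hard left, 0.0 = hard right
        left = left + pan * signal
        right = right + (1.0 - pan) * signal
    return np.stack([left, right])

t = np.linspace(0, 1, 16000)
voice_a = np.sin(2 * np.pi * 220 * t)             # stand-ins for separated voices
voice_b = np.sin(2 * np.pi * 330 * t)
mix = render([(voice_a, 60.0), (voice_b, -45.0)], head_yaw_deg=30.0)
print(mix.shape)   # (2, 16000): one stereo frame matching the head pose
```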
Environment Perception in the Presence of Kinesthetic or Tactile Guidance Virtual Fixtures, pp. 287-294
  Samuel B. Schorr; Zhan Fan Quek; William R. Provancher; Allison M. Okamura
During multi-lateral collaborative teleoperation, where multiple human or autonomous agents share control of a teleoperation system, it is important to be able to convey individual user intent. One option for conveying the actions and intent of users or autonomous agents is to provide force guidance from one user to another. Under this paradigm, forces would be transmitted from one user to another in order to guide motions and actions. However, the use of force guidance to convey intent can mask environmental force feedback. In this paper we explore the possibility of using tactile feedback, in particular skin deformation feedback, to convey collaborative intent while preserving environmental force perception. An experiment was performed to test the ability of participants to use force guidance and skin deformation guidance to follow a path while interacting with a virtual environment. In addition, we tested the ability of participants to discriminate virtual environment stiffness when receiving either force guidance or skin deformation guidance. We found that skin deformation guidance reduced path-following accuracy, but increased the ability to discriminate environment stiffness when compared with force feedback guidance.

Session H: Human Behaviors, Activities, and Environments, Part 1

Robot-Centric Activity Prediction from First-Person Videos: What Will They Do to Me?, pp. 295-302
  M. S. Ryoo; Thomas J. Fuchs; Lu Xia; J. K. Aggarwal; Larry Matthies
In this paper, we present a core technology to enable robot recognition of human activities during human-robot interactions. In particular, we propose a methodology for early recognition of activities from robot-centric videos (i.e., first-person videos) obtained from a robot's viewpoint during its interaction with humans. Early recognition, which is also known as activity prediction, is an ability to infer an ongoing activity at its early stage. We present an algorithm to recognize human activities targeting the camera from streaming videos, enabling the robot to predict intended activities of the interacting person as early as possible and take fast reactions to such activities (e.g., avoiding harmful events targeting itself before they actually occur). We introduce the novel concept of 'onset' that efficiently summarizes pre-activity observations, and design a recognition approach to consider event history in addition to visual features from first-person videos. We propose to represent an onset using a cascade histogram of time series gradients, and we describe a novel algorithmic setup to take advantage of such onset for early recognition of activities. The experimental results clearly illustrate that the proposed concept of onset enables better/earlier recognition of human activities from first-person videos collected with a robot.
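The 'onset' descriptor can be approximated as histograms of temporal gradients computed over a cascade of time scales on pre-activity feature streams. The sketch below uses a 1-D synthetic feature; the binning and scales are illustrative rather than the paper's exact construction.

```python
import numpy as np

def onset_descriptor(series, scales=(1, 2, 4), bins=8, vrange=(-1.0, 1.0)):
    """Cascade of normalized gradient histograms over a pre-activity window."""
    parts = []
    for s in scales:
        grad = (series[s:] - series[:-s]) / s            # gradient at scale s
        hist, _ = np.histogram(grad, bins=bins, range=vrange)
        parts.append(hist / max(hist.sum(), 1))          # normalize per scale
    return np.concatenate(parts)                         # the onset descriptor

rng = np.random.default_rng(3)
calm = 0.05 * rng.standard_normal(100)                   # e.g. stable optical flow
approach = np.linspace(0, 3, 100) + 0.05 * rng.standard_normal(100)
print(onset_descriptor(calm).round(2))
print(onset_descriptor(approach).round(2))               # mass shifts to + gradients
```

In the paper's setting, such descriptors summarize what happened just before an activity begins, letting a classifier flag an intended activity earlier than features over the activity interval alone would allow.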
Mutual Modelling in Robotics: Inspirations for the Next Steps, pp. 303-310
  Séverin Lemaignan; Pierre Dillenbourg
Mutual modelling, the reciprocal ability to establish a mental model of the other, plays a fundamental role in human interactions. This complex cognitive skill is however difficult to fully apprehend as it encompasses multiple neuronal, psychological and social mechanisms that are generally not easily turned into computational models suitable for robots. This article presents several perspectives on mutual modelling from a range of disciplines, and reflects on how these perspectives can be beneficial to the advancement of social cognition in robotics. We gather here both basic tools (concepts, formalisms, models) and exemplary experimental settings and methods that are of relevance to robotics. This contribution is expected to consolidate the corpus of knowledge readily available to human-robot interaction research, and to foster interest for this fundamentally cross-disciplinary field.
Learning to Interact with a Human Partner BIBAFull-Text 311-318
  Mayada Oudah; Vahan Babushkin; Tennom Chenlinangjia; Jacob W. Crandall
Despite the importance of mutual adaptation in human relationships, online learning is not yet used during most successful human-robot interactions. The lack of online learning in HRI to date can be attributed to at least two unsolved challenges: random exploration (a core component of most online-learning algorithms) and the slow convergence rates of previous online-learning algorithms. However, several recently developed online-learning algorithms have been reported to learn at much faster rates than before, which makes them candidates for use in human-robot interactions. In this paper, we explore the ability of these algorithms to learn to interact with people. Via a user study, we show that these algorithms alone do not consistently learn to collaborate with human partners. Similarly, we observe that humans fail to consistently collaborate with each other in the absence of explicit communication. However, we demonstrate that one algorithm does learn to effectively collaborate with people when paired with a novel cheap-talk communication system. In addition to this technical achievement, this work highlights the need to address AI and HRI synergistically rather than independently.

Session I: Human Behaviors, Activities, and Environments, Part 2

Robots in the Home: Qualitative and Quantitative Insights into Kitchen Organization BIBAFull-Text 319-326
  Elizabeth Cha; Jodi Forlizzi; Siddhartha S. Srinivasa
In the future, we envision domestic robots playing a large role in our everyday lives. This requires robots that can anticipate our needs and preferences and adapt their behavior accordingly. Since current robotics research takes place primarily in laboratory settings, it fails to take into account real users. In this work, we explore how organization occurs in the kitchen through a home study. Our analysis includes qualitative insights into robot behavior during kitchen organization, an open source dataset of real-life kitchens, and a proof-of-concept application of this dataset to the problem of object return.
Are Robots Ready for Administering Health Status Surveys?: First Results from an HRI Study with Subjects with Parkinson's Disease BIBAFull-Text 327-334
  Priscilla Briggs; Matthias Scheutz; Linda Tickle-Degnen
Facial masking is a symptom of Parkinson's disease (PD) in which humans lose the ability to quickly create refined facial expressions. This difficulty of people with PD can be mistaken for apathy or dishonesty by their caregivers and lead to a breakdown in social relationships. We envision future "robot mediators" that could ease tensions in these caregiver-client relationships by intervening when interactions go awry. However, it is currently unknown whether people with PD would even accept a robot as part of their healthcare processes. We thus conducted a first human-robot interaction study to assess the extent to which people with PD are willing to discuss their health status with a robot. We specifically compared a robot interviewer to a human interviewer in a within-subjects design that allowed us to control for individual differences of the subjects with PD caused by their individual disease progression. We found that participants overall reacted positively to the robot, even though they preferred interactions with the human interviewer. Importantly, the robot performed at a human level at maintaining the participants' dignity, which is critical for future social mediator robots for people with PD.
Measuring the Efficacy of Robots in Autism Therapy: How Informative are Standard HRI Metrics? BIBAFull-Text 335-342
  Momotaz Begum; Richard W. Serna; David Kontak; Jordan Allspaw; James Kuczynski; Holly A. Yanco; Jacob Suarez
A significant amount of robotics research over the past decade has shown that many children with autism spectrum disorders (ASD) have a strong interest in robots and robot toys, concluding that robots are potential tools for the therapy of individuals with ASD. However, clinicians, who have the authority to approve robots in ASD therapy, are not convinced about the potential of robots. One major reason is that the research in this domain does not have a strong focus on the efficacy of robots. Robots in ASD therapy are end-user oriented technologies, the success of which depends on their demonstrated efficacy in real settings. This paper focuses on measuring the efficacy of robots in ASD therapy and, based on the data from a feasibility study, shows that the human-robot interaction (HRI) metrics commonly used in this research domain might not be sufficient.
Interaction Expands Function: Social Shaping of the Therapeutic Robot PARO in a Nursing Home BIBAFull-Text 343-350
  Wan-Ling Chang; Selma Šabanovic
We use the "social shaping of technology and society" framework to qualitatively analyze data collected through observation of human-robot interaction (HRI) between social actors in a nursing home (staff, residents, visitors) and the socially assistive robot PARO. The study took place over the course of three months, during which PARO was placed in a publicly accessible space where participants could interact with it freely. Social shaping focuses attention on social factors that affect the use and interpretation of technology in particular contexts. We therefore aimed to understand how different social actors make sense of and use PARO in daily interaction. Our results show that participant gender, social mediation, and individual sense-making led to differential use and interpretation of the robot, which affected the success of human-robot interactions. We also found that exposure to others interacting with PARO affected the nursing staff's perceptions of robots and their potential usefulness in eldercare. This shows that social shaping theory provides a valuable perspective for understanding the implementation of robots in long-term HRI and can inform interaction design in this domain.

HRI 2015-03-02 Volume 2

Late-Breaking Reports -- Session 1

Toward Museum Guide Robots Proactively Initiating Interaction with Humans BIBAFull-Text 1-2
  M. Golam Rashed; R. Suzuki; A. Lam; Y. Kobayashi; Y. Kuno
This paper describes current work toward the design of a guide robot system. We present a method to recognize people's interest and intention from their walking trajectories in indoor environments, which enables a service robot to proactively approach people and provide services to them. We conducted observational experiments in a museum, our target test environment, in which participants were asked to visit the museum. From these experiments, we identified three main kinds of walking trajectory patterns among the participants, depending on their interest in the exhibits. Based on these findings, we developed a method to identify visitors who may need guidance. We confirmed the effectiveness of our method through experiments.
Human Compliance with Task-oriented Dialog in Social Robot Interaction BIBAFull-Text 3-4
  Eunji Kim; Jonathan Sangyun Lee; Sukjae Choi; Ohbyung Kwon
This study empirically investigates the factors affecting compliance with robot requests in task-oriented environments, such as registration guide services in a hospital setting, in which compliance is important for patient treatment. We examine the relative impact of interaction time, task understanding, and homophily on compliance. The results suggest that task understanding and interaction time are negatively related to intention to comply. However, homophily is not significantly related to intention to comply.
Improving the Expressiveness of a Social Robot through Luminous Devices BIBAFull-Text 5-6
  Raúl Pérula-Martínez; Esther Salichs; Irene P. Encinar; Álvaro Castro-González; Miguel A. Salichs
Social robots have to follow certain behavioral norms during human-robot interaction. To improve the expressiveness of a robot, this work focuses on visual non-verbal expressive capabilities. Our robot has been equipped with two eyes, two cheeks, a mouth, and a heart (some of them allowing expressive modes non-existent in humans). Each of these parts enables the robot to express different emotions or states, or even to communicate with users in a non-verbal fashion.
Trust In Unmanned Driving System BIBAFull-Text 7-8
  Jae-Gil Lee; Jihyang Gu; Dong-Hee Shin
This study proposes a between-subject experiment with four conditions representing different levels of anthropomorphism and automation embedded in unmanned driving systems. Participants will be exposed to either a humanoid robot (high anthropomorphism) or a smartphone (low anthropomorphism), each operating at either a high or a low automation level, as an independent driving agent. The study argues that an agent with a high level of anthropomorphism and a low level of automation is more likely to trigger greater feelings of trust and perceived safety, which then lead to positive perceptions of the system.
Social Group Interactions in a Role-Playing Game BIBAFull-Text 9-10
  Marynel Vázquez; Elizabeth J. Carter; Jo Ana Vaz; Jodi Forlizzi; Aaron Steinfeld; Scott E. Hudson
We present initial findings from an experiment in which participants played Mafia, an established role-playing game, with our robot. In one condition, the robot played like the rest of the participants and, in the other, the robot moderated the game. We discuss general aspects of the interaction, participants' perceptions, and the potential of this scenario for studying group spatial behavior from robotic platforms.
Robot as a Facilitator in Language Conversation Class BIBAFull-Text 11-12
  Jae-eun Shin; Dong-Hee Shin
With growing interest in robotics applications in education, robot-assisted language learning (RALL) has attracted the attention of second language learning researchers. This study examines the effect of RALL, compared to existing computer-assisted language learning (CALL), on students' affective states and engagement in English conversation class. For the field study, a non-equivalent-groups quasi-experiment was conducted with 66 Korean middle school students, comparing a CALL class and a RALL class. The results revealed a marginally significant difference in motivation, and significant differences in both participation and satisfaction between the robot-based and computer-based English conversation classes. These results correspond both with previous theoretical studies in SLA and with empirical studies in HRI. This study suggests that a robot can act as a facilitator in language conversation class.
Therabot: The Initial Design of a Robotic Therapy Support System BIBAFull-Text 13-14
  Dexter Duckworth; Zachary Henkel; Stephanie Wuisan; Brendan Cogley; Christopher Collins; Cindy Bethel
Therabot is an assistive-robotic therapy system designed to provide support during counseling sessions and home therapy practice to patients diagnosed with trauma-related conditions. Studies were conducted with potential end-users of the system, such as clinicians, to determine desired features, with feedback from survivors of trauma guiding the participatory design process. The results from a survey of 1,045 respondents revealed a preferred form factor of a floppy-eared dog with coloring similar to that of a beagle. The most requested features were a size that would fit comfortably in a person's lap and a covering that was soft, durable, and multi-textured.
Investigating User Perceptions of HRI: A Marketing Approach BIBAFull-Text 15-16
  Willy Barnett; Kathy Keeling; Thorsten Gruber
In this paper we highlight a complementary approach to examining users' preferences surrounding robot interaction. We introduce widely used concepts and methods from the field of marketing in order to gain deeper insights into user decision-making processes. The study focuses on potential interactions between older adults and robots. The preliminary results show that the new approach can serve both as a means to augment current needs-based analysis in HRI and as a way to enable users to provide more detailed responses to technology they may be unfamiliar with or afraid of.
Heuristic Evaluation of Swarm Metrics' Effectiveness BIBAFull-Text 17-18
  Matthew D. Manning; Caroline E. Harriott; Sean T. Hayes; Julie A. Adams; Adriane E. Seiffert
Typical visualizations of robot swarms (greater than 50 entities) display each individual entity; however, it is immensely difficult to maintain accurate position information for each member in real-world situations with limited communications. Generally, it will be difficult for humans to maintain an awareness of all individual entities. Further, the swarm's tasks may impact the desired visualization. Thus, an open question is how best to visualize a swarm given various swarm tasks. This paper presents a heuristic evaluation that analyzes the application of swarm metrics to different swarm visualizations and tasks. A brief overview of the visualizations is provided, along with a description of the heuristic metrics and the analysis.
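Editorial illustration: visualizations that avoid drawing every entity rest on aggregate swarm metrics. A minimal sketch of three such metrics (centroid, dispersion, bounding box) follows; these are generic examples, not necessarily the metric set analyzed in the paper.

```python
import numpy as np

def swarm_summary(positions):
    """Aggregate metrics for visualizing a swarm (>50 entities) without
    drawing every member: centroid, dispersion (RMS distance from the
    centroid), and an axis-aligned bounding box."""
    centroid = positions.mean(axis=0)
    dispersion = np.sqrt(((positions - centroid) ** 2).sum(axis=1).mean())
    bbox = positions.min(axis=0), positions.max(axis=0)
    return centroid, dispersion, bbox

# Example: 200 simulated swarm members around a common rally point
swarm = np.random.randn(200, 2) * 3.0 + np.array([10.0, 5.0])
c, d, (lo, hi) = swarm_summary(swarm)
print(f"centroid={c}, dispersion={d:.2f}, bbox={lo}..{hi}")
```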
The Impact of User Control Design Types on People's Perception of a Robot BIBAFull-Text 19-20
  Jee Yoon Lee; Jung Ju Choi; Sonya S. Kwak
This study suggests user control design as a way to increase the social acceptance and usability of a robot. We executed a 3 (user control design: anthropomorphic control vs. non-anthropomorphic control vs. remote controller control) within-participants experiment design (N=24). When participants controlled a robot more anthropomorphically, they perceived the robot as more sociable and were more satisfied with the service it provided. This study provides evidence that user control design could be effectively used to increase the social acceptance as well as the usability of a robot. Implications for the design of human-robot interaction are discussed.
Multimodal Manipulator Control Interface using Speech and Multi-touch Gesture Recognition BIBAFull-Text 21-22
  Tetsushi Oka; Keisuke Matsushima
In this study, we describe a novel multimodal interface, based on speech and multi-touch gesture recognition, for controlling a manipulator. In addition, we describe our prototype system and discuss findings from a preliminary study that employs the system. The interface operates in three control modes that allow the user of a manipulator to translate, rotate, open, and close the gripper using touch gestures. The user can employ multimodal commands to switch among modes and control the manipulator. In our study, inexperienced users were able to control a 7-degree-of-freedom manipulator using the prototype interface.
Choreographing Robot Behaviors by Means of Japanese Onomatopoeias BIBAFull-Text 23-24
  Takanori Komatsu
Onomatopoeias are used in the Japanese language when one cannot describe certain phenomena or events literally, and it is said that one's ambiguous, intuitive feelings are embedded in these onomatopoeias. An interface system that can use onomatopoeia as input information could therefore comprehend such feelings. I previously proposed the basic concept for such an interface system: preparing mapping rules between quantified onomatopoeia expressions and the physical features of a given target. In this paper, I briefly introduce a concrete application based on this concept that extracts users' ambiguous feelings from their onomatopoeias and reflects them in a robot's behaviors.
Evaluation of Interfaces for 3D Pointing BIBAFull-Text 25-26
  Daniel A. Lazewatsky; William D. Smart
A variety of tasks with robots require directing the robot to interact with objects or locations in the world. While many interfaces currently exist for such interactions, in this paper we focus on inputs that can be categorized as pointing. Specifically, we look at two ways of using the head as a pointing input: Google Glass, and a head pose estimation technique that uses RGBD data. While both of these input modalities have their own advantages and disadvantages, we evaluate them simply as pointing devices, looking at how the device characteristics affect pointing performance. This is evaluated in a user study in which participants perform a series of object designation tasks. We then use distance, time, and object size data to evaluate the input devices using Fitts' law.
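Editorial illustration: a Fitts' law evaluation reduces each pointing trial to an index of difficulty and a throughput value. A minimal sketch of the standard Shannon formulation, with invented example numbers rather than the study's data:

```python
import math

def fitts_index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Bits per second achieved in a single pointing trial."""
    return fitts_index_of_difficulty(distance, width) / movement_time_s

# Example trial: target 0.40 units away, 0.05 units wide, acquired in 1.2 s
print(f"{throughput(0.40, 0.05, 1.2):.2f} bits/s")
```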
Human-Centric Assistive Remote Control for Co-located Mobile Robots BIBAFull-Text 27-28
  Akansel Cosgun; Arnold Maliki; Kaya Demir; Henrik Christensen
Autonomous navigation is an essential capability for domestic service robots; however, direct remote control may at times be desired when the robot and user are co-located. In this work, we propose a remote control method that allows a user to control the robot with smartphone gestures. The robot moves with respect to the user's coordinate frame and avoids obstacles if a collision is imminent. We believe that interpreting the commands from the human's perspective decreases the cognitive load of the user, thereby allowing efficient operation.
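Editorial illustration: interpreting commands in the user's coordinate frame amounts to rotating each gesture vector by the heading difference between user and robot. A minimal sketch under that assumption; the helper name and sign conventions are invented, and the paper's implementation may differ.

```python
import math

def user_frame_to_robot_frame(vx_user, vy_user, user_yaw, robot_yaw):
    """Re-express a 'move this way' gesture, given in the user's frame,
    in the robot's frame, so 'push left' means the user's left no matter
    where the robot is heading. Yaws are world-frame headings (rad)."""
    dyaw = user_yaw - robot_yaw                 # rotation between the frames
    vx = vx_user * math.cos(dyaw) - vy_user * math.sin(dyaw)
    vy = vx_user * math.sin(dyaw) + vy_user * math.cos(dyaw)
    return vx, vy

# User faces +90 deg, robot faces 0 deg: the user's 'forward' swipe
# becomes a leftward command in the robot's frame (x forward, y left).
print(user_frame_to_robot_frame(1.0, 0.0, math.pi / 2, 0.0))
```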
Exploring the Potential of Information Gathering Robots BIBAFull-Text 29-30
  Michael Jae-Yoon Chung; Andrzej Pronobis; Maya Cakmak; Dieter Fox; Rajesh P. N. Rao
Autonomous mobile robots equipped with a number of sensors will soon be ubiquitous in human populated environments. In this paper we present an initial exploration into the potential of using such robots for information gathering. We present findings from a formative user survey and a 4-day long Wizard-of-Oz deployment of a robot that answers questions such as "Is there free food on the kitchen table?" Our studies allow us to characterize the types of information that InfoBots might be most useful for.
Shared Displays for Remote Rover Science Operations BIBAFull-Text 31-32
  Electa A. Baker; Julie A. Adams; Terry Fong; Hyunjung Kim; Young-Woo Park
Robotic rovers are expected to play a major role in future lunar in-situ resource prospecting. Prospecting missions will involve a ground control team of planetary scientists and rover operators. These ground controllers will need to evaluate prospecting data gathered by a rover and make operational decisions in real-time. In October 2014, the NASA Ames Research Center conducted a lunar analog robotic prospecting mission in the Mojave Desert to study how to support such operations. This paper describes the roles within the Science Operations Team during this analog mission, as well as preliminary findings regarding the scientists' use of shared displays.
User Tracking in HRI Applications with the Human-in-the-loop BIBAFull-Text 33-34
  Silvia Rossi; Mariacarla Staffa; Maurizio Giordano; Massimo De Gregorio; Antonio Rossi; Anna Tamburro; Civita Vellucci
In HRI applications, tracking performance should not be evaluated as a passive sensing behavior, but as an active process in which the human is involved in the loop. We foresee that the presence of a human being, actively participating in the interaction, improves tracker performance with limited additional effort. We tested a tracking approach in an HRI scenario, modeled as a game, measuring both quantitative and qualitative performance.
Head Pose Estimation is an Inadequate Replacement for Eye Gaze in Child-Robot Interaction BIBAFull-Text 35-36
  James Kennedy; Paul Baxter; Tony Belpaeme
Gaze analysis of human-robot interactions can reveal much about the dynamics of the interaction and be a useful step in establishing levels of engagement and attention. Currently, much of this work has to be conducted manually through post-hoc video coding because of the limitations of non-invasive, real-time gaze tracking solutions. This paper assesses whether real-time head pose estimation from an RGB-D camera may be used in place of manual post-hoc coding of gaze direction. Using data collected from an experiment 'in the wild', it is found that the proposed RGB-D based pose estimation method is neither accurate nor consistent enough to provide a reliable measure of gaze within human-robot interactions.
Metrics for Assessing Human Skill When Demonstrating a Bimanual Task to a Robot BIBAFull-Text 37-38
  Ana-Lucia Pais Ureche; Aude Billard
One of the major challenges in Programming by Demonstration is deciding whom to imitate. In this paper we propose a set of metrics for assessing how skilled a user is when demonstrating to a robot a bimanual task that requires both coordinated motion of the arms and proper contact forces. We record successful demonstrations relative to the task goal and evaluate user performance with respect to three measures: the ability to maneuver the tool, the consistency in teaching, and the degree of coordination between the two arms. We present preliminary results on a scooping task.
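Editorial illustration: two of the three measures admit compact definitions. The sketch below gives one plausible reading of inter-arm coordination (correlation of the two arms' speed profiles) and teaching consistency (inverse variability across time-aligned demonstrations); these are not the authors' exact metrics.

```python
import numpy as np

def coordination_score(vel_left, vel_right):
    """Degree of coordination between the two arms, as the absolute
    Pearson correlation of their speed profiles (1 = tightly coupled)."""
    r = np.corrcoef(vel_left, vel_right)[0, 1]
    return abs(r)

def teaching_consistency(demos):
    """Consistency across repeated demonstrations: inverse of the mean
    per-sample standard deviation over time-aligned trajectories."""
    demos = np.asarray(demos)                  # shape (n_demos, n_samples)
    return 1.0 / (1.0 + demos.std(axis=0).mean())

# Example: two slightly phase-shifted arm speed profiles
t = np.linspace(0, 1, 50)
left, right = np.sin(2 * np.pi * t), np.sin(2 * np.pi * t + 0.2)
print(coordination_score(left, right), teaching_consistency([left, left * 1.05]))
```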
The Acoustic-Phonetics Change of English Learners in Robot Assisted Learning BIBAFull-Text 39-40
  Jiyoung In; Jeonghye Han
This study verifies the effectiveness of robot TTS technology in helping Korean learners of English acquire a native-like accent by correcting the prosodic errors they commonly make. Child English language learners' F0 range, a prosodic variable, will be measured and analyzed for any changes in accent. We examined whether a robot with currently available TTS technology is as effective, from an acoustic-phonetic viewpoint, as a telepresence robot with a native speaker.
Are Tangibles Really Better?: Keyboard and Joystick Outperform TUIs for Remote Robotic Locomotion Control BIBAFull-Text 41-42
  Geoff M. Nagy; James E. Young; John E. Anderson
Prior work has suggested that tangible user interfaces (TUIs) may be more natural and easier to learn than conventional interfaces. We present study results that suggest an opposite effect: we found user performance, satisfaction, and ease of use to be higher with more commonplace input methods (keyboard and joystick) than with two novel TUIs.
Evaluating Stereoscopic Video with Head Tracking for Immersive Teleoperation of Mobile Telepresence Robots BIBAFull-Text 43-44
  Sven Kratz; Jim Vaughan; Ryota Mizutani; Don Kimber
Our research focuses on improving the effectiveness and usability of driving mobile telepresence robots by increasing the user's sense of immersion during the navigation task. To this end we developed a robot platform that allows immersive navigation using head-tracked stereoscopic video and an HMD. We present the results of an initial user study that compares System Usability Scale (SUS) ratings of a robot teleoperation task using head-tracked stereo vision against a baseline fixed video feed, and examines the effect of a low or high placement of the camera(s). Our results show significantly higher ratings for the fixed video condition and no effect of the camera placement. Future work will focus on examining the reasons for the lower ratings of stereo video and on exploring further visual navigation interfaces.
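Editorial note: SUS scoring itself is mechanical, so a minimal sketch of the standard 0-100 computation may help readers interpret the ratings; the study's data are not reproduced here.

```python
def sus_score(responses):
    """Standard System Usability Scale scoring: ten 1-5 Likert items,
    odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the sum is scaled by 2.5 to give 0-100."""
    assert len(responses) == 10
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)   # 0-based even index = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))   # -> 80.0
```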
Super-Low-Latency Telemanipulation Using High-Speed Vision and High-Speed Multifingered Robot Hand BIBAFull-Text 45-46
  Yugo Katsuki; Yuji Yamakawa; Yoshihiro Watanabe; Masatoshi Ishikawa; Makoto Shimojo
We developed a super-low-latency telemanipulation system using a high-speed vision system and a high-speed robot hand with a high-speed tactile sensor. This system does not require users to wear sensors. Also, since its latency is below the temporal resolution of human visual perception, the latency is not noticeable to users. Low-latency telemanipulation systems are needed to perform tasks that require high speed, such as catching falling or fast-moving objects. We evaluated the latency of our telemanipulation system and successfully demonstrated catching falling objects using our system, in contrast to a conventional vision system operating at 30 fps, which failed the task.
Beaming the Gaze of a Humanoid Robot BIBAFull-Text 47-48
  Gérard Bailly; Frédéric Elisei; Miquel Sauze
Here we propose to use immersive teleoperation of a humanoid robot by a human pilot to artificially provide the robot with social skills. This so-called beaming approach to learning by demonstration (the robot passively experiences social behaviors that can later be modeled and used for autonomous control) offers a unique way to study embodied cognition, i.e., human cognition driving a controllable robotic body.
EMG-Based Analysis of the Upper Limb Motion BIBAFull-Text 49-50
  Iason Batzianoulis; Sahar El-Khoury; Silvestro Micera; Aude Billard
In a human-robot interaction scenario, predicting the human's motion intention is essential for avoiding inconvenient delays and for smooth reactivity of the robotic system. In particular, when dealing with hand prosthetic devices, an early estimation of the final hand gesture is crucial for smooth control of the robotic hand. In this work we develop an electromyography (EMG) based learning approach that decodes the grasping intention at an early stage of the reach-to-grasp motion, i.e., before the final grasp/hand preshape takes place. EMG electrodes record the arm muscle activity, and a cyberglove measures the finger joints during the reach and grasp motion. Results show that we can correctly classify three typical grasps with 90% accuracy before the onset of the hand preshape. Such early detection of the grasp intention makes it possible to control a robotic hand simultaneously with the motion of the subject's arm, generating no delay between the natural arm motion and the artificial hand motion.
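Editorial illustration: the classification step can be made concrete with windowed time-domain EMG features fed to an off-the-shelf classifier. The sketch assumes scikit-learn is available and uses synthetic stand-in data; the authors' features, channel count, and learning approach may differ.

```python
import numpy as np
from sklearn.svm import SVC

def rms_features(emg_window):
    """Root-mean-square amplitude per EMG channel over one time window,
    a common time-domain feature for grasp classification."""
    return np.sqrt((emg_window ** 2).mean(axis=0))

# Synthetic stand-in data: 60 windows of (200 samples x 8 electrodes),
# labeled with one of three grasp types. Only the classification step
# is illustrated here.
rng = np.random.default_rng(0)
X = np.array([rms_features(rng.normal(scale=1.0 + g, size=(200, 8)))
              for g in (0, 1, 2) for _ in range(20)])
y = np.repeat([0, 1, 2], 20)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))     # grasp predictions for early-motion windows
```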
Visualisation of Sound Source Location in a Teleoperation Interface for a Mobile Robot BIBAFull-Text 51-52
  François Ferland; Aurélien Reveleau; François Michaud
Representing the location of sound sources may be helpful when teleoperating a mobile robot. To evaluate this modality, we conducted trials in which the graphical user interface (GUI) displays a blue icon on the video stream where the sound is located. Results show that this visualization modality provides a clear benefit when a user has to distinguish between multiple sound sources.
Command Robots from Orbit with Supervised Autonomy: An Introduction to the Meteron Supvis-Justin Experiment BIBAFull-Text 53-54
  Neal Y. Lii; Daniel Leidner; André Schiele; Peter Birkenkampf; Benedikt Pleintinger; Ralph Bayer
The ongoing work at the German Aerospace Center (DLR) and the European Space Agency (ESA) on the Meteron Supvis-Justin space telerobotic experiment utilizing supervised autonomy is presented. The Supvis-Justin experiment will employ a tablet UI for an astronaut on the International Space Station (ISS) to communicate task-level commands to a service robot. The goal is to explore the viability of supervised autonomy for space telerobotics. For its validation, survey, navigation, inspection, and maintenance tasks will be commanded to DLR's service robot, Rollin' Justin, to be performed in a simulated extraterrestrial environment constructed at DLR. The experiment is currently slated for late 2015-2016.
Auditory Immersion with Stereo Sound in a Mobile Robotic Telepresence System BIBAFull-Text 55-56
  Andrey Kiselev; Mårten Scherlund; Annica Kristoffersson; Natalia Efremova; Amy Loutfi
Auditory immersion plays a significant role in generating a good feeling of presence for users driving a telepresence robot. In this paper, one of the key characteristics of auditory immersion -- sound source localization (SSL) -- is studied from the perspective of those who operate telepresence robots from remote locations. A prototype capable of delivering a soundscape to the user through Interaural Time Difference (ITD) and Interaural Level Difference (ILD), using the ORTF stereo recording technique, was developed. The prototype was evaluated in an experiment, and the results suggest that the developed method is sufficient for sound source localization tasks.
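Editorial illustration: ITD, one of the two cues named above, can be estimated from a stereo pair by locating the peak of the channels' cross-correlation. A minimal NumPy sketch with synthetic signals; the prototype's actual processing chain may differ.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference as the lag of the
    cross-correlation peak. A positive result means the left channel
    leads, i.e., the source is nearer the left microphone."""
    corr = np.correlate(left, right, mode="full")
    lead = (len(right) - 1) - np.argmax(corr)   # samples by which left leads
    return lead / fs                            # seconds

fs = 44100
rng = np.random.default_rng(1)
sig = rng.standard_normal(4410)                 # 100 ms noise 'source'
delay = 20                                      # right channel delayed 20 samples
left, right = sig[delay:], sig[:-delay]
print(f"ITD = {estimate_itd(left, right, fs) * 1e6:.0f} us")   # ~454 us
```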
Evaluation of a Mobile Robotic Telepresence System in a One-on-One Meeting Scenario BIBAFull-Text 57-58
  Mathis Lauckner; Dejan Pangercic; Serkan Tuerker
Despite the increased popularity and availability of mobile robotic telepresence systems in recent years, there has been little research that systematically compares these systems against more traditional systems, such as teleconference systems. In this work, we present a 40-person user study in a simulated one-on-one meeting scenario. In a between-subject design, participants performed the Desert Survival Task with an unknown confederate of the examiner either calling in using a conventional phone or beaming in using the Beam system from Suitable Technologies [1]. In the study we also simulated a typical meeting disturbance. Several aspects of the discussion's effectiveness (e.g., connectivity, disturbance, problem solving, ease of collaboration) were assessed using questionnaires as well as video observations. Our findings consistently corroborated a significantly more effective, natural, and likeable interaction when using Beam. Though our scenario reflects a real pain point of many large corporations, further studies will need to compare such telepresence systems to video conference systems.
Video Manipulation Techniques for the Protection of Privacy in Remote Presence Systems BIBAFull-Text 59-60
  Alexander Hubers; Emily Andrulis; William D. Scott; Levi Scott; Tanner Stirrat; Duc Tran; Ruonan Zhang; Ross Sowell; Cindy Grimm
Systems that give control of a mobile robot to a remote user raise privacy concerns about what the remote user can see and do through the robot. We aim to preserve some of that privacy by manipulating the video data that the remote user sees. Through two user studies, we explore the effectiveness of different video manipulation techniques at providing different types of privacy, while simultaneously examining task performance in the presence of privacy protection. In the first study, participants were asked to watch a video captured by a robot exploring an office environment and to complete a series of observational tasks under differing video manipulation conditions. Our results show that manipulating the video stream can reduce privacy violations across different privacy types. The second user study demonstrated that these privacy-protecting techniques were effective without diminishing the task performance of the remote user.
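Editorial illustration: typical manipulations of this kind include blurring a region and pixelating the whole frame. A minimal sketch assuming OpenCV is available; the filters and parameters studied in the paper may differ.

```python
import numpy as np
import cv2  # assumes OpenCV (opencv-python) is installed

def blur_region(frame, x, y, w, h, ksize=31):
    """Gaussian-blur one region of a frame (e.g., a desk surface or a
    monitor) so the remote operator can still navigate but cannot read
    sensitive detail. ksize must be odd."""
    roi = frame[y:y + h, x:x + w]
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (ksize, ksize), 0)
    return frame

def pixelate(frame, factor=16):
    """Whole-frame pixelation: downsample, then upsample with
    nearest-neighbour interpolation; one possible filter strength."""
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (w // factor, h // factor),
                       interpolation=cv2.INTER_AREA)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

# Example on a synthetic gray frame
frame = np.full((240, 320, 3), 128, dtype=np.uint8)
out = pixelate(blur_region(frame, 80, 60, 100, 80))
print(out.shape)
```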

Late-Breaking Reports -- Session 2

A Tool to Diagnose Autism in Children Aged Between Two to Five Old: An Exploratory Study with the Robot QueBall BIBAFull-Text 61-62
  Julie Golliot; Catherine Raby-Nahas; Mark Vezina; Yves-Marie Merat; Audrée-Jeanne Beaudoin; Mélanie Couture; Tamie Salter; Bianca Côté; Cynthia Duclos; Maryse Lavoie; François Michaud
QueBall is a spherical robot capable of motion and equipped with touch sensors, multi-colored lights, sounds, and a wireless interface to an iOS device. While these capabilities may be useful in assisting the early diagnosis of autism, no detailed guidelines have yet been established to achieve this. In this report, we describe an exploratory study conducted with an interdisciplinary research team to adapt QueBall's capabilities so that clinicians can observe how children interact with the robot. This is the preliminary phase in designing an experimental protocol to evaluate the use of QueBall in diagnosing autism in children from two to five years of age.
Why Do Children Abuse Robots? BIBAFull-Text 63-64
  Tatsuya Nomura; Takayuki Uratani; Takayuki Kanda; Kazutaka Matsumoto; Hiroyuki Kidokoro; Yoshitaka Suehiro; Sachie Yamada
We found that children sometimes abuse a social robot in a hallway of a shopping mall. They spoke bad words, repeatedly obstructed the robot's path, and sometimes even kicked and punched the robot. To investigate why they abused it, we conducted a field study in which we let visiting children freely interact with the robot and interviewed them when they engaged in serious abusive behavior, including physical contact. In total, we obtained valid interviews from twenty-three children, aged five to nine, over 13 days of observations. Adults and older children were rarely involved. We interviewed the children to learn whether they perceived the robot as a human-like other, why they abused it, and whether they thought the robot would suffer from their abusive behavior. We found that 1) the majority of the children abused the robot because they were curious about its reactions or enjoyed abusing it, while considering it human-like, and 2) about half of the children believed the robot was capable of perceiving their abusive behaviors.
The Interplay of Robot Language Level with Children's Language Learning during Storytelling BIBAFull-Text 65-66
  Jacqueline Kory Westlund; Cynthia Breazeal
Children's oral language skills in preschool can predict their success in reading, writing, and academics in later schooling. Helping children improve their language skills early on could lead to more children succeeding later. As such, we examined the potential of a sociable robotic learning/teaching companion to support children's early language development. In a microgenetic study, 17 children played a storytelling game with the robot eight times over a two-month period. We evaluated whether a robot that "leveled" its stories to match the child's current abilities would lead to greater learning and language improvements than a robot that was not matched. All children learned new words, created stories, and enjoyed playing. Children who played with a matched robot used more words, and more diverse words, in their stories than children with an unmatched robot. Understanding the interplay between the robot's and the children's language will inform future work on robot companions that support children's education through play.
Social Robot Toolkit: Tangible Programming for Young Children BIBAFull-Text 67-68
  Michal Gordon; Edith Ackermann; Cynthia Breazeal
Teaching children how to program has gained broad interest in the last decade. Approaches range from visual programming languages and tangible programming to programmable robots. We present a novel social robot toolkit that extends common approaches along three dimensions. (i) We propose a tangible programming approach suitable for young children, with reusable vinyl stickers representing rules for the robot to perform. (ii) We make use of social robots that are designed to interact directly with children. (iii) We focus the programming tasks and activities around social interaction. In other words, children teach an expressive relational robot how to socially interact by showing it a tangible sticker rulebook that they create. To explore various activities and interactions, we teleoperated the robot's sensors. We present a qualitative analysis of children's engagement in and uses of the social robot toolkit, and show that they learn to create new rules, explore complex computational concepts, and internalize the mechanism by which robots can be programmed.
The 5-Step Plan: A Holistic Approach to Investigate Children's Ideas on Future Robotic Products BIBAFull-Text 69-70
  Lara Lammer; Astrid Weiss; Markus Vincze
Many educational robotics activities involve children in bottom-up approaches with pre-set robot tasks. However, robotics for education can be much more if used in holistic, non-task-deterministic ways, such as when children develop design concepts for their favorite robots. The 5-step plan offers a simple yet effective structure for this creative process. Researchers as well as educators can use it to introduce many children to robotics, not only the ones interested in becoming engineers or scientists, while at the same time exploring the ideas and needs for a wide range of future robotic products and services from children's perspectives.
A Cognitive and Affective Architecture for Social Human-Robot Interaction BIBAFull-Text 71-72
  Wafa Johal; Damien Pellier; Carole Adam; Humbert Fiorino; Sylvie Pesty
Robots show up frequently in new applications in our daily lives, where they interact ever more closely with human users. Despite a long history of research, existing cognitive architectures are still too generic and hence not tailored to the specific needs of social HRI. In particular, interaction-oriented architectures must handle emotions, language, social norms, and more. In this paper, we present an overview of a Cognitive and Affective Interaction-Oriented architecture for social human-robot interactions, abbreviated CAIO. The architecture parallels the BDI (Belief, Desire, Intention) architecture, which originates in Bratman's philosophy of action, and integrates complex emotions and planning techniques. It aims to contribute to cognitive architectures for HRI by enabling the robot to reason about the mental states (including emotions) of its interlocutors, and to act physically, emotionally, and verbally.
Design of Emotional Conversations with a Child for a Role Playing Robot BIBAFull-Text 73-74
  Sang Hoon Ji; Su Jeong YOU; Hye-Kyung Cho
Children who suffer from psychological and emotional disorders are often unaccustomed to cooperation, shared meaning, sympathy, empathy, and magnanimity. Recently, several attempts have been made to increase children's social skills through emotional role-playing games with robots, because a robotic system can offer dynamic, adaptive, and autonomous interaction for learning imitation skills, with real-time performance evaluation and feedback. But robot technologies have their limits: in particular, it is very difficult for a robot to understand a child's words and produce behaviors suited to the child's intent. In this paper, we therefore suggest a method for guiding a role-playing robot's emotional conversations with a child. For this purpose, we design human-robot interaction software and a special human intervention device (HID), and we implement the suggested method on a commercial humanoid robot.
How Anthropomorphism Affects Human Perception of Color-Gender-Labeled Pet Robots BIBAFull-Text 75-76
  Kyung-Mi Chung; Dong-Hee Shin
The aim of this study is to examine whether six color-gender-labeled pet robots draw repulsive responses from participants, based on the measurement of five key concepts in human-robot interaction: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. In total, 60 male and 69 female undergraduate and graduate students aged 18 to 37 years participated in the experiment. The results show that anthropomorphism and animacy can be conceptualized as a composite and extended concept. In a plot of the results, all visual targets were positioned at the top of the upward curve rather than in the valley. Another finding of this study is that, when confronted with the pet robot PLEO bearing manipulated gender-related social cues, participants responded automatically to the robots, applying human-human social attraction rules to them.
Nao as an Authority in the Classroom: Can Nao Help the Teacher to Keep an Acceptable Noise Level? BIBAFull-Text 77-78
  Patricia Bianca Lyk; Morten Lyk
We are researching whether Nao could act as an authority figure in the classroom, and whether it could help the teacher keep the sound volume at an acceptable level. We also examine whether children perceive the robot differently if they have worked with it before, and whether this influences Nao's standing as an authority figure. Furthermore, we investigate whether there is a connection between the pupils' perception of Nao as living or not-living and their willingness to accept it as an authority figure. This is studied through a two-part experiment with a 5th grade class, in which one half of the pupils had worked with the robot prior to it acting as an assistant to the teacher in a normal class.
Design and Architecture of a Robot-Child Speech-Controlled Game BIBAFull-Text 79-80
  Samer Al Moubayed; Jill Lehman
We describe the conceptual design, architecture, and implementation of a multimodal, robot-child dialogue system in a fast-paced, speech-controlled collaborative game. In Mole Madness, two players (a user and an anthropomorphic robot) work together to move an animated mole character through its environment via speech commands. Using a combination of speech recognition systems and a microphone array, the system can accommodate children's natural behavior in real time. We also briefly present the details of a recent data collection with children, ages 5 to 9, and some of the challenging behaviors the system elicited that we intend to explore.
Children's Responses to Genuine Child Synthesized Speech in Child-Robot Interaction BIBAFull-Text 81-82
  Anara Sandygulova; Gregory M. P. O'Hare
This paper presents a study of children's responses to the perceived gender and age of a humanoid robot Nao which communicated with four genuine synthesized child voices. Results indicate that the manipulations were successful for all voice conditions. Also, children in Ireland preferred UK English voices for Child-Robot Interaction (cHRI).
Ms. An, Feasibility Study with a Robot Teaching Assistant BIBAFull-Text 83-84
  Karina R. Liles; Jenay M. Beer
In this feasibility study, we present a socially interactive robot teaching assistant to engage 5th grade rural minority students in practicing multiplication. We discovered that students perceived the robot as a sociable agent, and that students preferred their interaction with the robot assistant over other kinds of study support.
Smart Presence for Retirement Community Employees BIBAFull-Text 85-86
  Karina R. Liles; Allison Kacmar; Rachel E. Stuck; Jenay M. Beer
The goal of this study was to understand what employees of continuing care retirement communities (CCRC) think about smart presence technology. To better understand their perceptions of the benefits, concerns, and adoption criteria for smart presence systems, we conducted a needs assessment with CCRC employees who were given first-hand experience operating the BEAM as both pilot and local user. Participants indicated that there is potential for smart presence technology in retirement communities, and they shared an equal number of benefits and concerns. The benefits mentioned included convenience, effort/time saving, visualization, and socialization, whereas the concerns included limitations of the system, emotional harm to others/residents, and physical harm to others. It is important to understand such attitudes toward technology, because they are predictive of adoption.
Leading a Person Using Ethologically Inspired Autonomous Robot Behavior BIBAFull-Text 87-88
  Soh Takahashi; Gácsi Márta; Péter Korondi; Hideki Hashimoto; Mihoko Niitsuma
This study considers the leading behavior of a robot. To lead a person whose attention is initially elsewhere, the robot's behavior must be designed so that it seeks the person's attention and seamlessly brings him or her to the target location. We therefore implemented a leading behavior for a robot inspired by the action sequences of dogs, and we evaluated the robot's behavior through an experiment.
Fundamental Study of Robot Behavior that Encourages Human to Tidy up Table BIBAFull-Text 89-90
  Manabu Gouko; Chyon Hae Kim
In this study, we investigate the influence of robot behaviors that motivate humans to tidy up. In this scenario, a robot can accomplish tidying-up tasks effectively through human-robot cooperation (HRC). We developed a system that can tidy up a table through HRC. To determine which behaviors effectively encourage humans to tidy up, we conducted a preliminary experiment with eight male participants aged 21-23. This paper describes its preliminary results.
How Would You Describe Assistive Robots to People Who are Blind or Low Vision? BIBAFull-Text 91-92
  Byung-Cheol Min; Aaron Steinfeld; M. Bernardine Dias
Assistive robots can enhance the safety, efficiency, and independence of people who are blind or low vision (B/LV) during urban travel. However, a clear understanding is still lacking of how best to introduce and describe an assistive robot to B/LV persons in a way that facilitates effective human-robot interaction. The goal of this study was to understand how different people would describe an assistive robot to a B/LV traveler. Our preliminary results showed that participants described the robot in a similar order (i.e., the robot's appearance, then its function, then its capabilities); however, their descriptions emphasized different aspects. This pilot study will lead to better descriptions of assistive robots for B/LV users, supporting more effective interaction in our future real-world deployments.
Selecting Popular Topics for Elderly People in Conversation-based Companion Agents BIBAFull-Text 93-94
  Kazufumi Tsukada; Yutaka Takase; Yukiko I. Nakano
In aging societies, supporting elderly people is a critical issue, and companion agents that can function as conversational partners are expected to provide social support to isolated older adults. Aiming to improve companionship dialogues with these agents, this study proposes a topic selection mechanism using blog articles written by the elderly. By categorizing the nouns extracted from the blogs using Wikipedia, we defined 219 topic categories consisting of about 3,000 topic words that the elderly discuss in their daily lives. The topic selection mechanism is implemented in a companion agent and used to generate the agent's utterances.
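Editorial illustration: the topic selection mechanism can be pictured as scoring nouns from the user's recent utterances against per-category word sets. A minimal sketch with an invented toy lexicon standing in for the 219 categories:

```python
from collections import Counter

# Toy topic lexicon in the spirit of the paper's categories built from
# blog nouns; the categories and words here are invented examples.
TOPIC_LEXICON = {
    "gardening": {"flower", "seed", "soil", "rose"},
    "health": {"doctor", "medicine", "exercise"},
    "travel": {"train", "hotel", "onsen"},
}

def select_topic(user_nouns):
    """Pick the topic category whose word set overlaps most with nouns
    extracted from the user's recent utterances; None if no overlap."""
    scores = Counter()
    for noun in user_nouns:
        for topic, words in TOPIC_LEXICON.items():
            if noun in words:
                scores[topic] += 1
    return scores.most_common(1)[0][0] if scores else None

print(select_topic(["rose", "soil", "doctor"]))   # -> gardening
```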
An Interactive Robot Facilitating Social Skills for Children BIBAFull-Text 95-96
  Sang-Seok Yun; JongSuk Choi; Sung-Kee Park
In this paper, we propose an interactive robot system to facilitate the improvement of children's social capabilities, with robot-assisted interventions effectively offering social skill training for children with autism. This is achieved through therapeutic protocols with therapy, encouragement, and pause modes, which are determined by the behavioral responses of the children. Furthermore, the robot evaluates the level of children's reactivity in the child-robot interaction using recognition modules for frontal face and touch, and it generates appropriate training tasks through the combination of kinesic acts and displayable content. From experiments on interplay training with autistic and non-autistic children, we verified that the proposed system has positive effects on the social development of children with autism spectrum disorders.
Body Language for Mood Induction Procedures BIBAFull-Text 97-98
  Cristina Diaz; Angel Pascual Del Pobil; Azucena Garcia; Diana Castilla; Ignacio Miralles
According to the principles of positive psychology (PP), social learning, therapeutic robotics, and mood induction procedures (MIPs), we have developed an application to be used as part of a positive MIP in a psychological treatment context. We used the inexpensive humanoid robot Nao because of its ease of use, which allows proper interaction with therapists and can assist them on a regular basis. Our hypothesis is that a rich body-language set can compensate for the lack of facial expressions in such robots. We ran a pilot study in the context of cognitive behavioral therapy for the treatment of fibromyalgia. This work introduces a new way to contribute to MIPs and to human-robot interaction (HRI).
Formative Work Analysis to Design Caregiver Robots BIBAFull-Text 99-100
  Keith S. Jones; Barbara Cherry; Mohan Sridharan
This paper describes recent developments in a research project that seeks to explore and describe how caregiving robots should function by analyzing caregiving in elders' homes, creating a detailed account of current elder care practices, and translating this account into design recommendations for caregiving robots.
Social Personalized Human-Machine Interaction for People with Autism: Defining User Profiles and First Contact with a Robot BIBAFull-Text 101-102
  Pauline Chevalier; Adriana Tapus; Jean-Claude Martin; Brice Isableu
Our research aims to develop a new personalized social interaction model between a humanoid robot and/or a virtual agent and an individual with Autism Spectrum Disorder (ASD), so as to enhance his/her social and communication skills. Because of the intra-individual variability within the ASD population, our objective is to propose a customized social interaction for each individual. In light of the impact of ASD on vision and motor processing [1], [2], and in order to define each individual's profile, we posit that an individual's reliance on proprioceptive and kinematic visual cues will affect the way he/she interacts with a social agent. A first experiment that defines each participant's perceptivo-cognitive and sensorimotor profile with respect to the integration of visual inputs has already been conducted. We also presented the Nao robot to 4 children with ASD and analyzed their behavior in light of their profiles. First results are promising.
A Social Robot to Mitigate Stress, Anxiety, and Pain in Hospital Pediatric Care BIBAFull-Text 103-104
  Sooyeon Jeong; Deirdre E. Logan; Matthew S. Goodwin; Suzanne Graca; Brianna O'Connell; Honey Goodenough; Laurel Anderson; Nicole Stenquist; Katie Fitzpatrick; Miriam Zisook; Luke Plummer; Cynthia Breazeal; Peter Weinstock
Children and their parents may undergo challenging experiences when admitted for inpatient care at pediatric hospitals. While most hospitals make efforts to provide socio-emotional support for patients and their families during care, gaps still exist between human resource supply and demand. The Huggable project aims to close this gap by creating a social robot able to mitigate stress, anxiety, and pain in pediatric patients by engaging them in playful interactive activities. In this paper, we introduce a larger experimental design to compare the effects of the Huggable robot to a virtual character on a screen and a plush teddy bear, and provide initial qualitative analyses of patients' and parents' behaviors during intervention sessions collected thus far. We demonstrate preliminarily that children are more eager to emotionally connect with and be physically activated by a robot than a virtual character, illustrating the potential of social robots to provide socio-emotional support during inpatient pediatric care.
Enhancing Long-term Children to Robot Interaction Engagement through Cloud Connectivity BIBAFull-Text 105-106
  Jordi Albo-Canals; Adso Fernández-Baena; Roger Boldu; Alex Barco; Joan Navarro; David Miralles; Cristobal Raya; Cecilio Angulo
In this paper, we introduce a cloud-based structure to enhance long-term engagement in a pet-robot companion treatment intended to reduce stress and anxiety in hospitalized children. Cloud connectivity makes it possible to combine human intervention with an artificial-intelligence multi-agent system to bias the robot companion's behavior, fostering better engagement and decreasing dropout during the treatment.
Designing a Robot Guide for Blind People in Indoor Environments BIBAFull-Text 107-108
  Catherine Feng; Shiri Azenkot; Maya Cakmak
Navigating indoors is challenging for blind people and they often rely on assistance from sighted people. We propose a solution for indoor navigation involving multi-purpose robots that will likely reside in many buildings in the future. In this report, we present a design for how robots can guide blind people to an indoor destination in an effective and socially-acceptable way. We used participatory design, creating a design team with three designers and five non-designers. All but one member of the team had a visual impairment. Our resulting design specifies how the robot and the user initially meet, how the robot guides the user through hallways and around obstacles, and how the robot and user conclude their session.
Robot Trustworthiness: Guidelines for Simulated Emotion BIBAFull-Text 109-110
  David J. Atkinson
Well-justified human evaluations of autonomous robot trustworthiness require evidence from a variety of sources, including observation of robot behavior. Displays of affect by a robot that reflect important internal states not otherwise overtly visible could provide useful evidence for evaluation of robot agent trustworthiness. As an analogy, the human limbic system, sometimes described as an ancient sub-cognitive system, drives human display of affect in a manner that is largely independent of purposeful behavior arising from cognition. Such displays of affect and corresponding attributions of emotion provide important social information that aids understanding and prediction of human behavior. Could an "artificial limbic system" provide similar useful insight into a robot's internal state? The value of affect signals for evaluation of robot trustworthiness depends on three crucial factors that require investigation: 1) Correlation of affective signals to trust-related, measurable attributes of robot agent internal state, 2) Fidelity in portrayal of emotion by the robot agent such that affective signals evoke human anthropomorphic social recognition, and 3) Correct human interpretation of the affective signals for justifiable modulation of beliefs about the robot agent. This paper discusses these three factors as principles to guide robotic simulation of emotion for increasing human ability to make reasonable assessments of robot trustworthiness and appropriate reliance.
Robotic Sonification for Promoting Emotional and Social Interactions of Children with ASD BIBAFull-Text 111-112
  Ruimin Zhang; Myounghoon Jeon; Chung Hyuk Park; Ayanna Howard
Deficiency in social interaction is one of the most crucial issues for children with Autism Spectrum Disorder (ASD). To foster their emotional and social communication, we have developed an orchestration robot platform. After describing our concepts of the use of sonification in the intervention sessions, we describe our efforts in developing a facial expression detection system and implementing a platform-free sonification server system.
Effects of SMILE Emotional Model on Humanoid Robot User Interaction BIBAFull-Text 113-114
  Elise Russell; Andrew B. Williams
Naturalistic conversation and emotions, while difficult to approximate in robots, facilitate interactions with non-expert users and serve to make robots more relatable and predictable. This paper describes the implementation and evaluation of two major improvements upon an existing interface, the SMILE app for the MU-L8 humanoid robot. The original version of the app is compared to a version in which popups and extraneous user touches are removed, and they are both compared to a third version in which the robot's emotions decay with time. These versions are tested in terms of ease of use, user engagement, and naturalness of interaction. User feedback and observer ratings are collected for 15 participants, and their results are described. These improvements contribute advances in the field of smartphone humanoid robotics interfaces toward a more ideal emotional and conversational model.
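Editorial illustration: the time-decayed emotions of the third version can be modeled as an intensity that relaxes exponentially toward neutral between updates. A minimal sketch with an arbitrary time constant; the app's actual decay model is not specified here.

```python
import math
import time

class DecayingEmotion:
    """Scalar emotion intensity in [-1, 1] that relaxes exponentially
    toward neutral (0) between updates; tau_s is an arbitrary
    illustrative time constant, not a value from the paper."""

    def __init__(self, tau_s=30.0):
        self.value = 0.0
        self.tau = tau_s
        self.stamp = time.monotonic()

    def _decay(self):
        now = time.monotonic()
        self.value *= math.exp(-(now - self.stamp) / self.tau)
        self.stamp = now

    def bump(self, amount):
        """Apply an interaction event, e.g. praise (+) or neglect (-)."""
        self._decay()
        self.value = max(-1.0, min(1.0, self.value + amount))

    def read(self):
        self._decay()
        return self.value

mood = DecayingEmotion()
mood.bump(0.8)                # user interacts positively
print(round(mood.read(), 2))  # slightly below 0.8 as time elapses
```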
Provisions of Human-Robot Friendship BIBAFull-Text 115-116
  Sean A. McGlynn; Wendy A. Rogers
In this paper, we provide an overview of theories on human-robot relationship development with an emphasis on equity relationships. Specifically, we discuss the potential for robots and humans to engage in a communal cost/reward relationship structure that is characteristic of friendship. The "Provisions of Friendship" have been proposed as being necessary for satisfying human-human relationships. We provide insights into what will be required of a robot at each stage in a dynamic relationship development process for a human to treat it as a friend.
High-speed Human / Robot Hand Interaction System BIBAFull-Text 117-118
  Yugo Katsuki; Yuji Yamakawa; Masatoshi Ishikawa
We propose an entirely new human hand / robot hand interaction system designed with a focus on high speed. The speed of this system, from input via a high-speed vision system to output by a high-speed multifingered robot hand, exceeds the visual recognition speed of humans. Therefore, the motion of the interaction system cannot be recognized by the human eye. As an application, we created a system called "Rock-Paper-Scissors robot system with 100% winning rate", based on this interaction system. This system always beats human players in the Rock-Paper-Scissors game due to the high speed of our interaction system. We also discuss the future possibilities of this system.
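Editorial note: once the opponent's hand shape has been recognized mid-motion, always winning reduces to a counter-move lookup executed before the human finishes the gesture; the recognition step, which the authors' high-speed vision performs, is the hard part and is not shown.

```python
# Counter-move lookup: the trivial final step of a 100%-win-rate
# Rock-Paper-Scissors system, given a recognized human hand shape.
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def winning_move(recognized_human_shape: str) -> str:
    """Return the shape that beats the recognized human shape."""
    return COUNTER[recognized_human_shape]

print(winning_move("rock"))   # -> paper
```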
From a Robotic Vacuum Cleaner to Robot Companion: Acceptance and Engagement in Domestic Environments BIBAFull-Text 119-120
  Maria Luce Lupetti; Stefano Rosa; Gabriele Ermacora
This paper shows preliminary results of the project DR4GHE (Domestic Robot 4 Gaming Health and Eco-sustainability). The main purpose is to develop a robot companion for domestic applications able to advise users and suggest good practices. Interaction and engagement with the user are introduced by giving the robot vacuum cleaner (RVC) additional intelligence and by leveraging its existing level of acceptance. Morphological aspects, in addition to behavioral traits, play a key role in the perceptual transition of the RVC from object to subject. Human-robot interaction takes place on two levels: direct interaction, in particular through visual and sound signals; and mediated interaction, through a GUI for smartphones and tablets.
The Effect of Robot Appearance Types and Task Types on Service Evaluation of a Robot BIBAFull-Text 121-122
  Jung Ju Choi; Sonya S. Kwak
A robot's appearance can be classified into two types: human-oriented and product-oriented. A human-oriented robot resembles a human in appearance, whereas a product-oriented robot is an intelligent product in which robotic technologies are integrated into an existing product. In this study, we investigated the impact of these two robot appearance types and two task types on the service evaluation of a robot. We executed a 2 (robot appearance type: human-oriented vs. product-oriented) x 2 (robot task type: social context vs. task-oriented context) mixed-participants experimental design (N=48). In the social context, people evaluated the service provided by a human-oriented robot better than that provided by a product-oriented robot, while in the task-oriented context, they evaluated the service provided by a product-oriented robot more positively than that provided by a human-oriented robot. Implications for the design of human-robot interaction are discussed.
Do People Purchase a Robot Because of Its Coolness? BIBAFull-Text 123-124
  Gyu-Ri Kim; Kyung-Mi Chung; Dong-Hee Shin
The purpose of this study is to verify a research model in which coolness and perceived usefulness are predictor variables, attitude is the mediating variable, and purchase intention is the criterion variable. In total, 41 respondents with no prior exposure to the robot JIBO completed an online survey after watching a scenario video explaining its usage. Coolness and perceived usefulness are significant predictors of attitude, and attitude has a positive impact on purchase intention. Based on the results, a "cool" product is more likely to arouse potential consumers' purchase intention, both directly and indirectly, in the market. Theoretical and practical implications are discussed in detail.

Late-Breaking Reports -- Session 3

Museum Guide Robot by Considering Static and Dynamic Gaze Expressions to Communicate with Visitors BIBAFull-Text 125-126
  Kaname Sano; Keisuke Murata; Ryota Suzuki; Yoshinori Kuno; Daijiro Itagaki; Yoshinori Kobayashi
Human eyes not only serve the function of enabling us "to see" something, but also perform the vital role of allowing us "to show" our gaze for non-verbal communication. We have investigated the static design and dynamic behaviors of robot heads for suitable gaze communication with humans while giving a friendly impression. In this paper, we focus on how the robot's impression is affected by its eye blink and eyeball movement synchronized with head turning. Through experiments with human participants, we found that robot head turning with eye blinks gives a friendly impression, while robot head turning without eye blinks is suitable for making people shift their attention towards the robot's gaze direction. These findings are very important for communication robots such as museum guide robots. Therefore, to demonstrate our approach, we developed a museum guide robot system employing suitable facial design and gaze behavior based on all of our findings.
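   To make the blink-synchronization idea concrete, the sketch below generates a head-turn profile with an optional eye blink timed to the onset of the turn. It is a minimal illustration with assumed values (a 30-degree yaw, a ~150 ms blink), not the parameters used in the study.
      import numpy as np

      def head_turn_schedule(duration_s=1.2, rate_hz=50, blink=True):
          """Head-yaw trajectory with an optional blink at turn onset."""
          t = np.linspace(0.0, duration_s, int(duration_s * rate_hz))
          u = t / duration_s
          yaw_deg = 30.0 * (10 * u**3 - 15 * u**4 + 6 * u**5)  # minimum-jerk-like turn
          eyelid = np.ones_like(t)                  # 1.0 = eyes open, 0.0 = closed
          if blink:
              eyelid[(t > 0.05) & (t < 0.20)] = 0.0  # ~150 ms blink at turn onset
          return t, yaw_deg, eyelid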
Controlling Robot's Gaze according to Participation Roles and Dominance in Multiparty Conversations BIBAFull-Text 127-128
  Takashi Yoshino; Yutaka Takase; Yukiko I. Nakano
A robot's gaze behaviors are indispensable in allowing the robot to participate in multiparty conversations. To build a robot that can convey appropriate attentional behavior in multiparty human-robot conversations, this study proposes robot head gaze models in terms of participation roles and dominance in a conversation. By implementing such models, we developed a robot that can determine appropriate gaze behaviors according to its conversational roles and dominance.
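   As an illustration of how such a model can drive behavior, the sketch below selects a gaze target from role-conditioned probabilities, with a dominance parameter reducing gaze aversion. The distributions are invented placeholders; the paper's models are derived from human conversation data.
      import random

      # Illustrative gaze-target distributions per participation role.
      GAZE_MODEL = {
          "speaker":          {"addressee": 0.60, "side_participant": 0.25, "away": 0.15},
          "addressee":        {"speaker": 0.75, "side_participant": 0.15, "away": 0.10},
          "side_participant": {"speaker": 0.55, "addressee": 0.35, "away": 0.10},
      }

      def choose_gaze_target(role, dominance=0.5):
          """Sample a gaze target for the robot's current role; higher
          dominance reduces the probability of averting gaze."""
          probs = dict(GAZE_MODEL[role])
          probs["away"] *= (1.0 - dominance)   # dominant agents avert gaze less
          total = sum(probs.values())
          r, acc = random.random() * total, 0.0
          for target, p in probs.items():
              acc += p
              if r <= acc:
                  return target
          return "away"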
I am Interested in What You are Saying: Role of Nonverbal Immediacy Cues in Listening BIBAFull-Text 129-130
  Seongmi Jeong; Jihyang Gu; Dong-Hee Shin
Immediacy plays a key role in interpersonal communication. Some immediacy behaviors from human-human interaction (e.g., gaze and nodding) have received much attention in HRI; however, others (e.g., body posture) have not. This study investigates whether a robot's posture (leaning forward vs. upright) and nodding manner (small and fast vs. large and slow) can affect perception of the robot. The current study argues that forward-leaning posture and nodding manner are likely to have significant effects on psychological and behavioral outcomes, including perceived empathy, human-likeness, and likability of the robot.
A Gaze Controller for Coordinating Mutual Gaze During Conversational Turn-Taking in Human-Robot Interaction BIBAFull-Text 131-132
  Frank Broz; Hagen Lehmann
This report proposes a method for modelling conversational mutual gaze from human interaction data in a way that allows the resulting gaze controller to be used in combination with any dialog manager that controls turn-taking. It also describes a set of experimental measures that will be used to investigate the effect of the gaze controller on people's impressions of the robot and the quality of the interaction.
Do People Spontaneously Take a Robot's Visual Perspective? BIBAFull-Text 133-134
  Xuan Zhao; Corey Cusimano; Bertram F. Malle
This study takes a novel approach to the topic of perspective taking in HRI. In a human behavioral experiment, we examined whether and in what circumstances people spontaneously take a humanoid robot's visual perspective. We found that specific nonverbal behaviors displayed by a robot -- namely, referential gaze and goal-directed reaching -- led human viewers to take the robot's visual perspective, though marginally less frequently than when they encounter the same behaviors displayed by another human. This project identifies specific features of robot behavior that trigger spontaneous social-cognitive processes in human viewers and informs the design of interactive robots in the future.
Layering Laban Effort Features on Robot Task Motions BIBAFull-Text 135-136
  Heather Knight; Reid Simmons
Motion is an essential area of social communication that has the potential to enable robots and people to collaborate naturally, develop rapport, and seamlessly share environments. The Laban Effort System is a well-known methodology from dance and acting training that has been in active use for over fifty years, teaching performers to overlay sequences of motion with expressivity. We present our methodology for layering expression on robot base motions, using the same set of joints for both procedural task completion and expressive communication, followed by early results on the legibility of our Effort implementations and how their settings affect robot projections of state.
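   As a flavor of what layering an Effort feature on a task motion can look like, the sketch below re-times a fixed geometric path with a "Time" effort parameter (sudden vs. sustained). The warp is an illustrative stand-in, not the authors' implementation.
      import numpy as np

      def apply_time_effort(path, duration_s, time_effort=0.0):
          """Assign timestamps to a fixed path; time_effort in [0, 1]:
          0 -> even, sustained pacing; 1 -> sudden, front-loaded burst."""
          u = np.linspace(0.0, 1.0, len(path))
          k = 1.0 + 3.0 * time_effort   # warp exponent grows with suddenness
          t = duration_s * u**k         # little time early = quick burst, then slowing
          return list(zip(t, path))

      reach = [0.0, 0.2, 0.5, 0.8, 1.0]  # same waypoints, two expressive readings
      neutral = apply_time_effort(reach, 2.0, time_effort=0.0)
      sudden = apply_time_effort(reach, 2.0, time_effort=1.0)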
A Gaze-contingent Dictating Robot to Study Turn-taking BIBAFull-Text 137-138
  Alessandra Sciutti; Lars Schillingmann; Oskar Palinko; Yukie Nagai; Giulio Sandini
In this paper we describe a human-robot interaction scenario designed to evaluate the role of gaze as implicit signal for turn-taking in a robotic teaching context. In particular we propose a protocol to assess the impact of different timing strategies in a common teaching task (English dictation). The task is designed to compare the effects of a teaching behavior whose timing is dependent on the student's gaze with the more standard fixed timing approach. An initial validation indicates that this scenario could represent a functional tool for investigating the positive and negative impacts that personalized timing might have on different subjects.
Exception Handling for Natural Language Control of Robots BIBAFull-Text 139-140
  Lanbo She; Yunyi Jia; Ning Xi; Joyce Y. Chai
Enabling natural language control of robots is challenging, since human users are often not familiar with the underlying robotic system, and its capabilities and limitations. Many exceptions may occur when natural language commands are translated into lower-level robot actions. This paper gives a brief introduction to three levels of exceptions and discusses how dialogue can be applied to handle these exceptions during human-robot interaction.
Learning Bimanual Coordinated Tasks From Human Demonstrations BIBAFull-Text 141-142
  Ana-Lucia Pais Ureche; Aude Billard
In robot programming by demonstration, dealing with the high-dimensional data that comes from human demonstrations often requires embedding prior knowledge of which variables should be retained and why. This paper proposes an approach for automating robot learning through the detection of causalities in the set of variables recorded during demonstration. This allows us to infer a notion of coherence and coordination between multiple systems that apparently work independently. We test the approach on a bimanual scooping task consisting of multiple phases. We detect the coordination between the two arms, between the arms and the hands, and between the fingers of each hand, and observe how these coordination patterns change throughout the task.
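   One simple way to surface such coordination from raw demonstration data is a lagged correlation between the two arms' velocity profiles; a strong peak at some lag suggests the arms are coupled. This is a minimal sketch only -- the paper works with richer causality and coordination measures.
      import numpy as np

      def coordination_lag(vel_a, vel_b, rate_hz, max_lag_s=0.5):
          """Find the lag (s) at which two equal-length arm-velocity
          signals correlate most strongly."""
          a = (vel_a - vel_a.mean()) / (vel_a.std() + 1e-9)
          b = (vel_b - vel_b.mean()) / (vel_b.std() + 1e-9)
          max_lag = int(max_lag_s * rate_hz)
          lags = list(range(-max_lag, max_lag + 1))
          corrs = [np.corrcoef(a[max(0, l):len(a) + min(0, l)],
                               b[max(0, -l):len(b) + min(0, -l)])[0, 1]
                   for l in lags]
          best = int(np.argmax(np.abs(corrs)))
          return lags[best] / rate_hz, corrs[best]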
Negotiating Instruction Strategies during Robot Action Demonstration BIBAFull-Text 143-144
  Lars C. Jensen; Kerstin Fischer; Dadhichi Shukla; Justus Piater
This paper describes the kinds of strategies naïve users of an industrial robotic platform make use of and analyzes how these strategies are adjusted based on the robot's feedback. The study shows that users' actions are contingent on the robot's response to such a degree that users will try out alternative instruction strategies if they do not see an effect in the robot within a time frame of two seconds. Thus, the timing of the robot's actions (or inactions) influences how users instruct the robot.
Human Smile Distinguishes between Collaborative and Solitary Tasks in Human-Robot Interaction BIBAFull-Text 145-146
  Franziska Kirstein; Kerstin Fischer; Özgür Erkent; Justus Piater
In this paper, the smiling behavior of participants is analyzed as they instruct a robot to assist them in assembling a wooden toolbox. The results show that participants smile more when interacting with the robot than when they assemble the box. Thus, human tutors' smiling behavior can be used as an indicator to distinguish between collaborative and solitary phases during human-robot collaborative work.
Knowledge Acquisition with Selective Active Learning for Human-Robot Interaction BIBAFull-Text 147-148
  Batbold Myagmarjav; Mohan Sridharan
This paper describes an architecture for robots interacting with non-expert humans to incrementally acquire domain knowledge. Contextual information is used to generate candidate questions that are ranked using measures of information gain, ambiguity, and human confusion, with the objective of maximizing the potential utility of the response. We report results of preliminary experiments evaluating the architecture in a simulated environment.
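   The selection objective the abstract describes can be sketched as a scored ranking over candidate questions; the fields and weights below are hypothetical placeholders for the architecture's actual measures.
      def rank_questions(candidates):
          """Rank candidate questions by expected utility: reward
          information gain, penalize ambiguity and human confusion."""
          def utility(q):
              return (q["info_gain"]
                      - 0.5 * q["ambiguity"]
                      - 0.5 * q["confusion"])
          return sorted(candidates, key=utility, reverse=True)

      questions = [
          {"text": "Which object is the 'mug'?", "info_gain": 0.9, "ambiguity": 0.2, "confusion": 0.1},
          {"text": "Is this a kitchen?",          "info_gain": 0.4, "ambiguity": 0.1, "confusion": 0.1},
      ]
      best = rank_questions(questions)[0]["text"]   # ask the highest-utility question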
Fidgebot: Working Out while Working BIBAFull-Text 149-150
  Jürgen Brandstetter; Noah Liebman; Kati London
More and more people suffer from chronic health issues related to posture and lack of movement in their office work. We developed a novel approach to motivate employees to be more physically active during their office work. Our approach focuses on using the social characteristics of the NAO robot platform to deliver social cues for motivation. Acting like a coworker who is very motivated to exercise, NAO invites employees to do short "micro-exercises" along with it. This approach has multiple advantages when compared to conventional notification systems. Our pilot study shows that employees found it easy and enjoyable to perform micro-exercises with NAO. According to our qualitative data, NAO's social appearance was essential for the motivation of the employees.
Human-Robot Teamwork in USAR Environments: the TRADR Project BIBAFull-Text 151-152
  Joachim de Greeff; Koen Hindriks; Mark A. Neerincx; Ivana Kruijff-Korbayova
The TRADR project aims at developing methods and models for human-robot teamwork, enabling robots to operate in search & rescue environments alongside humans as teammates, rather than as tools. Through a user-centered cognitive engineering method, human-robot teamwork is analyzed, modeled, implemented and evaluated in an iterative fashion. Central to the project is the notion of persistence: rather than treating each sortie as a separate instance in which the build-up of situation awareness and exploration starts from scratch, the objective of the TRADR project is to provide robotic support in an ongoing, fluent manner. This paper provides a short overview of important aspects of human-robot teaming, such as human-robot teamwork coordination and joint situation awareness.
Enabling Synchronous Joint Action in Human-Robot Teams BIBAFull-Text 153-154
  Samantha Rack; Tariq Iqbal; Laurel D. Riek
Joint action is an increasing area of interest for HRI researchers. To be effective team members, robots need to be able to understand, anticipate, and react appropriately to high-level human social behavior. We have designed a new approach to enable an autonomous robot to act fluently within a synchronous human-robot team. We present an initial description and validation study of this approach. Using a synchronous dance scenario as an experimental testbed, we found that our robot was able to execute appropriate actions using our method. Moving forward, we aim to extend this method by developing predictions for the robot's actions using an understanding of the group's dynamics. Our method will be helpful to other researchers working to achieve fluency of action within human-robot groups.
Sliding Autonomy in Cloud Robotics Services for Smart City Applications BIBAFull-Text 155-156
  Gabriele Ermacora; Stefano Rosa; Basilio Bona
The aim of this paper is to present a sliding autonomy approach for Unmanned Aerial Vehicles (UAVs) in the context of the Fly4SmartCity project. The project consists of the implementation of a cloud robotics service in which small UAVs are employed for emergency management, monitoring and surveillance in a smart city scenario. Human-robot interaction is mediated by the cloud robotics platform. We envision three main levels of autonomy for UAVs: full autonomy, mixed-initiative and teleoperation. We then propose different scenarios in which we analyze the Level of Autonomy and the sliding autonomy approach. All services use shared knowledge (crowdsourcing and other data sources available on the Internet) for the management and control of the UAVs.
Efficient Space Utilization for Improving Navigation in Congested Environments BIBAFull-Text 157-158
  Moondeep Shrestha; Hayato Yanagawa; Erika Uno; Shigeki Sugano
In this paper, we look into two behaviors for 'efficient space utilization' by humans in congested situations. From observations, we noticed that 'last minute avoidance' and 'shoulder turning' are two crucial behaviors for achieving efficient navigation in crowded environments. The presented study verifies the shoulder turning behavior and investigates the typical values for these behaviors. These results will form an initial framework for a local avoidance planner in future research.
Towards a Human Factors Model for Underwater Robotics BIBAFull-Text 159-160
  Xian Wu; Rachel E. Stuck; Ioannis Rekleitis; Jenay M. Beer
The goal of this study is to understand the human factors at play between a human and semi-autonomous underwater vehicles (sAUVs) from an HRI perspective. A subject-matter expert (SME) interview approach was used to analyze video data of operators interacting with sAUVs. The results suggest considerations for the capabilities and limitations of the human and robot in relation to the dynamic demands of the task and environment. We propose a preliminary human factors model to depict these components and discuss how they interact.
Automated Planning for Peer-to-peer Teaming and its Evaluation in Remote Human-Robot Interaction BIBAFull-Text 161-162
  Vignesh Narayanan; Yu Zhang; Nathaniel Mendoza; Subbarao Kambhampati
Human factors studies on remote human-robot interaction are often restricted to various forms of supervision, in which the robot is essentially being used as a smart mobile manipulation platform with sensing capabilities. In this study, we investigate the incorporation of a general planning capability into the robot to facilitate peer-to-peer human-robot teaming, in which the human and robot are viewed as teammates that are physically separated. One intriguing question is to what extent humans may feel uncomfortable with such robot autonomy and lose situation awareness, which can potentially reduce teaming performance. Our results suggest that peer-to-peer teaming is preferred by humans and leads to better performance. Furthermore, our results show that peer-to-peer teaming reduces cognitive load according to objective measures (even though subjects did not report this in their subjective evaluations), and it does not reduce situation awareness for short-term tasks.
Visual Robot Programming for Generalizable Mobile Manipulation Tasks BIBAFull-Text 163-164
  Sonya Alexandrova; Zachary Tatlock; Maya Cakmak
General-purpose robots present the opportunity to be programmed for a specific purpose after deployment. This requires tools for end-users to quickly and intuitively program robots to perform useful tasks in new environments. In this paper, we present a flow-based visual programming language (VPL) for mobile manipulation tasks, demonstrate the generalizability of tasks programmed in this VPL, and present a preliminary user study of a development tool for this VPL.
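   A flow-based program of this kind can be thought of as a small dataflow graph whose nodes wrap manipulation primitives and pass named values along edges. The sketch below is a toy executor with stubbed primitives, not the authors' VPL.
      class Node:
          def __init__(self, name, fn, inputs=()):
              self.name, self.fn, self.inputs = name, fn, inputs

      def run_flow(nodes):
          """Execute nodes in order, wiring each node's named output
          into downstream nodes that list it as an input."""
          env = {}
          for node in nodes:
              args = [env[i] for i in node.inputs]
              env[node.name] = node.fn(*args)
          return env

      # Hypothetical mobile-manipulation primitives, stubbed for illustration.
      program = [
          Node("scene",  lambda: {"cup": (1.0, 0.4)}),
          Node("target", lambda scene: scene["cup"], inputs=("scene",)),
          Node("nav",    lambda pose: "navigated to %s" % (pose,), inputs=("target",)),
          Node("grasp",  lambda pose: "grasped at %s" % (pose,), inputs=("target",)),
      ]
      print(run_flow(program))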
A Shared Autonomy Interface for Household Devices BIBAFull-Text 165-166
  Matthew Rueben; William D. Smart
As robots begin to enter our homes and workplaces, they will have to deal with the devices and appliances that are already there. Unfortunately, devices that are easy for humans to operate often cause problems for robots [3]. In teleoperation settings, the lack of tactile feedback often makes manipulation of buttons and switches awkward and clumsy [7]. Also, the robot's gripper often occludes the control, making teleoperation difficult. In the autonomous setting, perception of small buttons and switches is often difficult due to sensor limitations and poor lighting conditions. Adding depth information does not help much, since many of the controls we want to manipulate are small, and often close to the noise threshold of currently-available depth sensors typically installed on a mobile robot. This makes it extremely difficult to segment the controls from the other parts of the device.
   In this paper, we present a shared autonomy approach to the operation of physical device controls. A human operator gives high-level guidance, helps identify controls and their locations, and sequences the actions of the robot. Autonomous software on our robot performs the lower-level actions that require closed-loop control, and estimates the exact positions and parameters of controls. We describe the overall system, and then give the results of our initial evaluations, which suggest that the system is effective in operating the controls on a physical device.
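   The division of labor described here -- the human gives a rough target, autonomy refines it and closes the loop -- can be sketched as follows. All callables are hypothetical stand-ins for the robot's perception and control stack.
      def press_control(operator_hint, refine_pose, servo_step, read_force,
                        force_limit=5.0, max_steps=100):
          """Shared-autonomy sketch for operating a physical control: the
          operator supplies a rough location; autonomous perception refines
          it; a closed loop advances the gripper until contact force is felt."""
          pose = refine_pose(operator_hint)    # autonomy: exact control position
          for _ in range(max_steps):
              if read_force() >= force_limit:  # control actuated: stop pressing
                  return True
              servo_step(pose)                 # small step toward the control
          return False                         # no contact: report failure

      # Toy stubs to exercise the loop:
      state = {"depth": 0.0}
      ok = press_control(
          operator_hint=(0.5, 0.2),
          refine_pose=lambda hint: hint,
          servo_step=lambda pose: state.__setitem__("depth", state["depth"] + 0.001),
          read_force=lambda: 1000.0 * max(0.0, state["depth"] - 0.01),
      )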
Understanding Second-Order Theory of Mind BIBAFull-Text 167-168
  Laura M. Hiatt; J. Gregory Trafton
Theory of mind is a key factor in the effectiveness of robots and humans working together as a team. Here, we further our understanding of theory of mind by extending a theory of mind model to account for a more complicated, second-order theory of mind task. Ultimately, this will provide robots with a deeper understanding of their human teammates, enabling them to better perform in human-robot teams.
Facial Expression Synthesis on Robots: An ROS Module BIBAFull-Text 169-170
  Maryam Moosaei; Cory J. Hayes; Laurel D. Riek
We present a generalized technique for easily synthesizing facial expressions on robotic faces. In contrast to other work, our approach works in near real time with a high level of accuracy, does not require any manual labeling, is a fully open-source ROS module, and can enable the research community to perform objective and systematic comparisons between the expressive capabilities of different robots.
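   A minimal ROS-style node for this kind of module might blend between expression targets and publish them at frame rate, as sketched below. The topic name and the action-unit vector layout are assumptions, not the module's published interface.
      #!/usr/bin/env python
      import rospy
      from std_msgs.msg import Float64MultiArray

      NEUTRAL = [0.0] * 6                         # e.g., [AU1, AU2, AU4, AU6, AU12, AU15]
      SURPRISE = [0.9, 0.8, 0.0, 0.2, 0.1, 0.0]   # hypothetical target intensities

      def main():
          rospy.init_node("face_synthesis_demo")
          pub = rospy.Publisher("/face/au_targets", Float64MultiArray, queue_size=10)
          rate = rospy.Rate(30)                   # publish at frame rate
          steps, i = 60, 0
          while not rospy.is_shutdown() and i <= steps:
              alpha = i / float(steps)            # linear blend neutral -> surprise
              msg = Float64MultiArray()
              msg.data = [(1 - alpha) * n + alpha * s for n, s in zip(NEUTRAL, SURPRISE)]
              pub.publish(msg)
              i += 1
              rate.sleep()

      if __name__ == "__main__":
          main()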
Experimental Verification of Learning Strategy Fusion for Varying Environments BIBAFull-Text 171-172
  Akihiko Yamaguchi; Masahiro Oshita; Jun Takamatsu; Tsukasa Ogasawara
We investigate the real-robot applicability of our method for general-purpose behavior learning for high-degree-of-freedom robots in varying environments. Our method is based on the learning strategy fusion proposed in [Yamaguchi et al. 2011] and extended theoretically in [Yamaguchi et al. 2013]. This report discusses its applicability to real robot systems and demonstrates some positive experimental results.
A Cloud Robotic System based on Robot Companions for Children with Autism Spectrum Disorders to Perform Evaluations during LEGO Engineering Workshops BIBAFull-Text 173-174
  Jordi Albo-Canals; Danielle Feerst; Daniel de Cordoba; Chris Rogers
In this paper, we propose a non-invasive way to measure the level of anxiety and stress of participants with Autism Spectrum Disorders without using wearable devices. This measurement is done through a robot companion, which logs children's behavior during social skills training sessions based on building LEGO robotics. All of this data can be uploaded to a cloud system for future comparison and research. The work presented validates the proposed technology and the session layout.
Cloud based VR System with Immersive Interfaces to Collect Human Gaze and Body Motion Behaviors BIBAFull-Text 175-176
  Yoshinobu Hagiwara; Yoshiaki Mizuchi; Yongwoon Choi; Tetsunari Inamura
In this study, we present a cloud VR system with immersive interfaces to collect human gaze and body behaviors. Through the proposed system, humans can log in to a VR space and communicate with a robot. Oculus Rift and Kinect provide immersive visualization and motion control, respectively. Human gaze and body behaviors are collected into a database during the human-robot interaction. An application experiment on sharing object concepts demonstrates the feasibility of the proposed system.
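   The collection step can be as simple as appending timestamped samples to a database; the schema below is an illustrative guess, since the system's actual storage layout is not described here.
      import sqlite3, time

      db = sqlite3.connect("hri_vr_log.db")
      db.execute("""CREATE TABLE IF NOT EXISTS behavior_log (
          t REAL, user_id TEXT, stream TEXT,   -- 'gaze' or 'body'
          x REAL, y REAL, z REAL)""")

      def log_sample(user_id, stream, xyz):
          """Append one timestamped gaze or body sample."""
          db.execute("INSERT INTO behavior_log VALUES (?, ?, ?, ?, ?, ?)",
                     (time.time(), user_id, stream, *xyz))
          db.commit()

      log_sample("subject01", "gaze", (0.12, 0.05, 1.80))   # example gaze vector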
Achieving the Vision of Effective Soldier-Robot Teaming: Recent Work in Multimodal Communication BIBAFull-Text 177-178
  Susan G. Hill; Daniel Barber; Arthur W. Evans, III
The U.S. Army Research Laboratory (ARL) Autonomous Systems Enterprise has a vision for the future of effective Soldier-robot teaming. Our research program focuses on three primary thrust areas: communications, teaming, and shared cognition. Here we discuss a recent study in communications, where we collected data using a multimodal interface comprised of speech, gesture, touch and a visual display to command a robot to perform semantically-based tasks. Observations on usability and participant expectations with respect to the interaction with the robot were obtained. Initial observations are reported, showing that the speech-gesture-visual multimodal interface was liked and performed well. Areas for improvement were noted.
Effects of Agent Transparency on Operator Trust BIBAFull-Text 179-180
  Michael W. Boyce; Jessie Y. C. Chen; Anthony R. Selkowitz; Shan G. Lakhmani
We conducted a human-in-the-loop robot simulation experiment examining the effects of displaying transparency information in the interface of an autonomous robot on operator trust. Participants were assigned to one of three transparency conditions, and trust was measured prior to observing the autonomous robotic agent's progress and post observation. Results demonstrated that participants who received more transparency information reported higher trust in the autonomous robotic agent. Overall findings indicate that displaying SAT (Situation awareness-based Agent Transparency) model-based transparency information on a robotic interface is effective for appropriate trust calibration in an autonomous robotic agent.
Modeling Human-Robot Collaboration in a Simulated Environment BIBAFull-Text 181-182
  Jekaterina Novikova; Leon Watts; Tetsunari Inamura
In this paper, we describe a project that explores SIGVerse, an open-source enhanced robot simulator, for research on social human-robot interaction. Research on high-level social human-robot interaction systems that include collaboration and emotional intercommunication between people and robots requires a large amount of data from embodied interaction experiments. However, the cost of developing real robots and performing many experiments can be very high. On the other hand, virtual robot simulators are very limited in terms of interaction between simulated robots and real people. We therefore propose using the enhanced human-robot interaction simulator SIGVerse, which enables users to join the virtual world occupied by simulated robots through an immersive user interface. In this paper, we describe a collaborative human-robot interaction task in which a virtual human agent is controlled remotely by human subjects to interact with an autonomous virtual robot with implemented artificial emotional reactions. Our project takes the first steps in exploring the potential of an enhanced human-robot interaction simulator for building socially interactive robots that can serve in educational, team-building, and collaborative task-solving applications.
Human Safety and Efficiency of a Robot Controlled by Asymmetric Velocity Moderation BIBAFull-Text 183-184
  Gustavo Alfonso Garcia Ricardez; Akihiko Yamaguchi; Jun Takamatsu; Tsukasa Ogasawara
Maintaining human safety during HRI is key to the integration of humanoids into our daily lives. With this in mind, we previously proposed Asymmetric Velocity Moderation (AVM) as a way of restricting the robot's speed when interacting with humans. With AVM, the robot reduces its speed according to the distance to the human and the direction of the motion. In this paper, we propose a new way of calculating the speed restriction which solves a problem with previous proposals, in which human safety was sacrificed due to an unexpectedly weak restriction. We focus on a detailed investigation of how AVM treats situations in which a humanoid could endanger a human, and test it using different calculation methods for the speed restriction. Finally, we evaluate the efficiency of the humanoid HRP-4 in terms of task completion time by performing simulation experiments in simple HRI scenarios.
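   The asymmetry at the heart of AVM -- restricting motion toward the human more than motion away -- can be sketched as a speed-limit function of distance and approach direction. The constants and scaling below are illustrative, not the values derived in the paper.
      import numpy as np

      def avm_speed_limit(robot_vel, robot_pos, human_pos,
                          v_max=1.0, d_stop=0.3, d_free=1.5):
          """Speed limit (m/s) that tightens with proximity to the human
          and tightens further when the velocity points toward the human.
          Positions and velocity are numpy arrays."""
          to_human = human_pos - robot_pos
          d = np.linalg.norm(to_human)
          if d <= d_stop:
              return 0.0                          # inside the stop zone: halt
          dist_scale = min(1.0, (d - d_stop) / (d_free - d_stop))
          speed = np.linalg.norm(robot_vel)
          approach = 0.0
          if speed > 1e-9:                        # fraction of motion toward the human
              approach = max(0.0, np.dot(robot_vel / speed, to_human / d))
          dir_scale = 1.0 - 0.8 * approach        # asymmetric: approach is restricted
          return v_max * dist_scale * dir_scale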

HRI Pioneers -- Poster Session 1

Personal Space Invaders: Exploring Robot-initiated Touch-based Gestures for Collaborative Robotics BIBAFull-Text 185-186
  Jeff Allen; Karon E. MacLean
Robots have been physically interacting with people for many years, but almost exclusively it is people who initiate the physical contact. Robots able to initiate touch will enable a large new category of human interaction, for example collaboration in noisy environments, and can help bridge cultural and language barriers. In this paper, we outline a research plan with the goal of developing a framework for robot-initiated physical communication, towards enabling robots to work and collaborate safely with people in close proximity. Physical touch-based interaction that is acceptable and understandable to all people is a challenging goal. Instead, we aim to develop robots that start with a repertoire of general touch behaviours which are adapted to individual people's preferences as each person interacts with the robot.
Skill Demonstration Transfer for Learning from Demonstration BIBAFull-Text 187-188
  Tesca Fitzgerald; Andrea L. Thomaz
Learning from Demonstration is an effective method for interactively teaching skills to a robot learner. However, a skill learned via demonstrations is often learned within a particular environment and uses a specific set of objects, and thus may not be immediately applicable for use in unfamiliar environments. Transfer learning addresses this problem by enabling a robot to apply learned skills to unfamiliar environments. We describe our ongoing work to develop a system which enables transfer learning by representing skill demonstrations according to the level of similarity between the source and target environments.
Encouraging User Autonomy through Robot-Mediated Intervention BIBFull-Text 189-190
  Jillian Greczek; Maja Mataric
Robots Interacting with Style BIBAFull-Text 191-192
  Wafa Johal
Our research goal is to identify ways to adapt the non-verbal behavior and skills of a companion robot for children. We present an experiment considering parents' and children's perception of role changing and behavioral style in an interactive scenario with children. Behavioral styles, being nonverbal parameters, affect the way a robot expresses itself within a specific task. The results of this ongoing experiment aim to determine the influence of role changing and styles in terms of the perceived credibility and engagement of the child interacting with the robot.
Fostering Learning Gains Through Personalized Robot-Child Tutoring Interactions BIBAFull-Text 193-194
  Aditi Ramachandran; Brian Scassellati
Social robots can be used to tutor children in one-on-one interactions. It would be most beneficial for these robots to adapt their behavior to suit the individual learning needs of children. Each child is different; they learn at their own pace and respond better to certain types of feedback and exercises. Furthermore, being able to detect various affective signals during an interaction with a social robot would allow the robot to adaptively change its behavior to counter negative affective states that occur during learning, such as confusion or boredom. This type of adaptive behavior based on perceived signals from the child (such as facial expressions, body posture, etc.) will create more effective tutoring interactions between the robot and child. We propose that a robotic tutoring system that can leverage both affective signals as well as progress through a learning task will lead to greater engagement and learning gains from the child in a one-on-one tutoring interaction.
Tactile Skin Deformation Feedback for Conveying Environment Forces in Teleoperation BIBAFull-Text 195-196
  Samuel B. Schorr; Zhan Fan Quek; William R. Provancher; Allison M. Okamura
Teleoperated robots are used in a variety of applications. The immersive nature of the teleoperated experience is often limited by a lack of haptic information. However, in many applications there are difficulties conveying force information due to limitations in hardware fidelity and the inherent tradeoffs between stability and transparency. In situations where force feedback is limited, it is possible to use sensory substitution methods to convey this force information via other sensory modalities. We hypothesize that skin stretch feedback is a useful substitute for kinesthetic force feedback in force-sensitive teleoperated tasks. We created and tested a tactile device that emulates the natural skin deformation present during tool mediated manual interaction. With this device, experiment participants performed teleoperated palpation to determine the orientation of a stiff region in a surrounding artificial tissue using skin stretch, force, reduced gain force, graphic, or vibration feedback. Participants using skin stretch feedback were able to determine the orientation of the region as accurately as when using force feedback and significantly better than when using vibration feedback, but also exhibited higher interaction forces. Thus, skin stretch feedback may be useful in scenarios where force feedback is reduced or infeasible.
When is it Better to Give Up?: Towards Autonomous Action Selection for Robot Assisted ASD Therapy BIBAFull-Text 197-198
  Emmanuel Senft; Paul Baxter; James Kennedy; Tony Belpaeme
Robot Assisted Therapy (RAT) for children with ASD has found promising applications. In this paper, we outline an autonomous action selection mechanism to extend current RAT approaches. This will include the ability to revert control of the therapeutic intervention to the supervising therapist. We suggest that in order to maintain the goals of therapy, sometimes it is better if the robot gives up.
Analyzing Human Behavior and Bootstrapping Task Constraints from Kinesthetic Demonstrations BIBAFull-Text 199-200
  Ana Lucia Pais Ureche; Aude Billard
In robot Programming by Demonstration (PbD), the interaction with the human user is key to collecting good demonstrations, learning and finally achieving a good task execution. We therefore take a dual approach in analyzing demonstration data. First we automatically determine task constraints that can be used in the learning phase. Specifically we determine the frame of reference to be used in each part of the task, the important variables for each axis and a stiffness modulation factor. Additionally for bi-manual tasks we determine arm-dominance and spatial or force coordination patterns between the arms. Second we analyze human behavior during demonstration in order to determine how skilled the human user is and what kind of feedback is preferred during the learning interaction. We test this approach on complex tasks requiring sequences of actions, bi-manual or arm-hand coordination and contact on each end effector.
Toward More Natural Human-Robot Dialogue BIBFull-Text 201-202
  Tom Williams
A Robotized Environment for Improving Therapist Everyday Work with Children with Severe Mental Disabilities BIBAFull-Text 203-204
  Igor Zubrycki; Grzegorz Granosik
The burnout rate is very high among therapists who work with mentally disabled children, especially those with autism. To address this issue, our group of researchers from different fields conducted a series of interviews and participant observations in a workplace and proposed a programmable, robotized environment that therapists can use in their everyday work. Our research question is to what extent such an environment will be programmed and used in practice, and whether it will improve therapists' well-being.

HRI Pioneers -- Poster Session 2

Information Management for Map-Based Visualizations of Robot Teams BIBAFull-Text 205-206
  Electa A. Baker
Complex human-machine systems, including remotely deployed mobile robots and sensors, can generate an overwhelming amount of data. Filtering the available geospatial information is necessary to make the most time-critical information salient to the system operators. The General Visualization and Abstraction (GVA) algorithm abstracts the presented information in order to reduce visual clutter and has been shown to reduce the cognitive demands and perceived workload of a single operator tasked with supervising teams of multiple robots with high levels of autonomy [1, 2]. My research focuses on significantly extending the GVA algorithm to support multiple human operators who share a common high-level goal, but have role-specific subgoals for their designated human-robot teams.
An Adaptive Robotic Tablet Gaming System for Post-Stroke Hand Function Rehabilitation BIBAFull-Text 207-208
  Brittney A. English; Ayanna M. Howard
Physical therapy is a common treatment for the rehabilitation of hemiparesis, or the weakness of one side of the body. Stroke is a common cause of hemiparesis. Stroke survivors regularly struggle with motivation and engagement, especially in-between sessions when the therapist is absent from the exercising process. As a solution, we have developed a robotic tablet gaming system to facilitate post-stroke hand function rehabilitation. Healthy subject pilot studies have been completed to verify that this system increases engagement and is capable of encouraging specific therapeutic motions. In the future, a learning model algorithm will be added to the system to assess the patient's progress and optimize the recovery time.
Extended Virtual Presence of Therapists through Home Service Robots BIBAFull-Text 209-210
  Hee-Tae Jung; Yu-kyong Choe; Roderic Grupen
The use of robots in rehabilitation is an increasingly viable option, given the shortage of well-trained therapists who can address individual patients' needs and priorities. Despite the acknowledged importance of customized therapy for individual patients, the means to realize it has received less research attention. Many approaches rely on rehabilitation robots, such as InMotion [3], where therapy customization is achieved by physically assisting patients when they cannot complete expected exercise movements. Consequently, it is important to accurately detect patients' unsuccessful efforts to make exercise movements using various signals. An example that utilizes electromyography signals can be found in Dipietro et al. [1]. These approaches lack adaptive therapy programs: generic exercise targets do not necessarily address the specific needs/deficits of individual patients, nor do they impose appropriate challenges.
Robotic Coaching of Complex Physical Skills BIBAFull-Text 211-212
  Alexandru Litoiu; Brian Scassellati
The research area of using robots to coach complex physical skills is underserved. Whereas robots have been used extensively in the form of robotic orthoses to rehabilitate early trauma patients, there is more that can be done to develop robots that help children, the elderly and late-stage rehabilitation patients to excel at physical skills. In order to do this, we must develop robots that do not actuate on the students, but coach them through hands-off modalities such as verbal advice and demonstrations. This approach requires sophisticated perception, and modeling of the student's movement in order to deliver effective advice. Preliminary results suggest that these goals can be achieved with consumer-grade sensing hardware. We present planned future work towards achieving this vision.
An Unsupervised Learning Approach for Classifying Sequence Data for Human Robotic Interaction Using Spiking Neural Network BIBAFull-Text 213-214
  Banafsheh Rekabdar; Monica Nicolescu; Mircea Nicolescu
The goal of this research is to enable robots to learn spatio-temporal patterns from a human's demonstration. We propose an approach based on Spiking Neural Networks. The method brings the following contributions: first, it enables the encoding of patterns in an unsupervised manner. Second, it requires a very small number of training examples. Third, the approach is invariant to scale and translation. We validated our method on a dataset of hand-movement gestures representing the drawing of digits 0 to 9 in front of a camera. We compared the proposed approach with other standard pattern recognition approaches. The results indicate the superiority of the proposed method over the other approaches.
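   To give a flavor of spike-based encoding, the sketch below place-codes a 1-D gesture trajectory into a spatio-temporal spike pattern; it is a minimal stand-in for the paper's spiking-network encoder, not the authors' method.
      import numpy as np

      def encode_spikes(trajectory, n_neurons=16, lo=-1.0, hi=1.0):
          """Population place-coding: at each time step, the neuron whose
          preferred value is nearest to the sample emits a spike."""
          centers = np.linspace(lo, hi, n_neurons)
          spikes = np.zeros((len(trajectory), n_neurons), dtype=int)
          for t, x in enumerate(trajectory):
              spikes[t, int(np.argmin(np.abs(centers - x)))] = 1
          return spikes

      # A drawn digit's x-coordinates over time become a spatio-temporal
      # spike pattern; classification then compares patterns across classes.
      pattern = encode_spikes(np.sin(np.linspace(0, 2 * np.pi, 50)))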
Creating Interactive Robotic Characters: Through a combination of artificial intelligence and professional animation BIBAFull-Text 215-216
  Tiago Guiomar Ribeiro; Ana Paiva
We are integrating artificial intelligent agents with generic animation systems in order to provide socially interactive robots with expressive behavior defined by animation artists. Such animators will therefore be able to apply principles of traditional and 3D animation to these robotic systems, thus making it possible to achieve the illusion of life in robots. Our work requires studies and interactive scenario development alongside the artists.
Context-Aware Assistive Interfaces for Persons with Severe Motor Disabilities BIBAFull-Text 217-218
  Matthew Rueben
Persons with severe motor disabilities have a great need for assistive robots, but also struggle to communicate these needs in ways that a robot can understand. I propose an interface that will make it possible to communicate with robots using limited movements. This will be done using contextual information from the robot's semantic model of the world. I also describe the state-of-the-art hardware and personal collaborations that equip our lab for this research. Assistive robotic interfaces also evoke concerns that a robot could violate personal privacy expectations, particularly if a remote operator can see the robot's video stream. This is especially important for persons with disabilities because it may be harder for them to monitor the robot's whereabouts. I describe ongoing work on two interfaces that help make it possible for robots to be privacy conscious. Answers for privacy concerns need to be developed alongside the new interface technologies prior to in-home deployment.
Affect and Inference in Bayesian Knowledge Tracing with a Robot Tutor BIBAFull-Text 219-220
  Samuel Spaulding; Cynthia Breazeal
In this paper, we present work to construct a robotic tutoring system that can assess student knowledge in real time during an educational interaction. Like a good human teacher, the robot draws on multimodal data sources to infer whether students have mastered language skills. Specifically, the model extends the standard Bayesian Knowledge Tracing algorithm to incorporate an estimate of the student's affective state (whether he/she is confused, bored, engaged, smiling, etc.) in order to predict future educational performance. We propose research to answer two questions: First, does augmenting the model with affective information improve the computational quality of inference? Second, do humans display more prominent affective signals in an interaction with a robot, compared to a screen-based agent? By answering these questions, this work has the potential to provide both algorithmic and human-centered motivations for further development of robotic systems that tightly integrate affect understanding and complex models of inference with interactive, educational robots.
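   The standard Bayesian Knowledge Tracing update that the model extends is shown below, with a hypothetical hook where an affect estimate (here, engagement) softens the evidence; the paper's actual affect integration may differ.
      def bkt_update(p_know, correct, p_slip, p_guess, p_transit, engagement=1.0):
          """One BKT step: Bayesian posterior on mastery given the observed
          answer, then a learning transition. Low engagement inflates the
          slip/guess noise (assumed modulation, for illustration only)."""
          slip = p_slip + (1.0 - engagement) * 0.1
          guess = p_guess + (1.0 - engagement) * 0.1
          if correct:
              post = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
          else:
              post = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
          return post + (1 - post) * p_transit    # chance of learning this step

      p = 0.3                                     # prior mastery estimate
      for obs in [True, True, False]:             # observed answers
          p = bkt_update(p, obs, p_slip=0.1, p_guess=0.2, p_transit=0.15, engagement=0.8)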
Towards Efficient Collaborations with Trust-Seeking Adaptive Robots BIBAFull-Text 221-222
  Anqi Xu; Gregory Dudek
We are interested in asymmetric human-robot teams, where a human supervisor occasionally takes over control to aid an autonomous robot in a given task. Our research aims to optimize team efficiency by improving the robot's task performance, decreasing the human's workload, and building trust in the team. We envision synergistic collaborations where the robot adapts its behaviors dynamically to optimize efficacy, reduce manual interventions, and actively seek greater trust. We describe recent works that study two facets of this trust-seeking adaptive methodology: modeling human-robot trust dynamics, and developing interactive behavior adaptation techniques. We also highlight ongoing efforts to combine these works, which will enable future human-robot teams to be maximally trusting and efficient.
The Effect of Robot Appearance Types and Task Types on Service Evaluation of a Robot BIBAFull-Text 223-224
  Jung Ju Choi; Sonya S. Kwak
A robot's appearance can be classified into two types: human-oriented and product-oriented. A human-oriented robot resembles a human in appearance, whereas a product-oriented robot is an intelligent product in which robotic technologies are integrated into an existing product. In this study, we investigated the impact of these two robot appearance types and two task types on the service evaluation of a robot. We executed a 2 (robot appearance type: human-oriented vs. product-oriented) x 2 (robot task type: social context vs. task-oriented context) mixed-methods experimental design (N=48). In the social context, people evaluated the service provided by a human-oriented robot better than that provided by a product-oriented robot, while in the task-oriented context, they evaluated the service provided by a product-oriented robot more positively than that provided by a human-oriented robot. Implications for the design of human-robot interaction are discussed.

HRI Pioneers -- Poster Session 3

Error Feedback for Robust Learning from Demonstration BIBAFull-Text 225-226
  Maria Vanessa aus der Wieschen; Kerstin Fischer; Norbert Krüger
The present study applies a user-centered approach to investigating feedback modalities for robot teleoperation by naïve users. It identifies the reasons why novice users need feedback and evaluates feedback modalities by employing participatory design. Moreover, drawing on document design theory, it studies which design guidelines need to be followed in the creation of legible error feedback screens.
Studying Socially Assistive Robots in Their Organizational Context: Studies with PARO in a Nursing Home BIBAFull-Text 227-228
  Wan-Ling Chang; Selma Šabanovic
We explore human-robot interaction (HRI) with socially assistive robots within a broader social context, rather than in one-on-one interaction. In this paper, we describe two in situ studies of the socially assistive robot PARO in a local nursing home -- one in a controlled small-group setting, and one in free-form interaction in a public space -- as well as our future research agenda to facilitate socially situated exploration of assistive robotics in the wild. We particularly focus on how people and institutions scaffold successful HRI, and identify how social mediation, individual sensemaking, and other social factors affect the success of HRI.
Social Personalized Human-Machine Interaction for People with Autism BIBAFull-Text 229-230
  Pauline Chevalier
My PhD research aims to develop a new personalized social interaction model between a humanoid robot and/or a virtual agent and an individual (child and/or adult) with Autism Spectrum Disorder (ASD), so as to enhance his/her social and communication skills. Because of the variability of the syndrome among the ASD population, our objective is to propose a customized social interaction for each individual. Previous studies explored the link between an individual's integration of proprioceptive and visual feedback and communication, interaction skills, and emotion recognition [1], [2]. In light of the impact of ASD on vision and motor processing [3], [4], and in order to define each individual's profile, we posit that the individual's reliance on proprioceptive and kinematic visual cues will affect the way he/she interacts with a social agent. In our work, a first experiment that defines each participant's perceptivo-cognitive and sensorimotor profile with respect to the integration of visual inputs has already been conducted. Our next work will focus on developing appropriate agent behaviors that fit the user's profile.
Towards Analyzing Cooperative Brain-Robot Interfaces Through Affective and Subjective Data BIBAFull-Text 231-232
  Chris S. Crawford; Juan E. Gilbert
Several single-user Brain-Computer Interface (BCI) systems currently exist. These systems are often used to provide input to robots. Although these systems are useful in some applications, they often cause issues such as high cognitive workloads and fatigue. The presented research investigates an alternative approach, which consists of dividing cognitive tasks amongst multiple users. The primary goal of this research is to investigate the effectiveness of cooperative Brain-Robot Interfaces (cBRI) by analyzing affective data (engagement) provided by a BCI device and subjective data collected from participants.
Developing Learning from Demonstration Techniques for Individuals with Physical Disabilities BIBAFull-Text 233-234
  William Curran
Learning from demonstration research often assumes that the demonstrator can quickly give feedback or demonstrations. Individuals with severe motor disabilities are often slow and prone to human errors in demonstrations while teaching. Our work develops tools to allow persons with severe motor disabilities, who stand to benefit most from assistive robots, to train these systems. To accommodate slower feedback, we will develop a movie-reel style learning from demonstration interface. To handle human error, we will use dimensionality reduction to develop new reinforcement learning techniques.
Autonomy, Embodiment, and Obedience to Robots BIBAFull-Text 235-236
  Denise Geiskkovitch; Stela Seo; James E. Young
We conducted an HRI obedience experiment comparing an autonomous robotic authority to (i) a remote-controlled robot and (ii) robots of varying embodiments during a deterrent task. The results suggest that half of people will continue to perform a tedious task under the direction of a robot, even after expressing a desire to stop. Further, we failed to find an impact of robot embodiment or perceived robot autonomy on obedience. Rather, the robot's perceived authority status may be more strongly correlated with obedience.
Open Learner Modelling with a Robotic Tutor BIBAFull-Text 237-238
  Aidan Jones; Susan Bull; Ginevra Castellano
This paper describes research exploring how personalisation in a robot tutor using an open learner model (OLM) based approach impacts the effectiveness of children's learning. An OLM is a visualisation of a learner's inferred knowledge state. We address the feasibility of using social robotics to present an OLM to a learner. Results to date indicate that a robotic tutor can increase trust in the explanations of an OLM over text-based representations. We outline the remaining work to create and evaluate an autonomous robotic tutor that will use an OLM to scaffold reflection.
Challenges in Developing a Collaborative Robotic Assistant for Automotive Assembly Lines BIBAFull-Text 239-240
  Vaibhav Vasant Unhelkar; Julie A. Shah
Industrial robots are on the verge of emerging from their cages and entering final assembly to work alongside humans. Towards this, we are developing a collaborative robot capable of assisting humans in final automotive assembly. Several algorithmic as well as design challenges arise when robots enter the unpredictable, human-centric and time-critical environment of final assembly. In this work, we briefly discuss a few of these challenges along with developed solutions and proposed methodologies, and their implications for improving human-robot collaboration.
Co-Adaptive Optimal Control Framework for Human-Robot Physical Symbiosis BIBFull-Text 241-242
  Ker-Jiun Wang; Mingui Sun; Ruiping Xia; Zhi-Hong Mao
Bidirectional Learning of Handwriting Skill in Human-Robot Interaction BIBAFull-Text 243-244
  Hang Yin; Aude Billard; Ana Paiva
This paper describes the design of a robot agent and associated learning algorithms to help children in handwriting acquisition. The main issue lies in how to program a robot to obtain human-like handwriting and then exploit it to teach children. We propose to address this by integrating the learning-from-demonstration paradigm, which allows the robot to extract a task index from intuitive expert (e.g., adult) demonstrations. We present our work on the development of an algorithm, as well as its validation by learning compliant robotic writing motion from the extracted index. Also discussed is the synthesis of the learned task in the prospective work of transferring the task skill to users, especially in terms of learning by teaching. Ongoing work on the design of a sensor-embedded pen is also introduced; this will be used as an intuitive interface for recording various handwriting-related information during the interaction.

Workshops

HRI Education Workshop: How to Design and Teach Courses in Human-Robot Interaction BIBAFull-Text 245-246
  Carlotta A. Berry; Cindy Bethel; Selma Šabanovic
This workshop aims to share best practices for teaching courses in Human-Robot Interaction (HRI). The main focus is on undergraduate and graduate education and training, but K-12 and informal learning environments are also of interest. HRI is still a relatively new field with no standardized textbook or curriculum. Furthermore, HRI education requires an interdisciplinary approach, which poses challenges for both students and instructors. This workshop will bring together researchers and educators to discuss strategies for designing and teaching HRI to students with diverse backgrounds and skill sets.
The Emerging Policy and Ethics of Human Robot Interaction BIBAFull-Text 247-248
  Laurel D. Riek; Woodrow Hartzog; Don A. Howard; AJung Moon; Ryan Calo
As robotics technology forays into our daily lives, research, industry, and government professionals in the field of human-robot interaction (HRI) must grapple with significant ethical, legal, and normative questions. Many leaders in the field have suggested that "the time is now" to start drafting ethical and policy guidelines for our community to guide us forward into this new era of robots in human social spaces. However, thus far, discussions have been skewed toward the technology side or policy side, with few opportunities for cross-disciplinary conversation, creating problems for the community. Policy researchers can be concerned about robot capabilities that are scientifically unlikely to ever come to fruition (like the singularity), and technologists can be vehemently opposed to ethics and policy encroaching on their professional space, concerned it will impede their work. This workshop aims to build a cross-disciplinary bridge that will ensure mutual education and grounding, and has three main goals: 1) Cultivate a multidisciplinary network of scholars who might not otherwise have the opportunity to meet and collaborate, 2) Serve as a forum for guided discussion of relevant topics that have emerged as pressing ethical and policy issues in HRI, 3) Create a working consensus document for the community that will be shared broadly.
Workshop on Enabling Rich, Expressive Robot Animation BIBAFull-Text 249-251
  Elizabeth Jochum; David Nuñez
HRI researchers and practitioners often need to generate complex, rich, expressive movement from machines to facilitate effective interaction. Techniques often include live puppeteering via Wizard-of-Oz setups, sympathetic interfaces, or custom control software. Often, animation is accomplished by playing back pre-rendered movement sequences generated by offline animators, puppeteers, or actors providing input to motion capture systems. Roboticists have also explored real-time parametric animation, affected motion planning, mechanical motion design, or blends of offline and live methods. Generating robot animation is not always straightforward and can be time consuming, costly, or even counter-productive when human-robot interaction breaks down due to inadequate animation. This workshop addresses a need to compare the various approaches to animating robots, to identify when particular techniques are most appropriate, and to explore opportunities for further experimentation and tool-building.
Workshop on Behavior Coordination between Animals, Humans and Robots BIBAFull-Text 253-254
  Hagen Lehmann; Luisa Damiano; Lorenzo Natale
This workshop intends to bring together researchers investigating one or more aspects of behavior coordination in three different research domains: human-human interaction, human-animal interaction, and human-robot interaction.
HRI Workshop on Human-Robot Teaming BIBAFull-Text 255-256
  Bradley Hayes; Matthew C. Gombolay; Malte F. Jung; Koen Hindriks; Joachim de Greeff; Catholijn Jonker; Mark Neerincx; Jeffrey M. Bradshaw; Matthew Johnson; Ivana Kruijff-Korbayova; Maarten Sierhuis; Julie A. Shah; Brian Scassellati
Developing collaborative robots that can productively and safely operate out of isolation in uninstrumented, human-populated environments is an important goal for the field of robotics. The development of such agents, those that handle the dynamics of human environments and the complexities of interpreting human interaction, is a strong focus within Human-Robot Interaction and involves underlying research questions deeply relevant to the broader robotics community. "Human-Robot Teaming" is a full-day workshop bringing together peer-reviewed technical and position paper contributions spanning a multitude of topics within the domain of human-robot teaming. This workshop seeks to bring together researchers from a wide array of human-robot interaction research topics with the focus of enabling humans and robots to better work together towards common goals. The morning session is devoted to gaining insight from invited speakers and contributed papers, while the afternoon session heavily emphasizes participant interaction via poster presentations, breakout sessions, and an expert panel discussion.
HRSI 2015: Workshop on Human-Robot Spatial Interaction BIBAFull-Text 257-258
  Marc Hanheide; Christian Dondrup; Ute Leonards; Tamara Lorenz; David Lu
With mobile robots moving into day-to-day life in private homes, in care settings, and in public spaces and outdoors as robotic guides, Human-Robot Spatial Interaction (HRSI) -- the study of the joint movement of humans and robots and the social signals governing the interaction -- becomes more and more important. The main focus of the workshop lies in incorporating social norms, like proxemics, and social signals, like eye contact and motioning, into current navigation approaches, be they constraint-based, learned from observation/demonstration, or interactive. Further topics include how to evaluate the quality of HRSI using novel feedback measures and devices, and how to ground HRSI in empirical experiments focusing on Human-Human Spatial Interaction. The overall aim is to bring together participants from different fields looking at all aspects of HRSI, presenting their work and identifying the main problems and questions on the way to a holistic, integrated HRSI system.
FJA@HRI15: Towards a Framework for Joint Action BIBAFull-Text 259-260
  Aurélie Clodic; Rachid Alami; Cordula Vesper; Elisabeth Pacherie; Bilge Mutlu; Julie A. Shah
The HRI 2015 Workshop "Towards a Framework for Joint Action" is a full-day workshop held in conjunction with the 10th ACM/IEEE International Conference on Human-Robot Interaction, in Portland (USA) on March 2nd, 2015. The first edition of the workshop took place at RO-MAN 2014. This workshop aims to bring together researchers from several disciplines to discuss the development of frameworks for analyzing and designing human-robot joint action. It is meant to give researchers interested in joint action, roboticists but also philosophers and psychologists, the opportunity to discuss the topic in depth and to contribute to the elaboration of a framework for human-robot joint action. To achieve this goal, we propose that the community tackle a COMMON EXAMPLE (as is sometimes done in robotics planning competitions), with the goal of identifying the capacities and skills needed for the successful performance of the joint action. This should enable us to build on each other's experience to further develop ongoing work. The proposed example is described on the workshop website: fja.sciencesconf.org.
Cognition: A Bridge between Robotics and Interaction BIBAFull-Text 261-262
  Alessandra Sciutti; Katrin S. Lohan; Yukie Nagai
A key feature of humans is the ability to anticipate what other agents are going to do and to plan a collaborative action accordingly. This skill, derived from being able to entertain models of other agents, compensates for the intrinsic delays of human motor control and is a primary support for efficient and fluid interaction. Moreover, the awareness that other humans are cognitive agents who combine sensory perception with internal models of the environment and of others enables easier mutual understanding and coordination [1]. Cognition therefore represents an ideal link between different disciplines, such as robotics and the interaction studies performed by neuroscientists and psychologists. From a robotics perspective, the study of cognition aims at implementing cognitive architectures that lead to efficient interaction with the environment and other agents (e.g., [2,3]). From the perspective of the human disciplines, robots could represent an ideal stimulus for studying which fundamental properties a robot needs in order to be perceived as a cognitive agent, enabling natural human-robot interaction (e.g., [4,5]). Ideally, the implementation of cognitive architectures may raise new and interesting questions for psychologists, and the behavioral and neuroscientific results of human-robot interaction studies could validate such architectures or give new input to robotics engineers. The aim of this workshop is to provide a venue for researchers of different disciplines to discuss possible points of contact and to highlight the issues and advantages of bridging different fields for the study of cognition for interaction.

Videos

Telling Stories with Green the DragonBot: A Showcase of Children's Interactions Over Two Months BIBAFull-Text 263
  Jacqueline Kory Westlund
The language skills of young children can predict their academic success in later schooling. We may be able to help more children succeed by helping them improve their early language skills: a prime time for intervention is during preschool. Furthermore, because language lives in a social, interactive, and dialogic context, ideal interventions would not only teach vocabulary, but would also engage children as active participants in meaningful dialogues. Social robots could potentially have great impact in this area. They merge the benefits of using technology -- such as accessibility, customization and easy addition of new content, and student-paced, adaptive software -- with the benefits of embodied, social agents -- such as sharing physical spaces with us, communicating in natural ways, and leveraging social presence and social cues.
   To this end, we developed a robotic learning/teaching companion to support children's early language development. We performed a microgenetic field study in which we took this robot to two Boston-area preschools for two months. We asked two main questions: Could a robot companion support children's long-term oral language development through play? How might children build a relationship with and construe the robot over time?
Robot in Charge: A Relational Study Investigating Human-Robot Dyads with Differences in Interpersonal Dominance BIBAFull-Text 265
  Jamy Li; Wendy Ju; Clifford Nass
We present a controlled experiment exploring how people respond to video stimuli that depict relationships between humans and robots. Using a "relational" study methodology, we investigated how participants observed differences in interpersonal dominance in a human-robot pair. Participants were more trusting of, and more attracted to, both the robot and the person in a human-robot relationship where the robot was less dominant than the person, compared with the reverse arrangement. These differences were not found for a human-pair control condition, in which participants watched the same sequence of videos with two human confederates. Exploratory findings suggest that observers may prefer a person to be in charge and that human-robot relationships may be viewed differently than interpersonal ones.
Gaming Humanoids for Facilitating Social Interaction among People BIBAFull-Text 267
  Junya Hirose; Masakazu Hirokawa; Kenji Suzuki
This study proposes a novel approach to Human-Robot Interaction (HRI) called "Gaming Humanoids," which uses multiple humanoid robots in a video gaming environment. In this scenario, the robots play a typical tennis video game autonomously with humans. By varying the robot's role (e.g., as a teammate or opponent), various interactions are realized.
The CoWriter Project: Teaching a Robot how to Write BIBAFull-Text 269
  Deanna Hood; Séverin Lemaignan; Pierre Dillenbourg
This video (which accompanies the paper "When Children Teach a Robot to Write: An Autonomous Teachable Humanoid Which Uses Simulated Handwriting" by the same authors, also presented at this conference) presents the first results of the EPFL CoWriter project. The project aims at building a robotic partner to which children can teach handwriting. The system allows the learning-by-teaching paradigm to be employed in the interaction, so as to stimulate meta-cognition, empathy, and increased self-esteem in the child user. It is hypothesised that the use of a humanoid robot in such a system could not only engage an unmotivated student, but could also present the opportunity for children to experience the physically-induced benefits encountered during human-led handwriting interventions, such as motor mimicry.
Using Robots to Moderate Team Conflict: The Case of Repairing Violations BIBAFull-Text 271
  Nikolas Martelaro; Malte Jung; Pamela Hinds
The video shows interactions between a robot and a team of people during a short group problem-solving task framed as a bomb-defusal scenario. We explore how a robot can influence conflict dynamics by repairing negative violations within the team. The video shows three samples of interactions between two participants, a confederate delivering personal violations, and a robot attempting to moderate the team dynamics. These samples highlight interactions from a larger 2 (negative trigger: task-directed vs. personal attack) x 2 (repair: yes vs. no) between-subjects experiment (N = 57 teams, 114 participants). Specifically, the video provides a qualitative look at our finding that a team's sense of personal conflict increases when the robot identifies and intervenes after a personal violation.
Low-Body-Part Detection using RGB-D camera BIBAFull-Text 273
  Jigwan Park; Kijin An; JongSuk Choi
The reliable perception of a human in a dynamic environment is the most critical issue for interactive human-robot services. In human-robot interaction, a camera mounted on a robot naturally captures the lower body of a human, because robots are usually shorter than people. Conventionally, a two-dimensional laser range finder is used for low-body-part detection [1, 2]. However, these methods may cause errors when structures similar to legs are present. This video demonstrates a low-body-part detection scheme that exploits not only the three-dimensional characteristics but also the RGB features of the lower body. We build low-body-part candidates by clustering from the legs up to the hip. In the results, spurious candidates are eliminated by the proposed method.
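The abstract does not give implementation details, but the clustering-and-filtering idea can be illustrated with a short Python sketch: 3D points are greedily grouped by Euclidean distance, and clusters whose horizontal extent does not match a human leg are discarded. The thresholds and toy data are assumptions for illustration only.

import math

def euclidean_clusters(points, tol=0.10):
    """Greedy single-linkage clustering of 3D points (meters). A real
    system would use a k-d tree; this is a readable stand-in."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(math.dist(p, q) < tol for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def leg_like(cluster, min_w=0.05, max_w=0.25):
    """Keep clusters whose horizontal extent matches a human leg,
    rejecting spurious leg-like structures (poles are thinner,
    walls are wider)."""
    xs = [p[0] for p in cluster]
    ys = [p[1] for p in cluster]
    width = max(max(xs) - min(xs), max(ys) - min(ys))
    return min_w <= width <= max_w

if __name__ == "__main__":
    # Two synthetic "legs" and one pole-like single return.
    scan = [(0.00, 1.00, 0.30), (0.06, 1.00, 0.32), (0.03, 1.05, 0.34),
            (0.40, 1.00, 0.30), (0.46, 1.00, 0.32), (0.43, 1.05, 0.34),
            (1.50, 1.00, 0.40)]
    legs = [c for c in euclidean_clusters(scan) if leg_like(c)]
    print(f"leg candidates: {len(legs)}")  # -> 2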
Mechanical Ottoman: Engaging and Taking Leave BIBAFull-Text 275
  David Sirkin; Brian Mok; Stephen Yang; Wendy Ju
This video introduces a robotic footstool -- the mechanical ottoman -- which explores how non-humanlike robots can coordinate joint action. It approaches seated people and offers to support their feet, then attempts to take leave during the interaction.
AEROSTABILES: A new approach to HRI researches BIBAFull-Text 277
  David St-Onge; Nicolas Reeves; Philippe Giguère; Inna Sharf; Gregory Dudek; Ioannis Rekleitis; Pierre-Yves Brèches; Patrick Abouzakhm; Philippe Babin
Initiated as a research-creation project by professor and artist Nicolas Reeves, the Aerostabile project quickly expanded to include researchers and artists from a wide range of disciplines. Its current phase brings together four robotics and research-creation labs with varied expertise in unstable and dynamic environments. The first group, under the direction of professor Inna Sharf, is based in the department of mechanical engineering at McGill University. It works on the control and modeling of autonomous blimps for satellite emulation. The second group is headed by professor Philippe Giguère from Université Laval. It focuses on localization systems for robots operating in unknown outdoor environments. The third group is also from McGill, but from the computer science department. Headed by professor Gregory Dudek, it investigates the challenges presented by autonomous underwater robots and by their interactions with human divers. The last team is based at the UQAM school of design. It is headed by professor Nicolas Reeves and engineer David St-Onge. It works on installations and performances in digital and algorithmic arts, and on the impact of new media and technologies on the fields of art, architecture, and design. The Aerostabile project pushes the boundaries of engineering and art by proposing a close hybridization of the two disciplines. It redefines the human-robot interaction paradigm, working specifically on the new interfaces required by the specific nature and context of emerging robotic systems. Multidisciplinary approaches are required to seamlessly integrate aesthetics, grace, and precision. Among the tools and strategies developed by the research team, one of the most important is the organization of regular meetings similar to art residencies, structured around the framework of engineering software and hardware integration workshops. During such meetings, which occur twice a year, the four groups work together with engineers and artists from different disciplines. These intense collaborative events happen in spaces large enough to fit at least two 225-cm floating robotic cubes called "Tryphons," the latest models in a series of flying automata developed by Reeves and St-Onge. Fruitful questions and discussions emerge from these residencies, leading to new questions and development axes in both art and engineering. Whenever possible, they happen in public spaces, allowing direct contact with all kinds of audiences and with inspiring media artists and creator-researchers. The specific constraints of out-of-the-lab environments raise new problems for all the engineers, while the encounter between different academic cultures influences the development priorities. At the end of our journey, on top of the engineering papers that will be published, we aim to produce the first hybrid performance involving four performers interacting with four fully autonomous aerobots.
Robots + Agents for MOOCs: What if Scott Klemmer were a Robot? BIBAFull-Text 279
  Jamy Li; Wendy Ju
Online course lectures often consist of presentation slides with an inset "talking-head" video of the instructor. As the time and financial costs associated with producing these lectures are often high, employing a robot or a digital agent in lieu of an instructor could radically decrease the time and costs required. This video submission describes an initial study in which agent-based alternatives to a "talking-head" video are assessed. University students who viewed a lecture with a robot had similar recall scores but significantly lower ratings for likeability than those who viewed a lecture with a person, perhaps because the robot's voice was a negative social cue. Preliminary results suggest that appropriately designed agents may be useful for online lectures.
A Verifiable and Correct-by-Construction Controller for Robots in Human Environments BIBAFull-Text 281
  Lavindra de Silva; Rongjie Yan; Felix Ingrand; Rachid Alami; Saddek Bensalem
With the increasing use of domestic and service robots alongside humans, it is becoming crucial to be able to verify whether robot software is safe, dependable, and correct. Indeed, in the near future it may well be necessary for robot-software developers to provide safety certifications guaranteeing, e.g., that a hospital nursebot will not move too fast while a person is leaning on it, that the arm of a service robot will not unexpectedly open its gripper while holding a glass, or that there will never be a software deadlock while a robot is navigating in an office. To this end, we have provided a framework and software engineering methodology for developing safe and dependable real-world robotic architectures, with a focus on the functional level -- the lowest level of a typical layered robotic architecture -- which provides all the basic action and perception capabilities, such as image processing, obstacle avoidance, and motion control. Unlike past work, we address the formal verification of the functional level, which makes it possible to guarantee that it will never take steps leading to undesirable or disastrous outcomes.
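The verification machinery itself is beyond the scope of an abstract, but one of the example properties can be written down as executable logic. The following Python sketch is only an illustration of the property "the gripper never opens while an object is held" rendered as a runtime guard; it is not the authors' framework, which verifies such properties offline over a formal model of the functional level.

class GripperSafetyError(RuntimeError):
    pass

class SafeGripper:
    """Illustrative safety monitor enforcing: never open() while holding."""

    def __init__(self):
        self.holding = False

    def grasp(self):
        self.holding = True

    def hand_over(self):
        # An explicit hand-over step clears `holding` first, so a
        # subsequent open() satisfies the property.
        self.holding = False

    def open(self):
        if self.holding:
            raise GripperSafetyError("open() requested while holding an object")
        print("gripper opened safely")

if __name__ == "__main__":
    g = SafeGripper()
    g.grasp()
    try:
        g.open()        # violates the property; the guard blocks it
    except GripperSafetyError as e:
        print(f"blocked: {e}")
    g.hand_over()
    g.open()            # now permitted

Correct-by-construction approaches aim to make the offending call unreachable in the first place, rather than catching it at runtime as this sketch does.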
RRADS: Real Road Autonomous Driving Simulation BIBAFull-Text 283
  Sonia Baltodano; Srinath Sibi; Nikolas Martelaro; Nikhil Gowda; Wendy Ju
This video introduces a methodology for simulating an autonomous vehicle on open public roads. The video showcases participant reaction footage collected in the RRADS (Real Road Autonomous Driving Simulator). Although our study using this simulator did not use overt deception -- the consent form clearly states that a licensed driver is operating the vehicle -- the protocol was designed to support suspension of disbelief. Several participants who did not read the consent form closely came to strongly believe that the vehicle was autonomous; this provides a lens onto the attitudes and concerns that people in real-world autonomous vehicles might have, and also points to ways that a protocol deliberately using misdirection could obtain ecologically valid reactions from study participants.
The Empathic Robotic Tutor: Featuring the NAO Robot BIBAFull-Text 285
  Tiago Ribeiro; Patrícia Alves-Oliveira; Eugenio Di Tullio; Sofia Petisca; Pedro Sequeira; Amol Deshmukh; Srinivasan Janarthanam; Mary Ellen Foster; Aidan Jones; Lee J. Corrigan; Fotios Papadopoulos; Helen Hastie; Ruth Aylett; Ginevra Castellano; Ana Paiva
We present an autonomous empathic robotic tutor to be used in classrooms as a peer in a virtual learning environment. The system merges a virtual agent design with HRI features, consisting of a robotic embodiment, a multimedia interactive learning application and perception sensors that are controlled by an artificial intelligence agent.
MiRAE: My Inner Voice BIBAFull-Text 287
  Logan Doyle; Casey C. Bennett; Selma Šabanovic
This video presents the interactions between MiRAE, an interactive robotic face, and visitors to an art exhibition at which it was displayed. The robot operated eight hours a day, six days a week, for three weeks in Spring 2014 and interacted with over 700 people across 300 interactions. The robot was fully autonomous and researchers were not present on site during the exhibit, so people interacted in a free-form manner, both individually and in groups. During the exhibit, video recordings were taken of people's responses to the robot. This video depicts a series of resulting interactions, with MiRAE's interpretation of the events.
A Robotic Companion for Social Support of Isolated Older Adults BIBAFull-Text 289
  Candace Sidner; Charles Rich; Mohammad Shayganfar; Timothy Bickmore; Lazlo Ring; Zessie Zhang
We demonstrate interaction with a relational agent, embodied as a robot, to provide social support for isolated older adults. Our robot supports multiple activities, including discussing the weather, playing cards and checkers socially, maintaining a calendar, talking about family and friends, discussing nutrition, recording life stories, exercise coaching and making video calls.
Collaboration with Robotic Drawers BIBAFull-Text 291
  Brian Mok; Stephen Yang; David Sirkin; Wendy Ju
In this video, we explore how everyday household robots should behave when performing collaborative tasks with human users. We ran a Wizard of Oz study (N=20) that utilized a set of robotic drawers. The participants were asked to assemble a cube by working together with the drawers, which contained the tools needed to accomplish the task. We conducted a between-subjects test with the drawers, varying two variables (expressivity and proactivity) to yield a 2x2 factorial design.
Video on the Social Robot Toolkit BIBAFull-Text 293
  Michal Gordon; Cynthia Breazeal
The video presents the Social Robot Toolkit, a new tangible interface for teaching pre-school children how to program social robots. The toolkit extends common approaches along three dimensions. (i) We propose a tangible programming approach suitable for young children, which uses reusable vinyl stickers to represent rules for the robot to perform. (ii) We make use of social robots that are designed to interact directly with children. (iii) We focus the programming tasks and activities around social interaction. In other words, children teach an expressive relational robot how to socially interact by showing it a tangible sticker rulebook that they create. To explore various activities and interactions, we teleoperated the robot's sensors. We present a qualitative analysis of children's engagement with and uses of the toolkit, and show that children learn to create new rules, explore complex computational concepts, and internalize the mechanism by which robots can be programmed.
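The sticker rulebook maps naturally onto trigger-action rules. The Python sketch below is a hypothetical, minimal encoding of that idea; the event and behavior names are invented for illustration and are not the toolkit's actual representation.

from dataclasses import dataclass

@dataclass
class Rule:
    trigger: str   # the "when" sticker, e.g., a perceived event
    action: str    # the "do" sticker, e.g., an expressive behavior

class StickerRobot:
    def __init__(self):
        self.rulebook = []

    def teach(self, trigger, action):
        """Showing the robot a sticker pair adds one rule."""
        self.rulebook.append(Rule(trigger, action))

    def perceive(self, event):
        """On each sensed event, perform every matching action."""
        for rule in self.rulebook:
            if rule.trigger == event:
                print(f"[{event}] -> robot performs '{rule.action}'")

if __name__ == "__main__":
    robot = StickerRobot()
    robot.teach("sees_smile", "wiggle_happily")
    robot.teach("hears_clap", "dance")
    robot.perceive("sees_smile")
    robot.perceive("hears_clap")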

Demonstrations -- Session 1

Introducing the Cuddlebot: A Robot that Responds to Touch Gestures BIBAFull-Text 295
  Jeff Allen; Laura Cang; Michael Phan-Ba; Andrew Strang; Karon MacLean
We present the Cuddlebot, a cat-sized robot equipped with a full-body, flexible fabric pressure sensitive touch sensor. Cuddlebot can move its head, arch its back, purr, and change how it "breathes." Our research explores how touch interactions affect stress and anxiety mitigation, and we demonstrate our current gesture recognition system, enabling users to connect sensed gestures and response behaviours.
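Gesture recognition on a full-body fabric pressure sensor can be sketched in a few lines: summarize each pressure frame with simple features and assign it to the nearest gesture prototype. The 64-taxel layout, feature choice, and centroid values below are assumptions for illustration, not Cuddlebot's actual recognizer.

import math

def features(frame):
    """Summarize one pressure frame (a flat list of taxel readings in
    [0, 1]) as (peak pressure, contact-area fraction)."""
    peak = max(frame)
    area = sum(1 for v in frame if v > 0.1) / len(frame)
    return (peak, area)

# Per-gesture feature prototypes, as might be learned from labeled
# touch recordings (values are made up for this sketch).
CENTROIDS = {
    "stroke": (0.30, 0.80),   # light pressure, broad moving contact
    "poke":   (0.85, 0.05),   # high pressure, tiny contact patch
    "pat":    (0.60, 0.30),
}

def classify(frame):
    f = features(frame)
    return min(CENTROIDS, key=lambda g: math.dist(f, CENTROIDS[g]))

if __name__ == "__main__":
    poke_frame = [0.0] * 62 + [0.90, 0.85]                  # 64 taxels
    stroke_frame = [0.30 if i % 2 else 0.25 for i in range(64)]
    print(classify(poke_frame))    # -> poke
    print(classify(stroke_frame))  # -> stroke

A robot like Cuddlebot can then map the recognized gesture class to a response behaviour such as purring or arching its back.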
Mechanical Ottoman: Up Close and Personal BIBAFull-Text 297
  David Sirkin; Brian Mok; Stephen Yang; Wendy Ju
This demonstration presents a robotic footstool -- the mechanical ottoman -- which approaches seated people and offers to support their feet, or alternatively can serve as a seat or side table, then bids to take leave once engaged in the interaction.
Responsive Mouth: Enhancing Your Emotional Skill with Partial Agency BIBAFull-Text 299
  Hirotaka Osawa
The author developed a wearable mouth robot that supports a user's emotional labor. The robot detects people's age, gender, and emotions, and displays mouth gestures on the wearable display to support the expression of emotions.

Demonstrations -- Session 2

Intelligent Product Design BIBAFull-Text 301
  Han Nwi Lee; Yeseul Namkoung; Jinhee Kim; Seul Lee; Daun Jeong; Hyunji Seo; Soyeon Park; Kyeongah Lee; Sunbin Yang; Jimin Choi; Yeeun Kim; Jung Ju Choi; Sonya S. Kwak
Robots' appearances can be classified into two types: human-oriented robots and product-oriented robots. A human-oriented robot resembles a human in appearance and behavior, whereas a product-oriented robot is an intelligent product: an existing product laden with robotic technologies [1]. In Kwak et al.'s study [1], customers categorized a human-oriented robot as a robot and a product-oriented robot as a member of one of the existing product categories, and a product-oriented robot was more effective than a human-oriented robot for consumers' evaluation of, and purchase intention toward, robots. On this basis, we developed several intelligent products, including intelligent slippers, intelligent Christmas tree blocks, an intelligent piggy bank, an intelligent clothespin, an intelligent grass protection mat, and an intelligent frame (see Figure 1).
Adventures of an Adolescent Trash Barrel BIBAFull-Text 303
  Stephen Yang; Brian Mok; David Sirkin; Wendy Ju
Our demonstration presents the roving trash barrel, a robot that we developed to understand how people perceive and respond to a mobile trashcan that offers its service in public settings. In a field study, we found that considerable coordination is involved in actively collecting trash, including capturing someone's attention, signaling an intention to interact, acknowledging the willingness -- or implicit signs of unwillingness -- to interact, and closing the interaction. In post-interaction interviews, we discovered that people believed that the robot was intrinsically motivated to collect trash, and attributed social mishaps to higher levels of autonomy.
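Those coordination stages read naturally as a small state machine. The Python sketch below is one illustrative rendering of them, not the trash barrel's actual controller, and the event names are invented.

from enum import Enum, auto

class Stage(Enum):
    APPROACH = auto()        # capture the person's attention
    SIGNAL_INTENT = auto()   # e.g., rock in place, pause nearby
    AWAIT_RESPONSE = auto()  # read willingness or implicit refusal
    COLLECT = auto()
    CLOSE = auto()           # acknowledge and withdraw

TRANSITIONS = {
    (Stage.APPROACH, "noticed"): Stage.SIGNAL_INTENT,
    (Stage.SIGNAL_INTENT, "signaled"): Stage.AWAIT_RESPONSE,
    (Stage.AWAIT_RESPONSE, "willing"): Stage.COLLECT,
    (Stage.AWAIT_RESPONSE, "unwilling"): Stage.CLOSE,
    (Stage.COLLECT, "trash_received"): Stage.CLOSE,
}

def run(events):
    state = Stage.APPROACH
    for e in events:
        state = TRANSITIONS.get((state, e), state)
        print(f"event={e:<15} -> {state.name}")

if __name__ == "__main__":
    run(["noticed", "signaled", "willing", "trash_received"])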
Papetto: Crafting Embodied Co-Presence in Video Chat BIBAFull-Text 305
  Hidekazu Saegusa; Kerem Özcan; Daniela K. Rosner
In this paper, we describe Papetto, a lightweight robotic arm that moves according to face-detection techniques in order to mirror facial movements such as head shaking, leaning, and tilting. Using this system, we examine the role of the "frame" in video chat, and how embodied co-presence is defined and bounded through lightweight robotic mirroring to enable new forms of engagement in remote communication.
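Face-detection-driven mirroring of this kind can be prototyped with off-the-shelf tools. The sketch below uses OpenCV's stock Haar cascade to map the detected face's offset from the frame center to pan/tilt angles for a two-axis arm; the gains and the arm interface are assumptions, since Papetto's implementation details are not given in the abstract.

import cv2  # assumes the opencv-python package is installed

# Frontal-face Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_to_arm_angles(frame):
    """Map the largest detected face's offset from the frame center to
    pan/tilt angles in degrees (gains are made up for illustration)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    fh, fw = gray.shape
    pan = 30.0 * ((x + w / 2) / fw - 0.5)    # +/-15 deg across the frame
    tilt = 20.0 * ((y + h / 2) / fh - 0.5)
    return pan, tilt

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # the video-chat camera
    ok, frame = cap.read()
    if ok:
        print(f"arm command: {face_to_arm_angles(frame)}")
        # a real system would stream these angles to the arm's servos
    cap.release()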

Demonstrations -- Session 3

Therabot™: A Robot Therapy Support System in Action BIBAFull-Text 307
  Christopher Collins; Dexter Duckworth; Zachary Henkel; Stephanie Wuisan; Cindy L. Bethel
Therabot™ is an assistive-robotic therapy system designed to provide support during counseling sessions and home therapy practice to patients diagnosed with conditions associated with trauma. It has the form factor of a floppy-eared dog with coloring similar to that of a beagle, and comfortably fits in a person's lap.
Performing Collaborative Tasks with Robotic Drawers BIBAFull-Text 309
  Brian Mok; Stephen Yang; David Sirkin; Wendy Ju
In this demonstration, we explore how everyday household robots -- in particular, expressive robotic drawers -- should behave when performing collaborative tasks with human users. We ran a Wizard of Oz study where participants assembled a cube while collaborating with the drawers, which contained the tools needed to complete the task. The demonstration will reproduce the setting for the study, augmented with other activities and drawer contents, such as following a recipe using cooking utensils.
Empathic Robotic Tutors: Map Guide BIBAFull-Text 311
  Amol Deshmukh; Aidan Jones; Srinivasan Janarthanam; Mary Ellen Foster; Tiago Ribeiro; Lee Joseph Corrigan; Ruth Aylett; Ana Paiva; Fotios Papadopoulos; Ginevra Castellano
In this demonstration we describe a scenario developed in the EMOTE project. The overall goal of the project is to develop an empathic robot tutor for 11-13 year old school students in an educational setting. In this scenario, the robot tutor teaches map-reading skills on a touch-screen device.