
Proceedings of the 7th International Conference on Human-Robot Interaction

Fullname: HRI'12 Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction
Editors: Holly Yanco; Aaron Steinfeld; Vanessa Evers; Odest Chadwicke Jenkins
Location: Boston, Massachusetts
Dates: 2012-Mar-05 to 2012-Mar-08
Standard No: ISBN 1-4503-1063-X, 978-1-4503-1063-5
Links: Conference Home Page
Summary: Welcome to Boston! The Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI 2012) is a highly selective conference that aims to showcase the very best interdisciplinary and multidisciplinary research in human-robot interaction, with roots in robotics, social psychology, cognitive science, HCI, human factors, artificial intelligence, engineering, and many other fields. We invite broad participation and encourage discussion and sharing of ideas across a diverse audience.
    Robotics is growing increasingly multidisciplinary as it moves towards realizing capable and collaborative robots that meet both the needs of society and the technical challenges inherent in real-world settings. A joining of the disciplines is essential for enabling robots to help people be more productive and enjoy a high quality of life. In particular, human-robot interaction requires advancing the state of the art in the empirical, algorithmic, mathematical, social, and engineering aspects of robotics in an integrated manner. Therefore, this year's theme is Robots in the Loop, which highlights the importance of autonomously capable robots in enhancing the experiences of human users in everyday life and work activities. HRI 2012 emphasizes embodied robotic systems that operate, collaborate with, learn from, and meet the needs of human users in real-world environments.
    Full Papers submitted to the conference were thoroughly reviewed and discussed. The review process included an author rebuttal phase and a worldwide team of dedicated, interdisciplinary reviewers. Subtle changes were made this year to better match reviewers to papers. This year's conference continues the tradition of selectivity, with 34 of 137 (25%) submissions accepted. Due to the joint sponsorship of ACM and IEEE, papers are archived in both the ACM Digital Library and IEEE Xplore.
    Accompanying the full papers are the brief and lightly reviewed Late Breaking Reports and Videos. For the former, 95 of 111 (86%) two-page papers were accepted and will be presented as posters at the conference. For the latter, 16 of 30 (53%) short videos were accepted and will be presented during the video session.
    Rounding out the program are multiple keynote speakers who will discuss topics relevant to HRI, a panel session on telepresence, and several invited short unpublished talks designed to expose the audience to interesting work and motivate interdisciplinary discussion. The keynote speakers this year are Rodney Brooks and Karl Grammer.
  1. Robot manipulation and programming
  2. Attitudes and responses to social robots
  3. Robot wizards: robot operation and interfaces
  4. LBR highlights
  5. Conversation and proxemics
  6. Panel
  7. Living and working with service robots
  8. Robots for children
  9. Animating robot behavior
  10. HRI 2012 video session
  11. Perception and recognition
  12. Talking with robots: linguistics and natural language
  13. Workshops & tutorials

Robot manipulation and programming

Strategies for human-in-the-loop robotic grasping (pp. 1-8)
  Adam Eric Leeper; Kaijen Hsiao; Matei Ciocarlie; Leila Takayama; David Gossow
Human-in-the-loop robotic systems have the potential to handle complex tasks in unstructured environments by combining the cognitive skills of a human operator with autonomous tools and behaviors. Along these lines, we present a system for remote human-in-the-loop grasp execution. An operator uses a computer interface to visualize a physical robot and its surroundings, and a point-and-click mouse interface to command the robot. We implemented and analyzed four different strategies for performing grasping tasks, ranging from direct, real-time operator control of the end-effector pose to autonomous motion and grasp planning that is simply adjusted or confirmed by the operator. The results of our controlled experiment (N=48) indicate that people successfully grasped more objects and caused fewer unwanted collisions when using the strategies with more autonomous assistance. We used an untethered robot over wireless communications, making our strategies applicable to remote, human-in-the-loop robotic applications.
Grip forces and load forces in handovers: implications for designing human-robot handover controllers (pp. 9-16)
  Wesley P. Chan; Chris A. C. Parker; H. F. Machiel Van der Loos; Elizabeth A. Croft
In this study, we investigate and characterize haptic interaction in human-to-human handovers and identify key features that facilitate safe and efficient object transfer. Eighteen participants worked in pairs and transferred weighted objects to each other while we measured their grip forces and load forces. Our data show that during object transfer, both the giver and the receiver employ a similar strategy for controlling their grip forces in response to changes in load forces. In addition, an implicit social contract appears to exist in which the giver is responsible for ensuring the object's safety during the handover and the receiver is responsible for maintaining the handover's efficiency. Compared with prior studies, our analysis of the experimental data shows important differences between the strategies humans use for picking up or placing objects on a table and those used for handing over objects, indicating the need for specific robot handover strategies as well. The results of this study will be used to develop a controller that enables robots to perform object handovers with humans safely, efficiently, and intuitively.
Designing robot learners that ask good questions (pp. 17-24)
  Maya Cakmak; Andrea L. Thomaz
Programming new skills on a robot should take minimal time and effort. One approach to achieving this goal is to allow the robot to ask questions. This idea, called Active Learning, has recently attracted a lot of attention in the robotics community. However, it has not been explored from a human-robot interaction perspective. In this paper, we identify three types of questions (label, demonstration, and feature queries) and discuss how a robot can use them while learning new skills. Then, we present an experiment on human question asking that characterizes the extent to which humans use these question types. Finally, we evaluate the three question types within a human-robot teaching interaction. We investigate the ease with which different types of questions are answered and whether there is a general preference for one type of question over another. Based on our findings from both experiments, we provide guidelines for designing question-asking behaviors for a robot learner.
Robot behavior toolkit: generating effective social behaviors for robots (pp. 25-32)
  Chien-Ming Huang; Bilge Mutlu
Social interaction involves a large number of patterned behaviors that people employ to achieve particular communicative goals. To achieve fluent and effective human-like communication, robots must seamlessly integrate the necessary social behaviors for a given interaction context. However, very little is known about how robots might be equipped with a collection of such behaviors and how they might employ these behaviors in social interaction. In this paper, we propose a framework that guides the generation of social behavior for human-like robots by systematically using specifications of social behavior from the social sciences and contextualizing these specifications in an Activity-Theory-based interaction model. We present the Robot Behavior Toolkit, an open-source implementation of this framework as a Robot Operating System (ROS) module and a community-based repository for behavioral specifications, and an evaluation of the effectiveness of the Toolkit in using these specifications to generate social behavior in a human-robot interaction study, focusing particularly on gaze behavior. The results show that specifications from this knowledge base enabled the Toolkit to achieve positive social, cognitive, and task outcomes, such as improved information recall, collaborative work, and perceptions of the robot.

Attitudes and responses to social robots

Do people hold a humanoid robot morally accountable for the harm it causes? (pp. 33-40)
  Peter H. Kahn, Jr.; Takayuki Kanda; Hiroshi Ishiguro; Brian T. Gill; Jolina H. Ruckert; Solace Shen; Heather E. Gary; Aimee L. Reichert; Nathan G. Freier; Rachel L. Severson
Robots will increasingly take on roles in our social lives where they can cause humans harm. When robots do so, will people hold robots morally accountable? To investigate this question, 40 undergraduate students individually engaged in a 15-minute interaction with ATR's humanoid robot, Robovie. The interaction culminated in a situation where Robovie incorrectly assessed the participant's performance in a game, and prevented the participant from winning a $20 prize. Each participant was then interviewed in a 50-minute session. Results showed that all of the participants engaged socially with Robovie, and many of them conceptualized Robovie as having mental/emotional and social attributes. Sixty-five percent of the participants attributed some level of moral accountability to Robovie. Statistically, participants held Robovie less accountable than they would a human, but more accountable than they would a vending machine. Results are discussed in terms of the New Ontological Category Hypothesis and robotic warfare.
Social facilitation with social robots? (pp. 41-48)
  Nina Riether; Frank Hegel; Britta Wrede; Gernot Horstmann
Regarding the future use of social robots in workplace scenarios, we addressed the question of whether the mere presence of a robot affects human performance. Applying the experimental social facilitation paradigm to social robotics, we compared the task performance of 106 participants on easy and complex cognitive and motor tasks across three presence groups (alone vs. human present vs. robot present). Results revealed significant evidence for the predicted social facilitation effects for both human and robotic presence compared to the alone condition. Implications of these findings are discussed with regard to the interaction of robotic presence and task difficulty in modeling robotic assistance systems.
New measurement of psychological safety for humanoid (pp. 49-56)
  Hiroko Kamide; Yasushi Mae; Koji Kawabe; Satoshi Shigemi; Masato Hirose; Tatsuo Arai
In this article, we aim to discover the important factors for determining the psychological safety of humanoids and to develop a new psychological scale to measure the degree of safety quantitatively. To discover the factors that determine the psychological safety of humanoids from an ordinary person's perspective, we studied 919 Japanese, who observed movies of 11 humanoids and then freely described their impressions about what the safety of each humanoid was for them. Five psychologists categorized all of the obtained descriptions into several categories and then used the categories to compose a new psychological scale. Then, 2,624 different Japanese evaluated the same 11 humanoids using the new scale. Factor analysis on the obtained quantitative data revealed six factors of psychological safety: Performance, Humanness, Acceptance, Harmlessness, Toughness, and Agency. Additional analysis revealed that Performance, Acceptance, Harmlessness, and Toughness were the most important factors for determining the psychological safety of general humanoids. The usability of the new scale is discussed.
Consistency in physical and on-screen action improves perceptions of telepresence robots (pp. 57-64)
  David Sirkin; Wendy Ju
Does augmented movement capability improve people's experiences with telepresent meeting participants? We performed two web-based studies featuring videos of a telepresence robot. In the first study (N=164), participants observed clips of typical conversational gestures performed a) on a stationary screen only, b) with an actuated screen moving in physical space, or c) both on-screen and in-space. In the second study (N=103), participants viewed scenario videos depicting two people interacting with a remote collaborator through a telepresence robot, whose distant actions were a) visible on the screen only, or b) accompanied by local physical motion. These studies suggest that synchronized on-screen and in-space gestures significantly improved viewers' interpretation of the action compared to on-screen or in-space gestures alone, and that in-space gestures positively influenced perceptions of both local and remote participants.

Robot wizards: robot operation and interfaces

Real world haptic exploration for telepresence of the visually impaired (pp. 65-72)
  Chung Hyuk Park; Ayanna M. Howard
Robotic assistance through telepresence technology is an emerging area in aiding the visually impaired. By integrating the robotic perception of a remote environment and transferring it to a human user through haptic environmental feedback, a visually impaired user can increase his or her capability to interact with remote environments through the telepresence robot. This paper presents a framework that integrates visual perception from heterogeneous vision sensors and enables real-time interactive haptic representation of the real world through a mobile manipulation robotic system. Specifically, a set of multi-disciplinary algorithms such as stereo-vision processes, three-dimensional map building algorithms, and virtual-proxy haptic rendering processes are integrated into a unified framework to accomplish the goal of real-world haptic exploration. Results of our framework in an indoor environment are presented, and its performance is analyzed. Quantitative results are provided along with qualitative results from a set of human subject tests. Our future work includes real-time haptic fusion of multi-modal environmental perception and more extensive human subject testing in a prolonged experimental design.
Effects of changing reliability on trust of robot systems (pp. 73-80)
  Munjal Desai; Mikhail Medvedev; Marynel Vázquez; Sean McSheehy; Sofia Gadea-Omelchenko; Christian Bruggeman; Aaron Steinfeld; Holly Yanco
Prior work in human-autonomy interaction has focused on plant systems that operate in highly structured environments. In contrast, many human-robot interaction (HRI) tasks are dynamic and unstructured, occurring in the open world. It is our belief that methods developed for the measurement and modeling of trust in traditional automation need alteration in order to be useful for HRI. Therefore, it is important to characterize the factors in HRI that influence trust. This study focused on the influence of changing autonomy reliability. Participants experienced a set of challenging robot handling scenarios that forced autonomy use and kept them focused on autonomy performance. The counterbalanced experiment included scenarios with different low reliability windows so that we could examine how drops in reliability altered trust and use of autonomy. Drops in reliability were shown to affect trust, the frequency and timing of autonomy mode switching, as well as participants' self-assessments of performance. A regression analysis on a number of robot, personal, and scenario factors revealed that participants tie trust more strongly to their own actions rather than robot performance.
Teamwork in controlling multiple robots (pp. 81-88)
  Fei Gao; Missy L. Cummings; Luca F. Bertuccelli
Simultaneously controlling increasing numbers of robots requires multiple operators working together as a team. Helping operators allocate attention among different robots and determining how to structure the human-robot team to promote performance and reduce workload are critical questions that must be answered in these settings. To this end, we investigated the effect of team structure and search guidance on operators' performance, subjective workload, work processes, and communication. To investigate team structure in an urban search and rescue setting, we compared a pooled condition, in which team members shared control of 24 robots, with a sector condition, in which each team member controlled half of the robots. For search guidance, a notification was given when an operator spent too much time on one robot, either suggesting or forcing the operator to switch to another robot. A total of 48 participants completed the experiment, with two people forming each team. The results demonstrate that automated search guidance neither increased nor decreased overall performance. However, suggested search guidance decreased average task completion time in sector teams. Search guidance also influenced operators' teleoperation behaviors. Regarding team structure, pooled teams experienced lower subjective workload than sector teams. Pooled teams communicated more than sector teams, but sector teams teleoperated more than pooled teams.
Towards human control of robot swarms (pp. 89-96)
  Andreas Kolling; Steven Nunnally; Michael Lewis
In this paper we investigate principles of swarm control that enable a human operator to exert influence on and control large swarms of robots. We present two principles, which we call selection control and beacon control, that differ with respect to their temporal and spatial persistence. The former requires active selection of groups of robots, while the latter exerts a passive influence on nearby robots. Both principles are implemented in a testbed in which operators exert influence on a robot swarm by switching among a set of behaviors, ranging from trivial behaviors up to distributed autonomous algorithms. Performance is tested in a series of complex foraging tasks in environments with different obstacles, ranging from open to cluttered and structured. The robotic swarm has only local communication and sensing capabilities, with the number of robots ranging from 50 to 200. Experiments with human operators utilizing either selection or beacon control are compared with each other and with a simple autonomous swarm with regard to performance, adaptation to complex environments, and scalability to larger swarms. Our results show superior performance of autonomous swarms in open environments and of selection control in complex environments, and they indicate a potential for scaling beacon control to larger swarms.
Designing interfaces for multi-user, multi-robot systems (pp. 97-104)
  Adam Rule; Jodi Forlizzi
The use of autonomous robots in organizations is expected to increase steadily over the next few decades. Although some empirical work exists that examines how people collaborate with robots, little is known about how to best design interfaces to support operators in understanding aspects of the task or tasks at hand. This paper presents a design investigation to understand how interfaces should be designed to support multi-user, multi-robot teams. Through contextual inquiry, concept generation, and concept evaluation, we determine what operators should see, and with what salience different types of information should be presented. We present our findings through a series of design questions that development teams can use to help define interaction and design interfaces for these systems.

LBR highlights

A touchscreen-based 'sandtray' to facilitate, mediate and contextualise human-robot social interaction (pp. 105-106)
  Paul Baxter; Rachel Wood; Tony Belpaeme
In the development of companion robots capable of any-depth, long-term interaction, social scenarios enable exploration of the robot's capacity to engage a human interactant. These scenarios are typically constrained to structured task-based interactions, to enable the quantification of results for the comparison of differing experimental conditions. This paper introduces a hardware setup to facilitate and mediate human-robot social interaction, simplifying the robot control task while enabling an equalised degree of environmental manipulation for the human and robot, but without implicitly imposing an a priori interaction structure.
Children's knowledge and expectations about robots: a survey for future user-centered design of social robots (pp. 107-108)
  Eduardo Benítez Sandoval; Christian Penaloza
This paper seeks to establish a precedent for the future development and design of social robots by considering the knowledge and expectations about robots of a group of 296 children. Human-robot interaction experiments were conducted with a tele-operated anthropomorphic robot, and surveys were taken before and after the experiments. Children were also asked to draw a robot. An image analysis algorithm was developed to classify the drawings into four types: Anthropomorphic Mechanic/Non-Mechanic (AM/AnM) and Non-Anthropomorphic Mechanic/Non-Mechanic (nAM/nAnM). The image analysis algorithm was used in combination with human classification in a 2oo3 (two-out-of-three) voting scheme to find children's strongest stereotype about robots. Survey and image analysis results suggest that children generally have some knowledge about robots, and some even have a deep understanding of and expectations for future robots. Moreover, children's strongest stereotype is of mechanical anthropomorphic systems.
Human-robot interaction: developing trust in robots (pp. 109-110)
  Deborah R. Billings; Kristin E. Schaefer; Jessie Y. C. Chen; Peter A. Hancock
In all human-robot interaction, trust is an important element to consider because the presence or absence of trust certainly impacts the ultimate outcome of that interaction. Limited research exists that delineates the development and maintenance of this trust in various operational contexts. Our own prior research has investigated theoretical and empirically supported antecedents of human-robot trust. Here, we describe progress to date relating to the development of a comprehensive human-robot trust model based on our ongoing program of research.
Dynamic gesture vocabulary design for intuitive human-robot dialog (pp. 111-112)
  Sasa Bodiroza; Helman I. Stern; Yael Edan
This paper presents a generalized method for the design of a gesture vocabulary (GV) for intuitive and natural two-way human-robot dialog. Two GV design methodologies are proposed: one for a robot GV (RGV) and a second for a human GV (HGV). The design is based on motion gestures elicited from a cohort of subjects in response to a set of tasks needed to execute several robot waiter (RW)-customer dialogs. Using an RW setting as a case study, preliminary experimental results indicate the unique nature of the HGV obtained.
Design of a haptic joystick for shared robot control (pp. 113-114)
  Daniel J. Brooks; Holly A. Yanco
User experience of industrial robots over time (pp. 115-116)
  Roland Buchner; Daniela Wurhofer; Astrid Weiss; Manfred Tscheligi
This paper reports on a User Experience (UX) study of industrial robotic arms in the context of a semiconductor factory cleanroom. The goal was to find out (1) whether there is a difference in UX between a robot used for years with a strict security perimeter (robot A) and a newly installed robot without a security perimeter (robot B), and (2) whether the UX ratings of the new robot change over time. To this end, a UX questionnaire was developed and handed out to the operators working with these robots. The first survey was conducted one week after the deployment of robot B (n=23), and the second survey (n=21) six months later. We found that time is crucial for the experience of human-robot interaction: our results showed an improvement between the first and second UX measurements for robot B. Although robot A was rated significantly better than robot B in terms of usability, general UX, cooperation, and stress, we expect that the differences in UX will decrease gradually with prolonged interaction.
Visual cues-based anticipation for percussionist-robot interaction (pp. 117-118)
  Marcelo Cicconet; Mason Bretan; Gil Weinberg
Anticipation based on visual cues is a fundamental aspect of human-human interaction, and it plays an especially important role in the time-demanding medium of group performance. In this work we explore the importance of visual gesture anticipation in music performance involving a human and a robot. We study the case in which a human percussionist plays a four-piece percussion set and a robot musician plays either the marimba or a three-piece percussion set. Computer vision is used to embed anticipation in the robotic response to the human's gestures. We developed two anticipation algorithms, predicting the strike location about 10 milliseconds or about 100 milliseconds before the strike occurs. Using the second algorithm, we show that the robot outperforms, on average, a group of human subjects in synchronizing its gesture with a reference strike. We also show that, for the tested group of users, some advance notice is important for a human to synchronize a strike with a reference player, but beyond a certain lead time this benefit stops increasing.
Socially constrained management of power resources for social mobile robots (pp. 119-120)
  Amol Deshmukh; Ruth Aylett
Autonomous robots acting as companions or assistants in real social environments should be able to sustain themselves and operate over an extended period of time. Generally, autonomous mobile robots draw power from batteries to operate various sensors and actuators and to perform tasks. Batteries have a limited power life and take a long time to recharge from a power source, which may impede human-robot interaction and task performance. It is therefore important for social robots to manage their energy. This paper discusses an approach to managing power resources on a mobile robot with regard to social aspects, toward creating life-like autonomous social robots.
Sensorless collision detection and control by physical interaction for wheeled mobile robots (pp. 121-122)
  Guillaume Doisy
In this paper, we present the adaptation of a sensorless (in De Luca's sense [1], i.e., without the use of extra sensors) collision detection approach, previously used on robotic arms, to wheeled mobile robots. The method is based on detecting torque disturbances and does not require a model of the robot's dynamics. We then consider the feasibility of developing control-by-physical-interaction strategies using the adapted technique.
Assistive teleoperation for manipulation tasks (pp. 123-124)
  Anca D. Dragan; Siddhartha S. Srinivasa
'If you sound like me, you must be more human': on the interplay of robot and user features on human-robot acceptance and anthropomorphism (pp. 125-126)
  Friederike Eyssel; Dieta Kuchenbrandt; Simon Bobinger; Laura de Ruiter; Frank Hegel
In an experiment we manipulated a robot's voice in two ways: First, we varied robot gender; second, we equipped the robot with a human-like or a robot-like synthesized voice. Moreover, we took into account user gender and tested effects of these factors on human-robot acceptance, psychological closeness and psychological anthropomorphism. When participants formed an impression of a same-gender robot, the robot was perceived more positively. Participants also felt more psychological closeness to the same-gender robot. Similarly, the same-gender robot was anthropomorphized more strongly, but only when it utilized a human-like voice. Results indicate that a projection mechanism could underlie these effects.
Beyond "spatial ability": examining the impact of multiple individual differences in a perception by proxy framework (pp. 127-128)
  Thomas Fincannon; Florian Jentsch; Brittany Sellers; Joseph R. Keebler
Prior research has proposed the use of a Perception by Proxy framework that relies on human perception to support actions of autonomy. Given the importance of human perception, this framework highlights the need to understand how human cognitive abilities factor into the human-robot dynamic. The following paper uses a military reconnaissance task to examine how cognitive abilities interact with the gradual implementation of autonomy in a Perception by Proxy framework (i.e., autonomy to detect; autonomy to support rerouting) to predict three dimensions of sequential performance (i.e., speeded detection; target identification; rerouting). Results showed that, in addition to effects of autonomy and task setting, different individual abilities predicted unique aspects of performance. This highlights the need to broaden consideration of cognitive abilities in HRI.
Attentional human-robot interaction in simple manipulation tasks (pp. 129-130)
  Ernesto Burattini; Alberto Finzi; Silvia Rossi; Mariacarla Staffa
We present a robotic control system endowed with attentional mechanisms suitable for balancing the trade-off between safe human-robot interaction and effective task execution. These mechanisms allow the robot to increase or decrease its degree of attention toward relevant activities, modulating the frequency of the monitoring rate and the speed associated with the robot's movements. In this framework, we consider pick-and-place and give-and-receive attentional behaviors.
'Midas touch' in human-robot interaction: evidence from event-related potentials during the ultimatum game (pp. 131-132)
  Haruaki Fukuda; Masahiro Shiomi; Kayako Nakagawa; Kazuhiro Ueda
Interpersonal touch is said to have significant effects on social interaction. We used the ultimatum game to examine whether touch from a robot could inhibit negative feelings toward the robot. We set two experimental conditions: a "touch condition," in which unfair proposals were offered to a participant while a robot touched his/her arm, and a "no touch condition," in which unfair proposals were offered while the same robot did not. We compared Medial Frontal Negativity (MFN), measured by EEG, whose amplitude is correlated with feelings of unfairness, between the two conditions. Results show that MFN amplitude was larger in the no touch condition than in the touch condition. This indicates that touch from a robot may inhibit a sense of unfairness toward the robot. Our finding suggests that touch from a robot could enhance positive feelings toward the robot through human-robot interaction.
Facial gesture recognition using active appearance models based on neural evolution (pp. 133-134)
  Jorge García Bueno; Miguel González-Fierro; Luis Moreno; Carlos Balaguer
Facial gesture recognition is one of the main topics in HRI. We have developed a novel algorithm that detects emotional states such as happiness, sadness, or emotionlessness. A humanoid robot is able to detect these states with a success rate of 83% and to interact accordingly. We use Active Appearance Models (AAMs) to determine facial features and classify the emotions using neural evolution, based on neural networks and a differential evolution algorithm.
Design, integration, and test of a shopping assistance robot system (pp. 135-136)
  Marlon Garcia-Arroyo; Luis Felipe Marin-Urias; Antonio Marin-Hernandez; Guillermo de Jesus Hoyos-Rivera
This paper describes current work toward the design of a shopping assistant robot system. The system will allow users to keep control of what they are buying; the robot will help the customer by handling the shopping list, carrying all the products, and serving as a companion. We also present acceptability studies for this kind of robot.
Handheld operator control unit BIBAFull-Text 137-138
  Neal Checka; Shawn Schaffert; David Demirdjian; Jan Falkowski; Daniel H. Grollman
Currently, unmanned vehicles support soldiers in a variety of military applications. Typically, a specially-trained user teleoperates these platforms using a large and bulky Operator Control Unit (OCU). The operator's total attention is required for controlling the tedious, low-level aspects of the platform, dramatically reducing his personal situational awareness. Furthermore, these OCUs are both platform- and mission-specific. Ideally, a soldier could instead carry light-weight, portable, multi-purpose devices to act as OCUs for multiple platform/mission scenarios. These devices would support a standard set of OCU functionality (e.g., driving a ground robot) and additional higher-level task operations (e.g., autonomously patrolling an area). This extended abstract presents the development of apps for a handheld platform that enable both low- and high-level control of an unmanned vehicle.
Unveiling robotophobia and cyber-dystopianism: the role of gender, technology and religion on attitudes towards robots BIBAFull-Text 139-140
  Daniel Halpern; James E. Katz
A survey of 873 undergraduate students was conducted to understand which individual factors affect subjects' attitudes toward robots. A third of participants (N=284) were exposed to a humanoid robot, another third (N=293) to a dog-like robot, and the remaining third (N=296) to an android. Results showed that in the humanoid condition individuals recognized more human-like characteristics in robots than in the other two conditions. However, humanoid appearance did not affect participants' attitudes toward robots, unlike other predictors identified by previous research, such as gender, religion, and perceived competence with information and communication technologies (ICT).
Assessing workload in human-robot peer-based teams BIBAFull-Text 141-142
  Caroline E. Harriott; Glenna L. Buford; Tao Zhang; Julie A. Adams
The effect of a robotic teammate on a human partner's workload has not been fully quantified. Prior research found that human participants experienced lower workload when working with a robotic partner than when working with a human partner. An evaluation investigated whether a similar trend in workload exists for tasks requiring direct and collaborative interaction between the partners, and joint team decision-making. The subjective results indicate a similar trend to the prior results; participants rated workload lower for the more complex task when partnered with a robot than when partnered with a human.
Does a robot that can learn verbs lead to better user perception? BIBAFull-Text 143-144
  Dai Hasegawa; Kenji Araki
The current understanding is that the human-likeness of a robot leads to better human perception. However, the contributing factors have not been thoroughly studied. We conducted a laboratory experiment to examine two questions: how verb acquisition ability affects human perceptions of a humanoid robot's human-likeness and familiarity, intention to use the robot, and enjoyment and satisfaction with the interaction; and whether human-likeness mediates the links between verb acquisition ability and these other perceptions. The experiment involved 48 participants, and we found that a robot that acquired two Japanese verbs, "oku (to put/to place)" and "hanasu (to move away from)," was perceived by participants as more familiar and satisfying than one that knew the verbs from the beginning. We also found that human-likeness mediated the links between verb acquisition ability and the other perceptions of the robot.
Towards a computational method of scaling a robot's behavior via proxemics BIBAFull-Text 145-146
  Zachary Henkel; Robin R. Murphy; Cindy L. Bethel
Humans regulate their social behavior based on proximity to other social actors. Likewise, when a robot fulfills the role of a social actor, it too should regulate its interaction based on proximity. This paper describes work in progress to establish methods for autonomous modification of social behavior based on proximity and to quantify human preferences among methods of scaling a robot's social behaviors based on distance from a human. The preliminary results of a 72-participant human study examine reactions to scaling with linear methods and perception-based methods. Results indicate significantly higher ratings in multiple areas (comfort, natural movement, safety, self-control, intelligence, likability, submissiveness; p<.05) when using a perception-based scaling function, as opposed to a linear or no scaling function. Work in progress is analyzing the biometric measures collected.
Using the behavior markup language for human-robot interaction BIBAFull-Text 147-148
  Aaron Holroyd; Charles Rich
This paper describes a Behavior Markup Language (BML) realizer that we developed for use in our research on human-robot interaction. Existing BML realizers used with virtual agents are based on fixed-timing algorithms and are therefore not suitable for robotic applications. Our realizer uses an event-driven architecture, based on Petri nets, to guarantee the specified synchronization constraints in the presence of unpredictable variability in robot control systems. Our implementation is robot-independent, open source, and uses the Robot Operating System (ROS).
Attracting and controlling human attention through robot's behaviors suited to the situation BIBAFull-Text 149-150
  Mohammed Moshiul Hoque; Tomomi Onuki; Dipankar Das; Yoshinori Kobayashi; Yoshinori Kuno
A major challenge is to design a robot that can attract and control human attention in various social situations. If a robot would like to communicate with a person, it may turn its gaze to him/her to make eye contact. However, making eye contact is not an easy task for the robot, because such a turning action alone may not be enough in all situations, especially when the robot and the human are not facing each other. In this paper, we present an attention control approach in which the robot attracts a person's attention through three actions: head turning, head shaking, and uttering reference terms, corresponding to three viewing situations in which the human vision senses the robot (near peripheral field of view, far peripheral field of view, and out of field of view). After gaining attention, the robot makes eye contact by showing gaze awareness through blinking its eyes, and directs the human's attention by eye and head turning behaviors to share an object.
An intentional framework improves memory for a robot's actions BIBAFull-Text 151-152
  Alicia M. Hymel; Daniel T. Levin
Although a number of recent studies have explored people's concepts about robots, almost no research has tested the degree to which these concepts affect people's capacity to understand and remember a robot's actions. In this study, we tested whether a narrative describing a robot performing basic intentional acts would be easier to remember than a narrative that described similar non-intentional actions. Participants read one of two stories about a robot in which it was either described as having intentional or non-intentional mental representations. Participants who read about the intentional robot were more likely to recall information about the robotic agent, but there was no difference between the two groups in accuracy for questions unrelated to the agent. Additionally, participants who read about the intentional robot were marginally more likely to falsely recall a non-present object that was similar to the objects that the robot did interact with. We conclude that beliefs about a robot affect encoding and recall of its actions, possibly due to a focus on the type of information the agent is believed to "mentally" represent.
Gestonurse: a multimodal robotic scrub nurse BIBAFull-Text 153-154
  Mithun George Jacob; Yu-Ting Li; Juan P. Wachs
A novel multimodal robotic scrub nurse (RSN) system for the operating room (OR) is presented. The RSN assists the main surgeon by passing surgical instruments. Experiments were conducted to test the system with speech and gesture modalities and average instrument acquisition times were compared. Experimental results showed that 97% of the gestures were recognized correctly under changes in scale and rotation and that the multimodal system responded faster than the unimodal systems. A relationship similar in form to Fitts's law for instrument picking accuracy is also presented.
Manipulation with soft-fingertips for safe pHRI BIBAFull-Text 155-156
  Jorge Armendariz; Rodolfo García-Rodríguez; Felipe Machorro-Fernández; Vicente Parra-Vega
Manipulation with soft fingertips is proposed as a tool for interacting with rigid objects, based on an online bilateral teleoperation fuzzy controller without time delay. The fuzzy inference engine continuously tunes the force feedback gain to increase awareness of the physical interaction so that grasping is achieved. In contrast to the rigid-fingertip case, in which an infinitely small contact point is assumed, we argue that our scheme allows safe and intuitive interaction, which proved effective in an experimental study with 19 subjects. Results indicate success due to feedback of the slave contact force to the human user. Subjects consistently judged manipulation with an experimental two-hand prototype to be comfortable and easy.
Studying virtual worlds as medium for telepresence robots BIBAFull-Text 157-158
  Alex Juarez; Christoph Bartneck; Loe Feijs
This paper presents a study on the effects of using virtual worlds as a medium for interaction with telepresence robots. The Prototype for Assisted Communication (PAC4) is also introduced. This system connects virtual worlds with real robots, allowing virtual world users to access the robot's capabilities in a telepresence scenario. A user experiment was conducted to evaluate the effects of PAC4 on the feeling of social presence experienced by virtual world users.
A follow-up on humanoid-mediated stroke physical rehabilitation BIBAFull-Text 159-160
  Hee-Tae Jung; Yu-Kyong Choe; Jennifer Baird; Roderic A. Grupen
We report the results of standardized tests on a single stroke subject at 4, 20, and 28 weeks after completion of the study. These results follow from previous work [1]. The subject demonstrated sustained improvement in motor function 28 weeks after completing the study. In addition to the quantitative results, questionnaire responses from the subject and the spouse indicate that the subjective user experience was also positive. This further advocates the use of general-purpose robots to complement human therapists.
Personality and facial expressions in human-robot interaction BIBAFull-Text 161-162
  Soyoung Jung; Hyoung-taek Lim; Sanghun Kwak; Frank Biocca
This paper examines propensities of preference in Human-Robot Interaction (HRI) according to different personalities and facial expressions of humans and robots. The study is based on two personality types: extroverted and introverted. According to the personality, facial expressions are distinguished to express extroversion and introversion. The design is a 3 (participant's personality group: extrovert, intermediate, introvert) x 2 (robot's personality group) between-subjects experiment (N=40) in which participants interact with KMC-EXPR robots expressing either extroversion or introversion. The results showed an unprecedented hypothesis, unexpected implications, and certain propensities of preference.
Understanding situational awareness in multi-unit supervisory control through data-mining and modeling with real-time strategy games BIBAFull-Text 163-164
  Donald J. Kalar; Collin B. Green
As robots become increasingly capable and autonomous, the role of a human operator may be to supervise multiple robots and intervene to handle problems and provide strategic guidance. In such cases, the extent to which HRI tools support the human supervisor's situational awareness (SA) and ability to intervene in an appropriate and timely fashion will constrain the scale of operations (e.g., the number of robots; the complexity of tasks) that can reasonably be supervised by a single person. One approach to understanding how humans might acquire, maintain, and use situational awareness in multi-robot supervision tasks is to look at video games that require similar activities. We describe our initial efforts at analyzing and modeling data from Real-Time Strategy (RTS) games with the goal of answering basic questions about the nature of situational awareness and supervisory control of multiple semi-autonomous agents.
Cultural studies in the HRI loop BIBAFull-Text 165-166
  Andra Keay
In this paper, I outline the scope and methodology of cultural studies and the possible applications in HRI. It is not just robots that interact with humans, but humans who interact with humans through the medium of robots. Culture is the context that informs all our actions. Cultural theory's eclectic methodology is useful for sifting through a large, multidisciplinary and cross-cultural field like HRI, for identifying disputed or neglected areas and for framing future research questions.
Robot competitions as a birth ritual BIBAFull-Text 167-168
  Andra Keay
Competitions play an important role in introducing new robots to the field. The robot competition is a rite of passage analogous to birth when compared to studies of the technologized relations of birth in American hospitals. The sanctioned recording of robot names as part of the competition marks the roles that robots play in society. This paper explores underlying social factors that may influence robotics.
Applying team heuristics to future human-robot systems BIBAFull-Text 169-170
  Joseph Roland Keebler; Florian Jentsch; Thomas Fincannon; Irwin Hudson
In this paper we briefly describe teaming heuristics as they are applied to human-human teams, and demonstrate their adaptability to human-robot (HR) teams. We discuss a framework developed from Salas's models on teamwork and team training. As HRI technology moves from tele-operative control methods to teamwork with intelligent robots, it is pertinent to properly integrate knowledge about teams into the development of robotic systems. This should lead to highly effective team systems, and may provide insight into the design of robotic entities and system protocols.
Deep networks for predicting human intent with respect to objects BIBAFull-Text 171-172
  Richard Kelley; Liesl Wigand; Brian Hamilton; Katie Browne; Monica Nicolescu; Mircea Nicolescu
Effective human-robot interaction requires systems that can accurately infer and predict human intentions. In this paper, we introduce a system that uses stacked denoising autoencoders to perform intent recognition. We introduce the intent recognition problem, provide an overview of deep architectures in machine learning, and outline the components of our system. We also provide preliminary results for our system's performance.
User attentive behavior with camera view for in-situ robot control BIBAFull-Text 173-174
  Jong-gil Ahn; Hyunseok Yang; Gerard Jounghyun Kim; Namgyu Kim
In this poster, we present an experiment comparing three forms of interaction to study user behavior with regard to the effects of a camera view for in-situ robot control. We compared three hand-held interfaces: (1) no camera view (Nominal), (2) a camera view whose aim is always fixed toward the robot (Fixed), and (3) a camera view with user-controlled aim (Free). The three approaches represent different balances between information availability, interface accessibility, and the amount of induced attentional shifts. Experimental results showed that all three interaction models exhibited similar task performance, even though the Fixed type induced far fewer attentional shifts. On the other hand, users much preferred the Nominal and Free types. Users mostly ignored the camera view, despite having to shift their attention excessively, due to its lack of visual quality, realistic scale, and depth information.
Tracking aggregate vs. individual gaze behaviors during a robot-led tour simplifies overall engagement estimates BIBAFull-Text 175-176
  Heather Knight; Reid Simmons
As an early behavioral study of which non-verbal features a robot tour guide could use to analyze a crowd, personalize an interaction, and/or maintain high levels of engagement, we analyze participant gaze statistics in response to a robot tour guide's deictic gestures. Thirty-seven participants were split into nine groups of three to five people each. In groups with the lowest engagement levels, aggregate gaze responses to the robot's deictic gestures involved the fewest total glance shifts, the least time spent looking at the indicated object, and no intra-participant gaze. Our diverse participants had overlapping engagement ratings within their groups, and we found that a robot that tracks group rather than individual analytics could capture less noisy and often stronger trends relating gaze features to self-reported engagement scores. Thus we have found indications that aggregate group analysis captures more salient and accurate assessments of overall human-robot interactions, even with lower-resolution features.
Real time interaction with mobile robots using hand gestures BIBAFull-Text 177-178
  Kishore Reddy Konda; Achim Königs; Hannes Schulz; Dirk Schulz
We developed a robust real-time hand-gesture-based interaction system to effectively communicate with a mobile robot that can operate in an outdoor environment. The system enables the user to operate the mobile robot using hand gesture commands. In particular, the system offers direct on-site interaction, providing the user with better perception of the environment. To overcome the illumination challenges outdoors, the system operates on depth images. Processed depth images are given as input to a convolutional neural network trained to detect static hand gestures.
   The system is evaluated in real-world experiments on a mobile robot to show its operational efficiency in outdoor environments.
Integrating human and computer vision with EEG toward the control of a prosthetic arm BIBAFull-Text 179-180
  Eugene Lavely; Geoffrey Meltzner; Rick Thompson
We are undertaking the development of a brain-computer interface (BCI) [1] for control of an upper-limb prosthetic. Our approach exploits electrical neural activity data for motor intent estimation, and eye gaze direction for target selection. These data streams are augmented by computer vision (CV) for 3D scene reconstruction, and are integrated with a hierarchical controller to achieve semi-autonomous control. User interfaces for the effective control of the many degrees of freedom (DOF) of advanced prosthetic arms are not yet available [2]. Ideally, the combined arm and interface technology provides the user with reliable and dexterous capability for reaching, grasping, and fine-scale manipulation. Technologies that improve arm embodiment, i.e., the impression by the amputee subject that the arm is a natural part of their body concept, present an important and difficult challenge to the human-robot interaction research community. Such embodiment is clearly predicated on cross-disciplinary advances, including accurate intent estimation and an algorithmic basis for natural arm control.
Human-robot interaction in the MORSE simulator BIBAFull-Text 181-182
  Séverin Lemaignan; Gilberto Echeverria; Michael Karg; Jim Mainprice; Alexandra Kirsch; Rachid Alami
Over the last two years, the Modular OpenRobots Simulation Engine (MORSE) project went from a simple extension plugged into Blender's Game Engine to a full-fledged simulation environment for academic robotics. Driven by the requirements of several of its developers, tools dedicated to human-robot interaction simulation have taken a prominent place in the project. This late-breaking report discusses some of the recent additions in this domain, including the immersive experience provided by the integration of the Kinect device as an input controller. We also give an overview of the experiments we plan to conduct in the coming months.
Vision-based attention estimation and selection for social robot to perform natural interaction in the open world BIBAFull-Text 183-184
  Liyuan Li; Xinguo Yu; Jun Li; Gang Wang; Ji-Yu Shi; Yeow Kee Tan; Haizhou Li
In this paper, a novel vision system is proposed to estimate the attention of people from rich visual cues, enabling a social robot to perform natural interactions with multiple participants in public environments. The vision detection and recognition modules include multi-person detection and tracking, upper-body pose recognition, face and gaze detection, lip motion analysis for speaking recognition, and facial expression recognition. A computational approach is proposed to generate a quantitative estimation of human attention. The vision system is implemented on a robotic receptionist, "EVE," and encouraging results have been obtained.
A prototyping environment for interaction between a human and a robotic multi-agent system BIBAFull-Text 185-186
  Michael Lichtenstern; Martin Frassl; Bernhard Perun; Michael Angermann
In this paper we describe our prototyping environment to study concepts for empowering a single user to control robotic multi-agent systems. We investigate and validate these concepts by experiments with a fleet of hovering robots. Specifically, we report on a first experiment in which one robot is equipped with an RGB-D sensor through which the user is enabled to directly interact with a multi-agent system without the need to carry any device.
Explaining robot actions BIBAFull-Text 187-188
  Meghann Lomas; Robert Chevalier; Ernest Vincent Cross, II; Robert Christopher Garrett; John Hoare; Michael Kopack
To increase human trust in robots, we have developed a system that provides insight into robotic behaviors by enabling a robot to answer questions people pose about its actions (e.g., Q: Why did you turn left there? A: "I detected a person at the end of the hallway."). Our focus is on generation of this explanation in human-understandable terms despite the mathematical, robot-specific representation and planning system used by the robot to make its decisions and execute its actions. We present our work to date on this topic, including system design and experiments, and discuss areas for future work.
Applying politeness maxims in social robotics polite dialogue BIBAFull-Text 189-190
  Qin En Looi; Swee Lan See
An important element of human-robot interaction, as with inter-human interaction, is conversation. Having previously suggested the Gricean maxims as suitable guidelines for social robotics dialogue, we discovered a preferable alternative set of guidelines: the politeness maxims. In this paper, we introduce the politeness maxims and propose their enhanced applicability in human-robot interaction for creating polite dialogue. Although no experimental results are available to support our proposition, the preliminary analysis of the politeness maxims presents a promising future for them as guidelines for the synthesis of polite robot dialogue. Through effective dialogue creation, interaction between humans and robots will be more pleasant, marking a step forward in effective human-robot interaction.
Transfer from a simulation environment to a live robotic environment: are certain demographics better? BIBAFull-Text 191-192
  Patricia L. McDermott; Alia Fisher; Thomas Carolan; Mark R. Gronowski; Marc Gacy; Michael Overstreet
The ability to remotely operate an unmanned vehicle while simultaneously looking for suspicious targets and then classifying those targets is not a trivial skill. This study looked at different training approaches to make better use of simulation as a first training step. When transferring to a live environment, the operators could be grouped into two categories according to whether or not they passed the live training criteria. There were clear performance differences between these groups. The group that failed to pass the criteria had poorer performance overall, made more SA errors, and spent more time in training. Post-hoc analysis showed demographic differences between those who passed and those who did not: male participants and younger participants were more likely to achieve the criteria. There were no differences in gaming experience or perceived sense of direction.
A probabilistic framework for autonomous proxemic control in situated and mobile human-robot interaction BIBAFull-Text 193-194
  Ross Mead; Maja J. Mataric
In this paper, we draw upon insights gained in our previous work on human-human proxemic behavior analysis to develop a novel method for human-robot proxemic behavior production. A probabilistic framework for spatial interaction has been developed that considers the sensory experience of each agent (human or robot) in a co-present social encounter. In this preliminary work, a robot attempts to maintain a set of human body features in its camera field-of-view. This methodology addresses the functional aspects of proxemic behavior in human-robot interaction, and provides an elegant connection between previous approaches.
Control of human-machine interaction for wide area search munitions in the presence of target uncertainty BIBAFull-Text 195-196
  Pia E. K. Berg-Yuen; Siddhartha S. Mehta; Eduardo L. Pasiliao; Robert A. Murphey
In this report we describe progress in developing a control architecture for human-in-the-loop wide area search munitions to reduce operator errors in the presence of unreliable automation and operator cognitive limitations. An optimal input tracking controller with adaptive automation uncertainty compensation and real-time workload assessment is developed to improve system performance. Extensive simulations based on experimental data involving 12 subjects demonstrate the effectiveness of the presented controller.
Referent identification process in human-robot multimodal communication BIBAFull-Text 197-198
  Yuta Shibasaki; Takahiro Inaba; Yukiko I. Nakano
This paper presents a communication robot that can generate a referent identification conversation with human users. First, we conduct an experiment to collect face-to-face referent identification communication and investigate how the referent is identified by exchanging multiple speech turns between the participants. On the basis of the experimental observations, we implement a communication robot that can manage a referent identification conversation with a user by integrating the linguistic information obtained from speech recognition and the vision information obtained from a robot camera.
Listener agent for elderly people with dementia BIBAFull-Text 199-200
  Yoichi Sakai; Yuuko Nonaka; Kiyoshi Yasuda; Yukiko I. Nakano
With the goal of developing a conversational humanoid that can serve as a companion for people with dementia, we propose an autonomous virtual agent that can generate backchannel feedback, such as head nods and verbal acknowledgement, on the basis of acoustic information in the user's speech. The system is also capable of speech recognition and language understanding functionalities, which are potentially useful for evaluating the cognitive status of elderly people on a daily basis.
Can you hold my hand?: physical warmth in human-robot interaction BIBAFull-Text 201-202
  Jiaqi Nie; Michelle Pak; Angie Lorena Marin; S. Shyam Sundar
This study investigates whether the temperature of a robot's hand can affect perceptions of the robot as a companion. Our research empirically analyzes the responses of 39 individuals randomly assigned to one of three conditions: (1) holding a warm robot hand, (2) holding a cold robot hand, or (3) not holding a robot hand. The effects of this simulated 'human touch' on HRI were examined in the context of viewing a horror film clip. Results suggest that experiences of physical warmth and handholding increase feelings of friendship and trust toward the robot. However, the discrepancy between the expectation of an actual human touch and the mechanical appearance of a robot could result in negative effects.
Captain may i?: proxemics study examining factors that influence distance between humanoid robots, children, and adults, during human-robot interaction BIBAFull-Text 203-204
  Sandra Y. Okita; Victor Ng-Thow-Hing; Ravi Kiran Sarvadevabhatla
This proxemics study examines whether the physical distance between robots and humans differ based on the following factors: 1) age: children vs. adults, 2) who initiates the approach: humans approaching the robot vs. robot approaching humans, 3) prompting: verbal invitation vs. non-verbal gesture (e.g., beckoning), and 4) informing: announcement vs. permission vs. nothing. Results showed that both verbal and non-verbal prompting had significant influence on physical distance. Physiological data is also used to detect the appropriate timing of approach for a more natural and comfortable interaction.
Online gaming with robots vs. computers as allies vs. opponents BIBAFull-Text 205-206
  Eunil Park; Ki Joon Kim; S. Shyam Sundar; Angel P. del Pobil
A 2 x 2 between-subjects experiment was conducted to examine the effects of the type of artificial agent (robot vs. computer) and the role of the agent (ally vs. enemy) on people's perceptions and evaluations of the agent when playing a video game. Participants perceived that playing the game with a robot was more enjoyable and easier than playing with a computer. Regardless of the agent type, participants reported that playing the game was more enjoyable when the agent played as an ally rather than as their opponent. Implications of notable findings are discussed.
The effects of immersive tendency and need to belong on human-robot interaction BIBAFull-Text 207-208
  Ki Joon Kim; Eunil Park; S. Shyam Sundar; Angel P. del Pobil
Do individual differences in dispositional behavioral tendencies, such as immersive tendency and need to belong, play a significant role in human-robot interaction? To answer this question, the present study conducted a 2 x 2 between-subjects experiment to examine the effects of immersive tendency (high vs. low) and need to belong (high vs. low) on individuals' perceptions of a social robot. Preliminary data analyses revealed that participants with a higher level of immersive tendency and need to belong showed greater attachment and trust towards the robot, and were more satisfied with their relationship with the robot than participants with a lower level of immersive tendency and need to belong. In addition, participants with a higher level of immersive tendency experienced greater feelings of social presence. Implications of notable findings are discussed.
Mechanical model of human lower arm BIBAFull-Text 209-210
  Borut Povse; Darko Koritnik; Tadej Bajd; Marko Munih
The paper describes the development of a passive mechanical lower arm (PMLA) intended for physical human-robot interaction studies. Our research is focused on cooperation of a small industrial robot and human operator where collision is expected only between the robot end-effector and the lower arm of the human worker. A mathematical model of the passive human lower arm was built and adopted for the control of the PMLA. The mathematical model was optimized using the data from the experiments performed with human volunteers and implemented into the control scheme. The experiments with human volunteers were performed with safely low contact forces. The emulation system of the human lower arm was thoroughly evaluated in the robot impact experiments while using plane and line robot end-effector tools. During the experiment the impact force and the impact energy density were measured and compared to the measurements of the investigation with human volunteers. The PMLA proved to be a good emulation system of the passive human lower arm.
Shared gaze in remote spoken HRI during distributed military operation BIBAFull-Text 211-212
  Zahar Prasov
Collaboration between distributed human and robot partners during military operations is becoming increasingly necessary. In order to enable efficient real-time communication, it is important to develop user interfaces that support robust spoken language understanding capabilities. As a step toward achieving this objective, this work examines the role of shared gaze between a human and a robot during remote spoken collaboration in distributed military operations. Preliminary results have shown that an interface that supports shared gaze between a human and a robot in a remote collaborative HRI search task has the potential to improve automated language understanding as well as task efficiency.
A social robot as an aloud reader: putting together recognition and synthesis of voice and gestures for HRI experimentation BIBAFull-Text 213-214
  Arnaud Ramey; Javier F. Gorostiza; Miguel A. Salichs
Advances in voice recognition have made voice-only control of robot applications possible. However, user input through gestures and robot output gestures both create a more vivid interaction experience. In this article, we present an aloud-reading application offering all of these interaction methods for the HRI research robot Maggie. It provides a testbed for user studies investigating the effect of these additional interaction methods.
Modified social force model with face pose for human collision avoidance BIBAFull-Text 215-216
  Photchara Ratsamee; Yasushi Mae; Kenichi Ohara; Tomohito Takubo; Tatsuo Arai
In order for robots to be a part of human society, their social acceptance is an important issue if smooth interaction with humans is to be achieved. We propose a modified social force model that allows robots to move naturally like humans, based on estimated human motion and face pose. We add to the previous model the effect of the force due to face pose, in order to predict human motion and compute the robot motion itself. Our approach was implemented and tested on a real humanoid robot in a situation in which a human is confronted with a robot in an indoor environment. Experimental results illustrate that the robot is able to perform human-like navigation by avoiding the human in a face-to-face confrontation. Our system provides accurate face pose tracking that allows a robot to have a more realistic behaviour compared to the original social force model.
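The abstract does not specify the modified model's equations; as a rough illustrative sketch of the general idea (not the authors' implementation), a social-force repulsion term could be augmented with a hypothetical face-pose weight that grows as the person looks more directly at the robot. All parameter names and values below are assumptions for illustration:

```python
import math

def social_force(robot_pos, human_pos, human_face_angle,
                 a=2.0, b=1.0, k_face=1.5):
    """Repulsive force on the robot from one human (illustrative sketch).

    a, b   -- magnitude and range of the classic social-force term
    k_face -- hypothetical extra weight applied when the human faces the robot
    """
    dx = robot_pos[0] - human_pos[0]
    dy = robot_pos[1] - human_pos[1]
    dist = math.hypot(dx, dy)
    # Unit vector pointing from the human toward the robot.
    ex, ey = dx / dist, dy / dist
    # Classic exponential repulsion (Helbing-style social force).
    mag = a * math.exp(-dist / b)
    # Face-pose term: strongest when the human looks straight at the robot.
    bearing = math.atan2(dy, dx)
    facing = max(0.0, math.cos(human_face_angle - bearing))
    mag *= 1.0 + k_face * facing
    return (mag * ex, mag * ey)
```

Summing such terms over nearby people, plus an attractive goal force, yields the robot's motion command; the face-pose weight makes the robot yield earlier to people walking toward it.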
The Roomba mood ring: an ambient-display robot BIBAFull-Text 217-218
  Daniel J. Rea; James E. Young; Pourang Irani
We present a robot augmented with an ambient display that communicates using a multi-color halo. We use this robot in a public café-style setting where people vote on which colors the robot will display: we ask people to select a color which "best represents their mood". People can vote from a mobile device (e.g., smart phone or laptop) through a web interface. Thus, the robot's display is an abstract aggregate of the current mood of the room. Our research investigates how a robot with an ambient display may integrate into a space. For example, how will the robot alter how people use or perceive the environment, or how people will interact with the robot itself? In this paper we describe our initial prototype, an iRobot Roomba augmented with lights, and highlight the research questions driving our exploration, including initial study design.
How to use non-linguistic utterances to convey emotion in child-robot interaction BIBAFull-Text 219-220
  Robin Read; Tony Belpaeme
Vocal affective displays are vital for achieving engaging and effective Human-Robot Interaction. The same can be said of linguistic interaction; however, while emphasis is often placed on linguistic interaction, it carries inherent risks: users are bound to a single language, and breakdowns are frequent due to current technical limitations. This work explores the potential of non-linguistic utterances. A recent study is briefly outlined in which school children were asked to rate a variety of non-linguistic utterances on an affective level using a facial gesture tool. Results suggest, for example, that utterance rhythm may be an influential independent factor, whilst the pitch contour of an utterance may have little importance. Evidence for categorical perception of emotions is also presented, an issue that may impact important areas of HRI beyond vocal displays of affect.
Ask, inform, or act: communication with a robotic patient before haptic action BIBAFull-Text 221-222
  Timothy J. Martin; Allison P. Rzepczynski; Laurel D. Riek
Currently in medical education, clinical students learn how to interact with real patients via simulated patients: inexpressive, teleoperated robot mannequins. We analyzed five simulations that used such a robot to explore verbal communication between clinical students and the robot patient, specifically whether the students sought approval before performing haptic actions. We found that in our sample, student clinicians frequently acted without seeking approval or providing information to the robot patient. We hope to extend our studies in order to identify whether current training of clinical students in communication is ineffective, or whether the robot patients are too nonhuman-like and inexpressive to engender appropriate communication.
Creating human-robot rapport with mobile sculpture BIBAFull-Text 223-224
  Tina Yue; Alexandra E. Janiw; Aaron Huus; Salvador Aguiñaga; Megan Archer; Krista Hoefle; Laurel D. Riek
There is much discussion in the robotics community concerning the nature of people's impressions of robots. This pilot study employed mobile robots coupled with artistic elements to create an environment conducive to human participation. PhotoBot took photos of participants (n = 16) in a gallery space and provided them with a physical copy of their image, while ProjectorBot displayed 3D Kinect imagery for participants to view. Participants completed a self-report measure of rapport (Bernieri's Rapport Criterion), the results of which suggest that they experienced a high degree of positive interaction with the robots.
Unsupervised clustering of people from 'skeleton' data BIBAFull-Text 225-226
  Adrian Ball; David Rye; Fabio Ramos; Mari Velonaki
This paper investigates the possibility of recognising individual persons from their walking gait using three-dimensional 'skeleton' data from an inexpensive consumer-level sensor, the Microsoft 'Kinect'. In an experimental pilot study it is shown that the K-means algorithm -- as a candidate unsupervised clustering algorithm -- is able to cluster gait samples from four persons with a net accuracy of 43.6%.
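The abstract does not give the feature set or implementation details; a minimal K-means sketch over made-up gait feature vectors (e.g. stride length and arm-swing amplitude derived from skeleton joints, both hypothetical choices) might look like this:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means on lists of equal-length feature vectors."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(dim) / len(cl) for dim in zip(*cl))
    return centers, clusters

# Hypothetical gait features: (stride length [m], arm-swing amplitude [rad])
samples = [(0.6, 0.3), (0.62, 0.28), (0.9, 0.5), (0.88, 0.52)]
centers, clusters = kmeans(samples, k=2)
```

With one cluster per person, accuracy can then be scored by how often samples from the same walker land in the same cluster.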
Immersive human-robot interaction BIBAFull-Text 227-228
  Anara Sandygulova; Abraham G. Campbell; Mauro Dragone; G. M. P O'Hare
Networked robotic applications enable robots to operate in distant, hazardous, or otherwise inaccessible environments, in applications such as search and rescue, surveillance, and exploration.
   The most difficult challenge which persists for such systems is that of supporting effective human-robot interaction, as this usually demands managing dynamic views, changeable interaction modalities, and adaptive levels of robotic autonomy.
   In contrast to sophisticated screen-based graphical user interfaces (GUIs), the solution proposed herein is to enable more natural human-robot interaction modalities through a networked immersive user interface. This paper describes the creation of one such shared space in which to test this approach, with both simulated and real robots.
Don't stand so close to me: users' attitudinal and behavioral responses to personal space invasion by robots BIBAFull-Text 229-230
  Aziez Sardar; Michiel Joosse; Astrid Weiss; Vanessa Evers
When in a human environment, one might expect a social robot to act according to the social norms people expect of each other. When someone does not adhere to a prevalent social norm, people usually feel threatened and disturbed. Thus, insight is needed into what is perceived as socially normative behavior for robots. We conducted an experiment in which an agent approached a participant in order to determine the effect of personal space invasion. We manipulated the agent type (human/robot) and the approach speed (slow/fast) of the agent towards the participant. Unexpectedly, our results show that participants displayed more compensatory behavior in the robot condition than in the human condition. We take this response to personal space invasion as an indication that people react to robots in a similar way as they do to humans, though with greater intensity.
Coupled inverse-forward models for action execution leading to tool-use in a humanoid robot BIBAFull-Text 231-232
  Guido Schillaci; Verena Vanessa Hafner; Bruno Lara
We propose a computational model based on inverse-forward model pairs for the simulation and execution of actions. The models are implemented on a humanoid robot and are used to control reaching actions with the arms. In the experimental setup, a tool has been attached to the left arm of the robot, extending its covered action space. The preliminary investigations carried out aim at studying how the use of tools modifies the body scheme of the robot. The system performs action simulations before the actual executions. For each of the arms, predicted end-effector positions are compared with the desired one, and the internal pair presenting the lowest error is selected for action execution. This allows the robot to decide whether to perform an action with its hand alone or with the arm carrying the attached tool.
Developing guidelines for in-the-field control of a team of robots BIBAFull-Text 233-234
  Megha Sharma; James E. Young; Rasit Eskicioglu
In this work we explore the development of guidelines for creating "in-the-field" interfaces that enable a single user to remotely control multiple robots. The problem of controlling a remote team of robots is complex, requiring a user to monitor and interpret robotic state and sensor information in real time, and to simultaneously communicate direction commands to the robots. The result is that a robot controller is often seated at a console; for many relevant applications such as search and rescue or firefighting, this removes the user from the field of action, rendering them unable to directly participate in the task at hand.
   Therefore, one challenge in HRI is to develop efficient interfaces that will enable a user to effectively control and monitor a team of robots in the field. In our project we explore various interface designs in terms of supporting this goal, taking the approach of involving a panel of professionals in the design process to direct exploration and development.
Is the social robot probo an added value for social story intervention for children with autism spectrum disorders? BIBAFull-Text 235-236
  Ramona Simut; Cristina Pop; Jelle Saldien; Alina Rusu; Sebastian Pintea; Johan Vanderfaeillie; Daniel David; Bram Vanderborght
In this paper, we describe the first results of using the robot Probo as a facilitator in Social Story Intervention for children with autism spectrum disorders (ASD). Four preschoolers diagnosed with ASD participated in this research. For each of them, a specific social skill deficit was identified, such as sharing toys, saying "Thank you", or saying "Hello", and an individualized Social Story was developed. The stories were told by both the therapist and the robot in different intervention phases. Afterwards, an experimental task was created in which the child needed to exercise the ability targeted by the story. The results of this study showed that participants needed less prompting to perform the targeted behavior when the story was told by the robot than in the intervention with the human storyteller. This preliminary study therefore raises great expectations about the potential of Robot Assisted Therapy as an added value for ASD interventions.
Animal-inspired human-robot interaction: a robotic tail for communicating state BIBAFull-Text 237-238
  Ashish Singh; James E. Young
We present a robotic tail interface for enabling a robot to communicate its state to people. Our interface design follows an animal-inspired methodology where we map the robot's state to its tail output, leveraging people's existing knowledge of and experiences with animals for human-robot interaction. In this paper we detail our robotic-tail design approach and our prototype implementations, and outline our future steps.
Spatial language experiments for a robot fetch task BIBAFull-Text 239-240
  Marjorie Skubic; Laura Carlson; Jared Miller; Xiao Ou Li; Zhiyu Huo
This paper outlines a new study that investigates spatial language for use in human-robot communication. The scenario studied is a home setting in which the elderly resident has misplaced an object, such as eyeglasses, and the robot will help the resident find the object. We present results from phase I of the study in which we investigate spatial language generated to a human addressee or a robot addressee in a virtual environment.
Potential measures for detecting trust changes BIBAFull-Text 241-242
  Poornima Kaniarasu; Aaron Steinfeld; Munjal Desai; Holly Yanco
It is challenging to quantitatively measure a user's trust in a robot system using traditional survey methods due to their invasiveness and tendency to disrupt the flow of operation. Therefore, we analyzed data from an existing experiment to identify measures which (1) have face validity for measuring trust and (2) align with the collected post-run trust measures. Two measures are promising as real-time indications of a drop in trust. The first is the time between the most recent warning and when the participant reduces the robot's autonomy level. The second is the number of warnings prior to the reduction of the autonomy level.
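As an illustration of the first measure, given a hypothetical timestamped event log (the event names below are made up, not from the paper), one could compute the interval between the most recent warning and each autonomy reduction:

```python
def warning_to_reduction_intervals(events):
    """For each autonomy reduction, return the time elapsed since the
    most recent preceding warning. `events` is a list of
    (timestamp_seconds, kind) pairs; kinds are illustrative names."""
    last_warning = None
    intervals = []
    for t, kind in sorted(events):
        if kind == "warning":
            last_warning = t
        elif kind == "reduce_autonomy" and last_warning is not None:
            intervals.append(t - last_warning)
    return intervals

log = [(2.0, "warning"), (5.5, "warning"), (7.0, "reduce_autonomy"),
       (12.0, "warning"), (30.0, "reduce_autonomy")]
intervals = warning_to_reduction_intervals(log)
```

Short intervals (1.5 s for the first reduction above) would, under the paper's hypothesis, flag moments where trust dropped sharply; counting the warnings preceding each reduction gives the second measure.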
Affect misattribution procedure: an implicit technique to measure user experience in HRI BIBAFull-Text 243-244
  Ewald Strasser; Astrid Weiss; Manfred Tscheligi
This paper suggests a new methodology for measuring User Experience in HRI: using implicit attitude to predict User Experience (affect). We present a first validation study, which uses short videos of a robot (IURO -- Interactive Urban Robot) approaching a person and asking for help. IURO approached either a walking or a standing person. We measured people's implicit attitude towards the robot with the Affect Misattribution Procedure (AMP). The results show that a walking person being approached by the robot evokes a more negative implicit attitude in the observing participant, whereas corresponding questionnaire items showed no difference in attitude for the approach behaviour. We conclude from these results that measuring implicit attitude in HRI is valuable for evaluating the User Experience of a robot.
Policy transformation for learning from demonstration BIBAFull-Text 245-246
  Halit Bener Suay; Sonia Chernova
Many different robot learning from demonstration methods have been applied and tested in various environments in recent years. The representation of learned plans, tasks, and policies often depends on the technique, due to method-specific parameters. An agent that is able to switch between representations can apply its knowledge across algorithms; this flexibility can be useful for a human teacher training the agent. In this work we present a process for converting policies learned with two specific methods, Confidence-Based Autonomy (CBA) and Interactive Reinforcement Learning (Int-RL), into each other. Our findings suggest that an agent can learn a policy with either CBA or Int-RL and execute the task with the other, with the benefit of the previously learned knowledge.
Exploration of intention expression for robots BIBAFull-Text 247-248
  Ivan Shindev; Yu Sun; Michael Coovert; Jenny Pavlova; Tiffany Lee
This paper presents a novel exploration of how to enable a robot to express its intention so that humans and the robot can form a synergistic relationship. A systematic design approach is proposed to obtain a set of possible intentions for a given robot from three levels of intention. A visual intention expression system is developed to visualize the intentions, and is implemented on a mobile robot and a manipulator to demonstrate the intention expression concept.
Dimensions of people's attitudes toward robots BIBAFull-Text 249-250
  Daisuke Suzuki; Hiroyuki Umemuro
The purpose of this study was to investigate dimensions that construct people's attitudes toward robots. In the first phase, investigations using free description and group interview were conducted to extract potential elements for attitudes toward robots. In the second phase, a questionnaire battery was developed based on the elements extracted, and a survey investigation was conducted with the questionnaire. A factor analysis was conducted on the responses to the questionnaire, and nine factors were extracted as dimensions of people's attitudes toward robots.
Extending chatterbot system into multimodal interaction framework with embodied contextual understanding BIBAFull-Text 251-252
  Jeffrey Too Chuan Tan; Tetsunari Inamura
This work aims to realize multimodal interaction with embodied contextual understanding based on a simple chatterbot system. A system framework is proposed that integrates the dialogue system into a 3D simulation platform, SIGVerse, to attain multimodal interaction. The chatterbot's AIML implementations for achieving conversations with embodied contextual understanding in HRI simulations are described.
Learning verbs by teaching a care-receiving robot by children: an experimental report BIBAFull-Text 253-254
  Fumihide Tanaka; Shizuko Matsuzoe
We investigate the use of a care-receiving robot (CRR) for the purpose of supporting childhood education. In contrast to conventional teaching agents that are designed to play the role of human teachers or caregivers, the robot here receives care from children. We hypothesize that by using this CRR, we may construct a new educational framework whose goal is to promote children's spontaneous learning by teaching, through having them teach the CRR. The paper describes an experiment investigating whether a CRR can promote children's learning of English verbs through teaching it.
A tricycle-style teleoperational interface that remotely controls a robot for classroom children BIBAFull-Text 255-256
  Fumihide Tanaka; Toshimitsu Takahashi
We consider the application of telepresence robots for supporting childhood education. One challenge here is to develop a teleoperational robot system that can be manipulated by children themselves. There are two requirements for realizing such a system. First, the system has to be sufficiently intuitive that child users can control it without the need for detailed instructions. Second, controlling the system should be enjoyable enough that child users do not get bored. To satisfy these requirements, we introduce a tricycle-style teleoperational interface that remotely controls a robot. We also report field tests that are currently being conducted at English learning schools for children in Japan.
Prosody-driven robot arm gesture generation in human-robot interaction BIBAFull-Text 257-258
  Amir Aly; Adriana Tapus
In multimodal human-robot interaction (HRI), the process of communication can be established through verbal, non-verbal, and/or para-verbal cues. The linguistic literature [3] shows that para-verbal and non-verbal communications are naturally synchronized. This research focuses on the relation between non-verbal and para-verbal communication by mapping prosody cues to the corresponding arm gestures. Our approach for synthesizing arm gestures uses the coupled hidden Markov models (CHMMs), which could be seen as a collection of HMMs modeling the segmented prosodic characteristics' stream and the segmented rotation characteristics' streams of the two arms' articulations [4][1]. Nao robot was used for tests.
A practical comparison of three robot learning from demonstration algorithms BIBAFull-Text 261-262
  Russell Toris; Halit Bener Suay; Sonia Chernova
Research on robot learning from demonstration has seen significant growth in recent years, but existing evaluations have focused exclusively on algorithmic performance and not on usability factors, especially with respect to naïve users. Here we present findings from a comparative user study in which we asked non-experts to evaluate three distinctively different robot learning from demonstration algorithms -- Behavior Networks, Interactive Reinforcement Learning, and Confidence Based Autonomy. Participants in the study showed a preference for interfaces where they controlled the robot directly (teleoperation and guidance) instead of providing retroactive feedback for past actions (reward and correction). Our results show that the best policy performance in most metrics was achieved using the Confidence Based Autonomy algorithm.
An assistive robot contest: designs and interactions BIBAFull-Text 263-264
  Igor M. Verner; David J. Ahlgren
This paper considers engineering and educational challenges of the assistive robot contest RoboWaiter and interactions experienced by participants, both students and people with disabilities.
Interaction with animated robots in science museum programs: how children learn? BIBAFull-Text 265-266
  Alex Polishuk; Igor Michael Verner
This paper examines learning through student interaction with animated robots, as implemented in robot theatre performances and workshops in the science museum.
A survey on robot appearances BIBAFull-Text 267-268
  Astrid Marieke von der Pütten; Nicole C. Krämer
Against the background of the uncanny valley hypothesis [1] and its conceptual shortcomings this study aims at identifying design characteristics which determine the evaluation of robots. We conducted a web-based survey with standardized pictures of 40 robots which were evaluated by 151 participants. A cluster analysis revealed six clusters of robots. The results are discussed with regard to implications for the uncanny valley hypothesis.
Tele-operated robot control using attitude aware smartphones BIBAFull-Text 269-270
  Amber M. Walker; David P. Miller
Smartphones have put video communications, computation, and proprioceptive sensing (e.g. accelerometers and gyros) into the hands of hundreds of millions of consumers. These small, microelectromechanical systems can be used in many applications, including remote control. This study proposes using smartphones with proprioception as handheld robot controllers and aims to determine feasibility of accelerometers as control inputs for tele-operation while defining heuristics for use. Initial results indicate accelerometers are suitable for tele-operation commands, but identify specific design characteristics meriting further investigation.
HRI research: the interdisciplinary challenge or the dawning of the discipline? BIBAFull-Text 271-272
  Astrid Weiss
The Human-Robot Interaction (HRI) research field has developed considerably over the past 10 to 20 years. It is still a relatively young community, which is in the process of developing its characteristics, such as being interdisciplinary, innovative, responsible, and technical, among others. As in the development of the Human-Computer Interaction (HCI) community, being interdisciplinary is essential; yet, looking at the current situation, HCI has by now become more of an autonomous discipline. Where is the HRI community heading in this respect? This paper reflects, in accordance with the "epistemic living spaces" concept, on some stereotypical statements by researchers working in HRI. The reflection shows that three phases of a researcher's career (orientation, positioning, and stabilizing and expanding) exhibit a tendency towards a discipline of its own and away from interdisciplinary work, and that the fourth phase (attachment) needs to be strengthened, independently of disciplinary or interdisciplinary approaches.
Immersion with robots in large virtual environments BIBAFull-Text 273-274
  Xianshi Xie; Qiufeng Lin; Haojie Wu; Julie A. Adams; Bobby E. Bodenheimer
This paper presents a mixed reality system for combining real robots, humans, and virtual robots. The system tracks and controls physical robots in local physical space, and inserts them into a virtual environment (VE). The system allows a human to locomote in a VE larger than the physically tracked space of the laboratory through a form of redirected walking. An evaluation assessed the conditions under which subjects found the system to be the most immersive.
Effect of scenario media on human-robot interaction evaluation BIBAFull-Text 275-276
  Qianli Xu; Jamie Suat Ling Ng; Yian Ling Cheong; Odelia Yiling Tan; Ji Bin Wong; Benedict Tiong Chee Tay; Taezoon Park
Different media used to present the human-robot interaction (HRI) scenarios may affect users' perception of a robot in the user studies. We investigated how different scenario media (text, video, and live interaction) might influence user evaluation of social robots based on a controlled experiment. We found that multiple aspects of user acceptance were influenced by the scenario media. Moreover, more design problems and redesign proposals were elicited when users were exposed to media with higher fidelity. The results led to useful insights into choosing scenario media in HRI evaluation.
Development of a Jenga game manipulator having multi-articulated fingers BIBAFull-Text 277-278
  Tsuneo Yoshikawa; Tatsuya Sugiura; Seiji Sugiyama
This paper describes the current status of our effort to develop a robot system that can play the game of Jenga against human players. Unlike most previous Jenga robots, which use grippers, this robot is equipped with a hand with two multi-articulated fingers covered by soft skin, and has no major mechanical constraint on playing the game in a natural way, as human players do. An experimental result from a game played between the robot and a human player is presented.
Robot gesture and user acceptance of information in human-robot interaction BIBAFull-Text 279-280
  Aelee Kim; Hyejin Kum; Ounjeong Roh; Sangseok You; Sukhan Lee
This study explores how human users respond to the coordinated and uncoordinated gestures of a robot serving as an information deliverer. A between-subjects experiment was conducted using the Wizard of Oz method, with 63 participants randomly assigned to one of four conditions (voice-only vs. no gesture vs. coordinated gesture vs. uncoordinated gesture) while taking an artwork class in a museum-like setting. The robot explained the artworks with modalities designed according to each condition. Results showed that the coordinated gesture did not aid information delivery. However, there were notable relations between the coordinated gesture and intimacy, homogeneity, and involvement. These results have theoretical implications for the cognitive load of working memory, and practical implications for designing and deploying dynamic humanoid robots as museum tour guides.
Establishment of spatial formation by a mobile guide robot BIBAFull-Text 281-282
  Mohammad Abu Yousuf; Yoshinori Kobayashi; Yoshinori Kuno; Keiichi Yamazaki; Akiko Yamazaki
A mobile museum guide robot is expected to establish a proper spatial formation with visitors. After observing videotaped scenes of human guide-visitor interaction at actual museum galleries, we developed a mobile robot that can guide multiple visitors inside the gallery from one exhibit to another. The mobile guide robot is capable of establishing the spatial formation known as "F-formation" at the beginning of an explanation. It can also use a systematic procedure known as "pause and restart" depending on the situation, through which a framework of mutual orientation between the speaker (robot) and visitors is achieved. The effectiveness of our method has been confirmed through experiments.
Navigating in public space: participants' evaluation of a robot's approach behavior BIBAFull-Text 283-284
  Jakub A. Zlotowski; Astrid Weiss; Manfred Tscheligi
The results from an empirical study of the impact of a robot's approach trajectories on its social acceptance are presented. In an online survey, participants watched short videos of a robot (IURO -- Interactive Urban RObot) approaching a person in a public space and asking for help. IURO approached either a walking or a standing person. The results show that walking participants preferred to be approached from the front-left or front-right direction rather than frontally. However, when participants were standing, all three approach directions were acceptable.

Conversation and proxemics

Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction BIBAFull-Text 285-292
  Chaoran Liu; Carlos T. Ishi; Hiroshi Ishiguro; Norihiro Hagita
Head motion occurs naturally and in synchrony with speech during human dialogue communication, and may carry paralinguistic information, such as intentions, attitudes and emotions. Therefore, natural-looking head motion by a robot is important for smooth human-robot interaction. Based on rules inferred from analyses of the relationship between head motion and dialogue acts, this paper proposes a model for generating head tilting and nodding, and evaluates the model using three types of humanoid robot (a very human-like android, "Geminoid F", a typical humanoid robot with less facial degrees of freedom, "Robovie R2", and a robot with a 3-axis rotatable neck and movable lips, "Telenoid R2"). Analysis of subjective scores shows that the proposed model including head tilting and nodding can generate head motion with increased naturalness compared to nodding only or directly mapping people's original motions without gaze information. We also find that an upwards motion of a robot's face can be used by robots which do not have a mouth in order to provide the appearance that utterance is taking place. Finally, we conduct an experiment in which participants act as visitors to an information desk attended by robots. As a consequence, we verify that our generation model performs on par with directly mapping people's original motions with gaze information in terms of perceived naturalness.
Designing persuasive robots: how robots might persuade people using vocal and nonverbal cues BIBAFull-Text 293-300
  Vijay Chidambaram; Yueh-Hsuan Chiang; Bilge Mutlu
Social robots have the potential to serve as personal, organizational, and public assistants as, for instance, diet coaches, teacher's aides, and emergency responders. The success of these robots -- whether in motivating users to adhere to a diet regimen or in encouraging them to follow evacuation procedures in the case of a fire -- will rely largely on their ability to persuade people. Research in a range of areas, from political communication to education, suggests that the nonverbal behaviors of a human speaker play a key role in the persuasiveness of the speaker's message and the listeners' compliance with it. In this paper, we explore how a robot might effectively use these behaviors, particularly vocal and bodily cues, to persuade users. In an experiment with 32 participants, we evaluate how manipulations in a robot's use of nonverbal cues affected participants' perceptions of the robot's persuasiveness and their compliance with the robot's suggestions across four conditions: (1) no vocal or bodily cues, (2) vocal cues only, (3) bodily cues only, and (4) vocal and bodily cues. The results showed that participants complied with the robot's suggestions significantly more when it used nonverbal cues than when it did not, and that bodily cues were more effective in persuading participants than vocal cues were. Our model of persuasive nonverbal cues and experimental results have direct implications for the design of persuasive behaviors for human-like robots.
How do people walk side-by-side?: using a computational model of human behavior for a social robot BIBAFull-Text 301-308
  Luis Yoichi Morales Saiki; Satoru Satake; Rajibul Huq; Dylan Glass; Takayuki Kanda; Norihiro Hagita
This paper presents a computational model for side-by-side walking for human-robot interaction (HRI). In this work we address the importance of future motion utility (motion anticipation) of the two walking partners.
   Previous studies considered only a robot moving alongside a person while avoiding collisions, using simple velocity-based predictions. In contrast, our proposed model includes two major considerations. First, it considers the current goal, modeling side-by-side walking as a process of moving towards a goal while maintaining a relative position with the partner. Second, it takes the partner's utility into consideration: it models side-by-side walking as a phenomenon where two agents maximize mutual utilities rather than considering only a single agent's utility. The model is constructed and validated with a set of trajectories from pairs of people recorded walking side-by-side. Finally, our proposed model was tested in an autonomous robot walking side-by-side with participants and demonstrated to be effective.
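The "mutual utilities" idea above can be sketched as a toy joint choice: each agent scores candidate velocities, and the pair picks the combination maximizing the sum of both utilities rather than either agent's alone. The utility terms and weights here are invented stand-ins for the paper's model:

```python
import itertools

def utility(v, goal_v, partner_v):
    """Toy per-agent utility: progress toward a goal speed plus a
    cohesion term for staying alongside the partner (weights invented)."""
    progress = -abs(v - goal_v)
    cohesion = -abs(v - partner_v)
    return progress + 0.5 * cohesion

def best_joint_choice(candidates, goal_v):
    """Pick the velocity pair that maximizes the SUM of both utilities."""
    return max(
        itertools.product(candidates, repeat=2),
        key=lambda pair: utility(pair[0], goal_v, pair[1])
                         + utility(pair[1], goal_v, pair[0]),
    )

va, vb = best_joint_choice([0.8, 1.0, 1.2], goal_v=1.0)
```

With these stand-in terms, both agents settle on the shared goal speed, since any deviation hurts either progress or cohesion.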
A techno-sociological solution for designing a museum guide robot: regarding choosing an appropriate visitor BIBAFull-Text 309-316
  Akiko Yamazaki; Keiichi Yamazaki; Takaya Ohyama; Yoshinori Kobayashi; Yoshinori Kuno
In this paper, we present our work designing a robot that explains an exhibit to multiple visitors in a museum setting, based on ethnographic analysis of interactions between expert human guides and visitors. During the ethnographic analysis, we discovered that expert human guides employ some identical strategies and practices in their explanations. In particular, one of these is to involve all visitors by posing a question to an appropriate visitor among them, which we call the "creating a puzzle" sequence. This is done in order to draw visitors' attention towards not only the exhibit but also the guide's explanation. While creating a puzzle, the human guide can monitor visitors' responses and choose an "appropriate" visitor (i.e. one who is likely to provide an answer). Based on these findings, sociologists and engineers together developed a guide robot that coordinates verbal and non-verbal actions in posing a question, or "a puzzle", that will draw visitors' attention, and then explains the exhibit to multiple visitors. During the explanation, the robot chooses an "appropriate" visitor. We tested the robot at an actual museum. The results show that our robot increases visitors' engagement and interaction with the guide, as well as interaction and engagement among visitors.


Robots in the loop: telepresence robots in everyday life BIBAFull-Text 317-318
  Katherine M. Tsui; Stephen Von Rump; Hiroshi Ishiguro; Leila Takayama; Peter N. Vicars
This year's human-robot interaction (HRI) conference focuses on "robots in the loop" and how robots are capable of enhancing the experiences of human users in everyday life and work. Telepresence robots allow operators to participate in remote locations through mobility and live bidirectional audio and video feeds. Using robotic telepresence, students with chronic illnesses are attending their regular classes, physicians are conducting virtual "home visits" for recovering patients, and remote teammates are having conversations beyond the office conference room.
   This panel gathers experts from academia, business, and industry to discuss their experiences in developing robotic telepresences and "ah ha" moments reported from field use. Topics include how telepresence is defined, the practical use cases and application domains, the social and practical challenges encountered by operators and people physically present with the robots, and the implications for design of telepresence robots given these considerations.

Living and working with service robots

Personalization in HRI: a longitudinal field experiment BIBAFull-Text 319-326
  Min Kyung Lee; Jodi Forlizzi; Sara Kiesler; Paul Rybski; John Antanitis; Sarun Savetsila
Creating and sustaining rapport between robots and people is critical for successful robotic services. As a first step towards this goal, we explored a personalization strategy with a snack delivery robot. We designed a social robotic snack delivery service, and, for half of the participants, personalized the service based on participants' service usage and interactions with the robot. The service ran for each participant for two months. We evaluated this strategy during a 4-month field experiment. The results show that, as compared with the social service alone, adding personalized service improved rapport, cooperation, and engagement with the robot during service encounters.
Exploring the role of robots in home organization BIBAFull-Text 327-334
  Caroline Pantofaru; Leila Takayama; Tully Foote; Bianca Soto
Technologists have long wanted to put robots in the home, making robots truly personal and present in every aspect of our lives. It has not been clear, however, exactly what these robots should do in the home. The difficulty of tasking robots with home chores comes not only from the significant technical challenges, but also from the strong emotions and expectations people have about their home lives. In this paper, we explore one possible set of tasks a robot could perform, home organization and storage tasks. Using the technique of need finding, we interviewed a group of people regarding the reality of organization in their home; the successes, failures, family dynamics and practicalities surrounding organization. These interviews are abstracted into a set of frameworks and design implications for home robotics, which we contribute to the community as inspiration and hypotheses for future robot prototypes to test.
The domesticated robot: design guidelines for assisting older adults to age in place BIBAFull-Text 335-342
  Jenay M. Beer; Cory-Ann Smarr; Tiffany L. Chen; Akanksha Prakash; Tracy L. Mitzner; Charles C. Kemp; Wendy A. Rogers
Many older adults wish to remain in their own homes as they age [16]. However, challenges in performing home upkeep tasks threaten an older adult's ability to age in place. Even healthy, independently living older adults experience challenges in maintaining their home [13]. Challenges with home tasks can be compensated for through technology, such as home robots. However, for home robots to be adopted by older adult users, they must be designed to meet older adults' needs for assistance, and the older users must be amenable to robot assistance for those needs. We conducted a needs assessment to (1) assess older adults' openness to assistance from robots; and (2) understand older adults' opinions about using an assistive robot to help around the home. We administered questionnaires and conducted structured group interviews with 21 independently living older adults (ages 65-93). The questionnaire data suggest that, overall, older adults prefer robot assistance for cleaning and fetching/organizing tasks; however, their assistance preferences varied across tasks. The interview data provided insight as to why they hold such preferences. Older adults reported benefits of robot assistance (e.g., the robot compensating for limitations, saving them time and effort, completing undesirable tasks, and performing tasks at a high level of performance). Participants also reported concerns, such as the robot damaging the environment, being unreliable at or incapable of doing a task, doing tasks the older adult would rather do, or taking up too much space/storage. These data, along with specific comments from participant interviews, provide the basis for preliminary recommendations for designing mobile manipulator robots to support aging in place.
The effect of monitoring by cameras and robots on the privacy enhancing behaviors of older adults BIBAFull-Text 343-350
  Kelly Caine; Selma Sabanovic; Mary Carter
This paper describes the results of an experimental study in which older adult participants interacted with three monitoring technologies designed to support their ability to age in place in their own home -- a camera, a stationary robot, and a mobile robot. The aim of our study was to evaluate users' perceptions of privacy and their tendencies to engage in privacy enhancing behaviors (PEBs) by comparing the three conditions. We found that privacy concerns lead older adults to change their behavior in a home environment while being monitored by cameras or embodied robots. We expected participants to engage in more PEBs when they interacted with a mobile robot, which provided embodied cues of ongoing monitoring; surprisingly, we found the opposite to be true -- the camera was the condition in which participants performed more PEBs. We describe the results of quantitative and qualitative analyses of our survey, interview, and observational data and discuss the implications of our study for human-robot interaction, the study of privacy and technology, and the design of assistive robots for monitoring older adults.

Robots for children

Children learning with a social robot BIBAFull-Text 351-358
  Takayuki Kanda; Michihiro Shimada; Satoshi Koizumi
We used a social robot as a teaching assistant in a class for children's collaborative learning. In the class, a group of 6th graders learned together using Lego Mindstorms. The class consisted of seven lessons with Robovie, a social robot, followed by one lesson to test their individual achievement. Robovie managed the class and explained how to use Lego Mindstorms. In addition to such basic management behaviors for the class, we prepared social behaviors for building relationships with the children and encouraging them. The results show that the social behaviors encouraged children to work more in the first two lessons, but did not affect them in later lessons. On the other hand, the social behaviors contributed to building relationships and attaining better social acceptance.
Blended reality characters BIBAFull-Text 359-366
  David Robert; Cynthia Breazeal
We present the idea and formative design of a blended reality character, a new class of character able to maintain visual and kinetic continuity between the fully physical and fully virtual. The interactive character's embodiment fluidly transitions from an animated character on-screen to a small, alphabet block-shaped mobile robot designed as a platform for informal learning through play. We present the design and results of our study with thirty-four children aged three and a half to seven conducted using non-reactive, unobtrusive observational methods and a validated evaluation instrument. Our claim is that young children have accepted the idea, persistence and continuity of blended reality characters. Furthermore, we found that children are more deeply engaged with blended reality characters and are more fully immersed in blended reality play as co-protagonists in the experience, in comparison to interactions with strictly screen-based representations. As substantiated through the use of quantitative and qualitative analysis of drawings and verbal utterances, the study shows that young children produce longer, detailed and more imaginative descriptions of their experiences following blended reality play. The desire to continue engaging in blended reality play as expressed by children's verbal requests to revisit and extend their play time with the character positively affirms the potential for the development of an informal learning platform with sustained appeal to young children.
Modelling empathic behaviour in a robotic game companion for children: an ethnographic study in real-world settings BIBAFull-Text 367-374
  Iolanda Leite; Ginevra Castellano; André Pereira; Carlos Martinho; Ana Paiva
The idea of autonomous social robots capable of assisting us in our daily lives is becoming more real every day. However, there are still many open issues regarding the social capabilities that those robots should have in order to make daily interactions with humans more natural. For example, the role of affective interactions is still unclear. This paper presents an ethnographic study conducted in an elementary school where 40 children interacted with a social robot capable of recognising and responding empathically to some of the children's affective states. The findings suggest that the robot's empathic behaviour positively affected how children perceived the robot. However, the empathic behaviours should be selected carefully, or they risk having the opposite effect. The target application scenario and the particular preferences of children seem to influence the degree of empathy that social robots should be endowed with.

Animating robot behavior

Enhancing interaction through exaggerated motion synthesis BIBAFull-Text 375-382
  Michael J. Gielniak; Andrea L. Thomaz
Other than eye gaze and referential gestures (e.g. pointing), the relationship between robot motion and observer attention is not well understood. We explore this relationship to achieve social goals, such as influencing human partner behavior or directing attention. We present an algorithm that creates exaggerated variants of a motion in real-time. Through two experiments we confirm that exaggerated motion is perceptibly different than the input motion, provided that the motion is sufficiently exaggerated. We found that different levels of exaggeration correlate to human expectations of robot-like, human-like, and cartoon-like motion. We present empirical evidence that use of exaggerated motion in experiments enhances the interaction through the benefits of increased engagement and perceived entertainment value. Finally, we provide statistical evidence that exaggerated motion causes predictable human partner gaze direction and better retention of interaction details.
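One common way to exaggerate a motion, offered here as an assumption rather than the authors' algorithm, is to amplify each frame's deviation from the motion's mean pose by a gain greater than one:

```python
import numpy as np

def exaggerate(frames, gain=1.5):
    """Amplify a motion by scaling each frame's deviation from the
    mean pose; gain=1.0 returns the input motion unchanged, while
    larger gains produce progressively more exaggerated variants."""
    frames = np.asarray(frames, dtype=float)
    mean_pose = frames.mean(axis=0)
    return mean_pose + gain * (frames - mean_pose)

# Hypothetical 1-DOF motion with two frames, doubled in amplitude.
louder = exaggerate([[0.0], [2.0]], gain=2.0)
```

Varying `gain` in such a scheme would correspond to the different levels of exaggeration the study compares.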
The illusion of robotic life: principles and practices of animation for robots BIBAFull-Text 383-390
  Tiago Ribeiro; Ana Paiva
This paper describes our approach on the development of the expression of emotions on a robot with constrained facial expressions. We adapted principles and practices of animation from Disney and other animators for robots, and applied them on the development of emotional expressions for the EMYS robot. Our work shows that applying animation principles to robots is beneficial for human understanding of the robots' emotions.
Trajectories and keyframes for kinesthetic teaching: a human-robot interaction perspective BIBAFull-Text 391-398
  Baris Akgun; Maya Cakmak; Jae Wook Yoo; Andrea Lockerd Thomaz
Kinesthetic teaching is an approach to providing demonstrations to a robot in Learning from Demonstration whereby a human physically guides a robot to perform a skill. In the common usage of kinesthetic teaching, the robot's trajectory during a demonstration is recorded from start to end. In this paper we consider an alternative, keyframe demonstrations, in which the human provides a sparse set of consecutive keyframes that can be connected to perform the skill. We present a user-study (n=34) comparing the two approaches and highlighting their complementary nature. The study also tests and shows the potential benefits of iterative and adaptive versions of keyframe demonstrations. Finally, we introduce a hybrid method that combines trajectories and keyframes in a single demonstration.
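The contrast between trajectory and keyframe demonstrations can be sketched as follows; the joint-angle data and the linear-interpolation "connecting" step are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def record_trajectory(samples):
    """Trajectory demonstration: keep every sampled joint configuration."""
    return np.asarray(samples)

def record_keyframes(samples, keyframe_indices):
    """Keyframe demonstration: keep only a sparse set of configurations."""
    return np.asarray(samples)[list(keyframe_indices)]

def connect_keyframes(keyframes, steps_between=10):
    """Reconstruct a playable motion by linearly interpolating between
    consecutive keyframes (one simple way to 'connect' them)."""
    keyframes = np.asarray(keyframes, dtype=float)
    points = []
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        for t in np.linspace(0.0, 1.0, steps_between, endpoint=False):
            points.append((1 - t) * a + t * b)
    points.append(keyframes[-1])
    return np.asarray(points)

# Hypothetical 1-DOF demonstration sampled at 11 points.
demo = np.linspace(0.0, 1.0, 11)
traj = record_trajectory(demo)             # dense: all 11 points
keys = record_keyframes(demo, [0, 5, 10])  # sparse: 3 keyframes
motion = connect_keyframes(keys)           # reconstructed motion
```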

HRI 2012 video session

SDT: a "konkon" interface to buildup the connotation interactions BIBAFull-Text 399-400
  Yu Arita; Hirota Shinya; Yuta Yoshiike; P. Ravindra S. De Silva; Michio Okada
The advantage of the proposed approach is that users can invent their own communication protocol (based on knock patterns) to communicate with the creature. This approach is a novel concept for establishing future human-robot communication protocols within many contexts and human-centric applications. Moreover, through symbolic (knock-based) communication, a human is able to lead the robot to adapt to and communicate with their personalized communication protocol.
Human-swarm interaction through distributed cooperative gesture recognition BIBFull-Text 401-402
  Alessandro Giusti; Jawad Nagi; Luca M. Gambardella; Stéphane Bonardi; Gianni A. Di Caro
Field trials of the block-shaped edutainment robot hangulbot BIBAFull-Text 403-404
  Sonya S. Kwak; Eun Ho Kim; Jimyung Kim; Youngbin Son; Inveom Kwak; Jun-Shin Park; Eun Wook Lee
The objective of this study was to develop an edutainment robot which provides multi-sensory learning experiences to improve users' space perception and creativity. In particular, we focused on developing educational content for the study of the Korean alphabet (Hangul) using the robot. On the basis of the phonemic and modular nature of Hangul, we devised a block-shaped edutainment robot for the study of Hangul. The robot, known as "HangulBot", is composed of a consonant block and a vowel block. By rotating and rearranging those blocks, a user can create different characters. To enable the robot to perceive the arrangement of the blocks and the distance between a consonant block and a vowel block, IR LEDs and photo transistors were used. The eight IR LEDs in the consonant block generate different radiation signals, and the vowel block perceives the arrangement of the blocks by receiving the signals. The distance between the two blocks is estimated by thresholding the received signal, and the corresponding sound of each arrangement is then played through a speaker installed in the vowel block. We executed two short-term field trials with a twenty-seven-month-old child in June of 2011 and November of 2011 to ascertain children's initial reaction to HangulBot and how their reaction would change over time. While the results are preliminary, we noted several interesting findings. First, after several trials by the mother, the child felt comfortable with HangulBot. Second, the child intuitively followed the corresponding speech sounds generated by HangulBot according to the arrangement of the blocks. That is to say, the sound generated after arranging the blocks intuitively induced the child to follow the sound. Third, the child's initial reaction to HangulBot was mostly block play, but five months later, her reaction to the robot included not only block play but also active learning of the Korean alphabet. This result indicates that HangulBot could be an effective edutainment tool which improves space perception and creativity as well as linguistic abilities by stimulating both sides of the brain.
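The sensing-to-sound pipeline described above can be sketched as a lookup plus a proximity threshold. The pattern encoding, threshold value, and romanized syllables below are invented for illustration; the abstract specifies only that IR signals identify the arrangement and that distance is thresholded:

```python
PROXIMITY_THRESHOLD = 0.6  # assumed normalized IR intensity cutoff

# Hypothetical: each consonant-block arrangement emits a distinct IR pattern.
PATTERN_TO_CONSONANT = {
    0b00000001: "g",  # e.g. the consonant ㄱ
    0b00000010: "n",  # e.g. the consonant ㄴ
}

def syllable_for(pattern, intensity, vowel="a"):
    """Return the syllable to play, or None if the blocks are too far
    apart or the arrangement is unrecognized."""
    if intensity < PROXIMITY_THRESHOLD:
        return None  # blocks not adjacent: stay silent
    consonant = PATTERN_TO_CONSONANT.get(pattern)
    if consonant is None:
        return None
    return consonant + vowel

sound = syllable_for(0b00000001, 0.9)
```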
Human-human vs. human-robot teamed investigation BIBAFull-Text 405-406
  Caroline E. Harriott; Glenna L. Buford; Tao Zhang; Julie A. Adams
This video presents clips from an evaluation in which participants, each paired with either a human or a robot partner, were deployed to search a hallway for suspicious items, in a manner similar to tactics used by first responders handling bomb threats. The teams used natural, verbal communication to collaborate, determine where hazards were located, and decide which items were suspicious. The video demonstrates that the investigations in both conditions played out in a similar manner and that participants were able to complete the investigations successfully with a robot partner; however, the participants were sometimes uncertain how to interact with the robot.
Acting lesson with robot: emotional gestures BIBAFull-Text 407-408
  Heather Knight; Matthew Gray
In this video, real-life acting professor Matthew Gray tutors Data the Robot (a Nao model) to improve his expression of emotion via Chekhov's Psychological Gestures. Though the video narrative is fictional and the robot actions pre-programmed, the aim of the dramatization is to introduce an acting methodology that social robots could use to leverage full body affect expressions.
   The video begins with Gray leading Nao in traditional human actor warm-up exercises. Next, Gray shows Data a video of his students practicing Chekhov psychological gestures [4] [11].
   Then Data tries out some 'push' gestures himself. By pairing the 'push' gesture with text, the viewer is intended to unconsciously associate the words with an outpouring of emotion. Finally, Data's programmer, Knight, arrives to pick up the robot from his lesson, "until next time".
   This video playfully introduces full-body emotional gestures. The benefit of such movement-based full-body expressions is that they do not necessarily require a robot to have human-like facial expressions nor humanoid form to be effective (though the interplay of psychological gesture with multi-modal expressions could provide fertile terrain for future research). Instead, these full-body motions are translations of an actor's motive/intent that suffuse the whole form (e.g. expansion, sluggishness, lightness).
   We note that there are various schools of physical theater dedicated to understanding movement [5]. Related investigations in the robotics world that have applied acting method or practice to social robot design or architecture also include [2][3][6][7][8][9][10].
   As Blaire writes in her text on acting and neuroscience [1], the discovery of mirror neurons in our brain has led some dramaturges to theorize that audience members simulate the gestures of performers through their own neural circuitry for interpretation. If so, full-body gestures may be able to tap into our emotional experience in a uniquely human way. We hope this will be the first of several spirited demonstration videos that explore intersections wherein human acting methodologies might benefit the development of robot non-verbal expressions.
MARIOBOT: marionette robot that interact with an audience BIBAFull-Text 409-410
  Woong-Ji Kim; Sun-Wook Choi; Chong Ho Lee
A marionette is a puppet operated by a human manipulator using strings and a control bar. It is an ancient and universal form of performing art. Marionettes have been used for entertainment for centuries because people are fascinated by their unique and funny movements.
   In this video, we introduce a new type of entertainment robot for theatrical performances. Our MARIOBOT (marionette robot) is driven by a robotic controller that consists of eight motors and their coupling components. Each motor pulls the string connected to corresponding marionette's joint. This robot freely moves along the stage hung by a mobile base platform.
   In order to show the capabilities of our robot actor on the stage, this video with scripts demonstrates the performance of the robot. We also show a few audience-robot interaction methods, since, in modern stage performances, interaction with the audience draws much interest and appeal [1]. The audience loses attention quickly during a passive performance; therefore, we need to consider appropriate HRI techniques to keep the audience's attention lively. However, the typical stage environment in a theater is not well conditioned to accommodate vision-based HRI methods because of theater luminance, the distance between the robot and the audience, etc. To overcome these problems, we present effective methods for interaction with the audience, and the situations related to these methods, in the video together with a system overview.
Introducing students grades 6-12 to expressive robotics BIBAFull-Text 411-412
  David V. Lu; Ross Mead
Every year, tens of thousands of middle school and high school students participate in robotics competitions, such as Botball, FIRST, and VEX. This provides them with an excellent introduction to the ins and outs of building robots and programming them to autonomously accomplish specific tasks. However, the rules of many of these competitions often limit or prohibit human interaction with the robots. As a result, students are not exposed to and are, thus, not encouraged to think about human-robot interaction (HRI) and its potential impacts on society.
   To start getting a new generation of students thinking about HRI, we held a workshop entitled "Expressive Robotics: Motion and Emotion" at the 2011 Global Conference on Educational Robotics. The focus of the workshop was to teach the students how to program robots to express emotion and intent just through the robot's physical actions. The robot used by each participating group was not unlike the ones they used in competitions: an iRobot Create base provided a mobile platform, upon which stood a three degree-of-freedom arm, serving as an articulated spine and head.
   Each group was first asked to program the robot in a way that expressed one of Ekman's six basic emotions. They were then tasked with implementing a simple keyframe animation system to control the robot's limited degrees of freedom, which they could then modify to illustrate certain principles from theatre and animation. They used their animation system to tell a story purely through robot motion and interactions with props, as seen in Figure 1. Students were encouraged to focus on making the robot's emotion and intent apparent to a human audience.
   Ultimately, the workshop proved to be a great success; it was very well-received by all involved, who encouraged us to make this an annual event at the conference. We felt that this type of workshop served as a fine proof-of-concept for introducing students grades 6-12 to human factors in robotics, and could be extended throughout the entire K-12 education pipeline. Furthermore, we believe that exposure to sociable robotics has the potential to increase interest and self-efficacy of underrepresented student populations, particularly girls, in STEM-related activities.
Multi-user multi-touch multi-robot command and control of multiple simulated robots BIBAFull-Text 413-414
  Eric McCann; Sean McSheehy; Holly Yanco
This video demonstrates three users sharing control of eight simulated robots with a Microsoft Surface and two Apple iPads using our Multi-user Multi-touch Multi-robot Command and Control Interface.
   The command and control interfaces are all capable of moving their world camera through space, tasking one or more robots with a series of waypoints, and assuming manual control of a single robot for inspection of its sensors and tele-operation. They display full-screen images sent from their user's world camera, overlaid with icons that show the position and selection state of each robot in the camera's field of view, dots that indicate each robot's current destination, and rectangles that correspond to each other user's field of view.
   One multi-touch interface runs on a Microsoft Surface, and the others on Apple iPads; they all have the same functional capabilities, other than a few differences due to the form factor and touch sensing method used by the platforms. The Surface interface is able to interpret gestures that include more than just finger tips, such as placing both fists on the screen to make all robots stop and wait for new commands. As iPads sense touch capacitively, they do not support detection of such gestures. The Surface interface allows its user to move their world camera while simultaneously teleoperating one of the robots with our Dynamically Resizing Ergonomic and Multi-touch Controller (DREAM Controller) [1, 2]. On the iPads, however, the command and control mode and teleoperation mode are mutually exclusive.
   The robots are simulated in Microsoft Robotics Developer Studio. Each user's world camera has similar movement capabilities to a quad-copter. The UDP communications between users and robots are all handled by a single server that routes messages to the appropriate targets, allowing scalability of both the number of robots and users.
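The single-server routing scheme described above can be sketched as a registry of endpoints keyed by target id, so scaling up means adding registry entries rather than new connections. The endpoint names and message format here are invented for illustration (the actual system uses UDP datagrams):

```python
class RoutingServer:
    """Toy message router: one server forwards each message to its
    addressee, decoupling senders from receivers."""

    def __init__(self):
        self.endpoints = {}  # id -> callable that delivers a payload

    def register(self, endpoint_id, deliver):
        self.endpoints[endpoint_id] = deliver

    def route(self, message):
        """message: dict with 'to' and 'payload' keys. Returns True if
        delivered, False if the target is unknown (message dropped)."""
        deliver = self.endpoints.get(message["to"])
        if deliver is None:
            return False
        deliver(message["payload"])
        return True

# Hypothetical usage: a user tasks one robot with a waypoint.
inbox = []
server = RoutingServer()
server.register("robot-3", inbox.append)
server.route({"to": "robot-3", "payload": "waypoint (4, 7)"})
```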
Encounters: from talking heads to swarming heads BIBAFull-Text 415-416
  Damith C. Herath; Christian Kroos; A Stelarc
Robots at home and work have been a key theme in science fiction since the genre began. It is only now that we see this being realized, albeit in very basic forms such as robot vacuum cleaners and various entertainment robotic platforms. In this video we highlight a number of projects woven around the iRobot Create research robot platform and an embodied conversational agent called the Prosthetic Head -- an installation work by Stelarc. We start the visual journey by taking a satirical look at some of the parallels between a commercial communication product and the Prosthetic Head. The journey then moves through telepresence robotics and gesture-based human-robot interaction. The robots featured in the video are driven by an attention and behavioral system. Finally, the video concludes with a preview of the "Swarming Heads" -- an interactive installation.
Johnny-0, a compliant, force-controlled and interactive humanoid autonomous robot BIBAFull-Text 417-418
  François Ferland; Arnaud Aumont; Dominic Létourneau; Marc-Antoine Legault; François Michaud
Johnny-0, shown in Figure 1, is our new humanoid robot which integrates an expressive face on an orientable head, two arms with 4 degrees of freedom (DOF) each and grippers, mounted on an omnidirectional, non-holonomic mobile platform. Our underlying goal with Johnny-0 is to design a platform capable of natural reciprocal interaction (motion, language, touch, affect) with humans, to address integration issues associated with advanced motion, interaction and cognition capabilities on the same platform, and their use in unconstrained real world conditions. To do so, compliance is a necessity to provide natural and safe interactions.
   One distinctive element of Johnny-0 is that it uses force-controlled actuators (called Differential Elastic Actuators, or DEA) for active steering of its mobile platform and for interactive control of its 4-DOF arms. Compliance at the mobile platform level allows a person to physically guide the robot without having to push it from a specific location on the platform [1]. Motion can also be constrained to avoid obstacles and collisions, providing natural physical interaction with the robot. Impedance control of each joint enables an infinite range of arm behaviors, from zero impedance for free movement with gravity compensation, to high stiffness constraining the arms to precise positions or ranges of movement. Stiffness can be configured to create virtual constraints in Cartesian space, providing force feedback to the user about the arms' movement limits. For instance, stiffening the arms in certain poses could indicate to the user that the arms are restricted to a specific volume. Beyond these limits, any pushing or pulling force can be perceived by the mobile base and interpreted as an intention to move the robot around.
   Combining compliance with other sensors (e.g., a Kinect motion sensor) and a robot head capable of facial expression allows Johnny-0 to detect incoming people and adjust the impedance of its actuators accordingly (e.g., extending its gripper to greet them), and to express its state based on how people physically interact with it (e.g., displaying surprise when the user moves the arms beyond specific limits).
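The impedance behavior described for Johnny-0's arms corresponds to the standard joint-space impedance law; the sketch below illustrates that law with invented gains and is not the DEA controller itself:

```python
def impedance_torque(q, q_des, q_dot, stiffness, damping):
    """Joint-space impedance law: tau = -K (q - q_des) - D q_dot.
    stiffness = 0 yields free movement (gravity compensation aside);
    a large stiffness pins the joint near q_des, producing the kind
    of virtual constraint a user can feel."""
    return -stiffness * (q - q_des) - damping * q_dot

# Zero stiffness: no restoring torque, so the arm moves freely.
free = impedance_torque(0.5, 0.0, 0.0, stiffness=0.0, damping=0.0)
# High stiffness: strong restoring torque back toward q_des.
stiff = impedance_torque(0.5, 0.0, 0.0, stiffness=100.0, damping=0.0)
```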
Situation understanding bot through language and environment BIBAFull-Text 419-420
  Daniel J. Brooks; Constantine Lignos; Mikhail S. Medvedev; Ian Perera; Cameron Finucane; Vasumathi Raman; Abraham Shultz; Sean McSheehy; Adam Norton; Hadas Kress-Gazit; Mitch Marcus; Holly A. Yanco
This video shows a demonstration of a fully autonomous robot, an iRobot ATRV-JR, which can be given commands using natural language. Users type commands to the robot on a tablet computer, which are then parsed and processed using semantic analysis. This information is used to build a plan representing the high level autonomous behaviors the robot should perform [2][1]. The robot can be given commands to be executed immediately (e.g., "Search the floor for hostages.") as well as standing orders for use over the entire run (e.g., "Let me know if you see any bombs.").
   In the scenario shown in the video, the robot is asked to identify and defuse bombs, as well as to report if it finds any hostages or bad guys. Users can also query the robot through this interface. The robot conveys information to the user through text and a graphical interface on a tablet computer. The system can add icons to the map displayed and highlight areas of the map to convey concepts such as "I am here".
   The video contains segments taken from a continuous 20-minute run, shown at 4x speed. This work demonstrates part of a larger project called Situation Understanding Bot Through Language and Environment (SUBTLE). For more information, see www.subtlebot.org.
How to sustain long-term interaction between children and ROBOSEM in English class BIBAFull-Text 421-422
  Jeonghye Han; Bokhyun Kang; Seongju Park; Seongwook Hong
Studies have confirmed that robot-assisted learning (RAL) can positively contribute to learners' motivation and achievement in language learning [1, 4], and RAL is now spreading in response to demand from parents and government. However, the biggest obstacles to long-term interaction between humans and robots are robots' low success rates in visual and speech recognition, as well as the limitations of artificial intelligence for daily-life HRI [3]. Building on pilot studies with IROBIQ, called Langbot [1], this study demonstrated ROBOSEM's ability to sustain long-term interaction between children and a robot in an elementary English class. Five factors are of concern in sustaining this long-term interaction, as shown in Figure 2: (1) enhancing ROBOSEM's recognition ability with class materials such as marker hats, bracelet watches with embedded RFID tags, Wiimocon, etc.; (2) sharing ROBOSEM's birth story, which, according to the results of [2], increases children's tolerance of weak recognition; (3) making a favorable impression, such as by flashing children's faces on the screen; (4) recounting the history of a child's personal learning activities; and (5) tele-operation drawing on human intelligence.
Implementing human questioning strategies into quizzing-robot BIBAFull-Text 423-424
  Takaya Ohyama; Yasutomo Maeda; Chiaki Mori; Yoshinori Kobayashi; Yoshinori Kuno; Rio Fujita; Keiichi Yamazaki; Shun Miyazawa; Akiko Yamazaki; Keiko Ikeda
From our ethnographic studies of various kinds of museums, we discovered that guides routinely pose questions to visitors in order to draw their attention toward both the explanation and the exhibit. Guides' question sequences tend to begin with a pre-question, which serves not only to monitor visitors' behavior and responses but also to alert visitors that a primary question will follow. We implemented this questioning strategy in our robot system and investigated whether it would also work in human-robot interaction. We developed a vision system that enables the robot to choose an appropriate visitor by monitoring the visitor's response from the initiation of a pre-question to the following pause. Results indicate that this questioning strategy works effectively in human-robot interaction. In this experiment, the robot asked visitors about a photograph: at the pre-question stage, the robot delivered a rather easy question, followed by a more challenging question (Figure 1). More participants turned their heads away from the exhibit when they were unsure about their answer, either facing away from the robot or smiling wryly at the robot or at each other. These behaviors index participants' states of knowledge, which we could utilize to develop a system by which the robot chooses an appropriate candidate through computational recognition.
Whole-body imitation of human motions with a Nao humanoid BIBAFull-Text 425-426
  Jonas Koenemann; Maren Bennewitz
We present a system that enables a humanoid robot to imitate complex whole-body motions of humans in real time. For recording the human motions, any sensor system capable of inferring the joint angle trajectories can be used; in our work, we capture the human data with an Xsens MVN motion capture system consisting of inertial sensors attached to the body. Our framework converts the human joint angles to the robot's joint angles in real time, using a mapping between the human's and the robot's joints to ensure feasibility of the motion. The focus of our system lies in ensuring static stability while the motions are executed, which is a challenging task depending on the complexity of the movements. To avoid falls that might occur with direct imitation of the joint angle trajectories, due to the robot's different weight distribution, we developed an approach that actively balances the center of mass over the support polygon of the robot's feet. At every point in time, our approach ensures that the robot is in a statically stable configuration, i.e., that the ground projection of the center of mass lies within the convex hull of the foot contact points. To achieve this, we apply inverse kinematics given valid foot positions that satisfy the stability criterion and generate the corresponding leg joint angles. In more detail, our system first finds valid positions for the robot's feet by determining a target plane and its orientation, so that the feet can be placed flat and the robot's center of mass is over the support polygon. The new positions of the feet are chosen as the projection onto this target plane. Afterwards, the corresponding leg joint angles are calculated via inverse kinematics. To determine whether the configuration is in double-support mode and, if not, which foot is the stance foot, we evaluate the position of the center of mass relative to the feet.
   As can be seen in the experiments with a Nao humanoid, our approach leads to a highly stable imitation of challenging human movements (see also Fig. 1). In contrast to recent approaches that capture human data using a Kinect-like sensor and only imitate arm movements while keeping the body static, our system can deal with complex, whole-body motions. Note that our approach does not require a prior learning phase but computes stable configurations online and almost in real time as can be seen in the accompanying video.
   We are currently working on imitating motions to learn complex navigation actions such as climbing up staircases or walking down ramps. Our system can also be used for tele-operated tasks that include whole-body movements where stability needs to be guaranteed in order to successfully fulfill the mission.
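   The static-stability criterion described above (the ground projection of the center of mass must lie inside the convex hull of the foot contact points) can be sketched as a purely geometric test; this is a generic implementation, not the authors' code:

```python
def is_statically_stable(com_xy, foot_points):
    """Check that the ground projection of the center of mass lies
    inside the convex hull of the foot contact points.

    com_xy: (x, y) ground projection of the center of mass.
    foot_points: list of (x, y) contact points of the feet.
    """
    # Build the convex hull (Andrew's monotone chain, CCW order).
    pts = sorted(set(foot_points))
    if len(pts) < 3:
        return False  # degenerate support region

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]

    # The point is inside a CCW hull iff it is left of (or on) every edge.
    n = len(hull)
    return all(cross(hull[i], hull[(i+1) % n], com_xy) >= 0
               for i in range(n))
```

   In the paper's pipeline this test drives the correction step: when the projected center of mass leaves the support polygon, new foot positions are chosen and inverse kinematics regenerates the leg joint angles.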
Roboscopie: a theatre performance for a human and a robot BIBFull-Text 427-428
  Séverin Lemaignan; Mamoun Gharbi; Jim Mainprice; Matthieu Herrb; Rachid Alami
Demonstrating Maori Haka with Kinect and Nao robots BIBAFull-Text 429-430
  Thammathip Piumsomboon; Rory Clifford; Christoph Bartneck
In this video, "Nao Haka", four robots and a haka leader perform a traditional Maori Haka. The haka leader, who performs the main actions, is supported by Aldebaran Nao robots, which are controlled by an external performer using a Microsoft Kinect as the input device; this device allows for full-body user tracking. This video was made as a supportive gesture toward the All Blacks' 2011 Rugby World Cup campaign.

Perception and recognition

The cocktail party robot: sound source separation and localisation with an active binaural head BIBAFull-Text 431-438
  Antoine Deleforge; Radu Horaud
Human-robot communication often faces the difficult problem of interpreting ambiguous auditory data. For example, the acoustic signals perceived by a humanoid with its on-board microphones contain a mix of sounds, such as speech, music, and electronic devices, all in the presence of attenuation and reverberation. In this paper we propose a novel method, based on a generative probabilistic model and on active binaural hearing, that allows a robot to robustly perform sound-source separation and localization. We show how interaural spectral cues can be used within a constrained mixture model specifically designed to capture the richness of the data gathered with two microphones mounted on a human-like artificial head. We describe in detail a novel EM algorithm, analyse its initialization, speed of convergence, and complexity, and assess its performance with both simulated and real data.
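   The paper's constrained mixture model is tailored to interaural spectral cues; as a generic illustration of the EM machinery it builds on, here is a minimal EM for a two-component 1-D Gaussian mixture that assigns observed cues to two sources (the simplified model and all names are ours, not the authors'):

```python
import math

def em_two_gaussians(x, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture.

    E-step: posterior responsibility of each component for each point.
    M-step: re-estimate weights, means, and variances from responsibilities.
    """
    mu = [min(x), max(x)]          # crude but deterministic initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities from current parameters.
        resp = []
        for xi in x:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(xi - mu[k])**2 / (2 * var[k]))
                 for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: weighted re-estimation of each component.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var[k] = sum(r[k] * (xi - mu[k])**2 for r, xi in zip(resp, x)) / nk
            var[k] = max(var[k], 1e-6)  # avoid degenerate variance
    return mu, var, w
```

   In the paper's setting each "point" would be a spectral cue and the components would correspond to sound sources, with the mixture constrained by the head's interaural geometry rather than left free as here.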
Multi-party human-robot interaction with distant-talking speech recognition BIBAFull-Text 439-446
  Randy Gomez; Tatsuya Kawahara; Keisuke Nakamura; Kazuhiro Nakadai
Speech is one of the most natural media for human communication, which makes it vital to human-robot interaction. In real environments where robots are deployed, distant-talking speech recognition is difficult to realize due to the effects of reverberation, which degrade speech recognition and understanding and hinder seamless human-robot interaction. To minimize this problem, traditional speech enhancement techniques optimized for human perception have been adopted to achieve robustness in human-robot interaction. However, humans and machines perceive speech differently: an improvement in speech recognition performance may not automatically translate to an improvement in the human-robot interaction experience as perceived by users. In this paper, we propose a method for optimizing speech enhancement techniques specifically to improve automatic speech recognition (ASR), with emphasis on the human-robot interaction experience. Experimental results using real reverberant data from a multi-party conversation show that the proposed method improved the human-robot interaction experience in severely reverberant conditions compared to traditional techniques.
Do you remember that shop?: computational model of spatial memory for shopping companion robots BIBAFull-Text 447-454
  Takahiro Matsumoto; Satoru Satake; Takayuki Kanda; Michita Imai; Norihiro Hagita
We aim to develop a shopping companion robot that can share experiences with users. In this study, we focus on the shared memory acquired when a robot walks together with a user. We developed a computational model of memory recall of visited locations in a shopping mall, built from data collected with 30 participants. We found that shop size, color intensity of the facade, relative visibility, and elapsed time are the features that influence recall. The model was used in a shopping-companion-robot scenario: the robot, Robovie, autonomously follows a user while inferring the user's memory recall of the shops along the visited route. When the user asks for the location of another shop, Robovie replies with a description of the destination, referring to the known locations inferred with the model of the user's memory recall. With this scenario, we verified the effectiveness of the developed computational model of memory recall. The evaluation experiment revealed that the model outputs the shops that participants are likely to recall, making the directions given easier to understand.
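   A recall model over the features the abstract names (shop size, facade color intensity, relative visibility, elapsed time) might be sketched as a logistic score; the weights and bias below are purely hypothetical illustrations, not the fitted model from the paper:

```python
import math

# Hypothetical coefficients: the paper identifies which features matter,
# but these numbers are illustrative only.
WEIGHTS = {"size": 0.8, "color_intensity": 0.6,
           "visibility": 1.2, "elapsed_min": -0.05}
BIAS = -1.0

def recall_probability(shop):
    """Logistic score for whether a user likely recalls a visited shop."""
    z = BIAS + sum(WEIGHTS[f] * shop[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A large, vivid, highly visible shop seen recently should score higher
# than a small, dull one seen long ago.
vivid = {"size": 1.0, "color_intensity": 1.0,
         "visibility": 1.0, "elapsed_min": 5}
dull = {"size": 0.1, "color_intensity": 0.2,
        "visibility": 0.1, "elapsed_min": 60}
```

   A companion robot can then restrict its direction-giving to landmarks whose recall probability exceeds some threshold, which is the role the model plays in the Robovie scenario.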
Color anomaly detection and suggestion for wilderness search and rescue BIBAFull-Text 455-462
  Bryan S. Morse; Daniel Thornton; Michael A. Goodrich
In wilderness search and rescue, objects not native or typical to a scene may provide clues that indicate the recent presence of the missing person. This paper presents the results of augmenting an aerial wilderness search-and-rescue system with an automated spectral anomaly detector for identifying unusually colored objects. The detector dynamically builds a model of the natural coloring in the scene and identifies outlier pixels, which are then filtered both spatially and temporally to find unusually colored objects. These objects are then highlighted in the search video as suggestions for the user, thus shifting a portion of the user's task from scanning the video to verifying the suggestions. This paper empirically evaluates multiple potential detectors, then incorporates the best-performing detector into a suggestion system. User study results demonstrate that even with an imperfect detector, users' detection rates increased significantly. Results further indicate that users' false positive rates did not increase, though performance on a secondary task did decrease. Furthermore, users subjectively reported that the use of detector-based suggestions made the overall task easier. These results suggest that such suggestion-based systems for search can increase overall searcher performance, but that additional external tasks should be limited.
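   The paper evaluates several detectors; one common form of spectral anomaly detection the description resembles is a Gaussian color model with Mahalanobis-distance outlier scoring, sketched here with an illustrative threshold (not the authors' implementation):

```python
import numpy as np

def color_anomaly_mask(pixels, threshold=4.0):
    """Flag pixels whose color is an outlier under a Gaussian model of
    the scene's natural coloring (squared Mahalanobis distance).

    pixels: (N, 3) float array of RGB values.
    Returns a boolean mask marking anomalous pixels.
    """
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularize
    inv_cov = np.linalg.inv(cov)
    diff = pixels - mean
    # Per-pixel squared Mahalanobis distance to the scene color model.
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return d2 > threshold ** 2
```

   In the full system such a per-pixel mask would then be filtered spatially and temporally, as the abstract describes, before surfacing suggestions to the searcher.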

Talking with robots: linguistics and natural language

Levels of embodiment: linguistic analyses of factors influencing HRI BIBAFull-Text 463-470
  Kerstin Fischer; Katrin S. Lohan; Kilian Foth
In this paper, we investigate the role of a robot's physical embodiment and its degrees of freedom in HRI. Both factors have been suggested as relevant in definitions of embodiment, yet their effects on the way people interact with robots are not well understood. Linguistic analyses of verbal interactions with robots differing in physical embodiment and degrees of freedom provide a useful methodology for investigating the factors that condition human-robot interaction. Results show that both physical embodiment and degrees of freedom influence interaction: the effect of physical embodiment lies in the interpersonal domain, concerning the extent to which the robot is perceived as an interaction partner, whereas degrees of freedom influence how users project the robot's suitability for the current task.
Tell me when and why to do it!: run-time planner model updates via natural language instruction BIBAFull-Text 471-478
  Rehj Cantrell; Kartik Talamadupula; Paul Schermerhorn; J. Benton; Subbarao Kambhampati; Matthias Scheutz
Robots are currently being used in and developed for critical HRI applications such as search and rescue. In these scenarios, humans operating under changeable and high-stress conditions must communicate effectively with autonomous agents, necessitating that such agents be able to respond quickly and effectively to rapidly-changing conditions and expectations. We demonstrate a robot planner that is able to utilize new information, specifically information originating in spoken input produced by human operators.
   We show that the robot is able to learn the pre- and postconditions of previously-unknown action sequences from natural language constructions, and immediately update (1) its knowledge of the current state of the environment, and (2) its underlying world model, in order to produce new and updated plans that are consistent with this new information. While we demonstrate in detail the robot's successful operation with a specific example, we also discuss the dialogue module's inherent scalability, and investigate how well the robot is able to respond to natural language commands from untrained users.
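   The run-time model update described above can be illustrated with a minimal STRIPS-style action registry; the representation and all names here are hypothetical stand-ins for the system's actual planner interface, which derives these updates from parsed spoken language:

```python
# Minimal STRIPS-style domain; the real system builds these updates
# from natural language, and the action and fact names are hypothetical.
domain = {
    "goto": {"pre": {"at_base"}, "add": {"at_target"}, "del": {"at_base"}},
}

def learn_action(name, pre, add, delete):
    """Register a previously-unknown action's pre- and postconditions."""
    domain[name] = {"pre": set(pre), "add": set(add), "del": set(delete)}

def applicable(state, name):
    """An action applies when its preconditions hold in the state."""
    return domain[name]["pre"] <= state

def apply_action(state, name):
    """Postconditions: remove the delete effects, add the add effects."""
    a = domain[name]
    return (state - a["del"]) | a["add"]

# "To defuse a bomb you must be next to it; afterwards it is safe."
learn_action("defuse", pre={"next_to_bomb"}, add={"bomb_safe"},
             delete={"next_to_bomb"})
```

   Once the new action model is registered, the planner can immediately include it when replanning against the updated world state, which is the behavior the abstract demonstrates.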
Talking with robots about objects: a system-level evaluation in HRI BIBAFull-Text 479-486
  Julia Peltason; Nina Riether; Britta Wrede; Ingo Lütkebohle
We present the design process, realization, and evaluation of a robot system for interactive object learning. The system-oriented evaluation, in particular, addresses an open problem in the evaluation of systems where overall user satisfaction depends not only on the performance of the parts, but also on their combination and on user behavior. Based on the PARADISE method known from spoken dialog systems, we defined and applied internal and external metrics for fine-grained and largely automatable identification of such relationships. Through an evaluation with n=28 subjects, indicator functions explaining up to 55% of the variation in several satisfaction metrics were found. Furthermore, we demonstrate that the system's interaction style reduces the need for instruction and successfully recovers from partial failures.

Workshops & tutorials

Human-agent-robot teamwork BIBAFull-Text 487-488
  Jeffrey M. Bradshaw; Virginia Dignum; Catholijn M. Jonker; Maarten Sierhuis
Teamwork has become a widely accepted metaphor for describing the nature of multi-robot and multi-agent cooperation. By virtue of teamwork models, team members attempt to manage general responsibilities and commitments to each other in a coherent fashion that both enhances performance and facilitates recovery when unanticipated problems arise. Whereas early research on teamwork focused mainly on interaction within groups of autonomous agents or robots, there is a growing interest in leveraging human participation effectively. Unlike autonomous systems designed primarily to take humans out of the loop, many important applications require people, agents, and robots to work together in close and relatively continuous interaction. For software agents and robots to participate in teamwork alongside people in carrying out complex real-world tasks, they must have some of the capabilities that enable natural and effective teamwork among groups of people. Just as important, developers of such systems need tools and methodologies to assure that such systems will work together reliably and safely, even when they have been designed independently.
   The purpose of the HART workshop is to explore theories, methods, and tools in support of humans, agents and robots working together in teams. Position papers that combine findings from fields such as computer science, artificial intelligence, cognitive science, anthropology, social and organizational psychology, human-computer interaction to address the problem of HART are strongly encouraged. The workshop will formulate perspectives on the current state-of-the-art, identify key challenges and opportunities for future studies, and promote community-building among researchers and practitioners.
   The workshop will be structured around four two-hour sessions on themes relevant to HART. Each session will consist of presentations and questions on selected position papers, followed by a whole-group discussion of the current state-of-the-art and the key challenges and research opportunities relevant to the theme. During the final hour, the workshop organizers will facilitate a discussion to determine next steps. The workshop will be deemed a success when collaborative scientific projects for the coming year are defined, and publication venues are explored. For example, results from the most recent HART workshop (Lorentz Center, Leiden, The Netherlands, December 2010) will be reflected in a special issue of IEEE Intelligent Systems on HART that is slated to appear in January/February 2012.
Advances in tactile sensing and touch based human-robot interaction BIBAFull-Text 489-490
  Giorgio Cannata; Fulvio Mastrogiovanni; Giorgio Metta; Lorenzo Natale
The problem of "providing robots with the sense of touch" is fundamental in order to develop the next generations of robots capable of interacting with humans in different contexts: in daily housekeeping activities, as working partners or as caregivers, just to name a few.
   From a low-level perspective, tactile sensing makes it possible to measure or estimate physical properties of manipulated or touched objects, while feedback from tactile sensors enables the detection and safe control of the interaction between the robot and objects or humans. From a high-level perspective, touch-based cognitive processes can be enabled by developing robot body self-awareness capabilities and by differentiating the "self" from the "external space", thereby opening relevant new problems in robotics.
   The objective of this Workshop is to present and discuss the most recent achievements in the area of tactile sensing, from technological aspects up to the application problems where tactile feedback plays a fundamental role.
   The Workshop will cover, but will not be limited to, the following three areas:
  • 1. Technological aspects of robot artificial skin design and implementation
        including advanced transduction devices, large-scale sensing technologies,
        embedded electronics, system level solutions, etc.
  • 2. Software and algorithmic aspects related to tactile data processing:
        software engineering, robot control, touch-based reactive behaviors, touch
        classification, object recognition, etc.
  • 3. Cognitive issues related, but not limited, to skin-based behaviors and task
        level control, including: human-robot interaction, learning and assistive
        technologies, etc.
Gaze in HRI: from modeling to communication BIBAFull-Text 491-492
  Frank Broz; Hagen Lehmann; Yukiko Nakano; Bilge Mutlu
The purpose of this half-day workshop is to explore the role of social gaze in human-robot interaction: both how to measure social gaze behavior in humans and how to implement it in robots that interact with them. Gaze directed at an interaction partner has become a subject of increased attention in human-robot interaction research. While traditional robotics research has focused robot gaze solely on the identification and manipulation of objects, researchers in HRI have come to recognize that gaze is a social behavior in addition to a way of sensing the world. This workshop will approach the problem of understanding the role of social gaze in human-robot interaction from the dual perspectives of investigating human-human gaze for design principles to apply to robots and of experimentally evaluating human-robot gaze interaction in order to assess how humans engage in gaze behavior with robots.
   Computational modeling of human gaze behavior is useful for human-robot interaction in a number of ways. Such models can enable a robot to perceive information about the state of the human in the interaction and adjust its behavior accordingly. Additionally, more human-like gaze behavior may make a person more comfortable and engaged during an interaction. It is known that the gaze pattern of a social interaction partner has a strong impact on one's own interaction behavior; therefore, the experimental verification of robot gaze policies is extremely important. Appropriate gaze behavior is critical for establishing joint attention, which enables humans to engage in collaborative activities and gives structure to social interactions. There is still much to be learned about which properties of human-human gaze should be transferred to human-robot gaze and how to model human-robot gaze for autonomous robots. The goal of the workshop is to exchange ideas and to develop and improve methodologies for this growing area of research.
ROS and Rosbridge: roboticists out of the loop BIBAFull-Text 493-494
  Christopher Crick; Graylin Jay; Sarah Osentoski; Odest Chadwicke Jenkins
The advent of ROS, the Robot Operating System, has finally made it possible to implement and use state-of-the-art navigation and manipulation algorithms on widely available, inexpensive standard robot platforms. With the addition of the Rosbridge application programming interface, interface designers and applications programmers can create robot interfaces and behaviors without venturing into the specialized world of robotics engineering. This tutorial introduces ROS and Rosbridge and shows how quickly and easily these tools can be used to design and conduct large-scale online HRI experiments, access algorithms for autonomous robot behavior, and leverage the huge ecosystem of general-purpose web-based and application-oriented software engineering for robotics and HRI research. Tutorial attendees will learn the basics of autonomous and teleoperated navigation and manipulation, as well as interface design for online interaction with robots. During the tutorial they will design and write their own remote-presence application, as well as develop strategies for incorporating autonomy and handling data collection.
Cognitive science and socio-cognitive theory for the HRI practitioner BIBAFull-Text 495-496
  Jeffrey M. Bradshaw; J. Chris Forsythe
This tutorial provides a synopsis of key findings and theoretical advances from cognitive science and socio-cognitive theory, with examples of how the results of this research can be applied to the design of human-robotic systems. Topics covered will run the gamut from basic cognitive science (e.g., perception, attention, learning and memory, information processing, multi-tasking, conscious awareness, individual differences) to socio-cognitive issues (e.g., theories of social interaction, dynamic functional allocation, mixed-initiative interaction, human-agent-robot teamwork, coactive design, theory of organizations). Additionally, the tutorial will address new technologies that attempt to leverage the current state of theory (e.g., neuroergonomics, brain-machine interfaces, detection of cognitive states, robotic prostheses and orthotics, cognitive and sensory prostheses). Throughout the tutorial, the presenters will give descriptions and demonstrations of working systems that exemplify the principles being taught. The presenters have separately given highly successful tutorials on relevant subjects at workshops and conferences such as CHI and HCI International, as well as in a variety of industrial and government settings; in this tutorial, they bring their combined experience to bear on issues of specific interest to the HRI community.