
Proceedings of the 2013 ACM/IEEE International Conference on Human-Robot Interaction

Fullname: Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction
Editors: Hideaki Kuzuoka; Vanessa Evers; Michita Imai; Jodi Forlizzi
Location: Tokyo, Japan
Dates: 2013-Mar-03 to 2013-Mar-06
Standard No: ISBN 978-1-4673-3055-8; hcibib: HRI13
  1. How do we perceive robots?
  2. Groups and public places
  3. Plenary talk by Yuichiro Anzai
  4. HRI 2013 late breaking results and poster session
  5. Trust, help, and influence
  6. Panel session
  7. Companions, collaboration, and control
  8. Verbal and non-verbal behavior
  9. Is the robot like me?
  10. Video session
  11. Plenary talk by Tomotaka Takahashi
  12. Workshops

How do we perceive robots?

The influence of height in robot-mediated communication, pp. 1-8
  Irene Rae; Leila Takayama; Bilge Mutlu
A large body of research in human communication has shown that a person's height plays a key role in how persuasive, attractive, and dominant others judge the person to be. Robotic telepresence systems -- systems that combine video-conferencing capabilities with robotic navigation to allow geographically dispersed people to maneuver in remote locations -- represent remote users, operators, to local users, locals, through the use of an alternate physical representation. In this representation, physical characteristics such as height are dictated by the manufacturer of the system. We conducted a two-by-two (relative system height: shorter vs. taller; team role: leader vs. follower) between-participants study (n = 40) to investigate how the system's height affects the local's perceptions of the operator and subsequent interactions. Our findings show that, when the system was shorter than the local and the operator was in a leadership role, the local found the operator to be less persuasive. Furthermore, having a leadership role significantly affected the local's feelings of dominance with regard to being in control of the conversation.
Evaluating the effects of limited perception on interactive decisions in mixed robotic domains, pp. 9-16
  Aris Valtazanos; Subramanian Ramamoorthy
Many robotic applications feature a mixture of interacting teleoperated and autonomous robots. In several such domains, human operators must make decisions using very limited perceptual information, e.g. by viewing only the noisy camera feed of their robot. There are many interaction scenarios where such restricted visibility impacts teleoperation performance, and where the role of autonomous robots needs to be reinforced. In this paper, we report on an experimental study assessing the effects of limited perception on human decision making, in interactions between autonomous and teleoperated NAO robots, where subjects do not have prior knowledge of how other agents will respond to their decisions. We evaluate the performance of several subjects under varying perceptual constraints in two scenarios: a simple cooperative task requiring collaboration with an autonomous robot, and a more demanding adversarial task, where an autonomous robot is actively trying to outperform the human. Our results indicate that limited perception has minimal impact on user performance when the task is simple. By contrast, when the other agent becomes more strategic, restricted visibility has an adverse effect on most subjects, with the performance level even falling below that achieved by an autonomous robot with identical restrictions. Our results could inform decisions about the division of control between humans and robots in mixed-initiative systems, and in determining when autonomous robots should intervene to assist operators.
Supervisory control of multiple social robots for navigation, pp. 17-24
  Kuanhao Zheng; Dylan F. Glas; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
This paper presents a human study and system implementation for the supervisory control of multiple social robots for navigational tasks. We studied the acceptable range of speed for robots interacting with people through navigation, and we discovered that entertaining people by speaking during navigation can increase people's tolerance toward robots' slow locomotion speed. Based on these results and using a robot safety model developed to ensure safety of robots during navigation, we implemented an algorithm which can proactively adjust robot behaviors during navigation to improve the performance of a human-robot team consisting of a single operator and multiple mobile social robots. Finally, we implemented a semi-autonomous robot system and conducted experiments in a shopping mall to verify the effectiveness of our proposed methods in a real-world environment.

Groups and public places

Eyewitnesses are misled by human but not robot interviewers, pp. 25-32
  Cindy L. Bethel; Deborah K. Eakin; Sujan Anreddy; James Kaleb Stuart; Daniel Carruth
This paper presents results from a study to determine whether eyewitness memory was impacted by a human versus a robot interviewer presenting misleading post-event information. The study was conducted with 101 participants who viewed a slideshow depicting the events of a crime. All of the participants interacted with the humanoid robot, NAO, by playing a trivia game. Participants were then interviewed by either a human or a robot interviewer that presented either control or misleading information about the events depicted in the slideshow. This was followed by another filler interval task of trivia with the robot. Following the interview and robot interactions, participants completed a paper-and-pencil post-event memory test to determine their recall of the events of the slideshow. The results indicated that eyewitnesses were misled by a human interviewer (t(46)=2.79, p<0.01, d=0.83) but not by a robot interviewer (t(46)=0.34, p>0.05). The results of this research could have strong implications for the gathering of sensitive information from an eyewitness about the events of a crime.
Human-robot cross-training: computational formulation, modeling and evaluation of a human team training strategy, pp. 33-40
  Stefanos Nikolaidis; Julie Shah
We design and evaluate human-robot cross-training, a strategy widely used and validated for effective human team training. Cross-training is an interactive planning method in which a human and a robot iteratively switch roles to learn a shared plan for a collaborative task.
   We first present a computational formulation of the robot's inter-role knowledge and show that it is quantitatively comparable to the human mental model. Based on this encoding, we formulate human-robot cross-training and evaluate it in human subject experiments (n = 36). We compare human-robot cross-training to standard reinforcement learning techniques, and show that cross-training provides statistically significant improvements in quantitative team performance measures. Additionally, significant differences emerge in the perceived robot performance and human trust. These results support the hypothesis that effective and fluent human-robot teaming may be best achieved by modeling effective practices for human teamwork.
Sensors in the wild: exploring electrodermal activity in child-robot interaction, pp. 41-48
  Iolanda Leite; Rui Henriques; Carlos Martinho; Ana Paiva
Recent advances in biosensor technology have enabled commercial wireless sensors that can measure electrodermal activity (EDA) in users' everyday settings. In this paper, we investigate the potential benefits of measuring EDA to better understand child-robot interaction in two distinct directions: to characterize and evaluate the interaction, and to dynamically recognize users' affective states. To do so, we present a study in which 38 children interacted with an iCat robot while wearing a wireless sensor that measured their electrodermal activity. We found that different patterns of electrodermal variation emerge for different supportive behaviours elicited by the robot and for different affective states of the children. The results also yield significant correlations between statistical features extracted from the signal and surveyed parameters regarding how children perceived the interaction and their affective state.
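The "statistical features extracted from the signal" in studies like this one are typically simple summary measures computed over the EDA trace. The sketch below is illustrative only: the function name, the feature set, and the peak-counting heuristic are assumptions for exposition, not the features used by the authors.

```python
from statistics import mean, stdev

def eda_features(signal, fs):
    """Coarse statistical features over an EDA (skin conductance) signal.

    signal: list of samples (e.g. in microsiemens); fs: sampling rate in Hz.
    """
    duration_s = len(signal) / fs
    diffs = [b - a for a, b in zip(signal, signal[1:])]
    # Count local maxima as a rough proxy for skin-conductance responses.
    peaks = sum(1 for a, b in zip(diffs, diffs[1:]) if a > 0 >= b)
    return {
        "mean": mean(signal),
        "std": stdev(signal),
        "slope": (signal[-1] - signal[0]) / duration_s,  # overall drift
        "peaks_per_min": 60.0 * peaks / duration_s,
    }
```

Features of this kind can then be correlated with questionnaire scores, as the abstract describes.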
Identifying people with soft biometrics at Fleet Week, pp. 49-56
  Eric Martinson; Wallace Lawson; Greg Trafton
Person identification is a fundamental robotic capability for long-term interactions with people. It is important to know with whom the robot is interacting for social reasons, as well as to remember user preferences and interaction histories. There exist, however, a number of different features by which people can be identified. This work describes three alternative soft biometrics (clothing, complexion, and height) that can be learned in real-time and utilized by a humanoid robot in a social setting for person identification. The use of these biometrics is then evaluated as part of a novel experiment in robotic person identification carried out at Fleet Week, New York City, in May 2012. In this experiment, the robot Octavia employed soft biometrics to discriminate between groups of 3 people. As part of the study, 202 volunteers interacted with Octavia from multiple locations in a challenging environment.
Understanding suitable locations for waiting, pp. 57-64
  Takuya Kitade; Satoru Satake; Takayuki Kanda; Michita Imai
This study addresses a robot that waits for users while they shop. In order to wait, the robot needs to understand which locations are appropriate for waiting. We investigated how people choose locations for waiting, and revealed that they are concerned with "disturbing pedestrians" and "disturbing shop activities". Using these criteria, we developed a classifier of waiting locations. "Disturbing pedestrians" is estimated from statistics of pedestrian trajectories, which are observed with a human-tracking system based on laser range finders. "Disturbing shop activities" is estimated based on shop visibility. We evaluated this autonomous waiting behavior in a shopping-assist scenario. The experimental results revealed that users found that the autonomous robot chose more appropriate waiting locations than a robot that chose randomly or one positioned manually by the users themselves.

Plenary talk by Yuichiro Anzai

Human-robot interaction by information sharing, pp. 65-66
  Yuichiro Anzai

HRI 2013 late breaking results and poster session

Human pointing as a robot directive, pp. 67-68
  Syed Shaukat Raza Abidi; Mary-Anne Williams; Benjamin Johnston
People are accustomed to directing other people's attention using pointing gestures. People enact and interpret pointing commands often and effortlessly. If robots understand human intentions (e.g. as encoded in pointing gestures), they can reach higher levels of engagement with people. This paper explores methods that robots can use to allow people to direct them to move to a specific location, using an inexpensive Kinect sensor. The joint positions of the pointing human's right arm and hand in 3D space are extracted and used by the robot to identify the direction of the user's pointing gesture. We evaluated the proposed approach on a PR2 robot whose task was to move to a location that a human pointed to on the ground. This method enables the robot to follow human pointing gestures on the fly and in real time. It will be deployed on a PR2 in the wild in a new building environment where the robot will be expected to interact with people and interpret their pointing behaviors.
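A common way to turn skeleton joints into a pointed-at ground location is to cast a ray along the forearm and intersect it with the floor plane. The sketch below illustrates that geometry only; it is not the paper's implementation, and the joint choice (elbow-to-hand) and coordinate convention (z up, ground at z = 0) are assumptions.

```python
def pointing_target(elbow, hand, ground_z=0.0):
    """Intersect the forearm ray (elbow -> hand) with the ground plane.

    elbow, hand: (x, y, z) joint positions with z pointing up, e.g. from
    a Kinect skeleton. Returns the (x, y) ground point the user points
    at, or None if the ray does not descend toward the ground.
    """
    dx, dy, dz = (h - e for h, e in zip(hand, elbow))
    if dz >= 0:                      # arm level or raised: no ground hit
        return None
    t = (ground_z - hand[2]) / dz    # solve hand_z + t*dz == ground_z
    return hand[0] + t * dx, hand[1] + t * dy

# Arm descending from ~1.2 m: the target lands roughly 2.1 m ahead.
print(pointing_target((0.0, 0.0, 1.4), (0.3, 0.0, 1.2)))
```

The resulting (x, y) point can be handed to the robot's navigation stack as a goal.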
BioSleeve: a natural EMG-based interface for HRI, pp. 69-70
  Christopher Assad; Michael Wolf; Theodoros Theodoridis; Kyrre Glette; Adrian Stoica
This paper presents the BioSleeve, a new gesture-based human interface for natural robot control. Detailed activity of the user's hand and arm is acquired via surface electromyography sensors and an inertial measurement unit that are embedded in a forearm sleeve. The BioSleeve's accompanying software decodes the sensor signals, classifies gesture type, and maps the result to output commands to an external robot. The current BioSleeve system can reliably decode as many as sixteen discrete hand gestures and estimate the continuous orientation of the forearm. The gestures are used in several modes: for supervisory point-to-goal commands, virtual joystick for teleoperation, and high degree-of-freedom (DOF) mimicked manipulation. We report results from three control applications: a manipulation robot, a small ground vehicle, and a 5-DOF hand.
Enabling clinicians to rapidly animate robots, pp. 71-72
  John Alan Atherton; Michael A. Goodrich
Robots show potential to help people with autism spectrum disorder (ASD). A great obstacle in using robots as part of therapy is customizing robot behavior. Clinicians need a low-cost way to rapidly animate robots. There is a tradeoff between quickly creating animations and creating quality animations, but both aspects are important. Based on clinician feedback, we designed and developed two user interfaces for rapidly animating robots: a mouse-based interface and a motion-tracking interface. We examine this tradeoff with a comparative user study for these interfaces. Novices and clinicians were able to successfully create animations with both interfaces with little training. We learned that neither interface alone excels in both quickness and quality, but that a thoughtful combination of both interfaces has potential to yield a good balance between rapid creation and quality.
Using human approach paths to improve social navigation, pp. 73-74
  Eleanor Avrunin; Reid Simmons
A robotic therapy for children with TBI, pp. 75-76
  Alex Barco; Jordi Albo-Canals; Miguel Kaouk Ng; Carles Garriga; Laura Callejón; Marc Turón; Claudia Gómez; Anna López-Sala
This paper introduces a study comparing a counseling and education program directed at parents with a robot-based cognitive rehabilitation program aimed at children. The essentials of this program are described in detail. The aim is neuropsychological rehabilitation, addressing cognitive, emotional, behavioral, and psychosocial deficits caused by brain damage.
Emergence of turn-taking in unstructured child-robot social interactions, pp. 77-78
  Paul Baxter; Rachel Wood; Ilaria Baroni; James Kennedy; Marco Nalin; Tony Belpaeme
The 'Sandtray' has been designed as a platform to examine social interactions in which the interaction is not constrained a priori. A pilot study has been conducted with children to assess the suitability of the Sandtray for social HRI studies, using a wizard-of-oz robot control scheme. One aspect of importance is whether the children (previously unfamiliar with both the robot and the Sandtray) regard the robot as a potential social agent, or whether the Sandtray itself is of greater interactional interest. In this paper, observations on the children's behaviour with respect to the robot are reported. It is shown that the children engage in a turn-taking strategy, even though there is no such constraint imposed by the task, or by the behaviour of the robot. This indicates that in this context, the robot is viewed as a social agent.
Robot embodiment, operator modality, and social interaction in tele-existence: a project outline, pp. 79-80
  Christian Becker-Asano; Severin Gustorff; Kai Oliver Arras; Kohei Ogawa; Shuichi Nishio; Hiroshi Ishiguro; Bernhard Nebel
This paper outlines our ongoing project, which aims to investigate the effects of robot embodiment and operator modality on an operator's task efficiency and concomitant level of copresence in remote social interaction. After a brief introduction to related work has been given, five research questions are presented. We discuss how these relate to our choice of the two robotic embodiments "DARYL" and "Geminoid F" and the two operator modalities "console interface" and "head-mounted display". Finally, we postulate that the usefulness of one operator modality over the other will depend on the type of situation an operator has to deal with. This hypothesis is currently being investigated empirically using DARYL at Freiburg University.
Perceptions of affective expression in a minimalist robotic face, pp. 81-82
  Casey C. Bennett; Selma Šabanovic
This study explores deriving minimal features for a robotic face to convey information (via facial expressions) that people can perceive and understand. Recent research in computer vision has shown that a small number of moving points/lines can be used to capture the majority of information (~95%) in human facial expressions. Here, we apply such findings to a minimalist robot face; recognition rates were similar to those of more complex robot faces. The project aims to answer a number of fundamental questions about robotic face design, as well as to develop inexpensive, replicable robotic faces for experimental purposes.
Gamification of a recycle bin with emoticons, pp. 83-84
  Jose Berengueres; Fatma Alsuwairi; Nazar Zaki; Tony Ng
We introduce an emoticon-bin, a recycle bin that rewards users with smiles and sounds. We show that by exploiting human responsiveness to emoticons, recycling rates increase by a factor of three.
iProgram: intuitive programming of an industrial HRI cell, pp. 85-86
  Jürgen Blume; Alexander Bannat; Gerhard Rigoll
This paper introduces a concept for intuitive programming of an industrial HRI cell by non-experts. The main idea is to combine recently available technologies (speech recognition, 3D visual surveillance with person tracking and object recognition, and handheld devices) with teach-in, visual programming, and instruction-based programming of compliant robots for heavy loads. With these components, an easy method for programming or adapting workflows within an industrial packaging cell was realized and tested in a real factory environment.
Position-invariant, real-time gesture recognition based on dynamic time warping, pp. 87-88
  Saša Bodiroza; Guillaume Doisy; Verena Vanessa Hafner
To achieve improved human-robot interaction, it is necessary to allow the human participant to interact with the robot in a natural way. In this work, a gesture recognition algorithm based on dynamic time warping was implemented, with a use-case scenario of natural interaction with a mobile robot. Inputs are gesture trajectories obtained using a Microsoft Kinect sensor. Trajectories are stored in the person's frame of reference. Furthermore, the recognition is position-invariant, meaning that only one learned sample is needed to recognize the same gesture performed at another position in the gestural space. In experiments, a set of gestures for a robot waiter was used to train the gesture recognition algorithm. The experimental results show that the proposed modifications of the standard gesture recognition algorithm improve the robustness of the recognition.
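The position-invariant matching described above can be illustrated with a minimal sketch: trajectories are shifted into the gesture's own frame (so the start point becomes the origin) before a standard DTW comparison against stored templates. This is an illustrative reconstruction under those assumptions, not the authors' algorithm; the nearest-template rule and all names are hypothetical.

```python
from math import dist, inf

def dtw(a, b):
    """Dynamic time warping distance between two 2-D point sequences."""
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = dist(a[i - 1], b[j - 1])
            D[i][j] = step + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def normalize(traj):
    """Shift a trajectory to start at the origin (position invariance)."""
    x0, y0 = traj[0]
    return [(x - x0, y - y0) for x, y in traj]

def classify(sample, templates):
    """Label of the nearest stored template under DTW."""
    s = normalize(sample)
    return min(templates, key=lambda label: dtw(s, normalize(templates[label])))
```

Because of the normalization step, one stored sample per gesture suffices to recognize the same shape performed anywhere in the gestural space.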
Directly or on detours?: how should industrial robots approximate humans? pp. 89-90
  Dino Bortot; Maximilian Born; Klaus Bengler
Growing interest in industrial human-robot interaction (HRI) applications makes it necessary to look deeper into the design of systems where humans collaborate, interact, or at least coexist with industrial robots. This study investigates the influence of the trajectory of an industrial robot's Tool Center Point (TCP) on user well-being as well as human performance in cooperative HRI scenarios. To this end, a study with a total of 19 participants was conducted. The subjects had to perform several tasks (visually interacting with the robot and performing an audio n-back task) while the robot made different motions in their vicinity. Results show that variable, i.e. non-predictable, robot motions lead to reduced human well-being and performance. Consequently, non-predictable motions are not suited for use in HRI. Well-being and performance can be enhanced if the robot moves directly on a straight line from start to finish.
Goal inferences about robot behavior: goal inferences and human response behaviors, pp. 91-92
  Hedwig Anna Theresia Broers; Jaap Ham; Ron Broeders; P. Ravindra de Silva; Michio Okada
This explorative research focused on the goal inferences human observers draw based on a robot's behavior, and the extent to which those inferences predict people's behavior in response to that robot. Results show that different robot behaviors elicit different response behaviors from people.
Towards a comprehensive chore list for domestic robots, pp. 93-94
  Maya Cakmak; Leila Takayama
We present an analysis of household chore lists with an eye towards building a comprehensive task list for domestic robots. We identify the common structures of cleaning and organizing tasks, and characterize properties of their targets. Based on this analysis, we discuss the necessity for end-user programming of domestic robots at different levels.
Influence of robot-issued joint attention cues on gaze and preference, pp. 95-96
  Sonja Caraian; Nathan Kirchner
If inadvertently perceived as Joint Attention, a robot's incidental behaviors could potentially influence the preferences of observing humans. A study was conducted with 16 robot-naïve participants to explore the influences of robot-issued Joint Attention cues during decision-making. The results suggest that Joint Attention is transferable to HRI and can influence the process and outcome of human decision-making.
Effects of robot capability on user acceptance, pp. 97-98
  Elizabeth Cha; Anca D. Dragan; Siddhartha S. Srinivasa
Potential use of robots in Taiwanese nursing homes, pp. 99-100
  Wan-Ling Chang; Selma Šabanovic
Nursing homes and long-term care institutions often need technological assistance because of the high ratio of low-functioning residents coupled with a shortage of caregivers. To explore the potential uses of emerging robotic technologies in nursing homes, we apply Forlizzi's concept of the product ecology and a user-centered design approach involving a field study and focus groups to understand what kind of robot design would be suitable in the nursing home context. Our preliminary results show that instead of a robot which completely replaces human labor, nursing home staff prefer robot assistants that fit into their working process. We also learned that the most appropriate functions for robots in nursing homes were helping with minor tasks and encouraging social interaction among residents.
Use of seal-like robot PARO in sensory group therapy for older adults with dementia, pp. 101-102
  Wan-Ling Chang; Selma Šabanovic; Lesa Huber
This work presents the preliminary results of an eight-week study of the seal-like robot PARO being used in a sensory therapy activity in a local nursing home. Participants were older adults with different levels of cognitive impairment. We analyzed participant behaviors in video recorded during the weekly interactions between older adults, a therapist, and PARO. We found that PARO's continued use led to a steady increase in physical interaction between older adults and the robot and an increasing willingness among participants to interact with it.
Human-agent teaming for robot management in multitasking environments, pp. 103-104
  Jessie Y. C. Chen; Stephanie Quinn; Julia Wright; Daniel Barber; David Adams; Michael Barnes
In the current experiment, we simulated a multitasking environment and evaluated the effects of an intelligent agent, RoboLeader, on the performance of human operators who had the responsibility of managing the plans/routes for three vehicles (their own manned ground vehicle, an aerial robotic vehicle, and a ground robotic vehicle) while maintaining proper awareness of their immediate environment (i.e., threat detection). Results showed that RoboLeader's level of autonomy had a significant impact on participants' concurrent target detection task. Participants detected more targets in the Semi-Auto and Full-Auto conditions than in the Manual condition. Participants reported significantly higher workload in the Manual condition than in the two RoboLeader conditions (Semi-Auto and Full-Auto). Operator spatial ability also had a significant impact on target detection and situation awareness performance measures.
Have you ever lied?: the impacts of gaze avoidance on people's perception of a robot, pp. 105-106
  Jung Ju Choi; Yunkyung Kim; Sonya S. Kwak
In human-human interaction, gaze avoidance is usually interpreted as an intention to escape from an embarrassing situation. This study explores whether gaze avoidance by a robot can be perceived as intentional, and whether this intention can lead a robot to be perceived as sociable and intelligent. We executed a 2 (question type: normal vs. embarrassing) x 2 (gaze type: gaze vs. gaze avoidance) within-participants experiment (N=24). In an embarrassing situation, a robot with gaze avoidance was perceived as more sociable and intelligent than a robot that held its gaze, while the robot that held its gaze in a normal situation was perceived as more sociable and intelligent than a robot with gaze avoidance. Implications for the design of human-robot interactions are discussed.
The impacts of intergroup relations and body zones on people's acceptance of a robot, pp. 107-108
  Jung Ju Choi; Yunkyung Kim; Sonya S. Kwak
This study explores social distance management as a strategic way to alleviate people's dissatisfaction with a vacuum cleaning robot, particularly when the robot asks for an unpleasant favor. We executed a 2 (intergroup relations: out-group vs. in-group) x 3 (body zones: close vs. 40 cm vs. 1 m) mixed-design experiment (N=36). People evaluated the impressions and service of out-group robots more positively as the distance at which the favor was asked became shorter, while they preferred in-group robots as the distance became greater.
Interactive display robot: projector robot with natural user interface, pp. 109-110
  Sun-Wook Choi; Woong-Ji Kim; Chong Ho Lee
Combining a small hand-held projector, a mobile robot, an RGB-D sensor, and a pan/tilt device, the interactive display robot can move freely in indoor spaces and project onto any surface. In addition, the user can control the robot and the projection direction through a natural user interface.
Attention control system considering the target person's attention level, pp. 111-112
  Dipankar Das; Mohammed Moshiul Hoque; Yoshinori Kobayashi; Yoshinori Kuno
In this paper, we propose an attention control system for social robots that attracts and controls the attention of a target person depending on his/her current attentional focus. The system recognizes the current task of the target person and estimates his/her level of focus by using the "task-related behavior pattern" of the target person. The attention level is used to determine suitable cues to attract the target person's attention toward the robot. The robot detects the interest or willingness of the target person to interact with it. Then, depending on the level of interest, the robot displays an awareness signal and shifts the person's attention toward an intended goal direction.
Towards empathic artificial tutors, pp. 113-114
  Amol Deshmukh; Ginevra Castellano; Arvid Kappas; Wolmet Barendregt; Fernando Nabais; Ana Paiva; Tiago Ribeiro; Iolanda Leite; Ruth Aylett
In this paper we discuss how the EMOTE project will design, develop and evaluate a new generation of artificial embodied tutors that have perceptive capabilities to engage in empathic interactions with learners in a shared physical space.
Improving the human-robot interaction through emotive movements: a special case: walking, pp. 115-116
  Matthieu Destephe; Takayaki Maruyama; Massimiliano Zecca; Kenji Hashimoto; Atsuo Takanishi
Walking is one of the most common activities that we perform every day. While the main goal of walking is to get from point A to point B, walking can also convey emotional cues in a social context. Those cues can be used to improve interactions or any messages we want to express. We observed a professional actress performing emotive walking and analyzed the recorded data. For each emotion, we found characteristic features which can be used to model gait patterns for humanoid robots. The findings were assessed by subjects who were asked to recognize the emotions displayed in the acts of walking.
Spatially unconstrained, gesture-based human-robot interaction, pp. 117-118
  Guillaume Doisy; Aleksandar Jevtic; Saša Bodiroza
For a human-robot interaction to take place, a robot needs to perceive humans. The space in which a robot can perceive humans is constrained by the limitations of the robot's sensors. These restrictions can be circumvented by the use of external sensors, as in intelligent environments; otherwise, humans have to ensure that they can be perceived. With the robotic platform presented here, the roles are reversed and the robot autonomously ensures that the human is within the area it perceives. This is achieved by a combination of hardware and algorithms capable of autonomously tracking the person, estimating their position and following them, while recognizing their gestures and moving through space.
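Person following of the kind described above is often realized with a simple proportional controller on the tracked person's range and bearing. The sketch below is illustrative only; the gains, limits, and function name are assumptions for exposition, not the authors' design.

```python
def follow_cmd(person_range, person_bearing, target_range=1.5,
               k_lin=0.8, k_ang=1.5, max_lin=0.6, max_ang=1.0):
    """Proportional velocity commands that keep a tracked person
    centered in the sensor's field of view and at target_range metres.

    person_range: distance to the person in metres.
    person_bearing: angle to the person in radians (0 = straight ahead).
    Returns (linear_velocity m/s, angular_velocity rad/s), clamped.
    """
    lin = k_lin * (person_range - target_range)   # close the range gap
    ang = k_ang * person_bearing                  # turn toward the person
    clamp = lambda v, lim: max(-lim, min(lim, v))
    return clamp(lin, max_lin), clamp(ang, max_ang)
```

Keeping the bearing near zero is what lets the robot, rather than the human, maintain perceivability.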
Where to look and who to be: designing attention and identity for search-and-rescue robots, pp. 119-120
  Lorin D. Dole; David M. Sirkin; Rebecca M. Currano; Robin R. Murphy; Clifford I. Nass
Participants taking cover from a simulated earthquake interacted with a search-and-rescue robot that paid attention either to them or to the environment, and that they thought was either controlled by a person or autonomous. In general, the robot elicited the strongest positive responses when it focused on participants who thought they were interacting with a person.
Loneliness makes the heart grow fonder (of robots): on the effects of loneliness on psychological anthropomorphism, pp. 121-122
  Friederike Eyssel; Natalia Reich
Sociality motivation represents an essential driving force for human behavior and well-being. If the need for affiliation is not satisfied and social interaction partners are unavailable, people might use an alternative strategy: nonhuman entities, such as pets or religious deities, are humanized and turned into social agents. We tested this notion in the context of social robotics and investigated whether activating feelings of loneliness would affect perceptions of anthropomorphism. Results showed that lonely people anthropomorphized the robot FloBi more than participants in the control group. Thus, importantly, users' motivational states need to be considered in the context of human-robot interaction (HRI), as they clearly affect judgments of the robotic interaction partner.
Development of a glove-based optical fiber sensor for applications in human-robot interaction, pp. 123-124
  Eric Fujiwara; Danilo Yugo Miyatake; Murilo Ferreira Marques Santos; Carlos Kenichi Suzuki
A glove-based optical fiber sensor for the measurement of finger movements, aimed at HRI applications, was developed. The device showed a good response in detecting angular displacements of the finger joints, making it suitable for further use in teleoperation and gesture-based robot navigation.
Question strategy and interculturality in human-robot interaction, pp. 125-126
  Mihoko Fukushima; Rio Fujita; Miyuki Kurihara; Tomoyuki Suzuki; Keiichi Yamazaki; Akiko Yamazaki; Keiko Ikeda; Yoshinori Kuno; Yoshinori Kobayashi; Takaya Ohyama; Eri Yoshida
This paper demonstrates the ways in which multi-party human participants in two language groups, Japanese and English, engage with a quiz robot when they are asked a question. We focus on both speech and bodily conduct, in which we discovered both universals and differences.
Embedded multimodal nonverbal and verbal interactions between a mobile toy robot and autistic children BIBAFull-Text 127-128
  Irini Giannopulu
We studied the multimodal nonverbal and verbal relationship between autistic children and a mobile toy robot during free, spontaneous game play. A range of cognitive nonverbal criteria, including eye contact, touch, manipulation, and posture, were analyzed, and the frequency of words and verbs was calculated. The embedded multimodal interactions of autistic children with a mobile toy robot suggest that this robot could be used as a neural orthosis to improve children's brain activity and encourage the child to express language.
Personal service: a robot that greets people individually based on observed behavior patterns BIBAFull-Text 129-130
  Dylan F. Glas; Kanae Wada; Masahiro Shiomi; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
We are developing an interactive service robot which provides personal greetings to customers, using a machine-learning approach based on observations of a customer's appearance or behavior from on-board or environmental sensors. For each visit, several features are recorded, such as "time of day" or "number of people in group." A set of classifiers trained by human coders compare the current features with the person's individual history, to determine an appropriate feature for a robot to speak about. This system enables the robot to make context-appropriate comments such as "good morning, you're here very early today." We present the design of our system and an encouraging set of preliminary prediction results based on one month of data taken from real customers at a shopping mall.
The influence of robot appearance on assessment BIBAFull-Text 131-132
  Kerstin Sophie Haring; Katsumi Watanabe; Celine Mougenot
This paper presents the influence of robot appearance on perception. The goal is to reach an initial understanding of which robot designs and features affect people. We explore differences in how people perceive pet, service, humanoid, and android robots. The associations, associated tasks, perceptions, fears, and expectations toward the four different types are evaluated.
Elementary science lesson delivered by robot BIBAFull-Text 133-134
  Takuya Hashimoto; Hiroshi Kobayashi; Alex Polishuk; Igor Verner
This paper juxtaposes science lessons on the topic of levers given to two sixth-grade classes, one in a Tokyo elementary school assisted by the android robot SAYA, and the other at the MadaTech science museum in Haifa aided by the humanoid RoboThespian. The one-hour lessons had the same outline, including a theoretical explanation, in-group experiments with lever balances, and assessment activities. The classes were instructed and managed by the robot-teachers through preprogrammed behaviors and remote operation. We compare the features of the educational interaction in the two lessons, elicited using the same questionnaire, assessment test, and video analysis.
Adopt-a-robot: a story of attachment BIBAFull-Text 135-136
  Damith C. Herath; Christian Kroos; Catherine Stevens; Denis Burnham
Robots have tentatively started to invade human spaces, but are still limited to very rudimentary forms such as robot vacuum cleaners and various entertainment platforms. Dramatic changes in the number of robots in homes and offices, however, can be foreseen for the near future as sensing, computing, and associated technologies mature. Currently, it is not known how we humans will treat machine companions once they are with us over prolonged periods of time and share our personal space. In this exploratory study we investigated whether participants would form a bond with a small, basic research robot in an adoption scenario in which the robot's initial interaction abilities were upgraded in two steps. We were particularly interested in whether any increase in attachment would be related to the two steps of progressively heightened technical sophistication of the robot over a prolonged (six-month) period of time.
Eliciting ideal tutor trait perception in robots: pinpointing effective robot design space elements for smooth tutor interactions BIBAFull-Text 137-138
  Jonathan S. Herberg; Dev C. Behera; Martin Saerbeck
To approach the physical design of a tutor robot, we obtained 3rd- to 5th-grade children's evaluations of the relative importance of tutor traits, and of which robot design categories they perceive as best embodying top tutor traits, in an initial exploratory interview study. Results indicate that prototypical "mechanoid" (humanoid or robotic) designs and animal-shaped designs (whether animal-like in outer cover or more visibly a robotic animal) are superior to object-based designs (desktop objects or geometric shapes). Furthermore, children's perception of top tutor traits can be better promoted by an animal-shaped design than by a mechanoid design, with less risk of unintended uncanny-valley effects. Implications for designing tutor-robot embodiments that buttress children's expectations of an ideal tutor and facilitate interactions, as well as future research directions, are discussed.
Learning from the web: recognition method based on object appearance from internet images BIBAFull-Text 139-140
  Enrique Hidalgo-Peña; Luis Felipe Marin-Urias; Fernando Montes-González; Antonio Marín-Hernández; Homero Vladimir Ríos-Figueroa
This work presents an object learning and recognition method for a humanoid robot. The method aims to take advantage of cloud resources, since it is based on image web search to build training sets for learning objects' appearance. If Internet access is unavailable, the robot asks a human to show it the objects and acquires the images using its camera. Through this technique, our method aims to be a flexible and natural framework for human-robot interaction and to provide the robot with as much autonomy as possible.
ASAHI: OK for failure: a robot for supporting daily life, equipped with a robot avatar BIBAFull-Text 141-142
  Yutaka Hiroi; Akinori Ito
This paper introduces a daily-life-support robot, ASAHI, equipped with a robot avatar that converses with the user using speech and gesture. It can perform simple support tasks, such as bringing an object, as well as following the user around the floor. ASAHI's distinctive feature is its ability to recover from failures, such as misrecognizing objects or losing the person it is following, by communicating with the user and expressing its internal states.
Robots that can feel the mood: context-aware behaviors in accordance with the activity of communications BIBAFull-Text 143-144
  Akira Imayoshi; Nagisa Munekata; Tetsuo Ono
In human communication, "social space" is regarded as a territory formed by a group; if it is intruded on by another person without reason, the group may feel uncomfortable. A robot that communicates with humans should observe this social rule. We therefore propose a robot system that can recognize the state of a social space by using the smartphones that users carry. We implemented this system and conducted experiments in which a robot joined participants' conversations while we varied the timing of its interruption. As a result, participants' impression of the robot using the proposed system was more favorable than their impression of the one that did not.
Development of RoboCup @home simulator: simulation platform that enables long-term large scale HRI BIBAFull-Text 145-146
  Tetsunari Inamura; Jeffrey Too Chuan Tan
Research on high-level human-robot interaction systems that aim at skill acquisition, learning of dialogue strategies, and so on requires a large-scale experience database built from social and embodied interaction experiments. However, with real robot systems, the costs of developing robots and performing many experiments would be prohibitive. With a virtual robot simulator, embodied interaction between virtual robots and real users is limited. We therefore propose an enhanced robot simulator that enables multiple users to connect to a central simulated world and to join that world through an immersive user interface. As an example task, we propose an application to RoboCup @Home tasks. In this paper we explain the configuration of our simulator platform and the feasibility of the system in RoboCup @Home.
Given that, should i respond?: contextual addressee estimation in multi-party human-robot interactions BIBAFull-Text 147-148
  Dinesh Babu Jayagopi; Jean-Marc Odobez
In this paper, we investigate the task of addressee estimation in multi-party interactions. For every utterance from a human participant, the robot should know whether it was being addressed, so as to respond and behave accordingly. Various cues can be exploited to accomplish this, the most important being the speaker's gaze. Several other cues can serve as contextual variables to improve estimation accuracy, for example the gaze of the other participants and the long-term or short-term dialog context. In this paper we investigate combining such information from diverse sources to improve the addressee estimation task. For this study, we use 11 interactions in which a humanoid NAO robot gives a quiz to two human participants.
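The cue-combination idea described above can be sketched with a toy scoring rule. The weights, threshold, and feature names below are illustrative assumptions, not the model trained in the paper:

```python
def addressed_to_robot(speaker_gazes_at_robot: bool,
                       listener_gazes_at_robot: bool,
                       robot_spoke_last: bool) -> bool:
    """Combine diverse cues and threshold the score: the speaker's gaze
    dominates, while the other participant's gaze and short-term dialog
    context (who spoke last) act as weaker contextual evidence."""
    score = 0.0
    score += 2.0 if speaker_gazes_at_robot else -1.0   # primary cue
    score += 0.5 if listener_gazes_at_robot else 0.0   # contextual cue
    score += 1.0 if robot_spoke_last else 0.0          # dialog context
    return score > 1.0

print(addressed_to_robot(True, False, False))   # speaker gaze alone suffices
print(addressed_to_robot(False, True, True))    # context alone does not
```

In practice such weights would be learned from annotated interactions rather than hand-set, but the structure — a primary gaze cue refined by contextual variables — is the same.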
The vernissage corpus: a conversational human-robot-interaction dataset BIBAFull-Text 149-150
  Dinesh Babu Jayagopi; Samira Sheikhi; David Klotz; Johannes Wienke; Jean-Marc Odobez; Sebastian Wrede; Vasil Khalidov; Laurent Nguyen; Britta Wrede; Daniel Gatica-Perez
We introduce a new conversational human-robot interaction (HRI) dataset with a real behaving robot inducing interactive behavior with and between humans. Our scenario involves the humanoid robot NAO explaining paintings in a room and then quizzing the participants, who are naive users. Since perceiving nonverbal cues, beyond the spoken words, plays a major role in social interaction and for socially interactive robots, we have extensively annotated the dataset. It has been recorded and annotated to benchmark many perceptual tasks relevant to enabling a robot to converse with multiple humans: speaker localization and speech segmentation; tracking, pose estimation, nodding, and visual focus of attention estimation in the visual domain; and audio-visual tasks such as addressee detection. NAO system states are also available. Compared to recordings made with a static camera, this corpus involves the head movement of a humanoid robot (due to gaze changes and nodding), posing challenges for visual processing. The significant background noise present in a real HRI setting also makes auditory tasks challenging.
Empathy between human and robot? BIBAFull-Text 151-152
  Doori Jo; Jooyun Han; Kyungmi Chung; Sukhan Lee
This paper aims at finding the answer to the essential question: Can people perceive a robot's presence as having a social existence? We attempt to apply a sociological and psychological approach to understand the influence of robot beings, by observing human emotion and perception changes while subjects watched a funny video clip in the presence of a robot or a human companion, each of which made their own typical laughing sounds. From this experiment, we found that the robot did not affect the human's positive emotions as much as a human companion did, but the robot did discourage negative emotions. However, the subjects were, in general, amused when they were watching the video with the robot. This amusement is similar to the contagious effect of sharing humor with another human being. Our findings suggest that the subjects accepted the robot's presence as a kind of existence empathically.
Interaction with an agent in blended reality BIBAFull-Text 153-154
  Yusuke Kanai; Hirotaka Osawa; Michita Imai
This paper proposes a Blended Reality Agent called "BReA" that can exist in both the real and the virtual world and has communicative advantages that are more than simply the sum of those of a robotic agent and an on-screen agent. BReA can seamlessly transfer between the real and virtual worlds, and users can communicate with it without interruption during the transfer. We conducted a field test and showed that users can recognize when it points from the virtual world to a real-world object after it moves from the real world to the virtual world. This result supports the hypothesis that communicating with BReA helps users recognize the link between real-world objects and virtual-world objects and information.
Robot confidence and trust alignment BIBAFull-Text 155-156
  Poornima Kaniarasu; Aaron Steinfeld; Munjal Desai; Holly Yanco
Trust in automation plays a crucial role in human-robot interaction and usually varies during interactions. In scenarios of shared control, the ideal pattern is for the user's real-time trust in the robot to align with robot performance. This should lead to an increased overall efficiency of the system by limiting under-trust and over-trust. However, users sometimes display incorrect trust and the ability to detect and alter user trust is important. This paper describes measures for real-time trust alignment.
What happens when a robot favors someone?: How a tour guide robot uses gaze behavior to address multiple persons while storytelling about art BIBAFull-Text 157-158
  Daphne E. Karreman; Gilberto U. Sepúlveda Bradford; Betsy E. M. A. G. van Dijk; Manja Lohse; Vanessa Evers
We report intermediate results of an ongoing study into the effectiveness of robot gaze behaviors when addressing multiple persons. The work is being carried out as part of the EU FP7 project FROG and concerns the design and evaluation of interactive behaviors of a tour guide robot. Our objective is to understand how to address and engage multiple visitors simultaneously. The robot engages small groups of visitors in interaction and offers information on objects of interest. In the current experiment, a robot tells three visitors about two different paintings. A 2 x 2 independent factorial design is used. The robot engages the three visitors in mutual gaze while looking at the artwork as it talks about it vs. only looking at the visitors (between subject-groups). Also, the robot 'favors' one of the three participants by directing its gaze at them more frequently and for longer than at the other two participants. We are interested in whether gaze at the object of interest and favoring through gaze affect the user's experience and knowledge retention. Preliminary results indicate that a robot that engages visitors in mutual gaze is seen as more humanlike, and that 'favoring' a person in a small group positively influences attitudes toward the robot.
The effects of familiarity and robot gesture on user acceptance of information BIBAFull-Text 159-160
  Aelee Kim; Younbo Jung; Kwanmin Lee; Jooyun Han
In this study, we explore how people respond to the gestures of a robot as well as how perceptions of a robot change as familiarity increases. We conducted an experiment over three weeks: first, we compared two groups (gesture vs. no gesture) to assess how gesture affects people's acceptance of information; second, we compared three different time points within each condition to examine whether contact frequency could influence the perception of the robot. The results showed that participants in the gesture condition felt greater social interaction, enjoyment, and engagement over the course of three weeks than participants in the no-gesture condition. In addition, the longitudinal comparisons showed an interesting pattern (a quadratic curve) of changes in enjoyment over the three weeks. This study demonstrates the positive effects of a robot's gestures and the important association between familiarity and perception changes in HRI.
Recognition for psychological boundary of robot BIBAFull-Text 161-162
  Chyon Hae Kim; Yumiko Yamazaki; Shunsuke Nagahama; Shigeki Sugano
We discuss the recognition of a robot's boundary. Humans consider robots' psychological boundaries, in addition to their physical ones, while interacting with them. We made two hypotheses regarding psychological boundaries: first, these boundaries are affected by the context surrounding the robot; second, they are controllable. We conducted an experiment with 60 Japanese males and females aged 18-27. Each participant was asked by a robot to "slide me slightly" while interacting with it. We analyzed the participants' actions and questionnaires to determine what counted as "me" for the participants. This analysis confirmed both hypotheses.
LMA based emotional motion representation using RGB-D camera BIBAFull-Text 163-164
  Woo Hyun Kim; Jeong Woo Park; Won Hyong Lee; Hui Sung Lee; Myung Jin Chung
In this paper, an emotional motion representation is proposed for human-robot interaction (HRI). The representation is based on Laban Movement Analysis (LMA) and trajectories of 3-dimensional whole-body joint positions captured with an RGB-D camera such as the Microsoft Kinect. The experimental results show that the proposed method distinguishes two types of human emotional motion well.
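As a rough illustration of LMA-style motion description (a minimal sketch with made-up proxy statistics, not the paper's actual feature set), a joint trajectory can be reduced to crude Effort-like descriptors:

```python
import math

def effort_features(traj, dt=1.0 / 30.0):
    """Crude Effort-like descriptors from one joint's trajectory.

    traj: list of (x, y, z) positions sampled at interval dt (30 Hz here,
    roughly a Kinect frame rate). Returns (mean_speed, mean_accel,
    straightness) -- loose proxies for the Time, Weight, and Space
    components of Laban's Effort, respectively.
    """
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))

    def norm(v):
        return math.sqrt(sum(c * c for c in v))

    vel = [tuple(c / dt for c in sub(traj[i + 1], traj[i]))
           for i in range(len(traj) - 1)]
    acc = [tuple(c / dt for c in sub(vel[i + 1], vel[i]))
           for i in range(len(vel) - 1)]
    mean_speed = sum(norm(v) for v in vel) / len(vel)
    mean_accel = sum(norm(a) for a in acc) / len(acc)
    # Straightness: net displacement over total path length (1.0 = direct).
    path = sum(norm(sub(traj[i + 1], traj[i])) for i in range(len(traj) - 1))
    straightness = norm(sub(traj[-1], traj[0])) / max(path, 1e-9)
    return mean_speed, mean_accel, straightness

# A straight, constant-velocity gesture: zero acceleration, straightness 1.
line = [(i * 0.1, 0.0, 0.0) for i in range(10)]
print(effort_features(line))
```

A real LMA pipeline would compute such statistics per joint and per movement phrase and feed them to a classifier; the point here is only that Effort-like qualities reduce to simple trajectory statistics.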
Ultra-fast multimodal and online transfer learning on humanoid robots BIBAFull-Text 165-166
  Daiki Kimura; Ryutaro Nishimura; Akihiro Oguro; Osamu Hasegawa
To build an intelligent robot, we must develop an autonomous mental development system that incrementally and rapidly learns from humans, its environment, and electronic data. This paper presents an ultra-fast, multimodal, online incremental transfer learning method using the STAR-SOINN. We conducted two experiments to evaluate our method. The results suggest that recognition accuracy is higher than with a system that simply adds modalities. The proposed method works very quickly (approximately 1.5 s to learn one object and 30 ms for a single estimation). We implemented the method on an actual robot that could estimate attributes of "unknown" objects by transferring attribute information from known objects. We believe this method can become a base technology for future robots.
Single assembly robot in search of human partner: versatile grounded language generation BIBAFull-Text 167-168
  Ross A. Knepper; Stefanie Tellex; Adrian Li; Nicholas Roy; Daniela Rus
We describe an approach for enabling robots to recover from failures by asking for help from a human partner. For example, if a robot fails to grasp a needed part during a furniture assembly task, it might ask a human partner to "Please hand me the white table leg near you." After receiving the part from the human, the robot can recover from its grasp failure and continue the task autonomously. This paper describes an approach for enabling a robot to automatically generate a targeted natural language request for help from a human partner. The robot generates a natural language description of its need by minimizing the entropy of the command with respect to its model of language understanding for the human partner, a novel approach to grounded language generation. Our long-term goal is to compare targeted requests for help to more open-ended requests where the robot simply asks "Help me," demonstrating that targeted requests are more easily understood by human partners.
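The entropy-minimization step described above can be illustrated with a toy sketch (hypothetical candidate requests and listener model, not the authors' implementation): the robot scores each candidate request by the entropy of the distribution over objects the listener might resolve it to, and picks the least ambiguous one.

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical listener model: P(object | request) over three table legs.
listener_model = {
    "Hand me the leg": [1/3, 1/3, 1/3],                   # fully ambiguous
    "Hand me the white leg": [0.5, 0.5, 0.0],             # two white legs
    "Hand me the white leg near you": [0.9, 0.05, 0.05],  # targeted
}

def best_request(model):
    # Choose the request whose interpretation distribution has minimum
    # entropy, i.e. the one the listener is least likely to misresolve.
    return min(model, key=lambda req: entropy(model[req]))

print(best_request(listener_model))  # the most targeted phrasing wins
```

The paper's system generates and grounds such candidates with a full language-understanding model; this sketch only shows why the lowest-entropy request is the one worth uttering.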
Directing robot motions with paralinguistic information BIBAFull-Text 169-170
  Takanori Komatsu; Yuuki Seki
We propose an interface system that can extract a user's ambiguous nuances and feelings from the paralinguistic information in their speech and reflect these nuances in a robot's punching action. In an evaluation study, participants reported that the system succeeded in reflecting the nuances and feelings expressed in their paralinguistic information, and that it was easy and fun to use.
3D auto-calibration method for head-mounted binocular gaze tracker as human-robot interface BIBAFull-Text 171-172
  Su Hyun Kwon; Min Young Kim
This paper presents a novel calibration method for a head-mounted binocular gaze tracker that enables the human gaze point, representing the selective visual attention of the user, to be tracked in 3D space. The proposed method utilizes two calibration planes with visual marks to calculate the mapping points between a forward-looking camera and two eye-monitoring cameras in an expanded 3D spatial domain. As a result, the visually attentive point of the user can be tracked, regardless of variations in the distance from the user to the target object. The proposed method also provides a more convenient calibration procedure and more accurate results in tests than the previous method suggested by the authors. The performance is tested when varying the 3D position of an attentive object, and the experimental results are discussed.
Developing therapeutic robot for children with autism: a study on exploring colour feedback BIBAFull-Text 173-174
  Jaeryoung Lee; Goro Obinata
Previous studies have reported that autistic children improve their social interaction and communication skills through interacting with robots. Most studies in the field of robot-assisted autism therapy, however, have focused on limited communication skills, and have used non-validated methods to measure the effectiveness of the therapy. In the present study, a therapeutic robot is therefore proposed to help autistic children improve the adjustability of interpersonal touch as a communication skill. The aim of this study is to investigate effective ways of giving feedback in robot-assisted autism therapy using colours in three different conditions. The results indicate that participants interacted better when they saw the colour feedback directly.
Legible user input for intent prediction BIBAFull-Text 175-176
  Kenton C. T. Lee; Anca D. Dragan; Siddhartha S. Srinivasa
In assistive teleoperation, the robot provides assistance by predicting the user's intent. Prior work has focused on improving prediction by adapting it to the user's behavior. In this work, we investigate adaptation in the opposite direction: training the user's behavior to the prediction. Results from our user study suggest that users can significantly improve the performance of a simple static predictor after brief exposure to its behavior. In addition, we find this improvement to be more significant when the cognitive load of teleoperation is reduced.
Interactive facial robot system on a smart device: enhanced touch screen input recognition and robot's reactive facial expression BIBAFull-Text 177-178
  Won Hyong Lee; Jeong Woo Park; Woo Hyun Kim; Myung Jin Chung
This paper proposes an interactive facial robot system on a smart device with a touch screen and a built-in microphone. Recognition of touch inputs is enhanced by analyzing input patterns together with the built-in microphone. The recognized inputs are converted into emotional states of the system, which are then reactively expressed by a facial simulator displayed on the device's touch screen. The proposed facial system can thus be implemented on a single smart device, with input sensing and visual output sharing the same display component.
A spatial augmented reality system for intuitive display of robotic data BIBAFull-Text 179-180
  Florian Leutert; Christian Herrmann; Klaus Schilling
In the emerging field of close human-robot collaboration, the human worker needs to be able to quickly and easily understand data from the robotic system. To achieve this even for untrained personnel, we propose the use of a Spatial Augmented Reality system that projects the necessary information directly into the user's workspace. The projection system consists of a fixed projector as well as a mobile projector mounted directly on the manipulator, allowing data to be visualized anywhere in the robot's surroundings. By simply seeing the necessary complex information, users can better understand the data and behavior of the robotic assistant and have the opportunity to analyze and potentially optimize the working process. Together with an input device, arbitrary interfaces can be realized with the projection system.
Be a robot!: robot navigation patterns in a path crossing scenario BIBAFull-Text 181-182
  Christina Lichtenthäler; Annika Peters; Sascha Griffiths; Alexandra Kirsch
In this paper we address the question of how a human would expect a robot to move when crossing its path. In particular, we consider that the physical capabilities of robots differ from those of humans. To find out how humans expect a robot with non-humanlike capabilities to move, we designed and conducted a study in which participants steered the robot. We identified four motion patterns, and our results show that driving straight towards the goal and stopping when a human might collide with the robot is the favored motion pattern.
Quadrotor or blimp?: noise and appearance considerations in designing social aerial robot BIBAFull-Text 183-184
  Chun Fui Liew; Takehisa Yairi
Aerial robots offer a novel HRI platform thanks to their flying capabilities. However, existing aerial robots are designed from a functional point of view and do not take social factors such as noise and appearance into serious consideration. We investigated whether a blimp (quieter but slower) is a better platform for a social aerial robot than a quadrotor (noisier but faster). We formed hypotheses based on findings from physiology and psychology studies and examined them with responses collected from an online survey and an interaction experiment.
Personalized robotic service using N-gram affective event model BIBAFull-Text 185-186
  Gi Hyun Lim; Seung Woo Hong; Inhee Lee; Il Hong Suh; Michael Beetz
An N-gram affective event model, consisting of 5W1H event-episode ontologies and affection ontologies, is proposed for personalized robotic service. Personalization technology is increasingly becoming an essential component in education, and robotic service is another field where it can meet personal needs. The case study shows that even when two students miss the same question, the robot suggests different reviews according to each student's personal tendencies.
The NAO models for the elderly BIBAFull-Text 187-188
  David López Recio; Elena Márquez Segura; Luis Márquez Segura; Annika Waern
This paper highlights initial observations from a user study performed in an assisted living facility in Spain. We introduced the NAO robot to assist in geriatric physiotherapy rehabilitation. The NAO is introduced in order to take over one of the usual roles of the physiotherapist: modeling movements for the inpatients. We also introduced a virtual version of the NAO in order to see whether this role of modeling is equally effective in a screen-based modality. Preliminary results show the inpatients adjust their movements to the NAO, although they react differently to the virtual and the physical robot.
Movement synchronization fails during non-adaptive human-robot interaction BIBAFull-Text 189-190
  Tamara Lorenz; Alexander Mörtl; Sandra Hirche
Interpersonal movement synchronization is a phenomenon that not only increases the predictability of movements; it also increases rapport among people. Along this line, synchronization might enhance human-robot interaction. An experiment is presented which explores to what extent a human synchronizes their own movements to a non-adaptive robot during a repetitive tapping task. It is shown that the human does not take over the complete effort of movement adaptation needed to reach synchronization, which indicates the need for adaptive robots.
The role of emotional congruence in human-robot interaction BIBAFull-Text 191-192
  Karoline Malchus; Petra Jaecks; Oliver Damm; Prisca Stenneken; Carolin Meyer; Britta Wrede
The communication of emotion is a crucial part of daily interaction. We therefore carried out a study to investigate what role emotional congruence plays in human-human and human-robot interaction. Our results show no effect of emotional incongruence between verbal content and facial expression, for either human or robotic stimuli, on cognitive performance in a story comprehension task. More importantly, the results indicate that participants' performance in a memorizing task is significantly better when the robot tells the story. Possible explanations are discussed.
Tell me your story, robot: introducing an android as fiction character leads to higher perceived usefulness and adoption intention BIBAFull-Text 193-194
  Martina Mara; Markus Appel; Hideaki Ogawa; Christopher Lindinger; Emiko Ogawa; Hiroshi Ishiguro; Kohei Ogawa
In a field experiment with N = 75 participants, the android telecommunication robot Telenoid was introduced in three different ways: participants either read a short story presenting the Telenoid as a character, read a non-narrative information leaflet about it, or received no preliminary introduction at all before interacting with the robot. Perceived usefulness and behavioral intentions to adopt the robot were significantly higher in the story condition than in both other conditions. In line with the Technology Acceptance Model, reported usefulness additionally served as a mediator between treatment and adoption intention. This study is the first to apply findings from narrative persuasion to HRI and can prompt further discussion about stories as a means to increase user acceptance of new robotic agents.
Designing robotic avatars: are user's impression affected by avatar's age? BIBAFull-Text 195-196
  Angie Lorena Marin Mejia; Doori Jo; Sukhan Lee
This paper explores the relationship between the aging cue of a robotic avatar and the level of intelligence and safety perceived by elderly users. This initial study found that the avatar's aging cue indeed affects how the elderly perceive the embodied robot, in terms not only of its intelligence but also of its safety: the elderly perceived the robot as more intelligent and safer with older avatars. Because the elderly perceive the avatar's aging cue through their expectations of and interactions with the robot, the avatar-user aging cue influences the design of a range of attributes of the embodied robot. The results of this study can therefore provide interaction designers with a guideline for creating the visual appearance of an embodied agent in terms of its aging cue.
Survey of metrics for human-robot interaction BIBAFull-Text 197-198
  Robin Murphy; Debra Schreckenghost
This paper examines 29 papers that have proposed or applied metrics for human-robot interaction. The 42 metrics are categorized as to the object being directly measured: the human (7), the robot (6), or the system (29). Systems metrics are further subdivided into productivity, efficiency, reliability, safety, and coactivity. While 42 seems to be a large set, many metrics do not have a functional, or generalizable, mechanism for measuring that feature. In practice, metrics for system interactions are often inferred through observations of the robot or the human, introducing noise and error in analysis. The metrics do not completely capture the impact of autonomy on HRI as they typically focus on the agents, not the capabilities. As a result the current metrics are not helpful for determining what autonomous capabilities and interactions are appropriate for what tasks.
Unified environment-adaptive control of accompanying robots using artificial potential field BIBAFull-Text 199-200
  Kazushi Nakazawa; Keita Takahashi; Masahide Kaneko
Our research is focused on mobile robots that can accompany a person, and this paper addresses how to control the relative position of the robot to the accompanied person according to the dynamic environment. The robot is expected to move side-by-side with the person in the normal situation, but the position in front or behind the person might be better if there are some obstacles. We devised the shape of the artificial potential field of the accompanied person to smoothly control the robot position in a unified way, and obtained favorable results via simulations.
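The abstract above describes shaping an artificial potential field around the accompanied person so the robot normally settles side-by-side but shifts in front of or behind when obstacles intrude. As a rough illustration of the general technique (not the authors' actual field shape; the gains `k_att`, `k_rep`, and range `rho0` below are hypothetical), a robot position can be obtained by descending a field that combines an attractive well at the preferred side-by-side slot with repulsive terms around obstacles:

```python
import numpy as np

def accompany_potential(pos, person, side_offset, obstacles,
                        k_att=1.0, k_rep=2.0, rho0=1.5):
    """Toy potential: attractive quadratic well at the side-by-side slot
    next to the person, plus short-range repulsion around obstacles."""
    goal = person + side_offset                    # preferred slot beside the person
    u = 0.5 * k_att * np.sum((pos - goal) ** 2)    # attractive term
    for ob in obstacles:
        rho = np.linalg.norm(pos - ob)             # distance to obstacle
        if rho < rho0:                             # repulsion only within range rho0
            u += 0.5 * k_rep * (1.0 / rho - 1.0 / rho0) ** 2
    return u

def descend(pos, person, side_offset, obstacles, step=0.05, iters=200):
    """Follow the numerical negative gradient of the field to an equilibrium."""
    pos = np.asarray(pos, dtype=float)
    for _ in range(iters):
        grad = np.zeros(2)
        for i in range(2):
            d = np.zeros(2)
            d[i] = 1e-4
            grad[i] = (accompany_potential(pos + d, person, side_offset, obstacles)
                       - accompany_potential(pos - d, person, side_offset, obstacles)) / 2e-4
        pos -= step * grad
    return pos
```

With no obstacles the descent settles at the slot beside the person; placing an obstacle near that slot raises its potential, pushing the equilibrium toward a position in front of or behind the person instead.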
Measurement of rapport-expectation with a robot BIBAFull-Text 201-202
  Tatsuya Nomura; Takayuki Kanda
This paper focuses on humans' expectation of rapport with robots as a factor in long-term human-robot interaction. Our research has been directed at developing a psychological scale for measuring this rapport expectation. This paper reports the development process and the results of a pilot test, which suggest that the scale can measure differences in individuals' expectations of rapport with robots depending on robot type and context.
Design of robot eyes suitable for gaze communication BIBAFull-Text 203-204
  Tomomi Onuki; Takafumi Ishinoda; Yoshinori Kobayashi; Yoshinori Kuno
Human eyes not only serve the function of enabling us "to see" something, but also perform the vital role of allowing us "to show" our gaze for non-verbal communication. The eyes of service robots should therefore also perform both of these functions. Moreover, they should be friendly in appearance so that humans may feel comfortable with the robots. Therefore we maintain that it is important to consider gaze communication capability and friendliness in designing the appearance of robot eyes. In this paper, we propose a new robot face with rear-projected eyes for changing their appearance while simultaneously realizing the sight function by incorporating stereo cameras. Additionally, we examine which shape of robot eyes is most suitable for gaze reading and gives the friendliest impression, through experiments where we altered the shape and iris size of robot eyes.
Listening to vs overhearing robots in a hotel public space BIBAFull-Text 205-206
  Yadong Pan; Haruka Okada; Toshiaki Uchiyama; Kenji Suzuki
This report presents preliminary work performed using robots with different socially interactive functionalities in a hotel public space in order to investigate human-robot interactions (HRI). We developed robots that enable the following types of interactions: (i) indirect interaction, where twin robots (Gemini), with body-movement and conversational ability, engage in a conversation and guests can gather information through overhearing the robots, and (ii) direct interaction, where a smaller-sized robot (Palro), that can detect the presence of guests, greets them directly. In both cases, guest-behavior is studied using four categories that define the levels of a guest's response toward the robots. Several significant differences among the levels of attention paid by the guests to the robots are observed in the experiments.
Providing tablets as collaborative-task workspace for human-robot interaction BIBAFull-Text 207-208
  Hae Won Park; Ayanna Howard
In a recent conference on assistive technology in special education and rehabilitation, over 54 percent of the sessions were directly or indirectly involved with tablets. Following this trend, many traditional assistive technologies are now transitioning from standalone devices into apps on mobile devices. This paper discusses transforming a tablet into an HRI research platform where our robotic system engages the user in social interaction by learning how to operate a given app (task) using guidance from the user. The objective is to engage the robot within the context of the user's task by understanding the task's underlying rules and structures. An overview of the HRI toolkit is presented, and a knowledge-based approach to modeling a task is discussed in which previously learned cases are reused to solve a new problem.
People interpret robotic non-linguistic utterances categorically BIBAFull-Text 209-210
  Robin Read; Tony Belpaeme
This paper presents an experiment testing whether adults exhibit Categorical Perception when rating non-linguistic utterances, made by a Nao robot, on an affective level. The experiment followed the traditional methodology used in psychology, with some minor alterations. A stimulus continuum was produced and subjects were asked to complete a discrimination and an identification task. In the former, subjects were asked to rate whether stimulus pairs were affectively different, while in the latter they were asked to rate single stimuli affectively using a facial gesture tool. The results present compelling evidence for the presence of Categorical Perception in this particular case.
Using the AffectButton to measure affect in child and adult-robot interaction BIBAFull-Text 211-212
  Robin Read; Tony Belpaeme
This report presents data showing how the AffectButton, a visual tool for reporting affect, can be used reliably by both adults and children (6-7 years old). Users were asked to identify affective labels, such as scared or surprised, on the AffectButton. We report high inter-rater reliability between adults, between children, and between adults and children. Children perform as well as adults when using the AffectButton, making it an intuitive and reliable tool for letting a wide range of ages report affect.
Execution memory for grounding and coordination BIBAFull-Text 213-214
  Stephanie Rosenthal; Sarjoun Skaff; Manuela Veloso; Dan Bohus; Eric Horvitz
As robots are introduced into human environments for long periods of time, human owners and collaborators will expect them to remember shared events that occur during execution. Beyond the naturalness of having memories about recent and longer-term engagements with people, such execution memories can be important in tasks that persist over time by allowing robots to ground their dialog and to refer efficiently to previous events. In this work, we define execution memory as the capability of saving interaction event information and recalling it for later use. We divide the problem into four parts: salience filtering of sensor evidence and saving to short-term memory; archiving from short-term to long-term memory; caching from long-term back to short-term memory; and recalling memories for use in state inference and policy execution. We then provide examples of how execution memory can be used to enhance user experience with robots.
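The four-part pipeline described in the abstract (salience filtering, archiving, caching, recall) can be sketched as a small data structure. This is an illustrative reconstruction, not the authors' implementation; the event format, salience threshold, and short-term capacity below are assumptions:

```python
from collections import deque

class ExecutionMemory:
    """Minimal sketch of an execution-memory pipeline: salient events enter
    short-term memory, are archived to long-term storage, and can be
    recalled (and cached back) for dialog grounding."""

    def __init__(self, salience_threshold=0.5, short_term_size=5):
        self.salience_threshold = salience_threshold
        self.short_term = deque(maxlen=short_term_size)  # recent events
        self.long_term = []                              # archived events

    def observe(self, event, salience):
        """Salience filtering: only sufficiently notable events are saved."""
        if salience >= self.salience_threshold:
            self.short_term.append(dict(event, salience=salience))
            return True
        return False

    def archive(self):
        """Move everything from short-term to long-term storage."""
        self.long_term.extend(self.short_term)
        self.short_term.clear()

    def recall(self, **query):
        """Recall: fetch archived events matching the query, caching them
        back into short-term memory for immediate use."""
        hits = [e for e in self.long_term
                if all(e.get(k) == v for k, v in query.items())]
        for e in hits:
            self.short_term.append(e)
        return hits
```

For example, a greeting event observed with high salience survives the filter, can be archived, and is later recallable by the person's name, mirroring the abstract's save-archive-cache-recall cycle at toy scale.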
Neural correlates of empathy towards robots BIBAFull-Text 215-216
  Astrid Marieke Rosenthal-von der Pütten; Frank P. Schulte; Sabrina C. Eimler; Laura Hoffmann; Sabrina Sobieraj; Stefan Maderwald; Nicole C. Krämer; Matthias Brand
We conducted an fMRI study to investigate emotionality in human-robot interaction. Subjects (N=14) were presented with videos showing a human, a robot, and an inanimate object being treated in either an affectionate or a violent way. Violent interaction towards both the robot and the human resulted in similar neural activation patterns in classic limbic structures, indicating that both the robot and the human elicit similar emotional reactions. However, differences in neural activity suggest that participants show more negative empathetic concern for the human in a negative situation.
Designing for sociality in HRI by means of multiple personas in robots BIBAFull-Text 217-218
  Jolina H. Ruckert; Peter H., Jr. Kahn; Takayuki Kanda; Hiroshi Ishiguro; Solace Shen; Heather E. Gary
This conceptual paper provides design guidelines to enhance the sociality of human-robot interaction. The paper draws on the Interaction Pattern Approach in HRI, which seeks to specify the underlying structures and functions of human interaction. We extend this approach by proposing that in the same way people effectively engage the social world with different personas in different contexts, so can robots be designed not only with a single persona but multiple personas.
Generating finely synchronized gesture and speech for humanoid robots: a closed-loop approach BIBAFull-Text 219-220
  Maha Salem; Stefan Kopp; Frank Joublin
Previous work focusing on the production of communicative robot gesture has not sufficiently addressed the challenge of speech-gesture synchrony. We propose a novel multimodal scheduler that comprises two features to improve the synchronization process. First, the scheduler integrates an experimentally fitted forward model at the behavior planning stage to provide a more accurate estimation of the robot's gesture preparation time. Second, the model incorporates a feedback-based adaptation mechanism which allows for an on-line adjustment of the synchronization of the two modalities during execution. In this way, it represents the first closed-loop approach to speech and gesture generation for humanoid robots.
A study of effective social cues within ubiquitous robotics BIBAFull-Text 221-222
  Anara Sandygulova; David Swords; Sameh Abdel-Naby; Gregory M. P. O'Hare; Mauro Dragone
Ubiquitous computing is the execution of computational tasks through everyday objects. Ubiquitous robotics augments the capabilities of one or more robots by leveraging ubiquitous computational and/or sensorial resources. Augmentation complements and/or enhances the capabilities of one or more robots while such robots can simultaneously serve as intermediaries or social interfaces to ubiquitous services. This paper posits the importance of conducting user studies to validate related HRI results within ubiquitous robotics. Specifically, the paper presents a pilot study designed to test the effectiveness of acknowledging human presence via non-verbal social cues and its impact on the user's acceptance and engagement with a ubiquitous robotic system.
Is that me?: sensorimotor learning and self-other distinction in robotics BIBAFull-Text 223-224
  Guido Schillaci; Verena Vanessa Hafner; Bruno Lara; Marc Grosjean
In order to have robots interact with other agents, it is important that they are able to recognize their own actions. The research reported here relates to the use of internal models for self-other distinction. We demonstrate how a humanoid robot, which acquires a sensorimotor scheme through self-exploration, can produce and predict simple trajectories that have particular characteristics. Comparing these predictions to incoming sensory information provides the robot with a basic tool for distinguishing between self and other.
Perception during interaction is not based on statistical context BIBAFull-Text 225-226
  Alessandra Sciutti; Andrea Del Prete; Lorenzo Natale; David Burr; Giulio Sandini; Monica Gori
We performed an experiment with the humanoid robot iCub to evaluate the relevance of statistical context in interactive and non-interactive scenarios. We measured the central tendency, i.e., how much the perceptual estimate of the length of a stimulus is influenced by previously presented lengths. In two conditions the stimulation was exactly the same, while only task-independent parameters of robotic behavior (robot autonomy and robot gazing) were modified to manipulate the interactive nature of the task. The reduced regression to the average of previous stimuli observed in the interactive condition indicates that perception during interaction relies less on statistical context than individual perceptual judgments do. Moreover, assessing the relevance given to statistical context could represent a new measure for evaluating subjective involvement in an interaction. In particular, our results suggest that autonomous behavior and human-like eye-gazing motion are required for an interaction between human and robot to occur.
Robot-human hand-overs in non-anthropomorphic robots BIBAFull-Text 227-228
  Prasanna Kumar Sivakumar; Chittaranjan S. Srinivas; Andrey Kiselev; Amy Loutfi
Robots that assist and interact with humans will inevitably need to hand over objects successfully. Whether delivering desired objects to elderly people living in their homes or handing tools to a worker in a factory, the process of robot hand-overs is worthy of study within the human-robot interaction community. While object hand-overs have been studied in previous works, those works have mainly considered anthropomorphic robots, that is, robots that appear and move similarly to humans. However, recent trends in robotics, and in particular domestic robotics, have witnessed an increase in non-anthropomorphic robotic platforms such as moving tables, teleconferencing robots, and vacuum cleaners. The focus of this paper is the study of robot hand-overs for non-anthropomorphic robots and, in particular, of what constitutes a successful hand-over. For the purpose of investigation, the TurtleBot, a moving-table-like device, is used in a home environment.
Integrating a robot in a tabletop reservoir engineering application BIBAFull-Text 229-230
  Sowmya Somanath; Ehud Sharlin; Mario Costa Sousa
We present our work-in-progress efforts toward designing a simple tabletop robotic assistant that supports users as they interact with a tabletop reservoir visualization application. Our prototype, Spidey, is designed to assist reservoir engineers in performing simple data exploration tasks on the interactive tabletop. We present our design as well as preliminary findings from a study of Spidey involving both interaction designers and reservoir engineers.
Anthropomorphism in the factory: a paradigm change? BIBAFull-Text 231-232
  Susanne Stadler; Astrid Weiss; Nicole Mirnig; Manfred Tscheligi
In recent years there has been a tendency towards industrial robots which are cheaper in acquisition and need less expert programming and maintenance. Several approaches use a combination of humanoid and industrial robot elements. This new direction away from dull, dirty and dangerous to a functional "buddy" robot enables new HRI scenarios within the factory context towards a "shoulder-to-shoulder" cooperation. One important point is to understand users' expectations for both types of robots. We conducted a video-based focus group to explore general expectations of naive users towards functional and humanoid robots within a special factory context. A comparison of the results indicates that anthropomorphic elements in the design of robotic systems can also be imagined for industrial robots. We conclude with opportunities for these hybrid anthropomorphized functional robots in an industrial environment.
Input modality and task complexity: do they relate? BIBAFull-Text 233-234
  Gerald Stollnberger; Astrid Weiss; Manfred Tscheligi
In the research field of Human-Robot Collaboration (HRC), choosing the right input modality is a crucial aspect of successful cooperation, especially across different levels of task complexity. In this paper we present a preliminary study conducted to investigate the correlation between input modalities and task complexity. We assume that specific input devices are suitable for specific levels of task complexity in HRC tasks. In our study, participants could choose between two different input modalities to race Lego Mindstorms robots against each other. One of the main findings was that both factors (input modality and task complexity) have a strong impact on task performance and user satisfaction. Furthermore, we found that users' perceptions of their performance differed from reality in some cases.
A wearable visuo-inertial interface for humanoid robot control BIBAFull-Text 235-236
  Junichi Sugiyama; Jun Miura
This paper describes a wearable visuo-inertial interface for humanoid robot control, which allows a user to control the motion of a humanoid robot intuitively. The interface is composed of a camera and inertial sensors and estimates the body motion of the user: movement (walking), hand motion, and grasping gestures. Body motion (walking) estimation is performed by combining monocular SLAM with visual-inertial fusion using an extended Kalman filter. Hand motion is estimated using the same motion model and sensor fusion as the body motion estimation. The estimated motion is used to operate the movement or arm motion of the humanoid robot. We conducted robot operation experiments, and the results revealed that the user could intuitively control the robot and that it responded to the operator's commands correctly.
Individually specialized feedback interface for assistance robots in standing-up motion BIBAFull-Text 237-238
  Asuka Takai; Chihiro Nakagawa; Atsuhiko Shintani; Tomohiro Ito
We present a navigational interface for users of assistance robots. The interface suggests a motion that places low body load at the lower joints, carries a low risk of falls, and requires high voluntary activation of muscles. We describe an application that uses animations to provide feedback on a user's sit-to-stand motion in order to demonstrate the potential benefits of such an interface.
Integration of work sequence and embodied interaction for collaborative work based human-robot interaction BIBAFull-Text 239-240
  Jeffrey Too Chuan Tan; Tetsunari Inamura
In order to develop intelligent robots that can cooperate well with humans in collaborative work, this work aims to integrate work sequence and embodied interaction capabilities into an integrated intelligence system for collaborative robot development. A task modeling approach is proposed to build a hierarchical task model of the entire work sequence, generate a state transition table, and ground it in the actual condition (state) of the objects and the action (transition) of work in the embodied dimension. The system is implemented in a simulation environment with human interaction to realize human-robot collaboration through robot work support in the work sequence (what, when, and how), assistance on parallel tasks, and error correction.
Balance-arm tablet computer stand for robotic camera control BIBAFull-Text 241-242
  Peter Turpel; Bing Xia; Xinyi Ge; Shuda Mo; Steve Vozar
Traditional methods of camera orientation control for teleoperated robots involve gamepads or joysticks with the motion of analog sticks used to control the camera direction. However, this control scheme often leads to unintuitive mappings between user input and camera actuator output. This paper describes a master-slave style camera position and orientation controller with a tablet computer showing a video feed from the robot mounted on a balance-arm (acting as the control master), affording the user a one-to-one mapping to control the viewpoint of a camera mounted on a robot arm (the slave). In this way, the tablet computer acts as a virtual window to the robot's workspace.
Swimoid: interacting with an underwater buddy robot BIBAFull-Text 243-244
  Yu Ukai; Jun Rekimoto
The methodology of presenting information from robots to humans in underwater environments has become an important topic because of rapid technological advancement in underwater vehicles and their applications. However, this topic has not yet been fully investigated in the research field of Underwater Human-Robot Interaction (UHRI). We propose a new concept of underwater robot called the "Buddy Robot": a category of underwater robot with two abilities, recognizing and following the user, and presenting visual information to the user via display devices. As one specific example of the concept, we developed a swim support system called "Swimoid". Swimoid consists of three parts: hardware, control software, and functions that support swimmers in three ways: self-awareness, coaching, and a game. The self-awareness function enables swimmers to see their own swimming form in real time. The coaching function enables coaches on the poolside to give instructions to swimmers by drawing shapes. The game function helps novice swimmers get familiar with the water in a fun way. User tests, including the test users' comments, confirmed that the system works properly.
The affect of collaboratively programming robots in a 3D virtual simulation BIBAFull-Text 245-246
  Michael Vallance
The Fukushima Daiichi nuclear power plant disaster of March 2011 revealed much about Japan's lack of preparedness for nuclear accidents. Despite the brave efforts of its labor force leading up to, and in the aftermath of, the reactor explosions, it became apparent that coordination and communication were disorganized. The research summarized in this paper will examine how students in Japan and UK collaborate; eventually towards a better understanding of the challenges and possible solutions when dealing with disaster recovery such as Fukushima. The context for collaboration is set within a 3D virtual world with students programming robots to follow distinct circuits. The immersion affect of programming, constructing, collaborating and communicating is captured to determine task criteria of educational value. This interdisciplinary 'information science' research incorporates computer science, cognitive science, the social sciences, communication and design.
Experiencing the familiar, understanding the interaction and responding to a robot proactive partner BIBAFull-Text 247-248
  Gentiane Venture; Ritta Baddoura; Tianxiang Zhang
This is the second stage of a study on the familiar during HRI. We previously demonstrated the value of better understanding the human experience of the familiar for an adapted and successful HRI. We also studied the impact of the robot's social behavior on the human partner's experience of the familiar, appreciation of the interaction, anthropomorphism of the robot, and reactions to its actions. Here, we explore the relation between experiencing the familiar, understanding the robot's engaging actions, and reacting to them. We look at participants' responses to three non-verbal actions of NAO. The analysis uses the participants' answers to a questionnaire, their decisions whether or not to react to the robot's actions, and motion data.
Improving teleoperated robot speed using optimization techniques BIBAFull-Text 249-250
  Steve Vozar; Dawn Tilbury
Current autonomous robots are not sophisticated enough to complete many mobile tasks, so human-in-the-loop control -- including teleoperation -- remains the only way to accomplish them. However, most teleoperated tasks cannot be performed at a reasonable speed. When evaluating design choices, it is not always clear which designs will yield the greatest speed increase at the lowest cost. This paper introduces an optimization-based approach for evaluating multiple design options that weighs robot speed against costs such as component price and size. An example is presented to illustrate the methodology.

Trust, help, and influence

Impact of robot failures and feedback on real-time trust BIBAFull-Text 251-258
  Munjal Desai; Poornima Kaniarasu; Mikhail Medvedev; Aaron Steinfeld; Holly Yanco
Prior work on human trust of autonomous robots suggests that the timing of reliability drops impacts trust and control-allocation strategies. However, trust is traditionally measured post-run, thereby masking real-time changes in trust, reducing sensitivity to factors like inertia, and subjecting the measure to biases like the primacy-recency effect. Likewise, little is known about how feedback on robot confidence interacts in real time with trust and control-allocation strategies. An experiment examining these issues showed that trust loss due to early reliability drops is masked in traditional post-run measures, that trust demonstrates inertia, and that feedback alters allocation strategies independently of trust. The implications of specific findings for the development of trust models and robot design are also discussed.
Will i bother here?: a robot anticipating its influence on pedestrian walking comfort BIBAFull-Text 259-266
  Hiroyuki Kidokoro; Takayuki Kanda; Drazen Bršcic; Masahiro Shiomi
A robot working among pedestrians can attract crowds of people around it and consequently become a bothersome entity causing congestion in narrow spaces. To address this problem, our idea is to endow the robot with the capability to understand humans' crowding phenomena. The proposed mechanism consists of three underlying models: a model of pedestrian flow, a model of pedestrian interaction, and a model of walking comfort. Combining these models, the robot is able to simulate hypothetical situations in which it navigates among pedestrians and anticipate the degree to which this would affect the pedestrians' walking comfort. This idea is implemented in a friendly-patrolling scenario. During planning, the robot simulates the interaction with the pedestrian crowd and determines the best path to roam. The results of a field experiment demonstrated that with the proposed method, pedestrians around the robot perceived better walking comfort than pedestrians around a robot that only maximized its exposure.
It's not polite to point: generating socially-appropriate deictic behaviors towards people BIBAFull-Text 267-274
  Phoebe Liu; Dylan F. Glas; Takayuki Kanda; Hiroshi Ishiguro; Norihiro Hagita
Pointing behaviors are used for referring to objects and people in everyday interactions, but the behaviors used for referring to objects are not necessarily polite or socially appropriate for referring to humans. In this study, we confirm that although people would point precisely to an object to indicate where it is, they were hesitant to do so when pointing to another person. We propose a model for generating socially-appropriate deictic behaviors in a robot. The model is based on balancing two factors: understandability and social appropriateness. In an experiment with a robot in a shopping mall, we found that the robot's deictic behavior was perceived as more polite, more natural, and better overall when using our model, compared with a model considering understandability alone.
How a robot should give advice BIBAFull-Text 275-282
  Cristen Torrey; Susan Fussell; Sara Kiesler
With advances in robotics, robots can give advice and help using natural language. The field of HRI, however, has not yet developed a communication strategy for giving advice effectively. Drawing on the literature on politeness and informal speech, we propose options for a robot's help-giving speech: using hedges or discourse markers, both of which can mitigate the commanding tone implied in direct statements of advice. To test these options, we experimentally compared two help-giving strategies depicted in videos of human and robot helpers. We found that when robot and human helpers used hedges or discourse markers, they seemed more considerate and likeable, and less controlling. The robot that used discourse markers had an even greater impact than the human helper. The findings suggest that communication strategies derived from the speech people use when helping each other in natural settings can be effective for planning the help dialogues of robotic assistants.
Older adults' medication management in the home: how can robots help? BIBAFull-Text 283-290
  Akanksha Prakash; Jenay M. Beer; Travis Deyle; Cory-Ann Smarr; Tiffany L. Chen; Tracy L. Mitzner; Charles C. Kemp; Wendy A. Rogers
Successful management of medications is critical to maintaining healthy and independent living for older adults. However, medication non-adherence is a common problem with a high risk for severe consequences, which can jeopardize older adults' chances to age in place. Well-designed robots assisting with medication management tasks could support older adults' independence. Design of successful robots will be enhanced through understanding concerns, attitudes, and preferences for medication assistance tasks. We assessed older adults' reactions to medication hand-off from a mobile manipulator with 12 participants (68-79 years). We identified factors that affected their attitudes toward a mobile manipulator for supporting general medication management tasks in the home. The older adults were open to robot assistance; however, their preferences varied depending on the nature of the medication management task. For instance, they preferred a robot (over a human) to remind them to take medications, but preferred human assistance for deciding what medication to take and for administering the medication. Factors such as perceptions of one's own capability and robot reliability influenced their attitudes.

Panel session

Revisioning HRI given exponential technological growth BIBAFull-Text 291-292
  Peter H., Jr. Kahn; Gerhard Sagerer; Andrea L. Thomaz; Takayuki Kanda
Sometimes it's said that the technical problems in robotics are harder and more intransigent than the field expected decades ago. That's often the preamble to statements of the sort: "And those of us in HRI need to be realistic about what robots actually will be able to do in the near future." This panel explores the idea that that view -- of slow technological growth -- is fundamentally wrong. Our springboard is Ray Kurzweil's idea from his book The Singularity is Near. He argues that our minds think in linear terms while technological change increases exponentially. To illustrate exponential growth, take a dollar and double it every day. After a week, you have $64, which is hardly much to shout about. After a month you have over a billion dollars. Kurzweil argues that we're at the "knee" of that exponential curve, where technological growth has begun to accelerate at an astonishing rate. Given this proposition, the panelists discuss how we should be revisioning the field of HRI.

Companions, collaboration, and control

Communicating affect via flight path: exploring use of the Laban Effort System for designing affective locomotion paths BIBAFull-Text 293-300
  Megha Sharma; Dale Hildebrandt; Gem Newman; James E. Young; Rasit Eskicioglu
People and animals use various kinds of motion in a multitude of ways to communicate their ideas and affective state, such as their moods or emotions. Further, people attribute affect and personalities to the movements of even non-lifelike entities based solely on the style of their motions; e.g., the locomotion style of a geometric shape (how it moves about) can be interpreted as shy, aggressive, etc. We investigate how robots can leverage this locomotion-style communication channel for communication with people. Specifically, our work deals with designing stylistic flying-robot locomotion paths for communicating affective state.
   To author and unpack the parameters of affect-oriented flying-robot locomotion styles, we employ the Laban Effort System, a standard method for interpreting human motion commonly used in the performing arts. This paper describes our adaptation of the Laban Effort System to author motions for flying robots, and the results of a formal experiment that investigated how various Laban Effort System parameters influence people's perception of the resulting robotic motions. We conclude with a set of guidelines to aid designers in using the Laban Effort System to author flying-robot motions that elicit desired affective responses.
Legibility and predictability of robot motion BIBAFull-Text 301-308
  Anca D. Dragan; Kenton C. T. Lee; Siddhartha S. Srinivasa
A key requirement for seamless human-robot collaboration is for the robot to make its intentions clear to its human collaborator. A collaborative robot's motion must be legible, or intent-expressive. Legibility is often described in the literature as an effect of predictable, unsurprising, or expected motion. Our central insight is that predictability and legibility are fundamentally different and often contradictory properties of motion. We develop a formalism to mathematically define and distinguish predictability and legibility of motion. We formalize the two based on inferences between trajectories and goals in opposing directions, drawing an analogy to action interpretation in psychology. We then propose mathematical models for these inferences based on cost optimization, drawing an analogy to the principle of rational action. Our experiments validate our formalism's prediction that predictability and legibility can contradict, and provide support for our models. Our findings indicate that for robots to seamlessly collaborate with humans, they must change the way they plan their motion.
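The trajectory-to-goal inference underlying legibility can be illustrated with a toy cost-based model in the spirit of the rational-action rationale the abstract mentions. The path-length cost and the exp(cost-difference) score below are simplifications for illustration, not the authors' exact formalism:

```python
import math

def path_cost(points):
    """Trajectory cost as Euclidean path length (a stand-in for a
    general cost functional over trajectories)."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def goal_posterior(partial, goals, start):
    """P(goal | partial trajectory): a goal is more probable the more
    efficient the observed prefix looks as progress toward it, scored
    as exp(optimal cost - cost so far)."""
    scores = []
    for g in goals:
        c_sofar = path_cost(partial) + math.dist(partial[-1], g)
        c_opt = math.dist(start, g)
        scores.append(math.exp(c_opt - c_sofar))
    z = sum(scores)
    return [s / z for s in scores]
```

In this toy model, a straight (predictable) prefix between two symmetric goals leaves the observer at 50/50, while a prefix that exaggerates toward the true goal (legible, but less predictable) raises that goal's posterior early, which is the contradiction between the two properties that the paper formalizes.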
Taking your robot for a walk: force-guiding a mobile robot using compliant arms BIBAFull-Text 309-316
  François Ferland; Arnaud Aumont; Dominic Létourneau; François Michaud
Guiding a mobile robot by the hand would make a simple and natural interface. This requires the ability to sense forces applied on the robot from direct physical contacts, and to translate these forces into motion commands. This paper presents a joint-space impedance control approach that does so by perceiving forces applied on compliant arms, making the robot react like a real-life physical object to a user pulling and pushing on one or both of its arms. By independently controlling stiffness in specific degrees of freedom, our approach allows the general position of the arms to change according to the preferences of the person interacting with it, a capability that is not possible using a strictly position-based control approach. A test case with 15 volunteers was conducted on IRL-1, an omnidirectional, non-holonomic mobile robot, to study and fine-tune our approach in an unconstrained guiding task, making IRL-1 go in and out of a room through a doorway.
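The core idea of mapping sensed contact forces to motion commands can be illustrated with a minimal single-joint sketch. This is not IRL-1's actual controller: the spring-damper model, gains, and units are illustrative assumptions. Lowering the stiffness of a degree of freedom is what lets the user freely reposition that part of the arm.

```python
# Hypothetical single-joint impedance sketch: a user pushing the arm
# deflects the compliant joint; the estimated external torque is then
# mapped to a base velocity command so the robot follows the guiding hand.
STIFFNESS = 2.0    # N*m/rad; lower it to let the arm be repositioned freely
DAMPING = 0.5      # N*m*s/rad
GAIN = 0.3         # illustrative mapping from torque to base velocity (m/s per N*m)

def estimated_external_torque(q, q_rest, q_dot):
    """Spring-damper model: tau_ext ~= K*(q - q_rest) + D*q_dot."""
    return STIFFNESS * (q - q_rest) + DAMPING * q_dot

def base_velocity_command(q, q_rest, q_dot):
    """Translate a sensed pull/push on the arm into a forward/backward
    velocity command for the mobile base."""
    return GAIN * estimated_external_torque(q, q_rest, q_dot)

# User pulls the arm 0.2 rad forward and holds it steady:
v = base_velocity_command(q=0.2, q_rest=0.0, q_dot=0.0)
```

With no deflection the command is zero, so the robot stands still until it is physically guided.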
Effects of robotic companionship on music enjoyment and agent perception BIBAFull-Text 317-324
  Guy Hoffman; Keinan Vanunu
We evaluate the effects of robotic listening companionship on people's enjoyment of music, and on their perception of the robot. We present a robotic speaker device designed for joint listening and embodied performance of the music played on it. The robot generates smoothed real-time beat-synchronized dance moves, uses nonverbal gestures for common ground, and can make and maintain eye-contact.
   In an experimental between-subjects study (n=67), participants listened to songs played on the speaker device, with the robot either moving in sync with the beat, moving off-beat, or not moving at all. We found that while the robot's beat precision was not consciously detected by participants, an on-beat robot positively affected song liking. There was no effect on overall experience enjoyment. In addition, the robot's response caused participants to attribute more positive human-like traits to the robot, as well as to rate the robot as more similar to themselves. Notably, personal listening habits (solitary vs. social) affected agent attributions.
   This work points to a larger question, namely how a robot's perceived response to an event might affect a human's perception of the same event.

Verbal and non-verbal behavior

A model for synthesizing a combined verbal and nonverbal behavior based on personality traits in human-robot interaction BIBAFull-Text 325-332
  Amir Aly; Adriana Tapus
Robots are more and more present in our daily life; they have to move in human-centered environments, interact with humans, and obey social rules so as to produce appropriate social behavior in accordance with a human's profile (i.e., personality, mood, and preferences). Recent research has discussed the effect of personality traits on verbal and nonverbal production, which plays a major role in transferring and understanding messages in a social interaction between a human and a robot. The characteristics of the generated gestures (e.g., amplitude, direction, rate, and speed) during nonverbal communication can differ according to the personality trait, which similarly influences the verbal content of human speech in terms of verbosity, repetitions, etc. Therefore, our research tries to map a human's verbal behavior to a corresponding combined verbal-nonverbal robot behavior based on the personality dimensions of the interacting human. The system first estimates the interacting human's personality traits through a psycholinguistic analysis of the spoken language, then uses the PERSONAGE natural language generator to produce verbal output corresponding to the estimated personality traits. Gestures are generated using the BEAT toolkit, which performs a linguistic and contextual analysis of the generated language, relying on rules derived from extensive research into human conversational behavior. We explored the human-robot personality-matching aspect and the differences between an adapted mixed (gesture and speech) robot behavior and an adapted speech-only robot behavior in an interaction. Our study validated that individuals preferred to interact with a robot that had the same personality as theirs, and that an adapted mixed (gesture and speech) robot behavior was more engaging and effective than a speech-only behavior. Our experiments were done with the Nao robot.
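The adaptation pipeline described above can be sketched in miniature. This is a toy stand-in, not the authors' system: the psycholinguistic features, weights, and thresholds are invented for illustration, and the real work delegates language generation to PERSONAGE and gesture synthesis to BEAT.

```python
# Hypothetical sketch of the personality-adaptation pipeline: estimate
# extraversion from simple psycholinguistic cues of the user's speech,
# then scale the robot's gesture amplitude to match the estimated trait.
def estimate_extraversion(words_per_turn, repetition_rate):
    """Crude proxy: verbose, repetitive speech -> higher extraversion.
    Weights are illustrative, result clamped to [0, 1]."""
    score = 0.05 * words_per_turn + 2.0 * repetition_rate
    return max(0.0, min(1.0, score))

def gesture_amplitude(extraversion):
    """Match the robot's nonverbal style to the estimated trait:
    more extraverted users get larger, more expansive gestures."""
    return 0.3 + 0.7 * extraversion

# A talkative user (12 words per turn, 10% repetitions):
e = estimate_extraversion(words_per_turn=12, repetition_rate=0.1)
amp = gesture_amplitude(e)
```

The matching hypothesis the paper validates corresponds to driving both the verbal style and the gesture parameters from the same estimated trait, rather than the speech channel alone.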
Automatic processing of irrelevant co-speech gestures with human but not robot actors BIBAFull-Text 333-340
  Cory J. Hayes; Charles R. Crowell; Laurel D. Riek
Non-verbal, or visual, communication is an important factor of daily human-to-human interaction. Gestures make up one mode of visual communication, where movement of the body is used to convey a message either alone or in conjunction with speech. The purpose of this experiment is to explore how humans perceive gestures made by a humanoid robot compared to the same gestures made by a human. We do this by adapting and replicating a human perceptual experiment by Kelly et al., where a Stroop-like task was used to demonstrate the automatic processing of gesture and speech together. 59 college students participated in our experiment. Our results support the notion that automatic gesture processing occurs when interacting with human actors, but not robot actors. We discuss the implications of these findings for the HRI community.
Rhetorical robots: making robots more effective speakers using linguistic cues of expertise BIBAFull-Text 341-348
  Sean Andrist; Erin Spannan; Bilge Mutlu
Robots hold great promise as informational assistants such as museum guides, information booth attendants, concierges, shopkeepers, and more. In such positions, people will expect them to be experts on their area of specialty. Not only will robots need to be experts, but they will also need to communicate their expertise effectively in order to raise trust and compliance with the information that they provide. This paper draws upon literature in psychology and linguistics to examine cues in speech that would help robots not only to provide expert knowledge, but also to deliver this knowledge effectively. To test the effectiveness of these cues, we conducted an experiment in which participants created a plan to tour a fictional city based on suggestions by two robots. We manipulated the landmark descriptions along two dimensions of expertise: practical knowledge and rhetorical ability. We then measured which locations the participants chose to include in the tour based on their descriptions. Our results showed that participants were strongly influenced by both practical knowledge and rhetorical ability; they included more landmarks described using expert linguistic cues than those described using simple facts. Even when the overall level of practical knowledge was high, an increase in rhetorical ability resulted in significant improvements. These results have implications for the development of effective dialogue strategies for informational robots.
Gestures for industry: intuitive human-robot communication from human observation BIBAFull-Text 349-356
  Brian Gleeson; Karon MacLean; Amir Haddadi; Elizabeth Croft; Javier Alcazar
Human-robot collaborative work has the potential to advance quality, efficiency and safety in manufacturing. In this paper we present a gestural communication lexicon for human-robot collaboration in industrial assembly tasks and establish methodology for producing such a lexicon. Our user experiments are grounded in a study of industry needs, providing potential real-world applicability to our results. Actions required for industrial assembly tasks are abstracted into three classes: part acquisition, part manipulation, and part operations. We analyzed the communication between human pairs performing these subtasks and derived a set of communication terms and gestures. We found that participant-provided gestures are intuitive and well suited to robotic implementation, but that interpretation is highly dependent on task context. We then implemented these gestures on a robot arm in a human-robot interaction context, and found the gestures to be easily interpreted by observers. We found that observation of human-human interaction can be effective in determining what should be communicated in a given human-robot task, how communication gestures should be executed, and priorities for robotic system implementation based on frequency of use.

Is the robot like me?

Expressing ethnicity through behaviors of a robot character BIBAFull-Text 357-364
  Maxim Makatchev; Reid Simmons; Majd Sakr; Micheline Ziadee
Achieving homophily, or association based on similarity, between a human user and a robot holds the promise of improved perception and task performance. However, no previous studies have addressed homophily via ethnic similarity with robots. In this paper, we discuss the difficulties of evoking ethnic cues in a robot, as opposed to a virtual agent, and an approach to overcoming those difficulties based on ethnically salient behaviors. We outline our methodology for selecting and evaluating such behaviors, and conclude with a study that evaluates our hypotheses that a robot character can be ethnically attributed through verbal and nonverbal behaviors and that the homophily effect can be achieved.
The inversion effect in HRI: are robots perceived more like humans or objects? BIBAFull-Text 365-372
  Jakub Zlotowski; Christoph Bartneck
The inversion effect describes a phenomenon in which certain types of images are harder to recognize when they are presented upside down compared to when they are shown upright. Images of human faces and bodies suffer from the inversion effect whereas images of objects do not. The effect may be caused by the configural processing of faces and body postures, which is dependent on the perception of spatial relations between different parts of the stimuli. We investigated whether the inversion effect applies to images of robots, in the hope of using it as a measurement tool for a robot's anthropomorphism. The results suggest that robots, similarly to humans, are subject to the inversion effect. Furthermore, there is a significant but weak linear relationship between recognition accuracy and perceived anthropomorphism. The small variance explained by the inversion effect renders this test inferior to the questionnaire-based Godspeed Anthropomorphism Scale.
A transition model for cognitions about agency BIBAFull-Text 373-380
  Daniel T. Levin; Julie A. Adams; Megan M. Saylor; Gautam Biswas
Recent research in a range of fields has explored people's concepts about agency, and this issue is clearly important for understanding the conceptual basis of human-robot interaction. This research takes a wide range of approaches, but no systematic model of reasoning about agency has combined the concepts and processes involved in agency reasoning comprehensively enough to support research exploring issues such as conceptual change in reasoning about agents and the interaction between concepts about agents and visual attention. Our goal in this paper is to develop a transition model of reasoning about agency that achieves three important goals. First, we aim to specify the different kinds of knowledge that are likely to be accessed when people reason about agents. Second, we specify the circumstances under which these different kinds of knowledge might be accessed and changed. Finally, we discuss how this knowledge might affect basic psychological processes of attention and memory. Our approach is to first describe the transition model, then to discuss how it might be applied in two specific domains: computer interfaces that allow a single operator to track multiple robots, and a teachable agent system currently in use to assist primary and middle school students in learning natural science concepts.
Presentation of (telepresent) self: on the double-edged effects of mirrors BIBAFull-Text 381-388
  Leila Takayama; Helen Harris
Mobile remote presence systems present new opportunities and challenges for physically distributed people to meet and work together. One of the challenges observed from a couple of years of using Texai, a mobile remote presence (MRP) system, is that remote operators are often unaware of how they present themselves through the MRP. Problems arise when remote operators are not clearly visible through the MRP video display; this mistake makes the MRP operators look like anonymous intruders into the local space rather than approachable colleagues. To address this problem, this study explores the effects of visual feedback for remote teleoperators, using a controlled experiment in which mirrors were either present or absent in the local room with the MRP system (N=24). Participants engaged in a warm-up remote communication task followed by a remote driving task. Compared to mirrors-absent participants, mirrors-present participants were more visible on the MRP screens and practiced navigating longer. However, the mirrors-present participants also reported experiencing more frustration and having less fun. Implications for theory and design are discussed.
Are you looking at me?: perception of robot attention is mediated by gaze type and group size BIBAFull-Text 389-396
  Henny Admoni; Bradley Hayes; David Feil-Seifer; Daniel Ullman; Brian Scassellati
Studies in HRI have shown that people follow and understand robot gaze. However, only a few studies to date have examined the time-course of a meaningful robot gaze, and none have directly investigated what type of gaze is best for eliciting the perception of attention. This paper investigates two types of gaze behaviors -- short, frequent glances and long, less frequent stares -- to find which behavior is better at conveying a robot's visual attention. We describe the development of a programmable research platform from MyKeepon toys, and the use of these programmable robots to examine the effects of gaze type and group size on the perception of attention. In our experiment, participants viewed a group of MyKeepon robots executing random motions, occasionally fixating on various points in the room or directly on the participant. We varied type of gaze fixations within participants and group size between participants. Results show that people are more accurate at recognizing shorter, more frequent fixations than longer, less frequent ones, and that their performance improves as group size decreases. From these results, we conclude that multiple short gazes are preferable for indicating attention over one long gaze, and that the visual search for robot attention is susceptible to group size effects.

Video session

Emo-Bin: how to recycle more by using emoticons BIBAFull-Text 397-398
  Jose Berengueres; Fatma Alsuwairi; Nazar Zaki; Tony Ng
In the UAE, only 10% of PET bottles are recycled. We introduce an emoticon bin, a recycle bin that rewards users with smiles and sounds. When a user recycles, the bin smiles. We show that by exploiting human responsiveness to emoticons, recycling rates increase threefold.
LSInvaders: cross reality environment inspired by the arcade game space invaders BIBAFull-Text 399-400
  Anna Fusté; Judith Amores; Sergi Perdices; Santi Ortega; David Miralles
LSInvaders was born from the desire to explore collaboration between humans and robots. We have developed a project with the aim of unifying different types of interactions in a cooperative environment. The project is inspired by the arcade game Space Invaders. The difference between Space Invaders and LSInvaders is that the user receives the help of a physical robot with artificial intelligence to get through the level. Although the system itself is a basic game, and it might seem to make no difference whether the robot is embodied or a typical virtual AI agent, its participation as a real element improves the gameplay.
Natural interaction for object hand-over BIBAFull-Text 401-402
  Mamoun Gharbi; Séverin Lemaignan; Jim Mainprice; Rachid Alami
The video presents in a didactic way several abilities and algorithms required to achieve interactive "pick and give" tasks in a human environment. Communication between the human and the robot relies on unconstrained verbal dialogue, the robot uses multi-modal perception to track the human and its environment, and implements real-time 3D motion planning algorithms to achieve collision-free and human-aware interactive manipulation.
iRIS: a remote surrogate for mutual reference BIBAFull-Text 403-404
  Hiroaki Kawanobe; Yoshifumi Aosaki; Hideaki Kuzuoka; Yusuke Suzuki
In this video, we introduce iRIS, a remote surrogate robot that facilitates mutual reference to a physical object at a distance. The robot has a display that shows the remote participant's head, mounted on a 3-DOF neck. The robot also has a built-in projector that enables a remote participant to show his/her actual hand gestures over a physical object in the local environment.
New clay for digital natives' HRI: create your own interactions with SiCi BIBAFull-Text 405-406
  Jae-Hyun Kim; Jae-Hoon Jung; Jin-Sung Kim; Yong-Gyu Jin; Jung-Yun Sung; Se-Min Oh; Jae-Sung Ryu; Hyo-Yong Kim; Soo-Hee Han; Hye-Kyung Cho
This work-in-progress video introduces SiCi (smart ideas for creative interplay), an authoring tool for creating a new type of robot content by combining interactions among multimedia entities in the virtual world with robots in the real world.
CULOT: sociable creature for child's playground BIBAFull-Text 407-408
  Nozomi Kina; Daiki Tanaka; Naoki Ohshima; P. Ravindra S. De Silva; Michio Okada
The video shows CULOT, a sociable robot and playground character that establishes play routines with children, developing a playground language through inarticulate sounds synchronized with its movements and body gestures.
"Talking to my robot": from knowledge grounding to dialogue processing BIBAFull-Text 409-410
  Séverin Lemaignan; Rachid Alami
The video presents in a didactic way the tools developed at LAAS-CNRS and related to symbol grounding and natural language processing for companion robots.
   It mainly focuses on two of them: the ORO-server knowledge base and the Dialogs natural language processor. These two tools enable three cognitive functions that allow for better natural interaction between humans and robots: a theory of mind built upon perspective taking; multi-modal communication that combines verbal input with gestures; and a limited symbol-grounding capability with a disambiguation mechanism supported by the first two cognitive abilities.
The Oriboos going to Nepal: a story of playful encounters BIBAFull-Text 411-412
  Elena Márquez Segura; Jin Moen; Annika Waern; Adrián Onco Orduna
We created a fictional story about a bunch of interactive robot toys, the Oriboos, which travel to different schools where children interact and play with them. The story is based on two workshops done in Sweden and Nepal.
Talking-ally: towards persuasive communication BIBAFull-Text 413-414
  Yuki Odahara; Youhei Kurata; Naoki Ohshima; P. Ravindra S. De Silva; Michio Okada
We develop a social robot (Talking-Ally) capable of adapting to the state of the person (addressee) through an utterance generation mechanism (addressivity) that refers to the hearer's resources (hearership), in order to persuade the user through dynamic interactions.
A model of handing interaction towards a pedestrian BIBAFull-Text 415-416
  Chao Shi; Masahiro Shiomi; Christian Smith; Takayuki Kanda; Hiroshi Ishiguro
This video reports our research on developing a model for a robot handing flyers to pedestrians. The difficulty is that potential receivers are pedestrians who are not necessarily cooperative; thus, the robot needs to plan its motion appropriately, making it easy and non-obstructive for potential receivers to take the flyers. To establish a model, we analyzed human interaction and found that (1) a giver approaches a pedestrian from the frontal right/left but not the frontal center, and (2) the giver stops walking and extending the arm at the same moment as handing out the flyer. Using these findings, we established a model for a robot to perform natural, proactive handing. The proposed model is implemented in a humanoid robot and confirmed as effective in a field experiment.
A dog tail for communicating robotic states BIBAFull-Text 417-418
  Ashish Singh; James E. Young
We present a dog-tail interface for communicating abstract affective robotic states. We believe that people have enough passing knowledge to understand basic dog-tail language (e.g., tail wagging means happy), and that this knowledge can be leveraged to understand the affective states of a robot. For example, by appearing energetic, the robot can suggest that it has a full battery and does not need charging. To investigate this, we built a robotic tail interface to communicate a robot's affective states and conducted an exploratory user study of how low-level tail parameters, such as speed, influence people's perceptions of affect. In this paper, we briefly describe our study design and the results obtained.
Coaching robots with biosignals based on human affective social behaviors BIBAFull-Text 419-420
  Kenji Suzuki; Anna Gruebler; Vincent Berenz
We introduce a novel paradigm of social interaction between humans and robots: a style of coaching humanoid robots through interaction with a human instructor, who provides reinforcement via affective/social behaviors and biological signals. In particular, facial electromyography (EMG), captured with a personal wearable device to sense the affective human response, is used as guidance or feedback to shape robot behavior. Through real-time pattern classification, facial expressions can be identified from the EMG signals and interpreted as positive or negative responses from the human. We also developed a behavior-based architecture for testing this approach in the context of complex reactive robot behaviors.
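The coaching loop described above can be sketched as follows. This is a toy stand-in, not the authors' classifier or architecture: the muscle features, the threshold rule, and the value-update scheme are illustrative assumptions about how an affective response could act as a reinforcement signal.

```python
# Hypothetical coaching loop: facial-EMG features (e.g., smiling-muscle
# vs. frowning-muscle activation) are classified as a positive or
# negative response and applied as a reward to the robot's last action.
action_values = {"wave": 0.0, "bow": 0.0}
LEARNING_RATE = 0.5

def classify_emg(zygomaticus, corrugator):
    """Toy pattern classifier: smiling muscle dominant -> positive (+1),
    frowning muscle dominant -> negative (-1)."""
    return 1.0 if zygomaticus > corrugator else -1.0

def coach(action, zygomaticus, corrugator):
    """Shape the robot's action preferences from the instructor's
    affective response, via a simple exponential moving average."""
    reward = classify_emg(zygomaticus, corrugator)
    action_values[action] += LEARNING_RATE * (reward - action_values[action])

coach("wave", zygomaticus=0.8, corrugator=0.1)   # instructor smiles
coach("bow", zygomaticus=0.1, corrugator=0.7)    # instructor frowns
```

After these two coaching events, the robot's preference for the smiled-at action rises while the frowned-at action is suppressed, which is the essence of reinforcement-by-affect.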
Interactive object modeling & labeling for service robots BIBAFull-Text 421-422
  Alexander J. B. Trevor; John G., III Rogers; Akansel Cosgun; Henrik I. Christensen
We present an interactive object modeling and labeling system for service robots. The system enables a user to interactively create object models for a set of objects. Users also provide a label for each object, allowing it to be referenced later. Interaction with the robot occurs via a combination of a smartphone UI and pointing gestures.
Swimoid: interacting with an underwater buddy robot BIBFull-Text 423-424
  Yu Ukai; Jun Rekimoto
Robot George: interactive continuous learning of visual concepts BIBAFull-Text 425-426
  Michael Zillich; Kai Zhou; Danijel Skocaj; Matej Kristan; Alen Vrecko; Miroslav Janícek; Geert-Jan M. Kruijff; Thomas Keller; Marc Hanheide; Nick Hawes; Marko Mahnic
The video presents the robot George learning visual concepts in dialogue with a tutor.
I sing the body electric: an experimental theatre play with robots BIBFull-Text 427-428
  Jakub Zlotowski; Timo Bleeker; Christoph Bartneck; Ryan Reynolds

Plenary talk by Tomotaka Takahashi

The creation of a new robot era BIBAFull-Text 429-430
  Tomotaka Takahashi
In recent years, the technology trend may seem to be shifting away from technology itself and toward humanlike aspects. Intuitive and comfortable operating environments, and the technologies that enable them, are beginning to draw attention, rather than the "high performance" and "high specifications" that advertising messages emphasize today.
   The sense of distance between humans and mechatronic products is accordingly shrinking, and the information available through interaction has swelled far beyond our daily life. The compact humanoid robot that can communicate with us stands at the leading edge of this technology roadmap. It might be like a smartphone with arms and legs, or a state-of-the-art "Tinker Bell", and everyone will be able to exchange information through casual conversation with such humanoids.
   We can enjoy innocent conversation with a humanoid thanks to a kind of personification.
   As a result, it allows us to gather information from our daily lives and use it to command and control mechatronic products, services, and information.
   I would like to discuss such a future life with intelligent humanoids, which could be realized within the next 15 years, together with a demonstration of a latest humanoid.


HRI face-to-face: gaze and speech communication (fifth workshop on eye-gaze in intelligent human-machine interaction) BIBAFull-Text 431-432
  Frank Broz; Hagen Lehmann; Bilge Mutlu; Yukiko Nakano
The purpose of this workshop is to explore the relationship between gaze and speech during "face-to-face" human-robot interaction. As advances in speech recognition have made speech-based interaction with robots possible, it has become increasingly apparent that robots need to exhibit nonverbal social cues in order to disambiguate and structure their spoken communication with humans. Gaze behavior is one of the most powerful and fundamental sources of supplementary information to spoken communication. Gaze structures turn-taking, indicates attention, and implicitly communicates information about social roles and relationships. There is a growing body of work on gaze and speech based interaction in HRI, involving both the measurement and evaluation of human speech and gaze during interaction with robots and the design and implementation of robot speech and accompanying gaze behavior for interaction with humans. The Face-to-Face workshop at HRI 2013 aims to bring together researchers working on both speech and gaze communication in human-robot interaction to share results and discuss how to advance the state of the art in this emerging area of research.
Design of human likeness in HRI from uncanny valley to minimal design BIBAFull-Text 433-434
  Hidenobu Sumioka; Takashi Minato; Yoshio Matsumoto; Pericle Salvini; Hiroshi Ishiguro
Human likeness of social agents is crucial for human partners to interact with the agents intuitively, because it makes the partners unconsciously respond to the agents in the same manner as they do to other people. Although many studies suggest that an agent's human likeness plays an important role in human-robot interaction, it remains unclear how to design a humanlike form that evokes interpersonal behavior from human partners. One approach is to make a copy of an existing person. Although this extreme helps us explore how we recognize another person, the uncanny valley effect must be taken into account. Basic questions, including why we experience the uncanny valley and how we overcome it, should be addressed to give new insights into the underlying mechanisms of our perception of human likeness. Another approach is to extract the crucial elements that represent human appearance and behavior, as addressed in the design of computer-animated human characters. Exploring the minimal requirements for evoking interpersonal behavior from human partners provides a more effective and simpler way to design social agents that facilitate communication with humans.
   This full-day workshop aims to bring together prominent researchers from different backgrounds to present and discuss the most recent achievements in the design of human likeness, across a wide range of research topics from uncanny valley effects to the minimal design of human-robot communication.
Collaborative manipulation: new challenges for robotics and HRI BIBAFull-Text 435-436
  Anca D. Dragan; Andrea L. Thomaz; Siddhartha S. Srinivasa
Autonomous manipulation has made tremendous progress in recent years, leveraging new algorithms and capabilities of mobile manipulators to address complex human environments. However, most current systems inadequately address one key feature of human environments: that they are populated with humans. What would it take for a human and robot to prepare a meal together in a kitchen, or to assemble a part together in a manufacturing workcell?
   Collaboration with humans is the next frontier in robotics, be it shared workspace collaboration, assistive teleoperation and sliding autonomy, or teacher-learner collaboration, and raises new challenges for both robotics and HRI. A collaborative robot must engage in a delicate dance of prediction and action, where it must understand its collaborator's intentions, act to make its own intentions clear, decide when to assist and when to back off, as well as continuously adapt its behavior and enable customization.
   Addressing these challenges demands a joint effort from the HRI and robotics communities. We believe that this workshop will not only serve to attract more roboticists into the HRI community under this unifying theme, but will also create much needed collaborations to explore this rich, interdisciplinary area.
HRI-2013 workshop on probabilistic approaches for robot control in human-robot interaction (PARC-HRI) BIBAFull-Text 437-438
  Amin Atrash; Ross Mead
The HRI-2013 Workshop on Probabilistic Approaches for Robot Control in Human-Robot Interaction (PARC-HRI) brings together researchers to discuss the application of probabilistic approaches to further enable robot autonomy in HRI, as well as to address the shortcomings and necessary improvements in current techniques needed for robust socially intelligent behavior. This half day workshop investigates the use of probabilistic approaches, such as Bayesian networks and Markov models, for robust robot control and decision-making under uncertainty. Target applications range from social behavior primitives -- such as gesture, eye gaze, and spacing -- to higher-level interaction planning and management systems.
HRI pioneers workshop 2013 BIBAFull-Text 439-440
  Solace Shen; Astrid Rosenthal-von der Pütten; Henny Admoni; Matt Beane; Caroline Harriott; Yasuhiko Hato; Yunkyung Kim; Daniel Lazewatsky; Matt Marge; Robin Read; Marynel Vázquez; Steve Vozar
The 2013 Human-Robot Interaction (HRI) Pioneers Workshop, which will be held in conjunction with the 8th ACM/IEEE International Conference on Human-Robot Interaction, is the premier venue for student research in the field. This highly selective workshop seeks to foster creativity, communication, and collaboration among the world's top student researchers in human-robot interaction. Pioneers Workshop participants will have the opportunity to learn about the current state of HRI, to present their research, and to network with one another and with select senior researchers in a setting that is less formal and more interactive than the main conference. The theme for this year is Re-envisioning HRI: Dealing With the Elephants in Our Room. The workshop aims to bring forward difficult-to-discuss issues to help the field achieve more balanced perspectives and a clearer picture of what sort of future we are building.
Applications for emotional robots BIBFull-Text 441-442
  Oliver Damm; Frank Hegel; Karoline Malchus; Britta Wrede; Manja Lohse