Proceedings of the 2014 ICMI Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction

Fullname: Proceedings of the 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction
Editors: Mary Ellen Foster; Manuel Giuliani; Ronald Petrick
Location: Istanbul, Turkey
Dates: 2014-Nov-16
Publisher: ACM
Standard No: ISBN 978-1-4503-0551-8; hcibib: MMRWHRI14
Papers: 10
Pages: 32
  1. Regular Papers
  2. Late-Breaking Papers

Regular Papers

Towards Closed Feedback Loops in HRI: Integrating InproTK and PaMini (pp. 1-6)
  Birte Carlmeyer; David Schlangen; Britta Wrede
In this paper, we present a first step towards incremental processing for modeling asynchronous human-robot interactions, to allow closed feedback loops in HRI. We achieve this by combining the incremental natural language processing framework InproTK with the human-robot dialog manager PaMini, which is based on generic interaction patterns. This enables the robot to provide incremental feedback during interaction and allows the user to give online feedback and corrections. We provide a first realization scenario as a proof of concept for our approach.
Attention Detection in Elderly People-Robot Spoken Interaction (pp. 7-12)
  Mohamed El Amine Sehili; Fan Yang; Laurence Devillers
In many human-robot social interactions where the robot is meant to interact with only one human throughout the interaction, the human side of the conversation is very likely to also interact with other humans present in the same room and to temporarily lose focus on the main interaction. These human-human interactions can range from a very brief chat to a lengthy discussion. To build an effective human-robot spoken interaction system, the robot should be made aware of the situations in which it is (or is not) the addressee. Many works use gaze tracking and audio localization techniques to detect the subject's attention. In this work, we use a combination of voice analysis and head-turning detection to determine whether the subject is addressing the robot or whether their attention has shifted to another person. A subset of the ROMEO2 project corpus is used for the experiments; the corpus comprises 9 hours of social interaction between 27 elderly people and a humanoid robot. This work is done in the context of the ROMEO2 project, whose goal is to develop a humanoid robot that can act as a comprehensive assistant for persons suffering from loss of autonomy.
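A minimal rule-based sketch of the cue combination described in this abstract: speech activity, loudness, and head orientation are fused into an addressee decision. The Frame fields, thresholds, and the fusion rule are illustrative assumptions, not the authors' implementation, which may well use learned classifiers.

```python
# Sketch: is the subject addressing the robot? Fuses a voice-analysis cue
# (speech activity and energy) with a head-turning cue. All thresholds and
# feature names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Frame:
    speech_active: bool   # voice activity detected in this frame
    speech_energy: float  # normalised loudness, 0..1
    head_yaw_deg: float   # head orientation relative to the robot

HEAD_AWAY_DEG = 35.0      # assumed: beyond this, the head faces elsewhere
QUIET_ENERGY = 0.3        # assumed: soft speech often targets a neighbour

def robot_is_addressee(frame: Frame) -> bool:
    """True if the subject appears to be addressing the robot."""
    if not frame.speech_active:
        return False      # nobody is speaking
    facing_robot = abs(frame.head_yaw_deg) <= HEAD_AWAY_DEG
    loud_enough = frame.speech_energy >= QUIET_ENERGY
    # Both cues must agree: quiet speech or an averted head suggests
    # the subject is talking to another person in the room.
    return facing_robot and loud_enough

if __name__ == "__main__":
    print(robot_is_addressee(Frame(True, 0.8, 5.0)))   # facing robot -> True
    print(robot_is_addressee(Frame(True, 0.2, 60.0)))  # side chat -> False
```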
Advances in Wikipedia-based Interaction with Robots (pp. 13-18)
  Graham Wilcock; Kristiina Jokinen
The paper describes advances in Wikipedia-based human-robot interaction. After reviewing the current capabilities of talking robots that use Wikipedia as an information source, the paper presents methods that support new capabilities. These include language-switching and multimodal behaviour-switching when using Wikipedias in many languages, robot initiatives to suggest new topics from Wikipedia based on semantic similarity to the current topic, and the capability of the robot to listen to the user talking and to recognize entities mentioned by the user that have Wikipedia links.
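A minimal sketch of the topic-suggestion step described above: rank candidate articles by similarity to the current topic and propose the closest one. Jaccard word overlap between article summaries stands in here for the paper's semantic similarity measure, and the hard-coded summaries stand in for live Wikipedia lookups; both are assumptions for illustration.

```python
# Sketch: suggest the next conversation topic as the article whose summary
# is most similar to the current one. Summaries and the similarity measure
# are illustrative stand-ins, not the paper's actual method.

summaries = {
    "Istanbul": "largest city in Turkey spanning Europe and Asia on the Bosphorus",
    "Bosphorus": "strait in Turkey connecting the Black Sea to the Sea of Marmara",
    "Robotics": "engineering discipline covering the design and operation of robots",
}

def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two texts, 0..1."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def suggest_next_topic(current: str) -> str:
    """Pick the candidate article most similar to the current topic."""
    candidates = [t for t in summaries if t != current]
    return max(candidates, key=lambda t: jaccard(summaries[current], summaries[t]))

print(suggest_next_topic("Istanbul"))  # "Bosphorus" shares the most vocabulary
```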

Late-Breaking Papers

Self-calibration of an Assistive Device to Adapt to Different Users and Environments (pp. 19-20)
  Andrés Trujillo-León; Fernando Vidal-Verdú
In this paper, we describe the implementation of a strategy to increase the robustness of an assistive device to changes in the user and/or the environment. The device is a smart handle that can be mounted on an electric wheelchair or trolley. This handlebar replaces the attendant joystick to achieve more intuitive driving. Although the device has been successfully tested, there are concerns about its behavior when the driver or the surroundings change. Sensors have been added to the system to detect these changes and re-calibrate the system accordingly.
Towards proactive robot behavior based on incremental language analysis (pp. 21-22)
  Suna Bensch; Thomas Hellström
This paper describes ongoing and planned work on incremental language processing coupled with inference of expected robot actions. Utterances are processed word by word, simultaneously with inference of expected robot actions, thus enabling the robot to prepare for and react proactively to human utterances. We believe that such a model results in more natural human-robot communication, since proactive behavior is a feature of human-human communication.
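A minimal sketch of word-by-word processing with simultaneous action inference, as described above: the set of expected robot actions is narrowed after every word, so preparation can start before the utterance is complete. The keyword-to-action table is an illustrative assumption, not the authors' inference model.

```python
# Sketch of incremental language analysis: candidate robot actions are
# pruned word by word, enabling proactive preparation mid-utterance.
# The keyword-to-action mapping is an illustrative assumption.

KEYWORDS = {
    "bring": {"fetch_object"},
    "open": {"open_door", "open_gripper"},
    "door": {"open_door"},
    "cup": {"fetch_object"},
}

def incremental_actions(utterance: str):
    """Yield (word, expected actions) after each incoming word."""
    candidates = set().union(*KEYWORDS.values())  # all actions possible at start
    for word in utterance.lower().split():
        if word in KEYWORDS:
            candidates &= KEYWORDS[word]          # prune on each informative word
        yield word, set(candidates)

for word, expected in incremental_actions("please open the door"):
    print(f"after '{word}': expected actions = {expected}")
# After "open" the robot can already prepare both opening actions;
# after "door" only open_door remains, before the utterance has ended.
```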
Selection of an Object Requested by Speech Based on Generic Object Recognition (pp. 23-24)
  Hitoshi Nishimura; Yuko Ozasa; Yasuo Ariki; Mikio Nakano
In this paper, we propose a method by which a robot can select, from among several objects, the object specified by human speech, based on generic object recognition. Although object selection methods based on specific object recognition have been proposed, generic object recognition is more useful for selection in real environments. In the proposed method, an object is selected by integrating speech recognition results with generic object recognition results. We investigated how the method of narrowing down candidates based on speech and image recognition results affects object selection accuracy.
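A minimal sketch of the integration step described above: each candidate object receives a confidence score from speech recognition and one from generic object recognition, and the object with the best joint evidence is selected. The scores and the multiplicative fusion rule are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of late fusion for object selection: combine per-object speech
# and visual confidences and pick the argmax. Scores and the product rule
# are illustrative assumptions.

candidates = {
    # object id: (speech confidence, generic object recognition confidence)
    "red_cup":     (0.70, 0.60),
    "green_cup":   (0.65, 0.20),
    "blue_bottle": (0.10, 0.90),
}

def select_object(scores):
    """Return the object with the highest joint (speech * vision) score."""
    return max(scores, key=lambda obj: scores[obj][0] * scores[obj][1])

print(select_object(candidates))  # "red_cup": best joint evidence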
Clarification Dialogues for Perception-based Errors in Situated Human-Computer Dialogues (pp. 25-26)
  Niels Schütte; John D. Kelleher; Brian Mac Namee
We present an experiment on situated human-computer interaction. Participants interacted with a simulated robot system to complete a series of tasks in a situated environment. Errors were introduced into the robot's perception to produce misunderstandings. We recorded the interactions and attempted to identify the strategies the participants used to resolve the resulting problems.
Applying Topic Recognition to Spoken Language in Human-Robot Interaction Dialogues (pp. 27-28)
  Manuel Giuliani; Thomas Marschall; Manfred Tscheligi
Human-robot interaction systems that work in everyday situations need to be able to talk about different topics, for example when the robot is a bartender that serves drinks to human customers. We applied a topic recognition approach based on term frequency-inverse document frequency (TF-IDF) to a test set of spoken-language interactions between human customers and bartenders in German bars. We sorted the test set into five topics and evaluated our topic recognition with different topic corpora. Our evaluation shows that recognition accuracy reaches only 70.2% for certain topics and 30.0% on average, even for manually created topic corpora. This result suggests that a multimodal approach is needed to recognise topics in spoken language more reliably.
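A minimal sketch of TF-IDF topic recognition as described above, assuming scikit-learn is available: each topic corpus becomes one TF-IDF document, and an utterance is assigned to the most similar topic. The topic names and example sentences are English placeholders; the paper's corpora are German bar dialogues.

```python
# Sketch of TF-IDF topic recognition: vectorise one document per topic
# corpus, then assign an utterance to the topic with the highest cosine
# similarity. Topics and sentences are placeholders, not the paper's data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topic_corpora = {
    "ordering":   "I would like a beer please another drink one more pint",
    "small_talk": "nice weather today how are you doing lovely evening",
    "payment":    "the bill please how much is it can I pay by card",
}

topics = list(topic_corpora)
vectorizer = TfidfVectorizer()
topic_matrix = vectorizer.fit_transform(topic_corpora.values())

def recognise_topic(utterance: str) -> str:
    """Assign the utterance to the most similar topic corpus."""
    sims = cosine_similarity(vectorizer.transform([utterance]), topic_matrix)
    return topics[sims.argmax()]

print(recognise_topic("could I have one more beer"))  # -> "ordering"
```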
Applying Semantic Web Services to Multi-Robot Coordination (pp. 29-30)
  Yuhei Ogawa; Yuichiro Mori; Takahira Yamaguchi
This paper discusses a multi-robot coordination architecture with five layers: task, process, software module, ontology, and data, using the concept of Semantic Web Services (SWS), which facilitates the automated discovery and combination of software services. Given a task from a user, our system automatically discovers the necessary processes and combines them, using software modules, ontologies, and data. We apply our architecture to the case study of RobotCafe, coordinating several robots for receiving orders, making juice, carrying juice to a user, and so on.
Affective Feedback for a Virtual Robot in a Real-World Treasure Hunt (pp. 31-32)
  Mary Ellen Foster; Mei Yii Lim; Amol Deshmukh; Srini Janarthanam; Helen Hastie; Ruth Aylett
We explore the effect of the behaviour of a virtual robot agent in the context of a real-world treasure-hunt activity carried out by children aged 11-12. We compare three conditions: a traditional paper-based treasure hunt, and a virtual robot on a tablet that provides either neutral or affective feedback during the treasure hunt. The initial results of the study suggest that the use of the virtual robot increased the perceived difficulty of the instruction-following task, and that the affective robot feedback in particular made the questions seem more difficult to answer.