HCI Bibliography : Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,242,737
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: van_hamme_h* Results: 3 Sorted by: Date  Comments?
[1] Who's Speaking?: Audio-Supervised Classification of Active Speakers in Video Oral Session 3: Language, Speech and Dialog / Chakravarty, Punarjay / Mirzaei, Sayeh / Tuytelaars, Tinne / Van hamme, Hugo Proceedings of the 2015 International Conference on Multimodal Interaction 2015-11-09 p.87-90
ACM Digital Library Link
Summary: Active speakers have traditionally been identified in video by detecting their moving lips. This paper demonstrates active-speaker classification using spatio-temporal video features that aim to capture other cues: movement of the head, upper body, and hands of active speakers. Speaker directional information, obtained via sound source localization from a microphone array, is used to supervise the training of these video features.

[2] Learning Like a Toddler: Watching Television Series to Learn Vocabulary from Images and Audio Posters 3 / Yilmaz, Emre / Rematas, Konstantinos / Tuytelaars, Tinne / Van hamme, Hugo Proceedings of the 2014 ACM International Conference on Multimedia 2014-11-03 p.1189-1192
ACM Digital Library Link
Summary: This paper presents the initial findings of our efforts to build an unsupervised multimodal vocabulary learning scheme in a realistic scenario. For this purpose, a new multimodal dataset, called Musti3D, has been created. The Musti3D database contains episodes from an animation series for toddlers. Annotated with audiovisual information, this database is used to investigate a non-negative matrix factorization (NMF)-based audiovisual learning technique. The performance of the technique, i.e., correctly matching the audio and visual representations of objects, has been evaluated by gradually reducing the level of supervision, starting from the ground-truth transcriptions. Moreover, we have performed experiments using different visual representations and time spans for combining the audiovisual information. The preliminary results show the feasibility of the proposed audiovisual learning framework.

[3] EDITED BOOK Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken Dialog Systems into Practice / Mariani, Joseph / Rosset, Sophie / Garnier-Rizet, Martine / Devillers, Laurence 2014 p.397 Springer New York
ISBN: 978-1-4614-8279-6 (print), 978-1-4614-8280-2 (online)
Link to Digital Content at Springer
== Spoken Dialog Systems in Everyday Applications ==
Spoken Language Understanding for Natural Interaction: The Siri Experience (3-14)
	+ Bellegarda, Jerome R.
Development of Speech-Based In-Car HMI Concepts for Information Exchange Internet Apps (15-28)
	+ Hofmann, Hansjörg
	+ Silberstein, Anna
	+ Ehrlich, Ute
	+ Berton, André
	+ Müller, Christian
	+ Mahr, Angela
Real Users and Real Dialog Systems: The Hard Challenge for SDS (29-36)
	+ Black, Alan W.
	+ Eskenazi, Maxine
A Multimodal Multi-device Discourse and Dialogue Infrastructure for Collaborative Decision-Making in Medicine (37-47)
	+ Sonntag, Daniel
	+ Schulz, Christian
== Spoken Dialog Prototypes and Products ==
Yochina: Mobile Multimedia and Multimodal Crosslingual Dialogue System (51-57)
	+ Xu, Feiyu
	+ Schmeier, Sven
	+ Ai, Renlong
	+ Uszkoreit, Hans
Walk This Way: Spatial Grounding for City Exploration (59-67)
	+ Boye, Johan
	+ Fredriksson, Morgan
	+ Götze, Jana
	+ Gustafson, Joakim
	+ Königsmann, Jürgen
Multimodal Dialogue System for Interaction in AmI Environment by Means of File-Based Services (69-77)
	+ Ábalos, Nieves
	+ Espejo, Gonzalo
	+ López-Cózar, Ramón
	+ Ballesteros, Francisco J.
	+ Soriano, Enrique
	+ Guardiola, Gorka
Development of a Toolkit Handling Multiple Speech-Oriented Guidance Agents for Mobile Applications (79-85)
	+ Hara, Sunao
	+ Kawanami, Hiromichi
	+ Saruwatari, Hiroshi
	+ Shikano, Kiyohiro
Providing Interactive and User-Adapted E-City Services by Means of Voice Portals (87-98)
	+ Griol, David
	+ García-Jiménez, María
	+ Callejas, Zoraida
	+ López-Cózar, Ramón
== Multi-domain, Crosslingual Spoken Dialog Systems ==
Efficient Language Model Construction for Spoken Dialog Systems by Inducting Language Resources of Different Languages (101-110)
	+ Misu, Teruhisa
	+ Matsuda, Shigeki
	+ Mizukami, Etsuo
	+ Kashioka, Hideki
	+ Li, Haizhou
Towards Online Planning for Dialogue Management with Rich Domain Knowledge (111-123)
	+ Lison, Pierre
A Two-Step Approach for Efficient Domain Selection in Multi-Domain Dialog Systems (125-131)
	+ Lee, Injae
	+ Kim, Seokhwan
	+ Kim, Kyungduk
	+ Lee, Donghyeon
	+ Choi, Junhwi
	+ Ryu, Seonghan
	+ Lee, Gary Geunbae
== Human-Robot Interaction ==
From Informative Cooperative Dialogues to Long-Term Social Relation with a Robot (135-151)
	+ Buendia, Axel
	+ Devillers, Laurence
Integration of Multiple Sound Source Localization Results for Speaker Identification in Multiparty Dialogue System (153-165)
	+ Nakashima, Taichi
	+ Komatani, Kazunori
	+ Sato, Satoshi
Investigating the Social Facilitation Effect in Human-Robot Interaction (167-177)
	+ Wechsung, Ina
	+ Ehrenbrink, Patrick
	+ Schleicher, Robert
	+ Möller, Sebastian
More Than Just Words: Building a Chatty Robot (179-185)
	+ Gilmartin, Emer
	+ Campbell, Nick
Predicting When People Will Speak to a Humanoid Robot (187-198)
	+ Sugiyama, Takaaki
	+ Komatani, Kazunori
	+ Sato, Satoshi
Designing an Emotion Detection System for a Socially Intelligent Human-Robot Interaction (199-211)
	+ Chastagnol, Clément
	+ Clavel, Céline
	+ Courgeon, Matthieu
	+ Devillers, Laurence
Multimodal Open-Domain Conversations with the Nao Robot (213-224)
	+ Jokinen, Kristiina
	+ Wilcock, Graham
Component Pluggable Dialogue Framework and Its Application to Social Robots (225-237)
	+ Jiang, Ridong
	+ Tan, Yeow Kee
	+ Limbu, Dilip Kumar
	+ Dung, Tran Anh
	+ Li, Haizhou
== Spoken Dialog Systems Components ==
Visual Contribution to Word Prominence Detection in a Playful Interaction Setting (241-247)
	+ Heckmann, Martin
Label Noise Robustness and Learning Speed in a Self-Learning Vocal User Interface (249-259)
	+ Ons, Bart
	+ Gemmeke, Jort F.
	+ Van hamme, Hugo
Topic Classification of Spoken Inquiries Using Transductive Support Vector Machine (261-267)
	+ Torres, Rafael
	+ Kawanami, Hiromichi
	+ Matsui, Tomoko
	+ Saruwatari, Hiroshi
	+ Shikano, Kiyohiro
Frame-Level Selective Decoding Using Native and Non-native Acoustic Models for Robust Speech Recognition to Native and Non-native Speech (269-274)
	+ Oh, Yoo Rhee
	+ Chung, Hoon
	+ Kang, Jeom-ja
	+ Lee, Yun Keun
Analysis of Speech Under Stress and Cognitive Load in USAR Operations (275-281)
	+ Charfuelan, Marcela
	+ Kruijff, Geert-Jan
== Dialog Management ==
Does Personality Matter? Expressive Generation for Dialogue Interaction (285-301)
	+ Walker, Marilyn A.
	+ Sawyer, Jennifer
	+ Lin, Grace
	+ Wing, Sam
Application and Evaluation of a Conditioned Hidden Markov Model for Estimating Interaction Quality of Spoken Dialogue Systems (303-312)
	+ Ultes, Stefan
	+ ElChab, Robert
	+ Minker, Wolfgang
FLoReS: A Forward Looking, Reward Seeking, Dialogue Manager (313-325)
	+ Morbini, Fabrizio
	+ DeVault, David
	+ Sagae, Kenji
	+ Gerten, Jillian
	+ Nazarian, Angela
	+ Traum, David
A Clustering Approach to Assess Real User Profiles in Spoken Dialogue Systems (327-334)
	+ Callejas, Zoraida
	+ Griol, David
	+ Engelbrecht, Klaus-Peter
	+ López-Cózar, Ramón
What Are They Achieving Through the Conversation? Modeling Guide-Tourist Dialogues by Extended Grounding Networks (335-341)
	+ Mizukami, Etsuo
	+ Kashioka, Hideki
Co-adaptation in Spoken Dialogue Systems (343-353)
	+ Chandramohan, Senthilkumar
	+ Geist, Matthieu
	+ Lefèvre, Fabrice
	+ Pietquin, Olivier
Developing Non-goal Dialog System Based on Examples of Drama Television (355-361)
	+ Nio, Lasguido
	+ Sakti, Sakriani
	+ Neubig, Graham
	+ Toda, Tomoki
	+ Adriani, Mirna
	+ Nakamura, Satoshi
A User Model for Dialog System Evaluation Based on Activation of Subgoals (363-374)
	+ Engelbrecht, Klaus-Peter
Real-Time Feedback System for Monitoring and Facilitating Discussions (375-387)
	+ Sarda, Sanat
	+ Constable, Martin
	+ Dauwels, Justin
	+ Dauwels (Okutsu), Shoko
	+ Elgendi, Mohamed
	+ Mengyu, Zhou
	+ Rasheed, Umer
	+ Tahir, Yasir
	+ Thalmann, Daniel
	+ Magnenat-Thalmann, Nadia
Evaluation of Invalid Input Discrimination Using Bag-of-Words for Speech-Oriented Guidance System (389-397)
	+ Majima, Haruka
	+ Torres, Rafael
	+ Kawanami, Hiromichi
	+ Hara, Sunao
	+ Matsui, Tomoko
	+ Saruwatari, Hiroshi
	+ Shikano, Kiyohiro