HCI Bibliography: Search Results
Database updated: 2016-05-10 Searches since 2006-12-01: 32,242,741
Hosted by ACM SIGCHI
The HCI Bibliography was moved to a new server on 2015-05-12 and again on 2016-01-05, substantially degrading the environment for making updates.
There are no plans to add to the database.
Please send questions or comments to director@hcibib.org.
Query: matsui_t* Results: 11 Sorted by: Date
[1] An Experimental Study on the Effect of Repeated Exposure of Facial Caricature on Memory Representation of a Model's Face User Studies / Tawatsuji, Yoshimasa / Iizuka, Yuki / Matsui, Tatsunori HCI International 2015: 17th International Conference on HCI, Part III: Users and Contexts 2015-08-02 v.3 p.514-524
Keywords: Face recognition; Facial caricature; Facial similarity
Link to Digital Content at Springer
Summary: Why can humans identify a facial caricature with its model's face? We hypothesize that exposure to a facial caricature shifts a person's memory representation of the model's face toward the caricature itself, which causes the person to feel that the model's face and its caricature are similar. On this basis, we conducted an experiment to verify whether continuous exposure to a facial caricature changes participants' memory representations and whether the exposure also evokes a feeling of similarity between the two.

[2] Proposal for the Model of Occurrence of Negative Response toward Humanlike Agent Based on Brain Function by Qualitative Reasoning Emotions Recognition / Tawatsuji, Yoshimasa / Muramatsu, Keiichi / Matsui, Tatsunori HCI International 2014: 16th International Conference on HCI, Part II: Advanced Interaction Modalities and Techniques 2014-06-22 v.2 p.768-778
Keywords: Human Agent Interaction; uncanny valley; brain function; qualitative reasoning
Link to Digital Content at Springer
Summary: In designing well-rounded communication between humans and agents, a humanlike agent appearance can help humans understand the agent's intentions. However, excessive humanlike-ness can cause humans to feel repulsion toward the agent, a phenomenon well known as the uncanny valley. In this study, we propose a model explaining how this negative human response is formed, based on brain regions and their functions, including the amygdala, hippocampus, cortex, and striatum. The model is described with qualitative reasoning and simulated. The results indicate that as a human observes a humanlike agent, the emotion becomes negative and the brain regions are more activated than when the human observes a person.

[3] EDITED BOOK Natural Interaction with Robots, Knowbots and Smartphones: Putting Spoken Dialog Systems into Practice / Mariani, Joseph / Rosset, Sophie / Garnier-Rizet, Martine / Devillers, Laurence 2014 p.397 Springer New York
ISBN: 978-1-4614-8279-6 (print), 978-1-4614-8280-2 (online)
Link to Digital Content at Springer
== Spoken Dialog Systems in Everyday Applications ==
Spoken Language Understanding for Natural Interaction: The Siri Experience (3-14)
	+ Bellegarda, Jerome R.
Development of Speech-Based In-Car HMI Concepts for Information Exchange Internet Apps (15-28)
	+ Hofmann, Hansjörg
	+ Silberstein, Anna
	+ Ehrlich, Ute
	+ Berton, André
	+ Müller, Christian
	+ Mahr, Angela
Real Users and Real Dialog Systems: The Hard Challenge for SDS (29-36)
	+ Black, Alan W.
	+ Eskenazi, Maxine
A Multimodal Multi-device Discourse and Dialogue Infrastructure for Collaborative Decision-Making in Medicine (37-47)
	+ Sonntag, Daniel
	+ Schulz, Christian
== Spoken Dialog Prototypes and Products ==
Yochina: Mobile Multimedia and Multimodal Crosslingual Dialogue System (51-57)
	+ Xu, Feiyu
	+ Schmeier, Sven
	+ Ai, Renlong
	+ Uszkoreit, Hans
Walk This Way: Spatial Grounding for City Exploration (59-67)
	+ Boye, Johan
	+ Fredriksson, Morgan
	+ Götze, Jana
	+ Gustafson, Joakim
	+ Königsmann, Jürgen
Multimodal Dialogue System for Interaction in AmI Environment by Means of File-Based Services (69-77)
	+ Ábalos, Nieves
	+ Espejo, Gonzalo
	+ López-Cózar, Ramón
	+ Ballesteros, Francisco J.
	+ Soriano, Enrique
	+ Guardiola, Gorka
Development of a Toolkit Handling Multiple Speech-Oriented Guidance Agents for Mobile Applications (79-85)
	+ Hara, Sunao
	+ Kawanami, Hiromichi
	+ Saruwatari, Hiroshi
	+ Shikano, Kiyohiro
Providing Interactive and User-Adapted E-City Services by Means of Voice Portals (87-98)
	+ Griol, David
	+ García-Jiménez, María
	+ Callejas, Zoraida
	+ López-Cózar, Ramón
== Multi-domain, Crosslingual Spoken Dialog Systems ==
Efficient Language Model Construction for Spoken Dialog Systems by Inducting Language Resources of Different Languages (101-110)
	+ Misu, Teruhisa
	+ Matsuda, Shigeki
	+ Mizukami, Etsuo
	+ Kashioka, Hideki
	+ Li, Haizhou
Towards Online Planning for Dialogue Management with Rich Domain Knowledge (111-123)
	+ Lison, Pierre
A Two-Step Approach for Efficient Domain Selection in Multi-Domain Dialog Systems (125-131)
	+ Lee, Injae
	+ Kim, Seokhwan
	+ Kim, Kyungduk
	+ Lee, Donghyeon
	+ Choi, Junhwi
	+ Ryu, Seonghan
	+ Lee, Gary Geunbae
== Human-Robot Interaction ==
From Informative Cooperative Dialogues to Long-Term Social Relation with a Robot (135-151)
	+ Buendia, Axel
	+ Devillers, Laurence
Integration of Multiple Sound Source Localization Results for Speaker Identification in Multiparty Dialogue System (153-165)
	+ Nakashima, Taichi
	+ Komatani, Kazunori
	+ Sato, Satoshi
Investigating the Social Facilitation Effect in Human--Robot Interaction (167-177)
	+ Wechsung, Ina
	+ Ehrenbrink, Patrick
	+ Schleicher, Robert
	+ Möller, Sebastian
More Than Just Words: Building a Chatty Robot (179-185)
	+ Gilmartin, Emer
	+ Campbell, Nick
Predicting When People Will Speak to a Humanoid Robot (187-198)
	+ Sugiyama, Takaaki
	+ Komatani, Kazunori
	+ Sato, Satoshi
Designing an Emotion Detection System for a Socially Intelligent Human-Robot Interaction (199-211)
	+ Chastagnol, Clément
	+ Clavel, Céline
	+ Courgeon, Matthieu
	+ Devillers, Laurence
Multimodal Open-Domain Conversations with the Nao Robot (213-224)
	+ Jokinen, Kristiina
	+ Wilcock, Graham
Component Pluggable Dialogue Framework and Its Application to Social Robots (225-237)
	+ Jiang, Ridong
	+ Tan, Yeow Kee
	+ Limbu, Dilip Kumar
	+ Dung, Tran Anh
	+ Li, Haizhou
== Spoken Dialog Systems Components ==
Visual Contribution to Word Prominence Detection in a Playful Interaction Setting (241-247)
	+ Heckmann, Martin
Label Noise Robustness and Learning Speed in a Self-Learning Vocal User Interface (249-259)
	+ Ons, Bart
	+ Gemmeke, Jort F.
	+ Van hamme, Hugo
Topic Classification of Spoken Inquiries Using Transductive Support Vector Machine (261-267)
	+ Torres, Rafael
	+ Kawanami, Hiromichi
	+ Matsui, Tomoko
	+ Saruwatari, Hiroshi
	+ Shikano, Kiyohiro
Frame-Level Selective Decoding Using Native and Non-native Acoustic Models for Robust Speech Recognition to Native and Non-native Speech (269-274)
	+ Oh, Yoo Rhee
	+ Chung, Hoon
	+ Kang, Jeom-ja
	+ Lee, Yun Keun
Analysis of Speech Under Stress and Cognitive Load in USAR Operations (275-281)
	+ Charfuelan, Marcela
	+ Kruijff, Geert-Jan
== Dialog Management ==
Does Personality Matter? Expressive Generation for Dialogue Interaction (285-301)
	+ Walker, Marilyn A.
	+ Sawyer, Jennifer
	+ Lin, Grace
	+ Wing, Sam
Application and Evaluation of a Conditioned Hidden Markov Model for Estimating Interaction Quality of Spoken Dialogue Systems (303-312)
	+ Ultes, Stefan
	+ ElChab, Robert
	+ Minker, Wolfgang
FLoReS: A Forward Looking, Reward Seeking, Dialogue Manager (313-325)
	+ Morbini, Fabrizio
	+ DeVault, David
	+ Sagae, Kenji
	+ Gerten, Jillian
	+ Nazarian, Angela
	+ Traum, David
A Clustering Approach to Assess Real User Profiles in Spoken Dialogue Systems (327-334)
	+ Callejas, Zoraida
	+ Griol, David
	+ Engelbrecht, Klaus-Peter
	+ López-Cózar, Ramón
What Are They Achieving Through the Conversation? Modeling Guide--Tourist Dialogues by Extended Grounding Networks (335-341)
	+ Mizukami, Etsuo
	+ Kashioka, Hideki
Co-adaptation in Spoken Dialogue Systems (343-353)
	+ Chandramohan, Senthilkumar
	+ Geist, Matthieu
	+ Lefèvre, Fabrice
	+ Pietquin, Olivier
Developing Non-goal Dialog System Based on Examples of Drama Television (355-361)
	+ Nio, Lasguido
	+ Sakti, Sakriani
	+ Neubig, Graham
	+ Toda, Tomoki
	+ Adriani, Mirna
	+ Nakamura, Satoshi
A User Model for Dialog System Evaluation Based on Activation of Subgoals (363-374)
	+ Engelbrecht, Klaus-Peter
Real-Time Feedback System for Monitoring and Facilitating Discussions (375-387)
	+ Sarda, Sanat
	+ Constable, Martin
	+ Dauwels, Justin
	+ Dauwels (Okutsu), Shoko
	+ Elgendi, Mohamed
	+ Mengyu, Zhou
	+ Rasheed, Umer
	+ Tahir, Yasir
	+ Thalmann, Daniel
	+ Magnenat-Thalmann, Nadia
Evaluation of Invalid Input Discrimination Using Bag-of-Words for Speech-Oriented Guidance System (389-397)
	+ Majima, Haruka
	+ Torres, Rafael
	+ Kawanami, Hiromichi
	+ Hara, Sunao
	+ Matsui, Tomoko
	+ Saruwatari, Hiroshi
	+ Shikano, Kiyohiro

[4] Experimental Study Toward Modeling of the Uncanny Valley Based on Eye Movements on Human/Non-human Faces Gesture and Eye-Gaze Based Interaction / Tawatsuji, Yoshimasa / Kojima, Kazuaki / Matsui, Tatsunori HCI International 2013: 15th International Conference on HCI, Part IV: Interaction Modalities and Techniques 2013-07-21 v.4 p.398-407
Keywords: The uncanny valley; eye movements; dual pathway of emotion; humanlike agent
Link to Digital Content at Springer
Summary: In the research field of human-agent interaction, clarifying the effect of agent appearance on human impressions is a crucial issue, and the uncanny valley is one crucial topic. We hypothesize that people can perceive a humanlike agent as human at an early stage of interaction even if they eventually notice it is non-human, and that such contradictory perceptions are related to the uncanny valley. We conducted an experiment in which participants were asked to judge whether faces presented on a PC monitor were human or not. The faces were a doll, a CG-modeled human image fairly similar to a real human, an android robot, another image highly similar to a real human, and a person. Participants' eye movements were recorded while they watched the faces, and changes in how they observed the faces were studied. The results indicate that the eye data did not initially differ between the person and the fairly similar CG image, whereas differences emerged after several seconds. We then proposed a model of the uncanny valley based on the dual pathway of emotion.

[5] Experimental Study on Appropriate Reality of Agents as a Multi-modal Interface for Human-Computer Interaction Avatars and Embodied Interaction / Tanaka, Kaori / Matsui, Tatsunori / Kojima, Kazuaki HCI International 2011: 14th International Conference on Human-Computer Interaction, Part II: Interaction Techniques and Environments 2011-07-09 v.2 p.613-622
Keywords: Multi-modal agent; face; voice; similarity; familiarity; uncanny valley
Link to Digital Content at Springer
Summary: Although humanlike robots and computer agents are fundamentally recognized as familiar, highly similar external representations occasionally reduce their familiarity. We experimentally investigated the relationship between the similarity and familiarity of multi-modal agents that had face and voice representations, with the results indicating that greater similarity did not simply increase familiarity. Our experiments imply that the external representation of computer agents for communicative interaction should not be highly similar to humans but appropriately similar in order to gain familiarity.

[6] Extraction of User Interaction Patterns for Low-Usability Web Pages Human Centered Design Methods and Tools / Yamada, Toshiya / Nakamichi, Noboru / Matsui, Tomoko HCD 2011: 2nd International Conference on Human Centered Design 2011-07-09 p.144-152
Keywords: Web Usability; PrefixSpan Boosting (Pboost); User Interaction; Machine learning
Link to Digital Content at Springer
Summary: Our goal is to point out usability problems in web pages in order to improve web usability. We investigate the relation between user interaction behaviors during web viewing and subjects' evaluations of web usability, and we extract discriminative patterns of user interaction behavior on visited web pages with low usability by using PrefixSpan-based subsequence boosting (Pboost).
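As a toy illustration of the idea behind such pattern extraction (this is not the Pboost algorithm itself; the event names and the scoring rule are invented for the example), a pattern of interaction events can be called discriminative for low-usability pages if it occurs as a subsequence much more often in sessions on those pages than in sessions elsewhere:

```python
# Hypothetical sketch: score how strongly an event pattern separates
# low-usability sessions from the rest. Not the Pboost implementation.

def is_subsequence(pattern, session):
    """True if the pattern's events occur in order (not necessarily adjacently)."""
    it = iter(session)
    return all(event in it for event in pattern)

def discriminative_score(pattern, low_usability_sessions, other_sessions):
    """Support of the pattern on low-usability sessions minus support elsewhere."""
    f_low = sum(is_subsequence(pattern, s)
                for s in low_usability_sessions) / len(low_usability_sessions)
    f_other = sum(is_subsequence(pattern, s)
                  for s in other_sessions) / len(other_sessions)
    return f_low - f_other
```

A score near 1.0 would mean the pattern appears in almost every low-usability session and almost never elsewhere; a boosting method such as Pboost searches the pattern space for such subsequences far more efficiently than exhaustive enumeration.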

[7] On the Possibility about Performance Estimation Just before Beginning a Voluntary Motion Using Movement Related Cortical Potential Novel Techniques for Measuring and Monitoring / Suzuki, Satoshi / Matsui, Takemi / Sakaguchi, Yusuke / Ando, Kazuhiro / Nishiuchi, Nobuyuki / Yamazaki, Toshimasa / Fukuzumi, Shin'ichi HCI International 2009: 13th International Conference on Human-Computer Interaction, Part I: New Trends 2009-07-19 v.1 p.184-191
Keywords: Accuracy; ballistic movement; movement-related cortical potential (MRCP); reaching; voluntary motion
Link to Digital Content at Springer
Summary: The present study aimed to investigate the tripartite relationship among MRCP as a physiological index, ballistic movement as an index of operation, and accuracy of task performance. Experiments were conducted with a 'reaching' task: the subject uses the forefinger to touch a target that appears 300 pixels away from the start point in the vertical direction on a touch-sensitive screen. During the experiments, EEG, EMG (as a trigger), high-speed camera images, and task efficiency were acquired. As a result, significant differences between the high- and poor-performance groups were clear in the NS component of the MRCP acquired from Fz (p < 0.05), Cz (p < 0.05), and Pz (p < 0.05). Furthermore, a difference was confirmed in the duration of the ballistic movement. Based on these findings, we attempted to extract the MRCP rapidly and automatically without signal averaging, and we discuss whether accuracy can be estimated just before the motion is executed.

[8] Front Environment Recognition of Personal Vehicle Using the Image Sensor and Acceleration Sensors for Everyday Computing In-Vehicle Interaction and Environment Navigation / Matsui, Takahiro / Imanaka, Takeshi / Kono, Yasuyuki HCI International 2009: 13th International Conference on Human-Computer Interaction, Part III: Ambient, Ubiquitous and Intelligent Interaction 2009-07-19 v.3 p.151-158
Keywords: Segway; Image Sensor; Acceleration Sensor; Optical Flow
Link to Digital Content at Springer
Summary: In this research, we propose a method for detecting moving objects in front of a Segway by detecting the Segway's running state. The running state of the Segway personal vehicle is detected with both an image sensor and an acceleration sensor mounted on it. When objects move in front of the Segway, the image sensor captures the motion while the acceleration sensor shows a different result; by analyzing this difference, our method successfully distinguishes moving objects from the environment.
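The difference-based idea in this summary can be sketched minimally as follows, assuming a constant camera gain that maps vehicle speed to optical-flow magnitude (the gain, thresholds, and integration scheme are invented for the example, not taken from the paper):

```python
# Hypothetical sketch: flag an external moving object when the motion seen by
# the camera exceeds the ego-motion implied by the acceleration sensor.

def integrate_velocity(accels, dt):
    """Integrate forward-acceleration samples (m/s^2) into a speed estimate (m/s)."""
    v = 0.0
    for a in accels:
        v += a * dt
    return v

def detect_moving_object(flow_px, accels, dt,
                         pixels_per_mps=50.0, threshold=5.0):
    """Return True if the mean optical-flow magnitude (pixels/frame) exceeds
    what the vehicle's own motion explains, by more than the threshold."""
    ego_speed = integrate_velocity(accels, dt)       # from the acceleration sensor
    expected_flow = abs(ego_speed) * pixels_per_mps  # flow induced by ego-motion
    return (flow_px - expected_flow) > threshold
```

For example, large image flow while the accelerometer reports a stationary vehicle is attributed to an object moving in front of the camera, while flow consistent with the integrated acceleration is treated as the environment sliding past.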

[9] Human Control Modeling Based on Multimodal Sensory Feedback Information Cognitive Modeling, Perception, Emotion and Interaction / Murakami, Edwardo / Matsui, Toshihiro FAC 2009: 5th International Conference on Foundations of Augmented Cognition. Neuroergonomics and Operational Neuroscience 2009-07-19 p.192-201
Keywords: Human-Machine Interface; System Identification; Reaction Time; Sensory Feedback Information
Link to Digital Content at Springer
Summary: In order to simulate human control behavior during a manipulation task in remote-controlled or X-by-wire systems, it is first necessary to measure and analyze the human control characteristics. The aim of this research is to measure the operator's reaction time and to analyze the integration of human visual and force sensory feedback during a manipulation task. Using the developed master-slave experimental device, it was possible to identify and build a human operator control model for different kinds of sensory feedback. The human models for visual feedback alone and for combined visual/force feedback were identified using system identification methods.

[10] Development of Non-contact Monitoring System of Heart Rate Variability (HRV) -- An Approach of Remote Sensing for Ubiquitous Technology -- New Trends in Ergonomics / Suzuki, Satoshi / Matsui, Takemi / Gotoh, Shinji / Mori, Yasutaka / Takase, Bonpei / Ishihara, Masayuki EHAWC 2009: Ergonomics and Health Aspects of Work with Computers 2009-07-19 p.195-203
Keywords: noncontact monitoring; microwave radar; heart rate variability
Link to Digital Content at Springer
Summary: The aim of this study was to develop a prototype system to monitor cardiac activity using microwave Doppler radar (24.05 GHz frequency, 7 mW average output power) without making contact with the body and without removing clothing; namely, a completely noncontact, remote monitoring system. In addition, heart rate and changes in heart rate variability (HRV) during simple mental arithmetic and computer input tasks were observed with the prototype system. The experiment was conducted with seven subjects (23.00 ± 0.82 years old). We found that the prototype system captured heart rate and HRV precisely. Strong relationships between the radar-derived and electrocardiograph (ECG) measurements were confirmed for heart rate during tasks (r = 0.963) and for the LF (cross-correlation = 0.76) and LF/HF (cross-correlation = 0.73) components of HRV.
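Once beat-to-beat (RR) intervals have been extracted from the radar or ECG signal, basic HRV statistics can be computed as in this minimal sketch (time-domain measures only; the LF and LF/HF values reported in the paper would additionally require spectral analysis of the RR series, which is omitted here):

```python
# Illustrative sketch, not the authors' processing pipeline: time-domain HRV
# measures from a list of RR intervals in milliseconds.
import math

def hrv_time_domain(rr_ms):
    """Return mean heart rate, SDNN, and RMSSD for the given RR intervals."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    heart_rate = 60000.0 / mean_rr  # beats per minute
    # SDNN: standard deviation of all RR intervals
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / n)
    # RMSSD: root mean square of successive RR differences
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"hr_bpm": heart_rate, "sdnn_ms": sdnn, "rmssd_ms": rmssd}
```

A perfectly steady RR series of 1000 ms yields 60 bpm with zero variability, while any beat-to-beat fluctuation raises SDNN and RMSSD.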

[11] Study on Guidelines to Make Automated Service Machines 4: AGING: Aging Posters / Kishida, Koya / Hisamune, Syuuji / Ikegami, Thor / Matsui, Tetsuo Proceedings of the Joint IEA 14th Triennial Congress and Human Factors and Ergonomics Society 44th Annual Meeting 2000-07-30 v.44 n.4 p.95
Link to HFES Digital Content
Summary: Regarding purchasing behavior, the 1996 survey shows that middle-aged and elderly people made significantly more mistakes than young people did. From the 1997 survey, we confirmed that middle-aged people made significantly more mistakes than young people did, and that elderly people made significantly more mistakes than children and young people did.
    Compared with push-button machines, operating touch-sensor machines took longer and produced more mistakes, especially for the older age groups.