Investigation of Haptic Line-Graph Comprehension Through Co-Production of Gesture and Language | | BIBAK | PDF | 1 | |
Ozge Alacam; Christopher Habel; Cengiz Acarturk | |||
In communication settings, statistical graphs accompany language by
providing visual access to various aspects of domain entities, such as
conveying information about trends. A comparable means of providing perceptual
access for blind people is haptic graphs. In this study, we present the results
of an experimental study that aimed to investigate visual line graphs and
haptic line graphs in the time domain by means
of gesture production as an indicator of event conceptualization. The
participants were asked to produce single sentence summaries of visual graphs
and haptic graphs. The gestures that were produced during the course of verbal
descriptions were analyzed. The results showed that directional gestures
accompanied verbal descriptions of both visual graphs and haptic graphs.
Further analyses revealed differences between visual graphs and haptic graphs
in terms of type of gestures, as well as the production rates. Keywords: Gesture production; haptic graph comprehension; line graphs; multimodal
communication |
Extracting and analyzing head movements accompanying spontaneous dialogue | | BIBAK | PDF | 2 | |
Simon Alexanderson; David House; Jonas Beskow | |||
This paper reports on a method developed for extracting and analyzing head
gestures taken from motion capture data of spontaneous dialogue in Swedish.
Candidate head gestures with beat function were extracted automatically and
then manually classified using a 3D player which displays time-synced audio and
3D point data of the motion capture markers together with animated characters.
Prosodic features were extracted from syllables co-occurring with a subset of
the classified gestures. The beat gestures show considerable variation in
temporal synchronization with the syllables, while the syllables generally show
greater intensity, higher F0, and greater F0 range when compared to the mean
across the entire dialogue. Additional features for further analysis and
automatic classification of the head gestures are discussed. Keywords: Gestures; prosody; motion capture; beats; head nods; stressed syllable |
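A minimal sketch of the kind of automatic candidate extraction described above: flag frames where a head marker's speed rises above a threshold. The sampling rate, threshold value, and marker layout are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def beat_candidates(positions, fps=100.0, vel_thresh=150.0, min_gap=10):
    """Flag candidate beat onsets from one head marker's trajectory.

    positions: (n_frames, 3) array of x, y, z marker coordinates (mm).
    Returns frame indices where speed first exceeds `vel_thresh` (mm/s),
    keeping onsets at least `min_gap` frames apart.
    """
    # Frame-to-frame displacement converted to speed in mm per second.
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps
    above = speed > vel_thresh
    onsets, last = [], -min_gap
    for i in range(1, len(above)):
        # Keep rising edges only, separated by at least `min_gap` frames.
        if above[i] and not above[i - 1] and i - last >= min_gap:
            onsets.append(i)
            last = i
    return onsets
```

Candidates found this way would still be classified manually, as in the study.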
Left-Hand Gestures Advantage on Metaphor Explanation: Evidence for Gestures' Self-Oriented Functions | | BIBAK | PDF | 3 | |
Paraskevi Argyriou; Sotaro Kita | |||
Research suggests that gestures influence cognitive processes, but the exact
mechanism is not clear. Additionally, it has been shown that when a linguistic
task (metaphor explanation) involves the right brain hemisphere, the left hand
becomes more gesturally active. We hypothesized that gestures with a particular
hand activate cognitive processes in the contralateral hemisphere. We examined
whether gestures with the left hand enhance metaphoricity in verbal responses.
Results showed that participants produced more metaphoric explanations when
instructed to gesture with their left hand than when instructed to gesture with
their right hand or not to gesture at all. In addition, we measured the mouth asymmetry during
metaphorical speech to determine individual differences in right-hemisphere
involvement in metaphor processing. The left-side mouth dominance, indicating
stronger right-hemisphere involvement, positively correlated with the
left-hand-over-right-hand advantage in gestural facilitation of metaphorical
speech. We concluded that left-hand gestures enhance metaphorical thinking in
the right hemisphere. Keywords: Metaphor; representational gestures; brain hemispheric lateralization; mouth
asymmetry |
One Teacher, Two Instructional Contexts; A Contrastive and Empirical Analysis of a Teacher's Gestures | | BIBAK | PDF | 4 | |
Brahim Azaoui | |||
Despite the increasing interest in teachers' gestures within gesture studies,
it seems that no research has yet been carried out to analyze the impact of the
instructional context on a teacher's gestures. This study provides the
opportunity to add to our understanding of teachers' non-verbal pedagogical
repertoire by observing a French teacher in two different contexts: FL1 (French
for native speakers) and FL2 (French for non-native speakers). Keywords: instructional context; pedagogical repertoire; teacher's gestures |
Integrating Gesture Meaning and Verbal Meaning for German Verbs of Motion: Theory and Simulation | | BIBAK | PDF | 5 | |
Kirsten Bergmann; Florian Hahn; Stefan Kopp; Hannes Rieser; Insa Röpke | |||
When verbs of motion are accompanied by gestures, this comes along with a
relatively complex relation between the two modalities. In this paper, we
investigate the semantic coordination of speech and event-related gestures in
an interdisciplinary way. First, we explain how to efficiently construct a
speech-gesture-interface for a gesture which accompanies a verb phrase from a
theoretical viewpoint. Resting upon this analysis, we also provide a
computational simulation model which further explicates the relation between
the two modalities based on activation-spreading within dynamically shaped
multi-modal memories. Keywords: Gesture semantics; Event-related gestures; Iconic gestures; Speech-gesture
interface; Theoretical reconstruction; Computational simulation;
Interdisciplinary methodology |
Responding to Joint Attention Predicts Joint Action | | BIBAK | PDF | 6 | |
Arkadiusz Bialek; Marta Bialecka-Pikul; Malgorzata Stepien-Nycz | |||
The presented study aimed to identify the developmental relations between
children's early ability to participate in joint attention episodes and later
ability to coordinate joint action. 109 Polish infants were assessed using the
Early Social Communication Scale (Mundy et al. 2003) at 12 months and with a
joint action task ('tea set') at 18 months. In the ESCS, initiation of joint
attention (eye contact, gaze alternations, pointing to objects and showing
them) and responding to joint attention (gaze following) were assessed. In the
joint action task, children were scored for reactions to the experimenter's
nonverbal suggestions, verbal requests and proposals, which were indicators of
the ability to coordinate joint action. The results revealed a positive,
although weak, correlation between high-level responding to joint attention
(following the line of regard) and children's responding to nonverbal
suggestions in the joint action task (r = 0.206, p = 0.05). Initiating joint
attention was not correlated
with the ability to coordinate joint action. The results show the developmental
relation between responding to joint attention at 12 months and coordination of
joint action through responding to nonverbal suggestions at 18 months. However,
the mechanism of this relation is still open to question. Keywords: joint attention; gaze following; joint action; coordination of interaction;
pretend play |
Temporal Aspects of Behavioral Alignment in Collaborative Remembering | | BIBAK | PDF | 7 | |
Lucas Bietti; Kasper Kok; Alan Cienki | |||
The coordination of verbal and non-verbal facets of communication between
interlocutors appears to be one of the basic cognitive tuning processes for
social interaction. In this paper we examine the temporal aspects of behavioral
alignment in small group interactions that take place in a natural setting. We
find that participants tend to align their body posture and gestures in a
sequential rather than simultaneous manner. Our results furthermore suggest
that behavioral resonance generally happens fast, but can also occur with
substantial delay. Keywords: everyday activities, gestural alignment, small groups, collaborative
remembering |
Gesture synthesis from SignWriting notation | | BIBAK | PDF | 8 | |
Yosra Bouzid; Mohamed Jemni | |||
Sign language synthesis has seen a large increase in applications over the
past few decades, as it represents a potential solution to communication
problems for the deaf community. All that is needed is to convert a written
form (books, newspapers, e-mails, internet pages...) or speech into sign. Most
works in this area focus on the translation of spoken language text into
fluid signing by using machine translation (MT), while others attempt to create
synthetic animation from a sign language notation. We introduce in this paper a
new method for the automatic generation of signed gestures from SignWriting
notation using a 3D avatar. The SW notation is provided as input in an
XML-based format called SWML (SignWriting Markup Language). Such a tool would
help
deaf readers to grasp and interact with the signed transcription through a more
user-friendly interface. Keywords: Gesture synthesis; Sign Language; SignWriting; SWML; 3D signing animation;
avatar |
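As an illustration of the input side of such a system, here is a minimal Python sketch that parses an SWML-like XML document; the element and attribute names are placeholders invented for this example, not the actual SWML schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical SWML-like input; tag and attribute names are illustrative
# placeholders, not the real SWML vocabulary.
swml = """
<swml>
  <sign gloss="HELLO">
    <symbol id="sym1" x="482" y="485"/>
    <symbol id="sym2" x="499" y="457"/>
  </sign>
</swml>
"""

root = ET.fromstring(swml)
for sign in root.iter("sign"):
    print("sign:", sign.get("gloss"))
    for sym in sign.iter("symbol"):
        # A real system would map each SignWriting symbol to avatar
        # joint targets here; this sketch only echoes the parsed values.
        print("  symbol", sym.get("id"), "at", sym.get("x"), sym.get("y"))
```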
Polysigns and information density in teachers' gestures | | BIBAK | PDF | 9 | |
Heather Brookes; Jean-Marc Colletta; Alice Ovendale | |||
Gesture goes hand in hand with speech and is a powerful communication device
in expressing abstract concepts. In this paper, we analyse the spontaneous
gestures of two teachers filmed teaching lessons on halving. Both speech and
gesture were transcribed in ELAN, and all the gestures were coded. We then
focused on key gestures contributing to the mathematical concepts of halving
discrete entities. We show how these gestures chain and provide multiple layers
of information that embody and spatially represent the concept of halving.
Simultaneously, teachers use these representative gestures in interactive ways
with children to enact the concept of halving. Gestures mediate the transition
from concrete and personal symbolic processes to abstract mathematical
concepts. Keywords: gesture; learning; information density; multi-referential gesture; polysign;
mathematics |
Bilinguals Switch Gesture Production Parameters when they Switch Languages | | BIBAK | PDF | 10 | |
Federica Cavicchio; Sotaro Kita | |||
The control mechanism at play when bilinguals speak one of their two
languages (inhibition of the unintended language vs selection of the intended
language) is still under debate. Though transfer in spoken languages has been
studied extensively, transfer in gesture is understudied. In this research, we
investigated gestural communication in bilinguals. In particular, we tested
which aspects of gestures were "transferred" from one language to another. In
this study our focus is on gesture rate and gesture space in Italian/English
bilinguals. Contrary to previous findings, we have no evidence of transfer.
When bilinguals switch language, their gesture parameters switch accordingly.
The switching of (cultural) gesture parameters such as rate and salience shows
that language and gesture are tightly linked. This suggests that a language and the
corresponding gesture parameters might be selected in a high level processing
stage at which verbal and nonverbal aspects of communication are planned
together. Keywords: bilingualism; linguistic transfer; gesture transfer; lexical access |
Gestural representation of event structure in dyadic interaction | | BIBAK | PDF | 11 | |
Peer Christensen; Kristian Tylén | |||
What are the underlying motivations for the conceptualization of events?
Recent studies show that when people are asked to use nonverbal gestures to
describe transitive events they prefer the semantic order Agent-Patient-Act,
analogous to SOV in grammatical terms. The original explanation has been that
this pattern reflects a cognitively "natural order" for the conceptualization
of events. However, other types of transitive events have not been investigated
in earlier studies. We report experimental findings from a referential game in
which pairs of participants used gestures to match shared sets of stimuli
depicting two types of transitive events: (i) object manipulation events and
(ii) construction events. We argue that these event types have inherently
different logical and sequential structure and, accordingly, will yield
different gesture orders. Our findings confirm such predictions: manipulation
events predominantly elicited gesture strings with SOV order, while
construction events elicited SVO order. The results indicate that participants
were highly sensitive to differences in event structure. Even with increased
communicative pressure, pairs did not settle on a single order for the two
types of events. We conclude that gesture order seems to be motivated by
extralinguistic event structure rather than a cognitively "natural order". Keywords: Event structure; gestural sign emergence; representation; conceptualization;
communication system evolution; word order |
The Role of Inter-Cultural Competence on Gestural Recognition | | BIBAK | PDF | 12 | |
Sara Conversano; Elena Berno; Valentina Vitali; Alessandro Nonis; Clelia Di Serio; Marco Rigamonti | |||
The results from both studies suggest an important link between verbal and
non-verbal language acquisition, although the two processes seemed supported by
different factors. As we could predict, spending more time in a foreign country
improves our fluency in the foreign language. However, it seems that only the
acquisition of the new language can facilitate the acquisition of the nonverbal
language. Given the very nature of emblems, gestures that have a precise
verbal translation (Ekman & Friesen, 1972), we could infer that an emblem
can be understood only if the learner has already acquired the conceptual
meaning expressed in it. Keywords: Emblems; Inter-cultural competence; Cross-culture |
New multilayer concordance functions in ELAN and TROVA | | BIBAK | PDF | 13 | |
Onno Crasborn; Micha Hulsbosch; Lari Lampen; Han Sloetjes | |||
Collocations generated by concordancers are a standard instrument in the
exploitation of text corpora for the analysis of language use. Multimodal
corpora show similar types of patterns, activities that frequently occur
together, but there is no tool that offers facilities for visualising such
patterns. Examples include timing of eye contact with respect to speech, and
the alignment of activities of the two hands in signed languages. This paper
describes recent enhancements to the standard CLARIN tools ELAN and TROVA for
multimodal annotation to address these needs: first of all the query and
concordancing functions were improved, and secondly the tools now generate
visualisations of multilayer collocations that allow for intuitive explorations
and analyses of multimodal data. This will provide a boost to the linguistic
fields of gesture and sign language studies, as it will improve the
exploitation of multimodal corpora. Keywords: Concordance; collocation; multimodality; annotation tool; gesture; sign
language |
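The multilayer collocation search described above can be approximated in a few lines: find annotation pairs on two tiers whose time spans overlap or lie close together. The tier representation and offset threshold are assumptions for illustration, not ELAN's or TROVA's internals.

```python
def collocations(tier_a, tier_b, max_offset=0.5):
    """Pairs of annotations on two tiers that overlap in time or lie
    within `max_offset` seconds of each other.

    Each tier is a list of (start, end, label) tuples, e.g. exported
    from an ELAN annotation document.
    """
    hits = []
    for a_start, a_end, a_label in tier_a:
        for b_start, b_end, b_label in tier_b:
            gap = max(b_start - a_end, a_start - b_end)  # <= 0: overlap
            if gap <= max_offset:
                hits.append((a_label, b_label, round(gap, 3)))
    return hits

gaze = [(0.0, 1.2, "eye-contact"), (3.0, 4.1, "eye-contact")]
speech = [(0.9, 2.0, "turn-start"), (5.0, 6.0, "turn-start")]
print(collocations(gaze, speech))  # -> [('eye-contact', 'turn-start', -0.3)]
```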
Kinesic Turn Taking and Mutual Understanding in interactive dyads | | BIBAK | PDF | 14 | |
Daniela Dvoretska; Jaap Denissen; Hedda Lausberg | |||
Turn taking is a well-known phenomenon in verbal interaction. There is,
however, some evidence suggesting that the temporal coordination is not limited
to the sequencing of the verbal utterances but that it extends to the
interactive partners' nonverbal behavior. In this study we first systematically
investigated whether the conversation partners temporally coordinated their
body movements. Second, we analyzed the relation between kinesic interaction
and self-rated as well as observer-rated mutual understanding. Forty dyads were
videotaped during their conversation. A control sample was created in which the
movement behavior annotations of partners from different dyads were randomly
mixed. The results indicated a hemispheric specialization in the temporal
attunement with the partner. The different body parts seem to play different
roles in the temporal interaction. The findings suggest that the coordination
of the interactive partners' body movements contributes to a consolidation of
the interactive relation. Keywords: interpersonal coordination; kinesic interaction; movement behavior; turn
taking; rapport |
Gesture/speech interaction in the perception of lexical units | | BIBAK | PDF | 15 | |
Chloe Gonseth; Anne Vilain; Coriandre Vilain | |||
This paper explores gesture/speech interaction in language perception. An
experimental study, based on an intermodal priming paradigm, required
participants to make lexical judgments to deictic words, non-deictic words, or
pseudo-words, after the production of a pointing or a grasping gesture. These
two gestural priming conditions were compared to each other and to a baseline
condition, where participants did not perform any gesture. This allowed us to
characterize the influence of both gesture production and gesture type on word
recognition. Our results reveal an interaction between the motor and the
lexical representations of spatial deixis, which suggests that "arm movement
itself [could] be used as a linguistic signal" (Gentilucci, Dalla Volta, &
Gianelli, 2008). Communicative manual gestures appear to be involved in the
production/perception mechanism associated with the semantic processing of
language. Keywords: Word recognition; Spatial deixis; Gesture/speech interaction |
Gesture production and speech fluency in competent speakers and language learners | | BIBAK | PDF | 16 | |
Maria Graziano; Marianne Gullberg | |||
It is often assumed that a main function of gestures is to compensate for
expressive difficulties. This predicts that gestures should mainly occur with
disfluent speech. However, surprisingly little is known about the relationship
between gestures and fluent vs. disfluent speech. This study investigates the
putative compensatory role of gesture by examining competent speakers' and
language learners' gestural production in fluent vs. non-fluent speech. Results
show that both competent and less competent speakers predominantly produce
gestures during fluent stretches of speech; ongoing gestures during
disfluencies are suspended. In all groups, the few gestures that are completed
during disfluencies are both referential and pragmatic. The findings strongly
suggest that when speech stops, so do gestures, thus supporting the view of
speech and gesture as an integrated system. Keywords: Gesture; speech production; language development; second language
acquisition; crossmodal coordination. |
What can Chinese speakers' temporal gestures reveal about their thinking about time? | | BIBAK | PDF | 17 | |
Yan Gu; Lisette Mol; Marieke Hoetjes; Marc Swerts | |||
There is debate on whether vertical spatial metaphors in Chinese cause
speakers to think vertically about time. The present study assesses whether
Chinese speakers indeed have a vertical conception of time, by studying their
temporal gestures accompanying speech. Chinese speakers were asked to talk
about wordlists, consisting of time conceptions and sequences in both Chinese
and in English. The results showed that Chinese speakers produced vertical
temporal gestures in L1 Chinese and fewer vertical gestures in L2 English.
Implications for the current debate and models of gesture production are
discussed. Keywords: temporal gestures; thinking for speaking; time conceptions; language shapes
thought |
Gesture and Speech-based Public Display for Cultural Event Exploration | | BIBAK | PDF | 18 | |
Jaakko Hakulinen; Tomi Heimonen; Markku Turunen; Tuuli Keskinen; Toni Miettinen | |||
We introduce a novel, experiential event guide application for serendipitous
exploration of event information on public displays. The application is
targeted for complex events, such as cultural festivals, which include a large
amount of individual events in numerous geographical locations. The application
consists of two interfaces, both used in a multimodal manner with hand gestures
and spoken interaction: a three-dimensional word cloud is used to select
events, which can then be explored using an event visualization based on a
"metro map" metaphor. A one-week field study of the application in a public
location showed a strong bias towards the use of gestures over speech. Keywords: Multimodal interaction; gestural and spoken interaction; public displays |
The placement of negation gestures in relation to speech | | BIBAK | PDF | 19 | |
Simon Harrison | |||
This paper examines the temporal coordination of a subset of gestures in
relation to speech containing negation. Based on qualitative observations of
'palm down' gestures in naturalistic data, I show that the gestures tend to
occur either with or after the verbal negative particle, but not before.
Analysing 10 utterances, I identify the different synchronization points and
relate them to grammatical, discursive, and conceptual factors involved in the
expression of negation in English. Keywords: Negation; Negative Polarity Items; Scope; Gesture coordination; Conceptual
affiliate |
The Missing Power: Language Mediates Sensorimotor-related Beta Oscillations during On-line Comprehension of Different Types of Co-speech Gesture | | BIBAK | PDF | 20 | |
Yifei He; Helge Gebhardt; Isabelle Rondinone; Benjamin Straube | |||
We used Electroencephalography (EEG) to investigate the processing
difference between co-speech emblematic gestures (EM) and tool-use gestures
(TU). We found that TU shows a beta power decrease relative to EM in a foreign
language condition (Russian), but this effect is missing in the native language
condition (German). With regard to the beta power effect, we reasoned that the
beta power decrease is a neural marker for recruitment of the sensorimotor
system. With regard to the missing beta effect in the German condition, we made
two proposals: on the one hand, it may suggest that the semantic integration
of gesture and speech could also be related to beta power oscillations; on the
other, the missing power could be considered an indication of a neuronal
network shared interactively by the sensorimotor system and a higher-level
semantic system. Keywords: emblematic gesture, tool-use gesture, EEG, beta power, semantic integration,
sensorimotor system |
Hand movements that accompany verbal descriptions differ from those during gestural demonstrations | | BIBA | PDF | 21 | |
Ingo Helmich; Hedda Lausberg | |||
Gestures do not only convey information (McNeill, 1992) but also reflect the person's feelings or emotions (Feyereisen & de Lannoy, 1991). As the exact function of co-speech gestures is still under debate (Holler & Wilkin, 2011), we investigate in this study hand movement behavior regarding the functionality of the two hands, either as co-speech gestures or as gestural demonstrations without speech. Previous studies have shown differences between conditions with and without speech when investigating iconic hand movements (Lausberg & Kita, 2003; Goldin-Meadow et al., 1996): contrasting gestural output between a speech and a silent condition showed that more hand movements are performed during silent conditions. However, these studies focused on iconic hand movements and did not include the entire range of hand movement behavior. Thus, we explore in this study the functional purpose of hand movements, including the complete manual repertoire. |
Gestural expression in narrations of aphasic speakers: redundant or complementary to the spoken expression? | | BIBAK | PDF | 22 | |
Katharina Hogrefe; Wolfram Ziegler; Nicole Weidinger; Georg Goldenberg | |||
According to the hypothesis that gesture and speech are based on a common
communicative intention but are two independent production processes, the two
communication channels may have a trade-off relationship with one compensating
for the other when necessary. In the case of aphasia, this would indicate that
gesture can compensate for the deficiencies of the spoken expression.
We present a study in which naïve judges rated narrations of aphasic speakers with respect to the information of the gestural expression only. A second group evaluated exclusively the spoken expression of the same narrations. Comparison of the informational content of the two communication channels revealed that some of the severely impaired patients conveyed more information in the gestural modality than in the verbal modality. These results indicate that gesture can partly compensate for the impaired spoken expression. Keywords: Aphasia; gesture; compensation |
Gestures or Speech? Comparing Modality Selection for different Interaction Tasks in a Virtual Environment | | BIBAK | PDF | 23 | |
Kathrin Janowski; Felix Kistler; Elisabeth André | |||
In this paper, we investigate whether users prefer speech or gesture input
for four distinct interaction tasks commonly found in virtual environments:
navigation, selection, dialogue, and object manipulation. For this purpose, we
implemented an interactive storytelling scenario in which the users could
always choose between gesture and speech commands for each interaction. Both
input modalities were processed in real-time using a low-cost depth sensor and
microphone. We conducted a study in order to identify the modality preferences
for each task. We obtained strong results for the navigation task, for which
gestural interaction seemed to be more suitable, and for the dialogue task, for
which speech was favoured. For the object manipulation and selection tasks,
we did not observe a clear preference for one of the modalities, but we found
indications for why some participants chose speech and others preferred
gestures by analysing the participants' ratings of their experience with the
interaction. Keywords: gestures; speech; modality selection; full body interaction; recognition;
virtual environment; navigation; selection; dialogue; manipulation |
Relative information content of gestural features of non-verbal communication related to object-transfer interactions | | BIBAK | PDF | 24 | |
Ansgar Koene; Juliane Honish; Satoshi Endo; Alan Wing | |||
In order to implement reliable, safe and smooth human-robot object handover,
it will be necessary for service robots to identify non-verbal communication
gestures in real-time. This study presents an analysis of the relative
information content in the gestural features that together constitute a
communication gesture. Based on this information theoretic analysis we propose
that the computational complexity of gesture classification, for object
handover, can be greatly reduced by applying attention filters focused on
static hand shape and orientation. Keywords: gestures; Information Gain; object handover; classification; non-verbal
communication |
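The information-theoretic analysis mentioned above presumably rests on quantities such as the information gain of individual gestural features with respect to the gesture class. A minimal sketch of that computation on toy data (the features and labels are invented, not the study's):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """H(class) - H(class | feature) for one discrete gestural feature."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [lab for f, lab in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

labels = ["give", "give", "take", "take"]
# Hand shape separates the classes perfectly (gain = 1 bit); arm height
# is uninformative (gain = 0), so a filter would attend to hand shape.
print(information_gain(["open", "open", "closed", "closed"], labels))  # 1.0
print(information_gain(["high", "low", "high", "low"], labels))        # 0.0
```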
Evaluation of Static and Dynamic Freehand Gestures in Device Control | | BIBAK | PDF | 25 | |
Anne Köpsel; Anke Huckauf | |||
Increasingly, devices are controlled by gestures. Hence, questions
concerning the usability of gestures arise. Most of the used gestures are
promoted as 'intuitive', suggesting that training can be avoided. In the
present work, we suggest certain criteria, namely recognition, learnability and
executability. In addition, we present a paradigm of how to evaluate gestures
with respect to these criteria. The empirical tests are based on the proposal
of universal interaction design, i.e., the recommendation that gestures should
be as independent from a certain task setting as possible. Preliminary data
suggest that the proposed way of evaluating gesture systems as well as single
gestures effectively results in a catalogue of criteria which can be weighted
in their importance depending on the current task set. Keywords: Gesture interaction; Augmented Reality; Gesture recognition |
Exploring Annotation of Head Gesture Forms in Spontaneous Human Interaction | | BIBAK | PDF | 26 | |
Spyros Kousidis; Zofia Malisz; Petra Wagner; David Schlangen | |||
Face-to-face interaction is characterised by head gestures that vary greatly
in form and function. We present on-going exploratory work in characterising
the form of these gestures. In particular, we define a kinematic annotation
scheme and compute various agreement measures between two trained annotators.
Gesture type mismatches among annotators are compared against kinematic
characteristics of head gesture classes derived from motion capture data. Keywords: Multimodal interaction; Head gestures; Annotation |
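A standard agreement measure for two annotators assigning categorical labels is Cohen's kappa; a minimal sketch, assuming one head-gesture class per annotated unit (the labels are invented for illustration):

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Chance-corrected agreement between two annotators' label lists."""
    assert len(coder1) == len(coder2)
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    c1, c2 = Counter(coder1), Counter(coder2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["nod", "nod", "shake", "tilt", "nod", "shake"]
b = ["nod", "tilt", "shake", "tilt", "nod", "shake"]
print(round(cohens_kappa(a, b), 3))  # 0.75
```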
Speech and hand movement coordination in schizophrenia | | BIBAK | PDF | 27 | |
Mary Lavelle; Chris Howes; Patrick G. T. Healey; Rosemarie McCabe | |||
Patients with schizophrenia have difficulties interacting with others, but
the nature of this deficit is unclear. A critical feature of successful social
interaction is coordination between speech and movement. The current study
employed 3-D motion capture techniques to assess coordination between patients'
hand movements and speech during live interaction. Compared to controls,
patients displayed reduced coordination between their own speech and hand
movement. Healthy participants interacting with the patient also appeared to
adopt this pattern, but to a lesser extent. Patients' coordination deficits may
underlie their social difficulties and contribute to their social exclusion. Keywords: Schizophrenia; motion-capture; multiparty interaction |
Gestural representation in the domain of animates' physical appearance | | BIBAK | PDF | 28 | |
Magdalena Lis | |||
The paper presents a pilot study on gestural representation of entities
referring to animates' physical appearance. We identify representational format
and form features of gestures referring to entities in this semantic domain,
and patterns in their temporal overlap with speech. We furthermore integrate
the results with our previous findings from the domain of eventualities. Keywords: gesture production; semantics; iconics and deictics |
Gestures in Turn Taking in Early Stages of Foreign Language Fluency: Does the Growth Point Explain the Patterns? | | BIBAK | PDF | 29 | |
Renia Lopez-Ozieblo | |||
The Growth Point Theory (McNeill and Duncan, 2000) posits the unity of
gesture and speech. However, in Hong Kong students of Spanish as a Foreign
Language (FL), there is an observed dearth of gestures when they speak Spanish.
The consequences of this are difficulties in conversing with Spanish native
speakers, in particular when it comes to turn management. These students fail
to use nonverbal cues, despite having been taught them. We believe that this is
not only a result of socio-cultural differences but an inability to activate
the Growth Point in the early stages of fluency of a FL. Keywords: Foreign Language Acquisition; gestures; Growth Point theory |
The influence of cognitive load on repeated references in speech and gesture | | BIBAK | PDF | 30 | |
Ingrid Masson-Carro; Martijn Goudbeek; Emiel Krahmer | |||
Shared common ground between interaction partners has been found to lead to
reduction in repeated references to a target entity, both in speech and
gesture. It has been shown, however, that increasing the cognitive load of
speakers has the potential to affect how speakers and addressees adapt to one
another in dialogue. This paper reports on an experiment in which native
speakers of Dutch engaged in a director-matcher task where repeated references
were elicited, and a time constraint was imposed in order to increase the load
of speakers. Our results show that cognitive load was not an impediment to the
reduction process, although it did have an effect on the overall task
performance, suggesting that reduction results from rather automatic processes. Keywords: cognitive load, reduction, referring expressions, speech, gesture |
Feature-based hand detection in visual images | | BIBAK | PDF | 31 | |
Ruud Mattheij; Eric Postma | |||
Recent developments in image processing and machine learning techniques
facilitate the automatic coding of human behavior. This paper proposes an
efficient and effective classification method for the automatic coding of hands
in still images and in image sequences. The method combines an efficient and
effective feature-extraction method with a powerful machine-learning algorithm.
The evaluation of the detector on a challenging database of natural images of
human hands in a large variety of poses results in a performance that is
comparable to state-of-the-art detectors, while being able to perform in
real-time. This leads to the conclusion that our feature-based method performs
state-of-the-art hand detection and offers a promising starting point for the
efficient automatic coding of gestures. Keywords: Hand detection; automated gesture annotation; Viola-Jones detector; Haar
features; random forests |
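To make the keywords concrete, here is a minimal, self-contained sketch combining Haar-like features (via an integral image) with a random forest classifier. It is not the authors' detector: the feature set, patch size, and training data are toy assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] computed from the integral image ii."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def haar_features(patch):
    """Two simple two-rectangle Haar-like contrasts on a square patch."""
    ii = integral_image(patch.astype(np.float64))
    h, w = patch.shape
    left_right = box_sum(ii, 0, 0, h, w // 2) - box_sum(ii, 0, w // 2, h, w)
    top_bottom = box_sum(ii, 0, 0, h // 2, w) - box_sum(ii, h // 2, 0, h, w)
    return [left_right, top_bottom]

# Toy data: random 24x24 "patches" with random hand/no-hand labels; a real
# detector would be trained on labelled image windows instead.
rng = np.random.default_rng(0)
patches = rng.random((200, 24, 24))
y = rng.integers(0, 2, 200)
X = np.array([haar_features(p) for p in patches])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```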
Structural Adaptation in Gesture and Speech | | BIBAK | PDF | 32 | |
Lisette Mol; Yan Gu; Marie Postma-Nilsenová | |||
Interlocutors are known to repeat the structure of each other's speech and
to repeat each other's gestures. Yet would they also repeat the information
structure of each other's gestures? And which are we more prone to adapt to:
gesture, or speech? This study presented participants with gesture and speech
in which manner and path information were either conflated into one
gesture/clause, or separated into two gestures/clauses. We found that both the
information structure perceived in speech and in gesture influenced the
information structure participants produced in their gesturing. However, the
information structure perceived in gesture only influenced the structure
participants produced in their speech if a less preferred structure was
perceived in speech. When the preferred structure was perceived in speech, this
structure was (re)produced in speech irrespective of perceived gestures. These
results pose a challenge to the development of models of gesture and speech
production. Keywords: Adaptation, Gesture, Speech |
The coordination of vocalizations and communicative gestures in the transition to first words | | BIBAK | PDF | 33 | |
Eva Murillo; Almudena Capilla | |||
This work addresses the developmental changes in the acoustic features of
vocalizations in relation to their coordination with communicative gestures in
transition to first words. Our hypothesis is that gestural-vocal coordination
facilitates early lexical development, so that the acoustic features of
vocalizations will be more similar to those of words when they are accompanied
by gestures, specifically pointing gestures, and with a declarative function.
Preliminary findings show differences in duration, fundamental frequency, and
syllabicity parameters, related to gesture coordination and declarative
function. Keywords: Communicative development; gestures; vocalizations; multimodal
communication; acoustic analysis, first words |
The influence of body posture and gesture on the evaluation of verbal utterances' addressment and comprehensibility | | BIBAK | PDF | 34 | |
Arne Nagels; Tilo Kircher; Miriam Steines; Benjamin Straube | |||
During everyday communication co-speech gestures represent a ubiquitous tool
to underpin the verbal content of a message. In addition to gestures, other
non-verbal information, such as the direction in which a speaker's body is
oriented, is particularly important during face-to-face interaction. However,
the influence of bodily orientation (frontal vs. lateral) and gestures on the
evaluation of the comprehensibility and addressment of verbal utterances has
not been investigated so far. It might be hypothesized that meaning-bearing
gesture in a frontal context improves the comprehensibility and the addressment
of a verbal message. In fact, we found a significant interaction of the factors
gesture (gesture/no-gesture) and body orientation (frontal/lateral) for the
evaluation of addressment, indicating that frontally presented co-verbal
gesture was evaluated as most addressing. Though for comprehensibility the
interaction was not significant, comprehensibility was evaluated highest in the
context of frontally presented co-verbal gesture. Gesture seems to have a
general positive effect on comprehension as indicated by higher comprehension
scores and faster evaluations. However, the main effect of body orientation on
evaluations suggests that a frontal perspective additionally seems to
contribute to comprehension. These data demonstrate the importance of body
orientation and gesture on the evaluation of the comprehension and addressment
of verbal utterances. Our results suggest a beneficial effect for frontally
presented co-speech gesture. Keywords: iconic gesture, addressment, bodily orientation, reaction times |
Differences in the communicative use of gesticulation and pantomime in a case of aphasia | | BIBAK | PDF | 35 | |
Karin van Nispen; Mieke van de Sandt-Koenderman; Lisette Mol; Emiel Krahmer | |||
Pantomime and gesticulation, two different gesture modes, can each be
comprehensible without speech. Little is still known about how either of these
gesture modes may add to the communication of a person with aphasia. The
current study aims to find out whether gesticulation and/or pantomime can add
to the comprehensibility of a person, QH, with severe fluent aphasia and what
differences there may be between the two. To this aim, we asked QH to perform
two tasks: naming objects and retelling a story. He did this once in a verbal
condition (which allowed for gesticulation to occur) and once in a pantomime
condition. Gestures were analyzed for their comprehensibility and the
representation techniques used. The results showed that pantomimes for naming
objects were comprehensible, whereas gesticulation was not. Conversely,
gesticulation was comprehensible for retelling a story, while pantomime was
not. When
pantomiming objects QH uses simpler representation techniques than healthy
controls do. These results indicate that both gesticulation and pantomime may
contribute to QH's comprehensibility despite a possible impairment of one or
both gesture modes. Their benefits however differ across tasks. These findings
imply that, in clinical practice, each gesture mode should be assessed
separately for different communicative situations. In these assessments the
emphasis should be on comprehensibility rather than on the correct use of a
representation technique. Keywords: Aphasia; pantomime; gesticulation; apraxia |
Documenting West African gesture repertoires | | BIBAK | PDF | 36 | |
Victoria Nyst | |||
This poster presents an elicitation format for the collection of
conventional gesture repertoires in West Africa. The format is developed for
the establishment of a database of West African gestures of speakers from various parts
of West Africa. In a pilot study, the format was used to collect gestures with
three participants. Keywords: Emblems; West Africa; documentation; gesture repertoire |
Gesture-sign interface in hearing non-signers' first exposure to sign | | BIBAK | PDF | 37 | |
Gerardo Ortega; Asli Özyürek | |||
Natural sign languages and gestures are complex communicative systems that
allow the incorporation of features of a referent into their structure. They
differ, however, in that signs are more conventionalised because they consist
of meaningless phonological parameters. There is some evidence that, despite
non-signers finding iconic signs more memorable, they can have more difficulty
in articulating their exact phonological components. In the present study,
hearing non-signers took part in a sign repetition task in which they had to
imitate as accurately as possible a set of iconic and arbitrary signs. Their
renditions showed that iconic signs were articulated significantly less
accurately than arbitrary signs. Participants were recalled six months later to
take part in a sign generation task. In this task, participants were shown the
English translation of the iconic signs they imitated six months prior. For
each word, participants were asked to generate a sign (i.e., an iconic
gesture). The handshapes produced in the sign repetition and sign generation
tasks were compared to detect instances in which both renditions presented the
same configuration. There was a significant correlation between articulation
accuracy in the sign repetition task and handshape overlap. These results
suggest some form of gestural interference in the production of iconic signs by
hearing non-signers. We also suggest that in some instances non-signers may
deploy their own conventionalised gesture when producing some iconic signs.
These findings are interpreted as evidence that non-signers process iconic
signs as gestures and that in production, only when sign and gesture have
overlapping features will they be capable of producing the phonological
components of signs accurately. Keywords: sign language, iconic gestures, iconicity |
Gestures within Human-Technology Choreographies for Interaction Design | | BIBAK | PDF | 38 | |
Jaana Parviainen; Kai Tuuri; Antti Pirhonen; Markku Turunen; Tuuli Keskinen | |||
In the traditional use-oriented approach, only a fraction of gestures are
taken as relevant to interaction. In this paper we argue that gestures should
not be handled only as isolated objects of application use, but they should
rather be understood as dynamic moments of embodied presence belonging to an
experiential chain of different movements which has its own significance as a
whole. In the current study, we call the embodied, experiential continuum of
human action a 'choreography'. We assume that choreography is a fruitful
theoretical concept for understanding interaction design, because through
choreography we can understand gestures as links building up a bigger whole. The
dynamic formulation of this chain of embodied gestures is what ultimately makes
users' experience of digital devices meaningful in their everyday life. Keywords: choreography; interaction design; methodology; gestures; kinaesthesia; user
experience |
Automatic Detection of Hand/Upper Body Movement and Facial Expressions as Cues to Feelings of Exclusion | | BIBAK | PDF | 39 | |
Marie Postma Nilsenova; Eric Postma; Martijn Balsters; Emiel Krahmer; Juliette Schaafsma; Lisette Mol; Ad Vingerhoets; Marc Swerts | |||
We used a modification of the Frame Differencing Method to detect left and
right hand/body movement in a corpus of recordings collected in two
experimental conditions: a condition in which participants were included in a
group decision-making process and one in which they were excluded. The results
showed a lower degree of activation in the condition with exclusion, possibly
due to withdrawal. An automatic detection of facial expressions indicated a
difference with respect to expressions of Joy and Sadness; exclusion from the
interaction led to decreased Joy and increased Sadness. Expressions of Joy were
also correlated with increased hand/body movement. Keywords: Exclusion; Hand Activation; Frame-Differencing Methods (FDMs); FACS; Facial
Expressions |
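The core of a frame-differencing method can be sketched in a few lines: threshold pixel-wise differences between consecutive frames and sum the changed pixels separately over the left and right image halves. The threshold and the simple midline split are illustrative assumptions, not the authors' modification of the method.

```python
import numpy as np

def left_right_activation(frames, thresh=15):
    """Per-frame movement energy in the left and right image halves.

    frames: (n, height, width) uint8 grayscale video frames of a roughly
    centred, seated participant. Returns two arrays of length n - 1.
    """
    diffs = np.abs(frames[1:].astype(np.int16) - frames[:-1].astype(np.int16))
    moving = diffs > thresh                     # suppress sensor noise
    mid = frames.shape[2] // 2
    left = moving[:, :, :mid].sum(axis=(1, 2))  # changed pixels per frame
    right = moving[:, :, mid:].sum(axis=(1, 2))
    return left, right
```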
Individual differences in speakers' gesture spaces: Multi-angle views from a motion-capture study | | BIBAK | PDF | 40 | |
Matthias A. Priesters; Irene Mittelberg | |||
The approach presented in this paper aims to contribute to an account of the
three-dimensionality of gesture space. Here, gesture space is assumed to be
dynamically constructed and adaptive to the communicative situation. Making use
of an optical motion-capture system, volumetric representations of gesture
spaces were generated, based on gesture data from semi-structured interviews
with four participants. The data were coded according to gesture phases and the
gestures' communicative functions. We compare speakers' gesture rates and
spatial distribution of gestures, both of which vary strongly across speakers. Keywords: gesture space; individual differences; motion capture; methodology |
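One plausible way to obtain such a volumetric representation is the convex hull of a wrist marker's trajectory; a minimal sketch under that assumption (the paper's actual method may differ):

```python
import numpy as np
from scipy.spatial import ConvexHull

def gesture_space_volume(wrist_positions):
    """Volume of the convex hull spanned by 3D wrist positions.

    wrist_positions: (n_frames, 3) array of coordinates, ideally taken
    from gesture strokes only, excluding rest positions.
    """
    return ConvexHull(np.asarray(wrist_positions)).volume

# Toy example: 500 random points in a 40 x 30 x 20 box; the hull volume
# comes out somewhat below the full box volume of 24000 cubic units.
rng = np.random.default_rng(1)
print(gesture_space_volume(rng.random((500, 3)) * [40, 30, 20]))
```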
Which Semantic Synchrony? | | BIBAK | PDF | 41 | |
Katharina J. Rohlfing; Angela Grimminger; Kerstin Nachtigäller | |||
Commonly, the relation between gesture and speech is analyzed in terms of
semantic synchrony (i.e. whether it is complementing or reinforcing the verbal
message). However, these categories reflect a mature semantic system that bears
limitation when applied to child language studies. In this paper, thus, we
present data from mother-child conversations during joint picture book reading.
We present analyses of different forms of speech-gesture-synchrony and show how
their results are related to the vocabulary development of the children. We
critically discuss the different forms of semantic synchrony and their
appropriateness for child language studies. Keywords: speech-gesture-synchrony, joint attention, language development |
The Distribution of Downtoning Gestures | | BIBAK | PDF | 42 | |
Steven Schoonjans | |||
This paper deals with the distribution of downtoning co-speech gestures in
German. On the basis of three types of video recordings (sports reports, talk
shows, and parliamentary debates), the study investigates (1) how a number of
downtoning gestures are distributed over these three types of settings, (2) to
what extent their distribution differs from that of verbal downtoners, and (3)
which factors may have influenced the distribution of the gestures. Keywords: German; downtoning; headshake; intersubjective deictic; beat |
Use of spatial information from cohesive gesture to comprehend subsequent sentences | | BIBAK | PDF | 43 | |
Kazuki Sekine; Sotaro Kita | |||
This study examined whether listeners keep spatial story representations
created by a speaker's cohesive gestures beyond the concurrent sentence.
Participants were presented with a three-sentence discourse with two
protagonists. In the first and second sentences, gestures consistently located
the two protagonists in the gesture space: one to the right and the other to
the left. The third sentence (without gestures) referred to one of the
protagonists, and the participants responded with one of the two keys to
indicate the relevant protagonist. The response keys were either spatially
congruent or incongruent with the gesturally established locations for the two
participants. Experiments 1 and 2 showed that the performance in the congruent
condition was better than in the incongruent condition. Thus, listeners make a
spatial story representation based on gestures; this representation persists
beyond the concurrent sentence, and the information is still activated in a
subsequent sentence without a gesture. Keywords: Gesture; Simon effect; speech comprehension |
Gender Differences in Hand Movement Behavior | | BIBA | PDF | 44 | |
Harald Skomroch; Kerstin Petermann; Ingo Helmich; Daniela Dvoretska; Robert Rein; Zi-Hyun Kim; Uta Sassenberg; Hedda Lausberg | |||
In research on gender differences in nonverbal behavior, hand movement behavior and gestures have been considered by differentiating between co-speech gesturing and self-touch (Rosip & Hall, 2004; Brighton & Hall, 1995; Frances, 1979). In line with public perception, some studies indicate that women tend to use hand movements more frequently, whereas men tend to display more self-touch (Brighton & Hall, 1995) or position shifts (Frances, 1979). Additionally, men show a greater tendency towards lateralization for different movement types (Saucier & Elias, 2001). However, these findings do not provide insight into more specific differences between genders in the organization of their hand movement behavior and gestures. |
The effect of emblematic and tool-use gestures on abstractness evaluations of verbal utterances | | BIBAK | PDF | 45 | |
Benjamin Straube; Miriam Steines; Tilo Kircher; Arne Nagels | |||
Gestures often accompany verbal conversation and differ widely in their
content and function. Emblematic and tool-use gestures are similar in that they
both carry specific meaning, but vary with regard to the abstract-social vs.
concrete-tool-related content. Here we investigated the effect of emblematic
(EM) and tool-use (TU) gestures on the evaluation of the abstractness of
corresponding verbal utterances. We hypothesized that the evaluation of EM and
TU utterances would be differentially influenced by meaningful (MF) vs.
meaningless (ML) co-verbal gestures. In fact, in addition to significant main
effects (EM vs. TU and MF vs. ML), we also found significant interactions
between gesture type (EM/TU) and gesture meaning (ML/MF). These results
indicate that gesture semantics had a different influence on the evaluations of
abstract-social and concrete-tool-use utterances. Whereas subjects were
generally able to differentiate between concrete vs. abstract sentence
contents, we observed a specific gesture advantage for the evaluation of the
abstractness of tool-use utterances as indicated by faster responses (TU-MF
faster than TU-ML) and higher concreteness evaluations (TU-MF more concrete
than TU-ML). Motor simulation processes as well as more prominent embodied
representations of tool-use utterances might be responsible for this gesture
type specific effect on the processing and evaluation of speech-gesture
information. Keywords: emblematic gesture, tool-use gestures, abstract language content, subjective
evaluations, reaction times |
Gesturing While Pausing In Conversation: Self-oriented Or Partner-oriented? | | BIBAK | PDF | 46 | |
Marion Tellier; Gale Stam; Brigitte Bigi | |||
This paper presents a study involving future French teachers performing a
lexical explanation task with both a native and a non-native partner. We are
particularly looking at gestures that appear during pauses in speech. What are
their functions? Are they self-oriented or partner-oriented? Is there a
difference whether the speaker is addressing a native or a non-native
interlocutor? Do these "silent" gestures have pedagogical purposes? Keywords: teaching gestures; foreigner talk; pauses; gesture adaptation |
Gestural expressions in use for unveiling dynamic experience attributed to verbs | | BIBAK | PDF | 47 | |
Kai Tuuri; Antti Pirhonen | |||
The focus of this paper is on justifying the presented experimental design
that aims at examining the enactive linkages between a verb's content and a
sensorimotor experience of movement. The experiment utilised spontaneous
production of hand and vocal gestures for expressing the energetic feel
attributed to a word. Preliminary qualitative analysis of the expressions shows
degrees of similarity in terms of experiential movement qualities. These
results imply that conceiving a verb's meaning is not necessarily far removed
from bodily action. Keywords: Gestures; vocal gestures; enaction; dynamic experience; verb content;
vitality; cross-modality |
Gestures as Diagrams: Towards a Semantics for Gesture | | BIBAK | PDF | 48 | |
Barbara Tversky; Azedeh Jamalian; Valeria Giardino; Seokmin Kang; Angela Kessell | |||
Gestures have many forms and serve many roles, some expressive, some
communicative, some for gesturer, some for listener. One role they serve is to
express and communicate ideas, simple ideas in forms like points, lines, and
sweeps, and complex ideas in models. A gesture model is an integrated sequence
of gestures that represents a situation. Gestures share these representative
features with diagrams, but have an added layer of meaning through action. Keywords: gesture; model; diagram; semantics; action; representation |
Disturbed visual exploration of communicative gestures in aphasic patients: Evidence from eye movement analysis | | BIBAK | PDF | 49 | |
Tim Vanbellingen; Rahel Schumacher; Noëmi Eggenberger; Simone Hopfner; Dario Cazzoli; Basil Preisig; Manuel Bertschi; Thomas Nyffeler; Klemens Gutbrod; Claudio Bassetti; Stephan Bohlhalter; René Müri | |||
Gestures are a crucial component of human non-verbal communication
(Birdwhistell, 1970). Aphasic patients may display deficits in recognizing and
producing gestures, preventing them from a successful use in communication
(Hogrefe et al., 2012). The present study aimed to examine the perception of
communicative and meaningless gestures in aphasic patients by means of eye
movement analysis. Eighteen patients with aphasia and twenty healthy control
participants took part in the study. Their visual exploration behavior was
measured during the presentation of forty gestures (20 meaningless and 20
communicative gestures) by means of an infrared eye-tracking system. Mean and
cumulative fixation duration were measured in different regions of interest
(ROIs), such as the face, the gesturing hand, the body, and the surrounding
environment. Significantly different patterns of visual exploration of
communicative gestures were found in aphasic patients compared to healthy
subjects. Aphasic patients fixated less on the ROIs comprising the face or the
gesturing hand during the exploration of communicative gestures. In contrast,
aphasic patients explored the environment more. Patients and healthy
participants did not differ in the visual exploration of meaningless gestures.
Visual exploration of communicative gestures, but not of meaningless gestures,
is disturbed in aphasic patients. Keywords: Gesture perception; Aphasia; Eye movement analysis |
« The pig with the pink hat »: An experimental study on speech/gesture coordination during development | | BIBA | PDF | 50 | |
Coriandre Vilain; Anne Vilain; Jeanne Clarke | |||
This paper presents two experimental pilot studies on the coordination between speech and pointing gestures in adults vs children, in a "find the odd one" game. Experiment 1 tests the effect of the length of the name of the target, and experiment 2 the place of the informative focus in the noun phrase that is used to designate the target. Both experiments reveal similar patterns of coordination in adults and children: (i) gesture adapts to the length of the spoken utterance; (ii) gesture starts before speech (all the more so for adults); (iii) the apex of the pointing gesture is aligned with the beginning of the name of the target, and not with the crucial informative feature in the utterance; and (iv) the end of the gesture is reached after the end of the spoken utterance. |
Predicting vocabulary development from co-speech gestures: Duration or occurrences, that's the question | | BIBAK | PDF | 51 | |
Paul Vogt; Ingrid Masson-Carro; J. Douglas Mastin | |||
In this paper, we investigate whether or not the duration of exposures to
co-speech gestures predicts later vocabulary development better than occurrence
frequencies. To this aim we examine the impact of child-directed co-speech
gestures on vocabulary development in infants from two cultural groups within
Mozambique. We find that duration and occurrence are strongly related and both
can predict later vocabulary development almost equally well. In addition, we
find considerable cultural differences in the amount and style of co-speech
gestures addressed to infants, as well as the way these predict later
vocabulary development. Keywords: vocabulary development, co-speech gestures, methodology, cultural
differences |
Cross-Cultural Differences in Gesture: Conceptual Preliminaries | | BIBAK | Abstract | 52 | |
Nick Enfield | |||
There is a fair amount of research on gesture that focuses on, or allows us
to infer information about, cross-cultural differences. But while the biggest
issue is empirical, it is important to prepare the problem conceptually. The
notion of 'gesture' has many meanings, and if we define the different phenomena
that come under the rubric, certain predictions can be made about what we can
expect to find from empirical work on gesture across cultures. I argue that
when form-meaning mappings are grounded in natural semiotic principles,
including both ontogenetic ritualization and microgenetic/enchronic inference,
this should correlate with lower diversity in form-meaning mappings in gesture;
conversely, when form-meaning mappings are grounded in arbitrary conventions,
this should correlate with greater diversity in mappings across cultures. I
argue that cross-cultural diversity should be generally lower for gesture than
for semantics or syntax because the manual-visual modality is more susceptible
to being interpreted via natural meaning principles. In the lexico-semantics of
spoken language I can withhold information more easily than in gesture, while
in manual signs (including in sign language) I may need to rely on you to
suppress it. If in the manual modality it is harder not to make certain
information available, then your agency over the expression of that information
is lower and it will be taken as less likely that you mean to convey that
information as part of what you're saying (basic correlation between semiotic
agency and accountability). It follows from these considerations that, for
principled reasons, cultural diversity in gesture will be significantly less
than that for the symbolic conventions of language. Keywords: Semiotics of gesture; cross-cultural variation in gesture; universals |
The pointing gesture and language learning | | BIBAK | PDF | Presentation | 53 | |
Danielle Matthews | |||
The production of pointing gestures in infancy is a key social-cognitive and
communicative milestone that has been found to predict later vocabulary
development. Yet very little is known about: 1) how infants learn to point, if
they learn at all; 2) whether early pointing abilities develop in step with
early vocal communicative abilities; 3) whether the two communicative
modalities explain the same or different variance in later vocabulary
development. We attempted to address these questions with a series of studies
that used training methods to explain development and considered individual
differences between infants. These studies highlight the value in considering
both communicative modalities in tandem in order to fully understand language
development. Keywords: Pointing, parenting, learning, vocabulary, babble |
Modelling the relation between gesture and speech in aphasia | | BIBA | Abstract | Presentation | 54 | |
Jan Peter de Ruiter | |||
Data from speakers with aphasia are an invaluable source of information for evaluating models of gesture and speech. In my talk, I will discuss four influential models of gesture and speech that were originally formulated for healthy speakers, and evaluate them for their ability to accommodate some central findings from research on iconic gestures and speech in Broca's type aphasia. The most important of these findings is that although the general speech and gesture rate in speakers with nonfluent aphasia is notably lower, people with aphasia produce more iconic gestures per word. The models I will discuss are a) McNeill's (1992) "Growth Point" (GP) Theory, b) the "Lexical Access Model" by Krauss, Chen & Gottesmann (2000), c) the "Sketch Model" by De Ruiter (2000), and d) the "Interface Model" by Kita & Özyurek (2003). Close inspection of the processing assumptions of these four models reveals that they can be reduced to two: one is the Lexical Access Model, and the other the GP/Sketch/Interface Model. Both of these models can accommodate the basic gesture and speech findings from Broca's type aphasia, but they do so in different ways. The Lexical Access Model assumes that gestures are made to compensate for word-finding problems by facilitating lexical access, while the GP/Sketch/Interface model explains the findings by assuming that speakers with nonfluent aphasia adapt to problems in their morpho-syntactic processing by producing smaller speech units. I will argue that both accounts can adequately accommodate the aphasia findings, but that the account of the GP/Sketch/Interface model is preferable on the basis of the available evidence so far. |
Architectural issues in the model of speech-gesture production: Gesture, Action and Language | | BIBA | Abstract | Presentation | 55 | |
Sotaro Kita | |||
I will discuss empirical evidence for the following key theoretical assumptions of the "Interface Model" (Kita & Özyürek, 2003) and the Information Packaging Hypothesis (Kita, 2000). The content of an iconic gesture tends to converge with the content of the concurrent clause in speech. This content coordination between speech and gesture is modulated by cognitive and communicative demands at the moment of speaking, and iconic gestures and pointing gestures (with concrete targets) differ from each other in this respect. Gestures are produced from the general-purpose action generation mechanism, used for both communicative actions (i.e., gestures) and practical actions. |
Computation meets cognition -- an integrated simulation model of speech-gesture production | | BIBA | Abstract | Presentation | 56 | |
Stephan Kopp | |||
Speech-gesture production in virtual or robotic agents is usually engineered, using fixed repositories and models that select, combine and adjust predefined behaviors. This gives control over the kind and quality of the producible behavior, but it is inherently limited and does not help to elucidate the nature of the underlying mechanisms in humans. I will present work on a computational production model that (1) is integrated, in that it encompasses multimodal conceptualization, composition of verbal and gestural forms, and their realization as overt behavior; (2) is flexible, in that it creates speech-gesture behavior on the spot based on communicative or cognitive constraints; and (3) is cognitively and empirically grounded, in that it rests upon empirical findings as well as cognitive modeling techniques. I will discuss how our model adopts and refines suggestions from theoretical accounts, and I will show how it reproduces human-like speech and gesture behavior. |
Computational models of speech-gesture production in virtual humans or robots | | BIBA | Abstract | Presentation | 57 | |
Catherine Pelachaud | |||
We have been developing a platform of humanoid agents, whether virtual or robotic, able to interact with humans. I will describe the architecture of our platform, which allows us to drive these different agent types. These agents, whether virtual or physical, can be driven from two different representation languages, namely the Function Markup Language (FML), which specifies the communicative intentions and emotional states, and the Behavior Markup Language (BML), which describes the multimodal behaviors to be displayed by the agents. I will also describe how we model behavior expressivity. Modulating the execution of a behavior with different dynamic qualities allows us to create agents displaying different emotionally-colored behaviors. |
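To illustrate the division of labour between the two languages, here is a sketch that assembles a small BML-style behavior specification in Python; the element and attribute names approximate the publicly documented BML drafts and should be read as assumptions, not as this platform's exact input format.

```python
import xml.etree.ElementTree as ET

# A BML-style specification: one speech chunk plus a beat gesture whose
# stroke is synchronised to the start of the speech. Names are assumptions.
bml = ET.Element("bml", id="bml1")
speech = ET.SubElement(bml, "speech", id="s1")
ET.SubElement(speech, "text").text = "Welcome to the museum."
ET.SubElement(bml, "gesture", id="g1", lexeme="BEAT", stroke="s1:start")

print(ET.tostring(bml, encoding="unicode"))
```

An FML document would sit one level above this, stating the communicative intention (e.g. greeting) from which such BML is generated.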