ISGS Conference 2016
Gesture, Creativity, Multimodality
Book of Abstracts

Contents: Keynote speakers · Special guest · Symposia · Talks · Posters

Last revised: July 17th

All sections but the symposia section are organised alphabetically, according to the name of the first author. Symposia are organised alphabetically, according to the names of the symposia conveners. Within each symposium, abstracts are organised according to their order of presentation during the symposium.

Keynote speakers

Martha W. Alibali
University of Wisconsin – Madison, USA
Toward an Integrated Framework for Gesture Production and Comprehension

In this talk, I draw on multiple lines of research to sketch an integrated framework for gesture production and gesture comprehension. The first part of the talk will focus on gesture production. I will present evidence that gestures derive from simulated actions and perceptual states. I will argue that these mental simulations and the corresponding gestures serve to schematize spatial and motoric features of objects and events, by focusing on some features and neglecting others.
Further, I will argue that, because of its ability to schematize, gesture can affect thinking and speaking in specific ways. The second part of the talk will focus on gesture comprehension. I will argue that seeing others’ gestures evokes simulations of actions and perceptual states in listeners. In turn, these simulations guide listeners to schematize objects and events in particular ways. These simulations may also give rise to gestures or actions. The third section of the talk will seek to bring production and comprehension together. I will argue that, with experience and via processes of statistical pattern detection, people develop expectations about when others are likely to produce gestures. These expectations guide people's attention to others’ gestures at times when those gestures are likely to contribute to comprehension. Thus, gesture production and comprehension are linked, both because of their shared ties to the action system, and because gesture comprehension depends, in part, on patterns that arise due to regularities in gesture production.

Alessandro Duranti
UCLA, USA
Eyeing Each Other: Visual Access during Jazz Concerts

During jazz concerts it is expected that the members of small combos will take one or more “solos,” that is, turns at creating “on the spot” melodies, chord substitutions, or rhythmic patterns. The absence of a conductor and the expectation that what is being played is different from whatever is annotated on the page create a number of interactional problems that need to be resolved. I will focus on one problem: musicians need to know at any given time who is going to solo next or when the solos are ending and all the band members join in to play the melody one more time.
A number of possible principles are made available by the history and culture of jazz, including a sometimes explicit and other times implicit “hierarchy of players and instruments” (e.g., the band leader goes first; horn players go before rhythm section players like the pianist or the guitarist; the drummer takes one solo during each set). In most situations, however, the aesthetics of jazz improvisation leaves room for ambiguity about the identity of the next player and the length of each solo. As I will show, it is in these contexts that eye gaze and other gestures as well as body postures come to play an important role. But I will also argue that gestures and body postures can only be meaningful and effective against a shared understanding of where a transition point is possible.

Scott Liddell
Gallaudet University, USA

Signers depict to such an extent that it is difficult to find a stretch of discourse without some type of depiction. Tokens are minimal depictions that take the form of invisible, isolated entities in the space within the signer’s reach. Although invisible, tokens are conceptually present at those sites and signers can direct pronouns and indicating verbs toward them for referential purposes. Other invisible depictions include linear spatial paths that depict time. Buoys, a class of signs produced by the non-dominant hand, also depict entities, but buoys make the depictions visible. Theme buoys, fragment buoys, and list buoys also give a physical form to the entities associated with them. Surrogates are depictions of (typically) humans and may be visible or invisible. A visible surrogate takes the form of (part of) the signer’s body and depicts actions and dialogue. Visible surrogates frequently interact with life-sized invisible surrogate people or entities. Another type of depiction involves shapes or topographical scenes, including actions within those scenes.
Depicting verbs create and elaborate this type of depiction and will be the focus of this presentation. Depicting verbs comprise a very large category with unique lexical and functional properties. Their lexical uniqueness comes from their lack of a specified place of articulation and, for some, unspecified aspects of the hand’s orientation. Their functional uniqueness comes from the requirement to produce every instance of a depicting verb within a spatial depiction. The depicting verb VEHICLE-BE-AT, for example, expresses the fixed, lexical meaning ‘a vehicle is located on a surface’. But a depicting verb never expresses only its lexical meaning. That meaning is always embedded within a depiction. Since VEHICLE-BE-AT has no lexically specified place of articulation, signers must provide one each time they use the verb. The place selected locates the vehicle within a topographical depiction and the orientation of the hand depicts the vehicle’s orientation. Combining the lexical meaning with the depiction produces something like ‘A vehicle is located (right here in the depiction) on a surface (facing this way in the depiction)’. The combination of fixed lexical meaning and non-fixed, creative depiction produces a vastly enhanced meaning from a single depicting verb. Video examples from adults and children will illustrate the extent to which depicting verbs are used, the nature of what they depict, and the speed at which signers are able to shift between depictions.
Cornelia Müller
European University Viadrina Frankfurt (Oder), Germany
Frames of Experience – The Embodied Meaning of Gestures

Addressing gestures as an embodied form of communication might appear somewhat tautological, while, in fact, most of the current debates in philosophy, linguistics, psychology, anthropology or the cognitive sciences more generally have not had much impact on theorizing the meaning of gestures in its specifics as a bodily mode of human expression (Streeck’s 2010 praxeological view is an important exception). My attempt to offer an embodied understanding of the meaning of gestures is related to Streeck’s work but also informed by cognitive linguistics’ perspective on the embodied grounds of meaning more generally. Philosopher Mark Johnson formulates this position in his book The Meaning of the Body: An Aesthetics of Human Understanding: “[…] meaning grows from our visceral connections to life and the bodily conditions of life. We are born into the world as creatures of the flesh, and it is through our bodily perceptions, movements, emotions, and feelings that meaning becomes possible and takes the form it does.” (Johnson 2007: 17) I am going to suggest that gestures are a primary field in which to study how meaning emerges from bodily experiences. Not only are they grounded in very specific forms of embodied experience, but, by studying gestures, we can actually learn something about how meaning and even some very basic linguistic structures may emerge from embodied frames of experience, notably in conjunction with their interactive contexts-of-use. This take on gestural meaning includes referential as well as pragmatic gestures. Informed by the Aristotelian concept of mimesis as a fundamental human capacity, a systematics for an embodied cognitive-semantics and pragmatics of gestures will be presented.
I will argue that the meaning of gestures referring both metaphorically and non-metaphorically is experientially grounded in different forms of bodily mimesis, and that the same holds for pragmatic forms of gesturing (see also Zlatev 2014). Putting the mimetic potential of gestures center-stage opens a systematic pathway to accounting for the meaning of a given gestural form. Gestural mimesis, however, never happens outside a given moment in a communicative interaction. The meaning of gestures therefore always incorporates this specific contextual moment, and this is what I refer to as a frame of experience (Fillmore 1982). In conventionalization processes of co-speech gestures, we can witness sedimentations of the interplay between a motivated kinesic form and aspects of context that result in ‘semantizations’ of form clusters and kinesic patterns. Sometimes this involves the analytic singling out of a meaningful kinesic core with particular contextualized meanings, as for example in the case of a group of gestures sharing a movement away from the body. The meaning of gestures thus emerges from embodied frames of experience, where embodiment involves both the sensory-motor experience of the body in motion and the specific intersubjective contextual embedding of this bodily experience.

Catherine Pélachaud
CNRS, Télécom ParisTech, France
Modeling conversational nonverbal behaviors for virtual characters

In this talk I will present our ongoing effort to model virtual characters with nonverbal capacities. We have been developing Greta, an interactive Embodied Conversational Agent platform. It is endowed with socio-emotional and communicative behaviors. Through its behaviors, the agent can sustain a conversation as well as show various attitudes and levels of engagement. The ECA is able to display a large variety of multimodal behaviors to convey communicative intentions.
We rely on a lexicon that contains entries defined as temporally coordinated multimodal signals. At run time, the signals for given communicative intentions and emotions are instantiated and their animations realized. Communicative behaviors are not produced in isolation from one another. We have developed models that generate sequences of behaviors; that is, behaviors are not instantiated individually, but the surrounding behaviors are taken into account. During this talk, I will first introduce how we build the lexicon of the virtual character using various methodologies, e.g. corpus annotation, user-centered design, or motion capture data. The behaviors can be displayed with different qualities and intensities to simulate various communicative intentions and emotional states. I will also describe the multimodal behavior planner of the virtual agent platform.

Special guest

Leonard Talmy
University at Buffalo
Gestures as Cues to a Target

This talk examines one particular class of co-speech gestures: "targeting gestures". In the circumstance addressed here, a speaker wants to refer to something, her "target", located near or far in the physical environment, and to get the hearer's attention on it jointly with her own at a certain point in her discourse. At that discourse point, she inserts a demonstrative such as this, that, here, or there that refers to her target, and produces a targeting gesture. Such a gesture is defined by two criteria: 1) it is associated specifically with the demonstrative; 2) it must help the hearer single the target out from the rest of the environment. That is, it must provide a gestural cue to the target. The main proposal here is that, on viewing a speaker's targeting gesture, a hearer cognitively generates an imaginal chain of fictive constructs that connect the gesture spatially with the target.
Such an imaginal chain has the properties of being unbroken and directional (forming progressively from the gesture to the target). The fictive constructs that, in sequence, make up the chain consist either of schematic (virtually geometric) structures, or of operations that move such structures, or of both combined. Such fictive constructs include projections, sweeps, traces, trails, gap crossing, filler spread, and radial expansion. Targeting gestures can in turn be divided into ten categories based on how the fictive chain from the gesture most helps a hearer determine the target. The fictive chain from the gesture can intersect with the target, enclose it, parallel it, co-progress with it, sweep through it, follow a non-straight path to it, present it, neighbor it, contact it, or affect it. The prototype of targeting gestures is pointing, e.g., a speaker aiming her extended forefinger at her target while saying That's my horse. But the full range of such gestures is actually prodigious. This talk will present some of this range and place it within an analytic framework. This analysis of targeting gestures will need to be assessed through experimental and videographic techniques. What is already apparent, though, is that it is largely consonant with certain evidence from the linguistic analysis of fictive motion and from the psychological analysis of visual perception.

Symposia

Lexical acquisition and Gesture across Bantu and Romance languages
Olga Capirci 1, Jean-Marc Colletta 2
1 Consiglio Nazionale delle Ricerche – CNR (ISTC-CNR) – Via S. Martino della Battaglia 44, 00185 Roma (RM), Italy
2 Université Stendhal (Grenoble 3) – BP 25, 38040 Grenoble Cedex, France

Within research on gestures in development, only a few studies have offered comparative data on the early gestures of young children from different linguistic and cultural contexts.
Recently, some comparative studies have relied on an Italian picture-naming task (PiNG test; Bello et al., 2010, 2012) triggering children’s spontaneous gesture production across a wide range of items (i.e. 40 pictures) to attempt effective comparisons across different languages and cultures (Pettenati et al., 2005; Stefanini et al., 2009). Researchers have therefore extended the use of this test to other populations, including children from Japan, Australia, Canada and Britain, while still relying on a common and structured tool for data collection (Pettenati et al., 2012; Hall et al., 2013; Marentette et al., 2015; Morgan, in press). However, comparative studies using the PiNG have only considered highly economically developed countries, and no study to date has considered child populations from Africa or from rural environments. These populations may differ in their overall communicative style, which also influences their gesture production, as well as in their lexical development, which has been shown to be closely related to children’s gesture production (Iverson et al., 1994; Özcaliskan & Goldin-Meadow, 2005). This panel is aimed at presenting new comparative data on using the PiNG task with children living in South Africa, France and Italy, considering for the first time both Bantu (i.e. Zulu and South Sotho) and Romance (i.e. French and Italian) languages. This data, from the broader GESTLAND project, was collected by four collaborating international teams sharing tasks (i.e. adaptation of the PiNG for French and Bantu children), methods (i.e. the task was administered in comparable ways in very different cultural contexts), coding (i.e. same coding scheme for video analysis and same annotation software) and age groups (i.e. each partner considered 3 age groups, with mean ages of 24, 30 and 36 months respectively) to explore spontaneous gesture production. The four contributions are: 1) R. Kunene-Nicolas, S. Ahmed, N.
Ntuli, ”Spontaneous gestures in lexical items of Zulu speaking children”; 2) T. Nteso and H. Brookes, ”The use of representational gestures by South Sotho children aged 24 to 36 months”; 3) J.-M. Colletta, A. Hadian Cefidekhanie, E. Jalilian, ”Morphological variation in early representational gesture”; 4) H. Brookes, R. Kunene-Nicolas, O. Capirci, J.-M. Colletta, L. Sparaci, A. Hadian Cefidekhanie, ”Gesture productions across linguistic and cultural contexts”. Discussant: Virginia Volterra (ISTC-CNR, Rome), for her foundational work on the role of gestures in language development and on using the PiNG task to study children’s gestures both within and across cultures.

Keywords: Early gesture, Representational gesture, Bantu language, Romance language, Cross-cultural and cross-linguistic comparison, Lexical acquisition

Spontaneous gestures in lexical items of Zulu speaking children
Ramona Kunene-Nicolas, Saaliha Ahmed, Nonhlanhla Ntuli
The University of the Witwatersrand (WITS) – South Africa

Previous studies have shown that in children the lexicon is tightly linked to the development of categorization skills as well as the construction of meaning (Davidoff & Masterson, 1995; Gentner, 1982). Nouns and predicates are characterized by differences in their perceptual and cognitive complexity. In cross-linguistic comparisons, several studies have illustrated the higher frequency of nouns rather than verbs in the first stages of language development (Goldfield, 2000; Caselli et al., 2007). However, the languages that have informed this developmental trend have never included the contribution of Bantu languages, whose linguistic structure is agglutinative and has an intricate noun class system. To test this, we adapted the PiNG assessment of lexical comprehension and production (Bello et al., 2010) to the Bantu language Zulu.
This tool allows the evaluation of children’s repertoires in terms of noun and predicate items, as well as testing the evolution of lexical development in both speech and gesture. 36 monolingual and typically developing Zulu speaking children aged between 25 and 36 months (12 participants in each of three age groups: 25, 30 and 36 months) participated in this linguistic task, and their spontaneous gestural productions were compared. Data was annotated in ELAN, using a coding manual designed by the collaborators of the GESTLAND programme in order to compare lexical development across Bantu and Romance languages. As reported in previous research, our findings showed that Zulu children had a higher prevalence of noun items than predicate items, and performance on noun comprehension and production was better than on predicate comprehension and production. There was also an effect of culture, as the test was adapted from an Italian assessment tool. Children produced spontaneous gestures in this naming task as in other languages. Children produced a high number of pointing gestures as well as some representational gestures. There was a higher production of gesture in the predicate production task than in the noun production task, in line with previous findings that children produce more gestures to describe actions (such as pushing/pulling, phoning). This paper examines the items that solicited the most referential gestures as well as the developmental trend.
Keywords: Lexical acquisition, representational gestural development, Bantu language

The use of representational gestures in South Sotho children aged two to three years
Thato Nteso, Heather Brookes
University of Cape Town (UCT) – Private Bag X3, Rondebosch, 7701, Cape Town, South Africa

This study examines the use of gestures occurring during a picture comprehension and naming task among speakers of a Bantu language in South Africa. The comprehension and naming task (PiNG) was developed for Italian children (Bello et al., 2012). For cross-cultural comparative purposes we made minimal adaptations, only replacing four culturally unfamiliar items with the closest equivalent items familiar in the local context. Sixty-two South Sotho speaking children (age range between 24 and 36 months; 27 boys; 35 girls) divided into three age cohorts (24, 30 and 36 months) identified (comprehension) and named (production) pictures designed to elicit nouns and predicates (actions and characteristics). Children produced conventional interactive gestures such as nods for ‘yes’ and shrugs for ‘I don’t know,’ pointing, representational, pragmatic and word searching gestures. Nodding and pointing were the most frequent types of gesture, followed by representational gestures. The frequency of representational gestures decreased from 25 to 36 months. Children used more representational gestures during speech production, when they had to name nouns and predicates, than during comprehension. Children produced more representational gestures when naming predicates than nouns. Predicate items that elicited the most representational gestures were ‘pull,’ ‘laugh,’ ‘kiss,’ ‘phone,’ ‘wash,’ ‘open’ and ‘short/long.’ Naming actions appears to elicit the most representational gestures. Children occasionally produced representational gestures as a substitute for speech.
For example, the production of a size gesture for ‘long’ and ‘far’ as a substitute for speech suggests that children also use representational gestures when struggling with verbal tasks. Noun production items that elicited the most representational gestures were ‘heater’ and ‘gloves.’ Most representational gestures involved children using their bodies to mimic the actions depicted in the pictures or actions relating to the picture. They also used their hands as objects when miming actions, e.g. biting on the fist to represent an apple, dragging the flat hand over the hair for a comb, or using the fist and arm as a hammer. Children also used their hands to show size, distance and shape. Similar to previous findings in Italian children, representational gestures depicting actions were more frequent than size and shape gestures (Stefanini et al., 2009). Analysis of representational gestures across the three age cohorts shows some changes in the way in which children represent objects, actions and concepts. For example, bi-handed asynchronous gestures were more common among three-year-olds than among two-year-olds. These results are discussed in relation to findings in other studies on children’s gestural development.

Keywords: South Sotho, gesture development, representational gestures, lexical acquisition

Morphological variation in early representational gestures
Jean-Marc Colletta, Ali Hadian Cefidekhanie, Elnaz Jalilian
Laboratoire de Linguistique et Didactique des Langues Étrangères et Maternelles (LIDILEM) – Université Grenoble 3 – France

Young children start to produce representational gestures as early as their second year, either as substitutes for speech or combined with words (Capirci & Volterra, 2008).
How children use representational gestures at early and later stages of language acquisition has been well investigated within the past decades (Acredolo & Goodwyn, 1988; Batista, 2012; Capone & McGregor, 2004; Colletta & Guidetti, 2010). In this study, we focus on the morphological features of early representational gestures and their variation between children. To explore this issue, the Italian PiNG test (Stefanini et al., 2009; Bello et al., 2012) was adapted to French. PiNG is a picture identification and denomination task that is used to assess early vocabulary in the child for both comprehension and production. As was shown in other studies (Pettenati et al., 2012; Pettenati, Stefanini & Volterra, 2009), a fair number of children spontaneously gesture during the task. In this study, we compared gestural performance between 36 monolingual and typically developing French speaking children aged 22 to 38 months. The video data was annotated in ELAN, using a coding manual designed by all participants in the ongoing GESTLAND programme (http://gestland.eu). The first results show that children produced a total of 1104 gestures, among which were 714 pointing gestures, 331 representational gestures, and 59 other gestures (i.e. emblems, pragmatic gestures, beats). Among referential gestures, the proportion of pointing gestures decreased with age, whereas the proportion of representational gestures did not vary much. Among the latter, some gestures were spontaneously produced by children during the comprehension and production tasks or during intervals between two subtasks, while other gestures were elicited by a question from the adult. The representational gestures the children spontaneously produced mostly referred to the target objects and actions. Some actions (e.g.
‘to push’, ‘to wash hands’, ‘to phone’, ‘to drive’, ‘to laugh’) and objects (‘comb’, ‘umbrella’, ‘lion’) activated more gesture production than others, thus producing collections of comparable representational gestures. Additional collections were elicited by the adult, such as for the ‘toothbrush’ and the ‘book’. The detailed analysis of their morphological features (handshape, movement, location) showed great variety, which we will illustrate with examples in our presentation. Interestingly, changes in morphological features seem to be linked to both age and performance in the task. We provide tentative explanations based on the Joussian mimism framework (Jousse, 1974; Calbris, 2011) applied to early cognitive development.

Keywords: gesture, children, representation, variation, morphology