GESPIN 2023

"Broadening perspectives, integrating views"

Location: Nijmegen

Date: Wednesday 13th to Friday 15th of September, 2023

Paper submission opens: January 10th, 2023

Paper submission deadline: March 15th, 2023

Extended paper submission deadline: March 22nd, 2023

Notification of acceptance/rejection: end of May, 2023

Registration open: TBA


GeSpIn is an interdisciplinary event for researchers working on the interaction between speech and visual communicative signals, such as articulatory, manual, and bodily gestures co-occurring with speech. At GeSpIn 2023 we hope to bring together researchers who study visual signals alongside vocalization or speech, from multidisciplinary perspectives, in order to exchange ideas and present the cutting edge of their fields. This 8th edition of GeSpIn will be held in Nijmegen, the Netherlands, and will focus on the theme of “Broadening Perspectives, Integrating Views: Towards General Principles of Multimodal Signaling Systems”.

As such, we encourage researchers working in (multimodal) prosody, social anthropology, philosophy, (psycho)linguistics, psychology, cognitive science, neuroscience, human movement science, computer science (e.g., human-computer interaction), comparative biology, and related fields to submit research addressing topics such as:

  • Do principles of speech-gesture interaction generalize to, or interact with, other multimodal interactions and forms of audiovisual integration (e.g., speech interacting with head gestures or facial signals)?
  • What methods in computer science can be used to characterize and synthesize the (temporal) interactions between speech and gesture, within and between agents?
  • How is speech-gesture coupling influenced by the immediate dialogic context (e.g., the behavior of the interlocutor, or the speech act being performed)?
  • Can multimodal signaling as studied in non-human animals teach us something fundamental about multimodal communication systems that also applies to humans?
  • What can cross-linguistic comparisons of speech-gesture interaction teach us about the underlying principles of multimodal coordination?
  • Development of gesture-speech coordination: Can general principles of development be identified? Are there sensitive periods and developmental stages?
  • What is the role of basic biomechanical or neural processes in visual and auditory signaling and the perception of said multimodal signals?

  • Please note that all researchers and theoreticians/philosophers working on the interaction between gestural/visual and sound-producing cues (e.g., in terms of pragmatics, prosody, semantics) should feel invited, even if their particular study does not fit these topics exactly.

    Organizers

    Wim Pouw & James Trujillo (main contacts)

    Hans Rutger Bosker
    Linda Drijvers
    Marieke Hoetjes
    Judith Holler
    Lieke van Maastricht
    Asli Ozyurek

    Keynotes

    Nuria Esteve Gibert (Open University of Catalonia)

    Bio

    Nuria Esteve Gibert investigates language acquisition in infancy and childhood, in both typical and atypical populations. She is particularly interested in how speech prosody interacts with body movements in the expression and comprehension of linguistic meaning. She takes an experimental approach, using behavioral tasks and eye-tracking methodologies.

    Keynote abstract: Prosody as a key force in the development of the gesture-speech relationship

    In this talk I will present evidence that prosodic abilities are intimately linked with how gesture and speech relate to each other in development. When the gesture-speech relation is examined from a temporal point of view, prosodic abilities determine infants’ and children’s use of adult-like coordination patterns. When the gesture-speech relation is examined from a functional point of view, prosody and gesture work together to compensate for other structural linguistic abilities that are impaired or still to be developed. This is especially the case in non-referential contexts, so much so that prosody and body movements are two sides of the same coin when speakers convey pragmatic meanings.

    Yifei He (Philipps University Marburg)

    Bio

    Yifei He works as a postdoctoral researcher at the Translational Neuroimaging Lab, Philipps University Marburg. He is primarily interested in the brain mechanisms underlying how gesture interacts and integrates with speech during online processing, in both healthy and clinical populations. He also investigates sentence processing, speech perception, and action perception. He addresses these research questions mainly through EEG, fMRI, simultaneous EEG-fMRI, and behavioral methods.

    Keynote abstract: Processing co-speech gestures: a neural perspective

    In daily communication, visual input such as hand gestures plays an important role besides auditory speech. To date, the neural basis of how gesture integrates and interacts with speech during online comprehension remains elusive. In this talk, I will present evidence from EEG, fMRI, and simultaneous EEG-fMRI, showing the brain dynamics of how speech and gestures are integrated into coherent semantic representations. I will also present studies on how gestures impact the semantic processing of speech: (i) EEG data from a controlled experiment show that social aspects of gesture (body orientation) may directly influence the N400 amplitude during sentence processing; (ii) with a naturalistic paradigm, EEG and fMRI data consistently suggest that gestures may facilitate the neural processing of passages; at the single-word level, both lexical retrieval and semantic prediction of single words also benefit from the presentation of co-speech gestures.

    Susanne Fuchs (ZAS Berlin)

    Bio

    Susanne Fuchs investigates the biopsychosocial foundations of human interaction, focusing specifically on physiological processes such as breathing and motor control. Her main areas of interest are:
    1) The interplay between motion, breathing, and cognition,
    2) Speech preparation and pauses,
    3) Multimodality and iconicity,
    4) Biological and social aspects shaping individual behaviour in speech production and perception.
    She uses a wide range of techniques, among them OptiTrack motion capture, inductance plethysmography, electropalatography, and intraoral pressure sensors.

    Keynote abstract: The role of bones, joints and muscles for speech and gesture in interaction

    Did you ever wonder why we use the index finger for pointing gestures? In this talk, I would like to answer this question. Furthermore, I will propose to broaden the view on GEsture and SPeech in INteraction by integrating motor control and biomechanics into the discussion of gesture-speech links. Specifically, I would like to focus on three key aspects: 1) body properties of speech articulators and limbs (e.g., mass, dynamics) and their impact on the coordination between gesture and speech; 2) breathing as an integral part of body motions and the voice (gesture-speech physics); and 3) the impact of pointing motions on body posture and head motion. I believe that such an integrative view will be fruitful for understanding the foundations of speech and gesture and will have consequences for theoretical accounts.

    Franz Goller (The University of Utah)

    Bio

    Franz Goller studies the behavioral physiology of sound production and song learning in birds. Current projects focus on 1) physical mechanisms of sound production; 2) the motor coordination between all motor systems involved in singing; 3) coordination between vocal and visual displays (i.e., multimodal signaling); 4) motor aspects of vocal development; 5) acoustic models and song syntax; 6) energetics of song production. The integrative aspects of these studies at the interface of neurobiology and behavior provide a unique opportunity to bridge neural control of a complex learned behavior to its evolutionary and ecological relevance in the natural environment.

    Keynote abstract: Sweet songs and hot dances - mechanistic and evolutionary perspectives on multimodal signaling in non-human animals

    Multimodal signaling is widespread among non-human animals and covers all functional aspects of communication behaviors. A diverse array of sensory modalities is used for communication, and I will highlight a few of the most remarkable multimodal display behaviors. After this overview, a few examples will be presented in which the integration of auditory and visual communication signals has been studied from the perspective of neural control. Detailed understanding of the neuromuscular control strategies for producing two independent, challenging signals simultaneously allows inferences about the selection scenarios leading to complex displays. Comparative analyses provide additional insights into the evolution of multimodal signals, as will be shown with a few examples. This review of studies of multimodal signaling in non-human animals illustrates a remarkable diversity, providing a feature landscape with highly extreme display characteristics. This landscape facilitates comparative assessment of human multimodal communication from the perspective of proximate and ultimate mechanisms.



    Conference fees

    We are still finalizing the budget, but we aim for a registration fee of around 50 euros for students and around 100 euros for regular attendees, depending on the institutional financial support for the conference.

    Call for submissions

    Extended submission deadline: March 22nd, 2023

    We are happy to announce that abstract and paper submission is now open for the 8th Gesture and Speech in Interaction (GeSpIn 2023) meeting, taking place in Nijmegen, the Netherlands, from September 13th to 15th, 2023.

    GeSpIn is an interdisciplinary event for researchers working on the interaction between speech and visual communicative signals, such as articulatory, manual, and bodily gestures co-occurring with speech. At GeSpIn 2023 we hope to bring together researchers who study visual signals alongside vocalization or speech, from multidisciplinary perspectives, in order to exchange ideas and present the cutting edge of their fields. This 8th edition of GeSpIn will focus on the theme of “Broadening Perspectives, Integrating Views: Towards General Principles of Multimodal Signaling Systems”.

    As such, we encourage researchers working in (multimodal) prosody, social anthropology, philosophy, (psycho)linguistics, psychology, cognitive science, neuroscience, human movement science, computer science (e.g., human-computer interaction), or comparative biology to submit their work.

    Submission guidelines

    Contributors are invited to submit either an abstract (1 page + 1 additional page for figures and references) or a 6-page paper. The 6-page papers will be published in the online proceedings (and will be indexable and have DOIs). Please also note that either type of submission (abstract or paper) can lead to acceptance as a talk or a poster; neither submission type is favoured over the other during review or acceptance/selection.

  • Abstracts (or papers) should be written in English and not exceed two (or six) A4 pages including figures, tables, and references
  • Abstracts and papers should include 3 keywords, to facilitate the review process
  • Abstracts and papers should be anonymous, and all references to authors’ identities should be omitted for the review process
  • Please attach your abstract or paper as a separate document in the submission portal (in addition to filling out the text fields)
    Types of topics

    All papers must be original and not yet published. We invite submissions on topics such as:

  • Do principles of speech-gesture interaction generalize to, or interact with, other multimodal interactions and forms of audiovisual integration (e.g., speech interacting with head gestures or facial signals)?
  • What methods in computer science can be used to characterize and synthesize the (temporal) interactions between speech and gesture, within and between agents?
  • How is speech-gesture coupling influenced by the immediate dialogic context (e.g., the behavior of the interlocutor, or the speech act being performed)?
  • Can multimodal signaling as studied in non-human animals teach us something fundamental about multimodal communication systems that also applies to humans?
  • What can cross-linguistic comparisons of speech-gesture interaction teach us about the underlying principles of multimodal coordination?
  • Development of gesture-speech coordination: Can general principles of development be identified? Are there sensitive periods and developmental stages?
  • What is the role of basic biomechanical or neural processes in visual and auditory signaling and the perception of said multimodal signals?

  • Please note that all researchers and theoreticians/philosophers working on the interaction between gestural/visual and sound-producing cues (e.g., in terms of pragmatics, prosody, semantics) should feel invited, even if their particular study does not fit these topics exactly.

    Download templates

    MS Word templates are available for the 2-page abstract and the 6-page proceedings paper, and a LaTeX template is available for the 6-page proceedings paper. Follow the download link to obtain the template package (note that it will download a .zip file).

    Submissions are handled via EasyChair.

    Contact Us

    You can contact us by reaching out to