Compensation for a large gesture-speech asynchrony in instructional videos
Author
Editor
- Gaëlle Ferré
- Mark Tutton
Summary, in English
We investigated the pragmatic effects of gesture-speech lag by asking participants to reconstruct formations of geometric shapes from instructional videos presented in four conditions: synchronized, video lag (1,500 ms), audio lag (1,500 ms), and audio only. All three video groups rated the task as less difficult than the audio-only group did, and they also performed better. Scores were slightly lower when sound preceded gestures (video lag), but not when gestures preceded sound (audio lag). Participants thus compensated for delays of 1.5 seconds in either direction, apparently without conscious effort. This greatly exceeds the previously reported time window for automatic multimodal integration.
Publishing year
2015
Language
English
Pages
19-23
Publication/Series
Gesture and Speech in Interaction - 4th edition (GESPIN 4)
Document type
Conference paper
Topic
- Psychology
Keywords
- gesture-speech synchronization
- multimodal integration
- temporal synchronization
- comprehension
Conference name
Gesture and Speech in Interaction (GESPIN 4)
Conference date
2015-09-02 - 2015-09-04
Conference place
Nantes, France
Status
Published