Abstract
The integration of the visual and auditory modalities during human speech perception is the default mode of speech processing. That is, visual speech perception is not a capacity "piggybacked" onto auditory-only speech perception. Visual information from the mouth and other parts of the face is used by all perceivers to enhance auditory speech. This integration is ubiquitous and automatic, operating similarly across individuals and cultures, and the two modalities appear to be integrated even at the earliest stages of human cognitive development. If multisensory speech is the default mode of perception, then this should be reflected in the evolution of vocal communication. The purpose of this review is to describe data revealing that human speech is not uniquely multisensory: the default mode of communication is multisensory in nonhuman primates as well, though it may emerge along a different developmental trajectory. Speech production, however, exhibits a unique bimodal rhythmic structure in that both the acoustic output and the movements of the mouth are rhythmic and tightly correlated. This structure is absent from most monkey vocalizations. One hypothesis is that the bimodal speech rhythm evolved through the rhythmic facial expressions of ancestral primates, as suggested by mounting comparative evidence on the lip-smacking gesture.
| Original language | American English |
| --- | --- |
| Pages (from-to) | 1441-1448 |
| Number of pages | 8 |
| Journal | Behavioral Ecology and Sociobiology |
| Volume | 67 |
| Issue number | 9 |
| State | Published - Sep 2013 |
ASJC Scopus subject areas
- Ecology, Evolution, Behavior and Systematics
- Animal Science and Zoology
Keywords
- Crossmodal speech
- Monkey vocalizations
- Primate communication
- Speech evolution