Chandrasekaran, C.; Lemus, L.; Trubanova, A.; Gondan, Matthias; Ghazanfar, A. A.

Monkeys and Humans Share a Common Computation for Face/Voice Integration

Chandrasekaran, C., Lemus, L., Trubanova, A., Gondan, Matthias and Ghazanfar, A. A. (2011) Monkeys and Humans Share a Common Computation for Face/Voice Integration. PLoS Computational Biology 7 (9), e1002165.

Date of publication of this full text: 05 Feb 2020 11:03
Article
DOI for citing this document: 10.5283/epub.41486


Abstract

Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a "race" model failed to account for their behavior patterns. Conversely, a "superposition model", positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.



Participating institutions


Details

Document type: Article
Journal title: PLoS Computational Biology
Publisher: PUBLIC LIBRARY SCIENCE
Place of publication: SAN FRANCISCO
Volume: 7
Issue: 9
Page range: e1002165
Date: 2011
Institutions: Human Sciences > Institute of Psychology > Chair of Psychology I (General Psychology I and Methodology) - Prof. Dr. Mark W. Greenlee
Identifier: 10.1371/journal.pcbi.1002165 (DOI)
Keywords: AUDIOVISUAL SPEECH-PERCEPTION; SUPERIOR TEMPORAL SULCUS; OROFACIAL MOTOR REPRESENTATION; AUDITORY-VISUAL INTERACTIONS; CHIMPANZEE PAN-TROGLODYTES; BIMODAL DIVIDED ATTENTION; MACAQUES MACACA-MULATTA; SACCADIC EYE-MOVEMENTS; OLD-WORLD MONKEYS; MULTISENSORY INTEGRATION
Dewey Decimal Classification: 100 Philosophy and Psychology > 150 Psychology
Status: Published
Refereed: Yes, this version has been refereed
Produced at the University of Regensburg: Yes
URN of the UB Regensburg: urn:nbn:de:bvb:355-epub-414864
Document ID: 41486
