

Institut für Computerlinguistik

Audiovisual project


Can you predict a face from a voice?

UZH researchers are about to find out

Would you like to be involved? 




Goals: The Phonetics and Speech Science Group of the University of Zurich is currently investigating the role of expressive audio-visual information in speaker and speech recognition, with the goal of better understanding the mutual contribution of face and voice cues in speech communication.

Participants:  We are looking for mothers who:

  • are native speakers of British or American English, or of High German
  • have typical hearing and vision (no glasses)
  • have typical speech and language development
  • have a baby between 6 and 8 months of age

Eligible candidates will be asked to submit a voice recording for accent-screening purposes.

Recording sessions: Mothers will be video-recorded while reading pre-selected stories to their child and to an adult, in two separate blocks. During the recordings, the child is looked after by a caretaker in a room adjacent to the recording room. Mother and child will be connected via video call, allowing visual interaction between the two as well as auditory feedback from mother to child. The audiovisual recordings will subsequently serve as stimuli for online perceptual experiments.

IMPORTANT: The children will not be audio- or video-recorded. 

Key Information

Where? LiRI Lab - Andreasstrasse 15, 8050 Zürich

How long? 60–90 minutes, including welcome and debriefing

Reward? 100 CHF

The laboratory is fitted with baby changing facilities and visitor parking (upon request).


Dr. Elisa Pellegrino:

Julia Heimann (research assistant):



Further information

About Ethics

The experiments received approval from the Ethics Committee of the Faculty of Arts and Social Sciences. 



The project is funded by the Stiftung für Wissenschaftliche Forschung