Computational Neuroscience of Speech & Hearing
How does the brain analyze complex acoustic signals such as spoken language and extract meaningful units from them? How does this process change as we grow older, or with age-related hearing loss or cognitive impairment? And how can technology support the development of rehabilitation strategies to counteract (spoken) language pathology?
In our research group, we investigate the neural and cognitive mechanisms of speech and language using a variety of neuroimaging techniques (e.g. EEG, MRI) as well as psychophysical and neuropsychological testing. Our research focuses on healthy and clinical populations who have difficulty processing and understanding (spoken) language (e.g. due to hearing loss, aging, or dementia). Using EEG data from listeners, we apply artificial intelligence to detect dementia risk in older adults at an early stage.
The long-term goal of our research is to develop individual, context-based rehabilitation strategies for audio(-visual) speech processing difficulties in healthy older adults and individuals with dementia, striving to maintain sensory and cognitive functioning across the lifespan. Approaches we are working on include neurophysiology-based training using neurofeedback, virtual reality training to create naturalistic settings, auditory-cognitive training, and lip-reading training for the hearing impaired.
We are also interested in the association between hearing loss and brain atrophy, the cognitive mechanisms of audiovisual speech processing, and bilingualism and foreign language learning in an aging population.
Some of our current research projects:
- Neural consequences of (the highly prevalent age-related) hearing loss in health and disease (e.g. Alzheimer’s disease)
- Neural mechanisms of speech processing and disruptions of those mechanisms (e.g. in older adults)
- Machine learning approaches to predict cognitive outcomes in older adults from EEG data recorded while listening to language
- Development of individual, context-based rehabilitation strategies against hearing and speech processing difficulties (e.g. neurofeedback, virtual reality training, daily-life interventions via mobile phone apps)
- Naturalistic settings to study speech perception in older adults and as a basis for interventions
- The association between hearing loss/speech processing difficulties and dementia/cognitive decline in aging
- Cognitive components of audiovisual speech processing and lip-reading training
- Bilingualism and foreign language learning
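To illustrate the machine-learning direction above in the simplest possible terms, the sketch below trains a toy nearest-centroid classifier on synthetic "EEG band-power" features. Everything here is hypothetical: the feature names, group labels, and data are invented for illustration and do not reflect the group's actual models, pipelines, or datasets.

```python
# Illustrative sketch only: a toy nearest-centroid classifier on synthetic
# "EEG band-power" features. All data and labels are hypothetical and do
# NOT represent the research group's actual methods or results.
import math
import random

random.seed(0)

def synth_features(n, shift):
    # Fake per-subject feature vectors (e.g. band power in 4 frequency
    # bands); `shift` crudely mimics a group-level difference.
    return [[random.gauss(shift, 1.0) for _ in range(4)] for _ in range(n)]

def centroid(rows):
    # Column-wise mean of a list of feature vectors.
    return [sum(col) / len(rows) for col in zip(*rows)]

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# "Training" data: two synthetic groups with different feature means.
low, high = synth_features(30, 0.0), synth_features(30, 1.5)
c_low, c_high = centroid(low), centroid(high)

def predict(x):
    # Assign a subject to whichever group centroid is closer.
    return "high-risk" if dist(x, c_high) < dist(x, c_low) else "low-risk"

# Evaluate on held-out synthetic subjects.
test = [(f, "low-risk") for f in synth_features(20, 0.0)] + \
       [(f, "high-risk") for f in synth_features(20, 1.5)]
acc = sum(predict(f) == y for f, y in test) / len(test)
print(f"toy accuracy: {acc:.2f}")
```

A real EEG pipeline would of course involve preprocessing, feature extraction from continuous recordings, cross-validation, and far more capable models; this toy example only shows the basic shape of supervised classification from subject-level features.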
Ongoing third-party funded projects
| Period | Project | Role | Funder / partner | Status | Amount |
|---|---|---|---|---|---|
| 01/2022-12/2025 | "Benefits of auditory-cognitive trainings for hearing rehabilitation and cognitive functioning in older hearing impaired" | PI | Industrial partner: Sonova | agreed | CHF 264'309 |
| 02/2022-12/2022 | "Vision and hearing in residential care for older adults: Longitudinal insights from the Swiss RAI dataset" | PI | Schweizerischer Blindenverein SZBLIND | agreed | CHF 5'000 |
| 10/2021-09/2023 | "Lip reading training via mobile phone app: A step towards interventions against hearing loss" | Co-PI | Zürcher Stiftung für das Hören | ongoing | CHF 200'000 |
| 06/2020-05/2025 | "Novel ways of preventing age-related cognitive decline: Identifying the role of speech processing deficits in Alzheimer's disease" | PI | Schweizerischer Nationalfonds | ongoing | CHF 1'349'606 |
Completed third-party funded projects are not listed. Please see the CV of Prof. Nathalie Giroud for more information: UZH - Institut für Computerlinguistik - Nathalie Giroud