Computational Neuroscience of Speech & Hearing


The research group "Computational Neuroscience of Speech & Hearing" investigates the neural and cognitive underpinnings of auditory communication across the lifespan.

In our research, we study both healthy populations and clinical populations with hearing and language pathology (e.g., due to age-related hearing loss, dementia, or Alzheimer's disease).

We have three major goals:

1) Through fundamental (experimental) research, we aim to understand the associations between sensory, language, cognitive, general health, and neural status in healthy individuals across the lifespan.

2) Working with clinical populations, such as older adults with Alzheimer's disease, we aim to understand the impact of sensory decline and language pathology on cognition and the brain.

3) We conduct applied research using digital tools (e.g., collecting real-life data via mobile phones or hearing aids, developing training apps in virtual reality) and neurophysiology-based tools (e.g., neurofeedback) to improve diagnostics and interventions for hearing and language pathology.

In our experiments, we use various neuropsychological and psychoacoustic tests as well as a range of neuroimaging techniques in humans, such as magnetic resonance imaging (MRI) and electroencephalography (EEG). Across all our research streams, we take an individual and context-dependent perspective on participants and patients.

Our research is highly interdisciplinary, though mainly anchored in cognitive neuroscience, psychology, and linguistics.

To reach these goals, we collaborate closely with several industrial partners and with the medical field to develop and inform technological and medical innovations.



The research group "Computational Neuroscience of Speech & Hearing" is funded by the Swiss National Science Foundation (SNSF).