Computational Neuroscience of Speech & Hearing


The research group "Computational Neuroscience of Speech & Hearing" investigates the neural and cognitive underpinnings of speech and language, with a focus on clinical populations who have difficulties processing and understanding (spoken) language (e.g., due to hearing loss, aging, or dementia). A major goal is to develop neurophysiology-based diagnostic and rehabilitation strategies for individuals with auditory language pathology (e.g., neurofeedback protocols, virtual reality training to create more naturalistic settings, lip-reading training for older adults with hearing impairment) and to detect individuals at risk of dementia early on using machine learning approaches.

Our research is highly interdisciplinary, though mainly anchored in cognitive neuroscience, linguistics, and (clinical) neuropsychology. In our experiments we use various neuropsychological and psychoacoustic tests as well as a range of neuroimaging techniques in humans, such as magnetic resonance imaging (MRI) and electroencephalography (EEG).

The research group "Computational Neuroscience of Speech & Hearing" is funded by the Swiss National Science Foundation (SNSF).