  • Pipeline diagram showing the process of translating an English audio description such as "He folds the newspaper and puts it away" into French, German, or Italian using a moment retrieval, a frame sampling, and a translation component (a schematic code sketch follows this gallery).

    SwissADT pipeline

  • Interface of a sign language translation evaluation task showing an Italian spoken language sentence on one side and an Italian Sign Language sequence in skeletal poses on the other side. The slider for rating the quality of the output sequence ranges from 0 to 6. The Italian source sentence displayed is "Era Yuri Gagarin, cosmonauta russo rimasto nella leggenda, quando la Russia coltivava ambizioni spaziali che sembra non avere più." ("It was Yuri Gagarin, the Russian cosmonaut who remains a legend, from a time when Russia cultivated space ambitions it no longer seems to have.")

    Evaluation of text-to-pose translation

  • Video player interface showing a signed version of an informed consent document.

    Informed consent in sign language

  • Interface of a sign language transcription and annotation tool displaying a video of a daily news segment interpreted in sign language along with the corresponding subtitles. The tool can be used to manually adjust the timing of the subtitles to that of the corresponding sign language segments.

    Manual alignment of sign language videos and spoken language subtitles

  • Interface of a sign language translation evaluation task showing a sign language video on one side and a spoken language sentence on the other side. The slider for rating the quality of the output sequence ranges from 0 to 6. The source segment displayed is "Aber ausserordentlichen Lagen kommt es darum, eine grosse Situation zu lösen." (roughly: "But in extraordinary situations, it comes down to resolving a major situation.")

    Sign-to-text translation shared task

  • Diagram showing the name of a project, IICT, in the center and five circles around it: "Subproject 1: Text simplification", "Subproject 2: Sign Language Translation", "Subproject 3: Sign Language Assessment", "Subproject 4: Audio Description", "Subproject 5: Spoken Subtitles".

    IICT subprojects

  • Interface of a text comprehension task: a text on Covid tests in simplified German at the top and a question on the text with four response options underneath.

    Comprehensibility of automatic text simplification output

  • Sign language assessment interface with a sign language learner on one side and the same learner with a reference signer next to them on the other side, with skeletal pose overlays. The interface also shows the total score (number of points) achieved for signing the respective item, here the sign VIOLETT (PURPLE). Also shown is feedback on the correctness of production of the non-dominant hand and the dominant hand.

    SMILE I demonstrator

  • Overview of data collection tasks applied in the SMILE II project: Picture retelling, Video retelling, Short clip retelling, Map description, Sentence transformation, Interview, and Sentence Repetition Test. In the center, a vertical bar shows the difference between classroom activities and formal tests.

    SMILE II data collection tasks

  • Picture of the SMILE II sign language studio with two chairs, cameras, a computer, a checkerboard for calibration and a blackening curtain.

    SMILE II studio @ UZH

  • Still image of a video of a signer with subsequent processing steps displayed next to it: segmenting the signer out, replacing the original background with a monochrome one, and extracting skeletal poses.

    SMILE II data processing

  • Sarah Ebling on the stage of TEDxZurich. Behind her is a large screen showing herself, a slide, captions, and a sign language interpreter.

    TEDxZurich talk by Sarah Ebling

  • A drawing of the essence of Sarah Ebling's TEDxZurich talk "Developing Assistive Technologies through Co-creation". The keywords written include: "be transparent about what is possible", "co-create solutions with the users", "genuinely interested in changing their experiences".

    The essence of Sarah Ebling's TEDxZurich talk by Stefano Oberti

  • Output of a moment retrieval model: on the left the query, "He wipes his mouth with his left hand", and on the right the automatically retrieved video moment that shows a man with glasses wearing a sweater wiping his mouth with his left hand.

    Video highlighting for automatic audio description translation

  • SignCLIP architecture: spoken language segments are processed with a text encoder, sign language videos with a video encoder, and both representations are projected into a shared embedding space (a schematic code sketch follows this gallery).

    SignCLIP

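The SwissADT pipeline and the video-highlighting item above rely on the same chain of components: a moment retrieval model localizes the video span matching an audio description, a frame sampler extracts representative frames from that span, and a multimodal translation model produces the French, German, or Italian output. The minimal Python sketch below illustrates that data flow; every name in it is a hypothetical placeholder standing in for a trained model, not the actual SwissADT implementation.

    from dataclasses import dataclass

    @dataclass
    class Moment:
        start: float  # seconds
        end: float    # seconds

    def retrieve_moment(query: str, video_duration: float) -> Moment:
        # Placeholder for a trained moment retrieval model; here we
        # trivially return the whole video as the matching moment.
        return Moment(0.0, video_duration)

    def sample_frames(moment: Moment, n: int = 8) -> list[float]:
        # Pick n timestamps evenly across the retrieved moment; a video
        # decoder would then extract the frames at these timestamps.
        step = (moment.end - moment.start) / max(n - 1, 1)
        return [moment.start + i * step for i in range(n)]

    def translate(query: str, frame_times: list[float], target_lang: str) -> str:
        # Placeholder for a multimodal MT model conditioned on the source
        # text and the sampled frames; target_lang is "fr", "de", or "it".
        return f"[{target_lang} translation of: {query}]"

    def translate_audio_description(query: str, video_duration: float,
                                    target_lang: str) -> str:
        moment = retrieve_moment(query, video_duration)
        frames = sample_frames(moment)
        return translate(query, frames, target_lang)

    print(translate_audio_description(
        "He folds the newspaper and puts it away", 120.0, "de"))
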
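The SignCLIP item likewise describes a CLIP-style dual encoder. The PyTorch sketch below shows the general recipe: two modality-specific encoders (simplified here to linear projections over assumed feature dimensions) map text and sign language video into a shared embedding space, trained with a symmetric contrastive loss so that matching pairs end up close together. It is a schematic illustration, not the published SignCLIP code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DualEncoder(nn.Module):
        # Projects pre-computed text and video features into a shared
        # embedding space; the feature dimensions are assumptions.
        def __init__(self, text_dim=768, video_dim=512, shared_dim=256):
            super().__init__()
            self.text_proj = nn.Linear(text_dim, shared_dim)
            self.video_proj = nn.Linear(video_dim, shared_dim)
            # Learned temperature, initialized to log(1/0.07) as in CLIP.
            self.logit_scale = nn.Parameter(torch.tensor(2.6592))

        def forward(self, text_feats, video_feats):
            t = F.normalize(self.text_proj(text_feats), dim=-1)
            v = F.normalize(self.video_proj(video_feats), dim=-1)
            # Scaled cosine similarity between every text and every video.
            return self.logit_scale.exp() * t @ v.T

    def contrastive_loss(logits):
        # Matching text/video pairs lie on the diagonal of the similarity
        # matrix; train both retrieval directions symmetrically.
        targets = torch.arange(logits.size(0), device=logits.device)
        return (F.cross_entropy(logits, targets)
                + F.cross_entropy(logits.T, targets)) / 2

    model = DualEncoder()
    logits = model(torch.randn(4, 768), torch.randn(4, 512))  # toy batch of 4 pairs
    loss = contrastive_loss(logits)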

Language, Technology and Accessibility

Welcome to the webpage of our chair!

From left to right: first row: Alessia Battisti, Laura Setz, Yang Tian; second row: Lisa Arter, Patricia Scheurer, Sarah Ebling, Anja Ryser, Chuqiao Yan; third row: Anne Göhring, Jason Armitage, Mathias Müller, Longqian Ming; fourth row: Sant Muniesa, Zifan Jiang, Amit Moryossef, Lukas Fischer

Our chair deals with language-based assistive technologies and digital accessibility. Our focus is on basic and application-oriented research.
    
We subscribe to a broad definition of language and communication, in line with the UN Convention on the Rights of Persons with Disabilities (UN CRPD); as such, we deal with spoken language (text and speech), sign language, simplified language, Braille, pictographs, etc.

We combine language and communication with technology and accessibility in two ways:

  1. We develop language-based technologies, most often relying on deep learning (artificial intelligence) approaches.
  2. We investigate the reception of these technologies among users, e.g., through comprehension studies.

Our technologies focus on the contexts of hearing impairments, visual impairments, cognitive impairments, and language disorders.

The group is headed by Prof. Dr. Sarah Ebling.

Hallmarks of our group

Visualization of the process of translating text (displayed with a text icon that reads "hello") on one side to sign language skeletal poses on the other side

Multimodality

We deal with assistive technologies and aspects of digital accessibility across modalities, e.g., with different production modalities (manual and non-manual components) in sign languages (Allwood, 2009), with text and video as part of automatic translation of audio descriptions, with text and images as part of automatic text simplification, or with text and pictographs.

Illustration of two hands shaking

Multidisciplinarity

We combine methods and techniques from the disciplines of language technology, linguistics, computer science (including computer vision), and special education. We collaborate with researchers from these and from other disciplines, such as ethics, psychology, rehabilitation sciences, or media and communication sciences.

News