Text Technology/Digital Linguistics Colloquium FS 2026
Time & Location: every two weeks on Tuesdays, 10:15 am to 12:00 pm, in room BIN-2-A.10.
Online participation via the MS Teams Team CL Colloquium is also possible.
Responsible: Jason Armitage
Colloquium Schedule
| Date | Speaker 1 | Speaker 2 |
|---|---|---|
| 17.02.2026 | Ahmet Yavuz Uluslu | Rajiv Bains |
| 03.03.2026 | Fatma-Zohra Rezkellah (Rania) | Hanxu Hu |
| 17.03.2026 | Sophia Conrad | Kirill Semenov |
| 31.03.2026 | Chuqiao Yan | Sina Ahmadi |
| 14.04.2026 | Janis Goldzycher | Yang Tian |
| 28.04.2026 | Deborah Jakobi | Patrick Giedemann |
| 12.05.2026 | Lena Bolliger | Thyra Krosness |
| 26.05.2026 | Anja Ryser | Pius von Däniken |
17 Feb 2026
Ahmet Yavuz Uluslu: Forensic Agents: Authorship Analysis in the AI Era
While large language models demonstrate remarkable potential for automating forensic text analysis, they remain prone to hallucinations, reasoning shortcuts, and bias. We examine how LLM agents can address these failures, and discuss specific AI alignment approaches and agentic workflows designed to transform opaque predictions into a transparent, reproducible, and robust procedure.
Rajiv Bains: Adaptive Neuromorphic Sonification of Data for BLV Users
Data sonification, the use of non-speech audio to represent quantitative information, offers a promising modality for making data accessible to blind and low-vision (BLV) users. However, existing sonification tools typically conflict with screen readers, as sonification audio competes with the text-to-speech (TTS) output that BLV users rely on to navigate digital interfaces. This project addressed the challenge of multimodal coordination in assistive technology by developing and evaluating a macOS/VoiceOver-compatible sonification system that synchronises non-speech audio with TTS to prevent auditory masking and announcement delays. The system employs a dual-level spiking neural network (SNN) architecture informed by neuromorphic computing principles: a 16-neuron real-time layer embedded in the audio callback loop modulates synthesis parameters, while a 64-neuron batch ‘dreaming’ optimiser proposes parameter updates based on interaction logs. This architecture raises questions relevant to computational approaches in the humanities: how can adaptive systems learn from sparse, qualitative user feedback, and what are the trade-offs between algorithmic optimisation and human-centred design?
Following a design-based research methodology, the work proceeded through two evaluation phases: a pilot study with an expert BLV participant revealed critical failure modes (speech–audio overlap, queued announcements, keyboard focus conflicts), prompting iterative redesign before a main study with a non-expert BLV participant. Results demonstrate that enforcing temporal exclusivity between TTS and sonification enabled sustained data exploration, whereas pilot system failures prevented meaningful interaction. A key finding concerns the tension between optimisation for behavioural efficiency and listening comfort: SNN-optimised parameters increased perceptual contrast but were reported as more fatiguing. This talk will present the coordination mechanisms developed for integrating sonification with screen readers, demonstrate the neuromorphic-inspired adaptation architecture, and discuss design implications for accessible data representation.
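To make the dual-level architecture described above concrete, the following is a minimal illustrative sketch of a small leaky integrate-and-fire (LIF) layer whose instantaneous spike rate modulates a single synthesis parameter, here a gain value. All names, neuron counts, and constants (`LIFLayer`, `tau`, `modulated_gain`, the input-to-gain mapping) are hypothetical choices for illustration, not the talk's actual implementation; in particular, the real system runs inside the audio callback and adds a separate 64-neuron batch optimiser, neither of which is sketched here.

```python
import numpy as np

class LIFLayer:
    """Toy leaky integrate-and-fire layer (hypothetical stand-in for the
    16-neuron real-time layer described in the abstract)."""

    def __init__(self, n_neurons=16, tau=0.02, threshold=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.5, size=n_neurons)  # input weights
        self.v = np.zeros(n_neurons)                   # membrane potentials
        self.tau = tau                                 # membrane time constant (s)
        self.threshold = threshold

    def step(self, x, dt=0.005):
        """Advance one tick with scalar input x; return boolean spike vector."""
        # Leaky integration toward the weighted input.
        self.v += dt / self.tau * (-self.v + self.w * x)
        spikes = self.v >= self.threshold
        self.v[spikes] = 0.0                           # reset neurons that fired
        return spikes

def modulated_gain(layer, x, base=0.5, depth=0.5, dt=0.005):
    """Map the layer's spike fraction onto an audio gain in [base, base+depth]."""
    rate = layer.step(x, dt).mean()
    return base + depth * rate

# Sweep a data value through the layer and collect the resulting gains.
layer = LIFLayer()
gains = [modulated_gain(layer, x) for x in np.linspace(0.0, 2.0, 200)]
```

In a real-time audio setting, `step` would be called once per callback block, so the spike dynamics (rather than a direct lookup table) determine how quickly the sonification parameter tracks changes in the data.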