Explainability of cognitively-enhanced NLP models

Supervisor: Anna Bondar

Summary

Cognitively enhanced language models are NLP models trained not only on text but also on signals about how humans process that text, such as eye movements recorded while reading. These gaze patterns reflect which parts of a sentence readers find surprising, difficult to process, or informative. Gaze data can be used either at training time only, or at both training and inference, to improve a model's performance. In low-resource settings, where little text-only training data is available, such models have been shown to outperform standard text-only models because they can exploit human reading behaviour as an additional supervision signal. In this project, you will investigate the inner workings of cognitively enhanced language models and analyse why they achieve better performance than comparable models that are not enhanced with gaze data.
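To make the idea of gaze as an auxiliary supervision signal concrete, here is a minimal, hypothetical sketch (not a method prescribed by this project): the model's attention distribution over tokens is nudged towards the distribution of human fixation durations via a KL-divergence term added to the task loss. All names, shapes, and numbers below are illustrative.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def gaze_auxiliary_loss(attention_logits, fixation_durations):
    """KL divergence between normalized fixation durations (target)
    and the model's attention distribution over the tokens.

    This is one common way gaze data is used at training time only:
    the term is added to the task loss, so no gaze is needed at
    inference.
    """
    attn = softmax(attention_logits)
    gaze = fixation_durations / fixation_durations.sum()
    eps = 1e-12  # avoid log(0)
    return float(np.sum(gaze * np.log((gaze + eps) / (attn + eps))))

# Toy sentence of 4 tokens: the model's raw attention scores and the
# total fixation duration (in ms) each token received during reading.
logits = np.array([0.2, 1.5, 0.1, 0.4])
durations = np.array([120.0, 450.0, 80.0, 150.0])

loss = gaze_auxiliary_loss(logits, durations)
# During training, this term would be weighted and added to the
# task loss, pulling attention towards human reading behaviour.
```

This illustrates only the training-time-only setting mentioned above; the alternative, using gaze at both training and inference, would instead feed gaze features (e.g. fixation counts or durations) directly into the model's input representation.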

Requirements

Deep Learning, Python