A new preprint by Patrick Haller and colleagues is now available on PsyArXiv. It is titled: "Measurement reliability of individual differences in sentence processing: A cross-methodological reading corpus and Bayesian analysis".
Psycholinguistic theories generally assume similar cognitive mechanisms across different speakers. Recently, however, researchers have pointed out the need to account for individual differences in theories explaining human cognition. Methodologically, the first step toward a principled investigation of individual differences in sentence processing is to establish their test-retest measurement reliability, that is, the correlation of subject-level effects across multiple experimental sessions. We can't take this test-retest and cross-methodological measurement reliability as a given because of the so-called reliability paradox (Hedge et al., 2018). In this work, we present the first naturalistic reading corpus (eye-tracking and self-paced reading) with four experimental sessions from each participant, accompanied by a comprehensive assessment of participants' cognitive capacities and reading skills. We deploy a Bayesian model to assess the test-retest measurement reliability of individual differences in a range of predictors of sentence processing difficulty that are well established at the group level. While we find that the word length effect is stable within individuals across time and methods, our results suggest that lexical frequency, dependency length, and the number of to-be-integrated dependencies are less stable within an individual.
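To make the notion of test-retest reliability concrete, here is a minimal illustrative sketch (not the paper's Bayesian model): subject-level effects are estimated in two sessions, and reliability is summarized as their correlation across subjects. All quantities below (the effect size, noise levels, and number of subjects) are hypothetical.

```python
# Illustrative sketch of test-retest reliability as the correlation of
# subject-level effects across two experimental sessions.
# All parameters are hypothetical, chosen only for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n_subjects = 50

# Hypothetical true per-subject effect (e.g., a word-length slope in ms).
true_effect = rng.normal(loc=20.0, scale=5.0, size=n_subjects)

# Each session yields a noisy estimate of the same underlying effect.
session1 = true_effect + rng.normal(scale=4.0, size=n_subjects)
session2 = true_effect + rng.normal(scale=4.0, size=n_subjects)

# Test-retest reliability: Pearson correlation of the two estimates.
reliability = np.corrcoef(session1, session2)[0, 1]
print(f"test-retest correlation: {reliability:.2f}")
```

The reliability paradox arises because within-subject noise relative to between-subject variability can be large for robust group-level effects, which drives this correlation down even when the group mean is estimated precisely.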