How do we learn print-to-speech correspondences?

Converting orthography to phonology is an important part of the reading process. Knowledge of the correspondences between letters or letter clusters and their pronunciations allows children (and adults) to read aloud words that they have never seen in written form before. To understand how children learn to read, and how this learning process may be impaired in developmental dyslexia, we need to understand how exactly we translate print to speech.

Ongoing projects

Complexity and unpredictability in artificial orthography learning

How do the characteristics of an orthography affect a child's efficiency in learning to read? We have argued that, on a linguistic level, orthographic depth can be dissociated into two underlying constructs: the complexity of the grapheme-phoneme correspondence (GPC) rules, and the predictability of word pronunciations from those rules. Whether these two constructs have distinct effects on the learnability of an orthographic system is an empirical question.
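To make the distinction concrete, here is a minimal sketch of how the two constructs could be operationalised on a toy orthography. The rule set, the lexicon, and the two indices are invented for illustration only and are not the materials or measures used in the study.

```python
# Illustrative sketch (not the study's materials): separating GPC complexity
# from predictability in a toy orthography.

# A rule maps a grapheme (possibly several letters) to a phoneme.
# Multi-letter rules make the rule set more *complex*.
rules = {
    "a": "a", "b": "b", "k": "k", "t": "t", "s": "s",
    "ck": "k",          # multi-letter rule -> adds complexity
    "sh": "S",          # multi-letter rule -> adds complexity
}

# Toy lexicon: spelling -> actual pronunciation. Irregular words lower
# *predictability* without changing the rule set itself.
lexicon = {
    "bat": "bat",
    "back": "bak",
    "shack": "Sak",
    "bas": "baz",       # irregular: the rules predict "bas"
}

def apply_rules(spelling):
    """Greedy left-to-right decoding: prefer the longest matching grapheme."""
    out, i = [], 0
    graphemes = sorted(rules, key=len, reverse=True)
    while i < len(spelling):
        for g in graphemes:
            if spelling.startswith(g, i):
                out.append(rules[g])
                i += len(g)
                break
        else:
            return None  # letter not covered by any rule
    return "".join(out)

# One possible complexity index: share of rules longer than one letter.
complexity = sum(len(g) > 1 for g in rules) / len(rules)

# One possible predictability index: share of words the rules read correctly.
predictability = sum(apply_rules(s) == p for s, p in lexicon.items()) / len(lexicon)

print(f"complexity = {complexity:.2f}, predictability = {predictability:.2f}")
```

In this toy case, adding multi-letter rules raises the complexity index without affecting predictability, whereas adding irregular words lowers predictability while leaving the rule set untouched, which is exactly the dissociation at issue.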

 

We addressed this question in a pre-registered study. A preprint of the Registered Report (in-principle accepted at Cortex) can be found here. The paper was published in 2022; the final report is here.

 

I presented results from a pilot study at the SSSR conference in Porto in 2016; the slides are here.

The acquisition of different kinds of print-to-sound rules

In a previous paper, we described a way to quantify the degree to which participants rely on different types of sublexical correspondences in German and in English, using a nonword reading-aloud paradigm and an optimisation procedure based on statistical predictions and the participants' responses (Schmalz et al., 2014). The question now is whether the same procedure can be applied to children.
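As a rough illustration of the underlying logic (not the actual optimisation procedure of Schmalz et al., 2014), one could estimate a single weighting parameter from nonword responses by maximum likelihood; the data and parameterisation below are hypothetical.

```python
# Minimal sketch of the general logic (not the procedure of Schmalz et al.,
# 2014): estimate how strongly a reader relies on small-unit GPC rules versus
# larger-unit correspondences from nonword reading-aloud responses.
import numpy as np

# Hypothetical data: for each nonword, True means the participant's response
# matched the GPC-based prediction, False means it matched the larger-unit
# (e.g., body-rime) prediction.
responses = [True, True, False, True, False, True, True, False, True, True]

def neg_log_likelihood(w, data):
    """w = probability of producing the GPC-consistent reading."""
    eps = 1e-9
    return -sum(np.log(w + eps) if gpc else np.log(1 - w + eps) for gpc in data)

# Simple grid search for the maximum-likelihood weight.
grid = np.linspace(0.01, 0.99, 99)
best_w = min(grid, key=lambda w: neg_log_likelihood(w, responses))
print(f"estimated reliance on GPC rules: {best_w:.2f}")
```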

 

We have submitted this work (with Serje Robidoux, Anne Castles, and Eva Marinus); you can find the preprint here.

 

I presented some of the data as a poster at SSSR 2014 in Santa Fe and at ESCOP 2013 (here), and Eva Marinus presented further data at SSSR 2017 in Halifax (slides here).

What do artificial orthography learning tasks actually measure?

In an artificial orthography learning experiment, participants learn a made-up language consisting of pseudowords written in unfamiliar symbols. Previous studies have used this task to simulate the learning processes that underlie reading acquisition in children. To validate the task as a model of reading acquisition, we need to establish that learning performance captures a stable individual characteristic. It is also desirable to check whether performance on the task correlates with reading ability, or whether it merely reflects the ability to learn arbitrary symbol-sound associations (as in a paired associate learning task). In this study, 55 adults performed two artificial orthography learning tasks, a paired associate learning task, and a reading test. We found high correlations between learning of the two artificial orthographies, but low or no correlations between artificial orthography learning and either paired associate learning or reading ability.
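For readers who want to see the shape of the analysis, here is a sketch of the correlational comparison on simulated data; the variable names and the data-generating assumptions are hypothetical and do not reproduce the study's results.

```python
# Illustrative sketch of the correlational analysis (simulated data and
# hypothetical variable names, not the study's dataset).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 55
shared = rng.normal(size=n)                      # common learning component
df = pd.DataFrame({
    "artificial_orth_1": shared + rng.normal(scale=0.5, size=n),
    "artificial_orth_2": shared + rng.normal(scale=0.5, size=n),
    "paired_associate":  rng.normal(size=n),     # unrelated by construction
    "reading_ability":   rng.normal(size=n),     # unrelated by construction
})

# Pearson correlations between the two artificial orthography tasks, the
# paired associate learning task, and the reading measure.
print(df.corr().round(2))
```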

 

You can find a draft of this paper here.