Learning to read poses a strong challenge to the visual system. Years of expertise lead to a remarkable capacity to separate highly similar letters and encode their relative positions, thus distinguishing words such as FORM and FROM invariantly across a large range of sizes and absolute positions. How neural circuits achieve invariant word recognition remains unknown. Here we address this issue through computational modeling and brain imaging. We first trained deep neural network models to recognize written words, then analyzed the reading-specialized units that emerged in their deep layers. With literacy, units became sensitive to specific letter identities and to their distance from the blank space at the left or right edge of a word, thus acting as "space bigrams" that encode ordinal position using an approximate number code. Using 7T functional MRI and magnetoencephalography in adults, we localized the predicted ordinal code anatomically (in the visual word form area) and temporally (~220 ms). The proposed neuronal mechanism for invariant word recognition explains reading errors and makes precise predictions about how position-invariant neural codes arise in brains and artificial systems.