Scene context and attention independently facilitate MEG decoding of object category

This paper is a preprint and has not been certified by peer review.


Authors

Leticevscaia, O.; Brandman, T.; Peelen, M. V.

Abstract

Many of the objects we encounter in our everyday environments would be hard to recognize without any expectations about these objects. For example, a distant silhouette may be perceived as a car because we expect objects of that size, positioned on a road, to be cars. Accordingly, neuroimaging studies have shown that when objects are poorly visible, expectations derived from scene context facilitate the representations of these objects in visual cortex from around 300 ms after scene onset. The current magnetoencephalography (MEG) decoding study tested whether this facilitation occurs independently of attention and task relevance. Participants viewed degraded objects alone or within their original scene context while they either attended the scenes (attended condition) or the fixation cross (unattended condition), directing attention away from the scenes. Results showed that at 300 ms after stimulus onset, multivariate classifiers trained to distinguish clearly visible animate vs inanimate objects generalized to distinguish degraded objects in scenes better than degraded objects alone, despite the added clutter of the scene background. Attention also modulated object representations at this latency, with better category decoding in the attended than the unattended condition. The modulatory effects of context and attention were independent of each other. Finally, data from the current study and a previous study were combined (N=51) to provide a more detailed temporal characterization of contextual facilitation. These results extend previous work by showing that facilitatory scene-object interactions are independent of the specific task performed on the visual input.
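
The analysis described above is a cross-decoding (generalization) scheme: a classifier is trained at each time point on trials with clearly visible objects and then tested on trials with degraded objects, separately for each condition. The sketch below illustrates this idea in general terms; it is a minimal example assuming epoched MEG data stored as NumPy arrays and a linear classifier, and the function and variable names (cross_decode_over_time, X_intact, X_scene, X_alone, etc.) are hypothetical, not the authors' actual pipeline.

```python
# Minimal sketch of time-resolved cross-decoding: train on intact objects,
# test on degraded objects, one classifier per time point.
# Assumes epoched MEG data as NumPy arrays; names and shapes are illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC


def cross_decode_over_time(X_train, y_train, X_test, y_test):
    """Train an animate-vs-inanimate classifier at each time point on one
    condition and test its generalization to another condition.

    X_train, X_test : arrays of shape (n_trials, n_sensors, n_times)
    y_train, y_test : binary labels (e.g., 0 = inanimate, 1 = animate)

    Returns an array of test accuracies, one per time point.
    """
    n_times = X_train.shape[2]
    accuracy = np.zeros(n_times)
    for t in range(n_times):
        clf = make_pipeline(StandardScaler(), LinearSVC())
        clf.fit(X_train[:, :, t], y_train)                 # fit on intact objects
        accuracy[t] = clf.score(X_test[:, :, t], y_test)   # test on degraded objects
    return accuracy


# Hypothetical usage: contextual facilitation would appear as higher
# cross-decoding accuracy for degraded objects in scenes than alone.
# acc_scene = cross_decode_over_time(X_intact, y_intact, X_scene, y_scene)
# acc_alone = cross_decode_over_time(X_intact, y_intact, X_alone, y_alone)
```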
