1. Visuomotor feedback tuning in the absence of visual error information

Authors: Sae Franklin, David W. Franklin

Abstract: Large increases in visuomotor feedback gains occur during initial adaptation to novel dynamics, which we propose are due to increased internal model uncertainty. That is, large errors indicate increased uncertainty in our prediction of the environment, increasing feedback gains and co-contraction as a coping mechanism. Our previous work showed distinct patterns of visuomotor feedback gains during abrupt or gradual adaptation to a force field, suggesting two complementary processes: reactive feedback gains increasing with internal model uncertainty and the gradual learning of predictive feedback gains tuned to the environment. Here we further investigate what drives these changes in visuomotor feedback gains during learning, separating the effects of internal model uncertainty from those of the visual error signal by removing visual error information. Removing visual error information suppresses the visuomotor feedback gains in all conditions, but the pattern of modulation throughout adaptation is unaffected. Moreover, we find increased muscle co-contraction in both abrupt and gradual adaptation protocols, demonstrating that visuomotor feedback responses are independent of the level of co-contraction. Our results suggest that visual feedback benefits motor adaptation tasks through higher visuomotor feedback gains, but that when it is not available, participants adapt at a similar rate through increased co-contraction. We have demonstrated a direct connection between learning and predictive visuomotor feedback gains, independent of visual error signals. This further supports our hypothesis that internal model uncertainty drives initial increases in feedback gains.

2. Suppression of chaos in a partially driven recurrent neural network

Authors: Shotaro Takasu, Toshio Aoyagi

Abstract: The dynamics of recurrent neural networks (RNNs), and particularly their response to inputs, play a critical role in information processing. In many applications of RNNs, generally only a specific subset of the neurons receives inputs. However, it remains to be theoretically clarified how restricting the input to a specific subset of neurons affects the network dynamics. Considering recurrent neural networks with such restricted input, we investigate how the proportion, $p$, of the neurons receiving inputs (the "input neurons") and a quantity, $\xi$, representing the strength of the input signals affect the dynamics by analytically deriving the conditional maximum Lyapunov exponent. Our results show that for sufficiently large $p$, the maximum Lyapunov exponent decreases monotonically as a function of $\xi$, indicating the suppression of chaos, but if $p$ is smaller than a critical threshold, $p_c$, even significantly amplified inputs cannot suppress spontaneous chaotic dynamics. Furthermore, although the value of $p_c$ seemingly depends on several model parameters, such as the sparseness and strength of recurrent connections, it is proved to be intrinsically determined solely by the strength of chaos in the spontaneous activity of the RNN. That is to say, despite changes in these model parameters, it is possible to represent the value of $p_c$ as a common invariant function by appropriately scaling these parameters to yield the same strength of spontaneous chaos. Our study suggests that if $p$ is above $p_c$, we can bring the neural network to the edge of chaos, thereby maximizing its information processing capacity, by adjusting $\xi$.

3. The feasibility of artificial consciousness through the lens of neuroscience

Authors: Jaan Aru, Matthew Larkum, James M. Shine

Abstract: Interactions with large language models have led to the suggestion that these models may be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the architecture of large language models is missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Secondly, the inputs to large language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Finally, while the previous two arguments can be overcome in future AI systems, the third one might be harder to bridge in the near future. Namely, we argue that consciousness might depend on having 'skin in the game', in that the existence of the system depends on its actions, which is not true for present-day artificial intelligence.

4. Second Sight: Using brain-optimized encoding models to align image distributions with human brain activity

Authors: Reese Kneeland, Jordyn Ojeda, Ghislain St-Yves, Thomas Naselaris

Abstract: Two recent developments have accelerated progress in image reconstruction from human brain activity: large datasets that offer samples of brain activity in response to many thousands of natural scenes, and the open-sourcing of powerful stochastic image-generators that accept both low- and high-level guidance. Most work in this space has focused on obtaining point estimates of the target image, with the ultimate goal of approximating literal pixel-wise reconstructions of target images from the brain activity patterns they evoke. This emphasis belies the fact that there is always a family of images that are equally compatible with any evoked brain activity pattern, and the fact that many image-generators are inherently stochastic and do not by themselves offer a method for selecting the single best reconstruction from among the samples they generate. We introduce a novel reconstruction procedure (Second Sight) that iteratively refines an image distribution to explicitly maximize the alignment between the predictions of a voxel-wise encoding model and the brain activity patterns evoked by any target image. We show that our process converges on a distribution of high-quality reconstructions by refining both semantic content and low-level image details across iterations. Images sampled from these converged image distributions are competitive with state-of-the-art reconstruction algorithms. Interestingly, the time-to-convergence varies systematically across visual cortex, with earlier visual areas generally taking longer and converging on narrower image distributions, relative to higher-level brain areas. Second Sight thus offers a succinct and novel method for exploring the diversity of representations across visual brain areas.
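The refinement loop described in this abstract can be summarized schematically: sample candidate images, score each by the alignment between the encoding model's predicted activity and the target brain activity, then narrow the sampling distribution around the best candidates. The sketch below follows that spirit with a generic stochastic sampler and a correlation score; the function names `encode` and `sample_around`, the elite-selection scheme, and the shrink factor are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def second_sight_sketch(target_act, encode, sample_around,
                        n_iter=10, n_cand=64, top_k=8, seed=0):
    """Iteratively refine a distribution of candidate images so that the
    encoding model's predicted activity aligns with target_act.
    encode(img) -> predicted voxel activity vector.
    sample_around(center, width, rng) -> candidate image (center=None
    means sample broadly).  Returns the best image and its score.
    (Schematic sketch of the iterative-refinement idea, with assumed
    interfaces; not the published Second Sight implementation.)"""
    rng = np.random.default_rng(seed)
    width = 1.0
    pool = [sample_around(None, width, rng) for _ in range(n_cand)]
    best_img, best_score = None, -np.inf
    for _ in range(n_iter):
        # Score alignment as the correlation between predicted and target activity.
        scores = [np.corrcoef(encode(img), target_act)[0, 1] for img in pool]
        order = np.argsort(scores)
        if scores[order[-1]] > best_score:
            best_img, best_score = pool[order[-1]], scores[order[-1]]
        elites = [pool[i] for i in order[-top_k:]]          # keep the top-k candidates
        width *= 0.7                                        # narrow the distribution
        pool = [sample_around(elites[rng.integers(top_k)], width, rng)
                for _ in range(n_cand)]
    return best_img, best_score
```

In this toy form the final sampling width also gives a crude analogue of the "narrowness" of the converged image distribution, which the abstract reports varies systematically across visual areas.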