arXiv daily: Neurons and Cognition (q-bio.NC)

1.A standardised open science framework for sharing and re-analysing of neural data acquired to continuous sensory stimuli

Authors:Giovanni M. Di Liberto, Aaron Nidiffer, Michael J. Crosse, Nathaniel Zuk, Stephanie Haro, Giorgia Cantisani, Martin M. Winchester, Aoife Igoe, Ross McCrann, Satwik Chandra, Edmund C. Lalor, Giacomo Baruzzo

Abstract: Neurophysiology research has demonstrated that it is possible and valuable to investigate sensory processing in scenarios involving continuous sensory streams, such as speech and music listening. Over the past decade, novel frameworks for analysing the neural processing of continuous sensory streams, combined with growing participation in data sharing, have led to a surge of publicly available datasets involving continuous sensory experiments. However, open science efforts in this domain of research remain scattered, lacking a cohesive set of guidelines. As a result, numerous data formats and analysis toolkits are available, with limited or no compatibility between studies. This paper presents an end-to-end open science framework for the storage, analysis, sharing, and re-analysis of neural data recorded during continuous sensory experiments. The framework has been designed to interface easily with existing toolboxes (e.g., EelBrain, NapLib, MNE, mTRF-Toolbox). We present guidelines from both the user's view (how to load and rapidly re-analyse existing data) and the experimenter's view (how to store, analyse, and share data). Additionally, we introduce a web-based data browser that enables the effortless replication of published results and data re-analysis. In doing so, we aim to facilitate data sharing and promote transparent research practices, while making the process as straightforward and accessible as possible for all users.

2.Semi-orthogonal subspaces for value mediate a tradeoff between binding and generalization

Authors:W. Jeffrey Johnston, Justin M. Fine, Seng Bum Michael Yoo, R. Becket Ebitz, Benjamin Y. Hayden

Abstract: When choosing between options, we must associate their values with the action needed to select them. We hypothesize that the brain solves this binding problem through neural population subspaces. To test this hypothesis, we examined neuronal responses in five reward-sensitive regions in macaques performing a risky choice task with sequential offers. Surprisingly, in all areas, the neural population encoded the values of offers presented on the left and right in distinct subspaces. We show that the encoding we observe is sufficient to bind the values of the offers to their respective positions in space while preserving abstract value information, which may be important for rapid learning and generalization to novel contexts. Moreover, after both offers have been presented, all areas encode the value of the first and second offers in orthogonal subspaces. In this case as well, the orthogonalization provides binding. Our binding-by-subspace hypothesis makes two novel predictions borne out by the data. First, behavioral errors should correlate with putative spatial (but not temporal) misbinding in the neural representation. Second, the specific representational geometry that we observe across animals also indicates that behavioral errors should increase when offers have low or high values, compared to when they have medium values, even when controlling for value difference. Together, these results support the idea that the brain makes use of semi-orthogonal subspaces to bind features together.
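The geometry described here can be probed directly: the cosines of the principal angles between two population coding subspaces quantify whether they are shared or orthogonal. A minimal sketch with synthetic coding axes (illustrative only, not the paper's recorded data):

```python
import numpy as np

def subspace_alignment(A, B):
    """Cosines of the principal angles between the column spans of A and B:
    values near 1 mean shared (aligned) subspaces, values near 0 mean
    orthogonal subspaces, which is what supports binding-by-subspace."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)

rng = np.random.default_rng(0)
n_neurons = 100
# Hypothetical coding axes: left- and right-offer value encoded along
# nearly orthogonal population directions (random vectors in high
# dimensions are close to orthogonal).
left_axis = rng.standard_normal((n_neurons, 1))
right_axis = rng.standard_normal((n_neurons, 1))
align_lr = subspace_alignment(left_axis, right_axis)[0]   # close to 0
align_ll = subspace_alignment(left_axis, left_axis)[0]    # exactly 1
```

With recorded data, the coding axes would instead come from regressing population activity onto the left- and right-offer values.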

1.Schizophrenia research under the framework of predictive coding: body, language, and others

Authors:Lingyu Li, Chunbo Li

Abstract: Although there have been many studies of schizophrenia under the framework of predictive coding, work focusing on treatment remains very preliminary. A model-oriented, operationalist, and comprehensive understanding of schizophrenia would promote a therapeutic turn in further research. We summarize predictive coding models of embodiment, the co-occurrence of over- and under-weighting of priors, subjective time processing, language production and comprehension, self-or-other inference, and social interaction. The corresponding impairments and clinical manifestations of schizophrenia are reviewed under these models. Finally, we discuss why and how to inaugurate this therapeutic turn in further research under the framework of predictive coding.

2.Modelling individual motion sickness accumulation in vehicles and driving simulators

Authors:Varun Kotian, Daan M. Pool, Riender Happee

Abstract: Users of automated vehicles will move away from being drivers to passengers, preferably engaged in other activities such as reading or using laptops and smartphones, which will strongly increase susceptibility to motion sickness. Similarly, in driving simulators, the presented visual motion with scaled or even no physical motion causes an illusion of passive motion, creating a conflict between perceived and expected motion and eliciting motion sickness. Given the very large differences in sickness susceptibility between individuals, we need to consider sickness at an individual level. This paper combines a group-averaged sensory conflict model with an individualized accumulation model to capture individual differences in motion sickness susceptibility across various vision conditions. The model framework can be used to develop personalized models for users of automated vehicles and improve the design of new motion cueing algorithms for simulators. The feasibility and accuracy of this model framework are verified using two existing datasets in which motion sickness was induced. Both datasets involve passive motion, representative of being driven by an automated vehicle. The model is able to fit an individual's motion sickness responses using only 2 parameters (gain K1 and time constant T1), as opposed to the 5 parameters in the original model. This ensures unique parameters for each individual. Better fits of an individual's motion sickness levels, on average by a factor of 1.7, are achieved compared to using only the group-averaged model. Thus, we find that models predicting group-averaged sickness incidence cannot be used to predict sickness at an individual level. On the other hand, the proposed combined model approach predicts individual motion sickness levels and thus can be used to control sickness.
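As a rough illustration of the individualized accumulation idea, a leaky integrator with a per-individual gain K1 and time constant T1 converts a sensory-conflict signal into an accumulated sickness level. The equation below is an assumed simple form for illustration, not necessarily the paper's exact model:

```python
import numpy as np

def sickness_accumulation(conflict, K1, T1, dt=1.0):
    """Leaky integration of a sensory-conflict signal c(t) into a motion
    sickness level m(t): dm/dt = (K1 * c - m) / T1, parameterised per
    individual by gain K1 and time constant T1. (Illustrative reading of
    the two-parameter fit; the paper's accumulation equations may differ.)"""
    m, levels = 0.0, []
    for c in conflict:
        m += dt * (K1 * c - m) / T1
        levels.append(m)
    return np.array(levels)

# Ten minutes of constant unit conflict at dt = 1 s: sickness rises
# monotonically toward the individual's saturation level K1.
levels = sickness_accumulation(np.ones(600), K1=5.0, T1=120.0)
```

A more susceptible individual (larger K1, smaller T1) accumulates sickness faster and saturates higher under the same conflict input.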

1.Unveiling the Complexity of Neural Populations: Evaluating the Validity and Limitations of the Wilson-Cowan Model

Authors:Maryam Saadati, Saba Sadat Khodaei, Yousef Jamali

Abstract: The population model of Wilson-Cowan is perhaps the most popular in the history of computational neuroscience. It embraces the nonlinear mean field dynamics of excitatory and inhibitory neuronal populations provided via a temporal coarse-graining technique. The traditional Wilson-Cowan equations exhibit either steady-state regimes or limit cycle competitions for an appropriate range of parameters. As these equations lower the resolution of the neural system and obscure vital information, we assess the validity of mass-type model approximations for complex neural behaviors. Using a large-scale network of Hodgkin-Huxley style neurons, we derive implicit average population dynamics based on mean field assumptions. Our comparison of the microscopic neural activity with the macroscopic temporal profiles reveals dependency on the binary state of interacting subpopulations and the random property of the structural network at the Hopf bifurcation points when different synaptic weights are considered. For substantial configurations of stimulus intensity, our model provides further estimates of the neural population's dynamics, ranging from simple periodic to quasi-periodic and aperiodic patterns, as well as phase transition regimes. While this shows its great potential for studying the collective behavior of individual neurons, particularly concentrating on the occurrence of bifurcation phenomena, we must accept a quite limited accuracy of the Wilson-Cowan approximations, at least in some parameter regimes. Additionally, we report that the complexity and temporal diversity of neural dynamics, especially in terms of limit cycle trajectory and synchronization, can be induced by either small heterogeneity in the degree of various types of local excitatory connectivity or considerable diversity in the external drive to the excitatory pool.
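For readers unfamiliar with the mass model being assessed, the classical two-population Wilson-Cowan equations can be simulated in a few lines. This is the textbook form with illustrative parameter values, not the values used in the paper:

```python
import numpy as np

def sigmoid(x, a=1.0, theta=4.0):
    """Sigmoidal population response function."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan(E0, I0, P, Q, steps=5000, dt=0.01,
                 wEE=16.0, wEI=12.0, wIE=15.0, wII=3.0):
    """Euler integration of the two-population Wilson-Cowan equations:
    dE/dt = -E + S(wEE*E - wEI*I + P), dI/dt = -I + S(wIE*E - wII*I + Q),
    where E, I are excitatory/inhibitory activity fractions and P, Q are
    external drives. (Illustrative parameters, no refractory terms.)"""
    E, I = E0, I0
    traj = np.empty((steps, 2))
    for t in range(steps):
        E += dt * (-E + sigmoid(wEE * E - wEI * I + P))
        I += dt * (-I + sigmoid(wIE * E - wII * I + Q))
        traj[t] = E, I
    return traj

traj = wilson_cowan(E0=0.1, I0=0.05, P=1.25, Q=0.0)
```

Sweeping the external drive P reveals the transitions between steady-state and oscillatory regimes that the paper compares against spiking-network simulations.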

2.Expressive dynamics models with nonlinear injective readouts enable reliable recovery of latent features from neural activity

Authors:Christopher Versteeg, Andrew R. Sedler, Jonathan D. McCart, Chethan Pandarinath

Abstract: The advent of large-scale neural recordings has enabled new methods to discover the computational mechanisms of neural circuits by understanding the rules that govern how their state evolves over time. While these \textit{neural dynamics} cannot be directly measured, they can typically be approximated by low-dimensional models in a latent space. How these models represent the mapping from latent space to neural space can affect the interpretability of the latent representation. We show that typical choices for this mapping (e.g., linear or MLP) often lack the property of injectivity, meaning that changes in latent state are not obligated to affect activity in the neural space. During training, non-injective readouts incentivize the invention of dynamics that misrepresent the underlying system and the computation it performs. Combining our injective Flow readout with prior work on interpretable latent dynamics models, we created the Ordinary Differential equations autoencoder with Injective Nonlinear readout (ODIN), which captures latent dynamical systems that are nonlinearly embedded into observed neural activity via an approximately injective nonlinear mapping. We show that ODIN can recover nonlinearly embedded systems from simulated neural activity, even when the nature of the system and embedding are unknown. Additionally, ODIN enables the unsupervised recovery of underlying dynamical features (e.g., fixed points) and embedding geometry. When applied to biological neural recordings, ODIN can reconstruct neural activity with comparable accuracy to previous state-of-the-art methods while using substantially fewer latent dimensions. Overall, ODIN's accuracy in recovering ground-truth latent features and ability to accurately reconstruct neural activity with low dimensionality make it a promising method for distilling interpretable dynamics that can help explain neural computation.
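The injectivity issue is easy to demonstrate for a linear readout: any latent direction in its null space changes the latent state without changing the observed activity, so the data cannot constrain dynamics along it. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
# A linear readout from 3-D latents to 2-D "neural" activity necessarily
# has a 1-D null space, so it cannot be injective.
W = rng.standard_normal((2, 3))
_, _, vt = np.linalg.svd(W)
null_dir = vt[-1]                      # latent direction mapped to ~zero
z1 = rng.standard_normal(3)
z2 = z1 + 5.0 * null_dir               # a distinctly different latent state
gap = np.linalg.norm(W @ z1 - W @ z2)  # ~0: readout cannot tell them apart
```

An (approximately) injective readout, such as ODIN's flow-based mapping, rules out such unobservable latent directions by construction.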

1.Information Processing by Neuron Populations in the Central Nervous System: Mathematical Structure of Data and Operations

Authors:Martin N. P. Nilsson

Abstract: In the intricate architecture of the mammalian central nervous system, neurons form populations. Axonal bundles communicate between these clusters using spike trains as their medium. However, these neuron populations' precise encoding and operations have yet to be discovered. In our analysis, the starting point is a state-of-the-art mechanistic model of a generic neuron endowed with plasticity. From this simple framework emerges a profound mathematical construct: The representation and manipulation of information can be precisely characterized by an algebra of finite convex cones. Furthermore, these neuron populations are not merely passive transmitters. They act as operators within this algebraic structure, mirroring the functionality of a low-level programming language. When these populations interconnect, they embody succinct yet potent algebraic expressions. These networks allow them to implement many operations, such as specialization, generalization, novelty detection, dimensionality reduction, inverse modeling, prediction, and associative memory. In broader terms, this work illuminates the potential of matrix embeddings in advancing our understanding in fields like cognitive science and AI. These embeddings enhance the capacity for concept processing and hierarchical description over their vector counterparts.

1.The computational role of structure in neural activity and connectivity

Authors:Srdjan Ostojic, Stefano Fusi

Abstract: One major challenge of neuroscience is finding interesting structures in a seemingly disorganized neural activity. Often these structures have computational implications that help to understand the functional role of a particular brain area. Here we outline a unified approach to characterize these structures by inspecting the representational geometry and the modularity properties of the recorded activity, and show that this approach can also reveal structures in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent works on model networks performing three classes of computations.

1.From DDMs to DNNs: Using process data and models of decision-making to improve human-AI interactions

Authors:Mrugsen Nagsen Gopnarayan, Jaan Aru, Sebastian Gluth

Abstract: Over the past decades, cognitive neuroscientists and behavioral economists have recognized the value of describing the process of decision making in detail and modeling the emergence of decisions over time. For example, the time it takes to decide can reveal more about an agent's true hidden preferences than the decision itself. Similarly, data that track the ongoing decision process, such as eye movements or neural recordings, contain critical information that can be exploited, even if no decision is made. Here, we argue that artificial intelligence (AI) research would benefit from a stronger focus on insights about how decisions emerge over time and should incorporate related process data to improve AI predictions in general and human-AI interactions in particular. First, we introduce a highly established computational framework that assumes decisions to emerge from the noisy accumulation of evidence, and we present related empirical work in psychology, neuroscience, and economics. Next, we discuss to what extent current approaches in multi-agent AI do or do not incorporate process data and models of decision making. Finally, we outline how a more principled inclusion of the evidence-accumulation framework into the training and use of AI can help to improve human-AI interactions in the future.
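The evidence-accumulation framework referred to here is typically formalised as a drift-diffusion model (DDM), in which response times carry information about hidden preference strength. A minimal simulation with illustrative parameters:

```python
import numpy as np

def ddm_trial(drift, rng, threshold=1.0, noise=1.0, dt=0.001):
    """One drift-diffusion trial: noisy evidence accumulation until one
    of two bounds (+/- threshold) is hit; returns (choice, response
    time). Standard DDM form with illustrative parameter values."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else -1), t

rng = np.random.default_rng(1)
# Stronger preferences (larger drift) produce faster decisions, so the
# response time itself is informative about the hidden preference.
rt_weak = np.mean([ddm_trial(0.5, rng)[1] for _ in range(200)])
rt_strong = np.mean([ddm_trial(3.0, rng)[1] for _ in range(200)])
```

This is the sense in which process data add information: two agents making the same choice with different response times are revealed to hold preferences of different strength.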

2.Observing hidden neuronal states in experiments

Authors:Dmitry Amakhin, Anton Chizhov, Guillaume Girier, Mathieu Desroches, Jan Sieber, Serafim Rodrigues

Abstract: We construct systematically experimental steady-state bifurcation diagrams for entorhinal cortex neurons. A slowly ramped voltage-clamp electrophysiology protocol serves as closed-loop feedback controlled experiment for the subsequent current-clamp open-loop protocol on the same cell. In this way, the voltage-clamped experiment determines dynamically stable and unstable (hidden) steady states of the current-clamp experiment. The transitions between observable steady states and observable spiking states in the current-clamp experiment reveal stability and bifurcations of the steady states, completing the steady-state bifurcation diagram.

1.Alternating Shrinking Higher-order Interactions for Sparse Neural Population Activity

Authors:Ulises Rodríguez-Domínguez, Hideaki Shimazaki

Abstract: Neurons in living things work cooperatively and efficiently to process incoming sensory information, often exhibiting sparse and widespread population activity involving structured higher-order interactions. While there are statistical models based on continuous probability distributions for neurons' sparse firing rates, how the spiking activities of a large number of interacting neurons result in the sparse and widespread population activity remains unknown. Here, for homogeneous (0,1) binary neurons, we provide sufficient conditions under which their spike-count population distribution converges to a sparse widespread distribution of the population spike rate in an infinitely large population of neurons. Following the conditions, we propose new models belonging to an exponential family distribution in which the sign and magnitude of neurons' higher-order interactions alternate and shrink as the order increases. The distributions exhibit parameter-dependent sparsity on a bounded support for the population firing rate. The theory serves as a building block for developing prior distributions and neurons' non-linearity for spike-based sparse coding.
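For homogeneous binary neurons, an exponential-family population model of this kind can be written down and evaluated directly. The sketch below uses assumed alternating, shrinking interaction parameters for illustration, not the paper's fitted values:

```python
import numpy as np
from math import comb

def population_count_dist(N, thetas):
    """Spike-count distribution for N homogeneous (0,1) neurons in an
    exponential-family model with homogeneous interactions:
    P(n) proportional to C(N, n) * exp(sum_k theta_k * C(n, k)),
    where theta_k is the k-th order interaction strength."""
    logp = np.array([
        np.log(comb(N, n)) + sum(th * comb(n, k + 1)
                                 for k, th in enumerate(thetas))
        for n in range(N + 1)
    ])
    p = np.exp(logp - logp.max())   # subtract max for numerical stability
    return p / p.sum()

# Interactions whose sign alternates and whose magnitude shrinks as the
# order increases, mimicking the structure of the proposed model family.
N = 30
thetas = [(-1) ** k * 2.0 / (k + 1) ** 2 for k in range(4)]
p = population_count_dist(N, thetas)
```

Varying the interaction parameters moves probability mass toward low population rates while keeping support over the full range, the sparse-yet-widespread regime discussed in the abstract.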

2.Robust Core-Periphery Constrained Transformer for Domain Adaptation

Authors:Xiaowei Yu, Lu Zhang, Dajiang Zhu, Tianming Liu

Abstract: Unsupervised domain adaptation (UDA) aims to learn transferable representations across domains. Recently, a few UDA works have successfully applied Transformer-based methods and achieved state-of-the-art (SOTA) results. However, it remains challenging when there exists a large domain gap between the source and target domain. Inspired by humans' exceptional ability to transfer knowledge from familiar to uncharted domains, we apply an organizational structure found universally in human functional brain networks, i.e., the core-periphery principle, to design the Transformer and improve its UDA performance. In this paper, we propose a novel brain-inspired robust core-periphery constrained transformer (RCCT) for unsupervised domain adaptation, which brings a large margin of performance improvement on various datasets. Specifically, in RCCT, the self-attention operation across image patches is rescheduled by an adaptively learned weighted graph with the core-periphery structure (CP graph), where the information communication and exchange between image patches are manipulated and controlled by the connection strength, i.e., the edge weight of the learned weighted CP graph. Besides, since the data in domain adaptation tasks can be noisy, we intentionally add perturbations to the patches in the latent space to improve model robustness and ensure that robust weighted core-periphery graphs are learned. Extensive evaluations are conducted on several widely tested UDA benchmarks. Our proposed RCCT consistently performs best compared to existing works, reaching 88.3\% on Office-Home, 95.0\% on Office-31, 90.7\% on VisDA-2017, and 46.0\% on DomainNet.
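The core-periphery principle invoked here can be made concrete as a graph over patches. A minimal unweighted sketch of the structure (RCCT itself learns a weighted version adaptively):

```python
import numpy as np

def core_periphery_mask(n_nodes, n_core):
    """Binary core-periphery graph over patches: core-core and
    core-periphery connections are present, periphery-periphery ones are
    absent. A sketch of the structural principle only; RCCT learns a
    weighted core-periphery graph that rescales self-attention."""
    is_core = np.zeros(n_nodes, dtype=bool)
    is_core[:n_core] = True
    # An edge exists iff at least one endpoint is a core node.
    return (is_core[:, None] | is_core[None, :]).astype(float)

# 6 patches, 2 of them core nodes; such a mask could gate which
# patch-to-patch attention weights are allowed to be non-zero.
mask = core_periphery_mask(6, 2)
```

Core nodes thus act as communication hubs: information between two periphery patches must route through the core, mirroring the brain-network organization the paper draws on.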

1.Persistent learning signals and working memory without continuous attractors

Authors:Il Memming Park, Ábel Ságodi, Piotr Aleksander Sokół

Abstract: Neural dynamical systems with stable attractor structures, such as point attractors and continuous attractors, are hypothesized to underlie meaningful temporal behavior that requires working memory. However, working memory may not support useful learning signals necessary to adapt to changes in the temporal structure of the environment. We show that in addition to the continuous attractors that are widely implicated, periodic and quasi-periodic attractors can also support learning arbitrarily long temporal relationships. Unlike the continuous attractors that suffer from the fine-tuning problem, the less explored quasi-periodic attractors are uniquely qualified for learning to produce temporally structured behavior. Our theory has broad implications for the design of artificial learning systems and makes predictions about observable signatures of biological neural dynamics that can support temporal dependence learning and working memory. Based on our theory, we developed a new initialization scheme for artificial recurrent neural networks that outperforms standard methods for tasks that require learning temporal dynamics. Moreover, we propose a robust recurrent memory mechanism for integrating and maintaining head direction without a ring attractor.

1.Bridging Cognitive Maps: a Hierarchical Active Inference Model of Spatial Alternation Tasks and the Hippocampal-Prefrontal Circuit

Authors:Toon Van de Maele, Bart Dhoedt, Tim Verbelen, Giovanni Pezzulo

Abstract: Cognitive problem-solving benefits from cognitive maps aiding navigation and planning. Previous studies revealed that cognitive maps for physical space navigation involve hippocampal (HC) allocentric codes, while cognitive maps for abstract task space engage medial prefrontal cortex (mPFC) task-specific codes. Solving challenging cognitive tasks requires integrating these two types of maps. This is exemplified by spatial alternation tasks in multi-corridor settings, where animals like rodents are rewarded upon executing an alternation pattern in maze corridors. Existing studies demonstrated the HC-mPFC circuit's engagement in spatial alternation tasks and that its disruption impairs task performance. Yet, a comprehensive theory explaining how this circuit integrates task-related and spatial information is lacking. We advance a novel hierarchical active inference model clarifying how the HC-mPFC circuit enables the resolution of spatial alternation tasks, by merging physical and task-space cognitive maps. Through a series of simulations, we demonstrate that the model's dual layers acquire effective cognitive maps for navigation within physical (HC map) and task (mPFC map) spaces, using a biologically-inspired approach: a clone-structured cognitive graph. The model solves spatial alternation tasks through reciprocal interactions between the two layers. Importantly, disrupting inter-layer communication impairs difficult decisions, consistent with empirical findings. The same model showcases the ability to switch between multiple alternation rules. However, inhibiting message transmission between the two layers results in perseverative behavior, again consistent with empirical findings. In summary, our model provides a mechanistic account of how the HC-mPFC circuit supports spatial alternation tasks and how its disruption impairs task performance.


2.Low Tensor Rank Learning of Neural Dynamics

Authors:Arthur Pellegrino, N Alex Cayco-Gajic, Angus Chadwick

Abstract: Learning relies on coordinated synaptic changes in recurrently connected populations of neurons. Therefore, understanding the collective evolution of synaptic connectivity over learning is a key challenge in neuroscience and machine learning. In particular, recent work has shown that the weight matrices of task-trained RNNs are typically low rank, but how this low rank structure unfolds over learning is unknown. To address this, we investigate the rank of the 3-tensor formed by the weight matrices throughout learning. By fitting RNNs of varying rank to large-scale neural recordings during a motor learning task, we find that the inferred weights are low-tensor-rank and therefore evolve over a fixed low-dimensional subspace throughout the entire course of learning. We next validate the observation of low-tensor-rank learning on an RNN trained to solve the same task by performing a low-tensor-rank decomposition directly on the ground truth weights, and by showing that the method we applied to the data faithfully recovers this low rank structure. Finally, we present a set of mathematical results bounding the matrix and tensor ranks of gradient descent learning dynamics which show that low-tensor-rank weights emerge naturally in RNNs trained to solve low-dimensional tasks. Taken together, our findings provide novel constraints on the evolution of population connectivity over learning in both biological and artificial neural networks, and enable reverse engineering of learning-induced changes in recurrent network dynamics from large-scale neural recordings.
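The claim that low-tensor-rank weights evolve within a fixed low-dimensional subspace can be checked numerically: for a weight 3-tensor of tensor rank R, the epoch-mode unfolding has matrix rank at most R. A synthetic sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n, epochs, rank = 50, 30, 2
# Build a weight 3-tensor (epochs x neurons x neurons) with tensor rank 2:
# each CP component is an outer product u_r (x) v_r scaled by a per-epoch
# loading c_r(t), so learning moves within a fixed 2-D weight subspace.
U = rng.standard_normal((n, rank))
V = rng.standard_normal((n, rank))
C = rng.standard_normal((epochs, rank))
W = np.einsum('ir,jr,tr->tij', U, V, C)

# The epoch-mode unfolding (epochs x neurons^2) has matrix rank bounded
# by the tensor rank, exposing the low-dimensional learning trajectory.
sv = np.linalg.svd(W.reshape(epochs, -1), compute_uv=False)
effective_rank = int(np.sum(sv > 1e-8 * sv[0]))   # -> 2
```

Applied to fitted RNN weights over learning, the same unfolding-and-SVD check reveals whether the trajectory of weight matrices stays in such a low-dimensional subspace.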

1.Linking fast and slow: the case for generative models

Authors:Johan Medrano, Karl J. Friston, Peter Zeidman

Abstract: A pervasive challenge in neuroscience is testing whether neuronal connectivity changes over time due to specific causes, such as stimuli, events, or clinical interventions. Recent hardware innovations and falling data storage costs enable longer, more naturalistic neuronal recordings. The implicit opportunity for understanding the self-organised brain calls for new analysis methods that link temporal scales: from the order of milliseconds over which neuronal dynamics evolve, to the order of minutes, days or even years over which experimental observations unfold. This review article demonstrates how hierarchical generative models and Bayesian inference help to characterise neuronal activity across different time scales. Crucially, these methods go beyond describing statistical associations among observations and enable inference about underlying mechanisms. We offer an overview of fundamental concepts in state-space modeling and suggest a taxonomy for these methods. Additionally, we introduce key mathematical principles that underscore a separation of temporal scales, such as the slaving principle, and review Bayesian methods that are being used to test hypotheses about the brain with multi-scale data. We hope that this review will serve as a useful primer for experimental and computational neuroscientists on the state of the art and current directions of travel in the complex systems modelling literature.

2.Canonical Cortical Field Theories

Authors:Gerald K. Cooray, Vernon Cooray, Karl Friston

Abstract: We characterise the dynamics of neuronal activity, in terms of field theory, using neural units placed on a 2D-lattice modelling the cortical surface. The electrical activity of neuronal units was analysed with the aim of deriving a neural field model with a simple functional form that is still able to predict or reproduce empirical findings. Each neural unit was modelled using a neural mass, and the accompanying field theory was derived in the continuum limit. The field theory comprised coupled (real) Klein-Gordon fields, whose predictions fall within the range of experimental findings. These predictions included the frequency spectrum of electric activity measured from the cortex, which was derived using an equipartition of energy over eigenfunctions of the neural fields. Moreover, the neural field model was invariant, within a set of parameters, to the dynamical system used to model each neuronal mass. Specifically, topologically equivalent dynamical systems resulted in the same neural field model when connected in a lattice, indicating that the fields derived could be read as a canonical cortical field theory. We specifically investigated non-dispersive fields that provide a structure for the coding (or representation) of afferent information. Further elaboration of the ensuing neural field theory, including the effect of dispersive forces, could be important for understanding the cortical processing of information.
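The Klein-Gordon fields mentioned here admit a simple lattice discretisation, and the equipartition argument presumes (approximately) conserved energy. An illustrative 1-D sketch (the paper's derivation is on a 2-D cortical lattice):

```python
import numpy as np

# Semi-implicit Euler integration of a real Klein-Gordon field,
# phi_tt = c^2 * laplacian(phi) - m^2 * phi, on a periodic 1-D lattice.
# (Illustrative discretisation and parameters, not the paper's setup.)
n, dt, c2, m2 = 128, 0.01, 1.0, 1.0
x = np.arange(n)
phi = np.exp(-0.5 * ((x - n / 2) / 5.0) ** 2)   # localised bump of activity
vel = np.zeros(n)

def laplacian(f):
    return np.roll(f, 1) + np.roll(f, -1) - 2.0 * f

def energy(phi, vel):
    """Discrete field energy: kinetic + gradient + mass terms."""
    grad = np.roll(phi, -1) - phi
    return 0.5 * np.sum(vel ** 2 + c2 * grad ** 2 + m2 * phi ** 2)

e0 = energy(phi, vel)
for _ in range(2000):
    vel += dt * (c2 * laplacian(phi) - m2 * phi)
    phi += dt * vel
rel_drift = abs(energy(phi, vel) - e0) / e0   # small: energy ~ conserved
```

Decomposing such a simulated field into lattice eigenmodes and assigning each mode an equal share of this conserved energy is the equipartition step used to predict the frequency spectrum.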

1.End-to-end topographic networks as models of cortical map formation and human visual behaviour: moving beyond convolutions

Authors:Zejin Lu, Adrien Doerig, Victoria Bosch, Bas Krahmer, Daniel Kaiser, Radoslaw M Cichy, Tim C Kietzmann

Abstract: Computational models are an essential tool for understanding the origin and functions of the topographic organisation of the primate visual system. Yet, vision is most commonly modelled by convolutional neural networks that ignore topography by learning identical features across space. Here, we overcome this limitation by developing All-Topographic Neural Networks (All-TNNs). Trained on visual input, several features of primate topography emerge in All-TNNs: smooth orientation maps and cortical magnification in their first layer, and category-selective areas in their final layer. In addition, we introduce a novel dataset of human spatial biases in object recognition, which enables us to directly link models to behaviour. We demonstrate that All-TNNs significantly better align with human behaviour than previous state-of-the-art convolutional models due to their topographic nature. All-TNNs thereby mark an important step forward in understanding the spatial organisation of the visual brain and how it mediates visual behaviour.
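One common way to obtain topographic organisation in such models is to penalise differences between neighbouring units' weights on a cortical sheet. A sketch of such a smoothness penalty (an assumed simple form; the All-TNN training objective may differ in detail):

```python
import numpy as np

def smoothness_penalty(w):
    """Topographic smoothness loss for a sheet of units (rows x cols x
    features): sum of squared differences between neighbouring units'
    feature weights, encouraging smooth cortical-map-like organisation."""
    dx = w[1:, :, :] - w[:-1, :, :]   # vertical neighbours
    dy = w[:, 1:, :] - w[:, :-1, :]   # horizontal neighbours
    return float(np.sum(dx ** 2) + np.sum(dy ** 2))

rng = np.random.default_rng(0)
smooth = np.tile(rng.standard_normal((1, 1, 8)), (10, 10, 1))  # identical units
rough = rng.standard_normal((10, 10, 8))                       # unstructured sheet
```

Unlike weight sharing in a convolutional layer, every unit here keeps its own weights; the penalty merely biases nearby units toward similar tuning, which is what lets smooth orientation maps emerge.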

1.A hierarchy index for networks in the brain reveals a complex entangled organizational structure

Authors:Anand Pathak, Shakti N. Menon, Sitabhra Sinha

Abstract: Networks involved in information processing often have their nodes arranged hierarchically, with the majority of connections occurring in adjacent levels. However, despite being an intuitively appealing concept, the hierarchical organization of large networks, such as those in the brain, is difficult to identify, especially in the absence of additional information beyond that provided by the connectome. In this paper, we propose a framework to uncover the hierarchical structure of a given network, which identifies the nodes occupying each level as well as the sequential order of the levels. It involves optimizing a metric that we use to quantify the extent of hierarchy present in a network. Applying this measure to various brain networks, ranging from the nervous system of the nematode Caenorhabditis elegans to the human connectome, we unexpectedly find that they exhibit a common network architectural motif intertwining hierarchy and modularity. This suggests that brain networks may have evolved to simultaneously exploit the functional advantages of these two types of organization, viz., relatively independent modules performing distributed processing in parallel and a hierarchical structure that allows sequential pooling of these multiple processing streams. An intriguing possibility is that the property we report may be common to information processing networks in general.
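The intuition that hierarchy means "most connections occur between adjacent levels" can be scored for any candidate assignment of nodes to levels; the paper optimises its own hierarchy metric over such assignments. A purely illustrative toy score (not the paper's metric):

```python
def adjacent_level_fraction(edges, level):
    """Fraction of edges connecting adjacent hierarchical levels under a
    candidate node-to-level assignment. (Hypothetical toy score that only
    captures the adjacency intuition; the paper's hierarchy index and its
    optimisation procedure are more involved.)"""
    diffs = [abs(level[a] - level[b]) for a, b in edges]
    return sum(d == 1 for d in diffs) / len(diffs)

# A 5-node chain with one node per level is perfectly hierarchical under
# this score; adding a level-skipping edge lowers it.
chain = [(0, 1), (1, 2), (2, 3), (3, 4)]
level = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4}
score_chain = adjacent_level_fraction(chain, level)
score_skip = adjacent_level_fraction(chain + [(0, 4)], level)
```

Maximising such a score jointly over the level assignment and level ordering is the kind of optimisation the framework performs on connectome data.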

1.Bayesian Neural System Identification with Response Variability

Authors:Nan Wu, Isabel Valera, Alexander Ecker, Thomas Euler, Yongrong Qiu

Abstract: Neural population responses in sensory systems are driven by external physical stimuli. This stimulus-response relationship is typically characterized by receptive fields, with the assumption of identical and independent Gaussian or Poisson distributions through the loss function. However, responses to repeated presentations of the same stimulus vary, complicating a stochastic understanding of neural coding. Therefore, to appreciate neural information processing, it is critical to identify the stimulus-response function in the presence of trial-to-trial variability. Here, we present a Bayesian system identification approach to predict neural responses to visual stimuli with uncertainties, and explore whether incorporating response fluctuations via synaptic variability can be beneficial for identifying neural response properties. To this end, we build a neural network model using variational inference to estimate the distribution of each model weight. Tests with different neural datasets demonstrate that this method can achieve higher or comparable performance on neural prediction compared to Monte Carlo dropout methods and traditional models using point estimates of the model parameters. At the same time, our variational method allows us to estimate the uncertainty of the neural transfer function, which we have found to be negatively correlated with the predictive performance. Finally, our model enables a highly challenging task, i.e., the prediction of noise correlations for unseen stimuli, albeit to a moderate degree. Together, we provide a probabilistic approach as a starting point for simultaneously estimating neuronal receptive fields and analyzing trial-to-trial co-variability for a population of neurons, which may help to uncover the underpinnings of stochastic biological computation.
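The core ingredient, a factorised Gaussian posterior over weights sampled by reparameterisation, can be sketched compactly. This is a minimal mean-field illustration, not the paper's full architecture or training loop:

```python
import numpy as np

class VariationalLinear:
    """Linear readout with a factorised Gaussian posterior over each
    weight, sampled via the reparameterisation trick: w = mu + sigma*eps.
    (Minimal mean-field sketch; in practice mu and log_sigma are fit by
    maximising the evidence lower bound.)"""
    def __init__(self, n_in, n_out, rng):
        self.mu = 0.1 * rng.standard_normal((n_in, n_out))
        self.log_sigma = np.full((n_in, n_out), -2.0)
        self.rng = rng

    def sample_predict(self, x, n_samples=100):
        """Monte-Carlo predictive mean and per-output uncertainty."""
        preds = []
        for _ in range(n_samples):
            eps = self.rng.standard_normal(self.mu.shape)
            w = self.mu + np.exp(self.log_sigma) * eps   # w ~ q(w)
            preds.append(x @ w)
        preds = np.stack(preds)
        return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(0)
layer = VariationalLinear(n_in=20, n_out=5, rng=rng)
x = rng.standard_normal(20)
mean, sd = layer.sample_predict(x)   # prediction with uncertainty
```

The per-output standard deviation plays the role of the transfer-function uncertainty that the paper relates to predictive performance.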

1.Information decomposition reveals hidden high-order contributions to temporal irreversibility

Authors:Andrea I Luppi, Fernando E. Rosas, Gustavo Deco, Morten L. Kringelbach, Pedro A. M. Mediano

Abstract: Temporal irreversibility, often referred to as the arrow of time, is a fundamental concept in statistical mechanics. Markers of irreversibility also provide a powerful characterisation of information processing in biological systems. However, current approaches tend to describe temporal irreversibility in terms of a single scalar quantity, without disentangling the underlying dynamics that contribute to irreversibility. Here we propose a broadly applicable information-theoretic framework to characterise the arrow of time in multivariate time series, which yields qualitatively different types of irreversible information dynamics. This multidimensional characterisation reveals previously unreported high-order modes of irreversibility, and establishes a formal connection between recent heuristic markers of temporal irreversibility and metrics of information processing. We demonstrate the prevalence of high-order irreversibility in the hyperactive regime of a biophysical model of brain dynamics, showing that our framework is both theoretically principled and empirically useful. This work challenges the view of the arrow of time as a monolithic entity, enhancing both our theoretical understanding of irreversibility and our ability to detect it in practical applications.
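A simple scalar marker of the kind this framework generalises is the KL divergence between forward and time-reversed transition statistics, which vanishes for a statistically reversible process. An illustrative sketch on discrete-state series:

```python
import numpy as np

def irreversibility(series, n_states):
    """Scalar marker of temporal irreversibility: KL divergence between
    the empirical forward and time-reversed pairwise transition
    distributions. Near zero for a statistically reversible process.
    (An illustrative single-number marker of the kind the proposed
    framework decomposes into distinct information dynamics.)"""
    joint = np.zeros((n_states, n_states))
    for a, b in zip(series[:-1], series[1:]):
        joint[a, b] += 1
    joint /= joint.sum()
    fwd, bwd = joint.flatten(), joint.T.flatten()
    mask = (fwd > 0) & (bwd > 0)
    return float(np.sum(fwd[mask] * np.log(fwd[mask] / bwd[mask])))

rng = np.random.default_rng(0)
# A noisy cyclic chain (0 -> 1 -> 2 -> 0) is strongly irreversible;
# an i.i.d. sequence is reversible.
state, cyclic = 0, []
for _ in range(5000):
    cyclic.append(state)
    u = rng.random()
    state = (state + 1) % 3 if u < 0.8 else (state - 1) % 3 if u < 0.9 else state
iid = list(rng.integers(0, 3, size=5000))
irr_cyclic = irreversibility(cyclic, 3)   # large: arrow of time visible
irr_iid = irreversibility(iid, 3)         # near zero: reversible
```

The paper's contribution is to decompose such a single scalar into qualitatively different (including high-order, multivariate) contributions to irreversibility.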

1.Analyzing the Effect of Data Impurity on the Detection Performances of Mental Disorders

Authors:Rohan Kumar Gupta, Rohit Sinha

Abstract: The primary method for identifying mental disorders automatically has traditionally involved using binary classifiers. These classifiers are trained using behavioral data obtained from an interview setup. In this training process, data from individuals with the specific disorder under consideration are categorized as the positive class, while data from all other participants constitute the negative class. In practice, it is widely recognized that certain mental disorders share similar symptoms, causing the collected behavioral data to encompass a variety of attributes associated with multiple disorders. Consequently, attributes linked to the targeted mental disorder might also be present within the negative class. This data impurity may lead to sub-optimal training of the classifier for a mental disorder of interest. In this study, we investigate this hypothesis in the context of major depressive disorder (MDD) and post-traumatic stress disorder (PTSD) detection. The results show that upon removal of such data impurity, MDD and PTSD detection performances are significantly improved.

2.Integrating large language models and active inference to understand eye movements in reading and dyslexia

Authors:Francesco Donnarumma, Mirco Frosolone, Giovanni Pezzulo

Abstract: We present a novel computational model employing hierarchical active inference to simulate reading and eye movements. The model characterizes linguistic processing as inference over a hierarchical generative model, facilitating predictions and inferences at various levels of granularity, from syllables to sentences. Our approach combines the strengths of large language models for realistic textual predictions and active inference for guiding eye movements to informative textual information, enabling the testing of predictions. The model exhibits proficiency in reading both known and unknown words and sentences, adhering to the distinction between lexical and nonlexical routes in dual-route theories of reading. Notably, our model permits the exploration of maladaptive inference effects on eye movements during reading, such as in dyslexia. To simulate this condition, we attenuate the contribution of priors during the reading process, leading to incorrect inferences and a more fragmented reading style, characterized by a greater number of shorter saccades. This alignment with empirical findings regarding eye movements in dyslexic individuals highlights the model's potential to aid in understanding the cognitive processes underlying reading and eye movements, as well as how reading deficits associated with dyslexia may emerge from maladaptive predictive processing. In summary, our model represents a significant advancement in comprehending the intricate cognitive processes involved in reading and eye movements, with potential implications for understanding and addressing dyslexia through the simulation of maladaptive inference. It may offer valuable insights into this condition and contribute to the development of more effective interventions for treatment.
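The "attenuated priors" manipulation can be illustrated with a toy Bayesian word-inference step. The numbers and the exponent `kappa` below are illustrative assumptions, not the paper's model: the contextual prior is raised to a power in [0, 1], so that a small exponent weakens its contribution relative to the noisy sensory evidence.

```python
import numpy as np

def posterior(prior, likelihood, kappa):
    """Posterior over word hypotheses with prior contribution scaled by kappa."""
    unnorm = prior**kappa * likelihood
    return unnorm / unnorm.sum()

prior = np.array([0.9, 0.1])        # context strongly predicts word 0
likelihood = np.array([0.4, 0.6])   # noisy evidence slightly favors word 1

p_normal = posterior(prior, likelihood, kappa=1.0)      # full prior
p_attenuated = posterior(prior, likelihood, kappa=0.1)  # dyslexia-like attenuation
```

With the full prior the reader settles on the predicted word; with attenuated priors the same evidence flips the inference, which in the model cashes out as more fragmented reading with additional corrective saccades.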

3.Desiderata for normative models of synaptic plasticity

Authors:Colin Bredenberg, Cristina Savin

Abstract: Normative models of synaptic plasticity use a combination of mathematics and computational simulations to arrive at predictions of behavioral and network-level adaptive phenomena. In recent years, there has been an explosion of theoretical work on these models, but experimental confirmation is relatively limited. In this review, we organize work on normative plasticity models in terms of a set of desiderata which, when satisfied, are designed to guarantee that a model has a clear link between plasticity and adaptive behavior, consistency with known biological evidence about neural plasticity, and specific testable predictions. We then discuss how new models have begun to improve on these criteria and suggest avenues for further development. As prototypes, we provide detailed analyses of two specific models -- REINFORCE and the Wake-Sleep algorithm. We provide a conceptual guide to help develop neural learning theories that are precise, powerful, and experimentally testable.

1.Comparing Color Similarity Structures between Humans and LLMs via Unsupervised Alignment

Authors:Genji Kawakita, Ariel Zeleznikow-Johnston, Naotsugu Tsuchiya, Masafumi Oizumi

Abstract: Large language models (LLMs), such as the General Pre-trained Transformer (GPT), have shown remarkable performance in various cognitive tasks. However, it remains unclear whether these models have the ability to accurately infer human perceptual representations. Previous research has addressed this question by quantifying correlations between similarity response patterns from humans and LLMs. Although it has been shown that the correlation between humans and LLMs is reasonably high, simple correlation analysis is inadequate to reveal the degree of detailed structural correspondence between humans and LLMs. Here, we use an unsupervised alignment method based on Gromov-Wasserstein optimal transport to assess the equivalence of similarity structures between humans and LLMs in a more detailed manner. As a tractable study, we compare the color similarity structures of humans (color-neurotypical and color-atypical participants) and two GPT models (GPT-3.5 and GPT-4) by examining the similarity structures of 93 colors. Our results show that the similarity structure of color-neurotypical humans can be remarkably well aligned with that of GPT-4 and, to a lesser extent, with that of GPT-3.5. These results contribute to our understanding of the ability of LLMs to accurately infer human perception, and highlight the potential of unsupervised alignment methods to reveal detailed structural equivalence or differences that cannot be detected by simple correlation analysis.
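The essence of unsupervised alignment is finding a correspondence between items from structure alone, without using item labels. The paper uses Gromov-Wasserstein optimal transport over 93 colors; the toy sketch below instead brute-forces the best item permutation for a 5-item example, which conveys the same idea at a scale where exhaustive search is feasible.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 5
D1 = rng.random((n, n))
D1 = (D1 + D1.T) / 2           # symmetric dissimilarity matrix
np.fill_diagonal(D1, 0.0)

true_perm = np.array([2, 0, 3, 1, 4])    # hidden item correspondence
D2 = D1[np.ix_(true_perm, true_perm)]    # same structure, relabeled items

def align(D1, D2):
    """Permutation of D2's items minimizing ||D1 - D2[p][:, p]||^2."""
    best, best_cost = None, np.inf
    for p in itertools.permutations(range(len(D1))):
        p = np.array(p)
        cost = np.sum((D1 - D2[np.ix_(p, p)])**2)
        if cost < best_cost:
            best, best_cost = p, cost
    return best, best_cost

recovered, cost = align(D1, D2)    # recovers the hidden correspondence
```

Brute force scales as n!, which is exactly why Gromov-Wasserstein optimal transport, a continuous relaxation of this matching problem, is used for realistically sized similarity structures.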

1.Evaluation of Parkinson's disease with early diagnosis using single-channel EEG features and auditory cognitive assessment

Authors:Lior Molcho, Neta B. Maimon, Neomi Hezi, Talya Zeimer, Nathan Intrator, Tanya Gurevich

Abstract: Parkinson's disease (PD) diagnosis is challenging due to subtle early clinical signs. F-DOPA PET is commonly used for early PD diagnosis. We explore the potential of machine-learning (ML) based EEG features extracted from single-channel EEG during auditory cognitive assessment as a noninvasive, low-cost support for PD diagnosis. The study included data collected from 32 participants who underwent an F-DOPA PET scan as part of their standard treatment and 20 cognitively healthy controls. Participants performed an auditory cognitive assessment recorded with the Neurosteer EEG device. Data processing involved wavelet-packet decomposition and ML. First, a prediction model was developed to predict 1/3 of the undisclosed F-DOPA results. Then, generalized linear mixed models were calculated to distinguish between PD and non-PD subjects on the frequency bands and ML-based EEG features (A0 and L1) previously associated with cognitive functions. The prediction model accurately labeled patients with unrevealed scores as positive F-DOPA. The novel EEG feature A0 and the Delta band showed significant separation between study groups, with healthy controls exhibiting higher activity than PD patients. EEG feature L1 activity was significantly lower in the resting state compared to high cognitive load. This effect was absent in the PD group, suggesting that the reduction of activity in the resting state is lacking in PD patients. This study successfully demonstrated the ability to separate patients with positive vs. negative F-DOPA PET results with an easy-to-use single-channel EEG during an auditory cognitive assessment. Future longitudinal studies should further explore the potential utility of this tool for early PD diagnosis and as a potential biomarker in PD.

2.Global cognitive graph properties dynamics of hippocampal formation

Authors:Konstantin Sorokin, Andrey Zaitsew, Aleksandr Levin, German Magai, Maxim Beketov, Vladimir Sotskov

Abstract: In the present study we used a set of methods and metrics to build a graph of relative neural connections in the hippocampus of a rodent. A set of graphs was built on top of time-sequenced data and analyzed in terms of the dynamics of connection genesis. The analysis showed that while a rodent explores a novel environment, the relations between neurons constantly change, which indicates that, globally, memory is constantly updated even for known areas of space. Even as some neurons gain cognitive specialization, the global network remains relatively stable. Additionally, we suggest a set of methods for building a graph of a cognitive neural network.

1.Learning beyond sensations: how dreams organize neuronal representations

Authors:Nicolas Deperrois, Mihai A. Petrovici, Walter Senn, Jakob Jordan

Abstract: Semantic representations in higher sensory cortices form the basis for robust, yet flexible behavior. These representations are acquired over the course of development in an unsupervised fashion and continuously maintained over an organism's lifespan. Predictive learning theories propose that these representations emerge from predicting or reconstructing sensory inputs. However, brains are known to generate virtual experiences, such as during imagination and dreaming, that go beyond previously experienced inputs. Here, we suggest that virtual experiences may be just as relevant as actual sensory inputs in shaping cortical representations. In particular, we discuss two complementary learning principles that organize representations through the generation of virtual experiences. First, "adversarial dreaming" proposes that creative dreams support a cortical implementation of adversarial learning in which feedback and feedforward pathways engage in a productive game of trying to fool each other. Second, "contrastive dreaming" proposes that the invariance of neuronal representations to irrelevant factors of variation is acquired by trying to map similar virtual experiences together via a contrastive learning process. These principles are compatible with known cortical structure and dynamics and with the phenomenology of sleep, thus providing promising directions to explain cortical learning beyond the classical predictive learning paradigm.

1.Visual attention information can be traced on cortical response but not on the retina: evidence from electrophysiological mouse data using natural images as stimuli

Authors:Nikos Melanitis, Konstantina Nikita

Abstract: Visual attention forms the basis of understanding the visual world. In this work we follow a computational approach to investigate the biological basis of visual attention. We analyze retinal and cortical electrophysiological data from mice. Visual stimuli are natural images depicting real-world scenes. Our results show that in primary visual cortex (V1), a subset of around $10\%$ of the neurons responds differently to salient versus non-salient visual regions. Visual attention information was not traced in the retinal response. It appears that the retina remains naive with respect to visual attention; cortical responses are modulated to interpret visual attention information. Experimental animal studies may be designed to further explore the biological basis of visual attention we traced in this study. In applied and translational science, our study contributes to the design of improved visual prosthesis systems -- systems that create artificial visual percepts for visually impaired individuals via electronic implants placed on either the retina or the cortex.

2.Applicability of scaling laws to vision encoding models

Authors:Takuya Matsuyama, Kota S Sasaki, Shinji Nishimoto

Abstract: In this paper, we investigated how to build a high-performance vision encoding model to predict brain activity as part of our participation in the Algonauts Project 2023 Challenge. The challenge provided brain activity recorded by functional MRI (fMRI) while participants viewed images. Several vision models with parameter sizes ranging from 86M to 4.3B were used to build predictive models. To build highly accurate models, we focused our analysis on two main aspects: (1) How does the sample size of the fMRI training set change the prediction accuracy? (2) How does the prediction accuracy across the visual cortex vary with the parameter size of the vision models? The results show that as the sample size used during training increases, the prediction accuracy improves according to the scaling law. Similarly, we found that as the parameter size of the vision models increases, the prediction accuracy improves according to the scaling law. These results suggest that increasing the sample size of the fMRI training set and the parameter size of visual models may contribute to more accurate visual models of the brain and lead to a better understanding of visual neuroscience.
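Assessing a scaling law of the kind described here typically amounts to fitting a power law, accuracy ≈ a·N^b, by linear regression in log-log space. The sketch below uses synthetic accuracies with made-up constants (it is not the authors' code or data) to show the fitting step.

```python
import numpy as np

# Hypothetical fMRI training-set sizes and synthetic accuracies that follow
# an exact power law acc = a * N**b with a = 0.02, b = 0.35 (illustrative).
N = np.array([100, 300, 1000, 3000, 10000], dtype=float)
acc = 0.02 * N**0.35

# Fit log(acc) = b * log(N) + log(a); polyfit returns [slope, intercept].
b, log_a = np.polyfit(np.log(N), np.log(acc), 1)
a = np.exp(log_a)
```

On real data the points scatter around the fitted line, and the exponent `b` quantifies how quickly prediction accuracy improves with more fMRI samples (or, analogously, with vision-model parameter count).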

1.Hebbian control of fixations in a dyslexic reader

Authors:Albert Le Floch, Guy Ropars

Abstract: During reading, dyslexic readers exhibit more and longer fixations than normal readers. However, there is no significant difference when dyslexic and control readers perform only visual tasks on a string of letters, showing the importance of cognitive processes in reading. This linguistic and cognitive processing demand in reading is often perturbed for dyslexic readers by perceived additional letter and word mirror-images superposed on the primary images in the primary cortex, inducing an internal visual crowding. Here we show that whereas for a normal reader the number and duration of fixations remain invariant whatever the nature of the lighting, the excess of fixations and total duration of reading can be controlled for a dyslexic reader using Hebbian mechanisms to erase the extra images under an optimized pulse-width lighting. The number of fixations can be reduced by a factor of about 1.8, recovering the values recorded for normal readers.

1.Fodor and Pylyshyn's Critique of Connectionism and the Brain as Basis of the Mind

Authors:Christoph von der Malsburg

Abstract: To this day there is no satisfactory answer to the question of how mental patterns correspond to physical states of our brain. For more than six decades, progress has been held up by the logjam between two traditions, one inspired by neuroscience, the other by digital computing. This logjam is well illuminated by Fodor and Pylyshyn's article of 1988, which is mainly devoted to a critique of what they call Connectionism, but also lays bare weaknesses of the Classical approach which they defend. As recent machine learning breakthroughs may be expected to throw new light on the issue, it seems time to arrive at a synthesis of the connectionist neural approach and the classical stance based on symbol processing. I will present and discuss an attempt at such a synthesis in the form of structured, self-organized neural networks.

1.Learning heterogeneous delays in a layer of spiking neurons for fast motion detection

Authors:Antoine Grimaldi, Laurent U Perrinet

Abstract: The precise timing of spikes emitted by neurons plays a crucial role in shaping the response of efferent biological neurons. This temporal dimension of neural activity holds significant importance in understanding information processing in neurobiology, especially for the performance of neuromorphic hardware, such as event-based cameras. Nonetheless, many artificial neural models disregard this critical temporal dimension of neural activity. In this study, we present a model designed to efficiently detect temporal spiking motifs using a layer of spiking neurons equipped with heterogeneous synaptic delays. Our model capitalizes on the diverse synaptic delays present on the dendritic tree, enabling specific arrangements of temporally precise synaptic inputs to synchronize upon reaching the basal dendritic tree. We formalize this process as a time-invariant logistic regression, which can be trained using labeled data. To demonstrate its practical efficacy, we apply the model to naturalistic videos transformed into event streams, simulating the output of the biological retina or event-based cameras. To evaluate the robustness of the model in detecting visual motion, we conduct experiments by selectively pruning weights and demonstrate that the model remains efficient even under significantly reduced workloads. In conclusion, by providing a comprehensive, event-driven computational building block, the incorporation of heterogeneous delays has the potential to greatly improve the performance of future spiking neural network algorithms, particularly in the context of neuromorphic chips.
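The role of heterogeneous delays can be illustrated with a toy coincidence detector (delays and spike times below are made up, and this is not the paper's trained model): each presynaptic spike is shifted by its synaptic delay, so a motif whose spike times mirror the delay pattern arrives synchronously and drives the summed input above that of any mismatched pattern.

```python
import numpy as np

T, n_pre = 20, 3
delays = np.array([3, 2, 0])    # per-synapse delays tuned to a target motif

def response(spike_times):
    """Summed postsynaptic drive per time step after applying delays."""
    drive = np.zeros(T)
    for i, t in enumerate(spike_times):
        arrival = t + delays[i]
        if arrival < T:
            drive[arrival] += 1.0
    return drive

matched = response([5, 6, 8])     # all three spikes arrive at t = 8
mismatched = response([5, 5, 5])  # arrivals spread over several time steps
```

In the paper, this synchronization step is embedded in a time-invariant logistic regression whose weights (and hence effective delay tuning) are learned from labeled data.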

2.Investigating structural and functional aspects of the brain's criticality in stroke

Authors:Jakub Janarek, Zbigniew Drogosz, Jacek Grela, Jeremi K. Ochab, Paweł Oświęcimka

Abstract: This paper addresses the question of the brain's critical dynamics after an injury such as a stroke. It is hypothesized that the healthy brain operates near a phase transition (critical point), which provides optimal conditions for information transmission and responses to inputs. If structural damage could cause the critical point to disappear and thus make self-organized criticality unachievable, it would offer the theoretical explanation for the post-stroke impairment of brain function. In our contribution, however, we demonstrate using network models of the brain, that the dynamics remain critical even after a stroke. In cases where the average size of the second-largest cluster of active nodes, which is one of the commonly used indicators of criticality, shows an anomalous behavior, it results from the loss of integrity of the network, quantifiable within graph theory, and not from genuine non-critical dynamics. We propose a new simple model of an artificial stroke that explains this anomaly. The proposed interpretation of the results is confirmed by an analysis of real connectomes acquired from post-stroke patients and a control group. The results presented refer to neurobiological data; however, the conclusions reached apply to a broad class of complex systems that admit a critical state.
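The criticality indicator named in the abstract, the size of the second-largest cluster of active nodes, is straightforward to compute. The sketch below uses an illustrative toy graph and activity pattern (not the paper's connectomes) and finds connected components of the active subgraph by breadth-first search.

```python
from collections import deque

edges = [(0, 1), (1, 2), (3, 4), (5, 6), (6, 7), (7, 8)]
active = {0, 1, 2, 3, 4, 5, 6, 7}     # node 8 is inactive

def cluster_sizes(edges, active):
    """Sizes of connected clusters of active nodes, largest first."""
    adj = {v: [] for v in active}
    for a, b in edges:
        if a in active and b in active:
            adj[a].append(b)
            adj[b].append(a)
    seen, sizes = set(), []
    for start in active:
        if start in seen:
            continue
        q, size = deque([start]), 0
        seen.add(start)
        while q:
            v = q.popleft()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        sizes.append(size)
    return sorted(sizes, reverse=True)

sizes = cluster_sizes(edges, active)   # sizes[1] is the criticality indicator
```

The paper's point is that an anomaly in this quantity after a stroke can reflect lost network integrity (the graph fragmenting into comparable pieces) rather than genuinely non-critical dynamics.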

1.Accurate detection of spiking motifs in multi-unit raster plots

Authors:Laurent U Perrinet

Abstract: Recently, interest has grown in exploring the hypothesis that neural activity conveys information through precise spiking motifs. To investigate this phenomenon, various algorithms have been proposed to detect such motifs in Single Unit Activity (SUA) recorded from populations of neurons. In this study, we present a novel detection model based on the inversion of a generative model of raster plot synthesis. Using this generative model, we derive an optimal detection procedure that takes the form of logistic regression combined with temporal convolution. A key advantage of this model is its differentiability, which allows us to formulate a supervised learning approach using a gradient descent on the binary cross-entropy loss. To assess the model's ability to detect spiking motifs in synthetic data, we first perform numerical evaluations. This analysis highlights the advantages of using spiking motifs over traditional firing-rate-based population codes. We then successfully demonstrate that our learning method can recover synthetically generated spiking motifs, indicating its potential for further applications. In the future, we aim to extend this method to real neurobiological data, where the ground truth is unknown, to explore and detect spiking motifs in a more natural and biologically relevant context.

2.Analyzing time series of unequal durations using Multidimensional Recurrence Quantification Analysis (MdRQA): validation and implementation using Python

Authors:Swarag Thaikkandi, K. M. Sharika

Abstract: In recent years, recurrence quantification analysis (RQA) and its multi-dimensional version (MdRQA) have emerged as a popular tool for assessing interpersonal behavioral or physiological synchrony in groups of two or more individuals. While experimental data in such studies are typically collected for a fixed, pre-determined duration, naturally occurring phenomena may often reach a state of transition after an unpredictable or varying duration of time. The resulting recurrence plots (RPs) across groups cannot be compared directly via linear scaling because the sensitivity of RQA variables to local dynamics would vary. We propose to address this by using the sliding window technique on individual RPs and using the summary statistics of the different RQA variable distributions computed across the sliding windows to differentiate the dynamics of the original time series of unequal durations. We tested our approach in two models: 1) the Rössler attractor and 2) the Kuramoto model. We compared the ability of different summary statistics of RQA variable distributions to accurately predict the dynamic states of the system across varying levels of noise, unequal lengths of time series, and, in the case of the Kuramoto model, different numbers of oscillators across samples. We found that while the mean, compared to other measures of central tendency, was a more accurate predictor of the underlying dynamic state of the system at high noise conditions, the mode was the most robust to the degree of noise in the signals, performing better than RQA variables from the whole RP, in general. To our knowledge, this is the first systematic attempt to validate the use of MdRQA in computing and comparing synchrony between systems of non-uniform composition and unequal time series data, paving the way for future work that examines interpersonal synchrony in more naturalistic, ecologically valid contexts.
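The sliding-window idea can be sketched with the simplest RQA variable, the recurrence rate. The example below is a univariate illustration with made-up signals and parameters (MdRQA generalizes this to multidimensional embeddings): each window yields one recurrence rate, and the distributions of window-wise values let time series of unequal duration be compared via summary statistics.

```python
import numpy as np

def recurrence_rate(x, radius):
    """Fraction of point pairs within `radius` of each other."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < radius).mean()

def windowed_rr(x, win, step, radius=0.5):
    """Recurrence rate of each sliding window along the series."""
    return np.array([recurrence_rate(x[i:i + win], radius)
                     for i in range(0, len(x) - win + 1, step)])

# Two series with the same dynamics but unequal durations (illustrative).
t_short = np.sin(np.linspace(0, 8 * np.pi, 200))
t_long = np.sin(np.linspace(0, 20 * np.pi, 500))

rr_short = windowed_rr(t_short, win=50, step=25)
rr_long = windowed_rr(t_long, win=50, step=25)
```

Because the windows have a fixed length, the summary statistics of `rr_short` and `rr_long` are directly comparable even though the underlying series are not, which is the point the paper validates for the full set of RQA variables.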

3.A modular theoretical framework for learning through structural plasticity

Authors:Gianmarco Tiddia, Luca Sergi, Bruno Golosio

Abstract: It is known that, during learning, modifications in synaptic transmission and, eventually, structural changes of the connectivity take place in our brain. This can be achieved through a mechanism known as structural plasticity. In this work, starting from a simple phenomenological model, we exploit a mean-field approach to develop a modular theoretical framework of learning through this kind of plasticity, capable of taking into account several features of the connectivity and pattern of activity of biological neural networks, including probability distributions of neuron firing rates, selectivity of the responses of single neurons to multiple stimuli, probabilistic connection rules and noisy stimuli. More importantly, it describes the effects of consolidation, pruning and reorganization of synaptic connections. This framework will be used to compute the values of some relevant quantities used to characterize the learning and memory capabilities of the neuronal network in a training and validation procedure as the number of training patterns and other model parameters vary. The results will then be compared with those obtained through simulations with firing-rate-based neuronal network models.

1.Decoding the Enigma: Benchmarking Humans and AIs on the Many Facets of Working Memory

Authors:Ankur Sikarwar, Mengmi Zhang

Abstract: Working memory (WM), a fundamental cognitive process facilitating the temporary storage, integration, manipulation, and retrieval of information, plays a vital role in reasoning and decision-making tasks. Robust benchmark datasets that capture the multifaceted nature of WM are crucial for the effective development and evaluation of AI WM models. Here, we introduce a comprehensive Working Memory (WorM) benchmark dataset for this purpose. WorM comprises 10 tasks and a total of 1 million trials, assessing 4 functionalities, 3 domains, and 11 behavioral and neural characteristics of WM. We jointly trained and tested state-of-the-art recurrent neural networks and transformers on all these tasks. We also include human behavioral benchmarks as an upper bound for comparison. Our results suggest that AI models replicate some characteristics of WM in the brain, most notably primacy and recency effects, and neural clusters and correlates specialized for different domains and functionalities of WM. In the experiments, we also reveal some limitations in existing models to approximate human behavior. This dataset serves as a valuable resource for communities in cognitive psychology, neuroscience, and AI, offering a standardized framework to compare and enhance WM models, investigate WM's neural underpinnings, and develop WM models with human-like capabilities. Our source code and data are available at

2.Brain2Music: Reconstructing Music from Human Brain Activity

Authors:Timo I. Denk, Yu Takagi, Takuya Matsuyama, Andrea Agostinelli, Tomoya Nakai, Christian Frank, Shinji Nishimoto

Abstract: The process of reconstructing experiences from human brain activity offers a unique lens into how the brain interprets and represents the world. In this paper, we introduce a method for reconstructing music from brain activity, captured using functional magnetic resonance imaging (fMRI). Our approach uses either music retrieval or the MusicLM music generation model conditioned on embeddings derived from fMRI data. The generated music resembles the musical stimuli that human subjects experienced, with respect to semantic properties like genre, instrumentation, and mood. We investigate the relationship between different components of MusicLM and brain activity through a voxel-wise encoding modeling analysis. Furthermore, we discuss which brain regions represent information derived from purely textual descriptions of music stimuli. We provide supplementary material including examples of the reconstructed music at

1.Approximating nonlinear functions with latent boundaries in low-rank excitatory-inhibitory spiking networks

Authors:William F. Podlaski, Christian K. Machens

Abstract: Deep feedforward and recurrent rate-based neural networks have become successful functional models of the brain, but they neglect obvious biological details such as spikes and Dale's law. Here we argue that these details are crucial in order to understand how real neural circuits operate. Towards this aim, we put forth a new framework for spike-based computation in low-rank excitatory-inhibitory spiking networks. By considering populations with rank-1 connectivity, we cast each neuron's spiking threshold as a boundary in a low-dimensional input-output space. We then show how the combined thresholds of a population of inhibitory neurons form a stable boundary in this space, and those of a population of excitatory neurons form an unstable boundary. Combining the two boundaries results in a rank-2 excitatory-inhibitory (EI) network with inhibition-stabilized dynamics at the intersection of the two boundaries. The computation of the resulting networks can be understood as the difference of two convex functions, and is thereby capable of approximating arbitrary non-linear input-output mappings. We demonstrate several properties of these networks, including noise suppression and amplification, irregular activity and synaptic balance, as well as how they relate to rate network dynamics in the limit that the boundary becomes soft. Finally, while our work focuses on small networks (5-50 neurons), we discuss potential avenues for scaling up to much larger networks. Overall, our work proposes a new perspective on spiking networks that may serve as a starting point for a mechanistic understanding of biological spike-based computation.
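The "difference of two convex functions" picture can be made concrete with max-affine functions: each neuron's threshold contributes one line, a population's combined boundary is the maximum over its lines (a convex function), and the excitatory-minus-inhibitory difference is generically non-convex. The slopes and intercepts below are illustrative, not taken from the paper.

```python
import numpy as np

x = np.linspace(-2, 2, 401)

# Each row: (slope, intercept) of one neuron's threshold boundary.
E_lines = np.array([[1.5, 0.0], [-1.5, 0.0]])
I_lines = np.array([[1.0, -0.5], [-1.0, -0.5], [0.0, 0.0]])

def max_affine(lines, x):
    """Convex, piecewise-linear max over a population's boundary lines."""
    return np.max(lines[:, :1] * x + lines[:, 1:], axis=0)

# Difference of two convex functions: non-convex input-output mapping.
f = max_affine(E_lines, x) - max_affine(I_lines, x)
```

With enough lines on each side, any continuous function on an interval can be approximated this way, which is the sense in which the rank-2 EI networks in the paper approximate arbitrary non-linear mappings.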

1.A Study on the Performance of Generative Pre-trained Transformer (GPT) in Simulating Depressed Individuals on the Standardized Depressive Symptom Scale

Authors:Sijin Cai, Nanfeng Zhang, Jiaying Zhu, Yanjie Liu, Yongjin Zhou

Abstract: Background: Depression is a common mental disorder with societal and economic burden. Current diagnosis relies on self-reports and assessment scales, which have reliability issues. Objective approaches are needed for diagnosing depression. Objective: Evaluate the potential of GPT technology in diagnosing depression. Assess its ability to simulate individuals with depression and investigate the influence of depression scales. Methods: Three depression-related assessment tools (HAMD-17, SDS, GDS-15) were used. Two experiments simulated GPT responses to normal individuals and individuals with depression. We compared GPT's responses with the expected results, assessed its understanding of depressive symptoms, and examined performance differences under different conditions. Results: GPT's performance in depression assessment was evaluated. It aligned with scoring criteria for both individuals with depression and normal individuals. Some performance differences were observed based on depression severity. GPT performed better on scales with higher sensitivity. Conclusion: GPT accurately simulates individuals with depression and normal individuals during depression-related assessments. Deviations occur when simulating different degrees of depression, limiting its understanding of mild and moderate cases. GPT performs better on scales with higher sensitivity, indicating potential for developing more effective depression scales. GPT has important potential in depression assessment, supporting clinicians and patients.

1.Learning fixed points of recurrent neural networks by reparameterizing the network model

Authors:Vicky Zhu, Robert Rosenbaum

Abstract: In computational neuroscience, fixed points of recurrent neural network models are commonly used to model neural responses to static or slowly changing stimuli. These applications raise the question of how to train the weights in a recurrent neural network to minimize a loss function evaluated on fixed points. A natural approach is to use gradient descent on the Euclidean space of synaptic weights. We show that this approach can lead to poor learning performance due, in part, to singularities that arise in the loss surface. We use a re-parameterization of the recurrent network model to derive two alternative learning rules that produce more robust learning dynamics. We show that these learning rules can be interpreted as steepest descent and gradient descent, respectively, under a non-Euclidean metric on the space of recurrent weights. Our results question the common, implicit assumption that learning in the brain should necessarily follow the negative Euclidean gradient of synaptic weights.
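The object being trained here, a fixed point of a rate network, can be computed by simple iteration when the network is contracting. The sketch below is illustrative only (it shows the fixed point itself, not the paper's reparameterized learning rules), with arbitrary weak random weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
W = 0.1 * rng.standard_normal((n, n))   # weak coupling -> contraction map
b = rng.standard_normal(n)

def fixed_point(W, b, iters=200):
    """Iterate x <- tanh(W x + b) to convergence from the origin."""
    x = np.zeros(len(b))
    for _ in range(iters):
        x = np.tanh(W @ x + b)
    return x

x_star = fixed_point(W, b)
# Self-consistency residual of the fixed-point equation.
residual = np.max(np.abs(x_star - np.tanh(W @ x_star + b)))
```

A loss evaluated at `x_star` depends on `W` only through this implicit equation; the paper's contribution is choosing the parameterization of `W` under which descending that loss behaves well.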

1.Spontaneous segregation of visual information between parallel streams of a multi-stream convolutional neural network

Authors:Hiroshi Tamura

Abstract: Visual information is processed in hierarchically organized parallel pathways in the primate brain. In lower cortical areas, color information and shape information are processed in a parallel manner, while in higher cortical areas, various types of visual information, such as color, face, animate/inanimate, are processed in a parallel manner. In the present study, the possibility of spontaneous segregation of visual information in parallel streams was examined by constructing a convolutional neural network with parallel architecture in all of the convolutional layers. The results revealed that color information was segregated from shape information in most model instances. Deletion of the color-related stream decreased recognition accuracy in the inanimate category, whereas deletion of the shape-related stream decreased recognition accuracy in the animate category. The results suggest that properties of filters and functions of a stream are spontaneously segregated in parallel streams of neural networks.

2.Grammatical Parameters from a Gene-like Code to Self-Organizing Attractors

Authors:Giuseppe Longobardi, Alessandro Treves

Abstract: Parametric approaches to grammatical diversity range from Chomsky's 1981 classical Principles & Parameters model to minimalist reinterpretations: in some proposals of the latter framework, parameters need not be an extensional list given at the initial state S0 of the mind, but can be constructed through a bio-program in the course of language development. In this contribution we pursue this lead and discuss initial data and ideas relevant for the elaboration of three sets of questions: 1) how can binary parameters be conceivably implemented in cortical and subcortical circuitry in the human brain? 2) how can parameter mutations be taken to occur? 3) given the distribution of parameter values across languages and their implications, can multi-parental models of language phylogenies, departing from ultrametricity, also account for some of the available evidence?

1.Modelling Spontaneous Firing Activity of the Motor Cortex in a Spiking Neural Network with Random and Local Connectivity

Authors:Lysea Haggie (Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand), Thor Besier (Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand), Angus McMorland (Auckland Bioengineering Institute and Department of Exercise Sciences, University of Auckland, Auckland, New Zealand)

Abstract: Computational models of cortical activity provide insight into the mechanisms of higher-order processing in the human brain, including planning, perception and the control of movement. Activity in the cortex is ongoing even in the absence of sensory input or discernible movements and is thought to be linked to the topology of cortical circuitry. However, the connectivity and its functional role in the generation of spatio-temporal firing patterns and cortical computations are still unknown. Movement of the body is a key function of the brain, with the motor cortex the main cortical area implicated in the generation of movement. We built a spiking neural network model of the motor cortex which incorporates a laminar structure and circuitry based on a previous cortical model. A local connectivity scheme was implemented to introduce more physiological plausibility to the cortex model, and the effect on the rates, distributions and irregularity of neuronal firing was compared to the original random connectivity method and experimental data. Local connectivity broadened the distribution of firing rates and increased the overall rate of neuronal firing. It also made the irregularity of firing more similar to that observed in experimental measurements. The larger variability in the dynamical behaviour of the local connectivity model suggests that the topological structure of the connections in a neuronal population plays a significant role in firing patterns during spontaneous activity. This model takes steps towards replicating the macroscopic network of the motor cortex, producing realistic spatiotemporal firing to shed light on information coding in the cortex. Large-scale computational models such as this one can capture how structure and function relate to observable neuronal firing behaviour, and can be used to investigate the underlying computational mechanisms of the brain.
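The abstract does not specify the local connectivity scheme; a minimal sketch of the general idea, contrasting a uniform random rule with a distance-dependent (Gaussian) rule, might look like this (all parameters are illustrative, not the paper's model):

```python
import numpy as np

def random_connectivity(n, p, rng):
    # Uniform rule: every ordered pair connects with the same probability p.
    return rng.random((n, n)) < p

def local_connectivity(positions, sigma, p_max, rng):
    # Distance-dependent rule: connection probability decays as a Gaussian
    # of the Euclidean distance between neuron positions.
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    p = p_max * np.exp(-(d ** 2) / (2 * sigma ** 2))
    np.fill_diagonal(p, 0.0)  # no self-connections
    return rng.random(p.shape) < p

rng = np.random.default_rng(0)
n = 200
pos = rng.uniform(0.0, 1.0, size=(n, 2))  # neurons scattered on a unit sheet
A_rand = random_connectivity(n, 0.1, rng)
A_local = local_connectivity(pos, sigma=0.15, p_max=0.5, rng=rng)

# The local rule concentrates connections among nearby neurons.
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
print(d[A_local].mean(), d[A_rand].mean())
```

Embedding such a rule in a laminar spiking model is what changes the firing statistics the abstract describes.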

1.Comparing dendritic trees with actual trees

Authors:Roozbeh Farhoodi, Phil Wilkes, Anirudh M. Natarajan, Samantha Ing-Esteves, Julie L. Lefebvre, Mathias Disney, Konrad P. Kording

Abstract: Since they became observable, neuron morphologies have been informally compared with biological trees, but the two are studied by distinct communities: neuroscientists and ecologists. The apparent structural similarity suggests there may be common quantitative rules and constraints. However, there are also reasons to believe they should be different. For example, while the environments of trees may be relatively simple, neurons are constructed by a complex iterative program in which synapses are made and pruned. This complexity may make neurons less self-similar than trees. Here we test this hypothesis by comparing the features of segmented sub-trees with those of the whole tree. We indeed find more self-similarity within actual trees than within neurons. At the same time, we find that many other features are somewhat comparable across the two. Investigation of shapes and behaviors promises new ways of conceptualizing the form-function link.

1.Beyond the Snapshot: Brain Tokenized Graph Transformer for Longitudinal Brain Functional Connectome Embedding

Authors:Zijian Dong, Yilei Wu, Yu Xiao, Joanna Su Xian Chong, Yueming Jin, Juan Helen Zhou

Abstract: Under the framework of network-based neurodegeneration, brain functional connectome (FC)-based Graph Neural Networks (GNN) have emerged as a valuable tool for the diagnosis and prognosis of neurodegenerative diseases such as Alzheimer's disease (AD). However, these models are tailored for brain FC at a single time point instead of characterizing the FC trajectory. Discerning how FC evolves with disease progression, particularly at predementia stages such as cognitively normal individuals with amyloid deposition or individuals with mild cognitive impairment (MCI), is crucial for delineating disease spreading patterns and developing effective strategies to slow down or even halt disease advancement. In this work, we proposed the first interpretable framework for brain FC trajectory embedding with application to neurodegenerative disease diagnosis and prognosis, namely Brain Tokenized Graph Transformer (Brain TokenGT). It consists of two modules: 1) Graph Invariant and Variant Embedding (GIVE) for generation of node and spatio-temporal edge embeddings, which were tokenized for downstream processing; 2) Brain Informed Graph Transformer Readout (BIGTR), which augments previous tokens with trainable type identifiers and non-trainable node identifiers and feeds them into a standard transformer encoder for readout. We conducted extensive experiments on two public longitudinal fMRI datasets of the AD continuum for three tasks: differentiating MCI from controls, predicting dementia conversion in MCI, and classifying amyloid-positive versus amyloid-negative cognitively normal individuals. Based on brain FC trajectory, the proposed Brain TokenGT approach outperformed all the other benchmark models and at the same time provided excellent interpretability. The code is available at

2.A calcium imaging large dataset reveals novel functional organization in macaque V4

Authors:Tianye Wang, Haoxuan Yao, Tai Sing Lee, Jiayi Hong, Yang Li, Hongfei Jiang, Ian Max Andolina, Shiming Tang

Abstract: The topological organization and feature preferences of primate visual area V4 have been primarily studied using artificial stimuli. Here, we combined large-scale calcium imaging with deep learning methods to characterize and understand how V4 processes natural images. By fitting a deep learning model to an unprecedentedly large dataset of columnar scale cortical responses to tens of thousands of natural stimuli and using the model to identify the images preferred by each cortical pixel, we obtained a detailed V4 topographical map of natural stimulus preference. The map contains distinct functional domains preferring a variety of natural image features, ranging from surface-related features such as color and texture to shape-related features such as edge, curvature, and facial features. These predicted domains were verified by additional widefield calcium imaging and single-cell resolution two-photon imaging. Our study reveals the systematic topological organization of V4 for encoding image features in natural scenes.

1.The Human Auditory System and Audio

Authors:Milind N. Kunchur

Abstract: This work reviews the human auditory system, elucidating some of the specialized mechanisms and non-linear pathways along the chain of events between physical sound and its perception. Customary relationships between frequency, time, and phase--such as the uncertainty principle--that hold for linear systems do not apply straightforwardly to the hearing process. Auditory temporal resolution for certain processes can be a hundredth of the period of the signal, and can extend down to the microseconds time scale. The astonishingly large number of variations that correspond to the neural excitation pattern of 30,000 auditory nerve fibers, originating from 3,500 inner hair cells, explicates the vast capacity of the auditory system for the resolution of sonic detail. And the ear is sensitive enough to detect a basilar-membrane amplitude at the level of a picometer, or about a hundred times smaller than an atom. This article surveys and provides new insights into some of the impressive capabilities of the human auditory system and explores their relationship to fidelity in reproduced sound.

1.Are task representations gated in macaque prefrontal cortex?

Authors:Timo Flesch, Valerio Mante, William Newsome, Andrew Saxe, Christopher Summerfield, David Sussillo

Abstract: A recent paper (Flesch et al, 2022) describes behavioural and neural data suggesting that task representations are gated in the prefrontal cortex in both humans and macaques. This short note proposes an alternative explanation for the reported results from the macaque data.

1.Reconstructing the Hemodynamic Response Function via a Bimodal Transformer

Authors:Yoni Choukroun, Lior Golgher, Pablo Blinder, Lior Wolf

Abstract: The relationship between blood flow and neuronal activity is widely recognized, with blood flow frequently serving as a surrogate for neuronal activity in fMRI studies. At the microscopic level, neuronal activity has been shown to influence blood flow in nearby blood vessels. This study introduces the first predictive model that addresses this issue directly at the explicit neuronal population level. Using in vivo recordings in awake mice, we employ a novel spatiotemporal bimodal transformer architecture to infer current blood flow based on both historical blood flow and ongoing spontaneous neuronal activity. Our findings indicate that incorporating neuronal activity significantly enhances the model's ability to predict blood flow values. Through analysis of the model's behavior, we propose hypotheses regarding the largely unexplored nature of the hemodynamic response to neuronal activity.

2.Geomagnetic field influences probabilistic abstract decision-making in humans

Authors:Kwon-Seok Chae, In-Taek Oh, Soo Hyun Jeong, Yong-Hwan Kim, Soo-Chan Kim, Yongkuk Kim

Abstract: To resolve disputes or determine the order of things, people commonly use binary choices such as tossing a coin, even though it is obscure whether the empirical probability equals the theoretical probability. The geomagnetic field (GMF) is broadly applied as a sensory cue for various movements in many organisms including humans, although our understanding is limited. Here we reveal a GMF-modulated probabilistic abstract decision-making in humans and the underlying mechanism, exploiting the zero-sum binary stone choice of the Go game as a proof-of-principle. Large-scale data analyses of professional Go matches and in situ stone choice games showed that the empirical probabilities of the stone selections were remarkably different from the theoretical probability. In laboratory experiments, experimental probability in the decision-making was significantly influenced by GMF conditions and a specific magnetic resonance frequency. Time series and stepwise systematic analyses pinpointed the intentionally uncontrollable decision-making as a primary modulating target. Notably, the continuum of GMF lines and anisotropic magnetic interplay between players were crucial to influence the magnetic field resonance-mediated abstract decision-making. Our findings provide unique insights into the impact of sensing the GMF in decision-making at tipping points and the quantum mechanical mechanism for manifesting the gap between theoretical and empirical probability in 3-dimensional living space.

1.Does visual experience influence arm proprioception and its lateralization? Evidence from passive matching performance in congenitally-blind and sighted adults

Authors:Najib Abi Chebel LNC, Florence Gaunet LNC, Pascale Chavet LNC, Christine Assaiante LNC, Christophe Bourdin ISM, Fabrice Sarlegna

Abstract: In humans, body segments' position and movement can be estimated from multiple senses such as vision and proprioception. It has been suggested that vision and proprioception can influence each other and that upper-limb proprioception is asymmetrical, with proprioception of the non-dominant arm being more accurate and/or precise than proprioception of the dominant arm. However, the mechanisms underlying the lateralization of proprioceptive perception are not yet understood. Here we tested the hypothesis that early visual experience influences the lateralization of arm proprioceptive perception by comparing 8 congenitally-blind and 8 matched, sighted right-handed adults. Their proprioceptive perception was assessed at the elbow and wrist joints of both arms using an ipsilateral passive matching task. Results support and extend the view that proprioceptive precision is better at the non-dominant arm for blindfolded sighted individuals. While this finding was rather systematic across sighted individuals, proprioceptive precision of congenitally-blind individuals was not lateralized as systematically, suggesting that lack of visual experience during ontogenesis influences the lateralization of arm proprioception.

2.Dynamic functional connectivity: why the controversy?

Authors:Diego Vidaurre

Abstract: In principle, dynamic functional connectivity in fMRI is just a statistical measure. A passer-by might think it to be a specialist topic, but it continues to attract widespread attention and spark controversy. Why?

1.Regulation of Mouse Learning and Mood by the Anti-Inflammatory Cytokine Interleukin-10

Authors:Ryan Joseph Worthen

Abstract: Major depressive disorder is a widespread mood disorder. One of the most debilitating symptoms patients often experience is cognitive impairment. Recent findings suggest that inflammation is associated with depression and impaired cognition. Pro-inflammatory cytokines are elevated in the blood of depressed patients and impair learning and memory processes, suggesting that an anti-inflammatory approach might be beneficial for both depression and cognition. Utilizing the learned helplessness paradigm, we first established a mouse model of depression in which learning and memory are impaired. We found that learned helplessness (LH) impaired novel object recognition (NOR) and spatial working memory. LH mice also exhibited reduced hippocampal dendritic spine density and increased microglial activation compared to non-shocked (NS) mice or mice that were subjected to the learned helpless paradigm but did not exhibit learned helplessness (non-learned helpless, or NLH). These effects were mediated by microglia, as treatment with PLX5622, which depletes microglia and macrophages, restored learning and memory and hippocampal dendritic spine density in LH mice. However, PLX5622 also impaired learning and memory and reduced hippocampal dendritic spine density in NLH mice, suggesting that microglia in NLH mice are involved in the production of molecules that promote learning and memory. We found that microglial interleukin (IL)-10 levels were reduced in LH mice and IL-10 administration was sufficient to restore NOR, spatial working memory, and hippocampal dendritic spine density in LH mice, and in NLH mice treated with PLX5622, consistent with a pro-cognitive role for IL-10. Altogether, these data demonstrate the critical role of IL-10 in promoting learning and memory after learned helplessness.

2.Causal potency of consciousness in the physical world

Authors:Danko D. Georgiev

Abstract: The evolution of the human mind through natural selection mandates that our conscious experiences are causally potent in order to leave a tangible impact upon the surrounding physical world. Any attempt to construct a functional theory of the conscious mind within the framework of classical physics, however, inevitably leads to causally impotent conscious experiences in direct contradiction to evolution theory. Here, we derive several rigorous theorems that identify the origin of the latter impasse in the mathematical properties of ordinary differential equations employed in combination with the alleged functional production of the mind by the brain. Then, we demonstrate that a mind--brain theory consistent with causally potent conscious experiences is provided by modern quantum physics, in which the unobservable conscious mind is reductively identified with the quantum state of the brain and the observable brain is constructed by the physical measurement of quantum brain observables. The resulting quantum stochastic dynamics obtained from sequential quantum measurements of the brain is governed by stochastic differential equations, which permit genuine free will exercised through sequential conscious choices of future courses of action. Thus, quantum reductionism provides a solid theoretical foundation for the causal potency of consciousness, free will and cultural transmission.

1.Spectral Dynamic Causal Modelling: A Didactic Introduction and its Relationship with Functional Connectivity

Authors:Leonardo Novelli, Karl Friston, Adeel Razi

Abstract: We present a didactic introduction to spectral Dynamic Causal Modelling (DCM), a Bayesian state-space modelling approach used to infer effective connectivity from non-invasive neuroimaging data. Spectral DCM is currently the most widely applied DCM variant for resting-state functional MRI analysis. Our aim is to explain its technical foundations to an audience with limited expertise in state-space modelling and spectral data analysis. Particular attention will be paid to cross-spectral density, which is the most distinctive feature of spectral DCM and is closely related to functional connectivity, as measured by (zero-lag) Pearson correlations. In fact, the model parameters estimated by spectral DCM are those that best reproduce the cross-correlations between all variables--at all time lags--including the zero-lag correlations that are usually interpreted as functional connectivity. We derive the functional connectivity matrix from the model equations and show how changing a single effective connectivity parameter can affect all pairwise correlations. To complicate matters, the pairs of brain regions showing the largest changes in functional connectivity do not necessarily coincide with those presenting the largest changes in effective connectivity. We discuss the implications and conclude with a comprehensive summary of the assumptions and limitations of spectral DCM.
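The derivation mentioned above rests on a standard result for linear stochastic systems: the stationary covariance solves a continuous Lyapunov equation. A toy sketch (not the actual DCM generative model; coupling values are illustrative) shows how changing a single effective-connectivity parameter alters several pairwise correlations:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def functional_connectivity(A, Q):
    # Stationary covariance P of dx = A x dt + noise (covariance Q) solves
    # A P + P A' + Q = 0; "functional connectivity" is the correlation matrix.
    P = solve_continuous_lyapunov(A, -Q)
    s = np.sqrt(np.diag(P))
    return P / np.outer(s, s)

# Toy 3-region effective connectivity: a chain 0 -> 1 -> 2 with unit
# self-decay on the diagonal (values invented for illustration).
A = np.array([[-1.0, 0.0, 0.0],
              [ 0.3, -1.0, 0.0],
              [ 0.0, 0.3, -1.0]])
Q = np.eye(3)

fc0 = functional_connectivity(A, Q)
A2 = A.copy()
A2[1, 0] = 0.6  # strengthen a single effective connection (region 0 -> 1)
fc1 = functional_connectivity(A2, Q)

# One effective-connectivity change alters correlations between pairs
# that were not directly modified, e.g. regions 0 and 2.
print(np.round(fc1 - fc0, 3))
```

This is the sense in which effective and functional connectivity changes need not coincide pair by pair.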

1.Symmetry making and symmetry breaking in cortex: A collective portrait of ensemble excitation and inhibition

Authors:Nima Dehghani

Abstract: Creating a quantitative theory for the cortex poses several challenges and raises numerous questions. For instance, what are the significant scales of the system? Are they micro, meso or macroscopic? What are the relevant interactions? Are they pairwise, higher order or mean-field? And what are the control parameters? Are they noisy, dissipative or emergent? To tackle these issues, we suggest using an approach similar to the one that has transformed our understanding of the state of matter. This includes identifying invariances in the ensemble dynamics of various neuron functional classes, searching for order parameters that connect important degrees of freedom and distinguish macroscopic system states, and identifying broken symmetries in the order parameter space to comprehend the emerging laws when many neurons interact and coordinate their activation. By utilizing multielectrode and multiscale neural recordings, we measure the scale-invariant balance between excitatory and inhibitory neurons. We also investigate a set of parameters that can assist us in differentiating between various functional system states (such as the wake/sleep cycle) and pinpointing broken symmetries that serve different information processing and memory functions. Furthermore, we identify broken symmetries that result in pathological states like seizures.

2.Temporal Conditioning Spiking Latent Variable Models of the Neural Response to Natural Visual Scenes

Authors:Gehua Ma, Runhao Jiang, Rui Yan, Huajin Tang

Abstract: Developing computational models of neural response is crucial for understanding sensory processing and neural computations. Current state-of-the-art neural network methods use temporal filters to handle temporal dependencies, resulting in an unrealistic and inflexible processing flow. Meanwhile, these methods target trial-averaged firing rates and fail to capture important features in spike trains. This work presents the temporal conditioning spiking latent variable models (TeCoS-LVM) to simulate the neural response to natural visual stimuli. We use spiking neurons to produce spike outputs that directly match the recorded trains. This approach helps to avoid losing information embedded in the original spike trains. We exclude the temporal dimension from the model parameter space and introduce a temporal conditioning operation to allow the model to adaptively explore and exploit temporal dependencies in stimuli sequences in a natural paradigm. We show that TeCoS-LVM models can produce more realistic spike activities and fit spike statistics more accurately than powerful alternatives. Additionally, learned TeCoS-LVM models can generalize well to longer time scales. Overall, while remaining computationally tractable, our model effectively captures key features of neural coding systems. It thus provides a useful tool for building accurate predictive computational accounts for various sensory perception circuits.

1.Improving visual image reconstruction from human brain activity using latent diffusion models via multiple decoded inputs

Authors:Yu Takagi, Shinji Nishimoto

Abstract: The integration of deep learning and neuroscience has been advancing rapidly, which has led to improvements in the analysis of brain activity and the understanding of deep learning models from a neuroscientific perspective. The reconstruction of visual experience from human brain activity is an area that has particularly benefited: the use of deep learning models trained on large amounts of natural images has greatly improved its quality, and approaches that combine the diverse information contained in visual experiences have proliferated rapidly in recent years. In this technical paper, by taking advantage of the simple and generic framework that we proposed (Takagi and Nishimoto, CVPR 2023), we examine the extent to which various additional decoding techniques affect the performance of visual experience reconstruction. Specifically, we combined our earlier work with the following three techniques: using decoded text from brain activity, nonlinear optimization for structural image reconstruction, and using decoded depth information from brain activity. We confirmed that these techniques contributed to improving accuracy over the baseline. We also discuss what researchers should consider when performing visual reconstruction using deep generative models trained on large datasets. Please check our webpage at Code is also available at

1.Runtime Construction of Large-Scale Spiking Neuronal Network Models on GPU Devices

Authors:Bruno Golosio, Jose Villamar, Gianmarco Tiddia, Elena Pastorelli, Jonas Stapmanns, Viviana Fanti, Pier Stanislao Paolucci, Abigail Morrison, Johanna Senk

Abstract: Simulation speed matters for neuroscientific research: this includes not only how quickly the simulated model time of a large-scale spiking neuronal network progresses, but also how long it takes to instantiate the network model in computer memory. On the hardware side, acceleration via highly parallel GPUs is being increasingly utilized. On the software side, code generation approaches ensure highly optimized code, at the expense of repeated code regeneration and recompilation after modifications to the network model. Aiming for a greater flexibility with respect to iterative model changes, here we propose a new method for creating network connections interactively, dynamically, and directly in GPU memory through a set of commonly used high-level connection rules. We validate the simulation performance with both consumer and data center GPUs on two neuroscientifically relevant models: a cortical microcircuit of about 77,000 leaky-integrate-and-fire neuron models and 300 million static synapses, and a two-population network recurrently connected using a variety of connection rules. With our proposed ad hoc network instantiation, both network construction and simulation times are comparable or shorter than those obtained with other state-of-the-art simulation technologies, while still meeting the flexibility demands of explorative network modeling.

1.Exploiting the Brain's Network Structure for Automatic Identification of ADHD Subjects

Authors:Soumyabrata Dey, Ravishankar Rao, Mubarak Shah

Abstract: Attention Deficit Hyperactivity Disorder (ADHD) is a common behavioral problem affecting children. In this work, we investigate the automatic classification of ADHD subjects using resting state Functional Magnetic Resonance Imaging (fMRI) sequences of the brain. We show that the brain can be modeled as a functional network, and certain properties of these networks differ in ADHD subjects from control subjects. We compute the pairwise correlation of brain voxels' activity over the time frame of the experimental protocol, which helps to model the function of a brain as a network. Different network features are computed for each of the voxels constructing the network. The concatenation of the network features of all the voxels in a brain serves as the feature vector. Feature vectors from a set of subjects are then used to train a PCA-LDA (principal component analysis-linear discriminant analysis) based classifier. We hypothesized that ADHD-related differences lie in some specific regions of the brain and that using features only from those regions is sufficient to discriminate ADHD and control subjects. We propose a method to create a brain mask that includes only the useful regions and demonstrate that using features from the masked regions improves classification accuracy on the test data set. We train our classifier with 776 subjects and test on 171 subjects provided by The Neuro Bureau for the ADHD-200 challenge. We demonstrate the utility of graph-motif features, specifically the maps that represent the frequency of participation of voxels in network cycles of length 3. The best classification performance (69.59%) is achieved using 3-cycle map features with masking. Our proposed approach holds promise for diagnosing and understanding the disorder.
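The core pipeline, correlation features flattened into a vector and fed to PCA followed by LDA, can be sketched on synthetic data as follows (group structure, dimensions, and couplings are invented for illustration; this is not the ADHD-200 data or the paper's masking step):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def connectivity_features(ts):
    # Pairwise Pearson correlations of node time series; the upper
    # triangle is flattened into one feature vector per subject.
    c = np.corrcoef(ts)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

def make_subject(coupling):
    # Synthetic "subject": 20 nodes x 100 time points, with one node pair
    # whose coupling differs between the two groups.
    base = rng.standard_normal((20, 100))
    base[1] = coupling * base[0] + (1 - coupling) * base[1]
    return connectivity_features(base)

X = np.array([make_subject(0.1) for _ in range(40)]
             + [make_subject(0.7) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
clf.fit(X[::2], y[::2])                 # train on half the subjects
print(clf.score(X[1::2], y[1::2]))      # held-out accuracy
```

The paper's masking step corresponds to restricting `connectivity_features` to a subset of informative voxels before classification.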

1.Nonlinear slow-timescale mechanisms in synaptic plasticity

Authors:Cian O'Donnell

Abstract: Learning and memory rely on synapses changing their strengths in response to neural activity. However, there is a substantial gap between the timescales of neural electrical dynamics (1-100 ms) and organism behaviour during learning (seconds-minutes). What mechanisms bridge this timescale gap? What are the implications for theories of brain learning? Here I first cover experimental evidence for slow-timescale factors in plasticity induction. Then I review possible underlying cellular and synaptic mechanisms, and insights from recent computational models that incorporate such slow-timescale variables. I conclude that future progress on understanding brain learning across timescales will require both experimental and computational modelling studies that map out the nonlinearities implemented by both fast and slow plasticity mechanisms at synapses, and crucially, their joint interactions.

1.Dendrites and Efficiency: Optimizing Performance and Resource Utilization

Authors:Roman Makarov, Michalis Pagkalos, Panayiota Poirazi

Abstract: The brain is a highly efficient system evolved to achieve high performance with limited resources. We propose that dendrites make information processing and storage in the brain more efficient through the segregation of inputs and their conditional integration via nonlinear events, the compartmentalization of activity and plasticity and the binding of information through synapse clustering. In real-world scenarios with limited energy and space, dendrites help biological networks process natural stimuli on behavioral timescales, perform the inference process on those stimuli in a context-specific manner, and store the information in overlapping populations of neurons. A global picture starts to emerge, in which dendrites help the brain achieve efficiency through a combination of optimization strategies balancing the tradeoff between performance and resource utilization.

1.A cognitive process approach to modeling gap acceptance in overtaking

Authors:Samir H. A. Mohammad, Haneen Farah, Arkady Zgonnikov

Abstract: Driving automation holds significant potential for enhancing traffic safety. However, effectively handling interactions with human drivers in mixed traffic remains a challenging task. Several models exist that attempt to capture human behavior in traffic interactions, often focusing on gap acceptance. However, it is not clear how models of an individual driver's gap acceptance can be translated to dynamic human-AV interactions in the context of high-speed scenarios like overtaking. In this study, we address this issue by employing a cognitive process approach to describe the dynamic interactions with the oncoming vehicle during overtaking maneuvers. Our findings reveal that by incorporating an initial decision-making bias dependent on the initial velocity into existing drift-diffusion models, we can accurately describe the qualitative patterns of overtaking gap acceptance observed previously. Our results demonstrate the potential of the cognitive process approach in modeling human overtaking behavior when the oncoming vehicle is an AV. To this end, this study contributes to the development of effective strategies for ensuring safe and efficient overtaking interactions between human drivers and AVs.
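The proposed modification, a starting-point bias that depends on the oncoming vehicle's initial velocity, can be illustrated with a bare-bones drift-diffusion simulation (the linear velocity-to-bias mapping and all parameters are hypothetical, not the fitted model):

```python
import numpy as np

def simulate_ddm(drift, bias, n_trials, rng, threshold=1.0, dt=0.01, t_max=5.0):
    # Drift-diffusion: evidence starts at `bias` and drifts until it hits
    # +threshold (accept the gap) or -threshold (reject). Trials that never
    # reach a bound within t_max count as not accepted.
    n_steps = int(t_max / dt)
    accepted = 0
    for _ in range(n_trials):
        x = bias
        for _ in range(n_steps):
            x += drift * dt + rng.standard_normal() * np.sqrt(dt)
            if x >= threshold:
                accepted += 1
                break
            if x <= -threshold:
                break
    return accepted / n_trials

rng = np.random.default_rng(1)
# Hypothetical mapping from oncoming velocity to starting-point bias:
# a faster oncoming vehicle biases the driver toward rejecting the gap.
p_slow = simulate_ddm(drift=0.2, bias=0.5 - 0.02 * 10.0, n_trials=500, rng=rng)
p_fast = simulate_ddm(drift=0.2, bias=0.5 - 0.02 * 30.0, n_trials=500, rng=rng)
print(p_slow, p_fast)
```

Shifting only the starting point, while keeping the drift fixed, is enough to make acceptance probability depend on the initial velocity.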

1.Trial matching: capturing variability with data-constrained spiking neural networks

Authors:Christos Sourmpis, Carl Petersen, Wulfram Gerstner, Guillaume Bellec

Abstract: Simultaneous behavioral and electrophysiological recordings call for new methods to reveal the interactions between neural activity and behavior. A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials. Here, we model a cortical sensory-motor pathway in a tactile detection task with a large recurrent spiking neural network (RSNN), fitted to the recordings via gradient-based optimization. We focus specifically on the difficulty to match the trial-to-trial variability in the data. Our solution relies on optimal transport to define a distance between the distributions of generated and recorded trials. The technique is applied to artificial data and neural recordings covering six cortical areas. We find that the resulting RSNN can generate realistic cortical activity and predict jaw movements across the main modes of trial-to-trial variability. Our analysis also identifies an unexpected mode of variability in the data corresponding to task-irrelevant movements of the mouse.
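The optimal-transport distance between distributions of generated and recorded trials can be illustrated in one dimension with SciPy's `wasserstein_distance` (the Gaussian per-trial statistics here are synthetic stand-ins, not the actual recordings or the paper's loss):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Toy stand-ins for a per-trial summary statistic (e.g. population rate
# in a window): "recorded" trials versus two candidate models.
recorded = rng.normal(5.0, 2.0, size=300)
model_a = rng.normal(5.1, 2.1, size=300)   # matches trial-to-trial variability
model_b = rng.normal(5.0, 0.3, size=300)   # right mean, too little variability

# The 1-D Wasserstein (optimal-transport) distance compares whole
# distributions, so it penalizes mismatched variability even when the
# trial-averaged means agree.
d_a = wasserstein_distance(recorded, model_a)
d_b = wasserstein_distance(recorded, model_b)
print(round(d_a, 2), round(d_b, 2))
```

A loss of this kind is what lets a fitted network be rewarded for reproducing variability across trials, not just the trial average.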

1.Visuomotor feedback tuning in the absence of visual error information

Authors:Sae Franklin, David W. Franklin

Abstract: Large increases in visuomotor feedback gains occur during initial adaptation to novel dynamics, which we propose are due to increased internal model uncertainty. That is, large errors indicate increased uncertainty in our prediction of the environment, increasing feedback gains and co-contraction as a coping mechanism. Our previous work showed distinct patterns of visuomotor feedback gains during abrupt or gradual adaptation to a force field, suggesting two complementary processes: reactive feedback gains increasing with internal model uncertainty and the gradual learning of predictive feedback gains tuned to the environment. Here we further investigate what drives these changes in visuomotor feedback gains during learning, by separating the effects of internal model uncertainty from those of the visual error signal through the removal of visual error information. Removing visual error information suppresses the visuomotor feedback gains in all conditions, but the pattern of modulation throughout adaptation is unaffected. Moreover, we find increased muscle co-contraction in both abrupt and gradual adaptation protocols, demonstrating that visuomotor feedback responses are independent of the level of co-contraction. Our results suggest that visual feedback benefits motor adaptation tasks through higher visuomotor feedback gains, but when it is not available participants adapt at a similar rate through increased co-contraction. We have demonstrated a direct connection between learning and predictive visuomotor feedback gains, independent of visual error signals. This further supports our hypothesis that internal model uncertainty drives initial increases in feedback gains.

2.Suppression of chaos in a partially driven recurrent neural network

Authors:Shotaro Takasu, Toshio Aoyagi

Abstract: The dynamics of recurrent neural networks (RNNs), and particularly their response to inputs, play a critical role in information processing. In many applications of RNNs, only a specific subset of the neurons receives inputs. However, it remains to be theoretically clarified how restricting the input to a specific subset of neurons affects the network dynamics. Considering recurrent neural networks with such restricted input, we investigate how the proportion, $p$, of neurons receiving inputs (the "input neurons") and a quantity, $\xi$, representing the strength of the input signals affect the dynamics by analytically deriving the conditional maximum Lyapunov exponent. Our results show that for sufficiently large $p$, the maximum Lyapunov exponent decreases monotonically as a function of $\xi$, indicating the suppression of chaos, but if $p$ is smaller than a critical threshold, $p_c$, even significantly amplified inputs cannot suppress spontaneous chaotic dynamics. Furthermore, although the value of $p_c$ seemingly depends on several model parameters, such as the sparseness and strength of recurrent connections, it is proved to be intrinsically determined solely by the strength of chaos in the spontaneous activity of the RNN. That is to say, despite changes in these model parameters, it is possible to represent the value of $p_c$ as a common invariant function by appropriately scaling these parameters to yield the same strength of spontaneous chaos. Our study suggests that if $p$ is above $p_c$, we can bring the neural network to the edge of chaos, thereby maximizing its information processing capacity, by adjusting $\xi$.
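A minimal numerical sketch of the setup above, assuming a standard rate-RNN surrogate rather than the authors' exact model (the sinusoidal drive, network size, and gain are illustrative choices, and the exponent is estimated by the usual two-trajectory method rather than derived analytically):

```python
import numpy as np

def max_lyapunov(p=0.8, xi=0.0, N=200, T=2000, dt=0.1, g=1.5, seed=0):
    """Estimate the (conditional) maximum Lyapunov exponent of a rate RNN
    x' = -x + J tanh(x) + u(t), where only a fraction p of neurons receives
    a common sinusoidal drive of amplitude xi. Two nearby trajectories are
    evolved and the separation is renormalized at every step."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), (N, N))   # g > 1: chaotic when undriven
    mask = (rng.random(N) < p).astype(float)      # which neurons get input
    x = rng.normal(size=N)
    y = x + 1e-8 * rng.normal(size=N)             # perturbed copy
    d0 = np.linalg.norm(y - x)
    lam = 0.0
    for t in range(T):
        u = xi * np.sin(1.0 * t * dt) * mask      # shared input signal
        x = x + dt * (-x + J @ np.tanh(x) + u)
        y = y + dt * (-y + J @ np.tanh(y) + u)
        d = np.linalg.norm(y - x)
        lam += np.log(d / d0)
        y = x + (y - x) * (d0 / d)                # renormalize separation
    return lam / (T * dt)

lam_weak = max_lyapunov(xi=0.0)    # spontaneous (chaotic) regime
lam_strong = max_lyapunov(xi=6.0)  # strong drive to most neurons
```

With `p` well above the critical threshold, strong input should pull the estimated exponent below its spontaneous value, which is the qualitative effect the paper analyzes.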

3.The feasibility of artificial consciousness through the lens of neuroscience

Authors:Jaan Aru, Matthew Larkum, James M. Shine

Abstract: Interactions with large language models have led to the suggestion that these models may be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the architecture of large language models is missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Secondly, the inputs to large language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Finally, while the previous two arguments can be overcome in future AI systems, the third one might be harder to bridge in the near future. Namely, we argue that consciousness might depend on having 'skin in the game', in that the existence of the system depends on its actions, which is not true for present-day artificial intelligence.

4.Second Sight: Using brain-optimized encoding models to align image distributions with human brain activity

Authors:Reese Kneeland, Jordyn Ojeda, Ghislain St-Yves, Thomas Naselaris

Abstract: Two recent developments have accelerated progress in image reconstruction from human brain activity: large datasets that offer samples of brain activity in response to many thousands of natural scenes, and the open-sourcing of powerful stochastic image-generators that accept both low- and high-level guidance. Most work in this space has focused on obtaining point estimates of the target image, with the ultimate goal of approximating literal pixel-wise reconstructions of target images from the brain activity patterns they evoke. This emphasis belies the fact that there is always a family of images that are equally compatible with any evoked brain activity pattern, and the fact that many image-generators are inherently stochastic and do not by themselves offer a method for selecting the single best reconstruction from among the samples they generate. We introduce a novel reconstruction procedure (Second Sight) that iteratively refines an image distribution to explicitly maximize the alignment between the predictions of a voxel-wise encoding model and the brain activity patterns evoked by any target image. We show that our process converges on a distribution of high-quality reconstructions by refining both semantic content and low-level image details across iterations. Images sampled from these converged image distributions are competitive with state-of-the-art reconstruction algorithms. Interestingly, the time-to-convergence varies systematically across visual cortex, with earlier visual areas generally taking longer and converging on narrower image distributions, relative to higher-level brain areas. Second Sight thus offers a succinct and novel method for exploring the diversity of representations across visual brain areas.
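Second Sight's generator-in-the-loop search can be caricatured with a cross-entropy-style refinement against a toy linear encoding model (purely illustrative; the paper uses stochastic image generators and voxel-wise encoding models fit to fMRI, none of which is reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_vox = 32, 100
W = rng.normal(size=(d_vox, d_img))      # toy linear "encoding model"
target_img = rng.normal(size=d_img)
target_act = W @ target_img              # brain activity "evoked" by the target

def score(candidates):
    """Alignment between encoding-model predictions and target activity
    (negative prediction error; higher is better)."""
    pred = candidates @ W.T
    return -np.linalg.norm(pred - target_act, axis=1)

# Iteratively refine a candidate-image distribution toward high alignment
mu, sigma = np.zeros(d_img), 1.0
for it in range(30):
    cand = mu + sigma * rng.normal(size=(64, d_img))   # sample candidates
    elite = cand[np.argsort(score(cand))[-8:]]         # keep best-aligned
    mu, sigma = elite.mean(axis=0), 0.9 * sigma        # narrow the distribution

final_err = np.linalg.norm(W @ mu - target_act)
init_err = np.linalg.norm(target_act)                  # error of a zero image
```

The shrinking `sigma` mirrors the paper's observation that the image distribution narrows as it converges; here the narrowing schedule is simply fixed.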

1.Reliability of energy landscape analysis of resting-state functional MRI data

Authors:Pitambar Khanra, Johan Nakuci, Sarah Muldoon, Takamitsu Watanabe, Naoki Masuda

Abstract: Energy landscape analysis is a data-driven method to analyze multidimensional time series, including functional magnetic resonance imaging (fMRI) data. It has been shown to be a useful characterization of fMRI data in health and disease. It fits an Ising model to the data and captures the dynamics of the data as movement of a noisy ball constrained on the energy landscape derived from the estimated Ising model. In the present study, we examine the test-retest reliability of energy landscape analysis. To this end, we construct a permutation test that assesses whether or not indices characterizing the energy landscape are more consistent across different sets of scanning sessions from the same participant (i.e., within-participant reliability) than across different sets of sessions from different participants (i.e., between-participant reliability). We show that energy landscape analysis has significantly higher within-participant than between-participant test-retest reliability with respect to four commonly used indices. We also show that a variational Bayesian method, which enables us to estimate energy landscapes tailored to each participant, displays test-retest reliability comparable to that of the conventional likelihood maximization method. The proposed methodology paves the way to performing individual-level energy landscape analysis for given data sets with statistically controlled reliability.
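The within- versus between-participant logic of the permutation test can be sketched on toy scalar indices (a hedged illustration only; the actual study computes energy-landscape indices from fitted Ising models, and its test statistic may differ from the simple one assumed here):

```python
import numpy as np

def perm_test_reliability(index, participant, n_perm=2000, seed=0):
    """Permutation test: is an index more similar across sessions of the
    same participant than across participants? `index` holds one scalar per
    session. Statistic: mean absolute within-participant difference; the
    null is built by shuffling participant labels over sessions."""
    index = np.asarray(index, float)
    participant = np.asarray(participant)
    rng = np.random.default_rng(seed)

    def within_stat(labels):
        diffs = []
        for pid in np.unique(labels):
            vals = index[labels == pid]
            for i in range(len(vals)):
                for j in range(i + 1, len(vals)):
                    diffs.append(abs(vals[i] - vals[j]))
        return np.mean(diffs)

    observed = within_stat(participant)
    null = np.array([within_stat(rng.permutation(participant))
                     for _ in range(n_perm)])
    # small observed statistic = high within-participant consistency
    p_value = np.mean(null <= observed)
    return observed, p_value

# Toy data: 5 participants x 4 sessions, with participant-specific offsets
rng = np.random.default_rng(1)
participant = np.repeat(np.arange(5), 4)
index = rng.normal(0, 1, 5)[participant] + rng.normal(0, 0.1, 20)

obs, p = perm_test_reliability(index, participant)
```

Because the toy indices carry a strong participant-specific component, the within-participant statistic falls far below the shuffled null and the test returns a small p-value.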

2.The Dynamic Sensorium competition for predicting large-scale mouse visual cortex activity from videos

Authors:Polina Turishcheva, Paul G. Fahey, Laura Hansel, Rachel Froebe, Kayla Ponder, Michaela Vystrčilová, Konstantin F. Willeke, Mohammad Bashiri, Eric Wang, Zhiwei Ding, Andreas S. Tolias, Fabian H. Sinz, Alexander S. Ecker

Abstract: Understanding how biological visual systems process information is challenging due to the complex nonlinear relationship between neuronal responses and high-dimensional visual input. Artificial neural networks have already improved our understanding of this system by allowing computational neuroscientists to create predictive models and bridge biological and machine vision. During the Sensorium 2022 competition, we introduced benchmarks for vision models with static input. However, animals operate and excel in dynamic environments, making it crucial to study and understand how the brain functions under these conditions. Moreover, many biological theories, such as predictive coding, suggest that previous input is crucial for current input processing. Currently, there is no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we propose the Sensorium 2023 Competition with dynamic input. This includes the collection of a new large-scale dataset from the primary visual cortex of five mice, containing responses from over 38,000 neurons to over 2 hours of dynamic stimuli per neuron. Participants in the main benchmark track will compete to identify the best predictive models of neuronal responses for dynamic input. We will also host a bonus track in which submission performance will be evaluated on out-of-domain input, using withheld neuronal responses to dynamic input stimuli whose statistics differ from the training set. Both tracks will offer behavioral data along with video stimuli. As before, we will provide code, tutorials, and strong pre-trained baseline models to encourage participation. We hope this competition will continue to strengthen the accompanying Sensorium benchmarks collection as a standard tool to measure progress in large-scale neural system identification models of the entire mouse visual hierarchy and beyond.
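A per-neuron correlation score of the kind typically used to rank such benchmark submissions might look like this (a generic sketch, not the official Sensorium 2023 evaluation code; the data shapes and demo models are assumptions):

```python
import numpy as np

def per_neuron_correlation(pred, true):
    """Pearson correlation between predicted and recorded activity, computed
    independently for each neuron (axis 0 = time/trials, axis 1 = neurons).
    A common headline score for neural system identification benchmarks."""
    pred = pred - pred.mean(axis=0)
    true = true - true.mean(axis=0)
    num = (pred * true).sum(axis=0)
    den = np.sqrt((pred ** 2).sum(axis=0) * (true ** 2).sum(axis=0))
    return num / den

rng = np.random.default_rng(0)
true = rng.normal(size=(500, 50))                    # time x neurons
good = 0.9 * true + 0.1 * rng.normal(size=true.shape)  # accurate "model"
bad = rng.normal(size=true.shape)                    # unrelated "model"

r_good = per_neuron_correlation(good, true).mean()
r_bad = per_neuron_correlation(bad, true).mean()
```

Averaging the per-neuron correlations gives a single leaderboard-style number; out-of-domain tracks like the one described above would simply apply the same score to held-out stimuli with different statistics.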

3.Adaptive coding efficiency in recurrent cortical circuits via gain control

Authors:Lyndon R. Duong, Colin Bredenberg, David J. Heeger, Eero P. Simoncelli

Abstract: Sensory systems across all modalities and species exhibit adaptation to continuously changing input statistics. Individual neurons have been shown to modulate their response gains so as to maximize information transmission in different stimulus contexts. Experimental measurements have revealed additional, nuanced sensory adaptation effects including changes in response maxima and minima, tuning curve repulsion from the adapter stimulus, and stimulus-driven response decorrelation. Existing explanations of these phenomena rely on changes in inter-neuronal synaptic efficacy, which, while more flexible, are unlikely to operate as rapidly or reversibly as single neuron gain modulations. Using published V1 population adaptation data, we show that propagation of single neuron gain changes in a recurrent network is sufficient to capture the entire set of observed adaptation effects. We propose a novel adaptive efficient coding objective with which single neuron gains are modulated, maximizing the fidelity of the stimulus representation while minimizing overall activity in the network. From this objective, we analytically derive a set of gains that optimize the trade-off between preserving information about the stimulus and conserving metabolic resources. Our model generalizes well-established concepts of single neuron adaptive gain control to recurrent populations, and parsimoniously explains experimental adaptation data.

1.Neural correlates of cognitive ability and visuo-motor speed: validation of IDoCT on UK Biobank Data

Authors:Valentina Giunchiglia, Sharon Curtis, Stephen Smith, Naomi Allen, Adam Hampshire

Abstract: Automated online and App-based cognitive assessment tasks are becoming increasingly popular in large-scale cohorts and biobanks due to advantages in affordability, scalability and repeatability. However, the summary scores that such tasks generate typically conflate the cognitive processes that are the intended focus of assessment with basic visuomotor speeds, testing device latencies and speed-accuracy tradeoffs. This lack of precision presents a fundamental limitation when studying brain-behaviour associations. Previously, we developed a novel modelling approach that leverages continuous performance recordings from large-cohort studies to achieve an iterative decomposition of cognitive tasks (IDoCT), which outputs data-driven estimates of cognitive abilities, and device and visuomotor latencies, whilst recalibrating trial-difficulty scales. Here, we further validate the IDoCT approach with UK Biobank imaging data. First, we examine whether IDoCT can improve ability distributions and trial-difficulty scales from an adaptive picture-vocabulary task (PVT). Then, we confirm that the resultant visuomotor and cognitive estimates associate more robustly with age and education than the original PVT scores. Finally, we conduct a multimodal brain-wide association study with free-text analysis to test whether the brain regions that predict the IDoCT estimates have the expected differential relationships with visuomotor vs. language and memory labels within the broader imaging literature. Our results support the view that the rich performance timecourses recorded during computerised cognitive assessments can be leveraged with modelling frameworks like IDoCT to provide estimates of human cognitive abilities that have superior distributions, test-retest reliabilities and brain-wide associations.

2.Identification of Novel Diagnostic Neuroimaging Biomarkers for Autism Spectrum Disorder Through Convolutional Neural Network-Based Analysis of Functional, Structural, and Diffusion Tensor Imaging Data Towards Enhanced Autism Diagnosis

Authors:Annie Adhikary

Abstract: Autism Spectrum Disorder is one of the leading neurodevelopmental disorders in our world, present in over 1% of the population and rapidly increasing in prevalence, yet the condition lacks a robust, objective, and efficient diagnostic. Clinical diagnostic criteria rely on subjective behavioral assessments, which are prone to misdiagnosis as they face limitations in terms of their heterogeneity, specificity, and biases. This study proposes a novel convolutional neural network-based classification tool that aims to identify the potential of different neuroimaging features as autism biomarkers. The model is constructed using a set of sequential layers specifically designed to extract relevant features from brain scans. Trained and tested on over 300,000 distinct features across three imaging types, the model shows promising results, achieving an accuracy of 95.4% and outperforming metrics of current gold standard diagnostics. 32 optimal features from the imaging data were identified and classified as candidate biomarkers using an independent samples t-test, in which functional features such as neural activity and connectivity in various brain regions exhibited the highest differences in the mean values between individuals with autism and typical control subjects. The p-values of these biomarkers were < 0.001, demonstrating the statistical significance of the results and indicating that this research could pave the way towards the usage of neuroimaging in conjunction with behavioral criteria in clinics. Furthermore, the salient features discovered in the brain structure of individuals with autism could lead to a more profound understanding of the underlying neurobiological mechanisms of the disorder, which remains one of the most substantial enigmas in the field even today.
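The independent-samples t-test step for flagging candidate biomarkers can be sketched as follows (illustrative only; the feature counts, effect sizes, and Welch variant are assumptions of this sketch, not the study's pipeline):

```python
import numpy as np

def welch_t(a, b):
    """Welch's independent-samples t statistic, computed per feature
    (rows = subjects, columns = features)."""
    ma, mb = a.mean(axis=0), b.mean(axis=0)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    return (ma - mb) / np.sqrt(va / len(a) + vb / len(b))

rng = np.random.default_rng(0)
n_feat = 1000
asd = rng.normal(0, 1, (40, n_feat))    # toy "ASD group" feature matrix
ctrl = rng.normal(0, 1, (40, n_feat))   # toy "control group"
asd[:, :32] += 1.5                      # plant 32 discriminative features

t = welch_t(asd, ctrl)
top = np.argsort(-np.abs(t))[:32]       # candidate biomarkers by |t|
recovered = np.intersect1d(top, np.arange(32)).size
```

With a large planted effect, ranking features by |t| recovers nearly all of the discriminative columns; in practice a multiple-comparisons correction would be applied before declaring p < 0.001 significance across thousands of features.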

3.The Motor System at the heart of Decision-Making and Action Execution

Authors:Gerard Derosiere

Abstract: In this Thesis, I synthesize 10 years of work on the role of the motor system in sensorimotor decision-making. First, a large part of the work we initially performed questioned the functional role of the motor system in the integration of so-called decision variables such as the reward associated with different actions, the sensory evidence in favor of each action, or the level of urgency in a given context. To this end, although the exact methodology may have varied, the approach exploited has been to study either the impact of a perturbation of the primary motor cortex (M1) on the integration of such decision variables in decision behavior, or the influence of these variables on changes in M1 activity during the decision. More recently (2020 - present), we have been investigating the neural origin of some of the changes in M1 activity observed during decision-making. To answer this question, a "perturbation-and-measurement" approach is exploited: the activity of a structure at a distance from M1 is perturbed, and the impact on the changes in M1 activity during decision-making is measured. The thesis ends with a personal reflection on this paradigmatic evolution and discusses some key questions to be addressed in our field of research.

1.Understanding the neural architecture of emotion regulation by comparing two different strategies: A meta-analytic approach

Authors:Bianca Monachesi, Alessandro Grecucci, Parisa Ahmadi Ghomroudi, Irene Messina

Abstract: In the emotion regulation literature, the large number of neuroimaging studies on cognitive reappraisal has led to the impression that the same top-down, control-related neural mechanisms characterize all emotion regulation strategies. However, top-down processes may coexist with more bottom-up and emotion-focused processes that partially bypass the recruitment of executive functions. A case in point is acceptance-based strategies. To better understand the neural commonalities and differences behind different emotion regulation strategies, in the present study we applied a meta-analytic method to fMRI studies of task-related activity during reappraisal and acceptance. Results showed increased activity in the left inferior frontal gyrus and insula for both strategies, decreased activity in the basal ganglia for reappraisal, and decreased activity in limbic regions for acceptance. These findings are discussed in the context of a model of common and specific neural mechanisms of emotion regulation that supports and expands previous dual-route models. We suggest that emotion regulation may rely on a core inhibitory circuit and on top-down and bottom-up processes specific to each strategy.

1.A Mean-Field Method for Generic Conductance-Based Integrate-and-Fire Neurons with Finite Timescales

Authors:Marcelo P. Becker, Marco A. P. Idiart

Abstract: The construction of transfer functions in theoretical neuroscience plays an important role in determining the spiking rate behavior of neurons in networks. These functions can be obtained through various fitting methods, but the biological relevance of the parameters is not always clear. However, for stationary inputs, such functions can be obtained without the adjustment of free parameters by using mean-field methods. In this work, we expand current Fokker-Planck approaches to account for the concurrent influence of colored and multiplicative noise terms on generic conductance-based integrate-and-fire neurons. We reduce the stochastic system resulting from the diffusion approximation to a one-dimensional Langevin equation. An effective Fokker-Planck equation is then constructed using Fox theory and solved numerically to obtain the transfer function. The solution is capable of reproducing the transfer function behavior of simulated neurons across a wide range of parameters. The method can also be easily extended to account for different sources of noise with various multiplicative terms and, in principle, can be used for other types of problems.
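For comparison with such mean-field transfer functions, a brute-force simulated estimate for a conductance-based LIF neuron might look like this (a sketch with illustrative parameter values, colored conductance noise modeled as an Ornstein-Uhlenbeck process; this is the simulation side, not the paper's Fox-theory solution):

```python
import numpy as np

def lif_rate(g_e, T=5.0, dt=1e-4, seed=0):
    """Simulate a conductance-based LIF neuron driven by a noisy excitatory
    conductance with mean g_e (nS) and return its firing rate (Hz). The
    conductance follows an OU process, giving colored, multiplicative noise."""
    rng = np.random.default_rng(seed)
    E_L, E_e, V_th, V_reset = -70.0, 0.0, -50.0, -60.0   # mV
    g_L, C = 10.0, 200.0                                 # nS, pF
    tau_e, sigma_e = 5e-3, 0.3 * g_e                     # s, nS
    v, ge, spikes = E_L, g_e, 0
    for _ in range(int(T / dt)):
        # OU conductance: relaxes to g_e with correlation time tau_e
        ge += dt * (g_e - ge) / tau_e \
              + sigma_e * np.sqrt(2 * dt / tau_e) * rng.normal()
        ge = max(ge, 0.0)
        # nS * mV / pF = V/s, so multiply by 1e3 to step v in mV
        v += dt * 1e3 * (g_L * (E_L - v) + ge * (E_e - v)) / C
        if v >= V_th:
            v, spikes = V_reset, spikes + 1
    return spikes / T

# Transfer function: firing rate as a function of mean input conductance
rates = [lif_rate(g) for g in (1.0, 3.0, 6.0)]
```

The mean-field machinery in the paper aims to predict exactly this rate-versus-input curve without running the simulation.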

2.Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology

Authors:Michaela Ennis

Abstract: The great behavioral heterogeneity observed between individuals with the same psychiatric disorder and even within one individual over time complicates both clinical practice and biomedical research. However, modern technologies are an exciting opportunity to improve behavioral characterization. Data from existing psychiatric methods that are qualitative or unscalable, such as patient surveys or clinical interviews, can now be collected at much greater capacity and analyzed to produce new quantitative measures. Furthermore, recent capabilities for continuous collection of passive sensor streams, such as phone GPS or smartwatch accelerometer, open avenues of novel questioning that were previously entirely unrealistic. Their temporally dense nature enables a cohesive study of real-time neural and behavioral signals. To develop comprehensive neurobiological models of psychiatric disease, it will be critical to first develop strong methods for behavioral quantification. There is huge potential in what can theoretically be captured by current technologies, but this in itself presents a large computational challenge -- one that will necessitate new data processing tools, new machine learning techniques, and ultimately a shift in how interdisciplinary work is conducted. In my thesis, I detail research projects that take different perspectives on digital psychiatry, subsequently tying ideas together with a concluding discussion on the future of the field. I also provide software infrastructure where relevant, with extensive documentation. Major contributions include scientific arguments and proof of concept results for daily free-form audio journals as an underappreciated psychiatry research datatype, as well as novel stability theorems and pilot empirical success for a proposed multi-area recurrent neural network architecture.

1.Routing by spontaneous synchronization

Authors:Maik Schünemann, Udo Ernst

Abstract: Selective attention allows behaviorally relevant stimuli to be processed while distracting information is attenuated. However, it remains an open question which mechanisms implement selective routing, and how they are engaged depending on behavioral need. Here we introduce a novel framework for selective processing by spontaneous synchronization. Input signals become organized into 'avalanches' of synchronized spikes which propagate to target populations. Selective attention enhances spontaneous synchronization and boosts signal transfer by a simple disinhibition of a control population, without requiring changes in synaptic weights. Our framework is fully analytically tractable and provides a complete understanding of all stages of the routing mechanism, yielding closed-form expressions for input-output correlations. Interestingly, although gamma oscillations can naturally occur through recurrent dynamics, we can formally show that the routing mechanism itself does not require such oscillatory activity and would work equally well if synchronous events were randomly shuffled over time. Our framework explains a large range of physiological findings in a unified manner and makes specific predictions about putative control mechanisms and their effects on neural dynamics.

2.Strong attentional modulation of V1/V2 activity implements a robust, contrast-invariant control mechanism for selective information processing

Authors:Lukas-Paul Rausch, Maik Schünemann, Eric Drebitz, Daniel Harnack, Udo A. Ernst, Andreas K. Kreiter

Abstract: When selective attention is devoted to one of multiple stimuli within receptive fields of neurons in visual area V4, cells respond as if only the attended stimulus was present. The underlying neural mechanisms are still debated, but computational studies suggest that a small rate advantage for neural populations passing the attended signal to V4 suffices to establish such selective processing. We challenged this theory by pairing stimuli with different luminance contrasts, such that attention on a weak target stimulus would have to overcome a large activation difference to a strong distracter. In this situation we found unexpectedly large attentional target facilitation in macaque V1/V2 which far surpasses known magnitudes of attentional modulation. Target facilitation scales with contrast difference and combines with distracter suppression to achieve the required rate advantage. These effects can be explained by a contrast-independent attentional control mechanism with excitatory centre and suppressive surround targeting divisive normalization units.

1.From Data-Fitting to Discovery: Interpreting the Neural Dynamics of Motor Control through Reinforcement Learning

Authors:Eugene R. Rush, Kaushik Jayaram, J. Sean Humbert

Abstract: In motor neuroscience, artificial recurrent neural network models often complement animal studies. However, most modeling efforts are limited to data-fitting, and the few that examine virtual embodied agents in a reinforcement learning context do not draw direct comparisons to their biological counterparts. Our study addresses this gap by uncovering structured neural activity of a virtual robot performing legged locomotion that directly supports experimental findings on primate walking and cycling. We find that embodied agents trained to walk exhibit smooth dynamics that avoid tangling -- or opposing neural trajectories in neighboring neural space -- a core principle in computational neuroscience. Specifically, across a wide suite of gaits, the agent's neural trajectories in the recurrent layers are less tangled than those in the input-driven actuation layers. To better interpret the neural separation of these elliptical-shaped trajectories, we identify speed axes that maximize the variance of mean activity across different forward, lateral, and rotational speed conditions.
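The tangling measure referenced above can be sketched directly (a sketch of the standard trajectory-tangling statistic from the motor-cortex literature; the toy 2-D trajectories below stand in for recorded or network activity and are not the study's data):

```python
import numpy as np

def tangling(X, dt, eps=1e-6):
    """Trajectory tangling: for each time point, the maximum ratio of
    squared derivative difference to squared state difference over all other
    time points. High values flag nearby states with opposing flow."""
    dX = np.gradient(X, dt, axis=0)          # finite-difference derivative
    Q = np.zeros(len(X))
    for t in range(len(X)):
        num = np.sum((dX[t] - dX) ** 2, axis=1)
        den = np.sum((X[t] - X) ** 2, axis=1) + eps
        Q[t] = np.max(num / den)
    return Q

# Smooth circular trajectory (low tangling) vs a self-crossing figure-eight
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
dt = t[1] - t[0]
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
eight = np.stack([np.sin(t), np.sin(2 * t)], axis=1)

q_circle = tangling(circle, dt).max()
q_eight = tangling(eight, dt).max()
```

The figure-eight passes through the same state with two different velocities, so its tangling explodes at the crossing, which is the signature the recurrent layers of the trained agent are reported to avoid.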

1.Abnormal Functional Brain Network Connectivity Associated with Alzheimer's Disease

Authors:Yongcheng Yao

Abstract: The study's objective is to explore the distinctions in the functional brain network connectivity between Alzheimer's Disease (AD) patients and normal controls using Functional Magnetic Resonance Imaging (fMRI). The study included 590 individuals, with 175 having AD dementia and 415 age-, gender-, and handedness-matched normal controls. The connectivity of functional brain networks was measured using ROI-to-ROI and ROI-to-Voxel connectivity analyses. The findings reveal a general decrease in functional connectivity among the AD group in comparison to the normal control group. These results advance our comprehension of AD pathophysiology and could assist in identifying AD biomarkers.
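The ROI-to-ROI analysis can be sketched as a Fisher-z correlation matrix computed per group (a minimal illustration on synthetic time series; ROI counts, preprocessing, and the group comparison here are assumptions, not the study's pipeline):

```python
import numpy as np

def roi_connectivity(ts):
    """ROI-to-ROI functional connectivity: Fisher-z-transformed Pearson
    correlations between ROI time series (rows = time, columns = ROIs)."""
    r = np.corrcoef(ts, rowvar=False)
    np.fill_diagonal(r, 0.0)            # ignore self-connections
    return np.arctanh(r)                # Fisher z, suitable for group stats

rng = np.random.default_rng(0)
n_time, n_roi = 150, 10
shared = rng.normal(size=(n_time, 1))
# "control" time series: ROIs share a strong common signal
ts_control = 0.8 * shared + rng.normal(size=(n_time, n_roi))
# "patient" time series: weaker shared signal (reduced connectivity)
ts_patient = 0.3 * shared + rng.normal(size=(n_time, n_roi))

z_control = roi_connectivity(ts_control)
z_patient = roi_connectivity(ts_patient)
mean_diff = z_control.mean() - z_patient.mean()   # positive: control > patient
```

A decrease in mean Fisher-z connectivity for the "patient" group is the toy analogue of the general AD-related reduction reported above; a real analysis would test edge-wise differences with appropriate correction.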

2.Understanding visual processing of motion: Completing the picture using experimentally driven computational models of MT

Authors:Parvin Zarei Eskikand, David B Grayden, Tatiana Kameneva, Anthony N Burkitt, Michael R Ibbotson

Abstract: Computational modeling helps neuroscientists to integrate and explain experimental data obtained through neurophysiological and anatomical studies, thus providing a mechanism by which we can better understand and predict the principles of neural computation. Computational modeling of the neuronal pathways of the visual cortex has been successful in developing theories of biological motion processing. This review describes a range of computational models that have been inspired by neurophysiological experiments. Theories of local motion integration and pattern motion processing are presented, together with suggested neurophysiological experiments designed to test those hypotheses.

1.Neural Responses to Political Words in Natural Speech Differ by Political Orientation

Authors:Shuhei Kitamura, Aya S. Ihara

Abstract: Worldviews may differ significantly according to political orientation. Even a single word can have a completely different meaning depending on political orientation. However, direct evidence indicating differences in the neural responses to words between conservative- and liberal-leaning individuals has not been obtained. The present study aimed to investigate whether neural responses related to semantic processing of political words in natural speech differ according to political orientation. We measured electroencephalographic signals while participants with different political orientations listened to natural speech. Responses to moral-, ideology-, and policy-related words between and within the participant groups were then compared. Within-group comparisons showed that right-leaning participants reacted more to moral-related words than to policy-related words, while left-leaning participants reacted more to policy-related words than to moral-related words. In addition, between-group comparisons showed that neural responses to moral-related words were greater in right-leaning participants than in left-leaning participants, and those to policy-related words were smaller in right-leaning participants than in neutral participants. There was a significant correlation between the predicted and self-reported political orientations. In summary, the study found that people with different political orientations differ in semantic processing at the level of a single word. These findings have implications for understanding the mechanisms of political polarization and for making policy messages more effective.

2.Selective imitation on the basis of reward function similarity

Authors:Max Taylor-Davies, Stephanie Droop, Christopher G. Lucas

Abstract: Imitation is a key component of human social behavior, and is widely used by both children and adults as a way to navigate uncertain or unfamiliar situations. But in an environment populated by multiple heterogeneous agents pursuing different goals or objectives, indiscriminate imitation is unlikely to be an effective strategy -- the imitator must instead determine who is most useful to copy. There are likely many factors that play into these judgements, depending on context and availability of information. Here we investigate the hypothesis that these decisions involve inferences about other agents' reward functions. We suggest that people preferentially imitate the behavior of others they deem to have similar reward functions to their own. We further argue that these inferences can be made on the basis of very sparse or indirect data, by leveraging an inductive bias toward positing the existence of different 'groups' or 'types' of people with similar reward functions, allowing learners to select imitation targets without direct evidence of alignment.

3.Applications of information geometry to spiking neural network behavior

Authors:Jacob T. Crosser, Braden A. W. Brinkman

Abstract: The space of possible behaviors complex biological systems may exhibit is unimaginably vast, and these systems often appear to be stochastic, whether due to variable noisy environmental inputs or intrinsically generated chaos. The brain is a prominent example of a biological system with complex behaviors. The number of possible patterns of spikes emitted by a local brain circuit is combinatorially large, though the brain may not make use of all of them. Understanding which of these possible patterns are actually used by the brain, and how those sets of patterns change as properties of neural circuitry change is a major goal in neuroscience. Recently, tools from information geometry have been used to study embeddings of probabilistic models onto a hierarchy of model manifolds that encode how model behaviors change as a function of their parameters, giving a quantitative notion of "distances" between model behaviors. We apply this method to a network model of excitatory and inhibitory neural populations to understand how the competition between membrane and synaptic response timescales shapes the network's information geometry. The hyperbolic embedding allows us to identify the statistical parameters to which the model behavior is most sensitive, and demonstrate how the ranking of these coordinates changes with the balance of excitation and inhibition in the network.
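The sensitivity ranking described above can be illustrated with a Fisher information matrix for a toy independent-Poisson rate model (the model, parameters, and finite-difference Jacobian are assumptions of this sketch, not the paper's hyperbolic embedding):

```python
import numpy as np

def fisher_info(rate_fn, theta, eps=1e-5):
    """Fisher information matrix of an independent-Poisson spiking model
    with rates rate_fn(theta): F = J^T diag(1/r) J, where J = dr/dtheta is
    estimated by central finite differences. Eigen-decomposition of F ranks
    'stiff' (sensitive) vs 'sloppy' (insensitive) parameter directions."""
    theta = np.asarray(theta, float)
    r0 = rate_fn(theta)
    J = np.zeros((len(r0), len(theta)))
    for k in range(len(theta)):
        d = np.zeros_like(theta)
        d[k] = eps
        J[:, k] = (rate_fn(theta + d) - rate_fn(theta - d)) / (2 * eps)
    return J.T @ (J / r0[:, None])

# Toy excitatory-inhibitory rate model with gain and coupling parameters
def rates(theta):
    g_e, g_i, w = theta
    return np.array([np.exp(g_e - w * g_i), np.exp(g_i + 0.1 * w * g_e)])

F = fisher_info(rates, [1.0, 0.5, 0.3])
eigvals = np.sort(np.linalg.eigvalsh(F))[::-1]   # stiff first, sloppy last
```

With three parameters but only two output rates, the smallest eigenvalue is (numerically) zero: a whole parameter direction leaves the model's behavior unchanged, the "sloppiness" that such information-geometric embeddings make explicit.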

1.Connecting levels of analysis in the computational era

Authors:Richard Naud, André Longtin

Abstract: Neuroscience and artificial intelligence are closely intertwined, but so are the physics of dynamical systems, philosophy, and psychology. Each of these fields tries in its own way to relate observations at the level of molecules, synapses, neurons, or behavior to a function. An influential conceptual approach to this end was popularized by David Marr, which focused on the interaction between three theoretical 'levels of analysis'. With the convergence of simulation-based approaches, algorithm-oriented Neuro-AI, and high-throughput data, we currently see much research organized around four levels of analysis: observations, models, algorithms, and functions. Bidirectional interaction between these levels influences how we undertake interdisciplinary science.

2.Neuroscience needs Network Science

Authors:Dániel L Barabási, Ginestra Bianconi, Ed Bullmore, Mark Burgess, SueYeon Chung, Tina Eliassi-Rad, Dileep George, István A. Kovács, Hernán Makse, Christos Papadimitriou, Thomas E. Nichols, Olaf Sporns, Kim Stachenfeld, Zoltán Toroczkai, Emma K. Towlson, Anthony M Zador, Hongkui Zeng, Albert-László Barabási, Amy Bernard, György Buzsáki

Abstract: The brain is a complex system comprising a myriad of interacting elements, posing significant challenges in understanding its structure, function, and dynamics. Network science has emerged as a powerful tool for studying such intricate systems, offering a framework for integrating multiscale data and complexity. Here, we discuss the application of network science in the study of the brain, addressing topics such as network models and metrics, the connectome, and the role of dynamics in neural networks. We explore the challenges and opportunities in integrating multiple data streams for understanding the neural transitions from development to healthy function to disease, and discuss the potential for collaboration between network science and neuroscience communities. We underscore the importance of fostering interdisciplinary opportunities through funding initiatives, workshops, and conferences, as well as supporting students and postdoctoral fellows with interests in both disciplines. By uniting the network science and neuroscience communities, we can develop novel network-based methods tailored to neural circuits, paving the way towards a deeper understanding of the brain and its functions.

1.A unified framework of metastability in neuroscience

Authors:Kalel L. Rossi, Roberto C. Budzinski, Everton S. Medeiros, Bruno R. R. Boaretto, Lyle Muller, Ulrike Feudel

Abstract: Neural activity typically follows a series of transitions between well-defined states, in a regime generally called metastability. In this perspective, we review current observations and formulations of metastability to argue that they have been largely context-dependent, and a unified framework is still missing. To address this, we propose a context-independent framework that unifies the context-dependent formulations by defining metastability as an umbrella term encompassing regimes with transient but long-lived states. This definition can be applied directly to experimental data but also connects neatly to the theory of nonlinear dynamical systems, which allows us to extract a general dynamical principle for metastability: the coexistence of attracting and repelling directions in phase space. With this, we extend known mechanisms and propose new ones that can implement metastability through this general dynamical principle. We believe that our framework is an important advancement towards a better understanding of metastability in the brain, and can facilitate the development of tools to predict and control the brain's behavior.

1.Assessing Rate limits Using Behavioral and Neural Responses of Interaural-Time-Difference Cues in Fine-Structure and Envelope

Authors:Hongmei Hu, Stephan Ewert, Birger Kollmeier, Deborah Vickers

Abstract: The objective was to determine the effect of pulse rate on listeners' sensitivity to interaural-time-difference (ITD) cues and to explore the mechanisms behind rate-dependent degradation in ITD perception in bilateral cochlear implant (CI) listeners using CI simulations and electroencephalogram (EEG) measures. To eliminate the impact of CI stimulation artifacts and to develop protocols for the ongoing bilateral CI studies, upper-frequency limits for both behavior and EEG responses were obtained from normal hearing (NH) listeners using sinusoidal-amplitude-modulated (SAM) tones and filtered clicks with changes in either fine structure ITD or envelope ITD. Multiple EEG responses were recorded, including the subcortical auditory steady-state responses (ASSRs) and cortical auditory evoked potentials (CAEPs) elicited by stimuli onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by envelope ITD changes were significantly smaller or absent compared to those elicited by fine structure ITD changes. The ACC morphologies evoked by fine structure ITD changes were similar to onset and offset CAEPs, although smaller than onset CAEPs, with the longest peak latencies for ACC responses and shortest for offset CAEPs. The study found that high-frequency stimuli clearly elicited subcortical ASSRs, but these were smaller than those evoked by lower-carrier-frequency SAM tones. The 40-Hz ASSRs decreased with increasing carrier frequencies. Filtered clicks elicited larger ASSRs compared to high-frequency SAM tones, with the order being 40-Hz-ASSR>160-Hz-ASSR>80-Hz-ASSR>320-Hz-ASSR for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz-ASSRs in the time-frequency domain for SAM tones with a low carrier frequency.

1.Accuracy in readout of glutamate concentrations by neuronal cells

Authors:Swoyam Biswal, Vaibhav Wasnik

Abstract: Glutamate and glycine are important neurotransmitters in the brain. An action potential propagating in the terminal of a presynaptic neuron causes the release of glutamate and glycine into the synapse by vesicles fusing with the cell membrane, which then activate various receptors on the cell membrane of the postsynaptic neuron. Entry of Ca2+ through the activated NMDA receptors leads to a host of cellular processes, of which long-term potentiation is of crucial importance because it is widely considered to be one of the major mechanisms behind learning and memory. By analysing the readout of glutamate concentration by the postsynaptic neurons during Ca2+ signaling, we find that the average receptor density in hippocampal neurons has evolved to allow for accurate measurement of the glutamate concentration in the synaptic cleft.
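The abstract does not state which accuracy bound it works from; a standard reference point for concentration-readout accuracy (my assumption, not necessarily the paper's formulation) is the Berg-Purcell limit for a sensor of linear size $a$ measuring a ligand concentration $c$ over an integration time $T$, with ligand diffusion constant $D$:

```latex
\left(\frac{\delta c}{c}\right)^{2} \;\sim\; \frac{1}{D\,a\,c\,T}
```

Receptor density enters through the effective sensor size: adding receptors reduces the readout error only until the postsynaptic membrane approaches this diffusion-limited bound, which is the kind of argument that makes an "evolved" receptor density testable.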

1.Ecologically mapped neuronal identity: Towards standardizing activity across heterogeneous experiments

Authors:Kevin Luxem, David Eriksson

Abstract: The brain's diversity of neurons enables a rich behavioral repertoire and flexible adaptation to new situations. Assuming that ecological pressure has optimized this neuronal variety, we propose exploiting naïve behavior to map neuronal identity. Here we investigate the feasibility of identifying neurons "ecologically", using their activation for natural behavioral and environmental parameters. Such a neuronal ECO-marker might give a finer granularity than is possible with genetic or molecular markers, thereby facilitating the comparison of the functional characteristics of individual neurons across animals. In contrast to a potential mapping using artificial stimuli and trained behavior, which have an unlimited parameter space, an ecological mapping is experimentally feasible since it is bounded by the ecology. The home-cage environment is an excellent basis for this ECO-mapping, since it covers an extensive behavioral repertoire and home-cage behavior is similar across laboratories. We review the possibility of adding area-specific environmental enrichment and automatized behavioral tasks to identify neurons in specific brain areas. In this work, we focus on the visual cortex, motor cortex, prefrontal cortex, and hippocampus. Fundamental to achieving this identification is taking advantage of state-of-the-art behavioral tracking, sensory stimulation protocols, and the plethora of creative behavioral solutions for rodents. We find that motor areas might be easiest to address, followed by prefrontal, hippocampal, and visual areas. The possibility of acquiring a near-complete ecological identification with minimal animal handling, minimal constraints on the main experiment, and data compatibility across laboratories might outweigh the necessity of implanting electrodes or imaging devices.

2.Incomplete hippocampal inversion and hippocampal subfield volumes: Implementation and inter-reliability of automatic segmentation

Authors:Agustina Fragueiro EMPENN, Giorgia Committeri Ud'A, Claire Cury EMPENN

Abstract: Incomplete hippocampal inversion (IHI) is an atypical anatomical pattern of the hippocampus. The hippocampus is not a homogeneous structure, however, as it consists of segregated subfields with specific characteristics. While IHI is not related to whole hippocampal volume, higher IHI scores have been associated with smaller CA1 volumes in aging. Although the segmentation of hippocampal subfields is challenging due to their small size, there are algorithms allowing their automatic segmentation. Using a Human Connectome Project dataset of healthy young adults, we first tested the inter-method reliability of two methods for automatic segmentation of hippocampal subfields, and secondly, we explored the relationship between IHI and subfield volumes. Results showed strong correlations between the volumes obtained through both segmentation methods. Furthermore, higher IHI scores were associated with larger subiculum and smaller CA1 volumes. Here, we provide new insights regarding IHI and subfield volumetry, and we offer support for the inter-method reliability of automatic segmentation.

1.Perceived community alignment increases information sharing

Authors:Elisa C. Baek, Ryan Hyon, Karina López, Mason A. Porter, Carolyn Parkinson

Abstract: Information sharing is a ubiquitous and consequential behavior that has been proposed to play a critical role in cultivating and maintaining a sense of shared reality. Across three studies, we tested this theory by investigating whether or not people are especially likely to share information that they believe will be interpreted similarly by others in their social circles. Using neuroimaging while members of the same community viewed brief film clips, we found that more similar neural responses among participants were associated with a greater likelihood of sharing content. We then tested this relationship using behavioral studies and found (1) that people were particularly likely to share content about which they believed others in their social circles would share their viewpoints and (2) that this relationship is causal. In concert, our findings support the idea that people are driven to share information to create and reinforce shared understanding, which is critical to social connection.

1.Orientation selectivity of affine Gaussian derivative based receptive fields

Authors:Tony Lindeberg

Abstract: This paper presents a theoretical analysis of the orientation selectivity of simple and complex cells that can be well modelled by the generalized Gaussian derivative model for visual receptive fields, with the purely spatial component of the receptive fields determined by oriented affine Gaussian derivatives for different orders of spatial differentiation. A detailed mathematical analysis is presented for the three different cases of either: (i) purely spatial receptive fields, (ii) space-time separable spatio-temporal receptive fields and (iii) velocity-adapted spatio-temporal receptive fields. Closed-form theoretical expressions for the orientation selectivity curves for idealized models of simple and complex cells are derived for all these main cases, and it is shown that the degree of orientation selectivity of the receptive fields increases with a scale parameter ratio $\kappa$, defined as the ratio between the scale parameters in the directions perpendicular to vs. parallel with the preferred orientation of the receptive field. It is also shown that the degree of orientation selectivity increases with the order of spatial differentiation in the underlying affine Gaussian derivative operators over the spatial domain. We conclude by describing biological implications of the derived theoretical results, demonstrating that the predictions from the presented theory are consistent with previously established biological results concerning broad vs. sharp orientation tuning of visual neurons in the primary visual cortex, as well as consistent with a previously formulated biological hypothesis, stating that the biological receptive field shapes should span the degrees of freedom in affine image transformations, to support affine covariance over the population of receptive fields in the primary visual cortex.
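As a notational sketch of case (i) above (symbols mine, not necessarily the paper's), the purely spatial component is an oriented affine Gaussian with covariance $\Sigma$, and the scale parameter ratio $\kappa$ compares the directions perpendicular to vs. parallel with the preferred orientation:

```latex
g(x;\Sigma) = \frac{1}{2\pi\sqrt{\det\Sigma}}\,
e^{-\frac{1}{2}\,x^{\mathsf T}\Sigma^{-1}x},
\qquad
\Sigma = R_\varphi
\begin{pmatrix} \sigma_\parallel^{2} & 0 \\ 0 & \sigma_\perp^{2} \end{pmatrix}
R_\varphi^{\mathsf T},
\qquad
\kappa = \frac{\sigma_\perp}{\sigma_\parallel}
```

with $R_\varphi$ the rotation to the preferred orientation $\varphi$; the idealized receptive fields are then directional derivatives of $g$ of increasing order of spatial differentiation.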

1.Whole-brain functional imaging to highlight differences between the diurnal and nocturnal neuronal activity in zebrafish larvae

Authors:Giuseppe de Vito, Lapo Turrini, Chiara Fornetto, Elena Trabalzini, Pietro Ricci, Duccio Fanelli, Francesco Vanzi, Francesco Saverio Pavone

Abstract: Most living organisms show highly conserved physiological changes following a 24-hour cycle which goes by the name of circadian rhythm. Among experimental models, the effects of light-dark cycle have been recently investigated in the larval zebrafish. Owing to its small size and transparency, this vertebrate enables optical access to the entire brain. Indeed, the combination of this organism with light-sheet imaging grants high spatio-temporal resolution volumetric recording of neuronal activity. This imaging technique, in its multiphoton variant, allows functional investigations without unwanted visual stimulation. Here, we employed a custom two-photon light-sheet microscope to study whole-brain differences in neuronal activity between diurnal and nocturnal periods in larval zebrafish. We describe for the first time an activity increase in the low frequency domain of the pretectum and a frequency-localised activity decrease of the anterior rhombencephalic turning region during the nocturnal period. Moreover, our data confirm a nocturnal reduction in habenular activity. Furthermore, whole-brain detrended fluctuation analysis revealed a nocturnal decrease in the self-affinity of the neuronal signals in parts of the dorsal thalamus and the medulla oblongata. Our data show that whole-brain nonlinear light-sheet imaging represents a useful tool to investigate circadian rhythm effects on neuronal activity.

2.Long time scales, individual differences, and scale invariance in animal behavior

Authors:William Bialek, Joshua W. Shaevitz

Abstract: The explosion of data on animal behavior in more natural contexts highlights the fact that these behaviors exhibit correlations across many time scales. But there are major challenges in analyzing these data: records of behavior in single animals have fewer independent samples than one might expect; in pooling data from multiple animals, individual differences can mimic long-ranged temporal correlations; conversely long-ranged correlations can lead to an over-estimate of individual differences. We suggest an analysis scheme that addresses these problems directly, apply this approach to data on the spontaneous behavior of walking flies, and find evidence for scale invariant correlations over nearly three decades in time, from seconds to one hour. Three different measures of correlation are consistent with a single underlying scaling field of dimension $\Delta = 0.180\pm 0.005$.
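A scaling field of dimension $\Delta$ implies two-point correlations decaying as $C(t) \propto t^{-2\Delta}$. As a minimal sketch (my construction, not the authors' analysis pipeline) of how such an exponent can be read off a correlation curve with a log-log least-squares fit:

```python
import math

def fit_power_law_exponent(ts, cs):
    """Ordinary least-squares slope of log C versus log t."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(c) for c in cs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Synthetic correlation curve C(t) = t^{-2*Delta} with Delta = 0.180
delta = 0.180
ts = [float(t) for t in range(1, 1001)]
cs = [t ** (-2 * delta) for t in ts]

slope = fit_power_law_exponent(ts, cs)  # expected slope: -2*Delta
delta_hat = -slope / 2                  # recovered scaling dimension
```

On real behavioral data the fit range and the handling of individual differences are exactly the subtleties the paper addresses; this sketch only shows the exponent-estimation step on clean synthetic data.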

3.Circumstantial evidence and explanatory models for synapses in large-scale spike recordings

Authors:Ian H. Stevenson

Abstract: Whether, when, and how causal interactions between neurons can be meaningfully studied from observations of neural activity alone are vital questions in neural data analysis. Here we aim to better outline the concept of functional connectivity for the specific situation where systems neuroscientists aim to study synapses using spike train recordings. In some cases, cross-correlations between the spikes of two neurons are such that, although we may not be able to say that a relationship is causal without experimental manipulations, models based on synaptic connections provide precise explanations of the data. Additionally, there is often strong circumstantial evidence that pairs of neurons are monosynaptically connected. Here we illustrate how circumstantial evidence for or against synapses can be systematically assessed and show how models of synaptic effects can provide testable predictions for pair-wise spike statistics. We use case studies from large-scale multi-electrode spike recordings to illustrate key points and to demonstrate how modeling synaptic effects using large-scale spike recordings opens a wide range of data analytic questions.

4.Pulse shape and voltage-dependent synchronization in spiking neuron networks

Authors:Bastian Pietras

Abstract: Pulse-coupled spiking neural networks are a powerful tool to gain mechanistic insights into how neurons self-organize to produce coherent collective behavior. These networks use simple spiking neuron models, such as the $\theta$-neuron or the quadratic integrate-and-fire (QIF) neuron, that replicate the essential features of real neural dynamics. Interactions between neurons are modeled with infinitely narrow pulses, or spikes, rather than the more complex dynamics of real synapses. To make these networks biologically more plausible, it has been proposed that they must also account for the finite width of the pulses, which can have a significant impact on the network dynamics. However, the derivation and interpretation of these pulses is contradictory and the impact of the pulse shape on the network dynamics is largely unexplored. Here, I take a comprehensive approach to pulse-coupling in networks of QIF and $\theta$-neurons. I argue that narrow pulses activate voltage-dependent synaptic conductances and show how to implement them in QIF neurons such that their effect can last through the phase after the spike. Using an exact low-dimensional description for networks of globally coupled spiking neurons, I prove for instantaneous interactions that collective oscillations emerge due to an effective coupling through the mean voltage. I analyze the impact of the pulse shape by means of a family of smooth pulse functions with arbitrary finite width and symmetric or asymmetric shapes. For symmetric pulses, the resulting voltage-coupling is little effective in synchronizing neurons, but pulses that are slightly skewed to the phase after the spike readily generate collective oscillations. The results unveil a voltage-dependent spike synchronization mechanism in neural networks, which is facilitated by pulses of finite width and complementary to traditional synaptic transmission.
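A minimal sketch of the single QIF neuron mentioned above, using forward-Euler integration with finite reset and peak voltages (all parameter values are illustrative, not taken from the paper):

```python
def simulate_qif(i_ext=25.0, v_reset=-10.0, v_peak=10.0, dt=1e-4, t_max=2.0):
    """Forward-Euler integration of the QIF equation dV/dt = V^2 + I
    with a spike-and-reset rule when V crosses v_peak."""
    v = v_reset
    spike_times = []
    for k in range(int(t_max / dt)):
        v += dt * (v * v + i_ext)
        if v >= v_peak:                 # spike: record time, reset voltage
            spike_times.append((k + 1) * dt)
            v = v_reset
    return spike_times

# With I > 0 the neuron fires tonically; the continuous-time period is
# (1/sqrt(I)) * (atan(v_peak/sqrt(I)) - atan(v_reset/sqrt(I))) ~ 0.443 s here.
spikes = simulate_qif()
```

In the paper's networks the pulses coupling such neurons have finite width and shape; this sketch covers only the uncoupled single-neuron dynamics that those networks build on.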

1.Decoding Neural Activity to Assess Individual Latent State in Ecologically Valid Contexts

Authors:Stephen M. Gordon, Jonathan R. McDaniel, Kevin W. King, Vernon J. Lawhern, Jonathan Touryan

Abstract: There exist very few ways to isolate cognitive processes, historically defined via highly controlled laboratory studies, in more ecologically valid contexts. Specifically, it remains unclear as to what extent patterns of neural activity observed under such constraints actually manifest outside the laboratory in a manner that can be used to make an accurate inference about the latent state, associated cognitive process, or proximal behavior of the individual. Improving our understanding of when and how specific patterns of neural activity manifest in ecologically valid scenarios would provide validation for laboratory-based approaches that study similar neural phenomena in isolation and meaningful insight into the latent states that occur during complex tasks. We argue that domain generalization methods from the brain-computer interface community have the potential to address this challenge. We previously used such an approach to decode phasic neural responses associated with visual target discrimination. Here, we extend that work to more tonic phenomena such as internal latent states. We use data from two highly controlled laboratory paradigms to train two separate domain-generalized models. We apply the trained models to an ecologically valid paradigm in which participants performed multiple, concurrent driving-related tasks. Using the pretrained models, we derive estimates of the underlying latent state and associated patterns of neural activity. Importantly, as the patterns of neural activity change along the axis defined by the original training data, we find changes in behavior and task performance consistent with the observations from the original, laboratory paradigms. We argue that these results lend ecological validity to those experimental designs and provide a methodology for understanding the relationship between observed neural activity and behavior during complex tasks.

1.Synchronization in STDP-driven memristive neural networks with time-varying topology

Authors:Marius E. Yamakou, Mathieu Desroches, Serafim Rodrigues

Abstract: Synchronization is a widespread phenomenon in the brain. Despite numerous studies, the specific parameter configurations of the synaptic network structure and learning rules needed to achieve robust and enduring synchronization in neurons driven by spike-timing-dependent plasticity (STDP) and temporal networks subject to homeostatic structural plasticity (HSP) rules remain unclear. Here, we bridge this gap by determining the configurations required to achieve high and stable degrees of complete synchronization (CS) and phase synchronization (PS) in time-varying small-world and random neural networks driven by STDP and HSP. In particular, we found that decreasing $P$ (which enhances the strengthening effect of STDP on the average synaptic weight) and increasing $F$ (which speeds up the swapping rate of synapses between neurons) always lead to higher and more stable degrees of CS and PS in small-world and random networks, provided that the network parameters such as the synaptic time delay $\tau_c$, the average degree $\langle k \rangle$, and the rewiring probability $\beta$ have some appropriate values. When $\tau_c$, $\langle k \rangle$, and $\beta$ are not fixed at these appropriate values, the degree and stability of CS and PS may increase or decrease when $F$ increases, depending on the network topology. It is also found that the time delay $\tau_c$ can induce intermittent CS and PS whose occurrence is independent of $F$. Our results could have applications in designing neuromorphic circuits for optimal information processing and transmission via synchronization phenomena.
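The degree of phase synchronization in oscillator networks is commonly quantified with the Kuramoto order parameter $R \in [0, 1]$; whether this matches the paper's exact PS measure is an assumption on my part. A minimal sketch:

```python
import cmath
import math

def order_parameter(phases):
    """Kuramoto order parameter R = |<e^{i*phi}>| over the population.
    R = 1 for identical phases, R ~ 0 for phases spread over the circle."""
    z = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(z)

n = 100
synced = [0.3] * n                                 # fully synchronized
spread = [2 * math.pi * k / n for k in range(n)]   # uniform on the circle

r_sync = order_parameter(synced)    # -> 1.0
r_spread = order_parameter(spread)  # -> ~0.0 (sum of roots of unity)
```

Tracking $R$ over time is one way to assess how "high and stable" the degree of synchronization is as $P$, $F$, $\tau_c$, $\langle k \rangle$, and $\beta$ are varied.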

2.Upcrossing-rate dynamics for a minimal neuron model receiving spatially distributed synaptic drive

Authors:Robert P Gowers, Magnus J E Richardson

Abstract: The spatiotemporal stochastic dynamics of the voltage as well as the upcrossing rate are derived for a model neuron comprising a long dendrite with uniformly distributed filtered excitatory and inhibitory synaptic drive. A cascade of ordinary and partial differential equations is obtained describing the evolution of first-order means and second-order spatial covariances of the voltage and its rate of change. These quantities provide an analytical form for the general, steady-state and linear response of the upcrossing rate to dynamic synaptic input. It is demonstrated that this minimal dendritic model has an unexpectedly sustained high-frequency response despite synaptic, membrane and spatial filtering.
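For a stationary Gaussian voltage process, the upcrossing rate through a threshold $\theta$ is classically given by Rice's formula (a standard result; the paper's expressions for the spatially extended dendritic model are necessarily more elaborate):

```latex
\nu(\theta) \;=\; \frac{1}{2\pi}\,
\sqrt{\frac{\sigma_{\dot V}^{2}}{\sigma_{V}^{2}}}\;
\exp\!\left(-\frac{(\theta-\mu_V)^{2}}{2\sigma_V^{2}}\right)
```

where $\mu_V$ and $\sigma_V^2$ are the mean and variance of the voltage and $\sigma_{\dot V}^2$ is the variance of its rate of change. This is why a cascade of equations for the first- and second-order moments of the voltage and its rate of change, as derived in the paper, suffices to determine the upcrossing rate.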

1.Hierarchical network structure as the source of power-law frequency spectra (state-trait continua) in living and non-living systems: how physical traits and personalities emerge from first principles in biophysics

Authors:Rutger Goekoop, Roy de Kleijn

Abstract: What causes organisms to have different body plans and personalities? We address this question by looking at universal principles that govern the morphology and behavior of living systems. Living systems display a small-world network structure in which many smaller clusters are nested within fewer larger ones, producing a fractal-like structure with a power-law cluster size distribution. Their dynamics show similar qualities: the time series of inner message passing and overt behavior contain high frequencies or 'states' that are nested within lower frequencies or 'traits'. Here, we argue that the nested modular (power-law) dynamics of living systems result from their nested modular (power-law) network structure: organisms 'vertically encode' the deep spatiotemporal structure of their environments, so that high frequencies (states) are produced by many small clusters at the base of a nested-modular hierarchy and lower frequencies (traits) are produced by fewer larger clusters at its top. These include physical as well as behavioral traits. Nested-modular structure causes higher frequencies to be embedded in lower frequencies, producing power-law dynamics. Such dynamics satisfy the need for efficient energy dissipation through networks of coupled oscillators, which also governs the dynamics of non-living systems (e.g. earthquake dynamics, stock market fluctuations). Thus, we provide a single explanation for power-law frequency spectra in both living and non-living systems. If hierarchical structure indeed produces hierarchical dynamics, the development (e.g. during maturation) and collapse (e.g. during disease) of hierarchical structure should leave specific traces in power-law frequency spectra that may serve as early warning signs of system failure. The applications of this idea range from embryology and personality psychology to sociology, evolutionary biology and clinical medicine.

1.Hebbian fast plasticity and working memory

Authors:Anders Lansner, Florian Fiebig, Pawel Herman

Abstract: Theories and models of working memory (WM) have, since at least the mid-1990s, been dominated by the persistent activity hypothesis. The past decade has seen rising concerns about the shortcomings of sustained activity as the mechanism for short-term maintenance of WM information, in the light of accumulating experimental evidence for so-called activity-silent WM and the fundamental difficulty of explaining robust multi-item WM. In consequence, alternative theories are now explored, mostly in the direction of fast synaptic plasticity as the underlying mechanism. The question of non-Hebbian vs. Hebbian synaptic plasticity emerges naturally in this context. In this review we focus on fast Hebbian plasticity and trace the origins of WM theories and models building on this form of associative learning.

1.Mathematical derivation of wave propagation properties in hierarchical neural networks with predictive coding feedback dynamics

Authors:Grégory Faye, Guilhem Fouilhé, Rufin VanRullen

Abstract: Sensory perception (e.g. vision) relies on a hierarchy of cortical areas, in which neural activity propagates in both directions, to convey information not only about sensory inputs but also about cognitive states, expectations and predictions. At the macroscopic scale, neurophysiological experiments have described the corresponding neural signals as both forward and backward-travelling waves, sometimes with characteristic oscillatory signatures. It remains unclear, however, how such activity patterns relate to specific functional properties of the perceptual apparatus. Here, we present a mathematical framework, inspired by neural network models of predictive coding, to systematically investigate neural dynamics in a hierarchical perceptual system. We show that stability of the system can be systematically derived from the values of hyper-parameters controlling the different signals (related to bottom-up inputs, top-down prediction and error correction). Similarly, it is possible to determine in which direction, and at what speed neural activity propagates in the system. Different neural assemblies (reflecting distinct eigenvectors of the connectivity matrices) can simultaneously and independently display different properties in terms of stability, propagation speed or direction. We also derive continuous-limit versions of the system, both in time and in neural space. Finally, we analyze the possible influence of transmission delays between layers, and reveal the emergence of oscillations at biologically plausible frequencies.

2.Amygdala and cortical gamma band responses to emotional faces depend on the attended to valence

Authors:Enya M. Weidner, Stephan Moratti, Sebastian Schindler, Philip Grewe, Christian G. Bien, Johanna Kissler

Abstract: The amygdala is assumed to contribute to a bottom-up attentional bias during visual processing of emotional faces. Still, how its response to emotion interacts with top-down attention is not fully understood. It is also unclear if amygdala activity and scalp EEG respond to emotion and attention in a similar way. Therefore, we studied the interaction of emotion and attention during face processing in oscillatory gamma-band activity (GBA) in the amygdala and on the scalp. Amygdala signals were recorded via intracranial EEG (iEEG) in 9 patients with epilepsy. Scalp recordings were collected from 19 healthy participants. Three randomized blocks of angry, neutral, and happy faces were presented, and either negative, neutral, or positive expressions were denoted as targets. Both groups detected happy faces fastest and most accurately. In the amygdala, the earliest effect was observed around 170 ms in high GBA (105-117.5 Hz) when neutral faces served as targets. Here, GBA was higher for emotional than neutral faces. During attention to negative faces, low GBA (< 90 Hz) increased specifically for angry faces both in the amygdala and over posterior scalp regions, albeit earlier on the scalp (60 ms) than in the amygdala (210 ms). From 570 ms, amygdala high GBA (117.5-145 Hz) was also increased for both angry and neutral, compared to happy, faces. When positive faces were the targets, GBA did not differentiate between expressions. The present data reveal that attention-independent emotion detection in amygdala high GBA may only occur during a neutral focus of attention. Top-down threat vigilance coordinates widespread low GBA, biasing stimulus processing in favor of negative faces. These results are in line with a multi-pathway model of emotion processing and help specify the role of GBA in this process by revealing how attentional focus can tune timing and amplitude of emotional GBA responses.

3.Adaptive Gated Graph Convolutional Network for Explainable Diagnosis of Alzheimer's Disease using EEG Data

Authors:Dominik Klepl, Fei He, Min Wu, Daniel J. Blackburn, Ptolemaios G. Sarrigiannis

Abstract: Graph neural network (GNN) models are increasingly being used for the classification of electroencephalography (EEG) data. However, GNN-based diagnosis of neurological disorders, such as Alzheimer's disease (AD), remains a relatively unexplored area of research. Previous studies have relied on functional connectivity methods to infer brain graph structures and used simple GNN architectures for the diagnosis of AD. In this work, we propose a novel adaptive gated graph convolutional network (AGGCN) that can provide explainable predictions. AGGCN adaptively learns graph structures by combining convolution-based node feature enhancement with a well-known correlation-based measure of functional connectivity. Furthermore, the gated graph convolution can dynamically weigh the contribution of various spatial scales. The proposed model achieves high accuracy in both eyes-closed and eyes-open conditions, indicating the stability of learned representations. Finally, we demonstrate that the proposed AGGCN model generates consistent explanations of its predictions that might be relevant for further study of AD-related alterations of brain networks.

4.Altered Topological Structure of the Brain White Matter in Maltreated Children through Topological Data Analysis

Authors:Tahmineh Azizi, Moo K. Chung, Jamie Hanson, Thomas Burns, Andrew Alexander, Richard Davidson, Seth Pollak

Abstract: Childhood maltreatment may adversely affect brain development and consequently behavioral, emotional, and psychological patterns during adulthood. In this study, we propose an analytical pipeline for modeling the altered topological structure of the brain white matter in maltreated and typically developing children. We perform topological data analysis (TDA) to assess the alteration in global topology of the brain white-matter structural covariance network for child participants. We use persistent homology, an algebraic technique in TDA, to analyze topological features in the brain covariance networks constructed from structural magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI). We develop a novel framework for statistical inference based on the Wasserstein distance to assess the significance of the observed topological differences. Using these methods in comparing maltreated children to a typically developing sample, we find that maltreatment may increase homogeneity in white matter structures and thus induce higher correlations in the structural covariance; this is reflected in the topological profile. Our findings strongly demonstrate that TDA can be used as a baseline framework to model altered topological structures of the brain.
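Persistence diagrams live in a two-dimensional birth-death plane, and Wasserstein matching between diagrams also allows points to be paired with the diagonal; as a deliberately simplified one-dimensional illustration of the distance idea (not the authors' statistic):

```python
def wasserstein_1d(xs, ys):
    """W1 distance between two equal-size 1-D point sets, treating each
    as an empirical distribution: in 1-D the optimal transport plan
    matches sorted values, so W1 is the mean absolute sorted difference."""
    assert len(xs) == len(ys), "equal-size sets assumed in this sketch"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Two toy sets of persistence values, shifted by 1.0
d = wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])  # -> 1.0
```

A permutation test on such distances (shuffling group labels and recomputing the distance between group-level summaries) is one generic way to assess significance, in the spirit of the Wasserstein-based inference the paper develops.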

1.Toward Whole-Brain Minimally-Invasive Vascular Imaging

Authors:Anatole Jimenez PhysMed Paris, Bruno Osmanski PhIND, Denis Vivien PhIND, Mickael Tanter PhysMed Paris, Thomas Gaberel PhIND, Thomas Deffieux PhysMed Paris

Abstract: Imaging the brain vasculature can be critical for cerebral perfusion monitoring in the context of neurocritical care. Although ultrasensitive Doppler (UD) can provide good sensitivity to cerebral blood volume (CBV) in a large field of view, it remains difficult to perform through the skull. In this work, we investigate how a minimally invasive burr hole, performed for intracranial pressure (ICP) monitoring, could be used to map the entire brain vascular tree. We explored the use of a small motorized phased-array probe with a non-implantable preclinical prototype in pigs. The scan duration (18 min) and coverage (62 $\pm$ 12% of the brain) obtained allowed detection of global CBV variations (relative in-brain Doppler decrease = -3 [-4; +16]% and increase = +1 [-3; +15]%, n = 6 and 5) and of stroke (relative in-core Doppler = -25%, n = 1). This technology could one day be miniaturized and implanted for brain perfusion monitoring in neurocritical care.

2.Identifying epileptogenic abnormalities through spatial clustering of MEG interictal band power

Authors:Thomas W. Owen, Vytene Janiukstyte, Gerard R. Hall, Jonathan J. Horsley, Andrew McEvoy, Anna Miserocchi, Jane de Tisi, John S. Duncan, Fergus Rugg-Gunn, Yujiang Wang, Peter N. Taylor

Abstract: Successful epilepsy surgery depends on localising and resecting cerebral abnormalities and networks that generate seizures. Abnormalities, however, may be widely distributed across multiple discontiguous areas. We propose spatially constrained clusters as candidate areas for further investigation, and potential resection. We quantified the spatial overlap between the abnormality cluster and subsequent resection, hypothesising a greater overlap in seizure-free patients. Thirty-four individuals with refractory focal epilepsy underwent pre-surgical resting-state interictal MEG recording. Fourteen individuals were totally seizure free (ILAE 1) after surgery and 20 continued to have some seizures post-operatively (ILAE 2+). Band power abnormality maps were derived using controls as a baseline. Patient abnormalities were spatially clustered using the k-means algorithm. The tissue within the cluster containing the most abnormal region was compared with the resection volume using the Dice score. The proposed abnormality cluster overlapped with the resection in 71% of ILAE 1 patients. Conversely, an overlap only occurred in 15% of ILAE 2+ patients. This effect discriminated outcome groups well (AUC=0.82). Our novel approach identifies clusters of spatially similar tissue with high abnormality. This is clinically valuable, providing (i) a data-driven framework to validate current hypotheses of epileptogenic zone localisation and (ii) guidance for further investigation.
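The pipeline described here (k-means clustering of regional abnormalities, then Dice overlap of the most abnormal cluster with the resection) can be sketched in a few lines. This is an illustrative reconstruction under assumed inputs (region centroid coordinates, per-region abnormality scores, and a boolean resection mask), not the authors' exact implementation:

```python
import numpy as np

def kmeans(points, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means on an (n, d) coordinate array.
    Returns an (n,) array of cluster labels."""
    points = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centre.
        dists = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centres; a centre that loses all points is left unchanged.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = points[labels == j].mean(axis=0)
    return labels

def dice(mask_a, mask_b):
    """Dice coefficient between two boolean masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

Given per-region abnormality scores `abn`, the candidate cluster is the one containing the single most abnormal region, `labels == labels[np.argmax(abn)]`, and its overlap with the resection mask is `dice(cluster, resected)`.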

3.Interictal MEG abnormalities to guide intracranial electrode implantation and predict surgical outcome

Authors:Thomas W. Owen, Vytene Janiukstyte, Gerard R. Hall, Fahmida A. Chowdhury, Beate Diehl, Andrew McEvoy, Anna Miserocchi, Jane de Tisi, John S. Duncan, Fergus Rugg-Gunn, Yujiang Wang, Peter N. Taylor

Abstract: Intracranial EEG (iEEG) is the gold standard technique for epileptogenic zone (EZ) localisation, but requires a hypothesis of which tissue is epileptogenic, guided by qualitative analysis of seizure semiology and other imaging modalities such as magnetoencephalography (MEG). We hypothesised that if quantifiable MEG band power abnormalities were sampled by iEEG, then patients' post-resection seizure outcomes would be better. Thirty-two individuals with neocortical epilepsy underwent MEG and iEEG recordings as part of pre-surgical evaluation. Interictal MEG band power abnormalities were derived using 70 healthy controls as a normative baseline. MEG abnormality maps were compared to electrode implantation, with the spatial overlap of iEEG electrodes and MEG abnormalities recorded. Finally, we assessed whether the implantation of electrodes in abnormal tissue, and the resection of the strongest abnormalities determined by MEG and iEEG, explained surgical outcome. Intracranial electrodes were implanted in brain tissue with the most abnormal MEG findings in individuals who were seizure-free post-resection (T=3.9, p=0.003). The overlap between MEG abnormalities and iEEG electrodes distinguished outcome groups moderately well (AUC=0.68). In isolation, the resection of the strongest MEG and iEEG abnormalities separated surgical outcome groups well (AUC=0.71 and AUC=0.74, respectively). A model incorporating all three features separated outcome groups best (AUC=0.80). Intracranial EEG is a key tool to delineate the EZ and help render patients seizure-free after resection. We showed that data-driven abnormalities derived from interictal MEG recordings have clinical value and may help guide electrode placement in individuals with neocortical epilepsy. Finally, our predictive model of post-operative seizure freedom, which leverages both MEG and iEEG recordings, may aid patient counselling on expected outcomes.
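Both of the preceding abstracts derive regional abnormalities against a normative control baseline. A common and minimal way to do this (a sketch under assumed inputs, not necessarily the authors' exact normalisation) is to z-score each region's band power against the control distribution:

```python
import numpy as np

def bandpower_abnormality(patient_power, control_power):
    """Regional abnormality as absolute z-scores of a patient's band
    power against a normative control distribution.
    patient_power: (n_regions,) band power for one patient;
    control_power: (n_controls, n_regions) normative baseline."""
    patient_power = np.asarray(patient_power, dtype=float)
    control_power = np.asarray(control_power, dtype=float)
    mu = control_power.mean(axis=0)
    sd = control_power.std(axis=0, ddof=1)  # sample standard deviation
    return np.abs(patient_power - mu) / sd
```

The resulting map can then feed downstream steps such as clustering, electrode-overlap scoring, or outcome classification.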

1.Regional Deep Atrophy: a Self-Supervised Learning Method to Automatically Identify Regions Associated With Alzheimer's Disease Progression From Longitudinal MRI

Authors:Mengjin Dong, Long Xie, Sandhitsu R. Das, Jiancong Wang, Laura E. M. Wisse, Robin deFlores, David A. Wolk, Paul A. Yushkevich, for the Alzheimer's Disease Neuroimaging Initiative

Abstract: Longitudinal assessment of brain atrophy, particularly in the hippocampus, is a well-studied biomarker for neurodegenerative diseases, such as Alzheimer's disease (AD). In clinical trials, estimates of atrophy progression rates can be used to track the therapeutic efficacy of disease-modifying treatments. However, most state-of-the-art measurements calculate changes directly by segmentation and/or deformable registration of MRI images, and may misreport head motion or MRI artifacts as neurodegeneration, impacting their accuracy. In our previous study, we developed a deep learning method, DeepAtrophy, that uses a convolutional neural network to quantify differences between longitudinal MRI scan pairs that are associated with time. DeepAtrophy has high accuracy in inferring temporal information from longitudinal MRI scans, such as temporal order or relative inter-scan interval. DeepAtrophy also provides an overall atrophy score that was shown to perform well as a potential biomarker of disease progression and treatment efficacy. However, DeepAtrophy is not interpretable, and it is unclear what changes in the MRI contribute to progression measurements. In this paper, we propose Regional Deep Atrophy (RDA), which combines the temporal inference approach from DeepAtrophy with a deformable registration neural network and an attention mechanism that highlights regions in the MRI image where longitudinal changes contribute to temporal inference. RDA has similar prediction accuracy to DeepAtrophy, but its additional interpretability makes it more acceptable for use in clinical settings, and may lead to more sensitive biomarkers for disease monitoring in clinical trials of early AD.