arXiv daily: Neurons and Cognition (q-bio.NC)

1.Trial matching: capturing variability with data-constrained spiking neural networks

Authors:Christos Sourmpis, Carl Petersen, Wulfram Gerstner, Guillaume Bellec

Abstract: Simultaneous behavioral and electrophysiological recordings call for new methods to reveal the interactions between neural activity and behavior. A milestone would be an interpretable model of the co-variability of spiking activity and behavior across trials. Here, we model a cortical sensory-motor pathway in a tactile detection task with a large recurrent spiking neural network (RSNN), fitted to the recordings via gradient-based optimization. We focus specifically on the difficulty of matching the trial-to-trial variability in the data. Our solution relies on optimal transport to define a distance between the distributions of generated and recorded trials. The technique is applied to artificial data and neural recordings covering six cortical areas. We find that the resulting RSNN can generate realistic cortical activity and predict jaw movements across the main modes of trial-to-trial variability. Our analysis also identifies an unexpected mode of variability in the data corresponding to task-irrelevant movements of the mouse.
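
The core "trial matching" idea can be sketched with toy data. Here a hard one-to-one assignment via scipy's linear_sum_assignment stands in for the paper's optimal-transport distance, and the per-trial feature vectors are hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def trial_matching_distance(generated, recorded):
    """Distance between two sets of trials (n_trials x n_features):
    pair each generated trial one-to-one with a recorded trial so that the
    total Euclidean cost is minimal, then report the mean matched cost."""
    cost = np.linalg.norm(generated[:, None, :] - recorded[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)   # optimal hard assignment
    return cost[rows, cols].mean()

rng = np.random.default_rng(0)
recorded = rng.normal(size=(50, 20))           # hypothetical per-trial features
identical = trial_matching_distance(recorded, recorded)
shifted = trial_matching_distance(recorded + 2.0, recorded)
```

Because the assignment is over whole trials rather than averaged responses, the distance is sensitive to the shape of the trial distribution, which is what lets the fit capture trial-to-trial variability.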

1.Visuomotor feedback tuning in the absence of visual error information

Authors:Sae Franklin, David W. Franklin

Abstract: Large increases in visuomotor feedback gains occur during initial adaptation to novel dynamics, which we propose are due to increased internal model uncertainty. That is, large errors indicate increased uncertainty in our prediction of the environment, increasing feedback gains and co-contraction as a coping mechanism. Our previous work showed distinct patterns of visuomotor feedback gains during abrupt or gradual adaptation to a force field, suggesting two complementary processes: reactive feedback gains increasing with internal model uncertainty and the gradual learning of predictive feedback gains tuned to the environment. Here we further investigate what drives these changes in visuomotor feedback gains during learning, by separating the effects of internal model uncertainty from the visual error signal through the removal of visual error information. Removing visual error information suppresses the visuomotor feedback gains in all conditions, but the pattern of modulation throughout adaptation is unaffected. Moreover, we find increased muscle co-contraction in both abrupt and gradual adaptation protocols, demonstrating that visuomotor feedback responses are independent of the level of co-contraction. Our results suggest that visual feedback benefits motor adaptation tasks through higher visuomotor feedback gains, but when it is not available participants adapt at a similar rate through increased co-contraction. We have demonstrated a direct connection between learning and predictive visuomotor feedback gains, independent of visual error signals. This further supports our hypothesis that internal model uncertainty drives initial increases in feedback gains.

2.Suppression of chaos in a partially driven recurrent neural network

Authors:Shotaro Takasu, Toshio Aoyagi

Abstract: The dynamics of recurrent neural networks (RNNs), and particularly their response to inputs, play a critical role in information processing. In many applications of RNNs, only a specific subset of the neurons receives inputs. However, it remains to be theoretically clarified how restricting the input to a specific subset of neurons affects the network dynamics. Considering recurrent neural networks with such restricted input, we investigate how the proportion, $p$, of the neurons receiving inputs (the "input neurons") and a quantity, $\xi$, representing the strength of the input signals affect the dynamics by analytically deriving the conditional maximum Lyapunov exponent. Our results show that for sufficiently large $p$, the maximum Lyapunov exponent decreases monotonically as a function of $\xi$, indicating the suppression of chaos, but if $p$ is smaller than a critical threshold, $p_c$, even significantly amplified inputs cannot suppress spontaneous chaotic dynamics. Furthermore, although the value of $p_c$ seemingly depends on several model parameters, such as the sparseness and strength of the recurrent connections, it is proved to be intrinsically determined solely by the strength of chaos in the spontaneous activity of the RNN. That is to say, despite changes in these model parameters, it is possible to represent the value of $p_c$ as a common invariant function by appropriately scaling these parameters to yield the same strength of spontaneous chaos. Our study suggests that if $p$ is above $p_c$, we can bring the neural network to the edge of chaos, thereby maximizing its information processing capacity, by adjusting $\xi$.
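
As an illustration of the quantity being derived, here is a minimal numerical sketch (not the paper's analytical method): the largest Lyapunov exponent of a standard tanh rate network, estimated from two-trajectory divergence, with a fraction p of neurons receiving a common sinusoidal drive of amplitude xi. All parameter values are illustrative:

```python
import numpy as np

def max_lyapunov(p=1.0, xi=0.0, N=200, g=2.0, steps=2000, dt=0.1, seed=0):
    """Largest Lyapunov exponent of the rate network
    dx/dt = -x + g J tanh(x) + u(t), estimated from the divergence of two
    nearby trajectories that receive the same input, with the separation
    renormalized at every step. Only the first int(p*N) neurons are driven."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    n_in = int(p * N)
    x = rng.normal(size=N)
    v = rng.normal(size=N)
    y = x + 1e-8 * v / np.linalg.norm(v)
    lam = 0.0
    for t in range(steps):
        u = np.zeros(N)
        u[:n_in] = xi * np.sin(0.5 * t * dt)          # common sinusoidal drive
        x = x + dt * (-x + g * (J @ np.tanh(x)) + u)
        y = y + dt * (-y + g * (J @ np.tanh(y)) + u)
        d = np.linalg.norm(y - x)
        lam += np.log(d / 1e-8)
        y = x + (y - x) * (1e-8 / d)                  # renormalize separation
    return lam / (steps * dt)

lam_free = max_lyapunov(xi=0.0)          # spontaneous chaos (g > 1)
lam_driven = max_lyapunov(p=1.0, xi=5.0) # strong drive to all neurons
```

With p = 1 a strong drive suppresses the exponent; reducing p below the critical threshold described in the abstract should remove this suppression.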

3.The feasibility of artificial consciousness through the lens of neuroscience

Authors:Jaan Aru, Matthew Larkum, James M. Shine

Abstract: Interactions with large language models have led to the suggestion that these models may be conscious. From the perspective of neuroscience, this position is difficult to defend. For one, the architecture of large language models is missing key features of the thalamocortical system that have been linked to conscious awareness in mammals. Secondly, the inputs to large language models lack the embodied, embedded information content characteristic of our sensory contact with the world around us. Finally, while the previous two arguments can be overcome in future AI systems, the third one might be harder to bridge in the near future. Namely, we argue that consciousness might depend on having 'skin in the game', in that the existence of the system depends on its actions, which is not true for present-day artificial intelligence.

4.Second Sight: Using brain-optimized encoding models to align image distributions with human brain activity

Authors:Reese Kneeland, Jordyn Ojeda, Ghislain St-Yves, Thomas Naselaris

Abstract: Two recent developments have accelerated progress in image reconstruction from human brain activity: large datasets that offer samples of brain activity in response to many thousands of natural scenes, and the open-sourcing of powerful stochastic image-generators that accept both low- and high-level guidance. Most work in this space has focused on obtaining point estimates of the target image, with the ultimate goal of approximating literal pixel-wise reconstructions of target images from the brain activity patterns they evoke. This emphasis belies the fact that there is always a family of images that are equally compatible with any evoked brain activity pattern, and the fact that many image-generators are inherently stochastic and do not by themselves offer a method for selecting the single best reconstruction from among the samples they generate. We introduce a novel reconstruction procedure (Second Sight) that iteratively refines an image distribution to explicitly maximize the alignment between the predictions of a voxel-wise encoding model and the brain activity patterns evoked by any target image. We show that our process converges on a distribution of high-quality reconstructions by refining both semantic content and low-level image details across iterations. Images sampled from these converged image distributions are competitive with state-of-the-art reconstruction algorithms. Interestingly, the time-to-convergence varies systematically across visual cortex, with earlier visual areas generally taking longer and converging on narrower image distributions, relative to higher-level brain areas. Second Sight thus offers a succinct and novel method for exploring the diversity of representations across visual brain areas.
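
The iterative refinement can be caricatured with a toy linear encoding model. This elite-resampling search (keep the best-aligned samples, resample around them with shrinking width) is an assumption standing in for the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat, n_vox = 16, 64
W = rng.normal(size=(n_vox, n_feat))   # toy voxel-wise encoding model
target = rng.normal(size=n_feat)       # features of the unseen target image
brain = W @ target                     # the evoked brain activity pattern

def score(samples):
    """Correlation between each sample's predicted activity and the target pattern."""
    pred = samples @ W.T
    pc = pred - pred.mean(axis=1, keepdims=True)
    bc = brain - brain.mean()
    return (pc @ bc) / (np.linalg.norm(pc, axis=1) * np.linalg.norm(bc))

samples = rng.normal(size=(200, n_feat))   # broad initial image distribution
start = score(samples).mean()
width = 1.0
for _ in range(20):
    elite = samples[np.argsort(score(samples))[-20:]]   # best-aligned samples
    width *= 0.9                                        # narrow the distribution
    samples = elite[rng.integers(0, 20, 200)] + width * rng.normal(size=(200, n_feat))
final = score(samples).mean()
```

The distribution converges toward images whose predicted brain activity aligns with the target pattern, mirroring the convergence-on-a-distribution framing in the abstract.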

1.Reliability of energy landscape analysis of resting-state functional MRI data

Authors:Pitambar Khanra, Johan Nakuci, Sarah Muldoon, Takamitsu Watanabe, Naoki Masuda

Abstract: Energy landscape analysis is a data-driven method to analyze multidimensional time series, including functional magnetic resonance imaging (fMRI) data. It has been shown to be a useful characterization of fMRI data in health and disease. It fits an Ising model to the data and captures the dynamics of the data as movement of a noisy ball constrained on the energy landscape derived from the estimated Ising model. In the present study, we examine test-retest reliability of the energy landscape analysis. To this end, we construct a permutation test that assesses whether or not indices characterizing the energy landscape are more consistent across different sets of scanning sessions from the same participant (i.e., within-participant reliability) than across different sets of sessions from different participants (i.e., between-participant reliability). We show that the energy landscape analysis has significantly higher within-participant than between-participant test-retest reliability with respect to four commonly used indices. We also show that a variational Bayesian method, which enables us to estimate energy landscapes tailored to each participant, displays comparable test-retest reliability to that using the conventional likelihood maximization method. The proposed methodology paves the way to perform individual-level energy landscape analysis for given data sets with a statistically controlled reliability.
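
A minimal sketch of the landscape part of the pipeline, assuming the Ising parameters have already been estimated (here they are random toy values): enumerate the local minima of the energy for a small system, since these minima are the attractors that the indices of the analysis characterize:

```python
import numpy as np
from itertools import product

def energy(s, h, J):
    """Ising energy E(s) = -h.s - s.J.s/2 of a binarized activity pattern."""
    return -h @ s - s @ J @ s / 2.0

def local_minima(h, J):
    """All states s in {-1,+1}^N from which every single-spin flip raises
    the energy -- the attractors of the estimated energy landscape."""
    N = len(h)
    minima = []
    for bits in product((-1.0, 1.0), repeat=N):
        s = np.array(bits)
        e = energy(s, h, J)
        if all(energy(np.where(np.arange(N) == i, -s, s), h, J) > e
               for i in range(N)):
            minima.append((bits, e))
    return minima

rng = np.random.default_rng(2)
N = 6
J = rng.normal(0.0, 0.5, (N, N))
J = (J + J.T) / 2.0                    # symmetric couplings
np.fill_diagonal(J, 0.0)
h = rng.normal(0.0, 0.1, N)
mins = local_minima(h, J)
global_min = min(energy(np.array(b), h, J)
                 for b in product((-1.0, 1.0), repeat=N))
```

Exhaustive enumeration is only feasible for small N; real analyses restrict to a handful of ROIs for the same reason.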

2.The Dynamic Sensorium competition for predicting large-scale mouse visual cortex activity from videos

Authors:Polina Turishcheva, Paul G. Fahey, Laura Hansel, Rachel Froebe, Kayla Ponder, Michaela Vystrčilová, Konstantin F. Willeke, Mohammad Bashiri, Eric Wang, Zhiwei Ding, Andreas S. Tolias, Fabian H. Sinz, Alexander S. Ecker

Abstract: Understanding how biological visual systems process information is challenging due to the complex nonlinear relationship between neuronal responses and high-dimensional visual input. Artificial neural networks have already improved our understanding of this system by allowing computational neuroscientists to create predictive models and bridge biological and machine vision. During the Sensorium 2022 competition, we introduced benchmarks for vision models with static input. However, animals operate and excel in dynamic environments, making it crucial to study and understand how the brain functions under these conditions. Moreover, many biological theories, such as predictive coding, suggest that previous input is crucial for current input processing. Currently, there is no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we propose the Sensorium 2023 Competition with dynamic input. This includes the collection of a new large-scale dataset from the primary visual cortex of five mice, containing responses from over 38,000 neurons to over 2 hours of dynamic stimuli per neuron. Participants in the main benchmark track will compete to identify the best predictive models of neuronal responses for dynamic input. We will also host a bonus track in which submission performance will be evaluated on out-of-domain input, using withheld neuronal responses to dynamic input stimuli whose statistics differ from the training set. Both tracks will offer behavioral data along with video stimuli. As before, we will provide code, tutorials, and strong pre-trained baseline models to encourage participation. We hope this competition will continue to strengthen the accompanying Sensorium benchmarks collection as a standard tool to measure progress in large-scale neural system identification models of the entire mouse visual hierarchy and beyond.
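
A typical score for such benchmarks is a per-neuron correlation between predicted and recorded responses, averaged over neurons; the official Sensorium metric may differ in detail, so this is only a sketch on synthetic data:

```python
import numpy as np

def per_neuron_correlation(pred, resp):
    """Pearson correlation between predicted and recorded activity, computed
    separately for each neuron over all time bins, then averaged across
    neurons (arrays have shape time x neurons)."""
    pc = pred - pred.mean(axis=0)
    rc = resp - resp.mean(axis=0)
    num = (pc * rc).sum(axis=0)
    den = np.sqrt((pc ** 2).sum(axis=0) * (rc ** 2).sum(axis=0))
    return (num / den).mean()

rng = np.random.default_rng(5)
resp = rng.poisson(2.0, size=(1000, 30)).astype(float)        # toy recordings
good = per_neuron_correlation(resp + rng.normal(0, 0.5, resp.shape), resp)
bad = per_neuron_correlation(rng.normal(2.0, 1.0, resp.shape), resp)
```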

3.Adaptive coding efficiency in recurrent cortical circuits via gain control

Authors:Lyndon R. Duong, Colin Bredenberg, David J. Heeger, Eero P. Simoncelli

Abstract: Sensory systems across all modalities and species exhibit adaptation to continuously changing input statistics. Individual neurons have been shown to modulate their response gains so as to maximize information transmission in different stimulus contexts. Experimental measurements have revealed additional, nuanced sensory adaptation effects including changes in response maxima and minima, tuning curve repulsion from the adapter stimulus, and stimulus-driven response decorrelation. Existing explanations of these phenomena rely on changes in inter-neuronal synaptic efficacy, which, while more flexible, are unlikely to operate as rapidly or reversibly as single neuron gain modulations. Using published V1 population adaptation data, we show that propagation of single neuron gain changes in a recurrent network is sufficient to capture the entire set of observed adaptation effects. We propose a novel adaptive efficient coding objective with which single neuron gains are modulated, maximizing the fidelity of the stimulus representation while minimizing overall activity in the network. From this objective, we analytically derive a set of gains that optimize the trade-off between preserving information about the stimulus and conserving metabolic resources. Our model generalizes well-established concepts of single neuron adaptive gain control to recurrent populations, and parsimoniously explains experimental adaptation data.
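
The trade-off in the objective can be illustrated with a toy linear-Gaussian version (hypothetical parameters; the paper derives the optimal gains analytically, whereas plain gradient ascent is used here instead):

```python
import numpy as np

rng = np.random.default_rng(3)
n_stim, n_neur = 8, 12
W = rng.normal(size=(n_neur, n_stim))    # feedforward drive
A = W @ W.T                              # signal covariance before gain
lam, sigma2 = 0.05, 1.0                  # metabolic weight, response noise variance

def objective(g):
    """Stimulus information minus metabolic cost as a function of the gains g."""
    S = np.diag(g) @ A @ np.diag(g) / sigma2
    info = 0.5 * np.linalg.slogdet(np.eye(n_neur) + S)[1]  # Gaussian mutual info
    cost = lam * np.trace(S)                               # total response variance
    return info - cost

g = np.ones(n_neur)
for _ in range(300):
    # finite-difference gradient ascent on the gains
    grad = np.array([(objective(g + 1e-5 * e) - objective(g - 1e-5 * e)) / 2e-5
                     for e in np.eye(n_neur)])
    g = np.clip(g + 0.05 * grad, 0.0, None)
```

Changing the stimulus statistics (here baked into A) reshapes the optimal gains, which is the adaptation effect the model propagates through the recurrent population.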

1.Neural correlates of cognitive ability and visuo-motor speed: validation of IDoCT on UK Biobank Data

Authors:Valentina Giunchiglia, Sharon Curtis, Stephen Smith, Naomi Allen, Adam Hampshire

Abstract: Automated online and App-based cognitive assessment tasks are becoming increasingly popular in large-scale cohorts and biobanks due to advantages in affordability, scalability and repeatability. However, the summary scores that such tasks generate typically conflate the cognitive processes that are the intended focus of assessment with basic visuomotor speeds, testing device latencies and speed-accuracy tradeoffs. This lack of precision presents a fundamental limitation when studying brain-behaviour associations. Previously, we developed a novel modelling approach that leverages continuous performance recordings from large-cohort studies to achieve an iterative decomposition of cognitive tasks (IDoCT), which outputs data-driven estimates of cognitive abilities, and device and visuomotor latencies, whilst recalibrating trial-difficulty scales. Here, we further validate the IDoCT approach with UK Biobank imaging data. First, we examine whether IDoCT can improve ability distributions and trial-difficulty scales from an adaptive picture-vocabulary task (PVT). Then, we confirm that the resultant visuomotor and cognitive estimates associate more robustly with age and education than the original PVT scores. Finally, we conduct a multimodal brain-wide association study with free-text analysis to test whether the brain regions that predict the IDoCT estimates have the expected differential relationships with visuomotor vs. language and memory labels within the broader imaging literature. Our results support the view that the rich performance timecourses recorded during computerised cognitive assessments can be leveraged with modelling frameworks like IDoCT to provide estimates of human cognitive abilities that have superior distributions, test-retest reliabilities and brain-wide associations.

2.Identification of Novel Diagnostic Neuroimaging Biomarkers for Autism Spectrum Disorder Through Convolutional Neural Network-Based Analysis of Functional, Structural, and Diffusion Tensor Imaging Data Towards Enhanced Autism Diagnosis

Authors:Annie Adhikary

Abstract: Autism Spectrum Disorder is one of the leading neurodevelopmental disorders in our world, present in over 1% of the population and rapidly increasing in prevalence, yet the condition lacks a robust, objective, and efficient diagnostic. Clinical diagnostic criteria rely on subjective behavioral assessments, which are prone to misdiagnosis as they face limitations in terms of their heterogeneity, specificity, and biases. This study proposes a novel convolutional neural network-based classification tool that aims to identify the potential of different neuroimaging features as autism biomarkers. The model is constructed using a set of sequential layers specifically designed to extract relevant features from brain scans. Trained and tested on over 300,000 distinct features across three imaging types, the model shows promising results, achieving an accuracy of 95.4% and outperforming metrics of current gold standard diagnostics. 32 optimal features from the imaging data were identified and classified as candidate biomarkers using an independent samples t-test, in which functional features such as neural activity and connectivity in various brain regions exhibited the highest differences in the mean values between individuals with autism and typical control subjects. The p-values of these biomarkers were < 0.001, indicating the statistical significance of the results and suggesting that this research could pave the way towards the usage of neuroimaging in conjunction with behavioral criteria in clinics. Furthermore, the salient features discovered in the brain structure of individuals with autism could lead to a more profound understanding of the underlying neurobiological mechanisms of the disorder, which remains one of the most substantial enigmas in the field even today.

3.The Motor System at the heart of Decision-Making and Action Execution

Authors:Gerard Derosiere

Abstract: In this Thesis, I synthesize 10 years of work on the role of the motor system in sensorimotor decision-making. First, a large part of the work we initially performed questioned the functional role of the motor system in the integration of so-called decision variables such as the reward associated with different actions, the sensory evidence in favor of each action or the level of urgency in a given context. To this end, although the exact methodology may have varied, the approach exploited has been to study either the impact of a perturbation of the primary motor cortex (M1) on the integration of such decision variables in decision behavior, or the influence of these variables on changes in M1 activity during the decision. More recently (2020 - present), we have been investigating the neural origin of some of the changes in M1 activity observed during decision-making. To answer this question, a "perturbation-and-measurement" approach is exploited: the activity of a structure at a distance from M1 is perturbed, and the impact on the changes in M1 activity during decision-making is measured. The thesis ends with a personal reflection on this paradigmatic evolution and discusses some key questions to be addressed in our field of research.

1.Understanding the neural architecture of emotion regulation by comparing two different strategies: A meta-analytic approach

Authors:Bianca Monachesi, Alessandro Grecucci, Parisa Ahmadi Ghomroudi, Irene Messina

Abstract: In the emotion regulation literature, the volume of neuroimaging studies on cognitive reappraisal has led to the impression that the same top-down, control-related neural mechanisms characterize all emotion regulation strategies. However, top-down processes may coexist with more bottom-up and emotion-focused processes that partially bypass the recruitment of executive functions. A case in point is acceptance-based strategies. To better understand the neural commonalities and differences behind different emotion regulation strategies, in the present study we applied a meta-analytic method to fMRI studies of task-related activity during reappraisal and acceptance. Results showed increased activity in the left inferior frontal gyrus and insula for both strategies, decreased activity in the basal ganglia for reappraisal, and decreased activity in limbic regions for acceptance. These findings are discussed in the context of a model of common and specific neural mechanisms of emotion regulation that supports and expands previous dual-route models. We suggest that emotion regulation may rely on a core inhibitory circuit, together with top-down and bottom-up processes specific to each strategy.

1.A Mean-Field Method for Generic Conductance-Based Integrate-and-Fire Neurons with Finite Timescales

Authors:Marcelo P. Becker, Marco A. P. Idiart

Abstract: The construction of transfer functions in theoretical neuroscience plays an important role in determining the spiking rate behavior of neurons in networks. These functions can be obtained through various fitting methods, but the biological relevance of the parameters is not always clear. However, for stationary inputs, such functions can be obtained without the adjustment of free parameters by using mean-field methods. In this work, we expand current Fokker-Planck approaches to account for the concurrent influence of colored and multiplicative noise terms on generic conductance-based integrate-and-fire neurons. We reduce the stochastic system resulting from the application of the diffusion approximation to a one-dimensional Langevin equation. An effective Fokker-Planck equation is then constructed using Fox Theory, which is solved numerically to obtain the transfer function. The solution is capable of reproducing the transfer function behavior of simulated neurons across a wide range of parameters. The method can also be easily extended to account for different sources of noise with various multiplicative terms, and in principle it can be applied to other types of problems.
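
For the simplest special case (white-noise input to a leaky integrate-and-fire neuron without conductances), the stationary transfer function under the diffusion approximation is the classical Siegert formula, which can be evaluated numerically; parameter values below are illustrative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx

def lif_rate(mu, sigma, tau=0.02, v_r=0.0, v_t=1.0, tau_ref=0.002):
    """Stationary firing rate of a leaky integrate-and-fire neuron driven by
    white noise with mean mu and amplitude sigma (Siegert formula).
    erfcx(-u) = exp(u^2) * (1 + erf(u)) is the numerically stable integrand."""
    a = (v_r - mu) / sigma     # rescaled reset potential
    b = (v_t - mu) / sigma     # rescaled threshold
    integral, _ = quad(lambda u: erfcx(-u), a, b)
    return 1.0 / (tau_ref + tau * np.sqrt(np.pi) * integral)

rates = [lif_rate(mu, 0.2) for mu in (0.5, 1.0, 1.5)]
```

The paper's contribution extends this kind of mean-field transfer function to colored and multiplicative noise, where no such closed-form integrand exists.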

2.Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology

Authors:Michaela Ennis

Abstract: The great behavioral heterogeneity observed between individuals with the same psychiatric disorder and even within one individual over time complicates both clinical practice and biomedical research. However, modern technologies are an exciting opportunity to improve behavioral characterization. Existing psychiatry methods that are qualitative or unscalable, such as patient surveys or clinical interviews, can now be collected at a greater capacity and analyzed to produce new quantitative measures. Furthermore, recent capabilities for continuous collection of passive sensor streams, such as phone GPS or smartwatch accelerometer, open avenues of novel questioning that were previously entirely unrealistic. Their temporally dense nature enables a cohesive study of real-time neural and behavioral signals. To develop comprehensive neurobiological models of psychiatric disease, it will be critical to first develop strong methods for behavioral quantification. There is huge potential in what can theoretically be captured by current technologies, but this in itself presents a large computational challenge -- one that will necessitate new data processing tools, new machine learning techniques, and ultimately a shift in how interdisciplinary work is conducted. In my thesis, I detail research projects that take different perspectives on digital psychiatry, subsequently tying ideas together with a concluding discussion on the future of the field. I also provide software infrastructure where relevant, with extensive documentation. Major contributions include scientific arguments and proof of concept results for daily free-form audio journals as an underappreciated psychiatry research datatype, as well as novel stability theorems and pilot empirical success for a proposed multi-area recurrent neural network architecture.

1.Routing by spontaneous synchronization

Authors:Maik Schünemann, Udo Ernst

Abstract: Selective attention allows behaviorally relevant stimuli to be processed while distracting information is attenuated. However, it remains an open question what mechanisms implement selective routing, and how they are engaged depending on behavioral need. Here we introduce a novel framework for selective processing by spontaneous synchronization. Input signals become organized into 'avalanches' of synchronized spikes which propagate to target populations. Selective attention enhances spontaneous synchronization and boosts signal transfer by a simple disinhibition of a control population, without requiring changes in synaptic weights. Our framework is fully analytically tractable and provides a complete understanding of all stages of the routing mechanism, yielding closed-form expressions for input-output correlations. Interestingly, although gamma oscillations can naturally occur through the recurrent dynamics, we can formally show that the routing mechanism itself does not require such oscillatory activity and works equally well if synchronous events were randomly shuffled over time. Our framework explains a large range of physiological findings in a unified way and makes specific predictions about putative control mechanisms and their effects on neural dynamics.

2.Strong attentional modulation of V1/V2 activity implements a robust, contrast-invariant control mechanism for selective information processing

Authors:Lukas-Paul Rausch, Maik Schünemann, Eric Drebitz, Daniel Harnack, Udo A. Ernst, Andreas K. Kreiter

Abstract: When selective attention is devoted to one of multiple stimuli within receptive fields of neurons in visual area V4, cells respond as if only the attended stimulus was present. The underlying neural mechanisms are still debated, but computational studies suggest that a small rate advantage for neural populations passing the attended signal to V4 suffices to establish such selective processing. We challenged this theory by pairing stimuli with different luminance contrasts, such that attention on a weak target stimulus would have to overcome a large activation difference to a strong distracter. In this situation we found unexpectedly large attentional target facilitation in macaque V1/V2 which far surpasses known magnitudes of attentional modulation. Target facilitation scales with contrast difference and combines with distracter suppression to achieve the required rate advantage. These effects can be explained by a contrast-independent attentional control mechanism with excitatory centre and suppressive surround targeting divisive normalization units.
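
The proposed mechanism can be caricatured with the standard normalization model of attention (all numbers hypothetical): multiplying the weak target's afferent drive before divisive normalization lets the attended pair response approach the target-alone response despite the contrast difference:

```python
def pair_response(c_target, c_distracter, g=1.0, sigma=0.1):
    """Reynolds-Heeger-style normalization: the unit prefers the target
    (drive 1.0 per unit contrast) over the distracter (drive 0.2); the
    attentional gain g scales the target's input before normalization."""
    excitation = g * c_target * 1.0 + c_distracter * 0.2
    normalization = sigma + g * c_target + c_distracter
    return excitation / normalization

weak_alone = pair_response(0.2, 0.0)            # weak target presented alone
pair_unattended = pair_response(0.2, 0.8)       # strong distracter dominates
pair_attended = pair_response(0.2, 0.8, g=8.0)  # attention restores target response
```

The large gain needed on the weak target in this toy model echoes the unexpectedly large attentional facilitation reported in the abstract.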

1.From Data-Fitting to Discovery: Interpreting the Neural Dynamics of Motor Control through Reinforcement Learning

Authors:Eugene R. Rush, Kaushik Jayaram, J. Sean Humbert

Abstract: In motor neuroscience, artificial recurrent neural network models often complement animal studies. However, most modeling efforts are limited to data-fitting, and the few that examine virtual embodied agents in a reinforcement learning context do not draw direct comparisons to their biological counterparts. Our study addresses this gap by uncovering structured neural activity of a virtual robot performing legged locomotion that directly supports experimental findings on primate walking and cycling. We find that embodied agents trained to walk exhibit smooth dynamics that avoid tangling -- opposing neural trajectories in neighboring neural space -- a core principle in computational neuroscience. Specifically, across a wide suite of gaits, the agent's neural trajectories in the recurrent layers are less tangled than those in the input-driven actuation layers. To better interpret the neural separation of these elliptical trajectories, we identify speed axes that maximize the variance of mean activity across different forward, lateral, and rotational speed conditions.
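
The tangling metric referred to here is typically computed as Q(t) = max over t' of ||x'(t) - x'(t')||^2 / (||x(t) - x(t')||^2 + eps). A small sketch on synthetic trajectories (a smooth circle versus a self-crossing figure eight, both hypothetical stand-ins for neural data):

```python
import numpy as np

def tangling(X, dt):
    """Per-timepoint tangling Q(t): large when two similar states have very
    different derivatives, i.e. when no smooth flow field could have produced
    the trajectory (Russo et al.-style metric)."""
    V = np.gradient(X, dt, axis=0)                       # finite-difference derivatives
    dx = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise state distances
    dv = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)  # pairwise derivative distances
    return (dv / (dx + 1e-3)).max(axis=1)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)    # smooth rotation: low tangling
fig8 = np.stack([np.sin(t), np.sin(2 * t)], axis=1)  # self-crossing: high tangling
q_circle = tangling(circle, t[1] - t[0]).max()
q_fig8 = tangling(fig8, t[1] - t[0]).max()
```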

1.Abnormal Functional Brain Network Connectivity Associated with Alzheimer's Disease

Authors:Yongcheng Yao

Abstract: The study's objective is to explore the distinctions in the functional brain network connectivity between Alzheimer's Disease (AD) patients and normal controls using Functional Magnetic Resonance Imaging (fMRI). The study included 590 individuals, with 175 having AD dementia and 415 age-, gender-, and handedness-matched normal controls. The connectivity of functional brain networks was measured using ROI-to-ROI and ROI-to-Voxel connectivity analyses. The findings reveal a general decrease in functional connectivity among the AD group in comparison to the normal control group. These results advance our comprehension of AD pathophysiology and could assist in identifying AD biomarkers.
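
A minimal sketch of ROI-to-ROI connectivity on synthetic time series (the shared fluctuation and the Fisher z-transform are illustrative assumptions, not details taken from the study):

```python
import numpy as np

def roi_connectivity(ts):
    """ROI-to-ROI functional connectivity: Pearson correlations between all
    pairs of ROI time series (ts has shape time x ROIs), Fisher z-transformed
    as is common before group comparisons."""
    r = np.corrcoef(ts.T)
    np.fill_diagonal(r, 0.0)   # drop self-correlations before the transform
    return np.arctanh(r)

rng = np.random.default_rng(4)
shared = rng.normal(size=(200, 1))                 # fluctuation common to all ROIs
ts = 0.5 * shared + rng.normal(size=(200, 8))      # 8 toy ROI time series
Z = roi_connectivity(ts)
```

A group comparison like the one in the study would then contrast such z-matrices between AD patients and controls, entry by entry.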

2.Understanding visual processing of motion: Completing the picture using experimentally driven computational models of MT

Authors:Parvin Zarei Eskikand, David B Grayden, Tatiana Kameneva, Anthony N Burkitt, Michael R Ibbotson

Abstract: Computational modeling helps neuroscientists to integrate and explain experimental data obtained through neurophysiological and anatomical studies, thus providing a mechanism by which we can better understand and predict the principles of neural computation. Computational modeling of the neuronal pathways of the visual cortex has been successful in developing theories of biological motion processing. This review describes a range of computational models that have been inspired by neurophysiological experiments. Theories of local motion integration and pattern motion processing are presented, together with suggested neurophysiological experiments designed to test those hypotheses.

1.Neural Responses to Political Words in Natural Speech Differ by Political Orientation

Authors:Shuhei Kitamura, Aya S. Ihara

Abstract: Worldviews may differ significantly according to political orientation. Even a single word can have a completely different meaning depending on political orientation. However, direct evidence indicating differences in the neural responses to words between conservative- and liberal-leaning individuals has not been obtained. The present study aimed to investigate whether neural responses related to semantic processing of political words in natural speech differ according to political orientation. We measured electroencephalographic signals while participants with different political orientations listened to natural speech. Responses to moral-, ideology-, and policy-related words between and within the participant groups were then compared. Within-group comparisons showed that right-leaning participants reacted more to moral-related words than to policy-related words, while left-leaning participants reacted more to policy-related words than to moral-related words. In addition, between-group comparisons showed that neural responses to moral-related words were greater in right-leaning than in left-leaning participants, and those to policy-related words were weaker in right-leaning than in neutral participants. There was a significant correlation between the predicted and self-reported political orientations. In summary, the study found that people with different political orientations differ in semantic processing at the level of a single word. These findings have implications for understanding the mechanisms of political polarization and for making policy messages more effective.

2.Selective imitation on the basis of reward function similarity

Authors:Max Taylor-Davies, Stephanie Droop, Christopher G. Lucas

Abstract: Imitation is a key component of human social behavior, and is widely used by both children and adults as a way to navigate uncertain or unfamiliar situations. But in an environment populated by multiple heterogeneous agents pursuing different goals or objectives, indiscriminate imitation is unlikely to be an effective strategy -- the imitator must instead determine who is most useful to copy. There are likely many factors that play into these judgements, depending on context and availability of information. Here we investigate the hypothesis that these decisions involve inferences about other agents' reward functions. We suggest that people preferentially imitate the behavior of others they deem to have similar reward functions to their own. We further argue that these inferences can be made on the basis of very sparse or indirect data, by leveraging an inductive bias toward positing the existence of different 'groups' or 'types' of people with similar reward functions, allowing learners to select imitation targets without direct evidence of alignment.
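
The type-based inference can be sketched as simple Bayesian updating over a small set of hypothetical agent types, each defined by which goal it prefers:

```python
import numpy as np

# Two hypothetical "types" of agent, defined by how often each picks goal 0 vs goal 1.
types = np.array([[0.9, 0.1],
                  [0.1, 0.9]])          # P(choice | type)
prior = np.array([0.5, 0.5])

def posterior_over_types(choices):
    """Posterior over an agent's latent type from sparse observed goal choices."""
    post = prior.copy()
    for c in choices:
        post = post * types[:, c]       # Bayes update with each observation
        post = post / post.sum()
    return post

me = posterior_over_types([0, 0])            # my own history points to type 0
similar = me @ posterior_over_types([0])     # one aligned observation of another agent
dissimilar = me @ posterior_over_types([1])  # one misaligned observation
```

Even a single observed choice shifts the probability that the other agent shares my type, which is the sparse-data inference the abstract argues for.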

3.Applications of information geometry to spiking neural network behavior

Authors:Jacob T. Crosser, Braden A. W. Brinkman

Abstract: The space of possible behaviors complex biological systems may exhibit is unimaginably vast, and these systems often appear to be stochastic, whether due to variable noisy environmental inputs or intrinsically generated chaos. The brain is a prominent example of a biological system with complex behaviors. The number of possible patterns of spikes emitted by a local brain circuit is combinatorially large, though the brain may not make use of all of them. Understanding which of these possible patterns are actually used by the brain, and how those sets of patterns change as properties of neural circuitry change is a major goal in neuroscience. Recently, tools from information geometry have been used to study embeddings of probabilistic models onto a hierarchy of model manifolds that encode how model behaviors change as a function of their parameters, giving a quantitative notion of "distances" between model behaviors. We apply this method to a network model of excitatory and inhibitory neural populations to understand how the competition between membrane and synaptic response timescales shapes the network's information geometry. The hyperbolic embedding allows us to identify the statistical parameters to which the model behavior is most sensitive, and demonstrate how the ranking of these coordinates changes with the balance of excitation and inhibition in the network.

1.Connecting levels of analysis in the computational era

Authors:Richard Naud, André Longtin

Abstract: Neuroscience and artificial intelligence are closely intertwined, but so are the physics of dynamical systems, philosophy and psychology. Each of these fields tries in its own way to relate observations at the level of molecules, synapses, neurons or behavior, to a function. An influential conceptual approach to this end was popularized by David Marr, who focused on the interaction between three theoretical 'levels of analysis'. With the convergence of simulation-based approaches, algorithm-oriented Neuro-AI and high-throughput data, we currently see much research organized around four levels of analysis: observations, models, algorithms and functions. Bidirectional interaction between these levels influences how we undertake interdisciplinary science.

2.Neuroscience needs Network Science

Authors:Dániel L Barabási, Ginestra Bianconi, Ed Bullmore, Mark Burgess, SueYeon Chung, Tina Eliassi-Rad, Dileep George, István A. Kovács, Hernán Makse, Christos Papadimitriou, Thomas E. Nichols, Olaf Sporns, Kim Stachenfeld, Zoltán Toroczkai, Emma K. Towlson, Anthony M Zador, Hongkui Zeng, Albert-László Barabási, Amy Bernard, György Buzsáki

Abstract: The brain is a complex system comprising a myriad of interacting elements, posing significant challenges in understanding its structure, function, and dynamics. Network science has emerged as a powerful tool for studying such intricate systems, offering a framework for integrating multiscale data and complexity. Here, we discuss the application of network science in the study of the brain, addressing topics such as network models and metrics, the connectome, and the role of dynamics in neural networks. We explore the challenges and opportunities in integrating multiple data streams for understanding the neural transitions from development to healthy function to disease, and discuss the potential for collaboration between network science and neuroscience communities. We underscore the importance of fostering interdisciplinary opportunities through funding initiatives, workshops, and conferences, as well as supporting students and postdoctoral fellows with interests in both disciplines. By uniting the network science and neuroscience communities, we can develop novel network-based methods tailored to neural circuits, paving the way towards a deeper understanding of the brain and its functions.

1.A unified framework of metastability in neuroscience

Authors:Kalel L. Rossi, Roberto C. Budzinski, Everton S. Medeiros, Bruno R. R. Boaretto, Lyle Muller, Ulrike Feudel

Abstract: Neural activity typically follows a series of transitions between well-defined states, in a regime generally called metastability. In this perspective, we review current observations and formulations of metastability to argue that they have been largely context-dependent and that a unified framework is still missing. To address this, we propose a context-independent framework that unifies the context-dependent formulations by defining metastability as an umbrella term encompassing regimes with transient but long-lived states. This definition can be applied directly to experimental data but also connects neatly to the theory of nonlinear dynamical systems, which allows us to extract a general dynamical principle for metastability: the coexistence of attracting and repelling directions in phase space. With this, we extend known mechanisms and propose new ones that can implement metastability through this general dynamical principle. We believe that our framework is an important advancement towards a better understanding of metastability in the brain, and can facilitate the development of tools to predict and control the brain's behavior.
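
The dynamical principle named in this abstract -- long-lived states arising from the coexistence of attracting and repelling directions -- can be illustrated with a standard toy model that is not from the paper: an overdamped particle in a noisy double-well potential, where the wells at x = ±1 attract and the saddle at x = 0 repels. All parameter values below are arbitrary choices for the sketch.

```python
import numpy as np

def double_well_trajectory(t_max=1000.0, dt=0.01, sigma=0.5, seed=0):
    """Euler-Maruyama simulation of dx = (x - x^3) dt + sigma dW.
    The minima at x = +/-1 are attracting; the saddle at x = 0 repels,
    so the system dwells in one well for long stretches before switching."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    x = np.empty(n)
    x[0] = 1.0
    noise = sigma * np.sqrt(dt) * rng.standard_normal(n - 1)
    for k in range(n - 1):
        x[k + 1] = x[k] + dt * (x[k] - x[k] ** 3) + noise[k]
    return x

def count_switches(x, hysteresis=0.5):
    """Count well-to-well transitions, ignoring brief saddle excursions."""
    state, switches = np.sign(x[0]), 0
    for v in x:
        if v > hysteresis and state < 0:
            state, switches = 1.0, switches + 1
        elif v < -hysteresis and state > 0:
            state, switches = -1.0, switches + 1
    return switches
```

The dwell times in each well are far longer than the relaxation time within a well, which is exactly the "transient but long-lived states" signature the framework uses to define metastability.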

1.Assessing Rate Limits Using Behavioral and Neural Responses of Interaural-Time-Difference Cues in Fine-Structure and Envelope

Authors:Hongmei Hu, Stephan Ewert, Birger Kollmeier, Deborah Vickers

Abstract: The objective was to determine the effect of pulse rate on sensitivity to interaural-time-difference (ITD) cues and to explore the mechanisms behind rate-dependent degradation in ITD perception in bilateral cochlear implant (CI) listeners using CI simulations and electroencephalogram (EEG) measures. To eliminate the impact of CI stimulation artifacts and to develop protocols for the ongoing bilateral CI studies, upper-frequency limits for both behavioral and EEG responses were obtained from normal hearing (NH) listeners using sinusoidal-amplitude-modulated (SAM) tones and filtered clicks with changes in either fine-structure ITD or envelope ITD. Multiple EEG responses were recorded, including the subcortical auditory steady-state responses (ASSRs) and cortical auditory evoked potentials (CAEPs) elicited by stimulus onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by envelope ITD changes were significantly smaller or absent compared to those elicited by fine-structure ITD changes. The ACC morphologies evoked by fine-structure ITD changes were similar to onset and offset CAEPs, although smaller than onset CAEPs, with the longest peak latencies for ACC responses and the shortest for offset CAEPs. High-frequency stimuli clearly elicited subcortical ASSRs, though these were smaller than those evoked by SAM tones with lower carrier frequencies. The 40-Hz ASSRs decreased with increasing carrier frequency. Filtered clicks elicited larger ASSRs than high-frequency SAM tones, with the order 40-Hz-ASSR>160-Hz-ASSR>80-Hz-ASSR>320-Hz-ASSR for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz-ASSRs in the time-frequency domain for SAM tones with a low carrier frequency.

1.Accuracy in readout of glutamate concentrations by neuronal cells

Authors:Swoyam Biswal, Vaibhav Wasnik

Abstract: Glutamate and glycine are important neurotransmitters in the brain. An action potential propagating in the terminal of a presynaptic neuron causes the release of glutamate and glycine in the synapse by vesicles fusing with the cell membrane, which then activate various receptors on the cell membrane of the postsynaptic neuron. Entry of Ca2+ through the activated NMDA receptors leads to a host of cellular processes, of which long-term potentiation is of crucial importance because it is widely considered to be one of the major mechanisms behind learning and memory. By analysing the readout of glutamate concentration by the postsynaptic neurons during Ca2+ signaling, we find that the average receptor density in hippocampal neurons has evolved to allow for accurate measurement of the glutamate concentration in the synaptic cleft.

1.Ecologically mapped neuronal identity: Towards standardizing activity across heterogeneous experiments

Authors:Kevin Luxem, David Eriksson

Abstract: The brain's diversity of neurons enables a rich behavioral repertoire and flexible adaptation to new situations. Assuming that ecological pressure has optimized this neuronal variety, we propose exploiting naïve behavior to map neuronal identity. Here we investigate the feasibility of identifying neurons "ecologically" using their activation for natural behavioral and environmental parameters. Such a neuronal ECO-marker might give a finer granularity than possible with genetic or molecular markers, thereby facilitating the comparison of the functional characteristics of individual neurons across animals. In contrast to a potential mapping using artificial stimuli and trained behavior, which have an unlimited parameter space, an ecological mapping is experimentally feasible since it is bounded by the ecology. The home-cage environment is an excellent basis for this ECO-mapping, since it covers an extensive behavioral repertoire and home-cage behavior is similar across laboratories. We review the possibility of adding area-specific environmental enrichment and automatized behavioral tasks to identify neurons in specific brain areas. In this work, we focus on the visual cortex, motor cortex, prefrontal cortex, and hippocampus. Fundamental to achieving this identification is taking advantage of state-of-the-art behavioral tracking, sensory stimulation protocols, and the plethora of creative behavioral solutions for rodents. We find that motor areas might be easiest to address, followed by prefrontal, hippocampal, and visual areas. The possibility of acquiring a near-complete ecological identification with minimal animal handling, minimal constraints on the main experiment, and data compatibility across laboratories might outweigh the necessity of implanting electrodes or imaging devices.

2.Incomplete hippocampal inversion and hippocampal subfield volumes: Implementation and inter-reliability of automatic segmentation

Authors:Agustina Fragueiro EMPENN, Giorgia Committeri Ud'A, Claire Cury EMPENN

Abstract: The incomplete hippocampal inversion (IHI) is an atypical anatomical pattern of the hippocampus. However, the hippocampus is not a homogeneous structure, as it consists of segregated subfields with specific characteristics. While IHI is not related to whole hippocampal volume, higher IHI scores have been associated with smaller CA1 volumes in aging. Although the segmentation of hippocampal subfields is challenging due to their small size, algorithms exist for their automatic segmentation. Using a Human Connectome Project dataset of healthy young adults, we first tested the inter-reliability of two methods for automatic segmentation of hippocampal subfields, and second, we explored the relationship between IHI and subfield volumes. Results evidenced strong correlations between the volumes obtained through the two segmentation methods. Furthermore, higher IHI scores were associated with larger subiculum and smaller CA1 volumes. Here, we provide new insights regarding IHI and subfield volumetry, and we offer support for the inter-method reliability of automatic segmentation.

1.Perceived community alignment increases information sharing

Authors:Elisa C. Baek, Ryan Hyon, Karina López, Mason A. Porter, Carolyn Parkinson

Abstract: Information sharing is a ubiquitous and consequential behavior that has been proposed to play a critical role in cultivating and maintaining a sense of shared reality. Across three studies, we tested this theory by investigating whether or not people are especially likely to share information that they believe will be interpreted similarly by others in their social circles. Using neuroimaging while members of the same community viewed brief film clips, we found that more similar neural responding of participants was associated with a greater likelihood to share content. We then tested this relationship using behavioral studies and found (1) that people were particularly likely to share content about which they believed others in their social circles would share their viewpoints and (2) that this relationship is causal. In concert, our findings support the idea that people are driven to share information to create and reinforce shared understanding, which is critical to social connection.

1.Orientation selectivity of affine Gaussian derivative based receptive fields

Authors:Tony Lindeberg

Abstract: This paper presents a theoretical analysis of the orientation selectivity of simple and complex cells that can be well modelled by the generalized Gaussian derivative model for visual receptive fields, with the purely spatial component of the receptive fields determined by oriented affine Gaussian derivatives for different orders of spatial differentiation. A detailed mathematical analysis is presented for the three different cases of either: (i) purely spatial receptive fields, (ii) space-time separable spatio-temporal receptive fields and (iii) velocity-adapted spatio-temporal receptive fields. Closed-form theoretical expressions for the orientation selectivity curves for idealized models of simple and complex cells are derived for all these main cases, and it is shown that the degree of orientation selectivity of the receptive fields increases with a scale parameter ratio $\kappa$, defined as the ratio between the scale parameters in the directions perpendicular to vs. parallel with the preferred orientation of the receptive field. It is also shown that the degree of orientation selectivity increases with the order of spatial differentiation in the underlying affine Gaussian derivative operators over the spatial domain. We conclude by describing biological implications of the derived theoretical results, demonstrating that the predictions from the presented theory are consistent with previously established biological results concerning broad vs. sharp orientation tuning of visual neurons in the primary visual cortex, as well as consistent with a previously formulated biological hypothesis, stating that the biological receptive field shapes should span the degrees of freedom in affine image transformations, to support affine covariance over the population of receptive fields in the primary visual cortex.
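
The paper's central claim -- orientation selectivity sharpens as the scale-parameter ratio κ grows -- can be reproduced numerically with a discretized first-order affine Gaussian derivative filter probed by sinusoidal gratings. This is an illustrative sketch, not the paper's closed-form derivation; the filter size, scale σ, and grating frequency below are arbitrary choices.

```python
import numpy as np

def affine_gauss_dx(kappa, sigma=4.0, size=65):
    """x-derivative of an anisotropic Gaussian; kappa = sigma_y / sigma_x
    is the scale-parameter ratio (elongation along the preferred axis)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    sx, sy = sigma, kappa * sigma
    g = np.exp(-(x**2 / (2 * sx**2) + y**2 / (2 * sy**2)))
    return -(x / sx**2) * g

def tuning_halfwidth(kappa, freq=0.08, n_theta=181):
    """Probe the filter with gratings at every orientation; return the
    fraction of orientations whose response exceeds half the maximum,
    together with the full tuning curve."""
    kern = affine_gauss_dx(kappa)
    ax = np.arange(kern.shape[0]) - kern.shape[0] // 2
    x, y = np.meshgrid(ax, ax)
    resp = []
    for th in np.linspace(0.0, np.pi, n_theta):
        grating = np.sin(2 * np.pi * freq * (x * np.cos(th) + y * np.sin(th)))
        resp.append(abs((kern * grating).sum()))
    resp = np.array(resp)
    return np.mean(resp > 0.5 * resp.max()), resp
```

For κ = 1 the envelope is isotropic and the tuning curve is the broad |cos θ| profile of a plain derivative filter; increasing κ elongates the kernel along the preferred axis and narrows the curve, consistent with the trend the paper derives analytically.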

1.Whole-brain functional imaging to highlight differences between the diurnal and nocturnal neuronal activity in zebrafish larvae

Authors:Giuseppe de Vito, Lapo Turrini, Chiara Fornetto, Elena Trabalzini, Pietro Ricci, Duccio Fanelli, Francesco Vanzi, Francesco Saverio Pavone

Abstract: Most living organisms show highly conserved physiological changes following a 24-hour cycle which goes by the name of circadian rhythm. Among experimental models, the effects of light-dark cycle have been recently investigated in the larval zebrafish. Owing to its small size and transparency, this vertebrate enables optical access to the entire brain. Indeed, the combination of this organism with light-sheet imaging grants high spatio-temporal resolution volumetric recording of neuronal activity. This imaging technique, in its multiphoton variant, allows functional investigations without unwanted visual stimulation. Here, we employed a custom two-photon light-sheet microscope to study whole-brain differences in neuronal activity between diurnal and nocturnal periods in larval zebrafish. We describe for the first time an activity increase in the low frequency domain of the pretectum and a frequency-localised activity decrease of the anterior rhombencephalic turning region during the nocturnal period. Moreover, our data confirm a nocturnal reduction in habenular activity. Furthermore, whole-brain detrended fluctuation analysis revealed a nocturnal decrease in the self-affinity of the neuronal signals in parts of the dorsal thalamus and the medulla oblongata. Our data show that whole-brain nonlinear light-sheet imaging represents a useful tool to investigate circadian rhythm effects on neuronal activity.
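
The self-affinity measure mentioned above comes from detrended fluctuation analysis (DFA). A minimal DFA sketch (not the authors' pipeline; window sizes and signal length are arbitrary) is the following, where the scaling exponent α is the log-log slope of the fluctuation function -- about 0.5 for white noise, larger for persistent signals.

```python
import numpy as np

def dfa(signal, scales):
    """Detrended fluctuation analysis: RMS fluctuation F(n) of the
    linearly detrended cumulative profile, for each window size n."""
    profile = np.cumsum(signal - signal.mean())
    fluct = []
    for n in scales:
        rms = []
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    return np.array(fluct)

def dfa_exponent(signal, scales=(16, 32, 64, 128, 256)):
    """Scaling exponent alpha: slope of log F(n) versus log n."""
    f = dfa(signal, scales)
    return np.polyfit(np.log(scales), np.log(f), 1)[0]
```

A nocturnal decrease in α of the neuronal signals, as reported for parts of the dorsal thalamus and medulla oblongata, corresponds to the signals becoming less persistent (closer to uncorrelated noise).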

2.Long time scales, individual differences, and scale invariance in animal behavior

Authors:William Bialek, Joshua W. Shaevitz

Abstract: The explosion of data on animal behavior in more natural contexts highlights the fact that these behaviors exhibit correlations across many time scales. But there are major challenges in analyzing these data: records of behavior in single animals have fewer independent samples than one might expect; in pooling data from multiple animals, individual differences can mimic long-ranged temporal correlations; conversely long-ranged correlations can lead to an over-estimate of individual differences. We suggest an analysis scheme that addresses these problems directly, apply this approach to data on the spontaneous behavior of walking flies, and find evidence for scale invariant correlations over nearly three decades in time, from seconds to one hour. Three different measures of correlation are consistent with a single underlying scaling field of dimension $\Delta = 0.180\pm 0.005$.
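
The pooling pitfall the abstract warns about can be demonstrated in a few lines: concatenating records from "animals" whose only difference is a constant baseline offset produces spurious correlation at long lags, which vanishes once each record is mean-centered. This is a toy construction of the confound, not the authors' analysis scheme; all sizes are arbitrary.

```python
import numpy as np

def lag_correlation(series, lag):
    """Pearson correlation between the series and itself shifted by lag."""
    return np.corrcoef(series[:-lag], series[lag:])[0, 1]

def pooled_vs_centered(n_animals=20, n_samples=2000, lag=50, seed=0):
    """Each 'animal' contributes white noise around its own baseline offset.
    Pooling the raw records fakes a long-ranged temporal correlation;
    per-animal mean-centering removes it."""
    rng = np.random.default_rng(seed)
    records = [rng.standard_normal(n_samples) + rng.normal(0.0, 1.0)
               for _ in range(n_animals)]
    pooled = np.concatenate(records)
    centered = np.concatenate([r - r.mean() for r in records])
    return lag_correlation(pooled, lag), lag_correlation(centered, lag)
```

The converse artifact (genuine long-ranged correlations inflating apparent individual differences) is the mirror image of the same ambiguity, which is why the paper's analysis has to disentangle the two directly.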

3.Circumstantial evidence and explanatory models for synapses in large-scale spike recordings

Authors:Ian H. Stevenson

Abstract: Whether, when, and how causal interactions between neurons can be meaningfully studied from observations of neural activity alone are vital questions in neural data analysis. Here we aim to better outline the concept of functional connectivity for the specific situation where systems neuroscientists aim to study synapses using spike train recordings. In some cases, cross-correlations between the spikes of two neurons are such that, although we may not be able to say that a relationship is causal without experimental manipulations, models based on synaptic connections provide precise explanations of the data. Additionally, there is often strong circumstantial evidence that pairs of neurons are monosynaptically connected. Here we illustrate how circumstantial evidence for or against synapses can be systematically assessed and show how models of synaptic effects can provide testable predictions for pair-wise spike statistics. We use case studies from large-scale multi-electrode spike recordings to illustrate key points and to demonstrate how modeling synaptic effects using large-scale spike recordings opens a wide range of data analytic questions.

4.Pulse shape and voltage-dependent synchronization in spiking neuron networks

Authors:Bastian Pietras

Abstract: Pulse-coupled spiking neural networks are a powerful tool to gain mechanistic insights into how neurons self-organize to produce coherent collective behavior. These networks use simple spiking neuron models, such as the $\theta$-neuron or the quadratic integrate-and-fire (QIF) neuron, that replicate the essential features of real neural dynamics. Interactions between neurons are modeled with infinitely narrow pulses, or spikes, rather than the more complex dynamics of real synapses. To make these networks biologically more plausible, it has been proposed that they must also account for the finite width of the pulses, which can have a significant impact on the network dynamics. However, the derivation and interpretation of these pulses are contradictory, and the impact of the pulse shape on the network dynamics is largely unexplored. Here, I take a comprehensive approach to pulse-coupling in networks of QIF and $\theta$-neurons. I argue that narrow pulses activate voltage-dependent synaptic conductances and show how to implement them in QIF neurons such that their effect can last through the phase after the spike. Using an exact low-dimensional description for networks of globally coupled spiking neurons, I prove for instantaneous interactions that collective oscillations emerge due to an effective coupling through the mean voltage. I analyze the impact of the pulse shape by means of a family of smooth pulse functions with arbitrary finite width and symmetric or asymmetric shapes. For symmetric pulses, the resulting voltage-coupling is not very effective at synchronizing neurons, but pulses that are slightly skewed to the phase after the spike readily generate collective oscillations. The results unveil a voltage-dependent spike-synchronization mechanism in neural networks, which is facilitated by pulses of finite width and complementary to traditional synaptic transmission.
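
For readers unfamiliar with the model class, a minimal globally coupled QIF network with instantaneous (delta) pulses -- the baseline case the paper generalizes to finite-width pulses -- can be sketched as follows. This is an illustrative Euler integration with arbitrary parameters, not the paper's formulation; the reset values approximate the ±infinity limits of the QIF neuron.

```python
import numpy as np

def qif_network(n=50, t_max=50.0, dt=0.001, eta_mean=1.0, coupling=0.2, seed=0):
    """Euler integration of dV_i/dt = V_i^2 + eta_i, all-to-all coupled:
    each spike gives every neuron an instantaneous kick of coupling / n.
    A neuron spikes at V >= v_peak and is reset to v_reset."""
    rng = np.random.default_rng(seed)
    eta = eta_mean + 0.05 * rng.standard_normal(n)  # mild heterogeneity
    v = rng.uniform(-2.0, 2.0, n)
    v_peak, v_reset = 100.0, -100.0
    spikes = []
    for step in range(int(t_max / dt)):
        fired = v >= v_peak
        if fired.any():
            t = step * dt
            spikes.extend((t, i) for i in np.nonzero(fired)[0])
            v[fired] = v_reset
        v = v + dt * (v ** 2 + eta) + (coupling / n) * fired.sum()
    return spikes
```

With eta_i > 0 every neuron is intrinsically oscillatory (period roughly pi / sqrt(eta_i) in the infinite-threshold limit), and the delta-pulse kicks are the term whose finite-width, possibly asymmetric replacement is the subject of the paper.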

1.Decoding Neural Activity to Assess Individual Latent State in Ecologically Valid Contexts

Authors:Stephen M. Gordon, Jonathan R. McDaniel, Kevin W. King, Vernon J. Lawhern, Jonathan Touryan

Abstract: There exist very few ways to isolate cognitive processes, historically defined via highly controlled laboratory studies, in more ecologically valid contexts. Specifically, it remains unclear as to what extent patterns of neural activity observed under such constraints actually manifest outside the laboratory in a manner that can be used to make an accurate inference about the latent state, associated cognitive process, or proximal behavior of the individual. Improving our understanding of when and how specific patterns of neural activity manifest in ecologically valid scenarios would provide validation for laboratory-based approaches that study similar neural phenomena in isolation and meaningful insight into the latent states that occur during complex tasks. We argue that domain generalization methods from the brain-computer interface community have the potential to address this challenge. We previously used such an approach to decode phasic neural responses associated with visual target discrimination. Here, we extend that work to more tonic phenomena such as internal latent states. We use data from two highly controlled laboratory paradigms to train two separate domain-generalized models. We apply the trained models to an ecologically valid paradigm in which participants performed multiple, concurrent driving-related tasks. Using the pretrained models, we derive estimates of the underlying latent state and associated patterns of neural activity. Importantly, as the patterns of neural activity change along the axis defined by the original training data, we find changes in behavior and task performance consistent with the observations from the original, laboratory paradigms. We argue that these results lend ecological validity to those experimental designs and provide a methodology for understanding the relationship between observed neural activity and behavior during complex tasks.

1.Synchronization in STDP-driven memristive neural networks with time-varying topology

Authors:Marius E. Yamakou, Mathieu Desroches, Serafim Rodrigues

Abstract: Synchronization is a widespread phenomenon in the brain. Despite numerous studies, the specific parameter configurations of the synaptic network structure and learning rules needed to achieve robust and enduring synchronization in neurons driven by spike-timing-dependent plasticity (STDP) and temporal networks subject to homeostatic structural plasticity (HSP) rules remain unclear. Here, we bridge this gap by determining the configurations required to achieve high and stable degrees of complete synchronization (CS) and phase synchronization (PS) in time-varying small-world and random neural networks driven by STDP and HSP. In particular, we found that decreasing $P$ (which enhances the strengthening effect of STDP on the average synaptic weight) and increasing $F$ (which speeds up the swapping rate of synapses between neurons) always lead to higher and more stable degrees of CS and PS in small-world and random networks, provided that the network parameters such as the synaptic time delay $\tau_c$, the average degree $\langle k \rangle$, and the rewiring probability $\beta$ have some appropriate values. When $\tau_c$, $\langle k \rangle$, and $\beta$ are not fixed at these appropriate values, the degree and stability of CS and PS may increase or decrease when $F$ increases, depending on the network topology. It is also found that the time delay $\tau_c$ can induce intermittent CS and PS whose occurrence is independent of $F$. Our results could have applications in designing neuromorphic circuits for optimal information processing and transmission via synchronization phenomena.

2.Upcrossing-rate dynamics for a minimal neuron model receiving spatially distributed synaptic drive

Authors:Robert P Gowers, Magnus J E Richardson

Abstract: The spatiotemporal stochastic dynamics of the voltage as well as the upcrossing rate are derived for a model neuron comprising a long dendrite with uniformly distributed filtered excitatory and inhibitory synaptic drive. A cascade of ordinary and partial differential equations is obtained describing the evolution of first-order means and second-order spatial covariances of the voltage and its rate of change. These quantities provide an analytical form for the general, steady-state and linear response of the upcrossing rate to dynamic synaptic input. It is demonstrated that this minimal dendritic model has an unexpectedly sustained high-frequency response despite synaptic, membrane and spatial filtering.
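
The upcrossing rate that this paper derives analytically is, for any stationary differentiable Gaussian process, governed by Rice's formula, ν(θ) = (1/2π)·sqrt(λ₂/λ₀)·exp(−(θ−μ)²/(2λ₀)), where λ₀ and λ₂ are the variances of the voltage and its derivative. The sketch below checks this against a generic two-stage-filtered toy voltage (white noise → OU synaptic drive → leaky voltage), not the paper's spatially extended dendrite; all time constants are arbitrary.

```python
import numpy as np

def simulate_filtered_voltage(t_max=800.0, dt=0.005, tau_s=0.5, tau_v=1.0, seed=0):
    """White noise -> OU synaptic drive s -> leaky voltage v.
    Two filtering stages make v differentiable, so Rice's formula applies."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    xi = np.sqrt(2.0 * dt / tau_s) * rng.standard_normal(n)
    s_val, v_val, vs = 0.0, 0.0, []
    for k in range(n):
        s_val += dt * (-s_val / tau_s) + xi[k]
        v_val += dt * (s_val - v_val) / tau_v
        vs.append(v_val)
    return np.array(vs[int(50.0 / dt):]), dt   # discard initial transient

def rice_rate(v, dt, theta):
    """Rice's formula with spectral moments estimated from the trace itself."""
    lam0 = v.var()
    lam2 = (np.diff(v) / dt).var()
    return np.sqrt(lam2 / lam0) / (2.0 * np.pi) * np.exp(
        -(theta - v.mean()) ** 2 / (2.0 * lam0))

v, dt = simulate_filtered_voltage()
theta = v.mean() + v.std()                     # threshold one SD above the mean
upcross = np.sum((v[:-1] < theta) & (v[1:] >= theta))
empirical = upcross / (len(v) * dt)
predicted = rice_rate(v, dt, theta)
```

Had the synaptic drive been unfiltered white noise, the voltage derivative would have infinite variance and the upcrossing rate would diverge -- which is exactly why the model in the paper uses filtered synaptic drive.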

1.Hierarchical network structure as the source of power-law frequency spectra (state-trait continua) in living and non-living systems: how physical traits and personalities emerge from first principles in biophysics

Authors:Rutger Goekoop, Roy de Kleijn

Abstract: What causes organisms to have different body plans and personalities? We address this question by looking at universal principles that govern the morphology and behavior of living systems. Living systems display a small-world network structure in which many smaller clusters are nested within fewer larger ones, producing a fractal-like structure with a power-law cluster size distribution. Their dynamics show similar qualities: the time series of inner message passing and overt behavior contain high frequencies or 'states' that are nested within lower frequencies or 'traits'. Here, we argue that the nested modular (power-law) dynamics of living systems result from their nested modular (power-law) network structure: organisms 'vertically encode' the deep spatiotemporal structure of their environments, so that high frequencies (states) are produced by many small clusters at the base of a nested-modular hierarchy and lower frequencies (traits) are produced by fewer larger clusters at its top. These include physical as well as behavioral traits. Nested-modular structure causes higher frequencies to be embedded in lower frequencies, producing power-law dynamics. Such dynamics satisfy the need for efficient energy dissipation through networks of coupled oscillators, which also governs the dynamics of non-living systems (e.g. earthquake dynamics, stock market fluctuations). Thus, we provide a single explanation for power-law frequency spectra in both living and non-living systems. If hierarchical structure indeed produces hierarchical dynamics, the development (e.g. during maturation) and collapse (e.g. during disease) of hierarchical structure should leave specific traces in power-law frequency spectra that may serve as early warning signs to system failure. The applications of this idea range from embryology and personality psychology to sociology, evolutionary biology and clinical medicine.
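
The core quantitative claim -- that nesting processes across a hierarchy of timescales yields a power-law spectrum -- can be checked analytically with a standard construction that is not from the paper: a superposition of Lorentzian spectra (one OU process per hierarchy level) with octave-spaced time constants approximates a 1/f spectrum across the covered band. The timescale range and fitting band below are arbitrary.

```python
import numpy as np

def lorentzian_sum_spectrum(freqs, taus):
    """Power spectrum of a sum of independent OU processes, one per
    timescale tau (each an exact Lorentzian): S(f) = sum tau / (1 + (2 pi f tau)^2)."""
    f = np.asarray(freqs)[:, None]
    t = np.asarray(taus)[None, :]
    return (t / (1.0 + (2.0 * np.pi * f * t) ** 2)).sum(axis=1)

# Octave-spaced timescales, as a nested-modular hierarchy would produce:
# each level of the hierarchy contributes one progressively slower process.
taus = 2.0 ** np.arange(-2, 11)             # 0.25 ... 1024
freqs = np.logspace(-2.3, -1.0, 100)        # mid-band: 1/tau_max << f << 1/tau_min
spec = lorentzian_sum_spectrum(freqs, taus)
slope = np.polyfit(np.log(freqs), np.log(spec), 1)[0]
```

A single timescale gives a flat spectrum that rolls off as 1/f²; it is only the log-uniform mixture of timescales, one per hierarchy level, that fills in the intermediate 1/f region.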

1.Hebbian fast plasticity and working memory

Authors:Anders Lansner, Florian Fiebig, Pawel Herman

Abstract: Theories and models of working memory (WM) have, since at least the mid-1990s, been dominated by the persistent-activity hypothesis. The past decade has seen rising concerns about the shortcomings of sustained activity as the mechanism for short-term maintenance of WM information, in the light of accumulating experimental evidence for so-called activity-silent WM and the fundamental difficulty in explaining robust multi-item WM. In consequence, alternative theories are now explored, mostly in the direction of fast synaptic plasticity as the underlying mechanism. The question of non-Hebbian vs Hebbian synaptic plasticity emerges naturally in this context. In this review we focus on fast Hebbian plasticity and trace the origins of WM theories and models building on this form of associative learning.
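
The core idea of synaptic (activity-silent) WM can be illustrated with a toy one-shot Hopfield-style sketch, which is not a model from the review: a pattern is encoded in a rapidly formed Hebbian weight trace rather than in ongoing spiking, activity then falls silent, and the item is later retrieved from a partial cue.

```python
import numpy as np

def silent_wm_demo(n=64, cue_fraction=0.5, seed=1):
    """Encode one binary pattern in a fast Hebbian weight trace (outer
    product), let activity fall silent, then recall from a partial cue."""
    rng = np.random.default_rng(seed)
    pattern = rng.choice([-1.0, 1.0], size=n)
    w = np.outer(pattern, pattern) / n      # one-shot Hebbian encoding
    np.fill_diagonal(w, 0.0)                # no self-connections
    cue = np.where(rng.random(n) < cue_fraction, pattern, 0.0)  # degraded cue
    recalled = np.sign(w @ cue)             # single retrieval step
    return pattern, recalled
```

During the delay no neuron needs to fire: the memory lives entirely in w, which is what distinguishes this mechanism from the persistent-activity hypothesis. A symmetric outer product is Hebbian in the sense discussed in the review (the weight change depends on joint pre- and postsynaptic activity).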

1.Mathematical derivation of wave propagation properties in hierarchical neural networks with predictive coding feedback dynamics

Authors:Grégory Faye, Guilhem Fouilhé, Rufin VanRullen

Abstract: Sensory perception (e.g. vision) relies on a hierarchy of cortical areas, in which neural activity propagates in both directions, to convey information not only about sensory inputs but also about cognitive states, expectations and predictions. At the macroscopic scale, neurophysiological experiments have described the corresponding neural signals as both forward and backward-travelling waves, sometimes with characteristic oscillatory signatures. It remains unclear, however, how such activity patterns relate to specific functional properties of the perceptual apparatus. Here, we present a mathematical framework, inspired by neural network models of predictive coding, to systematically investigate neural dynamics in a hierarchical perceptual system. We show that stability of the system can be systematically derived from the values of hyper-parameters controlling the different signals (related to bottom-up inputs, top-down prediction and error correction). Similarly, it is possible to determine in which direction, and at what speed neural activity propagates in the system. Different neural assemblies (reflecting distinct eigenvectors of the connectivity matrices) can simultaneously and independently display different properties in terms of stability, propagation speed or direction. We also derive continuous-limit versions of the system, both in time and in neural space. Finally, we analyze the possible influence of transmission delays between layers, and reveal the emergence of oscillations at biologically plausible frequencies.

2.Amygdala and cortical gamma band responses to emotional faces depend on the attended-to valence

Authors:Enya M. Weidner, Stephan Moratti, Sebastian Schindler, Philip Grewe, Christian G. Bien, Johanna Kissler

Abstract: The amygdala is assumed to contribute to a bottom-up attentional bias during visual processing of emotional faces. Still, how its response to emotion interacts with top-down attention is not fully understood. It is also unclear if amygdala activity and scalp EEG respond to emotion and attention in a similar way. Therefore, we studied the interaction of emotion and attention during face processing in oscillatory gamma-band activity (GBA) in the amygdala and on the scalp. Amygdala signals were recorded via intracranial EEG (iEEG) in 9 patients with epilepsy. Scalp recordings were collected from 19 healthy participants. Three randomized blocks of angry, neutral, and happy faces were presented, and either negative, neutral, or positive expressions were denoted as targets. Both groups detected happy faces fastest and most accurately. In the amygdala, the earliest effect was observed around 170 ms in high GBA (105-117.5 Hz) when neutral faces served as targets. Here, GBA was higher for emotional than neutral faces. During attention to negative faces, low GBA (< 90 Hz) increased specifically for angry faces both in the amygdala and over posterior scalp regions, albeit earlier on the scalp (60 ms) than in the amygdala (210 ms). From 570 ms, amygdala high GBA (117.5-145 Hz) was also increased for both angry and neutral, compared to happy, faces. When positive faces were the targets, GBA did not differentiate between expressions. The present data reveal that attention-independent emotion detection in amygdala high GBA may only occur during a neutral focus of attention. Top-down threat vigilance coordinates widespread low GBA, biasing stimulus processing in favor of negative faces. These results are in line with a multi-pathway model of emotion processing and help specify the role of GBA in this process by revealing how attentional focus can tune timing and amplitude of emotional GBA responses.

3.Adaptive Gated Graph Convolutional Network for Explainable Diagnosis of Alzheimer's Disease using EEG Data

Authors:Dominik Klepl, Fei He, Min Wu, Daniel J. Blackburn, Ptolemaios G. Sarrigiannis

Abstract: Graph neural network (GNN) models are increasingly being used for the classification of electroencephalography (EEG) data. However, GNN-based diagnosis of neurological disorders, such as Alzheimer's disease (AD), remains a relatively unexplored area of research. Previous studies have relied on functional connectivity methods to infer brain graph structures and used simple GNN architectures for the diagnosis of AD. In this work, we propose a novel adaptive gated graph convolutional network (AGGCN) that can provide explainable predictions. AGGCN adaptively learns graph structures by combining convolution-based node feature enhancement with a well-known correlation-based measure of functional connectivity. Furthermore, the gated graph convolution can dynamically weigh the contribution of various spatial scales. The proposed model achieves high accuracy in both eyes-closed and eyes-open conditions, indicating the stability of learned representations. Finally, we demonstrate that the proposed AGGCN model generates consistent explanations of its predictions that might be relevant for further study of AD-related alterations of brain networks.
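The correlation-based functional connectivity that AGGCN builds its graph structure from can be sketched in a few lines of numpy; the k-strongest-edge sparsification, the toy channel count, and all names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def correlation_graph(eeg, k=3):
    """eeg: array of shape (channels, samples).
    Keep each channel's k strongest |Pearson correlation| edges."""
    fc = np.abs(np.corrcoef(eeg))       # channel-by-channel functional connectivity
    np.fill_diagonal(fc, 0.0)           # remove self-loops
    adj = np.zeros_like(fc)
    for i in range(fc.shape[0]):
        nbrs = np.argsort(fc[i])[-k:]   # indices of the k strongest edges
        adj[i, nbrs] = fc[i, nbrs]
    return np.maximum(adj, adj.T)       # symmetrise the adjacency matrix

rng = np.random.default_rng(0)
eeg = rng.standard_normal((6, 500))     # toy data: 6 channels, 500 samples
adj = correlation_graph(eeg, k=2)
```

A graph convolution layer would then aggregate node features over this adjacency matrix; AGGCN additionally learns to adapt it from node features, which this static sketch omits.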

4.Altered Topological Structure of the Brain White Matter in Maltreated Children through Topological Data Analysis

Authors:Tahmineh Azizi, Moo K. Chung, Jamie Hanson, Thomas Burns, Andrew Alexander, Richard Davidson, Seth Pollak

Abstract: Childhood maltreatment may adversely affect brain development and consequently behavioral, emotional, and psychological patterns during adulthood. In this study, we propose an analytical pipeline for modeling the altered topological structure of the brain white matter in maltreated and typically developing children. We perform topological data analysis (TDA) to assess the alteration in global topology of the brain white-matter structural covariance network for child participants. We use persistent homology, an algebraic technique in TDA, to analyze topological features in the brain covariance networks constructed from structural magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI). We develop a novel framework for statistical inference based on the Wasserstein distance to assess the significance of the observed topological differences. Using these methods in comparing maltreated children to a typically developing sample, we find that maltreatment may increase homogeneity in white matter structures and thus induce higher correlations in the structural covariance; this is reflected in the topological profile. Our findings strongly demonstrate that TDA can be used as a baseline framework to model altered topological structures of the brain.
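For equal-size sets of scalar persistence values, the 2-Wasserstein distance reduces to matching sorted values, which gives a compact sketch of the distance underlying the inference framework above. This is a simplified special case (equal set sizes, no matching to the diagonal), not the paper's exact procedure:

```python
import numpy as np

def wasserstein_1d(pers_a, pers_b):
    """2-Wasserstein distance between two equal-size sets of persistence
    values; the optimal matching pairs the sorted values of each set."""
    a = np.sort(np.asarray(pers_a, float))
    b = np.sort(np.asarray(pers_b, float))
    return float(np.sqrt(np.sum((a - b) ** 2)))
```

Significance of an observed group difference could then be assessed by permuting group labels and recomputing the distance, in the spirit of the statistical inference the abstract describes.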

1.Toward Whole-Brain Minimally-Invasive Vascular Imaging

Authors:Anatole Jimenez PhysMed Paris, Bruno Osmanski PhIND, Denis Vivien PhIND, Mickael Tanter PhysMed Paris, Thomas Gaberel PhIND, Thomas Deffieux PhysMed Paris

Abstract: Imaging the brain vasculature can be critical for cerebral perfusion monitoring in the context of neurocritical care. Although ultrasensitive Doppler (UD) can provide good sensitivity to cerebral blood volume (CBV) in a large field of view, it remains difficult to perform through the skull. In this work, we investigate how a minimally invasive burr hole, performed for intracranial pressure (ICP) monitoring, could be used to map the entire brain vascular tree. We explored the use of a small motorized phased array probe with a non-implantable preclinical prototype in pigs. The scan duration (18 min) and coverage (62 $\pm$ 12% of the brain) obtained allowed detection of global CBV variations (relative in-brain Doppler decrease = -3 [-4, +16]% and increase = +1 [-3, +15]%, n = 6 and 5) and of stroke (relative in-core Doppler change = -25%, n = 1). This technology could one day be miniaturized to be implanted for brain perfusion monitoring in neurocritical care.
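The relative Doppler changes reported above are simple percent changes of Doppler power against a baseline. A minimal sketch (the function name and the example values are illustrative, though -25% matches the reported in-core stroke change):

```python
def relative_doppler_change(baseline, current):
    """Percent change in Doppler power relative to baseline (a CBV proxy)."""
    return 100.0 * (current - baseline) / baseline

# A drop from an arbitrary baseline power of 4.0 to 3.0 is a -25% change,
# the magnitude reported for the stroke core in the abstract.
change = relative_doppler_change(4.0, 3.0)
```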

2.Identifying epileptogenic abnormalities through spatial clustering of MEG interictal band power

Authors:Thomas W. Owen, Vytene Janiukstyte, Gerard R. Hall, Jonathan J. Horsley, Andrew McEvoy, Anna Miserocchi, Jane de Tisi, John S. Duncan, Fergus Rugg-Gunn, Yujiang Wang, Peter N. Taylor

Abstract: Successful epilepsy surgery depends on localising and resecting cerebral abnormalities and networks that generate seizures. Abnormalities, however, may be widely distributed across multiple discontiguous areas. We propose spatially constrained clusters as candidate areas for further investigation, and potential resection. We quantified the spatial overlap between the abnormality cluster and subsequent resection, hypothesising a greater overlap in seizure-free patients. Thirty-four individuals with refractory focal epilepsy underwent pre-surgical resting-state interictal MEG recording. Fourteen individuals were totally seizure free (ILAE 1) after surgery and 20 continued to have some seizures post-operatively (ILAE 2+). Band power abnormality maps were derived using controls as a baseline. Patient abnormalities were spatially clustered using the k-means algorithm. The tissue within the cluster containing the most abnormal region was compared with the resection volume using the Dice score. The proposed abnormality cluster overlapped with the resection in 71% of ILAE 1 patients. Conversely, an overlap only occurred in 15% of ILAE 2+ patients. This effect discriminated outcome groups well (AUC=0.82). Our novel approach identifies clusters of spatially similar tissue with high abnormality. This is clinically valuable, providing (i) a data-driven framework to validate current hypotheses of epileptogenic zone localisation or (ii) guidance for further investigation.
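The Dice score used above to compare the abnormality cluster with the resection volume is a standard overlap measure on binary masks. A minimal sketch (the toy one-dimensional masks stand in for the study's volumetric data):

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

A score of 1 means the cluster and resection coincide exactly, 0 means they are disjoint; the study's hypothesis is that seizure-free patients show higher overlap.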

3.Interictal MEG abnormalities to guide intracranial electrode implantation and predict surgical outcome

Authors:Thomas W. Owen, Vytene Janiukstyte, Gerard R. Hall, Fahmida A. Chowdhury, Beate Diehl, Andrew McEvoy, Anna Miserocchi, Jane de Tisi, John S. Duncan, Fergus Rugg-Gunn, Yujiang Wang, Peter N. Taylor

Abstract: Intracranial EEG (iEEG) is the gold standard technique for epileptogenic zone (EZ) localisation, but requires a hypothesis of which tissue is epileptogenic, guided by qualitative analysis of seizure semiology and other imaging modalities such as magnetoencephalography (MEG). We hypothesised that if quantifiable MEG band power abnormalities were sampled by iEEG, then post-resection seizure outcomes would be better. Thirty-two individuals with neocortical epilepsy underwent MEG and iEEG recordings as part of pre-surgical evaluation. Interictal MEG band power abnormalities were derived using 70 healthy controls as a normative baseline. MEG abnormality maps were compared to electrode implantation, with the spatial overlap of iEEG electrodes and MEG abnormalities recorded. Finally, we assessed whether the implantation of electrodes in abnormal tissue, and resection of the strongest abnormalities determined by MEG and iEEG, explained surgical outcome. Intracranial electrodes were implanted in brain tissue with the most abnormal MEG findings in individuals who were seizure-free post-resection (T=3.9, p=0.003). The overlap between MEG abnormalities and iEEG electrodes distinguished outcome groups moderately well (AUC=0.68). In isolation, the resection of the strongest MEG and iEEG abnormalities separated surgical outcome groups well (AUC=0.71 and AUC=0.74, respectively). A model incorporating all three features separated outcome groups best (AUC=0.80). Intracranial EEG is a key tool to delineate the EZ and help render patients seizure-free after resection. We showed that data-driven abnormalities derived from interictal MEG recordings have clinical value and may help guide electrode placement in individuals with neocortical epilepsy. Finally, our predictive model of post-operative seizure-freedom, which leverages both MEG and iEEG recordings, may aid counselling of patients on expected outcomes.
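The AUC values quoted above measure how well a scalar feature separates the two outcome groups, and are equivalent to the probability that a randomly chosen patient from one group scores above one from the other. A minimal rank-based sketch (the toy scores are illustrative, not study data):

```python
import numpy as np

def auc(pos, neg):
    """Probability that a positive-group score exceeds a negative-group
    score, with ties counted as half (the Mann-Whitney formulation)."""
    pos = np.asarray(pos, float)
    neg = np.asarray(neg, float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Toy example: overlap scores for seizure-free vs. not-seizure-free groups.
score = auc([0.9, 0.7, 0.8], [0.2, 0.7])
```

An AUC of 0.5 means the feature carries no group information; values such as the reported 0.80 indicate substantially better-than-chance separation.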

1.Regional Deep Atrophy: a Self-Supervised Learning Method to Automatically Identify Regions Associated With Alzheimer's Disease Progression From Longitudinal MRI

Authors:Mengjin Dong, Long Xie, Sandhitsu R. Das, Jiancong Wang, Laura E. M. Wisse, Robin deFlores, David A. Wolk, Paul A. Yushkevich, for the Alzheimer's Disease Neuroimaging Initiative

Abstract: Longitudinal assessment of brain atrophy, particularly in the hippocampus, is a well-studied biomarker for neurodegenerative diseases, such as Alzheimer's disease (AD). In clinical trials, estimated rates of brain atrophy can be used to track the therapeutic efficacy of disease-modifying treatments. However, most state-of-the-art measurements calculate changes directly by segmentation and/or deformable registration of MRI images, and may misreport head motion or MRI artifacts as neurodegeneration, impacting their accuracy. In our previous study, we developed a deep learning method, DeepAtrophy, that uses a convolutional neural network to quantify differences between longitudinal MRI scan pairs that are associated with time. DeepAtrophy has high accuracy in inferring temporal information from longitudinal MRI scans, such as temporal order or relative inter-scan interval. DeepAtrophy also provides an overall atrophy score that was shown to perform well as a potential biomarker of disease progression and treatment efficacy. However, DeepAtrophy is not interpretable, and it is unclear what changes in the MRI contribute to its progression measurements. In this paper, we propose Regional Deep Atrophy (RDA), which combines the temporal inference approach from DeepAtrophy with a deformable registration neural network and an attention mechanism that highlights regions in the MRI image where longitudinal changes contribute to temporal inference. RDA has similar prediction accuracy to DeepAtrophy, but its additional interpretability makes it more acceptable for use in clinical settings, and may lead to more sensitive biomarkers for disease monitoring in clinical trials of early AD.