1. Unveiling the Complexity of Neural Populations: Evaluating the Validity and Limitations of the Wilson-Cowan Model

Authors: Maryam Saadati, Saba Sadat Khodaei, Yousef Jamali

Abstract: The Wilson-Cowan population model is perhaps the most popular in the history of computational neuroscience. It captures the nonlinear mean-field dynamics of excitatory and inhibitory neuronal populations, derived via a temporal coarse-graining technique. For an appropriate range of parameters, the traditional Wilson-Cowan equations exhibit either steady-state regimes or limit-cycle oscillations. Because these equations lower the resolution of the neural system and obscure vital information, we assess the validity of mass-type model approximations for complex neural behaviors. Using a large-scale network of Hodgkin-Huxley-style neurons, we derive implicit average population dynamics based on mean-field assumptions. Comparing the microscopic neural activity with the macroscopic temporal profiles reveals a dependence on the binary state of the interacting subpopulations and on the random properties of the structural network at the Hopf bifurcation points when different synaptic weights are considered. For a substantial range of stimulus intensities, our model provides further estimates of the neural population's dynamics, ranging from simple periodic to quasi-periodic and aperiodic patterns, as well as phase-transition regimes. While this demonstrates its great potential for studying the collective behavior of individual neurons, particularly the occurrence of bifurcation phenomena, we must accept a rather limited accuracy of the Wilson-Cowan approximations, at least in some parameter regimes. Additionally, we report that the complexity and temporal diversity of neural dynamics, especially in terms of limit-cycle trajectories and synchronization, can be induced either by small heterogeneity in the degree of various types of local excitatory connectivity or by considerable diversity in the external drive to the excitatory pool.
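For readers unfamiliar with the model the abstract refers to, the sketch below integrates the classic two-population Wilson-Cowan equations with a sigmoidal gain and forward Euler stepping. It is a minimal, generic illustration; the coupling weights, gain, threshold, and drive values are standard textbook-style placeholders, not the parameterization used in the paper.

```python
# Minimal sketch of the classic Wilson-Cowan mean-field equations
# (illustrative parameters only, not taken from the paper).
import numpy as np

def sigmoid(x, a=1.2, theta=2.8):
    # Sigmoidal response function with gain a and threshold theta,
    # shifted so that S(0) = 0 as in the original formulation.
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

def wilson_cowan(P=1.25, Q=0.0, w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0,
                 tau_e=1.0, tau_i=2.0, dt=0.01, T=100.0):
    """Integrate the excitatory (E) / inhibitory (I) rate equations
    for external drives P (to E) and Q (to I)."""
    n = int(T / dt)
    E = np.zeros(n)
    I = np.zeros(n)
    for t in range(n - 1):
        dE = (-E[t] + sigmoid(w_ee * E[t] - w_ei * I[t] + P)) / tau_e
        dI = (-I[t] + sigmoid(w_ie * E[t] - w_ii * I[t] + Q)) / tau_i
        E[t + 1] = E[t] + dt * dE
        I[t + 1] = I[t] + dt * dI
    return E, I

# Sweeping the external drive P through a Hopf bifurcation switches the
# solution from a stable fixed point to limit-cycle oscillations.
E, I = wilson_cowan(P=1.25)
```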

2. Expressive dynamics models with nonlinear injective readouts enable reliable recovery of latent features from neural activity

Authors: Christopher Versteeg, Andrew R. Sedler, Jonathan D. McCart, Chethan Pandarinath

Abstract: The advent of large-scale neural recordings has enabled new methods to discover the computational mechanisms of neural circuits by understanding the rules that govern how their state evolves over time. While these neural dynamics cannot be directly measured, they can typically be approximated by low-dimensional models in a latent space. How these models represent the mapping from latent space to neural space can affect the interpretability of the latent representation. We show that typical choices for this mapping (e.g., linear or MLP) often lack the property of injectivity, meaning that changes in latent state are not obligated to affect activity in the neural space. During training, non-injective readouts incentivize the invention of dynamics that misrepresent the underlying system and the computation it performs. Combining an injective, flow-based readout with prior work on interpretable latent dynamics models, we created the Ordinary Differential equations autoencoder with Injective Nonlinear readout (ODIN), which captures latent dynamical systems that are nonlinearly embedded into observed neural activity via an approximately injective nonlinear mapping. We show that ODIN can recover nonlinearly embedded systems from simulated neural activity, even when the nature of the system and embedding are unknown. Additionally, ODIN enables the unsupervised recovery of underlying dynamical features (e.g., fixed points) and embedding geometry. When applied to biological neural recordings, ODIN can reconstruct neural activity with comparable accuracy to previous state-of-the-art methods while using substantially fewer latent dimensions. Overall, ODIN's accuracy in recovering ground-truth latent features and ability to accurately reconstruct neural activity with low dimensionality make it a promising method for distilling interpretable dynamics that can help explain neural computation.
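To make the idea of an injective readout concrete, the sketch below shows one standard construction of such a mapping: zero-pad a low-dimensional latent state to the neural dimension (injective) and then apply invertible affine-coupling layers (bijective), so the composite map is injective by construction and distinct latent states cannot collapse onto the same neural activity pattern. This is a conceptual illustration under stated assumptions, not the authors' ODIN implementation; all class and function names here are hypothetical.

```python
# Conceptual sketch of an injective nonlinear readout from a low-dimensional
# latent space to a higher-dimensional neural space (not the ODIN codebase).
import numpy as np

rng = np.random.default_rng(0)

class AffineCoupling:
    """One invertible coupling layer: half the coordinates are transformed by
    an affine map whose scale/shift depend on the other half (toy dense
    networks stand in for the conditioner)."""
    def __init__(self, dim, hidden=32):
        self.half = dim // 2
        self.W_h = rng.normal(0, 0.1, (hidden, self.half))
        self.W_s = rng.normal(0, 0.1, (dim - self.half, hidden))
        self.W_t = rng.normal(0, 0.1, (dim - self.half, hidden))

    def forward(self, x):
        x1, x2 = x[..., :self.half], x[..., self.half:]
        h = np.tanh(x1 @ self.W_h.T)
        log_s = np.tanh(h @ self.W_s.T)     # bounded log-scale keeps the map well-conditioned
        t = h @ self.W_t.T
        y2 = x2 * np.exp(log_s) + t         # invertible affine transform of x2 given x1
        return np.concatenate([x1, y2], axis=-1)

def injective_readout(z, layers, neural_dim):
    """Map latent states z (batch x d) to neural space (batch x neural_dim)."""
    pad = np.zeros((z.shape[0], neural_dim - z.shape[1]))
    x = np.concatenate([z, pad], axis=-1)   # injective zero-padding
    for layer in layers:
        x = layer.forward(x)                # bijective coupling layer
        x = x[..., ::-1]                    # reverse coordinates (a permutation) so
                                            # every dimension is eventually transformed
    return x

latent = rng.normal(size=(100, 3))          # e.g., a 3-D latent trajectory
layers = [AffineCoupling(30) for _ in range(3)]
rates = injective_readout(latent, layers, neural_dim=30)
```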