Related Articles
Innovating beyond electrophysiology through multimodal neural interfaces
Neural circuits distributed across different brain regions mediate how neural information is processed and integrated, resulting in complex cognitive capabilities and behaviour. To understand the dynamics and interactions of neural circuits, it is crucial to capture the complete spectrum of neural activity, ranging from the fast action potentials of individual neurons to the population dynamics driven by slow brain-wide oscillations. In this Review, we discuss how advances in electrical and optical recording technologies, coupled with the emergence of machine learning methodologies, present a unique opportunity to unravel the complex dynamics of the brain. Although great progress has been made in both electrical and optical neural recording technologies, neither alone provides a comprehensive picture of neuronal activity at high spatiotemporal resolution. To address this challenge, multimodal experiments that integrate the complementary advantages of different techniques hold great promise. However, they are still hindered by the absence of multimodal data analysis methods capable of providing unified and interpretable explanations of the complex neural dynamics distinctly encoded in these modalities. Combining multimodal studies with advanced data analysis methods will offer novel perspectives for addressing unresolved questions in basic neuroscience and for developing treatments for various neurological disorders.
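As one minimal, hedged illustration of the kind of multimodal analysis this Review calls for (not a method from the Review itself), the sketch below aligns two simultaneously recorded modalities with canonical correlation analysis; the channel counts and simulated signals are purely hypothetical.

```python
# Illustrative sketch only: extract shared latent dynamics from two simultaneously
# recorded modalities (e.g. electrophysiology and optical imaging) with CCA.
# Channel counts and the simulated shared signal are hypothetical.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_timepoints = 1000

# Shared slow latent signal observed by both modalities, plus modality-specific noise.
latent = np.cumsum(rng.standard_normal((n_timepoints, 2)), axis=0)
ephys = latent @ rng.standard_normal((2, 32)) + 0.5 * rng.standard_normal((n_timepoints, 32))
imaging = latent @ rng.standard_normal((2, 200)) + 0.5 * rng.standard_normal((n_timepoints, 200))

# CCA finds paired projections that maximize correlation between the two modalities.
cca = CCA(n_components=2)
ephys_c, imaging_c = cca.fit_transform(ephys, imaging)

for k in range(2):
    r = np.corrcoef(ephys_c[:, k], imaging_c[:, k])[0, 1]
    print(f"canonical component {k}: correlation = {r:.2f}")
```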
An integrative data-driven model simulating C. elegans brain, body and environment interactions
The behavior of an organism is influenced by the complex interplay between its brain, body and environment. Existing data-driven models focus on either the brain or the body–environment. Here we present BAAIWorm, an integrative data-driven model of Caenorhabditis elegans, which consists of two submodels: the brain model and the body–environment model. The brain model was built from multicompartment neuron models with realistic morphology, connectome and neural population dynamics based on experimental data. In parallel, the body–environment model combined a lifelike body with a three-dimensional physical environment. Through the closed-loop interaction between the two submodels, BAAIWorm reproduced the realistic zigzag movement toward attractors observed in C. elegans. Leveraging this model, we investigated the impact of neural system structure on both neural activity and behavior. Consequently, BAAIWorm can enhance our understanding of how the brain controls the body to interact with its surrounding environment.
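The closed-loop structure described above can be illustrated with a toy sketch; the code below is not the BAAIWorm implementation, and its dynamics are deliberately simplistic stand-ins for the paper's multicompartment neuron models and three-dimensional physics.

```python
# Toy sketch of the closed loop: a brain submodel maps sensory input to a motor
# command, and a body-environment submodel turns that command into movement and
# new sensory input. Attractor location and dynamics are hypothetical.
import numpy as np

ATTRACTOR = np.array([5.0, 5.0])  # location of a chemical attractant (hypothetical)

def sense(position):
    """Sensory input: attractant concentration falls off with distance."""
    return np.exp(-np.linalg.norm(position - ATTRACTOR) / 5.0)

def brain_step(concentration, prev_concentration, heading):
    """Toy 'brain': turn more sharply when the concentration is dropping."""
    turn = 0.6 if concentration < prev_concentration else 0.1
    return heading + np.random.uniform(-turn, turn)

def body_env_step(position, heading, speed=0.1):
    """Toy 'body-environment': advance the body along its heading."""
    return position + speed * np.array([np.cos(heading), np.sin(heading)])

position, heading = np.array([0.0, 0.0]), 0.0
prev_c = sense(position)
for t in range(500):
    c = sense(position)                           # environment -> sensory input
    heading = brain_step(c, prev_c, heading)      # brain -> motor command
    position = body_env_step(position, heading)   # body moves in the environment
    prev_c = c

print("final distance to attractant:", np.linalg.norm(position - ATTRACTOR))
```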
MARBLE: interpretable representations of neural population dynamics using geometric deep learning
The dynamics of neuron populations commonly evolve on low-dimensional manifolds. Thus, we need methods that learn the dynamical processes over neural manifolds to infer interpretable and consistent latent representations. We introduce a representation learning method, MARBLE, which decomposes on-manifold dynamics into local flow fields and maps them into a common latent space using unsupervised geometric deep learning. In simulated nonlinear dynamical systems, recurrent neural networks and experimental single-neuron recordings from primates and rodents, we discover emergent low-dimensional latent representations that parametrize high-dimensional neural dynamics during gain modulation, decision-making and changes in the internal state. These representations are consistent across neural networks and animals, enabling the robust comparison of cognitive computations. Extensive benchmarking demonstrates state-of-the-art within- and across-animal decoding accuracy of MARBLE compared to current representation learning approaches, with minimal user input. Our results suggest that a manifold structure provides a powerful inductive bias to develop decoding algorithms and assimilate data across experiments.
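The core idea of decomposing on-manifold dynamics into local flow fields can be sketched as follows; this is a conceptual illustration rather than the MARBLE implementation, with PCA standing in for the unsupervised geometric deep learning step and all data shapes chosen arbitrarily.

```python
# Conceptual sketch (not the authors' code): break neural trajectories into local
# samples of the on-manifold flow field, then embed those local descriptors in a
# shared low-dimensional latent space. PCA is a stand-in for the learned embedding.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Simulated population activity: a low-dimensional manifold embedded in a
# higher-dimensional "neural" space (shapes are arbitrary).
t = np.linspace(0, 4 * np.pi, 2000)
manifold = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)   # (T, 3)
activity = manifold @ rng.standard_normal((3, 50))             # (T, 50) neurons

# Local flow field: the velocity of the trajectory at each point.
velocity = np.gradient(activity, axis=0)

# Local descriptors: pool the flow vectors of each point's k nearest neighbours,
# so each sample summarizes the dynamics of a small patch of the manifold.
k = 20
nbrs = NearestNeighbors(n_neighbors=k).fit(activity)
_, idx = nbrs.kneighbors(activity)
descriptors = velocity[idx].reshape(len(activity), -1)          # (T, k*50)

# Embed the local flow descriptors in a common low-dimensional latent space.
latent = PCA(n_components=3).fit_transform(descriptors)
print("latent representation shape:", latent.shape)
```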
Constructing future behavior in the hippocampal formation through composition and replay
The hippocampus is critical for memory, imagination and constructive reasoning. Recent models have suggested that its neuronal responses can be well explained by state spaces that model the transitions between experiences. Here we use simulations and hippocampal recordings to reconcile these views. We show that if state spaces are constructed compositionally from existing building blocks, or primitives, hippocampal responses can be interpreted as compositional memories, binding these primitives together. Critically, this enables agents to behave optimally in new environments with no new learning, inferring behavior directly from the composition. We predict a role for hippocampal replay in building and consolidating these compositional memories. We test these predictions in two datasets by showing that replay events from newly discovered landmarks induce and strengthen new remote firing fields. When the landmark is moved, replay builds a new firing field at the same vector to the new location. Together, these findings provide a framework for reasoning about compositional memories and demonstrate that such memories are formed in hippocampal replay.
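A toy sketch of the compositional idea follows; it is not the paper's model, but it illustrates how binding a landmark primitive to a remembered goal vector lets behavior transfer to a new environment with no new learning (names and geometry are hypothetical).

```python
# Toy illustration of composition: a memory binds a landmark primitive to a fixed
# goal vector, and behaviour toward the goal is read out directly from that
# composition, so moving the landmark immediately relocates the behaviour.
import numpy as np

def composed_policy(position, landmark, goal_offset):
    """Step toward the goal defined by composing the landmark with a remembered offset."""
    goal = landmark + goal_offset
    step = goal - position
    norm = np.linalg.norm(step)
    return step / norm if norm > 0 else step

def run(landmark, goal_offset, start=(0.0, 0.0), steps=200, speed=0.1):
    position = np.array(start, dtype=float)
    for _ in range(steps):
        position = position + speed * composed_policy(position, landmark, goal_offset)
    return position

goal_offset = np.array([1.0, -2.0])  # the goal sits at this fixed vector from the landmark

# Same composition, two environments: when the landmark moves, behaviour transfers.
for landmark in (np.array([3.0, 4.0]), np.array([-5.0, 2.0])):
    end = run(landmark, goal_offset)
    print("landmark", landmark, "-> reached", np.round(end, 2),
          "target", landmark + goal_offset)
```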
Correlating measures of hierarchical structures in artificial neural networks with their performance
This study employs the recently developed Ladderpath approach, rooted in Algorithmic Information Theory (AIT) and characterizing the hierarchical and nested relationships among repeating substructures, to explore the structure-function relationship in neural networks, in particular multilayer perceptrons (MLPs). The order-rate metric η, derived from this approach, measures structural orderliness: when η lies in the middle range (around 0.5), the structure exhibits the richest hierarchical relationships, corresponding to the highest complexity. We hypothesize that the highest structural complexity correlates with optimal functionality. Our experiments support this hypothesis in several ways: networks with η values in the middle range show superior performance, training tends to naturally adjust η towards this range, and initializing neural networks with η values in this middle range appears to boost performance. Intriguingly, these findings align with observations in other distinct systems, including chemical molecules and protein sequences, hinting at a hidden regularity encapsulated by this theoretical framework.
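The kind of structure-performance correlation analysis described above can be sketched roughly as below; note that `order_rate_eta` is a hypothetical placeholder rather than the actual Ladderpath order-rate computation, and the dataset and architectures are arbitrary.

```python
# Sketch of the correlation analysis only, NOT the Ladderpath algorithm: train several
# MLPs, compute a structural score for each, and correlate that score with test accuracy.
import numpy as np
from scipy.stats import pearsonr
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def order_rate_eta(model):
    """Hypothetical placeholder for the Ladderpath order-rate η (NOT the real metric)."""
    weights = np.concatenate([w.ravel() for w in model.coefs_])
    # Placeholder proxy for structural heterogeneity of the trained weights.
    return float(np.mean(np.abs(weights) > np.abs(weights).mean()))

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

etas, accuracies = [], []
for seed in range(5):
    mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=seed)
    mlp.fit(X_train, y_train)
    etas.append(order_rate_eta(mlp))
    accuracies.append(mlp.score(X_test, y_test))

r, p = pearsonr(etas, accuracies)
print(f"correlation between structural score and accuracy: r={r:.2f}, p={p:.2f}")
```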