Related Articles
Perceptual and semantic maps in individual humans share structural features that predict creative abilities
Building perceptual and associative links between internal representations is a fundamental neural process, allowing individuals to structure their knowledge about the world and combine it to enable efficient and creative behavior. In this context, the representational similarity between pairs of represented entities is thought to reflect their associative linkage at different levels of sensory processing, ranging from lower-order perceptual levels up to higher-order semantic levels. While specific structural features of semantic representational maps have recently been linked to the creative abilities of individual humans, it remains unclear whether these features are also shared by lower-level, perceptual maps. Here, we address this question by presenting 148 human participants with psychophysical scaling tasks, using two sets of independent and qualitatively distinct stimuli, to probe representational map structures in the lower-order auditory and the higher-order semantic domain. We quantify individual representational features with graph-theoretical measures and demonstrate a robust correlation between representational structures in the perceptual auditory and the semantic modality. We further show that these shared representational features predict multiple standard verbal measures of creativity, with both semantic and auditory features reflecting creative abilities. Our findings indicate that the general, modality-overarching representational geometry of an individual is a relevant underpinning of creative thought.
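As a concrete illustration of the graph-theoretical approach this abstract describes, the sketch below builds a graph from one participant's pairwise similarity ratings and extracts structural features. The thresholding rule and the chosen measures (clustering, efficiency, modularity) are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: similarity ratings -> representational graph -> features.
# Thresholding rule and feature choices are assumptions for illustration.
import numpy as np
import networkx as nx

def representational_graph(similarity: np.ndarray, density: float = 0.3) -> nx.Graph:
    """Build an unweighted graph that keeps the strongest pairwise links."""
    n = similarity.shape[0]
    iu = np.triu_indices(n, k=1)
    strengths = similarity[iu]
    k = int(density * strengths.size)        # number of edges to keep
    cutoff = np.sort(strengths)[-k]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i, j, s in zip(iu[0], iu[1], strengths):
        if s >= cutoff:
            g.add_edge(i, j)
    return g

def graph_features(g: nx.Graph) -> dict:
    """Summary statistics that could serve as per-participant features."""
    communities = nx.algorithms.community.greedy_modularity_communities(g)
    return {
        "clustering": nx.average_clustering(g),
        "efficiency": nx.global_efficiency(g),
        "modularity": nx.algorithms.community.modularity(g, communities),
    }

# Example with a random 20x20 similarity matrix; computing these features
# once per modality and correlating them across modalities would test
# whether map structure is shared, as the study reports.
rng = np.random.default_rng(0)
sim = rng.random((20, 20)); sim = (sim + sim.T) / 2
print(graph_features(representational_graph(sim)))
```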
Interracial contact shapes racial bias in the learning of person-knowledge
During impression formation, perceptual cues facilitate social categorization, while person-knowledge can promote individuation and enhance person memory. Although there is an extensive literature on the cross-race recognition deficit, observed when racial ingroup faces are recognized more accurately than outgroup faces, it is unclear whether a similar deficit exists when recalling individuating information about outgroup members. To better understand how perceived race can bias person memory, the present study examined how self-identified White perceivers’ interracial contact affects the learning of perceptual cues and person-knowledge about perceived Black and White others over five sessions of training. While person-knowledge facilitated face recognition accuracy for low-contact perceivers, face recognition accuracy for high-contact perceivers did not differ based on person-knowledge availability. The results indicate a bias toward better recall of ingroup person-knowledge, which decreased for high-contact perceivers across the five-day training but simultaneously increased for low-contact perceivers. Overall, the elimination of racial bias in the recall of person-knowledge among high-contact perceivers, amid a persistent cross-race deficit in face recognition, suggests that contact may have a greater impact on the recall of person-knowledge than on face recognition.
An integrative data-driven model simulating C. elegans brain, body and environment interactions
The behavior of an organism is influenced by the complex interplay between its brain, body and environment. Existing data-driven models focus on either the brain or the body–environment. Here we present BAAIWorm, an integrative data-driven model of Caenorhabditis elegans that consists of two submodels: a brain model and a body–environment model. The brain model was built from multicompartment neuron models with realistic morphology, connectome and neural population dynamics based on experimental data. The body–environment model, in turn, combined a lifelike body with a three-dimensional physical environment. Through the closed-loop interaction between the two submodels, BAAIWorm reproduced the realistic zigzag movement toward attractors observed in C. elegans. Leveraging this model, we investigated the impact of neural system structure on both neural activities and behaviors. Consequently, BAAIWorm can enhance our understanding of how the brain controls the body to interact with its surrounding environment.
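The closed-loop coupling between the two submodels can be sketched with a toy example: a placeholder "brain" maps sensed input to a motor command, and a placeholder "body–environment" converts motor commands into movement and new sensory input. Both classes below are deliberately simplified stand-ins invented for illustration; BAAIWorm itself uses multicompartment neurons and a 3D physics simulation.

```python
# Toy closed-loop sketch of brain <-> body-environment coupling.
# All dynamics here are placeholders, not BAAIWorm's actual models.
import numpy as np

class Brain:
    """Stand-in neural submodel: maps sensory input to a motor drive."""
    def __init__(self, n_neurons: int = 8, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.normal(size=n_neurons)    # sensory weights
        self.w_out = rng.normal(size=n_neurons)   # motor readout
        self.state = np.zeros(n_neurons)

    def step(self, sensory: float, dt: float = 0.01) -> float:
        self.state += dt * (-self.state + np.tanh(self.w_in * sensory))
        return float(self.w_out @ self.state)     # motor command

class BodyEnvironment:
    """Stand-in body-environment submodel: 1D movement toward an attractor."""
    def __init__(self, attractor_pos: float = 1.0):
        self.pos = 0.0
        self.attractor = attractor_pos

    def step(self, motor: float, dt: float = 0.01) -> float:
        self.pos += dt * motor                    # body moves
        return -abs(self.pos - self.attractor)    # sensed concentration

# Closed loop: brain output drives the body; the body's sensed
# environment feeds back into the brain on the next step.
brain, world = Brain(), BodyEnvironment()
sensory = 0.0
for _ in range(1000):
    motor = brain.step(sensory)
    sensory = world.step(motor)
print(f"final position: {world.pos:.3f}")
```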
MARBLE: interpretable representations of neural population dynamics using geometric deep learning
The dynamics of neuron populations commonly evolve on low-dimensional manifolds. Thus, we need methods that learn the dynamical processes over neural manifolds to infer interpretable and consistent latent representations. We introduce a representation learning method, MARBLE, which decomposes on-manifold dynamics into local flow fields and maps them into a common latent space using unsupervised geometric deep learning. In simulated nonlinear dynamical systems, recurrent neural networks and experimental single-neuron recordings from primates and rodents, we discover emergent low-dimensional latent representations that parametrize high-dimensional neural dynamics during gain modulation, decision-making and changes in the internal state. These representations are consistent across neural networks and animals, enabling the robust comparison of cognitive computations. Extensive benchmarking demonstrates state-of-the-art within- and across-animal decoding accuracy of MARBLE compared to current representation learning approaches, with minimal user input. Our results suggest that a manifold structure provides a powerful inductive bias to develop decoding algorithms and assimilate data across experiments.
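The data representation MARBLE operates on, local flow fields sampled around points on the neural manifold, can be sketched as follows. The actual method then embeds these patches with an unsupervised geometric deep-learning network, which this illustration omits; the function below is an assumption-laden sketch, not the published implementation.

```python
# Sketch of building local flow-field patches from a neural trajectory.
# Finite-difference velocities and k-nearest-neighbor patches are
# illustrative choices; the embedding network itself is omitted.
import numpy as np
from scipy.spatial import cKDTree

def local_flow_fields(trajectory: np.ndarray, k: int = 10) -> np.ndarray:
    """Return, for each point, the flow vectors of its k nearest neighbors.

    trajectory: (T, d) array of neural states sampled over time.
    """
    velocities = np.gradient(trajectory, axis=0)   # finite-difference flow
    tree = cKDTree(trajectory)
    _, idx = tree.query(trajectory, k=k)           # (T, k) neighbor indices
    return velocities[idx]                         # (T, k, d) local patches

# Example: a noisy 2D limit cycle standing in for population activity.
t = np.linspace(0, 10 * np.pi, 2000)
rng = np.random.default_rng(1)
traj = np.c_[np.cos(t), np.sin(t)] + 0.02 * rng.normal(size=(2000, 2))
patches = local_flow_fields(traj)
print(patches.shape)   # (2000, 10, 2): one local flow field per anchor point
```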
A unified acoustic-to-speech-to-language embedding space captures the neural basis of natural language processing in everyday conversations
This study introduces a unified computational framework connecting acoustic, speech and word-level linguistic structures to study the neural basis of everyday conversations in the human brain. We used electrocorticography to record neural signals across 100 h of speech production and comprehension as participants engaged in open-ended real-life conversations. We extracted low-level acoustic, mid-level speech and contextual word embeddings from a multimodal speech-to-text model (Whisper). We developed encoding models that linearly map these embeddings onto brain activity during speech production and comprehension. Remarkably, this model accurately predicts neural activity at each level of the language processing hierarchy across hours of new conversations not used in training the model. The internal processing hierarchy in the model is aligned with the cortical hierarchy for speech and language processing, where sensory and motor regions better align with the model’s speech embeddings, and higher-level language areas better align with the model’s language embeddings. The Whisper model captures the temporal sequence of language-to-speech encoding before word articulation (speech production) and speech-to-language encoding post articulation (speech comprehension). The embeddings learned by this model outperform symbolic models in capturing neural activity supporting natural speech and language. These findings support a paradigm shift towards unified computational models that capture the entire processing hierarchy for speech comprehension and production in real-world conversations.
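A linear encoding model of the kind described, a ridge map from word-level embeddings to electrode activity evaluated on held-out conversations, might look like the following sketch. The array shapes, regularization grid and cross-validation scheme are illustrative assumptions, not the study's exact pipeline.

```python
# Sketch of a linear encoding model: embeddings -> neural activity,
# cross-validated so that test conversations are unseen during training.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import GroupKFold

def fit_encoding_model(embeddings, brain, groups):
    """Predict neural activity from embeddings, held out by conversation.

    embeddings: (n_words, n_dims) Whisper-style features per word
    brain:      (n_words, n_electrodes) neural response per word
    groups:     (n_words,) conversation id, so test folds are new talks
    """
    scores = []
    for train, test in GroupKFold(n_splits=5).split(embeddings, brain, groups):
        model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(
            embeddings[train], brain[train])
        pred = model.predict(embeddings[test])
        # per-electrode correlation between predicted and actual activity
        r = [np.corrcoef(pred[:, e], brain[test][:, e])[0, 1]
             for e in range(brain.shape[1])]
        scores.append(r)
    return np.mean(scores, axis=0)   # mean encoding accuracy per electrode
```

Comparing these per-electrode accuracies across embedding types (acoustic, speech, language) is one way to map which cortical sites align with which level of the model's hierarchy, as the study reports.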