Related Articles

A unified acoustic-to-speech-to-language embedding space captures the neural basis of natural language processing in everyday conversations

This study introduces a unified computational framework connecting acoustic, speech and word-level linguistic structures to study the neural basis of everyday conversations in the human brain. We used electrocorticography to record neural signals across 100 h of speech production and comprehension as participants engaged in open-ended real-life conversations. We extracted low-level acoustic, mid-level speech and contextual word embeddings from a multimodal speech-to-text model (Whisper). We developed encoding models that linearly map these embeddings onto brain activity during speech production and comprehension. Remarkably, this model accurately predicts neural activity at each level of the language processing hierarchy across hours of new conversations not used in training the model. The internal processing hierarchy in the model is aligned with the cortical hierarchy for speech and language processing, where sensory and motor regions better align with the model’s speech embeddings, and higher-level language areas better align with the model’s language embeddings. The Whisper model captures the temporal sequence of language-to-speech encoding before word articulation (speech production) and speech-to-language encoding post articulation (speech comprehension). The embeddings learned by this model outperform symbolic models in capturing neural activity supporting natural speech and language. These findings support a paradigm shift towards unified computational models that capture the entire processing hierarchy for speech comprehension and production in real-world conversations.
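For a concrete picture of what an electrode-wise linear encoding model of this kind looks like, the sketch below maps word-aligned embeddings to simulated neural activity with ridge regression and scores held-out prediction accuracy. The array shapes, random data, and scikit-learn ridge choice are illustrative assumptions, not the authors' pipeline; in practice the embeddings would come from Whisper's internal layers and the targets from ECoG electrodes.

```python
# Minimal sketch of a linear encoding model: predict electrode activity from
# word-aligned embeddings. Shapes and data are illustrative placeholders,
# not the authors' actual pipeline.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_words, emb_dim, n_electrodes = 5000, 512, 100     # hypothetical sizes
X = rng.standard_normal((n_words, emb_dim))         # embeddings per word (e.g., from Whisper)
Y = rng.standard_normal((n_words, n_electrodes))    # neural signal per word and electrode

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# One regularized linear map per electrode; RidgeCV picks the penalty by cross-validation.
model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# Encoding performance: correlation between predicted and held-out activity per electrode.
r = np.array([pearsonr(Y_te[:, e], Y_hat[:, e])[0] for e in range(n_electrodes)])
print(f"mean held-out correlation: {r.mean():.3f}")
```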

Language measures correlate with other measures used to study emotion

Researchers are increasingly using language measures to study emotion, yet less is known about whether language relates to other measures often used to study emotion. Building on previous work that focuses on associations between language and self-report, we test associations between language and a broader range of measures (self-report, observer report, facial cues, vocal cues). Furthermore, we examine associations across different dictionaries (LIWC-22, NRC, Lexical Suite, ANEW, VADER) used to estimate valence (i.e., positive versus negative emotion) or discrete emotions (i.e., anger, fear, sadness) in language. Associations were tested in three large, multimodal datasets (Ns = 193–1856; average word count = 316.7–2782.8). Language consistently related to observer report and consistently related to self-report in two of the three datasets. Statistically significant associations between language and facial cues emerged for language measures of valence but not for language measures of discrete emotions. Language did not consistently show significant associations with vocal cues. Results generally did not vary significantly across dictionaries. The current research suggests that language measures (in particular, language measures of valence) are correlated with a range of other measures used to study emotion. Therefore, researchers may wish to use language to study emotion when other measures are unavailable or impractical for their research question.
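As a rough illustration of the dictionary approach to scoring valence in language, the sketch below uses the VADER scorer and correlates the resulting scores with a self-report rating. The toy data, column names, and the choice of VADER alone are assumptions for illustration; the study's actual dictionaries, preprocessing, and datasets are not reproduced here.

```python
# Sketch: estimate transcript valence with a dictionary/rule-based scorer (VADER)
# and correlate it with self-reported valence. Data and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

df = pd.DataFrame({
    "transcript": ["I had a wonderful day", "This is awful and I am upset", "It was fine, I guess"],
    "self_report_valence": [6.5, 1.8, 4.2],   # e.g., a 1-7 rating
})

analyzer = SentimentIntensityAnalyzer()
# VADER's compound score ranges from -1 (most negative) to +1 (most positive).
df["language_valence"] = df["transcript"].apply(
    lambda t: analyzer.polarity_scores(t)["compound"]
)

r, p = pearsonr(df["language_valence"], df["self_report_valence"])
print(f"language-self-report correlation: r = {r:.2f}, p = {p:.3f}")
```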

Three mechanisms of language comprehension are revealed through cluster analysis of individuals with language deficits

Analysis of linguistic abilities that are concurrently impaired in individuals with language deficits allows identification of a shared underlying mechanism. If any two linguistic abilities are mediated by the same underlying mechanism, then both abilities will be absent if this mechanism is broken. Clustering techniques automatically arrange these abilities according to their co-occurrence and therefore group together abilities mediated by the same mechanism. This study builds upon the discovery of three distinct mechanisms of language comprehension in 31,845 autistic individuals1. The current clustering analysis of a more diverse group of individuals with language impairments identified three mechanisms identical to those found previously: (1) the most basic command-language-comprehension mechanism; (2) the intermediate modifier-language-comprehension mechanism, mediating comprehension of color, size, and number modifiers; and (3) the most advanced syntactic-language-comprehension mechanism. This discovery calls for mapping of the three empirically defined language-comprehension mechanisms in the context of cognitive neuroscience, which is the main goal of this study.
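The co-occurrence logic behind this kind of analysis can be pictured with a small sketch: abilities are clustered by how similarly they are impaired across individuals. The ability labels, simulated data, and the Jaccard-distance/hierarchical-clustering choices below are assumptions for illustration, not the study's exact method.

```python
# Sketch of the co-occurrence idea: cluster linguistic abilities by how often they are
# impaired together across individuals. Ability names and data are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

abilities = ["follow command", "understand color modifier", "understand size modifier",
             "understand number modifier", "understand spatial preposition",
             "understand a sentence with recursion"]

rng = np.random.default_rng(0)
# rows = individuals, columns = abilities; 1 = intact, 0 = impaired (simulated here)
X = rng.integers(0, 2, size=(500, len(abilities)))

# Distance between two abilities = Jaccard dissimilarity of their impairment patterns.
dist = pdist(X.T, metric="jaccard")
Z = linkage(dist, method="average")

# Cut the dendrogram into three clusters, mirroring the three reported mechanisms.
labels = fcluster(Z, t=3, criterion="maxclust")
for ability, cluster in zip(abilities, labels):
    print(f"cluster {cluster}: {ability}")
```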

Dopaminergic modulation and dosage effects on brain state dynamics and working memory component processes in Parkinson’s disease

Parkinson’s disease (PD) is primarily diagnosed through its characteristic motor deficits, yet it also encompasses progressive cognitive impairments that profoundly affect quality of life. While dopaminergic medications are routinely prescribed to manage motor symptoms in PD, their influence extends to cognitive functions as well. Here we investigate how dopaminergic medication influences aberrant brain circuit dynamics associated with the encoding, maintenance and retrieval phases of working memory (WM) tasks. PD participants, both on and off dopaminergic medication, and healthy controls performed a Sternberg WM task during fMRI scanning. We employ a Bayesian state-space computational model to delineate brain state dynamics related to different task phases. Importantly, a within-subject design allows us to examine individual differences in the effects of dopaminergic medication on brain circuit dynamics and task performance. We find that dopaminergic medication alters connectivity within prefrontal-basal ganglia-thalamic circuits, with changes correlating with enhanced task performance. Dopaminergic medication also restores engagement of task-phase-specific brain states, enhancing task performance. Critically, we identify an “inverted-U-shaped” relationship between medication dosage, brain state dynamics, and task performance. Our study provides valuable insights into the dynamic neural mechanisms underlying individual differences in dopamine treatment response in PD, paving the way for more personalized therapeutic strategies.
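The dosage finding can be illustrated with a minimal sketch of an inverted-U test: regress performance on dose and dose squared and inspect the quadratic term. The simulated data and ordinary-least-squares setup below are assumptions for illustration only; the study's Bayesian state-space modelling of brain states is not reproduced here.

```python
# Sketch of testing an "inverted-U" dose-response: a reliably negative quadratic
# coefficient is consistent with performance peaking at an intermediate dose.
# Data are simulated, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
dose = rng.uniform(0, 10, size=60)                                      # hypothetical dose units
performance = -0.4 * (dose - 5.0) ** 2 + rng.normal(0, 1.5, size=60)    # peak near mid-dose

X = sm.add_constant(np.column_stack([dose, dose ** 2]))
fit = sm.OLS(performance, X).fit()
print(fit.params)      # [intercept, linear, quadratic]; quadratic < 0 suggests an inverted U
print(fit.pvalues[2])  # significance of the quadratic term
```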

Role of GPX3+ astrocytes in breast cancer brain metastasis activated by circulating tumor cell exosomes

Brain metastasis from breast cancer (BMBC) contributes significantly to mortality, yet its mechanisms remain unclear. This study investigates the activation of GPX3+ astrocytes by circulating tumor cell (CTC)-derived exosomes in the metastatic process. Using a mouse model of BMBC, we performed single-cell RNA sequencing (scRNA-seq) and metabolomics to explore the role of GPX3+ astrocytes in the brain microenvironment. We found that CTCs activate these astrocytes, promoting IL-1β production and Th17 cell differentiation, both of which are crucial for the formation of the metastatic niche. Conditional knockout of GPX3 reduced brain metastasis and extended survival, highlighting its importance in metastasis. Our findings uncover a novel mechanism by which CTCs activate GPX3+ astrocytes to drive breast cancer brain metastasis, suggesting new therapeutic targets for intervention.
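For orientation, the sketch below shows one plausible way a GPX3+ astrocyte population could be flagged in a scRNA-seq analysis with scanpy. The input file, clustering parameters, and expression threshold are hypothetical and this is not the authors' pipeline.

```python
# Sketch: cluster cells and flag GPX3-expressing cells in a single-cell dataset.
# File path, parameters, and threshold are hypothetical, for illustration only.
import scanpy as sc

adata = sc.read_h5ad("brain_microenvironment.h5ad")  # hypothetical input

# Standard preprocessing and clustering.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata)
sc.tl.leiden(adata, key_added="cluster")

# Flag cells expressing Gpx3 above an arbitrary threshold; astrocyte clusters would be
# identified separately from canonical markers such as Aqp4 and Gfap.
adata.obs["gpx3_positive"] = (adata[:, "Gpx3"].to_df()["Gpx3"] > 1.0).values
print(adata.obs.groupby("cluster")["gpx3_positive"].mean())
```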
