Related Articles
Understanding learning through uncertainty and bias
Learning allows humans and other animals to make predictions about the environment that facilitate adaptive behavior. Casting learning as predictive inference can shed light on normative cognitive mechanisms that improve predictions under uncertainty. Drawing on normative learning models, we illustrate how learning should be adjusted to different sources of uncertainty, including perceptual uncertainty, risk, and uncertainty due to environmental changes. Such models explain many hallmarks of human learning in terms of specific statistical considerations that come into play when updating predictions under uncertainty. However, humans also display systematic learning biases that deviate from normative models, as studied in computational psychiatry. Some biases can be explained as normative inference conditioned on inaccurate prior assumptions about the environment, while others reflect approximations to Bayesian inference aimed at reducing cognitive demands. These biases offer insights into cognitive mechanisms underlying learning and how they might go awry in psychiatric illness.
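To make concrete how a learning rate might normatively track different sources of uncertainty, here is a minimal Python sketch of a Kalman-filter-style update. It is not the model from the article; the parameter names (noise_var standing in for risk, change_var for uncertainty due to environmental change) are illustrative assumptions.

```python
# Minimal sketch (not the article's model): a Kalman-filter-style update in
# which the learning rate reflects the balance between estimation uncertainty
# and observation noise ("risk"), with extra uncertainty injected when the
# environment may have changed. All parameter names are illustrative.

def update_belief(mean, variance, observation, noise_var=1.0, change_var=0.0):
    """One step of predictive inference on a noisy, possibly changing quantity."""
    # Possible environmental change adds uncertainty to the prior prediction.
    prior_var = variance + change_var
    # Learning rate is high when estimation uncertainty is large relative to noise.
    learning_rate = prior_var / (prior_var + noise_var)
    prediction_error = observation - mean
    new_mean = mean + learning_rate * prediction_error
    new_var = (1.0 - learning_rate) * prior_var
    return new_mean, new_var, learning_rate

# Example: more expected change (change_var) -> larger learning rate.
m, v = 0.0, 1.0
for obs in [2.0, 1.5, 8.0]:
    m, v, lr = update_belief(m, v, obs, noise_var=1.0, change_var=0.5)
    print(f"estimate={m:.2f}, uncertainty={v:.2f}, learning rate={lr:.2f}")
```

In this sketch, learning slows as estimates become precise and speeds up again when change is expected, which is the qualitative pattern the abstract describes for uncertainty-adjusted learning.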
Predictive learning as the basis of the testing effect
A prominent learning phenomenon is the testing effect, whereby testing enhances retention more than restudying. Emerging frameworks propose fundamental (Hebbian and predictive) learning principles as its basis. Predictive learning posits that learning is driven by the contrast between a prediction and the feedback on that prediction (the prediction error). Here, we propose that in testing (but not studying) scenarios, participants predict potential answers, and the contrast between these predictions and the subsequent feedback yields a prediction error, which facilitates testing-based learning. To investigate this, we developed an associative memory network incorporating Hebbian and/or predictive learning, together with an experimental design in which human participants studied or were tested on English-Swahili word pairs followed by recognition. Three behavioral experiments (N = 80, 81, 62) showed robust testing effects when feedback was provided. Model fitting (of 10 different models) suggested that only models incorporating predictive learning can account for the breadth of data associated with the testing effect. Our data and model suggest that predictive learning underlies the testing effect.
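To make the contrast between the two learning principles concrete, here is a toy Python sketch of Hebbian versus predictive (delta-rule) weight updates in a small associative network. It is an illustrative simplification, not the fitted network or any of the 10 models from the study; the array sizes and function names are invented for the example.

```python
import numpy as np

# Illustrative toy: a Hebbian update strengthens co-active cue/answer units
# regardless of what the network predicted, whereas a predictive (delta-rule)
# update learns from the mismatch between the retrieved answer and the feedback.

rng = np.random.default_rng(0)
n_cue, n_ans = 8, 8
W = np.zeros((n_ans, n_cue))          # associative weights: cue -> answer

cue = rng.normal(size=n_cue)          # e.g., representation of a Swahili word
target = rng.normal(size=n_ans)       # e.g., representation of its translation

def hebbian_update(W, cue, target, lr=0.1):
    # "Study": strengthen the association between the presented cue and answer.
    return W + lr * np.outer(target, cue)

def predictive_update(W, cue, target, lr=0.1):
    # "Test with feedback": retrieve a prediction, then learn from the error.
    prediction = W @ cue
    error = target - prediction       # prediction error drives the update
    return W + lr * np.outer(error, cue)

W_study = hebbian_update(W, cue, target)
W_test = predictive_update(W, cue, target)
```

The difference the abstract emphasizes is visible in the update rules: only the predictive update depends on what the network actually retrieved, so testing (retrieval plus feedback) generates an error signal that pure studying does not.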
Machine learning empowered coherent Raman imaging and analysis for biomedical applications
In situ and in vivo visualization and analysis of functional, endogenous biomolecules in living systems have had a pivotal impact on our understanding of biology and medicine. An increasingly adopted approach is molecular vibrational spectroscopy, which offers notable advantages such as label-free imaging, high spectral density, high sensitivity, and molecular specificity. Nonetheless, analyzing and processing the intricate, multi-dimensional imaging data to extract interpretable and actionable information remains a fundamental challenge. In contrast to conventional multivariate methods, machine learning has recently gained considerable attention for its ability to discern essential features in massive datasets. Here, we present a comprehensive review of the latest advances in the application of machine learning to molecular spectroscopic imaging. We also discuss notable attributes of spectroscopic imaging modalities and explore their broader impact on other imaging techniques.
Two types of motifs enhance human recall and generalization of long sequences
Whether it is listening to a piece of music, learning a new language, or solving a mathematical equation, people often acquire abstract notions such as motifs and variables, manifested in musical themes, grammatical categories, or mathematical symbols. How do we create abstract representations of sequences? Are these abstract representations useful for memory recall? In addition to learning transition probabilities, chunking, and tracking ordinal positions, we propose that humans also use abstractions to arrive at efficient representations of sequences. We propose and study two abstraction categories: projectional motifs and variable motifs. Projectional motifs capture a common theme underlying distinct sequence instances. Variable motifs contain symbols representing sequence entities that can change. In two sequence recall experiments, we train participants to remember sequences with projectional and variable motifs, respectively, and examine whether motif training benefits the recall of novel sequences sharing the same motif. Our results suggest that training on projectional and variable motifs improves transfer recall accuracy relative to control groups. We show that a model that chunks sequences in an abstract motif space can learn and transfer more efficiently than models that learn chunks or associations at a superficial level. Our study suggests that humans construct efficient sequential memory representations according to the two types of abstraction we propose, and that creating these abstractions benefits learning and out-of-distribution generalization. Our study paves the way for a deeper understanding of human abstraction learning and generalization.
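As a toy illustration of what a variable motif could look like computationally, the sketch below treats a motif as a template with variable slots that must bind consistently within one sequence, so distinct surface sequences share a single abstract representation. This is an expository simplification under assumed conventions, not the authors' chunking model.

```python
# Toy illustration (not the authors' model): a "variable motif" is a sequence
# template with slots that can bind to different items.

def matches_motif(sequence, motif):
    """Check whether a concrete sequence instantiates a motif with variables.

    Motif elements starting with '?' are variables; a variable must bind to
    the same item everywhere it appears within one sequence.
    """
    if len(sequence) != len(motif):
        return False
    bindings = {}
    for item, slot in zip(sequence, motif):
        if slot.startswith("?"):                     # variable slot
            if bindings.setdefault(slot, item) != item:
                return False
        elif slot != item:                           # literal slot must match exactly
            return False
    return True

motif = ["?x", "B", "?x", "C"]                       # abstract template
print(matches_motif(["A", "B", "A", "C"], motif))    # True: ?x binds to A
print(matches_motif(["D", "B", "D", "C"], motif))    # True: ?x binds to D
print(matches_motif(["A", "B", "D", "C"], motif))    # False: inconsistent ?x
```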
Generative language models exhibit social identity biases
Social identity biases, particularly the tendency to favor one’s own group (ingroup solidarity) and derogate other groups (outgroup hostility), are deeply rooted in human psychology and social behavior. However, it is unknown whether such biases are also present in artificial intelligence systems. Here we show that large language models (LLMs) exhibit patterns of social identity bias similar to those of humans. By administering sentence completion prompts (for instance, ‘We are…’) to 77 different LLMs, we demonstrate that nearly all base models and some instruction-tuned and preference-tuned models display clear ingroup favoritism and outgroup derogation. These biases manifest both in controlled experimental settings and in naturalistic human–LLM conversations. However, we find that careful curation of training data and specialized fine-tuning can substantially reduce bias levels. These findings have important implications for developing more equitable artificial intelligence systems and highlight the urgent need to understand how human–LLM interactions might reinforce existing social biases.
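A hedged sketch of the sentence-completion paradigm described in the abstract, assuming the Hugging Face transformers text-generation pipeline with GPT-2 as a stand-in model (the study itself spans 77 LLMs and additionally scores completions for favoritism or derogation, a step omitted here):

```python
# Sketch of ingroup ("We are...") vs. outgroup ("They are...") sentence
# completion prompts. GPT-2 is used only as an example model; classifying
# the completions as positive or negative would require a further step.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

for prompt in ["We are", "They are"]:        # ingroup vs. outgroup framing
    completions = generator(
        prompt,
        max_new_tokens=15,
        num_return_sequences=3,
        do_sample=True,
    )
    print(f"\nPrompt: {prompt!r}")
    for c in completions:
        print("  ", c["generated_text"])
```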