Related Articles
Light-matter coupling via quantum pathways for spontaneous symmetry breaking in van der Waals antiferromagnetic semiconductors
Light-matter interaction simultaneously alters both the original material and the incident light. Light not only reveals material details but also activates coupling mechanisms. The coupling has been demonstrated mechanically, for instance, through the patterning of metallic antennas, resulting in the emergence of plasmonic quasiparticles and enabling wavefront engineering of light via the generalized Snell’s law. However, quantum-mechanical light-matter interaction, wherein photons coherently excite distinct quantum pathways, remains poorly understood. Here, we report on quantum interference between light-induced quantum pathways through the orbital quantum levels and the spin continuum. This quantum interference immediately breaks the symmetry of the hexagonal antiferromagnetic semiconductor FePS3. Below the Néel temperature, we observe the emergence of birefringence and linear dichroism, namely, a quantum anisotropy arising from the quantum interference, which is further enhanced by the thickness effect. We explain the direct relevance of the quantum anisotropy to a quantum phase transition via spontaneous symmetry breaking in a Mexican hat potential. Our findings suggest material modulation via selective quantum pathways through quantum light-matter interaction.
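The spontaneous symmetry breaking invoked here is conventionally pictured as an order parameter settling into a Mexican hat potential; a schematic Landau form (a standard illustration, not an equation taken from the paper) is:

```latex
% Landau free energy with a continuous ("Mexican hat") degeneracy
V(\phi) \;=\; -\,\mu^{2}\,|\phi|^{2} \;+\; \lambda\,|\phi|^{4},
\qquad \mu^{2},\,\lambda > 0 .
% Minima form a degenerate ring:  |\phi| = \mu/\sqrt{2\lambda}.
% The system's selection of one particular phase of \phi on this ring
% spontaneously breaks the continuous symmetry, singling out a
% preferred axis -- the origin of an anisotropy such as the observed
% birefringence and linear dichroism.
```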
Predictive learning as the basis of the testing effect
A prominent learning phenomenon is the testing effect, meaning that testing enhances retention more than studying. Emergent frameworks propose fundamental (Hebbian and predictive) learning principles as its basis. Predictive learning posits that learning occurs based on the contrast (error) between a prediction and the feedback on that prediction (prediction error). Here, we propose that in testing (but not studying) scenarios, participants predict potential answers, and the contrast of these predictions with the subsequent feedback yields a prediction error, which facilitates testing-based learning. To investigate this, we developed an associative memory network incorporating Hebbian and/or predictive learning, together with an experimental design where human participants studied or tested English-Swahili word pairs followed by recognition. Three behavioral experiments (N = 80, 81, 62) showed robust testing effects when feedback was provided. Model fitting (of 10 different models) suggested that only models incorporating predictive learning can account for the breadth of data associated with the testing effect. Our data and model suggest that predictive learning underlies the testing effect.
Dynamic thermalization on noisy quantum hardware
Emulating thermal observables on a digital quantum computer is essential for quantum simulation of many-body physics. However, thermalization typically requires a large system size because a thermal bath must be incorporated, whilst the limited resources of near-term digital quantum processors allow for simulating only relatively small systems. We show that thermal observables and fluctuations may be obtained for a small closed system without a thermal bath. Thermal observables occur upon classically averaging quantum mechanical observables over randomized variants of their time evolution that run independently on a digital quantum processor. Using an IBM quantum computer, we experimentally find thermal occupation probabilities with finite positive and negative temperatures defined by the initial state’s energy. Averaging over random evolutions facilitates error mitigation, with the noise contributing to the temperature in the simulated observables. This result fosters probing the dynamical emergence of equilibrium properties of matter at finite temperatures on noisy intermediate-scale quantum hardware.
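The core idea — classically averaging observables over randomized time evolutions of a small closed system — can be illustrated with a toy classical simulation (this is a generic dephasing sketch, not the authors' hardware protocol): averaging basis occupations over random evolution times washes out coherences and recovers the equilibrium (diagonal-ensemble) values fixed by the initial state.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy closed system: a random 4-level Hamiltonian (Hermitian matrix).
dim = 4
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2
evals, evecs = np.linalg.eigh(H)

# Initial basis state |0>.
psi0 = np.zeros(dim, complex)
psi0[0] = 1.0

def occupations_at(t):
    # Computational-basis occupation probabilities after evolving to time t.
    phases = np.exp(-1j * evals * t)
    psi_t = evecs @ (phases * (evecs.conj().T @ psi0))
    return np.abs(psi_t) ** 2

# Classical average over many randomized evolution times.
samples = [occupations_at(t) for t in rng.uniform(0.0, 200.0, size=5000)]
avg = np.mean(samples, axis=0)

# Equilibrium prediction (diagonal ensemble): sum_n |c_n|^2 |<i|E_n>|^2,
# determined entirely by the initial state's overlaps c_n with the
# energy eigenstates.
c = evecs.conj().T @ psi0
diag_ensemble = (np.abs(evecs) ** 2) @ (np.abs(c) ** 2)

print(avg, diag_ensemble)
```

The averaged occupations approach the diagonal-ensemble values as the off-diagonal interference terms dephase, mirroring how equilibrium properties can emerge from randomized unitary dynamics without any bath.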
Understanding learning through uncertainty and bias
Learning allows humans and other animals to make predictions about the environment that facilitate adaptive behavior. Casting learning as predictive inference can shed light on normative cognitive mechanisms that improve predictions under uncertainty. Drawing on normative learning models, we illustrate how learning should be adjusted to different sources of uncertainty, including perceptual uncertainty, risk, and uncertainty due to environmental changes. Such models explain many hallmarks of human learning in terms of specific statistical considerations that come into play when updating predictions under uncertainty. However, humans also display systematic learning biases that deviate from normative models, as studied in computational psychiatry. Some biases can be explained as normative inference conditioned on inaccurate prior assumptions about the environment, while others reflect approximations to Bayesian inference aimed at reducing cognitive demands. These biases offer insights into cognitive mechanisms underlying learning and how they might go awry in psychiatric illness.
Two types of motifs enhance human recall and generalization of long sequences
Whether it is listening to a piece of music, learning a new language, or solving a mathematical equation, people often acquire abstract notions in the sense of motifs and variables—manifested in musical themes, grammatical categories, or mathematical symbols. How do we create abstract representations of sequences? Are these abstract representations useful for memory recall? In addition to learning transition probabilities, chunking, and tracking ordinal positions, we propose that humans also use abstractions to arrive at efficient representations of sequences. We propose and study two abstraction categories: projectional motifs and variable motifs. Projectional motifs find a common theme underlying distinct sequence instances. Variable motifs contain symbols representing sequence entities that can change. In two sequence recall experiments, we train participants to remember sequences with projectional and variable motifs, respectively, and examine whether motif training benefits the recall of novel sequences sharing the same motif. Our results suggest that training on projectional and variable motifs improves transfer recall accuracy relative to control groups. We show that a model that chunks sequences in an abstract motif space may learn and transfer more efficiently, compared to models that learn chunks or associations on a superficial level. Our study suggests that humans construct efficient sequential memory representations according to the two types of abstraction we propose, and creating these abstractions benefits learning and out-of-distribution generalization. Our study paves the way for a deeper understanding of human abstraction learning and generalization.
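The notion of a variable motif — a template whose slots can be filled by different surface items — can be sketched with a minimal matcher (a hypothetical illustration, not the authors' chunking model): a novel sequence "shares the motif" if some consistent binding of the variable reproduces it.

```python
def matches_variable_motif(seq, template):
    # 'X' in the template is a variable slot; any single item may fill
    # it, but every 'X' within one sequence must bind to the same item.
    if len(seq) != len(template):
        return False
    binding = None
    for item, slot in zip(seq, template):
        if slot == 'X':
            if binding is None:
                binding = item          # first occurrence fixes the binding
            elif item != binding:
                return False            # inconsistent variable binding
        elif item != slot:
            return False                # literal mismatch
    return True

# Novel sequences sharing the abstract motif A _ B _ with a repeated variable.
print(matches_variable_motif(['A', 'dog', 'B', 'dog'], ['A', 'X', 'B', 'X']))  # True
print(matches_variable_motif(['A', 'dog', 'B', 'cat'], ['A', 'X', 'B', 'X']))  # False
```

A learner storing the template plus one binding compresses every such sequence into two pieces of information, which is the efficiency argument for abstraction-based transfer.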