Related Articles

Towards next-gen smart manufacturing systems: the explainability revolution

The paper shares the author’s perspective on the role of explainable AI in the evolving landscape of AI-driven smart manufacturing decisions. It first offers a critical view of why explainable AI has been slow to gain adoption in manufacturing, then discusses its role and relevance in inspiring the scientific understanding and discoveries needed to achieve full autonomy. Finally, to standardize the quantification of explainability, a new Transparency–Cohesion–Comprehensibility (TCC) evaluation framework is proposed and demonstrated.

Predictive learning as the basis of the testing effect

A prominent learning phenomenon is the testing effect, whereby testing enhances retention more than studying. Emergent frameworks propose fundamental (Hebbian and predictive) learning principles as its basis. Predictive learning posits that learning occurs based on the contrast (error) between a prediction and the feedback on that prediction (prediction error). Here, we propose that in testing (but not studying) scenarios, participants predict potential answers, and the contrast of these predictions with the subsequent feedback yields a prediction error, which facilitates testing-based learning. To investigate this, we developed an associative memory network incorporating Hebbian and/or predictive learning, together with an experimental design in which human participants studied or tested English-Swahili word pairs followed by recognition. Three behavioral experiments (N = 80, 81, 62) showed robust testing effects when feedback was provided. Model fitting (of 10 different models) suggested that only models incorporating predictive learning can account for the breadth of data associated with the testing effect. Our data and model suggest that predictive learning underlies the testing effect.
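To make the Hebbian-versus-predictive distinction concrete, the following is a minimal sketch assuming a single-layer linear associator; the variable names, dimensions, and learning rate are illustrative assumptions, not the authors' actual model. A Hebbian rule strengthens weights from the co-activation of cue and presented feedback alone, whereas a predictive rule first retrieves a prediction from the cue and updates weights in proportion to the resulting prediction error.

```python
# Minimal illustrative sketch (not the authors' network): Hebbian vs.
# predictive (delta-rule) updates for one cue-target association.
import numpy as np

rng = np.random.default_rng(0)
n = 50                            # assumed dimensionality of cue/target representations
W = np.zeros((n, n))              # associative weights mapping cue -> target
cue = rng.standard_normal(n)      # e.g. a Swahili word representation (assumed encoding)
target = rng.standard_normal(n)   # e.g. the English translation given as feedback
lr = 0.1                          # assumed learning rate

def hebbian_update(W, cue, target, lr):
    # "Study"-like rule: strengthen co-activation of cue and presented target,
    # regardless of what the network currently predicts.
    return W + lr * np.outer(target, cue)

def predictive_update(W, cue, target, lr):
    # "Test"-like rule: first predict an answer from the cue, then learn from
    # the prediction error signalled by the feedback.
    prediction = W @ cue
    error = target - prediction
    return W + lr * np.outer(error, cue)

W_hebb = hebbian_update(W, cue, target, lr)
W_pred = predictive_update(W, cue, target, lr)
```

In this toy setting, the predictive update shrinks as the retrieved prediction approaches the feedback, which is the basic intuition behind attributing the testing effect to prediction-error-driven learning.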

Explainable AI reveals Clever Hans effects in unsupervised learning models

Unsupervised learning has become an essential building block of artificial intelligence systems. The representations it produces, for example, in foundation models, are critical to a wide variety of downstream applications. It is therefore important to carefully examine unsupervised models to ensure not only that they produce accurate predictions on the available data but also that these accurate predictions do not arise from a Clever Hans (CH) effect. Here, using specially developed explainable artificial intelligence techniques and applying them to popular representation learning and anomaly detection models for image data, we show that CH effects are widespread in unsupervised learning. In particular, through use cases on medical and industrial inspection data, we demonstrate that CH effects systematically lead to significant performance loss of downstream models under plausible dataset shifts or reweighting of different data subgroups. Our empirical findings are enriched by theoretical insights, which point to inductive biases in the unsupervised learning machine as a primary source of CH effects. Overall, our work sheds light on unexplored risks associated with practical applications of unsupervised learning and suggests ways to systematically mitigate CH effects, thereby making unsupervised learning more robust.

Understanding learning through uncertainty and bias

Learning allows humans and other animals to make predictions about the environment that facilitate adaptive behavior. Casting learning as predictive inference can shed light on normative cognitive mechanisms that improve predictions under uncertainty. Drawing on normative learning models, we illustrate how learning should be adjusted to different sources of uncertainty, including perceptual uncertainty, risk, and uncertainty due to environmental changes. Such models explain many hallmarks of human learning in terms of specific statistical considerations that come into play when updating predictions under uncertainty. However, humans also display systematic learning biases that deviate from normative models, as studied in computational psychiatry. Some biases can be explained as normative inference conditioned on inaccurate prior assumptions about the environment, while others reflect approximations to Bayesian inference aimed at reducing cognitive demands. These biases offer insights into cognitive mechanisms underlying learning and how they might go awry in psychiatric illness.
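As a deliberately simplified illustration of how a normative learner can adjust its learning rate to different sources of uncertainty, the sketch below implements a standard Kalman-filter update for a slowly drifting latent quantity; the function name, noise values, and scenario are assumptions for illustration and are not taken from the article.

```python
# Illustrative sketch: a Kalman-filter-style update in which the effective
# learning rate (gain) grows with belief uncertainty and environmental drift,
# and shrinks with observation noise (risk).
import numpy as np

def kalman_update(belief, belief_var, observation, obs_var, drift_var):
    # Prediction step: uncertainty grows to allow for possible environmental change.
    prior_var = belief_var + drift_var
    # Gain acts as an uncertainty-weighted learning rate.
    gain = prior_var / (prior_var + obs_var)
    prediction_error = observation - belief
    new_belief = belief + gain * prediction_error
    new_var = (1.0 - gain) * prior_var
    return new_belief, new_var

belief, belief_var = 0.0, 1.0
observations = np.random.default_rng(1).normal(loc=2.0, scale=0.5, size=20)
for obs in observations:
    belief, belief_var = kalman_update(belief, belief_var, obs,
                                       obs_var=0.25, drift_var=0.05)
```

The same structure makes it easy to see how systematic biases could arise: a learner that assumes the wrong drift or observation noise will apply a gain that is normatively appropriate for its (inaccurate) prior assumptions but miscalibrated for the true environment.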

Emerging trends in SERS-based veterinary drug detection: multifunctional substrates and intelligent data approaches

Veterinary drug residues in poultry and livestock products present persistent challenges to food safety, necessitating precise and efficient detection methods. Surface-enhanced Raman scattering (SERS) has been identified as a powerful tool for veterinary drug residue analysis due to its high sensitivity and specificity. However, the development of reliable SERS substrates and the interpretation of complex spectral data remain significant obstacles. This review traces the development of SERS substrates, categorizing them into metal-based, rigid, and flexible types and highlighting the emerging trend toward multifunctional substrates. The diverse application scenarios and detection requirements for these substrates are also discussed, with a focus on their use in veterinary drug detection. Furthermore, the integration of deep learning techniques into SERS-based detection is explored, including substrate structure design optimization, optical property prediction, spectral preprocessing, and both qualitative and quantitative spectral analyses. Finally, key limitations are briefly outlined, such as challenges in selecting reporter molecules, data imbalance, and computational demands. Future trends and directions for improving SERS-based veterinary drug detection are proposed.
