Related Articles

Harnessing artificial intelligence to fill global shortfalls in biodiversity knowledge

Large, well-described gaps exist in both what we know and what we need to know to address the biodiversity crisis. Artificial intelligence (AI) offers new potential for filling these knowledge gaps, but where the biggest and most influential gains could be made remains unclear. To date, biodiversity-related uses of AI have largely focused on the tracking and monitoring of wildlife populations. Rapid progress is being made in the use of AI to build phylogenetic trees and species distribution models. However, AI also has considerable unrealized potential in the re-evaluation of important ecological questions, especially those that require the integration of disparate and inherently complex data types, such as images, video, text, audio and DNA. This Review describes the current and potential future use of AI to address seven clearly defined shortfalls in biodiversity knowledge. Recommended steps for AI-based improvements include the re-use of existing image data and the development of novel paradigms, including the collaborative generation of new testable hypotheses. The resulting expansion of biodiversity knowledge could lead to science spanning from genes to ecosystems — advances that might represent our best hope for meeting the rapidly approaching 2030 targets of the Global Biodiversity Framework.

Preserving and combining knowledge in robotic lifelong reinforcement learning

Humans can continually accumulate knowledge and develop increasingly complex behaviours and skills throughout their lives, a capability known as ‘lifelong learning’. Although this capability is considered an essential mechanism of general intelligence, recent advances in artificial intelligence predominantly excel in narrow, specialized domains and generally lack lifelong learning. Here we introduce a robotic lifelong reinforcement learning framework that addresses this gap by developing a knowledge space inspired by the Bayesian non-parametric domain. In addition, we enhance the agent’s semantic understanding of tasks by integrating language embeddings into the framework. Our proposed embodied agent can consistently accumulate knowledge from a continuous stream of one-time feeding tasks. Furthermore, our agent can tackle challenging real-world long-horizon tasks by combining and reapplying the knowledge acquired from the original task stream. The proposed framework advances our understanding of the robotic lifelong learning process and may inspire the development of more broadly applicable intelligence.

Two types of motifs enhance human recall and generalization of long sequences

Whether it is listening to a piece of music, learning a new language, or solving a mathematical equation, people often acquire abstract notions in the form of motifs and variables, manifested in musical themes, grammatical categories, or mathematical symbols. How do we create abstract representations of sequences? Are these abstract representations useful for memory recall? In addition to learning transition probabilities, chunking, and tracking ordinal positions, we propose that humans also use abstractions to arrive at efficient representations of sequences. We propose and study two abstraction categories: projectional motifs and variable motifs. Projectional motifs find a common theme underlying distinct sequence instances. Variable motifs contain symbols representing sequence entities that can change. In two sequence recall experiments, we train participants to remember sequences containing projectional and variable motifs, respectively, and examine whether motif training benefits the recall of novel sequences sharing the same motif. Our results suggest that training on projectional and variable motifs improves transfer recall accuracy relative to control groups. We show that a model that chunks sequences in an abstract motif space may learn and transfer more efficiently than models that learn chunks or associations at a superficial level. Our study suggests that humans construct efficient sequential memory representations according to the two types of abstraction we propose, and that creating these abstractions benefits learning and out-of-distribution generalization. These findings pave the way for a deeper understanding of human abstraction learning and generalization.

Modelling and design of transcriptional enhancers

Transcriptional enhancers are genomic elements that contain critical information for the regulation of gene expression. This information is encoded through precisely arranged transcription factor-binding sites. Genomic sequence-to-function models, computational models that take DNA sequences as input and predict gene regulatory features, have become essential for unravelling the complex combinatorial rules that govern the cell-type-specific activities of enhancers. These models function as biological ‘oracles’, capable of accurately predicting the activity of novel DNA sequences. These oracles can be used to optimize DNA sequences, yielding designed synthetic enhancers with tailored cell-type-specific or cell-state-specific activities. In parallel, generative artificial intelligence is rapidly advancing in genomics and enhancer design. Synthetic enhancers hold great promise for a wide range of biomedical applications, from facilitating fundamental research to enabling gene therapies.

Deep learning-based image analysis in muscle histopathology using photo-realistic synthetic data

Artificial intelligence (AI), specifically deep learning (DL), has revolutionized biomedical image analysis, but its efficacy is limited by the need for large, representative, high-quality datasets with manual annotations. Although recent research on synthetic data generated with AI-based generative models has shown promising results in tackling this problem, several challenges remain, including a lack of interpretability and the need for vast amounts of real data. This study introduces a new approach, SYNTA, for the generation of photo-realistic synthetic biomedical image data to address the challenges associated with state-of-the-art generative models and DL-based image analysis.
