Related Articles

Leveraging large language models to assist philosophical counseling: prospective techniques, value, and challenges

Large language models (LLMs) have emerged as transformative tools with the potential to revolutionize philosophical counseling. By harnessing their advanced natural language processing and reasoning capabilities, LLMs offer innovative solutions to overcome limitations inherent in traditional counseling approaches, such as counselor scarcity, difficulties in identifying mental health issues, subjective outcome assessment, and cultural adaptation challenges. In this study, we explore cutting-edge technical strategies, including prompt engineering, fine-tuning, and retrieval-augmented generation, to integrate LLMs into the counseling process. Our analysis demonstrates that LLM-assisted systems can provide counselor recommendations, streamline session evaluations, broaden service accessibility, and improve cultural adaptation. We also critically examine challenges related to user trust, data privacy, and the inherent inability of current AI systems to genuinely understand or empathize. Overall, this work presents both theoretical insights and practical guidelines for the responsible development and deployment of AI-assisted philosophical counseling practices.
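The abstract names prompt engineering, fine-tuning, and retrieval-augmented generation as candidate integration strategies but does not describe an implementation. Purely as an illustrative sketch, not drawn from the paper, a prompt-engineering step for counselor-facing decision support might look like the following; the prompt wording and the `complete` stub are assumptions to be replaced by whichever LLM client is actually used.

```python
# Illustrative sketch only: a structured prompt for counselor-facing suggestions.
# `complete` is a stand-in for any LLM client (hosted API or local model);
# it is NOT an interface described in the paper.

def complete(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned response here."""
    return "(model output would appear here)"

def counselor_suggestions(session_notes: str, client_concern: str) -> str:
    # Prompt engineering: fix the assistant's role, constrain the output
    # format, and state that the output is support for a human counselor,
    # not advice delivered to the client.
    prompt = (
        "You are an assistant supporting a philosophical counselor.\n"
        "You do not diagnose and you do not address the client directly.\n\n"
        f"Client concern: {client_concern}\n"
        f"Session notes: {session_notes}\n\n"
        "List three philosophical framings or questions the counselor could "
        "explore next, each with a one-sentence rationale."
    )
    return complete(prompt)

if __name__ == "__main__":
    print(counselor_suggestions(
        session_notes="Client reports loss of direction after a career change.",
        client_concern="Meaning and purpose",
    ))
```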

Failed mobility transition in an ideal setting and implications for building a green city

The mobility sector significantly contributes to the climate crisis, impacting several Sustainable Development Goals (SDGs) such as good health (SDG 3), sustainable cities (SDG 11), climate action (SDG 13), and life on land (SDG 15). Despite broad consensus on the need for mobility transformation, practical implementation is contentious due to diverse stakeholder interests. Tübingen, a green showcase city in Germany, exemplifies this challenge. Although the city appears ideal for green mobility, a tramway project was rejected in a referendum. This case study highlights that mobility transition is not just a technical issue but a discourse-communicative challenge, emphasising the role of socially embedded narratives. The study aims to explain the referendum's rejection by analysing discourses, identifying argumentation patterns, and providing insights for future projects. Using Hajer's Discourse Coalitions approach and Discourse Network Analysis, the study found that the discourse was dynamic and polarised. The pro-tramway coalition's communication deficiencies and the opposing coalition's strong narrative connectivity influenced the outcome. Recommendations for effective communication strategies in future projects are provided.

Diverse misinformation: impacts of human biases on detection of deepfakes on networks

Social media platforms often assume that users can self-correct against misinformation. However, social media users are not equally susceptible to all misinformation as their biases influence what types of misinformation might thrive and who might be at risk. We call “diverse misinformation” the complex relationships between human biases and demographics represented in misinformation. To investigate how users’ biases impact their susceptibility and their ability to correct each other, we analyze classification of deepfakes as a type of diverse misinformation. We chose deepfakes as a case study for three reasons: (1) their classification as misinformation is more objective; (2) we can control the demographics of the personas presented; (3) deepfakes are a real-world concern with associated harms that must be better understood. Our paper presents an observational survey (N = 2016) where participants are exposed to videos and asked questions about their attributes, not knowing some might be deepfakes. Our analysis investigates the extent to which different users are duped and which perceived demographics of deepfake personas tend to mislead. We find that accuracy varies by demographics, and participants are generally better at classifying videos featuring personas that match their own demographics. We extrapolate from these results to understand the potential population-level impacts of these biases using a mathematical model of the interplay between diverse misinformation and crowd correction. Our model suggests that diverse contacts might provide “herd correction” where friends can protect each other. Altogether, human biases and the attributes of misinformation matter greatly, but having a diverse social group may help reduce susceptibility to misinformation.

Psychological booster shots targeting memory increase long-term resistance against misinformation

An increasing number of real-world interventions aim to preemptively protect or inoculate people against misinformation. Inoculation research has demonstrated positive effects on misinformation resilience when measured immediately after treatment via messages, games, or videos. However, very little is currently known about their long-term effectiveness and the mechanisms by which such treatment effects decay over time. We start by proposing three possible models of the mechanisms driving resistance to misinformation. We then report five pre-registered longitudinal experiments (N_total = 11,759) that investigate the effectiveness of psychological inoculation interventions over time as well as their underlying mechanisms. We find that text-based and video-based inoculation interventions can remain effective for one month, whereas game-based interventions appear to decay more rapidly, and that memory-enhancing booster interventions can help sustain the otherwise diminishing effects of counter-misinformation interventions. Finally, we propose an integrated memory-motivation model, concluding that misinformation researchers would benefit from integrating knowledge from the cognitive science of memory to design better psychological interventions that counter misinformation durably over time and at scale.

Evaluating search engines and large language models for answering health questions

Search engines (SEs) have traditionally been primary tools for information seeking, but large language models (LLMs) are emerging as powerful alternatives, particularly for question-answering tasks. This study compares the performance of four popular SEs, seven LLMs, and retrieval-augmented generation (RAG) variants in answering 150 health-related questions from the TREC Health Misinformation (HM) Track. Results reveal that SEs correctly answer 50–70% of questions, often hindered by retrieved results that do not address the health question. LLMs deliver higher accuracy, correctly answering about 80% of questions, though their performance is sensitive to input prompts. RAG methods significantly enhance the effectiveness of smaller LLMs, improving accuracy by up to 30% through the integration of retrieved evidence.
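The abstract reports that retrieval augmentation lifts smaller LLMs' accuracy by up to 30% but does not specify the pipeline used. A minimal RAG-style sketch, assuming a toy keyword retriever, a tiny in-memory corpus, and a placeholder LLM call; none of these names, passages, or functions come from the study.

```python
# Illustrative RAG sketch: retrieve supporting passages, then condition the
# LLM's answer on them. The corpus, scoring, and `complete` stub are toy
# placeholders, not the study's actual retrieval setup.

def complete(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return "(model output would appear here)"

CORPUS = [
    "Vitamin C has not been shown to cure the common cold.",
    "Regular handwashing reduces the spread of respiratory infections.",
    "Antibiotics are not effective against viral illnesses such as influenza.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_rag(question: str) -> str:
    # Concatenate the retrieved evidence into the prompt so the model's
    # yes/no/unclear verdict is grounded in it.
    evidence = "\n".join(f"- {p}" for p in retrieve(question))
    prompt = (
        "Answer the health question using only the evidence below. "
        "Reply with 'yes', 'no', or 'unclear', then a one-sentence justification.\n\n"
        f"Evidence:\n{evidence}\n\nQuestion: {question}"
    )
    return complete(prompt)

if __name__ == "__main__":
    print(answer_with_rag("Does vitamin C cure the common cold?"))
```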
