Trust and Safety in LLM-Based Mental Health Support: A Scoping Review and a Conceptual Governance Framework
Publication Year: 1404 (Solar Hijri)
Document Type: Conference Paper
Language: English
This paper is 9 pages long and available for download in PDF format.
National Scientific Document ID: INDEXCONF08_013
Indexing Date: 20 Bahman 1404 (Solar Hijri)
Abstract:
Large language models (LLMs) are rapidly entering mental health settings, offering conversational support, psychoeducation, and clinical assistance. Their adoption, however, intensifies long-standing concerns about trust, safety, and accountability, particularly given risks such as hallucinations, uncertain crisis response, bias, and opaque reasoning. This paper conducts a scoping review of emerging empirical, conceptual, and ethical literature on LLM-based mental health tools and synthesizes five recurring themes: growing potential and use cases; trust shaped by anthropomorphism and uncertainty; safety threats related to hallucination and crisis handling; system-level vulnerabilities involving privacy, bias, and accountability; and persistent gaps in governance. Drawing on these insights and established AI ethics frameworks, we propose a multi-level governance model spanning the model, application, clinical, and ecosystem layers. The framework identifies six cross-cutting requirements (safety, transparency, privacy, equity, human oversight, and accountability), offering a structured foundation for responsible development and deployment of LLMs in mental health care.
Authors
Hadi Behjati
Department of Computer Engineering, Aliabad Katoul Branch, Islamic Azad University, Aliabad Katoul, Iran
Leila Ajam
Department of Computer Engineering, Aliabad Katoul Branch, Islamic Azad University, Aliabad Katoul, Iran