Illuminate: Depression diagnosis, explanation and proactive therapy using prompt engineering

Authors

  • Aryan Agrawal, Dougherty Valley High School
  • Nidhi Gupta

DOI:

https://doi.org/10.47611/jsrhs.v13i2.6718

Keywords:

Depression Detection, DSM-5, CBT Guide, LLM, GPT-4, Gemini, Llama, Prompt Engineering, Chain of Thought, Tree of Thought, Few-Shot Prompting

Abstract

Traditional methods of depression detection on social media forums can classify whether a user is depressed, but they often lack the capacity for human-like explanation and interaction. This paper proposes a next-generation paradigm for depression detection and treatment strategies. It employs three Large Language Models (LLMs), Generative Pre-trained Transformer 4 (GPT-4), Llama, and Gemini, each guided by specially engineered prompts to diagnose, explain, and suggest therapeutic interventions for depression. These prompts steer the models in analyzing textual data from clinical interviews and online forums, ensuring nuanced and context-aware responses. The study uses a few-shot prompting methodology for the Diagnosis and Explanation component; this technique is optimized to provide DSM-5-based analysis and explanation, enhancing the models' ability to identify and articulate depressive symptoms accurately. For empathetic dialogue management, the models are guided by resources from a psychology knowledge base (PsychDB) and a Cognitive Behavioral Therapy guide, and steered with Chain-of-Thought and Tree-of-Thought prompting techniques. This facilitates meaningful interactions with individuals facing major depressive disorder, fostering a supportive and understanding environment. The research also innovates in case conceptualization, treatment planning, and therapeutic interventions by creating the Illuminate Database to guide the models in offering personalized therapy. Quantitative results are reported through metrics such as F1 score, precision, recall, cosine similarity, and ROUGE scores across different test sets. This comprehensive approach, delivered through a mobile application prototype and grounded in established psychological methodologies, showcases the potential of LLMs to transform depression diagnosis and treatment strategies.
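
To make the workflow described above concrete, the sketch below shows how a few-shot, DSM-5-oriented diagnosis prompt might be assembled and how the metrics named in the abstract (F1 score, precision, recall, cosine similarity, ROUGE) can be computed with the scikit-learn and rouge-score packages cited in the bibliography. This is a minimal illustration under stated assumptions, not the authors' implementation: the example posts, labels, reference explanation, and the choice of TF-IDF vectors for cosine similarity are hypothetical placeholders.

```python
# Minimal sketch of few-shot prompt construction and the evaluation metrics
# named in the abstract. All posts, labels, and explanations are hypothetical
# placeholders, not the authors' prompts or data.
from rouge_score import rouge_scorer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics.pairwise import cosine_similarity

# Few-shot exemplars pairing a post with a DSM-5-style analysis (hypothetical).
FEW_SHOT_EXAMPLES = [
    ("I can't sleep and nothing feels worth doing anymore.",
     "Depressed: yes. DSM-5 signals: insomnia, anhedonia."),
    ("Had a great hike with friends this weekend.",
     "Depressed: no. No DSM-5 depressive symptoms expressed."),
]

def build_prompt(post: str) -> str:
    """Assemble a few-shot prompt asking for a DSM-5-based label and explanation."""
    parts = ["Classify the post for depression and explain using DSM-5 criteria.\n"]
    for example_post, example_answer in FEW_SHOT_EXAMPLES:
        parts.append(f"Post: {example_post}\nAnswer: {example_answer}\n")
    parts.append(f"Post: {post}\nAnswer:")
    return "\n".join(parts)

# --- Evaluation over a toy test set (placeholder values) ---
y_true = [1, 0, 1, 1]   # gold labels: 1 = depressed, 0 = not depressed
y_pred = [1, 0, 0, 1]   # labels parsed from model responses
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary")

reference = "The post reports insomnia and loss of interest, consistent with DSM-5 criteria."
generated = "The writer describes sleeplessness and anhedonia, matching DSM-5 symptoms."

# ROUGE between a generated explanation and a reference explanation.
rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge_scores = rouge.score(reference, generated)

# Cosine similarity over TF-IDF vectors (the vectorizer choice is an assumption).
tfidf = TfidfVectorizer().fit_transform([reference, generated])
cos_sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")
print(f"ROUGE-L F: {rouge_scores['rougeL'].fmeasure:.2f}, cosine: {cos_sim:.2f}")
```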

References or Bibliography

World Health Organization: WHO. (2017, March 30). “Depression: let’s talk” says WHO, as depression tops list of causes of ill health. World Health Organization. https://www.who.int/news/item/30-03-2017--depression-let-s-talk-says-who-as-depression-tops-list-of-causes-of-ill-health

Kohn, R. (2004, November 1). The treatment gap in mental health care. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2623050/

Henderson, C., Evans-Lacko, S., & Thornicroft, G. (2013). Mental illness stigma, help seeking, and public health programs. American Journal of Public Health, 103(5), 777–780. https://doi.org/10.2105/ajph.2012.301056

World Health Organization: WHO. (2019, December 19). Mental health. https://www.who.int/health-topics/mental-health#tab=tab_2

Islam, R., Kabir, M. A., Ahmed, A., Kamal, A. R. M., Wang, H., & Ulhaq, A. (2018). Depression detection from social network data using machine learning techniques. Health Information Science and Systems, 6(1). https://doi.org/10.1007/s13755-018-0046-0

Sardari, S., Nakisa, B., Rastgoo, M. N., & Eklund, P. (2022). Audio based depression detection using Convolutional Autoencoder. Expert Systems With Applications, 189, 116076. https://doi.org/10.1016/j.eswa.2021.116076

GPT-4. (n.d.). OpenAI. https://openai.com/gpt-4

meta-llama/Llama-2-70b-chat-hf · Hugging Face. (n.d.). https://huggingface.co/meta-llama/Llama-2-70b-chat-hf

Gemini Docs and API Reference. (n.d.). Google AI for Developers. https://ai.google.dev/docs

Admin, D. (2022, March 7). Home - DAIC-WOZ. DAIC-WOZ. https://dcapswoz.ict.usc.edu/

DeVault, D., Artstein, R., Benn, G., Dey, T., Fast, E., Gainer, A., Georgila, K., Gratch, J., Hartholt, A., Lhommet, M., Lucas, G., Marsella, S., Morbini, F., Nazarian, A., Scherer, S., Stratou, G., Suri, A., Traum, D., Wood, R., Xu, Y., Rizzo, A., and Morency, L.-P. (2014). “SimSensei kiosk: A virtual human interviewer for healthcare decision support.” In Proceedings of the 13th International Conference on Autonomous Agents and Multiagent Systems (AAMAS’14), Paris.

Gratch, J., Artstein, R., Lucas, G. M., Stratou, G., Scherer, S., Nazarian, A., Wood, R., Boberg, J., DeVault, D., Marsella, S., & Traum, D. R. (2014). The Distress Analysis Interview Corpus of human and computer interviews. In Proceedings of LREC 2014 (pp. 3123–3128).

Low, D. M., Rumker, L., Talker, T., Torous, J., Cecchi, G., & Ghosh, S. S. (2020). Reddit Mental Health Dataset (Version 01) [Data set]. Zenodo. https://doi.org/10.17605/OSF.IO/7PEYQ

Ninalga, D. (2023, September 1). Cordyceps@LT-EDI: Depression detection with Reddit and self-training. ACL Anthology. https://aclanthology.org/2023.ltedi-1.29/

Ji, S., Zhang, T., Ansari, L., Fu, J., Tiwari, P., & Cambria, E. (2021). MentalBERT: Publicly available Pretrained Language Models for Mental Healthcare. arXiv (Cornell University). https://doi.org/10.48550/arxiv.2110.15621

1.4. Support vector machines. (n.d.). Scikit-learn. https://scikit-learn.org/stable/modules/svm.html#

sklearn.ensemble.GradientBoostingClassifier. (n.d.). Scikit-learn. https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html

sklearn.linear_model.LogisticRegression. (n.d.). Scikit-learn. https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html

Devlin, J. (2018, October 11). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv.org. https://arxiv.org/abs/1810.04805

Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735

Google Code Archive - Long-term storage for Google Code Project Hosting. (n.d.). https://code.google.com/archive/p/word2vec/

Local Interpretable Model-Agnostic Explanations (lime) — lime 0.1 documentation. (n.d.). https://lime-ml.readthedocs.io/en/latest/

Prompt Engineering Guide. (n.d.). https://www.promptingguide.ai/techniques/

Brown, T. B. (2020, May 28). Language Models are Few-Shot Learners. arXiv.org. https://arxiv.org/abs/2005.14165

Wei, J. (2022, January 28). Chain-of-Thought prompting elicits reasoning in large language models. arXiv.org. https://arxiv.org/abs/2201.11903

Yao, S. (2023, May 17). Tree of Thoughts: Deliberate Problem Solving with Large Language Models. arXiv.org. https://arxiv.org/abs/2305.10601

Cognitive Behavioural Therapy (CBT). (2024, January 5). PsychDB. https://www.psychdb.com/psychotherapy/cbt

Cully, J. A., & Teten, A. L. (2008). A Therapist’s Guide to Brief Cognitive Behavioral Therapy. Department of Veterans Affairs, South Central MIRECC, Houston.

rouge-score. (2022, July 22). PyPI. https://pypi.org/project/rouge-score

Published

05-31-2024

How to Cite

Agrawal, A., & Gupta, N. (2024). Illuminate: Depression diagnosis, explanation and proactive therapy using prompt engineering. Journal of Student Research, 13(2). https://doi.org/10.47611/jsrhs.v13i2.6718

Issue

Vol. 13 No. 2 (2024)

Section

HS Research Projects