Hey all! I’m a psychology student researching how GPT-4 affects trust, empathy, and self-disclosure in mental health screening.
I built a chatbot that uses GPT-4 to deliver PHQ-9 and GAD-7 assessments with empathic cues, and I’m comparing it to a static form. I’m also looking into bias patterns in LLM responses and user comfort levels.
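For anyone curious about the mechanics, here's a rough sketch of how one PHQ-9 item could be delivered with an empathic framing through the OpenAI chat API. The model name, system prompt, and item wording below are illustrative placeholders, not the app's actual code:

```python
# Rough sketch: asking one PHQ-9 item with an empathic framing via the
# OpenAI chat API. Prompt wording and parameters are illustrative only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PHQ9_ITEM_1 = (
    "Over the last 2 weeks, how often have you been bothered by "
    "little interest or pleasure in doing things?"
)

SYSTEM_PROMPT = (
    "You are a warm, non-judgmental screening assistant. "
    "Briefly acknowledge the user's previous answer, then ask the next "
    "questionnaire item verbatim with its standard response options "
    "(not at all, several days, more than half the days, nearly every day). "
    "Do not diagnose or give clinical advice."
)

def ask_item(previous_answer: str, item_text: str) -> str:
    """Return an empathic turn that asks the next questionnaire item."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Previous answer: {previous_answer}"},
            {"role": "user", "content": f"Next item to ask: {item_text}"},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_item("I've been feeling pretty drained lately.", PHQ9_ITEM_1))
```

The static-form condition just shows the same items and response options without any generated framing, so the empathic wrapping is the only thing that varies between conditions.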
Curious:
- Would you feel comfortable sharing mental health info with an AI like this?
- Where do you see the line between helpful and ethically risky?
Would love your thoughts, especially from people with AI/LLM experience!
Here is the link: https://welcomelli.streamlit.app
Happy to share more in the comments if you're interested!
– Tom