Are current LLMs safe for psychotherapy?

post by PaperBike · 2025-02-12T19:16:34.452Z · LW · GW · 3 comments

Hi,

I am considering using an LLM as a psychotherapist for my mental health. I already have a human psychotherapist, but I only see him once a week and my issues are very complex. An LLM such as Gemini 2 is always available and processes large amounts of information more quickly than a human therapist could. I don't want to replace my human psychotherapist, just talk to the LLM between sessions.

However, I am concerned about deception and hallucinations.

As the conversation grows and the LLM acquires more and more information about me, could it intentionally give me harmful advice? I ask because one of the worries I would discuss with it is the dangers of AI.

My other concern is hallucinations.

How common are hallucinations when an LLM generates mental health information? Do they become more likely as the context grows?


Further Questions:

Are there any other important things I should be aware of when using an LLM for mental health advice?

I'm not a technical expert, so please keep explanations simple.

Thank you very much.

3 comments


comment by plex (ete) · 2025-02-15T12:43:14.219Z · LW(p) · GW(p)

I've heard from people I trust that:

  1. They can be pretty great, if you know what you want and set the prompt up right
  2. They won't be as skilled as a human therapist, and might throw you in at the deep end or not be tracking things a human would

Using them can be very worth it as they're always available and cheap, but they require a little intentionality. I suggest asking your human therapist for a few suggestions of the kinds of work you might do with a peer or an LLM assistant, and monitoring how it affects you while you explore, if you feel safe enough doing that. Maybe do it the day before a human session the first few times so you have a good safety net. Maybe ask some LWers what their system prompts are, or find some well-tested prompts elsewhere.

comment by Seth Herd · 2025-02-15T01:28:08.268Z · LW(p) · GW(p)

I started writing an answer. I realized that, while I've heard good things, and I know relatively a lot about therapy despite not being that type of psychologist, I'd need to do more research before I could offer an opinion. And I didn't have time to do more research. I also realized that giving a recommendation would be sort of dumb: if you or anyone else used an LLM for therapy based on my advice, I'd be legally liable if something bad happened. So I tried something else: I had OpenAI's new Deep Research do the research. I got a subscription this month when it was released, to see how smart the o3 model that runs it is and how good it is at research. It seems to be pretty good.

So, this is a research report, something like what you might get if a smart and dedicated but not totally reliable friend spent a bunch of time on it. I hope it's helpful! Remember, this isn't my advice; it's nobody's advice. It's a synthesis of online material on this topic. It might save you many hours of research.

I don't think hallucination is the biggest problem; I think sycophancy, the system's tendency to go along with whatever you tell it, would be the biggest risk. I personally suspect that frequently prompting the system to behave like a therapist would avoid that problem, since therapists are really careful not to reinforce people's weird beliefs.

Also, LLMs keep getting better in various ways. Problems with earlier systems might or might not apply to recent ones. In particular, hallucinations have been reduced but not eliminated.

Here it is. It's a lot; I think it put the most important summaries at the top!

https://docs.google.com/document/d/1sEluk9wlrLQpLjWjnSduK-4lppM6yxjPiDXTO26te9I/edit?usp=sharing

comment by Viliam · 2025-02-13T14:35:06.028Z · LW(p) · GW(p)

If I understand it correctly, starting a new chat resets the model's memory of the conversation. Maybe you should do that once in a while. You may then need to explain stuff again, but maybe it gives you a new perspective? Or you could write the background story in a text file and copy-paste it into each new chat.

> Could the LLM accidentally reinforce negative thought patterns or make unhelpful suggestions?

I am not an expert, but I think that LLMs are prone to agreeing with the user, so if you keep posting negative thought patterns, there is a risk that the LLM will reflect them back to you.

> What if the LLM gives advice that contradicts what my therapist says? How would I know what to do?

Trust the therapist, I guess? And maybe bring it up in the next session, something like: "I got advice to do X, what is your opinion on that?"

> What is the risk of becoming too dependent on the LLM, and how can I check for that?

No idea. People are different; things that are harmless for 99 people may hurt the hundredth. I don't know you.

> Are there specific prompts or ways of talking to the LLM that would make it safer or more helpful for this kind of use?

Just guessing here, but maybe specify its role (not just "you are a therapist", but something more specific, e.g. "you are a Rogerian therapist"), and specify the goal you want it to pursue ("your purpose is to help me get better in my everyday life" or something).
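
If you talk to the model through an API rather than the web chat, you could pin that role and goal as a fixed system prompt that gets re-sent with every message. Here is a minimal sketch, assuming the OpenAI Python client; the prompt wording and the model name are just placeholders to illustrate the idea, not tested recommendations:

```python
# Sketch only: pinning a therapist-style system prompt via the OpenAI Python client.
# The prompt text and model name are illustrative placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a Rogerian therapist. Your purpose is to help me get better "
    "in my everyday life. Do not simply agree with everything I say; "
    "gently question negative thought patterns, and remind me to bring "
    "important topics to my human therapist."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def chat(user_text: str) -> str:
    """Send one message and keep the running conversation history."""
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(chat("I keep worrying that I handled today's meeting badly."))
```

The point of keeping the system prompt at the top of the history is that it is re-sent with every request, so the role and goal don't drift as the conversation grows.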

Maybe at the end ask the LLM for a summary of things that you should tell your human therapist? Something like: "Now I need to switch to another therapist; prepare notes for him so that he can continue the therapy."