6 non-obvious mental health issues specific to AI safety
post by Igor Ivanov (igor-ivanov) · 2023-08-18T15:46:09.938Z · LW · GW · 24 comments
Intro
I am a psychotherapist, and I help people working on AI safety. I have noticed patterns of mental health issues highly specific to this group. It's not just doomerism; there are many more issues that are less obvious.
If you struggle with a mental health issue related to AI safety, feel free to leave a comment about it and about the things that help you cope. You might also support others in the comments. Sometimes such support makes a big difference and helps people feel that they are not alone.
AI safety is a rather unusual field
The problems described in this post arise because AI safety is not an ordinary field to work in.
Many people within the AI safety community believe that it might be the most important field of work, but the general public mostly doesn't care that much. Also, the field itself is extremely competitive, and newcomers often have a hard time getting a job.
No one really knows when we will create AGI, or whether we will be able to keep it aligned. If we fail to align AGI, humanity might go extinct, and even if we succeed, it will radically transform the world.
Patterns
AGI will either cause doom or create a utopia. Everything else seems unimportant and meaningless.
Alex is an ML engineer working at a startup that fights aging. He believes that AGI will either destroy humanity or bring about a utopia that, among other things, will stop aging, so Alex thinks his job is meaningless and quits it. He also sometimes asks himself, "Should I invest? Should I exercise? Should I even floss my teeth? It all seems meaningless."
No one knows what the post-AGI world will look like. All predictions are wild speculations, and it's very hard to tell whether any actions unrelated to AI safety are meaningful. This uncertainty can cause anxiety and depression.
These problems are an exacerbated version of the existential problem of the meaninglessness of life, and the way to mitigate them is to rediscover meaning in a world that ultimately doesn't have any.
Check out my post about Alex's problem, and possible solutions for it. [LW · GW]
I don't know when we will create AGI or whether we will be able to align it, so I feel like I have no control over it.
Bella is an anxious person who recently got interested in AI safety and realized that nobody knows for sure how to align AGI.
She feels that AGI might pose an extreme danger and that there is nothing she can do about it. She can't even tell how much time we have. A year? Five years? This uncertainty makes her even more anxious. And what if the takeoff is so rapid that no one understands what is going on?
Bella is seeing a psychotherapist, but they treat her fear as something irrational. This doesn't help and only makes Bella more anxious. She feels like even her therapist doesn't understand her.
AI safety is a big part of my life, but others don't care that much about it. I feel alienated.
Chang is an ML scientist working on mechanistic interpretability at an AI lab. AI safety has consumed his life and become part of his identity. He constantly checks AI safety influencers on Twitter, spends a lot of time reading LessWrong, and watches AI podcasts. He even got a tattoo of a paperclip.
Chang lives outside the major AI safety hubs, and he feels a bit lonely because there is no one to talk to about AI safety in person.
Recently he attended his aunt's birthday party. He talked about alignment with his family. They were a bit curious about the topic, but didn't care that much. Chang feels like they just don't get it.
Working on AI safety is so important that I neglected other parts of my life and burned out.
Dmitry is an undergrad student. He believes that AI safety is the most important thing in his life, and he either thinks about AI safety or works on it all the time. He has never worked this hard in his life, and it's hard for him to see that neglecting other parts of life and failing to compartmentalize AI safety is a straight path to burnout. When the burnout happens, at first he doesn't understand what has happened, and he becomes depressed because he can't work on AI safety.
People working on AI safety are extremely smart. I don't think I am good enough to meaningfully contribute.
Ezra recently graduated from a university where he did research on transformers. He wants to work on AI safety, but it seems like everyone in the major AI labs and AI safety orgs is extremely talented and has an exceptional education. Ezra feels so intimidated by this that it's hard for him to even try doing something.
After a while he finally applies to a number of orgs, but he gets rejected everywhere, and other people share similar experiences. It seems like there are dozens of smart young people applying for each position.
He feels demotivated, and he also needs to pay his bills, so he decides to work at a non-AI safety company, which makes him sad.
Check out my post about Impostor syndrome in AI safety, and how to overcome it. [LW · GW]
So many smart people think that AI alignment is not that big of a problem. Maybe I'm just overreacting?
Francesca is a computer scientist working in academia. She is familiar with machine learning, but it's not the focus of her work. She believes that the arguments for existential risk are solid, and she worries about it.
Francesca is curious about what top ML scientists think about AI safety. Some of them believe that x-risks are serious, while many others don't worry about them that much and think of AI doomers as weirdos.
Francesca feels confused by this. She still thinks the arguments for existential risk are solid, but social pressure sometimes makes her think that the whole alignment problem might not be that serious.
Epilogue
If you struggle with a sense of meaninglessness due to AGI and believe you might benefit from professional help, I can help as a therapist or suggest other places where you can get professional help.
Check out my profile description to learn more about these options.
24 comments
Comments sorted by top scores.
comment by Feel_Love · 2023-08-20T14:18:59.780Z · LW(p) · GW(p)
Thank you for posting this.
In the context of AI safety, I often hear statements to the effect of
This is something we should worry about.
There's a very important, fundamental mistake being made there that can be easy to miss: worrying doesn't help you accomplish any goal, including a very grand one. It's just a waste of time and energy. Terrible habit. If it's important to you that you suffer, then worrying is a good tactic. If AI safety is what's important, then by all means analyze it, strategize about it, reflect on it, communicate about it. Work on it.
Don't worry about it. When you're not working on it, you're not supposed to be worrying about it. You're not supposed to be worrying about something else either. Think a different thought, and both your cognitive work and emotional health will improve. It's pure upside with no opportunity cost. Deliberately change the pattern.
To all those who work on AI safety, thank you! It's extremely important work. May you be happy and peaceful for as long as your life or this world system may persist, the periods of which are finite, unknown to us, and ultimately outside of our control despite our best intentions and efforts.
Replies from: igor-ivanov↑ comment by Igor Ivanov (igor-ivanov) · 2023-08-20T15:42:31.135Z · LW(p) · GW(p)
Your comment is somewhat along the lines of Stoic philosophy.
comment by Neil (neil-warren) · 2023-08-18T22:06:06.740Z · LW(p) · GW(p)
Very insightful post. Here are personal thoughts with low epistemic status and high rambling potential:
These all feel to me like corollaries to the belief "AGI is so important that I can't gauge the value of anything else except in regards to how it affects AGI". Hence: "everything else is meaningless because AGI will change everything soon" or "nobody around me is looking up at the meteor about to hit us and that makes me feel kind of insane. (*Cough* so I hang out with rationalists, whose entire shtick is learning how not to be insane)".
As for other non-obvious effects: I personally feel some sort of perceived fragility around the whole field. There are arguments on this site for why AGI alignment should not be discussed in politics or why attempting to convince OpenAI or DeepMind employees to switch jobs can easily backfire (e.g. this post [LW · GW] for caution advice). These make any outreach at all seem risky. There are also people I know wondering whether they should attempt to do anything at all relative to alignment, because they perceive themselves as probable dead weights. The relatively short timelines, the sheer scope, and the aura of impossibility around alignment seem to make people more cautious than they otherwise should be. Obviously the whole point of the field is to be cautious; but while it's true that the tried-and-tested scientific method isn't safe for AGI in general, I'm not sure stressing the rationalist-tools solve-problems-before-you-experiment [LW · GW] approach is healthy everywhere. So, caution is right there in the description of the field, but you have to make sure you contain it well so that it doesn't infect places where you would do well to be reckless and use trial-and-error. I am probably quite wrong about this, but I don't see many people talking about it, so if there's any reasonable doubt we should figure it out.
Alignment work should probably be perceived as less fragile. Unlike the AI field in general, alignment projects specifically don't pose much of a risk to the world. So we can probably afford to be more loose here than elsewhere. In my experience alignment feels like a pack of delicate butterflies flying together, with every flap of wings sending dozens of comrades spiraling out of the sky, which might or might not set off a domino/Rube Goldberg machine that blows up the world.
Replies from: Roman Leventov↑ comment by Roman Leventov · 2023-08-19T05:35:50.101Z · LW(p) · GW(p)
Alignment is also perceived as fragile. Almost all paradigms of alignment and AI safety research (interpretability, agent foundations, prosaic alignment, model encryption, etc.) are often criticised on LW by different people as at best totally ineffectual from the opportunity-cost perspective, and at worst downright harmful due to some unforeseen effects or as safety-washing enablers for AGI labs. (I myself am culpable of many such criticisms.)
OTOH, this very work on the strategy and methodology of AI safety development could be reasonably criticised as worsening the psychological state of AI safety researchers and therefore potentially net harmful despite its marginal improvements to strategy and methodology (if these even happen in practice, which is not clear to me).
comment by knowsnothing · 2023-08-18T21:38:48.637Z · LW(p) · GW(p)
The alienation is something I felt for a bit, until I started working on my project and working with folk, talking to folk, etc. Also, I've been very pleasantly surprised at how receptive non-AI/non-tech folk are when talking to them about AI risk, as long as it's framed in a down-to-earth, relatable manner, introduced organically, etc.
Replies from: igor-ivanov↑ comment by Igor Ivanov (igor-ivanov) · 2023-08-18T21:43:46.824Z · LW(p) · GW(p)
Thanks for sharing your experience. My experience is that talking with non-AI safety people is similar to talking about global warming. If someone tells me about it, I say that it's an important issue, but I honestly don't invest that much effort into fighting it.
This is my experience, and yours might be different.
comment by Dalcy (Darcy) · 2023-08-20T03:24:42.376Z · LW(p) · GW(p)
Bella is seeing a psychotherapist, but they treat her fear as something irrational. This doesn't help and only makes Bella more anxious. She feels like even her therapist doesn't understand her.
How would one find a therapist in their local area who's aware of what's going on in EA/rat circles, such that they wouldn't interpret statements about, say, x-risks as schizophrenic/paranoid?
Replies from: Roman Leventov↑ comment by Roman Leventov · 2023-08-22T16:14:39.880Z · LW(p) · GW(p)
I think the recent public statements, media coverage, public discussions, government activity, YouGov polls, etc. have moved the worry about AI x-risk sufficiently into the Overton window. A psychotherapist or a psychiatrist who would suspect paranoia or schizophrenia mainly/primarily upon the expression of such worries today is just a very bad professional.
comment by Nicholas / Heather Kross (NicholasKross) · 2023-08-22T23:03:47.345Z · LW(p) · GW(p)
I feel like Ezra. I've also gotten various sources of feedback that make me think I might not be cut out for "top level" alignment research... but I find myself making progress on my beliefs about the problem, just slower than others.
Thoughts? Advice?
Replies from: whitehatStoic↑ comment by MiguelDev (whitehatStoic) · 2023-08-23T01:00:08.450Z · LW(p) · GW(p)
Same here. The work I'm doing may not align with conventional thinking or be considered part of the major alignment work being pursued, but I believe I've used my personal efforts to understand the alignment problem and the complex web of issues surrounding it.
My advice? Continuously improve your methods and conceptual frameworks, as that will drive much of the progress in the complexity and intricacy of this field. Good luck with your progress!!
comment by juliette (julietteculver) · 2023-08-24T17:52:49.639Z · LW(p) · GW(p)
Also, because nobody has a great solution for alignment yet, I can see that it is very easy for any work to be heavily critiqued. In other domains, you can feel like you are contributing something valuable even if you aren't doing anything ground-breaking. This is slightly different from not feeling smart enough, I think. Although it hasn't happened to me (yet!) because I haven't shared any of my ideas publicly, I can see that the constant critique is something that could be quite demotivating.
As a newcomer too, my experience of the community is that it has felt much less supportive than other technical fields I have worked in (although I have also met some people who are lovely exceptions!). It has certainly made me question somewhat whether it is an area that I want to work in. I'm not sufficiently convinced that I'm going to solve alignment that it feels imperative for me to work in the field, and I still feel I have lots of agency about whether I do or not. However, for somebody who doesn't feel that sense of agency or who hasn't experienced different communities, I can imagine that it might affect their mental health subtly and perniciously without them realising its impact.
comment by Going Durden (going-durden) · 2023-08-23T07:50:40.275Z · LW(p) · GW(p)
There are also some mental issues among people who know about AI safety concerns, but are not researchers themselves and not even remotely capable of helping or contributing in a meaningful way.
I, for one, learned about the severity of the AI threat only after my second child was born. Given the rather gloomy predictions for the future, I'm concerned for their safety, but there does not seem to be anything I can do to ensure they would be OK once the Singularity hits. It feels like I brought my kids into the world just in time for the apocalypse to hit them when they are, at best, still young adults, and, irrationally, I cannot stop thinking that I'm thus responsible for their future suffering.
comment by Noosphere89 (sharmake-farah) · 2023-08-19T00:06:35.932Z · LW(p) · GW(p)
Patterns
AGI will either cause doom or create a utopia. Everything else seems unimportant and meaningless.
Alex is an ML engineer working at a startup that fights aging. He believes that AGI will either destroy humanity or bring about a utopia that, among other things, will stop aging, so Alex thinks his job is meaningless and quits it. He also sometimes asks himself, "Should I invest? Should I exercise? Should I even floss my teeth? It all seems meaningless."
No one knows what the post-AGI world will look like. All predictions are wild speculations, and it's very hard to tell whether any actions unrelated to AI safety are meaningful. This uncertainty can cause anxiety and depression.
These problems are an exacerbated version of the existential problem of the meaninglessness of life, and the way to mitigate them is to rediscover meaning in a world that ultimately doesn't have any.
I feel like this is an instance of a more general issue: we are bad at rescaling utility when we encounter new situations, and our non-utilitarian way of evaluating outcomes can lead us into very large amounts of pain. The issue is basically that utopia and doom/dystopia are the limiting cases of information that appears to change one's utility calculations vastly, especially on the negative side, so psychological problems like denialism or guilt appear.
Essentially, the way to handle this problem is to do 2 things:
- Reset the 0 point, so that in light of the new information your 0 point is the way the world works now.
- Rescale utilities: instead of treating vastly important problems as having massive utility or disutility, go in the opposite direction. Rescale so that other problems have less utility than this one, while even the most important problems keep something approximating a normal utility.
philip_b has the gory details on that process, and it's worth taking a look at it:
I suggest not only shifting the zero point, but also scaling utilities when you update on information about what's achievable and what's not. For example, suppose you thought that saving 1-10 people in poor countries was the best you could do with your life, and you felt like every life saved was +1 utility. But then you learned about longtermism and figured out that, if you try, you can in expectation save 1kk lives in the far future. In such a situation it doesn't make sense to continue caring about saving an individual life as much as you cared before this insight - your system 1 feeling for how good things can be won't be able to do its epistemological job then. It's better to scale the utility of saving lives down, so that +1kk lives is +10 utility, and +1 life is +1/100000 utility. This is related to Caring less [LW · GW].
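For concreteness, here is a minimal sketch of the rescaling philip_b describes, assuming a simple linear normalization; the function name, the "emotional ceiling" parameter, and the printed numbers are illustrative rather than taken from the original comment:

```python
# Hypothetical, minimal sketch of linear utility rescaling (illustrative only).

def rescale_utility(raw_value, best_achievable, emotional_ceiling=10.0):
    """Map an outcome in natural units (e.g. lives saved) onto a bounded 'felt' scale.

    best_achievable is your current estimate of the best outcome you could
    realistically contribute to; emotional_ceiling is the felt utility assigned
    to that best outcome. The mapping is linear, so relative priorities between
    options are preserved; only the absolute scale (and hence the emotional
    load) changes.
    """
    return raw_value * emotional_ceiling / best_achievable

# Before the update: best case ~10 lives, so one life feels like +1.
print(rescale_utility(1, best_achievable=10))                   # 1.0

# After the update: best case ~1,000,000 lives, so one life is rescaled to
# +1/100,000 instead of inflating the whole scale.
print(rescale_utility(1, best_achievable=1_000_000))            # 1e-05
print(rescale_utility(1_000_000, best_achievable=1_000_000))    # 10.0
```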
In general, I kinda wish rationalists would frame their pitches, at least later on, as essentially about caring less about certain problems rather than caring more about cause X.
Replies from: Morpheus↑ comment by Morpheus · 2023-08-19T00:17:05.245Z · LW(p) · GW(p)
What is this 0 point?
Replies from: sharmake-farah↑ comment by Noosphere89 (sharmake-farah) · 2023-08-19T00:58:20.135Z · LW(p) · GW(p)
Essentially what you count as neutral, or what you consider to be normal, as distinguished from negative or positive states of the world.
comment by Neil (neil-warren) · 2023-08-18T22:08:01.078Z · LW(p) · GW(p)
Typo nitpicks (suggestions): "humanity might extinct" --> "go extinct" / "everything else seem unimportant and meaningless" --> "everything else seems" / "non-AI safety company" --> "a non-AI safety company" / "he feels demotivated, he also needs" --> "he feels demotivated, but he also needs" / "version of existential problem" --> "version of the existential problem"
Replies from: igor-ivanov↑ comment by Igor Ivanov (igor-ivanov) · 2023-08-18T22:47:57.586Z · LW(p) · GW(p)
Thanks. I am not a native English speaker, and I use GPT-4 to help me catch mistakes, but it seems like it's not perfect :)
comment by MiguelDev (whitehatStoic) · 2023-08-20T02:18:33.301Z · LW(p) · GW(p)
These problems are an exacerbated version of the existential problem of the meaninglessness of life, and the way to mitigate them is to rediscover meaning in a world that ultimately doesn't have any.
This is the meta-problem everyone is navigating - and this is the meta-advice; finding the answers for ourselves is unique to our own parameterized realities. Well said.
comment by TeaTieAndHat (Augustin Portier) · 2023-08-19T15:05:18.981Z · LW(p) · GW(p)
I wonder to what extent you meant it when you said those were specific to AI safety? I'm not at all involved in that (but on the other hand I'm still probably on LW far too much), and I literally have all of them. Or did you mean 'here is how some common-ish psychological issues manifest themselves in an AI safety context'?
Replies from: igor-ivanov↑ comment by Igor Ivanov (igor-ivanov) · 2023-08-19T16:19:06.795Z · LW(p) · GW(p)
These problems are not unique to AI safety, but they come up way more often with my clients working on AI safety than with my other clients.
Replies from: Augustin Portier↑ comment by TeaTieAndHat (Augustin Portier) · 2023-08-19T17:16:51.841Z · LW(p) · GW(p)
Yeah, I’d have guessed as much
Maybe it’s a sign I should get into AI safety, then /j
Replies from: igor-ivanov↑ comment by Igor Ivanov (igor-ivanov) · 2023-08-19T17:39:12.154Z · LW(p) · GW(p)
This is tricky. Might it exacerbate your problems?
Anyway, if there's a chance I can be helpful to you, let me know.
comment by Review Bot · 2024-05-09T06:51:44.081Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
comment by Ed Li (ed-li) · 2023-09-02T13:35:35.696Z · LW(p) · GW(p)
Thank you so much for posting this. It feels weird to tick every single symptom mentioned here...
The burnout that 'Dmitry' experiences is remarkably close to what I am experiencing. Are there any further guides on how to manage this? It would help me so much; any help is appreciated :)