How do you align your emotions through updates and existential uncertainty?

post by VojtaKovarik · 2023-04-17T20:46:29.510Z · 1 comment

This is a question post.

I'm asking this question because I expect other people to be in a similar position:[1]

There are many ways the future could plausibly go from here, ranging from "doom this year" to "things are mostly fine by default". And as some of these scenarios draw near, my epistemic updates are getting more entangled with my emotions. (Which I endorse.) Now, if I were convinced the world is going to go a particular way, these emotional updates would be easier: Maybe I expect things to go fine, so I go about my life as usual. Or maybe I have P(doom)=99%, so I mourn for a bit and then take an indefinite vacation to enjoy travel and friends. Or maybe things could go either way, with no way to know until the last moment, so I work as hard as I can (while having fun along the way, obviously).

However, things are not so simple: At the moment, I am very uncertain about how things will go. I expect the next year(s) to be quite wild in terms of epistemic updates. Maybe there will come a point when confidence in one of the scenarios is justified, in which case I want to make the appropriate emotional update as soon as possible. And maybe that point will never come. This type of uncertainty is something I do not know how to approach.

Does anybody have recommendations or observations on how to relate to this? (Without deluding oneself in any way.)

  1. ^

    EDIT: Rephrased the title based on feedback.

Answers

answer by VojtaKovarik · 2023-05-02T20:57:53.163Z

Not a full answer, but many relevant resources are in Mental Health and the Alignment Problem: A Compilation of Resources.

answer by Florin · 2023-04-17T22:29:53.381Z

Until very recently, it was doom for every individual. Maybe-doom is a vast improvement.

And whatever happens, we'll have the privilege of knowing how human history will have turned out.

comment by Vladimir_Nesov · 2023-04-17T23:48:47.125Z

There is value in the future of civilization, which senescence doesn't threaten.

comment by Florin (florin-clapa) · 2023-04-18T00:34:25.451Z

Yes, but your post seemed to focus on the individual, and that's why I didn't mention future humanity.

For humanity, it did go from no doom to maybe-doom, which is worse. And perhaps it's worse for the individual in the long run too, but that's a lot more speculative.

In any case, there's still some hope that our luck will last long enough to avoid doom, even if only by the skin of our teeth.

comment by VojtaKovarik · 2023-04-18T13:22:52.746Z

> Yes, but your post seemed to focus on the individual, and that's why I didn't mention future humanity.

In that case, that was misleading phrasing on my part. This is more about humanity than about an individual.

comment by VojtaKovarik · 2023-04-18T13:36:36.930Z

Neither of these suggestions (nor the one in the sub-comment) is useful for what I had in mind. The goal is not to have ways of staying positive no matter what, such as by looking for silver linings whether or not they are actually there. I expect that would give me an incentive against looking at the world clearly. Rather, the hard constraint is to keep my beliefs as close to reality as I can, and the question is how to do that without becoming emotionally detached.

Thanks for the comment; I will rephrase the original question to clarify.

answer by VojtaKovarik · 2023-04-17T21:10:26.370Z

For what it's worth, so far I have been intuitively trying something like "mourning the lost probability mass".

(Roughly: I have a model which includes uncertainty over various ways the world could be, such as the parameter "how hard is it to build AI". When I see new things happening in the world, I try to update my expectation for the future under the different parameter values. In this case: Seeing that we keep releasing bigger and bigger LLMs. Seeing that the AutoGPT idea was more obvious to people than I thought. Concluding that in the worlds where [something like AutoGPT + GPT-5 is enough for fast takeoff], we are screwed. And then trying to mourn those worlds without giving up on the remaining worlds.)
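
To make the "mourning the lost probability mass" framing concrete, here is a minimal sketch of one way to formalize it: a discrete Bayesian update over a few coarse world-scenarios, where the mourned quantity is the probability mass that moves out of the good-outcome worlds. The scenario names, prior, and likelihood numbers below are made-up illustrations, not anything from the thread.

```python
# Minimal sketch of a discrete Bayesian update over coarse world-scenarios.
# All names and numbers below are illustrative assumptions, not claims from
# the post: replace them with your own scenarios, priors, and likelihoods.

def update(prior: dict[str, float], likelihood: dict[str, float]) -> dict[str, float]:
    """Bayes' rule over a discrete set of hypotheses (world-scenarios)."""
    unnormalized = {world: prior[world] * likelihood[world] for world in prior}
    total = sum(unnormalized.values())
    return {world: mass / total for world, mass in unnormalized.items()}

# Hypothetical prior over three coarse scenarios.
prior = {"fine_by_default": 0.4, "hard_but_winnable": 0.4, "doomed": 0.2}

# Hypothetical likelihood of the observed evidence (e.g. "AutoGPT-style
# agents appeared sooner than expected") under each scenario.
likelihood = {"fine_by_default": 0.2, "hard_but_winnable": 0.5, "doomed": 0.8}

posterior = update(prior, likelihood)

# The "mourned" quantity: probability mass that shifted away from the
# good-outcome worlds after the update.
good_worlds = ("fine_by_default", "hard_but_winnable")
lost_mass = sum(prior[w] for w in good_worlds) - sum(posterior[w] for w in good_worlds)

print({w: round(p, 3) for w, p in posterior.items()})
print(f"probability mass lost from good-outcome worlds: {lost_mass:.3f}")
```

With these made-up numbers, the good-outcome worlds drop from 0.8 of the prior to roughly 0.64 of the posterior, so about 0.16 of probability mass gets mourned while the remaining mass is left intact to act on.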

comment by JBlack · 2023-04-18T06:06:29.608Z

It took me a long time to parse that last sentence in a way that made any sense to me. I had a lot of trouble with the concept of "trying to mourn (without giving up)", but I guess it's really more like "trying to not give up (while mourning)"?

comment by VojtaKovarik · 2023-04-18T13:16:44.327Z

Rephrased to: "And then trying to mourn those worlds without giving up on the remaining worlds."

I guess I meant something like trying to see the situation as vaguely similar to "one of my three pets died, but the other two are still here".
