Your best future self

post by Raemon · 2020-06-06T19:10:04.069Z · LW · GW · 24 comments

Crossposted from Facebook. Not the sort of thing I typically write for LessWrong.

Epistemic status: poetry

A thing I realized recently is that you can pray to your best future self.

Your future self knows exactly what you are going through, because they were once you. You don't have to struggle to put your stress and anxiety and anger and frustration into words, and worry that the person you're talking to won't get it, or that the process of putting it into words will have you dwelling on things too much and making it worse.

Your future self will just be there. And they will know exactly what you have experienced. And they will be older, stronger, wiser. They will know your pain, and they will know that the pain can be overcome. Your future self loves you unconditionally as no other person can.

They may not be able to talk to you in detail without you having to put more effort into figuring out what exactly you need help with, and maybe that part will involve putting things into words. But the words will not have to be for anyone else's benefit, just yours, and as soon as you've figured out what things you're _trying_ to ask, your future self will just know.

Your future self may or may not be able to offer useful advice. I think sometimes they can. Some parts of them already live inside you, already strong. They have the skills you have, just untethered from the stress of your current situation. Your future self can offer guidance about those things.

In some cases, your future self can offer guidance in domains you're still sorting out, because humans have a weird racial bonus for social-simulation-cognition.

But even when your future self doesn't know any better than you, what they can offer immediately, without hesitation, is empathy and unconditional love. They see you exactly as you are. They are holding your hand or giving you a hug or maybe just looking at you warmly from a little distance if that's what you need.

They know about your flaws. They know how much work and/or time it'll take to overcome those flaws. But they know you will eventually overcome them. They love you anyway, and they will always be there for you if you need them.

There is... an important sense in which your future self doesn't exist. But there are at least two important senses in which they do.

The first way they exist is a bit speculative, but does match my current sense of how to think about the future. Your future self exists probabilistically.

They might literally exist, fractionally, depending on your metaphysics.

They may not literally exist, but there is a real, probabilistic chance that they _will_ exist, and you can pray to the branching possible subjective futures for the most relevant, experienced, competent and compassionate version of yourself to be here with you right now.

The second way is that they exist inside you. They may have already existed, or they may exist now that you've read this essay. I think it is possible for you to cultivate them, and strengthen your connection with them – no matter how real they are in an objective, externally verifiable future sense.

And if you struggle to make them real - if they are only on the cusp of realness because simulating future beings is tricky for you, or because you don't believe you can ever overcome the challenges that would prevent you from becoming them....

...well, I don't know how much this helps, but *I* believe in them. And I believe in your capacity to become them.

If we know each other well, there are probably specific and concrete ways that I believe in you and your future self. I can't offer this to everyone but I may be able to offer it to people I have an existing relationship with. If we don't know each other well, I'm believing in you in the sense that I believe in everyone. I may not be able to crisply and clearly convey that to you. But it is a real sense. I really do believe in everyone, in this particular way.

"This particular way" is doing a bit of work. I think your coherent, extrapolated self may know things you don't know, and may have learned that some of your goals were misguided. Because your ability to communicate with them is bottlenecked on your current skills and beliefs, I can't vouch for the advice they might give.

But I can vouch for their empathy, understanding, and love.

I believe there is at least one potential future where you unfold beautifully, triumphantly. Where you become the epic-level version of yourself, with the experience to anticipate and resolve the problems you currently face, and the compassion to reach back through time and hold your hand and say "Geez, it was awful that we experienced that, and I can't make all these problems go away right now. But I can be there with you to help you through it, and give the best advice we can." And maybe even having someone take you completely at face value is somehow painful, but they understand that too.

I don't know if this is helpful to you. But I found it helpful to me, and wanted to share.

24 comments

Comments sorted by top scores.

comment by Raemon · 2021-05-13T00:05:01.246Z · LW(p) · GW(p)

Update. Crossposted from Facebook. Epistemic status is still 'poetry'.

About a year ago, I was having a very hard time, and felt alone. I had reasons I didn't want to talk to friends about it. Even if I could have, having to fully articulate everything going on at the time felt hard, and I was already stretched to my limits.

I wasn't sure what to do, so I prayed to my best-possible-future-self. My best future self already knew what I was dealing with (because he was there), and he was older and stronger and wiser than me. He couldn't tell me what to do, but he could sit with me, and be empathetically present.

It was nice and comforting.

...

I'm currently dealing with a host of things that are at least as difficult as what I was dealing with then. I've had to grieve several things and learn to process traumas and learn to orient in very confusing domains. I'm allocating most of my time towards making sure that I'm okay, and becoming more internally aligned with myself.

One of the things going on is that I've identified multiple better-future-selves, who are stronger on different axes. Epic Level Hero-Soldier Ray. Epic Level Small-Village-Priest Ray. Epic Level Solstice-Pope-Ray. Epic Level University-Administrator-Ray. They have different properties, and might or might not be mutually exclusive with each other. But, one of the things I'm doing these days is fleshing out my understanding of them, so I can put on different hats of more experienced versions of myself and ask "hey, what would you do here?"

Having to decide which of those future selves to make 'real' is part of what's stressful these days.

...

Today is a bit of a lull, as I recover a bit. I considered praying to some future selves again. Less out of desperate need, more out of desire for kinship.

And I realized:

I am older, and stronger, and wiser now, than I was a year ago. I'm older and stronger and wiser enough that I feel comfortable paying back the "Future Self Prayer Debt" I set in motion.

So, I sat for a bit, and listened to my past self's prayers. And I didn't say anything, but I sat with him, warmly and empathetically present.

I thought about the advice I *might* have wanted to give him. It wasn't a necessary part of the exchange, but seemed like what past-self would have wanted me to be doing. I wasn't quite sure what advice I'd give. There are some particular lessons about grieving I've learned that I could hypothetically pass back if the flow of causality went that way.

I think I'm maybe another 6 months away from knowing the most directly relevant advice I could possibly have said. I tuned in with newer possible future selves about that.

But mostly, I just sat, and remembered.

And, that felt nice. I'm not even my past self's best-possible-future self. I'm just me. And I feel good, that even in my current chaotic situation, I had the strength and calm to sit back and listen.

comment by johnswentworth · 2020-06-06T21:11:55.481Z · LW(p) · GW(p)

I think you have a lot more sympathy for your past self than I have for my past self. That guy was a real asshole. And the effort required to overcome his flaws often turned out to be not that much; things could have gone so much faster if he'd been braver, more strategic, more long-term oriented, and above all spent more time understanding the world rather than looking for clever exploits.

Replies from: romeostevensit, Bucky, Raemon, adamShimi
comment by romeostevensit · 2020-06-06T22:28:21.739Z · LW(p) · GW(p)

Something about 'braver' strikes me here in a good way. Something like asking yourself if bravery is a bottleneck. Alternately, a more triggering construction: 'What would your life be like if you weren't a coward?' Or alternately: you generally see obstacles and say to yourself that you don't have the money or connections or time to develop skills; you don't say to yourself, or even notice, that you lack the courage and persistence to try.

Scary thought that either actions requiring bravery are pre-pruned from conscious awareness, or immediately replaced via attribute substitution (I want X; getting X would be easy if I had Y; I don't have Y; ignore X). Maybe this could be called the Munchkin's Curse, similar to Paul Graham's Schlep Blindness.

An attempted aphorism:
Better strategy can redeem a poor army, but only if you get to choose the battlefield.

comment by Bucky · 2020-06-07T20:00:48.030Z · LW(p) · GW(p)

I'm running at 100% thinking my past selves are assholes. That implies that my current self is probably an asshole by the standard of my future selves. Future selves know which dimensions you need to change in which directions to improve, but that isn't straightforward to a current self.

With this in mind, I think my past selves were assholes, but I maintain some sympathy for them on the assumption that not being an asshole is incredibly difficult and I am still failing in ways that I don't even know about yet.

comment by Raemon · 2020-06-06T23:03:05.571Z · LW(p) · GW(p)

I wrote a long rambling thought on Steel Empathy and Steel Love (the kind of Industrial Strength empathy and love that Steve Jobs or Elon Musk could use on their employees without becoming worse CEOs, while making Pareto improvements on their relationships and employee mental health).

But then found that I was probably making some assumptions about which background beliefs of yours underlay your comment.

I do probably have more sympathy for my past self than you have for yours, and I also probably have less sympathy for your past self than I have for my past self. (I was pretty careful about what commitments I made about what empathy and emotional support I was offering to arbitrary people, and I'm fairly confident that I stand by them.)

I read your comment and get some sense of... "and therefore..." what? Your past self shouldn't have sympathy? You don't endorse giving your past self the "Pray to Future Self" tool?

For people who are currently making excuses for themselves, while making bad choices, pursuing bad goals, or hurting others... who are nonetheless stressed about their situation such that Praying To Their Best Future Self seems like a useful thing to do...

...I think it's possible for people to misuse the tool, and turn it into an excuse to keep making the same bad choices. But I think it's unlikely to make things worse, and in many cases can make things better. 

Curious what exactly you meant to imply.

Replies from: Raemon
comment by Raemon · 2020-06-06T23:06:39.959Z · LW(p) · GW(p)

I am perhaps making some claim, which you disagree with, which is that I think your best future self is going to have empathy and sympathy for both your present self and your past self.

Replies from: johnswentworth
comment by johnswentworth · 2020-06-06T23:24:15.483Z · LW(p) · GW(p)

Yeah, I doubt that claim. That was all I intended to imply.

Replies from: Pattern
comment by Pattern · 2020-06-07T01:19:47.273Z · LW(p) · GW(p)

Be the change you want to see in the world.

comment by adamShimi · 2020-06-06T23:33:54.046Z · LW(p) · GW(p)

I agree with John here, at least insofar as I very rarely feel love for my past self. I'm not necessarily angry at him, but all I see is his mistakes and what went wrong. So I rarely explicitly remember the struggle (although I journal, so I could if I wanted to), and I certainly never empathize with my past self.

Thus I appreciate the sentiment and feel like it might be useful to some, but I'm definitely not one of them.

Replies from: Raemon
comment by Raemon · 2020-06-06T23:48:14.758Z · LW(p) · GW(p)

So, I'm not claiming this is a universally useful tool.

But, it is pretty important that when I say "your best future self" I mean either "the future where the singularity was worked out and now you're an immortal god with centuries of skills you don't currently have, including various emotional skills", or at least "various life choices went pretty well, and you lucked into the right relationships and experiences to learn the right combination of skills, such that as a healthy 70-year-old you're able to talk with your past self with... if not empathy and compassion, then whatever you think your past self actually needs."

I do think in some cases that might mean more like "tough love." I think if you're a healthy 70-year-old with useful life experiences who looks back and is like "man, Young Adam sure needed a kick in the pants to get going when he was busy whining to future me"... maybe past-simulated-future-you should ideally reply more like Bruce Lee did in this anecdote. But, I claim, you can still say "oh geez kid, just grow up" or whatever with (subtle) underlying empathy and compassion (and if your past self is desperate and alone and felt motivated to pray to future you, I think it would be good to do so).

Notably: this is not necessarily a claim that it's worth your effort to prioritize gaining the Steel Empathy skill and corresponding relational stance. But, in the world where you are scared and sad and alone, and don't feel like you're able to get the help you need from other people around you... you're allowed to pray to the branching future version of yourself who eventually gained Steel Empathy and listened to the prayer.

Replies from: adamShimi
comment by adamShimi · 2020-06-07T21:59:51.759Z · LW(p) · GW(p)

Following your comment, I think the closest thing I feel is acceptance of my past self when I like where I'm at now. Right now, if you gave me the opportunity to relive and change my youth, I don't think I would take it. That's because I think I'm in a good place, both in terms of my evolution and in terms of my relationships.

I can extrapolate this feeling to my future self, and imagine him feeling something akin to "damn, this past Adam did make some mistakes, but he did well enough and had enough luck to end up in this good place where I am now".

comment by Vanessa Kosoy (vanessa-kosoy) · 2020-06-07T09:35:52.753Z · LW(p) · GW(p)

By the same token, you can pray to Elua, the deity of human flourishing. Ey exist in about the same senses your ideal future self exists: (i) ey exist somewhere in the multiverse (because the multiverse contains anything that can be imagined) and ey exist with some probability in the future (because maybe we will create a superintelligent AI that will make it real); (ii) ey exist inside you to the extent you feel motivated by making the world better; (iii) we can decide to believe in em together.

It probably sounds like I'm being ironic, but actually I'm serious. I'm just worried that talking about this too much will make us sound crazy, and also about some people taking it too seriously and actually going crazy.

Replies from: Raemon
comment by Raemon · 2020-06-07T21:11:55.859Z · LW(p) · GW(p)

Hmm. I feel a lot more hesitation about praying to Elua than to my future self. I especially feel hesitation about talking about it publicly in a way that might accidentally create a cult of Elua. 

I also feel hesitation about praying to "Elua the AI that might actually exist someday", as opposed to "Elua the construct you definitely made up in your mind who is optimized directly to be friendly to you, personally."

I think it's possible to do all that without screwing up and ruining your epistemics, or making yourself vulnerable to weird acausal blackmail, or thinking you've made yourself vulnerable to acausal blackmail and then making bad choices. It's possible, but... I don't think it's very scalable, and if lots of people were doing it and advocating it, I think at least some people would be screwing it up.

(this is worded a bit more strongly than my actual fear is, but, seemed important to say clearly)

I feel a lot more comfortable praying to my future self because I have fairly narrow bounds on who my future self might be, and how they might relate to me, and how I feel about that, etc.

comment by areiamus · 2020-06-07T11:39:14.559Z · LW(p) · GW(p)

I liked this piece quite a lot. Thanks for writing it.

comment by [deleted] · 2020-06-06T19:54:45.901Z · LW(p) · GW(p)

I am conflicted about this post. On the one hand, it smells like new-agey nonsense. I worry that posts like this could hurt the credibility of rationalists trying to spread other non-obvious ideas into the mainstream.

On the other hand, even if the only mechanism of this idea is the placebo effect, it’s an emotionally satisfying story to trigger that effect. As someone who grew up with strong religious beliefs, I can appreciate it as... something more than mere art.

Ultimately it’s not obvious to me if this post was supposed to convey a genuine psychological insight, and was just unclear, or if it’s more metaphorical and I’m being too pedantic?

This comment is probably confusing, but I think that merely reflects my own confusion here.

Replies from: Raemon
comment by Raemon · 2020-06-06T20:02:08.100Z · LW(p) · GW(p)

I will definitely attest that this post is not doing Grade A Rationality Qua Rationality, and I wouldn't want most posts on LW to be like this. But, I do think Grade A Rationality needs to be able to handle things kinda like this.

My overall belief is that techniques and posts like this are often important, but one should have a long-term goal of writing publicly about the ideas in a way that transparently checks against empiricism, reductionism, etc. (This may take a while, though, and often isn't worth doing for the first off-the-cuff version of a thing you do.) But this is how I feel about things like Circling and the Multi-Agent Models of Mind concepts (I'm glad that weird pioneers looked into them without trying to justify themselves at every step, and I'm glad people eventually began making laborious efforts to get them into a state that could be properly evaluated).

Replies from: None, Viliam, ChristianKl
comment by [deleted] · 2020-06-06T20:09:01.504Z · LW(p) · GW(p)

The new meta-introduction (is there a better term of art for those italic bits at the top?) definitely helps read it in the proper frame. Thank you for clarifying.

comment by Viliam · 2020-06-07T11:22:14.305Z · LW(p) · GW(p)

By the way this is exactly the kind of article I would want to write if I had more free time and better verbal skills. Also not directly on LW.

I think there is nothing intrinsically wrong with the article, but there is a risk that a blog containing more articles like this would attract the wrong kind of writer. (Someone writing a similar encouraging article, but with wrong metaphysics. And it would feel bad to downvote them.) If you publish on your personal blog, that risk does not exist.

comment by ChristianKl · 2020-06-07T17:46:25.291Z · LW(p) · GW(p)

I think there are two issues. On the one hand, there's the general topic that the article is about. On the other, there's the issue that the post doesn't feel very clear in approaching the subject.

I think one way to deal with cases like this is to start with an intellectual rigor status of: "Trying to put words on a subject that's still a bit unclear to me".

Replies from: Raemon
comment by Raemon · 2020-06-07T17:50:52.275Z · LW(p) · GW(p)

Do you think the current disclaimer is in fact not good enough?

The fact is the subject isn't unclear to me at all; I just don't know how useful it is for arbitrary people, or what the long-term side effects might be.

Replies from: ChristianKl
comment by ChristianKl · 2020-06-07T22:11:41.558Z · LW(p) · GW(p)

There are a few issues why the thinking seems unclear to me.

Ontologically, there are two entities that you might call "the future self". FutureSelf_A is a mental part that you can interact with by using all of the toolbox for dealing with mental parts. FutureSelf_B doesn't exist in the present, but only once the future actually takes place.

The post seems to me muddled about the distinction between FutureSelf_A and FutureSelf_B. 

You say that "Your future self loves you unconditionally". Many people don't love themselves. They also don't love their past selves. Making that unfounded assumption seems bad to me because it might hinder a person from have a more reality aligned view of the relationships. You don't have access to FutureSelf_B and can't know to what extend it loves you. When it comes to FutureSelf_A, things get more interesting because that's a relationship that you can actually investigate and work on. 

You say "And they will know exactly what you have experienced." For FutureSelf_B that's not how human memory works. That's practically relevant because if you make plans that you hope your future self will execute it's important to keep in mind that your future self won't remember everything about the present moment where you make the plan.

While a lot of New Agey literature fails to distinguish different entities, I expect high-quality rationalist material to build upon concepts that are not muddled up. If I go to an NLP seminar, that's the level of rigour I expect, but I want more from rationalists.

That said, I'm also okay with writing up a concept when one doesn't yet have the clarity to distinguish the different entities that are involved, because the act of writing up concepts is a good way to gain more clarity about them. I'm also in favor of sharing write-ups publicly instead of letting them stay in a drawer. But I find it beneficial for such posts to have disclaimers, so that readers who don't know much about the subject can distinguish them from more developed posts on LW.

Replies from: Raemon
comment by Raemon · 2020-06-07T22:46:32.023Z · LW(p) · GW(p)

So, I do think it was important to say "this was a post originally shared to FB, and not the sort of thing I normally post on LW". I might also add a concrete disclaimer: "Poetry." But, I think being Poetry is different from the idea being unclear.

(Yes, Poetry blurs the line between things. That's kind of the point of poetry. I think blurring the lines between things is actively important for Poetry to accomplish particular goals. I think it is important, if you're using the poetry for significant mind-hacking, to be clear after the fact about what's going on. But I think it would have ruined the original post to do it pre-emptively.)

To be clear, this post is talking almost entirely about Future Self A (i.e. Mental Construct Best Future Self). 

It's useful to also reflect on B (i.e. Actual Theoretically Possible Best Future Self), because in order for A to do its job, it's helpful (at least, I find it so) for it to be built on something real. Or at least, the more real its foundation, the easier it is to trust it.

Actual Best Future Self B (in the actual future) probably doesn't directly remember the traumatic day when you asked for its help. (Although it can, if you do things like write things down, which might or might not be helpful to you.) But I'm mostly not talking to Self B, I'm talking to Self A (mental construct), who is actually here in the moment. Self A is constructed to be a reflection of what Self B would do if they were actually here, telepathically connected.

(I'm not at all confident which combination of these is most useful to the average person, just that I found it helpful on two particular days during the most stressful year of my life)

comment by Aiyen · 2021-05-13T01:01:40.995Z · LW(p) · GW(p)

I have mixed feelings about this post.  On the one hand, it's a new, interesting idea.  You say it's helpful to you, and it wouldn't be entirely surprising if it's helpful to a great many readers.  This could be a very good thing.  

On the other hand, there's a tendency among rationalists these days to turn to religion, or to the closest thing to religion we can make ourselves believe in. For a while there were a great many posts about meditation and enlightenment, for instance, and if we look at common egregores in the community, we find multiple: Azathoth, God of Evolution. Moloch, God of Prisoners' Dilemmas. Cthulhu, God of Memetics and Monotonically Increasing Progressivism. Bruce, God of Self-Sabotage. This can be entertaining, and perhaps motivating. Yet I cannot shake the feeling that we're taking a serious risk in trying to create something too closely akin to religion. As the saying goes, what do you think you know, and how do you think you know it? We're quite certain that e.g. Islam is founded on lies, with more lies built up to try to protect the initial deceptions. Do you really want to mimic such a thing? A tradition created without a connection to actual reality is unlikely to have any value.

I won't say that you shouldn't pray to your future self, if you find doing so beneficial, and you yourself say this isn't your usual subject matter.  But be careful.  It's far too easy to create religious-style errors even if you do not consciously believe in your rituals.  

comment by niplav · 2020-06-07T11:48:14.931Z · LW(p) · GW(p)

I had a very similar thought a while back, but was thinking more of best possible current versions of myself.

You said this:

I think your coherent, extrapolated self may know things you don't know, and may have learned that some of your goals were misguided. Because your ability to communicate with them is bottlenecked on your current skills and beliefs, I can't vouch for the advice they might give.

I called it "Coherent Extrapolated Niplav", where I was sort-of having a conversation with CEN, and since it was CEN, it was also sympathetic to me (after all, my best guess is that if I was smarter, thought longer etc., I'd be sympathetic to other people's problems!).