Open Thread

post by Bound_up · 2018-06-23T22:49:54.339Z · LW · GW · 17 comments


Comments sorted by top scores.

comment by cousin_it · 2018-06-25T08:46:30.482Z · LW(p) · GW(p)

This post by Alex Appel made me much more skeptical about the logical induction approach:

This doesn’t seem like much, but it gets extremely weird when you consider that the limit of a logical inductor, P_∞, is a constant distribution, and by this result, isn’t a logical inductor! If you skip to the end and use the final, perfected probabilities of the limit, there’s a trader that could rack up unboundedly high value!
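(For reference, here is the claim in symbols; a minimal sketch in standard logical-induction notation, with the details in Appel's post rather than reconstructed here:)

    % P_n is the inductor's belief state (its prices) on day n.
    % The limiting distribution is the pointwise limit over sentences phi:
    \[ \mathbb{P}_\infty(\phi) \;=\; \lim_{n \to \infty} \mathbb{P}_n(\phi) \]
    % The quoted result: the constant sequence (P_infty, P_infty, ...) fails the
    % logical induction criterion; some trader racks up unboundedly high value
    % trading against it.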
comment by Xkcd · 2018-06-24T19:16:56.115Z · LW(p) · GW(p)

Novice here. Is there a post somewhere on what algorithm people in this community use to answer "what should I do with my life", with recursive why's for each step of the algorithm?

Replies from: Raemon, cousin_it, Elo
comment by Raemon · 2018-06-24T23:41:42.941Z · LW(p) · GW(p)

tldr: it sounds like you have a lot of prerequisites to work through before you get to the "what should I do with my life" question.

Before getting into anything like 80,000 hours, I think a good approach is a combination of:

1) gaining skills in introspection, well enough that you can query yourself for things like "why am I doing this particular thing?" and find answers that are likely true (instead of a story you tell yourself – a lot of what we believe about our motivations is bullshit)

(This post on Focusing is a good place to get started on introspection)

2) think about:

a) Most of the things you do on a weekly basis: ask "why do I do this?", and when you get a "because Y", ask yourself "okay, but why do I care about Y?", recursing until you hit something that feels like motivational bedrock.

b) The same for things you feel like you should do, or feel like you want to do, but don't actually do.

(It helps to actually write all these things down as a kind of interconnected web)

3) Take a step back and look at all those things together, to build up a model of what your values actually are, and what sort of things you actually want.

From there, "okay, what do I actually do?" is another complicated question, but I wouldn't have much more advice for it until you have a clearer sense of what those steps output (or some equivalent set of steps whose output is "understanding what you value and why").

Replies from: Xkcd
comment by Xkcd · 2018-06-25T19:25:46.140Z · LW(p) · GW(p)

Thanks for sharing this. Coincidentally, I have already started on steps 1 and 2a, but I haven't done them systematically, nor have I recorded anything. I will aim to go through the exercise more systematically and will report the outcome back on this thread (likely to take a while). I am a bit sceptical that I will reach motivational bedrock, because I suspect every motivation/action has a "because", which in turn has another "because"... irrespective of whether I can identify these or not. However, I will shut up and execute, and see where I land.

Replies from: Raemon
comment by Raemon · 2018-06-25T19:43:13.625Z · LW(p) · GW(p)

Looking forward to hearing how it goes!

comment by cousin_it · 2018-06-26T12:22:29.180Z · LW(p) · GW(p)

My algorithm: is there something interesting to do?

Yes -> Do that thing, and also try some new things every once in a while

No -> Try a bunch of new things

Repeat.
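In code, very roughly (a toy sketch; the function bodies, names, and the explore rate are placeholder assumptions, not part of the algorithm above):

    import random

    def do(activity):
        print("doing:", activity)

    def try_new(activity):
        print("trying something new:", activity)

    def life_step(interesting, new_things, explore_rate=0.1):
        """One pass of the loop above; call it again tomorrow ("Repeat")."""
        if interesting:
            do(interesting[0])                      # do the interesting thing...
            if new_things and random.random() < explore_rate:
                try_new(random.choice(new_things))  # ...and occasionally explore anyway
        else:
            for thing in new_things:                # nothing interesting: try a bunch
                try_new(thing)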

I've also tried introspection, as others are suggesting, but honestly it hasn't been too fruitful. It seems like most of my introspection is just making stuff up. Life goes on and I end up following my interests anyway, so they grow and bring me success, while my explicit goals kinda fade away and become slightly embarrassing. I think that's as it should be.

Replies from: Xkcd
comment by Xkcd · 2018-06-26T20:37:16.839Z · LW(p) · GW(p)

Thanks for sharing. To me, your algorithm doesn't necessarily seem to contradict introspection: maybe your interests are just very well calibrated to your values(?)

I tried to imagine myself using this algorithm. Two questions came to my mind:

  1. The question "What is this all for?" seemed to bother me.

  2. I was unsure how I would deal with the inevitable "should"s that are not interesting.

Are either of these an issue for you? If so, how do you deal with them?

Replies from: cousin_it
comment by cousin_it · 2018-06-26T22:54:27.075Z · LW(p) · GW(p)

I guess having lots of stuff going on helps me avoid worrying about such questions...

Replies from: Xkcd
comment by Xkcd · 2018-06-27T04:32:50.939Z · LW(p) · GW(p)

Thanks

comment by Elo · 2018-06-24T21:50:16.577Z · LW(p) · GW(p)

There's 80,000 Hours for careers.

I wrote http://bearlamp.com.au/list-of-common-human-goals/

Can you be more specific about what you are looking for? Maybe I can point you in the right direction.

Replies from: Xkcd
comment by Xkcd · 2018-06-25T19:38:08.869Z · LW(p) · GW(p)

Thanks Elo. The intention behind the question was more general (how does one decide how to spend all of their time?) than specific (career). The goals list in your post seems like a good checklist to work through while performing steps 2a and 2b that Raemon mentioned. Both your post and Raemon's reply seem to indicate that using your "true" motives / values might be a good compass to select your path. While this seems intuitive, is there an elaboration/explanation of why to do it this way and not some other way? (A "better" method doesn't come to mind, though.)

Replies from: Elo
comment by Elo · 2018-06-25T19:53:18.733Z · LW(p) · GW(p)

using your “true” motives /​ values might be a good compass to select your path.

If you use other motives, you are likely not to be fulfilled.

Example: my friends told me that if I become a quantitative analyst I will make a lot of money.

You might get a few years into study or work as a quant and realise you don't want to do that at all. You might regret doing something just because your friends told you to, or end up simply directing the money to the things you do care about (once you have it).

As for "all of their time": your intuitive system is pretty good at weighing up all the things you care about. Provided you give it time to introspect often enough, it can usually tell you whether you are on the right track or neglecting some value. Without it, there is no good reason to choose project A over project B, or an 80/20 split or any other ratio; no reason to choose the idea already in mind over one you have not yet thought of. At some point an evaluative decision must be made, otherwise you do nothing. But that too is a choice.

Replies from: Xkcd
comment by Xkcd · 2018-06-25T20:32:12.961Z · LW(p) · GW(p)

Thanks. This helps.

comment by Bound_up · 2018-06-30T09:53:29.215Z · LW(p) · GW(p)

What's the exact wording and origin of the "when you don't know what to choose, choose power" quote?

Replies from: cousin_it
comment by cousin_it · 2018-06-30T10:33:40.785Z · LW(p) · GW(p)

It's from Final Words [LW · GW].

comment by Bound_up · 2018-06-23T23:00:07.094Z · LW(p) · GW(p)

I'm looking for a piece in the rationalist sphere, I think from 2018.

The section I remember talks about how, if someone asks you what you're watching on TV, your answer updates their model of the world in several simultaneous ways, some true and some not. The point of the piece is that you can't "just tell the truth": no matter what you do or say, you will update people in the correct way, but also in other, incorrect ways.

I thought it was called "You can't just tell the truth" or "The Impossibility of Just Telling the Truth" or something like that, but I'm not finding anything.

Replies from: SaidAchmiz