Personal Benefits from Rationality

post by Celer · 2011-05-12T01:08:20.020Z · LW · GW · Legacy · 37 comments


I saw this and realised something:

"Hey, wait, where have I seen other people talk about specific benefits from Rationality?"

And then I realised I hadn't. I looked around the site some. Nothing there.

This is a place to fix that. The idea of this page is to post specific things that you personally have found helpful, things you learned from your studies of Bayescraft. This way we can find some that seem to work for a large number of people, so that when new people start to become interested in Rationality we can "make it rain" and let them see the benefits that come with being less wrong.

For commenters:

If someone posted something already that also worked for you, mention that. If every tactic is apparently used by only a single person, then it is harder for us as a community to figure out what we should recommend to tyros. 

List of N Things:

 

Understanding that my high school history class has more to do with real science than my chemistry class does helped me see how I should approach the problem. Viewed the right way, history lets you look at what happened and ask "Why did this happen?"

Reading up on cognitive neuroscience taught me that I could use the placebo effect on myself. I have missed one day of school due to illness in my life.

Learning not to propose solutions for a minimum of five minutes, by the clock, has honestly been the most effective thing for personal application that I have yet learned at Less Wrong.

 

May we all share many useful things, for our own benefit and as a place to point tyros towards.

37 comments


comment by MinibearRex · 2011-05-12T03:41:01.154Z · LW(p) · GW(p)

Not proposing solutions for five minutes is something I do every couple of days. I literally have a timer on my watch that is, by default, set to five minutes, and if I am wrestling with a difficult problem, I just sit down, start the timer, close my eyes, and think. In a more general sense, I use techniques like original seeing all the time.

Knowing about positive bias is one of the most powerful tools I have for determining the truth.

In a general sense, I can't stress enough how much happier I've consistently become since I realized that when I notice I'm not happy, or bored, or whatever, I can just ask myself "What could I be doing right now instead of what I am doing, that I would enjoy more?" This has led me to several impromptu road trips, a number of hiking and biking excursions, and reading several really good new books over the last couple of months alone.

Replies from: atucker
comment by atucker · 2011-05-12T10:56:09.902Z · LW(p) · GW(p)

This has led me to several impromptu road trips, a number of hiking and biking excursions, and reading several really good new books over the last couple of months alone.

Yeah, I think that trying new things is another thing most rationalists should do. I find myself defaulting towards action a lot more often now -- like last night I signed up for a free improv lesson in my area because I thought improv would be useful, and just looked for what I could do about that.

Replies from: MinibearRex
comment by MinibearRex · 2011-05-12T20:00:31.833Z · LW(p) · GW(p)

I want to do that as well. A friend of mine took a class on stand-up comedy and really loved it. We were going to take it together, but the schedule didn't work for me.

comment by kpreid · 2011-05-12T12:54:04.752Z · LW(p) · GW(p)

Most of what I've learned here has become background ideas that I can't quite point to real-world direct instances of; most of the thinking I do is ‘intuitive’, or rather, not reflected upon and poorly remembered. That said:

  • “Multiplication.” That is, the notion that you actually can compute expected value of real-world choices. Most blandly, I won an iPod in a raffle after a very rough computation that buying tickets was actually worthwhile compared to buying the item at retail, whereas I had previously been ‘irrationally’ opposed to anything resembling ‘gambling’.

  • Recognizing the pattern of disputes over definitions has helped me avoid disputing definitions, or caring about the outcome of such disputes, when they are not actually useful.

  • It is clear from my experience that working hurts less than procrastinating; I have a poor record of actually applying this knowledge, however.

Replies from: Alicorn
comment by Alicorn · 2011-05-12T19:11:56.797Z · LW(p) · GW(p)

“Multiplication.” That is, the notion that you actually can compute expected value of real-world choices. Most blandly, I won an iPod in a raffle after a very rough computation that buying tickets was actually worthwhile compared to buying the item at retail, whereas I had previously been ‘irrationally’ opposed to anything resembling ‘gambling’.

This. One time my grad school department wanted to have the most donations to charity through the university's program, and so they arranged that a donation of any size (even $1) would constitute an entry into a department-organized raffle-y thing. I don't know where the prizes came from, but instead of dismissing the prospect out of hand I did a little arithmetic, got an estimate of how many people were entering from the school secretary, and determined that the cash prizes alone (let alone the gift basket and whatnot) constituted positive expected value. So I gave Planned Parenthood a dollar. (I lost, though.)
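The arithmetic is simple enough to sketch. A minimal Python version with made-up numbers (the ticket price, entrant count, and prize values here are placeholders, not the actual figures from that raffle):

```python
# Rough expected value of one raffle entry, with placeholder numbers.
ticket_price = 1.00             # one entry = a $1 donation
entrants = 50                   # estimate obtained from the secretary
cash_prizes = [100.00, 50.00]   # hypothetical cash prize values

# With one ticket among ~50 entries, each prize is won with
# probability roughly 1/entrants.
expected_winnings = sum(prize / entrants for prize in cash_prizes)

print(expected_winnings)                 # 3.0
print(expected_winnings > ticket_price)  # True: positive expected value
```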

Replies from: Benquo
comment by Benquo · 2011-05-12T19:31:22.850Z · LW(p) · GW(p)

Upvoted for distinguishing the anecdotal outcome (a loss) from the expected outcome (positive). In other words, anecdotes are only good data when everyone reports on their outcome.

BTW, as long as I could give to a charity I considered worthwhile, this would almost always be a slam dunk regardless of the expected gain from the raffle itself. For example, if I valued the good produced by giving $1 to the charity at $0.95, then $0.05 in expected winnings would be my break-even point, not $1.00.
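A minimal sketch of that break-even condition (the $0.95 valuation comes from the example above; the expected-winnings figure is assumed):

```python
# Break-even for a charity raffle, counting the value of the donation itself.
ticket_price = 1.00
donation_value = 0.95       # how much I value the good my $1 does
expected_winnings = 0.05    # assumed expected raffle winnings per ticket

# Worth playing when winnings cover the part of the ticket price
# not already repaid by the donation's value to me.
print(expected_winnings >= ticket_price - donation_value)  # True: at break-even
```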

Replies from: endoself
comment by endoself · 2011-05-13T01:22:22.941Z · LW(p) · GW(p)

if I valued the good produced by giving $1 to the charity at $0.95

Why would you? Charities differ in effectiveness by far more than 5%.

Replies from: Benquo
comment by Benquo · 2011-05-13T12:10:03.112Z · LW(p) · GW(p)

I meant to imply by "a charity I considered worthwhile" that you're near the utility break-even point already. Obviously, if you think the charity only does a tiny amount of good, then you need a bigger expected win to make the raffle worth playing.

comment by atucker · 2011-05-12T02:19:07.483Z · LW(p) · GW(p)

Since becoming a rationalist, I've become (on average) happier, more adventurous, more tolerant of other people, more comfortable in a variety of situations, more motivated, more intentional, more understanding, less moody, less nihilistic, less contentious, and lots of other fun things. (Feel free to ask for evidence for these claims, I just didn't want to write all of that out now and make this reply even longer.)

I think the main ideas the rest of that latches onto, and the more specific useful details that actually produced changes in my life, are:

  • You're allowed to accomplish your goals by means other than your cached thoughts

  • Other people are different than me, and I should think from their perspective rather than extrapolating from myself

  • Working in groups is really, really helpful for accomplishing things

  • How to dissolve the question

  • Take other people's advice

  • Hold off on proposing solutions

  • Don't listen to Bruce

It was addressed earlier here.

I'm fine with talking about it again, because a lot of that thread focused on XiXiDu, and I'm all up for hearing more upbeat responses.

comment by David_Gerard · 2011-05-12T09:43:52.306Z · LW(p) · GW(p)

There was one of these at the start of April:

http://lesswrong.com/lw/52n/q_what_has_rationality_done_for_you/

Want to declare this May's thread?

My April answer still holds. I will add: LessWrong has taught me to separate the process of thinking rationally from believing I have to set goals that pass a test of rationality. "Is it rational to want that?" only makes sense for instrumental values, not terminal ones, and can lead to trouble when you haven't fully resolved which of your values or goals is which (whether you think you have or not).

comment by shokwave · 2011-05-12T17:25:31.734Z · LW(p) · GW(p)

My housemate has almost completely hacked my brain (liberally apply computer programming and Gödel, Escher, Bach to your mind) to think in isomorphisms, efficient algorithms, and the like. This has caused improvements like using a queue instead of a stack for scheduling chores (one bad chore in a stack will cause me to look for other easier chores to stack on top of it), which means my weekly chores get done in an afternoon instead of over a week, and a general attitude of thinking about problems instead of solving them. Usually, a bit of thought will reveal some underlying pattern that has an optimal solution ready and waiting.

Rationality gave me this because it told me, at one point, about behavioral hacks. So I looked for my smartest, most effective, and most awesome friend, and made them my housemate.

Replies from: Gray, MartinB, FiftyTwo
comment by Gray · 2011-05-12T19:02:14.549Z · LW(p) · GW(p)

Dijkstra said that computer science is as much about computers as astronomy is about telescopes, so it shouldn't be surprising that things like algorithms and data structures have relevance even to mundane reality. One way I look at myself is as an extremely small and limited computer. On the fly, my brain is slow at performing operations, I have a hard time recalling information, and I do so with limited accuracy. Sometimes I make mistakes while performing operations.

So what are we doing, when we try to organize ourselves and make plans, but trying to compile a program for these very-far-from-optimal circumstances? Obviously, if I make plenty of mistakes, I need to write in plenty of redundancy; and I have to employ "tricks" in order to achieve meta-cognition at the right times (something that goes beyond the computer analogy, I know).

This involves, as I see it, a further way of looking at yourself. You see yourself as both the machine executing instructions, and the programmer writing those instructions (as well as the compiler, trying to translate the program to machine language). Nietzsche wrote that we have to develop as both commanders and obeyers. I thought this was hogwash, but I've learned that there is a lot of truth to that.

comment by MartinB · 2011-05-12T20:22:46.640Z · LW(p) · GW(p)

using a queue instead of a stack for scheduling chores

I am not a native English speaker, and have usually seen these as synonymous. Could you explain a bit what the difference is?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-05-12T20:57:48.159Z · LW(p) · GW(p)

It's not about English:

Replies from: MartinB
comment by MartinB · 2011-05-12T21:49:25.495Z · LW(p) · GW(p)

Ahh, yes of course. LIFO vs. FIFO. Thank you for explaining.

comment by FiftyTwo · 2011-05-13T02:06:55.082Z · LW(p) · GW(p)

A quick google tells me "A stack is generally First In, Last Out, and a queue is First In First Out."

Just to be clear, you mean that now you will be doing one task, see another that needs doing, and do it instead? Whereas before you would continue doing the current task, with the intention of doing the other when it was completed?

[Sorry if that's blindingly obvious; my computer science knowledge is fairly sparse.]

Replies from: shokwave
comment by shokwave · 2011-05-13T04:55:26.901Z · LW(p) · GW(p)

Say I needed to vacuum the house and complete an essay. If I stack vacuum on top of essay, I'll be vacuuming first, and then going to do my essay. But if, while I'm vacuuming, I realise the dishes need cleaning and I need to post a letter, I'll put those on the stack as well, and they'll get done before the essay because they're on top. And as long as I can come up with more tasks, I can stack them on top of the essay, and never get around to it.

But with a queue, I do the vacuuming, realise the dishes need doing, and queue that up behind the essay. The essay gets done before the dishes, removing the temptation to generate mindless busywork for myself.
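In code, the difference is just which end of the task list you take chores from. A minimal Python sketch (the chores and the order they are noticed in are illustrative):

```python
from collections import deque

chores_noticed = ["vacuum", "essay", "dishes", "post letter"]

# Stack (LIFO): each newly noticed chore goes on top and gets done first,
# so the essay at the bottom can be buried indefinitely.
stack = list(chores_noticed)
while stack:
    print("stack does:", stack.pop())      # post letter, dishes, essay, vacuum

# Queue (FIFO): chores are done in the order noticed,
# so the essay cannot be displaced by later busywork.
queue = deque(chores_noticed)
while queue:
    print("queue does:", queue.popleft())  # vacuum, essay, dishes, post letter
```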

Replies from: FiftyTwo, wedrifid
comment by FiftyTwo · 2011-05-13T05:27:10.605Z · LW(p) · GW(p)

Ah I see, the task currently being done is not part of the stack.

I can see this working with tasks of similar length and difficulty. But what about when one task is significantly shorter than another and partly time-dependent? E.g. in this case, while your essay is more important, it might take several hours to do well, during which the dishes will moulder and annoy your flatmate, whereas the essay will not be altered by waiting that long. I acknowledge that this is a possible way to rationalise procrastination, but there would be cases where it was true.

Replies from: shokwave
comment by shokwave · 2011-05-13T05:53:56.554Z · LW(p) · GW(p)

It's possible, but I've never encountered such a situation.

comment by wedrifid · 2011-05-13T05:35:35.121Z · LW(p) · GW(p)

I have fond memories of implementing Priority Queues, back in the day. The algorithm is rather elegant.
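A priority queue is also one natural answer to FiftyTwo's time-dependence worry above: urgent short tasks can jump ahead without inviting unlimited busywork. A minimal sketch using Python's heapq (the chores and priority numbers are made up):

```python
import heapq

# Min-heap priority queue: lower number = done sooner.
pq = []
heapq.heappush(pq, (2, "essay"))        # important, but can wait a few hours
heapq.heappush(pq, (1, "dishes"))       # urgent: they moulder
heapq.heappush(pq, (3, "post letter"))

while pq:
    priority, chore = heapq.heappop(pq)
    print("doing:", chore)              # dishes, essay, post letter
```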

comment by Manfred · 2011-05-12T04:33:57.181Z · LW(p) · GW(p)

where have I seen other people talk about specific benefits from Rationality?

Threads on this crop up occasionally. Unfortunately the options for directing new attention to an old topic are very limited.

The most direct parallel is here: http://lesswrong.com/lw/6t/the_benefits_of_rationality/

comment by Armok_GoB · 2011-05-12T09:59:02.986Z · LW(p) · GW(p)

Not sure if it's specific enough, but mental-illness-type problems can be cancelled out by it.

Replies from: FiftyTwo
comment by FiftyTwo · 2011-05-13T02:12:00.839Z · LW(p) · GW(p)

Could you elaborate? There seems to be some anecdotal evidence (and, more specifically, the theory of "depressive realism") that suggests (misapplied) rationalism can be detrimental to one's mental health.

Replies from: Armok_GoB
comment by Armok_GoB · 2011-05-13T10:45:51.325Z · LW(p) · GW(p)

Keyword: misapplied. Someone who actually reads the sequences, understands them, and applies them as recommended won't misapply them.

Replies from: FiftyTwo
comment by FiftyTwo · 2011-05-13T18:55:29.290Z · LW(p) · GW(p)

I'm not sure if reading the sequences is sufficient to ensure correct application.

But regardless can you give an example of rationality cancelling out mental illness issues?

Edit: Apologies if doing so goes into personal information you would rather not discuss, if that is the case please consider my request withdrawn.

Replies from: Armok_GoB
comment by Armok_GoB · 2011-05-13T19:23:08.892Z · LW(p) · GW(p)

It sort of does. If you're really interested you can go through my posting history, though; I think I posted something about it a while ago.

comment by RolfAndreassen · 2011-05-12T03:51:49.472Z · LW(p) · GW(p)

I often (weekly, if not more frequently) use the technique "don't evaluate before brainstorming" in my day-to-day work. I consciously look for further alternatives before considering which is best of the ones I've already listed - that is, I sometimes catch myself in the act, and say "hang on, are those really all the options?" Several times this has helped me hit on an alternative superior to my first thoughts.

comment by Goobahman · 2011-05-13T00:30:40.036Z · LW(p) · GW(p)

It's hard to articulate all the benefits it's had in my life, but I'll name some that I've really noticed:

Self-evaluation and development: Many of the posts on bias and human behaviour have helped me understand myself as a very primal creature, and to distinguish between the rational logic in my head and the very human part of me that operates day-to-day. Because I have the tools to understand those subjective parts of me, I can essentially 'manipulate' myself for my own benefit. For example, Luke's article on The Good News of Situational Psychology helped me to understand how influenced I am by my situation, and that I need to proactively place myself in situations that will encourage me to make better decisions. The ability to be objective about myself and my emotions also allows me to do a cost-benefit analysis on which parts of me need the most work.

Social interaction: Many of the same tools have given me a stronger understanding of humans as social creatures, with many behaviours and mannerisms similar to other animals'. Being able to see things in that light has allowed me to take advantage of these social norms, even in terms of encouraging positive behaviours in my friends and family and optimizing our lives.

Those are the main two, but they sort of underlie everything else that goes on in my life as well...

comment by Servant · 2011-05-12T09:17:35.840Z · LW(p) · GW(p)

My main goal at the moment is to utilize "rationality" as a way to map how individuals come to a decision or belief (what to do, what to say, what to believe, etc.). I am not assuming that these individuals ARE rational, but it's a useful (and quick) tool for retrospectively "mapping out" an individual's thought process. Like all tools, though, it will have limitations.

comment by Thomas · 2011-05-12T15:36:08.856Z · LW(p) · GW(p)

I only expect that my communication with others will be easier -- if and when they become more rational.

Nothing else.

Replies from: MartinB
comment by MartinB · 2011-05-12T18:00:31.643Z · LW(p) · GW(p)

That sounds like a net negative.

Replies from: Thomas
comment by Thomas · 2011-05-12T18:38:31.255Z · LW(p) · GW(p)

That sounds like a net negative.

I don't see this reply as a rational one.

Replies from: MartinB
comment by MartinB · 2011-05-12T18:50:40.563Z · LW(p) · GW(p)

The OP asked about benefits from rationality. You gave something that looks like a negative effect, and mentioned that it is the only thing you get. Hence you seem to experience a net loss from being rational over not being.

You know that rationality is not just about being right. It is also about achieving the things you set out to do. Winning and such. If you do not win, something is wrong.

Replies from: Servant, Thomas
comment by Servant · 2011-05-12T18:56:02.996Z · LW(p) · GW(p)

I don't see that as a net negative. There may be a lot of people who are "rational", and possibly some who are already "winning". Knowing how to communicate with them is indeed a net plus, since it gives you an exclusive network other people won't have (letting you "win" as well).

comment by Thomas · 2011-05-12T19:25:16.825Z · LW(p) · GW(p)

Hence you seem to experience a net loss by being rational over not being.

Internal rationality I take for granted. But any cooperation with not-so-rational people is more difficult than with those (much) more rational.

Rationality increases communication bandwidth.

What else do you expect, if rationality goes up with time, except better information exchange with others?

What else?

Replies from: MartinB
comment by MartinB · 2011-05-12T20:14:47.746Z · LW(p) · GW(p)

I recommend you re-read the original posting and the other comments. There seems to be a difference between how you interpret the question and how everyone else does.

Replies from: Thomas
comment by Thomas · 2011-05-13T04:39:20.870Z · LW(p) · GW(p)

Seeing the thing just as everybody else does, or at least as the local majority does? What do you call it? Confirmation bias?