Extreme Rationality: It's Not That Great

post by Scott Alexander (Yvain) · 2009-04-09T02:44:20.056Z · LW · GW · Legacy · 281 comments

Related to: Individual Rationality is a Matter of Life and Death, The Benefits of Rationality, Rationality is Systematized Winning
But I finally snapped after reading: Mandatory Secret Identities

Okay, the title was for shock value. Rationality is pretty great. Just not quite as great as everyone here seems to think it is.

For this post, I will be using "extreme rationality" or "x-rationality" in the sense of "techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training." It seems pretty uncontroversial that there are massive benefits from going from a completely irrational moron to the average intelligent person's level. I'm coining this new term so there's no temptation to confuse x-rationality with normal, lower-level rationality.

And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.

So, what are these "benefits" of "x-rationality"?

A while back, Vladimir Nesov asked exactly that, and made a thread for people to list all of the positive effects x-rationality had on their lives. Only a handful responded, and most responses weren't very practical. Anna Salamon, one of the few people to give a really impressive list of benefits, wrote:

I'm surprised there are so few apparent gains listed. Are most people who benefited just being silent? We should expect a certain number of headache-cures, etc., just by placebo effects or coincidences of timing.

There have since been a few more people claiming practical benefits from x-rationality, but we should generally expect more people to claim benefits than to actually experience them. Anna mentions the placebo effect, and to that I would add cognitive dissonance - people spent all this time learning x-rationality, so it MUST have helped them! - and the same sort of confirmation bias that makes Christians swear that their prayers really work.

I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines[1], I can't think of any.

Looking over history, I do not find any tendency for successful people to have made a formal study of x-rationality. This isn't entirely fair, because the discipline has expanded vastly over the past fifty years, but the basics - syllogisms, fallacies, and the like - have been around much longer. The few groups who made a concerted effort to study x-rationality didn't shoot off an unusual number of geniuses - the Korzybskians are a good example. In fact, as far as I know, the only follower of Korzybski to turn his ideas into a vast personal empire of fame and fortune was (ironically!) L. Ron Hubbard, who took the basic concept of techniques to purge confusions from the mind, replaced the substance with a bunch of attractive flim-flam, and founded Scientology. And like Hubbard's superstar followers, many of this century's most successful people have been notably irrational.

There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it. The evidence in favor of the proposition right now seems to be its sheer obviousness. Rationality is the study of knowing the truth and making good decisions. How the heck could knowing more than everyone else and making better decisions than them not make you more successful?!?

This is a difficult question, but I think it has an answer. A complex, multifactorial answer, but an answer.

One factor we have to once again come back to is akrasia[2]. I find akrasia in myself and others to be the most important limiting factor to our success. Think of that phrase "limiting factor" formally, the way you'd think of the limiting reagent in chemistry. When there's a limiting reagent, it doesn't matter how much more of the other reagents you add, the reaction's not going to make any more product. Rational decisions are practically useless without the willpower to carry them out. If our limiting reagent is willpower and not rationality, throwing truckloads of rationality into our brains isn't going to increase success very much.
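
As a toy sketch of the limiting-reagent idea (purely illustrative; the min() model and the numbers are arbitrary assumptions, not anything from the post):

```python
# Toy model (illustrative only): success is bounded by the scarcer of two
# inputs, the way product yield is bounded by the limiting reagent.
def success(rationality: float, willpower: float) -> float:
    return min(rationality, willpower)

# With willpower stuck at 2, piling on rationality changes nothing:
print(success(rationality=2, willpower=2))    # 2
print(success(rationality=10, willpower=2))   # still 2
print(success(rationality=10, willpower=8))   # 8 -- the bottleneck moved
```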

This is a very large part of the story, but not the whole story. If I were rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

So the second factor is that most people are rational enough for their own purposes. Oh, they go on wild flights of fancy when discussing politics or religion or philosophy, but when it comes to business they suddenly become cold and calculating. This relates to Robin Hanson on Near and Far modes of thinking. Near Mode thinking is actually pretty good at a lot of things, and Near Mode thinking is the thinking whose accuracy gives us practical benefits.

And - when I was young, I used to watch The Journey of Allen Strange on Nickelodeon. It was a children's show about this alien who came to Earth and lived with these kids. I remember one scene where Allen the Alien was watching the kids play pool. "That's amazing," Allen told them. "I could never calculate differential equations in my head that quickly." The kids had to convince him that "it's in the arm, not the head" - that even though the movement of the balls is governed by differential equations, humans don't actually calculate the equations each time they play. They just move their arm in a way that feels right. If Allen had been smarter, he could have explained that the kids were doing some very impressive mathematics on a subconscious level that produced their arm's perception of "feeling right". But the kids' point still stands; even though in theory explicit mathematics will produce better results than eyeballing it, in practice you can't become a good pool player just by studying calculus.

A lot of human rationality follows the same pattern. Isaac Newton is frequently named as a guy who knew no formal theories of science or rationality, who was hopelessly irrational in his philosophical beliefs and his personal life, but who is still widely and justifiably considered the greatest scientist who ever lived. Would Newton have gone even further if he'd known Bayes' theorem? Probably it would've been like telling the world pool champion to try using more calculus in his shots: not a pretty sight.

Yes, yes, beisutsukai should be able to develop quantum gravity in a month and so on. But until someone on Less Wrong actually goes and does it, that story sounds a lot like when Alfred Korzybski claimed that World War Two could have been prevented if everyone had just used more General Semantics.

And then there's just plain noise. Your success in the world depends on things ranging from your hairstyle to your height to your social skills to your IQ score to cognitive constructs psychologists don't even have names for yet. X-Rationality can help you succeed. But so can excellent fashion sense. It's not clear in real-world terms that x-rationality has more of an effect than fashion. And don't dismiss that with "A good x-rationalist will know if fashion is important, and study fashion." A good normal rationalist could do that too; it's not a specific advantage of x-rationalism, just of having a general rational outlook. And having a general rational outlook, as I mentioned before, is limited in its effectiveness by poor application and akrasia.

I no longer believe mastering all these Overcoming Bias and Less Wrong techniques will turn me into Anasûrimbor Kellhus or John Galt. I no longer even believe mastering all these Overcoming Bias techniques will turn me into Eliezer Yudkowsky (who, as his writings from 2001 indicate, had developed his characteristic level of awesomeness before he became interested in x-rationality at all)[3]. I think it may help me succeed in life a little, but I think the correlation between x-rationality and success is probably closer to 0.1 than to 1. Maybe 0.2 in some businesses like finance, but people in finance tend to know this and use specially developed x-rationalist techniques on the job already without making it a lifestyle commitment. I think it was primarily a Happy Death Spiral around how wonderfully super-awesome x-rationality was that made me once think otherwise.
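
To put a correlation of 0.1 in perspective, a minimal simulation (purely illustrative; it assumes standardized normal traits and a linear relationship, neither of which the post claims) shows it corresponds to explaining roughly 1% of the variance in success:

```python
# Illustrative simulation: standardized normal traits, linear relationship.
# A correlation of r implies roughly r^2 of the variance is explained.
import random
import statistics

random.seed(0)
r = 0.1
xs, ys = [], []
for _ in range(100_000):
    x = random.gauss(0, 1)                                  # x-rationality
    y = r * x + (1 - r ** 2) ** 0.5 * random.gauss(0, 1)    # "success"
    xs.append(x)
    ys.append(y)

corr = statistics.correlation(xs, ys)   # requires Python 3.10+
print(f"correlation ~ {corr:.2f}, variance explained ~ {corr ** 2:.1%}")
```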

And this is why I am not so impressed by Eliezer's claim that an x-rationality instructor should be successful in their non-rationality life. Yes, there probably are some x-rationalists who will also be successful people. But again, correlation 0.1. Stop saying only practically successful people could be good x-rationality teachers! Stop saying we need to start having huge real-life victories or our art is useless! Stop calling x-rationality the Art of Winning! Stop saying I must be engaged in some sort of weird signalling effort for saying I'm here because I like mental clarity instead of because I want to be the next Bill Gates! It trivializes the very virtues that brought most of us to Overcoming Bias, and replaces them with what sounds a lot like a pitch for some weird self-help cult...

...

...

...but you will disagree with me. And we are both aspiring rationalists, and therefore we resolve disagreements by experiments. I propose one.

For the next time period - a week, a month, whatever - take special note of every decision you make. By "decision", I don't mean the decision to get up in the morning, I mean the sort that's made on a conscious level and requires at least a few seconds' serious thought. Make a tick mark, literal or mental, so you can count how many of these there are.

Then note whether you made that decision rationally. If so, also record whether you made it x-rationally. I don't just mean you spent a brief second thinking about whether any biases might have affected your choice. I mean one where you think there's a serious (let's arbitrarily say 33%) chance that using x-rationality instead of normal rationality actually changed the result of your decision.

Finally, note whether, once you came to the rational conclusion, you actually followed it. This is not a trivial matter. For example, before writing this blog post I wondered briefly whether I should use the time studying instead, used normal (but not x-) rationality to determine that yes, I should, and then proceeded to write this anyway. And if you get that far, note whether your x-rational decisions tend to turn out particularly well.
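
A minimal sketch of how the tally might be kept (illustrative only; the post asks for nothing more than tick marks, and the field names below are invented):

```python
# Minimal decision journal for the experiment described above.
# Field names are invented; the post only calls for tick marks.
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    rational: bool = False     # applied any deliberate rationality at all?
    x_rational: bool = False   # >=33% chance x-rationality changed the outcome?
    followed: bool = False     # actually acted on the conclusion?

log = [
    Decision("write blog post vs. study", rational=True, followed=False),
    Decision("which route to drive home"),
]

print(len(log), "decisions |",
      sum(d.rational for d in log), "rational |",
      sum(d.x_rational for d in log), "x-rational |",
      sum(d.followed for d in log), "followed through")
```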

This experiment seems easy to rig[4]; merely doing it should increase your level of conscious rational decisions quite a bit. And yet I have been trying it for the past few days, and the results have not been pretty. Not pretty at all. Not only do I make fewer conscious decisions than I thought, but I rarely apply even the slightest modicum of rationality to the ones I do make; when I do apply rationality, it's practically never x-rationality; and when I do apply everything I've got, I don't seem to follow those decisions very consistently.

I'm not so great a rationalist anyway, and I may be especially bad at this. So I'm interested in hearing how different your results are. Just don't rig it. If you find yourself using x-rationality twenty times more often than you were when you weren't performing the experiment, you're rigging it, consciously or otherwise[5].

Eliezer writes:

The novice goes astray and says, "The Art failed me."
The master goes astray and says, "I failed my Art."

Yet one way to fail your Art is to expect more of it than it can deliver. No matter how good a swimmer you are, you will not be able to cross the Pacific. This is not to say crossing the Pacific is impossible. It just means it will require a different sort of thinking than the one you've been using thus far. Perhaps there are developments of the Art of Rationality or its associated Arts that can turn us into a Kellhus or a Galt, but they will not be reached by trying to overcome biases really really hard.

Footnotes:

[1]: Specifically, reading Overcoming Bias convinced me to study evolutionary psychology in some depth, which has been useful in social situations. As far as I know. I'd probably be biased into thinking it had been even if it hadn't, because I like evo psych and it's very hard to measure.

[2]: Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to "kicking" to our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts.

[3]: This is actually an important point. I think there are probably quite a few smart, successful people who develop an interest in x-rationality, but I can't think of any people who started out merely above-average, developed an interest in x-rationality, and then became smart and successful because of that x-rationality.

[4]: This is a terribly controlled experiment, and the only way its data can be meaningfully interpreted at all is through what one of my professors called the "ocular trauma test" - when the data hits you between the eyes. If people claim they always follow their rational decisions, I think I will be more likely to interpret it as a lack of enough cognitive self-consciousness to notice when they're doing something irrational than as an honest lack of irrationality.

[5]: In which case it will have ceased to be an experiment and become a technique instead. I've noticed this happening a lot over the past few days, and I may continue doing it.

281 comments

Comments sorted by top scores.

comment by AnnaSalamon · 2009-04-09T12:40:38.094Z · LW(p) · GW(p)

So the second factor is that most people are rational enough for their own purposes. Oh, they go on wild flights of fancy when discussing politics or religion or philosophy, but when it comes to business they suddenly become cold and calculating. This relates to Robin Hanson on Near and Far modes of thinking. Near Mode thinking is actually pretty good at a lot of things, and Near Mode thinking is the thinking whose accuracy gives us practical benefits.

Seems to me that most of us make predictably dumb decisions in quite a variety of contexts, and that by becoming extra bonus sane (more sane/rational than your average “intelligent science-literate person without formal rationalist training”), we really should be able to do better.

Some examples of the “predictably dumb decisions” that an art of rationality should let us improve on:

  • Dale Carnegie says (correctly, AFAIK) that most of us try to persuade others by explaining the benefits from our point of view (“I want you to play basketball with me because I don’t have enough people to play basketball with”), even though it works better to explain the benefits from their points of view. Matches my experiences, and matches also many/most of the local craigslist ads. The gains if we notice and change this one would be significant.
  • Lots of people decide to take a job “to make more money”, but don’t bother to actually research the odds of getting that job, and the average payoff from that job (the latter, at least, is easy to look up on the internet) before spending literally years training for the job. Even in cases like med school. Again, significant payoff here, and in this case fairly minimal willpower requirements.
  • Lots of us tend to mostly stick to our own opinions in conversations, even in cases where our impressions are no better data than our interlocutor’s impressions, and where the correct opinion can actually impact the goodness of our lives (e.g., which course to take on a work project whose outcome matters; which driving route is faster; which carwash to try) (these latter decisions are small, but small decisions add up).
  • Similarly, lots of us decide we’re “good at X and bad at Y”, or that we’re “the sort of people who do A in such-and-such a specific manner”, and quit learning in a particular domain, quit updating our skill-sets, keep suboptimal beliefs or practices glued to our identities instead of looking around to see how others do things and what methods might achieve greater success. Lots of us spend far more of our thinking time noting all the reasons why we’re best off doing what we’re doing than we do looking for new ways to do things, even when such looking has tended to give us useful improvements.
  • Lots of people run more risk of death by car than they would upon consideration choose, e.g. by driving too close to the car in front of them (the half-second earlier that you get home isn’t worth it) or by driving while tired. At the same time, lots of people refrain from enjoyable activities such as walking around at night or swimming off the coast of Florida despite the occasional sharks, in cases where the activities in fact pose nearly negligible danger, but the dangers in question are vivid and easy to over-estimate.
Replies from: John_Maxwell_IV, None
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-09T22:23:41.902Z · LW(p) · GW(p)

I don't think you need the art of rationality much for that stuff. I think just being reminded is almost as good, if not better. Who do you think would do better on them: someone who read all of LW/OB except this post, or someone who read this post only? Now consider that reading all of LW/OB would take at least 256 times longer.

Replies from: loqi
comment by loqi · 2009-04-10T04:01:32.465Z · LW(p) · GW(p)

That was only a sample. Should we really prefer keeping them all in mind over learning the pattern behind them?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-10T20:26:38.843Z · LW(p) · GW(p)

Learning about rationality won't necessarily help you realize where you're being irrational. If you've got a general method for doing that, I'd be interested, but I don't think it's been discussed much on this blog.

comment by [deleted] · 2015-01-28T17:16:19.933Z · LW(p) · GW(p)

Dale Carnegie says (correctly, AFAIK) that most of us try to persuade others by explaining the benefits from our point of view (“I want you to play basketball with me because I don’t have enough people to play basketball with”), even though it works better to explain the benefits from their points of view. Matches my experiences, and matches also many/most of the local craigslist ads. The gains if we notice and change this one would be significant.

Interesting. But from a bit of searching, this applies to business. It looks good in a job interview. Don't try this on a date! (no lukeprog allowed)

Thanks for the advice! For completeness, I'd assume this is what you meant: http://www.dalecarnegie.com/communication_effectiveness_-_present_to_persuade/ or at least it gives it a deeper point.

Replies from: Nornagest
comment by Nornagest · 2015-01-28T17:32:16.748Z · LW(p) · GW(p)

Don't try this on a date! (no lukeprog allowed)

Why not? Lukeprog's mistake, assuming you're talking about what I think you're talking about, seems to have been quite the opposite of trying to explain the benefits of an option from the other person's point of view:

So I broke up with Alice over a long conversation that included an hour-long primer on evolutionary psychology in which I explained how natural selection had built me to be attracted to certain features that she lacked.

I imagine he'd have had better luck, or at least not become the butt of quite so many relationship jokes on LW, if he'd gone with something like "you deserve someone who appreciates you better". Notice that from Alice's perspective, this describes exactly the same situation -- but in terms of what it means to her.

Replies from: None, Lumifer
comment by [deleted] · 2015-01-28T17:44:21.659Z · LW(p) · GW(p)

Nah. Just meant that considering his posts on relationships, he might try that, so therefore, no lukeprog allowed.

In truth I was just trying to use reverse psychology to get him to do it and hopefully post some results.

And this is where this silliness ends before I get more downvotes.

comment by Lumifer · 2015-01-28T17:39:24.826Z · LW(p) · GW(p)

So I broke up with Alice over a long conversation that included an hour-long primer on evolutionary psychology in which I explained how natural selection had built me to be attracted to certain features that she lacked.

ROFL... An hour-long primer to explain "You should have gotten a boob job" X-D

comment by mathemajician · 2009-04-11T11:02:51.305Z · LW(p) · GW(p)

Imagine a world where the only way to become really rich is to win the lottery (and everybody is either risk averse or at least risk neutral). With an expected return of less than $1 per $1 spent on tickets, rational people don't buy lottery tickets. Only irrational people do that. As a result, all the really rich people in this world must be irrational.

In other words, it is possible to have situations where being rational increases your expected performance, but at the same time reduces your chances of being a super achiever. Thus, the claim that "rationalists should win" is not necessarily true, even in theory, if "winning" is taken to mean being among the top performers. A more accurate statement would be, "In a world with both rational and irrational agents, the rational agents should perform better on average than the population average."
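
A quick toy simulation of this world (illustrative only; the ticket price, payout, and probabilities below are arbitrary assumptions) shows both effects at once: the rational abstainers do better on average, yet every top spot belongs to a ticket-buyer:

```python
# Toy lottery world (arbitrary numbers): a ticket costs 1 and pays 500 with
# probability 0.001, so its expected value is 0.5 < 1. Rational agents abstain.
import random

random.seed(1)
START, TICKETS = 100, 50

def final_wealth(buys_tickets: bool) -> int:
    w = START
    if buys_tickets:
        for _ in range(TICKETS):
            w -= 1
            if random.random() < 0.001:
                w += 500
    return w

rational = [final_wealth(False) for _ in range(10_000)]
irrational = [final_wealth(True) for _ in range(10_000)]

print("average rational wealth:  ", sum(rational) / len(rational))      # 100
print("average irrational wealth:", sum(irrational) / len(irrational))  # ~75
print("richest agent is irrational:", max(irrational) > max(rational))  # True
```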

Replies from: ciphergoth, NicoleTedesco
comment by Paul Crowley (ciphergoth) · 2009-04-11T12:28:43.922Z · LW(p) · GW(p)

There's an extent to which we live in such a world. Many people believe you can achieve your wildest dreams if you only try hard enough, because by golly, all those people on the TV did it!

Replies from: Hans
comment by Hans · 2009-04-13T09:55:51.635Z · LW(p) · GW(p)

But many poor/middle-class people also believe that they can never become rich (except for the lottery) because the only ways to become rich are crime, fraud, or inheritance. And this leads them to underestimate the value of hard work, education, and risk-taking.

The median rationalist will perform better than these cynics. But his average wealth will also be higher, assuming he accurately observes his chances at becoming successful.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-04-25T13:01:35.521Z · LW(p) · GW(p)

But many poor/middle-class people also believe that they can never become rich (except for the lottery) because the only ways to become rich are crime, fraud, or inheritance. And this leads them to underestimate the value of hard work, education, and risk-taking.

From what I can see, crime and fraud are harder to get significant success with than 'real' work. Education and risk-taking are also rather vital.

comment by NicoleTedesco · 2012-01-15T15:57:33.571Z · LW(p) · GW(p)

It can be rational to accept the responsibility of high risk/high reward behavior, on specific occasions and under specific circumstances. The trick is recognizing those occasions and circumstances and also recognizing when your mind is fooling you into believing "THIS TIME IS DIFFERENT".

A rational agent is Warren Buffett. An irrational agent is Ralph Kramden. Both accept high risk/high reward situations. One is rational about that responsibility. The other is not.

Also, in a world of both rational and irrational agents, in a world where the rational agent must depend upon the irrational, it is sometimes rational to think irrationally!

comment by lessdazed · 2011-08-07T12:39:53.375Z · LW(p) · GW(p)

And this is why I am not so impressed by Eliezer's claim that an x-rationality instructor should be successful in their non-rationality life. Yes, there probably are some x-rationalists who will also be successful people. But again, correlation 0.1. Stop saying only practically successful people could be good x-rationality teachers! Stop saying we need to start having huge real-life victories or our art is useless! Stop calling x-rationality the Art of Winning! Stop saying I must be engaged in some sort of weird signalling effort for saying I'm here because I like mental clarity instead of because I want to be the next Bill Gates! It trivializes the very virtues that brought most of us to Overcoming Bias, and replaces them with what sounds a lot like a pitch for some weird self-help cult...

I think the truth is non-symmetrical: rationalism is the art of not failing, of not being stupid. I agree with you that "rationalists should win big" is not true in the sense Eliezer claims. However, rationalists should be generally above average by virtue of never failing big, never losing too much, e.g. not buying every vitamin at the health food store, not in cults, not bemoaning ancient relationships, etc.

Replies from: NicoleTedesco
comment by NicoleTedesco · 2012-01-15T15:52:58.544Z · LW(p) · GW(p)

Very good point!

comment by pjeby · 2009-04-09T05:29:39.358Z · LW(p) · GW(p)

I'm not sure if it was your intent to point this out by contrast, but I would like to point out that a reasonable art of "kicking" would not rely on you making conscious decisions, let alone explicitly rational ones. Rather, it would rely on you ensuring that your subconscious has been freed from sources of bias ahead of time, and is therefore able to safely leap to conclusions in its usual fashion. An art that requires you to think at the time things are actually happening is not much of an art.

Case in point: when reading "Stuck In The Middle With Bruce", I became aware of a subconsciously self-sabotaging behavior I'd done recently. So I "kicked" it out by crosslinking the behavior with its goal-satisfaction state. It would be crazy to wait until the next occasion for that behavior to strike, and then try to reason my way around it, when I can just fix the bloody thing in the first place. (Interestingly, I mentioned the story to my wife, and described how it related to my own behavior... and she thought of a different sort of self-sabotage she was doing, and applied the same mindhack. So, as of now, I'd say that story was one of the top 5 most valuable things I've gotten from LW.)

Now, in the case of extinguishing a behavior, there's no way you can absolutely prove you've fixed something permanently; the best you can do is show that the thought process that used to produce an autonomous response before applying a technique no longer produces the same response afterward. Also, sometimes you catch a break: you find yourself in a situation, expecting yourself to do the same old stupid thing you've been doing before, and then you find you don't need to, or notice a few seconds later that you already did something completely different, and a much better choice.

Truth is, our brains really aren't that bad at making decisions, once you take out the "priority overrides" that mess things up.

Anyway, I'm rambling a bit now. The point is, "kicking" is generally not something you do at the time -- you do it in advance of the next time....

Because your brain is faster than you are.

Replies from: None, Jonathan_Graehl, MendelSchmiedekamp, conor
comment by [deleted] · 2009-04-09T17:20:10.082Z · LW(p) · GW(p)

I voted this up, but I'm replying because I think it's a critical point.

Our brains are NOT designed to make conscious decisions about every thing that crosses our path. Trying to do that is like trying to walk everywhere instead of driving: it's technically possible, but it will take you forever and will be exhausting.

Our brains seem to work more like this: our brains process whatever it is we're doing at the time, and then feed that processed data into our subconscious for use later. Sure it jumps in every once in a while for something important, but generally it sits back and lets your subconscious do the driving.

Rationality should be about putting the best processed information down into your subconscious, so it works the way you'd like it to. Trying to do everything consciously is a poor use of your brain, as it 1) ignores the way your brain is designed to function and 2) forgoes the use of the powerful subconscious circuitry that makes up an enormous part of it.

comment by Jonathan_Graehl · 2009-04-09T19:37:35.249Z · LW(p) · GW(p)

What does "crosslinking the behavior with its goal-satisfaction state" mean? Specifically, I'm unable to guess what you mean by "crosslinking" and "the goal-satisfaction state" (of a behavior).

Replies from: pjeby
comment by pjeby · 2009-04-09T19:43:39.854Z · LW(p) · GW(p)

What does "crosslinking the behavior with its goal-satisfaction state" mean? Specifically, I'm unable to guess what you mean by "crosslinking" and "the goal-satisfaction state" (of a behavior).

More details can be found in this comment.

Replies from: roland
comment by roland · 2009-04-10T00:21:26.923Z · LW(p) · GW(p)

I had the same question as Jonathan and I've read the comment you mentioned. Where can we read/learn more about this technique?

Replies from: pjeby
comment by pjeby · 2009-04-10T01:50:06.326Z · LW(p) · GW(p)

I had the same question as Jonathan and I've read the comment you mentioned. Where can we read/learn more about this technique?

It's based on a technique called "Core Transformation", developed by Connirae Andreas and Tamara Andreas, and it's discussed in a book of the same name. (I linked to it once before when someone asked about this a few weeks ago, and was severely downmodded for some reason, so you'll have to find it yourself.)

My own version of the technique is a streamlined and stripped-down variation that removes a certain amount of superstition and ritual. (Among other things, I drop the "parts" metaphor, which some schools of NLP now consider to have been a bad idea in the first place.)

The technique works by using imagination to elicit the reward states associated with a behavior, going to higher and higher levels of abstraction to reach the top (or root?) of a person's reward tree -- usually a quasi-mystical state like inner peace, oneness, compassion, or something like that. (These "core states" are a good candidate for the "god-shaped hole" in humans, btw.)

Anyway, once you have access to such a state, it can be used as a reinforcer for alternative behaviors, as it's stronger than the diluted intermediate versions found at other levels of the person's goal tree. (More precisely, it can be used to extinguish the conditioned appetite that drives the problem behavior.)

I teach this method and use it in coaching; my wife and I also use it personally. I'd link to my own workshops and recordings on the subject as well, but since I was downmodded for referring to a site where you could buy someone else's book, I shudder to imagine what would happen if I linked to a site where you could buy my products or services. ;-)

Replies from: roland
comment by roland · 2009-04-10T02:12:44.534Z · LW(p) · GW(p)

I teach this method and use it in coaching; my wife and I also use it personally. I'd link to my own workshops and recordings on the subject as well, but since I was downmodded for referring to a site where you could buy someone else's book, I shudder to imagine what would happen if I linked to a site where you could buy my products or services. ;-)

Please post the link. And why should you be afraid of downmodding? I have been downmodded for saying things that are true (at least IMHO). Don't give that much importance to the mods!

Replies from: pjeby, Emile
comment by pjeby · 2009-04-10T02:58:10.286Z · LW(p) · GW(p)

And why should you be afraid of downmodding?

I'm not. I'm simply attempting to respect the wishes of others regarding what should or should not be posted here.

Please post the link

Googling "Core Transformation" and "Gateway of Desire" (as phrases in quotes) will get you the links. Don't be confused by something else called "Quantum Touch - Core Transformation"; it's something unrelated (thank goodness).

Replies from: MBlume, ciphergoth, roland
comment by MBlume · 2009-04-10T03:03:01.690Z · LW(p) · GW(p)

People are trying to eliminate spam. Spammers tend to include links to outside services which cost money. Thus, your providing such a link gives you the superficial appearance of a spammer, and you got downmodded accordingly. You are not a spammer, you have participated in good faith in this community, at great personal effort, and contributed many useful insights as a result. I think by now, most people are aware of this, and you should not need to worry about giving the appearance of spamming.

comment by Paul Crowley (ciphergoth) · 2009-04-10T09:32:58.445Z · LW(p) · GW(p)

http://coretransformation.org/ appears to be the main website. This Google search finds related materials. All I could find on Wikipedia was this article on Steve Andreas.

Replies from: gjm
comment by gjm · 2009-04-10T19:58:45.572Z · LW(p) · GW(p)

The fact that everything I can find on the web carefully avoids giving details and instead takes the form "We have these fantastic techniques that can solve most of your problems; sign up for our seminars and we'll teach them to you" is ... not promising.

Promising the world, giving few details, and insisting on being paid before saying anything more, seems to me to be strongly correlated with dishonesty and cultishness. Since pjeby seems like a valuable member of this community, I hope this case happens to be different; but I'd like to see some evidence.

comment by roland · 2009-04-10T03:31:35.727Z · LW(p) · GW(p)

I'm not. I'm simply attempting to respect the wishes of others regarding what should or should not be posted here.

Well, you didn't grant my wish for a simple link, I have to google now. How sad. As for the wishes of others, would you rather not post a truth than be downvoted by the majority?

comment by MendelSchmiedekamp · 2009-04-09T15:51:30.051Z · LW(p) · GW(p)

Absolutely, learning to work with your subconscious is a necessity. After all it does far more computation than your conscious mind does.

Of course, you ought to explore the techniques that let you take positive advantage of it too.

Replies from: Annoyance
comment by Annoyance · 2009-04-09T17:12:05.963Z · LW(p) · GW(p)

But it's consciously understanding and applying techniques to make your mind as a whole work better that's the heart of rationality.

By and large the 'subconscious' is outside of our ability to control. The task isn't to bring the subconscious to heel, but to establish filters through which to screen the output of our minds, discarding that which is incompatible with rational thinking.

Replies from: MendelSchmiedekamp, Steve_Rayhawk, pjeby
comment by MendelSchmiedekamp · 2009-04-09T18:57:28.327Z · LW(p) · GW(p)

Influencing your subconscious in rational ways is not easy or simple. But at the same time, simply because something is hard doesn't mean it should be discarded out of hand as a viable route to achieving your goals especially if those goals are important.

Replies from: pjeby
comment by pjeby · 2009-04-09T20:20:47.813Z · LW(p) · GW(p)

Influencing your subconscious in rational ways is not easy or simple.

How about influencing your subconscious in irrational ways? I find that much easier, myself. The subconscious isn't logical, and it doesn't "think", it's just a giant lookup table. If you store the right entries under the right keys, it does useful things. The hardest part of hacking it is that there's no "view source" button or way to get a listing of what's already in there: you have to follow associative links or try keys that have worked for other people.

Well, I say hardest, but it's not so much hard as being sometimes tedious or time-consuming. The actually changing things part is usually quite quick. If it's not, you're almost certainly doing something wrong.

Replies from: loqi, Annoyance
comment by loqi · 2009-04-10T16:21:54.484Z · LW(p) · GW(p)

The subconscious isn't logical, and it doesn't "think", it's just a giant lookup table.

I'm suspicious of this characterization. I've made a couple surprising subconscious deductions in the past, and they forcefully reminded me that there's a very complex human brain down there doing very complex brain things on the sly all the time. You may have learned some tricks to manipulate it, but I'd be surprised if you've done more than scratch the surface if you really just consider it to be a simple lookup table.

Replies from: pjeby
comment by pjeby · 2009-04-10T16:36:17.041Z · LW(p) · GW(p)

I didn't say it was a simple lookup table. It's indexed in lots of non-trivial ways; see e.g. my post here about "Spock's Dirty Little Secret". I just said that fundamentally, it's a lookup table.

I also didn't say it's not capable of complex behavior. A state machine is "just a lookup table", and that in no way diminishes its potential complexity of behavior.

When I say the subconscious doesn't "think", I specifically mean that if you point your built-in "mind projection" at your subconscious, you will misunderstand it, in the same way that people end up believing in gods and ghosts: projecting intention where none exists.

This is a major misunderstanding -- if not THE major misunderstanding -- of the other-than-conscious mind. It's not really a mind, it's a "Chinese room".

That doesn't mean we don't have complex behavior or can't do things like self-sabotage. The mistake is in projecting personhood onto our self-sabotaging behaviors, rather than seeing the state machine that drives them: condition A triggers appetite B leading to action C. There's no "agency" there, no "mind". So if you use an agency model (including Ainslie's "interests" to some extent), you'll take incorrect approaches to change.

But if you realize it's a state machine, stored in a lookup table, then you can change it directly. And for that matter, you can use it more effectively as well. I've been far more creative and better at strategy since I learned to engage my creative imagination in a mechanical way, rather than waiting for the muse to strike.
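
A minimal sketch of the "state machine stored in a lookup table" picture (illustrative only; the keys and responses below are invented for the example):

```python
# "Condition A triggers appetite B leading to action C" as a literal lookup
# table. Keys and responses are invented, purely for illustration.
habits = {
    ("deadline looming", "anxious"): "check email instead of working",
    ("deadline looming", "calm"): "start on the hardest task",
}

def react(condition: str, feeling: str) -> str:
    return habits.get((condition, feeling), "no stored response")

print(react("deadline looming", "anxious"))  # the old self-sabotaging entry

# "Fixing" the behavior is overwriting one entry, not negotiating with an agent.
habits[("deadline looming", "anxious")] = "start on the hardest task"
print(react("deadline looming", "anxious"))
```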

Meanwhile, it'd also be a mistake to think of it as a single lookup table; it includes many things that seem to me like specialized lookup tables. However, they are accessible through the same basic "API" of the senses, so I don't worry about drawing too fine of a distinction between the tables, except insofar as how they appear relates to specific techniques.

Replies from: MendelSchmiedekamp
comment by MendelSchmiedekamp · 2009-04-10T18:30:12.909Z · LW(p) · GW(p)

I look forward to seeing where your model goes as it becomes more nuanced. Among other things, I'm very curious about how your model takes into account actual computations (for example finding answers to combinatorial puzzles) that are performed by the subconscious.

Replies from: pjeby
comment by pjeby · 2009-04-10T18:38:52.366Z · LW(p) · GW(p)

I'm very curious about how your model takes into account actual computations (for example finding answers to combinatorial puzzles) that are performed by the subconscious.

What, you mean like Sudoku or something?

Replies from: MendelSchmiedekamp
comment by MendelSchmiedekamp · 2009-04-10T18:53:52.666Z · LW(p) · GW(p)

Sudoku would be one example. I meant generally puzzles or problems involving search spaces of combinations.

Replies from: pjeby
comment by pjeby · 2009-04-10T19:23:39.191Z · LW(p) · GW(p)

Well, I'll use sudoku since I've experienced both conscious and unconscious success at it. It used to drive me nuts how my wife could just look at a puzzle and start writing numbers, on puzzles that were difficult enough that I needed to explicitly track possibilities.

Then, I tried playing some easy puzzles on our Tivo, and found that the "ding" reward sound when you completed a box or line made it much easier to learn, once I focused on speed. I found that I was training myself to recognize patterns and missing numbers, combined with efficient eye movement.

I'm still a little slower than my wife, but it's fascinating to observe that I can now tell the available possibilities for larger and larger numbers of spaces without consciously thinking about it. I just look at the numbers and the missing ones pop into my head. Over time, this happens less and less consciously, such that I can just glance at five or six numbers and know what the missing ones are without a conscious step.

This doesn't require a complex subconscious; it's sufficient to have a state machine that generates candidate numbers based on seen numbers and drops candidates as they're seen. It might be more efficient in some sense to cross off candidates from a master list, except that the visualization would be more costly. One thing about how visualization works is that it takes roughly the same time to visualize something in detail as it does to look at it... which means that visualizing nine numbers would take about the same amount of time as it would for you to scan the boxes. Also, I can sometimes tell my brain is generating candidates while I scan... I hear them auditorially verbalized as the scan goes, although it's variable at what point in the scan they pop up; sometimes it's early and my eyes scan forward or back to double check.
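
The candidate-generation step is simple enough to write out; a minimal sketch (illustrative only, not anything from the original comment):

```python
# Given the digits already visible in a Sudoku row, column, or box,
# the candidates are just the complement of what has been seen.
def missing_digits(seen):
    return sorted(set(range(1, 10)) - set(seen))

print(missing_digits([5, 3, 7, 1, 9, 8]))  # [2, 4, 6]
```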

Is this the sort of thing you're asking about?

Replies from: MendelSchmiedekamp
comment by MendelSchmiedekamp · 2009-04-11T14:10:24.272Z · LW(p) · GW(p)

It seems that our models are computationally equivalent. After all, a state machine with arbitrary extensible memory is Turing complete, and with adaptive response to the environment it is a complex adaptive system, what-ever model you have of it.

I have spent a great deal of time and reasoning on developing models of people in such a way. So, with my cognitive infrastructure, it makes more sense to model appropriately complex adaptive systems as people-like systems. Obviously you are more comfortable with computer science models.

But the danger with models is that they are always limiting in what they can reveal.

In the case of this example, I find it unsurprising that while you have extended the lookup table to include the potential to reincorporate previously seen solutions, you avoid the subject of novel solutions being generated, even by a standard combinatorial rule. I suspect this is one particular shortcoming of the lookup-table basis for modeling the subconscious.

I suspect my models have similar problems, but it's always hardest to see them from within.

Replies from: pjeby, Annoyance
comment by pjeby · 2009-04-11T14:50:20.628Z · LW(p) · GW(p)

After all, a state machine with arbitrary extensible memory is Turing complete, and with adaptive response to the environment it is a complex adaptive system, what-ever model you have of it.

Of course. But mine is a model specifically oriented towards being able to change and re-program it -- as well as understanding more precisely how certain responses are generated.

One of the really important parts of thinking in terms of a lookup table is that it simplifies debugging. That is, one can be taught to "single-step" the brain, and identify the specific lookup that is causing a problem in a sequence of thought-and-action.

How do you do that with a mind-projection model?

So, with my cognitive infrastructure, it makes more sense to model appropriately complex adaptive systems as people-like systems. Obviously you are more comfortable with computer science models.

The problem with modeling one's self as a "person", is that it gives you wrong ideas about how to change, and creates maladaptive responses to unwanted behavior.

Whereas, with my more "primitive" model:

  1. I can solve significant problems of myself or others by changing a conceptually-single "entry" in that table, and

  2. The lookup-table metaphor depersonalizes undesired responses in my clients, allowing them to view themselves in a non-reactive way.

Personalizing one's unconscious responses leads to all kinds of unuseful carry-over from "adversarial" concepts: fighting, deception, negotiation, revenge, etc. This is very counterproductive, compared to simply changing the contents of the table.

Interestingly, this is one of the metaphors that I hear back from my clients the most, referencing personal actions to change. That is, AFAICT, people find it tremendously empowering to realize that they can develop any skill or change any behavior if they can simply load or remove the right data from the table.

In the case of this example, I find it unsurprising that while you have extended the look up table to include the potential to reincorporate previously seen solutions, you avoid the subject of novel solutions being generated, even by a standard combinatorial rule.

Of course novel solutions can be generated -- I do it all the time. You can pull data out of the system in all sorts of ways, and then feed it back in. For talking about that, I use search-engine or database metaphors.

Replies from: MendelSchmiedekamp
comment by MendelSchmiedekamp · 2009-04-11T15:49:15.047Z · LW(p) · GW(p)

I'm not talking about a mind projection model, I'm talking about using information models constructed and vetted to effectively model people as a foundation for a different model of a part of a person.

I've modeled my subconscious in a similar manner before, I've gained benefits from it not unlike some you describe. I've even gone so far as to model up to sub-processor levels of capabilities and multi-threading. At the same time I was developing the Other models I mentioned, but they were incomplete.

Then during adolescence I refined my Other models well enough for them to start working. I can go more into that later, but as time went on it became clear that computation models simply didn't let me pack enough information in my interactions with my subconscious, so I needed a more information rich model. That is what I'm talking about.

So bluntly, but honestly, I feel what you're describing is, at best, what an eight year old should be doing to train their subconscious. But mostly I'm hoping you'll be moving forward.

Of course novel solutions can be generated -- I do it all the time. You can pull data out of the system in all sorts of ways, and then feed it back in. For talking about that, I use search-engine or database metaphors.

Search-engines and databases don't produce novel solutions on their own, even in the sense of a combinatorial algorithm. And certainly not in the sense of more creative innovation. There are many anecdotes claiming the subconscious can incorporate more dimensions in problem solving than the conscious - some more poetic than others (answers coming in dreams or in showers) - so it seems dangerous to simply disregard it.

Replies from: pjeby
comment by pjeby · 2009-04-11T17:20:31.661Z · LW(p) · GW(p)

So bluntly, but honestly, I feel what you're describing is, at best, what an eight year old should be doing to train their subconscious. But mostly I'm hoping you'll be moving forward.

Bluntly, but honestly, I think you'd be better off describing more precisely what model you think I should be using, and what testable benefits it provides. I'm always willing to upgrade, if a model lets me do something faster, easier, quicker to teach, etc. -- Just give me enough information to reproduce one of your techniques and I'll happily try it.

Replies from: MendelSchmiedekamp
comment by MendelSchmiedekamp · 2009-04-11T17:48:57.650Z · LW(p) · GW(p)

I said what I meant there. It's a feeling. Which combined with lacking a personalized model of your cognitive architecture makes it foolish for me to suggest a specific replacement model. My comment about deep innovation is intended to point you towards one of the blind spots of your current work (which may or may not be helpful).

I've been somewhere similar a long time ago, but I was working on other areas at the same time, which have led me to the models I use now. I sincerely doubt that that same avenue will work for you. Instead, I suggest you cultivate a skepticism of your work, plan a line of retreat, and start delving into the dark corners.

As an aside: if you want a technique - using a model close to yours - consider volitional initiation of a problem on your subconscious "backburner" to get an increased response rate. You tie the problem into the subconscious processing, set up an association trigger to check on it sometime later and then remove all links that would pull it back to the consciousness. You can then test the performance of this method versus standard worrying a problem or standard forgetting a problem using a diary method.

Using a more nuanced model, you can get much better results, but this should suffice to show you something of what I mean.

Replies from: pjeby
comment by pjeby · 2009-04-11T21:33:00.939Z · LW(p) · GW(p)

consider volitional initiation of a problem on your subconscious "backburner" to get an increased response rate. You tie the problem into the subconscious processing, set up an association trigger to check on it sometime later and then remove all links that would pull it back to the consciousness. You can then test the performance of this method versus standard worrying a problem or standard forgetting a problem using a diary method.

I've been doing that for about 24 years now. I fail to see how it has relevance to the model of mind I use for helping people change beliefs and behaviors. Perhaps you are assuming that I need to have ONE model of mind that explains everything? I don't consider myself under such a constraint. Note, too, that autonomous processing isn't inconsistent with a lookup-table subconscious. Indeed, autonomous processing independent of consciousness is the whole point of having a state-machine model of brain function. Consciousness is an add-on feature, not the point of having a brain.

Meanwhile, the rest of your comment was extraordinarily unhelpful; it reminds me of Eliezer's parents telling him, "you'll give up your childish ideas as soon as you get older".

Replies from: MendelSchmiedekamp
comment by MendelSchmiedekamp · 2009-04-11T21:53:31.729Z · LW(p) · GW(p)

I've been doing that for about 24 years now.

Good. It seemed the next logical step considering what you were describing as your model. It's also very promising that you are not trying to have a singular model.

Meanwhile, the rest of your comment was extraordinarily unhelpful; it reminds me of Eliezer's parents telling him, "you'll give up your childish ideas as soon as you get older".

Which at least is useful data on my part. Developing meta-cognitive technology means having negative as well as positive results. I do appreciate you taking the time to discuss things, though.

comment by Annoyance · 2009-04-11T14:17:21.495Z · LW(p) · GW(p)

Any computational process can be emulated by a sufficiently complicated lookup table. We could, if we wished, consider the "conscious mind" to be such a table.

Dismissing the unconscious because it's supposedly a lookup table is thus wrong in two ways: firstly, it's not implemented as such a table, and secondly, even if it were, that puts no limitations, restrictions, or reductions on what it's capable of doing.

The original statement in question is not just factually incorrect, but conceptually misguided, and the likely harm to the resulting model's usefulness incalculable.

comment by Annoyance · 2009-04-10T14:41:48.082Z · LW(p) · GW(p)

"The subconscious isn't logical, and it doesn't "think", it's just a giant lookup table."

Of all your errors thus far, those two are your most damaging.

Replies from: Steve_Rayhawk, MendelSchmiedekamp
comment by Steve_Rayhawk · 2009-04-10T16:48:43.739Z · LW(p) · GW(p)

I agree that the subconscious isn't just a giant lookup table, and that many people who make this error use it to justify practices which destroy other people's minds. But there are some important techniques of making the subconscious work better that are hard to invent unless you imagine that the subconscious is mostly a giant lookup table. pjeby uses these techniques in his practice. Do you deny pjeby's data that these techniques work? Do you even know which data made pjeby want to write "it's just a giant lookup table"? If you do know which data made pjeby want to write that, do you mean that it was wrong for him to write "the subconscious is just a giant lookup table" and not "the subconscious is mostly like just a giant lookup table"?

I feel like you don't think through the real details of what other people are thinking and how those details would have to actually interact with the high standards you have for the thoughts of those people. All you do is tell them that you think something they did means they broke a rule.

Replies from: gjm
comment by gjm · 2009-04-10T20:12:42.675Z · LW(p) · GW(p)

pjeby has provided very little data. He's claimed that his techniques work. He's described them in terms that (1) are supremely vague about what he actually does, and (2) seem to imply that he has gained the ability to change all sorts of things about the behaviour of the unconscious bits of his brain more or less at will.

There have been other people and groups that have made similar claims about their techniques. For instance, the Scientologists (though their claims about what they can do are more outlandish than pjeby's).

None of this means that pjeby is wrong, still less that he's not being honest with us: but it means that an appeal to "pjeby's data" is a bit naive. All we have so far -- unless there are gems hidden in threads I haven't read, which of course there might be -- are his claims.

comment by MendelSchmiedekamp · 2009-04-10T16:18:03.298Z · LW(p) · GW(p)

Annoyance has a point here. A look-up table is a very limiting model for a subconscious.

What is the benefit you gain by assuming that there is no organizing structure, whether or not it is known to you, within your subconscious?

Personally, I prefer a continually evolving model, updating with experience and observations. With periodic sanity checks of varying scales of severity. Not unlike how I model people.

Of course this lends a resulting bias that I treat my subconscious a bit like a person, with encouragement, care, and deals. This can also lend positive outcomes like running subconscious mental operations for long term problem solving (a more active and volitional version of waiting for inspiration to strike) and encouraging those operations to have appropriate tracebacks to make it easier for me to consciously verify them.

Not sure if that would work for other folks though, cognitive infrastructure may vary.

comment by Steve_Rayhawk · 2009-04-09T19:50:20.477Z · LW(p) · GW(p)

The task isn't to bring the subconscious to heel,

Right.

but to establish filters through which to screen the output of our minds,

No. More is possible:

it's consciously understanding and applying techniques to make your mind as a whole work better that's the heart of rationality.

Is the rational person subject to "March winds"?

comment by pjeby · 2009-04-09T20:16:26.686Z · LW(p) · GW(p)

By and large the 'subconscious' is outside of our ability to control.

Speak for yourself. ;-)

The task isn't to bring the subconscious to heel, but to establish filters through which to screen the output of our minds, discarding that which is incompatible with rational thinking.

That's wasteful and inefficient.

Bear in mind that there are two kinds of bias in the brain: hardware and software. The hardware biases cause software biases to get added, but those biases can also be removed, thereby eliminating the need to work around them.

Conversely, for "hard" biases that can't be removed, much of the implementation of workarounds can be created by installing compensating biases.

And it isn't even that complicated -- given appropriate (i.e. fast and unequivocal) feedback, the brain can make the software revisions on its own, without any complex conscious processes involved.

comment by Conor (conor) · 2021-06-13T05:06:59.630Z · LW(p) · GW(p)

What are the other posts in your top five?

comment by HughRistik · 2009-04-10T05:38:26.019Z · LW(p) · GW(p)

And for this post, I use "benefits" or "practical benefits" to mean anything not relating to philosophy, truth, winning debates, or a sense of personal satisfaction from understanding things better. Money, status, popularity, and scientific discovery all count.

In my life, I've used rationality to tackle some pretty tough practical problems. The type of rationality I have been successful with hasn't been the debiasing program of Overcoming Bias, yet I have been applying scientific thinking, induction, and heuristic to certain problems in ways that are atypical for the category of people you are calling normal rationalists. I don't know whether to call this "x-rationality" or not, partly because I'm not sure the boundaries between rationality and x-rationality are always obvious, but it's certainly more advanced rationality than what people usually apply in the domains below.

On a general level, I've been studying how to get good (or at least, dramatically better) at things. Here are some areas where I've been successful using rationality:

  • Recovering from social anxiety disorder and depression
  • Social skills
  • Fashion sense
  • Popularity / social status in peer group
  • Dating

I'm not necessarily using success to mean mastery, but rather around 1-2 standard deviations of improvement from where I started.

I do find it interesting that many people are not achieving practical benefits from their studies of more advanced rationalities. I agree with you that akrasia is a large factor in why they do not get significant practical benefits out of rationality. I am going to hypothesize an additional factor:

The practical benefits of x-rationality are constrained because students of x-rationality (such as the Overcoming Bias / Less Wrong schools of thought) focus on critical rationality, yet critical rationality is only good for solving certain types of problems.

In my post on heuristic, I drew a distinction between what I'm calling "critical rationality" (consisting of logic, skepticism, and bias-reduction) and "creative rationality" (consisting of heuristic and inference). Critical rationality concerns itself with idea validation, while creative rationality concerns itself with idea creation (specifically, of ideas that map onto the territory).

Critical rationality is necessary to avoid many mistakes in life (e.g. spending all your money on lottery tickets, high-interest credit card debt, Scientology), yet perhaps it runs into diminishing returns for success in most people's lives. For developing new ideas and skills that would lead people to success above a mundane level, critical rationality is necessary but not sufficient, and creative rationality is also required.

Replies from: MBlume, AnnaSalamon, Lethalmud
comment by MBlume · 2009-04-10T06:03:56.547Z · LW(p) · GW(p)

I would absolutely love to see the development of a rational art of dating. If you've more to say on this I'll definitely look forward to reading it.

Replies from: mattnewport, wedrifid
comment by mattnewport · 2009-04-10T06:13:53.989Z · LW(p) · GW(p)

This is largely the basis of the whole online sub-community of 'Game' and the 'Seduction Community'. It may well fall under what Eliezer refers to as 'the dark arts' but many participants are fairly explicit about applying a rational/scientific approach to success with women.

Replies from: HughRistik, MBlume, AnnaSalamon, PhilosophyTutor
comment by HughRistik · 2009-04-10T06:36:39.094Z · LW(p) · GW(p)

I am highly familiar with the seduction community, and I've learned a lot from it. It's like extra-systemized folk psychology. It has certain elements of a scientific community, yet it is vulnerable to ideologies developing out of:

(a) bastardized versions of evolutionary psychology being thrown around like the proven truth, often leading to cynical and overgeneralized views of female behavior and preferences and/or overly narrow views of what works,

(b) financial biases,

(c) lack of rigor, because controlled experiments are not yet possible in this field (though I would never suggest that people wait until science catches up and gives us rigorous empirical knowledge before trying to improve their dating lives... who knows how long we will have to wait).

Yet there is promise for the community, because it's beholden to real-world results. Its descriptions and prescriptions seem to have been improving, and it has gone through a couple of paradigm shifts since the mid-'80s.

Replies from: mattnewport, AnnaSalamon
comment by mattnewport · 2009-04-10T06:49:49.243Z · LW(p) · GW(p)

I've also learned some useful things from my more limited familiarity with the community. I'd tend to agree with your criticisms but I think the emphasis on rigorous 'field testing' and on 'doing what works' in much of the community shows some common ground with general efforts at rationality. As you say, this is an area (like many areas of day to day life) that is not easily amenable to controlled scientific experiment for a number of reasons but one of the lessons of Bayesian thinking/'x-rationality' that I've found useful is the emphasis on being comfortable with uncertainty, fuzzy evidence and making the best decisions given limited information.

It's treacherous terrain for anyone seeking truth since, like investment or financial advice or healthcare, there is a lot of noise along with the signal. It's certainly an interesting area with many cross-currents to those interested in applying rationality though.

comment by AnnaSalamon · 2009-04-10T06:37:53.431Z · LW(p) · GW(p)

Do you think it would benefit from knowing some of the OB/LW rationality techniques?

Or from the general OB/LW picture, where inference is a thing that happens in material systems, and that yields true conclusions, when it does, for non-mysterious reasons that we can investigate and can troubleshoot?

Replies from: pjeby, mattnewport
comment by pjeby · 2009-04-10T15:11:26.682Z · LW(p) · GW(p)

Or from the general OB/LW picture, where inference is a thing that happens in material systems, and that yields true conclusions, when it does, for non-mysterious reasons that we can investigate and can troubleshoot?

One problem with interfacing formal/mathematical rationality with any "art that works", whether it's self-help or dating, is that when people are involved, there are feed-forward and feed-back effects, similar to Newcomb's problem, in a sense. What you predict will happen makes a difference to the outcome.

One of the recent paradigm shifts that's been happening in the last few years in the "seduction community" is the realization that using routines and patterns leads to state-dependence: that is, to a guy's self-esteem depending on the reactions of the women he's talked to on a given night. This has led to the rise of the "natural" movement: copying the beliefs and mindsets of guys who are naturally good with women, rather than the external behaviors of guys who are good with women.

Now, I'm not actually involved in the community; I'm quite happily married. However, I pay attention to developments in that field because it has huge overlap with the self-help field, and I've gotten many insights about how status perception can influence your behavior -- even when there's nobody else in the room but yourself.

I wandered off point a little there, so let me try and bring it back. The OB/LW approach to rationality -- at least as I've seen it -- is extremely "outside view"-oriented when it comes to people. There's lots of writing about how people do this or that, rather than looking at what happens with one individual person, on the inside.

Whereas the "arts that work" are extremely focused on an inside view, and actually learning them requires a dedication to action over theory, and taking that action whether you "believe" in the theory or not. In an art that works, the true function of a theory is to provide a convincing REASON for you to take the action that has been shown to work. The "truth" of that theory is irrelevant, so long as it provides motivation and a usable model for the purposes of that art.

When I read self-help books in the past, I used to ignore things if I didn't agree with their theories or saw holes in them. Now, I simply TRY what they say to do, and stick with it until I get a result. Only then do I evaluate. Anything else is idiotic, if your goal is to learn... and win.

Is that compatible with the OB/LW picture? The top-down culture here appears to be one of using science and math -- not real-world performance or self-experimentation.

Replies from: AnnaSalamon, Vladimir_Nesov
comment by AnnaSalamon · 2009-04-10T17:39:01.611Z · LW(p) · GW(p)

In an art that works, the true function of a theory is to provide a convincing REASON for you to take the action that has been shown to work. The "truth" of that theory is irrelevant, so long as it provides motivation and a usable model for the purposes of that art.... Is that compatible with the OB/LW picture? The top-down culture here appears to be one of using science and math -- not real-world performance or self-experimentation.

Experimenting, implementing, tracking results, etc. is totally compatible with the OB/LW picture. We haven't built cultural supports for this all that much, as a community, but we really should, and, since it resonates pretty well with a rationalist culture and there're obvious reasons to expect it to work, we probably will.

Claiming that a particular general model of the mind is true, just because you expect that claim to yield good results (and not because you have the kind of evidence that would warrant claiming it as "true in general"), is maybe not so compatible. As a culture, we LW-ers are pretty darn careful about what general claims we let into our minds with the label "true" attached. But is it really so important that your models be labeled "true"? Maybe you could share your models as thinking gimmicks: "I tend to think of the mind in such-and-such a way, and it gives me useful results, and this same model seems to give my clients useful results", and share the evidence about how a given visualization or self-model produces internal or external observables? I expect LW will be more receptive to your ideas if you: (a) stick really carefully to what you've actually seen, and share data (introspective data counts); (b) label your "believe this and it'll work" models as candidate "believe this and it'll work" models, without claiming the model as the real, fully demonstrated as true, nuts and bolts of the mind/brain.

In other words: (1) hug the data, and share the data with us (we love data); and (2) be alert to a particular sort of cultural collision, where we'll tend to take any claims made without explicit "this is meant as a pragmatically useful working self-model" tags as meant to be actually true rather than as meant to be pragmatically useful visualizations/self-models. If you actually tag your models with their intended use ("I'm not saying these are the ultimate atoms the mind is made of, but I have reasonably compelling evidence that thinking in these terms can be helpful"), there'll be less miscommunication, I think.

Replies from: pjeby
comment by pjeby · 2009-04-10T18:36:16.914Z · LW(p) · GW(p)

we'll tend to take any claims made without explicit "this is meant as a pragmatically useful working self-model" tags as meant to be actually true rather than as meant to be pragmatically useful visualizations/self-models.

Yeah, I've noticed that, which is why my comment history contains so many posts pointing out that I'm an instrumental rationalist, rather than an epistemic one. ;-)

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-04-10T19:44:14.790Z · LW(p) · GW(p)

I'm not sure it's about being an epistemic vs. an instrumental rationalist, so much as about tagging your words so we can follow what you mean.

Both people interested in deep truths, and people interested in immediate practical mileage, can make use of both "true models" and "models that are pragmatically useful but that probably aren't fully true".

You know how a map of north America gives you good guidance for inferences about where cities are, and yet you shouldn't interpret its color scheme as implying that the land mass of Canada is uniformly purple? Different kinds of models/maps are built to allow different kinds of conclusions to be drawn. Models come with implicit or explicit use-guidelines. And the use-guidelines of “scientific generalizations that have been established for all humans” are different than the use-guidelines of “pragmatically useful self-models, whose theoretical components haven’t been carefully and separately tested”. Mistake the latter for the former, and you’ll end up concluding that Canada is purple.

When you try to share techniques with LW, and LW balks... part of the problem is that most of us LW-ers aren’t as practiced in contact-with-the-world trouble-shooting, and so "is meant as a working model" isn't at the top of our list of plausible interpretations. We misunderstand, and falsely think you’re calling Canada purple. But another part of the problem is it isn’t clear that you’re successfully distinguishing between the two sorts of models, and that you have separated out the parts of your model that you really do know and really can form useful inferences from (the distances between cities) from the parts of your model that are there to hold the rest in place, or to provide useful metaphorical traction, but that probably aren’t literally true. (Okay, I’m simplifying with the “two kinds of models” thing. There’s really a huge space of kinds of models and of use-guidelines matched to different kinds of models, and maybe none of them should just be called “true”, without qualification as to the kinds of use-cases in which the models will and won’t yield true conclusions. But you get the idea.)

comment by Vladimir_Nesov · 2009-04-10T20:58:02.630Z · LW(p) · GW(p)

In an art that works, the true function of a theory is to provide a convincing REASON for you to take the action that has been shown to work. The "truth" of that theory is irrelevant, so long as it provides motivation and a usable model for the purposes of that art.... Is that compatible with the OB/LW picture? The top-down culture here appears to be one of using science and math -- not real-world performance or self-experimentation.

Trying to interpret this charitably, I'll suggest a restatement: what you call a "theory" is actually an algorithm that describes the actions that are known to achieve the required results. In the normal use of the words, a theory is an epistemic tool, leading you to come to know the truth, and a reason for doing something is an explanation of why that something achieves the goals. Terminologically mixing an opaque heuristic with reason and knowledge is a bad idea; in the quotation above, the word "reason", for example, connotes rationalization more than anything else.

Replies from: pjeby
comment by pjeby · 2009-04-11T01:57:56.932Z · LW(p) · GW(p)

what you call a "theory" is actually an algorithm that describes the actions that are known to achieve the required results.

No, I'm using the term "theory" in the sense of "explanation" and "as opposed to practice". The theory of a self-help school is the explanation(s) it provides that motivate people to carry out whatever procedures that school uses, by providing a model that helps them make sense of what their problems are, and what the appropriate methods for fixing them would be.

In the normal use of the words, a theory is an epistemic tool, leading you to come to know the truth, and a reason for doing something is an explanation of why that something achieves the goals.

I don't see any incompatibility between those concepts; per de Bono (Six Thinking Hats, lateral thinking, etc.), a theory is a "proto-truth" rather than an "absolute truth": something we treat as if it were true until something better is found.

Ideally, a school of self-help should update its theories as evidence changes. Generally, when I adopt a technique, I provisionally adopt whatever theory was given by the person who created the technique, unless I already have evidence that the theory is false, or have a simpler explanation based on my existing knowledge.

Then, as I get more experience with a technique, I usually find evidence that makes me update my theory for why/how that technique works. (For example, I found that I could discard the "parts" metaphor of Core Transformation and still get it to work, ergo falsifying a portion of its original theoretical model.)

Also, I sometimes read about a study that shows a mechanism of mind that could plausibly explain some aspect of a technique. Recently, for example, I read some papers about "affective asynchrony", and saw that it not only experimentally validated some of what I've been doing, but that it provided a clearer theoretical model for certain parts of it. (Clearer in the sense of providing a more motivating rationale, and not just because I can point to the papers and say, "see, science!")

Similar thing for "reconsolidation" -- it provides a clear explanation for something that I knew was required for certain techniques to work (experiential access to a relevant concrete memory), but had no "theoretical" justification for. (I just taught this requirement without any explanation except "that's how these techniques work".)

There seems to be a background attitude on LW though, that this sort of gradual approximation is somehow wrong, because I didn't wait for a "true" theory in a peer-reviewed article before doing anything.

In practice, however, if I waited for the theory to be true instead of useful, I would never have been able to gather enough experience to make good theories in the first place.

comment by mattnewport · 2009-04-10T07:20:10.520Z · LW(p) · GW(p)

One common theme is recognizing when your theories aren't working and updating in light of new evidence. Many people are so sure that their beliefs about what 'should' work when it comes to dating are correct that they will keep trying and failing without ever considering that maybe their underlying theory is wrong. A common exercise used in the community to break out of these incorrect beliefs is to force yourself to go out and try things that 'can't possibly work' 10 times in a day, and then every day for a week or a month, until the false belief is banished.

I actually think the LW crowd could learn something from this approach - sometimes all the argument in the world is not as convincing as repeated confrontations with real world results. When it comes to changing behaviour (a key aspect of allowing rationality to improve results in our lives), rational argument is not usually the most effective technique. Rational argument may establish the need for change and the pattern for new behaviour but the most effective way to change behavioural habits is to just start consciously doing the new behaviour until it becomes a habit.

comment by MBlume · 2009-04-10T06:40:01.526Z · LW(p) · GW(p)

In any rational art of dating in which I would be interested, "winning" would be defined to include, indeed to require, respect for the happiness, well-being, and autonomy of the pursued. I don't know enough about these sub-communities to say whether they share that concern -- what is the impression you've gotten?

Replies from: mattnewport, roland
comment by mattnewport · 2009-04-10T07:12:04.821Z · LW(p) · GW(p)

Many but by no means all in the community share that concern. I'm finding it interesting to note my own reluctance to link to some of the material since even among those who do share that concern there is discussion of some techniques that might be considered objectionable. One of the cornerstones of much of the material is that people are so conditioned by conventional beliefs about what 'should' work that they are liable to find what actually does work highly counter-intuitive at first. Reactions to the challenging of strongly held beliefs can be equally strong and I've often observed this in comment threads on the material.

The most mainstream introduction to the community is probably "The Game" by Neil Strauss. I'm not sure it's the best starting point from the point of view of connections to rationality but it's an entertaining read if nothing else.

I certainly believe it's possible to benefit from some of the ideas while maintaining your definition of 'winning' but equally there are some parts of the community which are less appealing.

comment by roland · 2009-04-10T07:46:44.215Z · LW(p) · GW(p)

I have extensive knowledge of this subject and I would say that the techniques are value-neutral. To make an analogy, think of Cialdini's science of influence and persuasion (http://en.wikipedia.org/wiki/Robert_Cialdini).

What Evolutionary Psychology, Cialdini and others showed is that we humans can be quite primitive and react in certain predetermined ways to certain stimuli. The dating community has investigated the right stimuli for women and figured out the way to "get" her. You have to push the right buttons in the right order, and we males are no different (although the type of buttons is different).

In other words, what you learn in the dating community will teach you how to win the hearts of women. It's up to you how to use this skillset (yes, it's a skillset) IF you manage to acquire it, which, by the way, is not easy at all. It's just a technique; you can use it for good or bad, although admittedly it lends itself more to selfish purposes IMHO.

Btw, women are also very selfish creatures, so don't make the mistake of holding yourself to too high a moral standard.

I also think that you might be misguided in that you start with the wrong assumption about what dating is all about. Evolutionarily speaking, dating, alias mating, is not about making the other person better off. On the contrary, having kids is mostly a disadvantage for the parents, but most people do it anyway because we have this desire to have kids. Rationally speaking, we would all probably be better off without them. Of course, if you factor in emotions it becomes more complicated.

Also there is a fundamental difference between males and females. Males don't get pregnant; they want to have as much sex (pleasure) with as many partners as possible. Women get pregnant (at least before birth control was invented) and so their emotional circuitry is designed to be extremely selective about which males they will have sex with. Also they want their males to stick around as long as possible (to help them take care of the offspring). So you have to be aware that there is a fundamental difference in the objectives of the two which will make it extremely difficult or impossible to make BOTH happy at the same time. In practice usually one will suffer and/or have to concede some ground, and it's usually the "weaker" one. Weak in this context means the one with fewer options in dating. Usually women are stronger in this respect, so the dating community is essentially a way to empower males.

This is getting long, I could write more, if you guys are interested I could start a post on this topic.

Replies from: HughRistik, moshez, ciphergoth
comment by HughRistik · 2009-04-10T17:44:23.201Z · LW(p) · GW(p)

In general, I would agree that the teachings are value-neutral. Yet some of these tools are more conducive towards negative uses, while others are more conducive towards positive uses.

I also think that you might be misguided in that you start with the wrong assumption about what dating is all about. Evolutionarily speaking, dating, alias mating, is not about making the other person better off.

It's true that people are not adapted to necessarily make each other optimally happy. Yet in spite of this, our skills give us the capability to find solutions that make both people at least somewhat happy.

So in my case, winning is "defined to include, indeed to require, respect for the happiness, well-being, and autonomy of the pursued," as MBlume puts it.

Also there is a fundamental difference between males and females.

Yes, but the description in your post is contaminated by the oversimplified presumptions about evolutionary psychology that circulate in the community. I think you would get a lot out of reading real evolutionary psychologists directly, not just reading popularizations, or what the community says evolutionary psychologists are saying. I can find some cites when I'm at home.

Males don't get pregnant; they want to have as much sex (pleasure) with as many partners as possible.

Typically, males are more oriented towards seeking multiple partners than women, yet that doesn't mean that they want "as many partners as possible." Some males are wired for short-term mating strategies, and other males are more wired for long-term mating strategies.

Women get pregnant (at least before birth control was invented) and so their emotional circuitry is designed to be extremely selective about which males they will have sex with.

Yes, and this is well-demonstrated experimentally. I don't have the citations on hand because I'm not at home, but a guy named Fisman has done some interesting work in this area.

Also they want their males to stick around as long as possible (to help them take care of the offspring).

Yet this is again oversimplified, because some present day females follow short-term mating strategies and do not necessarily want males to stick around.

So you have to be aware that there is a fundamental difference in the objectives of the two which will make it extremely difficult or impossible to make BOTH happy at the same time.

True, though pretty good compromises exist. In a lot of cases, dating is like a Prisoner's Dilemma (though many other payoff matrices are possible). Personally, what I like the most about the community is that it gives me the tools to play C while simultaneously raising the chance that the other person will play C.
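To make the payoff structure concrete, here is a minimal illustrative sketch (the specific numbers are invented; only the ordering of the outcomes matters):

```python
# Illustrative Prisoner's-Dilemma-style payoffs for a single interaction,
# as (my payoff, their payoff). "C" = cooperate (invest honestly in the
# other person's happiness), "D" = defect. The numbers are made up; what
# matters is the ordering: each side is tempted to defect, but mutual
# cooperation beats mutual defection.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# "Tools to play C while raising the chance the other person plays C"
# amounts to steering the interaction from (D, D) toward (C, C).
print(payoffs[("C", "C")], payoffs[("D", "D")])
```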

Even when happiness for both people can't be achieved, it's at least possible for both people to treat each other with respect, even if someone can't give the other person what they would want.

This is getting long, I could write more, if you guys are interested I could start a post on this topic.

Sure, I would find it interesting.

comment by moshez · 2012-02-14T18:27:29.837Z · LW(p) · GW(p)

I'm not really sure how you can claim "techniques are value-neutral" without assuming what values are. For example, if my values contain a term for someone else's self-esteem, a technique that lowers their self-esteem is not value-neutral. If my values contain a term for "respecting someone else's requests", techniques for overcoming LMR are not value-neutral. Since I've only limited knowledge of the seduction techniques advanced by the community, I did not offer more -- after seeing some of the techniques, I decided that they are decidedly not value neutral, and therefore chose to not engage in them.

comment by Paul Crowley (ciphergoth) · 2009-04-10T10:16:53.819Z · LW(p) · GW(p)

A top-level post would be very welcome, I don't want to take this one too far off track. I've slept (and continue to sleep) with a lot of people, and my experience very much contradicts what you say here.

Replies from: pjeby, roland
comment by pjeby · 2009-04-10T15:17:01.881Z · LW(p) · GW(p)

roland:

So you have to be aware that there is a fundamental difference in the objectives of the two which will make it extremely difficult or impossible to make BOTH happy at the same time.

ciphergoth:

my experience very much contradicts what you say here.

That's because it's a great example of theory being used to persuade people to take a certain set of "actions that work". There are other theories that contradict those, which are used to get other people to take action... even though the specific actions taken may be quite similar!

People self-select their schools of dating and self-help based on what theories appeal to them, not on the actual actions those schools recommend taking. ;-)

In this case, the theory roland is talking about isn't theory at all: it's a sales pitch that attracts people who feel that dating is an unfair situation. They like what they hear, and they want to hear more. So they read more and maybe buy a product. The writer or speaker then gradually moves from this ev-psych "hook" to other theories that guide the reader to take the actions the author recommends.

That people confuse these sales pitches with actual theory is a well-understood concept within the Marketing Conspiracy. ;-) Of course, the gurus don't always know themselves what parts of their theories are hook vs. "real"... I just found out recently that a bunch of stuff I thought was "real" was actually "hook", and had to go through some soul-searching before deciding to leave it in the book I'm writing.

Why? Because if I change the hook, I won't be able to reach people who have the same wrong beliefs that I did. Better to hook people with wrong things they already believe, and then get them to take the actions that will get them to the place where they can throw off those beliefs. (And of course, believing those things didn't stop me from making progress.) But I've restricted it to being only in chapter 1, and the revelation of the deeper model will happen by chapter 5.

Anyway. Actually helping people change their actions and beliefs -- as opposed to merely telling them what they should do or think -- is the very Darkest of the Dark arts.

Perhaps we should call it "The Coaching Conspiracy". ;-)

comment by roland · 2009-04-10T10:47:27.239Z · LW(p) · GW(p)

What exactly would you like to know? The subject is very broad, it would be easier if you made me a list of questions that are relevant to LW. There are already TONS of sites about this topic so please don't ask me to write another post about seduction in general.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-10T11:33:29.245Z · LW(p) · GW(p)

I think a post tailored to the particular interests and language of LW/OB readers would be fairly different from the ones already out there, but if you have a pointer that you think would be particularly appealing to us lot I'm interested.

comment by AnnaSalamon · 2009-04-10T06:28:37.083Z · LW(p) · GW(p)

I would personally love to see more cross-fertilization between that sub-community and LW, "dark arts" or no. (At least, I think I would; I don't know the community well and might be mistaken.) We need to make contact between abstract techniques for thinking through difficult issues, and on the ground practical strategicness. Importing people who've developed skilled strategicness in any domain that involves actual actions and observable success/failure, including dating (or sales, or start-ups, or ... ?), would be a good way to do this. If you could link to specific articles, or could create discussion threads that both communities might want to participate in, mattnewport, that would be good.

Replies from: Hans
comment by Hans · 2009-04-13T10:14:30.862Z · LW(p) · GW(p)

I second that. Here in the LW/OB/sci-fi/atheism/cryonics/AI... community, many of us fit quite a few stereotypes. I'll summarize them in one word that everybody understands: we're all nerds*. This means our lives and personalities introduce many biases into our way of thinking, and these often preclude discussions about acting rationally in interpersonal situations such as sales, dating etc. because we don't have much experience in these fields. Anything that bridges this gap would be extremely useful.

*this is not a value judgment. And not everybody conforms to this stereotype. I know, I know, but this is not the point. I'm talking averages here.

comment by PhilosophyTutor · 2012-01-24T04:04:23.981Z · LW(p) · GW(p)

I would say that it is largely the ostensible basis of the seduction community.

As you can see if you read this subthread, they've got a mythology going on that renders most of their claims unfalsifiable. If their theories are unsupported it doesn't matter, because they can disclaim the theories as just being a psychological trick to get you to take "correct" actions. However they've got no rigorous evidence that their "correct" actions actually lead to any more mating success than spending an equivalent amount of time on personal grooming and talking to women without using any seduction-community rituals. They also have such a wide variety of conflicting doctrines and gurus that they can dismiss almost any critique as being based on ignorance, because they can always point to something written somewhere which will contradict any attempt to characterise the seduction community - not that this ever stops them making claims about the community themselves.

They'll claim that they develop such evidence by going out and picking up women, but since they don't do any controlled tests this cannot even in theory produce evidence that the techniques they advocate change their success rate, and even if they did conduct controlled studies their sample sizes are tiny given the claimed success rates. I believe one "guru" claims to obtain sex in one out of thirty-three approaches. I do not believe that anyone's intuitive grasp of statistics is so refined that they can spot variations in such an infrequent outcome and determine whether a given technique increases or decreases that success rate. To do science on such a phenomenon would take a very big sample size. Ergo anyone claiming to have scientific evidence without having done a study with a very big sample size is a fool or a knave.
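To put a rough number on "very big sample size", here is a back-of-the-envelope power calculation (a sketch only: the 1-in-33 baseline is the guru claim above, and the assumed effect size of a doubling is an illustrative assumption):

```python
from math import ceil, sqrt

# Rough two-proportion sample-size sketch: approaches needed per condition
# to detect a doubling of a ~1-in-33 success rate, at alpha = 0.05
# (two-sided) with 80% power. The doubling is an illustrative assumption.
p1, p2 = 1 / 33, 2 / 33          # baseline vs. hypothesized improved rate
z_alpha, z_beta = 1.96, 0.8416   # normal quantiles for alpha/2 and power
p_bar = (p1 + p2) / 2

n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
      + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
     / (p2 - p1) ** 2)

print(ceil(n))  # roughly 740 approaches per condition, even for a doubling
```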

The mythology of the seduction community is highly splintered and constantly changes over time, which increases the subjective likelihood that we are looking at folklore and scams rather than any kind of semi-scientific process homing in on the truth.

It's also easy to see how it could be very appealing to lonely nerds to think that they could download a walkthrough for getting women into bed the way they can download a walkthrough for Mass Effect or Skyrim. It's an empowering fantasy, to be sure.

If that's what it takes to get them to groom themselves and go talk to women it might even work in an indirect, placebo-like way. So if you prioritise getting laid over knowing the scientific truth about the universe it might be rational to be selectively irrational about seduction folklore. However if you want to know the truth about the universe there's not much to be gained from the seduction community. If they are doing better than chance it's because a stopped clock is right twice a day.

My own view is that the entire project is utterly misguided. Instead of hunting for probably-imaginary increases in their per-random-stranger success at getting sex they should focus on effectively searching the space of potential mates for those who are compatible with them and would be interested in them.

Replies from: wedrifid
comment by wedrifid · 2012-01-24T04:42:07.511Z · LW(p) · GW(p)

As you can see if you read this subthread, they've got a mythology going on that renders most of their claims unfalsifiable.

This is an absurd claim. Most of the claims can be presented in the form "If I do X I can expect to on average achieve a better outcome with women than if I do Y". Such claims are falsifiable. Some of them are even actually falsified. They call it "Field Testing".

Your depiction of the seduction community is a ridiculous straw man and could legitimately be labelled offensive by members of the community that you are so set on disparaging. Mind you they probably wouldn't bother doing so: The usual recommended way to handle such shaming attempts is to completely ignore them and proceed to go get laid anyway.

Replies from: PhilosophyTutor
comment by PhilosophyTutor · 2012-01-24T05:15:13.230Z · LW(p) · GW(p)

This is an absurd claim. Most of the claims can be presented in the form "If I do X I can expect to on average achieve a better outcome with women than if I do Y". Such claims are falsifiable. Some of them are even actually falsified. They call it "Field Testing".

If they conducted tests of X versus Y with large sample sizes and with blinded observers scoring the tests then they might have a basis to say "I know that if I do X I can expect to on average achieve a better outcome with women than if I do Y". They don't do such tests though.

They especially don't do such tests where X is browsing seduction community sites and trying the techniques they recommend and Y is putting an equal amount of time and effort into personal grooming and socialising with women without using seduction community techniques.

Scientific methodology isn't just a good idea, it's the law. If you don't set up your tests correctly you have weak or meaningless evidence.

Your depiction of the seduction community is a ridiculous straw man and could legitimately be labelled offensive by members of the community that you are so set on disparaging. Mind you they probably wouldn't bother doing so: The usual recommended way to handle such shaming attempts is to completely ignore them and proceed to go get laid anyway.

Or as the Bible says, "But if any place refuses to welcome you or listen to you, shake its dust from your feet as you leave to show that you have abandoned those people to their fate". It's good advice for door-to-door salespersons, Jehovah's Witnesses and similar people in the business of selling. If you run into a tough customer, don't waste your time trying to convince them; just walk away and look for an easier mark.

However in science that's not how you do things. In science if someone disputes your claim you show them the evidence that led you to fix your claim in the first place.

Are you sure you meant to describe my post as a "shaming attempt"? As pejoratives go this seems like an ill-chosen one, since my critique was strictly epistemological. It seems at least possible that you are posting a standard talking point which is deployed by seduction community members to dismiss ethical critiques, but which makes no sense in response to an epistemological critique.

(There are certainly concerns to be raised about the ethics of the seduction community, but that would be a different post).

Replies from: wedrifid
comment by wedrifid · 2012-01-24T06:05:07.759Z · LW(p) · GW(p)

If they conducted tests of X versus Y with large sample sizes and with blinded observers scoring the tests then they might have a basis to say "I know that if I do X I can expect to on average achieve a better outcome with women than if I do Y". They don't do such tests though.

Your claim was:

As you can see if you read this subthread, they've got a mythology going on that renders most of their claims unfalsifiable.

Are you familiar with the technical meaning of 'unfalsifiable'? It does not mean 'have not done scientific tests'. It means 'cannot do scientific tests even in principle'. I would like it if scientists did do more study of this subject but that is not relevant to whether claims are falsifiable.

It seems at least possible that you are posting a standard talking point which is deployed by seduction community members to dismiss ethical critiques, but which makes no sense in response to an epistemological critique.

I'd be surprised. I've never heard such a reply, certainly not in response to subject matter which many wouldn't understand (unfalsifiability). I used that term 'shaming' because the inferred motive (and, regardless of motive, one of the practical social meanings) of falsely accusing the enemy of behavior that looks pathetic is to provide some small degree of humiliation. This can, the motive implicitly hopes, make people ashamed of doing the behaviors that have been misrepresented. I am happy to concede that this point is more distracting than useful. I would have been best served to stick purely to the (more conventional expression of) "NOT UNFALSIFIABLE! LIES!"

However in science that's not how you do things. In science if someone disputes your claim you show them the evidence that led you to fix your claim in the first place.

I assert that the "act like JWs" approach is not taken by the seduction community in general either. For the most part they do present evidence. That evidence is seldom of the standard accepted in science, except when they are presenting claims that are taken from scientific findings - usually popularizations thereof; Cialdini references abound.

I again agree that the seduction community could use more scientific rigor. Shame on science for not engaging in (much) research in what is a rather important area!

(There are certainly concerns to be raised about the ethics of the seduction community, but that would be a different post).

Yes, I agree that you didn't get into ethics and that your claim was epistemological in nature. I do believe that the act of making epistemological claims is not always neutral with respect to other kinds of implication. As another tangential aside I note that if an exemplar of the seduction community were to be said to be sensitive to public opinion, he would be far more sensitive to things that make him look pathetic than to things that make him look unethical!

Replies from: PhilosophyTutor
comment by PhilosophyTutor · 2012-01-24T07:06:54.630Z · LW(p) · GW(p)

Are you familiar with the technical meaning of 'unfalsifiable'? It does not mean 'have not done scientific tests'. It means 'cannot do scientific tests even in principle'. I would like it if scientists did do more study of this subject but that is not relevant to whether claims are falsifiable.

In the case of Sagan's Dragon, the dragon is unfalsifiable because there is always a way for the believer to explain away every possible experimental result.

My view is that the mythology of the seduction community functions similarly. You can't attack their theories because they can respond by saying that the theory is merely a trick to elicit specific behaviour. You can't attack their claims that specific behaviours are effective because they will say that there is proof, but it only exists in their personal recollections so you have to take their word for it. You can't attack their attitudes, assumptions or claims because they can respond by pointing at one guru or another and saying that particular guru does not share the attitude, assumption or claim you are critiquing.

Their claim could theoretically be falsified, for example by a controlled test with a large sample size which showed that persons who had spent N hours studying and practicing seduction community doctrine/rituals (for some value of N which the seduction community members were prepared to agree was sufficient to show an effect) were no more likely to obtain sex than persons who had spent N hours on things like grooming, socialising with women without using seduction community rituals, reading interesting books they could talk about, taking dancing lessons and whatnot. I suspect but cannot prove though that if we conducted such a test those people who have made the seduction community a large part of their life would find some way to explain the result away, just as the believer in Sagan's dragon comes up with ways to explain away results that would falsify their dragon.

Of course it's not the skeptic's job to falsify the claims of the seduction community. Members of that community very clearly have a large number of beliefs about how best to obtain sex, even if those beliefs are not totally homogenous within that community, and it's their job to present the evidence that led them to the belief that their methods are effective. If it turns out that they have not controlled for the relevant cognitive biases including but not limited to the experimenter effect, the placebo effect, the sunk costs fallacy, the halo effect and correlation not proving causation then it's not rational to attach any real weight to their unsupported recollection as evidence.

Replies from: wedrifid
comment by wedrifid · 2012-01-24T07:28:54.804Z · LW(p) · GW(p)

Their claim could theoretically be falsified, for example <...> I suspect but cannot prove though that if we conducted such a test those people who have made the seduction community a large part of their life would find some way to explain the result away, just as the believer in Sagan's dragon comes up with ways to explain away results that would falsify their dragon.

It is a dramatically different thing to say "people who are in the seduction community are the kind of people who would make up excuses if their claims were falsified" than to say "the beliefs of those in the seduction community are unfalsifiable". While I may disagree mildly with the former claim, the latter I object to as an absurd straw man.

Of course it's not the skeptic's job to falsify the claims of the seduction community.

I don't accept the role of a skeptic. I take the role of someone who wishes to have correct beliefs, within the scope of rather dire human limitations. That means I must either look for and process the evidence to whatever extent possible or, if a field is considered of insufficient expected value, remain in a state of significant uncertainty to the extent determined by information I have picked up in passing.

I reject the skeptic role of thrusting the burden of proof around, implying "You've got to prove it to me or it ain't so!" That's just the opposite stupidity to that of a true believer. It is a higher status role within intellectual communities but it is by no means rational.

and it's their job to present the evidence that led them to the belief that their methods are effective.

No, it's their job to go ahead and get laid and have fulfilling relationships. It is no skin of their nose if you don't agree with them. In fact, the more people who don't believe them the less competition they have.

Unless they are teachers, people are not responsible for forcing correct epistemic states upon others. They are responsible for their beliefs, you are responsible for yours.

Replies from: PhilosophyTutor
comment by PhilosophyTutor · 2012-01-24T08:12:49.312Z · LW(p) · GW(p)

It is a dramatically different thing to say "people who are in the seduction community are the kind of people who would make up excuses if their claims were falsified" than to say "the beliefs of those in the seduction community are unfalsifiable". While I may disagree mildly with the former claim, the latter I object to as an absurd straw man.

I'm content to use the term "unfalsifiable" to refer to the beliefs of homeopaths, for example, even though by conventional scientific standards their beliefs are both falsifiable and falsified. Homeopaths have a belief system in which their practices cannot be shown to not work, hence their beliefs are unfalsifiable in the sense that no evidence you can find will ever make them let go of their belief. The seduction community have a well-developed set of excuses for why their recollections count as evidence for their beliefs (even though they probably shouldn't count as evidence for their beliefs), and for why nothing counts as evidence against their beliefs.

I reject the skeptic role of thrusting the burden of proof around, implying "You've got to prove it to me or it ain't so!" That's just the opposite stupidity to that of a true believer. It is a higher status role within intellectual communities but it is by no means rational.

It is not "the opposite stupidity" at all to see a person professing belief Y, and say to them "Please tell me the facts which led you to fix your belief in Y". If their belief is rational then they will be able to tell you those facts, and barring significantly differing priors you too will then believe in Y.

I suspect we differ in our priors when it comes to the proposition that the rituals of the seduction community perform better than comparable efforts to improve one's attractiveness and social skills that are not informed by seduction community doctrine, but not so much that I would withhold agreement if some proper evidence was forthcoming.
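As a toy illustration of that last point (all numbers invented for illustration): two people with fairly different priors end up in roughly the same place once the evidence is strong enough.

```python
# Toy Bayesian update, all numbers invented for illustration. A skeptic and
# a sympathizer hold different priors on "the techniques beat the control";
# both then see the same evidence, with a likelihood ratio of 20:1 in favor.
def posterior(prior: float, likelihood_ratio: float) -> float:
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

for name, prior in [("skeptic", 0.10), ("sympathizer", 0.50)]:
    print(name, round(posterior(prior, 20.0), 2))
# skeptic -> 0.69, sympathizer -> 0.95: both now favor the claim
```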

However if the local seduction community members instead respond with defensive accusations, downvotes and so forth but never get around to stating the facts which led them to fix their belief in Y then observers should update their own beliefs to increase the probability that the beliefs of the seduction community do not have rational bases.

Unless they are teachers, people are not responsible for forcing correct epistemic states upon others. They are responsible for their beliefs, you are responsible for yours.

Can you see that from my perspective, responses which consist of excuses as to why supporters of the seduction community doctrine(s) should not be expected to state the facts which inform their beliefs are not persuasive? If they have a rational basis for their belief they can just state it. I struggle to envisage probable scenarios where they have such rational bases but rather than simply state them they instead offer various excuses as to why, if they had such evidence, they should not be expected to share it.

Replies from: wedrifid
comment by wedrifid · 2012-01-24T08:24:35.043Z · LW(p) · GW(p)

However if the local seduction community members instead respond with defensive accusations, downvotes and so forth but never get around to stating the facts which led them to fix their belief in Y then observers should update their own beliefs to increase the probability that the beliefs of the seduction community do not have rational bases.

On lesswrong insisting a claim is unfalsifiable while simultaneously explaining how that claim can be falsified is more than sufficient cause to downvote. The charge of unfalsifiability is false even if - and especially obviously when - the claim in question is itself false. Further, in general downvotes of comments by the PhilosophyTutor account - at least those by myself - have usually been for the consistent use of straw men and the insulting misrepresentation of a group of people you are opposed to.

Declaring downvotes of one's own comments to be evidence in favor of one's position is seldom a useful approach.

Can you see that from my perspective, responses which consist of excuses as to why supporters of the seduction community doctrine(s) should not be expected to state the facts which inform their beliefs are not persuasive?

They should not be persuasive and are not intended as such. Instead, in this case, it was an explicit rejection of the "My side is the default position and the burden of proof is on the other!" debating tactic. The subject of how to think correctly (vs debate effectively) is one of greater interest to me than seduction.

I also reject the tactic used in the immediate parent. It seems to be of the form "You are trying to refute my arguments. You are being defensive. That means you must be wrong. I am right!". It is a tactic which, rather conveniently, becomes more effective the worse your arguments are!

Replies from: PhilosophyTutor
comment by PhilosophyTutor · 2012-01-24T09:02:29.419Z · LW(p) · GW(p)

On lesswrong insisting a claim is unfalsifiable while simultaneously explaining how that claim can be falsified is more than sufficient cause to downvote.

That's rather sad, if the community here thinks that the word "unfalsifiable" only refers to beliefs which are unfalsifiable in principle from the perspective of a competent rationalist, and that the word is not also used to refer to belief systems held by irrational people which are unfalsifiable from the insider/irrational perspective.

The fundamental epistemological sin is the same in each case, since both categories of belief are irrational in the sense that there is no good reason to favour the particular beliefs held over the unbounded number of other, equally unfalsifiable beliefs which explain the data equally well.

That said, I do find it curious that such misunderstandings seem to exclusively crop up in those posts where I criticise the beliefs of the seduction community. Those posts get massively downvoted compared to posts I make on any other topic, and from my insider perspective there is no difference in quality of posting.

consistent use of straw men and the insulting misrepresentation of a group of people you are opposed to.

There's a philosophical joke that goes like this:

"Zabludowski has insinuated that my thesis that p is false, on the basis of alleged counterexamples. But these so- called "counterexamples" depend on construing my thesis that p in a way that it was obviously not intended -- for I intended my thesis to have no counterexamples. Therefore p".

Source

It's not clear to me at all that I have used straw men or misrepresented a group, and from my perspective it seems that it's impossible to criticise any aspect of the seduction community or its beliefs without being accused of attacking a straw man.

They should not be persuasive and are not intended as such. Instead, in this case, it was an explicit rejection of the "My side is the default position and the burden of proof is on the other!" debating tactic. The subject of how to think correctly (vs debate effectively) is one of greater interest to me than seduction.

Perhaps we should drop this subtopic then, since it seems solely to be about your views of what you see as a particular debating tactic, and get back to the issue of what exactly the evidence is for the beliefs of the seduction community.

If we can agree that how to think correctly is the more interesting topic, then possibly we can agree to explore whether or not the seduction community are thinking correctly by means of examining their evidence.

Replies from: wedrifid, wedrifid
comment by wedrifid · 2012-01-24T09:21:41.710Z · LW(p) · GW(p)

That's rather sad, if the community here thinks that the word "unfalsifiable" only refers to beliefs which are unfalsifiable in principle from the perspective of a competent rationalist, and that the word is not also used to refer to belief systems held by irrational people which are unfalsifiable from the insider/irrational perspective.

Then you should indeed be sad. An unfalsifiable claim is a claim that cannot be falsified. Not only is it right there in the word, it is a basic scientific principle. The people who present a claim happening to be irrational would be a separate issue.

Just say that the seduction community is universally or overwhelmingly irrational when it comes to handling counterevidence to their claims - and we can merrily disagree about the state of the universe. But unfalsifiable things can't be falsified.

comment by wedrifid · 2012-01-24T09:16:34.304Z · LW(p) · GW(p)

If we can agree that how to think correctly is the more interesting topic, then possibly we can agree to explore whether or not the seduction community are thinking correctly by means of examining their evidence.

I would update only slightly from the prior for "non-rationalists are dedicated to achieving a goal through training and practice".

EDIT: In case the meaning isn't clear - this translates to "They're probably about the same as most folks are when they do stuff. Haven't seen much to think they are better or worse."

Replies from: PhilosophyTutor
comment by PhilosophyTutor · 2012-01-24T09:56:50.831Z · LW(p) · GW(p)

That seems to be a poorly-chosen prior.

An obvious improvement would be to instead use "non-rationalists are dedicated to achieving a goal through training and practice, and find a system for doing so which is significantly superior to alternative, existing systems".

It is no great praise of an exercise regime, for example, to say that those who follow it get fitter. The interesting question is whether that particular regime is better or worse than alternative exercise regimes.

However the problem with that question is that there are multiple competing strands of seduction theory, which is why any critic can be accused of attacking a straw man regardless of the points they make. So you need to specify multiple sub-questions of the form "Group A of non-rationalists were dedicated to achieving a goal through training and practice, and found a system for doing so which is significantly superior to alternative, existing systems", "Group B of non-rationalists..." and so on for as many sub-types of seduction doctrine as you are prepared to acknowledge, where the truth of some groups' doctrines precludes the truth of some other groups' doctrines. As musical rationalists Dire Straits pointed out, if two guys say they're Jesus then at least one of them must be wrong.

So then ideally we ask all of these people what evidence led them to fix the belief they hold that the methods of their group perform better than alternative, existing ways of improving your attractiveness. That way we could figure out which if any of them are right, or whether they are all wrong.

However I don't seem to be able to get to that point. Since you position yourself as outside the seduction community and hence immune to requests for evidence, but as thoroughly informed about the seduction community and hence entitled to pass judgment on whether my comments are directed at straw men, there's no way to explore the interesting question by engaging with you.

Edit to add: I see one of the ancestor posts has been pushed down to -3, the point at which general traffic will no longer see later posts. Based on previous experience I predict that N accounts who downvote or upvote all available posts along partisan lines will hit this subthread pushing all of wedrifid's posts up by +N and all of my posts down by -N.

Replies from: taelor
comment by taelor · 2012-01-24T10:28:03.961Z · LW(p) · GW(p)

I actually agree mainly with you, but am downvoting both sides on the principle that I'm tired of listening to people argue back and forth about PUAs/Seduction communities.

comment by wedrifid · 2012-01-24T06:10:53.980Z · LW(p) · GW(p)

I would absolutely love to see the development of a rational art of dating. If you've more to say on this I'll definitely look forward to reading it.

I have Hugh in my RSS feed for this reason!

comment by AnnaSalamon · 2009-04-10T05:59:39.551Z · LW(p) · GW(p)

It sounds as though you have data and experiences that our community should chew on. Please do share specific stories, anecdotes, strategies or habits for thinking strategically about practical domains, techniques you've found useful within "creative rationality", etc. Perhaps in a top-level post?

Replies from: HughRistik
comment by HughRistik · 2009-04-10T06:24:18.912Z · LW(p) · GW(p)

Thanks, Anna. Getting more specific is definitely on my list.

comment by Lethalmud · 2014-05-23T13:54:06.590Z · LW(p) · GW(p)

I'm curious, how did you use rationality to develop fashion sense?

comment by simpleton · 2009-04-09T04:22:51.822Z · LW(p) · GW(p)

If in 1660 you'd asked the first members of the Royal Society to list the ways in which natural philosophy had tangibly improved their lives, you probably wouldn't have gotten a very impressive list.

Looking over history, you would not have found any tendency for successful people to have made a formal study of natural philosophy.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-04-09T04:48:08.317Z · LW(p) · GW(p)

It would be overconfident for me to say rationality could never become useful. My point is just that we are acting like it's practically useful right now, without very much evidence for this beyond our hopes and dreams. Thus my last sentence - that "crossing the Pacific" isn't impossible, but it's going to take a different level of effort.

If in 1660, Robert Boyle had gone around saying that, now that we knew Boyle's Law of gas behavior, we should be able to predict the weather, and that that was the only point of discovering Boyle's Law and that furthermore we should never trust a so-called chemist or physicist except insofar as he successfully predicted the weather - then I think the Royal Society would be making the same mistake we are.

Boyle's Law is sort of helpful in understanding the weather, sort of. But it's step one of ten million steps, used alone it doesn't work nearly as well as just eyeballing the weather and looking for patterns, and any attempt to judge applicants to the Royal Society on their weather prediction abilities would have excluded some excellent scientists. Any attempt to restrict gas physics itself to things that were directly helpful in predicting the weather would have destroyed the science, ironically including the discoveries two hundred years down the road that were helpful in weather prediction.

Summed up: With luck, (some) science can result in good practical technology. But demanding the technology too soon, or restricting science to only the science with technology to back it up, hurts both science and technology.

(there is a difference between verification and technology. Boyle was able to empirically test his gas law, but not practically apply it. This may be fuzzier in rationality)

Replies from: badger
comment by badger · 2009-04-09T06:00:19.645Z · LW(p) · GW(p)

I'm confused about this article. I agree with most of what you've said, but I'm not sure what the point is, exactly. I thought the entire premise of this community was that more is possible, but we're only "less wrong" at the moment. I didn't think there was any promise of results for the current state of the art. Is this post a warning, or am I overlooking this trend?

I agree we shouldn't see x-rationality as practically useful now. You don't rule out rationality becoming the superpower Eliezer portrays in his fiction. That is certainly a long ways off. Boyle's Law and weather prediction is an apt analogy. Just trying harder to apply our current knowledge won't go very far, but there should be some productive avenues.

I think I'd understand your purpose better if you could answer these questions: In your mind, how likely is it that x-rationality could be practically useful in, say, 50 years? What approaches are most likely to get us to a useful practice of rationality? Or is your point that any advances that are made will be radically different from our current lines of investigation?

Just trying to understand.

Replies from: Eliezer_Yudkowsky, Yvain
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-09T12:10:10.954Z · LW(p) · GW(p)

The above would be component 1 of my own reply.

Component 2 would be (to say it again) that I developed the particular techniques that are to be found in my essays, in the course of solving my problem. And if you were to try to attack that or a similar problem you would suddenly find many more OB posts to be of immensely greater use and indeed necessity. The Eliezer of 2000 and earlier was not remotely capable of getting his job done.

What you're seeing here is the backwash of techniques that seem like they ought to have some general applicability (e.g. Crisis of Faith) but which are not really a whole developed rationalist art, nor made for the purpose of optimizing everyday life.

Someone faced with the epic Challenge Of Changing Their Mind may use the full-fledged Crisis of Faith technique once that year. How much benefit is this really? That's the question, but I'm not sure the cynical answer is the right one.

What I am hoping to see here is others, having been given a piece of the art, taking that art and extending it to cover their own problems, then coming back and describing what they've learned in a sufficiently general sense (informed by relevant science) that I can actually absorb it. For that which has been developed to address e.g. akrasia outside the rationalist line, I have found myself unable to absorb.

Replies from: Yvain, thomblake, pjeby
comment by Scott Alexander (Yvain) · 2009-04-09T13:55:35.003Z · LW(p) · GW(p)

But you're not a good test case to see whether rationality is useful in everyday life. Your job description is to fully understand and then create a rational and moral agent. This is the exceptional case where the fuzzy philosophical benefits of rationality suddenly become practical.

One of the fundamental lessons of Overcoming Bias was "All this stuff philosophers have been debating fruitlessly for centuries actually becomes a whole lot clearer when we consider it in terms of actually designing a mind." This isn't surprising; you're the first person who's really gotten to use Near Mode thought on a problem previously considered only in Far Mode. So you've been thinking "Here's this nice practical stuff about thinking that's completely applicable to my goal of building a thinking machine", and we've been thinking, "Oh, wow, this helps solve all of these complicated philosophical issues we've been worrying about for so long."

But in other fields, the rationality is domain-specific and already exists, albeit without the same thunderbolt of enlightenment and awesomeness. Doctors, for example, have a tremendous literature on evidence and decision-making as they relate to medicine (which is one reason I get so annoyed with Robin sometimes). An x-rationalist who becomes a doctor would not, I think, necessarily be a significantly better doctor than the rest of the medical world, because the rest of the medical world already has an overabundance of great rationality techniques and methods of improving care that the majority of doctors just don't use, and because medicine takes so many skills besides rationality that any minor benefits from the x-rationalist's clearer thinking would get lost in the noise. To make this more concrete: I don't think good doctors are more likely to be atheists than bad doctors, though I do think good AI scientists are more likely to be atheists than bad AI scientists. I think this paragraph about doctors also applies to businessmen, scientists, counselors, et cetera.

When I said that we had a non-trivial difference of opinion on your secret identity post, this was what I meant: that a great x-rationalist might be a mediocre doctor; that maybe if you'd gone into medicine instead of AI you would have been a mediocre doctor and then I wouldn't be "allowed" to respect you for your x-rationality work.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-09T14:18:26.390Z · LW(p) · GW(p)

An x-rationalist who becomes a doctor would not, I think, necessarily be a significantly better doctor than the rest of the medical world, because the rest of the medical world already has an overabundance of great rationality techniques and methods of improving care that the majority of doctors just don't use

Evidence-based medicine was developed by x-rationalists. And to this day, many doctors ignore it because they are not x-rationalists.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-04-09T21:00:01.007Z · LW(p) · GW(p)

...huh. That comment was probably more helpful than you expected it to be. I'm pretty sure I've identified part of my problem as having too high a standard for what makes an x-rationalist. If you let the doctors who developed evidence-based medicine in...yes, that clears a few things up.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-09T21:10:52.309Z · LW(p) · GW(p)

One thinks particularly of Robyn Dawes - I don't know him from "evidence-based medicine" per se, but I know he was fighting the battle to get doctors to acknowledge that their "clinical experience" wasn't better than simple linear models, and he was on the front lines against psychotherapy shown to perform no better than talking to any bright person.

If you read "Rational Choice in an Uncertain World" you will see that Dawes is pretty definitely on the level of "integrate Bayes into everyday life", not just Traditional Rationality. I don't know about the historical origins of evidence-based medicine, so it's possible that a bunch of Traditional Rationalists invented it; but one does get the impression that probability theorists trying to get people to listen to the research about the limits of their own minds, were involved.

Replies from: Yvain, Will_Newsome
comment by Scott Alexander (Yvain) · 2009-04-10T02:59:04.777Z · LW(p) · GW(p)

After thinking on this for a while, here are my thoughts. This should probably be a new post but I don't want to start another whole chain of discussions on this issue.

  1. I had the belief that many people on Less Wrong believed that our currently existing Art of Rationality was sufficient or close to sufficient to guarantee practical success or even to transform its practitioner into an ubermensch like John Galt. I'm no longer sure anyone believes this. If they do, they are wrong. If anyone right now claims they participate in Less Wrong solely out of a calculated program to maximize practical benefits and not because they like rationality, I think they are deluded.

  2. Where x-rationality is defined as "formal, math-based rationality", there are many cases of x-rationality being used for good practical effect. I missed these because they look more like three percent annual gains in productivity than like Brennan discovering quantum gravity or Napoleon conquering Europe. For example, doctors can use evidence-based medicine to increase their cure rate.

  3. The doctors who invented evidence-based medicine deserve our praise. Eliezer is willing to consider them x-rationalists. But there is no evidence that they took a particularly philosophical view towards rationality, as opposed to just thinking "Hey, if we apply these tests, it will improve medicine a bit." Depending on your view of socialism, the information that one of these inventors ran for parliament on a socialist platform may be an interesting data point.

  4. These doctors probably had mastery of statistics, good understanding of the power of the experimental method, and a belief that formalizing things could do better than normal human expertise. All of these are rationalist virtues. Any new doctor who starts their career with these virtues will be in a better position to profit from and maybe expand upon evidence-based medicine than a less virtuous doctor, and will reap great benefits from their virtues. Insofar as Less Wrong's goal is to teach people to become such doctors, this is great...

  5. ...except that epidemiology and statistics classes teach the same thing with a lot less fuss. Less Wrong's goal seems to be much higher. Less Wrong wants a doctor who can do that, and understand their mental processes in great detail, and who will be able to think rationally about politics and religion and turn the whole thing into a unified rationalist outlook.

  6. Or maybe it doesn't. Eliezer has already explained that a lot of his OB writing was just stuff that he came across trying to solve AI problems. Maybe this has turned us into a community of people who like talking about philosophy, and that really doesn't matter much and shouldn't be taught at rationality dojos. Maybe a rationality dojo should be an extra-well-taught applied statistics class and some discussion of important cognitive biases and how to avoid them. It seems to me that a statistics class plus some discussion of cognitive biases would be enough to transform an average doctor into the kind of doctor who could invent or at least use evidence-based medicine and whatever other x-rationality techniques might be useful in medicine. With a few modifications, the same goes for business, science, and any other practical field.

  7. I predict the marginal utility of this sort of rationality will decline quickly. The first year of training will probably do wonders. The second year will be less impressive. I doubt a doctor who studies this rationality for ten years will be noticeably better off than one who studies it for five, although this may be my pessimism speaking. Probably the doctor would be better off spending those second five years studying some other area of medicine. In the end, I predict these kinds of classes could improve performance in some fields 10-20% for people who really understood them.

  8. This would be a useful service, but it wouldn't have the same kind of awesomeness as Overcoming Bias did. There seems to be a second movement afoot here, one to use rationality to radically transform our lives and thought processes, moving so far beyond mere domain-specific reasoning ability that even in areas like religion, politics, morality, and philosophy we hold only rational beliefs and are completely inhospitable to any irrational thoughts. This is a very different sort of task.

  9. This new level of rationality has benefits, but they are less practical. There are mental clarity benefits, and benefits to society when we stop encouraging harmful political and social movements, and benefits to the world when we give charity more efficiently. Once people finish the course mentioned in (6) and start on the course mentioned in (8), it seems less honest to keep telling them about the vast practical benefits they will attain.

  10. This might have certain social benefits, but you would have to be pretty impressive for conscious-level social reasoning to get better than the dedicated unconscious modules we already use for that task.

  11. I have a hard time judging opinion here, but it does seem like some people think sufficient study of z-rationality (the ambitious, life-transforming kind described in (8)) can turn someone into an ubermensch. But the practical benefits beyond those offered by y-rationality (the applied-statistics-plus-biases kind described in (6)) seem low. I really like z-rationality, but only because I think it's philosophically interesting and can improve society, not because I think it can help me personally.

  12. In the original post, I was using x-rationality in a confused way, but I think to some degree I was thinking of (8) rather than (6).

comment by Will_Newsome · 2012-04-18T12:07:52.355Z · LW(p) · GW(p)

One thinks particularly of Robyn Dawes - I don't know him from "evidence-based medicine" per se, but I know he was fighting the battle to get doctors to acknowledge that their "clinical experience" wasn't better than simple linear models, and he was on the front lines against psychotherapy shown to perform no better than talking to any bright person.

If you read "Rational Choice in an Uncertain World" you will see that Dawes is pretty definitely on the level of "integrate Bayes into everyday life", not just Traditional Rationality. I don't know about the historical origins of evidence-based medicine, so it's possible that a bunch of Traditional Rationalists invented it; but one does get the impression that probability theorists trying to get people to listen to the research about the limits of their own minds, were involved.

Those studies sucked. That book had tons of fallacious reasoning and questionable results. It was while reading Dawes' book that I became convinced that H&B is actively harmful for rationality. Now that you say Dawes was also behind the anti-psychotherapy stuff I suddenly have a lot more faith in psychotherapy. (By the way, it's not just that Dawes isn't a careful researcher—he can also be actively misleading.)

I really hope Anna is right that the Center for Modern Rationality won't be giving much weight to oft-cited overblown H&B results (e.g. "confirmation bias"). Knowing about biases almost always hurts people.

ETA: Apologies for curmudgeonly tone; I'm just worried that unless executed with utmost care, the CMR idea will do evil.

Replies from: Will_Newsome, Eliezer_Yudkowsky
comment by Will_Newsome · 2012-04-19T00:39:04.882Z · LW(p) · GW(p)

(By the way, it's not just that Dawes isn't a careful researcher—he can also be actively misleading.)

Which illustrates an important heuristic: put minimal trust in researchers who seem to have ideological axes to grind.

Replies from: Nornagest, None
comment by Nornagest · 2012-04-19T01:24:30.039Z · LW(p) · GW(p)

That is an important heuristic (and upvoted), but I don't think it's one we should endorse without some pretty substantial caveats. If you deprecate any results that strike you as ideologically tainted, and your criteria for "ideologically tainted" are themselves skewed in one direction or another by identity effects, you can easily end up accepting less accurate information than you would by taking every result in the field at face value.

I probably don't need to give any examples.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T12:17:21.021Z · LW(p) · GW(p)

Agreed. I think your caveat is just a special case: put minimal trust in researchers who seem to have ideological axes to grind, including yourself. (And if you can't discern when you might be grinding an axe then you're probably screwed anyway.) (But yeah, I admit it's a perversion of "researchers" to include meta-researchers.)

comment by [deleted] · 2012-04-19T00:43:00.394Z · LW(p) · GW(p)

As The Last Psychiatrist would say, always be thinking about what the author wants to be true.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-04-18T12:53:02.819Z · LW(p) · GW(p)

FYI to other readers: Citation does not support claim; it's about linear models of wine-tasting rather than experimental support for psychotherapy.

Replies from: steven0461, Will_Newsome, MatthewBaker
comment by steven0461 · 2012-04-18T21:17:48.891Z · LW(p) · GW(p)

I think the claim that "those studies sucked" and the accompanying link were in reference to:

the battle to get doctors to acknowledge that their "clinical experience" wasn't better than simple linear models

The linked comment discusses a few different statistical prediction rules, not just wine-tasting. To the extent that the comment identifies systematic flaws in claims that linear models outperform experts, it does somewhat support the claim that "those studies sucked" (though I wouldn't think it supports the claim sufficiently to actually justify making it).

comment by Will_Newsome · 2012-04-19T00:32:15.710Z · LW(p) · GW(p)

(See Steven's comment, the "those studies sucked" comment was meant to be a reference to the linear model versus expert judgment series, not the psychotherapy studies. Obviously the link was supposed to be representative of a disturbing trend, not the sum total justification for my claims.)

FWIW I still like a lot of H&B research—I'm a big Gigerenzer fan, and Tetlock has some cool stuff, for example—but most of the field, including much of Tversky and Kahneman's stuff, is hogwash, i.e. less trustworthy than parapsychology results (which are generally held to a much higher standard). This is what we'd expect given the state of the social sciences, but for some reason people seem to give social psychology and cognitive science a free pass rather than applying a healthy dose of skepticism. I suspect this is because of confirmation bias: people are already trying to push an ideology about how almost everyone is irrational and the world is mad, and thus are much more willing to accept "explanations" that support this conclusion.

Replies from: adamisom
comment by adamisom · 2012-04-19T01:50:30.193Z · LW(p) · GW(p)

Tversky and Kahneman, hogwash? What? Can you explain? Or just mention something?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T02:34:21.484Z · LW(p) · GW(p)

Start by reading Gigerenzer's critiques. E.g. I really like the study on how overconfidence goes away if you ask for frequencies rather than subjective probabilities—this actually gives you a rationality technique that you can apply in real life! (In my experience it works, but that's an impression, not a statistical finding.) I also quite liked his point about how just telling subjects to assume random sampling is misleading. You can find a summary of two of his critiques in a LW post by Kaj Sotala, "Heuristics and Biases Biases?" or summat. Also fastandfrugal.com should have some links or links to links. Also worth noting is that Gigerenzer's been cited many thousands of times and has written a few popular books. I especially like Gigerenzer because unlike many H&B folk he has a thorough knowledge of statistics, and he uses that knowledge to make very sharp critiques of Kahneman's compare-to-allegedly-ideal-Bayesian-reasoner approach. (Of course it's still possible to use a Bayesian approach, but the most convincing Bayesian papers I've seen were sophisticated (e.g. didn't skimp on information theory) and applied only to very simple problems.)

I wouldn't even say that the problem is overall in the H&B lit, it's just that lots of H&B folk spin their results as if they somehow applied to real life situations. It's intellectually dishonest, and leads to people like Eliezer having massive overconfidence in the relevance of H&B knowledge for personal rationality.
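
To make the frequency-format point concrete, here is a minimal sketch (with invented numbers - it is only meant to show the two elicitation formats, not to reproduce Gigerenzer's data):

```python
# Two ways of eliciting the same person's confidence about ten answers.
# All numbers below are made up for illustration.

answers_correct = [True, True, False, True, False, True, True, False, True, True]

# Format 1: item-by-item subjective probability ("how sure are you of this one?").
per_item_confidence = [0.95, 0.90, 0.85, 0.95, 0.80, 0.90, 0.95, 0.85, 0.90, 0.95]
implied_correct = sum(per_item_confidence)

# Format 2: a single frequency judgment ("of these ten, how many did you get right?").
frequency_estimate = 7

actual_correct = sum(answers_correct)
print(f"actual:              {actual_correct}/10")
print(f"implied by Format 1: {implied_correct:.1f}/10")
print(f"stated in Format 2:  {frequency_estimate}/10")

# The finding described above is that Format 2 answers tend to be much closer to
# the actual score than the per-item probabilities are - i.e. the "overconfidence"
# largely disappears when the question is asked as a frequency.
```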

Replies from: adamisom
comment by adamisom · 2012-04-19T03:28:51.244Z · LW(p) · GW(p)

Awesome, big thanks!

comment by MatthewBaker · 2012-04-18T13:00:59.407Z · LW(p) · GW(p)

All this rationality organizing talk has to have some misquotes :(

comment by thomblake · 2009-04-09T12:53:31.770Z · LW(p) · GW(p)

The Eliezer of 2000 and earlier was not remotely capable of getting his job done.

Are you more or less capable of that now? Do you have evidence that you are? Is the job tangibly closer to being completed?

Replies from: Annoyance
comment by Annoyance · 2009-04-09T17:16:35.230Z · LW(p) · GW(p)

I wouldn't bother with those questions if I were you, thomblake. They've never been answered here, and are unlikely ever to be answered, here or elsewhere.

The goal here is to talk about being rational, not actually being so; to talk about building AIs, not show progress in doing so or even to define what that would be.

It's about talking, not doing.

Replies from: gjm
comment by gjm · 2009-04-10T20:27:16.045Z · LW(p) · GW(p)

There are many different people here. I think talking about "the goal" is nonsense.

comment by pjeby · 2009-04-09T20:35:58.293Z · LW(p) · GW(p)

That which has been developed to address e.g. akrasia outside the rationalist line, I have found myself unable to absorb.

Why do you suppose that is?

comment by Scott Alexander (Yvain) · 2009-04-09T12:55:41.410Z · LW(p) · GW(p)

I'll admit I might be attacking a straw man, but if you read the posts linked to on the very top, I think there are at least a few people out there who believe it, or who don't consciously believe it but act as if it's true.

How likely is it that x-rationality could be practically useful in, say, 50 years?

Depends how you reduce "practically useful". If you reduce it to "a person randomly assigned to take rationality classes two hours a week plus homework for a year will make on average ten percent more money than a similar person who doesn't", my wild, completely unsubstantiated guess is that it's 50% likely. But I'd give similar numbers to other types of self-improvement classes, like Carnegie seminars and that sort of thing.

What approaches are most likely to get us to a useful practice of rationality? Or is your point that any advances that are made will be radically different from our current lines of investigation?

If by "useful practice of rationality" you mean the way Eliezer imagines it, I think there should be more focus on applying the rationality we have rather than delving deeper and deeper into the theory, but if I could say more than that, I'd be rich and you'd be paying me outrageous hourly fees to talk about it :)

I do think non-godlike levels of rationality have far more potential to help us in politics than in daily life, but that's a minefield. In terms of easy profits we should focus the movement there, but in terms of remaining cohesive and credible it's not really an option.

comment by Douglas_Knight · 2009-04-10T17:05:45.053Z · LW(p) · GW(p)

Michael Vassar:

nerds, scientists, skeptics and the like who like to describe their membership in terms of rationality are [not] noticeably better than average at behavioral rationality, as opposed to epistemic rationality where they are obviously better than average but still just hideously bad.

Simply applying "ordinary rationality" to behavior is extreme. People don't use reason to decide if fashion is important, they just copy. Eliezer's Secret Identities post seems to make a very similar point, one which largely matches this post. One point was to get rationality advice from people who actually found it useful, rather than from ordinary nerds who fetishize it.

comment by mattnewport · 2009-04-09T22:24:58.647Z · LW(p) · GW(p)

An understanding of 'x-rationality' has helped me find the world a little less depressing and a little less frustrating. Previously, when observing world events, politics and some behaviours in social interactions that seemed incomprehensible without assuming depressing levels of stupidity, incompetence or malice, I despaired at the state of humanity. An appreciation of human biases and evolutionary psychology (some of which stems from an interest in both going back well before I ever started reading OB) gives me a framework in which to understand events in the world which I find a lot more productive and optimistic.

An example from politics: it is hard to make any rational sense of drug prohibition when looking at the evidence of the costs and benefits. This would tend to lead to an inevitable conclusion that politicians and the voting public are either irredeemably stupid or actively seeking negative outcomes. Understanding how institutional incentives to maintain the status quo, confirmation bias and signaling effects (politicians and voters needing to be 'seen to care' and/or 'seen to disapprove') can lead to basically intelligent and well-meaning people maintaining catastrophically wrong beliefs, at worst, allows for accepting the status quo without assuming the worst about one's fellow man and, at best, maps out plausible paths for achieving political change by recognizing the true nature of the obstacles.

An example from social interactions: I suffered a fair amount of personal emotional stress reconciling what I had been led to believe 'ought' to work when interacting with others and the apparently much less pleasant realities of what seemed to be successful in reality. The only conclusion I could draw was that everyone deliberately lied about the way human interactions worked, for their own mysterious and possibly malicious reasons. Coming to an understanding of evolutionary psychology and signaling explanations for many common patterns of human behaviour allows me to reconcile 'doing what works' with a belief that most people are not consciously misleading or malicious most of the time. Many people don't appear to be aware of the contradictions inherent in social interactions, but as someone who saw them yet could not explain them without assuming the worst, discovering explanations that did not require imputing conscious malice to others allowed for a much more positive outlook on the world.

I could give a number of examples of how 'regular' rationality rigorously applied to areas of life where it is often absent have also directly helped me in my life but they seem slightly off topic for this thread.

comment by AnnaSalamon · 2009-04-09T07:46:28.210Z · LW(p) · GW(p)

I’m partly echoing badger here, but it’s worth distinguishing between three possible claims:
(1) An “art of rationality” that we do not yet have, but that we could plausibly develop with experimentation, measurements, community, etc., can help people.
(2) The “art of rationality” that one can obtain by reading OB/LW and trying to really apply its contents to one’s life, can help people.
(3) The “art of rationality” that one is likely to accidentally obtain by reading articles about it, e.g. on OB/LW, and seeing what happens to rub off, can help people.

There are also different notions of “help people” that are worth distinguishing. I’ll share my anticipations for each separately. Yvain or others, tell me where your anticipations match or differ.

Regarding claim (3):
My impression is that even the art of rationality one obtains by reading articles about it for entertainment does have some positive effects on the accuracy of people’s beliefs. A couple of people reported leaving their religions. Many of us have probably discarded random political or other opinions that we had due to social signaling or happenstance. Yvain and others report “clarity-of-mind benefits”. I’d give reasonable odds that there’s somewhat more benefit than this -- some unreliable improvement in people’s occasional, major, practical decisions, e.g. about which career track to pursue, and some unreliable improvement in people’s ability to see past their own rationalizations in interpersonal conflicts -- but (at least with hindsight bias?) probably no improvements in practical skills large enough to show up on Vladimir Nesov’s poll. Do anyone’s anticipations differ, here?

Regarding claim (2):
I’d a priori expect better effects from attempts to really practice rationality, and to integrate its thinking skills into one’s bones, than from enjoying chatting about rationality from time to time. A community that reads articles about skateboarding, and discusses skateboarding, will probably still fall over when they try to skateboard twenty feet unless they’ve also actually spent time on skateboards.

As to the empirical data: who here has in fact practiced (2) (e.g., has tried to integrate x-rationality into their actual practical decision-making, as in Yvain’s experiment/technique, or has used x-rationality to make major life decisions, or has spent time listing out their strengths and weaknesses as a rationalist with specific thinking habits that they really work to integrate in different weeks, or etc.)? This is a real question; I’d love data. Eliezer is an obvious example; Yvain cites the impressiveness of Eliezer’s 2001 writings as counter-evidence (and it is some counter-evidence), but: (1) Eliezer, in 2001, had already spent a lot of time learning rationality (though without the heuristics and biases literature); and (2) Eliezer was at that time busy with a course of action that, as he now understands things, would have tended to destroy the world rather than to save it. Due to insufficient rationality, apparently.

I’ve practiced a fair amount of (2), but much less than I could imagine some practicing; and, as I noted in the comment Yvain cited, it seems to have done me some good. Broadly similar results for the handful of others I know who try to get rationality into their bones. Less impressive than I’d like, but I tend to interpret this as a sign we should spend more time on skateboards, and I anticipate that we’ll see more real improvement as we do.

The most important actual helps involve that topic we’re not supposed to discuss here until May, but I’d say we were able to choose a much higher-impact way to help the world than people without x-rationality standardly choose, and that we’re able to actually think usefully about a subject where most conversations degenerate into storytelling, availability heuristics, attaching overmuch weight to specific conjunctions, etc. Which, if there’s any non-negligible chance we’re right, is immensely practical. But we’re also somewhat better at strategicness about actually exercising, about using social interaction patterns that work better than the ones we were accidentally using previously (though far from as well as the ones the best people use), about choosing college or career tracks that have better expected results, etc.

Folks with more data here (positive or negative), please share.

Regarding claim (1):
I guess I wouldn’t be surprised by anything from “massive practical help, at least from particular skilled/lucky dojos that get on good tracks” to “not much help at all”. But if we do get “not much help at all”, I’ll feel like there was a thing we could have done, and we didn’t manage to do it. There are loads of ridiculously stupid kinds of decision-making that most people do, and it would be strange if there were no way we could get visible practical benefit from improving on that. Details in later comments.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-04-09T12:27:38.226Z · LW(p) · GW(p)

I agree with almost everything here, with the following caveats:

I. The practical benefits we get from (3) are (I think I'm agreeing with you here) likely to be so small as to be difficult to measure informally; i.e. anyone who claims to have noticed a specific improvement is as likely to be imagining it as really improving. Probably some effects that could be measured in a formal experiment with a very large sample size, but this is not what we have been doing.

II. (2) shows promise but is not something I see discussed very often on Overcoming Bias or Less Wrong. Using the Boyle metaphor, this would be the technology of rationality, as opposed to the science of it. I've seen a few suggestions for "techniques", but they seem sort of ad hoc (I will admit, in retrospect, that many of the times I was proposing 'techniques' were more of an attempt to sound like I was thinking pragmatically, than soundly based on good experimental evidence). I've tried to apply specific methods to specific decisions, but never gone so far as to set aside a half hour each day to "rationality practice", nor would I really know what to do with that half hour if I did. I'd like to know more about what you do and what you think has helped.

III. You list a greater appreciation of transhumanism as one of the benefits of x-rationality, but the causal linkage doesn't impress me. Many of the transhumanists here were transhumanists before they were rationalists, and only came to Overcoming Bias out of interest in reading what transhumanist leaders Eliezer and Robin had to say. I think my "conversion" to transhumanism came about mostly because I started meeting so many extremely intelligent transhumanists that it no longer seemed like a fringe crazy-person belief and my mind felt free to judge it with the algorithms it uses for normal scientific theories rather than the algorithms it uses for random Internet crackpottery. Many other OB readers came to transhumanism just because EY and RH explicitly argued for it and did a good job. Still others probably felt pressure to "convert" as an in-group identification thing. And finally, I think transhumanists and x-rationalists are part of that big atheist/libertarian/sci-fi/et cetera personspace cluster Eliezer's been talking about: we all had a natural vulnerability to that meme before ever arriving here. AFAIK Kahneman and Tversky are not transhumanists, Aumann certainly isn't, and I would be surprised if x-rationalists not associated with EY and RH and our group come to transhumanism in numbers greater than their personspace cluster membership predicts.

IV. Given fifty years to improve the Art, I also wouldn't be surprised with anything from "massive practical help" to "not much help at all". I don't know exactly what you mean by "ridiculously stupid decision-making that most people do", but are you sure it's something that should be solved with x-rationality as opposed to normal rationality?

Replies from: AnnaSalamon, AnnaSalamon
comment by AnnaSalamon · 2009-04-09T12:52:51.424Z · LW(p) · GW(p)

I don't know exactly what you mean by "ridiculously stupid decision-making that most people do", but are you sure it's something that should be solved with x-rationality as opposed to normal rationality?

I'm sure it's something that could be helped with techniques like The Bottom Line, which most intelligent, science-literate, trying to be “rational” people mostly don’t do nearly enough of. Also something that could be helped by paying attention to which thinking techniques lead to what kinds of results, and learning the better ones. Dojos could totally teach these practices, and help their students actually incorporate them into their day-to-day, reflexive decision-making (at least more than most "intelligent, science-literate" people do now; most people hardly try at all). As to heuristics and biases, and probability theory... I do find those helpful. Essential for thinking usefully about existential risk; helpful but non-essential for day to day inference, according to my mental but not written (I’ve been keeping a written record lately, but not for long enough, and not systematically enough) observations. The probability theory in particular may be hard to teach to people who don’t easily think about math, though not impossible. But I don’t think building an art of rationality needs to be solely about the heuristics and biases literature. Certainly much of the rationality improvement I’ve gotten from OB/LW isn’t that.

comment by AnnaSalamon · 2009-04-09T14:13:52.107Z · LW(p) · GW(p)

You list a greater appreciation of transhumanism as one of the benefits of x-rationality, but the causal linkage doesn't impress me.

The benefit I’m trying to list isn’t “greater appreciation of transhumanism” so much as “directing one’s efforts to ‘make the world a better place’ in directions that actually do efficiently make the world a better place”.

As to the evidence and its significance:

Even if we skip transhumanism, and look fully outside the Eliezer/Robin/Vassar orbit, folks like Holden Karnofsky of Givewell are impressive, both in terms of ability to actually analyze the world, and in terms of positive impact. You might say it’s just traditional rationality Holden is using -- certainly he didn’t get it from Eliezer -- but it’s beyond the level common among “intelligent, science-literate people” (who mostly donate their money in much less effective ways).

Within transhumanism... I agree that the existing correlation between transhumanism and rationality-emphasis will tend to create future correlation, whether or not rationality helps one see merits in transhumanism. And that’s an important point. But it’s also bizarrely statistically significant that when people show up and say they want to spend their lives reducing AI risks, they’re often people who spent unusual effort successfully becoming better thinkers before they ever heard of Eliezer or Robin, or met anyone else working on this stuff. It’s true that maybe we’re just recognizing “oh, someone who cares about actually getting things right, that means I can relax and believe them” (or, worse, “oh, someone with my brand of tennis shoes, let me join the in-group”). But...

  1. Recognizing that someone else has good epistemic standards and can be believed is rationality working, even without independently deriving the same conclusions (though under the tennis shoe interpretation, not so much);
  2. Many of us (independently, before reading or being in contact with anyone in this orbit) said we were looking for the most efficient use of some time/money, and it’s probably not an accident that trying to become a good thinker, and asking what use of time/money will actually help the world, tend to correlate, and tend to lead to modes of action that actually do help the world.
comment by AspiringKnitter · 2012-01-17T07:41:50.925Z · LW(p) · GW(p)

By "decision", I don't mean the decision to get up in the morning, I mean the sort that's made on a conscious level and requires at least a few seconds' serious thought.

Consider yourself lucky if that doesn't describe getting up in the morning for you.

Anyway, not that this counts at all (availability bias), but I made a rational decision a couple of days ago to get some sleep instead of working later into the night on homework. I did exactly that.

In fact, I just made a rational decision-- just now-- to quit reading the article I was reading, work on homework for a few minutes and then go to bed. I haven't gotten to bed yet. Otherwise, that's going well.

Replies from: None, AspiringKnitter
comment by [deleted] · 2012-01-18T04:36:54.868Z · LW(p) · GW(p)

Consider yourself lucky if that doesn't describe getting up in the morning for you.

Can you rig your mornings so that staying in bed just doesn't work? I use two alarm clocks, one set for two minutes after the other; the one that goes off two minutes later is out of arm's reach, so I have to either get out of bed, or sleep through it.

Replies from: AspiringKnitter
comment by AspiringKnitter · 2012-01-18T07:32:46.210Z · LW(p) · GW(p)

Not really worth it, but thanks. :) My current strategy is just to wait a few minutes, which essentially always does the trick unless I'm totally exhausted and need more sleep. I appreciate the thought, though.

comment by AspiringKnitter · 2012-01-17T19:55:03.095Z · LW(p) · GW(p)

I should point out that while the rational choice to go to bed a couple days ago worked out well, the last one failed because I got drawn into housework I could never have predicted I'd have to help with (I thought it was already done).

comment by Desrtopa · 2011-01-23T21:55:39.833Z · LW(p) · GW(p)

...but you will disagree with me. And we are both aspiring rationalists, and therefore we resolve disagreements by experiments. I propose one.

I'm surprised you expected most of your readers to disagree. I think it's pretty clear that the techniques we work on here aren't making us much more successful than most people.

Humans aren't naturally well equipped to be extreme rationalists. The techniques themselves may be correct, but that doesn't mean we can realistically expect many people to apply them. To use the rationality-as-martial-art metaphor, if you taught Shaolin kung fu to a population of fifty-year-old couch potatoes, they would not be able to perform most of the techniques correctly, and you should not expect to hear many true accounts of them winning fights with their skills.

Perhaps with enough work we could refine the art of human instrumental rationality into something much better than what we've got, maybe achieve a .3 correlation with success rather than a .1, but while a fighting style developed explicitly for 50-year-old couch potatoes might give your class better results than other styles, you can only expect so much out of it.

Replies from: Sailor Vulcan
comment by Sailor Vulcan · 2018-10-09T18:18:04.967Z · LW(p) · GW(p)

This. If Less Wrong had been introduced to an audience of self-improvement health buffs and business people instead of nerdy booksmart Harry Potter fans, things would have been drastically different. It is possible to become more effective at optimizing for other goals besides just truth. People here seem to naively assume that as long as they have enough sufficiently accurate information, everything else will simply fall into place and they'll do everything else right automatically, without needing to really practice or develop any other skills. I will be speaking more on this later.

Replies from: dmitrii-zelenskii
comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2021-07-16T18:42:55.271Z · LW(p) · GW(p)

I would replace "introduced" with "sold" or "made interesting" here. It's not enough to introduce a group of people to something - unless their values are already in sync with said something's _appearance_ (and the appearance, aka elevator pitch, aka hook, is really important here), you would need to apply some marketing/Dark Arts/rhetorics/whatever-you-call-it to persuade them it's worth it. And, for all claims of "Rationalists should win", Yudkowsky2008 was too much of a rhetorics-hater (really, not noticing his own pattern of having the good teachers of Defence against the Dark Arts in Hogwarts themselves practicing Dark Arts (or, in the case of Lupin, *being* Dark Arts)?) to perform that marketing, and thus the blog went on to attract people who already shared the values - nerdy booksmarts (note that a) to the best of my knowledge, HPMoR postdates the Sequences; b) Harry Potter isn't exactly a booksmart-choosing fandom, as is shown by many factors, including the gross proportion of "watched-the-films-never-read-the-books" fans against readers AND people who imagine Draco Malfoy to be a refined aristocrat whose behavior is, though not nice, perfectly calibrated, instead of the petty bully we see in both books and films AND - I should stop here before I go on a tangent; so I am not certain how much "Harry Potter fans" is relevant).

comment by PhilGoetz · 2009-04-10T03:43:21.623Z · LW(p) · GW(p)

Sometimes, people do worse when they try to be rational because they have a poor model of rationality.

One error I commonly see is the belief that rationality means using logic, and that logic means not believing things unless they are proven. So someone tries to be "rational" by demanding proof of X before changing their behavior, even in a case where neither priors nor utilities favor not X. The untrained person may be doing something as naive as argument-counting (how many arguments in favor of X vs. not X), and is still likely to come out ahead of the person who requires proof.

A related error is using Boolean models where they are inappropriate. The most common error of this type is believing that a phenomenon, or a class of phenomena, can have only one explanation.

comment by gaffa · 2009-04-09T13:51:36.798Z · LW(p) · GW(p)

Am I the only one who isn't entirely positive towards the heavy use of language identifying the LW community as "rationalists", including terms like "rationalist training" etc.? (Though he is by far the heaviest user of this kind of language, I'm not really talking about Eliezer here; his language use is a whole topic on its own - I'm restricting this particular concern to other people, to the general LW non-Eliezer jargon). Is strongly self-identifying as a "rationalist" really such a good thing? Does it really help you solve problems? (I second the questions raised by Yvain). Though perhaps small, isn't there still a risk that the focus becomes too much on "being a rationalist" instead of on actually solving problems?

Of course, this is a blog about rationality and not about specific problems, so this kind of language is not surprising and sometimes might even be necessary. I'm just a bit hesitant towards it when the community hasn't actually shown that it's better at solving problems than people who don't self-identify as rationalists and haven't had "rationalist training", or shown that the techniques fostered here have such a high cross-domain applicability as seems to be assumed. Maybe after it has been shown that "rationalists" do better than other people, people who just solve problems, I would feel better about this kind of jargon.

Replies from: CarlShulman, DanielLC
comment by CarlShulman · 2009-04-09T15:51:34.724Z · LW(p) · GW(p)

I find it much more tolerable when 'aspiring' is added.

comment by DanielLC · 2012-01-14T20:45:14.459Z · LW(p) · GW(p)

I define "rationalist" to be "someone who tries to become more rational". I'm fine with calling this a community of rationalists. I don't like it when people use "rationalist" to refer exclusively to members of this community.

comment by tjohnson314 · 2015-05-08T00:04:43.165Z · LW(p) · GW(p)

Here's one example of a change I've made recently, which I think qualifies as x-rationality. When I need to make a decision that depends on a particular piece of data, I now commit to a decision threshold before I look at the data. (I feel like I took this strategy from a LW article, but I don't remember where now.)

For example, I recently had to decide whether it would be worth the potential savings in time and money to commute by motorcycle instead of by car. I set a threshold for what I considered an appropriate level of risk beforehand, and then looked up the accident statistics. The actual risk turned out to be several times larger than that.

Had I looked at the data first, I would have been tempted to find an excuse to go with my gut anyway, which simply says that motorcycles are cool. (I'm a 23-year-old guy, after all.) A high percentage of motorcyclists experience a serious or even fatal accident, so there's a decent chance that x-rationality saved me from that.
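
The structure of the decision, with purely illustrative numbers standing in for the ones I actually used, was roughly:

```python
# Sketch of the precommitment, with placeholder numbers (not the real figures).

# Step 1: commit to a decision rule BEFORE looking at the data.
max_acceptable_risk = 50   # e.g. fatalities per billion vehicle-miles (hypothetical threshold)

# Step 2: only then look up the statistic.
observed_risk = 212        # hypothetical looked-up value

# Step 3: the decision follows mechanically, leaving no room to rationalize afterwards.
if observed_risk <= max_acceptable_risk:
    print("Commute by motorcycle.")
else:
    print("Stick with the car.")
```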

Replies from: Fossegrimen
comment by Fossegrimen · 2015-05-08T07:28:23.120Z · LW(p) · GW(p)

Huh.

I did the same thing and came to the exact opposite conclusion and have been commuting by two-wheeler for 15 years now.

What swayed me was:

A huge proportion of the accidents involved really excessive speed.

A similarly huge proportion happened to untrained motorcyclists.

So: If I don't speed (much) and take the time to practice regularly on a track, preferably with an instructor, I have eliminated just about all the serious accidents. In actuality I have had zero accidents outside the track, and the "accidents" on the track have been to deliberately test the limits of myself and the bike (and on a bike designed to take slides without permanent damage).
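
The arithmetic behind "eliminated just about all the serious accidents" looks roughly like this - the proportions are placeholders for the shape of the reasoning, not the actual figures:

```python
# Rough shape of the risk-factor reasoning, with entirely hypothetical proportions.

baseline_relative_risk = 1.0    # an average rider, normalized to 1
share_excessive_speed = 0.5     # hypothetical: half of serious accidents involve it
share_untrained_of_rest = 0.4   # hypothetical: of the remainder, this share involves untrained riders

# If you credibly remove yourself from both categories:
residual = baseline_relative_risk * (1 - share_excessive_speed) * (1 - share_untrained_of_rest)
print(f"residual relative risk: {residual:.2f}")  # 0.30 of the average rider's risk

# The conclusion only holds to the extent that those shares really are large and
# that you really do stay out of both categories - both are assumptions here.
```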

The cash savings are higher in Europe due to taxes on fuel and vehicles, and the size of the bike is more appreciated in cities that were designed in the Middle Ages, so the upside is larger too, but it seems that we don't have anything like the same risk tolerance.

edit: also it is possible that motorcycling is a lot safer in Europe than the US? assuming you are from the US ofc.

Replies from: tjohnson314
comment by tjohnson314 · 2015-05-14T17:42:48.056Z · LW(p) · GW(p)

I'm from California, where it's legal to split lanes. Most places don't allow that.

I could just decide not to, but the ability to skip traffic that way is probably the single largest benefit of having a motorcycle.

Replies from: Fossegrimen
comment by Fossegrimen · 2015-05-28T07:51:38.645Z · LW(p) · GW(p)

Most states don't allow that, but in Europe it's standard practice. I probably wouldn't bother with the bike if I couldn't.

comment by AnnaSalamon · 2009-04-09T13:10:20.092Z · LW(p) · GW(p)

This experiment seems easy to rig4; merely doing it should increase your level of conscious rational decisions quite a bit. And yet I have been trying it for the past few days, and the results have not been pretty. .... [O]ne way to fail your Art is to expect more of it than it can deliver.... Perhaps there are developments of the Art of Rationality or its associated Arts that can turn us into a Kellhus or a Galt, but they will not be reached by trying to overcome biases really really hard.

To make a somewhat uncharitable paraphrase: you read many articles on rationality, did not actually use them to change the way you make decisions, and found that the rationality hasn’t changed the results of your decisions much. You conclude, not that you aren’t practicing rationality, but that rationality can’t deliver practical goods at all, at least not as taught.

I agree we need practices for better incorporating OB/LW/new techniques of rationality into our actual practice of inference and decision-making. But it seems like the “and I’m not actually using this stuff much” result of your experiment should prevent “it hasn’t made my life much better” from telling you all that much about whether the OB/LW inference or decision-making techniques, if one does practice them, could make one’s life better.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-04-09T20:56:26.571Z · LW(p) · GW(p)

I accept that to some degree my results say more negative things about me than about rationality, but insofar as I may be typical we need to take them into account when considering how we're going to benefit from rationality.

...my inability to communicate clearly continues to be the bane of my existence. Let me try a strained metaphor.

Christianity demands its adherents "love thy enemy", "turn the other cheek", "judge not lest ye be judged", "give everything to the poor", and follow many other pieces of excellent moral advice. Any society that actually followed them all would be a very nice place to live.

Yet real-world Christian societies are not such nice places to live. And Christians say this is not because there is anything wrong with Christianity, but because Christians don't follow their religion enough. As the old saying goes, "Christianity has not been tried and found wanting, it has been found difficult and left untried." There's some truth to this.

But it doesn't excuse Christianity's failure to make people especially moral. If Christianity as it really exists can't translate its ideals into action, then it's gone wrong somewhere. At some point you have to go from "Christianity is perfect but most people can't apply it" to "Christianity is flawed because most people can't apply it."

The Christians' problem isn't that there aren't enough Christians. And it's not that Christians aren't devout and zealous enough. And it's not even that Christians don't understand what their faith expects of them. Their problem is that the impulse to love thy neighbor gets lost somewhere between theory and action. My urge as an outsider is to blame it on a combination of akrasia, lack of sufficient self-consciousness, and people who accept Christianity 100% on the conscious level but don't "feel it in their bones".

If I were a theologian, I would be recommending to my fellow Christians one of two things:

First, that they spend a whole lot less time in Bible study than they do right now, because they already know a whole lot more Bible than they actually use, and teaching them more Bible isn't going to solve that problem. Instead they need to be spending that time thinking of ways to solve their problem with applying the Bible to real life.

Or second, that they stop talking about how moral Christianity makes them and how a Christian society will always be a moral society and so on and so it's beneficial that everyone learn Christianity, and just admit that Christians probably aren't that much more moral than atheists and that they're in it because they like religion. In that case they could go on talking about the Bible to their hearts' content.

Now, to some degree, we can blame individual Christians for the failure of Christianity to transform morality for the better. But we also have to wonder if maybe it's not even addressing the real problem, which is less of a lack of moral ideals than a basic human inability to translate moral ideals into action.

Right now I find myself in the same situation as a devout Christian who really wants to be good, but has noticed that studying lots of Bible verses doesn't help him. Less Wrong has thus far seemed to me like a Bible study group where we all get together and talk about how with all this Bible studying we'll all be frickin saints soon. Eliezer's community-building posts seem like Catholics and Episcopalians arguing on the best way to structure the clergy. It's all very interesting, but...

...but I feel like there is insufficient appreciation that the Art of Knowing the Bible and the Art of Becoming a Saint are two very different arts, that we haven't really begun developing the second, and that religion has a bad track record of generating saints anyway.

Your objection sounds too much like saying that since I'm not a saint yet, I must simply not be applying my Bible study right. Which is in one sense true, but centuries of Christians telling each other that hasn't created any more saints. So people need to either create an Art of Becoming A Saint worthy of the name, or stop insisting that we will soon be able to create saints on demand.

Replies from: roland, MBlume, army1987
comment by roland · 2009-04-09T22:52:16.364Z · LW(p) · GW(p)

but because Christians don't follow their religion enough.

Well, as a former Christian (now an atheist thanks to OB/Yudkowsky) I have to disagree. Christianity doesn't work regardless of whether you live by it or not. I don't claim that I lived 100% as expected, but I implemented some things quite literally, like "turn the other cheek" (btw, taking this literally is a misinterpretation of the real meaning http://en.wikipedia.org/wiki/Turn_the_other_cheek#Figurative_interpretation). I can say: it's nonsense, it doesn't work, it only makes other people take advantage of you, and yes, I'm talking from experience.

Replies from: None
comment by [deleted] · 2012-01-18T04:30:35.167Z · LW(p) · GW(p)

"Turn the other cheek" is a phrase with a natural figurative meaning—"expose yourself to further aggression". Are you saying that this figurative meaning should itself be taken figuratively, or just that "turn the other cheek" should not be interpreted literally literally?

Replies from: roland
comment by roland · 2012-01-18T18:04:35.394Z · LW(p) · GW(p)

Here is the whole passage:

Matthew 5:39

But I tell you, Do not resist an evil person. If someone strikes you on the right cheek, turn to him the other also.

"Turn the other cheek" can only be understood if you know the cultural context of the time which goes as follows:

The left hand was considered unclean, so people used the right hand, and for a person to strike your right cheek with his right hand implies that he is giving you a backhand slap. This was understood as a humiliating gesture that a higher-ranking person would dish out to someone lower in status, e.g. a master to his servant. Now, if you received such a slap and proceeded to offer the other cheek, you would put the higher-ranking person in a conundrum. He can no longer reach your right cheek with a backhand slap; the only option he has left is attacking you on the left cheek. But attacking on the left didn't have the same social connotation; it would probably just be interpreted as plain aggressive behavior, implying that the higher-ranking person is acknowledging you as a social equal and also giving you the right to fight back.

The same logic is also present in "walking another mile" and "leaving the undergarment" (which are part of the same biblical passage).

So we can see that offering the other cheek puts the other person in check and has nothing to do with "exposing oneself to further aggression" or being meek and humble; it is in fact a gesture of defiance, and a very clever one.

Replies from: lavalamp, Bugmaster
comment by lavalamp · 2012-01-23T20:06:42.368Z · LW(p) · GW(p)

Former christian here. Every once in a while, I catch myself about to--or worse, in the middle of--recounting an explanation like the one you just gave for which I have no evidence other than some pastor's word. On more than one of those occasions, the recalled explanation was just wrong. I haven't googled your explanation here, so it's possible that there's lots of evidence for it, but my prior for that is fairly low (it seems like a really specific piece of cultural information, and it pattern matches against "story that reinterprets well known biblical passage in a way that makes the inconvenient and obvious interpretation incorrect").

I'm incredibly pessimistic about the abilities of the average christian pastor at weighing the evidence for multiple competing historical hypotheses and coming up with the most correct answer (it's basically their job to be bad at this). I know that reversed stupidity is not intelligence, but as a rule I no longer repeat things I "learned" in a church setting unless I've independently verified it.

(Oh, and: my apologies if you came by that story via a more rigorous process.)

Replies from: Caspian, wedrifid, roland
comment by Caspian · 2012-01-24T01:35:05.859Z · LW(p) · GW(p)

I was interested enough to google, and found some relevant links.

http://en.wikipedia.org/wiki/Turning_the_other_cheek has (unlinked, presumably offline) references for an explanation like that.

http://www.ekklesia.co.uk/node/9385 has more of the argument and says "resist not evil" is a biased or incorrect translation invented by King James' bible translators.

From the above page (by Walter Wink): "Jesus did not tell his oppressed hearers not to resist evil. His entire ministry is at odds with such a preposterous idea." - I had noticed that a lot of his behaviour described in the bible was inconsistent with this doctrine. He makes more sense without it.

Replies from: JoshuaZ, lavalamp
comment by JoshuaZ · 2012-02-03T05:02:33.419Z · LW(p) · GW(p)

http://www.ekklesia.co.uk/node/9385 has more of the argument and says "resist not evil" is a biased or incorrect translation invented by King James' bible translators.

This seems strange. I don't know Greek, so I can't look at the text closest to the original, but I can read some Latin. So I looked at the Vulgate, which a) is Catholic and b) predates the KJV by many centuries. It uses the phrase "non resistere malo", which means something like "don't resist the bad" but might be closer to "don't fight bad things".

comment by lavalamp · 2012-01-24T03:31:27.018Z · LW(p) · GW(p)

Alright, wikipedia has better evidence than I expected, although I'm also not going to read the referenced book.

Wink's piece is coherent and well-put, but doesn't seem like great evidence -- I cannot tell if he mentally wrote his conclusion before or after making those arguments, and I can't tell which elements are actual features of ANE (ancient Near Eastern) culture identified by historians and which are things that just sounded reasonable to him.

comment by wedrifid · 2012-01-24T02:42:05.272Z · LW(p) · GW(p)

I'm incredibly pessimistic about the abilities of the average christian pastor at weighing the evidence for multiple competing historical hypotheses and coming up with the most correct answer (it's basically their job to be bad at this).

There are specific things that pastors are required to be wrong about, yet when it comes to adding mere details for the sake of little more than curiosity, there is little reason to believe they would be worse than average. For the most part, of course, they will simply be teaching what they were taught at theological college - the evidence-weighing is done by others. This is how most people operate.

Replies from: lavalamp
comment by lavalamp · 2012-01-24T03:02:04.239Z · LW(p) · GW(p)

What you say is true for competent pastors. I've probably been exposed to more than my fair share of the incompetent ones.

...I noticed a long time before I deconverted that when pastors said something about a subject I knew something about, they were totally wrong some ridiculously high percentage of the time. Should have tipped me off.

Replies from: wedrifid
comment by wedrifid · 2012-01-24T04:12:00.412Z · LW(p) · GW(p)

What you say is true for competent pastors. I've probably been exposed to more than my fair share of the incompetent ones.

I've been fortunate inasmuch as several of my pastors and most of my lay preachers had science degrees. Mind you, I suspect I've selected out most of the bad ones, since I do recall I used to spend time with my family absolutely bagging the crap out of those preachers who said silly things.

comment by roland · 2012-01-23T23:36:52.384Z · LW(p) · GW(p)

I didn't learn that in a church setting; I read it on the internet, on a page that claimed it was the result of some scholar's work. What I liked most about the explanation is that it makes sense of the weird examples: cheek slapping (usually men use their fists if they mean to be aggressive) and forcing someone to walk a mile (which makes sense if you assume the Roman occupation context). So it is the best explanation I have heard to date, sigh.

Replies from: lavalamp, taelor
comment by lavalamp · 2012-01-24T03:19:48.915Z · LW(p) · GW(p)

Hm, as Caspian says, it shows up on Wikipedia.

I think I have heard a garbled version of this story before, which probably contributed to my skepticism (which, if you squint just right, makes my prior comment an example of the thing I was protesting).

Anyway, I'll retract the accusatory nature of my prior comment. I'm still pretty skeptical, but I don't care enough to read the book wikipedia references. :)

Replies from: Caspian
comment by Caspian · 2012-01-24T03:42:40.414Z · LW(p) · GW(p)

I noticed after posting that roland had linked to the same Wikipedia page I did, with nearly the same URL, in his earlier comment http://lesswrong.com/lw/9p/extreme_rationality_its_not_that_great/6gc

Looks like we both missed it.

Replies from: lavalamp
comment by lavalamp · 2012-01-24T03:53:04.820Z · LW(p) · GW(p)

Huh. I recall reading the rest of that comment. Joke's on me, I guess.

comment by taelor · 2012-01-24T01:22:33.365Z · LW(p) · GW(p)

I encountered an identical explanation on the History Channel a decade ago (this was back when the History Channel was actually about history beyond Nostradamus and Hitler).

comment by Bugmaster · 2012-01-18T18:13:50.713Z · LW(p) · GW(p)

This explanation is neat, but it sounds quite contrived to me, especially since the previous sentence clearly says, "do not resist an evil person". Is there any reason to believe that your interpretation is the one that the writers of the Bible originally intended?

Replies from: roland
comment by roland · 2012-01-18T20:55:19.463Z · LW(p) · GW(p)

Writers of the Bible? Who wrote the Bible? It is a collection of folklore that at first was transmitted orally, and one day some people started writing it all down. The people who wrote it down were not necessarily the originators or even the first witnesses of the stories. As always, different people will try to extract different teachings from the same stories. Maybe there was originally the parable of the cheek, and later someone added "do not resist an evil person", trying to make a general teaching out of it and disregarding or not knowing the original context.

To really find out, you would have to go back to the origin of the whole thing and understand what cultural context was present at that time. That there is a lot of confusion nowadays is an indicator that a lot of the context got lost.

Did anyone ever force you to go a mile with them? Isn't it weird that such a thing is in the Bible? It is, until you understand that there was a Roman occupation and that soldiers had the right to demand you carry their pack for a mile (but not more; a soldier could be punished if he forced you to go further than that, hence the second mile thing).

Replies from: Bugmaster
comment by Bugmaster · 2012-01-20T00:19:29.925Z · LW(p) · GW(p)

Writers of the Bible? Who wrote the Bible? It is a collection of folklore that at first was transmitted orally, and one day some people started writing it all down.

Sure, that's true, but:

To really find out, you would have to go back to the origin of the whole thing and understand what cultural context was present at that time.

I agree with you there. I kind of assumed that you have already accomplished this task, though, since you are pretty confident about your interpretation of the "other cheek" concept. All I was asking for is some evidence that your interpretation is the more correct one. I agree that it sounds neat, but that's not enough; you also need to show that this was the passage's original, intended meaning. Same thing goes for miles and undergarments.

Replies from: roland, roland
comment by roland · 2012-01-21T17:40:53.210Z · LW(p) · GW(p)

I agree that it sounds neat, but that's not enough; you also need to show that this was the passage's original, intended meaning.

How would you accomplish this?

Replies from: Bugmaster
comment by Bugmaster · 2012-01-22T00:08:45.787Z · LW(p) · GW(p)

I'm not a historian, so I don't really know. But, naively, I'd try to find some historical evidence that the "slapping customs" you describe actually existed and were widely followed, and that someone actually took Jesus's advice and implemented it successfully. I would do so by looking through sources other than the Bible, such as works of fiction, historical documents, paintings and sculptures, etc. I could also try tracing some oral folklore backwards through time, to see if it converges with the other sources.

comment by roland · 2012-01-21T17:39:41.843Z · LW(p) · GW(p)

though, since you are pretty confident about your interpretation of the "other cheek" concept.

It is the explanation that makes the most sense to me, but that doesn't mean it is the correct one. The mile thing only makes sense in a context where people actually force you to go a mile with them; thus the Roman law explanation sounds plausible. Again, that doesn't mean this is the correct one.

Replies from: Bugmaster
comment by Bugmaster · 2012-01-22T00:10:39.912Z · LW(p) · GW(p)

It is the explanation that makes the most sense to me, but that doesn't mean it is the correct one.

Ok, in this case, your explanation is nothing more than a "just so" story. I could make up my own story and it would be just as valid (which is to say, still pretty arbitrary). And yet, you stated your own explanation as though it were fact. That's confusing, at best.

Replies from: roland
comment by roland · 2012-01-23T18:07:29.415Z · LW(p) · GW(p)

Everything I write is of course my own opinion; the same goes for whatever you read in any history book, and even in physics. Newton stated his laws of physics as facts, yet in hindsight we know that they were only approximations. I'm not going to precede every posting of mine with the disclaimer "The following is only my opinion." Btw, the explanation I gave you wasn't my own; I read it somewhere on the internet, and I think it was the result of some scholarly study.

Replies from: Vladimir_Nesov, Caspian, Bugmaster
comment by Vladimir_Nesov · 2012-01-23T19:03:34.555Z · LW(p) · GW(p)

Everything I write is of course my own opinion; the same goes for whatever you read in any history book, and even in physics.

Warning: Fallacy of gray detected.

The difference is in the ability to infer facts about the world from assertions about facts about the world. Assertions differ greatly in their convincing power. An argument is made strong either by being explanatory (drawing your attention to a way of reaching your own conclusions), or by having its own truth and relevance as a powerful explanation for its having been made; for a weak argument, other reasons for its having been made are not ruled out.

comment by Caspian · 2012-01-24T04:42:51.733Z · LW(p) · GW(p)

Of course you shouldn't claim "the following is only my opinion" for all your posts, or for an explanation you read on the Internet that you think was the result of some scholarly study. If you did, you would need to precede it with a meta-disclaimer: "the following disclaimer is a routine disclaimer I put on my post without regard to whether it is accurate".

I have seen some people who put routine disclaimers on their posts without regard to whether they were needed, and it was annoying (definitely to me, but also, I believe without proof, to a lot of others).

Something like "I read this explanation somewhere, which makes much more sense than the usual one" seems appropriate.

comment by Bugmaster · 2012-01-23T22:18:20.208Z · LW(p) · GW(p)

As the saying goes, you are entitled to your own opinions, but not your own facts; and you stated your cheek-slapping hypothesis as though it was a fact. The difference is important; it's the difference between saying, "It would be neat if pigs could fly", and saying, "pigs can fly".

Replies from: roland
comment by roland · 2012-01-23T23:41:08.434Z · LW(p) · GW(p)

I don't want to precede every posting of mine with a disclaimer, please:

http://www.overcomingbias.com/2008/06/against-disclai.html

Replies from: JoshuaZ, Bugmaster
comment by JoshuaZ · 2012-02-03T05:04:15.991Z · LW(p) · GW(p)

Some posts are more factually based than others. Some facts are uncontroversial and others are not. Vladimir Nesov noted the fallacy of gray in your other comment about how everything one says is opinion. This seems to be a similar situation.

comment by Bugmaster · 2012-01-24T00:26:11.550Z · LW(p) · GW(p)

That's fine, but in this case, you should avoid saying things like "pigs can fly" when you mean something like "flying pigs would be neat". The second statement is entirely non-controversial, whereas the first statement is practically a challenge. If we were discussing animal husbandry, and I told you, "actually, since pigs fly, you should always cover your pig-pens with a wire mesh, because otherwise the flying pigs will all fly away", your response would probably consist of, "WTF, flying pigs? Prove it". And you'd be right to demand proof.

To be more specific, instead of saying,

"Turn the other cheek" can only be understood if you know the cultural context of the time which goes as follows...

You could have said something like

"I don't really know what the cultural context of the time really was, but I like to imagine it went as follows..."

This way, it's reasonably clear that we're talking about fantasy, not fact, and there won't be any confusion.

Replies from: Caspian
comment by Caspian · 2012-01-24T05:25:49.057Z · LW(p) · GW(p)

We don't know whether it's a fantasy / just-so story or not. That depends on whether the originator of the explanation made it up, or based it on plenty of evidence, or something in between. I'm glad someone questioned it though, so I know not to assume it's as certain as if someone on Less Wrong stated something they have direct knowledge of.

comment by MBlume · 2009-04-09T21:05:29.547Z · LW(p) · GW(p)

Christianity has not been tried and found wanting, it has been found difficult and left untried.

HT G.K. Chesterton

(I was sure it would be Lewis, so I'm glad I decided to Google anyway)

comment by A1987dM (army1987) · 2012-01-21T18:47:20.771Z · LW(p) · GW(p)

On the other hand, I once read that certain influences of religion are found across societies even among non-explicitly-religious people, e.g. people from historically-predominantly-Catholic regions are usually more likely to turn a blind eye to minor rule violations, or people from historically-predominantly-Calvinist regions are usually more likely to actively seek economic success (whether they self-identify as Catholic/Calvinist or not). And my experience (of having lived almost all my life in Italy, but having studied one year in Ireland among lots of foreigners) doesn't disconfirm this.

comment by Roko · 2009-04-09T06:49:58.215Z · LW(p) · GW(p)

Only a handful responded

I am reserving my judgment for a couple of years. See how I'm doing then.

Replies from: badger
comment by badger · 2009-04-09T22:45:13.920Z · LW(p) · GW(p)

I'm of the same opinion.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-09T11:54:06.504Z · LW(p) · GW(p)

Would Newton have gone even further if he'd known Bayes theory? Probably it would've been like telling the world pool champion to try using more calculus in his shots: not a pretty sight.

An interesting choice of example, given that Bayesian probability theory as we know it (inverse inference) was more or less invented by Laplace and used to address specific astronomical controversies surrounding the introduction of Newton's Laws, having to do with combining multiple uncertain observations.

comment by AlexU · 2009-04-10T13:52:57.403Z · LW(p) · GW(p)

I have yet to hear what anyone even means by "rationalism" or "rationalist," let alone "x-rationality." People often refer to the "techniques" or "Art of rationality" (a particularly irksome phrase), though as best I can tell, these consist of Bayes theorem and a half-dozen or so logical fallacies that were likely known since the time of Aristotle. Now, I've had an intuitive handle on Bayes theorem since learning of it in high school pre-calc, and spotting a logical fallacy isn't particularly tough for anyone accustomed to close reading of philosophy or doing science (or who's studied for the LSAT). So apart from simply calling oneself a "rationalist" and feeling really good about being a part of some "rationalist community" (much like Dennett's tone-deaf coining of the term "brights" to describe atheists), is there actually anything to this?

comment by roland · 2009-04-09T22:26:29.604Z · LW(p) · GW(p)

If I was rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

Rationality is not enough to pick the right stocks. You also need the willpower to read the vast amount of material that enables you to make that pick.

comment by AnnaSalamon · 2009-04-09T11:08:13.751Z · LW(p) · GW(p)

Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to "kicking" to our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts.

Remember your post on haunted rationalists, and Eliezer's reply about how it's possible to successfully work to accept rational beliefs even with the not-so-conscious, not-so-verbal parts of oneself that might continue to believe in ghosts after one rationally understands the arguments against?

It sounds like maybe you mean “rationality” (or “x-rationality”) to include only “conscious processes that one employs to route around natural biases, with one’s verbal centers, on purpose”, while Eliezer is using “rationality” to mean “extra bonus sanity” or “trying to get one’s whole mind, impression-making-systems, decision-making-systems, etc., in good contact with all the evidence and with one’s own real concerns” (e.g., in the manner RichardKennaway describes changing his decision-making). It’s this latter art that I’d like to improve in, at least.

comment by jimrandomh · 2009-04-09T03:27:38.319Z · LW(p) · GW(p)

Extreme rationality is for important decisions, not for choosing your breakfast cereal. Really important decisions - by which I mean those that you'd sleep on, and allocate more than ten minutes of thought - typically coincide with changes in habits and routine, which don't happen more often than once in several months. For more common decisions, we only have time and energy for ordinary rationality.

Replies from: Richard_Kennaway, Yvain, jeronimo196, Furcas
comment by Richard_Kennaway · 2009-04-09T07:05:06.817Z · LW(p) · GW(p)

Practice creates facility. Facility lowers the bar to practice. Repeat. There is no time at which rationality may not be applied, and without practice at small things, how will you apply it to big things?

But besides, isn't it altogether just more fun to think clearly? When I notice myself not doing so, it is as painful as watching a beautiful machine labouring with leaking pipes and rust.

I don't keep fit just to catch trains or eke out a few more years from the meat.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-04-09T07:11:49.620Z · LW(p) · GW(p)

Can you give examples of what your practice looks like?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-04-09T07:35:18.520Z · LW(p) · GW(p)

It begins with noticing, and continues by doing. Just from systematically noticing what you are doing, in any sphere, what you do changes even without making a special effort to change. Yvain mentioned this happening for him in footnote 5.

Once you see, clearly, that there is a choice in front of you, and what it is, it is no more possible to choose what you think is wrong than to believe what you think is false.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-04-09T07:37:58.801Z · LW(p) · GW(p)

This comment is helpful, but if you could include some examples that use concrete nouns, it would be more helpful.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2009-04-09T10:08:41.232Z · LW(p) · GW(p)

Thank you for pressing me for concrete details.

Some of what follows goes way back before OB, which is one of various things I have studied or done -- a major one, but there are others -- on the matter of how to think better. The first, for example, I describe as inside vs. outside view, because that is what it is. The practice goes back longer; OB gave it a name.

I. Getting out of bed in the morning. That may seem a trifle, but there is no time at which rationality does not matter, and an hour a day is more than a trifle. The inside view whispers seductively to just laze on half-awake, or drift off to sleep again. The outside view reminds me that it has been my invariable experience that lazing on does not wake me up, that the only thing that does is getting up and moving around, and that twenty minutes after getting up (my typical boot time for both mind and body) I will be more satisfied with myself, and the sooner I get up, the more satisfied I will be.

The more clearly I can contemplate the outside view, the easier it becomes to make a move. I can't claim expert proficiency in this. I still get up much faster when I have a specific three-alarm-clock reason, the moment the wristwatch pinger goes off.

II. I began taking much better care of my money after I instituted the simple exercise of recording every transaction on a spreadsheet, and estimating all of my expenses month by month out to a year ahead. And this without having to make any particular resolutions to limit my spending on this or that, or to save some fixed amount. I just have to look at my savings account, and other stores of money not to be casually drawn on, to see the difference. Sometimes noticing is all it takes, and the doing takes care of itself.

You cannot fix errors that you do not know you are making. That includes errors that you are looking straight at, without realising that they are errors. (Our chief weapon is noticing. Noticing, and discernment. Our two weapons...)

III. I have learned that whatever the person in front of me is saying, it makes sense to them, no matter how confused or wrong it may seem to me. Even if they are lying, there is still a reason. Therefore, I look for the greatest possible sense and address that, whether I'm dealing with a student, a colleague, someone being wrong on the Internet, or anyone else. Or as it was put, "you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse." The application is wider than fighting.

As I said, my experiences of a lot of this go back way before reading OB. Most of what is said on OB can also be found elsewhere -- a significant part of it is links to elsewhere. But that is only because the truth is constant and discoverable by anyone, so it is unsurprising when it has been. A lot of what is valuable in OB/LW is to draw its material together in a coherent body.

(Edited to defeat the software's too-clever handling of the originally Arabic-numbered paragraphs.)

Replies from: arundelo, ChrisHibbert
comment by arundelo · 2009-04-09T18:34:46.526Z · LW(p) · GW(p)

You can backslash the period to defeat automatic list formatting:

2\. Two

Foo

1\.  One

looks like:

2. Two

Foo

1. One

More details here.

Edited to add: Excellent comment, by the way.

Replies from: MatthewBaker
comment by MatthewBaker · 2011-11-30T02:37:01.077Z · LW(p) · GW(p)

Thank you for the link.

comment by ChrisHibbert · 2009-04-09T18:16:25.099Z · LW(p) · GW(p)

On point 2, I wonder how to generalize this lesson. I can see that many people follow similar practices for tracking their spending, and many of them claim similar benefits. But how would you know where else to apply the technique? Few people claim to do the same thing with their time; why is that different? How would you suggest generalizing this approach? What other arenas might it be applicable in? Or is it only valuable for increasing awareness of expenses?

Replies from: Richard_Kennaway, kluge
comment by Richard_Kennaway · 2009-04-10T10:05:17.959Z · LW(p) · GW(p)

It goes beyond increasing awareness: whatever you increase your attention to, within yourself, almost inevitably changes. It has been suggested that there is a fundamental brain mechanism operating here: reorganisation follows attention.

Claimer: I have known and worked with William Powers (whose work is described in that link) for many years. Often while reading OB or LW I have itched to recommend his works, but have held off for fear of seeming to be touting a personal hobbyhorse. But I really do think he Has Something. (BTW, I did not have any hand in writing the Wiki article.)

Yvain mentioned that looking at his application of rationality is tending to increase it. Steven Barnes recommends the practice of stopping every three hours during the day to meditate for 5 minutes on your major life goals. To-do lists help get things done. Some recommend writing down each day's goals in the morning and reviewing them in the evening. Attention, in fact, is a staple of practically every teaching relating to personal development, whether rationalistic or religious. You cannot change what you are doing until you see what you are doing.

comment by kluge · 2009-04-11T17:30:24.959Z · LW(p) · GW(p)

Few people claim to do the same thing with their time; why is that different?

Actually I've been repeatedly recommended to track my time usage as a means of being aware of wasting it and then improving my time management.

Alas, I haven't yet gotten around to actually trying it.

comment by Scott Alexander (Yvain) · 2009-04-09T03:46:59.589Z · LW(p) · GW(p)

I agree with this, but I also think that our big important decisions probably determine a lot less of our success than we like to think. A very large part of success probably comes from either the sum of our smaller decisions, or from decisions that didn't seem too important at the time but ended up making a very large difference in retrospect. The experiment I mentioned has raised my awareness of this.

I also think the big decisions are the ones it's hardest to apply extreme rationality to, both because the emotional stakes are so high and because by the time we make them we've already made a pile of smaller decisions that have tipped us in one or the other direction. See http://www.overcomingbias.com/2007/10/we-change-our-m.html . I predict not-significantly-different statistics for people who have trained in extreme rationality, though without a very high degree of confidence.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-09T12:20:09.856Z · LW(p) · GW(p)

I also think the big decisions are the ones it's hardest to apply extreme rationality to, both because the emotional stakes are so high and because by the time we make them we've already made a pile of smaller decisions that have tipped us in one or the other direction.

I spend a fair amount of time taking aim directly at this phenomenon, y'know. Summarized in Crisis of Faith.

I predict not-significantly-different statistics for people who have trained in extreme rationality, though without a very high degree of confidence.

Because the technique as described is too hard for mortals to use, or because the technique as described is inadequate?

comment by jeronimo196 · 2020-04-01T19:51:02.056Z · LW(p) · GW(p)

Necroing.

Extreme rationality is for important decisions, not for choosing your breakfast cereal.

Your dietary decisions are supposed to have large and long lasting effects on your health. Take into account the multiple and conflicting opinions on what constitutes a good diet, the difficulty of changing one's mind and habits, and it seems extreme rationality might be just the thing you need for choosing breakfast.

comment by Furcas · 2009-04-09T03:31:01.698Z · LW(p) · GW(p)

Can you give an example of such a decision?

comment by TheOtherDave · 2010-12-05T20:58:55.614Z · LW(p) · GW(p)

Yes, yes, yes, yes, yes. And also yes.

I had a similar reaction to the fictional rationalist initiation ceremony.

That said, on further consideration, I'm not sure the "Bayesian Conspiracy" has a choice, given its goals.

It's possible that, even though these sorts of policies do turn away perfectly competent rationalists, they are the only alternative to ending up with a comfortable community of one-or-two-sigmas-above-the-mean rationalists rather than an ultra-elite x-rationality club that can bootstrap itself into the sort of excellence that we enjoy labelling in Japanese.

This has nothing to do with rationality, extreme or otherwise. For any property P, if you want to maintain a minimum standard of P within a group, you need some way to test for P. If you have a highly reliable test for P that has negligible false positives and few false negatives, great, use that! But lacking one, a test with negligible false positives and lots of false negatives might be good enough, if you can afford to exclude a lot of potential. (Or even a series of mostly independent tests might work well enough, even when no individual test is highly reliable, as long as you exclude anyone who fails any of the tests... which also raises your false-negative rate.)
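To make that trade-off concrete, here is a minimal sketch in Python - my own illustration rather than anything from the comment above, with made-up error rates and the assumption that the tests are fully independent - of how requiring a pass on every one of several unreliable tests shrinks the false-positive rate while inflating the false-negative rate:

# Hypothetical illustration: accept someone only if they pass all k tests.
# The per-test error rates are invented for the example, and the tests are
# assumed to be independent of one another.

def combined_rates(false_positive, false_negative, k):
    # A non-P person slips through only if every single test wrongly passes them.
    combined_fp = false_positive ** k
    # A genuine-P person is excluded if any single test wrongly fails them.
    combined_fn = 1 - (1 - false_negative) ** k
    return combined_fp, combined_fn

for k in (1, 2, 3, 5):
    fp, fn = combined_rates(false_positive=0.2, false_negative=0.3, k=k)
    print(f"{k} test(s): false positives {fp:.3%}, false negatives {fn:.1%}")

With these example numbers, five tests cut false positives from 20% to about 0.03%, but reject over 80% of the genuinely qualified - exactly the "raises your false-negative rate" cost mentioned above.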

So, hey, if the ultra-elite rationality club is meeting somewhere and plotting optimized utility, great. More power to them. The most sensible thing for them to do is not even let me waste their time by knowing they exist. LessWrong certainly isn't that club; it's at best a self-selecting group of people who think that club would be a good thing if it existed, plus some others who think they ought to be in it.

Which is OK with me.

Unrelatedly, a specific quibble:

If I was rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

Well, and enough to actually do all the necessary research to obtain the data from which you could rationally conclude that a given stock will go up. And to continue attending to that data (and researching additional relevant data) so that you can make rational decisions about whether to sell or hold those stocks tomorrow, next week, next month, etc.

comment by xamdam · 2010-06-30T21:01:16.297Z · LW(p) · GW(p)

If I was rational enough to pick only stocks that would go up, I'd become successful regardless of how little willpower I had, as long as it was enough to pick up the phone and call my broker.

The availability of investing does NOT disprove the claim that akrasia is the complete explanation. Successful investing is rationality + financial education + a lot of work (Buffett is rumored to read an incredible amount of accounting statements), and hence subject to akrasia.

comment by conchis · 2009-04-09T12:50:40.149Z · LW(p) · GW(p)

Better decisions are clearly one possible positive outcome of rationality training. But another significant positive outcome is reaching the same decision faster. In my work, there are a number of rationality techniques that I have learned that have not necessarily changed the end result I have come to, but that have contributed to me spending less time confused, and getting to the right result more quickly than I otherwise would have.

Anything that frees up time in this way has real, positive, and measurable effects on my life. (Also, confusion and things-not-working are frustrating and stressful, so the less time I spend confused, the better.)

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-04-09T14:36:11.307Z · LW(p) · GW(p)

In my work, there are a number of rationality techniques that I have learned that have ... contributed to me spending less time confused, and getting to the right result more quickly than I otherwise would have.

Could you please tell us the specific techniques and/or situations? (I'm sorry to keep asking this of everyone, but the answers are really interesting/useful. We need to figure out what different people's practice actually looks like, and what mileage people do and don't get from it. In detail.)

Replies from: conchis
comment by conchis · 2009-04-13T10:52:31.464Z · LW(p) · GW(p)

[Sorry for the slow response. Have been away for the weekend.]

No need to apologize, it's an excellent question. And to be honest, because my work involves a lot of data analysis, and using such analysis to inform decision-making, I may be cheating somewhat here. There are times when remembering that "probability is in the mind" has stopped me getting confused and helped me reach the right answer more quickly, but they're probably not particularly generalizable. ;)

Here's a quick list of some techniques that have helped that might be more generally applicable. They're not necessarily techniques that I always manage to apply consistently, but I'm working on it, and when I do, they seem to make a difference.

(Listing them like this actually makes them seem pretty trivial; I'll leave others to decide whether they really warrant the imprimatur of "rationality techniques".)

(1) Avoiding confirmation bias in program testing: I'm not a great programmer by any stretch of the imagination, but it is something I have to do a fair amount of. Almost every time I write a moderately complicated program, I have to fight the urge to believe that this time I've got it basically right on the first go, to throw a few basic tests at it, and get on with using it as soon as possible, without really testing it properly. The times I haven't managed to fight this urge have almost always resulted in much more time wasted down the line than taking a little more time at the outset to test properly.

(2) Leaving a line of retreat. Getting myself too attached to particular hypotheses has also wasted a fair amount of my time. In particular, there's always a temptation, when data happens not to fit your preconceived ideas, to keep trying slightly different analyses to see whether they'll give you the answer you expected. This can sometimes be reasonable, but if you're not careful, can lead to wasting an enormous amount of time chasing something that's ultimately a dead end. I think that forcing myself to reassess hypotheses sooner rather than later has helped to cut down on that sort of dead end analysis.

(3) Realizing that some decisions don't matter (aka not being Buridan's ass): I'm something of a perfectionist, and have a tendency to want every decision to be optimal. In any sort of analysis, you have to make numerous, more or less arbitrary choices about exactly how to proceed. Some of these choices appear difficult because the alternatives are finely balanced; so you keep searching for some factor that could make the crucial difference between them. But sweating every decision like this (as I used to do) can kill a lot of time for very little reward (especially, though not only, when the stakes are small).

But to be honest, the biggest time-saver I've encountered is taking the outside view to avoid the planning fallacy. Over the years, I've taken on a number of projects that I would not have taken on, had I realized at the outset how much time they would actually take. Usually, these have both taken up time that could better have been spent elsewhere, and have created a great deal of unnecessary stress. The temptation to take the inside view, and to be overly optimistic in time estimates, is something I always have to consciously fight (and that, per Hofstadter's law, I've never managed to fully overcome), but is something I've become much better at.

Z_M_Davis' recent post on the sunk cost fallacy reminded me that being willing to give up unproductive projects can also be a time-saver, although the issues here are somewhat more complicated for reasons some have mentioned in the comments (e.g. the reverse sunk cost fallacy, and reputational costs involved in abandoning projects).

comment by Z_M_Davis · 2009-05-24T06:46:09.419Z · LW(p) · GW(p)

I can't think of any people who started out merely above-average, developed an interest in x-rationality, and then became smart and successful because of that x-rationality.

I'm working on this.

comment by Scott Alexander (Yvain) · 2009-04-10T03:16:53.401Z · LW(p) · GW(p)

In the spirit of concrete reductions, I have a question for everyone here:

Let's say we took a random but very large sample of students from prestigious colleges, split them into two groups, and made Group A take a year-long class based on Overcoming Bias, in which students read the posts and then (intelligent, engaging) professors explained anything the students didn't understand. Wherever a specific technique was mentioned, students were asked to try that technique as homework.

Group B took a placebo statistics class similar to every other college statistics class, also with intelligent and engaging professors.

Twenty-five years later, how would you expect the salaries of students in Group A to compare to the salaries of students in Group B? The same? 1.1 times greater? Twice as great? What about self-reported happiness? Amount of money donated to charity per year?

Replies from: AnnaSalamon, prase
comment by AnnaSalamon · 2009-04-10T03:22:55.774Z · LW(p) · GW(p)

Does the course use CBT-like techniques, where e.g. when "Leave a line of retreat" is taught, participants specifically list out all the possibilities where fear might be preventing them from thinking carefully, and build themselves lines of retreat for those possibilities? And learn cached heuristics for noticing, through the rest of their lives, when leaving a line of retreat would be a good idea, together with habits for actually doing so? Also, does the course have a community spirit, with peers asking one another how things went, and pushing one another to experiment and implement?

If so, I'd give 50% odds (for each separate proposition, not the conjunction) that the group A salaries are higher variance than the group B's, and that the 98th percentile wealthiest / most famous / most impactful of group A is significantly wealthier / more famous / more successful at improving their chosen fields than the 98th percentile of group B. Significantly, like... times five, say (though I'd expect a larger multiplier from the "changing their chosen fields to work well" than from the "making more money"; strategicness is more rarely applied to the former, and there's lower hanging fruit). (I would not expect such a gap between the two groups' medians.)

comment by prase · 2009-04-10T12:37:45.552Z · LW(p) · GW(p)

I would expect very little correlation with salaries. And about self-reported happiness - I often think that knowing about all biases, memory imperfections and all that stuff, and about how difficult it is to decide correctly, makes me substantially less happy.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-04-11T03:32:56.438Z · LW(p) · GW(p)

prase, is happiness much of a goal for you? If so, have you tried to apply rationality toward it, e.g. by reading the academic research on happiness (Jonathan Haidt's "The Happiness Hypothesis" is a nice summary) and thinking through what might work for you?

comment by mathemajician · 2009-04-09T13:28:57.708Z · LW(p) · GW(p)

The most effective way for you to internally understand the world and make good decisions is to be super rational. However, the most effective way to get other people to aid you on your quest for success is to practice the dark arts. The degree to which the latter matters is determined by the mean rationality of the people you need to draw support from, and how important this support is for your particular ambitions.

comment by SoullessAutomaton · 2009-04-09T10:35:13.631Z · LW(p) · GW(p)

I strongly suspect that it is unreasonable to expect people to actively apply x-rationality on a frequent, conscious basis--to do so would be to fight against human cognitive architecture, and that won't end well.

Most of our decisions are subconscious. We won't be changing this. The place of x-rationality is not to make on-the-spot decisions; it's to provide a sanity check on those decisions and, as necessary, retrain the subconscious decision-making processes to better approximate rationality.

comment by smoofra · 2009-04-09T03:40:48.096Z · LW(p) · GW(p)

I think you are right that x-rationality doesn't help an individual win much on a day to day basis. But there are some very important challenges that humanity as a whole is failing for lack of x-rationality.

The current depression. The fact that we aren't adequately protecting the earth from asteroids. DDT being banned. Nobody's getting frozen. Religion. First-past-the-post elections. Most wars.

Replies from: ciphergoth, gjm, Nominull
comment by Paul Crowley (ciphergoth) · 2009-04-09T09:13:24.771Z · LW(p) · GW(p)

DDT isn't banned, never has been. I'm with you on most everything else.

At some stage we're going to have to work out how to talk about politics here. I've wondered about a top-level post to find out what we practically all agree on - I suspect for example that few of us think the drug war is a good idea.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2009-04-09T17:03:22.369Z · LW(p) · GW(p)

DDT isn't banned, never has been. I'm with you on most everything else.

From a 1972 Environmental Protection Agency press release entitled "DDT Ban Takes Effect":

The general use of the pesticide DDT will no longer be legal in the United States after today, ending nearly three decades of application during which time the once-popular chemical was used to control insect pests on crop and forest lands, around homes and gardens, and for industrial and commercial purposes.

comment by gjm · 2009-04-09T10:49:26.431Z · LW(p) · GW(p)

Religion, FPTP elections and wars are irrational even according to non-x rationality. (With all sorts of caveats, which apply just as much to x-rationality.) The DDT ban thing is a myth, as ciphergoth points out. Asteroids and cryogenics, maybe, insofar as making the right decisions there probably involves a large element of Shut Up And Multiply; but actually we are making some effort to spot asteroids early enough, and the probabilities governing whether one should sign up for cryogenics are highly debatable.

Perhaps more x-rationality would help humanity as a whole to address those issues, but mostly they come about because so many people aren't even rational, never mind x-rational.

Replies from: Technologos
comment by Technologos · 2009-04-09T13:48:09.282Z · LW(p) · GW(p)

Perhaps--but many a logician has believed in God. Take somebody like Thomas Aquinas--he was for a long time the paradigm of rationality. I'd suggest it takes x-rationality to truly shatter your pre-existing losing framework and re-examine your priors.

Replies from: gjm
comment by gjm · 2009-04-09T15:35:46.862Z · LW(p) · GW(p)

Do you have evidence that it was lack of x-rationality that enabled Aquinas to believe in God, rather than (1) different evidence from what we have now (e.g., no long track record of outstandingly successful materialistic science; no evolutionary biology to provide an alternative explanation for the adaptation of living things; no geological investigations to show that the earth is very much older than Aquinas's religious beliefs said it was) and (2) being embedded in a culture that pushed him much harder towards belief in God than ours does to us?

Robert Aumann, to take an example Eliezer's used a few times, is pretty expert in at least some aspects of the art of x-rationality, and is also Orthodox Jewish.

Replies from: Technologos
comment by Technologos · 2009-04-28T19:38:00.845Z · LW(p) · GW(p)

Exactly--Aumann has the same evidence that you or I have about materialist scientific facts, yet chooses not to utilize x-rationality to accurately evaluate his beliefs.

While I can't interview Aquinas about the reasons he believed in God, I'm sure the things you listed were causally important. However, if he had had x-rationality, the other elements wouldn't have made a difference--in some sense, x-rationality is a way of getting around the limitations of a particular culture and time.

Do you think a general AI would have any difficulty disbelieving in God, even if it had been "raised" in a culture in which belief was common and incentivized?

Replies from: gjm
comment by gjm · 2009-04-28T19:43:34.944Z · LW(p) · GW(p)

That probably depends on what you mean by "a general AI". We humans are (approximately) general natural intelligences (indeed, that's almost the definition of what many people mean by "general" in this context), and plenty of humans have lots of difficulty disbelieving in God. If you mean an AI whose intelligence and knowledge are greatly superhuman, emerging from a human culture in which belief in God is common, then I expect it would (knowing its own intellectual superiority to us) have little difficulty escaping from the cultural presumption of theism. As for a culture of superhuman AIs in which theism was common, I don't know; the mere existence of such a culture would be extremely interesting and good evidence for something surprising (which might or might not be theism).

Replies from: Technologos
comment by Technologos · 2009-04-28T19:52:49.420Z · LW(p) · GW(p)

I mean an AI that follows Eliezer's general outlines of one; that is, an AI which can extrapolate maximally from a given set of evidence.

By the way, I find it hard to imagine a culture of superhuman AIs in which theism is common. I'd be interested to talk a little more about how that would work--in particular, what evidence each AI would accept from other AIs that would convince them to be a theist.

Replies from: gjm
comment by gjm · 2009-04-28T22:34:48.660Z · LW(p) · GW(p)

I find it hard to imagine a culture of superhuman AIs in which theism is common.

Yeah, me too. That was rather my point.

comment by Nominull · 2009-04-09T04:22:57.811Z · LW(p) · GW(p)

So by spending our resources on studying rationality, we are cooperating in a giant Prisoner's Dilemma?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-09T09:16:57.088Z · LW(p) · GW(p)

No, people don't only do good in the hope that good will be done to them; most people value the welfare of others and the survival of humanity inherently, at least to some extent.

comment by Hul-Gil · 2012-04-10T05:51:06.477Z · LW(p) · GW(p)

I think one reason might be that the vast majority of the decisions we make are not going to make a significant difference as to our overall success by themselves; or rather, not as significant a difference as chance or other factors (e.g., native talent) could. For example, take the example about not buying into a snake-oil health product lessdazed uses above: you've benefited from your rationality, but it's still small potatoes compared to the amount of benefit you could get from being in the right place at the right time and becoming a pop star... or getting lucky with the stock market... or starting a software company at just the right time. These people, who have to have varying degrees of something besides luck and a small amount of rationality to capitalize on it, are much more visible; even if their decisions were less-than-optimal, the other factors make up for it. Nothing's stopping them from poisoning themselves with a scam miracle-elixir, though.

This ties in with the point lessdazed was making, that the rational person most likely loses less rather than wins big - that is, makes a large number of small decisions well, rather than a single important one extremely well. That's not to be despised; I wonder what the results would be if we asked about overall well-being and happiness rather than fame and fortune.

comment by dlthomas · 2012-01-18T22:07:15.825Z · LW(p) · GW(p)

[W]e should generally expect more people to claim benefits than to actually experience them.

I don't think this claim is supported. There are reasons (some presented) why we should expect this. There are also reasons (a few listed below) why we should expect the opposite. I don't see at all why we should expect either set to dominate.

Reasons I might not post a benefit I've accrued:

1) I'm too busy out enjoying my improved life.

2) The self-congratulatory thread smells too much of an affective death spiral.

3) I am unsure how much of the benefit was actually from x-rationality and how much was from other sources.

3.1) 3, plus overcompensation for cognitive dissonance and similar biases.

4) It feels like bragging - and in fact, it seems to sometimes be interpreted this way; look at some of the reaction Luke has got for some of his posts.

5) I'm busy focusing on improving further; posting a comment listing benefits I've derived so far might not be an effective means to this goal.

comment by MrShaggy · 2009-04-24T05:07:54.487Z · LW(p) · GW(p)

"Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to "kicking" to our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts."

I don't understand the implications of seeing it as part of the same art or a different art altogether.

comment by Douglas_Knight · 2009-04-10T17:11:52.698Z · LW(p) · GW(p)

study evolutionary psychology in some depth, which has been useful in social situations

Could you elaborate on this?

I doubt that it directly told you anything useful, but it was more likely helpful in telling you to pay attention and not to interpret things through your usual beliefs.

comment by HughRistik · 2009-04-10T05:41:58.671Z · LW(p) · GW(p)

X-Rationality can help you succeed. But so can excellent fashion sense. It's not clear in real-world terms that x-rationality has more of an effect than fashion. And don't dismiss that with "A good x-rationalist will know if fashion is important, and study fashion." A good normal rationalist could do that too; it's not a specific advantage of x-rationalism, just of having a general rational outlook.

Yet many highly intelligent people with normal rationality have terrible fashion sense, particularly males, at least in my anecdotal experience. Ditto for social skills, dating skills, etc... (fashion is really a subset of social skills, combined with aesthetics). (a) Are these people not really rationalists, because they haven't figured out how to improve themselves in those areas, or (b) do ordinary rationalists have trouble figuring out that they would benefit from improvement in those areas, and how to do it? Or perhaps (c), they recognize the benefits of greater social abilities, but they do not believe that the effort is worth it?

In principle, normal intelligent rationalists could figure out how to improve their fashion skills and social skills deliberately and systematically. But if indeed so few people in that category do so, I would take it as evidence that a systematic approach to developing interpersonal skills and style actually requires a higher level of rationality than what normal rationalists possess (perhaps x-rationality, depending on what we mean by that).

Replies from: moshez, AlexU, Nick_Tarleton, mattnewport, AnnaSalamon
comment by moshez · 2012-02-14T18:42:20.344Z · LW(p) · GW(p)

"Yet many highly intelligent people with normal rationality have terrible fashion sense"

Hrm, I'm not sure what evidence there is that highly intelligent people have worse fashion sense than equivalent people [let's stick to the category of males, with which I'm most familiar]. It seems to me like "fashion" for males comes down to a few simple rules that a monkey (or, for that matter, any programmer or mathematician) can master. The problem seems to be that (1) one does need to master these rules, and (2) sometimes it means one does not dress comfortably.

I would like to offer a competing hypothesis: nerds have just as much "innate" fashion sense as non-nerds, but they feel that fashion is beneath them, that dressing comfortably is more important than following fashion, or that they would prefer to dress to impress nerds (with T-shirts that say "P(H|E) = P(E|H)*P(H)/P(E)" for example) than to impress non-nerds. In other words, the much simpler hypothesis "dress is usually worn to self-identify as a member of a tribe" is enough to explain nerds' perceived lack of fashion sense.

[For the record, here is how a nerd male can "simulate" a reasonable facsimile of fashion sense: for semi-formal occasions, get a couple of nice suits and wear them. If nobody else would wear a tie, wear a suit without the tie (if your ability to predict whether people will wear a tie is that bad, improve it with explicit Bayesian approximation). For all other occasions, wear dark colored slacks and a button down shirt with a compatible color (ask a person you trust about which colors go with which, and keep a table glued to the inside of your closet). Any "nerd" has mastered skills tremendously more complicated than that (hell, correctly writing HTML is more complicated). One can only assume it is lack of motivation, not of ability.]
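
(To make the "explicit Bayesian approximation" aside concrete: a minimal sketch in Python, with made-up event counts, of estimating the chance that attendees will wear ties at a given kind of event. The function and numbers are purely illustrative, not anyone's actual method.)

```python
# Minimal sketch (made-up numbers): a Beta-Binomial estimate of
# "what fraction of attendees will wear a tie?" for one event type.
# Start from a weak prior and update on events actually attended.

def posterior_tie_probability(prior_ties=1, prior_no_ties=1, ties_seen=0, no_ties_seen=0):
    """Posterior mean of P(a random attendee wears a tie) under a Beta(1, 1) prior."""
    alpha = prior_ties + ties_seen
    beta = prior_no_ties + no_ties_seen
    return alpha / (alpha + beta)

# Example: across the last few department dinners, 4 of 40 people wore ties.
p_tie = posterior_tie_probability(ties_seen=4, no_ties_seen=36)
print(f"P(tie) = {p_tie:.2f}")  # ~0.12, so skip the tie but keep the suit
```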

For myself as an example of a nerd, I can definitely say the reason I dress "with a horrible fashion sense" is as a tribal identification scheme. In situations where my utility function would actually suffer because of that, I do the rational thing, and wear the disguise of a different tribe... (For example, when going on sales pitches to customers, I let the sales rep in charge of the sale tell me what to wear, down to the socks; on my wedding I let my wife pick out my clothes, etc.)

Replies from: Bugmaster
comment by Bugmaster · 2012-02-14T19:51:20.829Z · LW(p) · GW(p)

Personally, I've been able to get away with just dark slacks and a dark formal shirt. That said, I usually dress quite "horribly" by fashion standards, because there's no one in my day-to-day life who'd be impressed by my mad fashion skills, so I might as well dress comfortably at no penalty.

comment by AlexU · 2009-04-10T14:10:34.428Z · LW(p) · GW(p)

I've talked before in this same vein about the limits of rationality. One can be a perfect rationalist and always know what to do in a given situation, yet still be unable to do it for whatever reason. This suggests pretty strongly that good "rationalists" would be wise to invest their time into other areas as well, since rationalism alone won't turn you into the ubermensch. It won't make you healthy and fit, it won't enable you to talk to girls any better or make friends any easier. (And I object to any conception of "rationalism" so sweepingly broad that it manages to subsume every possible endeavor you'd set out on, e.g., the old "a good rationalist would realize the importance of these things and figure out meta-techniques for developing these skills.")

comment by Nick_Tarleton · 2009-04-10T14:02:58.710Z · LW(p) · GW(p)

Three other suggestions:

(d) they've let "bad at fashion", "bad social skills", and the like become part of their identities, rationalized by the belief that those things are shallow, non-intellectual, whatever;

(e) they didn't practice those skills at a young enough age (because they were too young to realize the importance, they were socially excluded, ...) to deeply learn them, also reinforcing both (d) and a (destructive, hard to break) sense of being low-status;

(f) high intelligence + interest/aptitude in rationality correlates with mild autism-spectrum traits (not necessarily sufficient to be diagnosed, but enough to cause social problems, particularly in childhood).

Replies from: HughRistik
comment by HughRistik · 2009-04-10T23:50:09.830Z · LW(p) · GW(p)

I think all of those are highly plausible factors (all of which applied to me, btw).

(d) they've let "bad at fashion", "bad social skills", and the like become part of their identities, rationalized by the belief that those things are shallow, non-intellectual, whatever;

Additionally, they may have internalized the stereotype that rational people should act like Spock. And conversely, they may associate those skills with people they dislike: "those are the shallow kinds of things the popular people do, whereas I am deep."

(e) they didn't practice those skills at a young enough age (because they were too young to realize the importance, they were socially excluded, ...) to deeply learn them, also reinforcing both (d) and a (destructive, hard to break) sense of being low-status;

I like the interactionist perspective between nature and nurture you are taking here. It's not necessarily destiny that these people grow up with social deficits, it's just a common outcome of the interaction of their individual characteristics with a negative formative social environment.

(f) high intelligence + interest/aptitude in rationality correlates with mild autism-spectrum traits (not necessarily sufficient to be diagnosed, but enough to cause social problems, particularly in childhood).

This is a can of worms that I was thinking about opening up. Our normal intelligent rationalists would also tend to be high on "systemizing" rather than "empathizing" in Simon Baron-Cohen's theory, and more interested in "things" on the "people vs things" dimension.

The result is that the kind of neurotypical cognition required for social skills and fashion sense may seem non-intuitive or even alien to the category of people we are talking about. For instance, fashion and social skills often involve doing things simply because other people are doing them, which may defy one's sense of individualism, and belief that behaviors should have objective purpose.

Furthermore, this type of individual may feel that people should be accorded status based on "objective merit," which means being good at the things that matter to our intelligent rationalists. They may find it nauseating that status often depends on things like clothing, body language and voice tonality, who you hang out with, etc... rather than on actual intelligence or competence.

90% of social communication will seem meaningless to them, because it is based on emoting, status ploys, or pointing out things that are obvious, in contrast to the type of communication that is "really" meaningful, such as exchanging ideas, factual information, or practical processes.

For this type of intelligent rationalist to build social skills from the ground up is an impressive feat, because they have to get over their own biases and past a bunch of developmental barriers (whether biological or social). A higher level of rationality may be a necessary, though not sufficient, condition for accomplishing this feat. (Yet of course, a higher level of rationality may be linked to even more social deficits, semi-autistic "thing-oriented" personality traits, etc... Perhaps this is why the world is not ruled by an over-caste of charismatic, fashionable people with 150+ IQ.)

comment by mattnewport · 2009-04-10T06:08:26.130Z · LW(p) · GW(p)

I agree that there's some level missed by the distinction between 'normal' rationality and 'x-rationality' and it's in that middle ground that I feel I've derived the most practical benefits from rationality. The examples you give are good ones. Other examples I could give from my own experience are personal finance and weight loss.

Using personal finance as an example: I consider myself to have always possessed an above average level of intelligence and 'normal' rationality. I have a scientific education and make my living as a computer programmer. Until fairly recently though I let my emotional dislike of form filling get in the way of organizing my personal finances effectively. A general desire to more rigorously apply 'normal' rationality in my life to improve my outcomes led me to recognize that I was irrationally allowing my negative reaction to paperwork to have a significant financial impact. By comparing the marginal utility of a few hours of unpleasant labour optimizing my tax situation to a few hours of tedious paid employment I realized I was making an irrational choice and recognizing that was an aid in overcoming the obstacle. Recognizing the logical flaws in the kinds of rationalizations I'd used to justify my previous lack of organization was also helpful. Often I would use clever-sounding arguments to justify avoiding a task which was simply unpleasant.
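
(A minimal sketch of that marginal comparison, with entirely invented numbers; the figures are illustrative, not mattnewport's actual finances.)

```python
# Minimal sketch (invented numbers): is an hour spent on unpleasant tax
# paperwork worth more than an hour of tedious paid employment?

hours_of_paperwork = 4            # assumed time to optimize the tax situation
expected_tax_savings = 600.0      # assumed savings from doing it
hourly_wage = 40.0                # value of an hour of paid work
unpleasantness_penalty = 15.0     # assumed extra disutility of paperwork, per hour

value_per_paperwork_hour = expected_tax_savings / hours_of_paperwork - unpleasantness_penalty
print(value_per_paperwork_hour > hourly_wage)  # True: 135 > 40, so do the paperwork
```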

comment by AnnaSalamon · 2009-04-10T05:54:41.614Z · LW(p) · GW(p)

I would take it as evidence that a systematic approach to developing interpersonal skills and style actually requires a higher level of rationality than what normal rationalists possess.

HughRistik, this is only evidence if people with a higher level of rationality do better at improving their fashion skills, social skills, etc. My impression is that we do do somewhat better, but it's not obvious, and more data would be good.

comment by knb · 2009-04-09T16:32:01.558Z · LW(p) · GW(p)

In the case of Hubbard, preaching irrationality and being irrational are different things. Hubbard went genuinely crazy in his later years, but he knew what he was doing when he invented Scientology. He even said in an interview once, "I'm tired of writing for a penny a page. If a man really wanted to make a million dollars, he would invent a religion."

Replies from: Annoyance, thomblake
comment by Annoyance · 2009-04-09T17:21:59.776Z · LW(p) · GW(p)

If you're going to craft memetic weapons, you'd better make damn sure you've developed a resistance to your own products before you begin peddling them.

Hubbard ended up spending lots of his time around people who had been infected with his viral religious propaganda... and inevitably, he became infected himself.

People with high Int and Cha tend to believe their own propaganda. They're also the ones who tend to have unrealistically positive beliefs about their own intellectual competence, and little concern about going through the tedious and uncomfortable process of examining their own beliefs and practices.

comment by thomblake · 2009-04-09T17:30:59.111Z · LW(p) · GW(p)

Except that even before all of that and formally inventing Scientology, he hobnobbed with the likes of Crowley and believed that there is a horrible conspiracy of psychologists that Must Be Stopped.

comment by Paul Crowley (ciphergoth) · 2009-04-09T09:23:39.410Z · LW(p) · GW(p)

Reading OB/LW forced me to look hard at my contradictory beliefs about politics, and admit that I no longer believed certain things I used to believe, particularly about the market. If I don't get anything else out of it, that alone would be a large bonus.

comment by EniScien · 2022-03-07T17:29:39.531Z · LW(p) · GW(p)

Even before reading, I had formulated an explanation for myself: "all people too stupid not to jump off the roof will simply die out." Market mechanisms and natural selection remove the really destructive consequences of everyday stupidity and collect all the low-hanging fruit, so the study of rationality helps only with problems that are rare or have individually weak negative consequences. On average, rationalists will be more successful than non-rationalists, but the differences between individuals will be greater than the differences between the groups. Nevertheless, it seems to me that knowledge of the techniques could make it possible to reach Yudkowsky's level, but only given a really good command of them, solving one's problems with akrasia, and setting a serious goal.

comment by DPiepgrass · 2022-02-06T21:05:31.664Z · LW(p) · GW(p)

I find my personal experience in accord with the evidence from Vladimir's thread. I've gotten countless clarity-of-mind benefits from Overcoming Bias' x-rationality, but practical benefits? Aside from some peripheral disciplines1, I can't think of any.

Well, it did ultimately help you make SlateStarCodex and Astral Codex Ten successful, which provided a haven for non-extremist thought to thousands of people. And since the latter earned hundreds of thousands in annual revenue, you were able to create the ACX grants program that will probably make the world a better place in various small ways. Plus, people will look back on you as one of the world's most influential philosophers.

As for me, I hope to make various EA proposals informed by rationalist thought, especially relating to the cause area of "improving human intellectual efficiency", e.g. maybe a sort of evidence-Wikipedia [EA(p) · GW(p)]. Mind you, the seeds of this idea came before I discovered rationalism, and "Rationality A-Z" was, for me, just adding clarity to a worldview I had already developed vaguely in my head.

But yes, I'm finding that rationalism isn't much use unless you can spread it to others. My former legal guardian recently died with Covid, but my anti-vax father believes he was killed by a stroke that coincidentally happened at the same time as his Covid infection. Sending him a copy of Scout Mindset was a notably ineffective tactic; in fact, it turned out that one month before I sent him the book, the host and founder of his favorite source of anti-vax information, Daystar, died of Covid. There's conviction—and then there's my dad. My skill at persuading this kind of person is virtually zero even though I have lots of experience attempting it, and I think the reason is that I have never been the kind of person that these people are, so my sense of empathy fails me, and I do not know how to model them. Evidence has no effect, and raising them into a meta-level of discussion is extremely hard at best. Winnifred Louis suggests (among other things) that people need to hear messages from their own tribe, and obviously "family" is not the relevant tribe in this case! So one of the first things I sent him was pro-vax messaging from the big man himself, Donald Trump... I'm not sure he even read that email, though (he has trouble with modern technology, hence the paper copy of Scout Mindset).

Anyway, while human brain plasticity isn't what we might like it to be, new generations are being born all the time, and I think on the whole, you and this community have been successful at spreading rationalist philosophy, and it is starting to become clear that this is having an effect on the broader world, particularly on the EA side of things. This makes sense! LessWrong is focused on epistemic rationality and not so much on instrumental rationality, while the EA community is focused on action; drawing accurate maps of reality isn't useful until somebody does something with those maps. And while the EA community is not focused on epistemic rationality, many of its leaders are familiar with the most common ideas from LessWrong, and so rationalism is indeed making its mark on the world.

I think a key problem with early rationalist thought is a lack of regard for coordination and communities. Single humans are small, slow, and intellectually limited, so there is little that a single human can do with rationalism all by zimself. Yudkowsky envisioned "rationality dojos" where individuals would individually strengthen their rationality—which is okay I guess—but he didn't present a vision of how to solve coordination problems, or how to construct large new communities and systems guided in their design by nuanced rational thought. Are we starting to look at such things more seriously these days? I like to think so.

comment by MugaSofer · 2014-02-02T00:08:38.005Z · LW(p) · GW(p)

1: Specifically, reading Overcoming Bias convinced me to study evolutionary psychology in some depth, which has been useful in social situations. As far as I know. I'd probably be biased into thinking it had been even if it hadn't, because I like evo psych and it's very hard to measure.

Oooh! I realize this is an old post, but I'm desperately curious for some concrete examples of this.

comment by HCE · 2009-04-09T18:19:21.265Z · LW(p) · GW(p)

as robin has pointed out on numerous occasions, in many situations it is in our best interest to believe, or profess to believe, things that are false. because we cannot deceive others very well, and because we are penalized for lying about our beliefs, it is often in our best interest to not know how to believe things more likely to be true. refusing to believe popular lies forces you to either lie continually or to constantly risk your relative status within a potentially useful affiliative network by professing contrarian beliefs or, almost as bad, no beliefs at all. you're better off if you only apply ''epistemic rationality techniques'' within domains where true beliefs are more frequently or largely rewarded, i.e., where they lead to winning strategies.

trying to suppress or correct your unconscious judgments (often) requires willpower. indiscriminately applying ''epistemic rationality techniques'' may have the unintended consequence of draining your willpower more quickly (and needlessly).

comment by nazgulnarsil · 2009-04-09T15:26:17.169Z · LW(p) · GW(p)

winning takes time. few of us have gotten rich yet.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-09T14:36:56.550Z · LW(p) · GW(p)

May I humbly suggest changing the title to "Extreme Rationality: It's Not That Great"? (This will not break any links!)

comment by Technologos · 2009-04-09T13:53:36.760Z · LW(p) · GW(p)

It actually just occurred to me that the intelligence professions might benefit greatly from some x-rationality. We may not have to derive gravity from an apple, but the closer we come to that ideal, the less likely failures of intelligence become.

Intelligence professionals are constantly engaged in a very Bayesian activity, incorporating new data into estimates of probabilities and patterns. An ideal Bayesian would be a fantastic analyst.
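
(A minimal sketch, with an invented hypothesis and invented likelihoods, of the kind of update described above.)

```python
# Minimal sketch (invented numbers): an analyst updating P(hypothesis)
# on a new report using Bayes' rule.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) from a prior and the two likelihoods."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

# Prior: 20% chance the facility is a weapons site.
# The new report is three times as likely if it is (0.6 vs 0.2).
posterior = bayes_update(prior=0.20, p_evidence_given_h=0.6, p_evidence_given_not_h=0.2)
print(f"{posterior:.2f}")  # 0.43
```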

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-09T14:33:41.050Z · LW(p) · GW(p)

Ja, in particular modern intelligence professionals seem to have problems with separating out the information they get from others and the information they're trying to pass on themselves, reporting only their final combined judgment instead of their likelihood-message, which any student of Bayes nets knows is Wrong.
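
(A minimal sketch, with invented numbers, of the distinction being pointed at: if each analyst reports only a final posterior, a naive pooling of those judgments counts the shared prior twice; if each reports the likelihood ratio for their own evidence, the ratios can be multiplied onto the prior exactly once.)

```python
# Minimal sketch (invented numbers): passing likelihood messages vs. passing
# final posteriors. Two analysts share a prior and see independent evidence.

prior_odds = 0.2 / 0.8          # shared prior: P(H) = 0.2
lr_analyst_a = 3.0              # A's evidence is 3x likelier under H
lr_analyst_b = 2.0              # B's evidence is 2x likelier under H

# Correct: combine the likelihood messages, apply the shared prior once.
correct_odds = prior_odds * lr_analyst_a * lr_analyst_b
print(correct_odds / (1 + correct_odds))      # P(H) = 0.60

# Wrong: naively multiply the analysts' posterior odds, double-counting the prior.
wrong_odds = (prior_odds * lr_analyst_a) * (prior_odds * lr_analyst_b)
print(wrong_odds / (1 + wrong_odds))          # P(H) ~= 0.27, too low
```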

comment by Mike Bishop (MichaelBishop) · 2009-04-09T03:49:12.144Z · LW(p) · GW(p)

If people typically found great personal benefits from reading OB/LW type material, then we would not be such a minority.

We hope that rationality is increasing, and it could be, but I don't have much confidence that 30 years from now people, even people in positions of power, will be much more rational than they are now.

comment by MugaSofer · 2012-12-11T10:47:45.966Z · LW(p) · GW(p)

2: Eliezer considers fighting akrasia to be part of the art of rationality; he compares it to "kicking" to our "punching". I'm not sure why he considers them to be the same Art rather than two related Arts.

Winning.

comment by cousin_it · 2009-04-09T21:28:56.611Z · LW(p) · GW(p)

Your post is a great improvement on mine. Thanks, esp. for the "limiting factor" riff.

Am I alone in thinking the word "akrasia" doesn't quite describe our problem? Isn't it more like "apathy"? Some people wish to be able to do the things they want; lucky them! Me, I just wish to want to do the things I'm able to do.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2009-04-09T22:37:37.454Z · LW(p) · GW(p)

You're welcome, even though I was pretty sure I was arguing against you. My poor models of other people's opinions strike again.

comment by infotropism · 2009-04-09T09:54:46.565Z · LW(p) · GW(p)

I will list the only example that comes to my mind: better x-rationality techniques have actually helped me get my university diploma. More than a few times I got out of a difficult situation by using what I knew of heuristics, biases, and the limits and usual mistakes of normal rationality (how one can sound rational regardless of whether he really is) to give off, at little cost, that impressive aura of someone who knows what he's doing. To sound rational when facing an audience.

In my defense, I actually faked the cues and tells of my rationality, skill, etc. to adapt them to what I estimated to be my real level of rationality and skill, but which I also decided wouldn't be signaled correctly if I didn't actively do the job.

Now, that'd be the only time I used that knowledge in real life. But I'm not really an x-rationalist either. Even the best of us is still but a student in x-rationality. Personally, I'd just consider myself a normal rationalist with some x-rationality ideas, someone in transition, getting there. I mean, I can't even do the math, after all. It's all very intuitive so far, to me, not really formalized. But what I have so far, intuitively, tells me that I should seek that conversion, so as to eventually be able to use the math and formalize that rationality. I ain't even saying that once (if ever) I'm there, I'll only use formalized x-rationality. But I'll have a new, powerful tool, maybe the most powerful, to help me wherever it may (should that be everywhere? Perhaps not; for as long as we're meatware humans, it'll still be easier to hit the ball when it feels right than when you've solved its differential equations).

And so in our present days, I wonder just how much of our art could really be used formally so far, as opposed to all that is still only present and usable on an intuitive level. And what part of it is really more a part of x-rationality, rather than something borrowed from someplace else.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-09T12:22:37.317Z · LW(p) · GW(p)

better x-rationality techniques have actually helped me get my university diploma

I didn't quite understand your description of what happened here, but it sounds interesting and possibly ominous. Please rephrase?

Replies from: infotropism
comment by infotropism · 2009-04-09T15:57:55.617Z · LW(p) · GW(p)

Ok, giving it another go. Let's say you had to perform a set of experiments. You didn't know much about, nor had you studied much of, the background science. The results, and the data that can be extrapolated from them, are weak, and it's in part your fault. How would you keep going, without failing either your overall experimental work (which should be the only important matter at hand) or your co-workers' trust in your capabilities?

The first most important thing would be to put things back into context. Being intellectually honest, with a genuine will towards truth, about just how much your work so far is worth, what you are capable of, how motivated you are, and what you can expect to achieve next. Putting confidence bounds on things like "this experiment set will be done by next week", "I'll falsify this hypothesis with this experiment", "this theory seems to apply here", "I'll assume this explanation to be the right one", etc. Planning your future work based on that.

Mostly in a bad case, it amounts to admitting "I don't know". Having admitted to that, you can start working towards better results, improving yourself at your own pace, and eventually accomplish your work.

Now I don't usually trust people to accept that I'll work at my own pace. In quite a few cases it seems like there's a gap I can't cross in explaining to them how working that way will be optimal (on a case by case basis, and for me). I especially don't expect it when I am working well under what I know is my normal work output, or even below what is the average, expected work output for anyone who'd be in my shoes.

The next step - which was quite automatic most of the time - would be where I'd explain my work or present results (the PowerPoint presentations given to the team, or informal meetings with the lab director), and where I'd include the meta-information about how I rationally evaluated my work and planned the next steps. But only selectively so: enough to show that I was intellectually honest, but not so much as to shoot myself in the foot in the process. Casually throwing in, here and there, information which, while correct, would still draw on affect heuristics, halo effects, anchoring, and probably others I don't even remember, to make it sound even better than it would have otherwise. Is that similar to what is called "becoming a more sophisticated arguer"?

Some of the comments I'd receive then were like "ok you need to work more on that, but you seem to understand the problem well" "your presentation was very good, very easy to understand, it put everything back in place" etc. when my own estimate told me that not only my work wasn't all that good, but that what was being praised wasn't the right thing, and missed the point. I didn't ever mention those doubts though.

I can't tell how much of my final "success" was deserved. I don't know how much of my final marks were due to the value of the science done, how much to the intellectual honesty, and how much to how I played on those to make it seem better than it was. I personally think my work wasn't worth that much, and I know I underperformed. That I had good reasons to underperform at that time doesn't change the fact that I was graded better than I would have expected, or would have graded myself, even with the benefit of hindsight.

As a caveat, I maybe shouldn't have said "x-rationality" in that first comment. A small part of what I used was x-rationality. Most of the rest was normal rationality. But I learned about both at the same time. About the latter, I could throw more examples in. For instance, I only understood what science, and the scientific method was really about, on my last year, not as a result of my courses, but as a result of reading from the sl4 mailing list as well as some of your other writings. This helped me succeed too, a lot.

comment by AnnaSalamon · 2009-04-09T03:54:42.923Z · LW(p) · GW(p)

...but you will disagree with me. And we are both aspiring rationalists, and therefore we resolve disagreements by experiments.

Your suggested experiment is good. But in this particular case, let's also try to employ the power of positivist thinking on your thesis as a whole. That is, let's break it up into a bunch of specific anticipations, and see what parts there is and isn't disagreement on, before we try to resolve those disagreements. I'll take my own stab at this with a number of short comments in a moment.

comment by regnarg · 2021-01-14T22:54:25.684Z · LW(p) · GW(p)

For me, the core of the rationality project is something like a determination to make your beliefs completely subservient to reality, at all costs, fighting your natural instincts for defending your beliefs, trying to win debates, etc. Not trusting your beliefs just because they are yours. Approaching the most controversial and divisive subjects with a curiosity mindset. Looking forward to changing your mind.

Most "normally rational" people can do this in technical and scientific matters, but in other domains, such as politics, philosophy, society, economics, religion and their personal lives (to some extent), such a stance is extremely uncommon, even among very "normally rational" and highly-educated people -- thus according to your taxonomy, it would fall under x-rationality, I guess (even though I'm not sure whether that's what you intended). In these areas, tribalism, confirmation bias, trying to win debates and talking past each other usually wins out.

All the other parts of x-rationality -- all the techniques and tools -- are just that, tools (of varying levels of usefulness, of course). I am not going to argue much for the value of the tools, mostly for the value of The Determination. When you have The Determination, you can seek out the tools you need.

The Determination is valuable in many ways.

People with The Determination make a better society, coordinate better, choose better policies because they actually listen to arguments from all sides and don't just fight with each other etc. And because high-level societal problems are messy and complicated, ordinary rationality is usually not enough. I know, this is not what you are aiming for, it is not a direct benefit to the practicing individual. But I think it is important and overlooked nonetheless.

Especially in light of the COVID pandemic, it is very visible how desperately we would need more Determined people. Even well-educated rational people such as doctors and biochemists sometimes spread dangerous falsehoods (and stick with them) because they overlooked some simple fact (e.g. how exponential growth really behaves). When confronted, they just rehearse their arguments, which make perfect sense in themselves, except for the fact that they overlook a crucial consideration. When you have The Determination, you are always on the lookout for crucial considerations missed.

This is a tangible practical benefit (e.g. there would be far fewer deaths), but it is a group benefit rather than an individual one.

As for truly individual benefits, I'm sure there are many, but they differ from person to person. For me, the most obvious one is that I can much better talk with people of widely differing opinions, I am much less confrontational and generally relate to others better. This has been hugely valuable to me.

It has also greatly helped me in matters of relationships and emotional regulation. I can for example handle my emotions in a similar way to my beliefs: not taking them at face value, treating them as a hypothesis. I can notice when I fall into harmful patterns, react poorly because of something completely unrelated, start forming an anxious-preoccupied / codependent attachment, etc. and correct myself, no matter how real it feels. This is the same mindset of never fully trusting yourself, just applied differently.

As for achieving mundane life goals such as getting jobs and money, I'd concede that x-rationality does not help that much. On the other hand, I think it can for example help you notice that you chose the wrong goals (e.g. you believe something will make you happy when in fact it won't; people are generally lousy at predicting what will make them happy). Again, the same stance helps. Treating everything, including what you want, as a hypothesis.

comment by Dmytry · 2012-01-15T16:08:47.033Z · LW(p) · GW(p)

I think the problem with practising rationality as on LessWrong is that people end up not doing perfectly rational actions and strategies whose rationale they never understood or had explained to them (usually people pick those up from the environment without the explanation attached). Furthermore, intelligence (as in, e.g., the ability to search a big solution space for solutions) is a key requirement as well, and intelligence is hard to improve with training, especially for already well trained individuals.

comment by NicoleTedesco · 2012-01-15T15:40:39.240Z · LW(p) · GW(p)

I want to master x-rationality because I want to teach it. I value rational behavior in my fellow human because the historical record is clear: rational behavior is correlated with increased safety, health, and wealth of a society. I want to live in an increasingly safe, healthy, and wealthy society. I understand that "rational" behavior has a saturating plateau, or that it is only so effective, but the masters of rationality must continue to exist in every society, scientific skeptics must be cultivated. I enjoy working with the rational arts because, frankly, I grew up with a very irrational mother and I have lived ever since to do everything in my power (of almost a half a century now) to be everything she was (and still is) NOT. I want to be one of the rationality masters and teachers in my society because I enjoy those arts, find value in them, and find value in safety, health, and wealth.

Pretty selfish, but there it is.

Replies from: None
comment by [deleted] · 2012-01-18T02:23:16.472Z · LW(p) · GW(p)

Right. You're basically in the position Yvain describes; you assign value to clarity of mind. However, this doesn't necessarily correlate to practical gains for you beyond that which could be acquired from pedestrian rationality (or at least specialized "business-rationality").

comment by Kingreaper · 2010-12-05T19:20:13.007Z · LW(p) · GW(p)

5: In which case it will have ceased to be an experiment and become a technique instead. I've noticed this happening a lot over the past few days, and I may continue doing it.

It would still be an experiment, just a test of x-rationality (including the new technique) rather than a test of x-rationality (excluding the new technique).

And why would you want to test a version of x-rationality less than the best you have?

comment by xamdam · 2010-07-01T13:48:19.452Z · LW(p) · GW(p)

"techniques and theories from Overcoming Bias, Less Wrong, or similar deliberate formal rationality study programs, above and beyond the standard level of rationality possessed by an intelligent science-literate person without formal rationalist training."

Here is my definition, I attempted to rely less on CapitalNames in it.

X-Rationality:

Ability to behave according to dictates of rationality in situations where such behaviors would be highly discomforting/counter-intuitive.

As an aside, the discomforting part provides an escape hatch for people not to be rational, because they can always claim high utility value for being emotionally comfortable.

comment by JulianMorrison · 2009-04-09T09:25:26.445Z · LW(p) · GW(p)

X-rationality is the kind you do with math, and humans are crap at casual math, so no surprise it becomes a weapon of last resort. (We ought to be using a damn sight more math for more or less everything - the fact our cognitive architecture doesn't support it will not persuade the universe to let us off lightly).

(Edit: I removed the second half of this comment because if after a day of thinking I can't pin down what I thought I was referring to, then I'm talking nonsense. Sorry. Next time: engage brain, open mouth, in that order.)

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-04-09T09:41:01.166Z · LW(p) · GW(p)

Can you give concrete examples of what you mean by "basic sanity", and how it might have increased?

comment by jason1stlegion · 2016-12-09T19:20:06.560Z · LW(p) · GW(p)

As far as I can tell, epistemic and instrumental rationality are two related Arts, even to EY, but are both under the banner of "Rationality" because they both work towards the same goal, that of optimal thinking (I can't cite any specific examples right now, but I'll throw it out there anyway).

Also, another reason for the comparative inefficiency of x-rationality could be lack of information. Epistemic rationality is the Art of filtering/modifying information for greater accuracy. Instrumental rationality is the Art of using all available information to maximize your values. So both techniques increase the amount of benefit you gain from information. But when you don't know all that much, the fine-tuning techniques, x-rationality, would have an extremely low return since they increase your benefit by such a small percentage. There IS an element of akrasia here, in that we could go learn more if we weren't so lazy, but it's not really the same thing.

Goals are yet another problem, which you mentioned already. People just don't need rationality in routine tasks, that's what we have habits for! Would you think rationally about how to brush your teeth? More than once, then? And many of our plans for the future take large amounts of patience but not much thinking to get a 'good enough' result, so most of our focus is on being patient, the rational course of action.

There's no reason for you to change your goals just for the sake of getting to use rationality, but some other ways of getting more out of it (not necessarily the best ones, of course) could be:

  • Low-"short-term"-investment tasks that would force you to study (like installing a productivity program that only allows you to access certain sites, as many people have already done)

  • Increasing the entertainment value of studying, the clichéd option (OpenStax CNX has made textbooks that are slightly more interesting than normal, but I don't think it will be enough for most of the population)

  • Meditation, another cliché. It increases patience, and you can work on analyzing and fixing stray beliefs you find floating around your brain

  • Recording your thoughts, observations, actions, reasons for those actions, etc. in some sort of portable device (like a notebook or phone). I know this was already mentioned by Yvain, but I just want to make a single list here.

  • (If you're willing to do so) Putting those recorded thoughts on LessWrong, especially the actions and their reasons, for critical review

Any other ideas?

Note: Markdown was acting up, but I've fixed it now

comment by keen · 2014-05-15T00:14:11.558Z · LW(p) · GW(p)

We simply don't have the time and computing power to use full rigor on our individual decisions, so we need an alternative strategy. As I understand it, the human brain operates largely on caching. X-rationality allows us to clean and maintain our caches more thoroughly than does traditional rationality. At first, it seems reasonable to expect this to yield higher success rates.

However, our intuition vastly underestimates the size of our personal caches. Furthermore, traditional rationality is simply faster at cleaning, even if it leaves a lot of junk behind. So it would appear that we should do most of the work with traditional rationality, then apply the slower x-rationality process for subtle refinement. But since x-rationality is so much slower and more difficult to run, it takes a whole giant heap of time and effort to get through a significant portion of the cache, and along the way many potential corrections will have already been achieved in the traditional rationality first pass.

But if we leave out the more rigorous methods entirely, deeming them too expensive, we're doomed to hit a pitfall where traditional rationality will not save us from thirty years of pursuing a bad idea. If we can notice these pitfalls quickly, we can apply the slow x-rationality process to that part of the cache right away, and we might only pursue the bad idea for thirty minutes instead.

We need to be able to reason clearly, to identify opportunities for clearer reasoning, and to identify our own terminal goals. A flaw in any of these pieces can limit our effectiveness, in addition to the limits of just being human. What other limiting factors might there be? What methods can we use to improve them? I keep coming back to Less Wrong because I imagine this is the most likely site to provide me with discourse on the matter.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-15T11:57:49.820Z · LW(p) · GW(p)

Clear reasoning: there is no evidence that humans have some fixed set of terminal goals (a fortiori, there is no unchanging essence of personhood). There is also no discernible difference between discovering "true" goals and changing goals. And you can't fix reasoning by fixing reasoning alone; you need to fix emotion and habit too.

comment by Adam Zerner (adamzerner) · 2014-05-04T16:12:17.141Z · LW(p) · GW(p)

You say that rationality only slightly correlates with winning. I think that's because incremental increases in rationality don't necessarily lead to incremental increases in winning. Winning is governed by lots of factors, and sometimes you have to get over a critical threshold of rationality to see the results you want.

Replies from: AshwinV
comment by AshwinV · 2014-05-06T07:34:58.619Z · LW(p) · GW(p)

True. If you look at it that way, x-rationality won't have a standard correlation of 0.1. The correlation will depend on how much of a limiting factor x-rationality really is.

comment by xamdam · 2010-07-01T13:45:46.045Z · LW(p) · GW(p)

I am curious about a finer distinction of what you expect to see as evidence of instrumental benefits of x-rationality:

  • increased individual success/happiness
  • remarkable achievement by some x-rationalists
  • remarkable achievement by LW (or SIAI) as a group

comment by timtyler · 2009-05-26T22:51:36.694Z · LW(p) · GW(p)

"Extreme rationality" is good - but "x-rationality" is pretty jargon-dense.

Yes, it's shorter - but x-sports and x-programming are not common for a good reason - nobody will know what you are talking about. I recommend caution with the use of "x-rationality".

comment by Cameron_Taylor · 2009-04-25T12:59:19.313Z · LW(p) · GW(p)

And this is why I am not so impressed by Eliezer's claim that an x-rationality instructor should be successful in their non-rationality life. Yes, there probably are some x-rationalists who will also be successful people. But again, correlation 0.1. Stop saying only practically successful people could be good x-rationality teachers! Stop saying we need to start having huge real-life victories or our art is useless! Stop calling x-rationality the Art of Winning! Stop saying I must be engaged in some sort of weird signalling effort for saying I'm here because I like mental clarity instead of because I want to be the next Bill Gates! It trivializes the very virtues that brought most of us to Overcoming Bias, and replaces them with what sounds a lot like a pitch for some weird self-help cult...

Yay! Cheer! Woohooo!

Thank you Yvain! I appreciate seeing the value of epistemic rationality for humans put in a somewhat plausible perspective.

To be honest, some of the greatest practical benefits I've taken from reading OB have been along the lines of "Oh, so that's what people are doing. Now I get it. I had better start doing that more". Not with all biases, mind you, but there are some biases that are just worth implementing. This is along the lines of Eliezer's recommendation to apply 'ethics' for purely selfish reasons. Sure, you are adding a bias, but you are at the same time accounting for calculations that are simply beyond your abilities.

comment by thomblake · 2009-04-09T13:16:34.018Z · LW(p) · GW(p)

he could have explained that the kids were doing some very impressive mathematics on a subconscious level

But he would have been wrong. It really is in the arm more than in the head.

EDIT: Okay, there's evidence it's largely in the brain. But talking about a 'subconscious level' isn't helpful.

comment by JulianMorrison · 2009-04-09T07:12:27.526Z · LW(p) · GW(p)

A guy takes some golf lessons. Convinced he's got the mechanics of the swing down, he takes on a pro at a golf course, and has his ass handed to him. "Those golf lessons did me no good", he says. "Do golf lessons even correlate with being good at the sport?".

Replies from: gjm, None, cousin_it
comment by gjm · 2009-04-09T10:58:23.662Z · LW(p) · GW(p)

Whether that's a good analogy depends on whether the reasoning challenges we face from day to day are more like playing golf against a seasoned pro, or playing golf against casual amateurs. (If someone takes golf lessons and, after a reasonable time, he isn't doing any better against other people at roughly his own level, then I think he is entitled to ask whether the lessons are helping.)

Do you have reasons to think that we're in the former rather than the latter situation? If so, what are they?

comment by [deleted] · 2012-01-19T01:59:52.071Z · LW(p) · GW(p)

The question isn't "will studying and honing your rationality make you a better rationalist?" Obviously it will. Likewise practicing and refining your golf swing will probably make you a better golfer; but that's not analogous to Yvain's point at all.

The real question is whether or not becoming a better rationalist will likely make you more successful.

comment by cousin_it · 2009-04-09T12:01:47.625Z · LW(p) · GW(p)

His words are justified if most pros never took any lessons of this particular kind.

Replies from: ChrisHibbert
comment by ChrisHibbert · 2009-04-09T18:32:24.330Z · LW(p) · GW(p)

His story demonstrates the importance of choosing the right metrics for progress. From the story, we don't know whether the golfer improved or not. As an aspiring x-rationalist, the lesson I draw is that when I take on the challenge of acquiring or improving a skill, I should calibrate my skill level, and compare my progress to an appropriate (and hopefully increasing) measuring stick. As a rank beginner, you don't learn anything by finding out that you lose to a pro by 30 strokes. After taking the class, you may lose by 25 or 35 strokes and you won't be able to tell whether you've improved.

In golf, you can use the course as your metric. Is your score improving compared to par? In other endeavors, the metrics may be harder to find. But you seldom want to compare yourself to the top 1% when you're starting out.

Replies from: roland
comment by roland · 2009-04-10T02:15:12.231Z · LW(p) · GW(p)

It's also a question of how long you must practice and how long it takes to make progress. To get good at golf or anything else you need weeks, months, years of practice. I suspect the same applies to rationality.

comment by ajayjetti · 2009-07-22T20:57:14.977Z · LW(p) · GW(p)

fantastic !!!

comment by agolubev · 2009-04-09T15:19:30.994Z · LW(p) · GW(p)

Check out Gladwell's new book - Outliers. Our success cannot be attributed to our individuality to the degree that most Americans think it can. There are huge cultural influences, arbitrary society rules, birth year, etc... There's a chapter on why high IQ only matters up to a certain point. Once you're "intelligent enough", practical wisdom takes over in determining success. I don't think akrasia has that much to do with it. We live in a world of lower intelligence and have to play by those rules. It pays to be ONE step ahead of the mob, not 10! You cannot make money in the stock market by being 10 steps ahead. You'd have been shorting stocks into oblivion in the 90's and 2006-2007, while the mob was getting more and more exuberant. By the same token I don't think success is a good measuring stick for rationality. Success depends on irrational subjects' interpretation and understanding of your actions, which cannot happen by definition. Unless in your endeavours outside your brain you dumb it down to be one step ahead instead of ten, but then you have to think like a dummy, which is not a skill of rationality.

Replies from: knb
comment by knb · 2009-04-09T16:49:08.045Z · LW(p) · GW(p)

" It pays to be ONE step ahead of the mob, not 10! You cannot make money in the stock market by being 10 steps ahead. You'd be shorting stocks into oblivious in the 90's and 2006-2007, while the mob was getting more and mroe exuberant."

It is NOT rational to think that people are smarter than they are. If you really are better at predicting where stocks are going, factoring other people's irrationality in is part of the game.

comment by polymathwannabe · 2013-11-26T20:15:09.828Z · LW(p) · GW(p)

I second the entire post except for the Galt-worshiping. If good and evil worked like they do in D&D and were real ontological categories, Ayn Rand would be the highest dark priestess in recorded history.

Replies from: gjm
comment by gjm · 2015-12-29T16:12:51.000Z · LW(p) · GW(p)

Yvain isn't endorsing Ayn Rand's values, he's just using her work to provide an example of a fictional character who is supposedly super-effective at least in part by being very rational.