Bad reasons for a rationalist to lose

post by matt · 2009-05-18T22:57:40.761Z · LW · GW · Legacy · 83 comments

Reply to: Practical Advice Backed By Deep Theories

Inspired by what looks like a very damaging reticence to embrace and share brain hacks that might only work for some of us, but are not backed by Deep Theories. In support of tinkering with brain hacks and self experimentation where deep science and large trials are not available.

Eliezer has suggested that, before he will try a new anti-akrasia brain hack:

[…] the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.

This doesn't look to me like an expected utility calculation, and I think it should. It looks like an attempt to justify why he can't be expected to win yet. It just may be deeply wrongheaded.

I submit that we don't "need" (emphasis in original) this stuff, it'd just be super cool if we could get it. We don't need to know that the next brain hack we try will work, and we don't need to know that it's general enough that it'll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.
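
Here's the shape of that calculation as a toy sketch. Every number below is invented purely for illustration; only the comparison matters, not the values.

```python
# Toy expected-utility comparison for trying a brain hack.
# All figures are hypothetical.

def expected_utility(p_success, benefit, cost):
    """Chance the hack works times its payoff, minus the cost of the trial."""
    return p_success * benefit - cost

# Suppose a hack has a 10% chance of working, pays off 100 utility units if
# it does, and costs 2 units of time to test.
trial = expected_utility(p_success=0.10, benefit=100, cost=2)  # = 8.0

# Suppose the best alternative use of that same block of time is worth 5.
alternative = 5.0

if trial > alternative:
    print(f"Try it: EU(trial) = {trial} > EU(alternative) = {alternative}")
```

Nothing in that comparison requires the probability of success to be anywhere near 1.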

So… this isn't other-optimizing, it's a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?

… and, what don't we need?


How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?

83 comments

Comments sorted by top scores.

comment by AnnaSalamon · 2009-05-19T05:16:25.399Z · LW(p) · GW(p)

It might be worth separating the claim "Eliezer is wrong about what changes he, personally, should try" from the claim

"It is generally good to try many plausible changes, because:

  1. Some portion will work;
  2. Trying the number of approaches it takes to find an improvement is often less expensive than being stuck in the wrong local optimum;
  3. Many of us humans tend to keep on doing the same old thing because it's easy, comfortable, safe-feeling, or automatic, even when sticking with our routines is not the high-expected-value thing to do. We can benefit from adopting heuristics of action and experimentation to check such tendencies."

The second claim seems fairly clearly right, at least for some of us. (People may vary in how easily they can try on new approaches, and on what portion of handed-down approaches work for them. OTOH, the ability to easily try new approaches is itself learnable, at least for many of us.) The first claim is considerably less clear, particularly since Eliezer has much data on himself that we do not, and since after trying many hacks for a given not-lightcone-destroying problem without any of the hacks working, expected value calculations can in fact point to directing one’s efforts elsewhere.

Maybe we could abandon Eliezer’s specific case, and try to get into the details of: (a) how to benefit from trying new approaches; and (b) what rules of thumb for what to try, and what to leave alone, yield high expected life-success?

Replies from: tut
comment by tut · 2009-05-25T14:57:03.157Z · LW(p) · GW(p)

One more reason for the list is that doing new stuff (or doing stuff in new ways, but I repeat myself) promotes neurogenesis.

comment by pjeby · 2009-05-19T02:38:40.638Z · LW(p) · GW(p)

Awesomely summarized, so much so that I don't know what else to say, except to perhaps offer this complementary anecdote.

Yesterday, I was giving a workshop on what I jokingly call "The Jedi Mind Trick" -- really the set of principles that makes monoidealism techniques (such as "count to 10 and do it") either work or not work. Towards the end, a woman in the group was having some difficulty applying it, and I offered to walk through an example with her.

She picked the task of organizing some files, and I explained to her what to say and picture in her mind, and asked, "What comes up in your mind right now?"

And she said, "well, I'm on a phone call, I can't organize them right now." And I said "Right, that's standard objection #1 - "I'm doing something else". So now do it again..." [I repeated the instructions]. "What comes to mind?"

She says, "Well, it's that it'll be time to do it later".

"Standard objection #2: it's not time right now, or I don't have enough time. Great. We're moving right along. Do it again. What comes to mind?"

"Well, now I'm starting to see more of what I'd actually be doing if I were doing it, the visualization is getting a lot clearer."

"Terrific, do it again. Now, don't try to actually do the task, just pay attention to what you're seeing and feeling, and you may begin to notice some of your muscles beginning to respond, like they're trying to actually do some of the things you're picturing, like starting to twitch..."

And she burst out laughing, because, she said, her legs had already started twitching and she was feeling like, "well, the files are right over there we could just go and get started..."

Had she given up at standard objection #1 or #2, she wouldn't have learned the technique or gotten the result. But it's not the content of the objection that matters, it's that ANY objection that stops you from actually trying something useful, means you fail. You lose. You are not being a smart, rational skeptic, you're being a dumbass loser.

In the workshop, I explained how our own objections and doubts are also doing the Jedi Mind Trick... but on US. "It's not time now..." they say, and like a hypnotized stormtrooper we nod and agree, "It's not time now." And it doesn't matter if those doubts are saying, "It's not time now" or "It's not peer-reviewed" -- because you still lose, either way.

However, if you simply ignore those doubts and objections, and continue what you're doing, they cannot stop you. If the objection you think is real is in fact real, well, then you've only lost a little time by trying. But if you believe an objection that isn't real, then you've lost much, much more than that.

Much of the time, the primary function of a (good) personal coach or teacher -- whether in pickup, personal development, or even business and marketing! -- is simply to drag someone (kicking and screaming, if necessary) past their objections into actually doing something the teacher or coach already knows will work.

And when that happens, what the student usually finds is that it isn't really as hard as they thought it would be, or that, yes, that crazy mumbo-jumbo actually works, no matter how irrational it might have sounded before they had any personal point of reference.

The woman on the call only needed about two minutes, to try a technique four times in a row and get a result. If she'd been doing it on her own, she might have given up after only one try. And a lot of folks on LW would likely not have tried even that once!

On LW, I mostly bide with polite patience those people who talk about the stuff I teach as if it's a matter of variation from person to person as to whether stuff works, or that things sometimes work and sometimes not, or whatever, blah blah fudge factor nonsense they individually prefer. That's all well and good here, because those people are not my clients.

But if I were to accept that sort of bullshit from one of my clients, then I would have failed them. It's all very well and good for the client to come to me believing that his or her problems are special and unique and that, in all the world, they are the worst person ever at doing something. But if they leave me still thinking that, then I have not done my job.

My job is to say, fuck that bullshit. Do this. No, not that, this. Good. Do it again. Again. That's better. Now do this.

Dunno about rationality, but ISTM that's how a dojo is actually supposed to work. If the master sat there listening to people's inane theories about how they need to punch differently than everybody else, or their insistence that they really need to understand a complete theory of combat, complete with statistical validation against a control group, before they can even raise a single fist in practice, that master would have failed their students AND their Art.

Just as EY fails his students and his art by the public positions he has taken on his weight and akrasia. To fail at solving those problems is fine. To excuse his failure to even try is not, even by the rules of his own art.

(And remember, "I don't have time" is just standard objection #2.)

Replies from: PhilGoetz, Cameron_Taylor, Annoyance, hrishimittal, matt
comment by PhilGoetz · 2009-05-19T03:36:59.798Z · LW(p) · GW(p)

He's tried, or he wouldn't have had the material to make those posts.

I appreciate your comments, and they're a good counterpoint to EY's point of view. But the fact that you need to make an assumption in order to be an effective teacher, because it's true most of the time, doesn't mean it's always true. You are making an expected-value calculation as a teacher, perhaps subconsciously:

  • If I accept that my approach doesn't work well with some people, and work with those people to try to find an approach that works for them, I will be able to effectively coach 50 people per year (or whatever).
  • If I dismiss the people whom my approach doesn't work well for as losers, and focus on the people whom my approach works well for, I'll be able to effectively coach 500 people per year.

You are also taking EY's claim that not every technique works well for every person, and caricaturing it as the claim that there is a 1-1 correspondence between people and techniques that work for them. He never said that.

The specific comments Eliezer has made, about people erroneously assuming that what worked for them should work for other people, were taken from real life and were, I think, also true and correct. In order to convince me that those specific examples were wrong, you would have to address those specific examples in detail and make a strong case why they were not really as he described them. I would rather see you narrow your claims to something reasonable than make these erroneous blanket denunciations, because they distract from the valuable things you have to say.

You don't need to duke it out with EY over who's the alpha teacher. :)

Replies from: pjeby
comment by pjeby · 2009-05-19T04:19:19.435Z · LW(p) · GW(p)

You are making an expected-value calculation as a teacher, perhaps subconsciously

No. I'm making the assumption that, until someone has actually tried something, they aren't in a position to say whether or not it works. Once someone has actually tried something, and it doesn't work, then I find something else for them to do. I don't give up and say, "oh, well I guess that doesn't work for you, then."

When I do a one-on-one consult, I don't charge someone until and unless they get the result we agree on as a "success" for that consultation. If I can't get the result, I don't get paid, and I'm out the time.

Do I make sure that the definition of "success" is reasonably in scope for what I can accomplish in one session? Sure. But I don't perform any sort of filtering (other than that which may occur by selection or availability bias, e.g. having both motivation and funds) to determine who I work with.

You are also taking EY's claim that not every technique works well for every person, and caricaturing it as the claim that there is a 1-1 correspondence between people and techniques that work for them. He never said that.

I didn't say he did, or that anybody did. What I said is that people assume they are unique and special and nothing will work for them. A LOT of people believe this, because they're under the mistaken impression that they tried 50 different things, when in fact they've been making the same mistakes, 50 different times, without ever being aware of the mistake.

The specific comments Eliezer has made, about people erroneously assuming that what worked for them should work for other people, were taken from real life and were, I think, also true and correct.

No argument there. However, when people assume that what worked for them will work for other people, they are actually mostly right.

What they are mistaken about is that 1) they're actually fully communicating what they did, and that 2) other people will be able to accurately reproduce the internal steps as well as the external and easy-to-describe ones.

So I agree at the level of the result, but I disagree about the cause. At the brain hardware level, human beings are just not that different from one another. We differ more at the software, filtering, and meta-cognitive levels, which is where the details of communication and teaching trip up the transfer of effective techniques.

In order to convince me that those specific examples were wrong,

Why would I want to? My point is only that Eliezer whining about things not working and demanding proof is counterproductive to his own goals and counter to his professed values and art. This is independent of whether he gives up or not, or whose advice or example he seeks.

I would rather see you narrow your claims to something reasonable

What claims do you mean?

Replies from: Vladimir_Nesov, Cameron_Taylor, PhilGoetz
comment by Vladimir_Nesov · 2009-05-19T09:33:12.873Z · LW(p) · GW(p)

No. I'm making the assumption that, until someone has actually tried something, they aren't in a position to say whether or not it works.

This is a wrong assumption. The correctness of a decision to even try something directly depends on how certain you are it'll work. Don't play lotteries, don't hunt bigfoot, but commute to work risking death in a traffic accident.

Replies from: pjeby
comment by pjeby · 2009-05-19T17:35:19.869Z · LW(p) · GW(p)

The correctness of a decision to even try something directly depends on how certain you are it'll work.

...weighed against the expected cost. And for the kind of things we're talking about here, a vast number of things can be tried at relatively small cost compared to one's ultimate desired outcome, since the end result of a search is something you can then go on to use for the rest of your life.
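
A toy version of that arithmetic, with invented numbers (only the shape matters):

```python
# The search cost is paid once; a hack that sticks keeps paying off.
# Every figure here is hypothetical.

trials = 50                  # hacks tried before one finally works
hours_per_trial = 0.5        # time spent on each attempt
hours_saved_per_week = 1.0   # recurring payoff of the one hack that works
years_of_use = 10

search_cost = trials * hours_per_trial                       # 25 hours, once
lifetime_payoff = hours_saved_per_week * 52 * years_of_use   # 520 hours

print(f"search: {search_cost} h, payoff: {lifetime_payoff} h "
      f"({lifetime_payoff / search_cost:.0f}x return)")
```

Even if 49 of 50 attempts fail, the one success repays the whole search roughly twentyfold.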

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2009-05-20T06:14:24.677Z · LW(p) · GW(p)

Precisely. There are self-help techniques that can be tried in minutes, even in seconds. I don't see a single reason for not allocating a fraction of one's procrastination time to trying mind hacks or anything else that might help against akrasia.

Say, if my procrastination time is 3 hours per day, I could allocate 10% of that -- 18 minutes. How long does it take to speak a sentence "I will become a syndicated cartoonist"? 10 seconds at maximum -- given 18 minutes, that's 108 repetitions!

But what if it doesn't work? Oh noes, I could kill 108 orcs during that time and perhaps get some green drops!

Replies from: Vladimir_Nesov, pjeby
comment by Vladimir_Nesov · 2009-05-20T13:06:05.827Z · LW(p) · GW(p)

Vladimir, it doesn't matter that a lottery ticket costs only 1 cent. Doesn't matter at all. It only matters that you don't expect to win by buying it.

Or maybe you do expect to win from a deal by investing 1 cent, or $10000, in which case by all means do so.

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2009-05-20T13:19:50.114Z · LW(p) · GW(p)

If I were to choose between throwing one cent away and buying a lottery ticket on it, I'd buy the ticket. (I don't consider here additional expenses such as the calories I need to spend on contracting my muscles to reach the ticket stand etc. I assume that both acts -- throwing away and buying the ticket -- have zero additional costs, and the lottery has a non-zero chance of winning.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-05-20T13:47:08.911Z · LW(p) · GW(p)

The activity of trying the procrastination tricks must be shown to be at least as good as the procrastination activity, which would be a tremendous achievement, placing these tricks far above their current standing.

You are not doing the procrastination-time activity because it's the best thing you could do, that's the whole problem with akrasia. If you find any way of replacing procrastination activity with a better procrastination activity, you are making a step away from procrastination, towards productivity.

So, you consider trying anti-procrastination tricks instead of procrastinating an improvement. But the truth of this statement is far from obvious, and it's outright false for at least my kind of procrastination. (I often procrastinate by educating myself, instead of getting things done.)

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2009-05-20T14:07:48.402Z · LW(p) · GW(p)

Yep, my example with orcs vs. tricks was a degenerate case -- it breaks down if the procrastination activity has at least some usefulness, which is certainly the case with self-education as a procrastination activity.

But this whole area is a fertile ground for self-rationalization. In my own case, it seems more productive to simply deem certain procrastination activities as having zero benefit than to actually try to assess their potential benefits compared to other activities.

(BTW, my primary procrastination activity, PC games, is responsible for my knowledge of the English language, which I consider an enormous benefit. Who knew.)

comment by pjeby · 2009-05-20T06:20:53.963Z · LW(p) · GW(p)

Say, if my procrastination time is 3 hours per day, I could allocate 10% of that -- 18 minutes. How long does it take to speak a sentence "I will become a syndicated cartoonist"? 10 seconds at maximum -- given 18 minutes, that's 108 repetitions!

IAWYC, but if you want to learn to do it correctly, you'd be better off using fewer repetitions and suggesting something aimed at provoking an immediate response, such as "I'm now drawing a cartoon"... and carefully paying attention to your inner imagery and physical responses, which are the real meat of this family of techniques.

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2009-05-20T06:34:34.206Z · LW(p) · GW(p)

PJ, I think that discussing details of particular mindhacks is off-topic for this thread -- let's discuss them here. That was just an example. (As for myself, I use an "I want" format, I don't repeat it anywhere near 108 times, and I do aim at immediate things.)

comment by Cameron_Taylor · 2009-05-19T07:23:06.809Z · LW(p) · GW(p)

At the brain hardware level, human beings are just not that different from one another. We differ more at the software, filtering, and meta-cognitive levels, which is where the details of communication and teaching trip up the transfer of effective techniques.

That claim does not match the evidence that I have encountered. Consider, for example, responsiveness to hypnosis. Hypnotic responsiveness, as measured by the Stanford test, is found to differ more between fraternal twins raised together than between identical twins raised apart. It also seems to be related to the size of the rostrum region of the corpus callosum.

I agree that people tend to overestimate their own uniqueness and I know this is something that I do myself. Nevertheless, there is clearly one element of human behavior and motivation that is attributable directly to the brain hardware level and I suggest that there are many more.

Replies from: pjeby
comment by pjeby · 2009-05-19T17:14:49.806Z · LW(p) · GW(p)

Hypnotic responsiveness as can be measured by the stanford test

If you mean the Hilgard scale, ask a few professional hypnotists how useful it actually is. Properly trained hypnotists don't use a tape-recorded monotone with identical words for every person; they adjust their pace, tone, and verbiage based on observing a person's response in progress, to maximize the response. So unless the Stanford test is something like timing how long a master hypnotist takes to produce some specified hypnotic phenomena, it's probably not very useful.

Professional hypnotists also know that responsiveness is a learned process (see also the concept of "fractionation"), which means it's probably a mistake to treat it as an intrinsic variable for measuring purposes, unless you have a way to control for the amount of learning someone has done.

So, as far as this particular variable is concerned, you're observing the wrong evidence.

Personal development is an area where science routinely barks up the wrong tree, because there's a difference between "objective" measurement and maximizing utility. Even if it's a fact that people differ, operating as if that fact were true leads to less utility for everyone who doesn't already believe they're great at something.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-20T03:05:33.026Z · LW(p) · GW(p)

If you mean the Hilgard scale, ask a few professional hypnotists how useful it actually is.

I mean the Stanford Hypnotic Susceptibility Scales, the most useful being SHSS:C. Hilgard played his cards poorly and somehow failed to have the scale named after himself. I am more interested in the findings of researchers who study the clinical work of professional hypnotists than I am in the opinions of the hypnotists themselves. Like most commonly used psychological metrics, the SHSS:C is far from perfect. Nevertheless, it does manage to correlate strongly with the success of clinical outcomes, which is the best I can expect of it.

Professional hypnotists also know that responsiveness is a learned process (see also the concept of "fractionation"), which means it's probably a mistake to treat it as an intrinsic variable for measuring purposes, unless you have a way to control for the amount of learning someone has done.

Professional scientists studying hypnosis observe that specific training can alter hypnotic responsiveness from low to high in as much as 50% of cases. Many have expressed surprise at just how stable the baseline is over time, and observe that subjects trained to respond to hypnosis revert to the baseline over time. Nevertheless, such reversion takes time, and Gosgard found (in 2004) that a training effect can persist for as long as four months.

So, as far as this particular variable is concerned, you're observing the wrong evidence.

When I began researching hypnosis I was forced to subordinate my preferred belief to what the evidence suggests. When it comes to most aspects of personality and personal psychological profile, I much prefer to believe in the power of 'nurture' and my ability to mould my own personality profile to my desires with training. I have become convinced over time that there is a far greater heritability component than I would have liked. On the positive side, the importance of 'natural talent' in acquiring expert skills is one area where the genetic component tends to be overestimated most of the time. When it comes to acquiring specialised skills, consistent effortful practice makes all the difference and natural talent is almost irrelevant.

Personal development is an area where science routinely barks up the wrong tree, because there's a difference between "objective" measurement and maximizing utility. Even if it's a fact that people differ, operating as if that fact were true leads to less utility for everyone who doesn't already believe they're great at something.

There is certainly something to that! I do see the merit in 'operating as if [something that may not necessarily be our best prediction of reality]'. It would be great if there were greater scientific efforts in investigating the most effective personal development strategies.

Replies from: pjeby
comment by pjeby · 2009-05-20T03:49:52.197Z · LW(p) · GW(p)

Professional scientists studying hypnosis observe that specific training can alter the hypnotic responsiveness from low to high in as much as 50% of cases.

Indeed. What's particularly important if you're after results, rather than theories, is that just because those other 50% didn't go from low to high, doesn't mean that there wasn't some different form, approach, environment, or method of training that wouldn't have produced the same result!

IOW, if the training they tested was 100% identical for each person, then the odds that the other 50% were still trainable is extremely high.

(And since most generative (as opposed to therapeutic) self-help techniques implicitly rely on the same brain functions that are used in hypnosis (monoidealistic imagination and ideomotor or ideosensory responses), this means that the same things can be made to work for everyone, provided you can train the basic skill.)

I have become convinced over time that there is a far greater heritability component than I would have liked.

Robert Fritz once wrote something about how if you're 5'3" you're not going to be able to win the NBA dunking contest... and then somebody did just that. It ain't what you've got, it's what you do with what you have got.

(Disclaimer: I don't remember the winner's name or even if 5'3" was the actual height.)

It's also rare that any quality we're born with is all bad or all good; what gives with one hand takes away with the other, and vice versa. The catch is to find the way that works for you.

Some of my students work better with images, some with sounds, others still with feelings. Some have to write things down, I like to talk things out. These are all really superficial differences, because the steps in the processes are still basically the same. Also, even though my wife is more "auditory" than I am, and doesn't visualize as well consciously... that doesn't mean she can't. (Over the last few years, she's gradually gotten better at doing processes that involve more visual elements.)

(Also, we've actually tried swapping around our usual modes of cognition for a day or two, which was interesting. When she took on my processing stack, we got along better, but when I took on hers, I was really stressed and depressed... but I had a lot more sympathy for some of her moods after that!)

On the positive side, the importance of 'natural talent' in acquiring expert skills is one area where the genetic component tends to be overestimated most of the time. When it comes to acquiring specialised skills, consistent effortful practice makes all the difference and natural talent is almost irrelevant.

Absolutely! Dweck's fixed and growth mindsets are absolutely central to my work. I used to call them "naturally struggling" and "naturally successful" -- well, I still do for marketing reasons. But Dweck showed with brilliant clarity where the mindsets come from: struggle results from believing that your ability in any area is a fixed quantity, rather than a variable one under your personal control.

If somebody wants a scientifically validated reason to believe what I'm saying in this thread, they need look no further than Dweck's mindsets research. It offers compelling scientific verification of the idea that thinking your ability is fixed really IS "dumbass loser" thinking!

Replies from: Eliezer_Yudkowsky, Cameron_Taylor, Cameron_Taylor
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-20T08:41:00.358Z · LW(p) · GW(p)

Indeed. What's particularly important if you're after results, rather than theories, is that just because those other 50% didn't go from low to high, doesn't mean that there wasn't some different form, approach, environment, or method of training that wouldn't have produced the same result!

Um... PJ, this is just what psychoanalysts said... and kept on saying after around a thousand studies showed that psychoanalysis had no effect statistically distinguishable from just talking to a random intelligent caring listener.

You need to read more basic rationality material, along the lines of Robyn Dawes's "Rational Choice in an Uncertain World". There you will find the records of many who engaged in this classic error mode and embarrassed themselves accordingly. You do not get to just flush controlled experiments down the toilet by hoping, without actually pointing to any countering studies, that someone could have done something differently that would have produced the effect you want the study to produce but that it didn't produce.

You know how there are a lot of self-indulgent bad habits you train your clients to get rid of? This is the sort of thing that master rationalists like Robyn Dawes train people to stop doing. And you are missing a lot of the basic training here, which is why, as I keep saying, it is such a tragedy that you only began to study rationality after already forming your theories of akrasia. So either you'll read more books on rationality and learn those basics and rethink those theories, or you'll stay stuck.

Replies from: pjeby
comment by pjeby · 2009-05-20T16:58:04.357Z · LW(p) · GW(p)

Um... PJ, this is just what psychoanalysts said... and kept on saying after around a thousand studies showed that psychoanalysis had no effect statistically distinguishable from just talking to a random intelligent caring listener.

Rounding to the nearest cliche. I didn't say my methods would help those other people, or that some ONE method would. I said that given a person Y there would be SOME method X. This is not at all the same thing as what you're talking about.

You do not get to just flush controlled experiments down the toilet by hoping, without actually pointing to any countering studies, that someone could have done something differently that would have produced the effect you want the study to produce but that it didn't produce.

What I've said is that if you have a standard training method that moves 50% of people from low to high on some criterion, there is an extremely high probability that the other 50% needed something different in their training. I'm puzzled how that is even remotely a controversial statement.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-21T01:03:06.899Z · LW(p) · GW(p)

What I've said is that if you have a standard training method that moves 50% of people from low to high on some criterion, there is an extremely high probability that the other 50% needed something different in their training. I'm puzzled how that is even remotely a controversial statement.

It is a conclusion that just doesn't follow.

Replies from: pjeby
comment by pjeby · 2009-05-21T02:37:47.872Z · LW(p) · GW(p)

It is a conclusion that just doesn't follow.

You ever heard of something called the Pygmalion effect? Did the study control for it?

By which I mean, did they control for the beliefs of the teachers who were training these subjects, in reference to:

  • the trainability and potential of the subjects themselves, and

  • the teachability of the subject matter itself?

For example, did they tell the teacher they had a bunch of students with superb hypnotic potential who just needed some encouragement to get going, or did they tell them they were conducting a test, to see who was trainable, or if it was possible to train hypnotic ability at all?

These things make a HUGE difference to whether people actually learn.

comment by Cameron_Taylor · 2009-05-20T05:23:04.589Z · LW(p) · GW(p)

Absolutely! Dweck's fixed and growth mindsets are absolutely central to my work. I used to call them "naturally struggling" and "naturally successful" -- well, I still do for marketing reasons. But Dweck showed with brilliant clarity where the mindsets come from: struggle results from believing that your ability in any area is a fixed quantity, rather than a variable one under your personal control.

This is one area where rational thinking is of real benefit. Not only is a 'growth mindset' more effective than a 'fixed mindset' when it comes to learning skills, it is also simply far more accurate.

While I was devouring the various theories and findings compiled in The Cambridge Handbook of Expertise and Expert Performance, I kept running across one common observation. There is, it seems, one predictor of expert performance in a field that has a significant heritable component. It isn't height or IQ. Although those two are highly heritable, they aren't all that great at predicting successful achievement of elite performance. As best as the researchers could decipher, the heritable component of success is more or less the ability to motivate oneself to deliberately practice for four hours a day, seven days a week, for about ten years.

Now, I would be surprised to see you concede the heritability of motivation and I definitely suggest it is an area in which to apply Dweck's growth mindset at full force! You also have a whole bag of tricks and techniques that can be used to enhance just the sort of motivation required. But I wonder, have you observed that there are some people who naturally tend to be more interested in getting involved actively in personal development efforts of the kind you support? Completely aside from whether they believe in the potential usefulness, there would seem to be many who are simply less likely to care enough to take extreme personal development seriously.

Replies from: pjeby
comment by pjeby · 2009-05-21T03:14:59.012Z · LW(p) · GW(p)

But I wonder, have you observed that there are some people who naturally tend to be more interested in getting involved actively in personal development efforts of the kind you support?

Yes and no. What I've observed is that most everybody wants something out of life, and if they're not getting it, then sooner or later their path leads to them trying to develop themselves, or causing themselves to accidentally get some personal development as a side effect of whatever their real goal is.

The people who set out for personal development for its own sake -- whether because they think being better is awesome or because they hate who they currently are -- are indeed a minority.

A not-insignificant-subset of my clientele are entrepreneurs and creative types who come to me because they're putting off starting their business, writing their book, or doing some other important-to-them project. And a significant number of them cease to be my customers the moment they've got the immediate problem taken care of.

So, it's not that people aren't generally motivated to improve themselves, so much as they're not motivated to make general improvements; they are after specific improvements that are often highly context-specific.

comment by Cameron_Taylor · 2009-05-20T04:48:43.375Z · LW(p) · GW(p)

If somebody wants a scientifically validated reason to believe what I'm saying in this thread, they need look no further than Dweck's mindsets research. It offers compelling scientific verification of the idea that thinking your ability is fixed really IS "dumbass loser" thinking!

I would like to affirm the distinction between the overall mindset you wish to encourage and the specific claims that you use while doing so. For example, I agree with your claims in this (immediate parent) post, and with the gist of your personal development philosophy, while I reject the previous assertion that differences between individuals are predominantly software rather than hardware.

(And yes, 50% was presented as a significant finding in favour of training from the baseline.)

Replies from: pjeby
comment by pjeby · 2009-05-21T03:22:22.557Z · LW(p) · GW(p)

I reject the previous assertion that differences between individuals are predominantly software rather than hardware.

I think we may agree more than you think. I agree that individuals are different in terms of whatever dial settings they may have when they show up at my door. I disagree that those initial dial settings are welded in place and not changeable.

"Hardware" and "software" are squishy terms when it comes to brains that can not only learn, but literally grow. And ISTM that most homeostatic systems in the body can be trained to have a different "setting" than they come from the factory with.

comment by PhilGoetz · 2009-05-19T22:35:17.798Z · LW(p) · GW(p)

I would rather see you narrow your claims to something reasonable

What claims do you mean?

The gist of your top-level comment here is that your techniques work for everyone; and if they don't work for someone, it's that person's fault.

Replies from: pjeby
comment by pjeby · 2009-05-20T00:20:21.256Z · LW(p) · GW(p)

The gist of your top-level comment here is that your techniques work for everyone; and if they don't work for someone, it's that person's fault.

Here's the problem: when someone argues that some techniques might not work for some people, their objective is not merely to achieve epistemic accuracy.

Instead, the real point of arguing such a thing is a form of self-handicapping. "Bruce" is saying, "not everything works for everyone... therefore, what you have might not work for me... therefore, I don't have to risk trying and failing."

In other words, the point of saying that not every technique works for everyone is to apply the Fallacy of Grey: not everything works for everybody, therefore all techniques are alike, therefore you cannot compare my performance to anyone else, because maybe your technique just won't work for me. Therefore, I am safe from your judgment.

This is a fully general argument against trying ANY technique, for ANY purpose. It has ZERO to do with who came up with the technique or who's suggesting it; it's just a Litany Against Fear... of failure.

As a rationalist and empiricist, I want to admit the possibility that I could be wrong. However, as an instrumentalist, instructor, and helper-of-people, I'm going to say that, if you allow your logic to excuse your losing, you fail logic, you fail rationality, and you fail life.

So no, I won't be "reasonable", because that would be a failure of rationality. I do not claim that any technique X will always work for all persons; I merely claim that, given a person Y, there is always some technique X that will produce a behavior change.

The point is not to argue that a particular value of X may not work with a particular value of Y, the point is to find X.

(And the search space for X, seen from the "inside view", is about two orders of magnitude smaller than it appears to be from the "outside view".)

Replies from: loqi
comment by loqi · 2009-05-20T04:03:58.374Z · LW(p) · GW(p)

Instead, the real point of arguing such a thing is a form of self-handicapping. "Bruce" is saying, "not everything works for everyone... therefore, what you have might not work for me... therefore, I don't have to risk trying and failing."

I'm pretty surprised to see you make this type of argument. Are you really so sure that you have that precise of an understanding of the motives behind everyone who has brought this up? You seem oblivious to the predictable consequences of acting so unreasonably confident in your own theories. Your style alone provokes skepticism, however unwarranted or irrational it may be. Seeing you write this entire line of criticism off as "they're just Brucing" makes me wonder just how much your brand of "instrumental" rationality interferes with your perception of reality.

Replies from: Eliezer_Yudkowsky, pjeby
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-05-20T08:46:58.971Z · LW(p) · GW(p)

Seconded.

Here's the problem: when someone argues that some techniques might not work for some people, their objective is not merely to achieve epistemic accuracy. Instead, the real point of arguing such a thing is a form of self-handicapping.

Because of course it is impossible a priori that any technique works for one person but not another. Furthermore, it is impossible for anyone to arrive at this conclusion by an honest mistake. They all have impure motives; furthermore they all have the same particular impure motive; furthermore P. J. Eby knows this by virtue of his vast case experience, in which he has encountered many people making this assertion, and deduced the same impure motive every time.

To quote Karl Popper:

The Freudian analysts emphasized that their theories were constantly verified by their "clinical observations." As for Adler, I was much impressed by a personal experience. Once, in 1919, I reported to him a case which to me did not seem particularly Adlerian, but which he found no difficulty in analyzing in terms of his theory of inferiority feelings, although he had not even seen the child. Slightly shocked, I asked him how he could be so sure. "Because of my thousandfold experience," he replied; whereupon I could not help saying: "And with this new case, I suppose, your experience has become thousand-and-one-fold."

I'll say it again. PJ, you need to learn the basics of rationality - in this you are an apprentice and you are making apprentice mistakes. You will either accept this or learn the basics, or not. That's what you would tell a client, I expect, if they were making mistakes this basic according to your understanding of akrasia.

Replies from: Emile
comment by Emile · 2009-05-21T19:15:43.933Z · LW(p) · GW(p)

Heh, that Adler anecdote reminds me of a guy I know who tends to believe in conspiracy theories, and who was backing up his belief that the US government is behind 9-11 by saying how evil the US government tends to be. Of course, 9-11 will most likely serve as future evidence of how evil the US government is.

(Not that I can tell whether that's what's going on here)

comment by pjeby · 2009-05-20T04:29:28.061Z · LW(p) · GW(p)

Are you really so sure that you have that precise of an understanding of the motives behind everyone who has brought this up?

What makes you think I'm writing to the motives of specific people? If I were, I'd have named names (as I named Eliezer).

In the post you were quoting, I was speaking in the abstract, about a particular fallacy, not attributing that fallacy to any particular persons.

So if you don't think what I said applies to you, why are you inquiring about it?

(Note: reviewing the comment in question, I see that I might not have adequately qualified "someone ... who argues" -- I meant, someone who argues insistently, not someone who merely "argues" in the sense of, "puts forth reasoning". I can see how that might have been confusing.)

You seem oblivious to the predictable consequences of acting so unreasonably confident in your own theories.

No, I'm well aware of those consequences. The natural consequence of confidently stating ANY opinion is to have some people agree and some disagree, with increased emotional response by both groups, compared to a less-confident statement. Happens here all the time. Doesn't have anything to do with the content, just the confidence.

Seeing you write this entire line of criticism off as "they're just Brucing" makes me wonder just how much your brand of "instrumental" rationality interferes with your perception of reality.

I wrote what I wrote because some of the people here who are Brucing via "epistemic" arguments will see themselves in my words, and maybe learn something.

But if I water down my words to avoid offense to those who are not Brucing (or who are, but don't want to think about it) I lessen the clarity of my communication to precisely the group of people I can help by saying something in the first place.

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-21T01:05:26.475Z · LW(p) · GW(p)

But if I water down my words to avoid offense to those who are not Brucing (or who are, but don't want to think about it) I lessen the clarity of my communication to precisely the group of people I can help by saying something in the first place.

Perhaps the reverse. By limiting your claims to the important ones, those that are actually factual, you reduce the distraction. You can be assured that 'Bruce' will take blatant fallacies or false claims as an excuse to ignore you. Perhaps they may respond better to a more consistently rational approach.

Replies from: pjeby
comment by pjeby · 2009-05-21T02:41:44.058Z · LW(p) · GW(p)

You can be assured that 'Bruce' will take blatant fallacies or false claims as an excuse to ignore you

And if there aren't any, he'll be sure to invent them. ;-)

Perhaps they may respond better to a more consistently rational approach.

Hehehehe. Sure, because subconscious minds are so very rational. Right.

Conscious minds are reasonable, and occasionally rational... but they aren't, as a general rule, in charge of anything important in a person's behavior. (Although they do love to take credit for everything, anyway.)

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-05-21T04:25:06.719Z · LW(p) · GW(p)

And if there aren't any, he'll be sure to invent them. ;-)

No reason to make his job easier.

Hehehehe. Sure, because subconscious minds are so very rational. Right.

No, but personally, mine is definitely sufficiently capable of noticing minor logical flaws to use them to irrationally dismiss uncomfortable arguments. This may be rare, but it happens.

Replies from: pjeby
comment by pjeby · 2009-05-21T05:47:34.382Z · LW(p) · GW(p)

No reason to make his job easier.

Actually, my point was that I hadn't made any. Many of the objections that people are making are about things I never actually said.

For example, some people insist on arguing with the ideas that:

  1. teaching ability varies, and

  2. teachers' beliefs make a difference to the success of their students.

And somehow they're twisting these very scientifically supported ideas into me stating some sort of fallacy... and conveniently ignoring the part where I said that "If you're more interested in results than theory, then..."

Of course if you do standardized teaching and standardized testing you'll get varying results from different people. But if you want to maximize the utility that students will get from their training, you'll want to vary how you teach them, instead, according to what produces the best result for that individual.

That doesn't mean that you need to teach them different things, it's that you'll need to take a different route to teach them the same thing.

A learning disability is not the same thing as a performance disability, and my essential claim here is that differences in applicability of anti-akrasia and other behavior change techniques are more readily explained as differences in learning ability and predispositions, than in differences in applicability of the specific techniques.

I say this because I used to think that different things worked for different people (after all, so many didn't work for me!), and then I discovered that the problem is that people (like me) think they're following steps that they actually aren't following, and they don't notice the discrepancy because the discrepancies are "off-map" for them.

That is, their map of the territory doesn't include one or more of the critical distinctions that make a technique work, like the difference between consciously thinking something, versus observing automatic responses, or the difference between their feelings and their thoughts about their feelings. If you miss one of these distinctions, a technique will fail, but it's not the technique that's broken, any more than a bicycle is broken if you haven't learned to stay on it yet.

People vary in their ability to learn these distinctions; some people get it right away, some need help. Some need lots of help. As I get better at verbalizing and pointing out these distinctions, I get (a little) better at getting people who have difficulties to learn them faster. (And as I got better at grasping the distinctions, I also became able to make more and more things work for me that never worked before.)

The really silly thing about all this is that, from my POV, the people who insist that there is neither any theory nor universally applicable techniques, are basically acting like theists: insisting I treat their irrational and demonstrably-false beliefs as worthy of serious consideration. It reminds me of Perry Marshall arguing that because we don't know where DNA comes from, we have to "admit" that maybe God did it.

That is, "Some techniques have not worked for me" is taken as evidence supporting a hypothesis of "it's not me, it's the techniques". However, by itself, this is equally valid as supporting evidence for, "the technique is fine, you just didn't learn how to do it right." When you add independent evidence that other people claim the same technique didn't work for them, but then can be taught to make it work for them, then it begins to be more supportive of alternative hypotheses.

And yet, this doesn't seem to make anybody update. Instead, I am being "irrationally confident" for not giving enough weight to the "it's not me, it's the technique" theory... when I have plenty of evidence (personal and customers) that is not at all consistent with that theory, and they only have evidence that is equally applicable to BOTH theories.

Not everyone making that argument is necessarily Brucing, in the sense of directly seeking failure. Some are just mistaken. However, the net effect of the belief is the same: the person stops before they learn, like a kid who's convinced he or she is just not cut out for bicycle riding.

(P.S. It's important to understand this is not about me or "my" techniques -- most of which I didn't invent, anyway! As I've said several times, there are TONS of things out there that work... if you have the necessary distinctions in your map. And most things that work share the same critical distinctions! I used to believe that my hand-picked set of techniques was special, but now I know that it was always more about the teachability of the techniques I picked, and my insistence on using testing as a path to teaching. If you diligently apply these principles, virtually ANY technique can be made to work. It's got nothing to do with ME.)

comment by Cameron_Taylor · 2009-05-19T06:48:32.226Z · LW(p) · GW(p)

On LW, I mostly bide with polite patience those people who talk about the stuff I teach as if it's a matter of variation from person to person as to whether stuff works, or that things sometimes work and sometimes not, or whatever, blah blah fudge factor nonsense they individually prefer. That's all well and good here, because those people are not my clients.

On this topic, your interpretation of those replying to you here sometimes does not match that of the people typing the replies, or that of other observers. This includes the distortion of replies to fit the closest matching 'standard objection'. Were a rationalist sensei to 'accept that sort of bullshit' from a pupil, she would have failed them.

comment by Annoyance · 2009-05-19T14:18:37.624Z · LW(p) · GW(p)

Excellent comment. I have only two objections. First, this statement:

But it's not the content of the objection that matters, it's that ANY objection that stops you from actually trying something useful, means you fail. You lose.

is good on its merits, but I caution everyone to be careful about asserting that some technique or other is "something useful". There are plenty of reasons not to try any random thing that enters into our heads, and even when we're engaged in a blind search, we shouldn't suspend our evaluative functions completely, even though they may be assuming things that blind us to the solution we need. They also keep us from chopping our legs off when we want to deal with a stubbed toe.

My second objection deals with the following:

If the master sat there listening to people's inane theories about how they need to punch differently than everybody else, or their insistence that they really need to understand a complete theory of combat, complete with statistical validation against a control group, before they can even raise a single fist in practice, that master would have failed their students AND their Art. Just as EY fails his students and his art by the public positions he has taken on his weight and akrasia.

What grounds are there for assigning EY the status of 'master'? Hopefully in a martial arts dojo there are stringent requirements for the demonstration of skill before someone is put in a teaching position, so that even when students aren't personally capable of verifying that the 'master' has actually mastered techniques that are useful, they can productively hold that expectation.

When did EY demonstrate that he's a master, and how did he supposedly do so?

Replies from: thomblake, pjeby
comment by thomblake · 2009-05-19T14:27:31.511Z · LW(p) · GW(p)

Hopefully in a martial arts dojo there are stringent requirements for the demonstration of skill before someone is put in a teaching position

There really aren't, though one does need to jump through some hoops. That's part of what I like about this analogy.

Replies from: Annoyance
comment by Annoyance · 2009-05-19T14:48:28.410Z · LW(p) · GW(p)

A lot of martial arts schools are more about "following the rules" and going through the motions of ritual forms than learning useful stuff.

As has been mentioned here before multiple times, many martial artists do very poorly in actual fights, because they've mastered techniques that just aren't very good. They were never designed in light of the goals and strategies of people who really want to win physical combat. Against brutally effective and direct techniques, they lose.

Humans like to make rituals and rules for things that have none. This is a profound weakness and vulnerability, because they also tend to lose sight of the distinction between reality and the rules they cause themselves to follow.

Replies from: MichaelVassar
comment by MichaelVassar · 2009-05-19T18:08:00.661Z · LW(p) · GW(p)

There are no "things that have no rules". If there were, you couldn't perceive them in the first place in order to make up rules about them.

Replies from: Annoyance
comment by Annoyance · 2009-05-19T18:26:06.071Z · LW(p) · GW(p)

Read that as "socially-recognized principles as to how something is to be done for things that physics permits in many different ways".

Spill the salt, you must throw some over your shoulder. Step on a crack, break your mother's back. Games and rituals. When people forget they're just games, problems arise.

Replies from: jscn
comment by jscn · 2009-05-19T19:50:39.196Z · LW(p) · GW(p)

This tendency can be used for good, though. As long as you're aware of the weakness, why not take advantage of it? Intentional self-priming, anchoring, rituals of all kinds can be repurposed.

Replies from: Annoyance
comment by Annoyance · 2009-05-20T14:48:05.088Z · LW(p) · GW(p)

Because repetition tends to reinforce things, both positive and negative.

You might be able to take advantage of a security weakness in your computer network, but if you leave it open other things will be able to take advantage of it too.

It's far better to close the hole and reduce vulnerability, even if it means losing access to short-term convenience.

comment by pjeby · 2009-05-19T17:26:38.047Z · LW(p) · GW(p)

There are plenty of reasons not to try any random thing that enters into our heads

...and most of those reasons are fallacious.

The opposite of every Great Truth is another great truth: yes, you need to look before you leap. But he who hesitates is lost. (Or in Richard Bandler's version, which I kind of like better, "He who hesitates... waits... and waits... and waits... and waits...")

When did EY demonstrate that he's a master, and how did he supposedly do so?

I never said he did.

comment by hrishimittal · 2009-05-19T12:35:18.553Z · LW(p) · GW(p)

If the master sat there listening to people's inane theories about how they need to punch differently than everybody else, or their insistence that they really need to understand a complete theory of combat, complete with statistical validation against a control group, before they can even raise a single fist in practice, that master would have failed their students AND their Art.

Even so, as a student, I do want the master to understand a complete theory of combat, complete with statistical validation against a control group.

What is your theory, O Master?

Replies from: pjeby
comment by pjeby · 2009-05-19T17:41:20.013Z · LW(p) · GW(p)

Even so, as a student, I do want the master to understand a complete theory of combat, complete with statistical validation against a control group.

Understanding something doesn't necessarily mean you can explain it. And explaining something doesn't necessarily mean anyone can understand it.

Can you explain how to ride a bicycle? Can you learn to ride a bicycle using only an explanation?

The theory of bicycle riding is not the practice of how to ride a bicycle.

What is your theory, O Master?

Someone else's understanding is not a substitute for your experience. That's my only "theory", and I find it works pretty well in "practice". ;-)

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-20T09:05:42.091Z · LW(p) · GW(p)

Can you explain how to ride a bicycle?

Yes.

Can you learn to ride a bicycle using only an explanation?

Yes.

Replies from: pjeby
comment by pjeby · 2009-05-20T17:15:49.848Z · LW(p) · GW(p)

Can you explain how to ride a bicycle? Yes. Can you learn to ride a bicycle using only an explanation? Yes.

By only an explanation, I mean without practice, and without ever having seen someone ride one.

And by "explain how to ride a bicycle", I mean, "provide an explanation that would allow someone to learn to ride, without any other information or practice."

Oh, and by the way, the communication is one-way, whether you're the explainer or the explainee. No questions, no feedback, no correcting of mistakes.

I thought these things would've been clear in context, since we were contrasting the teaching of martial arts (live feedback and practice) with the teaching of self-help (in one-way textual form).

People expect to be able to learn to do a self-help technique in a single trial from a one-way explanation, perhaps because our brains are biased to assume they can already do anything a brain "ought to" be able to do "naturally".

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-21T01:50:22.633Z · LW(p) · GW(p)

People expect to be able to learn to do a self-help technique in a single trial from a one-way explanation, perhaps because our brains are biased to assume they can already do anything a brain "ought to" be able to do "naturally".

Do they really expect to do that? Crazy kids.

comment by matt · 2009-05-19T08:09:18.190Z · LW(p) · GW(p)

it's that ANY objection that stops you from actually trying something useful, means you fail. You lose. You are not being a smart, rational skeptic, you're being a dumbass loser.

So, you still need to know what's likely to be useful. You can waste a lot of time trying stuff that just isn't going to work.

(And, just in case it wasn't clear - I am a long (long long) way from the belief that Eliezer is "a dumbass loser" (which you don't quite say, but it's a confusion I'd like to avoid).)

Replies from: pjeby, JamesCole
comment by pjeby · 2009-05-19T17:23:11.055Z · LW(p) · GW(p)

You can waste a lot of time trying stuff that just isn't going to work.

Either you have something better to do with your time or you don't.

If you don't have something better, then it's not a waste of time.

If you do have something better to do, but you're spending your time bitching about it instead of working on it, then trying even ludicrous things is still a better use of your time.

IMO, the real waste of time is when people spend all their time making up explanations to excuse their self-created limitations.

comment by JamesCole · 2009-05-19T08:29:50.156Z · LW(p) · GW(p)

I'd also add:

  • There's heaps of stuff that's 'useful'. What matters is how useful it is - especially relative to things that might be more useful. We all have limited time and (other) resources; it's a cost/benefit ratio. The good is the enemy of the great, and all that. (A toy version of that calculation is sketched after this list.)

  • Often it's unclear how useful something really is, and you have to take that into account when judging whether it's worth your while. You also have to judge whether it's even worth your while to try evaluating it, because there are always heaps and heaps of options and you can't spend your time evaluating them all.
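
To make that cost/benefit framing concrete, here is a minimal sketch of the back-of-the-envelope comparison being suggested. Every number in it (success probabilities, trial costs, benefits) is a made-up illustration, not data:

    # Back-of-the-envelope expected value of trying a brain hack,
    # normalized by the time the trial costs. All numbers are
    # illustrative assumptions.

    def ev_per_trial_hour(p_works, benefit_hours, trial_hours):
        return (p_works * benefit_hours - trial_hours) / trial_hours

    # A cheap hack with a modest chance of working for you...
    print(ev_per_trial_hour(p_works=0.1, benefit_hours=200, trial_hours=5))
    # -> 3.0: each hour spent on the trial returns ~3 hours in expectation

    # ...versus a costly intervention with much better odds.
    print(ev_per_trial_hour(p_works=0.5, benefit_hours=200, trial_hours=80))
    # -> 0.25: still positive, but a far weaker use of the same time

The point of the sketch is only that the ranking can be computed from rough estimates; nothing in it requires knowing in advance that any particular hack will work.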

comment by Vladimir_Nesov · 2009-05-19T09:12:12.741Z · LW(p) · GW(p)

When you spend time trying out the 1000 popular hacks that do you no good, you lose. You lose all the time and energy invested in the enterprise, for which you could find a better use.

How do you know anything works, before even thinking about what in particular to try out? How much thought, and how much work, is it reasonable to spend investigating a possibility? Intuition, and evidence. Self-help folk notoriously don't give evidence for the efficacy of their procedures, which in itself looks like evidence of the absence of that efficacy - a reason to believe that you'll only waste time going through the motions. My intuition agrees. (A toy version of that update is sketched below.)

A deep theory is both a tool for constructing unusually powerful techniques, and a signal that the techniques have a nontrivial probability of viability even prior to experimental testing.
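
To spell out the "evidence of absence" step, here is a one-line Bayes update. The likelihoods are invented for illustration; the only claim is the direction of the shift:

    # P(technique works | its proponents offer no evidence), by Bayes' rule.
    # All three numbers are assumed for illustration.

    def posterior_works(prior, p_silence_if_works, p_silence_if_not):
        joint_works = prior * p_silence_if_works
        joint_not = (1 - prior) * p_silence_if_not
        return joint_works / (joint_works + joint_not)

    # If techniques that work would usually accumulate some evidence,
    # silence from the proponents should shift us downward:
    print(posterior_works(prior=0.3,
                          p_silence_if_works=0.4,
                          p_silence_if_not=0.9))   # ~0.16, down from 0.3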

Replies from: pjeby
comment by pjeby · 2009-05-19T17:55:27.081Z · LW(p) · GW(p)

Self-help folk notoriously don't give evidence for efficacy of their procedures

Anecdotal evidence is still evidence.

Note that one of EY's rationality principles is that if you apply arguments selectively, then the smarter you get, the stupider you become.

So, the reason I am referring to this cross-pollination of epistemic standards to an instrumental field as being "dumbass loser" thinking, is because as Richard Bach once put it, "if you argue for your limitations, then sure enough, you get to keep them."

If you require that the "useful" first be "true", then you will never be the one who actually changes anything. At best, you can only be the person who does an experiment to find the "true" in the already-useful... which will already have been adopted by those who were looking for "useful" first.

comment by zaph · 2009-05-19T12:53:52.337Z · LW(p) · GW(p)

I think that if there were a straightforward hack of the kind EY is looking for, he would know about it already. I just don't really believe that such a hack exists, based on my admittedly meager readings in experimental psychology. Further, while the idea of a "mind hack" is a cute metaphor, it can be misguided. Computer hackers literally create code that directs processes. We can at best manipulate our outside environment in ways that we hope will affect what is still a very mysterious brain. What EY's looking for would be the result of a well-funded and decades-long research project. Unless there truly is a Dharma Initiative looking into these things while staying behind the scenes, I don't think there's going to be a journal article that will provide the profound insight he's looking to find.

I do want to mention something about Seth Roberts, which he sort of casually mentions in The Shangri-La Diet. He wrote that he was eating much less frequently - probably one full meal a day. That's something referred to as intermittent fasting. What the book misses, I would postulate, is how Seth used the flavorless calories to transition to that kind of diet. IF is suggested as a way to control calories because people's bodies cue hunger to the times they're accustomed to eating; if you aren't accustomed to eating at a given time, you eat a bit less (since you're only filling your stomach the once, or so goes the idea). I certainly don't think that noticing this gives me the complete picture of how diets should now be constructed. But I do feel that Seth Roberts, attentive as he is, did not fully consider all the changes he had made, and treated his reduced meal frequency as a mere aftereffect. In writing his popular book, he did not consider all the hacks that he had put into place for himself.

Akrasia-conquerors will need to find ways to win against their lesser but still powerful drives. Teachers of akrasia-conquering will need to be able to honestly detail everything that they did, which will probably require very keen observers as peers and students. The demand that a perfect system be in place before one attempts to overcome akrasia is itself an example of akrasia.

comment by stcredzero · 2009-05-19T17:29:08.539Z · LW(p) · GW(p)

I am wondering, what are the good reasons for a rationalist to lose?

Replies from: steven0461, Alicorn, bentarm
comment by steven0461 · 2009-05-19T17:48:56.423Z · LW(p) · GW(p)
  • bad luck
  • if it's impossible to win (in that case, just lose less; a semantic difference)
  • if "winning" is defined as something else than achieving what you truly value

That's all of them, I think.

ETA: More in the context of this post: a good reason to lose at some subgoal is if winning at the subgoal can be done only at the cost of losing too much elsewhere.

Replies from: billswift, Vladimir_Nesov
comment by billswift · 2009-05-20T00:47:23.008Z · LW(p) · GW(p)

Another is failure of knowledge. It's possible simply not to know something you need to succeed, at the time you need it. No one can know everything they might possibly need to. Losing that way is not irrational, if you had no way of knowing beforehand what you would need to know.

comment by Vladimir_Nesov · 2009-05-19T17:58:10.403Z · LW(p) · GW(p)

I exclude bad luck from this list, since winning might as well be defined over counterfactual worlds. If you lose in your real world, you can still figure out how well you'd do in the counterfactuals.

comment by Alicorn · 2009-05-19T17:34:57.111Z · LW(p) · GW(p)

Well-chosen risks turning out badly?

comment by bentarm · 2009-05-20T01:39:11.333Z · LW(p) · GW(p)

I'll give you odds of 2:1 against that this coin will come up heads...
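
Spelling out the arithmetic behind the offer (stake normalized to 1 unit; the odds are from the comment, the rest is just expected value):

    # A fair coin taken at 2:1 odds: you should accept the bet,
    # and you will still lose it half the time.
    p_heads = 0.5
    payout_if_heads = 2.0   # 2 units won per 1 unit staked
    stake = 1.0

    ev = p_heads * payout_if_heads - (1 - p_heads) * stake
    print(ev)            # 0.5 -> positive, so take the bet
    print(1 - p_heads)   # 0.5 -> how often the rational bettor loses anyway

A rational decision followed by a loss - which is the point.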

comment by AdeleneDawner · 2009-05-22T13:45:59.091Z · LW(p) · GW(p)

Wow, I came late to this party.

One takeaway here is, don't reduce your search space to zero if you can help it. If that means that you have to try things without substantial evidence that they'll work, well, it's that or lose, and we're not supposed to lose.

I can think of a few situations where it'd make sense to reduce your search space to zero pending more data, though. The general rule seems to be that this is defensible only when the reason for halting the search matters more to you than the goal you're giving up by not looking for solutions. Choosing not to look for solutions in order to avoid danger makes sense on those terms, as does stopping because trying the solutions would take resources away from other projects that are also important.

comment by michaelsullivan · 2009-05-19T22:36:56.708Z · LW(p) · GW(p)

On your reaction to "a way to reject the placebo effect", it's important to distinguish what we are trying to do. If all I care about is fixing a given problem for myself, I don't care whether I solve it by placebo effect or by a repeatable hack.

If I care about figuring out how my brain works, then I will need a way to reject or identify the placebo effect.

Replies from: billswift, pjeby, SoullessAutomaton
comment by billswift · 2009-05-20T00:37:39.847Z · LW(p) · GW(p)

You also need to avoid placebo effects if you want the hack to be repeatable (if you run into a similar problem again), generalizable (to work on a wider class of problems), or reliable.

comment by pjeby · 2009-05-20T00:39:33.625Z · LW(p) · GW(p)

If all I care about is fixing a given problem for myself, I don't care whether I solve it by placebo effect or by a repeatable hack.

Actually, it is important to separate certain kinds of placebo effects. The reason I use somatic marker testing in my work is to replace vague "I think I feel better"'s with "Ah! I'm responding differently to that stimulus now"'s.

Technically, "I think I feel better" isn't really a placebo effect; it's just vagueness and confusion. The "real" placebo effect is just acting as if a certain premise were true (e.g. "this pill will make me better").

In that sense, affirmations, LoA, and hypnosis are explicit applications of the same principle, in that they attempt to set up the relevant expectation(s) directly.

Similarly, Eliezer's "count to 10 and get up" trick is also a "placebo effect", in that it operates by setting up the expectation that, "after I count to 10, I'm going to get up".

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-21T01:52:43.357Z · LW(p) · GW(p)

In that sense, affirmations, LoA, and hypnosis are explicit applications of the same principle, in that they attempt to set up the relevant expectation(s) directly.

An fMRI will tell you something different.

Similarly, Eliezer's "count to 10 and get up" trick is also a "placebo effect", in that it operates by setting up the expectation that, "after I count to 10, I'm going to get up".

No it isn't.

Replies from: pjeby
comment by pjeby · 2009-05-21T02:17:44.232Z · LW(p) · GW(p)

An fMRI will tell you something different.

Really? There's a study where they compared those three things? And they controlled for whether the participants were actually any good at producing results with affirmations or LoA? If so, I'd love to read it.

No it isn't.

How do you figure that?

Replies from: Cameron_Taylor
comment by Cameron_Taylor · 2009-05-21T02:41:46.014Z · LW(p) · GW(p)

Really? There's a study where they compared those three things? And they controlled for whether the participants were actually any good at producing results with affirmations or LoA? If so, I'd love to read it.

A study? Two of the four would be sufficient to refute your claim that the three listed are each applications of the same principle as the placebo pill you compared them to. The studies need not control for skill; they may control for the actual measured effectiveness of the outcomes. If you are interested, you may begin your research here.

How do you figure that?

You arbitrarily redefined what the "real placebo effect" is to your own convenience and then casually applied it to something that is not a placebo effect. Don't make me speak Latin in a Scottish accent.

Replies from: pjeby
comment by pjeby · 2009-05-21T02:55:13.212Z · LW(p) · GW(p)

You arbitrarily redefined what the "real placebo effect" is to your own convenience and then casually applied it to something that is not a placebo effect.

From Wikipedia's "placebo" page:

The physiological effect of a placebo depends upon its suggested or anticipated action. A placebo described as a muscle relaxant will cause muscle relaxation and if the opposite, muscle tension.[75] A placebo presented as a stimulant will have this effect on heart rhythm, and blood pressure, but when administered as a depressant, the opposite effect.[76]

Related to this power of expectation is the person’s belief that the treatment that they are taking is real: in both those taking real drugs and those taking placebos, those people who believe they are taking the real treatment (whether they in fact are or not) show a stronger effect, and vice versa, those who think they are taking the placebo (whether they are or not) a lesser one.

So, how am I getting this wrong, exactly?

Replies from: jimrandomh
comment by jimrandomh · 2009-05-21T03:23:18.444Z · LW(p) · GW(p)

Taboo the phrase "placebo effect", please. That term was coined to refer to psychological effects intruding on non-psychological studies. When the goal is to achieve a psychological effect, it becomes meaningless or misleading.

Replies from: pjeby
comment by pjeby · 2009-05-21T04:29:39.095Z · LW(p) · GW(p)

Taboo the phrase "placebo effect", please. That term was coined to refer to psychological effects intruding on non-psychological studies. When the goal is to achieve a psychological effect, it becomes meaningless or misleading.

You should probably read the earlier part of the thread, where I distinguished between what might be called "uncertainty effect" (thinking you're getting better, when you're not) and "expectation effect", where an expectation of success (or failure) actually leads to behavior change. This latter effect is functionally indistinguishable from the standard placebo effect, and is very likely to be the exact same thing.

As you point out, we want expectation effects to occur. Affirmations, LoA, and hypnosis are all examples of methods specifically aimed at creating intentional expectation effects, but any method can of course produce them unintentionally.

The main difference between expectation effect and "placebo classic" is that placebo classic loses its effect when somebody discovers that it's a placebo... well, actually that's still just another expectation effect, since people who take a real drug and think it's a placebo also react to it less.

Everything we know about human beings points to expectation effects being incredibly powerful, but it seems relatively little research is devoted to properly exploiting this. Perhaps it's too useful to be considered high status, or perhaps not "serious enough".

comment by SoullessAutomaton · 2009-05-19T22:40:48.809Z · LW(p) · GW(p)

There's also the question of to what extent the placebo effect is actually meaningful when "causing effects in the mind" is the goal.

comment by MendelSchmiedekamp · 2009-05-19T17:10:09.030Z · LW(p) · GW(p)

The approach laid out in this post is likely to be effective if your predominant goal is to find a collection of better-performing akrasia and willpower hacks.

If, however, finding such hacks is only a possible intermediate goal, then different conclusions can be reached. This is even more telling if improved willpower and akrasia resistance is your intermediate goal - regardless of whether you choose hacks or some other method for realizing it.

Another bad reason for rationalists to lose is to try to win every contest placed in front of them. Choosing your battles is the same as choosing your strategies, just at a higher scale.

comment by PhilGoetz · 2009-05-19T03:47:50.215Z · LW(p) · GW(p)

The upvote/comment ratio here is remarkably high. What does that mean?

Replies from: SilasBarta, Alicorn, MichaelBishop
comment by SilasBarta · 2009-05-19T16:56:57.641Z · LW(p) · GW(p)

Well, it looks like I'm an extreme outlier on this one, because I actually voted it down because I thought it got a lot wrong, and for bad reasons.

First of all, despite criticizing EY for "needing" things that would merely be supercool, matt lists a large number of things that would also be merely supercool: it just doesn't seem like you need all of those chance values either.

Second, matt seemed to miss why EY was asking for all of that information: that presenting a "neato trick" that happens to work, provides very little information as to why it works, and when it should be used, etc. EY had explained that he personally went through such an experience and described what is lacking when you don't provide the information he asked for.

In short, EY provided very good reasons why he should be skeptical of just trying every neato trick; matt said very little that was responsive to his points.

Replies from: matt
comment by matt · 2009-05-19T22:15:36.919Z · LW(p) · GW(p)

despite criticizing EY for "needing" things that would merely be supercool, matt lists a large number of things that would also be merely supercool

Yah, good point - those are meant to be discussion points, but that's not really very clear as written. I don't mean to imply that we need everything in the lists, but to characterize the sort of thing we should be looking for.

Second, matt seemed to miss why EY was asking for all of that information

No, I don't think that's right. Eliezer is presenting as needful lots of stuff that he's just not going to get. That seems to be leading him not to try anything until he finds something that passes through his very tight filter. I'm claiming that the relevant filter should be built on expected utility, and that there is pretty good information available (most of the stuff in the lists can at least be estimated with little time invested) that would lead him to try more hacks than the none that are likely to pass his current filter.

EY provided very good reasons why he should be skeptical of just trying every neato trick

I'm very much not suggesting that you should try "every neato trick". I am suggesting that high expected utility is a better filter than robust scientific research. If you have robust research available, you should use it. When you don't, have a look through my lists and see whether it's worth trying something anyway. You might manage a win.
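
One way to see why the expected-utility filter can beat the stringent-evidence filter, even when each individual hack is unreliable: cheap trials add up. A sketch, with the per-hack success probability assumed purely for illustration:

    # If each cheap hack has only a 10% chance of working for you,
    # trying a dozen still finds a winner more often than not.
    # The 10% figure and the independence of trials are assumptions.

    def p_at_least_one_works(p_each, n_trials):
        return 1 - (1 - p_each) ** n_trials

    print(p_at_least_one_works(0.10, 12))   # ~0.72

A filter that passes only well-validated techniques passes roughly none of these, and so (under these assumptions) forgoes most of that chance of a win.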

comment by Alicorn · 2009-05-19T04:19:10.606Z · LW(p) · GW(p)

Maybe it means the post was upvoted for agreement, and people don't have much to add, and don't want to just say "yay! good post!"?

comment by Mike Bishop (MichaelBishop) · 2009-05-19T17:07:09.844Z · LW(p) · GW(p)

Could there be a connection to the recent slowing of the rate of new posts to LW?

comment by haig · 2009-05-20T07:01:35.262Z · LW(p) · GW(p)

Shouldn't this be in the domain of psychological research? The positive psychology movement seems to have a lot of momentum, and many young researchers are pursuing lines of questioning in these areas. If you really want a rigorous, empirically verified, general-purpose theory, that seems to be the best bet.

comment by Annoyance · 2009-05-19T14:02:49.522Z · LW(p) · GW(p)

It IS important to note individual variation. If someone has a fever that's easily cured by a specific drug, but they tell you that they have a rare, fatal allergy to that medication, you don't give the drug to them anyway on the grounds that it's "unlikely" it'll kill them.

Similarly, if a particular drug is known not to have the 'normal' effect in a patient, you don't keep giving it to them in hopes that their bodies will suddenly begin acting differently.

The key is to distinguish between genuine feedback of failure, and rationalization. THIS POINT IS NOT ADDRESSED ENOUGH HERE. There are simple and effective means of identifying the difference between rationality and rationalization, but they are not discussed, they are not applied, and frankly they don't even seem to be known here at LW.

Replies from: zaph, conchis
comment by zaph · 2009-05-19T14:12:33.491Z · LW(p) · GW(p)

Perhaps you could write an article discussing the ways in which rationality can be distinguished from rationalization? I for one would find it useful. I find myself using rationalizations that mask themselves as rationality (and often notice too late), and it would help me to do that less.

comment by conchis · 2009-05-19T14:12:12.866Z · LW(p) · GW(p)

There are simple and effective means of identifying the difference between rationality and rationalization, but they are not discussed, they are not applied, and frankly they don't even seem to be known here at LW.

So enlighten us (please).

EDIT: For the avoidance of doubt, this is not intended as sarcasm.