Where to Intervene in a Human?

post by lukeprog · 2012-07-03T06:31:18.608Z · LW · GW · Legacy · 72 comments

The "What is Rationality?" page on the new CFAR website contains an illuminating story about Intel:

Semiconductor giant Intel was originally a memory chip manufacturer. But by 1985, memory chips had been losing them money for years. Co-founders Andy Grove (CEO) and Gordon Moore met to discuss the problem. The mood was grim. At one point, Andy turned to Gordon and asked, “If we get kicked out and the board brings in a new CEO, what do you think he would do?”

Gordon replied without hesitation. “He would get us out of the memory business.”

“Okay,” said Andy. “Then why shouldn’t you and I walk out the door, come back, and do it ourselves?”

That year, Andy and Gordon shifted the focus of the company to microprocessors, and created one of the greatest success stories in American business.

I presume Andy and Gordon had considered intervening at many different levels of action: in middle management, in projects, in products, in details, etc. They had probably implemented some of these plans, too. But the problem with Intel — it was in the wrong market! — was so deep that the place to intervene was at a very low level, the foundations of the entire company. It's possible that in this situation, no change they could have made at higher levels of action would have made that big of a difference compared to changing the company's market and mission.

In 1997, systems analyst Donella Meadows wrote Places to Intervene in a System, in which she outlined twelve leverage points at which one could intervene in a system. Different levels of action, she claimed, would have effects of different magnitudes.

This got me thinking about levels of action and self-improvement. "I want to improve myself: where should I intervene in my own system next?"

My bet is that if the next greatest leverage point you can push on is something like neurofeedback, then you're pretty damn self-optimized already.

In fact, I suspect almost nobody is that self-optimized. We do things like neurofeedback because (1) we don't think enough about choosing the highest-leverage self-interventions, (2) in any case, we don't know how to figure out which interventions would be higher leverage for ourselves, (3) even if there are higher-leverage interventions to be had, we might not successfully carry them through, but neurofeedback or whatever happens to be fun and engaging for us, and (3) sometimes, you gotta stop analyzing your situation and just do some stuff that looks like it might help.

Anyway, how can one figure out what the next highest-leverage self-interventions are for oneself? Maybe I just haven't yet found the right keywords, but I don't think there's been much research on this topic.

Intuitively, it seems like hacking one's motivational system is among the highest-leverage interventions one can make, because high motivation allows one to carry through with lots of other interventions, and without sufficient motivation one can't follow through with many of them.

But if you've got a crippling emotional or physical condition, I suppose you've got to take care of that first — at least well enough to embark on the project of hacking your motivation system.

Or, if you're in a crippling environment like North Korea or Nigeria or Detroit, then perhaps the highest level intervention for you is to get up and move someplace better. Only then will you be able to fix your emotions or hack your motivational system or whatever.

Maybe there's something of a system to this that hasn't been discovered, or maybe there's no system at all because humans are too complex. I'm still in brainstorm mode on this topic.

72 comments

Comments sorted by top scores.

comment by orthonormal · 2012-07-03T16:07:10.245Z · LW(p) · GW(p)

The highest-level hack I've found useful is to make a habit of noticing and recording the details of any part of my life that gives me trouble. It's amazing how quickly patterns start to jump out when you've assembled actual data about something that's vaguely frustrated you for a while.

Replies from: JenniferRM, Wilka, GuySrinivasan, ciphergoth
comment by JenniferRM · 2012-07-03T17:02:57.210Z · LW(p) · GW(p)

I've had the same experience. Connected with this, I've found it very useful to periodically process the same notes into a polite summary and communicate them with people who are interested and working on the same or related tasks. It does all kinds of good stuff, like helping me develop a more realistic "outside perspective" of myself, allowing me to function as a role model for self-aware self-improvement, engaging commitment and consistency in useful directions, and so on.

comment by Wilka · 2012-07-04T20:24:51.577Z · LW(p) · GW(p)

Is it possible for you to give an example of how this works in practice? I'm curious what type of things you would note down.

It sounds like a useful idea worth trying out, but I'm having trouble seeing how I would start using it.

Replies from: orthonormal
comment by orthonormal · 2012-07-05T01:23:56.422Z · LW(p) · GW(p)

Not the prettiest example, but I had a long-running acne problem that I could never seem to get a handle on. So a few years ago, I started writing down, every morning, whether I had new zits that day, what I was using on my face, and any other factors (like diet) I thought might be relevant. It suddenly became quite easy to zero in on the right solution (a low-concentration benzoyl peroxide facewash), and I've been happy with the results ever since.

A second example is that I started a (rather involved and silly) spreadsheet tracking my time working one semester. It was far too complicated a system in retrospect, but the mere fact of observing my time-wasting led me to use my time moderately better than before.

And a third thing is keeping explicit track of what you spend, so that you notice what patterns are costing you money and can ask whether they're worth it. (Or, in the other direction, I learned that I shouldn't be so worried about marginal spending on clothes, since that amount is dwarfed by rent, food, etc. So I buy new clothes a bit more often.) There are automatic tools for budgeting (like Mint.com) if you trust them.
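
For concreteness, here is a minimal sketch of this kind of log in Python. The file name, columns, and factor names are invented for illustration; they are not orthonormal's actual format.

```python
import csv
from collections import defaultdict

LOG_FILE = "daily_log.csv"  # hypothetical file name

def add_entry(date, new_zits, facewash, ate_dairy):
    """Append one morning's observations (invented columns) to the log."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([date, int(new_zits), facewash, int(ate_dairy)])

def breakout_rate_by_facewash():
    """Fraction of logged days with new zits, grouped by the facewash used."""
    counts = defaultdict(lambda: [0, 0])  # facewash -> [days with zits, days logged]
    with open(LOG_FILE) as f:
        for date, new_zits, facewash, ate_dairy in csv.reader(f):
            counts[facewash][0] += int(new_zits)
            counts[facewash][1] += 1
    return {k: bad / total for k, (bad, total) in counts.items()}

# Example (all values invented):
# add_entry("2012-07-05", new_zits=False, facewash="benzoyl_peroxide", ate_dairy=True)
# print(breakout_rate_by_facewash())
```

A spreadsheet does the same job; the point is that once the observations are written down, the grouping and comparison become trivial.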

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2012-08-06T17:12:17.628Z · LW(p) · GW(p)

Seth Roberts found a cure for his acne problem by keeping track of how the number of pimples on his face fluctuated over time.

comment by GuySrinivasan · 2012-07-04T05:28:42.699Z · LW(p) · GW(p)

This is the "obvious" ideal which several of my ideas for hacks have been approximating. Adopting. Thanks.

comment by Paul Crowley (ciphergoth) · 2012-07-05T07:24:57.735Z · LW(p) · GW(p)

Could you flesh out some details here? What did you record, how did you fit recording it into your daily routine, how did you analyze it? Thanks!

Replies from: orthonormal
comment by orthonormal · 2012-07-06T05:22:23.581Z · LW(p) · GW(p)

Is this enough detail?

comment by [deleted] · 2012-07-03T13:48:39.316Z · LW(p) · GW(p)

The Intel story's source should be cited. It's in Only The Paranoid Survive by Grove, Chapter 5:

I remember a time in the middle of 1985, after this aimless wandering had been going on for almost a year. I was in my office with Intel's chairman and CEO, Gordon Moore, and we were discussing our quandary. Our mood was downbeat. I looked out the window at the Ferris wheel of the Great America amusement park revolving in the distance, then I turned back to Gordon and I asked, "If we got kicked out and the board brought in a new CEO, what do you think he would do?" Gordon answered without hesitation, "He would get us out of memories." I stared at him, numb, then said, "Why shouldn't you and I walk out the door, come back and do it ourselves?"

Currently, the website follows this closely ("without hesitation") but slightly mutates it ("brought in" => "brings in", "memories" => "the memory business").

Replies from: lukeprog
comment by lukeprog · 2012-07-04T04:36:28.838Z · LW(p) · GW(p)

Cited.

comment by shminux · 2012-07-03T18:32:01.567Z · LW(p) · GW(p)

But the problem with Intel — it was in the wrong market! — was so deep that the place to intervene was at a very low level, the foundations of the entire company.

Wouldn't this be better described as a very high level, not a very low level? You seem to adopt this mapping later on:

Or, if you're in a crippling environment like North Korea or Nigeria or Detroit, then perhaps the highest level intervention for you is to get up and move someplace better.

Replies from: atucker
comment by atucker · 2012-07-04T20:30:01.211Z · LW(p) · GW(p)

The concrete metaphor switches within the article.

High level can be meant in the sense of levels of action, where higher levels change more things.

Low-level can also mean that it underlies many things, so that deep changes change more things.

It would probably be helpful to adopt a consistent metaphor though.

comment by [deleted] · 2012-07-05T07:15:35.884Z · LW(p) · GW(p)

Ask whether you're hitting 'the basics of what normal people think makes people healthy.' Sebastian Marshall has a good list:

  • Drink lots of water
  • Enough high quality sleep
  • Not too much sleep
  • Eat fruits and vegetables
  • Scale back on any intoxicants
  • Spend time in nature
  • Spend time with people you respect
  • Read books you enjoy
  • Think/plan on what your goals are
  • Some light moving around/exercise (even just a walk)
  • Fresh air
  • Clean up your environment a little
  • Get small, achievable wins

comment by sixes_and_sevens · 2012-07-03T12:44:24.870Z · LW(p) · GW(p)

Intuition pump: you have a low-powered genie who can grant wishes to the level of contemporary technology available on the open market. What do you ask for before you get to "man, I wish I could consciously regulate broad measures of neurological activity inside my brain"?

Replies from: None
comment by [deleted] · 2012-07-03T21:24:47.377Z · LW(p) · GW(p)

.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-07-03T21:47:07.693Z · LW(p) · GW(p)

I have just come back from a surprisingly disappointing OKCupid date. This response may be heavily jaded.

I have spent some time throwing around this idea with other OKCupid users. There is broad consensus that attraction is largely context-based, and in order for a matching algorithm to stand a chance of fostering attraction between two people, it would have to introduce them at a point when they'd be receptive to each other.

A necessary component to this would be keeping a running value on all users' self-esteem, adjusting it for things like ignored messages and fecundity, and occasionally asking questions like "how long was it since you last had sex?", "do you weigh more or less than you did six months ago?", and "has your mother complained about not having any grandkids recently?"
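
Purely as a toy illustration of the "running value" idea (the event names and weights below are invented; nothing suggests OKCupid computes anything like this), such a tracker could be little more than an exponentially smoothed score updated per event:

```python
# Toy sketch: a per-user "self-esteem" score nudged toward a target by events.
# Event names and weights are made up for illustration only.
EVENT_WEIGHTS = {
    "message_ignored": -1.0,
    "message_replied": +0.5,
    "reported_weight_gain": -0.5,
    "reported_recent_sex": +1.0,
}

def update_score(score, event, smoothing=0.1):
    """Exponentially smoothed score, roughly in [-10, +10]."""
    target = EVENT_WEIGHTS.get(event, 0.0) * 10
    return (1 - smoothing) * score + smoothing * target

score = 0.0
for event in ["message_ignored", "message_ignored", "message_replied"]:
    score = update_score(score, event)
print(round(score, 2))  # drifts negative after repeated ignored messages
```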

Replies from: wedrifid
comment by wedrifid · 2012-07-03T21:59:53.419Z · LW(p) · GW(p)

A necessary component to this would be keeping a running value on all users' self-esteem, adjusting it for things like ignored messages and fecundity, and occasionally asking questions like "how long was it since you last had sex?", "do you weigh more or less than you did six months ago?", and "has your mother complained about not having any grandkids recently?"

Who would answer those in a way indicating probable low self esteem? That'd be crazy!

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-07-03T22:15:06.840Z · LW(p) · GW(p)

There are a few answers to this:

  • OKCupid does actually ask quite a lot of personal questions, which people do answer. A few years ago the answers were kept private, but now users have the option to make them public, and there exists a certain amount of pressure to do so. I imagine this change results in less honest / accurate answers, but you would still be surprised what people admit to.

  • The service wouldn't have to tell you it was keeping track of your self-esteem over time, and matching you with concordant suitors at points when you'd both be most vulnerable to each other's charms. It would just ask you questions, like a curious but candid friend.

  • The questions I proposed above were gauche semi-serious examples. There are probably a number of more subtle questions that would correlate strongly with self-esteem without setting off alarm bells in the people that answer them.

  • Part of the reason for me talking about it is how unpalatable and creepy the idea is, and how a lot of the factors surrounding people being attracted to each other are not available to dating website service providers without a lot of effort they're probably not prepared to invest. There are probably some areas they can capitalise upon, however.

Replies from: wedrifid
comment by wedrifid · 2012-07-03T22:37:05.540Z · LW(p) · GW(p)

The questions I proposed above were gauche semi-serious examples. There are probably a number of more subtle questions that would correlate strongly with self-esteem without setting off alarm bells in the people that answer them.

This isn't something that requires alarm bells. This is a dating website. Full signalling and screening mode is activated as a matter of course. It is extremely unlikely that I could benefit from giving the system evidence that I have low self esteem so I am not going to do so unless all else is compellingly not equal. I suppose this also requires being able to judge what questions have what self-esteem connotations but that isn't too hard.

It occurs to me that I play OkCupid as a min-maxing munchkin. (I recommend this. It seems to work!)

Part of the reason for me talking about it is how unpalatable and creepy the idea is, and how a lot of the factors surrounding people being attracted to each other are not available to dating website service providers without a lot of effort they're probably not prepared to invest. There are probably some areas they can capitalise upon, however.

I don't find it especially creepy. Sounds useful. I want the website to take whatever information I give it and connect me with people in the most effective way possible. Anything I don't want it to know I will not tell it (I will lie to it if necessary).

Replies from: None, sixes_and_sevens
comment by [deleted] · 2012-07-03T23:08:33.747Z · LW(p) · GW(p)

.

comment by sixes_and_sevens · 2012-07-03T22:43:41.870Z · LW(p) · GW(p)

I would bet a sizeable sum of money that most users do not approach OKCupid in the same way you or I do, consciously or otherwise.

Replies from: None
comment by [deleted] · 2012-07-03T23:09:25.630Z · LW(p) · GW(p)

.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-07-03T23:15:34.851Z · LW(p) · GW(p)

That's the sort of thing this algorithm is supposed to flag up.

Replies from: None
comment by [deleted] · 2012-07-03T23:17:03.175Z · LW(p) · GW(p)

.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-07-03T23:26:28.910Z · LW(p) · GW(p)

Oh sweet Jesus there are more than five pages...

comment by lukeprog · 2012-07-13T06:49:42.234Z · LW(p) · GW(p)

Michael Keenan pointed out that Scott Adams recommends that maximizing one's energy should be the priority. That sounds pretty plausible.

Replies from: aaronsw
comment by aaronsw · 2012-07-15T14:58:07.431Z · LW(p) · GW(p)

Then why don't you spend more time on finding tactics to increase your energy level? The eight you've listed seem pretty good, but surely they're just the tip of the iceberg.

Replies from: lukeprog, Homosapien
comment by lukeprog · 2012-07-15T19:21:58.739Z · LW(p) · GW(p)

Then why don't you spend more time on finding tactics to increase your energy level?

That is, in fact, my new priority. :)

Replies from: aaronsw
comment by aaronsw · 2012-07-16T11:57:10.612Z · LW(p) · GW(p)

Fantastic. Me too!

Replies from: lukeprog
comment by lukeprog · 2012-09-24T06:29:48.514Z · LW(p) · GW(p)

Any luck so far?

Replies from: aaronsw
comment by aaronsw · 2012-10-02T13:53:22.691Z · LW(p) · GW(p)

  • Doing hacker exercises every morning
  • Taking a cold shower every morning
  • Putting on pants
  • Lying flat on my back and closing my eyes until I consciously process all the things that are nagging at me and begin to feel more focused
  • Asking someone to coach me through getting started on something
  • Telling myself that doing something I don't want to do will make me stronger
  • Squeezing a hand grip exerciser for as long as I can (inspired by Muraven 2010; mixed results with this one)

You?

Replies from: lukeprog
comment by lukeprog · 2012-10-02T15:37:39.452Z · LW(p) · GW(p)

My interventions for energy are less creative: drink water, do jumping jacks, take drugs, etc.

comment by Homosapien · 2022-01-02T04:27:35.529Z · LW(p) · GW(p)

we miss you @aaronsw

comment by [deleted] · 2012-07-03T08:30:03.147Z · LW(p) · GW(p)

A bit off topic, does CFAR accept bitcoin?

Replies from: lukeprog
comment by lukeprog · 2012-07-03T14:59:34.145Z · LW(p) · GW(p)

Not yet.

comment by thomblake · 2012-07-03T17:40:15.521Z · LW(p) · GW(p)

In my experience, one doesn't notice things that are wrong until there is something to contrast them to. For example, you might not even notice that you need new glasses until seeing the world through better ones.

So a first step might just be radical change. Be mindful of the adjustments you're making when changing food, location, or employment. Flail about a bit and do informal self-experimentation along as many dimensions as possible. This should help highlight location / emotional / physical conditions and suchlike that are getting in the way.

Replies from: handoflixue
comment by handoflixue · 2012-07-18T23:28:34.894Z · LW(p) · GW(p)

I realized I needed glasses when I was hiking with a friend. She took off her glasses to clean them, and I happened to look through them. It was a rather instant reaction of "oh wow, I need glasses!"

I got lucky in that she has an almost identical prescription to me, obviously :)

comment by wedrifid · 2012-07-03T11:22:06.950Z · LW(p) · GW(p)

In fact, I suspect almost nobody is that self-optimized. We do things like neurofeedback because (1) we don't think enough about choosing the highest-leverage self-interventions, (2) in any case, we don't know how to figure out which interventions would be higher leverage for ourselves, (3) even if there are higher-leverage interventions to be had, we might not successfully carry them through, but neurofeedback or whatever happens to be fun and engaging for us, and (3) sometimes, you gotta stop analyzing your situation and just do some stuff that looks like it might help.

(5) Neurofeedback is fun. I certainly wouldn't do it because I thought it was the single most effective thing I could possibly do that second. But I like doing fun things sometimes.

[One of those (3)s should be a (4).]

comment by ChristianKl · 2012-07-07T10:51:23.730Z · LW(p) · GW(p)

What's the argument that neurofeedback has relatively low utility?

Replies from: Rhwawn
comment by Rhwawn · 2012-07-08T00:07:21.211Z · LW(p) · GW(p)

Perhaps it's the expense? I looked into it very briefly, and apparently professional neurofeedback costs thousands of dollars!

comment by SilasBarta · 2012-07-04T16:06:34.863Z · LW(p) · GW(p)

I don't think it sets a good example for CFAR to use an unreliable (self-serving, told with hindsight) anecdote to make a point. The source listed for that story is an autobiography by one of the people in it.

If the truth of the events doesn't matter, why not use a more accessible urban legend than one that requires knowledge of microprocessors vs memory chips and the timeline of Intel's relative success?

comment by Manfred · 2012-07-04T03:40:46.596Z · LW(p) · GW(p)

North Korea or Nigeria or Detroit

These seem to go in increasing order of "that really needs to be made more specific before calling it a 'crippling environment'."

comment by [deleted] · 2012-07-03T18:32:58.725Z · LW(p) · GW(p)

Nothing new, but: make thorough predictions about things you think you understand, and follow up on them. Being very fastidious about the concreteness of predictions prevents me from re(mis)interpreting vague predictions when I get the results.

comment by Viliam_Bur · 2012-07-03T12:14:27.042Z · LW(p) · GW(p)

What algorithm could be used for discovering the next best intervention one can make to improve oneself?

Trying a bit of this, a bit of that, and comparing results? I doubt it can get any more precise, because the interventions on different levels can be, well, different.

For small changes I would recommend trying each strategy for one week (to filter out the work-day cycle and other noise), with a set of similar tasks randomly assigned to those weeks, or one repetitive task. But some level of change would probably disrupt such a setting. As an example, if my task is to "motivate myself to clean my room" and the intervention is "move to a different environment", then of course, when I move to a new room, cleaning it is a different task than cleaning my old room, so it is not completely fair to compare my efficiency on those tasks.
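
A minimal sketch of that weekly rotation in Python (the strategy names and the daily "productivity score" are invented placeholders):

```python
import random
from statistics import mean

strategies = ["pomodoro", "music_off", "work_in_library"]  # invented examples

def plan_weeks(n_weeks, seed=0):
    """Randomly assign one strategy to each week (repetition allowed)."""
    rng = random.Random(seed)
    return [rng.choice(strategies) for _ in range(n_weeks)]

def compare(log):
    """log: list of (strategy, daily_score) pairs; returns mean score per strategy."""
    by_strategy = {}
    for strategy, score in log:
        by_strategy.setdefault(strategy, []).append(score)
    return {s: round(mean(scores), 2) for s, scores in by_strategy.items()}

print(plan_weeks(6))
print(compare([("pomodoro", 5), ("pomodoro", 7), ("music_off", 4)]))
```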

Could the right level be discovered gradually, instead of having to guess it correctly the first time? Such as: start with some low-level improvement, and when you find that something is stopping you, analyze it and take a meta-action. So instead of starting at the right level (and risking going too meta), we could start at the right place (where the outcome is measurable) and gradually find the necessary level of change.

But even this cannot be done exactly, because at a sufficiently high level I may choose a different outcome. For example, I work at a paperclip factory, and my initial goal is to make more and better paperclips. First step: I try doing overtime, but then I find I am too tired to continue this way. Second step: I get regular sleep, exercise, and eat healthy food. Third step: I attend paperclip-making lessons. So far my progress is measurable. Fourth step: I realize I don't actually care about paperclips; I just make them for the money, so I change jobs to something that pays better. Oops, my first metric (paperclips) just broke; I need to replace it with money. Fifth step: after having enough money, I realize that more money does not bring me the most happiness, so I would prefer having more free time while making the same amount of money, or maybe less money but also less expense. Oops, my second metric broke too, and no replacement is precise enough... I could try some psychological questionnaires for measuring happiness, but those seem too easy to cheat.

comment by EvelynM · 2012-07-03T15:12:08.328Z · LW(p) · GW(p)

The first place to improve is to stop doing stupid things. If you don't know what you're doing that's stupid, then figure that out, and stop doing it.

Replies from: None, aelephant, drethelin
comment by [deleted] · 2012-07-04T00:31:14.702Z · LW(p) · GW(p)

Make new mistakes. Stop doing what isn't working, with or without having a new plan, with or without it being stupid. That particular form of not-working will end, and that ending is a lessening of cost or an increase of gain or both (all desirable outcomes).

Replies from: orthonormal
comment by orthonormal · 2012-07-04T18:58:51.664Z · LW(p) · GW(p)

"Make new mistakes" has been my New Year's Resolution for several years now. I highly recommend it (as long as it's understood to refer to social risks and not physical ones).

comment by aelephant · 2012-07-04T08:41:19.639Z · LW(p) · GW(p)

Can you give us some advice about how to figure out whether what we're doing is stupid or not? What exactly do you mean by stupid?

comment by drethelin · 2012-07-04T06:33:00.439Z · LW(p) · GW(p)

Posting vague advice about not doing stupid things is stupid. Don't do it, it will get downvoted. The reason it's stupid is that everyone already KNOWS that it's bad to do stupid things. The stupid things we do are not done in the knowledge that they are stupid. "Figure out what you're doing that's stupid" is basically the entire point of this site, and you're hiding an awful lot of knowledge and complexity in a simple order.

Replies from: wedrifid
comment by wedrifid · 2012-07-04T07:29:08.690Z · LW(p) · GW(p)

Posting vague advice about not doing stupid things is stupid. Don't do it, it will get downvoted.

I upvoted it. It's a useful thing to constantly remind yourself of.

comment by private_messaging · 2012-07-03T07:00:59.904Z · LW(p) · GW(p)

The first thing anyone must do, before any other self-improvement is even logically possible, is do something about self-deception; otherwise any self-improvement attempt degenerates into a form of wireheading, because you will deceive yourself about what the self-improvement achieves, and you will end up improving your ability to self-deceive.

I would suggest, as the first step, dropping this belief in silver-bullet self-improvement, burying it, and putting a stake through its heart. If you look at the accomplishments of people who couldn't just improve their subjective performance by improving their ability to self-deceive (technical fields, for example), about the only type of self-improvement you see is training, on complicated problems, with tests.

Self-deception is a cognitive process that we are reward/punishment conditioned into, internally, when doing free-form thought that is not externally verified. E.g. a Christian would feel anxiety when considering the arguments against Christianity, and reward when coming up with arguments for why Christianity is right, and so would get conditioned to feel good about an invalid approach to reasoning and bad about a valid approach to reasoning. Quitting religion won't reverse this conditioning. The conditioning could perhaps be reversed by studying mathematics for a long time and doing the exercises (getting punished for self-deception, since self-deception there results in failures), or by some similar occupation where there is reliable external verification of correctness.

edit: Sorry, Christianity is only meant as an example. This applies to any other ill-founded belief, religious or otherwise. The same can also happen in forms of atheism that include belief in the validity of an invalid argument against the existence of God. Christianity is simply the world's most popular religion at the time, and by far the most popular in the developed countries, so it is an important case.

Replies from: drethelin
comment by drethelin · 2012-07-03T08:05:33.506Z · LW(p) · GW(p)

Your base point about being careful about self-deception is made crappy by your rant about Christians and your weird veiled accusations.

Replies from: private_messaging, Richard_Kennaway
comment by private_messaging · 2012-07-03T10:46:55.009Z · LW(p) · GW(p)

Regarding Christians, they were just a common example. There are many Christian de-converts here - don't you recall feeling a tingle of fear and anxiety as you explored the possibility that your previous life was wasted on a wrong idea? That is an example of negative reinforcement. If I can't bring up the world's #1 religion as the example of religiousness, then what?

comment by Richard_Kennaway · 2012-07-03T10:39:41.405Z · LW(p) · GW(p)

He also has a strange obsession over "conditioning", which he appears to think is the fundamental mechanism of the brain.

Replies from: private_messaging
comment by private_messaging · 2012-07-03T10:57:57.943Z · LW(p) · GW(p)

Whenever you have positive and negative feelings correlate with behaviour, you get conditioning. Every time. It really is this fundamental. There are many other equally fundamental mechanisms, of course, but they act in addition to the conditioning, not as replacement.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-07-03T11:18:48.278Z · LW(p) · GW(p)

When someone builds a working model of that hypothesis as foundational psychology, in sufficient detail to refute alternative hypotheses (such as the ones that people act to maximise utility, or that they act to achieve their purposes) I'll consider taking it seriously. I do not believe this has ever been done.

Replies from: private_messaging
comment by private_messaging · 2012-07-03T12:03:30.959Z · LW(p) · GW(p)

There have been a multitude of experiments, on humans and other animals, demonstrating that conditioning works. If, when you touch a specific odd-shaped object, you get a mild electric shock, it will become difficult for you to touch that object even when you are fully consciously aware that the shocking circuit is disconnected, and you will experience aversion to touching it (i.e. you will act as if picking up that object had an extra cost compared to other objects, even though you are fully aware you won't be shocked). This is a repeatable scientific finding with broad ramifications (and it is stable over a multitude of positive and negative reinforcements).

Regarding whether people act to 'maximize utility', that is trivially falsified by any experiment where people demonstrably make a wrong choice (e.g. not switching in Monty Hall). People do not act so as to 'maximize utility', and that's why people need training to better achieve their goals. What you listed are not 'alternative hypotheses'; they are normative statements about what people should do under particular moral philosophies.

Replies from: aelephant, sixes_and_sevens, Richard_Kennaway
comment by aelephant · 2012-07-03T12:33:42.722Z · LW(p) · GW(p)

Thanks for mentioning the Monty Hall problem. I hadn't heard of it before and I found it incredibly interesting.

Replies from: mwengler
comment by mwengler · 2012-07-03T18:27:06.489Z · LW(p) · GW(p)

When I was a professor, I ridiculed (over beer in a bar) graduate students who were telling me it made sense to switch. One student came up with a clever demonstration using the digits in the serial numbers of a dollar bill as a random number generator where he asked me about switching in the 10-door generalization of the Monty Hall problem. With 10 doors and only one prize, it quickly became apparent that I had my head up my arse.
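
For anyone meeting the problem for the first time, a short simulation of the n-door game (a sketch of my own, not the dollar-bill demonstration described above) makes the gap between staying and switching obvious, especially with 10 doors:

```python
import random

def play(n_doors, switch, rng):
    """One round: the host opens every non-chosen, non-prize door but one."""
    prize = rng.randrange(n_doors)
    choice = rng.randrange(n_doors)
    # Only one other door stays closed, so switching wins exactly when
    # the first pick was wrong.
    return (choice != prize) if switch else (choice == prize)

def win_rate(n_doors, switch, trials=100_000, seed=0):
    rng = random.Random(seed)
    return sum(play(n_doors, switch, rng) for _ in range(trials)) / trials

for n in (3, 10):
    print(f"{n} doors: stay {win_rate(n, False):.3f}, switch {win_rate(n, True):.3f}")
# Expected: stay wins about 1/n of the time, switch about (n - 1)/n.
```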

I learned something that day. Two things if you count the monty hall problem. The other: if I am arrogant and obnoxious in my beliefs, I will motivate smart people who disagree with me to figure out how to convince me of my error. Of course, there are no karma points in bars (or at least they are not as obvious) so I did not learn how dangerous such an otherwise productive path is to your karma.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-03T18:50:03.204Z · LW(p) · GW(p)

Agreed that the reputation costs of being seen as arrogant and obnoxious are not as immediately obvious in some communities as in others.

comment by sixes_and_sevens · 2012-07-03T13:02:03.053Z · LW(p) · GW(p)

I don't think his objection was that conditioning isn't a real thing that's really real, but that it's not a basis for a fully-descriptive theory of psychological behaviour.

That said, I don't think you were suggesting it was in the first place.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-07-03T13:09:21.907Z · LW(p) · GW(p)

FWIW, I do think it isn't a real thing that's really real, but I'm not all that interested in a prolonged discussion on the matter.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-07-03T13:22:07.163Z · LW(p) · GW(p)

Thank you for the clarification.

comment by Richard_Kennaway · 2012-07-03T12:45:49.158Z · LW(p) · GW(p)

This is a repeatable scientific finding with broad ramifications

More like "suggestive experiments that people read far too much into".

Regarding whether people act to 'maximize utility', that is trivially falsified by any experiment where people demonstrably make a wrong choice

Talk to Tim Tyler about that. He seems to be as convinced by utility-maximising (as a description of what people actually do, not a norm) as you are of conditioning. There may be others here who believe the same and have better arguments than he does. I think they're all wrong, so I can't argue on their behalf, but I will point out the obvious refutation that they might offer, viz. utility is subjective.

Replies from: moridinamael
comment by moridinamael · 2012-07-03T17:06:53.790Z · LW(p) · GW(p)

It's pretty amusing how everybody has their own favorite simplified model which they overextend to attempt to explain all human behavior.

Brains are hierarchical backpropagating neural networks! No, they're Bayesian networks! No, they're Goedelian pattern recognizers! No, the mind is an algorithm which optimizes for utility! No, it maximizes for reproductive fitness! No, it maximizes for perceptual control! No, it maximizes for status in the tribe!

And then they casually apply insights from their own introspection about their own minds to other people, and assume that everybody else is wrong rather than, perhaps, different.

I've made the most progress in "intervening in myself" after I stopped believing that there was some single, simple, fundamental rule underlying all psychology and behavior.

e: I'm not trying to make fun of anyone in particular in this conversation - I was just ragging on the tendency of folks to confuse their map with their map of their map.

Replies from: private_messaging, gwern
comment by private_messaging · 2012-07-03T17:38:40.015Z · LW(p) · GW(p)

It's not based on self-observation of some kind, or on a simple fundamental rule underlying all behaviour (the 'all' part is an obvious strawman brought in by Richard_Kennaway). However, conditioning does affect every behaviour, as far as the experiments show.

If you are unable to see the difference between 'gravity affects every massive object' and 'gravitation is a fundamental rule explaining all the universe', then nothing can help you.

Replies from: moridinamael
comment by moridinamael · 2012-07-03T18:10:00.280Z · LW(p) · GW(p)

I'm not disagreeing with you. I'm merely pointing out that humans fall too much in love with their pet idea du jour.

Actually, I'm not entirely sure why I'm being downvoted; perhaps my comment came off as snarky.

edit: after rereading it, it looks like I was attacking you, when really I was just expressing frustration at an entirely different group of people who write books attempting to convince other people that they have the One True Secret of Life.

Replies from: private_messaging
comment by private_messaging · 2012-07-03T18:21:32.562Z · LW(p) · GW(p)

I'm not disagreeing with you. I'm merely pointing out that humans fall too much in love with their pet idea du jour.

I agree. BTW, I can't downvote anyone.

I'm not trying to explain something with conditioning and just conditioning alone though; all I am saying is that we should expect self deception to get reinforced as it results in internal reward (and avoidance of self deception easily results in punishment). Regarding the voting, I also was down to -7 on this: http://lesswrong.com/lw/ai9/how_do_you_notice_when_youre_rationalizing/5y3w so I do not care a whole lot.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-07-03T23:01:29.559Z · LW(p) · GW(p)

Why did you change usernames?

Replies from: private_messaging
comment by private_messaging · 2012-07-03T23:49:36.747Z · LW(p) · GW(p)

Well, I wanted to leave because this is generally a waste of time; still, not everyone here is stupid (the non-stupid list includes Yvain, Wei_Dai, Will Newsome even though he's nuts, and a few others). The relevant question is why I don't just delete this account.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-07-04T01:11:18.455Z · LW(p) · GW(p)

But why did you stop posting under the other name?

comment by gwern · 2012-07-05T01:17:22.421Z · LW(p) · GW(p)

Brains are hierarchical backpropagating neural networks! No, they're Bayesian networks! No, they're Goedelian pattern recognizers! No, the mind is an algorithm which optimizes for utility! No, it maximizes for reproductive fitness! No, it maximizes for perceptual control! No, it maximizes for status in the tribe!

So, you're saying all these explanations are Turing-complete?